WorldWideScience

Sample records for models typically perform

  1. Modeling typical performance measures

    NARCIS (Netherlands)

    Weekers, Anke Martine

    2009-01-01

    In the educational, employment, and clinical context, attitude and personality inventories are used to measure typical performance traits. Statistical models are applied to obtain latent trait estimates. Often the same statistical models as the models used in maximum performance measurement are applied…

  2. Exergoeconomic performance optimization for a steady-flow endoreversible refrigeration model including six typical cycles

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Lingen; Kan, Xuxian; Sun, Fengrui; Wu, Feng [College of Naval Architecture and Power, Naval University of Engineering, Wuhan 430033 (China)

    2013-07-01

    The operation of a universal steady-flow endoreversible refrigeration cycle model consisting of a constant thermal-capacity heating branch, two constant thermal-capacity cooling branches and two adiabatic branches is viewed as a production process with exergy as its output. The finite-time exergoeconomic performance optimization of the refrigeration cycle is investigated by taking the profit rate as the optimization objective. The relations between the profit rate and the temperature ratio of the working fluid, between the COP (coefficient of performance) and the temperature ratio of the working fluid, as well as the optimal relation between the profit rate and the COP of the cycle are derived. The focus of this paper is to seek the compromise optimization between economics (profit rate) and the utilization factor (COP) for endoreversible refrigeration cycles, by searching for the optimum COP at maximum profit, which is termed the finite-time exergoeconomic performance bound. Moreover, performance analysis and optimization of the model are carried out in order to investigate the effect of the cycle process on the performance of the cycles using a numerical example. The results obtained herein include the performance characteristics of endoreversible Carnot, Diesel, Otto, Atkinson, Dual and Brayton refrigeration cycles.
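
    For orientation only (this schematic is not taken from the record above, and the symbols are assumed), the profit-rate objective used in such finite-time exergoeconomic analyses can be written in LaTeX as

      \pi = \varphi_E \,\dot{E}_x - \varphi_W \, P, \qquad
      \dot{E}_x = \dot{Q}_L\!\left(\frac{T_0}{T_L} - 1\right), \qquad
      \mathrm{COP} = \frac{\dot{Q}_L}{P}

    where \varphi_E and \varphi_W are prices per unit of exergy output and per unit of work input, \dot{Q}_L is the cooling load, T_L the cold-reservoir temperature and T_0 the environment temperature. Maximizing \pi over the working-fluid temperature ratio then gives the COP at maximum profit, i.e. the finite-time exergoeconomic performance bound mentioned in the abstract.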

  3. Exergoeconomic performance optimization for a steady-flow endoreversible refrigeration model including six typical cycles

    Directory of Open Access Journals (Sweden)

    Lingen Chen, Xuxian Kan, Fengrui Sun, Feng Wu

    2013-01-01

    The operation of a universal steady-flow endoreversible refrigeration cycle model consisting of a constant thermal-capacity heating branch, two constant thermal-capacity cooling branches and two adiabatic branches is viewed as a production process with exergy as its output. The finite-time exergoeconomic performance optimization of the refrigeration cycle is investigated by taking the profit rate as the optimization objective. The relations between the profit rate and the temperature ratio of the working fluid, between the COP (coefficient of performance) and the temperature ratio of the working fluid, as well as the optimal relation between the profit rate and the COP of the cycle are derived. The focus of this paper is to seek the compromise optimization between economics (profit rate) and the utilization factor (COP) for endoreversible refrigeration cycles, by searching for the optimum COP at maximum profit, which is termed the finite-time exergoeconomic performance bound. Moreover, performance analysis and optimization of the model are carried out in order to investigate the effect of the cycle process on the performance of the cycles using a numerical example. The results obtained herein include the performance characteristics of endoreversible Carnot, Diesel, Otto, Atkinson, Dual and Brayton refrigeration cycles.

  4. Development of Performance Models for a Typical Flexible Road Pavement in Nigeria

    Directory of Open Access Journals (Sweden)

    Adebayo Oladipo Owolabi

    2012-09-01

    This paper presents the results of a study conducted to facilitate the development of road pavement performance models that are appropriate for Nigeria and similar developing countries and could predict the rate of deterioration over their lifespan. Comprehensive investigations were carried out on the expressway linking Lagos (the economic nerve centre of Nigeria) with Ibadan (the largest city in West Africa) - apparently one of the most heavily trafficked roads in the country. Data relating to traffic characteristics, pavement condition ratings, distress types, pavement thickness, roughness index, rainfall and temperature were collected. Models were developed to determine the Pavement Condition Score (PCS) and the International Roughness Index (IRI). Stepwise regression was used to analyse the data and quantify the impact of key input parameters on the PCS and IRI. Parameters such as depth of ruts and area of potholes were found to be statistically significant in predicting PCS, while number of patches, length of longitudinal cracks and depth of ruts were statistically significant in predicting IRI. The models can be used for planning road maintenance programs, thus minimizing the need for comprehensive data collection on pavement condition before the maintenance exercise, which is costly and time consuming.
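
    A minimal, hypothetical sketch (invented data and coefficients, not the authors' model) of the kind of regression reported above, fitting PCS to rut depth and pothole area with ordinary least squares as a stand-in for the stepwise procedure:

      # Hypothetical illustration of the reported model form: PCS regressed on
      # rut depth and pothole area. Data and coefficients are invented.
      import numpy as np
      import statsmodels.api as sm

      rut_depth = np.array([4.0, 7.5, 12.0, 18.0, 25.0])       # mm, per road section
      pothole_area = np.array([0.1, 0.4, 1.2, 2.5, 4.0])       # m^2, per road section
      pcs = np.array([92.0, 85.0, 71.0, 55.0, 38.0])           # pavement condition score

      X = sm.add_constant(np.column_stack([rut_depth, pothole_area]))
      fit = sm.OLS(pcs, X).fit()                               # OLS stand-in for stepwise regression
      print(fit.params)                                        # intercept and slopes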

  5. A mathematical model for the performance assessment of engineering barriers of a typical near surface radioactive waste disposal facility

    Energy Technology Data Exchange (ETDEWEB)

    Antonio, Raphaela N.; Rotunno Filho, Otto C. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Lab. de Hidrologia e Estudos do Meio Ambiente]. E-mail: otto@hidro.ufrj.br; Ruperti Junior, Nerbe J.; Lavalle Filho, Paulo F. Heilbron [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)]. E-mail: nruperti@cnen.gov.br

    2005-07-01

    This work proposes a mathematical model for the performance assessment of a typical radioactive waste disposal facility based on the consideration of a multiple barrier concept. The Generalized Integral Transform Technique is employed to solve the Advection-Dispersion mass transfer equation under the assumption of saturated one-dimensional flow, to obtain solute concentrations at given times and locations within the medium. A test-case is chosen in order to illustrate the performance assessment of several configurations of a multi barrier system adopted for the containment of sand contaminated with Ra-226 within a trench. (author)
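
    For reference (the authors' exact formulation is not reproduced here), a common one-dimensional advection-dispersion equation for radionuclide transport in saturated flow, of the kind the Generalized Integral Transform Technique is typically applied to, is

      R \frac{\partial C}{\partial t} = D \frac{\partial^2 C}{\partial x^2} - v \frac{\partial C}{\partial x} - \lambda R C

    where C is the solute concentration, v the pore-water velocity, D the dispersion coefficient, R the retardation factor of the barrier material and \lambda the radioactive decay constant.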

  6. Modelling Typical Online Language Learning Activity

    Science.gov (United States)

    Montoro, Carlos; Hampel, Regine; Stickler, Ursula

    2014-01-01

    This article presents the methods and results of a four-year-long research project focusing on the language learning activity of individual learners using online tasks conducted at the University of Guanajuato (Mexico) in 2009-2013. An activity-theoretical model (Blin, 2010; Engeström, 1987) of the typical language learning activity was used to…

  7. Performance of a commercial transport under typical MLS noise environment

    Science.gov (United States)

    Ho, J. K.

    1986-01-01

    The performance of a 747-200 automatic flight control system (AFCS) subjected to typical Microwave Landing System (MLS) noise is discussed. The performance is then compared with the results from a previous study which had a B747 AFCS subjected to the MLS standards and recommended practices (SARPS) maximum allowable noise. A glide slope control run with Instrument Landing System (ILS) noise is also conducted. Finally, a linear covariance analysis is presented.

  8. Modelling object typicality in description logics [Workshop on Description Logics]

    CSIR Research Space (South Africa)

    Britz, K

    2009-07-01

    The authors present a semantic model of typicality of concept members in description logics that accords well with a binary, globalist cognitive model of class membership and typicality. The authors define a general preferential semantic framework...

  9. Flux measurement and modeling in a typical mediterranean vineyard

    Science.gov (United States)

    Marras, Serena; Bellucco, Veronica; Pyles, David R.; Falk, Matthias; Sirca, Costantino; Duce, Pierpaolo; Snyder, Richard L.; Tha Paw U, Kyaw; Spano, Donatella

    2014-05-01

    Vineyard ecosystems are typical of the Mediterranean area, since wine is one of the most important economic sectors. Nevertheless, only a few studies have been conducted to investigate the interactions between this kind of vegetation and the atmosphere. This information is important for understanding the behaviour of such ecosystems under different environmental conditions and is crucial for parameterizing crop and flux simulation models. Combining direct measurements and modelling can provide reliable estimates of surface fluxes and crop evapotranspiration. This study contributes both to (1) directly measuring energy fluxes and evapotranspiration in a typical Mediterranean vineyard, located in the south of Sardinia (Italy), through the application of the Eddy Covariance micrometeorological technique, and to (2) evaluating the land surface model ACASA (Advanced-Canopy-Atmosphere-Soil Algorithm) in simulating energy fluxes and evapotranspiration over the vineyard. Independent datasets of direct measurements were used to calibrate and validate model results during the growing period. Statistical analysis was performed to evaluate model performance and accuracy in predicting surface fluxes. Results will be shown, as well as the model's capability to be used in future studies to predict energy fluxes and crop water requirements under current and future climate.
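
    As background (not quoted from the record above), the eddy covariance technique estimates turbulent fluxes from the covariance of vertical wind fluctuations with fluctuations of the transported quantity, e.g.

      H = \rho_a c_p \,\overline{w'T'}, \qquad
      LE = \lambda_v \,\rho_a \,\overline{w'q'}, \qquad
      F_c = \overline{w'c'}

    where H is the sensible heat flux, LE the latent heat flux, F_c a generic scalar flux, \rho_a the air density, c_p the specific heat of air, \lambda_v the latent heat of vaporization, and w', T', q', c' the fluctuations of vertical wind speed, temperature, specific humidity and scalar density; overbars denote time averages over, e.g., 30-minute blocks.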

  10. A Modal Model to Simulate Typical Structural Dynamic Nonlinearity

    Energy Technology Data Exchange (ETDEWEB)

    Pacini, Benjamin Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mayes, Randall L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Roettgen, Daniel R [Univ. of Wisconsin, Madison, WI (United States)

    2015-10-01

    Some initial investigations have been published which simulate nonlinear response with almost traditional modal models: instead of connecting the modal mass to ground through the traditional spring and damper, a nonlinear Iwan element was added. This assumes that the mode shapes do not change with amplitude and that there are no interactions between modal degrees of freedom. This work expands on these previous studies. An impact experiment is performed on a structure which exhibits typical structural dynamic nonlinear response, i.e. weak frequency dependence and strong damping dependence on the amplitude of vibration. Low-level modal test results in combination with high-level impact data are processed using various combinations of modal filtering, the Hilbert transform and band-pass filtering to develop response data that are then fit with various nonlinear elements to create a nonlinear pseudo-modal model. Simulations of forced response are compared with high-level experimental data for various nonlinear element assumptions.
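
    Schematically (a generic form, not copied from the report), each retained mode r in such a pseudo-modal model obeys a single-degree-of-freedom equation with the nonlinear Iwan force acting in parallel with the linear spring and damper:

      m_r \ddot{q}_r + c_r \dot{q}_r + k_r q_r + f_{\mathrm{Iwan}}(q_r) = \phi_r^{T} f(t)

    where q_r is the modal coordinate, m_r, c_r, k_r the modal mass, damping and stiffness, f_{\mathrm{Iwan}} the amplitude-dependent joint force, \phi_r the mode shape and f(t) the applied force vector; the modes are assumed uncoupled.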

  11. Is performance better when brain functions are typically lateralized?

    NARCIS (Netherlands)

    Geuze, Reint; Zickert, Nele; Beking, Tess; Groothuis, Antonius

    2014-01-01

    Lateralization refers to the dominant involvement of one homologous region of the brain over the other in functional task performance. Direction and strength of lateralization depend on the functional task. It is well known that language is lateralized to the left hemisphere, even in most left-handers…

  12. Analysis and Comparison of Typical Models within Distribution Network Design

    DEFF Research Database (Denmark)

    Jørgensen, Hans Jacob; Larsen, Allan; Madsen, Oli B.G.

    This paper investigates the characteristics of typical optimisation models within Distribution Network Design. Fourteen models known from the literature are thoroughly analysed, and through this analysis a schematic approach to the categorisation of distribution network design models is developed. The dimensions covered in the categorisation include fixed vs. general networks, specialised vs. general nodes, linear vs. nonlinear costs, single vs. multi commodity, uncapacitated vs. capacitated activities, single vs. multi modal, and static vs. dynamic. The models examined address both strategic and tactical planning … for educational purposes. Furthermore, the paper can be seen as a practical introduction to network design modelling as well as being a manual or recipe to use when constructing such a model.

  13. Comparison of Energy Performance of Different HVAC Systems for a Typical Office Room and a Typical Classroom

    DEFF Research Database (Denmark)

    Yu, Tao; Heiselberg, Per; Pomianowski, Michal Zbigniew

    This report is part of the work performed under the project “Natural cooling and Ventilation through Diffuse Ceiling Supply and Thermally Activated Building Constructions”. In this project, a new system solution combining natural ventilation with diffuse ceiling inlet and thermally activated building constructions … the energy consumption for buildings with cooling demand in cold seasons. In this way, the building system can operate at very low energy use all year round. The main purpose of this task is to investigate the energy performance of different HVAC systems used in the office room and the classroom, and to find the potential for energy saving of the proposed new system solution. In this report, a typical room is selected according to the previous study, but the occupancy differs between the office use and the classroom use. Energy performance of these two types of room under different internal …

  14. Aeroelastic Calculations Using CFD for a Typical Business Jet Model

    Science.gov (United States)

    Gibbons, Michael D.

    1996-01-01

    Two time-accurate Computational Fluid Dynamics (CFD) codes were used to compute several flutter points for a typical business jet model. The model consisted of a rigid fuselage with a flexible semispan wing and was tested in the Transonic Dynamics Tunnel at NASA Langley Research Center, where experimental flutter data were obtained from M∞ = 0.628 to M∞ = 0.888. The computational results were computed using CFD codes based on the inviscid TSD equation (CAP-TSD) and the Euler/Navier-Stokes equations (CFL3D-AE). Comparisons are made between analytical results and with experiment where appropriate. The results presented here show that the Navier-Stokes method is required near the transonic dip due to the strong viscous effects, while the TSD and Euler methods used here provide good results at the lower Mach numbers.

  15. Motor Skill Performance by Low SES Preschool and Typically Developing Children on the PDMS-2

    Science.gov (United States)

    Liu, Ting; Hoffmann, Chelsea; Hamilton, Michelle

    2017-01-01

    The purpose of this study was to compare the motor skill performance of preschool children from low socioeconomic (SES) backgrounds to their age matched typically developing peers using the Peabody Developmental Motor Scales-2 (PDMS-2). Sixty-eight children (34 low SES and 34 typically developing; ages 3-5) performed the PDMS-2. Standard scores…

  16. Analysis and Comparison of Typical Models within Distribution Network Design

    DEFF Research Database (Denmark)

    Jørgensen, Hans Jacob; Larsen, Allan; Madsen, Oli B.G.

    …a number of important issues which have been identified when addressing the Distribution Network Design problem from a modelling angle. More specifically, we present an analysis of the research which has been performed in utilizing operational research in developing and optimising distribution systems…

  17. Working Hard and Working Smart: Motivation and Ability during Typical and Maximum Performance

    Science.gov (United States)

    Klehe, Ute-Christine; Anderson, Neil

    2007-01-01

    The distinction between what people "can" do (maximum performance) and what they "will" do (typical performance) has received considerable theoretical but scant empirical attention in industrial-organizational psychology. This study of 138 participants performing an Internet-search task offers an initial test and verification of P. R. Sackett, S.…

  18. Typical performance of regular low-density parity-check codes over general symmetric channels

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, Toshiyuki [Department of Electronics and Information Engineering, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo 192-0397 (Japan); Saad, David [Neural Computing Research Group, Aston University, Aston Triangle, Birmingham B4 7ET (United Kingdom)

    2003-10-31

    Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The relationship between the free energy in the statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive white Gaussian noise channel and the binary-input Laplace channel are considered as specific channel models.
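
    For readers unfamiliar with the terminology (these definitions are standard, not specific to the paper), a regular (j, k) Gallager code is defined by a sparse parity-check matrix H with j ones per column and k ones per row; codewords x satisfy

      H x = 0 \pmod{2}, \qquad R_{\mathrm{design}} = 1 - \frac{j}{k}

    and "saturating Shannon's bound" means that, in the appropriate limit of j and k at fixed rate, the typical-set decoding threshold approaches the capacity of the binary-input symmetric channel.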

  19. Typical Phases of Transformative Learning: A Practice-Based Model

    Science.gov (United States)

    Nohl, Arnd-Michael

    2015-01-01

    Empirical models of transformative learning offer important insights into the core characteristics of this concept. Whereas previous analyses were limited to specific social groups or topical terrains, this article empirically typifies the phases of transformative learning on the basis of a comparative analysis of various social groups and topical…

  20. Threshold Research on Highway Length under Typical Landscape Patterns Based on Drivers’ Physiological Performance

    Directory of Open Access Journals (Sweden)

    Xia Zhao

    2015-01-01

    Appropriately landscaped highway scenes may not only help improve road safety and comfort but also help protect the ecological environment. Yet there are very few research data on highway length thresholds that take distinctive landscape patterns into consideration. Against this backdrop, the paper aims to quantitatively analyze the effect of highway landscape on driving behavior based on drivers’ physiological performance and to quantify highway length thresholds under three typical landscape patterns, namely, “open,” “semiopen,” and “vertical” ones. The statistical analysis was based on data collected in a driving simulator and with an electrocardiograph. Specifically, vehicle-related data, ECG data, and supplemental subjective stress perception were collected. The study extracted two characteristic indices, lane deviation and LF/HF, and extrapolated the drivers’ U-shaped physiological response to landscape patterns. Models of highway length were built based on the variation trend of LF/HF with highway length. The results revealed that the theoretical highway length threshold tended to increase when the landscape pattern was switched from open to semiopen to vertical. The reliability and accuracy of the results were validated by questionnaires and field operational tests. Findings from this research will assist practitioners in taking active environmental countermeasures pertaining to different roadside landscape patterns.

  1. Outdoor performances of four photovoltaic technologies under four typical meteorological conditions

    Science.gov (United States)

    Guenounou, A.; Aillerie, M.; Malek, A.; Triki, A.; Oulebsir, A.; Smara, Z.; Mahrane, A.; Chikh, M.

    2016-07-01

    We present a comparative study of the behavior and performance under various weather conditions of four PV modules of different technologies, recorded on four typical days in summer and winter. The study is based on the simultaneous and continuous testing of PV modules under the natural conditions of a site located in a coastal area of the southern Mediterranean. We are essentially interested in the fill factor, the conversion efficiency and the energy performance. A brief description of the experimental set-up and the original method is given after the introductory paragraph. All obtained graphical results allow, first, the validation of the approach and, second, point out that the daily evolution curves of the fill factor and the efficiency of the PV modules follow different paces depending on the PV technology. In addition, the results of the energy study show that the performance ratios of the different technologies are differently influenced by the weather environment and the seasons.

  2. A Modal Model to Simulate Typical Structural Dynamic Nonlinearity [PowerPoint

    Energy Technology Data Exchange (ETDEWEB)

    Mayes, Randall L.; Pacini, Benjamin Robert; Roettgen, Dan

    2016-01-01

    Some initial investigations have been published which simulate nonlinear response with almost traditional modal models: instead of connecting the modal mass to ground through the traditional spring and damper, a nonlinear Iwan element was added. This assumes that the mode shapes do not change with amplitude and that there are no interactions between modal degrees of freedom. This work expands on these previous studies. An impact experiment is performed on a structure which exhibits typical structural dynamic nonlinear response, i.e. weak frequency dependence and strong damping dependence on the amplitude of vibration. Low-level modal test results in combination with high-level impact data are processed using various combinations of modal filtering, the Hilbert transform and band-pass filtering to develop response data that are then fit with various nonlinear elements to create a nonlinear pseudo-modal model. Simulations of forced response are compared with high-level experimental data for various nonlinear element assumptions.

  3. Energy-Performance-Based Design-Build Process: Strategies for Procuring High-Performance Buildings on Typical Construction Budgets: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Scheib, J.; Pless, S.; Torcellini, P.

    2014-08-01

    NREL experienced a significant increase in employees and facilities on our 327-acre main campus in Golden, Colorado over the past five years. To support this growth, researchers developed and demonstrated a new building acquisition method that successfully integrates energy efficiency requirements into the design-build requests for proposals and contracts. We piloted this energy performance based design-build process with our first new construction project in 2008. We have since replicated and evolved the process for large office buildings, a smart grid research laboratory, a supercomputer, a parking structure, and a cafeteria. Each project incorporated aggressive efficiency strategies using contractual energy use requirements in the design-build contracts, all on typical construction budgets. We have found that when energy efficiency is a core project requirement as defined at the beginning of a project, innovative design-build teams can integrate the most cost effective and high performance efficiency strategies on typical construction budgets. When the design-build contract includes measurable energy requirements and is set up to incentivize design-build teams to focus on achieving high performance in actual operations, owners can now expect their facilities to perform. As NREL completed the new construction in 2013, we have documented our best practices in training materials and a how-to guide so that other owners and owner's representatives can replicate our successes and learn from our experiences in attaining market viable, world-class energy performance in the built environment.

  4. [Effects of fuel properties on the performance of a typical Euro IV diesel engine].

    Science.gov (United States)

    Chen, Wen-miao; Wang, Jian-xin; Shuai, Shi-jin

    2008-09-01

    With the purpose of establishing a diesel fuel standard for the China National 4th Emission Standard, and as one part of the Beijing "Auto-Oil" programme, engine performance tests were carried out on a typical Euro IV diesel engine using eight diesel fuels with different properties. Test results show that fuel properties have little effect on the power, fuel consumption, and in-cylinder combustion process of the tested Euro IV diesel engine; sulfate in PM and gaseous SO2 emissions increase linearly with diesel sulfur content; an increase in cetane number reduces BSFC and PM and increases NOx; a decrease in T90 reduces NOx, while PM also tends to decrease. Prediction equations for the tested engine's ESC-cycle NOx and PM emissions upstream of the SCR, as functions of diesel fuel sulfur content, cetane number, T90 and aromatics, were obtained by linear regression on the test results.
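
    The "prediction equations" mentioned above are linear regressions; their general form (the coefficients are reported in the paper and are not reproduced here) is, schematically,

      \mathrm{NO}_x = \beta_0 + \beta_1 S + \beta_2 \,\mathrm{CN} + \beta_3 \,T_{90} + \beta_4 \,\mathrm{Ar}, \qquad
      \mathrm{PM} = \gamma_0 + \gamma_1 S + \gamma_2 \,\mathrm{CN} + \gamma_3 \,T_{90} + \gamma_4 \,\mathrm{Ar}

    where S is the fuel sulfur content, CN the cetane number, T_{90} the 90% distillation temperature and Ar the aromatics content.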

  5. Predicting the seismic performance of typical R/C healthcare facilities: emphasis on hospitals

    Science.gov (United States)

    Bilgin, Huseyin; Frangu, Idlir

    2017-07-01

    Reinforced concrete (RC) buildings constitute an important part of the current building stock in earthquake-prone countries such as Albania. The seismic response of structures during a severe earthquake plays a vital role in the extent of structural damage and the resulting injuries and losses. In this context, this study evaluates the expected performance of a five-story RC healthcare facility, representative of common practice in Albania, designed according to older codes. The design was based on the code requirements used in this region during the mid-1980s. Non-linear static and dynamic time history analyses were conducted on the structural model using the Zeus NL computer program. The dynamic time history analysis was conducted with a set of ground motions from real earthquakes. The building responses were estimated at the global level. FEMA 356 criteria were used to predict the seismic performance of the building. Structural response measures such as the capacity curve and inter-story drift under the set of ground motions and the pushover analysis results were compared, and a detailed seismic performance assessment was carried out. The main aim of this study is to consider the application and methodology for the earthquake performance assessment of existing buildings. The seismic performance of the structural model varied significantly under different ground motions. Results indicate that the case study building exhibits inadequate seismic performance under different seismic excitations. In addition, reasons for the poor performance of the building are discussed.

  6. Predicting the seismic performance of typical R/C healthcare facilities: emphasis on hospitals

    Science.gov (United States)

    Bilgin, Huseyin; Frangu, Idlir

    2017-09-01

    Reinforced concrete (RC) buildings constitute an important part of the current building stock in earthquake-prone countries such as Albania. The seismic response of structures during a severe earthquake plays a vital role in the extent of structural damage and the resulting injuries and losses. In this context, this study evaluates the expected performance of a five-story RC healthcare facility, representative of common practice in Albania, designed according to older codes. The design was based on the code requirements used in this region during the mid-1980s. Non-linear static and dynamic time history analyses were conducted on the structural model using the Zeus NL computer program. The dynamic time history analysis was conducted with a set of ground motions from real earthquakes. The building responses were estimated at the global level. FEMA 356 criteria were used to predict the seismic performance of the building. Structural response measures such as the capacity curve and inter-story drift under the set of ground motions and the pushover analysis results were compared, and a detailed seismic performance assessment was carried out. The main aim of this study is to consider the application and methodology for the earthquake performance assessment of existing buildings. The seismic performance of the structural model varied significantly under different ground motions. Results indicate that the case study building exhibits inadequate seismic performance under different seismic excitations. In addition, reasons for the poor performance of the building are discussed.

  7. Typical off-design analytical performances of internal combustion engine cogeneration

    Institute of Scientific and Technical Information of China (English)

    Xiaohong HE; Ruixian CAI

    2009-01-01

    Based on experimental data, typical off-design characteristic curves with corresponding formulas for internal combustion engines (ICE) are summarized and investigated. In combination with the analytical solution of a single-pressure heat recovery steam generator (HRSG) and the influence of ambient pressure on the combined heat and power (CHP) system, off-design operating regularities of ICE cogeneration are analyzed. The approach temperature difference Δta, relative steam production and superheated steam temperature decrease with decreasing engine load. The total energy efficiency, equivalent exergy efficiency and economic exergy efficiency first increase and then decrease; therefore an optimum value exists, corresponding to the ICE's best-efficiency operating condition. It is worth emphasizing that Δta is likely to be negative at low load with a high design steam parameter and a low ICE design exhaust gas temperature. Compared with single-shaft gas turbine cogeneration, Δta in ICE cogeneration is more likely to be negative. The main reason is that the gas turbine exhaust gas flow increases as load decreases, whereas the ICE exhaust gas flow decreases. Moreover, ICE power output and efficiency decrease with decreasing ambient pressure; hence the approach temperature difference, relative steam production and superheated steam temperature decrease rapidly while the cogeneration efficiencies decrease slightly. It is necessary to consider the influence of ambient conditions on cogeneration performance, especially when optimizing ICE performance at different locations.

  8. Energy and exergy performance evaluation of a typical solar photovoltaic module

    Directory of Open Access Journals (Sweden)

    Pandey Adarsh Kumar

    2015-01-01

    This paper presents the energy and exergy performance evaluation of a heterojunction with intrinsic thin layer (HIT) solar photovoltaic (SPV) module for a particular day in different months of the year in a typical climatic zone of north India. The energy, exergy and power conversion efficiencies have been calculated and plotted against time based on hourly insolation. The variation in all the efficiencies has been observed with respect to variation in solar radiation and wind speed, and it is found that all the efficiencies are higher in the morning and evening than at noon, owing to the variation in module temperature throughout the day. The performance of the SPV module has been found to be best in February, i.e. all three efficiencies are highest in February among all the months analysed and presented in the study. The energy efficiency is found to be always higher than the power conversion and exergy efficiencies. However, the exergy efficiency in some months, such as February, May, June, September, October and December, has been found to be higher than the power conversion efficiency, while the reverse is found in the remaining months.

  9. An ideal-typical model for comparing interprofessional relations and skill mix in health care.

    Science.gov (United States)

    Schönfelder, Walter; Nilsen, Elin Anita

    2016-11-08

    Comparisons of health system performance, including the regulation of interprofessional relations and the skill mix between health professions, are challenging. National strategies for regulating interprofessional relations vary widely across European health care systems. Unambiguously defined and generally accepted performance indicators have to remain generic, with limited power for recognizing the organizational structures regulating interprofessional relations in different health systems. A coherent framework for in-depth comparisons of different models for organizing interprofessional relations and the skill mix between professional groups is currently not available. This study aims to develop an ideal-typical framework for categorizing skill mix and interprofessional relations in health care, and to assess the potential impact of different ideal types on care coordination and integrated service delivery. A document analysis of the Health Systems in Transition (HiT) reports published by the European Observatory on Health Systems and Policies was conducted. The HiT reports for 31 European health systems were analyzed using qualitative content analysis and a process of meaning condensation. The educational tracks available to nurses have an impact on the professional autonomy of nurses, the hierarchy between professional groups, the emphasis given to negotiating skill mix, interdisciplinary teamwork and the extent of cooperation across the health and social service interface. Based on the results of the document analysis, three ideal types for regulating interprofessional relations and skill mix in health care are delimited. For each ideal type, outcomes on service coordination and holistic service delivery are described. Comparisons of interprofessional relations are necessary for proactive health human resource policies. The proposed ideal-typical framework provides the means for in-depth comparisons of interprofessional relations in health care…

  10. A Case Study on Procedural Modeling of Geo-Typical Southern Afghanistan Terrain

    NARCIS (Netherlands)

    Smelik, R.M.; Kraker, J.K. de; Tutenel, T.; Bidarra, R.

    2009-01-01

    A cost-effective method of military training is game-based training, for which custom geo-typical terrain models are often most suitable. However, for training instructors, scenario preparation is seriously slowed down by the complexity of current terrain modeling tools and methods. They would…

  11. A Case Study on Procedural Modeling of Geo-Typical Southern Afghanistan Terrain

    NARCIS (Netherlands)

    Smelik, R.M.; Kraker, J.K. de; Tutenel, T.; Bidarra, R.

    2009-01-01

    A cost-effective method of military training is game-based training, for which custom geo-typical terrain models are often most suitable. However, for training instructors, scenario preparation is seriously slowed down by the complexity of current terrain modeling tools and methods. They would benefit…

  12. Decision-Tree Models of Categorization Response Times, Choice Proportions, and Typicality Judgments

    Science.gov (United States)

    Lafond, Daniel; Lacouture, Yves; Cohen, Andrew L.

    2009-01-01

    The authors present 3 decision-tree models of categorization adapted from T. Trabasso, H. Rollins, and E. Shaughnessy (1971) and use them to provide a quantitative account of categorization response times, choice proportions, and typicality judgments at the individual-participant level. In Experiment 1, the decision-tree models were fit to…

  13. Decision-Tree Models of Categorization Response Times, Choice Proportions, and Typicality Judgments

    Science.gov (United States)

    Lafond, Daniel; Lacouture, Yves; Cohen, Andrew L.

    2009-01-01

    The authors present 3 decision-tree models of categorization adapted from T. Trabasso, H. Rollins, and E. Shaughnessy (1971) and use them to provide a quantitative account of categorization response times, choice proportions, and typicality judgments at the individual-participant level. In Experiment 1, the decision-tree models were fit to…

  14. Thermal Performance for Wet Cooling Tower with Different Layout Patterns of Fillings under Typical Crosswind Conditions

    Directory of Open Access Journals (Sweden)

    Ming Gao

    2017-01-01

    A thermal-state model experiment was performed in the laboratory to investigate the thermal performance of a wet cooling tower with different filling layout patterns under windless and 0.4 m/s crosswind conditions. In this paper, the contrast analysis focuses on comparing a uniform layout pattern and one kind of optimal non-uniform layout pattern when the environmental crosswind speed is 0 m/s and 0.4 m/s. The experimental results proved that, under windless conditions, the heat transfer coefficient and total heat rejection of circulating water for the optimal non-uniform layout pattern can be enhanced by approximately 40% and 28%, respectively, compared with the uniform layout pattern. It was also discovered that the optimal non-uniform pattern can dramatically relieve the influence of crosswind on the thermal performance of the tower when the crosswind speed is equal to 0.4 m/s. For the uniform layout pattern, the heat transfer coefficient under 0.4 m/s crosswind conditions decreased by 9.5% compared with the windless conditions, while that value decreased by only 2.0% for the optimal non-uniform layout pattern. It has been demonstrated that the optimal non-uniform layout pattern has better thermal performance under the 0.4 m/s crosswind condition.

  15. Body ideals in women after viewing images of typical and healthy weight models.

    Science.gov (United States)

    Owen, Rebecca; Spencer, Rebecca M C

    2013-09-01

    Viewing thin models, pervasive in popular culture, is correlated with body dissatisfaction and anxiety in women. Whether or not the same is true when viewing healthy weight models is unknown. In this study we tested whether viewing healthy weight models increases the ideal female body size. Body image, anxiety, happiness and depression were measured in 44 female participants following viewing of images of thin or healthy weight models (within-subject separated by two weeks). We found that after viewing images of healthy weight models, women's body ideals (as measured by a participant-adjusted virtual model) were significantly larger than when the same women viewed images of very thin models. This effect was greatest in those women with the highest levels of baseline anxiety (as measured by the Hospital Anxiety and Depression Scale). These results suggest that viewing healthy weight models results in more healthy body ideals than those typically promoted through media. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Gender Gap in the National College Entrance Exam Performance in China: A Case Study of a Typical Chinese Municipality

    Science.gov (United States)

    Zhang, Yu; Tsang, Mun

    2015-01-01

    This is one of the first studies to investigate the gender achievement gap in the National College Entrance Exam in a typical municipality in China, which is the crucial examination for the transition from high school to higher education in that country. Using an ordinary least squares model and a quantile regression model, the study consistently finds that…

  17. Photovoltaic array performance model.

    Energy Technology Data Exchange (ETDEWEB)

    Kratochvil, Jay A.; Boyson, William Earl; King, David L.

    2004-08-01

    This document summarizes the equations and applications associated with the photovoltaic array performance model developed at Sandia National Laboratories over the last twelve years. Electrical, thermal, and optical characteristics of photovoltaic modules are included in the model, and the model is designed to use hourly solar resource and meteorological data. The versatility and accuracy of the model have been validated for flat-plate modules (all technologies) and for concentrator modules, as well as for large arrays of modules. Applications include system design and sizing, 'translation' of field performance measurements to standard reporting conditions, system performance optimization, and real-time comparison of measured versus expected system performance.
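
    As a hedged illustration of one ingredient of that model, the sketch below implements the thermal relations (module temperature from irradiance, air temperature and wind speed, then cell temperature). The coefficient values are representative defaults for one open-rack module type, assumed here for illustration; the report tabulates coefficients for other module and mounting configurations.

      # Sketch of the Sandia thermal relations; a, b and dT are illustrative
      # coefficients (not universal constants) for an open-rack glass/polymer module.
      import math

      def sandia_cell_temp(poa_irradiance, air_temp, wind_speed,
                           a=-3.56, b=-0.075, dT=3.0, e0=1000.0):
          """Estimated cell temperature (deg C) from plane-of-array irradiance
          (W/m^2), ambient temperature (deg C) and wind speed (m/s)."""
          t_module = poa_irradiance * math.exp(a + b * wind_speed) + air_temp
          return t_module + (poa_irradiance / e0) * dT

      print(sandia_cell_temp(850.0, 25.0, 2.0))   # e.g. a clear hour with moderate wind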

  18. Variability in Classroom Social Communication: Performance of Children with Fetal Alcohol Spectrum Disorders and Typically Developing Peers

    Science.gov (United States)

    Kjellmer, Liselotte; Olswang, Lesley B.

    2013-01-01

    Purpose: In this study, the authors examined how variability in classroom social communication performance differed between children with fetal alcohol spectrum disorders (FASD) and pair-matched, typically developing peers. Method: Twelve pairs of children were observed in their classrooms, 40 min per day (20 min per child) for 4 days over a…

  19. Performance degradation of a typical twin engine commuter type aircraft in measured natural icing conditions

    Science.gov (United States)

    Ranaudo, R. J.; Mikkelsen, K. L.; Mcknight, R. C.; Perkins, P. J., Jr.

    1984-01-01

    The performance of an aircraft in various measured icing conditions was investigated. Icing parameters such as liquid water content, temperature, cloud droplet sizes and distributions were measured continuously while in icing. Flight data were reduced to provide plots of the aircraft drag polars and lift curves (CL vs. alpha) for the measured 'iced' condition as referenced to the uniced aircraft. These data were also reduced to provide plots of thrust horsepower required vs. single-engine power available to show how icing affects engine-out capability. It is found that performance degradation is primarily influenced by the amount and shape of the accumulated ice. Glaze icing caused the greatest aerodynamic performance penalties in terms of increased drag and reduction in lift, while aerodynamic penalties due to rime icing were significantly lower. Previously announced in STAR as N84-13173

  20. A comparative analysis of the two typical farmland transfer models in Chongqing

    Institute of Scientific and Technical Information of China (English)

    肖轶; 魏朝富; 尹珂; 罗光莲

    2009-01-01

    This paper attempts to conduct a comparative analysis of the two typical farmland transfer models introduced by Chongqing in its comprehensive coordinated reform experiment for balanced urban and rural development: i) the "pooling of land as shares" in Qilin village, Changshou district; and ii) the "homestead/house swap, contracted land/social security swap" in Jiulongpo district. It is estimated that the former model offers lower land appreciation benefits than the latter; the former faces greater operational risks, whereas the latter can to a certain extent mitigate risks by boosting regulatory control and reasonable government guidance. The homestead/house swap, contracted land/social security swap model is therefore the preferred choice. It can solve a series of social security problems that arise after peasants are divorced from land and enable peasants to garner higher land appreciation benefits through farmland transfer.

  1. Hadoop Performance Models

    OpenAIRE

    Herodotou, Herodotos

    2011-01-01

    Hadoop MapReduce is now a popular choice for performing large-scale data analytics. This technical report describes a detailed set of mathematical performance models for describing the execution of a MapReduce job on Hadoop. The models describe dataflow and cost information at the fine granularity of phases within the map and reduce tasks of a job execution. The models can be used to estimate the performance of MapReduce jobs as well as to find the optimal configuration settings to use when running the jobs.

  2. Hadoop Performance Models

    CERN Document Server

    Herodotou, Herodotos

    2011-01-01

    Hadoop MapReduce is now a popular choice for performing large-scale data analytics. This technical report describes a detailed set of mathematical performance models for describing the execution of a MapReduce job on Hadoop. The models describe dataflow and cost information at the fine granularity of phases within the map and reduce tasks of a job execution. The models can be used to estimate the performance of MapReduce jobs as well as to find the optimal configuration settings to use when running the jobs.
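
    Purely as an illustration of the phase-level modeling style described above (these are not Herodotou's actual equations; every rate, size and slot count below is an invented parameter), a job cost can be built up from per-phase costs of the map tasks plus the reduce tasks:

      # Toy MapReduce cost model: per-phase map task cost, then waves of tasks.
      import math

      def map_task_seconds(split_mb, read_mb_s=80.0, map_cpu_s_per_mb=0.02,
                           spill_mb_s=60.0, selectivity=0.5):
          read = split_mb / read_mb_s                      # read input split from HDFS
          compute = split_mb * map_cpu_s_per_mb            # apply the map function
          spill = split_mb * selectivity / spill_mb_s      # write map output locally
          return read + compute + spill

      def job_seconds(input_mb, split_mb=128.0, map_slots=8,
                      reduce_task_s=40.0, reduce_slots=4, n_reducers=8):
          n_maps = math.ceil(input_mb / split_mb)
          map_waves = math.ceil(n_maps / map_slots)        # map tasks run in waves
          reduce_waves = math.ceil(n_reducers / reduce_slots)
          return map_waves * map_task_seconds(split_mb) + reduce_waves * reduce_task_s

      print(job_seconds(10240))                            # a 10 GB input under toy parameters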

  3. Thermal Performance of Typical Residential Building in Karachi with Different Materials for Construction

    Directory of Open Access Journals (Sweden)

    Nafeesa Shaheen

    2016-04-01

    This research work deals with the study of a residential building located in the climatic context of Karachi, with the objective of studying its thermal performance based upon passive design techniques. The study helps in reducing electricity consumption by improving indoor temperatures. The existing residential buildings in Karachi were studied with reference to their planning and design, analyzed and evaluated. Different construction compositions of buildings were identified, surveyed and analyzed for making effective building envelopes. Autodesk® Ecotect 2011 was used to determine indoor comfort conditions and HVAC (Heating, Ventilation, Air-Conditioning and Cooling) loads. The results of the research show significant energy savings of 38.5% in HVAC loads with the proposed building envelope of locally available materials and glazing.

  4. A study on emission performance of a diesel engine fueled with five typical methyl ester biodiesels

    Science.gov (United States)

    Wu, Fujia; Wang, Jianxin; Chen, Wenmiao; Shuai, Shijin

    As an alternative and renewable fuel, biodiesel can effectively reduce diesel engine emissions, especially particulate matter and dry soot. However, the biodiesel effects on emissions may vary as the source fuel changes. In this paper, the performance of five methyl esters from different sources was studied: cottonseed methyl ester (CME), soybean methyl ester (SME), rapeseed methyl ester (RME), palm oil methyl ester (PME) and waste cooking oil methyl ester (WME). Total particulate matter (PM), dry soot (DS), non-soot fraction (NSF), nitrogen oxides (NOx), unburned hydrocarbon (HC), and carbon monoxide (CO) were investigated on a Cummins ISBe6 Euro III diesel engine and compared with a baseline diesel fuel. Results show that using different methyl esters results in large PM reductions ranging from 53% to 69%, including DS reductions ranging from 79% to 83%. Both oxygen content and viscosity could influence the DS emission: higher oxygen content leads to less DS at high load, while lower viscosity results in less DS at low load. NSF decreases consistently as cetane number increases, except for PME. The cetane number could be responsible for the large NSF difference between the different methyl esters.

  5. NIF capsule performance modeling

    OpenAIRE

    Weber S.; Callahan D.; Cerjan C.; Edwards M.; Haan S.; Hicks D.; Jones O.; Kyrala G.; Meezan N.; Olson R; Robey H.; Spears B.; Springer P.; Town R.

    2013-01-01

    Post-shot modeling of NIF capsule implosions was performed in order to validate our physical and numerical models. Cryogenic layered target implosions and experiments with surrogate targets produce an abundance of capsule performance data including implosion velocity, remaining ablator mass, times of peak x-ray and neutron emission, core image size, core symmetry, neutron yield, and x-ray spectra. We have attempted to match the integrated data set with capsule-only simulations by adjusting the drive and other physics parameters within expected uncertainties.

  6. [Modeling Study of A Typical Summer Ozone Pollution Event over Yangtze River Delta].

    Science.gov (United States)

    Zhang, Liang; Zhu, Bin; Gao, Jin-hui; Kang, Han-qing; Yang, Peng; Wang, Hong-lei; Li, Yue-e; Shao, Ping

    2015-11-01

    The WRF/Chem model was used to analyze the temporal and spatial distribution characteristics and the physical and chemical mechanisms of a typical summer ozone pollution event over the Yangtze River Delta (YRD). The results showed that the model was capable of reproducing the temporal and spatial distribution and evolution characteristics of the event. The YRD region was mainly under subtropical high-pressure control, and the weather conditions of strong sunshine, high temperature and light wind were favorable for the formation of photochemical pollution on August 10-18, 2013. The simulation showed that the spatial and temporal distribution of O3 was obviously affected by the meteorological fields, geographic location, regional transport and chemical formation over the YRD. The sensitivity experiment showed that the O3 concentration affected by the maritime airstream was low in Shanghai, but the impact of Shanghai emissions on the spatial and temporal distribution of the O3 concentration over the YRD was significant. The main contributions to the high surface O3 concentration in Nanjing were chemical generation (alkenes and aromatics) and vertical transport of O3 from aloft, whereas the main contribution to the high O3 concentration in Hangzhou and Suzhou was physical transport. The influence on the 15:00 peak O3 concentration over the YRD was very obvious when O3 precursors were reduced at the time of the maximum O3 formation rate (11:00-13:00).

  7. Modeling a typical winter-time dust event over the Arabian Peninsula and the Red Sea

    KAUST Repository

    Kalenderski, S.

    2013-02-20

    We used WRF-Chem, a regional meteorological model coupled with an aerosol-chemistry component, to simulate various aspects of the dust phenomena over the Arabian Peninsula and Red Sea during a typical winter-time dust event that occurred in January 2009. The model predicted that the total amount of emitted dust was 18.3 Tg for the entire dust outburst period and that the two maximum daily rates were ~2.4 Tg day−1 and ~1.5 Tg day−1, corresponding to two periods with the highest aerosol optical depth that were well captured by ground- and satellite-based observations. The model predicted that the dust plume was thick, extensive, and mixed in a deep boundary layer at an altitude of 3-4 km. Its spatial distribution was modeled to be consistent with typical spatial patterns of dust emissions. We utilized MODIS-Aqua and Solar Village AERONET measurements of the aerosol optical depth (AOD) to evaluate the radiative impact of aerosols. Our results clearly indicated that the presence of dust particles in the atmosphere caused a significant reduction in the amount of solar radiation reaching the surface during the dust event. We also found that dust aerosols have significant impact on the energy and nutrient balances of the Red Sea. Our results showed that the simulated cooling under the dust plume reached 100 W m−2, which could have profound effects on both the sea surface temperature and circulation. Further analysis of dust generation and its spatial and temporal variability is extremely important for future projections and for better understanding of the climate and ecological history of the Red Sea.

  8. Modeling a typical winter-time dust event over the Arabian Peninsula and the Red Sea

    Directory of Open Access Journals (Sweden)

    S. Kalenderski

    2013-02-01

    We used WRF-Chem, a regional meteorological model coupled with an aerosol-chemistry component, to simulate various aspects of the dust phenomena over the Arabian Peninsula and Red Sea during a typical winter-time dust event that occurred in January 2009. The model predicted that the total amount of emitted dust was 18.3 Tg for the entire dust outburst period and that the two maximum daily rates were ~2.4 Tg day−1 and ~1.5 Tg day−1, corresponding to two periods with the highest aerosol optical depth that were well captured by ground- and satellite-based observations. The model predicted that the dust plume was thick, extensive, and mixed in a deep boundary layer at an altitude of 3-4 km. Its spatial distribution was modeled to be consistent with typical spatial patterns of dust emissions. We utilized MODIS-Aqua and Solar Village AERONET measurements of the aerosol optical depth (AOD) to evaluate the radiative impact of aerosols. Our results clearly indicated that the presence of dust particles in the atmosphere caused a significant reduction in the amount of solar radiation reaching the surface during the dust event. We also found that dust aerosols have significant impact on the energy and nutrient balances of the Red Sea. Our results showed that the simulated cooling under the dust plume reached 100 W m−2, which could have profound effects on both the sea surface temperature and circulation. Further analysis of dust generation and its spatial and temporal variability is extremely important for future projections and for better understanding of the climate and ecological history of the Red Sea.

  9. Enhanced air dispersion modelling at a typical Chinese nuclear power plant site: Coupling RIMPUFF with two advanced diagnostic wind models.

    Science.gov (United States)

    Liu, Yun; Li, Hong; Sun, Sida; Fang, Sheng

    2017-09-01

    An enhanced air dispersion modelling scheme is proposed to cope with the building layout and complex terrain of a typical Chinese nuclear power plant (NPP) site. In this modelling, the California Meteorological Model (CALMET) and the Stationary Wind Fit and Turbulence (SWIFT) are coupled with the Risø Mesoscale PUFF model (RIMPUFF) for refined wind field calculation. The near-field diffusion coefficient correction scheme of the Atmospheric Relative Concentrations in the Building Wakes Computer Code (ARCON96) is adopted to characterize dispersion in building arrays. The proposed method is evaluated by a wind tunnel experiment that replicates the typical Chinese NPP site. For both wind speed/direction and air concentration, the enhanced modelling predictions agree well with the observations. The fraction of the predictions within a factor of 2 and 5 of observations exceeds 55% and 82% respectively in the building area and the complex terrain area. This demonstrates the feasibility of the new enhanced modelling for typical Chinese NPP sites. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. NIF capsule performance modeling

    Directory of Open Access Journals (Sweden)

    Weber S.

    2013-11-01

    Post-shot modeling of NIF capsule implosions was performed in order to validate our physical and numerical models. Cryogenic layered target implosions and experiments with surrogate targets produce an abundance of capsule performance data including implosion velocity, remaining ablator mass, times of peak x-ray and neutron emission, core image size, core symmetry, neutron yield, and x-ray spectra. We have attempted to match the integrated data set with capsule-only simulations by adjusting the drive and other physics parameters within expected uncertainties. The simulations include interface roughness, time-dependent symmetry, and a model of mix. We were able to match many of the measured performance parameters for a selection of shots.

  11. Performance Engineering in the Community Atmosphere Model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P; Mirin, A; Drake, J; Sawyer, W

    2006-05-30

    The Community Atmosphere Model (CAM) is the atmospheric component of the Community Climate System Model (CCSM) and is the primary consumer of computer resources in typical CCSM simulations. Performance engineering has been an important aspect of CAM development throughout its existence. This paper briefly summarizes these efforts and their impacts over the past five years.

  12. Comparing performance within a virtual supermarket of children with traumatic brain injury to typically developing children: a pilot study.

    Science.gov (United States)

    Erez, Neta; Weiss, Patrice L; Kizony, Rachel; Rand, Debbie

    2013-01-01

    The purpose of this study was to determine the usability of a virtual reality environment for pediatric traumatic brain injury (TBI) by assessing the performance of a simple virtual shopping task and comparing their results to typically developing peers. Twenty children with TBI and 20 typically developing children, matched in age and sex, "shopped" for four items in a virtual supermarket (VMall). A short feedback questionnaire, Borg's scale of perceived exertion, and the Zoo Map subtest from the Behavioral Assessment of the Dysexecutive Syndrome for Children were also administered. All of the children were able to complete a four-item test within the VMall. Overall, good usability was obtained. A significant difference in shopping performance was found between the two groups; the mean shopping time and number of mistakes was higher for the children with TBI. The use of a short shopping test within a functional virtual environment enabled detection of poorer performance of children with TBI that may be due to executive function deficits. Because the task was enjoyable and motivating, the VMall may also be used to enhance participation in instrumental activities of daily living and play for children with TBI. [OTJR: Occupation, Participation and Health. 2013;33(4):218-227.].

  13. Improved Algorithm of SCS-CN Model Parameters in Typical Inland River Basin in Central Asia

    Science.gov (United States)

    Wang, Jin J.; Ding, Jian L.; Zhang, Zhe; Chen, Wen Q.

    2017-02-01

    The rainfall-runoff relationship is the most important factor for hydrological structures and for social and economic development under global warming, especially in arid regions. The aim of this paper is to find a suitable method to simulate runoff in arid areas. The Soil Conservation Service Curve Number (SCS-CN) method is the most popular and widely applied model for direct runoff estimation. In this paper, we focus on the Wen-quan Basin in the source region of the Boertala River, a typical inland valley of Central Asia. For the first time, 16 m resolution imagery from the high-definition Earth observation satellite “Gaofen-1” is used to provide high-accuracy land use classification data from which the curve number is determined. A 2D scatter plot of surface temperature versus vegetation index (TS/VI), combined with the soil moisture absorption balance principle, is used to calculate the moisture-holding capacity of the soil. The original SCS-CN model and the model with the improved parameter algorithm are each used to simulate the runoff. The simulation results show that the improved model performs better than the original model: the Nash-Sutcliffe efficiencies in the calibration and validation periods were 0.79 and 0.71 for the improved model versus 0.66 and 0.38 for the original, and the corresponding relative errors were 3% and 12% versus 17% and 27%. The results indicate that the simulation accuracy can be further improved and that using remote sensing information to improve the basic geographic data for the hydrological model has the following advantages: 1) remote sensing data are spatially distributed, comprehensive, and representative; 2) they help to get around the bottleneck of data scarcity and provide a reference for simulating runoff in basins with similar conditions and in data-lacking regions.
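
    As background for the curve-number calculation referenced above, the sketch below (Python) implements the textbook SCS-CN direct-runoff equation; it is a generic illustration under assumed values, not the authors' improved parameter algorithm, and the curve number and rainfall depths are hypothetical.

    # Minimal sketch of the standard SCS-CN direct-runoff equation (depths in mm).
    def scs_cn_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
        """Direct runoff Q (mm) from event rainfall P (mm) for a given curve number."""
        s = 25400.0 / cn - 254.0      # potential maximum retention S (mm)
        ia = ia_ratio * s             # initial abstraction, classically 0.2 * S
        if p_mm <= ia:
            return 0.0                # all rainfall abstracted, no direct runoff
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    if __name__ == "__main__":
        for p in (10.0, 25.0, 60.0):  # hypothetical event rainfall depths (mm)
            print(p, round(scs_cn_runoff(p, cn=75.0), 2))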

  14. Performance modeling of Beamlet

    Energy Technology Data Exchange (ETDEWEB)

    Auerbach, J.M.; Lawson, J.K.; Rotter, M.D.; Sacks, R.A.; Van Wonterghem, B.W.; Williams, W.H.

    1995-06-27

    Detailed modeling of beam propagation in Beamlet has been made to predict system performance. New software allows extensive use of optical component characteristics. This inclusion of real optical component characteristics has resulted in close agreement between calculated and measured beam distributions.

  15. Typical structural elements of seismicity and impact crater morphology identified in GIS ENDDB digital models.

    Science.gov (United States)

    Mikheeva, Anna

    2014-05-01

    The subject database of the ENDDB system (Earth's Natural Disasters Database) is a combination of the EISC catalog (Earth's Impact Structures Catalog [1]) and seismological data from more than 60 earthquake catalogs (EC). The ENDDB geographic subsystem uses the NASA ASTER GDEM data arrays to obtain a high-resolution (1 arc-second) shaded relief model, as well as a digital mapping technology that shades surface points according to their brightness as controlled by the illumination angle. For example, identifying impact craters by means of ENDDB begins with selecting the optimum base colors of the image and the parameters of illumination and shadow depth for constructing a shaded model on a regular grid of values. This procedure yields precise 3D images of the terrain and gravity patterns and, moreover, furnishes data for recognizing standard morphological elements by which impact structures can be visually detected. For constructing a shaded gravity anomaly with the ENDDB tools, global marine gravity data (models V16.1 and V18.1 [2]) are embedded in the system. These models, which are arrays of gravity pixel values, have a resolution that increases from the equator to the poles, being 30 arc-seconds per point on average; this resolution is the same as in the more recent V21.1 model. Using these data, new morphological elements typical of impact structures, expressed in the shaded elevation and gravity models and identified with the ENDDB visualization tools, were found and compared in hundreds of craters from the EISC catalog: tail-shaped asymmetry of relief, heart-shaped geometry of craters, tail-shaped gravity lows [3], and so on. New diagnostic criteria associated with typical morphological elements revealed with advanced image processing technologies are very important to confirm the impact origin of many potential craters. The basic hypothesis of the impact-explosive tectonics [4] is that meteorite craters on the
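
    The shaded-relief technique described above (surface brightness controlled by the illumination angle) can be illustrated with a standard Lambertian hillshade; the sketch below is only a generic example with a hypothetical DEM and sun angles, not the ENDDB implementation.

    import numpy as np

    def hillshade(dem, cellsize, azimuth_deg=315.0, altitude_deg=45.0):
        """Lambertian hillshade: brightness from local slope/aspect versus sun position."""
        az = np.radians(azimuth_deg)
        zen = np.radians(90.0 - altitude_deg)          # solar zenith angle
        dz_dy, dz_dx = np.gradient(dem, cellsize)      # terrain gradients (rows, cols)
        slope = np.arctan(np.hypot(dz_dx, dz_dy))
        aspect = np.arctan2(dz_dy, -dz_dx)
        shade = (np.cos(zen) * np.cos(slope)
                 + np.sin(zen) * np.sin(slope) * np.cos(az - aspect))
        return np.clip(shade, 0.0, 1.0)

    # Hypothetical elevation grid (metres) standing in for an ASTER GDEM tile.
    dem = np.outer(np.linspace(0.0, 100.0, 50), np.linspace(0.0, 100.0, 50))
    print(hillshade(dem, cellsize=30.0).shape)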

  16. ATR performance modeling concepts

    Science.gov (United States)

    Ross, Timothy D.; Baker, Hyatt B.; Nolan, Adam R.; McGinnis, Ryan E.; Paulson, Christopher R.

    2016-05-01

    Performance models are needed for automatic target recognition (ATR) development and use. ATRs consume sensor data and produce decisions about the scene observed. ATR performance models (APMs) on the other hand consume operating conditions (OCs) and produce probabilities about what the ATR will produce. APMs are needed for many modeling roles of many kinds of ATRs (each with different sensing modality and exploitation functionality combinations); moreover, there are different approaches to constructing the APMs. Therefore, although many APMs have been developed, there is rarely one that fits a particular need. Clarified APM concepts may allow us to recognize new uses of existing APMs and identify new APM technologies and components that better support coverage of the needed APMs. The concepts begin with thinking of ATRs as mapping OCs of the real scene (including the sensor data) to reports. An APM is then a mapping from explicit quantized OCs (represented with less resolution than the real OCs) and latent OC distributions to report distributions. The roles of APMs can be distinguished by the explicit OCs they consume. APMs used in simulations consume the true state that the ATR is attempting to report. APMs used online with the exploitation consume the sensor signal and derivatives, such as match scores. APMs used in sensor management consume neither of those, but estimate performance from other OCs. This paper will summarize the major building blocks for APMs, including knowledge sources, OC models, look-up tables, analytical and learned mappings, and tools for signal synthesis and exploitation.
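
    To make the look-up-table building block mentioned above concrete, the sketch below maps explicit quantized operating conditions to a distribution over ATR reports; the OC names, quantization thresholds, and probabilities are entirely hypothetical.

    from typing import Dict, Tuple

    # Hypothetical APM as a look-up table: quantized OCs (range bin, clutter level)
    # -> probability distribution over what the ATR will report.
    APM_TABLE: Dict[Tuple[str, str], Dict[str, float]] = {
        ("near", "low"):  {"target": 0.85, "clutter": 0.10, "no_report": 0.05},
        ("near", "high"): {"target": 0.60, "clutter": 0.30, "no_report": 0.10},
        ("far", "low"):   {"target": 0.55, "clutter": 0.25, "no_report": 0.20},
        ("far", "high"):  {"target": 0.30, "clutter": 0.40, "no_report": 0.30},
    }

    def quantize_range(range_m: float) -> str:
        # Coarse, hypothetical quantization of a real-valued operating condition.
        return "near" if range_m < 5000.0 else "far"

    def predict_report_distribution(range_m: float, clutter: str) -> Dict[str, float]:
        """APM: consume explicit quantized OCs, produce probabilities over reports."""
        return APM_TABLE[(quantize_range(range_m), clutter)]

    print(predict_report_distribution(3200.0, "high"))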

  17. Entanglement typicality

    Science.gov (United States)

    Dahlsten, Oscar C. O.; Lupo, Cosmo; Mancini, Stefano; Serafini, Alessio

    2014-09-01

    We provide a summary of both seminal and recent results on typical entanglement. By ‘typical’ values of entanglement, we refer here to values of entanglement quantifiers that (given a reasonable measure on the manifold of states) appear with arbitrarily high probability for quantum systems of sufficiently high dimensionality. We shall focus on pure states and work within the Haar measure framework for discrete quantum variables, where we report on results concerning the average von Neumann and linear entropies as well as arguments implying the typicality of such values in the asymptotic limit. We then proceed to discuss the generation of typical quantum states with random circuitry. Different phases of entanglement, and the connection between typical entanglement and thermodynamics are discussed. We also cover approaches to measures on the non-compact set of Gaussian states of continuous variable quantum systems.
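
    As a minimal numerical illustration of the typicality described above, the sketch below samples Haar-random pure states of a small bipartite system and compares the mean entanglement entropy with Page's estimate <S_A> ≈ ln d_A - d_A/(2 d_B) for d_A <= d_B; the dimensions and sample size are arbitrary choices for the example.

    import numpy as np

    rng = np.random.default_rng(0)
    d_a, d_b, n_samples = 4, 16, 2000              # arbitrary small dimensions

    def haar_random_state(dim):
        """Haar-random pure state: a normalized complex Gaussian vector."""
        v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        return v / np.linalg.norm(v)

    def entanglement_entropy(psi, d_a, d_b):
        """Von Neumann entropy (natural log) of the reduced state of subsystem A."""
        schmidt = np.linalg.svd(psi.reshape(d_a, d_b), compute_uv=False)
        p = schmidt ** 2
        p = p[p > 1e-15]
        return float(-np.sum(p * np.log(p)))

    entropies = [entanglement_entropy(haar_random_state(d_a * d_b), d_a, d_b)
                 for _ in range(n_samples)]
    page_estimate = np.log(d_a) - d_a / (2.0 * d_b)    # typical (Page) value
    print(np.mean(entropies), page_estimate)           # the two are very close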

  18. Science learning and literacy performance of typically developing, at-risk, and disabled, non-English language background students

    Science.gov (United States)

    Larrinaga McGee, Patria Maria

    Current education reform calls for excellence, access, and equity in all areas of instruction, including science and literacy. Historically, persons of diverse backgrounds or with disabilities have been underrepresented in science. Gaps are evident between the science and literacy achievement of diverse students and their mainstream peers. The purpose of this study was to document, describe, and examine patterns of development and change in the science learning and literacy performance of Hispanic students. The two major questions of this study were: (1) How is science content knowledge, as evident in oral and written formats, manifested in the performance of typically developing, at-risk, and disabled non-English language background (NELB) students? and (2) What are the patterns of literacy performance in science, and as evident in oral and written formats, among typically developing, at-risk, and disabled NELB students? This case study was part of a larger research project, the Promise Project, undertaken at the University of Miami, Coral Gables, Florida, under the sponsorship of the National Science Foundation. The study involved 24 fourth-grade students in seven classrooms located in Promise Project schools where teachers were provided with training and materials for instruction on two units of science content: Matter and Weather. Four students were selected from among the fourth-graders for a closer analysis of their performance. Qualitative and quantitative data analysis methods were used to document, describe, and examine specific events or phenomena in the processes of science learning and literacy development. Important findings were related to (a) gains in science learning and literacy development, (b) students' science learning and literacy development needs, and (c) general and idiosyncratic attitudes toward science and literacy. Five patterns of science "explanations" identified indicated a developmental cognitive/linguistic trajectory in science

  19. Comparison of thermal performance between test cells with different coverage systems for experimental typical day of heat in Brazilian Southeastern

    Directory of Open Access Journals (Sweden)

    Grace Tiberio Cardoso

    2014-09-01

    Full Text Available This article experimentally evaluates the thermal performance of two test cells with different coverage systems, a Light Green Roof (LGR) and a ceramic roof, by analyzing internal surface temperatures (IST) at the ceiling and dry bulb temperatures (DBT). The objective was to evaluate the spatial distribution of temperatures in buildings according to the spatial and temporal Dynamic Climatology approaches. An experimental typical day for heat conditions was determined. The data for the main climatic variables were provided by an automatic weather station, and temperatures inside the test cells were collected using thermocouples installed so as to cover the entire interior space. The results led to the conclusion that the LGR has a more balanced IST and DBT spatial distribution than the ceramic roof. Nevertheless, thermal performance is only one of the variables that must be considered when developing a construction proposal that is adapted to the context. The manner in which the thermocouples were placed inside the test cells also showed the importance of specifying the location of the sensors in experimental studies on the behavior and thermal performance of buildings.

  20. Experience, use, and performance measurement of the Hadoop File System in a typical nuclear physics analysis workflow

    Science.gov (United States)

    Sangaline, E.; Lauret, J.

    2014-06-01

    The quantity of information produced in Nuclear and Particle Physics (NPP) experiments necessitates the transmission and storage of data across diverse collections of computing resources. Robust solutions such as XRootD have been used in NPP, but as the usage of cloud resources grows, the difficulties in the dynamic configuration of these systems become a concern. Hadoop File System (HDFS) exists as a possible cloud storage solution with a proven track record in dynamic environments. Though currently not extensively used in NPP, HDFS is an attractive solution offering both elastic storage and rapid deployment. We will present the performance of HDFS in both canonical I/O tests and for a typical data analysis pattern within the RHIC/STAR experimental framework. These tests explore the scaling with different levels of redundancy and numbers of clients. Additionally, the performance of FUSE and NFS interfaces to HDFS were evaluated as a way to allow existing software to function without modification. Unfortunately, the complicated data structures in NPP are non-trivial to integrate with Hadoop and so many of the benefits of the MapReduce paradigm could not be directly realized. Despite this, our results indicate that using HDFS as a distributed filesystem offers reasonable performance and scalability and that it excels in its ease of configuration and deployment in a cloud environment.

  1. Numerical Investigations of Two Typical Unsteady Flows in Turbomachinery Using the Multi-Passage Model

    Science.gov (United States)

    Zhou, Di; Lu, Zhiliang; Guo, Tongqing; Shen, Ennan

    2016-06-01

    In this paper, two types of unsteady flow problems in turbomachinery, blade flutter and rotor-stator interaction, are investigated by means of numerical simulation. For the former, the energy method is often used to predict aeroelastic stability by calculating the aerodynamic work per vibration cycle. The inter-blade phase angle (IBPA) is an important parameter in the computation and may have significant effects on aeroelastic behavior. For the latter, the numbers of blades in each row are usually not equal and the unsteady rotor-stator interactions can be strong. An effective way to perform multi-row calculations is the domain scaling method (DSM). These two cases share a common point: the computational domain has to be extended to multiple passages (MP) considering their respective features. The present work is aimed at modeling these two issues with the developed MP model. Computational fluid dynamics (CFD) techniques are applied to resolve the unsteady Reynolds-averaged Navier-Stokes (RANS) equations and simulate the flow fields. With the parallel technique, the additional time cost due to modeling more passages can be largely decreased. Results are presented for two test cases, a vibrating rotor blade and a turbine stage.

  2. Effect of cattle breed on finishing performance, carcass characteristics and economic benefits under typical beef production system in China

    Directory of Open Access Journals (Sweden)

    Liping Ren

    2012-07-01

    Full Text Available This study compared the finishing performance, carcass characteristics and economic benefits of two imported (Limousin and Simmental) and three local (Luxi, Jinnan and Qinchuan) cattle breeds slaughtered at 18.5 months of age under the typical Chinese beef production system. All cattle (n=71) were reared under the same production system and fed the same finishing diet for 105 days. Eight bulls from each breed were randomly selected for slaughtering. Compared with the three local breeds, the two imported breeds had higher average daily gain, dry matter intake and gain efficiency. Regarding carcass characteristics, the two imported breeds had higher carcass weight, bone weight, net meat weight, and ribeye area (P<0.001). However, the local breeds had higher (P<0.01) marbling scores than the imported breeds. The imported breeds showed higher economic benefits (P<0.001) than the local breeds. In conclusion, the imported cattle breeds had better growth performance, carcass traits and economic benefits compared with the local cattle breeds at 18.5 months old under the typical Chinese feeding conditions, whereas, in this study, the local breeds may have some advantage in terms of meat quality.

  3. Identification of a Typical CSTR Using Optimal Focused Time Lagged Recurrent Neural Network Model with Gamma Memory Filter

    Directory of Open Access Journals (Sweden)

    S. N. Naikwad

    2009-01-01

    Full Text Available A focused time lagged recurrent neural network (FTLR NN) with gamma memory filter is designed to learn the subtle complex dynamics of a typical CSTR process. A continuous stirred tank reactor exhibits complex nonlinear behavior, as the reaction is exothermic. A literature review shows that process control of the CSTR using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available. As the CSTR process includes temporal relationships in the input-output mappings, a time lagged recurrent neural network is particularly suited for the identification task. The standard back propagation algorithm with momentum term is used in this model. The various parameters, such as the number of processing elements, number of hidden layers, training and testing percentage, learning rule, and transfer function in the hidden and output layers, are investigated on the basis of performance measures such as MSE, NMSE, and the correlation coefficient on the testing data set. Finally, the effects of different norms are tested along with variation of the gamma memory filter. It is demonstrated that the dynamic NN model has a remarkable system identification capability for the problems considered in this paper. Thus the FTLR NN with gamma memory filter can be used to learn the underlying highly nonlinear dynamics of the system, which is a major contribution of this paper.
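
    For readers unfamiliar with the gamma memory structure referenced above, the sketch below implements the standard gamma memory recursion x_k(t) = (1 - mu) * x_k(t-1) + mu * x_{k-1}(t-1); the filter order, mu, and the input signal are hypothetical, and this is not the authors' FTLR NN identification model.

    import numpy as np

    def gamma_memory(signal, order=3, mu=0.5):
        """Run a gamma memory filter bank over a 1-D signal.

        Tap 0 holds the raw input; each deeper tap k follows
        x_k(t) = (1 - mu) * x_k(t-1) + mu * x_{k-1}(t-1),
        giving the network an adjustable short-term memory depth.
        """
        taps = np.zeros(order + 1)
        history = np.zeros((len(signal), order + 1))
        for t, u in enumerate(signal):
            prev = taps.copy()
            taps[0] = u
            for k in range(1, order + 1):
                taps[k] = (1.0 - mu) * prev[k] + mu * prev[k - 1]
            history[t] = taps
        return history

    # Hypothetical input: a noisy step, to show the smoothed, delayed memory taps.
    rng = np.random.default_rng(1)
    u = np.concatenate([np.zeros(20), np.ones(30)]) + 0.05 * rng.normal(size=50)
    print(gamma_memory(u).shape)       # (50, order + 1)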

  4. A Typical Path Model of Tracheobronchial Clearance of Inhaled Particles in Rats

    Science.gov (United States)

    2002-01-01

    A mathematical description of particle clearance from the ciliated conducting airways (tracheobronchial region) of the lungs in rats was developed... Particle transport velocities for given generations of airways were estimated from reported tracheal transport velocities. Using typical rat airway geometry... and estimated particle transport velocities, solutions of sets of rate equations for transport from each generation of airways were summed to estimate

  5. Typical Protectionism

    Institute of Scientific and Technical Information of China (English)

    ZHANG ZHIPING

    2010-01-01

    In the middle of October, acting on a complaint from the United Steelworkers union, the Office of the U.S. Trade Representative initiated a Section 301 investigation into China's clean energy policies and practices. The move has aroused strong opposition from China. China's New Energy Association says the probe is neither well-founded nor responsible, and is typical trade protectionism.

  6. Effective cluster typical medium theory for the diagonal Anderson disorder model in one- and two-dimensions

    Science.gov (United States)

    Ekuma, Chinedu E.; Terletska, Hanna; Meng, Zi Yang; Moreno, Juana; Jarrell, Mark; Mahmoudian, Samiyeh; Dobrosavljević, Vladimir

    2014-07-01

    We develop a cluster typical medium theory to study localization in disordered electronic systems. Our formalism is able to incorporate non-local correlations beyond the local typical medium theory in a systematic way. The cluster typical medium theory utilizes the momentum-resolved typical density of states and hybridization function to characterize the localization transition. We apply the formalism to the Anderson model of localization in one- and two-dimensions. In one-dimension, we find that the critical disorder strength scales inversely with the linear cluster size with a power law, W_c ~ (1/L_c)^(1/ν), whereas in two-dimensions, the critical disorder strength decreases logarithmically with the linear cluster size. Our results are consistent with previous numerical work and are in agreement with the one-parameter scaling theory.
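
    The one-dimensional scaling relation quoted above, W_c ~ (1/L_c)^(1/ν), can be checked with a simple log-log fit; the (L_c, W_c) pairs below are synthetic placeholders generated for illustration, not data from the paper.

    import numpy as np

    # Synthetic (L_c, W_c) pairs roughly following W_c ~ (1/L_c)^(1/nu); placeholders only.
    rng = np.random.default_rng(1)
    l_c = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
    w_c = 8.0 * (1.0 / l_c) ** 0.45 * (1.0 + 0.02 * rng.normal(size=l_c.size))

    # Linear fit in log-log space: log W_c = const + (1/nu) * log(1/L_c).
    slope, intercept = np.polyfit(np.log(1.0 / l_c), np.log(w_c), 1)
    print("estimated 1/nu =", round(slope, 3))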

  7. Effects of ongoing task context and target typicality on prospective memory performance: the importance of associative cueing

    Science.gov (United States)

    Nowinski, Jessica Lang; Dismukes, Key R.

    2005-01-01

    Two experiments examined whether prospective memory performance is influenced by contextual cues. In our automatic activation model, any information available at encoding and retrieval should aid recall of the prospective task. The first experiment demonstrated an effect of the ongoing task context; performance was better when information about the ongoing task present at retrieval was available at encoding. Performance was also improved by a strong association between the prospective memory target as it was presented at retrieval and the intention as it was encoded. Experiment 2 demonstrated boundary conditions of the ongoing task context effect, which implicate the association between the ongoing and prospective tasks formed at encoding as the source of the context effect. The results of this study are consistent with predictions based on automatic activation of intentions.

  8. Computational fluid dynamics modeling of rope-guided conveyances in two typical kinds of shaft layouts.

    Directory of Open Access Journals (Sweden)

    Renyuan Wu

    Full Text Available The behavior of rope-guided conveyances is so complicated that the rope-guided hoisting system has not yet been understood thoroughly. In this paper, with user-defined functions loaded, ANSYS FLUENT 14.5 was employed to simulate the lateral motion of rope-guided conveyances in two typical kinds of shaft layouts. With a rope-guided mine elevator and mine cages taken into account, the results show that the lateral aerodynamic buffeting force is much larger than the Coriolis force, and the side aerodynamic force has the same order of magnitude as the Coriolis force. The lateral aerodynamic buffeting forces should also be considered, especially when the conveyance moves along the ventilation air direction. The simulation also shows that the closer size of the conveyances can weaken the transverse aerodynamic buffeting effect.

  9. A Model Performance

    Science.gov (United States)

    Thornton, Bradley D.; Smalley, Robert A.

    2008-01-01

    Building information modeling (BIM) uses three-dimensional modeling concepts, information technology and interoperable software to design, construct and operate a facility. However, BIM can be more than a tool for virtual modeling--it can provide schools with a 3-D walkthrough of a project while it still is on the electronic drawing board. BIM can…

  10. Typical reconstruction performance for distributed compressed sensing based on ℓ2,1-norm regularized least square and Bayesian optimal reconstruction: influences of noise

    Science.gov (United States)

    Shiraki, Yoshifumi; Kabashima, Yoshiyuki

    2016-06-01

    A signal model called joint sparse model 2 (JSM-2) or the multiple measurement vector problem, in which all sparse signals share their support, is important for dealing with practical signal processing problems. In this paper, we investigate the typical reconstruction performance of noisy measurement JSM-2 problems for ℓ2,1-norm regularized least square reconstruction and the Bayesian optimal reconstruction scheme in terms of mean square error. Employing the replica method, we show that these schemes, which exploit the knowledge of the sharing of the signal support, can recover the signals more precisely as the number of channels increases. In addition, we compare the reconstruction performance of two different ensembles of observation matrices: one is composed of independent and identically distributed random Gaussian entries and the other is designed so that row vectors are orthogonal to one another. As reported for the single-channel case in earlier studies, our analysis indicates that the latter ensemble offers better performance than the former ones for the noisy JSM-2 problem. The results of numerical experiments with a computationally feasible approximation algorithm we developed for this study agree with the theoretical estimation.
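
    As background for the ℓ2,1-regularized scheme analysed above, the sketch below solves the multiple-measurement-vector problem min_X 0.5*||Y - AX||_F^2 + λ*||X||_{2,1} by proximal gradient descent (row-wise soft thresholding); the problem sizes, noise level, and λ are hypothetical, and this is an illustrative solver, not the replica analysis or the authors' approximation algorithm.

    import numpy as np

    def row_soft_threshold(x, tau):
        """Proximal operator of tau * ||X||_{2,1}: shrink the l2 norm of each row."""
        norms = np.linalg.norm(x, axis=1, keepdims=True)
        scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
        return scale * x

    def mmv_l21(a, y, lam, n_iter=500):
        """Proximal-gradient solver for 0.5*||Y - A X||_F^2 + lam*||X||_{2,1}."""
        step = 1.0 / np.linalg.norm(a, 2) ** 2      # 1 / Lipschitz constant of the gradient
        x = np.zeros((a.shape[1], y.shape[1]))
        for _ in range(n_iter):
            grad = a.T @ (a @ x - y)
            x = row_soft_threshold(x - step * grad, step * lam)
        return x

    rng = np.random.default_rng(0)
    n, m, channels, k = 64, 128, 4, 8               # hypothetical problem sizes
    a = rng.normal(size=(n, m)) / np.sqrt(n)
    support = rng.choice(m, size=k, replace=False)  # shared support across channels (JSM-2)
    x_true = np.zeros((m, channels))
    x_true[support] = rng.normal(size=(k, channels))
    y = a @ x_true + 0.01 * rng.normal(size=(n, channels))
    print("MSE:", float(np.mean((mmv_l21(a, y, lam=0.05) - x_true) ** 2)))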

  11. Performance of Information Criteria for Spatial Models.

    Science.gov (United States)

    Lee, Hyeyoung; Ghosh, Sujit K

    2009-01-01

    Model choice is one of the most crucial aspects of any statistical data analysis. It is well known that most models are just an approximation to the true data-generating process, but among such approximations it is our goal to select the "best" one. Researchers typically consider a finite number of plausible models in statistical applications, and the related statistical inference depends on the chosen model. Hence model comparison is required to identify the "best" model among several such candidate models. This article considers the problem of model selection for spatial data. The issue of model selection for spatial models has been addressed in the literature by the use of traditional information criteria based methods, even though such criteria have been developed based on the assumption of independent observations. We evaluate the performance of some of the popular model selection criteria via Monte Carlo simulation experiments using small to moderate samples. In particular, we compare the performance of some of the most popular information criteria, such as the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), and Corrected AIC (AICc), in selecting the true model. The ability of these criteria to select the correct model is evaluated under several scenarios. This comparison is made using various spatial covariance models ranging from stationary isotropic to nonstationary models.
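
    For reference, the criteria compared above are simple functions of the maximized log-likelihood, the number of parameters, and the sample size; the sketch below only encodes the textbook definitions, and the two candidate "fits" are hypothetical numbers used for illustration.

    import math

    def aic(log_lik: float, k: int) -> float:
        return 2 * k - 2 * log_lik

    def bic(log_lik: float, k: int, n: int) -> float:
        return k * math.log(n) - 2 * log_lik

    def aicc(log_lik: float, k: int, n: int) -> float:
        # Small-sample correction to AIC; requires n > k + 1.
        return aic(log_lik, k) + 2 * k * (k + 1) / (n - k - 1)

    # Hypothetical fits of two spatial covariance models to n = 50 sites.
    for name, ll, k in [("exponential", -112.3, 3), ("Matern", -110.9, 4)]:
        print(name, round(aic(ll, k), 1), round(bic(ll, k, 50), 1), round(aicc(ll, k, 50), 1))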

  12. A Structural Equation Model of the Writing Process in Typically Developing Sixth Grade Children

    Science.gov (United States)

    Koutsoftas, Anthony D.

    2010-01-01

    Educational reform initiatives of the last decade have focused on the three R's: reading, writing, and arithmetic, with writing receiving the least attention in the research literature (National Commission on Writing, 2003). Studies of writing performance in United States schoolchildren indicate that many are writing only at basic levels. The…

  13. A Typical Model Audit Approach: Spreadsheet Audit Methodologies in the City of London

    CERN Document Server

    Croll, Grenville J

    2003-01-01

    Spreadsheet audit and review procedures are an essential part of almost all City of London financial transactions. Structured processes are used to discover errors in large financial spreadsheets underpinning major transactions of all types. Serious errors are routinely found and are fed back to model development teams generally under conditions of extreme time urgency. Corrected models form the essence of the completed transaction and firms undertaking model audit and review expose themselves to significant financial liability in the event of any remaining significant error. It is noteworthy that in the United Kingdom, the management of spreadsheet error is almost unheard of outside of the City of London despite the commercial ubiquity of the spreadsheet.

  14. A Structural Equation Model of the Writing Process in Typically-Developing Sixth Grade Children

    Science.gov (United States)

    Koutsoftas, Anthony D.; Gray, Shelley

    2013-01-01

    The purpose of this study was to evaluate how sixth grade children planned, translated, and revised written narrative stories using a task reflecting current instructional and assessment practices. A modified version of the Hayes and Flower (1980) writing process model was used as the theoretical framework for the study. Two hundred one…

  15. General model of wood in typical coupled tasks. Part I. – Phenomenological approach

    Directory of Open Access Journals (Sweden)

    Petr Koňas

    2008-01-01

    Full Text Available The main aim of this work is FE modeling of wood structure. This task is conditioned mainly by differently organized structures/regions (tissues, anomalies...) and leads to a homogenization process for the multiphysics formulation of common scientific and engineering problems. The crucial role in this paper is played by the derivation of a coefficient form of the general PDE that is solvable by present-day numerical solvers. The generality of the proposed model is given by the wide range of coupled physical fields included in the model. The approach summarizes and brings together models for the various fields of matter and energy relevant to wood during the drying process, but it is also suitable for many different tasks involving similar materials. Specifically, microwave drying of wood with orthotropic, visco-elastic material properties, together with the time, moisture and temperature dependency of structural strains through modified mechanical properties, is included. Specific matrices of elasticity for the individual fields were derived. The thermal field in wood was described by conduction-type spreading. The coupling of the physical fields is based on the diffusive character of the movement of the temperature, moisture and static pressure fields.

  16. Principles of Sonar Performance Modeling

    NARCIS (Netherlands)

    Ainslie, M.A.

    2010-01-01

    Sonar performance modelling (SPM) is concerned with the prediction of quantitative measures of sonar performance, such as probability of detection. It is a multidisciplinary subject, requiring knowledge and expertise in the disparate fields of underwater acoustics, acoustical oceanography, sonar sig

  17. Structural Analysis of Cuban Typical Model of Telecommunication Self-Supporting Towers under Seismic Load

    Directory of Open Access Journals (Sweden)

    Patricia Martín Rodríguez

    2012-07-01

    Full Text Available Self-supporting lattice towers are slender structures with low damping and high flexibility. They are sensitive to dynamic loads such as wind and earthquake. In the west of Cuba, structures should be analyzed under extreme winds and seismic effects, whose frequency increased during 2010. Self-supporting towers do not have the same dynamic behavior as buildings under seismic loads. Their specific structural features are not covered by seismic design codes; for that reason it is necessary to study methods of seismic analysis for self-supporting towers. The methods selected in this research are the Modal Analysis Method proposed by the Cuban Seismic Code (NC-46:1999) and modal superposition linear dynamic analysis, known as Time History. Two self-supporting towers designed in Cuba were selected for the study, the Versalles model (3-legged) and the Najasa model (4-legged). A comparative analysis between both methods was carried out on the extreme values of the internal forces in the tower elements.

  18. Digital Troposcatter Performance Model

    Science.gov (United States)

    1983-12-01

    These performance measures require a complete statistical description of the components of the detection variable with respect to the BER threshold P_thr. Let Γ denote the region of the 5-dimensional space (γ, ȳ) in which the BER exceeds P_thr: Γ = {(γ, ȳ): P_e(γ, ȳ) > P_thr}. This region is determined by solving the nonlinear equation P_e(γ, ȳ) = P_thr. A closed-form expression for P_out(γ) cannot be obtained. Instead we developed an

  19. Magnesium degradation influenced by buffering salts in concentrations typical of in vitro and in vivo models.

    Science.gov (United States)

    Agha, Nezha Ahmad; Feyerabend, Frank; Mihailova, Boriana; Heidrich, Stefanie; Bismayer, Ulrich; Willumeit-Römer, Regine

    2016-01-01

    Magnesium and its alloys have considerable potential for orthopedic applications. During the degradation process the interface between material and tissue is continuously changing. Moreover, too fast or uncontrolled degradation is detrimental for the outcome in vivo. Therefore in vitro setups utilizing physiological conditions are promising for the material/degradation analysis prior to animal experiments. The aim of this study is to elucidate the influence of inorganic salts contributing to the blood buffering capacity on degradation. Extruded pure magnesium samples were immersed under cell culture conditions for 3 and 10 days. Hank's balanced salt solution without calcium and magnesium (HBSS) plus 10% of fetal bovine serum (FBS) was used as the basic immersion medium. Additionally, different inorganic salts were added with respect to concentration in Dulbecco's modified Eagle's medium (DMEM, in vitro model) and human plasma (in vivo model) to form 12 different immersion media. Influences on the surrounding environment were observed by measuring pH and osmolality. The degradation interface was analyzed by electron-induced X-ray emission (EIXE) spectroscopy, including chemical-element mappings and electron microprobe analysis, as well as Fourier transform infrared reflection micro-spectroscopy (FTIR).

  20. Global Modeling of Nebulae with Particle Growth, Drift and Evaporation Fronts. I: Methodology and Typical Results

    CERN Document Server

    Estrada, Paul R; Morgan, Demitri A

    2015-01-01

    We model particle growth in a turbulent, viscously evolving protoplanetary nebula, incorporating sticking, bouncing, fragmentation, and mass transfer at high speeds. We treat small particles using a moments method and large particles using a traditional histogram binning, including a probability distribution function of collisional velocities. The fragmentation strength of the particles depends on their composition (icy aggregates are stronger than silicate aggregates). The particle opacity, which controls the nebula thermal structure, evolves as particles grow and mass redistributes. While growing, particles drift radially due to nebula headwind drag. Particles of different compositions evaporate at "evaporation fronts" (EFs) where the midplane temperature exceeds their respective evaporation temperatures. We track the vapor and solid phases of each component, accounting for advection and radial and vertical diffusion. We present characteristic results in evolutions lasting $2 \\times 10^5$ years. In general,...

  1. Magnesium degradation influenced by buffering salts in concentrations typical of in vitro and in vivo models

    Energy Technology Data Exchange (ETDEWEB)

    Agha, Nezha Ahmad; Feyerabend, Frank [Helmholtz-Zentrum Geesthacht, Institute of Material Research, Division of Metallic Biomaterials, Max-Planck-Str. 1, 21502 Geesthacht (Germany); Mihailova, Boriana; Heidrich, Stefanie; Bismayer, Ulrich [University of Hamburg, Department of Earth Sciences, Grindelallee 48, 20146 Hamburg (Germany); Willumeit-Römer, Regine [Helmholtz-Zentrum Geesthacht, Institute of Material Research, Division of Metallic Biomaterials, Max-Planck-Str. 1, 21502 Geesthacht (Germany)

    2016-01-01

    Magnesium and its alloys have considerable potential for orthopedic applications. During the degradation process the interface between material and tissue is continuously changing. Moreover, too fast or uncontrolled degradation is detrimental for the outcome in vivo. Therefore in vitro setups utilizing physiological conditions are promising for the material/degradation analysis prior to animal experiments. The aim of this study is to elucidate the influence of inorganic salts contributing to the blood buffering capacity on degradation. Extruded pure magnesium samples were immersed under cell culture conditions for 3 and 10 days. Hank's balanced salt solution without calcium and magnesium (HBSS) plus 10% of fetal bovine serum (FBS) was used as the basic immersion medium. Additionally, different inorganic salts were added with respect to concentration in Dulbecco's modified Eagle's medium (DMEM, in vitro model) and human plasma (in vivo model) to form 12 different immersion media. Influences on the surrounding environment were observed by measuring pH and osmolality. The degradation interface was analyzed by electron-induced X-ray emission (EIXE) spectroscopy, including chemical-element mappings and electron microprobe analysis, as well as Fourier transform infrared reflection micro-spectroscopy (FTIR). - Highlights: • Influence of blood buffering salts on magnesium degradation was studied. • CaCl₂ reduced the degradation rate by Ca–PO₄ layer formation. • MgSO₄ influenced the morphology of the degradation interface. • NaHCO₃ induced the formation of MgCO₃ as a degradation product.

  2. Underlying Dynamics of Typical Fluctuations of an Emerging Market Price Index: The Heston Model from Minutes to Months

    CERN Document Server

    Vicente, R; Leite, V B P; Caticha, N; Vicente, Renato; Toledo, Charles M. de; Leite, Vitor B.P.; Caticha, Nestor

    2006-01-01

    We investigate the Heston model with stochastic volatility and exponential tails as a model for the typical price fluctuations of the Brazilian São Paulo Stock Exchange Index (IBOVESPA). Raw prices are first corrected for inflation and a period spanning 15 years characterized by memoryless returns is chosen for the analysis. Model parameters are estimated by observing volatility scaling and correlation properties. We show that the Heston model with at least two time scales for the volatility mean reverting dynamics satisfactorily describes price fluctuations ranging from time scales larger than 20 minutes to 160 days. At time scales shorter than 20 minutes we observe autocorrelated returns and power law tails incompatible with the Heston model. Despite major regulatory changes, hyperinflation and currency crises experienced by the Brazilian market in the period studied, the general success of the description provided may be regarded as an evidence for a general underlying dynamics of price fluctuations at i...

  3. Techniques for mass resolution improvement achieved by typical plasma mass analyzers: Modeling and simulations

    Science.gov (United States)

    Nicolaou, Georgios; Yamauchi, Masatoshi; Wieser, Martin; Barabash, Stas; Fedorov, Andrei

    2016-04-01

    Mass separation, and particularly the distinction between atomic and molecular ions, is essential in understanding a wide range of plasma environments, each consisting of different species with various properties. In this study we present the optimization results of light-weight (about 2 kg) magnetic mass analyzers with high g-factor for Rosetta (Ion Composition Analyser: ICA) and for Mars Express and Venus Express (Ion Mass Analyser: IMA). For the instrument's optimization we use SIMION, a 3D ion tracing software in which we can trace particle beams of several energies and directions passing through the instrument's units. We first reproduced ICA and IMA results, which turned out to be different from simple models at low energy (< 100 eV). We then change the mechanical structure of several units of the instrument and quantify the new mass resolution achieved with each change. Our goal is to find the optimal instrument structure, which will allow us to achieve a mass resolution sufficient to distinguish atomic nitrogen from atomic oxygen for the purposes of a future magnetospheric mission.

  4. GLOBAL MODELING OF NEBULAE WITH PARTICLE GROWTH, DRIFT, AND EVAPORATION FRONTS. I. METHODOLOGY AND TYPICAL RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Estrada, Paul R. [Carl Sagan Center, SETI Institute, 189 N. Bernardo Avenue # 100, Mountain View, CA 94043 (United States); Cuzzi, Jeffrey N. [Ames Research Center, NASA, Mail Stop 245-3, Moffett Field, CA 94035 (United States); Morgan, Demitri A., E-mail: Paul.R.Estrada@nasa.gov [USRA, NASA Ames Research Center, Mail Stop 245-3, Moffett Field, CA 94035 (United States)

    2016-02-20

    We model particle growth in a turbulent, viscously evolving protoplanetary nebula, incorporating sticking, bouncing, fragmentation, and mass transfer at high speeds. We treat small particles using a moments method and large particles using a traditional histogram binning, including a probability distribution function of collisional velocities. The fragmentation strength of the particles depends on their composition (icy aggregates are stronger than silicate aggregates). The particle opacity, which controls the nebula thermal structure, evolves as particles grow and mass redistributes. While growing, particles drift radially due to nebula headwind drag. Particles of different compositions evaporate at “evaporation fronts” (EFs) where the midplane temperature exceeds their respective evaporation temperatures. We track the vapor and solid phases of each component, accounting for advection and radial and vertical diffusion. We present characteristic results in evolutions lasting 2 × 10^5 years. In general, (1) mass is transferred from the outer to the inner nebula in significant amounts, creating radial concentrations of solids at EFs; (2) particle sizes are limited by a combination of fragmentation, bouncing, and drift; (3) “lucky” large particles never represent a significant amount of mass; and (4) restricted radial zones just outside each EF become compositionally enriched in the associated volatiles. We point out implications for millimeter to submillimeter SEDs and the inference of nebula mass, radial banding, the role of opacity on new mechanisms for generating turbulence, the enrichment of meteorites in heavy oxygen isotopes, variable and nonsolar redox conditions, the primary accretion of silicate and icy planetesimals, and the makeup of Jupiter’s core.

  5. Global Modeling of Nebulae with Particle Growth, Drift, and Evaporation Fronts. I. Methodology and Typical Results

    Science.gov (United States)

    Estrada, Paul R.; Cuzzi, Jeffrey N.; Morgan, Demitri A.

    2016-02-01

    We model particle growth in a turbulent, viscously evolving protoplanetary nebula, incorporating sticking, bouncing, fragmentation, and mass transfer at high speeds. We treat small particles using a moments method and large particles using a traditional histogram binning, including a probability distribution function of collisional velocities. The fragmentation strength of the particles depends on their composition (icy aggregates are stronger than silicate aggregates). The particle opacity, which controls the nebula thermal structure, evolves as particles grow and mass redistributes. While growing, particles drift radially due to nebula headwind drag. Particles of different compositions evaporate at “evaporation fronts” (EFs) where the midplane temperature exceeds their respective evaporation temperatures. We track the vapor and solid phases of each component, accounting for advection and radial and vertical diffusion. We present characteristic results in evolutions lasting 2 × 10^5 years. In general, (1) mass is transferred from the outer to the inner nebula in significant amounts, creating radial concentrations of solids at EFs; (2) particle sizes are limited by a combination of fragmentation, bouncing, and drift; (3) “lucky” large particles never represent a significant amount of mass; and (4) restricted radial zones just outside each EF become compositionally enriched in the associated volatiles. We point out implications for millimeter to submillimeter SEDs and the inference of nebula mass, radial banding, the role of opacity on new mechanisms for generating turbulence, the enrichment of meteorites in heavy oxygen isotopes, variable and nonsolar redox conditions, the primary accretion of silicate and icy planetesimals, and the makeup of Jupiter’s core.

  6. Statistical modeling of program performance

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    Full Text Available The task of evaluating program performance often arises in the design of computer systems or during iterative compilation. A traditional way to solve this problem is emulation of program execution on the target system. A modern alternative approach to evaluating program performance is based on statistical modeling of program performance on the computer under investigation. This statistical method of modeling program performance, called Velocitas, is introduced in this work. The method and its implementation in the Adaptor framework are presented. An investigation of the method's effectiveness showed high adequacy of the program performance prediction.

  7. Schizophrenic patients treated with clozapine or olanzapine perform better on theory of mind tasks than those treated with risperidone or typical antipsychotic medications.

    Science.gov (United States)

    Savina, Ioulia; Beninger, Richard J

    2007-08-01

    Theory of mind (ToM), the ability to attribute mental states to others, is associated with medial prefrontal cortical (mPFC) activity and is impaired in schizophrenia. Olanzapine or clozapine but not typical antipsychotics or risperidone preferentially affect c-fos expression in mPFC in animals. We tested the hypothesis that schizophrenic patients treated with different antipsychotics would perform differently on ToM tasks. Groups receiving Typicals (n=23), Clozapine (n=18), Olanzapine (n=20) or Risperidone (n=23) and a Control group of healthy volunteers (n=24) were matched for age, gender, handedness and education. ToM functioning was assessed with picture sequence, second-order belief and faux-pas tests. Schizophrenic groups performed similarly to controls on non-ToM conditions. The Olanzapine and Clozapine groups performed similarly to Controls on ToM tasks. The Typicals and Risperidone groups performed worse than the other groups on ToM tasks. We concluded that ToM performance of schizophrenic patients is influenced by the antipsychotic they are taking. Our results suggest that olanzapine or clozapine but not typicals or risperidone may improve or protect ToM ability.

  8. The effects of typical and atypical antipsychotics on the electrical activity of the brain in a rat model

    Directory of Open Access Journals (Sweden)

    Oytun Erbaş

    2013-09-01

    Full Text Available Objective: Antipsychotic drugs are known to have a strong effect on the bioelectric activity of the brain. However, some studies addressing the changes on electroencephalography (EEG) caused by typical and atypical antipsychotic drugs are conflicting. We aimed to compare the effects of typical and atypical antipsychotics on the electrical activity of the brain via EEG recordings in a rat model. Methods: Thirty-two Sprague Dawley adult male rats were used in the study. The rats were divided into five groups randomly (n=7 for each group). The first group was used as control group and administered 1 ml/kg saline intraperitoneally (IP). Haloperidol (1 mg/kg, group 2), chlorpromazine (5 mg/kg, group 3), olanzapine (1 mg/kg, group 4) and ziprasidone (1 mg/kg, group 5) were injected IP for five consecutive days. Then, EEG recordings of each group were taken for 30 minutes. Results: The percentages of delta and theta waves in the haloperidol, chlorpromazine, olanzapine and ziprasidone groups were found to have a highly significant difference compared with the saline administration group (p<0.001). The theta waves in the olanzapine and ziprasidone groups were increased compared with the haloperidol and chlorpromazine groups (p<0.05). Conclusion: The typical and atypical antipsychotic drugs may be a risk factor for EEG abnormalities. This study shows that antipsychotic drugs should be used with caution. J Clin Exp Invest 2013; 4 (3): 279-284. Key words: Haloperidol, chlorpromazine, olanzapine, ziprasidone, EEG, rat

  9. A Study of Number Sense Performance among Low-SES Students, New Immigrant Children, and Typical Learners in Grades Four through Six

    Science.gov (United States)

    Chen, Pei-Chieh; Li, Mao-Neng; Yang, Der-Ching

    2015-01-01

    To examine the relative performance in number sense among low-SES students, new immigrant students, and typical learners in grades 4 through 6, data were collected through a number sense web-based, two-tier test. A total of 628 fourth graders, 535 fifth graders, and 524 sixth graders in Taiwan participated in this study. Results showed that there…

  10. Underlying dynamics of typical fluctuations of an emerging market price index: The Heston model from minutes to months

    Science.gov (United States)

    Vicente, Renato; de Toledo, Charles M.; Leite, Vitor B. P.; Caticha, Nestor

    2006-02-01

    We investigate the Heston model with stochastic volatility and exponential tails as a model for the typical price fluctuations of the Brazilian São Paulo Stock Exchange Index (IBOVESPA). Raw prices are first corrected for inflation and a period spanning 15 years characterized by memoryless returns is chosen for the analysis. Model parameters are estimated by observing volatility scaling and correlation properties. We show that the Heston model with at least two time scales for the volatility mean reverting dynamics satisfactorily describes price fluctuations ranging from time scales larger than 20 min to 160 days. At time scales shorter than 20 min we observe autocorrelated returns and power law tails incompatible with the Heston model. Despite major regulatory changes, hyperinflation and currency crises experienced by the Brazilian market in the period studied, the general success of the description provided may be regarded as an evidence for a general underlying dynamics of price fluctuations at intermediate mesoeconomic time scales well approximated by the Heston model. We also notice that the connection between the Heston model and Ehrenfest urn models could be exploited for bringing new insights into the microeconomic market mechanics.
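
    A minimal Euler-Maruyama sketch of the Heston dynamics discussed above, dS = mu*S*dt + sqrt(v)*S*dW1 and dv = kappa*(theta - v)*dt + xi*sqrt(v)*dW2 with correlated Brownian drivers; all parameter values below are hypothetical, and the paper's fitted multi-time-scale volatility dynamics is not reproduced here.

    import numpy as np

    def simulate_heston(s0=1.0, v0=0.04, mu=0.05, kappa=2.0, theta=0.04,
                        xi=0.3, rho=-0.5, dt=1.0 / 252.0, n_steps=5000, seed=0):
        """Euler-Maruyama simulation of the Heston model (variance floored at zero)."""
        rng = np.random.default_rng(seed)
        s, v = np.empty(n_steps + 1), np.empty(n_steps + 1)
        s[0], v[0] = s0, v0
        for t in range(n_steps):
            z1 = rng.normal()
            z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.normal()   # correlated drivers
            v_pos = max(v[t], 0.0)
            v[t + 1] = v[t] + kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
            s[t + 1] = s[t] * np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        return s, v

    prices, variances = simulate_heston()
    returns = np.diff(np.log(prices))
    print(returns.std(), variances.mean())   # sample volatility vs. mean-reverting variance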

  11. The loudspeaker as musical instrument: An examination of the issues surrounding loudspeaker performance of music in typical rooms

    Science.gov (United States)

    Moulton, David

    2003-04-01

    The loudspeaker is the most important and one of the most variable elements in the electroacoustic music performance process. Nonetheless, its performance is subject to a "willing suspension of disbelief" by listeners, and its behavior and variability are usually not accounted for in assessments of the quality of music reproduction or music instrument synthesis, especially as they occur in small rooms. This paper will examine the aesthetic assumptions underlying loudspeaker usage, the general timbral qualities and sonic characteristics of loudspeakers, and some of the issues and problems inherent in loudspeakers' interactions with small rooms and listeners.

  12. A GIS-BASED DISTRIBUTED SOIL EROSION MODEL:A CASE STUDY OF TYPICAL WATERSHED, SICHUAN BASIN

    Institute of Scientific and Technical Information of China (English)

    Zaijian YUAN; Qiangguo CAI; Yingmin CHU

    2007-01-01

    Based on measured data and Digital Elevation Model (DEM) data for a typical watershed--Hemingguan Watershed, Nanbu County, Sichuan Province of China--a GIS-based distributed soil erosion model was developed particularly for the purple soil type. It takes a 20 m × 20 m grid as the calculating unit and operates at a 10-minute time interval. The required input data to the model include DEM, soil, land use, and time series of precipitation and evaporation loss. The model enables one to estimate runoff, erosion and sediment yield for each grid cell and route the flow along its flow path to the watershed outlet. Furthermore, the model is capable of calculating the total runoff, erosion and sediment yield for the entire watershed by a recursion algorithm. The validation of the model demonstrated that it can quantitatively simulate the spatial distribution of hydrological variables in a watershed, such as runoff, vegetation entrapment, soil erosion, and the degree of soil and water loss. Moreover, it can evaluate the effect of land use change on runoff generation and soil erosion with accuracies of 80% and 75% respectively. The application of this model to a neighboring watershed with similar conditions indicates that this distributed model could be extended to other similar regions in China.

  13. MODELING SUPPLY CHAIN PERFORMANCE VARIABLES

    Directory of Open Access Journals (Sweden)

    Ashish Agarwal

    2005-01-01

    Full Text Available In order to understand the dynamic behavior of the variables that can play a major role in performance improvement in a supply chain, a System Dynamics-based model is proposed. The model provides an effective framework for analyzing the different variables affecting supply chain performance. A causal relationship among the different variables has been identified. Variables emanating from performance measures such as gaps in customer satisfaction, cost minimization, lead-time reduction, service level improvement and quality improvement have been identified as goal-seeking loops. The proposed System Dynamics-based model analyzes the effect of the dynamic behavior of the variables on the performance of the case supply chain in the auto business over a period of 10 years.

  14. Using AGWA and the KINEROS2 Model to Model Green Infrastructure in Two Typical Residential Lots in Prescott, AZ

    Science.gov (United States)

    The Automated Geospatial Watershed Assessment (AGWA) Urban tool provides a step-by-step process to model subdivisions using the KINEROS2 model, with and without Green Infrastructure (GI) practices. AGWA utilizes the Kinematic Runoff and Erosion (KINEROS2) model, an event driven, ...

  15. Effects of Temperature and Method of Solution Preparation on the Performance of a Typical Red Mud Flocculent

    Science.gov (United States)

    Ferland, Pierre; Malito, John T.; Phillips, Everett C.

    Alcan International Ltd. in collaboration with Ondeo Nalco Company have carried out a fundamental study on the dissolution and performance of a 100% anionic polymer. The effects of method of preparation, solvent composition, temperature and exposure time on flocculent activity under conditions relevant to both atmospheric and pressure decantation were investigated. Flocculent activity was determined using static and dynamic settling tests, and the results were correlated with the reduced specific viscosity (RSV). For any given method of preparation of the flocculent solutions (makeup/dilution) the RSV tended to decrease with increasing solution ionic strength, independent of ionic speciation. While a significant loss in flocculent activity occurred with long exposure of the solution to high temperature, only a minor loss occurred in the short time required to flocculate and settle the mud in a decanter operating at 150 °C. Recent results in an actual plant pressure decanter appear to validate this conclusion.

  16. Air Conditioner Compressor Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Ning; Xie, YuLong; Huang, Zhenyu

    2008-09-05

    During the past three years, the Western Electricity Coordinating Council (WECC) Load Modeling Task Force (LMTF) has led the effort to develop the new modeling approach. As part of this effort, the Bonneville Power Administration (BPA), Southern California Edison (SCE), and Electric Power Research Institute (EPRI) Solutions tested 27 residential air-conditioning units to assess their response to delayed voltage recovery transients. After completing these tests, different modeling approaches were proposed, among them a performance modeling approach that proved to be one of the three favored for its simplicity and ability to recreate different SVR events satisfactorily. Funded by the California Energy Commission (CEC) under its load modeling project, researchers at Pacific Northwest National Laboratory (PNNL) led the follow-on task to analyze the motor testing data to derive the parameters needed to develop a performance model for the single-phase air-conditioning (SPAC) unit. To derive the performance model, PNNL researchers first used the motor voltage and frequency ramping test data to obtain the real (P) and reactive (Q) power versus voltage (V) and frequency (f) curves. Then, curve fitting was used to develop the P-V, Q-V, P-f, and Q-f relationships for the motor running and stalling states. The resulting performance model ignores the dynamic response of the air-conditioning motor. Because the inertia of the air-conditioning motor is very small (H<0.05), the motor moves from one steady state to another in a few cycles. So, the performance model is a fair representation of the motor behavior in both running and stalling states.
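
    The P-V and Q-V relationships described above are typically obtained by fitting the measured steady-state points with low-order polynomials; the sketch below fits hypothetical running-state measurements and is only an illustration of that step, not the PNNL performance model itself.

    import numpy as np

    # Hypothetical steady-state measurements for the running state:
    # terminal voltage (per unit) versus real and reactive power (per unit).
    v = np.array([0.80, 0.85, 0.90, 0.95, 1.00, 1.05])
    p = np.array([1.06, 1.04, 1.02, 1.01, 1.00, 1.00])
    q = np.array([0.55, 0.48, 0.42, 0.38, 0.35, 0.34])

    # Quadratic fits P(V) and Q(V); the coefficients define a static performance model.
    p_coeffs = np.polyfit(v, p, 2)
    q_coeffs = np.polyfit(v, q, 2)

    def running_state(voltage_pu):
        """Evaluate the fitted performance model at a given terminal voltage."""
        return np.polyval(p_coeffs, voltage_pu), np.polyval(q_coeffs, voltage_pu)

    print(running_state(0.92))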

  17. Effect of the inclusion of dry pasta by-products at different levels in the diet of typical Italian finishing heavy pigs: Performance, carcass characteristics, and ham quality.

    Science.gov (United States)

    Prandini, A; Sigolo, S; Moschini, M; Giuberti, G; Morlacchini, M

    2016-04-01

    The effect of pasta inclusion in finishing pig diets was evaluated on growth performance, carcass characteristics, and ham quality. Pigs (144) were assigned to 4 diets with different pasta levels: 0 (control, corn-based diet), 30, 60, or 80%. Pigs fed pasta had greater (linear, PPasta increased (quadratic, PPasta decreased (linear, Ppasta. Pasta could be considered as an ingredient in the diet for typical Italian finishing heavy pigs.

  18. Causes and typical control model of wind-drift sandy lands in abandoned channel of the Yellow River

    Institute of Scientific and Technical Information of China (English)

    Zhang Guo-zhen; Yang Li; Xu Wei; Sun Bao-ping

    2006-01-01

    The historical formation and development of the abandoned channel of the Yellow River is reviewed, and its causes of formation and the present state of prevention and control are analyzed in this paper. Based on this analysis, some ideas about control, critical problems and countermeasures for the next period are proposed, with two typical control models as examples. We suggest that, in preventing and controlling the wind-drift sandy lands in the region, the emphasis should be on developing, with a greatly expanded effort, a recycling economy. This should combine two ideas, i.e., integrating desertification control with structural adjustment of agriculture and an increase in farmers' incomes.

  19. Modeling road-cycling performance.

    Science.gov (United States)

    Olds, T S; Norton, K I; Lowe, E L; Olive, S; Reay, F; Ly, S

    1995-04-01

    This paper presents a complete set of equations for a "first principles" mathematical model of road-cycling performance, including corrections for the effect of winds, tire pressure and wheel radius, altitude, relative humidity, rotational kinetic energy, drafting, and changed drag. The relevant physiological, biophysical, and environmental variables were measured in 41 experienced cyclists completing a 26-km road time trial. The correlation between actual and predicted times was 0.89 (P < …). The most important determinants of road-cycling performance are maximal O2 consumption, fractional utilization of maximal O2 consumption, mechanical efficiency, and projected frontal area. The model is then applied to some practical problems in road cycling: the effect of drafting, the advantage of using smaller front wheels, the effects of added mass, the importance of rotational kinetic energy, the effect of changes in drag due to changes in bicycle configuration, the normalization of performances under different conditions, and the limits of human performance.
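
    The record does not reproduce the model equations, but the core of any such first-principles model is a power balance between the rider and aerodynamic drag, rolling resistance and gravity. A hedged sketch with illustrative parameter values (the CdA, rolling resistance and efficiency figures are assumptions, not the paper's fitted values):

```python
import math

def cycling_power(v, grade=0.0, headwind=0.0, mass=75.0, CdA=0.36,
                  crr=0.004, rho=1.20, efficiency=0.95):
    """Rider power (W) needed to hold ground speed v (m/s); illustrative parameters."""
    g = 9.81
    v_air = v + headwind                      # air speed seen by the rider
    drag = 0.5 * rho * CdA * v_air ** 2       # aerodynamic drag force (N)
    rolling = crr * mass * g * math.cos(math.atan(grade))   # rolling resistance (N)
    climbing = mass * g * math.sin(math.atan(grade))        # gravity component (N)
    return (drag + rolling + climbing) * v / efficiency

print(round(cycling_power(11.1), 1))  # ~40 km/h on flat ground in still air
```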

  20. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new field.

  1. Modeling the Maturation of Grip Selection Planning and Action Representation: Insights from Typical and Atypical Motor Development.

    Directory of Open Access Journals (Sweden)

    Ian Fuelscher

    2016-02-01

    Full Text Available We investigated the purported association between developmental changes in grip selection planning and improvements in an individual’s capacity to represent action at an internal level (i.e., motor imagery). Participants were groups of healthy children aged 6-7 years and 8-12 years respectively, while a group of adolescents (13-17 years) and adults (18-34 years) allowed for consideration of childhood development in the broader context of motor maturation. A group of children aged 8-12 years with probable DCD (pDCD) was included as a reference group for atypical motor development. Participants’ proficiency to generate and/or engage internal action representations was inferred from performance on the hand rotation task, a well-validated measure of motor imagery. A grip selection task designed to elicit the end-state comfort (ESC) effect provided a window into the integrity of grip selection planning. Consistent with earlier accounts, the efficiency of grip selection planning followed a non-linear developmental progression in neurotypical individuals. As expected, analysis confirmed that these developmental improvements were predicted by an increased capacity to generate and/or engage internal action representations. The profile of this association remained stable throughout the (typical) developmental spectrum. These findings are consistent with computational accounts of action planning that argue that internal action representations are associated with the expression and development of grip selection planning across typical development. However, no such association was found for our sample of children with pDCD, suggesting that individuals with atypical motor skill may adopt an alternative, sub-optimal strategy to plan their grip selection compared to their same-age control peers.

  2. Modeling the Maturation of Grip Selection Planning and Action Representation: Insights from Typical and Atypical Motor Development

    Science.gov (United States)

    Fuelscher, Ian; Williams, Jacqueline; Wilmut, Kate; Enticott, Peter G.; Hyde, Christian

    2016-01-01

    We investigated the purported association between developmental changes in grip selection planning and improvements in an individual’s capacity to represent action at an internal level [i.e., motor imagery (MI)]. Participants were groups of healthy children aged 6–7 years and 8–12 years respectively, while a group of adolescents (13–17 years) and adults (18–34 years) allowed for consideration of childhood development in the broader context of motor maturation. A group of children aged 8–12 years with probable DCD (pDCD) was included as a reference group for atypical motor development. Participants’ proficiency to generate and/or engage internal action representations was inferred from performance on the hand rotation task, a well-validated measure of MI. A grip selection task designed to elicit the end-state comfort (ESC) effect provided a window into the integrity of grip selection planning. Consistent with earlier accounts, the efficiency of grip selection planning followed a non-linear developmental progression in neurotypical individuals. As expected, analysis confirmed that these developmental improvements were predicted by an increased capacity to generate and/or engage internal action representations. The profile of this association remained stable throughout the (typical) developmental spectrum. These findings are consistent with computational accounts of action planning that argue that internal action representations are associated with the expression and development of grip selection planning across typical development. However, no such association was found for our sample of children with pDCD, suggesting that individuals with atypical motor skill may adopt an alternative, sub-optimal strategy to plan their grip selection compared to their same-age control peers. PMID:26903915

  3. Coupling CFAST fire modeling and SAPHIRE probabilistic assessment software for internal fire safety evaluation of a typical TRIGA research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Safaei Arshi, Saiedeh [School of Engineering, Shiraz University, 71348-51154 Shiraz (Iran, Islamic Republic of); Nematollahi, Mohammadreza, E-mail: nema@shirazu.ac.i [School of Engineering, Shiraz University, 71348-51154 Shiraz (Iran, Islamic Republic of); Safety Research Center of Shiraz University, 71348-51154 Shiraz (Iran, Islamic Republic of); Sepanloo, Kamran [Safety Research Center of Shiraz University, 71348-51154 Shiraz (Iran, Islamic Republic of)

    2010-03-15

    Due to the significant threat of internal fires to the safe operation of nuclear reactors, presumed fire scenarios with potential hazards for loss of typical research reactor safety functions are analyzed by coupling CFAST fire modeling and SAPHIRE probabilistic assessment software. The investigations show that fire hazards associated with electrical cable insulation, lubricating oils, diesel, electrical equipment and carbon filters may lead to unsafe situations called core damage states. Using system-specific event trees, the occurrence frequency of core damage states after the occurrence of each possible fire scenario in critical fire compartments is evaluated. The probability that a fire ignited in a given fire compartment will burn long enough to cause the extent of damage defined by each fire scenario is calculated by means of a detection-suppression event tree. As part of the detection-suppression event tree quantification, and also for generating the necessary input data for evaluating the frequency of core damage states with the SAPHIRE 7.0 software, the CFAST fire modeling software is applied. The results provide a probabilistic measure of the quality of existing fire protection systems in order to maintain the reactor at a reasonable safety level.
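
    A toy illustration of how an event-tree quantification of this kind multiplies out to a core damage frequency; all branch values below are hypothetical and are not taken from the study:

```python
# Hypothetical fire scenario in one fire compartment; numbers are illustrative only.
ignition_frequency = 1.0e-2      # fires per reactor-year in the compartment
p_suppression_fails = 0.05       # from a detection-suppression event tree
p_safety_function_lost = 0.1     # conditional probability the fire damages needed equipment
ccdp = 1.0e-2                    # conditional core damage probability given that loss

core_damage_frequency = (ignition_frequency * p_suppression_fails
                         * p_safety_function_lost * ccdp)
print(f"core damage frequency ~ {core_damage_frequency:.1e} per reactor-year")
```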

  4. Evaluation of the AnnAGNPS Model for Predicting Runoff and Nutrient Export in a Typical Small Watershed in the Hilly Region of Taihu Lake

    Directory of Open Access Journals (Sweden)

    Chuan Luo

    2015-09-01

    Full Text Available The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both an annual and monthly scale, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, during both the calibration and validation processes. Additionally, the sensitivity analysis showed that the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic were the most sensitive for TN output. In terms of TP, the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover were the most sensitive. Calibration was then performed on these sensitive parameters. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance for TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.
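
    The record does not name the goodness-of-fit statistic, but the Nash-Sutcliffe efficiency is the usual choice for judging such runoff calibrations; a minimal sketch under that assumption, with illustrative monthly values:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; values <= 0 mean the model
    does no better than the mean of the observations."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Illustrative monthly runoff values (mm)
obs = [12.0, 30.5, 55.2, 80.1, 60.3, 25.4]
sim = [10.8, 28.0, 60.0, 75.5, 58.9, 27.1]
print(round(nash_sutcliffe(obs, sim), 3))
```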

  5. Data management system performance modeling

    Science.gov (United States)

    Kiser, Larry M.

    1993-01-01

    This paper discusses analytical techniques that have been used to gain a better understanding of the Space Station Freedom's (SSF's) Data Management System (DMS). The DMS is a complex, distributed, real-time computer system that has been redesigned numerous times. The implications of these redesigns have not been fully analyzed. This paper discusses the advantages and disadvantages of static analytical techniques such as Rate Monotonic Analysis (RMA) and also provides a rationale for dynamic modeling. Factors such as system architecture, processor utilization, bus architecture, queuing, etc. are well suited for analysis with a dynamic model. The significance of performance measures for a real-time system is discussed.
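
    As an example of the static analysis mentioned above, Rate Monotonic Analysis typically starts from the Liu and Layland utilization bound; a short sketch with hypothetical task parameters (not actual DMS figures):

```python
def rma_schedulable(tasks):
    """Liu & Layland sufficient test for rate-monotonic scheduling.
    tasks: list of (execution_time, period) pairs in the same time units."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound, utilization, bound

# Three hypothetical periodic tasks (C, T) in milliseconds
ok, u, bound = rma_schedulable([(10, 50), (15, 100), (20, 200)])
print(f"utilization={u:.2f}, bound={bound:.2f}, schedulable={ok}")
```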

  6. Evaluation of water conservation capacity of loess plateau typical mountain ecosystems based on InVEST model simulation

    Science.gov (United States)

    Lv, Xizhi; Zuo, Zhongguo; Xiao, Peiqing

    2017-06-01

    With increasing demand for water resources and a general deterioration of local water resources, water conservation by forests has received considerable attention in recent years. To evaluate the water conservation capacities of different forest ecosystems in mountainous areas of the Loess Plateau, the forest landscape was divided into 18 types. Taking into account factors such as climate, topography, vegetation, soil and land use, the water conservation of the forest ecosystems was estimated by means of the InVEST model. The results showed that 486417.7 hm² of forests in typical mountain areas were divided into 18 forest types, and the total water conservation quantity was 1.64×10¹² m³, equaling an average water conservation quantity of 9.09×10¹⁰ m³. There is a great difference in average water conservation capacity among the various forest types. The water conservation function and its evaluation are crucial and complicated issues in the study of ecological service functions.

  7. Performance modeling of optical refrigerators

    Energy Technology Data Exchange (ETDEWEB)

    Mills, G.; Mord, A. [Ball Aerospace and Technologies Corp., Boulder, CO (United States). Cryogenic and Thermal Engineering

    2006-02-15

    Optical refrigeration using anti-Stokes fluorescence in solids has several advantages over more conventional techniques including low mass, low volume, low cost and no vibration. It also has the potential of allowing miniature cryocoolers on the scale of a few cubic centimeters. It has been the topic of analysis and experimental work by several organizations. In 2003, we demonstrated the first optical refrigerator. We have developed a comprehensive system-level performance model of optical refrigerators. Our current version models the refrigeration cycle based on the fluorescent material emission and absorption data at ambient and reduced temperature for the Ytterbium-ZBLAN glass (Yb:ZBLAN) cooling material. It also includes the heat transfer into the refrigerator cooling assembly due to radiation and conduction. In this paper, we report on modeling results which reveal the interplay between size, power input, and cooling load. This interplay results in practical size limitations using Yb:ZBLAN. (author)
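
    The record gives no equations, but the ideal cooling efficiency of an anti-Stokes cooler follows from the pump and mean fluorescence wavelengths alone; a rough sketch with wavelengths typical of Yb:ZBLAN (illustrative values, not the paper's data):

```python
# Ideal anti-Stokes cooling efficiency: each absorbed pump photon (wavelength lam_pump)
# is re-emitted at the mean fluorescence wavelength lam_fluor, carrying away the
# difference as heat extracted from the solid.
lam_pump = 1015e-9   # m, pump wavelength (illustrative)
lam_fluor = 995e-9   # m, mean fluorescence wavelength (illustrative)

cooling_efficiency = lam_pump / lam_fluor - 1.0   # fraction of absorbed power removed as heat
absorbed_power = 5.0                              # W of absorbed pump light (hypothetical)
print(f"ideal heat lift ~ {cooling_efficiency * absorbed_power * 1e3:.0f} mW "
      f"({cooling_efficiency * 100:.1f}% of absorbed power)")
```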

  8. [Simultaneous determination of 22 typical pharmaceuticals and personal care products in environmental water using ultra performance liquid chromatography- triple quadrupole mass spectrometry].

    Science.gov (United States)

    Wu, Chunying; Gu, Feng; Bai, Lu; Lu, Wenlong

    2015-08-01

    An analytical method for simultaneous determination of 22 typical pharmaceuticals and personal care products (PPCPs) in environmental water samples was developed by ultra performance liquid chromatography-triple quadrupole mass spectrometry (UPLC-MS/MS). An Oasis HLB solid phase extraction cartridge, methanol as washing solution, and water containing 0.1% formic acid-methanol (7:3, v/v) as the mobile phase were selected for sample pretreatment and chromatographic separation. Based on the optimized sample pretreatment procedures and separation conditions, the target recoveries ranged from 73% to 125% in water with relative standard deviations (RSDs) from 8.8% to 17.5%, and the linear ranges were from 2 to 2000 µg/L with correlation coefficients (R²) not less than 0.997. The method can be applied to the simultaneous determination of the 22 typical PPCPs in environmental water samples because of its low detection limits and high recoveries. It can provide support for related research on water environmental risk assessment and control of micro-organic pollutants.

  9. Investigating the Effects of Typical Rowing Strength Training Practices on Strength and Power Development and 2,000 m Rowing Performance

    Directory of Open Access Journals (Sweden)

    Ian Gee Thomas

    2016-04-01

    Full Text Available This study aimed to determine the effects of a short-term, strength training intervention, typically undertaken by club-standard rowers, on 2,000 m rowing performance and strength and power development. Twenty-eight male rowers were randomly assigned to intervention or control groups. All participants performed baseline testing involving assessments of muscle soreness, creatine kinase activity (CK), maximal voluntary contraction of the leg-extensors (MVC), static-squat jumps (SSJ), counter-movement jumps (CMJ), maximal rowing power strokes (PS) and a 2,000 m rowing ergometer time-trial (2,000 m) with accompanying respiratory-exchange and electromyography (EMG) analysis. Intervention group participants subsequently performed three identical strength training (ST) sessions, in the space of five days, repeating all assessments 24 h following the final ST. The control group completed the same testing procedure but with no ST. Following ST, the intervention group experienced significant elevations in soreness and CK activity, and decrements in MVC, SSJ, CMJ and PS (p < 0.01). However, 2,000 m rowing performance, pacing strategy and gas exchange were unchanged across trials in either condition. Following ST, significant increases occurred for EMG (p < 0.05), and there were non-significant trends for decreased blood lactate and anaerobic energy liberation (p = 0.063 – 0.086). In summary, club-standard rowers, following an intensive period of strength training, maintained their 2,000 m rowing performance despite suffering symptoms of muscle damage and disruption to muscle function. This disruption likely reflected the presence of acute residual fatigue, potentially in type II muscle fibres as strength and power development were affected.

  10. Performance-oriented Organisation Modelling

    NARCIS (Netherlands)

    Popova, V.; Sharpanskykh, A.

    2006-01-01

    Each organisation exists or is created for the achievement of one or more goals. To ensure continued success, the organisation should monitor its performance with respect to the formulated goals. In practice the performance of an organisation is often evaluated by estimating its performance indicators.

  11. Assembly line performance and modeling

    National Research Council Canada - National Science Library

    Rane, Arun B; Sunnapwar, Vivek K

    2017-01-01

    Automobile sector forms the backbone of manufacturing sector. Vehicle assembly line is important section in automobile plant where repetitive tasks are performed one after another at different workstations...

  12. [Adaptability analysis of FAO Penman-Monteith model over typical underlying surfaces in the Sanjiang Plain, Northeast China].

    Science.gov (United States)

    Jia, Zhi-Jun; Han, Lin; Wang, Ge; Zhang, Tong-Shun

    2014-05-01

    Improving the accuracy of evapotranspiration (ET) estimation is very important for studying surface energy and water balance. Based on eddy covariance measurements and the available microclimate observational data, comparisons were made of the accuracy of simulating ET with the FAO Penman-Monteith model for the marshland, rice paddy and soybean field in the Sanjiang Plain. The results showed that the ET simulated with the model over the marshland was significantly higher than the measured value (on average 81.8% higher) when the crop coefficients recommended by FAO were adopted, and its modeling efficiency was negative, which indicated that ET from the marshland could not be simulated by the model. In contrast, the seasonal variation of ET over the rice paddy and soybean field could be simulated by the model, and the accuracy in simulating ET from the rice paddy was better than that from the soybean field. Crop coefficients (Kc) of the marshland, rice paddy and soybean field were all significantly positively related to leaf area index, and the crop coefficient of the soybean field was also significantly negatively related to vapor pressure deficit. With Kc modified through linear regression, the FAO Penman-Monteith model markedly improved the estimation accuracy for the marshland, rice paddy and soybean field, with the mean bias error ranging from -0.1 to 0.3 mm·d⁻¹, root mean square error ranging from 0.50 to 0.67 mm·d⁻¹ and modeling efficiency ranging from 0.69 to 0.85. Still, the accuracy in simulating ET from the rice paddy was superior to that from the other two underlying surfaces. The FAO Penman-Monteith model was suitable for simulating ET from the rice paddy whether the crop coefficient was modified or not. However, the crop coefficient must be modified if the model is used to simulate ET from the marshland and soybean field.
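
    A minimal sketch of the Kc-modification step described in the record: regress the back-calculated crop coefficient on leaf area index and use it to scale a reference ET. All numbers are illustrative, not the study's fitted coefficients:

```python
import numpy as np

# Illustrative seasonal data: observed leaf area index and Kc back-calculated from
# eddy-covariance ET divided by FAO-56 reference ET.
lai = np.array([0.5, 1.2, 2.0, 3.1, 4.0, 4.6])
kc_obs = np.array([0.45, 0.62, 0.80, 0.98, 1.10, 1.15])

slope, intercept = np.polyfit(lai, kc_obs, 1)        # Kc = a * LAI + b

def crop_et(et0_reference, lai_today):
    """Crop ET (mm/day) = regression-modified Kc * reference ET."""
    kc = slope * lai_today + intercept
    return kc * et0_reference

print(round(crop_et(4.2, 2.5), 2))   # reference ET0 of 4.2 mm/day, mid-season canopy
```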

  13. High-resolution modeling of the cusp density anomaly: Response to particle and Joule heating under typical conditions

    Science.gov (United States)

    Brinkman, Douglas G.; Walterscheid, Richard L.; Clemmons, James H.; Hecht, James. H.

    2016-03-01

    An established high-resolution dynamical model is employed to understand the behavior of the thermosphere beneath the Earth's magnetic cusps, with emphasis on the factors contributing to the density structures observed by the CHAMP and Streak satellite missions. In contrast to previous modeling efforts, this approach combines first principles dynamical modeling with the high spatial resolution needed to describe accurately mesoscale features such as the cusp. The resulting density structure is shown to be consistent with observations, including regions of both enhanced and diminished neutral density along the satellite track. This agreement is shown to be the result of a straightforward application of input conditions commonly found in the cusp rather than exaggerated or extreme conditions. It is found that the magnitude of the density change is sensitive to the width of the cusp region and that models that can resolve widths on the order of 2° of latitude are required to predict density variations that are consistent with the observations.

  14. Identification of a Typical CSTR Using Optimal Focused Time Lagged Recurrent Neural Network Model with Gamma Memory Filter

    National Research Council Canada - National Science Library

    Naikwad, S. N; Dudul, S. V

    2009-01-01

    … It is noticed from the literature review that process control of a CSTR using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available...

  15. Effect of variation of length-to-depth ratio and Mach number on the performance of a typical double cavity scramjet combustor

    Science.gov (United States)

    Mahto, Navin Kumar; Choubey, Gautam; Suneetha, Lakka; Pandey, K. M.

    2016-11-01

    The two-equation standard k-ε turbulence model and the two-dimensional compressible Reynolds-Averaged Navier-Stokes (RANS) equations have been used to computationally simulate the double cavity scramjet combustor. All simulations were performed using the ANSYS 14 FLUENT code. The present numerical simulation for the double cavity was validated by comparing its results with the available experimental data in the literature. The results are in good agreement with the schlieren image and the experimentally obtained pressure distribution curve, although the numerically obtained pressure distribution is under-predicted at 5 locations. Further, the effects of the cavity length-to-depth ratio and Mach number on the combustion characteristics were investigated. The present results show that there is an optimal length-to-depth ratio of the cavity for which the performance of the combustor improves significantly and efficient combustion takes place within the combustor region. Also, the incident oblique shock shifts downstream of the H2 inlet as the Mach number increases. Beyond a critical Mach number range of 2-2.5, further increases in Mach number result in lower combustion efficiency, which may deteriorate combustor performance.

  16. Research on the recycling industry development model for typical exterior plastic components of end-of-life passenger vehicle based on the SWOT method.

    Science.gov (United States)

    Zhang, Hongshen; Chen, Ming

    2013-11-01

    In-depth studies on the recycling of typical automotive exterior plastic parts are significant and beneficial for environmental protection, energy conservation, and the sustainable development of China. In the current study, several methods were used to analyze the recycling industry model for typical exterior parts of passenger vehicles in China. The strengths, weaknesses, opportunities, and challenges of the current recycling industry for typical exterior parts of passenger vehicles were analyzed comprehensively based on the SWOT method. The internal factor evaluation matrix and external factor evaluation matrix were used to evaluate the internal and external factors of the recycling industry. The industry was found to respond well to all these factors and to face good development opportunities. A cross-link strategy analysis for the typical exterior parts of the passenger car industry of China was then conducted based on the SWOT analysis strategies and the established SWOT matrix. Finally, based on this research, a recycling industry model led by automobile manufacturers was put forward.

  17. Cognitive performance modeling based on general systems performance theory.

    Science.gov (United States)

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  18. Extracting topographic characteristics of landforms typical of Canadian agricultural landscapes for agri-environmental modeling. I. Methodology

    NARCIS (Netherlands)

    Li, S.; Lobb, D.A.; McConkey, B.G.; MacMillan, R.A.; Moulin, A.; Fraser, W.R.

    2011-01-01

    Soil and topographic information are key inputs for many agri-environmental models, and there are linkages between soil and topography at the field scale. A major source of soil data is the soil databases established from field soil surveys. Although both soil and topographic information are recorded

  19. The typical behaviour of relays

    OpenAIRE

    Alamino, Roberto C.; Saad, David

    2007-01-01

    The typical behaviour of the relay-without-delay channel and its many-units generalisation, termed the relay array, under LDPC coding, is studied using methods of statistical mechanics. A demodulate-and-forward strategy is analytically solved using the replica symmetric ansatz which is exact in the studied system at the Nishimori's temperature. In particular, the typical level of improvement in communication performance by relaying messages is shown in the case of small and large number of re...

  20. METAPHOR (version 1): Users guide. [performability modeling

    Science.gov (United States)

    Furchtgott, D. G.

    1979-01-01

    General information concerning METAPHOR, an interactive software package to facilitate performability modeling and evaluation, is presented. Example systems are studied and their performabilities are calculated. Each available METAPHOR command and array generator is described. Complete METAPHOR sessions are included.

  1. [Optimization of sample pretreatment method for the determination of typical artificial sweeteners in soil by high performance liquid chromatography-tandem mass spectrometry].

    Science.gov (United States)

    Feng, Biting; Gan, Zhiwei; Hu, Hongwei; Sun, Hongwen

    2014-09-01

    The sample pretreatment method for the determination of four typical artificial sweeteners (ASs), including sucralose, saccharin, cyclamate, and acesulfame, in soil by high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) was optimized. Different extraction conditions, including four extractants (methanol, acetonitrile, acetone, deionized water), three ionic strengths of sodium acetate solution (0.001, 0.01, 0.1 mol/L), four pH values (3, 4, 5 and 6) of 0.01 mol/L acetate-sodium acetate solution, four extraction durations (20, 40, 60, 120 min) and the number of extraction cycles (1, 2, 3, 4 times), were compared. The optimal sample pretreatment method was finally established. The samples were extracted twice with 25 mL of 0.01 mol/L sodium acetate solution (pH 4) for 20 min per cycle. The extracts were combined and then purified and concentrated on CNW Poly-Sery PWAX cartridges with methanol containing 1 mmol/L tris(hydroxymethyl)aminomethane (Tris) and 5% (v/v) ammonium hydroxide as the eluent. The analytes were determined by HPLC-MS/MS. The recoveries were obtained by spiking soil with the four artificial sweeteners at 1, 10 and 100 μg/kg (dry weight), separately. The average recoveries of the analytes ranged from 86.5% to 105%. The intra-day and inter-day precisions, expressed as relative standard deviations (RSDs), were in the range of 2.56%-5.94% and 3.99%-6.53%, respectively. Good linearities (r² > 0.995) were observed between 1-100 μg/kg (dry weight) for all the compounds. The limits of detection were 0.01-0.21 μg/kg and the limits of quantification were 0.03-0.70 μg/kg for the analytes. The four artificial sweeteners were determined in soil samples from farmland contaminated by wastewater in Tianjin. This method is rapid, reliable, and suitable for the investigation of artificial sweeteners in soil.
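
    The recovery and precision figures quoted above come from spike-recovery experiments; a small sketch of that calculation with made-up replicate values:

```python
# Spike-recovery check: soil is spiked at a known level, taken through extraction and
# clean-up, and the measured amount is compared with the nominal spike. Values are illustrative.
import statistics

spike_level = 10.0                                   # micrograms/kg dry weight
measured = [9.1, 9.8, 10.4, 8.9, 9.5]                # replicate determinations

recoveries = [100.0 * m / spike_level for m in measured]
mean_recovery = statistics.mean(recoveries)
rsd = 100.0 * statistics.stdev(recoveries) / mean_recovery   # relative standard deviation

print(f"mean recovery {mean_recovery:.1f}%, RSD {rsd:.1f}%")
```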

  2. Assembly line performance and modeling

    Science.gov (United States)

    Rane, Arun B.; Sunnapwar, Vivek K.

    2017-03-01

    The automobile sector forms the backbone of the manufacturing sector. The vehicle assembly line is an important section in an automobile plant, where repetitive tasks are performed one after another at different workstations. In this thesis, a methodology is proposed to reduce cycle time and the time lost due to important factors such as equipment failure, shortage of inventory, absenteeism, set-up, material handling, rejection and fatigue, in order to improve output within given cost constraints. Various relationships between these factors, the corresponding cost and the output are established by a scientific approach. This methodology is validated in three different vehicle assembly plants. The proposed methodology may help practitioners to optimize the assembly line using lean techniques.

  3. Generalization performance of regularized neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1994-01-01

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  4. [Effect of a model of the H-component of a typical magnetic storm on early ontogenesis in Daphnia magna Straus].

    Science.gov (United States)

    Krylov, V V; Zotov, O D; Osipova, E A; Znobishcheva, A V; Demtsun, N A

    2010-01-01

    The effect of a model of the H-component of a typical magnetic storm on the early ontogenesis of Daphnia magna Straus at 21 and 23 degrees C has been studied. It was shown based on the rates of the early ontogenesis that the effects of the model magnetic storm from the sudden onset of the storm to its end differ from the effects of the model magnetic storm from the recovery phase to the end of the storm. The effects of the model magnetic storm depended on temperature. The action of the model magnetic storm from the sudden onset of the storm to its end led to changes in the body length in the first progeny broods.

  5. Summary of photovoltaic system performance models

    Energy Technology Data Exchange (ETDEWEB)

    Smith, J. H.; Reiter, L. J.

    1984-01-15

    The purpose of this study is to provide a detailed overview of photovoltaics (PV) performance modeling capabilities that have been developed during recent years for analyzing PV system and component design and policy issues. A set of 10 performance models have been selected which span a representative range of capabilities from generalized first-order calculations to highly specialized electrical network simulations. A set of performance modeling topics and characteristics is defined and used to examine some of the major issues associated with photovoltaic performance modeling. Next, each of the models is described in the context of these topics and characteristics to assess its purpose, approach, and level of detail. Then each of the issues is discussed in terms of the range of model capabilities available and summarized in tabular form for quick reference. Finally, the models are grouped into categories to illustrate their purposes and perspectives.

  6. The Optimal Price Ratio of Typical Energy Sources in Beijing Based on the Computable General Equilibrium Model

    Directory of Open Access Journals (Sweden)

    Yongxiu He

    2014-04-01

    Full Text Available In Beijing, China, the rational consumption of energy is affected by the insufficient linkage mechanism of the energy pricing system, the unreasonable price ratio and other issues. This paper combines the characteristics of Beijing’s energy market and puts forward the maximization of a society-economy equilibrium indicator R, taking the mitigation cost into consideration, to determine a reasonable price-ratio range. Based on the computable general equilibrium (CGE) model, and dividing four kinds of energy sources into three groups, the impact of price fluctuations of electricity and natural gas on the Gross Domestic Product (GDP), Consumer Price Index (CPI), energy consumption and CO2 and SO2 emissions can be simulated for various scenarios. On this basis, the integrated effects of electricity and natural gas price shocks on the Beijing economy and environment can be calculated. The results show that, relative to coal prices, the electricity and natural gas prices in Beijing are currently below reasonable levels; the solution to these unreasonable energy price ratios should begin by improving the energy pricing mechanism, for example through the establishment of a sound dynamic adjustment mechanism between regulated prices and market prices. This provides a new idea for exploring the rationality of energy price ratios in imperfectly competitive energy markets.

  7. A medal share model for Olympic performance

    OpenAIRE

    Ang Sun; Rui Wang; Zhaoguo Zhan

    2015-01-01

    A sizable empirical literature relates a nation's Olympic performance to socioeconomic factors by adopting linear regression or a Tobit approach suggested by Bernard and Busse (2004). We propose an alternative model where a nation's medal share depends on its competitiveness relative to other nations and the model is logically consistent. Empirical evidence shows that our model fits data better than the existing linear regression and Tobit model. Besides Olympic Games, the proposed model and ...

  8. Transmutation Fuel Performance Code Thermal Model Verification

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    FRAPCON fuel performance code is being modified to be able to model performance of the nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort for verification of the FRAPCON thermal model. It was found that, with minor modifications, FRAPCON thermal model temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, code input, and calculation results.

  9. Performance of GeantV EM Physics Models

    Energy Technology Data Exchange (ETDEWEB)

    Amadio, G.; et al.

    2016-10-14

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-cores technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architecture. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  10. Intern Performance in Three Supervisory Models

    Science.gov (United States)

    Womack, Sid T.; Hanna, Shellie L.; Callaway, Rebecca; Woodall, Peggy

    2011-01-01

    Differences in intern performance, as measured by a Praxis III-similar instrument were found between interns supervised in three supervisory models: Traditional triad model, cohort model, and distance supervision. Candidates in this study's particular form of distance supervision were not as effective as teachers as candidates in traditional-triad…

  11. Performance modeling of automated manufacturing systems

    Science.gov (United States)

    Viswanadham, N.; Narahari, Y.

    A unified and systematic treatment is presented of modeling methodologies and analysis techniques for performance evaluation of automated manufacturing systems. The book is the first treatment of the mathematical modeling of manufacturing systems. Automated manufacturing systems are surveyed and three principal analytical modeling paradigms are discussed: Markov chains, queues and queueing networks, and Petri nets.

  12. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    Full Text Available In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for the assessment of the performance of a given model by using an example taken from a management study.
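
    As an illustration of the kind of quantitative performance measures such a validation procedure relies on, a minimal sketch using scikit-learn on synthetic data (the split-sample design and the choice of AUC and Brier score are assumptions, not taken from the paper):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

# Synthetic binary-outcome data standing in for a real management dataset.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
p_test = model.predict_proba(X_test)[:, 1]

print(f"AUC (discrimination): {roc_auc_score(y_test, p_test):.3f}")
print(f"Brier score (calibration/accuracy): {brier_score_loss(y_test, p_test):.3f}")
```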

  13. Photovoltaic performance models - A report card

    Science.gov (United States)

    Smith, J. H.; Reiter, L. R.

    1985-01-01

    Models for the analysis of photovoltaic (PV) systems' designs, implementation policies, and economic performance, have proliferated while keeping pace with rapid changes in basic PV technology and extensive empirical data compiled for such systems' performance. Attention is presently given to the results of a comparative assessment of ten well documented and widely used models, which range in complexity from first-order approximations of PV system performance to in-depth, circuit-level characterizations. The comparisons were made on the basis of the performance of their subsystem, as well as system, elements. The models fall into three categories in light of their degree of aggregation into subsystems: (1) simplified models for first-order calculation of system performance, with easily met input requirements but limited capability to address more than a small variety of design considerations; (2) models simulating PV systems in greater detail, encompassing types primarily intended for either concentrator-incorporating or flat plate collector PV systems; and (3) models not specifically designed for PV system performance modeling, but applicable to aspects of electrical system design. Models ignoring subsystem failure or degradation are noted to exclude operating and maintenance characteristics as well.

  14. Product Data Model for Performance-driven Design

    Science.gov (United States)

    Hu, Guang-Zhong; Xu, Xin-Jian; Xiao, Shou-Ne; Yang, Guang-Wu; Pu, Fan

    2017-09-01

    When designing large-sized complex machinery products, the design focus is always on the overall performance; however, no design theory and method driven by performance exists. In view of this deficiency in the existing design theory, and according to the performance features of complex mechanical products, performance indices are introduced into the traditional design theory of "Requirement-Function-Structure" to construct a new five-domain design theory of "Client Requirement-Function-Performance-Structure-Design Parameter". To support design practice based on this new theory, a product data model is established by using performance indices and the mapping relationships between them and the other four domains. When the product data model is applied to high-speed train design, combining existing research results and relevant standards, the corresponding data model and its structure involving the five domains of high-speed trains are established, which can provide technical support for studying the relationships between typical performance indices and design parameters and for quickly achieving a high-speed train scheme design. The five domains provide a reference for the design specification and evaluation criteria of high-speed trains and a new idea for the train's parameter design.

  15. Modeling Performance of Plant Growth Regulators

    Directory of Open Access Journals (Sweden)

    W. C. Kreuser

    2017-03-01

    Full Text Available Growing degree day (GDD) models can predict the performance of plant growth regulators (PGRs) applied to creeping bentgrass (Agrostis stolonifera L.). The goal of this letter is to describe experimental design strategies and modeling approaches to create PGR models for different PGRs, application rates, and turf species. Results from testing the models indicate that clipping yield should be measured until the growth response has diminished. This is in contrast to reapplication of a PGR at preselected intervals. During modeling, inclusion of an amplitude-dampening coefficient in the sinewave model allows the PGR effect to dissipate with time.
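
    One plausible form of the damped sinewave response described above, purely as an illustration; the functional form and coefficients are assumptions, not the letter's fitted model:

```python
import math

def relative_growth(gdd, amplitude=0.25, period=300.0, dampening=0.004):
    """Relative clipping yield (1.0 = untreated) after a PGR application.
    A damped sine wave: suppression early in the cycle, a rebound later, and an
    exponential term that lets the whole effect fade as GDD accumulate.
    Parameter values are illustrative, not fitted coefficients."""
    return 1.0 - amplitude * math.exp(-dampening * gdd) * math.sin(2.0 * math.pi * gdd / period)

for gdd in (0, 75, 150, 225, 300):
    print(gdd, round(relative_growth(gdd), 3))
```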

  16. Cost and Performance Model for Photovoltaic Systems

    Science.gov (United States)

    Borden, C. S.; Smith, J. H.; Davisson, M. C.; Reiter, L. J.

    1986-01-01

    Lifetime cost and performance (LCP) model assists in assessment of design options for photovoltaic systems. LCP is simulation of performance, cost, and revenue streams associated with photovoltaic power systems connected to electric-utility grid. LCP provides user with substantial flexibility in specifying technical and economic environment of application.

  17. Performance of hedging strategies in interval models

    NARCIS (Netherlands)

    Roorda, Berend; Engwerda, Jacob; Schumacher, J.M.

    2005-01-01

    For a proper assessment of risks associated with the trading of derivatives, the performance of hedging strategies should be evaluated not only in the context of the idealized model that has served as the basis of strategy development, but also in the context of other models. In this paper we consid

  18. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in. Describing a complicated system abstractly with mathematical equations requires a careful choice of assumptions.
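
    In the spirit of such models, a back-of-the-envelope example: the mean response time of an M/M/1 queue written directly from its defining equation (rates are illustrative):

```python
def mm1_response_time(lam, mu):
    """Mean response time and utilization of an M/M/1 queue (time units of 1/mu)."""
    if lam >= mu:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    rho = lam / mu                 # server utilization
    return 1.0 / (mu - lam), rho

resp, util = mm1_response_time(lam=80.0, mu=100.0)   # requests per second
print(f"utilization={util:.0%}, mean response time={resp * 1000:.1f} ms")
```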

  19. Disseminated typical bronchial carcinoid tumor

    Directory of Open Access Journals (Sweden)

    Novković Dobrivoje

    2013-01-01

    Full Text Available Introduction. Bronchial carcinoids belong to a rare type of lung tumors. If they do not exhibit pronounced neuroendocrine activity, they develop without clearly visible symptoms and are often detected during a routine examination. According to their clinicopathological features, they are divided into typical and atypical tumors. Typical bronchial carcinoids metastasize to distant organs very rarely. Localized forms are effectively treated by surgery; methods of conservative treatment should be applied in other cases. Case report. We present a 65-year-old patient with a carcinoid lung tumor detected by a routine examination. Additional analyses (chest X-ray, computed tomography of the chest, ultrasound of the abdomen, skeletal scintigraphy, bronchoscopy, histopathological analysis of the biopsy specimen of the bronchial tumor, as well as bronchial brushing cytology and immunohistochemical staining with markers specific for neuroendocrine tumors) confirmed a morphologically typical lung carcinoid with dissemination to the liver and the skeletal system, which is found very rarely in typical carcinoids. Conclusion. The presented case showed the morphological and pathohistological characteristics of a typical bronchial carcinoid. With its metastases to the liver and skeletal system, it demonstrated an unusual clinical course that is considered a rare phenomenon. Due to its frequently asymptomatic course and varied manifestations, bronchial carcinoid can be considered a diagnostic challenge requiring a multidisciplinary approach.

  20. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or m

  1. Performance modeling, loss networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi

    2009-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large buffer, and many sources asymptotics that play an important role in understanding statistical multiplexing. I

  2. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    Science.gov (United States)

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures...

  3. Towards Systematic Benchmarking of Climate Model Performance

    Science.gov (United States)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. Making the results from routine

  4. Performance results of HESP physical model

    Science.gov (United States)

    Chanumolu, Anantha; Thirupathi, Sivarani; Jones, Damien; Giridhar, Sunetra; Grobler, Deon; Jakobsson, Robert

    2017-02-01

    As a continuation of the published work on the model-based calibration technique with HESP (Hanle Echelle Spectrograph) as a case study, in this paper we present the performance results of the technique. We also describe how the open parameters were chosen in the model for optimization, the accuracy of the glass data and the handling of discrepancies. It is observed through simulations that discrepancies in the glass data can be identified but not quantified, so having accurate glass data, which can be obtained from the glass manufacturers, is important. The model's performance in various aspects is presented using the ThAr calibration frames from HESP during its pre-shipment tests. The accuracy of the model predictions, the comparison of its wavelength calibration with conventional empirical fitting, the behaviour of the open parameters in optimization, the model's ability to track instrumental drifts in the spectrum, and the performance of the double fibres are discussed. It is observed that the optimized model is able to predict to a high accuracy the drifts in the spectrum arising from environmental fluctuations. It is also observed that the pattern in the spectral drifts across the 2D spectrum, which varies from image to image, is predictable with the optimized model. We also discuss possible science cases where the model can contribute.

  5. Critical review of glass performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process.

  6. Temporal diagnostic analysis of the SWAT model to detect dominant periods of poor model performance

    Science.gov (United States)

    Guse, Björn; Reusser, Dominik E.; Fohrer, Nicola

    2013-04-01

    … Four recurring patterns of typical model performance were identified, which can be related to different phases of the hydrograph. Overall, the baseflow cluster has the lowest performance. By combining the periods of poor model performance with the dominant model components during these phases, the groundwater module was identified as the model part with the highest potential for improvement. The detection of dominant processes in periods of poor model performance enhances the understanding of the SWAT model. Based on this, concepts for improving the SWAT model structure for application in German lowland catchments are derived.

  7. Typicals/Típicos

    Directory of Open Access Journals (Sweden)

    Silvia Vélez

    2004-01-01

    Full Text Available Typicals is a series of 12 colour photographs digitally created from photojournalistic images from Colombia combined with "typical" craft textiles and text from guest writers. Typicals was first exhibited as photographs 50cm x 75cm in size, each with their own magnifying glass, at the Contemporary Art Space at Gorman House in Canberra, Australia, in 2000. It was then exhibited in "Feedback: Art Social Consciousness and Resistance" at Monash University Museum of Art in Melbourne, Australia, from March to May 2003. From May to June 2003 it was exhibited at the Museo de Arte de la Universidad Nacional de Colombia Santa Fé Bogotá, Colombia. In its current manifestation the artwork has been adapted from the catalogue of the museum exhibitions. It is broken up into eight pieces corresponding to the contributions of the writers. The introduction by Sylvia Vélez is the PDF file accessible via a link below this abstract. The other seven PDF files are accessible via the 'Research Support Tool' section to the right of your screen. Please click on 'Supp. Files'. Please note that these files are around 4 megabytes each, so it may be difficult to access them from a dial-up connection.

  8. Performance modeling, stochastic networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi R

    2013-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks.The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the importan

  9. PV performance modeling workshop summary report.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Tasca, Coryne Adelle (SRA International, Inc., Fairfax, VA); Cameron, Christopher P.

    2011-05-01

    During the development of a solar photovoltaic (PV) energy project, predicting expected energy production from a system is a key part of understanding system value. System energy production is a function of the system design and location, the mounting configuration, the power conversion system, and the module technology, as well as the solar resource. Even if all other variables are held constant, annual energy yield (kWh/kWp) will vary among module technologies because of differences in response to low-light levels and temperature. A number of PV system performance models have been developed and are in use, but little has been published on validation of these models or the accuracy and uncertainty of their output. With support from the U.S. Department of Energy's Solar Energy Technologies Program, Sandia National Laboratories organized a PV Performance Modeling Workshop in Albuquerque, New Mexico, September 22-23, 2010. The workshop was intended to address the current state of PV system models, develop a path forward for establishing best practices on PV system performance modeling, and set the stage for standardization of testing and validation procedures for models and input parameters. This report summarizes discussions and presentations from the workshop, as well as examines opportunities for collaborative efforts to develop objective comparisons between models and across sites and applications.
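
    A first-order yield estimate of the kind these performance models build on, useful mainly to show why annual specific yield (kWh/kWp) differs across module technologies through the lumped losses; all inputs are illustrative:

```python
# First-order PV energy estimate: nameplate power x plane-of-array insolation
# (kWh/m2/year, numerically equal to full-sun hours) x a lumped performance ratio.
nameplate_kw = 5.0          # kWp (STC rating), illustrative
annual_insolation = 1800.0  # kWh/m2/year on the array plane, illustrative
performance_ratio = 0.78    # lumps temperature, soiling, inverter and wiring losses

annual_energy_kwh = nameplate_kw * annual_insolation * performance_ratio
specific_yield = annual_energy_kwh / nameplate_kw   # kWh/kWp, compared across technologies
print(f"{annual_energy_kwh:.0f} kWh/year, specific yield {specific_yield:.0f} kWh/kWp")
```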

  10. Performance of skylight illuminance inside a dome shaped adobe house under composite climate at New Delhi (India): A typical zero energy passive house

    OpenAIRE

    Arvind Chel

    2014-01-01

    This paper presents annual experimental performance of pyramid shaped skylight for daylighting of a dome shaped adobe house located at solar energy park in New Delhi (India). This approach of single story dome shaped building with skylight is more useful for rural and semi-urban sectors for both office and residential buildings reducing artificial lighting energy consumption. The hourly measured data of inside and outside illuminance for three different working surface levels inside the exist...

  11. A Procurement Performance Model for Construction Frameworks

    Directory of Open Access Journals (Sweden)

    Terence Y M Lam

    2015-07-01

    Full Text Available Collaborative construction frameworks have been developed in the United Kingdom (UK) to create longer term relationships between clients and suppliers in order to improve project outcomes. Research undertaken into highways maintenance set within a major county council has confirmed that such collaborative procurement methods can improve time, cost and quality of construction projects. Building upon this and examining the same single case, this research aims to develop a performance model through identification of performance drivers in the whole project delivery process, including pre and post contract phases. An a priori performance model based on operational and sociological constructs was proposed and then checked by a pilot study. Factor analysis and central tendency statistics from the questionnaires as well as content analysis from the interview transcripts were conducted. It was confirmed that long term relationships, financial and non-financial incentives and stronger communication are the sociological behaviour factors driving performance. The interviews also established that key performance indicators (KPIs) can be used as an operational measure to improve performance. With the a posteriori performance model, client project managers can effectively and collaboratively manage contractor performance through procurement measures, including use of longer contract terms and KPIs, so that the expected project outcomes can be achieved. The findings also make a significant contribution to construction framework procurement theory by identifying the interrelated sociological and operational performance drivers. This study is set predominantly in the field of highways civil engineering. It is suggested that building based projects or other projects that share characteristics are grouped together and used for further research of the phenomena discovered.

  12. Outdoor FSO Communications Under Fog: Attenuation Modeling and Performance Evaluation

    KAUST Repository

    Esmail, Maged Abdullah

    2016-07-18

    Fog is considered to be a primary challenge for free space optics (FSO) systems. It may cause attenuation that is up to hundreds of decibels per kilometer. Hence, accurate modeling of fog attenuation will help telecommunication operators to engineer and appropriately manage their networks. In this paper, we examine fog measurement data coming from several locations in Europe and the United States and derive a unified channel attenuation model. Compared with existing attenuation models, our proposed model achieves an average root-mean-square error (RMSE) that is at least 9 dB lower. Moreover, we have investigated the statistical behavior of the channel and developed a probabilistic model under stochastic fog conditions. Furthermore, we studied the performance of the FSO system addressing various performance metrics, including signal-to-noise ratio (SNR), bit-error rate (BER), and channel capacity. Our results show that in communication environments with frequent fog, FSO is typically a short-range data transmission technology. Therefore, FSO will have its preferred market segment in future wireless fifth-generation/sixth-generation (5G/6G) networks having cell sizes that are lower than a 1-km diameter. Moreover, the results of our modeling and analysis can be applied in determining the switching/thresholding conditions in highly reliable hybrid FSO/radio-frequency (RF) networks.
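
    As a hedged illustration of the kind of attenuation calculation the abstract refers to, the sketch below estimates fog-specific attenuation from visibility using the widely cited Kim model and applies it to a short link. It is not the unified model derived in the paper, and the visibility, wavelength and link length are made-up example values.

```python
# Hedged sketch: fog attenuation from visibility using the Kim model
# (not the unified model derived in the paper; values are illustrative).
import math

def kim_q(visibility_km: float) -> float:
    """Size-distribution exponent q as a function of visibility (Kim model)."""
    v = visibility_km
    if v > 50:
        return 1.6
    if v > 6:
        return 1.3
    if v > 1:
        return 0.16 * v + 0.34
    if v > 0.5:
        return v - 0.5
    return 0.0

def specific_attenuation_db_per_km(visibility_km: float, wavelength_nm: float) -> float:
    """Specific attenuation (dB/km) for a given visibility and wavelength."""
    q = kim_q(visibility_km)
    return (3.91 / visibility_km) * (wavelength_nm / 550.0) ** (-q)

if __name__ == "__main__":
    # Dense fog (visibility 50 m) at 1550 nm over a 1 km link.
    alpha = specific_attenuation_db_per_km(0.05, 1550.0)
    link_km = 1.0
    print(f"specific attenuation: {alpha:.1f} dB/km")
    print(f"total fog loss over {link_km} km: {alpha * link_km:.1f} dB")
```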

  13. Generating Performance Models for Irregular Applications

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.; Hoisie, Adolfy

    2017-05-30

    Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; Page Rank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.

  14. Performance of skylight illuminance inside a dome shaped adobe house under composite climate at New Delhi (India): A typical zero energy passive house

    Directory of Open Access Journals (Sweden)

    Arvind Chel

    2014-06-01

    Full Text Available This paper presents annual experimental performance of pyramid shaped skylight for daylighting of a dome shaped adobe house located at solar energy park in New Delhi (India). This approach of single story dome shaped building with skylight is more useful for rural and semi-urban sectors for both office and residential buildings reducing artificial lighting energy consumption. The hourly measured data of inside and outside illuminance for three different working surface levels inside the existing rooms are presented for each month of the year. The embodied energy payback time of the skylight is also determined on the basis of lighting energy saving potential.

  15. Performance benchmarks for a next generation numerical dynamo model

    Science.gov (United States)

    Matsui, Hiroaki; Heien, Eric; Aubert, Julien; Aurnou, Jonathan M.; Avery, Margaret; Brown, Ben; Buffett, Bruce A.; Busse, Friedrich; Christensen, Ulrich R.; Davies, Christopher J.; Featherstone, Nicholas; Gastine, Thomas; Glatzmaier, Gary A.; Gubbins, David; Guermond, Jean-Luc; Hayashi, Yoshi-Yuki; Hollerbach, Rainer; Hwang, Lorraine J.; Jackson, Andrew; Jones, Chris A.; Jiang, Weiyuan; Kellogg, Louise H.; Kuang, Weijia; Landeau, Maylis; Marti, Philippe; Olson, Peter; Ribeiro, Adolfo; Sasaki, Youhei; Schaeffer, Nathanaël.; Simitev, Radostin D.; Sheyko, Andrey; Silva, Luis; Stanley, Sabine; Takahashi, Futoshi; Takehiro, Shin-ichi; Wicht, Johannes; Willis, Ashley P.

    2016-05-01

    Numerical simulations of the geodynamo have successfully represented many observable characteristics of the geomagnetic field, yielding insight into the fundamental processes that generate magnetic fields in the Earth's core. Because of limited spatial resolution, however, the diffusivities in numerical dynamo models are much larger than those in the Earth's core, and consequently, questions remain about how realistic these models are. The typical strategy used to address this issue has been to continue to increase the resolution of these quasi-laminar models with increasing computational resources, thus pushing them toward more realistic parameter regimes. We assess which methods are most promising for the next generation of supercomputers, which will offer access to O(10^6) processor cores for large problems. Here we report performance and accuracy benchmarks from 15 dynamo codes that employ a range of numerical and parallelization methods. Computational performance is assessed on the basis of weak and strong scaling behavior up to 16,384 processor cores. Extrapolations of our weak-scaling results indicate that dynamo codes that employ two-dimensional or three-dimensional domain decompositions can perform efficiently on up to ~10^6 processor cores, paving the way for more realistic simulations in the next model generation.
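
    The benchmark abstract reports weak- and strong-scaling behaviour; the sketch below shows how such scaling efficiencies are conventionally computed from wall-clock timings. The timing numbers are invented for illustration and are not benchmark data from the study.

```python
# Hedged sketch: strong- and weak-scaling efficiency from wall-clock timings.
# The timing numbers below are illustrative, not benchmark data from the paper.

def strong_scaling_efficiency(t_ref: float, n_ref: int, t_n: float, n: int) -> float:
    """Fixed total problem size: ideal time scales as 1/cores."""
    return (t_ref * n_ref) / (t_n * n)

def weak_scaling_efficiency(t_ref: float, t_n: float) -> float:
    """Problem size grows with core count: ideal time stays constant."""
    return t_ref / t_n

if __name__ == "__main__":
    timings = {16: 1200.0, 128: 165.0, 1024: 24.0, 16384: 2.1}  # seconds, illustrative
    t16 = timings[16]
    for cores, t in sorted(timings.items()):
        eff = strong_scaling_efficiency(t16, 16, t, cores)
        print(f"{cores:6d} cores: time {t:8.1f} s, strong-scaling efficiency {eff:.2f}")
```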

  16. A conceptual model for manufacturing performance improvement

    Directory of Open Access Journals (Sweden)

    M.A. Karim

    2009-07-01

    Full Text Available Purpose: Important performance objectives manufacturers seek can be achieved through adopting the appropriate manufacturing practices. This paper presents a conceptual model proposing relationships between advanced quality practices, perceived manufacturing difficulties and manufacturing performance. Design/methodology/approach: A survey-based approach was adopted to test the hypotheses proposed in this study. The selection of research instruments for inclusion in this survey was based on a literature review, the pilot case studies and relevant industrial experience of the author. A sample of 1000 manufacturers across Australia was randomly selected. Quality managers were requested to complete the questionnaire, as the task of dealing with quality and reliability issues is a quality manager’s major responsibility. Findings: Evidence indicates that product quality and reliability is the main competitive factor for manufacturers. Design and manufacturing capability and on-time delivery came second. Price is considered the least important factor for the Australian manufacturers. Results show that collectively the advanced quality practices proposed in this study neutralize the difficulties manufacturers face and contribute to most of the manufacturers’ performance objectives. The companies that have put more emphasis on the advanced quality practices have fewer problems in manufacturing and better performance on most manufacturing performance indices. The results validate the proposed conceptual model and lend credence to the hypothesis of a relationship between quality practices, manufacturing difficulties and manufacturing performance. Practical implications: The model shown in this paper provides a simple yet highly effective approach to achieving significant improvements in product quality and manufacturing performance. This study introduces a relationship based ‘proactive’ quality management approach and provides great

  17. Sex-dependent antipsychotic capacity of 17β-estradiol in the latent inhibition model: a typical antipsychotic drug in both sexes, atypical antipsychotic drug in males.

    Science.gov (United States)

    Arad, Michal; Weiner, Ina

    2010-10-01

    The estrogen hypothesis of schizophrenia suggests that estrogen is a natural neuroprotector in women and that exogenous estrogen may have antipsychotic potential, but results of clinical studies have been inconsistent. We have recently shown using the latent inhibition (LI) model of schizophrenia that 17β-estradiol exerts antipsychotic activity in ovariectomized (OVX) rats. The present study sought to extend the characterization of the antipsychotic action of 17β-estradiol (10, 50 and 150 μg/kg) by testing its capacity to reverse amphetamine- and MK-801-induced LI aberrations in gonadally intact female and male rats. No-drug controls of both sexes showed LI, ie, reduced efficacy of a previously non-reinforced stimulus to gain behavioral control when paired with reinforcement, if conditioned with two but not five tone-shock pairings. In both sexes, amphetamine (1 mg/kg) and MK-801 (50 μg/kg) produced disruption (under weak conditioning) and persistence (under strong conditioning) of LI, modeling positive and negative/cognitive symptoms, respectively. 17β-estradiol at 50 and 150 μg/kg potentiated LI under strong conditioning and reversed amphetamine-induced LI disruption in both males and females, mimicking the action of typical and atypical antipsychotic drugs (APDs) in the LI model. 17β-estradiol also reversed MK-induced persistent LI, an effect mimicking atypical APDs and NMDA receptor enhancers, but this effect was observed in males and OVX females but not in intact females. These findings indicate that in the LI model, 17β-estradiol exerts a clear-cut antipsychotic activity in both sexes and, remarkably, is more efficacious in males and OVX females where it also exerts activity considered predictive of anti-negative/cognitive symptoms.

  18. A New Model to Simulate Energy Performance of VRF Systems

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Pang, Xiufeng; Schetrit, Oren; Wang, Liping; Kasahara, Shinichi; Yura, Yoshinori; Hinokuma, Ryohei

    2014-03-30

    This paper presents a new model to simulate energy performance of variable refrigerant flow (VRF) systems in heat pump operation mode (either cooling or heating is provided but not simultaneously). The main improvement of the new model is the introduction of the evaporating and condensing temperature in the indoor and outdoor unit capacity modifier functions. The independent variables in the capacity modifier functions of the existing VRF model in EnergyPlus are mainly room wet-bulb temperature and outdoor dry-bulb temperature in cooling mode and room dry-bulb temperature and outdoor wet-bulb temperature in heating mode. The new approach allows compliance with different specifications of each indoor unit so that the modeling accuracy is improved. The new VRF model was implemented in a custom version of EnergyPlus 7.2. This paper first describes the algorithm for the new VRF model, which is then used to simulate the energy performance of a VRF system in a Prototype House in California that complies with the requirements of Title 24, the California Building Energy Efficiency Standards. The VRF system performance is then compared with three other types of HVAC systems: the Title 24-2005 Baseline system, the traditional High Efficiency system, and the EnergyStar Heat Pump system in three typical California climates: Sunnyvale, Pasadena and Fresno. Calculated energy savings from the VRF systems are significant. The HVAC site energy savings range from 51 to 85 percent, while the TDV (Time Dependent Valuation) energy savings range from 31 to 66 percent compared to the Title 24 Baseline Systems across the three climates. The largest energy savings are in Fresno climate followed by Sunnyvale and Pasadena. The paper discusses various characteristics of the VRF systems contributing to the energy savings. It should be noted that these savings are calculated using the Title 24 prototype House D under standard operating conditions. Actual performance of the VRF systems for real

  19. Analysing the temporal dynamics of model performance for hydrological models

    Directory of Open Access Journals (Sweden)

    D. E. Reusser

    2008-11-01

    Full Text Available The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at a high temporal resolution implies analyzing and interpreting a high dimensional data set. This paper presents a method for such a hydrological model performance assessment with a high temporal resolution and illustrates its application for two very different rainfall-runoff modeling case studies. The first is the Wilde Weisseritz case study, a headwater catchment in the eastern Ore Mountains, simulated with the conceptual model WaSiM-ETH. The second is the Malalcahuello case study, a headwater catchment in the Chilean Andes, simulated with the physics-based model Catflow. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key of the developed approach is a data-reduction method based on self-organizing maps (SOMs) and cluster analysis to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The final outcome of the proposed method is a time series of the occurrence of dominant error types. For the two case studies analyzed here, 6 such error types have been identified. They show clear temporal patterns which can lead to the identification of model structural errors.
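
    As a rough sketch of the first step described above (classical performance measures evaluated over a moving window), the code below computes windowed RMSE and Nash-Sutcliffe efficiency for a synthetic observed/simulated runoff pair; the SOM and cluster-analysis classification stage of the method is not reproduced.

```python
# Hedged sketch: classical performance measures evaluated over a moving window,
# the first step of the time-resolved assessment described above. The runoff
# series here are synthetic; the SOM/cluster classification step is omitted.
import numpy as np

def moving_window_measures(obs: np.ndarray, sim: np.ndarray, window: int, step: int = 1):
    """Return (window start, RMSE, Nash-Sutcliffe efficiency) for each window."""
    results = []
    for start in range(0, len(obs) - window + 1, step):
        o = obs[start:start + window]
        s = sim[start:start + window]
        rmse = float(np.sqrt(np.mean((s - o) ** 2)))
        denom = float(np.sum((o - o.mean()) ** 2))
        nse = 1.0 - float(np.sum((s - o) ** 2)) / denom if denom > 0 else np.nan
        results.append((start, rmse, nse))
    return results

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(365)
    observed = 5 + 3 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.5, t.size)
    simulated = observed + rng.normal(0.2, 0.7, t.size)        # biased, noisier "model"
    for start, rmse, nse in moving_window_measures(observed, simulated, window=30, step=30):
        print(f"days {start:3d}-{start + 29:3d}: RMSE={rmse:.2f}, NSE={nse:.2f}")
```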

  20. Analysing the temporal dynamics of model performance for hydrological models

    Directory of Open Access Journals (Sweden)

    E. Zehe

    2009-07-01

    Full Text Available The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at a high temporal resolution implies analyzing and interpreting a high dimensional data set. This paper presents a method for such a hydrological model performance assessment with a high temporal resolution and illustrates its application for two very different rainfall-runoff modeling case studies. The first is the Wilde Weisseritz case study, a headwater catchment in the eastern Ore Mountains, simulated with the conceptual model WaSiM-ETH. The second is the Malalcahuello case study, a headwater catchment in the Chilean Andes, simulated with the physics-based model Catflow. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key of the developed approach is a data-reduction method based on self-organizing maps (SOMs) and cluster analysis to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The final outcome of the proposed method is a time series of the occurrence of dominant error types. For the two case studies analyzed here, 6 such error types have been identified. They show clear temporal patterns, which can lead to the identification of model structural errors.

  1. High temperature furnace modeling and performance verifications

    Science.gov (United States)

    Smith, James E., Jr.

    1992-01-01

    Analytical, numerical, and experimental studies were performed on two classes of high temperature materials processing sources for their potential use as directional solidification furnaces. The research concentrated on a commercially available high temperature furnace using a zirconia ceramic tube as the heating element and an Arc Furnace based on a tube welder. The first objective was to assemble the zirconia furnace and construct the parts needed to successfully perform experiments. The second objective was to evaluate the zirconia furnace performance as a directional solidification furnace element. The third objective was to establish a database on materials used in the furnace construction, with particular emphasis on emissivities, transmissivities, and absorptivities as functions of wavelength and temperature. One-dimensional and two-dimensional spectral radiation heat transfer models were developed for comparison with standard modeling techniques and were used to predict wall and crucible temperatures. The fourth objective addressed the development of a SINDA model for the Arc Furnace, which was used to design sample holders and to estimate cooling media temperatures for the steady-state operation of the furnace. The fifth objective addressed the initial performance evaluation of the Arc Furnace and associated equipment for directional solidification. Results of these objectives are presented.

  2. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, J.; Whitmore, J.; Kaffine, L.; Blair, N.; Dobos, A. P.

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.

  3. Application of a modified distributed-dynamic erosion and sediment yield model in a typical watershed of a hilly and gully region, Chinese Loess Plateau

    Science.gov (United States)

    Wu, Lei; Liu, Xia; Ma, Xiaoyi

    2016-11-01

    Soil erosion not only results in the destruction of land resources and the decline of soil fertility, but also contributes to river channel sedimentation. In order to explore the spatiotemporal evolution of erosion and sediment yield before and after returning farmland in a typical watershed of the hilly and gully region (Chinese Loess Plateau), a distributed-dynamic model of sediment yield based on the Chinese Soil Loss Equation (CSLE) was established and modified to assess the effects of hydrological factors and human activities on erosion and sediment yield between 1995 and 2013. Results indicate that (1) the modified model has the characteristics of a simple algorithm, high accuracy, wide practicability and easy expansion, and can be applied to predict erosion and sediment yield in the study area, (2) soil erosion gradations are closely related to the spatial distribution of rainfall erosivity and land use patterns, and the current soil and water conservation measures are not efficient for high rainfall intensities, and (3) the average sediment yield rate before and after model modification in the most recent 5 years (excluding 2013) is 4574.62 and 1696.1 Mg km-2, respectively, decreasing by about 35.4 and 78.2 % compared to the early governance period (1995-1998). However, the once-in-a-century storm in July 2013 was the most important reason for the maximum sediment yield. Results may provide an effective and scientific basis for soil and water conservation planning and ecological construction in the hilly and gully region, Chinese Loess Plateau.
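
    A minimal sketch of the multiplicative factor form commonly quoted for the CSLE (A = R*K*L*S*B*E*T), applied cell by cell to illustrative factor grids, is given below. The paper's distributed-dynamic modifications (dynamic rainfall erosivity, routing and sediment delivery) are not reproduced, and all factor values are assumptions for illustration only.

```python
# Hedged sketch: per-cell soil loss with the multiplicative CSLE factor form
# A = R * K * L * S * B * E * T. Factor grids below are illustrative; the
# paper's distributed-dynamic extensions (routing, delivery ratios) are omitted.
import numpy as np

def csle_soil_loss(R, K, L, S, B, E, T):
    """Annual soil loss per cell (e.g. t ha^-1 yr^-1) as the product of factor grids."""
    return R * K * L * S * B * E * T

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    shape = (4, 4)                                   # a tiny illustrative raster
    R = rng.uniform(2000, 4000, shape)               # rainfall erosivity
    K = rng.uniform(0.003, 0.008, shape)             # soil erodibility
    L = rng.uniform(1.0, 2.5, shape)                 # slope length factor
    S = rng.uniform(1.0, 8.0, shape)                 # slope steepness factor
    B = rng.uniform(0.1, 1.0, shape)                 # biological (vegetation) measures
    E = rng.uniform(0.5, 1.0, shape)                 # engineering measures
    T = rng.uniform(0.7, 1.0, shape)                 # tillage measures
    A = csle_soil_loss(R, K, L, S, B, E, T)
    print(f"mean cell soil loss: {A.mean():.1f}, max: {A.max():.1f}")
```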

  4. Human visual performance model for crewstation design

    Science.gov (United States)

    Larimer, James; Prevost, Michael; Arditi, Aries; Azueta, Steven; Bergen, James; Lubin, Jeffrey

    1991-01-01

    An account is given of a Visibility Modeling Tool (VMT) which furnishes a crew-station designer with the means to assess configurational tradeoffs, with a view to the impact of various options on the unambiguous access of information to the pilot. The interactive interface of the VMT allows the manipulation of cockpit geometry, ambient lighting, pilot ergonomics, and the displayed symbology. Performance data can be displayed in the form of 3D contours into the crewstation graphic model, thereby yielding an indication of the operator's visual capabilities.

  5. The Brain’s sense of walking: a study on the intertwine between locomotor imagery and internal locomotor models in healthy adults, typically developing children and children with cerebral palsy

    Directory of Open Access Journals (Sweden)

    Marco eIosa

    2014-10-01

    Full Text Available Motor imagery and internal motor models have been deeply investigated in the literature. It is well known that the development of motor imagery occurs during adolescence and that it is limited in people affected by cerebral palsy. However, the roles of motor imagery and internal models in locomotion, as well as their intertwining, have received little attention. In this study we compared the performances of healthy adults (n=8, 28.1±5.1 years old), children with typical development (n=8, 8.1±3.8 years old) and children with cerebral palsy (n=12, 7.5±2.9 years old), measured by an optoelectronic system and a trunk-mounted wireless inertial magnetic unit, during three different tasks. Subjects were asked to reach a target located 2 or 3 m in front of them by simulating walking while stepping in place, by actually walking blindfolded, or by walking normally with open eyes. Adults performed a not significantly different number of steps (p=0.761) and spent a not significantly different amount of time (p=0.156) across tasks. Children with typical development showed task-dependent differences both in terms of number of steps (p=0.046) and movement time (p=0.002). However, their performances in simulated and blindfolded walking were strictly correlated (R=0.871 for steps, R=0.673 for time). Further, their error in blindfolded walking was on average only -2.2% of the distance. Children with cerebral palsy also showed significant differences in number of steps (p=0.022) and time (p<0.001), but neither their number of steps nor their movement time recorded during simulated walking was correlated with those of blindfolded and normal walking. Adults used a unique strategy across the different tasks. Children with typical development seemed to be less reliant on their motor predictions, using a task-dependent strategy probably more reliant on sensory feedback. Children with cerebral palsy showed less efficient performances, especially in simulated walking, suggesting an altered locomotor imagery.

  6. A Typical Synergy

    Science.gov (United States)

    van Noort, Thomas; Achten, Peter; Plasmeijer, Rinus

    We present a typical synergy between dynamic types (dynamics) and generalised algebraic datatypes (GADTs). The former provides a clean approach to integrating dynamic typing in a statically typed language. It allows values to be wrapped together with their type in a uniform package, deferring type unification until run time using a pattern match annotated with the desired type. The latter allows for the explicit specification of constructor types, as to enforce their structural validity. In contrast to ADTs, GADTs are heterogeneous structures since each constructor type is implicitly universally quantified. Unfortunately, pattern matching only enforces structural validity and does not provide instantiation information on polymorphic types. Consequently, functions that manipulate such values, such as a type-safe update function, are cumbersome due to boilerplate type representation administration. In this paper we focus on improving such functions by providing a new GADT annotation via a natural synergy with dynamics. We formally define the semantics of the annotation and touch on novel other applications of this technique such as type dispatching and enforcing type equality invariants on GADT values.

  7. Hybrid Modeling Improves Health and Performance Monitoring

    Science.gov (United States)

    2007-01-01

    Scientific Monitoring Inc. was awarded a Phase I Small Business Innovation Research (SBIR) project by NASA's Dryden Flight Research Center to create a new, simplified health-monitoring approach for flight vehicles and flight equipment. The project developed a hybrid physical model concept that provided a structured approach to simplifying complex design models for use in health monitoring, allowing the output or performance of the equipment to be compared to what the design models predicted, so that deterioration or impending failure could be detected before there would be an impact on the equipment's operational capability. Based on the original modeling technology, Scientific Monitoring released I-Trend, a commercial health- and performance-monitoring software product named for its intelligent trending, diagnostics, and prognostics capabilities, as part of the company's complete ICEMS (Intelligent Condition-based Equipment Management System) suite of monitoring and advanced alerting software. I-Trend uses the hybrid physical model to better characterize the nature of health or performance alarms that result in "no fault found" false alarms. Additionally, the use of physical principles helps I-Trend identify problems sooner. I-Trend technology is currently in use in several commercial aviation programs, and the U.S. Air Force recently tapped Scientific Monitoring to develop next-generation engine health-management software for monitoring its fleet of jet engines. Scientific Monitoring has continued the original NASA work, this time under a Phase III SBIR contract with a joint NASA-Pratt & Whitney aviation security program on propulsion-controlled aircraft under missile-damaged aircraft conditions.

  8. Modeling the fate of p,p'-DDT in water and sediment of two typical estuarine bays in South China: Importance of fishing vessels' inputs.

    Science.gov (United States)

    Fang, Shu-Ming; Zhang, Xianming; Bao, Lian-Jun; Zeng, Eddy Y

    2016-05-01

    Antifouling paint applied to fishing vessels is the primary source of dichloro-diphenyl-trichloroethane (DDT) to the coastal marine environments of China. With the aim to provide science-based support of potential regulations on DDT use in antifouling paint, we utilized a fugacity-based model to evaluate the fate and impact of p,p'-DDT, the dominant component of DDT mixture, in Daya Bay and Hailing Bay, two typical estuarine bays in South China. The emissions of p,p'-DDT from fishing vessels to the aquatic environments of Hailing Bay and Daya Bay were estimated as 9.3 and 7.7 kg yr(-1), respectively. Uncertainty analysis indicated that the temporal variability of p,p'-DDT was well described by the model if fishing vessels were considered as the only direct source, i.e., fishing vessels should be the dominant source of p,p'-DDT in coastal bay areas of China. Estimated hazard quotients indicated that sediment in Hailing Bay posed high risk to the aquatic system, and it would take at least 21 years to reduce the hazards to a safe level. Moreover, p,p'-DDT tends to migrate from water to sediment in the entire Hailing Bay and Daya Bay. On the other hand, our previous research indicated that p,p'-DDT was more likely to migrate from sediment to water in the maricultured zones located in shallow waters of these two bays, where fishing vessels frequently remain. These findings suggest that relocating mariculture zones to deeper waters would reduce the likelihood of farmed fish contamination by p,p'-DDT.

  9. Response surface modeling-based source contribution analysis and VOC emission control policy assessment in a typical ozone-polluted urban Shunde, China.

    Science.gov (United States)

    You, Zhiqiang; Zhu, Yun; Jang, Carey; Wang, Shuxiao; Gao, Jian; Lin, Che-Jen; Li, Minhui; Zhu, Zhenghua; Wei, Hao; Yang, Wenwei

    2017-01-01

    To develop a sound ozone (O3) pollution control strategy, it is important to understand and characterize source contributions, given the complex chemical and physical processes of O3 formation. Using Shunde city as a pilot summer case study, we apply an innovative response surface modeling (RSM) methodology based on Community Multi-Scale Air Quality (CMAQ) modeling simulations to identify the O3 regime and provide a dynamic analysis of the precursor contributions, in order to effectively assess the O3 impacts of volatile organic compound (VOC) control strategies. Our results show that Shunde is a typical VOC-limited urban O3-polluted city. Jiangmen city, as the main upwind area during July 2014, makes the largest contribution (9.06%) through its VOC and nitrogen oxide (NOx) emissions. On the contrary, the contribution from local (Shunde) emissions is the lowest (6.35%) among the seven neighbouring regions. The local industrial VOC source makes the largest contribution among the precursor emission sectors in Shunde. The results of the dynamic source contribution analysis further show that local NOx control could slightly increase ground-level O3 under low (10.00%) and medium (40.00%) reduction ratios, while it starts to decrease ground-level O3 under a high NOx abatement ratio (75.00%). The real-time assessment of O3 impacts from VOC control strategies in the Pearl River Delta (PRD) shows that a joint regional VOC emission control policy will effectively reduce ground-level O3 concentrations in Shunde. Copyright © 2016. Published by Elsevier B.V.
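
    As a hedged sketch of the response-surface idea (not the actual RSM or the CMAQ outputs used in the study), the code below fits a quadratic surface of ozone concentration to VOC and NOx emission-reduction ratios using synthetic "model runs" constructed to mimic a VOC-limited regime.

```python
# Hedged sketch: fitting a simple polynomial response surface of ozone to
# precursor emission-reduction ratios, in the spirit of an RSM built on CMAQ
# runs. The "model outputs" below are synthetic; the study's RSM and factors
# are not reproduced here.
import numpy as np

rng = np.random.default_rng(7)
voc_cut = rng.uniform(0, 1, 60)          # fractional VOC emission reduction
nox_cut = rng.uniform(0, 1, 60)          # fractional NOx emission reduction

# Synthetic "model output": VOC-limited behaviour (small NOx cuts raise O3).
o3 = 90 - 25 * voc_cut + 10 * nox_cut - 18 * nox_cut**2 \
     - 8 * voc_cut * nox_cut + rng.normal(0, 1.0, 60)

# Quadratic response surface in the two control variables.
X = np.column_stack([np.ones_like(voc_cut), voc_cut, nox_cut,
                     voc_cut**2, nox_cut**2, voc_cut * nox_cut])
coef, *_ = np.linalg.lstsq(X, o3, rcond=None)

def predict(v, n):
    return coef @ np.array([1.0, v, n, v**2, n**2, v * n])

print("predicted O3 at 40% VOC cut, 10% NOx cut:", round(predict(0.4, 0.10), 1))
print("predicted O3 at 40% VOC cut, 75% NOx cut:", round(predict(0.4, 0.75), 1))
```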

  10. Typicality, graded membership, and vagueness.

    Science.gov (United States)

    Hampton, James A

    2007-05-01

    This paper addresses theoretical problems arising from the vagueness of language terms, and intuitions of the vagueness of the concepts to which they refer. It is argued that the central intuitions of prototype theory are sufficient to account for both typicality phenomena and psychological intuitions about degrees of membership in vaguely defined classes. The first section explains the importance of the relation between degrees of membership and typicality (or goodness of example) in conceptual categorization. The second and third sections address arguments advanced by Osherson and Smith (1997), and Kamp and Partee (1995), that the two notions of degree of membership and typicality must relate to fundamentally different aspects of conceptual representations. A version of prototype theory, the Threshold Model, is proposed to counter these arguments, and three possible solutions to the problems of logical self-contradiction and tautology for vague categorizations are outlined. In the final section graded membership is related to the social construction of conceptual boundaries maintained through language use.

  11. Multilevel Modeling of the Performance Variance

    Directory of Open Access Journals (Sweden)

    Alexandre Teixeira Dias

    2012-12-01

    Full Text Available Focusing on the identification of the role played by Industry in the relations between Corporate Strategic Factors and Performance, the hierarchical multilevel modeling method was adopted to measure and analyze the relations between the variables that comprise each level of analysis. The adequacy of the multilevel perspective to the study of the proposed relations was identified, and the relative importance analysis points to the lower relevance of industry as a moderator of the effects of corporate strategic factors on performance when the latter is measured by means of return on assets, and to the finding that industry does not moderate the relations between corporate strategic factors and Tobin's Q. The main conclusions of the research are that the organization's choices in terms of corporate strategy have a considerable influence and play a key role in determining the performance level, but that industry should be considered when analyzing performance variation, whether or not it moderates the relations between corporate strategic factors and performance.

  12. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-02-01

    Full Text Available Orientation: The article discussed the importance of rigour in credit risk assessment. Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models that incorporated the Taiwan Corporate Credit Risk Index (TCRI), micro- and also macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study was that three models were developed to predict corporate firms’ defaults based on different microeconomic and macroeconomic factors such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics in examining the robustness of the predictive power of these factors.
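
    A minimal sketch of the modelling approach named in the abstract, logistic regression for default prediction with micro- and macroeconomic covariates, is shown below. The data are synthetic and the variable names (TCRI, asset growth, stock index, GDP growth) merely echo those mentioned, so neither the coefficients nor the AUC correspond to the study's results.

```python
# Hedged sketch: logistic-regression default scoring with micro- and macro-
# economic covariates. Data are synthetic; variable names only echo the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000
tcri = rng.integers(1, 10, n)                # credit-risk index (higher = riskier)
asset_growth = rng.normal(0.05, 0.15, n)     # firm asset growth rate
stock_index = rng.normal(0.02, 0.10, n)      # market index return
gdp_growth = rng.normal(0.03, 0.02, n)       # macroeconomic covariate

# Synthetic default mechanism (for illustration only).
logit = -4.0 + 0.45 * tcri - 2.0 * asset_growth - 1.5 * gdp_growth
default = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([tcri, asset_growth, stock_index, gdp_growth])
model = LogisticRegression(max_iter=1000).fit(X, default)

pred = model.predict_proba(X)[:, 1]
print("coefficients:", dict(zip(["TCRI", "asset_growth", "stock_index", "gdp_growth"],
                                model.coef_[0].round(3))))
print("in-sample AUC:", round(roc_auc_score(default, pred), 3))
```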

  13. Inconsistent strategies to spin up models in CMIP5: implications for ocean biogeochemical model performance assessment

    Science.gov (United States)

    Seferian, R.; Gehlen, M.; Bopp, L.; Resplandy, L.; Orr, J. C.; Marti, O.

    2016-12-01

    During the fifth phase of the Coupled Model Intercomparison Project (CMIP5), substantial efforts were made to systematically assess the skill of Earth system models against available modern observations. However, most of these skill-assessment approaches can be considered "blind", given that they were applied without considering models' specific characteristics and treat models a priori as independent of observations. Indeed, since these models are typically initialized from observations, the spin-up procedure (e.g. the length of time for which the model has been run since initialization, and therefore the degree to which it has approached its own equilibrium) has the potential to exert a significant control over the skill-assessment metrics calculated for each model. Here, we explore how the large diversity in spin-up protocols used for marine biogeochemistry in CMIP5 Earth system models (ESMs) contributes to model-to-model differences in the simulated fields. We focus on the amplification of biases in selected biogeochemical fields (O2, NO3, Alk-DIC) as a function of spin-up duration in a dedicated 500-year-long spin-up simulation performed with IPSL-CM5A-LR, as well as in an ensemble of 24 CMIP5 ESMs. We demonstrate that a relationship between spin-up duration and skill-assessment metrics emerges from the results of a single model and holds when confronted with a larger ensemble of CMIP5 models. This shows that drift in biogeochemical fields has implications for performance assessment, in addition to possibly influencing estimates of climate change impact. Our study suggests that differences in spin-up protocols could explain a substantial part of model disparities, constituting a source of model-to-model uncertainty. This requires more attention in future model intercomparison exercises in order to provide quantitatively more correct ESM results on marine biogeochemistry and carbon cycle feedbacks.

  14. Typical Logistical Processes

    Directory of Open Access Journals (Sweden)

    Igor Trupac

    2012-10-01

    Full Text Available For modern global economic activity, changes in the structure of goods, networks, technical and technological development and increased competence are significant, which requires new solutions and a new mode of thinking. The competitive model which relied in the past on product innovation will have to be largely supplemented by process innovations that add greater value for customers. The basis for competing today and in the future will be competitive advantage which enhances product excellence as well as process excellence.

  15. CASTOR detector Model, objectives and simulated performance

    CERN Document Server

    Angelis, Aris L S; Bartke, Jerzy; Bogolyubsky, M Yu; Chileev, K; Erine, S; Gladysz-Dziadus, E; Kharlov, Yu V; Kurepin, A B; Lobanov, M O; Maevskaya, A I; Mavromanolakis, G; Nicolis, N G; Panagiotou, A D; Sadovsky, S A; Wlodarczyk, Z

    2001-01-01

    We present a phenomenological model describing the formation and evolution of a Centauro fireball in the baryon-rich region in nucleus-nucleus interactions in the upper atmosphere and at the LHC. The small particle multiplicity and imbalance of electromagnetic and hadronic content characterizing a Centauro event and also the strongly penetrating particles (assumed to be strangelets) frequently accompanying them can be naturally explained. We describe the CASTOR calorimeter, a subdetector of the ALICE experiment dedicated to the search for Centauro in the very forward, baryon-rich region of central Pb+Pb collisions at the LHC. The basic characteristics and simulated performance of the calorimeter are presented. (22 refs).

  16. CASTOR detector. Model, objectives and simulated performance

    Energy Technology Data Exchange (ETDEWEB)

    Angelis, A. L. S.; Mavromanolakis, G.; Panagiotou, A. D. [University of Athens, Nuclear and Particle Physics Division, Athens (Greece); Aslanoglou, X.; Nicolis, N. [Ioannina Univ., Ioannina (Greece). Dept. of Physics; Bartke, J.; Gladysz-Dziadus, E. [Institute of Nuclear Physics, Cracow (Poland); Lobanov, M.; Erine, S.; Kharlov, Y.V.; Bogolyubsky, M.Y. [Institute for High Energy Physics, Protvino (Russian Federation); Kurepin, A.B.; Chileev, K. [Institute for Nuclear Research, Moscow (Russian Federation); Wlodarczyk, Z. [Pedagogical University, Institute of Physics, Kielce (Poland)

    2001-10-01

    A phenomenological model is presented describing the formation and evolution of a Centauro fireball in the baryon-rich region in nucleus-nucleus interactions in the upper atmosphere and at the LHC. The small particle multiplicity and imbalance of electromagnetic and hadronic content characterizing a Centauro event, and also the strongly penetrating particles (assumed to be strangelets) frequently accompanying them, can be naturally explained. The CASTOR calorimeter is described, a subdetector of the ALICE experiment dedicated to the search for Centauro in the very forward, baryon-rich region of central Pb+Pb collisions at the LHC. The basic characteristics and simulated performance of the calorimeter are presented.

  17. Computer modeling of thermoelectric generator performance

    Science.gov (United States)

    Chmielewski, A. B.; Shields, V.

    1982-01-01

    Features of the DEGRA 2 computer code for simulating the operations of a spacecraft thermoelectric generator are described. The code models the physical processes occurring during operation. Input variables include the thermoelectric couple geometry and composition, the thermoelectric materials' properties, interfaces and insulation in the thermopile, the heat source characteristics, mission trajectory, and generator electrical requirements. Time steps can be specified and sublimation of the leg and hot shoe is accounted for, as are shorts between legs. Calculations are performed for conduction, Peltier, Thomson, and Joule heating, the cold junction can be adjusted for solar radiation, and the legs of the thermoelectric couple are segmented to enhance the approximation accuracy. A trial run covering 18 couple modules yielded data with 0.3% accuracy with regard to test data. The model has been successful with selenide materials, SiGe, and SiN4, with output of all critical operational variables.
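
    As a much-simplified, hedged companion to the DEGRA 2 description above, the sketch below evaluates the quasi-static power output of a single thermoelectric couple from textbook relations (open-circuit voltage V = S*dT, series current I = V/(R_int + R_load), delivered power P = I^2*R_load). The Seebeck coefficient, temperatures and resistances are illustrative assumptions, and none of DEGRA 2's sublimation, shorting or transient effects are modelled.

```python
# Hedged sketch: quasi-static power output of a single thermoelectric couple
# from textbook relations. Far simpler than the DEGRA 2 code described above;
# all parameter values are illustrative.

def couple_power(seebeck_V_per_K, t_hot_K, t_cold_K, r_internal_ohm, r_load_ohm):
    dT = t_hot_K - t_cold_K
    voc = seebeck_V_per_K * dT                      # open-circuit voltage
    current = voc / (r_internal_ohm + r_load_ohm)   # series circuit current
    return current**2 * r_load_ohm                  # power delivered to the load

if __name__ == "__main__":
    # Illustrative SiGe-like couple: 400 uV/K, 1273 K hot shoe, 573 K cold junction,
    # matched load resistance.
    p = couple_power(400e-6, 1273.0, 573.0, 0.1, 0.1)
    print(f"power per couple: {p * 1000:.0f} mW")
```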

  18. Optical Performance Modeling of FUSE Telescope Mirror

    Science.gov (United States)

    Saha, Timo T.; Ohl, Raymond G.; Friedman, Scott D.; Moos, H. Warren

    2000-01-01

    We describe the Metrology Data Processor (METDAT), the Optical Surface Analysis Code (OSAC), and their application to the image evaluation of the Far Ultraviolet Spectroscopic Explorer (FUSE) mirrors. The FUSE instrument - designed and developed by the Johns Hopkins University and launched in June 1999 - is an astrophysics satellite which provides high resolution spectra (lambda/Delta(lambda) = 20,000 - 25,000) in the wavelength region from 90.5 to 118.7 nm. The FUSE instrument is comprised of four co-aligned, normal incidence, off-axis parabolic mirrors, four Rowland circle spectrograph channels with holographic gratings, and delay line microchannel plate detectors. The OSAC code provides a comprehensive analysis of optical system performance, including the effects of optical surface misalignments, low spatial frequency deformations described by discrete polynomial terms, mid- and high-spatial frequency deformations (surface roughness), and diffraction due to the finite size of the aperture. Both normal incidence (traditionally infrared, visible, and near ultraviolet mirror systems) and grazing incidence (x-ray mirror systems) systems can be analyzed. The code also properly accounts for reflectance losses on the mirror surfaces. Low frequency surface errors are described in OSAC by using Zernike polynomials for normal incidence mirrors and Legendre-Fourier polynomials for grazing incidence mirrors. The scatter analysis of the mirror is based on scalar scatter theory. The program accepts simple autocovariance (ACV) function models or power spectral density (PSD) models derived from mirror surface metrology data as input to the scatter calculation. The end product of the program is a user-defined pixel array containing the system Point Spread Function (PSF). The METDAT routine is used in conjunction with the OSAC program. This code reads in laboratory metrology data in a normalized format. The code then fits the data using Zernike polynomials for normal incidence

  19. The COD Model: Simulating Workgroup Performance

    Science.gov (United States)

    Biggiero, Lucio; Sevi, Enrico

    Though the question of the determinants of workgroup performance is one of the most central in organization science, precise theoretical frameworks and formal demonstrations are still missing. In order to fill this gap, the COD agent-based simulation model is presented here and used to study the effects of task interdependence and bounded rationality on workgroup performance. The first relevant finding is an algorithmic demonstration of the ordering of interdependencies in terms of complexity, showing that the parallel mode is the simplest, followed by the sequential and then by the reciprocal. This result is far from being new in organization science, but what is remarkable is that it now has the strength of an algorithmic demonstration instead of being based on the authoritativeness of some scholar or on some episodic empirical finding. The second important result is that the progressive introduction of realistic limits to agents' rationality dramatically reduces workgroup performance and leads to a rather interesting result: when agents' rationality is severely bounded, simple norms work better than complex norms. The third main finding is that when the complexity of interdependence is high, the appropriate coordination mechanism is agents' direct and active collaboration, which means teamwork.

  20. Love channel-waves dispersion characteristic analysis of typical coal models

    Institute of Scientific and Technical Information of China (English)

    程建远; 姬广忠; 朱培民

    2012-01-01

    In order to study the dispersion characteristics of Love channel-waves under different geological conditions, four kinds of typical 2D seismic-geological models were designed: models with varying coal thickness (3, 5, 10, 15 and 20 m); models with different wall rock (high-velocity and low-velocity wall rock) and a constant coal thickness of 10 m; a 10 m coal seam containing a fault with a 5 m throw; and a 10 m coal seam containing sand bodies of 5 m×5 m, 5 m×10 m, 5 m×20 m and 5 m×50 m. These models were numerically simulated with the SH wave equation. The wave-field characteristics of the Love channel-waves were analysed, and a Fourier transform was applied to the synthetic seismic records to obtain the channel-wave dispersion curves. The results show that coal thickness mainly influences the energy distribution of the dispersion curves of the different modes, the minimum shear-wave velocity of the top and bottom wall rock controls the upper limit of the dispersion curves, sand bodies make the dispersion curves diffuse, and faults have only a small effect on channel-wave dispersion.

  1. Exploring Modeling Options and Conversion of Average Response to Appropriate Vibration Envelopes for a Typical Cylindrical Vehicle Panel with Rib-stiffened Design

    Science.gov (United States)

    Harrison, Phil; LaVerde, Bruce; Teague, David

    2009-01-01

    Although applications for Statistical Energy Analysis (SEA) techniques are more widely used in the aerospace industry today, opportunities to anchor the response predictions using measured data from a flight-like launch vehicle structure are still quite valuable. Response and excitation data from a ground acoustic test at the Marshall Space Flight Center permitted the authors to compare and evaluate several modeling techniques available in the SEA module of the commercial code VA One. This paper provides an example of vibration response estimates developed using different modeling approaches to both approximate and bound the response of a flight-like vehicle panel. Since both vibration response and acoustic levels near the panel were available from the ground test, the evaluation provided an opportunity to learn how well the different modeling options can match band-averaged spectra developed from the test data. Additional work was performed to understand the spatial averaging of the measurements across the panel from measured data. Finally, two approaches were evaluated and compared for converting the statistical average response results output by an SEA analysis into a more useful envelope of response spectra appropriate for specifying design and test vibration levels for a new vehicle.

  2. Performance Management: A model and research agenda

    NARCIS (Netherlands)

    D.N. den Hartog (Deanne); J.P.P.E.F. Boselie (Paul); J. Paauwe (Jaap)

    2004-01-01

    Performance Management deals with the challenge organizations face in defining, measuring and stimulating employee performance with the ultimate goal to improve organizational performance. Thus, Performance Management involves multiple levels of analysis and is clearly linked to the topi

  3. Development of an integrated performance measurement (PM) model for pharmaceutical industry.

    Science.gov (United States)

    Shabaninejad, Hosein; Mirsalehian, Mohammad Hossein; Mehralian, Gholamhossein

    2014-01-01

    With respect to the special characteristics of the pharmaceutical industry and the lack of reported performance measures, this study tries to design an integrated PM model for pharmaceutical companies. To generate this model, we first identified the key performance indicators (KPIs) and the key result indicators (KRIs) of a typical pharmaceutical company. Then, based on experts' opinions, the identified indicators were ranked with respect to their importance, and the most important of them were selected for use in the proposed model. In this model, we identified 25 KPIs and 12 KRIs. Although this model is mostly appropriate for measuring the performance of pharmaceutical companies, it can also be used to measure the performance of other industries with some modifications. We strongly recommend pharmaceutical managers to link these indicators with their payment and reward systems, which can dramatically affect the performance of employees and, consequently, their organization's success.

  4. DKIST Polarization Modeling and Performance Predictions

    Science.gov (United States)

    Harrington, David

    2016-05-01

    Calibrating the Mueller matrices of large aperture telescopes and associated coude instrumentation requires astronomical sources and several modeling assumptions to predict the behavior of the system polarization with field of view, altitude, azimuth and wavelength. The Daniel K Inouye Solar Telescope (DKIST) polarimetric instrumentation requires very high accuracy calibration of a complex coude path with an off-axis f/2 primary mirror, time dependent optical configurations and substantial field of view. Polarization predictions across a diversity of optical configurations, tracking scenarios, slit geometries and vendor coating formulations are critical to both construction and continued operations efforts. Recent daytime sky based polarization calibrations of the 4m AEOS telescope and HiVIS spectropolarimeter on Haleakala have provided system Mueller matrices over full telescope articulation for a 15-reflection coude system. AEOS and HiVIS are a DKIST analog with a many-fold coude optical feed and similar mirror coatings creating 100% polarization cross-talk with altitude, azimuth and wavelength. Polarization modeling predictions using Zemax have successfully matched the altitude-azimuth-wavelength dependence on HiVIS with the few percent amplitude limitations of several instrument artifacts. Polarization predictions for coude beam paths depend greatly on modeling the angle-of-incidence dependences in powered optics and the mirror coating formulations. A 6 month HiVIS daytime sky calibration plan has been analyzed for accuracy under a wide range of sky conditions and data analysis algorithms. Predictions of polarimetric performance for the DKIST first-light instrumentation suite have been created under a range of configurations. These new modeling tools and polarization predictions have substantial impact for the design, fabrication and calibration process in the presence of manufacturing issues, science use-case requirements and ultimate system calibration

  5. A reformer performance model for fuel cell applications

    Science.gov (United States)

    Sandhu, S. S.; Saif, Y. A.; Fellner, J. P.

    A performance model for a reformer, consisting of the catalytic partial oxidation (CPO), high- and low-temperature water-gas shift (HTWGS and LTWGS), and preferential oxidation (PROX) reactors, has been formulated. The model predicts the composition and temperature of the hydrogen-rich reformed fuel-gas mixture needed for the fuel cell applications. The mathematical model equations, based on the principles of classical thermodynamics and chemical kinetics, were implemented into a computer program. The resulting software was employed to calculate the chemical species molar flow rates and the gas mixture stream temperature for the steady-state operation of the reformer. Typical computed results, such as the gas mixture temperature at the CPO reactor exit and the profiles of the fractional conversion of carbon monoxide, temperature, and mole fractions of the chemical species as a function of the catalyst weight in the HTWGS, LTWGS, and PROX reactors, are here presented at the carbon-to-oxygen atom ratio (C/O) of 1 for the feed mixture of n-decane (fuel) and dry air (oxidant).
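
    One ingredient such a reformer model needs is the temperature dependence of the water-gas-shift equilibrium. The hedged sketch below uses Moe's common correlation Kp ~ exp(4577.8/T - 4.33) and a bisection solve for the equilibrium CO conversion at representative high- and low-temperature shift conditions; the feed composition is an illustrative assumption, and the paper's kinetic rate expressions for the CPO, HTWGS, LTWGS and PROX reactors are not reproduced.

```python
# Hedged sketch: water-gas-shift equilibrium CO conversion vs. temperature,
# one ingredient of a reformer model. Uses Moe's correlation
# Kp ~ exp(4577.8/T - 4.33); the feed composition is illustrative, and the
# paper's kinetic rate expressions are not reproduced.
import math

def kp_wgs(T_kelvin: float) -> float:
    """Approximate WGS equilibrium constant (Moe correlation)."""
    return math.exp(4577.8 / T_kelvin - 4.33)

def equilibrium_co_conversion(n_co, n_h2o, n_co2, n_h2, T_kelvin, tol=1e-10):
    """Bisection for the extent x of CO + H2O <-> CO2 + H2 (moles of CO converted)."""
    kp = kp_wgs(T_kelvin)
    lo, hi = 0.0, min(n_co, n_h2o) * (1 - 1e-12)

    def residual(x):
        return (n_co2 + x) * (n_h2 + x) - kp * (n_co - x) * (n_h2o - x)

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi) / n_co

if __name__ == "__main__":
    # Illustrative reformate feed (mole basis): CO, H2O, CO2, H2
    feed = dict(n_co=0.10, n_h2o=0.30, n_co2=0.08, n_h2=0.40)
    for T in (673.0, 473.0):   # roughly HT and LT shift temperatures (K)
        x = equilibrium_co_conversion(T_kelvin=T, **feed)
        print(f"T = {T:5.0f} K: equilibrium CO conversion = {x:.2%}")
```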

  6. Global Processing Speed in Children with Low Reading Ability and in Children and Adults with Typical Reading Ability: Exploratory Factor Analytic Models

    Science.gov (United States)

    Peter, Beate; Matsushita, Mark; Raskind, Wendy H.

    2011-01-01

    Purpose: To investigate processing speed as a latent dimension in children with dyslexia and children and adults with typical reading skills. Method: Exploratory factor analysis (FA) was based on a sample of multigenerational families, each ascertained through a child with dyslexia. Eleven measures--6 of them timed--represented verbal and…

  7. Numerical modeling capabilities to predict repository performance

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-01

    This report presents a summary of current numerical modeling capabilities that are applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes that are available in-house, within Golder Associates and Lawrence Livermore Laboratories; as well as those that are generally available within the industry and universities. The first listing of programs are in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing of programs are divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste; but several or most of them may be so used.

  8. Performance model to predict overall defect density

    Directory of Open Access Journals (Sweden)

    J Venkatesh

    2012-08-01

    Management by metrics is the expectation from IT service providers to stay as a differentiator. Given a project and the associated parameters and dynamics, the behaviour and outcome need to be predicted. There is a lot of focus on the end state and on minimizing defect leakage as much as possible. In most cases, the actions taken are re-active and come too late in the life cycle. Root cause analysis and corrective actions can be implemented only to the benefit of the next project. The focus has to shift left, towards the execution phase, rather than waiting for lessons to be learnt post implementation. How do we pro-actively predict defect metrics and have a preventive action plan in place? This paper illustrates a process performance model to predict overall defect density based on data from projects in an organization.

  9. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe

    2015-04-27

    Many processes in engineering and the sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has led to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds on these ideas, and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three-dimensional results of the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equations will be presented, which corroborate the theoretical findings and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.

  10. A modelling study of long term green roof retention performance.

    Science.gov (United States)

    Stovin, Virginia; Poë, Simon; Berretta, Christian

    2013-12-15

    This paper outlines the development of a conceptual hydrological flux model for the long term continuous simulation of runoff and drought risk for green roof systems. A green roof's retention capacity depends upon its physical configuration, but it is also strongly influenced by local climatic controls, including the rainfall characteristics and the restoration of retention capacity associated with evapotranspiration during dry weather periods. The model includes a function that links evapotranspiration rates to substrate moisture content, and is validated against observed runoff data. The model's application to typical extensive green roof configurations is demonstrated with reference to four UK locations characterised by contrasting climatic regimes, using 30-year rainfall time-series inputs at hourly simulation time steps. It is shown that retention performance is dependent upon local climatic conditions. Volumetric retention ranges from 0.19 (cool, wet climate) to 0.59 (warm, dry climate). Per event retention is also considered, and it is demonstrated that retention performance decreases significantly when high return period events are considered in isolation. For example, in Sheffield the median per-event retention is 1.00 (many small events), but the median retention for events exceeding a 1 in 1 yr return period threshold is only 0.10. The simulation tool also provides useful information about the likelihood of drought periods, for which irrigation may be required. A sensitivity study suggests that green roofs with reduced moisture-holding capacity and/or low evapotranspiration rates will tend to offer reduced levels of retention, whilst high moisture-holding capacity and low evapotranspiration rates offer the strongest drought resistance.
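
    The conceptual flux model described above can be schematized as a single moisture store in which evapotranspiration is scaled by substrate moisture content and runoff occurs only once the moisture-holding capacity is exceeded. The sketch below is such a simplified hourly water balance with assumed capacity and potential ET values, not the authors' calibrated formulation.

    # Minimal hourly green-roof retention sketch: one moisture store with
    # moisture-dependent evapotranspiration. All parameters are illustrative.
    import numpy as np

    def simulate_green_roof(rain_mm, pet_mm, capacity_mm=30.0, s0_mm=15.0):
        """Return (hourly runoff, volumetric retention) for hourly rain and PET inputs."""
        s = s0_mm
        runoff = np.zeros_like(rain_mm)
        for i, (p, pet) in enumerate(zip(rain_mm, pet_mm)):
            et = pet * (s / capacity_mm)        # ET limited by available moisture
            s = max(s - et, 0.0) + p            # deplete, then add rainfall
            if s > capacity_mm:                 # excess leaves the roof as runoff
                runoff[i] = s - capacity_mm
                s = capacity_mm
        total_rain = rain_mm.sum()
        retention = 1.0 - runoff.sum() / total_rain if total_rain > 0 else float("nan")
        return runoff, retention

    rng = np.random.default_rng(1)
    hours = 24 * 365
    rain = np.where(rng.random(hours) < 0.05, rng.exponential(2.0, hours), 0.0)
    pet = np.full(hours, 0.05)                  # ~1.2 mm/day potential ET (placeholder)
    _, retention = simulate_green_roof(rain, pet)
    print(f"Annual volumetric retention: {retention:.2f}")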

  11. Ecological niche modeling in Maxent: the importance of model complexity and the performance of model selection criteria.

    Science.gov (United States)

    Warren, Dan L; Seifert, Stephanie N

    2011-03-01

    Maxent, one of the most commonly used methods for inferring species distributions and environmental tolerances from occurrence data, allows users to fit models of arbitrary complexity. Model complexity is typically constrained via a process known as L1 regularization, but at present little guidance is available for setting the appropriate level of regularization, and the effects of inappropriately complex or simple models are largely unknown. In this study, we demonstrate the use of information criterion approaches to setting regularization in Maxent, and we compare models selected using information criteria to models selected using other criteria that are common in the literature. We evaluate model performance using occurrence data generated from a known "true" initial Maxent model, using several different metrics for model quality and transferability. We demonstrate that models that are inappropriately complex or inappropriately simple show reduced ability to infer habitat quality, reduced ability to infer the relative importance of variables in constraining species' distributions, and reduced transferability to other time periods. We also demonstrate that information criteria may offer significant advantages over the methods commonly used in the literature.
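
    The information-criterion approach the study advocates reduces, in its simplest form, to ranking candidate models by their corrected AIC given a log-likelihood at the occurrence points and a parameter count. The helper below is a generic sketch of that ranking; the regularization labels, log-likelihoods and parameter counts are made-up placeholders, not values from fitted Maxent models.

    # Generic AICc ranking sketch; the numbers stand in for values computed from
    # fitted Maxent models at the occurrence localities.
    def aicc(log_lik, k, n):
        """Corrected Akaike Information Criterion."""
        aic = -2.0 * log_lik + 2.0 * k
        return aic + (2.0 * k * (k + 1)) / (n - k - 1)

    n_occurrences = 120
    candidates = {            # model label: (log-likelihood, number of parameters)
        "regularization=0.5": (-842.3, 34),
        "regularization=1.0": (-851.7, 21),
        "regularization=2.0": (-869.9, 12),
    }
    scores = {name: aicc(ll, k, n_occurrences) for name, (ll, k) in candidates.items()}
    best = min(scores, key=scores.get)
    for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
        print(f"{name:20s}  AICc = {score:8.1f}  dAICc = {score - scores[best]:6.1f}")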

  12. PV Performance Modeling Methods and Practices: Results from the 4th PV Performance Modeling Collaborative Workshop.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    In 2014, the IEA PVPS Task 13 added the PVPMC as a formal activity to its technical work plan for 2014-2017. The goal of this activity is to expand the reach of the PVPMC to a broader international audience and help to reduce PV performance modeling uncertainties worldwide. One of the main deliverables of this activity is to host one or more PVPMC workshops outside the US to foster more international participation within this collaborative group. This report reviews the results of the first in a series of these joint IEA PVPS Task 13/PVPMC workshops. The 4th PV Performance Modeling Collaborative Workshop was held in Cologne, Germany at the headquarters of TÜV Rheinland on October 22-23, 2015.

  13. How well can we forecast future model error and uncertainty by mining past model performance data

    Science.gov (United States)

    Solomatine, Dimitri

    2016-04-01

    Consider a hydrological model Y(t) = M(X(t), P), where X = vector of inputs; P = vector of parameters; Y = model output (typically flow); t = time. In cases where there is enough past data on the performance of model M, it is possible to use this data to build a (data-driven) model EC of the error of model M. This model EC will be able to forecast the error E when a new input X is fed into model M; subtracting E from the model prediction Y then gives a better estimate of Y. Model EC is usually called the error corrector (in meteorology, a bias corrector). However, we may go further in characterizing model deficiencies, and instead of using the error (a real value) we may consider a more sophisticated, namely probabilistic, characterization. So instead of a model EC of the error of model M, it is also possible to build a model U of the uncertainty of model M; if uncertainty is described by the model error distribution D, this model will calculate its properties - mean, variance, other moments, and quantiles. The general form of this model could be: D = U(RV), where RV = vector of relevant variables having influence on model uncertainty (to be identified e.g. by mutual information analysis); D = vector of variables characterizing the error distribution (typically, two or more quantiles). There is one aspect which is not always explicitly mentioned in uncertainty analysis work. In our view it is important to distinguish the following main types of model uncertainty: 1. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. its uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on this data. Here the following methods can be mentioned: (a) quantile regression (QR
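
    One of the methods the abstract mentions, quantile regression, can be sketched directly from the pinball (tilted absolute) loss: for each quantile of interest, the loss below is minimized over the coefficients of a linear model mapping the relevant variables RV to an error quantile. The predictor and error data here are synthetic placeholders, not output of any particular hydrological model.

    # Sketch of quantile regression for model-error quantiles via the pinball loss.
    # The single relevant variable and the errors are synthetic placeholders.
    import numpy as np
    from scipy.optimize import minimize

    def pinball_loss(beta, X, e, q):
        """Tilted absolute loss for quantile q of the error e given predictors X."""
        resid = e - X @ beta
        return np.mean(np.maximum(q * resid, (q - 1.0) * resid))

    rng = np.random.default_rng(0)
    rv = rng.uniform(0, 50, 500)                            # one relevant variable (e.g. recent rainfall)
    errors = 0.1 * rv + rng.normal(0.0, 1.0 + 0.05 * rv)    # heteroscedastic model error
    X = np.column_stack([np.ones_like(rv), rv])

    quantile_models = {}
    for q in (0.05, 0.5, 0.95):
        res = minimize(pinball_loss, x0=np.zeros(2), args=(X, errors, q), method="Nelder-Mead")
        quantile_models[q] = res.x

    x_new = np.array([1.0, 30.0])                           # forecast the error distribution at RV = 30
    for q, beta in quantile_models.items():
        print(f"q = {q:.2f}: predicted error quantile = {x_new @ beta:6.2f}")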

  14. Performance Improvement/HPT Model: Guiding the Process

    Science.gov (United States)

    Dessinger, Joan Conway; Moseley, James L.; Van Tiem, Darlene M.

    2012-01-01

    This commentary is part of an ongoing dialogue that began in the October 2011 special issue of "Performance Improvement"--Exploring a Universal Performance Model for HPT: Notes From the Field. The performance improvement/HPT (human performance technology) model represents a unifying process that helps accomplish successful change, create…

  15. HANDOVER MANAGEABILITY AND PERFORMANCE MODELING IN

    African Journals Online (AJOL)

    SOFTLINKS DIGITAL

    April 2014. 103 engineering and development including ... formal model management team must rely on guess work ... model provides a systematic method for comparing ... In 18th Annual Software Engineering and Knowledge Engineering ...

  16. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels;

    2003-01-01

    A model for a flue gas boiler covering the flue gas and the water-/steam side has been formulated. The model has been formulated as a number of sub models that are merged into an overall model for the complete boiler. Sub models have been defined for the furnace, the convection zone (split in 2: a zone submerged in water and a zone covered by steam), a model for the material in the boiler (the steel) and 2 models for resp. the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a number of Differential-Algebraic-Equation systems (DAE). Subsequently MatLab/Simulink has been applied for carrying out the simulations. To be able to verify the simulated results, experiments have been carried out on a full scale boiler plant.

  17. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, Kim; Karstensen, Claus; Condra, Thomas Joseph;

    2003-01-01

    A model for a flue gas boiler covering the flue gas and the water-/steam side has been formulated. The model has been formulated as a number of sub models that are merged into an overall model for the complete boiler. Sub models have been defined for the furnace, the convection zone (split in 2: a zone submerged in water and a zone covered by steam), a model for the material in the boiler (the steel) and 2 models for resp. the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a number of Differential-Algebraic-Equation systems (DAE). Subsequently MatLab/Simulink has been applied for carrying out the simulations. To be able to verify the simulated results, an experiment has been carried out on a full scale boiler plant.

  18. Probability and Statistics in Sensor Performance Modeling

    Science.gov (United States)

    2010-12-01

    Acoustic or electromagnetic waves are scattered by both objects and turbulent wind. A version of the Rice-Nakagami model (specifically with a ... Gaussian, lognormal, exponential, gamma, and the transformed Rice-Nakagami) as well as a discrete model. (Other examples of statistical models

  19. On the significance of the noise model for the performance of a linear MPC in closed-loop operation

    DEFF Research Database (Denmark)

    Hagdrup, Morten; Boiroux, Dimitri; Mahmoudi, Zeinab

    2016-01-01

    models typically means fewer parameters to identify. Systematic tuning of such controllers is discussed. Simulation studies are conducted for linear time-invariant systems showing that choosing a noise model of low order is beneficial for closed-loop performance. (C) 2016, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.

  20. Performance Appraisal: A New Model for Academic Advisement.

    Science.gov (United States)

    Hazleton, Vincent; Tuttle, George E.

    1981-01-01

    Presents the performance appraisal model for student advisement, a centralized developmental model that focuses on the content and process of advisement. The model has three content objectives: job definition, performance assessment, and goal setting. Operation of the model is described. Benefits and potential limitations are identified. (Author)

  1. New Metacognitive Model for Human Performance Technology

    Science.gov (United States)

    Turner, John R.

    2011-01-01

    Addressing metacognitive functions has been shown to improve performance at the individual, team, group, and organizational levels. Metacognition is beginning to surface as an added cognate discipline for the field of human performance technology (HPT). Advances from research in the fields of cognition and metacognition offer a place for HPT to…

  3. Building performance modelling for sustainable building design

    Directory of Open Access Journals (Sweden)

    Olufolahan Oduyemi

    2016-12-01

    The output revealed that BPM delivers information needed for enhanced design and building performance. Recommendations such as the establishment of proper mechanisms to monitor the performance of BPM related construction are suggested to allow for its continuous implementation. This research consolidates collective movements towards wider implementation of BPM and forms a base for developing a sound BIM strategy and guidance.

  4. Performance of Typical Structures of A Type of Aircraft after Natural Exposure in a Coastal Airport%沿海机场某型飞机典型结构件自然曝晒试验研究

    Institute of Scientific and Technical Information of China (English)

    张勇; 王晨光; 卞贵学; 王安东

    2016-01-01

    Objective: To study the corrosion mechanism of typical structures of a type of aircraft. Methods: Natural exposure tests of typical aircraft structures were carried out at the Lingshui coastal airport in Hainan. The surface morphology and the composition of corrosion products were obtained by means of SEM and XRD, and were used to evaluate the corrosion and aging performance of the substrate and its overlying corrosion protection system, and to obtain the changing law of the protection performance at different parts of the aircraft. Results: Coating aging and peeling were most serious at specimen edges, at parts in direct contact with bolts, and at specimen joints, and the substrate exposed after coating loss also showed relatively serious corrosion; in regions far from bolts, edges and fractures, the coating aged and peeled uniformly and the degree of corrosion was lighter. Conclusion: Under the action of the same environmental spectrum, the types and mechanisms of corrosion differ at different parts of the critical connecting structures of the load-bearing panel.

  5. Individualized Biomathematical Modeling of Fatigue and Performance

    Science.gov (United States)

    2008-05-29

    Prior information about the initial state parameters may be acquired by other means, though. For instance, actigraphy could be used to track sleep. ... (Performing organization: Sleep and Performance Research Center, Washington State University, Spokane, WA.)

  6. Detailed Performance Model for Photovoltaic Systems: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tian, H.; Mancilla-David, F.; Ellis, K.; Muljadi, E.; Jenkins, P.

    2012-07-01

    This paper presents a modified current-voltage relationship for the single diode model. The single-diode model has been derived from the well-known equivalent circuit for a single photovoltaic cell. The modification presented in this paper accounts for both parallel and series connections in an array.
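
    The single-diode relationship being modified has the familiar implicit form I = Iph - I0[exp((V + I*Rs)/(n*Vt)) - 1] - (V + I*Rs)/Rsh. As a hedged illustration (generic parameter values, and a simple series/parallel scaling rather than the paper's modified array relationship), the sketch below solves the cell-level equation numerically and scales it to an array of Np parallel strings of Ns series cells.

    # Single-diode I-V sketch for one cell, scaled to Ns series cells and Np
    # parallel strings. Parameter values are generic placeholders.
    import numpy as np
    from scipy.optimize import brentq

    q_e, k_B, T = 1.602e-19, 1.381e-23, 298.15
    Vt = k_B * T / q_e                 # thermal voltage, ~25.7 mV
    Iph, I0 = 8.0, 1e-9                # photocurrent and diode saturation current, A
    n, Rs, Rsh = 1.3, 0.005, 15.0      # ideality factor, series and shunt resistance (per cell)

    def cell_current(V):
        """Solve the implicit single-diode equation for cell current at voltage V."""
        f = lambda I: Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1.0) \
                          - (V + I * Rs) / Rsh - I
        return brentq(f, -1.0, Iph + 1.0)

    Ns, Np = 60, 4                     # series cells per string, parallel strings
    V_cell = np.linspace(0.0, 0.75, 200)
    I_cell = np.array([cell_current(v) for v in V_cell])
    V_arr, I_arr = Ns * V_cell, Np * I_cell
    P_arr = V_arr * I_arr
    i = P_arr.argmax()
    print(f"Estimated maximum power point: {P_arr[i]:.0f} W at {V_arr[i]:.1f} V, {I_arr[i]:.1f} A")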

  7. Complex Systems and Human Performance Modeling

    Science.gov (United States)

    2013-12-01

    constitute a cognitive architecture or decomposing the work flows and resource constraints that characterize human-system interactions, the modeler ... also explored the generation of so-called "fractal" series from simple task network models where task times are calculated by way of a moving

  8. HANDOVER MANAGEABILITY AND PERFORMANCE MODELING IN

    African Journals Online (AJOL)

    SOFTLINKS DIGITAL

    West African Journal of Industrial & Academic Research Vol.10 No.1 April ... sketches out a model of proactive and reactive mitigation response for individuals ... valuable asset to all firms ... information is shared only among ... ability to ensure that a party to a contract or ... organizations only react to security threats.

  9. Performance evaluation of quality monitor models in spot welding

    Institute of Scientific and Technical Information of China (English)

    Zhang Zhongdian; Li Dongqing; Wang Kai

    2005-01-01

    The performance of quality monitor models in spot welding directly determines the monitoring precision, so it is crucial to evaluate it. Previously, the mean square error (MSE) was often used to evaluate the performance of models, but it can only show the total error over a finite set of specimens and cannot show whether the quality information inferred from the models is accurate and reliable enough. For this reason, by means of measurement error theory, a new way to evaluate the performance of models according to their error distributions is developed: only if the error distribution of a model is sufficiently correct and precise is the quality information inferred from the model accurate and reliable.

  10. A unified tool for performance modelling and prediction

    Energy Technology Data Exchange (ETDEWEB)

    Gilmore, Stephen [Laboratory for Foundations of Computer Science, University of Edinburgh, King' s Buildings, Mayfield Road, Edinburgh, Scotland EH9 3JZ (United Kingdom)]. E-mail: stg@inf.ed.ac.uk; Kloul, Leila [Laboratory for Foundations of Computer Science, University of Edinburgh, King' s Buildings, Mayfield Road, Edinburgh, Scotland EH9 3JZ (United Kingdom)

    2005-07-01

    We describe a novel performability modelling approach, which facilitates the efficient solution of performance models extracted from high-level descriptions of systems. The notation which we use for our high-level designs is the Unified Modelling Language (UML) graphical modelling language. The technology which provides the efficient representation capability for the underlying performance model is the multi-terminal binary decision diagram (MTBDD)-based PRISM probabilistic model checker. The UML models are compiled through an intermediate language, the stochastic process algebra PEPA, before translation into MTBDDs for solution. We illustrate our approach on a real-world analysis problem from the domain of mobile telephony.

  11. Modeling and optimization of LCD optical performance

    CERN Document Server

    Yakovlev, Dmitry A; Kwok, Hoi-Sing

    2015-01-01

    The aim of this book is to present the theoretical foundations of modeling the optical characteristics of liquid crystal displays, critically reviewing modern modeling methods and examining areas of applicability. The modern matrix formalisms of optics of anisotropic stratified media, most convenient for solving problems of numerical modeling and optimization of LCD, will be considered in detail. The benefits of combined use of the matrix methods will be shown, which generally provides the best compromise between physical adequacy and accuracy with computational efficiency and optimization fac

  12. Selecting among competing models of electro-optic, infrared camera system range performance

    Science.gov (United States)

    Nichols, Jonathan M.; Hines, James E.; Nichols, James D.

    2013-01-01

    Range performance is often the key requirement around which electro-optical and infrared camera systems are designed. This work presents an objective framework for evaluating competing range performance models. Model selection based on Akaike's Information Criterion (AIC) is presented for the type of data collected during a typical human observer and target identification experiment. These methods are then demonstrated on observer responses to both visible and infrared imagery in which one of three maritime targets was placed at various ranges. We compare the performance of a number of different models, including those appearing previously in the literature. We conclude that our model-based approach offers substantial improvements over the traditional approach to inference, including increased precision and the ability to make predictions for distances other than the specific set for which experimental trials were conducted.

  13. Hydrologic Evaluation of Landfill Performance (HELP) Model

    Science.gov (United States)

    The program models rainfall, runoff, infiltration, and other water pathways to estimate how much water builds up above each landfill liner. It can incorporate data on vegetation, soil types, geosynthetic materials, initial moisture conditions, slopes, etc.

  14. Integrated thermodynamic model for ignition target performance

    Directory of Open Access Journals (Sweden)

    Springer P.T.

    2013-11-01

    We have derived a 3-dimensional synthetic model for NIF implosion conditions, by predicting and optimizing fits to a broad set of x-ray and nuclear diagnostics obtained on each shot. By matching x-ray images, burn width, neutron time-of-flight ion temperature, yield, and fuel ρr, we obtain nearly unique constraints on conditions in the hotspot and fuel in a model that is entirely consistent with the observables. This model allows us to determine hotspot density, pressure, areal density (ρr), total energy, and other ignition-relevant parameters not available from any single diagnostic. This article describes the model and its application to National Ignition Facility (NIF) tritium-hydrogen-deuterium (THD) and DT implosion data, and provides an explanation for the large yield and ρr degradation compared to numerical code predictions.

  15. Manufacturing Excellence Approach to Business Performance Model

    Directory of Open Access Journals (Sweden)

    Jesus Cruz Alvarez

    2015-03-01

    Six Sigma, lean manufacturing, total quality management, quality control, and quality function deployment are the fundamental set of tools to enhance productivity in organizations. There is some research that outlines the benefit of each tool in the particular context of a firm's productivity, but not in the broader context of a firm's competitiveness, which is achieved through business performance. The aim of this theoretical research paper is to contribute to this end and propose a manufacturing excellence approach that links productivity tools to the broader context of business performance.

  16. Testing typicality in multiverse cosmology

    Science.gov (United States)

    Azhar, Feraz

    2015-05-01

    In extracting predictions from theories that describe a multiverse, we face the difficulty that we must assess probability distributions over possible observations prescribed not just by an underlying theory, but by a theory together with a conditionalization scheme that allows for (anthropic) selection effects. This means we usually need to compare distributions that are consistent with a broad range of possible observations with actual experimental data. One controversial means of making this comparison is by invoking the "principle of mediocrity": that is, the principle that we are typical of the reference class implicit in the conjunction of the theory and the conditionalization scheme. In this paper, we quantitatively assess the principle of mediocrity in a range of cosmological settings, employing "xerographic distributions" to impose a variety of assumptions regarding typicality. We find that for a fixed theory, the assumption that we are typical gives rise to higher likelihoods for our observations. If, however, one allows both the underlying theory and the assumption of typicality to vary, then the assumption of typicality does not always provide the highest likelihoods. Interpreted from a Bayesian perspective, these results support the claim that when one has the freedom to consider different combinations of theories and xerographic distributions (or different "frameworks"), one should favor the framework that has the highest posterior probability; and then from this framework one can infer, in particular, how typical we are. In this way, the invocation of the principle of mediocrity is more questionable than has been recently claimed.

  17. Developing an Energy Performance Modeling Startup Kit

    Energy Technology Data Exchange (ETDEWEB)

    none,

    2012-10-01

    In 2011, the NAHB Research Center began assessing the needs and motivations of residential remodelers regarding energy performance remodeling. This report outlines: the current remodeling industry and the role of energy efficiency; gaps and barriers to adding energy efficiency into remodeling; and support needs of professional remodelers to increase sales and projects involving improving home energy efficiency.

  18. Simulation of changes in heavy metal contamination in farmland soils of a typical manufacturing center through logistic-based cellular automata modeling.

    Science.gov (United States)

    Qiu, Menglong; Wang, Qi; Li, Fangbai; Chen, Junjian; Yang, Guoyi; Liu, Liming

    2016-01-01

    A customized logistic-based cellular automata (CA) model was developed to simulate changes in heavy metal contamination (HMC) in farmland soils of Dongguan, a manufacturing center in Southern China, and to discover the relationship between HMC and related explanatory variables (continuous and categorical). The model was calibrated through the simulation and validation of HMC in 2012. Thereafter, the model was implemented for the scenario simulation of development alternatives for HMC in 2022. The HMC in 2002 and 2012 was determined through soil tests and cokriging. Continuous variables were divided into two groups by odds ratios. Positive variables (odds ratios >1) included the Nemerow synthetic pollution index in 2002, linear drainage density, distance from the city center, distance from the railway, slope, and secondary industrial output per unit of land. Negative variables (odds ratios <1) included soil pH and distance from bodies of water. Categorical variables, including soil type, parent material type, organic content grade, and land use type, also significantly influenced HMC according to Wald statistics. The relative operating characteristic and kappa coefficients were 0.91 and 0.64, respectively, which proved the validity and accuracy of the model. The scenario simulation shows that the government should not only implement stricter environmental regulation but also strengthen the remediation of the currently polluted area to effectively mitigate HMC.
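
    The core of a logistic-based CA is a transition probability obtained from a fitted logistic regression over the explanatory variables, compared against a random draw at each time step. The sketch below shows that transition rule with made-up coefficients, driver layers and a simple neighbourhood term; it is not the calibrated Dongguan model.

    # Minimal logistic-based cellular automaton step. Coefficients and driver
    # layers are illustrative placeholders, not the calibrated study values.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 50                                            # grid size (n x n cells)
    contaminated = rng.random((n, n)) < 0.05          # initial HMC state
    pollution_index = rng.uniform(0.5, 3.0, (n, n))   # e.g. Nemerow index in 2002
    drainage_density = rng.uniform(0.0, 2.0, (n, n))
    soil_ph = rng.uniform(4.5, 7.5, (n, n))

    beta = {"intercept": -4.0, "pollution": 1.2, "drainage": 0.8, "ph": -0.5}

    def step(state):
        """One CA transition: logistic probability plus a neighbourhood effect."""
        # Count contaminated neighbours (von Neumann neighbourhood, wrapping edges).
        neigh = sum(np.roll(state, shift, axis) for shift, axis in
                    [(1, 0), (-1, 0), (1, 1), (-1, 1)])
        z = (beta["intercept"]
             + beta["pollution"] * pollution_index
             + beta["drainage"] * drainage_density
             + beta["ph"] * soil_ph
             + 0.6 * neigh)
        p = 1.0 / (1.0 + np.exp(-z))                  # logistic transition probability
        return state | (rng.random(state.shape) < p)  # contamination persists once present

    for _ in range(10):                               # e.g. ten annual steps
        contaminated = step(contaminated)
    print(f"Contaminated fraction after 10 steps: {contaminated.mean():.2f}")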

  19. Space Station Freedom electrical performance model

    Science.gov (United States)

    Hojnicki, Jeffrey S.; Green, Robert D.; Kerslake, Thomas W.; Mckissock, David B.; Trudell, Jeffrey J.

    1993-01-01

    The baseline Space Station Freedom electric power system (EPS) employs photovoltaic (PV) arrays and nickel hydrogen (NiH2) batteries to supply power to housekeeping and user electrical loads via a direct current (dc) distribution system. The EPS was originally designed for an operating life of 30 years through orbital replacement of components. As the design and development of the EPS continues, accurate EPS performance predictions are needed to assess design options, operating scenarios, and resource allocations. To meet these needs, NASA Lewis Research Center (LeRC) has, over a 10 year period, developed SPACE (Station Power Analysis for Capability Evaluation), a computer code designed to predict EPS performance. This paper describes SPACE, its functionality, and its capabilities.

  20. Manufacturing Excellence Approach to Business Performance Model

    OpenAIRE

    Jesus Cruz Alvarez; Carlos Monge Perry

    2015-01-01

    Six Sigma, lean manufacturing, total quality management, quality control, and quality function deployment are the fundamental set of tools to enhance productivity in organizations. There is some research that outlines the benefit of each tool in the particular context of a firm's productivity, but not in the broader context of a firm's competitiveness, which is achieved through business performance. The aim of this theoretical research paper is to contribute to this end and propose a manufacturing ex...

  1. An Outline Course on Human Performance Modeling

    Science.gov (United States)

    2006-01-01

    complementary or competing tasks: Dario Salvucci ... 46. Bonnie John, David Kieras 47. ecological interface design 48. More into modeling human ... Alarcon 70. Ben Knott 71. Evelyn Rozanski ... Pete Khooshabeh. Optional: If you would like to be on a mailing list for further seminars please enter your email

  2. Persistence Modeling for Assessing Marketing Strategy Performance

    NARCIS (Netherlands)

    M.G. Dekimpe (Marnik); D.M. Hanssens (Dominique)

    2003-01-01

    The question of long-run market response lies at the heart of any marketing strategy that tries to create a sustainable competitive advantage for the firm or brand. A key challenge, however, is that only short-run results of marketing actions are readily observable. Persistence modeling

  4. Seismic wave field simulation of several typical geological models%几种典型地质模型的地震波场数值模拟

    Institute of Scientific and Technical Information of China (English)

    徐佼; 张智; 董超; 陈立波; 李飞

    2014-01-01

    The stability and convergence of the difference equations and the elimination of numerical dispersion are important in seismic wave numerical modeling. Taking these factors into account, seismic wave fields in homogeneous and inhomogeneous media are simulated with the finite difference method, based on the two-dimensional acoustic wave equation with a second-order absorbing boundary condition. Theoretical calculation and finite difference wave equation modeling of a homogeneous model, a layered model, a fault model and an anticline model show that the method achieves satisfactory results when the parameters are chosen reasonably, and the simulations reveal how the parameter choices affect modeling accuracy. Because the finite difference scheme is derived from a Taylor series expansion of the wave equation, discretizing the wave equation introduces inherent errors: a low-order difference scheme increases numerical dispersion, whereas a high-order scheme gives smaller errors and solutions closer to the exact one. Larger spatial and temporal sampling intervals speed up the computation but may aggravate numerical dispersion, so both accuracy and speed must be considered in finite difference wave equation numerical modeling.

  5. An Empirical Study of a Solo Performance Assessment Model

    Science.gov (United States)

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  6. Modeling the marketing strategy-performance relationship : towards an hierarchical marketing performance framework

    NARCIS (Netherlands)

    Huizingh, Eelko K.R.E.; Zengerink, Evelien

    2001-01-01

    Accurate measurement of marketing performance is an important topic for both marketing academics and marketing managers. Many researchers have recognized that marketing performance measurement should go beyond financial measurement. In this paper we propose a conceptual framework that models

  8. A fuel-efficient cruise performance model for general aviation piston engine airplanes. Ph.D. Thesis. Final Report

    Science.gov (United States)

    Parkinson, R. C. H.

    1983-01-01

    A fuel-efficient cruise performance model which facilitates maximizing the specific range of General Aviation airplanes powered by spark-ignition piston engines and propellers is presented. Airplanes of fixed design only are considered. The uses and limitations of typical Pilot Operating Handbook cruise performance data for constructing cruise performance models suitable for maximizing specific range are first examined. These data are found to be inadequate for constructing such models. A new model of General Aviation piston-prop airplane cruise performance is then developed. This model consists of two subsystem models: the airframe-propeller-atmosphere subsystem model and the engine-atmosphere subsystem model. The new model facilitates maximizing specific range and, by virtue of its simplicity and low-volume data storage requirements, appears suitable for airborne microprocessor implementation.

  9. Dynamic Model of Centrifugal Compressor for Prediction of Surge Evolution and Performance Variations

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Mooncheong; Han, Jaeyoung; Yu, Sangseok [Chungnam National Univ., Daejeon (Korea, Republic of)

    2016-05-15

    When a control algorithm is developed to protect against automotive compressor surge, the simulation model typically relies on an empirically determined look-up table. However, it is difficult for a control-oriented empirical model to show the surge characteristics of the supercharger. In this study, a dynamic supercharger model is developed to predict the performance of a centrifugal compressor under dynamic load follow-up. The model is developed in the Simulink® environment, and is composed of a compressor, throttle body, valves, and chamber. Greitzer's compressor model is used, and the geometric parameters are obtained from the actual supercharger. The simulation model is validated with experimental data. It is shown that compressor surge is effectively predicted by this dynamic compressor model under various operating conditions.
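
    Surge dynamics of the kind described are commonly captured with the Greitzer lumped-parameter model, in which the non-dimensional compressor flow and plenum pressure rise evolve as dphi/dt = B*omega_H*(psi_c(phi) - psi) and dpsi/dt = (omega_H/B)*(phi - phi_t(psi)). The sketch below integrates these equations with an assumed cubic compressor characteristic and square-root throttle law; it is a generic illustration, not the validated supercharger model of the paper.

    # Greitzer lumped-parameter surge sketch with an assumed cubic compressor
    # characteristic and a square-root throttle law. Parameters are illustrative.
    import numpy as np
    from scipy.integrate import solve_ivp

    B = 1.8             # Greitzer stability parameter (larger B -> deeper surge)
    omega_H = 100.0     # Helmholtz frequency, rad/s
    k_throttle = 0.3    # throttle setting; small values push the operating point toward surge

    def psi_c(phi, psi0=0.3, H=1.5, W=0.5):
        """Assumed cubic compressor pressure-rise characteristic."""
        x = phi / W - 1.0
        return psi0 + H * (1.0 + 1.5 * x - 0.5 * x ** 3)

    def phi_t(psi):
        """Throttle mass-flow characteristic."""
        return k_throttle * np.sqrt(max(psi, 0.0))

    def rhs(t, y):
        phi, psi = y
        return [B * omega_H * (psi_c(phi) - psi),
                (omega_H / B) * (phi - phi_t(psi))]

    sol = solve_ivp(rhs, (0.0, 2.0), [0.5, 1.0], max_step=1e-3)
    phi, psi = sol.y
    print(f"flow coefficient range : {phi.min():.2f} .. {phi.max():.2f}")
    print(f"pressure rise range    : {psi.min():.2f} .. {psi.max():.2f}")
    if phi.min() < 0:
        print("Instantaneous flow reversal -> deep surge cycle predicted.")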

  10. Understanding the Effect of Baseline Modeling Implementation Choices on Analysis of Demand Response Performance

    Energy Technology Data Exchange (ETDEWEB)

    University of California, Berkeley; Addy, Nathan; Kiliccote, Sila; Mathieu, Johanna; Callaway, Duncan S.

    2012-06-13

    Accurate evaluation of the performance of buildings participating in Demand Response (DR) programs is critical to the adoption and improvement of these programs. Typically, we calculate load sheds during DR events by comparing observed electric demand against counterfactual predictions made using statistical baseline models. Many baseline models exist and these models can produce different shed calculations. Moreover, modelers implementing the same baseline model can make different modeling implementation choices, which may affect shed estimates. In this work, using real data, we analyze the effect of different modeling implementation choices on shed predictions. We focused on five issues: weather data source, resolution of data, methods for determining when buildings are occupied, methods for aligning building data with temperature data, and methods for power outage filtering. Results indicate sensitivity to the weather data source and data filtration methods as well as an immediate potential for automation of methods to choose building occupied modes.
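
    The shed calculation itself can be sketched with one of the simpler baseline choices the paper considers: a regression on outdoor temperature and an occupied-hours indicator fitted to non-event days, used to predict the counterfactual on the event day. Everything below (data, occupancy rule, curtailment) is a synthetic placeholder meant only to show the mechanics.

    # Baseline-and-shed sketch: fit a simple temperature/occupancy regression on
    # non-event days, predict the event day, and difference against observations.
    import numpy as np

    rng = np.random.default_rng(7)
    hours = np.tile(np.arange(24), 30)                         # 30 non-event days
    temp = 22 + 8 * np.sin((hours - 6) / 24 * 2 * np.pi) + rng.normal(0, 1, hours.size)
    occupied = ((hours >= 8) & (hours <= 18)).astype(float)
    load = 50 + 2.5 * temp + 10 * occupied + rng.normal(0, 2, hours.size)

    X = np.column_stack([np.ones_like(temp), temp, occupied])  # baseline design matrix
    beta, *_ = np.linalg.lstsq(X, load, rcond=None)

    ev_hours = np.arange(24)                                   # DR event day
    ev_temp = 24 + 8 * np.sin((ev_hours - 6) / 24 * 2 * np.pi)
    ev_occ = ((ev_hours >= 8) & (ev_hours <= 18)).astype(float)
    baseline = np.column_stack([np.ones_like(ev_temp), ev_temp, ev_occ]) @ beta
    observed = baseline - 15 * ((ev_hours >= 14) & (ev_hours <= 17))   # pretend 15 kW was shed

    shed = baseline - observed
    print("Estimated shed (kW) for hours 14-17:", np.round(shed[14:18], 1))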

  11. Evaluation of the CENTURY model using long-term fertilization trials under corn-wheat cropping systems in the typical croplands of China.

    Science.gov (United States)

    Cong, Rihuan; Wang, Xiujun; Xu, Minggang; Ogle, Stephen M; Parton, William J

    2014-01-01

    Soil organic matter models are widely used to study soil organic carbon (SOC) dynamics. Here, we used the CENTURY model to simulate SOC in wheat-corn cropping systems at three long-term fertilization trials. Our study indicates that CENTURY can simulate fertilization effects on SOC dynamics under different climate and soil conditions. The normalized root mean square error is less than 15% for all the treatments. Soil carbon presents various changes under different fertilization management. Treatment with straw return would enhance SOC to a relatively stable level whereas chemical fertilization affects SOC differently across the three sites. After running CENTURY over the period of 1990-2050, the SOC levels are predicted to increase from 31.8 to 52.1 Mg ha-1 across the three sites. We estimate that the carbon sequestration potential between 1990 and 2050 would be 9.4-35.7 Mg ha-1 under the current high manure application at the three sites. Analysis of SOC in each carbon pool indicates that long-term fertilization enhances the slow pool proportion but decreases the passive pool proportion. Model results suggest that change in the slow carbon pool is the major driver of the overall trends in SOC stocks under long-term fertilization.

  12. Modeling the Mechanical Performance of Die Casting Dies

    Energy Technology Data Exchange (ETDEWEB)

    R. Allen Miller

    2004-02-27

    The following report covers work performed at Ohio State on modeling the mechanical performance of dies. The focus of the project was the development and particularly the verification of finite element techniques used to model and predict displacements and stresses in die casting dies. The work entails a major case study performed with an industrial partner on a production die, and laboratory experiments performed at Ohio State.

  13. Testing typicality in multiverse cosmology

    CERN Document Server

    Azhar, Feraz

    2015-01-01

    In extracting predictions from theories that describe a multiverse, we face the difficulty that we must assess probability distributions over possible observations, prescribed not just by an underlying theory, but by a theory together with a conditionalization scheme that allows for (anthropic) selection effects. This means we usually need to compare distributions that are consistent with a broad range of possible observations, with actual experimental data. One controversial means of making this comparison is by invoking the 'principle of mediocrity': that is, the principle that we are typical of the reference class implicit in the conjunction of the theory and the conditionalization scheme. In this paper, I quantitatively assess the principle of mediocrity in a range of cosmological settings, employing 'xerographic distributions' to impose a variety of assumptions regarding typicality. I find that for a fixed theory, the assumption that we are typical gives rise to higher likelihoods for our observations. I...

  14. Developing an Energy Performance Modeling Startup Kit

    Energy Technology Data Exchange (ETDEWEB)

    Wood, A.

    2012-10-01

    In 2011, the NAHB Research Center began the first part of the multi-year effort by assessing the needs and motivations of residential remodelers regarding energy performance remodeling. The scope is multifaceted - all perspectives will be sought related to remodeling firms ranging in size from small-scale, sole proprietor to national. This will allow the Research Center to gain a deeper understanding of the remodeling and energy retrofit business and the needs of contractors when offering energy upgrade services. To determine the gaps and the motivation for energy performance remodeling, the NAHB Research Center conducted (1) an initial series of focus groups with remodelers at the 2011 International Builders' Show, (2) a second series of focus groups with remodelers at the NAHB Research Center in conjunction with the NAHB Spring Board meeting in DC, and (3) quantitative market research with remodelers based on the findings from the focus groups. The goal was threefold, to: Understand the current remodeling industry and the role of energy efficiency; Identify the gaps and barriers to adding energy efficiency into remodeling; and Quantify and prioritize the support needs of professional remodelers to increase sales and projects involving improving home energy efficiency. This report outlines all three of these tasks with remodelers.

  15. VCP associated inclusion body myopathy and paget disease of bone knock-in mouse model exhibits tissue pathology typical of human disease.

    Directory of Open Access Journals (Sweden)

    Mallikarjun Badadani

    Dominant mutations in the valosin containing protein (VCP) gene cause inclusion body myopathy associated with Paget's disease of bone and frontotemporal dementia (IBMPFD). We have generated a knock-in mouse model with the common R155H mutation. Mice demonstrate progressive muscle weakness starting at approximately 6 months of age. Histology of mutant muscle showed progressive vacuolization of myofibrils and centrally located nuclei, and immunostaining shows progressive cytoplasmic accumulation of TDP-43 and ubiquitin-positive inclusion bodies in quadriceps myofibrils and brain. Increased LC3-II staining of muscle sections, representing an increased number of autophagosomes, suggested impaired autophagy. Increased apoptosis was demonstrated by elevated caspase-3 activity and increased TUNEL-positive nuclei. X-ray microtomography (uCT) images show radiolucency of distal femurs and proximal tibiae in knock-in mice, and uCT morphometrics shows a decreased trabecular pattern and increased cortical wall thickness. Bone histology and bone marrow derived macrophage cultures in these mice revealed increased osteoclastogenesis observed by TRAP staining, suggestive of Paget bone disease. The VCP(R155H/+) knock-in mice replicate the muscle, bone and brain pathology of inclusion body myopathy, thus representing a useful model for preclinical studies.

  16. Performance of turbulence models for transonic flows in a diffuser

    Science.gov (United States)

    Liu, Yangwei; Wu, Jianuo; Lu, Lipeng

    2016-09-01

    Eight turbulence models frequently used in aerodynamics have been employed in detailed numerical investigations of transonic flows in the Sajben diffuser, to assess the predictive capabilities of the turbulence models for shock wave/turbulent boundary layer interactions (SWTBLI) in internal flows. The eight turbulence models include: the Spalart-Allmaras model, the standard k-ε model, the RNG k-ε model, the realizable k-ε model, the standard k-ω model, the SST k-ω model, the v2-f model and the Reynolds stress model. The performance of the different turbulence models adopted has been systematically assessed by comparing the numerical results with the available experimental data. The comparisons show that the predictive performance becomes worse as the shock wave becomes stronger. The v2-f model and the SST k-ω model perform much better than other models, and the SST k-ω model predicts a little better than the v2-f model for pressure on walls and velocity profile, whereas the v2-f model predicts a little better than the SST k-ω model for separation location, reattachment location and separation length for the strong shock case.

  17. Determinants of business model performance in software firms

    OpenAIRE

    Rajala, Risto

    2009-01-01

    The antecedents and consequences of business model design have gained increasing interest among information system (IS) scholars and business practitioners alike. Based on an extensive literature review and empirical research, this study investigates the factors that drive business model design and the performance effects generated by the different kinds of business models in software firms. The main research question is: “What are the determinants of business model performance in the softwar...

  18. High Performance Geostatistical Modeling of Biospheric Resources

    Science.gov (United States)

    Pedelty, J. A.; Morisette, J. T.; Smith, J. A.; Schnase, J. L.; Crosier, C. S.; Stohlgren, T. J.

    2004-12-01

    We are using parallel geostatistical codes to study spatial relationships among biospheric resources in several study areas. For example, spatial statistical models based on large- and small-scale variability have been used to predict species richness of both native and exotic plants (hot spots of diversity) and patterns of exotic plant invasion. However, broader use of geostatistics in natural resource modeling, especially at regional and national scales, has been limited due to the large computing requirements of these applications. To address this problem, we implemented parallel versions of the kriging spatial interpolation algorithm. The first uses the Message Passing Interface (MPI) in a master/slave paradigm on an open source Linux Beowulf cluster, while the second is implemented with the new proprietary Xgrid distributed processing system on an Xserve G5 cluster from Apple Computer, Inc. These techniques are proving effective and provide the basis for a national decision support capability for invasive species management that is being jointly developed by NASA and the US Geological Survey.
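
    The parallel structure described (a master distributing kriging predictions to workers) can be sketched with the standard library's multiprocessing pool instead of MPI or Xgrid; the ordinary-kriging system, exponential variogram and data below are illustrative placeholders rather than the codes used in the study.

    # Master/worker-style parallel ordinary kriging sketch using a process pool.
    # Variogram model and observations are illustrative placeholders.
    import numpy as np
    from multiprocessing import Pool

    rng = np.random.default_rng(3)
    obs_xy = rng.uniform(0, 100, (200, 2))                           # sampled locations
    obs_z = np.sin(obs_xy[:, 0] / 15) + 0.1 * rng.normal(size=200)   # e.g. species richness

    def gamma(h, sill=1.0, rang=30.0):
        """Exponential semivariogram model."""
        return sill * (1.0 - np.exp(-h / rang))

    # The left-hand-side ordinary-kriging matrix is shared by every prediction point.
    d = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=2)
    n = len(obs_z)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, :n] = A[:n, n] = 1.0

    def krige_point(pt):
        """Ordinary kriging estimate at a single unsampled location."""
        b = np.append(gamma(np.linalg.norm(obs_xy - pt, axis=1)), 1.0)
        w = np.linalg.solve(A, b)[:n]
        return float(w @ obs_z)

    if __name__ == "__main__":
        grid = [np.array([x, y]) for x in range(0, 101, 10) for y in range(0, 101, 10)]
        with Pool(4) as pool:                                        # workers play the "slave" role
            predictions = pool.map(krige_point, grid)
        print(f"Kriged {len(predictions)} grid nodes, "
              f"range {min(predictions):.2f} .. {max(predictions):.2f}")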

  19. Advanced Performance Modeling with Combined Passive and Active Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Dovrolis, Constantine [Georgia Inst. of Technology, Atlanta, GA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-04-15

    To improve the efficiency of resource utilization and the scheduling of scientific data transfers on high-speed networks, the "Advanced Performance Modeling with combined passive and active monitoring" (APM) project investigates and models a general-purpose, reusable and expandable network performance estimation framework. The predictive estimation model and the framework will be helpful in optimizing the performance and utilization of networks as well as sharing resources with predictable performance for scientific collaborations, especially in data-intensive applications. Our prediction model utilizes historical network performance information from various network activity logs as well as live streaming measurements from network peering devices. Historical network performance information is used without putting extra load on the resources through active measurement collection. Performance measurements collected by active probing are used judiciously to improve the accuracy of predictions.

  20. Typical errors of ESP users

    Science.gov (United States)

    Eremina, Svetlana V.; Korneva, Anna A.

    2004-07-01

    The paper presents an analysis of the errors made by ESP (English for specific purposes) users which are considered typical. They occur as a result of misuse of the resources of English grammar and tend to persist. Their origin and places of occurrence are also discussed.

  1. Evaluation of BRCA1 and BRCA2 mutations and risk-prediction models in a typical Asian country (Malaysia) with a relatively low incidence of breast cancer.

    Science.gov (United States)

    Thirthagiri, E; Lee, S Y; Kang, P; Lee, D S; Toh, G T; Selamat, S; Yoon, S-Y; Taib, N A Mohd; Thong, M K; Yip, C H; Teo, S H

    2008-01-01

    The cost of genetic testing and the limited knowledge about the BRCA1 and BRCA2 genes in different ethnic groups has limited its availability in medium- and low-resource countries, including Malaysia. In addition, the applicability of many risk-assessment tools, such as the Manchester Scoring System and BOADICEA (Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm) which were developed based on mutation rates observed primarily in Caucasian populations using data from multiplex families, and in populations where the rate of breast cancer is higher, has not been widely tested in Asia or in Asians living elsewhere. Here, we report the results of genetic testing for mutations in the BRCA1 or BRCA2 genes in a series of families with breast cancer in the multi-ethnic population (Malay, Chinese and Indian) of Malaysia. A total of 187 breast cancer patients with either early-onset breast cancer (at age model and the Manchester Scoring System was significantly better for BRCA1 than BRCA2, but that the overall sensitivity, specificity and positive-predictive value was lower in this

  2. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...

  3. System Level Modelling and Performance Estimation of Embedded Systems

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer

    is simulation based and allows performance estimation to be carried out throughout all design phases, ranging from early functional to cycle accurate and bit true descriptions of the system, modelling both hardware and software components in a unified way. Design space exploration and performance estimation ... an efficient system level design methodology, a modelling framework for performance estimation and design space exploration at the system level is required. This thesis presents a novel component based modelling framework for system level modelling and performance estimation of embedded systems. The framework ... is performed by having the framework produce detailed quantitative information about the system model under investigation. The project is part of the national Danish research project, Danish Network of Embedded Systems (DaNES), which is funded by the Danish National Advanced Technology Foundation. The project...

  4. Performance Predictable ServiceBSP Model for Grid Computing

    Institute of Scientific and Technical Information of China (English)

    TONG Weiqin; MIAO Weikai

    2007-01-01

    This paper proposes a performance prediction model for the grid computing model ServiceBSP to support developing high quality applications in a grid environment. In the ServiceBSP model, the agents carrying computing tasks are dispatched to the local domain of the selected computation services. Using an IP (integer programming) approach, the Service Selection Agent selects the computation services with globally optimized QoS (quality of service) consideration. The performance of a ServiceBSP application can be predicted according to the performance prediction model based on the QoS of the selected services. The performance prediction model can help users analyze their applications and improve them by optimizing the factors which affect the performance. The experiment shows that the Service Selection Agent can provide ServiceBSP users with satisfactory application QoS.

  5. Performance measurement and modeling of component applications in a high performance computing environment : a case study.

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, Robert C.; Ray, Jaideep; Malony, A. (University of Oregon, Eugene, OR); Shende, Sameer (University of Oregon, Eugene, OR); Trebon, Nicholas D.

    2003-11-01

    We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and construct performance models for two of them. Both computational and message-passing performance are addressed.

  6. Performance Modeling of Communication Networks with Markov Chains

    CERN Document Server

    Mo, Jeonghoon

    2010-01-01

    This book is an introduction to Markov chain modeling with applications to communication networks. It begins with a general introduction to performance modeling in Chapter 1, where we introduce different performance models. We then introduce the basic ideas of Markov chain modeling: the Markov property, the discrete time Markov chain (DTMC) and the continuous time Markov chain (CTMC). We also discuss how to find the steady state distributions from these Markov chains and how they can be used to compute system performance metrics. The solution methodologies include a balance equation technique, limiting probab
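
    The balance-equation technique the book introduces amounts to solving pi P = pi with the probabilities summing to 1, and then computing performance metrics as expectations over pi. A small sketch for a toy three-state chain (not an example taken from the book) is given below.

    # Steady-state distribution of a small discrete-time Markov chain obtained by
    # solving the balance equations pi P = pi with sum(pi) = 1. Toy example only.
    import numpy as np

    P = np.array([[0.7, 0.3, 0.0],      # rows: current state, columns: next state
                  [0.4, 0.4, 0.2],
                  [0.0, 0.6, 0.4]])

    n = P.shape[0]
    # pi (P - I) = 0; replace one redundant balance equation with the normalization.
    A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
    b = np.append(np.zeros(n - 1), 1.0)
    pi = np.linalg.solve(A, b)

    jobs_in_system = np.arange(n)       # interpret states as number of jobs present
    print("steady-state probabilities:", np.round(pi, 4))
    print("mean number in system     :", float(pi @ jobs_in_system))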

  7. Hierarchical Bulk Synchronous Parallel Model and Performance Optimization

    Institute of Scientific and Technical Information of China (English)

    HUANG Linpeng; SUN Yongqiang; YUAN Wei

    1999-01-01

    Based on the framework of BSP, a Hierarchical Bulk Synchronous Parallel (HBSP) performance model is introduced in this paper to capture the performance optimization problem for various stages in parallel program development and to accurately predict the performance of a parallel program by considering factors causing variance at local computation and global communication. The related methodology has been applied to several real applications and the results show that HBSP is a suitable model for optimizing parallel programs.

  8. Reciprocating and Screw Compressor semi-empirical models for establishing minimum energy performance standards

    Science.gov (United States)

    Javed, Hassan; Armstrong, Peter

    2015-08-01

    The efficiency bar for a Minimum Equipment Performance Standard (MEPS) generally aims to minimize energy consumption and life cycle cost of a given chiller type and size category serving a typical load profile. Compressor type has a significant chiller performance impact. Performance of screw and reciprocating compressors is expressed in terms of pressure ratio and speed for a given refrigerant and suction density. Isentropic efficiency for a screw compressor is strongly affected by under- and over-compression (UOC) processes. The theoretical simple physical UOC model involves a compressor-specific (but sometimes unknown) volume index parameter and the real gas properties of the refrigerant used. Isentropic efficiency is estimated by the UOC model together with a bi-cubic used to account for flow, friction and electrical losses. The unknown volume index, a smoothing parameter (to flatten the UOC model peak) and the bi-cubic coefficients are identified by curve fitting to minimize an appropriate residual norm. Chiller performance maps are produced for each compressor type by selecting optimized sub-cooling and condenser fan speed options in a generic component-based chiller model. SEER is computed from the hourly loads of a typical building in the climate of interest and the chiller's specific power under the same hourly conditions. An empirical UAE cooling load model, scalable to any equipment capacity, is used to establish proposed UAE MEPS. Annual electricity use and cost, determined from SEER and annual cooling load, and chiller component cost data are used to find optimal chiller designs and to perform a life-cycle cost comparison between screw and reciprocating compressor-based chillers. This process may be applied to any climate/load model in order to establish optimized MEPS for any country and/or region.
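
    For the SEER computation described above, a minimal Python sketch (with placeholder numbers, not data from the study) takes SEER as the seasonal cooling output divided by the seasonal electricity input obtained from the hourly loads and the chiller's condition-dependent specific power.

        # Illustrative SEER calculation from hourly cooling loads and the chiller's
        # specific power (kW electric per kW cooling) at the same hourly conditions.
        import numpy as np

        load_kw = np.array([30.0, 45.0, 60.0, 50.0])          # hourly cooling load (placeholder)
        specific_power = np.array([0.22, 0.25, 0.28, 0.26])   # kW_e per kW_cooling (placeholder)

        cooling_kwh = load_kw.sum()                            # 1-hour time steps assumed
        electric_kwh = (load_kw * specific_power).sum()        # annual electricity once all hours are included

        seer = cooling_kwh / electric_kwh
        print(f"SEER = {seer:.2f}, electricity use = {electric_kwh:.1f} kWh")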

  9. Cost and Performance Assumptions for Modeling Electricity Generation Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tidball, Rick [ICF International, Fairfax, VA (United States); Bluestein, Joel [ICF International, Fairfax, VA (United States); Rodriguez, Nick [ICF International, Fairfax, VA (United States); Knoke, Stu [ICF International, Fairfax, VA (United States)

    2010-11-01

    The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.

  10. Cost and Performance Assumptions for Modeling Electricity Generation Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tidball, R.; Bluestein, J.; Rodriguez, N.; Knoke, S.

    2010-11-01

    The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.
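
    The LCOE evaluation mentioned in both records follows the usual definition of discounted lifetime costs divided by discounted lifetime generation. The Python sketch below shows that calculation with illustrative cost, performance and fuel assumptions; the figures are not taken from the data sets discussed.

        # Minimal LCOE sketch: discounted lifetime costs / discounted lifetime energy.
        def lcoe(capital, annual_om, annual_fuel, annual_mwh, lifetime_yr, discount_rate):
            """Levelized cost of energy in $/MWh (illustrative inputs)."""
            costs = capital + sum(
                (annual_om + annual_fuel) / (1.0 + discount_rate) ** t
                for t in range(1, lifetime_yr + 1)
            )
            energy = sum(
                annual_mwh / (1.0 + discount_rate) ** t
                for t in range(1, lifetime_yr + 1)
            )
            return costs / energy

        print(round(lcoe(capital=1.2e9, annual_om=3.0e7, annual_fuel=5.0e7,
                         annual_mwh=3.5e6, lifetime_yr=30, discount_rate=0.07), 2))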

  11. Compound fuzzy model for thermal performance of refrigeration compressors

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The fuzzy method is introduced to the calculation of thermal performance of refrigeration compressors. A compound model combining classical thermodynamic theory and fuzzy theory is presented and compared with a simple fuzzy model without classical thermodynamic fundamentals. Case study of refrigeration compressors shows that the compound fuzzy model and the simple fuzzy model are both more efficient than the classical thermodynamic method. However, the compound fuzzy model is of better precision and adaptability.

  12. Reactive puff model SCICHEM: Model enhancements and performance studies

    Science.gov (United States)

    Chowdhury, B.; Karamchandani, P. K.; Sykes, R. I.; Henn, D. S.; Knipping, E.

    2015-09-01

    The SCICHEM model incorporates complete gas phase, aqueous and aerosol phase chemistry within the state-of-the-science Gaussian puff model SCIPUFF (Second-order Closure Integrated Puff). The model is a valuable tool that can be used to calculate the impacts of a single source or a small number of sources on downwind ozone and PM2.5. The model has flexible data requirements: it can be run with routine surface and upper air observations or with prognostic meteorological model outputs, and source emissions are specified in a simple text format. This paper describes significant advances to the dispersion and chemistry components of the model in the latest release, SCICHEM 3.0. Some of the major advancements include modeling of skewed turbulence for the convective boundary layer and updated chemistry schemes (CB05 gas phase chemical mechanism; AERO5 aerosol and aqueous modules). The results from SCICHEM 3.0 are compared with observations from a tracer study as well as aircraft measurements of reactive species in power plant plumes from two field studies. The results for the tracer experiment (Copenhagen study) show that the incorporation of skewed turbulence improves the calculation of tracer dispersion and transport. The comparisons with the Cumberland and Dolet Hills power plant plume measurements show good correlation between the observed and predicted concentrations of reactive gaseous species at most downwind distances from the source.

  13. Accelerating scientific codes by performance and accuracy modeling

    CERN Document Server

    Fabregat-Traver, Diego; Bientinesi, Paolo

    2016-01-01

    Scientific software is often driven by multiple parameters that affect both accuracy and performance. Since finding the optimal configuration of these parameters is a highly complex task, it is extremely common that the software is used suboptimally. In a typical scenario, accuracy requirements are imposed, and attained through suboptimal performance. In this paper, we present a methodology for the automatic selection of parameters for simulation codes, and a corresponding prototype tool. To be amenable to our methodology, the target code must expose the parameters affecting accuracy and performance, and there must be formulas available for error bounds and computational complexity of the underlying methods. As a case study, we consider the particle-particle particle-mesh method (PPPM) from the LAMMPS suite for molecular dynamics, and use our tool to identify configurations of the input parameters that achieve a given accuracy in the shortest execution time. When compared with the configurations suggested by exp...

  14. A Spectral Evaluation of Models Performances in Mediterranean Oak Woodlands

    Science.gov (United States)

    Vargas, R.; Baldocchi, D. D.; Abramowitz, G.; Carrara, A.; Correia, A.; Kobayashi, H.; Papale, D.; Pearson, D.; Pereira, J.; Piao, S.; Rambal, S.; Sonnentag, O.

    2009-12-01

    Ecosystem processes are influenced by climatic trends at multiple temporal scales, including diel patterns and other mid-term climatic modes such as interannual and seasonal variability. Because interactions between biophysical components of ecosystem processes are complex, it is important to test how models perform in the frequency (e.g. hours, days, weeks, months, years) and time (i.e. day of the year) domains in addition to traditional tests of annual or monthly sums. Here we present a spectral evaluation, using wavelet time series analysis, of model performance in seven Mediterranean Oak Woodlands that encompass three deciduous and four evergreen sites. We tested the performance of five models (CABLE, ORCHIDEE, BEPS, Biome-BGC, and JULES) on measured variables of gross primary production (GPP) and evapotranspiration (ET). In general, model performance fails at intermediate periods (e.g. weeks to months), likely because these models do not represent the water pulse dynamics that influence GPP and ET in these Mediterranean systems. To improve the performance of a model it is critical to identify first where and when the model fails. Only by identifying where a model fails can we improve model performance, use models as prognostic tools, and generate further hypotheses that can be tested by new experiments and measurements.

  15. Performance of Modeling wireless networks in realistic environment

    CERN Document Server

    Siraj, M

    2012-01-01

    A wireless network is realized by mobile devices which communicate over radio channels. Since experiments on real-life problems with real devices are very difficult, simulation is used very often. Among the many important properties that have to be defined for simulative experiments, the mobility model and the radio propagation model have to be selected carefully. Both have a strong impact on the performance of mobile wireless networks; e.g., the performance of routing protocols varies with these models. There are many mobility and radio propagation models proposed in the literature. Each of them was developed with different objectives and is not suited for every physical scenario. In general, researchers use simple radio propagation models in common wireless network simulators and neglect obstacles in the propagation environment. In this paper, we study the performance of wireless network simulation by considering different radio propagation models that account for obstacles i...

  16. Performance modeling and prediction for linear algebra algorithms

    OpenAIRE

    Iakymchuk, Roman

    2012-01-01

    This dissertation incorporates two research projects: performance modeling and prediction for dense linear algebra algorithms, and high-performance computing on clouds. The first project is focused on dense matrix computations, which are often used as computational kernels for numerous scientific applications. To solve a particular mathematical operation, linear algebra libraries provide a variety of algorithms. The algorithm of choice depends, obviously, on its performance. Performance of su...

  17. Comparison of Two Models for Damage Accumulation in Simulations of System Performance

    Energy Technology Data Exchange (ETDEWEB)

    Youngblood, R. [Idaho National Laboratory, Idaho Falls, ID (United States); Mandelli, D. [Idaho National Laboratory, Idaho Falls, ID (United States)

    2015-11-01

    A comprehensive simulation study of system performance needs to address variations in component behavior, variations in phenomenology, and the coupling between phenomenology and component failure. This paper discusses two models of this: 1. damage accumulation is modeled as a random walk process in each time history, with component failure occurring when damage accumulation reaches a specified threshold; or 2. damage accumulation is modeled mechanistically within each time history, but failure occurs when damage reaches a time-history-specific threshold, sampled at time zero from each component’s distribution of damage tolerance. A limiting case of the latter is classical discrete-event simulation, with component failure times sampled a priori from failure time distributions; but in such models, the failure times are not typically adjusted for operating conditions varying within a time history. Nowadays, as discussed below, it is practical to account for this. The paper compares the interpretations and computational aspects of the two models mentioned above.
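
    A minimal Python sketch of the two damage-accumulation models is given below; the thresholds, step distributions and accumulation rate are illustrative assumptions, not parameters from the paper.

        # Sketch of the two models: (1) random-walk damage with a fixed failure threshold,
        # (2) deterministic accumulation with a damage tolerance sampled once at time zero.
        import random

        def failure_time_random_walk(threshold=100.0, step_mu=1.0, step_sigma=0.5):
            damage, t = 0.0, 0
            while damage < threshold:
                damage += max(0.0, random.gauss(step_mu, step_sigma))  # damage only grows
                t += 1
            return t

        def failure_time_sampled_threshold(rate=1.0, tol_mu=100.0, tol_sigma=10.0):
            tolerance = random.gauss(tol_mu, tol_sigma)                # sampled at time zero
            damage, t = 0.0, 0
            while damage < tolerance:
                damage += rate                                         # mechanistic accumulation
                t += 1
            return t

        print("model 1 failure time:", failure_time_random_walk())
        print("model 2 failure time:", failure_time_sampled_threshold())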

  18. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Full Text Available Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with the problem of low prediction accuracy, which causes costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, including a multivariate nonlinear regression (MNLR) model, an artificial neural network (ANN) model, and a Markov chain (MC) model, are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems to be a good tool for pavement performance prediction when data is limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. This paper then suggests that a further direction for developing performance prediction models is incorporating the advantages and disadvantages of different models to obtain better accuracy.

  19. Atomic scale simulations for improved CRUD and fuel performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). Here are some results from publications in 2016 carried out using the CASL allocation at LANL.

  20. Performance model for grid-connected photovoltaic inverters.

    Energy Technology Data Exchange (ETDEWEB)

    Boyson, William Earl; Galbraith, Gary M.; King, David L.; Gonzalez, Sigifredo

    2007-09-01

    This document provides an empirically based performance model for grid-connected photovoltaic inverters used for system performance (energy) modeling and for continuous monitoring of inverter performance during system operation. The versatility and accuracy of the model were validated for a variety of both residential and commercial size inverters. Default parameters for the model can be obtained from manufacturers' specification sheets, and the accuracy of the model can be further refined using either well-instrumented field measurements in operational systems or detailed measurements from a recognized testing laboratory. An initial database of inverter performance parameters was developed based on measurements conducted at Sandia National Laboratories and at laboratories supporting the solar programs of the California Energy Commission.
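
    The sketch below shows the empirical AC-power form commonly associated with the Sandia grid-connected inverter model, with DC-voltage-dependent terms A, B and C; the coefficient values are placeholders rather than entries from the inverter parameter database.

        # Sketch of the empirical AC-power relation commonly associated with the Sandia
        # inverter model; parameter values are placeholders, not database entries.
        def sandia_inverter_ac_power(p_dc, v_dc, Paco, Pdco, Vdco, Pso, C0, C1, C2, C3):
            A = Pdco * (1.0 + C1 * (v_dc - Vdco))
            B = Pso * (1.0 + C2 * (v_dc - Vdco))
            C = C0 * (1.0 + C3 * (v_dc - Vdco))
            p_ac = (Paco / (A - B) - C * (A - B)) * (p_dc - B) + C * (p_dc - B) ** 2
            return min(p_ac, Paco)          # clip at the inverter's rated AC power

        print(sandia_inverter_ac_power(p_dc=4200.0, v_dc=310.0,
                                       Paco=4000.0, Pdco=4200.0, Vdco=310.0, Pso=30.0,
                                       C0=-1e-5, C1=1e-4, C2=1e-3, C3=1e-3))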

  1. Performance Modeling for Heterogeneous Wireless Networks with Multiservice Overflow Traffic

    DEFF Research Database (Denmark)

    Huang, Qian; Ko, King-Tim; Iversen, Villy Bæk

    2009-01-01

    Performance modeling is important for the purpose of developing efficient dimensioning tools for large complicated networks, but it is difficult to achieve in heterogeneous wireless networks, where different networks have different statistical characteristics in service and traffic models. Multiservice loss analysis based on multi-dimensional Markov chains becomes intractable in these networks due to the intensive computations required. This paper focuses on performance modeling for heterogeneous wireless networks based on a hierarchical overlay infrastructure. A method based on decomposition of the correlated traffic is used to achieve an approximate performance modeling for multiservice in hierarchical heterogeneous wireless networks with overflow traffic. The accuracy of the approximate performance obtained by our proposed modeling is verified by simulations.

  2. Modelling object typicality in description logics

    CSIR Research Space (South Africa)

    Britz, K

    2009-12-01

    Full Text Available A DL knowledge base consists of a TBox, which contains terminological axioms, and an ABox, which contains assertions, i.e. facts about specific named objects and relationships between objects in the domain. Depending on the expressive power of the DL, a knowledge base may also contain an RBox. An interpretation I satisfies C ⊑ D, written I ⊨ C ⊑ D, iff C^I ⊆ D^I. C ⊑ D is valid, written ⊨ C ⊑ D, iff it is satisfied by all interpretations. RBox statements include role inclusions of the form R ⊑ S, and assertions used to define role properties...

  3. Energy performance modelling and heat recovery unit efficiency assessment of an office building

    Directory of Open Access Journals (Sweden)

    Harmati Norbert L.

    2015-01-01

    Full Text Available This paper investigates and analyzes a typical multi-zone office building's annual energy performance for the location and climate data of central Belgrade. The aim is to evaluate the HVAC system's and heat recovery (HR) unit's performance in order to identify the most preferable heating and cooling solution for the typical climate of Belgrade. The energy performance of four HVAC system types (air-to-air heat pump, gas-electricity, electrical, and fan coil system) was analyzed, compared and evaluated on a virtual office building model in order to assess the total annual energy performance and to determine the efficiency of the HR unit's application. Further, the parameters of an energy efficient building envelope, HVAC system, internal loads, building operation schedules and occupancy intervals were implemented into the multi-zone analysis model. The investigation was conducted in the EnergyPlus simulation engine using system thermodynamic algorithms and surface/air heat balance modules. The comparison and evaluation of the obtained results was achieved through the conversion of the calculated total energy demand into primary energy. The goal is to identify the most preferable heating and cooling solution (Best Case Scenario) for the climate of Belgrade and to outline major criteria for qualitative enhancement.

  4. Assessing Predictive Performance of Published Population Pharmacokinetic Models of Intravenous Tobramycin in Pediatric Patients.

    Science.gov (United States)

    Bloomfield, Celeste; Staatz, Christine E; Unwin, Sean; Hennig, Stefanie

    2016-06-01

    Several population pharmacokinetic models describe the dose-exposure relationship of tobramycin in pediatric patients. Before the implementation of these models in clinical practice for dosage adjustment, their predictive performance should be externally evaluated. This study tested the predictive performance of all published population pharmacokinetic models of tobramycin developed for pediatric patients with an independent patient cohort. A literature search was conducted to identify suitable models for testing. Demographic and pharmacokinetic data were collected retrospectively from the medical records of pediatric patients who had received intravenous tobramycin. Tobramycin exposure was predicted from each model. Predictive performance was assessed by visual comparison of predictions to observations, by calculation of bias and imprecision, and through the use of simulation-based diagnostics. Eight population pharmacokinetic models were identified. A total of 269 concentration-time points from 41 pediatric patients with cystic fibrosis were collected for external evaluation. Three models consistently performed best in all evaluations and had mean errors ranging from -0.4 to 1.8 mg/liter, relative mean errors ranging from 4.9 to 29.4%, and root mean square errors ranging from 47.8 to 66.9%. Simulation-based diagnostics supported these findings. Models that allowed a two-compartment disposition generally had better predictive performance than those that used a one-compartment disposition model. Several published models of the pharmacokinetics of tobramycin showed reasonably low levels of bias, although all models seemed to have some problems with imprecision. This suggests that knowledge of typical pharmacokinetic behavior and patient covariate values alone, without feedback concentration measurements from individual patients, is not sufficient to make precise predictions. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
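
    The bias and imprecision statistics used in such external evaluations (mean error, relative mean error and relative root mean square error) can be computed as in the Python sketch below; the observed and predicted concentrations are illustrative values, not the study data, and exact metric definitions vary between evaluations.

        # Minimal sketch of bias/imprecision metrics for external model evaluation.
        import numpy as np

        observed = np.array([6.1, 8.4, 2.0, 9.7, 4.3])       # mg/liter (illustrative)
        predicted = np.array([5.8, 9.1, 2.5, 8.9, 4.0])      # mg/liter (illustrative)

        errors = predicted - observed
        mean_error = errors.mean()                            # bias, mg/liter
        relative_me = 100.0 * (errors / observed).mean()      # relative bias, %
        relative_rmse = 100.0 * np.sqrt(((errors / observed) ** 2).mean())  # imprecision, %

        print(f"ME = {mean_error:.2f} mg/liter, rME = {relative_me:.1f}%, rRMSE = {relative_rmse:.1f}%")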

  5. Real-time Performance Verification of Core Protection and Monitoring System with Integrated Model for SMART Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Koo, Bon-Seung; Kim, Sung-Jin; Hwang, Dae-Hyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    In keeping with these purposes, a real-time model of the digital core protection and monitoring systems for simulator implementation was developed on the basis of the SCOPS and SCOMS algorithms. In addition, important features of the software models were explained for application to the SMART simulator, and the real-time performance of the models linked via DLL was examined for various simulation scenarios. In this paper, a real-time performance verification of the core protection and monitoring software for the SMART simulator is performed with the integrated simulator model. Various DLL connection tests were done for software algorithm changes. In addition, typical accident scenarios of SMART were simulated with 3KEYMASTER and the simulated results were compared with those of the DLL-linked core protection and monitoring software. Each calculational result showed good agreement.

  6. Towards an Accurate Performance Modeling of Parallel Sparse Factorization

    Energy Technology Data Exchange (ETDEWEB)

    Grigori, Laura; Li, Xiaoye S.

    2006-05-26

    We present a performance model to analyze a parallel sparse LU factorization algorithm on modern cache-based, high-end parallel architectures. Our model characterizes the algorithmic behavior by taking into account the underlying processor speed, memory system performance, as well as the interconnect speed. The model is validated using the SuperLU_DIST linear system solver, sparse matrices from real applications, and an IBM POWER3 parallel machine. Our modeling methodology can be easily adapted to study the performance of other types of sparse factorizations, such as Cholesky or QR.

  7. Model of single-electron performance of micropixel avalanche photodiodes

    CERN Document Server

    Sadygov, Z; Akhmedov, G; Akhmedov, F; Khorev, S; Mukhtarov, R; Sadigov, A; Sidelev, A; Titov, A; Zerrouk, F; Zhezher, V

    2014-01-01

    An approximate iterative model of the avalanche process in a pixel of a micropixel avalanche photodiode initiated by a single photoelectron is presented. The model describes the development of the avalanche process in time, taking into account the change of the electric field within the depleted region caused by internal discharge and external recharge currents. Conclusions obtained as a result of the modelling are compared with experimental data. Simulations show that the typical durations of the front and rear edges of the discharge current have the same magnitude of less than 50 ps. The front of the external recharge current has the same duration; however, the duration of the rear edge depends on the value of the quenching micro-resistor. It was found that the effective capacitance of the pixel, calculated as the slope of the linear dependence of the pulse charge on bias voltage, exceeds its real capacitance by a factor of two.

  8. Emerging Carbon Nanotube Electronic Circuits, Modeling, and Performance

    OpenAIRE

    Yao Xu; Ashok Srivastava; Sharma, Ashwani K.

    2010-01-01

    Current transport and dynamic models of carbon nanotube field-effect transistors are presented. A model of single-walled carbon nanotube as interconnect is also presented and extended in modeling of single-walled carbon nanotube bundles. These models are applied in studying the performances of circuits such as the complementary carbon nanotube inverter pair and carbon nanotube as interconnect. Cadence/Spectre simulations show that carbon nanotube field-effect transistor circuits can operate a...

  9. Planetary Suit Hip Bearing Model for Predicting Design vs. Performance

    Science.gov (United States)

    Cowley, Matthew S.; Margerum, Sarah; Harvil, Lauren; Rajulu, Sudhakar

    2011-01-01

    Designing a planetary suit is very complex and often requires difficult trade-offs between performance, cost, mass, and system complexity. In order to verify that new suit designs meet requirements, full prototypes must eventually be built and tested with human subjects. Using computer models early in the design phase of new hardware development can be advantageous, allowing virtual prototyping to take place. Having easily modifiable models of the suit hard sections may reduce the time it takes to make changes to the hardware designs and then to understand their impact on suit and human performance. A virtual design environment gives designers the ability to think outside the box and exhaust design possibilities before building and testing physical prototypes with human subjects. Reductions in prototyping and testing may eventually reduce development costs. This study is an attempt to develop computer models of the hard components of the suit with known physical characteristics, supplemented with human subject performance data. Objectives: The primary objective was to develop an articulating solid model of the Mark III hip bearings to be used for evaluating suit design performance of the hip joint. Methods: Solid models of a planetary prototype (Mark III) suit's hip bearings and brief section were reverse-engineered from the prototype. The performance of the models was then compared by evaluating the mobility performance differences between the nominal hardware configuration and hardware modifications. This was accomplished by gathering data from specific suited tasks. Subjects performed maximum flexion and abduction tasks while in a nominal suit bearing configuration and in three off-nominal configurations. Performance data for the hip were recorded using state-of-the-art motion capture technology. Results: The results demonstrate that using solid models of planetary suit hard segments as a performance design tool is feasible. From a general trend perspective

  10. Models used to assess the performance of photovoltaic systems.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Klise, Geoffrey T.

    2009-12-01

    This report documents the various photovoltaic (PV) performance models and software developed and utilized by researchers at Sandia National Laboratories (SNL) in support of the Photovoltaics and Grid Integration Department. In addition to PV performance models, hybrid system and battery storage models are discussed. A hybrid system using other distributed sources and energy storage can help reduce the variability inherent in PV generation, and due to the complexity of combining multiple generation sources and system loads, these models are invaluable for system design and optimization. Energy storage plays an important role in reducing PV intermittency, and battery storage models are used to understand the best configurations and technologies to store PV generated electricity. Other researchers' models used by SNL are discussed, including some widely known models that incorporate algorithms developed at SNL. Other models included in the discussion are not used by, or were not adopted from, SNL research, but may provide some benefit to researchers working on PV array performance, hybrid system models and energy storage. The paper is organized into three sections to describe the different software models as applied to photovoltaic performance, hybrid systems, and battery storage. For each model, there is a description which includes where to find the model, whether it is currently maintained, and any references that may be available. Modeling improvements underway at SNL include quantifying the uncertainty of individual system components, the overall uncertainty in modeled vs. measured results, and modeling large PV systems. SNL is also conducting research into the overall reliability of PV systems.

  11. Comparison of performance of simulation models for floor heating

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    This paper describes the comparison of performance of simulation models for floor heating with different levels of detail in the modelling process. The models are compared in an otherwise identical simulation model containing room model, walls, windows, ceiling and ventilation system. By exchanging only the floor construction, the differences can be directly compared. In this comparison, a two-dimensional model of a slab-on-grade floor including foundation is used as reference. The other models include a one-dimensional model and a thermal network model including the linear thermal transmittance of the foundation. The result can also be found in the energy consumption of the building, since up to half the energy consumption is lost through the ground. Looking at the different implementations it is also found that including a 1 m ground volume below the floor construction under a one-dimensional model...

  12. Selecting Optimal Subset of Features for Student Performance Model

    Directory of Open Access Journals (Sweden)

    Hany M. Harb

    2012-09-01

    Full Text Available Educational data mining (EDM) is a new, growing research area in which the essence of data mining concepts is used in the educational field for the purpose of extracting useful information on student behavior in the learning process. Classification methods like decision trees, rule mining, and Bayesian networks can be applied to educational data for predicting student behavior, such as performance in an examination. This prediction may help in student evaluation. As feature selection influences the predictive accuracy of any performance model, it is essential to study elaborately the effectiveness of the student performance model in connection with feature selection techniques. The main objective of this work is to achieve high predictive performance by adopting various feature selection techniques to increase the predictive accuracy with the least number of features. The outcomes show a reduction in computational time and construction cost in both the training and classification phases of the student performance model.
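
    A minimal Python/scikit-learn sketch of the feature-selection idea is shown below: a univariate filter (SelectKBest) stands in for the various selection techniques compared in the paper, and the data are synthetic rather than real student records.

        # Compare a decision-tree classifier with all features vs. a reduced subset.
        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=400, n_features=20, n_informative=6, random_state=0)

        for k in (20, 6):                       # all features vs. a reduced subset
            X_k = SelectKBest(f_classif, k=k).fit_transform(X, y)
            acc = cross_val_score(DecisionTreeClassifier(random_state=0), X_k, y, cv=5).mean()
            print(f"{k:2d} features -> accuracy {acc:.3f}")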

  13. Integrated Main Propulsion System Performance Reconstruction Process/Models

    Science.gov (United States)

    Lopez, Eduardo; Elliott, Katie; Snell, Steven; Evans, Michael

    2013-01-01

    The Integrated Main Propulsion System (MPS) Performance Reconstruction process provides the MPS post-flight data files needed for postflight reporting to the project integration management and key customers to verify flight performance. This process/model was used as the baseline for the currently ongoing Space Launch System (SLS) work. The process utilizes several methodologies, including multiple software programs, to model integrated propulsion system performance through space shuttle ascent. It is used to evaluate integrated propulsion systems, including propellant tanks, feed systems, rocket engine, and pressurization systems performance throughout ascent based on flight pressure and temperature data. The latest revision incorporates new methods based on main engine power balance model updates to model higher mixture ratio operation at lower engine power levels.

  14. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    Science.gov (United States)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This presentation describes the capabilities of a three-dimensional thermal power model of the Advanced Stirling Radioisotope Generator (ASRG). The performance of the ASRG is presented for different scenarios, such as a Venus flyby with or without the auxiliary cooling system.

  15. Performance modeling of data dissemination in vehicular ad hoc networks

    DEFF Research Database (Denmark)

    Chaqfeh, Moumena; Lakas, Abderrahmane; Lazarova-Molnar, Sanja

    2013-01-01

    Vehicular ad hoc networks (VANETs) have an ad hoc nature which does not require fixed infrastructure or centralized administration. However, designing scalable information dissemination techniques for VANET applications remains a challenging task due to the inherent nature of such highly dynamic environments. Existing dissemination techniques often resort to simulation for performance evaluation, and there are only a few studies that offer mathematical modeling. In this paper we provide a comparative study of existing performance modeling approaches for data dissemination techniques designed for different VANET applications.

  16. Modeling radial flow ion exchange performance for condensate polisher conditions

    Energy Technology Data Exchange (ETDEWEB)

    Shallcross, D. [University of Melbourne, Melbourne, VIC (Australia). Department of Chemical Engineering; Renouf, P.

    2001-11-01

    A theoretical model is developed which simulates ion exchange performance within an annular resin bed. Flow within the mixed ion exchange bed is diverging, with the solution flowing outwards away from the bed's axis. The model is used to simulate performance of a mixed annular bed operating under condensate polisher conditions. The simulation predictions are used to develop design envelope curves for practical radial flow beds and to estimate potential cost savings flowing from less expensive polisher vessels. (orig.)

  17. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems that involve several types of numerical computations. The computers considered in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216.

  18. A Formal Comparison of Model Variants for Performance Prediction

    Science.gov (United States)

    2009-12-01

    [Figure and table residue from the source PDF: performance-score plots comparing humans and the model for "Mission Team Performance in UAS Predator Simulation (CERI, 2005)" and "Team Performance in F-16 Simulator Missions (DMO Testbed, Mesa)"; Table 2 reports cross-validation RMSD. The remaining fragment acknowledges the Warfighter Readiness Research Division and the Cognitive Engineering Research Institute (CERI).]

  19. Port performance evaluation tool based on microsimulation model

    Directory of Open Access Journals (Sweden)

    Tsavalista Burhani Jzolanda

    2017-01-01

    Full Text Available As port performance is becoming correlated with national competitiveness, the issue of port performance evaluation has gained significant attention. Port performance can simply be indicated by port service levels to the ship (e.g., throughput, waiting time for berthing, etc.), as well as the utilization level of equipment and facilities within a certain period. The performance evaluation can then be used as a tool to develop related policies for making the port's performance more effective and efficient. However, the evaluation is frequently conducted based on a deterministic approach, which hardly captures the natural variations of port parameters. Therefore, this paper presents a stochastic microsimulation model for investigating the impacts of port parameter variations on port performance. The variations are derived from actual data in order to provide more realistic results. The model is developed using MATLAB and Simulink based on queuing theory.

  20. A Mathematical Model to Improve the Performance of Logistics Network

    Directory of Open Access Journals (Sweden)

    Muhammad Izman Herdiansyah

    2012-01-01

    Full Text Available The role of logistics nowadays is expanding from just providing transportation and warehousing to offering total integrated logistics. To remain competitive in the global market environment, business enterprises need to improve their logistics operations performance. The improvement will be achieved when we can provide a comprehensive analysis of the logistics network and optimize its performance. In this paper, a mixed integer linear model for optimizing logistics network performance is developed. It provides a single-product, multi-period, multi-facility model, as well as the multi-product concept. The problem is modeled in the form of a network flow problem with the main objective of minimizing total logistics cost. The problem can be solved using a commercial linear programming package like CPLEX or LINDO. Even in small cases, the solver in Excel may also be used to solve such a model. Keywords: logistics network, integrated model, mathematical programming, network optimization

  1. A Mathematical Model to Improve the Performance of Logistics Network

    Directory of Open Access Journals (Sweden)

    Muhammad Izman Herdiansyah

    2012-01-01

    Full Text Available The role of logistics nowadays is expanding from just providing transportation and warehousing to offering total integrated logistics. To remain competitive in the global market environment, business enterprises need to improve their logistics operations performance. The improvement will be achieved when we can provide a comprehensive analysis of the logistics network and optimize its performance. In this paper, a mixed integer linear model for optimizing logistics network performance is developed. It provides a single-product, multi-period, multi-facility model, as well as the multi-product concept. The problem is modeled in the form of a network flow problem with the main objective of minimizing total logistics cost. The problem can be solved using a commercial linear programming package like CPLEX or LINDO. Even in small cases, the solver in Excel may also be used to solve such a model. Keywords: logistics network, integrated model, mathematical programming, network optimization

  2. Performance analysis of FXLMS algorithm with secondary path modeling error

    Institute of Scientific and Technical Information of China (English)

    SUN Xu; CHEN Duanshi

    2003-01-01

    Performance analysis of the filtered-X LMS (FXLMS) algorithm with secondary path modeling error is carried out in both the time and frequency domains. It is shown first that the effects of secondary path modeling error on the performance of the FXLMS algorithm are determined by the distribution of the relative error of the secondary path model over frequency. In the case where the distribution of the relative error is uniform, the modeling error of the secondary path has no effect on the performance of the algorithm. In addition, a limitation property of the FXLMS algorithm is proved, which implies that the negative effects of secondary path modeling error can be compensated by increasing the adaptive filter length. Finally, some insights into the "spillover" phenomenon of the FXLMS algorithm are given.
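
    A minimal single-channel FXLMS sketch in Python/NumPy is given below to make the role of the secondary-path model explicit: the reference signal is filtered through the model S_hat before it enters the weight update, while the error is formed through the true secondary path S. All signals and path coefficients are illustrative, not taken from the paper.

        # Minimal FXLMS loop for active noise control with a secondary-path model S_hat.
        import numpy as np

        rng = np.random.default_rng(0)
        N, L = 4000, 16                        # samples, adaptive filter length
        x = rng.standard_normal(N)             # reference signal
        P = np.array([0.0, 0.6, 0.3, 0.1])     # primary path (unknown in practice)
        S = np.array([0.0, 0.8, 0.2])          # true secondary path
        S_hat = np.array([0.0, 0.75, 0.25])    # secondary-path model (with modeling error)

        d = np.convolve(x, P)[:N]              # disturbance at the error microphone
        x_filt = np.convolve(x, S_hat)[:N]     # reference filtered through the path model
        w = np.zeros(L)
        x_buf = np.zeros(L)                    # reference buffer for the controller output
        xf_buf = np.zeros(L)                   # filtered-reference buffer for the update
        y_buf = np.zeros(len(S))               # controller output passing through S
        mu = 0.01
        e_hist = np.zeros(N)

        for n in range(N):
            x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
            xf_buf = np.roll(xf_buf, 1); xf_buf[0] = x_filt[n]
            y = w @ x_buf                       # anti-noise sample
            y_buf = np.roll(y_buf, 1); y_buf[0] = y
            e = d[n] - S @ y_buf                # residual error at the microphone
            w += mu * e * xf_buf                # FXLMS weight update
            e_hist[n] = e

        print("mean |e|, first vs last 500 samples:",
              np.abs(e_hist[:500]).mean(), np.abs(e_hist[-500:]).mean())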

  3. Performance evaluation of quantum well infrared phototransistor instrumentation through modeling

    Science.gov (United States)

    El-Tokhy, Mohamed S.; Mahmoud, Imbaby I.

    2014-05-01

    This paper presents a theoretical analysis of the characteristics of quantum well infrared phototransistors (QWIPTs). A mathematical model describing this device is introduced under a nonuniform distribution of quantum wells (QWs). The MATLAB environment is used to devise this model. Furthermore, block diagram models in the VisSim environment were used to describe the device characteristics. The developed models are used to investigate the behavior of the device for different values of performance parameters such as bias voltage, spacing between QWs, and temperature. These parameters are tuned to enhance the performance of these quantum phototransistors through the presented modeling. Moreover, the resultant performance characteristics and a comparison between QWIPTs and quantum wire infrared phototransistors are investigated. The obtained results are validated against published experimental work and full agreement is obtained.

  4. Construction Of A Performance Assessment Model For Zakat Management Institutions

    Directory of Open Access Journals (Sweden)

    Sri Fadilah

    2016-12-01

    Full Text Available The objective of the research is to examine performance evaluation using the Balanced Scorecard model. The research is motivated by the big gap between the potential of zakat (alms and religious tax in Islam), estimated at as much as 217 trillion rupiahs, and the realization of the collected zakat funds, which has only reached three trillion. This indicates that the performance of zakat management organizations in collecting zakat is still very low. On the other hand, the quantity and the quality of zakat management organizations have to be improved. This means that a performance evaluation model is needed as a tool to evaluate performance. The model construct is a performance evaluation model that can be implemented in zakat management organizations. Organizational performance with the Balanced Scorecard evaluation model will be effective if it is supported by three aspects, namely PI, BO and TQM. This research uses an explanatory method and SEM/PLS as the data analysis tool. Data collection techniques are questionnaires, interviews and documentation. The result of this research shows that PI, BO and TQM, both simultaneously and partially, have a significant effect on organizational performance.

  5. Configuration of Distributed Message Converter Systems using Performance Modeling

    NARCIS (Netherlands)

    Aberer, Karl; Risse, Thomas; Wombacher, Andreas

    2001-01-01

    To find a configuration of a distributed system satisfying performance goals is a complex search problem that involves many design parameters, like hardware selection, job distribution and process configuration. Performance models are powerful tools to analyse potential system configurations; however...

  6. A Composite Model for Employees' Performance Appraisal and Improvement

    Science.gov (United States)

    Manoharan, T. R.; Muralidharan, C.; Deshmukh, S. G.

    2012-01-01

    Purpose: The purpose of this paper is to develop an innovative method of performance appraisal that will be useful for designing a structured training programme. Design/methodology/approach: Employees' performance appraisals are conducted using new approaches, namely data envelopment analysis and an integrated fuzzy model. Interpretive structural…

  7. Performance Implications of Business Model Change: A Case Study

    Directory of Open Access Journals (Sweden)

    Jana Poláková

    2015-01-01

    Full Text Available The paper deals with changes in performance level introduced by a change of business model. The selected case is a small family business undergoing substantial changes in reaction to structural changes in its markets. The authors used the concept of the business model to describe the value creation processes within the selected family business, and by contrasting the differences between value creation processes before and after the change was introduced, they demonstrate the role of the business model as a performance differentiator. This is illustrated with the use of a business model canvas constructed on the basis of interviews, observations and document analysis. The two business model canvases allow for the explanation of cause-and-effect relationships within the business leading to the change in performance. The change in performance is assessed by a financial analysis of the business conducted over the period 2006–2012, which demonstrates the changes in performance (ROA, ROE and ROS reached their lowest levels before the change of business model was introduced and grew after its introduction), as well as activity indicators showing similar developments for the family business. The described case study contributes to the concept of business modeling with arguments supporting its value as a strategic tool facilitating decisions related to value creation within the business.

  8. Generation of (synthetic) influent data for performing wastewater treatment modelling studies

    DEFF Research Database (Denmark)

    Flores Alsina, Xavier; Ort, Christoph; Martin, Cristina

    2014-01-01

    The success of many modelling studies strongly depends on the availability of sufficiently long influent time series - the main disturbance of a typical wastewater treatment plant (WWTP) - representing the inherent natural variability at the plant inlet as accurately as possible. This is an important point since most modelling projects suffer from a lack of realistic data representing the influent wastewater dynamics. The objective of this paper is to show the advantages of creating synthetic data when performing modelling studies for WWTPs. This study reviews the different principles used in current WWTP influent disturbance models. Finally, the outcome of these discussions will be used to define specific tasks that should be tackled in the near future to achieve more general acceptance and use of WWTP influent generators.

  9. Activity-Based Costing Model for Assessing Economic Performance.

    Science.gov (United States)

    DeHayes, Daniel W.; Lovrinic, Joseph G.

    1994-01-01

    An economic model for evaluating the cost performance of academic and administrative programs in higher education is described. Examples from its application at Indiana University-Purdue University Indianapolis are used to illustrate how the model has been used to control costs and reengineer processes. (Author/MSE)

  10. Null Objects in Second Language Acquisition: Grammatical vs. Performance Models

    Science.gov (United States)

    Zyzik, Eve C.

    2008-01-01

    Null direct objects provide a favourable testing ground for grammatical and performance models of argument omission. This article examines both types of models in order to determine which gives a more plausible account of the second language data. The data were collected from second language (L2) learners of Spanish by means of four oral…

  11. Modelling the Performance of Product Integrated Photovoltaic (PIPV) Cells Indoors

    NARCIS (Netherlands)

    Apostolou, G.; Verwaal, M.; Reinders, Angelina H.M.E.

    2014-01-01

    In this paper we present a model which has been developed for the estimation of PV product cells' performance in an indoor environment. The model computes the efficiency and power production of PV technologies as a function of distance from natural and artificial light sources. It intends

  12. MODELING AND PERFORMANCE ANALYSIS FOR THE SERIAL AND PARALLEL PRODUCTION SYSTEM BASED ON GSPN

    Institute of Scientific and Technical Information of China (English)

    Gao Jianhua; Hu Xudong; Yang Ruqing

    2004-01-01

    Different from the existing applications of generalized stochastic Petri net (GSPN) theory in machine-tool manufacturing systems, reliability computation of FMS, testability parameter determination and fault analysis, a new idea of applying GSPN to the modeling and performance analysis of serial and parallel production systems is proposed. One typical discrete event dynamic system (DEDS), the turner unit of a palletizing system, is taken as a real case for research. Based upon the established GSPN models, the working performances of the serial and parallel layouts are compared. Furthermore, their differences in working mechanisms, including the feeding mechanism, coordinating mechanism and monitoring mechanism, are discussed. Thus a theoretical basis helpful for appraising a layout plan and its reasonableness is provided. Meanwhile, the research results show that the parallel layout is more advantageous than the serial one for greatly improving the operational speed of the production system.

  13. The Use of Neural Network Technology to Model Swimming Performance

    Science.gov (United States)

    Silva, António José; Costa, Aldo Manuel; Oliveira, Paulo Moura; Reis, Victor Machado; Saavedra, José; Perl, Jurgen; Rouboa, Abel; Marinho, Daniel Almeida

    2007-01-01

    The aims of the present study were: to identify the factors which are able to explain the performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers, to model the performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons), and to assess the precision of the neural network models in predicting performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between preponderant variables for each gender and swim performance in the 200 meters medley and 400 meters front crawl events were developed. For this purpose a feed forward neural network was used (Multilayer Perceptron) with three neurons in a single hidden layer. The prognosis precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach in the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports. Key points: The non-linear analysis resulting from the use of feed forward neural networks allowed the development of four performance models. The mean difference between the true and estimated results produced by each of the four neural network models constructed was low. The neural network tool can be a good approach to performance modeling as an alternative to standard statistical models that presume well-defined distributions and independence among all inputs. The use of neural networks for sports
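
    A feed-forward network with a single hidden layer of three neurons, as described above, can be sketched with scikit-learn as follows; the data are synthetic stand-ins for the swimmers' test-battery variables.

        # Multilayer perceptron with one hidden layer of three neurons (synthetic data).
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import train_test_split
        from sklearn.datasets import make_regression

        X, y = make_regression(n_samples=138, n_features=8, noise=5.0, random_state=1)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000, random_state=1))
        model.fit(X_tr, y_tr)
        print("R^2 on held-out data:", round(model.score(X_te, y_te), 3))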

  14. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform reference tracking and disturbance rejection in an economically optimal way. The performance function is chosen as a mixture of the ℓ1-norm and a linear weighting to model the economics of the system. Simulations show a significant improvement of the performance of the MPC compared to the current...

  15. Models of performance of evolutionary program induction algorithms based on indicators of problem difficulty.

    Science.gov (United States)

    Graff, Mario; Poli, Riccardo; Flores, Juan J

    2013-01-01

    Modeling the behavior of algorithms is the realm of evolutionary algorithm theory. From a practitioner's point of view, theory must provide some guidelines regarding which algorithm/parameters to use in order to solve a particular problem. Unfortunately, most theoretical models of evolutionary algorithms are difficult to apply to realistic situations. However, in recent work (Graff and Poli, 2008, 2010), where we developed a method to practically estimate the performance of evolutionary program-induction algorithms (EPAs), we started addressing this issue. The method was quite general; however, it suffered from some limitations: it required the identification of a set of reference problems, it required hand picking a distance measure in each particular domain, and the resulting models were opaque, typically being linear combinations of 100 features or more. In this paper, we propose a significant improvement of this technique that overcomes the three limitations of our previous method. We achieve this through the use of a novel set of features for assessing problem difficulty for EPAs which are very general, essentially based on the notion of finite difference. To show the capabilities of our technique and to compare it with our previous performance models, we create models for the same two important classes of problems (symbolic regression on rational functions and Boolean function induction) used in our previous work. We model a variety of EPAs. The comparison showed that for the majority of the algorithms and problem classes, the new method produced much simpler and more accurate models than before. To further illustrate the practicality of the technique and its generality (beyond EPAs), we have also used it to predict the performance of both autoregressive models and EPAs on the problem of wind speed forecasting, obtaining simpler and more accurate models that outperform our previous performance models in all cases.

  16. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR) algorithm, which addresses the problem of overfitting the training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
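
    The following scikit-learn sketch shows SVR with tuned hyperparameters on synthetic data; an exhaustive grid search is used here only as a simple stand-in for the genetic-algorithm optimisation of the SVR training parameters described in the abstract.

        # SVR with hyperparameter tuning (grid search standing in for a GA).
        from sklearn.svm import SVR
        from sklearn.model_selection import GridSearchCV
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import Pipeline
        from sklearn.datasets import make_regression

        X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)

        pipe = Pipeline([("scale", StandardScaler()), ("svr", SVR(kernel="rbf"))])
        grid = {"svr__C": [1, 10, 100],
                "svr__epsilon": [0.01, 0.1, 1.0],
                "svr__gamma": ["scale", 0.01, 0.1]}
        search = GridSearchCV(pipe, grid, cv=5, scoring="neg_mean_squared_error")
        search.fit(X, y)
        print("best parameters:", search.best_params_)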

  17. Facial Performance Transfer via Deformable Models and Parametric Correspondence.

    Science.gov (United States)

    Asthana, Akshay; de la Hunty, Miles; Dhall, Abhinav; Goecke, Roland

    2012-09-01

    The issue of transferring facial performance from one person's face to another's has been an area of interest for the movie industry and the computer graphics community for quite some time. In recent years, deformable face models, such as the Active Appearance Model (AAM), have made it possible to track and synthesize faces in real time. Not surprisingly, deformable face model-based approaches for facial performance transfer have gained tremendous interest in the computer vision and graphics community. In this paper, we focus on the problem of real-time facial performance transfer using the AAM framework. We propose a novel approach of learning the mapping between the parameters of two completely independent AAMs, using them to facilitate the facial performance transfer in a more realistic manner than previous approaches. The main advantage of modeling this parametric correspondence is that it allows a "meaningful" transfer of both the nonrigid shape and texture across faces irrespective of the speakers' gender, shape, and size of the faces, and illumination conditions. We explore linear and nonlinear methods for modeling the parametric correspondence between the AAMs and show that the sparse linear regression method performs the best. Moreover, we show the utility of the proposed framework for a cross-language facial performance transfer that is an area of interest for the movie dubbing industry.
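    As an illustrative sketch only (the data and dimensions are synthetic stand-ins, not real AAM tracks), the code below learns a sparse linear mapping from one parameter space to another with an L1-penalized regression, which is the flavor of method the abstract reports as performing best.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(1)
# Stand-in AAM parameter tracks for two speakers (frames x parameters)
src = rng.normal(size=(500, 20))                    # source AAM parameters per frame
true_map = np.zeros((20, 20))
true_map[:10, :10] = rng.normal(size=(10, 10))      # only part of the mapping is active
dst = src @ true_map + 0.05 * rng.normal(size=(500, 20))   # target AAM parameters

# Sparse linear regression from source parameters to target parameters
mapper = MultiTaskLasso(alpha=0.01).fit(src, dst)

new_frame = rng.normal(size=(1, 20))                # tracked source-face parameters
transferred = mapper.predict(new_frame)             # parameters used to drive the target AAM
print(transferred.shape)                            # (1, 20)
```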

  18. MODEL-BASED PERFORMANCE EVALUATION APPROACH FOR MOBILE AGENT SYSTEMS

    Institute of Scientific and Technical Information of China (English)

    Li Xin; Mi Zhengkun; Meng Xudong

    2004-01-01

    Claimed as the next generation programming paradigm, mobile agent technology has attracted extensive interest in recent years. However, up to now, limited research effort has been devoted to the performance study of mobile agent systems, and most of this research focuses on agent behavior analysis, with the result that the models are hard to apply to mobile agent systems. To bridge the gap, a new performance evaluation model derived from the operation mechanisms of mobile agent platforms is proposed. Details are discussed for the design of companion simulation software, which can provide system performance measures such as the platform's response time to a mobile agent. Further investigation follows on the determination of model parameters. Finally, a comparison is made between the model-based simulation results and the measurement-based real performance of mobile agent systems. The results show that the proposed model and the designed software are effective in evaluating the performance characteristics of mobile agent systems. The proposed approach can also be considered as the basis of performance analysis for large systems composed of multiple mobile agent platforms.

  19. Observer analysis and its impact on task performance modeling

    Science.gov (United States)

    Jacobs, Eddie L.; Brown, Jeremy B.

    2014-05-01

    Fire fighters use relatively low cost thermal imaging cameras to locate hot spots and fire hazards in buildings. This research describes the analyses performed to study the impact of thermal image quality on fire fighter fire hazard detection task performance. Using human perception data collected by the National Institute of Standards and Technology (NIST) for fire fighters detecting hazards in a thermal image, an observer analysis was performed to quantify the sensitivity and bias of each observer. Using this analysis, the subjects were divided into three groups representing three different levels of performance. The top-performing group was used for the remainder of the modeling. Models were developed which related image quality factors such as contrast, brightness, spatial resolution, and noise to task performance probabilities. The models were fitted to the human perception data using logistic regression as well as probit regression. Probit regression was found to yield superior fits and showed that models including not only 2nd-order but also 3rd-order parameter interactions performed the best.
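    A minimal sketch of the modeling step described here, fitting probit and logit models with an interaction term on synthetic perception data; the variable names and data are assumptions, not the NIST dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
contrast = rng.uniform(0.1, 1.0, n)      # stand-in image-quality factors
noise = rng.uniform(0.0, 0.5, n)
# Synthetic detection outcomes influenced by the factors and their interaction
latent = 2.0 * contrast - 3.0 * noise - 1.5 * contrast * noise
detected = (latent + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = sm.add_constant(np.column_stack([contrast, noise, contrast * noise]))
probit_fit = sm.Probit(detected, X).fit(disp=0)
logit_fit = sm.Logit(detected, X).fit(disp=0)
print("Probit AIC:", probit_fit.aic, "Logit AIC:", logit_fit.aic)  # compare fit quality
```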

  20. Assessment of performance of survival prediction models for cancer prognosis

    Directory of Open Access Journals (Sweden)

    Chen Hung-Chia

    2012-07-01

    Full Text Available Abstract Background Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient’s class. Although this approach has several drawbacks, it does provide natural performance metrics such as positive and negative predictive values to enable unambiguous assessments. Methods We compare the survival-time prediction and survival-time threshold approaches to analyzing cancer survival studies. We review and compare common performance metrics for the two approaches. We present new randomization tests and cross-validation methods to enable unambiguous statistical inferences for several performance metrics used with the survival-time prediction approach. We consider five survival prediction models consisting of one clinical model, two gene expression models, and two models from combinations of clinical and gene expression models. Results A public breast cancer dataset was used to compare several performance metrics using five prediction models. (1) For some prediction models, the hazard ratio from fitting a Cox proportional hazards model was significant, but the two-group comparison was insignificant, and vice versa. (2) The randomization test and cross-validation were generally consistent with the p-values obtained from the standard performance metrics. (3) Binary classifiers highly depended on how the risk groups were defined; a slight change of the survival threshold for assignment of classes led to very different prediction results. Conclusions (1) Different performance metrics for evaluation of a survival prediction model may give different conclusions in
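    A hedged sketch of the two approaches contrasted above, using the Rossi recidivism data bundled with lifelines as a stand-in for a cancer survival dataset: a Cox proportional hazards fit (survival-time prediction) versus a binary classifier built from a survival-time threshold. The threshold value and feature choices are illustrative assumptions.

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = load_rossi()                     # stand-in survival data: 'week' (time), 'arrest' (event)

# Survival-time prediction approach: Cox proportional hazards model
cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")
print(cph.summary[["coef", "p"]])

# Survival-time threshold approach: binary classes from a threshold on survival time
threshold = 26                        # arbitrary threshold in weeks
known = (df["week"] >= threshold) | (df["arrest"] == 1)    # class label is observable
sub = df[known]
label = ((sub["week"] < threshold) & (sub["arrest"] == 1)).astype(int)
X = sub.drop(columns=["week", "arrest"])
clf = LogisticRegression(max_iter=1000).fit(X, label)
print("AUC:", roc_auc_score(label, clf.predict_proba(X)[:, 1]))
```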

  1. Disaggregation of Rainy Hours: Compared Performance of Various Models.

    Science.gov (United States)

    Ben Haha, M.; Hingray, B.; Musy, A.

    In the urban environment, the response times of catchments are usually short. To design or to diagnose waterworks in that context, it is necessary to describe rainfall events with a good time resolution: a 10mn time step is often necessary. Such information is not always available. Rainfall disaggregation models thus have to be applied to produce that short-time-resolution information from coarser rainfall data. The communication will present the performance obtained with several rainfall disaggregation models that allow for the disaggregation of rainy hours into six 10mn rainfall amounts. The ability of the models to reproduce some statistical characteristics of rainfall (mean, variance, overall distribution of 10mn-rainfall amounts; extreme values of maximal rainfall amounts over different durations) is evaluated by means of different graphical and numerical criteria. The performance of simple models presented in some scientific papers or developed in the Hydram laboratory, as well as the performance of more sophisticated ones, is compared with the performance of the basic constant disaggregation model. The compared models are either deterministic or stochastic; for some of them the disaggregation is based on scaling properties of rainfall. The compared models are, in increasing complexity order: constant model, linear model (Ben Haha, 2001), Ormsbee Deterministic model (Ormsbee, 1989), Artificial Neural Network based model (Burian et al. 2000), Hydram Stochastic 1 and Hydram Stochastic 2 (Ben Haha, 2001), Multiplicative Cascade based model (Olsson and Berndtsson, 1998), Ormsbee Stochastic model (Ormsbee, 1989). The 625 rainy hours used for that evaluation (with an hourly rainfall amount greater than 5mm) were extracted from the 21-year chronological rainfall series (10mn time step) observed at the Pully meteorological station, Switzerland. The models were also evaluated when applied to different rainfall classes depending on the season first and on the
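    The simplest of the compared models, the constant disaggregation model, amounts to splitting each hourly amount into six equal 10-minute amounts. The sketch below is a hedged illustration of that baseline, not a reproduction of the laboratory's code.

```python
import numpy as np

def constant_disaggregation(hourly_mm):
    """Split each hourly rainfall amount into six equal 10-minute amounts."""
    hourly_mm = np.asarray(hourly_mm, dtype=float)
    return np.repeat(hourly_mm / 6.0, 6)

print(constant_disaggregation([6.3, 12.0]))   # twelve 10-minute values (mm)
```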

  2. Human performance modeling for system of systems analytics: combat performance-shaping factors.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Miller, Dwight Peter

    2006-01-01

    The US military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives. To support this goal, Sandia National Laboratories (SNL) has undertaken a program of HPM as an integral augmentation to its system-of-systems (SoS) analytics capabilities. The previous effort, reported in SAND2005-6569, evaluated the effects of soldier cognitive fatigue on SoS performance. The current effort began with a very broad survey of any performance-shaping factors (PSFs) that might also affect soldiers' performance in combat situations. The work included consideration of three different approaches to cognition modeling and how appropriate they would be for application to SoS analytics. The bulk of this report categorizes 47 PSFs into three groups (internal, external, and task-related) and provides brief descriptions of how each affects combat performance, according to the literature. The PSFs were then assembled into a matrix with 22 representative military tasks and assigned one of four levels of estimated negative impact on task performance, based on the literature. Blank versions of the matrix were then sent to two ex-military subject-matter experts to be filled out based on their personal experiences. Data analysis was performed to identify the consensus most influential PSFs. Results indicate that combat-related injury, cognitive fatigue, inadequate training, physical fatigue, thirst, stress, poor perceptual processing, and presence of chemical agents are among the PSFs with the most negative impact on combat performance.

  3. Review of typical cognitive models and their application in human reliability analysis

    Institute of Scientific and Technical Information of China (English)

    蒋英杰; 孙志强; 李龙; 宫二玲; 谢红卫

    2011-01-01

    Typical cognitive models and their applications in human reliability analysis are reviewed. Firstly, emphasis is placed on the necessity of establishing cognitive models in human reliability analysis: a cognitive model can simplify human behavior and abstract the general structure of the human cognitive process, and it can also serve as the basis for interpreting human error. Secondly, several typical cognitive models from cognitive psychology and behavior science are reviewed, namely the information processing model, the step-ladder model of the decision-making process and the generic cognitive model. Their basic structures and functions are analyzed, and their strengths and shortcomings are discussed. It is pointed out that the structure of cognitive models is increasingly close to the real world and that these models are more and more applicable. Thirdly, the cognitive models built or used in several human reliability analysis methods are introduced, namely the SRK framework used by HCR, the information processing model used by ATHEANA, the COCOM model established by CREAM and the IDA model constructed by IDAC, and their characteristics are analyzed respectively. Lastly, the basic principles to be followed in establishing cognitive models are proposed, and future directions for the development of cognitive models are outlined.

  4. Causal Analysis for Performance Modeling of Computer Programs

    Directory of Open Access Journals (Sweden)

    Jan Lemeire

    2007-01-01

    Full Text Available Causal modeling and the accompanying learning algorithms provide useful extensions for in-depth statistical investigation and automation of performance modeling. We enlarged the scope of existing causal structure learning algorithms by using the form-free information-theoretic concept of mutual information and by introducing the complexity criterion for selecting direct relations among equivalent relations. The underlying probability distribution of experimental data is estimated by kernel density estimation. We then reported on the benefits of a dependency analysis and the decompositional capacities of causal models. Useful qualitative models, providing insight into the role of every performance factor, were inferred from experimental data. This paper reports on the results for an LU decomposition algorithm and on the study of the parameter sensitivity of the Kakadu implementation of the JPEG-2000 standard. Next, the analysis was used to search for generic performance characteristics of the applications.
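    As a hedged illustration of the form-free dependency analysis mentioned above (not the paper's own pipeline), the sketch below ranks synthetic performance factors by their mutual information with runtime using scikit-learn; the factors and the runtime model are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(3)
n = 300
block_size = rng.integers(8, 256, n)       # stand-in performance factors
threads = rng.integers(1, 9, n)
matrix_dim = rng.integers(100, 2000, n)
runtime = matrix_dim**3 / (1e8 * threads) + rng.normal(scale=0.05, size=n)

X = np.column_stack([block_size, threads, matrix_dim])
mi = mutual_info_regression(X, runtime, random_state=0)
for name, score in zip(["block_size", "threads", "matrix_dim"], mi):
    print(f"MI({name}; runtime) = {score:.3f}")   # form-free dependency strength
```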

  5. Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator

    Science.gov (United States)

    Rehman, Naveed Ur; Siddiqui, Mubashir Ali

    2017-01-01

    In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
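    A minimal sketch of the workflow described here, under stated assumptions: Latin hypercube samples of a few input parameters are run through a stand-in simulator (not the paper's thermodynamic model), and a response surface is fitted by least squares. Parameter names, ranges, and the surrogate formula are illustrative.

```python
import numpy as np
from scipy.stats import qmc

def simulator(flux, t_amb, load):
    """Stand-in for the thermodynamic SCTEG model (illustrative only)."""
    return 0.02 * flux - 0.05 * (t_amb - 25.0) + 0.3 * load - 0.01 * load**2

# Latin hypercube samples over assumed parameter ranges
sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=200)
lower, upper = [400.0, 10.0, 0.5], [1000.0, 45.0, 5.0]   # flux, ambient T, load
samples = qmc.scale(unit, lower, upper)
power = np.array([simulator(*row) for row in samples])

# Least-squares fit of a quadratic response surface
flux, t_amb, load = samples.T
design = np.column_stack([np.ones_like(flux), flux, t_amb, load, load**2])
coeffs, *_ = np.linalg.lstsq(design, power, rcond=None)
pred = design @ coeffs
r2 = 1 - np.sum((power - pred)**2) / np.sum((power - power.mean())**2)
print("R^2 of the fitted response surface:", round(r2, 4))
```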

  6. Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator

    Science.gov (United States)

    Rehman, Naveed Ur; Siddiqui, Mubashir Ali

    2017-03-01

    In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.

  7. Impact of reactive settler models on simulated WWTP performance.

    Science.gov (United States)

    Gernaey, K V; Jeppsson, U; Batstone, D J; Ingildsen, P

    2006-01-01

    Including a reactive settler model in a wastewater treatment plant model allows representation of the biological reactions taking place in the sludge blanket in the settler, something that is neglected in many simulation studies. The idea of including a reactive settler model is investigated for an ASM1 case study. Simulations with a whole plant model including the non-reactive Takács settler model are used as a reference, and are compared to simulation results considering two reactive settler models. The first is a return sludge model block removing oxygen and a user-defined fraction of nitrate, combined with a non-reactive Takács settler. The second is a fully reactive ASM1 Takács settler model. Simulations with the ASM1 reactive settler model predicted a 15.3% and 7.4% improvement of the simulated N removal performance, for constant (steady-state) and dynamic influent conditions respectively. The oxygen/nitrate return sludge model block predicts a 10% improvement of N removal performance under dynamic conditions, and might be the better modelling option for ASM1 plants: it is computationally more efficient and it will not overrate the importance of decay processes in the settler.

  8. Performance by children with typical language development in an expressive vocabulary test

    Directory of Open Access Journals (Sweden)

    Simone Rocha de Vasconcellos Hage

    2006-12-01

    Full Text Available PURPOSE: to obtain the profile of children with typical language development in an expressive vocabulary test and to verify the types of semantic deviations they used most frequently. METHODS: 400 children with typical language development, aged three to six years, participated in the study. A lexical assessment protocol with 100 items was applied. For each age group a statistical analysis was carried out, and the age groups were compared using a non-parametric test. RESULTS: five- and six-year-old children showed similar performance, superior to that of three- and four-year-olds, in the number of items named, and the number of items not named increased as age decreased. Only between the ages of five and six was there no statistically significant difference in named and non-named items. The total number of semantic deviations produced by the three-year-olds was higher than that of the four-year-olds, which in turn was higher than that of the five- and six-year-olds. The most frequent deviations were overextension and contiguity, with the younger children producing more of both types than the older ones. Deviations based on morphological proximity, phonological proximity, antonymy, deictics, periphrasis and non-verbal designation were negligible. CONCLUSION: the older the children, the higher the occurrence of the expected word, and the younger the children, the higher the occurrence of non-named items. Among the semantic deviations, the most frequent were overextension and contiguity.

  9. Comparative Performance of Volatility Models for Oil Price

    Directory of Open Access Journals (Sweden)

    Afees A. Salisu

    2012-07-01

    Full Text Available In this paper, we compare the performance of volatility models for oil price using daily returns of WTI. The innovations of this paper are twofold: (i) we analyse the oil price across three sub-samples, namely the periods before, during and after the global financial crisis; (ii) we also analyse the comparative performance of both symmetric and asymmetric volatility models for the oil price. We find that the oil price was most volatile during the global financial crisis compared to the other sub-samples. Based on the appropriate model selection criteria, the asymmetric GARCH models appear superior to the symmetric ones in dealing with oil price volatility. This finding indicates evidence of leverage effects in the oil market, and ignoring these effects in oil price modelling will lead to serious biases and misleading results.
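    A hedged sketch of the symmetric-versus-asymmetric comparison using the `arch` package: a GARCH(1,1) against a GJR-GARCH(1,1), whose extra `o=1` term captures leverage effects. The synthetic return series stands in for WTI daily returns.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(4)
returns = rng.standard_t(df=5, size=1500)   # stand-in daily returns (roughly percent scale)

# Symmetric GARCH(1,1)
garch = arch_model(returns, vol="GARCH", p=1, o=0, q=1).fit(disp="off")
# Asymmetric GJR-GARCH(1,1): the o=1 term allows negative shocks to raise volatility more
gjr = arch_model(returns, vol="GARCH", p=1, o=1, q=1).fit(disp="off")

print("GARCH AIC:", garch.aic)
print("GJR   AIC:", gjr.aic)   # a lower AIC would favour the asymmetric specification
```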

  10. Modeling and performance analysis of QoS data

    Science.gov (United States)

    Strzeciwilk, Dariusz; Zuberek, Włodzimierz M.

    2016-09-01

    The article presents the results of modeling and analysis of data transmission performance on systems that support quality of service. Models are designed and tested taking into account a multiservice network architecture, i.e. one supporting the transmission of data belonging to different traffic classes. The traffic-shaping mechanisms studied are based on Priority Queuing, with an integrated data source as well as various types of generated data sources. The basic problems of QoS-supporting architectures and of queuing systems are discussed. Models based on Petri nets, supported by temporal logics, were designed and built. Simulation tools were used to verify the traffic-shaping mechanisms together with the applied queuing algorithms. It is shown that temporal models of Petri nets can be effectively used in the modeling and analysis of the performance of computer networks.

  11. Performance Models for Split-execution Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; McCaskey, Alex [ORNL; Schrock, Jonathan [ORNL; Seddiqi, Hadayat [ORNL; Britt, Keith A [ORNL; Imam, Neena [ORNL

    2016-01-01

    Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.

  12. Performance Assessment of Hydrological Models Considering Acceptable Forecast Error Threshold

    Directory of Open Access Journals (Sweden)

    Qianjin Dong

    2015-11-01

    Full Text Available It is essential to consider an acceptable forecast error threshold in the assessment of a hydrological model, both because research on this topic is scarce in the hydrology community and because errors do not necessarily cause risk. Two forecast errors, the rainfall forecast error and the peak flood forecast error, have been studied based on reliability theory. The first order second moment (FOSM) and bound methods are used to identify the reliability. Through the case study of the Dahuofang (DHF) Reservoir, it is shown that the correlation between these two errors has a great influence on the reliability index of the hydrological model. In particular, the reliability index of the DHF hydrological model decreases with increasing correlation. Based on reliability theory, the proposed performance evaluation framework, which incorporates the acceptable forecast error threshold and the correlation among the multiple errors, can be used to evaluate the performance of a hydrological model and to quantify the uncertainties of its output.
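    As a hedged illustration of the FOSM idea (standard first-order second-moment formulas, not necessarily the paper's exact formulation), the sketch below computes a reliability index for the combined forecast error against an acceptable threshold, and shows how it falls as the correlation between the two errors rises. All numerical values are assumptions.

```python
import numpy as np
from scipy.stats import norm

# Assumed statistics of the two forecast errors (illustrative values)
mu = np.array([0.05, 0.08])        # mean rainfall and peak-flood relative errors
sigma = np.array([0.10, 0.12])     # their standard deviations
threshold = 0.30                   # acceptable total-error threshold

for rho in (0.0, 0.5, 0.9):        # correlation between the two errors
    cov = rho * sigma[0] * sigma[1]
    mean_g = threshold - mu.sum()                        # performance function g > 0 means acceptable
    std_g = np.sqrt(sigma[0]**2 + sigma[1]**2 + 2 * cov)
    beta = mean_g / std_g                                # FOSM reliability index
    print(f"rho={rho:.1f}  beta={beta:.2f}  P(unacceptable)={norm.cdf(-beta):.4f}")
```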

  13. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided a marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.
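    A minimal sketch of the kind of comparison described, assuming synthetic data rather than NHATS variables: a logistic regression using only self-reported predictors versus one that adds a physical-performance measure, compared by AUC on a held-out set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 3000
age = rng.integers(65, 95, n)
balance_problem = rng.integers(0, 2, n)      # self-reported balance/coordination problem
prior_fall = rng.integers(0, 2, n)
gait_speed = rng.normal(0.9, 0.2, n)         # physical performance measure
logit = -4 + 0.03 * age + 0.8 * balance_problem + 1.0 * prior_fall - 0.5 * gait_speed
fell = rng.binomial(1, 1 / (1 + np.exp(-logit)))

simple = np.column_stack([age, balance_problem, prior_fall])
full = np.column_stack([simple, gait_speed])
for name, X in [("self-report only", simple), ("with performance test", full)]:
    Xtr, Xte, ytr, yte = train_test_split(X, fell, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    print(name, "AUC:", round(roc_auc_score(yte, model.predict_proba(Xte)[:, 1]), 3))
```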

  14. Human performance modeling for system of systems analytics.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.; Lawton, Craig R.; Basilico, Justin Derrick; Longsine, Dennis E. (INTERA, Inc., Austin, TX); Forsythe, James Chris; Gauthier, John Henry; Le, Hai D.

    2008-10-01

    A Laboratory-Directed Research and Development project was initiated in 2005 to investigate Human Performance Modeling in a System of Systems analytic environment. SAND2006-6569 and SAND2006-7911 document interim results from this effort; this report documents the final results. The problem is difficult because of the number of humans involved in a System of Systems environment and the generally poorly defined nature of the tasks that each human must perform. A two-pronged strategy was followed: one prong was to develop human models using a probability-based method similar to that first developed for relatively well-understood, probability-based performance modeling; the other prong was to investigate more state-of-the-art human cognition models. The probability-based modeling resulted in a comprehensive addition of human-modeling capability to the existing SoSAT computer program. The cognitive modeling resulted in an increased understanding of what is necessary to incorporate cognition-based models into a System of Systems analytic environment.

  15. Comparison of Simple Versus Performance-Based Fall Prediction Models

    Directory of Open Access Journals (Sweden)

    Shekhar K. Gadkaree BS

    2015-05-01

    Full Text Available Objective: To compare the predictive ability of standard falls prediction models based on physical performance assessments with more parsimonious prediction models based on self-reported data. Design: We developed a series of fall prediction models progressing in complexity and compared area under the receiver operating characteristic curve (AUC) across models. Setting: National Health and Aging Trends Study (NHATS), which surveyed a nationally representative sample of Medicare enrollees (age ≥65) at baseline (Round 1: 2011-2012) and 1-year follow-up (Round 2: 2012-2013). Participants: In all, 6,056 community-dwelling individuals participated in Rounds 1 and 2 of NHATS. Measurements: Primary outcomes were 1-year incidence of “any fall” and “recurrent falls.” Prediction models were compared and validated in development and validation sets, respectively. Results: A prediction model that included demographic information, self-reported problems with balance and coordination, and previous fall history was the most parsimonious model that optimized AUC for both any fall (AUC = 0.69, 95% confidence interval [CI] = [0.67, 0.71]) and recurrent falls (AUC = 0.77, 95% CI = [0.74, 0.79]) in the development set. Physical performance testing provided a marginal additional predictive value. Conclusion: A simple clinical prediction model that does not include physical performance testing could facilitate routine, widespread falls risk screening in the ambulatory care setting.

  16. THE USE OF NEURAL NETWORK TECHNOLOGY TO MODEL SWIMMING PERFORMANCE

    Directory of Open Access Journals (Sweden)

    António José Silva

    2007-03-01

    Full Text Available The aims of the present study were: to identify the factors which are able to explain the performance in the 200 meters individual medley and 400 meters front crawl events in young swimmers, to model the performance in those events using non-linear mathematical methods through artificial neural networks (multi-layer perceptrons) and to assess the precision of the neural network models in predicting performance. A sample of 138 young swimmers (65 males and 73 females) of national level was submitted to a test battery comprising four different domains: kinanthropometric evaluation, dry land functional evaluation (strength and flexibility), swimming functional evaluation (hydrodynamic, hydrostatic and bioenergetic characteristics) and swimming technique evaluation. To establish a profile of the young swimmer, non-linear combinations between the preponderant variables for each gender and swim performance in the 200 meters individual medley and 400 meters front crawl events were developed. For this purpose a feed-forward neural network was used (multilayer perceptron) with three neurons in a single hidden layer. The prognostic precision of the model (error lower than 0.8% between true and estimated performances) is supported by recent evidence. Therefore, we consider that the neural network tool can be a good approach to the resolution of complex problems such as performance modeling and talent identification in swimming and, possibly, in a wide variety of sports.
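    A minimal sketch of the network architecture described (a single hidden layer of three neurons), fitted with scikit-learn on synthetic stand-in data rather than the swimmers' actual test-battery variables.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
n = 138
features = rng.normal(size=(n, 8))          # stand-in anthropometric/functional variables
time_200im = 160 + features @ rng.normal(size=8) + rng.normal(scale=2.0, size=n)

# Multi-layer perceptron with three neurons in a single hidden layer
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000, random_state=0),
)
model.fit(features, time_200im)
pred = model.predict(features)
mape = np.mean(np.abs(pred - time_200im) / time_200im) * 100
print(f"Mean absolute percentage error: {mape:.2f}%")
```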

  17. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  18. A CHAID Based Performance Prediction Model in Educational Data Mining

    Directory of Open Access Journals (Sweden)

    R. Bhaskaran

    2010-01-01

    Full Text Available The performance in higher secondary school education in India is a turning point in the academic lives of all students. As this academic performance is influenced by many factors, it is essential to develop a predictive data mining model for students' performance so as to identify slow learners and study the influence of the dominant factors on their academic performance. In the present investigation, a survey-cum-experimental methodology was adopted to generate a database, which was constructed from a primary and a secondary source. While the primary data was collected from the regular students, the secondary data was gathered from the school and the office of the Chief Educational Officer (CEO). A total of 1000 datasets of the year 2006 from five different schools in three different districts of Tamilnadu were collected. The raw data was preprocessed in terms of filling up missing values, transforming values from one form into another and relevant attribute/variable selection. As a result, we had 772 student records, which were used for CHAID prediction model construction. A set of prediction rules was extracted from the CHAID prediction model and the efficiency of the generated CHAID prediction model was determined. The accuracy of the present model was compared with other models and it has been found to be satisfactory.

  19. An ambient agent model for analyzing managers' performance during stress

    Science.gov (United States)

    ChePa, Noraziah; Aziz, Azizi Ab; Gratim, Haned

    2016-08-01

    Stress at work has been reported everywhere. Work-related performance during stress is a pattern of reactions that occurs when managers are presented with work demands that are not matched with their knowledge, skills, or abilities, and which challenge their ability to cope. Although there are many prior findings that explain the development of manager performance during stress, less attention has been given to explaining the same concept through computational models. In this way, the descriptive nature of psychological theories about managers' performance during stress can be transformed into a causal-mechanistic stage that explains the relationship between a series of observed phenomena. This paper proposes an ambient agent model for analyzing managers' performance during stress. A set of properties and variables is identified from past literature to construct the model. Differential equations have been used in formalizing the model, and the set of equations reflecting the relations involved in the proposed model is presented. The proposed model is essential and can be encapsulated within intelligent agents or robots that can be used to support managers during stress.
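    The record does not give its equations, so the sketch below only illustrates the style of a differential-equation formalization of stress and performance, integrated with simple Euler steps; all variables and coefficients are made-up assumptions, not the authors' model.

```python
import numpy as np

def simulate(hours=48, dt=0.1, demand=0.8, coping_skill=0.6):
    """Illustrative stress/performance dynamics, integrated with Euler steps."""
    steps = int(hours / dt)
    stress, performance = 0.1, 0.9
    history = []
    for _ in range(steps):
        # Stress grows with unmatched demand and decays through coping
        d_stress = 0.5 * demand * (1 - coping_skill) * (1 - stress) - 0.2 * stress
        # Performance erodes under stress and recovers toward a baseline
        d_perf = 0.3 * (0.9 - performance) - 0.4 * stress * performance
        stress += dt * d_stress
        performance += dt * d_perf
        history.append((stress, performance))
    return np.array(history)

trace = simulate()
print("final stress %.2f, final performance %.2f" % tuple(trace[-1]))
```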

  20. Human performance modeling for system of systems analytics :soldier fatigue.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Campbell, James E.; Miller, Dwight Peter

    2005-10-01

    The military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives as can be seen in the Department of Defense's (DoD) Defense Modeling and Simulation Office's (DMSO) Master Plan (DoD 5000.59-P 1995). To this goal, the military is currently spending millions of dollars on programs devoted to HPM in various military contexts. Examples include the Human Performance Modeling Integration (HPMI) program within the Air Force Research Laboratory, which focuses on integrating HPMs with constructive models of systems (e.g. cockpit simulations) and the Navy's Human Performance Center (HPC) established in September 2003. Nearly all of these initiatives focus on the interface between humans and a single system. This is insufficient in the era of highly complex network centric SoS. This report presents research and development in the area of HPM in a system-of-systems (SoS). Specifically, this report addresses modeling soldier fatigue and the potential impacts soldier fatigue can have on SoS performance.

  1. Several Typical Interpolation Methods and Their Precision Analysis Based on a Quasi-Geoid Model

    Institute of Scientific and Technical Information of China (English)

    徐平; 杜向锋

    2014-01-01

    This article details the mathematical models of several typical quasi-geoid interpolation methods and implements the corresponding interpolation software on the basis of these models. Using the interpolation methods provided by the software and a given quasi-geoid model, elevation interpolation was carried out for a set of GPS/leveling data. Analysis of the interpolation results leads to some useful conclusions.
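    As a hedged illustration of one common interpolation option (bilinear interpolation on a regular grid, not necessarily one of the methods compared in the article), the sketch below interpolates synthetic quasi-geoid heights at two GPS/leveling stations.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic quasi-geoid height grid (metres) on a small lat/lon window
lats = np.linspace(30.0, 31.0, 21)
lons = np.linspace(114.0, 115.0, 21)
grid_lat, grid_lon = np.meshgrid(lats, lons, indexing="ij")
zeta = 20.0 + 0.5 * (grid_lat - 30.0) - 0.3 * (grid_lon - 114.0)   # height anomaly surface

interp = RegularGridInterpolator((lats, lons), zeta, method="linear")

gps_points = np.array([[30.25, 114.40], [30.80, 114.95]])   # GPS/leveling stations (lat, lon)
print(interp(gps_points))    # interpolated quasi-geoid heights at the stations
```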

  2. Predicting the operation performance of condensate polishing plant using a mathematical kinetic model

    Energy Technology Data Exchange (ETDEWEB)

    Handy, B.J.; Greene, J.C. [NNC Solutions Ltd, Warrington (United Kingdom)

    2004-09-01

    NNC limited provides an ion exchange resin technology facility, which includes a resin testing service. A range of ion exchange resin properties is measured and this includes ion exchange capacity, resin bead particle sizes and anion kinetic performance in terms of mass transfer coefficients. It has long been considered by the authors that the experimental data for resins taken from operating condensate polishing plant (CPP) could be used to predict the expected plant performance. This has now been realised with the development of a mathematical model which predicts CPP behaviour using appropriate experimentally derived parameters and plant design data. Modelling methods for the separate anion and cation components of a mixed bed were initially developed before the mixed bed as a whole was addressed. Initially, an analytical approach was adopted, which proved successful for simple cases. For more complex examples a numerical approach was developed and found to be more suitable. The paper describes the development of anion and cation bed models, and a mixed bed model. In the latter model, the anion and cation components modelled earlier are combined, and used to model simultaneously typical concentrations of ammonia, sodium, chloride and sulphate. Examples of operation are given, and observations and points of interest are discussed with respect to the calculated concentration profiles. The experimental behaviour of a number of resin samples taken from operating plant was examined in a purpose-built ultrapure water recirculation loop equipped with a range of analytical instruments. This has permitted the observed experimental results to be compared with model predictions. The next stage of the model development is to identify plants suitable for testing the model against real plant performance and the authors are now seeking to identify plant managers interested in collaborating in this venture. (orig.)

  3. CORPORATE FORESIGHT AND PERFORMANCE: A CHAIN-OF-EFFECTS MODEL

    DEFF Research Database (Denmark)

    Jissink, Tymen; Huizingh, Eelko K.R.E.; Rohrbeck, René

    2015-01-01

    In this paper we develop and validate a measurement scale for corporate foresight and examine its impact on performance in a chain-of-effects model. We conceptualize corporate foresight as an organizational ability consisting of five distinct dimensions: information scope, method usage, people, formal organization, and culture. We investigate the relation of corporate foresight with three innovation performance dimensions – new product success, new product innovativeness, and financial performance. We use partial-least-squares structural equations modelling to assess our measurement models and test our research hypotheses. Using a cross-industry sample of 153 innovative firms, we find that corporate foresight can be validly and reliably measured by our measurement instrument. The results of the structural model support the hypothesized positive effects of corporate foresight on all...

  4. A multiserver multiqueue network:modeling and performance analysis

    Institute of Scientific and Technical Information of China (English)

    Zhiguang Shan; Yang Yang; et al.

    2002-01-01

    A new category of system model, the multiserver multiqueue network (MSMQN), is proposed for distributed systems such as geographically distributed web-server clusters. A MSMQN comprises multiple multiserver multiqueue (MSMQ) nodes distributed over the network, and every node consists of a number of servers that each contain multiple priority queues for waiting customers. An incoming request can be distributed to a waiting queue of any server in any node, according to the routing policy integrated by the node-selection policy at network level, the request-dispatching policy at node level, and the request-scheduling policy at server level. The model is investigated using stochastic high-level Petri net (SHLPN) modeling and performance analysis techniques. The performance metrics concerned include the delay time of requests in the MSMQ node and the response time perceived by the users. The numerical example shows the efficiency of the performance analysis technique.

  5. Mantis: Predicting System Performance through Program Analysis and Modeling

    CERN Document Server

    Chun, Byung-Gon; Lee, Sangmin; Maniatis, Petros; Naik, Mayur

    2010-01-01

    We present Mantis, a new framework that automatically predicts program performance with high accuracy. Mantis integrates techniques from programming languages and machine learning for performance modeling, and is a radical departure from traditional approaches. Mantis extracts program features, which are information about program execution runs, through program instrumentation. It uses machine learning techniques to select features relevant to performance and creates prediction models as a function of the selected features. Through program analysis, it then generates compact code slices that compute these feature values for prediction. Our evaluation shows that Mantis can achieve more than 93% accuracy with less than 10% of the data used for training, which is a significant improvement over models that are oblivious to program features. The system generates code slices that are cheap to evaluate when computing the feature values.

  6. Aircraft Anomaly Detection Using Performance Models Trained on Fleet Data

    Science.gov (United States)

    Gorinevsky, Dimitry; Matthews, Bryan L.; Martin, Rodney

    2012-01-01

    This paper describes an application of a data mining technology called Distributed Fleet Monitoring (DFM) to Flight Operational Quality Assurance (FOQA) data collected from a fleet of commercial aircraft. DFM transforms the data into aircraft performance models, flight-to-flight trends, and individual flight anomalies by fitting a multi-level regression model to the data. The model represents aircraft flight performance and takes into account fixed effects: flight-to-flight and vehicle-to-vehicle variability. The regression parameters include aerodynamic coefficients and other aircraft performance parameters that are usually identified by aircraft manufacturers in flight tests. Using DFM, the multi-terabyte FOQA data set with half a million flights was processed in a few hours. The anomalies found include wrong values of computed variables (e.g., aircraft weight), sensor failures and biases, and failures, biases, and trends in flight actuators. These anomalies were missed by the existing airline monitoring of FOQA data exceedances.

  7. A Bibliometric Analysis and Review on Performance Modeling Literature

    Directory of Open Access Journals (Sweden)

    Barbara Livieri

    2015-04-01

    Full Text Available In management practice, performance indicators are considered a prerequisite to making informed decisions in line with the organization’s goals. On the other hand, indicators summarize compound phenomena in a few figures, which can lead to inadequate decisions, biased by information loss and conflicting values. Model-driven approaches in enterprise engineering can be very effective in avoiding these pitfalls, or in keeping them under control. For that reason, “performance modeling” has the potential to play a primary role in the “model driven enterprise” scenario, together with process, information and other enterprise-related aspects. In this perspective, we propose a systematic review of the literature on performance modeling in order to retrieve, classify, and summarize existing research, identify the core authors and define areas and opportunities for future research.

  8. Testing a Model of Work Performance in an Academic Environment

    Directory of Open Access Journals (Sweden)

    B. Charles Tatum

    2012-04-01

    Full Text Available In modern society, people both work and study. The intersection between organizational and educational research suggests that a common model should apply to both academic and job performance. The purpose of this study was to apply a model of work and job performance (based on general expectancy theory) to a classroom setting, and test the predicted relationships using a causal/path model methodology. The findings revealed that motivation and ability predicted student expectations and self-efficacy, and that expectations and efficacy predicted class performance. Limitations, implications, and future research directions are discussed. This study showed how the research in industrial and organizational psychology is relevant to education. It was concluded that greater effort should be made to integrate knowledge across a wider set of domains.

  9. Discussion of Typical Cases of the Bilingual Teaching Model in Xinjiang Primary Schools

    Institute of Scientific and Technical Information of China (English)

    曹春梅

    2012-01-01

    Through an analysis of three typical cases of the bilingual teaching model in Xinjiang, this paper sets out the differences between teaching Chinese language and literature as a mother tongue and teaching Chinese as a second language, and offers the author's views on how the two approaches can be adapted to each other.

  10. Performance modeling of a feature-aided tracker

    Science.gov (United States)

    Goley, G. Steven; Nolan, Adam R.

    2012-06-01

    In order to provide actionable intelligence in a layered sensing paradigm, exploitation algorithms should produce a confidence estimate in addition to the inference variable. This article presents the methodology and results of one such algorithm for feature-aided tracking of vehicles in wide area motion imagery. To perform experiments, a synthetic environment was developed which provided explicit knowledge of ground truth, tracker prediction accuracy, and control of operating conditions. This synthetic environment leveraged physics-based modeling simulations to re-create traffic flow, vehicle reflectance, obscuration, and shadowing. With the ability to control operating conditions as well as the availability of ground truth, several experiments were conducted to test both the tracker and the expected performance. The results show that the performance model produces a meaningful estimate of the tracker performance over the subset of operating conditions.

  11. On Performance Modeling of Ad Hoc Routing Protocols

    Directory of Open Access Journals (Sweden)

    Khayam SyedAli

    2010-01-01

    Full Text Available Simulation studies have been the predominant method of evaluating ad hoc routing algorithms. Despite their wide use and merits, simulations are generally time consuming. Furthermore, several prominent ad hoc simulations report inconsistent and unrepeatable results. We, therefore, argue that simulation-based evaluation of ad hoc routing protocols should be complemented with mathematical verification and comparison. In this paper, we propose a performance evaluation framework that can be used to model two key performance metrics of an ad hoc routing algorithm, namely, routing overhead and route optimality. We also evaluate derivatives of the two metrics, namely, total energy consumption and route discovery latency. Using the proposed framework, we evaluate the performance of four prominent ad hoc routing algorithms: DSDV, DSR, AODV-LL, and Gossiping. We show that the modeled metrics not only allow unbiased performance comparison but also provide interesting insight about the impact of different parameters on the behavior of these protocols.

  12. Bounding SAR ATR performance based on model similarity

    Science.gov (United States)

    Boshra, Michael; Bhanu, Bir

    1999-08-01

    Similarity between model targets plays a fundamental role in determining the performance of target recognition. We analyze the effect of model similarity on the performance of a vote-based approach for target recognition from SAR images. In such an approach, each model target is represented by a set of SAR views sampled at a variety of azimuth angles and a specific depression angle. Both model and data views are represented by locations of scattering centers, which are peak features. The model hypothesis (view of a specific target and associated location) corresponding to a given data view is chosen to be the one with the highest number of data-supported model features (votes). We address three issues in this paper. Firstly, we present a quantitative measure of the similarity between a pair of model views. Such a measure depends on the degree of structural overlap between the two views, and the amount of uncertainty. Secondly, we describe a similarity-based framework for predicting an upper bound on recognition performance in the presence of uncertainty, occlusion and clutter. Thirdly, we validate the proposed framework using MSTAR public data, which are obtained under different depression angles, configurations and articulations.

  13. Performance verification tests of JT-60SA CS model coil

    Energy Technology Data Exchange (ETDEWEB)

    Obana, Tetsuhiro, E-mail: obana.tetsuhiro@LHD.nifs.ac.jp [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan); Murakami, Haruyuki [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193 (Japan); Takahata, Kazuya; Hamaguchi, Shinji; Chikaraishi, Hirotaka; Mito, Toshiyuki; Imagawa, Shinsaku [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan); Kizu, Kaname; Natsume, Kyohei; Yoshida, Kiyoshi [Japan Atomic Energy Agency, 801-1 Mukoyama, Naka, Ibaraki 311-0193 (Japan)

    2015-11-15

    Highlights: • The performance of the JT-60SA CS model coil was verified. • The CS model coil comprised a quad-pancake wound with a Nb{sub 3}Sn CIC conductor. • The CS model coil met the design requirements. - Abstract: As a final check of the coil manufacturing method of the JT-60 Super Advanced (JT-60SA) central solenoid (CS), we verified the performance of a CS model coil. The model coil comprised a quad-pancake wound with a Nb{sub 3}Sn cable-in-conduit conductor. Measurements of the critical current, joint resistance, pressure drop, and magnetic field were conducted in the verification tests. In the critical-current measurement, the critical current of the model coil coincided with the estimation derived from a strain of −0.62% for the Nb{sub 3}Sn strands. As a result, critical-current degradation caused by the coil manufacturing process was not observed. The results of the performance verification tests indicate that the model coil met the design requirements. Consequently, the manufacturing process of the JT-60SA CS was established.

  14. Thermal performance modeling of NASA's scientific balloons

    Science.gov (United States)

    Franco, H.; Cathey, H.

    The flight performance of a scientific balloon is highly dependent on the interaction between the balloon and its environment. The balloon is a thermal vehicle. Modeling a scientific balloon's thermal performance has proven to be a difficult analytical task. Most previous thermal models have attempted these analyses by using either a bulk thermal model approach or simplified representations of the balloon. These approaches to date have provided reasonable, but not very accurate, results. Improvements have been made in recent years using thermal analysis tools developed for the thermal modeling of spacecraft and other sophisticated heat transfer problems. These tools, which now allow for accurate modeling of highly transmissive materials, have been applied to the thermal analysis of NASA's scientific balloons. A research effort has been started that utilizes the "Thermal Desktop" add-on to AutoCAD. This paper will discuss the development of thermal models for both conventional and Ultra Long Duration super-pressure balloons. This research effort has focused on incremental analysis stages of development to assess the accuracy of the tool and the required model resolution to produce usable data. The first-stage balloon thermal analyses started with simple spherical balloon models with a limited number of nodes, and expanded the number of nodes to determine the required model resolution. These models were then modified to include additional details such as load tapes. The second-stage analyses looked at natural-shaped Zero Pressure balloons. Load tapes were then added to these shapes, again with the goal of determining the required modeling accuracy by varying the number of gores. The third stage, following the same steps as the Zero Pressure balloon efforts, was directed at modeling super-pressure pumpkin-shaped balloons. The results were then used to develop analysis guidelines and an approach for modeling balloons for both simple first order estimates and detailed

  15. Comparison of Predictive Models for PV Module Performance (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Marion, B.

    2008-05-01

    This paper examines three models used to estimate the maximum power (P{sub m}) of PV modules when the irradiance and PV cell temperature are known: (1) the power temperature coefficient model, (2) the PVFORM model, and (3) the bilinear interpolation model. A variation of the power temperature coefficient model is also presented that improved model accuracy. For modeling values of P{sub m}, an 'effective' plane-of-array (POA) irradiance (E{sub e}) and the PV cell temperature (T) are used as model inputs. Using E{sub e} essentially removes the effects of variations in solar spectrum and reflectance losses, and permits the influence of irradiance and temperature on model performance for P{sub m} to be more easily studied. Eq. 1 is used to determine E{sub e} from T and the PV module's measured short-circuit current (I{sub sc}). Zero subscripts denote performance at Standard Reporting Conditions (SRC).
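    A hedged sketch of the first of the three models, the power temperature coefficient model, in the form commonly used for PV modules (maximum power scaled by effective irradiance and corrected linearly for cell temperature). The module parameters below are illustrative assumptions, not the values used in the paper.

```python
def pv_power_temp_coeff(e_effective, t_cell,
                        p_m0=220.0,      # rated maximum power at SRC (W), illustrative
                        e_0=1000.0,      # reference irradiance (W/m^2)
                        t_0=25.0,        # reference cell temperature (C)
                        gamma=-0.0045):  # power temperature coefficient (1/C), illustrative
    """Power temperature coefficient model: scale by irradiance, correct for temperature."""
    return p_m0 * (e_effective / e_0) * (1.0 + gamma * (t_cell - t_0))

print(pv_power_temp_coeff(800.0, 45.0))   # predicted Pm at 800 W/m^2 and a 45 C cell
```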

  16. Computational model of sustained acceleration effects on human cognitive performance.

    Science.gov (United States)

    McKinlly, Richard A; Gallimore, Jennie J

    2013-08-01

    Extreme acceleration maneuvers encountered in modern agile fighter aircraft can wreak havoc on human physiology, thereby significantly influencing cognitive task performance. As oxygen content declines under acceleration stress, the activity of high order cortical tissue reduces to ensure sufficient metabolic resources are available for critical life-sustaining autonomic functions. Consequently, cognitive abilities reliant on these affected areas suffer significant performance degradations. The goal was to develop and validate a model capable of predicting human cognitive performance under acceleration stress. Development began with creation of a proportional control cardiovascular model that produced predictions of several hemodynamic parameters, including eye-level blood pressure and regional cerebral oxygen saturation (rSo2). An algorithm was derived to relate changes in rSo2 within specific brain structures to performance on cognitive tasks that require engagement of different brain areas. Data from the "precision timing" experiment were then used to validate the model predicting cognitive performance as a function of G(z) profile. The results that follow are reported as value ranges. Results showed high agreement between the measured and predicted values for the rSo2 (correlation coefficient: 0.7483-0.8687; linear best-fit slope: 0.5760-0.9484; mean percent error: 0.75-3.33) and cognitive performance models (motion inference task--correlation coefficient: 0.7103-0.9451; linear best-fit slope: 0.7416-0.9144; mean percent error: 6.35-38.21; precision timing task--correlation coefficient: 0.6856-0.9726; linear best-fit slope: 0.5795-1.027; mean percent error: 6.30-17.28). The evidence suggests that the model is capable of accurately predicting cognitive performance of simplistic tasks under high acceleration stress.

  17. Developing a model of forecasting information systems performance

    Directory of Open Access Journals (Sweden)

    G. N. Isaev

    2017-01-01

    Full Text Available Research aim: to develop a model to forecast the performance of information systems as a mechanism for preliminary assessment of information system effectiveness before financing of the information system project begins. Materials and methods: the starting material was the results of studying the parameters of the statistical structure of information system data processing defects. Methods of cluster analysis and regression analysis were applied. Results: in order to reduce financial risks, information systems customers try to make decisions on the basis of preliminary calculations of the effectiveness of future information systems. However, the assumptions of a techno-economic justification of the project can only be obtained once funding for design work is already open. The evaluation can instead be done before project development starts using a model for forecasting information system performance. The model is developed using regression analysis in the form of a multiple linear regression. The value of information system performance is the predicted variable in the regression equation. The values of data processing defects in the classes of accuracy, completeness and timeliness are the forecast variables. Measurement and evaluation of the parameters of the statistical structure of defects were done with programmes for cluster analysis and regression analysis. Calculations determining the actual and forecast values of information system performance were conducted. Conclusion: in terms of implementing the model, research on information systems was carried out, as well as the development of the forecasting model of information system performance. The experimental work showed the adequacy of the model. The model is implemented in the complex task of designing information systems in education and industry.
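    A minimal sketch of the forecasting idea described here, assuming the defect rates in the accuracy, completeness and timeliness classes are the predictors and measured information system performance is the response. The data and coefficients below are synthetic; the study's real values are not reproduced.

```python
# Multiple linear regression: forecast IS performance from defect rates.
import numpy as np

rng = np.random.default_rng(0)
defects = rng.uniform(0.0, 0.2, size=(30, 3))   # columns: accuracy, completeness, timeliness
performance = 1.0 - defects @ np.array([1.2, 0.8, 0.5]) + rng.normal(0, 0.01, 30)

X = np.column_stack([np.ones(len(defects)), defects])    # add intercept column
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)   # ordinary least squares fit

new_defects = np.array([1.0, 0.05, 0.02, 0.01])          # intercept + forecast defect levels
print("coefficients:", beta)
print("forecast performance:", new_defects @ beta)
```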

  18. A Real-Time Performance Analysis Model for Cryptographic Protocols

    Directory of Open Access Journals (Sweden)

    Amos Olagunju

    2012-12-01

    Full Text Available Several encryption algorithms exist today for securing data in storage and transmission over network systems. The choice of encryption algorithms must weigh performance requirements against the call for protection of sensitive data. This research investigated the processing times of alternative encryption algorithms under specific conditions. The paper presents the architecture of a model multiplatform tool for the evaluation of candidate encryption algorithms based on different data and key sizes. The model software was used to appraise the real-time performance of the DES, AES, 3DES, MD5, SHA1, and SHA2 cryptographic algorithms.
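    An illustrative timing harness in the spirit of such an evaluation tool: measure processing time over different data sizes. Only standard-library hash digests are timed here; block ciphers such as DES/AES/3DES would need a third-party crypto library and are omitted in this sketch.

```python
# Time several hash algorithms over payloads of increasing size.
import hashlib
import os
import time

ALGORITHMS = ["md5", "sha1", "sha256"]
DATA_SIZES = [2**16, 2**20, 2**24]   # 64 KiB, 1 MiB, 16 MiB

for size in DATA_SIZES:
    payload = os.urandom(size)
    for name in ALGORITHMS:
        start = time.perf_counter()
        hashlib.new(name, payload).hexdigest()
        elapsed = time.perf_counter() - start
        print(f"{name:>7} | {size:>9} bytes | {elapsed * 1e3:8.2f} ms")
```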

  19. Youth spare time: Typical patterns of behavior

    Directory of Open Access Journals (Sweden)

    Stepanović Ivana

    2009-01-01

    Full Text Available In this research we have tried to identify typical patterns of young people's behavior in their spare time and to use these patterns to group our subjects according to their interests and preferences. Principal component analysis showed that it is possible to find different patterns of secondary school students' behavior in their spare time, and that the identified patterns can serve as criteria for grouping them. Five patterns have been identified describing youth orientations towards their free time: academic orientation, orientation towards sports, orientation towards entertainment, orientation towards spending time going out, and orientation towards music and computers.

  20. PTL: A Propositional Typicality Logic

    CSIR Research Space (South Africa)

    Booth, R

    2012-09-01

    Full Text Available in which a formula holds. The semantics is in terms of ranked models as studied in KLM-style preferential reasoning. This allows us to show that rational consequence relations can be embedded in our logic. Moreover we show that we can define consequence...

  1. Model of service-oriented catering supply chain performance evaluation

    Directory of Open Access Journals (Sweden)

    Juanqiong Gou

    2013-03-01

    Full Text Available Purpose: The aim of this paper is to construct a performance evaluation model for a service-oriented catering supply chain. Design/methodology/approach: Based on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain and then presents a service-oriented catering supply chain model built on a platform of logistics and information. Finally, the fuzzy AHP method is used to evaluate the performance of the service-oriented catering supply chain. Findings: From the analysis of the characteristics of the catering supply chain, we construct the performance evaluation model in order to guarantee food safety, logistics efficiency, price stability and so on. Practical implications: In order to evolve an efficient and effective service supply chain, the model can be used not only for an enterprise's own improvement, but also for selecting different customers and choosing a different model of development. Originality/value: This paper offers a new definition of the service-oriented catering supply chain and a model to evaluate its performance.

  2. Performance based Ranking Model for Cloud SaaS Services

    Directory of Open Access Journals (Sweden)

    Sahar Abdalla Elmubarak

    2017-01-01

    Full Text Available Cloud computing systems provide virtualized resources that can be provisioned on an on-demand basis. An enormous number of cloud providers offer a diverse range of services. The performance of these services is a critical factor for clients when determining which cloud provider to choose. However, determining a provider with efficient and effective services is a challenging task. There is a need for an efficient model that helps clients select the best provider based on performance attributes and measurements. Cloud service ranking is a standard method used to perform this task. It is the process of arranging and classifying several cloud services within the cloud, then computing their relative ranking values based on the quality of service required by clients and the features of the cloud services. The objective of this study is to propose an enhanced performance-based ranking model to help users choose the best service they need. The proposed model combines attributes and measurements from the cloud computing field with those from the well-defined and established software engineering field. The SMICloud Toolkit has been used to test the applicability of the proposed model. The experimental results of the proposed model were promising.
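    A hedged sketch of one common ranking step of this kind: normalise each quality attribute and combine with user-supplied weights (simple additive weighting). The attribute names, values and weights are illustrative, not the paper's SMICloud measurements.

```python
# Weighted-sum ranking of cloud services from normalised quality attributes.
providers = {
    "ProviderA": {"response_time_ms": 120, "availability": 0.999, "cost": 0.10},
    "ProviderB": {"response_time_ms": 90,  "availability": 0.995, "cost": 0.14},
    "ProviderC": {"response_time_ms": 200, "availability": 0.990, "cost": 0.07},
}
weights = {"response_time_ms": 0.4, "availability": 0.4, "cost": 0.2}
higher_is_better = {"response_time_ms": False, "availability": True, "cost": False}

def normalise(attr: str, value: float) -> float:
    """Min-max normalise an attribute, flipping the scale for cost-type attributes."""
    values = [p[attr] for p in providers.values()]
    lo, hi = min(values), max(values)
    if hi == lo:
        return 1.0
    score = (value - lo) / (hi - lo)
    return score if higher_is_better[attr] else 1.0 - score

ranking = sorted(
    ((sum(weights[a] * normalise(a, v) for a, v in attrs.items()), name)
     for name, attrs in providers.items()),
    reverse=True,
)
for score, name in ranking:
    print(f"{name}: {score:.3f}")
```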

  3. Neural Network Based Model for Predicting Housing Market Performance

    Institute of Scientific and Technical Information of China (English)

    Ahmed Khalafallah

    2008-01-01

    The United States real estate market is currently facing its worst hit in two decades due to the slowdown of housing sales. The most affected by this decline are real estate investors and home developers who are currently struggling to break even financially on their investments. For these investors, it is of utmost importance to evaluate the current status of the market and predict its performance over the short term in order to make appropriate financial decisions. This paper presents the development of artificial neural network based models to support real estate investors and home developers in this critical task. The paper describes the decision variables, design methodology, and the implementation of these models. The models utilize historical market performance data sets to train the artificial neural networks in order to predict unforeseen future performances. An application example is analyzed to demonstrate the model capabilities in analyzing and predicting the market performance. The model testing and validation showed that the error in prediction is in the range between -2% and +2%.
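    A sketch of the general idea, assuming a small feed-forward network is trained on historical market indicators to predict a future performance index. The feature names and synthetic data are assumptions for illustration; the paper's actual decision variables and data sets are not reproduced.

```python
# Train a small MLP regressor on synthetic market indicators and check error.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# columns: mortgage rate, unemployment rate, housing starts, median income (all synthetic)
X = rng.normal(size=(200, 4))
y = 0.5 * X[:, 2] - 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 3] + rng.normal(0, 0.05, 200)

scaler = StandardScaler().fit(X[:150])
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X[:150]), y[:150])

pred = model.predict(scaler.transform(X[150:]))
print(f"mean absolute prediction error: {np.abs(pred - y[150:]).mean():.3f}")
```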

  4. New performance evaluation models for character detection in images

    Science.gov (United States)

    Wang, YanWei; Ding, XiaoQing; Liu, ChangSong; Wang, Kongqiao

    2010-02-01

    Detection of characters regions is a meaningful research work for both highlighting region of interest and recognition for further information processing. A lot of researches have been performed on character localization and extraction and this leads to the great needs of performance evaluation scheme to inspect detection algorithms. In this paper, two probability models are established to accomplish evaluation tasks for different applications respectively. For highlighting region of interest, a Gaussian probability model, which simulates the property of a low-pass Gaussian filter of human vision system (HVS), was constructed to allocate different weights to different character parts. It reveals the greatest potential to describe the performance of detectors, especially, when the result detected is an incomplete character, where other methods cannot effectively work. For the recognition destination, we also introduced a weighted probability model to give an appropriate description for the contribution of detection results to final recognition results. The validity of performance evaluation models proposed in this paper are proved by experiments on web images and natural scene images. These models proposed in this paper may also be able to be applied in evaluating algorithms of locating other objects, like face detection and more wide experiments need to be done to examine the assumption.

  5. Model-based approach for elevator performance estimation

    Science.gov (United States)

    Esteban, E.; Salgado, O.; Iturrospe, A.; Isasa, I.

    2016-02-01

    In this paper, a dynamic model for an elevator installation is presented in the state space domain. The model comprises both the mechanical and the electrical subsystems, including the electrical machine and a closed-loop field oriented control. The proposed model is employed for monitoring the condition of the elevator installation. The adopted model-based approach for monitoring employs the Kalman filter as an observer. A Kalman observer estimates the elevator car acceleration, which determines the elevator ride quality, based solely on the machine control signature and the encoder signal. Five elevator key performance indicators are then calculated based on the estimated car acceleration. The proposed procedure is experimentally evaluated by comparing the key performance indicators calculated from the estimated car acceleration with the values obtained from actual acceleration measurements in a test bench. Finally, the proposed procedure is compared with a sliding mode observer.
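    A minimal sketch of the observer idea: a discrete Kalman filter with a simple constant-acceleration kinematic model that estimates car acceleration from noisy encoder position samples. The state-space matrices, noise levels and ride profile are illustrative assumptions, not the paper's elevator model.

```python
# 1-D Kalman filter estimating acceleration from noisy position measurements.
import numpy as np

dt = 0.01
F = np.array([[1, dt, 0.5 * dt**2],
              [0, 1,  dt],
              [0, 0,  1]])              # state: position, velocity, acceleration
H = np.array([[1.0, 0.0, 0.0]])          # encoder measures position only
Q = np.diag([1e-8, 1e-6, 1e-3])          # process noise (assumed)
R = np.array([[1e-6]])                   # encoder noise (assumed)

x = np.zeros(3)
P = np.eye(3)

def kalman_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with new position sample z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(3) - K @ H) @ P
    return x, P

t = np.arange(0, 5, dt)
true_acc = 0.8 * np.sin(0.5 * t)                          # synthetic ride profile
true_pos = np.cumsum(np.cumsum(true_acc) * dt) * dt       # crude double integration
for z in true_pos + np.random.default_rng(2).normal(0, 1e-3, len(t)):
    x, P = kalman_step(x, P, np.array([z]))
print(f"estimated acceleration at end of run: {x[2]:.3f} m/s^2")
```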

  6. Model for performance prediction in multi-axis machining

    CERN Document Server

    Lavernhe, Sylvain; Lartigue, Claire; 10.1007/s00170-007-1001-4

    2009-01-01

    This paper deals with a predictive model of kinematical performance in 5-axis milling within the context of High Speed Machining. Indeed, 5-axis high speed milling makes it possible to improve quality and productivity thanks to the degrees of freedom brought by the tool axis orientation. The tool axis orientation can be set efficiently in terms of productivity by considering kinematical constraints resulting from the set machine-tool/NC unit. Capacities of each axis as well as some NC unit functions can be expressed as limiting constraints. The proposed model relies on each axis displacement in the joint space of the machine-tool and predicts the most limiting axis for each trajectory segment. Thus, the calculation of the tool feedrate can be performed highlighting zones for which the programmed feedrate is not reached. This constitutes an indicator for trajectory optimization. The efficiency of the model is illustrated through examples. Finally, the model could be used for optimizing process planning.
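    A hedged sketch of the limiting-axis idea for one linear segment in joint space: the feedrate along the tool path cannot exceed the value at which the most demanding axis reaches its velocity limit. The axis limits and segment data below are illustrative, not values from the paper.

```python
# Find the limiting axis and achievable feedrate on one trajectory segment.
import numpy as np

axis_vmax = np.array([30.0, 30.0, 20.0, 2.0, 2.0])  # X, Y, Z [m/min]; A, C [rev/min] (assumed)
programmed_feedrate = 10.0                            # m/min along the tool path

segment_length = 0.004                                # m, tool-path length of the segment
delta_axes = np.array([0.003, 0.002, 0.001, 0.01, 0.02])  # joint displacements over the segment

# feedrate at which each axis would saturate its velocity limit on this segment
feed_limit_per_axis = axis_vmax * segment_length / np.abs(delta_axes)
limiting_axis = int(np.argmin(feed_limit_per_axis))
achievable = min(programmed_feedrate, feed_limit_per_axis[limiting_axis])

print(f"limiting axis index: {limiting_axis}, achievable feedrate: {achievable:.2f} m/min")
```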

  7. PORFLOW Modeling Supporting The H-Tank Farm Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, J. M.; Flach, G. P.; Westbrook, M. L.

    2012-08-31

    Numerical simulations of groundwater flow and contaminant transport in the vadose and saturated zones have been conducted using the PORFLOW code in support of an overall Performance Assessment (PA) of the H-Tank Farm. This report provides technical detail on selected aspects of PORFLOW model development and describes the structure of the associated electronic files. The PORFLOW models for the H-Tank Farm PA, Rev. 1 were updated with grout, solubility, and inventory changes. The aquifer model was refined. In addition, a set of flow sensitivity runs were performed to allow flow to be varied in the related probabilistic GoldSim models. The final PORFLOW concentration values are used as input into a GoldSim dose calculator.

  8. Software life cycle dynamic simulation model: The organizational performance submodel

    Science.gov (United States)

    Tausworthe, Robert C.

    1985-01-01

    The submodel structure of a software life cycle dynamic simulation model is described. The software process is divided into seven phases, each with product, staff, and funding flows. The model is subdivided into an organizational response submodel, a management submodel, a management influence interface, and a model analyst interface. The concentration here is on the organizational response model, which simulates the performance characteristics of a software development subject to external and internal influences. These influences emanate from two sources: the model analyst interface, which configures the model to simulate the response of an implementing organization subject to its own internal influences, and the management submodel that exerts external dynamic control over the production process. A complete characterization is given of the organizational response submodel in the form of parameterized differential equations governing product, staffing, and funding levels. The parameter values and functions are allocated to the two interfaces.

  9. Rethinking board role performance: Towards an integrative model

    Directory of Open Access Journals (Sweden)

    Babić Verica M.

    2011-01-01

    Full Text Available This research focuses on the board role evolution analysis which took place simultaneously with the development of different corporate governance theories and perspectives. The purpose of this paper is to provide understanding of key factors that make a board effective in the performance of its role. We argue that analysis of board role performance should incorporate both structural and process variables. This paper’s contribution is the development of an integrative model that aims to establish the relationship between the board structure and processes on the one hand, and board role performance on the other.

  10. Performance analysis of IP QoS provision model

    Institute of Scientific and Technical Information of China (English)

    SUN Danning; Moonsik Kang

    2006-01-01

    The performance of a heterogeneous IP QoS provision service model was analyzed. This model utilizes the RSVP technique to set up a dynamic resource reservation interface between the user and the network, while the DiffServ technique is utilized to transmit class-based packets with different per-hop behaviors. Furthermore, corresponding queue management and packet scheduling mechanisms are presented for end-to-end QoS guarantees and appropriate cooperation of network elements.

  11. Does better rainfall interpolation improve hydrological model performance?

    Science.gov (United States)

    Bàrdossy, Andràs; Kilsby, Chris; Lewis, Elisabeth

    2017-04-01

    High spatial variability of precipitation is one of the main sources of uncertainty in rainfall/runoff modelling. Spatially distributed models require detailed space-time information on precipitation as input. In the past decades much effort has been spent on improving precipitation interpolation using point observations. Different geostatistical methods like Ordinary Kriging, External Drift Kriging or Copula-based interpolation can be used to find the best estimators for unsampled locations. The purpose of this work is to investigate to what extent more sophisticated precipitation estimation methods can improve model performance. For this purpose the Wye catchment in Wales was selected. The physically-based spatially-distributed hydrological model SHETRAN is used to describe the hydrological processes in the catchment. 31 raingauges with 1-hourly temporal resolution are available for a time period of 6 years. In order to avoid the effect of model uncertainty, model parameters were not altered in this study. Instead, 100 random subsets consisting of 14 stations each were selected. For each of the configurations precipitation was interpolated for each time step using nearest neighbor (NN), inverse distance (ID) and Ordinary Kriging (OK). The variogram was obtained using the temporal correlation of the time series measured at different locations. The interpolated data were used as input for the spatially distributed model. Performance was evaluated for daily mean discharges using the Nash-Sutcliffe coefficient, temporal correlations, flow volumes and flow duration curves. The results show that the simplest NN and the sophisticated OK performances are practically equally good, while ID performed worse. NN was often better for high flows. The reason for this is that NN does not reduce the variance, while OK and ID yield smooth precipitation fields. The study points out the importance of precipitation variability and suggests the use of conditional spatial simulation as
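    A sketch of two of the interpolation schemes compared here and of the Nash-Sutcliffe efficiency used for evaluation. Gauge coordinates and values are synthetic placeholders.

```python
# Nearest-neighbour and inverse-distance interpolation plus Nash-Sutcliffe efficiency.
import numpy as np

def nearest_neighbour(xy_obs, values, xy_target):
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    return values[np.argmin(d)]

def inverse_distance(xy_obs, values, xy_target, power=2.0):
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    if np.any(d == 0):
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

def nash_sutcliffe(simulated, observed):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

rng = np.random.default_rng(3)
gauges = rng.uniform(0, 10, size=(14, 2))   # 14 stations, as in the random subsets
rain = rng.gamma(2.0, 1.5, size=14)          # mm/h at each gauge (synthetic)
target = np.array([5.0, 5.0])

print("NN estimate :", nearest_neighbour(gauges, rain, target))
print("IDW estimate:", inverse_distance(gauges, rain, target))
print("NSE example :", nash_sutcliffe([1.1, 2.0, 2.9], [1.0, 2.0, 3.0]))
```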

  12. Kinetic models in industrial biotechnology - Improving cell factory performance.

    Science.gov (United States)

    Almquist, Joachim; Cvijovic, Marija; Hatzimanikatis, Vassily; Nielsen, Jens; Jirstrand, Mats

    2014-07-01

    An increasing number of industrial bioprocesses capitalize on living cells by using them as cell factories that convert sugars into chemicals. These processes range from the production of bulk chemicals in yeasts and bacteria to the synthesis of therapeutic proteins in mammalian cell lines. One of the tools in the continuous search for improved performance of such production systems is the development and application of mathematical models. To be of value for industrial biotechnology, mathematical models should be able to assist in the rational design of cell factory properties or in the production processes in which they are utilized. Kinetic models are particularly suitable towards this end because they are capable of representing the complex biochemistry of cells in a more complete way compared to most other types of models. They can, at least in principle, be used to understand in detail, predict, and evaluate the effects of adding, removing, or modifying molecular components of a cell factory and for supporting the design of the bioreactor or fermentation process. However, several challenges still remain before kinetic modeling will reach the degree of maturity required for routine application in industry. Here we review the current status of kinetic cell factory modeling. Emphasis is on modeling methodology concepts, including model network structure, kinetic rate expressions, parameter estimation, optimization methods, identifiability analysis, model reduction, and model validation, but several applications of kinetic models for the improvement of cell factories are also discussed.

  13. Model for determining and optimizing delivery performance in industrial systems

    Directory of Open Access Journals (Sweden)

    Fechete Flavia

    2017-01-01

    Full Text Available Performance means achieving organizational objectives regardless of their nature and variety, and even exceeding them. Improving performance is one of the major goals of any company. Achieving global performance means not only obtaining economic performance; other functions must also be taken into account, such as quality, delivery, costs and even employee satisfaction. This paper aims to improve the delivery performance of an industrial system because of its very poor results. The delivery performance took into account all categories of performance indicators, such as on-time delivery, backlog efficiency and transport efficiency. The research was focused on optimizing the delivery performance of the industrial system using linear programming. Modeling the delivery function using linear programming yielded precise quantities to be produced and delivered each month by the industrial system in order to minimize transport cost, satisfy customers' orders and control stock. The optimization led to a substantial improvement in all four performance indicators that concern deliveries.
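    A hedged sketch of the optimisation step described here: choose monthly delivered quantities that minimise transport and stock-holding cost subject to demand, capacity and stock-balance constraints, solved as a linear programme. All numbers are illustrative, not the industrial system's data.

```python
# Three-month delivery plan as a linear programme (scipy.optimize.linprog).
import numpy as np
from scipy.optimize import linprog

cost = np.array([4.0, 5.0, 3.5])          # transport cost per unit, months 1..3 (assumed)
hold = 0.5                                 # stock holding cost per unit per month (assumed)
demand = np.array([120.0, 150.0, 100.0])   # customer orders per month (assumed)
capacity = 160.0                           # delivery capacity per month (assumed)
s0 = 20.0                                  # opening stock

# decision variables: [x1, x2, x3, s1, s2, s3] = deliveries and end-of-month stock
c = np.concatenate([cost, hold * np.ones(3)])
A_eq = np.array([
    [1, 0, 0, -1,  0,  0],   # s1 = s0 + x1 - d1
    [0, 1, 0,  1, -1,  0],   # s2 = s1 + x2 - d2
    [0, 0, 1,  0,  1, -1],   # s3 = s2 + x3 - d3
], dtype=float)
b_eq = np.array([demand[0] - s0, demand[1], demand[2]])
bounds = [(0, capacity)] * 3 + [(0, None)] * 3

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("deliveries per month:", res.x[:3])
print("end-of-month stock  :", res.x[3:])
print("total cost          :", round(res.fun, 2))
```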

  14. Performance of the meteorological radiation model during the solar eclipse of 29 March 2006

    Directory of Open Access Journals (Sweden)

    B. E. Psiloglou

    2007-12-01

    Full Text Available Various solar broadband models have been developed in the second half of the 20th century. The driving demand has been the estimation of available solar energy at different locations on earth for various applications. The motivation for such developments, though, has been the widespread lack of solar radiation measurements at the global scale. Therefore, the main goal of such codes is to generate artificial solar radiation series or to calculate the availability of solar energy at a place.

    One of the broadband models to be developed in the late 1980s was the Meteorological Radiation Model (MRM). The main advantage of MRM over other similar models was its simplicity in acquiring and using the necessary input data, i.e. air temperature, relative humidity, barometric pressure and sunshine duration, from any of the many meteorological stations.

    The present study describes briefly the various steps (versions) of MRM, and in greater detail the latest version 5. To show the flexibility and strong performance of the MRM, a harsh test of the code under the (almost) total solar eclipse conditions of 29 March 2006 over Athens was performed, and its results were compared with real measurements. From this demanding comparison it is shown that the MRM can simulate solar radiation during a solar eclipse event as effectively as on a typical day. Because solar energy applications are mainly interested in the total radiation component, MRM focuses on that component. For this component, the RMSE and MBE statistical estimators during this study were found to be 7.64% and −1.67% on 29 March, as compared to the respective 5.30% and +2.04% for 28 March. This efficiency of MRM even during an eclipse makes the model promising for handling typical situations easily, with even better results.
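    For clarity, the two estimators quoted above can be written out as follows. Percent values here follow the usual convention of normalising by the mean of the measurements; the measured/modelled series below are placeholders, not the study's data.

```python
# Root mean square error (RMSE) and mean bias error (MBE) in percent.
import numpy as np

def rmse_percent(modelled, measured):
    modelled, measured = np.asarray(modelled), np.asarray(measured)
    return 100.0 * np.sqrt(np.mean((modelled - measured) ** 2)) / measured.mean()

def mbe_percent(modelled, measured):
    modelled, measured = np.asarray(modelled), np.asarray(measured)
    return 100.0 * np.mean(modelled - measured) / measured.mean()

measured = np.array([120.0, 340.0, 560.0, 610.0, 450.0, 230.0])   # W/m^2 (placeholder)
modelled = np.array([115.0, 350.0, 540.0, 625.0, 470.0, 220.0])

print(f"RMSE = {rmse_percent(modelled, measured):.2f} %")
print(f"MBE  = {mbe_percent(modelled, measured):.2f} %")
```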

  15. Link performance model for filter bank based multicarrier systems

    Science.gov (United States)

    Petrov, Dmitry; Oborina, Alexandra; Giupponi, Lorenza; Stitz, Tobias Hidalgo

    2014-12-01

    This paper presents a complete link level abstraction model for link quality estimation on the system level of filter bank multicarrier (FBMC)-based networks. The application of the mean mutual information per coded bit (MMIB) approach is validated for FBMC systems. The considered quality measure of the resource element for the FBMC transmission is the received signal-to-noise-plus-distortion ratio (SNDR). Simulation results of the proposed link abstraction model show that the proposed approach is capable of estimating the block error rate (BLER) accurately, even when the signal is propagated through channels with deep and frequent fades, as is the case for the 3GPP Hilly Terrain (3GPP-HT) and Enhanced Typical Urban (ETU) models. The FBMC-related results of link level simulations are compared with cyclic prefix orthogonal frequency division multiplexing (CP-OFDM) analogs. Simulation results are also validated through comparison to reference publicly available results. Finally, the steps of the link level abstraction algorithm for FBMC are formulated and its application to system level simulation of a professional mobile radio (PMR) network is discussed.

  16. Typicity in Potato: Characterization of Geographic Origin

    Directory of Open Access Journals (Sweden)

    Marco Manzelli

    2010-03-01

    Full Text Available A two-year study was carried out in three regions of Italy, and the crop performance and chemical composition of the tubers of three typical potato varieties were evaluated. Carbon and nitrogen tuber content was determined by means of an elemental analyzer, and the other mineral elements by means of a spectrometer. The same determinations were performed on soil samples taken from the experimental areas. Principal Component Analysis, applied to the results of the mineral element tuber analysis, permitted the classification of all potato tuber samples according to their geographic origin. Only partial discrimination was obtained as a function of potato variety. Some correlations between mineral content in the tubers and in the soil were also detected. Analytical and statistical methods proved to be useful in verifying the authenticity of guaranteed geographical food denominations.

  17. Simulation, Characterization, and Optimization of Metabolic Models with the High Performance Systems Biology Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Lunacek, M.; Nag, A.; Alber, D. M.; Gruchalla, K.; Chang, C. H.; Graf, P. A.

    2011-01-01

    The High Performance Systems Biology Toolkit (HiPer SBTK) is a collection of simulation and optimization components for metabolic modeling and the means to assemble these components into large parallel processing hierarchies suiting a particular simulation and optimization need. The components come in a variety of different categories: model translation, model simulation, parameter sampling, sensitivity analysis, parameter estimation, and optimization. They can be configured at runtime into hierarchically parallel arrangements to perform nested combinations of simulation characterization tasks with excellent parallel scaling to thousands of processors. We describe the observations that led to the system, the components, and how one can arrange them. We show nearly 90% efficient scaling to over 13,000 processors, and we demonstrate three complex yet typical examples that have run on {approx}1000 processors and accomplished billions of stiff ordinary differential equation simulations. This work opens the door for the systems biology metabolic modeling community to take effective advantage of large scale high performance computing resources for the first time.

  18. Performance variation due to stiffness in a tuna-inspired flexible foil model.

    Science.gov (United States)

    Rosic, Mariel-Luisa N; Thornycroft, Patrick J M; Feilich, Kara L; Lucas, Kelsey N; Lauder, George V

    2017-01-17

    Tuna are fast, economical swimmers in part due to their stiff, high aspect ratio caudal fins and streamlined bodies. Previous studies using passive caudal fin models have suggested that while high aspect ratio tail shapes such as a tuna's generally perform well, tail performance cannot be determined from shape alone. In this study, we analyzed the swimming performance of tuna-tail-shaped hydrofoils of a wide range of stiffnesses, heave amplitudes, and frequencies to determine how stiffness and kinematics affect multiple swimming performance parameters for a single foil shape. We then compared the foil models' kinematics with published data from a live swimming tuna to determine how well the hydrofoil models could mimic fish kinematics. Foil kinematics over a wide range of motion programs generally showed a minimum lateral displacement at the narrowest part of the foil, and, immediately anterior to that, a local area of large lateral body displacement. These two kinematic patterns may enhance thrust in foils of intermediate stiffness. Stiffness and kinematics exhibited subtle interacting effects on hydrodynamic efficiency, with no one stiffness maximizing both thrust and efficiency. Foils of intermediate stiffnesses typically had the greatest coefficients of thrust at the highest heave amplitudes and frequencies. The comparison of foil kinematics with tuna kinematics showed that tuna motion is better approximated by a zero angle of attack foil motion program than by programs that do not incorporate pitch. These results indicate that open questions in biomechanics may be well served by foil models, given appropriate choice of model characteristics and control programs. Accurate replication of biological movements will require refinement of motion control programs and physical models, including the creation of models of variable stiffness.

  19. Evaluating performances of simplified physically based models for landslide susceptibility

    Directory of Open Access Journals (Sweden)

    G. Formetta

    2015-12-01

    Full Text Available Rainfall-induced shallow landslides cause loss of life and significant damage involving private and public properties, transportation systems, etc. Prediction of locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually two main approaches are used to accomplish this task: statistical or physically based models. Reliable model applications involve: automatic parameter calibration, objective quantification of the quality of susceptibility maps, and model sensitivity analysis. This paper presents a methodology to systematically and objectively calibrate, verify and compare different models and different model performance indicators, in order to identify and eventually select the models whose behaviour is more reliable for a certain case study. The procedure was implemented in a package of models for landslide susceptibility analysis integrated into the NewAge-JGrass hydrological model. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit indices by comparing pixel-by-pixel model results and measurement data. Moreover, the package integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and the Altilia municipality. The analysis showed that, among all the optimized indices and all three models, optimization of the index distance to perfect classification in the receiver operating characteristic plane (D2PC) coupled with model M3 is the best modeling solution for this test case.
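    A hedged sketch of the "distance to perfect classification" (D2PC) index named here: the distance of the point (false positive rate, true positive rate) from the perfect classifier at (0, 1) in the ROC plane. The confusion counts below are illustrative.

```python
# D2PC from a pixel-by-pixel confusion matrix (0 means perfect classification).
import math

def d2pc(tp: int, fp: int, tn: int, fn: int) -> float:
    tpr = tp / (tp + fn)                 # sensitivity
    fpr = fp / (fp + tn)                 # 1 - specificity
    return math.hypot(fpr, 1.0 - tpr)

# comparison of a susceptibility map with mapped landslide pixels (placeholder counts)
print(f"D2PC = {d2pc(tp=820, fp=140, tn=4020, fn=95):.3f}")
```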

  20. Evaluating performances of simplified physically based models for landslide susceptibility

    Science.gov (United States)

    Formetta, G.; Capparelli, G.; Versace, P.

    2015-12-01

    Rainfall-induced shallow landslides cause loss of life and significant damage involving private and public properties, transportation systems, etc. Prediction of locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually two main approaches are used to accomplish this task: statistical or physically based models. Reliable model applications involve: automatic parameter calibration, objective quantification of the quality of susceptibility maps, and model sensitivity analysis. This paper presents a methodology to systematically and objectively calibrate, verify and compare different models and different model performance indicators, in order to identify and eventually select the models whose behaviour is more reliable for a certain case study. The procedure was implemented in a package of models for landslide susceptibility analysis integrated into the NewAge-JGrass hydrological model. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit indices by comparing pixel-by-pixel model results and measurement data. Moreover, the package integration in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes, and automatic calibration algorithms to estimate model parameters. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and the Altilia municipality. The analysis showed that, among all the optimized indices and all three models, optimization of the index distance to perfect classification in the receiver operating characteristic plane (D2PC) coupled with model M3 is the best modeling solution for this test case.

  1. New Mechanical Model for the Transmutation Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller

    2008-04-01

    A new mechanical model has been developed for implementation into the TRU fuel performance code. The new model differs from the existing FRAPCON 3 model, which it is intended to replace, in that it will include structural deformations (elasticity, plasticity, and creep) of the fuel. Also, the plasticity algorithm is based on the “plastic strain–total strain” approach, which should allow for more rapid and assured convergence. The model treats three situations relative to interaction between the fuel and cladding: (1) an open gap between the fuel and cladding, such that there is no contact, (2) contact between the fuel and cladding where the contact pressure is below a threshold value, such that axial slippage occurs at the interface, and (3) contact between the fuel and cladding where the contact pressure is above a threshold value, such that axial slippage is prevented at the interface. The first stage of development of the model included only the fuel. In this stage, results obtained from the model were compared with those obtained from finite element analysis using ABAQUS on a problem involving elastic, plastic, and thermal strains. Results from the two analyses showed essentially exact agreement through both loading and unloading of the fuel. After the cladding and fuel/clad contact were added, the model demonstrated expected behavior through all potential phases of fuel/clad interaction, and convergence was achieved without difficulty in all plastic analyses performed. The code is currently in stand-alone form. Prior to implementation into the TRU fuel performance code, creep strains will have to be added to the model. The model will also have to be verified against an ABAQUS analysis that involves contact between the fuel and cladding.

  2. A Dynamic Network Model to Explain the Development of Excellent Human Performance.

    Science.gov (United States)

    Den Hartigh, Ruud J R; Van Dijk, Marijn W G; Steenbeek, Henderien W; Van Geert, Paul L C

    2016-01-01

    Across different domains, from sports to science, some individuals accomplish excellent levels of performance. For over 150 years, researchers have debated the roles of specific nature and nurture components to develop excellence. In this article, we argue that the key to excellence does not reside in specific underlying components, but rather in the ongoing interactions among the components. We propose that excellence emerges out of dynamic networks consisting of idiosyncratic mixtures of interacting components such as genetic endowment, motivation, practice, and coaching. Using computer simulations we demonstrate that the dynamic network model accurately predicts typical properties of excellence reported in the literature, such as the idiosyncratic developmental trajectories leading to excellence and the highly skewed distributions of productivity present in virtually any achievement domain. Based on this novel theoretical perspective on excellent human performance, this article concludes by suggesting policy implications and directions for future research.

  3. Work group diversity and group performance: an integrative model and research agenda.

    Science.gov (United States)

    van Knippenberg, Daan; De Dreu, Carsten K W; Homan, Astrid C

    2004-12-01

    Research on the relationship between work group diversity and performance has yielded inconsistent results. To address this problem, the authors propose the categorization-elaboration model (CEM), which reconceptualizes and integrates information/decision making and social categorization perspectives on work-group diversity and performance. The CEM incorporates mediator and moderator variables that typically have been ignored in diversity research and incorporates the view that information/decision making and social categorization processes interact such that intergroup biases flowing from social categorization disrupt the elaboration (in-depth processing) of task-relevant information and perspectives. In addition, the authors propose that attempts to link the positive and negative effects of diversity to specific types of diversity should be abandoned in favor of the assumption that all dimensions of diversity may have positive as well as negative effects. The ways in which these propositions may set the agenda for future research in diversity are discussed.

  4. A dynamic network model to explain the development of excellent human performance

    Directory of Open Access Journals (Sweden)

    Ruud J.R. Den Hartigh

    2016-04-01

    Full Text Available Across different domains, from sports to science, some individuals accomplish excellent levels of performance. For over 150 years, researchers have debated the roles of specific nature and nurture components to develop excellence. In this article, we argue that the key to excellence does not reside in specific underlying components, but rather in the ongoing interactions among the components. We propose that excellence emerges out of dynamic networks consisting of idiosyncratic mixtures of interacting components such as genetic endowment, motivation, practice, and coaching. Using computer simulations we demonstrate that the dynamic network model accurately predicts typical properties of excellence reported in the literature, such as the idiosyncratic developmental trajectories leading to excellence and the highly skewed distributions of productivity present in virtually any achievement domain. Based on this novel theoretical perspective on excellent human performance, this article concludes by suggesting policy implications and directions for future research.

  5. A Dynamic Network Model to Explain the Development of Excellent Human Performance

    Science.gov (United States)

    Den Hartigh, Ruud J. R.; Van Dijk, Marijn W. G.; Steenbeek, Henderien W.; Van Geert, Paul L. C.

    2016-01-01

    Across different domains, from sports to science, some individuals accomplish excellent levels of performance. For over 150 years, researchers have debated the roles of specific nature and nurture components to develop excellence. In this article, we argue that the key to excellence does not reside in specific underlying components, but rather in the ongoing interactions among the components. We propose that excellence emerges out of dynamic networks consisting of idiosyncratic mixtures of interacting components such as genetic endowment, motivation, practice, and coaching. Using computer simulations we demonstrate that the dynamic network model accurately predicts typical properties of excellence reported in the literature, such as the idiosyncratic developmental trajectories leading to excellence and the highly skewed distributions of productivity present in virtually any achievement domain. Based on this novel theoretical perspective on excellent human performance, this article concludes by suggesting policy implications and directions for future research. PMID:27148140

  6. Waterflooding performance using Dykstra-Parsons as compared with numerical model performance

    Energy Technology Data Exchange (ETDEWEB)

    Mobarak, S.

    1975-01-01

    Multilayered models have been used by a number of investigators to represent heterogeneous reservoirs. The purpose of this note is to present waterflood performance for multilayered systems using the standard Dykstra-Parsons method, as compared with that predicted by the modified form using the equations given, and with results obtained using a numerical model. The predicted oil recovery, using Johnson charts or the standard Dykstra-Parsons recovery modulus chart, is always conservative, if not overly pessimistic. The modified Dykstra-Parsons method, as explained in the text, shows good agreement with the numerical model.
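    As background, a hedged sketch of the heterogeneity measure underlying the Dykstra-Parsons method: the coefficient of permeability variation V = (k50 − k84.1) / k50, estimated here from a layer permeability sample assumed log-normal. The permeability values are illustrative.

```python
# Dykstra-Parsons coefficient of permeability variation from layer data.
import numpy as np

perm_md = np.array([12.0, 35.0, 60.0, 90.0, 150.0, 240.0, 400.0, 650.0])  # layer permeabilities, mD

log_k = np.log(perm_md)
k50 = np.exp(np.mean(log_k))                              # median of the fitted log-normal
k841 = np.exp(np.mean(log_k) - np.std(log_k, ddof=1))     # one standard deviation below the median

v_dp = (k50 - k841) / k50
print(f"Dykstra-Parsons coefficient V = {v_dp:.2f}")
```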

  7. Modeling performance measurement applications and implementation issues in DEA

    CERN Document Server

    Cook, Wade D

    2005-01-01

    Addresses advanced/new DEA methodology and techniques developed for modeling unique and new performance evaluation issues. Presents new DEA methodology and techniques via discussions on how to solve managerial problems. Provides easy-to-use DEA software - DEAFrontier (www.deafrontier.com), which is an excellent tool for both DEA researchers and practitioners.

  8. High Performance Computing tools for the Integrated Tokamak Modelling project

    Energy Technology Data Exchange (ETDEWEB)

    Guillerminet, B., E-mail: bernard.guillerminet@cea.f [Association Euratom-CEA sur la Fusion, IRFM, DSM, CEA Cadarache (France); Plasencia, I. Campos [Instituto de Fisica de Cantabria (IFCA), CSIC, Santander (Spain); Haefele, M. [Universite Louis Pasteur, Strasbourg (France); Iannone, F. [EURATOM/ENEA Fusion Association, Frascati (Italy); Jackson, A. [University of Edinburgh (EPCC) (United Kingdom); Manduchi, G. [EURATOM/ENEA Fusion Association, Padova (Italy); Plociennik, M. [Poznan Supercomputing and Networking Center (PSNC) (Poland); Sonnendrucker, E. [Universite Louis Pasteur, Strasbourg (France); Strand, P. [Chalmers University of Technology (Sweden); Owsiak, M. [Poznan Supercomputing and Networking Center (PSNC) (Poland)

    2010-07-15

    Fusion Modelling and Simulation are very challenging, and the High Performance Computing issues are addressed here. Toolsets for job launching and scheduling, data communication and visualization have been developed by the EUFORIA project and used with a plasma edge simulation code.

  9. Range-dependent sonar performance modelling during Battlespace Preparation 2007

    NARCIS (Netherlands)

    Raa, L.A. te; Lam, F.P.A.; Schouten M.W.; Janmaat, J.

    2009-01-01

    Spatial and temporal variations in sound speed can have substantial effects on sound propagation and hence on sonar performance. Operational oceanographic models can provide forecasts of oceanographic variables such as temperature, salinity and sound speed up to several days ahead. These four-dimensional fo

  10. Towards a Social Networks Model for Online Learning & Performance

    Science.gov (United States)

    Chung, Kon Shing Kenneth; Paredes, Walter Christian

    2015-01-01

    In this study, we develop a theoretical model to investigate the association between social network properties, "content richness" (CR) in academic learning discourse, and performance. CR is the extent to which one contributes content that is meaningful, insightful and constructive to aid learning and by social network properties we…

  11. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform...

  12. Evaluating hydrological model performance using information theory-based metrics

    Science.gov (United States)

    The accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to use information theory-based metrics to see whether they can be used as a complementary tool for hydrologic m...

  13. Performance evaluation:= (process algebra + model checking) x Markov chains

    NARCIS (Netherlands)

    Hermanns, H.; Katoen, J.P.; Larsen, Kim G.; Nielsen, Mogens

    2001-01-01

    Markov chains are widely used in practice to determine system performance and reliability characteristics. The vast majority of applications considers continuous-time Markov chains (CTMCs). This tutorial paper shows how successful model specification and analysis techniques from concurrency theory c

  14. Performance in model transformations: experiments with ATL and QVT

    NARCIS (Netherlands)

    van Amstel, Marcel; Bosems, S.; Ivanov, Ivan; Ferreira Pires, Luis; Cabot, Jordi; Visser, Eelco

    Model transformations are increasingly being incorporated in software development processes. However, as systems being developed with transformations grow in size and complexity, the performance of the transformations tends to degrade. In this paper we investigate the factors that have an impact on

  15. An e-Procurement Model for Logistic Performance Increase

    NARCIS (Netherlands)

    Toma, Cristina; Vasilescu, Bogdan; Popescu, Catalin; Soliman, KS

    2009-01-01

    This paper discusses the suitability of an e-procurement system for increasing logistic performance, given the growth in fast Internet availability. In consequence, a model is derived and submitted for analysis. The scope of the research is limited to the intermediary goods importing sector for a be

  16. Performance Analysis of OFDM with Frequency Offset and Correction Model

    Institute of Scientific and Technical Information of China (English)

    QIN Sheng-ping; YIN Chang-chuan; LUO Tao; YUE Guang-xin

    2003-01-01

    The performance of OFDM with frequency offset is analyzed and simulated in this paper. It is concluded that the SIR is very large and the BER of an OFDM system with frequency offset is strongly affected. A BER calculation method is introduced and simulated. Assuming that the frequency offset is known, a frequency offset correction model is discussed.

  17. Towards a Social Networks Model for Online Learning & Performance

    Science.gov (United States)

    Chung, Kon Shing Kenneth; Paredes, Walter Christian

    2015-01-01

    In this study, we develop a theoretical model to investigate the association between social network properties, "content richness" (CR) in academic learning discourse, and performance. CR is the extent to which one contributes content that is meaningful, insightful and constructive to aid learning and by social network properties we…

  18. Stutter-Step Models of Performance in School

    Science.gov (United States)

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  19. Performance Analysis of MANET Routing Protocols in Different Mobility Models

    Directory of Open Access Journals (Sweden)

    Anuj K. Gupta

    2013-05-01

    Full Text Available A mobile ad-hoc network (MANET) is a network without any central administration or fixed infrastructure. It consists of a number of mobile nodes that send data packets through a wireless medium. There is always a need for a good routing protocol in order to establish the connection between mobile nodes, since they possess the property of a dynamically changing topology. Further, in all the existing routing protocols, the mobility of a node has always been one of the important characteristics in determining the overall performance of the ad hoc network. Thus, it is essential to know about various mobility models and their effect on the routing protocols. In this paper, we have made an attempt to compare different mobility models and provide an overview of their current research status. The main focus is on Random Mobility Models and Group Mobility Models. First, we present a survey of the characteristics, drawbacks and research challenges of mobility modeling. Finally, we present simulation results that illustrate the importance of choosing a mobility model in the simulation of an ad hoc network protocol. We also illustrate how the performance results of an ad hoc network protocol change drastically as a result of changing the mobility model simulated.

  20. Gender consequences of a national performance-based funding model

    DEFF Research Database (Denmark)

    Nielsen, Mathias Wullum

    2015-01-01

    This article investigates the extent to which the Danish Bibliometric Research Indicator (BRI) reflects the performance of men and women differently. The model is based on a differentiated counting of peer-reviewed publications, awarding three and eight points for contributions to ‘well-regarded’ and highly selective journals and book publishers, and 1 and 5 points for equivalent scientific contributions via ‘normal level’ channels. On the basis of bibliometric data, the study shows that the BRI considerably widens the existing gender gap in researcher performance, since men on average receive more points. The model also privileges collaborative research, which disadvantages women due to gender differences in collaborative network relations.

  1. Performance optimization of Jatropha biodiesel engine model using Taguchi approach

    Energy Technology Data Exchange (ETDEWEB)

    Ganapathy, T.; Murugesan, K.; Gakkhar, R.P. [Mechanical and Industrial Engineering Department, Indian Institute of Technology Roorkee, Roorkee 247 667 (India)

    2009-11-15

    This paper proposes a methodology for thermodynamic model analysis of a Jatropha biodiesel engine in combination with Taguchi's optimization approach to determine the optimum engine design and operating parameters. A thermodynamic model based on a two-zone Wiebe heat release function has been employed to simulate the Jatropha biodiesel engine performance. Among the important engine design and operating parameters, 10 critical parameters were selected, assuming interactions between pairs of parameters. Using linear graph theory and the Taguchi method, an L{sub 16} orthogonal array has been utilized to determine the layout of the engine test trials. In order to maximize the performance of the Jatropha biodiesel engine, the signal to noise ratio (SNR) related to the higher-the-better (HTB) quality characteristic has been used. The present methodology correctly predicted the compression ratio, the Wiebe heat release constants and the combustion zone duration as the critical parameters that affect the performance of the engine, compared to the other parameters. (author)
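    A sketch of the signal-to-noise ratio for a higher-the-better (HTB) quality characteristic, as used to rank parameter settings in a Taguchi analysis. The response values (e.g. repeated observations of an efficiency measure for one trial) are placeholders.

```python
# Taguchi S/N ratio for a higher-the-better characteristic.
import numpy as np

def snr_higher_the_better(responses):
    y = np.asarray(responses, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

trial_responses = [31.8, 32.4, 32.1]   # repeated observations for one L16 trial (placeholder)
print(f"S/N (HTB) = {snr_higher_the_better(trial_responses):.2f} dB")
```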

  2. Performance Models and Risk Management in Communications Systems

    CERN Document Server

    Harrison, Peter; Rüstem, Berç

    2011-01-01

    This volume covers recent developments in the design, operation, and management of telecommunication and computer network systems in performance engineering and addresses issues of uncertainty, robustness, and risk. Uncertainty regarding loading and system parameters leads to challenging optimization and robustness issues. Stochastic modeling combined with optimization theory ensures the optimum end-to-end performance of telecommunication or computer network systems. In view of the diverse design options possible, supporting models have many adjustable parameters and choosing the best set for a particular performance objective is delicate and time-consuming. An optimization based approach determines the optimal possible allocation for these parameters. Researchers and graduate students working at the interface of telecommunications and operations research will benefit from this book. Due to the practical approach, this book will also serve as a reference tool for scientists and engineers in telecommunication ...

  3. Human task animation from performance models and natural language input

    Science.gov (United States)

    Esakov, Jeffrey; Badler, Norman I.; Jung, Moon

    1989-01-01

    Graphical manipulation of human figures is essential for certain types of human factors analyses such as reach, clearance, fit, and view. In many situations, however, the animation of simulated people performing various tasks may be based on more complicated functions involving multiple simultaneous reaches, critical timing, resource availability, and human performance capabilities. One rather effective means for creating such a simulation is through a natural language description of the tasks to be carried out. Given an anthropometrically-sized figure and a geometric workplace environment, various simple actions such as reach, turn, and view can be effectively controlled from language commands or standard NASA checklist procedures. The commands may also be generated by external simulation tools. Task timing is determined from actual performance models, if available, such as strength models or Fitts' Law. The resulting action specifications are animated on a Silicon Graphics Iris workstation in real-time.
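    A hedged sketch of a Fitts'-law timing estimate of the kind such a simulator could use to schedule a reach: movement time grows with the index of difficulty log2(2D/W). The coefficients a and b below are illustrative, not values from the paper.

```python
# Fitts' law estimate of movement time for a reach.
import math

def fitts_movement_time(distance_m: float, target_width_m: float,
                        a: float = 0.05, b: float = 0.12) -> float:
    """Movement time in seconds for a reach of given distance to a target of given width."""
    index_of_difficulty = math.log2(2.0 * distance_m / target_width_m)
    return a + b * index_of_difficulty

print(f"estimated reach time: {fitts_movement_time(0.45, 0.03):.2f} s")
```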

  4. Modeling the seakeeping performance of luxury cruise ships

    Science.gov (United States)

    Cao, Yu; Yu, Bao-Jun; Wang, Jian-Fang

    2010-09-01

    The seakeeping performance of a luxury cruise ship was evaluated during the concept design phase. By comparing numerical predictions based on 3-D linear potential flow theory in the frequency domain with the results of model tests, it was shown that the 3-D method predicted the seakeeping performance of the luxury cruise ship well. Based on the model, the seakeeping features of the luxury cruise ship were analyzed, and then the influence was seen of changes to the primary design parameters (center of gravity, inertial radius, etc.). Based on the results, suggestions were proposed to improve the choice of parameters for luxury cruise ships during the concept design phase. They should improve seakeeping performance.

  5. An integrative modeling approach to elucidate suction-feeding performance.

    Science.gov (United States)

    Holzman, Roi; Collar, David C; Mehta, Rita S; Wainwright, Peter C

    2012-01-01

    Research on suction-feeding performance has mostly focused on measuring individual underlying components such as suction pressure, flow velocity, ram or the effects of suction-induced forces on prey movement during feeding. Although this body of work has advanced our understanding of aquatic feeding, no consensus has yet emerged on how to combine all of these variables to predict prey-capture performance. Here, we treated the aquatic predator-prey encounter as a hydrodynamic interaction between a solid particle (representing the prey) and the unsteady suction flows around it, to integrate the effects of morphology, physiology, skull kinematics, ram and fluid mechanics on suction-feeding performance. We developed the suction-induced force-field (SIFF) model to study suction-feeding performance in 18 species of centrarchid fishes, and asked what morphological and functional traits underlie the evolution of feeding performance on three types of prey. Performance gradients obtained using SIFF revealed that different trait combinations contribute to the ability to feed on attached, evasive and (strain-sensitive) zooplanktonic prey because these prey types impose different challenges on the predator. The low overlap in the importance of different traits in determining performance also indicated that the evolution of suction-feeding ability along different ecological axes is largely unconstrained. SIFF also yielded estimates of feeding ability that performed better than kinematic traits in explaining natural patterns of prey use. When compared with principal components describing variation in the kinematics of suction-feeding events, SIFF output explained significantly more variation in centrarchid diets, suggesting that the inclusion of more mechanistic hydrodynamic models holds promise for gaining insight into the evolution of aquatic feeding performance.

  6. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda

    2016-03-04

    Exascale systems are predicted to have approximately 1 billion cores, assuming gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics but has recently been extended to a wider range of problems. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns make it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on internode communication. We focus on the communication part only; the efficiency of the computational kernels is beyond the scope of the present study. We develop a performance model that considers the communication patterns of the FMM and observe a good match between our model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization of internode communication in FMM that validates the model against actual measurements of communication time. The ultimate communication model is predictive in an absolute sense; however, on complex systems, this objective is often out of reach or of a difficulty out of proportion to its benefit when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.
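
    The record describes a communication model parameterized by latency, bandwidth, topology, and multicore penalties. As a much simpler stand-in (not the authors' model), the sketch below sums a postal latency-bandwidth cost over the tree levels of a hypothetical M2L neighbor exchange; the message counts and expansion sizes are assumptions for illustration only.

        def comm_time(num_messages, bytes_per_message, latency=1e-6, bandwidth=10e9):
            # Postal (alpha-beta) model: t = n * (alpha + m / beta)
            return num_messages * (latency + bytes_per_message / bandwidth)

        def fmm_m2l_comm_estimate(levels, order=10, latency=1e-6, bandwidth=10e9):
            """Rough per-rank estimate of M2L neighbor exchange time, assuming
            26 neighbor boxes per level and order**2 complex coefficients per
            expansion. Both assumptions are illustrative, not from the paper."""
            expansion_bytes = (order ** 2) * 16  # complex128 coefficients
            return sum(comm_time(26, expansion_bytes, latency, bandwidth)
                       for _ in range(2, levels + 1))

        print(f"{fmm_m2l_comm_estimate(levels=6):.2e} s")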

  7. Reference Manual for the System Advisor Model's Wind Power Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, J.; Jorgenson, J.; Gilman, P.; Ferguson, T.

    2014-08-01

    This manual describes the National Renewable Energy Laboratory's System Advisor Model (SAM) wind power performance model. The model calculates the hourly electrical output of a single wind turbine or of a wind farm. The wind power performance model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs. In SAM, the performance model can be coupled to one of the financial models to calculate economic metrics for residential, commercial, or utility-scale wind projects. This manual describes the algorithms used by the wind power performance model, which is available in the SAM user interface and as part of the SAM Simulation Core (SSC) library, and is intended to supplement the user documentation that comes with the software.
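
    SAM's actual algorithm interpolates a manufacturer power curve, adjusts the wind resource to hub height, and applies wake and other losses; the sketch below is only a generic stand-in showing how an hourly output series can be derived from hourly wind speeds with an idealized power curve. The cut-in, rated, and cut-out values are illustrative.

        import numpy as np

        def turbine_power_kw(wind_speed, cut_in=3.0, rated_speed=12.0,
                             cut_out=25.0, rated_power_kw=2000.0):
            """Idealized power curve: cubic ramp between cut-in and rated speed,
            flat at rated power until cut-out, zero elsewhere."""
            v = np.asarray(wind_speed, dtype=float)
            power = np.zeros_like(v)
            ramp = (v >= cut_in) & (v < rated_speed)
            power[ramp] = rated_power_kw * ((v[ramp] ** 3 - cut_in ** 3)
                                            / (rated_speed ** 3 - cut_in ** 3))
            power[(v >= rated_speed) & (v <= cut_out)] = rated_power_kw
            return power

        hourly_wind = np.array([2.5, 6.0, 9.5, 13.0, 26.0])  # m/s, made-up sample
        print(turbine_power_kw(hourly_wind))                  # kWh for each hour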

  8. Models for the energy performance of low-energy houses

    DEFF Research Database (Denmark)

    Andersen, Philip Hvidthøft Delff

    ... such as mechanical ventilation, floor heating, and control of the lighting effect, the heat dynamics must be taken into account. Hence, this thesis provides methods for data-driven modeling of the heat dynamics of modern buildings. While most of the work in this thesis is related to characterization of heat dynamics ... - referred to as "grey-box" modeling - one-step predictions can be generated and used for model validation by testing statistically whether the model describes all variation and dynamics observed in the data. The possibility of validating the model dynamics is a great advantage of the use of stochastic ... building. The building is well-insulated and features large, modern, energy-efficient windows and floor heating. These features lead to increased non-linear responses to solar radiation and longer time constants. The building is equipped with advanced control and measuring equipment. Experiments are designed and performed ...
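
    As a toy illustration of grey-box heat dynamics (not the models estimated in the thesis), the sketch below makes one-step-ahead predictions of indoor temperature from a single-state RC model driven by ambient temperature, heating power, and solar gain; all parameter values and measurements are invented for the example.

        import numpy as np

        def one_step_predictions(t_indoor, t_ambient, heat_kw, solar_kw_m2,
                                 R=5.0, C=10.0, gA=2.0, dt=1.0):
            """Forward-Euler one-step predictions of the RC model
            dT/dt = (Ta - T)/(R*C) + (Ph + gA*Gs)/C, with R in K/kW,
            C in kWh/K, gA an effective solar aperture in m^2, dt in hours.
            All values are illustrative, not estimates from the thesis."""
            preds = np.empty(len(t_indoor) - 1)
            for k in range(len(preds)):
                dT = ((t_ambient[k] - t_indoor[k]) / (R * C)
                      + (heat_kw[k] + gA * solar_kw_m2[k]) / C) * dt
                preds[k] = t_indoor[k] + dT
            return preds

        # Hypothetical hourly measurements
        T_in = np.array([21.0, 21.2, 21.5, 21.4])   # indoor temperature, deg C
        T_amb = np.array([5.0, 4.0, 3.0, 3.5])      # ambient temperature, deg C
        Ph = np.array([1.0, 1.5, 0.5, 0.8])         # heating power, kW
        Gs = np.array([0.0, 0.1, 0.3, 0.2])         # solar radiation, kW/m^2
        residuals = T_in[1:] - one_step_predictions(T_in, T_amb, Ph, Gs)
        print(residuals)  # near-white residuals would support the model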

  9. Acoustic performance of industrial mufflers with CAE modeling and simulation

    Science.gov (United States)

    Jeon, Soohong; Kim, Daehwan; Hong, Chinsuk; Jeong, Weuibong

    2014-12-01

    This paper investigates the noise transmission performance of industrial mufflers widely used in ships based on CAE modeling and simulation. Since the industrial mufflers have very complicated internal structures, the conventional Transfer Matrix Method (TMM) is of limited use. The CAE modeling and simulation therefore needs to incorporate commercial software packages: CATIA for geometry modeling, MSC/PATRAN for FE meshing, and LMS/SYSNOISE for analysis. The main difficulties in this study arise from the complicated arrangement of reactive elements, perforated walls, and absorption materials. The reactive elements and absorption materials are modeled by applying boundary conditions given by impedance. The perforated walls are modeled by applying the transfer impedance on the duplicated node mesh. The CAE approach presented in this paper is verified by comparison with the theoretical solution of a concentric-tube resonator and is applied to industrial mufflers.

  10. Acoustic performance of industrial mufflers with CAE modeling and simulation

    Directory of Open Access Journals (Sweden)

    Jeon Soohong

    2014-12-01

    Full Text Available This paper investigates the noise transmission performance of industrial mufflers widely used in ships based on CAE modeling and simulation. Since the industrial mufflers have very complicated internal structures, the conventional Transfer Matrix Method (TMM) is of limited use. The CAE modeling and simulation therefore needs to incorporate commercial software packages: CATIA for geometry modeling, MSC/PATRAN for FE meshing, and LMS/SYSNOISE for analysis. The main difficulties in this study arise from the complicated arrangement of reactive elements, perforated walls, and absorption materials. The reactive elements and absorption materials are modeled by applying boundary conditions given by impedance. The perforated walls are modeled by applying the transfer impedance on the duplicated node mesh. The CAE approach presented in this paper is verified by comparison with the theoretical solution of a concentric-tube resonator and is applied to industrial mufflers.

  11. Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance

    Energy Technology Data Exchange (ETDEWEB)

    Estep, Donald [Colorado State Univ., Fort Collins, CO (United States); El-Azab, Anter [Florida State Univ., Tallahassee, FL (United States); Pernice, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Peterson, John W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Polyakov, Peter [Univ. of Wyoming, Laramie, WY (United States); Tavener, Simon [Colorado State Univ., Fort Collins, CO (United States); Xiu, Dongbin [Purdue Univ., West Lafayette, IN (United States); Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-23

    In this project, we will address the challenges associated with constructing high-fidelity multiscale models of nuclear fuel performance. We (*) propose a novel approach for coupling mesoscale and macroscale models, (*) devise efficient numerical methods for simulating the coupled system, and (*) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratory on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.

  12. Modeling time-lagged reciprocal psychological empowerment-performance relationships.

    Science.gov (United States)

    Maynard, M Travis; Luciano, Margaret M; D'Innocenzo, Lauren; Mathieu, John E; Dean, Matthew D

    2014-11-01

    Employee psychological empowerment is widely accepted as a means for organizations to compete in increasingly dynamic environments. Previous empirical research and meta-analyses have demonstrated that employee psychological empowerment is positively related to several attitudinal and behavioral outcomes including job performance. While this research positions psychological empowerment as an antecedent influencing such outcomes, a close examination of the literature reveals that this relationship is primarily based on cross-sectional research. Notably, evidence supporting the presumed benefits of empowerment has failed to account for potential reciprocal relationships and endogeneity effects. Accordingly, using a multiwave, time-lagged design, we model reciprocal relationships between psychological empowerment and job performance using a sample of 441 nurses from 5 hospitals. Incorporating temporal effects in a staggered research design and using structural equation modeling techniques, our findings provide support for the conventional positive correlation between empowerment and subsequent performance. Moreover, accounting for the temporal stability of variables over time, we found support for empowerment levels as positive influences on subsequent changes in performance. Finally, we also found support for the reciprocal relationship, as performance levels were shown to relate positively to changes in empowerment over time. Theoretical and practical implications of the reciprocal psychological empowerment-performance relationships are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  13. A PERFORMANCE MANAGEMENT MODEL FOR PHYSICAL ASSET MANAGEMENT

    Directory of Open Access Journals (Sweden)

    J.L. Jooste

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: There has been an emphasis shift from maintenance management towards asset management, where the focus is on reliable and operational equipment and on effective assets at optimum life-cycle costs. A challenge in the manufacturing industry is to develop an asset performance management model that is integrated with business processes and strategies. The authors developed the APM2 model to satisfy that requirement. The model has a generic reference structure and is supported by operational protocols to assist in operations management. It facilitates performance measurement, business integration and continuous improvement, whilst exposing industry to the latest developments in asset performance management.


  14. Cognition and procedure representational requirements for predictive human performance models

    Science.gov (United States)

    Corker, K.

    1992-01-01

    Models and modeling environments for human performance are becoming significant contributors to early system design and analysis procedures. Issues of levels of automation, physical environment, informational environment, and manning requirements are being addressed by such man/machine analysis systems. The research reported here investigates the close interaction between models of human cognition and models that describe procedural performance. We describe a methodology for the decomposition of aircrew procedures that supports interaction with models of cognition on the basis of procedures observed; that serves to identify cockpit/avionics information sources and crew information requirements; and that provides the structure to support methods for function allocation among crew and aiding systems. Our approach is to develop an object-oriented, modular, executable software representation of the aircrew, the aircraft, and the procedures necessary to satisfy flight-phase goals. We then encode, in a time-based language, taxonomies of the conceptual, relational, and procedural constraints among the cockpit avionics and control system and the aircrew. We have designed and implemented a goals/procedures hierarchic representation sufficient to describe procedural flow in the cockpit. We then execute the procedural representation in simulation software and calculate the values of the flight instruments, aircraft state variables and crew resources using the constraints available from the relationship taxonomies. The system provides a flexible, extensible, manipulative and executable representation of aircrew and procedures that is generally applicable to crew/procedure task-analysis. The representation supports developed methods of intent inference, and is extensible to include issues of information requirements and functional allocation. We are attempting to link the procedural representation to models of cognitive functions to establish several intent inference methods.

  15. A personality trait-based interactionist model of job performance.

    Science.gov (United States)

    Tett, Robert P; Burnett, Dawn D

    2003-06-01

    Evidence for situational specificity of personality-job performance relations calls for better understanding of how personality is expressed as valued work behavior. On the basis of an interactionist principle of trait activation (R. P. Tett & H. A. Guterman, 2000), a model is proposed that distinguishes among 5 situational features relevant to trait expression (job demands, distracters, constraints, releasers, and facilitators), operating at task, social, and organizational levels. Trait-expressive work behavior is distinguished from (valued) job performance in clarifying the conditions favoring personality use in selection efforts. The model frames linkages between situational taxonomies (e.g., J. L. Holland's [1985] RIASEC model) and the Big Five and promotes useful discussion of critical issues, including situational specificity, personality-oriented job analysis, team building, and work motivation.

  16. PHARAO Laser Source Flight Model: Design and Performances

    CERN Document Server

    Lévèque, Thomas; Esnault, François-Xavier; Delaroche, Christophe; Massonnet, Didier; Grosjean, Olivier; Buffe, Fabrice; Torresi, Patrizia; Bomer, Thierry; Pichon, Alexandre; Béraud, Pascal; Lelay, Jean-Pierre; Thomin, Stéphane; Laurent, Philippe

    2015-01-01

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  17. Performance Comparison of Sub Phonetic Model with Input Signal Processing

    Directory of Open Access Journals (Sweden)

    Dr E. Ramaraj

    2006-01-01

    Full Text Available The quest for a better model of signal transformation for speech has driven the development of better signal representations and algorithms. The article explores the word model, a concatenation of state-dependent senones, as an alternative to the phoneme. The objective of the research is to combine the senone with Input Signal Processing (ISP), an algorithm that has already been applied to phonemes with considerable success, to compare the performance of senone-with-ISP against phoneme-with-ISP, and to present the resulting analysis. The research model is implemented on the SPHINX IV [4] speech engine owing to its flexibility toward new algorithms, its robustness, and its performance.

  18. A multiserver multiqueue network: modeling and performance analysis

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A new category of system model, multiserver multiqueue network (MSMQN), is proposed for distributed systems such as the geographically distributed Web-server clusters. An MSMQN comprises multiple multiserver multiqueue (MSMQ) nodes distributed over the network, and every node consists of a number of servers that each contains multiple priority queues for waiting customers. An incoming request can be distributed to a waiting queue of any server in any node, according to the routing policy integrated by the node-selection policy at network-level, request-dispatching policy at node-level, and request-scheduling policy at server-level. The model is investigated using stochastic high-level Petri net (SHLPN) modeling and performance analysis techniques. The performance metrics concerned include the delay time of requests in the MSMQ node and the response time perceived by the users. The numerical example shows the efficiency of the performance analysis technique.

  19. Frequency modulated continuous wave lidar performance model for target detection

    Science.gov (United States)

    Du Bosq, Todd W.; Preece, Bradley L.

    2017-05-01

    The desire to provide the warfighter both ranging and reflected intensity information is increasing to meet expanding operational needs. LIDAR imaging systems can provide the user with intensity, range, and even velocity information of a scene. The ability to predict the performance of LIDAR systems is critical for the development of future designs without the need to conduct time consuming and costly field studies. Performance modeling of a frequency modulated continuous wave (FMCW) LIDAR system is challenging due to the addition of the chirped laser source and waveform mixing. The FMCW LIDAR model is implemented in the NV-IPM framework using the custom component generation tool. This paper presents an overview of the FMCW Lidar, the customized LIDAR components, and a series of trade studies using the LIDAR model.
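
    A core relation behind FMCW ranging (standard textbook material, not specific to the NV-IPM component described above) maps the measured beat frequency to target range for an ideal linear chirp. The parameter values in the sketch are illustrative.

        C_LIGHT = 299_792_458.0  # m/s

        def fmcw_range(beat_freq_hz, chirp_bandwidth_hz, chirp_duration_s):
            """Range from beat frequency for a sawtooth FMCW chirp:
            R = c * f_beat * T_chirp / (2 * B). Ignores Doppler."""
            return C_LIGHT * beat_freq_hz * chirp_duration_s / (2.0 * chirp_bandwidth_hz)

        # 1 GHz chirp over 1 ms with a measured 2 MHz beat tone -> ~300 m
        print(f"{fmcw_range(2e6, 1e9, 1e-3):.1f} m")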

  20. Tiling for Performance Tuning on Different Models of GPUs

    CERN Document Server

    Xu, Chang; Jenkins, Samantha

    2010-01-01

    The strategy of using CUDA-compatible GPUs as a parallel computation solution to improve the performance of programs has gained increasingly wide acceptance in the two years since the CUDA platform was released. Its benefit extends from the graphics domain to many other computationally intensive domains. Tiling, as the most general and important technique, is widely used for optimization in CUDA programs. New models of GPUs with better compute capabilities have, however, been released, and new versions of CUDA SDKs were also released. These updated compute capabilities must be considered when optimizing using the tiling technique. In this paper, we implement image interpolation algorithms as a test case to discuss how different tiling strategies affect the program's performance. We especially focus on how the different models of GPUs affect the tiling's effectiveness by executing the same program on two testing platforms equipped with different models of GPUs. The results demonstrate that an optimized tiling...

  1. An improved model for TPV performance predictions and optimization

    Science.gov (United States)

    Schroeder, K. L.; Rose, M. F.; Burkhalter, J. E.

    1997-03-01

    Previously a model has been presented for calculating the performance of a TPV system. This model has been revised into a general purpose algorithm, improved in fidelity, and is presented here. The basic model is an energy-based formulation and evaluates both the radiant and heat source elements of a combustion-based system. Improvements in the radiant calculations include the use of ray tracing formulations and view factors for evaluating various flat plate and cylindrical configurations. Calculation of photocell temperature and performance parameters as a function of position and incident power has also been incorporated. Heat source calculations have been fully integrated into the code by the incorporation of a modified version of the NASA Complex Chemical Equilibrium Compositions and Applications (CEA) code. Additionally, coding has been incorporated to allow optimization of various system parameters and configurations. Several example cases are presented and compared, and an optimum flat plate emitter/filter/photovoltaic configuration is also described.

  2. PHARAO laser source flight model: Design and performances

    Energy Technology Data Exchange (ETDEWEB)

    Lévèque, T., E-mail: thomas.leveque@cnes.fr; Faure, B.; Esnault, F. X.; Delaroche, C.; Massonnet, D.; Grosjean, O.; Buffe, F.; Torresi, P. [Centre National d’Etudes Spatiales, 18 avenue Edouard Belin, 31400 Toulouse (France); Bomer, T.; Pichon, A.; Béraud, P.; Lelay, J. P.; Thomin, S. [Sodern, 20 Avenue Descartes, 94451 Limeil-Brévannes (France); Laurent, Ph. [LNE-SYRTE, CNRS, UPMC, Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris (France)

    2015-03-15

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature, and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  3. Performance and Prediction: Bayesian Modelling of Fallible Choice in Chess

    Science.gov (United States)

    Haworth, Guy; Regan, Ken; di Fatta, Giuseppe

    Evaluating agents in decision-making applications requires assessing their skill and predicting their behaviour. Both are well developed in Poker-like situations, but less so in more complex game and model domains. This paper addresses both tasks by using Bayesian inference in a benchmark space of reference agents. The concepts are explained and demonstrated using the game of chess but the model applies generically to any domain with quantifiable options and fallible choice. Demonstration applications address questions frequently asked by the chess community regarding the stability of the rating scale, the comparison of players of different eras and/or leagues, and controversial incidents possibly involving fraud. The last include alleged under-performance, fabrication of tournament results, and clandestine use of computer advice during competition. Beyond the model world of games, the aim is to improve fallible human performance in complex, high-value tasks.

  4. Smart Campus Construction: Overall Architecture Model and Typical Applications

    Institute of Scientific and Technical Information of China (English)

    王燕

    2014-01-01

    The smart campus is a new stage in the development of educational information technology. In moving from the digital campus to the smart campus, strengthening the top-level design of the smart campus and building a general architecture model for its construction are of great significance. Typical applications carried out on a smart campus serve as demonstrations that drive its construction and application process. In order to explore the construction mode of the smart campus, this paper designs an overall architecture model based on an analysis of the connotation, characteristics, and key technologies of the smart campus. The model consists, from bottom to top, of a perception layer, network layer, data layer, application layer, and service layer; an information standards and specifications system and an operation, maintenance, and security system ensure standardized construction and reliable operation. Finally, typical applications of the smart campus are analyzed from the perspectives of smart teaching, smart experiments, smart research, and smart management, providing a reference for demonstration and popularization.

  5. A conceptual model to improve performance in virtual teams

    Directory of Open Access Journals (Sweden)

    Shopee Dube

    2016-04-01

    Full Text Available Background: The vast improvement in communication technologies and sophisticated project management tools, methods and techniques has allowed geographically and culturally diverse groups to operate and function in a virtual environment. To succeed in this virtual environment where time and space are becoming increasingly irrelevant, organisations must define new ways of implementing initiatives. This virtual environment phenomenon has brought about the formation of virtual project teams that allow organisations to harness the skills and know-how of the best resources, irrespective of their location. Objectives: The aim of this article was to investigate performance criteria and develop a conceptual model which can be applied to enhance the success of virtual project teams. There are no clear guidelines of the performance criteria in managing virtual project teams. Method: A qualitative research methodology was used in this article. The purpose of content analysis was to explore the literature to understand the concept of performance in virtual project teams and to summarise the findings of the literature reviewed. Results: The research identified a set of performance criteria for the virtual project teams as follows: leadership, trust, communication, team cooperation, reliability, motivation, comfort and social interaction. These were used to conceptualise the model. Conclusion: The conceptual model can be used in a holistic way to determine the overall performance of the virtual project team, but each factor can be analysed individually to determine the impact on the overall performance. The knowledge of performance criteria for virtual project teams could aid project managers in enhancing the success of these teams and taking a different approach to better manage and coordinate them.

  6. Realization of a Load Spectrum Data Management System for Typical Agricultural Machinery Based on a Hybrid Model

    Institute of Scientific and Technical Information of China (English)

    孟庆瑞; 田兆锋; 张书明; 阎楚良

    2009-01-01

    Against the background of building a load spectrum database for typical agricultural machinery such as tractors and combine harvesters, a load spectrum data management system was designed and implemented with .NET technology under a mixed C/S and B/S model. An overall plan and a structural model of the system were worked out according to the actual situation and the detailed requirements of engineering application, and each system module was then coded and debugged. The system enables modern management of load spectrum data, adds communication channels, and improves the efficiency of structural fatigue life research.

  7. Establishment and application of performance evaluation model for collection and transportation system of municipal solid waste

    Institute of Scientific and Technical Information of China (English)

    彭绪亚; 林晓东; 贾传兴; 王渝昆; 黄媛媛

    2009-01-01

    On the basis of analyzing typical waste collection and transportation modes, a three-grade evaluation index system for the performance of the waste collection and transportation system was proposed, covering six factors: economic evaluation, efficiency evaluation, environmental impact assessment, resource evaluation, security and emergency evaluation, and management and society evaluation. Using performance evaluation theory, a performance evaluation model of the waste collection and transportation system was constructed, which quantified the grading standard of each index and determined the index weights with the analytic hierarchy process (AHP). An evaluation of the waste collection and transportation system of the main districts of Chongqing city shows that it achieves an excellent performance grade, with very high performance levels for the three indices of management and society evaluation, environmental impact assessment, and security and emergency evaluation, and rather low performance levels for the two indices of efficiency evaluation and economic evaluation.
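
    The record states that index weights were determined with the analytic hierarchy process (AHP). The sketch below shows the standard eigenvector computation of AHP weights and the consistency ratio for a small pairwise comparison matrix; the matrix entries are hypothetical and do not come from the Chongqing study.

        import numpy as np

        def ahp_weights(pairwise):
            """Priority weights (principal eigenvector) and consistency ratio."""
            A = np.asarray(pairwise, dtype=float)
            n = A.shape[0]
            eigvals, eigvecs = np.linalg.eig(A)
            k = np.argmax(eigvals.real)
            w = np.abs(eigvecs[:, k].real)
            w /= w.sum()
            lambda_max = eigvals[k].real
            random_index = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]
            cr = ((lambda_max - n) / (n - 1)) / random_index if random_index else 0.0
            return w, cr

        # Hypothetical comparison of three factors (economic, efficiency, environmental)
        A = [[1.0, 3.0, 5.0],
             [1/3, 1.0, 2.0],
             [1/5, 1/2, 1.0]]
        weights, consistency_ratio = ahp_weights(A)
        print(weights.round(3), round(consistency_ratio, 3))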

  8. Synthesised model of market orientation-business performance relationship

    Directory of Open Access Journals (Sweden)

    G. Nwokah

    2006-12-01

    Full Text Available Purpose: The purpose of this paper is to assess the impact of market orientation on the performance of the organisation. While much empirical work has centered on market orientation, the generalisability of its impact on the performance of Food and Beverages organisations in the Nigerian context has been under-researched. Design/Methodology/Approach: The study adopted a triangulation methodology (quantitative and qualitative approach). Data was collected from key informants using a research instrument. Returned instruments were analyzed using nonparametric correlation through the use of the Statistical Package for Social Sciences (SPSS) version 10. Findings: The study validated the earlier instruments but did not find any strong association between market orientation and business performance in the Nigerian context using the food and beverages organisations for the study. The reasons underlying the weak relationship between market orientation and business performance of the Food and Beverages organisations are government policies, new product development, diversification, innovation and devaluation of the Nigerian currency. One important finding of this study is that market orientation leads to business performance through some moderating variables. Implications: The study recommends that the Nigerian Government should ensure a stable economy and make economic policies that will enhance existing business development in the country. Also, organisations should have performance measurement systems to detect the impact of investment on market orientation with the aim of knowing how the organisation works. Originality/Value: This study significantly refines the body of knowledge concerning the impact of market orientation on the performance of the organisation, and thereby offers a model of market orientation and business performance in the Nigerian context for marketing scholars and practitioners. This model will, no doubt, contribute to the body of

  9. Integrated healthcare networks' performance: a growth curve modeling approach.

    Science.gov (United States)

    Wan, Thomas T H; Wang, Bill B L

    2003-05-01

    This study examines the effects of integration on the performance ratings of the top 100 integrated healthcare networks (IHNs) in the United States. A strategic-contingency theory is used to identify the relationship of IHNs' performance to their structural and operational characteristics and integration strategies. To create a database for the panel study, the top 100 IHNs selected by the SMG Marketing Group in 1998 were followed up in 1999 and 2000. The data were merged with the Dorenfest data on information system integration. A growth curve model was developed and validated by the Mplus statistical program. Factors influencing the top 100 IHNs' performance in 1998 and their subsequent rankings in the consecutive years were analyzed. IHNs' initial performance scores were positively influenced by network size, number of affiliated physicians and profit margin, and were negatively associated with average length of stay and technical efficiency. The continuing high performance, judged by maintaining higher performance scores, tended to be enhanced by the use of more managerial or executive decision-support systems. Future studies should include time-varying operational indicators to serve as predictors of network performance.

  10. Integrated modeling tool for performance engineering of complex computer systems

    Science.gov (United States)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  11. The predictive performance and stability of six species distribution models.

    Science.gov (United States)

    Duan, Ren-Yan; Kong, Xiao-Quan; Huang, Min-Yi; Fan, Wei-Yi; Wang, Zhi-Gao

    2014-01-01

    Predicting species' potential geographical range by species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution area using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p<0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials (p<0.05), and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, the other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
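
    The stability criteria described above (standard deviation, coefficient of variation, and 99% confidence interval of repeated-trial AUC and Kappa values) can be reproduced in a few lines. The sketch below uses simulated AUC values for two hypothetical models rather than the study's actual trial results.

        import numpy as np

        def stability_summary(scores, z=2.576):
            """Mean, SD, coefficient of variation, and an approximate 99%
            normal-based confidence interval for one model's trial scores."""
            scores = np.asarray(scores, dtype=float)
            mean, sd = scores.mean(), scores.std(ddof=1)
            half = z * sd / np.sqrt(scores.size)
            return mean, sd, sd / mean, (mean - half, mean + half)

        rng = np.random.default_rng(0)
        trials = {"high-stability model": rng.normal(0.92, 0.01, 100),  # simulated AUCs
                  "low-stability model": rng.normal(0.80, 0.04, 100)}
        for name, auc in trials.items():
            mean, sd, cv, ci = stability_summary(auc)
            print(f"{name}: mean={mean:.3f} sd={sd:.3f} cv={cv:.3f} "
                  f"99% CI=({ci[0]:.3f}, {ci[1]:.3f})")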

  12. The predictive performance and stability of six species distribution models.

    Directory of Open Access Journals (Sweden)

    Ren-Yan Duan

    Full Text Available Predicting species' potential geographical range by species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution area using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p<0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials (p<0.05), and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, the other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.

  13. Microinverter Thermal Performance in the Real-World: Measurements and Modeling.

    Directory of Open Access Journals (Sweden)

    Mohammad Akram Hossain

    Full Text Available Real-world performance, durability and reliability of microinverters are critical concerns for microinverter-equipped photovoltaic systems. We conducted a data-driven study of the thermal performance of 24 new microinverters (Enphase M215) connected to 8 different brands of PV modules on dual-axis trackers at the Solar Durability and Lifetime Extension (SDLE) SunFarm at Case Western Reserve University, based on minute-by-minute power and thermal data from the microinverters and PV modules along with insolation and environmental data from July through October 2013. The analysis shows the strengths of the associations of microinverter temperature with ambient temperature, PV module temperature, irradiance and AC power of the PV systems. The importance of the covariates is rank ordered. A multiple regression model was developed and tested based on stable solar noon-time data, which gives both an overall function that predicts the temperature of microinverters under typical local conditions, and coefficient adjustments reflecting refined prediction of the microinverter temperature connected to the 8 brands of PV modules in the study. The model allows for prediction of internal temperature for the Enphase M215 given similar climatic conditions and can be expanded to predict microinverter temperature in fixed-rack and roof-top PV systems. This study is foundational in that similar models built on later-stage data in the life of a device could reveal potential influencing factors in performance degradation.
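
    The multiple regression described above can be illustrated with an ordinary least-squares fit of microinverter temperature on the four covariates named in the record (ambient temperature, module temperature, irradiance, AC power). The data and coefficients below are simulated for demonstration and are not the study's estimates.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        ambient = rng.normal(25, 5, n)                        # deg C
        module = ambient + rng.normal(20, 5, n)               # deg C
        irradiance = rng.uniform(200, 1000, n)                # W/m^2
        ac_power = irradiance * 0.2 + rng.normal(0, 10, n)    # W (toy relation)

        # Simulated "measured" inverter temperature with arbitrary coefficients
        inverter_temp = (5 + 0.4 * ambient + 0.5 * module + 0.005 * irradiance
                         + 0.01 * ac_power + rng.normal(0, 1, n))

        X = np.column_stack([np.ones(n), ambient, module, irradiance, ac_power])
        coef, *_ = np.linalg.lstsq(X, inverter_temp, rcond=None)
        pred = X @ coef
        ss_res = np.sum((inverter_temp - pred) ** 2)
        ss_tot = np.sum((inverter_temp - inverter_temp.mean()) ** 2)
        print("coefficients:", coef.round(3), "R^2:", round(1 - ss_res / ss_tot, 3))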

  14. Implicit Value Updating Explains Transitive Inference Performance: The Betasort Model.

    Directory of Open Access Journals (Sweden)

    Greg Jensen

    Full Text Available Transitive inference (the ability to infer that B > D given that B > C and C > D) is a widespread characteristic of serial learning, observed in dozens of species. Despite these robust behavioral effects, reinforcement learning models reliant on reward prediction error or associative strength routinely fail to perform these inferences. We propose an algorithm called betasort, inspired by cognitive processes, which performs transitive inference at low computational cost. This is accomplished by (1) representing stimulus positions along a unit span using beta distributions, (2) treating positive and negative feedback asymmetrically, and (3) updating the position of every stimulus during every trial, whether that stimulus was visible or not. Performance was compared for rhesus macaques, humans, and the betasort algorithm, as well as Q-learning, an established reward-prediction error (RPE) model. Of these, only Q-learning failed to respond above chance during critical test trials. Betasort's success (when compared to RPE models) and its computational efficiency (when compared to full Markov decision process implementations) suggest that the study of reinforcement learning in organisms will be best served by a feature-driven approach to comparing formal models.

  15. Towards Modeling Realistic Mobility for Performance Evaluations in MANET

    Science.gov (United States)

    Aravind, Alex; Tahir, Hassan

    Simulation modeling plays a crucial role in conducting research on complex dynamic systems like mobile ad hoc networks, and is often the only way. Simulation has been successfully applied in MANET for more than two decades. In several recent studies, it is observed that the credibility of the simulation results in the field has decreased while the use of simulation has steadily increased. Part of this credibility crisis has been attributed to the simulation of mobility of the nodes in the system. Mobility has a fundamental influence on the behavior and performance of mobile ad hoc networks. Accurate modeling and knowledge of mobility of the nodes in the system is not only helpful but also essential for the understanding and interpretation of the performance of the system under study. Several ideas, mostly in isolation, have been proposed in the literature to infuse realism in the mobility of nodes. In this paper, we attempt a holistic analysis of creating realistic mobility models and then demonstrate creation and analysis of realistic mobility models using a software tool we have developed. Using our software tool, desired mobility of the nodes in the system can be specified, generated, analyzed, and then the trace can be exported to be used in the performance studies of proposed algorithms or systems.
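
    For contrast with the realistic mobility the record argues for, the sketch below generates a trace from the classic random waypoint model, the synthetic baseline whose realism is often questioned. Area, speed range, pause time, and duration are illustrative defaults.

        import random

        def random_waypoint_trace(area=(1000.0, 1000.0), speed=(1.0, 20.0),
                                  pause=2.0, duration=600.0, step=1.0, seed=0):
            """(t, x, y) samples for one node under random waypoint mobility."""
            rng = random.Random(seed)
            x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
            t, trace = 0.0, []
            while t < duration:
                tx, ty = rng.uniform(0, area[0]), rng.uniform(0, area[1])
                v = rng.uniform(*speed)
                travel = ((tx - x) ** 2 + (ty - y) ** 2) ** 0.5 / v
                steps = max(1, int(travel / step))
                for i in range(1, steps + 1):
                    trace.append((t + i * travel / steps,
                                  x + (tx - x) * i / steps,
                                  y + (ty - y) * i / steps))
                x, y, t = tx, ty, t + travel + pause
            return trace

        trace = random_waypoint_trace()
        print(len(trace), "samples; first:", tuple(round(v, 1) for v in trace[0]))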

  16. Instruction-level performance modeling and characterization of multimedia applications

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y. [Los Alamos National Lab., NM (United States). Scientific Computing Group; Cameron, K.W. [Louisiana State Univ., Baton Rouge, LA (United States). Dept. of Computer Science

    1999-06-01

    One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source code. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction level. This paper presents a novel technique of characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from some multimedia applications, such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvement for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and architectural bottleneck for each application. This technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effect on processor utilization without memory influence. They derive formulas for calculating CPI_0 (CPI without memory effect), and they quantify utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results provide promise in code characterization, and empirical/analytical modeling.
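
    A toy version of the CPI decomposition sketched in the abstract: derive CPI_0 from a counter-measured instruction mix and assumed per-class issue latencies, then attribute the remainder of a measured CPI to memory stalls. The mix, latencies, and measured CPI below are hypothetical, not the authors' data or formulas.

        # Hypothetical instruction mix from hardware counters (fractions of total)
        instruction_mix = {"int_alu": 0.45, "fp": 0.10, "branch": 0.15, "load_store": 0.30}

        # Assumed ideal per-class cycles on the modeled core (illustrative only)
        ideal_cycles = {"int_alu": 0.5, "fp": 2.0, "branch": 1.0, "load_store": 1.0}

        # CPI_0: mix-weighted sum of ideal latencies, i.e. CPI without memory stalls
        cpi0 = sum(frac * ideal_cycles[cls] for cls, frac in instruction_mix.items())

        # A measured CPI includes stalls; the gap estimates the memory penalty
        measured_cpi = 1.4
        print(f"CPI_0 = {cpi0:.2f}, memory-related CPI = {measured_cpi - cpi0:.2f}")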

  17. Modeling the Performance of Fast Mulipole Method on HPC platforms

    KAUST Repository

    Ibeid, Huda

    2012-04-06

    The current trend in high performance computing is pushing towards exascale computing. To achieve this exascale performance, future systems will have between 100 million and 1 billion cores assuming gigahertz cores. Currently, there are many efforts studying the hardware and software bottlenecks for building an exascale system. It is important to understand and meet these bottlenecks in order to attain 10 PFLOPS performance. On the applications side, there is an urgent need to model application performance and to understand what changes need to be made to ensure continued scalability at this scale. Fast multipole methods (FMM) were originally developed for accelerating N-body problems for particle-based methods. Nowadays, FMM is more than an N-body solver; recent trends in HPC have been to use FMMs in unconventional application areas. FMM is likely to be a main player in exascale due to its hierarchical nature and the techniques used to access the data via a tree structure, which allow many operations to happen simultaneously at each level of the hierarchy. In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. The ultimate aim of this thesis is to ensure the scalability of FMM on future exascale machines.

  18. Implementation of multivariate linear mixed-effects models in the analysis of indoor climate performance experiments

    DEFF Research Database (Denmark)

    Jensen, Kasper Lynge; Spliid, Henrik; Toftum, Jørn

    2011-01-01

    The aim of the current study was to apply multivariate mixed-effects modeling to analyze experimental data on the relation between air quality and the performance of office work. The method estimates in one step the effect of the exposure on a multi-dimensional response variable, and yields important information on the correlation between the different dimensions of the response variable, which in this study was composed of both subjective perceptions and a two-dimensional performance task outcome. Such correlation is typically not included in the output from univariate analysis methods. The analysis seems superior to conventional univariate statistics, and the information provided may be important for the design of performance experiments in general and for the conclusions that can be based on such studies. Data ...

  19. Circuit modeling and performance analysis of photoconductive antenna

    Science.gov (United States)

    Prajapati, Jitendra; Bharadwaj, Mrinmoy; Chatterjee, Amitabh; Bhattacharjee, Ratnajit

    2017-07-01

    In recent years, several experimental and simulation studies have been reported on the terahertz (THz) generation using a photoconductive antenna (PCA). The major problem with PCA is its low overall efficiency, which depends on several parameters related to a semiconductor material, an antenna geometry, and characteristics of the laser beam. To analyze the effect of different parameters on PCA efficiency, accurate circuit modeling, using physics undergoing in the device, is necessary. Although a few equivalent circuit models have been proposed in the literature, these models do not adequately capture the semiconductor physics in PCA. This paper presents an equivalent electrical circuit model of PCA incorporating basic semiconductor device physics. The proposed equivalent circuit model is validated using Sentaurus TCAD device level modeling tool as well as with the experimental results available in the literature. The results obtained from the proposed circuit model are in close agreement with the TCAD results as well as available experimental results. The proposed circuit model is expected to contribute towards future research efforts aimed at optimization of the performance of the PCA system.

  20. On Typicality in Nonequilibrium Steady States

    Science.gov (United States)

    Evans, Denis J.; Williams, Stephen R.; Searles, Debra J.; Rondoni, Lamberto

    2016-08-01

    From the statistical mechanical viewpoint, relaxation of macroscopic systems and response theory rest on a notion of typicality, according to which the behavior of single macroscopic objects is given by appropriate ensembles: ensemble averages of observable quantities represent the measurements performed on single objects, because " almost all" objects share the same fate. In the case of non-dissipative dynamics and relaxation toward equilibrium states, " almost all" is referred to invariant probability distributions that are absolutely continuous with respect to the Lebesgue measure. In other words, the collection of initial micro-states (single systems) that do not follow the ensemble is supposed to constitute a set of vanishing, phase space volume. This approach is problematic in the case of dissipative dynamics and relaxation to nonequilibrium steady states, because the relevant invariant distributions attribute probability 1 to sets of zero volume, while evolution commonly begins in equilibrium states, i.e., in sets of full phase space volume. We consider the relaxation of classical, thermostatted particle systems to nonequilibrium steady states. We show that the dynamical condition known as Ω T-mixing is necessary and sufficient for relaxation of ensemble averages to steady state values. Moreover, we find that the condition known as weak T-mixing applied to smooth observables is sufficient for ensemble relaxation to be independent of the initial ensemble. Lastly, we show that weak T-mixing provides a notion of typicality for dissipative dynamics that is based on the (non-invariant) Lebesgue measure, and that we call physical ergodicity.

  1. Correlation between human observer performance and model observer performance in differential phase contrast CT

    Energy Technology Data Exchange (ETDEWEB)

    Li, Ke; Garrett, John [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, Wisconsin 53792 (United States)

    2013-11-15

    Purpose: With the recently expanding interest and developments in x-ray differential phase contrast CT (DPC-CT), the evaluation of its task-specific detection performance and comparison with the corresponding absorption CT under a given radiation dose constraint become increasingly important. Mathematical model observers are often used to quantify the performance of imaging systems, but their correlations with actual human observers need to be confirmed for each new imaging method. This work is an investigation of the effects of stochastic DPC-CT noise on the correlation of detection performance between model and human observers with signal-known-exactly (SKE) detection tasks. Methods: The detectabilities of different objects (five disks with different diameters and two breast lesion masses) embedded in an experimental DPC-CT noise background were assessed using both model and human observers. The detectability of the disk and lesion signals was then measured using five types of model observers including the prewhitening ideal observer, the nonprewhitening (NPW) observer, the nonprewhitening observer with eye filter and internal noise (NPWEi), the prewhitening observer with eye filter and internal noise (PWEi), and the channelized Hotelling observer (CHO). The same objects were also evaluated by four human observers using the two-alternative forced choice method. The results from the model observer experiment were quantitatively compared to the human observer results to assess the correlation between the two techniques. Results: The contrast-to-detail (CD) curve generated by the human observers for the disk-detection experiments shows that the required contrast to detect a disk is inversely proportional to the square root of the disk size. Based on the CD curves, the ideal and NPW observers tend to systematically overestimate the performance of the human observers. The NPWEi and PWEi observers did not predict human performance well either, as the slopes of their CD

  2. Compact models and performance investigations for subthreshold interconnects

    CERN Document Server

    Dhiman, Rohit

    2014-01-01

    The book provides a detailed analysis of issues related to sub-threshold interconnect performance from the perspective of analytical approach and design techniques. Particular emphasis is laid on the performance analysis of coupling noise and variability issues in sub-threshold domain to develop efficient compact models. The proposed analytical approach gives physical insight of the parameters affecting the transient behavior of coupled interconnects. Remedial design techniques are also suggested to mitigate the effect of coupling noise. The effects of wire width, spacing between the wires, wi

  3. Mechanical Response of Typical Cement Concrete Pavements under Impact Loading

    Directory of Open Access Journals (Sweden)

    Ding Fei

    2017-01-01

    Full Text Available In order to study the mechanical response of cement concrete pavements under impact loading, four types of typical cement concrete pavement structures are investigated experimentally and numerically under an impact load. Full-scale three-dimensional pavement slots are tested under an impact load and are monitored for the mechanical characteristics including the deflection of the pavement surface layer, the strain distribution at the bottom of the slab, and the plastic damage and cracking under the dynamic impact load. Numerical analysis is performed by developing a three-dimensional finite element model and by utilizing a cement concrete damage model. The results show that the calculation results based on the cement concrete damage model are in reasonable agreement with the experimental results based on the three-dimensional test slot experiment. The peak values of stress and strain as monitored by the sensors are analyzed and compared with the numerical results, indicating that the errors of numerical results from the proposed model are mostly within 10%. The rationality of the finite element model is verified, and the model is expected to be a suitable reference for the analysis and design of cement concrete pavements.

  4. 3D Massive MIMO Systems: Channel Modeling and Performance Analysis

    KAUST Repository

    Nadeem, Qurrat-Ul-Ain

    2015-03-01

    Multiple-input-multiple-output (MIMO) systems of current LTE releases are capable of adaptation in the azimuth only. More recently, the trend is to enhance the system performance by exploiting the channel\\'s degrees of freedom in the elevation through the dynamic adaptation of the vertical antenna beam pattern. This necessitates the derivation and characterization of three-dimensional (3D) channels. Over the years, channel models have evolved to address the challenges of wireless communication technologies. In parallel to theoretical studies on channel modeling, many standardized channel models like COST-based models, 3GPP SCM, WINNER, ITU have emerged that act as references for industries and telecommunication companies to assess system-level and link-level performances of advanced signal processing techniques over real-like channels. Given the existing channels are only two dimensional (2D) in nature; a large effort in channel modeling is needed to study the impact of the channel component in the elevation direction. The first part of this work sheds light on the current 3GPP activity around 3D channel modeling and beamforming, an aspect that to our knowledge has not been extensively covered by a research publication. The standardized MIMO channel model is presented, that incorporates both the propagation effects of the environment and the radio effects of the antennas. In order to facilitate future studies on the use of 3D beamforming, the main features of the proposed 3D channel model are discussed. A brief overview of the future 3GPP 3D channel model being outlined for the next generation of wireless networks is also provided. In the subsequent part of this work, we present an information-theoretic channel model for MIMO systems that supports the elevation dimension. The model is based on the principle of maximum entropy, which enables us to determine the distribution of the channel matrix consistent with the prior information on the angles of departure and

  5. Key performance indicators in hospital based on balanced scorecard model

    Directory of Open Access Journals (Sweden)

    Hamed Rahimi

    2017-01-01

    Full Text Available Introduction: Performance measurement is receiving increasing attention all over the world. Nowadays, in many organizations, irrespective of their type or size, performance evaluation is a main concern and a key issue for top administrators. The purpose of this study is to organize suitable key performance indicators (KPIs) for hospitals’ performance evaluation based on the balanced scorecard (BSC). Method: This is a mixed-method study. In order to identify the hospital performance indicators (HPIs), related literature was first reviewed and then an expert panel and the Delphi method were used. In this study, two rounds were needed to reach the desired level of consensus. The experts rated the importance of the indicators on a five-point Likert scale. In the consensus calculation, the consensus percentage was calculated by classifying the values 1-3 as not important (0) and 4-5 as important (1). The simple additive weighting (SAW) technique was used to rank the indicators and select the hospital KPIs. The data were analyzed with Excel 2010. Results: A total of 218 indicators were obtained from the literature review. Through the internal expert panel, 77 indicators were selected. Finally, 22 were selected as hospital KPIs: ten indicators in the internal process perspective and 5, 4, and 3 indicators in the finance, learning and growth, and customer perspectives, respectively. Conclusion: This model can be a useful tool for evaluating and comparing the performance of hospitals. The model is flexible and can be adjusted according to differences in the target hospitals. This study can be beneficial for hospital administrators and can help them change their perspective on performance evaluation.
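
    The consensus coding and simple additive weighting (SAW) step described above can be illustrated with a short sketch. The indicator names and expert ratings below are hypothetical and are not taken from the study; the snippet only shows the mechanics of coding ratings 4-5 as important, computing a consensus percentage, and ranking indicators by a normalized SAW score.

    # Illustrative sketch of the consensus coding and SAW ranking step.
    # Indicator names and expert ratings are hypothetical placeholders.
    import numpy as np

    indicators = ["bed occupancy rate", "average length of stay",
                  "patient satisfaction", "staff training hours"]
    # Rows: experts, columns: indicators, ratings on a 1-5 Likert scale.
    ratings = np.array([[5, 4, 3, 2],
                        [4, 5, 4, 3],
                        [5, 3, 4, 4],
                        [4, 4, 5, 2]])

    # Consensus percentage: ratings 4-5 count as "important" (1), 1-3 as "not important" (0).
    important = (ratings >= 4).astype(int)
    consensus_pct = important.mean(axis=0) * 100

    # Simple additive weighting: normalize mean ratings and rank indicators.
    mean_rating = ratings.mean(axis=0)
    saw_score = mean_rating / mean_rating.max()
    order = np.argsort(-saw_score)

    for rank, idx in enumerate(order, start=1):
        print(f"{rank}. {indicators[idx]}: consensus {consensus_pct[idx]:.0f}%, "
              f"SAW score {saw_score[idx]:.2f}")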

  6. Soil Characteristics Under Typical Agroforestry Models in the Northwestern Liaoning Region

    Institute of Scientific and Technical Information of China (English)

    王娇

    2014-01-01

    Four typical agroforestry models in the northwestern Liaoning region were studied for their effects on soil improvement. The results show that, relative to the control (primitive wasteland), the agroforestry models reduce soil bulk density and increase soil water-holding capacity and porosity, while also improving soil fertility by increasing soil nitrogen, phosphorus, potassium and organic matter contents. A comprehensive evaluation of soil quality shows that, among the agroforestry models, Prunus sibirica var. suavosperma - Arachis hypogaea - Zea mays has the best soil-improvement effect, followed by Prunus sibirica var. suavosperma - Ephedra sinica and then Pinus sylvestris var. mongolica - Arachis hypogaea. With respect to soil depth, the Prunus sibirica var. suavosperma - Arachis hypogaea - Zea mays and Prunus sibirica var. suavosperma - Ephedra sinica models improve the 0-20 cm soil layer more than the 20-40 cm layer, while the Pinus sylvestris - Arachis hypogaea model shows the opposite pattern.

  7. Numerical simulation of microclimate in a typical Beijing residential area based on the ENVI-met model

    Institute of Scientific and Technical Information of China (English)

    秦文翠; 胡聃; 李元征; 郭振

    2015-01-01

    Based on the three-dimensional microclimate simulation software ENVI-met, numerical models of a typical residential area in Beijing were established for two scenarios, the actual situation and a green-roof retrofit, and the microclimate characteristics of the study area were simulated. The results show that, after several rounds of parameter calibration, the ENVI-met simulation results are close to the field measurements and reflect the distribution characteristics of the micro-scale climate well. Simple roof greening has a clear cooling and humidifying effect: the mean temperature of the study area decreases by 2-3 °C, relative humidity increases by 2.7%, and wind speed decreases from 0.32-2.70 m·s−1 to 0.40-1.11 m·s−1, an overall reduction of 0.34 m·s−1. ENVI-met can be applied effectively to microclimate simulation and assessment through its three-dimensional model, and can provide useful information for local microclimate design and urban planning.

  8. Evaluation of the performance of DIAS ionospheric forecasting models

    Directory of Open Access Journals (Sweden)

    Tsagouri Ioanna

    2011-08-01

    Full Text Available Nowcasting and forecasting ionospheric products and services for the European region have been provided regularly since August 2006 through the European Digital upper Atmosphere Server (DIAS, http://dias.space.noa.gr). Currently, DIAS ionospheric forecasts are based on the online implementation of two models: (i) the solar wind driven autoregression model for ionospheric short-term forecast (SWIF), which combines historical and real-time ionospheric observations with solar-wind parameters obtained in real time at the L1 point from the NASA ACE spacecraft, and (ii) the geomagnetically correlated autoregression model (GCAM), which is a time series forecasting method driven by a synthetic geomagnetic index. In this paper we investigate the operational ability and the accuracy of both DIAS models by carrying out a metrics-based evaluation of their performance under all possible conditions. The analysis was based on the systematic comparison of the models’ predictions with actual observations obtained over almost one solar cycle (1998–2007) at four European ionospheric locations (Athens, Chilton, Juliusruh and Rome), and on the comparison of the models’ performance against two simple prediction strategies, the median- and persistence-based predictions, during storm conditions. The results verify the operational validity of both models and quantify their prediction accuracy under all possible conditions, in support of operational applications as well as comparative studies in assessing or expanding current ionospheric forecasting capabilities.

  9. Lightweight ZERODUR: Validation of Mirror Performance and Mirror Modeling Predictions

    Science.gov (United States)

    Hull, Tony; Stahl, H. Philip; Westerhoff, Thomas; Valente, Martin; Brooks, Thomas; Eng, Ron

    2017-01-01

    Upcoming spaceborne missions, both moderate and large in scale, require extreme dimensional stability while relying both upon established lightweight mirror materials, and also upon accurate modeling methods to predict performance under varying boundary conditions. We describe tests, recently performed at NASA's XRCF chambers and laboratories in Huntsville, Alabama, during which a 1.2 m diameter, f/1.29, 88% lightweighted SCHOTT ZERODUR(TradeMark) mirror was tested for thermal stability under static loads in steps down to 230K. Test results are compared to model predictions, based upon recently published data on ZERODUR(TradeMark). In addition to monitoring the mirror surface for thermal perturbations in XRCF Thermal Vacuum tests, static load gravity deformations have been measured and compared to model predictions. Also, the modal response (dynamic disturbance) was measured and compared to the model. We discuss the fabrication approach and optomechanical design of the ZERODUR(TradeMark) mirror substrate by SCHOTT and its optical preparation for test by Arizona Optical Systems (AOS), and summarize the outcome of NASA's XRCF tests and model validations.

  10. Performance-Based Design for Large Crowd Venue Control Using a Multi-Agent Model

    Institute of Scientific and Technical Information of China (English)

    ZHANG Qingsong; ZHAO Guomin; LIU Jinlan

    2009-01-01

    Performance-based design is more holistic and flexible than prescriptive design for providing safety in large complex buildings. Here, a multi-agent method to model the egress patterns of evacuees is combined with a microscopic pedestrian simulation model used to analyze the forces between individuals in a densely populated enclosed space in a crowd crushing and trampling analysis (CroC&Ts). The system is used to model egress patterns in a typical crowd evacuation simulation. The simulations indicate that some individuals will die from crushing in 2 m and 4 m wide exits in emergencies. The simulations also show that the fatality probability increases when barriers obstruct the path and when the egress distances are larger. The simulations validate the conclusions of the stranded crowd model (SCM) and provide quantitative predictions of the crowd crushing and trampling risk. Therefore, the CroC&Ts can provide performance-based egress designs for large public buildings and improve crowd safety management and emergency planning.

  11. Comparison of performance and process optimization for two typical filter media in ANAMMOX biofilters

    Institute of Scientific and Technical Information of China (English)

    杨庆; 谷鹏超; 刘秀红; 周瑶; 彭永臻

    2015-01-01

    To promote the engineering application of anaerobic ammonium oxidation (ANAMMOX) in municipal sewage treatment, the nitrogen removal performance and key operational parameters of two typical biofilters, using ceramsite or volcanic rock as filter media, were studied. The results showed that the anammox biofilm in both biofilters was successfully cultivated after 10 days of inoculation; the biofilm amount and EPS on the volcanic rock were higher than on the ceramsite. Filter media and backwashing both play an important role in achieving stable nitrogen removal in anammox biofilters. At low filtration velocity, the two biofilters showed almost the same anammox performance, both had long backwash cycles, and water-only backwashing was optimal. At high filtration velocity, however, the volcanic rock biofilter clogged more easily than the ceramsite biofilter and its effective filter-layer depth was smaller, so combined air-water backwashing with a shortened backwash cycle and prolonged backwash time was preferable. The filtration velocity of both biofilters should be kept below 2 m·h−1, and the highest total nitrogen loading rates reached 3.81 kg·m−3·d−1 and 3.56 kg·m−3·d−1, respectively.

  12. From Performance Measurement to Strategic Management Model: Balanced Scorecard

    Directory of Open Access Journals (Sweden)

    Cihat Savsar

    2015-03-01

    Full Text Available Abstract: In today’s competitive markets, one of the main conditions for the survival of enterprises is having an effective performance management system. Decisions must be taken by management according to the performance of assets. In the transition from the industrial society to the information society, business structures have changed and the value of non-financial assets has increased. As a result, systems have emerged that are based on intangible assets and their measurement rather than on tangible assets alone. With economic and technological development, evaluating a business along a single dimension is no longer sufficient. Performance evaluation methods can be applied in a business with an integrated approach through their alignment with business strategy, their link to the reward system, and the cause-and-effect links established between performance measures. The balanced scorecard is one of the commonly used measurement methods. While it was first used in 1992 as a performance measurement tool, today it is used as a strategic management model in addition to its conventional uses. The BSC contains the customer, internal process, and learning and growth perspectives in addition to the financial perspective. The learning and growth perspective is a determinant of the other perspectives. The objectives that need to be accomplished in the other dimensions in order to achieve those set out in the financial perspective are emphasized. Establishing causal links between performance measures and targets, and how to achieve the specified goals with strategy maps, are described.

  13. Performance model for Micro Tunnelling Boring Machines (MTBM

    Directory of Open Access Journals (Sweden)

    J. Gallo

    2017-06-01

    Full Text Available Since the last decades of the twentieth century, various formulae have been proposed to estimate the tunnelling performance of disc cutters, mainly employed in Tunnelling Boring Machines (TBM). Nevertheless, their suitability has not been verified for Micro Tunnelling Boring Machines (MTBM), which have a smaller excavation diameter, between 1,000 and 2,500 mm, and smaller cutter tools, and for which parameters like joint spacing may have a different influence. This paper analyzes the models proposed for TBM. After observing very low correlation with data obtained in 15 microtunnels, a new performance model is developed, adapted to the geomechanical data available in this type of work. Moreover, a method is proposed to calculate the total number of hours necessary to carry out a microtunnel, including all the tasks of the excavation cycle as well as equipment installation and removal.

  14. Performance potential for simulating spin models on GPU

    CERN Document Server

    Weigel, Martin

    2011-01-01

    Graphics processing units (GPUs) have recently been used to an increasing degree for general computational purposes. This development is motivated by their theoretical peak performance, which significantly exceeds that of broadly available CPUs. For practical purposes, however, it is far from clear how much of this theoretical performance can be realized in actual scientific applications. As is discussed here for the case of studying classical spin models of statistical mechanics by Monte Carlo simulations, only an explicit tailoring of the involved algorithms to the specific architecture under consideration allows one to harvest the computational power of GPU systems. A number of examples, ranging from Metropolis simulations of ferromagnetic Ising models, over continuous Heisenberg and disordered spin-glass systems, to parallel-tempering simulations are discussed. Significant speed-ups by factors of up to 1000 compared to serial CPU code as well as previous GPU implementations are observed.

  15. Performance potential for simulating spin models on GPU

    Science.gov (United States)

    Weigel, Martin

    2012-04-01

    Graphics processing units (GPUs) have recently been used to an increasing degree for general computational purposes. This development is motivated by their theoretical peak performance, which significantly exceeds that of broadly available CPUs. For practical purposes, however, it is far from clear how much of this theoretical performance can be realized in actual scientific applications. As is discussed here for the case of studying classical spin models of statistical mechanics by Monte Carlo simulations, only an explicit tailoring of the involved algorithms to the specific architecture under consideration allows one to harvest the computational power of GPU systems. A number of examples, ranging from Metropolis simulations of ferromagnetic Ising models, over continuous Heisenberg and disordered spin-glass systems, to parallel-tempering simulations are discussed. Significant speed-ups by factors of up to 1000 compared to serial CPU code as well as previous GPU implementations are observed.
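
    The kind of algorithmic tailoring described in these two records rests on a checkerboard (red/black) decomposition of the lattice, which lets all spins of one colour be updated simultaneously. The sketch below is an illustrative CPU stand-in written with NumPy rather than actual GPU code; the lattice size, temperature and sweep count are arbitrary.

    # Checkerboard Metropolis update for the 2D ferromagnetic Ising model.
    # Sites of one colour do not interact with each other, so they can all be
    # updated at once; this is the decomposition that maps well onto GPUs.
    import numpy as np

    rng = np.random.default_rng(1)
    L, beta, n_sweeps = 64, 0.44, 200     # lattice size, inverse temperature, sweeps
    spins = rng.choice([-1, 1], size=(L, L))

    yy, xx = np.mgrid[:L, :L]
    masks = [(yy + xx) % 2 == c for c in (0, 1)]   # the two checkerboard sublattices

    for _ in range(n_sweeps):
        for mask in masks:
            # Sum of the four nearest neighbours with periodic boundary conditions.
            nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
                  + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
            dE = 2.0 * spins * nn                  # energy cost of flipping each spin
            accept = rng.random((L, L)) < np.exp(-beta * dE)
            spins = np.where(mask & accept, -spins, spins)

    print("mean magnetization per spin:", abs(spins.mean()))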

  16. Model for magnetostrictive performance in soft/hard coupled bilayers

    Energy Technology Data Exchange (ETDEWEB)

    Jianjun, Li, E-mail: ljj8081@gmail.com [National Key Laboratory of Science and Technology on Advanced Composites in Special Environments, Harbin Institute of Technology, Harbin 150080 (China); Laboratoire de Magnétisme de Bretagne, Université de Bretagne Occidentale, 29238 Brest Cedex 3 (France); Beibei, Duan; Minglun, Li [National Key Laboratory of Science and Technology on Advanced Composites in Special Environments, Harbin Institute of Technology, Harbin 150080 (China)

    2015-11-01

    A model is set up to investigate the magnetostrictive performance and spin response in soft/hard magnetostrictive coupled bilayers. Direct coupling between soft ferromagnet and hard TbFe{sub 2} at the interface is assumed. The magnetostriction results from the rotation of ferromagnetic vector and TbFe{sub 2} vectors from the easy axis driven by applied magnetic field. Dependence of magnetostriction on TbFe{sub 2} layer thickness and interfacial exchange interaction is studied. The simulated results reveal the compromise between interfacial exchange interaction and anisotropy of TbFe{sub 2} hard layer. - Highlights: • A model for magnetostrictive performance in soft/hard coupled bilayers. • Simulated magnetostriction loop and corresponding spin response. • Competition and compromise between interfacial interaction and TbFe{sub 2} anisotropy. • Dependence of saturated magnetostriction on different parameters.

  17. The performance of FLake in the Met Office Unified Model

    Directory of Open Access Journals (Sweden)

    Gabriel Gerard Rooney

    2013-12-01

    Full Text Available We present results from the coupling of FLake to the Met Office Unified Model (MetUM. The coupling and initialisation are first described, and the results of testing the coupled model in local and global model configurations are presented. These show that FLake has a small statistical impact on screen temperature, but has the potential to modify the weather in the vicinity of areas of significant inland water. Examination of FLake lake ice has revealed that the behaviour of lakes in the coupled model is unrealistic in some areas of significant sub-grid orography. Tests of various modifications to ameliorate this behaviour are presented. The results indicate which of the possible model changes best improve the annual cycle of lake ice. As FLake has been developed and tuned entirely outside the Unified Model system, these results can be interpreted as a useful objective measure of the performance of the Unified Model in terms of its near-surface characteristics.

  18. Does model performance improve with complexity? A case study with three hydrological models

    Science.gov (United States)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time also simple conceptual models have advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in case of runoff but not for soil moisture. Furthermore the most sophisticated PREVAH model shows an added value compared to the HBV model only in case of soil moisture. Focusing on extreme events we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance in lower altitudes as opposed to (pre-) alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).

  19. Thermal performance modeling of cross-flow heat exchangers

    CERN Document Server

    Cabezas-Gómez, Luben; Saíz-Jabardo, José Maria

    2014-01-01

    This monograph introduces a numerical computational methodology for thermal performance modeling of cross-flow heat exchangers, with applications in chemical, refrigeration and automobile industries. This methodology allows obtaining effectiveness-number of transfer units (e-NTU) data and has been used for simulating several standard and complex flow arrangements configurations of cross-flow heat exchangers. Simulated results have been validated through comparisons with results from available exact and approximate analytical solutions. Very accurate results have been obtained over wide ranges
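
    As a small, self-contained counterpart to the e-NTU data discussed above, the sketch below evaluates the standard approximate closed-form effectiveness relation for a cross-flow heat exchanger with both fluids unmixed, as given in common heat-transfer textbooks; it is not the monograph's numerical methodology, and the NTU and capacity-rate-ratio values are arbitrary examples.

    # Effectiveness-NTU relation for cross-flow, both fluids unmixed
    # (standard textbook approximation, used here only as an illustrative baseline).
    import math

    def effectiveness_crossflow_unmixed(ntu: float, cr: float) -> float:
        """Approximate effectiveness for cross-flow with both fluids unmixed."""
        if cr == 0.0:                    # one fluid with effectively infinite capacity rate
            return 1.0 - math.exp(-ntu)
        return 1.0 - math.exp((1.0 / cr) * ntu**0.22
                              * (math.exp(-cr * ntu**0.78) - 1.0))

    # Example: NTU = 2, capacity-rate ratio Cr = Cmin/Cmax = 0.75.
    ntu, cr = 2.0, 0.75
    eps = effectiveness_crossflow_unmixed(ntu, cr)
    # By definition q = eps * Cmin * (Th_in - Tc_in).
    print(f"effectiveness = {eps:.3f}")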

  20. Towards an Improved Performance Measure for Language Models

    CERN Document Server

    Ueberla, J P

    1997-01-01

    In this paper a first attempt at deriving an improved performance measure for language models, the probability ratio measure (PRM), is described. In a proof-of-concept experiment, it is shown that PRM correlates better with recognition accuracy and can lead to better recognition results when used as the optimisation criterion of a clustering algorithm. In spite of the approximations and limitations of this preliminary work, the results are very encouraging and should justify more work along the same lines.

  1. A Fuzzy Knowledge Representation Model for Student Performance Assessment

    DEFF Research Database (Denmark)

    Badie, Farshad

    Knowledge representation models based on Fuzzy Description Logics (DLs) can provide a foundation for reasoning in intelligent learning environments. While basic DLs are suitable for expressing crisp concepts and binary relationships, Fuzzy DLs are capable of processing degrees of truth/completeness about vague or imprecise information. This paper tackles the issue of representing fuzzy classes using OWL2 in a dataset describing Performance Assessment Results of Students (PARS).

  2. Human Engineering Modeling and Performance Lab Study Project

    Science.gov (United States)

    Oliva-Buisson, Yvette J.

    2014-01-01

    The HEMAP (Human Engineering Modeling and Performance) Lab is a joint effort between the Industrial and Human Engineering group and the KAVE (Kennedy Advanced Visualizations Environment) group. The lab consists of a sixteen-camera system that is used to capture human motions and operational tasks through the use of a Velcro suit equipped with sensors, and then to simulate these tasks in an ergonomic software package known as Jack. The Jack software is able to identify potential risk hazards.

  3. Modeling and Simulation of Ceramic Arrays to Improve Ballistic Performance

    Science.gov (United States)

    2014-04-30

    Distribution is unlimited. Abstract: Develop modeling and simulation tools, using depth of penetration (DOP) against 7.62 APM2 as the metric; evaluate SiC tile on aluminum with material properties from the literature; develop seam designs to improve performance and demonstrate them with DOP simulations. Subject terms: Al 5083, SiC, DOP experiments, AutoDyn.

  4. Towards Accreditation of Diagnostic Models for Improved Performance

    Science.gov (United States)

    2004-10-02

    analysis. Secondly, while performing testability analysis, the diagnostic algorithm is not included to assess the diagnosis (Sheppard & Simpson, 1998). Considering these factors, the Interactive Diagnostic Modeling Evaluator (i-DME) (Kodali, Robinson...) ... requirements set before to suit practical compulsions. This may lead to changing the basic principles and to refining the existing methods continuously

  5. Model for the Analysis of the Company Performance

    Directory of Open Access Journals (Sweden)

    Mădălina DUMBRAVĂ

    2010-08-01

    Full Text Available The analysis of the performance of a firm (company) has a determinant role in setting the strategy to follow, and this is all the more necessary during a period of economic and financial crisis. In the following, I have performed the analysis, based on balance sheet data, for SC DELTA SRL, using a system of indicators that are relevant and whose interpretation allows one to draw conclusions on the basis of which future development can be forecast. I have tried to use a number of indicators, viewed as a system, which would ultimately define a model for company performance analysis. The research focused on applying the system of indicators to the data from the balance sheet of SC DELTA SRL.

  6. Model helicopter performance degradation with simulated ice shapes

    Science.gov (United States)

    Tinetti, Ana F.; Korkan, Kenneth D.

    1987-01-01

    An experimental program using a commercially available model helicopter has been conducted in the Texas A&M University Subsonic Wind Tunnel to investigate main rotor performance degradation due to generic ice. The simulated ice, including both primary and secondary formations, was scaled by chord from previously documented artificial ice accretions. Base and iced performance data were gathered as functions of fuselage incidence, blade collective pitch, main rotor rotational velocity, and freestream velocity. It was observed that the presence of simulated ice tends to decrease the lift to equivalent drag ratio, as well as thrust coefficient for the range of velocity ratios tested. Also, increases in torque coefficient due to the generic ice formations were observed. Evaluation of the data has indicated that the addition of roughness due to secondary ice formations is crucial for proper evaluation of the degradation in main rotor performance.

  7. A performance measurement using balanced scorecard and structural equation modeling

    Directory of Open Access Journals (Sweden)

    Rosha Makvandi

    2014-02-01

    Full Text Available During the past few years, the balanced scorecard (BSC) has been widely used as a promising method for performance measurement. The BSC studies organizations in terms of four perspectives: customer, internal processes, learning and growth, and financial figures. This paper presents a hybrid of BSC and structural equation modeling (SEM) to measure the performance of an Iranian university in the province of Alborz, Iran. The study uses this conceptual framework, designs a questionnaire, and distributes it among university students and professors. Using the SEM technique, the survey analyzes the data, and the results indicate that the university performed poorly in all four perspectives. The survey also derives improvement targets by identifying the attributes necessary for performance improvement.

  8. The application of DEA model in enterprise environmental performance auditing

    Science.gov (United States)

    Li, F.; Zhu, L. Y.; Zhang, J. D.; Liu, C. Y.; Qu, Z. G.; Xiao, M. S.

    2017-01-01

    As a part of society, enterprises have an inescapable responsibility for environmental protection and governance. This article discusses the feasibility and necessity of enterprise environmental performance auditing and uses the DEA model to calculate the environmental performance of Haier as an example. Most of the reference data are selected and sorted from Haier’s environmental reports published in 2008, 2009, 2011 and 2015, and some of the data come from published articles and fieldwork. All the calculation results are obtained with the DEAP software and have high credibility. The analysis results of this article can give corporate managers an idea of how to use environmental performance auditing to adjust their corporate environmental investment quotas and change their companies’ environmental strategies.
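
    To make the DEA step concrete, the sketch below solves an input-oriented CCR DEA model as a linear program with SciPy. The inputs and outputs of the decision-making units are hypothetical placeholders and are not taken from the Haier reports used in the article.

    # Input-oriented CCR DEA model solved per decision-making unit (DMU) as an LP.
    # The data matrices are hypothetical; efficiencies lie in (0, 1].
    import numpy as np
    from scipy.optimize import linprog

    # Columns: DMUs; rows: inputs (e.g. energy use, water use) and outputs (e.g. output value).
    X = np.array([[4.0, 6.0, 5.0, 8.0],      # inputs  (m inputs  x n DMUs)
                  [3.0, 2.0, 4.0, 5.0]])
    Y = np.array([[6.0, 7.0, 8.0, 9.0]])     # outputs (s outputs x n DMUs)
    m, n = X.shape
    s = Y.shape[0]

    def ccr_efficiency(j0: int) -> float:
        # Variables: output weights u (s) then input weights v (m); maximize u.y0.
        c = np.concatenate([-Y[:, j0], np.zeros(m)])            # linprog minimizes
        A_eq = np.concatenate([np.zeros(s), X[:, j0]])[None]    # v.x0 = 1 (normalization)
        b_eq = [1.0]
        A_ub = np.hstack([Y.T, -X.T])                           # u.yj - v.xj <= 0 for all j
        b_ub = np.zeros(n)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (s + m), method="highs")
        return -res.fun

    for j in range(n):
        print(f"DMU {j}: CCR efficiency = {ccr_efficiency(j):.3f}")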

  9. Advanced transport systems analysis, modeling, and evaluation of performances

    CERN Document Server

    Janić, Milan

    2014-01-01

    This book provides a systematic analysis, modeling and evaluation of the performance of advanced transport systems. It offers an innovative approach by presenting a multidimensional examination of the performance of advanced transport systems and transport modes, useful for both theoretical and practical purposes. Advanced transport systems for the twenty-first century are characterized by the superiority of one or several of their infrastructural, technical/technological, operational, economic, environmental, social, and policy performances as compared to their conventional counterparts. The advanced transport systems considered include: Bus Rapid Transit (BRT) and Personal Rapid Transit (PRT) systems in urban area(s), electric and fuel cell passenger cars, high speed tilting trains, High Speed Rail (HSR), Trans Rapid Maglev (TRM), Evacuated Tube Transport system (ETT), advanced commercial subsonic and Supersonic Transport Aircraft (STA), conventionally- and Liquid Hydrogen (LH2)-fuelled commercial air trans...

  10. Model helicopter performance degradation with simulated ice shapes

    Science.gov (United States)

    Tinetti, Ana F.; Korkan, Kenneth D.

    1987-01-01

    An experimental program using a commercially available model helicopter has been conducted in the Texas A&M University Subsonic Wind Tunnel to investigate main rotor performance degradation due to generic ice. The simulated ice, including both primary and secondary formations, was scaled by chord from previously documented artificial ice accretions. Base and iced performance data were gathered as functions of fuselage incidence, blade collective pitch, main rotor rotational velocity, and freestream velocity. It was observed that the presence of simulated ice tends to decrease the lift to equivalent drag ratio, as well as thrust coefficient for the range of velocity ratios tested. Also, increases in torque coefficient due to the generic ice formations were observed. Evaluation of the data has indicated that the addition of roughness due to secondary ice formations is crucial for proper evaluation of the degradation in main rotor performance.

  11. Duct thermal performance models for large commercial buildings

    Energy Technology Data Exchange (ETDEWEB)

    Wray, Craig P.

    2003-10-01

    Despite the potential for significant energy savings by reducing duct leakage or other thermal losses from duct systems in large commercial buildings, California Title 24 has no provisions to credit energy-efficient duct systems in these buildings. A substantial reason is the lack of readily available simulation tools to demonstrate the energy-saving benefits associated with efficient duct systems in large commercial buildings. The overall goal of the Efficient Distribution Systems (EDS) project within the PIER High Performance Commercial Building Systems Program is to bridge the gaps in current duct thermal performance modeling capabilities, and to expand our understanding of duct thermal performance in California large commercial buildings. As steps toward this goal, our strategy in the EDS project involves two parts: (1) developing a whole-building energy simulation approach for analyzing duct thermal performance in large commercial buildings, and (2) using the tool to identify the energy impacts of duct leakage in California large commercial buildings, in support of future recommendations to address duct performance in the Title 24 Energy Efficiency Standards for Nonresidential Buildings. The specific technical objectives for the EDS project were to: (1) Identify a near-term whole-building energy simulation approach that can be used in the impacts analysis task of this project (see Objective 3), with little or no modification. A secondary objective is to recommend how to proceed with long-term development of an improved compliance tool for Title 24 that addresses duct thermal performance. (2) Develop an Alternative Calculation Method (ACM) change proposal to include a new metric for thermal distribution system efficiency in the reporting requirements for the 2005 Title 24 Standards. The metric will facilitate future comparisons of different system types using a common ''yardstick''. (3) Using the selected near-term simulation approach

  12. Ranking streamflow model performance based on Information theory metrics

    Science.gov (United States)

    Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas

    2016-04-01

    Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to test whether information theory-based metrics can be used as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series: watersheds served as information filters, and the streamflow time series were less random and more complex than those of precipitation. This reflects the fact that the watershed acts as an information filter in the hydrologic conversion process from precipitation to streamflow. The Nash-Sutcliffe efficiency metric increased as model complexity increased, but in many cases several models had efficiency values that were not statistically significantly different from each other. In such cases, ranking models by the closeness of the information theory-based parameters of the simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
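
    As an illustration of the symbolization idea, the sketch below computes a mean information gain for a synthetic streamflow-like series using a two-symbol alphabet split at the median and words of length L, with MIG taken as H(L) − H(L−1). This is one common formulation; the exact definitions, alphabet size and word length used in the study may differ.

    # Symbolize a synthetic streamflow series and compute a mean information gain (MIG).
    import numpy as np
    from collections import Counter

    def word_entropy(symbols: np.ndarray, length: int) -> float:
        """Shannon entropy (bits) of overlapping words of the given length."""
        words = ["".join(map(str, symbols[i:i + length]))
                 for i in range(len(symbols) - length + 1)]
        counts = np.array(list(Counter(words).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(2)
    # Hypothetical daily streamflow: an autocorrelated (smoothed) positive signal.
    flow = np.convolve(rng.lognormal(0.0, 0.8, 3650), np.ones(7) / 7, mode="valid")

    symbols = (flow > np.median(flow)).astype(int)   # two-symbol alphabet
    L = 3
    mig = word_entropy(symbols, L) - word_entropy(symbols, L - 1)
    print(f"mean information gain (L={L}): {mig:.3f} bits/symbol")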

  13. Energy consumption model and 65% energy conservation measures for typical residential buildings in Nanjing

    Institute of Scientific and Technical Information of China (English)

    金斯科; 龚延风

    2011-01-01

    The annual energy consumption of a typical residential building in Nanjing operated in intermittent air-conditioning mode was calculated with the DeST software for 24 different combinations of envelope properties, and a regression model of building energy consumption was established for typical residential buildings in the area. The impact of the thermal properties of the envelope and of the window-to-wall area ratio on building energy consumption was analysed, and measures for achieving 65% energy conservation in residential buildings in this area are proposed.

  14. 3D Massive MIMO Systems: Modeling and Performance Analysis

    KAUST Repository

    Nadeem, Qurrat-Ul-Ain

    2015-07-30

    Multiple-input-multiple-output (MIMO) systems of current LTE releases are capable of adaptation in the azimuth only. Recently, the trend is to enhance system performance by exploiting the channel’s degrees of freedom in the elevation, which necessitates the characterization of 3D channels. We present an information-theoretic channel model for MIMO systems that supports the elevation dimension. The model is based on the principle of maximum entropy, which enables us to determine the distribution of the channel matrix consistent with the prior information on the angles. Based on this model, we provide an analytical expression for the cumulative distribution function (CDF) of the mutual information (MI) for systems with a single receive antenna and a finite number of transmit antennas in the general signal-to-interference-plus-noise ratio (SINR) regime. The result is extended to systems with finite receive antennas in the low SINR regime. A Gaussian approximation to the asymptotic behavior of the MI distribution is derived for the regime of a large number of transmit antennas and paths. We corroborate our analysis with simulations that study the performance gains realizable through meticulous selection of the transmit antenna downtilt angles, confirming the potential of elevation beamforming to enhance system performance. The results are directly applicable to the analysis of 5G 3D massive MIMO systems.
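
    The mutual-information CDF discussed above can be approximated numerically. The sketch below is a Monte Carlo stand-in: it uses an i.i.d. Rayleigh-fading channel instead of the paper's maximum-entropy 3D channel model, and assumes a single receive antenna, equal power allocation across transmit antennas, and arbitrary SNR and antenna-count values.

    # Monte Carlo estimate of the CDF of mutual information for a 1 x n_tx MISO link.
    import numpy as np

    rng = np.random.default_rng(3)
    n_tx, snr_db, n_trials = 4, 10.0, 100_000
    snr = 10 ** (snr_db / 10)

    # 1 x n_tx complex Gaussian channel vectors, unit average power per entry.
    h = (rng.normal(size=(n_trials, n_tx))
         + 1j * rng.normal(size=(n_trials, n_tx))) / np.sqrt(2)
    mi = np.log2(1.0 + (snr / n_tx) * np.sum(np.abs(h) ** 2, axis=1))

    # Empirical CDF evaluated at a few rate thresholds (bits/s/Hz).
    for r in (1.0, 2.0, 3.0, 4.0):
        print(f"P(MI <= {r:.1f} bit/s/Hz) = {np.mean(mi <= r):.3f}")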

  15. Modelling the Progression of Male Swimmers’ Performances through Adolescence

    Directory of Open Access Journals (Sweden)

    Shilo J. Dormehl

    2016-01-01

    Full Text Available Insufficient data on adolescent athletes is contributing to the challenges facing youth athletic development and accurate talent identification. The purpose of this study was to model the progression of male sub-elite swimmers’ performances during adolescence. The performances of 446 males (12–19 year olds) competing in seven individual events (50, 100, 200 m freestyle, 100 m backstroke, breaststroke, butterfly, 200 m individual medley) over an eight-year period at an annual international schools swimming championship, run under FINA regulations, were collected. Quadratic functions for each event were determined using mixed linear models. Thresholds of peak performance were achieved between the ages of 18.5 ± 0.1 (50 m freestyle and 200 m individual medley) and 19.8 ± 0.1 (100 m butterfly) years. The slowest rate of improvement was observed in the 200 m individual medley (20.7%) and the highest in the 100 m butterfly (26.2%). Butterfly does, however, appear to be one of the last strokes in which males specialise. The models may be useful as talent identification tools, as they predict the age at which an average sub-elite swimmer could potentially peak. The expected rate of improvement could serve as a tool with which to monitor and evaluate benchmarks.
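
    The quadratic progression model can be sketched with a simple pooled least-squares fit, as below. The data are synthetic (a hypothetical 100 m freestyle event), and an ordinary polynomial fit replaces the mixed linear models of the study; the age of peak performance is read off the vertex of the fitted parabola.

    # Quadratic model of performance (time in seconds) versus age, with the age of
    # peak performance taken from the vertex of the fitted parabola. Synthetic data.
    import numpy as np

    rng = np.random.default_rng(4)
    ages = rng.uniform(12, 19, size=300)
    true_peak, true_min = 18.5, 56.0
    times = true_min + 0.35 * (ages - true_peak) ** 2 + rng.normal(0, 1.0, size=ages.size)

    a, b, c = np.polyfit(ages, times, deg=2)      # time = a*age^2 + b*age + c
    peak_age = -b / (2 * a)                       # vertex of the fitted parabola
    improvement = 100 * (np.polyval([a, b, c], 12)
                         - np.polyval([a, b, c], peak_age)) / np.polyval([a, b, c], 12)
    print(f"estimated age of peak performance: {peak_age:.1f} y, "
          f"improvement from age 12: {improvement:.1f}%")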

  16. Performance Evaluation Based on EFQM Excellence Model in Sport Organizations

    Directory of Open Access Journals (Sweden)

    Rasoul Faraji

    2012-06-01

    Full Text Available The present study aims to evaluate the performance of the physical education (P.E.) general office of Tehran province through the model of the European Foundation for Quality Management (EFQM). A questionnaire approach was used. The validity of the 50-item EFQM questionnaire was verified by experts, and its reliability was calculated in a pilot study (α=0.928). Ninety-five questionnaires were distributed to the subjects (N=n), of which 80 were returned and included in the statistical analysis. Of the nine EFQM criteria, the highest score was gained in key performance results (37.62%) and the lowest in people results (27.94%). In total, the organization achieved 337.11 points out of a possible 1000. Additionally, there was a strong relationship (r=0.827, p=0.001) between enablers and results (P<0.05). Based on the scores gained in the criteria, improvement measures in all criteria are essential for this organization, especially in the people criterion from the enablers domain and the people results criterion from the results domain. Furthermore, the physical education area is considered one of the best fields for application of the excellence model towards performance excellence and better results, and the model therefore appears to have high potential for responding to problems commonly seen in the sport sector.

  17. Performance of chromatographic systems to model soil-water sorption.

    Science.gov (United States)

    Hidalgo-Rodríguez, Marta; Fuguet, Elisabet; Ràfols, Clara; Rosés, Martí

    2012-08-24

    A systematic approach for evaluating the goodness of chromatographic systems to model the sorption of neutral organic compounds by soil from water is presented in this work. It is based on the examination of the three sources of error that determine the overall variance obtained when soil-water partition coefficients are correlated against chromatographic retention factors: the variance of the soil-water sorption data, the variance of the chromatographic data, and the variance attributed to the dissimilarity between the two systems. These contributions of variance are easily predicted through the characterization of the systems by the solvation parameter model. According to this method, several chromatographic systems besides the reference octanol-water partition system have been selected to test their performance in the emulation of soil-water sorption. The results from the experimental correlations agree with the predicted variances. The high-performance liquid chromatography system based on an immobilized artificial membrane and the micellar electrokinetic chromatography systems of sodium dodecylsulfate and sodium taurocholate provide the most precise correlation models. They have shown to predict well soil-water sorption coefficients of several tested herbicides. Octanol-water partitions and high-performance liquid chromatography measurements using C18 columns are less suited for the estimation of soil-water partition coefficients.

  18. Beamforming in Ad Hoc Networks: MAC Design and Performance Modeling

    Directory of Open Access Journals (Sweden)

    Khalil Fakih

    2009-01-01

    Full Text Available In this paper we examine the benefits of beamforming techniques in ad hoc networks. We first devise a novel MAC paradigm for ad hoc networks using these techniques in a multipath fading environment. In such networks, the use of conventional directional antennas does not necessarily improve system performance, while exploiting the potential benefits of smart antenna systems, and especially beamforming techniques, requires prior knowledge of the physical channel. Our proposal jointly performs channel estimation and radio resource sharing. We validate the effectiveness of the proposed MAC and evaluate the effects of channel estimation on network performance. We then present an accurate analytical model for the performance of the IEEE 802.11 MAC protocol, and extend it, by introducing the fading probability, to derive the saturation throughput of our proposed MAC when the simplest beamforming strategy is used in real multipath fading ad hoc networks. Finally, numerical results validate our proposal.

  19. Forecasting Performance of Asymmetric GARCH Stock Market Volatility Models

    Directory of Open Access Journals (Sweden)

    Hojin Lee

    2009-12-01

    Full Text Available We investigate the asymmetry between positive and negative returns in their effect on the conditional variance of the stock market index and incorporate these characteristics to form an out-of-sample volatility forecast. Contrary to prior evidence, however, the results in this paper suggest that no asymmetric GARCH model is superior to the basic GARCH(1,1) model. It is our prior knowledge that, for equity returns, it is unlikely that positive and negative shocks have the same impact on volatility. In order to reflect this intuition, we implement three diagnostic tests for volatility models: the Sign Bias Test, the Negative Size Bias Test, and the Positive Size Bias Test, as well as the tests against the alternatives of QGARCH and GJR-GARCH. The asymmetry test results indicate that the sign and the size of the unexpected return shock do not influence current volatility differently, which contradicts our presumption that there are asymmetric effects in stock market volatility. This result is in line with the various diagnostic tests designed to determine whether the GARCH(1,1) volatility estimates adequately represent the data. The diagnostic tests in section 2 indicate that the GARCH(1,1) model for weekly KOSPI returns is robust to the misspecification test. We also investigate two representative asymmetric GARCH models, the QGARCH and GJR-GARCH models, for their out-of-sample forecasting performance. The out-of-sample forecasting ability test reveals that no single model clearly outperforms the others. The GJR-GARCH and QGARCH models give mixed results in forecasting ability on all four criteria across all forecast horizons considered. Also, the predictive accuracy test of Diebold and Mariano based on both absolute and squared prediction errors suggests that the forecasts from the linear and asymmetric GARCH models need not be significantly different from each other.
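
    The symmetric and asymmetric specifications compared in the study differ only in the leverage term of the conditional-variance recursion. The sketch below evaluates GARCH(1,1) and GJR-GARCH(1,1) variance recursions on a hypothetical return series with illustrative, not estimated, parameter values.

    # Conditional-variance recursions for GARCH(1,1) and GJR-GARCH(1,1).
    # The return series and parameter values are hypothetical placeholders.
    import numpy as np

    rng = np.random.default_rng(5)
    r = rng.normal(0.0, 1.0, 500)            # stand-in for demeaned weekly returns

    def garch_variance(returns, omega, alpha, beta, gamma=0.0):
        """sigma2_t = omega + alpha*e^2 + gamma*e^2*I(e<0) + beta*sigma2_{t-1}."""
        sigma2 = np.empty_like(returns)
        sigma2[0] = returns.var()
        for t in range(1, len(returns)):
            e = returns[t - 1]
            sigma2[t] = (omega + alpha * e**2
                         + gamma * e**2 * (e < 0)   # leverage term (zero for plain GARCH)
                         + beta * sigma2[t - 1])
        return sigma2

    sym = garch_variance(r, omega=0.05, alpha=0.08, beta=0.90)
    gjr = garch_variance(r, omega=0.05, alpha=0.04, beta=0.90, gamma=0.08)
    print(f"mean conditional variance, GARCH(1,1): {sym.mean():.3f}")
    print(f"mean conditional variance, GJR-GARCH : {gjr.mean():.3f}")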

  20. Green roof hydrologic performance and modeling: a review.

    Science.gov (United States)

    Li, Yanling; Babcock, Roger W

    2014-01-01

    Green roofs reduce runoff from impervious surfaces in urban development. This paper reviews the technical literature on green roof hydrology. Laboratory experiments and field measurements have shown that green roofs can reduce stormwater runoff volume by 30 to 86%, reduce peak flow rate by 22 to 93% and delay the peak flow by 0 to 30 min and thereby decrease pollution, flooding and erosion during precipitation events. However, the effectiveness can vary substantially due to design characteristics making performance predictions difficult. Evaluation of the most recently published study findings indicates that the major factors affecting green roof hydrology are precipitation volume, precipitation dynamics, antecedent conditions, growth medium, plant species, and roof slope. This paper also evaluates the computer models commonly used to simulate hydrologic processes for green roofs, including stormwater management model, soil water atmosphere and plant, SWMS-2D, HYDRUS, and other models that are shown to be effective for predicting precipitation response and economic benefits. The review findings indicate that green roofs are effective for reduction of runoff volume and peak flow, and delay of peak flow, however, no tool or model is available to predict expected performance for any given anticipated system based on design parameters that directly affect green roof hydrology.

  1. A Fluid Model for Performance Analysis in Cellular Networks

    Directory of Open Access Journals (Sweden)

    Coupechoux Marceau

    2010-01-01

    Full Text Available We propose a new framework to study the performance of cellular networks using a fluid model and we derive from this model analytical formulas for interference, outage probability, and spatial outage probability. The key idea of the fluid model is to consider the discrete base station (BS entities as a continuum of transmitters that are spatially distributed in the network. This model allows us to obtain simple analytical expressions to reveal main characteristics of the network. In this paper, we focus on the downlink other-cell interference factor (OCIF, which is defined for a given user as the ratio of its outer cell received power to its inner cell received power. A closed-form formula of the OCIF is provided in this paper. From this formula, we are able to obtain the global outage probability as well as the spatial outage probability, which depends on the location of a mobile station (MS initiating a new call. Our analytical results are compared to Monte Carlo simulations performed in a traditional hexagonal network. Furthermore, we demonstrate an application of the outage probability related to cell breathing and densification of cellular networks.
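
    The other-cell interference factor defined above can be checked numerically. The sketch below is a simple Monte Carlo computation of the OCIF on a regular (triangular-lattice) base station layout, not the paper's closed-form fluid-model expression; the path-loss exponent, inter-site distance and mobile positions are arbitrary assumptions.

    # Monte Carlo other-cell interference factor (OCIF) on a regular BS lattice.
    # OCIF = (sum of other-cell received powers) / (serving-cell received power).
    import numpy as np

    rng = np.random.default_rng(6)
    eta, D, rings = 3.5, 1.0, 6               # path-loss exponent, inter-site distance
    # Triangular lattice of base stations around the serving BS at the origin.
    bs = np.array([(D * (i + 0.5 * j), D * j * np.sqrt(3) / 2)
                   for i in range(-rings, rings + 1)
                   for j in range(-rings, rings + 1)])
    others = bs[np.any(bs != 0.0, axis=1)]    # all BSs except the serving one

    def ocif(r: float, n_draws: int = 5000) -> float:
        theta = rng.uniform(0, 2 * np.pi, n_draws)
        ms = np.column_stack([r * np.cos(theta), r * np.sin(theta)])  # mobile positions
        d_other = np.linalg.norm(ms[:, None, :] - others[None, :, :], axis=2)
        outer = np.sum(d_other ** (-eta), axis=1)                     # other-cell power
        inner = r ** (-eta)                                           # serving-cell power
        return float(np.mean(outer / inner))

    for r in (0.1, 0.3, 0.5):
        print(f"OCIF at r = {r:.1f} * D: {ocif(r):.3f}")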

  2. Modeling and design of a high-performance hybrid actuator

    Science.gov (United States)

    Aloufi, Badr; Behdinan, Kamran; Zu, Jean

    2016-12-01

    This paper presents the model and design of a novel hybrid piezoelectric actuator which provides high active and passive performance for smart structural systems. The actuator is composed of a pair of curved pre-stressed piezoelectric actuators, the commercially available THUNDER actuators, installed opposite each other using two clamping mechanisms constructed of in-plane fixable hinges, grippers and solid links. A full mathematical model is developed to describe the active and passive dynamics of the actuator and to investigate the effects of its geometrical parameters on the dynamic stiffness, free displacement and blocked force properties. Among the literature that deals with piezoelectric actuators in which THUNDER elements are used as a source of electromechanical power, the proposed study is unique in that it presents a mathematical model that is able to predict the actuator characteristics and capture other phenomena, such as resonances, mode shapes, phase shifts, dips, etc. For model validation, measurements of the free dynamic response per unit voltage and of the passive acceleration transmissibility of a particular actuator design are used to check the accuracy of the results predicted by the model. The results reveal good agreement between the model and experiment. Another experiment is performed to test the linearity of the actuator system by examining the variation of the output dynamic responses with varying forces and voltages at different frequencies. From the results, it can be concluded that the actuator acts approximately as a linear system at frequencies up to 1000 Hz. A parametric study is performed by applying the developed model to analyze the influence of the geometrical parameters of the fixable hinges on the active and passive actuator properties. The model predictions in the frequency range of 0-1000 Hz show that the hinge thickness, radius, and opening angle parameters have great effects on the frequency dynamic

  3. Tank System Integrated Model: A Cryogenic Tank Performance Prediction Program

    Science.gov (United States)

    Bolshinskiy, L. G.; Hedayat, A.; Hastings, L. J.; Sutherlin, S. G.; Schnell, A. R.; Moder, J. P.

    2017-01-01

    Accurate predictions of the thermodynamic state of cryogenic propellants, the pressurization rate, and the performance of pressure control techniques in cryogenic tanks are required for the development of long-duration cryogenic fluid storage technology and for planning future space exploration missions. This Technical Memorandum (TM) presents the analytical tool, Tank System Integrated Model (TankSIM), which can be used for modeling pressure control and predicting the behavior of cryogenic propellant during long-term storage for future space missions. Using TankSIM, the following processes can be modeled: tank self-pressurization, boiloff, ullage venting, mixing, and condensation on the tank wall. This TM also includes comparisons of TankSIM program predictions with test data and examples of multiphase mission calculations.

  4. Performance Evaluation of 3d Modeling Software for Uav Photogrammetry

    Science.gov (United States)

    Yanagi, H.; Chikatsu, H.

    2016-06-01

    UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAV and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in the fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed. Consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modelling software are black box algorithms. As a result, only a few studies have been able to evaluate their accuracy using 3D coordinate check points. With this motive, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from UAV.

  5. Toward a high performance distributed memory climate model

    Energy Technology Data Exchange (ETDEWEB)

    Wehner, M.F.; Ambrosiano, J.J.; Brown, J.C.; Dannevik, W.P.; Eltgroth, P.G.; Mirin, A.A. [Lawrence Livermore National Lab., CA (United States); Farrara, J.D.; Ma, C.C.; Mechoso, C.R.; Spahr, J.A. [Univ. of California, Los Angeles, CA (US). Dept. of Atmospheric Sciences

    1993-02-15

    As part of a long range plan to develop a comprehensive climate systems modeling capability, the authors have taken the Atmospheric General Circulation Model originally developed by Arakawa and collaborators at UCLA and have recast it in a portable, parallel form. The code uses an explicit time-advance procedure on a staggered three-dimensional Eulerian mesh. The authors have implemented a two-dimensional latitude/longitude domain decomposition message passing strategy. Both dynamic memory management and interprocessor communication are handled with macro constructs that are preprocessed prior to compilation. The code can be moved about a variety of platforms, including massively parallel processors, workstation clusters, and vector processors, with a mere change of three parameters. Performance on the various platforms as well as issues associated with coupling different models for major components of the climate system are discussed.

  6. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

    This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study on active cooperation between primary users and secondary users, i.e., (CCRN), followed by the discussions on research issues and challenges in designing spectrum-energy efficient CCRN. As an effort to shed light on the design of spectrum-energy efficient CCRN, they model the CCRN based on orthogonal modulation and orthogonally dual-polarized antenna (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy efficient CCRN are analyzed.

  7. Modelling of green roof hydrological performance for urban drainage applications

    DEFF Research Database (Denmark)

    Locatelli, Luca; Mark, Ole; Mikkelsen, Peter Steen

    2014-01-01

    Green roofs are being widely implemented for stormwater management and their impact on the urban hydrological cycle can be evaluated by incorporating them into urban drainage models. This paper presents a model of green roof long term and single event hydrological performance. The model includes...... from 3 different extensive sedum roofs in Denmark. These data consist of high-resolution measurements of runoff, precipitation and atmospheric variables in the period 2010–2012. The hydrological response of green roofs was quantified based on statistical analysis of the results of a 22-year (1989...... and that the mean annual runoff is not linearly related to the storage. Green roofs have therefore the potential to be important parts of future urban stormwater management plans....

  8. Modeling logistic performance in quantitative microbial risk assessment.

    Science.gov (United States)

    Rijgersberg, Hajo; Tromp, Seth; Jacxsens, Liesbeth; Uyttendaele, Mieke

    2010-01-01

    In quantitative microbial risk assessment (QMRA), food safety in the food chain is modeled and simulated. In general, prevalences, concentrations, and numbers of microorganisms in media are investigated in the different steps from farm to fork. The underlying rates and conditions (such as storage times, temperatures, gas conditions, and their distributions) are determined. However, the logistic chain with its queues (storages, shelves) and mechanisms for ordering products is usually not taken into account. As a consequence, storage times, which are mutually dependent in successive steps of the chain, cannot be described adequately. This may have a great impact on the tails of risk distributions. Because food safety risks are generally very small, it is crucial to model the tails of (underlying) distributions as accurately as possible. Logistic performance can be modeled by describing the underlying planning and scheduling mechanisms in discrete-event modeling. This is common practice in operations research, specifically in supply chain management. In this article, we present the application of discrete-event modeling in the context of a QMRA for Listeria monocytogenes in fresh-cut iceberg lettuce. We show the potential value of discrete-event modeling in QMRA by calculating logistic interventions (modifications in the logistic chain) and determining their significance with respect to food safety.
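
    As a minimal sketch of the kind of discrete-event logistic modeling described above, the following Python snippet simulates a retail shelf managed with a periodic order-up-to policy and records the storage time of every unit sold, from which the tail of the storage-time distribution can be examined. All parameters (demand rate, review period, order-up-to level) and the FIFO selling assumption are illustrative choices, not values or rules from the study.

```python
# Minimal discrete-event sketch: a retail shelf with an order-up-to policy.
# Storage times of sold units are collected so their tail can be inspected,
# which is the quantity that matters for small food-safety risks.
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

REVIEW_PERIOD_H = 24        # shelf restocked once per day (assumed)
ORDER_UP_TO = 40            # target stock level after each delivery (assumed)
MEAN_DEMAND_PER_H = 1.2     # Poisson demand intensity, units/hour (assumed)
SIM_HOURS = 24 * 365

shelf = deque()             # FIFO queue of arrival times (oldest unit sold first)
storage_times = []          # hours each sold unit spent in storage

for hour in range(SIM_HOURS):
    if hour % REVIEW_PERIOD_H == 0:                  # periodic order-up-to replenishment
        shelf.extend([hour] * max(0, ORDER_UP_TO - len(shelf)))
    for _ in range(rng.poisson(MEAN_DEMAND_PER_H)):  # customers buying this hour
        if shelf:
            storage_times.append(hour - shelf.popleft())

storage_times.sort()
p95 = storage_times[int(0.95 * len(storage_times))]
print(f"mean storage time = {np.mean(storage_times):.1f} h, 95th percentile = {p95} h")
```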

  9. Cross-Industry Performance Modeling: Toward Cooperative Analysis

    Energy Technology Data Exchange (ETDEWEB)

    H. S. Blackman; W. J. Reece

    1998-10-01

    One of the current unsolved problems in human factors is the difficulty in acquiring information from lessons learned and data collected among human performance analysts in different domains. There are several common concerns and generally accepted issues of importance for human factors, psychology and industry analysts of performance and safety. Among these are the need to incorporate lessons learned in design, to carefully consider implementation of new designs and automation, and the need to reduce human performance-based contributions to risk. In spite of shared concerns, there are several roadblocks to widespread sharing of data and lessons learned from operating experience and simulation, including the fact that very few publicly accessible databases exist (Gertman & Blackman, 1994, and Kirwan, 1997). There is a need to draw together analysts and analytic methodologies to comprise a centralized source of data with sufficient detail to be meaningful while ensuring source anonymity. We propose that a generic source of performance data and a multi-domain data store may provide the first steps toward cooperative performance modeling and analysis across industries.

  10. Cross-industry Performance Modeling: Toward Cooperative Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Reece, Wendy Jane; Blackman, Harold Stabler

    1998-10-01

    One of the current unsolved problems in human factors is the difficulty in acquiring information from lessons learned and data collected among human performance analysts in different domains. There are several common concerns and generally accepted issues of importance for human factors, psychology and industry analysts of performance and safety. Among these are the need to incorporate lessons learned in design, to carefully consider implementation of new designs and automation, and the need to reduce human performance-based contributions to risk. In spite of shared concerns, there are several roadblocks to widespread sharing of data and lessons learned from operating experience and simulation, including the fact that very few publicly accessible databases exist (Gertman & Blackman, 1994, and Kirwan, 1997). There is a need to draw together analysts and analytic methodologies to comprise a centralized source of data with sufficient detail to be meaningful while ensuring source anonymity. We propose that a generic source of performance data and a multi-domain data store may provide the first steps toward cooperative performance modeling and analysis across industries.

  11. Test of the classic model for predicting endurance running performance.

    Science.gov (United States)

    McLaughlin, James E; Howley, Edward T; Bassett, David R; Thompson, Dixie L; Fitzhugh, Eugene C

    2010-05-01

    To compare the classic physiological variables linked to endurance performance (VO2max, %VO2max at lactate threshold (LT), and running economy (RE)) with peak treadmill velocity (PTV) as predictors of performance in a 16-km time trial. Seventeen healthy, well-trained distance runners (10 males and 7 females) underwent laboratory testing to determine maximal oxygen uptake (VO2max), RE, percentage of maximal oxygen uptake at the LT (%VO2max at LT), running velocity at LT, and PTV. Velocity at VO2max (vVO2max) was calculated from RE and VO2max. Three stepwise regression models were used to determine the best predictors (classic vs treadmill performance protocols) for the 16-km running time trial. Simple Pearson correlations of the variables with 16-km performance showed vVO2max to have the highest correlation (r = -0.972) and %VO2max at the LT the lowest (r = 0.136). The correlation coefficients for LT, VO2max, and PTV were very similar in magnitude (r = -0.903 to r = -0.892). When VO2max, %VO2max at LT, RE, and PTV were entered into SPSS stepwise analysis, VO2max explained 81.3% of the total variance, and RE accounted for an additional 10.7%. vVO2max was shown to be the best predictor of the 16-km performance, accounting for 94.4% of the total variance. The measured velocity at VO2max (PTV) was highly correlated with the estimated velocity at vVO2max (r = 0.8867). Among well-trained subjects heterogeneous in VO2max and running performance, vVO2max is the best predictor of running performance because it integrates both maximal aerobic power and the economy of running. The PTV is linked to the same physiological variables that determine vVO2max.
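
    As a worked illustration of the velocity-at-VO2max concept discussed above, the short Python calculation below combines VO2max with running economy (the oxygen cost per kilometre) to obtain vVO2max and a rough time-trial prediction. The physiological values and the assumed fractional utilization are illustrative, not the subjects' data.

```python
# Worked example: vVO2max integrates maximal aerobic power (VO2max) with
# running economy (RE). All numbers are illustrative assumptions.
vo2max = 65.0            # ml O2 / kg / min (maximal oxygen uptake, assumed)
re = 200.0               # ml O2 / kg / km (oxygen cost of running, assumed)

v_vo2max_km_per_min = vo2max / re            # velocity at which VO2max is reached
v_vo2max_km_per_h = v_vo2max_km_per_min * 60

# Rough 16-km prediction if a fixed fraction of vVO2max can be sustained:
sustainable_fraction = 0.90                  # assumed fractional utilization
predicted_time_min = 16.0 / (v_vo2max_km_per_min * sustainable_fraction)

print(f"vVO2max = {v_vo2max_km_per_h:.1f} km/h, "
      f"predicted 16-km time = {predicted_time_min:.1f} min")
```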

  12. A Proposal of Ecologic Taxes Based on Thermo-Economic Performance of Heat Engine Models

    Directory of Open Access Journals (Sweden)

    Fernando Angulo-Brown

    2009-11-01

    Within the context of Finite-Time Thermodynamics (FTT) a simplified thermal power plant model (the so-called Novikov engine) is analyzed under economical criteria by means of the concepts of profit function and the costs involved in the performance of the power plant. In this study, two different heat transfer laws are used, the so-called Newton's law of cooling and the Dulong-Petit law of cooling. Two FTT optimization criteria for the performance analysis are used: the maximum power regime (MP) and the so-called ecological criterion. This last criterion leads the engine model towards a mode of performance that appreciably diminishes the engine's wasted energy. In this work, it is shown that the energy-unit price produced under maximum power conditions is cheaper than that produced under maximum ecological (ME) conditions. This was accomplished by using a typical definition of the profit function stemming from economics. The MP-regime produces considerably more wasted energy toward the environment, thus the MP energy-unit price is subsidized by nature. Due to this fact, an ecological tax is proposed, which could be a certain function of the price difference between the MP and ME modes of power production.

  13. A proposal of ecologic taxes based on thermo-economic performance of heat engine models

    Energy Technology Data Exchange (ETDEWEB)

    Barranco-Jimenez, M. A. [Departamento de Ciencias Basicas, Escuela Superior de Computo del IPN, Av. Miguel Bernal Esq. Juan de Dios Batiz U.P. Zacatenco CP 07738, D.F. (Mexico); Ramos-Gayosso, I. [Unidad de Administracion de Riesgos, Banco de Mexico, 5 de Mayo, Centro, D.F. (Mexico); Rosales, M. A. [Departamento de Fisica y Matematicas, Universidad de las Americas, Puebla Exhacienda Sta. Catarina Martir, Cholula 72820, Puebla (Mexico); Angulo-Brown, F. [Departamento de Fisica, Escuela Superior de Fisica y Matematicas del IPN, Edif. 9 U.P. Zacatenco CP 07738, D.F. (Mexico)

    2009-07-01

    Within the context of Finite-Time Thermodynamics (FTT) a simplified thermal power plant model (the so-called Novikov engine) is analyzed under economical criteria by means of the concepts of profit function and the costs involved in the performance of the power plant. In this study, two different heat transfer laws are used, the so-called Newton's law of cooling and the Dulong-Petit law of cooling. Two FTT optimization criteria for the performance analysis are used: the maximum power regime (MP) and the so-called ecological criterion. This last criterion leads the engine model towards a mode of performance that appreciably diminishes the engine's wasted energy. In this work, it is shown that the energy-unit price produced under maximum power conditions is cheaper than that produced under maximum ecological (ME) conditions. This was accomplished by using a typical definition of the profit function stemming from economics. The MP-regime produces considerably more wasted energy toward the environment, thus the MP energy-unit price is subsidized by nature. Due to this fact, an ecological tax is proposed, which could be a certain function of the price difference between the MP and ME modes of power production. (author)
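
    The contrast between the maximum power and maximum ecological regimes can be made concrete with the standard endoreversible-engine expressions, as in the Python sketch below. It assumes Newton's law of cooling and illustrative reservoir temperatures; the profit-function and pricing analysis of the paper is not reproduced.

```python
# Sketch comparing the two operating regimes for an endoreversible
# (Novikov/Curzon-Ahlborn-type) engine under Newton's law of cooling:
# maximum power (MP) versus maximum ecological function (ME).
# Reservoir temperatures are illustrative assumptions.
from math import sqrt

T_hot, T_cold = 600.0, 300.0          # reservoir temperatures, K (assumed)
tau = T_cold / T_hot

eta_carnot = 1.0 - tau                        # reversible upper bound
eta_mp = 1.0 - sqrt(tau)                      # efficiency at maximum power
eta_me = 1.0 - sqrt(tau * (tau + 1.0) / 2.0)  # efficiency at maximum ecological function

print(f"Carnot:             {eta_carnot:.3f}")
print(f"Maximum power:      {eta_mp:.3f}")
print(f"Maximum ecological: {eta_me:.3f}")
# The ME regime is more efficient, so it wastes less energy per unit of work;
# the MP regime yields cheaper energy only because that waste is not priced,
# which is the rationale for the ecological tax proposed in the paper.
```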

  14. Performance and Probabilistic Verification of Regional Parameter Estimates for Conceptual Rainfall-runoff Models

    Science.gov (United States)

    Franz, K.; Hogue, T.; Barco, J.

    2007-12-01

    Identification of appropriate parameter sets for simulation of streamflow in ungauged basins has become a significant challenge for both operational and research hydrologists. This is especially difficult in the case of conceptual models, when model parameters typically must be "calibrated" or adjusted to match streamflow conditions in specific systems (i.e. some of the parameters are not directly observable). This paper addresses the performance and uncertainty associated with transferring conceptual rainfall-runoff model parameters between basins within large-scale ecoregions. We use the National Weather Service's (NWS) operational hydrologic model, the SACramento Soil Moisture Accounting (SAC-SMA) model. A Multi-Step Automatic Calibration Scheme (MACS), using the Shuffled Complex Evolution (SCE) algorithm, is used to optimize SAC-SMA parameters for a group of watersheds with extensive hydrologic records from the Model Parameter Estimation Experiment (MOPEX) database. We then explore "hydroclimatic" relationships between basins to facilitate regionalization of parameters for an established ecoregion in the southeastern United States. The impact of regionalized parameters is evaluated via standard model performance statistics as well as through generation of hindcasts and probabilistic verification procedures to evaluate streamflow forecast skill. Preliminary results show climatology ("climate neighbor") to be a better indicator of transferability than physical similarities or proximity ("nearest neighbor"). The mean and median of all the parameters within the ecoregion are the poorest choice for the ungauged basin. The choice of regionalized parameter set affected the skill of the ensemble streamflow hindcasts; however, all parameter sets show little skill in forecasts after five weeks (i.e. climatology is as good an indicator of future streamflows). In addition, the optimum parameter set changed seasonally, with the "nearest neighbor" showing the highest skill in the

  15. Clinical laboratory as an economic model for business performance analysis.

    Science.gov (United States)

    Buljanović, Vikica; Patajac, Hrvoje; Petrovecki, Mladen

    2011-08-15

    To perform SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. The impact of possible threats and weaknesses on the business performance of the Clinical Laboratory at Našice General County Hospital, and the use of strengths and opportunities to improve operating profit, were simulated using models created on the basis of SWOT analysis results. The operating profit as a measure of profitability of the clinical laboratory was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters in the profit and loss account for 2008 were determined using opportunities and potential threats, and economic sensitivity analysis was made by using changes in the key parameters. The profit and loss account and economic sensitivity analysis were tools for quantifying the impact of changes in the revenues and expenses on the business operations of the clinical laboratory. Results of simulation models showed that operational profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Also, operational gain could be increased to €535 804 if laboratory strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operational profit would decrease by €384 465. The operational profit of the clinical laboratory could be significantly reduced if all threats became a reality and the current weaknesses remained the same. The operational profit could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor business operations of any clinical laboratory and improve its financial situation by implementing changes in the next fiscal

  16. Clinical laboratory as an economic model for business performance analysis

    Science.gov (United States)

    Buljanović, Vikica; Patajac, Hrvoje; Petrovečki, Mladen

    2011-01-01

    Aim: To perform SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. Methods: The impact of possible threats and weaknesses on the business performance of the Clinical Laboratory at Našice General County Hospital, and the use of strengths and opportunities to improve operating profit, were simulated using models created on the basis of SWOT analysis results. The operating profit as a measure of profitability of the clinical laboratory was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters in the profit and loss account for 2008 were determined using opportunities and potential threats, and economic sensitivity analysis was made by using changes in the key parameters. The profit and loss account and economic sensitivity analysis were tools for quantifying the impact of changes in the revenues and expenses on the business operations of the clinical laboratory. Results: Results of simulation models showed that operational profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Also, operational gain could be increased to €535 804 if laboratory strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operational profit would decrease by €384 465. Conclusion: The operational profit of the clinical laboratory could be significantly reduced if all threats became a reality and the current weaknesses remained the same. The operational profit could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor business operations of any clinical laboratory and improve its financial situation by
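
    The profit-and-loss simulation described above reduces to recomputing operating profit (total revenue minus total expenses) under alternative scenarios. The Python sketch below illustrates the mechanics; the baseline revenue and expense figures and the scenario percentages are assumptions for illustration, not the laboratory's actual accounts.

```python
# Minimal sketch of a profit-and-loss scenario analysis:
# operating profit = total revenue - total expenses, recomputed under
# scenarios that scale revenue and expense items. All figures are assumed.
baseline = {"revenue": 1_500_000.0, "expenses": 1_030_000.0}   # EUR, assumed

def operating_profit(revenue: float, expenses: float) -> float:
    return revenue - expenses

scenarios = {
    "baseline":                                   (1.00, 1.00),
    "threats (revenue -15%, expenses +10%)":      (0.85, 1.10),
    "opportunities (revenue +10%, expenses -5%)": (1.10, 0.95),
}

for name, (rev_factor, exp_factor) in scenarios.items():
    profit = operating_profit(baseline["revenue"] * rev_factor,
                              baseline["expenses"] * exp_factor)
    print(f"{name:45s} operating profit: {profit:12,.0f} EUR")
```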

  17. A new regional climate model operating at the meso-gamma scale: performance over Europe

    Directory of Open Access Journals (Sweden)

    David Lindstedt

    2015-01-01

    There are well-known difficulties in running numerical weather prediction (NWP) and climate models at resolutions traditionally referred to as the 'grey zone' (~3–8 km), where deep convection is neither completely resolved by the model dynamics nor completely subgrid. In this study, we describe the performance of an operational NWP model, HARMONIE, in a climate setting (HCLIM), run at two different resolutions (6 and 15 km) for a 10-yr period (1998–2007). This model has a convection scheme particularly designed to operate in the 'grey-zone' regime, which increases the realism and accuracy of the time and spatial evolution of convective processes compared to more traditional parametrisations. HCLIM is evaluated against standard observational data sets over Europe as well as high-resolution, regional observations. Not only is the regional climate very well represented but also higher order climate statistics and smaller scale spatial characteristics of precipitation are in good agreement with observations. The added value when making climate simulations at ~5 km resolution compared to more typical regional climate model resolutions is mainly seen for the very rare, high-intensity precipitation events. HCLIM at 6 km resolution reproduces the frequency and intensity of these events better than at 15 km resolution and is in closer agreement with the high-resolution observations.

  18. A Framework for Improving Project Performance of Standard Design Models in Saudi Arabia

    Directory of Open Access Journals (Sweden)

    Shabbab Al-Otaib

    2013-07-01

    Improving project performance in the construction industry poses several challenges for stakeholders. Recently, there have been frequent calls for the importance of adopting standardisation in improving construction design as well as the process, and a focus on learning mapping from other industries. The Saudi Ministry of Interior (SMoI) has adopted a new Standard Design Model (SDM) approach for the development of its construction programme to effectively manage its complex project portfolio and improve project performance. A review of existing literature indicates that, despite the adoption of repetitive SDM projects, which enable learning from past mistakes and improving the performance of future projects, it has been realised that there is a lack of learning instruments to capture, store and disseminate Lessons Learnt (LL). This research proposes a framework for improving the project performance of SDMs in the Saudi construction industry. Eight case studies related to a typical standard design project were performed that included interviews with 24 key stakeholders who are involved in the planning and implementation of SDM projects within the SMoI. The research identified 14 critical success factors (CSFs) that have a direct impact on SDM project performance. These are classified into three main CSF-related clusters: adaptability to the context; contract management; and construction management. A framework, which comprises the identified 14 CSFs, was developed, refined and validated through a workshop with 12 key stakeholders in the SMoI construction programme. Additionally, a framework implementation process map was developed. Web-based tools and KM were identified as core factors in the framework implementation strategy. Although many past CSF-related studies were conducted to develop a range of construction project performance improvement frameworks, the paper provides the first initiative to develop a framework to improve the performance of

  19. An Analytical Model for the Performance Analysis of Concurrent Transmission in IEEE 802.15.4

    Directory of Open Access Journals (Sweden)

    Cengiz Gezer

    2014-03-01

    Interference is a serious cause of performance degradation for IEEE 802.15.4 devices. The effect of concurrent transmissions in IEEE 802.15.4 has been generally investigated by means of simulation or experimental activities. In this paper, a mathematical framework for the derivation of chip, symbol and packet error probability of a typical IEEE 802.15.4 receiver in the presence of interference is proposed. Both non-coherent and coherent demodulation schemes are considered by our model under the assumption of the absence of thermal noise. Simulation results are also added to assess the validity of the mathematical framework when the effect of thermal noise cannot be neglected. Numerical results show that the proposed analysis is in agreement with measurement results in the literature under realistic working conditions.

  20. An Analytical Model for the Performance Analysis of Concurrent Transmission in IEEE 802.15.4

    Science.gov (United States)

    Gezer, Cengiz; Zanella, Alberto; Verdone, Roberto

    2014-01-01

    Interference is a serious cause of performance degradation for IEEE 802.15.4 devices. The effect of concurrent transmissions in IEEE 802.15.4 has been generally investigated by means of simulation or experimental activities. In this paper, a mathematical framework for the derivation of chip, symbol and packet error probability of a typical IEEE 802.15.4 receiver in the presence of interference is proposed. Both non-coherent and coherent demodulation schemes are considered by our model under the assumption of the absence of thermal noise. Simulation results are also added to assess the validity of the mathematical framework when the effect of thermal noise cannot be neglected. Numerical results show that the proposed analysis is in agreement with measurement results in the literature under realistic working conditions. PMID:24658624

  1. Measuring teaching ability with the Rasch model by scaling a series of product and performance tasks.

    Science.gov (United States)

    Wilkerson, Judy R; Lang, William Steve

    2006-01-01

    Rasch measurement can provide a much needed solution to scaling teacher ability. Typically, decisions about teacher ability are based on dichotomously scored certification tests focused on knowledge of content or pedagogy. This paper presents early developmental work of a partial-credit teacher-ability scale of 42 tasks (performances and products) with 348 rated items or criteria. The tasks and criteria are aligned with national and state standards for expected teacher knowledge and skills. These tasks are being used in about two-thirds of Florida school districts and are spreading to colleges of education. Over time there will be many variations in both tasks and criteria, but here we focus on the initial system and the Rasch model as part of the plan for development of the system.
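
    A minimal sketch of the partial-credit Rasch model underlying such a scale is given below: the probability of each score on a rated criterion is a function of the teacher's ability and the item's step difficulties. The ability value and step difficulties used here are illustrative assumptions, not calibrated values from the instrument.

```python
# Partial-credit Rasch model: score probabilities for one polytomous item.
# Ability (theta) and step difficulties (deltas) are assumed for illustration.
from math import exp

def pcm_probabilities(theta: float, step_difficulties: list[float]) -> list[float]:
    """Return P(X = 0..m) for a partial-credit item with the given steps."""
    # Cumulative sums of (theta - delta_k); the score-0 term is defined as 0.
    psi = [0.0]
    for delta in step_difficulties:
        psi.append(psi[-1] + (theta - delta))
    numerators = [exp(p) for p in psi]
    total = sum(numerators)
    return [n / total for n in numerators]

theta = 0.8                      # teacher ability on the logit scale (assumed)
steps = [-1.0, 0.2, 1.5]         # step difficulties for a 0-3 scored criterion (assumed)
probs = pcm_probabilities(theta, steps)
expected_score = sum(x * p for x, p in enumerate(probs))
print([round(p, 3) for p in probs], f"expected score = {expected_score:.2f}")
```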

  2. An analytical model for the performance analysis of concurrent transmission in IEEE 802.15.4.

    Science.gov (United States)

    Gezer, Cengiz; Zanella, Alberto; Verdone, Roberto

    2014-03-20

    Interference is a serious cause of performance degradation for IEEE 802.15.4 devices. The effect of concurrent transmissions in IEEE 802.15.4 has been generally investigated by means of simulation or experimental activities. In this paper, a mathematical framework for the derivation of chip, symbol and packet error probability of a typical IEEE 802.15.4 receiver in the presence of interference is proposed. Both non-coherent and coherent demodulation schemes are considered by our model under the assumption of the absence of thermal noise. Simulation results are also added to assess the validity of the mathematical framework when the effect of thermal noise cannot be neglected. Numerical results show that the proposed analysis is in agreement with measurement results in the literature under realistic working conditions.
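
    To illustrate how a chip-level error probability propagates to symbol and packet error rates in IEEE 802.15.4, the Python sketch below uses a simplified hard-decision approximation with independent chip errors and an assumed per-symbol chip-error tolerance. This is a coarse stand-in for, not a reproduction of, the coherent and non-coherent expressions derived in the paper.

```python
# Simplified chip -> symbol -> packet error chain for IEEE 802.15.4 (2.4 GHz
# DSSS PHY, 32 chips per symbol). Assumes independent chip errors and a
# hard-decision decoder tolerating up to CHIP_ERROR_TOLERANCE wrong chips.
from math import comb

CHIPS_PER_SYMBOL = 32          # DSSS spreading: one 32-chip sequence per symbol
CHIP_ERROR_TOLERANCE = 6       # assumed hard-decision correction capability

def symbol_error_prob(p_chip: float) -> float:
    ok = sum(comb(CHIPS_PER_SYMBOL, k)
             * p_chip**k * (1 - p_chip)**(CHIPS_PER_SYMBOL - k)
             for k in range(CHIP_ERROR_TOLERANCE + 1))
    return 1.0 - ok

def packet_error_prob(p_chip: float, payload_bytes: int) -> float:
    n_symbols = 2 * payload_bytes          # 4 bits per symbol -> 2 symbols/byte
    return 1.0 - (1.0 - symbol_error_prob(p_chip)) ** n_symbols

for p_chip in (0.02, 0.05, 0.10):
    print(f"p_chip={p_chip:.2f}  PER(20-byte payload)={packet_error_prob(p_chip, 20):.3e}")
```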

  3. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that in general BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than with a standard asymmetric data generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity in asymmetric price transmission model comparison and selection.
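
    The model-selection mechanics discussed above amount to computing an information criterion from each candidate model's maximised log-likelihood and parameter count and picking the minimum. The Python sketch below shows this for AIC and BIC with illustrative (assumed) log-likelihoods; it also shows how BIC's stronger complexity penalty can reverse the ranking.

```python
# Information-criterion model selection: AIC = -2*logL + 2k, BIC = -2*logL + k*ln(n).
# The log-likelihoods, parameter counts and sample size below are assumed.
from math import log

def aic(loglik: float, k: int) -> float:
    return -2.0 * loglik + 2.0 * k

def bic(loglik: float, k: int, n: int) -> float:
    return -2.0 * loglik + k * log(n)

n_obs = 200
candidates = {
    # name: (maximised log-likelihood, number of estimated parameters), assumed
    "standard ECM":           (-310.4, 5),
    "complex asymmetric ECM": (-305.1, 9),
}

for crit_name, crit in (("AIC", aic), ("BIC", lambda ll, k: bic(ll, k, n_obs))):
    scores = {m: crit(ll, k) for m, (ll, k) in candidates.items()}
    best = min(scores, key=scores.get)
    print(crit_name, {m: round(s, 1) for m, s in scores.items()}, "->", best)
# With these numbers AIC prefers the complex model while BIC prefers the
# standard one, illustrating the sensitivity to the complexity penalty.
```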

  4. Voxel model in BNCT treatment planning: performance analysis and improvements

    Science.gov (United States)

    González, Sara J.; Carando, Daniel G.; Santa Cruz, Gustavo A.; Zamenhof, Robert G.

    2005-02-01

    In recent years, many efforts have been made to study the performance of treatment planning systems in deriving an accurate dosimetry of the complex radiation fields involved in boron neutron capture therapy (BNCT). The computational model of the patient's anatomy is one of the main factors involved in this subject. This work presents a detailed analysis of the performance of the 1 cm based voxel reconstruction approach. First, a new and improved material assignment algorithm implemented in the NCTPlan treatment planning system for BNCT is described. Based on previous works, the performances of the 1 cm based voxel methods used in the MacNCTPlan and NCTPlan treatment planning systems are compared by standard simulation tests. In addition, the NCTPlan voxel model is benchmarked against in-phantom physical dosimetry of the RA-6 reactor of Argentina. This investigation shows the 1 cm resolution to be accurate enough for all reported tests, even in extreme cases such as a parallelepiped phantom irradiated through one of its sharp edges. This accuracy can be degraded at very shallow depths, where, to improve the estimates, the anatomy images need to be positioned in a suitable way. Rules for this positioning are presented. The skin is considered one of the organs at risk in all BNCT treatments and, in the particular case of cutaneous melanoma of extremities, limits the delivered dose to the patient. Therefore, the performance of the voxel technique is analysed in detail in these shallow regions. A theoretical analysis is carried out to assess the distortion caused by homogenization and material percentage rounding processes. Then, a new strategy for the treatment of surface voxels is proposed and tested using two different irradiation problems. For a parallelepiped phantom perpendicularly irradiated with a 5 keV neutron source, the large thermal neutron fluence deviation present at shallow depths (from 54% at 0 mm depth to 5% at 4 mm depth) is reduced to 2% on average

  5. A Hybrid Fuzzy Model for Lean Product Development Performance Measurement

    Science.gov (United States)

    Osezua Aikhuele, Daniel; Mohd Turan, Faiz

    2016-02-01

    In the effort for manufacturing companies to meet up with emerging consumer demands for mass-customized products, many are turning to the application of lean in their product development process, and this is gradually moving from being a competitive advantage to a necessity. However, due to a lack of clear understanding of lean performance measurements, many of these companies are unable to implement and fully integrate the lean principles into their product development process. The extensive literature shows that only a few studies have focused systematically on lean product development performance (LPDP) evaluation. In order to fill this gap, the study therefore proposes a novel hybrid model based on the Fuzzy Reasoning Approach (FRA) and the extension of Fuzzy-AHP and Fuzzy-TOPSIS methods for the assessment of the LPDP. Unlike the existing methods, the model considers the importance weight of each of the decision makers (experts), since the performance criteria/attributes are required to be rated and these experts have different levels of expertise. The rating is done using a new fuzzy Likert rating scale (membership scale), which is designed such that it can address problems resulting from information loss/distortion due to closed-form scaling and the ordinal nature of the existing Likert scale.

  6. Modelling of LOCA Tests with the BISON Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Richard L [Idaho National Laboratory; Pastore, Giovanni [Idaho National Laboratory; Novascone, Stephen Rhead [Idaho National Laboratory; Spencer, Benjamin Whiting [Idaho National Laboratory; Hales, Jason Dean [Idaho National Laboratory

    2016-05-01

    BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.

  7. Performance Evaluation of the Prototype Model NEXT Ion Thruster

    Science.gov (United States)

    Herman, Daniel A.; Soulas, George C.; Patterson, Michael J.

    2008-01-01

    The performance testing results of the first prototype model NEXT ion engine, PM1, are presented. The NEXT program has developed the next generation ion propulsion system to enhance and enable Discovery, New Frontiers, and Flagship-type NASA missions. The PM1 thruster exhibits operational behavior consistent with its predecessors, the engineering model thrusters, with substantial mass savings, enhanced thermal margins, and design improvements for environmental testing compliance. The dry mass of PM1 is 12.7 kg. Modifications made in the thruster design have resulted in improved performance and operating margins, as anticipated. PM1 beginning-of-life performance satisfies all of the electric propulsion thruster mission-derived technical requirements. It demonstrates a wide range of throttleability by processing input power levels from 0.5 to 6.9 kW. At 6.9 kW, the PM1 thruster demonstrates a specific impulse of 4190 s, 237 mN of thrust, and a thrust efficiency of 0.71. The beam profile is flat, with flatness parameters varying from 0.66 at low power to 0.88 at full power, and the advanced ion optics reduce localized accelerator grid erosion and increase margins for electron backstreaming, impingement-limited voltage, and screen grid ion transparency. The thruster throughput capability is predicted to exceed 750 kg of xenon, an equivalent of 36,500 hr of continuous operation at the full-power operating condition.
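
    The full-power figures quoted above are mutually consistent, which can be checked with the usual relation between thrust, specific impulse, and input power (thrust efficiency equals jet power divided by input power). The short Python check below uses only the numbers given in the abstract.

```python
# Consistency check of the quoted full-power performance:
# eta = 0.5 * T * (Isp * g0) / P_in  (beam kinetic power over input power).
G0 = 9.80665                 # standard gravity, m/s^2

thrust_n = 0.237             # 237 mN at full power (from the abstract)
isp_s = 4190.0               # specific impulse at full power (from the abstract)
p_in_w = 6900.0              # 6.9 kW input power (from the abstract)

exhaust_velocity = isp_s * G0                    # effective exhaust velocity, m/s
jet_power = 0.5 * thrust_n * exhaust_velocity    # kinetic power in the beam, W
efficiency = jet_power / p_in_w

print(f"jet power = {jet_power:.0f} W, thrust efficiency = {efficiency:.2f}")  # ~0.71
```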

  8. Modeling the performance of coated LPG tanks engulfed in fires

    Energy Technology Data Exchange (ETDEWEB)

    Landucci, Gabriele [CONPRICI - Dipartimento di Ingegneria Chimica, Chimica Industriale e Scienza dei Materiali, Universita di Pisa, via Diotisalvi n.2, 56126 Pisa (Italy); Molag, Menso [Nederlandse Organisatie voor toegepast-natuurwetenschappelijk onderzoek TNO, Princetonlaan 6, 3584 CB Utrecht (Netherlands); Cozzani, Valerio, E-mail: valerio.cozzani@unibo.it [CONPRICI - Dipartimento di Ingegneria Chimica, Mineraria e delle Tecnologie Ambientali, Alma Mater Studiorum - Universita di Bologna, Via Terracini 28 - 40131 Bologna (Italy)

    2009-12-15

    The improvement of passive fire protection of storage vessels is a key factor to enhance safety among the LPG distribution chain. A thermal and mechanical model based on finite elements simulations was developed to assess the behaviour of full size tanks used for LPG storage and transportation in fire engulfment scenarios. The model was validated by experimental results. A specific analysis of the performance of four different reference coating materials was then carried out, also defining specific key performance indicators (KPIs) to assess design safety margins in near-miss simulations. The results confirmed the wide influence of coating application on the expected vessel time to failure due to fire engulfment. A quite different performance of the alternative coating materials was evidenced. General correlations were developed among the vessel time to failure and the effective coating thickness in full engulfment scenarios, providing a preliminary assessment of the coating thickness required to prevent tank rupture for a given time lapse. The KPIs defined allowed the assessment of the available safety margins in the reference scenarios analyzed and of the robustness of thermal protection design.

  9. Modeling the performance of coated LPG tanks engulfed in fires.

    Science.gov (United States)

    Landucci, Gabriele; Molag, Menso; Cozzani, Valerio

    2009-12-15

    The improvement of passive fire protection of storage vessels is a key factor to enhance safety among the LPG distribution chain. A thermal and mechanical model based on finite elements simulations was developed to assess the behaviour of full size tanks used for LPG storage and transportation in fire engulfment scenarios. The model was validated by experimental results. A specific analysis of the performance of four different reference coating materials was then carried out, also defining specific key performance indicators (KPIs) to assess design safety margins in near-miss simulations. The results confirmed the wide influence of coating application on the expected vessel time to failure due to fire engulfment. A quite different performance of the alternative coating materials was evidenced. General correlations were developed among the vessel time to failure and the effective coating thickness in full engulfment scenarios, providing a preliminary assessment of the coating thickness required to prevent tank rupture for a given time lapse. The KPIs defined allowed the assessment of the available safety margins in the reference scenarios analyzed and of the robustness of thermal protection design.

  10. Storage Capacity Modeling of Reservoir Systems Employing Performance Measures

    Directory of Open Access Journals (Sweden)

    Issa Saket Oskoui

    2014-12-01

    Developing a prediction relationship for the total (i.e. within-year plus over-year) storage capacity of reservoir systems is beneficial because it can be used as an alternative to the analysis of reservoirs during the design stage and gives planners an opportunity to examine and compare different cases in a fraction of the time required for a complete analysis, where detailed analysis is not necessary. Existing relationships for storage capacity are mostly capable of estimating over-year storage capacity; total storage capacity can be obtained through relationships for adjusting over-year capacity, and there is no independent relationship to estimate total storage capacity. Moreover, these relationships do not involve the vulnerability performance criterion and are not verified for Malaysian rivers. In this study, two different reservoirs in the southern part of Peninsular Malaysia, Melaka and Muar, are analyzed through a Monte Carlo simulation approach involving performance metrics. Subsequently, the storage capacity results of the simulation are compared with those of the well-known existing equations. It is observed that existing models may not predict total capacity appropriately for Malaysian reservoirs. Consequently, applying the simulation results, two separate regression equations are developed to model the total storage capacity of the study reservoirs employing time-based reliability and vulnerability performance measures.
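
    A minimal sketch of the behaviour-analysis step behind such a Monte Carlo study is shown below: a single-reservoir mass balance is simulated for a trial capacity, and time-based reliability and vulnerability are computed from the resulting shortfalls. The synthetic inflow distribution, demand, and capacity are illustrative assumptions, not data for the Melaka or Muar reservoirs.

```python
# Behaviour (mass-balance) simulation of one reservoir with a trial capacity;
# reliability and vulnerability are derived from the simulated shortfalls.
# Inflows, demand and capacity are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_months = 12 * 50
inflows = rng.lognormal(mean=3.0, sigma=0.6, size=n_months)   # volume units/month
demand = 20.0                                                  # constant monthly draft
capacity = 120.0                                               # trial total storage

storage = capacity                # start full
failures, shortfalls = 0, []
for q in inflows:
    available = storage + q
    release = min(demand, available)
    if release < demand:
        failures += 1
        shortfalls.append((demand - release) / demand)
    storage = min(capacity, available - release)   # spill any excess

reliability = 1.0 - failures / n_months
vulnerability = max(shortfalls) if shortfalls else 0.0   # worst relative shortfall
print(f"time-based reliability = {reliability:.3f}, vulnerability = {vulnerability:.3f}")
```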

  11. Mixing Model Performance in Non-Premixed Turbulent Combustion

    Science.gov (United States)

    Pope, Stephen B.; Ren, Zhuyin

    2002-11-01

    In order to shed light on their qualitative and quantitative performance, three different turbulent mixing models are studied in application to non-premixed turbulent combustion. In previous works, PDF model calculations with detailed kinetics have been shown to agree well with experimental data for non-premixed piloted jet flames. The calculations from two different groups using different descriptions of the chemistry and turbulent mixing are capable of producing the correct levels of local extinction and reignition. The success of these calculations raises several questions, since it is not clear that the mixing models used contain an adequate description of the processes involved. To address these questions, three mixing models (IEM, modified Curl and EMST) are applied to a partially-stirred reactor burning hydrogen in air. The parameters varied are the residence time and the mixing time scale. For small relative values of the mixing time scale (approaching the perfectly-stirred limit) the models yield the same extinction behavior. But for larger values, the behavior is distinctly different, with EMST being most resistant to extinction.
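
    Of the three mixing models compared above, IEM is the simplest to state: each notional particle relaxes its composition toward the ensemble mean on the mixing time scale. The Python sketch below shows this relaxation for an inert scalar (chemistry omitted); the initial bimodal composition, mixing time, and time step are illustrative assumptions.

```python
# IEM (interaction by exchange with the mean) mixing model for an inert scalar:
# d(phi_i)/dt = -0.5 * (phi_i - <phi>) / tau_mix for every notional particle.
# Initial composition, tau_mix and dt are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_particles = 1000
phi = np.where(rng.random(n_particles) < 0.5, 0.0, 1.0)   # bimodal mixture fraction

tau_mix = 1.0e-3        # mixing time scale, s (assumed)
dt = 1.0e-4             # time step, s (assumed)
n_steps = 50

for _ in range(n_steps):
    mean_phi = phi.mean()
    phi += -0.5 * (phi - mean_phi) / tau_mix * dt   # relax toward the ensemble mean

# The scalar mean is conserved while the variance decays exponentially.
print(f"mean = {phi.mean():.3f}, variance = {phi.var():.4f}")
```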

  12. Modeling impact of environmental factors on photovoltaic array performance

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jie; Sun, Yize; Xu, Yang [College of Mechanical Engineering, Donghua University NO.2999, North Renmin Road, Shanghai (China)

    2013-07-01

    This paper presents a methodology to model and quantify the impact of three environmental factors, the ambient temperature, the incident irradiance and the wind speed, upon the performance of a photovoltaic array operating under outdoor conditions. First, a simple correlation relating operating temperature to the three environmental variables is validated for the range of wind speeds studied, 2-8, and for irradiance values between 200 and 1000. The root mean square error (RMSE) between modeled operating temperature and measured values is 1.19% and the mean bias error (MBE) is -0.09%. The environmental factors studied influence the I-V curves, P-V curves, and maximum-power outputs of the photovoltaic array. A cell-to-module-to-array mathematical model for photovoltaic panels is established in this paper, and a method defined as segmented iteration is adopted to solve the I-V curve expression and obtain the model I-V curves. The model I-V curves and P-V curves coincide well with measured data points. The RMSE between numerically calculated maximum-power outputs and experimentally measured ones is 0.2307%, while the MBE is 0.0183%. In addition, a multivariable non-linear regression equation is proposed to eliminate the difference between numerically calculated values and measured ones of maximum power outputs over the range of high ambient temperature and irradiance at noon and in the early afternoon. In conclusion, the proposed method is reasonably simple and accurate.
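
    One widely used correlation of the kind validated above is the Sandia module-temperature model, which relates operating temperature to ambient temperature, irradiance, and wind speed. The Python sketch below uses typical published coefficients for an open-rack glass/cell/polymer-sheet module; it is offered as an example of the functional form, not necessarily the exact correlation used in the paper.

```python
# Sandia module-temperature model: T_module = G * exp(a + b * WS) + T_ambient.
# Coefficients a, b are typical open-rack glass/cell/polymer-sheet values;
# they stand in for whatever correlation the paper actually fitted.
from math import exp

def module_temperature(t_ambient_c: float, irradiance_w_m2: float,
                       wind_speed_m_s: float,
                       a: float = -3.56, b: float = -0.075) -> float:
    """Back-of-module operating temperature in deg C."""
    return irradiance_w_m2 * exp(a + b * wind_speed_m_s) + t_ambient_c

for wind in (2.0, 5.0, 8.0):
    t_mod = module_temperature(t_ambient_c=25.0, irradiance_w_m2=800.0,
                               wind_speed_m_s=wind)
    print(f"wind {wind:.0f} m/s -> module temperature {t_mod:.1f} C")
```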

  13. Evaluation of multidimensional models of WAIS-IV subtest performance.

    Science.gov (United States)

    McFarland, Dennis J

    2017-04-21

    The present study examined the extent to which the covariance structure of the WAIS-IV is best accounted for by models that assume that test performance is the result of group-level factors and multiple independent general factors. Structural models with one to four general factors were evaluated with either four or five group-level factors. Simulations based on four general factors were run to clarify the adequacy of the estimates of the allocation of covariance by the models. Four independent general factors provided better fit than a single general factor for either model with four or five group-level factors. While one of the general factors had much larger loadings than all other factors, simulation results suggested that this might be an artifact of the statistical procedure rather than a reflection of the nature of individual differences in cognitive abilities. These results argue against the contention that clinical interpretation of cognitive test batteries should primarily be at the level of general intelligence. It is a fallacy to assume that factor analysis can reveal the structure of human abilities. Test validity should not be based solely on the results of modeling the covariance of test batteries.

  14. Urban Modelling Performance of Next Generation SAR Missions

    Science.gov (United States)

    Sefercik, U. G.; Yastikli, N.; Atalay, C.

    2017-09-01

    In synthetic aperture radar (SAR) technology, urban mapping and modelling have become possible with the revolutionary missions TerraSAR-X (TSX) and Cosmo-SkyMed (CSK) since 2007. These satellites offer 1 m spatial resolution in high-resolution spotlight imaging mode and are capable of high-quality digital surface model (DSM) acquisition for urban areas utilizing interferometric SAR (InSAR) technology. With the advantage of acquisition independent of seasonal weather conditions, TSX and CSK DSMs are much in demand by scientific users. The performance of SAR DSMs is influenced by distortions such as layover, foreshortening, shadow and double-bounce, which depend upon the imaging geometry. In this study, the potential of DSMs derived from convenient 1 m high-resolution spotlight (HS) InSAR pairs of CSK and TSX is validated by model-to-model absolute and relative accuracy estimations in an urban area. For the verification, an airborne laser scanning (ALS) DSM of the study area was used as the reference model. Results demonstrated that TSX and CSK urban DSMs are compatible in open, built-up and forest landforms with an absolute accuracy of 8-10 m. The relative accuracies based on the coherence of neighbouring pixels are superior to the absolute accuracies for both CSK and TSX.
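
    Model-to-model accuracy estimation of the kind described above reduces to differencing the SAR DSM against the reference ALS DSM on a common grid and summarising the residuals. The Python sketch below computes bias, RMSE, and NMAD on synthetic placeholder grids; the arrays stand in for co-registered elevation models and are not the study data.

```python
# Model-to-model DSM accuracy: difference a SAR DSM against a reference ALS DSM
# on a common grid and summarise the residuals. The grids below are synthetic
# placeholders for co-registered elevation models.
import numpy as np

rng = np.random.default_rng(7)
reference_dsm = rng.uniform(50.0, 120.0, size=(200, 200))          # ALS heights, m (assumed)
sar_dsm = reference_dsm + rng.normal(0.5, 8.0, size=(200, 200))    # SAR heights, m (assumed)

diff = sar_dsm - reference_dsm
bias = diff.mean()
rmse = np.sqrt(np.mean(diff**2))                             # absolute accuracy
nmad = 1.4826 * np.median(np.abs(diff - np.median(diff)))    # robust spread

print(f"bias = {bias:.2f} m, RMSE = {rmse:.2f} m, NMAD = {nmad:.2f} m")
```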

  15. Photovoltaic Pixels for Neural Stimulation: Circuit Models and Performance.

    Science.gov (United States)

    Boinagrov, David; Lei, Xin; Goetz, Georges; Kamins, Theodore I; Mathieson, Keith; Galambos, Ludwig; Harris, James S; Palanker, Daniel

    2016-02-01

    Photovoltaic conversion of pulsed light into pulsed electric current enables optically-activated neural stimulation with miniature wireless implants. In photovoltaic retinal prostheses, patterns of near-infrared light projected from video goggles onto subretinal arrays of photovoltaic pixels are converted into patterns of current to stimulate the inner retinal neurons. We describe a model of these devices and evaluate the performance of photovoltaic circuits, including the electrode-electrolyte interface. Characteristics of the electrodes measured in saline with various voltages, pulse durations, and polarities were modeled as voltage-dependent capacitances and Faradaic resistances. The resulting mathematical model of the circuit yielded dynamics of the electric current generated by the photovoltaic pixels illuminated by pulsed light. Voltages measured in saline with a pipette electrode above the pixel closely matched results of the model. Using the circuit model, our pixel design was optimized for maximum charge injection under various lighting conditions and for different stimulation thresholds. To speed discharge of the electrodes between the pulses of light, a shunt resistor was introduced and optimized for high frequency stimulation.

  16. Current Methods for Modeling and Simulating Icing Effects on Aircraft Performance, Stability and Control

    Science.gov (United States)

    Ratvasky, Thomas P.; Barnhart, Billy P.; Lee, Sam

    2008-01-01

    Icing alters the shape and surface characteristics of aircraft components, which results in altered aerodynamic forces and moments caused by air flow over those iced components. The typical effects of icing are increased drag, reduced stall angle of attack, and reduced maximum lift. In addition to the performance changes, icing can also affect control surface effectiveness, hinge moments, and damping. These effects result in altered aircraft stability and control and flying qualities. Over the past 80 years, methods have been developed to understand how icing affects performance, stability and control. Emphasis has been on wind tunnel testing of two-dimensional subscale airfoils with various ice shapes to understand their effect on the flow field and ultimately the aerodynamics. This research has led to wind tunnel testing of subscale complete aircraft models to identify the integrated effects of icing on the aircraft system in terms of performance, stability, and control. Data sets of this nature enable pilot in the loop simulations to be performed for pilot training, or engineering evaluation of system failure impacts or control system design.

  17. Current Methods for Modeling and Simulating Icing Effects on Aircraft Performance, Stability, Control

    Science.gov (United States)

    Ratvasky, Thomas P.; Barnhart, Billy P.; Lee, Sam

    2010-01-01

    Icing alters the shape and surface characteristics of aircraft components, which results in altered aerodynamic forces and moments caused by air flow over those iced components. The typical effects of icing are increased drag, reduced stall angle of attack, and reduced maximum lift. In addition to the performance changes, icing can also affect control surface effectiveness, hinge moments, and damping. These effects result in altered aircraft stability and control and flying qualities. Over the past 80 years, methods have been developed to understand how icing affects performance, stability, and control. Emphasis has been on wind-tunnel testing of two-dimensional subscale airfoils with various ice shapes to understand their effect on the flowfield and ultimately the aerodynamics. This research has led to wind-tunnel testing of subscale complete aircraft models to identify the integrated effects of icing on the aircraft system in terms of performance, stability, and control. Data sets of this nature enable pilot-in-the-loop simulations to be performed for pilot training or engineering evaluation of system failure impacts or control system design.

  18. A logistical model for performance evaluations of hybrid generation systems

    Energy Technology Data Exchange (ETDEWEB)

    Bonanno, F.; Consoli, A.; Raciti, A. [Univ. of Catania (Italy). Dept. of Electrical, Electronic, and Systems Engineering; Lombardo, S. [Schneider Electric SpA, Torino (Italy)

    1998-11-01

    In order to evaluate the fuel and energy savings, and to focus on the problems related to the exploitation of combined renewable and conventional energies, a logistical model for hybrid generation systems (HGSs) has been prepared. A software package written in ACSL, allowing easy handling of the models and data of the HGS components, is presented. A special feature of the proposed model is that an auxiliary fictitious source is introduced in order to obtain the electric power balance at the busbars during the simulation, also in the case of ill-sized components. The observed imbalance powers are then used to update the system design. As a case study, the simulation program is applied to evaluate the energetic performance of a power plant serving a small isolated community, an island in the Mediterranean Sea, in order to establish the potential improvement achievable via an optimal integration of renewable energy sources in conventional plants. Evaluations and comparisons among different-sized wind, photovoltaic, and diesel groups, as well as of different management strategies, have been performed using the simulation package and are reported and discussed in order to present the track followed to select the final design.

  19. ICT evaluation models and performance of medium and small enterprises

    Directory of Open Access Journals (Sweden)

    Bayaga Anass

    2014-01-01

    Building on prior research related to (1) the impact of information communication technology (ICT) and (2) operational risk management (ORM) in the context of medium and small enterprises (MSEs), the focus of this study was to investigate the relationship between (1) ICT operational risk management (ORM) and (2) performance of MSEs. To achieve this focus, the research investigated evaluation models for understanding the value of ICT ORM in MSEs. Multiple regression, Repeated-Measures Analysis of Variance (RM-ANOVA) and Repeated-Measures Multivariate Analysis of Variance (RM-MANOVA) were performed. The findings revealed that only one variable made a significant percentage contribution to the level of ICT operation in MSEs, the Payback method (β = 0.410, p < .000). It may thus be inferred that the Payback method is the prominent variable explaining the variation in the level of evaluation models affecting ICT adoption within MSEs. Conclusively, in answering the two questions, (1) degree of variability explained and (2) predictors, the results revealed that the variable contributed approximately 88.4% of the variations in evaluation models affecting ICT adoption within MSEs. The analysis of variance also revealed that the regression coefficients were real and did not occur by chance.

  20. Performance modeling of a wearable brain PET (BET) camera

    Science.gov (United States)

    Schmidtlein, C. R.; Turner, J. N.; Thompson, M. O.; Mandal, K. C.; Häggström, I.; Zhang, J.; Humm, J. L.; Feiglin, D. H.; Krol, A.

    2016-03-01

    Purpose: To explore, by means of analytical and Monte Carlo modeling, the performance of a novel lightweight and low-cost wearable helmet-shaped Brain PET (BET) camera based on thin-film digital Geiger Avalanche Photo Diode (dGAPD) with LSO and LaBr3 scintillators for imaging in vivo human brain processes for freely moving and acting subjects responding to various stimuli in any environment. Methods: We performed analytical and Monte Carlo modeling of the PET performance of a spherical cap BET device and a cylindrical brain PET (CYL) device, both with 25 cm diameter and the same total mass of LSO scintillator. Total mass of LSO in both the BET and CYL systems is about 32 kg for a 25 mm thick scintillator, and 13 kg for a 10 mm thick scintillator (assuming an LSO density of 7.3 g/ml). We also investigated a similar system using an LaBr3 scintillator corresponding to 22 kg and 9 kg for the 25 mm and 10 mm thick systems (assuming an LaBr3 density of 5.08 g/ml). In addition, we considered a clinical whole body (WB) LSO PET/CT scanner with 82 cm ring diameter and 15.8 cm axial length to represent a reference system. BET consisted of distributed Autonomous Detector Arrays (ADAs) integrated into Intelligent Autonomous Detector Blocks (IADBs). The ADA comprised an array of small LYSO scintillator volumes (voxels with base a×a: 1.0 50% better noise equivalent count (NEC) performance relative to the CYL geometry, and >1100% better performance than a WB geometry for 25 mm thick LSO and LaBr3. For 10 mm thick LaBr3 equivalent mass systems, LSO (7 mm thick) performed ~40% higher NEC than LaBr3. Analytic and Monte Carlo simulations also showed that 1×1×3 mm scintillator crystals can achieve ~1.2 mm FWHM spatial resolution. Conclusions: This study shows that a spherical cap brain PET system can provide improved NEC while preserving spatial resolution when compared to an equivalent dedicated cylindrical PET brain camera and shows greatly improved PET performance relative to a conventional
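
    The geometry comparison above is expressed through the noise-equivalent count rate, NEC = T^2 / (T + S + kR), where T, S, and R are the true, scattered, and random coincidence rates. The Python sketch below evaluates this figure of merit for two hypothetical geometries; the count rates are illustrative assumptions, not results from the study.

```python
# Noise-equivalent count rate: NEC = T^2 / (T + S + k*R), with T, S, R the
# true, scattered and random coincidence rates. Count rates are assumed.
def noise_equivalent_count_rate(trues: float, scatters: float, randoms: float,
                                k: float = 2.0) -> float:
    """NEC rate; k = 2 corresponds to delayed-window randoms subtraction."""
    return trues**2 / (trues + scatters + k * randoms)

geometries = {
    # name: (trues, scatters, randoms) in kcps, illustrative assumptions
    "spherical cap (BET)": (120.0, 40.0, 30.0),
    "cylindrical (CYL)":   (100.0, 45.0, 35.0),
}

for name, (t, s, r) in geometries.items():
    print(f"{name:20s} NEC = {noise_equivalent_count_rate(t, s, r):.1f} kcps")
```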