WorldWideScience

Sample records for models typically perform

  1. Exergoeconomic performance optimization for a steady-flow endoreversible refrigeration model including six typical cycles

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Lingen; Kan, Xuxian; Sun, Fengrui; Wu, Feng [College of Naval Architecture and Power, Naval University of Engineering, Wuhan 430033 (China)

    2013-07-01

    The operation of a universal steady-flow endoreversible refrigeration cycle model consisting of a constant thermal-capacity heating branch, two constant thermal-capacity cooling branches and two adiabatic branches is viewed as a production process with exergy as its output. The finite-time exergoeconomic performance optimization of the refrigeration cycle is investigated by taking the profit rate as the optimization criterion. The relations between the profit rate and the temperature ratio of the working fluid, between the COP (coefficient of performance) and the temperature ratio of the working fluid, as well as the optimal relation between the profit rate and the COP of the cycle are derived. The focus of this paper is the compromise optimization between economics (profit rate) and the utilization factor (COP) for endoreversible refrigeration cycles, obtained by finding the optimal COP at maximum profit, which is termed the finite-time exergoeconomic performance bound. Moreover, performance analysis and optimization of the model are carried out in order to investigate the effect of the cycle process on cycle performance using a numerical example. The results obtained herein include the performance characteristics of endoreversible Carnot, Diesel, Otto, Atkinson, Dual and Brayton refrigeration cycles.
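
    As a rough illustration of the optimization described above, the sketch below scans the working-fluid temperature ratio of a single endoreversible Carnot refrigerator (not the paper's full six-cycle model) to locate the COP at maximum profit rate. The temperatures, conductances, and prices are assumed values chosen only to show the shape of the trade-off.

```python
import numpy as np

# Illustrative endoreversible Carnot refrigerator with linear heat-transfer
# laws, scanned over the working-fluid temperature ratio x = T_h / T_c.
T_L, T_H = 260.0, 300.0          # cold-space and ambient temperatures (K), assumed
UA_L = UA_H = 1.0                # heat-transfer conductances (kW/K), assumed
p_exergy, p_work = 3.0, 1.0      # unit prices of exergy output and work input, assumed

def profit_and_cop(x):
    # Internal reversibility (Q_H/T_h = Q_L/T_c) fixes T_c for a given x
    T_c = (UA_L * T_L + UA_H * T_H / x) / (UA_L + UA_H)
    Q_L = UA_L * (T_L - T_c)                 # cooling rate
    work = Q_L * (x - 1.0)                   # power input; COP = 1/(x-1)
    exergy_out = Q_L * (T_H / T_L - 1.0)     # exergy rate of the cooling load
    return p_exergy * exergy_out - p_work * work, 1.0 / (x - 1.0)

xs = np.linspace(T_H / T_L + 1e-3, 2.0, 2000)
profits, cops = zip(*(profit_and_cop(x) for x in xs))
i = int(np.argmax(profits))
# cops[i] plays the role of the finite-time exergoeconomic bound: it lies
# below the reversible COP, T_L / (T_H - T_L) = 6.5 for these temperatures.
```

The profit rate vanishes both as the cycle approaches reversibility (no cooling rate) and at large temperature ratios (work cost dominates), so the optimal COP at maximum profit sits strictly between those extremes.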

  2. A mathematical model for the performance assessment of engineering barriers of a typical near surface radioactive waste disposal facility

    Energy Technology Data Exchange (ETDEWEB)

    Antonio, Raphaela N.; Rotunno Filho, Otto C. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Lab. de Hidrologia e Estudos do Meio Ambiente]. E-mail: otto@hidro.ufrj.br; Ruperti Junior, Nerbe J.; Lavalle Filho, Paulo F. Heilbron [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)]. E-mail: nruperti@cnen.gov.br

    2005-07-01

    This work proposes a mathematical model for the performance assessment of a typical radioactive waste disposal facility based on the consideration of a multiple-barrier concept. The Generalized Integral Transform Technique is employed to solve the Advection-Dispersion mass transfer equation under the assumption of saturated one-dimensional flow, to obtain solute concentrations at given times and locations within the medium. A test case is chosen in order to illustrate the performance assessment of several configurations of a multi-barrier system adopted for the containment of sand contaminated with Ra-226 within a trench. (author)

  3. Modelling object typicality in description logics

    CSIR Research Space (South Africa)

    Britz, K

    2009-12-01

    Full Text Available The authors present a semantic model of typicality of concept members in description logics (DLs) that accords well with a binary, globalist cognitive model of class membership and typicality. The authors define a general preferential semantic...

  4. Modelling object typicality in description logics - [Workshop on Description Logics

    CSIR Research Space (South Africa)

    Britz, K

    2009-07-01

    Full Text Available The authors present a semantic model of typicality of concept members in description logics that accords well with a binary, globalist cognitive model of class membership and typicality. The authors define a general preferential semantic framework...

  5. Typical NRC inspection procedures for model plant

    International Nuclear Information System (INIS)

    Blaylock, J.

    1984-01-01

    A summary of NRC inspection procedures for a model LEU fuel fabrication plant is presented. Procedures and methods for combining inventory data, seals, measurement techniques, and statistical analysis are emphasized

  6. Typical IAEA inspection procedures for model plant

    International Nuclear Information System (INIS)

    Theis, W.

    1984-01-01

    This session briefly refers to the legal basis for IAEA inspections and to their objectives. It describes in detail the planning and performance of IAEA inspections, including the examination of records, the comparison of facility records with State reports, flow and inventory verifications, the design of statistical sampling plans, and the Agency's independent verification measurements. In addition, the session addresses the principles of Material Balance and MUF evaluation, as well as the content and format of summary statements and related problems.

  7. INVESTIGATION OF SEISMIC PERFORMANCE AND DESIGN OF TYPICAL CURVED AND SKEWED BRIDGES IN COLORADO

    Science.gov (United States)

    2018-01-15

    This report summarizes the analytical studies on the seismic performance of typical Colorado concrete bridges, particularly those with curved and skewed configurations. A set of bridge models with different geometric configurations derived from a pro...

  8. A Modal Model to Simulate Typical Structural Dynamic Nonlinearity

    Energy Technology Data Exchange (ETDEWEB)

    Pacini, Benjamin Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mayes, Randall L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Roettgen, Daniel R [Univ. of Wisconsin, Madison, WI (United States)

    2015-10-01

    Some initial investigations have been published which simulate nonlinear response with almost traditional modal models: instead of connecting the modal mass to ground through the traditional spring and damper, a nonlinear Iwan element was added. This assumes that the mode shapes do not change with amplitude and that there are no interactions between modal degrees of freedom. This work expands on these previous studies. An impact experiment is performed on a structure which exhibits typical structural dynamic nonlinear response, i.e., weak frequency dependence and strong damping dependence on the amplitude of vibration. Low-level modal test results in combination with high-level impact data are processed using various combinations of modal filtering, the Hilbert transform and band-pass filtering to develop response data that are then fit with various nonlinear elements to create a nonlinear pseudo-modal model. Simulations of forced response are compared with high-level experimental data for various nonlinear element assumptions.

  9. Analysis and Comparison of Typical Models within Distribution Network Design

    DEFF Research Database (Denmark)

    Jørgensen, Hans Jacob; Larsen, Allan; Madsen, Oli B.G.

    This paper investigates the characteristics of typical optimisation models within Distribution Network Design. Fourteen models known from the literature are thoroughly analysed. Through this analysis, a schematic approach to the categorisation of distribution network design models is developed. The characteristics covered in the categorisation include fixed vs. general networks, specialised vs. general nodes, linear vs. nonlinear costs, single vs. multi commodity, uncapacitated vs. capacitated activities, single vs. multi modal, and static vs. dynamic. The models examined address both strategic and tactical planning and can serve for educational purposes. Furthermore, the paper can be seen as a practical introduction to network design modelling as well as a manual or recipe when constructing such a model.

  10. Comparison of Energy Performance of Different HVAC Systems for a Typical Office Room and a Typical Classroom

    DEFF Research Database (Denmark)

    Yu, Tao; Heiselberg, Per; Pomianowski, Michal Zbigniew

    This report is part of the work performed under the project “Natural cooling and Ventilation through Diffuse Ceiling Supply and Thermally Activated Building Constructions”. In this project, a new system solution combining natural ventilation with diffuse ceiling inlet and thermally activated building constructions is proposed to reduce the energy consumption of buildings with cooling demand in cold seasons. In this way, the building system can operate at a very low energy use all year round. The main purpose of this task is to investigate the energy performance of different HVAC systems used in the office room and the classroom, and to find the potential of energy saving for the proposed new system solution. In this report, a typical room is selected according to the previous study, but the occupancy differs between the office and the classroom. Energy performance of these two types of room under different internal...

  11. Ex-plant consequence assessment for NUREG-1150: models, typical results, uncertainties

    International Nuclear Information System (INIS)

    Sprung, J.L.

    1988-01-01

    The assessment of ex-plant consequences for NUREG-1150 source terms was performed using the MELCOR Accident Consequence Code System (MACCS). This paper briefly discusses the following elements of MACCS consequence calculations: input data, phenomena modeled, computational framework, typical results, controlling phenomena, and uncertainties. Wherever possible, NUREG-1150 results will be used to illustrate the discussion. 28 references

  12. Working hard and working smart: Motivation and ability during typical and maximum performance

    NARCIS (Netherlands)

    Klehe, U.-C.; Anderson, N.

    2007-01-01

    The distinction between what people can do (maximum performance) and what they will do (typical performance) has received considerable theoretical but scant empirical attention in industrial-organizational psychology. This study of 138 participants performing an Internet-search task offers an

  13. Generalized eigenstate typicality in translation-invariant quasifree fermionic models

    Science.gov (United States)

    Riddell, Jonathon; Müller, Markus P.

    2018-01-01

    We demonstrate a generalized notion of eigenstate thermalization for translation-invariant quasifree fermionic models: the vast majority of eigenstates satisfying a finite number of suitable constraints (e.g., fixed energy and particle number) have the property that their reduced density matrix on small subsystems approximates the corresponding generalized Gibbs ensemble. To this end, we generalize analytic results by H. Lai and K. Yang [Phys. Rev. B 91, 081110(R) (2015), 10.1103/PhysRevB.91.081110] and illustrate the claim numerically by example of the Jordan-Wigner transform of the XX spin chain.

  14. Threshold Research on Highway Length under Typical Landscape Patterns Based on Drivers’ Physiological Performance

    Directory of Open Access Journals (Sweden)

    Xia Zhao

    2015-01-01

    Full Text Available Appropriately landscaped highway scenes may not only help improve road safety and comfort but also help protect the ecological environment. Yet there is very little research data on highway length thresholds that accounts for distinctive landscape patterns. Against this backdrop, the paper aims to quantitatively analyze highway landscape’s effect on driving behavior based on drivers’ physiological performance and to quantify highway length thresholds under three typical landscape patterns, namely, “open,” “semiopen,” and “vertical” ones. The statistical analysis was based on data collected in a driving simulator and by electrocardiograph. Specifically, vehicle-related data, ECG data, and supplemental subjective stress perceptions were collected. The study extracted two characteristic indices, lane deviation and LF/HF, and extrapolated the drivers’ U-shaped physiological response to landscape patterns. Models of highway length were built based on LF/HF’s variation trend with highway length. The results revealed that the theoretical highway length threshold tended to increase as the landscape pattern was switched from open to semiopen to vertical. The reliability and accuracy of the results were validated by questionnaires and field operational tests. Findings from this research will assist practitioners in taking active environmental countermeasures pertaining to different roadside landscape patterns.

  15. Relationship between OSCE scores and other typical medical school performance indicators: a 5-year cohort study.

    Science.gov (United States)

    Dong, Ting; Saguil, Aaron; Artino, Anthony R; Gilliland, William R; Waechter, Donna M; Lopreaito, Joseph; Flanagan, Amy; Durning, Steven J

    2012-09-01

    Objective Structured Clinical Examinations (OSCEs) are used at the majority of U.S. medical schools. Given the high resource demands of constructing and administering OSCEs, understanding how OSCEs relate to typical performance measures in medical school could help educators more effectively design curricula and evaluation to optimize student instruction and assessment. This study investigated the correlation between second-year and third-year OSCE scores, as well as the associations between OSCE scores and several other typical measures of students' medical school performance. We tracked the performance of a 5-year cohort (classes of 2007-2011). We studied the univariate correlations among OSCE scores, U.S. Medical Licensing Examination (USMLE) scores, and medical school grade point average. We also examined whether OSCE scores explained additional variance in the USMLE Step 2 Clinical Knowledge score beyond that explained by the Step 1 score. The second- and third-year OSCE scores were weakly correlated. Neither the second- nor the third-year OSCE score was strongly correlated with USMLE scores or medical school grade point average. Our findings suggest that OSCEs capture a viewpoint that is different from typical assessment measures that largely reflect multiple-choice questions; these results also support tenets of situated cognition theory.

  16. A Modal Model to Simulate Typical Structural Dynamic Nonlinearity [PowerPoint

    Energy Technology Data Exchange (ETDEWEB)

    Mayes, Randall L.; Pacini, Benjamin Robert; Roettgen, Dan

    2016-01-01

    Some initial investigations have been published which simulate nonlinear response with almost traditional modal models: instead of connecting the modal mass to ground through the traditional spring and damper, a nonlinear Iwan element was added. This assumes that the mode shapes do not change with amplitude and that there are no interactions between modal degrees of freedom. This work expands on these previous studies. An impact experiment is performed on a structure which exhibits typical structural dynamic nonlinear response, i.e., weak frequency dependence and strong damping dependence on the amplitude of vibration. Low-level modal test results in combination with high-level impact data are processed using various combinations of modal filtering, the Hilbert transform and band-pass filtering to develop response data that are then fit with various nonlinear elements to create a nonlinear pseudo-modal model. Simulations of forced response are compared with high-level experimental data for various nonlinear element assumptions.

  17. Models for the estimation of diffuse solar radiation for typical cities in Turkey

    International Nuclear Information System (INIS)

    Bakirci, Kadir

    2015-01-01

    In solar energy applications, the diffuse solar radiation component is required. Solar radiation data, particularly the diffuse component, are not readily available because of the high cost of measurements as well as difficulties in their maintenance and calibration. In this study, new empirical models for predicting the monthly mean diffuse solar radiation on a horizontal surface for typical cities in Turkey are established. To this end, fifteen empirical models from studies in the literature are used. Also, eighteen diffuse solar radiation models are developed using long-term sunshine duration and global solar radiation data. The accuracy of the developed models is evaluated in terms of different statistical indicators. It is found that the best performance is achieved by the third-order polynomial model based on sunshine duration and clearness index. - Highlights: • Diffuse radiation is given as a function of clearness index and sunshine fraction. • The diffuse radiation is an important parameter in solar energy applications. • Diffuse radiation measurements cover only limited periods and are very rare. • The new models can be used to estimate monthly average diffuse solar radiation. • The accuracy of the models is evaluated on the basis of statistical indicators
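
    The best-performing model reported above is a third-order polynomial. A minimal sketch of fitting such a polynomial (here in the clearness index alone, with purely illustrative data, not the paper's Turkish measurements) and scoring it with common statistical indicators might look like this:

```python
import numpy as np

# Hypothetical monthly-mean data: clearness index Kt = H/H0 and measured
# diffuse fraction Kd = Hd/H. Values are illustrative only.
kt = np.array([0.35, 0.42, 0.48, 0.55, 0.60, 0.63, 0.58, 0.50, 0.44, 0.38])
kd = np.array([0.62, 0.55, 0.48, 0.40, 0.34, 0.31, 0.36, 0.45, 0.52, 0.59])

# Third-order polynomial fit: Kd = a0 + a1*Kt + a2*Kt^2 + a3*Kt^3
coeffs = np.polyfit(kt, kd, 3)
kd_pred = np.polyval(coeffs, kt)

# Statistical indicators commonly used to rank such models
mbe = np.mean(kd_pred - kd)                      # mean bias error
rmse = np.sqrt(np.mean((kd_pred - kd) ** 2))     # root mean square error
r2 = 1 - np.sum((kd - kd_pred) ** 2) / np.sum((kd - kd.mean()) ** 2)
```

    In practice the candidate models (linear through third-order, in sunshine fraction and/or clearness index) would each be fit this way and ranked by MBE, RMSE, and R².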

  18. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    Science.gov (United States)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are used and documented in hundreds of peer-reviewed publications each year, and likely more are applied in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined the impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions, versus improved data and enhanced assumptions, on model outcomes and thus, ultimately, study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study region. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  19. Energy-Performance-Based Design-Build Process: Strategies for Procuring High-Performance Buildings on Typical Construction Budgets: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Scheib, J.; Pless, S.; Torcellini, P.

    2014-08-01

    NREL experienced a significant increase in employees and facilities on our 327-acre main campus in Golden, Colorado over the past five years. To support this growth, researchers developed and demonstrated a new building acquisition method that successfully integrates energy efficiency requirements into the design-build requests for proposals and contracts. We piloted this energy-performance-based design-build process with our first new construction project in 2008. We have since replicated and evolved the process for large office buildings, a smart grid research laboratory, a supercomputer, a parking structure, and a cafeteria. Each project incorporated aggressive efficiency strategies using contractual energy use requirements in the design-build contracts, all on typical construction budgets. We have found that when energy efficiency is a core project requirement as defined at the beginning of a project, innovative design-build teams can integrate the most cost-effective and high-performance efficiency strategies on typical construction budgets. When the design-build contract includes measurable energy requirements and is set up to incentivize design-build teams to focus on achieving high performance in actual operations, owners can now expect their facilities to perform. With the new construction completed in 2013, we have documented our best practices in training materials and a how-to guide so that other owners and owners' representatives can replicate our successes and learn from our experiences in attaining market-viable, world-class energy performance in the built environment.

  20. [Effects of fuel properties on the performance of a typical Euro IV diesel engine].

    Science.gov (United States)

    Chen, Wen-miao; Wang, Jian-xin; Shuai, Shi-jin

    2008-09-01

    With the purpose of establishing a diesel fuel standard for the China National 4th Emission Standard, as one part of the Beijing "Auto-Oil" programme, engine performance tests were done on a typical Euro IV diesel engine using eight diesel fuels with different properties. Test results show that fuel properties have little effect on the power, fuel consumption, and in-cylinder combustion process of the tested Euro IV diesel engine; sulfate in PM and gaseous SO2 emissions increase linearly with diesel sulfur content; an increase in cetane number reduces BSFC and PM and increases NOx; a decrease in T90 reduces NOx, while PM shows a decreasing trend. Prediction equations for the tested engine's ESC-cycle NOx and PM emissions before SCR, as functions of diesel fuel sulfur content, cetane number, T90 and aromatics, have been obtained using the linear regression method on the basis of the test results.

  1. Typicality Mediates Performance during Category Verification in Both Ad-Hoc and Well-Defined Categories

    Science.gov (United States)

    Sandberg, Chaleece; Sebastian, Rajani; Kiran, Swathi

    2012-01-01

    Background: The typicality effect is present in neurologically intact populations for natural, ad-hoc, and well-defined categories. Although sparse, there is evidence of typicality effects in persons with chronic stroke aphasia for natural and ad-hoc categories. However, it is unknown exactly what influences the typicality effect in this…

  2. Predicting the seismic performance of typical R/C healthcare facilities: emphasis on hospitals

    Science.gov (United States)

    Bilgin, Huseyin; Frangu, Idlir

    2017-09-01

    Reinforced concrete (RC) buildings constitute an important part of the current building stock in earthquake-prone countries such as Albania. The seismic response of structures during a severe earthquake plays a vital role in the extent of structural damage and the resulting injuries and losses. In this context, this study evaluates the expected performance of a five-story RC healthcare facility, representative of common practice in Albania, designed according to older codes. The design was based on the code requirements used in this region during the mid-1980s. Non-linear static and dynamic time history analyses were conducted on the structural model using the Zeus NL computer program. The dynamic time history analysis was conducted with a set of ground motions from real earthquakes. The building responses were estimated at the global level. FEMA 356 criteria were used to predict the seismic performance of the building. Structural response measures such as the capacity curve and inter-story drift under the set of ground motions and pushover analyses were compared, and a detailed seismic performance assessment was done. The main aim of this study is to demonstrate the application and methodology of earthquake performance assessment for existing buildings. The seismic performance of the structural model varied significantly under different ground motions. Results indicate that the case-study building exhibits inadequate seismic performance under different seismic excitations. In addition, reasons for the poor performance of the building are discussed.

  3. Decision-Tree Models of Categorization Response Times, Choice Proportions, and Typicality Judgments

    Science.gov (United States)

    Lafond, Daniel; Lacouture, Yves; Cohen, Andrew L.

    2009-01-01

    The authors present 3 decision-tree models of categorization adapted from T. Trabasso, H. Rollins, and E. Shaughnessy (1971) and use them to provide a quantitative account of categorization response times, choice proportions, and typicality judgments at the individual-participant level. In Experiment 1, the decision-tree models were fit to…

  4. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there is a difference or relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, the visual modality and combined modalities. The four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores or on the memory and sequencing spans. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skill measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Looking around houses: attention to a model when drawing complex shapes in Williams syndrome and typical development.

    Science.gov (United States)

    Hudson, Kerry D; Farran, Emily K

    2013-09-01

    Drawings by individuals with Williams syndrome (WS) typically lack cohesion. The popular hypothesis is that this is a result of excessive focus on local-level detail at the expense of global configuration. In this study, we explored a novel hypothesis that inadequate attention might underpin drawing in WS. WS and typically developing (TD) non-verbal-ability-matched groups copied and traced a house figure comprised of geometric shapes. The house was presented on a computer screen for 5-s periods and participants pressed a key to re-view the model. Frequency of key-presses indexed the looks to the model. The order in which elements were replicated was recorded to assess hierarchisation of elements. If a lack of attention to the model explained poor drawing performance, we expected participants with WS to look less frequently to the model than TD children when copying. If a local-processing preference underpins drawing in WS, more local than global elements would be produced. Results supported the first, but not the second hypothesis. The WS group looked to the model infrequently, but global, not local, parts were drawn first, scaffolding local-level details. Both groups adopted a similar order of drawing and tracing of parts, suggesting typical, although delayed, strategy use in the WS group. Additionally, both groups drew larger elements of the model before smaller elements, suggesting a size bias when drawing. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. A comparison of two typical multicyclic models used to forecast the world's conventional oil production

    International Nuclear Information System (INIS)

    Wang Jianliang; Feng Lianyong; Zhao Lin; Snowden, Simon; Wang Xu

    2011-01-01

    This paper introduces two typical multicyclic models: the Hubbert model and the Generalized Weng model. The model-solving process of each is expounded, providing the basis for an empirical analysis of the world's conventional oil production. The results for both show that the world's conventional oil (crude + NGLs) production will reach its peak in 2011 with a production of 30 billion barrels (Gb). In addition, the forecasting performance of the two models, given the same URR, is compared, and their intrinsic characteristics are analyzed. This demonstrates that, for specific criteria, the multicyclic Generalized Weng model is an improvement on the multicyclic Hubbert model. Finally, based upon the resultant forecast for the world's conventional oil, some suggestions are proposed for China's policy makers. - Highlights: ► The Hubbert model and the Generalized Weng model are introduced and compared in this article. ► We summarize each model's characteristics, scope, and conditions of applicability. ► Applying the two models, we obtain the same peak production and peak time for the world's oil. ► The multicyclic Generalized Weng model is proven slightly better than the Hubbert model.
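
    The multicyclic Hubbert form can be sketched numerically: total production is a sum of logistic-derivative cycles, and the peak year falls out of a simple scan. The cycle parameters below are illustrative, not the paper's fitted values.

```python
import numpy as np

# Each Hubbert cycle contributes p_i(t) = 2*p_max / (1 + cosh(b*(t - t_peak))),
# and multicyclic production is the sum over cycles. The ultimate recoverable
# resource (URR) of one cycle is the analytic integral 4*p_max/b.
cycles = [
    (18.0, 0.08, 1995.0),   # (p_max in Gb/yr, steepness b, peak year t_peak)
    (14.0, 0.10, 2012.0),
]

def production(t):
    t = np.asarray(t, dtype=float)
    return sum(2.0 * pm / (1.0 + np.cosh(b * (t - tm))) for pm, b, tm in cycles)

years = np.arange(1950, 2101)
peak_year = int(years[np.argmax(production(years))])
urr = sum(4.0 * pm / b for pm, b, _ in cycles)   # total URR in Gb
```

    Fitting such a model to historical data means estimating (p_max, b, t_peak) per cycle, typically under a fixed-URR constraint, which is what makes the two models directly comparable "given the same URR".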

  7. Gender Gap in the National College Entrance Exam Performance in China: A Case Study of a Typical Chinese Municipality

    Science.gov (United States)

    Zhang, Yu; Tsang, Mun

    2015-01-01

    This is one of the first studies to investigate gender achievement gap in the National College Entrance Exam in a typical municipality in China, which is the crucial examination for the transition from high school to higher education in that country. Using ordinary least square model and quantile regression model, the study consistently finds that…

  8. A planning model for a typical enrichement contract: a linear programming approach

    International Nuclear Information System (INIS)

    Miguez, J.D.G.; Barreto, P.M.S.P.

    1981-01-01

    The need to match the demand for enrichment services of a nuclear power plant program with the offer of a typical enrichment contract, taking into account all the existing constraints and flexibilities, calls for building a mathematical model. For each set of demands which can be fully supplied under the contract, the model must furnish the best scheduling of the SWU quantities according to an economic criterion. Such scheduling consists of establishing the fractions of each SWU demand to be met and the respective dates. (Author) [pt
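
    To make the idea concrete, here is a toy version of such a scheduling decision (not the paper's actual linear program): split one reload's SWU demand between two allowed delivery dates, subject to a per-period contract ceiling, minimizing present-value cost by brute-force enumeration. All quantities and prices are invented for illustration.

```python
# Toy SWU scheduling in the spirit of the model; all numbers are illustrative.
demand = 100.0      # tSWU required for the reload
ceiling = 70.0      # maximum tSWU deliverable in any one period (contract ceiling)
price = 100.0       # monetary units per tSWU (flat contract price, assumed)
discount = 0.95     # one-period discount factor

best = None
for i in range(int(2 * ceiling) + 1):
    x1 = i * 0.5                      # tSWU delivered in period 1 (0.5 tSWU grid)
    x2 = demand - x1                  # remainder delivered in period 2
    if not (0.0 <= x2 <= ceiling):
        continue                      # schedule violates the contract ceiling
    cost = price * (x1 + discount * x2)   # discounted contract outlay
    if best is None or cost < best[0]:
        best = (cost, x1, x2)
# With discounting, deliveries are pushed as late as the ceiling allows:
# x1 = 30, x2 = 70.
```

    A real contract adds many periods, carry-over flexibilities, and multiple reloads, at which point a linear programming solver replaces the enumeration.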

  9. An extended car-following model considering the acceleration derivative in some typical traffic environments

    Science.gov (United States)

    Zhou, Tong; Chen, Dong; Liu, Weining

    2018-03-01

    Based on the full velocity difference and acceleration car-following model, an extended car-following model is proposed by considering the derivative of the vehicle's acceleration. The stability condition is given by applying control theory. Considering some typical traffic environments, the results of theoretical analysis and numerical simulation show that the extended model reproduces the acceleration of a string of vehicles more realistically than previous models during starting, stopping and sudden braking. Meanwhile, traffic jams occur more easily when the coefficient of the vehicle's acceleration derivative increases, as presented by the space-time evolution. The results confirm that the vehicle's acceleration derivative plays an important role in the traffic jamming transition and the evolution of traffic congestion.
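
    A minimal numerical sketch of a full-velocity-difference-type update with an added acceleration-derivative (jerk) term, in the spirit of the extended model: the optimal-velocity function and all coefficients here are illustrative assumptions, not the paper's calibrated values.

```python
import math

def optimal_velocity(gap):
    # Bando-style optimal-velocity function (illustrative parameters)
    return 16.8 * (math.tanh(0.086 * (gap - 25.0)) + 0.913)

def step(gap, v, v_lead, a_prev, jerk, dt=0.1, kappa=0.41, lam=0.5, mu=0.05):
    """One Euler step for a follower behind a leader moving at v_lead."""
    dv = v_lead - v
    # Full-velocity-difference terms plus a small jerk-feedback term (mu)
    a = kappa * (optimal_velocity(gap) - v) + lam * dv + mu * jerk
    return gap + dv * dt, max(v + a * dt, 0.0), a, (a - a_prev) / dt

# Starting process: leader cruises at 10 m/s, follower starts from rest
gap, v, a_prev, jerk = 25.0, 0.0, 0.0, 0.0
for _ in range(3000):
    gap, v, a_prev, jerk = step(gap, v, 10.0, a_prev, jerk)
# The follower settles near the leader's speed at an equilibrium gap
# where optimal_velocity(gap) equals the leader's speed.
```

    Sweeping the jerk coefficient (here `mu`) in a platoon of such vehicles is the kind of experiment behind the paper's space-time-evolution results.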

  10. Variability in Classroom Social Communication: Performance of Children with Fetal Alcohol Spectrum Disorders and Typically Developing Peers

    Science.gov (United States)

    Kjellmer, Liselotte; Olswang, Lesley B.

    2013-01-01

    Purpose: In this study, the authors examined how variability in classroom social communication performance differed between children with fetal alcohol spectrum disorders (FASD) and pair-matched, typically developing peers. Method: Twelve pairs of children were observed in their classrooms, 40 min per day (20 min per child) for 4 days over a…

  11. Discussion of the effects of recirculating exhaust air on performance and efficiency of a typical microturbine

    International Nuclear Information System (INIS)

    De Paepe, Ward; Delattin, Frank; Bram, Svend; De Ruyck, Jacques

    2012-01-01

    This paper reports on a specific phenomenon, noticed during steam injection experiments on a microturbine. During the considered experiments, measurements indicated an unsteady inlet air temperature of the compressor, resulting in unstable operation of the microturbine. Non-continuous exhaust air recirculation was a possible explanation for the observed behaviour of the microturbine. The aim of this paper is to investigate and demonstrate the effects of exhaust recirculation on a micro gas turbine. Depending on wind direction, exhaust air re-entered the engine, resulting in changing inlet conditions that affected the operating regime of the microturbine. For this paper, a series of experiments was performed in the wind tunnel, allowing investigation of the effect of the wind direction on the flue gas flow. Next to the experiments, steady-state simulations of exhaust recirculation were performed in order to study the effect of exhaust recirculation on the thermodynamic performance of the microturbine. Dynamic simulations of the non-continuous recirculation revealed the effects of frequency and amplitude on average performance and stability. Results from simulations supported the important impact of exhaust recirculation. Wind tunnel tests demonstrated the influence of the wind direction on recirculation and revealed the necessity to heighten the stack, thus preventing exhaust recirculation. -- Highlights: ► Unstable operation of a T100 microturbine during steam injection tests was noticed, caused by exhaust gas recirculation. ► Wind tunnel tests were performed to study the effect of the wind direction on the recirculation process. ► Steady-state simulations to investigate the effect of exhaust gas recirculation on thermodynamic performance. ► Dynamic simulations to reveal effects of frequency and amplitude on average performance and stability. ► Wind tunnel tests revealed the necessity to heighten the stack to prevent exhaust

  12. Lack of Effect of Typical Rapid-Weight-Loss Practices on Balance and Anaerobic Performance in Apprentice Jockeys.

    Science.gov (United States)

    Cullen, SarahJane; Dolan, Eimear; O Brien, Kate; McGoldrick, Adrian; Warrington, Giles

    2015-11-01

    Balance and anaerobic performance are key attributes related to horse-racing performance, but the impact of making weight for racing on these parameters remains unknown. The purpose of this study was to investigate the effects of rapid weight loss in preparation for racing on balance and anaerobic performance in a group of jockeys. Twelve apprentice male jockeys and 12 age- and gender-matched controls completed 2 trials separated by 48 h. In both trials, body mass, hydration status, balance, and anaerobic performance were assessed. Between the trials, the jockeys reduced body mass by 4% using weight-loss methods typically adopted in preparation for racing, while controls maintained body mass through typical daily dietary and physical activity habits. Apprentice jockeys decreased mean body mass by 4.2% ± 0.3%; no differences in balance, on the left or right side, or in peak power, mean power, or fatigue index were reported between the trials in either group. Results from this study indicate that a 4% reduction in body mass in 48 h through the typical methods employed for racing, in association with an increase in dehydration, resulted in no impairments in balance or anaerobic performance. Further research is required to evaluate performance in a sport-specific setting and to investigate the specific physiological mechanisms involved.

  13. Performance Evaluation of 802.11e with typical data rates | Chenna ...

    African Journals Online (AJOL)

    The types of applications for which the Internet is used have changed dramatically over the years. Some of these applications require the network to provide Quality of Service (QoS). IEEE 802.11e is the QoS extension of the Wireless Local Area Network standard IEEE 802.11. This paper presents the performance evaluation ...

  14. Modeling a typical winter-time dust event over the Arabian Peninsula and the Red Sea

    KAUST Repository

    Kalenderski, Stoitchko

    2013-02-20

    We used WRF-Chem, a regional meteorological model coupled with an aerosol-chemistry component, to simulate various aspects of the dust phenomena over the Arabian Peninsula and Red Sea during a typical winter-time dust event that occurred in January 2009. The model predicted that the total amount of emitted dust was 18.3 Tg for the entire dust outburst period and that the two maximum daily rates were ~2.4 Tg day-1 and ~1.5 Tg day-1, corresponding to two periods with the highest aerosol optical depth that were well captured by ground- and satellite-based observations. The model predicted that the dust plume was thick, extensive, and mixed in a deep boundary layer at an altitude of 3-4 km. Its spatial distribution was modeled to be consistent with typical spatial patterns of dust emissions. We utilized MODIS-Aqua and Solar Village AERONET measurements of the aerosol optical depth (AOD) to evaluate the radiative impact of aerosols. Our results clearly indicated that the presence of dust particles in the atmosphere caused a significant reduction in the amount of solar radiation reaching the surface during the dust event. We also found that dust aerosols have significant impact on the energy and nutrient balances of the Red Sea. Our results showed that the simulated cooling under the dust plume reached 100 W m-2, which could have profound effects on both the sea surface temperature and circulation. Further analysis of dust generation and its spatial and temporal variability is extremely important for future projections and for better understanding of the climate and ecological history of the Red Sea.

  15. Modeling a typical winter-time dust event over the Arabian Peninsula and the Red Sea

    Directory of Open Access Journals (Sweden)

    S. Kalenderski

    2013-02-01

    Full Text Available We used WRF-Chem, a regional meteorological model coupled with an aerosol-chemistry component, to simulate various aspects of the dust phenomena over the Arabian Peninsula and Red Sea during a typical winter-time dust event that occurred in January 2009. The model predicted that the total amount of emitted dust was 18.3 Tg for the entire dust outburst period and that the two maximum daily rates were ~2.4 Tg day−1 and ~1.5 Tg day−1, corresponding to two periods with the highest aerosol optical depth that were well captured by ground- and satellite-based observations. The model predicted that the dust plume was thick, extensive, and mixed in a deep boundary layer at an altitude of 3–4 km. Its spatial distribution was modeled to be consistent with typical spatial patterns of dust emissions. We utilized MODIS-Aqua and Solar Village AERONET measurements of the aerosol optical depth (AOD) to evaluate the radiative impact of aerosols. Our results clearly indicated that the presence of dust particles in the atmosphere caused a significant reduction in the amount of solar radiation reaching the surface during the dust event. We also found that dust aerosols have significant impact on the energy and nutrient balances of the Red Sea. Our results showed that the simulated cooling under the dust plume reached 100 W m−2, which could have profound effects on both the sea surface temperature and circulation. Further analysis of dust generation and its spatial and temporal variability is extremely important for future projections and for better understanding of the climate and ecological history of the Red Sea.

  16. Thermal Performance of Typical Residential Building in Karachi with Different Materials for Construction

    Directory of Open Access Journals (Sweden)

    Nafeesa Shaheen

    2016-04-01

    Full Text Available This research work deals with a residential building located in the climatic context of Karachi, with the objective of studying its thermal performance based upon passive design techniques. The study helps in reducing electricity consumption by improving indoor temperatures. The existing residential buildings in Karachi were studied with reference to their planning and design, analyzed and evaluated. Different construction compositions of buildings were identified, surveyed and analyzed for making effective building envelopes. Autodesk® Ecotect 2011 was used to determine indoor comfort conditions and HVAC (Heating, Ventilation and Air Conditioning) loads. The result of the research depicted significant energy savings of 38.5% in HVAC loads with a proposed building envelope of locally available materials and glazing.

  17. Enhanced air dispersion modelling at a typical Chinese nuclear power plant site: Coupling RIMPUFF with two advanced diagnostic wind models.

    Science.gov (United States)

    Liu, Yun; Li, Hong; Sun, Sida; Fang, Sheng

    2017-09-01

    An enhanced air dispersion modelling scheme is proposed to cope with the building layout and complex terrain of a typical Chinese nuclear power plant (NPP) site. In this modelling, the California Meteorological Model (CALMET) and the Stationary Wind Fit and Turbulence (SWIFT) are coupled with the Risø Mesoscale PUFF model (RIMPUFF) for refined wind field calculation. The near-field diffusion coefficient correction scheme of the Atmospheric Relative Concentrations in the Building Wakes Computer Code (ARCON96) is adopted to characterize dispersion in building arrays. The proposed method is evaluated by a wind tunnel experiment that replicates the typical Chinese NPP site. For both wind speed/direction and air concentration, the enhanced modelling predictions agree well with the observations. The fraction of the predictions within a factor of 2 and 5 of observations exceeds 55% and 82% respectively in the building area and the complex terrain area. This demonstrates the feasibility of the new enhanced modelling for typical Chinese NPP sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
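    The validation statistic quoted in this abstract (the fraction of predictions within a factor of 2 or 5 of the observations, often written FAC2/FAC5) is simple to compute. A minimal sketch with made-up concentration values, not data from the study:

    ```python
    def fac(predictions, observations, factor):
        """Fraction of pred/obs pairs whose ratio lies within [1/factor, factor]."""
        pairs = [(p, o) for p, o in zip(predictions, observations) if p > 0 and o > 0]
        within = sum(1 for p, o in pairs if 1.0 / factor <= p / o <= factor)
        return within / len(pairs)

    # Made-up concentration values, for illustration only.
    pred = [1.0, 2.5, 0.3, 4.0, 0.05]
    obs  = [1.2, 1.0, 0.5, 1.0, 0.4]
    print(fac(pred, obs, 2), fac(pred, obs, 5))
    ```

    In the study, FAC2 exceeding 55% and FAC5 exceeding 82% in the building and complex-terrain areas were taken as evidence that the coupled wind-field modelling was adequate.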

  18. Comparison of thermal performance between test cells with different coverage systems for experimental typical day of heat in Brazilian Southeastern

    OpenAIRE

    Cardoso, Grace; Vecchia, Francisco

    2017-01-01

    This article shows experimentally the thermal performance of two test cells with different coverage systems, Light Green Roof (LGR) and ceramic roof by analyzing internal surface temperatures (IST) in the ceiling and dry bulb temperatures (DBT). The objective was to evaluate the spatial distribution of temperatures in buildings according to spatial and temporal Dynamic Climatology approaches. An experimental, typical day for heat conditions was determined. The data of the main climatic variab...

  19. Well performance model

    International Nuclear Information System (INIS)

    Thomas, L.K.; Evans, C.E.; Pierson, R.G.; Scott, S.L.

    1992-01-01

    This paper describes the development and application of a comprehensive oil or gas well performance model. The model contains six distinct sections: stimulation design, tubing and/or casing flow, reservoir and near-wellbore calculations, production forecasting, wellbore heat transmission, and economics. These calculations may be performed separately or in an integrated fashion with data and results shared among the different sections. The model analysis allows evaluation of all aspects of well completion design, including the effects on future production and overall well economics

  20. Variability in classroom social communication: performance of children with fetal alcohol spectrum disorders and typically developing peers.

    Science.gov (United States)

    Kjellmer, Liselotte; Olswang, Lesley B

    2013-06-01

    In this study, the authors examined how variability in classroom social communication performance differed between children with fetal alcohol spectrum disorders (FASD) and pair-matched, typically developing peers. Twelve pairs of children were observed in their classrooms, 40 min per day (20 min per child) for 4 days over a 2-week period. Coders documented classroom social communication during situations of Cooperation and following School Rules by recording performance on handheld computers using the Social Communication Coding System (SCCS). The SCCS consists of 6 behavioral dimensions (prosocial/engaged, passive/disengaged, irrelevant, hostile/coercive, assertive, and adult seeking). The frequency of occurrence and duration of each dimension were recorded. These measures were then used to examine variability in performance within and across days (changeability and stability, respectively). Independent of classroom situation, children with FASD were more variable than their typically developing peers in terms of changing behavioral dimensions more often (changeability) and varying their behavior more from day to day (stability). Documenting performance variability may provide a clearer understanding of the classroom social communication difficulties of the child with mild FASD.

  1. Cooperative Problem-Based Learning (CPBL: A Practical PBL Model for a Typical Course

    Directory of Open Access Journals (Sweden)

    Khairiyah Mohd-Yusof

    2011-09-01

    Full Text Available Problem-Based Learning (PBL) is an inductive learning approach that uses a realistic problem as the starting point of learning. Unlike in medical education, which is more easily adaptable to PBL, implementing PBL in engineering courses in the traditional semester system set-up is challenging. While PBL is normally implemented in small groups of up to ten students with a dedicated tutor during PBL sessions in medical education, this is not feasible in engineering education because of the high enrolment and large class sizes. In a typical course, implementation of PBL with students in small groups in medium to large classes is more practical. However, this type of implementation is more difficult to monitor, and thus requires good support and guidance in ensuring commitment and accountability of each student towards learning in his/her group. To provide the required support, Cooperative Learning (CL) is identified as having the much-needed elements to develop the small student groups into functional learning teams. Combining both CL and PBL results in a Cooperative Problem-Based Learning (CPBL) model that provides a step-by-step guide for students to go through the PBL cycle in their teams, according to CL principles. Suitable for implementation in medium to large classes (approximately 40-60 students for one floating facilitator, with small groups consisting of 3-5 students), the CPBL model is designed to develop the students in the whole class into a learning community. This paper provides a detailed description of the CPBL model. A sample implementation in a third year Chemical Engineering course, Process Control and Dynamics, is also described.

  2. NIF capsule performance modeling

    Directory of Open Access Journals (Sweden)

    Weber S.

    2013-11-01

    Full Text Available Post-shot modeling of NIF capsule implosions was performed in order to validate our physical and numerical models. Cryogenic layered target implosions and experiments with surrogate targets produce an abundance of capsule performance data including implosion velocity, remaining ablator mass, times of peak x-ray and neutron emission, core image size, core symmetry, neutron yield, and x-ray spectra. We have attempted to match the integrated data set with capsule-only simulations by adjusting the drive and other physics parameters within expected uncertainties. The simulations include interface roughness, time-dependent symmetry, and a model of mix. We were able to match many of the measured performance parameters for a selection of shots.

  3. Science learning and literacy performance of typically developing, at-risk, and disabled, non-English language background students

    Science.gov (United States)

    Larrinaga McGee, Patria Maria

    Current education reform calls for excellence, access, and equity in all areas of instruction, including science and literacy. Historically, persons of diverse backgrounds or with disabilities have been underrepresented in science. Gaps are evident between the science and literacy achievement of diverse students and their mainstream peers. The purpose of this study was to document, describe, and examine patterns of development and change in the science learning and literacy performance of Hispanic students. The two major questions of this study were: (1) How is science content knowledge, as evident in oral and written formats, manifested in the performance of typically developing, at-risk, and disabled non-English language background (NELB) students? and (2) What are the patterns of literacy performance in science, and as evident in oral and written formats, among typically developing, at-risk, and disabled NELB students? This case study was part of a larger research project, the Promise Project, undertaken at the University of Miami, Coral Gables, Florida, under the sponsorship of the National Science Foundation. The study involved 24 fourth-grade students in seven classrooms located in Promise Project schools where teachers were provided with training and materials for instruction on two units of science content: Matter and Weather. Four students were selected from among the fourth-graders for a closer analysis of their performance. Qualitative and quantitative data analysis methods were used to document, describe, and examine specific events or phenomena in the processes of science learning and literacy development. Important findings were related to (a) gains in science learning and literacy development, (b) students' science learning and literacy development needs, and (c) general and idiosyncratic attitudes toward science and literacy. Five patterns of science "explanations" identified indicated a developmental cognitive/linguistic trajectory in science

  4. Comparison of thermal performance between test cells with different coverage systems for experimental typical day of heat in Brazilian Southeastern

    Directory of Open Access Journals (Sweden)

    Grace Tiberio Cardoso

    2014-09-01

    Full Text Available This article shows experimentally the thermal performance of two test cells with different coverage systems, Light Green Roof (LGR) and ceramic roof, by analyzing internal surface temperatures (IST) in the ceiling and dry bulb temperatures (DBT). The objective was to evaluate the spatial distribution of temperatures in buildings according to spatial and temporal Dynamic Climatology approaches. An experimental, typical day for heat conditions was determined. The data of the main climatic variables provided by an automatic weather station and temperatures inside the test cells were collected using thermocouples installed such that the entire space is included. The results led to the conclusion that the LGR has a balanced IST and DBT spatial distribution compared with ceramic roofs. Nevertheless, the analysis of the thermal performance is only one of the variables that must be considered when developing a construction proposal that is adapted to the context. The manner in which the thermocouples were placed inside the test cells also showed the importance of specifying the location of the sensors in experimental studies on the behavior and thermal performance of buildings.

  5. Assembling Typical Meteorological Year Data Sets for Building Energy Performance Using Reanalysis and Satellite-Based Data

    Directory of Open Access Journals (Sweden)

    Thomas Huld

    2018-02-01

    Full Text Available We present a method to generate Typical Meteorological Year (TMY) data sets for use in calculations of the energy performance of buildings, based on satellite derived solar radiation data and other meteorological parameters obtained from reanalysis products. The great advantage of this method is the availability of data over large geographical regions, giving global coverage for the reanalysis and continental-scale coverage for the solar radiation data, making it possible to generate TMY data for nearly any location, independent of the availability of meteorological measurement stations in the area. The TMY data generated with this method have been validated against 487 meteorological stations in Europe, by calculating heating and cooling degree days, and by running building energy performance simulations using EnergyPlus. Results show that the generated data sets using a long time series perform better than the TMY data generated from station measurements for building heating calculations and nearly as well for cooling calculations, with relative standard deviations remaining below 6% for heating calculations. TMY data constructed using the proposed method yield somewhat larger deviations compared to TMY data constructed from station data. We outline a number of possibilities for further improvement using data sets that will become available in the near future.
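    The heating and cooling degree days used here to validate the TMY sets are computed directly from daily mean temperatures. A minimal sketch; the 18 °C base temperature and the week of data are illustrative assumptions, not the paper's:

    ```python
    def degree_days(daily_mean_temps, base=18.0):
        """Return (heating, cooling) degree days for daily mean temperatures in °C."""
        hdd = sum(max(0.0, base - t) for t in daily_mean_temps)  # heating demand proxy
        cdd = sum(max(0.0, t - base) for t in daily_mean_temps)  # cooling demand proxy
        return hdd, cdd

    # Illustrative week of daily mean temperatures (°C).
    temps = [10.0, 12.5, 18.0, 20.0, 25.0, 16.0, 8.0]
    hdd, cdd = degree_days(temps)
    print(hdd, cdd)
    ```

    Comparing degree-day totals computed from a candidate TMY against those from station records gives a quick, building-independent check before running full EnergyPlus simulations.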

  6. Identification of a Typical CSTR Using Optimal Focused Time Lagged Recurrent Neural Network Model with Gamma Memory Filter

    Directory of Open Access Journals (Sweden)

    S. N. Naikwad

    2009-01-01

    Full Text Available A focused time lagged recurrent neural network (FTLR NN) with a gamma memory filter is designed to learn the subtle complex dynamics of a typical CSTR process. A continuous stirred tank reactor exhibits complex nonlinear behaviour where the reaction is exothermic. A literature review shows that process control of the CSTR using neuro-fuzzy systems has been attempted by many, but an optimal neural network model for identification of the CSTR process is not yet available. As the CSTR process includes temporal relationships in its input-output mappings, a time lagged recurrent neural network is particularly suitable for the identification task. The standard back-propagation algorithm with a momentum term is used in this model. The various parameters, such as the number of processing elements, number of hidden layers, training and testing percentages, learning rule, and transfer functions in the hidden and output layers, are investigated on the basis of performance measures such as MSE, NMSE, and the correlation coefficient on the testing data set. Finally, the effects of different norms are tested along with variation in the gamma memory filter. It is demonstrated that the dynamic NN model has a remarkable system identification capability for the problems considered in this paper. Thus an FTLR NN with a gamma memory filter can be used to learn the underlying highly nonlinear dynamics of the system, which is the major contribution of this paper.
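    The gamma memory filter mentioned here is a cascade of leaky integrators: tap k at time t is x_k(t) = (1−μ)·x_k(t−1) + μ·x_{k−1}(t−1), with x_0 the raw input, so each tap holds a progressively smoother, more delayed trace of the signal. A minimal sketch (the value of μ, the tap count, and the step input are illustrative):

    ```python
    def gamma_memory(signal, n_taps=3, mu=0.5):
        """Run an n_taps-stage gamma memory over a signal; returns tap outputs per step."""
        taps = [0.0] * (n_taps + 1)   # taps[0] holds the current input x_0
        history = []
        for u in signal:
            prev = taps[:]            # x_k(t-1) for all k
            taps[0] = u
            for k in range(1, n_taps + 1):
                taps[k] = (1.0 - mu) * prev[k] + mu * prev[k - 1]
            history.append(taps[1:])
        return history

    # Step input: every tap converges to the input level, later taps lagging more.
    out = gamma_memory([1.0] * 50, n_taps=3, mu=0.5)
    print(out[-1])
    ```

    In the FTLR network these tap outputs, rather than a fixed tapped delay line, feed the hidden layer; μ is trainable, letting the network trade memory depth against resolution.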

  7. Typical worlds

    Science.gov (United States)

    Barrett, Jeffrey A.

    2017-05-01

    Hugh Everett III presented pure wave mechanics, sometimes referred to as the many-worlds interpretation, as a solution to the quantum measurement problem. While pure wave mechanics is an objectively deterministic physical theory with no probabilities, Everett sought to show how the theory might be understood as making the standard quantum statistical predictions as appearances to observers who were themselves described by the theory. We will consider his argument and how it depends on a particular notion of branch typicality. We will also consider responses to Everett and the relationship between typicality and probability. The suggestion will be that pure wave mechanics requires a number of significant auxiliary assumptions in order to make anything like the standard quantum predictions.

  8. A neuro-fuzzy model for prediction of the indoor temperature in typical Australian residential buildings

    Energy Technology Data Exchange (ETDEWEB)

    Alasha' ary, Haitham; Moghtaderi, Behdad; Page, Adrian; Sugo, Heber [Priority Research Centre for Energy, Chemical Engineering, School of Engineering, Faculty of Engineering and Built Environment, the University of Newcastle, Callaghan, Newcastle, NSW 2308 (Australia)

    2009-07-15

    The Masonry Research Group at The University of Newcastle, Australia has embarked on an extensive research program to study the thermal performance of common walling systems in Australian residential buildings by studying the thermal behaviour of four representative purpose-built thermal test buildings (referred to as 'test modules' or simply 'modules' hereafter). The modules are situated on the university campus and are constructed from brick veneer (BV), cavity brick (CB) and lightweight (LW) constructions. The program of study has both experimental and analytical strands, including the use of a neuro-fuzzy approach to predict the thermal behaviour. The latter approach employs an experimental adaptive neuro-fuzzy inference system (ANFIS) which is used in this study to predict the room (indoor) temperatures of the modules under a range of climatic conditions pertinent to Newcastle (NSW, Australia). The study shows that this neuro-fuzzy model is capable of accurately predicting the room temperature of such buildings; thus providing a potential computationally efficient and inexpensive predictive tool for the more effective thermal design of housing. (author)

  9. Computational fluid dynamics modeling of rope-guided conveyances in two typical kinds of shaft layouts.

    Directory of Open Access Journals (Sweden)

    Renyuan Wu

    Full Text Available The behavior of rope-guided conveyances is so complicated that rope-guided hoisting systems have not yet been thoroughly understood. In this paper, with user-defined functions loaded, ANSYS FLUENT 14.5 was employed to simulate the lateral motion of rope-guided conveyances in two typical kinds of shaft layouts. Taking a rope-guided mine elevator and mine cages into account, the results show that the lateral aerodynamic buffeting force is much larger than the Coriolis force, and the side aerodynamic force has the same order of magnitude as the Coriolis force. The lateral aerodynamic buffeting forces should also be considered, especially when the conveyance moves along the ventilation air direction. The simulation shows that a closer size match between the conveyances can weaken the transverse aerodynamic buffeting effect.

  10. Thermohydraulic model for a typical steam generator of PWR Nuclear Power Plants

    International Nuclear Information System (INIS)

    Braga, C.V.M.

    1980-06-01

    A steady-state thermohydraulic simulation model is developed in which the secondary flow is divided into two individually homogeneous parts, with heat and mass transfer between them. The quality of the two-phase mixture fed to the turbine is fixed and, based on this value, the feedwater pressure is determined. The recirculation ratio is determined intrinsically. Based on this model, the GEVAP code was developed in Fortran-IV. The model is applied to the steam generator of the Angra II nuclear power plant and the results, compared with KWU's design parameters, are considered satisfactory. (Author) [pt

  11. General model of wood in typical coupled tasks. Part I. – Phenomenological approach

    Directory of Open Access Journals (Sweden)

    Petr Koňas

    2008-01-01

    Full Text Available The main aim of this work is FE modeling of wood structure. This task is conditioned mainly by differently organized structures/regions (tissues, anomalies... and leads to a homogenization process for the multiphysics formulation of common scientific and engineering problems. The crucial step in this paper is the derivation of the coefficient form of a general PDE that is solvable by present-day numerical solvers. The generality of the proposed model is given by the wide range of coupled physical fields included in it. The approach summarizes and brings together models for various fields of matter and energy common in wood material during the drying process, but it is also suitable for many similar tasks and materials. Specifically, microwave drying of wood with orthotropic, visco-elastic material properties was included, together with the time, moisture, and temperature dependency of structural strains through modified mechanical properties. Specific elasticity matrices for the individual fields were derived. The thermal field in wood was described by conduction. The coupling of physical fields is based on the diffusive character of the movement of the temperature, moisture, and static pressure fields.

  12. Numerical modeling for longwall pillar design: a case study from a typical longwall panel in China

    Science.gov (United States)

    Zhang, Guangchao; Liang, Saijiang; Tan, Yunliang; Xie, Fuxing; Chen, Shaojie; Jia, Hongguo

    2018-02-01

    This paper presents a new numerical modeling procedure and design principle for longwall pillar design with the assistance of numerical simulation of FLAC3D. A coal mine located in Yanzhou city, Shandong Province, China, was selected for this case study. A meticulously validated numerical model was developed to investigate the stress changes across the longwall pillar with various sizes. In order to improve the reliability of the numerical modeling, a calibration procedure is undertaken to match the Salamon and Munro pillar strength formula for the coal pillar, while a similar calibration procedure is used to estimate the stress-strain response of a gob. The model results demonstrated that when the coal pillar width was 7-8 m, most of the vertical load was carried by the panel rib, whilst the gateroad was overall in a relatively low stress environment and could keep its stability with proper supports. Thus, the rational longwall pillar width was set as 8 m and the field monitoring results confirmed the feasibility of this pillar size. The proposed numerical simulation procedure and design principle presented in this study could be a viable alternative approach for longwall pillar design for other similar projects.

  13. Contrastive analysis of cooling performance between a high-level water collecting cooling tower and a typical cooling tower

    Science.gov (United States)

    Wang, Miao; Wang, Jin; Wang, Jiajin; Shi, Cheng

    2018-02-01

    A three-dimensional (3D) numerical model is established and validated for cooling performance optimization of a high-level water collecting natural draft wet cooling tower (HNDWCT) and a usual natural draft wet cooling tower (UNDWCT) under the actual operating conditions at Wanzhou power plant, Chongqing, China. User-defined functions (UDFs) for the source terms are composed and loaded into the spray, fill and rain zones. Considering three kinds of corrugated fills (Double-oblique wave, Two-way wave and S wave) and four fill heights (1.25 m, 1.5 m, 1.75 m and 2 m), numerical simulations of cooling performance are analysed. The results demonstrate that the S wave has the highest cooling efficiency of the three fills for both towers, indicating that fill characteristics are crucial to cooling performance. Moreover, the cooling performance of the HNDWCT is far superior to that of the UNDWCT at fill heights of 1.75 m and above, because the air mass flow rate in the fill zone of the HNDWCT improves more than that in the UNDWCT, as a result of the rain zone resistance declining sharply for the HNDWCT. In addition, the mass and heat transfer capacity of the HNDWCT is better in the tower centre zone than in the outer zone near the tower wall under a uniform fill layout. This behaviour is inverted for the UNDWCT, perhaps because the high-level collection devices play a flow-guiding role in the inner zone. Therefore, when non-uniform fill layout optimization is applied to the HNDWCT, with the inner zone increased in height from 1.75 m to 2 m and the outer zone reduced from 1.75 m to 1.5 m, the outlet water temperature declines by approximately 0.4 K compared to that of the uniform layout.

  14. Mathematical Ability of 10-Year-Old Boys and Girls: Genetic and Environmental Etiology of Typical and Low Performance

    Science.gov (United States)

    Kovas, Yulia; Haworth, Claire M. A.; Petrill, Stephen A.; Plomin, Robert

    2009-01-01

    The genetic and environmental etiologies of 3 aspects of low mathematical performance (math disability) and the full range of variability (math ability) were compared for boys and girls in a sample of 5,348 children aged 10 years (members of 2,674 pairs of same-sex and opposite-sex twins) from the United Kingdom (UK). The measures, which we developed for Web-based testing, included problems from 3 domains of mathematics taught as part of the UK National Curriculum. Using quantitative genetic model-fitting analyses, similar results were found for math disabilities and abilities on all 3 measures: moderate genetic influence, with environmental influence mainly due to nonshared factors unique to the individual and little influence from shared environment. No sex differences were found in the etiologies of math abilities and disabilities. We conclude that low mathematical performance is the quantitative extreme of the same genetic and environmental factors responsible for variation throughout the distribution. PMID:18064980

  15. Mathematical ability of 10-year-old boys and girls: genetic and environmental etiology of typical and low performance.

    Science.gov (United States)

    Kovas, Yulia; Haworth, Claire M A; Petrill, Stephen A; Plomin, Robert

    2007-01-01

    The genetic and environmental etiologies of 3 aspects of low mathematical performance (math disability) and the full range of variability (math ability) were compared for boys and girls in a sample of 5,348 children aged 10 years (members of 2,674 pairs of same-sex and opposite-sex twins) from the United Kingdom (UK). The measures, which we developed for Web-based testing, included problems from 3 domains of mathematics taught as part of the UK National Curriculum. Using quantitative genetic model-fitting analyses, similar results were found for math disabilities and abilities on all 3 measures: moderate genetic influence, with environmental influence mainly due to nonshared factors unique to the individual and little influence from shared environment. No sex differences were found in the etiologies of math abilities and disabilities. We conclude that low mathematical performance is the quantitative extreme of the same genetic and environmental factors responsible for variation throughout the distribution.

  16. Magnesium degradation influenced by buffering salts in concentrations typical of in vitro and in vivo models

    International Nuclear Information System (INIS)

    Agha, Nezha Ahmad; Feyerabend, Frank; Mihailova, Boriana; Heidrich, Stefanie; Bismayer, Ulrich; Willumeit-Römer, Regine

    2016-01-01

    Magnesium and its alloys have considerable potential for orthopedic applications. During the degradation process the interface between material and tissue is continuously changing. Moreover, too fast or uncontrolled degradation is detrimental to the outcome in vivo. Therefore, in vitro setups utilizing physiological conditions are promising for material/degradation analysis prior to animal experiments. The aim of this study is to elucidate the influence of inorganic salts contributing to the blood buffering capacity on degradation. Extruded pure magnesium samples were immersed under cell culture conditions for 3 and 10 days. Hank's balanced salt solution without calcium and magnesium (HBSS) plus 10% fetal bovine serum (FBS) was used as the basic immersion medium. Additionally, different inorganic salts were added at the concentrations found in Dulbecco's modified Eagle's medium (DMEM, in vitro model) and human plasma (in vivo model) to form 12 different immersion media. Influences on the surrounding environment were observed by measuring pH and osmolality. The degradation interface was analyzed by electron-induced X-ray emission (EIXE) spectroscopy, including chemical-element mappings and electron microprobe analysis, as well as Fourier transform infrared reflection micro-spectroscopy (FTIR). - Highlights: • The influence of blood buffering salts on magnesium degradation was studied. • CaCl2 reduced the degradation rate by Ca–PO4 layer formation. • MgSO4 influenced the morphology of the degradation interface. • NaHCO3 induced the formation of MgCO3 as a degradation product

  17. Simplified CFD model of coolant channels typical of a plate-type fuel element: an exhaustive verification of the simulations

    Energy Technology Data Exchange (ETDEWEB)

    Mantecón, Javier González; Mattar Neto, Miguel, E-mail: javier.mantecon@ipen.br, E-mail: mmattar@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    The use of parallel plate-type fuel assemblies is common in nuclear research reactors. One of the main problems of this fuel element configuration is the hydraulic instability of the plates caused by the high flow velocities. The current work is focused on the hydrodynamic characterization of coolant channels typical of a flat-plate fuel element, using a numerical model developed with the commercial code ANSYS CFX. Numerical results are compared to accurate analytical solutions, considering two turbulence models and three different fluid meshes. For this study, the results demonstrated that the most suitable turbulence model is the k-ε model. The discretization error is estimated using the Grid Convergence Index method. Despite its simplicity, this model generates precise flow predictions. (author)
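    The Grid Convergence Index mentioned above follows Roache's standard three-mesh procedure; a minimal sketch, assuming a constant refinement ratio and monotonic convergence (the solution values below are invented for illustration, not the IPEN channel results):

    ```python
    import math

    def gci(f_coarse, f_medium, f_fine, r, fs=1.25):
        """Roache's Grid Convergence Index from three mesh solutions.

        f_*: a solution quantity (e.g. pressure drop) on each mesh,
        r:   grid refinement ratio, fs: safety factor (1.25 is the
             conventional value for three-grid studies).
        Returns (observed order of accuracy p, fine-grid GCI in percent).
        """
        e21 = f_medium - f_fine
        e32 = f_coarse - f_medium
        p = math.log(abs(e32 / e21)) / math.log(r)        # observed order
        gci_fine = fs * abs(e21 / f_fine) / (r**p - 1) * 100.0
        return p, gci_fine

    # Hypothetical solutions on coarse, medium and fine meshes, r = 2:
    p, gci_fine = gci(10.8, 10.2, 10.0, r=2.0)
    ```

    A small fine-grid GCI (a few percent or less) indicates that the reported quantity is effectively mesh-independent.
    
    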

  18. GLOBAL MODELING OF NEBULAE WITH PARTICLE GROWTH, DRIFT, AND EVAPORATION FRONTS. I. METHODOLOGY AND TYPICAL RESULTS

    International Nuclear Information System (INIS)

    Estrada, Paul R.; Cuzzi, Jeffrey N.; Morgan, Demitri A.

    2016-01-01

    We model particle growth in a turbulent, viscously evolving protoplanetary nebula, incorporating sticking, bouncing, fragmentation, and mass transfer at high speeds. We treat small particles using a moments method and large particles using a traditional histogram binning, including a probability distribution function of collisional velocities. The fragmentation strength of the particles depends on their composition (icy aggregates are stronger than silicate aggregates). The particle opacity, which controls the nebula thermal structure, evolves as particles grow and mass redistributes. While growing, particles drift radially due to nebula headwind drag. Particles of different compositions evaporate at “evaporation fronts” (EFs) where the midplane temperature exceeds their respective evaporation temperatures. We track the vapor and solid phases of each component, accounting for advection and radial and vertical diffusion. We present characteristic results in evolutions lasting 2 × 10^5 years. In general, (1) mass is transferred from the outer to the inner nebula in significant amounts, creating radial concentrations of solids at EFs; (2) particle sizes are limited by a combination of fragmentation, bouncing, and drift; (3) “lucky” large particles never represent a significant amount of mass; and (4) restricted radial zones just outside each EF become compositionally enriched in the associated volatiles. We point out implications for millimeter to submillimeter SEDs and the inference of nebula mass, radial banding, the role of opacity on new mechanisms for generating turbulence, the enrichment of meteorites in heavy oxygen isotopes, variable and nonsolar redox conditions, the primary accretion of silicate and icy planetesimals, and the makeup of Jupiter’s core.

  19. GLOBAL MODELING OF NEBULAE WITH PARTICLE GROWTH, DRIFT, AND EVAPORATION FRONTS. I. METHODOLOGY AND TYPICAL RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Estrada, Paul R. [Carl Sagan Center, SETI Institute, 189 N. Bernardo Avenue # 100, Mountain View, CA 94043 (United States); Cuzzi, Jeffrey N. [Ames Research Center, NASA, Mail Stop 245-3, Moffett Field, CA 94035 (United States); Morgan, Demitri A., E-mail: Paul.R.Estrada@nasa.gov [USRA, NASA Ames Research Center, Mail Stop 245-3, Moffett Field, CA 94035 (United States)

    2016-02-20

    We model particle growth in a turbulent, viscously evolving protoplanetary nebula, incorporating sticking, bouncing, fragmentation, and mass transfer at high speeds. We treat small particles using a moments method and large particles using a traditional histogram binning, including a probability distribution function of collisional velocities. The fragmentation strength of the particles depends on their composition (icy aggregates are stronger than silicate aggregates). The particle opacity, which controls the nebula thermal structure, evolves as particles grow and mass redistributes. While growing, particles drift radially due to nebula headwind drag. Particles of different compositions evaporate at “evaporation fronts” (EFs) where the midplane temperature exceeds their respective evaporation temperatures. We track the vapor and solid phases of each component, accounting for advection and radial and vertical diffusion. We present characteristic results in evolutions lasting 2 × 10^5 years. In general, (1) mass is transferred from the outer to the inner nebula in significant amounts, creating radial concentrations of solids at EFs; (2) particle sizes are limited by a combination of fragmentation, bouncing, and drift; (3) “lucky” large particles never represent a significant amount of mass; and (4) restricted radial zones just outside each EF become compositionally enriched in the associated volatiles. We point out implications for millimeter to submillimeter SEDs and the inference of nebula mass, radial banding, the role of opacity on new mechanisms for generating turbulence, the enrichment of meteorites in heavy oxygen isotopes, variable and nonsolar redox conditions, the primary accretion of silicate and icy planetesimals, and the makeup of Jupiter’s core.

  20. Principles of Sonar Performance Modeling

    NARCIS (Netherlands)

    Ainslie, M.A.

    2010-01-01

    Sonar performance modelling (SPM) is concerned with the prediction of quantitative measures of sonar performance, such as probability of detection. It is a multidisciplinary subject, requiring knowledge and expertise in the disparate fields of underwater acoustics, acoustical oceanography, sonar

  1. The effects of typical and atypical antipsychotics on the electrical activity of the brain in a rat model

    Directory of Open Access Journals (Sweden)

    Oytun Erbaş

    2013-09-01

    Objective: Antipsychotic drugs are known to have a strong effect on the bioelectric activity of the brain. However, some studies addressing the changes on electroencephalography (EEG) caused by typical and atypical antipsychotic drugs are conflicting. We aimed to compare the effects of typical and atypical antipsychotics on the electrical activity of the brain via EEG recordings in a rat model. Methods: Thirty-two Sprague Dawley adult male rats were used in the study. The rats were divided into five groups randomly (n=7 for each group). The first group was used as the control group and administered 1 ml/kg saline intraperitoneally (IP). Haloperidol (1 mg/kg; group 2), chlorpromazine (5 mg/kg; group 3), olanzapine (1 mg/kg; group 4) and ziprasidone (1 mg/kg; group 5) were injected IP for five consecutive days. Then, EEG recordings of each group were taken for 30 minutes. Results: The percentages of delta and theta waves in the haloperidol, chlorpromazine, olanzapine and ziprasidone groups showed a highly significant difference compared with the saline group (p<0.001). The theta waves in the olanzapine and ziprasidone groups were increased compared with the haloperidol and chlorpromazine groups (p<0.05). Conclusion: Typical and atypical antipsychotic drugs may be a risk factor for EEG abnormalities. This study shows that antipsychotic drugs should be used with caution. J Clin Exp Invest 2013; 4(3): 279-284. Key words: haloperidol, chlorpromazine, olanzapine, ziprasidone, EEG, rat

  2. Biofilm carrier migration model describes reactor performance.

    Science.gov (United States)

    Boltz, Joshua P; Johnson, Bruce R; Takács, Imre; Daigger, Glen T; Morgenroth, Eberhard; Brockmann, Doris; Kovács, Róbert; Calhoun, Jason M; Choubert, Jean-Marc; Derlon, Nicolas

    2017-06-01

    The accuracy of a biofilm reactor model depends on the extent to which physical system conditions (particularly bulk-liquid hydrodynamics and their influence on biofilm dynamics) deviate from the ideal conditions upon which the model is based. It follows that an improved capacity to model a biofilm reactor does not necessarily rely on an improved biofilm model, but does rely on an improved mathematical description of the biofilm reactor and its components. Existing biofilm reactor models typically include a one-dimensional biofilm model, a process (biokinetic and stoichiometric) model, and a continuous flow stirred tank reactor (CFSTR) mass balance that [when organizing CFSTRs in series] creates a pseudo two-dimensional (2-D) model of bulk-liquid hydrodynamics approaching plug flow. In such a biofilm reactor model, the user-defined biofilm area is specified for each CFSTR; thereby, carriers (X carrier) do not exit the boundaries of the CFSTR to which they are assigned or exchange boundaries with other CFSTRs in the series. The error introduced by this pseudo 2-D biofilm reactor modeling approach may adversely affect model results and limit model-user capacity to accurately calibrate a model. This paper presents a new sub-model that describes the migration of X carrier and associated biofilms, and evaluates the impact that X carrier migration and axial dispersion have on simulated system performance. The relevance of the new biofilm reactor model to engineering situations is discussed by applying it to known biofilm reactor types and operational conditions.
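    The pseudo 2-D approach described above, CFSTRs in series approximating plug flow, can be illustrated with a minimal steady-state sketch; the first-order removal kinetics and all numbers are illustrative assumptions, not from the paper:

    ```python
    def effluent_series(c_in, q, v_total, n, removal_k):
        """Steady-state effluent concentration from n equal CFSTRs in series
        with first-order removal (rate constant removal_k, 1/h).

        The single-tank case (n=1) is the classic CFSTR mass balance
        C_out = C_in / (1 + k*tau); larger n approaches plug flow.
        q: flow (m^3/h), v_total: total reactor volume (m^3).
        """
        tau = (v_total / q) / n          # residence time per tank
        c = c_in
        for _ in range(n):
            c = c / (1.0 + removal_k * tau)
        return c

    # Same total volume, increasing number of tanks in series:
    single = effluent_series(100.0, q=10.0, v_total=50.0, n=1, removal_k=0.5)
    cascade = effluent_series(100.0, q=10.0, v_total=50.0, n=5, removal_k=0.5)
    # The cascade removes more substrate than the single tank,
    # illustrating why CFSTRs in series mimic plug-flow hydraulics.
    ```
    
    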

  3. Characterising performance of environmental models

    NARCIS (Netherlands)

    Bennett, N.D.; Croke, B.F.W.; Guariso, G.; Guillaume, J.H.A.; Hamilton, S.H.; Jakeman, A.J.; Marsili-Libelli, S.; Newham, L.T.H.; Norton, J.; Perrin, C.; Pierce, S.; Robson, B.; Seppelt, R.; Voinov, A.; Fath, B.D.; Andreassian, V.

    2013-01-01

    In order to use environmental models effectively for management and decision-making, it is vital to establish an appropriate level of confidence in their performance. This paper reviews techniques available across various fields for characterising the performance of environmental models with focus

  4. Modeling individual differences in text reading fluency: a different pattern of predictors for typically developing and dyslexic readers

    Directory of Open Access Journals (Sweden)

    Pierluigi Zoccolotti

    2014-11-01

    This study was aimed at predicting individual differences in text reading fluency. The basic proposal included two factors, i.e., the ability to decode letter strings (measured by discrete pseudo-word reading) and the integration of the various sub-components involved in reading (measured by Rapid Automatized Naming, RAN). Subsequently, a third factor was added to the model, i.e., naming of discrete digits. In order to use homogeneous measures, all contributing variables considered the entire processing of the item, including pronunciation time. The model, which was based on commonality analysis, was applied to data from a group of 43 typically developing readers (11- to 13-year-olds) and a group of 25 chronologically matched dyslexic children. In typically developing readers, both orthographic decoding and integration of reading sub-components contributed significantly to the overall prediction of text reading fluency. The model prediction was higher (from ca. 37% to 52% of the explained variance) when we included the naming of discrete digits variable, which had a suppressive effect on pseudo-word reading. In the dyslexic readers, the variance explained by the two-factor model was high (69%) and did not change when the third factor was added. The lack of a suppression effect was likely due to the prominent individual differences in poor orthographic decoding of the dyslexic children. Analyses of data from both groups of children were replicated using patches of colours as stimuli (both in the RAN task and in the discrete naming task), obtaining similar results. We conclude that it is possible to predict much of the variance in text-reading fluency using basic processes, such as orthographic decoding and integration of reading sub-components, even without taking into consideration higher-order linguistic factors such as lexical, semantic and contextual abilities. The approach validity of using proximal vs distal causes to predict reading fluency is

  5. A Probabilistic Approach to Symbolic Performance Modeling of Parallel Systems

    NARCIS (Netherlands)

    Gautama, H.

    2004-01-01

    Performance modeling plays a significant role in predicting the effects of a particular design choice or in diagnosing the cause for some observed performance behavior. Especially for complex systems such as parallel computer, typically, an intended performance cannot be achieved without recourse to

  6. Typical intellectual engagement, Big Five personality traits, approaches to learning and cognitive ability predictors of academic performance.

    Science.gov (United States)

    Furnham, Adrian; Monsen, Jeremy; Ahmetoglu, Gorkan

    2009-12-01

    Both ability (measured by power tests) and non-ability (measured by preference tests) individual difference measures predict academic school outcomes. These include fluid as well as crystallized intelligence, personality traits, and learning styles. This paper examines the incremental validity of five psychometric tests, together with the sex and age of pupils, in predicting their General Certificate of Secondary Education (GCSE) test results. The aim was to determine how much variance ability and non-ability tests can account for in predicting specific GCSE exam scores. The sample comprised 212 British schoolchildren, of whom 123 were female. Their mean age was 15.8 years (SD 0.98 years). Pupils completed three self-report tests: the Neuroticism-Extroversion-Openness Five-Factor Inventory (NEO-FFI), which measures the 'Big Five' personality traits (Costa & McCrae, 1992); the Typical Intellectual Engagement Scale (Goff & Ackerman, 1992); and a measure of learning style, the Study Process Questionnaire (SPQ; Biggs, 1987). They also completed two ability tests: the Wonderlic Personnel Test (Wonderlic, 1992), a short measure of general intelligence, and the General Knowledge Test (Irving, Cammock, & Lynn, 2001), a measure of crystallized intelligence. Six months later they took their (10th grade) GCSE exams, comprising four 'core' compulsory exams as well as a number of specific elective subjects. Correlational analysis suggested that intelligence was the best predictor of school results. Preference test measures accounted for relatively little variance. Regressions indicated that over 50% of the variance in school exams for English (Literature and Language) and for Maths and Science combined could be accounted for by these individual difference factors. Data from less than an hour's worth of testing could thus predict school exam results 6 months later. These tests could, therefore, be used to reliably inform important decisions about how pupils are taught.
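    The incremental-validity question examined above (how much variance one set of tests adds beyond another) is typically answered by comparing R^2 across nested regressions; a sketch with synthetic data (all scores and coefficients are invented, not the study's):

    ```python
    import numpy as np

    def r_squared(x_cols, y):
        """R^2 of an ordinary least-squares fit with an intercept."""
        x = np.column_stack([np.ones(len(y))] + x_cols)
        beta, *_ = np.linalg.lstsq(x, y, rcond=None)
        resid = y - x @ beta
        return 1.0 - resid.var() / y.var()

    # Synthetic stand-ins: an ability score, a personality (preference)
    # score, and an exam outcome driven mostly by ability.
    rng = np.random.default_rng(1)
    n = 200
    ability = rng.normal(size=n)
    trait = rng.normal(size=n)
    exam = 0.7 * ability + 0.2 * trait + rng.normal(scale=0.7, size=n)

    r2_ability = r_squared([ability], exam)
    r2_both = r_squared([ability, trait], exam)
    incremental = r2_both - r2_ability   # variance added by the preference test
    ```

    The small `incremental` value relative to `r2_ability` mirrors the pattern reported above: ability tests carry most of the predictive weight.
    
    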

  7. Multiprocessor performance modeling with ADAS

    Science.gov (United States)

    Hayes, Paul J.; Andrews, Asa M.

    1989-01-01

    A graph managing strategy referred to as the Algorithm to Architecture Mapping Model (ATAMM) appears useful for the time-optimized execution of application algorithm graphs in embedded multiprocessors and for the performance prediction of graph designs. This paper reports the modeling of ATAMM in the Architecture Design and Assessment System (ADAS) to make an independent verification of ATAMM's performance prediction capability and to provide a user framework for the evaluation of arbitrary algorithm graphs. Following an overview of ATAMM and its major functional rules are descriptions of the ADAS model of ATAMM, methods to enter an arbitrary graph into the model, and techniques to analyze the simulation results. The performance of a 7-node graph example is evaluated using the ADAS model and verifies the ATAMM concept by substantiating previously published performance results.

  8. Firm Sustainability Performance Index Modeling

    Directory of Open Access Journals (Sweden)

    Che Wan Jasimah Bt Wan Mohamed Radzi

    2015-12-01

    The main objective of this paper is to present a model for a firm sustainability performance index by applying both classical and Bayesian structural equation modeling (parametric and semi-parametric modeling). Both techniques are applied to research data collected through a survey directed at the food manufacturing industries of China, Taiwan, and Malaysia. To estimate the firm sustainability performance index we consider three main indicators: knowledge management, organizational learning, and business strategy. Based on both the Bayesian and classical methodologies, we confirm that knowledge management and business strategy have a significant impact on the firm sustainability performance index.

  9. Performances of typical high energy physics applications in flash-based field-programmable gate array under gamma irradiation

    Science.gov (United States)

    Sano, Y.; Horii, Y.; Ikeno, M.; Kawaguchi, T.; Mizukoshi, K.; Sasaki, O.; Shukutani, K.; Tomoto, M.; Uchida, T.

    2017-04-01

    Recent field-programmable gate arrays (FPGAs) based on flash memories offer a high radiation tolerance. We discuss potential applications of the Microsemi IGLOO2 FPGAs in high energy physics experiments. We implement a 24-channel time-to-digital converter with a time binning of 0.78 ns and evaluate its performance. Differential and integral non-linearity are measured to be far below the time binning. The time resolution obtained is approximately 0.25 ns. The FPGA was exposed to gamma rays with a total ionizing dose of 300 Gy. The function of the configuration memory was monitored, and the degradation of the performance of the ring oscillator and high-speed transceiver was measured. Errors during firmware download and during verification of the downloaded firmware were observed at 110-120 Gy and 80-90 Gy, respectively. The ring oscillator and high-speed transceiver remained functional up to about 200 Gy.

  10. The Five Key Questions of Human Performance Modeling.

    Science.gov (United States)

    Wu, Changxu

    2018-01-01

    Via building computational (typically mathematical and computer simulation) models, human performance modeling (HPM) quantifies, predicts, and maximizes human performance, human-machine system productivity and safety. This paper describes and summarizes the five key questions of human performance modeling: 1) Why we build models of human performance; 2) What the expectations of a good human performance model are; 3) What the procedures and requirements in building and verifying a human performance model are; 4) How we integrate a human performance model with system design; and 5) What the possible future directions of human performance modeling research are. Recent and classic HPM findings are addressed in the five questions to provide new thinking in HPM's motivations, expectations, procedures, system integration and future directions.

  11. Seasonal bacterial community succession in four typical wastewater treatment plants: correlations between core microbes and process performance.

    Science.gov (United States)

    Zhang, Bo; Yu, Quanwei; Yan, Guoqi; Zhu, Hubo; Xu, Xiang Yang; Zhu, Liang

    2018-03-15

    To understand the seasonal variation of the activated sludge (AS) bacterial community and identify core microbes in different wastewater processing systems, seasonal AS samples were taken from every biological treatment unit within 4 full-scale wastewater treatment plants. These plants adopted A2/O, A/O and oxidation ditch processes and treated different types and sources of wastewater, some domestic and others industrial. The bacterial community composition was analyzed using high-throughput sequencing technology. The correlations among microbial community structure, dominant microbes and process performance were investigated. Seasonal variation had a stronger impact on the AS bacterial community than the variation between the different wastewater treatment systems. Under seasonal variation, the bacterial community within the oxidation ditch process remained more stable than those in either the A2/O or A/O processes. The core genera in the domestic wastewater treatment systems were Nitrospira, Caldilineaceae, Pseudomonas and Lactococcus. The core genera in the textile dyeing and fine chemical industrial wastewater treatment systems were Nitrospira, Thauera and Thiobacillus.

  12. Modeling the Maturation of Grip Selection Planning and Action Representation: Insights from Typical and Atypical Motor Development.

    Directory of Open Access Journals (Sweden)

    Ian Fuelscher

    2016-02-01

    We investigated the purported association between developmental changes in grip selection planning and improvements in an individual’s capacity to represent action at an internal level (i.e., motor imagery). Participants were groups of healthy children aged 6-7 years and 8-12 years respectively, while a group of adolescents (13-17 years) and adults (18-34 years) allowed for consideration of childhood development in the broader context of motor maturation. A group of children aged 8-12 years with probable DCD (pDCD) was included as a reference group for atypical motor development. Participants’ proficiency to generate and/or engage internal action representations was inferred from performance on the hand rotation task, a well-validated measure of motor imagery. A grip selection task designed to elicit the end-state comfort (ESC) effect provided a window into the integrity of grip selection planning. Consistent with earlier accounts, the efficiency of grip selection planning followed a non-linear developmental progression in neurotypical individuals. As expected, analysis confirmed that these developmental improvements were predicted by an increased capacity to generate and/or engage internal action representations. The profile of this association remained stable throughout the (typical) developmental spectrum. These findings are consistent with computational accounts of action planning that argue that internal action representations are associated with the expression and development of grip selection planning across typical development. However, no such association was found for our sample of children with pDCD, suggesting that individuals with atypical motor skill may adopt an alternative, sub-optimal strategy to plan their grip selection compared to their same-age control peers.

  13. MODELING SUPPLY CHAIN PERFORMANCE VARIABLES

    Directory of Open Access Journals (Sweden)

    Ashish Agarwal

    2005-01-01

    In order to understand the dynamic behavior of the variables that can play a major role in performance improvement in a supply chain, a System Dynamics-based model is proposed. The model provides an effective framework for analyzing the different variables affecting supply chain performance, and a causal relationship among these variables has been identified. Variables emanating from performance measures such as gaps in customer satisfaction, cost minimization, lead-time reduction, service level improvement and quality improvement have been identified as goal-seeking loops. The proposed model analyzes the effect of the dynamic behavior of these variables on the performance of a case supply chain in the automotive business over a period of 10 years.

  14. Air Conditioner Compressor Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Ning; Xie, YuLong; Huang, Zhenyu

    2008-09-05

    During the past three years, the Western Electricity Coordinating Council (WECC) Load Modeling Task Force (LMTF) has led the effort to develop the new modeling approach. As part of this effort, the Bonneville Power Administration (BPA), Southern California Edison (SCE), and Electric Power Research Institute (EPRI) Solutions tested 27 residential air-conditioning units to assess their response to delayed voltage recovery transients. After completing these tests, different modeling approaches were proposed; among them, a performance modeling approach proved to be one of the three favored for its simplicity and ability to recreate different SVR events satisfactorily. Funded by the California Energy Commission (CEC) under its load modeling project, researchers at Pacific Northwest National Laboratory (PNNL) led the follow-on task of analyzing the motor testing data to derive the parameters needed to develop a performance model for the single-phase air-conditioning (SPAC) unit. To derive the performance model, PNNL researchers first used the motor voltage and frequency ramping test data to obtain the real (P) and reactive (Q) power versus voltage (V) and frequency (f) curves. Then, curve fitting was used to develop the P-V, Q-V, P-f, and Q-f relationships for the motor running and stalling states. The resulting performance model ignores the dynamic response of the air-conditioning motor. Because the inertia of the air-conditioning motor is very small (H<0.05), the motor moves from one steady state to another in a few cycles, so the performance model is a fair representation of the motor behavior in both running and stalling states.
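    The curve-fitting step described above (deriving a P-V relationship from ramping-test data) might look like the following sketch; the quadratic form and the synthetic data are assumptions for illustration, not the actual WECC test measurements:

    ```python
    import numpy as np

    # Synthetic stand-ins for voltage-ramp test data: per-unit voltage and
    # real power (kW) with a quadratic P-V dependence plus measurement noise.
    rng = np.random.default_rng(0)
    v = np.linspace(0.7, 1.1, 50)
    p_true = 3.0 + 1.2 * (v - 1.0) + 4.0 * (v - 1.0) ** 2
    p_meas = p_true + rng.normal(0.0, 0.01, v.size)

    # Fit a quadratic P(V) around nominal voltage for the running state;
    # separate fits would be made for the stalled state and for Q-V, P-f, Q-f.
    coeffs = np.polyfit(v - 1.0, p_meas, deg=2)   # highest degree first
    p_model = np.polyval(coeffs, v - 1.0)
    ```

    A static fit like this captures the steady-state P-V behavior, which is the stated justification for ignoring the motor's (very fast) dynamic response.
    
    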

  15. Evaluation of the AnnAGNPS Model for Predicting Runoff and Nutrient Export in a Typical Small Watershed in the Hilly Region of Taihu Lake

    Directory of Open Access Journals (Sweden)

    Chuan Luo

    2015-09-01

    The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both annual and monthly scales, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, during both the calibration and validation processes. Additionally, the sensitivity analysis showed that the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic were the most sensitive for TN output. In terms of TP, the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover were the most sensitive. Calibration was then performed on the basis of these sensitive parameters. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance for TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.
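    Calibration performance of the kind reported above is commonly scored with the Nash-Sutcliffe efficiency; the abstract does not name its criterion, so treating NSE as the metric, and the runoff numbers below, are assumptions for illustration:

    ```python
    def nash_sutcliffe(observed, simulated):
        """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
        predicts no better than the mean of the observations."""
        mean_obs = sum(observed) / len(observed)
        ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
        ss_tot = sum((o - mean_obs) ** 2 for o in observed)
        return 1.0 - ss_res / ss_tot

    # Hypothetical monthly runoff (mm): observed vs. simulated
    obs = [12.0, 30.0, 55.0, 41.0, 18.0, 9.0]
    sim = [10.0, 33.0, 50.0, 44.0, 20.0, 11.0]
    nse = nash_sutcliffe(obs, sim)
    ```

    NSE values above roughly 0.5 are often read as "satisfactory" in watershed-model studies, though the exact threshold varies by application.
    
    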

  16. Evaluation of water conservation capacity of loess plateau typical mountain ecosystems based on InVEST model simulation

    Science.gov (United States)

    Lv, Xizhi; Zuo, Zhongguo; Xiao, Peiqing

    2017-06-01

    With increasing demand for water resources and a general deterioration of local water resources, water conservation by forests has received considerable attention in recent years. To evaluate the water conservation capacities of different forest ecosystems in mountainous areas of the Loess Plateau, the forest landscape of the Loess Plateau was divided into 18 types. Considering factors such as climate, topography, plants, soil and land use, the water conservation of the forest ecosystems was estimated by means of the InVEST model. The results showed that 486,417.7 hm^2 of forests in typical mountain areas were divided into 18 forest types, with a total water conservation quantity of 1.64×10^12 m^3 and an average water conservation quantity of 9.09×10^10 m^3. There is a great difference in average water conservation capacity among the various forest types. The water conservation function and its evaluation are crucial and complicated issues in the study of ecological service functions in modern times.
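    The InVEST calculation underlying estimates like those above starts from a Budyko-style annual water-yield curve; a sketch following the formulation in the InVEST user guide, where the pixel values and the seasonality constant z are illustrative assumptions:

    ```python
    def invest_water_yield(p, pet, awc, z=5.0):
        """Annual water yield per unit area (mm), Budyko-style curve as used
        by the InVEST annual water yield model.

        p:   precipitation (mm/yr), pet: potential evapotranspiration (mm/yr),
        awc: plant-available water capacity (mm), z: seasonality constant.
        The omega parameterization (z*awc/p + 1.25) follows the InVEST user
        guide; all input values here are hypothetical.
        """
        omega = z * awc / p + 1.25
        aet_ratio = 1.0 + pet / p - (1.0 + (pet / p) ** omega) ** (1.0 / omega)
        return (1.0 - aet_ratio) * p

    # Hypothetical forest pixel: 600 mm rain, 800 mm PET, 120 mm AWC
    y = invest_water_yield(600.0, 800.0, 120.0)
    ```

    Water conservation is then obtained in InVEST-based studies by adjusting yield for retention-related factors (topography, soil permeability, vegetation), which this sketch omits.
    
    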

  17. DETRA: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Suolanen, V.

    1996-01-01

    The computer code DETRA is a generic tool for environmental transfer analyses of radioactive or stable substances. The code has been applied for various purposes, mainly to problems related to the biospheric transfer of radionuclides, both in safety analyses of the disposal of nuclear wastes and in the consideration of foodchain exposure pathways in analyses of the off-site consequences of reactor accidents. For each specific application, an individually tailored conceptual model can be developed. The biospheric transfer analyses performed by the code are typically carried out for terrestrial, aquatic and food chain applications. 21 refs, 35 figs, 15 tabs
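Biospheric transfer calculations of this kind typically reduce to linear compartment models with first-order transfer and decay rates. A toy sketch of that structure (our own code with invented rate constants; DETRA's actual conceptual models are tailored per application):

```python
# Donor-controlled compartment model advanced with explicit Euler steps.
# transfers[(i, j)] is the first-order rate constant (1/d) from i to j;
# the same radioactive decay constant applies to every compartment.

def step_compartments(amounts, transfers, decay, dt):
    flux = [0.0] * len(amounts)
    for (i, j), k in transfers.items():
        moved = k * amounts[i]
        flux[i] -= moved
        flux[j] += moved
    return [a + dt * (f - decay * a) for a, f in zip(amounts, flux)]

state = [1.0, 0.0]                    # 1 Bq in soil, none in plants
rates = {(0, 1): 0.05, (1, 0): 0.02}  # soil -> plant and plant -> soil
for _ in range(100):                  # simulate 100 days with dt = 1 d
    state = step_compartments(state, rates, decay=0.001, dt=1.0)
print([round(x, 3) for x in state])
```

Transfers conserve total activity, so only the decay term depletes the system; richer applications differ mainly in the size of the rate matrix.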

  18. Performance model for a CCTV-MTI

    International Nuclear Information System (INIS)

    Dunn, D.R.; Dunbar, D.L.

    1978-01-01

    CCTV-MTI (closed circuit television--moving target indicator) monitors represent typical components of access control systems, for example in a material control and accounting (MC and A) safeguards system. This report describes a performance model for a CCTV-MTI monitor. The performance of a human in an MTI role is a separate problem and is not addressed here. This work was done in conjunction with the NRC-sponsored LLL assessment procedure for MC and A systems, which is presently under development. We develop a noise model for a generic camera system and a model for the detection mechanism of a postulated MTI design. These models are then translated into an overall performance model. Measures of performance are the probabilities of detection and false alarm as a function of intruder-induced grey-level changes in the protected area. Sensor responsivity, lens F-number, source illumination and spectral response were treated as design parameters. Some specific results are illustrated for a postulated design employing a camera with a Si-target vidicon. Reflectance or light-level changes in excess of 10% due to an intruder will be detected with very high probability for the portion of the visible spectrum with wavelengths above 500 nm. The resulting false alarm rate was less than one per year. We did not address sources of nuisance alarms due to adverse environments, reliability, or resistance to tampering, nor did we examine the effects of the spatial frequency response of the optics. All of these are important and will influence overall system detection performance.
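The detection/false-alarm trade-off can be illustrated by idealising the MTI stage as a threshold test on inter-frame grey-level differences corrupted by Gaussian camera noise. A sketch under that assumption (ours, not the report's actual noise model):

```python
import math

def p_false_alarm(threshold, noise_sigma):
    # Probability that a noise-only grey-level difference exceeds the threshold.
    return 0.5 * math.erfc(threshold / (noise_sigma * math.sqrt(2.0)))

def p_detect(signal, threshold, noise_sigma):
    # Probability that an intruder-induced grey-level change is detected.
    return 0.5 * math.erfc((threshold - signal) / (noise_sigma * math.sqrt(2.0)))

sigma = 2.0                 # rms sensor noise in grey levels (invented)
thr = 5.0 * sigma           # 5-sigma threshold keeps false alarms rare
print(p_false_alarm(thr, sigma))
print(p_detect(12.0 * sigma, thr, sigma))
```

Raising the threshold drives the false-alarm rate toward "less than one per year" at the cost of requiring a larger intruder-induced change for detection.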

  19. Off-design thermodynamic performances on typical days of a 330 MW solar aided coal-fired power plant in China

    International Nuclear Information System (INIS)

    Peng, Shuo; Hong, Hui; Wang, Yanjuan; Wang, Zhaoguo; Jin, Hongguang

    2014-01-01

    Highlights: • Optical loss and heat loss of the solar field under different turbine loads were investigated. • Off-design thermodynamic features were disclosed by analyzing several operational parameters. • Possible schemes were proposed to improve the net solar-to-electricity efficiency. - Abstract: The contribution of mid-temperature solar thermal power to improving the performance of a coal-fired power plant is analyzed in the present paper. In the solar aided coal-fired power plant, solar heat at <300 °C is used to replace the steam extracted from the steam turbine to heat the feed water. In this way, the steam that would otherwise be extracted can continue to expand in the steam turbine to boost output power. The advantages of a solar aided coal-fired power plant under design conditions have been discussed by several researchers. However, thermodynamic performance under off-design operation has not been well discussed until now. In this paper, a typical 330 MW coal-fired power plant in Sinkiang Province of China is selected as the case study to demonstrate the advantages of the solar aided coal-fired power plant under off-design conditions. Hourly thermodynamic performances are analyzed on typical days under partial load. The effects of several operational parameters, such as solar irradiation intensity, incident angle and flow rate of thermal oil, on solar field efficiency and net solar-to-electricity efficiency were examined. Possible schemes have been proposed for improving the solar aided coal-fired power plant in off-design operation. The results obtained in the current study could provide a promising approach to addressing the poor thermodynamic performance of solar thermal power plants and also offer a basis for the practical operation of MW-scale solar aided coal-fired power plants.

  20. Probabilistic Radiological Performance Assessment Modeling and Uncertainty

    Science.gov (United States)

    Tauxe, J.

    2004-12-01

    A generic probabilistic radiological Performance Assessment (PA) model is presented. The model, built using the GoldSim systems simulation software platform, concerns contaminant transport and dose estimation in support of decision making with uncertainty. Both the U.S. Nuclear Regulatory Commission (NRC) and the U.S. Department of Energy (DOE) require assessments of potential future risk to human receptors from the disposal of low-level radioactive waste (LLW). Commercially operated LLW disposal facilities are licensed by the NRC (or agreement states), and the DOE operates such facilities for disposal of DOE-generated LLW. The type of PA model presented is probabilistic in nature, and hence reflects the current state of knowledge about the site by using probability distributions to capture what is expected (central tendency or average) and the uncertainty (e.g., standard deviation) associated with input parameters, and by propagating them through the model to arrive at output distributions that reflect expected performance and the overall uncertainty in the system. Estimates of contaminant release rates, concentrations in environmental media, and resulting doses to human receptors well into the future are made by running the model in Monte Carlo fashion, with each realization representing a possible combination of input parameter values. Statistical summaries of the results can be compared to regulatory performance objectives, and decision makers are better informed of the inherently uncertain aspects of the model that bear on their decisions. While this information may make some regulators uncomfortable, uncertainties that were hidden in a deterministic analysis are revealed in a probabilistic analysis, and the chance of making a correct decision is known rather than hoped for. The model includes many typical features and processes that would be part of a PA, but is entirely fictitious: it does not represent any particular site and is meant to be a generic example.
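The workflow described above can be caricatured in a few lines: sample uncertain inputs, push each realization through a (here, deliberately toy) transport-and-dose chain, and summarise the output distribution against a performance objective. All distributions, factors and the objective below are invented for illustration and have no connection to the GoldSim model:

```python
import math
import random

random.seed(42)
DOSE_OBJECTIVE_MSV = 0.25  # assumed annual dose objective, mSv/yr

def annual_dose(release_bq, dilution_per_l, dose_per_bq, intake_l=730.0):
    # release -> water concentration -> annual intake -> effective dose
    return release_bq * dilution_per_l * intake_l * dose_per_bq

doses = sorted(
    annual_dose(
        random.lognormvariate(math.log(1.0e6), 0.5),  # Bq/yr released
        random.uniform(1.0e-9, 5.0e-9),               # (Bq/L) per (Bq/yr)
        2.8e-8,                                       # mSv per Bq ingested
    )
    for _ in range(10_000)
)
median, p95 = doses[5_000], doses[9_500]
fraction_compliant = sum(d < DOSE_OBJECTIVE_MSV for d in doses) / len(doses)
print(median, p95, fraction_compliant)
```

The spread between the median and the 95th percentile is exactly the uncertainty a deterministic single-value analysis would hide.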

  1. Use of backboard and deflation improve quality of chest compression when cardiopulmonary resuscitation is performed on a typical air inflated mattress configuration.

    Science.gov (United States)

    Oh, Jaehoon; Kang, Hyunggoo; Chee, Youngjoon; Lim, Taeho; Song, Yeongtak; Cho, Youngsuk; Je, Sangmo

    2013-02-01

    No study has examined the effectiveness of backboards and air deflation for achieving adequate chest compression (CC) depth on air mattresses with the typical configurations seen in intensive care units. To determine this efficacy, we measured mattress compression depth (MCD, mm) on these surfaces using dual accelerometers. Eight cardiopulmonary resuscitation providers performed CCs on manikins lying on 4 different surfaces using a visual feedback system. The surfaces were as follows: A, a bed frame; B, a deflated air mattress placed on top of a foam mattress laid on a bed frame; C, a typical air mattress configuration with an inflated air mattress placed on a foam mattress laid on a bed frame; and D, C with a backboard. Deflation of the air mattress decreased MCD significantly (B; 14.74 ± 1.36 vs C; 30.16 ± 3.96), and deflation of the air mattress decreased MCD more than use of a backboard did (B; 14.74 ± 1.36 vs D; 25.46 ± 2.89, P = 0.002). The use of both a backboard and a deflated air mattress in this configuration reduces MCD and thus helps achieve accurate CC depth during cardiopulmonary resuscitation.
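The dual-accelerometer rationale reduces to simple arithmetic: the depth measured at the sternum is true chest compression plus mattress compression, so MCD must be subtracted before judging compliance. A sketch using the mean MCD values reported above (the 70 mm sternum displacement and 50 mm guideline target are our assumptions for illustration):

```python
GUIDELINE_MIN_MM = 50.0   # assumed minimum adult chest compression depth

def true_compression(sternum_depth_mm, mattress_depth_mm):
    # Displacement at the sternum minus mattress give-way (MCD).
    return sternum_depth_mm - mattress_depth_mm

# Mean MCD values reported above for surfaces B (deflated) and C (inflated),
# assuming the rescuer displaces the sternum by 70 mm in both cases.
for label, mcd in [("deflated", 14.74), ("inflated", 30.16)]:
    depth = true_compression(70.0, mcd)
    print(label, round(depth, 2), depth >= GUIDELINE_MIN_MM)
```

With these assumed numbers only the deflated surface leaves enough of the 70 mm displacement as true compression, which is the paper's practical point.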

  2. Effect of a typical in-season week on strength jump and sprint performances in national-level female basketball players.

    Science.gov (United States)

    Delextrat, A; Trochym, E; Calleja-González, J

    2012-04-01

    The aim of this study was to investigate the effect of a typical in-season week, including four practice sessions and one competitive game, on strength, jump and sprint performances in national-level female basketball players. Nine female basketball players (24.3±4.1 years old, 173.0±7.9 cm, 65.1±10.9 kg, 21.1±3.8% body fat) participated in ten testing sessions, before and immediately after practices and the game (five pre- and five post-tests). Each session involved isokinetic peak torque measurements of the quadriceps and hamstrings of the dominant leg at 60º.s-1, countermovement jump (CMJ) and 20-m sprint. Fluid loss and subjective training load were measured during each practice session, while the frequencies of the main movements performed during the game were recorded. A two-way ANOVA was used to assess the effect of each practice/game and the effect of the day of the week on performances, and the relationships between performance variations and the variables recorded during practices/game were analyzed by a Pearson correlation coefficient. Individual sessions induced significant decreases in lower limb strength (from 4.6 to 10.9%). Practitioners should take these acute decrements in strength and jump ability into account, and monitor the recovery of their players' strength, sprint and jump capacities following specific sessions.

  3. Behavior model for performance assessment.

    Energy Technology Data Exchange (ETDEWEB)

    Brown-VanHoozer, S. A.

    1999-07-23

    Every individual channels information differently, based on a preference for the sensory modality or representational system (visual, auditory or kinesthetic) that he or she tends to favor most (the primary representational system, PRS). Some of us therefore access and store our information primarily visually, some auditorily, and others kinesthetically (through feel and touch), which in turn establishes our information processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of these different ways of channeling information, each of us responds differently to a task: the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive and motor systems stimulated and influenced by the three sensory modalities: visual, auditory and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place, and of how to ask questions, is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, which may be summed and averaged to generalize behavior. However, without a basic understanding of how behavior is predicated on a decision-making strategy process, such predictive models are inefficient in their analysis of the means by which behavior was generated: what is seen is only the end result.

  4. Behavior model for performance assessment

    International Nuclear Information System (INIS)

    Brown-VanHoozer, S. A.

    1999-01-01

    Every individual channels information differently, based on a preference for the sensory modality or representational system (visual, auditory or kinesthetic) that he or she tends to favor most (the primary representational system, PRS). Some of us therefore access and store our information primarily visually, some auditorily, and others kinesthetically (through feel and touch), which in turn establishes our information processing patterns and strategies and our external-to-internal (and subsequently vice versa) experiential language representation. Because of these different ways of channeling information, each of us responds differently to a task: the way we gather and process the external information (input), our response time (process), and the outcome (behavior). Traditional human models of decision making and response time focus on perception, cognitive and motor systems stimulated and influenced by the three sensory modalities: visual, auditory and kinesthetic. For us, these are the building blocks to knowing how someone is thinking. Being aware of what is taking place, and of how to ask questions, is essential in assessing performance toward reducing human errors. Existing models give predictions based on time values or response times for a particular event, which may be summed and averaged to generalize behavior. However, without a basic understanding of how behavior is predicated on a decision-making strategy process, such predictive models are inefficient in their analysis of the means by which behavior was generated: what is seen is only the end result.

  5. Investigating the Effects of Typical Rowing Strength Training Practices on Strength and Power Development and 2,000 m Rowing Performance

    Directory of Open Access Journals (Sweden)

    Ian Gee Thomas

    2016-04-01

    This study aimed to determine the effects of a short-term strength training intervention, typically undertaken by club-standard rowers, on 2,000 m rowing performance and strength and power development. Twenty-eight male rowers were randomly assigned to intervention or control groups. All participants performed baseline testing involving assessments of muscle soreness, creatine kinase activity (CK), maximal voluntary contraction of the leg-extensors (MVC), static-squat jumps (SSJ), counter-movement jumps (CMJ), maximal rowing power strokes (PS) and a 2,000 m rowing ergometer time-trial (2,000 m) with accompanying respiratory-exchange and electromyography (EMG) analysis. Intervention group participants subsequently performed three identical strength training (ST) sessions in the space of five days, repeating all assessments 24 h following the final ST. The control group completed the same testing procedure but with no ST. Following ST, the intervention group experienced significant elevations in soreness and CK activity, and decrements in MVC, SSJ, CMJ and PS (p < 0.01). However, 2,000 m rowing performance, pacing strategy and gas exchange were unchanged across trials in either condition. Following ST, significant increases occurred for EMG (p < 0.05), and there were non-significant trends for decreased blood lactate and anaerobic energy liberation (p = 0.063-0.086). In summary, club-standard rowers, following an intensive period of strength training, maintained their 2,000 m rowing performance despite suffering symptoms of muscle damage and disruption to muscle function. This disruption likely reflected the presence of acute residual fatigue, potentially in type II muscle fibres, as strength and power development were affected.

  6. Model Performance Evaluation and Scenario Analysis (MPESA)

    Science.gov (United States)

    Model Performance Evaluation and Scenario Analysis (MPESA) assesses the performance with which models predict time series data. The tool was developed for use with the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM).

  7. Relationship of medial gastrocnemius relative fascicle excursion and ankle joint power and work performance during gait in typically developing children: A cross-sectional study.

    Science.gov (United States)

    Martín Lorenzo, Teresa; Albi Rodríguez, Gustavo; Rocon, Eduardo; Martínez Caballero, Ignacio; Lerma Lara, Sergio

    2017-07-01

    Muscle fascicles lengthen in response to chronic passive stretch through in-series sarcomere addition, which maintains an optimum sarcomere length. In turn, the muscle's force-generating capacity, maximum excursion, and contraction velocity are enhanced. Thus, longer fascicles suggest a greater capacity to develop joint power and work. However, static fascicle length measurements may not take sarcomere length differences into account. We therefore considered that relative fascicle excursion through passive ankle dorsiflexion may correlate better with the capacity to generate joint power and work than fascicle length does. The aim of the present study was thus to determine whether medial gastrocnemius relative fascicle excursion correlates with ankle joint power and work generation during gait in typically developing children. A sample of typically developing children (n = 10) was recruited for this study, and data analysis was carried out on 20 legs. Medial gastrocnemius relative fascicle excursion from the resting joint angle to maximum dorsiflexion was estimated from trigonometric relations of medial gastrocnemius pennation angle and thickness obtained from B-mode real-time ultrasonography. Furthermore, a three-dimensional motion capture system was used to obtain ankle joint work and power during the stance phase of gait. Significant correlations were found between relative fascicle excursion and peak power absorption (r(14) = -0.61, P = .012), accounting for 31% of variability; positive work (r(18) = 0.56, P = .021), accounting for 31% of variability; and late-stance positive work (r(15) = 0.51, P = .037), accounting for 26% of variability. The large unexplained variance may be attributed to the mechanics of neighboring structures (e.g., soleus or Achilles tendon mechanics) and proximal joint kinetics, which may also contribute to ankle joint power and work performance and were not taken into account. Further studies are encouraged to provide greater insight.

  8. Research on Soft Reduction Amount Distribution to Eliminate Typical Inter-dendritic Crack in Continuous Casting Slab of X70 Pipeline Steel by Numerical Model

    Science.gov (United States)

    Liu, Ke; Wang, Chang; Liu, Guo-liang; Ding, Ning; Sun, Qi-song; Tian, Zhi-hong

    2017-04-01

    To investigate the formation of one kind of typical inter-dendritic crack around the triple point region in continuous casting (CC) slabs during the operation of soft reduction, a fully coupled 3D thermo-mechanical finite element model was developed, and plant trials were carried out on a domestic continuous casting machine. Three possible types of soft reduction amount distribution (SRAD) in the soft reduction region were analyzed. The relationship between the typical inter-dendritic cracks and soft reduction conditions is presented and demonstrated in production practice. Considering the critical strain of internal crack formation, a critical tolerance for the soft reduction amount distribution and related casting parameters has been proposed for a better contribution of soft reduction to the internal quality of slabs. The typical inter-dendritic crack around the triple point region has been eliminated effectively through the application of the proposed suggestions for continuous casting of X70 pipeline steel in industrial practice.

  9. Calibration of PMIS pavement performance prediction models.

    Science.gov (United States)

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models by calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patterns ...

  10. Research on the recycling industry development model for typical exterior plastic components of end-of-life passenger vehicle based on the SWOT method.

    Science.gov (United States)

    Zhang, Hongshen; Chen, Ming

    2013-11-01

    In-depth studies on the recycling of typical automotive exterior plastic parts are significant and beneficial for environmental protection, energy conservation, and the sustainable development of China. In the current study, several methods were used to analyze the recycling industry model for typical exterior parts of passenger vehicles in China. The strengths, weaknesses, opportunities, and challenges of the current recycling industry for typical exterior parts of passenger vehicles were analyzed comprehensively based on the SWOT method. The internal factor evaluation matrix and external factor evaluation matrix were used to evaluate the internal and external factors of the recycling industry. The recycling industry was found to respond well to all the factors and to face good development opportunities. Cross-link strategy analysis for the typical exterior parts of the passenger car industry of China was then conducted based on the SWOT analysis strategies and the established SWOT matrix. Finally, based on the aforementioned research, the recycling industry model led by automobile manufacturers was promoted. Copyright © 2013 Elsevier Ltd. All rights reserved.
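The IFE/EFE evaluation mentioned above boils down to weighted-rating sums. A sketch of that arithmetic with invented weights and ratings (the paper's actual factors and scores are not reproduced here):

```python
# Each factor carries a weight (importance; weights sum to 1) and a rating
# (1-4 response strength). A weighted total above 2.5 signals a position
# that responds well to its internal/external factors.

def matrix_score(factors):
    total_weight = sum(w for w, _ in factors)
    assert abs(total_weight - 1.0) < 1e-6, "weights must sum to 1"
    return sum(w * r for w, r in factors)

ife = [(0.30, 3), (0.25, 4), (0.25, 2), (0.20, 2)]   # invented internal factors
efe = [(0.40, 3), (0.35, 3), (0.25, 2)]              # invented external factors
print(round(matrix_score(ife), 2), round(matrix_score(efe), 2))
```

Totals above the 2.5 midpoint on both matrices correspond to the "responds well to all the factors" conclusion drawn in the abstract.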

  11. Experimental simulation and numerical modeling of vapor shield formation and divertor material erosion for ITER typical plasma disruptions

    Energy Technology Data Exchange (ETDEWEB)

    Wuerz, H. [Kernforschungszentrum Karlsruhe, INR, Postfach 36 40, D-76021 Karlsruhe (Germany); Arkhipov, N.I. [Troitsk Institute for Innovation and Fusion Research, 142092 Troitsk (Russian Federation); Bakhtin, V.P. [Troitsk Institute for Innovation and Fusion Research, 142092 Troitsk (Russian Federation); Konkashbaev, I. [Troitsk Institute for Innovation and Fusion Research, 142092 Troitsk (Russian Federation); Landman, I. [Troitsk Institute for Innovation and Fusion Research, 142092 Troitsk (Russian Federation); Safronov, V.M. [Troitsk Institute for Innovation and Fusion Research, 142092 Troitsk (Russian Federation); Toporkov, D.A. [Troitsk Institute for Innovation and Fusion Research, 142092 Troitsk (Russian Federation); Zhitlukhin, A.M. [Troitsk Institute for Innovation and Fusion Research, 142092 Troitsk (Russian Federation)

    1995-04-01

    The high divertor heat load during a tokamak plasma disruption results in sudden evaporation of a thin layer of divertor plate material, which acts as a vapor shield and protects the target from further excessive evaporation. Formation and effectiveness of the vapor shield are theoretically modeled and experimentally analyzed at the 2MK-200 facility under conditions simulating the thermal quench phase of ITER tokamak plasma disruptions. ((orig.)).

  12. Numerical modeling and experimental simulation of vapor shield formation and divertor material erosion for ITER typical plasma disruptions

    International Nuclear Information System (INIS)

    Wuerz, H.; Arkhipov, N.I.; Bakhin, V.P.; Goel, B.; Hoebel, W.; Konkashbaev, I.; Landman, I.; Piazza, G.; Safronov, V.M.; Sherbakov, A.R.; Toporkov, D.A.; Zhitlukhin, A.M.

    1994-01-01

    The high divertor heat load during a tokamak plasma disruption results in sudden evaporation of a thin layer of divertor plate material, which acts as a vapor shield and protects the target from further excessive evaporation. Formation and effectiveness of the vapor shield are theoretically modeled and experimentally investigated at the 2MK-200 facility under conditions simulating the thermal quench phase of ITER tokamak plasma disruptions. In the optical wavelength range, C II, C III and C IV emission lines for graphite, Cu I and Cu II lines for copper, and continuum radiation for tungsten samples are observed in the target plasma. The plasma expands along the magnetic field lines with velocities of (4±1)×10^6 cm/s for graphite and 10^5 cm/s for copper. Modeling was done with a radiation hydrodynamics code in one-dimensional planar geometry. The multifrequency radiation transport is treated in flux-limited diffusion and in forward-reverse transport approximation. In these first modeling studies, the overall shielding efficiency for carbon and tungsten, defined as the ratio of the incident energy to the vaporization energy, exceeds a factor of 30 for power densities of 10 MW/cm^2. The vapor shield is established within 2 μs; the power fraction reaching the target after 10 μs is below 3% and reaches, in the stationary state after about 20 μs, a value of around 1.5%. ((orig.))

  13. Physics based performance model of a UV missile seeker

    Science.gov (United States)

    James, I.

    2017-10-01

    Electro-optically (EO) guided surface-to-air missiles (SAMs) have developed to use ultraviolet (UV) wavebands supplementary to the more common infrared (IR) wavebands. Missiles such as the US Stinger have been around for some time; these have recently been joined by the Chinese FN-16 and the Russian SA-29 (Verba), and there is a much higher potential proliferation risk. The purpose of this paper is to introduce a first-principles, physics-based model of a typical seeker arrangement. The model is constructed from various calculations that aim to characterise the physical effects that will affect the performance of the system. Data has been gathered from a number of sources to provide realism to the variables within the model. It will be demonstrated that many of the variables have the power to dramatically alter the performance of the system as a whole. Further, data will be shown to illustrate the expected performance of a typical UV detector within a SAM in terms of detection range against a variety of target sizes. The trend of detection range against aircraft size and skin reflectivity will be shown to be non-linear; this should be expected owing to the exponential decay of a signal through the atmosphere. Future work will validate the model against real-world performance data for cameras (when this is available) to ensure that it operates within acceptable errors.
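The non-linear range trend noted above follows from a signal budget of the form S ∝ A·ρ·e^(-αR)/R²: quadrupling target area does not double the detection range once atmospheric extinction bites. A first-order sketch with invented values for the extinction coefficient, reflectivity and detection threshold (not parameters from the paper):

```python
import math

def received_signal(area_m2, rho, range_m, alpha_per_m=1.0e-4):
    # Target contrast signal: area * reflectivity, attenuated exponentially
    # by the atmosphere and geometrically by 1/R^2.
    return area_m2 * rho * math.exp(-alpha_per_m * range_m) / range_m ** 2

def detection_range(area_m2, rho, threshold, step=10.0):
    # Largest range (m) at which the signal still exceeds the threshold.
    r = step
    while received_signal(area_m2, rho, r + step) > threshold:
        r += step
    return r

small = detection_range(10.0, 0.3, 1.0e-9)   # small airframe
large = detection_range(40.0, 0.3, 1.0e-9)   # 4x the presented area
print(small, large, round(large / small, 2))  # well under a 2x range gain
```

Without the exponential term the range would scale as the square root of area; the extinction term compresses the gain further, giving the non-linear trend.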

  14. Human Performance Models of Pilot Behavior

    Science.gov (United States)

    Foyle, David C.; Hooey, Becky L.; Byrne, Michael D.; Deutsch, Stephen; Lebiere, Christian; Leiden, Ken; Wickens, Christopher D.; Corker, Kevin M.

    2005-01-01

    Five modeling teams from industry and academia were chosen by the NASA Aviation Safety and Security Program to develop human performance models (HPMs) of pilots performing taxi operations and runway instrument approaches with and without advanced displays. One representative from each team will serve as a panelist to discuss their team's model architecture, augmentations and advancements to HPMs, and aviation-safety-related lessons learned. Panelists will discuss how modeling results are influenced by a model's architecture and structure, the role of the external environment, specific modeling advances, and future directions and challenges for human performance modeling in aviation.

  15. The Optimal Price Ratio of Typical Energy Sources in Beijing Based on the Computable General Equilibrium Model

    Directory of Open Access Journals (Sweden)

    Yongxiu He

    2014-04-01

    In Beijing, China, the rational consumption of energy is affected by the insufficient linkage mechanism of the energy pricing system, the unreasonable price ratio and other issues. This paper combines the characteristics of Beijing's energy market, putting forward maximization of the society-economy equilibrium indicator R, taking the mitigation cost into consideration, to determine a reasonable price-ratio range. Based on the computable general equilibrium (CGE) model, and dividing four kinds of energy sources into three groups, the impact of price fluctuations of electricity and natural gas on the Gross Domestic Product (GDP), Consumer Price Index (CPI), energy consumption and CO2 and SO2 emissions can be simulated for various scenarios. On this basis, the integrated effects of electricity and natural gas price shocks on the Beijing economy and environment can be calculated. The results show that, relative to coal prices, the electricity and natural gas prices in Beijing are currently below reasonable levels; the solution to these unreasonable energy price ratios should begin by improving the energy pricing mechanism, through means such as the establishment of a sound dynamic adjustment mechanism between regulated prices and market prices. This provides a new idea for exploring the rationality of energy price ratios in imperfectly competitive energy markets.

  16. Modelling and Motivating Academic Performance.

    Science.gov (United States)

    Brennan, Geoffrey; Pettit, Philip

    1991-01-01

    Three possible motivators for college teachers (individual economic interest, academic virtue, and academic honor) suggest mechanisms that can be used to improve performance. Policies need to address all three motivators; economic levers alone may undermine alternative ways of supporting good work. (MSE)

  17. [Optimization of sample pretreatment method for the determination of typical artificial sweeteners in soil by high performance liquid chromatography-tandem mass spectrometry].

    Science.gov (United States)

    Feng, Biting; Gan, Zhiwei; Hu, Hongwei; Sun, Hongwen

    2014-09-01

    The sample pretreatment method for the determination of four typical artificial sweeteners (ASs), including sucralose, saccharin, cyclamate and acesulfame, in soil by high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) was optimized. Different extraction conditions were compared: four extractants (methanol, acetonitrile, acetone, deionized water), three ionic strengths of sodium acetate solution (0.001, 0.01, 0.1 mol/L), four pH values (3, 4, 5 and 6) of 0.01 mol/L acetate-sodium acetate solution, four extraction durations (20, 40, 60, 120 min) and the number of extraction cycles (1, 2, 3, 4 times). The optimal sample pretreatment method was finally established. The samples were extracted twice with 25 mL of 0.01 mol/L sodium acetate solution (pH 4) for 20 min per cycle. The extracts were combined, then purified and concentrated on CNW Poly-Sery PWAX cartridges with methanol containing 1 mmol/L tris(hydroxymethyl)aminomethane (Tris) and 5% (v/v) ammonium hydroxide as the eluent. The analytes were determined by HPLC-MS/MS. Recoveries were obtained by spiking soil with the four artificial sweeteners at 1, 10 and 100 μg/kg (dry weight), separately. The average recoveries of the analytes ranged from 86.5% to 105%. The intra-day and inter-day precisions, expressed as relative standard deviations (RSDs), were in the ranges of 2.56%-5.94% and 3.99%-6.53%, respectively. Good linearities (r^2 > 0.995) were observed over 1-100 μg/kg (dry weight) for all the compounds. The limits of detection were 0.01-0.21 μg/kg and the limits of quantification were 0.03-0.70 μg/kg for the analytes. The four artificial sweeteners were determined in soil samples from farmland contaminated by wastewater in Tianjin. This method is rapid, reliable, and suitable for the investigation of artificial sweeteners in soil.

  18. Cognitive performance modeling based on general systems performance theory.

    Science.gov (United States)

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  19. The nonlinear unloading behavior of a typical Ni-based superalloy during hot deformation: a unified elasto-viscoplastic constitutive model

    Science.gov (United States)

    Chen, Ming-Song; Lin, Y. C.; Li, Kuo-Kuo; Chen, Jian

    2016-09-01

    In the authors' previous work (Chen et al. in Appl Phys A. doi: 10.1007/s00339-016-0371-6, 2016), the nonlinear unloading behavior of a typical Ni-based superalloy was investigated by hot compressive experiments with intermediate unloading-reloading cycles. The characteristics of the unloading curves were discussed in detail, and a new elasto-viscoplastic constitutive model was proposed to describe the nonlinear unloading behavior of the studied Ni-based superalloy. However, the functional relationships between deformation temperature, strain rate, pre-strain and the parameters of that constitutive model remained to be established. In this study, the effects of deformation temperature, strain rate and pre-strain on the parameters of the constitutive model proposed in the authors' previous work (Chen et al. 2016) are analyzed, and a unified elasto-viscoplastic constitutive model is proposed to predict the unloading behavior at arbitrary deformation temperature, strain rate and pre-strain.

  20. Assembly line performance and modeling

    Science.gov (United States)

    Rane, Arun B.; Sunnapwar, Vivek K.

    2017-09-01

    The automobile sector forms the backbone of the manufacturing industry. The vehicle assembly line is an important section of an automobile plant, where repetitive tasks are performed one after another at different workstations. In this work, a methodology is proposed to reduce cycle time and the time lost to important factors such as equipment failure, inventory shortage, absenteeism, set-up, material handling, rejection and fatigue, so as to improve output within given cost constraints. Relationships between these factors, the corresponding costs and output are established by a scientific approach. The methodology is validated in three different vehicle assembly plants and may help practitioners optimize assembly lines using lean techniques.

  1. Generalization performance of regularized neural network models

    DEFF Research Database (Denmark)

    Larsen, Jan; Hansen, Lars Kai

    1994-01-01

    Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...

  2. Constraining parameters in state-of-the-art marine pelagic biogeochemical models. Is it sufficient to use typical observations of standing-stocks?

    Science.gov (United States)

    Loeptien, Ulrike; Dietze, Heiner

    2014-05-01

    In order to constrain potential feedbacks in the climate system, simple pelagic biogeochemical models (BGCMs) are coupled to 3-dimensional ocean-atmosphere models. These so-called earth system models are frequently applied to calculate climate projections. All BGCMs rely on a set of rather uncertain parameters, among them the Michaelis-Menten (MM) constants used in the hyperbolic MM formulation (which specifies the limiting effect of light and nutrients on carbon assimilation by autotrophic phytoplankton). Model parameters are typically tuned in rather subjective trial-and-error exercises, in which the parameters are changed manually until a "reasonable" similarity with observed standing stocks is achieved. In the present study, we explore with twin experiments (synthetic "observations") the demands on observations that would allow for a more objective estimation of model parameters. These parameter retrieval experiments are based on "perfect" (synthetic) observations which we, step by step, distort to approach realistic conditions. Finally, we confirm our findings with real-world observations. In summary, we find that even modest noise (10%) inherent to observations can already hinder the parameter retrieval; the MM constants in particular are hard to constrain. This is of concern because the MM parameters are key to the model's sensitivity to anticipated changes in external conditions.
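
    The hyperbolic MM limitation term, and how observational noise hampers retrieval of its half-saturation constant, can be illustrated with a toy twin experiment (a grid-search fit to synthetic observations; all numbers are invented, not the study's setup):

```python
import random

VMAX, K_TRUE = 1.0, 0.5   # "true" parameters used to generate synthetic observations

def mm_uptake(nutrient, vmax, k):
    """Hyperbolic Michaelis-Menten limitation: uptake = vmax * N / (K + N)."""
    return vmax * nutrient / (k + nutrient)

random.seed(42)
nutrients = [0.1 * i for i in range(1, 21)]   # synthetic nutrient concentrations

def retrieve_k(noise):
    """Grid-search the half-saturation constant that best fits observations
    distorted by multiplicative noise of the given relative magnitude."""
    obs = [mm_uptake(n, VMAX, K_TRUE) * (1 + random.uniform(-noise, noise))
           for n in nutrients]
    candidates = [0.05 * j for j in range(1, 41)]   # trial K values in [0.05, 2.0]
    def sse(k):
        return sum((o - mm_uptake(n, VMAX, k)) ** 2 for n, o in zip(nutrients, obs))
    return min(candidates, key=sse)

print(retrieve_k(0.0))    # a noise-free twin experiment recovers K
print(retrieve_k(0.10))   # 10% noise can already shift the retrieved K
```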

  3. Constrained bayesian inference of project performance models

    OpenAIRE

    Sunmola, Funlade

    2013-01-01

    Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability, such as indications of possible delivery problems. Approaches for monitoring project performance rely on available project information, including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...

  4. ORGANIZATIONAL LEARNING AND PERFORMANCE. A CONCEPTUAL MODEL

    OpenAIRE

    Alexandra Luciana GUÞÃ

    2013-01-01

    Through this paper, our main objective is to propose a conceptual model that links the notions of organizational learning (as capability and as a process) and organizational performance. Our contribution consists in analyzing the literature on organizational learning and organizational performance and in proposing an integrated model, that comprises: organizational learning capability, the process of organizational learning, organizational performance, human capital (the value and uniqueness...

  5. Is our Universe typical?

    International Nuclear Information System (INIS)

    Gurzadyan, V.G.

    1988-01-01

    The problem of the typicalness of the Universe - as a dynamical system possessing both regular and chaotic regions of positive measure of phase space - is raised and discussed. Two dynamical systems are considered: 1) the observed Universe as a hierarchy of systems of N gravitating bodies; 2) a (3+1)-manifold with matter evolving according to the Wheeler-DeWitt equation in superspace with the Hawking boundary condition of compact metrics. It is shown that the observed Universe is typical. There is no unambiguous answer for the second system yet. If it is typical too, then the same present state of the Universe could have originated from an infinite number of different initial conditions, the restoration of which is practically impossible at present. 35 refs.; 2 refs

  6. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for the assessment of the performance of a given model by using an example taken from a management study.
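
    Two of the quantitative performance measures such a procedure might use — classification accuracy at a cut-off and a rank-based AUC — can be computed directly from held-out predictions. The labels and probabilities below are invented for illustration:

```python
# Two simple performance measures for a fitted logistic regression model,
# computed on hypothetical held-out data (labels and predicted probabilities).

def accuracy(labels, probs, threshold=0.5):
    """Fraction of cases classified correctly at the given cut-off."""
    hits = sum((p >= threshold) == bool(y) for y, p in zip(labels, probs))
    return hits / len(labels)

def auc(labels, probs):
    """Area under the ROC curve via pairwise rank comparison (ties count 0.5)."""
    pos = [p for y, p in zip(labels, probs) if y]
    neg = [p for y, p in zip(labels, probs) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [1, 0, 1, 1, 0, 0, 1, 0]
p = [0.9, 0.2, 0.7, 0.6, 0.4, 0.3, 0.55, 0.65]
print(accuracy(y, p))  # → 0.875
print(auc(y, p))       # → 0.875
```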

  7. Performance of GeantV EM Physics Models

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2017-10-01

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-cores technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architecture. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  8. Performance of GeantV EM Physics Models

    CERN Document Server

    Amadio, G; Apostolakis, J; Aurora, A; Bandieramonte, M; Bhattacharyya, A; Bianchini, C; Brun, R; Canal P; Carminati, F; Cosmo, G; Duhem, L; Elvira, D; Folger, G; Gheata, A; Gheata, M; Goulas, I; Iope, R; Jun, S Y; Lima, G; Mohanty, A; Nikitina, T; Novak, M; Pokorski, W; Ribon, A; Seghal, R; Shadura, O; Vallecorsa, S; Wenzel, S; Zhang, Y

    2017-01-01

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-cores technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architecture. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  9. Performance of GeantV EM Physics Models

    Energy Technology Data Exchange (ETDEWEB)

    Amadio, G.; et al.

    2016-10-14

    The recent progress in parallel hardware architectures with deeper vector pipelines or many-cores technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains in propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architecture. Due to the complexity of geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable in identifying factors limiting parallel execution. In this report, we will present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  10. The influence of climatic changes on distribution pattern of six typical Kobresia species in Tibetan Plateau based on MaxEnt model and geographic information system

    Science.gov (United States)

    Hu, Zhongjun; Guo, Ke; Jin, Shulan; Pan, Huahua

    2018-01-01

    The influence of climatic change on species distribution is currently of great interest in biogeography. Six typical Kobresia species, high-quality forage for local husbandry, were selected from the alpine grassland of the Tibetan Plateau (TP) as research objects, and their distribution changes were modeled for four periods by using the MaxEnt model and GIS technology. The modeling results show that the distribution of these six typical Kobresia species in the TP was strongly affected by two factors: the annual precipitation and the precipitation in the wettest and driest quarters of the year. The most suitable habitats of K. pygmeae were located in the area around Qinghai Lake, the Hengduan-Himalayan mountain area, and the hinterland of the TP. The most suitable habitats of K. humilis were mainly located in the area around Qinghai Lake and the hinterland of the TP during the Last Interglacial period, and gradually merged into a larger area. K. robusta and K. tibetica were located in the area around Qinghai Lake and the hinterland of the TP, but these areas did not merge into one at any time. K. capillifolia were located in the area around Qinghai Lake and extended to the southwest of the original distribution area, whereas K. macrantha, which had the smallest distribution area among them, were mainly distributed along the Himalayan mountain chain. All six Kobresia species can be divided into four types of "retreat/expansion" styles according to the changes in suitable habitat areas during the four periods. These change styles are the result of long-term adaptation of the different species to local climate changes in regions of the TP and show the complexity of the relationships between species and climate. The results are of positive reference value for the protection of species diversity and the sustainable development of local husbandry in the TP.

  11. Typical Complexity Numbers

    Indian Academy of Sciences (India)

    Typical complexity numbers: say 1000 tones, 100 users, and a transmission every 10 msec. Full crosstalk cancellation requires a matrix multiplication of order 100*100 for all the tones, i.e. 1000*100*100*100 operations every second for the ...
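
    The slide's operation count follows directly from the stated numbers (a transmission every 10 ms means 100 transmissions per second):

```python
# Back-of-envelope crosstalk-cancellation workload from the slide's numbers.
tones = 1000
users = 100
transmissions_per_second = 100   # one transmission every 10 ms

# Full cancellation applies a users-by-users matrix per tone,
# i.e. about users**2 multiply-adds per tone per transmission.
ops_per_second = tones * users * users * transmissions_per_second
print(ops_per_second)  # → 1000000000, on the order of 1e9 operations per second
```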

  12. Photovoltaic performance models - A report card

    Science.gov (United States)

    Smith, J. H.; Reiter, L. R.

    1985-01-01

    Models for the analysis of photovoltaic (PV) system designs, implementation policies, and economic performance have proliferated, keeping pace with rapid changes in basic PV technology and with the extensive empirical data compiled on such systems' performance. Attention is presently given to the results of a comparative assessment of ten well-documented and widely used models, which range in complexity from first-order approximations of PV system performance to in-depth, circuit-level characterizations. The comparisons were made on the basis of the performance of their subsystem, as well as system, elements. In light of their degree of aggregation into subsystems, the models fall into three categories: (1) simplified models for first-order calculation of system performance, with easily met input requirements but limited capability to address more than a small variety of design considerations; (2) models simulating PV systems in greater detail, encompassing types primarily intended for either concentrator-incorporating or flat-plate-collector PV systems; and (3) models not specifically designed for PV system performance modeling but applicable to aspects of electrical system design. Models that ignore subsystem failure or degradation are noted to exclude operating and maintenance characteristics as well.

  13. Performance modeling of parallel algorithms for solving neutron diffusion problems

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1995-01-01

    Neutron diffusion calculations are the most common computational methods used in the design, analysis, and operation of nuclear reactors and related activities. Here, mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message passing and shared memory multiprocessors represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The performance models are validated through several test problems, and these models are used to estimate the performance of each of the two considered architectures in situations typical of practical applications, such as fine meshes and a large number of participating processors. While message passing computers are capable of producing speedup, the parallel efficiency deteriorates rapidly as the number of processors increases. Furthermore, the speedup fails to improve appreciably for massively parallel computers so that only small- to medium-sized message passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the performance model for the shared memory architecture predicts very high efficiency over a wide range of number of processors reasonable for this architecture. Furthermore, the model efficiency of the Sequent remains superior to that of the hypercube if its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared memory computers are better suited for this parallel algorithm than message passing computers
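
    The qualitative behaviour the models predict for the message-passing machine — speedup that deteriorates as processor count grows — can be mimicked by a generic fixed-problem-size model with a communication term that grows with the number of processors. The constants below are illustrative, not the paper's fitted parameters:

```python
def speedup(p, serial_frac=0.02, comm_cost=0.01):
    """Toy fixed-size performance model: T(p) = serial + parallel/p + comm(p),
    where the communication term grows linearly with processor count."""
    t1 = 1.0
    tp = serial_frac * t1 + (1.0 - serial_frac) * t1 / p + comm_cost * (p - 1)
    return t1 / tp

for p in (1, 4, 16, 64):
    s = speedup(p)
    print(p, round(s, 2), round(s / p, 3))   # processors, speedup, parallel efficiency
```

    With these constants the speedup peaks and then collapses, i.e. only small- to medium-sized processor counts pay off, which is the pattern the abstract describes for the hypercube.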

  14. Product Data Model for Performance-driven Design

    Science.gov (United States)

    Hu, Guang-Zhong; Xu, Xin-Jian; Xiao, Shou-Ne; Yang, Guang-Wu; Pu, Fan

    2017-09-01

    When designing large-sized complex machinery products, the design focus is always on overall performance; however, no performance-driven design theory and method has existed. In view of this deficiency in existing design theory, and according to the performance features of complex mechanical products, performance indices are introduced into the traditional design theory of "Requirement-Function-Structure" to construct a new five-domain design theory of "Client Requirement-Function-Performance-Structure-Design Parameter". To support design practice based on this new theory, a product data model is established by using performance indices and the mapping relationships between them and the other four domains. When the product data model is applied to high-speed train design, combined with existing research results and relevant standards, the corresponding data model and its structure involving the five domains of high-speed trains are established, which can provide technical support for studying the relationships between typical performance indices and design parameters and for quickly achieving a high-speed train scheme design. The five domains provide a reference for the design specifications and evaluation criteria of high-speed trains and a new idea for the train's parameter design.

  15. Studies in Software Cost Model Behavior: Do We Really Understand Cost Model Performance?

    Science.gov (United States)

    Lum, Karen; Hihn, Jairus; Menzies, Tim

    2006-01-01

    While there exists extensive literature on software cost estimation techniques, industry practice continues to rely upon standard regression-based algorithms. These software effort models are typically calibrated or tuned to local conditions using local data. This paper cautions that current approaches to model calibration often produce sub-optimal models because of the large variance problem inherent in cost data and by including far more effort multipliers than the data supports. Building optimal models requires that a wider range of models be considered while correctly calibrating these models requires rejection rules that prune variables and records and use multiple criteria for evaluating model performance. The main contribution of this paper is to document a standard method that integrates formal model identification, estimation, and validation. It also documents what we call the large variance problem that is a leading cause of cost model brittleness or instability.

  16. Iowa calibration of MEPDG performance prediction models.

    Science.gov (United States)

    2013-06-01

    This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement : performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 : representative p...

  17. Typicality and reasoning fallacies.

    Science.gov (United States)

    Shafir, E B; Smith, E E; Osherson, D N

    1990-05-01

    The work of Tversky and Kahneman on intuitive probability judgment leads to the following prediction: The judged probability that an instance belongs to a category is an increasing function of the typicality of the instance in the category. To test this prediction, subjects in Experiment 1 read a description of a person (e.g., "Linda is 31, bright, ... outspoken") followed by a category. Some subjects rated how typical the person was of the category, while others rated the probability that the person belonged to that category. For categories like bank teller and feminist bank teller: (1) subjects rated the person as more typical of the conjunctive category (a conjunction effect); (2) subjects rated it more probable that the person belonged to the conjunctive category (a conjunction fallacy); and (3) the magnitudes of the conjunction effect and fallacy were highly correlated. Experiment 2 documents an inclusion fallacy, wherein subjects judge, for example, "All bank tellers are conservative" to be more probable than "All feminist bank tellers are conservative." In Experiment 3, results parallel to those of Experiment 1 were obtained with respect to the inclusion fallacy.

  18. Effects of free ammonia on volatile fatty acid accumulation and process performance in the anaerobic digestion of two typical bio-wastes.

    Science.gov (United States)

    Shi, Xuchuan; Lin, Jia; Zuo, Jiane; Li, Peng; Li, Xiaoxia; Guo, Xianglin

    2017-05-01

    The effect of free ammonia on volatile fatty acid (VFA) accumulation and process instability was studied using a lab-scale anaerobic digester fed by two typical bio-wastes: fruit and vegetable waste (FVW) and food waste (FW) at 35°C with an organic loading rate (OLR) of 3.0 kg VS/(m³·day). The inhibitory effects of free ammonia on methanogenesis were observed due to the low C/N ratio of each substrate (15.6 and 17.2, respectively). A high concentration of free ammonia inhibited methanogenesis, resulting in the accumulation of VFAs and a low methane yield. In the inhibited state, acetate accumulated more quickly than propionate and was the main type of accumulated VFA. The co-accumulation of ammonia and VFAs led to an "inhibited steady state", and ammonia was the main inhibitory substance that triggered the process perturbation. By statistical significance testing and VFA fluctuation ratio analysis, the free ammonia inhibition threshold was identified as 45 mg/L. Moreover, propionate, iso-butyrate and valerate were determined to be the three VFA parameters most sensitive to ammonia inhibition. Copyright © 2016. Published by Elsevier B.V.
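
    Free ammonia is usually not measured directly but computed from total ammonia nitrogen (TAN), pH and temperature; a commonly used choice is the Anthonisen relation, sketched here with an invented digester state (the numbers are illustrative, not the paper's data):

```python
def free_ammonia(tan_mg_l, ph, temp_c):
    """Free (un-ionized) NH3 from total ammonia nitrogen, via the widely used
    Anthonisen relation: FA = TAN / (1 + 10**(pKa - pH)),
    with pKa = 0.09018 + 2729.92 / T(K)."""
    pka = 0.09018 + 2729.92 / (temp_c + 273.15)
    return tan_mg_l / (1 + 10 ** (pka - ph))

# Hypothetical digester state at the paper's 35 degC operating temperature:
print(round(free_ammonia(1500, 7.8, 35.0), 1))  # free NH3 in mg/L for TAN = 1500 mg/L
```

    Because the un-ionized fraction rises steeply with pH, the same TAN can sit either side of an inhibition threshold like the 45 mg/L reported here, depending on digester pH.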

  19. Assessing Ecosystem Model Performance in Semiarid Systems

    Science.gov (United States)

    Thomas, A.; Dietze, M.; Scott, R. L.; Biederman, J. A.

    2017-12-01

    In ecosystem process modelling, comparing outputs to benchmark datasets observed in the field is an important way to validate models, allowing the modelling community to track model performance over time and compare models at specific sites. Multi-model comparison projects, as well as the models themselves, have largely focused on temperate forests and similar biomes. Semiarid regions, on the other hand, are underrepresented in land surface and ecosystem modelling efforts, yet will be disproportionately impacted by disturbances such as climate change because of their sensitivity to changes in the water balance. Benchmarking models at semiarid sites is an important step in assessing and improving models' suitability for predicting the impact of disturbance on semiarid ecosystems. In this study, several ecosystem models were compared at a semiarid grassland in southwestern Arizona using PEcAn, the Predictive Ecosystem Analyzer, an open-source eco-informatics toolbox well suited to creating the repeatable model workflows necessary for benchmarking. Models included SIPNET, DALEC, JULES, ED2, GDAY, LPJ-GUESS, MAESPA, CLM, CABLE, and FATES. Comparison between model output and benchmarks such as net ecosystem exchange (NEE) tended to produce high root mean square error and low correlation coefficients, reflecting poor simulation of seasonality and a tendency for the models to simulate much larger carbon sources than observed. These results indicate that ecosystem models do not currently represent semiarid ecosystem processes adequately.
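
    The benchmark statistics mentioned — root mean square error and the correlation between simulated and observed NEE — reduce to a few lines; the two series below are invented to mimic a model that damps the seasonal cycle:

```python
def rmse(obs, sim):
    """Root mean square error between observed and simulated series."""
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)) ** 0.5

def pearson_r(obs, sim):
    """Pearson correlation coefficient between the two series."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    so = sum((o - mo) ** 2 for o in obs) ** 0.5
    ss = sum((s - ms) ** 2 for s in sim) ** 0.5
    return cov / (so * ss)

# Hypothetical monthly NEE (arbitrary units); the "model" flattens the seasonal cycle:
observed  = [-0.2, -0.1, 0.3, 0.8, 1.1, 0.9, 0.4, 0.0]
simulated = [0.5, 0.4, 0.6, 0.7, 0.8, 0.7, 0.6, 0.5]
print(round(rmse(observed, simulated), 3))
print(round(pearson_r(observed, simulated), 3))
```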

  20. Individualized Biomathematical Modeling of Fatigue and Performance

    Science.gov (United States)

    2008-05-29

    [Only figure-caption fragments of this abstract survived extraction. They describe performance predictions under scheduled sleep versus total sleep deprivation, with measurements from the first hours of each waking period omitted to avoid confounds from sleep inertia; gray bars indicate scheduled sleep periods, and a bifurcation point is set in the new model.]

  1. Gold-standard performance for 2D hydrodynamic modeling

    Science.gov (United States)

    Pasternack, G. B.; MacVicar, B. J.

    2013-12-01

    Two-dimensional, depth-averaged hydrodynamic (2D) models are emerging as an increasingly useful tool for environmental water resources engineering. One of the remaining technical hurdles to the wider adoption and acceptance of 2D modeling is the lack of standards for 2D model performance evaluation when the riverbed undulates, causing lateral flow divergence and convergence. The goal of this study was to establish a gold-standard that quantifies the upper limit of model performance for 2D models of undulating riverbeds when topography is perfectly known and surface roughness is well constrained. A review was conducted of published model performance metrics and the value ranges exhibited by models thus far for each one. Typically predicted velocity differs from observed by 20 to 30 % and the coefficient of determination between the two ranges from 0.5 to 0.8, though there tends to be a bias toward overpredicting low velocity and underpredicting high velocity. To establish a gold standard as to the best performance possible for a 2D model of an undulating bed, two straight, rectangular-walled flume experiments were done with no bed slope and only different bed undulations and water surface slopes. One flume tested model performance in the presence of a porous, homogenous gravel bed with a long flat section, then a linear slope down to a flat pool bottom, and then the same linear slope back up to the flat bed. The other flume had a PVC plastic solid bed with a long flat section followed by a sequence of five identical riffle-pool pairs in close proximity, so it tested model performance given frequent undulations. Detailed water surface elevation and velocity measurements were made for both flumes. Comparing predicted versus observed velocity magnitude for 3 discharges with the gravel-bed flume and 1 discharge for the PVC-bed flume, the coefficient of determination ranged from 0.952 to 0.987 and the slope for the regression line was 0.957 to 1.02. Unsigned velocity
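
    The two gold-standard statistics used above — the regression slope and the coefficient of determination between predicted and observed velocity — can be computed with ordinary least squares; the velocity pairs below are invented:

```python
def ols_slope(x, y):
    """Least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def r_squared(x, y):
    """Coefficient of determination of the linear fit y = a + b*x."""
    n = len(x)
    b = ols_slope(x, y)
    a = sum(y) / n - b * sum(x) / n
    my = sum(y) / n
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

observed  = [0.10, 0.25, 0.40, 0.55, 0.70, 0.85]   # m/s, hypothetical measurements
predicted = [0.14, 0.27, 0.38, 0.52, 0.71, 0.88]   # m/s, hypothetical 2D-model output
print(round(ols_slope(observed, predicted), 3),
      round(r_squared(observed, predicted), 3))    # slope near 1 and r2 near 1 = gold standard
```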

  2. Driver Performance Model: 1. Conceptual Framework

    National Research Council Canada - National Science Library

    Heimerl, Joseph

    2001-01-01

    ...'. At the present time, no such comprehensive model exists. This report discusses a conceptual framework designed to encompass the relationships, conditions, and constraints related to direct, indirect, and remote modes of driving and thus provides a guide or 'road map' for the construction and creation of a comprehensive driver performance model.

  3. Performance of hedging strategies in interval models

    NARCIS (Netherlands)

    Roorda, Berend; Engwerda, Jacob; Schumacher, J.M.

    2005-01-01

    For a proper assessment of risks associated with the trading of derivatives, the performance of hedging strategies should be evaluated not only in the context of the idealized model that has served as the basis of strategy development, but also in the context of other models. In this paper we

  4. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or

  5. {sup 99m}Tc-HMPAO and {sup 99m}Tc-ECD perform differently in typically hypoperfused areas in Alzheimer's disease

    Energy Technology Data Exchange (ETDEWEB)

    Koulibaly, Pierre Malick [Nuclear Medicine Department, Centre Antoine Lacassagne, University of Nice-Sophia Antipolis (France); Laboratoire de Biophysique, Universite de Nice-Sophia Antipolis, UFR de Medecine, 28 Avenue de Valombrose, 06107, Nice Cedex 2 (France); Nobili, Flavio; Vitali, Paolo; Girtler, Nicola; Rodriguez, Guido [Clinical Neurophysiology, Department of Internal Medicine, University of Genoa (Italy); Migneco, Octave; Darcourt, Jacques [Nuclear Medicine Department, Centre Antoine Lacassagne, University of Nice-Sophia Antipolis (France); Robert, Philippe H. [Memory Center, Federation of Clinical Neuroscience, Centre Hospitalier Universitaire, University of Nice-Sophia Antipolis (France)

    2003-07-01

    Technetium-99m hexamethylpropylene amine oxime (HMPAO) and {sup 99m}Tc-N,N''-1,2-ethylene diylbis-l-cysteine diethyl ester dihydrochloride (ECD) yield significantly different images of cerebral perfusion owing to their particular pharmacokinetics. The aim of this study was to assess the topography, extension and statistical significance of these differences in Alzheimer's disease (AD). Sixty-four patients with mild to moderate AD were retrospectively selected by two European centres. Two series of patients, including 32 studied with {sup 99m}Tc-HMPAO single-photon emission tomography (SPET) and 32 studied with {sup 99m}Tc-ECD SPET, were matched for sex, age ({+-}3 years) and severity of cognitive impairment as assessed by the Mini-Mental State Examination (MMSE) ({+-}2 points), following a case-control procedure. SPET data were processed using SPM99 software (uncorrected height threshold: P=0.001). {sup 99m}Tc-ECD SPET gave significantly higher uptake ratio values than {sup 99m}Tc-HMPAO SPET in several symmetrical clusters, including the right and left occipital cuneus, the left occipital and parietal precuneus, and the left superior and middle temporal gyri. {sup 99m}Tc-HMPAO SPET gave significantly higher uptake ratio values than ECD in two smaller clusters, including the hippocampus in both hemispheres. In AD, relative brain uptake of {sup 99m}Tc-HMPAO and {sup 99m}Tc-ECD is different in several brain regions, some of which are typically involved in AD, such as the precuneus and the hippocampus. These differences confirm the need for specific normal databases, but their impact on routine SPET reports in AD is not known and deserves an ad hoc investigation. (orig.)

  6. Performance modeling, loss networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi

    2009-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large buffer, and many sources asymptotics that play an important role in understanding statistical multiplexing. I

  7. Advances in HTGR fuel performance models

    International Nuclear Information System (INIS)

    Stansfield, O.M.; Goodin, D.T.; Hanson, D.L.; Turner, R.F.

    1985-01-01

    Advances in HTGR fuel performance models have improved the agreement between observed and predicted performance and contributed to an enhanced position of the HTGR with regard to investment risk and passive safety. Heavy metal contamination is the source of about 55% of the circulating activity in the HTGR during normal operation, and the remainder comes primarily from particles which failed because of defective or missing buffer coatings. These failed particles make up about a 5 × 10⁻⁴ fraction of the total core inventory. In addition to prediction of fuel performance during normal operation, the models are used to determine fuel failure and fission product release during core heat-up accident conditions. The mechanistic nature of the models, which incorporate all important failure modes, permits the prediction of performance from the relatively modest accident temperatures of a passively safe HTGR to the much more severe accident conditions of the larger 2240-MW/t HTGR. (author)

  8. Performance Evaluation Model for Application Layer Firewalls.

    Directory of Open Access Journals (Sweden)

    Shichang Xuan

Full Text Available Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
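The per-layer analysis described above rests on Erlang-type queues. A minimal sketch of one such layer, modeled as a textbook M/M/c/K queue; the paper's actual multi-layer model is not given in the abstract, and the rates, server count, and capacity below are illustrative:

```python
import math

def mmck_metrics(lam, mu, c, K):
    """Steady-state metrics of an M/M/c/K (Erlang) queue:
    Poisson arrivals at rate lam, c servers each at rate mu, capacity K."""
    a = lam / mu  # offered load in Erlangs
    # Unnormalised state probabilities p_n, n = 0..K
    p = [a ** n / math.factorial(n) if n <= c
         else a ** n / (math.factorial(c) * c ** (n - c))
         for n in range(K + 1)]
    norm = sum(p)
    p = [x / norm for x in p]
    loss = p[K]                                 # packet loss (blocking) probability
    throughput = lam * (1.0 - loss)             # accepted traffic rate
    mean_in_system = sum(n * pn for n, pn in enumerate(p))
    delay = mean_in_system / throughput         # mean sojourn time (Little's law)
    return throughput, delay, loss
```

With K = c this reduces to the Erlang-B loss formula; chaining such layers (network, transport, application) and sweeping the server allocation reproduces the kind of throughput/delay/loss trade-off study the paper performs.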

  9. Performance Evaluation Model for Application Layer Firewalls.

    Science.gov (United States)

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.

  10. Tailored model abstraction in performance assessments

    International Nuclear Information System (INIS)

    Kessler, J.H.

    1995-01-01

Total System Performance Assessments (TSPAs) are likely to be one of the most significant parts of making safety cases for the continued development and licensing of geologic repositories for the disposal of spent fuel and HLW. Thus, it is critical that the TSPA model capture the 'essence' of the physical processes relevant to demonstrating that the appropriate regulation is met. But how much detail about the physical processes must be modeled and understood before there is enough confidence that the appropriate essence has been captured? In this summary the level of model abstraction that is required is discussed. Approaches for subsystem and total system performance analyses are outlined, and the role of best estimate models is examined. It is concluded that a conservative approach for repository performance, based on a limited amount of field and laboratory data, can provide sufficient confidence for a regulatory decision.

  11. Total motion generated in the unstable cervical spine during management of the typical trauma patient: a comparison of methods in a cadaver model.

    Science.gov (United States)

    Prasarn, Mark L; Horodyski, MaryBeth; Dubose, Dewayne; Small, John; Del Rossi, Gianluca; Zhou, Haitao; Conrad, Bryan P; Rechtine, Glenn R

    2012-05-15

Biomechanical cadaveric study. We sought to analyze the amount of motion generated in the unstable cervical spine during various maneuvers and transfers that a trauma patient would typically be subjected to prior to definitive fixation, using 2 different protocols. From the time of injury until the spine is adequately stabilized in the operating room, every step in management of the spine-injured patient can result in secondary injury to the spinal cord. The amount of angular motion between C5 and C6, after a surgically created unstable injury, was measured using an electromagnetic motion analysis device (Polhemus Inc., Colchester, VT). A total sequence of maneuvers and transfers was then performed that a patient would be expected to go through from the time of injury until surgical fixation. This included spine board placement and removal, bed transfers, lateral therapy, and turning the patient prone onto the operating table. During each of these, we performed what has been shown to be the best and commonly used (log-roll) techniques. During bed transfers and the turn prone for surgery, there was statistically more angular motion in each plane for traditional transfer with the spine board and manually turning the patient prone as commonly done. Less total motion was generated when moving the patient from the field to stabilization in the operating room using the best compared with the most commonly used techniques. As previously reported, using log-roll techniques consistently results in unwanted motion at the injured spinal segment.

  12. Estuarine modeling: Does a higher grid resolution improve model performance?

    Science.gov (United States)

Ecological models are useful tools to explore cause-effect relationships, test hypotheses and perform management scenarios. A mathematical model, the Gulf of Mexico Dissolved Oxygen Model (GoMDOM), has been developed and applied to the Louisiana continental shelf of the northern ...

  13. Human Cognitive and Motor Performance Measures under Typical Cool White Fluorescent Illumination vs Relatively High Cool White Illuminance/Irradiance Lighting

    Science.gov (United States)

    1990-01-31

oral temperature was measured and plasma and saliva samples were obtained for later melatonin and cortisol assays. Subjects were allowed 2 weeks... Prepared by: Patrick Roy Hannon, Ed. D., Associate Professor (Grant AFOSR-89-I164).

  14. Critical review of glass performance modeling

    International Nuclear Information System (INIS)

    Bourcier, W.L.

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process
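The transition-state-theory rate law the review refers to is conventionally written as an affinity law; a minimal sketch, with all parameter values illustrative rather than measured:

```python
def dissolution_rate(k_plus, surface_area, Q, K_eq, sigma=1.0):
    """TST-consistent affinity rate law for glass dissolution:
    rate = k+ * S * (1 - (Q/K)^(1/sigma)),
    where Q is the ion activity product and K_eq the solubility product
    of the rate-controlling phase. Parameter values are illustrative."""
    return k_plus * surface_area * (1.0 - (Q / K_eq) ** (1.0 / sigma))
```

The rate vanishes as the solution saturates (Q approaching K), which is precisely the long-term regime for which the review notes no validated rate-controlling mechanism exists.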

  15. Critical review of glass performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)

    1994-07-01

    Borosilicate glass is to be used for permanent disposal of high-level nuclear waste in a geologic repository. Mechanistic chemical models are used to predict the rate at which radionuclides will be released from the glass under repository conditions. The most successful and useful of these models link reaction path geochemical modeling programs with a glass dissolution rate law that is consistent with transition state theory. These models have been used to simulate several types of short-term laboratory tests of glass dissolution and to predict the long-term performance of the glass in a repository. Although mechanistically based, the current models are limited by a lack of unambiguous experimental support for some of their assumptions. The most severe problem of this type is the lack of an existing validated mechanism that controls long-term glass dissolution rates. Current models can be improved by performing carefully designed experiments and using the experimental results to validate the rate-controlling mechanisms implicit in the models. These models should be supported with long-term experiments to be used for model validation. The mechanistic basis of the models should be explored by using modern molecular simulations such as molecular orbital and molecular dynamics to investigate both the glass structure and its dissolution process.

  16. Performance modeling, stochastic networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi R

    2013-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks.The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the importan

  17. A statistical model for predicting muscle performance

    Science.gov (United States)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
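The AR-derived predictor described above (mean magnitude of the AR poles) can be sketched as follows; the Yule-Walker fit below is a standard estimator and not necessarily the study's exact fitting procedure:

```python
import numpy as np

def mean_ar_pole_magnitude(x, order=5):
    """Fit an AR(order) model by Yule-Walker and return the mean magnitude
    of its poles -- the parameter the study correlates with Rmax."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Yule-Walker equations: R a = r[1:], with R Toeplitz in r[0..order-1]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    # AR poles: roots of z^p - a1*z^(p-1) - ... - ap
    poles = np.roots(np.concatenate(([1.0], -a)))
    return float(np.mean(np.abs(poles)))
```

Tracking this statistic repetition by repetition against the observed Rmax is the kind of regression the study describes.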

  18. Performance of skylight illuminance inside a dome shaped adobe house under composite climate at New Delhi (India): A typical zero energy passive house

    Directory of Open Access Journals (Sweden)

    Arvind Chel

    2014-06-01

Full Text Available This paper presents the annual experimental performance of a pyramid-shaped skylight for daylighting of a dome-shaped adobe house located at the solar energy park in New Delhi (India). This approach of a single-story dome-shaped building with a skylight is useful for rural and semi-urban office and residential buildings, reducing artificial lighting energy consumption. The hourly measured data of inside and outside illuminance for three different working surface levels inside the existing rooms are presented for each month of the year. The embodied energy payback time of the skylight is also determined on the basis of its lighting energy saving potential.

  19. Total motion generated in the unstable thoracolumbar spine during management of the typical trauma patient: a comparison of methods in a cadaver model.

    Science.gov (United States)

    Prasarn, Mark L; Zhou, Haitao; Dubose, Dewayne; Rossi, Gianluca Del; Conrad, Bryan P; Horodyski, Marybeth; Rechtine, Glenn R

    2012-05-01

The proper prehospital and inpatient management of patients with unstable spinal injuries is critical for prevention of secondary neurological compromise. The authors sought to analyze the amount of motion generated in the unstable thoracolumbar spine during various maneuvers and transfers that a trauma patient would typically be subjected to prior to definitive fixation. Five fresh cadavers with surgically created unstable L-1 burst fractures were tested. The amount of angular motion between the T-12 and L-2 vertebral segments was measured using a 3D electromagnetic motion analysis device. A complete sequence of maneuvers and transfers was then performed that a patient would be expected to go through from the time of injury until surgical fixation. These maneuvers and transfers included spine board placement and removal, bed transfers, lateral therapy, and turning the patient prone onto the operating table. During each of these, the authors performed what they believed to be the most commonly used versus the best techniques for preventing undesirable motion at the injury level. When placing a spine board there was more motion in all 3 planes with the log-roll technique, and this difference reached statistical significance for axial rotation (p = 0.018) and lateral bending (p = 0.003). Using log-rolling for spine board removal resulted in increased motion again, and this was statistically significant for flexion-extension (p = 0.014). During the bed transfer and lateral therapy, the log-roll technique resulted in more motion in all 3 planes (p ≤ 0.05). When turning the cadavers prone for surgery there was statistically more angular motion in each plane for manually turning the patient versus the Jackson table turn (p ≤ 0.01). The total motion was decreased by almost 50% in each plane when using an alternative to the log-roll techniques during the complete sequence (p ≤ 0.007). Although it is unknown how much motion in the unstable spine is necessary to cause

  20. Evaluation of models in performance assessment

    International Nuclear Information System (INIS)

    Dormuth, K.W.

    1993-01-01

    The reliability of models used for performance assessment for high-level waste repositories is a key factor in making decisions regarding the management of high-level waste. Model reliability may be viewed as a measure of the confidence that regulators and others have in the use of these models to provide information for decision making. The degree of reliability required for the models will increase as implementation of disposal proceeds and decisions become increasingly important to safety. Evaluation of the models by using observations of real systems provides information that assists the assessment analysts and reviewers in establishing confidence in the conclusions reached in the assessment. A continuing process of model calibration, evaluation, and refinement should lead to increasing reliability of models as implementation proceeds. However, uncertainty in the model predictions cannot be eliminated, so decisions will always be made under some uncertainty. Examples from the Canadian program illustrate the process of model evaluation using observations of real systems and its relationship to performance assessment. 21 refs., 2 figs

  1. Outdoor FSO Communications Under Fog: Attenuation Modeling and Performance Evaluation

    KAUST Repository

    Esmail, Maged Abdullah

    2016-07-18

Fog is considered to be a primary challenge for free space optics (FSO) systems. It may cause attenuation that is up to hundreds of decibels per kilometer. Hence, accurate modeling of fog attenuation will help telecommunication operators to engineer and appropriately manage their networks. In this paper, we examine fog measurement data coming from several locations in Europe and the United States and derive a unified channel attenuation model. Compared with existing attenuation models, our proposed model achieves an average root-mean-square error (RMSE) that is at least 9 dB lower. Moreover, we have investigated the statistical behavior of the channel and developed a probabilistic model under stochastic fog conditions. Furthermore, we studied the performance of the FSO system addressing various performance metrics, including signal-to-noise ratio (SNR), bit-error rate (BER), and channel capacity. Our results show that in communication environments with frequent fog, FSO is typically a short-range data transmission technology. Therefore, FSO will have its preferred market segment in future wireless fifth-generation/sixth-generation (5G/6G) networks having cell sizes that are lower than a 1-km diameter. Moreover, the results of our modeling and analysis can be applied in determining the switching/thresholding conditions in highly reliable hybrid FSO/radio-frequency (RF) networks.
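The paper's fitted attenuation model is not reproduced in the abstract; as a standard baseline, fog attenuation is commonly estimated from visibility with the Kim model:

```python
def kim_attenuation_db_per_km(visibility_km, wavelength_nm=1550.0):
    """Kim model: fog attenuation (dB/km) from visibility V (km).
    A standard baseline, not the unified model fitted in the paper."""
    V = visibility_km
    if V > 50.0:
        q = 1.6                 # high visibility
    elif V > 6.0:
        q = 1.3
    elif V > 1.0:
        q = 0.16 * V + 0.34
    elif V > 0.5:
        q = V - 0.5
    else:
        q = 0.0                 # dense fog: attenuation wavelength-independent
    return (3.91 / V) * (wavelength_nm / 550.0) ** (-q)
```

At 50 m visibility this gives about 78 dB/km, the same order as the "hundreds of decibels per kilometer" the paper cites for heavy fog, which motivates the short FSO ranges and the hybrid FSO/RF switching conditions discussed above.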

  2. Generating Performance Models for Irregular Applications

    Energy Technology Data Exchange (ETDEWEB)

    Friese, Ryan D.; Tallent, Nathan R.; Vishnu, Abhinav; Kerbyson, Darren J.; Hoisie, Adolfy

    2017-05-30

Many applications have irregular behavior --- non-uniform input data, input-dependent solvers, irregular memory accesses, unbiased branches --- that cannot be captured using today's automated performance modeling techniques. We describe new hierarchical critical path analyses for the Palm model generation tool. To create a model's structure, we capture tasks along representative MPI critical paths. We create a histogram of critical tasks with parameterized task arguments and instance counts. To model each task, we identify hot instruction-level sub-paths and model each sub-path based on data flow, instruction scheduling, and data locality. We describe application models that generate accurate predictions for strong scaling when varying CPU speed, cache speed, memory speed, and architecture. We present results for the Sweep3D neutron transport benchmark; Page Rank on multiple graphs; Support Vector Machine with pruning; and PFLOTRAN's reactive flow/transport solver with domain-induced load imbalance.

  3. The nonlinear unloading behavior of a typical Ni-based superalloy during hot deformation: a new elasto-viscoplastic constitutive model

    Science.gov (United States)

    Chen, Ming-Song; Lin, Y. C.; Li, Kuo-Kuo; Chen, Jian

    2016-09-01

    The nonlinear unloading behavior of a typical Ni-based superalloy is investigated by hot compressive experiments with intermediate unloading-reloading cycles. The experimental results show that there are at least four types of unloading curves. However, it is found that there is no essential difference among four types of unloading curves. The variation curves of instantaneous Young's modulus with stress for all types of unloading curves include four segments, i.e., three linear elastic segments (segments I, II, and III) and one subsequent nonlinear elastic segment (segment IV). The instantaneous Young's modulus of segments I and III is approximately equal to that of reloading process, while smaller than that of segment II. In the nonlinear elastic segment, the instantaneous Young's modulus linearly decreases with the decrease in stress. In addition, the relationship between stress and strain rate can be accurately expressed by the hyperbolic sine function. This study includes two parts. In the present part, the characters of unloading curves are discussed in detail, and a new elasto-viscoplastic constitutive model is proposed to describe the nonlinear unloading behavior based on the experimental findings. While in the latter part (Chen et al. in Appl Phys A. doi: 10.1007/s00339-016-0385-0, 2016), the effects of deformation temperature, strain rate, and pre-strain on the parameters of this new constitutive model are analyzed, and a unified elasto-viscoplastic constitutive model is proposed to predict the unloading behavior at arbitrary deformation temperature, strain rate, and pre-strain.
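The hyperbolic-sine relationship between stress and strain rate mentioned above is conventionally the Sellars-Tegart form; a sketch with illustrative constants (A, alpha, and n below are not the paper's fitted values):

```python
import math

# Illustrative material constants -- NOT the paper's fitted values.
A, ALPHA, N = 1.0e10, 0.005, 4.0

def strain_rate(sigma_mpa):
    """Sellars-Tegart hyperbolic-sine law: rate = A * sinh(alpha*sigma)^n."""
    return A * math.sinh(ALPHA * sigma_mpa) ** N

def stress_from_strain_rate(rate):
    """Inverse form: sigma = (1/alpha) * asinh((rate/A)^(1/n))."""
    return math.asinh((rate / A) ** (1.0 / N)) / ALPHA
```

The closed-form inverse is what makes this law convenient inside an elasto-viscoplastic constitutive model, where stress must be recovered from an imposed strain rate at each step.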

  4. The nonlinear unloading behavior of a typical Ni-based superalloy during hot deformation. A new elasto-viscoplastic constitutive model

    International Nuclear Information System (INIS)

    Chen, Ming-Song; Li, Kuo-Kuo; Lin, Y.C.; Chen, Jian

    2016-01-01

    The nonlinear unloading behavior of a typical Ni-based superalloy is investigated by hot compressive experiments with intermediate unloading-reloading cycles. The experimental results show that there are at least four types of unloading curves. However, it is found that there is no essential difference among four types of unloading curves. The variation curves of instantaneous Young's modulus with stress for all types of unloading curves include four segments, i.e., three linear elastic segments (segments I, II, and III) and one subsequent nonlinear elastic segment (segment IV). The instantaneous Young's modulus of segments I and III is approximately equal to that of reloading process, while smaller than that of segment II. In the nonlinear elastic segment, the instantaneous Young's modulus linearly decreases with the decrease in stress. In addition, the relationship between stress and strain rate can be accurately expressed by the hyperbolic sine function. This study includes two parts. In the present part, the characters of unloading curves are discussed in detail, and a new elasto-viscoplastic constitutive model is proposed to describe the nonlinear unloading behavior based on the experimental findings. While in the latter part (Chen et al. in Appl Phys A. doi:10.1007/s00339-016-0385-0, 2016), the effects of deformation temperature, strain rate, and pre-strain on the parameters of this new constitutive model are analyzed, and a unified elasto-viscoplastic constitutive model is proposed to predict the unloading behavior at arbitrary deformation temperature, strain rate, and pre-strain. (orig.)

  5. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

Model A: 8.0, 2.0, 94.52%, 88.46%, 76, 108, 12, 12, 0.86, 0.91, 0.78, 0.94. Model B: 2.0, 2.0, 93.18%, 89.33%, 64, 95, 10, 9, 0.88, 0.90, 0.75, 0.98. The above results for TEST-1 show details for our two models (Model A and Model B). Performance of Model A after adding 32 negative datasets of MiRTif on our testing set (MiRecords) ...

  6. Performance Evaluation and Modelling of Container Terminals

    Science.gov (United States)

    Venkatasubbaiah, K.; Rao, K. Narayana; Rao, M. Malleswara; Challa, Suresh

    2018-02-01

The present paper evaluates and analyzes the performance of 28 container terminals of South East Asia through data envelopment analysis (DEA), principal component analysis (PCA) and a hybrid DEA-PCA method. The DEA technique is utilized to identify efficient decision making units (DMUs) and to rank DMUs in a peer appraisal mode. PCA is a multivariate statistical method to evaluate the performance of container terminals. In the hybrid method, DEA is integrated with PCA to arrive at the ranking of container terminals. Based on the composite ranking, performance modelling and optimization of container terminals is carried out through response surface methodology (RSM).
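The DEA step can be sketched as the standard input-oriented CCR linear program (solved here with scipy's linprog; the CCR formulation is the textbook variant, not necessarily the paper's exact DEA model, and the data in the test are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j):
    """Input-oriented CCR efficiency (theta) of DMU j.
    X: inputs (m x n DMUs), Y: outputs (s x n DMUs). theta = 1 is efficient."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.concatenate(([1.0], np.zeros(n)))
    # Input constraints:  X @ lam <= theta * x_j
    A_in = np.hstack([-X[:, [j]], X])
    # Output constraints: Y @ lam >= y_j  (written as -Y @ lam <= -y_j)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, j]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun
```

Scoring all 28 terminals this way and combining the scores with PCA loadings gives the kind of hybrid composite ranking described above.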

  7. Performance Evaluation of the Becton Dickinson FACSPresto™ Near-Patient CD4 Instrument in a Laboratory and Typical Field Clinic Setting in South Africa.

    Directory of Open Access Journals (Sweden)

    Lindi-Marie Coetzee

Full Text Available The BD-FACSPresto™ CD4 is a new, point-of-care (POC) instrument utilising finger-stick capillary blood sampling. This study evaluated its performance against predicate CD4 testing in South Africa. Phase-I testing: HIV+ patient samples (n = 214) were analysed on the Presto™ under ideal laboratory conditions using venous blood. During Phase-II, 135 patients were capillary-bled for CD4 testing on the FACSPresto™, performed according to the manufacturer's instructions. Comparative statistical analyses against the predicate PLG/CD4 method and industry standards were done using GraphPad Prism 6, including Bland-Altman analyses with 95% limits of agreement (LOA) and percentage similarity with coefficient of variation (%CV) for absolute CD4 counts (cells/μl) and CD4 percentage of lymphocytes (CD4%). In Phase-I, 179/217 samples yielded reportable results with the Presto™ using venous blood filled cartridges. Compared to the predicate, a mean bias of 40.4±45.8 (LOA of -49.2 to 130.2) and a %similarity (%CV) of 106.1%±7.75 (7.3%) were noted for absolute CD4 counts. In the Phase-II field study, 118/135 capillary-bled Presto™ samples yielded CD4 parameters. Compared to the predicate, a mean bias of 50.2±92.8 (LOA of -131.7 to 232) with %similarity (%CV) of 105%±10.8 (10.3%), and a bias of 2.87±2.7 (LOA of -8.2 to 2.5) with similarity of 94.7±6.5% (6.83%), were noted for absolute CD4 counts and CD4%, respectively. No clinically significant differences were indicated for either parameter between the two sampling methods. The Presto™ showed remarkable precision against predicate methods, irrespective of venous or capillary blood sampling. A consistent, clinically insignificant over-estimation (5-7%) of counts against PLG/CD4 and equivalency to FACSCount™ were noted. Further field studies are awaited to confirm longer-term use.
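The Bland-Altman and percentage-similarity statistics reported above are straightforward to compute; a sketch assuming the common definitions (the study's exact formulas are not given in the abstract):

```python
import numpy as np

def bland_altman(reference, test):
    """Bland-Altman agreement: mean bias and 95% limits of agreement
    (bias +/- 1.96 * SD of the paired differences)."""
    ref = np.asarray(reference, dtype=float)
    tst = np.asarray(test, dtype=float)
    diff = tst - ref
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def percentage_similarity(reference, test):
    """Percentage similarity: mean of 100 * ((ref + test)/2) / ref.
    (A common definition in CD4 method comparisons; assumed here.)"""
    ref = np.asarray(reference, dtype=float)
    tst = np.asarray(test, dtype=float)
    return float(np.mean(100.0 * (ref + tst) / 2.0 / ref))
```

Applied to paired predicate/Presto™ counts, these yield exactly the bias, LOA, and %similarity figures quoted in the record.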

  8. Utilities for high performance dispersion model PHYSIC

    International Nuclear Information System (INIS)

    Yamazawa, Hiromi

    1992-09-01

The description and usage of the utilities for the dispersion calculation model PHYSIC are summarized. The model was developed in a study on high-performance SPEEDI, with the purpose of introducing a meteorological forecast function into the environmental emergency response system. The PHYSIC calculation procedure consists of three steps: preparation of relevant files, creation and submission of JCL, and graphic output of results. A user can carry out this procedure with the help of the Geographical Data Processing Utility, the Model Control Utility, and the Graphic Output Utility. (author)

  9. The Brain’s sense of walking: a study on the intertwine between locomotor imagery and internal locomotor models in healthy adults, typically developing children and children with cerebral palsy

    Directory of Open Access Journals (Sweden)

Marco Iosa

    2014-10-01

Full Text Available Motor imagery and internal motor models have been deeply investigated in the literature. It is well known that the development of motor imagery occurs during adolescence and that it is limited in people affected by cerebral palsy. However, the roles of motor imagery and internal models in locomotion, as well as their interplay, have received little attention. In this study we compared the performances of healthy adults (n=8, 28.1±5.1 years old), children with typical development (n=8, 8.1±3.8 years old) and children with cerebral palsy (n=12, 7.5±2.9 years old), measured by an optoelectronic system and a trunk-mounted wireless inertial magnetic unit, during three different tasks. Subjects were asked to achieve a target located at 2 or 3 m in front of them by simulating walking while stepping in place, actually walking blindfolded, or walking normally with open eyes. Adults performed a not significantly different number of steps (p=0.761), spending not significantly different time, between tasks (p=0.156). Children with typical development showed task-dependent differences both in terms of number of steps (p=0.046) and movement time (p=0.002). However, their performances in simulated and blindfolded walking were strictly correlated (R=0.871 for steps, R=0.673 for time). Further, their error in blindfolded walking was on average only -2.2% of the distance. Children with cerebral palsy also showed significant differences in number of steps (p=0.022) and time (p<0.001), but neither their number of steps nor their movement time recorded during simulated walking correlated with those of blindfolded and normal walking. Adults used a unique strategy among the different tasks. Children with typical development seemed to be less reliant on their motor predictions, using a task-dependent strategy probably more reliant on sensory feedback. Children with cerebral palsy showed less efficient performances, especially in simulated walking, suggesting an altered locomotor imagery.

  10. A practical model for sustainable operational performance

    International Nuclear Information System (INIS)

    Vlek, C.A.J.; Steg, E.M.; Feenstra, D.; Gerbens-Leenis, W.; Lindenberg, S.; Moll, H.; Schoot Uiterkamp, A.; Sijtsma, F.; Van Witteloostuijn, A.

    2002-01-01

By means of a concrete model for sustainable operational performance, enterprises can report uniformly on the sustainability of their contributions to the economy, welfare and the environment. The development and design of a three-dimensional monitoring system is presented and discussed.

  11. Data Model Performance in Data Warehousing

    Science.gov (United States)

    Rorimpandey, G. C.; Sangkop, F. I.; Rantung, V. P.; Zwart, J. P.; Liando, O. E. S.; Mewengkang, A.

    2018-02-01

Data warehouses have increasingly become important in organizations that have large amounts of data. A data warehouse is not a product but part of a solution for the decision support system in such organizations. The data model is the starting point for designing and developing data warehouse architectures; thus, the data model needs stable interfaces that remain consistent over a long period of time. The aim of this research is to determine which data model in data warehousing has the best performance. The research method is descriptive analysis, with three main tasks: data collection and organization, analysis of data, and interpretation of data. The results, examined statistically, show no statistically significant difference among the data models used in data warehousing, so an organization can utilize any of the four proposed data models when designing and developing a data warehouse.

  12. Aerodynamic drag modeling of alpine skiers performing giant slalom turns.

    Science.gov (United States)

    Meyer, Frédéric; Le Pelley, David; Borrani, Fabio

    2012-06-01

    Aerodynamic drag plays an important role in performance for athletes practicing sports that involve high-velocity motions. In giant slalom, the skier is continuously changing his/her body posture, and this affects the energy dissipated in aerodynamic drag. It is therefore important to quantify this energy to understand the dynamic behavior of the skier. The aims of this study were to model the aerodynamic drag of alpine skiers in giant slalom simulated conditions and to apply these models in a field experiment to estimate energy dissipated through aerodynamic drag. The aerodynamic characteristics of 15 recreational male and female skiers were measured in a wind tunnel while holding nine different skiing-specific postures. The drag and the frontal area were recorded simultaneously for each posture. Four generalized and two individualized models of the drag coefficient were built, using different sets of parameters. These models were subsequently applied in a field study designed to compare the aerodynamic energy losses between a dynamic and a compact skiing technique. The generalized models estimated aerodynamic drag with an accuracy of between 11.00% and 14.28%, and the individualized models with an accuracy of between 4.52% and 5.30%. The individualized model used for the field study showed that using a dynamic technique led to 10% more aerodynamic drag energy loss than using a compact technique. The individualized models were capable of discriminating different techniques performed by advanced skiers and seemed more accurate than the generalized models. The models presented here offer a simple yet accurate method to estimate the aerodynamic drag acting upon alpine skiers while rapidly moving through the range of positions typical of turning technique.
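The energy the study estimates comes from the standard drag equation, with the skier's posture entering through the effective area Cd·A. A minimal sketch of that bookkeeping, with assumed Cd·A values and speeds rather than the paper's wind-tunnel data:

```python
RHO_AIR = 1.2  # kg/m^3, an assumed approximate air density

def drag_power(cd_a, v):
    """Instantaneous drag power (W): P = 0.5 * rho * (Cd*A) * v^3."""
    return 0.5 * RHO_AIR * cd_a * v ** 3

def drag_energy(cd_a_series, v_series, dt):
    """Energy (J) dissipated by drag over a run sampled every dt seconds."""
    return sum(drag_power(ca, v) * dt for ca, v in zip(cd_a_series, v_series))

# Illustrative comparison of a compact posture (lower Cd*A) with a dynamic
# one; the Cd*A values and speeds below are assumptions, not wind-tunnel data.
speeds = [15.0, 16.0, 17.0, 18.0]                 # m/s along a short segment
compact = drag_energy([0.30] * 4, speeds, dt=0.5)
dynamic = drag_energy([0.33] * 4, speeds, dt=0.5)
print(dynamic / compact - 1.0)  # relative extra energy lost with the dynamic posture
```

With a 10% higher Cd·A held over the same speed profile, the dynamic posture dissipates 10% more energy, mirroring the order of magnitude of the field result.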

  13. Response surface modeling-based source contribution analysis and VOC emission control policy assessment in a typical ozone-polluted urban Shunde, China.

    Science.gov (United States)

    You, Zhiqiang; Zhu, Yun; Jang, Carey; Wang, Shuxiao; Gao, Jian; Lin, Che-Jen; Li, Minhui; Zhu, Zhenghua; Wei, Hao; Yang, Wenwei

    2017-01-01

    To develop a sound ozone (O3) pollution control strategy, it is important to understand and characterize the source contributions well, owing to the complex chemical and physical formation processes of O3. Using the city of Shunde as a pilot summer case study, we apply an innovative response surface modeling (RSM) methodology based on Community Multi-Scale Air Quality (CMAQ) modeling simulations to identify the O3 regime and provide a dynamic analysis of the precursor contributions, in order to effectively assess the O3 impacts of a volatile organic compound (VOC) control strategy. Our results show that Shunde is a typical VOC-limited urban O3-polluted city. Jiangmen, the main upwind area during July 2014, makes the largest contribution (9.06%) through its VOC and nitrogen oxide (NOx) emissions. By contrast, the contribution from local (Shunde) emissions is the lowest (6.35%) among the seven neighboring regions. Among local precursor emission sectors in Shunde, industrial VOC sources contribute the most. The dynamic source contribution analysis further shows that local NOx control could slightly increase ground-level O3 under low (10.00%) and medium (40.00%) reduction ratios, while it begins to decrease ground-level O3 under a high NOx abatement ratio (75.00%). The real-time assessment of O3 impacts from VOC control strategies in the Pearl River Delta (PRD) shows that the joint regional VOC emission control policy will effectively reduce ground-level O3 concentrations in Shunde. Copyright © 2016. Published by Elsevier B.V.
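The core of an RSM approach is to fit a cheap response surface to a limited set of expensive chemical-transport runs and then query it for arbitrary control scenarios. The sketch below illustrates the idea with a quadratic surface fitted by least squares to synthetic data that mimics a VOC-limited regime; the "model runs" are a made-up function, not CMAQ output, and the variable names are hypothetical.

```python
import numpy as np

# Hypothetical sketch of the response-surface idea: fit a quadratic surface
# O3 = f(VOC reduction, NOx reduction) to a few synthetic "model runs", then
# query it for any control scenario. The synthetic response mimics a
# VOC-limited regime: cutting VOC lowers O3, while modest NOx cuts raise it.

def features(voc, nox):
    """Quadratic polynomial basis in the two control variables."""
    return np.column_stack([np.ones_like(voc), voc, nox,
                            voc * nox, voc ** 2, nox ** 2])

rng = np.random.default_rng(0)
voc = rng.uniform(0, 1, 50)   # fractional VOC emission reductions
nox = rng.uniform(0, 1, 50)   # fractional NOx emission reductions
o3 = 80 - 30 * voc + 10 * nox - 12 * nox ** 2 + rng.normal(0, 0.1, 50)

coef, *_ = np.linalg.lstsq(features(voc, nox), o3, rcond=None)

def predict(v, n):
    """O3 predicted by the fitted response surface for a control scenario."""
    return features(np.atleast_1d(v), np.atleast_1d(n)) @ coef

# The surface reproduces the VOC-limited behaviour of the training runs:
print(predict(0.5, 0.0), predict(0.0, 0.4))  # VOC cut lowers O3; NOx cut raises it
```

Once fitted, each scenario query costs a dot product instead of a full CMAQ simulation, which is what makes the "real-time assessment" in the abstract feasible.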

  14. Radiation-induced mucositis as a space flight risk. Model studies on X-ray and heavy-ion irradiated typical oral mucosa models

    International Nuclear Information System (INIS)

    Tschachojan, Viktoria

    2014-01-01

    Humans in exomagnetospheric space are exposed to highly energetic heavy-ion radiation, which can hardly be shielded. Since radiation-induced mucositis constitutes a severe complication of heavy-ion radiotherapy, it would also pose a serious medical safety risk for crew members during prolonged space flights such as missions to the Moon or Mars. To assess the risk of developing radiation-induced mucositis, three-dimensional organotypic cultures of immortalized human keratinocytes and fibroblasts were irradiated with a 12C particle beam at high energies or with X-rays. Immunofluorescence staining was performed on cryosections, and the radiation-induced release of cytokines and chemokines was quantified by ELISA from culture supernatants. The main time points of this study were 4, 8, 24 and 48 hours after irradiation. The analyses showed that our mucosa model has many structural similarities with native oral mucosa and mounts authentic immunological responses to radiation exposure. Quantification of DNA damage in irradiated mucosa models revealed about twice as many DSB after heavy-ion irradiation compared to X-rays at given doses and time points, suggesting a higher genotoxicity of heavy ions. Nuclear factor κB activation was observed after treatment with X-rays or 12C particles, whereas activation of NF-κB p65 in irradiated samples could not be detected. ELISA analyses showed significantly higher interleukin-6 and interleukin-8 levels after irradiation with X-rays and 12C particles compared to non-irradiated controls. However, only X-rays induced significantly higher levels of interleukin-1β. Analyses of TNF-α and IFN-γ showed no radiation-induced effects. Further analyses revealed a radiation-induced reduction in proliferation and a loss of compactness in the irradiated oral mucosa model, which would lead to local lesions in vivo. In this study we showed that several pro-inflammatory markers and structural changes are induced by X-rays and heavy-ion irradiation

  15. A New Model to Simulate Energy Performance of VRF Systems

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Pang, Xiufeng; Schetrit, Oren; Wang, Liping; Kasahara, Shinichi; Yura, Yoshinori; Hinokuma, Ryohei

    2014-03-30

    This paper presents a new model to simulate the energy performance of variable refrigerant flow (VRF) systems in heat pump operation mode (either cooling or heating is provided, but not simultaneously). The main improvement of the new model is the introduction of the evaporating and condensing temperatures in the indoor and outdoor unit capacity modifier functions. The independent variables in the capacity modifier functions of the existing VRF model in EnergyPlus are mainly room wet-bulb temperature and outdoor dry-bulb temperature in cooling mode, and room dry-bulb temperature and outdoor wet-bulb temperature in heating mode. The new approach allows compliance with the different specifications of each indoor unit, so that modeling accuracy is improved. The new VRF model was implemented in a custom version of EnergyPlus 7.2. This paper first describes the algorithm for the new VRF model, which is then used to simulate the energy performance of a VRF system in a Prototype House in California that complies with the requirements of Title 24, the California Building Energy Efficiency Standards. The VRF system performance is then compared with three other types of HVAC systems: the Title 24-2005 Baseline system, the traditional High Efficiency system, and the EnergyStar Heat Pump system, in three typical California climates: Sunnyvale, Pasadena and Fresno. Calculated energy savings from the VRF systems are significant. The HVAC site energy savings range from 51 to 85 percent, while the TDV (Time Dependent Valuation) energy savings range from 31 to 66 percent compared to the Title 24 Baseline Systems across the three climates. The largest energy savings are in the Fresno climate, followed by Sunnyvale and Pasadena. The paper discusses various characteristics of the VRF systems contributing to the energy savings. It should be noted that these savings are calculated using the Title 24 prototype House D under standard operating conditions.
Actual performance of the VRF systems for real
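The capacity modifier functions discussed above are typically curve fits in two temperature variables; a common EnergyPlus form is the biquadratic curve. The sketch below evaluates such a curve with x taken as the indoor wet-bulb temperature and y as the evaporating temperature, in the spirit of the new model. The coefficients are illustrative placeholders, not values from the paper or from any EnergyPlus dataset.

```python
# Sketch of an EnergyPlus-style biquadratic capacity-modifier curve. The new
# VRF model's key change is using the evaporating (cooling) or condensing
# (heating) temperature as an independent variable. Coefficients below are
# illustrative placeholders only.

def biquadratic(c, x, y):
    """EnergyPlus biquadratic curve: c0 + c1*x + c2*x^2 + c3*y + c4*y^2 + c5*x*y."""
    return c[0] + c[1] * x + c[2] * x ** 2 + c[3] * y + c[4] * y ** 2 + c[5] * x * y

# Illustrative coefficients for a cooling-capacity modifier, with
# x = indoor wet-bulb temperature (degC), y = evaporating temperature (degC).
COEFFS = [0.60, 0.02, 0.0002, 0.01, 0.0001, -0.0003]

def cooling_capacity(rated_capacity_kw, t_wb_indoor, t_evap):
    """Available capacity = rated capacity times the curve-fit modifier."""
    return rated_capacity_kw * biquadratic(COEFFS, t_wb_indoor, t_evap)

print(cooling_capacity(10.0, 19.0, 6.0))  # kW at one illustrative operating point
```

Giving each indoor unit its own coefficient set is what lets the model match per-unit specifications, as the abstract describes.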

  16. Assessing The Performance of Hydrological Models

    Science.gov (United States)

    van der Knijff, Johan

    The performance of hydrological models is often characterized using the coefficient of efficiency, E. The sensitivity of E to extreme streamflow values, and the difficulty of deciding what value of E should be used as a threshold to identify 'good' models or model parameterizations, have proven to be serious shortcomings of this index. This paper reviews some alternative performance indices that have appeared in the literature. Legates and McCabe (1999) suggested a more generalized form of E, E'(j,B). Here, j is a parameter that controls how much emphasis is put on extreme streamflow values, and B defines a benchmark or 'null hypothesis' against which the results of the model are tested. E'(j,B) was used to evaluate a large number of parameterizations of a conceptual rainfall-runoff model, using 6 different combinations of j and B. First, the effect of j and B is explained. Second, it is demonstrated how the index can be used to explicitly test hypotheses about the model and the data. This approach appears to be particularly attractive if the index is used as a likelihood measure within a GLUE-type analysis.
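The generalized index described above can be written as E'(j,B) = 1 − Σ|O−P|^j / Σ|O−B|^j, where O are observations, P predictions and B the benchmark series; with j = 2 and B equal to the observed mean it reduces to the classic coefficient of efficiency E (Nash–Sutcliffe). A minimal sketch, with made-up streamflow numbers:

```python
def generalized_efficiency(obs, pred, j=2.0, benchmark=None):
    """E'(j, B) = 1 - sum(|O - P|^j) / sum(|O - B|^j) (Legates & McCabe, 1999).

    j controls the weight given to extreme values (j=2 emphasises peaks,
    j=1 is less sensitive to them); benchmark B is the 'null' prediction,
    defaulting to the observed mean, which with j=2 recovers the classic
    coefficient of efficiency E."""
    if benchmark is None:
        b = sum(obs) / len(obs)
        benchmark = [b] * len(obs)
    num = sum(abs(o - p) ** j for o, p in zip(obs, pred))
    den = sum(abs(o - bb) ** j for o, bb in zip(obs, benchmark))
    return 1.0 - num / den

# Illustrative observed and predicted streamflow values (arbitrary units)
obs = [3.0, 5.0, 9.0, 4.0, 2.0]
pred = [2.5, 5.5, 8.0, 4.5, 2.0]
print(generalized_efficiency(obs, pred))         # j=2, B=mean: classic E
print(generalized_efficiency(obs, pred, j=1.0))  # less weight on peak flows
```

Passing a seasonal-mean series as `benchmark` instead of the overall mean is one way to encode a stricter null hypothesis, which is the hypothesis-testing use the abstract mentions.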

  17. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Whitmore, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kaffine, Leah [National Renewable Energy Lab. (NREL), Golden, CO (United States); Blair, Nate [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dobos, Aron P. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat-plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed-tilt cases and below 8% for all one-axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement; other deviations, resulting from seasonal biases in the irradiation models and one-axis tracking issues, are discussed in detail.

  18. New Diagnostics to Assess Model Performance

    Science.gov (United States)

    Koh, Tieh-Yong

    2013-04-01

    The comparison of model performance between the tropics and the mid-latitudes is particularly problematic for observables like temperature and humidity: in the tropics, these observables have little variation and so may give an apparent impression that model predictions are often close to observations; on the contrary, they vary widely in mid-latitudes and so the discrepancy between model predictions and observations might be unnecessarily over-emphasized. We have developed a suite of mathematically rigorous diagnostics that measures normalized errors accounting for the observed and modeled variability of the observables themselves. Another issue in evaluating model performance is the relative importance of getting the variance of an observable right versus getting the modeled variation to be in phase with the observed. The correlation-similarity diagram was designed to analyse the pattern error of a model by breaking it down into contributions from amplitude and phase errors. A final and important question pertains to the generalization of scalar diagnostics to analyse vector observables like wind. In particular, measures of variance and correlation must be properly derived to avoid the mistake of ignoring the covariance between north-south and east-west winds (hence wrongly assuming that the north-south and east-west directions form a privileged vector basis for error analysis). There is also a need to quantify systematic preferences in the direction of vector wind errors, which we make possible by means of an error anisotropy diagram. Although the suite of diagnostics is mentioned with reference to model verification here, it is generally applicable to quantify differences between two datasets (e.g. from two observation platforms). Reference publication: Koh, T. Y. et al. (2012), J. Geophys. Res., 117, D13109, doi:10.1029/2011JD017103. also available at http://www.ntu.edu.sg/home/kohty
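Two of the ideas above lend themselves to a short illustration: normalising an error by the variability of the observable itself, and treating vector wind without assuming that the north–south and east–west components form a privileged basis. The sketch below uses a simple normalised RMSE and the trace of the wind covariance matrix as stand-ins; these are illustrative formulas, not the specific diagnostics defined in Koh et al. (2012).

```python
import numpy as np

# Illustrative, simplified versions of the two ideas: (1) normalise an error
# by the observed variability, (2) measure vector-wind variance in a way that
# keeps the u-v covariance and is invariant under rotation of the axes.

def normalized_rmse(obs, model):
    """RMSE divided by the observed standard deviation: a value near 1 means
    the error is as large as the natural variability of the observable."""
    obs, model = np.asarray(obs), np.asarray(model)
    return np.sqrt(np.mean((model - obs) ** 2)) / np.std(obs)

def vector_variance(uv):
    """Total variance of a 2-D wind series (N x 2 array): the trace of the
    covariance matrix, which is invariant under rotation of the axes."""
    return np.trace(np.cov(np.asarray(uv).T))

wind = np.array([[1.0, 0.5], [2.0, 1.5], [0.5, -0.5], [1.5, 1.0]])
theta = np.pi / 6  # rotating the coordinate axes by 30 degrees...
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
print(vector_variance(wind), vector_variance(wind @ rot.T))  # ...leaves it unchanged
```

Working with the full covariance matrix, rather than separate u and v variances, is exactly what avoids the mistake the abstract warns against.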

  19. Performance modeling of network data services

    Energy Technology Data Exchange (ETDEWEB)

    Haynes, R.A.; Pierson, L.G.

    1997-01-01

    Networks at major computational organizations are becoming increasingly complex. The introduction of large massively parallel computers and supercomputers with gigabyte memories is requiring greater and greater bandwidth for network data transfers to widely dispersed clients. For networks to provide adequate data transfer services to high performance computers and the remote users connected to them, the networking components must be optimized against a combination of internal and external performance criteria. This paper describes research done at Sandia National Laboratories to model network data services and to visualize the flow of data from source to sink when using the data services.

  20. Performance assessment modeling of pyrometallurgical process wasteforms

    International Nuclear Information System (INIS)

    Nutt, W.M.; Hill, R.N.; Bullen, D.B.

    1995-01-01

    Performance assessment analyses have been completed to estimate the behavior of high-level nuclear wasteforms generated from the pyrometallurgical processing of liquid metal reactor (LMR) and light water reactor (LWR) spent nuclear fuel. Waste emplaced in the proposed repository at Yucca Mountain is investigated as the basis for the study. The resulting cumulative actinide and fission product releases to the accessible environment within a 100,000 year period from the various pyrometallurgical process wasteforms are compared to those of directly disposed LWR spent fuel using the same total repository system model. The impact of differing radionuclide transport models on the overall release characteristics is investigated

  1. Model description and evaluation of model performance: DOSDIM model

    International Nuclear Information System (INIS)

    Lewyckyj, N.; Zeevaert, T.

    1996-01-01

    DOSDIM was developed to assess the impact on man of routine and accidental atmospheric releases. It is a compartmental, deterministic, radiological model. For an accidental release, dynamic transfer factors are used, as opposed to a routine release, for which equilibrium transfer factors are used. Parameter values were chosen to be conservative. Transfers between compartments are described by first-order differential equations. 2 figs
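A compartmental model with first-order transfers reduces to a set of linear ODEs. As a minimal sketch in the spirit of the dynamic (accidental-release) mode, the snippet below integrates a two-compartment chain with forward Euler; the compartments and rate constants are illustrative, not DOSDIM's actual parameterisation.

```python
# Minimal two-compartment sketch of first-order transfers, solved with
# forward Euler. Compartments and rate constants are illustrative only.

def simulate(a0, k12, k20, dt, steps):
    """Two compartments: 1 -> 2 at rate k12, 2 -> out at rate k20.
    dA1/dt = -k12*A1 ;  dA2/dt = k12*A1 - k20*A2  (forward Euler)."""
    a1, a2 = a0, 0.0
    history = [(a1, a2)]
    for _ in range(steps):
        da1 = -k12 * a1
        da2 = k12 * a1 - k20 * a2
        a1 += da1 * dt
        a2 += da2 * dt
        history.append((a1, a2))
    return history

# Activity initially in compartment 1 decays while compartment 2 fills,
# peaks, then empties -- the typical first-order chain behaviour.
traj = simulate(a0=1.0, k12=0.5, k20=0.2, dt=0.01, steps=2000)
print(traj[-1])  # little activity left after 20 time units
```

For a routine release, by contrast, one would set the derivatives to zero and solve for the equilibrium ratios, which is where the equilibrium transfer factors come from.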

  2. Modelling and evaluation of surgical performance using hidden Markov models.

    Science.gov (United States)

    Megali, Giuseppe; Sinigaglia, Stefano; Tonet, Oliver; Dario, Paolo

    2006-10-01

    Minimally invasive surgery has become very widespread in the last ten years. Since surgeons experience difficulties in learning and mastering minimally invasive techniques, the development of training methods is of great importance. While the introduction of virtual reality-based simulators has introduced a new paradigm in surgical training, skill evaluation methods are far from being objective. This paper proposes a method for defining a model of surgical expertise and an objective metric to evaluate performance in laparoscopic surgery. Our approach is based on the processing of kinematic data describing movements of surgical instruments. We use hidden Markov model theory to define an expert model that describes expert surgical gesture. The model is trained on kinematic data related to exercises performed on a surgical simulator by experienced surgeons. Subsequently, we use this expert model as a reference model in the definition of an objective metric to evaluate performance of surgeons with different abilities. Preliminary results show that, using different topologies for the expert model, the method can be efficiently used both for the discrimination between experienced and novice surgeons, and for the quantitative assessment of surgical ability.
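The scoring idea above, i.e. ranking a new gesture by its likelihood under an expert-trained HMM, can be sketched with the forward algorithm. Here a tiny discrete-observation HMM with hand-picked parameters stands in for a model trained on kinematic data; the states, symbols and probabilities are all illustrative assumptions.

```python
import numpy as np

# Sketch: fix an HMM as the "expert model", then rank a new gesture sequence
# by its log-likelihood under that model, computed with the (rescaled)
# forward algorithm. Parameters below are hand-picked toy values.

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of an observation sequence under HMM (pi, A, B)."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    loglik = np.log(c)
    alpha = alpha / c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate states, emit next symbol
        c = alpha.sum()                # rescale to avoid numerical underflow
        loglik += np.log(c)
        alpha = alpha / c
    return loglik

# Two hidden "gesture phases", three observable movement symbols.
pi = np.array([1.0, 0.0])
A = np.array([[0.8, 0.2],
              [0.1, 0.9]])
B = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])

expert_score = forward_loglik(pi, A, B, [0, 0, 1, 2, 2])  # expert-like ordering
novice_score = forward_loglik(pi, A, B, [2, 2, 0, 0, 2])  # atypical ordering
print(expert_score, novice_score)  # higher log-likelihood = closer to the expert model
```

In the paper the parameters would instead be trained (e.g. by Baum–Welch) on expert kinematic data, and the log-likelihood of a trainee's sequence serves as the objective performance metric.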

  3. Hybrid Modeling Improves Health and Performance Monitoring

    Science.gov (United States)

    2007-01-01

    Scientific Monitoring Inc. was awarded a Phase I Small Business Innovation Research (SBIR) project by NASA's Dryden Flight Research Center to create a new, simplified health-monitoring approach for flight vehicles and flight equipment. The project developed a hybrid physical model concept that provided a structured approach to simplifying complex design models for use in health monitoring, allowing the output or performance of the equipment to be compared to what the design models predicted, so that deterioration or impending failure could be detected before there would be an impact on the equipment's operational capability. Based on the original modeling technology, Scientific Monitoring released I-Trend, a commercial health- and performance-monitoring software product named for its intelligent trending, diagnostics, and prognostics capabilities, as part of the company's complete ICEMS (Intelligent Condition-based Equipment Management System) suite of monitoring and advanced alerting software. I-Trend uses the hybrid physical model to better characterize the nature of health or performance alarms that result in "no fault found" false alarms. Additionally, the use of physical principles helps I-Trend identify problems sooner. I-Trend technology is currently in use in several commercial aviation programs, and the U.S. Air Force recently tapped Scientific Monitoring to develop next-generation engine health-management software for monitoring its fleet of jet engines. Scientific Monitoring has continued the original NASA work, this time under a Phase III SBIR contract with a joint NASA-Pratt & Whitney aviation security program on propulsion-controlled aircraft under missile-damaged aircraft conditions.

  4. Global processing speed in children with low reading ability and in children and adults with typical reading ability: exploratory factor analytic models.

    Science.gov (United States)

    Peter, Beate; Matsushita, Mark; Raskind, Wendy H

    2011-06-01

    To investigate processing speed as a latent dimension in children with dyslexia and children and adults with typical reading skills. Exploratory factor analysis (FA) was based on a sample of multigenerational families, each ascertained through a child with dyslexia. Eleven measures--6 of them timed--represented verbal and nonverbal processes, alphabet writing, and motor sequencing in the hand and oral motor system. FA was conducted in 4 cohorts (all children, a subset of children with low reading scores, a subset of children with typical reading scores, and adults with typical reading scores; total N = 829). Processing speed formed the first factor in all cohorts. Both measures of motor sequencing speed loaded on the speed factor with the other timed variables. Children with poor reading scores showed lower speed factor scores than did typical peers. The speed factor was negatively correlated with age in the adults. The speed dimension was observed independently of participant cohort, gender, and reading ability. Results are consistent with a unified theory of processing speed as a quadratic function of age in typical development and with slowed processing in poor readers.

  5. Thermal modelling of PV module performance under high ambient temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Diarra, D.C.; Harrison, S.J. [Queen's Univ., Kingston, ON (Canada). Dept. of Mechanical and Materials Engineering, Solar Calorimetry Lab; Akuffo, F.O. [Kwame Nkrumah Univ. of Science and Technology, Kumasi (Ghana). Dept. of Mechanical Engineering

    2005-07-01

    When predicting the performance of photovoltaic (PV) generators, the actual performance is typically lower than test results obtained under standard test conditions, because the radiant energy absorbed in the module under normal operation raises the temperature of the cell and other multilayer components. The increase in temperature translates to a lower conversion efficiency of the solar cells. In order to address these discrepancies, a thermal model of a characteristic PV module was developed to assess and predict its performance under real field conditions. The PV module consisted of monocrystalline silicon cells in EVA between a glass cover and a Tedlar backing sheet. The EES program was used to compute the equilibrium temperature profile in the PV module. It was shown that heat is dissipated towards the bottom and the top of the module, and that its temperature can be much higher than the ambient temperature. Modelling results indicate that 70-75 per cent of the absorbed solar radiation is dissipated from the solar cells as heat, while 4.7 per cent of the solar energy is absorbed in the glass cover and the EVA. It was also shown that the operating temperature of the PV module decreases with increased wind speed. 2 refs.
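A back-of-envelope version of the energy balance behind such thermal models: absorbed irradiance that is not converted to electricity leaves the module as heat through a wind-dependent loss coefficient, so module temperature rises with irradiance and falls with wind speed. The coefficients below follow the shape of the Faiman-style linear wind dependence but are rough assumptions, not values from the EES model in the paper.

```python
# Steady-state energy balance sketch: (absorptance - eta) * G = U(wind) *
# (T_cell - T_amb), with a linear wind dependence U = u0 + u1 * wind.
# All coefficient values are rough assumptions for illustration.

def cell_temperature(t_amb, irradiance, eta=0.13, absorptance=0.9,
                     wind=1.0, u0=25.0, u1=6.84):
    """Steady-state module temperature (degC); U in W/(m^2 K)."""
    return t_amb + irradiance * (absorptance - eta) / (u0 + u1 * wind)

# Hotter than ambient under full sun; a higher wind speed cools the module.
print(cell_temperature(30.0, 1000.0, wind=1.0))  # calm day
print(cell_temperature(30.0, 1000.0, wind=5.0))  # windy day
```

The qualitative behaviour matches the paper's findings: most of the absorbed radiation leaves as heat, and increased wind speed lowers the operating temperature.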

  6. Modelling fuel cell performance using artificial intelligence

    Energy Technology Data Exchange (ETDEWEB)

    Ogaji, S.O.T.; Singh, R.; Pilidis, P.; Diacakis, M. [Power Propulsion and Aerospace Engineering Department, Centre for Diagnostics and Life Cycle Costs, Cranfield University (United Kingdom)

    2006-03-09

    Over the last few years, fuel cell technology has been promisingly increasing its share in the generation of stationary power. Numerous pilot projects are operating worldwide, continuously accumulating operating hours either as stand-alone devices or as part of gas turbine combined cycles. An essential tool for the adequate and dynamic analysis of such systems is a software model that enables the user to assess a large number of alternative options in the least possible time. Meanwhile, the sphere of application of artificial neural networks has widened to cover such fields as medicine, finance and, unsurprisingly, engineering (diagnostics of faults in machines). An artificial neural network can be described as a diagrammatic representation of a mathematical equation that receives values (inputs) and gives out results (outputs). Artificial neural network systems have the capacity to recognise and associate patterns and, because of their inherent design features, can be applied to linear and non-linear problem domains. In this paper, the performance of the fuel cell is modelled using artificial neural networks. The inputs to the network are variables that are critical to the performance of the fuel cell, while the outputs reflect the effect of changes in any one or all of the fuel cell design variables on its performance. Critical parameters for the cell include the geometrical configuration as well as the operating conditions. For the neural network, various design parameters such as the network size, training algorithm and activation functions, and their effects on the quality of the performance modelling, are discussed. Results from the analysis, as well as the limitations of the approach, are presented and discussed. (author)

  7. Modelling fuel cell performance using artificial intelligence

    Science.gov (United States)

    Ogaji, S. O. T.; Singh, R.; Pilidis, P.; Diacakis, M.

    Over the last few years, fuel cell technology has been promisingly increasing its share in the generation of stationary power. Numerous pilot projects are operating worldwide, continuously accumulating operating hours either as stand-alone devices or as part of gas turbine combined cycles. An essential tool for the adequate and dynamic analysis of such systems is a software model that enables the user to assess a large number of alternative options in the least possible time. Meanwhile, the sphere of application of artificial neural networks has widened to cover such fields as medicine, finance and, unsurprisingly, engineering (diagnostics of faults in machines). An artificial neural network can be described as a diagrammatic representation of a mathematical equation that receives values (inputs) and gives out results (outputs). Artificial neural network systems have the capacity to recognise and associate patterns and, because of their inherent design features, can be applied to linear and non-linear problem domains. In this paper, the performance of the fuel cell is modelled using artificial neural networks. The inputs to the network are variables that are critical to the performance of the fuel cell, while the outputs reflect the effect of changes in any one or all of the fuel cell design variables on its performance. Critical parameters for the cell include the geometrical configuration as well as the operating conditions. For the neural network, various design parameters such as the network size, training algorithm and activation functions, and their effects on the quality of the performance modelling, are discussed. Results from the analysis, as well as the limitations of the approach, are presented and discussed.

  8. Performance of neutron kinetics models for ADS transient analyses

    International Nuclear Information System (INIS)

    Rineiski, A.; Maschek, W.; Rimpault, G.

    2002-01-01

    Within the framework of the SIMMER code development, neutron kinetics models for simulating transients and hypothetical accidents in advanced reactor systems, in particular in Accelerator Driven Systems (ADSs), have been developed at FZK/IKET in cooperation with CE Cadarache. SIMMER is a fluid-dynamics/thermal-hydraulics code, coupled with a structure model and a space-, time- and energy-dependent neutronics module for analyzing transients and accidents. The advanced kinetics models have also been implemented into KIN3D, a module of the VARIANT/TGV code (stand-alone neutron kinetics) for broadening application and for testing and benchmarking. In the paper, a short review of the SIMMER and KIN3D neutron kinetics models is given. Some typical transients related to ADS perturbations are analyzed. The general models of SIMMER and KIN3D are compared with more simple techniques developed in the context of this work to get a better understanding of the specifics of transients in subcritical systems and to estimate the performance of different kinetics options. These comparisons may also help in elaborating new kinetics models and extending existing computation tools for ADS transient analyses. The traditional point-kinetics model may give rather inaccurate transient reaction rate distributions in an ADS even if the material configuration does not change significantly. This inaccuracy is not related to the problem of choosing a 'right' weighting function: the point-kinetics model with any weighting function cannot take into account pronounced flux shape variations related to possible significant changes in the criticality level or to fast beam trips. To improve the accuracy of the point-kinetics option for slow transients, we have introduced a correction factor technique. The related analyses give a better understanding of 'long-timescale' kinetics phenomena in the subcritical domain and help to evaluate the performance of the quasi-static scheme in a particular case. 
One

  9. Modelling the predictive performance of credit scoring

    Directory of Open Access Journals (Sweden)

    Shi-Wei Shen

    2013-07-01

    Research purpose: The purpose of this empirical paper was to examine the predictive performance of credit scoring systems in Taiwan. Motivation for the study: Corporate lending remains a major business line for financial institutions. However, in light of the recent global financial crises, it has become extremely important for financial institutions to implement rigorous means of assessing clients seeking access to credit facilities. Research design, approach and method: Using a data sample of 10 349 observations drawn between 1992 and 2010, logistic regression models were utilised to examine the predictive performance of credit scoring systems. Main findings: A goodness-of-fit test demonstrated that credit scoring models incorporating the Taiwan Corporate Credit Risk Index (TCRI) together with micro- and macroeconomic variables possessed greater predictive power. This suggests that macroeconomic variables do have explanatory power for default credit risk. Practical/managerial implications: The originality of the study is that three models were developed to predict corporate firms' defaults based on different microeconomic and macroeconomic factors, such as the TCRI, asset growth rates, stock index and gross domestic product. Contribution/value-add: The study utilises different goodness-of-fit measures and receiver operating characteristics in examining the robustness of the predictive power of these factors.
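The modelling technique named above, logistic regression of a default indicator on micro- and macroeconomic covariates, can be sketched in a few lines. The data below are synthetic (one made-up firm-level factor and one made-up macro factor), not the TCRI sample, and the fitting loop is plain gradient ascent on the log-likelihood rather than the authors' estimation procedure.

```python
import numpy as np

# Sketch of a logistic-regression default model: a default indicator
# regressed on a micro variable (e.g. an asset growth rate) and a macro
# variable (e.g. GDP growth). All data are synthetic.

rng = np.random.default_rng(42)
n = 2000
micro = rng.normal(0, 1, n)  # firm-level factor (standardised)
macro = rng.normal(0, 1, n)  # macroeconomic factor (standardised)
true_logit = -1.0 - 1.2 * micro - 0.8 * macro  # worse conditions -> more defaults
default = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-true_logit))).astype(float)

X = np.column_stack([np.ones(n), micro, macro])
beta = np.zeros(3)
for _ in range(500):  # gradient ascent on the log-likelihood
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (default - p) / n

p = 1 / (1 + np.exp(-X @ beta))
# Crude discrimination check: predicted default probability is higher,
# on average, for firms that actually defaulted.
print(p[default == 1].mean(), p[default == 0].mean())
```

A significant coefficient on the macro variable in such a fit is the kind of evidence behind the paper's conclusion that macroeconomic variables carry explanatory power for default risk.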

  10. Teacher Retirement Systems. A Review of Patterns of Teacher Pension Systems in the Fifty States, with a Model Investment Portfolio for a Typical System.

    Science.gov (United States)

    Stone, Charles Edward

    This document is a study of the teacher retirement systems of the United States. It also deals with the problems of improved management, particularly how funds might be invested to increase benefits or reduce the required contributions. It sets up a hypothetically ideal pattern for the investment portfolio of a typical State pension fund on the basis…

  11. Global Processing Speed in Children with Low Reading Ability and in Children and Adults with Typical Reading Ability: Exploratory Factor Analytic Models

    Science.gov (United States)

    Peter, Beate; Matsushita, Mark; Raskind, Wendy H.

    2011-01-01

    Purpose: To investigate processing speed as a latent dimension in children with dyslexia and children and adults with typical reading skills. Method: Exploratory factor analysis (FA) was based on a sample of multigenerational families, each ascertained through a child with dyslexia. Eleven measures--6 of them timed--represented verbal and…

  12. High Performance Modeling of Novel Diagnostics Configuration

    Science.gov (United States)

    Smith, Dalton; Gibson, John; Lodes, Rylie; Malcolm, Hayden; Nakamoto, Teagan; Parrack, Kristina; Trujillo, Christopher; Wilde, Zak; Los Alamos Laboratories Q-6 Students Team

    2017-06-01

    A novel diagnostics method to measure the Hayes Electric Effect was tested and verified against computerized models. Where standard PVDF diagnostics utilize piezoelectric materials to measure detonation pressure through strain-induced electrical signals, the PVDF was used in a novel technique by also detecting the detonation's induced electric field. The ALE-3D Hydro Codes predicted the performance by calculating detonation velocities, pressures, and arrival times. These theoretical results then validated the experimental use of the PVDF repurposed to specifically track the Hayes Electric Effect. Los Alamos National Laboratories Q-6.

  13. A Combat Mission Team Performance Model: Development and initial Application

    National Research Council Canada - National Science Library

    Silverman, Denise

    1997-01-01

    ... realistic combat scenarios. We present a conceptual model of team performance measurement in which aircrew coordination, team performance, mission performance and their interrelationships are operationally defined...

  14. The COD Model: Simulating Workgroup Performance

    Science.gov (United States)

    Biggiero, Lucio; Sevi, Enrico

Though the question of the determinants of workgroup performance is one of the most central in organization science, precise theoretical frameworks and formal demonstrations are still missing. To fill this gap, the COD agent-based simulation model is presented here and used to study the effects of task interdependence and bounded rationality on workgroup performance. The first relevant finding is an algorithmic demonstration of the ordering of interdependencies in terms of complexity, showing that the parallel mode is the simplest, followed by the sequential and then by the reciprocal. This result is far from new in organization science, but what is remarkable is that it now has the strength of an algorithmic demonstration instead of resting on the authoritativeness of some scholar or on episodic empirical findings. The second important result is that the progressive introduction of realistic limits to agents' rationality dramatically reduces workgroup performance and leads to a rather interesting result: when agents' rationality is severely bounded, simple norms work better than complex norms. The third main finding is that when the complexity of interdependence is high, the appropriate coordination mechanism is agents' direct and active collaboration, which means teamwork.

  15. Model for measuring complex performance in an aviation environment

    International Nuclear Information System (INIS)

    Hahn, H.A.

    1988-01-01

An experiment was conducted to identify models of pilot performance through the attainment and analysis of concurrent verbal protocols. Sixteen models were identified. Novice and expert pilots differed with respect to the models they used. Models were correlated with performance, particularly in the case of expert subjects. Models were not correlated with performance shaping factors (i.e. workload). 3 refs., 1 tab

  16. Numerical modeling capabilities to predict repository performance

    International Nuclear Information System (INIS)

    1979-09-01

This report presents a summary of current numerical modeling capabilities that are applicable to the design and performance evaluation of underground repositories for the storage of nuclear waste. The report includes codes that are available in-house within Golder Associates and Lawrence Livermore Laboratories, as well as those that are generally available within industry and universities. The first listing covers in-house codes in the subject areas of hydrology, solute transport, thermal and mechanical stress analysis, and structural geology. The second listing is divided by subject into the following categories: site selection, structural geology, mine structural design, mine ventilation, hydrology, and mine design/construction/operation. These programs are not specifically designed for use in the design and evaluation of an underground repository for nuclear waste, but several or most of them may be so used.

  17. Dengue human infection model performance parameters.

    Science.gov (United States)

    Endy, Timothy P

    2014-06-15

Dengue is a global health problem and of concern to travelers and deploying military personnel, with development and licensure of an effective tetravalent dengue vaccine a public health priority. The dengue viruses (DENVs) are mosquito-borne flaviviruses transmitted by infected Aedes mosquitoes. Illness manifests across a clinical spectrum with severe disease characterized by intravascular volume depletion and hemorrhage. DENV illness results from a complex interaction of viral properties and host immune responses. Dengue vaccine development efforts are challenged by immunologic complexity, lack of an adequate animal model of disease, absence of an immune correlate of protection, and only partially informative immunogenicity assays. A dengue human infection model (DHIM) will be an essential tool in developing potential dengue vaccines or antivirals. The potential performance parameters needed for a DHIM to support vaccine or antiviral candidates are discussed.

  18. High-performance phase-field modeling

    KAUST Repository

    Vignal, Philippe

    2015-04-27

Many processes in engineering and the sciences involve the evolution of interfaces. Among the mathematical frameworks developed to model these types of problems, the phase-field method has emerged as a possible solution. Phase-fields nonetheless lead to complex nonlinear, high-order partial differential equations, whose solution poses mathematical and computational challenges. Guaranteeing some of the physical properties of the equations has led to the development of efficient algorithms and discretizations capable of recovering said properties by construction [2, 5]. This work builds on these ideas and proposes novel discretization strategies that guarantee numerical energy dissipation for both conserved and non-conserved phase-field models. The temporal discretization is based on a novel method which relies on Taylor series and ensures strong energy stability. It is second-order accurate, and can also be rendered linear to speed up the solution process [4]. The spatial discretization relies on Isogeometric Analysis, a finite element method that possesses the k-refinement technology and enables the generation of high-order, high-continuity basis functions. These basis functions are well suited to handle the high-order operators present in phase-field models. Two-dimensional and three-dimensional results of the Allen-Cahn, Cahn-Hilliard, Swift-Hohenberg and phase-field crystal equations will be presented, which corroborate the theoretical findings and illustrate the robustness of the method. Results related to more challenging examples, namely the Navier-Stokes Cahn-Hilliard and a diffusion-reaction Cahn-Hilliard system, will also be presented. The implementation was done in PetIGA and PetIGA-MF, high-performance Isogeometric Analysis frameworks [1, 3], designed to handle non-linear, time-dependent problems.

  19. A modelling study of long term green roof retention performance.

    Science.gov (United States)

    Stovin, Virginia; Poë, Simon; Berretta, Christian

    2013-12-15

This paper outlines the development of a conceptual hydrological flux model for the long term continuous simulation of runoff and drought risk for green roof systems. A green roof's retention capacity depends upon its physical configuration, but it is also strongly influenced by local climatic controls, including the rainfall characteristics and the restoration of retention capacity associated with evapotranspiration during dry weather periods. The model includes a function that links evapotranspiration rates to substrate moisture content, and is validated against observed runoff data. The model's application to typical extensive green roof configurations is demonstrated with reference to four UK locations characterised by contrasting climatic regimes, using 30-year rainfall time-series inputs at hourly simulation time steps. It is shown that retention performance is dependent upon local climatic conditions. Volumetric retention ranges from 0.19 (cool, wet climate) to 0.59 (warm, dry climate). Per event retention is also considered, and it is demonstrated that retention performance decreases significantly when high return period events are considered in isolation. For example, in Sheffield the median per-event retention is 1.00 (many small events), but the median retention for events exceeding a 1 in 1 yr return period threshold is only 0.10. The simulation tool also provides useful information about the likelihood of drought periods, for which irrigation may be required. A sensitivity study suggests that green roofs with reduced moisture-holding capacity and/or low evapotranspiration rates will tend to offer reduced levels of retention, whilst high moisture-holding capacity and low evapotranspiration rates offer the strongest drought resistance.
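
    The moisture-dependent evapotranspiration idea described above can be illustrated with a minimal retention "bucket" model; the capacity, ET rate, time step, and rainfall series below are illustrative assumptions, not the paper's calibrated values.

    ```python
    # Conceptual green roof bucket: rainfall fills the substrate store; runoff
    # occurs only when storage exceeds capacity; evapotranspiration (ET) is
    # scaled by relative moisture content, restoring capacity between events.
    def simulate(rain, capacity=20.0, et_max=0.2):
        storage, runoff = 0.0, 0.0
        for r in rain:                                # one value per time step (mm)
            storage += r
            if storage > capacity:                    # store full -> excess runs off
                runoff += storage - capacity
                storage = capacity
            storage -= et_max * (storage / capacity)  # moisture-dependent ET
        retained = sum(rain) - runoff
        return retained, runoff

    # Two hypothetical storm events separated by a dry recovery period.
    rain = [0.0] * 24 + [5.0] * 6 + [0.0] * 48 + [12.0] * 3
    retained, runoff = simulate(rain)
    retention_fraction = retained / sum(rain)
    ```

    The dry spell between the two storms lets ET restore part of the store, which is why the second event still sees some retention despite the bucket having filled during the first.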

  20. The landscape of GPGPU performance modeling tools

    NARCIS (Netherlands)

    Madougou, S.; Varbanescu, A.; de Laat, C.; van Nieuwpoort, R.

    GPUs are gaining fast adoption as high-performance computing architectures, mainly because of their impressive peak performance. Yet most applications only achieve small fractions of this performance. While both programmers and architects have clear opinions about the causes of this performance gap,

  1. Hydrological simulation in a basin of typical tropical climate and soil using the SWAT model part I: Calibration and validation tests

    Directory of Open Access Journals (Sweden)

    Donizete dos R. Pereira

    2016-09-01

New hydrological insights: The SWAT model proved suitable for simulating the Pomba River sub-basin at the sites where rainfall representation was reasonable to good. Based on the paired t-test, the model can be used to simulate maximum, average and minimum annual daily streamflow, contributing to water resources management in the region, although the model still needs improvement, mainly in the representativeness of rainfall, to give better estimates of extreme values.

  2. How well can we forecast future model error and uncertainty by mining past model performance data

    Science.gov (United States)

    Solomatine, Dimitri

    2016-04-01

Consider a hydrological model Y(t) = M(X(t), P), where X = vector of inputs; P = vector of parameters; Y = model output (typically flow); t = time. In cases where there is enough past data on the performance of model M, it is possible to use these data to build a (data-driven) model EC of the error of model M. This model EC will be able to forecast the error E when a new input X is fed into model M; then, subtracting E from the model prediction Y, a better estimate of Y can be obtained. Model EC is usually called the error corrector (in meteorology, a bias corrector). However, we may go further in characterizing model deficiencies, and instead of using the error (a real value) we may consider a more sophisticated, namely probabilistic, characterization. So instead of a model EC of the error of model M, it is also possible to build a model U of the uncertainty of model M; if uncertainty is described as the model error distribution D, this model will calculate its properties - mean, variance, other moments, and quantiles. The general form of this model could be: D = U(RV), where RV = vector of relevant variables having influence on model uncertainty (to be identified e.g. by mutual information analysis); D = vector of variables characterizing the error distribution (typically, two or more quantiles). There is one aspect which is not always explicitly mentioned in uncertainty analysis work. In our view it is important to distinguish the following main types of model uncertainty: 1. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. its uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on these data.
Here the following methods can be mentioned: (a) quantile regression (QR
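
    The error-corrector scheme described above can be sketched in a few lines. The "hydrological model", its bias, and the linear corrector below are all illustrative assumptions, not the author's actual models: a deliberately biased toy model M is corrected by a model EC fitted to its past errors.

    ```python
    import numpy as np

    # Toy stand-in for a hydrological model M(X): deliberately biased so that
    # past errors carry learnable structure.
    def model_M(x):
        return 0.6 * x + 2.0

    rng = np.random.default_rng(0)
    X_past = rng.uniform(0.0, 50.0, 200)                    # past inputs
    Y_obs = 0.8 * X_past + 1.0 + rng.normal(0.0, 0.5, 200)  # "observed" outputs
    E_past = model_M(X_past) - Y_obs                        # past model errors

    # Error corrector EC: a data-driven model of the error as a function of X.
    a, b = np.polyfit(X_past, E_past, 1)

    def corrector_EC(x):
        return a * x + b

    # Forecast the error for a new input and subtract it from the raw prediction.
    x_new = 30.0
    y_raw = model_M(x_new)
    y_corrected = y_raw - corrector_EC(x_new)

    truth = 0.8 * x_new + 1.0   # known here only because the data are synthetic
    ```

    The same pattern extends to the probabilistic variant: instead of fitting the mean error, one would fit several quantiles of E (e.g. by quantile regression) to obtain the distribution D.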

  3. PV Performance Modeling Methods and Practices: Results from the 4th PV Performance Modeling Collaborative Workshop.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-03-01

    In 2014, the IEA PVPS Task 13 added the PVPMC as a formal activity to its technical work plan for 2014-2017. The goal of this activity is to expand the reach of the PVPMC to a broader international audience and help to reduce PV performance modeling uncertainties worldwide. One of the main deliverables of this activity is to host one or more PVPMC workshops outside the US to foster more international participation within this collaborative group. This report reviews the results of the first in a series of these joint IEA PVPS Task 13/PVPMC workshops. The 4th PV Performance Modeling Collaborative Workshop was held in Cologne, Germany at the headquarters of TÜV Rheinland on October 22-23, 2015.

  4. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

A model for a flue gas boiler covering the flue gas and the water/steam side has been formulated. The model has been formulated as a number of sub-models that are merged into an overall model for the complete boiler. Sub-models have been defined for the furnace, the convection zone (split in two: a zone submerged in water and a zone covered by steam), the material in the boiler (the steel), and two models for, respectively, the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a Differential-Algebraic-Equation (DAE) system, and MatLab/Simulink has subsequently been applied for carrying out the simulations. To verify the simulated results, experiments have been carried out on a full-scale boiler plant.
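
    The sub-model coupling idea can be hinted at with a far cruder, lumped energy balance for the water side alone, integrated by explicit Euler; all parameter values below are illustrative assumptions, and the full model is a DAE system rather than this single ODE.

    ```python
    # Minimal lumped sketch: furnace heat input warms a single well-mixed water
    # mass, with a linear heat loss to ambient. Explicit Euler integration.
    def simulate_boiler(q_furnace=1.0e6,   # furnace heat input, W (assumed)
                        m_water=5000.0,    # water mass, kg (assumed)
                        cp=4186.0,         # specific heat of water, J/(kg K)
                        t_amb=20.0,        # ambient temperature, deg C
                        ua_loss=2000.0,    # loss coefficient, W/K (assumed)
                        dt=1.0, steps=3600):
        t_water = t_amb
        history = []
        for _ in range(steps):
            q_loss = ua_loss * (t_water - t_amb)        # heat loss to surroundings
            dT_dt = (q_furnace - q_loss) / (m_water * cp)
            t_water += dT_dt * dt                       # Euler step
            history.append(t_water)
        return history

    temps = simulate_boiler()
    ```

    The water temperature relaxes toward the steady state t_amb + q_furnace/ua_loss with time constant m_water*cp/ua_loss; the real model additionally couples this balance to the flue gas, steel, and steam sub-models.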

  5. Modeling the marketing strategy-performance relationship : towards an hierarchical marketing performance framework

    OpenAIRE

    Huizingh, Eelko K.R.E.; Zengerink, Evelien

    2001-01-01

    Accurate measurement of marketing performance is an important topic for both marketing academics and marketing managers. Many researchers have recognized that marketing performance measurement should go beyond financial measurement. In this paper we propose a conceptual framework that models marketing performance as a sequence of intermediate performance measures ultimately leading to financial performance. This framework, called the Hierarchical Marketing Performance (HMP) framework, starts ...

  6. Modelling and simulating fire tube boiler performance

    DEFF Research Database (Denmark)

    Sørensen, K.; Condra, T.; Houbak, Niels

    2003-01-01

    : a zone submerged in water and a zone covered by steam), a model for the material in the boiler (the steel) and 2 models for resp. the water/steam zone (the boiling) and the steam. The dynamic model has been developed as a number of Differential-Algebraic-Equation system (DAE). Subsequently Mat...

  7. Models and criteria for waste repository performance

    International Nuclear Information System (INIS)

    Smith, C.F.; Cohen, J.J.

    1981-03-01

    A primary objective of the Waste Management Program is to assure that public health is protected. Predictive modeling, to some extent, will play a role in assuring that this objective is met. This paper considers the requirements and limitations of predictive modeling in providing useful inputs to waste management decision making. Criteria development needs and the relation between criteria and models are also discussed

  8. Spatial-temporal Variations and Source Apportionment of typical Heavy Metals in Beijing-Tianjin-Hebei (BTH) region of China Based on Localized Air Pollutants Emission Inventory and WRF-CMAQ modelling

    Science.gov (United States)

    Tian, H.; Liu, S.; Zhu, C.; Liu, H.; Wu, B.

    2017-12-01

Abstract: Anthropogenic atmospheric emissions of air pollutants have caused worldwide concern due to their adverse effects on human health and the ecosystem. By determining the best available emission factors for varied source categories, we established the first comprehensive atmospheric emission inventories of hazardous air pollutants, including 12 typical toxic heavy metals (Hg, As, Se, Pb, Cd, Cr, Ni, Sb, Mn, Co, Cu, and Zn), from primary anthropogenic activities in the Beijing-Tianjin-Hebei (BTH) region of China for the year 2012. The annual emissions of these pollutants were allocated on a high-spatial-resolution 9 km × 9 km grid with an ArcGIS methodology and surrogate indexes, such as regional population and gross domestic product (GDP). Notably, the total heavy metal emissions from this region represented about 10.9% of the Chinese national total. Further, the WRF-CMAQ modeling system was applied to simulate the regional concentrations of heavy metals and explore their spatial-temporal variations, and the source apportionment of these heavy metals in the BTH region was performed using the Brute-Force method. Finally, integrated countermeasures were proposed to minimize the final discharge of air pollutants in view of the current and future demands of energy saving and pollution reduction in China. Keywords: heavy metals; particulate matter; emission inventory; CMAQ model; source apportionment. Acknowledgment: This work was funded by the National Natural Science Foundation of China (21377012 and 21177012) and the Trail Special Program of Research on the Cause and Control Technology of Air Pollution under the National Key Research and Development Plan of China (2016YFC0201501).
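
    The surrogate-based gridding step mentioned above (allocating a regional emission total to cells in proportion to population and GDP) can be sketched as follows; the blending weights and per-cell data are illustrative assumptions, not the study's actual surrogates.

    ```python
    # Surrogate-based spatial allocation: a regional emission total is spread
    # over grid cells in proportion to a blended surrogate share.
    def allocate(total, population, gdp, w_pop=0.5, w_gdp=0.5):
        p_sum, g_sum = sum(population), sum(gdp)
        shares = [w_pop * p / p_sum + w_gdp * g / g_sum
                  for p, g in zip(population, gdp)]
        return [total * s for s in shares]

    cells_pop = [1200, 300, 800, 50]      # hypothetical per-cell population
    cells_gdp = [90.0, 10.0, 60.0, 5.0]   # hypothetical per-cell GDP
    grid_emissions = allocate(10.0, cells_pop, cells_gdp)  # e.g. 10 t of Pb
    ```

    Because the shares sum to one, the gridded values conserve the regional total by construction, which is the key property of this allocation method.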

  9. On the significance of the noise model for the performance of a linear MPC in closed-loop operation

    DEFF Research Database (Denmark)

    Hagdrup, Morten; Boiroux, Dimitri; Mahmoudi, Zeinab

    2016-01-01

models typically means fewer parameters to identify. Systematic tuning of such controllers is discussed. Simulation studies are conducted for linear time-invariant systems, showing that choosing a noise model of low order is beneficial for closed-loop performance. (C) 2016, IFAC (International Federation...

  10. Simulation of changes in heavy metal contamination in farmland soils of a typical manufacturing center through logistic-based cellular automata modeling.

    Science.gov (United States)

    Qiu, Menglong; Wang, Qi; Li, Fangbai; Chen, Junjian; Yang, Guoyi; Liu, Liming

    2016-01-01

A customized logistic-based cellular automata (CA) model was developed to simulate changes in heavy metal contamination (HMC) in farmland soils of Dongguan, a manufacturing center in Southern China, and to discover the relationship between HMC and related explanatory variables (continuous and categorical). The model was calibrated through the simulation and validation of HMC in 2012. Thereafter, the model was implemented for the scenario simulation of development alternatives for HMC in 2022. The HMC in 2002 and 2012 was determined through soil tests and cokriging. Continuous variables were divided into two groups by odds ratios. Positive variables (odds ratios > 1) included the Nemerow synthetic pollution index in 2002, linear drainage density, distance from the city center, distance from the railway, slope, and secondary industrial output per unit of land; negative variables had odds ratios < 1. The scenario simulation shows that the government should not only implement stricter environmental regulation but also strengthen the remediation of the currently polluted area to effectively mitigate HMC.
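
    The logistic-based CA transition rule can be sketched roughly as follows; the two explanatory variables, the coefficients, and the threshold update are illustrative assumptions, not the calibrated Dongguan model.

    ```python
    import math
    import random

    # Logistic transition rule: each farmland cell's probability of becoming
    # heavy-metal contaminated is a logistic function of explanatory variables
    # (here two hypothetical ones: prior pollution index and drainage density).
    def p_contaminated(pollution_index, drainage_density,
                       b0=-2.0, b1=2.5, b2=1.5):   # illustrative coefficients
        z = b0 + b1 * pollution_index + b2 * drainage_density
        return 1.0 / (1.0 + math.exp(-z))

    random.seed(42)
    grid = [[random.random() for _ in range(10)] for _ in range(10)]   # pollution index
    drain = [[random.random() for _ in range(10)] for _ in range(10)]  # drainage density

    # One CA step: cells whose transition probability exceeds 0.5 convert.
    next_state = [[1 if p_contaminated(grid[i][j], drain[i][j]) > 0.5 else 0
                   for j in range(10)] for i in range(10)]
    n_contaminated = sum(map(sum, next_state))
    ```

    A calibrated model would fit the coefficients by logistic regression on observed 2002-to-2012 transitions and could convert cells stochastically rather than by a fixed threshold.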

  11. Models and criteria for LLW disposal performance

    International Nuclear Information System (INIS)

    Smith, C.F.; Cohen, J.J.

    1980-12-01

A primary objective of the Low Level Waste (LLW) Management Program is to assure that public health is protected. Predictive modeling, to some extent, will play a role in meeting this objective. This paper considers the requirements and limitations of predictive modeling in providing useful inputs to waste management decision making. In addition, criteria development needs and the relation between criteria and models are discussed

  12. Models and criteria for LLW disposal performance

    International Nuclear Information System (INIS)

    Smith, C.F.; Cohen, J.J.

    1980-01-01

    A primary objective of the Low Level Waste (LLW) Management Program is to assure that public health is protected. Predictive modeling, to some extent, will play a role in meeting this objective. This paper considers the requirements and limitations of predictive modeling in providing useful inputs to waste management decision making. In addition, criteria development needs and the relation between criteria and models are discussed

  13. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    Science.gov (United States)

The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors.
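
    The three error flavours mentioned above can be illustrated with common goodness-of-fit measures; the specific choice of metrics here is an assumption for illustration, not necessarily the MPESA set. Percent bias responds to magnitude only, Pearson r to sequence (shape/timing) only, and RMSE to combined magnitude-and-sequence error.

    ```python
    import math

    obs = [2.0, 4.0, 6.0, 8.0, 6.0, 4.0]   # hypothetical observed series
    sim = [2.5, 3.5, 6.5, 7.5, 6.5, 3.5]   # hypothetical simulated series

    def pbias(o, s):
        """Percent bias: sensitive to overall magnitude only."""
        return 100.0 * (sum(s) - sum(o)) / sum(o)

    def pearson_r(o, s):
        """Pearson correlation: sensitive to sequence/shape only."""
        n = len(o)
        mo, ms = sum(o) / n, sum(s) / n
        cov = sum((a - mo) * (b - ms) for a, b in zip(o, s))
        return cov / math.sqrt(sum((a - mo) ** 2 for a in o) *
                               sum((b - ms) ** 2 for b in s))

    def rmse(o, s):
        """Root mean square error: combined magnitude and sequence error."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(o, s)) / len(o))
    ```

    For the series above, the totals match (zero percent bias) and the shape is nearly identical (high r), yet RMSE still registers the pointwise offsets.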

  14. Modelling Flexible Pavement Response and Performance

    DEFF Research Database (Denmark)

    Ullidtz, Per

This textbook is primarily concerned with models for predicting the future condition of flexible pavements, as a function of traffic loading, climate, materials, etc., using analytical-empirical methods.

  15. HANDOVER MANAGEABILITY AND PERFORMANCE MODELING IN

    African Journals Online (AJOL)

    SOFTLINKS DIGITAL

    This work develops a model for interpreting implementation progress. The proposed progress monitoring model uses existing implementation artefact metrics, tries .... determine implementation velocity. As noted by McConnell [28] this velocity increases at the beginning and decreases near the end. A formal implementation.

  16. Modeling, simulation and performance evaluation of parabolic ...

    African Journals Online (AJOL)

A model of a parabolic trough power plant is presented, taking into consideration the different losses associated with collection of the solar irradiance as well as thermal losses. MATLAB software is employed to model the power plant at reference state points. The code is then used to find the different reference values which are ...

  17. Detailed Performance Model for Photovoltaic Systems: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Tian, H.; Mancilla-David, F.; Ellis, K.; Muljadi, E.; Jenkins, P.

    2012-07-01

    This paper presents a modified current-voltage relationship for the single diode model. The single-diode model has been derived from the well-known equivalent circuit for a single photovoltaic cell. The modification presented in this paper accounts for both parallel and series connections in an array.
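
    The single-diode relationship and its series/parallel scaling can be sketched as follows; the parameter values and the simple bisection solver are illustrative assumptions, not the paper's modified formulation.

    ```python
    import math

    # Classic single-diode cell equation, implicit in the current I:
    #   I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    # Solved here by bisection; f(i) is strictly decreasing in i.
    def cell_current(v, iph=5.0, i0=1e-9, n=1.3, vt=0.02585, rs=0.01, rsh=100.0):
        def f(i):
            return (iph - i0 * (math.exp((v + i * rs) / (n * vt)) - 1.0)
                    - (v + i * rs) / rsh - i)
        lo, hi = -1.0, iph + 1.0   # bracket: f(lo) > 0 > f(hi) for 0 <= v < Voc-ish
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if f(mid) > 0.0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Array-level scaling: Ns cells in series divide the array voltage,
    # Np parallel strings multiply the current.
    def array_current(v_array, ns=36, np_=2):
        return np_ * cell_current(v_array / ns)

    i_sc_cell = cell_current(0.0)     # short-circuit current of one cell
    i_sc_array = array_current(0.0)   # Np times larger at the array level
    ```

    The idealized scaling assumes identical cells; the paper's modification addresses exactly the case where simple Ns/Np scaling of the single-cell curve is not sufficient.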

  18. Model for Agile Software Development Performance Monitoring

    OpenAIRE

    Žabkar, Nataša

    2013-01-01

Agile methodologies have been in use for more than ten years and during this time they have proved to be efficient, even though empirical research is scarce, especially regarding agile software development performance monitoring. The most popular agile framework, Scrum, uses only one measure of performance: the amount of work remaining for implementation of a User Story from the Product Backlog or for implementation of a Task from the Sprint Backlog. In time the need for additional me...

  19. Modeling and optimization of LCD optical performance

    CERN Document Server

    Yakovlev, Dmitry A; Kwok, Hoi-Sing

    2015-01-01

    The aim of this book is to present the theoretical foundations of modeling the optical characteristics of liquid crystal displays, critically reviewing modern modeling methods and examining areas of applicability. The modern matrix formalisms of optics of anisotropic stratified media, most convenient for solving problems of numerical modeling and optimization of LCD, will be considered in detail. The benefits of combined use of the matrix methods will be shown, which generally provides the best compromise between physical adequacy and accuracy with computational efficiency and optimization fac

  20. VCP associated inclusion body myopathy and paget disease of bone knock-in mouse model exhibits tissue pathology typical of human disease.

    Directory of Open Access Journals (Sweden)

    Mallikarjun Badadani

    2010-10-01

Dominant mutations in the valosin containing protein (VCP) gene cause inclusion body myopathy associated with Paget's disease of bone and frontotemporal dementia (IBMPFD). We have generated a knock-in mouse model with the common R155H mutation. Mice demonstrate progressive muscle weakness starting at approximately 6 months of age. Histology of mutant muscle showed progressive vacuolization of myofibrils and centrally located nuclei, and immunostaining shows progressive cytoplasmic accumulation of TDP-43 and ubiquitin-positive inclusion bodies in quadriceps myofibrils and brain. Increased LC3-II staining of muscle sections, representing an increased number of autophagosomes, suggested impaired autophagy. Increased apoptosis was demonstrated by elevated caspase-3 activity and increased TUNEL-positive nuclei. X-ray microtomography (μCT) images show radiolucency of distal femurs and proximal tibiae in knock-in mice, and μCT morphometrics shows a decreased trabecular pattern and increased cortical wall thickness. Bone histology and bone-marrow-derived macrophage cultures in these mice revealed increased osteoclastogenesis observed by TRAP staining, suggestive of Paget's disease of bone. The VCP(R155H/+) knock-in mice replicate the muscle, bone and brain pathology of inclusion body myopathy, thus representing a useful model for preclinical studies.

  1. A unified tool for performance modelling and prediction

    International Nuclear Information System (INIS)

    Gilmore, Stephen; Kloul, Leila

    2005-01-01

    We describe a novel performability modelling approach, which facilitates the efficient solution of performance models extracted from high-level descriptions of systems. The notation which we use for our high-level designs is the Unified Modelling Language (UML) graphical modelling language. The technology which provides the efficient representation capability for the underlying performance model is the multi-terminal binary decision diagram (MTBDD)-based PRISM probabilistic model checker. The UML models are compiled through an intermediate language, the stochastic process algebra PEPA, before translation into MTBDDs for solution. We illustrate our approach on a real-world analysis problem from the domain of mobile telephony

  2. Integrated thermodynamic model for ignition target performance

    Directory of Open Access Journals (Sweden)

    Springer P.T.

    2013-11-01

We have derived a 3-dimensional synthetic model for NIF implosion conditions by predicting and optimizing fits to a broad set of x-ray and nuclear diagnostics obtained on each shot. By matching x-ray images, burn width, neutron time-of-flight ion temperature, yield, and fuel ρr, we obtain nearly unique constraints on conditions in the hotspot and fuel in a model that is entirely consistent with the observables. This model allows us to determine hotspot density, pressure, areal density (ρr), total energy, and other ignition-relevant parameters not available from any single diagnostic. This article describes the model and its application to National Ignition Facility (NIF) tritium-hydrogen-deuterium (THD) and DT implosion data, and provides an explanation for the large yield and ρr degradation compared to numerical code predictions.

  3. Mathematical Modeling of Circadian/Performance Countermeasures

    Data.gov (United States)

    National Aeronautics and Space Administration — We developed and refined our current mathematical model of circadian rhythms to incorporate melatonin as a marker rhythm. We used an existing physiologically based...

  4. Hydrologic Evaluation of Landfill Performance (HELP) Model

    Science.gov (United States)

    The program models rainfall, runoff, infiltration, and other water pathways to estimate how much water builds up above each landfill liner. It can incorporate data on vegetation, soil types, geosynthetic materials, initial moisture conditions, slopes, etc.

  5. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    Science.gov (United States)

    Feary, Michael S.

    2012-01-01

This presentation is part of a panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically focus on how to describe and evaluate models of human performance. My presentation will focus on the generation of distributions of performance, and the evaluation of different strategies for humans performing tasks with mixed-initiative (Human-Automation) systems. I will also discuss how to provide Human Performance modeling data to support decisions on acceptability and tradeoffs in the design of safety-critical systems. I will conclude with challenges for the future.

  6. Comparison of performance of simulation models for floor heating

    DEFF Research Database (Denmark)

    Weitzmann, Peter; Svendsen, Svend

    2005-01-01

    This paper describes the comparison of performance of simulation models for floor heating with different level of detail in the modelling process. The models are compared in an otherwise identical simulation model containing room model, walls, windows, ceiling and ventilation system. By exchanging...

  7. Developing an Energy Performance Modeling Startup Kit

    Energy Technology Data Exchange (ETDEWEB)

    none,

    2012-10-01

    In 2011, the NAHB Research Center began assessing the needs and motivations of residential remodelers regarding energy performance remodeling. This report outlines: the current remodeling industry and the role of energy efficiency; gaps and barriers to adding energy efficiency into remodeling; and support needs of professional remodelers to increase sales and projects involving improving home energy efficiency.

  8. Modeling vibrato and portamento in music performance

    NARCIS (Netherlands)

    Desain, P.W.M.; Honing, H.J.

    1999-01-01

    Research in the psychology of music dealing with expression is often concerned with the discrete aspects of music performance, and mainly concentrates on the study of piano music (partly because of the ease with which piano music can be reduced to discrete note events). However, on other

  9. WirelessHART modeling and performance evaluation

    NARCIS (Netherlands)

    Remke, Anne Katharina Ingrid; Wu, Xian

    2013-01-01

    In process industries wired supervisory and control networks are more and more replaced by wireless systems. Wireless communication inevitably introduces time delays and message losses, which may degrade the system reliability and performance. WirelessHART, as the first international standard for

  10. Modeling the relationship between landscape characteristics and water quality in a typical highly intensive agricultural small watershed, Dongting lake basin, south central China.

    Science.gov (United States)

    Li, Hongqing; Liu, Liming; Ji, Xiang

    2015-03-01

Understanding the relationship between landscape characteristics and water quality is critically important for estimating pollution potential and reducing pollution risk. Therefore, this study examines the relationship between landscape characteristics and water quality at both spatial and temporal scales. The study took place in the Jinjing River watershed in 2010; seven landscape types and four water quality indicators were chosen as analysis parameters. Three different buffer areas along the river were drawn to analyze the relationship as a function of spatial scale. The results of a Pearson's correlation coefficient analysis suggest that "source" landscapes, namely tea gardens, residential areas, and paddy lands, have positive effects on water quality parameters, while forests exhibit a negative influence on water quality parameters because they represent a "sink" landscape; the sub-watershed level is identified as a suitable scale of analysis. Using principal component analysis, tea gardens, residential areas, paddy lands, and forests were identified as the main landscape indices. A stepwise multiple regression analysis was employed to model the relationship between landscape characteristics and water quality for each season. The results demonstrate that both landscape composition and configuration affect water quality. In summer and winter, the landscape metrics explained approximately 80.7 % of the variance in the water quality variables, which was higher than that for spring and fall (60.3 %). This study can help environmental managers to understand the relationships between landscapes and water quality and provides landscape ecological approaches for water quality control and land use management.
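
    The stepwise multiple regression step can be sketched with a small forward-selection toy; the landscape metrics, their weights, and the data below are synthetic stand-ins, not the watershed measurements.

    ```python
    import numpy as np

    # Synthetic stand-in data: a water-quality variable (wq) driven by three
    # landscape metrics with known, illustrative weights plus noise.
    rng = np.random.default_rng(1)
    n = 200
    tea, paddy, forest = rng.random(n), rng.random(n), rng.random(n)
    wq = 2.0 * tea + 1.0 * paddy - 1.5 * forest + rng.normal(0.0, 0.1, n)

    def r2(X, y):
        """R^2 of an ordinary least-squares fit with intercept."""
        X1 = np.column_stack([np.ones(len(y))] + X)
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

    # Forward stepwise selection: greedily add the metric that most improves R^2.
    candidates = {"tea": tea, "paddy": paddy, "forest": forest}
    selected, remaining = [], dict(candidates)
    for _ in range(2):   # keep the two strongest predictors
        best = max(remaining,
                   key=lambda k: r2([candidates[s] for s in selected]
                                    + [remaining[k]], wq))
        selected.append(best)
        del remaining[best]
    ```

    With these synthetic weights the strongest "source" metric enters first and the "sink" metric second, mirroring the positive/negative influences reported in the abstract.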

  11. Analysis of spatial distribution characteristics of dissolved organic matter in typical greenhouse soil of northern China using three dimensional fluorescence spectra technique and parallel factor analysis model.

    Science.gov (United States)

    Pan, Hong-wei; Lei, Hong-jun; Han, Yu-ping; Xi, Bei-dou; He, Xiao-song; Xu, Qi-gong; Li, Dan

    2014-06-01

    The aim of the present work is to study soil DOM characteristics in vegetable greenhouses under long-term cultivation. Results showed that the soil DOM mainly consisted of three components, fulvic acid-like (C1), humic acid-like (C2) and protein-like (C3), with C1 dominant. The spatial distribution of DOM was also studied. In the vertical direction, C1 and C2 decreased significantly with increasing soil depth, while the C3 component first increased and then decreased. The humification coefficient decreased rapidly from 0-20 to 30-40 cm, and then increased from 30-40 to 40-50 cm. In the horizontal direction, the level of the C2 component varied greatly in space, that of the C1 component changed little, and that of the C3 component fell between the two. The humification degree of each soil layer also varied significantly in space; the humification process of soil organic matter mainly occurred in the surface soil layer. The novel aspects of this study are: (1) analysis of the composition and spatial heterogeneity of soil DOM in vegetable greenhouses; (2) the successful use of three-dimensional fluorescence spectroscopy and a parallel factor analysis model to quantify the components of soil DOM, which provides a new method for soil DOM analysis.

  12. Typical errors of ESP users

    Science.gov (United States)

    Eremina, Svetlana V.; Korneva, Anna A.

    2004-07-01

    The paper presents an analysis of errors made by ESP (English for specific purposes) users which can be considered typical. They occur as a result of misuse of the resources of English grammar and tend to resist correction. Their origin and places of occurrence are also discussed.

  13. Neuro-fuzzy model for evaluating the performance of processes ...

    Indian Academy of Sciences (India)

    In this work an Adaptive Neuro-Fuzzy Inference System (ANFIS) was used to model the periodic performance of some multi-input single-output (MISO) processes, namely: brewery operations (case study 1) and soap production (case study 2) processes. Two ANFIS models were developed to model the performance of the ...

  14. Persistence Modeling for Assessing Marketing Strategy Performance

    NARCIS (Netherlands)

    M.G. Dekimpe (Marnik); D.M. Hanssens (Dominique)

    2003-01-01

    textabstractThe question of long-run market response lies at the heart of any marketing strategy that tries to create a sustainable competitive advantage for the firm or brand. A key challenge, however, is that only short-run results of marketing actions are readily observable. Persistence modeling

  15. Some useful characteristics of performance models

    International Nuclear Information System (INIS)

    Worledge, D.H.

    1985-01-01

    This paper examines the demands placed upon models of human cognitive decision processes in application to Probabilistic Risk Assessment. Successful models for this purpose should: (1) be based on proven or plausible psychological knowledge, e.g., Rasmussen's mental schematic; (2) incorporate opportunities for slips; (3) take account of the recursive nature, in time, of corrections to mistaken actions; and (4) depend on the crew's predominant mental states that accompany such recursions. The latter is equivalent to an explicit coupling between input and output of Rasmussen's mental schematic. A family of such models is proposed with observable rate processes mediating the (conscious) mental states involved. It is expected that the cumulative probability distributions corresponding to the individual rate processes can be identified with probability-time correlations of the HCR (Human Cognitive Reliability) type discussed elsewhere in this session. The functional forms of the conditional rates are intuitively shown to have simple characteristics that lead to a strongly recursive stochastic process with significant predictive capability. Models of the type proposed have few parts and form a representation that is intentionally far short of a fully transparent exposition of the mental process, in order to avoid making impossible demands on data.

  16. Evaluating Performances of Traffic Noise Models | Oyedepo ...

    African Journals Online (AJOL)

    Traffic noise levels in decibels dB(A) were measured at six locations using a 407780A Integrating Sound Level Meter, while spot speed and traffic volume were collected with a cine-camera. The predicted sound exposure level (SEL) was evaluated using the Burgess, British and FWHA models. The average noise levels obtained are 77.64 ...

  17. HANDOVER MANAGEABILITY AND PERFORMANCE MODELING IN

    African Journals Online (AJOL)

    SOFTLINKS DIGITAL

    situations. Library Management Design Using Use Case. To model software using object-oriented design, a case study of the Bingham University Library Management System is used. Software is developed to automate the Bingham University manual library. The system will be stand-alone and will be designed with the.

  18. Sustaining Team Performance: A Systems Model

    Science.gov (United States)

    1979-07-31

    a "forgetting curve." Three cases were tested and retested under four different research schedules. A description of the test cases follows. ...available to show fluctuation in Ph due to the unit yearly training cycle. Another real-world military example of the classical forgetting curve is found in the ...no practice between the acquisition and subsequent test of performance. The classical forgetting curve is believed to apply. The shape of the curve depends

  19. Modelling swimming hydrodynamics to enhance performance

    OpenAIRE

    Marinho, D.A.; Rouboa, A.; Barbosa, Tiago M.; Silva, A.J.

    2010-01-01

    Swimming assessment is one of the most complex but outstanding and fascinating topics in biomechanics. Computational fluid dynamics (CFD) methodology is one of the different methods that have been applied in swimming research to observe and understand water movements around the human body and its application to improve swimming performance. CFD has been applied attempting to understand deeply the biomechanical basis of swimming. Several studies have been conducted willing to analy...

  20. Dynamic Model of Centrifugal Compressor for Prediction of Surge Evolution and Performance Variations

    International Nuclear Information System (INIS)

    Jung, Mooncheong; Han, Jaeyoung; Yu, Sangseok

    2016-01-01

    When a control algorithm is developed to protect an automotive compressor against surge, the simulation model typically relies on an empirically determined look-up table. However, it is difficult for a control-oriented empirical model to capture the surge characteristics of the supercharger. In this study, a dynamic supercharger model is developed to predict the performance of a centrifugal compressor under dynamic load follow-up. The model is developed in the Simulink® environment, and is composed of a compressor, throttle body, valves, and chamber. Greitzer’s compressor model is used, and the geometric parameters are taken from the actual supercharger. The simulation model is validated with experimental data. It is shown that compressor surge is effectively predicted by this dynamic compressor model under various operating conditions.
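A minimal sketch of the kind of lumped-parameter surge model the abstract refers to, assuming the standard nondimensional two-state Greitzer formulation with a cubic (Moore-Greitzer) compressor characteristic and a square-root throttle law; all coefficients below are hypothetical, not the actual supercharger parameters:

```python
import math

def simulate_greitzer(B=1.8, gamma=0.6, dt=1e-3, steps=20000):
    """Forward-Euler integration of the nondimensional two-state Greitzer model.

    States: phi = compressor mass-flow coefficient, psi = plenum pressure rise.
    B is Greitzer's stability parameter; large B favours surge oscillations.
    """
    # Assumed cubic compressor characteristic (Moore-Greitzer form).
    psi0, H, W = 0.3, 0.18, 0.25
    def psi_c(phi):
        x = phi / W - 1.0
        return psi0 + H * (1.0 + 1.5 * x - 0.5 * x ** 3)
    # Assumed square-root throttle characteristic, slope set by gamma.
    def phi_t(psi):
        return gamma * math.sqrt(max(psi, 0.0))
    phi, psi = W, psi_c(W)          # start on the compressor characteristic
    phis, psis = [], []
    for _ in range(steps):
        dphi = B * (psi_c(phi) - psi)        # compressor duct dynamics
        dpsi = (phi - phi_t(psi)) / B        # plenum mass balance
        phi += dt * dphi
        psi += dt * dpsi
        phis.append(phi)
        psis.append(psi)
    return phis, psis
```

For these (illustrative) parameter values the operating point sits on the positively sloped part of the characteristic, so the trajectory settles into a bounded surge-like limit cycle rather than a steady point.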

  1. An Empirical Study of a Solo Performance Assessment Model

    Science.gov (United States)

    Russell, Brian E.

    2015-01-01

    The purpose of this study was to test a hypothesized model of solo music performance assessment. Specifically, this study investigates the influence of technique and musical expression on perceptions of overall performance quality. The Aural Musical Performance Quality (AMPQ) measure was created to measure overall performance quality, technique,…

  2. Developing an Energy Performance Modeling Startup Kit

    Energy Technology Data Exchange (ETDEWEB)

    Wood, A.

    2012-10-01

    In 2011, the NAHB Research Center began the first part of the multi-year effort by assessing the needs and motivations of residential remodelers regarding energy performance remodeling. The scope is multifaceted - all perspectives will be sought related to remodeling firms ranging in size from small-scale, sole proprietor to national. This will allow the Research Center to gain a deeper understanding of the remodeling and energy retrofit business and the needs of contractors when offering energy upgrade services. To determine the gaps and the motivation for energy performance remodeling, the NAHB Research Center conducted (1) an initial series of focus groups with remodelers at the 2011 International Builders' Show, (2) a second series of focus groups with remodelers at the NAHB Research Center in conjunction with the NAHB Spring Board meeting in DC, and (3) quantitative market research with remodelers based on the findings from the focus groups. The goal was threefold, to: Understand the current remodeling industry and the role of energy efficiency; Identify the gaps and barriers to adding energy efficiency into remodeling; and Quantify and prioritize the support needs of professional remodelers to increase sales and projects involving improving home energy efficiency. This report outlines all three of these tasks with remodelers.

  3. Modeling the Mechanical Performance of Die Casting Dies

    Energy Technology Data Exchange (ETDEWEB)

    R. Allen Miller

    2004-02-27

    The following report covers work performed at Ohio State on modeling the mechanical performance of dies. The focus of the project was the development and, particularly, verification of finite element techniques used to model and predict displacements and stresses in die casting dies. The work entails a major case study performed with an industrial partner on a production die, and laboratory experiments performed at Ohio State.

  4. Modeling the marketing strategy-performance relationship : towards an hierarchical marketing performance framework

    NARCIS (Netherlands)

    Huizingh, Eelko K.R.E.; Zengerink, Evelien

    2001-01-01

    Accurate measurement of marketing performance is an important topic for both marketing academics and marketing managers. Many researchers have recognized that marketing performance measurement should go beyond financial measurement. In this paper we propose a conceptual framework that models

  5. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    International Nuclear Information System (INIS)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-01

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded as a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation

  6. Investigation of modern methods of probabilistic sensitivity analysis of final repository performance assessment models (MOSEL)

    Energy Technology Data Exchange (ETDEWEB)

    Spiessl, Sabine; Becker, Dirk-Alexander

    2017-06-15

    Sensitivity analysis is a mathematical means for analysing the sensitivities of a computational model to variations of its input parameters. Thus, it is a tool for managing parameter uncertainties. It is often performed probabilistically as global sensitivity analysis, running the model a large number of times with different parameter value combinations. Going along with the increase of computer capabilities, global sensitivity analysis has been a field of mathematical research for some decades. In the field of final repository modelling, probabilistic analysis is regarded as a key element of a modern safety case. An appropriate uncertainty and sensitivity analysis can help identify parameters that need further dedicated research to reduce the overall uncertainty, generally leads to better system understanding and can thus contribute to building confidence in the models. The purpose of the project described here was to systematically investigate different numerical and graphical techniques of sensitivity analysis with typical repository models, which produce a distinctly right-skewed and tailed output distribution and can exhibit a highly nonlinear, non-monotonic or even non-continuous behaviour. For the investigations presented here, three test models were defined that describe generic, but typical repository systems. A number of numerical and graphical sensitivity analysis methods were selected for investigation and, in part, modified or adapted. Different sampling methods were applied to produce various parameter samples of different sizes and many individual runs with the test models were performed. The results were evaluated with the different methods of sensitivity analysis. On this basis the methods were compared and assessed. This report gives an overview of the background and the applied methods. The results obtained for three typical test models are presented and explained; conclusions in view of practical applications are drawn. At the end, a recommendation

  7. Modeling and analysis to quantify MSE wall behavior and performance.

    Science.gov (United States)

    2009-08-01

    To better understand potential sources of adverse performance of mechanically stabilized earth (MSE) walls, a suite of analytical models was studied using the computer program FLAC, a numerical modeling computer program widely used in geotechnical en...

  8. Characterization uncertainty and its effects on models and performance

    Energy Technology Data Exchange (ETDEWEB)

    Rautman, C.A.; Treadway, A.H.

    1991-01-01

    Geostatistical simulation is being used to develop multiple geologic models of rock properties at the proposed Yucca Mountain repository site. Because each replicate model contains the same known information, and is thus essentially indistinguishable statistically from others, the differences between models may be thought of as representing the uncertainty in the site description. The variability among performance measures, such as ground water travel time, calculated using these replicate models therefore quantifies the uncertainty in performance that arises from uncertainty in site characterization.
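A toy sketch of how variability across replicate models translates into uncertainty in a performance measure, under the strong simplifying assumption of a 1-D column of lognormal conductivities (nothing here reproduces the actual Yucca Mountain geostatistics):

```python
import random
import statistics

def travel_time_uncertainty(n_replicates=200, n_cells=50, seed=1):
    """Each replicate is one equiprobable realization of a 1-D conductivity
    column; travel time is the sum of per-cell transit times (taken as
    inversely proportional to conductivity). The spread of travel time
    across replicates stands in for site-characterization uncertainty."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_replicates):
        ks = [rng.lognormvariate(0.0, 1.0) for _ in range(n_cells)]
        times.append(sum(1.0 / k for k in ks))
    return statistics.mean(times), statistics.stdev(times)
```

The returned standard deviation is the quantity the abstract points at: all replicates honor the same known data, so the spread among them quantifies the performance uncertainty induced by incomplete characterization.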

  9. Metrics for evaluating performance and uncertainty of Bayesian network models

    Science.gov (United States)

    Bruce G. Marcot

    2012-01-01

    This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...

  10. Modeling and Performance Analysis of Manufacturing Systems in ...

    African Journals Online (AJOL)

    This study deals with modeling and performance analysis of footwear manufacturing using arena simulation modeling software. It was investigated that modeling and simulation is a potential tool for modeling and analysis of manufacturing assembly lines like footwear manufacturing because it allows the researcher to ...

  11. Identifying the connective strength between model parameters and performance criteria

    Directory of Open Access Journals (Sweden)

    B. Guse

    2017-11-01

    In hydrological models, parameters are used to represent the time-invariant characteristics of catchments and to capture different aspects of hydrological response. Hence, model parameters need to be identified based on their role in controlling the hydrological behaviour. For the identification of meaningful parameter values, multiple and complementary performance criteria are used that compare modelled and measured discharge time series. The reliability of the identification of hydrologically meaningful model parameter values depends on how distinctly a model parameter can be assigned to one of the performance criteria. To investigate this, we introduce the new concept of connective strength between model parameters and performance criteria. The connective strength assesses the intensity in the interrelationship between model parameters and performance criteria in a bijective way. In our analysis of connective strength, model simulations are carried out based on a Latin hypercube sampling. Ten performance criteria including Nash–Sutcliffe efficiency (NSE), Kling–Gupta efficiency (KGE) and its three components (alpha, beta and r), as well as RSR (the ratio of the root mean square error to the standard deviation) for different segments of the flow duration curve (FDC), are calculated. With a joint analysis of two regression tree (RT) approaches, we derive how a model parameter is connected to different performance criteria. At first, RTs are constructed using each performance criterion as the target variable to detect the most relevant model parameters for each performance criterion. Secondly, RTs are constructed using each parameter as the target variable to detect which performance criteria are impacted by changes in the values of one distinct model parameter. Based on this, appropriate performance criteria are identified for each model parameter. In this study, a high bijective connective strength between model parameters and performance criteria
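Two of the performance criteria named above, NSE and KGE with its components r, alpha and beta, follow directly from their standard definitions; a plain-Python sketch:

```python
import math
import statistics

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    Equals 1 for a perfect fit and 0 for the mean of obs as predictor."""
    mo = statistics.mean(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    return 1.0 - sse / sum((o - mo) ** 2 for o in obs)

def kge(obs, sim):
    """Kling-Gupta efficiency built from its three components:
    correlation r, variability ratio alpha, and bias ratio beta."""
    mo, ms = statistics.mean(obs), statistics.mean(sim)
    so, ss = statistics.pstdev(obs), statistics.pstdev(sim)
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (len(obs) * so * ss)
    alpha, beta = ss / so, ms / mo
    return 1.0 - math.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)
```

Both criteria take the value 1 for a perfect simulation; the connective-strength analysis in the abstract asks which model parameters each such criterion responds to.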

  12. Performance Guaranteed Inertia Emulation for Diesel-Wind System Feed Microgrid via Model Reference Control

    Energy Technology Data Exchange (ETDEWEB)

    Melin, Alexander M. [ORNL; Zhang, Yichen [University of Tennessee, Knoxville (UTK), Department of Electrical Engineering and Computer Science; Djouadi, Seddik [University of Tennessee, Knoxville (UTK), Department of Electrical Engineering and Computer Science; Olama, Mohammed M. [ORNL

    2017-04-01

    In this paper, a model reference control based inertia emulation strategy is proposed. Desired inertia can be precisely emulated through this control strategy so that guaranteed performance is ensured. A typical frequency response model with parametrical inertia is set to be the reference model. A measurement at a specific location delivers the information of disturbance acting on the diesel-wind system to the reference model. The objective is for the speed of the diesel-wind system to track the reference model. Since active power variation is dominantly governed by mechanical dynamics and modes, only mechanical dynamics and states, i.e., a swing-engine-governor system plus a reduced-order wind turbine generator, are involved in the feedback control design. The controller is implemented in a three-phase diesel-wind system fed microgrid. The results show that exact synthetic inertia is emulated, leading to guaranteed performance and safety bounds.

  13. Evaluation of performance of distributed delay model for chemotherapy-induced myelosuppression.

    Science.gov (United States)

    Krzyzanski, Wojciech; Hu, Shuhua; Dunlavey, Michael

    2018-04-01

    The distributed delay model has been introduced to replace the transit compartments in the classic model of chemotherapy-induced myelosuppression with a convolution integral. The maturation of granulocyte precursors in the bone marrow is described by the gamma probability density function with the shape parameter (ν). If ν is a positive integer, the distributed delay model coincides with the classic model with ν transit compartments. The purpose of this work was to evaluate the performance of the distributed delay model with particular focus on model deterministic identifiability in the presence of the shape parameter. The classic model served as a reference for comparison. Previously published white blood cell (WBC) count data in rats receiving bolus doses of 5-fluorouracil were fitted by both models. The negative two log-likelihood objective function (-2LL) and running times were used as major markers of performance. Local sensitivity analysis was done to evaluate the impact of ν on the pharmacodynamic response WBC. The ν estimate was 1.46 with 16.1% CV, compared to ν = 3 for the classic model. The difference of 6.78 in -2LL between the classic model and the distributed delay model implied that the latter performed significantly better than the former according to the log-likelihood ratio test (P = 0.009), although the overall performance was only modestly better. The running times were 1 s and 66.2 min, respectively. The long running time of the distributed delay model was attributed to the computationally intensive evaluation of the convolution integral. The sensitivity analysis revealed that ν strongly influences the WBC response by controlling cell proliferation and elimination of WBCs from the circulation. In conclusion, the distributed delay model was deterministically identifiable from typical cytotoxic data. Its performance was modestly better than that of the classic model, at the cost of a significantly longer running time.
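The equivalence the abstract relies on, that the gamma delay density with integer shape ν reproduces the classic model with ν transit compartments (an Erlang transit-time distribution), can be checked numerically; the transit rate `ktr` below is a hypothetical value, not a fitted parameter from the study:

```python
import math

def gamma_delay_pdf(t, nu, ktr):
    """Maturation-time density in the distributed delay model:
    gamma pdf with shape nu (possibly non-integer) and rate ktr."""
    return (ktr ** nu) * t ** (nu - 1) * math.exp(-ktr * t) / math.gamma(nu)

def erlang_pdf(t, n, ktr):
    """Transit-time density implied by n classic transit compartments in
    series, each with first-order rate ktr (an Erlang distribution)."""
    return (ktr ** n) * t ** (n - 1) * math.exp(-ktr * t) / math.factorial(n - 1)
```

For integer shape the two densities coincide term by term, since Γ(n) = (n-1)!; the distributed delay model's extra flexibility is exactly the non-integer shapes, such as the ν = 1.46 estimate reported above.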

  14. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

    Abbreviations: C = Cost, G = Gamma, CV = Cross Validation, MCC = Matthews Correlation Coefficient. Test 1: C, G, CV, Accuracy, TP, TN, FP, FN ... Conclusion: Without considering the MirTif negative dataset for training the Model A and B classifiers, our Model A and B ...

  15. SR 97. Alternative models project. Channel network modelling of Aberg. Performance assessment using CHAN3D

    International Nuclear Information System (INIS)

    Gylling, B.; Moreno, L.; Neretnieks, I.

    1999-06-01

    In earlier papers, the mechanisms that are important in performance assessment in fractured media were discussed. The influence of these mechanisms has been demonstrated using CHAN3D. In this study CHAN3D has been used to simulate production of input data to COMP23 and FARF31. CHAN3D has been integrated with COMP23 in earlier studies, but it has not been used before to calculate input data to FARF31. In the normal use of CHAN3D, the transport part of the concept simulates far-field migration. The task in this study was to produce input data according to a specification, using a defined hypothetical repository located at the Aespoe HRL as a platform. During the process of applying CHAN3D to the site, the scaling of conductivity was studied, using both data from the Aespoe HRL and synthetic data. From the realisations performed, ensemble statistics of water travel time, flux at repository scale, flow-wetted surface and F-ratio values were calculated. Two typical realisations were studied in more detail. The results for three specified canister positions were also highlighted. Exit locations for the released particles were studied. In each realisation statistics were calculated over the entities. The values were post-processed to obtain performance measures of higher order. From the averaging over all the realisations it can be concluded that Monte Carlo stability is reached for the ensemble statistics. The presence of fracture zones has a large influence on flow and transport in the rock. However, for a single canister the result may differ greatly between realisations. In some realisations there may be a fast path to a fracture zone, whereas in others the opposite may hold. From the calculation of the flow over the boundaries between the regional model and the smaller local model, the consistency seems acceptable considering that a perfect match of properties is hard to obtain

  16. A Systemic Cause Analysis Model for Human Performance Technicians

    Science.gov (United States)

    Sostrin, Jesse

    2011-01-01

    This article presents a systemic, research-based cause analysis model for use in the field of human performance technology (HPT). The model organizes the most prominent barriers to workplace learning and performance into a conceptual framework that explains and illuminates the architecture of these barriers that exist within the fabric of everyday…

  17. Electric Field Simulation of Surge Capacitors with Typical Defects

    Science.gov (United States)

    Zhang, Chenmeng; Mao, Yuxiang; Xie, Shijun; Zhang, Yu

    2018-03-01

    The electric field of power capacitors with different typical defects under DC working conditions and impulse oscillation working conditions is studied in this paper. According to the type and location of the defects, and considering the influence of space charge, two-dimensional models of surge capacitors with different typical defects are simulated based on ANSYS. The distribution of the electric field inside the capacitor is analyzed, and the concentration of the electric field and its influence on the insulation performance are obtained. The results show that the type of defect, the location of the defect and the space charge all affect the electric field distribution inside the capacitor to varying degrees. In particular, the electric field distortion in local areas such as sharp corners and burrs is relatively large, which increases the probability of partial discharge inside the surge capacitor.

  18. Multitasking TORT under UNICOS: Parallel performance models and measurements

    International Nuclear Information System (INIS)

    Barnett, A.; Azmy, Y.Y.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead
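As a sketch of what a parallel overhead model of this general kind typically looks like, an Amdahl-style serial fraction plus a per-task overhead term that grows with the number of tasks; the coefficients are purely illustrative, not those derived for TORT:

```python
def parallel_speedup(p, serial_frac=0.05, overhead_per_task=0.002):
    """Hypothetical parallel performance model.

    Normalized parallel run time = serial fraction
                                 + parallelizable work spread over p tasks
                                 + synchronization/communication overhead
                                   growing linearly with p.
    Speedup is the reciprocal of that run time."""
    t_parallel = serial_frac + (1.0 - serial_frac) / p + overhead_per_task * p
    return 1.0 / t_parallel
```

A model of this shape captures the behavior the abstract alludes to: speedup improves with task count until the overhead term dominates, after which adding tasks makes performance worse.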

  19. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Barnett, D.A.

    1999-01-01

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared to applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead

  20. Cost and Performance Assumptions for Modeling Electricity Generation Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Tidball, R.; Bluestein, J.; Rodriguez, N.; Knoke, S.

    2010-11-01

    The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.
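The LCOE evaluation mentioned above commonly uses a capital-recovery-factor formulation; a simplified sketch is below (the cited data sets may embed more detailed financing, tax, and degradation assumptions):

```python
def lcoe(capex_per_kw, fixed_om_per_kw_yr, var_om_per_mwh,
         heat_rate_btu_per_kwh, fuel_per_mmbtu,
         capacity_factor, discount_rate, lifetime_yr):
    """Simplified levelized cost of energy in $/MWh (textbook CRF form)."""
    # Capital recovery factor annualizes the overnight capital cost.
    crf = (discount_rate * (1 + discount_rate) ** lifetime_yr /
           ((1 + discount_rate) ** lifetime_yr - 1))
    mwh_per_kw_yr = 8.76 * capacity_factor        # 8760 h/yr, kWh -> MWh
    fixed = (capex_per_kw * crf + fixed_om_per_kw_yr) / mwh_per_kw_yr
    fuel = heat_rate_btu_per_kwh * fuel_per_mmbtu / 1000.0   # $/MWh
    return fixed + var_om_per_mwh + fuel
```

For example, a hypothetical gas plant with $1000/kW capital cost, $20/kW-yr fixed O&M, $4/MWh variable O&M, a 7000 Btu/kWh heat rate, $4/MMBtu fuel, 85% capacity factor, 7% discount rate and a 30-year life lands in the mid-$40s per MWh under this formula.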

  1. Asymptotic performance modelling of DCF protocol with prioritized channel access

    Science.gov (United States)

    Choi, Woo-Yong

    2017-11-01

    Recently, the modification of the DCF (Distributed Coordination Function) protocol by the prioritized channel access was proposed to resolve the problem that the DCF performance worsens exponentially as more nodes exist in IEEE 802.11 wireless LANs. In this paper, an asymptotic analytical performance model is presented to analyze the MAC performance of the DCF protocol with the prioritized channel access.

  2. Maintenance personnel performance simulation (MAPPS): a model for predicting maintenance performance reliability in nuclear power plants

    International Nuclear Information System (INIS)

    Knee, H.E.; Krois, P.A.; Haas, P.M.; Siegel, A.I.; Ryan, T.G.

    1983-01-01

    The NRC has developed a structured, quantitative, predictive methodology in the form of a computerized simulation model for assessing maintainer task performance. Objective of the overall program is to develop, validate, and disseminate a practical, useful, and acceptable methodology for the quantitative assessment of NPP maintenance personnel reliability. The program was organized into four phases: (1) scoping study, (2) model development, (3) model evaluation, and (4) model dissemination. The program is currently nearing completion of Phase 2 - Model Development

  3. Performance and reliability model checking and model construction

    NARCIS (Netherlands)

    Hermanns, H.; Gnesi, Stefania; Schieferdecker, Ina; Rennoch, Axel

    2000-01-01

    Continuous-time Markov chains (CTMCs) are widely used to describe stochastic phenomena in many diverse areas. They are used to estimate performance and reliability characteristics of various nature, for instance to quantify throughputs of manufacturing systems, to locate bottlenecks in communication

  4. Automatic Performance Model Generation for Java Enterprise Edition (EE) Applications

    OpenAIRE

    Brunnert, Andreas; Vögele, Christian; Krcmar, Helmut

    2015-01-01

    The effort required to create performance models for enterprise applications is often out of proportion compared to their benefits. This work aims to reduce this effort by introducing an approach to automatically generate component-based performance models for running Java EE applications. The approach is applicable for all Java EE server products as it relies on standardized component types and interfaces to gather the required data for modeling an application. The feasibility of the approac...

  5. ECOPATH: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Bergstroem, U.; Nordlinder, S.

    1996-01-01

    The model is based upon compartment theory and is run in combination with a statistical error-propagation method (PRISM, Gardner et al. 1983). It is intended to be generic for application to other sites by simply changing parameter values, although it was constructed especially for this scenario. It is based upon an earlier model for calculating relations between the released amount of radioactivity and doses to critical groups, used for Swedish regulations concerning annual reports of radioactivity released from routine operation of Swedish nuclear power plants (Bergstroem and Nordlinder, 1991). The model handles exposure from deposition on terrestrial areas as well as deposition on lakes, starting from deposition values. 14 refs, 16 figs, 7 tabs

  6. Comparison of Two Models for Damage Accumulation in Simulations of System Performance

    Energy Technology Data Exchange (ETDEWEB)

    Youngblood, R. [Idaho National Laboratory, Idaho Falls, ID (United States); Mandelli, D. [Idaho National Laboratory, Idaho Falls, ID (United States)

    2015-11-01

    A comprehensive simulation study of system performance needs to address variations in component behavior, variations in phenomenology, and the coupling between phenomenology and component failure. This paper discusses two models of this: 1. damage accumulation is modeled as a random walk process in each time history, with component failure occurring when damage accumulation reaches a specified threshold; or 2. damage accumulation is modeled mechanistically within each time history, but failure occurs when damage reaches a time-history-specific threshold, sampled at time zero from each component’s distribution of damage tolerance. A limiting case of the latter is classical discrete-event simulation, with component failure times sampled a priori from failure time distributions; but in such models, the failure times are not typically adjusted for operating conditions varying within a time history. Nowadays, as discussed below, it is practical to account for this. The paper compares the interpretations and computational aspects of the two models mentioned above.
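
    The two damage-accumulation models compared in this record can be sketched as follows (a toy illustration with hypothetical Gaussian damage increments and damage-tolerance distribution, not the authors' actual implementation):

```python
import random

def failure_time_random_walk(threshold, mean_step, sd_step, dt=1.0, rng=random):
    """Model 1: stochastic damage increments each step; fail at a fixed threshold."""
    damage, t = 0.0, 0.0
    while damage < threshold:
        damage += max(0.0, rng.gauss(mean_step, sd_step))  # damage never decreases
        t += dt
    return t

def failure_time_sampled_threshold(damage_rate, tol_mean, tol_sd, rng=random):
    """Model 2: deterministic damage rate; tolerance sampled once at time zero."""
    tolerance = max(1e-9, rng.gauss(tol_mean, tol_sd))
    return tolerance / damage_rate

random.seed(1)
n = 2000
rw = sum(failure_time_random_walk(100.0, 1.0, 0.5) for _ in range(n)) / n
st = sum(failure_time_sampled_threshold(1.0, 100.0, 10.0) for _ in range(n)) / n
print(round(rw), round(st))  # both mean failure times cluster near 100 time units
```

    In model 1 the randomness is in the damage path and the threshold is fixed; in model 2 the path is deterministic and the randomness sits entirely in the component-specific threshold sampled at time zero.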

  7. Indonesian Private University Lecturer Performance Improvement Model to Improve a Sustainable Organization Performance

    Science.gov (United States)

    Suryaman

    2018-01-01

    Lecturer performance will affect the quality and carrying capacity of the sustainability of an organization, in this case the university. There are many models developed to measure the performance of teachers, but not much to discuss the influence of faculty performance itself towards sustainability of an organization. This study was conducted in…

  8. A Spectral Evaluation of Models Performances in Mediterranean Oak Woodlands

    Science.gov (United States)

    Vargas, R.; Baldocchi, D. D.; Abramowitz, G.; Carrara, A.; Correia, A.; Kobayashi, H.; Papale, D.; Pearson, D.; Pereira, J.; Piao, S.; Rambal, S.; Sonnentag, O.

    2009-12-01

    Ecosystem processes are influenced by climatic trends at multiple temporal scales, including diel patterns and other mid-term climatic modes such as interannual and seasonal variability. Because interactions between biophysical components of ecosystem processes are complex, it is important to test how models perform in the frequency domain (e.g. hours, days, weeks, months, years) and in the time domain (i.e. day of the year), in addition to traditional tests of annual or monthly sums. Here we present a spectral evaluation, using wavelet time series analysis, of model performance in seven Mediterranean oak woodlands that encompass three deciduous and four evergreen sites. We tested the performance of five models (CABLE, ORCHIDEE, BEPS, Biome-BGC, and JULES) on measured variables of gross primary production (GPP) and evapotranspiration (ET). In general, model performance fails at intermediate periods (e.g. weeks to months), likely because these models do not represent the water-pulse dynamics that influence GPP and ET in these Mediterranean systems. To improve the performance of a model it is critical to identify first where and when the model fails. Only by identifying where a model fails can we improve its performance, use it as a prognostic tool, and generate further hypotheses that can be tested by new experiments and measurements.

  9. Atomic scale simulations for improved CRUD and fuel performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). Here are some results from publications in 2016 carried out using the CASL allocation at LANL.

  10. modeling the effect of bandwidth allocation on network performance

    African Journals Online (AJOL)

    In this paper, a new channel capacity model for interference-limited systems was obtained .... congestion admission control, with the intent of minimizing energy consumption at each terminal.

  11. Modelling of Box Type Solar Cooker Performance in a Tropical ...

    African Journals Online (AJOL)

    Thermal performance model of box type solar cooker with loaded water is presented. The model was developed using the method of Funk to estimate cooking power in terms of climatic and design parameters for box type solar cooker in a tropical environment. Coefficients for each term used in the model were determined ...

  12. Comparisons of Faulting-Based Pavement Performance Prediction Models

    Directory of Open Access Journals (Sweden)

    Weina Wang

    2017-01-01

    Faulting prediction is the core of concrete pavement maintenance and design. Highway agencies are always faced with low prediction accuracy, which leads to costly maintenance. Although many researchers have developed performance prediction models, the accuracy of prediction has remained a challenge. This paper reviews performance prediction models and JPCP faulting models that have been used in past research. Three models, multivariate nonlinear regression (MNLR), artificial neural network (ANN), and Markov chain (MC), are then tested and compared using a set of actual pavement survey data taken on interstate highways with varying design features, traffic, and climate data. It is found that the MNLR model needs further recalibration, while the ANN model needs more data for training the network. The MC model seems a good tool for pavement performance prediction when data are limited, but it is based on visual inspections and not explicitly related to quantitative physical parameters. The paper then suggests that a further direction for developing performance prediction models is to combine the advantages and disadvantages of the different models to obtain better accuracy.
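
    The Markov chain (MC) approach mentioned in this record treats pavement condition as a probability distribution over discrete severity states, propagated year by year with a transition matrix. A minimal sketch (the four states and transition probabilities below are illustrative, not the paper's calibrated values):

```python
# States: faulting severity bins 0 (none) .. 3 (severe); rows sum to 1.
P = [
    [0.90, 0.10, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],   # severe faulting is absorbing until repair
]

def step(state_probs, P):
    """One-year transition: multiply the state distribution by P."""
    n = len(P)
    return [sum(state_probs[i] * P[i][j] for i in range(n)) for j in range(n)]

probs = [1.0, 0.0, 0.0, 0.0]    # newly built pavement, certainly in state 0
for year in range(10):
    probs = step(probs, P)
print([round(p, 3) for p in probs])  # distribution over severity after 10 years
```

    Because the model only needs condition-state counts from visual surveys to calibrate the transition matrix, it works with limited data, which matches the record's conclusion about the MC model.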

  13. FARMLAND: Model description and evaluation of model performance

    International Nuclear Information System (INIS)

    Attwood, C.; Fayers, C.; Mayall, A.; Brown, J.; Simmonds, J.R.

    1996-01-01

    The FARMLAND model was originally developed for use in connection with continuous, routine releases of radionuclides, but because it has many time-dependent features it has been developed further for a single accidental release. The most recent version of FARMLAND is flexible and can be used to predict activity concentrations in food as a function of time after both accidental and routine releases of radionuclides. The effect of deposition at different times of the year can be taken into account. FARMLAND contains a suite of models which simulate radionuclide transfer through different parts of the foodchain. The models can be used in different combinations and offer the flexibility to assess a variety of radiological situations. The main foods considered are green vegetables, grain products, root vegetables, milk, meat and offal from cattle, and meat and offal from sheep. A large variety of elements can be considered although the degree of complexity with which some are modelled is greater than others; isotopes of caesium, strontium and iodine are treated in greatest detail. 22 refs, 12 figs, 10 tabs
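
    Compartment models such as FARMLAND track radionuclide activity as it transfers between linked compartments while decaying. A toy two-compartment sketch (soil-to-plant transfer with hypothetical rate constants, integrated by forward Euler; FARMLAND itself is far more detailed):

```python
import math

def simulate(dep, k_soil_plant, k_plant_loss, half_life_d, days, dt=0.1):
    """Toy foodchain chain: deposit -> soil -> plant, with radioactive decay.
    All rate constants are per day; returns (soil, plant) activity after `days`."""
    lam = math.log(2) / half_life_d          # decay constant from half-life
    soil, plant = dep, 0.0
    for _ in range(int(days / dt)):
        ds = -(k_soil_plant + lam) * soil                       # loss from soil
        dp = k_soil_plant * soil - (k_plant_loss + lam) * plant  # gain/loss in plant
        soil += ds * dt
        plant += dp * dt
    return soil, plant

s, p = simulate(dep=1000.0, k_soil_plant=0.01, k_plant_loss=0.05,
                half_life_d=30.0, days=60.0)
print(round(s, 1), round(p, 1))
```

    Chaining more compartments (root vegetables, milk, meat) in different combinations is what gives this class of model its flexibility for both routine and accidental releases.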

  14. Conceptual adsorption models and open issues pertaining to performance assessment

    International Nuclear Information System (INIS)

    Serne, R.J.

    1992-01-01

    Recently several articles have been published that question the appropriateness of the distribution coefficient (Rd) concept for quantifying radionuclide migration. This paper addresses several distinct issues surrounding the modeling of nuclide retardation. The first section defines adsorption terminology and discusses various adsorption processes. The next section describes five commonly used adsorption conceptual models, specifically emphasizing which attributes that affect adsorption are explicitly accommodated in each model. I also review efforts to incorporate each adsorption model into performance assessment transport computer codes. The five adsorption conceptual models are (1) the constant Rd model, (2) the parametric Rd model, (3) isotherm adsorption models, (4) mass-action adsorption models, and (5) surface-complexation with electrostatics models. The final section discusses the adequacy of the distribution-ratio concept, the adequacy of transport calculations that rely on constant retardation factors, and the status of incorporating sophisticated adsorption models into transport codes. 86 refs., 1 fig., 1 tab

  15. Emerging Carbon Nanotube Electronic Circuits, Modeling, and Performance

    OpenAIRE

    Xu, Yao; Srivastava, Ashok; Sharma, Ashwani K.

    2010-01-01

    Current transport and dynamic models of carbon nanotube field-effect transistors are presented. A model of single-walled carbon nanotube as interconnect is also presented and extended in modeling of single-walled carbon nanotube bundles. These models are applied in studying the performances of circuits such as the complementary carbon nanotube inverter pair and carbon nanotube as interconnect. Cadence/Spectre simulations show that carbon nanotube field-effect transistor circuits can operate a...

  16. CORPORATE FORESIGHT AND PERFORMANCE: A CHAIN-OF-EFFECTS MODEL

    DEFF Research Database (Denmark)

    Jissink, Tymen; Huizingh, Eelko K.R.E.; Rohrbeck, René

    2015-01-01

    In this paper we develop and validate a measurement scale for corporate foresight and examine its impact on performance in a chain-of-effects model. We conceptualize corporate foresight as an organizational ability consisting of five distinct dimensions: information scope, method usage, people, formal organization, and culture. We investigate the relation of corporate foresight with three innovation performance dimensions – new product success, new product innovativeness, and financial performance. We use partial-least-squares structural equations modelling to assess our measurement models … performance dimensions. Implications of our findings, and limitations and future research avenues are discussed....

  17. Models used to assess the performance of photovoltaic systems.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Klise, Geoffrey T.

    2009-12-01

    This report documents the various photovoltaic (PV) performance models and software developed and utilized by researchers at Sandia National Laboratories (SNL) in support of the Photovoltaics and Grid Integration Department. In addition to PV performance models, hybrid system and battery storage models are discussed. A hybrid system using other distributed sources and energy storage can help reduce the variability inherent in PV generation, and due to the complexity of combining multiple generation sources and system loads, these models are invaluable for system design and optimization. Energy storage plays an important role in reducing PV intermittency, and battery storage models are used to understand the best configurations and technologies to store PV-generated electricity. Other researchers' models used by SNL are discussed, including some widely known models that incorporate algorithms developed at SNL. Other models included in the discussion were neither used by nor adopted from SNL research but may provide some benefit to researchers working on PV array performance, hybrid system models, and energy storage. The paper is organized into three sections describing the different software models as applied to photovoltaic performance, hybrid systems, and battery storage. For each model, there is a description which includes where to find the model, whether it is currently maintained, and any references that may be available. Modeling improvements underway at SNL include quantifying the uncertainty of individual system components, quantifying the overall uncertainty in modeled vs. measured results, and modeling large PV systems. SNL is also conducting research into the overall reliability of PV systems.
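
    At their core, many PV performance models of the kind surveyed here scale a rated power by plane-of-array irradiance and apply a temperature derate. A deliberately minimal sketch (the coefficient values are typical textbook numbers, not SNL model parameters):

```python
def pv_power_kw(irradiance_w_m2, cell_temp_c, rated_kw,
                ref_temp_c=25.0, temp_coeff_per_c=-0.004):
    """Minimal PV array model: power scales linearly with irradiance
    (rated at 1000 W/m^2) and derates ~0.4%/C above the 25 C reference."""
    derate = 1.0 + temp_coeff_per_c * (cell_temp_c - ref_temp_c)
    return rated_kw * (irradiance_w_m2 / 1000.0) * derate

print(round(pv_power_kw(800.0, 45.0, 100.0), 1))  # -> 73.6 (kW)
```

    Full models such as those discussed in the report add spectral, angle-of-incidence, inverter, and soiling effects on top of this basic irradiance-temperature core.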

  18. Performance comparison of hydrological model structures during low flows

    Science.gov (United States)

    Staudinger, Maria; Stahl, Kerstin; Tallaksen, Lena M.; Clark, Martyn P.; Seibert, Jan

    2010-05-01

    Low flows are still poorly reproduced by common hydrological models, since these are traditionally designed to reproduce peak-flow situations as well as possible. As low flows become increasingly important to several target areas, there is a need to improve available models. We present a study that assesses the impact of model structure on low-flow simulations. This is done using the Framework for Understanding Structural Errors (FUSE), which identifies the set of (subjective) decisions made when building a hydrological model and provides multiple options for each modeling decision. 79 models were built using the FUSE framework and applied to simulate stream flows in the Narsjø catchment in Norway (119 km²). To allow comparison, all new models were calibrated using an automatic optimization method. Low-flow and recession analysis of the new models enables us to evaluate model performance with a focus on different aspects, using various objective functions. Additionally, model structures responsible for poor performance, and hence unsuitable, can be detected. We focused on elucidating model performance during summer (August - October) and winter low flows, which evolve from entirely different hydrological processes in the Narsjø catchment: summer low flows develop out of a lack of precipitation, while winter low flows are due to water storage in ice and snow. The results showed that simulations of summer low flows were consistently poorer than simulations of winter low flows when evaluated with an objective function focusing on low flows; here, the model structure influencing winter low-flow simulations is the lower-layer architecture. Different model structures were found to influence model performance during the summer season. The choice of other objective functions has the potential to affect such an evaluation. These findings call for the use of different model structures tailored to particular needs.
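
    Evaluating low-flow performance hinges on the objective function: the common Nash-Sutcliffe efficiency weights peak flows, while the same statistic on log-transformed flows emphasizes the low-flow range. A small sketch of this contrast (the flow series are invented for illustration):

```python
import math

def nse(obs, sim, transform=lambda q: q):
    """Nash-Sutcliffe efficiency; pass transform=math.log to weight low flows."""
    o = [transform(q) for q in obs]
    s = [transform(q) for q in sim]
    mean_o = sum(o) / len(o)
    num = sum((oi - si) ** 2 for oi, si in zip(o, s))   # model error
    den = sum((oi - mean_o) ** 2 for oi in o)           # variance benchmark
    return 1.0 - num / den

obs = [10.0, 8.0, 5.0, 1.0, 0.5, 0.4, 2.0, 6.0]   # m^3/s, with a low-flow spell
sim = [9.0, 8.5, 4.0, 1.5, 1.0, 0.9, 2.5, 5.0]    # simulation overestimates low flows
print(round(nse(obs, sim), 3), round(nse(obs, sim, math.log), 3))
```

    The relative errors during the low-flow spell barely dent the plain NSE but noticeably lower the log-transformed score, which is why studies like this one evaluate with several objective functions.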

  19. Performance assessment model development and analysis of radionuclide transport in the unsaturated zone, Yucca Mountain, Nevada

    Science.gov (United States)

    Robinson, Bruce A.; Li, Chunhong; Ho, Clifford K.

    2003-05-01

    This paper describes the development and use of a particle-tracking model to perform radionuclide-transport simulations in the unsaturated zone at Yucca Mountain, Nevada. The goal of the effort was to produce a computational model that can be coupled to the project's calibrated 3D site-scale flow model so that the results of that effort could be incorporated directly into the Total System Performance Assessment (TSPA) analyses. The transport model simulates multiple species (typically 20 or more) with complex time-varying and spatially varying releases from the potential repository. Water-table rise, climate-change scenarios, and decay chains are additional features of the model. A cell-based particle-tracking method was employed that includes a dual-permeability formulation, advection, longitudinal dispersion, matrix diffusion, and colloid-facilitated transport. This paper examines the transport behavior of several key radionuclides through the unsaturated zone using the calibrated 3D unsaturated flow fields. Computational results illustrate the relative importance of fracture flow, matrix diffusion, and lateral diversion on the distribution of travel times from the simulated repository to the water table for various climatic conditions. Results also indicate rapid transport through fractures for a portion of the released mass. Further refinement of the model will address several issues, including conservatism in the transport model, the assignment of parameters in the flow and transport models, and the underlying assumptions used to support the conceptual models of flow and transport in the unsaturated zone at Yucca Mountain.

  20. A performance comparison of atmospheric dispersion models over complex topography

    International Nuclear Information System (INIS)

    Kido, Hiroko; Oishi, Ryoko; Hayashi, Keisuke; Kanno, Mitsuhiro; Kurosawa, Naohiro

    2007-01-01

    A code system using a mass-consistent wind-field model and a Gaussian puff model was improved as a new option for atmospheric dispersion research. There are several atmospheric dispersion models for radionuclides. Because different models have both merits and disadvantages, it is necessary to choose the model that is most suitable for the surface conditions of the region of interest, with regard to calculation time, accuracy, and the purpose of the calculations being performed. Some models are less accurate when the topography is complex, so it is important to understand the differences between the models for smooth and complex surfaces. In this study, the performances of the following four models were compared: (1) Gaussian plume model; (2) Gaussian puff model; (3) mass-consistent wind fields and Gaussian puff model, improved in this study from one presented in Aomori Energy Society of Japan, 2005 Fall Meeting, D21; (4) meso-scale meteorological model (RAMS: The Regional Atmospheric Modeling System) and particle-type model (HYPACT: The RAMS Hybrid Particle and Concentration Transport Model) (Reference: ATMET). (author)

  1. Confirming the Value of Swimming-Performance Models for Adolescents.

    Science.gov (United States)

    Dormehl, Shilo J; Robertson, Samuel J; Barker, Alan R; Williams, Craig A

    2017-10-01

    To evaluate the efficacy of existing performance models to assess the progression of male and female adolescent swimmers through a quantitative and qualitative mixed-methods approach. Fourteen published models were tested using retrospective data from an independent sample of Dutch junior national-level swimmers from when they were 12-18 y of age (n = 13). The degree of association by Pearson correlations was compared between the calculated differences from the models and quadratic functions derived from the Dutch junior national qualifying times. Swimmers were grouped based on their differences from the models and compared with their swimming histories that were extracted from questionnaires and follow-up interviews. Correlations of the deviations from both the models and quadratic functions derived from the Dutch qualifying times were all significant except for the 100-m breaststroke and butterfly and the 200-m freestyle for females (P backstroke for males and 200-m freestyle for males and females were almost directly proportional. In general, deviations from the models were accounted for by the swimmers' training histories. Higher levels of retrospective motivation appeared to be synonymous with higher-level career performance. This mixed-methods approach helped confirm the validity of the models that were found to be applicable to adolescent swimmers at all levels, allowing coaches to track performance and set goals. The value of the models in being able to account for the expected performance gains during adolescence enables quantification of peripheral factors that could affect performance.
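
    Performance-progression models of this kind typically fit a quadratic in age to benchmark times and measure each swimmer's deviation from the curve. A sketch using ordinary least squares via the normal equations (the ages and 100-m freestyle times below are invented for illustration, not Dutch qualifying times):

```python
def fit_quadratic(ages, times):
    """Least-squares fit of time = a*age^2 + b*age + c via normal equations."""
    # Build X^T X and X^T y for design-matrix columns [age^2, age, 1].
    sums = {k: sum(a ** k for a in ages) for k in range(5)}
    A = [[sums[4], sums[3], sums[2]],
         [sums[3], sums[2], sums[1]],
         [sums[2], sums[1], sums[0]]]
    y = [sum(t * a ** 2 for a, t in zip(ages, times)),
         sum(t * a for a, t in zip(ages, times)),
         sum(times)]
    # Gaussian elimination on the 3x3 system, then back substitution.
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            y[j] -= f * y[i]
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coeffs[i] = (y[i] - sum(A[i][k] * coeffs[k] for k in range(i + 1, 3))) / A[i][i]
    return coeffs  # a, b, c

ages = [12, 13, 14, 15, 16, 17, 18]
times = [68.0, 64.5, 61.8, 59.9, 58.6, 58.0, 57.9]   # seconds, improving then plateauing
a, b, c = fit_quadratic(ages, times)
pred15 = a * 15 ** 2 + b * 15 + c   # model-expected time at age 15
```

    A swimmer's measured time minus the model-expected time at that age is the "deviation from the model" that the record describes correlating with training history.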

  2. A performance model of the OSI communication architecture

    Science.gov (United States)

    Kritzinger, P. S.

    1986-06-01

    An analytical model aiming at predicting the performance of software implementations which would be built according to the OSI basic reference model is proposed. The model uses the peer protocol standard of a layer as the reference description of an implementation of that layer. The model is basically a closed multiclass multichain queueing network with a processor-sharing center, modeling process contention at the processor, and a delay center, modeling times spent waiting for responses from the corresponding peer processes. Each individual transition of the protocol constitutes a different class and each layer of the architecture forms a closed chain. Performance statistics include queue lengths and response times at the processor as a function of processor speed and the number of open connections. It is shown how to reduce the model should the protocol state space become very large. Numerical results based upon the derived formulas are given.
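
    A closed queueing network with a queueing center and a delay center, as described here, is commonly evaluated with exact Mean Value Analysis (MVA). A single-class sketch (the service demands and delay are hypothetical; the paper's model is multiclass and multichain):

```python
def mva(service_demands, delay, n_customers):
    """Exact single-class Mean Value Analysis for a closed queueing network.
    service_demands: total demand (s) at each queueing station per cycle;
    delay: total think/delay time (s). Returns (throughput, response_time)."""
    q = [0.0] * len(service_demands)      # mean queue lengths at population 0
    for n in range(1, n_customers + 1):
        # Arrival theorem: an arriving customer sees the queue of an (n-1) network.
        r = [d * (1.0 + qi) for d, qi in zip(service_demands, q)]
        rtot = sum(r)
        x = n / (rtot + delay)            # throughput by Little's law
        q = [x * ri for ri in r]          # queue lengths at population n
    return x, rtot

x, r = mva([0.05, 0.02], delay=1.0, n_customers=20)
print(round(x, 2), round(r, 3))   # throughput (1/s) and queueing response time (s)
```

    Each open connection in the OSI model corresponds to a circulating customer; throughput saturates at the reciprocal of the largest service demand as connections increase.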

  3. Performance evaluation:= (process algebra + model checking) x Markov chains

    NARCIS (Netherlands)

    Hermanns, H.; Larsen, K.G.; Nielsen, Mogens; Katoen, Joost P.

    2001-01-01

    Markov chains are widely used in practice to determine system performance and reliability characteristics. The vast majority of applications considers continuous-time Markov chains (CTMCs). This tutorial paper shows how successful model specification and analysis techniques from concurrency theory

  4. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    Science.gov (United States)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This presentation describes the capabilities of a three-dimensional thermal power model of the Advanced Stirling Radioisotope Generator (ASRG). The performance of the ASRG is presented for different scenarios, such as a Venus flyby with or without the auxiliary cooling system.

  5. Practical Techniques for Modeling Gas Turbine Engine Performance

    Science.gov (United States)

    Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.

    2016-01-01

    The cost and risk associated with the design and operation of gas turbine engine systems has led to an increasing dependence on mathematical models. In this paper, the fundamentals of engine simulation will be reviewed, an example performance analysis will be performed, and relationships useful for engine control system development will be highlighted. The focus will be on thermodynamic modeling utilizing techniques common in industry, such as: the Brayton cycle, component performance maps, map scaling, and design point criteria generation. In general, these topics will be viewed from the standpoint of an example turbojet engine model; however, demonstrated concepts may be adapted to other gas turbine systems, such as gas generators, marine engines, or high bypass aircraft engines. The purpose of this paper is to provide an example of gas turbine model generation and system performance analysis for educational uses, such as curriculum creation or student reference.
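
    The Brayton-cycle relationships mentioned above reduce, in the ideal cold-air-standard case, to closed-form expressions. A short sketch (illustrative values; real engine models layer component maps and efficiencies on top of these relations):

```python
def brayton_thermal_efficiency(pressure_ratio, gamma=1.4):
    """Ideal (cold-air-standard) Brayton cycle: eta = 1 - r^((1-gamma)/gamma)."""
    return 1.0 - pressure_ratio ** ((1.0 - gamma) / gamma)

def compressor_exit_temp_k(t_in_k, pressure_ratio, gamma=1.4, eta_poly=1.0):
    """Compressor exit temperature from the polytropic relation
    T2 = T1 * r^((gamma-1)/(gamma*eta_poly)); eta_poly=1 is the isentropic case."""
    return t_in_k * pressure_ratio ** ((gamma - 1.0) / (gamma * eta_poly))

print(round(brayton_thermal_efficiency(10.0), 3))      # efficiency at pressure ratio 10
print(round(compressor_exit_temp_k(288.15, 10.0), 1))  # sea-level standard-day inlet
```

    Relations like these provide the design-point criteria from which component performance maps are subsequently scaled in the kind of model generation the paper walks through.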

  6. Testing algorithms for a passenger train braking performance model.

    Science.gov (United States)

    2011-09-01

    "The Federal Railroad Administration's Office of Research and Development funded a project to establish a performance model to develop, analyze, and test positive train control (PTC) braking algorithms for passenger train operations. With a good brak...

  7. Dynamic vehicle model for handling performance using experimental data

    Directory of Open Access Journals (Sweden)

    SangDo Na

    2015-11-01

    An analytical vehicle model is essential for the development of vehicle design and performance. Various vehicle models have different complexities, assumptions, and limitations depending on the type of vehicle analysis. An accurate full-vehicle model is essential to represent the behaviour of the vehicle when estimating dynamic system performance such as ride comfort and handling. An experimental vehicle model is developed in this article, which employs kinematic and compliance data measured between the wheel and chassis. From these data, a vehicle model that includes dynamic effects due to vehicle geometry changes has been developed. The experimental vehicle model was validated using an instrumented experimental vehicle and data such as a step-change steering input. This article shows a process to develop and validate an experimental vehicle model to enhance the accuracy of handling-performance prediction, based on a precise suspension model built from measured vehicle data. The experimental force data obtained from a suspension parameter measuring device are employed for precise modelling of the steering and handling response. The steering system is modelled as a lumped model, with stiffness coefficients defined and identified by comparison with measured steering stiffness. The outputs, specifically the yaw rate and lateral acceleration of the vehicle, are verified against experimental results.

  8. Evaluation Model of Organizational Performance for Small and Medium Enterprises

    Directory of Open Access Journals (Sweden)

    Carlos Augusto Passos

    2014-12-01

    In the 1980s, many tools for evaluating organizational performance were created. However, most of them are useful only to large companies and do not foster results in small and medium-sized enterprises (SMEs). In light of this fact, this article proposes an Organizational Performance Assessment (OPA) model which is flexible and adaptable to the reality of SMEs, based on the theoretical framework of various models and on comparisons using three major authors' criteria for evaluating OPA models. The research is descriptive and exploratory in character, with a qualitative nature. The MADE-O model, according to the criteria described in the bibliography, is the one that best fits the needs of SMEs and is used as a baseline for the model proposed in this study, with adaptations pertaining to the BSC model. The proposed model, called the Overall Performance Indicator – Environment (IDG-E), has as its main differential, in addition to the base models mentioned above, the assessment of the external and internal environment weighted in modules of OPA. As SMEs are characterized by having few processes and people, the small number of performance indicators is another positive aspect. Submitted to evaluation against the criteria subscribed by the authors, the model proved to be quite feasible for use in SMEs.

  9. Model of service-oriented catering supply chain performance evaluation

    OpenAIRE

    Gou, Juanqiong; Shen, Guguan; Chai, Rui

    2013-01-01

    Purpose: The aim of this paper is to construct a performance evaluation model for the service-oriented catering supply chain. Design/methodology/approach: Based on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain and then presents a service-oriented catering supply chain model built on a logistics and information platform. Finally, the fuzzy AHP method is used to evaluate the performance of service-oriented catering ...

  10. ASSESSING INDIVIDUAL PERFORMANCE ON INFORMATION TECHNOLOGY ADOPTION: A NEW MODEL

    OpenAIRE

    Diah Hari Suryaningrum

    2012-01-01

    This paper aims to propose a new model for assessing individual performance in information technology adoption. The new model was derived from two different theories: the decomposed theory of planned behavior and task-technology fit theory. Although many researchers have tried to expand these theories, some of their efforts might lack theoretical assumptions. To overcome this problem and enhance the coherence of the integration, I used a theory from social scien...

  11. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems that involve several types of numerical computations. The computers considered in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216.

  12. A Mathematical Model to Improve the Performance of Logistics Network

    Directory of Open Access Journals (Sweden)

    Muhammad Izman Herdiansyah

    2012-01-01

    The role of logistics nowadays is expanding from just providing transportation and warehousing to offering total integrated logistics. To remain competitive in the global market environment, business enterprises need to improve the performance of their logistics operations. The improvement will be achieved when we can provide a comprehensive analysis and optimize network performance. In this paper, a mixed-integer linear model for optimizing logistics network performance is developed. It provides a single-product, multi-period, multi-facility model, as well as a multi-product variant. The problem is modeled as a network flow problem with the main objective of minimizing total logistics cost. The problem can be solved using a commercial linear programming package such as CPLEX or LINDO. For small cases, the solver in Excel may also be used. Keywords: logistics network, integrated model, mathematical programming, network optimization
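
    The network-flow formulation described here minimizes total shipping cost subject to supply and demand balance. A brute-force toy instance (two warehouses, two customers; the costs and quantities are invented) shows the structure that an MILP solver such as CPLEX or LINDO would handle at realistic scale:

```python
from itertools import product

def min_cost_flow(supply, demand, cost):
    """Brute-force solver for a tiny single-product transportation problem:
    minimize sum(cost[i][j] * x[i][j]) s.t. row sums == supply, column sums == demand.
    Only practical for toy instances; a real model would use an MILP solver."""
    m, n = len(supply), len(demand)
    best, best_x = None, None
    ranges = [range(min(supply[i], demand[j]) + 1)
              for i in range(m) for j in range(n)]
    for flat in product(*ranges):               # enumerate all integer shipment plans
        x = [flat[i * n:(i + 1) * n] for i in range(m)]
        if any(sum(x[i]) != supply[i] for i in range(m)):
            continue                            # supply not exactly used
        if any(sum(x[i][j] for i in range(m)) != demand[j] for j in range(n)):
            continue                            # demand not exactly met
        c = sum(cost[i][j] * x[i][j] for i in range(m) for j in range(n))
        if best is None or c < best:
            best, best_x = c, x
    return best, best_x

# Two warehouses, two customers; unit costs in $/unit:
cost = [[4, 6], [5, 3]]
best, plan = min_cost_flow(supply=[10, 15], demand=[12, 13], cost=cost)
print(best, plan)
```

    Adding periods, facilities, and products turns each x[i][j] into an indexed decision variable of the mixed-integer program described in the abstract.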

  13. Construction Of A Performance Assessment Model For Zakat Management Institutions

    Directory of Open Access Journals (Sweden)

    Sri Fadilah

    2016-12-01

    The objective of the research is to examine performance evaluation using the Balanced Scorecard model. The research is motivated by the large gap between the potential of zakat (alms and religious tax in Islam), estimated at as much as 217 trillion rupiahs, and the realized collection of only about three trillion. This indicates that the performance of zakat management organizations in collecting zakat is still very low. On the other hand, the quantity and quality of zakat management organizations have to be improved, which means a performance evaluation model is needed as an evaluation tool. The construct is a performance evaluation model that can be implemented in zakat management organizations. Organizational performance evaluation with the Balanced Scorecard model will be effective if it is supported by three aspects, namely PI, BO, and TQM. This research uses an explanatory method and SEM/PLS for data analysis. Data collection techniques are questionnaires, interviews, and documentation. The results show that PI, BO, and TQM, simultaneously and partially, have a significant effect on organizational performance.

  14. Model-supported selection of distribution coefficients for performance assessment

    International Nuclear Information System (INIS)

    Ochs, M.; Lothenbach, B.; Shibata, Hirokazu; Yui, Mikazu

    1999-01-01

    A thermodynamic speciation/sorption model is used to illustrate typical problems encountered in the extrapolation of batch-type Kd values to repository conditions. For different bentonite-groundwater systems, the composition of the corresponding equilibrium solutions and the surface speciation of the bentonite is calculated by treating simultaneously solution equilibria of soluble components of the bentonite as well as ion exchange and acid/base reactions at the bentonite surface. Kd values for Cs, Ra, and Ni are calculated by implementing the appropriate ion exchange and surface complexation equilibria in the bentonite model. Based on this approach, hypothetical batch experiments are contrasted with expected conditions in compacted backfill. For each of these scenarios, the variation of Kd values as a function of groundwater composition is illustrated for Cs, Ra, and Ni. The applicability of measured, batch-type Kd values to repository conditions is discussed. (author)
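
    In a performance assessment, a batch-derived Kd typically enters the transport calculation through the retardation factor R = 1 + (ρb/θ)·Kd. A minimal sketch, with illustrative values for bulk density, porosity, and Kd that are assumptions, not taken from the study:

    ```python
    def retardation_factor(kd_m3_per_kg, bulk_density_kg_m3, porosity):
        """R = 1 + (rho_b / theta) * Kd, for linear, reversible sorption."""
        return 1.0 + (bulk_density_kg_m3 / porosity) * kd_m3_per_kg

    # Illustrative compacted-backfill values (assumed, not from the paper)
    R_cs = retardation_factor(kd_m3_per_kg=0.5, bulk_density_kg_m3=1.6e3, porosity=0.4)
    print(R_cs)  # -> 2001.0: Cs migrates ~2000x slower than the groundwater
    ```

    The sensitivity of R to Kd is exactly why extrapolating batch values to repository conditions, as the record discusses, matters for the assessment.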

  15. Team performance modeling for HRA in dynamic situations

    International Nuclear Information System (INIS)

    Shu Yufei; Furuta, Kazuo; Kondo, Shunsuke

    2002-01-01

    This paper proposes a team behavior network model that can simulate and analyze the response of an operator team to an incident in a dynamic and context-sensitive situation. The model is composed of four sub-models, which describe the context of team performance. They are the task model, event model, team model and human-machine interface model. Each operator demonstrates aspects of his/her specific cognitive behavior and interacts with other operators and the environment in order to deal with an incident. Individual human factors, which determine the basis of communication and interaction between individuals, and the cognitive process of an operator, such as information acquisition, state recognition, decision-making and action execution during development of an event scenario, are modeled. A case of feed and bleed operation in a pressurized water reactor under an emergency situation was studied and the result was compared with an experiment to check the validity of the proposed model.

  16. Port performance evaluation tool based on microsimulation model

    Directory of Open Access Journals (Sweden)

    Tsavalista Burhani Jzolanda

    2017-01-01

    Full Text Available As port performance becomes correlated with national competitiveness, the issue of port performance evaluation has gained prominence. Port performance can simply be indicated by port service levels to the ship (e.g., throughput, waiting time for berthing, etc.), as well as by the utilization level of equipment and facilities within a certain period. The performance evaluation can then be used as a tool to develop policies for making the port more effective and efficient. However, the evaluation is frequently conducted with a deterministic approach, which hardly captures the natural variations of port parameters. Therefore, this paper presents a stochastic microsimulation model for investigating the impacts of port parameter variations on port performance. The variations are derived from actual data in order to provide more realistic results. The model is developed in MATLAB and Simulink based on queuing theory.
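
    The queueing core of such a microsimulation can be sketched with a single-berth M/M/1 model and the Lindley recursion for waiting times; the arrival and service rates below are assumptions for illustration, not values from the paper:

    ```python
    import random

    def simulate_berth_waits(n_ships, arrival_rate, service_rate, seed=42):
        """Single-berth M/M/1 port: Lindley recursion W_{k+1} = max(0, W_k + S_k - A_{k+1})."""
        rng = random.Random(seed)
        waits, w = [], 0.0
        for _ in range(n_ships):
            waits.append(w)                               # waiting time of ship k
            service = rng.expovariate(service_rate)       # berth service time S_k
            interarrival = rng.expovariate(arrival_rate)  # gap to next ship A_{k+1}
            w = max(0.0, w + service - interarrival)
        return waits

    waits = simulate_berth_waits(1000, arrival_rate=0.8, service_rate=1.0)
    print(sum(waits) / len(waits))  # mean waiting time for berthing
    ```

    Replacing the exponential draws with distributions fitted to actual port data is what moves this from a textbook queue toward the stochastic microsimulation described.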

  17. Toward a Subjective Measurement Model for Firm Performance

    Directory of Open Access Journals (Sweden)

    Luiz Artur Ledur Brito

    2012-05-01

    Full Text Available Firm performance is a relevant construct in strategic management research and frequently used as a dependent variable. Despite this relevance, there is hardly a consensus about its definition, dimensionality and measurement, which limits advances in research and understanding of the concept. This article proposes and tests a measurement model for firm performance, based on subjective indicators. The model is grounded in stakeholder theory and a review of empirical articles. Confirmatory Factor Analyses, using data from 116 Brazilian senior managers, were used to test its fit and psychometric properties. The final model had six first-order dimensions: profitability, growth, customer satisfaction, employee satisfaction, social performance, and environmental performance. A second-order financial performance construct, influencing growth and profitability, correlated with the first-order intercorrelated, non-financial dimensions. Results suggest dimensions cannot be used interchangeably, since they represent different aspects of firm performance, and corroborate the idea that stakeholders have different demands that need to be managed independently. Researchers and practitioners may use the model to fully treat performance in empirical studies and to understand the impact of strategies on multiple performance facets.

  18. A Composite Model for Employees' Performance Appraisal and Improvement

    Science.gov (United States)

    Manoharan, T. R.; Muralidharan, C.; Deshmukh, S. G.

    2012-01-01

    Purpose: The purpose of this paper is to develop an innovative method of performance appraisal that will be useful for designing a structured training programme. Design/methodology/approach: Employees' performance appraisals are conducted using new approaches, namely data envelopment analysis and an integrated fuzzy model. Interpretive structural…

  19. A Model for Effective Performance in the Indonesian Navy.

    Science.gov (United States)

    1987-06-01

    NAVY LEADERSHIP AND MANAGEMENT COMPETENCY MODEL .................................. 15 D. MCBER COMPETENT MANAGERS MODEL ................ IS E. SUMM... leadership and managerial skills which emphasize effective performance of the officers in managing the human resources under their command and supervision. By effective performance we mean officers who not only know about management theories, but who possess the characteristics, knowledge, skill, and

  20. Discussion of various models related to cloud performance

    OpenAIRE

    Kande, Chaitanya Krishna

    2015-01-01

    This paper discusses the various models related to cloud computing. Knowing the metrics related to infrastructure is very critical to enhance the performance of cloud services. Various metrics related to clouds such as pageview response time, admission control and enforcing elasticity to cloud infrastructure are very crucial in analyzing the characteristics of the cloud to enhance the cloud performance.

  1. Performance Implications of Business Model Change: A Case Study

    Directory of Open Access Journals (Sweden)

    Jana Poláková

    2015-01-01

    Full Text Available The paper deals with changes in performance introduced by a change of business model. The selected case is a small family business undergoing substantial changes in response to structural changes in its markets. The authors used the business model concept to describe the value creation processes within the selected family business and, by contrasting the differences between value creation processes before and after the change, they demonstrate the role of the business model as a performance differentiator. This is illustrated with business model canvases constructed on the basis of interviews, observations and document analysis. The two business model canvases allow for an explanation of cause-and-effect relationships within the business leading to the change in performance. The change in performance is assessed by a financial analysis of the business conducted over the period 2006–2012, which demonstrates changes in performance (ROA, ROE and ROS had their lowest levels before the change of business model was introduced and grew after the change, with similar developments in the activity indicators of the family business. The described case study contributes to the concept of business modeling with arguments supporting its value as a strategic tool facilitating decisions related to value creation within the business.
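
    The profitability indicators tracked in the case study (ROA, ROE, ROS) are simple ratios; a sketch with invented figures, not the family business's actual accounts:

    ```python
    def profitability_ratios(net_income, total_assets, equity, sales):
        """Return (ROA, ROE, ROS) as fractions of the respective bases."""
        return (net_income / total_assets,   # return on assets
                net_income / equity,         # return on equity
                net_income / sales)          # return on sales

    # Hypothetical annual figures (thousands of currency units)
    roa, roe, ros = profitability_ratios(net_income=120.0, total_assets=1500.0,
                                         equity=600.0, sales=2400.0)
    print(roa, roe, ros)  # -> 0.08 0.2 0.05
    ```

    Computing these ratios year by year over 2006–2012 is the kind of series the authors use to locate the performance turning point around the business model change.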

  2. Conceptual Modeling of Performance Indicators of Higher Education Institutions

    OpenAIRE

    Kahveci, Tuba Canvar; Taşkın, Harun; Toklu, Merve Cengiz

    2013-01-01

    Measuring and analyzing the performance of any type of organization is carried out by different actors within the organization. The performance indicators of a performance management system increase in number according to the products or services of the organization, and these indicators should be defined for all levels of the organization. Together, these characteristics make the performance evaluation process more complex for organizations. In order to manage this complexity, the process should be modeled at the beginnin...

  3. The Social Responsibility Performance Outcomes Model: Building Socially Responsible Companies through Performance Improvement Outcomes.

    Science.gov (United States)

    Hatcher, Tim

    2000-01-01

    Considers the role of performance improvement professionals and human resources development professionals in helping organizations realize the ethical and financial power of corporate social responsibility. Explains the social responsibility performance outcomes model, which incorporates the concepts of societal needs and outcomes. (LRW)

  4. Desempenho de crianças com desenvolvimento típico de linguagem em prova de vocabulário expressivo Performance by children with typical language development in expressive vocabulary test

    Directory of Open Access Journals (Sweden)

    Simone Rocha de Vasconcellos Hage

    2006-12-01

    Full Text Available PURPOSE: to obtain the profile of children with typical language development in an expressive vocabulary test, and to verify the types of semantic deviations such children used most frequently. METHODS: 400 children with typical language development, aged three to six years, participated in the study. A lexical assessment protocol with 100 items was applied. For each age group, a statistical analysis was carried out, comparing the age groups by means of a non-parametric test. RESULTS: five- and six-year-old children showed similar performance, superior to that of three- and four-year-old children regarding the number of named items, and the number of unnamed items increased as age decreased. Only between the ages of five and six was there no statistically significant difference in named and unnamed items. The total number of semantic deviations of the three-year-olds was higher than that of the four-year-olds, which in turn was higher than that of the five- and six-year-olds. The most frequent deviations were overextension and contiguity, with younger children showing more occurrences of both types than older children. Occurrences of deviations based on morphological proximity, phonological proximity, antonymy, deictics, periphrasis and non-verbal designation were insignificant. CONCLUSION: the older the children, the higher the occurrence of the expected word; the younger the children, the higher the occurrence of unnamed items. Among the semantic deviations, the most frequent were overextension and contiguity.

  5. Faculty Performance Evaluation: The CIPP-SAPS Model.

    Science.gov (United States)

    Mitcham, Maralynne

    1981-01-01

    The issues of faculty performance evaluation for allied health professionals are addressed. Daniel Stufflebeam's CIPP (context-input-process-product) model is introduced and its development into a CIPP-SAPS (self-administrative-peer-student) model is pursued. (Author/CT)

  6. Technical performance of percutaneous and laminectomy leads analyzed by modeling

    NARCIS (Netherlands)

    Manola, L.; Holsheimer, J.

    2004-01-01

    The objective of this study was to compare the technical performance of laminectomy and percutaneous spinal cord stimulation leads with similar contact spacing by computer modeling. Monopolar and tripolar (guarded cathode) stimulation with both lead types in a low-thoracic spine model was simulated.

  7. Neuro-fuzzy model for evaluating the performance of processes ...

    Indian Academy of Sciences (India)

    CHIDOZIE CHUKWUEMEKA NWOBI-OKOYE

    2017-11-16

    Nov 16, 2017 ... In this work an Adaptive Neuro-Fuzzy Inference System (ANFIS) was used to model the periodic performance of some ... Every node i in this layer is an adaptive node with a node function. ... spectral analysis and parameter optimization using a genetic algorithm, the values of v10 and ...

  8. UNCONSTRAINED HANDWRITING RECOGNITION : LANGUAGE MODELS, PERPLEXITY, AND SYSTEM PERFORMANCE

    NARCIS (Netherlands)

    Marti, U-V.; Bunke, H.

    2004-01-01

    In this paper we present a number of language models and their behavior in the recognition of unconstrained handwritten English sentences. We use the perplexity to compare the different models and their prediction power, and relate it to the performance of a recognition system under different

  9. Mathematical Models of Elementary Mathematics Learning and Performance. Final Report.

    Science.gov (United States)

    Suppes, Patrick

    This project was concerned with the development of mathematical models of elementary mathematics learning and performance. Probabilistic finite automata and register machines with a finite number of registers were developed as models and extensively tested with data arising from the elementary-mathematics strand curriculum developed by the…

  10. Activity-Based Costing Model for Assessing Economic Performance.

    Science.gov (United States)

    DeHayes, Daniel W.; Lovrinic, Joseph G.

    1994-01-01

    An economic model for evaluating the cost performance of academic and administrative programs in higher education is described. Examples from its application at Indiana University-Purdue University Indianapolis are used to illustrate how the model has been used to control costs and reengineer processes. (Author/MSE)

  11. Longitudinal modeling in sports: young swimmers' performance and biomechanics profile.

    Science.gov (United States)

    Morais, Jorge E; Marques, Mário C; Marinho, Daniel A; Silva, António J; Barbosa, Tiago M

    2014-10-01

    New theories about dynamical systems highlight the multi-factorial interplay between determinant factors to achieve higher sports performances, including in swimming. Longitudinal research provides useful information on athletes' changes and on how training helps them to excel. These questions may be addressed in one single procedure such as latent growth modeling. The aim of the study was to model a latent growth curve of young swimmers' performance and biomechanics over a season. Fourteen boys (12.33 ± 0.65 years old) and 16 girls (11.15 ± 0.55 years old) were evaluated. Performance, stroke frequency, speed fluctuation, arm's propelling efficiency, active drag, active drag coefficient and power to overcome drag were collected at four different moments of the season. Latent growth curve modeling was computed to understand the longitudinal variation of performance (endogenous variable) over the season according to the biomechanics (exogenous variables). Latent growth curve modeling showed a high inter- and intra-subject variability in the performance growth. Gender had a significant effect at the baseline and during the performance growth. At each evaluation moment, different variables had a meaningful effect on performance (M1: Da, β = -0.62; M2: Da, β = -0.53; M3: ηp, β = 0.59; M4: SF, β = -0.57; all P < .001). The models' goodness-of-fit was 1.40 ⩽ χ²/df ⩽ 3.74 (good-reasonable). Latent modeling is a comprehensive way to gather insight about young swimmers' performance over time. Different variables were mainly responsible for the performance improvement. A gender gap and intra- and inter-subject variability were verified. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Comparison of the performance of net radiation calculation models

    DEFF Research Database (Denmark)

    Kjærsgaard, Jeppe Hvelplund; Cuenca, R.H.; Martinez-Cob, A.

    2009-01-01

    Daily values of net radiation are used in many applications of crop-growth modeling and agricultural water management. Measurements of net radiation are not part of the routine measurement program at many weather stations and are commonly estimated based on other meteorological parameters. Daily....... The performance of the empirical models was nearly identical at all sites. Since the empirical models were easier to use and simpler to calibrate than the physically based models, the results indicate that the empirical models can be used as a good substitute for the physically based ones when available...

  13. Review of Methods for Buildings Energy Performance Modelling

    Science.gov (United States)

    Krstić, Hrvoje; Teni, Mihaela

    2017-10-01

    Research presented in this paper gives a brief review of methods used for buildings energy performance modelling. This paper gives also a comprehensive review of the advantages and disadvantages of available methods as well as the input parameters used for modelling buildings energy performance. European Directive EPBD obliges the implementation of energy certification procedure which gives an insight on buildings energy performance via exiting energy certificate databases. Some of the methods for buildings energy performance modelling mentioned in this paper are developed by employing data sets of buildings which have already undergone an energy certification procedure. Such database is used in this paper where the majority of buildings in the database have already gone under some form of partial retrofitting – replacement of windows or installation of thermal insulation but still have poor energy performance. The case study presented in this paper utilizes energy certificates database obtained from residential units in Croatia (over 400 buildings) in order to determine the dependence between buildings energy performance and variables from database by using statistical dependencies tests. Building energy performance in database is presented with building energy efficiency rate (from A+ to G) which is based on specific annual energy needs for heating for referential climatic data [kWh/(m2a)]. Independent variables in database are surfaces and volume of the conditioned part of the building, building shape factor, energy used for heating, CO2 emission, building age and year of reconstruction. Research results presented in this paper give an insight in possibilities of methods used for buildings energy performance modelling. Further on it gives an analysis of dependencies between buildings energy performance as a dependent variable and independent variables from the database. Presented results could be used for development of new building energy performance

  14. An analytical model of the HINT performance metric

    Energy Technology Data Exchange (ETDEWEB)

    Snell, Q.O.; Gustafson, J.L. [Scalable Computing Lab., Ames, IA (United States)

    1996-10-01

    The HINT benchmark was developed to provide a broad-spectrum metric for computers and to measure performance over the full range of memory sizes and time scales. We have extended our understanding of why HINT performance curves look the way they do and can now predict the curves using an analytical model based on simple hardware specifications as input parameters. Conversely, by fitting the experimental curves with the analytical model, hardware specifications such as memory performance can be inferred to provide insight into the nature of a given computer system.

  15. Disaggregation of Rainy Hours: Compared Performance of Various Models.

    Science.gov (United States)

    Ben Haha, M.; Hingray, B.; Musy, A.

    In the urban environment, the response times of catchments are usually short. To design or to diagnose waterworks in that context, it is necessary to describe rainfall events with a good time resolution: a 10mn time step is often necessary. Such information is not always available. Rainfall disaggregation models have thus to be applied to produce, from rough rainfall data, that short time resolution information. The communication will present the performance obtained with several rainfall disaggregation models that allow for the disaggregation of rainy hours into six 10mn rainfall amounts. The ability of the models to reproduce some statistical characteristics of rainfall (mean, variance, overall distribution of 10mn-rainfall amounts; extreme values of maximal rainfall amounts over different durations) is evaluated thanks to different graphical and numerical criteria. The performance of simple models presented in some scientific papers or developed in the Hydram laboratory as well as the performance of more sophisticated ones is compared with the performance of the basic constant disaggregation model. The compared models are either deterministic or stochastic; for some of them the disaggregation is based on scaling properties of rainfall. The compared models are, in increasing complexity order: constant model, linear model (Ben Haha, 2001), Ormsbee Deterministic model (Ormsbee, 1989), Artificial Neural Network based model (Burian et al. 2000), Hydram Stochastic 1 and Hydram Stochastic 2 (Ben Haha, 2001), Multiplicative Cascade based model (Olsson and Berndtsson, 1998), Ormsbee Stochastic model (Ormsbee, 1989). The 625 rainy hours used for that evaluation (with an hourly rainfall amount greater than 5mm) were extracted from the 21-year chronological rainfall series (10mn time step) observed at the Pully meteorological station, Switzerland.
The models were also evaluated when applied to different rainfall classes depending on the season first and on the
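
    The basic constant disaggregation model used as the benchmark above simply splits each rainy hour into six equal 10mn amounts; whatever scheme is used, the disaggregated values must preserve the hourly total, which is easy to check:

    ```python
    def constant_disaggregation(hourly_mm):
        """Constant model: split an hourly rainfall amount into six equal 10mn amounts."""
        return [hourly_mm / 6.0] * 6

    def disaggregate_series(hourly_series_mm):
        """Apply the constant model to a series of rainy hours."""
        return [constant_disaggregation(h) for h in hourly_series_mm]

    # Two hypothetical rainy hours of 6 mm and 12 mm
    tenmin = disaggregate_series([6.0, 12.0])
    print(tenmin)  # -> [[1.0, 1.0, 1.0, 1.0, 1.0, 1.0], [2.0, 2.0, 2.0, 2.0, 2.0, 2.0]]
    ```

    The more sophisticated models in the comparison replace the uniform split with deterministic patterns or stochastic draws, while keeping the same mass-conservation constraint.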

  16. A network application for modeling a centrifugal compressor performance map

    Science.gov (United States)

    Nikiforov, A.; Popova, D.; Soldatova, K.

    2017-08-01

    The approximation of aerodynamic performance of a centrifugal compressor stage and vaneless diffuser by neural networks is presented. Advantages, difficulties and specific features of the method are described. An example of a neural network and its structure is shown. The performances in terms of efficiency, pressure ratio and work coefficient of 39 model stages within the range of flow coefficient from 0.01 to 0.08 were modeled with mean squared error 1.5 %. In addition, the loss and friction coefficients of vaneless diffusers of relative widths 0.014-0.10 are modeled with mean squared error 2.45 %.
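
    As a much simpler stand-in for the neural-network approximation described above, a quadratic interpolant through a few (flow coefficient, efficiency) samples illustrates the idea of a smooth surrogate performance map; the sample points below are invented, not data from the 39 model stages:

    ```python
    def lagrange_quadratic(points):
        """Return p(x), the quadratic through three (x, y) points (Lagrange form)."""
        (x0, y0), (x1, y1), (x2, y2) = points
        def p(x):
            return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
                    + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
                    + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
        return p

    # Hypothetical (flow coefficient, efficiency) samples of one stage
    eff = lagrange_quadratic([(0.01, 0.70), (0.045, 0.86), (0.08, 0.78)])
    print(eff(0.045))  # reproduces the middle sample exactly
    ```

    A neural network plays the same role over many stages and inputs at once, trading the exact interpolation of a fitted polynomial for better generalization across the whole map.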

  17. Assessment of performance of survival prediction models for cancer prognosis

    Directory of Open Access Journals (Sweden)

    Chen Hung-Chia

    2012-07-01

    Full Text Available Abstract Background Cancer survival studies are commonly analyzed using survival-time prediction models for cancer prognosis. A number of different performance metrics are used to ascertain the concordance between the predicted risk score of each patient and the actual survival time, but these metrics can sometimes conflict. Alternatively, patients are sometimes divided into two classes according to a survival-time threshold, and binary classifiers are applied to predict each patient’s class. Although this approach has several drawbacks, it does provide natural performance metrics such as positive and negative predictive values to enable unambiguous assessments. Methods We compare the survival-time prediction and survival-time threshold approaches to analyzing cancer survival studies. We review and compare common performance metrics for the two approaches. We present new randomization tests and cross-validation methods to enable unambiguous statistical inferences for several performance metrics used with the survival-time prediction approach. We consider five survival prediction models consisting of one clinical model, two gene expression models, and two models from combinations of clinical and gene expression models. Results A public breast cancer dataset was used to compare several performance metrics using five prediction models. (1) For some prediction models, the hazard ratio from fitting a Cox proportional hazards model was significant, but the two-group comparison was insignificant, and vice versa. (2) The randomization test and cross-validation were generally consistent with the p-values obtained from the standard performance metrics. (3) Binary classifiers highly depended on how the risk groups were defined; a slight change of the survival threshold for assignment of classes led to very different prediction results. Conclusions (1) Different performance metrics for evaluation of a survival prediction model may give different conclusions in
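
    One widely used concordance metric for the survival-time prediction approach can be computed directly: over all comparable patient pairs, count those where the higher predicted risk goes with the shorter survival time. The sketch below assumes uncensored data and no tied risk scores, deliberately sidestepping the complications (censoring, ties) that the paper's metrics handle:

    ```python
    from itertools import combinations

    def concordance_index(times, risks):
        """C-index for uncensored data: fraction of comparable pairs ordered correctly."""
        concordant = comparable = 0
        for i, j in combinations(range(len(times)), 2):
            if times[i] == times[j]:
                continue  # tied survival times are not comparable here
            comparable += 1
            shorter, longer = (i, j) if times[i] < times[j] else (j, i)
            if risks[shorter] > risks[longer]:
                concordant += 1
        return concordant / comparable

    # Perfectly ordered risks vs. perfectly reversed ones (toy data)
    print(concordance_index([1, 2, 3, 4], [4.0, 3.0, 2.0, 1.0]))  # -> 1.0
    print(concordance_index([1, 2, 3, 4], [1.0, 2.0, 3.0, 4.0]))  # -> 0.0
    ```

    A value of 0.5 corresponds to random ranking; the conflicts the abstract mentions arise because different metrics summarize this ranking information in different ways.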

  18. Hybrid Corporate Performance Prediction Model Considering Technical Capability

    Directory of Open Access Journals (Sweden)

    Joonhyuck Lee

    2016-07-01

    Full Text Available Many studies have tried to predict corporate performance and stock prices to enhance investment profitability using qualitative approaches such as the Delphi method. However, developments in data processing technology and machine-learning algorithms have resulted in efforts to develop quantitative prediction models in various managerial subject areas. We propose a quantitative corporate performance prediction model that applies the support vector regression (SVR algorithm to solve the problem of the overfitting of training data and can be applied to regression problems. The proposed model optimizes the SVR training parameters based on the training data, using the genetic algorithm to achieve sustainable predictability in changeable markets and managerial environments. Technology-intensive companies represent an increasing share of the total economy. The performance and stock prices of these companies are affected by their financial standing and their technological capabilities. Therefore, we apply both financial indicators and technical indicators to establish the proposed prediction model. Here, we use time series data, including financial, patent, and corporate performance information of 44 electronic and IT companies. Then, we predict the performance of these companies as an empirical verification of the prediction performance of the proposed model.
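
    The genetic-algorithm step, tuning a continuous training parameter against a validation-error surface, can be sketched independently of the SVR itself. The one-dimensional error function below, with its minimum at 3.0, is a made-up stand-in for cross-validated prediction error:

    ```python
    import random

    def genetic_minimize(error_fn, lo, hi, pop_size=30, generations=60, seed=0):
        """Minimal real-coded GA: truncation selection plus Gaussian mutation."""
        rng = random.Random(seed)
        pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=error_fn)
            parents = pop[: pop_size // 2]        # keep the fitter half (elitist)
            children = [min(hi, max(lo, rng.choice(parents) + rng.gauss(0, 0.3)))
                        for _ in range(pop_size - len(parents))]
            pop = parents + children
        return min(pop, key=error_fn)

    # Hypothetical validation-error surface with its optimum at parameter value 3.0
    best = genetic_minimize(lambda c: (c - 3.0) ** 2, lo=0.0, hi=10.0)
    print(best)  # close to 3.0
    ```

    In the proposed model the same loop would evaluate each candidate SVR parameter set by training and validating the regressor, which is far more expensive than this toy objective.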

  19. Facial Performance Transfer via Deformable Models and Parametric Correspondence.

    Science.gov (United States)

    Asthana, Akshay; de la Hunty, Miles; Dhall, Abhinav; Goecke, Roland

    2012-09-01

    The issue of transferring facial performance from one person's face to another's has been an area of interest for the movie industry and the computer graphics community for quite some time. In recent years, deformable face models, such as the Active Appearance Model (AAM), have made it possible to track and synthesize faces in real time. Not surprisingly, deformable face model-based approaches for facial performance transfer have gained tremendous interest in the computer vision and graphics community. In this paper, we focus on the problem of real-time facial performance transfer using the AAM framework. We propose a novel approach of learning the mapping between the parameters of two completely independent AAMs, using them to facilitate the facial performance transfer in a more realistic manner than previous approaches. The main advantage of modeling this parametric correspondence is that it allows a "meaningful" transfer of both the nonrigid shape and texture across faces irrespective of the speakers' gender, shape, and size of the faces, and illumination conditions. We explore linear and nonlinear methods for modeling the parametric correspondence between the AAMs and show that the sparse linear regression method performs the best. Moreover, we show the utility of the proposed framework for a cross-language facial performance transfer that is an area of interest for the movie dubbing industry.

  20. Implementation of multivariate linear mixed-effects models in the analysis of indoor climate performance experiments

    DEFF Research Database (Denmark)

    Jensen, Kasper Lynge; Spliid, Henrik; Toftum, Jørn

    2011-01-01

    The aim of the current study was to apply multivariate mixed-effects modeling to analyze experimental data on the relation between air quality and the performance of office work. The method estimates in one step the effect of the exposure on a multi-dimensional response variable, and yields...... important information on the correlation between the different dimensions of the response variable, which in this study was composed of both subjective perceptions and a two-dimensional performance task outcome. Such correlation is typically not included in the output from univariate analysis methods. Data....... The analysis seems superior to conventional univariate statistics and the information provided may be important for the design of performance experiments in general and for the conclusions that can be based on such studies....

  1. Model-based corridor performance analysis – An application to a European case

    DEFF Research Database (Denmark)

    Panagakos, George; Psaraftis, Harilaos N.

    2017-01-01

    to a number of Key Performance Indicators (KPIs). It consists of decomposing the corridor into transport chains, selecting a sample of typical chains, assessing these chains through a set of KPIs, and then aggregating the chain-level KPIs to corridor-level ones using proper weights. A critical feature......, to the extent covered by the GreCOR application, the proposed methodology can effectively assess the performance of a freight transport corridor. Combining the model-based approach for the sample construction and the study-based approach for the estimation of chain-level indicators exploits the strengths......The paper proposes a methodology for freight corridor performance monitoring that is suitable for sustainability assessments. The methodology, initiated by the EU-funded project SuperGreen, involves the periodic monitoring of a standard set of transport chains along the corridor in relation...

  2. Real-time individualization of the unified model of performance.

    Science.gov (United States)

    Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Balkin, Thomas J; Reifman, Jaques

    2017-12-01

    Existing mathematical models for predicting neurobehavioural performance are not suited for mobile computing platforms because they cannot adapt model parameters automatically in real time to reflect individual differences in the effects of sleep loss. We used an extended Kalman filter to develop a computationally efficient algorithm that continually adapts the parameters of the recently developed Unified Model of Performance (UMP) to an individual. The algorithm accomplishes this in real time as new performance data for the individual become available. We assessed the algorithm's performance by simulating real-time model individualization for 18 subjects subjected to 64 h of total sleep deprivation (TSD) and 7 days of chronic sleep restriction (CSR) with 3 h of time in bed per night, using psychomotor vigilance task (PVT) data collected every 2 h during wakefulness. This UMP individualization process produced parameter estimates that progressively approached the solution produced by a post-hoc fitting of model parameters using all data. The minimum number of PVT measurements needed to individualize the model parameters depended upon the type of sleep-loss challenge, with ~30 required for TSD and ~70 for CSR. However, model individualization depended upon the overall duration of data collection, yielding increasingly accurate model parameters with greater number of days. Interestingly, reducing the PVT sampling frequency by a factor of two did not notably hamper model individualization. The proposed algorithm facilitates real-time learning of an individual's trait-like responses to sleep loss and enables the development of individualized performance prediction models for use in a mobile computing platform. © 2017 European Sleep Research Society.
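
    The recursive idea behind this individualization, tightening a parameter estimate as each new PVT measurement arrives, can be illustrated with a scalar Kalman filter estimating a constant. The true value and noise level below are invented, and the actual algorithm is an extended Kalman filter over the full UMP parameter vector, not this one-dimensional toy:

    ```python
    import random

    def kalman_constant(measurements, prior_mean=0.0, prior_var=100.0, meas_var=1.0):
        """Scalar Kalman filter for a constant state (no process noise)."""
        mean, var = prior_mean, prior_var
        for z in measurements:
            gain = var / (var + meas_var)    # Kalman gain
            mean = mean + gain * (z - mean)  # measurement update
            var = (1.0 - gain) * var         # posterior variance shrinks
        return mean, var

    rng = random.Random(1)
    true_param = 5.0  # hypothetical individual trait value
    zs = [true_param + rng.gauss(0, 1.0) for _ in range(100)]
    est, var = kalman_constant(zs)
    print(est, var)  # estimate near 5.0 with small posterior variance
    ```

    As in the paper's finding, the estimate approaches the batch (post-hoc) solution progressively, and the shrinking variance mirrors how many measurements are needed before the individualized parameters stabilize.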

  3. Observer analysis and its impact on task performance modeling

    Science.gov (United States)

    Jacobs, Eddie L.; Brown, Jeremy B.

    2014-05-01

    Fire fighters use relatively low cost thermal imaging cameras to locate hot spots and fire hazards in buildings. This research describes the analyses performed to study the impact of thermal image quality on fire fighter fire hazard detection task performance. Using human perception data collected by the National Institute of Standards and Technology (NIST) for fire fighters detecting hazards in a thermal image, an observer analysis was performed to quantify the sensitivity and bias of each observer. Using this analysis, the subjects were divided into three groups representing three different levels of performance. The top-performing group was used for the remainder of the modeling. Models were developed which related image quality factors such as contrast, brightness, spatial resolution, and noise to task performance probabilities. The models were fitted to the human perception data using both logistic and probit regression. Probit regression was found to yield superior fits and showed that models including 3rd-order as well as 2nd-order parameter interactions performed best.
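
The logit-versus-probit comparison can be sketched by fitting both link functions to detection data by maximum likelihood. The data here are synthetic and use a single quality factor; the study itself modeled several factors and their interactions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Synthetic detection trials: probability of a hit rises with one
# standardized image-quality factor (e.g. contrast); probit-generated truth.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 400)
p_true = norm.cdf(0.3 + 1.2 * x)
y = rng.random(400) < p_true                 # hit / miss per trial

def nll(beta, link):
    """Negative log-likelihood of a binary GLM with the given link."""
    p = np.clip(link(beta[0] + beta[1] * x), 1e-9, 1 - 1e-9)
    return -np.sum(np.where(y, np.log(p), np.log(1 - p)))

logistic = lambda z: 1.0 / (1.0 + np.exp(-z))
fit_logit = minimize(nll, [0.0, 1.0], args=(logistic,))
fit_probit = minimize(nll, [0.0, 1.0], args=(norm.cdf,))

# Lower negative log-likelihood indicates the better-fitting link.
print(round(fit_logit.fun, 1), round(fit_probit.fun, 1))
```

Because the two links are nearly proportional over the mid-range, differences show up mainly in the tails, which is where real perception data can favor one over the other.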

  4. System Level Modelling and Performance Estimation of Embedded Systems

    DEFF Research Database (Denmark)

    Tranberg-Hansen, Anders Sejer

    The advances seen in the semiconductor industry within the last decade have brought the possibility of integrating evermore functionality onto a single chip forming functionally highly advanced embedded systems. These integration possibilities also imply that as the design complexity increases, so...... an efficient system level design methodology, a modelling framework for performance estimation and design space exploration at the system level is required. This thesis presents a novel component based modelling framework for system level modelling and performance estimation of embedded systems. The framework...... is performed by having the framework produce detailed quantitative information about the system model under investigation. The project is part of the national Danish research project, Danish Network of Embedded Systems (DaNES), which is funded by the Danish National Advanced Technology Foundation. The project...

  5. Causal Analysis for Performance Modeling of Computer Programs

    Directory of Open Access Journals (Sweden)

    Jan Lemeire

    2007-01-01

    Full Text Available Causal modeling and the accompanying learning algorithms provide useful extensions for in-depth statistical investigation and automation of performance modeling. We enlarged the scope of existing causal structure learning algorithms by using the form-free information-theoretic concept of mutual information and by introducing the complexity criterion for selecting direct relations among equivalent relations. The underlying probability distribution of experimental data is estimated by kernel density estimation. We then report on the benefits of a dependency analysis and the decompositional capacities of causal models. Useful qualitative models, providing insight into the role of every performance factor, were inferred from experimental data. This paper reports on the results for an LU decomposition algorithm and on the study of the parameter sensitivity of the Kakadu implementation of the JPEG-2000 standard. Next, the analysis was used to search for generic performance characteristics of the applications.
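
A core ingredient of the dependency analysis is estimating mutual information I(X;Y) from data. The paper uses kernel density estimation for continuous factors; for brevity this sketch discretizes into bins instead, which is an assumption, not the paper's method.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Binned estimate of I(X;Y) in nats from paired samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                          # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
a = rng.normal(size=5000)
b = a + 0.1 * rng.normal(size=5000)   # strongly dependent on a
c = rng.normal(size=5000)             # independent of a

print(mutual_information(a, b) > mutual_information(a, c))  # True
```

Because mutual information is form-free, it detects the a-b dependency without assuming linearity, which is what lets the structure-learning step rank candidate direct relations.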

  6. Impact of reactive settler models on simulated WWTP performance.

    Science.gov (United States)

    Gernaey, K V; Jeppsson, U; Batstone, D J; Ingildsen, P

    2006-01-01

    Including a reactive settler model in a wastewater treatment plant model allows representation of the biological reactions taking place in the sludge blanket in the settler, something that is neglected in many simulation studies. The idea of including a reactive settler model is investigated for an ASM1 case study. Simulations with a whole plant model including the non-reactive Takács settler model are used as a reference, and are compared to simulation results considering two reactive settler models. The first is a return sludge model block removing oxygen and a user-defined fraction of nitrate, combined with a non-reactive Takács settler. The second is a fully reactive ASM1 Takács settler model. Simulations with the ASM1 reactive settler model predicted a 15.3% and 7.4% improvement of the simulated N removal performance, for constant (steady-state) and dynamic influent conditions respectively. The oxygen/nitrate return sludge model block predicts a 10% improvement of N removal performance under dynamic conditions, and might be the better modelling option for ASM1 plants: it is computationally more efficient and it will not overrate the importance of decay processes in the settler.

  7. Some considerations for validation of repository performance assessment models

    International Nuclear Information System (INIS)

    Eisenberg, N.

    1991-01-01

    Validation is an important aspect of the regulatory uses of performance assessment. A substantial body of literature exists indicating the manner in which validation of models is usually pursued. Because performance models for a nuclear waste repository cannot be tested over the long time periods for which the model must make predictions, the usual avenue for model validation is precluded. Further impediments to model validation include a lack of fundamental scientific theory to describe important aspects of repository performance and an inability to easily deduce the complex, intricate structures characteristic of a natural system. A successful strategy for validation must attempt to resolve these difficulties in a direct fashion. Although some procedural aspects will be important, the main reliance of validation should be on scientific substance and logical rigor. The level of validation needed will be mandated, in part, by the uses to which these models are put, rather than by the ideal of validation of a scientific theory. Because of the importance of the validation of performance assessment models, the NRC staff has engaged in a program of research and international cooperation to seek progress in this important area. 2 figs., 16 refs

  8. Evaluating Flight Crew Performance by a Bayesian Network Model

    Directory of Open Access Journals (Sweden)

    Wei Chen

    2018-03-01

    Full Text Available Flight crew performance is of great significance in keeping flights safe and sound. When evaluating crew performance, quantitative detailed behavior information may not be available. The present paper introduces a Bayesian Network (BN) for flight crew performance evaluation, which permits the utilization of multidisciplinary sources of objective and subjective information, despite sparse behavioral data. In this paper, the causal factors are selected based on the analysis of 484 aviation accidents caused by human factors. Then, a network termed the Flight Crew Performance Model is constructed. The Delphi technique helps to gather subjective data as a supplement to objective data from accident reports. The conditional probabilities are elicited by the leaky noisy-MAX model. Two modes of BN inference, probability prediction and probabilistic diagnosis, are used, and some interesting conclusions are drawn, which could provide data support for interventions in human error management in aviation safety.
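
The leaky noisy-MAX rule mentioned above reduces, for binary variables, to the leaky noisy-OR: each active cause independently fails to produce the effect with probability 1 - p_i, and a "leak" captures unmodeled causes. The link strengths and leak value below are illustrative, not from the paper.

```python
# Sketch of the leaky noisy-OR rule (binary special case of leaky noisy-MAX)
# for filling in one entry of a Bayesian-network CPT.
def leaky_noisy_or(active_links, leak):
    """P(effect | this set of active causes), each link acting independently."""
    q = 1.0 - leak                # chance the leak alone does not fire
    for p in active_links:
        q *= 1.0 - p              # chance this cause also fails to fire
    return 1.0 - q

# e.g. two causal factors present, with link strengths 0.7 and 0.4,
# plus a background leak of 0.05:
p = leaky_noisy_or([0.7, 0.4], leak=0.05)
print(round(p, 3))  # 0.829
```

The appeal for sparse data is that a CPT over n causes needs only n link parameters plus the leak, instead of 2^n entries.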

  9. Human performance modeling for system of systems analytics: combat performance-shaping factors.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Miller, Dwight Peter

    2006-01-01

    The US military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives. To support this goal, Sandia National Laboratories (SNL) has undertaken a program of HPM as an integral augmentation to its system-of-systems (SoS) analytics capabilities. The previous effort, reported in SAND2005-6569, evaluated the effects of soldier cognitive fatigue on SoS performance. The current effort began with a very broad survey of any performance-shaping factors (PSFs) that also might affect soldier performance in combat situations. The work included consideration of three different approaches to cognition modeling and how appropriate they would be for application to SoS analytics. The bulk of this report categorizes 47 PSFs into three groups (internal, external, and task-related) and provides brief descriptions of how each affects combat performance, according to the literature. The PSFs were then assembled into a matrix with 22 representative military tasks and assigned one of four levels of estimated negative impact on task performance, based on the literature. Blank versions of the matrix were then sent to two ex-military subject-matter experts to be filled out based on their personal experiences. Data analysis was performed to identify the consensus most influential PSFs. Results indicate that combat-related injury, cognitive fatigue, inadequate training, physical fatigue, thirst, stress, poor perceptual processing, and presence of chemical agents are among the PSFs with the most negative impact on combat performance.

  10. Global climate model performance over Alaska and Greenland

    DEFF Research Database (Denmark)

    Walsh, John E.; Chapman, William L.; Romanovsky, Vladimir

    2008-01-01

    The performance of a set of 15 global climate models used in the Coupled Model Intercomparison Project is evaluated for Alaska and Greenland, and compared with the performance over broader pan-Arctic and Northern Hemisphere extratropical domains. Root-mean-square errors relative to the 1958...... of the models are generally much larger than the biases of the composite output, indicating that the systematic errors differ considerably among the models. There is a tendency for the models with smaller errors to simulate a larger greenhouse warming over the Arctic, as well as larger increases of Arctic...... to narrowing the uncertainty and obtaining more robust estimates of future climate change in regions such as Alaska, Greenland, and the broader Arctic....

  11. Human performance modeling for system of systems analytics.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.; Lawton, Craig R.; Basilico, Justin Derrick; Longsine, Dennis E. (INTERA, Inc., Austin, TX); Forsythe, James Chris; Gauthier, John Henry; Le, Hai D.

    2008-10-01

    A Laboratory-Directed Research and Development project was initiated in 2005 to investigate Human Performance Modeling in a System of Systems analytic environment. SAND2006-6569 and SAND2006-7911 document interim results from this effort; this report documents the final results. The problem is difficult because of the number of humans involved in a System of Systems environment and the generally poorly defined nature of the tasks that each human must perform. A two-pronged strategy was followed: one prong was to develop human models using a probability-based method similar to that first developed for relatively well-understood probability-based performance modeling; another prong was to investigate more state-of-the-art human cognition models. The probability-based modeling resulted in a comprehensive addition of human-modeling capability to the existing SoSAT computer program. The cognitive modeling resulted in an increased understanding of what is necessary to incorporate cognition-based models into a System of Systems analytic environment.

  12. Typicity in Potato: Characterization of Geographic Origin

    Directory of Open Access Journals (Sweden)

    Marco Manzelli

    2010-03-01

    Full Text Available A two-year study was carried out in three regions of Italy, and the crop performance and tuber chemical composition of three typical potato varieties were evaluated. Carbon and nitrogen tuber content was determined by means of an elemental analyzer and the other mineral elements by means of a spectrometer. The same determinations were performed on soil samples taken from the experimental areas. Principal Component Analysis, applied to the results of the mineral element tuber analysis, permitted the classification of all potato tuber samples according to their geographic origin. Only partial discrimination was obtained as a function of potato variety. Some correlations between mineral content in the tubers and in the soil were also detected. Analytical and statistical methods proved to be useful in verifying the authenticity of guaranteed geographical food denominations.
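
The PCA step can be sketched as follows: project mineral-element profiles of tuber samples onto the first two principal components and check how much variance they capture. The 4-element profiles and the three "regions" below are synthetic stand-ins, not the study's data.

```python
import numpy as np

# Synthetic mineral profiles: 20 tuber samples per region, 4 elements each.
rng = np.random.default_rng(5)
regions = {"A": [10, 2, 5, 1], "B": [14, 2, 4, 3], "C": [9, 5, 8, 2]}
X, labels = [], []
for name, mean in regions.items():
    X.append(rng.normal(mean, 0.3, size=(20, 4)))
    labels += [name] * 20          # would color a scatter of the scores
X = np.vstack(X)

Xc = X - X.mean(axis=0)            # center each element (column)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T             # sample coordinates on PC1/PC2
explained = float((S[:2] ** 2).sum() / (S ** 2).sum())
print(round(explained, 3))         # fraction of variance on the 2 PCs
```

When between-region differences dominate within-region noise, as here, the first two components separate the origins almost completely, mirroring the classification reported in the abstract.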

  13. Human performance modeling for system of systems analytics :soldier fatigue.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Campbell, James E.; Miller, Dwight Peter

    2005-10-01

    The military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives, as can be seen in the Department of Defense's (DoD) Defense Modeling and Simulation Office's (DMSO) Master Plan (DoD 5000.59-P 1995). Toward this goal, the military is currently spending millions of dollars on programs devoted to HPM in various military contexts. Examples include the Human Performance Modeling Integration (HPMI) program within the Air Force Research Laboratory, which focuses on integrating HPMs with constructive models of systems (e.g. cockpit simulations), and the Navy's Human Performance Center (HPC) established in September 2003. Nearly all of these initiatives focus on the interface between humans and a single system. This is insufficient in the era of highly complex, network-centric SoS. This report presents research and development in the area of HPM in a system-of-systems (SoS). Specifically, this report addresses modeling soldier fatigue and the potential impacts soldier fatigue can have on SoS performance.

  14. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  15. The better model to predict and improve pediatric health care quality: performance or importance-performance?

    Science.gov (United States)

    Olsen, Rebecca M; Bryant, Carol A; McDermott, Robert J; Ortinau, David

    2013-01-01

    The perpetual search for ways to improve pediatric health care quality has resulted in a multitude of assessments and strategies; however, there is little research evidence as to their conditions for maximum effectiveness. A major reason for the lack of evaluation research and successful quality improvement initiatives is the methodological challenge of measuring quality from the parent perspective. Comparison of performance-only and importance-performance models was done to determine the better predictor of pediatric health care quality and the more successful method for improving the quality of care provided to children. Fourteen pediatric health care centers serving approximately 250,000 patients in 70,000 households in three West Central Florida counties were studied. A cross-sectional design was used to determine the importance and performance of 50 pediatric health care attributes and four global assessments of pediatric health care quality. Exploratory factor analysis revealed five dimensions of care (physician care, access, customer service, timeliness of services, and health care facility). Hierarchical multiple regression compared the performance-only and the importance-performance models. In-depth interviews, participant observations, and a direct cognitive structural analysis identified 50 health care attributes included in a mailed survey to parents (n = 1,030). The tailored design method guided survey development and data collection. The importance-performance multiplicative additive model was a better predictor of pediatric health care quality. Attribute importance moderates performance and quality, making the importance-performance model superior for measuring and providing a deeper understanding of pediatric health care quality and a better method for improving the quality of care provided to children. Regardless of attribute performance, if the level of attribute importance is not taken into consideration, health care organizations may spend valuable ...
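
The model comparison in this abstract can be sketched as two regressions: overall quality on attribute performance alone, versus on importance-weighted (multiplicative) performance. The data, effect sizes and rating scales below are invented for illustration.

```python
import numpy as np

# Synthetic parent ratings on 1-5 scales for one attribute.
rng = np.random.default_rng(3)
n = 500
importance = rng.uniform(1, 5, n)        # parent-rated attribute importance
performance = rng.uniform(1, 5, n)       # parent-rated attribute performance
quality = 0.8 * importance * performance + rng.normal(0, 1, n)

def r_squared(x, y):
    """R^2 of an ordinary least-squares fit of y on [1, x]."""
    X = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_perf = r_squared(performance, quality)             # performance-only
r2_ip = r_squared(importance * performance, quality)  # importance-performance
print(r2_ip > r2_perf)  # the multiplicative model fits better here
```

With a multiplicative generating process, importance moderates the performance-quality link, so the importance-performance predictor dominates, which is the pattern the study reports.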

  16. Four-Stroke, Internal Combustion Engine Performance Modeling

    Science.gov (United States)

    Wagner, Richard C.

    In this thesis, two models of four-stroke, internal combustion engines are created and compared. The first model predicts the intake and exhaust processes using isentropic flow equations augmented by discharge coefficients. The second model predicts the intake and exhaust processes using a compressible, time-accurate, Quasi-One-Dimensional (Q1D) approach. Both models employ the same heat release and reduced-order modeling of the cylinder charge. Both include friction and cylinder loss models so that the predicted performance values can be compared to measurements. The results indicate that the isentropic-based model neglects important fluid mechanics and returns inaccurate results. The Q1D flow model, combined with the reduced-order model of the cylinder charge, is able to capture the dominant intake and exhaust fluid mechanics and produces results that compare well with measurement. Fluid friction, convective heat transfer, piston ring and skirt friction and temperature-varying specific heats in the working fluids are all shown to be significant factors in engine performance predictions. Charge blowby is shown to play a lesser role.
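
The first model's key ingredient, isentropic orifice flow with a discharge coefficient, can be sketched as below, including the choked-flow limit. The gas properties are standard air values; the geometry and conditions are illustrative, not from the thesis.

```python
import math

def orifice_mdot(cd, area, p0, t0, p_back, gamma=1.4, R=287.0):
    """Mass flow [kg/s] through an orifice from stagnation state (p0, t0)."""
    # Critical pressure ratio: below this the flow is choked.
    pr_crit = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))
    pr = max(p_back / p0, pr_crit)            # clamp at choking
    term = (2.0 * gamma / (R * t0 * (gamma - 1.0))) * (
        pr ** (2.0 / gamma) - pr ** ((gamma + 1.0) / gamma))
    return cd * area * p0 * math.sqrt(term)

# Intake-valve-like numbers: 1 cm^2 effective area, atmospheric upstream.
mdot = orifice_mdot(cd=0.8, area=1e-4, p0=101325.0, t0=300.0, p_back=50000.0)
print(round(mdot, 4))  # kg/s; this pressure ratio is already choked
```

Once the back-pressure ratio drops below about 0.528 for air, the mass flow stops responding to downstream pressure, which is exactly the behavior a time-accurate Q1D model resolves more faithfully than the quasi-steady orifice equation.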

  17. Aircraft Anomaly Detection Using Performance Models Trained on Fleet Data

    Science.gov (United States)

    Gorinevsky, Dimitry; Matthews, Bryan L.; Martin, Rodney

    2012-01-01

    This paper describes an application of data mining technology called Distributed Fleet Monitoring (DFM) to Flight Operational Quality Assurance (FOQA) data collected from a fleet of commercial aircraft. DFM transforms the data into aircraft performance models, flight-to-flight trends, and individual flight anomalies by fitting a multi-level regression model to the data. The model represents aircraft flight performance and takes into account fixed effects: flight-to-flight and vehicle-to-vehicle variability. The regression parameters include aerodynamic coefficients and other aircraft performance parameters that are usually identified by aircraft manufacturers in flight tests. Using DFM, the multi-terabyte FOQA data set with a half-million flights was processed in a few hours. The anomalies found include wrong values of computed variables (e.g., aircraft weight), sensor failures and biases, and trends in flight actuators. These anomalies were missed by the existing airline monitoring of FOQA data exceedances.

  18. Model complexity and performance: How far can we simplify?

    Science.gov (United States)

    Raick, C.; Soetaert, K.; Grégoire, M.

    2006-07-01

    Handling model complexity and reliability is a key area of research today. While complex models containing sufficient detail have become possible due to increased computing power, they often lead to too much uncertainty. On the other hand, very simple models often crudely oversimplify the real ecosystem and cannot be used for management purposes. Starting from a complex and validated 1D pelagic ecosystem model of the Ligurian Sea (NW Mediterranean Sea), we derived simplified aggregated models in which either the unbalanced algal growth, the functional group diversity or the explicit description of the microbial loop was sacrificed. To overcome the problem of data availability with adequate spatial and temporal resolution, the outputs of the complex model are used as the baseline of perfect knowledge to calibrate the simplified models. Objective criteria of model performance were used to compare the simplified models’ results to the complex model output and to the available data at the DYFAMED station in the central Ligurian Sea. We show that even the simplest (NPZD) model is able to represent the global ecosystem features described by the complex model (e.g. primary and secondary production, particulate organic matter export flux, etc.). However, a certain degree of sophistication in the formulation of some biogeochemical processes is required to produce realistic behaviors (e.g. the phytoplankton competition, the potential carbon or nitrogen limitation of the zooplankton ingestion, the model trophic closure, etc.). In general, a 9 state-variable model that has the functional group diversity removed, but which retains the bacterial loop and the unbalanced algal growth, performs best.
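
The simplest model class mentioned above, a 4-compartment NPZD (Nutrient-Phytoplankton-Zooplankton-Detritus) box model, can be sketched in a few lines. All rate constants below are illustrative, not the Ligurian Sea parameterization.

```python
# Minimal NPZD box model, forward-Euler time stepping.
# By construction the four tendencies sum to zero, so total N is conserved.
def npzd_step(n, p, z, d, dt=0.1,
              mu=1.0, kn=0.5, g=0.4, kp=0.6, mp=0.05, mz=0.05, r=0.1):
    uptake = mu * n / (kn + n) * p      # nutrient-limited phytoplankton growth
    graze = g * p / (kp + p) * z        # zooplankton grazing on phytoplankton
    dn = -uptake + r * d                # remineralized detritus returns to N
    dp = uptake - graze - mp * p
    dz = 0.7 * graze - mz * z           # 70% assimilation efficiency
    dd = 0.3 * graze + mp * p + mz * z - r * d
    return n + dt * dn, p + dt * dp, z + dt * dz, d + dt * dd

state = (2.0, 0.1, 0.05, 0.0)           # initial N, P, Z, D
for _ in range(1000):
    state = npzd_step(*state)
print(round(sum(state), 6))             # total nutrient budget is conserved
```

Even this crude sketch shows why NPZD can reproduce bulk fluxes (everything is a closed nutrient budget) while missing processes such as phytoplankton competition, which require more state variables.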

  19. Thin film bulk acoustic wave devices : performance optimization and modeling

    OpenAIRE

    Pensala, Tuomas

    2011-01-01

    Thin film bulk acoustic wave (BAW) resonators and filters operating in the GHz range are used in mobile phones for the most demanding filtering applications and complement the surface acoustic wave (SAW) based filters. Their main advantages are small size and high performance at frequencies above 2 GHz. This work concentrates on the characterization, performance optimization, and modeling techniques of thin film BAW devices. Laser interferometric vibration measurements together with plat...

  20. Computational modelling of expressive music performance in hexaphonic guitar

    OpenAIRE

    Siquier, Marc

    2017-01-01

    Computational modelling of expressive music performance has been widely studied in the past. While previous work in this area has been mainly focused on classical piano music, there has been very little work on guitar music, and such work has focused on monophonic guitar playing. In this work, we present a machine learning approach to automatically generate expressive performances from non-expressive music scores for polyphonic guitar. We treated the guitar as a hexaphonic instrument, obtaining ...

  1. The predictive performance and stability of six species distribution models.

    Science.gov (United States)

    Duan, Ren-Yan; Kong, Xiao-Quan; Huang, Min-Yi; Fan, Wei-Yi; Wang, Zhi-Gao

    2014-01-01

    Predicting species' potential geographical ranges with species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution areas using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared predictive performance by testing the consistency between observations and simulated distributions, and assessed stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials. These four SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of the SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
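
The two evaluation metrics used above, AUC (rank-based, threshold-free) and Cohen's Kappa (at a fixed threshold), can be computed directly. The scores and presence/absence labels below are a tiny synthetic example.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def kappa(pred, labels):
    """Cohen's Kappa for binary predictions: agreement beyond chance."""
    po = np.mean(pred == labels)                        # observed agreement
    pe = (np.mean(pred) * np.mean(labels) +
          np.mean(1 - pred) * np.mean(1 - labels))      # chance agreement
    return (po - pe) / (1 - pe)

labels = np.array([1, 1, 1, 0, 0, 0, 0, 1])
scores = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1, 0.7])
pred = (scores >= 0.5).astype(int)
print(round(auc(scores, labels), 3), round(kappa(pred, labels), 3))  # 0.938 0.5
```

Repeating such metrics over 100 trials, as the study does, yields the standard deviations and confidence intervals used to judge model stability.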

  2. A model to describe the performance of the UASB reactor.

    Science.gov (United States)

    Rodríguez-Gómez, Raúl; Renman, Gunno; Moreno, Luis; Liu, Longcheng

    2014-04-01

    A dynamic model to describe the performance of the Upflow Anaerobic Sludge Blanket (UASB) reactor was developed. It includes dispersion, advection, and reaction terms, as well as the resistances through which the substrate passes before its biotransformation. The UASB reactor is viewed as several continuous stirred tank reactors connected in series. The good agreement between experimental and simulated results shows that the model is able to predict the performance of the UASB reactor (i.e. substrate concentration, biomass concentration, granule size, and height of the sludge bed).
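
The tanks-in-series idea can be sketched at steady state: the reactor is approximated as N equal CSTRs in series with first-order substrate removal. The rate constant, flow and volumes below are illustrative, not the paper's calibrated values (the paper's model is dynamic and includes mass-transfer resistances).

```python
def effluent_concentration(c_in, q, v_total, k, n_tanks):
    """Steady-state substrate concentration out of N equal CSTRs in series."""
    v = v_total / n_tanks
    c = c_in
    for _ in range(n_tanks):
        # per-tank mass balance with first-order removal:
        # q*c_prev = q*c + k*c*v  =>  c = c_prev / (1 + k*v/q)
        c = c / (1.0 + k * v / q)
    return c

# e.g. 500 mg/L in, 2 m^3/h flow, 10 m^3 sludge bed, k = 1.2 1/h, 5 tanks:
c_out = effluent_concentration(c_in=500.0, q=2.0, v_total=10.0, k=1.2, n_tanks=5)
print(round(c_out, 1))  # mg/L
```

Increasing the number of tanks moves the cascade from fully mixed toward plug-flow behavior, which is how the series-of-CSTRs construction captures the dispersion term in the full model.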

  3. PORFLOW Modeling Supporting The H-Tank Farm Performance Assessment

    International Nuclear Information System (INIS)

    Jordan, J. M.; Flach, G. P.; Westbrook, M. L.

    2012-01-01

    Numerical simulations of groundwater flow and contaminant transport in the vadose and saturated zones have been conducted using the PORFLOW code in support of an overall Performance Assessment (PA) of the H-Tank Farm. This report provides technical detail on selected aspects of PORFLOW model development and describes the structure of the associated electronic files. The PORFLOW models for the H-Tank Farm PA, Rev. 1 were updated with grout, solubility, and inventory changes. The aquifer model was refined. In addition, a set of flow sensitivity runs were performed to allow flow to be varied in the related probabilistic GoldSim models. The final PORFLOW concentration values are used as input into a GoldSim dose calculator

  4. PORFLOW Modeling Supporting The H-Tank Farm Performance Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, J. M.; Flach, G. P.; Westbrook, M. L.

    2012-08-31

    Numerical simulations of groundwater flow and contaminant transport in the vadose and saturated zones have been conducted using the PORFLOW code in support of an overall Performance Assessment (PA) of the H-Tank Farm. This report provides technical detail on selected aspects of PORFLOW model development and describes the structure of the associated electronic files. The PORFLOW models for the H-Tank Farm PA, Rev. 1 were updated with grout, solubility, and inventory changes. The aquifer model was refined. In addition, a set of flow sensitivity runs were performed to allow flow to be varied in the related probabilistic GoldSim models. The final PORFLOW concentration values are used as input into a GoldSim dose calculator.

  5. Model of service-oriented catering supply chain performance evaluation

    Directory of Open Access Journals (Sweden)

    Juanqiong Gou

    2013-03-01

    Full Text Available Purpose: The aim of this paper is to construct a performance evaluation model for a service-oriented catering supply chain. Design/methodology/approach: Based on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain and then presents a service-oriented catering supply chain model built on a platform of logistics and information. Finally, the fuzzy AHP method is used to evaluate the performance of the service-oriented catering supply chain. Findings: From the analysis of the characteristics of the catering supply chain, we construct the performance evaluation model so as to safeguard food safety, logistics efficiency, price stability and so on. Practical implications: The evaluation can be used not only for an enterprise's own improvement, but also for selecting different customers and choosing different development models. Originality/value: This paper offers a new definition of the service-oriented catering supply chain and a model to evaluate its performance.
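
The crisp AHP step underlying a fuzzy-AHP evaluation derives criterion weights from a pairwise-comparison matrix via its principal eigenvector, plus a consistency check. The 3x3 matrix below (say, food safety vs. logistics efficiency vs. price stability) is illustrative, not from the paper, and the fuzzification step is omitted.

```python
import numpy as np

# Pairwise comparisons on Saaty's 1-9 scale (reciprocal matrix).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)                 # principal (Perron) eigenvalue
w = np.abs(vecs[:, i].real)
w /= w.sum()                             # normalized criterion weights

# Consistency ratio; RI = 0.58 is Saaty's random index for a 3x3 matrix.
ci = (vals[i].real - len(A)) / (len(A) - 1)
cr = ci / 0.58
print(np.round(w, 3), cr < 0.1)          # CR < 0.1 means judgments are usable
```

In the fuzzy variant, each entry of A becomes a fuzzy number to capture judgment uncertainty, but the weight-extraction logic is the same in spirit.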

  6. Performance of the meteorological radiation model during the solar eclipse of 29 March 2006

    Directory of Open Access Journals (Sweden)

    B. E. Psiloglou

    2007-12-01

    Full Text Available Various solar broadband models were developed in the second half of the 20th century. The driving demand has been the estimation of available solar energy at different locations on earth for various applications. The motivation for such developments has been the widespread lack of solar radiation measurements at the global scale. Therefore, the main goal of such codes is to generate artificial solar radiation series or to calculate the availability of solar energy at a place.

    One of the broadband models developed in the late 1980s was the Meteorological Radiation Model (MRM). The main advantage of MRM over other similar models is its simplicity in acquiring and using the necessary input data, i.e. air temperature, relative humidity, barometric pressure and sunshine duration from any of the many meteorological stations.

    The present study briefly describes the various steps (versions) of MRM and, in greater detail, the latest version 5. To show the flexibility and performance of MRM, a demanding test of the code under the (almost) total solar eclipse conditions of 29 March 2006 over Athens was performed and its results were compared with real measurements. This hard comparison shows that MRM can simulate solar radiation during a solar eclipse event as effectively as on a typical day. Because solar energy applications are chiefly concerned with the total radiation component, MRM focuses on it. For this component, the RMSE and MBE statistical estimators during this study were found to be 7.64% and −1.67% on 29 March, as compared to the respective 5.30% and +2.04% for 28 March. This efficiency of MRM even during an eclipse makes the model promising for handling typical situations with even better results.

  7. Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model

    Science.gov (United States)

    Boone, Spencer

    2017-01-01

    This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.
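
The trend-regression idea in this abstract can be sketched with an ordinary least-squares fit of a maneuver parameter against time, then extrapolated to the next planned burn. The parameter name and all values below are invented for illustration.

```python
import numpy as np

# Hypothetical history of a maneuver parameter (e.g. percent delta-v error)
# over five inclination-adjust burns, spaced ~90 days apart.
t = np.array([0.0, 90.0, 180.0, 270.0, 360.0])   # days since first burn
err = np.array([1.0, 1.3, 1.7, 2.1, 2.4])        # percent error per burn

slope, intercept = np.polyfit(t, err, 1)          # linear trend fit
next_err = slope * 450.0 + intercept              # extrapolate to next burn
print(round(next_err, 2))
```

Fitting over the maneuver history rather than using only the last burn is what lets a statistical model separate a genuine long-term trend from single-maneuver noise.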

  8. Dynamic Experiments and Constitutive Model Performance for Polycarbonate

    Science.gov (United States)

    2014-07-01

    Storage and loss tangent moduli for PC; DMA experiments performed at 1 Hz and shift at 100 Hz showing the α- and β-transition regions using the... The author would also like to thank Dr. Adam D. Mulliken for courteously providing the experimental results and the Abaqus version of the model and... exponential factor. In 1955, the Ree-Eyring model further accounted for microstructural mechanisms by relating molecular motions to yield behavior

  9. Does segmentation always improve model performance in credit scoring?

    OpenAIRE

    Bijak, Katarzyna; Thomas, Lyn C.

    2012-01-01

    Credit scoring allows for the credit risk assessment of bank customers. A single scoring model (scorecard) can be developed for the entire customer population, e.g. using logistic regression. However, it is often expected that segmentation, i.e. dividing the population into several groups and building separate scorecards for them, will improve the model performance. The most common statistical methods for segmentation are the two-step approaches, where logistic regression follows Classificati...
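
    The two-step approach described above (segment first, then fit a scorecard per segment) can be sketched in miniature. This is a hedged illustration on synthetic data with a hand-picked split, not the paper's method: it simply shows why segmentation can help when the predictor-default relationship differs across groups.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-ascent logistic regression; returns weights
    (intercept first)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def predict_proba(w, X):
    Xb = np.hstack([np.ones((len(X), 1)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Synthetic population in which the effect of feature 1 on default
# reverses sign between two segments (split on feature 0).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
seg = X[:, 0] > 0
logit = np.where(seg, 2.0 * X[:, 1], -2.0 * X[:, 1])
y = (rng.uniform(size=400) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# One pooled scorecard vs. one scorecard per segment.
w_pooled = fit_logistic(X, y)
w_a, w_b = fit_logistic(X[seg], y[seg]), fit_logistic(X[~seg], y[~seg])

acc_pooled = np.mean((predict_proba(w_pooled, X) > 0.5) == (y > 0.5))
p_seg = np.where(seg, predict_proba(w_a, X), predict_proba(w_b, X))
acc_seg = np.mean((p_seg > 0.5) == (y > 0.5))
```

    Here the pooled scorecard cannot represent the sign reversal, so the segmented scorecards classify markedly better; the paper's point is that such gains are not guaranteed in general.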

  10. Building Information Modeling (BIM) for Indoor Environmental Performance Analysis

    DEFF Research Database (Denmark)

    The report is part of a research assignment carried out by students in the 5 ECTS course “Project Byggeri – [entitled as: Building Information Modeling (BIM) – Modeling & Analysis]”, during the 3rd semester of the master degree in Civil and Architectural Engineering, Department of Engineering, Aarhus University. This includes seven papers describing BIM for Sustainability, concentrating specifically on individual topics regarding Indoor Environment Performance Analysis.

  11. Rethinking board role performance: Towards an integrative model

    Directory of Open Access Journals (Sweden)

    Babić Verica M.

    2011-01-01

    Full Text Available This research focuses on the board role evolution analysis which took place simultaneously with the development of different corporate governance theories and perspectives. The purpose of this paper is to provide understanding of key factors that make a board effective in the performance of its role. We argue that analysis of board role performance should incorporate both structural and process variables. This paper’s contribution is the development of an integrative model that aims to establish the relationship between the board structure and processes on the one hand, and board role performance on the other.

  12. Human performance models for computer-aided engineering

    Science.gov (United States)

    Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)

    1989-01-01

    This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.

  13. The integration of intrapreneurship into a performance management model

    Directory of Open Access Journals (Sweden)

    Thabo WL Foba

    2007-02-01

    Full Text Available This study aimed to investigate the feasibility of using the dynamics of intrapreneurship to develop a new generation performance management model based on the structural dynamics of the Balanced Score Card approach. The literature survey covered entrepreneurship, from which the construct, intrapreneurship, was synthesized. Reconstructive logic and Hermeneutic methodology were used in studying the performance management systems and the Balanced Score Card approach. The dynamics were then integrated into a new approach for the management of performance of intrapreneurial employees in the corporate environment. An unstructured opinion survey followed: a sample of intrapreneurship students evaluated and validated the model’s conceptual feasibility and probable practical value.

  14. Simulation, Characterization, and Optimization of Metabolic Models with the High Performance Systems Biology Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Lunacek, M.; Nag, A.; Alber, D. M.; Gruchalla, K.; Chang, C. H.; Graf, P. A.

    2011-01-01

    The High Performance Systems Biology Toolkit (HiPer SBTK) is a collection of simulation and optimization components for metabolic modeling and the means to assemble these components into large parallel processing hierarchies suiting a particular simulation and optimization need. The components come in a variety of different categories: model translation, model simulation, parameter sampling, sensitivity analysis, parameter estimation, and optimization. They can be configured at runtime into hierarchically parallel arrangements to perform nested combinations of simulation characterization tasks with excellent parallel scaling to thousands of processors. We describe the observations that led to the system, the components, and how one can arrange them. We show nearly 90% efficient scaling to over 13,000 processors, and we demonstrate three complex yet typical examples that have run on ≈1000 processors and accomplished billions of stiff ordinary differential equation simulations. This work opens the door for the systems biology metabolic modeling community to take effective advantage of large scale high performance computing resources for the first time.
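
    The scaling figure quoted (nearly 90% efficiency on over 13,000 processors) uses the standard definition of parallel efficiency, speedup divided by processor count:

```python
def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency: speedup (t_serial / t_parallel) per processor."""
    return (t_serial / t_parallel) / n_procs
```

    For example, a workload taking 1000 s serially and 10 s on 112 processors has efficiency 1000/10/112 ≈ 0.89, i.e. roughly 90% (the timings here are illustrative, not from the paper).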

  15. Performance variation due to stiffness in a tuna-inspired flexible foil model.

    Science.gov (United States)

    Rosic, Mariel-Luisa N; Thornycroft, Patrick J M; Feilich, Kara L; Lucas, Kelsey N; Lauder, George V

    2017-01-17

    Tuna are fast, economical swimmers in part due to their stiff, high aspect ratio caudal fins and streamlined bodies. Previous studies using passive caudal fin models have suggested that while high aspect ratio tail shapes such as a tuna's generally perform well, tail performance cannot be determined from shape alone. In this study, we analyzed the swimming performance of tuna-tail-shaped hydrofoils of a wide range of stiffnesses, heave amplitudes, and frequencies to determine how stiffness and kinematics affect multiple swimming performance parameters for a single foil shape. We then compared the foil models' kinematics with published data from a live swimming tuna to determine how well the hydrofoil models could mimic fish kinematics. Foil kinematics over a wide range of motion programs generally showed a minimum lateral displacement at the narrowest part of the foil, and, immediately anterior to that, a local area of large lateral body displacement. These two kinematic patterns may enhance thrust in foils of intermediate stiffness. Stiffness and kinematics exhibited subtle interacting effects on hydrodynamic efficiency, with no one stiffness maximizing both thrust and efficiency. Foils of intermediate stiffnesses typically had the greatest coefficients of thrust at the highest heave amplitudes and frequencies. The comparison of foil kinematics with tuna kinematics showed that tuna motion is better approximated by a zero angle of attack foil motion program than by programs that do not incorporate pitch. These results indicate that open questions in biomechanics may be well served by foil models, given appropriate choice of model characteristics and control programs. Accurate replication of biological movements will require refinement of motion control programs and physical models, including the creation of models of variable stiffness.

  16. Model for determining and optimizing delivery performance in industrial systems

    Directory of Open Access Journals (Sweden)

    Fechete Flavia

    2017-01-01

    Full Text Available Performance means achieving organizational objectives, regardless of their nature and variety, and even exceeding them. Improving performance is one of the major goals of any company. Achieving global performance means not only obtaining economic performance; other functions must also be taken into account, such as quality, delivery, costs and even employee satisfaction. This paper aims to improve the delivery performance of an industrial system, which had very poor results. The delivery performance took into account all categories of performance indicators, such as on-time delivery, backlog efficiency and transport efficiency. The research focused on optimizing the delivery performance of the industrial system using linear programming. Modeling the delivery function with linear programming yielded the precise quantities to be produced and delivered each month by the industrial system in order to minimize transport cost, satisfy customer orders and control stock. The optimization led to a substantial improvement in all four performance indicators concerning deliveries.
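
    As a hedged illustration of the delivery-optimization idea, the sketch below solves a toy single-period transport-cost minimization. With a single aggregate demand constraint, the greedy cheapest-route-first rule is optimal, so no LP solver is needed; the routes, costs and capacities are invented for illustration, not taken from the paper.

```python
def min_transport_cost(demand, cost, capacity):
    """Ship `demand` units over routes with per-unit `cost` and route
    `capacity`, filling the cheapest routes first (optimal for this
    single-constraint special case of the transportation LP)."""
    plan = {}
    remaining = demand
    for route in sorted(cost, key=cost.get):
        ship = min(remaining, capacity[route])
        plan[route] = ship
        remaining -= ship
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError("insufficient capacity to meet demand")
    total = sum(cost[r] * q for r, q in plan.items())
    return plan, total
```

    A full monthly model like the one in the paper would add stock-balance and order-fulfilment constraints, which require a general LP solver rather than this greedy rule.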

  17. A dynamic network model to explain the development of excellent human performance

    Directory of Open Access Journals (Sweden)

    Ruud J.R. Den Hartigh

    2016-04-01

    Full Text Available Across different domains, from sports to science, some individuals accomplish excellent levels of performance. For over 150 years, researchers have debated the roles of specific nature and nurture components to develop excellence. In this article, we argue that the key to excellence does not reside in specific underlying components, but rather in the ongoing interactions among the components. We propose that excellence emerges out of dynamic networks consisting of idiosyncratic mixtures of interacting components such as genetic endowment, motivation, practice, and coaching. Using computer simulations we demonstrate that the dynamic network model accurately predicts typical properties of excellence reported in the literature, such as the idiosyncratic developmental trajectories leading to excellence and the highly skewed distributions of productivity present in virtually any achievement domain. Based on this novel theoretical perspective on excellent human performance, this article concludes by suggesting policy implications and directions for future research.
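
    The "dynamic network" idea, excellence emerging from ongoing interactions among components rather than from any single component, can be sketched as coupled growth equations. The specific functional form and parameter values below are assumptions for illustration, not the authors' published model:

```python
import numpy as np

def simulate(growth, W, x0, steps=200, dt=0.05):
    """Euler simulation of coupled growers on [0, 1]:
    dx_i/dt = x_i * (1 - x_i) * (g_i + sum_j W_ij * x_j)."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * x * (1.0 - x) * (growth + W @ x)
        x = np.clip(x, 0.0, 1.0)
    return x

g = np.full(3, 0.2)        # modest intrinsic growth (e.g. practice alone)
W = np.full((3, 3), 0.3)   # mutually supportive components
np.fill_diagonal(W, 0.0)

alone = simulate(g, np.zeros((3, 3)), [0.1, 0.1, 0.1])
networked = simulate(g, W, [0.1, 0.1, 0.1])
```

    With positive coupling every component ends at a higher level than in the uncoupled run, the kind of mutually reinforcing dynamic that produces the skewed outcome distributions the abstract describes.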

  18. A Dynamic Network Model to Explain the Development of Excellent Human Performance.

    Science.gov (United States)

    Den Hartigh, Ruud J R; Van Dijk, Marijn W G; Steenbeek, Henderien W; Van Geert, Paul L C

    2016-01-01

    Across different domains, from sports to science, some individuals accomplish excellent levels of performance. For over 150 years, researchers have debated the roles of specific nature and nurture components to develop excellence. In this article, we argue that the key to excellence does not reside in specific underlying components, but rather in the ongoing interactions among the components. We propose that excellence emerges out of dynamic networks consisting of idiosyncratic mixtures of interacting components such as genetic endowment, motivation, practice, and coaching. Using computer simulations we demonstrate that the dynamic network model accurately predicts typical properties of excellence reported in the literature, such as the idiosyncratic developmental trajectories leading to excellence and the highly skewed distributions of productivity present in virtually any achievement domain. Based on this novel theoretical perspective on excellent human performance, this article concludes by suggesting policy implications and directions for future research.

  19. Reference Manual for the System Advisor Model's Wind Power Performance Model

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, J.; Jorgenson, J.; Gilman, P.; Ferguson, T.

    2014-08-01

    This manual describes the National Renewable Energy Laboratory's System Advisor Model (SAM) wind power performance model. The model calculates the hourly electrical output of a single wind turbine or of a wind farm. The wind power performance model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs. In SAM, the performance model can be coupled to one of the financial models to calculate economic metrics for residential, commercial, or utility-scale wind projects. This manual describes the algorithms used by the wind power performance model, which is available in the SAM user interface and as part of the SAM Simulation Core (SSC) library, and is intended to supplement the user documentation that comes with the software.
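
    A core step in a wind power performance model of this kind is mapping hourly wind speed to electrical output through a turbine power curve. A minimal sketch follows; the curve points, cut-in and cut-out speeds are hypothetical and not taken from SAM, which additionally applies corrections such as air-density and wake losses:

```python
import bisect

# Hypothetical power curve: (wind speed m/s, output kW) for a turbine
# with cut-in 3 m/s, rated power from 12 m/s, cut-out 25 m/s.
CURVE = [(3, 0.0), (6, 300.0), (9, 900.0), (12, 1500.0), (25, 1500.0)]

def turbine_output_kw(wind_speed):
    """Hourly output by linear interpolation on the power curve;
    zero below cut-in and above cut-out."""
    if wind_speed < CURVE[0][0] or wind_speed > CURVE[-1][0]:
        return 0.0
    speeds = [s for s, _ in CURVE]
    i = bisect.bisect_right(speeds, wind_speed) - 1
    if i == len(CURVE) - 1:
        return CURVE[-1][1]
    (s0, p0), (s1, p1) = CURVE[i], CURVE[i + 1]
    return p0 + (p1 - p0) * (wind_speed - s0) / (s1 - s0)
```

    Summing this function over 8,760 hourly wind-speed values gives the annual energy estimate that the financial models then consume.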

  20. Spatial variability and parametric uncertainty in performance assessment models

    International Nuclear Information System (INIS)

    Pensado, Osvaldo; Mancillas, James; Painter, Scott; Tomishima, Yasuo

    2011-01-01

    The problem of defining an appropriate treatment of distribution functions (which could represent spatial variability or parametric uncertainty) is examined based on a generic performance assessment model for a high-level waste repository. The generic model incorporated source term models available in GoldSim®, the TDRW code for contaminant transport in sparse fracture networks with a complex fracture-matrix interaction process, and a biosphere dose model known as BDOSE™. Using the GoldSim framework, several Monte Carlo sampling approaches and transport conceptualizations were evaluated to explore the effect of various treatments of spatial variability and parametric uncertainty on dose estimates. Results from a model employing a representative source and ensemble-averaged pathway properties were compared to results from a model allowing for stochastic variation of transport properties along streamline segments (i.e., explicit representation of spatial variability within a Monte Carlo realization). We concluded that the sampling approach and the definition of an ensemble representative do influence consequence estimates. In the examples analyzed in this paper, approaches considering limited variability of a transport resistance parameter along a streamline increased the frequency of fast pathways resulting in relatively high dose estimates, while those allowing for broad variability along streamlines increased the frequency of 'bottlenecks' reducing dose estimates. On this basis, simplified approaches with limited consideration of variability may suffice for intended uses of the performance assessment model, such as evaluation of site safety. (author)

  1. A model of CCTV surveillance operator performance | Donald ...

    African Journals Online (AJOL)

    cognitive processes involved in visual search and monitoring – key activities of operators. The aim of this paper was to integrate the factors into a holistic theoretical model of performance for CCTV operators, drawing on areas such as vigilance, ...

  2. Item Response Theory Models for Performance Decline during Testing

    Science.gov (United States)

    Jin, Kuan-Yu; Wang, Wen-Chung

    2014-01-01

    Sometimes, test-takers may not be able to attempt all items to the best of their ability (with full effort) due to personal factors (e.g., low motivation) or testing conditions (e.g., time limit), resulting in poor performances on certain items, especially those located toward the end of a test. Standard item response theory (IRT) models fail to…
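
    One hedged way to formalize such decline is to let effective ability decrease with item position in an otherwise standard two-parameter logistic (2PL) model; the linear decline term below is an illustrative assumption, not necessarily the authors' parameterization:

```python
import math

def p_correct(theta, a, b, position, decline=0.0):
    """2PL probability of a correct response, with effective ability
    shrinking linearly with item position (illustrative decline term)."""
    effective_theta = theta - decline * position
    return 1.0 / (1.0 + math.exp(-a * (effective_theta - b)))
```

    With `decline > 0`, items placed later in the test are answered correctly less often than an ability-equivalent early item, which is exactly the pattern standard IRT models cannot absorb.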

  3. Models for the financial-performance effects of Marketing

    NARCIS (Netherlands)

    Hanssens, D.M.; Dekimpe, Marnik; Wierenga, B.; van der Lans, R.

    We consider marketing-mix models that explicitly include financial performance criteria. These financial metrics are not only comparable across the marketing mix, they also relate well to investors’ evaluation of the firm. To that extent, we treat marketing as an investment in customer value

  4. Modeling performance measurement applications and implementation issues in DEA

    CERN Document Server

    Cook, Wade D

    2005-01-01

    Addresses advanced/new DEA methodology and techniques that are developed for modeling unique and new performance evaluation issues. Presents new DEA methodology and techniques via discussions on how to solve managerial problems. Provides an easy-to-use DEA software – DEAFrontier (www.deafrontier.com) – which is an excellent tool for both DEA researchers and practitioners.

  5. Performances of estimators of linear auto-correlated error model ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with autocorrelated disturbance terms are compared when the independent variable is exponential. The results reveal that for both small and large samples, the Ordinary Least Squares (OLS) compares favourably with the Generalized least Squares (GLS) estimators in ...
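
    The OLS-versus-GLS comparison can be reproduced in miniature. The sketch below simulates a linear model with AR(1) disturbances and an exponential-type regressor, then fits OLS and a GLS-style quasi-difference (Prais-Winsten/Cochrane-Orcutt) transform; treating the autocorrelation parameter as known is a simplifying assumption, and the setup is illustrative rather than the paper's design.

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho, beta = 200, 0.8, 2.0

x = np.exp(rng.normal(size=n))      # exponential-type regressor
e = np.zeros(n)
for t in range(1, n):               # AR(1) disturbance terms
    e[t] = rho * e[t - 1] + rng.normal()
y = beta * x + e

X = np.column_stack([np.ones(n), x])
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# GLS with known rho: quasi-difference the data, then apply OLS
ys = y[1:] - rho * y[:-1]
Xs = X[1:] - rho * X[:-1]
b_gls = np.linalg.lstsq(Xs, ys, rcond=None)[0]
```

    Both estimators recover the slope here (OLS remains unbiased under autocorrelation); the practical difference lies in efficiency and in the validity of the usual standard errors, which is what comparisons like the one above assess.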

  6. Towards a Social Networks Model for Online Learning & Performance

    Science.gov (United States)

    Chung, Kon Shing Kenneth; Paredes, Walter Christian

    2015-01-01

    In this study, we develop a theoretical model to investigate the association between social network properties, "content richness" (CR) in academic learning discourse, and performance. CR is the extent to which one contributes content that is meaningful, insightful and constructive to aid learning and by social network properties we…

  7. Quantitative modeling of human performance in complex, dynamic systems

    National Research Council Canada - National Science Library

    Baron, Sheldon; Kruser, Dana S; Huey, Beverly Messick

    1990-01-01

    ... Sheldon Baron, Dana S. Kruser, and Beverly Messick Huey, editors. Panel on Human Performance Modeling, Committee on Human Factors, Commission on Behavioral and Social Sciences and Education, National Research Council. National Academy Press, Washington, D.C., 1990.

  8. Performances Of Estimators Of Linear Models With Autocorrelated ...

    African Journals Online (AJOL)

    The performances of five estimators of linear models with Autocorrelated error terms are compared when the independent variable is autoregressive. The results reveal that the properties of the estimators when the sample size is finite is quite similar to the properties of the estimators when the sample size is infinite although ...

  9. Performances of estimators of linear model with auto-correlated ...

    African Journals Online (AJOL)

    Performances of estimators of linear model with auto-correlated error terms when the independent variable is normal. ... On the other hand, the same slope coefficients β, under Generalized Least Squares (GLS), decreased with increased autocorrelation when the sample size T is small. Journal of the Nigerian Association ...

  10. Performance Modeling for Heterogeneous Wireless Networks with Multiservice Overflow Traffic

    DEFF Research Database (Denmark)

    Huang, Qian; Ko, King-Tim; Iversen, Villy Bæk

    2009-01-01

    Multiservice loss analysis based on a multi-dimensional Markov chain becomes intractable in these networks due to the intensive computations required. This paper focuses on performance modeling for heterogeneous wireless networks based on a hierarchical overlay infrastructure. A method based on decomposition...

  11. Stutter-Step Models of Performance in School

    Science.gov (United States)

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Kentucky; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  12. Performance in model transformations: experiments with ATL and QVT

    NARCIS (Netherlands)

    van Amstel, Marcel; Bosems, S.; Ivanov, Ivan; Ferreira Pires, Luis; Cabot, Jordi; Visser, Eelco

    Model transformations are increasingly being incorporated in software development processes. However, as systems being developed with transformations grow in size and complexity, the performance of the transformations tends to degrade. In this paper we investigate the factors that have an impact on

  13. Improving the performances of autofocus based on adaptive retina-like sampling model

    Science.gov (United States)

    Hao, Qun; Xiao, Yuqing; Cao, Jie; Cheng, Yang; Sun, Ce

    2018-03-01

    An adaptive retina-like sampling model (ARSM) is proposed to balance autofocusing accuracy and efficiency. Based on the model, we carry out comparative experiments between the proposed method and the traditional method in terms of accuracy, full width at half maximum (FWHM) and time consumption. Results show that the performance of our method is better than that of the traditional method. Meanwhile, typical autofocus functions, including sum-modified-Laplacian (SML), Laplacian (LAP), Midfrequency-DCT (MDCT) and Absolute Tenengrad (ATEN), are compared through experiments. The smallest FWHM is obtained with LAP, which is therefore more suitable for evaluating accuracy than the other autofocus functions, while MDCT is the most suitable for evaluating real-time ability.
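
    Two of the autofocus functions compared above can be sketched directly. These are generic textbook forms (a discrete Laplacian energy for LAP and a squared-gradient sum standing in for Tenengrad), not necessarily the exact formulations used in the paper:

```python
import numpy as np

def focus_lap(img):
    """Sum of squared responses of a discrete 5-point Laplacian (LAP)."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float((lap ** 2).sum())

def focus_tenengrad(img):
    """Sum of squared central-difference gradients (simplified Tenengrad)."""
    gx = img[1:-1, 2:] - img[1:-1, :-2]
    gy = img[2:, 1:-1] - img[:-2, 1:-1]
    return float((gx ** 2 + gy ** 2).sum())
```

    An autofocus loop evaluates such a function over a sweep of lens positions and moves toward the maximum; sharper images score higher than defocused ones, and the width of the peak (the FWHM discussed above) indicates how precisely the best focus can be located.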

  14. Modelling of green roof hydrological performance for urban drainage applications

    DEFF Research Database (Denmark)

    Locatelli, Luca; Mark, Ole; Mikkelsen, Peter Steen

    2014-01-01

    Green roofs are being widely implemented for stormwater management, and their impact on the urban hydrological cycle can be evaluated by incorporating them into urban drainage models. This paper presents a model of green roof long-term and single-event hydrological performance. The model includes surface and subsurface storage components representing the overall retention capacity of the green roof, which is continuously re-established by evapotranspiration. The runoff from the model is described through a non-linear reservoir approach. The model was calibrated and validated using measurement data from 3 different extensive sedum roofs in Denmark. These data consist of high-resolution measurements of runoff, precipitation and atmospheric variables in the period 2010–2012. The hydrological response of green roofs was quantified based on statistical analysis of the results of a 22-year (1989...
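
    The non-linear reservoir idea can be sketched as a single storage that rain fills, evapotranspiration empties (re-establishing the retention capacity), and a power-law outflow drains. The lumped single-storage structure and all parameter values are illustrative assumptions; the paper uses calibrated surface and subsurface components.

```python
def simulate_green_roof(rain, et, capacity=20.0, k=0.1, n=1.5):
    """Water balance per time step (all depths in mm):
    storage S gains rain, loses evapotranspiration, and releases
    runoff q = k * (S - capacity)**n once S exceeds the retention
    capacity (a non-linear reservoir)."""
    S, runoff = 0.0, []
    for p, e in zip(rain, et):
        S = max(0.0, S + p - e)
        q = k * max(0.0, S - capacity) ** n
        q = min(q, S)          # cannot release more than is stored
        S -= q
        runoff.append(q)
    return runoff
```

    Small events are fully retained (zero runoff), while large storms exceed the capacity and generate attenuated, delayed runoff, the behaviour green roofs are implemented for.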

  15. Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance

    Energy Technology Data Exchange (ETDEWEB)

    Estep, Donald [Colorado State Univ., Fort Collins, CO (United States); El-Azab, Anter [Florida State Univ., Tallahassee, FL (United States); Pernice, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States); Peterson, John W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Polyakov, Peter [Univ. of Wyoming, Laramie, WY (United States); Tavener, Simon [Colorado State Univ., Fort Collins, CO (United States); Xiu, Dongbin [Purdue Univ., West Lafayette, IN (United States); Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-23

    In this project, we will address the challenges associated with constructing high fidelity multiscale models of nuclear fuel performance. We (*) propose a novel approach for coupling mesoscale and macroscale models, (*) devise efficient numerical methods for simulating the coupled system, and (*) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratories on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.

  16. Performance modelling for product development of advanced window systems

    DEFF Research Database (Denmark)

    Appelfeld, David

    The research presented in this doctoral thesis shows how the product development (PD) of Complex Fenestration Systems (CFSs) can be facilitated by computer-based analysis to improve the energy efficiency of fenestration systems as well as the indoor environment. The first chapter defines... and methods, which can address interrelated performance parameters of CFSs, are sought. It is possible to evaluate such systems by measurements; however, the high cost and complexity of the measurements are limiting factors. The studies in this thesis confirmed that the results from the performance measurements of CFSs can be interpreted by simulations, and hence simulations can be used for the performance analysis of new CFSs. An advanced simulation model must often be developed and needs to be validated by measurements before the model can be reused. The validation of simulations against the measurements proved...

  17. Modeling the seakeeping performance of luxury cruise ships

    Science.gov (United States)

    Cao, Yu; Yu, Bao-Jun; Wang, Jian-Fang

    2010-09-01

    The seakeeping performance of a luxury cruise ship was evaluated during the concept design phase. By comparing numerical predictions based on 3-D linear potential flow theory in the frequency domain with the results of model tests, it was shown that the 3-D method predicted the seakeeping performance of the luxury cruise ship well. Based on the model, the seakeeping features of the luxury cruise ship were analyzed, and the influence of changes to the primary design parameters (center of gravity, inertial radius, etc.) was then examined. Based on the results, suggestions were proposed for improving the choice of parameters for luxury cruise ships during the concept design phase, so as to improve seakeeping performance.

  18. Performance Models and Risk Management in Communications Systems

    CERN Document Server

    Harrison, Peter; Rüstem, Berç

    2011-01-01

    This volume covers recent developments in the design, operation, and management of telecommunication and computer network systems in performance engineering and addresses issues of uncertainty, robustness, and risk. Uncertainty regarding loading and system parameters leads to challenging optimization and robustness issues. Stochastic modeling combined with optimization theory ensures the optimum end-to-end performance of telecommunication or computer network systems. In view of the diverse design options possible, supporting models have many adjustable parameters and choosing the best set for a particular performance objective is delicate and time-consuming. An optimization based approach determines the optimal possible allocation for these parameters. Researchers and graduate students working at the interface of telecommunications and operations research will benefit from this book. Due to the practical approach, this book will also serve as a reference tool for scientists and engineers in telecommunication ...

  19. Thermal Model Predictions of Advanced Stirling Radioisotope Generator Performance

    Science.gov (United States)

    Wang, Xiao-Yen J.; Fabanich, William Anthony; Schmitz, Paul C.

    2014-01-01

    This paper presents recent thermal model results of the Advanced Stirling Radioisotope Generator (ASRG). The three-dimensional (3D) ASRG thermal power model was built using the Thermal Desktop™ thermal analyzer. The model was correlated with ASRG engineering unit test data and ASRG flight unit predictions from Lockheed Martin's (LM's) I-deas™ TMG thermal model. The auxiliary cooling system (ACS) of the ASRG is also included in the ASRG thermal model. The ACS is designed to remove waste heat from the ASRG so that it can be used to heat spacecraft components. The performance of the ACS is reported under nominal conditions and during a Venus flyby scenario. The results for the nominal case are validated with data from Lockheed Martin. Transient thermal analysis results of ASRG for a Venus flyby with a representative trajectory are also presented. In addition, model results of an ASRG mounted on a Cassini-like spacecraft with a sunshade are presented to show a way to mitigate the high temperatures of a Venus flyby. It was predicted that the sunshade can lower the temperature of the ASRG alternator by 20 °C for the representative Venus flyby trajectory. The 3D model also was modified to predict generator performance after a single Advanced Stirling Convertor failure. The geometry of the Microtherm HT insulation block on the outboard side was modified to match deformation and shrinkage observed during testing of a prototypic ASRG test fixture by LM. Test conditions and test data were used to correlate the model by adjusting the thermal conductivity of the deformed insulation to match the post-heat-dump steady state temperatures. Results for these conditions showed that the performance of the still-functioning inboard ACS was unaffected.

  20. Ensemble perception of emotions in autistic and typical children and adolescents.

    Science.gov (United States)

    Karaminis, Themelis; Neil, Louise; Manning, Catherine; Turi, Marco; Fiorentini, Chiara; Burr, David; Pellicano, Elizabeth

    2017-04-01

    Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an 'ensemble' emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Ensemble perception of emotions in autistic and typical children and adolescents

    Directory of Open Access Journals (Sweden)

    Themelis Karaminis

    2017-04-01

    Full Text Available Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an ‘ensemble’ emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average.

  2. A PERFORMANCE MANAGEMENT MODEL FOR PHYSICAL ASSET MANAGEMENT

    Directory of Open Access Journals (Sweden)

    J.L. Jooste

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: There has been an emphasis shift from maintenance management towards asset management, where the focus is on reliable and operational equipment and on effective assets at optimum life-cycle costs. A challenge in the manufacturing industry is to develop an asset performance management model that is integrated with business processes and strategies. The authors developed the APM2 model to satisfy that requirement. The model has a generic reference structure and is supported by operational protocols to assist in operations management. It facilitates performance measurement, business integration and continuous improvement, whilst exposing industry to the latest developments in asset performance management.

    AFRIKAANSE OPSOMMING (translated): There is a shift in emphasis from maintenance management to asset management, with the focus on reliable and operational equipment, as well as effective assets at optimum life-cycle cost. A challenge in the manufacturing industry is the development of a performance model for assets that is integrated with business processes and strategies. The authors developed the APM2 model to meet this need. The model has a generic reference structure, supported by operational instructions that promote operations management. It facilitates performance management, business integration and continuous improvement, while also exposing the industry to the latest developments in asset performance management.

  3. Performance model for telehealth use in home health agencies.

    Science.gov (United States)

    Frey, Jocelyn; Harmonosky, Catherine M; Dansky, Kathryn H

    2005-10-01

    Increasingly, home health agencies (HHAs) are considering the value of implementing telehealth technology. However, questions arise concerning how to manage and use this technology to benefit patients, nurses, and the agency. Performance models will be beneficial to managers and decision makers in the home health field by providing quantitative information for present and future planning of staff and technology usage in the HHA. This paper presents a model that predicts the average daily census of the HHA as a function of statistically identified parameters. Average daily census was chosen as the outcome variable because it is a proxy measure of an agency's capacity. The model suggests that including a telehealth system in the HHA increases average daily census by 40%-90% depending on the number of nurse full-time equivalents (FTEs) and amount of travel hours per month. The use of a home telecare system enhances HHA performance.
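
    A model of this shape can be sketched as a simple linear predictor. The coefficients and functional form below are illustrative assumptions, not the fitted values from the study; only the reported 40%-90% telehealth effect range is taken from the abstract.

```python
# Hypothetical linear model for a home health agency's average daily census
# (ADC). All coefficients are assumed for illustration; the study itself only
# reports that telehealth raises ADC by 40%-90% depending on nurse FTEs and
# travel hours.

def predict_adc(nurse_ftes, travel_hours_per_month, telehealth):
    base = 5.0 + 12.0 * nurse_ftes - 0.02 * travel_hours_per_month
    if telehealth:
        # Assumed multiplier that grows with staffing, mimicking the
        # reported 40%-90% range.
        base *= 1.4 + min(0.5, 0.05 * nurse_ftes)
    return max(base, 0.0)

without = predict_adc(nurse_ftes=6, travel_hours_per_month=200, telehealth=False)
with_th = predict_adc(nurse_ftes=6, travel_hours_per_month=200, telehealth=True)
increase = with_th / without - 1.0  # fractional increase in ADC
```

    With these assumed coefficients, an agency with 6 nurse FTEs sees an increase inside the reported 40%-90% band.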

  4. PHARAO laser source flight model: Design and performances

    Energy Technology Data Exchange (ETDEWEB)

    Lévèque, T., E-mail: thomas.leveque@cnes.fr; Faure, B.; Esnault, F. X.; Delaroche, C.; Massonnet, D.; Grosjean, O.; Buffe, F.; Torresi, P. [Centre National d’Etudes Spatiales, 18 avenue Edouard Belin, 31400 Toulouse (France); Bomer, T.; Pichon, A.; Béraud, P.; Lelay, J. P.; Thomin, S. [Sodern, 20 Avenue Descartes, 94451 Limeil-Brévannes (France); Laurent, Ph. [LNE-SYRTE, CNRS, UPMC, Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris (France)

    2015-03-15

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature, and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  5. Ergonomic evaluation model of operational room based on team performance

    Directory of Open Access Journals (Sweden)

    YANG Zhiyi

    2017-05-01

    Full Text Available A theoretical calculation model based on the ergonomic evaluation of team performance was proposed in order to carry out the ergonomic evaluation of the layout design schemes of the action station in a multitasking operational room. This model was constructed in order to calculate and compare the theoretical value of team performance in multiple layout schemes by considering such substantial influential factors as frequency of communication, distance, angle, importance, human cognitive characteristics and so on. An experiment was finally conducted to verify the proposed model under the criteria of completion time and accuracy rating. As illustrated by the experiment results, the proposed approach is conducive to the prediction and ergonomic evaluation of the layout design schemes of the action station during early design stages, and provides a new theoretical method for the ergonomic evaluation, selection and optimization design of layout design schemes.
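
    A weighted-sum score of the kind the abstract describes can be sketched as follows. Each pair of action stations contributes according to communication frequency, separation distance and task importance (viewing angle and cognitive factors are omitted for brevity); the weighting scheme, station names and coordinates are illustrative assumptions, not the paper's calibrated model.

```python
import math

# Hypothetical layout score: higher is better. Frequent, important
# communication links between nearby stations raise the score.

def layout_score(stations, links):
    """stations: {name: (x, y)}; links: iterable of (a, b, frequency, importance)."""
    score = 0.0
    for a, b, freq, importance in links:
        ax, ay = stations[a]
        bx, by = stations[b]
        dist = math.hypot(ax - bx, ay - by)
        # Shorter distances are better; +1 avoids division by zero.
        score += freq * importance / (1.0 + dist)
    return score

compact = {"cmd": (0, 0), "radar": (1, 0), "comms": (0, 1)}
spread  = {"cmd": (0, 0), "radar": (5, 0), "comms": (0, 5)}
links = [("cmd", "radar", 10, 0.9), ("cmd", "comms", 6, 0.7)]
```

    Comparing `layout_score(compact, links)` against `layout_score(spread, links)` ranks candidate layouts, which is the kind of scheme comparison the model supports during early design stages.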

  6. Modeling time-lagged reciprocal psychological empowerment-performance relationships.

    Science.gov (United States)

    Maynard, M Travis; Luciano, Margaret M; D'Innocenzo, Lauren; Mathieu, John E; Dean, Matthew D

    2014-11-01

    Employee psychological empowerment is widely accepted as a means for organizations to compete in increasingly dynamic environments. Previous empirical research and meta-analyses have demonstrated that employee psychological empowerment is positively related to several attitudinal and behavioral outcomes including job performance. While this research positions psychological empowerment as an antecedent influencing such outcomes, a close examination of the literature reveals that this relationship is primarily based on cross-sectional research. Notably, evidence supporting the presumed benefits of empowerment has failed to account for potential reciprocal relationships and endogeneity effects. Accordingly, using a multiwave, time-lagged design, we model reciprocal relationships between psychological empowerment and job performance using a sample of 441 nurses from 5 hospitals. Incorporating temporal effects in a staggered research design and using structural equation modeling techniques, our findings provide support for the conventional positive correlation between empowerment and subsequent performance. Moreover, accounting for the temporal stability of variables over time, we found support for empowerment levels as positive influences on subsequent changes in performance. Finally, we also found support for the reciprocal relationship, as performance levels were shown to relate positively to changes in empowerment over time. Theoretical and practical implications of the reciprocal psychological empowerment-performance relationships are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  7. Measuring broadband in Europe: : development of a market model and performance index using structural equations modelling

    NARCIS (Netherlands)

    Lemstra, W.; Voogt, B.; Gorp, van N.

    2015-01-01

    This contribution reports on the development of a performance index and underlying market model with application to broadband developments in the European Union. The Structure–Conduct–Performance paradigm provides the theoretical grounding. Structural equations modelling was applied to determine the

  8. Improved ultra-performance liquid chromatography with electrospray ionization quadrupole-time-of-flight high-definition mass spectrometry method for the rapid analysis of the chemical constituents of a typical medical formula: Liuwei Dihuang Wan.

    Science.gov (United States)

    Wang, Ping; Lv, Hai tao; Zhang, Ai hua; Sun, Hui; Yan, Guang li; Han, Ying; Wu, Xiu hong; Wang, Xi jun

    2013-11-01

    Liuwei Dihuang Wan (LDW), a classic Chinese medicinal formula, has been used to improve or restore declined functions related to aging and geriatric diseases, such as impaired mobility, vision, hearing, cognition, and memory. It has attracted increasing attention as one of the most popular and valuable herbal medicines. However, the systematic analysis of the chemical constituents of LDW is difficult and thus has not been well established. In this paper, a rapid, sensitive, and reliable ultra-performance LC with ESI quadrupole TOF high-definition MS method with automated MetaboLynx analysis in positive and negative ion mode was established to characterize the chemical constituents of LDW. The analysis was performed on a Waters UPLC™ HSS T3 using a gradient elution system. MS/MS fragmentation behavior was proposed for aiding the structural identification of the components. Under the optimized conditions, a total of 50 peaks were tentatively characterized by comparing the retention time and MS data. It is concluded that a rapid and robust platform based on ultra-performance LC with ESI quadrupole TOF high-definition MS has been successfully developed for globally identifying multiple constituents of traditional Chinese medicine prescriptions. This is the first report on the systematic analysis of the chemical constituents of LDW. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda

    2016-03-04

    Exascale systems are predicted to have approximately 1 billion cores, assuming gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics but has recently been extended to a wider range of problems. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns make it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on internode communication. We focus on the communication part only; the efficiency of the computational kernels is beyond the scope of the present study. We develop a performance model that considers the communication patterns of the FMM and observe a good match between our model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization of internode communication in FMM that validates the model against actual measurements of communication time. The ultimate communication model is predictive in an absolute sense; however, on complex systems, this objective is often out of reach or of a difficulty out of proportion to its benefit when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.
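
    The core of such a model is a latency-bandwidth (alpha-beta) cost estimate. The sketch below is a strong simplification under assumed constants and an assumed halo/tree-depth structure; the paper's actual model additionally accounts for network topology and multicore penalties.

```python
import math

# Simplified alpha-beta communication cost for an FMM octree partitioned
# over p ranks: T = alpha * (#messages) + beta * (#bytes). All constants
# and the halo/tree-depth assumptions are illustrative, not the paper's.

ALPHA = 2e-6  # per-message latency in seconds (assumed)
BETA = 1e-9   # per-byte transfer time in seconds (assumed)

def fmm_comm_time(p, n, bytes_per_cell=320):
    """Estimate internode communication time for n bodies on p ranks."""
    # Local tree depth shrinks as the per-rank problem shrinks (assumption).
    levels = max(1, math.ceil(math.log(n / p, 8))) if n > p else 1
    neighbours = min(p - 1, 26)  # 3-D near-field halo
    msgs = neighbours * levels
    volume = msgs * bytes_per_cell * (n // p)
    return ALPHA * msgs + BETA * volume
```

    For a fixed problem size, the estimate falls as more ranks are added because each rank holds and exchanges less data, which is the qualitative behaviour such a model is used to extrapolate.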

  10. A conceptual model to improve performance in virtual teams

    Directory of Open Access Journals (Sweden)

    Shopee Dube

    2016-09-01

    Full Text Available Background: The vast improvement in communication technologies and sophisticated project management tools, methods and techniques has allowed geographically and culturally diverse groups to operate and function in a virtual environment. To succeed in this virtual environment where time and space are becoming increasingly irrelevant, organisations must define new ways of implementing initiatives. This virtual environment phenomenon has brought about the formation of virtual project teams that allow organisations to harness the skills and know-how of the best resources, irrespective of their location. Objectives: The aim of this article was to investigate performance criteria and develop a conceptual model which can be applied to enhance the success of virtual project teams. There are no clear guidelines for performance criteria in managing virtual project teams. Method: A qualitative research methodology was used in this article. The purpose of content analysis was to explore the literature to understand the concept of performance in virtual project teams and to summarise the findings of the literature reviewed. Results: The research identified a set of performance criteria for the virtual project teams as follows: leadership, trust, communication, team cooperation, reliability, motivation, comfort and social interaction. These were used to conceptualise the model. Conclusion: The conceptual model can be used in a holistic way to determine the overall performance of the virtual project team, but each factor can be analysed individually to determine the impact on the overall performance. The knowledge of performance criteria for virtual project teams could aid project managers in enhancing the success of these teams and taking a different approach to better manage and coordinate them.

  11. On typical properties of Hilbert space operators

    NARCIS (Netherlands)

    Eisner, T.; Mátrai, T.

    2013-01-01

    We study the typical behavior of bounded linear operators on infinite-dimensional complex separable Hilbert spaces in the norm, strong-star, strong, weak polynomial and weak topologies. In particular, we investigate typical spectral properties, the problem of unitary equivalence of typical

  12. Effect of Using Extreme Years in Hydrologic Model Calibration Performance

    Science.gov (United States)

    Goktas, R. K.; Tezel, U.; Kargi, P. G.; Ayvaz, T.; Tezyapar, I.; Mesta, B.; Kentel, E.

    2017-12-01

    Hydrological models are useful in predicting and developing management strategies for controlling the system behaviour. Specifically they can be used for evaluating streamflow at ungaged catchments, effect of climate change, best management practices on water resources, or identification of pollution sources in a watershed. This study is a part of a TUBITAK project named "Development of a geographical information system based decision-making tool for water quality management of Ergene Watershed using pollutant fingerprints". Within the scope of this project, first water resources in Ergene Watershed is studied. Streamgages found in the basin are identified and daily streamflow measurements are obtained from State Hydraulic Works of Turkey. Streamflow data is analysed using box-whisker plots, hydrographs and flow-duration curves focusing on identification of extreme periods, dry or wet. Then a hydrological model is developed for Ergene Watershed using HEC-HMS in the Watershed Modeling System (WMS) environment. The model is calibrated for various time periods including dry and wet ones and the performance of calibration is evaluated using Nash-Sutcliffe Efficiency (NSE), correlation coefficient, percent bias (PBIAS) and root mean square error. It is observed that calibration period affects the model performance, and the main purpose of the development of the hydrological model should guide calibration period selection. Acknowledgement: This study is funded by The Scientific and Technological Research Council of Turkey (TUBITAK) under Project Number 115Y064.
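
    The calibration metrics named in the abstract are standard and can be computed directly. The sketch below implements Nash-Sutcliffe efficiency (NSE), percent bias (PBIAS) and root mean square error (RMSE); the example flow values are made up for illustration.

```python
import math

# Standard hydrologic calibration metrics from the abstract.

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect match, <0 is worse than the mean."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def pbias(obs, sim):
    """Percent bias: 100 * sum(obs - sim) / sum(obs); 0 means no bias."""
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def rmse(obs, sim):
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

observed  = [12.0, 30.0, 8.0, 22.0, 15.0]   # example daily flows (m^3/s)
simulated = [10.0, 33.0, 9.0, 20.0, 16.0]
```

    Evaluating these metrics separately over dry and wet calibration periods is exactly how the sensitivity of model performance to the chosen period can be quantified.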

  13. The Network Performance Assessment Model - Regulation with a Reference Network

    International Nuclear Information System (INIS)

    Larsson, Mats B.O.

    2003-11-01

    A new model - the Network Performance Assessment Model - has been developed gradually since 1998, in order to evaluate and benchmark local electricity grids. The model is intended to be a regulation tool for the Swedish local electricity networks, used by the Swedish Energy Agency. In spring 2004 the Network Performance Assessment Model will come into operation, based on the companies' results for 2003. The mission of the Network Performance Assessment Model is to evaluate the networks from a customers' point of view and establish a fair price level. In order to do that, the performance of the operator is evaluated. The performances are assessed against a price level that the consumer is considered to accept, can agree to as fair and is prepared to pay. This price level is based on an average cost: the cost of an efficient grid that would be built today, with already known technology. The performances are expressed in Customer Values. Customer Values are those values that can be created by an operator and cannot be created better by anyone else. The starting point is to look upon the companies from a customers' point of view. The factors that can't be influenced by the companies are evaluated by fixed rules, valid for all companies. The rules reflect the differences. The cost for a connection is evaluated from the actual facts, i.e. the distances between the subscribers and the capacity demanded by the subscriber. This is done by the creation of a reference network, with the capacity to fulfil the demand from the subscribers. This is an efficient grid with no spare capacity and no excess capacity. The companies' existing grids are without importance, both as regards dimensioning and technology. Those factors which the company can influence, for example connection reliability, are evaluated from a customer perspective by measuring the actual reliability, measured as the number and length of the interruptions. When implemented in the regulation, the Network

  14. Synthesised model of market orientation-business performance relationship

    Directory of Open Access Journals (Sweden)

    G. Nwokah

    2006-12-01

    Full Text Available Purpose: The purpose of this paper is to assess the impact of market orientation on the performance of the organisation. While much empirical work has centered on market orientation, the generalisability of its impact on performance of the Food and Beverages organisations in the Nigeria context has been under-researched. Design/Methodology/Approach: The study adopted a triangulation methodology (quantitative and qualitative approach). Data was collected from key informants using a research instrument. Returned instruments were analyzed using nonparametric correlation through the use of the Statistical Package for Social Sciences (SPSS) version 10. Findings: The study validated the earlier instruments but did not find any strong association between market orientation and business performance in the Nigerian context using the food and beverages organisations for the study. The reasons underlying the weak relationship between market orientation and business performance of the Food and Beverages organisations are government policies, new product development, diversification, innovation and devaluation of the Nigerian currency. One important finding of this study is that market orientation leads to business performance through some moderating variables. Implications: The study recommends that the Nigerian Government should ensure a stable economy and make economic policies that will enhance existing business development in the country. Also, organisations should have performance measurement systems to detect the impact of investment on market orientation with the aim of knowing how the organisation works. Originality/Value: This study significantly refines the body of knowledge concerning the impact of market orientation on the performance of the organisation, and thereby offers a model of market orientation and business performance in the Nigerian context for marketing scholars and practitioners. This model will, no doubt, contribute to the body of

  15. Radionuclide release rates from spent fuel for performance assessment modeling

    International Nuclear Information System (INIS)

    Curtis, D.B.

    1994-01-01

    In a scenario of aqueous transport from a high-level radioactive waste repository, the concentration of radionuclides in water in contact with the waste constitutes the source term for transport models, and as such represents a fundamental component of all performance assessment models. Many laboratory experiments have been done to characterize release rates and understand processes influencing radionuclide release rates from irradiated nuclear fuel. Natural analogues of these waste forms have been studied to obtain information regarding the long-term stability of potential waste forms in complex natural systems. This information from diverse sources must be brought together to develop and defend methods used to define source terms for performance assessment models. In this manuscript examples of measures of radionuclide release rates from spent nuclear fuel or analogues of nuclear fuel are presented. Each example represents a very different approach to obtaining a numerical measure and each has its limitations. There is no way to obtain an unambiguous measure of this or any parameter used in performance assessment codes for evaluating the effects of processes operative over many millennia. The examples are intended to suggest by example that in the absence of the ability to evaluate accuracy and precision, consistency of a broadly based set of data can be used as circumstantial evidence to defend the choice of parameters used in performance assessments

  16. Integrated modeling tool for performance engineering of complex computer systems

    Science.gov (United States)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  17. Modeling of Operating Temperature Performance of Triple Junction Solar Cells Using Silvaco's ATLAS

    National Research Council Canada - National Science Library

    Sanders, Michael H

    2007-01-01

    .... Building upon prior thesis work at the Naval Postgraduate School, this thesis utilizes Silvaco's ATLAS software as a tool to simulate the performance of a typical InGaP/GaAs/Ge multi-junction solar...

  18. Model tests on dynamic performance of RC shear walls

    International Nuclear Information System (INIS)

    Nagashima, Toshio; Shibata, Akenori; Inoue, Norio; Muroi, Kazuo.

    1991-01-01

    For the inelastic dynamic response analysis of a reactor building subjected to earthquakes, it is essentially important to properly evaluate its restoring force characteristics under dynamic loading condition and its damping performance. Reinforced concrete shear walls are the main structural members of a reactor building, and dominate its seismic behavior. In order to obtain the basic information on the dynamic restoring force characteristics and damping performance of shear walls, the dynamic test using a large shaking table, static displacement control test and the pseudo-dynamic test on the models of a shear wall were conducted. In the dynamic test, four specimens were tested on a large shaking table. In the static test, four specimens were tested, and in the pseudo-dynamic test, three specimens were tested. These tests are outlined. The results of these tests were compared, placing emphasis on the restoring force characteristics and damping performance of the RC wall models. The strength was higher in the dynamic test models than in the static test models mainly due to the effect of loading rate. (K.I.)

  19. A Practical Model to Perform Comprehensive Cybersecurity Audits

    Directory of Open Access Journals (Sweden)

    Regner Sabillon

    2018-03-01

    Full Text Available These days organizations are continually facing being targets of cyberattacks and cyberthreats; the sophistication and complexity of modern cyberattacks and the modus operandi of cybercriminals, including Techniques, Tactics and Procedures (TTP), keep growing at unprecedented rates. Cybercriminals are always adopting new strategies to plan and launch cyberattacks based on existing cybersecurity vulnerabilities and exploiting end users by using social engineering techniques. Cybersecurity audits are extremely important to verify that information security controls are in place and to detect weaknesses of inexistent cybersecurity or obsolete controls. This article presents an innovative and comprehensive cybersecurity audit model. The CyberSecurity Audit Model (CSAM) can be implemented to perform internal or external cybersecurity audits. This model can be used to perform single cybersecurity audits or can be part of any corporate audit program to improve cybersecurity controls. Any information security or cybersecurity audit team has the option either to perform a full audit for all cybersecurity domains or to select specific domains to audit certain areas that need control verification and hardening. The CSAM has 18 domains; Domain 1 is specific for Nation States and Domains 2-18 can be implemented at any organization. The organization can be any small, medium or large enterprise; the model is also applicable to any Non-Profit Organization (NPO).

  20. Instruction-level performance modeling and characterization of multimedia applications

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Y. [Los Alamos National Lab., NM (United States). Scientific Computing Group; Cameron, K.W. [Louisiana State Univ., Baton Rouge, LA (United States). Dept. of Computer Science

    1999-06-01

    One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction-level. This paper presents a novel technique of characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from some multimedia applications, such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvement for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and architectural bottleneck for each application. This technique also provides predictive insight of future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effect on processor utilization without memory influence. They derive formulas for calculating CPI_0, the CPI without memory effect, and they quantify utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results provide promise in code characterization, and empirical/analytical modeling.
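
    The separation of CPI_0 from memory effects can be sketched with a textbook stall decomposition. The counter names, counts and miss penalty below are illustrative assumptions; the paper's actual formulas are derived from its own architectural parameters.

```python
# Decomposing cycles per instruction (CPI) from hardware-counter counts, in
# the spirit of the abstract's CPI_0 (CPI without memory effect). The counts
# and the 100-cycle miss penalty are made-up illustrative values.

def cpi(total_cycles, instructions):
    return total_cycles / instructions

def cpi0(total_cycles, instructions, mem_stall_cycles):
    # Subtract memory-induced stalls to isolate the core's architectural CPI.
    return (total_cycles - mem_stall_cycles) / instructions

counts = {
    "cycles": 5_000_000,
    "instructions": 4_000_000,
    "l2_misses": 10_000,
}
MISS_PENALTY = 100  # cycles per L2 miss, assumed

mem_stalls = counts["l2_misses"] * MISS_PENALTY
observed_cpi = cpi(counts["cycles"], counts["instructions"])
core_cpi = cpi0(counts["cycles"], counts["instructions"], mem_stalls)
```

    A gap between the observed CPI and CPI_0 attributes the difference to the memory hierarchy, which is how such counter-based models point at the architectural bottleneck.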

  1. Integrated healthcare networks' performance: a growth curve modeling approach.

    Science.gov (United States)

    Wan, Thomas T H; Wang, Bill B L

    2003-05-01

    This study examines the effects of integration on the performance ratings of the top 100 integrated healthcare networks (IHNs) in the United States. A strategic-contingency theory is used to identify the relationship of IHNs' performance to their structural and operational characteristics and integration strategies. To create a database for the panel study, the top 100 IHNs selected by the SMG Marketing Group in 1998 were followed up in 1999 and 2000. The data were merged with the Dorenfest data on information system integration. A growth curve model was developed and validated by the Mplus statistical program. Factors influencing the top 100 IHNs' performance in 1998 and their subsequent rankings in the consecutive years were analyzed. IHNs' initial performance scores were positively influenced by network size, number of affiliated physicians and profit margin, and were negatively associated with average length of stay and technical efficiency. The continuing high performance, judged by maintaining higher performance scores, tended to be enhanced by the use of more managerial or executive decision-support systems. Future studies should include time-varying operational indicators to serve as predictors of network performance.

  2. The predictive performance and stability of six species distribution models.

    Directory of Open Access Journals (Sweden)

    Ren-Yan Duan

    Full Text Available Predicting species' potential geographical range by species distribution models (SDMs) is central to understanding their ecological requirements. However, the effects of using different modeling techniques need further investigation. In order to improve the prediction effect, we need to assess the predictive performance and stability of different SDMs. We collected the distribution data of five common tree species (Pinus massoniana, Betula platyphylla, Quercus wutaishanica, Quercus mongolica and Quercus variabilis) and simulated their potential distribution area using 13 environmental variables and six widely used SDMs: BIOCLIM, DOMAIN, MAHAL, RF, MAXENT, and SVM. Each model run was repeated 100 times (trials). We compared the predictive performance by testing the consistency between observations and simulated distributions and assessed the stability by the standard deviation, coefficient of variation, and the 99% confidence interval of Kappa and AUC values. The mean values of AUC and Kappa from MAHAL, RF, MAXENT, and SVM trials were similar and significantly higher than those from BIOCLIM and DOMAIN trials (p<0.05), while the associated standard deviations and coefficients of variation were larger for BIOCLIM and DOMAIN trials (p<0.05), and the 99% confidence intervals for AUC and Kappa values were narrower for MAHAL, RF, MAXENT, and SVM. Compared to BIOCLIM and DOMAIN, other SDMs (MAHAL, RF, MAXENT, and SVM) had higher prediction accuracy, smaller confidence intervals, and were more stable and less affected by the random variable (randomly selected pseudo-absence points). According to the prediction performance and stability of SDMs, we can divide these six SDMs into two categories: a high performance and stability group including MAHAL, RF, MAXENT, and SVM, and a low performance and stability group consisting of BIOCLIM and DOMAIN. We highlight that choosing appropriate SDMs to address a specific problem is an important part of the modeling process.
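
    The stability summaries used in the abstract are straightforward to compute over repeated trials. The sketch below derives the standard deviation, coefficient of variation and a 99% confidence interval for the mean (normal approximation, z = 2.576); the AUC values are made-up stand-ins for one model's 100 trials.

```python
import math

# Stability summaries for repeated SDM trials: standard deviation,
# coefficient of variation, and a 99% CI for the mean (z = 2.576,
# normal approximation).

def stability(values):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    sd = math.sqrt(var)
    cv = sd / mean
    half = 2.576 * sd / math.sqrt(n)
    return mean, sd, cv, (mean - half, mean + half)

# Hypothetical AUC values from repeated trials of one SDM
aucs = [0.91, 0.89, 0.93, 0.90, 0.92, 0.88, 0.91, 0.90]
mean, sd, cv, ci = stability(aucs)
```

    Comparing these summaries across models is what separates the narrow-interval, stable group (MAHAL, RF, MAXENT, SVM) from the wider-interval group (BIOCLIM, DOMAIN).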

  3. [Model oriented assessment of literacy performance in children with cochlear implants].

    Science.gov (United States)

    Fiori, A; Reichmuth, K; Matulat, P; Schmidt, C M; Dinnesen, A G

    2006-07-01

    Although most hearing-impaired children lag behind normally hearing children in literacy acquisition, this aspect has hardly been addressed in the evaluation of language acquisition after cochlear implantation. The present study investigated the written language abilities of 8 school-age children with cochlear implants. Neurolinguistic dual-route models of written language processing indicate that literacy acquisition leads to the establishment of two distinct reading and writing strategies: a lexical one for the quick processing of known words and a sublexical one for decoding unfamiliar words or nonwords letter by letter. The 8 children investigated were a very heterogeneous group concerning age of onset of hearing impairment, educational placement, and competence in sign language; however, this range is typical of the group of CI children. The aim was to investigate whether children with cochlear implants are able to establish both strategies or whether they need to find a differential, individual access to written language. Performance on the Salzburger Lese-Rechtschreib-Test was evaluated, and the individual performance of each subject was analysed. Performance varied substantially, ranging from only rudimentary spoken and written language abilities in two children to age-equivalent performance in three of them. Severe qualitative differences in written language processing were shown in the remaining three subjects. Suggestions for remediation were made and a re-test was carried out after 12 months. The individual profiles of performance are described in detail. The present study stresses the importance of a thorough investigation of written language performance in the evaluation of language acquisition after cochlear implantation. The results draw a very heterogeneous picture of performance. Model-oriented testing and analysis of performance prove to be sensible in at least a subpopulation of children with cochlear implants. Based on a better understanding of

  4. Hybrid Building Performance Simulation Models for Industrial Energy Efficiency Applications

    Directory of Open Access Journals (Sweden)

    Peter Smolek

    2018-06-01

    Full Text Available In the challenge of achieving environmental sustainability, industrial production plants, as large contributors to the overall energy demand of a country, are prime candidates for applying energy efficiency measures. A modelling approach using cubes is used to decompose a production facility into manageable modules. All aspects of the facility are considered, classified into the building, energy system, production and logistics. This approach leads to specific challenges for building performance simulations since all parts of the facility are highly interconnected. To meet this challenge, models for the building, thermal zones, energy converters and energy grids are presented and the interfaces to the production and logistics equipment are illustrated. The advantages and limitations of the chosen approach are discussed. In an example implementation, the feasibility of the approach and models is shown. Different scenarios are simulated to highlight the models and the results are compared.

  5. Fracture modelling of a high performance armour steel

    Science.gov (United States)

    Skoglund, P.; Nilsson, M.; Tjernberg, A.

    2006-08-01

    The fracture characteristics of the high performance armour steel Armox 500T are investigated. Tensile mechanical experiments using samples with different notch geometries are used to investigate the effect of multi-axial stress states on the strain to fracture. The experiments are numerically simulated, and from the simulations the stress at the point of fracture initiation is determined as a function of strain; these data are then used to extract parameters for fracture models. A fracture model based on quasi-static experiments is suggested, and the model is tested against independent experiments performed under both static and dynamic loading. The results show that the fracture model gives reasonably good agreement between simulations and experiments under both static and dynamic loading conditions. This indicates that multi-axial loading is more important to the strain to fracture than the deformation rate in the investigated loading range. However, on-going work will further characterise the fracture behaviour of Armox 500T.

  6. Modeling the Performance of Fast Multipole Method on HPC platforms

    KAUST Repository

    Ibeid, Huda

    2012-04-06

    The current trend in high performance computing is pushing towards exascale computing. To achieve this exascale performance, future systems will have between 100 million and 1 billion cores, assuming gigahertz cores. Currently, there are many efforts studying the hardware and software bottlenecks of building an exascale system. It is important to understand and overcome these bottlenecks in order to attain 10 PFLOPS performance. On the applications side, there is an urgent need to model application performance and to understand what changes need to be made to ensure continued scalability at this scale. Fast multipole methods (FMM) were originally developed for accelerating N-body problems in particle-based methods. Nowadays, FMM is more than an N-body solver; recent trends in HPC have been to use FMMs in unconventional application areas. FMM is likely to be a main player at exascale due to its hierarchical nature and the techniques used to access the data via a tree structure, which allow many operations to happen simultaneously at each level of the hierarchy. In this thesis, we discuss the challenges for FMM on current parallel computers and future exascale architectures. Furthermore, we develop a novel performance model for FMM. The ultimate aim of this thesis is to ensure the scalability of FMM on future exascale machines.
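An analytic FMM performance model of the general kind described can be sketched as per-process near-field (P2P) and far-field (M2L) work over an octree, plus a log-depth communication term. Everything here — the function, the coefficients, and the cost terms — is a hypothetical toy for illustration, not the thesis model.

```python
import math

def fmm_time_model(N, p, ncrit=64, t_p2p=1e-9, t_m2l=5e-9, t_comm=1e-6):
    """Toy analytic cost model for one FMM evaluation of N particles on p
    processes (illustrative coefficients). Work terms scale as O(N) thanks
    to the hierarchical tree; communication grows with tree depth."""
    levels = max(1, math.ceil(math.log(N / ncrit, 8)))  # octree depth
    cells = (8 ** (levels + 1) - 1) // 7                # total tree cells
    near = 27 * ncrit * N * t_p2p                       # neighbour-cell P2P pairs
    far = 189 * cells * t_m2l                           # M2L interaction list
    comm = levels * t_comm * math.log2(max(p, 2))       # per-level synchronization
    return (near + far) / p + comm
```

Such a model predicts near-linear strong scaling until the communication term overtakes the divided work term, which is the qualitative behaviour a real FMM performance model would quantify.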

  7. 3D Massive MIMO Systems: Channel Modeling and Performance Analysis

    KAUST Repository

    Nadeem, Qurrat-Ul-Ain

    2015-03-01

    Multiple-input-multiple-output (MIMO) systems of current LTE releases are capable of adaptation in the azimuth only. More recently, the trend is to enhance system performance by exploiting the channel's degrees of freedom in the elevation through dynamic adaptation of the vertical antenna beam pattern. This necessitates the derivation and characterization of three-dimensional (3D) channels. Over the years, channel models have evolved to address the challenges of wireless communication technologies. In parallel to theoretical studies on channel modeling, many standardized channel models, such as COST-based models, 3GPP SCM, WINNER, and ITU, have emerged that act as references for industries and telecommunication companies to assess the system-level and link-level performance of advanced signal processing techniques over realistic channels. Given that existing channel models are only two-dimensional (2D) in nature, a large effort in channel modeling is needed to study the impact of the channel component in the elevation direction. The first part of this work sheds light on the current 3GPP activity around 3D channel modeling and beamforming, an aspect that to our knowledge has not been extensively covered by a research publication. The standardized MIMO channel model is presented, which incorporates both the propagation effects of the environment and the radio effects of the antennas. In order to facilitate future studies on the use of 3D beamforming, the main features of the proposed 3D channel model are discussed. A brief overview of the future 3GPP 3D channel model being outlined for the next generation of wireless networks is also provided. In the subsequent part of this work, we present an information-theoretic channel model for MIMO systems that supports the elevation dimension. The model is based on the principle of maximum entropy, which enables us to determine the distribution of the channel matrix consistent with the prior information on the angles of departure and

  8. Compact models and performance investigations for subthreshold interconnects

    CERN Document Server

    Dhiman, Rohit

    2014-01-01

    The book provides a detailed analysis of issues related to sub-threshold interconnect performance from the perspective of analytical approach and design techniques. Particular emphasis is laid on the performance analysis of coupling noise and variability issues in sub-threshold domain to develop efficient compact models. The proposed analytical approach gives physical insight of the parameters affecting the transient behavior of coupled interconnects. Remedial design techniques are also suggested to mitigate the effect of coupling noise. The effects of wire width, spacing between the wires, wi

  9. Performance prediction of industrial centrifuges using scale-down models.

    Science.gov (United States)

    Boychyn, M; Yim, S S S; Bulmer, M; More, J; Bracewell, D G; Hoare, M

    2004-12-01

    Computational fluid dynamics was used to model the high flow forces found in the feed zone of a multichamber-bowl centrifuge and to reproduce these in a small, high-speed rotating disc device. Linking the device to scale-down centrifugation permitted good estimation of the performance of various continuous-flow centrifuges (disc stack, multichamber bowl, CARR Powerfuge) for shear-sensitive protein precipitates. Critically, the ultra scale-down centrifugation process proved to be a much more accurate predictor of production multichamber-bowl performance than the pilot centrifuge.
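Scale-down comparisons between centrifuges are commonly framed with sigma (equivalent settling area) theory, matching the duty Q/(c·Σ) between devices. A minimal sketch, assuming the standard textbook disc-stack sigma formula and user-supplied efficiency factors c; none of this is taken from the paper:

```python
import math

def disc_stack_sigma(omega, n_discs, r_outer, r_inner, theta):
    """Equivalent settling area (sigma factor) of a disc-stack centrifuge:
    Sigma = 2*pi*n*omega^2*(r_o^3 - r_i^3) / (3*g*tan(theta)),
    with omega in rad/s, radii in m, and theta the disc half-angle."""
    g = 9.81
    return (2 * math.pi * n_discs * omega ** 2
            * (r_outer ** 3 - r_inner ** 3) / (3 * g * math.tan(theta)))

def equivalent_flow(q1, sigma1, sigma2, c1=1.0, c2=1.0):
    """Flow rate in device 2 giving the same settling duty Q/(c*Sigma)
    as flow q1 in device 1 (c are empirical efficiency factors)."""
    return q1 * (c2 * sigma2) / (c1 * sigma1)
```

Matching Q/(c·Σ) is what lets a small lab device mimic the clarification performance of a production bowl, before shear effects in the feed zone are layered on top.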

  10. Evaluation of the performance of DIAS ionospheric forecasting models

    Directory of Open Access Journals (Sweden)

    Tsagouri Ioanna

    2011-08-01

    Full Text Available Nowcasting and forecasting ionospheric products and services for the European region have been regularly provided since August 2006 through the European Digital upper Atmosphere Server (DIAS, http://dias.space.noa.gr). Currently, DIAS ionospheric forecasts are based on the online implementation of two models: (i) the solar wind driven autoregression model for ionospheric short-term forecast (SWIF), which combines historical and real-time ionospheric observations with solar-wind parameters obtained in real time at the L1 point from the NASA ACE spacecraft, and (ii) the geomagnetically correlated autoregression model (GCAM), which is a time series forecasting method driven by a synthetic geomagnetic index. In this paper we investigate the operational ability and accuracy of both DIAS models by carrying out a metrics-based evaluation of their performance under all possible conditions. The analysis was based on the systematic comparison of the models' predictions with actual observations obtained over almost one solar cycle (1998–2007) at four European ionospheric locations (Athens, Chilton, Juliusruh and Rome), and on the comparison of the models' performance against two simple prediction strategies, the median- and persistence-based predictions, during storm conditions. The results verify the operational validity of both models and quantify their prediction accuracy under all possible conditions in support of operational applications, but also of comparative studies in assessing or expanding current ionospheric forecasting capabilities.

  11. Does model performance improve with complexity? A case study with three hydrological models

    Science.gov (United States)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time also simple conceptual models have advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in case of runoff but not for soil moisture. Furthermore the most sophisticated PREVAH model shows an added value compared to the HBV model only in case of soil moisture. Focusing on extreme events we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance in lower altitudes as opposed to (pre-) alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. 
droughts).
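Agreement between simulated and observed runoff in comparisons like this one is often summarized with the Nash-Sutcliffe efficiency. A minimal implementation follows; the choice of this particular metric is illustrative, not necessarily the one used in the study.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.
    1 is a perfect match; 0 means the model is no better than
    predicting the observed mean."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

Because the denominator is the variance of the observations, NSE computed on runoff and NSE computed on soil moisture are directly comparable in relative terms, which is what allows ranking models across variables.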

  12. Correlation between human observer performance and model observer performance in differential phase contrast CT

    International Nuclear Information System (INIS)

    Li, Ke; Garrett, John; Chen, Guang-Hong

    2013-01-01

    Purpose: With the recently expanding interest in and development of x-ray differential phase contrast CT (DPC-CT), the evaluation of its task-specific detection performance and its comparison with the corresponding absorption CT under a given radiation dose constraint become increasingly important. Mathematical model observers are often used to quantify the performance of imaging systems, but their correlation with actual human observers needs to be confirmed for each new imaging method. This work investigates the effects of stochastic DPC-CT noise on the correlation of detection performance between model and human observers with signal-known-exactly (SKE) detection tasks. Methods: The detectabilities of different objects (five disks with different diameters and two breast lesion masses) embedded in an experimental DPC-CT noise background were assessed using both model and human observers. The detectability of the disk and lesion signals was measured using five types of model observers: the prewhitening ideal observer, the nonprewhitening (NPW) observer, the nonprewhitening observer with eye filter and internal noise (NPWEi), the prewhitening observer with eye filter and internal noise (PWEi), and the channelized Hotelling observer (CHO). The same objects were also evaluated by four human observers using the two-alternative forced choice method. The results from the model observer experiment were quantitatively compared to the human observer results to assess the correlation between the two techniques. Results: The contrast-to-detail (CD) curve generated by the human observers for the disk-detection experiments shows that the contrast required to detect a disk is inversely proportional to the square root of the disk size. Based on the CD curves, the ideal and NPW observers tend to systematically overestimate the performance of the human observers. The NPWEi and PWEi observers did not predict human performance well either, as the slopes of their CD
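The reported inverse-square-root contrast-detail relation, C(d) = k/sqrt(d), can be fitted with a one-parameter least-squares model. The sketch below is illustrative and uses no data from the paper; the function name and fitting choice are assumptions.

```python
import numpy as np

def fit_cd_curve(diameters, thresholds):
    """Fit the contrast-detail relation C = k / sqrt(d) by closed-form
    one-parameter least squares: k = sum(x*C) / sum(x^2), x = 1/sqrt(d)."""
    d = np.asarray(diameters, dtype=float)
    c = np.asarray(thresholds, dtype=float)
    x = 1.0 / np.sqrt(d)
    return float(np.sum(x * c) / np.sum(x * x))
```

Comparing the fitted k (and, with a two-parameter fit, the slope on log-log axes) between human and model observers is one way to quantify the over- or under-estimation the abstract describes.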

  13. Modelling of performance of the ATLAS SCT detector

    International Nuclear Information System (INIS)

    Kazi, S.

    2000-01-01

    Full text: The ATLAS detector being built at the LHC will use the SCT (semiconductor tracking) module for particle tracking in the inner core of the detector. An analytical/numerical model of the discriminator threshold dependence and the temperature dependence of the SCT module was derived. Measurements of the performance of the SCT module versus temperature were conducted, and the results were compared with the predictions made by the model. The effect of radiation damage on the SCT detector was also investigated. The detector will operate for approximately 10 years, so a study was carried out on the effects of 10 years of radiation exposure to the SCT

  14. Summary of Calculation Performed with NPIC's New FGR Model

    International Nuclear Information System (INIS)

    Jiao Yongjun; Li Wenjie; Zhou Yi; Xing Shuo

    2013-01-01

    1. Introduction The NPIC modeling group has performed calculations on both real cases and idealized cases in the FUMEX II and III data packages. The performance code we used is COPERNIC 2.4, developed by AREVA, to which a new FGR model has been added. Therefore, a comparison study has been made between the Bernard model (V2.2) and the new model in order to evaluate the performance of the new model. As mentioned before, the focus of our study lies in thermal fission gas release, or more specifically grain boundary bubble behaviour. 2. Calculation method There are some differences between the calculated burnup and the measured burnup in many real cases. Considering that FGR is significantly dependent on rod average burnup, a multiplicative factor on fuel rod linear power, i.e. FQE, is applied and adjusted in the calculations to ensure the calculated burnup generally equals the measured burnup. Also, a multiplicative factor on upper plenum volume, i.e. AOPL, is applied and adjusted in the calculations to ensure the calculated free volume equals the pre-irradiation data for total free volume in the rod. Cladding temperatures were entered if they were provided; otherwise the cladding temperatures are calculated from the inlet coolant temperature. The results are presented in spreadsheet form as an attachment to this paper, including thirteen real cases and three idealized cases. Three real cases (BK353, BK370, US PWR TSQ022) are excluded from validation of the new model, because the predicted athermal release is greater than the measured release, which would imply a negative thermal release. Obviously this is not suitable for validation, but the results are also listed in the spreadsheet (sheet 'Cases excluded from validation'). 3. Results The results of 10 real cases are listed in the sheet 'Steady case summary', which summarizes measured and predicted values of Bu and FGR for each case, and plots the M/P ratio of the FGR calculation by different models in COPERNIC. A statistical comparison was also made with three indexes, i

  15. Models for the energy performance of low-energy houses

    DEFF Research Database (Denmark)

    Andersen, Philip Hvidthøft Delff

    of buildings is needed both in order to assess energy-efficiency and to operate modern buildings economically. Energy signatures are a central tool in both energy performance assessment and decision making related to refurbishment of buildings. Also for operation of modern buildings with installations......-building. The building is well-insulated and features large modern energy-efficient windows and floor heating. These features lead to increased non-linear responses to solar radiation and longer time constants. The building is equipped with advanced control and measuring equipment. Experiments are designed and performed...... in order to identify important dynamical properties of the building, and the collected data is used for modeling. The thesis emphasizes the statistical model building and validation needed to identify dynamical systems. It distinguishes itself from earlier work by focusing on modern low-energy construction...

  16. Lysimeter data as input to performance assessment models

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.

    1998-01-01

    The Field Lysimeter Investigations: Low-Level Waste Data Base Development Program is obtaining information on the performance of radioactive waste forms in a disposal environment. Waste forms fabricated using ion-exchange resins from EPICOR-117 prefilters employed in the cleanup of the Three Mile Island (TMI) Nuclear Power Station are being tested to develop a low-level waste data base and to obtain information on survivability of waste forms in a disposal environment. The program includes reviewing radionuclide releases from those waste forms in the first 7 years of sampling and examining the relationship between code input parameters and lysimeter data. Also, lysimeter data are applied to performance assessment source term models, and initial results from use of data in two models are presented

  17. WWER reactor fuel performance, modelling and experimental support. Proceedings

    International Nuclear Information System (INIS)

    Stefanova, S.; Chantoin, P.; Kolev, I.

    1994-01-01

    This publication is a compilation of 36 papers presented at the International Seminar on WWER Reactor Fuel Performance, Modelling and Experimental Support, organised by the Institute for Nuclear Research and Nuclear Energy (BG), in cooperation with the International Atomic Energy Agency. The Seminar was attended by 76 participants from 16 countries, including representatives of all major Russian plants and institutions responsible for WWER reactor fuel manufacturing, design and research. The reports are grouped in four chapters: 1) WWER Fuel Performance and Economics: Status and Improvement Prospects: 2) WWER Fuel Behaviour Modelling and Experimental Support; 3) Licensing of WWER Fuel and Fuel Analysis Codes; 4) Spent Fuel of WWER Plants. The reports from the corresponding four panel discussion sessions are also included. All individual papers are recorded in INIS as separate items

  18. Integrated model for supplier selection and performance evaluation

    Directory of Open Access Journals (Sweden)

    Borges de Araújo, Maria Creuza

    2015-08-01

    Full Text Available This paper puts forward a model for selecting suppliers and evaluating the performance of those already working with a company. A simulation was conducted in the food industry, a sector of high significance in the economy of Brazil. The model enables the phases of selecting and evaluating suppliers to be integrated. This is important so that a company can build partnerships with suppliers who are able to meet its needs. Additionally, a group method is used to enable managers who will be affected by this decision to take part in the selection stage. Finally, the classes resulting from the performance evaluation are shown to support the contractor in choosing the most appropriate relationship with its suppliers.

  19. From Performance Measurement to Strategic Management Model: Balanced Scorecard

    Directory of Open Access Journals (Sweden)

    Cihat Savsar

    2015-03-01

    Full Text Available Abstract: In today's competitive markets, one of the main conditions for the survival of enterprises is having an effective performance management system. Decisions must be taken by management according to the performance of assets. In the transition from an industrial society to an information society, the structure of businesses has changed and the value of non-financial assets has increased. Consequently, systems have emerged that are based on intangible assets and measure them instead of tangible assets alone. With economic and technological development, evaluating a business along a single dimension is no longer sufficient. Performance evaluation methods can be applied in a business with an integrated approach through their accordance with business strategy, their link to the reward system, and the cause-effect links established between performance measures. The balanced scorecard is one of the most commonly used measurement methods. While it was first used in 1992 as a performance measurement tool, today it is also used as a strategic management model beyond its conventional uses. The BSC contains a customer perspective, an internal perspective, and a learning and growth perspective besides the financial perspective, and the learning and growth perspective is the determinant of the others. The framework emphasizes what needs to be accomplished in the other dimensions in order to achieve the objectives set out in the financial perspective. Strategy maps, which establish causal links between performance measures and targets and describe how specified goals are to be achieved, are also described.

  20. A Fuzzy Knowledge Representation Model for Student Performance Assessment

    DEFF Research Database (Denmark)

    Badie, Farshad

    Knowledge representation models based on Fuzzy Description Logics (DLs) can provide a foundation for reasoning in intelligent learning environments. While basic DLs are suitable for expressing crisp concepts and binary relationships, Fuzzy DLs are capable of processing degrees of truth/completeness about vague or imprecise information. This paper tackles the issue of representing fuzzy classes using OWL2 in a dataset describing Performance Assessment Results of Students (PARS).

  1. PERFORMANCE EVALUATION OF EMPIRICAL MODELS FOR VENTED LEAN HYDROGEN EXPLOSIONS

    OpenAIRE

    Anubhav Sinha; Vendra C. Madhav Rao; Jennifer X. Wen

    2017-01-01

    Explosion venting is a method commonly used to prevent or minimize damage to an enclosure caused by an accidental explosion. An estimate of the maximum overpressure generated through the explosion is an important parameter in the design of the vents. Various engineering models (Bauwens et al., 2012; Molkov and Bragin, 2015) and European (EN 14994) and USA standards (NFPA 68) are available to predict such overpressure. In this study, their performance is evaluated using a number of published exper...

  2. System performance modeling of extreme ultraviolet lithographic thermal issues

    International Nuclear Information System (INIS)

    Spence, P. A.; Gianoulakis, S. E.; Moen, C. D.; Kanouff, M. P.; Fisher, A.; Ray-Chaudhuri, A. K.

    1999-01-01

    Numerical simulation is used in the development of an extreme ultraviolet lithography Engineering Test Stand. Extensive modeling was applied to predict the impact of thermal loads on key lithographic parameters such as image placement error, focal shift, and loss of CD control. We show that thermal issues can be effectively managed to ensure that their impact on lithographic performance is maintained within design error budgets. (c) 1999 American Vacuum Society

  3. Introducing Model Predictive Control for Improving Power Plant Portfolio Performance

    DEFF Research Database (Denmark)

    Edlund, Kristian Skjoldborg; Bendtsen, Jan Dimon; Børresen, Simon

    2008-01-01

    This paper introduces a model predictive control (MPC) approach for construction of a controller for balancing the power generation against consumption in a power system. The objective of the controller is to coordinate a portfolio consisting of multiple power plant units in the effort to perform...... implementation consisting of a distributed PI controller structure, both in terms of minimising the overall cost but also in terms of the ability to minimise deviation, which is the classical objective....

  4. Thermal performance modeling of cross-flow heat exchangers

    CERN Document Server

    Cabezas-Gómez, Luben; Saíz-Jabardo, José Maria

    2014-01-01

    This monograph introduces a numerical computational methodology for thermal performance modeling of cross-flow heat exchangers, with applications in chemical, refrigeration and automobile industries. This methodology allows obtaining effectiveness-number of transfer units (e-NTU) data and has been used for simulating several standard and complex flow arrangements configurations of cross-flow heat exchangers. Simulated results have been validated through comparisons with results from available exact and approximate analytical solutions. Very accurate results have been obtained over wide ranges
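The e-NTU data that the monograph computes numerically have closed-form approximations for simple arrangements. As a point of reference, below is the standard textbook correlation for a single-pass cross-flow exchanger with both fluids unmixed; this is a well-known approximate relation, not the monograph's numerical method.

```python
import math

def effectiveness_crossflow_unmixed(ntu, cr):
    """Approximate effectiveness of a single-pass cross-flow heat exchanger,
    both fluids unmixed (standard correlation):
        eps = 1 - exp[(1/Cr) * NTU^0.22 * (exp(-Cr * NTU^0.78) - 1)]
    where Cr = C_min / C_max. Cr = 0 recovers the single-stream limit."""
    if cr == 0.0:  # one fluid with effectively infinite capacity rate
        return 1.0 - math.exp(-ntu)
    return 1.0 - math.exp((1.0 / cr) * ntu ** 0.22
                          * (math.exp(-cr * ntu ** 0.78) - 1.0))
```

A numerical methodology like the one in the monograph can be validated against such correlations in the standard configurations before being applied to complex flow arrangements.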

  5. 3D Massive MIMO Systems: Modeling and Performance Analysis

    KAUST Repository

    Nadeem, Qurrat-Ul-Ain

    2015-07-30

    Multiple-input-multiple-output (MIMO) systems of current LTE releases are capable of adaptation in the azimuth only. Recently, the trend is to enhance system performance by exploiting the channel's degrees of freedom in the elevation, which necessitates the characterization of 3D channels. We present an information-theoretic channel model for MIMO systems that supports the elevation dimension. The model is based on the principle of maximum entropy, which enables us to determine the distribution of the channel matrix consistent with the prior information on the angles. Based on this model, we provide an analytical expression for the cumulative distribution function (CDF) of the mutual information (MI) for systems with a single receive antenna and a finite number of transmit antennas in the general signal-to-interference-plus-noise-ratio (SINR) regime. The result is extended to systems with finite receive antennas in the low SINR regime. A Gaussian approximation to the asymptotic behavior of the MI distribution is derived for the regime of a large number of transmit antennas and paths. We corroborate our analysis with simulations that study the performance gains realizable through meticulous selection of the transmit antenna downtilt angles, confirming the potential of elevation beamforming to enhance system performance. The results are directly applicable to the analysis of 5G 3D massive MIMO systems.
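The MI distribution analyzed above can be explored numerically. The sketch below draws Monte-Carlo samples of log2 det(I + (SNR/Nt)·H·H^H) for an i.i.d. complex Gaussian channel, used here only as a generic stand-in for the paper's maximum-entropy channel model, and evaluates an empirical CDF.

```python
import numpy as np

def mi_samples(nt=4, nr=1, snr_db=10.0, n=2000, rng=None):
    """Monte-Carlo samples of the MIMO mutual information
    log2 det(I_nr + (SNR/Nt) * H H^H) over i.i.d. CN(0,1) channel draws."""
    rng = np.random.default_rng(rng)
    snr = 10.0 ** (snr_db / 10.0)
    h = (rng.standard_normal((n, nr, nt))
         + 1j * rng.standard_normal((n, nr, nt))) / np.sqrt(2.0)
    gram = np.eye(nr) + (snr / nt) * (h @ h.conj().transpose(0, 2, 1))
    return np.log2(np.linalg.det(gram).real)

def empirical_cdf(samples, x):
    """Fraction of MI samples not exceeding x."""
    return float(np.mean(np.asarray(samples) <= x))
```

An analytical CDF expression like the one in the paper would be checked against exactly this kind of empirical CDF for the single-receive-antenna case (nr = 1).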

  6. Evaluation of CFVS Performance with SPARC Model and Application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Il; Na, Young Su; Ha, Kwang Soon; Cho, Song Won [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The Containment Filtered Venting System (CFVS) is one of the important safety features for reducing the amount of fission products released into the environment by depressurizing the containment. KAERI has been conducting an integrated performance verification test of the CFVS as part of a Ministry of Trade, Industry and Energy (MOTIE) project. Several codes are generally used for wet-type filters, such as SPARC, BUSCA, and SUPRA. In particular, the SPARC model is included in MELCOR to calculate the fission product removal rate through pool scrubbing. In this study, CFVS performance is evaluated using the SPARC model in MELCOR according to the steam fraction in the containment. The calculation mainly focuses on the effect of the steam fraction in the containment, and the results are explained with the aerosol removal model in SPARC. A previous study on the OPR 1000 is applied to the results. There were two CFVS valve opening periods, and it was found that CFVS performance differed in each case. The study provides fundamental data that can be used to decide the CFVS operation time; however, more calculation data are necessary to generalize the results.

  7. Modelling the Progression of Male Swimmers’ Performances through Adolescence

    Directory of Open Access Journals (Sweden)

    Shilo J. Dormehl

    2016-01-01

    Full Text Available Insufficient data on adolescent athletes is contributing to the challenges facing youth athletic development and accurate talent identification. The purpose of this study was to model the progression of male sub-elite swimmers' performances during adolescence. The performances of 446 males (12–19 years old) competing in seven individual events (50, 100 and 200 m freestyle; 100 m backstroke, breaststroke and butterfly; 200 m individual medley) over an eight-year period at an annual international schools swimming championship, run under FINA regulations, were collected. Quadratic functions for each event were determined using mixed linear models. Thresholds of peak performance were achieved between the ages of 18.5 ± 0.1 (50 m freestyle and 200 m individual medley) and 19.8 ± 0.1 (100 m butterfly) years. The slowest rate of improvement was observed in the 200 m individual medley (20.7%) and the highest in the 100 m butterfly (26.2%). Butterfly does, however, appear to be one of the last strokes in which males specialise. The models may be useful as talent identification tools, as they predict the age at which an average sub-elite swimmer could potentially peak. The expected rate of improvement could serve as a benchmark against which to monitor and evaluate swimmers.
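For a quadratic age-performance model, the modelled age of peak performance is the vertex of the parabola, -b/(2a). A minimal sketch using ordinary least squares on synthetic data; the paper itself fitted mixed linear models, which this does not reproduce.

```python
import numpy as np

def peak_age(ages, times):
    """Fit performance time as a quadratic in age, t = a*age^2 + b*age + c,
    and return the vertex -b/(2a): the modelled age of peak performance
    (minimum time) when the parabola opens upward (a > 0)."""
    a, b, _c = np.polyfit(np.asarray(ages, dtype=float),
                          np.asarray(times, dtype=float), 2)
    return -b / (2.0 * a)
```

Applied per event, this is how a fitted quadratic yields a single "threshold of peak performance" age such as the 18.5 or 19.8 years reported above.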

  8. A performance measurement using balanced scorecard and structural equation modeling

    Directory of Open Access Journals (Sweden)

    Rosha Makvandi

    2014-02-01

    Full Text Available During the past few years, the balanced scorecard (BSC) has been widely used as a promising method for performance measurement. BSC studies organizations in terms of four perspectives: customer, internal processes, learning and growth, and financial figures. This paper presents a hybrid of BSC and structural equation modeling (SEM) to measure the performance of an Iranian university in the province of Alborz, Iran. The proposed study uses this conceptual method, designs a questionnaire and distributes it among university students and professors. Using the SEM technique, the survey analyzes the data; the results indicate that the university did poorly in terms of all four perspectives. The survey then derives improvement targets by identifying the attributes necessary for performance improvement.

  9. The application of DEA model in enterprise environmental performance auditing

    Science.gov (United States)

    Li, F.; Zhu, L. Y.; Zhang, J. D.; Liu, C. Y.; Qu, Z. G.; Xiao, M. S.

    2017-01-01

    As a part of society, enterprises have an inescapable responsibility for environmental protection and governance. This article discusses the feasibility and necessity of enterprise environmental performance auditing and uses a DEA model to calculate the environmental performance of Haier as an example. Most of the reference data are selected and sorted from Haier’s environmental reports published in 2008, 2009, 2011 and 2015, with some data taken from published articles and fieldwork. All results are calculated with the DEAP software and have high credibility. The analysis in this article can give corporate managers an idea of how to use environmental performance auditing to adjust their corporate environmental investment quotas and change their companies’ environmental strategies.

  10. PHARAO flight model: optical on ground performance tests

    Science.gov (United States)

    Lévèque, T.; Faure, B.; Esnault, F. X.; Grosjean, O.; Delaroche, C.; Massonnet, D.; Escande, C.; Gasc, Ph.; Ratsimandresy, A.; Béraud, S.; Buffe, F.; Torresi, P.; Larivière, Ph.; Bernard, V.; Bomer, T.; Thomin, S.; Salomon, C.; Abgrall, M.; Rovera, D.; Moric, I.; Laurent, Ph.

    2017-11-01

    PHARAO (Projet d'Horloge Atomique par Refroidissement d'Atomes en Orbite), which has been developed by CNES, is the first primary frequency standard specially designed for operation in space. PHARAO is the main instrument of the ESA mission ACES (Atomic Clock Ensemble in Space). The ACES payload will be installed on board the International Space Station (ISS) to perform fundamental physics experiments. All the sub-systems of the Flight Model (FM) have now passed the qualification process, and the whole FM of the cold cesium clock, PHARAO, is being assembled and will undergo extensive tests. The expected performances in space are a frequency accuracy below 3×10⁻¹⁶ (with a final goal of 10⁻¹⁶) and a frequency stability of 10⁻¹³ τ⁻¹/². In this paper, we focus on the laser source performances and the main results on cold atom manipulation.

  11. Advanced transport systems analysis, modeling, and evaluation of performances

    CERN Document Server

    Janić, Milan

    2014-01-01

    This book provides a systematic analysis, modeling and evaluation of the performance of advanced transport systems. It offers an innovative approach by presenting a multidimensional examination of the performance of advanced transport systems and transport modes, useful for both theoretical and practical purposes. Advanced transport systems for the twenty-first century are characterized by the superiority of one or several of their infrastructural, technical/technological, operational, economic, environmental, social, and policy performances as compared to their conventional counterparts. The advanced transport systems considered include: Bus Rapid Transit (BRT) and Personal Rapid Transit (PRT) systems in urban area(s), electric and fuel cell passenger cars, high speed tilting trains, High Speed Rail (HSR), Trans Rapid Maglev (TRM), Evacuated Tube Transport system (ETT), advanced commercial subsonic and Supersonic Transport Aircraft (STA), conventionally- and Liquid Hydrogen (LH2)-fuelled commercial air trans...

  12. Performance model-directed data sieving for high-performance I/O

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yong; Lu, Yin; Amritkar, Prathamesh; Thakur, Rajeev; Zhuang, Yu

    2014-09-10

    Many scientific computing applications and engineering simulations exhibit noncontiguous I/O access patterns. Data sieving is an important technique to improve the performance of noncontiguous I/O accesses by combining small and noncontiguous requests into a large and contiguous request. It has been proven effective even though more data are potentially accessed than demanded. In this study, we propose a new data sieving approach namely performance model-directed data sieving, or PMD data sieving in short. It improves the existing data sieving approach from two aspects: (1) dynamically determines when it is beneficial to perform data sieving; and (2) dynamically determines how to perform data sieving if beneficial. It improves the performance of the existing data sieving approach considerably and reduces the memory consumption as verified by both theoretical analysis and experimental results. Given the importance of supporting noncontiguous accesses effectively and reducing the memory pressure in a large-scale system, the proposed PMD data sieving approach in this research holds a great promise and will have an impact on high-performance I/O systems.
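The sieving trade-off described above (one large contiguous read versus many small noncontiguous ones) can be sketched with a toy cost model. The latency and bandwidth figures, and the decision rule itself, are illustrative assumptions, not the paper's actual performance model:

```python
# Toy cost model for performance-model-directed data sieving.
# Latency/bandwidth figures and function names are illustrative.

def sieving_cost(offsets, sizes, latency=1e-3, bandwidth=100e6):
    """Predicted time for one contiguous read spanning all requests:
    a single latency plus transfer of the whole extent, holes included."""
    extent = (offsets[-1] + sizes[-1]) - offsets[0]
    return latency + extent / bandwidth

def separate_cost(offsets, sizes, latency=1e-3, bandwidth=100e6):
    """Predicted time for issuing every noncontiguous request on its own:
    one latency per request plus the useful bytes only."""
    return len(offsets) * latency + sum(sizes) / bandwidth

def should_sieve(offsets, sizes, **kw):
    """Dynamic decision: sieve only when the model predicts a benefit."""
    return sieving_cost(offsets, sizes, **kw) < separate_cost(offsets, sizes, **kw)

# 100 requests of 4 KiB spaced 64 KiB apart: seek-dominated, sieving wins.
dense = [i * 65536 for i in range(100)]
# The same requests spaced 16 MiB apart: holes dominate, sieving loses.
sparse = [i * 16 * 2**20 for i in range(100)]
sizes = [4096] * 100
print(should_sieve(dense, sizes), should_sieve(sparse, sizes))  # True False
```

The point of the "model-directed" idea is exactly this branch: the decision is taken per access pattern rather than always sieving.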

  13. Qualitative and quantitative examination of the performance of regional air quality models representing different modeling approaches

    International Nuclear Information System (INIS)

    Bhumralkar, C.M.; Ludwig, F.L.; Shannon, J.D.; McNaughton, D.

    1985-04-01

    The calculations of three different air quality models were compared with the best available observations. The comparisons were made without calibrating the models to improve agreement with the observations. Model performance was poor for short averaging times (less than 24 hours). Some of the poor performance can be traced to errors in the input meteorological fields, but errors exist at all levels. It should be noted that these models were not originally designed for treating short-term episodes. For short-term episodes, much of the variance in the data can arise from small spatial-scale features that tend to be averaged out over longer periods. These features cannot be resolved with the coarse grids used for the meteorological and emissions inputs. Thus, it is not surprising that the models performed better for the longer averaging times. The models compared were RTM-II, ENAMAP-2 and ACID. (17 refs., 5 figs., 4 tabs.)

  14. Methodologies for evaluating performance and assessing uncertainty of atmospheric dispersion models

    Science.gov (United States)

    Chang, Joseph C.

    This thesis describes methodologies to evaluate the performance and to assess the uncertainty of atmospheric dispersion models, tools that predict the fate of gases and aerosols upon their release into the atmosphere. Because of the large economic and public-health impacts often associated with the use of the dispersion model results, these models should be properly evaluated, and their uncertainty should be properly accounted for and understood. The CALPUFF, HPAC, and VLSTRACK dispersion modeling systems were applied to the Dipole Pride (DP26) field data (˜20 km in scale), in order to demonstrate the evaluation and uncertainty assessment methodologies. Dispersion model performance was found to be strongly dependent on the wind models used to generate gridded wind fields from observed station data. This is because, despite the fact that the test site was a flat area, the observed surface wind fields still showed considerable spatial variability, partly because of the surrounding mountains. It was found that the two components, model uncertainty and natural variability, were comparable for the DP26 field data, with variability more important than uncertainty closer to the source and less important farther away from the source. Therefore, reducing data errors for input meteorology may not necessarily increase model accuracy due to random turbulence. DP26 was a research-grade field experiment, where the source, meteorological, and concentration data were all well-measured. Another typical application of dispersion modeling is a forensic study where the data are usually quite scarce. An example would be the modeling of the alleged releases of chemical warfare agents during the 1991 Persian Gulf War, where the source data had to rely on intelligence reports, and where Iraq had stopped reporting weather data to the World Meteorological Organization since the 1981 Iran-Iraq war.
Therefore the meteorological fields inside Iraq must be estimated by models such as prognostic mesoscale meteorological models, based on
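Metrics of the kind used in such dispersion-model evaluations, fractional bias (FB) and the fraction of predictions within a factor of two of the observations (FAC2), can be sketched as follows; the observation/prediction pairs are invented for illustration:

```python
# Two standard dispersion-model evaluation metrics; the data are made up.

def fractional_bias(obs, pred):
    """FB = 2*(mean(obs) - mean(pred)) / (mean(obs) + mean(pred));
    0 means unbiased, and +/-0.67 corresponds to a factor-of-two mean bias."""
    mo = sum(obs) / len(obs)
    mp = sum(pred) / len(pred)
    return 2.0 * (mo - mp) / (mo + mp)

def fac2(obs, pred):
    """Fraction of pairs with 0.5 <= pred/obs <= 2.0 (zero obs skipped)."""
    pairs = [(o, p) for o, p in zip(obs, pred) if o > 0]
    hits = sum(1 for o, p in pairs if 0.5 <= p / o <= 2.0)
    return hits / len(pairs)

obs = [1.0, 2.0, 4.0, 8.0]    # observed concentrations (arbitrary units)
pred = [2.0, 2.0, 2.0, 2.0]   # a model that predicts a constant field
print(round(fractional_bias(obs, pred), 3), fac2(obs, pred))  # 0.609 0.75
```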

  15. Modeling Windows in Energy Plus with Simple Performance Indices

    Energy Technology Data Exchange (ETDEWEB)

    Arasteh, Dariush; Kohler, Christian; Griffith, Brent

    2009-10-12

    The building energy simulation program, Energy Plus (E+), cannot use standard window performance indices (U, SHGC, VT) to model window energy impacts. Rather, E+ uses more accurate methods which require a physical description of the window. E+ needs to be able to accept U and SHGC indices as window descriptors because, often, these are all that is known about a window and because building codes, standards, and voluntary programs are developed using these terms. This paper outlines a procedure, developed for E+, which will allow it to use standard window performance indices to model window energy impacts. In this 'Block' model, a given U, SHGC, VT are mapped to the properties of a fictitious 'layer' in E+. For thermal conductance calculations, the 'Block' functions as a single solid layer. For solar optical calculations, the model begins by defining a solar transmittance (Ts) at normal incidence based on the SHGC. For properties at non-normal incidence angles, the 'Block' takes on the angular properties of multiple glazing layers; the number and type of layers defined by the U and SHGC. While this procedure is specific to E+, parts of it may have applicability to other window/building simulation programs.
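As background to why U and SHGC suffice as window descriptors: in the simplest steady-state view they fix the instantaneous heat gain directly. The sketch below is this textbook balance, not E+'s more detailed layer-by-layer method, and the numbers are hypothetical:

```python
# Simplified steady-state window heat balance from performance indices.
# This is NOT EnergyPlus's calculation; it only illustrates what the
# U and SHGC indices mean physically. All values are hypothetical.

def window_gain(U, SHGC, area, t_out, t_in, solar_irradiance):
    """Instantaneous heat gain (W): conductive term U*A*dT plus
    solar term SHGC*A*I."""
    conductive = U * area * (t_out - t_in)      # W, negative = heat loss
    solar = SHGC * area * solar_irradiance      # W, transmitted + reradiated
    return conductive + solar

# Double-glazed unit on a winter day: U = 1.8 W/m2K, SHGC = 0.6,
# 2 m2 of glass, -5 C outside, 21 C inside, 300 W/m2 incident sun.
q = window_gain(1.8, 0.6, 2.0, -5.0, 21.0, 300.0)
print(round(q, 1))  # 266.4
```

Here the solar gain outweighs the conductive loss, which is why a code-compliance calculation needs both indices rather than U alone.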

  16. Decline curve based models for predicting natural gas well performance

    Directory of Open Access Journals (Sweden)

    Arash Kamari

    2017-06-01

    Full Text Available The productivity of a gas well declines over its production life, eventually to the point where production no longer satisfies economic criteria. To address this, the production performance of gas wells should be predicted by applying reliable methods to analyse the decline trend. Therefore, reliable models are developed in this study on the basis of powerful artificial intelligence techniques, viz. the artificial neural network (ANN) modelling strategy, the least-squares support vector machine (LSSVM) approach, the adaptive neuro-fuzzy inference system (ANFIS), and the decision tree (DT) method, for the prediction of cumulative gas production as well as initial decline rate multiplied by time, as functions of the Arps decline-curve exponent and the ratio of initial gas flow rate to total gas flow rate. It is concluded that the results obtained from the models developed in the current study are in satisfactory agreement with actual gas well production data. Furthermore, the comparative study performed demonstrates that the LSSVM strategy is superior to the other models investigated for the prediction of both quantities.
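The Arps decline-curve relations underlying such analyses have simple closed forms; a minimal sketch with hypothetical well parameters:

```python
import math

def arps_rate(qi, Di, b, t):
    """Arps decline-curve rate: exponential for b = 0, harmonic for b = 1,
    hyperbolic in between. qi is the initial rate, Di the initial decline."""
    if b == 0:
        return qi * math.exp(-Di * t)
    return qi / (1.0 + b * Di * t) ** (1.0 / b)

def arps_cum(qi, Di, b, t):
    """Closed-form cumulative production up to time t."""
    if b == 0:
        return (qi / Di) * (1.0 - math.exp(-Di * t))
    if b == 1:
        return (qi / Di) * math.log(1.0 + Di * t)
    q = arps_rate(qi, Di, b, t)
    return qi**b * (qi**(1.0 - b) - q**(1.0 - b)) / (Di * (1.0 - b))

# Hypothetical well: qi = 1000 Mscf/d, Di = 0.08 per month, b = 0.5.
qi, Di, b = 1000.0, 0.08, 0.5
print(round(arps_rate(qi, Di, b, 24.0), 1))  # 260.3 -- rate after 24 months
```

The cumulative form is just the time integral of the rate, which is easy to confirm numerically for any b.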

  17. Green roof hydrologic performance and modeling: a review.

    Science.gov (United States)

    Li, Yanling; Babcock, Roger W

    2014-01-01

    Green roofs reduce runoff from impervious surfaces in urban development. This paper reviews the technical literature on green roof hydrology. Laboratory experiments and field measurements have shown that green roofs can reduce stormwater runoff volume by 30 to 86%, reduce peak flow rate by 22 to 93%, and delay the peak flow by 0 to 30 min, thereby decreasing pollution, flooding and erosion during precipitation events. However, the effectiveness can vary substantially with design characteristics, making performance predictions difficult. Evaluation of the most recently published study findings indicates that the major factors affecting green roof hydrology are precipitation volume, precipitation dynamics, antecedent conditions, growth medium, plant species, and roof slope. This paper also evaluates the computer models commonly used to simulate hydrologic processes for green roofs, including the stormwater management model, soil-water-atmosphere-plant, SWMS-2D, HYDRUS, and other models that are shown to be effective for predicting precipitation response and economic benefits. The review findings indicate that green roofs are effective for reduction of runoff volume and peak flow and for delay of peak flow; however, no tool or model is available to predict the expected performance of a given anticipated system from the design parameters that directly affect green roof hydrology.

  18. A Fluid Model for Performance Analysis in Cellular Networks

    Directory of Open Access Journals (Sweden)

    Coupechoux Marceau

    2010-01-01

    Full Text Available We propose a new framework to study the performance of cellular networks using a fluid model and we derive from this model analytical formulas for interference, outage probability, and spatial outage probability. The key idea of the fluid model is to consider the discrete base station (BS entities as a continuum of transmitters that are spatially distributed in the network. This model allows us to obtain simple analytical expressions to reveal main characteristics of the network. In this paper, we focus on the downlink other-cell interference factor (OCIF, which is defined for a given user as the ratio of its outer cell received power to its inner cell received power. A closed-form formula of the OCIF is provided in this paper. From this formula, we are able to obtain the global outage probability as well as the spatial outage probability, which depends on the location of a mobile station (MS initiating a new call. Our analytical results are compared to Monte Carlo simulations performed in a traditional hexagonal network. Furthermore, we demonstrate an application of the outage probability related to cell breathing and densification of cellular networks.
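The key step of the fluid model, replacing discrete interferers by a continuum, reduces the interference sum to an integral over an annulus from 2Rc − r (the closest possible interferer for a mobile at distance r from its serving BS, with cell radius Rc) out to the network radius. A sketch of that integral, checked against direct quadrature; the parameter values are illustrative:

```python
import math

def fluid_interference(rho, r, Rc, Rnw, eta):
    """Fluid-model interference: a continuum of transmitters of density rho
    (BS per unit area) with path loss u**(-eta), integrated over the annulus
    [2*Rc - r, Rnw]. Closed form of 2*pi*rho * int u**(1-eta) du, eta > 2."""
    a = 2.0 * Rc - r
    return 2.0 * math.pi * rho * (a**(2.0 - eta) - Rnw**(2.0 - eta)) / (eta - 2.0)

def numeric_interference(rho, r, Rc, Rnw, eta, n=100000):
    """The same integral by midpoint-rule quadrature, as a consistency check."""
    a = 2.0 * Rc - r
    du = (Rnw - a) / n
    return sum(2.0 * math.pi * rho * (a + (i + 0.5) * du)**(1.0 - eta) * du
               for i in range(n))

# Illustrative numbers: cell radius 1 km, network radius 20 km, eta = 3.5.
closed = fluid_interference(1.0, 0.5, 1.0, 20.0, 3.5)
numeric = numeric_interference(1.0, 0.5, 1.0, 20.0, 3.5)
print(abs(closed - numeric) / closed < 1e-6)  # True
```

This closed form is what makes quantities such as the OCIF, and hence outage probabilities, analytically tractable in the paper's framework.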

  19. Modeling and design of a high-performance hybrid actuator

    Science.gov (United States)

    Aloufi, Badr; Behdinan, Kamran; Zu, Jean

    2016-12-01

    This paper presents the model and design of a novel hybrid piezoelectric actuator which provides high active and passive performance for smart structural systems. The actuator is composed of a pair of curved pre-stressed piezoelectric actuators, commercially known as THUNDER actuators, installed opposite each other using two clamping mechanisms constructed of in-plane fixable hinges, grippers and solid links. A full mathematical model is developed to describe the active and passive dynamics of the actuator and investigate the effects of its geometrical parameters on the dynamic stiffness, free displacement and blocked force properties. Among the literature that deals with piezoelectric actuators in which THUNDER elements are used as a source of electromechanical power, the proposed study is unique in that it presents a mathematical model that can predict the actuator characteristics and capture other phenomena, such as resonances, mode shapes, phase shifts, dips, etc. For model validation, the measurements of the free dynamic response per unit voltage and passive acceleration transmissibility of a particular actuator design are used to check the accuracy of the results predicted by the model. The results reveal that there is good agreement between the model and experiment. Another experiment is performed to test the linearity of the actuator system by examining the variation of the output dynamic responses with varying forces and voltages at different frequencies. From the results, it can be concluded that the actuator acts approximately as a linear system at frequencies up to 1000 Hz. A parametric study is carried out here by applying the developed model to analyze the influence of the geometrical parameters of the fixable hinges on the active and passive actuator properties. The model predictions in the frequency range of 0-1000 Hz show that the hinge thickness, radius, and opening angle parameters have great effects on the frequency dynamic

  20. Radiation-induced mucositis as a space flight risk. Model studies on X-ray and heavy-ion irradiated organotypic oral mucosa models; Strahlungsinduzierte Mukositis als Risiko der Raumfahrt. Modelluntersuchungen an Roentgen- und Schwerionen-bestrahlten organotypischen Mundschleimhaut-Modellen

    Energy Technology Data Exchange (ETDEWEB)

    Tschachojan, Viktoria

    2014-07-29

    Humans in exomagnetospheric space are exposed to highly energetic heavy ion radiation which can hardly be shielded. Since radiation-induced mucositis constitutes a severe complication of heavy ion radiotherapy, it would also pose a serious medical safety risk for crew members during prolonged space flights such as missions to the Moon or Mars. For assessment of the risk of developing radiation-induced mucositis, three-dimensional organotypic cultures of immortalized human keratinocytes and fibroblasts were irradiated with a {sup 12}C particle beam at high energies or with X-rays. Immunofluorescence staining was performed on cryosections, and the radiation-induced release of cytokines and chemokines was quantified by ELISA from culture supernatants. The analyses focused on the time points 4, 8, 24 and 48 hours after irradiation. The mucosa model showed many structural similarities with native oral mucosa and authentic immunological responses to radiation exposure. Quantification of the DNA damage in irradiated mucosa models revealed about twice as many double-strand breaks (DSB) after heavy-ion irradiation compared to X-rays at defined doses and time points, suggesting a higher genotoxicity of heavy ions. Nuclear factor κB activation was observed after treatment with X-rays or {sup 12}C particles. An activation of NF-κB p65 in irradiated samples could not be detected. ELISA analyses showed significantly higher interleukin 6 and interleukin 8 levels after irradiation with X-rays and {sup 12}C particles compared to non-irradiated controls. However, only X-rays induced significantly higher levels of interleukin 1β. Analyses of TNF-α and IFN-γ showed no radiation-induced effects. Further analyses revealed a radiation-induced reduction in proliferation and loss of compactness in the irradiated oral mucosa model, which would lead to local lesions in vivo. In this study we revealed that several pro-inflammatory markers and structural changes are induced by X-rays and heavy

  1. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

    This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study on active cooperation between primary users and secondary users, i.e., (CCRN), followed by the discussions on research issues and challenges in designing spectrum-energy efficient CCRN. As an effort to shed light on the design of spectrum-energy efficient CCRN, they model the CCRN based on orthogonal modulation and orthogonally dual-polarized antenna (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy efficient CCRN are analyzed.

  2. Performance Evaluation of 3d Modeling Software for Uav Photogrammetry

    Science.gov (United States)

    Yanagi, H.; Chikatsu, H.

    2016-06-01

    UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAV and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed. Consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modelling software are black-box algorithms. As a result, only a few studies have been able to evaluate their accuracy using 3D coordinate check points. With this motivation, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.

  3. PERFORMANCE EVALUATION OF 3D MODELING SOFTWARE FOR UAV PHOTOGRAMMETRY

    Directory of Open Access Journals (Sweden)

    H. Yanagi

    2016-06-01

    Full Text Available UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAV and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed. Consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modelling software are black-box algorithms. As a result, only a few studies have been able to evaluate their accuracy using 3D coordinate check points. With this motivation, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.

  4. A refined index of model performance: a rejoinder

    Science.gov (United States)

    Legates, David R.; McCabe, Gregory J.

    2013-01-01

    Willmott et al. [Willmott CJ, Robeson SM, Matsuura K. 2012. A refined index of model performance. International Journal of Climatology, forthcoming. DOI:10.1002/joc.2419.] recently suggested a refined index of model performance (dr) that they purport to be superior to other methods. Their refined index ranges from −1.0 to 1.0 to resemble a correlation coefficient, but it is merely a linear rescaling of our modified coefficient of efficiency (E1) over the positive portion of the domain of dr. We disagree with Willmott et al. (2012) that dr provides a better interpretation; rather, E1 is more easily interpreted, such that a value of E1 = 1.0 indicates a perfect model (no errors) while E1 = 0.0 indicates a model that is no better than the baseline comparison (usually the observed mean). Negative values of E1 (and, for that matter, of dr) indicate a model that performs worse than the baseline; this issue was discussed by Legates and McCabe [Legates DR, McCabe GJ. 1999. Evaluating the use of “goodness-of-fit” measures in hydrologic and hydroclimatic model validation. Water Resources Research 35(1): 233-241.] and by Schaefli and Gupta [Schaefli B, Gupta HV. 2007. Do Nash values have value? Hydrological Processes 21: 2075-2080. DOI: 10.1002/hyp.6825.]. This important discussion focuses on the appropriate baseline comparison to use, and why the observed mean often may be an inadequate choice for model evaluation and development.
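The claimed relationship is easy to verify numerically: on the positive portion of its domain, dr = (1 + E1)/2, a pure linear rescaling. A minimal sketch with made-up data:

```python
# E1 (Legates and McCabe) and dr (Willmott et al. 2012), both MAE-based.
# The observation/prediction series below are invented for illustration.

def e1(obs, pred):
    """Modified coefficient of efficiency: 1 - sum|P-O| / sum|O-mean(O)|."""
    mo = sum(obs) / len(obs)
    a = sum(abs(p - o) for o, p in zip(obs, pred))
    b = sum(abs(o - mo) for o in obs)
    return 1.0 - a / b

def dr(obs, pred):
    """Refined index of agreement of Willmott et al. (2012)."""
    mo = sum(obs) / len(obs)
    a = sum(abs(p - o) for o, p in zip(obs, pred))
    b = sum(abs(o - mo) for o in obs)
    if a <= 2.0 * b:
        return 1.0 - a / (2.0 * b)
    return 2.0 * b / a - 1.0

obs = [1.0, 3.0, 2.0, 5.0, 4.0]
pred = [1.2, 2.5, 2.2, 4.4, 4.1]
# On the first branch (sum|P-O| <= 2*sum|O-mean|), dr = (1 + E1) / 2.
print(abs(dr(obs, pred) - (1.0 + e1(obs, pred)) / 2.0) < 1e-12)  # True
```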

  5. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Full Text Available Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that, in general, BIC, CAIC and DIC were superior to AIC when the true data-generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data-generating process than for a standard asymmetric data-generating process. Except for complex models, AIC's performance did not make substantial gains in recovery rates as sample size increased. The research findings demonstrate the influence of model complexity on asymmetric price transmission model comparison and selection.
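The criteria compared above differ only in how they penalize extra parameters; a minimal sketch with hypothetical log-likelihoods and parameter counts (not the study's data) shows how they can disagree:

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: 2k - 2 ln L; lower is better."""
    return 2.0 * k - 2.0 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L; the per-parameter
    penalty grows with the sample size n."""
    return k * math.log(n) - 2.0 * log_lik

# Hypothetical fits: a standard ECM with 4 parameters and a complex ECM
# with 8 parameters on n = 200 observations (log-likelihoods made up).
n = 200
simple = dict(log_lik=-310.0, k=4)
complex_ = dict(log_lik=-305.0, k=8)
print(aic(**simple) > aic(**complex_))            # True: AIC picks the complex ECM
print(bic(n=n, **simple) < bic(n=n, **complex_))  # True: BIC picks the standard ECM
```

With ln(200) ≈ 5.3 per parameter, the four extra parameters cost more under BIC than the gain in fit, which mirrors the paper's finding that BIC favours the standard model while AIC favours the complex one.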

  6. A proposal of ecologic taxes based on thermo-economic performance of heat engine models

    Energy Technology Data Exchange (ETDEWEB)

    Barranco-Jimenez, M. A. [Departamento de Ciencias Basicas, Escuela Superior de Computo del IPN, Av. Miguel Bernal Esq. Juan de Dios Batiz U.P. Zacatenco CP 07738, D.F. (Mexico); Ramos-Gayosso, I. [Unidad de Administracion de Riesgos, Banco de Mexico, 5 de Mayo, Centro, D.F. (Mexico); Rosales, M. A. [Departamento de Fisica y Matematicas, Universidad de las Americas, Puebla Exhacienda Sta. Catarina Martir, Cholula 72820, Puebla (Mexico); Angulo-Brown, F. [Departamento de Fisica, Escuela Superior de Fisica y Matematicas del IPN, Edif. 9 U.P. Zacatenco CP 07738, D.F. (Mexico)

    2009-07-01

    Within the context of Finite-Time Thermodynamics (FTT), a simplified thermal power plant model (the so-called Novikov engine) is analyzed under economic criteria by means of the concepts of profit function and the costs involved in the performance of the power plant. In this study, two different heat transfer laws are used: the so-called Newton law of cooling and the Dulong-Petit law of cooling. Two FTT optimization criteria for the performance analysis are used: the maximum power regime (MP) and the so-called ecological criterion. This last criterion leads the engine model towards a mode of performance that appreciably diminishes the engine's wasted energy. In this work, it is shown that the energy-unit price produced under maximum power conditions is cheaper than that produced under maximum ecological (ME) conditions. This was accomplished by using a typical definition of the profit function stemming from economics. The MP regime produces considerably more wasted energy toward the environment, thus the MP energy-unit price is subsidized by nature. Due to this fact, an ecological tax is proposed, which could be a certain function of the price difference between the MP and ME modes of power production. (author)

  7. A Proposal of Ecologic Taxes Based on Thermo-Economic Performance of Heat Engine Models

    Directory of Open Access Journals (Sweden)

    Fernando Angulo-Brown

    2009-11-01

    Full Text Available Within the context of Finite-Time Thermodynamics (FTT), a simplified thermal power plant model (the so-called Novikov engine) is analyzed under economic criteria by means of the concepts of profit function and the costs involved in the performance of the power plant. In this study, two different heat transfer laws are used: the so-called Newton law of cooling and the Dulong-Petit law of cooling. Two FTT optimization criteria for the performance analysis are used: the maximum power regime (MP) and the so-called ecological criterion. This last criterion leads the engine model towards a mode of performance that appreciably diminishes the engine’s wasted energy. In this work, it is shown that the energy-unit price produced under maximum power conditions is cheaper than that produced under maximum ecological (ME) conditions. This was accomplished by using a typical definition of the profit function stemming from economics. The MP regime produces considerably more wasted energy toward the environment, thus the MP energy-unit price is subsidized by nature. Due to this fact, an ecological tax is proposed, which could be a certain function of the price difference between the MP and ME modes of power production.
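For the Newtonian heat-transfer case discussed above, the efficiencies at maximum power and at the maximum of the ecological function have well-known closed forms in the FTT literature; a minimal sketch (illustrative reservoir temperatures):

```python
import math

def eta_carnot(tc, th):
    """Carnot upper bound: 1 - Tc/Th."""
    return 1.0 - tc / th

def eta_max_power(tc, th):
    """Curzon-Ahlborn/Novikov efficiency at maximum power output
    (Newtonian heat transfer): 1 - sqrt(Tc/Th)."""
    return 1.0 - math.sqrt(tc / th)

def eta_ecological(tc, th):
    """Efficiency at the maximum of the ecological function for the same
    engine: 1 - sqrt(tau * (1 + tau) / 2), tau = Tc/Th (Angulo-Brown)."""
    tau = tc / th
    return 1.0 - math.sqrt(tau * (1.0 + tau) / 2.0)

tc, th = 300.0, 600.0  # cold and hot reservoir temperatures (K)
print(round(eta_max_power(tc, th), 4), round(eta_ecological(tc, th), 4))
# 0.2929 0.3876 -- the ecological regime trades power for less waste.
```

The ordering η_MP < η_ME < η_Carnot is the quantitative core of the abstract's argument: the MP regime discards more heat, so its cheaper energy-unit price is effectively subsidized by the environment.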

  8. An analytical model for the performance analysis of concurrent transmission in IEEE 802.15.4.

    Science.gov (United States)

    Gezer, Cengiz; Zanella, Alberto; Verdone, Roberto

    2014-03-20

    Interference is a serious cause of performance degradation for IEEE 802.15.4 devices. The effect of concurrent transmissions in IEEE 802.15.4 has generally been investigated by means of simulation or experimental activities. In this paper, a mathematical framework for the derivation of the chip, symbol and packet error probability of a typical IEEE 802.15.4 receiver in the presence of interference is proposed. Both non-coherent and coherent demodulation schemes are considered by our model under the assumption of the absence of thermal noise. Simulation results are also added to assess the validity of the mathematical framework when the effect of thermal noise cannot be neglected. Numerical results show that the proposed analysis is in agreement with measurement results in the literature under realistic working conditions.

  9. Clinical laboratory as an economic model for business performance analysis

    Science.gov (United States)

    Buljanović, Vikica; Patajac, Hrvoje; Petrovečki, Mladen

    2011-01-01

    Aim To perform SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. Methods Impact of possible threats to and weaknesses of the Clinical Laboratory at Našice General County Hospital business performance and use of strengths and opportunities to improve operating profit were simulated using models created on the basis of SWOT analysis results. The operating profit as a measure of profitability of the clinical laboratory was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters in the profit and loss account for 2008 were determined using opportunities and potential threats, and economic sensitivity analysis was made by using changes in the key parameters. The profit and loss account and economic sensitivity analysis were tools for quantifying the impact of changes in the revenues and expenses on the business operations of clinical laboratory. Results Results of simulation models showed that operational profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Also, operational gain could be increased to €535 804 if laboratory strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operational profit would decrease by €384 465. Conclusion The operational profit of the clinical laboratory could be significantly reduced if all threats became a reality and the current weaknesses remained the same. The operational profit could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor business operations of any clinical laboratory and improve its financial situation by
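The scenario logic of the SWOT-based simulation (shift the revenue and expense inputs of the profit and loss account, then recompute operating profit) can be sketched as follows; all figures are hypothetical, not the laboratory's actual data:

```python
# Minimal P&L scenario sketch in the spirit of the paper's SWOT-driven
# simulation. All monetary figures (EUR) are invented for illustration.

def operating_profit(revenue, expenses):
    """Operating profit = total revenue minus total expenses."""
    return revenue - expenses

def scenario(revenue, expenses, d_revenue=0.0, d_expenses=0.0):
    """Apply SWOT-derived shifts to the P&L inputs and return the profit."""
    return operating_profit(revenue + d_revenue, expenses + d_expenses)

base = scenario(1_200_000, 900_000)                        # baseline year
threats = scenario(1_200_000, 900_000, -150_000, 80_000)   # all threats realized
strengths = scenario(1_200_000, 900_000, 60_000, -20_000)  # strengths utilized
print(base, threats, strengths)  # 300000 70000 380000
```

Sensitivity analysis in this framework is just re-running `scenario` over a grid of input shifts and observing how the profit responds.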

  10. Clinical laboratory as an economic model for business performance analysis.

    Science.gov (United States)

    Buljanović, Vikica; Patajac, Hrvoje; Petrovecki, Mladen

    2011-08-15

To perform a SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve the business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. The impact of possible threats and weaknesses on the business performance of the Clinical Laboratory at Našice General County Hospital, and the use of strengths and opportunities to improve operating profit, were simulated using models created on the basis of the SWOT analysis results. The operating profit, as a measure of the profitability of the clinical laboratory, was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters of the profit and loss account for 2008 were determined using opportunities and potential threats, and an economic sensitivity analysis was made using changes in the key parameters. The profit and loss account and the economic sensitivity analysis were tools for quantifying the impact of changes in revenues and expenses on the business operations of the clinical laboratory. The simulation models showed that the operational profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Conversely, the operational gain could be increased to €535 804 if laboratory strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operational profit would decrease by €384 465. The operational profit of the clinical laboratory could be significantly reduced if all threats became a reality and the current weaknesses remained the same. The operational profit could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor the business operations of any clinical laboratory and improve its financial situation by implementing changes in the next fiscal
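The profit-and-loss logic described above (operating profit defined as total revenue minus total expenses, stressed under threat and opportunity scenarios) can be sketched in a few lines. All figures and scenario percentages below are illustrative assumptions, not the article's data.

```python
# Minimal sketch of a profit-and-loss sensitivity analysis.
# Baseline figures and scenario deltas are invented for illustration.

def operating_profit(revenue, expenses):
    """Operating profit = total revenue - total expenses."""
    return revenue - expenses

# Baseline year (illustrative values, EUR)
revenue, expenses = 1_200_000.0, 900_000.0
baseline = operating_profit(revenue, expenses)

# Threat scenario: revenue falls 10%, expenses rise 5%.
threat = operating_profit(revenue * 0.90, expenses * 1.05)

# Opportunity scenario: revenue rises 10%, expenses fall 5%.
opportunity = operating_profit(revenue * 1.10, expenses * 0.95)
```

Running each scenario through the same account makes the spread between worst and best case explicit, which is the quantity the SWOT simulation tracks.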

  11. A Framework for Improving Project Performance of Standard Design Models in Saudi Arabia

    Directory of Open Access Journals (Sweden)

    Shabbab Al-Otaib

    2013-07-01

Full Text Available Improving project performance in the construction industry poses several challenges for stakeholders. Recently, there have been frequent calls for the importance of adopting standardisation in improving construction design as well as the process, and a focus on learning mapping from other industries. The Saudi Ministry of Interior (SMoI) has adopted a new Standard Design Model (SDM) approach for the development of its construction programme, to effectively manage its complex project portfolio and improve project performance. A review of the existing literature indicates that despite the adoption of SDM repetitive projects, which enable learning from past mistakes and improving the performance of future projects, there is a lack of learning instruments to capture, store and disseminate Lessons Learnt (LL). This research proposes a framework for improving the project performance of SDMs in the Saudi construction industry. Eight case studies related to a typical standard design project were performed, which included interviews with 24 key stakeholders involved in the planning and implementation of SDM projects within the SMoI. The research identified 14 critical success factors (CSFs) that have a direct impact on SDM project performance. These are classified into three main CSF-related clusters: adaptability to the context; contract management; and construction management. A framework comprising the identified 14 CSFs was developed, refined and validated through a workshop with 12 key stakeholders in the SMoI construction programme. Additionally, a framework implementation process map was developed. Web-based tools and knowledge management (KM) were identified as core factors in the framework implementation strategy. Although many past CSF-related studies were conducted to develop a range of construction project performance improvement frameworks, the paper provides the first initiative to develop a framework to improve the performance of

  12. URBAN MODELLING PERFORMANCE OF NEXT GENERATION SAR MISSIONS

    Directory of Open Access Journals (Sweden)

    U. G. Sefercik

    2017-09-01

Full Text Available In synthetic aperture radar (SAR) technology, urban mapping and modelling have become possible with the revolutionary missions TerraSAR-X (TSX) and Cosmo-SkyMed (CSK) since 2007. These satellites offer 1 m spatial resolution in high-resolution spotlight imaging mode and are capable of high-quality digital surface model (DSM) acquisition for urban areas utilizing interferometric SAR (InSAR) technology. With the advantage of generation independent of seasonal weather conditions, TSX and CSK DSMs are much in demand by scientific users. The performance of SAR DSMs is influenced by distortions such as layover, foreshortening, shadow and double-bounce, which depend upon the imaging geometry. In this study, the potential of DSMs derived from convenient 1 m high-resolution spotlight (HS) InSAR pairs of CSK and TSX is validated by model-to-model absolute and relative accuracy estimations in an urban area. For the verification, an airborne laser scanning (ALS) DSM of the study area was used as the reference model. Results demonstrated that TSX and CSK urban DSMs are compatible in open, built-up and forest land forms, with an absolute accuracy of 8–10 m. The relative accuracies, based on the coherence of neighbouring pixels, are superior to the absolute accuracies for both CSK and TSX.

  13. Urban Modelling Performance of Next Generation SAR Missions

    Science.gov (United States)

    Sefercik, U. G.; Yastikli, N.; Atalay, C.

    2017-09-01

In synthetic aperture radar (SAR) technology, urban mapping and modelling have become possible with the revolutionary missions TerraSAR-X (TSX) and Cosmo-SkyMed (CSK) since 2007. These satellites offer 1 m spatial resolution in high-resolution spotlight imaging mode and are capable of high-quality digital surface model (DSM) acquisition for urban areas utilizing interferometric SAR (InSAR) technology. With the advantage of generation independent of seasonal weather conditions, TSX and CSK DSMs are much in demand by scientific users. The performance of SAR DSMs is influenced by distortions such as layover, foreshortening, shadow and double-bounce, which depend upon the imaging geometry. In this study, the potential of DSMs derived from convenient 1 m high-resolution spotlight (HS) InSAR pairs of CSK and TSX is validated by model-to-model absolute and relative accuracy estimations in an urban area. For the verification, an airborne laser scanning (ALS) DSM of the study area was used as the reference model. Results demonstrated that TSX and CSK urban DSMs are compatible in open, built-up and forest land forms, with an absolute accuracy of 8-10 m. The relative accuracies, based on the coherence of neighbouring pixels, are superior to the absolute accuracies for both CSK and TSX.
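The model-to-model absolute accuracy estimation described above reduces, at its core, to differencing two co-registered height grids and summarizing the residuals. A minimal sketch, assuming two aligned grids (a SAR-derived DSM and a reference ALS DSM) and using RMSE as the accuracy measure; the tiny grids are invented for illustration.

```python
# Sketch of DSM-to-DSM accuracy checking via height-difference RMSE.
import numpy as np

def dsm_rmse(dsm, reference):
    """Root-mean-square error of height differences between two grids."""
    diff = dsm - reference
    return float(np.sqrt(np.mean(diff ** 2)))

# Tiny illustrative grids (heights in metres)
reference = np.array([[10.0, 12.0], [11.0, 13.0]])
sar_dsm   = np.array([[11.0, 11.0], [12.0, 12.0]])
rmse = dsm_rmse(sar_dsm, reference)  # every cell differs by 1 m -> 1.0
```

In practice the grids must first be co-registered and masked to a common valid area; relative accuracy would instead look at differences between neighbouring pixels.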

  14. Photovoltaic Pixels for Neural Stimulation: Circuit Models and Performance.

    Science.gov (United States)

    Boinagrov, David; Lei, Xin; Goetz, Georges; Kamins, Theodore I; Mathieson, Keith; Galambos, Ludwig; Harris, James S; Palanker, Daniel

    2016-02-01

    Photovoltaic conversion of pulsed light into pulsed electric current enables optically-activated neural stimulation with miniature wireless implants. In photovoltaic retinal prostheses, patterns of near-infrared light projected from video goggles onto subretinal arrays of photovoltaic pixels are converted into patterns of current to stimulate the inner retinal neurons. We describe a model of these devices and evaluate the performance of photovoltaic circuits, including the electrode-electrolyte interface. Characteristics of the electrodes measured in saline with various voltages, pulse durations, and polarities were modeled as voltage-dependent capacitances and Faradaic resistances. The resulting mathematical model of the circuit yielded dynamics of the electric current generated by the photovoltaic pixels illuminated by pulsed light. Voltages measured in saline with a pipette electrode above the pixel closely matched results of the model. Using the circuit model, our pixel design was optimized for maximum charge injection under various lighting conditions and for different stimulation thresholds. To speed discharge of the electrodes between the pulses of light, a shunt resistor was introduced and optimized for high frequency stimulation.
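The circuit behaviour described above (a photocurrent charging a capacitive electrode during each light pulse, with a shunt resistor discharging it between pulses) can be illustrated with a lumped linear RC model. This is a sketch only: the paper's model uses voltage-dependent capacitances and Faradaic resistances, and all component values below are assumptions.

```python
# Toy lumped-circuit model of a photovoltaic pixel driving a capacitive
# electrode with a shunt resistor. Explicit Euler integration of
# C dv/dt = i_src - v/R. Component values are illustrative.

def simulate(i_photo=1e-6, c=1e-9, r_shunt=1e6,
             pulse=1e-3, period=4e-3, dt=1e-6):
    """Return electrode voltage at the end of the pulse and of the period."""
    v, t = 0.0, 0.0
    v_end_pulse = None
    while t < period:
        i_src = i_photo if t < pulse else 0.0   # light on only during pulse
        v += (i_src - v / r_shunt) / c * dt     # C dv/dt = i_src - v/R
        t += dt
        if v_end_pulse is None and t >= pulse:
            v_end_pulse = v
    return v_end_pulse, v

v_pulse, v_period = simulate()
```

With these values the charging time constant (RC = 1 ms) equals the pulse width, so the electrode charges to roughly 63% of its asymptote and then decays over three time constants before the next pulse, which is the role of the shunt.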

  15. Modeling impact of environmental factors on photovoltaic array performance

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jie; Sun, Yize; Xu, Yang [College of Mechanical Engineering, Donghua University NO.2999, North Renmin Road, Shanghai (China)

    2013-07-01

This paper presents a methodology to model and quantify the impact of three environmental factors, the ambient temperature, the incident irradiance and the wind speed, upon the performance of a photovoltaic array operating under outdoor conditions. First, a simple correlation relating operating temperature to the three environmental variables is validated for the range of wind speeds studied, 2-8, and for irradiance values between 200 and 1000. The root mean square error (RMSE) between modeled operating temperature and measured values is 1.19% and the mean bias error (MBE) is -0.09%. The environmental factors studied influence the I-V curves, P-V curves, and maximum-power outputs of the photovoltaic array. A cell-to-module-to-array mathematical model for photovoltaic panels is established in this paper, and a method defined as segmented iteration is adopted to solve the I-V curve expression and obtain the model I-V curves. The model I-V curves and P-V curves coincide well with the measured data points. The RMSE between numerically calculated maximum-power outputs and experimentally measured ones is 0.2307%, while the MBE is 0.0183%. In addition, a multivariable non-linear regression equation is proposed to eliminate the difference between numerically calculated and measured maximum-power outputs over the range of high ambient temperature and irradiance at noon and in the early afternoon. In conclusion, the proposed method is reasonably simple and accurate.
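The operating-temperature correlation discussed above maps ambient temperature, irradiance and wind speed to a module temperature. One commonly used empirical form is sketched below; the functional shape and both coefficients are illustrative assumptions, not the paper's fitted correlation.

```python
# Sketch of an empirical module operating-temperature correlation:
# hotter with irradiance, cooler with wind. Coefficients k and b are
# illustrative assumptions.

def cell_temperature(t_amb, g, wind, k=0.032, b=0.08):
    """T_cell [deg C]; t_amb [deg C], g [W/m^2], wind [m/s]."""
    return t_amb + k * g / (1.0 + b * wind)

t_low_wind  = cell_temperature(25.0, 800.0, 2.0)
t_high_wind = cell_temperature(25.0, 800.0, 8.0)
```

The validation step in the paper then amounts to computing RMSE and MBE between such predictions and measured module temperatures over the studied wind and irradiance ranges.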

  16. Water desalination price from recent performances: Modelling, simulation and analysis

    International Nuclear Information System (INIS)

    Metaiche, M.; Kettab, A.

    2005-01-01

The subject of the present article is the technical simulation of seawater desalination by a one-stage reverse osmosis system. Its objectives are a current valuation of the cost price using new membrane and permeator performances, the use of new means of simulation and modelling of desalination parameters, and identification of the main parameters influencing the cost price. We have taken as the simulation example the seawater desalting centre of Djannet (Boumerdes, Algeria). Present performances allow water desalting at a price of 0.5 $/m³, an interesting and promising price corresponding to a very acceptable product water quality, on the order of 269 ppm. It is important to run reverse osmosis desalting systems under high pressure, resulting in a further decrease of the desalting cost and the production of good-quality water. A poor choice of operating conditions produces high prices and unacceptable quality; however, the price can be decreased by relaxing the requirements on product quality. The seawater temperature has an effect on the cost price and quality. The installation of large desalting centres contributes to the decrease in prices. A very long and tedious calculation is required, which is impossible to conduct without programming and informatics tools. The use of the simulation model has been very efficient in the design of desalination centres that can perform at much improved prices. (author)
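A cost price like the 0.5 $/m³ quoted above is typically a levelized figure: annualized capital cost plus yearly operating cost, divided by yearly production. The sketch below shows that arithmetic with invented plant figures; it is not the Djannet centre's cost model.

```python
# Back-of-the-envelope levelized water cost. All inputs are illustrative.

def levelized_cost(capex, rate, years, opex_per_year, m3_per_year):
    """USD per cubic metre of product water."""
    # Capital recovery factor annualizes the investment over its lifetime.
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return (capex * crf + opex_per_year) / m3_per_year

cost = levelized_cost(capex=20e6, rate=0.06, years=20,
                      opex_per_year=1.5e6, m3_per_year=7e6)
```

With these assumed figures the result lands near 0.46 $/m³, which illustrates why plant size and operating conditions dominate the price: both enter the denominator and the opex term directly.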

  17. A Hybrid Fuzzy Model for Lean Product Development Performance Measurement

    Science.gov (United States)

    Osezua Aikhuele, Daniel; Mohd Turan, Faiz

    2016-02-01

In the effort for manufacturing companies to meet emerging consumer demands for mass-customized products, many are turning to the application of lean in their product development process, and this is gradually moving from being a competitive advantage to a necessity. However, due to a lack of clear understanding of lean performance measurements, many of these companies are unable to implement and fully integrate the lean principles into their product development process. The extensive literature shows that only a few studies have focused systematically on lean product development performance (LPDP) evaluation. In order to fill this gap, the study proposes a novel hybrid model based on the Fuzzy Reasoning Approach (FRA) and extensions of the Fuzzy-AHP and Fuzzy-TOPSIS methods for the assessment of LPDP. Unlike the existing methods, the model considers the importance weight of each of the decision makers (experts), since the performance criteria/attributes are required to be rated and these experts have different levels of expertise. The rating is done using a new fuzzy Likert rating scale (membership-scale), designed so that it can address problems resulting from information loss/distortion due to closed-form scaling and the ordinal nature of the existing Likert scale.

  18. Uncertainty assessment in building energy performance with a simplified model

    Directory of Open Access Journals (Sweden)

    Titikpina Fally

    2015-01-01

Full Text Available To assess a building's energy performance, the consumption predicted or estimated during the design stage is compared to the measured consumption once the building is operational. When evaluating this performance, many buildings show significant differences between the calculated and measured consumption. In order to assess the performance accurately and ensure the thermal efficiency of the building, it is necessary to evaluate the uncertainties involved, not only in measurement but also those induced by the propagation of the dynamic and static input data through the model being used. The evaluation of measurement uncertainty is based on both knowledge about the measurement process and the input quantities which influence the result of measurement. Measurement uncertainty can be evaluated within the framework of conventional statistics presented in the Guide to the Expression of Uncertainty in Measurement (GUM) as well as by Bayesian Statistical Theory (BST). Another choice is the use of numerical methods like Monte Carlo Simulation (MCS). In this paper, we propose to evaluate the uncertainty associated with the use of a simplified model for the estimation of the energy consumption of a given building. A detailed review and discussion of these three approaches (GUM, MCS and BST) is given. Therefore, an office building has been monitored and multiple temperature sensors have been mounted at candidate locations to get the required data. The monitored zone is composed of six offices and has an overall surface of 102 m².
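Of the three approaches listed above, MCS is the most direct to illustrate: sample the uncertain inputs, push each sample through the model, and summarize the spread of the outputs. The sketch below uses a deliberately simplified stand-in consumption model (Q = U·A·ΔT·hours); the model form, the input distributions, and their parameters are all invented, with only the 102 m² floor area taken from the text.

```python
# Monte Carlo propagation of input uncertainty through a toy
# consumption model. Input distributions are illustrative assumptions.
import random

random.seed(0)  # reproducible draws

def consumption(u, area, dt, hours):
    """Simplified energy consumption in kWh."""
    return u * area * dt * hours / 1000.0

samples = []
for _ in range(10_000):
    u  = random.gauss(1.5, 0.1)    # W/m^2K, uncertain heat-loss coefficient
    dt = random.gauss(15.0, 1.0)   # K, uncertain indoor-outdoor difference
    samples.append(consumption(u, 102.0, dt, 2000.0))

mean = sum(samples) / len(samples)
std = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
```

The standard deviation of the output samples plays the role of the combined uncertainty that GUM would obtain analytically from sensitivity coefficients; with non-linear models or non-Gaussian inputs the two can differ, which is one motivation for MCS.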

  19. Modelling of LOCA Tests with the BISON Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, Richard L [Idaho National Laboratory; Pastore, Giovanni [Idaho National Laboratory; Novascone, Stephen Rhead [Idaho National Laboratory; Spencer, Benjamin Whiting [Idaho National Laboratory; Hales, Jason Dean [Idaho National Laboratory

    2016-05-01

    BISON is a modern finite-element based, multidimensional nuclear fuel performance code that is under development at Idaho National Laboratory (USA). Recent advances of BISON include the extension of the code to the analysis of LWR fuel rod behaviour during loss-of-coolant accidents (LOCAs). In this work, BISON models for the phenomena relevant to LWR cladding behaviour during LOCAs are described, followed by presentation of code results for the simulation of LOCA tests. Analysed experiments include separate effects tests of cladding ballooning and burst, as well as the Halden IFA-650.2 fuel rod test. Two-dimensional modelling of the experiments is performed, and calculations are compared to available experimental data. Comparisons include cladding burst pressure and temperature in separate effects tests, as well as the evolution of fuel rod inner pressure during ballooning and time to cladding burst. Furthermore, BISON three-dimensional simulations of separate effects tests are performed, which demonstrate the capability to reproduce the effect of azimuthal temperature variations in the cladding. The work has been carried out in the frame of the collaboration between Idaho National Laboratory and Halden Reactor Project, and the IAEA Coordinated Research Project FUMAC.

  20. Performance of process-based models for simulation of grain N in crop rotations across Europe

    DEFF Research Database (Denmark)

    Yin, Xiaogang; Kersebaum, KC; Kollas, C

    2017-01-01

    and rainfed treatments. Moreover, the multi-model mean provided better predictions of grain N compared to any individual model. In regard to the Individual models, DAISY, FASSET, HERMES, MONICA and STICS are suitable for predicting grain N of the main crops in typical European crop rotations, which all...

  1. ICT evaluation models and performance of medium and small enterprises

    Directory of Open Access Journals (Sweden)

    Bayaga Anass

    2014-01-01

Full Text Available Building on prior research related to (1) the impact of information communication technology (ICT) and (2) operational risk management (ORM) in the context of medium and small enterprises (MSEs), the focus of this study was to investigate the relationship between (1) ICT operational risk management (ORM) and (2) performance of MSEs. To achieve the focus, the research investigated evaluating models for understanding the value of ICT ORM in MSEs. Multiple regression, Repeated-Measures Analysis of Variance (RM-ANOVA) and Repeated-Measures Multivariate Analysis of Variance (RM-MANOVA) were performed. The findings of the distribution revealed that only one variable made a significant percentage contribution to the level of ICT operation in MSEs, the Payback method (β = 0.410, p < .000). It may thus be inferred that the Payback method is the prominent variable, explaining the variation in the level of evaluation models affecting ICT adoption within MSEs. Conclusively, in answering the two questions of (1) degree of variability explained and (2) predictors, the results revealed that the variable contributed approximately 88.4% of the variations in evaluation models affecting ICT adoption within MSEs. The analysis of variance also revealed that the regression coefficients were real and did not occur by chance.

  2. Behavioral Model of High Performance Camera for NIF Optics Inspection

    International Nuclear Information System (INIS)

    Hackel, B M

    2007-01-01

    The purpose of this project was to develop software that will model the behavior of the high performance Spectral Instruments 1000 series Charge-Coupled Device (CCD) camera located in the Final Optics Damage Inspection (FODI) system on the National Ignition Facility. NIF's target chamber will be mounted with 48 Final Optics Assemblies (FOAs) to convert the laser light from infrared to ultraviolet and focus it precisely on the target. Following a NIF shot, the optical components of each FOA must be carefully inspected for damage by the FODI to ensure proper laser performance during subsequent experiments. Rapid image capture and complex image processing (to locate damage sites) will reduce shot turnaround time; thus increasing the total number of experiments NIF can conduct during its 30 year lifetime. Development of these rapid processes necessitates extensive offline software automation -- especially after the device has been deployed in the facility. Without access to the unique real device or an exact behavioral model, offline software testing is difficult. Furthermore, a software-based behavioral model allows for many instances to be running concurrently; this allows multiple developers to test their software at the same time. Thus it is beneficial to construct separate software that will exactly mimic the behavior and response of the real SI-1000 camera

  3. A Tool for Performance Modeling of Parallel Programs

    Directory of Open Access Journals (Sweden)

    J.A. González

    2003-01-01

Full Text Available Current analytical performance-prediction models try to characterize the performance behavior of actual machines through a small set of parameters. In practice, substantial deviations are observed. These differences are due to factors such as memory hierarchies or network latency. A natural approach is to associate a different proportionality constant with each basic block and, analogously, to associate different latencies and bandwidths with each "communication block". Unfortunately, using this approach implies that the evaluation of parameters must be done for each algorithm. This is a heavy task, involving experiment design, timing, statistics, pattern recognition and multi-parameter fitting algorithms. Software support is required. We present a compiler that takes as source a C program annotated with complexity formulas and produces as output an instrumented code. The trace files obtained from the execution of the resulting code are analyzed with an interactive interpreter, giving us, among other information, the values of those parameters.
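The per-block parameter evaluation described above is, at its simplest, a least-squares fit: measure a block's run time at several problem sizes and estimate the proportionality constant in its complexity formula. A minimal sketch for a linear-cost block (t ≈ c·n); the timings are invented.

```python
# Least-squares estimate of the proportionality constant c in t ~ c * n,
# given timings of one basic block at several problem sizes.

def fit_constant(sizes, times):
    """c minimizing sum over measurements of (t - c*n)^2."""
    num = sum(n * t for n, t in zip(sizes, times))
    den = sum(n * n for n in sizes)
    return num / den

sizes = [100, 200, 400, 800]
times = [0.011, 0.019, 0.042, 0.079]   # seconds, hypothetical measurements
c = fit_constant(sizes, times)
```

A real tool fits many such constants simultaneously, one per basic block and communication block, which is why the paper leans on automated instrumentation and multi-parameter fitting rather than hand analysis.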

  4. A Typical Verification Challenge for the GRID

    NARCIS (Netherlands)

    van de Pol, Jan Cornelis; Bal, H. E.; Brim, L.; Leucker, M.

    2008-01-01

    A typical verification challenge for the GRID community is presented. The concrete challenge is to implement a simple recursive algorithm for finding the strongly connected components in a graph. The graph is typically stored in the collective memory of a number of computers, so a distributed

  5. A human performance modelling approach to intelligent decision support systems

    Science.gov (United States)

    Mccoy, Michael S.; Boys, Randy M.

    1987-01-01

    Manned space operations require that the many automated subsystems of a space platform be controllable by a limited number of personnel. To minimize the interaction required of these operators, artificial intelligence techniques may be applied to embed a human performance model within the automated, or semi-automated, systems, thereby allowing the derivation of operator intent. A similar application has previously been proposed in the domain of fighter piloting, where the demand for pilot intent derivation is primarily a function of limited time and high workload rather than limited operators. The derivation and propagation of pilot intent is presented as it might be applied to some programs.

  6. Use of total plant models for plant performance optimisation

    International Nuclear Information System (INIS)

    Ardron, K.H.

    2004-01-01

    Consideration is given to the mathematical techniques used by Nuclear Electric for steady state power plant analysis and performance optimisation. A quasi-Newton method is deployed to calculate the steady state followed by a model fitting procedure based on Lagrange's method to yield a fit to measured plant data. An optimising algorithm is used to establish maximum achievable power and efficiency. An example is described in which the techniques are applied to identify the plant constraints preventing output increases at a Nuclear Electric Advanced Gas Cooled Reactor. (author)
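The steady-state solve mentioned above can be illustrated with the simplest quasi-Newton scheme, a secant iteration on a single residual. The scalar "plant model" below is a made-up heat balance, not Nuclear Electric's model; a real plant solve iterates a large coupled system the same way.

```python
# Secant (quasi-Newton) iteration on a toy steady-state heat balance:
# find the temperature at which generated and removed heat balance.

def residual(t):
    """Imbalance between heat generated and heat removed at temperature t."""
    return 1000.0 - 12.0 * (t - 300.0)   # illustrative linear balance

def secant(f, x0, x1, tol=1e-9, max_iter=50):
    """Root of f using secant updates from two starting guesses."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

t_steady = secant(residual, 300.0, 400.0)
```

The model-fitting step described in the abstract then adjusts model parameters (Lagrange's method in the paper) so that such a computed steady state matches measured plant data.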

  7. Analytical and numerical performance models of a Heisenberg Vortex Tube

    Science.gov (United States)

    Bunge, C. D.; Cavender, K. A.; Matveev, K. I.; Leachman, J. W.

    2017-12-01

    Analytical and numerical investigations of a Heisenberg Vortex Tube (HVT) are performed to estimate the cooling potential with cryogenic hydrogen. The Ranque-Hilsch Vortex Tube (RHVT) is a device that tangentially injects a compressed fluid stream into a cylindrical geometry to promote enthalpy streaming and temperature separation between inner and outer flows. The HVT is the result of lining the inside of a RHVT with a hydrogen catalyst. This is the first concept to utilize the endothermic heat of para-orthohydrogen conversion to aid primary cooling. A review of 1st order vortex tube models available in the literature is presented and adapted to accommodate cryogenic hydrogen properties. These first order model predictions are compared with 2-D axisymmetric Computational Fluid Dynamics (CFD) simulations.

  8. Performance of the demonstration model of DIOS FXT

    Science.gov (United States)

    Tawara, Yuzuru; Sakurai, Ikuya; Masuda, Tadashi; Torii, Tatsuharu; Matsushita, Kohji; Ramsey, Brian D.

    2009-08-01

To search for the warm-hot intergalactic medium (WHIM), a small satellite mission, DIOS (Diffuse Intergalactic Oxygen Surveyor), is planned, and a specially designed four-stage X-ray telescope (FXT) has been developed as the best-fit optics to obtain a wide field of view and large effective area. Based on previous design work and the mirror fabrication technology used in the Suzaku mission, we made a small demonstration model of the DIOS FXT. This model has a focal length of 700 mm and consists of a quadrant housing and four-stage mirror sets with radii of 150-180 mm and a mirror height of 40 mm per stage. We performed X-ray measurements for one set of four-stage mirrors with a radius of 180 mm. From the results of the optical and X-ray measurements, it was found that tighter control is required over the positioning and fabrication process of each mirror even to reach an angular resolution of several arcmin.

  9. BISON and MARMOT Development for Modeling Fast Reactor Fuel Performance

    Energy Technology Data Exchange (ETDEWEB)

    Gamble, Kyle Allan Lawrence [Idaho National Lab. (INL), Idaho Falls, ID (United States); Williamson, Richard L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schwen, Daniel [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Novascone, Stephen Rhead [Idaho National Lab. (INL), Idaho Falls, ID (United States); Medvedev, Pavel G. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    BISON and MARMOT are two codes under development at the Idaho National Laboratory for engineering scale and lower length scale fuel performance modeling. It is desired to add capabilities for fast reactor applications to these codes. The fast reactor fuel types under consideration are metal (U-Pu-Zr) and oxide (MOX). The cladding types of interest include 316SS, D9, and HT9. The purpose of this report is to outline the proposed plans for code development and provide an overview of the models added to the BISON and MARMOT codes for fast reactor fuel behavior. A brief overview of preliminary discussions on the formation of a bilateral agreement between the Idaho National Laboratory and the National Nuclear Laboratory in the United Kingdom is presented.

  10. Predicting the Impacts of Intravehicular Displays on Driving Performance with Human Performance Modeling

    Science.gov (United States)

    Mitchell, Diane Kuhl; Wojciechowski, Josephine; Samms, Charneta

    2012-01-01

    A challenge facing the U.S. National Highway Traffic Safety Administration (NHTSA), as well as international safety experts, is the need to educate car drivers about the dangers associated with performing distraction tasks while driving. Researchers working for the U.S. Army Research Laboratory have developed a technique for predicting the increase in mental workload that results when distraction tasks are combined with driving. They implement this technique using human performance modeling. They have predicted workload associated with driving combined with cell phone use. In addition, they have predicted the workload associated with driving military vehicles combined with threat detection. Their technique can be used by safety personnel internationally to demonstrate the dangers of combining distracter tasks with driving and to mitigate the safety risks.

  11. Performance Analysis of Different NeQuick Ionospheric Model Parameters

    Directory of Open Access Journals (Sweden)

    WANG Ningbo

    2017-04-01

Full Text Available Galileo adopts the NeQuick model for single-frequency ionospheric delay corrections. For the standard operation of Galileo, the NeQuick model is driven by the effective ionization level parameter Az instead of the solar activity level index, and the three broadcast ionospheric coefficients are determined by a second-degree polynomial through fitting the Az values estimated from globally distributed Galileo Sensor Stations (GSS). In this study, the processing strategies for the estimation of NeQuick ionospheric coefficients are discussed and the characteristics of the NeQuick coefficients are analyzed. The accuracy of the Global Positioning System (GPS) broadcast Klobuchar, original NeQuick2 and fitted NeQuickC as well as Galileo broadcast NeQuickG models is evaluated over continental and oceanic regions, respectively, in comparison with the ionospheric total electron content (TEC) provided by global ionospheric maps (GIM), GPS test stations and the JASON-2 altimeter. The results show that NeQuickG can mitigate ionospheric delay by 54.2%~65.8% on a global scale, and NeQuickC can correct for 71.1%~74.2% of the ionospheric delay. NeQuick2 performs at the same level as NeQuickG, which is a bit better than the GPS broadcast Klobuchar model.
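The broadcast-coefficient step described above fits a second-degree polynomial to per-station Az estimates as a function of modified dip latitude (MODIP), Az(μ) = a0 + a1·μ + a2·μ². A minimal sketch of that fit; the station Az values and MODIP latitudes below are invented, not GSS data.

```python
# Least-squares fit of Az as a quadratic in MODIP, mimicking the
# derivation of the three broadcast coefficients. Data are illustrative.
import numpy as np

modip = np.array([-60.0, -30.0, 0.0, 30.0, 60.0])   # degrees
az    = np.array([80.0, 95.0, 100.0, 96.0, 82.0])   # per-station Az estimates

# np.polyfit returns the highest-degree coefficient first.
a2, a1, a0 = np.polyfit(modip, az, deg=2)

def az_model(mu):
    """Effective ionization level at modified dip latitude mu."""
    return a0 + a1 * mu + a2 * mu ** 2
```

The three fitted numbers (a0, a1, a2) are what the system broadcasts; each receiver reconstructs Az from its own MODIP and drives NeQuick with it.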

  12. The performance indicators of model projects. A special evaluation

    International Nuclear Information System (INIS)

    1995-11-01

As a result of the acknowledgment of the key role of the Model Project concept in the Agency's Technical Co-operation Programme, the present review of the objectives of the model projects now in operation was undertaken, as recommended by the Board of Governors, to determine at an early stage: the extent to which the present objectives have been defined in a measurable way; whether objectively verifiable performance indicators and success criteria had been identified for each project; and whether mechanisms to obtain feedback on the achievements had been foreseen. The overall budget for the 23 model projects, as approved from 1994 to 1998, amounts to $32,557,560, of which 45% is funded by the Technical Co-operation Fund. This represents an average investment of about $8 million per year, that is, over 15% of the annual TC budget. The conceptual importance of the Model Project initiative, as well as the significant funds allocated to these projects, led the Secretariat to plan the methods to be used to determine their socio-economic impact. 1 tab

  13. Simulation model of a twin-tail, high performance airplane

    Science.gov (United States)

    Buttrill, Carey S.; Arbuckle, P. Douglas; Hoffler, Keith D.

    1992-01-01

The mathematical model and associated computer program to simulate a twin-tailed high performance fighter airplane (McDonnell Douglas F/A-18) are described. The simulation program is written in the Advanced Continuous Simulation Language. The simulation math model includes the nonlinear six degree-of-freedom rigid-body equations, an engine model, sensors, and first order actuators with rate and position limiting. A simplified form of the F/A-18 digital control laws (version 8.3.3) is implemented. The simulated control law includes only inner-loop augmentation in the up-and-away flight mode. The aerodynamic forces and moments are calculated from a wind-tunnel-derived database using table look-ups with linear interpolation. The aerodynamic database has an angle-of-attack range of -10 to +90 degrees and a sideslip range of -20 to +20 degrees. The effects of elastic deformation are incorporated in a quasi-static-elastic manner. Elastic degrees of freedom are not actively simulated. In the engine model, the throttle-commanded steady-state thrust level and the dynamic response characteristics of the engine are based on airflow rate as determined from a table look-up. Afterburner dynamics are switched in at a threshold based on the engine airflow and commanded thrust.

  14. Java Programs for Using Newmark's Method and Simplified Decoupled Analysis to Model Slope Performance During Earthquakes

    Science.gov (United States)

    Jibson, Randall W.; Jibson, Matthew W.

    2003-01-01

    Landslides typically cause a large proportion of earthquake damage, and the ability to predict slope performance during earthquakes is important for many types of seismic-hazard analysis and for the design of engineered slopes. Newmark's method for modeling a landslide as a rigid-plastic block sliding on an inclined plane provides a useful method for predicting approximate landslide displacements. Newmark's method estimates the displacement of a potential landslide block as it is subjected to earthquake shaking from a specific strong-motion record (earthquake acceleration-time history). A modification of Newmark's method, decoupled analysis, allows modeling landslides that are not assumed to be rigid blocks. This open-file report is available on CD-ROM and contains Java programs intended to facilitate performing both rigorous and simplified Newmark sliding-block analysis and a simplified model of decoupled analysis. For rigorous analysis, 2160 strong-motion records from 29 earthquakes are included along with a search interface for selecting records based on a wide variety of record properties. Utilities are available that allow users to add their own records to the program and use them for conducting Newmark analyses. Also included is a document containing detailed information about how to use Newmark's method to model dynamic slope performance. This program will run on any platform that supports the Java Runtime Environment (JRE) version 1.3, including Windows, Mac OS X, Linux, Solaris, etc. A minimum of 64 MB of available RAM is needed, and the fully installed program requires 400 MB of disk space.
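
    The rigid-block integration at the heart of Newmark's method is compact enough to sketch directly. The following is a minimal illustration (not the program on the CD-ROM), assuming a one-directional sliding block and a synthetic acceleration pulse:

```python
def newmark_displacement(accel, dt, a_crit):
    """Rigid-block Newmark sliding analysis (downslope sliding only).

    accel  : ground acceleration time history (m/s^2)
    dt     : sample interval (s)
    a_crit : critical (yield) acceleration of the block (m/s^2)
    Returns the cumulative block displacement (m).
    """
    vel, disp = 0.0, 0.0
    for a in accel:
        # The block moves relative to the ground while a > a_crit, and
        # decelerates at a_crit once it is already sliding.
        if vel > 0.0 or a > a_crit:
            vel = max(vel + (a - a_crit) * dt, 0.0)  # no upslope sliding
            disp += vel * dt
    return disp

# Synthetic record: a 1 s pulse of 3 m/s^2, then quiet; yield acceleration 1 m/s^2
record = [3.0 if i < 100 else 0.0 for i in range(200)]
print(newmark_displacement(record, 0.01, 1.0))
```

    Shaking that never exceeds the yield acceleration produces zero displacement, which is the defining property of the rigid-plastic block model.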

  15. How to kill a tree: empirical mortality models for 18 species and their performance in a dynamic forest model.

    Science.gov (United States)

    Hülsmann, Lisa; Bugmann, Harald; Cailleret, Maxime; Brang, Peter

    2018-03-01

    Dynamic Vegetation Models (DVMs) are designed to simulate forest succession and species range dynamics under current and future conditions, based on mathematical representations of the three key processes of regeneration, growth, and mortality. However, mortality formulations in DVMs are typically coarse and often lack an empirical basis, which increases the uncertainty of projections of future forest dynamics and hinders their use for developing adaptation strategies to climate change. Sound tree mortality models are therefore much needed. We developed parsimonious, species-specific mortality models for 18 European tree species using >90,000 records from inventories in Swiss and German strict forest reserves along a considerable environmental gradient. We comprehensively evaluated model performance and incorporated the new mortality functions in the dynamic forest model ForClim. Tree mortality was successfully predicted by tree size and growth. Only a few species required additional covariates in their final model to account for aspects of stand structure or climate. The relationships between mortality and its predictors reflect the indirect influences of resource availability and tree vitality, which are further shaped by species-specific attributes such as maximum longevity and shade tolerance. Considering that the behavior of the models was biologically meaningful, and that their performance was reasonably high and not impacted by changes in the sampling design, we suggest that the mortality algorithms developed here are suitable for implementation and evaluation in DVMs. In the DVM ForClim, the new mortality functions resulted in simulations of stand basal area and species composition that were generally close to historical observations, although ForClim performance was poorer than with the original, coarse mortality formulation. The difficulties of simulating stand structure and species composition were most evident for Fagus sylvatica L.
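
    As a toy illustration of the kind of parsimonious mortality function described here, a logistic model in tree size (dbh) and recent growth can be written in a few lines. The coefficients below are invented for illustration, not the fitted values from the paper:

```python
import math

def mortality_probability(dbh_cm, growth_mm_yr, b0=-2.5, b1=0.01, b2=-1.2):
    """Annual mortality probability from tree size (dbh) and recent growth.

    The logistic form mirrors the paper's predictor set (size and growth);
    the coefficient values here are hypothetical, chosen only so that
    slow-growing trees come out at higher risk.
    """
    z = b0 + b1 * dbh_cm + b2 * growth_mm_yr
    return 1.0 / (1.0 + math.exp(-z))

# A slow-growing 30 cm tree should carry a higher modelled risk than a
# fast-growing one of the same size
print(mortality_probability(30, 0.1), mortality_probability(30, 2.0))
```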

  16. Modelling of green roofs' hydrologic performance using EPA's SWMM.

    Science.gov (United States)

    Burszta-Adamiak, E; Mrowiec, M

    2013-01-01

    Green roofs significantly increase water retention and thus improve the management of rain water in urban areas. In Poland, as in many other European countries, excess rainwater resulting from snowmelt and heavy rainfall contributes to local flooding in urban areas. The opportunity to reduce surface runoff and thereby flood risk is among the reasons why green roofs are increasingly used in this country as well. However, there are relatively few data on their in situ performance. In this study, storm water performance was simulated for green roof experimental plots using the Storm Water Management Model (SWMM) with the Low Impact Development (LID) Controls module (version 5.0.022). The model takes many parameters for each green roof layer, but the simulation results were unsatisfactory with respect to the hydrologic response of the green roofs. For the majority of the tested rain events, the Nash coefficient had negative values, indicating a weak fit between observed and simulated flow-rates. The complexity of the LID module therefore does not translate into greater accuracy. Further research at a technical scale is needed to determine the role of green roof slope, vegetation cover and the drying process during inter-event periods.
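
    The Nash coefficient (Nash-Sutcliffe efficiency) used to judge the fit compares the model's squared error with the variance of the observations; values at or below zero mean the model predicts no better than the observed mean, which is what the negative values above indicate. A minimal implementation:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 is a perfect fit; values <= 0
    mean the model is no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

obs = [0.0, 1.0, 4.0, 2.0, 0.5]
print(nash_sutcliffe(obs, obs))               # a perfect fit scores 1.0
print(nash_sutcliffe(obs, [5.0] * len(obs)))  # a constant bad guess goes negative
```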

  17. THE PENA BLANCA NATURAL ANALOGUE PERFORMANCE ASSESSMENT MODEL

    Energy Technology Data Exchange (ETDEWEB)

    G. Saulnier and W. Statham

    2006-04-16

    The Nopal I uranium mine in the Sierra Pena Blanca, Chihuahua, Mexico serves as a natural analogue to the Yucca Mountain repository. The Pena Blanca Natural Analogue Performance Assessment Model simulates the mobilization and transport of radionuclides that are released from the mine and transported to the saturated zone. The model uses probabilistic simulations of hydrogeologic processes that are analogous to the processes that occur at the Yucca Mountain site. The Nopal I uranium deposit lies in fractured, welded, and altered rhyolitic ash-flow tuffs that overlie carbonate rocks, a setting analogous to the geologic formations at the Yucca Mountain site. The Nopal I mine site has the following characteristics analogous to the Yucca Mountain repository site: (1) analogous source--the UO{sub 2} uranium ore deposit corresponds to spent nuclear fuel in the repository; (2) analogous geology--fractured, welded, and altered rhyolitic ash-flow tuffs; (3) analogous climate--semiarid to arid; (4) analogous setting--volcanic tuffs overlying carbonate rocks; (5) analogous geochemistry--oxidizing conditions; and (6) analogous hydrogeology--the ore deposit lies in the unsaturated zone above the water table.

  18. Graphical User Interface for Simulink Integrated Performance Analysis Model

    Science.gov (United States)

    Durham, R. Caitlyn

    2009-01-01

    The J-2X Engine (built by Pratt & Whitney Rocketdyne), in the Upper Stage of the Ares I Crew Launch Vehicle, will only start within a certain range of temperature and pressure for its Liquid Hydrogen and Liquid Oxygen propellants. The purpose of the Simulink Integrated Performance Analysis Model is to verify that in all reasonable conditions the temperature and pressure of the propellants are within the required J-2X engine start boxes. In order to run the simulation, test variables must be entered at all reasonable values of parameters such as heat leak and mass flow rate. To make this testing process as efficient as possible, saving the maximum amount of time and money, and to show that the J-2X engine will start when it is required to do so, a graphical user interface (GUI) was created to allow the input of values to be used as parameters in the Simulink Model, without opening or altering the contents of the model. The GUI must allow test data to come from Microsoft Excel files, allow those values to be edited before testing, place those values into the Simulink Model, and get the output from the Simulink Model. The GUI was built using MATLAB, and runs the Simulink simulation when the Simulate option is activated. After running the simulation, the GUI constructs a new Microsoft Excel file, as well as a MATLAB matrix file, using the output values for each test of the simulation so that they may be graphed and compared to other values.

  19. A Critical Analysis of Measurement Models of Export Performance

    Directory of Open Access Journals (Sweden)

    Jorge Carneiro

    2007-05-01

    Poor conceptualization of the export performance construct may undermine theory development efforts and may be one of the reasons behind the often conflicting findings in empirical research on the export performance phenomenon. This article reviews the conceptual and empirical literature and proposes a new analytical scheme that may serve as a standard for judging content validity and a guiding yardstick for drawing operational representations of the construct. A critical assessment of some of the most frequently cited measurement frameworks, followed by an analysis of recent (1999-2004) empirical research, leaves no doubt that there are flaws in the conceptualization and operationalization of the performance construct that ought to be addressed. A new measurement model is advanced along with some guidelines which are suggested for its future empirical validation. The new measurement framework allegedly improves on other past efforts in terms of breadth of coverage of the construct’s domain (content validity). It also offers a measurement perspective (with the simultaneous use of both formative and reflective approaches) that appears to reflect better the nature of the construct.

  20. Modeling illumination performance of plastic optical fiber passive daylighting system

    International Nuclear Information System (INIS)

    Sulaiman, F.; Ahmad, A.; Ahmed, A.Z.

    2006-01-01

    One of the most direct methods of utilizing solar energy for energy conservation is to bring natural light indoors to light up an area. This paper reports on an investigation of the feasibility of utilizing large core optical fibers to convey and distribute solar light passively throughout residential or commercial structures. The focus of this study is the mathematical modeling of the illumination performance and light transmission efficiency of solid core end-lit fiber for optical daylighting systems. The MATLAB simulations cover the optical fiber transmittance for glass and plastic fibers, illumination performance over lengths of plastic end-lit fiber, spectral transmission, light intensity loss through large diameter solid core optical fibers, as well as the transmission efficiency of the optical fiber itself. It was found that plastic optical fiber has less transmission loss over the distance of the fiber run, which clearly shows that plastic optical fiber should be optimized for emitting visible light. The findings from the analysis of the performance of large diameter optical fibers for daylighting systems suggest that they are feasible for energy efficient lighting systems in commercial or residential buildings.

  1. Job Demands-Control-Support model and employee safety performance.

    Science.gov (United States)

    Turner, Nick; Stride, Chris B; Carter, Angela J; McCaughey, Deirdre; Carroll, Anthony E

    2012-03-01

    The aim of this study was to explore whether work characteristics (job demands, job control, social support) comprising Karasek and Theorell's (1990) Job Demands-Control-Support framework predict employee safety performance (safety compliance and safety participation; Neal and Griffin, 2006). We used cross-sectional data of self-reported work characteristics and employee safety performance from 280 healthcare staff (doctors, nurses, and administrative staff) from Emergency Departments of seven hospitals in the United Kingdom. We analyzed these data using a structural equation model that simultaneously regressed safety compliance and safety participation on the main effects of each of the aforementioned work characteristics, their two-way interactions, and the three-way interaction among them, while controlling for demographic, occupational, and organizational characteristics. Social support was positively related to safety compliance, and both job control and the two-way interaction between job control and social support were positively related to safety participation. How work design is related to employee safety performance remains an important area for research and provides insight into how organizations can improve workplace safety. The current findings emphasize the importance of the co-worker in promoting both safety compliance and safety participation. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.

  2. Beam modeling and VMAT performance with the Agility 160-leaf multileaf collimator.

    Science.gov (United States)

    Bedford, James L; Thomas, Michael D R; Smyth, Gregory

    2013-05-06

    The Agility multileaf collimator (Elekta AB, Stockholm, Sweden) has 160 leaves of projected width 0.5 cm at the isocenter, with a maximum leaf speed of 3.5 cm/s. These characteristics promise to facilitate fast and accurate delivery of radiotherapy, particularly volumetric-modulated arc therapy (VMAT). The aim of this study is therefore to create a beam model for the Pinnacle3 treatment planning system (Philips Radiation Oncology Systems, Fitchburg, WI), and to use this beam model to explore the performance of the Agility MLC in delivery of VMAT. A 6 MV beam model was created and verified by measuring doses under irregularly shaped fields. VMAT treatment plans for five typical head-and-neck patients were created using the beam model and delivered using both binned and continuously variable dose rate (CVDR). Results were compared with those for an MLCi unit without CVDR. The beam model has similar parameters to those of an MLCi model, with interleaf leakage of only 0.2%. The verification of irregular fields shows a mean agreement between measured and planned dose of 1.3% (planned dose higher). The Agility VMAT head-and-neck plans show equivalent plan quality and delivery accuracy to those for an MLCi unit, with 95% of verification measurements within 3% and 3 mm of planned dose. Mean delivery time is 133 s with the Agility head and CVDR, 171 s without CVDR, and 282 s with an MLCi unit. Pinnacle3 has therefore been shown to model the Agility MLC accurately, and to provide accurate VMAT treatment plans which can be delivered significantly faster with Agility than with an MLCi.

  3. Assigning probability distributions to input parameters of performance assessment models

    International Nuclear Information System (INIS)

    Mishra, Srikanta

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.
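
    As a toy illustration of the "fitting continuous distributions to data" step, for a lognormal distribution the maximum likelihood estimates are simply the mean and (population) standard deviation of the log-transformed data, so the fit reduces to two lines of arithmetic. The sample below is synthetic, not data from the report:

```python
import math
import random
import statistics

def fit_lognormal(data):
    """MLE for a lognormal: mean and population std dev of log(data)."""
    logs = [math.log(x) for x in data]
    return statistics.fmean(logs), statistics.pstdev(logs)

# Synthetic sample with known parameters mu = 1.0, sigma = 0.5
random.seed(1)
sample = [random.lognormvariate(1.0, 0.5) for _ in range(20000)]
mu_hat, sigma_hat = fit_lognormal(sample)
print(mu_hat, sigma_hat)  # should land close to (1.0, 0.5)
```

    Method of moments on the log scale gives the same estimates, which is why the two techniques are often presented together for this family.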

  4. Assigning probability distributions to input parameters of performance assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, Srikanta [INTERA Inc., Austin, TX (United States)

    2002-02-01

    This study presents an overview of various approaches for assigning probability distributions to input parameters and/or future states of performance assessment models. Specifically, three broad approaches are discussed for developing input distributions: (a) fitting continuous distributions to data, (b) subjective assessment of probabilities, and (c) Bayesian updating of prior knowledge based on new information. The report begins with a summary of the nature of data and distributions, followed by a discussion of several common theoretical parametric models for characterizing distributions. Next, various techniques are presented for fitting continuous distributions to data. These include probability plotting, method of moments, maximum likelihood estimation and nonlinear least squares analysis. The techniques are demonstrated using data from a recent performance assessment study for the Yucca Mountain project. Goodness of fit techniques are also discussed, followed by an overview of how distribution fitting is accomplished in commercial software packages. The issue of subjective assessment of probabilities is dealt with in terms of the maximum entropy distribution selection approach, as well as some common rules for codifying informal expert judgment. Formal expert elicitation protocols are discussed next, and are based primarily on the guidance provided by the US NRC. The Bayesian framework for updating prior distributions (beliefs) when new information becomes available is discussed. A simple numerical approach is presented for facilitating practical applications of the Bayes theorem. Finally, a systematic framework for assigning distributions is presented: (a) for the situation where enough data are available to define an empirical CDF or fit a parametric model to the data, and (b) to deal with the situation where only a limited amount of information is available.

  5. Comparative performance of high-fidelity training models for flexible ureteroscopy: Are all models effective?

    Directory of Open Access Journals (Sweden)

    Shashikant Mishra

    2011-01-01

    Objective: We performed a comparative study of high-fidelity training models for flexible ureteroscopy (URS). Our objective was to determine whether high-fidelity non-virtual reality (VR) models are as effective as the VR model in teaching flexible URS skills. Materials and Methods: Twenty-one trained urologists without clinical experience of flexible URS underwent dry lab simulation practice. After a warm-up period of 2 h, tasks were performed on two high-fidelity non-VR models (Uro-scopic Trainer™; Endo-Urologie-Modell™) and a high-fidelity VR model (URO Mentor™). The participants were divided equally into three batches with rotation on each of the three stations for 30 min. Performance of the trainees was evaluated by an expert ureteroscopist using pass rating and global rating score (GRS). The participants rated a face validity questionnaire at the end of each session. Results: The GRS improved statistically significantly at the evaluation performed after the second rotation (P<0.001) for batches 1, 2 and 3. Pass ratings also improved significantly for all training models when the third and first rotations were compared (P<0.05). The batch that was trained on the VR-based model showed more improvement in pass ratings on the second rotation, but this did not reach statistical significance. Most of the realism domains were rated higher for the VR model as compared with the non-VR model, except the realism of the flexible endoscope. Conclusions: All the models used for training flexible URS were effective in increasing the GRS and pass ratings irrespective of their VR status.

  6. Modelling of Performance of Caisson Type Breakwaters under Extreme Waves

    Science.gov (United States)

    Güney Doǧan, Gözde; Özyurt Tarakcıoǧlu, Gülizar; Baykal, Cüneyt

    2016-04-01

    Many coastal structures are designed without considering the loads of tsunami-like waves or long waves, although they are constructed in areas prone to encountering these waves. The performance of caisson type breakwaters under extreme swells was tested in the Middle East Technical University (METU) Coastal and Ocean Engineering Laboratory. This paper presents the comparison of pressure measurements taken along the surface of caisson type breakwaters with those obtained from numerical modelling using IH2VOF, as well as the damage behavior of the breakwater under the same extreme swells tested in a wave flume at METU. Experiments are conducted in the 1.5 m wide wave flume, which is divided into two parallel sections (0.74 m wide each). A piston type wave maker located at one end of the wave flume is used to generate the long wave conditions. Water depth is set to 0.4 m and kept constant during the experiments. A caisson type breakwater is constructed on one side of the divided flume. The model scale, based on the Froude similitude law, is chosen as 1:50. 7 different wave conditions are applied in the tests, with wave periods ranging from 14.6 s to 34.7 s, wave heights from 3.5 m to 7.5 m and steepness from 0.002 to 0.015 in prototype scale. The design wave parameters for the breakwater were 5 m wave height and 9.5 s wave period in prototype. To determine the damage of the breakwater, which was designed according to this wave but tested under swell waves, video and photo analysis as well as breakwater profile measurements before and after each test are performed. Further investigations are carried out on the wave forces acting on the concrete blocks of the caisson structures via pressure measurements on the surfaces of these structures, where the structures are fixed to the channel bottom. Finally, these pressure measurements will be compared with the results obtained from the numerical study using IH2VOF, which is one of the RANS models that can be applied to simulate such wave-structure interactions.

  7. Fuzzy regression modeling for tool performance prediction and degradation detection.

    Science.gov (United States)

    Li, X; Er, M J; Lim, B S; Zhou, J H; Gan, O P; Rutkowski, L

    2010-10-01

    In this paper, the viability of using a Fuzzy-Rule-Based Regression Modeling (FRM) algorithm for tool performance and degradation detection is investigated. The FRM is developed based on a multi-layered fuzzy-rule-based hybrid system with Multiple Regression Models (MRM) embedded into a fuzzy logic inference engine that employs Self Organizing Maps (SOM) for clustering. The FRM converts a complex nonlinear problem to a simplified linear format in order to further increase the accuracy in prediction and the rate of convergence. The efficacy of the proposed FRM is tested through a case study, namely predicting the remaining useful life of a ball nose milling cutter during a dry machining process of hardened tool steel with a hardness of 52-54 HRc. A comparative study is further made between four predictive models using the same set of experimental data. It is shown that the FRM is superior to conventional MRM, Back Propagation Neural Networks (BPNN) and Radial Basis Function Networks (RBFN) in terms of prediction accuracy and learning speed.
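
    The layered idea here - cluster the input space, fit a local regression per cluster, and let a fuzzy inference step blend the local predictions - can be sketched compactly. The sketch below substitutes fixed cluster centres with Gaussian memberships for the SOM used in the paper, so it illustrates the structure rather than the actual FRM algorithm:

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx

def fuzzy_predict(clusters, x, width=1.0):
    """Blend per-cluster linear models with Gaussian membership weights.

    `clusters` is a list of (centre, [(x, y), ...]) pairs -- a fixed
    stand-in for the SOM clustering step of the FRM hybrid system.
    """
    weights, locals_ = [], []
    for centre, points in clusters:
        a, b = fit_line([p[0] for p in points], [p[1] for p in points])
        weights.append(math.exp(-((x - centre) ** 2) / (2 * width ** 2)))
        locals_.append(a * x + b)
    return sum(w * p for w, p in zip(weights, locals_)) / sum(weights)

# Two local regimes: y = 2x near x = 0 and y = 3x - 5 near x = 10
clusters = [
    (0.0, [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]),
    (10.0, [(9.0, 22.0), (10.0, 25.0), (11.0, 28.0)]),
]
print(fuzzy_predict(clusters, 0.0), fuzzy_predict(clusters, 10.0))
```

    Near each centre the blended output follows the corresponding local model, which is how the hybrid turns one nonlinear problem into several linear ones.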

  8. Some typical solid propellant rocket motors

    NARCIS (Netherlands)

    Zandbergen, B.T.C.

    2013-01-01

    Typical Solid Propellant Rocket Motors (shortly referred to as Solid Rocket Motors; SRM's) are described with the purpose to form a database, which allows for comparative analysis and applications in practical SRM engineering.

  9. The Impact of 3D Data Quality on Improving GNSS Performance Using City Models Initial Simulations

    Science.gov (United States)

    Ellul, C.; Adjrad, M.; Groves, P.

    2016-10-01

    There is an increasing demand for highly accurate positioning information in urban areas, to support applications such as people and vehicle tracking, real-time air quality detection and navigation. However, systems such as GPS typically perform poorly in dense urban areas. A number of authors have made use of 3D city models to enhance accuracy, obtaining good results, but to date the influence of the quality of the 3D city model on these results has not been tested. This paper addresses the following question: how does the quality of a 3D dataset, in particular its variation in height, level of generalization, and completeness and currency, impact the results obtained for the preliminary calculations in a process known as Shadow Matching, which takes into account not only where satellite signals are visible on the street but also where they are predicted to be absent. We describe initial simulations to address this issue, examining the variation in elevation angle, i.e. the angle above which the satellite is visible, for three 3D city models in a test area in London, and note that even within one dataset, using different available height values could cause a difference in elevation angle of up to 29°. Missing or extra buildings result in an elevation variation of around 85°. Variations such as these can significantly influence the predicted satellite visibility, which will then not correspond to that experienced on the ground, reducing the accuracy of the resulting Shadow Matching process.
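
    The elevation-angle calculation underlying these comparisons is plain trigonometry, which makes the sensitivity to building-height errors easy to see. A minimal sketch with invented geometry (not the London datasets):

```python
import math

def masking_elevation(building_height_m, antenna_height_m, distance_m):
    """Elevation angle (degrees) above which a satellite clears a facade
    of the given height at the given horizontal distance."""
    rise = building_height_m - antenna_height_m
    return math.degrees(math.atan2(rise, distance_m))

# A 10 m height error on a facade 20 m away shifts the masking angle
# by more than 10 degrees, changing which satellites are predicted visible
angle_tall = masking_elevation(30.0, 1.5, 20.0)
angle_short = masking_elevation(20.0, 1.5, 20.0)
print(angle_tall - angle_short)
```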

  10. Prediction and typicality in multiverse cosmology

    International Nuclear Information System (INIS)

    Azhar, Feraz

    2014-01-01

    In the absence of a fundamental theory that precisely predicts values for observable parameters, anthropic reasoning attempts to constrain probability distributions over those parameters in order to facilitate the extraction of testable predictions. The utility of this approach has been vigorously debated of late, particularly in light of theories that claim we live in a multiverse, where parameters may take differing values in regions lying outside our observable horizon. Within this cosmological framework, we investigate the efficacy of top-down anthropic reasoning based on the weak anthropic principle. We argue, contrary to recent claims, that it is not clear one can either dispense with notions of typicality altogether or presume typicality when comparing resulting probability distributions with observations. We show, in a concrete top-down setting related to dark matter, that assumptions about typicality can dramatically affect predictions, thereby providing a guide to how errors in reasoning regarding typicality translate to errors in the assessment of predictive power. We conjecture that this dependence on typicality is an integral feature of anthropic reasoning in broader cosmological contexts, and argue in favour of the explicit inclusion of measures of typicality in schemes invoking anthropic reasoning, with a view to extracting predictions from multiverse scenarios. (paper)

  11. In Silico Modeling of Gastrointestinal Drug Absorption: Predictive Performance of Three Physiologically Based Absorption Models.

    Science.gov (United States)

    Sjögren, Erik; Thörn, Helena; Tannergren, Christer

    2016-06-06

    Gastrointestinal (GI) drug absorption is a complex process determined by formulation, physicochemical and biopharmaceutical factors, and GI physiology. Physiologically based in silico absorption models have emerged as a widely used and promising supplement to traditional in vitro assays and preclinical in vivo studies. However, there remains a lack of comparative studies between different models. The aim of this study was to explore the strengths and limitations of the in silico absorption models Simcyp 13.1, GastroPlus 8.0, and GI-Sim 4.1, with respect to their performance in predicting human intestinal drug absorption. This was achieved by adopting an a priori modeling approach and using well-defined input data for 12 drugs associated with incomplete GI absorption and related challenges in predicting the extent of absorption. This approach better mimics the real situation during formulation development where predictive in silico models would be beneficial. Plasma concentration-time profiles for 44 oral drug administrations were calculated by convolution of model-predicted absorption-time profiles and reported pharmacokinetic parameters. Model performance was evaluated by comparing the predicted plasma concentration-time profiles, Cmax, tmax, and exposure (AUC) with observations from clinical studies. The overall prediction accuracies for AUC, given as the absolute average fold error (AAFE) values, were 2.2, 1.6, and 1.3 for Simcyp, GastroPlus, and GI-Sim, respectively. The corresponding AAFE values for Cmax were 2.2, 1.6, and 1.3, respectively, and those for tmax were 1.7, 1.5, and 1.4, respectively. Simcyp was associated with underprediction of AUC and Cmax; the accuracy decreased with decreasing predicted fabs. A tendency for underprediction was also observed for GastroPlus, but there was no correlation with predicted fabs. There were no obvious trends for over- or underprediction for GI-Sim. The models performed similarly in capturing dependencies on dose and
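
    The absolute average fold error (AAFE) reported above is the geometric-mean fold deviation, AAFE = 10^((1/n) Σ |log10(pred/obs)|), so a value of 1.6 means predictions deviate by a factor of 1.6 on average in either direction. A minimal implementation:

```python
import math

def aafe(predicted, observed):
    """Absolute average fold error: 1.0 is perfect agreement; 2.0 means
    predictions deviate two-fold on average, over- or under-predicting."""
    folds = [abs(math.log10(p / o)) for p, o in zip(predicted, observed)]
    return 10.0 ** (sum(folds) / len(folds))

print(aafe([100.0, 50.0], [100.0, 50.0]))  # exact agreement gives 1.0
print(aafe([200.0, 25.0], [100.0, 50.0]))  # one two-fold over, one two-fold under
```

    Because the deviations are taken in absolute value on the log scale, over- and under-predictions do not cancel, unlike a plain mean fold error.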

  12. A minimal limit-cycle model to profile movement patterns of individuals during agility drill performance: Effects of skill level.

    Science.gov (United States)

    Mehdizadeh, Sina; Arshi, Ahmed Reza; Davids, Keith

    2015-06-01

    Identification of control strategies during agility performance is significant in understanding movement behavior. This study aimed to provide a fundamental mathematical model describing the motion of participants during an agility drill and to determine whether skill level constrained model components. Motion patterns of two groups of skilled and unskilled participants (n=8 in each) during performance of a forward/backward agility drill were modeled as limit cycles. Participant movements were recorded by motion capture of a reflective marker attached to the sacrum of each individual. Graphical and regression analyses of movement kinematics in Hooke's plane, the phase plane and the velocity profile were performed to determine the components of the models. Results showed that the models of both skilled and unskilled groups had terms from the Duffing stiffness as well as the Van der Pol damping oscillators. Data also indicated that the proposed models captured on average 97% of the variance for both skilled and unskilled groups. Findings from this study revealed the movement patterning associated with skilled and unskilled performance in a typical forward/backward agility drill, which might be helpful for trainers and physiotherapists in enhancing agility. Copyright © 2015 Elsevier B.V. All rights reserved.
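
    The hybrid form the authors identify - Van der Pol damping combined with Duffing (cubic) stiffness - is easy to explore numerically. The sketch below integrates x'' + mu*(x^2 - 1)*x' + k*x + c*x^3 = 0 with classical RK4 using made-up parameters (not the coefficients fitted in the study); starting from a small displacement, the motion grows onto a bounded limit cycle, which is the qualitative behavior such models describe:

```python
def simulate_duffing_vdp(x0=0.1, v0=0.0, dt=0.01, steps=5000,
                         mu=1.0, k=1.0, c=0.5):
    """Integrate x'' + mu*(x**2 - 1)*x' + k*x + c*x**3 = 0 with RK4."""
    def acc(x, v):
        # Van der Pol damping term plus linear and cubic (Duffing) stiffness
        return -mu * (x ** 2 - 1.0) * v - k * x - c * x ** 3

    x, v, traj = x0, v0, []
    for _ in range(steps):
        k1x, k1v = v, acc(x, v)
        k2x, k2v = v + 0.5 * dt * k1v, acc(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
        k3x, k3v = v + 0.5 * dt * k2v, acc(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
        k4x, k4v = v + dt * k3v, acc(x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        traj.append(x)
    return traj

traj = simulate_duffing_vdp()
# The self-excited oscillation settles onto a bounded limit cycle
print(max(abs(x) for x in traj[-1000:]))
```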

  13. FASTSim: A Model to Estimate Vehicle Efficiency, Cost and Performance

    Energy Technology Data Exchange (ETDEWEB)

    Brooker, A.; Gonder, J.; Wang, L.; Wood, E.; Lopp, S.; Ramroth, L.

    2015-05-04

    The Future Automotive Systems Technology Simulator (FASTSim) is a high-level advanced vehicle powertrain systems analysis tool supported by the U.S. Department of Energy’s Vehicle Technologies Office. FASTSim provides a quick and simple approach to compare powertrains and estimate the impact of technology improvements on light- and heavy-duty vehicle efficiency, performance, cost, and battery life over batches of real-world drive cycles. FASTSim’s calculation framework and balance among detail, accuracy, and speed enable it to simulate thousands of driven miles in minutes. The key components and vehicle outputs have been validated by comparing the model outputs to test data for many different vehicles to provide confidence in the results. A graphical user interface makes FASTSim easy and efficient to use. FASTSim is freely available for download from the National Renewable Energy Laboratory’s website (see www.nrel.gov/fastsim).

  14. Modelling ‘Headless’: Finance, Performance Art, and Paradoxy

    Directory of Open Access Journals (Sweden)

    Angus Cameron

    2015-07-01

    Full Text Available This essay reflects on the author’s experience of a collaborative performance art project – Headless, by Goldin+Senneby – which since 2007 has created a quasi ‘model’ of offshore finance. Headless is not a conventional economic model, but rather an example of an ‘imaginary economics’ that is peculiar to contemporary art (as in Velthuis, 2005). Because of the context in which the project has unfolded, it has been able to develop an effective engagement with and critique of the ‘real’ offshore world. Headless, the essay argues, moves beyond passive representation, to become an active, if deliberately paradoxical, intervention in debates over the nature of finance, spatiality, privacy, and art.

  15. Computational Human Performance Modeling For Alarm System Design

    Energy Technology Data Exchange (ETDEWEB)

    Jacques Hugo

    2012-07-01

    The introduction of new technologies like adaptive automation systems and advanced alarm processing and presentation techniques in nuclear power plants is already having an impact on the safety and effectiveness of plant operations and on the role of the control room operator. This impact is expected to escalate dramatically as more and more nuclear power utilities embark on upgrade projects in order to extend the lifetime of their plants. One of the most visible impacts in control rooms will be the need to replace aging alarm systems. Because most of these alarm systems use obsolete technologies, the methods, techniques and tools that were used to design the previous generation of alarm systems are no longer effective and need to be updated. The same applies to the need to analyze and redefine operators’ alarm handling tasks. In the past, methods for analyzing human tasks and workload have relied on crude, paper-based methods that often lacked traceability. New approaches are needed to allow analysts to model and represent the new concepts of alarm operation and human-system interaction. State-of-the-art task simulation tools are now available that offer a cost-effective and efficient method for examining the effect of operator performance in different conditions and operational scenarios. A discrete event simulation system was used by human factors researchers at the Idaho National Laboratory to develop a generic alarm handling model to examine the effect of operator performance with a simulated modern alarm system. It allowed analysts to evaluate alarm generation patterns as well as critical task times and human workload predicted by the system.

  16. Model Engine Performance Measurement From Force Balance Instrumentation

    Science.gov (United States)

    Jeracki, Robert J.

    1998-01-01

    A large scale model representative of a low-noise, high bypass ratio turbofan engine was tested for acoustics and performance in the NASA Lewis 9- by 15-Foot Low-Speed Wind Tunnel. This test was part of NASA's continuing Advanced Subsonic Technology Noise Reduction Program. The low tip speed fan, nacelle, and an un-powered core passage (with core inlet guide vanes) were simulated. The fan blades and hub are mounted on a rotating thrust and torque balance. The nacelle, bypass duct stators, and core passage are attached to a six component force balance. The two balance forces, when corrected for internal pressure tares, measure the total thrust-minus-drag of the engine simulator. Corrected for scaling and other effects, it is basically the same force that the engine supports would feel, operating at similar conditions. A control volume is shown and discussed, identifying the various force components of the engine simulator thrust and definitions of net thrust. Several wind tunnel runs with nearly the same hardware installed are compared, to identify the repeatability of the measured thrust-minus-drag. Other wind tunnel runs, with hardware changes that affected fan performance, are compared to the baseline configuration, and the thrust and torque effects are shown. Finally, a thrust comparison between the force balance and nozzle gross thrust methods is shown, and both yield very similar results.

  17. Modeling silica aerogel optical performance by determining its radiative properties

    Science.gov (United States)

    Zhao, Lin; Yang, Sungwoo; Bhatia, Bikram; Strobach, Elise; Wang, Evelyn N.

    2016-02-01

    Silica aerogel has been known as a promising candidate for high performance transparent insulation material (TIM). Optical transparency is a crucial metric for silica aerogels in many solar related applications. Both scattering and absorption can reduce the amount of light transmitted through an aerogel slab. Due to multiple scattering, the transmittance deviates from the Beer-Lambert law (exponential attenuation). To better understand its optical performance, we decoupled and quantified the extinction contributions of absorption and scattering separately by identifying two sets of radiative properties. The radiative properties are deduced from the measured total transmittance and reflectance spectra (from 250 nm to 2500 nm) of synthesized aerogel samples by solving the inverse problem of the 1-D Radiative Transfer Equation (RTE). The obtained radiative properties are found to be independent of the sample geometry and can be considered intrinsic material properties, which originate from the aerogel's microstructure. This finding allows for these properties to be directly compared between different samples. We also demonstrate that by using the obtained radiative properties, we can model the photon transport in aerogels of arbitrary shapes, where an analytical solution is difficult to obtain.
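
    The deviation from Beer-Lambert attenuation described above can be illustrated with a simpler two-flux (Kubelka-Munk) slab model in place of the full radiative transfer equation the authors solve; the coefficients below are illustrative, not the paper's measured radiative properties:

```python
import math

def two_flux_T(K, S, L):
    """Total transmittance of a scattering/absorbing slab of thickness L (m)
    with absorption coefficient K and scattering coefficient S (both 1/m),
    in the Kubelka-Munk two-flux approximation."""
    if K == 0.0:                      # purely scattering limit
        return 1.0 / (1.0 + S * L)
    a = (S + K) / S
    b = math.sqrt(a * a - 1.0)
    bSL = b * S * L
    return b / (a * math.sinh(bSL) + b * math.cosh(bSL))

# Hypothetical coefficients: with scattering present, the slab transmits
# more than the Beer-Lambert exponential exp(-(K + S) * L), because part
# of the multiply-scattered light still exits in the forward direction.
K, S, L = 50.0, 100.0, 0.01
print(two_flux_T(K, S, L), math.exp(-(K + S) * L))
```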

  18. An Efficient Framework Model for Optimizing Routing Performance in VANETs

    Science.gov (United States)

    Zulkarnain, Zuriati Ahmad; Subramaniam, Shamala

    2018-01-01

    Routing in Vehicular Ad hoc Networks (VANETs) is complicated by their highly dynamic mobility. The efficiency of a routing protocol is influenced by a number of factors such as network density, bandwidth constraints, traffic load, and mobility patterns, resulting in frequent changes in network topology. Therefore, Quality of Service (QoS) is strongly needed to enhance the capability of the routing protocol and improve the overall network performance. In this paper, we introduce a statistical framework model to address the problem of optimizing routing configuration parameters in Vehicle-to-Vehicle (V2V) communication. Our framework solution is based on the utilization of the network resources to reflect the current state of the network and to balance the trade-off between frequent changes in network topology and the QoS requirements. It consists of three stages: a network simulation stage used to execute different urban scenarios, a function stage used as a competitive approach to aggregate the weighted costs of the factors into a single value, and an optimization stage used to evaluate the communication cost and to obtain the optimal configuration based on the competitive cost. The simulation results show significant performance improvement in terms of the Packet Delivery Ratio (PDR), Normalized Routing Load (NRL), Packet Loss (PL), and End-to-End Delay (E2ED). PMID:29462884

  19. An Integrated Performance Evaluation Model for the Photovoltaics Industry

    Directory of Open Access Journals (Sweden)

    He-Yau Kang

    2012-04-01

    Full Text Available Global warming is causing damaging changes to climate around the World. For environmental protection and natural resource scarcity, alternative forms of energy, such as wind energy, fire energy, hydropower energy, geothermal energy, solar energy, biomass energy, ocean power and natural gas, are gaining attention as means of meeting global energy demands. Due to Japan’s nuclear plant disaster in March 2011, people are demanding a good alternative energy resource, which not only produces zero or little air pollutants and greenhouse gases, but also offers a high safety level to protect the World. Solar energy, which depends on an infinite resource, the sun, is one of the most promising renewable energy sources from the perspective of environmental sustainability. Currently, the manufacturing cost of solar cells is still very high, and the power conversion efficiency is low. Therefore, photovoltaics (PV) firms must continue to invest in research and development, commit to product differentiation, achieve economies of scale, and consider the possibility of vertical integration, in order to strengthen their competitiveness and to acquire the maximum benefit from the PV market. This research proposes a performance evaluation model by integrating the analytic hierarchy process (AHP) and data envelopment analysis (DEA) to assess the current business performance of PV firms. AHP is applied to obtain experts’ opinions on the importance of the factors, and DEA is used to determine which firms are efficient. A case study is performed on the crystalline silicon PV firms in Taiwan. The findings shall help the firms determine their strengths and weaknesses and provide directions for future improvements in business operations.
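
    The AHP half of the proposed AHP/DEA model turns expert pairwise comparisons into priority weights. A minimal sketch using the row geometric-mean approximation to the principal eigenvector (the comparison matrix is hypothetical, not the study's survey data):

```python
import math

def ahp_weights(M):
    """Priority weights from a reciprocal pairwise-comparison matrix,
    using the row geometric-mean (log least-squares) approximation to
    the principal eigenvector."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical comparison of three criteria (e.g. cost vs. efficiency
# vs. scale); entry M[i][j] says how strongly criterion i outweighs j:
M = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 2.0],
     [1 / 5, 1 / 2, 1.0]]
print([round(w, 3) for w in ahp_weights(M)])  # weights sum to 1
```

    DEA would then take such weighted inputs and outputs per firm and compute relative efficiency scores; that linear-programming step is omitted here.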

  20. Generation of typical solar radiation data for different climates of China

    International Nuclear Information System (INIS)

    Zang, Haixiang; Xu, Qingshan; Bian, Haihong

    2012-01-01

    In this study, typical solar radiation data are generated from both measured data and synthetic generation for 35 stations in six different climatic zones of China. (1) By applying the measured weather data during at least 10 years from 1994 to 2009, typical meteorological years (TMYs) for 35 cities are generated using the Finkelstein–Schafer statistical method. The cumulative distribution function (CDF) of daily global solar radiation (DGSR) for each year is compared with the CDF of DGSR for the long-term years in six different climatic stations (Sanya, Shanghai, Zhengzhou, Harbin, Mohe and Lhasa). The daily global solar radiation as typical data obtained from the TMYs are presented in the Table. (2) Based on the recorded global radiation data from at least 10 years, a new daily global solar radiation model is developed with a sine and cosine wave (SCW) equation. The results of the proposed model and other empirical regression models are compared with measured data using different statistical indicators. It is found that solar radiation data, calculated by the new model, are superior to those from other empirical models in six typical climatic zones. In addition, the novel SCW model is tested and applied for 35 stations in China. -- Highlights: ► Both TMY method and synthetic generation are used to generate solar radiation data. ► The latest and accurate long term weather data in six different climates are applied. ► TMYs using new weighting factors of 8 weather indices for 35 regions are obtained. ► A new sine and cosine wave model is proposed and utilized for 35 major stations. ► Both TMY method and the proposed regression model perform well on a monthly basis.
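
    The Finkelstein–Schafer selection step compares each candidate month's empirical CDF of daily global solar radiation against the long-term CDF. A minimal sketch with synthetic values (not the Chinese station records):

```python
def fs_statistic(candidate, long_term):
    """Finkelstein-Schafer statistic: mean absolute difference between the
    candidate month's empirical CDF and the long-term empirical CDF,
    evaluated at each of the candidate's daily values."""
    def ecdf(x, data):
        return sum(1 for d in data if d <= x) / len(data)
    return sum(abs(ecdf(x, candidate) - ecdf(x, long_term))
               for x in candidate) / len(candidate)

# Synthetic daily global solar radiation (kWh/m2/day); the candidate month
# with the smallest FS value matches the long-term distribution best:
long_term = [3.1, 4.0, 4.2, 5.0, 5.5, 6.1, 6.8, 7.2]
month_a = [3.0, 4.1, 5.4, 6.9]   # spans the whole range
month_b = [6.5, 6.9, 7.0, 7.1]   # only high values
print(fs_statistic(month_a, long_term) < fs_statistic(month_b, long_term))  # → True
```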

  1. Contribution to the modelling and analysis of logistics system performance by Petri nets and simulation models: Application in a supply chain

    Science.gov (United States)

    Azougagh, Yassine; Benhida, Khalid; Elfezazi, Said

    2016-02-01

    In this paper, the focus is on studying the performance of complex systems in a supply chain context by developing a structured modelling approach based on the ASDI methodology (Analysis, Specification, Design and Implementation), combining modelling by Petri nets with simulation using ARENA. The linear approach typically followed for this kind of problem faces modelling difficulties due to the complexity and the number of parameters involved. The approach used in this work therefore structures the modelling in a way that covers all aspects of the performance study. The structured modelling approach is first introduced before being applied to the case of an industrial system in the field of phosphate. Results of the performance indicators obtained from the developed models made it possible to test the behaviour and fluctuations of this system and to develop improved models of the current situation. In addition, this paper shows how the Arena software can be adopted to simulate complex systems effectively. The method in this research can be applied to investigate various improvement scenarios and their consequences before implementing them in reality.

  2. Modeling and Performance Evaluation of a Top Gated Graphene MOSFET

    Directory of Open Access Journals (Sweden)

    Jith Sarker

    2017-08-01

    Full Text Available In recent years, graphene has become a promising prospect on the horizon of fabrication technology, due to unique electronic properties such as its zero band gap, high saturation velocity and high electrical conductivity, together with extraordinary thermal, optical and mechanical properties such as high thermal conductivity, optical transparency, flexibility and thinness. Graphene-based devices deserve to be considered as a possible option for post-Si fabrication technology. In this paper, we have modelled a top-gated graphene metal oxide semiconductor field effect transistor (MOSFET). Surface-potential-dependent quantum capacitance is obtained self-consistently, along with linear and square-root approximation models. The gate-voltage dependence of the surface potential has been analyzed with graphical illustrations and the required mathematics. Output characteristics, transfer characteristics, and transconductance (as a function of gate voltage) have been investigated. Finally, the effect of channel length on device performance has been assessed. The variation of effective mobility and minimum carrier density with respect to channel length has also been observed. Considering all of the graphical illustrations, we conclude that graphene will be a successor in the post-silicon era and bring revolutionary changes to the field of fabrication technology.
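
    The quantum capacitance obtained self-consistently above has, for ideal graphene, a standard closed form, Cq = (2q²kT / π(ħvF)²) · ln[2(1 + cosh(qVch/kT))]. A sketch of that textbook expression (not the paper's self-consistent model; the Fermi velocity is a typical assumed value):

```python
import math

Q = 1.602176634e-19      # elementary charge, C
KB = 1.380649e-23        # Boltzmann constant, J/K
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
VF = 1.0e6               # graphene Fermi velocity, m/s (typical value)

def quantum_capacitance(v_ch, T=300.0):
    """Graphene quantum capacitance per unit area (F/m^2) at channel
    potential v_ch (V), from the ideal-graphene closed form."""
    kt = KB * T
    pref = 2.0 * Q * Q * kt / (math.pi * (HBAR * VF) ** 2)
    return pref * math.log(2.0 * (1.0 + math.cosh(Q * v_ch / kt)))

# The minimum at v_ch = 0 is about 8.4e-3 F/m^2 (~0.84 uF/cm^2) at 300 K,
# and Cq grows with |v_ch| as carriers are induced in the channel:
print(quantum_capacitance(0.0), quantum_capacitance(0.2))
```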

  3. Modeling the Energy Performance of LoRaWAN.

    Science.gov (United States)

    Casals, Lluís; Mir, Bernat; Vidal, Rafael; Gomez, Carles

    2017-10-16

    LoRaWAN is a flagship Low-Power Wide Area Network (LPWAN) technology that has attracted much attention from the community in recent years. Many LoRaWAN end-devices, such as sensors or actuators, are expected not to be powered by the electricity grid; therefore, it is crucial to investigate the energy consumption of LoRaWAN. However, published works have only focused on this topic to a limited extent. In this paper, we present analytical models that allow the characterization of LoRaWAN end-device current consumption, lifetime and energy cost of data delivery. The models, which have been derived based on measurements on a currently prevalent LoRaWAN hardware platform, allow us to quantify the impact of relevant physical and Medium Access Control (MAC) layer LoRaWAN parameters and mechanisms, as well as Bit Error Rate (BER) and collisions, on energy performance. Among others, evaluation results show that an appropriately configured LoRaWAN end-device platform powered by a battery of 2400 mAh can achieve a 1-year lifetime while sending one message every 5 min, and an asymptotic theoretical lifetime of 6 years for infrequent communication.
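
    The headline lifetime figure follows from a simple duty-cycled current model: the average current is the time-weighted mean of active and sleep currents. The current draws and on-times below are illustrative assumptions, not the paper's measured values, but they reproduce the order of magnitude:

```python
def lifetime_years(batt_mAh, i_active_mA, t_active_s, i_sleep_mA, period_s):
    """Battery lifetime from a duty-cycled current profile: the average
    current is the time-weighted mean of the active and sleep currents."""
    i_avg = (i_active_mA * t_active_s
             + i_sleep_mA * (period_s - t_active_s)) / period_s
    return batt_mAh / i_avg / (24 * 365)

# Illustrative figures (assumed, not measured): 40 mA for 2 s per uplink,
# 5 uA sleep current, one message every 5 minutes, 2400 mAh battery:
print(round(lifetime_years(2400, 40.0, 2.0, 0.005, 300.0), 2))  # → 1.01
```

    This is consistent with the roughly 1-year lifetime quoted above; stretching the reporting period directly stretches the lifetime until the sleep current dominates.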

  4. CLRP: Individual evaluation of model performance for scenario S

    International Nuclear Information System (INIS)

    Krajewski, P.

    1996-01-01

    The model CLRP was created in 1989 as a part of the research project ''Long-Lived Post-Chernobyl Radioactivity and Radiation Protection Criteria for Risk Reduction'' performed in cooperation with the US Environmental Protection Agency. The aim of this project was to examine the fate of long-lived radionuclides in the terrestrial ecosystem. Concentrations of Cs-137 and Cs-134 in particular components of the terrestrial ecosystem, e.g. soil, vegetation, animal tissues and animal products, are calculated as a function of time following deposition from the atmosphere. Based on these data, the whole-body content of each radionuclide as a function of time is calculated, and the dose to a specific organ may be estimated as an integral of the resultant dose rate over a sufficient period. In addition, the model allows estimation of the inhalation dose from the time-integrated air concentration and the external dose from total deposition using simple conversion factors. The program is designed to allow the simulation of many different radiological situations (chronic or acute releases) and dose-affecting countermeasures. Figs, tabs

  5. Performance of monitoring networks estimated from a Gaussian plume model

    International Nuclear Information System (INIS)

    Seebregts, A.J.; Hienen, J.F.A.

    1990-10-01

    In support of the ECN study on monitoring strategies after nuclear accidents, the present report describes the analysis of the performance of a monitoring network in a square grid. This network is used to estimate the distribution of the deposition pattern after a release of radioactivity into the atmosphere. The analysis is based upon a single release, a constant wind direction and an atmospheric dispersion according to a simplified Gaussian plume model. A technique is introduced to estimate the parameters in this Gaussian model based upon measurements at specific monitoring locations and linear regression, although this model is intrinsically non-linear. With these estimated parameters and the Gaussian model the distribution of the contamination due to deposition can be estimated. To investigate the relation between the network and the accuracy of the estimates for the deposition, deposition data have been generated by the Gaussian model, including a measurement error by a Monte Carlo simulation, and this procedure has been repeated for several grid sizes, dispersion conditions, numbers of measurements per location, and errors per single measurement. The present technique has also been applied for the mesh sizes of two networks in the Netherlands, viz. the Landelijk Meetnet Radioactiviteit (National Measurement Network on Radioactivity, mesh size approx. 35 km) and the proposed Landelijk Meetnet Nucleaire Incidenten (National Measurement Network on Nuclear Incidents, mesh size approx. 15 km). The results show accuracies of 11 and 7 percent, respectively, if monitoring locations are used more than 10 km away from the postulated accident site. These figures are based upon 3 measurements per location and a dispersion during neutral weather with a wind velocity of 4 m/s. For stable weather conditions and low wind velocities, i.e. a small plume, the calculated accuracies are at least a factor 1.5 worse. The present type of analysis makes a cost-benefit approach to the
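
    The simplified Gaussian plume at the core of the analysis has, for ground-level concentration with full ground reflection, the standard form C(x, y) = Q/(π σy σz u) · exp(−y²/2σy²) · exp(−H²/2σz²). A minimal sketch, with a crude linear dispersion parameterization standing in for the report's stability-class curves:

```python
import math

def plume_conc(Q, u, x, y, H, a=0.08, b=0.06):
    """Ground-level (z = 0) air concentration from a continuous point
    source of strength Q at effective height H (m), wind speed u (m/s),
    at downwind distance x and crosswind offset y (m), with full ground
    reflection. sigma_y = a*x and sigma_z = b*x are a crude linear
    stand-in for tabulated stability-class curves (an assumption)."""
    sy, sz = a * x, b * x
    return (Q / (math.pi * sy * sz * u)
            * math.exp(-y * y / (2.0 * sy * sy))
            * math.exp(-H * H / (2.0 * sz * sz)))

# The plume is symmetric about the centerline and peaks at y = 0:
print(plume_conc(1e9, 4.0, 10_000.0, 0.0, 50.0))  # Bq/m^3, 10 km downwind
```

    Fitting the parameters of this expression from a handful of monitoring-station readings, as the report does, is what turns the intrinsically non-linear model into a linear-regression problem after taking logarithms.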

  6. Battery Performance Modelling and Simulation: a Neural Network Based Approach

    Science.gov (United States)

    Ottavianelli, Giuseppe; Donati, Alessandro

    2002-01-01

    This project developed against the background of ongoing research within the Control Technology Unit (TOS-OSC) of the Special Projects Division at the European Space Operations Centre (ESOC) of the European Space Agency. The purpose of this research is to develop and validate an Artificial Neural Network tool (ANN) able to model, simulate and predict the Cluster II battery system's performance degradation. (The Cluster II mission is made of four spacecraft flying in tetrahedral formation, aimed at observing and studying the interaction between the Sun and Earth by passing in and out of our planet's magnetic field). This prototype tool, named BAPER and developed with a commercial neural network toolbox, could be used to support short and medium term mission planning in order to improve and maximise the batteries' lifetime, determining the best future charge/discharge cycles for the batteries given their present states, in view of a Cluster II mission extension. This study focuses on the five Silver-Cadmium batteries onboard Tango, the fourth Cluster II satellite, but time restraints have so far allowed an assessment of only the first battery. In their most basic form, ANNs are hyper-dimensional curve fits for non-linear data. With their remarkable ability to derive meaning from complicated or imprecise history data, ANNs can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. ANNs learn by example, and this is why they can be described as inductive, or data-based, models for the simulation of input/target mappings. A trained ANN can be thought of as an "expert" in the category of information it has been given to analyse, and this expert can then be used, as in this project, to provide projections given new situations of interest and answer "what if" questions. The most appropriate algorithm, in terms of training speed and memory storage requirements, is clearly the Levenberg

  7. Entrance and escape dynamics for the typical set

    Science.gov (United States)

    Nicholson, Schuyler B.; Greenberg, Jonah S.; Green, Jason R.

    2018-01-01

    According to the asymptotic equipartition property, sufficiently long sequences of random variables converge to a set that is typical. While the size and probability of this set are central to information theory and statistical mechanics, they can often only be estimated accurately in the asymptotic limit due to the exponential growth in possible sequences. Here we derive a time-inhomogeneous dynamics that constructs the properties of the typical set for all finite length sequences of independent and identically distributed random variables. These dynamics link the finite properties of the typical set to asymptotic results and allow the typical set to be applied to small and transient systems. The main result is a geometric mapping—the triangle map—relating sequences of random variables of length n to those of length n +1 . We show that the number of points in this map needed to quantify the properties of the typical set grows linearly with sequence length, despite the exponential growth in the number of typical sequences. We illustrate the framework for the Bernoulli process and the Schlögl model for autocatalytic chemical reactions and demonstrate both the convergence to asymptotic limits and the ability to reproduce exact calculations.
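
    For a Bernoulli process the finite-n typical set can be enumerated outright, which makes the asymptotic statements concrete. A brute-force sketch (cost grows as 2^n, so only for small n):

```python
import math
from itertools import product

def typical_set_stats(n, p, eps):
    """Size and total probability of the eps-typical set for n i.i.d.
    Bernoulli(p) symbols: sequences whose per-symbol surprisal
    -(1/n) * log2 P(x) lies within eps of the entropy H(p)."""
    H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    size, prob = 0, 0.0
    for seq in product((0, 1), repeat=n):
        k = sum(seq)
        P = p ** k * (1 - p) ** (n - k)
        if abs(-math.log2(P) / n - H) <= eps:
            size += 1
            prob += P
    return size, prob

# For p = 0.5 every sequence is typical (surprisal is exactly H = 1 bit),
# so the set has size 2**n and probability 1; a biased source is sparser:
print(typical_set_stats(10, 0.3, 0.2))  # → (375, 0.700...)
```

    The set's probability approaches 1 only as n grows, which is exactly the finite-size behaviour the dynamics above are designed to track.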

  8. Memory for Sequences of Events Impaired in Typical Aging

    Science.gov (United States)

    Allen, Timothy A.; Morris, Andrea M.; Stark, Shauna M.; Fortin, Norbert J.; Stark, Craig E. L.

    2015-01-01

    Typical aging is associated with diminished episodic memory performance. To improve our understanding of the fundamental mechanisms underlying this age-related memory deficit, we previously developed an integrated, cross-species approach to link converging evidence from human and animal research. This novel approach focuses on the ability to…

  9. Analogical Reasoning Ability in Autistic and Typically Developing Children

    Science.gov (United States)

    Morsanyi, Kinga; Holyoak, Keith J.

    2010-01-01

    Recent studies (e.g. Dawson et al., 2007) have reported that autistic people perform in the normal range on the Raven Progressive Matrices test, a formal reasoning test that requires integration of relations as well as the ability to infer rules and form high-level abstractions. Here we compared autistic and typically developing children, matched…

  10. Time to discontinuation of atypical versus typical antipsychotics in the naturalistic treatment of schizophrenia

    Directory of Open Access Journals (Sweden)

    Swartz Marvin

    2006-02-01

    Full Text Available Abstract Background There is an ongoing debate over whether atypical antipsychotics are more effective than typical antipsychotics in the treatment of schizophrenia. This naturalistic study compares atypical and typical antipsychotics on time to all-cause medication discontinuation, a recognized index of medication effectiveness in the treatment of schizophrenia. Methods We used data from a large, 3-year, observational, non-randomized, multisite study of schizophrenia, conducted in the U.S. between 7/1997 and 9/2003. Patients who were initiated on oral atypical antipsychotics (clozapine, olanzapine, risperidone, quetiapine, or ziprasidone) or oral typical antipsychotics (low, medium, or high potency) were compared on time to all-cause medication discontinuation for 1 year following initiation. Treatment group comparisons were based on treatment episodes using 3 statistical approaches (Kaplan-Meier survival analysis, Cox Proportional Hazards regression model, and propensity score-adjusted bootstrap resampling methods). To further assess the robustness of the findings, sensitivity analyses were performed, including the use of (a) only 1 medication episode for each patient, the one with which the patient was treated first, and (b) all medication episodes, including those simultaneously initiated on more than 1 antipsychotic. Results Mean time to all-cause medication discontinuation was longer on atypical (N = 1132, 256.3 days) compared to typical antipsychotics (N = 534, 197.2 days; p Conclusion In the usual care of schizophrenia patients, time to medication discontinuation for any cause appears significantly longer for atypical than typical antipsychotics regardless of the typical antipsychotic potency level. Findings were primarily driven by clozapine and olanzapine, and to a lesser extent by risperidone. 
Furthermore, only clozapine and olanzapine therapy showed consistently and significantly longer treatment duration compared to perphenazine, a medium
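
    The Kaplan-Meier analysis used for the treatment-group comparison can be sketched as a product-limit estimate over discontinuation times (a simplified illustration, not the study's code; the times and censoring flags below are hypothetical):

```python
def kaplan_meier(times, observed):
    """Product-limit survival estimate. times: time on medication;
    observed: True if discontinuation occurred, False if censored.
    Returns (t, S(t)) pairs at each observed discontinuation time."""
    at_risk = len(times)
    S, curve = 1.0, []
    # events are processed before censorings occurring at the same time
    for t, event in sorted(zip(times, observed),
                           key=lambda te: (te[0], not te[1])):
        if event:
            S *= (at_risk - 1) / at_risk
            curve.append((t, S))
        at_risk -= 1
    return curve

# Hypothetical days-to-discontinuation, one patient censored at day 150:
print(kaplan_meier([90, 150, 150, 300], [True, True, False, True]))
```

    Comparing such curves between the atypical and typical groups, with censoring for patients still on medication at 1 year, is what the log-rank and Cox analyses formalize.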

  11. Model Checking for a Class of Performance Properties of Fluid Stochastic Models

    NARCIS (Netherlands)

    Bujorianu, L.M.; Bujorianu, M.C.; Horváth, A.; Telek, M.

    2006-01-01

    Recently, there has been an explosive development of fluid approaches to computer and distributed systems. These approaches are inherently stochastic and generate continuous state space models. Usually, the performance measures for these systems are defined using probabilities of reaching certain sets

  12. Dynamics modelling and Hybrid Suppression Control of space robots performing cooperative object manipulation

    Science.gov (United States)

    Zarafshan, P.; Moosavian, S. Ali A.

    2013-10-01

    Dynamics modelling and control of multi-body space robotic systems composed of rigid and flexible elements is elaborated here. Control of such systems is highly complicated due to severe under-actuated condition caused by flexible elements, and an inherent uneven nonlinear dynamics. Therefore, developing a compact dynamics model with the requirement of limited computations is extremely useful for controller design, also to develop simulation studies in support of design improvement, and finally for practical implementations. In this paper, the Rigid-Flexible Interactive dynamics Modelling (RFIM) approach is introduced as a combination of Lagrange and Newton-Euler methods, in which the motion equations of rigid and flexible members are separately developed in an explicit closed form. These equations are then assembled and solved simultaneously at each time step by considering the mutual interaction and constraint forces. The proposed approach yields a compact model rather than common accumulation approach that leads to a massive set of equations in which the dynamics of flexible elements is united with the dynamics equations of rigid members. To reveal such merits of this new approach, a Hybrid Suppression Control (HSC) for a cooperative object manipulation task will be proposed, and applied to usual space systems. A Wheeled Mobile Robotic (WMR) system with flexible appendages as a typical space rover is considered which contains a rigid main body equipped with two manipulating arms and two flexible solar panels, and next a Space Free Flying Robotic system (SFFR) with flexible members is studied. Modelling verification of these complicated systems is vigorously performed using ANSYS and ADAMS programs, while the limited computations of RFIM approach provides an efficient tool for the proposed controller design. 
Furthermore, it will be shown that the vibrations of the flexible solar panels results in disturbing forces on the base which may produce undesirable errors

  13. THE PENA BLANCA NATURAL ANALOGUE PERFORMANCE ASSESSMENT MODEL

    International Nuclear Information System (INIS)

    G.J. Saulnier Jr; W. Statham

    2006-01-01

    The Nopal I uranium mine in the Sierra Pena Blanca, Chihuahua, Mexico serves as a natural analogue to the Yucca Mountain repository. The Pena Blanca Natural Analogue Performance Assessment Model simulates the mobilization and transport of radionuclides that are released from the mine and transported to the saturated zone. The Pena Blanca Natural Analogue Model uses probabilistic simulations of hydrogeologic processes that are analogous to the processes that occur at the Yucca Mountain site. The Nopal I uranium deposit lies in fractured, welded, and altered rhyolitic ash flow tuffs that overlie carbonate rocks, a setting analogous to the geologic formations at the Yucca Mountain site. The Nopal I mine site has the following characteristics as compared to the Yucca Mountain repository site. (1) Analogous source: UO2 uranium ore deposit = spent nuclear fuel in the repository; (2) Analogous geologic setting: fractured, welded, and altered rhyolitic ash flow tuffs overlying carbonate rocks; (3) Analogous climate: Semiarid to arid; (4) Analogous geochemistry: Oxidizing conditions; and (5) Analogous hydrogeology: The ore deposit lies in the unsaturated zone above the water table. The Nopal I deposit is approximately 8 ± 0.5 million years old and has been exposed to oxidizing conditions during the last 3.2 to 3.4 million years. The Pena Blanca Natural Analogue Model considers that the uranium oxide and uranium silicates in the ore deposit were originally analogous to uranium-oxide spent nuclear fuel. The Pena Blanca site has been characterized using field and laboratory investigations of its fault and fracture distribution, mineralogy, fracture fillings, seepage into the mine adits, regional hydrology, and mineralization that shows the extent of radionuclide migration. 
Three boreholes were drilled at the Nopal I mine site in 2003 and these boreholes have provided samples for lithologic characterization, water-level measurements, and water samples for laboratory analysis

  14. THE PENA BLANCA NATURAL ANALOGUE PERFORMANCE ASSESSMENT MODEL

    Energy Technology Data Exchange (ETDEWEB)

    G.J. Saulnier Jr; W. Statham

    2006-03-10

    The Nopal I uranium mine in the Sierra Pena Blanca, Chihuahua, Mexico serves as a natural analogue to the Yucca Mountain repository. The Pena Blanca Natural Analogue Performance Assessment Model simulates the mobilization and transport of radionuclides that are released from the mine and transported to the saturated zone. The Pena Blanca Natural Analogue Model uses probabilistic simulations of hydrogeologic processes that are analogous to the processes that occur at the Yucca Mountain site. The Nopal I uranium deposit lies in fractured, welded, and altered rhyolitic ash flow tuffs that overlie carbonate rocks, a setting analogous to the geologic formations at the Yucca Mountain site. The Nopal I mine site has the following characteristics as compared to the Yucca Mountain repository site. (1) Analogous source: UO₂ uranium ore deposit = spent nuclear fuel in the repository; (2) Analogous geologic setting: fractured, welded, and altered rhyolitic ash flow tuffs overlying carbonate rocks; (3) Analogous climate: Semiarid to arid; (4) Analogous geochemistry: Oxidizing conditions; and (5) Analogous hydrogeology: The ore deposit lies in the unsaturated zone above the water table. The Nopal I deposit is approximately 8 ± 0.5 million years old and has been exposed to oxidizing conditions during the last 3.2 to 3.4 million years. The Pena Blanca Natural Analogue Model considers that the uranium oxide and uranium silicates in the ore deposit were originally analogous to uranium-oxide spent nuclear fuel. The Pena Blanca site has been characterized using field and laboratory investigations of its fault and fracture distribution, mineralogy, fracture fillings, seepage into the mine adits, regional hydrology, and mineralization that shows the extent of radionuclide migration. Three boreholes were drilled at the Nopal I mine site in 2003, and these boreholes have provided samples for lithologic characterization, water-level measurements, and water samples for laboratory analysis.
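    The probabilistic style of simulation described above can be illustrated with a minimal Monte Carlo sketch that samples radionuclide travel times through an unsaturated column. All parameter ranges and distributions below are hypothetical placeholders for illustration only, not site data or the actual model:

    ```python
    import math
    import random

    def sample_travel_time(n_samples=10_000, seed=42):
        """Monte Carlo sketch: sample travel times through a fractured-tuff
        unsaturated zone. Every range below is illustrative, not site data."""
        rng = random.Random(seed)
        times = []
        for _ in range(n_samples):
            path_length_m = rng.uniform(50.0, 200.0)        # depth to water table
            velocity_m_yr = rng.lognormvariate(math.log(0.1), 0.5)  # seepage velocity
            retardation = rng.uniform(1.0, 50.0)            # sorption retardation factor
            times.append(path_length_m * retardation / velocity_m_yr)
        times.sort()
        return times[len(times) // 2]  # median travel time in years

    median_years = sample_travel_time()
    print(f"median travel time: {median_years:.0f} years")
    ```

    Repeating such draws over distributions of material properties, rather than using single fixed values, is what distinguishes a probabilistic performance assessment from a deterministic one.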

  15. Sensitivity of hydrological performance assessment analysis to variations in material properties, conceptual models, and ventilation models

    Energy Technology Data Exchange (ETDEWEB)

    Sobolik, S.R.; Ho, C.K.; Dunn, E. [Sandia National Labs., Albuquerque, NM (United States)]; Robey, T.H. [Spectra Research Inst., Albuquerque, NM (United States)]; Cruz, W.T. [Univ. del Turabo, Gurabo (Puerto Rico)]

    1996-07-01

    The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to studies evaluating the sensitivity of previous hydrological performance assessment analyses to variations in material properties, conceptual models, and ventilation models, and the implications of this sensitivity for previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document.
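    The kind of sensitivity study described here can be sketched, in miniature, as a one-at-a-time parameter perturbation around a base case. The response function and all parameter values below are toy stand-ins for the site-scale hydrological codes, purely for illustration:

    ```python
    def percolation_flux(permeability, infiltration, ventilation_factor):
        """Toy response function standing in for a hydrological model run
        (illustrative only; the real analyses use site-scale codes)."""
        return infiltration * permeability * (1.0 - ventilation_factor)

    def one_at_a_time_sensitivity(base, deltas):
        """Perturb each material/ventilation parameter individually and
        report the relative change in the model output."""
        base_out = percolation_flux(**base)
        result = {}
        for name, delta in deltas.items():
            perturbed = dict(base)
            perturbed[name] = base[name] * (1.0 + delta)
            result[name] = (percolation_flux(**perturbed) - base_out) / base_out
        return result

    base = {"permeability": 1e-13, "infiltration": 5.0, "ventilation_factor": 0.3}
    sens = one_at_a_time_sensitivity(base, {k: 0.10 for k in base})
    print(sens)  # relative output change per +10% parameter change
    ```

    Ranking the resulting relative changes shows which material properties or ventilation assumptions dominate the model output and therefore deserve the most characterization effort.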

  16. Sensitivity of hydrological performance assessment analysis to variations in material properties, conceptual models, and ventilation models

    International Nuclear Information System (INIS)

    Sobolik, S.R.; Ho, C.K.; Dunn, E.; Robey, T.H.; Cruz, W.T.

    1996-07-01

    The Yucca Mountain Site Characterization Project is studying Yucca Mountain in southwestern Nevada as a potential site for a high-level nuclear waste repository. Site characterization includes surface-based and underground testing. Analyses have been performed to support the design of an Exploratory Studies Facility (ESF) and of the tests performed as part of the characterization process, in order to ascertain that they have minimal impact on the natural ability of the site to isolate waste. The information in this report pertains to studies evaluating the sensitivity of previous hydrological performance assessment analyses to variations in material properties, conceptual models, and ventilation models, and the implications of this sensitivity for previous recommendations supporting ESF design. This document contains information that has been used in preparing recommendations for Appendix I of the Exploratory Studies Facility Design Requirements document.

  17. Low Cost Carbon Fibre: Applications, Performance and Cost Models - Chapter 17

    Energy Technology Data Exchange (ETDEWEB)

    Warren, Charles David [ORNL; Wheatley, Dr. Alan [University of Sunderland; Das, Sujit [ORNL

    2014-01-01

    Weight saving in automotive applications has a major bearing on fuel economy. It is generally accepted that, typically, a 10% weight reduction in an automobile will lead to a 6-8% improvement in fuel economy. In this respect, carbon fibre composites are extremely attractive in their ability to provide superlative mechanical performance per unit weight. That is why they are specified for high-end uses such as Formula 1 racing cars and the latest aircraft (e.g. Boeing 787, Airbus A350 and A380), where they comprise over 50% by weight of the structure. However, carbon fibres are expensive, and this renders their composites similarly expensive. Research has been carried out at Oak Ridge National Laboratory (ORNL), Tennessee, USA for over a decade with the aim of reducing the cost of carbon fibre such that it becomes a cost-effective option for the automotive industry. Aspects of this research relating to the development of low-cost carbon fibre have been reported in Chapter 3 of this publication. In this chapter, the practical industrial applications of low-cost carbon fibre are presented, together with considerations of the performance and cost models which underpin the work.
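    The 10% weight reduction to 6-8% fuel-economy rule of thumb quoted above can be applied numerically. The midpoint coefficient of 0.7 and the vehicle/part masses below are illustrative assumptions, not figures from the chapter:

    ```python
    def fuel_economy_gain(weight_reduction_fraction, coefficient=0.7):
        """Rule of thumb: a 10% mass reduction yields a 6-8% fuel-economy
        improvement, i.e. a mass-to-economy coefficient of roughly
        0.6-0.8 (0.7 is used here as an assumed midpoint)."""
        return coefficient * weight_reduction_fraction

    # Hypothetical example: replacing 100 kg of steel parts with carbon-fibre
    # composite at ~50% part-level mass saving, on a 1500 kg vehicle.
    mass_saved_kg = 100 * 0.5
    reduction = mass_saved_kg / 1500
    print(f"{fuel_economy_gain(reduction):.1%} fuel-economy improvement")
    ```

    Even a modest 3-4% system-level mass reduction compounds over a vehicle's lifetime fuel consumption, which is why cost-per-kilogram-saved is the metric that drives the low-cost carbon fibre work.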

  18. Electrical circuit models for performance modeling of Lithium-Sulfur batteries

    DEFF Research Database (Denmark)

    Knap, Vaclav; Stroe, Daniel Ioan; Teodorescu, Remus

    2015-01-01

    Energy storage technologies such as Lithium-ion (Li-ion) batteries are widely used in the present effort to move towards more ecological solutions in sectors like transportation or renewable-energy integration. However, today's Li-ion batteries are reaching their limits and not all demands...... of the industry are met yet. Therefore, researchers focus on alternative battery chemistries such as Lithium-Sulfur (Li-S), which have huge potential due to their high theoretical specific capacity (approx. 1675 Ah/kg) and theoretical energy density of almost 2600 Wh/kg. To analyze the suitability of this new...... emerging technology for various applications, there is a need for a Li-S battery performance model; however, developing such models represents a challenging task due to the batteries' complex ongoing chemical reactions. Therefore, a literature review was performed to summarize electrical circuit models (ECMs...
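    The simplest member of the ECM family surveyed in such reviews is a first-order Thevenin model: an open-circuit voltage source, a series resistance, and one RC pair. The sketch below simulates terminal voltage under constant discharge current; all parameter values are illustrative placeholders, not fitted Li-S data (real Li-S OCV curves are notably non-monotonic over depth of discharge):

    ```python
    import math

    def simulate_thevenin(ocv, r0, r1, c1, current_a, dt_s, steps):
        """First-order Thevenin ECM under constant discharge current.
        Parameters are illustrative, not fitted Li-S values."""
        v_rc = 0.0                 # voltage across the RC pair
        tau = r1 * c1              # RC time constant in seconds
        voltages = []
        for _ in range(steps):
            # exact discrete-time update of the RC branch for constant current
            decay = math.exp(-dt_s / tau)
            v_rc = v_rc * decay + r1 * current_a * (1.0 - decay)
            voltages.append(ocv - current_a * r0 - v_rc)
        return voltages

    v = simulate_thevenin(ocv=2.1, r0=0.05, r1=0.03, c1=500.0,
                          current_a=2.0, dt_s=1.0, steps=60)
    print(f"terminal voltage after 60 s: {v[-1]:.3f} V")
    ```

    Fitting r0, r1, and c1 (and an OCV-vs-state-of-charge table) to pulse-test data is the usual identification route; the challenge the abstract notes for Li-S is that these parameters vary strongly with the cell's shuttling chemistry.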

  19. visCOS: An R-package to evaluate model performance of hydrological models

    Science.gov (United States)

    Klotz, Daniel; Herrnegger, Mathew; Wesemann, Johannes; Schulz, Karsten

    2016-04-01

    The evaluation of model performance is a central part of (hydrological) modelling. Much attention has been given to the development of evaluation criteria and diagnostic frameworks (Klemeš, 1986; Gupta et al., 2008; among many others). Nevertheless, many applications exist for which objective functions do not yet provide satisfying summaries. Thus, the necessity to visualize results arises in order to explore a wider range of model capacities, be they strengths or deficiencies. Visualizations are usually devised for specific projects, and these efforts are often not distributed to a broader community (e.g. via open-source software packages). Hence, the opportunity to explicitly discuss a state-of-the-art presentation technique is often missed. We therefore present a comprehensive R package for evaluating model performance by visualizing and exploring different aspects of hydrological time series. The presented package comprises a set of useful plots and visualization methods, which complement existing packages such as hydroGOF (Zambrano-Bigiarini et al., 2012). It is derived from practical applications of the hydrological models COSERO and COSEROreg (Kling et al., 2014). visCOS, providing an interface in R, represents an easy-to-use software package for visualizing and assessing model performance and can be used in the process of model calibration or model development. The package provides functions to load hydrological data into R, clean the data, process, visualize, explore and finally save the results in a consistent way. Together with an interactive zoom function for the time series, an online calculation of the objective functions for variable time windows is included. Common hydrological objective functions, such as the Nash-Sutcliffe Efficiency and the Kling-Gupta Efficiency, can also be evaluated and visualized in different ways for defined sub-periods such as hydrological years or seasonal sections. Many hydrologists use long-term water-balances as a
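    The two objective functions named in the abstract have standard textbook definitions that are easy to state in code. The sketch below (in Python rather than R, and independent of visCOS) computes the Nash-Sutcliffe Efficiency and the 2009 form of the Kling-Gupta Efficiency on a toy observed/simulated pair:

    ```python
    import statistics

    def nse(obs, sim):
        """Nash-Sutcliffe Efficiency: 1 minus the ratio of residual variance
        to the variance of observations about their mean (1.0 = perfect)."""
        mean_obs = statistics.fmean(obs)
        num = sum((o - s) ** 2 for o, s in zip(obs, sim))
        den = sum((o - mean_obs) ** 2 for o in obs)
        return 1.0 - num / den

    def kge(obs, sim):
        """Kling-Gupta Efficiency (2009 form): combines correlation r,
        variability ratio alpha, and bias ratio beta (1.0 = perfect)."""
        r = statistics.correlation(obs, sim)
        alpha = statistics.stdev(sim) / statistics.stdev(obs)
        beta = statistics.fmean(sim) / statistics.fmean(obs)
        return 1.0 - ((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2) ** 0.5

    obs = [1.0, 2.0, 3.0, 4.0, 5.0]  # toy discharge series
    sim = [1.1, 1.9, 3.2, 3.8, 5.1]
    print(f"NSE = {nse(obs, sim):.3f}, KGE = {kge(obs, sim):.3f}")
    ```

    Evaluating these per sub-period (hydrological years, seasons) rather than once over the whole series is exactly the kind of windowed diagnostic the package supports.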

  20. Typical horticultural products between tradition and innovation

    Directory of Open Access Journals (Sweden)

    Innocenza Chessa

    2009-10-01

    Full Text Available Recent EU and national policies for agriculture and rural development are mainly focused on fostering the production of high-quality products, in response to the increasing demand for food safety, typical foods and traditional processing methods. Another word very often used to describe foods these days is “typicality”, which pools together the concepts of “food connected with a specific place”, “historical memory and tradition” and “culture”. The importance of such foods to the EU and the national administrations is demonstrated, among other things, by the high number of PDO, PGI and TSG certified products in Italy. In this period of global markets and economic crisis, farmers are realizing how “typical products” can be an opportunity to maintain their market share and to improve the economy of local areas. At the same time, new tools and strategies are needed to reach these goals. A lack of knowledge has also been recognized on how new technologies and results coming from recent research can help in exploiting traditional products and in maintaining biodiversity. Taking into account the great variety and richness of typical products, landscapes and biodiversity, this report will describe and analyze the relationships among typicality, innovation and research in horticulture. At the beginning, “typicality” and “innovation” will be defined, also through some statistical features, which rank Italy first in the number of typical labelled products; it will then be highlighted how typical products of high quality, connected with the tradition and culture of specific production areas, are closely related to the value of agro-biodiversity. Several different examples will be used to explain successful methods and/or strategies used to exploit and foster typical Italian vegetables, fruits and flowers. Finally, as a conclusion, since it is thought that