WorldWideScience

Sample records for modeling methodology starts

  1. Agile methodology selection criteria: IT start-up case study

    Science.gov (United States)

    Micic, Lj

    2017-05-01

    Project management in modern IT companies is often based on agile methodologies, which have several advantages over traditional methodologies such as waterfall. Since clients sometimes change a project during development, it is crucial for an IT company to choose carefully which methodology to implement, and whether it will rely mostly on one methodology or on a combination of several. Among the many modern methodologies in use, Scrum, Kanban and XP (extreme programming) are the most common. Companies sometimes use tools and procedures mostly from one of them, but quite often they combine elements of several. Because these methodologies are only frameworks, companies can adapt them to their specific projects and other constraints. These methodologies are still in limited use in Bosnia, but more and more IT companies are adopting agile practices, not only because they are standard for clients abroad but also because they are becoming the only way to deliver a quality product on time. However, it remains challenging to decide which methodology, or combination of several, a company should implement, and how to connect it to its own projects, organizational framework and HR management. This paper presents a case study of a local IT start-up and delivers a solution based on a theoretical framework and the practical constraints of the case company.

  2. Design Methodology of Camshaft Driven Charge Valves for Pneumatic Engine Starts

    Directory of Open Access Journals (Sweden)

    Moser Michael M.

    2015-01-01

    Idling losses constitute a significant amount of the fuel consumption of internal combustion engines. Therefore, shutting down the engine during idling phases can improve its overall efficiency. For driver acceptance a fast restart of the engine must be guaranteed. A fast engine start can be performed using a powerful electric starter and an appropriate battery which are found in hybrid electric vehicles, for example. However, these devices involve additional cost and weight. An alternative method is to use a tank with pressurized air that can be injected directly into the cylinders to start the engine pneumatically. In this paper, pneumatic engine starts using camshaft driven charge valves are discussed. A general methodology for an air-optimal charge valve design is presented which can deal with various requirements. The proposed design methodology is based on a process model representing pneumatic engine operation. A design example for a two-cylinder engine is shown, and the resulting optimized pneumatic start is experimentally verified on a test bench engine. The engine's idling speed of 1200 rpm can be reached within 350 ms for an initial pressure in the air tank of 10 bar. A detailed system analysis highlights the characteristics of the optimal design found.

  3. Modeling Start Curves of Bainite Formation

    NARCIS (Netherlands)

    Van Bohemen, S.M.C.

    2009-01-01

    It is demonstrated that calculations with a physically based model give an accurate description of the start curve of bainite formation in a wide range of steels. The temperature dependence of the overall kinetics, which determines the characteristic C shape of the start curve, is controlled by both

  4. Early Start DENVER Model: A Meta-analysis

    Directory of Open Access Journals (Sweden)

    Jane P. Canoy

    2015-11-01

    Each child with Autism Spectrum Disorder differs from other children in symptoms, skills and type of impairment; this is why the word "spectrum" is part of the disorder's name. Eapen, Crncec, and Walter (2013) claimed that there is emerging evidence that early intervention offers the greatest capacity for a child's development during the first years of life, when "brain plasticity" is high. The only intervention program model for children as young as 18 months that has been validated in a randomized clinical trial is the Early Start Denver Model (ESDM). This study aimed to determine the effectiveness of the outcomes of ESDM for young children with Autism Spectrum Disorders. The study used the meta-analysis method: the researcher drew on studies related to ESDM published in refereed journals and available online. Five studies were included, totalling 149 children exposed to ESDM. To examine the "pooled effects" of ESDM on a variety of outcomes, a meta-analytic procedure was performed after extraction of the outcome data. Comprehensive Meta Analysis Version 3.3.070 was used to analyze the data. The effectiveness of ESDM outcomes for young children with Autism Spectrum Disorder (ASD) depends strongly on the intensity of the intervention and on a younger child age. This study provides a basis for effectively implementing an early intervention for children with autism, such as ESDM, that shows large outcome effects.
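
    For reference, the standard inverse-variance pooling behind such "pooled effects" (the abstract does not state which weighting scheme was used, so this is illustrative) combines the k study effect sizes d_i as

        $$\bar{d} \;=\; \frac{\sum_{i=1}^{k} w_i d_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{v_i},$$

    where v_i is the sampling variance of study i; a random-effects variant replaces w_i by 1/(v_i + \tau^2), with \tau^2 the between-study variance.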

  5. Researching Memories about Starting School: Autobiographical Narratives as a Methodological Approach

    Science.gov (United States)

    Turunen, Tuija A.; Dockett, Sue; Perry, Bob

    2015-01-01

    This article reports on methodological issues in the study of autobiographical narratives about transition to school within a life course approach. The data consist of 89 Australian participants' recollections of starting school between 1928 and 1995. These narratives are considered as life reviews and part of the story of "continuing…

  6. Getting Started with Topic Modeling and MALLET

    Directory of Open Access Journals (Sweden)

    Shawn Graham

    2012-09-01

    In this lesson you will first learn what topic modeling is and why you might want to employ it in your research. You will then learn how to install and work with the MALLET natural language processing toolkit to do so. MALLET involves modifying an environment variable (essentially, setting up a short-cut so that your computer always knows where to find the MALLET program) and working with the command line (i.e., by typing in commands manually, rather than clicking on icons or menus). We will run the topic modeller on some example files and look at the kinds of outputs that MALLET produces. This will give us a good idea of how it can be used on a corpus of texts to identify topics found in the documents without reading them individually.
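
    As a concrete illustration of the workflow this lesson describes (the installation path and corpus folder below are assumptions for the example, not taken from the abstract), a minimal Python wrapper can set the environment variable and drive MALLET's two standard commands:

        import os
        import subprocess

        # Assumption: MALLET was unpacked here; point this at your installation.
        os.environ["MALLET_HOME"] = "/opt/mallet-2.0.8"
        mallet = os.path.join(os.environ["MALLET_HOME"], "bin", "mallet")

        # Step 1: import a directory of plain-text files into MALLET's format.
        subprocess.run([mallet, "import-dir",
                        "--input", "sample-data",          # hypothetical corpus folder
                        "--output", "topic-input.mallet",
                        "--keep-sequence", "--remove-stopwords"], check=True)

        # Step 2: train a topic model and write the top words per topic.
        subprocess.run([mallet, "train-topics",
                        "--input", "topic-input.mallet",
                        "--num-topics", "10",
                        "--output-topic-keys", "topic-keys.txt",
                        "--output-doc-topics", "doc-topics.txt"], check=True)

    The topic-keys file then lists, for each of the ten topics, its most probable words: the kind of output the lesson inspects.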

  7. University Start-ups: A Better Business Model

    Science.gov (United States)

    Dehn, J.; Webley, P. W.

    2015-12-01

    Many universities look to start-up companies as a way to attract faculty, supporting research and students as traditional federal sources become harder to come by. University-affiliated start-up companies can apply for a broader suite of grants, as well as market their services to a broad customer base. Often university administrators see this as a potential panacea, but national statistics show this is not the case. Rarely do universities profit significantly from their start-ups. With a success rate of around 20%, most start-ups end up costing the university money as well as faculty time. For the faculty, assuming they want to continue in academia, a start-up is often unattractive because it commonly leads out of academia: running a successful business while maintaining a strong teaching and research load is almost impossible. Most business models and business professionals work outside of academia, and the models taught in business schools do not merge well with a university environment. To mitigate this, a new business model is proposed in which university start-ups are aligned with the academic and research missions of the university. A university start-up must work within the university and directly support research and students, and the work done maintaining the business must be recognized as part of the faculty member's university obligations. This requires a complex conflict-of-interest management plan, and the companies must be non-profit in order not to jeopardize the university's status. This approach may not work well for all universities, but would be ideal for many as a way to conserve resources and ensure a harmonious relationship with their start-ups and faculty.

  8. Initiating "The Methodology of Jacques Ranciere": How Does It All Start?

    Science.gov (United States)

    Mercieca, Duncan P.

    2012-01-01

    Educationalists are currently engaging with Jacques Ranciere's thought on emancipation and equality. The focus of this paper is on what initiates the process that starts emancipation. With reference to teachers the question is: how do teachers become emancipated? This paper discusses how the teacher's life is made "sensible" and how sense is…

  9. Modeling and cold start in alcohol-fueled engines

    Energy Technology Data Exchange (ETDEWEB)

    Markel, A.J.; Bailey, B.K.

    1998-05-01

    Neat alcohol fuels offer several benefits over conventional gasoline in automotive applications. However, their low vapor pressure and high heat of vaporization make it difficult to produce a flammable vapor composition from a neat alcohol fuel during a start under cold ambient conditions. Various methods have been introduced to compensate for this deficiency. In this study, the authors applied computer modeling and simulation to evaluate the potential of four cold-start technologies for engines fueled by near-neat alcohol. The four technologies were a rich combustor device, a partial oxidation reactor, a catalytic reformer, and an enhanced ignition system. The authors ranked the competing technologies by their ability to meet two primary criteria for cold starting an engine at −25 °C and also by several secondary parameters related to commercialization. Their analysis results suggest that of the four technologies evaluated, the enhanced ignition system is the best option for further development.

  11. Theories, Models and Methodology in Writing Research

    NARCIS (Netherlands)

    Rijlaarsdam, Gert; Bergh, van den Huub; Couzijn, Michel

    1996-01-01

    Theories, Models and Methodology in Writing Research describes the current state of the art in research on written text production. The chapters in the first part offer contributions to the creation of new theories and models for writing processes. The second part examines specific elements of the writing process...

  12. Start Talking Science: A model for STEM communication events

    Science.gov (United States)

    Love, Christina; Start Talking Science Collaboration

    2017-01-01

    Start Talking Science is an annual public event where STEM researchers present posters aimed at a non-technical audience. It was founded in 2013 by a coalition of science communication advocates with varying technical backgrounds from almost every major university in the Philadelphia area. Our mission is to increase public awareness of, and interest in, local cutting-edge STEM research while also strengthening the science communication skills of STEM researchers. All STEM researchers, including students and seasoned professionals, are invited to present their research; however, they must participate in our review process, which focuses on how to communicate their science to a general audience. Start Talking Science will be presented as a model for science communication events in other cities, with details about the first three annual events and our review process. www.starttalkingscience.com

  13. Intelligent CAD Methodology Research of Adaptive Modeling

    Institute of Scientific and Technical Information of China (English)

    ZHANG Weibo; LI Jun; YAN Jianrong

    2006-01-01

    The key to implementing ICAD technology is to establish a knowledge-based product model covering a wide range of domains. This paper presents a knowledge-based methodology for adaptive modeling. Guided by ontology concepts and using object-oriented technology, it provides a knowledge-based model framework. It involves the diverse domains of product design and realizes multi-domain modeling, embedding the relevant information including standards, rules and expert experience. To test the feasibility of the methodology, the research focuses on automotive diaphragm spring clutch design, and an adaptive clutch design model is established using the knowledge-based modeling language AML.

  14. A Structured Methodology for Spreadsheet Modelling

    CERN Document Server

    Knight, Brian; Rajalingham, Kamalesen

    2008-01-01

    In this paper, we discuss the problem of the software engineering of a class of business spreadsheet models. A methodology for structured software development is proposed, which is based on structured analysis of data, represented as Jackson diagrams. It is shown that this analysis allows a straightforward modularisation, and that individual modules may be represented with indentation in the block-structured form of structured programs. The benefits of the structured format are discussed in terms of comprehensibility, ease of maintenance, and reduction in errors. The capability of the methodology to provide a modular overview of the model is described, and examples are given. The potential for a reverse-engineering tool to transform existing spreadsheet models is discussed.

  15. Preliminary investigation of flow dynamics during the start-up of a bulb turbine model

    Science.gov (United States)

    Coulaud, M.; Fraser, R.; Lemay, J.; Duquesne, P.; Aeschlimann, V.; Deschênes, C.

    2016-11-01

    Nowadays, the electricity network undergoes more perturbations due to market demand. Additionally, an increase of production from alternative resources such as wind or solar also induces important variations on the grid. Hydraulic power plants are used to respond quickly to these variations and stabilize the network. Hydraulic turbines therefore face more frequent start-up and stop sequences, which might significantly shorten their lifetime. In this context, an experimental analysis of start-up sequences has been conducted on the bulb turbine model of the BulbT project at the Hydraulic Machines Laboratory (LAMH) of Laval University. Maintaining a constant head, the guide vanes are opened from 0° to 30°. Three guide vane opening speeds were chosen, from 5°/s to 20°/s, with several repetitions for each opening speed. During these sequences, synchronous time-resolved measurements were performed. Pressure signals were recorded at the runner inlet and outlet and along the draft tube, and 25 pressure and strain measurements were obtained on the runner blades. Time-resolved particle image velocimetry was used to evaluate the flow rate during start-up for some repetitions, and torque fluctuations at the shaft were also monitored. This paper presents the experimental set-up and the start-up conditions chosen to simulate a prototype start-up. The transient flow-rate methodology is explained and validation measurements are detailed. Preliminary results of global performances and runner pressure measurements are presented.

  16. Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Filipe, Joaquim; Kacprzyk, Janusz; Pina, Nuno

    2014-01-01

    This book includes extended and revised versions of a set of selected papers from the 2012 International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2012) which was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and held in Rome, Italy. SIMULTECH 2012 was technically co-sponsored by the Society for Modeling & Simulation International (SCS), GDR I3, Lionphant Simulation, Simulation Team and IFIP and held in cooperation with AIS Special Interest Group of Modeling and Simulation (AIS SIGMAS) and the Movimento Italiano Modellazione e Simulazione (MIMOS).

  17. Fresh Start: A Model for Success and Sustainable Change

    Science.gov (United States)

    Matthews, Susan; Kinchington, Francia

    2006-01-01

    This article examines the rationale and debate of the "Fresh Start" schools policy introduced by the New Labour Government in 1997 as a vehicle for improvement in schools that historically had been classified as "failing". Underpinning the policy is the assumption that Fresh Start can act as a catalytic agent of positive change to performance,…

  18. A methodology for acquiring qualitative knowledge for probabilistic graphical models

    DEFF Research Database (Denmark)

    Kjærulff, Uffe Bro; Madsen, Anders L.

    2004-01-01

    We present a practical and general methodology that simplifies the task of acquiring and formulating qualitative knowledge for constructing probabilistic graphical models (PGMs). The methodology efficiently captures and communicates expert knowledge, and has significantly eased the model development...

  19. A methodology for modeling regional terrorism risk.

    Science.gov (United States)

    Chatterjee, Samrat; Abkowitz, Mark D

    2011-07-01

    Over the past decade, terrorism risk has become a prominent consideration in protecting the well-being of individuals and organizations. More recently, there has been interest in not only quantifying terrorism risk, but also placing it in the context of an all-hazards environment in which consideration is given to accidents and natural hazards, as well as intentional acts. This article discusses the development of a regional terrorism risk assessment model designed for this purpose. The approach taken is to model terrorism risk as a dependent variable, expressed in expected annual monetary terms, as a function of attributes of population concentration and critical infrastructure. This allows for an assessment of regional terrorism risk in and of itself, as well as in relation to man-made accident and natural hazard risks, so that mitigation resources can be allocated in an effective manner. The adopted methodology incorporates elements of two terrorism risk modeling approaches (event-based models and risk indicators), producing results that can be utilized at various jurisdictional levels. The validity, strengths, and limitations of the model are discussed in the context of a case study application within the United States.
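
    The abstract does not state the model's functional form; a generic event-based, expected-annual-loss formulation of the kind described (notation mine) is

        $$R \;=\; \sum_{s} \lambda_s\,E[C_s],$$

    where \lambda_s is the expected annual frequency of attack scenario s and E[C_s] its expected monetary consequence, with both terms estimated from risk indicators such as population concentration and critical infrastructure attributes.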

  20. Methodology, models and algorithms in thermographic diagnostics

    CERN Document Server

    Živčák, Jozef; Madarász, Ladislav; Rudas, Imre J

    2013-01-01

    This book presents the methodology and techniques of thermographic applications, with a focus primarily on medical thermography implemented for parametrizing the diagnostics of the human body. The first part of the book describes the basics of infrared thermography, the possibilities of thermographic diagnostics and the physical nature of thermography. The second half covers tools of intelligent engineering applied to the solving of selected applications and projects. Thermographic diagnostics was applied to the problems of paraplegia and tetraplegia and carpal tunnel syndrome (CTS). The results of the research activities were produced in cooperation with four projects of the Ministry of Education, Science, Research and Sport of the Slovak Republic, entitled Digital control of complex systems with two degrees of freedom, Progressive methods of education in the area of control and modeling of complex object oriented systems on aircraft turbocompressor engines, Center for research of control of te...

  1. Agent-based Modeling Methodology for Analyzing Weapons Systems

    Science.gov (United States)

    2015-03-26

    THESIS: Agent-Based Modeling Methodology for Analyzing Weapons Systems, by Casey D. Connors, Major, USA; presented to the Faculty, Department of Operational Sciences. The available excerpt describes the simulation study methodology for the weapon system analysis, with metrics definition and data collection as called for by the analysis plan (Figure 14).

  2. Model evaluation methodology applicable to environmental assessment models

    Energy Technology Data Exchange (ETDEWEB)

    Shaeffer, D.L.

    1979-08-01

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes.
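
    A minimal sketch of the kind of analysis the report mentions (a multiplicative chain with independent lognormal inputs, sampled by Latin hypercube); the numbers are illustrative, not from the report:

        import numpy as np
        from scipy.stats import lognorm, qmc

        # Multiplicative chain model Y = X1 * X2 * X3 with independent
        # lognormal inputs (medians and geometric standard deviations assumed).
        medians = np.array([2.0, 0.5, 10.0])
        gsd = np.array([1.5, 2.0, 1.3])
        sigma = np.log(gsd)                  # log-space standard deviations

        # Latin hypercube sample, mapped through the inverse CDFs.
        u = qmc.LatinHypercube(d=3, seed=0).random(n=1000)
        x = lognorm.ppf(u, s=sigma, scale=medians)
        y = x.prod(axis=1)

        # A product of independent lognormals is lognormal: its median is the
        # product of the medians, its log-variance the sum of log-variances.
        print(np.median(y), medians.prod())

    The sampled median should agree closely with the analytic value, and the same samples can feed the sensitivity analyses the methodology calls for.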

  3. A methodology to study the possible occurrence of chugging in liquid rocket engines during transient start-up

    Science.gov (United States)

    Leonardi, Marco; Nasuti, Francesco; Di Matteo, Francesco; Steelant, Johan

    2017-10-01

    An investigation of the low-frequency combustion instabilities due to the interaction of combustion chamber and feed line dynamics in a liquid rocket engine is carried out by implementing a specific module in the system analysis software EcosimPro. The properties of the selected double time lag model are identified according to the two classical assumptions of constant and variable time lag. Module capabilities are evaluated on an experimental set-up from the literature consisting of a combustion chamber decoupled from the upstream feed lines. The computed stability map is in good agreement with both experimental data and analytical models. Moreover, the first characteristic frequency of the engine is correctly predicted, giving confidence in the use of the module for the analysis of chugging instabilities. As an example of application, a study is carried out on the influence of the feed lines on the system stability, correctly capturing that the lines extend the stable regime of the combustion chamber and that the propellant domes play a key role in coupling the dynamics of the combustion chamber and feed lines. A further example is presented to discuss the role of the pressure growth rate and of the combustion chamber properties in the possible occurrence of chug instability during engine start-up and in the conditions that lead to its damping or growth.
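
    The paper's equations are not reproduced in the abstract; as a minimal lumped-parameter sketch of a double time lag chug model (notation and simplifications are mine, following the classical time-lag literature), the chamber pressure p_c can be coupled to the injected propellant flows through two constant lags:

        $$\frac{V_c}{c^{*2}}\,\frac{dp_c}{dt} \;=\; \dot{m}_o(t-\tau_o) \;+\; \dot{m}_f(t-\tau_f) \;-\; \frac{p_c\,A_t}{c^{*}},$$

    where \tau_o and \tau_f are the oxidizer and fuel time lags, V_c the chamber volume, A_t the throat area and c^* the characteristic velocity; the feed-line dynamics close the loop by making \dot{m}_o and \dot{m}_f functions of the chamber pressure, which is what produces the low-frequency instability.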

  4. Investigations about Starting Cracks in DC-Casting of 6063-Type Billets Part II: Modelling Results

    Science.gov (United States)

    Jensen, E. K.; Schneider, W.

    The influence of varying a number of casting parameters on the starting-crack tendency has been studied by experiments in Part I (1) and by model calculations in Part II. Both studies point to the starting block shape as the most important single factor in controlling starting cracks. By using the thermal model ALSIM-2 to analyse the initial experimental results, the variable heat transfer towards the starting block was determined. This made possible a satisfactory model analysis of the starting phase, and likewise the formulation of a useful cracking concept. Thus, using the calculated and measured liquid pool depth curves in the starting phase of casting as a basis, an effective starting block shape was found. This new shape practically eliminates the starting-crack problems in extrusion billets of AA6063-type alloys.

  6. Modeling, methodologies and tools for molecular and nano-scale communications

    CERN Document Server

    Nakano, Tadashi; Moore, Michael

    2017-01-01

    The book presents the state of the art in the emerging field of molecular and nanoscale communication. It gives special attention to fundamental models, and advanced methodologies and tools used in the field. It covers a wide range of applications, e.g. nanomedicine, nanorobot communication, bioremediation and environmental management. It addresses advanced graduate students, academics and professionals working at the forefront in their fields and at the interfaces between different areas of research, such as engineering, computer science, biology and nanotechnology.

  7. Methodology for Modeling and Analysis of Business Processes (MMABP

    Directory of Open Access Journals (Sweden)

    Vaclav Repa

    2015-10-01

    This paper introduces the methodology for modeling business processes. Creation of the methodology is described in terms of the Design Science Method. Firstly, the gap in contemporary Business Process Modeling approaches is identified, and general modeling principles which can fill the gap are discussed. The way in which these principles have been implemented in the main features of the created methodology is described. The most critical identified points of business process modeling are process states, process hierarchy and the granularity of process description. The methodology has been evaluated by use in a real project. Using examples from this project, the main methodology features are explained together with the significant problems met during the project. Concluding from these problems and from the results of the methodology evaluation, the needed future development of the methodology is outlined.

  8. Design of an Emulsion-based Personal Detergent through a Model-based Chemical Product Design Methodology

    DEFF Research Database (Denmark)

    Mattei, Michele; Hill, Michael; Kontogeorgis, Georgios;

    2013-01-01

    An extended systematic methodology for the design of emulsion-based chemical products is presented. The methodology consists of a model-based framework involving seven sequential hierarchical steps: starting with the identification of the needs to be satisfied by the product and then adding one...

  10. Modeling and Timing Simulation of Micro Turbine engine in Starting Process

    Science.gov (United States)

    Huang, Libing; Gu, Feng; Zhang, Ying; He, Yuqing

    2016-11-01

    A stable start-up process is the foundation of turbine engine operation. Optimizing starting performance is very complex, and experiments are dangerous, so simulation studies of the turbine engine starting process are necessary. In this paper, a mathematical model of the turbine engine starting process is established according to the starting principle of the JetCat P400 turbine engine, using experimental data from engine ground operation and actuator voltage data. The simulation results show the validity of the model.

  11. Threat model framework and methodology for personal networks (PNs)

    DEFF Research Database (Denmark)

    Prasad, Neeli R.

    2007-01-01

    The aim is to give a structured, convenient approach for building threat models. A framework for the threat model is presented with a list of requirements for the methodology. The methodology will be applied to build a threat model for Personal Networks. Practical tools like UML sequence diagrams and attack trees have...

  12. The Service-Learning methodology applied to Operations Management: From the Operations Plan to business start up.

    Directory of Open Access Journals (Sweden)

    Constantino García-Ramos

    2017-06-01

    After carrying out this teaching innovation activity, we can conclude that Service-Learning (SL) is a good methodology for improving the academic, personal and social development of students, suggesting that it is possible to combine their academic success with the social commitment of the University.

  13. New systematic methodology for incorporating dynamic heat transfer modelling in multi-phase biochemical reactors.

    Science.gov (United States)

    Fernández-Arévalo, T; Lizarralde, I; Grau, P; Ayesa, E

    2014-09-01

    This paper presents a new modelling methodology for dynamically predicting the heat produced or consumed in the transformations of any biological reactor using Hess's law. Starting from a complete description of model components' stoichiometry and formation enthalpies, the proposed modelling methodology successfully integrates the simultaneous calculation of both the conventional mass balances and the enthalpy change of reaction in an expandable multi-phase matrix structure, which facilitates a detailed prediction of the main heat fluxes in biochemical reactors. The methodology has been implemented in a plant-wide modelling methodology in order to facilitate the dynamic description of mass and heat throughout the plant. After validation with literature data, two case studies are described as illustrative examples of the capability of the methodology. In the first, a predenitrification-nitrification dynamic process is analysed, with the aim of demonstrating the easy integration of the methodology in any system. In the second case study, the simulation of a thermal model for an ATAD shows the potential of the proposed methodology for analysing the effect of ventilation and influent characterization.
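
    The underlying principle (standard thermochemistry, notation mine) is that Hess's law gives the enthalpy change of each transformation directly from the stoichiometry and the tabulated formation enthalpies of the model components:

        $$\Delta H_r \;=\; \sum_i \nu_i\,\Delta H_{f,i},$$

    with \nu_i the signed stoichiometric coefficients (positive for products, negative for reactants); weighting each \Delta H_r by its reaction rate and summing yields the net heat produced or consumed in the reactor.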

  14. Getting Started and Working with Building Information Modeling

    Science.gov (United States)

    Smith, Dana K.

    2009-01-01

    This article will assume that one has heard of Building Information Modeling or BIM but has not developed a strategy as to how to get the most out of it. The National BIM Standard (NBIMS) has defined BIM as a digital representation of physical and functional characteristics of a facility. As such, it serves as a shared knowledge resource for…

  15. Modeling Virtual Organization Architecture with the Virtual Organization Breeding Methodology

    Science.gov (United States)

    Paszkiewicz, Zbigniew; Picard, Willy

    While Enterprise Architecture Modeling (EAM) methodologies become more and more popular, an EAM methodology tailored to the needs of virtual organizations (VO) is still to be developed. Among the most popular EAM methodologies, TOGAF has been chosen as the basis for a new EAM methodology taking into account the characteristics of VOs presented in this paper. In this new methodology, referred to as the Virtual Organization Breeding Methodology (VOBM), concepts developed within the ECOLEAD project, e.g. the concept of a Virtual Breeding Environment (VBE) or the VO creation schema, serve as fundamental elements for the development of VOBM. VOBM is a generic methodology that should be adapted to a given VBE. VOBM defines the structure of VBE and VO architectures in a service-oriented environment, as well as an architecture development method for virtual organizations (ADM4VO). Finally, a preliminary set of tools and methods for VOBM is given in this paper.

  17. Eco-friendly dialysis with the Systemic Design methodology: an eco-friendly dialysis may start from "the grave"

    OpenAIRE

    Pereno, Amina; Barbero, Silvia

    2013-01-01

    Chronic hemodialysis produces about 600,00 tons of plastic waste per year. Systemic Design is an innovative method for analysing the environmental impact of, and the improvement strategies needed for, planet-friendly production of hardware and supplies in all fields of human life, with an approach that has progressively moved from the study of the lifespan of the object to the continuous start of new cycles "from cradle to cradle". In medicine, attention to the environmental impact is still limited...

  18. Starting Conditions for Hydrothermal Systems Underneath Martian Craters: Hydrocode Modeling

    Science.gov (United States)

    Pierazzo, E.; Artemieva, N. A.; Ivanov, B. A.

    2004-01-01

    Mars is the most Earth-like of the Solar System's planets, and the first place to look for any sign of present or past extraterrestrial life. Its surface shows many features indicative of the presence of surface and sub-surface water, while impact cratering and volcanism have provided temporary and local surface heat sources throughout Mars' geologic history. Impact craters are widely used indicators for the presence of sub-surface water or ice on Mars. In particular, the presence of significant amounts of ground ice or water would cause impact-induced hydrothermal alteration at Martian impact sites. The realization that hydrothermal systems are possible sites for the origin and early evolution of life on Earth has given rise to the hypothesis that hydrothermal systems may have had the same role on Mars. Rough estimates of the heat generated in impact events have been based on scaling relations, or on thermal data from terrestrial impacts on crystalline basements. Preliminary studies also suggest that melt sheets and target uplift are equally important heat sources for the development of a hydrothermal system, while its lifetime depends on the volume and cooling rate of the heat source, as well as the permeability of the host rocks. We present initial results of two-dimensional (2D) and three-dimensional (3D) simulations of impacts on Mars aimed at constraining the initial conditions for modeling the onset and evolution of a hydrothermal system on the red planet. Simulations of the early stages of impact cratering provide an estimate of the amount of shock melting and the pressure-temperature distribution in the target caused by various impacts on the Martian surface. Modeling of the late stage of crater collapse is necessary to characterize the final thermal state of the target, including crater uplift and the distribution of the heated target material (including the melt pool) and hot ejecta around the crater.

  19. Goal Model to Business Process Model: A Methodology for Enterprise Government Tourism System Development

    National Research Council Canada - National Science Library

    Ahmad Nurul Fajar; Imam Marzuki Shofi

    2016-01-01

    .... However, the goal model cannot be used directly to produce a business process model. To solve this problem, this paper presents and proposes a methodology, called the GBPM Methodology, for transforming a goal model into a business process model...

  20. Methodological Approach for Modeling of Multienzyme in-pot Processes

    DEFF Research Database (Denmark)

    Andrade Santacoloma, Paloma de Gracia; Roman Martinez, Alicia; Sin, Gürkan;

    2011-01-01

    This paper presents a methodological approach for modeling multi-enzyme in-pot processes. The methodology is exemplified stepwise through the bi-enzymatic production of N-acetyl-D-neuraminic acid (Neu5Ac) from N-acetyl-D-glucosamine (GlcNAc). In this case study, sensitivity analysis is also used...

  1. Optimal Data Split Methodology for Model Validation

    CERN Document Server

    Morrison, Rebecca; Terejanu, Gabriel; Miki, Kenji; Prudhomme, Serge

    2011-01-01

    The decision to incorporate cross-validation into validation processes of mathematical models raises an immediate question - how should one partition the data into calibration and validation sets? We answer this question systematically: we present an algorithm to find the optimal partition of the data subject to certain constraints. While doing this, we address two critical issues: 1) that the model be evaluated with respect to predictions of a given quantity of interest and its ability to reproduce the data, and 2) that the model be highly challenged by the validation set, assuming it is properly informed by the calibration set. This framework also relies on the interaction between the experimentalist and/or modeler, who understand the physical system and the limitations of the model; the decision-maker, who understands and can quantify the cost of model failure; and the computational scientists, who strive to determine if the model satisfies both the modeler's and decision maker's requirements. We also note...

  2. Methodology for characterizing modeling and discretization uncertainties in computational simulation

    Energy Technology Data Exchange (ETDEWEB)

    ALVIN,KENNETH F.; OBERKAMPF,WILLIAM L.; RUTHERFORD,BRIAN M.; DIEGERT,KATHLEEN V.

    2000-03-01

    This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.

  3. A Physics-Based Starting Model for Gas Turbine Engines Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposal is to demonstrate the feasibility of producing an integrated starting model for gas turbine engines using a new physics-based...

  4. Methodology to estimate the threshold in-cylinder temperature for self-ignition of fuel during cold start of Diesel engines

    Energy Technology Data Exchange (ETDEWEB)

    Broatch, A.; Ruiz, S.; Margot, X.; Gil, A. [CMT-Motores Termicos, Universidad Politecnica de Valencia, Aptdo. 22012, E-46071 Valencia (Spain)

    2010-05-15

    Cold startability of automotive direct injection (DI) Diesel engines is frequently one of the negative features when these are compared to their closest competitor, the gasoline engine. This situation worsens with the current design trends (engine downsizing) and the emerging new Diesel combustion concepts, such as HCCI, PCCI, etc., which require low compression ratio engines. To mitigate this difficulty, pre-heating systems (glow plugs, air heating, etc.) are frequently used and their technologies have been continuously developed. For the optimum design of these systems, the determination of the threshold temperature that the gas should have in the cylinder in order to provoke the self-ignition of the fuel injected during cold starting is crucial. In this paper, a novel methodology for estimating the threshold temperature is presented. In this methodology, experimental and computational procedures are adequately combined to get a good compromise between accuracy and effort. The measurements have been used as input data and boundary conditions in 3D and 0D calculations in order to obtain the thermodynamic conditions of the gas in the cylinder during cold starting. The results obtained from the study of two engine configurations (low and high compression ratio) indicate that the threshold in-cylinder temperature is a single temperature of about 415 °C. (author)

  6. Efficient Modelling Methodology for Reconfigurable Underwater Robots

    DEFF Research Database (Denmark)

    Nielsen, Mikkel Cornelius; Blanke, Mogens; Schjølberg, Ingrid

    2016-01-01

    This paper considers the challenge of applying reconfigurable robots in an underwater environment. The main result presented is the development of a model for a system comprised of N, possibly heterogeneous, robots dynamically connected to each other and moving with 6 Degrees of Freedom (DOF). This paper presents an application of the Udwadia-Kalaba Equation for modelling the Reconfigurable Underwater Robots. The constraints developed to enforce the rigid connection between robots in the system are derived through restrictions on relative distances and orientations. To avoid singularities in the orientation and, thereby, allow the robots to undertake any relative configuration, the attitude is represented in Euler parameters.
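
    For context, the Udwadia-Kalaba Equation the paper applies is the standard closed-form result (quoted from the general literature, not from this abstract): for a system with mass matrix M, unconstrained generalized forces Q and constraints written as A(q, \dot{q}, t)\,\ddot{q} = b,

        $$\ddot{q} \;=\; M^{-1}Q \;+\; M^{-1/2}\left(A M^{-1/2}\right)^{+}\!\left(b - A M^{-1}Q\right),$$

    where (\cdot)^{+} is the Moore-Penrose pseudoinverse; the rigid inter-robot connections enter through the rows of A and b.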

  7. Model-driven software migration a methodology

    CERN Document Server

    Wagner, Christian

    2014-01-01

    Today, reliable software systems are the basis of any business or company. The continuous further development of those systems is the central component in software evolution. It requires a huge amount of time and manpower, as well as financial resources. The challenges are the size, age and heterogeneity of those software systems. Christian Wagner addresses software evolution: the inherent problems and uncertainties in the process. He presents a model-driven method which leads to a synchronization between source code and design. As a result, the model layer will be the central part in further evolution.

  8. Mathematical modeling and analysis of heat pipe start-up from the frozen state

    Energy Technology Data Exchange (ETDEWEB)

    Jang, J.H.; Faghri, A. [Wright State Univ., Dayton, OH (United States); Chang, W.S.; Mahefkey, E.T. [Wright Research and Development Center, Wright-Patterson, OH (United States)

    1989-08-01

    The start-up process of a frozen heat pipe is described and a complete mathematical model for the start-up of the frozen heat pipe is developed based on the existing experimental data, which is simplified and solved numerically. The two-dimensional transient model for the wall and wick is coupled with the one-dimensional transient model for the vapor flow when vaporization and condensation occur at the interface. A parametric study is performed to examine the effect of the boundary specification at the surface of the outer wall on the successful start-up from the frozen state. For successful start-up, the boundary specification at the outer wall surface must melt the working substance in the condenser before dry-out takes place in the evaporator.

  11. General Methodology for developing UML models from UI

    CERN Document Server

    Reddy, Ch Ram Mohan; Srinivasa, K G; Kumar, T V Suresh; Kanth, K Rajani

    2012-01-01

    In the recent past, every discipline and every industry had its own methods of developing products, whether in software development, mechanics, construction, psychology and so on. These demarcations work fine as long as the requirements stay within one discipline. However, if a project extends over several disciplines, interfaces have to be created and coordinated between the methods of those disciplines. Performance is an important quality aspect of Web Services because of their distributed nature, and predicting the performance of web services during the early stages of software development is significant. In industry, a prototype of these applications is developed during the analysis phase of the Software Development Life Cycle (SDLC), and performance models are generated from UML models. Methodologies for predicting performance from UML models are available. Hence, in this paper, a methodology for developing a Use Case model and an Activity model from the User Interface is presented. The methodology is illustrated with a case...

  12. Modeling and fatigue assessment of weld start-end locations based on the effective notch stress approach

    Energy Technology Data Exchange (ETDEWEB)

    Malikoutsakis, M. [Aristotle University of Thessaloniki, Faculty of Mechanical Engineering, Laboratory of Machine Elements Machine Design, 54124 Thessaloniki (Greece); Savaidis, G.

    2011-04-15

    The present paper contains a methodology for modeling and life assessment of fatigue-loaded welded components providing distinct weld start and end locations. The proposed methodology follows the IIW recommendation regarding modeling and finite element meshing of the weld toe and root by means of an effective notch radius, and uses the corresponding Woehler curve (FAT class) to assess the durability. Geometrical singularities and, therewith, numerical discontinuities can be overcome, especially when 3D weld root problems are treated. The fatigue life assessment is performed on the basis of normal stresses acting at the failure-critical weld toe and root locations. A comprehensive experimental database containing stress and fatigue life results derived from motor truck hypoid rear axles with complex 3D welds subject to vertical, longitudinal, and torsional loading is used to verify the calculation accuracy of the proposed methodology. The agreement between experimentally determined and calculated fatigue results is satisfactory. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  13. Data and models in Action. Methodological Issues in Production Ecology

    NARCIS (Netherlands)

    Stein, A.; Penning, de F.W.T.

    1999-01-01

    This book addresses methodological issues of production ecology. A central issue is the combination of the agricultural model with reliable data in relation to scale. A model is developed with data from a single point, whereas decisions are to be made for areas of land. Such an approach requires the

  14. Modeling survival in colon cancer: a methodological review

    Directory of Open Access Journals (Sweden)

    Holbert Don

    2007-02-01

    The Cox proportional hazards model is the most widely used model for survival analysis because of its simplicity. The fundamental assumption in this model is the proportionality of the hazard function. When this condition is not met, other modifications or other models must be used for analysis of survival data. We illustrate in this review several methodological approaches to deal with the violation of the proportionality assumption, using survival in colon cancer as an illustrative example.
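
    For reference (the standard definition, not specific to this review), the Cox model writes the hazard for an individual with covariate vector x as

        $$h(t \mid x) \;=\; h_0(t)\,\exp(\beta^{\top} x),$$

    so the hazard ratio between two individuals, \exp(\beta^{\top}(x_1 - x_2)), is constant over time; it is exactly this proportionality that must be checked, and whose violation motivates the alternative models the review surveys.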

  15. Suitability of Modern Software Development Methodologies for Model Driven Development

    Directory of Open Access Journals (Sweden)

    Ruben Picek

    2009-12-01

    As an answer to today's growing challenges in the software industry, a wide spectrum of new approaches to software development has emerged. One prominent direction is the currently most promising software development paradigm called Model Driven Development (MDD). Despite a lot of skepticism and problems, the MDD paradigm is being used and improved to accomplish many of its inherent potential benefits. A methodological approach to software development requires some kind of development process. Modern methodologies can be classified into two main categories: formal or heavyweight, and agile or lightweight. But when it comes to MDD and a development process for MDD, currently known methodologies are very poor, or better said, they offer no explanation of the MDD process. As a result of this research, in this paper the author examines the possibilities of using existing modern software methodologies in the context of the MDD paradigm.

  16. Design Intelligent Model base Online Tuning Methodology for Nonlinear System

    Directory of Open Access Journals (Sweden)

    Ali Roshanzamir

    2014-04-01

    In systems with varying dynamic parameters that require on-line training, adaptive control methodology is used. In this paper a fuzzy model-based adaptive methodology is used to tune the linear Proportional Integral Derivative (PID) controller. The main objectives in any system are stability, robustness and reliability. The PID controller is used in many applications, but it faces many challenges in the control of continuum robots. To solve these problems, a nonlinear adaptive methodology based on model-based fuzzy logic is used. This research aims to reduce or eliminate the problems of the PID controller, based on model-reference fuzzy logic theory, for the control of a flexible robot manipulator system, testing the quality of the process control in the MATLAB/Simulink simulation environment.

  17. DIGITAL GEOMETRIC MODELLING OF TEETH PROFILE BY USING CAD METHODOLOGY

    Directory of Open Access Journals (Sweden)

    Krzysztof TWARDOCH

    2014-03-01

    This article is devoted to the problem of properly defining the spatial model of a tooth profile with CAD methodologies. It is motivated by the problem of the accuracy with which the defined curves describing the geometry of the teeth are mapped. Particular attention is paid to precise geometric modelling of the involute tooth profile, which has a significant influence on the process of identifying the mesh stiffness for studies of the dynamic phenomena occurring in gear transmission systems conducted using dynamic models.
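
    For reference (a standard parametrization, not quoted from the article), the involute flank is generated from the base circle of radius r_b as

        $$x(t) = r_b(\cos t + t\sin t), \qquad y(t) = r_b(\sin t - t\cos t), \qquad t \ge 0,$$

    and the accuracy with which a CAD system approximates this curve (for example by splines) directly affects the computed tooth contact and mesh stiffness.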

  18. A methodology for constructing the calculation model of scientific spreadsheets

    OpenAIRE

    de Vos, Ans; Wielemaker, J.; Schreiber, G.; Wielinga, B.; Top, J.L.

    2015-01-01

    Spreadsheet models are frequently used by scientists to analyze research data. These models are typically described in a paper or a report, which serves as the single source of information on the underlying research project. As the calculation workflow in these models is not made explicit, readers are not able to fully understand how the research results are calculated, or trace them back to the underlying spreadsheets. This paper proposes a methodology for semi-automatically deriving the calculation model...

  19. Recursive modular modelling methodology for lumped-parameter dynamic systems.

    Science.gov (United States)

    Orsino, Renato Maia Matarazzo

    2017-08-01

    This paper proposes a novel approach to the modelling of lumped-parameter dynamic systems, based on representing them by hierarchies of mathematical models of increasing complexity instead of a single (complex) model. Exploring the multilevel modularity that these systems typically exhibit, a general recursive modelling methodology is proposed in order to reconcile the use of already existing modelling techniques. The general algorithm is based on a fundamental theorem that states the conditions for computing projection operators recursively. Three procedures for these computations are discussed: orthonormalization, use of orthogonal complements and use of generalized inverses. The novel methodology is also applied to the development of a recursive algorithm based on the Udwadia-Kalaba equation, which proves to be identical to that of a Kalman filter for estimating the state of a static process given a sequence of noiseless measurements representing the constraints that must be satisfied by the system.
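
    As a pointer to the linear algebra involved (generic facts, not the paper's specific theorem): for a constraint matrix A, the orthogonal projector onto its null space can be written with a generalized inverse as

        $$P \;=\; I - A^{+}A,$$

    and the three procedures mentioned (orthonormalization, orthogonal complements, generalized inverses) are alternative ways of computing such projectors, which the recursion then updates level by level.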

  20. Direct start - from model to demo vehicle; Der Direktstart - Vom Modell zum Demonstrator

    Energy Technology Data Exchange (ETDEWEB)

    Kulzer, A.; Laubender, J.; Moessner, D.; Sieber, U. [Robert Bosch GmbH, Schwieberdingen (Germany); Lauff, U. [Etas GmbH (Germany)

    2006-09-15

    To start an engine with the use of nothing but combustion energy - the realization of this notion comprised the main theme of a project at the corporate sector research and advance engineering of Robert Bosch GmbH. With thermodynamic investigations as a starting point, Etas tools provided the power needed to demonstrate and analyse several different starting modes within a short period of time in a vehicle. (orig.)

  1. SYSTEMS METHODOLOGY AND MATHEMATICAL MODELS FOR KNOWLEDGE MANAGEMENT

    Institute of Scientific and Technical Information of China (English)

    Yoshiteru NAKAMORI

    2003-01-01

    This paper first introduces a new discipline, knowledge science, and the role of systems science in its development. Then, after a discussion of current trends in systems science, the paper proposes a new systems methodology for knowledge management and creation. Finally, the paper discusses mathematical modeling techniques to represent and manage human knowledge that is essentially vague and context-dependent.

  2. Availability modeling methodology applied to solar power systems

    Science.gov (United States)

    Unione, A.; Burns, E.; Husseiny, A.

    1981-01-01

    Availability is discussed as a measure for estimating the expected performance for solar- and wind-powered generation systems and for identifying causes of performance loss. Applicable analysis techniques, ranging from simple system models to probabilistic fault tree analysis, are reviewed. A methodology incorporating typical availability models is developed for estimating reliable plant capacity. Examples illustrating the impact of design and configurational differences on the expected capacity of a solar-thermal power plant with a fossil-fired backup unit are given.
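
    As a toy illustration of the kind of availability arithmetic such a methodology rests on (not the paper's actual models), the snippet below combines steady-state unit availabilities, A = MTBF/(MTBF+MTTR), for a solar unit backed up in parallel by a fossil-fired unit; all figures are invented.

```python
def availability(mtbf_h, mttr_h):
    """Steady-state availability of a single repairable unit."""
    return mtbf_h / (mtbf_h + mttr_h)

a_solar = availability(mtbf_h=400.0, mttr_h=40.0)     # illustrative numbers
a_backup = availability(mtbf_h=1200.0, mttr_h=60.0)

# The plant delivers power if either the solar unit or the fossil backup is up.
a_plant = 1.0 - (1.0 - a_solar) * (1.0 - a_backup)
print(f"solar {a_solar:.3f}, backup {a_backup:.3f}, combined {a_plant:.4f}")
```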

  3. Inverting reflections using full-waveform inversion with inaccurate starting models

    KAUST Repository

    AlTheyab, Abdullah

    2015-08-19

    We present a method for inverting seismic reflections using full-waveform inversion (FWI) with inaccurate starting models. For a layered medium, near-offset reflections (with zero angle of incidence) are unlikely to be cycle-skipped regardless of the low-wavenumber velocity error in the initial models. Therefore, we use them as a starting point for FWI, and the subsurface velocity model is then updated during the FWI iterations using reflection wavepaths from varying offsets that are not cycle-skipped. To enhance low-wavenumber updates and accelerate the convergence, we take several passes through the non-linear Gauss-Seidel iterations, where we start by inverting traces from a narrow range of near offsets and finally end at the far offsets. Every pass is followed by applying smoothing to the cumulative slowness update. The smoothing is strong at the early stages and relaxed at later iterations to allow for a gradual reconstruction of the subsurface model in a multiscale manner. Applications to synthetic and field data, starting from inaccurate models, show significant low-wavenumber updates and flattening of common-image gathers after many iterations.
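
    A schematic of the pass structure described above, with a placeholder gradient routine standing in for the real reflection-wavepath gradient; the offset bands and smoothing lengths are illustrative, not the paper's values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# grad_from_offsets() is a stand-in for the reflection-wavepath FWI gradient.
def grad_from_offsets(model, offsets, rng):
    return rng.standard_normal(model.shape) * 1e-3

rng = np.random.default_rng(0)
slowness = np.full((100, 200), 1.0 / 2000.0)          # inaccurate starting model (s/m)
offset_bands = [(0, 500), (500, 1500), (1500, 4000)]  # near -> far offsets (m)
sigmas = [8.0, 4.0, 1.0]                              # strong -> relaxed smoothing

cumulative = np.zeros_like(slowness)
for band, sigma in zip(offset_bands, sigmas):         # one "pass" per offset band
    for it in range(5):                               # inner non-linear iterations
        cumulative += grad_from_offsets(slowness + cumulative, band, rng)
    cumulative = gaussian_filter(cumulative, sigma)   # smooth the cumulative update
slowness += cumulative
```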

  4. An experimental methodology for a fuzzy set preference model

    Science.gov (United States)

    Turksen, I. B.; Willson, Ian A.

    1992-01-01

    A flexible fuzzy set preference model first requires approximate methodologies for implementation. Fuzzy sets must be defined for each individual consumer using computer software, requiring a minimum of time and expertise on the part of the consumer. The amount of information needed in defining sets must also be established. The model itself must adapt fully to the subject's choice of attributes (vague or precise), attribute levels, and importance weights. The resulting individual-level model should be fully adapted to each consumer. The methodologies needed to develop this model will be equally useful in a new generation of intelligent systems which interact with ordinary consumers, controlling electronic devices through fuzzy expert systems or making recommendations based on a variety of inputs. The power of personal computers and their acceptance by consumers has yet to be fully utilized to create interactive knowledge systems that fully adapt their function to the user. Understanding individual consumer preferences is critical to the design of new products and the estimation of demand (market share) for existing products, which in turn is an input to management systems concerned with production and distribution. The question of what to make, for whom to make it, and how much to make requires an understanding of the customer's preferences and the trade-offs that exist between alternatives. Conjoint analysis is a widely used methodology which decomposes an overall preference for an object into a combination of preferences for its constituent parts (attributes such as taste and price), which are combined using an appropriate combination function. Preferences are often expressed using linguistic terms which cannot be represented in conjoint models. Current models are also not implemented at an individual level, making it difficult to reach meaningful conclusions about the cause of an individual's behavior from an aggregate model. The combination of complex aggregate

  5. Modeling Car-Following Dynamics During the Starting and Stopping Process Based on a Spring System Model

    Institute of Scientific and Technical Information of China (English)

    王殿海; 杨少辉; 初连禹

    2004-01-01

    Car-following models describe how one vehicle follows the preceding vehicles. In order to better model and explain car-following dynamics, this paper categorizes the state of a traveling vehicle into three sub-processes: the starting (acceleration) process, the car-following process, and the stopping (deceleration) process. The starting process primarily involves vehicle acceleration behavior. The stopping process involves not only car-following behavior but also deceleration behavior. This paper regards both the stopping process and the starting process as spring systems. The car-following dynamics during the starting process and the stopping process is modeled in this paper. The parameters of the proposed models, which are represented in the form of trigonometric functions, possess explicit physical meaning and definitive ranges. We have calibrated the model of the starting process using data from the Traffic Engineering Handbook, and obtained reasonable results. Compared with traditional stimulus-response car-following models, this model can better explain traffic flow phenomena and driver behavior theory.
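
    The paper's trigonometric parameterization suggests start-up profiles like the half-sine sketch below. This is an illustration of the general idea, not the calibrated model; the peak acceleration and start-up duration are assumed values.

```python
import numpy as np

# Half-sine acceleration profile for the starting process: a(t) = a_max sin(pi t / T),
# zero at rest and again when cruising speed is reached.
a_max, T, dt = 2.0, 8.0, 0.1        # peak acceleration (m/s^2), start-up duration (s)
t = np.arange(0.0, T + dt, dt)
a = a_max * np.sin(np.pi * t / T)
v = np.cumsum(a) * dt               # velocity by numerical integration
print(f"speed reached after start-up: {v[-1]:.2f} m/s")
```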

  6. Organizational information assets classification model and security architecture methodology

    Directory of Open Access Journals (Sweden)

    Mostafa Tamtaji

    2015-12-01

    Full Text Available Today, organizations are exposed to a huge diversity of information and information assets produced in different systems, such as KMS, financial and accounting systems, and office and industrial automation systems, and the protection of this information is necessary. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released. The several benefits of this model mean that organizations show a strong trend toward implementing cloud computing. Maintaining and managing information security is the main challenge in developing and accepting this model. In this paper, first, following the "design science research methodology" and compatible with the "design process in information systems research", a complete categorization of organizational assets, including 355 different types of information assets in 7 groups and 3 levels, is presented so that managers are able to plan corresponding security controls according to the importance of each group. Then, to direct the organization in architecting its information security in a cloud computing environment, an appropriate methodology is presented. The presented cloud computing security architecture resulting from the proposed methodology, and the presented classification model, are discussed and verified according to the Delphi method and expert comments.

  7. Qualitative response models: A survey of methodology and illustrative applications

    Directory of Open Access Journals (Sweden)

    Nojković Aleksandra

    2007-01-01

    Full Text Available This paper introduces econometric modeling with discrete (categorical) dependent variables. Such models, commonly referred to as qualitative response (QR) models, have become a standard tool of microeconometric analysis. Microeconometric research represents the empirical analysis of microdata, i.e. economic information about individuals, households and firms. Microeconometrics has been most widely adopted in various fields, such as labour economics, consumer behavior, or the economics of transport. The latest research shows that this methodology can also be successfully transferred to a macroeconomic context and applied to time series and panel data analysis in a wider scope.
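
    As a minimal worked example of a qualitative-response model of the kind surveyed, a binary logit fitted to synthetic microdata (not any dataset from the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic microdata: a participation decision as a function of age and wage offer.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([rng.uniform(18, 65, n), rng.lognormal(2.5, 0.4, n)])
latent = -4.0 + 0.02 * X[:, 0] + 0.25 * X[:, 1] + rng.logistic(size=n)
y = (latent > 0).astype(int)                 # binary (qualitative) response

model = LogisticRegression().fit(X, y)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("P(y=1 | age=40, wage=15):", model.predict_proba([[40.0, 15.0]])[0, 1])
```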

  8. Powerline Communications Channel Modelling Methodology Based on Statistical Features

    CERN Document Server

    Tan, Bo

    2012-01-01

    This paper proposes a new channel modelling method for powerline communications networks based on the multipath profile in the time domain. The new channel model is developed to be applied in a range of Powerline Communications (PLC) research topics such as impulse noise modelling, deployment and coverage studies, and communications theory analysis. To develop the methodology, channels are categorised according to their propagation distance and power delay profile. The statistical multipath parameters such as path arrival time, magnitude and interval for each category are analyzed to build the model. Each channel generated from the proposed statistical model represents a different realisation of a PLC network. Simulation results in the time and frequency domains show that the proposed statistical modelling method, which integrates the impact of network topology, presents the same PLC channel features as the underlying transmission line theory model. Furthermore, two potential application scenarios are d...
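
    A sketch of how one statistical channel realisation might be drawn, assuming Poisson-like path counts, exponential inter-arrival times, and Rayleigh magnitudes with exponential decay; these distributions and constants are illustrative stand-ins for the paper's fitted statistics.

```python
import numpy as np

rng = np.random.default_rng(7)

# One statistical realisation of a multipath channel impulse response (CIR).
n_paths = rng.poisson(8) + 1
arrival_intervals = rng.exponential(0.2e-6, n_paths)   # seconds between path arrivals
delays = np.cumsum(arrival_intervals)
magnitudes = rng.rayleigh(0.3, n_paths) * np.exp(-delays / 1e-6)

fs = 50e6                                              # sample rate for the CIR
h = np.zeros(int(delays.max() * fs) + 1)
for d, m in zip(delays, magnitudes):
    h[int(d * fs)] += m                                # discrete impulse response
H = np.fft.rfft(h)                                     # corresponding frequency response
```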

  9. The Methodology Roles in the Realization of a Model Development Environment

    OpenAIRE

    Arthur, James D.; Nance, Richard E.

    1988-01-01

    The definition of "methodology" is followed by a very brief review of past work in modeling methodologies. The dual role of a methodology is explained: (1) conceptual guidance in the modeling task, and (2) definition of needs for environment designers. A model development environment based on the conical methodology serves as a specific illustration of both roles.

  10. Comparative Heat Conduction Model of a Cold Storage with Puf & Eps Insulation Using Taguchi Methodology

    Directory of Open Access Journals (Sweden)

    Dr. N. Mukhopadhyay

    2015-05-01

    Full Text Available In this project work, a mathematical heat conduction model of a cold storage has been proposed (with the help of a computer program and multiple regression analysis) which can be used for the further development of cold storages in the future. In a cold storage, the refrigeration system brings the temperature down initially during start-up, but thermal insulation maintains the temperature continuously thereafter. In this view, a simple methodology is presented to calculate heat transfer by an analytical method, and an attempt has been made to minimize energy consumption by replacing 150 mm of expanded polystyrene (EPS) with 100 mm of polyurethane foam (PUF) insulation. The methodology is validated against actual data obtained from the Penguin cold storage situated in Pune, India. Insulation thickness of the side walls (TW), area of the wall (AW), and insulation thickness of the roof (TR) have been chosen as the predictor variables of the study.
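
    The analytical heat-transfer comparison reduces to one-dimensional steady conduction, Q = kAΔT/t. The sketch below contrasts the two insulation options using typical handbook conductivities; the wall area and temperature difference are assumed, not the Penguin cold storage data.

```python
def wall_heat_gain_w(k_w_mk, thickness_m, area_m2, dT_k):
    """One-dimensional steady conduction through an insulated wall: Q = k*A*dT/t."""
    return k_w_mk * area_m2 * dT_k / thickness_m

area, dT = 100.0, 40.0                              # wall area (m^2), temp difference (K)
q_eps = wall_heat_gain_w(0.035, 0.150, area, dT)    # 150 mm EPS, k ~ 0.035 W/(m K)
q_puf = wall_heat_gain_w(0.023, 0.100, area, dT)    # 100 mm PUF, k ~ 0.023 W/(m K)
print(f"EPS: {q_eps:.0f} W, PUF: {q_puf:.0f} W")    # PUF admits slightly less heat
```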

  11. Supply Chain Modeling: Downstream Risk Assessment Methodology (DRAM)

    Science.gov (United States)

    2013-12-05

    Supply Chain Modeling: Downstream Risk Assessment Methodology (DRAM). Dr. Sean Barnett, December 5, 2013, Institute for Defense Analyses, Alexandria, Virginia, DMSMS Conference 2013. These slides are unclassified and not proprietary.

  12. An improved methodology for precise geoid/quasigeoid modelling

    Science.gov (United States)

    Nesvadba, Otakar; Holota, Petr

    2016-04-01

    The paper describes the recent development of a computational procedure useful for precise local quasigeoid modelling. The overall methodology is primarily based on a solution of the so-called gravimetric boundary value problem for an ellipsoidal domain (exterior to an oblate spheroid), which means that gravity disturbances on the ellipsoid are used as input data. The problem of the difference between the Earth's topography and the chosen ellipsoidal surface is solved iteratively, by analytical continuation of the gravity disturbances to the computational ellipsoid. The methodology covers an interpolation technique for the discrete gravity data, which, considering an a priori adopted covariance function, provides the best linear unbiased estimate of the respective quantity; a numerical integration technique developed on the surface of the ellipsoid in the spectral domain; an iterative procedure of analytical continuation in ellipsoidal coordinates; removal and restoration of the atmospheric masses; an estimate of the far-zone contribution (in the case of regional data coverage); and the restore step from the obtained disturbing gravity potential to the target height anomaly. All the computational steps of the procedure are modest in their consumption of computing resources, thus the methodology can be used on a common personal computer, free of any accuracy or resolution penalty. Finally, the performance of the developed methodology is demonstrated on real-case examples related to the territories of France (Auvergne regional quasigeoid) and the Czech Republic.

  13. Methodology of problem space modeling in industrial enterprise management system

    Directory of Open Access Journals (Sweden)

    V.V. Glushchevsky

    2015-03-01

    Full Text Available The aim of the article. The aim of the article is to develop methodological principles for building a problem-space model which can be integrated into an industrial enterprise management system. The results of the analysis. The author developed methodological principles for constructing the problem space of an industrial enterprise as a structural and functional model. These problems appear on the topology of the enterprise business process network and can be solved by its management system. The centerpiece of the article is a description of the main stages of implementing the modelling methodology for an industrial enterprise's typical management problems. These stages help several units of the organizational management structure of the enterprise to solve problems within their functional competence. The author formulated a system of axioms on the structural and characteristic properties of the elements of the problem-modelling space, and on the interconnections between them. This system of axioms is in fact a justification of the correctness and adequacy of the proposed modelling methodology, and serves as the theoretical basis for the construction of the structural and functional model of the management problem space. With the help of the axiom system, this model generalizes three basic structural components of the enterprise management system: a three-dimensional model of the management problem space (the first dimension is the enterprise business process network, the second dimension is a set of management problems, and the third dimension is four vectors of measurable and qualitative characteristics of management problems, which can be analyzed and managed during enterprise functioning); a two-dimensional model of the cybernetic space of analytical problems, which are the formalized form of management problems (multivariate model experiments can be implemented with the help of this model to solve a wide range of problem situations and determine the most effective or optimal management solutions); a two-dimensional model

  14. Mixed-mode modelling mixing methodologies for organisational intervention

    CERN Document Server

    Clarke, Steve; Lehaney, Brian

    2001-01-01

    The 1980s and 1990s have seen a growing interest in research and practice in the use of methodologies within problem contexts characterised by a primary focus on technology, human issues, or power. During the last five to ten years, this has given rise to challenges regarding the ability of a single methodology to address all such contexts, and the consequent development of approaches which aim to mix methodologies within a single problem situation. This has been particularly so where the situation has called for a mix of technological (the so-called 'hard') and human-centred (so-called 'soft') methods. The approach developed has been termed mixed-mode modelling. The area of mixed-mode modelling is relatively new, with the phrase being coined approximately four years ago by Brian Lehaney in a keynote paper published at the 1996 Annual Conference of the UK Operational Research Society. Mixed-mode modelling, as suggested above, is a new way of considering problem situations faced by organisations. Traditional...

  15. Environmental sustainability modeling with exergy methodology for building life cycle

    Institute of Scientific and Technical Information of China (English)

    刘猛; 姚润明

    2009-01-01

    As an important human activity, the building industry has created comfortable space for living and work, and at the same time brought considerable pollution and huge consumption of energy and resources. Since the 1990s, after the first building environmental assessment model, BREEAM, was released in the UK, a number of assessment models have been formulated, analytical or practical in methodology. This paper aims to introduce a generic model of exergy assessment of the environmental impact of the building life cycle, taking previous models into consideration and focusing on the natural environment as well as the building life cycle; three environmental impacts are analyzed, namely embodied energy exergy, resource chemical exergy, and abatement exergy, corresponding to energy consumption, resource consumption, and pollutant discharge respectively. The model of exergy assessment of the environmental impact of the building life cycle thus formulated contains two sub-models, one from the aspect of building energy utilization, and the other from building materials use. Combining theories by ecologists such as Odum, building environmental sustainability modeling with exergy methodology is put forward with the index of the exergy footprint of building environmental impacts.

  16. Didaktisch-methodisches Modell, Methode und methodisches Instrumentarium im Fremdsprachenunterricht (Pedagogical-Methodological Model, Method and Methodological Arsenal in Foreign Language Teaching)

    Science.gov (United States)

    Guenther, Klaus

    1975-01-01

    Concentrates on (1) an exposition of the categories "pedagogical-methodological model", "method", and "methodological arsenal" from the viewpoint of FL teaching; (2) clearing up the relation between the pedagogical-methodological model and teaching method; (3) explaining an example of the application of the categories mentioned. (Text is in…

  17. A generalized methodology to characterize composite materials for pyrolysis models

    Science.gov (United States)

    McKinnon, Mark B.

    The predictive capabilities of computational fire models have improved in recent years such that models have become an integral part of many research efforts. Models improve the understanding of the fire risk of materials and may decrease the number of expensive experiments required to assess the fire hazard of a specific material or designed space. A critical component of a predictive fire model is the pyrolysis sub-model that provides a mathematical representation of the rate of gaseous fuel production from condensed phase fuels given a heat flux incident to the material surface. The modern, comprehensive pyrolysis sub-models that are common today require the definition of many model parameters to accurately represent the physical description of materials that are ubiquitous in the built environment. Coupled with the increase in the number of parameters required to accurately represent the pyrolysis of materials is the increasing prevalence in the built environment of engineered composite materials that have never been measured or modeled. The motivation behind this project is to develop a systematic, generalized methodology to determine the requisite parameters to generate pyrolysis models with predictive capabilities for layered composite materials that are common in industrial and commercial applications. This methodology has been applied to four common composites in this work that exhibit a range of material structures and component materials. The methodology utilizes a multi-scale experimental approach in which each test is designed to isolate and determine a specific subset of the parameters required to define a material in the model. Data collected in simultaneous thermogravimetry and differential scanning calorimetry experiments were analyzed to determine the reaction kinetics, thermodynamic properties, and energetics of decomposition for each component of the composite. Data collected in microscale combustion calorimetry experiments were analyzed to

  18. Modeling the Financial Distress of Microenterprise Start-Ups Using Support Vector Machines: A Case Study

    Directory of Open Access Journals (Sweden)

    Antonio Blanco-Oliver

    2014-10-01

    Full Text Available Despite the leading role that micro-entrepreneurship plays in economic development, and the high failure rate of microenterprise start-ups in their early years, very few studies have designed financial distress models to detect the financial problems of micro-entrepreneurs. Moreover, due to a lack of research, nothing is known about whether non-financial information and nonparametric statistical techniques improve the predictive capacity of these models. Therefore, this paper provides an innovative financial distress model specifically designed for microenterprise start-ups via support vector machines (SVMs) that employs financial, non-financial, and macroeconomic variables. Based on a sample of almost 5,500 micro-entrepreneurs from a Peruvian Microfinance Institution (MFI), our findings show that the introduction of non-financial information related to the zone in which the entrepreneurs live and situate their business, the duration of the MFI-entrepreneur relationship, the number of loans granted by the MFI in the last year, the loan destination, and the opinion of experts on the probability that microenterprise start-ups may experience financial problems, significantly increases the accuracy performance of our financial distress model. Furthermore, the results reveal that the models that use SVMs outperform those which employ traditional logistic regression (LR) analysis.
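
    A minimal sketch of such an SVM distress classifier on synthetic data, assuming an RBF kernel and class weighting for the imbalanced distress labels; the features loosely mirror the non-financial variables named above, but all data are fabricated for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 1000
X = np.column_stack([
    rng.normal(1.2, 0.5, n),        # a financial ratio (e.g. debt/income)
    rng.integers(0, 10, n),         # loans granted by the MFI in the last year
    rng.uniform(0, 60, n),          # months of MFI-entrepreneur relationship
])
y = (X[:, 0] + rng.normal(0, 0.6, n) > 1.6).astype(int)   # 1 = financial distress

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```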

  19. Methodologies in the modeling of combined chemo-radiation treatments

    Science.gov (United States)

    Grassberger, C.; Paganetti, H.

    2016-11-01

    The variety of treatment options for cancer patients has increased significantly in recent years. Not only do we combine radiation with surgery and chemotherapy; new therapeutic approaches such as immunotherapy and targeted therapies are starting to play a bigger role. Physics has made significant contributions to radiation therapy treatment planning and delivery. In particular, treatment plan optimization using inverse planning techniques has improved dose conformity considerably. Furthermore, medical physics is often the driving force behind tumor control and normal tissue complication modeling. While treatment optimization and outcome modeling focus mainly on the effects of radiation, treatment modalities such as chemotherapy are treated independently or are even neglected entirely. This review summarizes the published efforts to model combined-modality treatments using radiation and chemotherapy. These models will play an increasing role in optimizing cancer therapy not only from a radiation and drug dosage standpoint, but also in terms of spatial and temporal optimization of treatment schedules.

  20. Modern methodology and applications in spatial-temporal modeling

    CERN Document Server

    Matsui, Tomoko

    2015-01-01

    This book provides a modern introductory tutorial on specialized methodological and applied aspects of spatial and temporal modeling. The areas covered involve a range of topics which reflect the diversity of this domain of research across a number of quantitative disciplines. For instance, the first chapter deals with non-parametric Bayesian inference via a recently developed framework known as kernel mean embedding which has had a significant influence in machine learning disciplines. The second chapter takes up non-parametric statistical methods for spatial field reconstruction and exceedance probability estimation based on Gaussian process-based models in the context of wireless sensor network data. The third chapter presents signal-processing methods applied to acoustic mood analysis based on music signal analysis. The fourth chapter covers models that are applicable to time series modeling in the domain of speech and language processing. This includes aspects of factor analysis, independent component an...

  1. Improved Methodology for Parameter Inference in Nonlinear, Hydrologic Regression Models

    Science.gov (United States)

    Bates, Bryson C.

    1992-01-01

    A new method is developed for the construction of reliable marginal confidence intervals and joint confidence regions for the parameters of nonlinear, hydrologic regression models. A parameter power transformation is combined with measures of the asymptotic bias and asymptotic skewness of maximum likelihood estimators to determine the transformation constants which cause the bias or skewness to vanish. These optimized constants are used to construct confidence intervals and regions for the transformed model parameters using linear regression theory. The resulting confidence intervals and regions can be easily mapped into the original parameter space to give close approximations to likelihood method confidence intervals and regions for the model parameters. Unlike many other approaches to parameter transformation, the procedure does not use a grid search to find the optimal transformation constants. An example involving the fitting of the Michaelis-Menten model to velocity-discharge data from an Australian gauging station is used to illustrate the usefulness of the methodology.
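
    For contrast with the paper's transformation-based intervals, the sketch below does the naive thing: it fits the Michaelis-Menten model v = k1·q/(k2 + q) and forms symmetric linear-theory intervals from the covariance matrix, the kind of approximation the proposed methodology improves upon. The data are invented, not the Australian gauging-station record.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(q, k1, k2):
    """v = k1*q / (k2 + q): velocity as a function of discharge."""
    return k1 * q / (k2 + q)

q = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])          # discharge (m^3/s), illustrative
v = np.array([0.30, 0.48, 0.71, 0.88, 1.00, 1.05])     # velocity (m/s), illustrative

popt, pcov = curve_fit(michaelis_menten, q, v, p0=[1.0, 1.0])
se = np.sqrt(np.diag(pcov))
for name, p, s in zip(["k1", "k2"], popt, se):
    print(f"{name} = {p:.3f} +/- {1.96 * s:.3f}  (symmetric linear-theory 95% interval)")
```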

  2. Logic flowgraph methodology - A tool for modeling embedded systems

    Science.gov (United States)

    Muthukumar, C. T.; Guarro, S. B.; Apostolakis, G. E.

    1991-01-01

    The logic flowgraph methodology (LFM), a method for modeling hardware in terms of its process parameters, has been extended to form an analytical tool for the analysis of integrated (hardware/software) embedded systems. In the software part of a given embedded system model, timing and the control flow among different software components are modeled by augmenting LFM with modified Petri net structures. The objective of using such an augmented LFM model is to uncover possible errors and the potential for unanticipated software/hardware interactions. This is done by backtracking through the augmented LFM model according to established procedures which allow the semiautomated construction of fault trees for any chosen state of the embedded system (top event). These fault trees, in turn, produce the possible combinations of lower-level states (events) that may lead to the top event.

  3. Adaptation to shift work: physiologically based modeling of the effects of lighting and shifts' start time.

    Science.gov (United States)

    Postnova, Svetlana; Robinson, Peter A; Postnov, Dmitry D

    2013-01-01

    Shift work has become an integral part of our life, with almost 20% of the population in developed countries being involved in different shift schedules. However, the atypical work times, especially the night shifts, are associated with reduced quality and quantity of sleep, leading to increased sleepiness often culminating in accidents. It has been demonstrated that shift workers' sleepiness can be improved by a proper scheduling of light exposure and by optimizing shift timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters.
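
    The integrated model itself is not reproduced in the record, so the following is a toy two-process-style simulation with a homeostatic sleep pressure and a sinusoidal stand-in for the circadian drive; the thresholds and time constants are assumptions for illustration only, not the paper's parameters.

```python
import numpy as np

# Toy two-process sketch: homeostatic pressure H builds during wake and decays
# during sleep; a sinusoid C modulates the switching thresholds.
dt, chi_w, chi_s = 0.1, 18.0, 4.2        # hours; wake build-up / sleep decay constants
t = np.arange(0.0, 96.0, dt)             # four simulated days
H, awake, state = 0.3, True, []
for ti in t:
    C = 0.1 * np.sin(2 * np.pi * (ti - 8.0) / 24.0)   # circadian modulation
    if awake:
        H += (1.0 - H) / chi_w * dt
        if H > 0.85 + C:                 # upper threshold -> fall asleep
            awake = False
    else:
        H -= H / chi_s * dt
        if H < 0.25 + C:                 # lower threshold -> wake up
            awake = True
    state.append(awake)
print(f"fraction of time awake: {np.mean(state):.2f}")
```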

  4. Adaptation to shift work: physiologically based modeling of the effects of lighting and shifts' start time.

    Directory of Open Access Journals (Sweden)

    Svetlana Postnova

    Full Text Available Shift work has become an integral part of our life, with almost 20% of the population in developed countries being involved in different shift schedules. However, the atypical work times, especially the night shifts, are associated with reduced quality and quantity of sleep, leading to increased sleepiness often culminating in accidents. It has been demonstrated that shift workers' sleepiness can be improved by a proper scheduling of light exposure and by optimizing shift timing. Here, an integrated physiologically-based model of sleep-wake cycles is used to predict adaptation to shift work in different light conditions and for different shift start times for a schedule of four consecutive days of work. The integrated model combines a model of the ascending arousal system in the brain that controls the sleep-wake switch and a human circadian pacemaker model. To validate the application of the integrated model and demonstrate its utility, its dynamics are adjusted to achieve a fit to published experimental results showing adaptation of night shift workers (n = 8) in conditions of either bright or regular lighting. Further, the model is used to predict the shift workers' adaptation to the same shift schedule, but for conditions not considered in the experiment. The model demonstrates that the intensity of shift light can be reduced fourfold from that used in the experiment and still produce good adaptation to night work. The model predicts that sleepiness of the workers during night shifts on a protocol with either bright or regular lighting can be significantly improved by starting the shift earlier in the night, e.g., at 21:00 instead of 00:00. Finally, the study predicts that people of the same chronotype, i.e. with identical sleep times in normal conditions, can have drastically different responses to shift work depending on their intrinsic circadian and homeostatic parameters.

  5. Developing the Business Model – a Methodology for Virtual Enterprises

    DEFF Research Database (Denmark)

    Tølle, Martin; Vesterager, Johan

    2003-01-01

    This chapter presents a methodology to develop Virtual Enterprises (VEs). This Virtual Enterprise Methodology (VEM) outlines activities to consider when setting up and managing virtual enterprises. As a methodology the VEM helps companies to ask the right questions when preparing for, and setting up, virtual enterprises (... and Methodology ISO15704:2000).

  6. Methodology and basic algorithms of the Livermore Economic Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    Bell, R.B.

    1981-03-17

    The methodology and the basic pricing algorithms used in the Livermore Economic Modeling System (EMS) are described. The report explains the derivations of the EMS equations in detail; however, it could also serve as a general introduction to the modeling system. A brief but comprehensive explanation of what EMS is and does, and how it does it is presented. The second part examines the basic pricing algorithms currently implemented in EMS. Each algorithm's function is analyzed and a detailed derivation of the actual mathematical expressions used to implement the algorithm is presented. EMS is an evolving modeling system; improvements in existing algorithms are constantly under development and new submodels are being introduced. A snapshot of the standard version of EMS is provided and areas currently under study and development are considered briefly.

  7. MODERN MODELS AND METHODS OF DIAGNOSIS OF METHODOLOGY COMPETENT TEACHERS

    Directory of Open Access Journals (Sweden)

    Loyko V. I.

    2016-06-01

    Full Text Available The purpose of the research is the development of models and methods for diagnosing the methodical competence of a teacher. According to modern views, methodical thinking is the key competence of teachers. Modern experts consider the methodical competence of a teacher as a personal and professional quality, which is a fundamentally important factor in the success of the professional activity of teachers, as well as a subsystem of their professional competence. This is due to the fact that in today's world a high level of knowledge of the academic subjects and a command of the basics of teaching methods cannot fully describe the level of professional competence of a teacher. The authors have characterized the functional components of the methodical competence of the teacher, its relationship with other personal-professional qualities (first of all, to psychological-pedagogical, research, and informational competence), as well as its levels of formation. In forming a model of the methodical competence of the teacher, the authors proceeded from the fact that a contemporary teacher faces high demands: he or she must be ready to conduct independent research, design learning technologies, and forecast the results of the training and education of students. Since the leading component of the methodical competence of the teacher is personal experience in methodological activities, and the requirements of methodical competence are determined by the goals and objectives of methodical activity, in the present study the formation of patterns of methodical competence of the teacher was preceded by a refinement of the existing models of the methodical activity of the scientific and pedagogical staff of higher education institutions and secondary vocational education institutions. The proposed model of the methodical competence of the teacher is the scientific basis of a system for monitoring personal and professional development, and the evaluation criteria and levels of its diagnosis are the targets of a system of

  8. Proposed Methodology for Generation of Building Information Model with Laser Scanning

    Institute of Scientific and Technical Information of China (English)

    Shutao Li; J(o)rg lsele; Georg Bretthauer

    2008-01-01

    For refurbishment and state review of an existing old building, a new model reflecting the current state is often required, especially when the original plans are no longer accessible. Laser scanners are used more and more as surveying instruments for various applications because of their high-precision scanning abilities. For buildings, the most notable and widely accepted product data model is the IFC product data model. It is designed to cover the whole lifecycle, is supported by various software vendors, and enables applications to efficiently share and exchange project information. The models obtained with the laser scanner, normally sets of points ("point cloud"), have to be transferred to an IFC-compatible building information model to serve the needs of different planning states. This paper presents an approach designed by the German Research Center in Karlsruhe (Forschungszentrum Karlsruhe) to create an IFC-compatible building information model from laser range images. The methodology covering the entire process from data acquisition to the IFC-compatible product model is proposed in this paper. In addition, IFC models with different levels of detail (LoDs) are introduced and discussed within the work.

  9. A business process model as a starting point for tight cooperation among organizations

    Directory of Open Access Journals (Sweden)

    O. Mysliveček

    2006-01-01

    Full Text Available Outsourcing and other kinds of tight cooperation among organizations are more and more necessary for success on all markets (markets of high-technology products are particularly influenced). Thus it is important for companies to be able to effectively set up all kinds of cooperation. A business process model (BPM) is a suitable starting point for this future cooperation. In this paper the process of setting up such cooperation is outlined, as well as why it is important for business success.

  10. Mathematical modelling of methanogenic reactor start-up: Importance of volatile fatty acids degrading population.

    Science.gov (United States)

    Jabłoński, Sławomir J; Łukaszewicz, Marcin

    2014-12-01

    Development of a balanced community of microorganisms is one of the obligatory conditions for stable anaerobic digestion. The application of mathematical models might be helpful in developing reliable procedures for the process start-up period. Yet the accuracy of the forecast depends on the quality of the input and parameters. In this study, specific anaerobic activity (SAA) tests were applied in order to estimate the microbial community structure. The obtained data were applied as input conditions for a mathematical model of anaerobic digestion. The initial values of the variables describing the amount of acetate- and propionate-utilizing microorganisms could be calculated on the basis of the SAA results. Modelling based on those optimized variables could successfully reproduce the behavior of a real system during continuous fermentation.
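
    A minimal sketch of the modelling idea, assuming simple Monod kinetics for two volatile-fatty-acid-degrading populations whose initial biomass is scaled from SAA results; the rate constants and the SAA-to-biomass scaling are illustrative, not the study's calibrated values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Monod growth of acetate (X_ac) and propionate (X_pr) degraders.
mu_ac, Ks_ac, Y_ac = 0.35, 0.15, 0.05      # 1/d, g/L, g biomass per g substrate
mu_pr, Ks_pr, Y_pr = 0.20, 0.30, 0.04

def rhs(t, y):
    S_ac, S_pr, X_ac, X_pr = y
    r_ac = mu_ac * S_ac / (Ks_ac + S_ac) * X_ac
    r_pr = mu_pr * S_pr / (Ks_pr + S_pr) * X_pr
    return [-r_ac / Y_ac, -r_pr / Y_pr, r_ac, r_pr]

# SAA test results (illustrative) set the initial biomass of each population.
saa_acetate, saa_propionate = 0.8, 0.3
y0 = [2.0, 1.0, 0.01 * saa_acetate, 0.01 * saa_propionate]
sol = solve_ivp(rhs, (0.0, 30.0), y0, max_step=0.1)
print("residual acetate after 30 d:", sol.y[0, -1])
```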

  11. Methodology Using MELCOR Code to Model Proposed Hazard Scenario

    Energy Technology Data Exchange (ETDEWEB)

    Gavin Hawkley

    2010-07-01

    This study demonstrates a methodology for using the MELCOR code to model a proposed hazard scenario within a building containing radioactive powder, and the subsequent evaluation of a leak path factor (LPF), the fraction of respirable material that escapes the facility into the outside environment, implicit in the scenario. This LPF evaluation analyzes the basis and applicability of an assumed standard multiplication of 0.5 × 0.5 (in which 0.5 represents the fraction of material assumed to leave one area and enter another) for calculating an LPF value. The outside release depends upon the ventilation/filtration system, both filtered and unfiltered, and on other pathways from the building, such as doorways, both open and closed. This study shows how the multiple LPFs within the building can be evaluated in a combinatory process in which a total LPF is calculated, thus addressing the assumed multiplication and allowing for the designation and assessment of a respirable source term (ST) for later consequence analysis, in which the propagation of material released into the atmosphere can be modeled, the dose received by a receptor placed downwind can be estimated, and the distance adjusted to maintain such exposures as low as reasonably achievable (ALARA). This study also briefly addresses the particle characteristics that affect atmospheric particle dispersion, and compares this dispersion with the LPF methodology.
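
    The combinatory evaluation amounts to multiplying stage-wise leak path factors along a release pathway; the snippet below shows the arithmetic with placeholder values rather than MELCOR output.

```python
# Combining leak path factors multiplicatively along a release pathway, instead of
# assuming a flat 0.5 x 0.5. All values below are placeholders for illustration.
lpf_room_to_corridor = 0.35      # fraction of airborne material leaving the room
lpf_corridor_to_hvac = 0.20      # fraction entering the ventilation system
lpf_filter_penetration = 0.002   # fraction penetrating the filter stage

total_lpf = lpf_room_to_corridor * lpf_corridor_to_hvac * lpf_filter_penetration
source_term = 1.0e3 * total_lpf  # respirable release (g) from 1 kg of material at risk
print(f"total LPF = {total_lpf:.2e}, source term = {source_term:.3f} g")
```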

  12. A methodology for modeling barrier island storm-impact scenarios

    Science.gov (United States)

    Mickey, Rangley C.; Long, Joseph W.; Plant, Nathaniel G.; Thompson, David M.; Dalyander, P. Soupy

    2017-02-16

    A methodology for developing a representative set of storm scenarios based on historical wave buoy and tide gauge data for a region at the Chandeleur Islands, Louisiana, was developed by the U.S. Geological Survey. The total water level was calculated for a 10-year period and analyzed against existing topographic data to identify when storm-induced wave action would affect island morphology. These events were categorized on the basis of the threshold of total water level and duration to create a set of storm scenarios that were simulated, using a high-fidelity, process-based, morphologic evolution model, on an idealized digital elevation model of the Chandeleur Islands. The simulated morphological changes resulting from these scenarios provide a range of impacts that can help coastal managers determine resiliency of proposed or existing coastal structures and identify vulnerable areas within those structures.
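
    A sketch of the event-identification step, flagging periods where a total-water-level proxy exceeds a morphology-relevant elevation for a minimum duration; the threshold, duration, and synthetic water-level series are all assumptions for illustration, not the report's calibrated values.

```python
import numpy as np

hours = np.arange(0, 24 * 30)                       # one month, hourly samples
twl = 0.8 + 0.5 * np.sin(2 * np.pi * hours / 12.42) \
      + np.random.default_rng(3).gamma(1.0, 0.15, hours.size)  # tide + surge proxy
dune_toe = 1.6                                      # m, elevation threshold
min_duration = 6                                    # h, minimum exceedance duration

# Find contiguous exceedance runs and keep those long enough to matter.
above = twl > dune_toe
edges = np.flatnonzero(np.diff(np.concatenate(([0], above.astype(int), [0]))))
events = [(s, e) for s, e in zip(edges[::2], edges[1::2]) if e - s >= min_duration]
print(f"{len(events)} storm-impact events identified")
```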

  13. Modeling of electrohydrodynamic drying process using response surface methodology.

    Science.gov (United States)

    Dalvand, Mohammad Jafar; Mohtasebi, Seyed Saeid; Rafiee, Shahin

    2014-05-01

    The energy consumption index is one of the most important criteria for judging new and emerging drying technologies. One such novel and promising alternative drying process is electrohydrodynamic (EHD) drying. In this work, solar energy was used to supply the energy required by the EHD drying process. Moreover, response surface methodology (RSM) was used to build a predictive model in order to investigate the combined effects of independent variables such as applied voltage, field strength, number of discharge electrodes (needles), and air velocity on moisture ratio, energy efficiency, and energy consumption as responses of the EHD drying process. A three-level, four-factor Box-Behnken design was employed to evaluate the effects of the independent variables on the system responses. A stepwise approach was followed to build up a model that can map the entire response surface. The interior relationships between parameters were well defined by the RSM.
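
    A minimal sketch of fitting a second-order response surface to a three-factor Box-Behnken design by least squares; the design matrix follows the standard coded BBD layout, while the response values are invented placeholders (the paper's design has four factors).

```python
import numpy as np

# Coded three-factor Box-Behnken design: 12 edge points plus 3 center points.
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
y = np.array([62, 70, 66, 80, 58, 68, 64, 76, 60, 72, 65, 78, 74, 75, 73.5])

def design_matrix(X):
    """Columns for y = b0 + sum(bi xi) + sum(bij xi xj), i <= j."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print("fitted response-surface coefficients:", np.round(beta, 3))
```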

  14. Effect of slow-to-start in the extended BML model with four-directional traffic

    Energy Technology Data Exchange (ETDEWEB)

    Kuang, Hua [College of Physical Science and Technology, Guangxi Normal University, Guilin, 541004 (China); Department of Civil and Architectural Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon (Hong Kong); Zhang, Guo-Xin [College of Physical Science and Technology, Guangxi Normal University, Guilin, 541004 (China); Li, Xing-Li [School of Applied Science, Taiyuan University of Science and Technology, Taiyuan, 030024 (China); Lo, Siu-Ming, E-mail: bcsmli@cityu.edu.hk [Department of Civil and Architectural Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon (Hong Kong)

    2014-04-01

    In this paper, an extended Biham–Middleton–Levine (BML) model is proposed to simulate the complex characteristics of four-directional traffic flow by considering the effect of slow-to-start. The simulation results show that the system does not exhibit a sharp transition from the moving phase to the jamming phase, which is consistent with the results of the latest studies of the original BML model. Differently from the geometric patterns of structures observed in previous studies, a new phase separation phenomenon, i.e., the coexistence of multiple free-flow stripes and multiple local jams, can be observed. The formation mechanisms of typical dynamic patterns are also explored. Furthermore, a mean field analysis for the maximum velocity in the moving phase is obtained, which is in good accordance with the simulation results. In addition, an interesting feature is found: this new coexistence phenomenon of two phases is determined only by the effect of slow-to-start and is completely independent of the traffic light period (considering only red and green lights).

  15. A ROADMAP FOR GENERATING SEMANTICALLY ENRICHED BUILDING MODELS ACCORDING TO CITYGML MODEL VIA TWO DIFFERENT METHODOLOGIES

    Directory of Open Access Journals (Sweden)

    G. Floros

    2016-10-01

    Full Text Available The variety of 3D modeling techniques has increased rapidly due to the advances of new technologies. Nowadays, 3D modeling software focuses not only on the finest visualization of the models, but also on their semantic features during the modeling procedure. As a result, the models thus generated are both realistic and semantically enriched. Additionally, various extensions of modeling software allow for the immediate conversion of the model's format via semi-automatic procedures with respect to the user's scope. The aim of this paper is to investigate the generation of a semantically enriched CityGML building model via two different methodologies. The first methodology includes modeling in Trimble SketchUp and transformation in FME Desktop Manager, while the second includes the model's generation in CityEngine and its transformation into the CityGML format via the 3DCitiesProject extension for ArcGIS. Finally, the two aforesaid methodologies are compared and specific characteristics are evaluated, in order to infer which methodology is best applied depending on the different projects' purposes.

  16. Spatial Development Modeling Methodology Application Possibilities in Vilnius

    Directory of Open Access Journals (Sweden)

    Lina Panavaitė

    2017-05-01

    Full Text Available In order to control the continued development of high-rise buildings and their irreversible visual impact on the overall silhouette of the city, the great cities of the world have introduced new methodological principles into their spatial development models. These methodologies and spatial planning guidelines focus not only on the controlled development of high-rise buildings, but on the spatial modelling of the whole city, by defining the main development criteria and estimating possible consequences. Vilnius city is no exception; however, the re-establishment of the independence of Lithuania caused an uncontrolled urbanization process, so most of the city's development regulations emerged as a consequence of the unmanaged legalization of investors' expectations. The importance of a consistent urban fabric, as well as of the conservation and representation of the city's most important objects, gained attention only when there emerged an actual threat of overshadowing them with new architecture, along with unmanaged urbanization in the city center and urban sprawl in suburbia caused by land-use projects. Current Vilnius spatial planning documents clearly define the urban structure and key development principles; however, the definitions are relatively abstract, causing uniform building coverage requirements for territories with distinct qualities and simplified planar designs which do not meet quality standards. The overall quality of urban architecture is not regulated. The article deals with current spatial modeling methods, their individual parts, principles, the criteria for quality assessment, and their applicability in Vilnius. The text contains an outline of possible building coverage regulations and impact assessment criteria for new development. The article contains a compendium of requirements for high-quality spatial planning and building design.

  17. Developing the Business Model – a Methodology for Virtual Enterprises

    DEFF Research Database (Denmark)

    Tølle, Martin; Vesterager, Johan

    2003-01-01

    This chapter presents a methodology to develop Virtual Enterprises (VEs). This Virtual Enterprise Methodology (VEM) outlines activities to consider when setting up and managing virtual enterprises. As a methodology the VEM helps companies to ask the right questions when preparing for, and setting...

  18. A NUMERICAL STUDY ON A SIMPLIFIED TAIL MODEL FOR TURNING FISH IN C-START

    Institute of Scientific and Technical Information of China (English)

    YANG Yan; TONG Bing-gang

    2006-01-01

    Most freshwater fish are good at turning manoeuvres. A simulated fish tail model was numerically investigated and discussed in detail. This study deals with the unsteady forces and moments exerted on the fish tail-fin in an initial sideways stroke and a subsequent return stroke, and visualizes the flow fields and vortex structures, in order to explore the flow control mechanism of the typical turning motion of fish. Further discussion is given of the fluid dynamic consequences of two different bending forms of the fish tail-fin in its C-start. The two-dimensional unsteady incompressible Navier-Stokes equations are solved with a developed pseudo-compressibility method to simulate the flow around the fish tail-fin. The computed results and the comparison with experiments indicate that (1) fish perform a turning motion of the body using the impulsive moment produced by the to-and-fro stroke, and each stage of the process exhibits its specific hydrodynamic characteristics, (2) fish adopt two forms of tail-tip bend (single bend and double bend) to accomplish a C-start turning manoeuvre, depending on their physical situations and natural environments, and (3) fish can control the turning motion by modulating some key kinematic parameters.

  19. Modeling of cold start processes and performance optimization for proton exchange membrane fuel cell stacks

    Science.gov (United States)

    Zhou, Yibo; Luo, Yueqi; Yu, Shuhai; Jiao, Kui

    2014-02-01

    In this study, a cold start model for proton exchange membrane fuel cell (PEMFC) stacks is developed, and a novel start-up method, variable heating and load control (VHLC), is proposed and evaluated. The main idea is to apply load only to the neighboring still-active cells, and to apply external heating to certain cells inside the stack simultaneously (load is not applied to the cells fully blocked by ice, although these cells can gain heat from neighboring cells). With the VHLC method, it is found that the stack voltage first increases, then decreases due to the full blockage of ice in some of the individual cells, and finally the dead cells are heated by the other active cells and activated again one by one. Based on this method, the external heating power and the stack's self-heating ability are utilized more efficiently. With proper implementation of the VHLC method, it is demonstrated that the cold start performance can be improved significantly, which is critically important for PEMFCs in automotive applications.
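
    A toy lumped-thermal sketch of the VHLC idea, applying self-heating load only to above-freezing cells and external heating to a chosen subset; the heat capacities, neighbor coupling, and power levels are illustrative assumptions, not the paper's model.

```python
import numpy as np

n_cells, dt = 20, 0.1                       # number of cells, time step (s)
T = np.full(n_cells, -20.0)                 # initial cell temperatures (deg C)
heated = np.zeros(n_cells); heated[8:12] = 30.0   # external heaters, W per cell
C, k_neigh, q_load = 250.0, 2.0, 15.0       # heat capacity J/K, coupling W/K, self-heating W

for step in range(60000):                   # simulate up to 100 minutes
    active = T > 0.0                        # only ice-free cells carry load
    q = heated + q_load * active
    cond = k_neigh * (np.roll(T, 1) - T) + k_neigh * (np.roll(T, -1) - T)
    cond[0] -= k_neigh * (T[-1] - T[0])     # remove the periodic wrap at stack ends
    cond[-1] -= k_neigh * (T[0] - T[-1])
    T += (q + cond) * dt / C                # lumped energy balance per cell
    if np.all(T > 0.0):
        print(f"all cells above 0 C after {step * dt / 60:.1f} min")
        break
```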

  20. Exploring model based engineering for large telescopes: getting started with descriptive models

    Science.gov (United States)

    Karban, R.; Zamparelli, M.; Bauvir, B.; Koehler, B.; Noethe, L.; Balestra, A.

    2008-07-01

    Large telescopes pose a continuous challenge to systems engineering due to their complexity in terms of requirements, operational modes, long duty lifetime, interfaces and number of components. A multitude of decisions must be taken throughout the life cycle of a new system, and a prime means of coping with complexity and uncertainty is using models as one decision aid. The potential of descriptive models based on the OMG Systems Modeling Language (OMG SysMLTM) is examined in different areas: building a comprehensive model serves as the basis for subsequent activities of soliciting and review for requirements, analysis and design alike. Furthermore a model is an effective communication instrument against misinterpretation pitfalls which are typical of cross disciplinary activities when using natural language only or free-format diagrams. Modeling the essential characteristics of the system, like interfaces, system structure and its behavior, are important system level issues which are addressed. Also shown is how to use a model as an analysis tool to describe the relationships among disturbances, opto-mechanical effects and control decisions and to refine the control use cases. Considerations on the scalability of the model structure and organization, its impact on the development process, the relation to document-centric structures, style and usage guidelines and the required tool chain are presented.

  1. A spatial kinetic model for simulating VVER-1000 start-up transient

    Energy Technology Data Exchange (ETDEWEB)

    Kashi, Samira [Department of Nuclear Engineering, Shahid Beheshti University, Tehran (Iran, Islamic Republic of); Moghaddam, Nader Maleki, E-mail: nader.moghaddam@gmail.com [Department of Nuclear Engineering and Physics, Amir Kabir University of Technology, Tehran (Iran, Islamic Republic of); Shahriari, Majid [Department of Nuclear Engineering, Shahid Beheshti University, Tehran (Iran, Islamic Republic of)

    2011-06-15

    Research highlights: > A spatial kinetic model of a VVER-1000 reactor core is presented. > The reactor power is tracked using the point kinetic equations from 100 W to 612 kW. > The lumped parameter approximation is used for solving the energy balance equations. > The value of reactivity related to feedback effects of core elements is calculated. > The main neutronic parameters during the transient are calculated. - Abstract: An accurate prediction of reactor core behavior in transients depends on how exactly the thermal feedbacks of the core elements, such as fuel, cladding and coolant, can be determined. In short-time transients, the results of these feedbacks directly affect the reactor power and determine the reactor response. Such transients commonly happen during the start-up process, which makes it necessary to evaluate the details of the process carefully. Hence this research evaluates a short-time transient occurring during the start-up of a VVER-1000 reactor. The reactor power was tracked using the point kinetic equations from the HZP state (100 W) to 612 kW. The final power (612 kW) was achieved by withdrawing control rods, and the resultant excess reactivity was inserted into the dynamic equations to calculate the reactor power. Since reactivity is the most important input to the point kinetic equations, the energy balance equations were solved in different zones of the core using a Lumped Parameter (LP) approximation. After determining the temperature and the total reactivity related to feedbacks in each time step, the exact value of reactivity is obtained and inserted into the point kinetic equations. In the reactor core, each zone has a specific temperature and its corresponding thermal feedback. To decrease the effects of the point kinetic approximations, these partial feedbacks in different zones are superposed to give an accurate model of the reactor core dynamics. In this manner the reactor point kinetics can be extended to the whole reactor core, which means 'Reactor spatial
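
    A minimal one-delayed-group point-kinetics sketch with a crude lumped temperature feedback, in the spirit of the record; the kinetic constants, feedback coefficient, and thermal gains are illustrative assumptions, not VVER-1000 data.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, Lambda, lam = 0.0065, 1.0e-4, 0.08     # delayed fraction, generation time (s), decay (1/s)
rho_rod, alpha = 0.0010, -2.0e-5             # inserted rod reactivity; feedback (dk/k per K)
K_T = 5.0e-7                                 # crude power-to-temperature-rise gain (K/s per W)

def rhs(t, y):
    P, C, dT = y
    rho = rho_rod + alpha * dT               # net reactivity including thermal feedback
    dP = (rho - beta) / Lambda * P + lam * C
    dC = beta / Lambda * P - lam * C
    ddT = K_T * P - dT / 50.0                # lumped heat-up with simple heat removal
    return [dP, dC, ddT]

y0 = [100.0, beta / (Lambda * lam) * 100.0, 0.0]   # precursor equilibrium at 100 W
sol = solve_ivp(rhs, (0.0, 300.0), y0, method="LSODA", max_step=0.5)
print(f"power after 300 s: {sol.y[0, -1] / 1e3:.1f} kW")
```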

  2. An EOQ inventory model for items with ramp type demand, three-parameter Weibull distribution deterioration and starting with shortage

    Directory of Open Access Journals (Sweden)

    Jain Sanjay

    2010-01-01

    Full Text Available In this paper an inventory model is developed with ramp-type demand, starting with shortage, and three-parameter Weibull distribution deterioration. A brief analysis of the cost involved is carried out with an example.

  3. Hydrodynamics of C-Start Escape Responses of Fish as Studied with Simple Physical Models.

    Science.gov (United States)

    Witt, William C; Wen, Li; Lauder, George V

    2015-10-01

    One of the most-studied unsteady locomotor behaviors exhibited by fishes is the c-start escape response. Although the kinematics of these responses have been studied extensively and two well-defined kinematic stages have been documented, only a few studies have focused on the hydrodynamic patterns generated by fishes executing escape behaviors. Previous work has shown that escape responses by bluegill sunfish generate three distinct vortex rings, each with central orthogonal jet flows, and here we extend this conclusion to two other species: stickleback and mosquitofish. Jet #1 is formed by the tail during Stage 1, and moves in the same direction as the Stage 2 movement of the fish, thereby reducing the final escape velocity but also rotating the fish. Jet #2, in contrast, moves approximately opposite to the final direction of the fish's motion and contains the bulk of the total fluid momentum powering the escape response. Jet #3 forms during Stage 2 in the mid-body region and moves in a direction approximately perpendicular to jets 1 and 2, across the direction of movement of the body. In this study, we used a mechanical controller to impulsively move passively flexible plastic panels of three different stiffnesses in heave, pitch, and heave + pitch motions to study the effects of stiffness on the unsteady hydrodynamics of escape. We were able to produce kinematics very similar to those of fish c-starts and also to reproduce the 3-jet hydrodynamic pattern of the c-start using a panel of medium flexural stiffness and the combined heave + pitch motion. This medium-stiffness panel matched the measured stiffness of the near-tail region of fish bodies. This motion also produced positive power when the panel straightened during Stage 2 of the escape response. More flexible and stiffer panels resulted in non-biological kinematics and patterns of flow for all motions. The use of simple flexible models with a mechanical controller and a program of fish-like motion is a promising approach.

  4. Development of a General Modelling Methodology for Vacuum Residue Hydroconversion

    Directory of Open Access Journals (Sweden)

    Pereira de Oliveira L.

    2013-11-01

    Full Text Available This work concerns the development of a methodology for kinetic modelling of refining processes, and more specifically for vacuum residue conversion. The proposed approach makes it possible to overcome the lack of molecular detail of the petroleum fractions and to simulate the transformation of the feedstock molecules into effluent molecules by means of a two-step procedure. In the first step, a synthetic mixture of molecules representing the feedstock for the process is generated via a molecular reconstruction method, termed SR-REM molecular reconstruction. In the second step, a kinetic Monte-Carlo method (kMC) is used to simulate the conversion reactions on this mixture of molecules. The molecular reconstruction was applied to several petroleum residues and is illustrated for an Athabasca (Canada) vacuum residue. The kinetic Monte-Carlo method is then described in detail. In order to validate this stochastic approach, a lumped deterministic model for vacuum residue conversion was simulated using Gillespie's Stochastic Simulation Algorithm. Despite the fact that both approaches are based on very different hypotheses, the stochastic simulation algorithm simulates the conversion reactions with the same accuracy as the deterministic approach. The full-scale stochastic simulation approach using molecular-level reaction pathways provides high amounts of detail on the effluent composition and is briefly illustrated for Athabasca VR hydrocracking.
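
    Gillespie's Stochastic Simulation Algorithm mentioned above is compact enough to sketch. The two-lump scheme and rate constants below are invented for illustration and are far simpler than the molecule-level network used in the paper.

        import math
        import random

        # Minimal Gillespie SSA for a toy two-lump conversion scheme:
        #   VR -> distillate (k1), distillate -> gas (k2); rates are invented.
        random.seed(42)
        k1, k2 = 0.02, 0.01            # first-order rate constants (1/s)
        vr, dist, gas = 1000, 0, 0     # molecule counts
        t, t_end = 0.0, 300.0

        while t < t_end:
            a1, a2 = k1 * vr, k2 * dist                  # channel propensities
            a0 = a1 + a2
            if a0 == 0.0:
                break                                    # no reactions left
            t += -math.log(1.0 - random.random()) / a0   # exponential waiting time
            if random.random() * a0 < a1:                # pick a channel with prob ~ propensity
                vr, dist = vr - 1, dist + 1
            else:
                dist, gas = dist - 1, gas + 1

        print(f"t = {t:.0f} s: VR = {vr}, distillate = {dist}, gas = {gas}")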

  5. An ABET assessment model using Six Sigma methodology

    Science.gov (United States)

    Lalovic, Mira

    Technical fields are changing so rapidly that even the core of an engineering education must be constantly reevaluated. Today's graduates must give more dedication and, almost certainly, more importance to continued learning than to the mastery of specific technical concepts. Continued learning shapes a high-quality education, which is what an engineering college must offer its students. The question is how to guarantee the quality of education. In addition, the Accreditation Board for Engineering and Technology is asking that universities commit to continuous and comprehensive education, assuring the quality of the educational process. The research is focused on developing a generic assessment model for a college of engineering as an annual cycle that consists of a systematic assessment of every course in the program, followed by an assessment of the program and of the college as a whole using Six Sigma methodology. This unique approach to assessment in education will provide a college of engineering with valuable information regarding many important curriculum decisions in every accreditation cycle. The Industrial and Manufacturing Engineering (IME) Program in the College of Engineering at the University of Cincinnati will be used as a case example for a preliminary test of the generic model.

  6. SR-Site groundwater flow modelling methodology, setup and results

    Energy Technology Data Exchange (ETDEWEB)

    Selroos, Jan-Olof (Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden)); Follin, Sven (SF GeoLogic AB, Taeby (Sweden))

    2010-12-15

    As a part of the license application for a final repository for spent nuclear fuel at Forsmark, the Swedish Nuclear Fuel and Waste Management Company (SKB) has undertaken three groundwater flow modelling studies. These are performed within the SR-Site project and represent time periods with different climate conditions. The simulations carried out contribute to the overall evaluation of the repository design and long-term radiological safety. Three time periods are addressed: the Excavation and operational phases, the Initial period of temperate climate after closure, and the Remaining part of the reference glacial cycle. The present report is a synthesis of the background reports describing the modelling methodology, setup, and results. It is the primary reference for the conclusions drawn in a SR-Site specific context concerning groundwater flow during the three climate periods. These conclusions are not necessarily provided explicitly in the background reports, but are based on the results provided in these reports. The main results and comparisons presented in the present report are summarised in the SR-Site Main report.

  7. A methodology for ecosystem-scale modeling of selenium

    Science.gov (United States)

    Presser, T.S.; Luoma, S.N.

    2010-01-01

    The main route of exposure for selenium (Se) is dietary, yet regulations lack biologically based protocols for evaluations of risk. We propose here an ecosystem-scale model that conceptualizes and quantifies the variables that determine how Se is processed from water through diet to predators. This approach uses biogeochemical and physiological factors from laboratory and field studies and considers loading, speciation, transformation to particulate material, bioavailability, bioaccumulation in invertebrates, and trophic transfer to predators. Validation of the model is through data sets from 29 historic and recent field case studies of Se-exposed sites. The model links Se concentrations across media (water, particulate, tissue of different food web species). It can be used to forecast toxicity under different management or regulatory proposals or as a methodology for translating a fish-tissue (or other predator tissue) Se concentration guideline to a dissolved Se concentration. The model illustrates some critical aspects of implementing a tissue criterion: 1) the choice of fish species determines the food web through which Se should be modeled, 2) the choice of food web is critical because the particulate material to prey kinetics of bioaccumulation differs widely among invertebrates, 3) the characterization of the type and phase of particulate material is important to quantifying Se exposure to prey through the base of the food web, and 4) the metric describing partitioning between particulate material and dissolved Se concentrations allows determination of a site-specific dissolved Se concentration that would be responsible for that fish body burden in the specific environment. The linked approach illustrates that environmentally safe dissolved Se concentrations will differ among ecosystems depending on the ecological pathways and biogeochemical conditions in that system. Uncertainties and model sensitivities can be directly illustrated by varying exposure
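
    A hedged numerical sketch of the linked structure described above: dissolved Se is partitioned onto particulate material through a distribution coefficient Kd and then propagated through trophic transfer factors (TTFs). The coefficient values below are invented placeholders, not values from the 29 case studies.

        # Illustrative Se transfer chain: water -> particulate -> invertebrate -> fish.
        # Coefficients are invented placeholders, not site-specific values.
        Kd = 1000.0        # particulate/dissolved partitioning (L/kg), assumed
        TTF_invert = 5.0   # invertebrate trophic transfer factor, assumed
        TTF_fish = 1.1     # fish trophic transfer factor, assumed

        def fish_tissue_se(c_water_ug_per_L):
            """Forward model: dissolved Se (ug/L) -> fish tissue Se (ug/g dw)."""
            c_particulate = Kd * c_water_ug_per_L / 1000.0   # ug/g dry weight
            return TTF_fish * TTF_invert * c_particulate

        def allowed_water_se(tissue_guideline_ug_per_g):
            """Inverse: translate a fish-tissue guideline to a dissolved Se limit."""
            return tissue_guideline_ug_per_g * 1000.0 / (Kd * TTF_invert * TTF_fish)

        print(fish_tissue_se(1.0), allowed_water_se(8.0))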

  8. A dynamic model of a vapor compression cycle with shut-down and start-up operations

    Energy Technology Data Exchange (ETDEWEB)

    Li, Bin; Alleyne, Andrew G. [Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, 1206 West Green Street, MC-244, Urbana, IL 61801 (United States)

    2010-05-15

    This paper presents an advanced switched modeling approach for vapor compression cycle (VCC) systems used in air conditioning and refrigeration. Building upon recent work, a complete dynamic VCC model is presented that is able to describe the severe transient behaviors in the heat exchangers (condenser/evaporator), while maintaining the moving-boundary framework, under compressor shut-down and start-up operations. The heat exchanger models retain a constant structure but accommodate different model representations. Novel switching schemes between the different representations and pseudo-state variables are introduced to accommodate the transitions of the dynamic states in the heat exchangers while keeping track of the vapor and liquid refrigerant zones during the stop-start transients. Two model validation studies on an experimental system show that the complete dynamic model, developed in Matlab/Simulink, predicts the system dynamics well during shut-down and start-up transients. (author)

  9. High-Fidelity Modelling Methodology of Light-Limited Photosynthetic Production in Microalgae.

    Directory of Open Access Journals (Sweden)

    Andrea Bernardi

    Full Text Available Reliable quantitative description of light-limited growth in microalgae is key to improving the design and operation of industrial production systems. This article shows how the capability to predict photosynthetic processes can benefit from a synergy between mathematical modelling and lab-scale experiments using systematic design-of-experiment techniques. A model of chlorophyll fluorescence developed by the authors [Nikolaou et al., J Biotechnol 194:91-99, 2015] is used as the starting point, whereby the representation of the non-photochemical quenching (NPQ) process is refined for biological consistency. This model spans multiple time scales ranging from milliseconds to hours, thus calling for a combination of various experimental techniques in order to arrive at a sufficiently rich data set and determine statistically meaningful estimates for the model parameters. The methodology is demonstrated for the microalga Nannochloropsis gaditana by combining pulse amplitude modulation (PAM) fluorescence, photosynthesis rate and antenna size measurements. The results show that the calibrated model is capable of accurate quantitative predictions under a wide range of transient light conditions. Moreover, this work provides an experimental validation of the link between fluorescence and photosynthesis-irradiance (PI) curves which had previously been theorized.

  10. Methodologies for modelling energy and amino acid responses in poultry

    Directory of Open Access Journals (Sweden)

    Robert Mervyn Gous

    2007-07-01

    Full Text Available The objective of this paper is to present some of the issues faced by those whose interest is to predict responses in poultry, concentrating mainly on those related to the prediction of voluntary food intake, as this should be the basis of models designed to optimise both performance and feeding programmes. The value of models designed to predict growth or reproductive performance has been improved inestimably by making food intake an output from, as opposed to an input to, such models. Predicting voluntary food intake requires the potential of the bird to be known, be this the growth of body protein or lipid, the growth of feather protein, or the rate at which yolk and albumen may be deposited daily in the form of an egg, and some of the issues relating to the description of potentials are discussed. This potential defines the nutrients that would be required by the bird on the day, which can be converted to a desired food intake by dividing each requirement by the content of that nutrient in the feed. There will be occasions when the bird will be unable to consume what is required, and predicting the magnitude of these constraints on intake and performance provides the greatest challenge for modellers. This paper concentrates on some issues raised in defining the nutrient requirements of an individual, on constraints on voluntary food intake such as high temperatures and the social and infectious environment, on some recent differences in the response to dietary protein that have been observed between the major broiler strains, on the methodologies used to deal with populations of birds, and finally on broiler breeder hens, whose food intake is constrained by management, not by the environment. These issues suggest that there are still challenges ahead for those wishing to predict responses to nutrients in poultry. It is imperative, however, that the methods used to measure the numbers that make theories work, and that the
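
    The conversion from nutrient requirements to a desired food intake described above reduces to a one-line computation, sketched below with hypothetical requirement and feed-composition figures.

        # Desired feed intake: divide each daily nutrient requirement by that
        # nutrient's content in the feed; the first-limiting nutrient sets intake.
        # All figures are hypothetical.
        requirements = {"lysine_g": 1.10, "energy_MJ": 1.45}   # per bird per day
        feed_content = {"lysine_g": 11.0, "energy_MJ": 12.5}   # per kg of feed

        intakes = {n: requirements[n] / feed_content[n] for n in requirements}
        limiting = max(intakes, key=intakes.get)
        print(f"desired intake: {intakes[limiting] * 1000:.0f} g/day (set by {limiting})")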

  11. Methodological improvements of geoid modelling for the Austrian geoid computation

    Science.gov (United States)

    Kühtreiber, Norbert; Pail, Roland; Wiesenhofer, Bernadette; Pock, Christian; Wirnsberger, Harald; Hofmann-Wellenhof, Bernhard; Ullrich, Christian; Höggerl, Norbert; Ruess, Diethard; Imrek, Erich

    2010-05-01

    The geoid computation method of Least Squares Collocation (LSC) is usually applied in connection with the remove-restore technique. The basic idea is to remove, before applying LSC, not only the long-wavelength gravity field effect represented by the global gravity field model, but also the high-frequency signals, which are mainly related to topography, by applying a topographic-isostatic reduction. In the current Austrian geoid solution, an Airy-Heiskanen model with a standard density of 2670 kg/m3 was used. A close investigation of the absolute error structure of this solution reveals some correlations with topography, which may be explained by these simplified assumptions. One parameter of the remove-restore process to be investigated in this work is the depth of the reference surface of isostatic compensation, the Mohorovicic discontinuity (Moho). The recently compiled European plate Moho depth model, which is based on 3D seismic tomography and other geophysical measurements, is used instead of the reference surface derived from the Airy-Heiskanen isostatic model. Additionally, the standard density of 2670 kg/m3 is replaced by a laterally variable (surface) density model. The impact of these two significant modifications of the geophysical conception of the remove-restore procedure on the Austrian geoid solution is investigated and analyzed in detail. In the current Austrian geoid solution the above-described remove-restore concept was used in a first step to derive a pure gravimetric geoid and to predict the geoid height for 161 GPS/levelling points. The difference between measured and predicted geoid heights shows a long-wavelength structure. These systematic distortions are commonly attributed to inconsistencies in the datum, distortions of the orthometric height system, and systematic GPS errors. In order to cope with this systematic term, a polynomial of degree 3 was fitted to the difference of predicted geoid heights and GPS
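
    The degree-3 polynomial fit mentioned at the end of the abstract can be reproduced in spirit with an ordinary least-squares surface fit; the coordinates and misfits below are synthetic stand-ins for the 161 GPS/levelling points.

        import numpy as np

        # Fit a degree-3 polynomial surface to (synthetic) differences between
        # GPS/levelling-derived and gravimetrically predicted geoid heights.
        rng = np.random.default_rng(0)
        lon = rng.uniform(9.5, 17.2, 161)       # synthetic station longitudes
        lat = rng.uniform(46.4, 49.0, 161)      # synthetic station latitudes
        dN = 0.3 + 0.02 * lon - 0.05 * lat + 0.01 * rng.standard_normal(161)

        # Design matrix with all monomials lon**i * lat**j for i + j <= 3.
        terms = [(i, j) for i in range(4) for j in range(4) if i + j <= 3]
        A = np.column_stack([lon**i * lat**j for i, j in terms])
        coeff, *_ = np.linalg.lstsq(A, dN, rcond=None)

        trend = A @ coeff
        print(f"RMS before/after de-trending: {np.std(dN):.3f} / {np.std(dN - trend):.3f} m")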

  12. The Starting Early Starting Smart Integrated Services Model: Improving Access to Behavioral Health Services in the Pediatric Health Care Setting for At-Risk Families with Young Children

    Science.gov (United States)

    Morrow, Connie E.; Mansoor, Elana; Hanson, K. Lori; Vogel, April L.; Rose-Jacobs, Ruth; Genatossio, Carolyn Seval; Windham, Amy; Bandstra, Emmalee S.

    2010-01-01

    We evaluated the Starting Early Starting Smart (SESS) national initiative to integrate behavioral health services (parenting, mental health, and drug treatment) into the pediatric health care setting for families with young children. Data are presented from five pediatric care (PC) sites, drawing from families at risk due to demographic and…

  13. Evaluating Supply Chain Management: A Methodology Based on a Theoretical Model

    OpenAIRE

    Alexandre Tadeu Simon; Luiz Carlos Di Serio; Silvio Roberto Ignacio Pires; Guilherme Silveira Martins

    2015-01-01

    Despite the increasing interest in supply chain management (SCM) by researchers and practitioners, there is still a lack of academic literature concerning topics such as methodologies to guide and support SCM evaluation. Most developed methodologies have been provided by consulting companies and are restricted in their publication and use. This article presents a methodology for evaluating companies’ degree of adherence to a SCM conceptual model. The methodology is based on Cooper, Lambert an...

  14. High level models and methodologies for information systems

    CERN Document Server

    Isaias, Pedro

    2014-01-01

    This book introduces methods and methodologies in Information Systems (IS) by presenting, describing, explaining, and illustrating their uses in various contexts, including website development, usability evaluation, quality evaluation, and success assessment.

  15. Modeling and Architectural Design in Agile Development Methodologies

    NARCIS (Netherlands)

    Stojanovic, Z.; Dahanayake, A.; Sol, H

    2003-01-01

    Agile Development Methodologies have been designed to address the problem of delivering high-quality software on time under constantly and rapidly changing requirements in business and IT environments. Agile development processes are characterized by extensive coding practice, intensive communicatio

  16. Reference Management Methodologies for Large Structural Models at Kennedy Space Center

    Science.gov (United States)

    Jones, Corey; Bingham, Ryan; Schmidt, Rick

    2011-01-01

    There have been many challenges associated with modeling some of NASA KSC's largest structures. Given the size of the welded structures here at KSC, it was critically important to properly organize the model structure and carefully manage references. Additionally, because of the amount of hardware to be installed on these structures, it was very important to have a means to coordinate between different design teams and organizations, check for interferences, produce consistent drawings, and allow for simple release processes. Facing these challenges, the modeling team developed a unique reference management methodology and model fidelity methodology. This presentation will describe the techniques and methodologies that were developed for these projects. The attendees will learn about KSC's reference management and model fidelity methodologies for large structures. The attendees will understand the goals of these methodologies. The attendees will appreciate the advantages of developing a reference management methodology.

  17. A Mathematical Model for the Starting Process of a Transonic Ludwieg Tube Wind Tunnel

    Science.gov (United States)

    1976-06-01

    ...to achieve various start times can be determined. In addition, the sensitivity of the start time to the shape of the area-time curves...

  18. Natural gas production problems : solutions, methodologies, and modeling.

    Energy Technology Data Exchange (ETDEWEB)

    Rautman, Christopher Arthur; Herrin, James M.; Cooper, Scott Patrick; Basinski, Paul M. (El Paso Production Company, Houston, TX); Olsson, William Arthur; Arnold, Bill Walter; Broadhead, Ronald F. (New Mexico Bureau of Geology and Mineral Resources, Socorro, NM); Knight, Connie D. (Consulting Geologist, Golden, CO); Keefe, Russell G.; McKinney, Curt (Devon Energy Corporation, Oklahoma City, OK); Holm, Gus (Vermejo Park Ranch, Raton, NM); Holland, John F.; Larson, Rich (Vermejo Park Ranch, Raton, NM); Engler, Thomas W. (New Mexico Institute of Mining and Technology, Socorro, NM); Lorenz, John Clay

    2004-10-01

    Natural gas is a clean fuel that will be the most important domestic energy resource for the first half of the 21st century. Ensuring a stable supply is essential for our national energy security. The research we have undertaken will maximize the extractable volume of gas while minimizing the environmental impact of surface disturbances associated with drilling and production. This report describes a methodology for comprehensive evaluation and modeling of the total gas system within a basin, focusing on problematic horizontal fluid flow variability. This has been accomplished through extensive use of geophysical, core (rock sample) and outcrop data to interpret and predict directional flow and production trends. Side benefits include the reduced environmental impact of drilling due to the reduced number of wells required for resource extraction. These results have been accomplished through a cooperative and integrated systems approach involving industry, government, academia and a multi-organizational team within Sandia National Laboratories. Industry has provided essential in-kind support to this project in the form of extensive core data, production data, maps, seismic data, production analyses, engineering studies, plus equipment and staff for obtaining geophysical data. This approach provides innovative ideas and technologies to bring new resources to market and to reduce the overall environmental impact of drilling. More importantly, the products of this research are not location specific but can be extended to other areas of gas production throughout the Rocky Mountain area. Thus this project is designed to solve problems associated with natural gas production at developing sites, or at old sites under redevelopment.

  19. ChOrDa: a methodology for the modeling of business processes with BPMN

    CERN Document Server

    Buferli, Matteo; Montesi, Danilo

    2009-01-01

    In this paper we present a modeling methodology for BPMN, the standard notation for the representation of business processes. Our methodology simplifies the development of collaborative BPMN diagrams, enabling the automated creation of skeleton process diagrams representing complex choreographies. To evaluate and tune the methodology, we have developed a supporting tool, which we apply to the modeling of an international patenting process as a working example.

  20. HIERARCHICAL METHODOLOGY FOR MODELING HYDROGEN STORAGE SYSTEMS PART II: DETAILED MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Hardy, B; Donald L. Anton, D

    2008-12-22

    There is significant interest in hydrogen storage systems that employ a medium which either adsorbs, absorbs or reacts with hydrogen in a nearly reversible manner. In any media-based storage system the rate of hydrogen uptake and the system capacity are governed by a number of complex, coupled physical processes. To design and evaluate such storage systems, a comprehensive methodology was developed, consisting of a hierarchical sequence of models that range from scoping calculations to numerical models that couple reaction kinetics with heat and mass transfer for both the hydrogen charging and discharging phases. The scoping models were presented in Part I [1] of this two-part series of papers. This paper describes a detailed numerical model that integrates the phenomena occurring when hydrogen is charged and discharged. A specific application of the methodology is made to a system using NaAlH4 as the storage medium.
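
    A minimal stand-in for the kind of coupled model described above: a single Arrhenius-type uptake rate integrated against a lumped energy balance. The kinetic and thermal parameters are generic placeholders, not fitted NaAlH4 values.

        from math import exp
        from scipy.integrate import solve_ivp

        # Toy coupled kinetics / heat balance for hydrogen uptake in a storage bed.
        # Parameters are generic placeholders, not fitted NaAlH4 values.
        A, Ea, R = 3.0e5, 5.0e4, 8.314     # pre-exponential (1/s), activation energy (J/mol)
        dH = -2.0e4                        # reaction enthalpy (J/mol H2), exothermic
        mcp, hA, Tc = 1.0e4, 100.0, 300.0  # bed heat capacity (J/K), cooling (W/K), coolant (K)
        n_max = 50.0                       # saturation capacity (mol H2), assumed

        def rhs(t, y):
            n, T = y                                     # absorbed H2 (mol), bed temp (K)
            rate = A * exp(-Ea / (R * T)) * (n_max - n)  # uptake slows near saturation
            dT = (-dH * rate - hA * (T - Tc)) / mcp      # reaction heat vs. cooling
            return [rate, dT]

        sol = solve_ivp(rhs, (0.0, 1800.0), [0.0, 300.0], max_step=1.0)
        print(f"loaded {sol.y[0, -1]:.1f} mol H2, final bed T = {sol.y[1, -1]:.0f} K")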

  1. A Model-Based Systems Engineering Methodology for Employing Architecture In System Analysis: Developing Simulation Models Using Systems Modeling Language Products to Link Architecture and Analysis

    Science.gov (United States)

    2016-06-01

    ...to model-based systems engineering (MBSE) by formally defining an MBSE methodology for employing architecture in system analysis (MEASA) that presents...

  2. Models of Emotion Skills and Social Competence in the Head Start Classroom

    Science.gov (United States)

    Spritz, Becky L.; Sandberg, Elisabeth Hollister; Maher, Edward; Zajdel, Ruth T.

    2010-01-01

    Research Findings: Fostering the social competence of at-risk preschoolers would be facilitated by knowing which of children's emotion skills are most salient to social outcomes. We examined the emotion skills and social competence of 44 children enrolled in a Head Start program. Emotion skills were examined in terms of children's emotional…

  3. A barrier for low frequency noise from starting aircraft: comparison between numerical and scale model results

    NARCIS (Netherlands)

    Bosschaart, C.; Eisses, A.R.; Eerden, F.J.M. van der

    2010-01-01

    Amsterdam Airport Schiphol has organized a competition to design a noise mitigating measure along the 'Polderbaan' runway. Its main goal is to reduce the low frequency (LF) ground noise from starting aircraft. The winning concept is a flexible parabolic shaped noise barrier positioned relatively clo

  4. Parenting Classes, Parenting Behavior, and Child Cognitive Development in Early Head Start: A Longitudinal Model

    Science.gov (United States)

    Chang, Mido; Park, Boyoung; Kim, Sunha

    2009-01-01

    This study analyzed Early Head Start Research and Evaluation (EHSRE) study data, examining the effect of parenting classes on parenting behaviors and children's cognitive outcomes. The study analyzed three sets of dependent variables: parental language and cognitive stimulation, parent-child interactive activities, and the Bayley Mental…

  5. Top-down methodology for rainfall-runoff modelling and evaluation of hydrological extremes

    Science.gov (United States)

    Willems, Patrick

    2014-05-01

    A top-down methodology is presented for the implementation and calibration of a lumped conceptual catchment rainfall-runoff model that aims to produce high model performance (depending on the quality and availability of data) in terms of rainfall-runoff discharges over the full range from low to high discharges, including the peak and low flow extremes. The model is to be used to support water engineering applications, which most often deal with high and low flows as well as cumulative runoff volumes. With this application in mind, the paper aims to contribute to the above-mentioned problems and advancements in model evaluation and model-structure selection, to the overparameterization problem, and to the long time a modeller needs to invest, and the difficulties one encounters, when building and calibrating a lumped conceptual model for a river catchment. The methodology is an empirical and step-wise technique that examines the various model components step by step through a data-based analysis of response characteristics. The approach starts from a generalized lumped conceptual model structure, in which only the general components of a lumped conceptual model, such as the existence of storage and routing elements and their inter-links, are pre-defined. The detailed specifications of model equations and parameters are supported by advanced time series analysis of the empirical response between the rainfall and evapotranspiration inputs and the river flow output. Subresponses are separated, and submodel components and related subsets of parameters are calibrated as independently as possible. At the same time, the model-structure identification process aims to reach parsimonious submodel structures and accounts for the serial dependency of runoff values, which typically is higher for low flows than for high flows. It also accounts for the heteroscedasticity and dependency of model residuals when evaluating the model performance. It is shown that this step
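
    One of the storage-and-routing building blocks referred to above can be sketched as a single linear reservoir; the rainfall series and the recession constant below are invented.

        import numpy as np

        # Single linear reservoir: dS/dt = P - Q with Q = S / k,
        # stepped with the exact solution for piecewise-constant input.
        k = 40.0                      # recession constant (h), calibrated in practice
        dt = 1.0                      # time step (h)
        rain = np.zeros(200)
        rain[10:20] = 4.0             # synthetic storm block (mm/h)

        S, Q = 0.0, []
        for P in rain:
            S = S * np.exp(-dt / k) + P * k * (1.0 - np.exp(-dt / k))
            Q.append(S / k)           # outflow (mm/h)

        print(f"peak flow {max(Q):.2f} mm/h at t = {int(np.argmax(Q))} h")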

  6. Early Start Denver Model - intervention for the very youngest children with autism

    DEFF Research Database (Denmark)

    Brynskov, Cecilia

    2015-01-01

    The Early Start Denver Model (ESDM) is an autism-specific intervention method developed for very young children with autism (0-4 years). The method focuses on strengthening early contact and the child's motivation, and it works in a targeted way with the socio-communicative precursors of language and with the early

  7. Evaluating Biosphere Model Estimates of the Start of the Vegetation Active Season in Boreal Forests by Satellite Observations

    Directory of Open Access Journals (Sweden)

    Kristin Böttcher

    2016-07-01

    Full Text Available The objective of this study was to assess the performance of the simulated start of the photosynthetically active season by a large-scale biosphere model in boreal forests in Finland against remote sensing observations. The start of season for two forest types, evergreen needle-leaf and deciduous broad-leaf, was obtained for the period 2003-2011 from regional JSBACH (Jena Scheme for Biosphere-Atmosphere Hamburg) runs, driven with climate variables from a regional climate model. The satellite-derived start of season was determined from daily Moderate Resolution Imaging Spectroradiometer (MODIS) time series of Fractional Snow Cover and the Normalized Difference Water Index by applying methods that were targeted to the two forest types. The accuracy of the satellite-derived start of season in deciduous forest was assessed with bud-break observations of birch, and a root mean square error of seven days was obtained. The evaluation of JSBACH-modelled start of season dates with satellite observations revealed high spatial correspondence. The bias was less than five days for both forest types but showed regional differences that need further consideration. The agreement with satellite observations was slightly better for the evergreen than for the deciduous forest. Nonetheless, comparison with gross primary production (GPP) determined from CO2 flux measurements at two eddy covariance sites in evergreen forest revealed that the JSBACH-simulated GPP was higher in early spring and led to too-early simulated start of season dates. Photosynthetic activity recovers differently in evergreen and deciduous forests. While for the deciduous forest calibration of phenology alone could improve the performance of JSBACH, for the evergreen forest changes such as a seasonal temperature response would need to be introduced into the photosynthetic capacity to improve the temporal development of gross primary production.

  8. A Warm-Started Homogeneous and Self-Dual Interior-Point Method for Linear Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Skajaa, Anders; Frison, Gianluca

    2013-01-01

    In this paper, we present a warm-started homogeneous and self-dual interior-point method (IPM) for the linear programs arising in economic model predictive control (MPC) of linear systems. To exploit the structure in the optimization problems, our algorithm utilizes a Riccati iteration procedure. We implement the algorithm in MATLAB and analyze its performance based on a smart grid power management case study. Closed-loop simulations show that 1) our algorithm is significantly faster than state-of-the-art IPMs based on sparse linear algebra routines, and 2) warm-starting reduces the number of iterations.

  9. Multidimensional Modeling of Fuel Composition Effects on Combustion and Cold-starting in Diesel Engines

    Science.gov (United States)

    1995-01-01

    ...equally important for both the gas and liquid phases. For the gas phase, a modified Redlich-Kwong equation of state is used (Prausnitz [10]). ... residual fuel mass (case 9). Ignition started early but the combustion developed at a slower rate. Another application of an altered engine geometry...
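
    For reference, the unmodified Redlich-Kwong equation of state underlying the fragment above has the standard form P = RT/(V - b) - a/(sqrt(T) V (V + b)), with a and b fixed by the critical constants. The sketch below uses rough n-dodecane critical constants as a stand-in for a diesel fuel component.

        import math

        # Standard Redlich-Kwong EOS; Tc, Pc are rough n-dodecane values used
        # only as placeholders for a diesel-like fuel component.
        R = 8.314                  # J/(mol K)
        Tc, Pc = 658.0, 1.82e6     # critical temperature (K) and pressure (Pa)
        a = 0.42748 * R**2 * Tc**2.5 / Pc
        b = 0.08664 * R * Tc / Pc

        def pressure(T, V):
            """RK pressure (Pa) at temperature T (K) and molar volume V (m3/mol)."""
            return R * T / (V - b) - a / (math.sqrt(T) * V * (V + b))

        print(f"{pressure(500.0, 2.0e-3):.3e} Pa")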

  10. Validation of multi-body modelling methodology for reconfigurable underwater robots

    DEFF Research Database (Denmark)

    Nielsen, M.C.; Eidsvik, O. A.; Blanke, Mogens;

    2016-01-01

    This paper investigates the problem of employing reconfigurable robots in an underwater setting. The main result presented is the experimental validation of a modelling methodology for a system consisting of N dynamically connected robots with heterogeneous dynamics. Two distinct types of experiments are performed: a series of hydrostatic free-decay tests and a series of open-loop trajectory tests. The results are compared to a simulation based on the modelling methodology, which shows promising results for use with systems composed of reconfigurable underwater modules. The purpose of the model is to enable the design of control strategies for cooperative reconfigurable underwater systems.

  11. The simulation of start-up of natural circulation boiler based on the Astrom-Bell model

    Science.gov (United States)

    Zhang, Tianyu; Zhao, Zhenning; Li, Yuanyuan; Zhu, Xianran

    2017-01-01

    This paper presents a numerical investigation of the dynamic behavior of the steam and water system of the natural circulation boiler SHL35.2.5/AI using the software MATLAB/SIMULINK. Based on the fourth-order Astrom-Bell model, a model applicable to this specific boiler was established, examining the changes in parameters under design, cold-start and varying-load conditions. A cold-start curve is obtained, which can serve as a reference for practical operation. In addition, under varying load, the model captured the phenomenon of false water level, and a corresponding analysis is made. Our study introduces a feasible method for simulating the dynamic behavior of the steam and water systems of other boilers as well.
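
    As a drastically reduced illustration of this kind of simulation (the full Astrom-Bell model carries four states and steam-table property derivatives, which are needed to capture the false-water-level, i.e. shrink-and-swell, effect), the sketch below integrates a single drum-pressure state from a global energy balance. Every coefficient is a rough placeholder.

        from scipy.integrate import solve_ivp

        # One-state drum-pressure sketch driven by the global energy balance.
        # All coefficients are rough placeholders, not SHL35.2.5/AI data.
        e_p = 2.0e9                        # energy storage coefficient (J/bar), assumed
        hf, hw, hs = 0.6e6, 1.0e6, 2.8e6   # feedwater / sat. water / steam enthalpy (J/kg)
        k_valve = 0.7                      # steam valve coefficient (kg/(s*bar)), assumed

        def rhs(t, y):
            p = y[0]
            Q = 40e6 if t > 60.0 else 10e6   # firing-rate step during start-up (W)
            qs = k_valve * p                 # steam flow rises with drum pressure
            qf = qs                          # feedwater tracks steam flow (ideal level control)
            dp = (Q + qf * (hf - hw) - qs * (hs - hw)) / e_p
            return [dp]

        sol = solve_ivp(rhs, (0.0, 3600.0), [1.0], max_step=5.0)
        print(f"drum pressure after 1 h: {sol.y[0, -1]:.1f} bar")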

  12. Synthesis of semantic modelling and risk analysis methodology applied to animal welfare

    NARCIS (Netherlands)

    Bracke, M.B.M.; Edwards, S.A.; Metz, J.H.M.; Noordhuizen, J.P.T.M.; Algers, B.

    2008-01-01

    Decision-making on animal welfare issues requires a synthesis of information. For the assessment of farm animal welfare based on scientific information collected in a database, a methodology called 'semantic modelling' has been developed. To date, however, this methodology has not been generally app

  13. Integrated methodology for constructing a quantified hydrodynamic model for application to clastic petroleum reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Honarpour, M. M.; Schatzinger, R. A.; Szpakiewicz, M. J.; Jackson, S. R.; Sharma, B.; Tomutsa, L.; Chang, M. M.

    1990-01-01

    A comprehensive, multidisciplinary, stepwise methodology is developed for constructing and integrating geological and engineering information for predicting petroleum reservoir performance. This methodology is based on our experience in characterizing shallow marine reservoirs, but it should also apply to other deposystems. The methodology is presented as Part 1 of this report. Three major tasks that must be studied to facilitate a systematic approach for constructing a predictive hydrodynamic model for petroleum reservoirs are addressed: (1) data collection, organization, evaluation, and integration; (2) hydrodynamic model construction and verification; and (3) prediction and ranking of reservoir parameters by numerical simulation using data derived from the model. 39 refs., 62 figs., 13 tabs.

  14. A cislunar guidance methodology and model for low thrust trajectory generation

    Science.gov (United States)

    Korsmeyer, David J.

    1992-01-01

    A guidance methodology for generating low-thrust cislunar trajectories was developed and incorporated in a computer model. The guidance methodology divides the cislunar transfer into three phases, each of which is discussed in turn. To establish the effectiveness of the methodology and algorithms, the computer model generated three example cases for the cislunar transfer of a low-thrust electric orbital transfer vehicle (EOTV). Transfers from both earth orbit to lunar orbit and from lunar orbit back to earth orbit are considered. The model allows the determination of the low-thrust EOTV's time-of-flight, propellant mass, payload mass, and thrusting history.
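
    A back-of-the-envelope version of the quantities the model reports: for a continuous low-thrust spiral between circular orbits, the delta-V is approximately the difference of the circular velocities, from which propellant mass and time of flight follow. The EOTV figures below are invented.

        import math

        # Low-thrust spiral estimate between circular orbits: dV ~ |v_circ0 - v_circ1|.
        # Vehicle numbers are invented for illustration.
        mu_earth = 3.986e14                           # m^3/s^2
        r0 = 6378e3 + 500e3                           # 500 km LEO (m)
        r1 = 0.9 * 384400e3                           # near-lunar-distance orbit (m)
        m0, F, Isp, g0 = 20000.0, 10.0, 3000.0, 9.81  # kg, N, s, m/s^2

        dv = math.sqrt(mu_earth / r0) - math.sqrt(mu_earth / r1)   # m/s
        m_prop = m0 * (1.0 - math.exp(-dv / (Isp * g0)))           # rocket equation
        t_burn = m_prop * Isp * g0 / F                             # impulse = F * t
        print(f"dV = {dv:.0f} m/s, propellant = {m_prop:.0f} kg, "
              f"time of flight = {t_burn / 86400:.0f} days")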

  15. Novel Methodology for Functional Modeling and Simulation of Wireless Embedded Systems

    Directory of Open Access Journals (Sweden)

    Nitasha Jugessur

    2008-07-01

    Full Text Available A novel methodology is presented for the modeling and the simulation of wireless embedded systems. Tight interaction between the analog and the digital functionality makes the design and verification of such systems a real challenge. The applied methodology brings together the functional models of the baseband algorithms written in C language with the circuit descriptions at behavioral level in Verilog or Verilog-AMS for the system simulations in a single kernel environment. The physical layer of an ultrawideband system has been successfully modeled and simulated. The results confirm that this methodology provides a standardized framework in order to efficiently and accurately simulate complex mixed signal applications for embedded systems.

  16. Interval Methods for Model Qualification: Methodology and Advanced Application

    OpenAIRE

    Alexandre dit Sandretto, Julien; Trombettoni, Gilles; Daney, David

    2012-01-01

    An actual model is often too complex to use in simulation or control applications, and is sometimes impossible to obtain. To handle a system in practice, a simplification of the real model is then necessary. This simplification rests on hypotheses made about the system or the modeling approach. In this paper, we deal with all models that can be expressed by real-valued variables involved in analytical relations and depending on parameters. We propose a method that qualifies the simplificatio...

  17. International orientation on methodologies for modelling developments in road safety.

    NARCIS (Netherlands)

    Reurings, M.C.B. & Commandeur, J.J.F.

    2007-01-01

    This report gives an overview of the models developed in countries other than the Netherlands to evaluate past developments in road traffic safety and to obtain estimates of these developments in the future. These models include classical linear regression and loglinear models as applied in Great Br

  18. Spreadsheets Grow Up: Three Spreadsheet Engineering Methodologies for Large Financial Planning Models

    CERN Document Server

    Grossman, Thomas A

    2010-01-01

    Many large financial planning models are written in a spreadsheet programming language (usually Microsoft Excel) and deployed as a spreadsheet application. Three groups, FAST Alliance, Operis Group, and BPM Analytics (under the name "Spreadsheet Standards Review Board") have independently promulgated standardized processes for efficiently building such models. These spreadsheet engineering methodologies provide detailed guidance on design, construction process, and quality control. We summarize and compare these methodologies. They share many design practices, and standardized, mechanistic procedures to construct spreadsheets. We learned that a written book or standards document is by itself insufficient to understand a methodology. These methodologies represent a professionalization of spreadsheet programming, and can provide a means to debug a spreadsheet that contains errors. We find credible the assertion that these spreadsheet engineering methodologies provide enhanced productivity, accuracy and maintain...

  19. Object-oriented analysis and design: a methodology for modeling the computer-based patient record.

    Science.gov (United States)

    Egyhazy, C J; Eyestone, S M; Martino, J; Hodgson, C L

    1998-08-01

    The article highlights the importance of an object-oriented analysis and design (OOAD) methodology for the computer-based patient record (CPR) in the military environment. Many OOAD methodologies do not adequately scale up, allow for efficient reuse of their products, or accommodate legacy systems. A methodology that addresses these issues is formulated and used to demonstrate its applicability in a large-scale health care service system. During a period of 6 months, a team of object modelers and domain experts formulated an OOAD methodology tailored to the Department of Defense Military Health System and used it to produce components of an object model for simple order processing. This methodology and the lessons learned during its implementation are described. This approach is necessary to achieve broad interoperability among heterogeneous automated information systems.

  20. Testing for Equivalence: A Methodology for Computational Cognitive Modelling

    Science.gov (United States)

    Stewart, Terrence; West, Robert

    2010-12-01

    The equivalence test (Stewart and West, 2007; Stewart, 2007) is a statistical measure for evaluating the similarity between a model and the system being modelled. It is designed to avoid over-fitting and to generate an easily interpretable summary of the quality of a model. We apply the equivalence test to two tasks: Repeated Binary Choice (Erev et al., 2010) and Dynamic Stocks and Flows (Gonzalez and Dutt, 2007). In the first case, we find a broad range of statistically equivalent models (and win a prediction competition) while identifying particular aspects of the task that are not yet adequately captured. In the second case, we re-evaluate results from the Dynamic Stocks and Flows challenge, demonstrating how our method emphasizes the breadth of coverage of a model and how it can be used for comparing different models. We argue that the explanatory power of models hinges on numerical similarity to empirical data over a broad set of measures.
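
    The specific equivalence measure of Stewart and West differs in detail, but the standard two-one-sided-tests (TOST) procedure sketched below conveys the core idea: rather than failing to reject "model equals data", one tests whether the model-data discrepancy is provably inside a tolerance band. The data and tolerance here are synthetic.

        import numpy as np
        from scipy import stats

        # Two one-sided tests (TOST) for equivalence of paired model output and data.
        # Generic TOST, not necessarily the exact measure of Stewart and West;
        # the tolerance band `delta` is illustrative.
        def tost(model_runs, data, delta=0.05, alpha=0.05):
            diff = np.asarray(model_runs) - np.asarray(data)
            n = len(diff)
            se = diff.std(ddof=1) / np.sqrt(n)
            p_low = 1.0 - stats.t.cdf((diff.mean() + delta) / se, n - 1)  # H0: mean <= -delta
            p_high = stats.t.cdf((diff.mean() - delta) / se, n - 1)      # H0: mean >= +delta
            return max(p_low, p_high) < alpha   # equivalent within +/- delta?

        rng = np.random.default_rng(0)
        data = rng.normal(0.70, 0.05, 30)          # observed choice proportions (synthetic)
        model = data + rng.normal(0.0, 0.02, 30)   # paired model predictions (synthetic)
        print("equivalent within +/-0.05:", tost(model, data))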

  1. Mathematical modelling methodologies in predictive food microbiology: a SWOT analysis.

    Science.gov (United States)

    Ferrer, Jordi; Prats, Clara; López, Daniel; Vives-Rego, Josep

    2009-08-31

    Predictive microbiology is the area of food microbiology that attempts to forecast the quantitative evolution of microbial populations over time. This is achieved to a great extent through models that include the mechanisms governing population dynamics. Traditionally, the models used in predictive microbiology are whole-system continuous models that describe population dynamics by means of equations applied to extensive or averaged variables of the whole system. Many existing models can be classified by specific criteria. We can distinguish between survival and growth models by seeing whether they tackle mortality or cell duplication. We can distinguish between empirical (phenomenological) models, which mathematically describe specific behaviour, and theoretical (mechanistic) models with a biological basis, which search for the underlying mechanisms driving already observed phenomena. We can also distinguish between primary, secondary and tertiary models, by examining their treatment of the effects of external factors and constraints on the microbial community. Recently, the use of spatially explicit Individual-based Models (IbMs) has spread through predictive microbiology, due to the current technological capacity of performing measurements on single individual cells and thanks to the consolidation of computational modelling. Spatially explicit IbMs are bottom-up approaches to microbial communities that build bridges between the description of micro-organisms at the cell level and macroscopic observations at the population level. They provide greater insight into the mesoscale phenomena that link unicellular and population levels. Every model is built in response to a particular question and with different aims. Even so, in this research we conducted a SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis of the different approaches (population continuous modelling and Individual-based Modelling), which we hope will be helpful for current and future
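
    A toy individual-based model makes the bottom-up idea concrete: each cell carries its own state (here just mass), grows with individual variability, and divides at a threshold, so the population curve emerges from single-cell rules. All rates are arbitrary illustration values.

        import random

        # Toy individual-based model: each cell grows and divides at a mass threshold.
        # Rates and thresholds are arbitrary illustration values.
        random.seed(1)
        cells = [1.0] * 10                 # individual cell masses (arbitrary units)
        mu, m_div, dt = 0.02, 2.0, 1.0     # mean growth rate (1/min), division mass, step (min)

        for step in range(300):
            new = []
            for m in cells:
                m *= 1.0 + random.gauss(mu, 0.2 * mu) * dt   # individual variability
                if m >= m_div:
                    new += [m / 2.0, m / 2.0]                # binary fission
                else:
                    new.append(m)
            cells = new

        print("population after 300 min:", len(cells))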

  2. Reducing Maladaptive Behaviors in Preschool-Aged Children with Autism Spectrum Disorder Using the Early Start Denver Model

    OpenAIRE

    Fulton, Elizabeth,; Eapen, Valsamma; Črnčec, Rudi; Walter, Amelia; Rogers, Sally

    2014-01-01

    The presence of maladaptive behaviors in young people with autism spectrum disorder (ASD) can significantly limit engagement in treatment programs, as well as compromise future educational and vocational opportunities. This study aimed to explore whether the Early Start Denver Model (ESDM) treatment approach reduced maladaptive behaviors in preschool-aged children with ASD in a community-based long day care setting. The level of maladaptive behavior of 38 children with ASD was rated using an ...

  3. Experimental investigations of transient pressure variations in a high head model Francis turbine during start-up and shutdown

    Institute of Scientific and Technical Information of China (English)

    TRIVEDI Chirag; CERVANTES Michel J.; GANDHI B. K.; DAHLHAUG Ole G.

    2014-01-01

    The penetration of power generated from wind and solar energy into the electrical grid network has caused several incidents of grid tripping, power outages, and frequency droop. This has significantly increased the restart (start-stop) cycles of hydroelectric turbines, since grid-connected hydroelectric turbines are widely used to manage critical conditions of the grid. Each cycle induces significant stresses due to unsteady pressure loading on the runner blades. The presented work investigates the pressure loading on a high-head (HP = 377 m, DP = 1.78 m) Francis turbine during start-stop cycles. The measurements were carried out on a scaled model turbine (HM = 12.5 m, DM = 0.349 m). Four operating points were considered. At each operating point, three schemes of guide vane opening and three schemes of guide vane closing were investigated. The results show that the total head variation is up to 9% during start-stop of the turbine. On the runner blade, the maximum pressure amplitudes are about 14 kPa and 16 kPa from the instantaneous mean value of 121 kPa during rapid start-up and shutdown, respectively, which is about 1.5 times larger than for slow start-up and shutdown. Moreover, the maximum pressure fluctuations occur at the blade trailing edge.

  4. Towards a methodology for educational modelling: a case in educational assessment

    NARCIS (Netherlands)

    Giesbers, Bas; Van Bruggen, Jan; Hermans, Henry; Joosten-ten Brinke, Desirée; Burgers, Jan; Koper, Rob; Latour, Ignace

    2005-01-01

    Giesbers, B., van Bruggen, J., Hermans, H., Joosten-ten Brinke, D., Burgers, J., Koper, R., & Latour, I. (2007). Towards a methodology for educational modelling: a case in educational assessment. Educational Technology & Society, 10 (1), 237-247.

  5. A changing climate: impacts on human exposures to O3 using an integrated modeling methodology

    Science.gov (United States)

    Predicting the impacts of changing climate on human exposure to air pollution requires future scenarios that account for changes in ambient pollutant concentrations, population sizes and distributions, and housing stocks. An integrated methodology to model changes in human exposu...

  6. A holistic methodology for modeling consumer response to innovation.

    Science.gov (United States)

    Bagozzi, R P

    1983-01-01

    A general structural equation model for representing consumer response to innovation is derived and illustrated. The approach both complements and extends an earlier model proposed by Hauser and Urban. Among other benefits, the model is able to take measurement error into account explicitly, to estimate the intercorrelation among exogenous factors if these exist, to yield a unique solution in a statistical sense, and to test complex hypotheses (e.g., systems of relations, simultaneity, feedback) associated with the measurement of consumer responses and their impact on actual choice behavior. In addition, the procedures permit one to model environmental and managerially controllable stimuli as they constrain and influence consumer choice. Limitations of the procedures are discussed and related to existing approaches. Included in the discussion is a development of four generic response models designed to provide a framework for modeling how consumers behave and how managers might better approach the design of products, persuasive appeals, and other controllable factors in the marketing mix.

  7. Proposal for product development model focused on CE certification methodology

    Directory of Open Access Journals (Sweden)

    Nathalia Marcia Goulart Pinheiro

    2015-09-01

    Full Text Available This paper presents a critical analysis comparing 21 product development models in order to identify whether these structures meet the demands of Product Certification of the European Community (CE). Furthermore, it presents a product development model comprising the steps in the models analyzed, including improvements in the activities required for the referred product certification. The proposed improvements are justified by the growing quest for the internationalization of products and processes within companies.

  8. Methodology for Modeling Building Energy Performance across the Commercial Sector

    Energy Technology Data Exchange (ETDEWEB)

    Griffith, B.; Long, N.; Torcellini, P.; Judkoff, R.; Crawley, D.; Ryan, J.

    2008-03-01

    This report uses EnergyPlus simulations of each building in the 2003 Commercial Buildings Energy Consumption Survey (CBECS) to document and demonstrate bottom-up methods of modeling the entire U.S. commercial buildings sector (EIA 2006). The ability to use a whole-building simulation tool to model the entire sector is of interest because the energy models enable us to answer subsequent 'what-if' questions that involve technologies and practices related to energy. This report documents how the whole-building models were generated from the building characteristics in 2003 CBECS and compares the simulation results to the survey data for energy use.

  9. Methodologic model to scheduling on service systems: a software engineering approach

    Directory of Open Access Journals (Sweden)

    Eduyn Ramiro Lopez-Santana

    2016-06-01

    Full Text Available This paper presents a software engineering approach to a research proposal for an expert system for scheduling in service systems, using software development methodologies and processes. We use adaptive software development as the methodology for the software architecture, based on its description as a software metaprocess that characterizes the research process. We create UML (Unified Modeling Language) diagrams to provide a visual model that describes the research methodology and identifies the actors, elements and interactions in the research process.

  10. Modeling Epistemic and Ontological Cognition: Philosophical Perspectives and Methodological Directions

    Science.gov (United States)

    Greene, Jeffrey A.; Azevedo, Roger A.; Torney-Purta, Judith

    2008-01-01

    We propose an integration of aspects of several developmental and systems of beliefs models of personal epistemology. Qualitatively different positions, including realism, dogmatism, skepticism, and rationalism, are characterized according to individuals' beliefs across three dimensions in a model of epistemic and ontological cognition. This model…

  11. A methodology to calibrate pedestrian walker models using multiple objectives

    NARCIS (Netherlands)

    Campanella, M.C.; Daamen, W.; Hoogendoorn, S.P.

    2012-01-01

    The application of walker models to simulate real situations requires accuracy in several traffic situations. One strategy to obtain a generic model is to calibrate the parameters in several situations using multiple-objective functions in the optimization process. In this paper, we propose a general

  12. USEPA SHEDS MODEL: METHODOLOGY FOR EXPOSURE ASSESSMENT FOR WOOD PRESERVATIVES

    Science.gov (United States)

    A physically-based, Monte Carlo probabilistic model (SHEDS-Wood: Stochastic Human Exposure and Dose Simulation model for wood preservatives) has been applied to assess the exposure and dose of children to arsenic (As) and chromium (Cr) from contact with chromated copper arsenat...

  13. WRF Model Methodology for Offshore Wind Energy Applications

    Directory of Open Access Journals (Sweden)

    Evangelia-Maria Giannakopoulou

    2014-01-01

    Full Text Available Among the parameters that must be considered for an offshore wind farm development, the stability conditions of the marine atmospheric boundary layer (MABL) are of significant importance. Atmospheric stability is a vital parameter in wind resource assessment (WRA) due to its direct relation to wind and turbulence profiles. A better understanding of the stability conditions occurring offshore and of the interaction between the MABL and wind turbines is needed. Accurate simulations of the offshore wind and stability conditions using mesoscale modelling techniques can lead to a more precise WRA. However, the use of any mesoscale model for wind energy applications requires a proper validation process to understand the accuracy and limitations of the model. For this validation process, the Weather Research and Forecasting (WRF) model has been applied over the North Sea during March 2005. The sensitivity of the WRF model performance to the use of different horizontal resolutions, input datasets, PBL parameterisations, and nesting options was examined. Comparison of the model results with other modelling studies and with high quality observations recorded at the offshore measurement platform FINO1 showed that the ERA-Interim reanalysis data in combination with the 2.5-level MYNN PBL scheme satisfactorily simulate the MABL over the North Sea.

  14. Starting electronics

    CERN Document Server

    Brindley, Keith

    2005-01-01

    Starting Electronics is unrivalled as a highly practical introduction for hobbyists, students and technicians. Keith Brindley introduces readers to the functions of the main component types, their uses, and the basic principles of building and designing electronic circuits. Breadboard layouts make this very much a ready-to-run book for the experimenter; and the use of multimeter, but not oscilloscopes, puts this practical exploration of electronics within reach of every home enthusiast's pocket. The third edition has kept the simplicity and clarity of the original. New material

  15. Terminology and methodology in modelling for water quality management

    DEFF Research Database (Denmark)

    Carstensen, J.; Vanrolleghem, P.; Rauch, W.

    1997-01-01

    There is a widespread need for a common terminology in modelling for water quality management. This paper points out sources of confusion in the communication between researchers due to misuse of existing terminology or use of unclear terminology. The paper attempts to clarify the context...... of the most widely used terms for characterising models and within the process of model building. It is essential to the ever growing society of researchers within water quality management, that communication is eased by establishing a common terminology. This should not be done by giving broader definitions...

  17. Methodology and models in erosion research: discussion and conclusions

    National Research Council Canada - National Science Library

    Shellis, R P; Ganss, C; Ren, Y; Zero, D T; Lussi, A

    2011-01-01

    .... The prospects for clinical trials are also discussed. All models in erosion research require a number of choices regarding experimental conditions, study design and measurement techniques, and these general aspects are discussed first...

  18. Methodology for physical modeling of melter electrode power plug

    Energy Technology Data Exchange (ETDEWEB)

    Heath, W.O.

    1984-09-01

    A method is presented for building and testing a one-third scale model of an electrode power plug used to supply up to 3000 amperes to a liquid fed ceramic melter. The method describes how a one-third scale model can be used to verify the ampacity of the power plug, the effectiveness of the power plug cooling system and the effect of the high amperage current on eddy current heating of rebar in the cell wall. Scale-up of the test data, including cooling air flow rate and pressure drop, temperature profiles, melter water jacket heat duty and electrical resistance is covered. The materials required to build the scale model are specified as well as scale surface finish and dimensions. The method for designing and testing a model power plug involves developing a way to recreate the thermal conditions including heat sources, sinks and boundary temperatures on a scale basis. The major heat sources are the molten glass in contact with the electrode, joule heat generation within the power plug, and eddy current heating of the wall rebar. The melting cavity heat source is modelled using a plate heater to provide radiant heat transfer to a geometrically similar, one-third scale electrode housed in a scale model of a melting cavity having a thermally and geometrically similar wall and floor. The joule heat generation within the power plug is simulated by passing electricity through the model power plug with geometrically similar rebar positioned to simulate the eddy heating phenomenon. The proposed model also features two forced air cooling circuits similar to those on the full design. The interaction of convective, natural and radiant heat transfer in the wall cooling circuit are considered. The cell environment and a melter water jacket, along with the air cooling circuits, constitute the heat sinks and are also simulated.

  19. A Review of Kinetic Modeling Methodologies for Complex Processes

    Directory of Open Access Journals (Sweden)

    de Oliveira Luís P.

    2016-05-01

    Full Text Available In this paper, kinetic modeling techniques for complex chemical processes are reviewed. After a brief historical overview of chemical kinetics, an overview is given of the theoretical background of kinetic modeling of elementary steps and of multistep reactions. Classic lumping techniques are introduced and analyzed. Two examples of lumped kinetic models (atmospheric gasoil hydrotreating and residue hydroprocessing) developed at IFP Energies nouvelles (IFPEN) are presented. The largest part of this review describes advanced kinetic modeling strategies, in which the molecular detail is retained, i.e. the reactions are represented between molecules or even subdivided into elementary steps. To be able to retain this molecular level throughout the kinetic model and the reactor simulations, several hurdles have to be cleared first: (i) the feedstock needs to be described in terms of molecules, (ii) large reaction networks need to be automatically generated, and (iii) a large number of rate equations with their rate parameters need to be derived. For these three obstacles, molecular reconstruction techniques, deterministic or stochastic network generation programs, and single-event micro-kinetics and/or linear free energy relationships have been applied at IFPEN, as illustrated by several examples of kinetic models for industrial refining processes.
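
    As a toy illustration of the classic lumping idea (not one of the IFPEN models), the following sketch integrates a three-lump first-order network with invented rate constants:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Three-lump first-order network: feed A -> middle distillate B -> gas C.
# Rate constants are purely illustrative (1/h).
k1, k2 = 0.8, 0.3

def rhs(t, y):
    A, B, C = y
    return [-k1 * A, k1 * A - k2 * B, k2 * B]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0], dense_output=True)
for ti in np.linspace(0.0, 10.0, 6):
    A, B, C = sol.sol(ti)
    print(f"t = {ti:4.1f} h   A = {A:.3f}   B = {B:.3f}   C = {C:.3f}")
```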

  20. Systematic reviews of animal models: methodology versus epistemology.

    Science.gov (United States)

    Greek, Ray; Menache, Andre

    2013-01-01

    Systematic reviews are currently favored methods of evaluating research in order to reach conclusions regarding medical practice. The need for such reviews is necessitated by the fact that no research is perfect and experts are prone to bias. By combining many studies that fulfill specific criteria, one hopes that the strengths can be multiplied and thus reliable conclusions attained. Potential flaws in this process include the assumptions that underlie the research under examination. If the assumptions, or axioms, upon which the research studies are based, are untenable either scientifically or logically, then the results must be highly suspect regardless of the otherwise high quality of the studies or the systematic reviews. We outline recent criticisms of animal-based research, namely that animal models are failing to predict human responses. It is this failure that is purportedly being corrected via systematic reviews. We then examine the assumption that animal models can predict human outcomes to perturbations such as disease or drugs, even under the best of circumstances. We examine the use of animal models in light of empirical evidence comparing human outcomes to those from animal models, complexity theory, and evolutionary biology. We conclude that even if legitimate criticisms of animal models were addressed, through standardization of protocols and systematic reviews, the animal model would still fail as a predictive modality for human response to drugs and disease. Therefore, systematic reviews and meta-analyses of animal-based research are poor tools for attempting to reach conclusions regarding human interventions.

  1. Systematic Reviews of Animal Models: Methodology versus Epistemology

    Directory of Open Access Journals (Sweden)

    Ray Greek, Andre Menache

    2013-01-01

    Full Text Available Systematic reviews are currently favored methods of evaluating research in order to reach conclusions regarding medical practice. The need for such reviews is necessitated by the fact that no research is perfect and experts are prone to bias. By combining many studies that fulfill specific criteria, one hopes that the strengths can be multiplied and thus reliable conclusions attained. Potential flaws in this process include the assumptions that underlie the research under examination. If the assumptions, or axioms, upon which the research studies are based, are untenable either scientifically or logically, then the results must be highly suspect regardless of the otherwise high quality of the studies or the systematic reviews. We outline recent criticisms of animal-based research, namely that animal models are failing to predict human responses. It is this failure that is purportedly being corrected via systematic reviews. We then examine the assumption that animal models can predict human outcomes to perturbations such as disease or drugs, even under the best of circumstances. We examine the use of animal models in light of empirical evidence comparing human outcomes to those from animal models, complexity theory, and evolutionary biology. We conclude that even if legitimate criticisms of animal models were addressed, through standardization of protocols and systematic reviews, the animal model would still fail as a predictive modality for human response to drugs and disease. Therefore, systematic reviews and meta-analyses of animal-based research are poor tools for attempting to reach conclusions regarding human interventions.

  2. A Roadmap for Generating Semantically Enriched Building Models According to CityGML Model via Two Different Methodologies

    Science.gov (United States)

    Floros, G.; Solou, D.; Pispidikis, I.; Dimopoulou, E.

    2016-10-01

    The number of 3D modeling techniques has increased rapidly due to the rapid advances of new technologies. Nowadays, 3D modeling software focuses not only on the finest visualization of the models, but also on their semantic features during the modeling procedure. As a result, the models thus generated are both realistic and semantically enriched. Additionally, various extensions of modeling software allow for the immediate conversion of the model's format, via semi-automatic procedures, with respect to the user's scope. The aim of this paper is to investigate the generation of a semantically enriched CityGML building model via two different methodologies. The first methodology includes modeling in Trimble SketchUp and transformation in FME Desktop Manager, while the second includes the model's generation in CityEngine and its transformation into the CityGML format via the 3DCitiesProject extension for ArcGIS. Finally, the two aforesaid methodologies are compared and specific characteristics are evaluated, in order to infer which methodology is best applied depending on the project's purposes.

  3. 0-D modeling of SST-1 plasma break-down & start-up using ECRH assisted pre-ionization

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Aveg, E-mail: aveg@ipr.res.in; Pradhan, Subrata

    2016-04-15

    Highlights: • Steady state superconducting tokamak (SST-1). • Pre-ionization. • ECRH. • 0-D model. - Abstract: Electron cyclotron resonance (ECRH) assisted break-down and start-up is considered as useful tool towards the discharge initiation in superconducting tokamaks, where the vacuum vessels and the cryostats are usually electrically continuous with thick walls. ECH pre-ionizations are known to reduce the required central solenoid swing induced toroidal electric field, E significantly. The Steady state superconducting tokamak (SST-1, R = 1.1 m, a = 0.2 m) has achieved successful plasma break-down and subsequent current ramp-up with ECH pre-ionizations in both fundamental mode and second harmonic modes with E ∼ 0.35 V/m. This work has discussed an appropriate simulation model and validated its results with experiments for the ECRH assisted breakdown and start-up, for both 1st harmonic ordinary mode (O1) and 2nd harmonic extra-ordinary mode (X2), in SST-1 for hydrogen plasmas, where the loop voltage is limited to 0.35 V/m. The simulation model is a zero-dimensional (0-D) model. In this model five temporal equations are solved for spatially-uniform plasma. The primary findings of this investigation has been the determination of the threshold ECRH power for successful pre-ionization of plasma in SST-1 and validations of the results with experimental findings in SST-1.
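
    A heavily simplified caricature of such a 0-D balance (one electron-balance equation rather than the paper's five, with invented coefficients) shows how a threshold ECRH power emerges once the power-dependent ionization rate must exceed particle losses:

```python
import numpy as np

# Toy 0-D breakdown balance (illustrative only, not the paper's model):
#   dn_e/dt = S_ion(P_ECRH) * n_e - n_e / tau_loss
# Avalanche breakdown occurs when ionization outpaces particle losses.

tau_loss = 0.5                      # hypothetical particle loss time, ms

def ionization_rate(P_kW):
    return 8e-3 * P_kW              # hypothetical linear power scaling, 1/ms

for P in np.arange(100, 501, 100):
    net = ionization_rate(P) - 1.0 / tau_loss
    status = "breakdown" if net > 0 else "no breakdown"
    print(f"P_ECRH = {P:3.0f} kW: net growth rate {net:+.2f} /ms -> {status}")
```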

  4. An Analysis of Starting Points for Setting Up a Model of a More Reliable Ship Propulsion

    OpenAIRE

    Martinović, Dragan; Tudor, Mato; Bernečić, Dean

    2011-01-01

    This paper considers the requirements that ship propulsion must meet for immaculate operation, since any failure can endanger the ship and render it useless. Particular attention is given to the failure of auxiliary engines, which can also seriously jeopardise the safety of the ship. The paper therefore presents preliminary investigations for setting up models of reliable ship propulsion accounting for the failure of auxiliary engines. Models of most frequent implementations of e...

  5. Measuring Team Shared Understanding Using the Analysis-Constructed Shared Mental Model Methodology

    Science.gov (United States)

    Johnson, Tristan E.; O'Connor, Debra L.

    2008-01-01

    Teams are an essential part of successful performance in learning and work environments. Analysis-constructed shared mental model (ACSMM) methodology is a set of techniques where individual mental models are elicited and sharedness is determined not by the individuals who provided their mental models but by an analytical procedure. This method…

  6. A European test of pesticide-leaching models: methodology and major recommendations

    NARCIS (Netherlands)

    Vanclooster, M.; Boesten, J.J.T.I.; Trevisan, M.; Brown, C.D.; Capri, E.; Eklo, O.M.; Gottesbüren, B.; Gouy, V.; Linden, van der A.M.A.

    2000-01-01

    Testing of pesticide-leaching models is important in view of their increasing use in pesticide registration procedures in the European Union. This paper presents the methodology and major conclusions of a test of pesticide-leaching models. Twelve models simulating the vertical one-dimensional moveme

  7. A branch-and-bound methodology within algebraic modelling systems

    NARCIS (Netherlands)

    Bisschop, J.J.; Heerink, J.B.J.; Kloosterman, G.

    1998-01-01

    Through the use of application-specific branch-and-bound directives it is possible to find solutions to combinatorial models that would otherwise be difficult or impossible to find by just using generic branch-and-bound techniques within the framework of mathematical programming. MINTO is an e
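
    For readers unfamiliar with the underlying technique, here is a minimal branch-and-bound sketch for a 0/1 knapsack using an LP-relaxation bound; it illustrates the generic pruning idea only and is unrelated to MINTO's actual directives or API.

```python
# Items must be sorted by value density for the fractional bound to be valid.
values, weights, cap = [60, 100, 120], [10, 20, 30], 50

def bound(i, w, v):
    """LP-relaxation bound: fill remaining capacity greedily, fractionally."""
    for j in range(i, len(values)):
        take = min(cap - w, weights[j])
        v += values[j] * take / weights[j]
        w += take
    return v

best = 0

def branch(i, w, v):
    global best
    if w > cap:
        return
    if i == len(values):
        best = max(best, v)
        return
    if bound(i, w, v) <= best:
        return                                   # prune: bound cannot beat incumbent
    branch(i + 1, w + weights[i], v + values[i])  # take item i
    branch(i + 1, w, v)                           # skip item i

branch(0, 0, 0)
print("optimal value:", best)   # -> 220
```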

  8. Methodology Aspects of Quantifying Stochastic Climate Variability with Dynamic Models

    Science.gov (United States)

    Nuterman, Roman; Jochum, Markus; Solgaard, Anna

    2015-04-01

    The paleoclimatic records show that climate has changed dramatically through time. For the past few million years it has been oscillating between ice ages, with large parts of the continents covered with ice, and warm interglacial periods like the present one. It is commonly assumed that these glacial cycles are related to changes in insolation due to periodic changes in Earth's orbit around the Sun (Milankovitch theory). However, this relationship is far from understood. The insolation changes are so small that enhancing feedbacks must be at play. It might even be that the external perturbation plays only a minor role in comparison to internal stochastic variations or internal oscillations. This claim is based on several shortcomings of the Milankovitch theory: prior to one million years ago, the duration of the glacial cycles was indeed 41,000 years, in line with the obliquity cycle of Earth's orbit. This duration changed at the so-called Mid-Pleistocene transition to approximately 100,000 years. Moreover, according to Milankovitch's theory the interglacial of 400,000 years ago should not have happened. Thus, while prior to one million years ago the pacing of these glacial cycles may be tied to changes in Earth's orbit, we do not understand the current magnitude and phasing of the glacial cycles. In principle it is possible that the glacial/interglacial cycles are not due to variations in Earth's orbit, but due to stochastic forcing or internal modes of variability. We present a new method and preliminary results for a unified framework using a fully coupled Earth System Model (ESM), in which the leading three ice age hypotheses will be investigated together. Was the waxing and waning of ice sheets due to an internal mode of variability, due to variations in Earth's orbit, or simply due to a low-order auto-regressive process (i.e., noise integrated by a system with memory)? The central idea is to use Generalized Linear Models (GLM), which can handle both
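
    The "noise integrated by a system with memory" hypothesis can be caricatured as an AR(1) process; the sketch below simulates such a record with a memory parameter close to one (giving long, glacial-like swings) and recovers the parameter by least squares. The data are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) caricature: x[t+1] = phi * x[t] + eps, with phi close to 1.
phi_true, n = 0.98, 5000
eps = rng.normal(size=n)
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = phi_true * x[t] + eps[t]

# Least-squares estimate of the memory parameter from the synthetic record.
phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
print(f"true phi = {phi_true}, estimated phi = {phi_hat:.3f}")
```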

  9. Interoperability Matter: Levels of Data Sharing, Starting from a 3d Information Modelling

    Science.gov (United States)

    Tommasi, C.; Achille, C.

    2017-02-01

    Nowadays, the adoption of BIM processes in the AEC (Architecture, Engineering and Construction) industry means to be oriented towards synergistic workflows, based on informative instruments capable of realizing the virtual model of the building. The target of this article is to speak about the interoperability matter, approaching the subject through a theoretical part and also a practice example, in order to show how these notions are applicable in real situations. In particular, the case study analysed belongs to the Cultural Heritage field, where it is possible to find some difficulties - both in the modelling and sharing phases - due to the complexity of shapes and elements. Focusing on the interoperability between different software, the questions are: What and how many kind of information can I share? Given that this process leads also to a standardization of the modelled parts, is there the possibility of an accuracy loss?

  10. Reliability modelling of repairable systems using Petri nets and fuzzy Lambda-Tau methodology

    Energy Technology Data Exchange (ETDEWEB)

    Knezevic, J.; Odoom, E.R

    2001-07-01

    A methodology is developed which uses Petri nets instead of the fault tree methodology and solves for reliability indices utilising the fuzzy Lambda-Tau method. Fuzzy set theory is used for representing the failure rate and repair time instead of the classical (crisp) set theory, because fuzzy numbers allow expert opinions, linguistic variables, operating conditions, uncertainty and imprecision in reliability information to be incorporated into the system model. Petri nets are used because, unlike the fault tree methodology, they allow efficient simultaneous generation of minimal cut and path sets.
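
    A minimal sketch of the Lambda-Tau arithmetic for an OR gate (series system of two repairable units), with triangular fuzzy failure rates and repair times evaluated by alpha-cut intervals. The numbers and the corner-scanning shortcut are illustrative assumptions, not the paper's implementation.

```python
import itertools

# OR-gate Lambda-Tau expressions for two units:
#   lambda_sys = l1 + l2,   tau_sys = (l1*t1 + l2*t2) / (l1 + l2)
# Triangular fuzzy numbers (a, m, b) evaluated via alpha-cut intervals.

def alpha_cut(tri, alpha):
    a, m, b = tri
    return (a + alpha * (m - a), b - alpha * (b - m))

lam1, lam2 = (0.8e-3, 1.0e-3, 1.2e-3), (1.5e-3, 2.0e-3, 2.5e-3)  # failure rates (1/h)
tau1, tau2 = (4.0, 5.0, 6.0), (8.0, 10.0, 12.0)                  # repair times (h)

for alpha in (0.0, 0.5, 1.0):
    cuts = [alpha_cut(f, alpha) for f in (lam1, lam2, tau1, tau2)]
    lam_sys = (cuts[0][0] + cuts[1][0], cuts[0][1] + cuts[1][1])
    # tau_sys is not monotone in every variable, so scan the interval corners.
    vals = [(l1 * t1 + l2 * t2) / (l1 + l2)
            for l1, l2, t1, t2 in itertools.product(*cuts)]
    print(f"alpha={alpha:.1f}  lambda_sys=({lam_sys[0]:.2e}, {lam_sys[1]:.2e})"
          f"  tau_sys=({min(vals):.2f}, {max(vals):.2f})")
```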

  11. Prototype integration of the joint munitions assessment and planning model with the OSD threat methodology

    Energy Technology Data Exchange (ETDEWEB)

    Lynn, R.Y.S.; Bolmarcich, J.J.

    1994-06-01

    The purpose of this Memorandum is to propose a prototype procedure which the Office of Munitions might employ to exercise, in a supportive joint fashion, two of its High Level Conventional Munitions Models, namely, the OSD Threat Methodology and the Joint Munitions Assessment and Planning (JMAP) model. The joint application of JMAP and the OSD Threat Methodology provides a tool to optimize munitions stockpiles. The remainder of this Memorandum comprises five parts. The first is a description of the structure and use of the OSD Threat Methodology. The second is a description of JMAP and its use. The third discusses the concept of the joint application of JMAP and OSD Threat Methodology. The fourth displays sample output of the joint application. The fifth is a summary and epilogue. Finally, three appendices contain details of the formulation, data, and computer code.

  12. Community Dissemination of the Early Start Denver Model: Implications for Science and Practice

    Science.gov (United States)

    Vismara, Laurie A.; Young, Gregory S.; Rogers, Sally J.

    2013-01-01

    The growing number of Autism Spectrum Disorder cases exceeds the services available for these children. This increase challenges both researchers and service providers to develop systematic, effective dissemination strategies for transporting university research models to community early intervention (EI) programs. The current study developed an…

  13. Study of fuel control strategy based on an fuel behavior model for starting conditions; Nenryo kyodo model ni motozuita shidoji no nenryo hosei hosho ni tsuite no kosatsu

    Energy Technology Data Exchange (ETDEWEB)

    Nakajima, Y.; Uchida, M.; Iwano, H.; Oba, H. [Nissan Motor Co. Ltd., Tokyo (Japan)

    1997-10-01

    We have applied a fuel behavior model to a fuel injection system which we call SOFIS (Sophisticated and Optimized Fuel Injection System) to achieve accurate air/fuel ratio control and good driveability. However, the fuel behavior under starting conditions is still not clear. To meet low-emission rules and to get better driveability under starting conditions, better air/fuel ratio control is necessary. We have now determined the ignition timing, injection timing, and injection pulse width required in such conditions. Previously, we analyzed the state of the air/fuel mixture under cold conditions and made a new fuel behavior model which considered fuel losses such as unburned hydrocarbons and dissolution into the oil. At this time, we have applied this idea to starting. We confirm this new model offers improved air/fuel ratio control. 6 refs., 9 figs., 3 tabs.
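
    The kind of fuel-film compensation described here is commonly formulated as a wall-wetting ("x-tau") model; the sketch below, with invented parameters rather than the authors' calibration, shows the film dynamics and the inverted model used to correct the injected quantity:

```python
# Wall-wetting ("x-tau") sketch: a fraction X of each injected quantity wets
# the port wall; the wall film of mass m_f evaporates with time constant tau.
# Compensation inverts the model so the fuel reaching the cylinder matches
# the demand. All parameter values are illustrative.
X, tau, dt = 0.4, 1.2, 0.01   # film fraction (-), evaporation time (s), step (s)

def film_step(m_f, u_inj):
    """One step: returns (new film mass, fuel mass entering the cylinder)."""
    evap = dt * m_f / tau                  # film mass evaporated this step (mg)
    return m_f + X * u_inj - evap, (1.0 - X) * u_inj + evap

def compensated_injection(m_f, u_demand):
    """Invert the film model so that the cylinder receives exactly u_demand."""
    return (u_demand - dt * m_f / tau) / (1.0 - X)

m_f, demand = 0.0, 10.0                    # mg of fuel demanded per step
for step in range(5):
    u_inj = compensated_injection(m_f, demand)
    m_f, u_cyl = film_step(m_f, u_inj)
    print(f"step {step}: inject {u_inj:.2f} mg -> cylinder receives {u_cyl:.2f} mg")
```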

  14. Improved methodology for developing cost uncertainty models for naval vessels

    OpenAIRE

    Brown, Cinda L.

    2008-01-01

    The purpose of this thesis is to analyze the probabilistic cost model currently in use by NAVSEA 05C to predict cost uncertainty in naval vessel construction and to develop a method that better predicts the ultimate cost risk. The data used to develop the improved approach is collected from analysis of the CG(X) class ship by NAVSEA 05C. The NAVSEA 05C cost risk factors are reviewed and analyzed to determine if different factors are better cost predictors. The impact of data elicitation, t...

  15. A Classification Methodology and Retrieval Model to Support Software Reuse

    Science.gov (United States)

    1988-01-01

    (Table-of-contents residue from the scanned report omitted; the recoverable fragments follow.) The report surveys information retrieval systems, including SMART, SIRE and Caliban, and probabilistic information retrieval models such as Harter's model. Caliban is an experimental IR system developed at the Swiss Federal Institute of Technology; representing documents as attribute vectors destroys the Boolean structure. To retrieve information using Caliban, the user specifies a "virtual information item" (fills out a template describing the item

  16. Methodological aspects of journaling a dynamic adjusting entry model

    Directory of Open Access Journals (Sweden)

    Vlasta Kašparovská

    2011-01-01

    Full Text Available This paper expands the discussion of the importance and function of adjusting entries for loan receivables. Discussion of the cyclical development of adjusting entries, their negative impact on the business cycle and potential solutions has intensified during the financial crisis. These discussions are still ongoing and continue to be relevant to members of the professional public, banking regulators and representatives of international accounting institutions. The objective of this paper is to evaluate a method of journaling dynamic adjusting entries under current accounting law. It also expresses the authors’ opinions on the potential for consistently implementing basic accounting principles in journaling adjusting entries for loan receivables under a dynamic model.

  17. Fuel cycle assessment: A compendium of models, methodologies, and approaches

    Energy Technology Data Exchange (ETDEWEB)

    1994-07-01

    The purpose of this document is to profile analytical tools and methods which could be used in a total fuel cycle analysis. The information in this document provides a significant step towards: (1) Characterizing the stages of the fuel cycle. (2) Identifying relevant impacts which can feasibly be evaluated quantitatively or qualitatively. (3) Identifying and reviewing other activities that have been conducted to perform a fuel cycle assessment or some component thereof. (4) Reviewing the successes/deficiencies and opportunities/constraints of previous activities. (5) Identifying methods and modeling techniques/tools that are available, tested and could be used for a fuel cycle assessment.

  18. Literature Survey of previous research work in Models and Methodologies in Project Management

    OpenAIRE

    Ravinder Singh; Dr. Kevin Lano

    2014-01-01

    This paper provides a survey of the existing literature and research carried out in the area of project management using different models, methodologies, and frameworks. Project Management (PM) broadly means programme management, portfolio management, practice management, project management office, etc. A project management system has a set of processes, procedures, framework, methods, tools, methodologies, techniques, resources, etc. which are used to manage the full life cycle of projects. ...

  19. Models of expected returns on the brazilian market: Empirical tests using predictive methodology

    Directory of Open Access Journals (Sweden)

    Adriano Mussa

    2009-01-01

    Full Text Available Predictive methodologies for testing expected returns models are widely diffused in the international academic environment. However, these methods have not been used in Brazil in a systematic way. Generally, empirical studies conducted with Brazilian stock market data concentrate only on the first step of these methodologies. The purpose of this article was to test and compare the CAPM, 3-factor and 4-factor models using a predictive methodology, considering two steps – time-series and cross-sectional regressions – with standard errors obtained by the techniques of Fama and MacBeth (1973). The results indicated the superiority of the 4-factor model as compared to the 3-factor model, and the superiority of the 3-factor model as compared to the CAPM, but none of the tested models was sufficient to explain Brazilian stock returns. Contrary to some empirical evidence obtained without predictive methodology, the size and momentum effects seem not to exist in the Brazilian capital market, but there is evidence of the value effect and of the relevance of the market factor for explaining expected returns. These findings raise some questions, mainly because the methodology is new to the local market and the subject remains incipient and polemical in the Brazilian academic environment.
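
    A compact sketch of the two-step Fama-MacBeth (1973) procedure on synthetic data: time-series regressions estimate betas, monthly cross-sectional regressions yield risk-premium estimates, and standard errors come from the time series of those coefficients. A single market factor stands in for the 3- and 4-factor variants.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 120, 25                      # months, stocks (synthetic data)

mkt = rng.normal(0.01, 0.04, T)     # market excess return
beta_true = rng.uniform(0.5, 1.5, N)
ret = 0.002 + np.outer(mkt, beta_true) + rng.normal(0, 0.05, (T, N))

# Step 1: time-series regressions estimate each stock's beta.
X = np.column_stack([np.ones(T), mkt])
coef = np.linalg.lstsq(X, ret, rcond=None)[0]
betas = coef[1]

# Step 2: one cross-sectional regression per month of returns on betas.
Z = np.column_stack([np.ones(N), betas])
gammas = np.array([np.linalg.lstsq(Z, ret[t], rcond=None)[0] for t in range(T)])

# Fama-MacBeth estimates and standard errors from the gamma time series.
g_mean = gammas.mean(axis=0)
g_se = gammas.std(axis=0, ddof=1) / np.sqrt(T)
print(f"market premium = {g_mean[1]:.4f}  (t = {g_mean[1] / g_se[1]:.2f})")
```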

  20. Methodology for modeling the microbial contamination of air filters.

    Directory of Open Access Journals (Sweden)

    Yun Haeng Joe

    Full Text Available In this paper, we propose a theoretical model to simulate microbial growth on contaminated air filters and entrainment of bioaerosols from the filters to an indoor environment. Air filter filtration and antimicrobial efficiencies, and effects of dust particles on these efficiencies, were evaluated. The number of bioaerosols downstream of the filter could be characterized according to three phases: initial, transitional, and stationary. In the initial phase, the number was determined by filtration efficiency, the concentration of dust particles entering the filter, and the flow rate. During the transitional phase, the number of bioaerosols gradually increased up to the stationary phase, at which point no further increase was observed. The antimicrobial efficiency and flow rate were the dominant parameters affecting the number of bioaerosols downstream of the filter in the transitional and stationary phase, respectively. It was found that the nutrient fraction of dust particles entering the filter caused a significant change in the number of bioaerosols in both the transitional and stationary phases. The proposed model would be a solution for predicting the air filter life cycle in terms of microbiological activity by simulating the microbial contamination of the filter.
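
    A rough numerical reconstruction of the three downstream phases described above, assuming logistic growth of the surface population and a constant entrainment fraction; all parameter values are invented, not the authors' calibration.

```python
# Downstream bioaerosols = particles penetrating the filter + cells entrained
# from a microbial population growing logistically on the loaded filter.
eta_filt, eta_anti = 0.95, 0.7      # filtration / antimicrobial efficiencies
C_in, Q = 100.0, 1.0                # upstream bioaerosols (#/m3), flow (m3/h)
mu, K_cap = 0.2, 1e6                # growth rate (1/h), carrying capacity (#)
k_ent = 1e-4                        # entrained fraction of surface cells (1/h)

dt, n_steps = 1.0, 200
N_surf = 0.0
for step in range(n_steps + 1):
    C_down = (1.0 - eta_filt) * C_in + k_ent * N_surf / Q
    if step % 40 == 0:
        print(f"t = {step * dt:5.0f} h   downstream = {C_down:8.2f} #/m3")
    deposit = eta_filt * C_in * Q * (1.0 - eta_anti)   # viable cells retained
    growth = mu * N_surf * (1.0 - N_surf / K_cap)      # logistic surface growth
    N_surf += dt * (deposit + growth - k_ent * N_surf)
```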

  1. How Famine Started in Somalia: A Simple Model of Fresh Water Use

    Science.gov (United States)

    Gustafson, K. C.; Motesharrei, S.; Miralles-Wilhelm, F. R.; Kalnay, E.; Rivas, J.; Freshwater Modeling Team

    2011-12-01

    Water is the origin of life on our planet and therefore the most valued and essential natural resource for sustaining life on earth. Human uses of freshwater include not only drinking, washing, and other daily domestic activities, but also manufacturing, energy production, agriculture, aquaculture, etc. Though water covers most of our planet, about 97% of it is saline; only a minute portion is available as freshwater for anthropogenic necessities, and the rate at which the natural system can filter and replenish the freshwater supply cannot compete with the rate of demand. That is why special attention must be given to the availability of freshwater at the present time and in the future. We constructed a fairly elementary model of the water cycle while working on a more sophisticated water model that includes several additional details. Upon introducing data from different regions of the world (including the United States) into our simple model and running simulations, we can gain insight into the future of water sources and the availability of freshwater. In particular, we are able to simulate how drought can result in famine, a catastrophe currently taking place in certain areas of Somalia.
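
    In the spirit of the "fairly elementary model" the authors describe, a minimal annual freshwater balance might look like the following sketch; all figures and the shortfall rule are hypothetical, not the authors' model.

```python
import numpy as np

# Storage' = recharge - withdrawals; famine risk flagged when supply < demand.
rng = np.random.default_rng(7)
years = 20
recharge = rng.normal(85.0, 30.0, years)    # annual renewable supply (km3)
storage, demand = 200.0, 90.0               # initial storage, fixed demand (km3)

for yr, r in enumerate(recharge):
    available = storage + max(r, 0.0)       # drought years contribute little
    withdrawal = min(demand, available)
    storage = available - withdrawal
    if withdrawal < demand:
        print(f"year {yr}: shortfall of {demand - withdrawal:.1f} km3 -> famine risk")
```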

  2. The PROMETHEUS bundled payment experiment: slow start shows problems in implementing new payment models.

    Science.gov (United States)

    Hussey, Peter S; Ridgely, M Susan; Rosenthal, Meredith B

    2011-11-01

    Fee-for-service payment is blamed for many of the problems observed in the US health care system. One of the leading alternative payment models proposed in the Affordable Care Act of 2010 is bundled payment, which provides payment for all of the care a patient needs over the course of a defined clinical episode, instead of paying for each discrete service. We evaluated the initial "road test" of PROMETHEUS Payment, one of several bundled payment pilot projects. The project has faced substantial implementation challenges, and none of the three pilot sites had executed contracts or made bundled payments as of May 2011. The pilots have taken longer to set up than expected, primarily because of the complexity of the payment model and the fact that it builds on the existing fee-for-service payment system and other complexities of health care. Participants continue to see promise and value in the bundled payment model, but the pilot results suggest that the desired benefits of this and other payment reforms may take time and considerable effort to materialize.

  3. Methodological characteristics in establishing rat models of poststroke depression

    Institute of Scientific and Technical Information of China (English)

    Fuyou Liu; Shi Yang; Weiyin Chen; Jinyu Wang; Yi Tang; Guanxiang Zhu

    2006-01-01

    BACKGROUND: An ideal model of poststroke depression (PSD) may be induced in rats, guided by the "primary endogenous mechanism" and "reactivity mechanism" theories proposed for PSD in human beings. OBJECTIVE: To investigate the feasibility of comprehensive methods for inducing PSD models in rats. DESIGN: A randomized controlled animal trial. SETTING: Department of Neurology, Affiliated Hospital of Chengdu University of Traditional Chinese Medicine. MATERIALS: Male SD rats of SPF grade, weighing 350-500 g, were provided by the experimental animal center of Chengdu University of Traditional Chinese Medicine. The rats were raised adaptively for 1 week, then screened behaviorally by the open-field test and the passive avoidance test. Forty-five rats with close scores were randomly divided into a normal control group (n = 10), a simple stroke group (n = 10), a stress group (n = 10) and a PSD group (n = 15). METHODS: The experiments were carried out in the laboratory of Chengdu University of Traditional Chinese Medicine from July 2002 to February 2003. ① Rat models of focal cerebral ischemia were induced by thread embolization, then treated with separate raising and unpredictable stress to induce PSD models. ② The neurologic deficit was evaluated by the Longa 5-grade standard (the higher the score, the severer the neurologic deficit) and the horizontal round rod test (a normal rat can stay on the rod for at least 3 minutes). ③ The behavioral changes of PSD rats were evaluated by the saccharin water test, the open-field test and the passive avoidance test, including changes in interest and in spontaneous and exploratory activities, etc. ④ The levels of the monoamine neurotransmitters norepinephrine (NE), serotonin (5-HT) and dopamine in the brain were determined using fluorospectrophotometry. MAIN OUTCOME MEASURES: ① Score on the Longa 5-grade standard; time stayed on the horizontal round rod; ② amount of saccharin water consumed; open-field test: time stayed in the central square, times

  4. Evaluating Supply Chain Management: A Methodology Based on a Theoretical Model

    Directory of Open Access Journals (Sweden)

    Alexandre Tadeu Simon

    2015-01-01

    Full Text Available Despite the increasing interest in supply chain management (SCM by researchers and practitioners, there is still a lack of academic literature concerning topics such as methodologies to guide and support SCM evaluation. Most developed methodologies have been provided by consulting companies and are restricted in their publication and use. This article presents a methodology for evaluating companies’ degree of adherence to a SCM conceptual model. The methodology is based on Cooper, Lambert and Pagh’s original contribution and involves analysis of eleven referential axes established from key business processes, horizontal structures, and initiatives & practices. We analyze the applicability of the proposed model based on findings from interviews with experts - academics and practitioners - as well as from case studies of three focal firms and their supply chains. In general terms, the methodology can be considered a diagnostic instrument that allows companies to evaluate their maturity regarding SCM practices. From this diagnosis, firms can identify and implement activities to improve degree of adherence to the reference model and achieve SCM benefits. The methodology aims to contribute to SCM theory development. It is an initial, but structured, reference for translating a theoretical approach into practical aspects.

  5. New temperature model of the Netherlands from new data and novel modelling methodology

    Science.gov (United States)

    Bonté, Damien; Struijk, Maartje; Békési, Eszter; Cloetingh, Sierd; van Wees, Jan-Diederik

    2017-04-01

    Deep geothermal energy has grown in interest in Western Europe in recent decades, for direct use but also, as knowledge of the subsurface improves, for electricity generation. In the Netherlands, where the sector took off with the first system in 2005, geothermal energy is seen as a key player for a sustainable future. Knowledge of the subsurface temperature, together with the available flow from the reservoir, is an important factor that can determine the success of a geothermal energy project. To support the development of deep geothermal energy systems in the Netherlands, we made a first assessment of the subsurface temperature based on thermal data but also on geological elements (Bonté et al., 2012). An outcome of this work was ThermoGIS, which uses the temperature model. This work is a revision of the model used in ThermoGIS. The improvements on the first model are multiple: we have improved not only the dataset used for calibration and the structural model, but also the methodology, through improved software (called b3t). The temperature dataset has been updated by integrating temperatures from newly accessible wells. The sedimentary description of the basin has been improved by using an updated and refined structural model and an improved lithological definition. A major improvement comes from the methodology used to perform the modelling: with b3t the calibration is made not only using the lithospheric parameters but also using the thermal conductivity of the sediments. The result is a much more accurate definition of the model parameters and a perfected handling of the calibration process. The outcome is a precise and improved temperature model of the Netherlands. The thermal conductivity variation in the sediments, associated with the geometry of the layers, is an important driver of temperature variations, and the influence of the Zechstein salt in the north of the country is important. In addition, the radiogenic heat

  6. Coupling watersheds, estuaries and regional ocean through numerical modelling for Western Iberia: a novel methodology

    Science.gov (United States)

    Campuzano, Francisco; Brito, David; Juliano, Manuela; Fernandes, Rodrigo; de Pablo, Hilda; Neves, Ramiro

    2016-12-01

    An original methodology for integrating the water cycle from rainfall to the open ocean by numerical models was set up using an offline coupling technique. The different components of the water continuum, including watersheds, estuaries and ocean, for Western Iberia were reproduced using numerical components of the MOHID Water Modelling System (http://www.mohid.com). This set of models, when combined through this novel methodology, is able to fill information gaps and to include, in a realistic manner, the fresh water inputs, in terms of volume and composition, into a regional ocean model. The designed methodology is illustrated using the Tagus River, its estuary and its region of fresh water influence as a case study, and its performance is evaluated by means of river flow and salinity observations.

  7. Functioning of the avalanche starting zones which undergo snow-transport by wind: Field observations and computer modeling

    Science.gov (United States)

    Sivardière, F.; Castelle, T.; Guyomarc'h, G.; Mérindol, L.; Buisson, L.

    1995-11-01

    For two years, three French and Swiss laboratories have been making field observations and measurements on two high altitude slopes in a Northern French Alps site. The aim of this work is to study the functioning of the avalanche sites which, in their starting zones, undergo snow-transport by wind. The experimental site is located in the French Alps, at 2,800 m, above Grenoble. It is an open area, equipped with an automatic meteorological station and an altitude laboratory. The two slopes that are studied face East. One of them is artificially released but the other has a natural avalanche activity. The investigations concern: -snow deposition in avalanche starting zones; -temporal evolution of the snowpack characteristics; -avalanche release. For the field observations and measurements, continuous recording of the meteorological conditions on the site, photogrammetrical techniques and two snow depth profiles, as well as stratigraphical snow profiles and video are used. The computer modeling is based on existing computer models developed by the CEMAGREF-Nivologie (ELSA) and the CEN/Météo-France (SAFRAN-CROCUS-MEPRA), which analyse the snowpack and its stability. The field observations and measurements aim at improving snow-transport by wind modeling modules, in order to improve their whole analysis.

  8. Event based uncertainty assessment in urban drainage modelling, applying the GLUE methodology

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Beven, K.J.; Jensen, Jacob Birk

    2008-01-01

    In the present paper an uncertainty analysis of an application of the commercial urban drainage model MOUSE is conducted. Applying the Generalized Likelihood Uncertainty Estimation (GLUE) methodology, the model is conditioned on observation time series from two flow gauges as well as the occurrence...... if the uncertainty analysis is unambiguous. It is shown that the GLUE methodology is very applicable in uncertainty analysis of this application of an urban drainage model, although it was shown to be quite difficult to get good fits of the whole time series.
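
    A self-contained sketch of the GLUE recipe on a toy linear-reservoir model standing in for MOUSE: Monte Carlo sampling of a parameter, a Nash-Sutcliffe likelihood, a behavioral threshold, and likelihood-weighted estimates. All data and thresholds are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def drainage_model(rain, k):
    """Stand-in for MOUSE: a linear reservoir, q[t] = k * storage[t]."""
    s, q = 0.0, np.zeros(len(rain))
    for t, r in enumerate(rain):
        s += r - k * s
        q[t] = k * s
    return q

rain = rng.exponential(1.0, 200)
q_obs = drainage_model(rain, 0.35) + rng.normal(0, 0.1, 200)  # synthetic "gauge"

# GLUE: sample parameters, score with Nash-Sutcliffe, keep behavioral runs.
ks = rng.uniform(0.05, 0.95, 2000)
ns = np.array([1 - np.sum((drainage_model(rain, k) - q_obs) ** 2)
               / np.sum((q_obs - q_obs.mean()) ** 2) for k in ks])
behavioral = ns > 0.6
w = ns[behavioral] / ns[behavioral].sum()
print(f"{behavioral.sum()} behavioral runs; "
      f"likelihood-weighted k = {np.sum(w * ks[behavioral]):.3f}")
```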

  9. METHODOLOGICAL APPROACH AND MODEL ANALYSIS FOR IDENTIFICATION OF TOURIST TRENDS

    Directory of Open Access Journals (Sweden)

    Neven Šerić

    2015-06-01

    Full Text Available The attractiveness and diversity of a destination's offer are antecedents of growth in tourist visits. Destination supply is differentiated through new, specialised tourism products. The usual approach consists of forming specialised tourism products in accordance with the existing tourism destination image. Another approach, prevalent in the practice of developed tourism destinations, is based on innovating the destination supply in accordance with global tourism trends. For this purpose, it is advisable to choose a method for monitoring and analysing tourism trends. The goal is to determine the actual trends governing target markets, differentiating whims from trends during the tourism preseason. When considering the return on investment, modifying the destination's tourism offer on the basis of a tourism whim is a risky endeavour indeed. Adapting the destination's supply to tourism whims can result in a shifted image, one that is unable to ensure long-term interest and growth in tourist visits. With regard to tourism trend research, and based on the research conducted, a model for evaluating tourism phenomena is proposed which determines whether a given phenomenon is a tourism trend or a tourism whim.

  10. Starting over: applying new models that challenge existing paradigms in the scholarly publishing marketplace

    Directory of Open Access Journals (Sweden)

    Dan Scott

    2012-11-01

    Full Text Available Scholarly publishing has gone through turbulent times. Enormous growth in supply and expenditure has been followed – dramatically and unexpectedly – by severe contraction of budgets. The ‘creative destruction’ of the 2008 global financial crisis has produced new opportunities and forced legislators, administrators, academics and librarians to consider alternatives to traditional subscription models. This article presents a case study of one UK-based ‘gold’ publisher's attempts to create a viable, sustainable alternative, which aims to bring the same benefits of open access publishing to the social sciences and arts & humanities as have been proven to work in STM – so, providing insights into the strategic choices of product, scope and aims, pricing, marketing, etc. By the time of publication, 'Social Sciences Directory' will have published its first issue, and 'Humanities Directory' will be close to following suit.

  11. Towards A Model-Based Prognostics Methodology For Electrolytic Capacitors: A Case Study Based On Electrical Overstress Accelerated Aging

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a model-driven methodology for predicting the remaining useful life of electrolytic capacitors. This methodology adopts a Kalman filter...
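
    A minimal sketch of the Kalman-filter idea for capacitor prognostics, assuming a linear ESR-drift state model and an invented end-of-life threshold; this is an illustration of the general technique, not NASA's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# State x = [ESR, drift rate]; ESR assumed to drift linearly under overstress.
dt, q, r = 1.0, 1e-6, 1e-3
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
x = np.array([0.10, 0.0])                     # initial ESR (ohm) and rate
P = np.eye(2) * 0.01

true_esr = 0.10 + 0.002 * np.arange(100)      # synthetic aging trajectory
for z in true_esr + rng.normal(0.0, 0.03, 100):
    x, P = F @ x, F @ P @ F.T + q * np.eye(2)   # predict
    S = H @ P @ H.T + r                         # innovation variance
    K = P @ H.T / S                             # Kalman gain (2x1)
    x = x + (K * (z - H @ x)).ravel()           # update state
    P = (np.eye(2) - K @ H) @ P                 # update covariance

threshold = 0.35                              # assumed end-of-life ESR (ohm)
rul = (threshold - x[0]) / x[1] if x[1] > 0 else float("inf")
print(f"ESR = {x[0]:.3f} ohm, drift = {x[1]:.4f} ohm/cycle, RUL ~ {rul:.0f} cycles")
```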

  12. Venture financing of start-ups: A model of contract between VC fund and entrepreneur

    Directory of Open Access Journals (Sweden)

    Osintsev Yury

    2010-01-01

    Full Text Available Venture capital has become one of the main sources of innovation in the modern, global economy. It is not just a substitute for bank loans: it has proven to be a more efficient way of financing projects at different stages. On one hand, venture financing allows for projects with higher risk, which leads to the possibility of higher returns on investment. On the other hand, venture investors, who usually have managerial experience, often participate in governing the business, which certainly adds value to the enterprise. In this paper we establish a model of the contract between the venture capital fund and the entrepreneur, focusing on probably the most important issue of this contract: the shares of the parties in the business. The shares in the company determine the distribution of the joint surplus. The expected joint profits are not just exogenously specified in the contract but depend on the behavioral variables of both parties at the stage of fulfilling the contract. We call the behavioral variable of the entrepreneur ‘effort’ and that of the venture fund ‘advice’. The probability of the project’s success, and hence the expected joint revenues, are increased by both. However, both kinds of effort are costly to the respective parties that exert them. On this basis we can formulate the profit functions of both sides of the contract. Our model can be considered a basis for specifying venture financing contracts. It provides the logic by which the equilibrium shares of the entrepreneur and the venture fund are obtained.
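
    To make this structure concrete, the sketch below assumes hypothetical functional forms (square-root success probability, quadratic effort costs, names V, alpha, beta, c_e, c_a are all invented), solves each party's first-order condition for a given share split, and scans the entrepreneur's share for the surplus-maximizing contract:

```python
import numpy as np

# Entrepreneur holds share s, VC fund holds 1-s, of exit value V.
# Success probability p = alpha*sqrt(effort) + beta*sqrt(advice),
# with quadratic private costs. All forms and numbers are hypothetical.
V, alpha, beta, c_e, c_a = 10.0, 0.2, 0.15, 1.0, 1.0

def equilibrium_surplus(s):
    e = (s * V * alpha / (4 * c_e)) ** (2.0 / 3.0)        # entrepreneur FOC
    a = ((1 - s) * V * beta / (4 * c_a)) ** (2.0 / 3.0)   # VC fund FOC
    p = min(1.0, alpha * np.sqrt(e) + beta * np.sqrt(a))
    return p * V - c_e * e ** 2 - c_a * a ** 2            # joint surplus

shares = np.linspace(0.01, 0.99, 99)
best = max(shares, key=equilibrium_surplus)
print(f"surplus-maximizing entrepreneur share ~ {best:.2f}")
```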

  13. Joint intelligence operations centers (JIOC) business process model & capabilities evaluation methodology

    OpenAIRE

    Schacher, Gordon; Irvine, Nelson; Hoyt, Roger

    2012-01-01

    A JIOC Business Process Model has been developed for use in evaluating JIOC capabilities. The model is described and depicted through OV5 and organization swim-lane diagrams. Individual intelligence activities diagrams are included. A JIOC evaluation methodology is described.

  14. A parameter estimation and identifiability analysis methodology applied to a street canyon air pollution model

    DEFF Research Database (Denmark)

    Ottosen, Thor Bjørn; Ketzel, Matthias; Skov, Henrik

    2016-01-01

    Mathematical models are increasingly used in environmental science, thus increasing the importance of uncertainty and sensitivity analyses. In the present study, an iterative parameter estimation and identifiability analysis methodology is applied to an atmospheric model – the Operational Street Pollution Model (OSPM®). To assess the predictive validity of the model, the data is split into an estimation and a prediction data set using two data splitting approaches, and data preparation techniques (clustering and outlier detection) are analysed. The sensitivity analysis, being part......

  15. Integration of process design and controller design for chemical processes using model-based methodology

    DEFF Research Database (Denmark)

    Abd.Hamid, Mohd-Kamaruddin; Sin, Gürkan; Gani, Rafiqul

    2010-01-01

    In this paper, a novel systematic model-based methodology for performing integrated process design and controller design (IPDC) for chemical processes is presented. The methodology uses a decomposition method to solve the IPDC typically formulated as a mathematical programming (optimization...... with constraints) problem. Accordingly the optimization problem is decomposed into four sub-problems: (i) pre-analysis, (ii) design analysis, (iii) controller design analysis, and (iv) final selection and verification, which are relatively easier to solve. The methodology makes use of thermodynamic-process...... insights and the reverse design approach to arrive at the final process design–controller design decisions. The developed methodology is illustrated through the design of: (a) a single reactor, (b) a single separator, and (c) a reactor–separator-recycle system and shown to provide effective solutions...

  16. Novel Methodology for Functional Modeling and Simulation of Wireless Embedded Systems

    Directory of Open Access Journals (Sweden)

    Sosa Morales Emma

    2008-01-01

    Full Text Available A novel methodology is presented for the modeling and simulation of wireless embedded systems. Tight interaction between the analog and the digital functionality makes the design and verification of such systems a real challenge. The applied methodology brings together the functional models of the baseband algorithms written in C language with the circuit descriptions at behavioral level in Verilog or Verilog-AMS for system simulations in a single-kernel environment. The physical layer of an ultrawideband system has been successfully modeled and simulated. The results confirm that this methodology provides a standardized framework to efficiently and accurately simulate complex mixed-signal applications for embedded systems.

  17. Towards a Cognitive Handoff for the Future Internet: Model-driven Methodology and Taxonomy of Scenarios

    CERN Document Server

    Gonzalez-Horta, Francisco A; Ramirez-Cortes, Juan M; Martinez-Carballido, Jorge; Buenfil-Alpuche, Eldamira

    2011-01-01

    A cognitive handoff is a multipurpose handoff that achieves many desirable features simultaneously; e.g., seamlessness, autonomy, security, correctness, adaptability, etc. But, the development of cognitive handoffs is a challenging task that has not been properly addressed in the literature. In this paper, we discuss the difficulties of developing cognitive handoffs and propose a new model-driven methodology for their systematic development. The theoretical framework of this methodology is the holistic approach, the functional decomposition method, the model-based design paradigm, and the theory of design as scientific problem-solving. We applied the proposed methodology and obtained the following results: (i) a correspondence between handoff purposes and quantitative environment information, (ii) a novel taxonomy of handoff mobility scenarios, and (iii) an original state-based model representing the functional behavior of the handoff process.

  18. Key-Aspects of Scientific Modeling Exemplified by School Science Models: Some Units for Teaching Contextualized Scientific Methodology

    Science.gov (United States)

    Develaki, Maria

    2016-01-01

    Models and modeling are core elements of scientific methods and consequently also are of key importance for the conception and teaching of scientific methodology. The epistemology of models and its transfer and adaption to nature of science education are not, however, simple themes. We present some conceptual units in which school science models…

  19. Analysis of Feedback processes in Online Group Interaction: a methodological model

    Directory of Open Access Journals (Sweden)

    Anna Espasa

    2013-06-01

    Full Text Available The aim of this article is to present a methodological model for analysing students' group interaction to improve their essays in online learning environments based on asynchronous, written communication. In these environments, teacher and student scaffolds for discussion are essential to promote interaction, and one such scaffold is feedback. Research on feedback processes has predominantly focused on feedback design rather than on how students utilize feedback to improve learning. This methodological model fills that gap, contributing to the analysis of how feedback is implemented while students discuss collaboratively in the specific case of writing assignments. A review of different methodological models was carried out to define a framework adjusted to the analysis of the relationship between written, asynchronous group interaction and students' activity and the changes incorporated into the final text. The proposed model includes the following dimensions: (1) student participation, (2) nature of student learning, and (3) quality of student learning. The main contribution of this article is to present the methodological model and to ascertain its operativity regarding how students incorporate such feedback into their essays.

  20. A PLM components monitoring framework for SMEs based on a PLM maturity model and FAHP methodology

    OpenAIRE

    Zhang, Haiqing; Sekhari, Aicha; Ouzrout, Yacine; Bouras, Abdelaziz

    2014-01-01

    Right PLM components selection and investments increase business advantages. This paper develops a PLM components monitoring framework to assess and guide PLM implementation in small and middle enterprises (SMEs). The framework builds upon PLM maturity models and decision-making methodology. PLM maturity model has the capability to analyze PLM functionalities and evaluate PLM components. A proposed PLM components maturity assessment (PCMA) model can obtain general maturity levels of PLM compo...

  1. A methodology for semiautomatic generation of finite element models: Application to mechanical devices

    Directory of Open Access Journals (Sweden)

    Jesús López

    2015-02-01

    Full Text Available In this work, a methodology to create parameterized finite element models is presented, focusing particularly on the development of suitable algorithms to generate models and meshes with high computational efficiency. The methodology is applied to the modeling of two common mechanical devices: an optical linear encoder and a gear transmission. This practical application constitutes a tough test of the proposed methodology, given the complexity and the large number of components that make up this high-precision measurement device and the singularity of the internal gears. The geometrical and mechanical particularities of the components lead to multidimensional modeling, seeking to ensure proper interaction between the different types of finite elements. Besides, modeling criteria for creating components such as compression and torsion springs, sheet springs, bearings, and adhesive joints are also presented in the article. The last part of the work validates the simulation results obtained with the proposed methodology against those derived from experimental modal tests using white-noise base-driven vibration and hammer impact excitation.

  2. A component modes projection and assembly model reduction methodology for articulated, multi-flexible body structures

    Science.gov (United States)

    Lee, Allan Y.; Tsuha, Walter S.

    1993-01-01

    A two-stage model reduction methodology, combining the classical Component Mode Synthesis (CMS) method and the newly developed Enhanced Projection and Assembly (EP&A) method, is proposed in this research. The first stage of this methodology, called the COmponent Modes Projection and Assembly model REduction (COMPARE) method, involves the generation of CMS mode sets, such as the MacNeal-Rubin mode sets. These mode sets are then used to reduce the order of each component model in the Rayleigh-Ritz sense. The resultant component models are then combined to generate reduced-order system models at various system configurations. A composite mode set which retains important system modes at all system configurations is then selected from these reduced-order system models. In the second stage, the EP&A model reduction method is employed to reduce further the order of the system model generated in the first stage. The effectiveness of the COMPARE methodology has been successfully demonstrated on a high-order, finite-element model of the cruise-configured Galileo spacecraft.
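
    The Rayleigh-Ritz projection at the heart of the first COMPARE stage can be sketched in a few lines: retain the lowest modes of a component eigenproblem and project the mass and stiffness matrices onto them. The toy spring-mass chain below is an invented stand-in, not the Galileo finite-element model.

```python
import numpy as np
from scipy.linalg import eigh

# Toy component: a 50-DOF spring-mass chain.
n, n_keep = 50, 6
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stiffness matrix
M = np.eye(n)                                            # mass matrix

w2, Phi = eigh(K, M)                  # full eigenproblem: K v = w^2 M v
T = Phi[:, :n_keep]                   # keep the lowest n_keep mode shapes

K_r, M_r = T.T @ K @ T, T.T @ M @ T   # Rayleigh-Ritz reduced matrices
w2_r, _ = eigh(K_r, M_r)

print("full lowest eigenvalues   :", np.round(w2[:3], 6))
print("reduced lowest eigenvalues:", np.round(w2_r[:3], 6))
```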

  3. From LCAs to simplified models: a generic methodology applied to wind power electricity.

    Science.gov (United States)

    Padey, Pierryves; Girard, Robin; le Boulch, Denis; Blanc, Isabelle

    2013-02-05

    This study presents a generic methodology to produce simplified models able to provide a comprehensive life cycle impact assessment of energy pathways. The methodology relies on the application of global sensitivity analysis to identify the key parameters explaining the impact variability of systems over their life cycle. Simplified models are built upon the identification of such key parameters. The methodology is applied to one energy pathway: onshore wind turbines of medium size, considering a large sample of possible configurations representative of European conditions. Among several technological, geographical, and methodological parameters, we identified the turbine load factor and the wind turbine lifetime as the most influential parameters. Greenhouse gas (GHG) performances have been plotted as a function of these identified key parameters. Using these curves, the GHG performance of a specific wind turbine can be estimated, thus avoiding the undertaking of an extensive Life Cycle Assessment (LCA). This methodology should be useful for decision makers, providing them a robust but simple support tool for assessing the environmental performance of energy systems.
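
    A schematic version of that workflow: sample hypothetical turbine configurations, compute life-cycle GHG per kWh from an embodied-emissions budget, and rank parameters by rank correlation as a crude stand-in for a full global sensitivity analysis. The parameter ranges are invented, not the paper's dataset.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n = 10000

# Hypothetical onshore-turbine parameter ranges.
embodied = rng.uniform(1500, 3500, n)   # embodied emissions, t CO2-eq
rated = rng.uniform(1.0, 3.0, n)        # rated power, MW
cf = rng.uniform(0.18, 0.40, n)         # load (capacity) factor, -
life = rng.uniform(15.0, 30.0, n)       # lifetime, years

# g CO2-eq per kWh produced over the turbine's life.
ghg = embodied * 1e6 / (rated * 1e3 * cf * life * 8760.0)

for name, param in [("embodied", embodied), ("rated power", rated),
                    ("load factor", cf), ("lifetime", life)]:
    rho, _ = spearmanr(param, ghg)
    print(f"{name:12s} rank correlation with GHG/kWh: {rho:+.2f}")
```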

  4. Developing a methodology to predict PM10 concentrations in urban areas using generalized linear models.

    Science.gov (United States)

    Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G

    2016-09-01

    A methodology to predict PM10 concentrations in urban outdoor environments is developed based on generalized linear models (GLMs). The methodology is based on the relationship developed between atmospheric concentrations of air pollutants (i.e. CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e. ambient temperature, relative humidity (RH) and wind speed) for a city (Barreiro) in Portugal. The model uses air pollution and meteorological data from the Portuguese air quality monitoring networks. The developed GLM considers PM10 concentrations as the dependent variable, and both the gaseous pollutants and the meteorological variables as explanatory independent variables. A logarithmic link function was considered with a Poisson probability distribution. Particular attention was given to cases with air temperatures both below and above 25°C. The best agreement between modelled results and measured data was achieved by the model restricted to air temperatures above 25°C, compared with the model considering all air temperatures and the model considering only temperatures below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and the results were found to behave similarly. It is concluded that this model and methodology could be adopted for other cities to predict PM10 concentrations when such data are not available from air quality monitoring stations or other acquisition means.
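
    A minimal sketch of such a Poisson GLM with a log link using statsmodels, fitted on synthetic stand-in data (the real study uses the Portuguese monitoring networks; all coefficients here are invented), including the separate fit for temperatures above 25°C:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 500

# Synthetic stand-in for monitoring-station data (pollutants + weather).
df = pd.DataFrame({
    "NO2": rng.gamma(4, 8, n), "CO": rng.gamma(2, 0.3, n),
    "temp": rng.uniform(5, 35, n), "RH": rng.uniform(30, 95, n),
    "wind": rng.gamma(2, 1.5, n),
})
lam = np.exp(1.5 + 0.02 * df.NO2 + 0.4 * df.CO - 0.01 * df.wind)
df["PM10"] = rng.poisson(lam)

# Poisson GLM (log link is the default) fitted on the temp > 25 C subset.
hot = df[df.temp > 25]
X = sm.add_constant(hot[["NO2", "CO", "temp", "RH", "wind"]])
model = sm.GLM(hot.PM10, X, family=sm.families.Poisson()).fit()
print(model.summary().tables[1])
```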

  5. Model and measurement methodology for the analysis of consumer choice of foods products

    NARCIS (Netherlands)

    B. Wierenga (Berend)

    1983-01-01

    The consumer can be conceived as an imperfect problem solver. Consumer behavior with respect to food products is purposive, but the consumer is bounded by limitations of information, cognitive skills, memory and time. From this starting point, this paper develops a model of the process b

  7. Modeling the Decay in an HBIM Starting from 3D Point Clouds. A Followed Approach for Cultural Heritage Knowledge

    Science.gov (United States)

    Chiabrando, F.; Lo Turco, M.; Rinaudo, F.

    2017-08-01

    Recent trends in architectural data management imply the scientific and professional collaboration of the several disciplines involved in design, restoration and maintenance. It now seems accepted that, in the near future, all the information connected to new interventions or conservation activities on historical buildings will be managed on a BIM platform. Today's range- or image-based metric survey techniques (mainly Terrestrial Laser Scanning and photogrammetry, the latter increasingly based on projective geometry) allow 3D point clouds, 3D models, orthophotos and other outputs to be generated with assessed accuracy. The subsequent conversion of 3D information into parametric components, especially in a historical environment, is not easy and raises many open issues. Considering current commercial BIM software and its embedded tools and plug-ins, the paper deals with the methodology followed for the realization of two parametric 3D models (Palazzo Sarmatoris and the Smistamento RoundHouse, two historical buildings in the north-west of Italy). The paper describes the proposed workflow, the plug-ins employed for automatic reconstruction, and the solutions adopted for the well-known problems of the modeling phase, such as the realization of vaults and the modeling of irregular 3D surfaces. Finally, the strategy studied for mapping the decay in a BIM environment and the connected results are critically discussed, together with conclusions and future perspectives.

  8. MODELING THE DECAY IN AN HBIM STARTING FROM 3D POINT CLOUDS. A FOLLOWED APPROACH FOR CULTURAL HERITAGE KNOWLEDGE

    Directory of Open Access Journals (Sweden)

    F. Chiabrando

    2017-08-01

    Full Text Available Recent trends in architectural data management imply the scientific and professional collaboration of the several disciplines involved in design, restoration and maintenance. It now seems accepted that, in the near future, all the information connected to new interventions or conservation activities on historical buildings will be managed on a BIM platform. Today's range- or image-based metric survey techniques (mainly Terrestrial Laser Scanning and photogrammetry, the latter increasingly based on projective geometry) allow 3D point clouds, 3D models, orthophotos and other outputs to be generated with assessed accuracy. The subsequent conversion of 3D information into parametric components, especially in a historical environment, is not easy and raises many open issues. Considering current commercial BIM software and its embedded tools and plug-ins, the paper deals with the methodology followed for the realization of two parametric 3D models (Palazzo Sarmatoris and the Smistamento RoundHouse, two historical buildings in the north-west of Italy). The paper describes the proposed workflow, the plug-ins employed for automatic reconstruction, and the solutions adopted for the well-known problems of the modeling phase, such as the realization of vaults and the modeling of irregular 3D surfaces. Finally, the strategy studied for mapping the decay in a BIM environment and the connected results are critically discussed, together with conclusions and future perspectives.

  9. Do Methodological Choices in Environmental Modeling Bias Rebound Effects?: A Case Study on Electric Cars

    NARCIS (Netherlands)

    Font Vivanco, D.; Tukker, A.; Kemp, R.

    2016-01-01

    Improvements in resource efficiency often underperform because of rebound effects. Calculations of the size of rebound effects are subject to various types of bias, among which methodological choices have received particular attention. Modellers have primarily focused on choices related to changes i

  10. Have Cognitive Diagnostic Models Delivered Their Goods? Some Substantial and Methodological Concerns

    Science.gov (United States)

    Wilhelm, Oliver; Robitzsch, Alexander

    2009-01-01

    The paper by Rupp and Templin (2008) is an excellent work on the characteristics and features of cognitive diagnostic models (CDM). In this article, the authors comment on some substantial and methodological aspects of this focus paper. They organize their comments by going through issues associated with the terms "cognitive," "diagnostic" and…

  11. Development in methodologies for modelling of human and ecotoxic impacts in LCA

    DEFF Research Database (Denmark)

    Hauschild, Michael Zwicky; Huijbregts, Mark; Jolliet, Olivier;

    2009-01-01

    Under the UNEP-SETAC Life Cycle Initiative there is an aim to develop an internationally backed recommended practice of life cycle impact assessment addressing methodological issues like choice of characterization model and characterization factors. In this context, an international comparison wa...

  12. 3D Buildings Modelling Based on a Combination of Techniques and Methodologies

    NARCIS (Netherlands)

    Pop, G.; Bucksch, A.K.; Gorte, B.G.H.

    2007-01-01

    Three dimensional architectural models are more and more important for a large number of applications. Specialists look for faster and more precise ways to generate them. This paper discusses methods to combine methodologies for handling data acquired from multiple sources: maps, terrestrial laser a

  13. Analysis of Feedback Processes in Online Group Interaction: A Methodological Model

    Science.gov (United States)

    Espasa, Anna; Guasch, Teresa; Alvarez, Ibis M.

    2013-01-01

    The aim of this article is to present a methodological model to analyze students' group interaction to improve their essays in online learning environments, based on asynchronous and written communication. In these environments teacher and student scaffolds for discussion are essential to promote interaction. One of these scaffolds can be the…

  14. Setting road safety targets in Cambodia: a methodological demonstration using the latent risk time series model.

    NARCIS (Netherlands)

    Commandeur, J.J.F.; Wesemann, P.; Bijleveld, F.D.; Chhoun, V. & Sann, S.

    2017-01-01

    The authors present the methodology used for estimating forecasts for the number of road traffic fatalities in 2011–2020 in Cambodia based on observed developments in Cambodian road traffic fatalities and motor vehicle ownership in the years 1995–2009. Using the latent risk time series model
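
    The latent risk model is a bivariate state-space model of exposure and risk. As a simplified univariate stand-in, the sketch below fits a local-linear-trend structural time series model to a fatality series and extrapolates it; the counts are invented and the full bivariate structure is not reproduced:

        import numpy as np
        import statsmodels.api as sm

        # Hypothetical annual road fatality counts, 1995-2009 (made-up numbers).
        y = np.array([824, 860, 905, 951, 1022, 1088, 1167, 1246, 1331,
                      1425, 1545, 1668, 1749, 1817, 1876], dtype=float)

        # Local linear trend on log counts: latent level and slope, both stochastic.
        model = sm.tsa.UnobservedComponents(np.log(y), level="local linear trend")
        res = model.fit(disp=False)

        # Forecast 2010-2020 and transform back to counts.
        fc = res.get_forecast(steps=11)
        print(np.exp(fc.predicted_mean).round())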

  15. Literature Survey of previous research work in Models and Methodologies in Project Management

    Directory of Open Access Journals (Sweden)

    Ravinder Singh

    2014-09-01

    Full Text Available This paper provides a survey of the existing literature and research carried out in the area of project management using different models, methodologies, and frameworks. Project Management (PM) broadly covers programme management, portfolio management, practice management, the project management office, etc. A project management system comprises a set of processes, procedures, frameworks, methods, tools, methodologies, techniques, resources, etc. which are used to manage the full life cycle of projects. This also means creating risk, quality, performance, and other management plans to monitor and manage projects efficiently and effectively.

  16. Numerical Methodology for Metal Forming Processes Using Elastoplastic Model with Damage Occurrence

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Ductile damage often occurs during metal forming processes due to large thermo-elasto(visco)plastic flow localisation. This paper presents a numerical methodology that aims to virtually improve any metal forming process. The methodology is based on elastoplastic constitutive equations accounting for nonlinear mixed isotropic and kinematic hardening, strongly coupled with isotropic ductile damage. An adaptive remeshing scheme based on geometrical and physical error estimates, including a kill-element procedure, is used. Some numerical results are presented to show the capability of the model to predict damage initiation and growth during metal forming processes.

  17. Towards a Pattern-Driven Topical Ontology Modeling Methodology in Elderly Care Homes

    Science.gov (United States)

    Tang, Yan; de Baer, Peter; Zhao, Gang; Meersman, Robert; Pudkey, Kevin

    This paper presents a pattern-driven ontology modeling methodology, which is used to create topical ontologies in the human resource management (HRM) domain. An ontology topic is used to group concepts from different contexts (or even from different domain ontologies). We use the Organization for Economic Co-operation and Development (OECD) and the National Vocational Qualification (NVQ) as the resources to create the topical ontologies in this paper. The methodology is implemented in a tool called the PAD-ON suite. The approach is illustrated with a use case from elderly care homes in the UK.

  18. A new methodology for the development of high-latitude ionospheric climatologies and empirical models

    Science.gov (United States)

    Chisham, G.

    2017-01-01

    Many empirical models and climatologies of high-latitude ionospheric processes, such as convection, have been developed over the last 40 years. One common feature in the development of these models is that measurements from different times are combined and averaged on fixed coordinate grids. This methodology ignores the reality that high-latitude ionospheric features are organized relative to the location of the ionospheric footprint of the boundary between open and closed geomagnetic field lines (OCB). This boundary is in continual motion, and the polar cap that it encloses is continually expanding and contracting in response to changes in the rates of magnetic reconnection at the Earth's magnetopause and in the magnetotail. As a consequence, models that are developed by combining and averaging data in fixed coordinate grids heavily smooth the variations that occur near the boundary location. Here we propose that the development of future models should consider the location of the OCB in order to more accurately model the variations in this region. We present a methodology which involves identifying the OCB from spacecraft auroral images and then organizing measurements in a grid where the bins are placed relative to the OCB location. We demonstrate the plausibility of this methodology using ionospheric vorticity measurements made by the Super Dual Auroral Radar Network radars and OCB measurements from the IMAGE spacecraft FUV auroral imagers. This demonstration shows that this new methodology results in sharpening and clarifying features of climatological maps near the OCB location. We discuss the potential impact of this methodology on space weather applications.
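
    The proposed gridding can be sketched in a few lines: each measurement carries a magnetic latitude, a magnetic local time (MLT) and a value, the OCB latitude at that MLT is taken from an auroral image, and the latitude bins are offsets from the boundary rather than fixed values. All inputs below are synthetic assumptions:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 10000
        mlat = rng.uniform(60.0, 85.0, n)          # magnetic latitude of samples
        mlt = rng.uniform(0.0, 24.0, n)            # magnetic local time, hours
        vort = rng.normal(0.0, 1.0, n)             # e.g. ionospheric vorticity

        def ocb_lat(mlt_hours):
            """Assumed OCB latitude vs MLT (a circle offset towards midnight)."""
            return 74.0 + 3.0 * np.cos(2.0 * np.pi * mlt_hours / 24.0)

        # Organize measurements relative to the boundary, not to fixed latitude.
        dlat = mlat - ocb_lat(mlt)                 # degrees poleward of the OCB
        lat_edges = np.arange(-10.0, 10.5, 1.0)
        mlt_edges = np.arange(0.0, 25.0, 1.0)
        sums, _, _ = np.histogram2d(dlat, mlt, bins=(lat_edges, mlt_edges),
                                    weights=vort)
        counts, _, _ = np.histogram2d(dlat, mlt, bins=(lat_edges, mlt_edges))
        mean_map = np.divide(sums, counts, out=np.full_like(sums, np.nan),
                             where=counts > 0)
        print(mean_map.shape)   # (20, 24) boundary-relative climatology grid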

  19. A Dynamic Defense Modeling and Simulation Methodology using Semantic Web Services

    Directory of Open Access Journals (Sweden)

    Kangsun Lee

    2010-04-01

    Full Text Available Defense modeling and simulation requires interoperable and autonomous federates in order to fully simulate the complex behavior of war-fighters and to adapt dynamically to various war-game events, commands and controls. In this paper, we propose a semantic web service based methodology for developing war-game simulations. Our methodology encapsulates war-game logic in a set of web services with additional semantic information in WSDL (Web Service Description Language) and OWL (Web Ontology Language). By utilizing the dynamic discovery and binding power of semantic web services, we are able to dynamically reconfigure federates according to various simulation events. An ASuW (Anti-Surface Warfare) simulator is constructed to demonstrate the methodology and successfully shows that the level of interoperability and autonomy can be greatly improved.

  20. An improved methodology for dynamic modelling and simulation of electromechanically coupled drive systems: An experimental validation

    Indian Academy of Sciences (India)

    Nuh Erdogan; Humberto Henao; Richard Grisel

    2015-10-01

    The complexity of electromechanically coupled drive systems (ECDSs), specifically electrical drive systems, makes studying them in their entirety challenging, since they consist of elements of diverse nature, i.e. electric, electronic and mechanical. This poses a real challenge to engineers who want to design and implement such systems with high performance, efficiency and reliability. For this purpose, engineers need a tool capable of modelling and/or simulating components of diverse nature within the ECDS. However, the majority of available tools are limited in their capacity to describe the characteristics of such components sufficiently. To overcome this difficulty, this paper first proposes an improved methodology for the modelling and simulation of ECDSs. The approach is based on using domain-specific simulators individually, namely electric and mechanical part simulators, and integrating them in a co-simulation. As for the modelling of the drive machine, a finely tuned dynamic model is developed by taking the saturation effect into account. In order to validate the developed model as well as the proposed methodology, an industrial ECDS is tested experimentally. The experimental and simulation results are then compared to demonstrate the accuracy of the developed model and the relevance of the proposed methodology.
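
    The co-simulation idea can be miniaturized as two domain solvers that integrate independently and exchange coupling variables (torque-producing current and speed) at fixed macro time steps. Both subsystem models below are toy first-order stand-ins, not the finely tuned machine model of the paper:

        # Toy DC-machine electrical model: L*di/dt = v - R*i - ke*w
        R, L, ke, kt = 1.0, 0.05, 0.1, 0.1
        # Toy mechanical model: J*dw/dt = kt*i - b*w - T_load
        J, b, T_load = 0.01, 0.02, 0.05

        def electrical_step(i, w, v, dt, substeps=10):
            """Electric-domain solver; w is held frozen over the macro step."""
            h = dt / substeps
            for _ in range(substeps):
                i += h * (v - R * i - ke * w) / L
            return i

        def mechanical_step(w, i, dt, substeps=10):
            """Mechanical-domain solver; i is held frozen over the macro step."""
            h = dt / substeps
            for _ in range(substeps):
                w += h * (kt * i - b * w - T_load) / J
            return w

        i = w = 0.0
        dt = 1e-3               # macro step at which the two solvers exchange data
        for _ in range(2000):   # simulate 2 s
            i = electrical_step(i, w, v=12.0, dt=dt)
            w = mechanical_step(w, i, dt=dt)
        # Converges towards the analytic steady state i ~ 8.2 A, w ~ 38.3 rad/s.
        print(f"i = {i:.2f} A, w = {w:.1f} rad/s")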

  1. A coupled groundwater-flow-modelling and vulnerability-mapping methodology for karstic terrain management

    Science.gov (United States)

    Kavouri, Konstantina P.; Karatzas, George P.; Plagnes, Valérie

    2017-02-01

    A coupled groundwater-flow-modelling and vulnerability-mapping methodology for the management of karst aquifers with spatial variability is developed. The methodology takes into consideration the duality of flow and recharge in karst and introduces a simple method to integrate the effect of temporal storage in the unsaturated zone. In order to investigate the applicability of the developed methodology, simulation results are validated against available field measurement data. The criteria maps from the PaPRIKa vulnerability-mapping method are used to document the groundwater flow model. The FEFLOW model is employed for the simulation of the saturated zone of the Palaikastro-Chochlakies karst aquifer, on the island of Crete, Greece, for the hydrological years 2010-2012. The simulated water table reproduces typical karst characteristics, such as steep slopes and preferred drain axes, and is in good agreement with field observations. Selected calculated error indicators—Nash-Sutcliffe efficiency (NSE), root mean squared error (RMSE) and model efficiency (E')—are within acceptable value ranges. Results indicate that different storage processes take place in different parts of the aquifer. The north-central part seems to be more sensitive to diffuse recharge, while the southern part is affected primarily by precipitation events. Sensitivity analysis is performed on the parameters of hydraulic conductivity and specific yield. The methodology is used to estimate the feasibility of artificial aquifer recharge (AAR) in the study area. Based on the developed methodology, guidelines were provided for the selection of the appropriate AAR scenario that has a positive impact on the water table.
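
    For reference, the two main error indicators quoted can be computed as follows; obs and sim are hypothetical head series, not the study's data:

        import numpy as np

        def rmse(obs, sim):
            return np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2))

        def nse(obs, sim):
            """Nash-Sutcliffe efficiency: 1 is a perfect fit, <0 is worse than the mean."""
            obs, sim = np.asarray(obs), np.asarray(sim)
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        obs = np.array([12.1, 11.8, 11.2, 10.9, 11.5, 12.4])   # observed heads (m)
        sim = np.array([12.0, 11.6, 11.4, 10.7, 11.8, 12.2])   # simulated heads (m)
        print(f"RMSE = {rmse(obs, sim):.3f} m, NSE = {nse(obs, sim):.3f}")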

  3. An integrated measurement and modeling methodology for estuarine water quality management

    Institute of Scientific and Technical Information of China (English)

    Michael Hartnett; Stephen Nash

    2015-01-01

    This paper describes research undertaken by the authors to develop an integrated measurement and modeling methodology for water quality management of estuaries. The approach developed utilizes modeling and measurement results in a synergistic manner. Modeling results were initially used to inform the field campaign of appropriate sampling locations and times, and field data were used to develop accurate models. Remote sensing techniques were used to capture data for both model development and model validation. Field surveys were undertaken to provide model initial conditions through data assimilation and determine nutrient fluxes into the model domain. From field data, salinity relationships were developed with various water quality parameters, and relationships between chlorophyll a concentrations, transparency, and light attenuation were also developed. These relationships proved to be invaluable in model development, particularly in modeling the growth and decay of chlorophyll a. Cork Harbour, an estuary that regularly experiences summer algal blooms due to anthropogenic sources of nutrients, was used as a case study to develop the methodology. The integration of remote sensing, conventional fieldwork, and modeling is one of the novel aspects of this research and the approach developed has widespread applicability.

  4. A methodology to annotate systems biology markup language models with the synthetic biology open language.

    Science.gov (United States)

    Roehner, Nicholas; Myers, Chris J

    2014-02-21

    Recently, we have begun to witness the potential of synthetic biology, noted here in the form of bacteria and yeast that have been genetically engineered to produce biofuels, manufacture drug precursors, and even invade tumor cells. The success of these projects, however, has often failed in translation and application to new projects, a problem exacerbated by a lack of engineering standards that combine descriptions of the structure and function of DNA. To address this need, this paper describes a methodology to connect the systems biology markup language (SBML) to the synthetic biology open language (SBOL), existing standards that describe biochemical models and DNA components, respectively. Our methodology involves first annotating SBML model elements such as species and reactions with SBOL DNA components. A graph is then constructed from the model, with vertices corresponding to elements within the model and edges corresponding to the cause-and-effect relationships between these elements. Lastly, the graph is traversed to assemble the annotating DNA components into a composite DNA component, which is used to annotate the model itself and can be referenced by other composite models and DNA components. In this way, our methodology can be used to build up a hierarchical library of models annotated with DNA components. Such a library is a useful input to any future genetic technology mapping algorithm that would automate the process of composing DNA components to satisfy a behavioral specification. Our methodology for SBML-to-SBOL annotation is implemented in the latest version of our genetic design automation (GDA) software tool, iBioSim.
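
    The graph construction and traversal can be illustrated in miniature: annotated model elements become vertices, cause-and-effect relations become edges, and a topological walk assembles the composite DNA component. This is a schematic stand-in with invented part names, not iBioSim's actual data structures:

        from graphlib import TopologicalSorter

        # Model elements annotated with SBOL DNA components (names hypothetical).
        annotation = {
            "pTet":  "promoter_pTet",
            "rbs1":  "rbs_B0034",
            "lacI":  "cds_LacI",
            "term1": "terminator_B0015",
        }
        # Edges encode cause-and-effect order within the model: each key's
        # predecessors must precede it in the assembled composite.
        edges = {"rbs1": {"pTet"}, "lacI": {"rbs1"}, "term1": {"lacI"}}

        # Traverse the graph and concatenate the annotating components in order.
        order = list(TopologicalSorter(edges).static_order())
        composite = [annotation[v] for v in order]
        print(composite)
        # ['promoter_pTet', 'rbs_B0034', 'cds_LacI', 'terminator_B0015']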

  5. Early diagnosis and Early Start Denver Model intervention in autism spectrum disorders delivered in an Italian Public Health System service

    Directory of Open Access Journals (Sweden)

    Devescovi R

    2016-06-01

    Full Text Available Raffaella Devescovi,1 Lorenzo Monasta,2 Alice Mancini,3 Maura Bin,1 Valerio Vellante,1 Marco Carrozzi,1 Costanza Colombi4 1Division of Child Neurology and Psychiatry, 2Clinical Epidemiology and Public Health Research Unit, Institute for Maternal and Child Health – IRCCS “Burlo Garofolo”, Trieste, 3Department of Clinical and Experimental Medicine, University of Pisa, Pisa, Italy; 4Department of Psychiatry, University of Michigan Health System, Ann Arbor, MI, USA Background: Early diagnosis combined with an early intervention program, such as the Early Start Denver Model (ESDM), can positively influence the early natural history of autism spectrum disorders. This study evaluated the effectiveness of an early ESDM-inspired intervention, in a small group of toddlers, delivered at low intensity by the Italian Public Health System. Methods: Twenty-one toddlers at risk for autism spectrum disorders, aged 20–36 months, received 3 hours/wk of one-to-one ESDM-inspired intervention by trained therapists, combined with parents’ and teachers’ active engagement in ecological implementation of treatment. The mean duration of treatment was 15 months. Cognitive and communication skills, as well as severity of autism symptoms, were assessed by using standardized measures at pre-intervention (Time 0 [T0]; mean age = 27 months) and post-intervention (Time 1 [T1]; mean age = 42 months). Results: Children made statistically significant improvements in the language and cognitive domains, as demonstrated by a series of nonparametric Wilcoxon tests for paired data. Regarding severity of autism symptoms, younger age at diagnosis was positively associated with greater improvement at post-assessment. Conclusion: Our results are consistent with the literature that underlines the importance of early diagnosis and early intervention, since prompt diagnosis can reduce the severity of autism symptoms and improve cognitive and language skills in younger children.
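
    The pre/post comparison described corresponds to a Wilcoxon signed-rank test for paired data, sketched here on invented scores:

        import numpy as np
        from scipy.stats import wilcoxon

        # Hypothetical developmental scores for the same children at T0 and T1.
        t0 = np.array([62, 55, 70, 48, 66, 59, 73, 51])
        t1 = np.array([71, 60, 78, 55, 70, 68, 80, 58])

        stat, p = wilcoxon(t0, t1)   # nonparametric test for paired data
        print(f"W = {stat}, p = {p:.4f}")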

  6. A NUMERICAL AND ANALYTICAL STUDY ON A TAIL-FLAPPING MODEL FOR FISH FAST C-START

    Institute of Scientific and Technical Information of China (English)

    HU Wenrong; YU Yongliang; TONG Binggang; LIU Hao

    2004-01-01

    The force production physics and the flow control mechanism of fish fast C-starts are studied numerically and theoretically by using a tail-flapping model. The problem is simplified to a 2-D foil that rotates rapidly to and fro on one side about its fixed leading edge in a water medium. The study involves simulation of the flow by solving the two-dimensional unsteady incompressible Navier-Stokes equations and employing a theoretical analytical modeling approach. Firstly, reasonable thrust magnitudes and time histories are obtained and cross-checked by comparing the predictions of the two approaches. Next, the flow fields and vortex structures are given, and the propulsive mechanism is interpreted. The results show that the induction of vortex distributions near the trailing edge of the tail is important in the time-averaged thrust generation, though the added-inertia effect plays an important role in producing an instant large thrust, especially in the first stage. Furthermore, the dynamic and energetic effects of some kinematic control factors are discussed. For enhancing the time-averaged thrust while keeping a favorable ratio of thrust to time-averaged input power within the limits of muscle ability, it is recommended to have a larger deflection amplitude in a limited time interval and no time delay between the to-and-fro strokes.

  7. Synthesis of semantic modelling and risk analysis methodology applied to animal welfare.

    Science.gov (United States)

    Bracke, M B M; Edwards, S A; Metz, J H M; Noordhuizen, J P T M; Algers, B

    2008-07-01

    Decision-making on animal welfare issues requires a synthesis of information. For the assessment of farm animal welfare based on scientific information collected in a database, a methodology called 'semantic modelling' has been developed. To date, however, this methodology has not been generally applied. Recently, a qualitative Risk Assessment approach has been published by the European Food Safety Authority (EFSA) for the first time, concerning the welfare of intensively reared calves. This paper reports on a critical analysis of this Risk Assessment (RA) approach from a semantic-modelling (SM) perspective, emphasizing the importance of several seemingly self-evident principles, including the definition of concepts, application of explicit methodological procedures and specification of how underlying values and scientific information lead to the RA output. In addition, the need to include positive aspects of welfare and overall welfare assessments are emphasized. The analysis shows that the RA approach for animal welfare could benefit from SM methodology to support transparent and science-based decision-making.

  8. A fault diagnosis methodology for rolling element bearings based on advanced signal pretreatment and autoregressive modelling

    Science.gov (United States)

    Al-Bugharbee, Hussein; Trendafilova, Irina

    2016-05-01

    This study proposes a methodology for rolling element bearing fault diagnosis which gives a complete and highly accurate identification of the faults present. It has two main stages: signal pretreatment, which is based on several signal analysis procedures, and diagnosis, which uses a pattern-recognition process. The first stage is principally based on linear time-invariant autoregressive modelling. One of the main contributions of this investigation is the development of a pretreatment signal analysis procedure which subjects the signal to noise cleaning by singular spectrum analysis and then stationarisation by differencing. The signal is thus transformed to bring it close to a stationary one, rather than complicating the model to bring it closer to the signal. This type of pretreatment allows the use of a linear time-invariant autoregressive model and improves its performance when the original signals are non-stationary. This contribution is at the heart of the proposed method, and the high accuracy of the diagnosis is a result of this procedure. The methodology emphasises the importance of preliminary noise cleaning and stationarisation, and it demonstrates that the information needed for fault identification is contained in the stationary part of the measured signal. The methodology is further validated using three different experimental setups, demonstrating very high accuracy for all of the applications: it is able to correctly classify nearly 100 percent of the faults with regard to their type and size. This high accuracy is the other important contribution of this methodology. Thus, this research offers a highly accurate methodology for rolling element bearing fault diagnosis based on relatively simple procedures, which is also an advantage, as the simplicity of the individual processes ensures easy application and the possibility of automating the entire process.
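
    A compact sketch of the pretreatment-plus-modelling chain: singular-spectrum noise cleaning, differencing to stationarise, then a linear time-invariant AR fit whose coefficients can feed a pattern-recognition stage. The window length, lag order and synthetic signal are assumptions:

        import numpy as np
        from statsmodels.tsa.ar_model import AutoReg

        def ssa_denoise(x, window=30, rank=5):
            """Keep the leading singular components of the trajectory matrix."""
            n = len(x)
            k = n - window + 1
            X = np.column_stack([x[i:i + window] for i in range(k)])  # Hankel
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            # Diagonal averaging back to a 1-D series.
            out, cnt = np.zeros(n), np.zeros(n)
            for i in range(window):
                for j in range(k):
                    out[i + j] += Xr[i, j]
                    cnt[i + j] += 1.0
            return out / cnt

        rng = np.random.default_rng(2)
        t = np.arange(2048) / 2048.0
        raw = np.sin(2 * np.pi * 37 * t) + 5.0 * t + 0.5 * rng.normal(size=t.size)

        clean = ssa_denoise(raw)             # 1) noise cleaning by SSA
        stat = np.diff(clean)                # 2) stationarisation by differencing
        ar = AutoReg(stat, lags=8).fit()     # 3) LTI autoregressive model
        features = ar.params[1:]             # AR coefficients -> classifier input
        print(np.round(features, 3))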

  9. An Application of the PMI Model at the Project Level: Evaluation of the ESEA Title IV C Fresh Start Minischool Project.

    Science.gov (United States)

    Harrison, Patricia C.

    The Planning, Monitoring, and Implementation Model (PMI) was developed to provide a model for systematic evaluation of educational programs to determine their effectiveness in achieving goals and objectives. This paper demonstrates the applicability of the PMI model at the project level. Fresh Start Minischool at Ballou High School (District of…

  10. Methodology for Constructing Reduced-Order Power Block Performance Models for CSP Applications: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, M.

    2010-10-01

    The inherent variability of the solar resource presents a unique challenge for CSP systems. Incident solar irradiation can fluctuate widely over a short time scale, but plant performance must be assessed for long time periods. As a result, annual simulations with hourly (or sub-hourly) timesteps are the norm in CSP analysis. A highly detailed power cycle model provides accuracy but tends to suffer from prohibitively long run-times; alternatively, simplified empirical models can run quickly but don't always provide enough information, accuracy, or flexibility for the modeler. The ideal model for feasibility-level analysis incorporates both the detail and accuracy of a first-principle model with the low computational load of a regression model. The work presented in this paper proposes a methodology for organizing and extracting information from the performance output of a detailed model, then using it to develop a flexible reduced-order regression model in a systematic and structured way. A similar but less generalized approach for characterizing power cycle performance and a reduced-order modeling methodology for CFD analysis of heat transfer from electronic devices have been presented. This paper builds on these publications and the non-dimensional approach originally described.
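
    The reduced-order step can be pictured as fitting a low-order polynomial response surface to samples of a detailed cycle model; here the 'detailed model' is a stand-in function and the chosen regressors (normalized HTF temperature and ambient temperature) are assumptions:

        import numpy as np

        def detailed_cycle_model(t_htf, t_amb):
            """Stand-in for an expensive first-principles power-cycle model."""
            return (0.42 - 0.05 * (1 - t_htf) ** 2
                    - 0.002 * (t_amb - 25.0) - 1e-4 * (t_amb - 25.0) ** 2)

        # Sample the detailed model over the operating envelope.
        rng = np.random.default_rng(3)
        t_htf = rng.uniform(0.8, 1.0, 200)      # normalized HTF temperature
        t_amb = rng.uniform(0.0, 45.0, 200)     # ambient temperature, deg C
        eta = detailed_cycle_model(t_htf, t_amb)

        # Quadratic response surface: eta ~ [1, x1, x2, x1^2, x2^2, x1*x2].
        A = np.column_stack([np.ones_like(t_htf), t_htf, t_amb,
                             t_htf**2, t_amb**2, t_htf * t_amb])
        coef, *_ = np.linalg.lstsq(A, eta, rcond=None)

        # The regression model now stands in for the detailed model in
        # annual (hourly) simulations.
        x = np.array([1.0, 0.95, 30.0, 0.95**2, 30.0**2, 0.95 * 30.0])
        print(f"predicted efficiency: {x @ coef:.4f}")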

  11. The start of the Sagittarius spiral arm (Sagittarius origin) and the start of the Norma spiral arm (Norma origin) - model-computed and observed arm tangents at galactic longitudes -20 degrees < l < +23 degrees

    CERN Document Server

    Vallee, Jacques P

    2016-01-01

    Here we fitted a 4-arm spiral structure to the more accurate data on global arm pitch angle and arm longitude tangents, to get the start of each spiral arm near the Galactic nucleus. We find that the tangent to the 'start of the Sagittarius' spiral arm (arm middle) is at l= -17 degrees +/- 0.5 degree, while the tangent to the 'start of the Norma' spiral arm (arm middle) is at l= +20 degrees +/- 0.5 degree. Earlier, we published a compilation of observations and analysis of the tangent to each spiral arm tracer, from longitudes +23 degrees to +340 degrees; here we cover the arm tracers in the remaining longitudes +340 degrees (=- 20 degrees) to +23 degrees. Our model arm tangents are confirmed through the recent observed masers data (at the arm's inner edge). Observed arm tracers in the inner Galaxy show an offset from the mid-arm; this was also found elsewhere in the Milky Way disk (Vallee 2014c). In addition, we collated the observed tangents to the so-called '3-kpc-arm' features; here they are found statist...

  12. A methodology for the design of experiments in computational intelligence with multiple regression models.

    Science.gov (United States)

    Fernandez-Lozano, Carlos; Gestal, Marcos; Munteanu, Cristian R; Dorado, Julian; Pazos, Alejandro

    2016-01-01

    The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence and especially on a correct comparison between the different results provided for different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational Intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets, and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant with this kind of algorithm. Furthermore, our results with three real complex datasets report different best models than the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as in other fields, such as bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.
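
    The comparison step can be reproduced in miniature with scikit-learn in place of RRegrs: two regressors are scored on identical cross-validation folds and the per-fold differences are tested nonparametrically:

        import numpy as np
        from scipy.stats import wilcoxon
        from sklearn.datasets import make_regression
        from sklearn.linear_model import Lasso
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import KFold, cross_val_score

        X, y = make_regression(n_samples=300, n_features=20, noise=10.0,
                               random_state=0)
        cv = KFold(n_splits=10, shuffle=True, random_state=0)  # identical folds

        r2_lasso = cross_val_score(Lasso(alpha=1.0), X, y, cv=cv, scoring="r2")
        r2_rf = cross_val_score(RandomForestRegressor(random_state=0), X, y,
                                cv=cv, scoring="r2")

        stat, p = wilcoxon(r2_lasso, r2_rf)   # is the difference significant?
        print(f"Lasso {r2_lasso.mean():.3f} vs RF {r2_rf.mean():.3f}, p = {p:.4f}")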

  13. A methodology for the design of experiments in computational intelligence with multiple regression models

    Directory of Open Access Journals (Sweden)

    Carlos Fernandez-Lozano

    2016-12-01

    Full Text Available The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence and especially on a correct comparison between the different results provided for different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational Intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results are different for three out of five state-of-the-art simple datasets, and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is of relevance to use a statistical approach to indicate whether the differences are statistically significant with this kind of algorithm. Furthermore, our results with three real complex datasets report different best models than the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as in other fields, such as bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.

  14. A biosphere modeling methodology for dose assessments of the potential Yucca Mountain deep geological high level radioactive waste repository.

    Science.gov (United States)

    Watkins, B M; Smith, G M; Little, R H; Kessler, J

    1999-04-01

    Recent developments in performance standards for proposed high level radioactive waste disposal at Yucca Mountain suggest that health risk or dose rate limits will likely be part of future standards. Approaches to the development of biosphere modeling and dose assessments for Yucca Mountain have been relatively lacking in previous performance assessments due to the absence of such a requirement. This paper describes a practical methodology used to develop a biosphere model appropriate for calculating doses from use of well water by hypothetical individuals due to discharges of contaminated groundwater into a deep well. The biosphere model methodology, developed in parallel with the BIOMOVS II international study, allows a transparent recording of the decisions at each step, from the specification of the biosphere assessment context through to model development and analysis of results. A list of features, events, and processes relevant to Yucca Mountain was recorded and an interaction matrix developed to help identify relationships between them. Special consideration was given to critical/potential exposure group issues and approaches. The conceptual model of the biosphere system was then developed, based on the interaction matrix, to show how radionuclides migrate and accumulate in the biosphere media and result in potential exposure pathways. A mathematical dose assessment model was specified using the flexible AMBER software application, which allows users to construct their own compartment models. The starting point for the biosphere calculations was a unit flux of each radionuclide from the groundwater in the geosphere into the drinking water in the well. For each of the 26 radionuclides considered, the most significant exposure pathways for hypothetical individuals were identified. For 14 of the radionuclides, the primary exposure pathways were identified as consumption of various crops and animal products following assumed agricultural use of the contaminated

  15. A methodology for including wall roughness effects in k-ε low-Reynolds turbulence models

    Energy Technology Data Exchange (ETDEWEB)

    Ambrosini, W., E-mail: walter.ambrosini@ing.unipi.it; Pucciarelli, A.; Borroni, I.

    2015-05-15

    Highlights: • A model for taking into account wall roughness in low-Reynolds k-ε models is presented. • The model is subjected to a first validation to show its potential in general applications. • The application of the model in predicting heat transfer to supercritical fluids is also discussed. - Abstract: A model accounting for wall roughness effects in k-ε low-Reynolds turbulence models is described in the present paper. In particular, the introduction in the transport equations of k and ε of additional source terms related to roughness, based on simple assumptions and dimensional relationships, is proposed. An objective of the present paper, in addition to obtaining more realistic predictions of wall friction, is the application of the proposed model to the study of heat transfer to supercritical fluids. A first validation of the model is reported. The model shows the capability of predicting, at least qualitatively, some of the most important trends observed when dealing with rough pipes in very different flow conditions. Qualitative comparisons with some DNS data available in literature are also performed. Further analyses provided promising results concerning the ability of the model in reproducing the trend of friction factor when varying the flow conditions, though improvements are necessary for achieving better quantitative accuracy. First applications of the model in simulating heat transfer to supercritical fluids are also described, showing the capability of the model to affect the predictions of these heat transfer phenomena, in particular in the vicinity of the pseudo-critical conditions. A more extended application of the model to relevant deteriorated heat transfer conditions will clarify the usefulness of this modelling methodology in improving predictions of these difficult phenomena. Whatever the possible success in this particular application that motivated its development, this approach suggests a general methodology for accounting

  16. An Intelligent Response Surface Methodology for Modeling of Domain Level Constraints

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    An effective method for modeling domain-level constraints in the constraint network for concurrent engineering (CE) was developed. The domain-level constraints were analyzed, and a framework for modeling domain-level constraints based on simulation and approximation technology was given. An intelligent response surface methodology (IRSM) was proposed, in which artificial intelligence technologies are introduced into the optimization process. The design of the crank and connecting rod of a V6 engine was given as an example to show the validity of the modeling method.

  17. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    Science.gov (United States)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response planning. Large commercial shopping areas are typical service systems, and their emergency evacuation is a topical research problem. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model has been proposed, and the methodology has been examined within the context of a case study involving evacuation within a commercial shopping mall. Pedestrian movement is based on the Cellular Automata and event-driven models, and the simulation process is divided into a normal situation and an emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. For the simulation of pedestrian movement routes, the model takes into account the purchase intentions of customers and the density of pedestrians. Based on the evacuation model combining Cellular Automata with a Dynamic Floor Field and the event-driven model, the behavioral characteristics of customers and clerks can be reflected in both normal and emergency situations. The distribution of individual evacuation times as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that an evacuation model combining Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of a shopping mall.
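
    A minimal sketch of the movement rule in a static floor field CA: the field stores the walking distance to the exit, and each pedestrian steps to the neighbouring free cell with the smallest field value. The geometry and the sequential update order are simplified assumptions:

        import numpy as np
        from collections import deque

        grid = np.zeros((10, 12), dtype=int)     # 0 free, 1 wall
        grid[4, 2:9] = 1                         # an internal obstacle
        exit_cell = (9, 6)

        # Static floor field: BFS distance from the exit around obstacles.
        field = np.full(grid.shape, np.inf)
        field[exit_cell] = 0.0
        q = deque([exit_cell])
        while q:
            r, c = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]
                        and grid[rr, cc] == 0 and field[rr, cc] == np.inf):
                    field[rr, cc] = field[r, c] + 1.0
                    q.append((rr, cc))

        def step(pos, occupied):
            """Move one pedestrian towards the exit, avoiding occupied cells."""
            r, c = pos
            nbrs = [(r + dr, c + dc)
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))]
            nbrs = [p for p in nbrs
                    if 0 <= p[0] < grid.shape[0] and 0 <= p[1] < grid.shape[1]
                    and grid[p] == 0 and p not in occupied]
            return min(nbrs + [pos], key=lambda p: field[p])

        peds = {(0, 1), (2, 10), (1, 5)}
        for _ in range(25):   # sequential update (collision handling simplified)
            peds = {step(p, peds - {p}) if p != exit_cell else p for p in peds}
        print(peds)           # pedestrians at or near the exit cell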

  18. A rigorous methodology for development and uncertainty analysis of group contribution based property models

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan

    Property prediction models are a fundamental tool of process modeling and analysis, especially at the early stage of process development. Furthermore, property prediction models are the fundamental tool for computer-aided molecular design used for the development of new refrigerants. Group contribution (GC) based prediction methods use structurally dependent parameters in order to determine the property of pure components. The aim of the GC parameter estimation is to find the best possible set of model parameters that fits the experimental data. In that sense, there is often a lack of attention... The GC model uses the Marrero-Gani (MR) method, which considers the group contribution at different levels, both functional and structural. The methodology helps improve the accuracy and reliability of property modeling and provides a rigorous model quality check and assurance. This is expected to further...

  19. 2014 International Conference on Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Ören, Tuncer; Kacprzyk, Janusz; Filipe, Joaquim

    2015-01-01

    The present book includes a set of selected extended papers from the 4th International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2014), held in Vienna, Austria, from 28 to 30 August 2014. The conference brought together researchers, engineers and practitioners interested in methodologies and applications of modeling and simulation. New and innovative solutions are reported in this book. SIMULTECH 2014 received 167 submissions, from 45 countries, in all continents. After a double blind paper review performed by the Program Committee, 23% were accepted as full papers and thus selected for oral presentation. Additional papers were accepted as short papers and posters. A further selection was made after the Conference, based also on the assessment of presentation quality and audience interest, so that this book includes the extended and revised versions of the very best papers of SIMULTECH 2014. Commitment to high quality standards is a major concern of SIMULTEC...

  20. 5th International Conference on Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Kacprzyk, Janusz; Ören, Tuncer; Filipe, Joaquim

    2016-01-01

    The present book includes a set of selected extended papers from the 5th International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2015), held in Colmar, France, from 21 to 23 July 2015. The conference brought together researchers, engineers and practitioners interested in methodologies and applications of modeling and simulation. New and innovative solutions are reported in this book. SIMULTECH 2015 received 102 submissions, from 36 countries, in all continents. After a double blind paper review performed by the Program Committee, 19% were accepted as full papers and thus selected for oral presentation. Additional papers were accepted as short papers and posters. A further selection was made after the Conference, based also on the assessment of presentation quality and audience interest, so that this book includes the extended and revised versions of the very best papers of SIMULTECH 2015. Commitment to high quality standards is a major concern of SIMULTECH t...

  1. A methodology for modeling photocatalytic reactors for indoor pollution control using previously estimated kinetic parameters

    Energy Technology Data Exchange (ETDEWEB)

    Passalia, Claudio; Alfano, Orlando M. [INTEC - Instituto de Desarrollo Tecnologico para la Industria Quimica, CONICET - UNL, Gueemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingenieria y Ciencias Hidricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina); Brandi, Rodolfo J., E-mail: rbrandi@santafe-conicet.gov.ar [INTEC - Instituto de Desarrollo Tecnologico para la Industria Quimica, CONICET - UNL, Gueemes 3450, 3000 Santa Fe (Argentina); FICH - Departamento de Medio Ambiente, Facultad de Ingenieria y Ciencias Hidricas, Universidad Nacional del Litoral, Ciudad Universitaria, 3000 Santa Fe (Argentina)

    2012-04-15

    Highlights: • Indoor pollution control via photocatalytic reactors. • Scaling-up methodology based on previously determined mechanistic kinetics. • Radiation interchange model between catalytic walls using configuration factors. • Modeling and experimental validation of a complex geometry photocatalytic reactor. - Abstract: A methodology for modeling photocatalytic reactors for their application in indoor air pollution control is carried out. The methodology implies, firstly, the determination of intrinsic reaction kinetics for the removal of formaldehyde. This is achieved by means of a simple geometry, continuous reactor operating under kinetic control regime and steady state. The kinetic parameters were estimated from experimental data by means of a nonlinear optimization algorithm. The second step was the application of the obtained kinetic parameters to a very different photoreactor configuration. In this case, the reactor is a corrugated wall type using nanosize TiO2 as catalyst irradiated by UV lamps that provided a spatially uniform radiation field. The radiative transfer within the reactor was modeled through a superficial emission model for the lamps, the ray tracing method and the computation of view factors. The velocity and concentration fields were evaluated by means of a commercial CFD tool (Fluent 12) where the radiation model was introduced externally. The results of the model were compared experimentally in a corrugated wall, bench scale reactor constructed in the laboratory. The overall pollutant conversion showed good agreement between model predictions and experiments, with a root mean square error less than 4%.
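
    The first step, estimating intrinsic kinetics from a kinetically controlled reactor, reduces to nonlinear parameter estimation. Below is a generic sketch fitting a Langmuir-Hinshelwood-type rate law to synthetic data; the rate form and numbers are placeholders, not the paper's formaldehyde kinetics:

        import numpy as np
        from scipy.optimize import curve_fit

        def rate(c, k, K):
            """Langmuir-Hinshelwood-type surface rate, mol m-3 s-1."""
            return k * K * c / (1.0 + K * c)

        # Synthetic steady-state rate measurements vs concentration.
        rng = np.random.default_rng(4)
        c = np.linspace(0.05, 2.0, 15)                 # mol m-3
        r_obs = rate(c, 3.2e-3, 1.8) * (1 + 0.03 * rng.normal(size=c.size))

        popt, pcov = curve_fit(rate, c, r_obs, p0=(1e-3, 1.0))
        perr = np.sqrt(np.diag(pcov))                  # 1-sigma estimates
        print(f"k = {popt[0]:.2e} +/- {perr[0]:.1e}, "
              f"K = {popt[1]:.2f} +/- {perr[1]:.2f}")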

  2. Reducing maladaptive behaviors in preschool-aged children with autism spectrum disorder using the Early Start Denver Model.

    Science.gov (United States)

    Fulton, Elizabeth; Eapen, Valsamma; Crnčec, Rudi; Walter, Amelia; Rogers, Sally

    2014-01-01

    The presence of maladaptive behaviors in young people with autism spectrum disorder (ASD) can significantly limit engagement in treatment programs, as well as compromise future educational and vocational opportunities. This study aimed to explore whether the Early Start Denver Model (ESDM) treatment approach reduced maladaptive behaviors in preschool-aged children with ASD in a community-based long day care setting. The level of maladaptive behavior of 38 children with ASD was rated using an observation-based measure on three occasions during the intervention: on entry, 12 weeks post-entry, and on exit (post-intervention) over an average treatment duration of 11.8 months. Significant reductions were found in children's maladaptive behaviors over the course of the intervention, with 68% of children showing a treatment response by 12 weeks and 79% on exit. This change was accompanied by improvement in children's overall developmental level as assessed by the Mullen scales of early learning, but not by significant changes on the Vineland Adaptive Behavior Scales-II or Social Communication Questionnaire. Replication with a larger sample, control conditions, and additional measures of maladaptive behavior is necessary in order to determine the specific factors underlying these improvements; however, the findings of the present study suggest that the ESDM program may be effective in improving not only core developmental domains, but also decreasing maladaptive behaviors in preschool-aged children with ASD.

  3. Reducing maladaptive behaviors in preschool-aged children with Autism Spectrum Disorder using the Early Start Denver Model

    Directory of Open Access Journals (Sweden)

    Elizabeth eFulton

    2014-05-01

    Full Text Available The presence of maladaptive behaviors in young people with Autism Spectrum Disorder (ASD) can significantly limit engagement in treatment programs, as well as compromise future educational and vocational opportunities. This study aimed to explore whether the Early Start Denver Model (ESDM) treatment approach reduced maladaptive behaviors in preschool-aged children with ASD in a community-based long day care setting. The level of maladaptive behavior of 38 children with ASD was rated using an observation based measure on three occasions during the intervention: on entry, 12 weeks post-entry, and on exit (post-intervention) over an average treatment duration of 11.8 months. Significant reductions were found in children’s maladaptive behaviors over the course of the intervention, with 68% of children showing a treatment response by 12 weeks and 79% on exit. This change was accompanied by improvement in children’s overall developmental level as assessed by the Mullen Scales of Early Learning, but not by significant changes on the Vineland Adaptive Behavior Scales-II or Social Communication Questionnaire. Replication with a larger sample, control conditions and additional measures of maladaptive behavior is necessary in order to determine the specific factors underlying these improvements; however, the findings of the present study suggest that the ESDM program may be effective in improving not only core developmental domains, but also decreasing maladaptive behaviors in preschool-aged children.

  4. A Model-Based Methodology for Spray-Drying Process Development

    OpenAIRE

    Dobry, Dan E.; Settell, Dana M.; Baumann, John M.; Ray, Rod J.; Graham, Lisa J; Beyerinck, Ron A.

    2009-01-01

    Solid amorphous dispersions are frequently used to improve the solubility and, thus, the bioavailability of poorly soluble active pharmaceutical ingredients (APIs). Spray-drying, a well-characterized pharmaceutical unit operation, is ideally suited to producing solid amorphous dispersions due to its rapid drying kinetics. This paper describes a novel flowchart methodology based on fundamental engineering models and state-of-the-art process characterization techniques that ensure that spray-dr...

  5. Model-based interpretation of the ECG: a methodology for temporal and spatial reasoning.

    OpenAIRE

    Tong, D. A.; Widman, L. E.

    1992-01-01

    A new software architecture for automatic interpretation of the electrocardiogram is presented. Using the hypothesize-and-test paradigm, a semi-quantitative physiological model and production rule-based knowledge are combined to reason about time- and space-varying characteristics of complex heart rhythms. A prototype system implementing the methodology accepts a semi-quantitative description of the onset and morphology of the P waves and QRS complexes that are observed in the body-surface el...

  6. A Methodology for Modeling the Flow of Military Personnel Across Air Force Active and Reserve Components

    Science.gov (United States)

    2016-01-01

    …for pilots will depend on the number of active component pilots who separate and the fraction of separating pilots who affiliate with the reserve… …when tracking economic output over a period of time. GDP data were collected from the Federal Reserve Economic Data (FRED), Federal Reserve Bank of St…

  7. A METHODOLOGICAL MODEL FOR INTEGRATING CHARACTER WITHIN CONTENT AND LANGUAGE INTEGRATED LEARNING IN SOCIOLOGY OF RELIGION

    OpenAIRE

    Moh Yasir Alimi

    2014-01-01

    Abstract: In this article, I describe a methodological model I used in an experimental study on how to integrate character within the practice of Content and Language Integrated Learning (CLIL) in higher education in Indonesia. This research adds to research on character education and CLIL in tertiary education, giving nuance to the practice of CLIL, so far predominantly a practice in primary and secondary schools. The research was conducted at Semarang State University, in the Departm...

  8. Modeling the Capacity and Emissions Impacts of Reduced Electricity Demand. Part 1. Methodology and Preliminary Results

    Energy Technology Data Exchange (ETDEWEB)

    Coughlin, Katie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; Shen, Hongxia [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; Chan, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; McDevitt, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; Sturges, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division

    2013-02-07

    Policies aimed at energy conservation and efficiency have broad environmental and economic impacts. Even if these impacts are relatively small, they may be significant compared to the cost of implementing the policy. Methodologies that quantify the marginal impacts of reduced demand for energy have an important role to play in developing accurate measures of both the benefits and costs of a given policy choice. This report presents a methodology for estimating the impacts of reduced demand for electricity on the electric power sector as a whole. The approach uses the National Energy Modeling System (NEMS), a mid-range energy forecast model developed and maintained by the U.S. Department of Energy, Energy Information Administration (EIA) (DOE EIA 2013). The report is organized as follows: In the rest of this section the traditional NEMS-BT approach is reviewed and an outline of the new reduced form NEMS methodology is presented. Section 2 provides an overview of how the NEMS model works, and describes the set of NEMS-BT runs that are used as input to the reduced form approach. Section 3 presents our NEMS-BT simulation results and post-processing methods. In Section 4 we show how the NEMS-BT output can be generalized to apply to a broader set of end-uses. In Section 5 we discuss the application of this approach to policy analysis, and summarize some of the issues that will be further investigated in Part 2 of this study.

  9. MODEL - INTEGRAL METHODOLOGY FOR SUCCESSFUL DESIGNING AND IMPLEMENTING OF TQM SYSTEM IN MACEDONIAN COMPANIES

    Directory of Open Access Journals (Sweden)

    Elizabeta Mitreva

    2011-12-01

    The subject of this paper is linked to the valorization of the meaning and perspectives of Total Quality Management (TQM) system design and implementation within domestic companies, and to creating a model-methodology for improved performance, efficiency and effectiveness. The research is designed as an attempt to depict the existing conditions in Macedonian companies regarding quality system design and implementation, analysed through 4 polls in the "house of quality", whose top is the top management and whose base is the measurement, evaluation, analysis and comparison of quality. This "house" is held up by 4 subsystems, i.e. internal standardization, methods and techniques for flawless work performance, education and motivation, and analysis of quality costs. The data received from the research and the proposed integral methodology for designing and implementing a TQM system are intended to help and offer useful directions to all Macedonian companies aiming to become "world class" organizations. The basis for the creation of this model is the redesign of business processes, after which a new phase of business performance begins: continued improvement, the rolling of Deming's quality circle (Plan-Do-Check-Act). The model-methodology proposed in this paper is integral and universal, which means that it is applicable to all companies regardless of business area.

  10. IMPLEMENTATION OF DATA ASSIMILATION METHODOLOGY FOR PHYSICAL MODEL UNCERTAINTY EVALUATION USING POST-CHF EXPERIMENTAL DATA

    Directory of Open Access Journals (Sweden)

    JAESEOK HEO

    2014-10-01

    The Best Estimate Plus Uncertainty (BEPU) method has been widely used to evaluate the uncertainty of a best-estimate thermal hydraulic system code against a figure of merit. This uncertainty is typically evaluated based on the physical models' uncertainties determined by expert judgment. This paper introduces the application of data assimilation methodology to determine the uncertainty bands of the physical models, e.g., the mean value and standard deviation of the parameters, based upon a statistical approach rather than expert judgment. Data assimilation provides a mathematical methodology for the best-estimate bias and the uncertainties of the physical models which optimize the system response following the calibration of model parameters and responses. The mathematical approaches include deterministic and probabilistic methods of data assimilation to solve both linear and nonlinear problems, with the a posteriori distribution of parameters derived based on Bayes' theorem. The inverse problem was solved analytically to obtain the mean value and standard deviation of the parameters, assuming Gaussian distributions for the parameters and responses, and a sampling method was utilized to illustrate the non-Gaussian a posteriori distributions of parameters. SPACE is used to demonstrate the data assimilation method by determining the bias and the uncertainty bands of the physical models, employing Bennett's heated tube test data and Becker's post-critical-heat-flux experimental data. Based on the results of the data assimilation process, the major sources of the modeling uncertainties were identified for further model development.
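
    For the linear-Gaussian case mentioned in the abstract, the a posteriori mean and covariance follow in closed form from Bayes' theorem. The sketch below shows that update for a toy two-parameter problem; the sensitivity matrix, prior and measurements are invented for illustration and are not the SPACE code's models.

    ```python
    import numpy as np

    # Linear-Gaussian data assimilation sketch: y = H @ theta + noise.
    # Prior (expert guess) on two physical-model parameters, e.g. multipliers
    # on a heat-transfer correlation; all numbers are illustrative.
    theta_prior = np.array([1.0, 1.0])
    P_prior = np.diag([0.2**2, 0.3**2])        # prior covariance
    H = np.array([[1.2, 0.4],
                  [0.3, 1.5],
                  [0.8, 0.9]])                 # sensitivity of responses to parameters
    R = np.diag([0.05**2] * 3)                 # measurement-error covariance
    y = np.array([1.45, 1.92, 1.70])           # observed responses

    # Posterior mean and covariance from Bayes' theorem (Kalman-style update).
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
    theta_post = theta_prior + K @ (y - H @ theta_prior)
    P_post = (np.eye(2) - K @ H) @ P_prior

    print("posterior mean:", theta_post)
    print("posterior std :", np.sqrt(np.diag(P_post)))
    ```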

  11. Functional Role of the Front and Back Legs During a Track Start with Special Reference to an Inverted Pendulum Model in College Swimmers.

    Science.gov (United States)

    Ikeda, Yusuke; Ichikawa, Hiroshi; Nara, Rio; Baba, Yasuhiro; Shimoyama, Yoshimitsu; Kubo, Yasuyuki

    2016-10-01

    This study investigated factors that determine the velocity of the center of mass (CM) and flight distance in a track start, in order to devise effective technical and physical training methods. Nine male and 5 female competitive swimmers participated in this study. Kinematics and ground reaction forces of the front and back legs were recorded using a video camera and force plates. The track start was modeled as an inverted pendulum system including a compliant leg connecting the CM and the front edge of the starting block. The increase in the horizontal velocity of the CM immediately after the start signal was closely correlated with the rotational component of the inverted pendulum. This rotational component at hands-off was significantly correlated with the average vertical force of the back plate from the start signal to hands-off (r = .967, P [...] back foot-off to front foot-off (r = .783, P < .01). The results indicate that the legs on the starting block in the track start play different roles in the behavior of the inverted pendulum.

  12. Application of Box-Behnken design and response surface methodology for modeling of some Turkish coals

    Energy Technology Data Exchange (ETDEWEB)

    N. Aslan; Y. Cebeci [Cumhuriyet University, Sivas (Turkey). Mining Engineering Department

    2007-01-15

    The aim of our research was to apply Box-Behnken experimental design and response surface methodology to the modeling of some Turkish coals. As a base for this study, standard Bond grindability tests were initially performed and Bond work index (Wi) values were calculated for three Turkish coals. The Box-Behnken experimental design was used to provide data for modeling; the variables of the model were the Bond work index, grinding time and ball diameter of the mill. Coal grinding tests were performed varying these three variables for three size fractions of coals (-3350 + 1700 {mu}m, -1700 + 710 {mu}m and -710 {mu}m). Using these sets of experimental data, mathematical models were then developed with a mathematical software package (MATLAB 7.1) to show the effect of each parameter and their interactions on the product 80% passing size (d{sub 80}). Predicted values of d{sub 80} obtained using the model equations were in good agreement with the experimental values of d{sub 80} (R{sup 2} value of 0.96 for -3350 + 1700 {mu}m, R{sup 2} value of 0.98 for -1700 + 710 {mu}m and R{sup 2} value of 0.94 for -710 {mu}m). This study proved that Box-Behnken design and response surface methodology can efficiently be applied to the modeling of grinding of some Turkish coals. 19 refs., 14 figs., 6 tabs.
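
    As a rough illustration of the Box-Behnken/response-surface workflow, the sketch below fits a full second-order model to a coded three-factor Box-Behnken design by least squares and reports R^2, mirroring the paper's d{sub 80} models in spirit. The design points are the standard coded BBD runs, but the responses are synthetic, not the paper's grinding data.

    ```python
    import numpy as np
    from itertools import combinations

    def quadratic_design_matrix(X):
        """Full second-order RSM model: intercept, linear, interaction, square terms."""
        n, k = X.shape
        cols = [np.ones(n)]
        cols += [X[:, i] for i in range(k)]
        cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
        cols += [X[:, i] ** 2 for i in range(k)]
        return np.column_stack(cols)

    # Coded Box-Behnken points for 3 factors (Wi, grinding time, ball diameter)
    # plus centre replicates; the d80 responses are invented for illustration.
    X = np.array([[-1,-1,0],[1,-1,0],[-1,1,0],[1,1,0],
                  [-1,0,-1],[1,0,-1],[-1,0,1],[1,0,1],
                  [0,-1,-1],[0,1,-1],[0,-1,1],[0,1,1],
                  [0,0,0],[0,0,0],[0,0,0]], dtype=float)
    rng = np.random.default_rng(0)
    d80 = (900 - 120*X[:,1] + 60*X[:,0] - 40*X[:,2] + 25*X[:,0]*X[:,1]
           + 30*X[:,1]**2 + rng.normal(0, 10, len(X)))

    A = quadratic_design_matrix(X)
    beta, *_ = np.linalg.lstsq(A, d80, rcond=None)
    pred = A @ beta
    r2 = 1 - np.sum((d80 - pred)**2) / np.sum((d80 - d80.mean())**2)
    print(f"fitted coefficients: {np.round(beta, 1)}")
    print(f"R^2 = {r2:.3f}")
    ```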

  13. Object-oriented modelling with unified modelling language 2.0 for simple software application based on agile methodology

    CERN Document Server

    Warnars, Spits

    2010-01-01

    Unified modelling language (UML) 2.0, introduced in 2002, has been developing and influencing object-oriented software engineering, and has become a standard and reference for information system analysis and design modelling. There are many concepts and theories for modelling an information system or software application with UML 2.0, which can create ambiguities and inconsistencies for a novice learning how to model a system with UML, especially with UML 2.0. This article discusses how to model a simple software application by using some of the UML 2.0 diagrams, rather than the whole set of diagrams, as suggested by agile methodology. Agile methodology is considered convenient for novices because it can deliver the information technology environment to the end-user quickly and adaptively with minimal documentation. It also has the ability to deliver the best-performing software application according to the customer's needs. Agile methodology will make a simple model with simple documentation, simple team and si...

  14. Concepts and methodologies for modeling and simulation a tribute to Tuncer Oren

    CERN Document Server

    Yilmaz, Levent

    2015-01-01

    This comprehensive text/reference presents cutting-edge advances in the theory and methodology of modeling and simulation (M&S), and reveals how this work has been influenced by the fundamental contributions of Professor Tuncer Ören to this field. Exploring the synergies among the domains of M&S and systems engineering (SE), the book describes how M&S and SE can help to address the complex problems identified as "Grand Challenges" more effectively under a model-driven and simulation-directed systems engineering framework. Topics and features: examines frameworks for the development of advan

  15. A methodology to model causal relationships on offshore safety assessment focusing on human and organizational factors.

    Science.gov (United States)

    Ren, J; Jenkinson, I; Wang, J; Xu, D L; Yang, J B

    2008-01-01

    Focusing on people and organizations, this paper aims to contribute to offshore safety assessment by proposing a methodology to model causal relationships. The methodology is proposed in a general sense, so that it is capable of accommodating the modeling of multiple risk factors considered in offshore operations and can deal with different types of data that may come from different sources. Reason's "Swiss cheese" model is used to form a generic offshore safety assessment framework, and Bayesian Network (BN) modeling is tailored to fit into the framework to construct a causal relationship model. The proposed framework uses a five-level-structure model to address latent failures within the causal sequence of events. The five levels are: Root causes level, Trigger events level, Incidents level, Accidents level, and Consequences level. To analyze and model the safety of a specified offshore installation, a BN model was established following the guideline of the proposed five-level framework. A range of events was specified, and the related prior and conditional probabilities of the BN model were assigned based on the inherent characteristics of each event. This paper shows that Reason's "Swiss cheese" model and BN can be jointly used in offshore safety assessment. On the one hand, the five-level conceptual model is enhanced by BNs, which are capable of providing a graphical demonstration of inter-relationships as well as calculating numerical values of occurrence likelihood for each failure event. The Bayesian inference mechanism also makes it possible to monitor how a safety situation changes as information flows forwards and backwards within the networks. On the other hand, BN modeling relies heavily on experts' personal experiences and is therefore highly domain specific. The "Swiss cheese" model is a theoretic framework based on solid behavioral theory and can therefore be used to provide industry with a roadmap for BN modeling and
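
    The five-level structure can be illustrated with a deliberately tiny sketch: treat each level as a single binary node in a chain and propagate probabilities forward. A real offshore BN would have many nodes per level and support backward (diagnostic) inference as well; all probabilities below are invented.

    ```python
    # Minimal sketch of the five-level causal chain (Root cause -> Trigger ->
    # Incident -> Accident -> Consequence) with invented conditional
    # probabilities; a real offshore model would have many nodes per level.
    P_root = 0.10                            # P(latent root cause present)
    P_trigger = {True: 0.30, False: 0.02}    # P(trigger | root cause)
    P_incident = {True: 0.25, False: 0.01}   # P(incident | trigger)
    P_accident = {True: 0.15, False: 0.001}  # P(accident | incident)

    def marginal(p_parent: float, cpt: dict) -> float:
        """Forward propagation: P(child) = sum over parent of P(child|parent) P(parent)."""
        return cpt[True] * p_parent + cpt[False] * (1.0 - p_parent)

    p_t = marginal(P_root, P_trigger)
    p_i = marginal(p_t, P_incident)
    p_a = marginal(p_i, P_accident)
    print(f"P(trigger)={p_t:.4f}  P(incident)={p_i:.4f}  P(accident)={p_a:.5f}")
    ```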

  16. Business Model Change Methodology: Applying New Technology in Organization: The Case of Mobile Technology in Learning Industry

    OpenAIRE

    Nastaran Hajiheydari; Payam Hanafizadeh

    2013-01-01

    The present study intends to design a methodology for examining the influence of modern information and communication technology on business models (BMs). Theoretical framework is mainly selected based on literature as well as consultation with expert focus groups. This methodology is validated by expert judgment and simulated as a real case applying system dynamics. The outcome of the survey includes a change methodology formulated in 5 phases and 37 activities. Not only has this study cover...

  17. Methodological considerations for economic modelling of latent tuberculous infection screening in migrants.

    Science.gov (United States)

    Shedrawy, J; Siroka, A; Oxlade, O; Matteelli, A; Lönnroth, K

    2017-09-01

    Tuberculosis (TB) in migrants from endemic to low-incidence countries results mainly from the reactivation of latent tuberculous infection (LTBI). LTBI screening policies for migrants vary greatly between countries, and the evidence on the cost-effectiveness of the different approaches is weak and heterogeneous. The aim of this review was to assess the methodology used in published economic evaluations of LTBI screening among migrants to identify critical methodological options that must be considered when using modelling to determine value for money from different economic perspectives. Three electronic databases were searched and 10 articles were included. There was considerable variation across this small number of studies with regard to economic perspective, main outcomes, modelling technique, screening options and target populations considered, as well as in parameterisation of the epidemiological situation, test accuracy, efficacy, safety and programme performance. Only one study adopted a societal perspective; others adopted a health care or wider government perspective. Parameters representing the cascade of screening and treating LTBI varied widely, with some studies using highly aspirational scenarios. This review emphasises the need for a more harmonised approach for economic analysis, and better transparency in how policy options and economic perspectives influence methodological choices. Variability is justifiable for some parameters. However, sufficient data are available to standardise others. A societal perspective is ideal, but can be challenging due to limited data. Assumptions about programme performance should be based on empirical data or at least realistic assumptions. Results should be interpreted within specific contexts and policy options, with cautious generalisations.
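
    Decision-analytic evaluations of screening of this kind are often built as Markov cohort models. The sketch below is a generic, hypothetical three-state example computing an incremental cost-effectiveness ratio (ICER); every transition probability, cost and utility is invented and does not come from the reviewed studies.

    ```python
    import numpy as np

    # Hypothetical 3-state Markov cohort model (LTBI, Active TB, Dead) comparing
    # "screen and treat" versus "no screening"; all numbers are invented purely
    # to show the mechanics of a decision-analytic evaluation.
    def run(p_react, annual_cost, start=10000, years=20, disc=0.03):
        T = np.array([[1 - p_react - 0.01, p_react, 0.01],   # from LTBI
                      [0.0,               0.85,     0.15],   # from Active TB
                      [0.0,               0.0,      1.0]])   # Dead is absorbing
        cohort = np.array([start, 0.0, 0.0])
        utilities = np.array([0.95, 0.60, 0.0])              # QALY weights per state
        cost = qaly = 0.0
        for t in range(years):
            d = 1.0 / (1 + disc) ** t                        # discount factor
            cost += d * (cohort[0] * annual_cost + cohort[1] * 8000)  # TB care cost
            qaly += d * cohort @ utilities
            cohort = cohort @ T
        return cost, qaly

    c0, q0 = run(p_react=0.010, annual_cost=0)    # no screening
    c1, q1 = run(p_react=0.004, annual_cost=50)   # screening lowers reactivation
    print(f"ICER ~ {(c1 - c0) / (q1 - q0):.0f} cost units per QALY gained")
    ```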

  18. Economic modeling of electricity production from hot dry rock geothermal reservoirs: methodology and analyses. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Cummings, R.G.; Morris, G.E.

    1979-09-01

    An analytical methodology is developed for assessing alternative modes of generating electricity from hot dry rock (HDR) geothermal energy sources. The methodology is used in sensitivity analyses to explore relative system economics. The methodology uses a computerized, intertemporal optimization model to determine the profit-maximizing design and management of a unified HDR electric power plant under a given set of geologic, engineering, and financial conditions. By iterating this model on price, a levelized busbar cost of electricity is established. By varying the conditions of development, the sensitivity of both optimal management and busbar cost to these conditions is explored. A plausible set of reference case parameters is established at the outset of the sensitivity analyses. This reference case links a multiple-fracture reservoir system to an organic, binary-fluid conversion cycle. A levelized busbar cost of 43.2 mills/kWh ($1978) was determined for the reference case, which had an assumed geothermal gradient of 40/sup 0/C/km, a design well-flow rate of 75 kg/s, an effective heat transfer area per pair of wells of 1.7 x 10/sup 6/ m/sup 2/, and a plant design temperature of 160/sup 0/C. Variations in the presumed geothermal gradient, size of the reservoir, drilling costs, real rates of return, and other system parameters yield minimum busbar costs between -40% and +76% of the reference case busbar cost.
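
    The "iterating this model on price" step amounts to finding the electricity price at which the plant's net present value is zero, i.e. the levelized busbar cost. The sketch below shows that root-finding step with bisection over invented cash flows; the actual study also re-optimizes plant design and management at each price.

    ```python
    # Sketch of the "iterate on price" step: find the electricity price at which
    # the plant's net present value is zero, i.e. the levelized busbar cost.
    # Cash-flow numbers are invented; the report's model also optimizes design.
    def npv(price_mills_per_kwh, capex=150e6, annual_kwh=3.5e8,
            annual_om=6e6, life=30, rate=0.08):
        revenue = price_mills_per_kwh / 1000.0 * annual_kwh   # mills -> dollars
        pv = -capex
        for t in range(1, life + 1):
            pv += (revenue - annual_om) / (1 + rate) ** t
        return pv

    lo, hi = 1.0, 200.0          # bracket the busbar cost in mills/kWh
    for _ in range(60):          # bisection: npv is increasing in price
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if npv(mid) < 0 else (lo, mid)
    print(f"levelized busbar cost ~ {0.5 * (lo + hi):.1f} mills/kWh")
    ```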

  19. Testing spectral models for stellar populations with star clusters: I. Methodology

    CERN Document Server

    Fernandes, Roberto Cid

    2009-01-01

    High resolution spectral models for simple stellar populations (SSP) developed in the past few years have become a standard ingredient in studies of the stellar populations of galaxies. As more such models become available, it becomes increasingly important to test them. In this and a companion paper, we test a suite of publicly available evolutionary synthesis models using integrated optical spectra in the blue-near-UV range of 27 well studied star clusters from the work of Leonardi & Rose (2003), spanning a wide range of ages and metallicities. Most (23) of the clusters are from the Magellanic Clouds. This paper concentrates on methodological aspects of spectral fitting. The data are fitted with SSP spectral models from Vazdekis and collaborators, based on the MILES library. Best-fit and Bayesian estimates of age, metallicity and extinction are presented, and degeneracies between these parameters are mapped. We find that these models can match the observed spectra very well in most cases, with small formal un...

  20. [Methodological novelties applied to the anthropology of food: agent-based models and social networks analysis].

    Science.gov (United States)

    Díaz Córdova, Diego

    2016-01-01

    The aim of this article is to introduce two methodological strategies that have not often been utilized in the anthropology of food: agent-based models and social network analysis. In order to illustrate these methods in action, two cases based on materials typical of the anthropology of food are presented. For the first strategy, fieldwork carried out in Quebrada de Humahuaca (province of Jujuy, Argentina) regarding meal recall was used, and for the second, elements of the concept of "domestic consumption strategies" applied by Aguirre were employed. The underlying idea is that, given that eating is recognized as a "total social fact" and, therefore, as a complex phenomenon, the methodological approach must also be characterized by complexity. The greater the number of methods utilized (with the appropriate rigor), the better able we will be to understand the dynamics of feeding in the social environment.

  1. Methodology for Developing a Probabilistic Risk Assessment Model of Spacecraft Rendezvous and Dockings

    Science.gov (United States)

    Farnham, Steven J., II; Garza, Joel, Jr.; Castillo, Theresa M.; Lutomski, Michael

    2011-01-01

    In 2007 NASA was preparing to send two new visiting vehicles carrying logistics and propellant to the International Space Station (ISS). These new vehicles were the European Space Agency's (ESA) Automated Transfer Vehicle (ATV), the Jules Verne, and the Japan Aerospace Exploration Agency's (JAXA) H-II Transfer Vehicle (HTV). The ISS Program wanted to quantify the increased risk to the ISS from these visiting vehicles. At the time, only the Shuttle, the Soyuz, and the Progress vehicles rendezvoused and docked to the ISS. The increased risk to the ISS came from an increase in vehicle traffic, thereby increasing the potential for a catastrophic collision during the rendezvous and the docking or berthing of a spacecraft to the ISS. A universal method of evaluating the risk of rendezvous and docking or berthing was created by the ISS's Risk Team to accommodate the increasing number of rendezvous and docking or berthing operations due to the increasing number of different spacecraft, as well as the future arrival of commercial spacecraft. Before the first docking attempt of ESA's ATV and JAXA's HTV to the ISS, a probabilistic risk model was developed to quantitatively calculate the risk of collision of each spacecraft with the ISS. The five rendezvous and docking risk models (Soyuz, Progress, Shuttle, ATV, and HTV) have been used to build and refine the modeling methodology for rendezvous and docking of spacecraft. This risk modeling methodology will be NASA's basis for evaluating the hazards of adding future ISS visiting spacecraft, including SpaceX's Dragon, Orbital Sciences' Cygnus, and NASA's own Orion spacecraft. This paper describes the methodology used for developing a visiting vehicle risk model.
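
    One simple way to see why more traffic raises station-level risk: if each rendezvous and docking carries some small, independent collision probability, the chance of at least one collision across a yearly manifest compounds. The per-flight probabilities below are invented placeholders, not values from the ISS models.

    ```python
    # Sketch of traffic-level risk aggregation: if each rendezvous/docking
    # carries an (assumed independent) collision probability p, the chance of
    # at least one collision over a visiting-vehicle manifest is 1 - prod(1 - p).
    from math import prod

    manifest = {                 # vehicle: (collision prob per flight, flights/yr)
        "Soyuz":    (1e-4, 4),
        "Progress": (1e-4, 4),
        "ATV":      (2e-4, 1),
        "HTV":      (2e-4, 1),
    }

    p_any = 1.0 - prod((1.0 - p) ** n for p, n in manifest.values())
    print(f"P(at least one collision per year) ~ {p_any:.2e}")
    ```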

  2. Sharing on Web 3d Models of Ancient Theatres. a Methodological Workflow

    Science.gov (United States)

    Scianna, A.; La Guardia, M.; Scaduto, M. L.

    2016-06-01

    In the last few years, the need to share knowledge of Cultural Heritage (CH) on the Web through navigable 3D models has increased. This need requires the availability of Web-based virtual reality systems and 3D WebGIS. In order to make the information available to all stakeholders, these instruments should be powerful and at the same time very user-friendly. However, research and experiments carried out so far show that a standardized methodology does not exist. All this is due both to the complexity and dimensions of the geometric models to be published, on the one hand, and to the excessive costs of hardware and software tools, on the other. Against this background, the paper describes a methodological approach for creating 3D models of CH, freely exportable on the Web, based on HTML5 and free and open source software. HTML5, supporting the WebGL standard, allows the exploration of 3D spatial models using the most widely used Web browsers, such as Chrome, Firefox, Safari and Internet Explorer. The methodological workflow described here has been tested in the construction of a multimedia geo-spatial platform developed for the three-dimensional exploration and documentation of the ancient theatres of Segesta and of Carthage, and the surrounding landscapes. The experimental application has allowed us to explore the potential and limitations of sharing 3D CH models on the Web based on the WebGL standard. Sharing capabilities could be extended by defining suitable geospatial Web services based on the capabilities of HTML5 and WebGL technology.

  3. Testing spectral models for stellar populations with star clusters - I. Methodology

    Science.gov (United States)

    Cid Fernandes, Roberto; González Delgado, Rosa M.

    2010-04-01

    High-resolution spectral models for simple stellar populations (SSP) developed in the past few years have become a standard ingredient in studies of the stellar populations of galaxies. As more such models become available, it becomes increasingly important to test them. In this and a companion paper, we test a suite of publicly available evolutionary synthesis models using integrated optical spectra in the blue-near-UV range of 27 well-studied star clusters from the work of Leonardi and Rose, spanning a wide range of ages and metallicities. Most (23) of the clusters are from the Magellanic Clouds. This paper concentrates on the methodological aspects of spectral fitting. The data are fitted with SSP spectral models from Vazdekis and collaborators, based on the Medium-resolution INT Library of Empirical Spectra. Best-fitting and Bayesian estimates of age, metallicity and extinction are presented, and degeneracies between these parameters are mapped. We find that these models can match the observed spectra very well in most cases, with small formal uncertainties in t, Z and AV. In some cases, the spectral fits indicate that the models lack a blue old population, probably associated with the horizontal branch. This methodology, which is mostly based on the publicly available code STARLIGHT, is extended to other sets of models in Paper II, where a comparison with properties derived from spatially resolved data (colour-magnitude diagrams) is presented. The global aim of these two papers is to provide guidance to users of evolutionary synthesis models and empirical feedback to model makers.

  4. Model checking methodology for large systems, faults and asynchronous behaviour. SARANA 2011 work report

    Energy Technology Data Exchange (ETDEWEB)

    Lahtinen, J. [VTT Technical Research Centre of Finland, Espoo (Finland); Launiainen, T.; Heljanko, K.; Ropponen, J. [Aalto Univ., Espoo (Finland). Dept. of Information and Computer Science

    2012-07-01

    Digital instrumentation and control (I and C) systems are challenging to verify. They enable complicated control functions, and the state spaces of the models easily become too large for comprehensive verification through traditional methods. Model checking is a formal method that can be used for system verification. A number of efficient model checking systems are available that provide analysis tools to determine automatically whether a given state machine model satisfies the desired safety properties. This report reviews the work performed in the Safety Evaluation and Reliability Analysis of Nuclear Automation (SARANA) project in 2011 regarding model checking. We have developed new, more exact modelling methods that are able to capture the behaviour of a system more realistically. In particular, we have developed more detailed fault models depicting the hardware configuration of a system, and methodology to model function-block-based systems asynchronously. In order to improve the usability of our model checking methods, we have developed an algorithm for model checking large modular systems. The algorithm can be used to verify properties of a model that could otherwise not be verified in a straightforward manner. (orig.)
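
    At its core, model checking a safety property means exhaustively exploring the reachable states of a model and reporting a counterexample trace if a forbidden state is reachable. The toy explicit-state checker below illustrates this on an invented two-bit state machine; production tools add symbolic representations, temporal logics and far better scalability.

    ```python
    from collections import deque

    # Minimal explicit-state safety check: breadth-first exploration of a state
    # machine, flagging any reachable state that violates a safety property.
    # The toy model (a 2-bit system with a forbidden configuration) is invented.
    def successors(state):
        a, b = state
        return [((a + 1) % 2, b), (a, (b + 1) % 2)]   # toggle either bit

    def safe(state):
        return state != (1, 1)                        # forbidden configuration

    def check(initial):
        seen, frontier = {initial}, deque([(initial, [initial])])
        while frontier:
            state, path = frontier.popleft()
            if not safe(state):
                return path                           # counterexample trace
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))
        return None                                   # property holds

    trace = check((0, 0))
    print("counterexample:" if trace else "safe", trace or "")
    ```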

  5. Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review.

    Science.gov (United States)

    Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A

    2010-05-01

    Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for

  6. Efficient methodologies for system matrix modelling in iterative image reconstruction for rotating high-resolution PET

    Energy Technology Data Exchange (ETDEWEB)

    Ortuno, J E; Kontaxakis, G; Rubio, J L; Santos, A [Departamento de Ingenieria Electronica (DIE), Universidad Politecnica de Madrid, Ciudad Universitaria s/n, 28040 Madrid (Spain); Guerra, P [Networking Research Center on Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid (Spain)], E-mail: juanen@die.upm.es

    2010-04-07

    A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
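
    The role of the precalculated sparse system matrix in iterative reconstruction can be sketched with a single MLEM update loop (OSEM partitions the same update over subsets of detector bins). The matrix below is random, standing in for the Monte Carlo-modelled, symmetry-expanded matrix described in the paper.

    ```python
    import numpy as np
    from scipy import sparse

    # Sketch of MLEM image updates with a sparse system matrix A
    # (detector bins x voxels); A here is random, standing in for a
    # precalculated, symmetry-expanded Monte Carlo system model.
    rng = np.random.default_rng(1)
    n_bins, n_vox = 500, 200
    A = sparse.random(n_bins, n_vox, density=0.02, random_state=1, format="csr")
    x_true = rng.uniform(0.5, 2.0, n_vox)
    y = rng.poisson(A @ x_true + 1e-9)              # simulated measured counts

    sens = np.asarray(A.sum(axis=0)).ravel()        # A^T 1, voxel sensitivities
    x = np.ones(n_vox)
    for it in range(20):                            # MLEM iterations
        proj = A @ x                                # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```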

  7. 3rd International Conference on Simulation and Modeling Methodologies, Technologies and Applications

    CERN Document Server

    Koziel, Slawomir; Kacprzyk, Janusz; Leifsson, Leifur; Ören, Tuncer

    2015-01-01

    This book includes extended and revised versions of a set of selected papers from the 3rd International Conference on Simulation and Modeling Methodologies, Technologies and Applications (SIMULTECH 2013) which was co-organized by the Reykjavik University (RU) and sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC). SIMULTECH 2013 was held in cooperation with the ACM SIGSIM - Special Interest Group (SIG) on SImulation and Modeling (SIM), Movimento Italiano Modellazione e Simulazione (MIMOS) and AIS Special Interest Group on Modeling and Simulation (AIS SIGMAS) and technically co-sponsored by the Society for Modeling & Simulation International (SCS), Liophant Simulation, Simulation Team and International Federation for Information Processing (IFIP). This proceedings brings together researchers, engineers, applied mathematicians and practitioners working in the advances and applications in the field of system simulation.

  8. Methodology of the Access to Care and Timing Simulation Model for Traumatic Spinal Cord Injury Care.

    Science.gov (United States)

    Santos, Argelio; Fallah, Nader; Lewis, Rachel; Dvorak, Marcel F; Fehlings, Michael G; Burns, Anthony Scott; Noonan, Vanessa K; Cheng, Christiana L; Chan, Elaine; Singh, Anoushka; Belanger, Lise M; Atkins, Derek

    2017-03-12

    Despite the relatively low incidence, the management and care of persons with traumatic spinal cord injury (tSCI) can be resource intensive and complex, spanning multiple phases of care and disciplines. Using a simulation model built with a system level view of the healthcare system allows for prediction of the impact of interventions on patient and system outcomes from injury through to community reintegration after tSCI. The Access to Care and Timing (ACT) project developed a simulation model for tSCI care using techniques from operations research and its development has been described previously. The objective of this article is to briefly describe the methodology and the application of the ACT Model as it was used in several of the articles in this focus issue. The approaches employed in this model provide a framework to look into the complexity of interactions both within and among the different SCI programs, sites and phases of care.

  9. The Relationships of Soft Systems Methodology (SSM), Business Process Modeling and e-Government

    Directory of Open Access Journals (Sweden)

    Arief Ramadhan

    2012-01-01

    Full Text Available e-Government have emerged in several countries. Because of many aspects that must be considered, and because of there are exist some soft components in e-Government, then the Soft Systems Methodology (SSM can be considered to use in e-Government systems development process. On the other hand, business process modeling is essential in many fields nowadays, as well as in e-Government. Some researchers have used SSM in e-Government. Several studies that relate the business processes modeling with e-Government have been conducted. This paper tries to reveal the relationship between SSM and business process modeling. Moreover, this paper also tries to explain how business process modeling is integrated within SSM, and further link that integration to the e-Government.

  10. Power Prediction Model for Turning EN-31 Steel Using Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    M. Hameedullah

    2010-01-01

    Power consumption in turning EN-31 steel (a material extensively used in the automotive industry) with a tungsten carbide tool under different cutting conditions was experimentally investigated. The experimental runs were planned according to a 2^4 + 8 added-centre-point factorial design of experiments, replicated thrice. The data collected were statistically analyzed using the Analysis of Variance technique, and first-order and second-order power consumption prediction models were developed using response surface methodology (RSM). It is concluded that the second-order model is more accurate than the first-order model and fits well with the experimental data. The model can be used in the automotive industry for deciding the cutting parameters for minimum power consumption and hence maximum productivity.

  11. DATA MINING METHODOLOGY FOR DETERMINING THE OPTIMAL MODEL OF COST PREDICTION IN SHIP INTERIM PRODUCT ASSEMBLY

    Directory of Open Access Journals (Sweden)

    Damir Kolich

    2016-03-01

    In order to accurately predict the costs of the thousands of interim products that are assembled in shipyards, it is necessary to use skilled engineers to develop detailed Gantt charts for each interim product separately, which takes many hours. It is helpful to develop a prediction tool to estimate the cost of interim products accurately and quickly without the need for skilled engineers. This will drive down shipyard costs and improve competitiveness. Data mining is used extensively for developing prediction models in other industries. Since ships consist of thousands of interim products, it is logical to develop a data mining methodology for a shipyard or any other manufacturing industry where interim products are produced. The methodology involves analysis of existing interim products and data collection. Pre-processing and principal component analysis are done to make the data "user-friendly" for later prediction processing and for the development of both accurate and robust models. The support vector machine is demonstrated to be the better model when there is a lower number of tuples. However, as the number of tuples increases to over 10,000, the artificial neural network model is recommended.
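
    The model comparison described here can be prototyped in a few lines with standard tools. The sketch below contrasts a support vector regressor and a small neural network on synthetic stand-in data; the feature names in the comments are hypothetical, and the real study used shipyard interim-product data.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Synthetic stand-in for interim-product features (e.g. weld length, part
    # count, plate thickness, ...) and assembly cost; names are hypothetical.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, size=(2000, 5))
    y = 40*X[:, 0] + 25*X[:, 1]**2 + 10*X[:, 2]*X[:, 3] + rng.normal(0, 2, 2000)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    models = {
        "SVR": make_pipeline(StandardScaler(), SVR(C=10.0)),
        "ANN": make_pipeline(StandardScaler(),
                             MLPRegressor(hidden_layer_sizes=(32, 16),
                                          max_iter=2000, random_state=0)),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(f"{name}: R^2 on held-out data = {model.score(X_te, y_te):.3f}")
    ```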

  12. A New Mathematical Model for Flank Wear Prediction Using Functional Data Analysis Methodology

    Directory of Open Access Journals (Sweden)

    Sonja Jozić

    2014-01-01

    This paper presents a new approach to improving the reliability of flank wear prediction during the end milling process. In the present work, prediction of flank wear has been achieved by using cutting parameters and force signals as the sensitive carriers of information about the machining process. A series of experiments was conducted to establish the relationship between flank wear and cutting force components as well as cutting parameters such as cutting speed, feed per tooth, and radial depth of cut. In order to be able to predict flank wear, a new linear regression mathematical model has been developed by utilizing functional data analysis methodology. The regression coefficients of the model are in the form of time-dependent functions that have been determined through the use of functional data analysis methodology. The mathematical model has been developed by means of applied cutting parameters and measured cutting force components during the end milling of a workpiece made of 42CrMo4 steel. The efficiency and flexibility of the developed model have been verified by comparing it with a separate experimental data set.

  13. Methodology of synchronization among strategy and operation. A standards-based modeling approach

    Directory of Open Access Journals (Sweden)

    VICTOR EDWIN COLLAZOS

    2017-05-01

    Enterprise Architecture (EA) has gained importance in recent years, mainly for its concept of "alignment" between the strategic and operational levels of organizations. Such alignment occurs when Information Technology (IT) is applied correctly and timely, working in synergy and harmony with strategy and the operation to achieve their mutual goals and satisfy the organizational needs. Both the strategic and operational levels have standards that help model the elements necessary to obtain the desired results. In this sense, BMM and BPMN were selected because both have the support of the OMG and are fairly well known for modelling the strategic level and the operational level, respectively. In addition, i* goal modeling can be used to reduce the gap between these two standards. This proposal may help both with the high-level design of the information system and with the appropriate identification of the business processes that will support it. This paper presents a methodology for aligning strategy and the operation based on standards and heuristics. We have made a classification of the elements of the models and, for some specific cases, an extension of the heuristics associating them. This allows us to propose a methodology which uses the above-mentioned standards and combines mappings, transformations and actions to be considered in the alignment process.

  14. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    Indian Academy of Sciences (India)

    Diego Rivera; Yessica Rivas; Alex Godoy

    2015-02-01

    Hydrological models are simplified representations of natural processes and are subject to errors. Uncertainty bounds are a commonly used way to assess the impact of input or model architecture uncertainty on model outputs. Different sets of parameters can have equally robust goodness-of-fit indicators, which is known as equifinality. We assessed the outputs of a lumped conceptual hydrological model applied to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) by using the equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from the GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model, and then to analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for the Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range of the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m³ s⁻¹ after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms against the GLUE methodology, such as its lack of statistical formality, it is identified as a useful tool assisting the modeller with the identification of critical parameters.
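
    The GLUE recipe is straightforward to sketch: sample parameter sets, score each with a likelihood measure, keep the "behavioural" ones above a threshold, and summarize the retained simulations as uncertainty bounds. The toy model, threshold and data below are invented; full GLUE additionally weights the quantiles by likelihood.

    ```python
    import numpy as np

    # GLUE sketch: Monte Carlo parameter sampling, a likelihood score per
    # parameter set, a behavioural threshold, and uncertainty bounds from the
    # retained simulations. The 2-parameter runoff model and data are toys.
    rng = np.random.default_rng(42)

    def model(params, forcing):
        a, b = params
        return a * forcing + b * np.sqrt(forcing)

    forcing = rng.uniform(1, 10, 120)                   # e.g. monthly rainfall
    observed = model((0.6, 1.5), forcing) + rng.normal(0, 0.8, 120)

    def nse(sim):                                       # Nash-Sutcliffe efficiency
        return 1 - np.sum((observed - sim) ** 2) / np.sum((observed - observed.mean()) ** 2)

    samples = rng.uniform([0.0, 0.0], [1.5, 3.0], size=(5000, 2))
    scores = np.array([nse(model(p, forcing)) for p in samples])
    behavioural = scores > 0.7                          # GLUE acceptance threshold
    sims = np.array([model(p, forcing) for p in samples[behavioural]])

    lower = np.quantile(sims, 0.05, axis=0)             # simple bounds; full GLUE
    upper = np.quantile(sims, 0.95, axis=0)             # weights these by likelihood
    print(f"{behavioural.sum()} behavioural sets, "
          f"mean 5-95% bound width = {np.mean(upper - lower):.2f}")
    ```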

  15. Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology

    Science.gov (United States)

    Macioł, Piotr; Michalik, Kazimierz

    2016-10-01

    Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle to its wide application is its high computational demands. Among other solutions, the parallelization of multiscale computations is promising. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models, employing a MatCalc thermodynamic simulator. The main issues investigated in this work are: (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in the quality of computations enforced by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law equations. The problem of 'delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
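
    Amdahl's law bounds the achievable speed-up when only a fraction of the runtime (here, the fine-scale computations) can run in parallel. A minimal sketch, with an illustrative 90% parallel fraction:

    ```python
    # Amdahl's law: with a fraction p of the work parallelizable over n workers,
    # speedup = 1 / ((1 - p) + p / n). Useful for judging how much the
    # concurrently computed fine-scale sub-models can gain overall.
    def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_workers)

    # e.g. if 90% of runtime is fine-scale computation (illustrative figure):
    for n in (2, 4, 8, 16, 64):
        print(f"n={n:3d}  speedup={amdahl_speedup(0.90, n):5.2f}")
    ```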

  16. A geostatistical methodology to assess the accuracy of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    Smoot, J.L.; Williams, R.E.

    1996-04-01

    The Pacific Northwest National Laboratory (PNNL) has developed a Hydrologic Evaluation Methodology (HEM) to assist the U.S. Nuclear Regulatory Commission in evaluating the potential that infiltrating meteoric water will produce leachate at commercial low-level radioactive waste disposal sites. Two key issues are raised in the HEM: (1) evaluation of mathematical models that predict facility performance, and (2) estimation of the uncertainty associated with these mathematical model predictions. The technical objective of this research is to adapt geostatistical tools commonly used for model parameter estimation to the problem of estimating the spatial distribution of the dependent variable to be calculated by the model. To fulfill this objective, a database describing the spatiotemporal movement of water injected into unsaturated sediments at the Hanford Site in Washington State was used to develop a new method for evaluating mathematical model predictions. Measured water content data were interpolated geostatistically to a 16 x 16 x 36 grid at several time intervals. Then a mathematical model was used to predict water content at the same grid locations at the selected times. Node-by-node comparison of the mathematical model predictions with the geostatistically interpolated values was conducted. The method facilitates a complete accounting and categorization of model error at every node. The comparison suggests that model results generally are within measurement error. The worst model error occurs in silt lenses and is in excess of measurement error.

  17. A consistent modelling methodology for secondary settling tanks in wastewater treatment.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Nopens, Ingmar

    2011-03-01

    The aim of this contribution is partly to build consensus on a consistent modelling methodology (CMM) for complex real processes in wastewater treatment, by combining classical concepts with results from applied mathematics, and partly to apply it to the clarification-thickening process in the secondary settling tank. In the CMM, the real process should be approximated by a mathematical model (process model; ordinary or partial differential equation (ODE or PDE)), which in turn is approximated by a simulation model (numerical method) implemented on a computer. These steps have often not been carried out in a correct way. The secondary settling tank was chosen as a case since this is one of the most complex processes in a wastewater treatment plant, and simulation models developed decades ago have no guarantee of satisfying fundamental mathematical and physical properties. Nevertheless, such methods are still used in commercial tools to date. This becomes particularly important as state-of-the-art practice moves towards plant-wide modelling, where all submodels interact and errors propagate through the model, severely hampering any calibration effort and, hence, the predictive purpose of the model. The CMM is described by applying it first to a simple conversion process in the biological reactor, yielding an ODE solver, and then to the solid-liquid separation in the secondary settling tank, yielding a PDE solver. The time has come to incorporate established mathematical techniques into environmental engineering, and wastewater treatment modelling in particular, and to use proven, reliable and consistent simulation models.

  18. Frescoed Vaults: Accuracy Controlled Simplified Methodology for Planar Development of Three-Dimensional Textured Models

    Directory of Open Access Journals (Sweden)

    Marco Giorgio Bevilacqua

    2016-03-01

    In the field of documentation and preservation of cultural heritage, there is keen interest in 3D metric viewing and rendering of architecture, for both formal appearance and color. On the other hand, the operative steps of restoration interventions still require full-scale, 2D metric surface representations. The transition from 3D to 2D representation, with the related geometric transformations, has not yet been fully formalized for the planar development of frescoed vaults. Methodologies proposed so far on this subject involve transitioning from point cloud models to ideal mathematical surfaces and projecting textures using software tools. The methodology used for geometry and texture development in the present work does not require any dedicated software. The different processing steps can be individually checked for any error introduced, which can then be quantified. A direct accuracy check of the planar development of the frescoed surface has been carried out by qualified restorers, yielding a result of 3 mm. The proposed methodology, although requiring further studies to improve the automation of the different processing steps, allowed extracting 2D drafts fully usable by the operators restoring the vault frescoes.

  19. A Model-Based Methodology for Spray-Drying Process Development.

    Science.gov (United States)

    Dobry, Dan E; Settell, Dana M; Baumann, John M; Ray, Rod J; Graham, Lisa J; Beyerinck, Ron A

    2009-09-01

    Solid amorphous dispersions are frequently used to improve the solubility and, thus, the bioavailability of poorly soluble active pharmaceutical ingredients (APIs). Spray-drying, a well-characterized pharmaceutical unit operation, is ideally suited to producing solid amorphous dispersions due to its rapid drying kinetics. This paper describes a novel flowchart methodology based on fundamental engineering models and state-of-the-art process characterization techniques that ensure that spray-drying process development and scale-up are efficient and require minimal time and API. This methodology offers substantive advantages over traditional process-development methods, which are often empirical and require large quantities of API and long development times. This approach is also in alignment with the current guidance on Pharmaceutical Development Q8(R1). The methodology is used from early formulation-screening activities (involving milligrams of API) through process development and scale-up for early clinical supplies (involving kilograms of API) to commercial manufacturing (involving metric tons of API). It has been used to progress numerous spray-dried dispersion formulations, increasing bioavailability of formulations at preclinical through commercial scales.

  20. A New Methodology for Building-Up a Robust Model for Heliostat Field Flux Characterization

    Directory of Open Access Journals (Sweden)

    Nicolás C. Cruz

    2017-05-01

    The heliostat field of solar central receiver systems (SCRS) is formed by hundreds, even thousands, of working heliostats. Their adequate configuration and control constitute a currently active line of research. For instance, automatic aiming methodologies for existing heliostat fields are being widely studied. In general, control techniques require a model of the system to be controlled in order to obtain an estimation of its states. However, this kind of information may not be available or may be hard to obtain for every plant to be studied. In this work, an innovative methodology for data-based analytical heliostat field characterization is proposed and described. It formalizes the way in which the behavior of a whole field can be derived from the study of its most descriptive parts. By successfully applying this procedure, the instantaneous behavior of a field can be expressed by a reduced set of expressions that can be seen as a field descriptor. It is not intended to replace real experimentation but to enhance researchers' autonomy to build their own reliable and portable synthetic datasets at preliminary stages of their work. The methodology proposed in this paper is successfully applied to a virtual field: only 30 heliostats out of 541 were studied to characterize the whole field. For the validation set, the average difference in power between the flux maps directly fitted from the measured information and the estimated ones is only 0.67% (just 0.10946 kW/m2 of root-mean-square error, on average, between them). According to these results, a consistent field descriptor can be built by applying the proposed methodology, which is hence ready for use.

  1. Testing the methodology for site descriptive modelling. Application for the Laxemar area

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Johan [JA Streamflow AB, Aelvsjoe (Sweden); Berglund, Johan [SwedPower AB, Stockholm (Sweden); Follin, Sven [SF Geologic AB, Stockholm (Sweden); Hakami, Eva [Itasca Geomekanik AB, Stockholm (Sweden); Halvarson, Jan [Swedish Nuclear Fuel and Waste Management Co, Stockholm (Sweden); Hermanson, Jan [Golder Associates AB, Stockholm (Sweden); Laaksoharju, Marcus [Geopoint (Sweden); Rhen, Ingvar [Sweco VBB/VIAK, Stockholm (Sweden); Wahlgren, C.H. [Sveriges Geologiska Undersoekning, Uppsala (Sweden)

    2002-08-01

    A special project has been conducted where the currently available data from the Laxemar area, which is part of the Simpevarp site, have been evaluated and interpreted into a Site Descriptive Model covering: geology, hydrogeology, hydrogeochemistry and rock mechanics. Description of the surface ecosystem has been omitted, since it was re-characterised in another, parallel, project. Furthermore, there has been no evaluation of transport properties. The project is primarily a methodology test. The lessons learnt will be implemented in the Site Descriptive Modelling during the coming site investigation. The intent of the project has been to explore whether the available methodology for Site Descriptive Modelling based on surface and borehole data is adequate, and to identify potential needs for development and improvement of the methodology. The project has developed, with limitations in scope, a Site Descriptive Model at local scale, corresponding to the situation after completion of the Initial Site Investigations for the Laxemar area (i.e. 'version 1.2' using the vocabulary of the general execution program for the site investigations). The Site Descriptive Model should be reasonable, but should not be regarded as a 'real' model. There are limitations both in input data and in the scope of the analysis. The measured (primary) data constitute a wide range of different measurement results, including data from two deep core-drilled boreholes. These data both need to be checked for consistency and interpreted into a format more amenable to three-dimensional modelling. Examples of such evaluations are estimation of surface geology, lineament interpretation, geological single-hole interpretation, hydrogeological single-hole interpretation and assessment of hydrogeochemical data. Furthermore, while cross-discipline interpretation is encouraged, there is also a need for transparency. This means that the evaluations first are made within each discipline

  2. A new methodology for building energy benchmarking: An approach based on clustering concept and statistical models

    Science.gov (United States)

    Gao, Xuefeng

    Though many building energy benchmarking programs have been developed during the past decades, they have certain limitations. The major concern is that they may produce misleading benchmarking results because they do not fully consider the impacts of the multiple features of buildings on energy performance. Existing methods classify buildings according to only one of many building features, the use type, which may result in a comparison between two buildings that are tremendously different in other features and therefore not properly comparable. This research aims to tackle this challenge by proposing a new methodology based on the clustering concept and statistical analysis. The clustering concept, which draws on machine learning algorithms, classifies buildings based on a multi-dimensional domain of building features, rather than the single dimension of use type. Buildings with the greatest similarity in the features that influence energy performance are classified into the same cluster and benchmarked according to the centroid reference of that cluster. Statistical analysis is applied to find the most influential features impacting building energy performance, as well as to provide prediction models for new-design energy consumption. The proposed methodology, applicable to both existing-building benchmarking and new-design benchmarking, is discussed in this dissertation. The former contains four steps: feature selection, clustering algorithm adaptation, results validation, and interpretation. The latter consists of three parts: data observation, inverse modeling, and forward modeling. Experimentation and validation were carried out for both perspectives. It was shown that the proposed methodology could account for total building energy performance and provide a more comprehensive approach to benchmarking. In addition, the multi-dimensional clustering concept enables energy benchmarking among different types of buildings, and inspires a new
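
    A minimal sketch of the clustering step: standardize several building features, cluster with k-means, and benchmark each building against its cluster's centroid energy-use intensity. All features and numbers below are synthetic placeholders, and the dissertation's feature selection and validation steps are omitted.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Sketch of cluster-based benchmarking: group buildings by several features
    # at once, then score each building against its own cluster's mean
    # energy-use intensity (EUI). All data are synthetic placeholders.
    rng = np.random.default_rng(0)
    n = 300
    features = np.column_stack([
        rng.uniform(500, 50000, n),     # floor area (m2)
        rng.integers(1, 30, n),         # number of floors
        rng.uniform(20, 90, n),         # occupancy hours/week
        rng.uniform(0, 1, n),           # glazing ratio
    ])
    eui = 80 + 0.0005*features[:, 0] + 0.6*features[:, 2] + rng.normal(0, 8, n)

    X = StandardScaler().fit_transform(features)
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

    for k in range(5):
        peers = eui[labels == k]
        print(f"cluster {k}: {len(peers):3d} buildings, "
              f"benchmark EUI = {peers.mean():6.1f} +/- {peers.std():.1f} kWh/m2/yr")
    ```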

  3. A model for overview of student learning: a matrix of educational outcomes versus methodologies.

    Science.gov (United States)

    Johnsen, David C; Marshall, Teresa A; Finkelstein, Michael W; Cunningham-Ford, Marsha A; Straub-Morarend, Cheryl L; Holmes, David C; Armstrong, Steven R; Aquilino, Steven A; Sharp, Helen M; Solow, Catherine M; McQuistan, Michelle R

    2011-02-01

    A concise overview of an institution's aspirations for its students becomes increasingly elusive because dental education has evolving emphases on priorities like critical thinking and adapting to new technology. The purpose of this article is to offer a learner-oriented matrix that gives a focus for discussion and an overview of an institution's educational outcomes. On one axis of the matrix, common educational outcomes are listed: knowledge, technical skills, critical thinking, ethical and professional values, patient and practice management, and social responsibility awareness. On the other axis, methodologies are listed: definition, cultivation strategies, measures (summative/formative, objective/subjective), institutional coordination, and competency determination. By completing the matrix, an overview of the process by which students reach these outcomes emerges. Each institution would likely complete the matrix differently and, ideally, with active discussion. While the matrix can first be used to establish "Where are we now?" for an institution, it can also be a starting point for more extensive matrices and further discussion. Vertical and horizontal analyses of the matrix provide a unique lens for viewing the institution's learning environment.

  4. Animal Models of Virus-Induced Neurobehavioral Sequelae: Recent Advances, Methodological Issues, and Future Prospects

    Directory of Open Access Journals (Sweden)

    Marco Bortolato

    2010-01-01

    Converging lines of clinical and epidemiological evidence suggest that viral infections in early developmental stages may be a causal factor in neuropsychiatric disorders such as schizophrenia, bipolar disorder, and autism-spectrum disorders. This etiological link, however, remains controversial in view of the lack of consistent and reproducible associations between viruses and mental illness. Animal models of virus-induced neurobehavioral disturbances afford powerful tools to test etiological hypotheses and explore pathophysiological mechanisms. Prenatal or neonatal inoculations of neurotropic agents (such as herpes-, influenza-, and retroviruses) in rodents result in a broad spectrum of long-term alterations reminiscent of psychiatric abnormalities. Nevertheless, the complexity of these sequelae often poses methodological and interpretational challenges and thwarts their characterization. Recent conceptual advancements in psychiatric nosology and behavioral science may help determine new heuristic criteria to enhance the translational value of these models. A particularly critical issue is the identification of intermediate phenotypes, defined as quantifiable factors representing single neurochemical, neuropsychological, or neuroanatomical aspects of a diagnostic category. In this paper, we examine how the employment of these novel concepts may lead to new methodological refinements in the study of virus-induced neurobehavioral sequelae through animal models.

  5. Do Methodological Choices in Environmental Modeling Bias Rebound Effects? A Case Study on Electric Cars.

    Science.gov (United States)

    Font Vivanco, David; Tukker, Arnold; Kemp, René

    2016-10-18

    Improvements in resource efficiency often underperform because of rebound effects. Calculations of the size of rebound effects are subject to various types of bias, among which methodological choices have received particular attention. Modellers have primarily focused on choices related to changes in demand; choices related to modeling the environmental burdens from such changes have received less attention. In this study, we analyze choices in the environmental assessment methods (life cycle assessment (LCA) and hybrid LCA) and environmental input-output databases (E3IOT, Exiobase and WIOD) used as a source of bias. The analysis is done for a case study on battery electric and hydrogen cars in Europe. The results describe moderate rebound effects for both technologies in the short term. Additionally, long-run scenarios are calculated by simulating the total cost of ownership; these describe notable rebound effect sizes under favorable economic conditions -- from 26 to 59% and from 18 to 28%, respectively, depending on the methodological choices. Relevant sources of bias are found to be related to incomplete background systems, technology assumptions and sectorial aggregation. These findings highlight the importance of the method setup and of sensitivity analyses of choices related to environmental modeling in rebound effect assessments.

  6. A methodology for assessing the market benefits of alternative motor fuels: The Alternative Fuels Trade Model

    Energy Technology Data Exchange (ETDEWEB)

    Leiby, P.N.

    1993-09-01

    This report describes a modeling methodology for examining the prospective economic benefits of displacing motor gasoline use by alternative fuels. The approach is based on the Alternative Fuels Trade Model (AFTM). AFTM development was undertaken by the US Department of Energy (DOE) as part of a longer term study of alternative fuels issues. The AFTM is intended to assist with evaluating how alternative fuels may be promoted effectively, and what the consequences of substantial alternative fuels use might be. Such an evaluation of policies and consequences of an alternative fuels program is being undertaken by DOE as required by Section 502(b) of the Energy Policy Act of 1992. Interest in alternative fuels is based on the prospective economic, environmental and energy security benefits from the substitution of these fuels for conventional transportation fuels. The transportation sector is heavily dependent on oil. Increased oil use implies increased petroleum imports, with much of the increase coming from OPEC countries. Conversely, displacement of gasoline has the potential to reduce US petroleum imports, thereby reducing reliance on OPEC oil and possibly weakening OPEC's ability to extract monopoly profits. The magnitude of US petroleum import reduction, the attendant fuel price changes, and the resulting US benefits, depend upon the nature of oil-gas substitution and the supply and demand behavior of other world regions. The methodology applies an integrated model of fuel market interactions to characterize these effects.

  7. Computational simulation methodologies for mechanobiological modelling: a cell-centred approach to neointima development in stents.

    Science.gov (United States)

    Boyle, C J; Lennon, A B; Early, M; Kelly, D J; Lally, C; Prendergast, P J

    2010-06-28

    The design of medical devices could be very much improved if robust tools were available for computational simulation of tissue response to the presence of the implant. Such tools require algorithms to simulate the response of tissues to mechanical and chemical stimuli. Available methodologies include those based on the principle of mechanical homeostasis, those which use continuum models to simulate biological constituents, and the cell-centred approach, which models cells as autonomous agents. In the latter approach, cell behaviour is governed by rules based on the state of the local environment around the cell, and informed by experiment. Tissue growth and differentiation requires simulating many of these cells together. In this paper, the methodology and applications of cell-centred techniques--with particular application to mechanobiology--are reviewed, and a cell-centred model of tissue formation in the lumen of an artery in response to the deployment of a stent is presented. The method is capable of capturing some of the most important aspects of restenosis, including nonlinear lesion growth with time. The approach taken in this paper provides a framework for simulating restenosis; the next step will be to couple it with more patient-specific geometries and quantitative parameter data.
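
    A minimal sketch of the cell-centred idea described above, assuming a 2D lattice in which each cell divides into a random empty neighbour with some probability; the grid size, rule, and probability are illustrative stand-ins, not the authors' calibrated model.

    ```python
    # Sketch: cell-centred (agent-based) tissue growth on a 2D lattice.
    import random

    SIZE, P_DIVIDE, STEPS = 40, 0.3, 50
    grid = [[0] * SIZE for _ in range(SIZE)]
    grid[SIZE // 2][SIZE // 2] = 1  # seed one cell in the injured lumen region

    def empty_neighbours(r, c):
        out = []
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < SIZE and 0 <= nc < SIZE and grid[nr][nc] == 0:
                out.append((nr, nc))
        return out

    for _ in range(STEPS):
        cells = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c]]
        for r, c in cells:
            free = empty_neighbours(r, c)
            # Rule: divide into a random free site with probability P_DIVIDE.
            if free and random.random() < P_DIVIDE:
                nr, nc = random.choice(free)
                grid[nr][nc] = 1
        print(sum(map(sum, grid)))  # lesion size over time (nonlinear growth)
    ```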

  8. METHODOLOGY FOR THE ESTIMATION OF PARAMETERS OF THE MODIFIED BOUC-WEN MODEL

    Directory of Open Access Journals (Sweden)

    Tomasz HANISZEWSKI

    2015-03-01

    The Bouc-Wen model is a theoretical formulation that can reproduce the real hysteresis loop of a modeled object. One such object is a wire rope, which is present in the equipment of a crane lifting mechanism. The modified version of the model adopted here has nine parameters, and determining such a number of parameters is a complex and problematic issue. This article presents the identification methodology and sample results of numerical simulations. The results were compared with data obtained from laboratory tests of ropes [3], and on that basis it was found that the results agree and that the model can be applied to dynamic systems containing wire ropes in their structures [4].
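
    To make the hysteresis formulation concrete, here is a minimal sketch that integrates the standard Bouc-Wen evolution equation under a sinusoidal displacement with explicit Euler steps; the parameter values are illustrative, not the nine identified rope parameters from the article.

    ```python
    # Sketch: standard Bouc-Wen hysteresis under a sinusoidal displacement input.
    import math

    A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0      # illustrative shape parameters
    alpha, k = 0.3, 1.0e4                        # post-to-pre yield ratio, stiffness (N/m)
    dt, omega, amp = 1e-4, 2 * math.pi, 0.01     # time step (s), 1 Hz, 10 mm amplitude

    z = 0.0
    for i in range(int(2.0 / dt)):               # two load cycles
        t = i * dt
        x = amp * math.sin(omega * t)
        xdot = amp * omega * math.cos(omega * t)
        # Bouc-Wen evolution: dz/dt = A*xdot - beta*|xdot|*|z|^(n-1)*z - gamma*xdot*|z|^n
        dz = A * xdot - beta * abs(xdot) * abs(z) ** (n - 1) * z - gamma * xdot * abs(z) ** n
        z += dz * dt
        force = alpha * k * x + (1 - alpha) * k * z   # restoring force with hysteresis

    print(f"final hysteretic variable z = {z:.5f}, restoring force = {force:.1f} N")
    ```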

  9. CALS and the Product State Model - Methodology and Supporting Schools and Paradigms

    DEFF Research Database (Denmark)

    Larsen, Michael Holm

    1998-01-01

    This paper addresses the preliminary considerations in a research project, initiated in February 1997, regarding Continuous Acquisition and Life-cycle Support (CALS), which is a part of the activities in CALS Center Denmark. The CALS concept is presented focusing on the Product State Model (PSM). The PSM...... incorporates relevant information about each stage of the production process. The paper will describe the research object and the model object, and discuss a part of the methodology in developing a Product State Model. The project is primarily technological; however, organisational and human aspects...... will be considered, as the intention is that a prototype should be implemented in the production line at Odense Steel Shipyard. Hence, a Multiview approach will be considered, incorporating the informational needs of many actors/machines. Parameter identification, i.e. describing the parameters which PSM...

  10. Modeling Customer Loyalty by System Dynamics Methodology (Case Study: Internet Service Provider Company)

    Directory of Open Access Journals (Sweden)

    Alireza Bafandeh Zendeh

    2016-03-01

    Due to the complexity of customer loyalty, we tried to provide a conceptual model to explain it in an Internet service provider company with a system dynamics approach. To do so, customer loyalty for the statistical population was analyzed according to Sterman's modeling methodology. First, the reference modes (the historical behavior of customer loyalty) were evaluated. Then dynamic hypotheses were developed by utilizing causal-loop diagrams and stock-flow maps, based on the theoretical literature. In the third stage, the initial conditions of variables, parameters, and the mathematical functions between them were estimated. The model was tested; finally, scenarios of advertising, service quality improvement, and continuing the current situation were evaluated. Results showed the service quality improvement scenario to be more effective than the others.
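
    A hedged sketch of the stock-flow mechanics behind such a model: one customer stock with acquisition and churn flows, where churn falls with service quality and acquisition rises with advertising. The structure and coefficients are illustrative placeholders, not the calibrated model from the case study.

    ```python
    # Sketch: minimal system-dynamics stock-flow simulation (Euler integration).
    def simulate(advertising, quality, months=36, customers=10_000.0):
        history = []
        for _ in range(months):
            acquisition = 500.0 * advertising        # inflow: new customers per month
            churn = customers * 0.05 / quality       # outflow: higher quality -> less churn
            customers += acquisition - churn         # integrate dStock/dt with dt = 1 month
            history.append(customers)
        return history

    # Compare scenarios: status quo, more advertising, better service quality.
    for name, adv, q in [("status quo", 1.0, 1.0), ("advertising", 2.0, 1.0), ("quality", 1.0, 1.5)]:
        print(name, round(simulate(adv, q)[-1]))
    ```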

  11. A Mapping Model for Transforming Traditional Software Development Methods to Agile Methodology

    Directory of Open Access Journals (Sweden)

    Rashmi Popli

    2013-07-01

    Agility brings responsibility and ownership to individuals, which will eventually bring out effectiveness and efficiency in deliverables. The Agile model is growing in the market at a very good pace. Companies are drifting from traditional Software Development Life Cycle models to an Agile environment for the purpose of attaining quality and for the sake of saving cost and time. The nimble nature of Agile is helpful in frequent releases so as to satisfy the customer by providing frequent dual feedback. In traditional models, the life cycle is properly defined and the phases are elaborated by specifying the needed input and output parameters. On the other hand, in an Agile environment, phases are specific to the methodologies of Agile - Extreme Programming etc. In this paper a common life cycle approach is proposed that is applicable for different kinds of teams. The paper aims to describe a mapping function for the mapping of traditional methods to the Agile method.

  12. Methodology for Training Small Domain-specific Language Models and Its Application in Service Robot Speech Interface

    Directory of Open Access Journals (Sweden)

    ONDAS Stanislav

    2014-05-01

    This paper introduces a novel methodology for training small domain-specific language models from a domain vocabulary only. The proposed methodology is intended for situations when no training data are available and preparing an appropriate deterministic grammar is not a trivial task. The methodology consists of two phases. In the first phase, a "random" deterministic grammar, which can generate all possible combinations of unigrams and bigrams, is constructed from the vocabulary. This random grammar then serves to generate a training corpus. A "random" n-gram model is trained from the generated corpus, and it can be adapted in the second phase. Evaluation has shown the approach to be usable for small domains; the assessment results favor the designed method over constructing an appropriate deterministic grammar.
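
    A minimal sketch of the two-phase idea under stated assumptions: phase one generates a synthetic corpus of random in-vocabulary word sequences as a stand-in for the "random grammar", and a bigram model with add-one smoothing is then estimated from it. The vocabulary and smoothing choice are illustrative; the paper's grammar construction details are not reproduced here.

    ```python
    # Sketch: train a small "random" bigram language model from a domain vocabulary only.
    import random
    from collections import Counter

    vocab = ["robot", "go", "kitchen", "stop", "bring", "water"]  # hypothetical domain words

    # Phase 1: sample random word sequences from the vocabulary (stand-in for the grammar).
    corpus = [[random.choice(vocab) for _ in range(5)] for _ in range(2000)]

    # Phase 2: estimate bigram probabilities from the generated corpus (add-one smoothing).
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        unigrams.update(sent)
        bigrams.update(zip(sent, sent[1:]))

    def p_bigram(w1, w2):
        return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + len(vocab))

    print(p_bigram("robot", "go"))  # roughly uniform, ready for in-domain adaptation
    ```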

  13. 8000 Ways to Model a Vortex: A Review of Hindcast Wind Field Methodologies

    Science.gov (United States)

    Sweeney, J.

    2014-12-01

    Hindcasts of cyclonic wind fields are crucial for extreme value analysis in the oil and gas industry. Recent scientific developments have increased the number of parameterization options for tropical cyclone vortices, leading to well over 8000 permutations of model choices. Which is best? Also problematic is how best to blend modelled vortex winds into a global wind model (such as the Climate Forecast System Reanalysis (CFSR)) in order to resolve tropical cyclones to sufficient detail for wave modelling. Standard blending schemes can leave a 'moat' between the vortex and the CFSR circulation (see Figure 1 from TC Olivia 1996). Using a 35-year track database from the Australian Bureau of Meteorology, this study assesses model configurations and blending schemes against the most extensive measured meteorological dataset in the north-east Indian Ocean (largely commercial-in-confidence). The Holland profile models of 1980 and 2008 are two starting points, with other options examined for radius-to-maximum-wind calculations, pressure-wind relationships, averaging periods, atmospheric profiles, gust factors, and asymmetry methods. Once a vortex is modelled, the winds are then fitted to the radius of gales and blended into the CFSR before further verification. Initial results support recent theoretical developments by Hu et al. (2012), with additional results that call for a new asymmetry method and the separation of pressure and wind field modelling.
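
    For orientation, a sketch of the Holland (1980) symmetric gradient wind profile, one of the two starting points named above; the parameter values are illustrative, and the asymmetry options and the blending step into CFSR winds are not shown.

    ```python
    # Sketch: Holland (1980) symmetric gradient wind profile for a tropical cyclone.
    import math

    def holland_wind(r_m, p_centre=95_000.0, p_env=100_500.0, r_max=30_000.0,
                     B=1.5, rho=1.15, f=5e-5):
        """Gradient wind speed (m/s) at radius r_m (m) from the storm centre."""
        x = (r_max / r_m) ** B
        term = (B * (p_env - p_centre) / rho) * x * math.exp(-x) + (r_m * f / 2.0) ** 2
        return math.sqrt(term) - r_m * f / 2.0

    for r_km in (10, 30, 60, 120, 240):
        print(r_km, "km:", round(holland_wind(r_km * 1000.0), 1), "m/s")
    ```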

  14. A Methodology for Robust Multiproxy Paleoclimate Reconstructions and Modeling of Temperature Conditional Quantiles.

    Science.gov (United States)

    Janson, Lucas; Rajaratnam, Bala

    Great strides have been made in the field of reconstructing past temperatures based on models relating temperature to temperature-sensitive paleoclimate proxies. One of the goals of such reconstructions is to assess if current climate is anomalous in a millennial context. These regression-based approaches model the conditional mean of the temperature distribution as a function of paleoclimate proxies (or vice versa). Some of the recent focus in the area has considered methods which help reduce the uncertainty inherent in such statistical paleoclimate reconstructions, with the ultimate goal of improving the confidence that can be attached to such endeavors. A second important scientific focus in the subject area is that of forward models for proxies, the goal of which is to understand the way paleoclimate proxies are driven by temperature and other environmental variables. One of the primary contributions of this paper is novel statistical methodology for (1) quantile regression with autoregressive residual structure, (2) estimation of corresponding model parameters, (3) development of a rigorous framework for specifying uncertainty estimates of quantities of interest, yielding (4) statistical byproducts that address the two scientific foci discussed above. We show that by using the above statistical methodology we can demonstrably produce a more robust reconstruction than is possible by using conditional-mean-fitting methods. Our reconstruction shares some of the common features of past reconstructions, but we also gain useful insights. More importantly, we are able to demonstrate a significantly smaller uncertainty than that from previous regression methods. In addition, the quantile regression component allows us to model, in a more complete and flexible way than least squares, the conditional distribution of temperature given proxies. This relationship can be used to inform forward models relating how proxies are driven by temperature.
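
    A minimal sketch of the quantile-regression ingredient using statsmodels; it fits conditional quantiles of temperature given a single synthetic proxy, and omits the paper's autoregressive residual structure and uncertainty framework.

    ```python
    # Sketch: conditional quantiles of temperature given a proxy (no AR residuals).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    proxy = rng.normal(size=300)                          # synthetic proxy series
    temp = 0.8 * proxy + rng.normal(scale=0.5, size=300)  # synthetic temperature

    X = sm.add_constant(proxy)
    for q in (0.05, 0.5, 0.95):
        res = sm.QuantReg(temp, X).fit(q=q)
        print(f"q={q}: intercept={res.params[0]:.2f}, slope={res.params[1]:.2f}")
    ```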

  15. A new methodology to test galaxy formation models using the dependence of clustering on stellar mass

    Science.gov (United States)

    Campbell, David J. R.; Baugh, Carlton M.; Mitchell, Peter D.; Helly, John C.; Gonzalez-Perez, Violeta; Lacey, Cedric G.; Lagos, Claudia del P.; Simha, Vimal; Farrow, Daniel J.

    2015-09-01

    We present predictions for the two-point correlation function of galaxy clustering as a function of stellar mass, computed using two new versions of the GALFORM semi-analytic galaxy formation model. These models make use of a high resolution, large volume N-body simulation, set in the 7-year Wilkinson Microwave Anisotropy Probe cosmology. One model uses a universal stellar initial mass function (IMF), while the other assumes different IMFs for quiescent star formation and bursts. Particular consideration is given to how the assumptions required to estimate the stellar masses of observed galaxies (such as the choice of IMF, stellar population synthesis model, and dust extinction) influence the perceived dependence of galaxy clustering on stellar mass. Broad-band spectral energy distribution fitting is carried out to estimate stellar masses for the model galaxies in the same manner as in observational studies. We show clear differences between the clustering signals computed using the true and estimated model stellar masses. As such, we highlight the importance of applying our methodology to compare theoretical models to observations. We introduce an alternative scheme for the calculation of the merger time-scales for satellite galaxies in GALFORM, which takes into account the dark matter subhalo information from the simulation. This reduces the amplitude of small-scale clustering. The new merger scheme offers improved or similar agreement with observational clustering measurements over the redshift range 0 < z < 0.7 (against surveys including the VIMOS Public Extragalactic Redshift Survey), depending on the GALFORM model used.
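
    As a hedged illustration of the two-point clustering statistic underlying the comparison above, the sketch below estimates xi(r) with the Landy-Szalay estimator on a small synthetic 3D catalogue; the binning, box size, and catalogue sizes are arbitrary toy choices.

    ```python
    # Sketch: Landy-Szalay estimator xi(r) = (DD - 2DR + RR) / RR on a toy box.
    import numpy as np
    from scipy.spatial.distance import pdist, cdist

    rng = np.random.default_rng(1)
    data = rng.uniform(0, 100.0, size=(500, 3))      # toy "galaxy" positions (Mpc/h)
    rand = rng.uniform(0, 100.0, size=(2000, 3))     # random comparison catalogue

    bins = np.linspace(1.0, 25.0, 9)

    def norm_hist(d, n_pairs):
        return np.histogram(d, bins=bins)[0] / n_pairs

    nd, nr = len(data), len(rand)
    dd = norm_hist(pdist(data), nd * (nd - 1) / 2)
    rr = norm_hist(pdist(rand), nr * (nr - 1) / 2)
    dr = norm_hist(cdist(data, rand).ravel(), nd * nr)

    xi = (dd - 2 * dr + rr) / rr
    print(np.round(xi, 3))  # ~0 everywhere for an unclustered toy catalogue
    ```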

  16. Evaluation of methodologies for interpolation of data for hydrological modeling in glacierized basins with limited information

    Science.gov (United States)

    Muñoz, Randy; Paredes, Javier; Huggel, Christian; Drenkhan, Fabian; García, Javier

    2017-04-01

    The availability and consistency of data is a determining factor for the reliability of any hydrological model and its simulated results. Unfortunately, there are many regions worldwide where data are not available in the desired quantity and quality. The Santa River basin (SRB), located within a complex topographic and climatic setting in the tropical Andes of Peru, is a clear example of this challenging situation. A monitoring network of in-situ stations in the SRB recorded series of hydro-meteorological variables which finally ceased to operate in 1999. In the following years, several researchers evaluated and completed many of these series. This database was used by multiple research and policy-oriented projects in the SRB. However, hydroclimatic information remains limited, making it difficult to perform research, especially when dealing with the assessment of current and future water resources. In this context, an evaluation of different methodologies to interpolate temperature and precipitation data at a monthly time step, as well as ice volume data, in glacierized basins with limited data is presented here. The methodologies were evaluated for the Quillcay River, a tributary of the SRB, where hydro-meteorological data have been available from nearby monitoring stations since 1983. The study period was 1983-1999, with a validation period of 1993-1999. For the temperature series, the aim was to extend the observed data and interpolate them. Data from the NCEP reanalysis were used to extend the observed series: 1) using a simple correlation with multiple field stations, or 2) applying the altitudinal correction proposed in previous studies. The interpolation was then applied as a function of altitude. Both methodologies provide very close results; by parsimony, simple correlation is shown to be a viable choice. For the precipitation series, the aim was to interpolate observed data. Two methodologies were evaluated: 1) Inverse Distance Weighting, whose results underestimate the amount
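
    A minimal sketch of the Inverse Distance Weighting step named above, assuming monthly precipitation at a few stations; the power parameter and station coordinates are illustrative.

    ```python
    # Sketch: Inverse Distance Weighting (IDW) interpolation of station precipitation.
    import numpy as np

    stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 12.0], [8.0, 9.0]])  # km, hypothetical
    precip = np.array([120.0, 95.0, 150.0, 110.0])  # mm/month at each station

    def idw(target, power=2.0):
        d = np.linalg.norm(stations - target, axis=1)
        if np.any(d < 1e-9):                 # target coincides with a station
            return precip[np.argmin(d)]
        w = 1.0 / d ** power
        return np.sum(w * precip) / np.sum(w)

    print(round(idw(np.array([4.0, 5.0])), 1), "mm/month")
    ```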

  17. A MAINTENANCE STRATEGY MODEL FOR STATIC EQUIPMENT USING INSPECTION METHODOLOGIES AND RISK MANAGEMENT

    Directory of Open Access Journals (Sweden)

    J.K. Visser

    2012-01-01

    Mechanical equipment used on process plants can be categorised into two main types, namely static and rotating equipment. A brief survey at a number of chemical process plants indicated that a number of maintenance strategies exist and are used for rotating equipment. However, some of these strategies are not directly applicable to static equipment, although the risk-based inspection (RBI) methodology has been developed for pressure vessels. A generalised risk-based maintenance strategy for all types of static equipment does not currently exist. This paper describes the development of an optimised model of inspection methodologies, maintenance strategies, and risk management principles that is generically applicable to static equipment. It enables maintenance managers and engineers to select an applicable maintenance strategy and inspection methodology, based on the operational and business risks posed by the individual pieces of equipment.

  18. A system-of-systems modeling methodology for strategic general aviation design decision-making

    Science.gov (United States)

    Won, Henry Thome

    General aviation has long been studied as a means of providing an on-demand "personal air vehicle" that bypasses the traffic at major commercial hubs. This thesis continues this research through development of a system-of-systems modeling methodology applicable to the selection of synergistic product concepts, market segments, and business models. From the perspective of the conceptual design engineer, the design and selection of future general aviation aircraft is complicated by the definition of constraints and requirements, and the tradeoffs among performance and cost aspects. Qualitative problem definition methods have been utilized, although their accuracy in determining specific requirement and metric values is uncertain. In industry, customers are surveyed, and business plans are created through a lengthy, iterative process. In recent years, techniques have been developed for predicting the characteristics of US travel demand based on travel mode attributes, such as door-to-door time and ticket price. As of yet, these models treat the contributing systems -- aircraft manufacturers and service providers -- as independently variable assumptions. In this research, a methodology is developed which seeks to build a strategic design decision-making environment through the construction of a system-of-systems model. The demonstrated implementation brings together models of the aircraft and manufacturer, the service provider, and most importantly the travel demand. Thus represented is the behavior of the consumers and the reactive behavior of the suppliers -- the manufacturers and transportation service providers -- in a common modeling framework. The results indicate an ability to guide the design process -- specifically the selection of design requirements -- through the optimization of "capability" metrics. Additionally, results indicate the ability to find synergistic solutions, that is, solutions in which two systems might collaborate to achieve a better result than acting

  19. A new methodology for dynamic modelling of health risks arising from wastewater influenced urban flooding

    Science.gov (United States)

    Jørgensen, Claus; Mark, Ole; Djordjevic, Slobodan; Hammond, Michael; Khan, David M.; Erichsen, Anders; Dorrit Enevoldsen, Ann; Heinicke, Gerald; Helwigh, Birgitte

    2015-04-01

    Introduction: Urban flooding due to rainfall exceeding the design capacity of drainage systems is a global problem with significant economic and social consequences. While the cost of the direct damages of urban flooding is well understood, the indirect damages, like waterborne diseases, are in general still poorly understood. Climate change is expected to increase the frequency of urban flooding in many countries, which is likely to increase waterborne diseases. Diarrheal diseases are most prevalent in developing countries, where poor sanitation, poor drinking water and poor surface water quality cause a high disease burden and mortality, especially during floods. The level of waterborne diarrhea in countries with well-developed water and wastewater infrastructure has been reduced to an acceptable level, and the population in general does not consider wastewater a health risk. Nevertheless, exposure to wastewater-influenced urban flood water still has the potential to cause transmission of diarrheal diseases. When managing urban flooding and planning urban climate change adaptations, health risks are rarely taken into consideration. This paper outlines a novel methodology for linking dynamic urban flood modelling with Quantitative Microbial Risk Assessment (QMRA). This provides a unique possibility for understanding the interaction between urban flooding and the health risks caused by direct human contact with flood water, and provides an option for reducing the burden of disease in the population through intelligent urban flood risk management. Methodology: We have linked hydrodynamic urban flood modelling with QMRA to determine the risk of infection caused by exposure to wastewater-influenced urban flood water. The deterministic model MIKE Flood, which integrates the sewer network model in MIKE Urban and the 2D surface model MIKE21, was used to calculate the concentration of pathogens in the
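
    A hedged sketch of the QMRA step: an exponential dose-response model converts a flood-water pathogen concentration and an ingested volume into an infection probability, then into an annual risk over repeated exposure events. The concentration, volume, and dose-response parameter r are illustrative placeholders, not values from the study.

    ```python
    # Sketch: quantitative microbial risk assessment (exponential dose-response model).
    import math

    conc = 1.0e3        # pathogens per litre in flood water (hypothetical model output)
    volume = 0.005      # litres ingested per exposure event (hypothetical)
    r = 2.0e-4          # pathogen-specific dose-response parameter (hypothetical)
    events = 3          # flood exposure events per year

    dose = conc * volume
    p_event = 1.0 - math.exp(-r * dose)              # infection risk per event
    p_annual = 1.0 - (1.0 - p_event) ** events       # risk over all events in a year
    print(f"per-event risk {p_event:.4f}, annual risk {p_annual:.4f}")
    ```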

  20. Methodology for the Incorporation of Passive Component Aging Modeling into the RAVEN/ RELAP-7 Environment

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, Diego; Rabiti, Cristian; Cogliati, Joshua; Alfonsi, Andrea; Askin Guler; Tunc Aldemir

    2014-11-01

    Passive systems, structures and components (SSCs) will degrade over their operating life, and this degradation may cause a reduction in the safety margins of a nuclear power plant. In traditional probabilistic risk assessment (PRA) using the event-tree/fault-tree methodology, passive SSC failure rates are generally based on generic plant failure data, and the true state of a specific plant is not reflected realistically. To address aging effects of passive SSCs in the traditional PRA methodology, [1] does consider physics-based models that account for the operating conditions in the plant; however, [1] does not include the effects of surveillance/inspection. This paper presents an overall methodology for the incorporation of aging modeling of passive components into the RAVEN/RELAP-7 environment, which provides a framework for performing dynamic PRA. Dynamic PRA allows consideration of both epistemic and aleatory uncertainties (including those associated with maintenance activities) in a consistent phenomenological and probabilistic framework, and is often needed when there is complex process/hardware/software/firmware/human interaction [2]. Dynamic PRA has gained attention recently due to difficulties in the traditional PRA modeling of aging effects of passive components using physics-based models, and also in the modeling of digital instrumentation and control systems. RAVEN (Reactor Analysis and Virtual control Environment) [3] is a software package under development at the Idaho National Laboratory (INL) as an online control logic driver and post-processing tool. It is coupled to the plant transient code RELAP-7 (Reactor Excursion and Leak Analysis Program), also currently under development at INL [3], as well as RELAP5 [4]. The overall methodology aims to: • Address multiple aging mechanisms involving a large number of components in a computationally feasible manner where sequencing of events is conditioned on the physical conditions predicted in a simulation

  1. Geared rotor dynamic methodologies for advancing prognostic modeling capabilities in rotary-wing transmission systems

    Science.gov (United States)

    Stringer, David Blake

    The overarching objective in this research is the development of a robust, rotor dynamic, physics-based model of a helicopter drive train as a foundation for prognostic modeling of rotary-wing transmissions. Rotorcraft rely on the integrity of their drive trains for their airworthiness. Drive trains rely on gear technology for their integrity and function. Gears alter the vibration characteristics of a mechanical system and significantly contribute to the noise, component fatigue, and personal discomfort prevalent in rotorcraft. This research effort develops methodologies for generating a rotor dynamic model of a rotary-wing transmission based on first principles, through (i) development of a three-dimensional gear-mesh stiffness model for helical and spur gears and integration of this model in a finite element rotor dynamic model, (ii) linear and nonlinear analyses of a geared system for comparison and validation of the gear-mesh model, (iii) development of a modal synthesis technique for potentially providing model reduction and faster analysis capabilities for geared systems, and (iv) extension of the gear-mesh model to bevel and epicyclic configurations. In addition to model construction and validation, faults indigenous to geared systems are presented and discussed. Two faults are selected for analysis and seeded into the transmission model. Diagnostic vibration parameters are presented and used as damage indicators in the analysis. The fault models produce results consistent with damage experienced during experimental testing. The results of this research demonstrate the robustness of the physics-based approach in simulating multiple normal and abnormal conditions. The advantages of this physics-based approach, when combined with contemporary probabilistic and time-series techniques, provide a useful method for improving health monitoring technologies in mechanical systems.
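
    A hedged sketch of the core rotor-dynamic ingredient: a two-inertia torsional model coupled through a constant gear-mesh stiffness, reduced to an eigenvalue problem for natural frequencies. The inertias, radii, and mesh stiffness are illustrative numbers, not the thesis model.

    ```python
    # Sketch: 2-DOF torsional gear pair; natural frequencies from the mesh stiffness.
    import numpy as np

    J1, J2 = 0.05, 0.20        # polar inertias (kg m^2), illustrative
    r1, r2 = 0.04, 0.10        # base-circle radii (m)
    km = 2.0e8                 # gear-mesh stiffness (N/m), illustrative

    M = np.diag([J1, J2])
    K = km * np.array([[r1 * r1, -r1 * r2],
                       [-r1 * r2, r2 * r2]])   # mesh spring acting along the line of action

    # Generalized eigenproblem K v = w^2 M v
    w2 = np.linalg.eigvals(np.linalg.solve(M, K))
    freqs_hz = np.sqrt(np.sort(np.abs(w2))) / (2 * np.pi)
    print(freqs_hz)  # one ~0 Hz rigid-body mode and one gear-mesh torsional mode
    ```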

  2. Dunedin's free clinic: an exploration of its model of care using case study methodology.

    Science.gov (United States)

    Loh, Lik; Jaye, Chrystal; Dovey, Susan; Lloyd, Hywel; Rowe, Joanne

    2015-06-01

    Models of care are important therapeutic modalities for achieving the goals of health care teams, but they are seldom explicitly stated or investigated. To describe the model of care at Dunedin's free clinic, and assess whether this model catered to the particular needs of enrolled patients. A mixed methods study was conducted using case study methodology to construct the clinic's model of care from multiple data sources, and to create a profile of patients' needs. A nested case study of patients with diabetes examined patients' social vulnerability characteristics. The pattern matching analytic technique was used to assess the degree of alignment between the model of care and patients' needs. Patients were not only high users of both primary and secondary health care, but also of justice and social welfare sector services. The care of patients with diabetes was complicated by coexisting social vulnerability and medical comorbidities. Surveyed patients placed high value on interpersonal dimensions of care, the Christian ethos of the clinic, and the wider range of services available. This study suggests a degree of 'fit' between the clinic's model of care and the needs of enrolled patients. A model of care that caters to the needs of patients with complex needs is important for securing their engagement in health services.

  3. Modeling and Analysis of MRR, EWR and Surface Roughness in EDM Milling through Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    A. K.M.S. Iqbal

    2010-01-01

    Problem statement: Electrical Discharge Machining (EDM) has grown over the last few decades from a novelty to a mainstream manufacturing process. Though the EDM process is very demanding, its mechanism is complex and far from completely understood. It is difficult to establish a model that can accurately predict the performance by correlating the process parameters. Optimum processing parameters are essential to increase the production rate and decrease the machining time, since both the materials processed by EDM and the process itself are very costly. This research establishes empirical relations between machining parameters and the responses in analyzing the machinability of stainless steel. Approach: The machining factors used were voltage, rotational speed of the electrode and feed rate, over the responses MRR, EWR and Ra. Response surface methodology was used to investigate the relationships and parametric interactions between the three controllable variables and the MRR, EWR and Ra. A central composite experimental design was used to estimate the model coefficients of the three factors. The responses were modeled using a response surface model based on experimental results. The significant coefficients were obtained by performing Analysis of Variance (ANOVA) at the 95% level of significance. Results: The variation in percentage errors for the developed models was found to be within 5%. Conclusion: The developed models show that voltage and rotary motion of the electrode are the most significant machining parameters influencing MRR, EWR and Ra. These models can be used to obtain the desired responses within the experimental range.
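
    A minimal sketch of the response-surface step under stated assumptions: a full quadratic model in three coded factors fitted by least squares to hypothetical MRR observations; the data are synthetic, not the paper's measurements.

    ```python
    # Sketch: fit a second-order response surface y = b0 + sum(bi*xi) + sum(bij*xi*xj).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(20, 3))          # coded factors: voltage, speed, feed
    y = 5 + 2*X[:, 0] + 1.5*X[:, 1] - X[:, 0]*X[:, 1] + 0.5*X[:, 2]**2 \
        + rng.normal(scale=0.1, size=20)          # synthetic MRR response

    def quad_design(X):
        cols = [np.ones(len(X))]
        cols += [X[:, i] for i in range(3)]                                 # linear terms
        cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i, 3)]  # 2nd-order terms
        return np.column_stack(cols)

    beta, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)
    print(np.round(beta, 2))  # estimated response surface coefficients
    ```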

  4. Model-driven methodology for rapid deployment of smart spaces based on resource-oriented architectures.

    Science.gov (United States)

    Corredor, Iván; Bernardos, Ana M; Iglesias, Josué; Casar, José R

    2012-01-01

    Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.

  5. Model-Driven Methodology for Rapid Deployment of Smart Spaces Based on Resource-Oriented Architectures

    Directory of Open Access Journals (Sweden)

    José R. Casar

    2012-07-01

    Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.

  6. PAGIS summary report of phase 1: a common methodological approach based on European data and models

    Energy Technology Data Exchange (ETDEWEB)

    Cadelli, N.; Cottone, G.; Bertozzi, G.; Girardi, F.

    1984-01-01

    Since 1982 a joint study has been launched by the CEC with the participation of national institutions in the E.C., aiming at a Performance Assessment of Geological Isolation Systems (PAGIS) for HLW disposal. This document is a summary of the first phase of the study, which was devoted to the collection of data and models and to the choice of an appropriate methodology. To this purpose, real or national sites have been chosen, which are representative of three types of continental geological formations in the E.C.: clay, granite and salt (although the choices imply no commitment of any kind about their final use). Moreover, sub-seabed areas have also been identified. The study covers the following items: - basic data on waste characteristics, site data and repository designs; - methodology, which allows sensitivity and uncertainty analyses to be performed, as well as the assessment of radiation doses to individuals and populations; - preliminary modelling of radionuclide release and migration through the geosphere (near- and far-field) and the biosphere following their various pathways to man; - selection of the most relevant radionuclide release scenarios and their probability of occurrence. Reference values have been selected for the basic data as well as variants covering the various options which are under consideration in the different Countries of the E.C.

  7. The Double Layer Methodology and the Validation of Eigenbehavior Techniques Applied to Lifestyle Modeling

    Science.gov (United States)

    Lamichhane, Bishal

    2017-01-01

    A novel methodology, the double layer methodology (DLM), for modeling an individual's lifestyle and its relationships with health indicators is presented. The DLM is applied to model behavioral routines emerging from self-reports of daily diet and activities, annotated by 21 healthy subjects over 2 weeks. Unsupervised clustering on the first layer of the DLM separated our population into two groups. Using eigendecomposition techniques on the second layer of the DLM, we could find activity and diet routines, predict behaviors in a portion of the day (with an accuracy of 88% for diet and 66% for activity), determine between-day and between-individual similarities, and detect an individual's membership in a group based on behavior (with an accuracy of up to 64%). We found that clustering based on health indicators mapped back into activity behaviors, but not into diet behaviors. In addition, we showed the limitations of eigendecomposition for lifestyle applications, in particular when applied to noisy and sparse behavioral data such as dietary information. Finally, we proposed the use of the DLM for supporting adaptive and personalized recommender systems for stimulating behavior change. PMID:28133607
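
    A hedged sketch of the eigendecomposition idea: days are encoded as behavior vectors over time slots, and the leading singular vectors ("eigenbehaviors") give a low-rank summary of routines. The encoding and data below are synthetic stand-ins, not the study's self-report annotations.

    ```python
    # Sketch: "eigenbehaviors" via SVD of a days-by-timeslots behavior matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    # 14 days x 24 hourly slots; 1 = "active" in that hour. Two noisy routine templates.
    weekday = (np.arange(24) >= 8) & (np.arange(24) <= 18)
    weekend = (np.arange(24) >= 10) & (np.arange(24) <= 14)
    days = np.array([weekday if d % 7 < 5 else weekend for d in range(14)], dtype=float)
    days += rng.normal(scale=0.1, size=days.shape)

    mean = days.mean(axis=0)
    U, S, Vt = np.linalg.svd(days - mean, full_matrices=False)

    print("variance explained by first 2 eigenbehaviors:",
          round(float((S[:2] ** 2).sum() / (S ** 2).sum()), 3))
    recon = mean + (U[:, :2] * S[:2]) @ Vt[:2]   # rank-2 reconstruction of all days
    ```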

  8. Application of infinite model predictive control methodology to other advanced controllers.

    Science.gov (United States)

    Abu-Ayyad, M; Dubay, R; Hernandez, J M

    2009-01-01

    This paper presents an application of a recently developed predictive control algorithm, infinite model predictive control (IMPC), to other advanced control schemes. The IMPC strategy was derived for systems with different degrees of nonlinearity in the process gain and time constant. It was also shown that the IMPC structure uses nonlinear open-loop modeling which is conducted while closed-loop control is executed at every sampling instant. The main objective of this work is to demonstrate that the methodology of IMPC can be applied to other advanced control strategies, making the methodology generic. The IMPC strategy was implemented on several advanced controllers such as a PI controller using a Smith predictor, a Dahlin controller, simplified predictive control (SPC), dynamic matrix control (DMC), and shifted dynamic matrix control (m-DMC). Experimental work using these approaches combined with IMPC was conducted on both single-input-single-output (SISO) and multi-input-multi-output (MIMO) systems and compared with the original forms of these advanced controllers. Computer simulations were performed on nonlinear plants, demonstrating that the IMPC strategy can be readily implemented on other advanced control schemes, providing improved control performance. Practical work included real-time control applications on a DC motor, a plastic injection molding machine and a MIMO three-zone thermal system.
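
    For reference, a minimal sketch of the classic DMC control-move computation mentioned above (not of IMPC itself): the dynamic matrix G is built from step-response coefficients and the move vector solves a regularized least-squares problem. The horizons and the assumed first-order plant are illustrative.

    ```python
    # Sketch: unconstrained dynamic matrix control (DMC) move computation.
    import numpy as np

    # Step-response coefficients of an assumed first-order plant (illustrative).
    a = np.array([1 - np.exp(-t / 5.0) for t in range(1, 31)])

    P, M, lam = 10, 3, 0.1                     # prediction horizon, control horizon, weight
    G = np.zeros((P, M))
    for i in range(P):
        for j in range(M):
            if i >= j:
                G[i, j] = a[i - j]             # dynamic matrix from step response

    setpoint = np.ones(P)                      # desired trajectory
    free_resp = np.zeros(P)                    # predicted output with no further moves
    e = setpoint - free_resp

    # du = (G'G + lambda*I)^-1 G' e ; apply only the first move (receding horizon)
    du = np.linalg.solve(G.T @ G + lam * np.eye(M), G.T @ e)
    print("first control move:", round(float(du[0]), 3))
    ```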

  9. Methodology for Outdoor Water Savings Model and Spreadsheet Tool for U.S. and Selected States

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Alison A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chen, Yuting [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dunham, Camilla [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Fuchs, Heidi [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Price, Sarah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Stratton, Hannah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-07-31

    Green lawns and landscaping are archetypical of the populated American landscape, and typically require irrigation, which corresponds to a significant fraction of residential, commercial, and institutional water use. In North American cities, the estimated portion of residential water used for outdoor purposes ranges from 22-38% in cooler climates up to 59-67% in dry and hot environments, while turfgrass coverage within the United States spans 11.1-20.2 million hectares (Milesi et al. 2009). One national estimate uses satellite and aerial photography data to develop a relationship between impervious surface and lawn surface area, yielding a conservative estimate of 16.4 (± 3.6) million hectares of lawn surface area in the United States—an area three times larger than that devoted to any irrigated crop (Milesi et al. 2005). One approach that holds promise for cutting unnecessary outdoor water use is the increased deployment of “smart” irrigation controllers to increase the water efficiency of irrigation systems. This report describes the methodology and inputs employed in a mathematical model that quantifies the effects of the U.S. Environmental Protection Agency’s WaterSense labeling program for one such type of controller, weather-based irrigation controllers (WBIC). This model builds off that described in “Methodology for National Water Savings Model and Spreadsheet Tool–Outdoor Water Use” and uses a two-tiered approach to quantify outdoor water savings attributable to the WaterSense program for WBIC, as well as net present value (NPV) of that savings. While the first iteration of the model assessed national impacts using averaged national values, this version begins by evaluating impacts in three key large states that make up a sizable portion of the irrigation market: California, Florida, and Texas. These states are considered to be the principal market of “smart” irrigation controllers that may result in the bulk of national savings. Modeled
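
    A hedged sketch of the savings-to-NPV arithmetic mentioned above: annual water savings are valued at a unit price and discounted over the analysis horizon. All numbers (shipments, savings per controller, price, discount rate) are hypothetical, not WaterSense program inputs.

    ```python
    # Sketch: net present value of modeled outdoor water savings (hypothetical inputs).
    controllers = 100_000          # WBIC units shipped in a given year
    savings_unit = 35.0            # m^3 of water saved per controller per year
    price = 1.20                   # $ per m^3 of water
    rate, years = 0.03, 10         # discount rate, analysis horizon

    annual_value = controllers * savings_unit * price
    npv = sum(annual_value / (1 + rate) ** t for t in range(1, years + 1))
    print(f"NPV of savings: ${npv:,.0f}")
    ```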

  10. Direct Adaptive Control Methodologies for Flexible-Joint Space Manipulators with Uncertainties and Modeling Errors

    Science.gov (United States)

    Ulrich, Steve

    This work addresses the direct adaptive trajectory tracking control problem associated with lightweight space robotic manipulators that exhibit elastic vibrations in their joints, and which are subject to parametric uncertainties and modeling errors. Unlike existing adaptive control methodologies, the proposed flexible-joint control techniques do not require identification of unknown parameters, or mathematical models of the system to be controlled. The direct adaptive controllers developed in this work are based on the model reference adaptive control approach, and manage modeling errors and parametric uncertainties by time-varying the controller gains using new adaptation mechanisms, thereby reducing the errors between an ideal model and the actual robot system. More specifically, new decentralized adaptation mechanisms derived from the simple adaptive control technique and fuzzy logic control theory are considered in this work. Numerical simulations compare the performance of the adaptive controllers with a nonadaptive and a conventional model-based controller, in the context of 12.6 m x 12.6 m square trajectory tracking. To validate the robustness of the controllers to modeling errors, a new dynamics formulation that includes several nonlinear effects usually neglected in flexible-joint dynamics models is proposed. Results obtained with the adaptive methodologies demonstrate an increased robustness to both uncertainties in joint stiffness coefficients and dynamics modeling errors, as well as highly improved tracking performance compared with the nonadaptive and model-based strategies. Finally, this work considers the partial state feedback problem related to flexible-joint space robotic manipulators equipped only with sensors that provide noisy measurements of motor positions and velocities. An extended Kalman filter-based estimation strategy is developed to estimate all state variables in real-time. The state estimation filter is combined with an adaptive
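
    A minimal sketch of the model reference adaptive control idea underlying these controllers, for a scalar plant with unknown parameters: the control gains adapt from the model-tracking error via a Lyapunov rule. This is a generic textbook scheme, not the thesis' decentralized flexible-joint law.

    ```python
    # Sketch: scalar model reference adaptive control (MRAC) with a Lyapunov rule.
    dt, T = 1e-3, 10.0
    a, b = 1.0, 2.0            # unknown plant: xdot = a*x + b*u
    am, bm = -4.0, 4.0         # reference model: xmdot = am*xm + bm*r
    gamma = 5.0                # adaptation gain

    x = xm = 0.0
    kx = kr = 0.0              # adaptive feedback/feedforward gains
    for i in range(int(T / dt)):
        r = 1.0 if (i * dt) % 4 < 2 else -1.0      # square-wave reference
        u = kx * x + kr * r
        e = x - xm                                  # model tracking error
        # Lyapunov-rule adaptation (sign of b assumed known and positive)
        kx -= gamma * e * x * dt
        kr -= gamma * e * r * dt
        x += (a * x + b * u) * dt
        xm += (am * xm + bm * r) * dt

    # Gains converge toward the ideal values kx* = (am - a)/b, kr* = bm/b.
    print("final gains:", round(kx, 2), round(kr, 2), "tracking error:", round(e, 4))
    ```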

  11. Modelling and Statistical Optimization of Dilute Acid Hydrolysis of Corn Stover Using Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Andrew Nosakhare Amenaghawon

    2014-07-01

    Response surface methodology (RSM) was employed for the analysis of the simultaneous effects of acid concentration, pretreatment time and temperature on the total reducing sugar concentration obtained during acid hydrolysis of corn stover. A three-variable, three-level Box-Behnken design (BBD) was used to develop a statistical model for the optimization of the process variables. The optimal hydrolysis conditions that resulted in the maximum total reducing sugar concentration were: acid concentration, 1.72% (w/w); temperature, 169.26 °C; and pretreatment time, 48.73 minutes. Under these conditions, the total reducing sugar concentration was obtained to be 23.41 g/L. Validation of the model indicated no difference between predicted and observed values.
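
    A hedged sketch of the optimization step that follows such a Box-Behnken fit: given an already-fitted quadratic model (the coefficients below are made up, not the paper's), scipy finds the factor settings that maximize the predicted response within the coded design bounds.

    ```python
    # Sketch: locate the optimum of a fitted quadratic response surface (coded units).
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical fitted coefficients: intercept, linear (3), quadratic (3), interactions (3).
    b0, bl = 20.0, np.array([1.2, 0.8, 0.5])
    bq = np.array([-1.5, -0.9, -0.6])
    bi = np.array([0.2, -0.1, 0.05])  # x1*x2, x1*x3, x2*x3

    def predicted_sugar(x):
        inter = bi[0]*x[0]*x[1] + bi[1]*x[0]*x[2] + bi[2]*x[1]*x[2]
        return b0 + bl @ x + bq @ x**2 + inter

    res = minimize(lambda x: -predicted_sugar(np.asarray(x)), x0=[0.0, 0.0, 0.0],
                   bounds=[(-1, 1)] * 3)
    print("optimal coded factors:", np.round(res.x, 2),
          "max predicted response:", round(-res.fun, 2))
    ```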

  12. A new methodology for modelling of health risk from urban flooding exemplified by cholera

    DEFF Research Database (Denmark)

    Mark, Ole; Jørgensen, Claus; Hammond, Michael

    2016-01-01

    The phenomenon of urban flooding due to rainfall exceeding the design capacity of drainage systems is a global problem and can have significant economic and social consequences. This is even more extreme in developing countries, where poor sanitation still causes a high infectious disease burden and mortality, especially during floods. At present, there are no software tools capable of combining hydrodynamic modelling and health risk analyses, and the links between urban flooding and the health risk for the population due to direct contact with the flood water are poorly understood. The present paper outlines a novel methodology for linking dynamic urban flood modelling with quantitative microbial risk assessment (QMRA). This provides a unique possibility for understanding the interaction between urban flooding and health risk caused by direct human contact with the flood water and hence gives...

  13. State-space models for bio-loggers: A methodological road map

    DEFF Research Database (Denmark)

    Jonsen, I.D.; Basson, M.; Bestley, S.

    2012-01-01

    ...bio-physical datasets to understand physiological and ecological influences on habitat selection. In most cases, however, the behavioural context is not directly observable and therefore must be inferred. Animal movement data are complex in structure, entailing a need for stochastic analysis methods. The recent development of state-space modelling approaches for animal movement data provides statistical rigor for inferring hidden behavioural states, relating these states to bio-physical data, and ultimately for predicting the potential impacts of climate change. Despite the widespread utility, and current popularity, of state-space models for analysis of animal tracking data, these tools are not simple and require considerable care in their use. Here we develop a methodological “road map” for ecologists by reviewing currently available state-space implementations. We discuss appropriate use of state-space methods...

  14. A methodology for 3D modeling and visualization of geological objects

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Geological body structure is the product of geological evolution in the time dimension, and it is presented in 3D configuration in the natural world. However, many geologists still record and process their geological data in a 2D or 1D pattern, which results in the loss of a large quantity of spatial data. One of the reasons is that current methods have limitations in expressing underground geological objects. To analyze and interpret geological models, we present a layer data model to organize different kinds of geological datasets. The data model implements unified expression and storage of geological data and geometric models. In addition, it provides a method for visualizing large-scale geological datasets by rapidly building multi-resolution geological models, which can meet the demands of the operation, analysis, and interpretation of 3D geological objects. This demonstrates that our methodology is competent for 3D modeling and self-adaptive visualization of large geological objects, and that it is a good way to solve the problem of integration and sharing of geological spatial data.

  15. The Methodology of Interactive Parametric Modelling of Construction Site Facilities in BIM Environment

    Science.gov (United States)

    Kozlovská, Mária; Čabala, Jozef; Struková, Zuzana

    2014-11-01

    Information technology is becoming a strong tool in different industries, including construction. The recent trend in building design leads to the creation of the most comprehensive virtual building model possible (the Building Information Model) in order to solve all problems relating to the project as early as the design phase. Building information modelling is a new way of approaching the design of building project documentation. Currently, the building site layout, as a part of the building design documents, has very little support in the BIM environment. Recently, research on designing the conditions of the construction process has centred on improving general practice in planning and on new approaches to construction site layout planning. The state of the art in designing construction process conditions indicated an unexplored problem related to connecting a knowledge system with the construction site facilities (CSF) layout through interactive modelling. The goal of the paper is to present a methodology for the execution of a 3D construction site facility allocation model (3D CSF-IAM), based on the principles of parametric and interactive modelling.

  16. Methodology to develop crash modification functions for road safety treatments with fully specified and hierarchical models.

    Science.gov (United States)

    Chen, Yongsheng; Persaud, Bhagwant

    2014-09-01

    Crash modification factors (CMFs) for road safety treatments are developed as multiplicative factors that are used to reflect the expected changes in safety performance associated with changes in highway design and/or the traffic control features. However, current CMFs have methodological drawbacks. For example, variability with application circumstance is not well understood, and, as important, correlation is not addressed when several CMFs are applied multiplicatively. These issues can be addressed by developing safety performance functions (SPFs) with components of crash modification functions (CM-Functions), an approach that includes all CMF related variables, along with others, while capturing quantitative and other effects of factors and accounting for cross-factor correlations. CM-Functions can capture the safety impact of factors through a continuous and quantitative approach, avoiding the problematic categorical analysis that is often used to capture CMF variability. There are two formulations to develop such SPFs with CM-Function components - fully specified models and hierarchical models. Based on sample datasets from two Canadian cities, both approaches are investigated in this paper. While both model formulations yielded promising results and reasonable CM-Functions, the hierarchical model was found to be more suitable in retaining homogeneity of first-level SPFs, while addressing CM-Functions in sub-level modeling. In addition, hierarchical models better capture the correlations between different impact factors.
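
    A hedged sketch of the kind of safety performance function such work builds on: a negative binomial regression of crash counts on traffic volume and a treatment-related covariate, from which a continuous crash modification function CMF(dw) = exp(beta * dw) can be read off. The data and variables are synthetic, not the two Canadian datasets.

    ```python
    # Sketch: negative binomial SPF; a CM-Function from a fitted coefficient.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    aadt = rng.uniform(2_000, 30_000, n)          # traffic volume
    lane_width = rng.uniform(3.0, 3.8, n)         # candidate CMF variable (m)
    mu = np.exp(-6.0 + 0.8 * np.log(aadt) - 0.5 * (lane_width - 3.4))
    crashes = rng.poisson(mu)                      # synthetic crash counts

    X = sm.add_constant(np.column_stack([np.log(aadt), lane_width - 3.4]))
    fit = sm.GLM(crashes, X, family=sm.families.NegativeBinomial()).fit()

    beta = fit.params[2]
    print("CM-Function: CMF(dw) = exp(%.3f * dw)" % beta)  # continuous, not categorical
    ```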

  17. A methodology for 3D modeling and visualization of geological objects

    Institute of Scientific and Technical Information of China (English)

    ZHANG LiQiang; TAN YuMin; KANG ZhiZhong; RUI XiaoPing; ZHAO YuanYuan; LIU Liu

    2009-01-01

    Geological body structure is the product of geological evolution in the time dimension, and it is presented in 3D configuration in the natural world. However, many geologists still record and process their geological data in a 2D or 1D pattern, which results in the loss of a large quantity of spatial data. One of the reasons is that current methods have limitations in expressing underground geological objects. To analyze and interpret geological models, we present a layer data model to organize different kinds of geological datasets. The data model implements unified expression and storage of geological data and geometric models. In addition, it provides a method for visualizing large-scale geological datasets by rapidly building multi-resolution geological models, which can meet the demands of the operation, analysis, and interpretation of 3D geological objects. This demonstrates that our methodology is competent for 3D modeling and self-adaptive visualization of large geological objects, and that it is a good way to solve the problem of integration and sharing of geological spatial data.

  18. Agent-Oriented Methodology and Modeling Tools

    Institute of Scientific and Technical Information of China (English)

    季强

    2002-01-01

    This paper introduces an agent-oriented methodology and modeling tools based on MAGE. The methodology supports the analysis, design and implementation of multi-agent systems. The modeling tools assist the developer in building multi-agent systems with the methodology through a set of visual model editors.

  19. Statistical methodology for discrete fracture model - including fracture size, orientation uncertainty together with intensity uncertainty and variability

    Energy Technology Data Exchange (ETDEWEB)

    Darcel, C. (Itasca Consultants SAS (France)); Davy, P.; Le Goc, R.; Dreuzy, J.R. de; Bour, O. (Geosciences Rennes, UMR 6118 CNRS, Univ. de Rennes, Rennes (France))

    2009-11-15

    Investigations led for several years at Laxemar and Forsmark reveal the large heterogeneity of geological formations and the associated fracturing. This project aims at reinforcing the statistical DFN modeling framework adapted to a site scale. It therefore leads to the development of quantitative characterization methods adapted to the nature of the fracturing and to data availability. We start with the hypothesis that the maximum likelihood DFN model is a power-law model with a density term depending on orientations. This is supported both by the literature and, specifically here, by former analyses of the SKB data. This assumption is nevertheless thoroughly tested by analyzing the fracture trace and lineament maps. Fracture traces range roughly between 0.5 m and 10 m, i.e. the usual extension of the sample outcrops. Between the raw data and the final data used to compute the fracture size distribution, from which the size distribution model will arise, several steps are necessary in order to correct the data for finite-size, topographical and sampling effects. More precisely, particular attention is paid to the fracture segmentation status and to fracture linkage consistent with the expected DFN model. The fracture scaling trend observed over both sites finally displays a shape parameter k_t close to 1.2 with a density term (alpha_2d) between 1.4 and 1.8. Only two outcrops clearly display a different trend, with k_t close to 3 and a density term (alpha_2d) between 2 and 3.5. The fracture lineaments spread over the range between 100 meters and a few kilometers. When compared with fracture trace maps, these datasets are already interpreted, and the linkage process developed previously does not have to be applied. Except for the subregional lineament map from Forsmark, lineaments display a clear power-law trend with a shape parameter k_t equal to 3 and a density term between 2 and 4.5. The apparent variation in scaling exponent, from the outcrop scale (k_t = 1.2) on one side, to
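
    A minimal sketch of sampling fracture sizes from such a power-law (Pareto) length distribution by inverse-transform sampling; the exponent uses the k_t ≈ 1.2 value quoted above, and the minimum length is an arbitrary cut-off.

    ```python
    # Sketch: inverse-transform sampling of power-law fracture trace lengths.
    import numpy as np

    kt = 1.2        # shape (scaling) exponent from the trace-map analysis above
    l_min = 0.5     # minimum trace length (m), illustrative cut-off

    rng = np.random.default_rng(0)
    u = rng.uniform(size=100_000)
    lengths = l_min * (1.0 - u) ** (-1.0 / kt)   # from P(L > l) = (l / l_min)^(-kt)

    print("median %.2f m, 99th percentile %.1f m" %
          (np.median(lengths), np.percentile(lengths, 99)))
    ```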

  20. Integrated Methodology for Information System Change Control Based on Enterprise Architecture Models

    Directory of Open Access Journals (Sweden)

    Pirta Ruta

    2015-12-01

    Full Text Available The information system (IS) change management and governance are, according to the best practices, defined and described in several international methodologies, standards, and frameworks (ITIL, COBIT, ValIT, etc.). These methodologies describe IS change management aspects from the viewpoint of their particular enterprise resource management area. The areas are mainly viewed in a partly isolated environment, and the integration of the existing methodologies is insufficient for providing unified and controlled methodological support for holistic IS change management. In this paper, an integrated change management methodology is introduced. The methodology consists of guidelines for IS change control that integrate the following significant resource management areas: information technology (IT) governance, change management, and enterprise architecture (EA) change management. In addition, the methodology includes lists of controls applicable at different phases. The approach is based on the re-use and fusion of principles used by related methodologies, as well as on empirical observations about typical IS change management mistakes in enterprises.

  1. A Model-Based Prognostics Methodology For Electrolytic Capacitors Based On Electrical Overstress Accelerated Aging

    Data.gov (United States)

    National Aeronautics and Space Administration — A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical...

  3. Integrated modeling methodology for microtubule dynamics and Taxol kinetics with experimentally identifiable parameters.

    Science.gov (United States)

    Zhao, He; Sokhansanj, Bahrad A

    2007-10-01

    Microtubule dynamics play a critical role in cell function and stress response, modulating mitosis, morphology, signaling, and transport. Drugs such as paclitaxel (Taxol) can impact tubulin polymerization and affect microtubule dynamics. While theoretical methods have been previously proposed to simulate microtubule dynamics, we develop a methodology here that can be used to compare model predictions with experimental data. Our model is a hybrid of (1) a simple two-state stochastic formulation of tubulin polymerization kinetics and (2) an equilibrium approximation for the chemical kinetics of Taxol drug binding to microtubule ends. Model parameters are biologically realistic, with values taken directly from experimental measurements. Model validation is conducted against published experimental data comparing optical measurements of microtubule dynamics in cultured cells under normal and Taxol-treated conditions. To compare model predictions with experimental data requires applying a "windowing" strategy on the spatiotemporal resolution of the simulation. From a biological perspective, this is consistent with interpreting the microtubule "pause" phenomenon as at least partially an artifact of spatiotemporal resolution limits on experimental measurement.
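
    As a rough illustration of such a two-state simulation, the following Python sketch reproduces both the stochastic growth/shrinkage kinetics and the "windowing" step that resamples the trace at an experimental frame interval; all rate constants and resolution limits here are hypothetical placeholders, not the study's fitted values:

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical, order-of-magnitude parameters; not the study's values.
        V_GROW = 0.12     # growth speed, um/s
        V_SHRINK = 0.30   # shrinkage speed, um/s
        F_CAT = 0.005     # catastrophe frequency, 1/s
        F_RES = 0.05      # rescue frequency, 1/s
        DT = 0.1          # simulation time step, s

        def simulate_two_state(t_end=600.0, length0=5.0):
            """Two-state (growing/shrinking) microtubule length trace."""
            n = int(t_end / DT)
            trace = np.empty(n)
            length, growing = length0, True
            for i in range(n):
                if growing:
                    length += V_GROW * DT
                    if rng.random() < F_CAT * DT:      # catastrophe event
                        growing = False
                else:
                    length = max(length - V_SHRINK * DT, 0.0)
                    if rng.random() < F_RES * DT or length == 0.0:  # rescue
                        growing = True
                trace[i] = length
            return trace

        def window(trace, frame_dt=2.0, resolution=0.1):
            """Resample at the experimental frame interval and label frames
            whose displacement is below the optical resolution as 'pause'."""
            sampled = trace[::int(frame_dt / DT)]
            d = np.diff(sampled)
            return np.where(d > resolution, "grow",
                            np.where(d < -resolution, "shrink", "pause"))

        states = window(simulate_two_state())
        print({s: int((states == s).sum()) for s in ("grow", "shrink", "pause")})

    With a coarse frame interval, a fraction of the simulated frames is classified as "pause" even though the underlying trace never stops, which is the resolution-limit interpretation the abstract describes.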

  4. The related congestion failure estimating methodology and model in transportation networks

    Science.gov (United States)

    Yuan, PengCheng; Juan, ZhiCai

    2013-10-01

    Previous work on probability-based evaluation of transportation networks mainly focuses on static reliability evaluation, ascribing the stochasticity of travel time to long-term external factors (traffic supply or traffic demand). In that setting, the correlation structure of link travel times can be inferred, which is useful for long-term planning decisions by planners or engineers. Although some methodologies for evaluating the real-time travel time reliability of transportation networks have been presented, they assume that link travel times are independent. In this paper we relax this assumption. Using Gaussian copula theory, we present a new method to evaluate the real-time travel time reliability of transportation networks. The results show that ignoring the correlation between link travel times overestimates route and network travel time reliability. Furthermore, we extend the static reliability evaluation model to the dynamic case, and we also present congestion failure evaluation models for links and for the whole network. Estimates from the models are compared to field-measured data: within an error interval of ±2 times, the accuracy of the link congestion failure model is above 90.3%, and within an error interval of ±0.05, the accuracy of the network congestion failure model is above 95%.
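
    The Gaussian copula construction used here is straightforward to sketch: draw correlated standard normals, map them to uniforms, and push the uniforms through each link's marginal travel-time distribution. The Python fragment below does this for a hypothetical three-link route (all marginals and correlations are invented for the example) and contrasts route reliability with and without correlation:

        import numpy as np
        from scipy.stats import norm, lognorm

        rng = np.random.default_rng(1)

        # Hypothetical 3-link route: lognormal marginal travel times (minutes)
        # and a Gaussian-copula correlation matrix; values are illustrative.
        marginals = [lognorm(s=0.25, scale=5.0),
                     lognorm(s=0.30, scale=8.0),
                     lognorm(s=0.20, scale=4.0)]
        R = np.array([[1.0, 0.6, 0.3],
                      [0.6, 1.0, 0.5],
                      [0.3, 0.5, 1.0]])

        def sample_route_times(n):
            """Correlated link travel times via a Gaussian copula:
            correlated normals -> uniforms (via Phi) -> marginal quantiles."""
            z = rng.multivariate_normal(np.zeros(3), R, size=n)
            u = norm.cdf(z)
            links = np.column_stack([m.ppf(u[:, j])
                                     for j, m in enumerate(marginals)])
            return links.sum(axis=1)

        budget = 20.0  # on-time threshold for the route, minutes
        t_corr = sample_route_times(100_000)
        # independent benchmark: draw each link on its own
        t_ind = sum(m.rvs(size=100_000, random_state=rng) for m in marginals)
        print("P(on time), correlated :", (t_corr <= budget).mean())
        print("P(on time), independent:", (t_ind <= budget).mean())

    With positively correlated links, the independent benchmark reports the higher (overestimated) route reliability, which is the qualitative effect the abstract reports.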

  5. A Novel Methodology to Overcome Routing Misbehavior in Manet using Retaliation Model

    Directory of Open Access Journals (Sweden)

    Md. Amir Khusru Akhtar

    2013-08-01

    Full Text Available MANET is a cooperative network in which nodes are responsible for forwarding as well as routing. Noncooperation is still a big challenge that certainly degrades the performance and reliability of a MANET. This paper presents a novel methodology to overcome routing misbehavior in MANET using the Retaliation Model. In this model node misbehavior is watched and an equivalent misbehavior is given in return. The model employs several parameters, such as the number of packets forwarded, the number of packets received for forwarding, and the packet forwarding ratio, to calculate Grade and Bonus Points. The Grade is used to isolate selfish nodes from the routing paths, and the Bonus Points define the number of packets dropped by an honest node in retaliation over its misconducts. The implementation is done in "GloMoSim" on top of the DSR protocol. We obtained up to 40% packet delivery ratio with a cost of a minimum of 7.5% overhead compared to DSR. To minimize total control traffic overhead we have included the FG Model with our model, which reduces the overhead by up to 75%. This model enforces cooperation due to its stricter punishment strategy and justifies its name.

  6. A consistent modelling methodology for secondary settling tanks: a reliable numerical method.

    Science.gov (United States)

    Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena

    2013-01-01

    The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
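
    The numerical core of such a scheme can be sketched for a simplified batch-settling case: a Godunov flux for the nonlinear hindered-settling term combined with explicit updating of a degenerate diffusion (compression) term. The Python fragment below uses illustrative constitutive functions and parameters, not the paper's, and evaluates the Godunov flux by sampling rather than analytically:

        import numpy as np

        V0, CMAX, NEXP = 1.2e-3, 30.0, 2.0   # settling params (m/s, kg/m3, -)
        D0, CCRIT = 1e-5, 8.0                # compression 'diffusion' (m2/s)

        def f(c):          # hindered-settling flux, downwards positive
            return V0 * c * (1.0 - c / CMAX) ** NEXP

        def d(c):          # degenerate diffusion: active only above CCRIT
            return np.where(c > CCRIT, D0, 0.0)

        def godunov_flux(cl, cr):
            """Godunov numerical flux for a scalar conservation law,
            approximated by sampling f between the two cell states."""
            s = np.linspace(cl, cr, 32)
            return f(s).min() if cl <= cr else f(s).max()

        def step(c, dz, dt):
            n = len(c)
            F = np.zeros(n + 1)        # zero flux at tank top/bottom (batch)
            for i in range(1, n):
                F[i] = godunov_flux(c[i - 1], c[i])
            # add explicit diffusive flux -D dc/dz at interior interfaces
            J = 0.5 * (d(c[:-1]) + d(c[1:])) * np.diff(c) / dz
            F[1:-1] -= J
            return c - dt / dz * (F[1:] - F[:-1])

        nz, depth = 100, 1.0
        dz = depth / nz
        c = np.full(nz, 5.0)           # initially homogeneous, kg/m3
        dt = 0.25 * dz / V0            # heuristic CFL-like restriction
        for _ in range(2000):
            c = step(c, dz, dt)
        print("max concentration near the bottom:", c.max(), "kg/m3")

    A production scheme would treat the feed source term and the discontinuous bulk flows as well; this sketch only shows how the discontinuity-capturing flux and the degenerate diffusion interact.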

  7. Inverse modeling of emissions for local photooxidant pollution: testing a new methodology with kriging constraints

    Energy Technology Data Exchange (ETDEWEB)

    Pison, I.; Blond, N. [Paris-7 Univ., Creteil (France). LISA, CNRS; Menut, L. [Ecole Polytechnique, Palaiseau (France). LMD/IPSL

    2006-07-01

    A new methodology for the inversion of anthropogenic emissions at a local scale is tested. The inversion constraints are provided by a kriging technique used in air quality forecast in the Paris area, which computes an analyzed concentration field from network measurements and the first-guess simulation of a CTM. The inverse developed here is based on the CHIMERE model and its adjoint to perform 4-D integration. The methodology is validated on synthetic cases inverting emission fluxes. It is shown that the information provided by the analyzed concentrations is sufficient to reach a mathematically acceptable solution to the optimization, even when little information is available in the measurements. As compared to the use of measurements alone or of measurements and a background matrix, the use of kriging leads to a more homogeneous distribution of the corrections, both in space and time. Moreover, it is then possible to double the accuracy of the inversion by performing two kriging-optimization cycles. Nevertheless, kriging analysis cannot compensate for a very important lack of information in the measurements. (orig.)

  8. Inverse modeling of emissions for local photooxidant pollution: Testing a new methodology with kriging constraints

    Directory of Open Access Journals (Sweden)

    I. Pison

    2006-07-01

    Full Text Available A new methodology for the inversion of anthropogenic emissions at a local scale is tested. The inversion constraints are provided by a kriging technique used in air quality forecast in the Paris area, which computes an analyzed concentration field from network measurements and the first-guess simulation of a CTM. The inverse developed here is based on the CHIMERE model and its adjoint to perform 4-D integration. The methodology is validated on synthetic cases inverting emission fluxes. It is shown that the information provided by the analyzed concentrations is sufficient to reach a mathematically acceptable solution to the optimization, even when little information is available in the measurements. As compared to the use of measurements alone or of measurements and a background matrix, the use of kriging leads to a more homogeneous distribution of the corrections, both in space and time. Moreover, it is then possible to double the accuracy of the inversion by performing two kriging-optimization cycles. Nevertheless, kriging analysis cannot compensate for a very important lack of information in the measurements.
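
    A minimal 1-D illustration of the kriging-analysis step described above can be written in a few lines of Python: observed residuals (measurement minus first guess) are interpolated with a stationary covariance model and added back to the first-guess field to form the analyzed field. The covariance model and all numbers below are hypothetical:

        import numpy as np

        rng = np.random.default_rng(2)

        def gaussian_cov(dist, sill=1.0, length=25.0):
            """Stationary Gaussian covariance model (hypothetical params)."""
            return sill * np.exp(-(dist / length) ** 2)

        def simple_kriging(x_obs, r_obs, x_grid):
            """Simple kriging of residuals onto a grid."""
            C = gaussian_cov(np.abs(x_obs[:, None] - x_obs[None, :]))
            C += 1e-8 * np.eye(len(x_obs))        # nugget for conditioning
            c0 = gaussian_cov(np.abs(x_grid[:, None] - x_obs[None, :]))
            return c0 @ np.linalg.solve(C, r_obs)

        # toy 'concentration' analysis: CTM-like first guess + sparse stations
        x_grid = np.linspace(0.0, 100.0, 201)
        first_guess = 40.0 + 10.0 * np.sin(x_grid / 15.0)
        x_obs = np.array([5.0, 22.0, 40.0, 63.0, 85.0])
        guess_at_obs = 40.0 + 10.0 * np.sin(x_obs / 15.0)
        obs = guess_at_obs + rng.normal(3.0, 1.0, 5)   # biased 'measurements'
        residual = obs - guess_at_obs

        analysis = first_guess + simple_kriging(x_obs, residual, x_grid)
        print("mean first-guess correction:", (analysis - first_guess).mean())

    In the papers above, a field of this kind (rather than the raw station values) provides the constraint for the 4-D adjoint optimization, which is what spreads the corrections more homogeneously in space and time.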

  9. Proposition of a modeling and an analysis methodology of integrated reverse logistics chain in the direct chain

    Directory of Open Access Journals (Sweden)

    Faycal Mimouni

    2016-04-01

    Full Text Available Purpose: To propose a modeling and analysis methodology, based on the combination of Bayesian networks and Petri nets, for reverse logistics integrated into the direct supply chain. Design/methodology/approach: Network modeling by combining Petri and Bayesian networks. Findings: Modeling with a Bayesian network complemented by a Petri network to break the cycle problem in the Bayesian network. Research limitations/implications: Demands are independent of returns. Practical implications: The model can only be used for nonperishable products. Social implications: Legislative aspects: recycling laws; protection of the environment; client satisfaction via after-sales service. Originality/value: A Bayesian network with a cycle, combined with the Petri network.

  10. Application of response surface methodology and artificial neural network methods in modelling and optimization of biosorption process.

    Science.gov (United States)

    Witek-Krowiak, Anna; Chojnacka, Katarzyna; Podstawczyk, Daria; Dawiec, Anna; Pokomeda, Karol

    2014-05-01

    A review of the application of response surface methodology (RSM) and artificial neural networks (ANN) to biosorption modelling and optimization is presented. The theoretical background of the discussed methods is explained, together with the application procedure. The paper describes the most frequently used experimental designs, discussing their limitations and typical applications. It also presents ways to determine the accuracy and the significance of model fitting for both methodologies described herein. Furthermore, recent references on biosorption modelling and optimization using RSM and the ANN approach are reviewed. Special attention was paid to the selection of factors and responses, as well as to statistical analysis of the modelling results.
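
    As a small illustration of the RSM side of this picture, the following Python fragment fits a full quadratic response surface to a simulated two-factor biosorption data set and solves for the stationary point of the fitted surface; the factors, design points and response function are invented for the example:

        import numpy as np

        rng = np.random.default_rng(3)

        # hypothetical uptake (mg/g) vs pH (x1) and sorbent dose (x2)
        def true_response(x1, x2):
            return (50 - 4*(x1 - 6.0)**2 - 9*(x2 - 1.5)**2
                    + 2*(x1 - 6.0)*(x2 - 1.5))

        # face-centred-design-style points plus centre replicates
        x1 = np.array([4, 8, 4, 8, 6, 6, 4, 8, 6, 6, 6], dtype=float)
        x2 = np.array([1, 1, 2, 2, 1, 2, 1.5, 1.5, 1.5, 1.5, 1.5])
        y = true_response(x1, x2) + rng.normal(0, 0.5, x1.size)

        # full quadratic model: 1, x1, x2, x1^2, x2^2, x1*x2
        X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)

        # stationary point: solve grad(fitted quadratic) = 0
        b = beta[1:3]
        B = np.array([[2*beta[3], beta[5]],
                      [beta[5],  2*beta[4]]])
        x_opt = np.linalg.solve(B, -b)
        print("fitted optimum (pH, dose):", x_opt)

    The significance tests and lack-of-fit checks discussed in the review would be applied to the fitted coefficients before trusting such an optimum; an ANN would replace the fixed quadratic form with a learned response surface.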

  11. A model approach to project the start of egg laying of Great Tit (Parus major L.) in response to climate change

    Science.gov (United States)

    Chmielewski, Frank-M.; Blümel, Klaus; Scherbaum-Heberer, Carina; Koppmann-Rumpf, Bettina; Schmidt, Karl-Heinz

    2013-03-01

    The aim of this study was to select a phenological model that is able to calculate the beginning of egg laying of the Great Tit (Parus major) for both current and future climate conditions. Four models (M1-M4) were optimised on long-term phenological observations from the Ecological Research Centre Schlüchtern (Hessen/Germany). Model M1 was a common thermal time model that accumulates growing degree days (GDD) from an optimised starting date t1. Since egg laying of the Great Tit is influenced not only by air temperature but also by photoperiod, model M1 was extended by a daylength term to give M2. The other two models, M3 and M4, correspond to M1 and M2, but t1 was intentionally set to 1 January in order to take into account temperatures already rising at the beginning of the year. A comparison of the four models led to the following results: model M1 had a relatively high root mean square error at verification (RMSEver) of more than 4 days and can be used only to calculate the start of egg laying for current climate conditions because of the relatively late starting date for GDD calculation. The model failed completely if the starting date was set to 1 January (M3). Consideration of a daylength term in models M2 and M4 strongly improved the performance of both models (RMSEver of only 3 days or less), increased the credibility of the parameter estimation, and was a precondition for calculating reliable projections of the timing of egg laying in birds for the future. These results confirm that the start of egg laying of the Great Tit is influenced not only by air temperature but also by photoperiod. Although models M2 and M4 both provide comparably good results for current climate conditions, we recommend model M4, with a starting date of temperature accumulation on 1 January, for calculating possible future shifts in the commencement of egg laying. Our regional projections of the start of egg laying, based on five regional climate models (RCMs: REMO-UBA, ECHAM5-CLM, HadCM3-CLM, WETTREG-0
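
    The structure of such a model is easy to state in code. The Python sketch below accumulates daylength-weighted growing degree days from 1 January (in the spirit of models M2/M4) until a forcing requirement is met; the base temperature, forcing requirement and daylength exponent are hypothetical placeholders, not the paper's fitted parameters:

        import numpy as np

        T_BASE, F_STAR, DL_EXP, LAT = 4.0, 120.0, 2.0, 50.0

        def daylength(doy, lat_deg=LAT):
            """Approximate astronomical daylength in hours."""
            phi = np.radians(lat_deg)
            decl = np.radians(23.44) * np.sin(2*np.pi*(doy - 81)/365.0)
            cos_h = -np.tan(phi) * np.tan(decl)
            return 24.0/np.pi * np.arccos(np.clip(cos_h, -1.0, 1.0))

        def egg_laying_day(daily_tmean):
            """Day of year on which accumulated forcing first reaches F_STAR,
            counting from 1 January with a daylength weight on each GDD."""
            forcing = 0.0
            for doy, t in enumerate(daily_tmean, start=1):
                gdd = max(t - T_BASE, 0.0)
                forcing += gdd * (daylength(doy)/24.0) ** DL_EXP
                if forcing >= F_STAR:
                    return doy
            return None   # threshold never reached

        # synthetic seasonal temperature course for one year
        doy = np.arange(1, 241)
        tmean = (4.0 + 12.0*np.sin(2*np.pi*(doy - 105)/365.0)
                 + np.random.default_rng(4).normal(0, 2, doy.size))
        print("modelled start of egg laying (DOY):", egg_laying_day(tmean))

    With the daylength weight set to zero exponent, the sketch degenerates to a plain GDD model started on 1 January, i.e. the failing model M3; the weight is what stabilizes the 1 January start.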

  12. Review of Project SAFE: Comments on biosphere conceptual model description and risk assessment methodology

    Energy Technology Data Exchange (ETDEWEB)

    Klos, Richard; Wilmot, Roger [Galson Sciences Ltd (United Kingdom)

    2002-09-01

    The Swedish Nuclear Fuel and Waste Management Company's (SKB's) most recent assessment of the safety of the Forsmark repository for low-level and intermediate-level waste (Project SAFE) is currently undergoing review by the Swedish regulators. As part of its review, the Swedish Radiation Protection Institute (SSI) identified that two components of SAFE require more detailed review: (i) the conceptual model description of the biosphere system, and (ii) SKB's risk assessment methodology. We have reviewed the biosphere system interaction matrix and how this has been used in the identification, justification and description of biosphere models for radiological assessment purposes. The risk assessment methodology has been reviewed considering in particular issues associated with scenario selection, assessment timescale, and the probability and risk associated with the well scenario. There is an extensive range of supporting information on which biosphere modelling in Project SAFE is based. However, the link between this material and the biosphere models themselves is not clearly set out. This leads to some contradictions and mis-matches between description and implementation. One example concerns the representation of the geosphere-biosphere interface. The supporting description of lakes indicates that interaction between groundwaters entering the biosphere through lake bed sediments could lead to accumulations of radionuclides in sediments. These sediments may become agricultural areas at some time in the future. In the numerical modelling of the biosphere carried out in Project SAFE, the direct accumulation of contaminants in bed sediments is not represented. Application of a more rigorous procedure to ensure numerical models are fit for purpose is recommended, paying more attention to issues associated with the geosphere-biosphere interface. A more structured approach to risk assessment would be beneficial, with a better explanation of the difference

  13. Mathematical model of marine diesel engine simulator for a new methodology of self propulsion tests

    Science.gov (United States)

    Izzuddin, Nur; Sunarsih; Priyanto, Agoes

    2015-05-01

    As a vessel operates in the open seas, a marine diesel engine simulator whose engine rotation is controlled to transmit power through the propeller shaft offers a new methodology for self-propulsion tests that tracks fuel saving in real time. In this context, this paper presents a real-time marine diesel engine simulator system that tracks the actual performance of a ship through a computer-simulated model. A mathematical model of the marine diesel engine and the propeller is used in the simulation to estimate the fuel rate, engine rotating speed, and the thrust and torque of the propeller, and thus achieve the target vessel speed. The input and output form a real-time control system of fuel saving rate and propeller rotating speed representing the marine diesel engine characteristics. Self-propulsion tests in calm water were conducted using a vessel model to validate the marine diesel engine simulator. The simulator was then used to evaluate fuel saving by employing a new mathematical model of turbochargers for the marine diesel engine simulator. The control system developed will help users analyze different vessel speed conditions to obtain better characteristics and hence optimize the fuel saving rate.

  14. ABOUT THE RELEVANCE AND METHODOLOGY ASPECTS OF TEACHING THE MATHEMATICAL MODELING TO PEDAGOGICAL STUDENTS

    Directory of Open Access Journals (Sweden)

    Y. A. Perminov

    2014-01-01

    Full Text Available The paper substantiates the need for profile training in mathematical modeling for pedagogical students, caused by the total penetration of mathematics into different sciences, including the humanities; the fast development of information and communications technologies; and the growing importance of mathematical modeling, which combines the informal scientific and formal mathematical languages with the unique opportunities of computer programming. The author singles out the reasons for mastering and using the mathematical apparatus by teachers in every discipline. Indeed, among all the modern mathematical methods and ideas, mathematical modeling retains its priority in all professional spheres. Therefore, the discipline of "Mathematical Modeling" can play an important role in integrating different components of specialists' training in various profiles. By mastering the basics of mathematical modeling, students acquire skills of methodological thinking; learn the principles of analysis, synthesis, and generalization of ideas and methods in different disciplines and scientific spheres; and achieve general culture competences. In conclusion, the author recommends incorporating the "Methods of Profile Training in Mathematical Modeling" into pedagogical magistracy curricula.

  15. IIR filtering based adaptive active vibration control methodology with online secondary path modeling using PZT actuators

    Science.gov (United States)

    Boz, Utku; Basdogan, Ipek

    2015-12-01

    Structural vibration is a major cause of noise problems, discomfort and mechanical failures in aerospace, automotive and marine systems, which are mainly composed of plate-like structures. Active vibration control (AVC) is an effective approach for reducing structural vibrations in these structures. Adaptive filtering methodologies are preferred in AVC due to their ability to adjust themselves to the varying dynamics of the structure during operation. The filtered-X LMS (FXLMS) algorithm is a simple adaptive filtering algorithm widely implemented in active control applications. Proper implementation of FXLMS requires the availability of a reference signal to mimic the disturbance and a model of the dynamics between the control actuator and the error sensor, namely the secondary path. However, the controller output can interfere with the reference signal, and the secondary path dynamics may change during operation. The interference problem can be resolved by using an infinite impulse response (IIR) filter, which feeds one or more previous control signals back to the controller output, and the changing secondary path dynamics can be updated using an online modeling technique. In this paper, the IIR-filtering-based filtered-U LMS (FULMS) controller is combined with an online secondary path modeling algorithm to suppress the vibrations of a plate-like structure. The results are validated through numerical and experimental studies. They show that the FULMS approach with online secondary path modeling has better vibration rejection capability and a higher convergence rate than its FXLMS counterpart.
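
    The FXLMS core referred to above is compact enough to sketch directly. The Python fragment below implements a single-channel FXLMS loop with a known (offline) secondary-path model; the primary and secondary paths are arbitrary FIR stand-ins, and the paper's online secondary-path modeling and IIR (FULMS) extension are not reproduced here:

        import numpy as np

        rng = np.random.default_rng(5)

        P = np.array([0.0, 0.9, 0.5, 0.2])   # primary path (hypothetical FIR)
        S = np.array([0.0, 0.7, 0.3])        # secondary path (hypothetical FIR)
        S_HAT = S.copy()                     # perfect offline path model here
        L, MU, N = 16, 0.01, 20000           # taps, step size, samples

        x = rng.normal(size=N)               # reference signal
        d = np.convolve(x, P)[:N]            # disturbance at the error sensor
        w = np.zeros(L)                      # adaptive FIR controller weights
        xbuf, fxbuf = np.zeros(L), np.zeros(L)
        ybuf = np.zeros(len(S))
        e_hist = np.empty(N)

        for n in range(N):
            xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
            y = w @ xbuf                     # control signal
            ybuf = np.roll(ybuf, 1); ybuf[0] = y
            e = d[n] + S @ ybuf              # residual at the error sensor
            fx = S_HAT @ xbuf[:len(S_HAT)]   # reference filtered by path model
            fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx
            w -= MU * e * fxbuf              # LMS update on filtered reference
            e_hist[n] = e

        print("MSE, first vs last 1000 samples:",
              np.mean(e_hist[:1000]**2), np.mean(e_hist[-1000:]**2))

    Filtering the reference through the secondary-path model before the weight update is what keeps the gradient estimate aligned when the actuator-to-sensor dynamics delay the control action.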

  16. A Radiative Transfer Modeling Methodology in Gas-Liquid Multiphase Flow Simulations

    Directory of Open Access Journals (Sweden)

    Gautham Krishnamoorthy

    2014-01-01

    Full Text Available A methodology for performing radiative transfer calculations in computational fluid dynamic simulations of gas-liquid multiphase flows is presented. By considering an externally irradiated bubble column photoreactor as our model system, the bubble scattering coefficients were determined through add-on functions by employing as inputs the bubble volume fractions, number densities, and the fractional contribution of each bubble size to the bubble volume from four different multiphase modeling options. The scattering coefficient profiles resulting from the models were significantly different from one another and aligned closely with their predicted gas-phase volume fraction distributions. The impacts of the multiphase modeling option, initial bubble diameter, and gas flow rates on the radiation distribution patterns within the reactor were also examined. An increase in air inlet velocities resulted in an increase in the fraction of larger sized bubbles and their contribution to the scattering coefficient. However, the initial bubble sizes were found to have the strongest impact on the radiation field.
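
    The add-on computation described above reduces, per cell and size class, to number density times scattering cross-section. A hedged Python sketch, with the scattering efficiency fixed at the large-particle (geometric-optics) limit of 2 and invented size classes:

        import numpy as np

        Q_SCAT = 2.0   # large-bubble limit of the scattering efficiency

        def scattering_coefficient(diam_m, number_density_m3):
            """sigma_s [1/m] = sum_i n_i * Q_s * (pi * d_i**2 / 4)."""
            cross_section = Q_SCAT * np.pi * diam_m**2 / 4.0
            return np.sum(number_density_m3 * cross_section)

        d = np.array([1e-3, 3e-3, 5e-3])   # bubble size classes, m
        n = np.array([2e7, 5e6, 1e6])      # number densities, 1/m3

        alpha = np.sum(n * np.pi * d**3 / 6.0)   # implied gas volume fraction
        print("gas volume fraction:", alpha)
        print("scattering coefficient [1/m]:", scattering_coefficient(d, n))

    The abstract's observation follows directly from this form: because sigma_s scales with d**2 at fixed volume fraction, shifting the predicted bubble population toward larger sizes changes the scattering coefficient even when the gas hold-up is unchanged.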

  17. Mathematical model of marine diesel engine simulator for a new methodology of self propulsion tests

    Energy Technology Data Exchange (ETDEWEB)

    Izzuddin, Nur; Sunarsih; Priyanto, Agoes [Faculty of Mechanical Engineering, Universiti Teknologi Malaysia, 81310 Skudai, Johor (Malaysia)]

    2015-05-15

    As a vessel operates in the open seas, a marine diesel engine simulator whose engine rotation is controlled to transmit power through the propeller shaft offers a new methodology for self-propulsion tests that tracks fuel saving in real time. In this context, this paper presents a real-time marine diesel engine simulator system that tracks the actual performance of a ship through a computer-simulated model. A mathematical model of the marine diesel engine and the propeller is used in the simulation to estimate the fuel rate, engine rotating speed, and the thrust and torque of the propeller, and thus achieve the target vessel speed. The input and output form a real-time control system of fuel saving rate and propeller rotating speed representing the marine diesel engine characteristics. Self-propulsion tests in calm water were conducted using a vessel model to validate the marine diesel engine simulator. The simulator was then used to evaluate fuel saving by employing a new mathematical model of turbochargers for the marine diesel engine simulator. The control system developed will help users analyze different vessel speed conditions to obtain better characteristics and hence optimize the fuel saving rate.

  18. A Diagnostic Model for Dementia in Clinical Practice - Case Methodology Assisting Dementia Diagnosis.

    Science.gov (United States)

    Londos, Elisabet

    2015-04-02

    Dementia diagnosis is important for many different reasons: firstly, to separate dementia, or major neurocognitive disorder, from MCI (mild cognitive impairment), mild neurocognitive disorder; secondly, to define the specific underlying brain disorder to aid treatment, prognosis and decisions regarding care needs and assistance. The diagnostic process for dementias is a puzzle of different data pieces to be fitted together in the best possible way to reach a clinical diagnosis. Using a modified case methodology concept, risk factors affecting cognitive reserve and symptoms constituting the basis of the brain damage hypothesis can be visualized, balanced and weighed against test results as well as structural and biochemical markers. The model's origin is the case method initially described at Harvard Business School, here modified to serve dementia diagnostics.

  19. An Innovative Structural Mode Selection Methodology: Application for the X-33 Launch Vehicle Finite Element Model

    Science.gov (United States)

    Hidalgo, Homero, Jr.

    2000-01-01

    An innovative methodology for structural target mode selection based on a specific criterion is presented. An effective approach to single out the modes that interact with specific locations on a structure has been developed for the X-33 Launch Vehicle Finite Element Model (FEM). We present a Root-Sum-Square (RSS) displacement method that computes the resultant modal displacement of each mode at selected degrees of freedom (DOF) and sorts the results to locate the modes with the highest values. This method was used to determine the modes that most influence specific locations/points on the X-33 flight vehicle, such as avionics control components, aero-surface control actuators, propellant valves and engine points, for use in flight control stability analysis and in flight POGO stability analysis. Additionally, the modal RSS method allows primary or global target vehicle modes to be identified in an accurate and efficient manner.
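
    The RSS computation itself is a one-liner over the mode-shape matrix. A Python sketch with random stand-in data (in practice the mode shapes would come from the FEM eigensolution, and the selected DOF would be the actuator/avionics attachment points):

        import numpy as np

        rng = np.random.default_rng(6)
        n_dof, n_modes = 500, 40
        phi = rng.normal(size=(n_dof, n_modes))    # mode shapes (DOF x modes)
        selected_dof = [10, 11, 12, 250, 251, 252] # hypothetical hard points

        # RSS of the modal displacements at the selected DOF, one per mode
        rss = np.sqrt((phi[selected_dof, :] ** 2).sum(axis=0))
        ranking = np.argsort(rss)[::-1]            # modes sorted by influence

        print("top 5 target modes:", ranking[:5] + 1)
        print("their RSS values  :", rss[ranking[:5]])

    Sorting by this scalar lets a large mode set be reduced to the handful of modes that actually matter at the locations of interest, which is the selection step the abstract describes.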

  20. Generalized Characterization Methodology for Performance Modelling of Lithium-Ion Batteries

    DEFF Research Database (Denmark)

    Stroe, Daniel Ioan; Swierczynski, Maciej Jozef; Stroe, Ana-Irina

    2016-01-01

    Lithium-ion (Li-ion) batteries are complex energy storage devices whose performance behavior is highly dependent on the operating conditions (i.e., temperature, load current, and state-of-charge (SOC)). Thus, in order to evaluate their techno-economic viability for a certain application, detailed information about Li-ion battery performance behavior becomes necessary. This paper proposes a comprehensive seven-step methodology for laboratory characterization of Li-ion batteries, in which the battery's performance parameters (i.e., capacity, open-circuit voltage (OCV), and impedance) are determined and their dependence on the operating conditions is obtained. Furthermore, this paper proposes a novel hybrid procedure for parameterizing the batteries' equivalent electrical circuit (EEC), which is used to emulate the batteries' dynamic behavior. Based on this novel parameterization procedure, the performance model...
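
    As an indication of the kind of EEC such a procedure parameterizes, the Python sketch below simulates a first-order Thevenin model (OCV source, series resistance, one RC pair) under a pulse-discharge test, the classic experiment used to identify the resistances and the RC time constant. The OCV curve and all parameter values are illustrative, not the paper's characterization results:

        import numpy as np

        Q_AH = 2.5                          # cell capacity, Ah
        R0, R1, C1 = 0.025, 0.015, 1200.0   # ohm, ohm, farad

        def ocv(soc):
            """Toy open-circuit-voltage curve vs state of charge (0..1)."""
            return 3.0 + 1.2 * soc - 0.4 * np.exp(-12.0 * soc)

        def simulate(current_a, dt=1.0, soc0=0.9):
            """Terminal voltage for a current profile (discharge positive)."""
            soc, v_rc = soc0, 0.0
            v_t = np.empty(len(current_a))
            for k, i in enumerate(current_a):
                soc -= i * dt / (Q_AH * 3600.0)           # coulomb counting
                v_rc += dt * (i / C1 - v_rc / (R1 * C1))  # RC polarization
                v_t[k] = ocv(soc) - R0 * i - v_rc
            return v_t

        # 2 A discharge for 10 min, then 10 min rest: the instantaneous drop
        # identifies R0, the relaxation transient identifies R1 and C1
        i_profile = np.concatenate([np.full(600, 2.0), np.zeros(600)])
        v = simulate(i_profile)
        print("voltage before/at end of/after pulse:", v[0], v[599], v[-1])

    Repeating such pulses over a grid of temperatures, currents and SOC levels is what turns the lumped parameters into the operating-condition-dependent look-up tables the abstract refers to.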

  1. Model-based interpretation of the ECG: a methodology for temporal and spatial reasoning.

    Science.gov (United States)

    Tong, D A; Widman, L E

    1993-06-01

    A new software architecture for automatic interpretation of the electrocardiographic rhythm is presented. Using the hypothesize-and-test paradigm, a semiquantitative physiological model and production rule-based knowledge are combined to reason about time- and space-varying characteristics of complex heart rhythms. A prototype system implementing the methodology accepts a semiquantitative description of the onset and morphology of the P waves and QRS complexes that are observed in the body-surface electrocardiogram. A beat-by-beat explanation of the origin and consequences of each wave is produced. The output is in the standard cardiology laddergram format. The current prototype generates the full differential diagnosis of narrow-complex tachycardia and correctly diagnoses complex rhythms, such as atrioventricular (AV) nodal reentrant tachycardia with either hidden or visible P waves and varying degrees of AV block.

  2. A neuro-mechanical model explaining the physiological role of fast and slow muscle fibres at stop and start of stepping of an insect leg.

    Science.gov (United States)

    Toth, Tibor Istvan; Grabowska, Martyna; Schmidt, Joachim; Büschges, Ansgar; Daun-Gruhn, Silvia

    2013-01-01

    Stop and start of stepping are two basic actions of the musculo-skeletal system of a leg. Although they are basic phenomena, they require the coordinated activities of the leg muscles. However, little is known of the details of how these activities are generated by the interactions between the local neuronal networks controlling the fast and slow muscle fibres at the individual leg joints. In the present work, we aim at uncovering some of those details using a suitable neuro-mechanical model. It is an extension of the model in the accompanying paper and now includes all three antagonistic muscle pairs of the main joints of an insect leg, together with their dedicated neuronal control, as well as common inhibitory motoneurons and the residual stiffness of the slow muscles. This model enabled us to study putative processes of intra-leg coordination during stop and start of stepping. We also made use of the effects of sensory signals encoding the position and velocity of the leg joints. Where experimental observations are available, the corresponding simulation results are in good agreement with them. Our model makes detailed predictions as to the coordination processes of the individual muscle systems both at stop and start of stepping. In particular, it reveals a possible role of the slow muscle fibres at stop in accelerating the convergence of the leg to its steady-state position. These findings lend our model physiological relevance and can therefore be used to elucidate details of the stop and start of stepping in insects, and perhaps in other animals, too.

  3. A neuro-mechanical model explaining the physiological role of fast and slow muscle fibres at stop and start of stepping of an insect leg.

    Directory of Open Access Journals (Sweden)

    Tibor Istvan Toth

    Full Text Available Stop and start of stepping are two basic actions of the musculo-skeletal system of a leg. Although they are basic phenomena, they require the coordinated activities of the leg muscles. However, little is known of the details of how these activities are generated by the interactions between the local neuronal networks controlling the fast and slow muscle fibres at the individual leg joints. In the present work, we aim at uncovering some of those details using a suitable neuro-mechanical model. It is an extension of the model in the accompanying paper and now includes all three antagonistic muscle pairs of the main joints of an insect leg, together with their dedicated neuronal control, as well as common inhibitory motoneurons and the residual stiffness of the slow muscles. This model enabled us to study putative processes of intra-leg coordination during stop and start of stepping. We also made use of the effects of sensory signals encoding the position and velocity of the leg joints. Where experimental observations are available, the corresponding simulation results are in good agreement with them. Our model makes detailed predictions as to the coordination processes of the individual muscle systems both at stop and start of stepping. In particular, it reveals a possible role of the slow muscle fibres at stop in accelerating the convergence of the leg to its steady-state position. These findings lend our model physiological relevance and can therefore be used to elucidate details of the stop and start of stepping in insects, and perhaps in other animals, too.

  4. A methodological proposal to contribute to the development of research skills in science education to start the design of a didactic unit built on foundations of scientific and technological literacy

    Directory of Open Access Journals (Sweden)

    Andrés Felipe Velásquez Mosquera

    2013-10-01

    Full Text Available This paper seeks to promote a discussion of the need to foster the training of investigative skills in students of the natural sciences through a methodology structured around the design of the course plan, including a didactic unit, built on foundations of scientific and technological literacy. It is the result of several years of the author's experience in teaching and research in the field of the didactics of the sciences.

  5. A methodological proposal to contribute to the development of research skills in science education to start the design of a didactic unit built on foundations of scientific and technological literacy

    OpenAIRE

    Andrés Felipe Velásquez Mosquera; Eduardo Augusto López

    2013-01-01

    This paper seeks to promote a discussion of the need to foster the training of investigative skills in students of the natural sciences through a methodology structured around the design of the course plan, including a didactic unit, built on foundations of scientific and technological literacy. It is the result of several years of the authors' experience in teaching and research in the field of the didactics of the sciences.

  6. Developing a new methodology to characterize in vivo the passive mechanical behavior of abdominal wall on an animal model.

    Science.gov (United States)

    Simón-Allué, R; Montiel, J M M; Bellón, J M; Calvo, B

    2015-11-01

    The most common surgical repair of abdominal wall hernia involves implanting a mesh that substitutes for the abdominal muscle/fascia while it is healing. To reduce the risk of relapse or possible complications, this mesh needs to mimic the mechanical behavior of the muscle/fascia, which to date is not fully determined. The aim of this work is to develop a methodology to characterize in vivo the passive mechanical behavior of the abdominal wall. For that, New Zealand rabbits were subjected to pneumoperitoneum tests, taking the inner pressure from 0 mmHg to 12 mmHg, values similar to those used in human laparoscopies. The animals were divided into two groups: healthy animals and herniated animals with a surgical mesh (polypropylene Surgipro(TM), Covidien) previously implanted. All experiments were recorded by a stereo rig composed of two synchronized cameras. During the post-processing of the images, several points over the abdominal surface were tracked and their coordinates extracted for different levels of internal pressure. Starting from that, a three-dimensional model of the abdominal wall was reconstructed. Pressure-displacement curves, radii of curvature and strain fields were also analysed. During the experiments, the animals' tissue deformed mostly during the first levels of pressure, showing the noticeable hyperelastic passive behavior of the abdominal muscles. Comparison between healthy and herniated specimens displayed a strong stiffening for herniated animals in the zone where the high-density mesh was situated. The cameras were able to discern this change, so this method can be used to measure the possible effect of other meshes.

  7. Site-conditions map for Portugal based on VS measurements: methodology and final model

    Science.gov (United States)

    Vilanova, Susana; Narciso, João; Carvalho, João; Lopes, Isabel; Quinta Ferreira, Mario; Moura, Rui; Borges, José; Nemser, Eliza; Pinto, Carlos

    2017-04-01

    In this paper we present a statistically significant site-condition model for Portugal based on shear-wave velocity (VS) data and surface geology. We also evaluate the performance of commonly used Vs30 proxies based on exogenous data and analyze the implications of using those proxies for calculating site amplification in seismic hazard assessment. The dataset contains 161 VS profiles acquired in Portugal in the context of research projects, technical reports, academic theses and academic papers. The methodologies used to characterize the VS structure at the sites in the database include seismic refraction, multichannel analysis of surface waves and refraction microtremor. Invasive measurements were performed at selected locations in order to compare the VS profiles obtained from both invasive and non-invasive techniques. In general there was good agreement between the Vs30 values obtained from the different methodologies. The database flat-file includes information on Vs30, surface geology at 1:50.000 and 1:500.000 scales, and elevation and topographic slope based on the SRTM30 topographic dataset. The procedure used to develop the site-conditions map is a three-step process: defining a preliminary set of geological units based on the literature, performing statistical tests to assess whether or not the differences in the distributions of Vs30 are statistically significant, and merging the geological units accordingly. The dataset was, to some extent, affected by clustering and/or preferential sampling, and therefore a declustering algorithm was applied. The final model includes three geological units: 1) igneous, metamorphic and old (Paleogene and Mesozoic) sedimentary rocks; 2) Neogene and Pleistocene formations; and 3) Holocene formations. The evaluation of proxies indicates that although geological analogues and topographic slope are in general unbiased, the latter shows significant bias for particular geological units and
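
    The site parameter behind such maps is computed directly from a layered profile as the time-averaged shear-wave velocity over the top 30 m, Vs30 = 30 / sum(h_i / v_i). A short Python sketch with a hypothetical profile:

        import numpy as np

        def vs30(thickness_m, vs_mps):
            """Time-averaged Vs over the top 30 m of a layered profile."""
            v = np.asarray(vs_mps, dtype=float)
            depth_used, travel_time = 0.0, 0.0
            for hi, vi in zip(np.asarray(thickness_m, dtype=float), v):
                take = min(hi, 30.0 - depth_used)    # clip the profile at 30 m
                travel_time += take / vi
                depth_used += take
                if depth_used >= 30.0:
                    break
            if depth_used < 30.0:                    # profile shallower than
                travel_time += (30.0 - depth_used) / v[-1]   # 30 m: extend it
            return 30.0 / travel_time

        # hypothetical profile: 5 m soil over 10 m weathered rock over bedrock
        print("Vs30 =", vs30([5.0, 10.0, 40.0], [180.0, 400.0, 900.0]), "m/s")

    Computing this value for each measured profile gives the per-site data that the statistical tests then compare across candidate geological units.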

  8. A Methodological Review of US Budget-Impact Models for New Drugs.

    Science.gov (United States)

    Mauskopf, Josephine; Earnshaw, Stephanie

    2016-11-01

    A budget-impact analysis is required by many jurisdictions when adding a new drug to the formulary. However, previous reviews have indicated that adherence to methodological guidelines is variable. In this methodological review, we assess the extent to which US budget-impact analyses for new drugs use recommended practices. We describe recommended practice for seven key elements in the design of a budget-impact analysis. Targeted literature searches for US studies reporting estimates of the budget impact of a new drug were performed and we prepared a summary of how each study addressed the seven key elements. The primary finding from this review is that recommended practice is not followed in many budget-impact analyses. For example, we found that growth in the treated population size and/or changes in disease-related costs expected during the model time horizon for more effective treatments was not included in several analyses for chronic conditions. In addition, all drug-related costs were not captured in the majority of the models. Finally, for most studies, one-way sensitivity and scenario analyses were very limited, and the ranges used in one-way sensitivity analyses were frequently arbitrary percentages rather than being data driven. The conclusions from our review are that changes in population size, disease severity mix, and/or disease-related costs should be properly accounted for to avoid over- or underestimating the budget impact. Since each budget holder might have different perspectives and different values for many of the input parameters, it is also critical for published budget-impact analyses to include extensive sensitivity and scenario analyses based on realistic input values.

  9. Methodology for risk analysis based on atmospheric dispersion modelling from nuclear risk sites

    Science.gov (United States)

    Baklanov, A.; Mahura, A.; Sørensen, J. H.; Rigina, O.

    2003-04-01

    The main purpose of this multidisciplinary study is to develop a methodology for complex nuclear risk and vulnerability assessment, and to test it by estimating the nuclear risk to the population in the Nordic countries in case of a severe accident at a nuclear risk site (NRS). The main focus of the paper is the methodology for evaluating the atmospheric transport and deposition of radioactive pollutants from NRSs. The method developed for this evaluation is derived from a probabilistic point of view. The main question we are trying to answer is: what is the probability of radionuclide atmospheric transport to, and impact on, different neighbouring regions and countries in case of an accident at an NPP? To answer this question we applied a number of different tools: (i) Trajectory Modelling - to calculate multiyear forward trajectories originating over the locations of selected risk sites; (ii) Dispersion Modelling - for long-term simulation and case studies of radionuclide transport from hypothetical accidental releases at NRSs; (iii) Cluster Analysis - to identify atmospheric transport pathways from NRSs; (iv) Probability Fields Analysis - to construct annual, monthly, and seasonal NRS impact indicators to identify the most impacted geographical regions; (v) Specific Case Studies - to estimate consequences for the environment and the populations after a hypothetical accident; (vi) Vulnerability Evaluation to Radioactive Deposition - to describe its persistence in the ecosystems with a focus on the transfer of certain radionuclides into the food chains of key importance for the intake and exposure of the whole population and of certain population groups; (vii) Risk Evaluation and Mapping - to analyse socio-economic consequences for different geographical areas and various population groups, taking into account socio-geophysical factors and probabilities, and using demographic databases based on GIS analysis.

  10. MODELING AND STRUCTURING OF ENTERPRISE MANAGEMENT SYSTEM RESORT SPHERE BASED ON ELEMENTS OF NEURAL NETWORK THEORY: THE METHODOLOGICAL BASIS

    Directory of Open Access Journals (Sweden)

    Rena R. Timirualeeva

    2015-01-01

    Full Text Available The article describes the methodological basis for modeling and structuring an enterprise management system in the resort sphere based on elements of neural network theory. It takes into account environmental factors at the mega-, macro- and mesolevels, the internal state of the managed system, and errors in the execution of management commands by the control system. The proposed methodology can improve the quality of management of resort-complex enterprises through a more flexible response to changes in the parameters of the internal and external environments.

  11. A Model-Based Methodology for Simultaneous Design and Control of a Bioethanol Production Process

    DEFF Research Database (Denmark)

    Alvarado-Morales, Merlin; Abd.Hamid, Mohd-Kamaruddin; Sin, Gürkan

    2010-01-01

    In this work, a framework for the simultaneous solution of design and control problems is presented. Within this framework, two methodologies are presented: the integrated process design and controller design (IPDC) methodology and the process-group contribution (PGC) methodology. The concepts of attainable region (AR), driving force (DF), process-group (PG) and reverse simulation are used within these methodologies. The IPDC methodology is used to find the optimal design-control strategy of a process by locating the maximum point in the AR and DF diagrams for the reactor and separator, respectively. The PGC methodology is used to generate more efficient separation designs in terms of energy consumption by targeting the separation task at the largest DF. Both methodologies are highlighted through the application of two case studies, a bioethanol production process and a succinic acid production...

  12. Testing the methodology for site descriptive modelling. Application for the Laxemar area

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Johan [JA Streamflow AB, Aelvsjoe (Sweden); Berglund, Johan [SwedPower AB, Stockholm (Sweden); Follin, Sven [SF Geologic AB, Stockholm (Sweden); Hakami, Eva [Itasca Geomekanik AB, Stockholm (Sweden); Halvarson, Jan [Swedish Nuclear Fuel and Waste Management Co, Stockholm (Sweden); Hermanson, Jan [Golder Associates AB, Stockholm (Sweden); Laaksoharju, Marcus [Geopoint (Sweden); Rhen, Ingvar [Sweco VBB/VIAK, Stockholm (Sweden); Wahlgren, C.H. [Sveriges Geologiska Undersoekning, Uppsala (Sweden)

    2002-08-01

    A special project has been conducted where the currently available data from the Laxemar area, which is part of the Simpevarp site, have been evaluated and interpreted into a Site Descriptive Model covering: geology, hydrogeology, hydrogeochemistry and rock mechanics. Description of the surface ecosystem has been omitted, since it was re-characterised in another, parallel, project. Furthermore, there has been no evaluation of transport properties. The project is primarily a methodology test. The lessons learnt will be implemented in the Site Descriptive Modelling during the coming site investigation. The intent of the project has been to explore whether available methodology for Site Descriptive Modelling based on surface and borehole data is adequate and to identify potential needs for development and improvement in the methodology. The project has developed, with limitations in scope, a Site Descriptive Model in local scale, corresponding to the situation after completion of the Initial Site Investigations for the Laxemar area (i.e. 'version 1.2' using the vocabulary of the general execution program for the site investigations). The Site Descriptive Model should be reasonable, but should not be regarded as a 'real' model. There are limitations both in input data and in the scope of the analysis. The measured (primary) data constitute a wide range of different measurement results including data from two deep core drilled boreholes. These data both need to be checked for consistency and to be interpreted into a format more amenable for three-dimensional modelling. Examples of such evaluations are estimation of surface geology, lineament interpretation, geological single hole interpretation, hydrogeological single hole interpretation and assessment of hydrogeochemical data. Furthermore, while cross discipline interpretation is encouraged there is also a need for transparency. This means that the evaluations first are made within each discipline

  13. Climate Change Modeling Methodology Selected Entries from the Encyclopedia of Sustainability Science and Technology

    CERN Document Server

    2012-01-01

    The Earth's average temperature has risen by 1.4°F over the past century, and computer models project that it will rise much more over the next hundred years, with significant impacts on weather, climate, and human society. Many climate scientists attribute these increases to the buildup of greenhouse gases produced by the burning of fossil fuels and to the anthropogenic production of short-lived climate pollutants. Climate Change Modeling Methodologies: Selected Entries from the Encyclopedia of Sustainability Science and Technology provides readers with an introduction to the tools and analysis techniques used by climate change scientists to interpret the role of these forcing agents on climate.  Readers will also gain a deeper understanding of the strengths and weaknesses of these models and how to test and assess them.  The contributions include a glossary of key terms and a concise definition of the subject for each topic, as well as recommendations for sources of more detailed information. Features au...

  14. The Cultural Analysis of Soft Systems Methodology and the Configuration Model of Organizational Culture

    Directory of Open Access Journals (Sweden)

    Jürgen Staadt

    2015-06-01

    Full Text Available Organizations that find themselves within a problematic situation connected with cultural issues such as politics and power require adaptable research and corresponding modeling approaches so as to grasp the arrangements of that situation and their impact on the organizational development. This article originates from an insider-ethnographic intervention into the problematic situation of the leading public housing provider in Luxembourg. Its aim is to describe how the more action-oriented cultural analysis of soft systems methodology and the theory-driven configuration model of organizational culture are mutually beneficial rather than contradictory. The data collected between 2007 and 2013 were analyzed manually as well as by means of ATLAS.ti. Results demonstrate that the cultural analysis enables an in-depth understanding of the power-laden environment within the organization bringing about the so-called “socio-political system” and that the configuration model makes it possible to depict the influence of that system on the whole organization. The overall research approach thus contributes toward a better understanding of the influence and the impact of oppressive social environments and evolving power relations on the development of an organization.

  15. Implementing the Simple Biosphere Model (SiB) in a general circulation model: Methodologies and results

    Science.gov (United States)

    Sato, N.; Sellers, P. J.; Randall, D. A.; Schneider, E. K.; Shukla, J.; Kinter, J. L., III; Hou, Y.-T.; Albertazzi, E.

    1989-01-01

    The Simple Biosphere Model (SiB) of Sellers et al. (1986) was designed to simulate the interactions between the Earth's land surface and the atmosphere by treating the vegetation explicitly and realistically, thereby incorporating biophysical controls on the exchanges of radiation, momentum, and sensible and latent heat between the two systems. The steps taken to implement SiB in a modified version of the National Meteorological Center's spectral GCM are described. The coupled model (SiB-GCM) was used to produce summer and winter simulations. The same GCM was used with a conventional hydrological model (Ctl-GCM) to produce comparable 'control' summer and winter simulations. It was found that SiB-GCM produced a more realistic partitioning of energy at the land surface than Ctl-GCM. Generally, SiB-GCM produced more sensible heat flux and less latent heat flux over vegetated land than did Ctl-GCM, and this resulted in the development of a much deeper daytime planetary boundary layer and reduced precipitation rates over the continents in SiB-GCM. In the summer simulation, the 200 mb jet stream and the wind speed at 850 mb were slightly weakened in SiB-GCM relative to the Ctl-GCM results and equivalent analyses from observations.

  16. A Methodological Demonstration of Set-theoretical Approach to Social Media Maturity Models Using Necessary Condition Analysis

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Despite being widely accepted and applied across research domains, maturity models have been criticized for lacking academic rigor, especially methodologically rigorous and empirically grounded or tested maturity models are quite rare. Attempting to close this gap, we adopt a set-theoretic approach...

  17. Integrating Social Activity Theory and Critical Discourse Analysis: A Multilayered Methodological Model for Examining Knowledge Mediation in Mentoring

    Science.gov (United States)

    Becher, Ayelet; Orland-Barak, Lily

    2016-01-01

    This study suggests an integrative qualitative methodological framework for capturing complexity in mentoring activity. Specifically, the model examines how historical developments of a discipline direct mentors' mediation of professional knowledge through the language that they use. The model integrates social activity theory and a framework of…

  18. National evaluation for calving ease, gestation length and birth weight by linear and threshold model methodologies.

    Science.gov (United States)

    Lee, Deukhwan; Misztal, Ignacy; Bertrand, J Keith; Rekaya, Romdhane

    2002-01-01

    Data included 393,097 calving ease, 129,520 gestation length, and 412,484 birth weight records on 412,484 Gelbvieh cattle. Additionally, pedigrees were available on 72,123 animals. Included in the models were effects of sex and age of dam, treated as fixed, as well as direct and maternal genetic effects, permanent environmental effects, and effects of contemporary group (herd-year-season), treated as random. In all analyses, birth weight and gestation length were treated as continuous traits. Calving ease (CE) was treated either as a continuous trait in a mixed linear model (LM) or as a categorical trait in a linear-threshold model (TM). Solutions in TM obtained by empirical Bayes (TMEB) and Monte Carlo (TMMC) methodologies were compared with those by LM. Due to the computational cost, only 10,000 samples were obtained for TMMC. For calving ease, correlations between LM and TMEB solutions were 0.86 and 0.78 for direct and maternal genetic effects, respectively. The same correlations, but between TMEB and TMMC, were 1.00 and 0.98, respectively. The correlations between LM and TMMC were 0.85 and 0.75, respectively. The correlations for the linear traits were above 0.97 between LM and TMEB but as low as 0.91 between LM and TMMC, suggesting insufficient convergence of TMMC. Computing time required was about 2 hrs, 5 hrs, and 6 days for LM, TMEB and TMMC, respectively, and memory requirements were 169, 171, and 445 megabytes, respectively. The Bayesian implementation of the threshold model is simple, can be extended to multiple categorical traits, and allows easy calculation of accuracies; however, computing time is prohibitively long for large models.
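
    The liability-threshold link that distinguishes the threshold model from the linear model can be sketched in a few lines: an unobserved normal "liability" is cut by fixed thresholds into ordered calving-ease categories. The thresholds and linear predictors below are invented for illustration:

        import numpy as np
        from scipy.stats import norm

        # 3 ordered categories: unassisted / assisted / difficult
        THRESH = np.array([0.8, 1.8])   # hypothetical liability thresholds

        def category_probs(eta):
            """P(score = k | linear predictor eta) under a probit liability
            model: probabilities are differences of normal CDFs at the
            thresholds shifted by eta."""
            cdf = norm.cdf(THRESH - eta)
            return np.diff(np.concatenate(([0.0], cdf, [1.0])))

        # e.g. a heifer-calving predictor vs a mature-cow one (hypothetical)
        for label, eta in [("heifer", 0.9), ("mature cow", 0.1)]:
            print(label, "->", np.round(category_probs(eta), 3))

    Estimating the genetic effects inside eta under this nonlinear link is what makes the empirical Bayes and Monte Carlo machinery above so much more expensive than the linear-model analysis.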

  19. Comparing regional modeling (CHIMERE) and satellite observations of aerosols (PARASOL): Methodology and case study over Mexico

    Science.gov (United States)

    Stromatas, Stavros

    2010-05-01

    Atmospheric suspended particles (aerosols) have significant radiative and environmental impacts, affecting human health, visibility and climate. Therefore, they are regulated by air quality standards worldwide and monitored by regional observation networks. Satellite observations vastly improve the horizontal and temporal coverage, providing daily distributions. Aerosols are currently estimated using aerosol optical depth (AOD) retrievals, a quantitative measure of the extinction of solar radiation by aerosol scattering and absorption between the point of observation and the top of the atmosphere. Even though remarkable progress in aerosol modeling by chemistry-transport models (CTM) and in measurement experiments has been made in recent years, there is still a significant divergence between modeled and observed results. Moreover, AOD retrieval from satellites remains a highly challenging task, mostly because it depends on a variety of different parameters, such as cloud contamination, surface reflectance contributions and a priori assumptions on aerosol types, each one of them incorporating its own difficulties. Therefore, comparisons between CTMs and observations are often difficult to interpret. In this presentation, we will discuss comparisons between regional modeling (the CHIMERE CTM) over Mexico and satellite observations obtained by the POLDER instrument embarked on the PARASOL micro-satellite. After a comparison of the model AOD with the retrieved L2 AOD, we will present an alternative

  20. A Globally Consistent Methodology for an Exposure Model for Natural Catastrophe Risk Assessment

    Science.gov (United States)

    Gunasekera, Rashmin; Ishizawa, Oscar; Pandey, Bishwa; Saito, Keiko

    2013-04-01

    There is a high demand for the development of a globally consistent and robust exposure data model employing a top-down approach, to be used in national-level catastrophe risk profiling for public sector liability. To this effect, there are currently several initiatives, such as the UN-ISDR Global Assessment Report (GAR) and the Global Exposure Database for the Global Earthquake Model (GED4GEM). However, their consistency and granularity differ from region to region, a problem that is overcome in the proposed approach by using national datasets, for example in the Latin America and Caribbean Region (LCR). The methodology proposed in this paper aims to produce a global open exposure dataset based upon population, country-specific building type distribution and other global/economic indicators, such as World Bank indices, that are suitable for natural catastrophe risk modelling purposes. The output would be a GIS raster grid at approximately 1 km spatial resolution which would characterize the urban fabric (building typology distribution, occupancy and use) for each cell at sub-national level, compatible with other global initiatives and datasets. It would make use of datasets on population, census, demographics, buildings and land use/land cover which are largely available in the public domain. The resultant exposure dataset could be used in conjunction with hazard and vulnerability components to create views of risk for multiple hazards, including earthquake, flood and windstorms. We hope the model would also be a step towards future initiatives for open, interchangeable and compatible databases for catastrophe risk modelling. The findings, interpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the views of the International Bank for Reconstruction and Development/World Bank and its affiliated organizations, or those of the Executive Directors of the World Bank or the governments they represent.
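    A minimal sketch of the top-down disaggregation step such a methodology implies: a population raster is split into per-building-type exposure layers using national type shares. All inputs below are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical 1 km population raster and national building-type shares.
population = np.random.default_rng(0).poisson(200, size=(100, 100)).astype(float)
building_shares = {"masonry": 0.5, "concrete": 0.3, "timber": 0.2}  # sum to 1

# Disaggregate population into per-type exposure layers on the same grid.
exposure = {btype: population * share for btype, share in building_shares.items()}

for btype, layer in exposure.items():
    print(btype, f"total exposed population: {layer.sum():,.0f}")
```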

  1. Methodology Development of a Gas-Liquid Dynamic Flow Regime Transition Model

    Science.gov (United States)

    Doup, Benjamin Casey

    Current reactor safety analysis codes, such as RELAP5, TRACE, and CATHARE, use flow regime maps or flow regime transition criteria that were developed for static, fully developed two-phase flows to choose the interfacial transfer models that are necessary to solve the two-fluid model. The flow regime is therefore difficult to identify near flow regime transitions, in developing two-phase flows, and in transient two-phase flows. Interfacial area transport equations were developed to more accurately predict the dynamic nature of two-phase flows. However, other model coefficients are still flow regime dependent, so an accurate prediction of the flow regime remains important. In the current work, a methodology is investigated for the development of a dynamic flow regime transition model that uses the void fraction and interfacial area concentration obtained by solving the three-field two-fluid model and the two-group interfacial area transport equation. To develop this model, detailed local experimental data are obtained, the two-group interfacial area transport equations are revised, and a dynamic flow regime transition model is evaluated using a computational fluid dynamics model. Local experimental data are acquired for 63 different flow conditions in the bubbly, cap-bubbly, slug, and churn-turbulent flow regimes. The measured parameters are the group-1 and group-2 bubble number frequency, void fraction, interfacial area concentration, and interfacial bubble velocities. The measurements are benchmarked by comparing the superficial gas velocities determined from the local measurements with those determined from volumetric flow rate measurements; the agreement is generally within +/-20%. The repeatability of the four-sensor probe construction process is within +/-10%. The repeatability of the measurement process is within +/-7%. The symmetry of the test section is examined and the average agreement is within +/-5.3% at z/D = 10 and +/-3.4% at z/D = 32
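    A minimal sketch of the benchmarking check described here: the superficial gas velocity from area-averaged local probe data is compared against the value from the volumetric flow rate. All profiles and flow rates below are invented placeholders, not the study's measurements:

```python
import numpy as np

r = np.linspace(0.0, 0.025, 26)          # radial positions, m (25 mm radius)
alpha = 0.25 * (1 - (r / r.max())**2)    # hypothetical local void fraction
v_g = 1.2 * np.ones_like(r)              # hypothetical local gas velocity, m/s

# Area-average of alpha * v_g over the pipe cross section.
area = np.pi * r.max()**2
j_g_local = np.trapz(alpha * v_g * 2 * np.pi * r, r) / area

# Reference value from a gas flow meter.
Q_g = 2.9e-4                             # volumetric gas flow rate, m^3/s
j_g_flowmeter = Q_g / area

print(f"local: {j_g_local:.3f} m/s, flow meter: {j_g_flowmeter:.3f} m/s, "
      f"deviation: {100 * (j_g_local / j_g_flowmeter - 1):+.1f}%")
```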

  2. Starting with ABC and Finishing with XYZ: What Financial Reporting Model Best Fits a Faculty and Why?

    Science.gov (United States)

    Berry, Prudence Jane

    2014-01-01

    This article looks at the range of financial reporting models available for use in the Australian higher education sector, the possible application of activity-based costing (ABC) in faculties and the eventual rejection of ABC in favour of a more qualitative model designed specifically for use in one institution, in a particular Faculty. The…

  3. Prognosis-a wearable health-monitoring system for people at risk: methodology and modeling.

    Science.gov (United States)

    Pantelopoulos, Alexandros; Bourbakis, Nikolaos G

    2010-05-01

    Wearable health-monitoring systems (WHMSs) represent the new generation of healthcare by providing real-time, unobtrusive monitoring of patients' physiological parameters through the deployment of several on-body and even intrabody biosensors. Although several technological issues regarding WHMSs still need to be resolved for them to become more applicable in real-life scenarios, it is expected that continuous ambulatory monitoring of vital signs will enable proactive personal health management and better treatment of patients suffering from chronic diseases, of the elderly population, and of emergency situations. In this paper, we present a physiological data fusion model for multisensor WHMSs called Prognosis. The proposed methodology is based on a fuzzy regular language for the generation of prognoses of the patient's health condition, whereby the current state of the corresponding fuzzy finite-state machine signifies the currently estimated health state and context of the patient. The operation of the proposed scheme is explained via detailed examples in hypothetical scenarios. Finally, a stochastic Petri net model of the human-device interaction is presented, which illustrates how additional health status feedback can be obtained from the user of the WHMS.
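    A minimal sketch of a fuzzy state estimator in the spirit of this data-fusion model: the current state of a fuzzy finite-state machine signifies the estimated health condition. All memberships and rules below are hypothetical illustrations, not the authors' actual fuzzy grammar:

```python
import numpy as np

STATES = ["healthy", "at_risk", "critical"]

def fuzzify_heart_rate(hr):
    """Fuzzy memberships of a heart-rate reading: (normal, elevated, extreme)."""
    normal = max(0.0, 1.0 - abs(hr - 70) / 30)
    elevated = max(0.0, 1.0 - abs(hr - 110) / 30)
    extreme = max(0.0, min(1.0, (hr - 120) / 30))
    return np.array([normal, elevated, extreme])

# Rule matrix R[k, j]: strength with which observation category k
# supports a transition into state j.
R = np.array([
    [0.9, 0.1, 0.0],   # normal reading  -> mostly healthy
    [0.3, 0.7, 0.2],   # elevated        -> mostly at risk
    [0.0, 0.3, 0.9],   # extreme         -> mostly critical
])

def step(state, hr):
    obs = fuzzify_heart_rate(hr)
    # Max-min composition of observation memberships with the rule matrix,
    # blended with the previous state for temporal smoothing.
    inferred = np.array([max(min(obs[k], R[k, j]) for k in range(3))
                         for j in range(3)])
    new = np.maximum(0.5 * state, inferred)
    return new / new.sum()

state = np.array([1.0, 0.0, 0.0])   # start fully "healthy"
for hr in [72, 95, 125, 140]:
    state = step(state, hr)
    print(f"HR={hr:3d}: estimated state = {STATES[int(np.argmax(state))]}")
```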

  4. Modeling of the ORNL PCA Benchmark Using SCALE6.0 Hybrid Deterministic-Stochastic Methodology

    Directory of Open Access Journals (Sweden)

    Mario Matijević

    2013-01-01

    Full Text Available Revised guidelines with the support of computational benchmarks are needed for the regulation of the allowed neutron irradiation to reactor structures during a power plant's lifetime. Currently, US NRC Regulatory Guide 1.190 is the effective guideline for reactor dosimetry calculations. The well-known international shielding database SINBAD contains a large selection of models for benchmarking neutron transport methods. In this paper, a PCA benchmark was chosen from SINBAD for qualification of our methodology for pressure vessel neutron fluence calculations, as required by Regulatory Guide 1.190. The SCALE6.0 code package, developed at Oak Ridge National Laboratory, was used for modeling of the PCA benchmark. The CSAS6 criticality sequence of the SCALE6.0 code package, which includes the KENO-VI Monte Carlo code, as well as the MAVRIC/Monaco hybrid shielding sequence, was utilized for calculation of equivalent fission fluxes. The shielding analysis was performed using the multigroup shielding library v7_200n47g derived from the general-purpose ENDF/B-VII.0 library. As a source of response functions for reaction rate calculations with MAVRIC we used the international reactor dosimetry libraries (IRDF-2002 and IRDF-90.v2) and appropriate cross sections from the transport library v7_200n47g. The comparison of the calculated results with the benchmark data showed good agreement between the calculated and measured equivalent fission fluxes.

  5. Measuring attitudes in the self-employment intention model: methodological considerations

    Directory of Open Access Journals (Sweden)

    Josipa Mijoč

    2016-12-01

    Full Text Available The paper is based on a statistical model, the construction of which requires determining the independent variables. To determine the predictive ability of different approaches to measuring the independent variables, the paper provides an overview of theoretical and research approaches to the research problem. The purpose of the study is to analyze the predictive power of instruments measuring attitudes toward self-employment, one of the most significant predictors of a career choice according to the theory of planned behavior. The paper juxtaposes two different measurement approaches for assessing attitudes toward self-employment. The first approach is based on behavioral beliefs that produce favorable or unfavorable attitudes toward a self-employed career and considers two opposing options: pursuing a self-employed career or accepting a job position (working for an employer). In this context, developing a measurement construct is a multistep process that requires testing the psychometric characteristics of proposed measures based on predefined theoretical and empirical dimensions. The second approach incorporates an aggregate measure of attitude toward self-employment in which the predictor variable is assessed from only one perspective, without taking into account other career options. By means of multiple regression analysis, the paper details a comparison of both measurement approaches and their performance in explaining the dependent variable (self-employment intention). The predictive power of the model is defined as the criterion for selecting a measurement approach that can serve as a methodological framework for prospective studies focused on investigating attitudes toward certain behavior.
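    A minimal sketch of the comparison step: regress intention on each attitude measure and compare explanatory power. The data below are simulated, purely illustrative stand-ins for the survey measures:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 300

# Hypothetical belief-based (multi-item) and aggregate (single-view) measures.
attitude_beliefs = rng.normal(size=(n, 4))           # four belief items
attitude_aggregate = rng.normal(size=(n, 1))         # one aggregate score
intention = attitude_beliefs @ np.array([0.5, 0.3, 0.2, 0.1]) \
            + 0.2 * attitude_aggregate[:, 0] + rng.normal(scale=0.5, size=n)

for name, X in [("belief-based", attitude_beliefs),
                ("aggregate", attitude_aggregate)]:
    r2 = LinearRegression().fit(X, intention).score(X, intention)
    print(f"{name:12s} measure: R^2 = {r2:.2f}")
```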

  6. Rational Design of Methodology-Independent Metal Parameters Using a Nonbonded Dummy Model.

    Science.gov (United States)

    Jiang, Yang; Zhang, Haiyang; Tan, Tianwei

    2016-07-12

    A nonbonded dummy model for metal ions is indispensable for the computation of complex biological systems, for instance those with multiple metal centers. Here we present nonbonded dummy parameters for 11 divalent metallic cations, namely, Mg(2+), V(2+), Cr(2+), Mn(2+), Fe(2+), Co(2+), Ni(2+), Zn(2+), Cd(2+), Sn(2+), and Hg(2+), that are optimized to be compatible with three widely used water models (TIP3P, SPC/E, and TIP4P-EW). The three sets of metal parameters simultaneously reproduce the solvation free energies (ΔGsol), the ion-oxygen distance in the first solvation shell (IOD), and the coordination numbers (CN) in explicit water with a relative error of less than 1%. The main sources of error in ΔGsol, which arise from the boundary conditions and the treatment of electrostatic interactions, are corrected rationally, which ensures the independence of the proposed parameters from the methodology used in the calculation. This work will be of great value for the computational study of metal-containing biological systems.

  7. External Validity and Model Validity: A Conceptual Approach for Systematic Review Methodology

    Directory of Open Access Journals (Sweden)

    Raheleh Khorsan

    2014-01-01

    Full Text Available Background. Evidence rankings do not consider internal validity (IV), external validity (EV), and model validity (MV) equally for clinical studies, including complementary and alternative medicine/integrative health care (CAM/IHC) research. This paper describes this model and offers an EV assessment tool (EVAT©) for weighing studies according to EV and MV in addition to IV. Methods. An abbreviated systematic review methodology was employed to search, assemble, and evaluate the literature that has been published on EV/MV criteria. Standard databases were searched for keywords relating to EV, MV, and bias scoring from inception to January 2013. The tools identified and concepts described were pooled to assemble a robust tool for evaluating these quality criteria. Results. This study assembled a streamlined, objective tool for evaluating the quality of EV/MV in research that is more sensitive to CAM/IHC studies. Conclusion. Improved reporting on EV can help produce information that will guide policy makers, public health researchers, and other scientists in the selection, development, and improvement of research-tested interventions. Overall, clinical studies with high EV have the potential to provide the most useful information about "real-world" consequences of health interventions. It is hoped that this novel tool, which considers IV, EV, and MV on an equal footing, will better guide clinical decision making.

  8. Modeling and optimization of red currants vacuum drying process by response surface methodology (RSM).

    Science.gov (United States)

    Šumić, Zdravko; Vakula, Anita; Tepić, Aleksandra; Čakarević, Jelena; Vitas, Jasmina; Pavlić, Branimir

    2016-07-15

    Fresh red currants were dried by a vacuum drying process under different drying conditions. A Box-Behnken experimental design with response surface methodology was used for optimization of the drying process in terms of physical (moisture content, water activity, total color change, firmness and rehydration power) and chemical (total phenols, total flavonoids, monomeric anthocyanins, ascorbic acid content and antioxidant activity) properties of the dried samples. Temperature (48-78 °C), pressure (30-330 mbar) and drying time (8-16 h) were investigated as independent variables. Experimental results were fitted to a second-order polynomial model, with regression analysis and analysis of variance used to determine model fitness and optimal drying conditions. The optimal conditions for the simultaneously optimized responses were a temperature of 70.2 °C, a pressure of 39 mbar and a drying time of 8 h. It can be concluded that vacuum drying provides samples with good physico-chemical properties, similar to the lyophilized sample and better than the conventionally dried sample.
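    A minimal sketch of the RSM workflow this record describes: fit a second-order polynomial to designed experiments, then locate the optimum inside the experimental region. The data are simulated stand-ins for the Box-Behnken runs, not the paper's results:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.uniform([48, 30, 8], [78, 330, 16], size=(15, 3))   # T, p, t
# Hypothetical response (e.g. total phenols) with an interior optimum.
y = -(X[:, 0] - 70)**2 / 50 - (X[:, 1] - 60)**2 / 4000 \
    - (X[:, 2] - 9)**2 / 5 + rng.normal(scale=0.1, size=15)

poly = PolynomialFeatures(degree=2)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Maximize the fitted quadratic surface within the experimental region.
res = minimize(lambda x: -model.predict(poly.transform([x]))[0],
               x0=[63, 180, 12],
               bounds=[(48, 78), (30, 330), (8, 16)])
print("optimal T, p, t:", np.round(res.x, 1))
```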

  9. Constructing a starting 3D shear velocity model with sharp interfaces for SEM-based upper mantle tomography in North America

    Science.gov (United States)

    Calo, M.; Bodin, T.; Yuan, H.; Romanowicz, B. A.; Larmat, C. S.; Maceira, M.

    2013-12-01

    Seismic tomography is currently evolving towards 3D earth models that satisfy full seismic waveforms at increasingly high frequencies. This evolution is possible thanks to the advent of powerful numerical methods, such as the Spectral Element Method (SEM), that allow accurate computation of the seismic wavefield in complex media, and to the drastic increase in computational resources. However, the production of such models requires handling complex misfit functions with more than one local minimum. Standard linearized inversion methods (such as gradient methods) have two main drawbacks: 1) they produce solution models that are highly dependent on the starting model; 2) they do not provide a means of estimating true model uncertainties. However, these issues can be addressed with stochastic methods that sample the space of possible solutions efficiently. Such methods are prohibitively challenging computationally in 3D, but increasingly accessible in 1D. In previous work (Yuan and Romanowicz, 2010; Yuan et al., 2011) we developed a continental-scale anisotropic upper mantle model of North America based on a combination of long-period seismic waveforms and SKS splitting measurements, showing the pervasive presence of layered anisotropy in the cratonic lithosphere with significant variations in the depth of the mid-lithospheric boundary. The radial anisotropy part of the model has recently been updated using the spectral element method for forward wavefield computations and waveform data from the latest deployments of USArray (Yuan and Romanowicz, 2013). However, the long-period waveforms (periods > 40 s) themselves only provide a relatively smooth view of the mantle if the starting model is smooth, and the mantle discontinuities necessary for geodynamical interpretation are not imaged. Increasing the frequency of the computations to constrain smaller-scale features is possible, but challenging computationally, and at the risk of falling into local minima of the misfit function. In

  10. A Multiscale Progressive Failure Modeling Methodology for Composites that Includes Fiber Strength Stochastics

    Science.gov (United States)

    Ricks, Trenton M.; Lacy, Thomas E., Jr.; Bednarcyk, Brett A.; Arnold, Steven M.; Hutchins, John W.

    2014-01-01

    A multiscale modeling methodology was developed for continuous fiber composites that incorporates a statistical distribution of fiber strengths into coupled multiscale micromechanics/finite element (FE) analyses. A modified two-parameter Weibull cumulative distribution function, which accounts for the effect of fiber length on the probability of failure, was used to characterize the statistical distribution of fiber strengths. A parametric study using the NASA Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) was performed to assess the effect of variable fiber strengths on local composite failure within a repeating unit cell (RUC) and subsequent global failure. The NASA code FEAMAC and the ABAQUS finite element solver were used to analyze the progressive failure of a unidirectional SCS-6/TIMETAL 21S metal matrix composite tensile dogbone specimen at 650 °C. Multiscale progressive failure analyses were performed to quantify the effect of spatially varying fiber strengths on the RUC-averaged and global stress-strain responses and failure. The ultimate composite strengths and distribution of failure locations (predominately within the gage section) reasonably matched the experimentally observed failure behavior. The predicted composite failure behavior suggests that the use of macroscale models that exploit global geometric symmetries is inappropriate for cases where the actual distribution of local fiber strengths displays no such symmetries. This issue has not received much attention in the literature. Moreover, the model discretization at a specific length scale can have a profound effect on the computational costs associated with multiscale simulations and on obtaining models that yield accurate yet tractable results.
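    A minimal sketch of the fiber-strength stochastics: sampling strengths from a two-parameter Weibull distribution modified for fiber length, P_f(sigma) = 1 - exp(-(L/L0)(sigma/sigma0)^m), via the inverse CDF. The parameter values are illustrative, not the SCS-6 fiber calibration:

```python
import numpy as np

rng = np.random.default_rng(7)

sigma0 = 4000.0   # MPa, characteristic strength at gauge length L0
m = 10.0          # Weibull modulus
L0 = 25.0         # mm, reference gauge length

def sample_strengths(L, n):
    """Inverse-CDF sampling of fiber strengths for fibers of length L."""
    u = rng.uniform(size=n)
    return sigma0 * (-np.log(1 - u) * L0 / L) ** (1 / m)

# Longer fibers contain more flaws, so their sampled strengths are lower.
for L in (25.0, 100.0):
    s = sample_strengths(L, 100_000)
    print(f"L = {L:5.0f} mm: mean strength = {s.mean():7.1f} MPa")
```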

  11. Scenario Methodology for Modelling of Future Landscape Developments as Basis for Assessing Ecosystem Services

    Directory of Open Access Journals (Sweden)

    Matthias Rosenberg

    2014-04-01

    Full Text Available The ecosystems of our intensively used European landscapes produce a variety of natural goods and services for the benefit of humankind, and secure the basics and quality of life. Because these ecosystems are still undergoing fundamental changes, society has an interest in knowing more about future developments and their ecological impacts. To describe and analyze these changes, scenarios can be developed, and an assessment of the ecological changes can subsequently be carried out. In the project "Landscape Saxony 2050", a methodology for the construction of exploratory scenarios was worked out. The presented methodology provides a means of identifying the driving forces (socio-cultural, economic and ecological conditions) of landscape development. It makes it possible to indicate future paths which lead to a change of structures and processes in the landscape and can influence the capability to provide ecosystem services. One essential component of the technique is that an approach for assessing the effects of landscape changes on ecosystem services is integrated into the scenario methodology. Another is that the methodology is designed to be strongly participatory, i.e., stakeholders are actively involved. The method is a seven-phase model which provides the option of integrating stakeholder participation at all levels of scenario development. The scenario framework was applied to the district of Görlitz, an area of 2100 sq km located on the eastern border of Germany. The region is affected by strong demographic as well as economic changes. The core issue focused on the examination of landscape change in terms of biodiversity. Together with stakeholders, a trend scenario and two alternative scenarios were developed. The changes in landscape structure are represented in story lines, maps and tables. On the basis of the driving forces of the issue areas "cultural/social values" and

  12. Applications of hydrogeological modelling methodology using NAMMU and CONNECTFLOW. Task 1, 2, 3 and 4

    Energy Technology Data Exchange (ETDEWEB)

    Gylling, Bjoern; Marsic, Niko [Kemakta Konsult AB, Stockholm (Sweden); Hartley, Lee; Holton, David [Serco Assurance Ltd, Risley (United Kingdom)

    2004-11-01

    concept, show the work flow from data to model, and create generic though realistic models that can be adapted for later studies. Models for Tasks 1, 2, 3 and 4 have been set up. In all cases, nested models have been created using CONNECTFLOW. Tasks 1 and 2 consider the case of a repository-scale DFN model nested within an embedded CPM model that includes both the site and regional scales. Task 1 demonstrates how such models can be constructed. Task 2 shows what results can be obtained using such a model by applying it to Beberg and comparing the results for canister flux and transport statistics with results from the pure CPM models used in SR 97. Several realisations of the model were performed and analysed to obtain statistics of repository performance measures such as groundwater travel time and flux at starting positions. The results show good consistency in the mean values when compared to SR 97, but the variance is increased in the new nested model. This is to be expected, since a DFN model offers much greater resolution of local-scale variability: it represents individual fractures on a scale of a few metres, rather than the CPM approach where flows are effectively averaged out over a larger volume (e.g. 35 m cuboid elements). Task 2 also gave an opportunity to test the recently developed method to calculate F-quotients directly in CONNECTFLOW. For Task 3, the converse arrangement of nesting was used, with a rather complex CPM model nested within a DFN model. The CPM model represents a tunnel system with an access tunnel, six deposition tunnels and 162 deposition holes. A generic DFN model serves as a framework and makes it possible to study fracture and repository intersections. The goal is to obtain detailed near-field flow rates and the possibility of addressing design issues. In Task 4, the CONNECTFLOW concept is used to calculate more realistic input to a near-field model, since it explicitly models the flow in the fracture system around the canisters

  13. Starting physiology: bioelectrogenesis.

    Science.gov (United States)

    Baptista, Vander

    2015-12-01

    From a Cartesian perspective of rational analysis, the electric potential difference across the cell membrane is one of the fundamental concepts for the study of physiology. Unfortunately, undergraduate students often struggle to understand the genesis of this energy gradient, which makes teaching it a hard task for the instructor. The topic of bioelectrogenesis encompasses multidisciplinary concepts, involves several mechanisms, and is a dynamic process, i.e., it never turns off during the lifetime of the cell. Therefore, to improve the transmission and acquisition of knowledge in this field, I present an alternative didactic model. The design of the model assumes that it is possible to build, in a series of sequential steps, an assembly of proteins within the membrane of an isolated cell in a simulated electrophysiology experiment. Initially, no proteins are inserted in the membrane and the cell is at a baseline energy state; the extracellular and intracellular fluids are at thermodynamic equilibrium. Students are guided through a sequence of four steps that add key membrane transport proteins to the model cell. The model is simple at the start and becomes progressively more complex, finally producing transmembrane chemical and electrical gradients. I believe that this didactic approach provides instructors with a more efficient tool for teaching the mechanisms of the resting membrane potential while helping students avoid common difficulties encountered when learning this topic.
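    Once the model cell has its transport proteins in place, the resulting resting potential can be computed with the Goldman-Hodgkin-Katz equation. A minimal sketch using textbook concentrations and permeabilities for a typical mammalian cell (illustrative values, not the article's):

```python
import numpy as np

R, T, F = 8.314, 310.0, 96485.0          # J/(mol K), K, C/mol

# Relative permeabilities and ion concentrations (mM): outside / inside.
P = {"K": 1.0, "Na": 0.04, "Cl": 0.45}
out = {"K": 5.0, "Na": 145.0, "Cl": 110.0}
inn = {"K": 140.0, "Na": 10.0, "Cl": 10.0}

# GHK voltage equation: the anion (Cl-) swaps inside/outside concentrations.
num = P["K"] * out["K"] + P["Na"] * out["Na"] + P["Cl"] * inn["Cl"]
den = P["K"] * inn["K"] + P["Na"] * inn["Na"] + P["Cl"] * out["Cl"]
Vm = (R * T / F) * np.log(num / den)

print(f"Resting membrane potential: {Vm * 1000:.1f} mV")   # about -67 mV
```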

  14. Xinjiang Dushanzi Project Started Construction

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The Xinjiang Dushanzi integrated petroleum refining and chemical project, the largest of its kind so far in China, with a ten-million-ton petroleum refinery and a one-million-ton ethylene complex, directly under PetroChina Company Limited (PetroChina), started construction with a foundation-laying ceremony on August 22, 2005.

  15. Artificial Neural Network and Response Surface Methodology Modeling in Ionic Conductivity Predictions of Phthaloylchitosan-Based Gel Polymer Electrolyte

    Directory of Open Access Journals (Sweden)

    Ahmad Danial Azzahari

    2016-01-01

    Full Text Available A gel polymer electrolyte system based on phthaloylchitosan was prepared. The effects of process variables, namely lithium iodide, caesium iodide, and 1-butyl-3-methylimidazolium iodide, were investigated using a distance-based ternary mixture experimental design. A comparative approach was taken between response surface methodology (RSM) and an artificial neural network (ANN) to predict the ionic conductivity. The predictive capabilities of the two methodologies were compared in terms of the coefficient of determination R2 based on the validation data set. It was shown that the developed ANN model had a better predictive outcome than the RSM model.
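    A minimal sketch of such an RSM-versus-ANN comparison: a quadratic response surface and a small neural network are fitted to the same mixture data and compared by validation R2. The data are simulated; the composition/conductivity values are not the paper's:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.dirichlet(np.ones(3), size=200)          # ternary mixture fractions
y = np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2] + rng.normal(scale=0.05, size=200)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

rsm = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X_tr, y_tr)

print(f"RSM validation R^2: {rsm.score(X_va, y_va):.3f}")
print(f"ANN validation R^2: {ann.score(X_va, y_va):.3f}")
```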

  16. Modelling of aflatoxin G1 reduction by kefir grain using response surface methodology.

    Science.gov (United States)

    Ansari, Farzaneh; Khodaiyan, Faramarz; Rezaei, Karamatollah; Rahmani, Anosheh

    2015-01-01

    Aflatoxin G1 (AFG1) is one of the main toxic contaminants in pistachio nuts and poses potential health hazards. Hence, AFG1 reduction is one of the main concerns in food safety. Kefir grains contain a symbiotic association of microorganisms well known for their aflatoxin decontamination effects. In this study, a central composite design (CCD) using response surface methodology (RSM) was applied to develop a model to predict AFG1 reduction in pistachio nuts by kefir grain (previously heated at 70 and 110 °C). The independent variables were: toxin concentration (X1: 5, 10, 15, 20 and 25 ng/g), kefir-grain level (X2: 5, 10, 15, 20 and 25%), contact time (X3: 0, 2, 4, 6 and 8 h), and incubation temperature (X4: 20, 30, 40, 50 and 60 °C). There was a significant reduction in AFG1 (p < 0.05) by the kefir grain used. The variables X1 and X3, and the interactions between X2-X4 as well as X3-X4, had significant effects on AFG1 reduction. The model provided a good prediction of AFG1 reduction under the assay conditions. Optimization was used to enhance the efficiency of kefir grain for AFG1 reduction. The optimum conditions for the highest AFG1 reduction (96.8%) were predicted by the model as follows: toxin concentration = 20 ng/g, kefir-grain level = 10%, contact time = 6 h, and incubation temperature = 30 °C, and were validated experimentally in six replications.

  17. Adsorption of cellulase on cereal brans: a simple functional model from response surface methodology

    Directory of Open Access Journals (Sweden)

    Rui Sergio F. da Silva

    1980-11-01

    Full Text Available A functional model based on Langmuirian adsorption as a limiting mechanism was proposed to explain the effect of cellulase during the enzymatic pretreatment of bran, conducted prior to the extraction of proteins by a wet alkaline process from wheat and buckwheat bran materials. The proposed model provides a good fit (r = 0.99) to the data generated by the predictive model taken from the response surface methodology, permitting calculation of an affinity constant (b) and a capacity constant (k) for wheat bran (b = 0.255 g/IU and k = 17.42%) and buckwheat bran (b = 0.066 g/IU and k = 78.74%), using an equation analogous to the Langmuir adsorption isotherm. The results indicated that buckwheat bran has a higher capacity to adsorb cellulase; consequently, a greater response to pretreatment with this enzyme can be expected.
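    A minimal sketch evaluating such a Langmuir-type functional model, y = k·b·C / (1 + b·C), with the affinity (b) and capacity (k) constants reported in the abstract; the exact functional form is assumed to be the standard Langmuir isotherm analogue the authors describe:

```python
import numpy as np

def langmuir(dose, b, k):
    """Response (%) as a function of cellulase dose (IU/g)."""
    return k * b * dose / (1 + b * dose)

doses = np.linspace(0, 40, 5)                    # IU/g, illustrative range
for name, b, k in [("wheat bran", 0.255, 17.42),
                   ("buckwheat bran", 0.066, 78.74)]:
    print(name, np.round(langmuir(doses, b, k), 2))
```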

  18. An Improved Methodology for Individualized Performance Prediction of Sleep-Deprived Individuals with the Two-Process Model

    Science.gov (United States)

    2009-01-01

    process model of sleep regulation for developing individualized biomathematical models that predict performance impairment for individuals subjected to total sleep loss. This new method advances our previous work in two important ways. First, it enables model customization to start as soon as the first performance measurement from an individual becomes available. This was achieved by optimally combining the performance information obtained from the individual’s performance measurements with a priori performance information using a Bayesian framework, while retaining
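    A minimal sketch of the Bayesian customization idea described here: a group-average prior on an individual model parameter is combined with that individual's measurements as soon as the first one arrives. A conjugate normal-normal update stands in for the paper's actual estimation scheme; all numbers are invented:

```python
import numpy as np

prior_mean, prior_var = 250.0, 40.0**2   # a priori parameter estimate (ms)
meas_var = 25.0**2                       # measurement noise variance

mean, var = prior_mean, prior_var
for y in [310.0, 295.0, 330.0]:          # individual's performance measurements
    post_var = 1.0 / (1.0 / var + 1.0 / meas_var)
    mean = post_var * (mean / var + y / meas_var)   # precision-weighted blend
    var = post_var
    print(f"after y = {y:.0f}: estimate = {mean:.1f} +/- {np.sqrt(var):.1f} ms")
```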

  19. A simplified methodology to approach the complexity of foraminiferal calcite oxygen-isotope data - model comparison

    Science.gov (United States)

    Roche, Didier; Waelbroeck, Claire

    2016-04-01

    Since the pioneering work of Epstein (Epstein et al., 1953), numerous calcite isotopic records from the ocean have been used to attempt to reconstruct paleoclimatic information. In addition to the well-known complexity arising from the fact that foraminiferal calcite records both the temperature and the isotopic composition of the surrounding ocean waters, there is an additional effect for surface-dwelling foraminifers: two different species do not have the same habitat and may thus record different signals. This is obvious when comparing paleoclimatic records in which different species have been measured for the isotopic composition of their calcite. The difference in habitat produces a three-dimensional spatial complexity (a foraminifer living in preferred climatic conditions at a specific location, but also at a specific depth, sometimes far from the surface) but also a temporal uncertainty (foraminifers generally live for only a few weeks, and their growth season may evolve through time with climate change). While the different species' habitats potentially contain a wealth of information that could be used to better understand the sequences of climate change, this has seldom been used in modeling studies, most models deriving the calcite isotopic signal from surface and annual mean conditions (e.g. Roche et al., 2014). In the present work, we propose a reduced-complexity approach to compute the calcite isotopic composition for several planktonic foraminifers from climate model simulations under pre-industrial conditions. We base our approach on simple functions describing the temperature dependence of the different species' growth rates (Lombard et al., 2009) and on a probability of presence based on the physical variables computed in the climate model. We present a comparison with available sediment trap and core-top data as a validation of the methodology, focusing on future applicability towards inversion of the signal measured in oceanic sediment cores.

  20. Semantic-Driven e-Government: Application of Uschold and King Ontology Building Methodology for Semantic Ontology Models Development

    CERN Document Server

    Fonou-Dombeu, Jean Vincent; 10.5121/ijwest.2011.2401

    2011-01-01

    Electronic government (e-government) has been one of the most active areas of ontology development during the past six years. In e-government, ontologies are being used to describe and specify e-government services (e-services) because they enable easy composition, matching, mapping and merging of various e-government services. More importantly, they also facilitate the semantic integration and interoperability of e-government services. However, it is still unclear in the current literature how an existing ontology building methodology can be applied to develop semantic ontology models in a government service domain. In this paper the Uschold and King ontology building methodology is applied to develop semantic ontology models in a government service domain. Firstly, the Uschold and King methodology is presented, discussed and applied to build a government domain ontology. Secondly, the domain ontology is evaluated for semantic consistency using its semi-formal representation in Description Logic. Thirdly, an...

  1. Modeling of Membrane-Electrode-Assembly Degradation in Proton-Exchange-Membrane Fuel Cells - Local H2 Starvation and Start-Stop Induced Carbon-Support Corrosion

    Science.gov (United States)

    Gu, Wenbin; Yu, Paul T.; Carter, Robert N.; Makharia, Rohit; Gasteiger, Hubert A.

    Carbon-support corrosion causes electrode structure damage and thus electrode degradation. This chapter discusses fundamental models developed to predict cathode carbon-support corrosion induced by local H2 starvation and start-stop events in a proton-exchange-membrane (PEM) fuel cell. Kinetic models based on the balance of current among the various electrode reactions are illustrative, yielding much insight into the origin of carbon corrosion and its implications for future materials development. They are particularly useful in assessing carbon corrosion rates at a quasi-steady state, when an H2-rich region serves as a power source that drives an H2-free region as a load. Coupled kinetic and transport models are essential in predicting when local H2 starvation occurs and how it affects the carbon corrosion rate. They are specifically needed to estimate the length scales over which H2 will be depleted and the time scales that are valuable for developing mitigation strategies. To predict carbon-support loss distributions over an entire active area, incorporating the electrode pseudo-capacitance appears necessary for situations with shorter residence times, such as start-stop events. As carbon-support corrosion is observed under normal transient operations, further model improvement should focus on finding the carbon corrosion kinetics associated with voltage cycling and on incorporating mechanisms that can quantify voltage decay with carbon-support loss.

  2. Modeling of burr size in drilling of aluminum silicon carbide composites using response surface methodology

    Directory of Open Access Journals (Sweden)

    Avinash A. Thakre

    2016-09-01

    Full Text Available Exit burrs produced during various machining processes degrade the product quality and the functionality of different parts of an assembly. It is essential to select the optimum tool geometry and process parameters to minimize burr formation during machining. In this paper, the effects of cutting speed, feed rate, point angle of the drill bits and concentration of the reinforcements on the burrs produced were investigated. Response surface methodology was adopted to create quadratic models for the height and thickness of the burrs produced during drilling of AlSiC composites. Analyses of means and variance were used to find the significance of the process parameters for the responses and to find the optimum combination of parameters to minimize burr formation. Feed rate, point angle and concentration of reinforcements in the matrix were found to be the significant factors. Both responses were minimized at lower feed rates, higher point angles and higher concentrations of reinforcements. Scanning electron microscopy was used to understand the mechanism of burr formation.

  3. [Proposed difficult airway teaching methodology. Presentation of an interactive fresh frozen cadaver model].

    Science.gov (United States)

    Catalá Bauset, J C; de Andres Ibañez, J A; Valverde Navarro, A; Martinez Soriano, F

    2014-04-01

    The aim of this paper is to present a methodology based on the use of fresh-frozen cadavers for training in the management of the airway, and to evaluate the degree of satisfaction among the physicians trained. Six fresh-frozen cadavers and 14 workstations were prepared, where participants were trained in the different skills needed for airway management. The details of the preparation of the cadavers are described. The level of satisfaction of the participants was determined using a 5-point Likert rating scale at each of the 14 stations, together with an overall assessment of the course and its clinical usefulness. The mean overall ratings of the course and its usefulness were 4.75 and 4.9 out of 5, respectively. All parts of the course were rated above 4 out of 5. The high level of satisfaction with the course remained homogeneous across the 2 editions analysed. The overall satisfaction with the course was not determined solely by any one of its particular parts. The fresh cadaver model for training physicians in airway management techniques is satisfactory to participants and offers a realism approaching that of the live patient. Copyright © 2013 Sociedad Española de Anestesiología, Reanimación y Terapéutica del Dolor. Published by Elsevier España. All rights reserved.

  4. Optimization and modelling using the response surface methodology (RSM) for ciprofloxacin removal by electrocoagulation.

    Science.gov (United States)

    Barışçı, Sibel; Turkay, Ozge

    2016-01-01

    In this study, response surface methodology (RSM) was used to investigate the effects of different operating conditions on the removal of ciprofloxacin (CIP) by electrocoagulation (EC) with pure iron electrodes. A Box-Behnken design was used for optimization of the EC process and to evaluate the effects and interactions of process variables such as applied current density, process time, initial CIP concentration and pH on CIP removal by the EC process. The optimum conditions for maximum CIP removal (86.6%) were found to be pH = 4, C0 = 5 mg/L, Id = 4.325 mA/cm2 and tprocess = 10 min. The model adequacy and the validity of the optimization step were confirmed with additional experiments performed under the proposed optimum conditions. The predicted CIP removal of 86.6% was achieved in each experiment under the optimum conditions. These results indicate that RSM is a useful tool for optimizing the operational conditions for CIP removal by the EC process.

  5. Anticipation Models for On-Line Control in Steel Industry: Methodologies and Case Study

    Science.gov (United States)

    Briano, Enrico; Caballini, Claudia; Revetria, Roberto; Testa, Alessandro; De Leo, Marco; Belgrano, Franco; Bertolotto, Alessandro

    2010-11-01

    This paper describes a simulation system intended to improve steelmaking efficiency and to monitor its performance by anticipating the next period's workload. Production planning in such cases is usually done with Gantt diagrams, based on the operator's work. This means that if an accident occurs, the operator has to revise the production plan within a few minutes, with a lower performance than the original one. The first consideration is obviously that the operator's experience alone is not sufficient to re-plan a performing steelmaking chain. Hence the need for simulation as a problem-solving technique in this complex situation. A brief introduction identifies the common problems in most plants regarding production planning; this is needed to define the boundary conditions and the framework of the problem. A description of the steelmaking processes and the general features and critical aspects of steelmaking planning (paragraph 2) is then given, in order to understand the constraints, features and criticalities to be analyzed and implemented in the simulation model. Paragraph 3 provides a detailed analysis of the proposed methodology and system architecture, so that the reader can understand the complexity the authors had to face in modeling the system and the solutions they found, with approximations, considerations, techniques and algorithms that were the most suitable for this particular situation. A short description of the representative steelmaking plant modeled and the Verification and Validation (V&V) results are presented in paragraph 4. In such a complex system, it was very important to establish the acceptability of the results in terms of verification of correctness, validation of the results, and accreditation to the users. This is a generally valid principle in simulation, but even more so when modeling a complex system such as a steelmaking process, where an error can cost millions. Finally, in paragraph 5

  6. The Industry 4.0 Journey: Start the Learning Journey with the Reference Architecture Model Industry 4.0

    DEFF Research Database (Denmark)

    Nardello, Marco; Møller, Charles; Gøtze, John

    2017-01-01

    The wave of the fourth industrial revolution (Industry 4.0) is breaking over manufacturing companies. In manufacturing, one of the buzzwords of the moment is "Smart production". Smart production involves manufacturing equipment with many sensors that can generate and transmit large amounts of data. ... The paper presents an instantiation of the Reference Architecture Model Industry 4.0 (RAMI4.0) standard for Smart production. The instantiation contributed to organizational learning in the laboratory by collecting and sharing up-to-date information concerning manufacturing equipment.

  7. A Modeling methodology for NoSQL Key-Value databases

    Directory of Open Access Journals (Sweden)

    Gerardo ROSSEL

    2017-08-01

    Full Text Available In recent years, there has been increasing interest in the field of non-relational databases. However, far too little attention has been paid to design methodology. Key-value data stores are an important component of a class of non-relational technologies grouped under the name of NoSQL databases. The aim of this paper is to propose a design methodology for this type of database that overcomes the limitations of traditional techniques. The proposed methodology leads to a clean design that also allows for better data management and consistency.
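    A minimal sketch of the kind of key-value design pattern such a methodology produces: entities and access paths are encoded in composite keys rather than normalized tables. The schema below is a hypothetical illustration (a plain dict stands in for a store such as Redis or DynamoDB):

```python
store: dict[str, dict] = {}

def put_user(user_id: str, profile: dict) -> None:
    store[f"user:{user_id}"] = profile

def put_order(user_id: str, order_id: str, order: dict) -> None:
    # Composite key encodes the one-to-many relationship user -> orders,
    # so "all orders of a user" becomes a simple key-prefix scan.
    store[f"user:{user_id}:order:{order_id}"] = order

def orders_of(user_id: str) -> list[dict]:
    prefix = f"user:{user_id}:order:"
    return [v for k, v in store.items() if k.startswith(prefix)]

put_user("42", {"name": "Ada"})
put_order("42", "1001", {"total": 99.5})
put_order("42", "1002", {"total": 12.0})
print(orders_of("42"))
```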

  8. Critical infrastructure protection decision support system decision model : overview and quick-start user's guide.

    Energy Technology Data Exchange (ETDEWEB)

    Samsa, M.; Van Kuiken, J.; Jusko, M.; Decision and Information Sciences

    2008-12-01

    The Critical Infrastructure Protection Decision Support System Decision Model (CIPDSS-DM) is a useful tool for comparing the effectiveness of alternative risk-mitigation strategies on the basis of CIPDSS consequence scenarios. The model is designed to assist analysts and policy makers in evaluating and selecting the most effective risk-mitigation strategies, as affected by the importance assigned to various impact measures and the likelihood of an incident. A typical CIPDSS-DM decision map plots the relative preference of alternative risk-mitigation options versus the annual probability of an undesired incident occurring once during the protective life of the investment, assumed to be 20 years. The model also enables other types of comparisons, including a decision map that isolates a selected impact variable and displays the relative preference for the options of interest--parameterized on the basis of the contribution of the isolated variable to total impact, as well as the likelihood of the incident. Satisfaction/regret analysis further assists the analyst or policy maker in evaluating the confidence with which one option can be selected over another.

  9. Modeling Methodology of Progressive Collapse by the Example of Real High-Rise Buildings

    Directory of Open Access Journals (Sweden)

    Mariya Barabash

    2014-12-01

    Full Text Available The purpose of the research was to find out several ways to design real buildings with protective measures against progressive collapse. There are no uniform guidelines for choosing the type of finite element able to provide the necessary accuracy of the calculation model while taking into account all the main factors affecting the strength and stability of the building. It is therefore necessary to develop numerical methods for analysing the progressive collapse of a building's load-bearing structural elements in case of emergency. In addition, our task was to present a methodology for checking the stability of a building against progressive collapse. With this technique, a nonlinear analysis is performed for the special (emergency) combination of loads and impacts, including permanent and long-term regulatory loads and the impact of hypothetical local failures of bearing structures. The study was carried out on a high-rise apartment complex with underground parking; the empirical part examined the causes of progressive collapse of structures, taking into account stepwise assembly. The existing retail and office complex "Gulliver" with public facilities and parking is also considered, where the computation addressed progressive collapse of the upper slab of the technical floor. The calculation considered an emergency helicopter landing or crash on the floor slab. Analysis of the results leads to the following conclusions: to assess the real resilience of a building in an emergency situation and its resistance to progressive collapse, it is recommended to analyse the design taking into account physical and geometric nonlinearity and to model the building's life cycle.

  10. A METHODOLOGICAL MODEL FOR INTEGRATING CHARACTER WITHIN CONTENT AND LANGUAGE INTEGRATED LEARNING IN SOCIOLOGY OF RELIGION

    Directory of Open Access Journals (Sweden)

    Moh Yasir Alimi

    2013-01-01

    Full Text Available In this article, I describe a methodological model I used in an experimental study on how to integrate character within the practice of Content and Language Integrated Learning (CLIL) in higher education in Indonesia. This research adds to the literature on character education and CLIL in tertiary education, giving nuance to the practice of CLIL, so far predominantly a practice in primary and secondary schools. The research was conducted at Semarang State University, in the Department of Sociology and Anthropology, in a bilingual Sociology of Religion class of 25 students. The research indicates that the integration of character within CLIL enriches the perspective of CLIL by strengthening its use for intellectual growth and moral development. Conversely, the use of CLIL for character education brings methods and perspectives to the practice of character education, which so far has emphasised content reforms without reforming learning methods. The research also reveals that the weakness of CLIL in using texts for classroom learning can be overcome by the use of specific reading and writing strategies. I develop a practical text strategy that can be used effectively in highly conceptual subjects such as sociology of religion.

  11. A comparative review of multi-risk modelling methodologies for climate change adaptation in mountain regions

    Science.gov (United States)

    Terzi, Stefano; Torresan, Silvia; Schneiderbauer, Stefan

    2017-04-01

    Keywords: Climate change, mountain regions, multi-risk assessment, climate change adaptation. Climate change has already led to a wide range of impacts on the environment, the economy and society. Adaptation actions are needed to cope with the impacts that have already occurred (e.g. storms, glacier melting, floods, droughts) and to prepare for future scenarios of climate change. The mountain environment is particularly vulnerable to climate change due to its exposure to recent climate warming (e.g. water regime changes, thawing of permafrost) and to the high degree of specialization of both natural and human systems (e.g. alpine species, valley population density, tourism-based economy). As a consequence, local mountain governments are encouraged to undertake territorial governance policies on climate change, considering multiple risks and opportunities for the mountain economy and identifying the best portfolio of adaptation strategies. This study aims to provide a literature review of available qualitative and quantitative tools, methodological guidelines and best practices for conducting multi-risk assessments in the mountain environment within the context of climate change. We analyzed multi-risk modelling and assessment methods applied in alpine regions (e.g. event trees, Bayesian networks, agent-based models) in order to identify key concepts (exposure, resilience, vulnerability, risk, adaptive capacity), climatic drivers, cause-effect relationships and socio-ecological systems to be integrated in a comprehensive framework. The main outcomes of the review, including a comparison of existing techniques based on different criteria (e.g. scale of analysis, targeted questions, level of complexity) and a snapshot of the developed multi-risk framework for climate change adaptation, will be presented and discussed here.

  12. Start-Up Capital

    OpenAIRE

    Verheul, Ingrid; Thurik, Roy

    2000-01-01

    Female and male entrepreneurs differ in the way they finance their businesses. This can be attributed to the type of business and the type of management and experience (indirect effect). Female start-ups may also experience other barriers based upon discriminatory effects (direct effect). Whether gender has an impact on the size and composition of start-up capital is the subject of the present paper. To test for these direct and indirect effects, data on 2000 Dutch starting entreprene...

  13. A data-driven multi-model methodology with deep feature selection for short-term wind forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Feng, Cong; Cui, Mingjian; Hodge, Bri-Mathias; Zhang, Jie

    2017-03-01

    With growing wind penetration into power systems worldwide, improving wind power forecasting accuracy is becoming increasingly important to ensure continued economic and reliable power system operations. In this paper, a data-driven multi-model wind forecasting methodology is developed with a two-layer ensemble machine learning technique. The first layer is composed of multiple machine learning models that generate individual forecasts. A deep feature selection framework is developed to determine the most suitable inputs to the first-layer machine learning models. A blending algorithm is then applied in the second layer to create an ensemble of the forecasts produced by the first-layer models and to generate both deterministic and probabilistic forecasts. This two-layer model seeks to utilize the statistically different characteristics of each machine learning algorithm. A number of machine learning algorithms are selected and compared in both layers. The developed multi-model wind forecasting methodology is compared to several benchmarks, and its effectiveness is evaluated for 1-hour-ahead wind speed forecasting at seven locations of the Surface Radiation network. Numerical results show that, compared to single-algorithm models, the developed multi-model framework with the deep feature selection procedure improves forecasting accuracy by up to 30%.
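    A minimal sketch of the two-layer idea: several first-layer learners produce individual forecasts and a second-layer model blends them. Features and targets are simulated, and the deep feature-selection step of the real method is not shown:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))                      # lagged weather features
y = X[:, 0] * 2 + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

blend = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                ("mlp", MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                                     random_state=0))],
    final_estimator=Ridge())                        # second-layer blender
blend.fit(X_tr, y_tr)
print(f"ensemble R^2 on held-out data: {blend.score(X_te, y_te):.3f}")
```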

  14. Teaching Mathematical Modelling in a Design Context: A Methodology Based on the Mechanical Analysis of a Domestic Cancrusher.

    Science.gov (United States)

    Pace, Sydney

    2000-01-01

    Presents a methodology for teaching mathematical modeling skills to A-level students. Gives an example illustrating how mathematics teachers and design teachers can take a joint perspective in devising learning opportunities that develop mathematical and design skills concurrently. (Contains 14 references.) (Author/ASK)

  15. Soft Systems Methodology and Problem Framing: Development of an Environmental Problem Solving Model Respecting a New Emergent Reflexive Paradigm.

    Science.gov (United States)

    Gauthier, Benoit; And Others

    1997-01-01

    Identifies the more representative problem-solving models in environmental education. Suggests the addition of a strategy for defining a problem situation using Soft Systems Methodology to environmental education activities explicitly designed for the development of critical thinking. Contains 45 references. (JRH)

  16. Analytical investigation of the boundary-triggered phase transition dynamics in a cellular automata model with a slow-to-start rule

    Institute of Scientific and Technical Information of China (English)

    Jia Ning; Ma Shou-Feng; Zhong Shi-Quan

    2012-01-01

    Previous studies suggest that there are three different jam phases in the cellular automaton model with a slow-to-start rule under open boundaries. In the present paper, the dynamics of each free-flow-jam phase transition is studied. By analysing the microscopic behaviour of the traffic flow, we obtain analytical results on the phase transition dynamics. Our results describe the detailed time evolution of the system during the phase transition, and they provide a good approximation to the numerical simulation data. These findings explain the microscopic mechanism and details of the boundary-triggered phase transition dynamics.
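    A minimal sketch of a Nagel-Schreckenberg-type cellular automaton with a velocity-dependent-randomization slow-to-start rule, the ingredient that produces the metastable jam phases studied in such models. For simplicity this sketch uses a periodic ring rather than the paper's open boundaries, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, vmax, steps = 200, 40, 5, 100
p_move, p_start = 0.1, 0.5               # dawdling prob.: moving vs stopped cars

pos = np.sort(rng.choice(L, size=N, replace=False))
vel = np.zeros(N, dtype=int)

for _ in range(steps):
    p = np.where(vel == 0, p_start, p_move)          # slow-to-start (VDR) rule
    gaps = (np.roll(pos, -1) - pos - 1) % L          # empty cells ahead
    vel = np.minimum(vel + 1, vmax)                  # acceleration
    vel = np.minimum(vel, gaps)                      # braking
    vel = np.maximum(vel - (rng.random(N) < p), 0)   # randomization
    pos = (pos + vel) % L                            # movement

print(f"mean speed after {steps} steps: {vel.mean():.2f} cells/step")
```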

  17. A Neuro-Mechanical Model Explaining the Physiological Role of Fast and Slow Muscle Fibres at Stop and Start of Stepping of an Insect Leg

    OpenAIRE

    Tibor Istvan Toth; Martyna Grabowska; Joachim Schmidt; Ansgar Büschges; Silvia Daun-Gruhn

    2013-01-01

    Stop and start of stepping are two basic actions of the musculo-skeletal system of a leg. Although they are basic phenomena, they require the coordinated activities of the leg muscles. However, little is known of the details of how these activities are generated by the interactions between the local neuronal networks controlling the fast and slow muscle fibres at the individual leg joints. In the present work, we aim at uncovering some of those details using a suitable neuro-mechanical model....

  18. Methodological guidelines

    Energy Technology Data Exchange (ETDEWEB)

    Halsnaes, K.; Callaway, J.M.; Meyer, H.J.

    1999-04-01

    The guideline document establishes a general overview of the main components of climate change mitigation assessment. This includes an outline of key economic concepts, scenario structure, common assumptions, modelling tools and country study assumptions. The guidelines are supported by Handbook Reports that contain more detailed specifications of calculation standards, input assumptions and available tools. The major objective of the project has been to provide a methodology, an implementing framework and a reporting system which countries can follow in meeting their future reporting obligations under the FCCC and for GEF enabling activities. The project builds upon the methodology development and application in the UNEP National Abatement Costing Studies (UNEP, 1994a). The various elements provide countries with a road map for conducting climate change mitigation studies and submitting national reports as required by the FCCC. (au) 121 refs.

  19. Impact of the health services utilization and improvement model (HUIM) on self efficacy and satisfaction among a head start population.

    Science.gov (United States)

    Tataw, David B; Bazargan-Hejazi, Shahrzad

    2010-01-01

    The aim of this paper is to evaluate and report the impact of the Health Services Utilization Improvement Model (HUIM) on utilization of and satisfaction with care, as well as knowledge regarding prevention, detection, and treatment of asthma, diabetes, tuberculosis, and child injury among low-income health services consumers. HUIM outcomes data show that the coupling of parental education and ecological factors (service linkage and provider orientation) impacts the health services utilization experience of low-income consumers, as evidenced by improved self-efficacy (knowledge and voice) and satisfaction with care from a child's regular provider. Participation in HUIM activities also improved low-income consumers' knowledge of disease identification, self-care and prevention.

  20. Evaluating Supply Chain Management: A Methodology Based on a Theoretical Model

    National Research Council Canada - National Science Library

    Simon, Alexandre Tadeu; Serio, Luiz Carlos Di; Pires, Silvio Roberto Ignacio; Martins, Guilherme Silveira

    2015-01-01

    Despite the increasing interest in supply chain management (SCM) by researchers and practitioners, there is still a lack of academic literature concerning topics such as methodologies to guide and support SCM evaluation...

  1. A Modeling Methodology to Support Evaluation Public Health Impacts on Air Pollution Reduction Programs

    Science.gov (United States)

    Environmental public health protection requires a good understanding of types and locations of pollutant emissions of health concern and their relationship to environmental public health indicators. Therefore, it is necessary to develop the methodologies, data sources, and tools...

  2. The Relationships of Soft Systems Methodology (SSM), Business Process Modeling and e-Government

    National Research Council Canada - National Science Library

    Arief Ramadhan; Dana Indra Sensuse

    2012-01-01

    e-Government has emerged in several countries. Because many aspects must be considered, and because e-Government contains some soft components, the Soft Systems Methodology (SSM...

  3. An Integrated Modeling Approach for Predicting Process Maps of Residual Stress and Distortion in a Laser Weld: A Combined CFD-FE Methodology

    Science.gov (United States)

    Turner, Richard P.; Panwisawas, Chinnapat; Sovani, Yogesh; Perumal, Bama; Ward, R. Mark; Brooks, Jeffery W.; Basoalto, Hector C.

    2016-10-01

    Laser welding has become an important joining methodology within a number of industries for the structural joining of metallic parts. It offers a high power density welding capability which is desirable for deep weld sections, but is equally suited to performing thinner welded joints with sensible amendments to key process variables. However, as with any welding process, the introduction of severe thermal gradients at the weld line will inevitably lead to process-induced residual stress formation and distortions. Finite element (FE) predictions for weld simulation have been made within academia and industrial research for a number of years, although given the fluid nature of the molten weld pool, FE methodologies have limited capabilities. An improvement upon this established method would be to incorporate a computational fluid dynamics (CFD) model formulation prior to the FE model, to predict the weld pool shape and fluid flow, such that details can be fed into FE from CFD as a starting condition. The key outputs of residual stress and distortions predicted by the FE model can then be monitored against the process variables input to the model. Further, a link between the thermal results and the microstructural properties is of interest. Therefore, an empirical relationship between lamellar spacing and the cooling rate was developed and used to make predictions about the lamellar spacing for welds of different process parameters. Processing parameter combinations that lead to regions of high residual stress formation and high distortion have been determined, and the impact of processing parameters upon the predicted lamellar spacing has been presented.

  4. Development and application of a statistical methodology to evaluate the predictive accuracy of building energy baseline models

    Energy Technology Data Exchange (ETDEWEB)

    Granderson, Jessica [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States). Energy Technologies Area Div.; Price, Phillip N. [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States). Energy Technologies Area Div.

    2014-03-01

    This paper documents the development and application of a general statistical methodology to assess the accuracy of baseline energy models, focusing on its application to Measurement and Verification (M&V) of whole-building energy savings. The methodology complements the principles addressed in resources such as ASHRAE Guideline 14 and the International Performance Measurement and Verification Protocol. It requires fitting a baseline model to data from a "training period" and using the model to predict total electricity consumption during a subsequent "prediction period." We illustrate the methodology by evaluating five baseline models using data from 29 buildings. The training period and prediction period were varied, and model predictions of daily, weekly, and monthly energy consumption were compared to meter data to determine model accuracy. Several metrics were used to characterize the accuracy of the predictions, and in some cases the best-performing model as judged by one metric was not the best performer when judged by another metric.
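
    The train/predict/score loop that the methodology formalizes can be sketched in a few lines. The ordinary-least-squares baseline and the CV(RMSE) metric below are illustrative stand-ins; the paper evaluates five different baseline models and several metrics.

        import numpy as np

        def cv_rmse(y_true, y_pred):
            # coefficient of variation of RMSE, a standard M&V accuracy metric
            rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
            return rmse / np.mean(y_true)

        def evaluate_baseline(X, y, n_train):
            # fit on the "training period", score on the "prediction period"
            beta, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)
            y_hat = X[n_train:] @ beta
            return cv_rmse(y[n_train:], y_hat)

        # X: regressors (e.g. outdoor temperature, hour-of-week dummies, intercept)
        # y: metered whole-building electricity, at daily/weekly/monthly resolution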

  5. Empirical-Statistical Methodology and Methods for Modeling and Forecasting of Climate Variability of Different Temporal Scales

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The main problem of modern climatology is to assess present and future climate change. For this aim two approaches are used: physical-mathematical modeling on the basis of GCMs, and palaeoclimatic analogues. A third approach, based on an empirical-statistical methodology, is developed in this paper. This approach addresses two main problems: to give a realistic assessment of climate change from observed data for climate monitoring, extrapolating the obtained climate tendencies to the near future (10-15 years), and to give an empirical basis for the further development of physical-mathematical models. The basic theory and methodology of the empirical-statistical approach have been developed, along with a common model for describing space-time climate variations that takes into account processes of different time scales. A way of decreasing present and future uncertainty is suggested: the extraction of long-term climate change components in individual time series and the spatial generalization of common climate tendencies over the resulting homogeneous regions. Algorithms and methods for realizing the empirical-statistical methodology have been developed, together with methods for the generalization of intra-annual fluctuations, methods for the extraction of homogeneous components of different time scales (interannual, decadal, century), methodology and methods for spatial generalization and modeling, and methods for extrapolation on the basis of two main kinds of time models: stochastic and deterministic-stochastic. Applications of the developed methodology and methods are given for the longest time series of temperature and precipitation over the world and for spatial generalization over the European area.

  6. A methodology for modelling energy-related human behaviour: Application to window opening behaviour in residential buildings

    DEFF Research Database (Denmark)

    Fabi, Valentina; Andersen, Rune Korsholm; Corgnati, Stefano P.

    2013-01-01

    Occupant behaviour related to building control possibilities is a very complex process that has been studied only in recent years, with foci on natural ventilation (window opening behaviour), space heating energy demand (in particular adjustments of the temperature set-point) and natural light (window blind adjustments). In this paper, a methodology is presented to model user behaviour in the context of real energy use, and it is applied to a case study. The methodology, based on medium/long-term monitoring, is aimed at shifting towards a probabilistic approach for modelling human behaviour related to the control of the indoor environment. The procedure is applied to models of occupants' interactions with windows (opening and closing behaviour). Models of occupants' window opening behaviour were inferred from measurements and implemented in a simulation program...
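
    Probabilistic window-opening models of this kind are commonly expressed as logistic regressions on indoor and outdoor drivers, with the opening action drawn stochastically at each simulation time step. The sketch below illustrates that pattern; the predictor set and coefficient values are invented for illustration and are not taken from the paper.

        import numpy as np

        def open_probability(t_in, t_out, co2, w):
            # logistic model; w = (intercept, indoor temp, outdoor temp, CO2)
            z = w[0] + w[1] * t_in + w[2] * t_out + w[3] * co2
            return 1.0 / (1.0 + np.exp(-z))

        # one simulation time step: draw the occupant's action stochastically
        rng = np.random.default_rng(1)
        w = np.array([-8.0, 0.25, 0.05, 0.002])           # invented coefficients
        p = open_probability(t_in=26.0, t_out=18.0, co2=900.0, w=w)
        window_opens = rng.random() < p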

  7. Hybrid methodologies for modeling the dynamics in selected classes of materials

    Science.gov (United States)

    Miljacic, Ljubomir

    The advent of computers brought a profound change in the way practical problems in the physics of materials are addressed. Within the last decade, a rapidly evolving area of research has been oriented towards interfacing existing numerical tools in an optimized way, explicitly taking advantage of the specifics of the problem: the so-called "hybrid" approach. The Classical Molecular Dynamics (CMD) method holds a central position among computational methods for modeling different levels of physical behavior; its two main limitations are the accuracy of the force fields used and the accessible time scale. In this work, a new methodology was constructed to improve force-field quality by matching it to a quantum model, via mapping a complex many-body situation to a much reduced description of important local geometries. It was tested on a system of a water molecule interacting with a hematite surface, and a 66% reduction in the force mismatch was achieved. Also, a strategy for efficiently improving radial data fitting was found, in which fit functions are defined on a set of overlapping radial zones and a specific post-processing numerical demand is placed on the fitting data. It was incorporated, tested and applied to the DVM density-functional code, and showed that the fitting error of the radial degrees of freedom can be removed for all practical purposes. Two different systems with concurrent Poisson and Newtonian evolution were analyzed in an attempt to go beyond the CMD-accessible time. Polymerization and self-assembly of thin molecular films on a quartz surface were modeled, where local hydrogen bonding was used as an indicator of local configurational relaxation and as a guide to the polymerization process. The results present a consistent picture which contradicts the previous interpretation of experimental data. Also, a study of diffusion in the glass-forming liquid glycerol was conducted over a temperature range inaccessible to CMD. Atoms were artificially

  8. Soft-systems thinking for community-development decision making: A participative, computer-based modeling methodology

    Energy Technology Data Exchange (ETDEWEB)

    Cook, R.J.

    1987-01-01

    The normative-rational models used to ensure logical decision processes do not capture the complex nature of planning situations, and alternative methodologies that can improve the collection and use of qualitative data are scarce. The intent of this thesis is to design and apply a methodology that may help planners incorporate such data into policy analysis. To guide the application and allow for its evaluation, criteria are gleaned from the literature on computer modeling, human cognition, and group process. From this, a series of individual and group ideation techniques along with two computer-modeling procedures are combined to aid participant understanding and provide computation capabilities. The methodology is applied in the form of a case study in Door County, Wisconsin. The process and its results were evaluated by workshop participants and by three planners who were intent on using this information to help update a county master plan. Based on established criteria, their evaluations indicate that the soft-systems methodology devised in this thesis has potential for improving the collection and use of qualitative data for public-policy purposes.

  9. Modeling and optimization of ethanol fermentation using Saccharomyces cerevisiae: Response surface methodology and artificial neural network

    Directory of Open Access Journals (Sweden)

    Esfahanian Mehri

    2013-01-01

    In this study, the capabilities of response surface methodology (RSM) and artificial neural networks (ANN) for modeling and optimization of ethanol production from glucose using Saccharomyces cerevisiae in a batch fermentation process were investigated. The effect of three independent variables in a defined range of pH (4.2-5.8), temperature (20-40 °C) and glucose concentration (20-60 g/l) on cell growth and ethanol production was evaluated. Results showed that the prediction accuracy of ANN was similar to that of RSM. At the optimum conditions of temperature (32 °C), pH (5.2) and glucose concentration (50 g/l) suggested by the statistical methods, the maximum cell dry weight and ethanol concentration obtained from RSM were 12.06 and 16.2 g/l, whereas the experimental values were 12.09 and 16.53 g/l, respectively. Using ANN as the fitness function, the maximum cell dry weight and ethanol concentration were 12.05 and 16.16 g/l, respectively. The coefficients of determination for biomass and ethanol concentration obtained from RSM were 0.9965 and 0.9853, and from ANN 0.9975 and 0.9936, respectively. The process parameter optimization was successfully conducted using both RSM and ANN; however, prediction by ANN was slightly more precise than by RSM. Based on experimental data, a maximum ethanol yield of 0.5 g ethanol/g substrate (97% of the theoretical yield) was obtained.
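
    The comparison reported above amounts to fitting a second-order polynomial (the RSM response surface) and a small feed-forward network to the same design points and comparing goodness of fit. A sketch of that workflow with scikit-learn follows; the synthetic data stand in for the actual design-of-experiments measurements, and all model settings are illustrative.

        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures, StandardScaler
        from sklearn.linear_model import LinearRegression
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline

        # X columns: pH, temperature, glucose; y: ethanol concentration (g/l).
        # Synthetic placeholder data; replace with the experimental design points.
        rng = np.random.default_rng(0)
        X = rng.uniform([4.2, 20, 20], [5.8, 40, 60], size=(17, 3))
        y = 10 + 0.1 * X[:, 2] - 0.05 * (X[:, 1] - 32) ** 2 + rng.normal(0, 0.3, 17)

        rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
        ann = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                         random_state=0))
        for name, model in [("RSM", rsm), ("ANN", ann)]:
            model.fit(X, y)
            print(name, "R^2 =", round(model.score(X, y), 4))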

  10. Epilepsy Therapy Development: Technical and Methodological Issues in Studies with Animal Models

    Science.gov (United States)

    Galanopoulou, Aristea S.; Kokaia, Merab; Loeb, Jeffrey A.; Nehlig, Astrid; Pitkänen, Asla; Rogawski, Michael A.; Staley, Kevin J.; Whittemore, Vicky H.; Dudek, F. Edward

    2013-01-01

    The search for new treatments for seizures, epilepsies and their comorbidities faces considerable challenges. Partly, this is due to gaps in our understanding of the etiology and pathophysiology of most forms of epilepsy. An additional challenge is the difficulty of predicting the efficacy, tolerability and impact of potential new treatments on epilepsies and comorbidities in humans, using the available resources. Here we provide a summary of the discussions and proposals of Working Group 2 as presented in the Joint American Epilepsy Society and International League Against Epilepsy Translational Workshop in London (September 2012). We propose methodological and reporting practices that will enhance the uniformity, reliability and reporting of early-stage preclinical studies with animal seizure and epilepsy models that aim to develop and evaluate new therapies for seizures or epilepsies, using multi-disciplinary approaches. The topics considered include: (a) implementation of better study design and reporting practices, (b) incorporation in the study design and analysis of covariates that may impact outcomes (including species, age, sex), (c) utilization of approaches to document target relevance, exposure and engagement by the tested treatment, (d) utilization of clinically relevant treatment protocols, (e) optimization of the use of video-EEG recordings to best meet the study goals, and (f) inclusion of outcome measures that address the tolerability of the treatment or study endpoints apart from seizures. We further discuss the different expectations for studies aiming to meet regulatory requirements to obtain approval for clinical testing in humans. Implementation of the rigorous practices discussed in this report will require considerable investment in time, funds and other research resources, which may create challenges for academic researchers seeking to contribute to epilepsy therapy discovery and development. We propose several infrastructure

  11. Epilepsy therapy development: technical and methodologic issues in studies with animal models.

    Science.gov (United States)

    Galanopoulou, Aristea S; Kokaia, Merab; Loeb, Jeffrey A; Nehlig, Astrid; Pitkänen, Asla; Rogawski, Michael A; Staley, Kevin J; Whittemore, Vicky H; Dudek, F Edward

    2013-08-01

    The search for new treatments for seizures, epilepsies, and their comorbidities faces considerable challenges. This is due in part to gaps in our understanding of the etiology and pathophysiology of most forms of epilepsy. An additional challenge is the difficulty in predicting the efficacy, tolerability, and impact of potential new treatments on epilepsies and comorbidities in humans, using the available resources. Herein we provide a summary of the discussions and proposals of the Working Group 2 as presented in the Joint American Epilepsy Society and International League Against Epilepsy Translational Workshop in London (September 2012). We propose methodologic and reporting practices that will enhance the uniformity, reliability, and reporting of early stage preclinical studies with animal seizure and epilepsy models that aim to develop and evaluate new therapies for seizures or epilepsies, using multidisciplinary approaches. The topics considered include the following: (1) implementation of better study design and reporting practices; (2) incorporation in the study design and analysis of covariates that may influence outcomes (including species, age, sex); (3) utilization of approaches to document target relevance, exposure, and engagement by the tested treatment; (4) utilization of clinically relevant treatment protocols; (5) optimization of the use of video-electroencephalography (EEG) recordings to best meet the study goals; and (6) inclusion of outcome measures that address the tolerability of the treatment or study end points apart from seizures. We further discuss the different expectations for studies aiming to meet regulatory requirements to obtain approval for clinical testing in humans. Implementation of the rigorous practices discussed in this report will require considerable investment in time, funds, and other research resources, which may create challenges for academic researchers seeking to contribute to epilepsy therapy discovery and

  12. Methodology for the preclinical screening of anti-herpesvirus activity starting from natural products [Metodología de pesquisa preclínica de actividad anti-herpesvirus a partir de productos naturales]

    Directory of Open Access Journals (Sweden)

    Gloria del Barrio Alonso

    2008-08-01

    Herpesviridae is one of the viral families with the greatest impact on human and animal health. Herpes simplex viruses are the etiological agents of very common infections associated with a broad range of clinical symptoms. The latent nature of the infection, which allows the virus to escape from the effectors of the immune response, has hindered the development of efficient anti-herpes vaccines. This fact has encouraged the search for new anti-herpes products, which in turn requires the establishment of a system for the preclinical evaluation of natural and synthetic products by means of a rapid in vitro screening methodology. In this work, we describe the methodology and rationale of the assay system used by the Antiviral Natural Products Research Group (Faculty of Biology, University of Havana) to screen products for anti-herpes properties. This methodology includes the primary evaluation assay and a variety of secondary tests aimed at determining mechanisms of action.

  13. Integrated methodological frameworks for modelling agent-based advanced supply chain planning systems: A systematic literature review

    Directory of Open Access Journals (Sweden)

    Luis Antonio Santa-Eulalia

    2011-12-01

    Purpose: The objective of this paper is to provide a systematic literature review of recent developments in methodological frameworks for the modelling and simulation of agent-based advanced supply chain planning systems. Design/methodology/approach: A systematic literature review is provided to identify, select, analyse and critically summarize all suitable studies in the area. It is organized into two blocks: the first covers agent-based supply chain planning systems in general terms, while the second narrows the previous search to identify those works explicitly containing methodological aspects. Findings: Among the sixty suitable manuscripts identified in the primary literature search, only seven explicitly considered methodological aspects. In addition, we noted that, in general, the notion of advanced supply chain planning is not used unambiguously, that the social and individual aspects of the agent society are not clearly taken into account in several studies, and that a significant part of the works are theoretical in nature, with few real-scale industrial applications. An integrated framework covering all phases of the modelling and simulation process is still lacking in the literature visited. Research limitations/implications: The main research limitations relate to the period covered (the last four years), the selected scientific databases, the selected language (i.e. English) and the use of only one assessment framework for the descriptive evaluation part. Practical implications: The identification of recent works in the domain and the discussion of their limitations can help pave the way for new and innovative research towards a complete methodological framework for agent-based advanced supply chain planning systems. Originality/value: As there are no recent state-of-the-art reviews in the domain of methodological frameworks for agent-based supply chain planning, this paper contributes to

  14. Methodological advances

    Directory of Open Access Journals (Sweden)

    Lebreton, J.-D.

    2004-06-01

    The study of population dynamics has long depended on methodological progress. Among many striking examples, continuous time models for populations structured in age (Sharpe & Lotka, 1911) were made possible by progress in the mathematics of integral equations. The relationship between population ecology and mathematical and statistical modelling in the broad sense therefore raises a challenge in interdisciplinary research. After the impetus given in particular by Seber (1982), the regular biennial EURING conferences became a major vehicle to achieve this goal. It is thus not surprising that EURING 2003 included a session entitled "Methodological advances". Even at the risk of heterogeneity in the topics covered and of overlap with other sessions, such a session was a logical way of ensuring that recent and exciting new developments were made available for discussion, further development by biometricians and use by population biologists. The topics covered included several to which full sessions were devoted at EURING 2000 (Anderson, 2001), such as individual covariates, Bayesian methods, and multi-state models. Some other topics (heterogeneity models, exploited populations and integrated modelling) had been addressed by contributed talks or posters. Their presence among "methodological advances", as well as in other sessions of EURING 2003, was intended as a response to their rapid development and potential relevance to biological questions. We briefly review all talks here, including those not published in the proceedings. In the plenary talk, Pradel et al. (in prep.) developed goodness-of-fit tests for multi-state models. Until recently, the only goodness-of-fit procedures for multi-state models were ad hoc, and non-optimal, involving the use of standard tests for single-state models (Lebreton & Pradel, 2002). Pradel et al. (2003) proposed a general approach based in particular on mixtures of multinomial distributions. Pradel et al. (in prep.) showed

  15. A METHODOLOGICAL MODEL FOR INTEGRATING CHARACTER WITHIN CONTENT AND LANGUAGE INTEGRATED LEARNING IN SOCIOLOGY OF RELIGION

    Directory of Open Access Journals (Sweden)

    Moh Yasir Alimi

    2014-02-01

    In this article, I describe a methodological model I used in an experimental study on how to integrate character education within the practice of Content and Language Integrated Learning (CLIL) in higher education in Indonesia. This research adds to research on character education and CLIL in tertiary education, giving nuance to the practice of CLIL, so far predominantly a practice in primary and secondary schools. The research was conducted at Semarang State University, in the Department of Sociology and Anthropology, in a bilingual Sociology of Religion class attended by 25 students. The research indicates that the integration of character within CLIL enriches the perspective of CLIL by strengthening its use for intellectual growth and moral development. Conversely, the use of CLIL with character education provides methods and perspectives for the practice of character education, which so far has emphasised content reforms without reforms of learning methods. The research also reveals that the weakness of CLIL in using texts for classroom learning can be overcome by the use of specific reading and writing strategies. I develop a practical text strategy which can be used effectively in highly conceptual subjects such as sociology of religion.

  16. Proposition of a modeling and an analysis methodology of integrated reverse logistics chain in the direct chain

    Energy Technology Data Exchange (ETDEWEB)

    Mimouni, F.; Abouabdellah, A.

    2016-07-01

    This work proposes a modelling and analysis methodology for reverse logistics integrated into the direct supply chain, based on the combination of Bayesian networks and Petri networks. The network is modelled with a Bayesian network complemented by a Petri network, the latter being used to break the cycle problem that arises in the Bayesian network. The model assumes that demands are independent from returns, and it can only be used for non-perishable products. Legislative and service aspects considered include recycling laws, protection of the environment, and client satisfaction via after-sale service. (Author)

  17. Artificial neural network and response surface methodology modeling in mass transfer parameters predictions during osmotic dehydration of Carica papaya L.

    Directory of Open Access Journals (Sweden)

    J. Prakash Maran

    2013-09-01

    In this study, a comparative approach was made between artificial neural networks (ANN) and response surface methodology (RSM) to predict the mass transfer parameters of osmotic dehydration of papaya. The effects of process variables such as temperature, osmotic solution concentration and agitation speed on water loss, weight reduction, and solid gain during osmotic dehydration were investigated using a three-level three-factor Box-Behnken experimental design. The same design was utilized to train a feed-forward multilayered perceptron (MLP) ANN with a back-propagation algorithm. The predictive capabilities of the two methodologies were compared in terms of root mean square error (RMSE), mean absolute error (MAE), standard error of prediction (SEP), model predictive error (MPE), chi-square statistic (χ2), and coefficient of determination (R2) based on the validation data set. The results showed that a properly trained ANN model is more accurate in prediction than the RSM model.

  18. Methodology Development for Passive Component Reliability Modeling in a Multi-Physics Simulation Environment

    Energy Technology Data Exchange (ETDEWEB)

    Aldemir, Tunc [The Ohio State Univ., Columbus, OH (United States); Denning, Richard [The Ohio State Univ., Columbus, OH (United States); Catalyurek, Umit [The Ohio State Univ., Columbus, OH (United States); Unwin, Stephen [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-01-23

    Reduction in safety margin can be expected as passive structures and components undergo degradation with time. Limitations in the traditional probabilistic risk assessment (PRA) methodology constrain its value as an effective tool to address the impact of aging effects on risk and for quantifying the impact of aging management strategies in maintaining safety margins. A methodology has been developed to address multiple aging mechanisms involving large numbers of components (with possibly statistically dependent failures) within the PRA framework in a computationally feasible manner when the sequencing of events is conditioned on the physical conditions predicted in a simulation environment, such as the New Generation System Code (NGSC) concept. Both epistemic and aleatory uncertainties can be accounted for within the same phenomenological framework and maintenance can be accounted for in a coherent fashion. The framework accommodates the prospective impacts of various intervention strategies such as testing, maintenance, and refurbishment. The methodology is illustrated with several examples.

  19. A review and synthesis of late Pleistocene extinction modeling: progress delayed by mismatches between ecological realism, interpretation, and methodological transparency.

    Science.gov (United States)

    Yule, Jeffrey V; Fournier, Robert J; Jensen, Christopher X J; Yang, Jinyan

    2014-06-01

    Late Pleistocene extinctions occurred globally over a period of about 50,000 years, primarily affecting mammals of ≥44 kg body mass (i.e., megafauna), first in Australia, continuing in Eurasia and, finally, in the Americas. Polarized debate about the cause(s) of the extinctions centers on the role of climate change and anthropogenic factors (especially hunting). Since the late 1960s, investigators have developed mathematical models to simulate the ecological interactions that might have contributed to the extinctions. Here, we provide an overview of the various methodologies used and conclusions reached in the modeling literature, addressing both the strengths and weaknesses of modeling as an explanatory tool. Although late Pleistocene extinction models now provide a solid foundation for viable future work, we conclude, first, that single models offer less compelling support for their respective explanatory hypotheses than many realize; second, that disparities in methodology (both in terms of model parameterization and design) prevent meaningful comparison between models and, more generally, progress from model to model in increasing our understanding of these extinctions; and third, that recent models have been presented and possibly developed without sufficient regard for the transparency of design that facilitates scientific progress.

  20. A Methodology for Calculating EGS Electricity Generation Potential Based on the Gringarten Model for Heat Extraction From Fractured Rock

    Energy Technology Data Exchange (ETDEWEB)

    Augustine, Chad

    2017-05-01

    Existing methodologies for estimating the electricity generation potential of Enhanced Geothermal Systems (EGS) assume thermal recovery factors of 5% or less, resulting in relatively low volumetric electricity generation potentials for EGS reservoirs. This study proposes and develops a methodology for calculating EGS electricity generation potential based on the Gringarten conceptual model and analytical solution for heat extraction from fractured rock. The electricity generation potential of a cubic kilometer of rock as a function of temperature is calculated assuming limits on the allowed produced-water temperature decline and reservoir lifetime based on surface power plant constraints. The resulting estimates of EGS electricity generation potential can be one to nearly two orders of magnitude larger than those from existing methodologies. The flow per unit fracture surface area from the Gringarten solution is found to be a key term in describing the conceptual reservoir behavior. The methodology can be applied to aid in the design of EGS reservoirs by giving the minimum reservoir volume, fracture spacing, number of fractures, and flow requirements for a target reservoir power output. Limitations of the idealized model compared to actual reservoir performance and the implications for reservoir design are discussed.
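
    The core arithmetic behind such volumetric estimates, recoverable heat in place converted to average electric power over the plant lifetime, can be sketched as below. The recovery factor, rock properties and the rough temperature-dependent conversion efficiency are illustrative assumptions; the paper's contribution is precisely to derive the recovery factor from the Gringarten fracture-flow solution rather than assume it.

        def egs_potential_mwe(volume_m3, t_rock, t_reject=80.0, lifetime_yr=30.0,
                              recovery=0.14, rho_c=2.55e6):
            """Average electric power (MWe) obtainable from a rock volume.
            rho_c: volumetric heat capacity of rock, J/(m^3 K) - illustrative.
            recovery: thermal recovery factor; the paper derives values well
            above the ~0.05 assumed by earlier methodologies."""
            q_thermal = volume_m3 * rho_c * (t_rock - t_reject) * recovery  # J
            eta = 0.00052 * t_rock + 0.032     # rough conversion efficiency
            seconds = lifetime_yr * 365.25 * 24 * 3600
            return q_thermal * eta / seconds / 1e6

        # one cubic kilometre of 200 degC rock -> roughly 6 MWe over 30 years
        print(egs_potential_mwe(volume_m3=1e9, t_rock=200.0))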

  1. FEMA DFIRM Station Start

    Data.gov (United States)

    Minnesota Department of Natural Resources — This table contains information about station starting locations. These locations indicate the reference point that was used as the origin for distance measurements...

  2. Early Head Start Evaluation

    Data.gov (United States)

    U.S. Department of Health & Human Services — Longitudinal information from an evaluation where children were randomly assigned to Early Head Start or community services as usual;direct assessments and...

  3. Head Start Impact Study

    Data.gov (United States)

    U.S. Department of Health & Human Services — Nationally representative, longitudinal information from an evaluation where children were randomly assigned to Head Start or community services as usual;direct...

  4. A KBE genetic-causal cost modelling methodology for manufacturing cost contingency management

    NARCIS (Netherlands)

    Curran, R.; Gilmour, M.; McAlleean, C.; Kelly, P.

    2009-01-01

    The paper provides validated evidence of a robust methodology for the management of lean manufacturing cost contingency, with a particular focus on contingency regarding recurring work content. A truly concurrent engineering process is established by capturing a range of knowledge from the design, m

  5. Coupling 2D Finite Element Models and Circuit Equations Using a Bottom-Up Methodology

    Science.gov (United States)

    2002-11-01

    COUPLING 2D FINITE ELEMENT MODELS AND CIRCUIT EQUATIONS USING A BOTTOM-UP METHODOLOGY. E. Gómez, J. Roger-Folch, A. Gabaldón and A. Molina. Dpto. de Ingeniería Eléctrica, Universidad Polit..., and Dpto. de Ingeniería Eléctrica, ETSII, Universidad Politécnica de Valencia, PO Box 22012, 46071 Valencia, Spain. ABSTRACT: The

  7. The Alignment of CMC Language Learning Methodologies with the Bridge21 Model of 21C Learning

    Science.gov (United States)

    Bauer, Ciarán; Devitt, Ann; Tangney, Brendan

    2015-01-01

    This paper explores the intersection of learning methodologies to promote the development of 21st century skills with the use of Computer-Mediated Communication (CMC) tools to enhance language learning among adolescent learners. Today, technology offers a greater range of affordances in the teaching and learning of second languages while research…

  8. Getting started with Unity

    CERN Document Server

    Felicia, Patrick

    2013-01-01

    Getting Started with Unity is written in an easy-to-follow tutorial format. "Getting Started with Unity" is for 3D game developers who would like to learn how to use Unity3D and become familiar with its core features. This book is also suitable for intermediate users who would like to improve their skills. No prior knowledge of Unity3D is required.

  9. Rapid Dialogue Prototyping Methodology

    NARCIS (Netherlands)

    Bui Huu Trung, B.H.T.; Sojka, P.; Rajman, M.; Kopecek, I.; Melichar, M.; Pala, K.

    2004-01-01

    This paper is about the automated production of dialogue models. The goal is to propose and validate a methodology that allows the production of finalized dialogue models (i.e. dialogue models specific for given applications) in a few hours. The solution we propose for such a methodology, called the

  10. A multiscale approach to blast neurotrauma modeling:Part II: Methodology for inducing blast injury to in vitro models

    Directory of Open Access Journals (Sweden)

    Gwen B. Effgen

    2012-02-01

    Due to the prominent role of improvised explosive devices (IEDs) in the wounding patterns of U.S. war-fighters in Iraq and Afghanistan, blast injury has risen to a new level of importance and is recognized to be a major cause of injuries to the brain. However, an injury risk-function for microscopic, macroscopic, behavioral, and neurological deficits has yet to be defined. While operational blast injuries can be very complex and thus difficult to analyze, a simplified blast injury model would facilitate studies correlating biological outcomes with blast biomechanics to define tolerance criteria. Blast-induced traumatic brain injury (bTBI) results from the translation of a shock wave in air, such as that produced by an IED, into a pressure wave within the skull-brain complex. Our blast injury methodology recapitulates this phenomenon in vitro, allowing for control of the injury biomechanics via a compressed-gas shock tube used in conjunction with a custom-designed, fluid-filled receiver that contains the living culture. The receiver converts the air shock wave into a fast-rising pressure transient with minimal reflections, mimicking the intracranial pressure history in blast. We have developed an organotypic hippocampal slice culture model that exhibits cell death when exposed to a 530 ± 17.7 kPa peak overpressure with a 1.026 ± 0.017 ms duration and 190 ± 10.7 kPa-ms impulse in air. We have also injured a simplified in vitro model of the blood-brain barrier, which exhibits disrupted integrity immediately following exposure to a 581 ± 10.0 kPa peak overpressure with a 1.067 ± 0.006 ms duration and 222 ± 6.9 kPa-ms impulse in air. To better prevent and treat bTBI, both the initiating biomechanics and the ensuing pathobiology must be understood in greater detail. A well-characterized in vitro model of bTBI, in conjunction with animal models, will be a powerful tool for developing strategies to mitigate the risks of bTBI.

  11. Psychiatric Advance Directives: Getting Started

    Science.gov (United States)

    Psychiatric advance directives (PADs) are relatively new legal instruments ...

  12. Physiologically Based Pharmacokinetic Modeling: Methodology, Applications, and Limitations with a Focus on Its Role in Pediatric Drug Development

    Directory of Open Access Journals (Sweden)

    Feras Khalil

    2011-01-01

    The concept of physiologically based pharmacokinetic (PBPK) modeling was introduced years ago, but for a long time it was not widely practiced. However, interest in and implementation of this modeling technique have grown, as evidenced by the increasing number of publications in this field. This paper briefly demonstrates the methodology, applications, and limitations of PBPK modeling, with special attention given to its role in pediatric drug development; some examples are described in detail. Although PBPK models do have limitations, the potential benefit of the PBPK modeling technique is huge. PBPK models can be applied to investigate drug pharmacokinetics under different physiological and pathological conditions or in different age groups, to support decision-making during drug discovery and, perhaps most importantly, to provide data that can save time and resources, especially in early drug development phases and in pediatric clinical trials, and potentially to help clinical trials become more "confirmatory" rather than "exploratory".
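
    For readers unfamiliar with the technique, a PBPK model is in essence a set of mass-balance ODEs over physiologically meaningful compartments. Below is a minimal flow-limited sketch (blood, liver, rest-of-body) with purely illustrative parameter values; real PBPK models add many more tissues and, for pediatrics, age-dependent physiology.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Illustrative placeholders, not real physiological or drug parameters.
        Q_li, Q_rb = 90.0, 210.0            # tissue blood flows (L/h)
        V_bl, V_li, V_rb = 5.0, 1.8, 60.0   # compartment volumes (L)
        P_li, P_rb = 2.0, 0.8               # tissue:blood partition coefficients
        CL_int = 30.0                       # hepatic intrinsic clearance (L/h)

        def rhs(t, y):
            c_bl, c_li, c_rb = y            # concentrations (mg/L)
            dc_bl = (Q_li * (c_li / P_li - c_bl)
                     + Q_rb * (c_rb / P_rb - c_bl)) / V_bl
            dc_li = (Q_li * (c_bl - c_li / P_li)
                     - CL_int * c_li / P_li) / V_li
            dc_rb = Q_rb * (c_bl - c_rb / P_rb) / V_rb
            return [dc_bl, dc_li, dc_rb]

        # IV bolus giving an initial blood concentration of 10 mg/L, 24 h horizon
        sol = solve_ivp(rhs, (0.0, 24.0), [10.0, 0.0, 0.0], max_step=0.1)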

  13. Methodological challenges of optical tweezers-based X-ray fluorescence imaging of biological model organisms at synchrotron facilities.

    Science.gov (United States)

    Vergucht, Eva; Brans, Toon; Beunis, Filip; Garrevoet, Jan; Bauters, Stephen; De Rijcke, Maarten; Deruytter, David; Janssen, Colin; Riekel, Christian; Burghammer, Manfred; Vincze, Laszlo

    2015-07-01

    Recently, a radically new synchrotron radiation-based elemental imaging approach for the analysis of biological model organisms and single cells in their natural in vivo state was introduced. The methodology combines optical tweezers (OT) technology for non-contact laser-based sample manipulation with synchrotron radiation confocal X-ray fluorescence (XRF) microimaging for the first time at ESRF-ID13. The optical manipulation possibilities and limitations of biological model organisms, the OT setup developments for XRF imaging and the confocal XRF-related challenges are reported. In general, the applicability of the OT-based setup is extended with the aim of introducing the OT XRF methodology in all research fields where highly sensitive in vivo multi-elemental analysis is of relevance at the (sub)micrometre spatial resolution level.

  14. Combined deterministic and stochastic approaches for modeling the evolution of food products along the cold chain. Part I: Methodology

    OpenAIRE

    Flick, D.; Hoang, H.M.; Alvarez, G.; Laguerre, O.

    2012-01-01

    Many deterministic models have been developed to describe heat transfer in the cold chain and to predict the thermal and microbial evolution of food products. However, different product items will have different evolutions because of the variability of the logistic supply chain, equipment design, operating conditions, etc. The objective of this study is to propose a general methodology to predict the evolution of food products and its variability along a cold chain. This evolution is chara...

  15. Methodology to Model and Understand the Complexities of Social, Economic,and Governance Interactions for Regional Assessment in Kenya

    Science.gov (United States)

    2012-07-01

    ...capacity quickly enough to prevent the outbreak of hostility. ...1997-1998. Not only did the flooding destroy bridges, roads, and crops, but it also created epidemics of cholera and malaria (CIA, 2011). In a large...

  16. Development of a cross-section methodology and a real-time core model for VVER-1000 simulator application

    Energy Technology Data Exchange (ETDEWEB)

    Georgieva, Emiliya Lyudmilova

    2016-06-06

    The novel academic contributions are summarized as follows. A) A cross-section modelling methodology and a cycle-specific cross-section update procedure are developed to meet fidelity requirements applicable to a cycle-specific reactor core simulation, as well as particular customer needs and practices supporting VVER-1000 operation and safety. B) A real-time version of the Nodal Expansion Method code is developed and implemented into Kozloduy 6 full-scope replica control room simulator.

  17. Current applications and future directions for the CDISC Operational Data Model standard: A methodological review.

    Science.gov (United States)

    Hume, Sam; Aerts, Jozef; Sarnikar, Surendra; Huser, Vojtech

    2016-04-01

    In order to further advance research and development on the Clinical Data Interchange Standards Consortium (CDISC) Operational Data Model (ODM) standard, the existing research must be well understood. This paper presents a methodological review of the ODM literature. Specifically, it develops a classification schema to categorize the ODM literature according to how the standard has been applied within the clinical research data lifecycle. The paper suggests areas for future research and development that address ODM's limitations and capitalize on its strengths to support new trends in clinical research informatics. A systematic scan of the following databases was performed: (1) ABI/Inform, (2) ACM Digital, (3) AIS eLibrary, (4) Europe Central PubMed, (5) Google Scholar, (6) IEEE Xplore, (7) PubMed, and (8) ScienceDirect. A Web of Science citation analysis was also performed. The search term used on all databases was "CDISC ODM." The two primary inclusion criteria were: (1) the research must examine the use of ODM as an information system solution component, or (2) the research must critically evaluate ODM against a stated solution usage scenario. Out of 2686 articles identified, 266 were included in a title-level review, resulting in 183 articles. An abstract review followed, resulting in 121 remaining articles; after a full-text scan, 69 articles met the inclusion criteria. As the demand for interoperability has increased, ODM has shown remarkable flexibility and has been extended to cover a broad range of data and metadata requirements that reach well beyond ODM's original use cases. This flexibility has yielded research literature covering a diverse array of topic areas. A classification schema reflecting the use of ODM within the clinical research data lifecycle was created to provide a categorized and consolidated view of the ODM literature. The elements of the framework include: (1) EDC (Electronic Data Capture) and EHR (Electronic Health Record

  18. Generalized Characterization Methodology for Performance Modelling of Lithium-Ion Batteries

    DEFF Research Database (Denmark)

    Stroe, Daniel Loan; Swierczynski, Maciej Jozef; Stroe, Ana-Irina

    2016-01-01

    Lithium-ion (Li-ion) batteries are complex energy storage devices whose performance behavior is highly dependent on the operating conditions (i.e., temperature, load current, and state-of-charge (SOC)). Thus, in order to evaluate their techno-economic viability for a certain application, detailed information about Li-ion battery performance behavior becomes necessary. This paper proposes a comprehensive seven-step methodology for laboratory characterization of Li-ion batteries, in which the battery's performance parameters (i.e., capacity, open-circuit voltage (OCV), and impedance) are determined. A performance model of the studied Li-ion battery is developed and its accuracy is successfully verified (maximum error lower than 5% and a mean error below 8.5 mV) for various load profiles (including a real application profile), thus validating the proposed seven-step characterization methodology.
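
    To make the end product of such a characterization concrete, the sketch below shows a minimal performance model of the kind the seven-step procedure parameterizes: an OCV(SOC) lookup from the OCV test plus an ohmic internal resistance from the impedance test. All numeric values are illustrative placeholders, not the paper's data.

        import numpy as np

        # Characterization results (illustrative placeholders)
        soc_grid = np.linspace(0.0, 1.0, 11)
        ocv_grid = np.array([3.00, 3.35, 3.45, 3.55, 3.60, 3.65,
                             3.70, 3.80, 3.90, 4.00, 4.15])   # V, OCV test
        R0 = 0.025                                            # ohm, impedance test
        capacity_ah = 2.5                                     # Ah, capacity test

        def terminal_voltage(soc, current_a):
            # discharge current positive; OCV interpolated from measured curve
            return np.interp(soc, soc_grid, ocv_grid) - R0 * current_a

        def step_soc(soc, current_a, dt_s):
            # Coulomb counting over one time step
            return soc - current_a * dt_s / (capacity_ah * 3600.0)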

  19. Methodology for Analysis, Modeling and Simulation of Airport Gate-waiting Delays

    Science.gov (United States)

    Wang, Jianfeng

    This dissertation presents methodologies to estimate gate-waiting delays from historical data, to identify the functional causes of gate-waiting delay at major U.S. airports, and to evaluate the impact of gate operation disruptions and mitigation strategies on gate-waiting delay. Airport gates are a source of congestion in the air transportation system. When an arriving flight cannot pull into its gate, the delay it experiences is called gate-waiting delay. Possible reasons for gate-waiting delay include: the gate is occupied, gate staff or equipment is unavailable, the weather prevents the use of the gate (e.g. lightning), or the airline has a preferred gate assignment. Gate-waiting delays potentially stay with the aircraft throughout the day (unless they are absorbed), adding costs to passengers and the airlines. As the volume of flights increases, ensuring that airport gates do not become a choke point of the system is critical. The first part of the dissertation presents a methodology for estimating gate-waiting delays from historical, publicly available sources. Analysis of gate-waiting delays at major U.S. airports in the summer of 2007 identifies the following. (i) Gate-waiting delay is not a significant problem on the majority of days; however, the worst delay days (e.g. 4% of the days at LGA) are extreme outliers. (ii) The Atlanta International Airport (ATL), the John F. Kennedy International Airport (JFK), the Dallas/Fort Worth International Airport (DFW) and the Philadelphia International Airport (PHL) experience the highest gate-waiting delays among major U.S. airports. (iii) There is a significant gate-waiting-delay difference between airlines due to disproportional gate allocation. (iv) Gate-waiting delay is sensitive to time of day and schedule peaks. According to basic principles of queueing theory, gate-waiting delay can be attributed to over-scheduling, a higher-than-scheduled arrival rate, longer-than-scheduled gate-occupancy time, and reduced gate

  20. A methodology to ensure local mass conservation for porous media models under finite element formulations based on convex optimization

    Science.gov (United States)

    Chang, J.; Nakshatrala, K.

    2014-12-01

    It is well known that standard finite element methods, in general, do not satisfy element-wise mass/species balance properties. It is, however, desirable to have an element-wise mass balance property in subsurface modeling. Several studies over the years have aimed to overcome this drawback of finite element formulations. Currently, a post-processing optimization-based methodology is commonly employed to recover local mass balance for porous media models. However, such a post-processing technique does not respect the underlying variational structure that the finite element formulation may enjoy. Motivated by this, a consistent methodology to satisfy element-wise local mass balance for porous media models is constructed using convex optimization techniques. The assembled system of global equations is recast as a quadratic programming problem subject to bounded equality constraints that ensure conservation at the element level. The proposed methodology can be applied to any computational mesh and to any non-locally conservative nodal-based finite element method. Herein, we integrate the proposed methodology into the framework of the classical mixed Galerkin formulation using Taylor-Hood elements and the least-squares finite element formulation. Our numerical studies include computational cost, numerical convergence, and comparison with popular methods. In particular, it is shown that the accuracy of the solutions is comparable with that of several popular locally conservative finite element formulations, such as the lowest-order Raviart-Thomas formulation. We believe the proposed optimization-based approach is a viable way to preserve local mass balance on general computational grids and is amenable to large-scale parallel implementation.
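
    The post-processing variant mentioned above reduces to a quadratic program: project the computed solution onto the nearest one satisfying one mass-balance equality per element. A sketch of that projection via the KKT system follows, with a generic constraint matrix standing in for the element balance operator; the paper's variationally consistent formulation is not reproduced here.

        import numpy as np

        def conservative_projection(x0, A, b):
            """Nearest point to x0 (2-norm) satisfying A x = b:
                min 0.5*||x - x0||^2   s.t.   A x = b,
            solved through the KKT system
                [ I  A^T ] [x ]   [x0]
                [ A  0   ] [mu] = [b ].
            A encodes one mass-balance equation per element."""
            n, m = x0.size, b.size
            K = np.block([[np.eye(n), A.T],
                          [A, np.zeros((m, m))]])
            rhs = np.concatenate([x0, b])
            return np.linalg.solve(K, rhs)[:n]

        # toy check: force two "element" sums to prescribed values
        A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
        x = conservative_projection(np.array([0.3, 0.5, 0.4]), A,
                                    np.array([1.0, 1.0]))
        print(x, A @ x)   # A @ x equals [1, 1] up to round-off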

  1. A methodology for a quantitative interpretation of DGGE with the help of mathematical modelling: application in biohydrogen production.

    Science.gov (United States)

    Tapia, Estela; Donoso-Bravo, Andres; Cabrol, Léa; Alves, Madalena; Pereira, Alcina; Rapaport, Alain; Ruiz-Filippi, Gonzalo

    2014-01-01

    Molecular biology techniques provide valuable insights into the investigation of microbial dynamics and evolution. Denaturing gradient gel electrophoresis (DGGE) analysis is one of the most popular methods used in bioprocess assessment. Most anaerobic digestion models consider several microbial populations as state variables. However, the difficulty of measuring individual species concentrations may cause inaccurate model predictions. The integration of microbial data and ecosystem modelling is currently a challenging issue for improved system control. A novel procedure that combines common experimental measurements, DGGE, and image analysis is presented in this study in order to provide a preliminary estimation of the actual concentration of the dominant bacterial ribotypes in a bioreactor, for further use as a variable in mathematical modelling of the bioprocess. This approach was applied during the start-up of a continuous anaerobic bioreactor for hydrogen production. The experimental concentration data were used to determine the kinetic parameters of each species, using a multi-species chemostat model. The model was able to reproduce the global trend of substrate and biomass concentrations during the reactor start-up, and predicted in an acceptable way the evolution of each ribotype concentration, properly depicting the selection and extinction of specific ribotypes.
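
    A multi-species chemostat model of the kind fitted to the ribotype concentrations can be written as Monod-kinetics ODEs; a two-species sketch follows, with all kinetic parameters as illustrative placeholders to be replaced by values estimated from the DGGE-derived concentration data.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Two competing ribotypes in a chemostat with Monod growth kinetics.
        # All kinetic parameters below are illustrative placeholders.
        mu_max = np.array([0.35, 0.25])    # maximum growth rates (1/h)
        Ks     = np.array([0.60, 0.15])    # half-saturation constants (g/L)
        Y      = np.array([0.40, 0.50])    # biomass yields (g/g substrate)
        D, s_in = 0.12, 10.0               # dilution rate (1/h), feed (g/L)

        def chemostat(t, y):
            s, x1, x2 = y
            mu = mu_max * s / (Ks + s)     # Monod specific growth rates
            ds = D * (s_in - s) - mu[0] * x1 / Y[0] - mu[1] * x2 / Y[1]
            return [ds, (mu[0] - D) * x1, (mu[1] - D) * x2]

        sol = solve_ivp(chemostat, (0.0, 200.0), [s_in, 0.05, 0.05], max_step=1.0)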

  2. A methodology for determining interactions in probabilistic safety assessment models by varying one parameter at a time.

    Science.gov (United States)

    Borgonovo, Emanuele

    2010-03-01

    In risk analysis problems, the decision-making process is supported by the utilization of quantitative models. Assessing the relevance of interactions is essential information in the interpretation of model results. With such knowledge, analysts and decision makers are able to understand whether risk is apportioned by individual factor contributions or by their joint action. However, models are often large, requiring a high number of input parameters, and complex, with individual model runs being time consuming. Computational complexity leads analysts to utilize one-parameter-at-a-time sensitivity methods, which prevent one from assessing interactions. In this work, we illustrate a methodology to quantify interactions in probabilistic safety assessment (PSA) models by varying one parameter at a time. The method is based on a property of the functional ANOVA decomposition of a finite change that allows one to determine exactly the relevance of factors when considered individually or together with their interactions with all other factors. A set of test cases illustrates the technique. We apply the methodology to the analysis of the core damage frequency of the large loss-of-coolant accident of a nuclear reactor. Numerical results reveal the nonadditive model structure, allow the relevance of interactions to be quantified, and identify the direction of change (increase or decrease in risk) implied by individual factor variations and by their cooperation.
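
    For orientation, the finite-change decomposition on which such a method rests can be sketched as follows (the notation is ours, a summary of the functional ANOVA property rather than a reproduction of the paper's equations). Writing y = f(x) and letting x^0 and x^1 denote the base and shifted parameter points,

        \Delta y = f(x^1) - f(x^0) = \sum_i \Delta_i f + \sum_{i<j} \Delta_{ij} f + \dots + \Delta_{12\dots n} f,
        \Delta_i f = f(x^0_1, \dots, x^1_i, \dots, x^0_n) - f(x^0),
        \Delta_{ij} f = f(\dots, x^1_i, \dots, x^1_j, \dots) - \Delta_i f - \Delta_j f - f(x^0).

    The total effect of factor i is obtained from one further one-parameter-at-a-time run in which x_i is reset to its base value,

        \phi_i^{tot} = f(x^1) - f(x^1_1, \dots, x^0_i, \dots, x^1_n),

    so that \phi_i^{tot} - \Delta_i f isolates the contribution of all interactions involving factor i; two one-at-a-time sweeps (one from x^0 and one from x^1) therefore suffice to quantify interactions.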

  3. Filling the gap between geophysics and geotechnics in landslide process understanding: a data fusion methodology to integrate multi-source information in hydro-mechanical modeling

    Science.gov (United States)

    Bernadie, S.; Gance, J.; Grandjean, G.; Malet, J.

    2013-12-01

    Population increase and the rising issue of climate change affect the long-term stability of mountain slopes. So far, it is not possible in all cases to assess the conditions for failure, reactivation or rapid surges of slopes. The main reason identified by Van Asch et al. (2007) is the excessive conceptualization of the slope in the models. Therefore, to improve our forecasting capability, local information such as the local slope geometry, the soil material variability, hydrological processes and the presence of fissures is of first importance. Geophysical imaging, combined with geotechnical tests, is a suitable tool to obtain such detailed information. The development of near-surface geophysics in the last three decades encourages the use of multiple geophysical methods for slope investigations. However, fusion of real data is little used in this domain, and a gap still exists between the data processed by geophysicists and the slope hydro-mechanical models developed by geotechnical engineers. Starting from this statement, we propose a methodological flowchart for multi-source geophysical and geotechnical data integration to construct a slope hydro-mechanical model of a selected profile at the Super-Sauze landslide. Based on data fusion concepts, the methodology aims at integrating the various data in order to create a geological and a geotechnical model of the slope profile. The input data consist of seismic and geoelectrical tomographies (which give access to spatially distributed information on the soil physical state) supplemented by punctual geotechnical tests (dynamic penetration tests). The tomograms and the geotechnical tests are combined into a single interpreted model characterized by different geotechnical domains. We use the fuzzy logic clustering method in order to take into account the uncertainty coming from each input data set. Then an unstructured finite element mesh, adapted to the resolution of the different input data and

  4. Lean start-up

    DEFF Research Database (Denmark)

    Rasmussen, Erik Stavnsager; Tanev, Stoyan

    2016-01-01

    The risk of launching new products and starting new firms is known to be extremely high. The Lean Start-up approach is a way of reducing these risks and enhancing the chances for success by validating the products and services in the market with customers before launching them in full scale. The main...... point is to develop a Minimum Viable Product that can be tested by potential customers and then pivot the idea if necessary around these customer evaluations. This iterative process goes through a number of stages with the purpose of validating the customers’ problems, the suggested solution......

  5. Induction motor starting current

    Energy Technology Data Exchange (ETDEWEB)

    Arneaud, J.M.; Langman, R.A. [Tasmania Univ., Hobart, TAS (Australia)

    1995-12-31

    Large errors may occur if leakage path saturation is neglected when reduced-voltage test results are used to predict the direct-on-line starting current of induction motors. The results of applying three existing and two new methods for starting current prediction are compared with test data from 52 motors. A quantitative assessment is made of the probable reduction in error that would be achieved by increasing the number of available sets of reduced-voltage, locked rotor test results or by including slot design data. Guidelines are given for selecting an appropriate predictive method. (author). 4 tabs., 1 fig., 6 refs.

  6. Towards A Model-Based Prognostics Methodology for Electrolytic Capacitors: A Case Study Based on Electrical Overstress Accelerated Aging

    Science.gov (United States)

    Celaya, Jose R.; Kulkarni, Chetan S.; Biswas, Gautam; Goebel, Kai

    2012-01-01

    A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical degradation model. Electrolytic capacitors are used in several applications ranging from power supplies on critical avionics equipment to power drivers for electro-mechanical actuators. These devices are known for their comparatively low reliability and, given their criticality in electronics subsystems, they are a good candidate for component-level prognostics and health management. Prognostics provides a way to assess the remaining useful life of a capacitor based on its current state of health and its anticipated future usage and operational conditions. We also present experimental results of an accelerated aging test under electrical stresses. The data obtained in this test form the basis for a remaining life prediction algorithm in which a model of the degradation process is suggested. This preliminary remaining life prediction algorithm serves as a demonstration of how prognostics methodologies could be used for electrolytic capacitors. In addition, the use of degradation progression data from accelerated aging provides an avenue for validating applications of the Kalman filter based prognostics methods typically used for remaining useful life predictions in other applications.
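
    For illustration, a scalar Kalman filter over an assumed linear degradation model can track a capacitor's health and extrapolate its remaining useful life; the state model, noise levels and failure threshold below are synthetic placeholders, not the paper's empirical degradation model:

    import numpy as np

    # State x = [C, d]: normalized capacitance and degradation rate
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
    H = np.array([[1.0, 0.0]])               # only capacitance is measured
    Q = np.diag([1e-6, 1e-8])                # process noise (assumed)
    R = np.array([[1e-4]])                   # measurement noise (assumed)

    x = np.array([1.0, -1e-3])               # initial C0 = 1, guessed rate
    P = np.eye(2) * 1e-2
    threshold = 0.8                          # fail at 20% capacitance loss

    def kf_step(x, P, z):
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        return x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P

    rng = np.random.default_rng(0)
    for k in range(1, 101):                  # synthetic aging measurements
        z = np.array([1.0 - 1.2e-3 * k * dt + rng.normal(0.0, 1e-2)])
        x, P = kf_step(x, P, z)

    rul = (threshold - x[0]) / (x[1] * dt)   # extrapolate to threshold
    print(f"estimated RUL: about {rul:.0f} aging steps")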

  7. Model-based scale-up methodology for aerobic fed-batch bioprocesses: application to polyhydroxybutyrate (PHB) production.

    Science.gov (United States)

    Monsalve-Bravo, Gloria Milena; Garelli, Fabricio; Mozumder, Md Salatul Islam; Alvarez, Hernan; De Battista, Hernan

    2015-06-01

    This work presents a general model-based methodology to scale up fed-batch bioprocesses. The idea behind this approach is to establish a dynamics hierarchy, based on a model of the process, that allows the designer to determine the proper scale factors as well as the point of the fed-batch at which the process should be scaled up. Here, concepts and tools of linear control theory, such as the singular value decomposition of the Hankel matrix, are exploited in the context of process design. The proposed scale-up methodology is first described in a general bioprocess framework, highlighting its main features, key variables and parameters. Then, it is applied to a polyhydroxybutyrate (PHB) fed-batch bioreactor and compared with three empirical criteria that are traditionally employed to determine the scale factors of these processes, showing the usefulness and distinctive features of this proposal. Moreover, this methodology provides theoretical support to a frequently used empirical rule: scale up aerobic bioreactors at constant volumetric oxygen transfer coefficient. Finally, similar process dynamic behavior and PHB production set at the laboratory scale are predicted at the new operating scale, while it is also determined that it is rarely possible to reproduce similar dynamic behavior of the bioreactor using empirical scale-up criteria.
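
    The dynamics hierarchy built from the singular value decomposition of the Hankel matrix can be illustrated on a small linear system; the state-space matrices below are invented for illustration and do not represent the PHB bioreactor model:

    import numpy as np
    from scipy.linalg import hankel

    # Markov parameters (impulse response) of an assumed discrete-time
    # two-state process model
    A = np.array([[0.9, 0.1], [0.0, 0.5]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    h = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(1, 21)]

    # Singular values of the Hankel matrix rank the process dynamics from
    # dominant to negligible - the "dynamics hierarchy" used for scale-up
    Hk = hankel(h[:10], h[9:])
    sv = np.linalg.svd(Hk, compute_uv=False)
    print(np.round(sv / sv[0], 4))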

  8. Forward-start options pricing based on EGARCH model

    Institute of Scientific and Technical Information of China (English)

    王献东

    2012-01-01

    By means of measure transformation and the martingale method of option pricing, the pricing formulas for European forward-start options are obtained through a simple mathematical derivation. An EGARCH model of the logarithmic return volatility is then constructed from a sample of the daily closing prices of Aerospace Power stock (600343) over the 2010 trading days. The Eviews software is used to estimate the parameters and obtain the volatility equation, an out-of-sample forecast of the volatility is conducted, and an option price is calculated that is more reasonable than one based on historical volatility. Finally, a numerical example of pricing a forward-start call option is given.
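
    As a sketch of the pricing step, the textbook closed form for a European forward-start call (strike fixed at time t1 as a fraction k of the then-current spot) under Black-Scholes dynamics can be coded directly; this is not necessarily the exact formula derived in the paper, and in its spirit the volatility input would come from the EGARCH out-of-sample forecast:

    import numpy as np
    from scipy.stats import norm

    def forward_start_call(S0, k, t1, T, r, sigma, q=0.0):
        """Forward-start call: strike set at t1 as k * S(t1), expiry T."""
        tau = T - t1
        d1 = (np.log(1.0 / k) + (r - q + 0.5 * sigma**2) * tau) \
             / (sigma * np.sqrt(tau))
        d2 = d1 - sigma * np.sqrt(tau)
        return S0 * np.exp(-q * t1) * (np.exp(-q * tau) * norm.cdf(d1)
                                       - k * np.exp(-r * tau) * norm.cdf(d2))

    # Strike fixed in 3 months at 100% of spot, option expires in 1 year
    print(forward_start_call(S0=10.0, k=1.0, t1=0.25, T=1.0,
                             r=0.03, sigma=0.30))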

  9. Assessment of potential improvements on regional air quality modelling related with implementation of a detailed methodology for traffic emission estimation.

    Science.gov (United States)

    Coelho, Margarida C; Fontes, Tânia; Bandeira, Jorge M; Pereira, Sérgio R; Tchepel, Oxana; Dias, Daniela; Sá, Elisa; Amorim, Jorge H; Borrego, Carlos

    2014-02-01

    The accuracy and precision of air quality models are usually associated with the emission inventories. Thus, in order to assess whether there are any improvements in regional air quality simulations when using a detailed methodology for road traffic emission estimation, a regional air quality modelling system was applied. For this purpose, a combination of top-down and bottom-up approaches was used to build an emission inventory. To estimate the road traffic emissions, the bottom-up approach was applied using an instantaneous emission model (Vehicle Specific Power - VSP methodology) and an average emission model (CORINAIR methodology), while for the remaining activity sectors the top-down approach was used. The Weather Research and Forecasting (WRF) and Comprehensive Air quality (CAMx) models were selected to assess two emission scenarios: (i) scenario 1, which includes the emissions from the top-down approach; and (ii) scenario 2, which includes the emissions resulting from the integration of the top-down and bottom-up approaches. The results show higher emission values of PM10, NOx and HC for scenario 1, and the inverse behaviour for CO. The highest differences between these scenarios were observed for PM10 and HC, about 55% and 75% higher (respectively for each pollutant) than the emissions provided by scenario 2. Scenario 2 gives better results for PM10, CO and O3; for NO2 concentrations, better results were obtained with scenario 1. Thus, the results suggest that with the combination of the top-down and bottom-up approaches to emission estimation, several improvements in the air quality results can be achieved, mainly for PM10, CO and O3.

  10. ATLAS starts moving in

    CERN Multimedia

    2004-01-01

    The first large active detector component was lowered into the ATLAS cavern on 1 March. It consisted of the 8 modules forming the lower part of the central barrel of the tile hadronic calorimeter. The work of assembling the barrel, which comprises 64 modules, started the following day.

  11. A New Start

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Goodwill to the Islamic world and willingness to mend ties highlight Barack Obama’s new Middle East policy. U.S. President Barack Obama started to implement his new Middle East policy soon after he assumed office. He paid a

  12. Getting started with UDOO

    CERN Document Server

    Palazzetti, Emanuele

    2015-01-01

    If you are an Android developer who wants to learn how to use UDOO to build Android applications that are capable of interacting with their surrounding environment, then this book is ideal for you. Learning UDOO is the next great step to start building your first real-world prototypes powered by the Android operating system.

  13. Start-Up Capital

    NARCIS (Netherlands)

    I. Verheul (Ingrid); A.R. Thurik (Roy)

    2000-01-01

    textabstractFemale and male entrepreneurs differ in the way they finance their businesses. This can be attributed to the type of business and the type of management and experience (indirect effect). Female start-ups may also experience other barriers based upon discriminatory effects (direct

  14. A general gridding, discretization, and coarsening methodology for modeling flow in porous formations with discrete geological features

    Science.gov (United States)

    Karimi-Fard, M.; Durlofsky, L. J.

    2016-10-01

    A comprehensive framework for modeling flow in porous media containing thin, discrete features, which could be high-permeability fractures or low-permeability deformation bands, is presented. The key steps of the methodology are mesh generation, fine-grid discretization, upscaling, and coarse-grid discretization. Our specialized gridding technique combines a set of intersecting triangulated surfaces by constructing approximate intersections using existing edges. This procedure creates a conforming mesh of all surfaces, which defines the internal boundaries for the volumetric mesh. The flow equations are discretized on this conforming fine mesh using an optimized two-point flux finite-volume approximation. The resulting discrete model is represented by a list of control-volumes with associated positions and pore-volumes, and a list of cell-to-cell connections with associated transmissibilities. Coarse models are then constructed by the aggregation of fine-grid cells, and the transmissibilities between adjacent coarse cells are obtained using flow-based upscaling procedures. Through appropriate computation of fracture-matrix transmissibilities, a dual-continuum representation is obtained on the coarse scale in regions with connected fracture networks. The fine and coarse discrete models generated within the framework are compatible with any connectivity-based simulator. The applicability of the methodology is illustrated for several two- and three-dimensional examples. In particular, we consider gas production from naturally fractured low-permeability formations, and transport through complex fracture networks. In all cases, highly accurate solutions are obtained with significant model reduction.
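
    The cell/connection-list representation lends itself to a compact sketch: assuming the usual two-point flux construction, the transmissibility of a cell-to-cell connection is the harmonic combination of the two half-transmissibilities k*A/d. The values below (a matrix cell against a thin high-permeability fracture cell) are illustrative only:

    def half_transmissibility(perm, area, dist):
        """Half-transmissibility of one cell toward a shared face:
        T_half = k * A / d (two-point flux approximation)."""
        return perm * area / dist

    def connection_transmissibility(t1, t2):
        """Harmonic combination of the two half-transmissibilities, as
        stored in a cell-to-cell connection list."""
        return t1 * t2 / (t1 + t2)

    # Illustrative: matrix permeability 1e-15 m^2, fracture 1e-12 m^2,
    # shared face area 100 m^2, half-distances 5 m and 5 mm
    T_matrix = half_transmissibility(1e-15, 100.0, 5.0)
    T_frac = half_transmissibility(1e-12, 100.0, 0.005)
    print(connection_transmissibility(T_matrix, T_frac))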

  15. A methodology for linking 2D overland flow models with the sewer network model SWMM 5.1 based on dynamic link libraries.

    Science.gov (United States)

    Leandro, Jorge; Martins, Ricardo

    2016-01-01

    Pluvial flooding in urban areas is characterized by a gradually varying inundation process caused by surcharge of the sewer manholes. Urban flood models therefore need to simulate the interaction between the sewer network and the overland flow in order to accurately predict flood inundation extents. In this work we present a methodology for linking 2D overland flow models with the storm sewer model SWMM 5. SWMM 5 is a well-known free open-source code originally developed in 1971. The latest major release saw its structure re-written in C++, allowing it to be compiled as a command line executable or called through a series of functions exposed by a dynamic link library (DLL). The methodology developed herein is written inside the same DLL in C++, and is able to simulate the bi-directional interaction between both models during simulation. Validation is done in a real case study with an existing urban flood coupled model. The novelty herein is that the new methodology can be added to SWMM without the need for editing SWMM's original code. Furthermore, it is directly applicable to other coupled overland flow models aiming to use SWMM 5 as the sewer network model.
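
    A hedged sketch of the control loop such a coupling rests on, in Python via ctypes: the entry points below (swmm_open, swmm_start, swmm_step, swmm_end, swmm_report, swmm_close) follow the public SWMM 5 engine interface, the file and library names are placeholders, and the comment marks where a coupled 2D overland-flow model would exchange data; the paper's actual bi-directional linking is implemented in C++ inside the DLL itself:

    import ctypes

    # Load the SWMM 5 engine; library name/path is platform-dependent
    swmm = ctypes.CDLL("swmm5.dll")          # e.g. "libswmm5.so" on Linux

    err = swmm.swmm_open(b"model.inp", b"model.rpt", b"model.out")
    if err == 0:
        swmm.swmm_start(1)                   # 1 = save results
        elapsed = ctypes.c_double(0.0)
        while True:
            err = swmm.swmm_step(ctypes.byref(elapsed))  # one routing step
            # <- exchange manhole discharges/water levels with the 2D
            #    overland-flow model here, every time step
            if err != 0 or elapsed.value <= 0.0:         # error or end
                break
        swmm.swmm_end()
        swmm.swmm_report()
    swmm.swmm_close()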

  16. Methodology to evaluate the performance of simulation models for alternative compiler and operating system configurations

    Science.gov (United States)

    Simulation modelers increasingly require greater flexibility for model implementation on diverse operating systems, and they demand high computational speed for efficient iterative simulations. Additionally, model users may differ in preference for proprietary versus open-source software environment...

  17. Random forest methodology for model-based recursive partitioning: the mobForest package for R

    OpenAIRE

    Garge, Nikhil R; Bobashev, Georgiy; Eggleston, Barry

    2013-01-01

    Background Recursive partitioning is a non-parametric modeling technique, widely used in regression and classification problems. Model-based recursive partitioning is used to identify groups of observations with similar values of parameters of the model of interest. The mob() function in the party package in R implements the model-based recursive partitioning method. This method produces predictions based on single-tree models. Predictions obtained through single-tree models are very sensitive to...

  18. A methodology to urban air quality assessment during large time periods of winter using computational fluid dynamic models

    Science.gov (United States)

    Parra, M. A.; Santiago, J. L.; Martín, F.; Martilli, A.; Santamaría, J. M.

    2010-06-01

    The representativeness of point measurements in urban areas is limited due to the strong heterogeneity of the atmospheric flows in cities. To get information on air quality in the gaps between measurement points, and to have a 3D field of pollutant concentration, Computational Fluid Dynamic (CFD) models can be used. However, unsteady simulations during time periods of the order of months, often required for regulatory purposes, are not possible for computational reasons. The main objective of this study is to develop a methodology to evaluate the air quality in a real urban area during large time periods by means of steady CFD simulations. One steady simulation was performed for each inlet wind direction, and factors like the number of cars inside each street, the length of streets and the wind speed and direction were taken into account to compute the pollutant concentration. This approach is only valid in wintertime, when the pollutant concentrations are less affected by atmospheric chemistry. A model based on the steady-state Reynolds-Averaged Navier-Stokes equations (RANS) and the standard k-ɛ turbulence model was used to simulate a set of 16 different inlet wind directions over a real urban area (downtown Pamplona, Spain). The temporal series of NOx and PM10 and the spatial differences in pollutant concentration of NO2 and BTEX obtained were in agreement with experimental data. Inside the urban canopy, an important influence of urban boundary layer dynamics on the pollutant concentration patterns was observed. Large concentration differences between different zones of the same square were found. This showed that concentration levels measured by an automatic monitoring station depend on its location in the street or square, and that a modelling methodology like this is useful to complement the experimental information. On the other hand, this methodology can also be applied to evaluate abatement strategies by redistributing traffic emissions.
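
    A sketch of how steady per-direction CFD results could be recombined into a winter time series at a receptor; the scaling with wind speed and traffic emissions is an assumption made for illustration, and all numbers are placeholders:

    import numpy as np

    rng = np.random.default_rng(0)
    c_star = rng.random(16)      # steady CFD concentration per wind sector
    u_ref = 5.0                  # reference wind speed of the runs (m/s)

    def hourly_concentration(wind_dir_deg, wind_speed, emission_factor):
        """Pick the steady run for the nearest simulated sector, then
        scale inversely with wind speed and linearly with traffic
        emissions (acceptable in winter, when chemistry matters less)."""
        sector = int(round(wind_dir_deg / 22.5)) % 16
        return c_star[sector] * (u_ref / wind_speed) * emission_factor

    # A few winter hours (direction deg, speed m/s, relative emissions)
    hours = [(30.0, 3.0, 1.2), (45.0, 4.0, 0.9), (60.0, 2.5, 1.5)]
    print([round(hourly_concentration(d, u, e), 3) for d, u, e in hours])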

  19. Methodological implications of the extension of the number of considered markets in a non-Walrasian equilibrium model

    Directory of Open Access Journals (Sweden)

    Florin-Marius PAVELESCU

    2011-12-01

    Full Text Available This paper deals with the consequences of extending the number of markets taken into consideration in a non-Walrasian equilibrium model. It reviews the initial content of the theory of non-Walrasian equilibrium and emphasizes the main modelling factors of the respective equilibrium. It proposes the inclusion of the capital market in the non-Walrasian equilibrium model and reveals the implications of this extension for the classification of types of non-Walrasian equilibrium and for the content of macroeconomic and structural policies. It also proposes an econometric method for estimating the type of unemployment. The respective methodology is applied in practice to the case of Romania for the period 1991-2004.

  20. BPLOM: BPM Level-Oriented Methodology for Incremental Business Process Modeling and Code Generation on Mobile Platforms

    Directory of Open Access Journals (Sweden)

    Jaime Solis Martines

    2013-06-01

    Full Text Available The requirements engineering phase is the departure point for the development process of any kind of computer application; it determines the functionality needed in the working scenario of the program. Although this is a crucial point in application development, as incorrect requirement definition leads to costly errors in later stages of the development process, the involvement of application domain experts remains minor. In order to correct this, business process modeling notations were introduced to favor the involvement of business experts in this phase, but notation complexity prevents this participation from reaching its ideal state. Hence, we promote the definition of a level-oriented business process methodology, which encourages adapting the modeling notation to the modeling and technical knowledge shown by the expert. This approach reduces the complexity faced by domain experts and enables them to model their processes completely, with a level of technical detail directly proportional to their knowledge.

  1. MODeLeR: A Virtual Constructivist Learning Environment and Methodology for Object-Oriented Design

    Science.gov (United States)

    Coffey, John W.; Koonce, Robert

    2008-01-01

    This article contains a description of the organization and method of use of an active learning environment named MODeLeR, (Multimedia Object Design Learning Resource), a tool designed to facilitate the learning of concepts pertaining to object modeling with the Unified Modeling Language (UML). MODeLeR was created to provide an authentic,…

  2. Project Head Start: Models and Strategies for the Twenty-First Century. Garland Reference Library of Social Science. Source Books on Education, Volume 38.

    Science.gov (United States)

    Washington, Valora; Bailey, Ura Jean Oyemade

    Head Start, the nation's largest early childhood intervention, has enjoyed public and political support. The program has also been haunted by persistent questions about its role in communities, its sustainable impacts, and its quality. This book discusses the past, present, and future of Head Start in the hope of creating better partnerships…

  3. Methodology of High Accuracy and Resolution 3D Geological Model Generation and Application

    Institute of Scientific and Technical Information of China (English)

    吴键; 曹代勇; 邓爱居; 李东津; 蒋涛; 翟光华

    2004-01-01

    By generating a high-accuracy and high-resolution geological model of the Liuchu oil field, the technique of geological modeling is extended and incorporated into primary geological study, making it possible to describe the sand bodies and reservoir in detail. 3D visualization and 3D interactive editing of the geological structure model are the keys to the modeling procedure. This high-accuracy, high-resolution geological model has been successfully applied in optimizing the production scheme.

  4. Getting started with JUCE

    CERN Document Server

    Robinson, Martin

    2013-01-01

    This book is a fast-paced, practical guide full of step-by-step examples which are easy to follow and implement. This book is for programmers with a basic grasp of C++. The examples start at a basic level, making few assumptions beyond fundamental C++ concepts. Those without any experience with C++ should be able to follow and construct the examples, although you may need further support to understand the fundamental concepts.

  5. Getting started with Simulink

    CERN Document Server

    Zamboni, Luca

    2013-01-01

    This practical and easy-to-understand learning tutorial is one big exciting exercise for students and engineers that are always short on their schedules and want to regain some lost time with the help of Simulink.This book is aimed at students and engineers who need a quick start with Simulink. Though it's not required in order to understand how Simulink works, knowledge of physics will help the reader to understand the exercises described.

  6. Getting started with Hazelcast

    CERN Document Server

    Johns, Mat

    2013-01-01

    Written as a step-by-step guide, Getting Started with Hazelcast will teach you all you need to know to make your application data scalable.This book is a great introduction for Java developers, software architects, or developers looking to enable scalable and agile data within their applications. You should have programming knowledge of Java and a general familiarity with concepts like data caching and clustering.

  7. Development of CCF modeling and analysis methodology for diverse system status

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Tae Jin; Byun, Si Sub; Yoon, Tae Kwan [Soongsil University, Seoul (Korea); Moon, Jae Pil [Seoul National University, Seoul (Korea)

    1999-04-01

    The objective of this project is to develop a procedure for modeling and analyzing CCF efficiently according to various system statuses. CCF events change according to changes of the system status due to maintenance, accidents, or alternating success criteria for various missions. The objective of the first year's research was to develop a CCF model for various system statuses. We reviewed and evaluated current CCF models, and analyzed their merits and deficiencies in modeling various system statuses. An approximate model was developed as the CCF model; it is compatible with the MGL model. An extensive sensitivity study shows the accuracy and efficiency of the proposed model. The second year's research aimed at the development of an integrated CCF procedure for PSA and the risk monitor. We developed an adaptive method for the approximate model in a k/m/G system with multiple common cause groups; the accuracy of the method is proved by comparison with the implicit method. Next, we developed a method for modeling CCF in a fault tree. Three alternatives were considered, and modeling the CCF events under the gate of the individual component failure proved to be the most efficient. We then provide a method for estimating the CCF probability and have developed software for this purpose. We finally provide a fundamental procedure for modeling CCF in a risk monitor. The modeling procedure was applied to the HPSI system, and it proved to be efficient and accurate. (author). 48 refs., 11 figs., 53 tabs.
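
    Since the approximate model is stated to be compatible with the MGL model, the standard MGL parameter mapping gives a flavor of the quantification involved; the sketch below implements that standard mapping (not the project's approximate model itself), with illustrative numbers:

    from math import comb, prod

    def mgl_basic_event_probs(Qt, greek):
        """Multiple Greek Letter model: probability of a specific basic
        event failing exactly k of m components, from the total failure
        probability Qt and greek = [beta, gamma, ...] (length m - 1).
        Q_k = prod(rho_1..rho_k) * (1 - rho_{k+1}) * Qt / C(m-1, k-1),
        with rho_1 = 1 and rho_{m+1} = 0."""
        m = len(greek) + 1
        rho = [1.0] + list(greek) + [0.0]
        return {k: prod(rho[:k]) * (1.0 - rho[k]) * Qt / comb(m - 1, k - 1)
                for k in range(1, m + 1)}

    # 3-component group, beta = 0.10, gamma = 0.27 (illustrative values)
    print(mgl_basic_event_probs(Qt=1e-3, greek=[0.10, 0.27]))
    # k=1: (1-beta)Qt; k=2: beta(1-gamma)Qt/2; k=3: beta*gamma*Qt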

  8. A Model-Based Methodology for Integrated Design and Operation of Reactive Distillation Processes

    DEFF Research Database (Denmark)

    Mansouri, Seyed Soheil; Sales-Cruz, Mauricio; Huusom, Jakob Kjøbsted

    2015-01-01

    and resolved. A new approach is to tackle process intensification and controllability issues in an integrated manner, in the early stages of process design. This integrated and simultaneous synthesis approach provides optimal operation and more efficient control of complex intensified systems that suffice...... bubble point algorithm is used to compute the reactive vapor-liquid equilibrium data set. The operation of the RDC at the highest driving force and other candidate points is compared through open-loop and closed-loop analysis. By application of this methodology it is shown that designing the process at the......Process intensification is a new approach that has the potential to improve existing processes as well as new designs of processes to achieve more profitable and sustainable production. However, many issues with respect to their implementation and operation are not clear; for example, the question

  9. Describing the access network by means of router buffer modelling: a new methodology.

    Science.gov (United States)

    Sequeira, Luis; Fernández-Navajas, Julián; Saldana, Jose; Gállego, José Ramón; Canales, María

    2014-01-01

    The behaviour of the routers' buffers may affect the quality of service (QoS) of network services under certain conditions, since it may modify some traffic characteristics, such as delay or jitter, and may also drop packets. As a consequence, the characterization of the buffer is interesting, especially when multimedia flows are transmitted, and even more so if they transport information with real-time requirements. This work presents a new methodology aimed at determining the technical and functional characteristics of the real buffers (i.e., behaviour, size, limits, and input and output rate) of a network path. It permits the characterization of the intermediate buffers of different devices in a network path across the Internet.

  10. A generic model-based methodology for quantification of mass transfer limitations in microreactors

    DEFF Research Database (Denmark)

    Van Daele, Timothy; Fernandes del Pozo, David; Van Hauwermeiren, Daan

    2016-01-01

    Microreactors are becoming more popular in the biocatalytic field to speed up reactions and thus achieve process intensification. However, even these small-scale reactors can suffer from mass transfer limitations. Traditionally, dimensionless numbers such as the second Damköhler number are used...... to determine whether the reaction is either kinetically or mass transfer limited. However, these dimensionless numbers only give a qualitative measure of the extent of the mass transfer limitation, and are only applicable to simple reactor configurations. In practice, this makes it difficult to rapidly...... quantify the importance of such mass transfer limitations and compare different reactor configurations. This paper presents a novel generic methodology to quantify mass transfer limitations. It was applied to two microreactor configurations: a microreactor with immobilised enzyme at the wall and a Y...

  11. Optimization Parameters of tool life Model Using the Taguchi Approach and Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Kompan Chomsamutr

    2012-01-01

    Full Text Available The objective of this research is to compare the cutting parameters for turning workpieces of medium carbon steel (AISI 1045), seeking the longest tool life, using the Taguchi method and Response Surface Methodology (RSM). The data were collected following the Taguchi method, and the impacts of the factors depth of cut, cutting speed and feed rate were analyzed. This research found that both methods give the same most suitable parameter values, i.e. a feed rate of 0.10 mm/rev, a cutting speed of 150 m/min, and a depth of cut of 0.5 mm, which yield the longest tool life of 670.170 min, with an average error for RSM of 0.07 percent relative to the test values.
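
    A minimal response-surface sketch in the spirit of the comparison: fit a full second-order model to a three-level factorial and read off the best predicted setting. The tool-life response below is a synthetic stand-in, not the paper's data; it is constructed so the optimum matches the reported settings:

    import numpy as np
    from itertools import combinations, product

    levels = {"feed": [0.10, 0.20, 0.30],      # mm/rev
              "speed": [150.0, 200.0, 250.0],  # m/min
              "depth": [0.5, 1.0, 1.5]}        # mm
    X = np.array(list(product(*levels.values())))
    rng = np.random.default_rng(0)
    y = (800 - 1500 * X[:, 0] - 1.2 * (X[:, 1] - 150) - 90 * X[:, 2]
         + rng.normal(0, 10, len(X)))          # synthetic tool life (min)

    def quad_features(X):
        """Second-order RSM terms: 1, x_i, x_i^2, x_i * x_j."""
        n = X.shape[1]
        cols = [np.ones(len(X))]
        cols += [X[:, i] for i in range(n)]
        cols += [X[:, i] ** 2 for i in range(n)]
        cols += [X[:, i] * X[:, j] for i, j in combinations(range(n), 2)]
        return np.column_stack(cols)

    beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
    best = X[np.argmax(quad_features(X) @ beta)]
    print("predicted best (feed, speed, depth):", best)
    # -> lowest feed, speed and depth, matching the reported optimum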

  12. Evaluating the Performance of BSBL Methodology for EEG Source Localization On a Realistic Head Model

    CERN Document Server

    Saha, Sajib; Nesterets, Ya I; Tahtali, M; de Hoog, Frank; Gureyev, T E

    2015-01-01

    Source localization in EEG represents a high-dimensional inverse problem, which is severely ill-posed by nature. Fortunately, sparsity constraints come to the rescue, as they help in solving ill-posed problems when the signal is sparse. When the signal has a structure, such as a block structure, taking block sparsity into account produces better results. Since sparse Bayesian learning is an important member of the family of sparse recovery methods, and a superior choice when the projection matrix is highly coherent (which is typically the case for EEG), in this work we evaluate the performance of the block sparse Bayesian learning (BSBL) method for EEG source localization. It is already accepted by the EEG community that a group of dipoles, rather than a single dipole, is activated during brain activities; thus, a block structure is a reasonable choice for EEG. In this work we use two definitions of blocks, Brodmann areas and automated anatomical labelling (AAL), and analyze the reconstruction performance of the BSBL methodology fo...

  13. A robust methodology for kinetic model parameter estimation for biocatalytic reactions

    DEFF Research Database (Denmark)

    Al-Haque, Naweed; Andrade Santacoloma, Paloma de Gracia; Lima Afonso Neto, Watson;

    2012-01-01

    Effective estimation of parameters in biocatalytic reaction kinetic expressions are very important when building process models to enable evaluation of process technology options and alternative biocatalysts. The kinetic models used to describe enzyme-catalyzed reactions generally include several...

  14. A Mapping Model for Transforming Traditional Software Development Methods to Agile Methodology

    National Research Council Canada - National Science Library

    Rashmi Popli; Anita; Naresh Chauhan

    2013-01-01

    .... Agile model is growing in the market at a very good pace. Companies are drifting from traditional Software Development Life Cycle models to the Agile environment for the purpose of attaining quality and for the sake of saving cost and time...

  15. Selection Methodology of Energy Consumption Model Based on Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    Nakhodov Vladimir

    2016-12-01

    Full Text Available Energy efficiency monitoring methods in industry are based on statistical modeling of energy consumption. In the present paper, the widely used energy efficiency monitoring method of "Monitoring and Targeting" systems has been considered, highlighting one of its most important issues: the selection of a proper mathematical model of energy consumption. The paper gives a list of different models that can be applied in the corresponding systems. A number of criteria that estimate certain characteristics of a mathematical model are presented. The traditional criteria of model adequacy and "additional" criteria, which allow the model characteristics to be estimated more precisely, are proposed for choosing the mathematical model of energy consumption in "Monitoring and Targeting" systems. In order to compare different models by several criteria simultaneously, an approach based on Data Envelopment Analysis is proposed. Such an approach allows providing more accurate and reliable energy efficiency monitoring.
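
    A sketch of how Data Envelopment Analysis can score candidate models against several criteria at once: each model is a decision-making unit whose criterion values are split into "inputs" (to be low) and "outputs" (to be high), and input-oriented CCR efficiencies are obtained by linear programming. The criterion split and all numbers are invented; the paper's exact DEA formulation may differ:

    import numpy as np
    from scipy.optimize import linprog

    def dea_ccr_efficiency(X, Y):
        """Input-oriented CCR (multiplier form). For each unit o:
        max u.y_o  s.t.  v.x_o = 1,  u.y_j - v.x_j <= 0,  u, v >= 0."""
        n, p = X.shape
        q = Y.shape[1]
        eff = []
        for o in range(n):
            c = np.concatenate([-Y[o], np.zeros(p)])   # maximize u.y_o
            A_ub = np.hstack([Y, -X])                  # u.y_j - v.x_j <= 0
            A_eq = np.concatenate([np.zeros(q), X[o]])[None, :]
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                          A_eq=A_eq, b_eq=[1.0],
                          bounds=[(0, None)] * (q + p))
            eff.append(-res.fun)
        return np.array(eff)

    # Three candidate models: inputs (parameters, fitting error),
    # outputs (R^2, forecast stability) - made-up criterion values
    X = np.array([[3, 0.10], [5, 0.08], [8, 0.05]], float)
    Y = np.array([[0.90, 0.7], [0.93, 0.8], [0.95, 0.6]], float)
    print(dea_ccr_efficiency(X, Y))   # efficiency 1.0 = on the frontier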

  16. Modeling the toxicity of aromatic compounds to tetrahymena pyriformis: the response surface methodology with nonlinear methods.

    Science.gov (United States)

    Ren, Shijin

    2003-01-01

    Response surface models based on multiple linear regression had previously been developed for the toxicity of aromatic chemicals to Tetrahymena pyriformis. However, a nonlinear relationship between toxicity and one of the molecular descriptors in the response surface model was observed. In this study, response surface models were established using six nonlinear modeling methods to handle the nonlinearity exhibited in the aromatic chemicals data set. All models were validated using the method of cross-validation, and prediction accuracy was tested on an external data set. Results showed that response surface models based on locally weighted regression scatterplot smoothing (LOESS), multivariate adaptive regression splines (MARS), neural networks (NN), and projection pursuit regression (PPR) provided satisfactory power of model fitting and prediction and had similar applicability. The response surface models based on nonlinear methods were difficult to interpret and conservative in discriminating toxicity mechanisms.

  17. CONCEPTUAL AND METHODOLOGICAL MISTAKES IN PSYCHOLOGY AND HEALTH: A CASE STUDY ON THE USE AND ABUSE OF STRUCTURAL EQUATION MODELLING

    Directory of Open Access Journals (Sweden)

    Julio Alfonso Piña López

    2016-09-01

    Full Text Available In this article, a research paper is analysed, which was justified based on the theory of developmental psychopathology, the protective factors, self-regulation, resilience, and quality of life among individuals who lived with type 2 diabetes and hypertension. Structural equation modelling (SEM was used for the data analysis. Although the authors conclude that the data are adequate to the theory tested, they commit errors of logic, concept, methodology and interpretation which, taken together, demonstrate a flagrant rupture between the theory and the data.

  18. Financial constraints in capacity planning: a national utility regulatory model (NUREG). Volume I of III: methodology. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1981-10-29

    This report develops and demonstrates the methodology for the National Utility Regulatory (NUREG) Model developed under contract number DEAC-01-79EI-10579. It is accompanied by two supporting volumes. Volume II is a user's guide for operation of the NUREG software; it includes a description of the flow of software and data, as well as the formats of all user data files. Finally, Volume III is a software description guide; it briefly describes, and gives a listing of, each program used in NUREG.

  19. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements

    KAUST Repository

    Zambri, Brian

    2015-11-05

    Our aim is to propose a numerical strategy for accurately and efficiently retrieving the biophysiological parameters, as well as the external stimulus characteristics, of the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology. © 2015 IEEE.

  20. Early Start Denver Model for young children with autism

    Institute of Scientific and Technical Information of China (English)

    徐秀

    2015-01-01

    The effect of autism spectrum disorders (ASD) on children's physical and mental health has received increasing attention, and the importance of early intervention for young children with ASD is widely recognized; developing effective intervention programs for young children with ASD has become a hot topic in this field. This paper introduces the Early Start Denver Model, which is highly regarded in the early intervention of young children with ASD.

  1. Modeling of the effect of freezer conditions on the hardness of ice cream using response surface methodology.

    Science.gov (United States)

    Inoue, K; Ochi, H; Habara, K; Taketsuka, M; Saito, H; Ichihashi, N; Iwatsuki, K

    2009-12-01

    The effect of conventional continuous freezer parameters [mix flow (L/h), overrun (%), drawing temperature (°C), cylinder pressure (kPa), and dasher speed (rpm)] on the hardness of ice cream at varying measured temperatures (-5, -10, and -15 °C) was investigated systematically using response surface methodology (central composite face-centered design), and the relationships were expressed as statistical models. The range (maximum and minimum values) of each freezer parameter was set according to the actual capability of the conventional freezer and its applicability to the manufacturing process. Hardness was measured using a penetrometer. These models showed that overrun and drawing temperature had significant effects on hardness. The models can be used to optimize freezer conditions to make ice cream of the least possible hardness under the highest overrun (120%) and a drawing temperature of approximately -5.5 °C (slightly warmer than the lowest drawing temperature of -6.5 °C) within the range of this study. With reference to the structural elements of the ice cream, we suggest that the volume of overrun and the ice crystal content, ice crystal size, and fat globule destabilization affect the hardness of ice cream. In addition, the combination of a simple instrumental parameter and response surface methodology allows us to show the relation between freezer conditions and one of the most important properties, hardness, visually and quantitatively on a practical level.

  2. Methodology for Developing Hydrological Models Based on an Artificial Neural Network to Establish an Early Warning System in Small Catchments

    Directory of Open Access Journals (Sweden)

    Ivana Sušanj

    2016-01-01

    Full Text Available In some situations, there is no possibility of hazard mitigation, especially if the hazard is induced by water; thus, it is important to prevent consequences via an early warning system (EWS) that announces the possible occurrence of a hazard. The aim and objective of this paper are to investigate the possibility of implementing an EWS in a small-scale catchment and to develop a methodology for building a hydrological prediction model based on an artificial neural network (ANN) as an essential part of the EWS. The methodology is implemented in the case study of the Slani Potok catchment, which is historically recognized as a hazard-prone area, by establishing continuous monitoring of meteorological and hydrological parameters to collect data for the training, validation, and evaluation of the prediction capabilities of the ANN model. The model is validated and evaluated by visual inspection, by common calculation approaches, and by a newly proposed assessment. This new evaluation is based on separating the observed data into classes relative to the mean data value, on the percentages of data above or below the mean, and on the performance of the mean absolute error.
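
    A compact sketch of the ANN prediction-and-evaluation step on synthetic monitoring data (the rainfall/level series, network size and split below are placeholders, not the Slani Potok record), including a class-wise error check in the spirit of the proposed evaluation:

    import numpy as np
    from sklearn.metrics import mean_absolute_error
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    rain = rng.gamma(2.0, 2.0, 500)                       # hourly rainfall
    level = (np.convolve(rain, np.ones(5) / 5.0, mode="same")
             + rng.normal(0.0, 0.1, 500))                 # water level
    X, y = np.column_stack([rain[:-1], level[:-1]]), level[1:]

    split = 400                          # chronological train/test split
    ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                       random_state=0).fit(X[:split], y[:split])
    pred = ann.predict(X[split:])

    obs = y[split:]                      # errors above vs. below the mean
    hi = obs > obs.mean()
    print(f"MAE all: {mean_absolute_error(obs, pred):.3f}, "
          f"above mean: {mean_absolute_error(obs[hi], pred[hi]):.3f}, "
          f"below mean: {mean_absolute_error(obs[~hi], pred[~hi]):.3f}")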

  3. Homogenized modeling methodology for 18650 lithium-ion battery module under large deformation

    Science.gov (United States)

    Tang, Liang; Cheng, Pengle

    2017-01-01

    Effective lithium-ion battery module modeling has become a bottleneck for full-size electric vehicle crash safety numerical simulation. Modeling every single cell in detail would be costly. However, computational accuracy could be lost if the module is modeled by using a simple bulk material or rigid body. To solve this critical engineering problem, a general method to establish a computational homogenized model for the cylindrical battery module is proposed. A single battery cell model is developed and validated through radial compression and bending experiments. To analyze the homogenized mechanical properties of the module, a representative unit cell (RUC) is extracted with the periodic boundary condition applied on it. An elastic–plastic constitutive model is established to describe the computational homogenized model for the module. Two typical packing modes, i.e., cubic dense packing and hexagonal packing for the homogenized equivalent battery module (EBM) model, are targeted for validation compression tests, as well as the models with detailed single cell description. Further, the homogenized EBM model is confirmed to agree reasonably well with the detailed battery module (DBM) model for different packing modes with a length scale of up to 15 × 15 cells and 12% deformation where the short circuit takes place. The suggested homogenized model for battery module makes way for battery module and pack safety evaluation for full-size electric vehicle crashworthiness analysis.

  4. Improved Conceptual Models Methodology (ICoMM) for Validation of Non-Observable Systems

    Science.gov (United States)

    2015-12-01

    Abbreviations: M&S, modeling and simulation; MDP, model development process; MEU, Marine expeditionary unit; MODA, multi-objective decision analysis; MOE, measure of...
    ...multi-objective decision analysis (MODA) techniques. SMEs are still heavily involved in a MODA and have a method of tracing their values to the model

  6. Projecting future expansion of invasive species: comparing and improving methodologies for species distribution modeling.

    Science.gov (United States)

    Mainali, Kumar P; Warren, Dan L; Dhileepan, Kunjithapatham; McConnachie, Andrew; Strathie, Lorraine; Hassan, Gul; Karki, Debendra; Shrestha, Bharat B; Parmesan, Camille

    2015-12-01

    Modeling the distributions of species, especially of invasive species in non-native ranges, involves multiple challenges. Here, we developed some novel approaches to species distribution modeling aimed at reducing the influences of such challenges and improving the realism of projections. We estimated species-environment relationships for Parthenium hysterophorus L. (Asteraceae) with four modeling methods run with multiple scenarios of (i) sources of occurrences and geographically isolated background ranges for absences, (ii) approaches to drawing background (absence) points, and (iii) alternate sets of predictor variables. We further tested various quantitative metrics of model evaluation against biological insight. Model projections were very sensitive to the choice of training dataset. Model accuracy was much improved using a global dataset for model training, rather than restricting data input to the species' native range. AUC score was a poor metric for model evaluation and, if used alone, was not a useful criterion for assessing model performance. Projections away from the sampled space (i.e., into areas of potential future invasion) were very different depending on the modeling methods used, raising questions about the reliability of ensemble projections. Generalized linear models gave very unrealistic projections far away from the training region. Models that efficiently fit the dominant pattern, but exclude highly local patterns in the dataset and capture interactions as they appear in data (e.g., boosted regression trees), improved generalization of the models. Biological knowledge of the species and its distribution was important in refining choices about the best set of projections. A post hoc test conducted on a new Parthenium dataset from Nepal validated excellent predictive performance of our 'best' model. We showed that vast stretches of currently uninvaded geographic areas on multiple continents harbor highly suitable habitats for parthenium
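
    A sketch of the core training choices the study highlights (global presences plus randomly drawn background points, fit with boosted trees), on synthetic data; the two predictors stand in for temperature and rainfall, and none of the numbers are from the Parthenium dataset:

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    presence = rng.normal([25.0, 800.0], [3.0, 150.0], size=(300, 2))
    background = np.column_stack([rng.uniform(0.0, 40.0, 1000),
                                  rng.uniform(0.0, 2000.0, 1000)])
    X = np.vstack([presence, background])
    y = np.concatenate([np.ones(len(presence)), np.zeros(len(background))])

    # Boosted trees fit the dominant pattern and interactions "as they
    # appear in data", which the authors report generalized best
    sdm = GradientBoostingClassifier(random_state=0).fit(X, y)
    suitability = sdm.predict_proba(X)[:, 1]

    # Report AUC, but - per the paper - never use it as the sole
    # criterion; projections must also be checked against biology
    print("training AUC:", round(roc_auc_score(y, suitability), 3))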

  7. On the Inclusion of Energy-Shifting Demand Response in Production Cost Models: Methodology and a Case Study

    DEFF Research Database (Denmark)

    O'Connell, Niamh; Hale, Elaine; Doebber, Ian

    In the context of future power system requirements for additional flexibility, demand response (DR) is an attractive potential resource. Its proponents widely laud its prospective benefits, which include enabling higher penetrations of variable renewable generation at lower cost than alternative...... storage technologies, and improving economic efficiency. In practice, DR from the commercial and residential sectors is largely an emerging, not a mature, resource, and its actual costs and benefits need to be studied to determine promising combinations of physical DR resource, enabling controls...... and communications, power system characteristics, regulatory environments, market structures, and business models. The work described in this report focuses on the enablement of such analysis from the production cost modeling perspective. In particular, we contribute a bottom-up methodology for modeling load...

  8. Which spatial discretization for distributed hydrological models? Proposition of a methodology and illustration for medium to large-scale catchments

    Directory of Open Access Journals (Sweden)

    J. Dehotin

    2008-05-01

    Full Text Available Distributed hydrological models are valuable tools to derive distributed estimation of water balance components or to study the impact of land-use or climate change on water resources and water quality. In these models, the choice of an appropriate spatial discretization is a crucial issue. It is obviously linked to the available data, their spatial resolution and the dominant hydrological processes. For a given catchment and a given data set, the "optimal" spatial discretization should be adapted to the modelling objectives, as the latter determine the dominant hydrological processes considered in the modelling. For small catchments, landscape heterogeneity can be represented explicitly, whereas for large catchments such fine representation is not feasible and simplification is needed. The question is thus: is it possible to design a flexible methodology to represent landscape heterogeneity efficiently, according to the problem to be solved? This methodology should allow a controlled and objective trade-off between available data, the scale of the dominant water cycle components and the modelling objectives.

    In this paper, we propose a general methodology for such catchment discretization. It is based on the use of nested discretizations. The first level of discretization is composed of the sub-catchments, organised by the river network topology. The sub-catchment variability can be described using a second level of discretization, called hydro-landscape units. This level of discretization is only performed if it is consistent with the modelling objectives, the active hydrological processes and data availability. The hydro-landscapes take into account different geophysical factors such as topography, land-use and pedology, but also suitable hydrological discontinuities such as ditches, hedges, dams, etc. For numerical reasons these hydro-landscapes can be further subdivided into smaller elements that will constitute the

  9. Modeling the Effects of Tool Shoulder and Probe Profile Geometries on Friction Stirred Aluminum Welds Using Response Surface Methodology

    Institute of Scientific and Technical Information of China (English)

    H.K.Mohanty; M.M.Mahapatra; P.Kumar; P.Biswas; N.R.Mandal

    2012-01-01

    The present paper discusses the modeling of tool geometry effects on friction stir aluminum welds using response surface methodology. The friction stir welding tools were designed with different shoulder and tool probe geometries based on a design matrix. The matrix for the tool design covered three types of tools, based on three types of probes, with three levels each for defining the shoulder surface type and probe profile geometries. The effects of tool shoulder and probe geometries on the friction stirred aluminum welds were then experimentally investigated with respect to weld strength, weld cross-section area, grain size of the weld and grain size of the thermo-mechanically affected zone. These effects were modeled using multiple and response surface regression analysis. The response surface regression modeling was found to be appropriate for defining the friction stir weldment characteristics.

  10. Modelling extrudate expansion in a twin-screw food extrusion cooking process through dimensional analysis methodology

    DEFF Research Database (Denmark)

    Cheng, Hongyuan; Friis, Alan

    2010-01-01

    A new phenomenological model is proposed to correlate extrudate expansion and extruder operation parameters in a twin-screw food extrusion cooking process. Buckingham's pi dimensional analysis method is applied to establish the model. Three dimensionless groups, i.e. pump efficiency, water content...... and temperature, are formed to model the extrusion process from dimensional analysis. The model is evaluated with experimental data for extrusion of whole wheat flour and fish feed. The average deviations of the model correlations are 5.9% and 9% based on experimental data for the whole wheat flour and fish feed...

  11. Methodological notes on model comparisons and strategy classification: A falsificationist proposition

    Directory of Open Access Journals (Sweden)

    Morten Moshagen

    2011-12-01

    Full Text Available Taking a falsificationist perspective, the present paper identifies two major shortcomings of existing approaches to comparative model evaluations in general and strategy classifications in particular. These are (1) failure to consider systematic error and (2) neglect of global model fit. Using adherence measures to evaluate competing models implicitly makes the unrealistic assumption that the error associated with the model predictions is entirely random. By means of simple schematic examples, we show that failure to discriminate between systematic and random error seriously undermines this approach to model evaluation. Second, approaches that treat random versus systematic error appropriately usually rely on relative model fit to infer which model or strategy most likely generated the data. However, the model comparatively yielding the best fit may still be invalid. We demonstrate that taking for granted the vital requirement that a model by itself should adequately describe the data can easily lead to flawed conclusions. Thus, prior to considering the relative discrepancy of competing models, it is necessary to assess their absolute fit and thus, again, attempt falsification. Finally, the scientific value of model fit is discussed from a broader perspective.
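
    The first point, that adherence or total fit conflates systematic and random error, can be made concrete with a Theil-style decomposition of mean squared error into bias, slope and random components; the two example "models" below are invented:

    import numpy as np

    def error_decomposition(pred, obs):
        """Theil-style split: MSE = bias^2 + slope^2 + random, where a
        model can win on total MSE yet carry large *systematic* error."""
        pred, obs = np.asarray(pred, float), np.asarray(obs, float)
        mse = np.mean((pred - obs) ** 2)
        bias2 = (pred.mean() - obs.mean()) ** 2
        r = np.corrcoef(pred, obs)[0, 1]
        slope2 = (pred.std() - r * obs.std()) ** 2     # systematic
        random_part = (1.0 - r**2) * obs.var()         # unsystematic
        return {"mse": mse, "bias^2": bias2,
                "slope^2": slope2, "random": random_part}

    obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    print(error_decomposition(1.3 * obs - 0.2, obs))   # systematic error
    print(error_decomposition(obs + np.array([0.3, -0.2, 0.1, -0.3, 0.2]),
                              obs))                    # mostly random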

  12. Development of a modelling methodology for simulation of long-term morphological evolution of the southern Baltic coast

    Science.gov (United States)

    Zhang, Wenyan; Harff, Jan; Schneider, Ralf; Wu, Chaoyu

    2010-10-01

    The Darss-Zingst peninsula at the southern Baltic Sea is a typical wave-dominated barrier island system which includes an outer barrier island and an inner lagoon. The formation of the Darss-Zingst peninsula dates back to the onset of the Littorina Transgression, about 8,000 cal BP. It originated from several discrete islands, has been reshaped by littoral currents and wind-induced waves during the last 8,000 years, and has evolved into the complex barrier island system seen today; thus, it may serve as an example for studying coastal evolution under long-term climate change. A methodology for developing a long-term (decadal-to-centennial) process-based morphodynamic model for the southern Baltic coastal environment is presented here. The methodology consists of two main components: (1) a preliminary analysis of the key processes driving the morphological evolution of the study area based on statistical analysis of meteorological data and sensitivity studies; (2) a multi-scale high-resolution process-based model. The process-based model is structured into eight main modules. The two-dimensional vertically integrated circulation module, the wave module, the bottom boundary layer module, the sediment transport module, the cliff erosion module and the nearshore storm module are real-time calculation modules which aim at solving the short-term processes. A bathymetry update module and a long-term control function set, in which the ‘reduction’ concepts and techniques for morphological update acceleration are implemented, are integrated to up-scale the effects of short-term processes to a decadal-to-centennial scale. A series of multi-scale modelling strategies are implemented in the application of the model to the research area. Successful hindcast of the coastline change of the Darss-Zingst peninsula over the last 300 years validates the modelling methodology. Model results indicate that the coastline change of the Darss-Zingst peninsula is dominated by mechanisms acting on different

  13. A novel approach to delayed-start analyses for demonstrating disease-modifying effects in Alzheimer's disease.

    Directory of Open Access Journals (Sweden)

    Hong Liu-Seifert

    Full Text Available One method for demonstrating disease modification is a delayed-start design, consisting of a placebo-controlled period followed by a delayed-start period wherein all patients receive active treatment. To address methodological issues in previous delayed-start approaches, we propose a new method that is robust across conditions of drug effect, discontinuation rates, and missing data mechanisms. We propose a modeling approach and test procedure to test the hypothesis of noninferiority, comparing the treatment difference at the end of the delayed-start period with that at the end of the placebo-controlled period. We conducted simulations to identify the optimal noninferiority testing procedure to ensure the method was robust across scenarios and assumptions, and to evaluate the appropriate modeling approach for analyzing the delayed-start period. We then applied this methodology to Phase 3 solanezumab clinical trial data for mild Alzheimer's disease patients. Simulation results showed a testing procedure using a proportional noninferiority margin was robust for detecting disease-modifying effects; conditions of high and moderate discontinuations; and with various missing data mechanisms. Using all data from all randomized patients in a single model over both the placebo-controlled and delayed-start study periods demonstrated good statistical performance. In analysis of solanezumab data using this methodology, the noninferiority criterion was met, indicating the treatment difference at the end of the placebo-controlled studies was preserved at the end of the delayed-start period within a pre-defined margin. The proposed noninferiority method for delayed-start analysis controls Type I error rate well and addresses many challenges posed by previous approaches. Delayed-start studies employing the proposed analysis approach could be used to provide evidence of a disease-modifying effect. This method has been communicated with FDA and has been
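
    A hedged sketch of the noninferiority comparison described above, under a proportional margin and a normal approximation; the independence assumption between the two standard errors is a simplification that the paper's single joint model over both periods would replace with the proper covariance, and the numbers are illustrative, not trial data:

    import numpy as np
    from scipy.stats import norm

    def delayed_start_noninferiority(d1, se1, d2, se2,
                                     margin=0.5, alpha=0.05):
        """Test whether the treatment difference at the end of the
        delayed-start period (d2) preserves at least `margin` of the
        difference at the end of the placebo-controlled period (d1).
        H0: d2 <= margin * d1   vs   H1: d2 > margin * d1."""
        z = (d2 - margin * d1) / np.sqrt(se2**2 + (margin * se1) ** 2)
        return z, z > norm.ppf(1.0 - alpha)

    z, noninferior = delayed_start_noninferiority(d1=1.5, se1=0.4,
                                                  d2=1.2, se2=0.5)
    print(round(z, 2), noninferior)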

  14. Strong-LAMP: A LAMP Assay for Strongyloides spp. Detection in Stool and Urine Samples. Towards the Diagnosis of Human Strongyloidiasis Starting from a Rodent Model

    Science.gov (United States)

    Gandasegui, Javier; Bajo Santos, Cristina; López-Abán, Julio; Saugar, José María; Rodríguez, Esperanza; Vicente, Belén; Muro, Antonio

    2016-01-01

    Background Strongyloides stercoralis, the chief causative agent of human strongyloidiasis, is a globally distributed nematode that is mainly endemic in tropical and subtropical regions. Chronic infection is often clinically asymptomatic, but it can result in severe hyperinfection syndrome or disseminated strongyloidiasis in immunocompromised patients. A great diversity of techniques is used in diagnosing the disease, but definitive diagnosis is accomplished by parasitological examination of stool samples for morphological identification of the parasite. Until now, no molecular method has been tested in urine samples as an alternative to stool samples for diagnosing strongyloidiasis. This study aimed to evaluate the use of a new molecular LAMP assay in a well-established Wistar rat experimental infection model using both stool and, for the first time, urine samples. The LAMP assay was also clinically evaluated in patients' stool samples. Methodology/Principal Findings Stool and urine samples were obtained daily during a 28-day period from rats infected subcutaneously with different doses of infective third-stage larvae of S. venezuelensis. The dynamics of parasite infection was determined by daily counts of the number of eggs per gram of feces from day 1 to 28 post-infection. A set of primers for the LAMP assay, based on a partial DNA sequence of the 18S rRNA gene from S. venezuelensis, was designed. The resulting LAMP assay (named Strong-LAMP) allowed the sensitive detection of S. venezuelensis DNA in both stool and urine samples obtained from each infection group of rats, and was also effective in S. stercoralis DNA amplification in stool samples from patients with strongyloidiasis previously confirmed by parasitological and real-time PCR tests. Conclusions/Significance Our Strong-LAMP assay is a useful molecular tool for research on a strongyloidiasis experimental infection model in both stool and urine samples. After further validation, the Strong-LAMP could also be potentially
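
    By way of illustration only, the sketch below checks whether candidate primers (or their reverse complements) occur in a target sequence, the kind of elementary sanity check that precedes LAMP primer design. The sequences shown are placeholders, not the published Strong-LAMP primer set, and a real LAMP set requires six primer-binding regions in a specific order and orientation, which this sketch does not verify.

```python
def reverse_complement(seq):
    """Reverse complement of a DNA sequence (A<->T, C<->G)."""
    comp = str.maketrans("ACGT", "TGCA")
    return seq.translate(comp)[::-1]

def primer_sites(target, primers):
    """Report where each candidate primer (or its reverse complement)
    first occurs in the target sequence (-1 means not found)."""
    hits = {}
    for name, p in primers.items():
        hits[name] = {"forward": target.find(p),
                      "reverse": target.find(reverse_complement(p))}
    return hits

# Toy 18S-like fragment and two outer primers -- placeholder sequences,
# not the actual Strong-LAMP primers.
target = "ATGGCTCATTAAATCAGTTATGGTTCCTTTGATCGTACCTTACTACATGGATAACCGTG"
primers = {"F3": "ATGGCTCATTAAATCAG",
           "B3": reverse_complement("TACATGGATAACCGTG")}
print(primer_sites(target, primers))
```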

  15. Developing Risk Prediction Models for Postoperative Pancreatic Fistula: a Systematic Review of Methodology and Reporting Quality.

    Science.gov (United States)

    Wen, Zhang; Guo, Ya; Xu, Banghao; Xiao, Kaiyin; Peng, Tao; Peng, Minhao

    2016-04-01

    Postoperative pancreatic fistula is still a major complication after pancreatic surgery, despite improvements in surgical technique and perioperative management. We sought to systematically review and critically assess the conduct and reporting of methods used to develop risk prediction models for predicting postoperative pancreatic fistula. We conducted a systematic search of the PubMed and EMBASE databases to identify articles published before January 1, 2015, that described the development of models to predict the risk of postoperative pancreatic fistula. We extracted information on the development of each prediction model, including study design, sample size and number of events, definition of postoperative pancreatic fistula, risk predictor selection, missing data, model-building strategies, and model performance. Seven studies developing seven risk prediction models were included. In three studies (42 %), the number of events per variable was less than 10. The number of candidate risk predictors ranged from 9 to 32. Five studies (71 %) reported using univariate screening, which is not recommended in building a multivariable model, to reduce the number of risk predictors. Six risk prediction models (86 %) were developed by categorizing all continuous risk predictors. The treatment and handling of missing data were not mentioned in any of the studies. We found use of inappropriate methods that could endanger the development of a model, including univariate pre-screening of variables, categorization of continuous risk predictors, and inadequate model validation. The use of inappropriate methods affects the reliability and accuracy of the probability estimates for predicting postoperative pancreatic fistula.
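
    Two of the pitfalls the review flags, low events-per-variable and univariate pre-screening with categorized predictors, are easy to state in code. The sketch below computes the events-per-variable ratio and fits all candidate predictors in a single multivariable logistic model, keeping continuous predictors continuous; the data are simulated and the variable setup is an illustrative assumption, not the reviewed studies' data.

```python
import numpy as np
import statsmodels.api as sm

def events_per_variable(n_events, n_candidate_predictors):
    """Rule-of-thumb check flagged by the review: fewer than ~10
    events per candidate predictor risks overfitting."""
    return n_events / n_candidate_predictors

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 3))                   # continuous predictors, kept continuous
logit = -1.5 + 0.8 * X[:, 0] - 0.5 * X[:, 1]  # simulated true model
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # simulated fistula outcome

print("EPV:", events_per_variable(y.sum(), X.shape[1]))

# Fit all candidate predictors in one multivariable model rather than
# pre-screening them one at a time, and do not categorize continuous ones.
model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.params)
```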

  16. Repository environmental parameters and models/methodologies relevant to assessing the performance of high-level waste packages in basalt, tuff, and salt

    Energy Technology Data Exchange (ETDEWEB)

    Claiborne, H.C.; Croff, A.G.; Griess, J.C.; Smith, F.J.

    1987-09-01

    This document provides specifications for models/methodologies that could be employed in determining postclosure repository environmental parameters relevant to the performance of high-level waste packages for the Basalt Waste Isolation Project (BWIP) at Richland, Washington; the tuff at Yucca Mountain near the Nevada Test Site; and the bedded salt in Deaf Smith County, Texas. Guidance is provided on the identity of the relevant repository environmental parameters, the models/methodologies employed to determine the parameters, and the input database for the models/methodologies. Supporting studies included are an analysis of potential waste package failure modes leading to identification of the relevant repository environmental parameters, an evaluation of the credible range of the repository environmental parameters, and a summary of the review of existing models/methodologies currently employed in determining repository environmental parameters relevant to waste package performance. 327 refs., 26 figs., 19 tabs.

  17. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine; Freestate, David; Riley, Cameron; Hobbs, William

    2016-11-01

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.
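
    The paper's new capability lives in SAM itself, but the two modeling routes it compares can be illustrated with the open-source pvlib library: one path transposes irradiance components to POA through a transposition model (with its attendant error), while a measured POA series would feed the performance model directly and bypass that step. The site geometry and clear-sky inputs below are illustrative assumptions, not the paper's eight test systems.

```python
import pandas as pd
import pvlib

# Site and array geometry -- illustrative values only.
latitude, longitude = 35.05, -106.54
tilt, azimuth = 30, 180

times = pd.date_range("2016-06-01 06:00", "2016-06-01 18:00",
                      freq="1h", tz="America/Denver")
solpos = pvlib.solarposition.get_solarposition(times, latitude, longitude)
cs = pvlib.location.Location(latitude, longitude).get_clearsky(times)

# Transposition route: estimate POA from horizontal/component irradiance.
poa_modeled = pvlib.irradiance.get_total_irradiance(
    surface_tilt=tilt, surface_azimuth=azimuth,
    solar_zenith=solpos["apparent_zenith"], solar_azimuth=solpos["azimuth"],
    dni=cs["dni"], ghi=cs["ghi"], dhi=cs["dhi"])["poa_global"]

# Direct route: a measured POA series would replace the transposition
# output entirely; the line below is a stand-in for a real sensor feed.
poa_measured = poa_modeled.copy()
```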

  18. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine; Freestate, David; Hobbs, William; Riley, Cameron

    2016-06-05

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.

  19. A new methodology to test galaxy formation models using the dependence of clustering on stellar mass

    CERN Document Server

    Campbell, David J R; Mitchell, Peter D; Helly, John C; Gonzalez-Perez, Violeta; Lacey, Cedric G; Lagos, Claudia del P; Simha, Vimal; Farrow, Daniel J

    2014-01-01

    We present predictions for the two-point correlation function of galaxy clustering as a function of stellar mass, computed using two new versions of the GALFORM semi-analytic galaxy formation model. These models make use of a new high-resolution, large-volume N-body simulation, set in the WMAP7 cosmology. One model uses a universal stellar initial mass function (IMF), while the other assumes different IMFs for quiescent star formation and bursts. Particular consideration is given to how the assumptions required to estimate the stellar masses of observed galaxies (such as the choice of IMF, stellar population synthesis model, and dust extinction) influence the perceived dependence of galaxy clustering on stellar mass. Broad-band spectral energy distribution fitting is carried out to estimate stellar masses for the model galaxies in the same manner as in observational studies. We show clear differences between the clustering signals computed using the true and estimated model stellar masses. As such, we highlight...
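
    For readers unfamiliar with the clustering statistic involved, the sketch below estimates a two-point correlation function with the Landy-Szalay estimator on a toy catalogue, the kind of measurement that would be repeated per stellar-mass bin. The brute-force pair counting, box size, and catalogue sizes are illustrative assumptions and bear no relation to the GALFORM models or the N-body simulation used in the paper.

```python
import numpy as np

def xi_landy_szalay(data, rand, bins):
    """Minimal 3-D two-point correlation estimator (Landy-Szalay),
    using brute-force pair counts; suitable for toy catalogues only."""
    def paircount(a, b, bins, auto=False):
        d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
        if auto:  # keep each pair once, drop self-pairs
            d = d[np.triu_indices_from(d, k=1)]
        return np.histogram(d.ravel(), bins=bins)[0].astype(float)

    nd, nr = len(data), len(rand)
    dd = paircount(data, data, bins, auto=True) / (nd * (nd - 1) / 2)
    rr = paircount(rand, rand, bins, auto=True) / (nr * (nr - 1) / 2)
    dr = paircount(data, rand, bins) / (nd * nr)
    return (dd - 2 * dr + rr) / rr

rng = np.random.default_rng(1)
box = 100.0
galaxies = rng.random((500, 3)) * box   # toy positions for one stellar-mass bin
randoms = rng.random((1000, 3)) * box   # unclustered reference catalogue
bins = np.linspace(1.0, 20.0, 11)       # separation bins
xi = xi_landy_szalay(galaxies, randoms, bins)
```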

  20. Getting Started with Netduino

    CERN Document Server

    Walker, Chris

    2012-01-01

    Start building electronics projects with Netduino, the popular open source hardware platform that's captured the imagination of makers and hobbyists worldwide. This easy-to-follow book provides the step-by-step guidance you need to experiment with Netduino and the .NET Micro Framework. Through a set of simple projects, you'll learn how to create electronic gadgets, including networked devices that communicate over TCP/IP. Along the way, hobbyists will pick up the basics of .NET programming, and programmers will discover how to work with electronics and microcontrollers. Follow the projects in