WorldWideScience

Sample records for model implementation reducing

  1. Implementation of REDIM reduced chemistry to model an axisymmetric laminar diffusion methane-air flame

    Science.gov (United States)

    Henrique de Almeida Konzen, Pedro; Richter, Thomas; Riedel, Uwe; Maas, Ulrich

    2011-06-01

    The goal of this work is to analyze the use of automatically reduced chemistry by the Reaction-Diffusion Manifold (REDIM) method in simulating axisymmetric laminar coflow diffusion flames. Detailed chemical kinetic models are usually computationally prohibitive for simulating complex reacting flows, and therefore reduced models are required. Automatic model reduction approaches usually exploit the natural multi-scale structure of combustion systems. The novel REDIM approach applies the concept of invariant manifolds to also treat the influence of the transport processes on the reduced model, which overcomes a fundamental problem of model reduction, namely the neglect of the coupling between molecular transport and thermochemical processes. We have considered a previously well-studied atmospheric-pressure nitrogen-diluted methane-air flame as a test case to validate the methodology presented here. First, one-dimensional and two-dimensional REDIMs were computed and tabulated in lookup tables. Then, the full set of governing equations is projected onto the REDIM and implemented in the object-oriented C++ Gascoigne code with a new add-on library to deal with the REDIM tables. The projected set of governing equations has been discretized by the Finite Element Method (FEM) and solved by a GMRES iteration preconditioned by a geometric multigrid method. Local grid refinement, adaptive meshing and parallelization are applied to ensure efficiency and precision. The numerical results obtained using the REDIM approach have shown very good agreement with detailed numerical simulations and experimental data.
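
    A minimal Python sketch of the lookup-table idea is given below: a tabulated 2D REDIM is queried by interpolation to return temperature and a chemical source term for given reduced coordinates. The table layout, placeholder data and function names are illustrative assumptions only; they do not reproduce the Gascoigne add-on library used in the paper.

      # Hypothetical sketch of querying a tabulated 2D REDIM during a flow solve.
      import numpy as np
      from scipy.interpolate import RegularGridInterpolator

      # Assume the 2D REDIM is tabulated on a grid of reduced coordinates
      # (theta1, theta2), storing temperature and a lumped chemical source term.
      theta1 = np.linspace(0.0, 1.0, 101)
      theta2 = np.linspace(0.0, 1.0, 101)
      T_table = 300.0 + 1700.0 * np.outer(theta1, np.ones_like(theta2))  # placeholder data
      S_table = np.outer(theta1 * (1.0 - theta1), np.ones_like(theta2))  # placeholder data

      T_lookup = RegularGridInterpolator((theta1, theta2), T_table)
      S_lookup = RegularGridInterpolator((theta1, theta2), S_table)

      def reduced_chemistry(theta):
          """Return (temperature, source term) for reduced coordinates theta = (t1, t2)."""
          return T_lookup(theta).item(), S_lookup(theta).item()

      print(reduced_chemistry((0.4, 0.6)))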

  2. Implementation of a numerical holding furnace model in foundry and construction of a reduced model

    Science.gov (United States)

    Loussouarn, Thomas; Maillet, Denis; Remy, Benjamin; Dan, Diane

    2016-09-01

    Vacuum holding induction furnaces are used for the manufacturing of turbine blades by the lost-wax foundry process. The control of solidification parameters is a key factor for manufacturing these parts in accordance with geometrical and structural expectations. The definition of a reduced heat transfer model with experimental identification through an estimation of its parameters is required here. In a further stage this model will be used to characterize heat exchanges using internal sensors through inverse techniques, in order to optimize the furnace control and its design. Here, an axisymmetric furnace and its load have been numerically modelled using FlexPDE, a finite element code. A detailed model allows the calculation of the internal induction heat source as well as transient radiative transfer inside the furnace. A reduced lumped body model has been defined to represent the numerical furnace. The model reduction and the estimation of the parameters of the lumped body have been carried out using a Levenberg-Marquardt least squares minimization algorithm with Matlab, using two synthetic temperature signals with a further validation test.
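
    As an illustration of the parameter-estimation step described above, the sketch below fits a simple one-body lumped thermal model to a synthetic temperature signal with a Levenberg-Marquardt least squares routine (SciPy here, rather than the Matlab workflow of the paper). The one-body model and all numerical values are assumptions for illustration, not the furnace model itself.

      import numpy as np
      from scipy.optimize import least_squares

      t = np.linspace(0.0, 2000.0, 200)                     # time, s
      tau_true, T_env_true, T0 = 450.0, 300.0, 1500.0       # "true" lumped parameters (made up)
      T_meas = T_env_true + (T0 - T_env_true) * np.exp(-t / tau_true)
      T_meas = T_meas + np.random.default_rng(0).normal(0.0, 2.0, t.size)  # synthetic noise

      def residuals(p):
          tau, T_env = p
          return (T_env + (T0 - T_env) * np.exp(-t / tau)) - T_meas

      # Levenberg-Marquardt minimization of the least squares misfit.
      fit = least_squares(residuals, x0=[200.0, 350.0], method="lm")
      print("estimated tau, T_env:", fit.x)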

  3. Implementation of a Cloud Computing solution with the MapReduce model

    Science.gov (United States)

    Baya, Chalabi

    2014-10-01

    In recent years, large scale computer systems have emerged to meet the demands of high storage, supercomputing, and applications using very large data sets. The emergence of Cloud Computing offers the potential for analysis and processing of large data sets. MapReduce is the most popular programming model used to support the development of such applications. It was initially designed by Google for its large-scale data centers, to provide Web search services with rapid response and high availability. In this paper we test the K-means clustering algorithm in a Cloud Computing environment. The algorithm is implemented on MapReduce and has been chosen for its characteristics, which are representative of many iterative data analysis algorithms. We then modify the CloudSim framework to simulate the MapReduce execution of K-means clustering on different Cloud Computing infrastructures, depending on their size and the characteristics of the target platforms. The experiments show that the implementation of K-means clustering gives good results, especially for large data sets, and that the Cloud infrastructure has an influence on these results.
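
    The core of the approach can be illustrated with one K-means iteration written as map and reduce steps. The sketch below is plain Python/NumPy run sequentially; it is not the Hadoop/CloudSim setup evaluated in the paper, and all data and parameter values are made up.

      import numpy as np
      from collections import defaultdict

      def map_step(points, centroids):
          """Map phase: emit (nearest-centroid-index, point) pairs."""
          for p in points:
              k = int(np.argmin(np.linalg.norm(centroids - p, axis=1)))
              yield k, p

      def reduce_step(pairs, n_clusters, old_centroids):
          """Reduce phase: average the points assigned to each centroid."""
          groups = defaultdict(list)
          for k, p in pairs:
              groups[k].append(p)
          return np.array([np.mean(groups[k], axis=0) if groups[k] else old_centroids[k]
                           for k in range(n_clusters)])

      rng = np.random.default_rng(1)
      data = rng.normal(size=(1000, 2))
      centroids = data[rng.choice(len(data), 3, replace=False)]
      for _ in range(10):                      # a fixed number of MapReduce rounds
          centroids = reduce_step(map_step(data, centroids), 3, centroids)
      print(centroids)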

  4. Reduced Order Model Implementation in the Risk-Informed Safety Margin Characterization Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Cogliati, Joshua J. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Talbot, Paul W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rinaldi, Ivan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Dan [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Bei [Idaho National Lab. (INL), Idaho Falls, ID (United States); Pascucci, Valerio [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhao, Haihua [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The RISMC project aims to develop new advanced simulation-based tools to perform Probabilistic Risk Analysis (PRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermo-hydraulic behavior of the reactor primary and secondary systems but also the temporal evolution of external events and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, ms-s-minutes-years). As part of the RISMC PRA approach, a large number of computationally expensive simulation runs is required. An important aspect is that even though computational power is steadily growing, the overall computational cost of a RISMC analysis may not be viable for certain cases. A solution being evaluated is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the computational cost of a RISMC analysis by decreasing the number of simulation runs to perform and by employing surrogate models instead of the actual simulation codes. This report focuses on reduced order modeling techniques that can be applied to any RISMC analysis to generate, analyze and visualize data. In particular, we focus on surrogate models that approximate the simulation results in a much shorter time (µs instead of hours/days). We apply reduced order and surrogate modeling techniques to several types of RISMC analyses using RAVEN and RELAP-7 and show the advantages that can be gained.
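
    The general surrogate idea can be sketched in a few lines: run the expensive code only at a small number of design points, fit a cheap approximation, and use that approximation for the many Monte Carlo samples a risk analysis needs. The "simulation", the safety limit and the polynomial surrogate below are illustrative assumptions, not the RAVEN/RELAP-7 workflow of the report.

      import numpy as np

      def expensive_simulation(x):
          """Stand-in for a long-running physics code (e.g. a peak temperature response)."""
          return 900.0 + 150.0 * np.sin(3.0 * x) + 40.0 * x**2

      # Small design of experiments: the only points where the real code would be run.
      x_train = np.linspace(-1.0, 1.0, 12)
      y_train = expensive_simulation(x_train)

      # Cheap surrogate: a low-order polynomial fit, evaluated in microseconds.
      surrogate = np.polynomial.Polynomial.fit(x_train, y_train, deg=5)

      # Monte Carlo on the surrogate: probability that an assumed limit is exceeded.
      rng = np.random.default_rng(42)
      samples = rng.uniform(-1.0, 1.0, 100_000)
      p_exceed = np.mean(surrogate(samples) > 1000.0)
      print(f"estimated exceedance probability: {p_exceed:.4f}")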

  5. Reduced Order Podolsky Model

    CERN Document Server

    Thibes, Ronaldo

    2016-01-01

    We perform the canonical and path integral quantizations of a lower-derivative-order model describing Podolsky's generalized electrodynamics. The physical content of the model shows an auxiliary massive vector field coupled to the usual electromagnetic field. The equivalence with Podolsky's original model is studied at classical and quantum levels. Concerning the dynamical time evolution, we obtain a theory with two first-class and two second-class constraints in phase space. We calculate explicitly the corresponding Dirac brackets involving both vector fields. We use the Senjanovic procedure to implement the second-class constraints and the Batalin-Fradkin-Vilkovisky path integral quantization scheme to deal with the symmetries generated by the first-class constraints. The physical interpretation of the results turns out to be simpler due to the reduced derivative order permeating the equations of motion, Dirac brackets and effective action.

  6. Implementing Parallel Google Map-Reduce in Eden

    DEFF Research Database (Denmark)

    Berthold, Jost; Dieterle, Mischa; Loogen, Rita

    2009-01-01

    Recent publications have emphasised map-reduce as a general programming model (labelled Google map-reduce), and described existing high-performance implementations for large data sets. We present two parallel implementations for this Google map-reduce skeleton, one following earlier work, and one...... of the Google map-reduce skeleton in usage and performance, and deliver runtime analyses for example applications. Although very flexible, the Google map-reduce skeleton is often too general, and typical examples reveal a better runtime behaviour using alternative skeletons....
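
    For readers unfamiliar with the skeleton itself, a minimal sequential Python sketch is given below (not the parallel Eden implementations of the paper): map the input to key/value pairs, group by key, then reduce each group. The word-count example and all names are illustrative.

      from collections import defaultdict
      from typing import Callable, Iterable, Tuple

      def map_reduce(data: Iterable,
                     mapper: Callable[[object], Iterable[Tuple[object, object]]],
                     reducer: Callable[[object, list], object]) -> dict:
          groups = defaultdict(list)
          for item in data:
              for key, value in mapper(item):       # map phase
                  groups[key].append(value)
          return {key: reducer(key, values)         # reduce phase
                  for key, values in groups.items()}

      # Classic word-count usage example.
      docs = ["to be or not to be", "to map and to reduce"]
      counts = map_reduce(docs,
                          mapper=lambda doc: ((w, 1) for w in doc.split()),
                          reducer=lambda _key, ones: sum(ones))
      print(counts)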

  7. Big Data, Big Research: Implementing Population Health-Based Research Models and Integrating Care to Reduce Cost and Improve Outcomes.

    Science.gov (United States)

    Anoushiravani, Afshin A; Patton, Jason; Sayeed, Zain; El-Othmani, Mouhanad M; Saleh, Khaled J

    2016-10-01

    Recent trends in clinical research have moved attention toward reporting clinical outcomes and resource consumption associated with various care processes. This change is the result of technological advancement and a national effort to critically assess health care delivery. As orthopedic surgeons traverse an uncharted health care environment, a more complete understanding of how clinical research is conducted using large data sets is necessary. The purpose of this article is to review various advantages and disadvantages of large data sets available for orthopaedic use, examine their ideal use, and report how they are being implemented nationwide.

  8. Implementation of a reduced order Kalman filter to assimilate ocean color data into a coupled physical-biochemical model of the North Aegean Sea.

    Science.gov (United States)

    Kalaroni, Sofia; Tsiaras, Kostas; Economou-Amilli, Athena; Petihakis, George; Politikos, Dimitrios; Triantafyllou, George

    2013-04-01

    Within the framework of the European project OPEC (Operational Ecology), a data assimilation system was implemented to describe chlorophyll-a concentrations in the North Aegean, as well as the impact on the European anchovy (Engraulis encrasicolus) biomass distribution provided by a bioenergetics model, related to the density of three low trophic level functional groups of zooplankton (heterotrophic flagellates, microzooplankton and mesozooplankton). The three-dimensional hydrodynamic-biogeochemical model comprises two on-line coupled sub-models: the Princeton Ocean Model (POM) and the European Regional Seas Ecosystem Model (ERSEM). The assimilation scheme is based on the Singular Evolutive Extended Kalman (SEEK) filter and its variant that uses a fixed correction basis (SFEK). For its initialization, the SEEK filter uses a reduced order error covariance matrix provided by the dominant Empirical Orthogonal Functions (EOFs) of the model. The assimilation experiments were performed for the year 2003 using SeaWiFS chlorophyll-a data, with the physical model driven by atmospheric forcing obtained from the regional climate model HIRHAM5. The assimilation system is validated by assessing the relevance of the system in fitting the data, the impact of the assimilation on non-observed biochemical parameters and the overall quality of the forecasts.
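
    The reduced-rank analysis step at the heart of a SEEK-type filter can be sketched schematically: the forecast error covariance is represented by a handful of dominant EOF modes, P = L L^T, so the gain is computed without ever forming the full covariance matrix. Dimensions, operators and numbers below are toy assumptions, not the POM-ERSEM configuration of the study.

      import numpy as np

      rng = np.random.default_rng(0)
      n, r, m = 500, 5, 40               # state size, retained EOF modes, observations

      x_f = rng.normal(size=n)           # forecast state
      L = rng.normal(size=(n, r))        # dominant EOF modes scaled by their amplitudes
      H = np.zeros((m, n))               # observation operator: sample m state entries
      H[np.arange(m), rng.choice(n, m, replace=False)] = 1.0
      R = 0.1 * np.eye(m)                # observation error covariance
      y = H @ x_f + rng.normal(scale=0.3, size=m)   # synthetic observations

      HL = H @ L
      S = HL @ HL.T + R                  # innovation covariance (m x m)
      K = L @ HL.T @ np.linalg.inv(S)    # reduced-rank Kalman gain (n x m)
      x_a = x_f + K @ (y - H @ x_f)      # analysis state
      print("analysis increment norm:", np.linalg.norm(x_a - x_f))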

  9. Obstacles and benefits of the implementation of a reduced-rank smoother with a high resolution model of the tropical Atlantic Ocean

    Directory of Open Access Journals (Sweden)

    N. Freychet

    2012-09-01

    Full Text Available Most oceanographic operational centers use three-dimensional data assimilation schemes to produce reanalyses. We investigate here the benefits of a smoother, i.e. a four-dimensional formulation of statistical assimilation. A square-root sequential smoother is implemented with a tropical Atlantic Ocean circulation model. A simple twin experiment is performed to investigate its benefits, compared to its corresponding filter. Despite the model's non-linearities and the various approximations used for its implementation, the smoother leads to a better estimation of the ocean state, both from a statistical (i.e. mean error level) and a dynamical point of view, as expected from linear theory. Smoothed states are more in phase with the dynamics of the reference state, an aspect that is nicely illustrated by the chaotic dynamics of the North Brazil Current rings. We also show that the smoother efficiency is strongly related to the filter configuration. One of the main obstacles to implementing the smoother is then to accurately estimate the error covariances of the filter. Considering this, the benefits of the smoother are also investigated with a configuration close to situations that can be managed by operational center systems, where covariance matrices are fixed (optimal interpolation). We define here a simplified smoother scheme, called the half-fixed basis smoother, that could be implemented with current reanalysis schemes. Its main assumption is to neglect the propagation of the error covariance matrix, which strongly reduces the cost of assimilation. Results illustrate the ability of this smoother to provide a solution more consistent with the dynamics, compared to the filter. The smoother is also able to produce analyses independently of the observation frequency, so the smoothed solution appears more continuous in time, especially in the case of a low-frequency observation network.

  10. MARIANE: MApReduce Implementation Adapted for HPC Environments

    Energy Technology Data Exchange (ETDEWEB)

    Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan; Ramakrishnan, Lavanya

    2011-07-06

    MapReduce is increasingly becoming a popular framework, and a potent programming model. The most popular open source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as Teragrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues, but also affects overall performance due to the added overhead of the HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices for better performance gains in those settings. By leveraging inherent distributed file systems' functions, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only allows for the use of the model in an expanding number of HPC environments, but also allows for better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach in distributed environments over Apache Hadoop in a data intensive setting, on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).

  11. Comparing MapReduce and Pipeline Implementations for Counting Triangles

    Directory of Open Access Journals (Sweden)

    Edelmira Pasarella

    2017-01-01

    Full Text Available A common method to define a parallel solution for a computational problem consists in finding a way to use the Divide and Conquer paradigm so that processors act on their own data and are scheduled in a parallel fashion. MapReduce is a programming model that follows this paradigm, and allows for the definition of efficient solutions by both decomposing a problem into steps on subsets of the input data and combining the results of each step to produce final results. Albeit used for the implementation of a wide variety of computational problems, MapReduce performance can be negatively affected whenever the replication factor grows or the size of the input is larger than the resources available at each processor. In this paper we show an alternative approach to implementing the Divide and Conquer paradigm, named the dynamic pipeline. The main features of dynamic pipelines are illustrated on a parallel implementation of the well-known problem of counting triangles in a graph. This problem is especially interesting either when the input graph does not fit in memory or is dynamically generated. To evaluate the properties of the dynamic pipeline, a dynamic pipeline of processes and an ad-hoc version of MapReduce are implemented in the language Go, exploiting its ability to deal with channels and spawned processes. An empirical evaluation is conducted on graphs of different topologies, sizes, and densities. Observed results suggest that dynamic pipelines allow for an efficient implementation of the problem of counting triangles in a graph, particularly in dense and large graphs, drastically reducing the execution time with respect to the MapReduce implementation.
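
    As a small illustration of the underlying problem (not the Go dynamic-pipeline or ad-hoc MapReduce implementations compared in the paper), the sketch below counts triangles by intersecting the neighbour sets of the endpoints of each edge, visiting each triangle exactly once via a total order on the vertices.

      from collections import defaultdict

      def count_triangles(edges):
          adj = defaultdict(set)
          for u, v in edges:
              if u != v:
                  adj[u].add(v)
                  adj[v].add(u)
          count = 0
          for u, v in edges:
              if u < v:                                  # consider each edge once
                  count += sum(1 for w in adj[u] & adj[v] if w > v)
          return count

      # Usage: a 4-clique contains exactly 4 triangles.
      clique4 = [(a, b) for a in range(4) for b in range(a + 1, 4)]
      print(count_triangles(clique4))    # -> 4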

  12. Regularized Reduced Order Models

    CERN Document Server

    Wells, David; Xie, Xuping; Iliescu, Traian

    2015-01-01

    This paper puts forth a regularization approach for the stabilization of proper orthogonal decomposition (POD) reduced order models (ROMs) for the numerical simulation of realistic flows. Two regularized ROMs (Reg-ROMs) are proposed: the Leray ROM (L-ROM) and the evolve-then-filter ROM (EF-ROM). These new Reg-ROMs use spatial filtering to smooth (regularize) various terms in the ROMs. Two spatial filters are used: a POD projection onto a POD subspace (Proj) and a new POD differential filter (DF). The four Reg-ROM/filter combinations are tested in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient and the three-dimensional flow past a circular cylinder at a low Reynolds number (Re = 100). Overall, the most accurate Reg-ROM/filter combination is EF-ROM-DF. Furthermore, the DF generally yields better results than Proj. Finally, the four Reg-ROM/filter combinations are computationally efficient and generally more accurate than the standard Galerkin ROM.
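
    The POD ingredient shared by all of the Reg-ROMs above can be sketched in a few lines: build a POD basis from solution snapshots via the SVD and project the state onto the first few modes. The snapshot data and truncation rank below are made up for illustration, and the filtering steps themselves (Leray, evolve-then-filter, differential filter) are not shown.

      import numpy as np

      rng = np.random.default_rng(3)
      n_space, n_snap, rank = 200, 60, 4

      # Synthetic snapshot matrix: a few smooth spatial structures plus noise.
      x = np.linspace(0.0, 1.0, n_space)
      modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(3)], axis=1)
      snapshots = modes @ rng.normal(size=(3, n_snap)) + 0.01 * rng.normal(size=(n_space, n_snap))

      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      pod_basis = U[:, :rank]                    # first r POD modes
      print("captured snapshot energy:", np.sum(s[:rank] ** 2) / np.sum(s ** 2))

      # Galerkin-style reduction of one snapshot: reduced coordinates and reconstruction.
      u_full = snapshots[:, 0]
      a = pod_basis.T @ u_full
      u_rom = pod_basis @ a
      print("relative reconstruction error:", np.linalg.norm(u_full - u_rom) / np.linalg.norm(u_full))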

  13. Lean Manufacturing Implementation: an Approach to Reduce Production Cost

    Directory of Open Access Journals (Sweden)

    Iraswari

    2012-04-01

    Full Text Available Abstract: Lean Manufacturing Implementation: An Approach To Reduce Production Cost. Opportunities to improve production processes and reduce production cost through the implementation of lean manufacturing in small and medium garment manufacturing are presented in this research. This research shows that there is a possibility of a decrease in production cost and an increase in return on sales. Lean manufacturing implementation can eliminate waste in the production process. Lean manufacturing is a set of techniques for the identification and elimination of waste, gathered from the Ford Production System, Statistical Process Control and other techniques. Improvement of quality can be carried out while the time and cost of production are being reduced.

  14. Model Reduction via Reducibility Matrix

    Institute of Scientific and Technical Information of China (English)

    Musa Abdalla; Othman Alsmadi

    2006-01-01

    In this work, a new model reduction technique is introduced. The proposed technique is derived using the matrix reducibility concept. The eigenvalues of the reduced model are preserved; that is, the reduced model eigenvalues are a subset of the full order model eigenvalues. This preservation of the eigenvalues makes the mathematical model closer to the physical model. Finally, the outcomes of this method are fully illustrated using simulations of two numerical examples.
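
    The reducibility-matrix construction itself is not reproduced here; the short sketch below only illustrates the stated eigenvalue-subset property, using a plain modal truncation of a random linear state matrix as a stand-in.

      import numpy as np

      rng = np.random.default_rng(7)
      n, r = 6, 3
      A = rng.normal(size=(n, n)) - 3.0 * np.eye(n)    # some full-order state matrix

      eigvals, eigvecs = np.linalg.eig(A)
      keep = np.argsort(eigvals.real)[-r:]             # retain the r slowest modes
      V = eigvecs[:, keep]                             # right eigenvectors of the kept modes
      W = np.linalg.pinv(V)                            # left projector, so that W V = I

      A_red = W @ A @ V                                # reduced state matrix (r x r)
      print(np.sort_complex(np.linalg.eig(A_red)[0]))  # eigenvalues of the reduced model ...
      print(np.sort_complex(eigvals[keep]))            # ... are a subset of the full model's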

  15. Modelling reduced sparse data

    Science.gov (United States)

    Kozera, Ryszard; Noakes, Lyle

    2016-09-01

    In this paper we discuss the problem of fitting an ordered collection of points in arbitrary Euclidean space, called reduced data. We are not given here the corresponding interpolation knots. Instead, these are estimated by new knots upon minimizing a relevant highly nonlinear optimization scheme based on natural spline interpolation. The existence of a global minimizer (i.e. the collection of interpolation knots in ascending order) is also addressed in this paper. Finally, the Leap-Frog optimization tool is used to compute these knots approximating the unknown interpolation knots. This numerical scheme is subsequently compared with the Secant Method. Two illustrative examples are given.

  16. Implementing loudness models in Matlab

    OpenAIRE

    2004-01-01

    In the field of psychoacoustic analysis the goal is to construct a transformation that will map a time domain waveform into a domain that will best capture the response of a human perceiving sound. A key element of such transformations is the mapping between the sound intensity in decibels and its actual perceived loudness. A number of different loudness models exist to achieve this mapping. This paper examines implementation strategies for some of the more well-known models in the Matlab so...

  17. Implementing loudness models in Matlab

    OpenAIRE

    2004-01-01

    In the field of psychoacoustic analysis the goal is to construct a transformation that will map a time waveform into a domain that best captures the response of a human perceiving sound. A key element of such transformations is the mapping between the sound intensity in decibels and its actual perceived loudness. A number of different loudness models exist to achieve this mapping. This paper examines implementation strategies for some of the more well-known models in the Matlab software env...

  18. Implementing a new governance model.

    Science.gov (United States)

    Stanley-Clarke, Nicky; Sanders, Jackie; Munford, Robyn

    2016-05-16

    Purpose - The purpose of this paper is to discuss the lessons learnt from the process of implementing a new model of governance within Living Well, a New Zealand statutory mental health agency. Design/methodology/approach - It presents the findings from an organisational case study that involved qualitative interviews, meeting observations and document analysis. Archetype theory provided the analytical framework for the research enabling an analysis of both the formal structures and informal value systems that influenced the implementation of the governance model. Findings - The research found that the move to a new governance model did not proceed as planned. It highlighted the importance of staff commitment, the complexity of adopting a new philosophical approach and the undue influence of key personalities as key determining factors in the implementation process. The findings suggest that planners and managers within statutory mental health agencies need to consider the implications of any proposed governance change on existing roles and relationships, thinking strategically about how to secure professional commitment to change. Practical implications - There are ongoing pressures within statutory mental health agencies to improve the efficiency and effectiveness of organisational structures and systems. This paper has implications for how planners and managers think about the process of implementing new governance models within the statutory mental health environment in order to increase the likelihood of sustaining and embedding new approaches to service delivery. Originality/value - The paper presents insights into the process of implementing new governance models within a statutory mental health agency in New Zealand that has relevance for other jurisdictions.

  19. Reducing Uncertainty: Implementation of Heisenberg Principle to Measure Company Performance

    Directory of Open Access Journals (Sweden)

    Anna Svirina

    2015-08-01

    Full Text Available The paper addresses the problem of uncertainty reduction in the estimation of future company performance, which results from the wide range of probable efficiencies of an enterprise's intangible assets. To reduce this uncertainty, the paper suggests using principles of quantum economics, i.e. an implementation of the Heisenberg principle to measure the efficiency and potential of a company's intangible assets. It is proposed that for intangibles it is not possible to estimate both potential and efficiency at a certain time point. To provide a proof of this thesis, data on resource potential and efficiency from mid-Russian companies was evaluated within a deterministic approach, which did not allow the probability of achieving a certain resource efficiency to be evaluated, and a quantum approach, which allowed estimation of the central point around which the probable efficiency of resources is concentrated. Visualization of these approaches was performed by means of LabView software. It was proven that for the estimation of tangible asset performance a deterministic approach should be used, while for intangible assets the quantum approach allows better prediction of future performance. On the basis of these findings we propose a holistic approach towards the estimation of company resource efficiency in order to reduce uncertainty in modeling company performance.

  20. Fiscal 1999 survey report on basic feasibility in implementing model project for energy conservation by reducing electric power loss in Myanmar; 1999 nendo Myanmar ni okeru denryoku sonshitsu teigen sho energy model jigyo kihonteki jisshi kanosei chosa hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    A study was conducted on the feasibility of implementing a model project for reducing losses in electrical transmission and distribution in Myanmar, where the power supply situation is critical, with frequent power failures. This paper reports the fiscal 1999 results. Project A is the installation of one additional circuit (159 km, 130 MVA) on a 132 kV transmission line between Lawpita and Kalaw; the supporting structures for this line have already been designed for two circuits. The nation considers it the highest priority, with the purpose of reducing losses as well as providing a stable power supply (the line is directly connected to the Baluchaung hydro power plant, which generates a quarter of the nation's entire power supply). Project B is the new installation of a 33 kV distribution line between Aung Pin Lae and Mayangyan, scheduled for Mandalay, where large growth in the demand for electric power is anticipated. The construction expenditures are, in thousands of yen, 498,050 (for Japan) and 16,119 (for Myanmar) for project A, and 309,448 (for Japan) for project B. The measurable loss reductions are estimated at 1,280 kW and 660 kW respectively, with both EIRRs surpassing the social discount rate of 10%, indicating economically profitable projects. The payback periods for projects A and B are 25 and 10 years respectively. (NEDO)

  1. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    Energy Technology Data Exchange (ETDEWEB)

    Bonney, Matthew S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Brake, Matthew R.W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against each other and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared based on the time required for the evaluation of each model, where the Meta-Model requires the least amount of computation time by a significant margin. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. Analysis of the output distribution is examined by using a large Monte Carlo simulation along with a reduced simulation using Latin Hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.

  2. A Reducing Resistance to Change Model

    Directory of Open Access Journals (Sweden)

    Daniela Braduţanu

    2015-10-01

    Full Text Available The aim of this scientific paper is to present an original model for reducing resistance to change. After analyzing the existing literature, I have concluded that the resistance-to-change subject has gained popularity over the years, but there are not many models that could help managers implement an organizational change process more smoothly and, at the same time, effectively reduce employees' resistance. The proposed model is very helpful for managers and change agents who are confronted with a high degree of resistance when trying to implement a new change, as well as for researchers. The key contribution of this paper is that resistance is not necessarily bad and, if used appropriately, it can actually represent an asset. Managers must use employees' resistance.

  3. Community organizing goes to college: A practice-based model of community organizing to implement environmental strategies to reduce high-risk drinking on college campuses

    OpenAIRE

    Wagoner, Kimberly G.; Rhodes, Scott D.; Lentz, Ashley W.; Wolfson, Mark

    2010-01-01

    Community organizing is a successful method to leverage resources and build community capacity to identify and intervene upon health issues. However, published accounts documenting the systematic facilitation of the process are limited. This qualitative analysis explored community organizing using data collected as part of the Study to Prevent Alcohol Related Consequences (SPARC), a randomized community trial of 10 North Carolina colleges focused on reducing consequences of high-risk drinking...

  4. Enterprise resource planning implementation decision & optimization models

    Institute of Scientific and Technical Information of China (English)

    Wang Shaojun; Wang Gang; Lü Min; Gao Guoan

    2008-01-01

    To study the uncertain optimization problems of implementation schedule, time-cost trade-off and quality in enterprise resource planning (ERP) implementation, combined with the program evaluation and review technique (PERT), some optimization models are proposed, which include the implementation schedule model, the time-cost trade-off model, the quality model, and the implementation time-cost-quality synthetic optimization model. A PERT-embedded genetic algorithm (GA) based on stochastic simulation techniques is introduced for the solution of the optimization models. Finally, an example is presented to show that the models and algorithm are reasonable and effective, and can offer a reliable quantitative decision method for ERP implementation.

  5. Risk Reducing Effect of AIS Implementation on Collision Risk

    DEFF Research Database (Denmark)

    Lützen, Marie; Friis-Hansen, Peter

    2003-01-01

    AIS (Automatic Identification System) is a transponder system developed for sea traffic purposes. The system sends and receives important ship information and other safety-related information between other ships and shore-based AIS stations. The implementation of AIS has now been initiated and......, as a result, the community will undoubtedly observe an increase in navigational safety. However, to the authors' knowledge, no study has so far rigorously quantified the risk reducing effect of using AIS as an integrated part of the navigational system. The objective of this study is to fill this gap....... The risk reducing effect of AIS is quantified by building a Bayesian network facilitating an evaluation of the effect of AIS on the navigational officer's reaction ability in a potential, critical collision situation. The time-dependent change in the risk reducing effect on ship collisions is analysed...

  6. CSR Model Implementation from School Stakeholder Perspectives

    Science.gov (United States)

    Herrmann, Suzannah

    2006-01-01

    Despite comprehensive school reform (CSR) model developers' best intentions to make school stakeholders adhere strictly to the implementation of model components, school stakeholders implementing CSR models inevitably make adaptations to the CSR model. Adaptations are made to CSR models because school stakeholders internalize CSR model practices…

  7. Penerapan Reduced Impact Logging Menggunakan Monocable Winch (Pancang Tarik) (Implementing Reduced Impact Logging with Monocable Winch)

    Directory of Open Access Journals (Sweden)

    Yosep Ruslim

    2012-01-01

    Full Text Available Forest harvesting still encounters many problems, especially concerning impact on the residual stand and environmental damage. Implementing the reduced impact monocable winch and planning good skid trails should have a positive impact on work efficiency, as well as reducing damage to the residual stand and soil during felling and skidding activities. Reduced impact logging (RIL) with a monocable winch (Pancang Tarik) system has been tried in several IUPHHKs, and it can be concluded that the RIL monocable winch system can be applied practically and reduces impact on the residual stand and soil damage. Using this technology has many advantages, among others: cost efficiency, local manufacture, environmental friendliness, and high local community participation. Application of the monocable winch system in reduced impact logging is an effort to reduce economic and environmental damage when compared to the conventional system of ground-based skidding with bulldozers. The aim of this research is to verify the efficiency (operational cost), effectiveness (productivity) and time consumption of the monocable winch system. The results indicate that the implementation of the monocable winch system reduced soil damage by as much as 8% ha-1. The skidding cost with the monocable system is Rp95,000 m-3. This figure is significantly cheaper when compared with ground-based skidding with the bulldozer system, for which the skidding cost is around Rp165,000 m-3. Keywords: monocable winch, productivity, skidding cost, reduced impact logging, local community

  8. Guiding healthcare technology implementation: a new integrated technology implementation model.

    Science.gov (United States)

    Schoville, Rhonda R; Titler, Marita G

    2015-03-01

    Healthcare technology is used to improve delivery of safe patient care by providing tools for early diagnosis, ongoing monitoring, and treatment of patients. This technology includes bedside physiologic monitors, pulse oximetry devices, electrocardiogram machines, bedside telemetry, infusion pumps, ventilators, and electronic health records. Healthcare costs are a challenge for society, and hospitals are pushed to lower costs by discharging patients sooner. Healthcare technology is being used to facilitate these early discharges. There is little understanding of how healthcare facilities purchase, implement, and adopt technology. There are two areas of theories and models currently used when investigating technology: technology adoption and implementation science. Technology adoption focuses mainly on how the end users adopt technology, whereas implementation science describes methods, interventions, and variables that promote the use of evidence-based practice. These two approaches are not well informed by each other. In addition, amplifying the knowledge gap is the limited conceptualization of healthcare technology implementation frameworks. To bridge this gap, an all-encompassing model is needed. To understand the key technology implementation factors utilized by leading healthcare facilities, the prevailing technology adoption and implementation science theories and models were reviewed. From this review, an integrated technology implementation model will be set forth.

  9. Approximate Deconvolution Reduced Order Modeling

    CERN Document Server

    Xie, Xuping; Wang, Zhu; Iliescu, Traian

    2015-01-01

    This paper proposes a large eddy simulation reduced order model (LES-ROM) framework for the numerical simulation of realistic flows. In this LES-ROM framework, the proper orthogonal decomposition (POD) is used to define the ROM basis and a POD differential filter is used to define the large ROM structures. An approximate deconvolution (AD) approach is used to solve the ROM closure problem and develop a new AD-ROM. This AD-ROM is tested in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient (10^{-3})

  10. A Reduced Wind Power Grid Model for Research and Education

    DEFF Research Database (Denmark)

    Akhmatov, Vladislav; Lund, Torsten; Hansen, Anca Daniela;

    2007-01-01

    A reduced grid model of a transmission system with a number of central power plants, consumption centers, local wind turbines and a large offshore wind farm is developed and implemented in the simulation tool PowerFactory (DIgSILENT). The reduced grid model is given by Energinet.dk, Transmission ...

  11. Refinement of reduced-models for dynamic systems

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A refinement procedure for the reduced models of structural dynamic systems is presented in this article. The refinement procedure is to "tune" the parameters of a reduced model, which could be obtained from any traditional model reduction scheme, into an improved reduced model. Upon the completion of the refinement, the improved reduced model matches the dynamic characteristics - the chosen structural frequencies and their mode shapes - of the full order model. Mathematically, the procedure to implement the model refinement technique is an application of the recently developed cross-model cross-mode (CMCM) method for model updating. A numerical example of reducing a 5-DOF (degree-of-freedom) classical mass-spring (or shear-building) model into a 3-DOF generalized mass-spring model is demonstrated in this article.

  12. Rapid implementation of advanced constitutive models

    Science.gov (United States)

    Starman, Bojan; Halilovič, Miroslav; Vrh, Marko; Štok, Boris

    2013-12-01

    This paper presents a methodology based on the NICE integration scheme [1, 2] for the simple and rapid numerical implementation of a class of plasticity constitutive models. In this regard, an algorithm is purposely developed for the implementation of newly developed advanced constitutive models into an explicit finite element framework. The methodology organizes the problem state variables into an extended form, which allows the constitutive model equations to be arranged in such a way that the algorithm can optionally be extended, with minimal effort, to also integrate evolution equations related to the description of other specific phenomena, such as damage, distortional hardening, phase transitions, degradation, etc. To confirm the simplicity of the program implementation, the computational robustness, the effectiveness and the improved accuracy of the implemented integration algorithm, a deep drawing simulation of a cylindrical cup, performed in ABAQUS/Explicit, is considered as the case study. As a fairly complex model, the YLD2004-18p model [3, 4] is first implemented via the external subroutine VUMAT. Further, to give additional proof of the simplicity of the proposed methodology, a combination of the YLD2004-18p model and the Gurson-Tvergaard-Needleman (GTN) model is considered. As demonstrated, the implementation is indeed obtained in a very simple way.

  13. Buildings Lean Maintenance Implementation Model

    Science.gov (United States)

    Abreu, Antonio; Calado, João; Requeijo, José

    2016-11-01

    Nowadays, companies in global markets have to achieve high levels of performance and competitiveness to stay "alive". Within this assumption, building maintenance cannot be done in a casual and improvised way, due to the costs involved. Starting with some discussion about lean management and building maintenance, this paper introduces a model to support the Lean Building Maintenance (LBM) approach. Finally, based on a real case study from a Portuguese company, the benefits, challenges and difficulties are presented and discussed.

  14. Implementing school nursing strategies to reduce LGBTQ adolescent suicide: a randomized cluster trial study protocol.

    Science.gov (United States)

    Willging, Cathleen E; Green, Amy E; Ramos, Mary M

    2016-10-22

    Reducing youth suicide in the United States (U.S.) is a national public health priority, and lesbian, gay, bisexual, transgender, and queer or questioning (LGBTQ) youth are at elevated risk. The Centers for Disease Control and Prevention (CDC) endorses six evidence-based (EB) strategies that center on meeting the needs of LGBTQ youth in schools; however, fewer than 6 % of U.S. schools implement all of them. The proposed intervention model, "RLAS" (Implementing School Nursing Strategies to Reduce LGBTQ Adolescent Suicide), builds on the Exploration, Preparation, Implementation, and Sustainment (EPIS) conceptual framework and the Dynamic Adaptation Process (DAP) to implement EB strategies in U.S. high schools. The DAP accounts for the multilevel context of school settings and uses Implementation Resource Teams (IRTs) to facilitate appropriate expertise, advise on acceptable adaptations, and provide data feedback to make schools implementation ready and prepared to sustain changes. Mixed methods will be used to examine individual, school, and community factors influencing both implementation process and youth outcomes. A cluster randomized controlled trial will assess whether LGBTQ students and their peers in RLAS intervention schools (n = 20) report reductions in suicidality, depression, substance use, bullying, and truancy related to safety concerns compared to those in usual care schools (n = 20). Implementation progress and fidelity for each EB strategy in RLAS intervention schools will be examined using a modified version of the Stages of Implementation Completion checklist. During the implementation and sustainment phases, annual focus groups will be conducted with the 20 IRTs to document their experiences identifying and advancing adaptation supports to facilitate use of EB strategies and their perceptions of the DAP. The DAP represents a data-informed, collaborative, multiple stakeholder approach to progress from exploration to sustainment and obtain

  15. Implementing a trustworthy cost-accounting model.

    Science.gov (United States)

    Spence, Jay; Seargeant, Dan

    2015-03-01

    Hospitals and health systems can develop an effective cost-accounting model and maximize the effectiveness of their cost-accounting teams by focusing on six key areas: Implementing an enhanced data model. Reconciling data efficiently. Accommodating multiple cost-modeling techniques. Improving transparency of cost allocations. Securing department manager participation. Providing essential education and training to staff members and stakeholders.

  16. Concurrent Development of Model and Implementation

    CERN Document Server

    Gravell, A; Augusto, J C; Ferreira, C; Gruner, S

    2011-01-01

    This paper considers how a formal mathematically-based model can be used in support of evolutionary software development, and in particular how such a model can be kept consistent with the implementation as it changes to meet new requirements. A number of techniques are listed that can make use of such a model to enhance the development process, along with ways to keep model and implementation consistent. The effectiveness of these techniques is investigated through two case studies concerning the development of small e-business applications, a travel agent and a mortgage broker. Some successes are reported, notably in the use of rapid throwaway modelling to investigate design alternatives, and also in the use of close team working and model-based trace-checking to maintain synchronisation between model and implementation throughout the development. The main areas of weakness were seen to derive from deficiencies in tool support. Recommendations are therefore made for future improvements to tools supporting formal mo...

  17. Implementing the Schoolwide Enrichment Model in Brazil

    Science.gov (United States)

    de Souza Fleith, Denise; Soriano de Alencar, Eunice M. L.

    2010-01-01

    The Schoolwide Enrichment Model (SEM) has been one of the most widely used models in the education of the gifted in Brazil. It has inspired the political and pedagogical project of the Centers of Activities of High Abilities/Giftedness recently implemented in 27 Brazilian states by the Ministry of Education. In this article, our experience in…

  18. Internal Branding Implementation: Developing a Conceptual Model

    OpenAIRE

    Katja Terglav; Robert Kase; Maja Konecnik Ruzzier

    2012-01-01

    Internal branding is the process, which enables balanced view of the brand at all company levels. Its significance is aligning values and behaviors of employees with brand values and brand promises. In the article, we focus mainly on its implementation, which requires coordination of different functions in the company, for instance, internal marketing and human resource management. Based on findings of qualitative research, we present a conceptual model of internal branding implementation. Re...

  19. Brain-inspired Stochastic Models and Implementations

    KAUST Repository

    Al-Shedivat, Maruan

    2015-05-12

    One of the approaches to building artificial intelligence (AI) is to decipher the principles of brain function and to employ similar mechanisms for solving cognitive tasks, such as visual perception or natural language understanding, using machines. The recent breakthrough, named deep learning, demonstrated that large multi-layer networks of artificial neural-like computing units attain remarkable performance on some of these tasks. Nevertheless, such artificial networks remain only very loosely inspired by the brain, whose rich structures and mechanisms may further suggest new algorithms or even new paradigms of computation. In this thesis, we explore brain-inspired probabilistic mechanisms, such as neural and synaptic stochasticity, in the context of generative models. The two questions we ask here are: (i) what kind of models can describe a neural learning system built of stochastic components? and (ii) how can we implement such systems efficiently? To give specific answers, we consider two well known models and the corresponding neural architectures: the Naive Bayes model implemented with a winner-take-all spiking neural network and the Boltzmann machine implemented in a spiking or non-spiking fashion. We propose and analyze an efficient neuromorphic implementation of the stochastic neural firing mechanism and study the effects of synaptic unreliability on learning generative energy-based models implemented with neural networks.

  20. The Business Excellence Model for CSR Implementation?

    Directory of Open Access Journals (Sweden)

    Neergaard Peter

    2014-11-01

    Full Text Available Most of the Fortune 500 companies address Corporate Social Responsibility (CSR) on their websites. However, CSR remains a fluffy concept that is difficult to implement in organizations. The European Business Excellence Model has, since its introduction in 1992, served as a powerful tool for integrating quality in organizations. CSR was first introduced into the model in 2002. Since 2004 the European Foundation for Quality Management (EFQM) has been eager to promote the model as an effective tool for implementing CSR. The article discusses the potential of the model for this end and illustrates how a 2006 European Award winning company has used the model to integrate CSR. The company adapted the Business Excellence model to improve performance, stimulate innovation and build consensus.

  1. Simple implementation of general dark energy models

    Energy Technology Data Exchange (ETDEWEB)

    Bloomfield, Jolyon K. [MIT Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Ave #37241, Cambridge, MA, 02139 (United States); Pearson, Jonathan A., E-mail: jolyon@mit.edu, E-mail: jonathan.pearson@durham.ac.uk [Centre for Particle Theory, Department of Mathematical Sciences, Durham University, South Road, Durham, DH1 3LE (United Kingdom)

    2014-03-01

    We present a formalism for the numerical implementation of general theories of dark energy, combining the computational simplicity of the equation of state for perturbations approach with the generality of the effective field theory approach. An effective fluid description is employed, based on a general action describing single-scalar field models. The formalism is developed from first principles, and constructed keeping the goal of a simple implementation into CAMB in mind. Benefits of this approach include its straightforward implementation, the generality of the underlying theory, the fact that the evolved variables are physical quantities, and that model-independent phenomenological descriptions may be straightforwardly investigated. We hope this formulation will provide a powerful tool for the comparison of theoretical models of dark energy with observational data.

  2. Strategies for reducing material costs through implementation of clinical guidelines.

    Science.gov (United States)

    Vollman, K; Sprung, P; Posa, S; Ladin, D; Kachhal, S K

    1998-01-01

    This paper presents a case study where the efforts to improve clinical guidelines resulted in significant savings in material costs through the standardization of the supplies and negotiation of contracts with the suppliers. It also presents an approach that is now being used to standardize material and reduce supply costs in other areas of the health system.

  3. Generalized Reduced Order Model Generation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...

  4. The Business Excellence Model for CSR Implementation?

    DEFF Research Database (Denmark)

    Neergaard, Peter; Gjerdrum Pedersen, Esben Rahbek

    2012-01-01

    Most of the Fortune 500 companies address Corporate Social Responsibility (CSR) on their websites. However, CSR remains a fluffy concept that is difficult to implement in organizations. The European Business Excellence Model has, since its introduction in 1992, served as a powerful tool for integrating qual...

  5. Reducing Support Vector Machine Classification Error by Implementing Kalman Filter

    Directory of Open Access Journals (Sweden)

    Muhsin Hassan

    2013-08-01

    Full Text Available The aim of this paper is to demonstrate the capability of the Kalman Filter to reduce Support Vector Machine classification errors in classifying pipeline corrosion depth. In pipeline defect classification, it is important to increase the accuracy of the SVM classification so that one can avoid misclassification, which can lead to greater problems in monitoring pipeline defects and predicting pipeline leakage. In this paper, it is found that noisy data can greatly affect the performance of SVM. Hence, a Kalman Filter + SVM hybrid technique has been proposed as a solution to reduce SVM classification errors. Additive white Gaussian noise was added to the datasets in several stages to study the effect of noise on SVM classification accuracy. Three techniques have been studied in this experiment, namely SVM, a hybrid of Discrete Wavelet Transform + SVM, and a hybrid of Kalman Filter + SVM. Experimental results have been compared to find the most promising technique among them. MATLAB simulations show that the Kalman Filter and Support Vector Machine combined in a single system produce higher accuracy compared to the other two techniques.
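
    A rough sketch of the idea (not the authors' MATLAB pipeline) is given below: noisy feature signals are denoised with a simple scalar Kalman filter before being handed to an SVM classifier. The synthetic two-class "corrosion depth" data, the noise level and the filter tuning are assumptions for illustration only.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      def kalman_1d(z, q=1e-4, r=0.25):
          """Scalar random-walk Kalman filter applied along one signal."""
          x, p, out = z[0], 1.0, np.empty_like(z)
          for k, zk in enumerate(z):
              p += q                      # predict
              g = p / (p + r)             # Kalman gain
              x += g * (zk - x)           # update
              p *= (1.0 - g)
              out[k] = x
          return out

      rng = np.random.default_rng(0)
      n, length = 400, 64
      labels = rng.integers(0, 2, n)                    # two synthetic "defect depth" classes
      clean = np.where(labels[:, None] == 1, 1.0, 0.3) * np.sin(np.linspace(0, 6, length))
      noisy = clean + rng.normal(scale=0.8, size=(n, length))    # additive white Gaussian noise

      filtered = np.apply_along_axis(kalman_1d, 1, noisy)
      for name, X in [("raw", noisy), ("Kalman-filtered", filtered)]:
          Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=1)
          acc = SVC(kernel="rbf").fit(Xtr, ytr).score(Xte, yte)
          print(f"{name:16s} SVM accuracy: {acc:.3f}")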

  6. Berkeley-Madonna implementation of Ikeda's model.

    Science.gov (United States)

    Fontecave-Jallon, J; Baconnier, P

    2007-01-01

    Starting from one model, we check the possibility of using the Berkeley-Madonna software to transpose and simulate some existing biological integrated models. The considered model is the one of Ikeda et al., proposed in 1979, which deals with fluid regulation and is very well described mathematically in the original paper. Despite a few mistakes or bugs, the model has been easily and successfully implemented under Berkeley-Madonna. We recover the same simulation results as Ikeda, and new simulations can now easily be carried out, thanks to the user-friendly qualities of Berkeley-Madonna.

  7. Technical and Economic Assessment of the Implementation of Measures for Reducing Energy Losses in Distribution Systems

    Science.gov (United States)

    Aguila, Alexander; Wilson, Jorge

    2017-07-01

    This paper develops a methodology to assess a group of electrical improvement measures in distribution systems, based on complementary technical and economic criteria. In order to address the problem of energy losses in distribution systems, a technical and economic analysis was performed based on a mathematical model that establishes a direct relationship between the energy saved through minimized losses and the costs of implementing the proposed measures. This paper analyses the feasibility of reducing energy losses in distribution systems by replacing existing network conductors with larger cross-section conductors and by changing the distribution voltage to higher levels. The methodology provides a highly efficient mathematical tool for analysing the feasibility of implementing improvement projects based on their costs, which is very useful for distribution companies and will serve as a starting point for the analysis of this type of project in distribution systems.

  8. Escherichia coli growth under modeled reduced gravity

    Science.gov (United States)

    Baker, Paul W.; Meyer, Michelle L.; Leff, Laura G.

    2004-01-01

    Bacteria exhibit varying responses to modeled reduced gravity that can be simulated by clino-rotation. When Escherichia coli was subjected to different rotation speeds during clino-rotation, significant differences between modeled reduced gravity and normal gravity controls were observed only at higher speeds (30-50 rpm). There was no apparent effect of removing samples on the results obtained. When E. coli was grown in minimal medium (at 40 rpm), cell size was not affected by modeled reduced gravity and there were few differences in cell numbers. However, in higher nutrient conditions (i.e., dilute nutrient broth), total cell numbers were higher and cells were smaller under reduced gravity compared to normal gravity controls. Overall, the responses to modeled reduced gravity varied with nutrient conditions; larger surface to volume ratios may help compensate for the zone of nutrient depletion around the cells under modeled reduced gravity.

  9. Normal forms for reduced stochastic climate models

    NARCIS (Netherlands)

    Majda, A.J.; Franzke, C.; Crommelin, D.T.

    The systematic development of reduced low-dimensional stochastic climate models from observations or comprehensive high-dimensional climate models is an important topic for atmospheric low-frequency variability, climate sensitivity, and improved extended range forecasting. Here techniques from

  10. The Reduced RUM as a Logit Model: Parameterization and Constraints.

    Science.gov (United States)

    Chiu, Chia-Yi; Köhn, Hans-Friedrich

    2016-06-01

    Cognitive diagnosis models (CDMs) for educational assessment are constrained latent class models. Examinees are assigned to classes of intellectual proficiency defined in terms of cognitive skills called attributes, which an examinee may or may not have mastered. The Reduced Reparameterized Unified Model (Reduced RUM) has received considerable attention among psychometricians. Markov Chain Monte Carlo (MCMC) or Expectation Maximization (EM) are typically used for estimating the Reduced RUM. Commercial implementations of the EM algorithm are available in the latent class analysis (LCA) routines of Latent GOLD and Mplus, for example. Fitting the Reduced RUM with an LCA routine requires that it be reparameterized as a logit model, with constraints imposed on the parameters. For models involving two attributes, these have been worked out. However, for models involving more than two attributes, the parameterization and the constraints are nontrivial and currently unknown. In this article, the general parameterization of the Reduced RUM as a logit model involving any number of attributes and the associated parameter constraints are derived. As a practical illustration, the LCA routine in Mplus is used for fitting the Reduced RUM to two synthetic data sets and to a real-world data set; for comparison, the results obtained by using the MCMC implementation in OpenBUGS are also provided.

  11. A quantum-implementable neural network model

    Science.gov (United States)

    Chen, Jialin; Wang, Lingli; Charbon, Edoardo

    2017-10-01

    A quantum-implementable neural network, namely the quantum probability neural network (QPNN) model, is proposed in this paper. QPNN can use quantum parallelism to trace all possible network states to improve the result. Due to its unique quantum nature, this model is robust to several types of quantum noise under certain conditions and can be efficiently implemented on the qubus quantum computer. Another advantage is that QPNN can be used as a memory to retrieve the most relevant data and even to generate new data. The MATLAB experimental results on Iris data classification and MNIST handwriting recognition show that far fewer neuron resources are required in QPNN to obtain a good result than in a classical feedforward neural network. The proposed QPNN model indicates that quantum effects are useful for real-life classification tasks.

  12. Model-Driven Development in implementing integration flows

    Directory of Open Access Journals (Sweden)

    Tomasz Górski

    2015-04-01

    Full Text Available Integration of many different IT systems makes an integration project highly complex. The process of constructing architectural models and source code can be automated through the application of transformations. As a result, the duration of design or implementation, as well as the work input involved, can be reduced. The purpose of the paper is to present an approach to automating the design of one of the key elements of an integration platform, namely integration flows. The author proposes the model-to-code transformation IntegrationFlow-to-Java, which automates the implementation of integration flow applications for selected mediation patterns. The integration flows generator has been incorporated as a plug-in into IBM Rational Software Architect (RSA). The RSA plug-in generates a complete Java EE integration flow application from a mediation flow diagram, thus eliminating the design and programming stage in WebSphere Integration Developer and reducing development time and licence costs. Model-Driven Development is an approach which can lead to automation of the design and programming stages in software development. The IntegrationFlow-to-Java transformation offers an opportunity to reduce the duration of integration flow implementation forty-fold (with one hundred flows to be implemented). The outcomes support the significance of using transformations when designing complex IT systems, especially when integration solutions are developed.

  13. Airport Gate Assignment: New Model and Implementation

    CERN Document Server

    Li, Chendong

    2008-01-01

    Airport gate assignment is of great importance in airport operations. In this paper, we study the Airport Gate Assignment Problem (AGAP), propose a new model and implement the model with the Optimization Programming Language (OPL). With the objective of minimizing the number of conflicts between any two adjacent aircraft assigned to the same gate, we build a mathematical model with logical and binary constraints, which provides an efficient evaluation criterion for airlines to assess their current gate assignments. To illustrate the feasibility of the model, we construct experiments with data obtained from Continental Airlines at Houston George Bush Intercontinental Airport (IAH), which indicate that our model is both efficient and effective. Moreover, we interpret the experimental results, which further demonstrate that the proposed model can provide a powerful tool for airline companies to estimate the efficiency of their current gate assignments.
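
    A hedged sketch of an AGAP-style binary model is shown below using the PuLP library instead of OPL; the flights, gates and overlap-based conflict definition are toy assumptions, not the Continental Airlines data.

```python
# Sketch of an AGAP-style binary model with PuLP rather than OPL; flight
# times, gates and the overlap-based conflict definition are toy assumptions.
import pulp

flights = {"F1": (0, 50), "F2": (40, 90), "F3": (60, 120), "F4": (10, 45)}
gates = ["G1", "G2"]

def overlap(a, b):
    (s1, e1), (s2, e2) = flights[a], flights[b]
    return s1 < e2 and s2 < e1

pairs = [(a, b) for i, a in enumerate(flights) for b in list(flights)[i + 1:]
         if overlap(a, b)]

prob = pulp.LpProblem("AGAP", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (flights, gates), cat="Binary")   # flight -> gate
z = pulp.LpVariable.dicts("z", (range(len(pairs)), gates), cat="Binary")

# Every flight gets exactly one gate.
for f in flights:
    prob += pulp.lpSum(x[f][g] for g in gates) == 1

# z[p][g] is forced to 1 whenever both flights of an overlapping pair share gate g.
for p, (a, b) in enumerate(pairs):
    for g in gates:
        prob += z[p][g] >= x[a][g] + x[b][g] - 1

# Objective: number of same-gate conflicts between time-overlapping flights.
prob += pulp.lpSum(z[p][g] for p in range(len(pairs)) for g in gates)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for f in flights:
    gate = next(g for g in gates if pulp.value(x[f][g]) > 0.5)
    print(f, "->", gate)
print("conflicts:", int(pulp.value(prob.objective)))
```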

  14. Implementing network constraints in the EMPS model

    Energy Technology Data Exchange (ETDEWEB)

    Helseth, Arild; Warland, Geir; Mo, Birger; Fosso, Olav B.

    2010-02-15

    This report concerns the coupling of detailed market and network models for long-term hydro-thermal scheduling. Currently, the EPF model (Samlast) is the only tool available for this task for actors in the Nordic market. A new prototype for solving the coupled market and network problem has been developed. The prototype is based on the EMPS model (Samkjoeringsmodellen). Results from the market model are distributed to a detailed network model, where a DC load flow detects if there are overloads on monitored lines or intersections. In case of overloads, network constraints are generated and added to the market problem. Theoretical and implementation details for the new prototype are elaborated in this report. The performance of the prototype is tested against the EPF model on a 20-area Nordic dataset. (Author)

  15. Lean business model and implementation of a geriatric fracture center.

    Science.gov (United States)

    Kates, Stephen L

    2014-05-01

    Geriatric hip fracture is a common event associated with high costs of care and often with suboptimal outcomes for the patients. Ideally, a new care model to manage geriatric hip fractures would address both quality and safety of patient care as well as the need for reduced costs of care. The geriatric fracture center model of care is one such model reported to improve both outcomes and quality of care. It is a lean business model applied to medicine. This article describes basic lean business concepts applied to geriatric fracture care and information needed to successfully implement a geriatric fracture center. It is written to assist physicians and surgeons in their efforts to implement an improved care model for their patients. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Grid Oriented Implementation of the Tephra Model

    Science.gov (United States)

    Coltelli, M.; D'Agostino, M.; Drago, A.; Pistagna, F.; Prestifilippo, M.; Reitano, D.; Scollo, S.; Spata, G.

    2009-04-01

    TEPHRA is a two-dimensional advection-diffusion model implemented by Bonadonna et al. [2005] that describes the sedimentation of particles from volcanic plumes. The model is used by INGV - Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Catania, to forecast tephra dispersal during Etna volcanic events. Every day, weather forecasts provided by the Italian Air Force Meteorological Office in Rome and by the hydrometeorological service of ARPA in Emilia-Romagna are processed by the TEPHRA model, together with volcanological parameters, to simulate two different eruptive scenarios of Mt. Etna (corresponding to the 1998 and 2002-03 Etna eruptions). The model outputs are plotted on maps and transferred to Civil Protection, which is responsible for issuing public warnings and planning mitigation measures. The TEPHRA model is implemented in ANSI C using MPI to maximize parallel computation. Currently the model runs on an INGV Beowulf cluster. In order to improve performance, we ported it to the PI2S2 Sicilian grid infrastructure within the "PI2S2 Project" (2006-2008). We configured the application to run on the grid using gLite middleware, analyzed the performance obtained and compared it with that of the local cluster. As TEPHRA needs to run quickly so that the dispersal maps can be transferred promptly to Civil Protection, we also worked to minimize and stabilize the grid job-scheduling time by using a customized high-priority queue called the Emergency Queue.
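
    The sketch below illustrates the kind of two-dimensional advection-diffusion update that underlies tephra dispersal models such as TEPHRA; the wind components, diffusivity, grid and source term are placeholder values, not the operational Etna configuration.

```python
# Minimal 2-D advection-diffusion time-stepper of the type underlying tephra
# dispersal models; wind speed, diffusivity and source term are placeholders.
import numpy as np

nx, ny, dx, dt = 100, 100, 500.0, 5.0       # grid spacing [m], time step [s]
u, v = 10.0, 2.0                            # wind components [m/s] (assumed)
K = 200.0                                   # horizontal diffusivity [m^2/s]

c = np.zeros((ny, nx))
c[ny // 2, 5] = 1.0e3                       # point release near the vent

def step(c):
    # upwind advection (u, v > 0 assumed) + centred diffusion, explicit Euler
    adv_x = -u * (c - np.roll(c, 1, axis=1)) / dx
    adv_y = -v * (c - np.roll(c, 1, axis=0)) / dx
    lap = (np.roll(c, 1, 0) + np.roll(c, -1, 0) +
           np.roll(c, 1, 1) + np.roll(c, -1, 1) - 4 * c) / dx**2
    return c + dt * (adv_x + adv_y + K * lap)

for _ in range(500):                        # ~42 minutes of transport
    c = step(c)
print("total mass (conserved under the periodic boundaries used here):", c.sum())
print("plume centre has moved to column", int(np.argmax(c.max(axis=0))))
```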

  17. A tool box for implementing supersymmetric models

    Science.gov (United States)

    Staub, Florian; Ohl, Thorsten; Porod, Werner; Speckner, Christian

    2012-10-01

    We present a framework for performing a comprehensive analysis of a large class of supersymmetric models, including spectrum calculation, dark matter studies and collider phenomenology. To this end, the respective model is defined in an easy and straightforward way using the Mathematica package SARAH. SARAH then generates model files for CalcHep which can be used with micrOMEGAs as well as model files for WHIZARD and O'Mega. In addition, Fortran source code for SPheno is created which facilitates the determination of the particle spectrum using two-loop renormalization group equations and one-loop corrections to the masses. As an additional feature, the generated SPheno code can write out input files suitable for use with HiggsBounds to apply bounds coming from the Higgs searches to the model. Combining all programs provides a closed chain from model building to phenomenology. Program summary Program title: SUSY Phenomenology toolbox. Catalog identifier: AEMN_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMN_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 140206. No. of bytes in distributed program, including test data, etc.: 1319681. Distribution format: tar.gz. Programming language: Autoconf, Mathematica. Computer: PC running Linux, Mac. Operating system: Linux, Mac OS. Classification: 11.6. Nature of problem: Comprehensive studies of supersymmetric models beyond the MSSM is considerably complicated by the number of different tasks that have to be accomplished, including the calculation of the mass spectrum and the implementation of the model into tools for performing collider studies, calculating the dark matter density and checking the compatibility with existing collider bounds (in particular, from the Higgs searches). Solution method: The

  18. Merging Digital Surface Models Implementing Bayesian Approaches

    Science.gov (United States)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is deemed preferable when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the lack of data can then be compensated by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been employed. In addition to the a priori estimates, GNSS RTK measurements have been collected in the field and are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results show that the model was able to improve the quality of the DSMs and some of their characteristics, such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
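
    A hedged sketch of the precision-weighted (Bayesian) fusion idea is given below on synthetic tiles; the per-pixel variances and the smoothness prior standing in for the local-entropy weighting are assumptions, not the WorldView-1/Pleiades processing of the paper.

```python
# Hedged sketch of Bayesian (precision-weighted) fusion of two DSM tiles.
# Per-pixel variances, the smoothness prior and the synthetic surfaces are
# illustrative assumptions, not the WorldView-1 / Pleiades data of the paper.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(42)
truth = np.tile(np.linspace(10.0, 20.0, 64), (64, 1))        # sloping roof

dsm_a = truth + rng.normal(0.0, 0.4, truth.shape)            # sensor A, sigma = 0.4 m
dsm_b = truth + rng.normal(0.0, 0.8, truth.shape)            # sensor B, sigma = 0.8 m
var_a, var_b = 0.4**2, 0.8**2

# Prior: roofs assumed locally smooth -> a 3x3 mean of DSM A acts as the prior
# mean, with a variance standing in for the local-entropy weighting.
prior_mean = uniform_filter(dsm_a, size=3)
var_prior = 1.0**2

# Posterior mean under independent Gaussian observations = precision-weighted
# average of the two DSMs and the prior.
w_a, w_b, w_p = 1 / var_a, 1 / var_b, 1 / var_prior
merged = (w_a * dsm_a + w_b * dsm_b + w_p * prior_mean) / (w_a + w_b + w_p)

for name, dsm in [("DSM A", dsm_a), ("DSM B", dsm_b), ("merged", merged)]:
    rmse = np.sqrt(np.mean((dsm - truth) ** 2))               # check-point RMSE
    print(f"{name:7s} RMSE vs truth: {rmse:.3f} m")
```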

  19. MERGING DIGITAL SURFACE MODELS IMPLEMENTING BAYESIAN APPROACHES

    Directory of Open Access Journals (Sweden)

    H. Sadeq

    2016-06-01

    Full Text Available In this research different DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian Approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades. It is deemed preferable to use a Bayesian Approach when the data obtained from the sensors are limited and it is difficult to obtain many measurements or it would be very costly, thus the problem of the lack of data can be solved by introducing a priori estimations of data. To infer the prior data, it is assumed that the roofs of the buildings are specified as smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West-End of Glasgow containing different kinds of buildings, such as flat roofed and hipped roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model was successfully able to improve the quality of the DSMs and improving some characteristics such as the roof surfaces, which consequently led to better representations. In addition to that, the developed model has been compared with the well established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied on DSMs that were derived from satellite imagery, it can be applied to any other sourced DSMs.

  20. Implementation of New Management Agile Technique for Reducing Overtime and Increasing Customer Satisfaction

    Directory of Open Access Journals (Sweden)

    HARSIMARJEET KHURANA,

    2011-01-01

    Full Text Available This model has been implemented in a production unit located at Ludhiana. The model concentrates on how team members should function in order to improve organizational performance in a continuously changing situation. The central idea behind this model is to minimize the problem of overtime and to monitor the performance of team members. The implementation of the model has shown excellent results in reducing overtime, improving the performance of team members and increasing customer satisfaction.

  1. Reduced order modeling of grid-connected photovoltaic inverter systems

    Science.gov (United States)

    Wasynczuk, O.; Krause, P. C.; Anwah, N. A.

    1988-04-01

    This report summarizes the development of reduced order models of three-phase, line- and self-commutated inverter systems. This work was performed as part of the National Photovoltaics Program within the United States Department of Energy and was supervised by Sandia National Laboratories. The overall objective of the national program is to promote the development of low cost, reliable terrestrial photovoltaic systems for widespread use in residential, commercial and utility applications. The purpose of the effort reported herein is to provide reduced order models of three-phase, line- and self-commutated PV systems suitable for implementation into transient stability programs, which are commonly used to predict the stability characteristics of large-scale power systems. The accuracy of the reduced models is verified by comparing the response characteristics predicted therefrom with the response established using highly detailed PV system models in which the inverter switching is represented in detail.

  2. Implementing Problem Resolution Models in Remedy

    CERN Document Server

    Marquina, M A; Ramos, R

    2000-01-01

    This paper defines the concept of a Problem Resolution Model (PRM) and describes the current implementation made by the User Support unit at CERN. One of the main challenges of User Support services in any High Energy Physics institute/organization is to address the solving of the computing-related problems faced by their researchers. The User Support group at CERN is the IT unit in charge of modeling the operations of the Help Desk and acts as a second-level support to some of the support lines whose problems are received at the Help Desk. The motivation behind the use of a PRM is to provide well-defined procedures and methods to react efficiently to a request for solving a problem, providing advice, information etc. A PRM is materialized as a workflow with a set of defined states in which a problem can be. Problems move from one state to another according to actions decided by the person who is handling them. A PRM can be implemented by a computer application, generally referred to as a Problem Report...
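
    A minimal sketch of a PRM-style workflow is shown below; the state names and allowed transitions are illustrative assumptions, not the actual CERN Remedy configuration.

```python
# Minimal sketch of a PRM-style workflow: a problem report moves between
# defined states through allowed transitions.  State names and transitions
# are illustrative assumptions, not the CERN Remedy configuration.
ALLOWED = {
    "New":             {"Assigned"},
    "Assigned":        {"In Progress", "New"},
    "In Progress":     {"Waiting on User", "Resolved"},
    "Waiting on User": {"In Progress"},
    "Resolved":        {"Closed", "In Progress"},   # reopen if the fix failed
    "Closed":          set(),
}

class ProblemReport:
    def __init__(self, summary):
        self.summary = summary
        self.state = "New"
        self.history = [self.state]

    def move_to(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

pr = ProblemReport("AFS volume not mounting")
for s in ["Assigned", "In Progress", "Waiting on User", "In Progress",
          "Resolved", "Closed"]:
    pr.move_to(s)
print(" -> ".join(pr.history))
```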

  3. Spatial Aggregation: Data Model and Implementation

    CERN Document Server

    Gomez, Leticia; Kuijpers, Bart; Vaisman, Alejandro

    2007-01-01

    Data aggregation in Geographic Information Systems (GIS) is only marginally present in commercial systems nowadays, mostly through ad-hoc solutions. In this paper, we first present a formal model for representing spatial data. This model integrates geographic data and information contained in data warehouses external to the GIS. We define the notion of geometric aggregation, a general framework for aggregate queries in a GIS setting. We also identify the class of summable queries, which can be efficiently evaluated by precomputing the overlay of two or more of the thematic layers involved in the query. We also sketch a language, denoted GISOLAP-QL, for expressing queries that involve GIS and OLAP features. In addition, we introduce Piet, an implementation of our proposal, that makes use of overlay precomputation for answering spatial queries (aggregate or not). Our experimental evaluation showed that for a certain class of geometric queries with or without aggregation, overlay precomputation outperforms R-tre...

  4. Croatian Cartographic Data Model, Creation and Implementation

    Directory of Open Access Journals (Sweden)

    Zvonko Biljecki

    2003-09-01

    Full Text Available The Croatian Cartographic Data Model (KMP) project was started as a component of the STOKIS (Official Topographic and Cartographic Information System; SGA, 1995) project. The cartographic data model conforms to CROTIS (Topographic Information System of the Republic of Croatia). It enables the generation of the cartographic database from the topographic one. Classification of data is performed by logical grouping of objects and depends on the geometry, type and properties of features. The cartographic data model describes the structure of the cartographic database and all attributes, categories, types and fields. Description of geometry and exchange of data is performed according to the specifications of ISO standards, adapted by Technical Committee ISO/TC 211, Geographic information/Geomatics, and the OpenGIS Consortium. Creation of the cartographic data model is the basis for the creation of the cartographic database. The next step is the direct implementation of a vector model that contains graphic and alphanumeric elements. The cartographic key must stay identical to that of TK25 and maps of other scales.

  5. Attic Hatch Model Implementation Using the Bondgraph

    Directory of Open Access Journals (Sweden)

    Hany Ferdinando

    2004-01-01

    Full Text Available The Bondgraph has been used widely in building and simulating models. This paper addresses an attic hatch problem (a planar mechanism problem) using the Bondgraph approach. Planar mechanisms are idealized systems that can translate and rotate in one plane. As with linear translation or uni-axial rotation, planar motions can be considered a special case of the spatial motions of mechanisms. In the attic hatch system, people have to push with a lot of force to open the hatch. This led to the idea of adding a mass-pulley mechanism so that it can be opened and closed easily. Before implementing this idea, it needs to be simulated first, because many parameter combinations have to be adjusted. The simulation also prevents unnecessary holes being made during implementation (as would happen with trial and error). This paper deals only with building and simulating this idea, without going further into the real implementation. The simulation results show that several parameters should be chosen carefully in order to achieve the final goal, i.e. to open the hatch easily and quickly. The 20-Sim simulation package is used to verify the model. Abstract in Bahasa Indonesia (translated): The Bondgraph has been widely used to simulate a model of a plant. This paper discusses the problem that arises in building an attic hatch (a door to the attic commonly found in European houses) using the Bondgraph. The attic hatch is a planar mechanism problem. Planar mechanisms are ideal systems that can translate and rotate in a single plane. As with linear translation or rotation about a single axis, planar motion can be classified as a special case in mechanics. With an attic hatch, people have to push the door to open it. This gave rise to the idea of adding a pulley and counterweight to make opening and closing easier. Before adding the pulley and counterweight, a simulation needs to be carried out to avoid making unnecessary holes.

  6. Reduced Chemical Kinetic Model for Titan Entries

    Directory of Open Access Journals (Sweden)

    Romain Savajano

    2011-01-01

    Full Text Available A reduced chemical kinetic model for Titan's atmosphere has been developed. This new model, with 18 species and 28 reactions, includes the main features of a more complete scheme while respecting the radiative fluxes. It has been verified against three key elements: a sensitivity analysis, the equilibrium chemical composition using shock-tube simulations in CHEMKIN, and the results of computational fluid dynamics (CFD) simulations.

  7. A reduced-rank approach for implementing higher-order Volterra filters

    Science.gov (United States)

    O. Batista, Eduardo L.; Seara, Rui

    2016-12-01

    The use of Volterra filters in practical applications is often limited by their high computational burden. To cope with this problem, many strategies for implementing Volterra filters with reduced complexity have been proposed in the open literature. Some of these strategies are based on reduced-rank approaches obtained by defining a matrix of filter coefficients and applying the singular value decomposition to such a matrix. Then, discarding the smaller singular values, effective reduced-complexity Volterra implementations can be obtained. The application of this type of approach to higher-order Volterra filters (considering orders greater than 2) is however not straightforward, which is especially due to some difficulties encountered in the definition of higher-order coefficient matrices. In this context, the present paper is devoted to the development of a novel reduced-rank approach for implementing higher-order Volterra filters. Such an approach is based on a new form of Volterra kernel implementation that allows decomposing higher-order kernels into structures composed only of second-order kernels. Then, applying the singular value decomposition to the coefficient matrices of these second-order kernels, effective implementations for higher-order Volterra filters can be obtained. Simulation results are presented aiming to assess the effectiveness of the proposed approach.
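
    The rank-reduction idea for a second-order kernel can be sketched as follows: arrange the quadratic coefficients in a matrix, take its SVD and keep only the leading singular triplets; the toy kernel below is an assumption, and the paper's decomposition of higher-order kernels into second-order structures is not reproduced here.

```python
# Sketch of SVD-based rank reduction of a second-order Volterra kernel.  The
# kernel itself is a toy example; the paper's decomposition of higher-order
# kernels into second-order structures is not reproduced.
import numpy as np

rng = np.random.default_rng(3)
N = 16                                  # memory length
# Toy quadratic kernel (symmetric coefficient matrix H2), close to rank 2.
a = rng.standard_normal((N, 2))
H2 = a @ a.T + 0.01 * rng.standard_normal((N, N))
H2 = 0.5 * (H2 + H2.T)

U, s, Vt = np.linalg.svd(H2)
r = 2                                   # retained rank
H2_r = (U[:, :r] * s[:r]) @ Vt[:r, :]   # rank-r approximation

def quad_output(H, x):
    """y[n] = sum_{i,j} H[i,j] x[n-i] x[n-j] for the most recent sample only."""
    xv = x[::-1][:N]                    # last N samples
    return xv @ H @ xv

x = rng.standard_normal(200)
y_full = quad_output(H2, x)
y_red = quad_output(H2_r, x)
print(f"kept {r}/{N} rank-1 terms; relative output error "
      f"{abs(y_full - y_red) / abs(y_full):.3e}")
# Each retained singular triplet corresponds to a squared linear-filter branch,
# which is what cuts the implementation cost from O(N^2) to O(r*N).
```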

  8. A probabilistic model for reducing medication errors.

    Directory of Open Access Journals (Sweden)

    Phung Anh Nguyen

    Full Text Available BACKGROUND: Medication errors are common, life threatening, costly but preventable. Information technology and automated systems are highly efficient for preventing medication errors and therefore widely employed in hospital settings. The aim of this study was to construct a probabilistic model that can reduce medication errors by identifying uncommon or rare associations between medications and diseases. METHODS AND FINDINGS: Association rule mining techniques are applied to 103.5 million prescriptions from Taiwan's National Health Insurance database. The dataset included 204.5 million diagnoses with ICD9-CM codes and 347.7 million medications by using ATC codes. Disease-Medication (DM) and Medication-Medication (MM) associations were computed by their co-occurrence, and association strength was measured by the interestingness, or lift, values, which are referred to as Q values. The DMQs and MMQs were used to develop the AOP model to predict the appropriateness of a given prescription. Validation of this model was done by comparing the evaluations performed by the AOP model with those verified by human experts. The results showed 96% accuracy for appropriate and 45% accuracy for inappropriate prescriptions, with a sensitivity and specificity of 75.9% and 89.5%, respectively. CONCLUSIONS: We successfully developed the AOP model as an efficient tool for automatic identification of uncommon or rare associations between disease-medication and medication-medication in prescriptions. The AOP model helps to reduce medication errors by alerting physicians, improving the patients' safety and the overall quality of care.
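
    A toy illustration of the lift ("Q value") computation behind the AOP model is given below; the prescription counts, codes and the flagging threshold are invented for the example and are not drawn from the National Health Insurance data.

```python
# Toy illustration of the lift ("Q value") computation behind the AOP model:
# an association is flagged as rare when its lift is low.  The counts and the
# 1.0 threshold are invented for the example, not taken from the NHI data.
from collections import Counter

# Each prescription: (set of ICD-9 diagnoses, set of ATC medication codes).
prescriptions = [
    ({"250.0"}, {"A10BA02"}),            # diabetes -> metformin
    ({"250.0"}, {"A10BA02", "C09AA05"}),
    ({"401.9"}, {"C09AA05"}),            # hypertension -> ramipril
    ({"401.9"}, {"C09AA05"}),
    ({"250.0"}, {"N02BE01"}),            # diabetes -> paracetamol
]
n = len(prescriptions)

dx_count, med_count, pair_count = Counter(), Counter(), Counter()
for dxs, meds in prescriptions:
    dx_count.update(dxs)
    med_count.update(meds)
    pair_count.update((d, m) for d in dxs for m in meds)

def lift(d, m):
    """lift = P(d, m) / (P(d) * P(m)); ~1 means independence, <1 a rare pair."""
    return (pair_count[(d, m)] / n) / ((dx_count[d] / n) * (med_count[m] / n))

new_rx = ({"250.0"}, {"C09AA05"})        # prescription to screen
for d in new_rx[0]:
    for m in new_rx[1]:
        q = lift(d, m)
        verdict = "OK" if q >= 1.0 else "flag for review"
        print(f"DM pair ({d}, {m}): Q = {q:.2f} -> {verdict}")
```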

  9. Accelerating transient simulation of linear reduced order models.

    Energy Technology Data Exchange (ETDEWEB)

    Thornquist, Heidi K.; Mei, Ting; Keiter, Eric Richard; Bond, Brad

    2011-10-01

    Model order reduction (MOR) techniques have been used to facilitate the analysis of dynamical systems for many years. Although existing model reduction techniques are capable of providing huge speedups in the frequency domain analysis (i.e. AC response) of linear systems, such speedups are often not obtained when performing transient analysis on the systems, particularly when coupled with other circuit components. Reduced system size, which is the ostensible goal of MOR methods, is often insufficient to improve transient simulation speed on realistic circuit problems. It can be shown that making the correct reduced order model (ROM) implementation choices is crucial to the practical application of MOR methods. In this report we investigate methods for accelerating the simulation of circuits containing ROM blocks using the circuit simulator Xyce.
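
    A hedged sketch of projection-based MOR for a linear state-space system is shown below: snapshots of the full model are collected, a POD basis is extracted, and the system matrices are Galerkin-projected; the toy diffusion-like system stands in for the circuit blocks handled by Xyce.

```python
# Hedged sketch of projection-based MOR for a linear system x' = A x + B u:
# snapshots -> POD basis V -> projected matrices (V^T A V, V^T B, C V).
# The toy diffusion-like A is an assumption, not a Xyce circuit netlist.
import numpy as np

n, r, dt, steps = 200, 8, 1e-3, 2000
off = np.ones(n - 1)
A = 50.0 * (np.diag(-2.0 * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1))
B = np.zeros((n, 1)); B[0, 0] = 1.0
C = np.zeros((1, n)); C[0, 0] = 1.0          # observe the driven node

def simulate(A, B, C, x0, u, keep_every=None):
    """Backward-Euler simulation; optionally keep every k-th state snapshot."""
    x, ys, snaps = x0.copy(), [], []
    M = np.linalg.inv(np.eye(A.shape[0]) - dt * A)
    for k, uk in enumerate(u):
        x = M @ (x + dt * (B[:, 0] * uk))
        ys.append((C @ x).item())
        if keep_every and k % keep_every == 0:
            snaps.append(x.copy())
    return np.array(ys), snaps

u = np.ones(steps)                            # step input
y_full, snaps = simulate(A, B, C, np.zeros(n), u, keep_every=10)

# POD basis = leading left singular vectors of the snapshot matrix.
V = np.linalg.svd(np.array(snaps).T, full_matrices=False)[0][:, :r]
Ar, Br, Cr = V.T @ A @ V, V.T @ B, C @ V      # Galerkin projection

y_rom, _ = simulate(Ar, Br, Cr, np.zeros(r), u)
err = np.max(np.abs(y_full - y_rom)) / np.max(np.abs(y_full))
print(f"ROM size {r} vs full size {n}; max relative output error {err:.2e}")
```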

  10. Time and expenses associated with the implementation of strategies to reduce emergency department crowding.

    Science.gov (United States)

    McHugh, Megan; Van Dyke, Kevin J; Yonek, Julie; Moss, Dina

    2012-09-01

    The Emergency Nurses Association and other groups have encouraged the adoption of patient flow improvement strategies to reduce ED crowding, but little is known about time and expenses associated with implementation. The purpose of this study was to estimate the time spent and expenses incurred as 6 Urgent Matters hospitals planned and implemented strategies to improve patient flow and reduce crowding. We conducted key informant interviews with members of the hospitals' patient flow improvement teams at 2 points in time: immediately after strategy implementation and approximately 6 months later. A total of 129 interviews were conducted using a semistructured interview protocol. Interviews were recorded, transcribed, and coded for analysis. Eight strategies were implemented. The time spent planning and implementing the strategies ranged from 40 to 1,017 hours per strategy. The strategies were largely led by nurses, and collectively, nurses spent more time planning and implementing strategies than others. The most time-consuming strategies were those that involved extensive staff training, large implementation teams, or complex process changes. Only 3 strategies involved sizable expenditures, ranging from $32,850 to $490,000. Construction and the addition of new personnel represented the most costly expenditures. The time and expenses involved in the adoption of patient flow improvement strategies are highly variable. Nurses play an important role in leading and implementing these efforts. Hospital, ED, and nurse leaders should set realistic expectations for the time and expenses needed to support patient flow improvement. Copyright © 2012 Emergency Nurses Association. Published by Mosby, Inc. All rights reserved.

  11. Customer relationship management maturity model (CRM3): A model for stepwise implementation

    Directory of Open Access Journals (Sweden)

    Babak Sohrabi

    2010-01-01

    Full Text Available Being a multifaceted process, implementing a customer relationship management (CRM) project carries high risk and uncertainty, which must be reduced through planning in order to obtain the desired benefits. In fact, the existing and the optimal positions must be determined so that the gap between them can be reduced through suitable investment. A maturity model can be used to identify this gap as well as the path towards the optimal condition. Relying on the extended literature, the present paper reviews the existing models and then develops a model for measuring CRM maturity based on CRM critical success factors, CMMI levels and RADAR logic.

  12. A Probabilistic Model for Reducing Medication Errors

    Science.gov (United States)

    Nguyen, Phung Anh; Syed-Abdul, Shabbir; Iqbal, Usman; Hsu, Min-Huei; Huang, Chen-Ling; Li, Hsien-Chang; Clinciu, Daniel Livius; Jian, Wen-Shan; Li, Yu-Chuan Jack

    2013-01-01

    Background Medication errors are common, life threatening, costly but preventable. Information technology and automated systems are highly efficient for preventing medication errors and therefore widely employed in hospital settings. The aim of this study was to construct a probabilistic model that can reduce medication errors by identifying uncommon or rare associations between medications and diseases. Methods and Finding(s) Association rules of mining techniques are utilized for 103.5 million prescriptions from Taiwan’s National Health Insurance database. The dataset included 204.5 million diagnoses with ICD9-CM codes and 347.7 million medications by using ATC codes. Disease-Medication (DM) and Medication-Medication (MM) associations were computed by their co-occurrence and associations’ strength were measured by the interestingness or lift values which were being referred as Q values. The DMQs and MMQs were used to develop the AOP model to predict the appropriateness of a given prescription. Validation of this model was done by comparing the results of evaluation performed by the AOP model and verified by human experts. The results showed 96% accuracy for appropriate and 45% accuracy for inappropriate prescriptions, with a sensitivity and specificity of 75.9% and 89.5%, respectively. Conclusions We successfully developed the AOP model as an efficient tool for automatic identification of uncommon or rare associations between disease-medication and medication-medication in prescriptions. The AOP model helps to reduce medication errors by alerting physicians, improving the patients’ safety and the overall quality of care. PMID:24312659

  13. Evidence that implementation intentions reduce drivers' speeding behavior: testing a new intervention to change driver behavior.

    Science.gov (United States)

    Brewster, Sarah E; Elliott, Mark A; Kelly, Steve W

    2015-01-01

    Implementation intentions have the potential to break unwanted habits and help individuals behave in line with their goal intentions. We tested the effects of implementation intentions in the context of drivers' speeding behavior. A randomized controlled design was used. Speeding behavior, goal intentions and theoretically derived motivational pre-cursors of goal intentions were measured at both baseline and follow-up (one month later) using self-report questionnaires. Immediately following the baseline questionnaire, the experimental (intervention) group (N=117) specified implementation intentions using a volitional help sheet, which required the participants to link critical situations in which they were tempted to speed with goal-directed responses to resist the temptation. The control group (N=126) instead received general information about the risks of speeding. In support of the hypotheses, the experimental group reported exceeding the speed limit significantly less often at follow-up than did the control group. This effect was specific to 'inclined abstainers' (i.e., participants who reported speeding more than they intended to at baseline and were therefore motivated to reduce their speeding) and could not be attributed to any changes in goal intentions to speed or any other measured motivational construct. Also in line with the hypotheses, implementation intentions attenuated the past-subsequent speeding behavior relationship and augmented the goal intention - subsequent speeding behavior relationship. The findings imply that implementation intentions are effective at reducing speeding and that they do so by weakening the effect of habit, thereby helping drivers to behave in accordance with their existing goal intentions. The volitional help sheet used in this study is an effective tool for promoting implementation intentions to reduce speeding.

  14. Novel Reduced Order in Time Models for Problems in Nonlinear Aeroelasticity Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Research is proposed for the development and implementation of state of the art, reduced order models for problems in nonlinear aeroelasticity. Highly efficient and...

  15. Software implementation of floating-point arithmetic on a reduced-instruction-set processor

    Energy Technology Data Exchange (ETDEWEB)

    Gross, T.

    1985-11-01

    Current single chip implementations of reduced-instruction-set processors do not support hardware floating-point operations. Instead, floating-point operations have to be provided either by a coprocessor or by software. This paper discusses issues arising from a software implementation of floating-point arithmetic for the MIPS processor, an experimental VLSI architecture. Measurements indicate that an acceptable level of performance is achieved, but this approach is no substitute for a hardware accelerator if higher-precision results are required. This paper includes instruction profiles for the basic floating-point operations and evaluates the usefulness of some aspects of the instruction set.

  16. CASE STUDY ON THE IMPLEMENTATION OF A VIDEO STORY-BASED INTERVENTION WITH SELF-MODELING TREATMENT PACKAGE TO REDUCE STEREOTYPICAL SPITTING BEHAVIOR IN A YOUNG GIRL WITH AUTISM

    Directory of Open Access Journals (Sweden)

    Cindy NELSON-HEAD

    2012-09-01

    Full Text Available The purpose of this study was to investigate the use of a video story-based intervention with self-modeling to decrease spitting behavior in a female pre-school student with autism during an extended school year services program. An A-B-A-B design was used to demonstrate a functional relation between a video story-based intervention with self-modeling and decreased spitting behavior. The results showed that spitting behavior decreased as a result of the video-based intervention package. The implications of these results will be explored.

  17. Making sense of implementation theories, models and frameworks

    National Research Council Canada - National Science Library

    Nilsen, Per

    2015-01-01

    .... The aim of this article is to propose a taxonomy that distinguishes between different categories of theories, models and frameworks in implementation science, to facilitate appropriate selection...

  18. Putting the pieces together: an integrated model of program implementation.

    Science.gov (United States)

    Berkel, Cady; Mauricio, Anne M; Schoenfelder, Erin; Sandler, Irwin N

    2011-03-01

    Considerable evidence indicates that variability in implementation of prevention programs is related to the outcomes achieved by these programs. However, while implementation has been conceptualized as a multidimensional construct, few studies examine more than a single dimension, and no theoretical framework exists to guide research on the effects of implementation. We seek to address this need by proposing a theoretical model of the relations between the dimensions of implementation and outcomes of prevention programs that can serve to guide future implementation research. In this article, we focus on four dimensions of implementation, which we conceptualize as behaviors of program facilitators (fidelity, quality of delivery, and adaptation) and behaviors of participants (responsiveness) and present the evidence supporting these as predictors of program outcomes. We then propose a theoretical model by which facilitator and participant dimensions of implementation influence participant outcomes. Finally, we provide recommendations and directions for future implementation research.

  19. Advanced Fluid Reduced Order Models for Compressible Flow.

    Energy Technology Data Exchange (ETDEWEB)

    Tezaur, Irina Kalashnikova; Fike, Jeffrey A.; Carlberg, Kevin Thomas; Barone, Matthew F.; Maddix, Danielle; Mussoni, Erin E.; Balajewicz, Maciej (UIUC)

    2017-09-01

    This report summarizes fiscal year (FY) 2017 progress towards developing and implementing within the SPARC in-house finite volume flow solver advanced fluid reduced order models (ROMs) for compressible captive-carriage flow problems of interest to Sandia National Laboratories for the design and qualification of nuclear weapons components. The proposed projection-based model order reduction (MOR) approach, known as the Proper Orthogonal Decomposition (POD)/Least-Squares Petrov-Galerkin (LSPG) method, can substantially reduce the CPU-time requirement for these simulations, thereby enabling advanced analyses such as uncertainty quantification and design optimization. Following a description of the project objectives and FY17 targets, we overview briefly the POD/LSPG approach to model reduction implemented within SPARC. We then study the viability of these ROMs for long-time predictive simulations in the context of a two-dimensional viscous laminar cavity problem, and describe some FY17 enhancements to the proposed model reduction methodology that led to ROMs with improved predictive capabilities. Also described in this report are some FY17 efforts pursued in parallel to the primary objective of determining whether the ROMs in SPARC are viable for the targeted application. These include the implementation and verification of some higher-order finite volume discretization methods within SPARC (towards using the code to study the viability of ROMs on three-dimensional cavity problems) and a novel structure-preserving constrained POD/LSPG formulation that can improve the accuracy of projection-based reduced order models. We conclude the report by summarizing the key takeaways from our FY17 findings, and providing some perspectives for future work.

  20. Optimal implementation of green infrastructure practices to reduce adverse impacts of urban areas on hydrology and water quality

    Science.gov (United States)

    Liu, Y.; Collingsworth, P.; Pijanowski, B. C.; Engel, B.

    2016-12-01

    Nutrient loading from Maumee River watershed is a significant reason for the harmful algal blooms (HABs) problem in Lake Erie. Although studies have explored strategies to reduce nutrient loading from agricultural areas in the Maumee River watershed, the nutrient loading in urban areas also needs to be reduced. Green infrastructure practices are popular approaches for stormwater management and useful for improving hydrology and water quality. In this study, the Long-Term Hydrologic Impact Assessment-Low Impact Development 2.1 (L-THIA-LID 2.1) model was used to determine how different strategies for implementing green infrastructure practices can be optimized to reduce impacts on hydrology and water quality in an urban watershed in the upper Maumee River system. Community inputs, such as the types of green infrastructure practices of greatest interest and environmental concerns for the community, were also considered during the study. Based on community input, the following environmental concerns were considered: runoff volume, Total Suspended Solids (TSS), Total Phosphorous (TP), Total Kjeldahl Nitrogen (TKN), and Nitrate+Nitrite (NOx); green infrastructure practices of interest included rain barrel, cistern, green roof, permeable patio, porous pavement, grassed swale, bioretention system, grass strip, wetland channel, detention basin, retention pond, and wetland basin. Spatial optimization of green infrastructure practice implementation was conducted to maximize environmental benefits while minimizing the cost of implementation. The green infrastructure practice optimization results can be used by the community to solve hydrology and water quality problems.
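
    As a hedged illustration of the cost-effectiveness reasoning, the sketch below greedily selects practices under a budget; the unit costs and per-unit reductions are invented placeholders rather than L-THIA-LID 2.1 output, and the study itself uses a formal spatial optimization rather than this greedy screen.

```python
# Illustrative greedy screen for choosing green-infrastructure practices under
# a budget: pick the practice with the best load reduction per dollar until
# the budget runs out.  Costs and per-unit reductions are invented
# placeholders, not L-THIA-LID 2.1 results.
practices = {
    # name: (cost per unit $, runoff reduction m3/yr, TP reduction kg/yr, max units)
    "rain barrel":       (150.0,      8.0, 0.01, 500),
    "bioretention cell": (9000.0,   900.0, 1.20,  40),
    "porous pavement":   (25000.0, 2500.0, 2.00,  10),
    "grassed swale":     (4000.0,   600.0, 0.60,  30),
}
budget = 300_000.0
weights = (1.0, 50.0)     # relative weight of runoff (m3) vs TP (kg) benefit

def score(cost, runoff, tp):
    return (weights[0] * runoff + weights[1] * tp) / cost

chosen, spent = {}, 0.0
# Rank once by benefit per dollar, then fill greedily within the budget.
for name, (cost, runoff, tp, max_units) in sorted(
        practices.items(), key=lambda kv: -score(*kv[1][:3])):
    units = min(max_units, int((budget - spent) // cost))
    if units > 0:
        chosen[name] = units
        spent += units * cost

total_runoff = sum(practices[n][1] * u for n, u in chosen.items())
total_tp = sum(practices[n][2] * u for n, u in chosen.items())
print(chosen)
print(f"spent ${spent:,.0f}: runoff -{total_runoff:,.0f} m3/yr, TP -{total_tp:.1f} kg/yr")
```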

  1. Reduced order modeling in iTOUGH2

    Science.gov (United States)

    Pau, George Shu Heng; Zhang, Yingqi; Finsterle, Stefan; Wainwright, Haruko; Birkholzer, Jens

    2014-04-01

    The inverse modeling and uncertainty quantification capabilities of iTOUGH2 are augmented with reduced order models (ROMs) that act as efficient surrogates for computationally expensive high fidelity models (HFMs). The implementation of the ROM capabilities involves integration of three main computational components. The first component is the ROM itself. Two response surface approximations are currently implemented: Gaussian process regression (GPR) and radial basis function (RBF) interpolation. The second component is a multi-output adaptive sampling procedure that determines the sample points used to construct the ROMs. The third component involves defining appropriate error measures for the adaptive sampling procedure, allowing ROMs to be constructed efficiently with limited user intervention. Details in all three components must complement one another to obtain an accurate approximation. The new capability and its integration with other analysis tools within iTOUGH2 are demonstrated in two examples. The results from using the ROMs in an uncertainty quantification analysis and a global sensitivity analysis compare favorably with the results obtained using the HFMs. GPR is more accurate than RBF, but the difference can be small and similar conclusion can be deduced from the analyses. In the second example involving a realistic numerical model for a hypothetical industrial-scale carbon storage project in the Southern San Joaquin Basin, California, USA, significant reduction in computational effort can be achieved when ROMs are used to perform a rigorous global sensitivity analysis.
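
    A minimal sketch of the response-surface idea is given below, with scikit-learn's Gaussian process regression acting as the surrogate and a cheap analytic function standing in for the expensive TOUGH2 forward model; the parameter ranges and sample sizes are assumptions.

```python
# Sketch of a GPR response-surface surrogate of the kind described above, with
# a cheap analytic function standing in for the expensive forward model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def high_fidelity(x):
    """Placeholder for the expensive simulator: response vs. two parameters."""
    perm, poro = x[:, 0], x[:, 1]
    return np.sin(3 * perm) * np.exp(-poro) + 0.5 * poro

rng = np.random.default_rng(7)
X_train = rng.uniform(0.0, 1.0, size=(30, 2))       # sampled parameter sets
y_train = high_fidelity(X_train)                     # 30 "expensive" runs

gpr = GaussianProcessRegressor(
    kernel=ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2]),
    normalize_y=True).fit(X_train, y_train)

X_test = rng.uniform(0.0, 1.0, size=(1000, 2))
y_pred, y_std = gpr.predict(X_test, return_std=True)
err = np.sqrt(np.mean((y_pred - high_fidelity(X_test)) ** 2))
print(f"surrogate RMSE over 1000 cheap evaluations: {err:.3e}")
# An adaptive sampler would now add training points where y_std is largest.
print("largest predictive std:", y_std.max())
```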

  2. Implementation of Dynamic Smart Decision Model for Vertical Handoff

    Science.gov (United States)

    Sahni, Nidhi

    2010-11-01

    International Mobile Telecommunications-Advanced (IMT-Advanced), better known as 4G, is the next level of evolution in the field of wireless communications. 4G wireless networks enable users to access information anywhere, anytime, with a seamless connection to a wide range of information and services, and to receive large volumes of information, data, pictures and video, thus increasing the demand for high bandwidth and signal strength. Mobility among various networks is achieved through vertical handoff. Vertical handoffs refer to the automatic failover from one technology to another in order to maintain communication. The heterogeneous co-existence of access technologies with largely different characteristics creates the decision problem of determining the "best" available network and the "best" time for handoff. In this paper, we implement the proposed Dynamic and Smart Decision model to decide the "best" network interface and the "best" moment to hand off. The proposed model not only addresses individual user needs but also improves overall system performance, i.e. Quality of Service, by reducing unnecessary handoffs while maintaining mobility.
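
    A hedged sketch of a weighted-scoring (simple additive weighting) handoff decision of the kind such models build on is given below; the candidate networks, attribute values, weights and hysteresis margin are invented, not the paper's parameters.

```python
# Hedged sketch of a simple-additive-weighting (SAW) handoff decision of the
# kind used in vertical-handoff models; candidate networks, attribute values
# and weights are invented, not the paper's parameters.
candidates = {
    # name: (RSS dBm, bandwidth Mbps, cost per MB, latency ms)
    "WLAN":  (-60, 54.0, 0.00, 30.0),
    "LTE":   (-85, 20.0, 0.02, 50.0),
    "WiMAX": (-80, 15.0, 0.01, 60.0),
}
weights = {"rss": 0.35, "bw": 0.30, "cost": 0.20, "lat": 0.15}
current, hysteresis = "LTE", 0.05       # hand off only if clearly better

def normalize(values, benefit=True):
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0] * len(values)
    return [(v - lo) / (hi - lo) if benefit else (hi - v) / (hi - lo)
            for v in values]

names = list(candidates)
cols = list(zip(*candidates.values()))
norm = {
    "rss":  normalize(cols[0], benefit=True),    # less negative is better
    "bw":   normalize(cols[1], benefit=True),
    "cost": normalize(cols[2], benefit=False),
    "lat":  normalize(cols[3], benefit=False),
}
scores = {n: sum(weights[k] * norm[k][i] for k in weights)
          for i, n in enumerate(names)}

best = max(scores, key=scores.get)
if best != current and scores[best] > scores[current] + hysteresis:
    print(f"handoff {current} -> {best}  (scores: {scores})")
else:
    print(f"stay on {current}  (scores: {scores})")
```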

  3. Exposure-response modeling methods and practical implementation

    CERN Document Server

    Wang, Jixian

    2015-01-01

    Discover the Latest Statistical Approaches for Modeling Exposure-Response RelationshipsWritten by an applied statistician with extensive practical experience in drug development, Exposure-Response Modeling: Methods and Practical Implementation explores a wide range of topics in exposure-response modeling, from traditional pharmacokinetic-pharmacodynamic (PKPD) modeling to other areas in drug development and beyond. It incorporates numerous examples and software programs for implementing novel methods.The book describes using measurement

  4. Reduced-Rank Hidden Markov Models

    CERN Document Server

    Siddiqi, Sajid M; Gordon, Geoffrey J

    2009-01-01

    We introduce the Reduced-Rank Hidden Markov Model (RR-HMM), a generalization of HMMs that can model smooth state evolution as in Linear Dynamical Systems (LDSs) as well as non-log-concave predictive distributions as in continuous-observation HMMs. RR-HMMs assume an m-dimensional latent state and n discrete observations, with a transition matrix of rank k <= m. This implies the dynamics evolve in a k-dimensional subspace, while the shape of the set of predictive distributions is determined by m. Latent state belief is represented with a k-dimensional state vector and inference is carried out entirely in R^k, making RR-HMMs as computationally efficient as k-state HMMs yet more expressive. To learn RR-HMMs, we relax the assumptions of a recently proposed spectral learning algorithm for HMMs (Hsu, Kakade and Zhang 2009) and apply it to learn k-dimensional observable representations of rank-k RR-HMMs. The algorithm is consistent and free of local optima, and we extend its performance guarantees to cover the RR-...

  5. Reducing equifinality of hydrological models by integrating Functional Streamflow Disaggregation

    Science.gov (United States)

    Lüdtke, Stefan; Apel, Heiko; Nied, Manuela; Carl, Peter; Merz, Bruno

    2014-05-01

    A universal problem in the calibration of hydrological models is the equifinality of different parameter sets derived from calibrating models against total runoff values. This is an intrinsic problem stemming from the quality of the calibration data and the simplified process representation by the model. However, discharge data contain additional information which can be extracted by signal processing methods. An analysis specifically developed for the disaggregation of runoff time series into flow components is the Functional Streamflow Disaggregation (FSD; Carl & Behrendt, 2008). This method is used in the calibration of an implementation of the hydrological model SWIM in a medium-sized watershed in Thailand. FSD is applied to disaggregate the discharge time series into three flow components, which are interpreted as base flow, inter-flow and surface runoff. In addition to total runoff, the model is calibrated against these three components in a modified GLUE analysis, with the aim of identifying structural model deficiencies, assessing the internal process representation and tackling equifinality. We developed a model-dependent approach (MDA) calibrating the model runoff components against the FSD components, and a model-independent approach (MIA) comparing the FSD of the model results with the FSD of the calibration data. The results indicate that the decomposition provides valuable information for the calibration. In particular, MDA highlights and discards a number of standard GLUE behavioural models that underestimate the contribution of soil water to river discharge. Both MDA and MIA yield a reduction of the parameter ranges by a factor of up to 3 in comparison to standard GLUE. Based on these results, we conclude that the developed calibration approach is able to reduce the equifinality of hydrological model parameterizations. The effect on the uncertainty of the model predictions is strongest when applying MDA and shows only minor reductions for MIA. Besides

  6. An Approach for the Implementation of Software Quality Models Adopting CERTICS and CMMI-DEV

    Directory of Open Access Journals (Sweden)

    GARCIA, F.W.

    2015-12-01

    Full Text Available This paper proposes a mapping between two product quality and software process models used in industry, the national CERTICS model and the international CMMI-DEV model. The stages of the mapping are presented step by step, as well as the mapping review, which was carried out in cooperation with a specialist in the CERTICS and CMMI-DEV models. The aim is to correlate the structures of the two models in order to facilitate implementation, reduce implementation time and costs, and stimulate multi-model implementations in software development companies.

  7. Digital Anthropometry: Model, Implementation, and Application

    Directory of Open Access Journals (Sweden)

    Katrina Joy H. Magno

    2014-06-01

    Full Text Available In this paper, we provide a mathematical framework for identifying and measuring human body parts. We used this framework to implement a computer-based measurement system with the purpose of automating the usual manual process of anthropometry. To test the computer-based system, we measured the hands of 91 individuals using both the manual and the computer-based system. Based on a two-tailed t-test, the computer-based system gives the same measurements as the manual system at the 5% level of significance.
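
    The comparison reported above can be sketched with SciPy's paired two-tailed t-test; the synthetic hand measurements below stand in for the 91 real subjects.

```python
# Sketch of the two-tailed paired comparison reported in the paper, using
# synthetic hand-length measurements rather than the 91 real subjects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
true_len = rng.normal(18.0, 1.0, 91)                    # cm, hypothetical hands
manual = true_len + rng.normal(0.0, 0.15, 91)           # manual anthropometry
digital = true_len + rng.normal(0.0, 0.15, 91)          # image-based system

t_stat, p_value = stats.ttest_rel(manual, digital)      # paired, two-tailed
alpha = 0.05
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
print("no significant difference at the 5% level" if p_value > alpha
      else "systems differ significantly")
```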

  8. Implementation of WPDL Conforming Workflow Model

    Institute of Scientific and Technical Information of China (English)

    张志君; 范玉顺

    2003-01-01

    Workflow process definition language (WPDL) facilitates the transfer of workflow process definitions between separate workflow products. However, much work is still needed to transfer the specific workflow model to a WPDL conforming model. CIMFlow is a workflow management system developed by the National CIMS Engineering Research Center. This paper discusses the methods by which the CIMFlow model conforms to the WPDL meta-model and the differences between the WPDL meta-model and the CIMFlow model. Some improvements are proposed for the WPDL specification. Finally, the mapping and translating methods between the entities and attributes are given for the two models. The proposed methods and improvements are valuable as a reference for other mapping applications and the WPDL specification.

  9. Hubble Space Telescope Reduced-Gyro Control Law Design, Implementation, and On-Orbit Performance

    Science.gov (United States)

    Clapp, Brian R.; Ramsey, Patrick R.; Wirzburger, John H.; Smith, Daniel C.; VanArsadall, John C.

    2008-01-01

    Following gyro failures in April 2001 and April 2003, HST Pointing Control System engineers designed reduced-gyro control laws to extend the spacecraft science mission. The Two-Gyro Science (TGS) and One-Gyro Science (OGS) control laws were designed and implemented using magnetometers, star trackers, and Fine Guidance Sensors in succession to control vehicle rate about the missing gyro axes. Both TGS and OGS have demonstrated on-orbit pointing stability of 7 milli-arcseconds or less, which depends upon the guide star magnitude used by the Fine Guidance Sensor. This paper describes the design, implementation, and on-orbit performance of the TGS and OGS control law fine-pointing modes using Fixed Head Star Trackers and Fine Guidance Sensors, after successfully achieving coarse-pointing control using magnetometers.

  10. Implementation of a telementoring model of medical education in psoriasis

    Directory of Open Access Journals (Sweden)

    Luis D. Mazzuoccolo

    2016-12-01

    Full Text Available The ECHO® (Extension for Community Healthcare Outcomes) project is a model of distance medical education. Its mission is to expand knowledge and evaluate the results of this action, both in the training of human resources in healthcare and in the achievement of best medical practices in the target community. It is developed through case-presentation videoconferencing between experts in chronic and complex diseases and physicians, with the aim of reducing healthcare asymmetries between large urban centers and peripheral areas. We have implemented this telementoring for dermatologists and residents who treat patients with psoriasis. After 10 sessions, a survey was conducted to evaluate the educational attainment of the participants. A significant improvement was found in their abilities to determine the severity of psoriasis, screen for arthritis, assess the patient before beginning systemic treatment and provide appropriate follow-up under different systemic therapies. The ECHO replication model helped improve the skills of the participants in the management of this disease and reduced professional isolation.

  11. A model based security testing method for protocol implementation.

    Science.gov (United States)

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

    The security of a protocol implementation is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases to verify the security of the protocol implementation.

  12. A Model Based Security Testing Method for Protocol Implementation

    Directory of Open Access Journals (Sweden)

    Yu Long Fu

    2014-01-01

    Full Text Available The security of a protocol implementation is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases to verify the security of the protocol implementation.

  13. Closed Catheter Access System Implementation in Reducing Bloodstream Infection Rate in Low Birth Weight Preterm Infants

    Directory of Open Access Journals (Sweden)

    Lily Rundjan

    2015-03-01

    Full Text Available Background Bloodstream infection (BSI) is one of the significant causes of morbidity and mortality encountered in a neonatal intensive care unit (NICU), especially in developing countries. Despite the implementation of infection control practices, such as strict hand hygiene, the BSI rate in our hospital is still high. The use of a closed catheter access system to reduce BSI related to intravascular catheters has hitherto never been evaluated in our hospital. Objective To determine the effects of closed catheter access system implementation in reducing the BSI rate in preterm neonates with low birth weight. Methods A randomized clinical trial was conducted on 60 low birth weight preterm infants hospitalized in the neonatal unit at Cipto Mangunkusumo Hospital, Jakarta, Indonesia, from June to September 2013. Subjects were randomized to receive either a closed or a non-closed catheter access system. Subjects were monitored for 2 weeks for the development of BSI based on clinical signs, abnormal infection parameters, and blood culture. Results Closed catheter access system implementation had a protective effect against the occurrence of culture-proven BSI (relative risk 0.095, 95% CI 0.011 to 0.85, p=0.026). The risk of culture-proven BSI in the control group was 10.545 (95% CI 1.227 to 90.662, p=0.026). BSI occurred in 75% of neonates without risk factors for infection in the control group compared to none in the study group. Conclusions The use of a closed catheter access system reduced BSI in low birth weight preterm infants. Choosing the right device design, proper disinfection of the device and an appropriate frequency of connector change should be ensured simultaneously.

  14. Implementation of a network model of hysteresis

    Energy Technology Data Exchange (ETDEWEB)

    Gruosso, G. [Dipartimento Elettronica e Informazione, Politecnico di Milano, P.za Leonardo da Vinci 32, I-20133 Milan (Italy); Repetto, M. [Dipartimento Ingegneria Elettrica, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Turin (Italy)]. E-mail: maurizio.repetto@polito.it

    2006-02-01

    A network model of hysteresis based on elementary cells made up of piecewise-linear resistors and a linear capacitor has been presented in the literature, and its theoretical properties have been investigated. This model makes it possible to simulate hysteresis in a circuit solver without requiring any modification of its source code. Despite its appealing features, some caution must be exercised in the treatment of the interface between the model and the rest of the circuit and in the handling of the nonlinear resistors, which can introduce convergence problems in the network solution. These topics are investigated, and some results on a simple test case are presented and discussed.

  15. Bacteriophage Infection of Model Metal Reducing Bacteria

    Science.gov (United States)

    Weber, K. A.; Bender, K. S.; Gandhi, K.; Coates, J. D.

    2008-12-01

    filtered through a 0.22 μm sterile nylon filter, stained with phosphotungstic acid (PTA), and examined using transmission electron microscopy (TEM). TEM revealed the presence of virus-like particles in the culture exposed to mitomycin C. Together these results suggest an active infection with a lysogenic bacteriophage in the model metal-reducing bacteria, Geobacter spp., which could affect metabolic physiology and subsequently metal reduction in environmental systems.

  16. Embedded systems development from functional models to implementations

    CERN Document Server

    Zeng, Haibo; Natale, Marco; Marwedel, Peter

    2014-01-01

    This book offers readers broad coverage of techniques to model, verify and validate the behavior and performance of complex distributed embedded systems. The authors attempt to bridge the gap between the three disciplines of model-based design, real-time analysis and model-driven development, for a better understanding of the ways in which new development flows can be constructed, going from system-level modeling to the correct and predictable generation of a distributed implementation, leveraging current and future research results. The book describes the integration of heterogeneous models; discusses the synthesis of task model implementations and code implementations; compares model-based design vs. model-driven approaches; explains how to enforce correctness by construction in the functional and time domains; and includes optimization techniques for control performance.

  17. Modelling and Implementation of Catalogue Cards Using FreeMarker

    Science.gov (United States)

    Radjenovic, Jelen; Milosavljevic, Branko; Surla, Dusan

    2009-01-01

    Purpose: The purpose of this paper is to report on a study involving the specification (using Unified Modelling Language (UML) 2.0) of information requirements and implementation of the software components for generating catalogue cards. The implementation in a Java environment is developed using the FreeMarker software.…

  19. Implementing a Dominican Model of Leadership

    Science.gov (United States)

    Otte, Suzanne

    2015-01-01

    Leadership theories that rely on personal traits, situations, and actions were developed for an industrial world and have become less effective as the world becomes more globalized, networked, and collaborative (Komives et al. 2005). Values-centered models of leadership highlighting collaboration, inclusiveness, empowerment, and ethics have…

  1. The rate of invasive testing for trisomy 21 is reduced after implementation of NIPT.

    Science.gov (United States)

    Bjerregaard, Louise; Stenbakken, Anne Betsagoo; Andersen, Camilla Skov; Kristensen, Line; Jensen, Cecilie Vibeke; Skovbo, Peter; Sørensen, Anne Nødgaard

    2017-04-01

    The non-invasive prenatal test (NIPT) was introduced in the North Denmark Region in March 2013. NIPT is offered as an alternative to invasive tests if the combined first-trimester risk of trisomy 21 (T21) is ≥ 1:300. The purpose of this study was to investigate the effect of NIPT implementation among high-risk pregnancies in a region with existing first-trimester combined screening for T21. The primary objective was to examine the effect on the invasive testing rate. This was a retrospective observational study including high-risk singleton pregnancies in the North Denmark Region. The women were included in two periods, i.e. before and after the implementation of NIPT, respectively. Group 1 (before NIPT): n = 253 and Group 2 (after NIPT): n = 302. After NIPT implementation, the invasive testing rate fell from 70% to 48% (p < 0.01), and the number of high-risk women refusing further testing dropped from 26% to 3% (p < 0.01). NIPT successfully detected four cases of T21; however, two out of three sex-chromosomal abnormalities were false positives. No false negative NIPT results were revealed in this study. In the North Denmark Region, the implementation of NIPT in high-risk pregnancies significantly reduced the rate of invasive testing. However, the proportion of high-risk women who opted for prenatal tests increased, as the majority of women who previously refused further testing now opted for the NIPT. Funding: none. The study was approved by the Danish Data Protection Agency (No. 2015-104).

  2. Implementation science in the real world: a streamlined model.

    Science.gov (United States)

    Knapp, Herschel; Anaya, Henry D

    2012-01-01

    The process of quality improvement may involve enhancing or revising existing practices or introducing a novel element. Principles of Implementation Science provide key theories to guide these processes; however, such theories tend to be highly technical in nature and do not provide pragmatic or streamlined approaches to real-world implementation. This paper presents a concise yet comprehensive six-step, theory-based Implementation Science model that we have successfully used to launch more than two dozen self-sustaining implementations. In addition, we provide an abbreviated case study in which we used our streamlined theoretical model to successfully guide the development and implementation of an HIV testing/linkage-to-care campaign in homeless shelter settings in Los Angeles County.

  3. Reduced form models of bond portfolios

    OpenAIRE

    Matti Koivu; Teemu Pennanen

    2010-01-01

    We derive simple return models for several classes of bond portfolios. With only one or two risk factors our models are able to explain most of the return variations in portfolios of fixed rate government bonds, inflation linked government bonds and investment grade corporate bonds. The underlying risk factors have natural interpretations which make the models well suited for risk management and portfolio design.

  4. Accelerated gravitational wave parameter estimation with reduced order modeling.

    Science.gov (United States)

    Canizares, Priscilla; Field, Scott E; Gair, Jonathan; Raymond, Vivien; Smith, Rory; Tiglio, Manuel

    2015-02-20

    Inferring the astrophysical parameters of coalescing compact binaries is a key science goal of the upcoming advanced LIGO-Virgo gravitational-wave detector network and, more generally, gravitational-wave astronomy. However, current approaches to parameter estimation for these detectors require computationally expensive algorithms. Therefore, there is a pressing need for new, fast, and accurate Bayesian inference techniques. In this Letter, we demonstrate that a reduced order modeling approach enables rapid parameter estimation to be performed. By implementing a reduced order quadrature scheme within the LIGO Algorithm Library, we show that Bayesian inference on the 9-dimensional parameter space of nonspinning binary neutron star inspirals can be sped up by a factor of ∼30 for the early advanced detectors' configurations (with sensitivities down to around 40 Hz) and ∼70 for sensitivities down to around 20 Hz. This speedup will increase to about 150 as the detectors improve their low-frequency limit to 10 Hz, reducing to hours analyses which could otherwise take months to complete. Although these results focus on interferometric gravitational wave detectors, the techniques are broadly applicable to any experiment where fast Bayesian analysis is desirable.

  5. Implementation and Operational Research: Expedited Results Delivery Systems Using GPRS Technology Significantly Reduce Early Infant Diagnosis Test Turnaround Times.

    Science.gov (United States)

    Deo, Sarang; Crea, Lindy; Quevedo, Jorge; Lehe, Jonathan; Vojnov, Lara; Peter, Trevor; Jani, Ilesh

    2015-09-01

    The objective of this study was to quantify the impact of a new technology for communicating the results of an infant HIV diagnostic test on test turnaround time and to quantify the association between late delivery of test results and patient loss to follow-up. We used data collected during a pilot implementation of General Packet Radio Service (GPRS) printers for communicating results in the early infant diagnosis program in Mozambique from 2008 through 2010. Our dataset comprised 1757 patient records, of which 767 were from before and 990 from after implementation of the expedited results delivery system. We used a multivariate logistic regression model to determine the association between late result delivery (more than 30 days between sample collection and result delivery to the health facility) and the probability of result collection by the infant's caregiver. We used a sample selection model to determine the association between late result delivery to the facility and further delay in collection of results by the caregiver. The mean test turnaround time decreased from 68.13 to 41.05 days after the introduction of the expedited results delivery system. Caregivers collected only 665 (37.8%) of the 1757 results. After controlling for confounders, late delivery of results was associated with a reduction of approximately 18% (0.44 vs. 0.36; P < 0.01) in the probability of results being collected by the caregivers (odds ratio = 0.67, P < 0.05). Late delivery of results was also associated with a further average increase of 20.91 days of delay in collection of results (P < 0.01). Early infant diagnosis program managers should further evaluate the cost-effectiveness of operational interventions (eg, GPRS printers) that reduce delays.

  6. Implementation of IEC Generic Model Type 1A using RTDS

    DEFF Research Database (Denmark)

    Cha, Seung-Tae; Wu, Qiuwei; Zhao, Haoran

    2012-01-01

    This paper presents the implementation of the IEC generic model of the Type 1 wind turbine generator (WTG) in the real time digital simulator (RTDS) environment. The implemented model is based on the standardization efforts of the IEC 61400 TC88 wind turbine working group. Several case studies have been carried out to verify the dynamic performance of the IEC generic Type 1 WTG model under both steady state and dynamic conditions. The case study results show that the IEC generic Type 1 WTG model can represent the relevant dynamic behaviour of wind power generation to ensure grid integration...

  7. Difficulties when implementing the CMMI organizational model

    Directory of Open Access Journals (Sweden)

    Himelda Palacios

    2012-08-01

    Full Text Available This research analyzes and proposes intervention strategies that make it possible to identify and overcome the organizational learning obstacles an organization experiences when it decides to implement the quality model for software development and maintenance, CMMI (Capability Maturity Model Integration). Implementing CMMI is more than defining processes, procedures and templates; what it really involves is changing the organizational culture of software development areas and companies, and changing the behaviour of software engineers. The proposed intervention strategies facilitate the organizational culture change required for an organization dedicated to software development and maintenance to successfully reach the maturity levels defined by the CMMI model. This culture change involves orienting the organization towards the guidelines defined by software quality management, software engineering, project management, process management, continuous process improvement, quantitative process management and continuous learning.

  8. Biochar for reducing GHG emissions in Norway: opportunities and barriers to implementation.

    Science.gov (United States)

    Rasse, Daniel; O'Toole, Adam; Joner, Erik; Borgen, Signe

    2017-04-01

    Norway has ratified the Paris Agreement with a target nationally determined contribution (NDC) of a 40% reduction of greenhouse gas emissions by 2030, with the land sector (AFOLU) expected to contribute to this effort. Increased C sequestration in soil, as argued by the 4 per 1000 initiative, can provide C-negative solutions towards reaching this goal. However, only 3% of Norway's land surface is cultivated, and management options are fairly limited because the major part is already under managed grasslands, which are assumed to be close to C saturation. By contrast, the country has ample forest resources, allowing Norway to report 25 Mt CO2-eq per year of net CO2 uptake by forests. In addition, the forest industry generates large amounts of unused residues, both at the processing plants and left decaying on the forest floor. Because of the unique characteristics of the Norwegian land sector, the Norwegian Environment Agency reported as early as 2010 that biochar production for soil C storage had the largest potential for reducing GHG emissions through land-use measures. Although straw is a potential feedstock, the larger quantities of forest residues are a prime candidate for this purpose, as exemplified by our first experimental facility at a production farm, which is using wood chips as feedstock for biochar production. The highly controlled and subsidised Norwegian agriculture might offer a unique test case for implementing incentives that would support farmers for biochar-based C sequestration. However, multiple barriers remain, which mostly revolve around the complexity of finding the right implementation scheme (including price setting) in a changing landscape of competition for biomass (with e.g. bioethanol and direct combustion), methods of verification and variable co-benefits to the farmer. Here we will present some of these schemes, from on-farm biochar production to factories for biochar-compound fertilizers, and discuss barriers and

  9. Map /Reduce Design and Implementation of Apriori Algorithm for Handling Voluminous Data - Sets

    Directory of Open Access Journals (Sweden)

    Anjan K Koundinya

    2012-12-01

    Full Text Available Apriori is one of the key algorithms to generate frequent itemsets. Analysing frequent itemsets is a crucial step in analysing structured data and in finding association relationships between items. This stands as an elementary foundation to supervised learning, which encompasses classifier and feature extraction methods. Applying this algorithm is crucial to understand the behaviour of structured data. Most of the structured data in the scientific domain are voluminous. Processing such kind of data requires state of the art computing machines. Setting up such an infrastructure is expensive. Hence a distributed environment such as a clustered setup is employed for tackling such scenarios. The Apache Hadoop distribution is one of the cluster frameworks in distributed environments that helps by distributing voluminous data across a number of nodes in the framework. This paper focuses on the map/reduce design and implementation of the Apriori algorithm for structured data analysis.
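
    As a rough illustration of the map/reduce split described above (the record targets Apache Hadoop and gives no code), the following Python sketch mimics a Hadoop Streaming job: the mapper emits candidate itemsets of a fixed size from each transaction, and the reducer aggregates their support. The function names, the toy transactions and the minimum-support value are all hypothetical, not taken from the paper.

    ```python
    from itertools import combinations

    K = 2  # size of candidate itemsets counted in this pass (illustrative choice)

    def mapper(transactions, k=K):
        """Emit (candidate_itemset, 1) for every k-item combination in each transaction."""
        for line in transactions:
            items = sorted(set(line.strip().split(',')))
            for candidate in combinations(items, k):
                yield ','.join(candidate), 1

    def reducer(pairs, min_support=2):
        """Sum the counts per candidate and keep only the frequent itemsets."""
        counts = {}
        for itemset, count in pairs:
            counts[itemset] = counts.get(itemset, 0) + count
        return {s: c for s, c in counts.items() if c >= min_support}

    if __name__ == '__main__':
        transactions = ["bread,milk", "bread,beer,milk", "beer,milk"]
        frequent = reducer(mapper(transactions), min_support=2)
        print(frequent)  # {'bread,milk': 2, 'beer,milk': 2}
    ```

    A full Apriori run would iterate such a job, using the frequent (k-1)-itemsets from the previous round to prune the candidates generated in the mapper.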

  10. Algorithmic Tricks for Reducing the Complexity of FDWT/IDWT Basic Operations Implementation

    Directory of Open Access Journals (Sweden)

    Aleksandr Cariow

    2014-09-01

    Full Text Available In this paper two different approaches to the rationalization of FDWT and IDWT basic operations execution with a reduced number of multiplications are considered. With regard to the well-known approaches, the direct implementation of the above operations requires 2L multiplications for the execution of the FDWT and IDWT basic operation, plus 2(L-1) additions for the FDWT basic operation and L additions for the IDWT basic operation. At the same time, the first approach allows the design of computation procedures which take only 1.5L multiplications plus 3.5L+1 additions for the FDWT basic operation and L+1 multiplications plus 3.5L additions for the IDWT basic operation. The other approach allows the design of computation procedures which require 1.5L multiplications plus 2L-1 additions for the FDWT basic operation and L+1 additions for the IDWT basic operation.

  11. Reducing risky driver behaviour through the implementation of a driver risk management system

    Directory of Open Access Journals (Sweden)

    Rose Luke

    2014-03-01

    Full Text Available South Africa has one of the highest incidences of road accidents in the world. Most accidents are avoidable and are caused by driver behaviour and errors. The purpose of this article was to identify the riskiest driver behaviours in commercial fleets in South Africa, to determine the business impact of such behaviour, to establish a framework for the management of risky driver behaviour and to test the framework by applying a leading commercial driver behaviour management system as a case study. The case study comprised three South African commercial fleets. Using data from these fleets, critical incident triangles were used to determine the ratio data of risky driver behaviour to near-collisions and collisions. Based on managing the riskiest driver behaviours as causes of more serious incidents and accidents, the results indicated that through the implementation of an effective driver risk management system, risky incidents were significantly reduced.

  12. Implementation of Reduced Power Open Core Protocol Compliant Memory System using VHDL

    Directory of Open Access Journals (Sweden)

    Ramesh Bhakthavatchalu

    2011-01-01

    Full Text Available The design of a large scale System on Chip (SoC) is becoming challenging not only due to the complexity but also due to the use of a large number of Intellectual Property (IP) cores. An interface standard for IP cores is becoming important for a successful SoC design. In an SoC the different IP cores are interfaced through different protocols, which increases the complexity of the design. Open Core Protocol (OCP) is an openly licensed, core-centric protocol intended to meet contemporary system level integration challenges. OCP promotes IP core reusability and reduces design time, design risk and manufacturing costs for SoC designs. OCP defines a highly configurable interface including the data flow, control, verification and test signals required to describe an IP core's communication. This paper focuses on the design and implementation of a reconfigurable OCP compliant master-slave interface for a memory system with burst support. Power reduction using a multi-voltage design is an important feature of this work. The proposed design was implemented in VHDL and synthesis was done using the Synopsys ASIC synthesis tool Design Compiler.

  13. Automated model integration at source code level: An approach for implementing models into the NASA Land Information System

    Science.gov (United States)

    Wang, S.; Peters-Lidard, C. D.; Mocko, D. M.; Kumar, S.; Nearing, G. S.; Arsenault, K. R.; Geiger, J. V.

    2014-12-01

    Model integration bridges the data flow between modeling frameworks and models. However, models usually do not fit directly into a particular modeling environment if they were not designed for it. An example includes implementing different types of models into the NASA Land Information System (LIS), a software framework for land-surface modeling and data assimilation. Model implementation requires scientific knowledge and software expertise and may take a developer months to learn LIS and the model software structure. Debugging and testing of the model implementation is also time-consuming due to not fully understanding LIS or the model. This time spent is costly for research and operational projects. To address this issue, an approach has been developed to automate model integration into LIS. With this in mind, a general model interface was designed to retrieve the forcing inputs, parameters, and state variables needed by the model and to provide its state variables and outputs back to LIS. Every model can be wrapped to comply with the interface, usually with a FORTRAN 90 subroutine. Development efforts need only knowledge of the model and basic programming skills. With such wrappers, the logic is the same for implementing all models. Code templates defined for this general model interface can be re-used with any specific model. Therefore, the model implementation can be done automatically. An automated model implementation toolkit was developed with Microsoft Excel and its built-in VBA language. It allows model specifications in three worksheets and contains FORTRAN 90 code templates in VBA programs. According to the model specification, the toolkit generates data structures and procedures within FORTRAN modules and subroutines, which transfer data between LIS and the model wrapper. Model implementation is standardized, and about 80 - 90% of the development load is reduced. In this presentation, the automated model implementation approach is described along with LIS programming

  14. Criteria for implementing interventions to reduce health inequalities in primary care settings in European regions.

    Science.gov (United States)

    Daponte, Antonio; Bernal, Mariola; Bolívar, Julia; Mateo, Inmaculada; Salmi, Louis-Rachid; Barsanti, Sara; Berghmans, Luc; Piznal, Ewelina; Bourgueil, Yann; Marquez, Soledad; González, Ingrid; Carriazo, Ana; Maros-Szabo, Zsuzsanna; Ménival, Solange

    2014-12-01

    The current social and political context is generating socio-economic inequalities between and within countries, causing and widening health inequalities. The development and implementation of interventions in primary health care (PHC) settings seem unavoidable. Attempts have been made to draw up adequate criteria to guide and evaluate interventions but none for the specific case of PHC. This methodological article aims to contribute to this field by developing and testing a set of criteria for guiding and evaluating real-life interventions to reduce health inequalities in PHC settings in European regions. A literature review, nominal group technique, survey and evaluation template were used to design and test a set of criteria. The questionnaire was answered by professionals in charge of 46 interventions carried out in 12 European countries, and collected detailed information about each intervention. Third-party experts scored the interventions using the set of evaluation criteria proposed. Nine criteria to guide and evaluate interventions were proposed: relevance, appropriateness, applicability, innovation, quality assurance, adequacy of resources, effectiveness in the process, effectiveness in results and mainstreaming. A working definition was drawn up for each one. These criteria were then used to evaluate the interventions identified. The set of criteria drawn up to guide the design, implementation and evaluation of interventions to reduce health inequalities in PHC will be a useful instrument to be applied to interventions under development for culturally, politically and socio-economically diverse PHC contexts throughout Europe. © The Author 2014. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.

  15. Reducing RANS Model Error Using Random Forest

    Science.gov (United States)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng; Ling, Julia

    2016-11-01

    Reynolds-Averaged Navier-Stokes (RANS) models are still the work-horse tools in the turbulence modeling of industrial flows. However, the model discrepancy due to the inadequacy of modeled Reynolds stresses largely diminishes the reliability of simulation results. In this work we use a physics-informed machine learning approach to improve the RANS modeled Reynolds stresses and propagate them to obtain the mean velocity field. Specifically, the functional forms of Reynolds stress discrepancies with respect to mean flow features are trained based on an offline database of flows with similar characteristics. The random forest model is used to predict Reynolds stress discrepancies in new flows. Then the improved Reynolds stresses are propagated to the velocity field via RANS equations. The effects of expanding the feature space through the use of a complete basis of Galilean tensor invariants are also studied. The flow in a square duct, which is challenging for standard RANS models, is investigated to demonstrate the merit of the proposed approach. The results show that both the Reynolds stresses and the propagated velocity field are improved over the baseline RANS predictions. SAND Number: SAND2016-7437 A
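
    A minimal sketch of the regression step described in the abstract is given below, assuming hypothetical arrays of mean-flow features and Reynolds-stress discrepancies extracted from training flows; scikit-learn's RandomForestRegressor stands in for the authors' physics-informed workflow, and no attempt is made to reproduce their feature set or the propagation through the RANS equations.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Hypothetical training data: rows are mesh cells from training flows,
    # columns are mean-flow features (e.g. Galilean tensor invariants);
    # targets are components of the Reynolds-stress discrepancy.
    X_train = rng.normal(size=(5000, 10))
    y_train = rng.normal(size=(5000, 6))

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Predict the discrepancy field for a new flow (e.g. the square duct),
    # then add it to the baseline RANS Reynolds stresses before propagating
    # the corrected stresses through the mean-momentum equations.
    X_new = rng.normal(size=(2000, 10))
    delta_tau = model.predict(X_new)   # shape (2000, 6)
    ```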

  16. A Reducing Resistance to Change Model

    National Research Council Canada - National Science Library

    Daniela Braduţanu

    2015-01-01

    .... After analyzing the existent literature, I have concluded that the resistance to change subject has gained popularity over the years, but there are not too many models that could help managers...

  17. Model of key success factors for Business Intelligence implementation

    Directory of Open Access Journals (Sweden)

    Peter Mesaros

    2016-07-01

    Full Text Available New progressive technologies have recorded growth in every area. Information and communication technologies facilitate the exchange of information and the management of everyday activities in enterprises. Specific modules (such as Business Intelligence) facilitate decision-making, and several studies have demonstrated the positive impact of Business Intelligence on decision-making. The first step is to put it in place in the enterprise; the implementation process is influenced by many factors. This article discusses the issue of key success factors affecting the successful implementation of Business Intelligence. The article describes the key success factors for successful implementation and use of Business Intelligence based on multiple studies. The main objective of this study is to verify the effects and dependence of selected factors and to propose a model of key success factors for successful implementation of Business Intelligence. The key success factors and the proposed model are studied in Slovak enterprises.

  18. Samnett: the EMPS model with power flow constraints: implementation details

    Energy Technology Data Exchange (ETDEWEB)

    Helseth, Arild; Warland, Geir; Mo, Birger; Fosso, Olav B.

    2011-12-15

    This report describes the development and implementation of Samnett. Samnett is a new prototype for solving the coupled market and transmission network problem. The prototype is based on the EMPS model (Samkjoeringsmodellen). Results from the market model are distributed to a detailed transmission network model, where a DC power flow detects if there are overloads on monitored lines or interconnections. In case of overloads, power flow constraints are generated and added to the market problem. This report is an updated version of TR A6891 "Implementing Network Constraints in the EMPS model". It further elaborates on theoretical and implementation details in Samnett, but does not contain the case studies and file descriptions presented in TR A6891. (auth)
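
    The network check mentioned above is a DC power flow; in standard textbook form (not quoted from the report), with bus angles θ, line reactances x, and net injections P taken from the market solution, the linearised equations and line-flow limits read

    ```latex
    P_i = \sum_{j \neq i} \frac{\theta_i - \theta_j}{x_{ij}}, \qquad
    f_{ij} = \frac{\theta_i - \theta_j}{x_{ij}}, \qquad
    \lvert f_{ij} \rvert \le f_{ij}^{\max},
    ```

    and any violated limit generates a power flow constraint that is fed back to the market problem, as described in the report.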

  19. Automatic generation of computable implementation guides from clinical information models.

    Science.gov (United States)

    Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat

    2015-06-01

    Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides typically are oriented to human-readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or Object Constraint Language (OCL). This task can be difficult and error prone due to the big gap between both representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand easily and at the same time can be processed by computers. In this paper, we propose and describe a novel methodology that uses archetypes as basis for generation of implementation guides. We use archetypes to generate formal rules expressed in Natural Rule Language (NRL) and other reference materials usually included in implementation guides such as sample XML instances. We also generate Schematron rules from NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes.

  20. Implementations and interpretations of the talbot-ogden infiltration model

    KAUST Repository

    Seo, Mookwon

    2014-11-01

    The interaction between surface and subsurface hydrology flow systems is important for water supplies. Accurate, efficient numerical models are needed to estimate the movement of water through unsaturated soil. We investigate a water infiltration model and develop very fast serial and parallel implementations that are suitable for a computer with a graphical processing unit (GPU).

  1. Implementation of IEC Standard Models for Power System Stability Studies

    DEFF Research Database (Denmark)

    Margaris, Ioannis; Hansen, Anca Daniela; Bech, John

    2012-01-01

    This paper presents the implementation of the generic wind turbine generator (WTG) electrical simulation models proposed in the IEC 61400-27 standard which is currently in preparation. A general overview of the different WTG types is given while the main focus is on Type 4B WTG standard models...

  2. Simulink Implementation of Indirect Vector Control of Induction Machine Model

    Directory of Open Access Journals (Sweden)

    V. Dhanunjayanaidu

    2014-04-01

    Full Text Available In this paper, a modular Simulink implementation of an induction machine model is described in a step-by-step approach. With the modular system, each block solves one of the model equations; therefore, unlike in black-box models, all of the machine parameters are accessible for control and verification purposes. After the implementation, examples are given with the model used in different drive applications, such as open-loop constant V/Hz control and indirect vector control. To implement the induction machine model, the dynamic equivalent circuit of the induction motor is taken and the model equations in flux-linkage form are derived. The model is then implemented in Simulink by transforming the three-phase voltages to the d-q frame and the d-q currents back to three phase; it also includes the unit vector calculation and the induction machine d-q model. The inputs are the three-phase voltages, the load torque and the stator speed, and the outputs are the flux linkages and currents, the electrical torque and the rotor speed.
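
    For reference, the abc-to-dq step mentioned in the abstract is the standard amplitude-invariant Park transformation (written here in generic textbook form, not copied from the paper), with θ_e the reference-frame angle supplied by the unit-vector calculation:

    ```latex
    \begin{bmatrix} v_d \\ v_q \end{bmatrix}
    = \frac{2}{3}
    \begin{bmatrix}
    \cos\theta_e & \cos\!\left(\theta_e - \tfrac{2\pi}{3}\right) & \cos\!\left(\theta_e + \tfrac{2\pi}{3}\right) \\
    -\sin\theta_e & -\sin\!\left(\theta_e - \tfrac{2\pi}{3}\right) & -\sin\!\left(\theta_e + \tfrac{2\pi}{3}\right)
    \end{bmatrix}
    \begin{bmatrix} v_a \\ v_b \\ v_c \end{bmatrix}
    ```

    The inverse transform maps the computed d-q currents back to three-phase quantities.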

  3. Implementation of an anisotropic mechanical model for shale in Geodyn

    Energy Technology Data Exchange (ETDEWEB)

    Attia, A; Vorobiev, O; Walsh, S

    2015-05-15

    The purpose of this report is to present the implementation of a shale model in the Geodyn code, based on published rock material models and properties that can help a petroleum engineer in his design of various strategies for oil/gas recovery from shale rock formation.

  4. Reduced physics models in SOLPS for reactor scoping studies

    Energy Technology Data Exchange (ETDEWEB)

    Coster, D.P. [Max-Planck-Institut fuer Plasmaphysik, Garching (Germany)

    2016-08-15

    Heat exhaust is a challenge for ITER and becomes even more of an issue for devices beyond ITER. The main reason for this is that the power produced in the core scales as R³ while relying on standard exhaust physics results in the heat exhaust scaling as R¹ (R is the major radius). ITER has used SOLPS (B2-EIRENE) to design the ITER divertor, as well as to provide a database that supports the calculations of the ITER operational parameter space. The typical run time for such SOLPS runs is of the order 3 months (for D+C+He using EIRENE to treat the neutrals kinetically with an extensive choice of atomic and molecular physics). Future devices will be expected to radiate much of the power before it crosses the separatrix, and this requires treating extrinsic impurities such as Ne, Ar, Kr and Xe - the large number of charge states puts additional pressure on SOLPS, further slowing down the code. For design work of future machines, fast models have been implemented in system codes but these are usually unavoidably restricted in the included physics. As a bridge between system studies and detailed SOLPS runs, SOLPS offers a number of possibilities to speed up the code considerably at the cost of reducing the fidelity of the physics. By employing a fluid neutral model, aggressive bundling of the charge states of impurities, and reducing the size of the grids used, the run time for one second of physics time (which is often enough for the divertor to come to a steady state) can be reduced to approximately one day. This work looks at the impact of these trade-offs in the physics by comparing key parameters for different simulation assumptions. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  5. The Implementation of Vendor Managed Inventory In the Supply Chain with Simple Probabilistic Inventory Model

    Directory of Open Access Journals (Sweden)

    Anna Ika Deefi

    2016-01-01

    Full Text Available Numerous studies show that the implementation of Vendor Managed Inventory (VMI) benefits all members of the supply chain. This research develops a model to prove analytically the benefits obtained from implementing VMI in a supplier-buyer partnership. The model considers a two-level supply chain consisting of a single supplier and a single buyer. The analytical model is developed for supply chain inventory with probabilistic demand that follows a normal distribution. The model also incorporates lead time as a decision variable and investigates the impacts on inventory management before and after the implementation of VMI. The result shows that the analytical model is able to reduce the expected supply chain cost, improve the service level and increase the inventory replenishment. Numerical examples are given to illustrate these results.
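
    The paper's full VMI cost model is not reproduced here; as a sketch of the standard building block for a system with normally distributed demand and lead time as a decision lever, the following Python snippet computes the reorder point and safety stock for a target cycle service level. All numbers and function names are illustrative assumptions, not values from the paper.

    ```python
    from math import sqrt
    from scipy.stats import norm

    def reorder_point(daily_mean, daily_std, lead_time_days, service_level):
        """Reorder point for normally distributed demand over a fixed lead time.

        Textbook building block: r = mu_L + z * sigma_L, where mu_L and sigma_L
        are the mean and standard deviation of demand during the lead time.
        """
        mu_L = daily_mean * lead_time_days
        sigma_L = daily_std * sqrt(lead_time_days)
        z = norm.ppf(service_level)
        return mu_L + z * sigma_L, z * sigma_L   # (reorder point, safety stock)

    # Example: shortening the lead time (one lever a VMI arrangement can pull)
    # lowers the safety stock needed for the same 95% cycle service level.
    print(reorder_point(100, 20, 9, 0.95))   # longer lead time
    print(reorder_point(100, 20, 4, 0.95))   # shorter lead time
    ```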

  6. VLSI circuits implementing computational models of neocortical circuits.

    Science.gov (United States)

    Wijekoon, Jayawan H B; Dudek, Piotr

    2012-09-15

    This paper overviews the design and implementation of three neuromorphic integrated circuits developed for the COLAMN ("Novel Computing Architecture for Cognitive Systems based on the Laminar Microcircuitry of the Neocortex") project. The circuits are implemented in a standard 0.35 μm CMOS technology and include spiking and bursting neuron models, and synapses with short-term (facilitating/depressing) and long-term (STDP and dopamine-modulated STDP) dynamics. They enable execution of complex nonlinear models in accelerated-time, as compared with biology, and with low power consumption. The neural dynamics are implemented using analogue circuit techniques, with digital asynchronous event-based input and output. The circuits provide configurable hardware blocks that can be used to simulate a variety of neural networks. The paper presents experimental results obtained from the fabricated devices, and discusses the advantages and disadvantages of the analogue circuit approach to computational neural modelling.

  7. Implementation of Enhanced Apriori Algorithm with Map Reduce for Optimizing Big Data

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Khatri

    2015-07-01

    Full Text Available Nowadays, as a result of the rapid increase in data technology, large-scale processing is a major focus of advanced technology. To cope with this progress in data collection and storage technologies, the design and implementation of large-scale algorithms for data mining is gaining popularity and interest. In the data mining domain, association rule learning is a common and well-researched method for discovering interesting relations between variables in large databases. Apriori is the key algorithm for generating frequent item sets, and analysing frequent item sets is a crucial step in finding rules and associations between them. This stands as a primary foundation for supervised learning, which incorporates classifier and feature extraction methods. Implementing this algorithm is crucial to infer the behaviour of structured data. In the scientific domain, most structured data are voluminous; processing such huge data requires special and dedicated computing machines, and setting up such an infrastructure is difficult and costly. Association rule mining requires large computation and I/O traffic capacity. This paper focuses mainly on creating association rules and on the map/reduce design and implementation of Apriori for structured data, optimizing the Apriori algorithm to reduce communication cost. The paper aims to extract frequent patterns among sets of items in a transaction database or other repositories. The Apriori algorithm has a great influence on finding frequent item sets using candidate generation. Apache Hadoop MapReduce is used to build the cluster; it works on the MapReduce programming model and is used to improve the efficiency and processing of large-scale data on a high-performance cluster. It additionally processes huge data sets in parallel on large

  8. From Implement to Outcrop: a model for identifying implement source rock at outcrop

    Directory of Open Access Journals (Sweden)

    Amy Davis

    2009-09-01

    Full Text Available Traditionally, the sourcing of prehistoric stone tools in Britain has been done most successfully by comparing the petrological and geochemical characteristics of individual stone tools with rock and debitage from known prehistoric quarry sites and stone tool production sites. However, this is a very rare occurrence because only a very small proportion of stone tools in Britain have a secure archaeological provenance, including those from prehistoric quarries or production sites. Substantial numbers of stone tools in the British archaeological record are chance finds; they lack a secure archaeological context. Through a case study of Carrock Fell and the Implement Petrology Group XXXIV, this article presents a new methodological and statistical model for assembling, analysing and interpreting fieldwork evidence, which combines petrological data, geochemical portable X-ray fluorescence (PXRF) data, and geochemical inductively coupled plasma-atomic spectroscopy (ICP) data to establish a signature for 17 gabbroic prehistoric stone implements (Table 1). These results are then compared with similar data gathered from rocks at outcrop. Through qualitative and quantitative analysis, seven gabbroic implements could be securely provenanced to rock from particular outcrop locations. The model is transferable to other similar contexts where sources of implement rock are sought from apparently random distributions of stone tools.

  9. Showcasing leadership exemplars to propel professional practice model implementation.

    Science.gov (United States)

    Storey, Susan; Linden, Elizabeth; Fisher, Mary L

    2008-03-01

    Implementing a professional practice model is a highly complex organizational change that requires expert leadership to be successful. What are the aspects of successful leadership in implementing such a practice change, and how can those behaviors be transferred to other leaders? The authors describe qualitative research that examined this question by interviewing key leaders who are seen by peers as exemplifying the components and intent of one professional practice model. Using their responses to educate peers is seen as a method to expand their best practices. The authors recommend methods to disseminate these best practices in other organizations.

  10. An Open Source modular platform for hydrological model implementation

    Science.gov (United States)

    Kolberg, Sjur; Bruland, Oddbjørn

    2010-05-01

    An implementation framework for setup and evaluation of spatio-temporal models is developed, forming a highly modularized distributed model system. The ENKI framework allows building space-time models for hydrological or other environmental purposes, from a suite of separately compiled subroutine modules. The approach makes it easy for students, researchers and other model developers to implement, exchange, and test single routines in a fixed framework. The open-source license and modular design of ENKI will also facilitate rapid dissemination of new methods to institutions engaged in operational hydropower forecasting or other water resource management. Written in C++, ENKI uses a plug-in structure to build a complete model from separately compiled subroutine implementations. These modules contain very little code apart from the core process simulation, and are compiled as dynamic-link libraries (dll). A narrow interface allows the main executable to recognise the number and type of the different variables in each routine. The framework then exposes these variables to the user within the proper context, ensuring that time series exist for input variables, initialisation for states, GIS data sets for static map data, manually or automatically calibrated values for parameters etc. ENKI is designed to meet three different levels of involvement in model construction: • Model application: Running and evaluating a given model. Regional calibration against arbitrary data using a rich suite of objective functions, including likelihood and Bayesian estimation. Uncertainty analysis directed towards input or parameter uncertainty. o Need not: Know the model's composition of subroutines, or the internal variables in the model, or the creation of method modules. • Model analysis: Link together different process methods, including parallel setup of alternative methods for solving the same task. Investigate the effect of different spatial discretization schemes. o Need not

  11. COST OF QUALITY MODELS AND THEIR IMPLEMENTATION IN MANUFACTURING FIRMS

    Directory of Open Access Journals (Sweden)

    N.M. Vaxevanidis

    2009-03-01

    Full Text Available In order to improve quality, an organization must take into account the costs associated with achieving quality since the objective of continuous improvement programs is not only to meet customer requirements, but also to do it at the lowest, possible, cost. This can only obtained by reducing the costs needed to achieve quality, and the reduction of these costs is only possible if they are identified and measured. Therefore, measuring and reporting the cost of quality (CoQ should be considered an important issue for achieving quality excellence. To collect quality costs an organization needs to adopt a framework to classify costs; however, there is no general agreement on a single broad definition of quality costs. CoQ is usually understood as the sum of conformance plus non-conformance costs, where cost of conformance is the price paid for prevention of poor quality (for example, inspection and quality appraisal and cost of non-conformance is the cost of poor quality caused by product and service failure (for example, rework and returns. The objective of this paper is to give a survey of research articles on the topic of CoQ; it opens with a literature review focused on existing CoQ models; then, it briefly presents the most common CoQ parameters and the metrics (indices used for monitoring CoQ. Finally, the use of CoQ models in practice, i.e., the implementation of a quality costing system and cost of quality reporting in companies is discussed, with emphasis in cases concerning manufacturing firms.
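
    In symbols, the conformance/non-conformance split described above (the usual prevention-appraisal-failure decomposition; the notation is ours, not the survey's) reads:

    ```latex
    C_Q \;=\; \underbrace{C_{\text{prevention}} + C_{\text{appraisal}}}_{\text{cost of conformance}}
    \;+\; \underbrace{C_{\text{internal failure}} + C_{\text{external failure}}}_{\text{cost of non-conformance}}
    ```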

  12. Implementing Model-Check for Employee and Management Satisfaction

    Science.gov (United States)

    Jones, Corey; LaPha, Steven

    2013-01-01

    This presentation will discuss methods by which ModelCheck can be implemented not only to improve model quality, but also to satisfy both employees and management through different sets of quality checks. This approach allows a standard set of modeling practices to be upheld throughout a company, with minimal interaction required from the end user. The presenter will demonstrate how to create multiple ModelCheck standards, preventing users from evading the system, and how it can improve the quality of drawings and models.

  13. The mathematical theory of reduced MHD models for fusion plasmas

    OpenAIRE

    Guillard, Hervé

    2015-01-01

    The derivation of reduced MHD models for fusion plasma is here formulated as a special instance of the general theory of singular limit of hyperbolic system of PDEs with large operator. This formulation allows to use the general results of this theory and to prove rigorously that reduced MHD models are valid approximations of the full MHD equations. In particular, it is proven that the solutions of the full MHD system converge to the solutions of an appropriate reduced model.

  14. Using Variational Inference and MapReduce to Scale Topic Modeling

    CERN Document Server

    Zhai, Ke; Asadi, Nima

    2011-01-01

    Latent Dirichlet Allocation (LDA) is a popular topic modeling technique for exploring document collections. Because of the increasing prevalence of large datasets, there is a need to improve the scalability of inference of LDA. In this paper, we propose a technique called MapReduce LDA (Mr. LDA) to accommodate very large corpus collections in the MapReduce framework. In contrast to other techniques to scale inference for LDA, which use Gibbs sampling, we use variational inference. Our solution efficiently distributes computation and is relatively simple to implement. More importantly, this variational implementation, unlike highly tuned and specialized implementations, is easily extensible. We demonstrate two extensions of the model possible with this scalable framework: informed priors to guide topic discovery and modeling topics from a multilingual corpus.
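
    Mr. LDA itself is a Hadoop implementation; as a rough sketch of the per-document variational update that each mapper performs (the standard mean-field LDA update, not the authors' code, with hypothetical array names), the following Python function iterates the document's variational Dirichlet parameter and returns the sufficient statistics a reducer would aggregate into new topic-word distributions.

    ```python
    import numpy as np
    from scipy.special import digamma

    def e_step_document(word_ids, word_counts, alpha, expElogbeta, iters=50, tol=1e-4):
        """Mean-field update for one document (what a mapper would run).

        word_ids    : indices of the distinct words in the document
        word_counts : their counts (same length as word_ids)
        alpha       : symmetric Dirichlet prior on topic proportions
        expElogbeta : K x V array of exp(E[log beta]) from the previous iteration
        """
        word_counts = np.asarray(word_counts, dtype=float)
        K = expElogbeta.shape[0]
        betad = expElogbeta[:, word_ids]                   # K x N slice for this document
        gamma = np.full(K, alpha + word_counts.sum() / K)  # simple initialisation
        expElogtheta = np.exp(digamma(gamma) - digamma(gamma.sum()))
        for _ in range(iters):
            last = gamma
            phinorm = expElogtheta @ betad + 1e-100        # per-word normaliser
            gamma = alpha + expElogtheta * (betad @ (word_counts / phinorm))
            expElogtheta = np.exp(digamma(gamma) - digamma(gamma.sum()))
            if np.mean(np.abs(gamma - last)) < tol:
                break
        # Sufficient statistics emitted by the mapper; a reducer sums them over
        # documents to re-estimate the topic-word distributions.
        sstats = np.outer(expElogtheta, word_counts / phinorm) * betad
        return gamma, sstats
    ```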

  15. Reduced mortality after the implementation of a protocol for the early detection of severe sepsis.

    Science.gov (United States)

    Westphal, Glauco A; Koenig, Álvaro; Caldeira Filho, Milton; Feijó, Janaína; de Oliveira, Louise Trindade; Nunes, Fernanda; Fujiwara, Kênia; Martins, Sheila Fonseca; Roman Gonçalves, Anderson R

    2011-02-01

    We evaluate the impact that implementing an in-hospital protocol for the early detection of sepsis risk has on mortality from severe sepsis/septic shock. This was a prospective cohort study conducted in 2 phases at 2 general hospitals in Brazil. In phase I, patients with severe sepsis/septic shock were identified and treated in accordance with the Surviving Sepsis Campaign guidelines. Over the subsequent 12 months (phase II), patients with severe sepsis/septic shock were identified by means of active surveillance for signs of sepsis risk (SSR). We compared the 2 cohorts in terms of demographic variables, the time required for the identification of at least 2 SSRs, compliance with sepsis bundles (6- and 24-hour), and mortality rates. We identified 217 patients with severe sepsis/septic shock (102 during phase I and 115 during phase II). There were significant differences between phases I and II in terms of the time required for the identification of at least 2 SSRs (34 ± 48 vs 11 ± 17 hours; P detection of sepsis promoted early treatment, reducing in-hospital mortality from severe sepsis/septic shock. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Gradient Plasticity Model and its Implementation into MARMOT

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Erin I.; Li, Dongsheng; Zbib, Hussein M.; Sun, Xin

    2013-08-01

    The influence of strain gradient on the deformation behavior of nuclear structural materials, such as body-centered cubic (bcc) iron alloys, has been investigated. We have developed and implemented a dislocation-based strain gradient crystal plasticity material model. A mesoscale crystal plasticity model for inelastic deformation of a metallic material, bcc steel, has been developed and implemented numerically. Continuum Dislocation Dynamics (CDD) with a novel constitutive law based on dislocation density evolution mechanisms was developed to investigate the deformation behaviors of single crystals, as well as polycrystalline materials, by coupling CDD and crystal plasticity (CP). The dislocation density evolution law in this model is mechanism-based, with parameters measured from experiments or simulated with lower-length-scale models, not an empirical law with parameters back-fitted from flow curves.

  17. VAR, SVAR and SVEC Models: Implementation Within R Package vars

    Directory of Open Access Journals (Sweden)

    Bernhard Pfaff

    2008-02-01

    Full Text Available The structure of the package vars and its implementation of vector autoregressive, structural vector autoregressive and structural vector error correction models are explained in this paper. In addition to the three cornerstone functions VAR(), SVAR() and SVEC() for estimating such models, functions for diagnostic testing, estimation of restricted models, prediction, causality analysis, impulse response analysis and forecast error variance decomposition are provided too. It is further possible to convert vector error correction models into their level-VAR representation. The different methods and functions are elucidated by employing a macroeconomic data set for Canada. However, the focus of this writing is on the implementation part rather than on the usage of the tools at hand.

  18. Model-reduced gradient-based history matching

    NARCIS (Netherlands)

    Kaleta, M.P.; Hanea, R.G.; Heemink, A.W.; Jansen, J.D.

    2010-01-01

    Gradient-based history matching algorithms can be used to adapt the uncertain parameters in a reservoir model using production data. They require, however, the implementation of an adjoint model to compute the gradients, which is usually an enormous programming effort. We propose a new approach to g

  19. Implementation of a bundle of care to reduce surgical site infections in patients undergoing vascular surgery.

    Directory of Open Access Journals (Sweden)

    Jasper van der Slegt

    Full Text Available BACKGROUND: Surgical site infections (SSI's) are associated with severe morbidity, mortality and increased health care costs in vascular surgery. OBJECTIVE: To implement a bundle of care in vascular surgery and measure the effects on the overall and deep-SSI rates. DESIGN: Prospective, quasi-experimental, cohort study. METHODS: A prospective surveillance for SSI's after vascular surgery was performed in the Amphia hospital in Breda, from 2009 through 2011. A bundle developed by the Dutch hospital patient safety program (DHPSP) was introduced in 2009. The elements of the bundle were (1) perioperative normothermia, (2) hair removal before surgery, (3) the use of perioperative antibiotic prophylaxis and (4) discipline in the operating room. Bundle compliance was measured every 3 months in a random sample of surgical procedures and this was used for feedback. RESULTS: Bundle compliance improved significantly from an average of 10% in 2009 to 60% in 2011. In total, 720 vascular procedures were performed during the study period and 75 (10.4%) SSI were observed. Deep SSI occurred in 25 (3.5%) patients. Patients with SSI's (28.5±29.3 vs 10.8±11.3, p<0.001) and deep-SSI's (48.3±39.4 vs 11.4±11.8, p<0.001) had a significantly longer length of hospital stay after surgery than patients without an infection. A significantly higher mortality was observed in patients who developed a deep SSI (adjusted OR: 2.96, 95% confidence interval 1.32-6.63). Multivariate analysis showed a significant and independent decrease of the SSI-rate over time that paralleled the introduction of the bundle. The SSI-rate was 51% lower in 2011 compared to 2009. CONCLUSION: The implementation of the bundle was associated with improved compliance over time and a 51% reduction of the SSI-rate in vascular procedures. The bundle did not require expensive or potentially harmful interventions and is therefore an important tool to improve patient safety and reduce SSI's in patients undergoing

  20. Hybrids of Gibbs Point Process Models and Their Implementation

    Directory of Open Access Journals (Sweden)

    Adrian Baddeley

    2013-11-01

    Full Text Available We describe a simple way to construct new statistical models for spatial point pattern data. Taking two or more existing models (finite Gibbs spatial point processes we multiply the probability densities together and renormalise to obtain a new probability density. We call the resulting model a hybrid. We discuss stochastic properties of hybrids, their statistical implications, statistical inference, computational strategies and software implementation in the R package spatstat. Hybrids are particularly useful for constructing models which exhibit interaction at different spatial scales. The methods are demonstrated on a real data set on human social interaction. Software and data are provided.
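
    In symbols, the construction described above is simply

    ```latex
    f(x) \;=\; c \prod_{i=1}^{m} f_i(x),
    ```

    where f_1, ..., f_m are the densities of the component Gibbs models and c is the normalising constant; interaction at different spatial scales enters through the different factors.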

  1. Theoretic models for recommendation and implementation of assistive technology

    Directory of Open Access Journals (Sweden)

    Ana Cristina de Jesus Alves

    2016-07-01

    Full Text Available Introduction: Recent international research seeks to understand the factors affecting the successful use of assistive technology devices, through studies on the systematization of assessments, the abandonment of devices, and theoretical models that consider the aspects of device implementation. In Brazil, research has focused on developing new technologies, and there are still not enough studies on the successful use of devices and on ways of implementing assistive technology. Objective: To identify conceptual models used for the indication and implementation of assistive technology devices. Method: Literature review. The survey was conducted in six databases: CINAHL, ERIC, GALE, LILACS, MEDLINE and PsycInfo. A critical analysis as described by Grant and Booth was used. Results: There is no record of a Brazilian survey and, among the 29 selected articles, 17 conceptual models used in the area of AT were found; of these, 14 were specific to AT. The results showed that new conceptual models of AT are under development, and the conceptual model "Matching Person and Technology - MPT" was the most frequently mentioned. Conclusion: We can observe that practices related to the AT area in the international context correlate with conceptual models; thus, we hope this study can contribute to the dissemination of these precepts at the national level.

  2. Reduced computational models of serotonin synthesis, release, and reuptake.

    Science.gov (United States)

    Flower, Gordon; Wong-Lin, KongFatt

    2014-04-01

    Multiscale computational models can provide systemic evaluation and prediction of neuropharmacological drug effects. To date, little computational modeling work has been done to bridge from intracellular to neuronal circuit level. A complex model that describes the intracellular dynamics of the presynaptic terminal of a serotonergic neuron has been previously proposed. By systematically perturbing the model's components, we identify the slow and fast dynamical components of the model, and the reduced slow or fast mode of the model is computationally significantly more efficient with accuracy not deviating much from the original model. The reduced fast-mode model is particularly suitable for incorporating into neurobiologically realistic spiking neuronal models, and hence for large-scale realistic computational simulations. We also develop user-friendly software based on the reduced models to allow scientists to rapidly test and predict neuropharmacological drug effects at a systems level.

  3. Reducing preterm birth by a statewide multifaceted program: an implementation study.

    Science.gov (United States)

    Newnham, John P; White, Scott W; Meharry, Suzanne; Lee, Han-Shin; Pedretti, Michelle K; Arrese, Catherine A; Keelan, Jeffrey A; Kemp, Matthew W; Dickinson, Jan E; Doherty, Dorota A

    2017-05-01

    A comprehensive preterm birth prevention program was introduced in the state of Western Australia encompassing new clinical guidelines, an outreach program for health care practitioners, a public health program for women and their families based on print and social media, and a new clinic at the state's sole tertiary level perinatal center for referral of those pregnant women at highest risk. The initiative had the single aim of safely lowering the rate of preterm birth. The objective of the study was to evaluate the outcomes of the initiative on the rates of preterm birth both statewide and in the single tertiary level perinatal referral center. This was a prospective population-based cohort study of perinatal outcomes before and after 1 full year of implementation of the preterm birth prevention program. In the state overall, the rate of singleton preterm birth was reduced by 7.6% and was lower than in any of the preceding 6 years. This reduction amounted to 196 cases relative to the year before the introduction of the initiative and the effect extended from the 28-31 week gestational age group onward. Within the tertiary level center, the rate of preterm birth in 2015 was also significantly lower than in the preceding years. A comprehensive and multifaceted preterm birth prevention program aimed at both health care practitioners and the general public, operating within the environment of a government-funded universal health care system can significantly lower the rate of early birth. Further research is now required to increase the effect and to determine the relative contributions of each of the interventions. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  4. A Neuron Model for FPGA Spiking Neuronal Network Implementation

    Directory of Open Access Journals (Sweden)

    BONTEANU, G.

    2011-11-01

    Full Text Available We propose a neuron model, able to reproduce the basic elements of the neuronal dynamics, optimized for digital implementation of Spiking Neural Networks. Its architecture is structured in two major blocks, a datapath and a control unit. The datapath consists of a membrane potential circuit, which emulates the neuronal dynamics at the soma level, and a synaptic circuit used to update the synaptic weight according to the spike timing dependent plasticity (STDP) mechanism. The proposed model is implemented into a Cyclone II-Altera FPGA device. Our results indicate the neuron model can be used to build up 1K spiking neural networks on reconfigurable logic support, to explore various network topologies.
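    As a software-level illustration of the two datapath blocks described above (a membrane-potential update and an STDP weight update), the sketch below uses a simple leaky integrate-and-fire rule with a pair-based STDP rule. The equations and constants are illustrative assumptions, not the model proposed in the paper.

```python
import numpy as np

# Illustrative leaky integrate-and-fire update (soma-level dynamics).
# Constants are placeholders, not the parameters of the proposed FPGA model.
def lif_step(v, i_syn, dt=1.0, tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    v = v + dt * ((v_rest - v) / tau + i_syn)   # integrate the input current
    spiked = v >= v_thresh
    if spiked:
        v = v_reset                             # reset after a spike
    return v, spiked

# Pair-based STDP rule (synaptic weight update block).
def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau_stdp=20.0):
    dt_spikes = t_post - t_pre
    if dt_spikes > 0:       # pre before post -> potentiation
        w += a_plus * np.exp(-dt_spikes / tau_stdp)
    else:                   # post before pre -> depression
        w -= a_minus * np.exp(dt_spikes / tau_stdp)
    return float(np.clip(w, 0.0, 1.0))
```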

  5. Model-implementation fidelity in cyber physical system design

    CERN Document Server

    Fabre, Christian

    2017-01-01

    This book puts in focus various techniques for checking the modeling fidelity of Cyber Physical Systems (CPS) with respect to the physical world they represent. The authors present modeling and analysis techniques from different communities and very different angles, discuss their possible interactions, and discuss the commonalities and differences between their practices. Coverage includes model-driven development, resource-driven development, statistical analysis, proofs of simulator implementation, compiler construction, power/temperature modeling of digital devices, high-level performance analysis, and code/device certification. Several industrial contexts are covered, including modeling of computing and communication, proof architecture models and statistics-based validation techniques. Addresses CPS design problems such as cross-application interference, parsimonious modeling, and trustful code production. Describes solutions, such as simulation for extra-functional properties, extension of cod...

  6. Modeling of enterprise information systems implementation: a preliminary investigation

    Science.gov (United States)

    Yusuf, Yahaya Y.; Abthorpe, M. S.; Gunasekaran, Angappa; Al-Dabass, D.; Onuh, Spencer

    2001-10-01

    The business enterprise has never been in greater need of agility, and the current trend will continue unabated well into the future. It is now recognized that information systems are both the foundation and a necessary condition for increased responsiveness. A successful implementation of Enterprise Resource Planning (ERP) can help a company move towards delivering on its competitive objectives, as it enables suppliers to reach out to customers beyond the borders of the traditional market defined by geography. The cost of implementation, even when it is successful, can be significant. Bearing in mind the potential strategic benefits, it is important that the implementation project is managed effectively. To this end, a project cost model against which to benchmark ongoing project expenditure versus activities completed is proposed in this paper.

  7. An Implementation Model of Parlay MMCCS API Based on SIP

    Institute of Scientific and Technical Information of China (English)

    GUAN You-qing; SHEN Su-bin

    2006-01-01

    Parlay Multi Media Call Control Services (MMCCS) Application Programming Interfaces (API) based on Session Initiation Protocol (SIP) is essential for the implementation of Parlay Call Control (CC) API. This paper first proposes an implementation model of Parlay MMCCS API based on SIP, then presents a mapping between SIP and MMCCS API as far as methods and parameters are concerned, illustrates corresponding relationships between different components of SIP and MMCCS API by means of an application collaboration diagram, and finally presents an application using Java codes as well as some SIP messages. The application shows that a majority of MMCCS API based on SIP can be implemented and therefore verifies our mapping between MMCCS API and SIP.

  8. Towards a CPN-Based Modelling Approach for Reconciling Verification and Implementation of Protocol Models

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael

    2013-01-01

    Formal modelling of protocols is often aimed at one specific purpose such as verification or automatically generating an implementation. This leads to models that are useful for one purpose, but not for others. Being able to derive models for verification and implementation from a single model ... and implementation. Our approach has been developed in the context of the Coloured Petri Nets (CPNs) modelling language. We illustrate our approach by presenting a descriptive specification model of the Websocket protocol, which is currently under development by the Internet Engineering Task Force (IETF), and we show ...

  9. Implementation of building information modeling in Malaysian construction industry

    Science.gov (United States)

    Memon, Aftab Hameed; Rahman, Ismail Abdul; Harman, Nur Melly Edora

    2014-10-01

    This study assessed the level of implementation of Building Information Modeling (BIM) in the construction industry of Malaysia. It also investigated several software packages facilitating BIM and the challenges affecting its implementation. Data were collected through a questionnaire survey among construction practitioners; 95 completed questionnaires, received from 150 sets distributed to consultant, contractor and client organizations, were analyzed statistically. The findings indicated that the level of BIM implementation in the Malaysian construction industry is very low. The average index method, employed to assess the effectiveness of various BIM software packages, highlighted Bentley Construction, AutoCAD and ArchiCAD as the three most popular and effective packages. The major challenges to BIM implementation are the need for enhanced collaboration, the additional work required of designers, and interoperability issues. To improve the level of BIM implementation in the Malaysian industry, it is recommended that a flexible BIM training program for all practitioners be created.

  10. Reduced Order Internal Models in the Frequency Domain

    OpenAIRE

    Laakkonen, Petteri; Paunonen, Lassi

    2016-01-01

    The internal model principle states that all robustly regulating controllers must contain a suitably reduplicated internal model of the signal to be regulated. Using frequency domain methods, we show that the number of the copies may be reduced if the class of perturbations in the problem is restricted. We present a two step design procedure for a simple controller containing a reduced order internal model achieving robust regulation. The results are illustrated with an example of a five tank...

  11. Modelling mitigation options to reduce diffuse nitrogen water pollution from agriculture.

    Science.gov (United States)

    Bouraoui, Fayçal; Grizzetti, Bruna

    2014-01-15

    Agriculture is responsible for large-scale water quality degradation and is estimated to contribute around 55% of the nitrogen entering the European Seas. The key policy instrument for protecting inland, transitional and coastal water resources is the Water Framework Directive (WFD). Reducing nutrient losses from agriculture is crucial to the successful implementation of the WFD. There are several mitigation measures that can be implemented to reduce nitrogen losses from agricultural areas to surface and ground waters. For the selection of appropriate measures, models are useful for quantifying the expected impacts and the associated costs. In this article we review some of the models used in Europe to assess the effectiveness of nitrogen mitigation measures, ranging from fertilizer management to the construction of riparian areas and wetlands. We highlight how the complexity of models is correlated with the type of scenarios that can be tested, with conceptual models mostly used to evaluate the impact of reduced fertilizer application, and the physically-based models used to evaluate the timing and location of mitigation options and the response times. We underline the importance of considering the lag time between the implementation of measures and effects on water quality. Models can be effective tools for targeting mitigation measures (identifying critical areas and timing), for evaluating their cost effectiveness, for taking into consideration pollution swapping and considering potential trade-offs in contrasting environmental objectives. Models are also useful for involving stakeholders during the development of catchment mitigation plans, increasing their acceptability.

  12. Implementation of Lean System on Erbium Doped Fibre Amplifier Manufacturing Process to Reduce Production Time

    Science.gov (United States)

    Maneechote, T.; Luangpaiboon, P.

    2010-10-01

    The manufacturing process of erbium doped fibre amplifiers is complicated. It needs to meet customers' requirements in an economic climate in which products must be shipped to customers as soon as possible after purchase orders are placed. This research aims to study and improve the processes and production lines of erbium doped fibre amplifiers using lean manufacturing systems via computer simulation. Three scenarios of lean tool-box systems are selected via an expert system. In the first, a production schedule based on shipment date is combined with a first-in-first-out (FIFO) control system. The second scenario focuses on a redesigned flow-process plant layout. The third combines the flow-process plant layout with the shipment-date-based production schedule and the FIFO control system. Computer simulation with the limited available data, via expected values, is used to observe the performance of all scenarios. The most preferable lean tool-box systems resulting from the simulation are selected for implementation in the real production process of erbium doped fibre amplifiers. A comparison is carried out to determine the actual performance via an analysis of variance of the response, the production time per unit achieved in each scenario. The adequacy of the linear statistical model is checked through the experimental errors, or residuals, for normality, constant variance and independence. The results show that a hybrid scenario combining the FIFO control and the flow-process plant layout statistically leads to better performance in terms of the mean and variance of production times.

  13. Implementing a stochastic model for oil futures prices

    Energy Technology Data Exchange (ETDEWEB)

    Cortazar, Gonzalo [Departamento de Ingenieria Industrial y de Sistemas, Escuela de Ingenieria, Pontificia Universidad Catolica de Chile, Vicuna Mackenna 4860, Santiago (Chile); Schwartz, Eduardo S. [Anderson School at UCLA, 110 Westwood Plaza, Los Angeles, CA 90095-1481 (United States)

    2003-05-01

    This paper develops a parsimonious three-factor model of the term structure of oil futures prices that can be easily estimated from available futures price data. In addition, it proposes a new simple spreadsheet implementation procedure. The procedure is flexible, may be used with market prices of any oil contingent claim with closed form pricing solution, and easily deals with missing data problems. The approach is implemented using daily prices of all futures contracts traded at the New York Mercantile Exchange between 1991 and 2001. In-sample and out-of-sample tests indicate that the model fits the data extremely well. Though the paper concentrates on oil, the approach can be used for any other commodity with well-developed futures markets.

  14. Model and Implementation of Communication Link Management Supporting High Availability

    Institute of Scientific and Technical Information of China (English)

    Luo Juan; Cao Yang; He Zheng; Li Feng

    2004-01-01

    Despite the rapid evolution in all aspects of computer technology, both computer hardware and software are prone to numerous failure conditions. In this paper, we analyze the characteristics of a computer system and the methods of constructing one, and propose a communication link management model supporting high availability for network applications, which greatly increases their availability. We then elaborate on the heartbeat/service detection, fail-over, service take-over, switchback and error recovery processes of the model. In constructing the communication link, we implement link management and service take-over under the high-availability requirement, discuss the states and state transitions involved in building the communication link between the hosts, and depict the message transfer and the starting of timers. Finally, we apply the designed high-availability system to a network billing system and show how the system was constructed and implemented, fully satisfying the system requirements.
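    The heartbeat-detection and fail-over steps mentioned above can be sketched as follows; the timeout constant and host roles are hypothetical, and the paper's model is defined at the level of communication-link state machines rather than this simplified loop.

```python
import time

# Minimal sketch of heartbeat detection with service take-over.
# HEARTBEAT_TIMEOUT and the "primary"/"standby" roles are illustrative assumptions.
HEARTBEAT_TIMEOUT = 3.0   # seconds of silence before fail-over

class LinkMonitor:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.active = "primary"

    def on_heartbeat(self):
        """Called whenever a heartbeat message arrives from the peer host."""
        self.last_heartbeat = time.monotonic()

    def check(self):
        """Fail over to the standby host if the primary has gone silent."""
        if self.active == "primary" and time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            self.active = "standby"     # service take-over
        return self.active
```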

  15. REDUCING PROCESS VARIABILITY BY USING DMAIC MODEL: A CASE STUDY IN BANGLADESH

    OpenAIRE

    Ripon Kumar Chakrabortty; Tarun Kumar Biswas; Iraj Ahmed

    2013-01-01

    Nowadays many leading manufacturing industries have started to practice Six Sigma and Lean manufacturing concepts to boost their productivity as well as product quality. In this paper, the Six Sigma approach has been used to reduce the process variability of a food processing industry in Bangladesh. The DMAIC (Define, Measure, Analyze, Improve, and Control) model has been used to implement the Six Sigma philosophy. The five phases of the model have been structured step by step. Differen...

  16. Reduced length of stay following hip and knee arthroplasty in Denmark 2000-2009: from research to implementation

    DEFF Research Database (Denmark)

    Husted, Henrik; Jensen, Claus Munk; Solgaard, Søren

    2012-01-01

    Fast-track surgery is the combination of optimized clinical and organizational factors aiming at reducing convalescence and perioperative morbidity including the functional recovery resulting in reduced hospitalization. As the previous nationwide studies have demonstrated substantial variations...... in length of stay (LOS) following standardized operations such as total hip and knee arthroplasty (THA and TKA), this nationwide study was undertaken to evaluate the implementation process of fast-track THA and TKA in Denmark....

  17. Rigorous joining of advanced reduced-dimensional beam models to three-dimensional finite element models

    Science.gov (United States)

    Song, Huimin

    In the aerospace and automotive industries, many finite element analyses use lower-dimensional finite elements such as beams, plates and shells, to simplify the modeling. These simplified models can greatly reduce the computation time and cost; however, reduced-dimensional models may introduce inaccuracies, particularly near boundaries and near portions of the structure where reduced-dimensional models may not apply. Another factor in creation of such models is that beam-like structures frequently have complex geometry, boundaries and loading conditions, which may make them unsuitable for modeling with single type of element. The goal of this dissertation is to develop a method that can accurately and efficiently capture the response of a structure by rigorous combination of a reduced-dimensional beam finite element model with a model based on full two-dimensional (2D) or three-dimensional (3D) finite elements. The first chapter of the thesis gives the background of the present work and some related previous work. The second chapter is focused on formulating a system of equations that govern the joining of a 2D model with a beam model for planar deformation. The essential aspect of this formulation is to find the transformation matrices to achieve deflection and load continuity on the interface. Three approaches are provided to obtain the transformation matrices. An example based on joining a beam to a 2D finite element model is examined, and the accuracy of the analysis is studied by comparing joint results with the full 2D analysis. The third chapter is focused on formulating the system of equations for joining a beam to a 3D finite element model for static and free-vibration problems. The transition between the 3D elements and beam elements is achieved by use of the stress recovery technique of the variational-asymptotic method as implemented in VABS (the Variational Asymptotic Beam Section analysis). The formulations for an interface transformation matrix and

  18. CoMD Implementation Suite in Emerging Programming Models

    Energy Technology Data Exchange (ETDEWEB)

    2014-09-23

    CoMD-Em is a software implementation suite of the CoMD [4] proxy app using different emerging programming models. It is intended to analyze the features and capabilities of novel programming models that could help ensure code and performance portability and scalability across heterogeneous platforms while improving programmer productivity. Another goal is to provide the authors and vendors with meaningful feedback regarding the capabilities and limitations of their models. The actual application is a classical molecular dynamics (MD) simulation using either the Lennard-Jones (LJ) method or the embedded atom method (EAM) for primary particle interaction. The code can be extended to support alternate interaction models. The code is expected to run on a wide class of heterogeneous hardware configurations such as shared/distributed/hybrid memory, GPUs, and any other platform supported by the underlying programming model.

  19. Analysis of Ecodesign Implementation and Solutions for Packaging Waste System by Using System Dynamics Modeling

    Science.gov (United States)

    Berzina, Alise; Dace, Elina; Bazbauers, Gatis

    2010-01-01

    This paper discusses the findings of a research project which explored the packaging waste management system in Latvia. The paper focuses on identifying how the policy mechanisms can promote ecodesign implementation and material efficiency improvement and therefore reduce the rate of packaging waste accumulation in landfill. The method used for analyzing the packaging waste management policies is system dynamics modeling. The main conclusion is that the existing legislative instruments can be used to create an effective policy for ecodesign implementation but substantially higher tax rates on packaging materials and waste disposal than the existing have to be applied.

  20. Modeling the leadership attributes of top management in green innovation implementation

    Science.gov (United States)

    Ishak, Noormaizatul Akmar; Ramli, Mohammad Fadzli

    2015-05-01

    The implementation of green innovation in companies is of interest to governments all over the world. It has been a main focus of the Copenhagen and Kyoto protocols, which require governments to preserve nature through green initiatives. This paper proposes a mathematical model of the leadership attributes of top management in ensuring the implementation of green innovation in their companies' strategies to reduce operational cost. Examining green innovation implementation in Government-Linked Companies (GLCs), we identify that the leadership attributes are tied to the leadership style of the top managers in the companies. Through this model we show that a green leadership style consistently contributes more to cost saving and is therefore a more efficient leadership attribute for the GLCs.

  1. Implementation of New Process Models for Tailored Polymer Composite Structures into Processing Software Packages

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Ba Nghiep; Jin, Xiaoshi; Wang, Jin; Phelps, Jay; Tucker III, Charles L.; Kunc, Vlastimil; Bapanapalli, Satish K.; Smith, Mark T.

    2010-02-23

    This report describes the work conducted under the Cooperative Research and Development Agreement (CRADA) (Nr. 260) between the Pacific Northwest National Laboratory (PNNL) and Autodesk, Inc. to develop and implement process models for injection-molded long-fiber thermoplastics (LFTs) in processing software packages. The structure of this report is organized as follows. After the Introduction Section (Section 1), Section 2 summarizes the current fiber orientation models developed for injection-molded short-fiber thermoplastics (SFTs). Section 3 provides an assessment of these models to determine their capabilities and limitations, and the developments needed for injection-molded LFTs. Section 4 then focuses on the development of a new fiber orientation model for LFTs. This model is termed the anisotropic rotary diffusion - reduced strain closure (ARD-RSC) model as it explores the concept of anisotropic rotary diffusion to capture the fiber-fiber interaction in long-fiber suspensions and uses the reduced strain closure method of Wang et al. to slow down the orientation kinetics in concentrated suspensions. In contrast to fiber orientation modeling, before this project no standard model had been developed to predict the fiber length distribution in molded fiber composites. Section 5 is therefore devoted to the development of a fiber length attrition model in the mold. Sections 6 and 7 address the implementations of the models in AMI, and the conclusions drawn from this work are presented in Section 8.
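    For context, the baseline short-fiber orientation models referred to in Section 2 are conventionally written as an evolution equation for the second-order orientation tensor A. The Folgar-Tucker form below is quoted as widely published background, not as the report's ARD-RSC formulation, which modifies the rotary-diffusion term and slows the kinetics with the reduced strain closure:

```latex
\frac{\mathrm{D}\mathbf{A}}{\mathrm{D}t}
  = \big(\mathbf{W}\cdot\mathbf{A}-\mathbf{A}\cdot\mathbf{W}\big)
  + \xi\big(\mathbf{D}\cdot\mathbf{A}+\mathbf{A}\cdot\mathbf{D}-2\,\mathbb{A}\!:\!\mathbf{D}\big)
  + 2\,C_I\,\dot{\gamma}\,\big(\mathbf{I}-3\mathbf{A}\big)
```

    Here W and D are the vorticity and rate-of-deformation tensors, 𝔸 is the fourth-order orientation tensor supplied by a closure approximation, ξ is a particle shape factor, C_I the fiber-interaction coefficient and γ̇ the scalar shear rate.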

  2. Turning pain into cues for goal-directed behavior : Implementation intentions reduce escape-avoidance behavior on a painful task

    NARCIS (Netherlands)

    Karsdorp, P.A.; Geenen, R.; Kroese, F.M.; Vlaeyen, J.W.S.

    2016-01-01

    Pain automatically elicits escape-avoidance behavior to avert bodily harm. In patients with chronic pain, long-term escape-avoidance behavior may increase the risk of chronic disability. The aim of the present study was to examine whether implementation intentions reduce escape-avoidance behavior d

  3. Effects of Coaching on the Implementation of Functional Assessment-Based Parent Intervention in Reducing Challenging Behaviors

    Science.gov (United States)

    Fettig, Angel; Schultz, Tia R.; Sreckovic, Melissa A.

    2015-01-01

    This study examined the effects of coaching on the implementation of functional assessment-based parent intervention in reducing children's challenging behaviors. A multiple baseline across participants design was used with three parent-child dyads with children between the ages of 2 and 5 years. The intervention consisted of training and delayed…

  4. Parent Implementation of Function-Based Intervention to Reduce Children's Challenging Behavior: A Literature Review

    Science.gov (United States)

    Fettig, Angel; Barton, Erin E.

    2014-01-01

    The purpose of this literature review was to analyze the research on parent-implemented functional assessment (FA)-based interventions for reducing children's challenging behaviors. Thirteen studies met the review inclusion criteria. These studies were analyzed across independent variables, types of parent coaching and support provided,…

  6. Reducing Abstraction in High School Computer Science Education: The Case of Definition, Implementation, and Use of Abstract Data Types

    Science.gov (United States)

    Sakhnini, Victoria; Hazzan, Orit

    2008-01-01

    The research presented in this article deals with the difficulties and mental processes involved in the definition, implementation, and use of abstract data types encountered by 12th grade advanced-level computer science students. Research findings are interpreted within the theoretical framework of "reducing abstraction" [Hazzan 1999]. The…

  7. Modular Pneumatic Snake Robot: 3D Modelling, Implementation And Control

    Directory of Open Access Journals (Sweden)

    Pål Liljebäck

    2008-01-01

    Full Text Available This paper gives a treatment of various aspects related to snake locomotion. A mathematical model and a physical implementation of a modular snake robot are presented. A control strategy is also developed, yielding a general expression for different gait patterns. Two forms of locomotion have been simulated with the mathematical model, and experiments with the physical snake robot have been conducted. The simulation results revealed the parameter through which directional control may be achieved for each gait pattern. Experiments with the physical snake robot gave a crude qualitative verification of these findings.
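    Much of the snake-robot literature parameterises gait patterns of this kind by driving joint i with a phase-shifted sinusoid; the expression below is quoted as a generic example, not necessarily the exact expression derived in the paper:

```latex
\phi_i(t) \;=\; \alpha \,\sin\!\big(\omega t + (i-1)\,\delta\big) + \gamma
```

    Here α is the joint amplitude, ω the temporal frequency, δ the phase offset between consecutive joints and γ a joint offset; in many such formulations the offset γ is the parameter through which directional control is achieved, which is consistent with the kind of steering parameter identified in the simulations above.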

  8. Improved survival with an ambulatory model of non-invasive ventilation implementation in motor neuron disease.

    Science.gov (United States)

    Sheers, Nicole; Berlowitz, David J; Rautela, Linda; Batchelder, Ian; Hopkinson, Kim; Howard, Mark E

    2014-06-01

    Non-invasive ventilation (NIV) increases survival and quality of life in motor neuron disease (MND). NIV implementation historically occurred during a multi-day inpatient admission at this institution; however, increased demand led to prolonged waiting times. The aim of this study was to evaluate the introduction of an ambulatory model of NIV implementation. A prospective cohort study was performed. Inclusion criteria were referral for NIV implementation six months pre- or post-commencement of the Day Admission model. This model involved a 4-h stay to commence ventilation with follow-up in-laboratory polysomnography titration and outpatient attendance. Outcome measures included waiting time, hospital length of stay, adverse events and polysomnography data. Results indicated that after changing to the Day Admission model the median waiting time fell from 30 to 13.5 days (p Survival was also prolonged (median (IQR) 278 (51-512) days pre- vs 580 (306-1355) days post-introduction of the Day Admission model; hazard ratio 0.41, p = 0.04). Daytime PaCO2 was no different. In conclusion, reduced waiting time to commence ventilation and improved survival were observed following introduction of an ambulatory model of NIV implementation in people with MND, with no change in the effectiveness of ventilation.

  9. Taking the easy way out? Increasing implementation effort reduces probability maximizing under cognitive load.

    Science.gov (United States)

    Schulze, Christin; Newell, Ben R

    2016-07-01

    Cognitive load has previously been found to have a positive effect on strategy selection in repeated risky choice. Specifically, whereas inferior probability matching often prevails under single-task conditions, optimal probability maximizing sometimes dominates when a concurrent task competes for cognitive resources. We examined the extent to which this seemingly beneficial effect of increased task demands hinges on the effort required to implement each of the choice strategies. Probability maximizing typically involves a simple repeated response to a single option, whereas probability matching requires choice proportions to be tracked carefully throughout a sequential choice task. Here, we flipped this pattern by introducing a manipulation that made the implementation of maximizing more taxing and, at the same time, allowed decision makers to probability match via a simple repeated response to a single option. The results from two experiments showed that increasing the implementation effort of probability maximizing resulted in decreased adoption rates of this strategy. This was the case both when decision makers simultaneously learned about the outcome probabilities and responded to a dual task (Exp. 1) and when these two aspects were procedurally separated in two distinct stages (Exp. 2). We conclude that the effort involved in implementing a choice strategy is a key factor in shaping repeated choice under uncertainty. Moreover, highlighting the importance of implementation effort casts new light on the sometimes surprising and inconsistent effects of cognitive load that have previously been reported in the literature.

  10. REVEAL: An Extensible Reduced Order Model Builder for Simulation and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Agarwal, Khushbu; Sharma, Poorva; Ma, Jinliang; Lo, Chaomei; Gorton, Ian; Liu, Yan

    2013-04-30

    Many science domains need to build computationally efficient and accurate representations of high fidelity, computationally expensive simulations. These computationally efficient versions are known as reduced-order models. This paper presents the design and implementation of a novel reduced-order model (ROM) builder, the REVEAL toolset. This toolset generates ROMs based on science- and engineering-domain specific simulations executed on high performance computing (HPC) platforms. The toolset encompasses a range of sampling and regression methods that can be used to generate a ROM, automatically quantifies the ROM accuracy, and provides support for an iterative approach to improve ROM accuracy. REVEAL is designed to be extensible in order to utilize the core functionality with any simulator that has published input and output formats. It also defines programmatic interfaces to include new sampling and regression techniques so that users can ‘mix and match’ mathematical techniques to best suit the characteristics of their model. In this paper, we describe the architecture of REVEAL and demonstrate its usage with a computational fluid dynamics model used in carbon capture.
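    The sample-then-regress workflow described above can be illustrated with a generic sketch; the sampling plan, the polynomial regression and the `run_simulation` function are stand-ins for whatever simulator and techniques the toolset is configured with, not REVEAL's actual API.

```python
import numpy as np

def run_simulation(x):
    """Placeholder for an expensive HPC simulation; returns a scalar output."""
    return np.sin(x[0]) + 0.5 * x[1] ** 2

# 1. Sample the input space (here: a simple random design over given bounds).
rng = np.random.default_rng(0)
X = rng.uniform(low=[-3.0, -1.0], high=[3.0, 1.0], size=(50, 2))
y = np.array([run_simulation(x) for x in X])

# 2. Fit a cheap regression surrogate (quadratic polynomial features + least squares).
def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)

# 3. Quantify ROM accuracy on held-out samples (the kind of automatic check REVEAL provides).
X_test = rng.uniform(low=[-3.0, -1.0], high=[3.0, 1.0], size=(20, 2))
y_test = np.array([run_simulation(x) for x in X_test])
rmse = np.sqrt(np.mean((features(X_test) @ coef - y_test) ** 2))
print(f"ROM RMSE on held-out points: {rmse:.3f}")
```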

  11. Process control for sheet-metal stamping process modeling, controller design and shop-floor implementation

    CERN Document Server

    Lim, Yongseob; Ulsoy, A Galip

    2014-01-01

    Process Control for Sheet-Metal Stamping presents a comprehensive and structured approach to the design and implementation of controllers for the sheet metal stamping process. The use of process control for sheet-metal stamping greatly reduces defects in deep-drawn parts and can also yield large material savings from reduced scrap. Sheet-metal forming is a complex process and most often characterized by partial differential equations that are numerically solved using finite-element techniques. In this book, twenty years of academic research are reviewed and the resulting technology transitioned to the industrial environment. The sheet-metal stamping process is modeled in a manner suitable for multiple-input multiple-output control system design, with commercially available sensors and actuators. These models are then used to design adaptive controllers and real-time controller implementation is discussed. Finally, experimental results from actual shopfloor deployment are presented along with ideas for further...

  12. A Conceptual Model of Service Customization and Its Implementation

    Institute of Scientific and Technical Information of China (English)

    Su-Bin Shen; Guan-Qun Gu; Shun-Yi Zhang

    2004-01-01

    With the development of Internet and next generation networks in telecommunications, more and more new services are required to be introduced into networks. Introducing new services into traditional network is always associated with standardizing new protocols. The progress of protocol standardization usually takes several years, which cannot meet the increasing demands of the applications in Internet and next generation networks.Service customization in network systems may be one possible solution to cope with this problem. Based on the principle that network service is provided by interactions among protocol entities, this paper proposes a conceptual model of service customization (SECUM) by separating the service logic from protocol interactive logic within existing network architecture. The theory of Communicating Sequential Processes (CSP) is used to formalize the SECUM in order to locate exactly the service logic and to define precisely the SECUM. For validating the SECUM's usability in practical network systems, this paper also proposes an implementation model for SECUM: a component-based protocol implementation model (CPIM). CPIM discomposes protocol entity into application component, service component, message component and communication component. Service component associates application component with message component. Users or network managers can customize network services by configuring service component. The paper shows respectively the applications of SECUM and CPIM by proposing a customizable IP service model based on SECUM and describing an implementation of Session Initiation Protocol (SIP) based on CPIM. Compared with the existing service-customization techniques,SECUM is a service customization model internal to network system and may provide more powerful capabilities of service customization.

  13. Reducing Fear of the Laboratory Rat: A Participant Modeling Approach.

    Science.gov (United States)

    Barber, Nigel

    1994-01-01

    Reports on the use of participant modeling in a study of 56 college-level students to reduce fear of laboratory rats. Discovers that even mild exposure reduced fear significantly. Finds that women were more fearful initially but that their fear reduction was equal to that of men. (CFR)

  14. Implementation of angular response function modeling in SPECT simulations with GATE

    Energy Technology Data Exchange (ETDEWEB)

    Descourt, P; Visvikis, D [INSERM, U650, LaTIM, IFR SclnBioS, Universite de Brest, CHU Brest, Brest, F-29200 (France); Carlier, T; Bardies, M [CRCNA INSERM U892, Nantes (France); Du, Y; Song, X; Frey, E C; Tsui, B M W [Department of Radiology, J Hopkins University, Baltimore, MD (United States); Buvat, I, E-mail: dimitris@univ-brest.f [IMNC-UMR 8165 CNRS Universites Paris 7 et Paris 11, Orsay (France)

    2010-05-07

    Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy. (note)

  15. NOTE: Implementation of angular response function modeling in SPECT simulations with GATE

    Science.gov (United States)

    Descourt, P.; Carlier, T.; Du, Y.; Song, X.; Buvat, I.; Frey, E. C.; Bardies, M.; Tsui, B. M. W.; Visvikis, D.

    2010-05-01

    Among Monte Carlo simulation codes in medical imaging, the GATE simulation platform is widely used today given its flexibility and accuracy, despite long run times, which in SPECT simulations are mostly spent in tracking photons through the collimators. In this work, a tabulated model of the collimator/detector response was implemented within the GATE framework to significantly reduce the simulation times in SPECT. This implementation uses the angular response function (ARF) model. The performance of the implemented ARF approach has been compared to standard SPECT GATE simulations in terms of the ARF tables' accuracy, overall SPECT system performance and run times. Considering the simulation of the Siemens Symbia T SPECT system using high-energy collimators, differences of less than 1% were measured between the ARF-based and the standard GATE-based simulations, while considering the same noise level in the projections, acceleration factors of up to 180 were obtained when simulating a planar 364 keV source seen with the same SPECT system. The ARF-based and the standard GATE simulation results also agreed very well when considering a four-head SPECT simulation of a realistic Jaszczak phantom filled with iodine-131, with a resulting acceleration factor of 100. In conclusion, the implementation of an ARF-based model of collimator/detector response for SPECT simulations within GATE significantly reduces the simulation run times without compromising accuracy.
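    The acceleration reported above comes from replacing photon tracking through the collimator with a lookup in a precomputed angular response table. A schematic version of that lookup is sketched below; the table shape, binning and probability values are invented for illustration and are not GATE's actual ARF data format.

```python
import numpy as np

# Hypothetical ARF table: detection probability indexed by the photon's
# polar and azimuthal incidence angles (degrees), precomputed offline.
theta_bins = np.linspace(0.0, 90.0, 91)       # 1-degree polar bins
phi_bins = np.linspace(-180.0, 180.0, 361)    # 1-degree azimuthal bins
arf_table = np.random.default_rng(1).uniform(0.0, 1e-3, size=(90, 360))

def arf_weight(theta_deg, phi_deg):
    """Return the tabulated detection probability instead of tracking the
    photon through the collimator/detector geometry."""
    i = np.clip(np.digitize(theta_deg, theta_bins) - 1, 0, 89)
    j = np.clip(np.digitize(phi_deg, phi_bins) - 1, 0, 359)
    return arf_table[i, j]

# Each photon leaving the phantom contributes arf_weight(theta, phi) to the
# projection bin it points at, rather than being tracked explicitly.
print(arf_weight(12.3, 45.0))
```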

  16. Constitutive modeling and computational implementation for finite strain plasticity

    Science.gov (United States)

    Reed, K. W.; Atluri, S. N.

    1985-01-01

    This paper describes a simple alternate approach to the difficult problem of modeling material behavior. Starting from a general representation for a rate-type constitutive equation, it is shown by example how sets of test data may be used to derive restrictions on the scalar functions appearing in the representation. It is not possible to determine these functions from experimental data, but the aforementioned restrictions serve as a guide in their eventual definition. The implications are examined for hypoelastic, isotropically hardening plastic, and kinematically hardening plastic materials. A simple model for the evolution of the 'back-stress,' in a kinematic-hardening plasticity theory, that is entirely analogous to a hypoelastic stress-strain relation is postulated and examined in detail in modeling a finitely plastic tension-torsion test. The implementation of rate-type material models in finite element algorithms is also discussed.
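    The analogy described above can be stated generically (a schematic form, not the paper's specific model): a hypoelastic relation expresses an objective stress rate in terms of the rate of deformation, and the back-stress evolution is postulated in the same form,

```latex
\overset{\circ}{\boldsymbol{\sigma}} \;=\; \mathcal{H}(\boldsymbol{\sigma}) : \mathbf{D},
\qquad
\overset{\circ}{\boldsymbol{\alpha}} \;=\; \mathcal{A}(\boldsymbol{\alpha}) : \mathbf{D}^{p}
```

    where the open circle denotes an objective rate, D and D^p are the total and plastic rates of deformation, and the scalar coefficients inside the fourth-order response functions H and A are the functions on which the test data impose restrictions.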

  17. Sliding Mode Control Design via Reduced Order Model Approach

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper presents a design of continuous-time sliding mode control for the higher order systems via reduced order model. It is shown that a continuous-time sliding mode control designed for the reduced order model gives similar performance for the higher order system. The method is illustrated by numerical examples. The paper also introduces a technique for design of a sliding surface such that the system satisfies a cost-optimality condition when on the sliding surface.
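    As a generic reminder of the construction referred to above (a textbook form, not the paper's specific design), a linear sliding surface and reaching-law controller for a reduced order model ż = A_r z + B_r u can be written as

```latex
s(z) = C z,
\qquad
u = -(C B_r)^{-1}\big( C A_r z + k\,\operatorname{sign}(s) \big), \quad k > 0
```

    which drives the reduced state onto the surface s = 0 and keeps it there; as the abstract reports, applying the same surface and control to the higher order plant gives similar closed-loop behaviour.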

  18. A reduced order model for nonlinear vibroacoustic problems

    Directory of Open Access Journals (Sweden)

    Ouisse Morvan

    2012-07-01

    Full Text Available This work is related to geometrical nonlinearities applied to thin plates coupled with a fluid-filled domain. Model reduction is performed to reduce the computation time. The reduced order model (ROM) is derived from the uncoupled linear problem and enriched with residues to describe the nonlinear behavior and coupling effects. To show the efficiency of the proposed method, numerical simulations in the case of an elastic plate closing an acoustic cavity are presented.

  19. Implementation of transformed lenses in bed of nails reducing refractive index maximum value and sub-unity regions.

    Science.gov (United States)

    Prado, Daniel R; Osipov, Andrey V; Quevedo-Teruel, Oscar

    2015-03-15

    Transformation optics with quasi-conformal mapping is applied to design a Generalized Maxwell Fish-eye Lens (GMFEL) which can be used as a power splitter. The flattened focal line obtained as a result of the transformation allows the lens to adapt to planar antenna feeding systems. Moreover, sub-unity refraction index regions are reduced because of the space compression effect of the transformation, reducing the negative impact of removing those regions when implementing the lens. A technique to reduce the maximum value of the refractive index is presented to compensate for its increase because of the transformation. Finally, the lens is implemented with the bed of nails technology, employing a commercial dielectric slab to improve the range of the effective refractive index. The lens was simulated with a 3D full-wave simulator to validate the design, obtaining an original and feasible power splitter based on a dielectric lens.

  20. MapReduce implementation of a hybrid spectral library-database search method for large-scale peptide identification.

    Science.gov (United States)

    Kalyanaraman, Ananth; Cannon, William R; Latt, Benjamin; Baxter, Douglas J

    2011-11-01

    A MapReduce-based implementation called MR-MSPolygraph for parallelizing peptide identification from mass spectrometry data is presented. The underlying serial method, MSPolygraph, uses a novel hybrid approach to match an experimental spectrum against a combination of a protein sequence database and a spectral library. Our MapReduce implementation can run on any Hadoop cluster environment. Experimental results demonstrate that, relative to the serial version, MR-MSPolygraph reduces the time to solution from weeks to hours, for processing tens of thousands of experimental spectra. Speedup and other related performance studies are also reported on a 400-core Hadoop cluster using spectral datasets from environmental microbial communities as inputs. The source code, along with user documentation, is available at http://compbio.eecs.wsu.edu/MR-MSPolygraph. ananth@eecs.wsu.edu; william.cannon@pnnl.gov. Supplementary data are available at Bioinformatics online.
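    The map/reduce decomposition used here can be sketched generically: mappers score an experimental spectrum against one partition of the reference collection, and the reducer keeps the best-scoring match per spectrum. The scoring function, data layout and toy driver below are placeholders, not the MSPolygraph algorithm or the Hadoop runtime.

```python
from collections import defaultdict

def mapper(spectrum_id, spectrum, reference_partition, score):
    """Emit (spectrum_id, (candidate_id, score)) for one partition of the
    combined sequence database / spectral library."""
    for cand_id, cand in reference_partition.items():
        yield spectrum_id, (cand_id, score(spectrum, cand))

def reducer(spectrum_id, scored_candidates):
    """Keep only the best-scoring identification for each spectrum."""
    return spectrum_id, max(scored_candidates, key=lambda c: c[1])

def run(spectra, partitions, score):
    """Toy single-process driver standing in for the MapReduce framework."""
    grouped = defaultdict(list)
    for sid, spec in spectra.items():
        for part in partitions:
            for key, value in mapper(sid, spec, part, score):
                grouped[key].append(value)
    return dict(reducer(sid, cands) for sid, cands in grouped.items())
```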

  1. A Design and Implementation of the Extended Andorra Model

    CERN Document Server

    Lopes, Ricardo; Silva, Fernando

    2011-01-01

    Logic programming provides a high-level view of programming, giving implementers a vast latitude into what techniques to explore to achieve the best performance for logic programs. Towards obtaining maximum performance, one of the holy grails of logic programming has been to design computational models that could be executed efficiently and that would allow both for a reduction of the search space and for exploiting all the available parallelism in the application. These goals have motivated the design of the Extended Andorra Model, a model where goals that do not constrain non-deterministic goals can execute first. In this work we present and evaluate the Basic design for Extended Andorra Model (BEAM), a system that builds upon David H. D. Warren's original EAM with Implicit Control. We provide a complete description and implementation of the BEAM System as a set of rewrite and control rules. We present the major data structures and execution algorithms that are required for efficient execution, and evaluate...

  2. Factor analysis models for structuring covariance matrices of additive genetic effects: a Bayesian implementation

    Directory of Open Access Journals (Sweden)

    Gianola Daniel

    2007-09-01

    Full Text Available Multivariate linear models are increasingly important in quantitative genetics. In high dimensional specifications, factor analysis (FA) may provide an avenue for structuring (co)variance matrices, thus reducing the number of parameters needed for describing (co)dispersion. We describe how FA can be used to model genetic effects in the context of a multivariate linear mixed model. An orthogonal common factor structure is used to model genetic effects under a Gaussian assumption, so that the marginal likelihood is multivariate normal with a structured genetic (co)variance matrix. Under standard prior assumptions, all fully conditional distributions have closed form, and samples from the joint posterior distribution can be obtained via Gibbs sampling. The model and the algorithm developed for its Bayesian implementation were used to describe five repeated records of milk yield in dairy cattle, and a model with one common factor was compared with a standard multiple-trait model. The Bayesian Information Criterion favored the FA model.
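    In a one-common-factor specification of the kind compared above, the structured genetic (co)variance matrix takes the usual factor-analytic form (notation generic, not the paper's):

```latex
\mathbf{a}_i = \boldsymbol{\Lambda}\, \mathbf{f}_i + \boldsymbol{\delta}_i,
\qquad
\mathbf{G} = \operatorname{Var}(\mathbf{a}_i) = \boldsymbol{\Lambda}\boldsymbol{\Lambda}' + \boldsymbol{\Psi}
```

    where Λ holds the factor loadings, f_i the common factor(s) and Ψ is a diagonal matrix of trait-specific variances, so that a t-trait genetic covariance matrix is described with far fewer parameters than the t(t+1)/2 of an unstructured matrix.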

  3. The dynamical analysis of modified two-compartment neuron model and FPGA implementation

    Science.gov (United States)

    Lin, Qianjin; Wang, Jiang; Yang, Shuangming; Yi, Guosheng; Deng, Bin; Wei, Xile; Yu, Haitao

    2017-10-01

    The complexity of neural models is increasing with the investigation of larger biological neural networks, more varied ionic channels and more detailed morphologies, and the implementation of a biological neural network is a task with huge computational complexity and power consumption. This paper presents an efficient digital design using piecewise linearization on a field programmable gate array (FPGA) to succinctly implement the reduced two-compartment model, which retains the essential features of more complicated models. The design proposes an approximate neuron model composed of a set of piecewise linear equations that can reproduce different dynamical behaviors, depicting the mechanisms of a single neuron model. The consistency of the hardware implementation is verified in terms of dynamical behaviors and bifurcation analysis, and the simulation results, including varied ion channel characteristics, coincide with the biological neuron model with high accuracy. Hardware synthesis on FPGA demonstrates that the proposed model has reliable performance and lower hardware resource usage compared with the original two-compartment model. These investigations are conducive to the scalability of biological neural networks in reconfigurable large-scale neuromorphic systems.
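    Piecewise linearization of the kind used for the FPGA datapath can be illustrated with a generic one-dimensional example: a smooth nonlinearity is replaced by straight-line segments between precomputed breakpoints, which maps naturally onto adders, shifters and small lookup tables. The function and breakpoints below are illustrative, not the two-compartment model's equations.

```python
import numpy as np

# Breakpoints and function values precomputed offline (here for exp(x)).
x_break = np.linspace(-5.0, 5.0, 11)
y_break = np.exp(x_break)

def pwl(x):
    """Piecewise linear approximation: interpolate between breakpoints,
    which is cheap to realise in fixed-point FPGA logic."""
    return np.interp(x, x_break, y_break)

# Quantify the approximation error over the operating range.
x = np.linspace(-5.0, 5.0, 1001)
max_err = np.max(np.abs(pwl(x) - np.exp(x)))
print(f"max absolute error of the PWL approximation: {max_err:.3f}")
```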

  4. Implementing the Simple Biosphere Model (SiB) in a general circulation model: Methodologies and results

    Science.gov (United States)

    Sato, N.; Sellers, P. J.; Randall, D. A.; Schneider, E. K.; Shukla, J.; Kinter, J. L., III; Hou, Y.-T.; Albertazzi, E.

    1989-01-01

    The Simple Biosphere Model (SiB) of Sellers et al. (1986) was designed to simulate the interactions between the Earth's land surface and the atmosphere by treating the vegetation explicitly and realistically, thereby incorporating biophysical controls on the exchanges of radiation, momentum, and sensible and latent heat between the two systems. The steps taken to implement SiB in a modified version of the National Meteorological Center's spectral GCM are described. The coupled model (SiB-GCM) was used to produce summer and winter simulations. The same GCM was used with a conventional hydrological model (Ctl-GCM) to produce comparable 'control' summer and winter simulations. It was found that SiB-GCM produced a more realistic partitioning of energy at the land surface than Ctl-GCM. Generally, SiB-GCM produced more sensible heat flux and less latent heat flux over vegetated land than did Ctl-GCM, and this resulted in the development of a much deeper daytime planetary boundary layer and reduced precipitation rates over the continents in SiB-GCM. In the summer simulation, the 200 mb jet stream and the wind speed at 850 mb were slightly weakened in SiB-GCM relative to both the Ctl-GCM results and equivalent analyses from observations.

  5. EUREST PLUS - European Regulatory Science on Tobacco: Policy implementation to reduce lung diseases - Proposal (Horizon2020)

    Directory of Open Access Journals (Sweden)

    Constantine Vardavas

    2016-03-01

    Full Text Available EUREST-PLUS, a thirteen-partner EU joint proposal coordinated by ENSP (Coordinator: Constantine Vardavas), aims to monitor and evaluate the impact of the TPD at an EU level. The specific objectives of the proposal are: 1. To evaluate the psychosocial and behavioural impact of TPD and FCTC implementation, through the creation of a longitudinal cohort of adult smokers in 6 EU Member States (Germany, Greece, Hungary, Poland, Romania, Spain) in a pre- vs. post-study design. 2. To assess support for TPD implementation through secondary analyses of the 2015 Special Eurobarometer on Tobacco Survey (SETS), and through trend analyses on the merged 2009, 2012 and 2015 SETS datasets. 3. To document changes in e-cigarette product parameters (technical design, labelling, packaging and chemical composition) following implementation of Article 20 of the TPD. 4. To enhance innovative joint research collaborations, through pooling and comparisons across both other EU countries of the International Tobacco Control (ITC) Project and other non-EU countries.

  6. Projection-Based Reduced Order Modeling for Spacecraft Thermal Analysis

    Science.gov (United States)

    Qian, Jing; Wang, Yi; Song, Hongjun; Pant, Kapil; Peabody, Hume; Ku, Jentung; Butler, Charles D.

    2015-01-01

    This paper presents a mathematically rigorous, subspace projection-based reduced order modeling (ROM) methodology and an integrated framework to automatically generate reduced order models for spacecraft thermal analysis. Two key steps in the reduced order modeling procedure are described: (1) the acquisition of a full-scale spacecraft model in the ordinary differential equation (ODE) and differential algebraic equation (DAE) form to resolve its dynamic thermal behavior; and (2) the ROM to markedly reduce the dimension of the full-scale model. Specifically, proper orthogonal decomposition (POD) in conjunction with discrete empirical interpolation method (DEIM) and trajectory piece-wise linear (TPWL) methods are developed to address the strong nonlinear thermal effects due to coupled conductive and radiative heat transfer in the spacecraft environment. Case studies using NASA-relevant satellite models are undertaken to verify the capability and to assess the computational performance of the ROM technique in terms of speed-up and error relative to the full-scale model. ROM exhibits excellent agreement in spatiotemporal thermal profiles (<0.5% relative error in pertinent time scales) along with salient computational acceleration (up to two orders of magnitude speed-up) over the full-scale analysis. These findings establish the feasibility of ROM to perform rational and computationally affordable thermal analysis, develop reliable thermal control strategies for spacecraft, and greatly reduce the development cycle times and costs.
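    The POD step described above can be summarised in a few lines: snapshots of the full-order state are collected, a truncated SVD yields the dominant modes, and the governing equations are Galerkin-projected onto that basis. The small linear system below is a stand-in for the spacecraft thermal ODE/DAE model; DEIM and TPWL, which handle the nonlinear radiative terms, are omitted from this sketch.

```python
import numpy as np

n, m, r = 500, 60, 8                       # full dimension, snapshots, ROM size
rng = np.random.default_rng(2)
A = -np.diag(rng.uniform(0.1, 1.0, n))     # stand-in linear "thermal" operator
M = np.eye(n) + 0.1 * A                    # explicit Euler propagator

# 1. Collect snapshots of the full-order state.
state = rng.normal(size=n)
snaps = []
for _ in range(m):
    snaps.append(state.copy())
    state = M @ state
snapshots = np.column_stack(snaps)         # n x m snapshot matrix

# 2. POD basis from a truncated SVD of the snapshots.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Phi = U[:, :r]                             # r dominant modes

# 3. Galerkin projection of the full operator onto the POD subspace.
A_r = Phi.T @ A @ Phi                      # r x r reduced operator

# 4. Advance the reduced state and lift back to full dimension when needed.
z = Phi.T @ snapshots[:, 0]
z = z + 0.1 * (A_r @ z)                    # one explicit Euler step in ROM space
T_approx = Phi @ z
print(T_approx.shape, A_r.shape)
```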

  7. Implementation of strength and burn models for plastic-bonded explosives and propellants

    Energy Technology Data Exchange (ETDEWEB)

    Reaugh, J E

    2009-05-07

    We have implemented the burn model in LS-DYNA. At present, the damage (porosity and specific surface area) is specified as initial conditions. However, history variables that are used by the strength model are reserved as placeholders for the next major revision, which will be a completely interactive model. We have implemented an improved strength model for explosives based on a model for concrete. The model exhibits peak strength and subsequent strain softening in uniaxial compression. The peak strength increases with increasing strain rate and/or reduced ambient temperature. Under triaxial compression, the strength continues to increase (or at least not decrease) with increasing strain. This behaviour is common to both concrete and polymer-bonded explosives (PBX) because the microstructure of these composites is similar. Both have aggregate material with a broad particle size distribution, although the length scale for concrete aggregate is two orders of magnitude larger than for PBX. The (cement or polymer) binder adheres to the aggregate, and is both pressure and rate sensitive. There is a larger binder content in concrete, compared to the explosive, and the aggregates have different hardnesses. As a result we expect the parameter values to differ, but the functional forms to be applicable to both. The models have been fit to data from tests on an AWE explosive that is HMX based. The decision to implement the models in LS-DYNA was based on three factors: LS-DYNA is used routinely by the AWE engineering analysis group and has a broad base of experienced users; models implemented in LS-DYNA can be transferred easily to LLNL's ALE 3D using a material model wrapper developed by Rich Becker; and LS-DYNA could accommodate the model requirements for a significant number of additional history variables without the significant time delay associated with code modification.

  8. Identifying the Reducing Resistance to Change Phase in an Organizational Change Model

    Directory of Open Access Journals (Sweden)

    Daniela Bradutanu

    2012-04-01

    Full Text Available In this article we examine where in an organizational change process it is better to place the reducing-resistance-to-change phase, so that employees will accept new changes more easily and not manifest too much resistance. After analyzing twelve organizational change models we have concluded that the place of the reducing-resistance-to-change phase in an organizational change process is not the same, it being modified according to the type of change. The results of this study are helpful for researchers, but especially for organizational change leaders. As change leaders are usually the ones confronted with resistance from their subordinates, they must know exactly how to deal with it and when is the best moment to reduce it, depending on the type of change that is desired to be implemented. The key contribution of this paper is that the best way to gain employees' support and change attachment is to try to reduce resistance to change before the actual implementation. Only when an immediate or imposed change is required should the methods and ways for overcoming resistance be applied during and after the implementation stage, to ensure a successful implementation of the change.

  9. Reduced order modeling of some fluid flows of industrial interest

    Energy Technology Data Exchange (ETDEWEB)

    Alonso, D; Terragni, F; Velazquez, A; Vega, J M, E-mail: josemanuel.vega@upm.es [E.T.S.I. Aeronauticos, Universidad Politecnica de Madrid, 28040 Madrid (Spain)

    2012-06-01

    Some basic ideas are presented for the construction of robust, computationally efficient reduced order models amenable to use in industrial environments, combined with somewhat rough computational fluid dynamics solvers. These ideas result from a critical review of the basic principles of proper orthogonal decomposition-based reduced order modeling of both steady and unsteady fluid flows. In particular, the extent to which some artifacts of the computational fluid dynamics solvers can be ignored is addressed, which opens up the possibility of obtaining quite flexible reduced order models. The methods are illustrated with the steady aerodynamic flow around a horizontal tail plane of a commercial aircraft in transonic conditions, and the unsteady lid-driven cavity problem. In both cases, the approximations are fairly good, thus reducing the computational cost by a significant factor. (review)

  10. Reduced order modeling of steady flows subject to aerodynamic constraints

    DEFF Research Database (Denmark)

    Zimmermann, Ralf; Vendl, Alexander; Goertz, Stefan

    2014-01-01

    A novel reduced-order modeling method based on proper orthogonal decomposition for predicting steady, turbulent flows subject to aerodynamic constraints is introduced. Model-order reduction is achieved by replacing the governing equations of computational fluid dynamics with a nonlinear weighted ...

  11. Reducing Redundancies in Reconfigurable Antenna Structures Using Graph Models

    Energy Technology Data Exchange (ETDEWEB)

    Costantine, Joseph; al-Saffar, Sinan; Christodoulou, Christos G.; Abdallah, Chaouki T.

    2010-04-23

    Many reconfigurable antennas have redundant components in their structures. In this paper we present an approach for reducing redundancies in reconfigurable antenna structures using graph models. We study reconfigurable antennas, which are grouped, categorized and modeled according to a set of proposed graph rules. Several examples are presented and discussed to demonstrate the validity of this new technique.

  12. Reducing outpatient waiting time: a simulation modeling approach.

    Science.gov (United States)

    Aeenparast, Afsoon; Tabibi, Seyed Jamaleddin; Shahanaghi, Kamran; Aryanejhad, Mir Bahador

    2013-09-01

    The objective of this study was to provide a model for reducing outpatient waiting time by using simulation. A simulation model was constructed by using the data on arrival time, service time and flow of 357 patients referred to the orthopedic clinic of a general teaching hospital in Tehran. The simulation model was validated before constructing different scenarios. In this study 10 scenarios were presented for reducing outpatient waiting time. Patients' waiting times were divided into three levels according to their physicians. These waiting times were computed by the simulation model for all scenarios. According to the final scores, the 9th scenario was selected as the best way of reducing outpatients' waiting time. Using simulation as a decision-making tool helps us decide how to reduce outpatients' waiting time. Comparison of the outputs of this scenario and the base-case scenario in the simulation model shows that combining changes to physicians' work times with changes to patients' admission times (scenario 9) would reduce patient waiting time by about 73.09%. Due to the dynamic and complex nature of healthcare systems, the application of simulation for the planning, modeling and analysis of these systems has lagged behind traditional manufacturing practices. Rapid growth in health care system expenditures, technology and competition has increased the complexity of health care systems. Simulation is a useful tool for decision making in complex and probabilistic systems.
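
    The paper's discrete-event model is not reproduced here, but the general idea of scoring scheduling scenarios by their mean waiting time can be sketched with a toy single-server clinic simulation; the arrival and service rates below are invented for illustration.

```python
import random

def simulate_waiting(arrival_rate, service_rate, n_patients, seed=1):
    """Toy single-queue clinic: exponential interarrival and service times."""
    random.seed(seed)
    clock, server_free_at, total_wait = 0.0, 0.0, 0.0
    for _ in range(n_patients):
        clock += random.expovariate(arrival_rate)   # next patient arrives
        start = max(clock, server_free_at)          # waits if the physician is busy
        total_wait += start - clock
        server_free_at = start + random.expovariate(service_rate)
    return total_wait / n_patients

# Compare a baseline admission scheme with one that spreads arrivals more evenly.
base = simulate_waiting(arrival_rate=5.0, service_rate=6.0, n_patients=357)
alt = simulate_waiting(arrival_rate=4.0, service_rate=6.0, n_patients=357)
print(f"baseline mean wait: {base:.2f} h, alternative: {alt:.2f} h")
```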

  13. Design and implementation of a generalized laboratory data model

    Directory of Open Access Journals (Sweden)

    Nhan Mike

    2007-09-01

    Full Text Available Abstract Background Investigators in the biological sciences continue to exploit laboratory automation methods and have dramatically increased the rates at which they can generate data. In many environments, the methods themselves also evolve in a rapid and fluid manner. These observations point to the importance of robust information management systems in the modern laboratory. Designing and implementing such systems is non-trivial and it appears that in many cases a database project ultimately proves unserviceable. Results We describe a general modeling framework for laboratory data and its implementation as an information management system. The model utilizes several abstraction techniques, focusing especially on the concepts of inheritance and meta-data. Traditional approaches commingle event-oriented data with regular entity data in ad hoc ways. Instead, we define distinct regular entity and event schemas, but fully integrate these via a standardized interface. The design allows straightforward definition of a "processing pipeline" as a sequence of events, obviating the need for separate workflow management systems. A layer above the event-oriented schema integrates events into a workflow by defining "processing directives", which act as automated project managers of items in the system. Directives can be added or modified in an almost trivial fashion, i.e., without the need for schema modification or re-certification of applications. Association between regular entities and events is managed via simple "many-to-many" relationships. We describe the programming interface, as well as techniques for handling input/output, process control, and state transitions. Conclusion The implementation described here has served as the Washington University Genome Sequencing Center's primary information system for several years. It handles all transactions underlying a throughput rate of about 9 million sequencing reactions of various kinds per month and

  14. The FPGA Implementation of Short-Wave Channel Model

    Institute of Scientific and Technical Information of China (English)

    GAN Liangcai; LI Yuanyuan

    2003-01-01

    Based on its time-variance characteristic, the short-wave channel can be modeled as a real-time filter in the frequency domain; the model can reproduce short-wave channel effects such as delay spread, Doppler shift and Doppler spread. In the design, the bandwidth of the short-wave channel model is 768 kHz and the frequency interval is 3 kHz. A kind of overlap-discard algorithm based on the fast Fourier transform (FFT) is utilized to design the real-time FIR filter, and an architectural design based on a Field Programmable Gate Array (FPGA) chip is adopted to implement a 512-point FFT. The channel transfer function and the noise and interference function, which are stored in ROM in advance, are periodically updated in real time. The simulation result shows that the hardware implementation is simple and feasible and is applicable to wideband short-wave systems, such as frequency-hopping and direct-sequence spread-spectrum systems.

  15. Implementing Marine Organic Aerosols Into the GEOS-Chem Model

    Science.gov (United States)

    Johnson, Matthew S.

    2015-01-01

    Marine-sourced organic aerosols (MOA) have been shown to play an important role in tropospheric chemistry by impacting surface mass, cloud condensation nuclei, and ice nuclei concentrations over remote marine and coastal regions. In this work, an online marine primary organic aerosol emission parameterization, designed to be used for both global and regional models, was implemented into the GEOS-Chem model. The implemented emission scheme improved the large under-prediction of organic aerosol concentrations in clean marine regions (normalized mean bias decreases from -79% when using the default settings to -12% when marine organic aerosols are added). Model predictions were also in good agreement (correlation coefficient of 0.62 and normalized mean bias of -36%) with hourly surface concentrations of MOA observed during the summertime at an inland site near Paris, France. Our study shows that MOA have weaker coastal-to-inland concentration gradients than sea-salt aerosols, leading to several inland European cities having > 10% of their surface submicron organic aerosol mass concentration with a marine source. The addition of MOA tracers to GEOS-Chem enabled us to identify the regions with large contributions of freshly-emitted or aged aerosol having distinct physicochemical properties, potentially indicating optimal locations for future field studies.

  16. Implementation of splitting methods for air pollution modeling

    Directory of Open Access Journals (Sweden)

    M. Schlegel

    2011-11-01

    Full Text Available Explicit time integration methods are characterized by a small numerical effort per time step. In the application to multiscale problems in atmospheric modeling, this benefit is often more than compensated by stability problems and step size restrictions resulting from stiff chemical reaction terms and from a locally varying Courant-Friedrichs-Lewy (CFL) condition for the advection terms. Splitting methods may be applied to efficiently combine implicit and explicit methods (IMEX splitting). Complementarily, multirate time integration schemes allow for a local adaptation of the time step size to the grid size. In combination, these approaches lead to schemes which are efficient in terms of evaluations of the right-hand side. Special challenges arise when these methods are to be implemented. For an efficient implementation it is crucial to locate and exploit redundancies. Furthermore, the more complex program flow may lead to computational overhead which in the worst case more than compensates the theoretical gain in efficiency. We present a general splitting approach which allows both for IMEX splittings and for local time step adaptation. The main focus is on an efficient implementation of this approach for parallel computation on computer clusters.
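
    The IMEX idea can be illustrated on a scalar test equation in which a stiff (chemistry-like) term is treated implicitly and a non-stiff (advection-like) term explicitly; the coefficients and step sizes below are illustrative only and unrelated to the paper's atmospheric solver.

```python
import numpy as np

# Scalar test problem: dc/dt = -a*c (non-stiff, "advection") - k*c (stiff, "chemistry").
a, k = 1.0, 1.0e4
dt, nsteps = 1.0e-3, 1000
c = 1.0

for _ in range(nsteps):
    # IMEX (forward-backward) Euler step: the non-stiff term is advanced
    # explicitly, the stiff term is solved implicitly, so the large k does
    # not force a tiny step size.
    rhs_explicit = -a * c
    c = (c + dt * rhs_explicit) / (1.0 + dt * k)

print(c, np.exp(-(a + k) * dt * nsteps))   # numerical vs. exact decay
```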

  17. Increasing job satisfaction and motivation while reducing nursing turnover through the implementation of shared governance.

    Science.gov (United States)

    Relf, M

    1995-11-01

    In today's cost-conscious, changing health care environment, health care agencies must identify and implement strategies to promote fiscal responsibility while maintaining employee satisfaction and retention. The cost to recruit professional nurses is high. Therefore, the business objective is to retain the productive employee. Through the implementation of shared governance, employees find the workplace rewarding and stimulating--motivating factors as described by Herzberg. The Secretary's Commission on Nursing identified 16 strategies for the reduction of the nursing shortage and retention of professional nurses. One recommendation reinforces a report by the American Academy of Nursing that states work satisfaction among nurses is higher and turnover rates are lower when organizational climates provide for nursing's involvement in decision making relating not only to nursing practice and unit management but also to patient care. Through shared governance, staff nurse involvement in nursing and patient care policy is advanced.

  18. Alice and Bob: Reconciling Formal Models and Implementation

    DEFF Research Database (Denmark)

    Almousa, Omar; Mödersheim, Sebastian Alexander; Viganò, Luca

    2015-01-01

    This paper defines the “ultimate” formal semantics for Alice and Bob notation, i.e., what actions the honest agents have to perform, in the presence of an arbitrary set of cryptographic operators and their algebraic theory. Despite its generality, this semantics is mathematically simpler than any previous attempt. For practical applicability, we introduce the language SPS and an automatic translation to robust real-world implementations and corresponding formal models, and we prove this translation correct with respect to the semantics.

  19. Implementing a business improvement model based on integrated plant information

    Directory of Open Access Journals (Sweden)

    Swanepoel, Hendrika Francina

    2016-11-01

    Full Text Available The World Energy Council defines numerous challenges in the global energy arena that put pressure on owners and operators to run existing plant better and more efficiently. As such there is an increasing focus on the use of business and technical plant information and data to make better, more integrated, and more informed decisions on the plant. The research study developed a business improvement model (BIM) that can be used to establish an integrated plant information management infrastructure as the core foundation for business improvement initiatives. Operational research then demonstrated how this BIM approach could be successfully implemented to improve business operations and provide decision-making insight.

  20. Modeling and Implementing ISO 13584-based Part Library

    Institute of Scientific and Technical Information of China (English)

    杨东; 肖丽雯; 何援军; 张申生

    2004-01-01

    ISO 13584 (i.e. PLIB) is an international standard for the representation and exchange of CAD part libraries. It aims to provide an application-independent mechanism to enable the sharing of part library information between applications. In this paper, the approach of modeling a part library conforming to ISO 13584 is presented. Also, a prototype part library management system, BYL-PLIB, whose implementation is in agreement with ISO 13584, is developed to demonstrate the usefulness of the proposed approach.

  1. Bilinear reduced order approximate model of parabolic distributed solar collectors

    KAUST Repository

    Elmetennani, Shahrazed

    2015-07-01

    This paper proposes a novel, low-dimensional and accurate approximate model for the distributed parabolic solar collector, by means of a modified Gaussian interpolation along the spatial domain. The proposed reduced model, taking the form of a low-dimensional bilinear state representation, enables the reproduction of the heat transfer dynamics along the collector tube for system analysis. Moreover, presented as a reduced order bilinear state space model, the well-established control theory for this class of systems can be applied. The approximation efficiency has been proven by several simulation tests, which have been performed considering parameters of the Acurex field with real external working conditions. Model accuracy has been evaluated by comparison to the analytical solution of the hyperbolic distributed model and its semi-discretized approximation, highlighting the benefits of using the proposed numerical scheme. Furthermore, model sensitivity to the different parameters of the Gaussian interpolation has been studied.
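
    The reduced collector model has the bilinear state-space form dx/dt = A x + (N x) u + B u; a minimal simulation sketch with invented low-dimensional matrices (not the Acurex parameters) is given below.

```python
import numpy as np

# Illustrative 3-state bilinear model: dx/dt = A x + (N x) u + B u,
# where u plays the role of the collector's control input.
A = np.array([[-1.0, 0.2, 0.0],
              [0.0, -1.5, 0.3],
              [0.0, 0.0, -2.0]])
N = 0.1 * np.eye(3)               # bilinear coupling term (placeholder values)
B = np.array([1.0, 0.5, 0.2])

def step(x, u, dt):
    """One explicit Euler step of the bilinear state equation."""
    return x + dt * (A @ x + (N @ x) * u + B * u)

x = np.zeros(3)
dt = 0.01
for k in range(1000):
    u = 1.0 if k < 500 else 0.5   # piecewise-constant input profile
    x = step(x, u, dt)
print(x)
```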

  2. On the verification of PGD reduced-order models

    OpenAIRE

    Pled, Florent; Chamoin, Ludovic; Ladevèze, Pierre

    2014-01-01

    International audience; In current computational mechanics practice, multidimensional as well as multiscale or parametric models encountered in a wide variety of scientific and engineering fields often require either the resolution of significantly large complexity problems or the direct calculation of very numerous solutions of such complex models. In this framework, the use of model order reduction makes it possible to dramatically reduce the computational requirements engendered by the increasing mod...

  3. Implementation of a plasma-neutral model in NIMROD

    Science.gov (United States)

    Taheri, S.; Shumlak, U.; King, J. R.

    2016-10-01

    Interaction between plasma fluid and neutral species is of great importance in the edge region of magnetically confined fusion plasmas. The presence of neutrals can have beneficial effects such as fueling burning plasmas and quenching the disruptions in tokamaks, as well as deleterious effects like depositing high energy particles on the vessel wall. The behavior of edge plasmas in magnetically confined systems has been investigated using computational approaches that utilize the fluid description for the plasma and Monte Carlo transport for neutrals. In this research a reacting plasma-neutral model is implemented in NIMROD to study the interaction between plasma and neutral fluids. This model, developed by E. T. Meier and U. Shumlak, combines a single-fluid magnetohydrodynamic (MHD) plasma model with a gas dynamic neutral fluid model which accounts for electron-impact ionization, radiative recombination, and resonant charge exchange. Incorporating this model into NIMROD allows the study of the interaction between neutrals and plasma in a variety of plasma science problems. An accelerated plasma moving through a neutral gas background in a coaxial electrode configuration is modeled, and the results are compared with previous calculations from the HiFi code.

  4. Applying Reduced Generator Models in the Coarse Solver of Parareal in Time Parallel Power System Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Duan, Nan [ORNL]; Dimitrovski, Aleksandar D [ORNL]; Simunovic, Srdjan [ORNL]; Sun, Kai [University of Tennessee (UT)]

    2016-01-01

    The development of high-performance computing techniques and platforms has provided many opportunities for real-time or even faster-than-real-time implementation of power system simulations. One approach uses the Parareal in time framework. The Parareal algorithm has shown promising theoretical simulation speedups by temporally decomposing a simulation run into a coarse simulation on the entire simulation interval and fine simulations on sequential sub-intervals linked through the coarse simulation. However, it has been found that the time cost of the coarse solver needs to be reduced to fully exploit the potential of the Parareal algorithm. This paper studies a Parareal implementation using reduced generator models for the coarse solver and reports the testing results on the IEEE 39-bus system and a 327-generator 2383-bus Polish system model.
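
    A minimal sketch of the Parareal iteration on a scalar ODE is shown below, with a cheap coarse propagator (standing in for the reduced-generator-model solver) and an expensive fine propagator; the test equation, step counts and iteration count are illustrative, not the power system models used in the paper.

```python
import numpy as np

# Test ODE: dy/dt = lam * y on [0, T], split into P sub-intervals.
lam, T, P = -1.0, 2.0, 8
dt_sub = T / P

def coarse(y0):
    """Cheap propagator over one sub-interval (a single Euler step)."""
    return y0 + dt_sub * lam * y0

def fine(y0, n=100):
    """Expensive propagator over one sub-interval (many Euler steps)."""
    h, y = dt_sub / n, y0
    for _ in range(n):
        y = y + h * lam * y
    return y

# Initial coarse sweep over the whole interval.
U = np.zeros(P + 1)
U[0] = 1.0
for p in range(P):
    U[p + 1] = coarse(U[p])

# Parareal iterations: U_{p+1} <- G(U_p) + F(U_p_old) - G(U_p_old).
for _ in range(5):
    F = np.array([fine(U[p]) for p in range(P)])       # fine solves (parallelizable)
    G_old = np.array([coarse(U[p]) for p in range(P)])
    U_new = U.copy()
    for p in range(P):
        U_new[p + 1] = coarse(U_new[p]) + F[p] - G_old[p]
    U = U_new

print(U[-1], np.exp(lam * T))   # Parareal end value vs. exact solution
```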

  5. Reduced Numerical Model for Methane Hydrate Formation under Conditions of Variable Salinity. Time-Stepping Variants and Sensitivity

    Directory of Open Access Journals (Sweden)

    Malgorzata Peszynska

    2015-12-01

    Full Text Available In this paper, we consider a reduced computational model of methane hydrate formation in variable salinity conditions, and give details on the discretization and phase equilibria implementation. We describe three time-stepping variants: Implicit, Semi-implicit, and Sequential, and we compare the accuracy and efficiency of these variants depending on the spatial and temporal discretization parameters. We also study the sensitivity of the model to the simulation parameters and in particular to the reduced phase equilibria model.

  6. A Privacy Data-Oriented Hierarchical MapReduce Programming Model

    Directory of Open Access Journals (Sweden)

    Haiwen Han

    2013-08-01

    Full Text Available To realize efficient privacy data protection in hybrid cloud services, a multi-cluster MapReduce programming model based on a hierarchical control architecture (the Hierarchical MapReduce Model, HMR) is presented. Under this hierarchical control architecture, data isolation and placement between the private cloud and public clouds, according to the privacy characteristics of the data, is implemented by the control center in the private cloud. Then, to perform the corresponding distributed parallel computation correctly under the multi-cluster mode, which differs from the conventional single-cluster mode, a three-stage Map-Reduce-GlobalReduce scheduling process is designed. By limiting the computation on privacy data to the private cloud while outsourcing as much of the computation on non-privacy data as possible to public clouds, HMR achieves both security and low cost.
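
    The three-stage Map-Reduce-GlobalReduce flow can be sketched in plain Python with a word-count toy over a 'private' and a 'public' data partition; the data, cluster split and aggregation below are illustrative and not the HMR implementation itself.

```python
from collections import Counter
from functools import reduce

def map_phase(records):
    """Map: emit (word, 1) pairs from one cluster's local data."""
    return [(w, 1) for line in records for w in line.split()]

def local_reduce(pairs):
    """Reduce: aggregate the pairs inside a single cluster."""
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return counts

def global_reduce(partials):
    """GlobalReduce: merge per-cluster partial results (run in the private cloud)."""
    return reduce(lambda a, b: a + b, partials, Counter())

# Privacy-sensitive records stay on the private cluster; the rest go to a public cluster.
private_data = ["patient alice alice", "patient bob"]
public_data = ["weather report weather"]

partials = [local_reduce(map_phase(private_data)),
            local_reduce(map_phase(public_data))]
print(global_reduce(partials))
```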

  7. A local geopotential model for implementation of underwater passive navigation

    Institute of Scientific and Technical Information of China (English)

    Zhigang Wang; Shaofeng Bian

    2008-01-01

    A main aspect of underwater passive navigation is how to identify the vehicle location on an existing gravity map, and several matching algorithms such as ICCP and SITAN are the most prevalent methods in use. In this paper, a novel algorithm that differs from matching algorithms for passive navigation is developed. The algorithm implements underwater passive navigation by directly estimating the inertial errors through a Kalman filter, and the key part of this implementation is a Fourier-series-based local geopotential model. Firstly, the principle of the local geopotential model based on Fourier series is introduced, so that the discrete gravity anomaly data can be expressed analytically with respect to geographic coordinates to establish the observation equation required in the application of the Kalman filter. Thereafter, the indicated gravity anomalies can be obtained by substituting the inertial positions into the existing gravity anomaly map. Finally, the classical extended Kalman filter is introduced, with the differences between measured gravity and indicated gravity used as observations to optimally estimate the errors of the inertial navigation system (INS). This navigation algorithm is tested on simulated data with encouraging results. Although this algorithm is developed for underwater navigation using gravity data, it is equally applicable to other domains, for example vehicle navigation on magnetic or terrain data.
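
    The filtering idea can be sketched with a one-dimensional toy: a scalar INS position-error state whose measurement is the difference between the gravity anomaly observed at the true position and the anomaly predicted from an analytic (Fourier-series-like) map at the indicated position. All map coefficients, noise levels and trajectories below are invented for illustration.

```python
import numpy as np

# Toy 1-D gravity-anomaly "map" expressed analytically (Fourier-series-like),
# so its gradient is available for the EKF measurement Jacobian.
def anomaly(x):
    return 5.0 * np.sin(0.1 * x) + 2.0 * np.cos(0.05 * x)

def anomaly_grad(x):
    return 0.5 * np.cos(0.1 * x) - 0.1 * np.sin(0.05 * x)

# Scalar state: INS position error, modelled as a slow random walk.
P, Q, R = 1.0, 0.01, 0.25      # error covariance, process noise, measurement noise
err_est, true_err = 0.0, 3.0   # estimate and (unknown) true INS error
rng = np.random.default_rng(0)

for k in range(200):
    x_ins = 10.0 * k                          # indicated (INS) position along track
    P = P + Q                                 # predict: state unchanged, covariance grows
    # Measurement: measured anomaly minus the map value at the indicated position.
    z = anomaly(x_ins + true_err) - anomaly(x_ins) + rng.normal(0.0, np.sqrt(R))
    z_pred = anomaly(x_ins + err_est) - anomaly(x_ins)
    H = anomaly_grad(x_ins + err_est)         # linearized measurement model
    K = P * H / (H * P * H + R)               # scalar Kalman gain
    err_est = err_est + K * (z - z_pred)
    P = (1.0 - K * H) * P

print(f"estimated INS position error: {err_est:.2f} (true value: {true_err})")
```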

  8. Implementing Modified Burg Algorithms in Multivariate Subset Autoregressive Modeling

    Directory of Open Access Journals (Sweden)

    A. Alexandre Trindade

    2003-02-01

    Full Text Available The large number of parameters in subset vector autoregressive models often leads one to procure fast, simple, and efficient alternatives or precursors to maximum likelihood estimation. We present the solution of the multivariate subset Yule-Walker equations as one such alternative. In recent work, Brockwell, Dahlhaus, and Trindade (2002) show that the Yule-Walker estimators can actually be obtained as a special case of a general recursive Burg-type algorithm. We illustrate the structure of this algorithm, and discuss its implementation in a high-level programming language. Applications of the algorithm in univariate and bivariate modeling are showcased in examples. Univariate and bivariate versions of the algorithm written in Fortran 90 are included in the appendix, and their use is illustrated.
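
    The univariate Yule-Walker step (the multivariate, subset and Burg-type recursions of the paper are more involved) can be sketched as follows: estimate sample autocovariances, then solve the Toeplitz Yule-Walker system for the AR coefficients; the simulated AR(2) series is illustrative.

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR(order) coefficients of a univariate series via Yule-Walker."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased sample autocovariances r[0..order].
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz Yule-Walker system R phi = r[1:].
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    phi = np.linalg.solve(R, r[1:])            # AR coefficients
    sigma2 = r[0] - np.dot(phi, r[1:])         # innovation variance
    return phi, sigma2

# Simulate an AR(2) process and recover its coefficients.
rng = np.random.default_rng(0)
true_phi = [0.6, -0.3]
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = true_phi[0] * x[t - 1] + true_phi[1] * x[t - 2] + rng.standard_normal()
print(yule_walker(x, order=2))
```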

  9. Census Model Transition: Contributions to its Implementation in Portugal

    Directory of Open Access Journals (Sweden)

    Dias Carlos A.

    2016-03-01

    Full Text Available Given the high cost and complexity of traditional censuses, some countries have started to change the census process. Following this trend, Portugal is also evaluating a new census model as an alternative to an exhaustive collection of all statistical units. The main motivations for the implementation of this census model transition in Portugal are related to the decrease in statistical burden on citizens, improvements in the frequency of outputs, and the reduction of collection costs associated with census operations. This article seeks to systematise and critically review all alternatives to the traditional census methodologies, presenting their advantages and disadvantages and the countries that use them. As a result of the comparison, we conclude that the methods that best meet these objectives are those that use administrative data, either in whole or in part. We also present and discuss the results of an inventory and evaluation of administrative registers in Portugal with the potential to produce statistical census information.

  10. The Integrated Model of Embedded Management Systems and Its Implementation

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The Simple Network Management Protocol (SNMP) is nowadays the key enabling management technology, while Web technologies have proved to be attractive for network and systems management. Future development in the management domain should be "integrated". In this article, an embedded management model, which provides an integrated device management framework, is presented. This model consists of five functional modules: the system service layer, application layer, data layer, middle layer and access layer. Each module is described, along with their mutual relations. Then the key points of the implementation are discussed, and the system design, using the development tools provided by WindRiver Systems, which enable products to be developed cost-effectively and efficiently, is described in detail.

  11. Reduced Order Models for Dynamic Behavior of Elastomer Damping Devices

    Science.gov (United States)

    Morin, B.; Legay, A.; Deü, J.-F.

    2016-09-01

    In the context of passive damping, various mechanical systems from the space industry use elastomer components (shock absorbers, silent blocks, flexible joints...). The material of these devices has frequency-, temperature- and amplitude-dependent characteristics. The associated numerical models, using viscoelastic and hyperelastic constitutive behaviour, may become computationally too expensive during a design process. The aim of this work is to propose efficient reduced viscoelastic models of rubber devices. The first step is to choose an accurate material model that represents the viscoelasticity. The second step is to reduce the rubber device finite element model to a super-element that keeps the frequency dependence. This reduced model is first built by taking into account the fact that the device's interfaces are much more rigid than the rubber core. To make use of this difference, kinematical constraints enforce the rigid body motion of these interfaces, reducing the rubber device model to twelve dofs only on the interfaces (three rotations and three translations per face). Then, the super-element is built by using a component mode synthesis method. As an application, the dynamic behavior of a structure supported by four hourglass-shaped rubber devices under harmonic loads is analysed to show the efficiency of the proposed approach.

  12. Models of emergency departments for reducing patient waiting times.

    Science.gov (United States)

    Laskowski, Marek; McLeod, Robert D; Friesen, Marcia R; Podaima, Blake W; Alfa, Attahiru S

    2009-07-02

    In this paper, we apply both agent-based models and queuing models to investigate patient access and patient flow through emergency departments. The objective of this work is to gain insights into the comparative contributions and limitations of these complementary techniques, in their ability to contribute empirical input into healthcare policy and practice guidelines. The models were developed independently, with a view to compare their suitability to emergency department simulation. The current models implement relatively simple general scenarios, and rely on a combination of simulated and real data to simulate patient flow in a single emergency department or in multiple interacting emergency departments. In addition, several concepts from telecommunications engineering are translated into this modeling context. The framework of multiple-priority queue systems and the genetic programming paradigm of evolutionary machine learning are applied as a means of forecasting patient wait times and as a means of evolving healthcare policy, respectively. The models' utility lies in their ability to provide qualitative insights into the relative sensitivities and impacts of model input parameters, to illuminate scenarios worthy of more complex investigation, and to iteratively validate the models as they continue to be refined and extended. The paper discusses future efforts to refine, extend, and validate the models with more data and real data relative to physical (spatial-topographical) and social inputs (staffing, patient care models, etc.). Real data obtained through proximity location and tracking system technologies is one example discussed.

  13. Reduced-order models for vertical human-structure interaction

    Science.gov (United States)

    Van Nimmen, Katrien; Lombaert, Geert; De Roeck, Guido; Van den Broeck, Peter

    2016-09-01

    For slender and lightweight structures, the vibration serviceability under crowd-induced loading is often critical in design. Currently, designers rely on equivalent load models, upscaled from single-person force measurements. Furthermore, it is important to consider the mechanical interaction with the human body as this can significantly reduce the structural response. To account for these interaction effects, the contact force between the pedestrian and the structure can be modelled as the superposition of the force induced by the pedestrian on a rigid floor and the force resulting from the mechanical interaction between the structure and the human body. For the case of large crowds, however, this approach leads to models with a very high system order. In the present contribution, two equivalent reduced-order models are proposed to approximate the dynamic behaviour of the full-order coupled crowd-structure system. A numerical study is performed to evaluate the impact of the modelling assumptions on the structural response to pedestrian excitation. The results show that the full-order moving crowd model can be well approximated by a reduced-order model whereby the interaction with the pedestrians in the crowd is modelled using a single (equivalent) SDOF system.

  14. Parallel implementation of approximate atomistic models of the AMOEBA polarizable model

    Science.gov (United States)

    Demerdash, Omar; Head-Gordon, Teresa

    2016-11-01

    In this work we present a replicated data hybrid OpenMP/MPI implementation of a hierarchical progression of approximate classical polarizable models that yields speedups of up to ∼10 compared to the standard OpenMP implementation of the exact parent AMOEBA polarizable model. In addition, our parallel implementation exhibits reasonable weak and strong scaling. The resulting parallel software will prove useful for those who are interested in how molecular properties converge in the condensed phase with respect to the MBE; it provides a fruitful test bed for exploring different electrostatic embedding schemes, and offers an interesting possibility for future exascale computing paradigms.

  15. Design and Implementation of Digital Filter Bank to Reduce Noise and Reconstruct the Input Signals

    Directory of Open Access Journals (Sweden)

    Kawser Ahammed

    2015-04-01

    Full Text Available The main theme of this paper is to reduce noise from the noisy composite signal and reconstruct the input signals from the composite signal by designing FIR digital filter bank. In this work, three sinusoidal signals of different frequencies and amplitudes are combined to get composite signal and a low frequency noise signal is added with the composite signal to get noisy composite signal. Finally noisy composite signal is filtered by using FIR digital filter bank to reduce noise and reconstruct the input signals.
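
    A sketch of the filter-bank idea using SciPy is shown below: three band-pass FIR filters, each tuned to one component frequency, are applied to a noisy composite signal; the frequencies, amplitudes and filter order are invented and need not match the paper's design.

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 8000.0                                   # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
freqs, amps = [200.0, 700.0, 1500.0], [1.0, 0.7, 0.5]

# Composite of three sinusoids plus a low-frequency disturbance ("noise").
composite = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs, amps))
noisy = composite + 0.8 * np.sin(2 * np.pi * 20.0 * t)

# FIR band-pass filter bank: one branch per input component.
recovered = []
for f in freqs:
    taps = firwin(numtaps=301, cutoff=[f - 50.0, f + 50.0],
                  pass_zero=False, fs=fs)      # band-pass FIR design
    recovered.append(lfilter(taps, 1.0, noisy))

for f, y in zip(freqs, recovered):
    print(f"{f:.0f} Hz branch RMS: {np.sqrt(np.mean(y ** 2)):.3f}")
```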

  16. On Modeling CPU Utilization of MapReduce Applications

    CERN Document Server

    Rizvandi, Nikzad Babaii; Zomaya, Albert Y

    2012-01-01

    In this paper, we present an approach to predict the total CPU utilization in terms of CPU clock tick of applications when running on MapReduce framework. Our approach has two key phases: profiling and modeling. In the profiling phase, an application is run several times with different sets of MapReduce configuration parameters to profile total CPU clock tick of the application on a given platform. In the modeling phase, multi linear regression is used to map the sets of MapReduce configuration parameters (number of Mappers, number of Reducers, size of File System (HDFS) and the size of input file) to total CPU clock ticks of the application. This derived model can be used for predicting total CPU requirements of the same application when using MapReduce framework on the same platform. Our approach aims to eliminate error-prone manual processes and presents a fully automated solution. Three standard applications (WordCount, Exim Mainlog parsing and Terasort) are used to evaluate our modeling technique on pseu...
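
    The modeling phase amounts to fitting a multiple linear regression from the MapReduce configuration parameters to total CPU clock ticks; a sketch with invented profiling data (not measurements from the paper) is given below.

```python
import numpy as np

# Hypothetical profiling runs: (mappers, reducers, HDFS block size MB, input size GB).
X = np.array([
    [4,  2,  64,  1.0],
    [8,  2,  64,  2.0],
    [8,  4, 128,  2.0],
    [16, 4, 128,  4.0],
    [16, 8, 256,  8.0],
    [32, 8, 256, 16.0],
], dtype=float)
cpu_ticks = np.array([1.1e9, 1.9e9, 2.0e9, 3.6e9, 6.9e9, 13.5e9])

# Fit cpu_ticks ~ b0 + b1*mappers + b2*reducers + b3*block + b4*input (least squares).
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, cpu_ticks, rcond=None)

# Predict total CPU ticks for an unseen configuration.
new_cfg = np.array([1.0, 32, 16, 256, 12.0])
print(coef, new_cfg @ coef)
```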

  17. REDUCING PROCESS VARIABILITY BY USING DMAIC MODEL: A CASE STUDY IN BANGLADESH

    Directory of Open Access Journals (Sweden)

    Ripon Kumar Chakrabortty

    2013-03-01

    Full Text Available Nowadays many leading manufacturing industries have started to practice Six Sigma and Lean Manufacturing concepts to boost their productivity as well as the quality of their products. In this paper, the Six Sigma approach has been used to reduce the process variability of a food processing industry in Bangladesh. The DMAIC (Define, Measure, Analyze, Improve, and Control) model has been used to implement the Six Sigma philosophy, and the five phases of the model have been structured step by step. Different tools of Total Quality Management, Statistical Quality Control and Lean Manufacturing, such as Quality Function Deployment, the P control chart, the fishbone diagram, the Analytical Hierarchy Process and Pareto analysis, have been used in the different phases of the DMAIC model. Process variability has been reduced by identifying the root causes of defects and eliminating them. The ultimate goal of this study is to make the process lean and increase its sigma level.
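
    One of the tools mentioned, the P control chart, reduces to a short calculation: the centre line is the average defect proportion and the control limits lie three standard errors above and below it. The inspection data below are invented for illustration.

```python
import numpy as np

# Hypothetical daily inspections: constant sample size n, defective counts per day.
n = 200
defectives = np.array([12, 9, 15, 11, 8, 14, 28, 10, 9, 13])
p = defectives / n

# p-chart centre line and 3-sigma control limits.
p_bar = p.mean()
sigma = np.sqrt(p_bar * (1 - p_bar) / n)
ucl, lcl = p_bar + 3 * sigma, max(p_bar - 3 * sigma, 0.0)

print(f"CL={p_bar:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")
print("out-of-control samples:", np.where((p > ucl) | (p < lcl))[0])
```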

  18. Reducing Test Anxiety among Third Grade Students through the Implementation of Relaxation Techniques

    Science.gov (United States)

    Larson, Heidi A.; El Ramahi, Mera K.; Conn, Steven R.; Estes, Lincoln A.; Ghibellini, Amanda B.

    2010-01-01

    The purpose of this study was to reduce the negative effects that self-perceived levels of test anxiety have on third-grade students. The participants in this study consisted of 177 third-grade students at two Midwestern public elementary schools. Students at one school were taught relaxation techniques, while students at the second school served…

  19. Reduced models of extratropical low-frequency variability

    Science.gov (United States)

    Strounine, Kirill

    Low-frequency variability (LFV) of the atmosphere refers to its behavior on time scales of 10-100 days, longer than the life cycle of a mid-latitude cyclone but shorter than a season. This behavior is still poorly understood and hard to predict. Using various simplified models has been helpful in gaining understanding that might improve prediction. The present study compares and contrasts various mode reduction strategies that help systematically derive such simplified models of LFV. Three major strategies have been applied to reduce a fairly realistic, high-dimensional, quasi-geostrophic, 3-level (QG3) atmospheric model to lower dimensions: (i) a purely empirical, multi-level regression procedure, which specifies the functional form of the reduced model and finds the model coefficients by multiple polynomial regression; (ii) an empirical-dynamical method, which retains only a few components in the projection of the full QG3 model equations onto a specified basis (the so-called bare truncation), and finds the linear deterministic and additive stochastic corrections empirically; and (iii) a dynamics-based technique, employing the stochastic mode reduction strategy of Majda et al. (2001; MTV). Subject to the assumption of significant time-scale separation in the physical system under consideration, MTV derives the form of the reduced model and finds its coefficients with minimal statistical fitting. The empirical-dynamical and dynamical reduced models were further improved by sequential parameter estimation and benchmarked against multi-level regression models; the extended Kalman filter (EKF) was used for the parameter estimation. In constructing the reduced models, the choice of basis functions is also important. We considered as basis functions a set of empirical orthogonal functions (EOFs). These EOFs were computed using (a) an energy norm; and (b) a potential-enstrophy norm. We also devised a method, using singular value decomposition of the full-model

  20. Determining which land management practices reduce catchment scale flood risk and where to implement them for optimum effect

    Science.gov (United States)

    Pattison, Ian; Lane, Stuart; Hardy, Richard; Reaney, Sim

    2010-05-01

    The theoretical basis for why changes in land management might increase flood risk is well known, but proving these effects through numerical modelling still remains a challenge. In large catchments, like the River Eden in Cumbria, NW England, one of the reasons for this is that it is unfeasible to test multiple scenarios in all their possible locations. We have developed two linked approaches to refine the number of scenarios and locations using 1) spatial downscaling and 2) participatory decision making, which potentially should increase the likelihood of finding a link between land use and downstream flooding. Firstly, land management practices can have both flood reducing and flood increasing effects, depending on their location. As a result some areas of the catchment are more important in determining downstream flood risk than others, depending on the land use and hydrological connectivity. We apply a downscaling approach to identify which sub-catchments are most important in explaining downstream flooding. This is important because it is in these areas that management options are most likely to have a positive and detectable effect. Secondly, once the dominant sub-catchment has been identified, the land management scenarios that are both feasible and likely to impact flood risk need to be determined. This was done through active stakeholder engagement. The stakeholder group undertook a brainstorming exercise, which suggested about 30 different rural land management scenarios, which were mapped on to a literature-based conceptual framework of hydrological processes. Then these options were evaluated based on five criteria: relevance to catchment, scientific effectiveness, testability, robustness/uncertainty and feasibility of implementation. The suitability of each scenario was discussed and prioritised by the stakeholder group based on scientific needs and expectations and local suitability and feasibility. The next stage of the participatory approach was a mapping

  1. Parameterized reduced-order models using hyper-dual numbers.

    Energy Technology Data Exchange (ETDEWEB)

    Fike, Jeffrey A.; Brake, Matthew Robert

    2013-10-01

    The goal of most computational simulations is to accurately predict the behavior of a real, physical system. Accurate predictions often require very computationally expensive analyses and so reduced order models (ROMs) are commonly used. ROMs aim to reduce the computational cost of the simulations while still providing accurate results by including all of the salient physics of the real system in the ROM. However, real, physical systems often deviate from the idealized models used in simulations due to variations in manufacturing or other factors. One approach to this issue is to create a parameterized model in order to characterize the effect of perturbations from the nominal model on the behavior of the system. This report presents a methodology for developing parameterized ROMs, which is based on Craig-Bampton component mode synthesis and the use of hyper-dual numbers to calculate the derivatives necessary for the parameterization.
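
    Hyper-dual numbers carry two infinitesimal parts and their product, so a single function evaluation returns exact first and second derivatives, which is what makes them convenient for parameterizing a ROM. The minimal class below is a self-contained sketch of that arithmetic, not the report's implementation.

```python
class HyperDual:
    """Minimal hyper-dual number a + b*e1 + c*e2 + d*e1*e2, with e1**2 = e2**2 = 0."""
    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a * o.a,
                         self.a * o.b + self.b * o.a,
                         self.a * o.c + self.c * o.a,
                         self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)
    __rmul__ = __mul__

def f(x):
    """Example scalar function f(x) = x^3 + 2x."""
    return x * x * x + 2.0 * x

# Seed the two infinitesimal directions: x = 2 + e1 + e2.
y = f(HyperDual(2.0, 1.0, 1.0, 0.0))
print(y.a, y.b, y.d)   # f(2) = 12, f'(2) = 14, f''(2) = 12 (all exact)
```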

  2. Simulating lightning into the RAMS model: implementation and preliminary results

    Directory of Open Access Journals (Sweden)

    S. Federico

    2014-05-01

    Full Text Available This paper shows the results of a tailored version of a previously published methodology, designed to simulate lightning activity, implemented into the Regional Atmospheric Modeling System (RAMS). The method gives the flash density at the resolution of the RAMS grid scale, allowing for a detailed analysis of the evolution of simulated lightning activity. The system is applied in detail to two case studies that occurred over the Lazio Region, in Central Italy. Simulations are compared with the lightning activity detected by the LINET network. The cases refer to two thunderstorms of different intensity. Results show that the model predicts both cases reasonably well and that the lightning activity is well reproduced, especially for the most intense case. However, there are errors in the timing and positioning of the convection, whose magnitude depends on the case study, and these are mirrored in timing and positioning errors of the lightning distribution. To assess the performance of the methodology objectively, standard scores are presented for four additional case studies. Scores show the ability of the methodology to simulate the daily lightning activity for different spatial scales and for two different minimum thresholds of flash number density. The performance decreases at finer spatial scales and for higher thresholds. The comparison of simulated and observed lightning activity is an immediate and powerful tool to assess the model's ability to reproduce the intensity and the evolution of the convection. This shows the importance of the use of computationally efficient lightning schemes, such as the one described in this paper, in forecast models.

  3. Reduced Models in Chemical Kinetics via Nonlinear Data-Mining

    Directory of Open Access Journals (Sweden)

    Eliodoro Chiavazzo

    2014-01-01

    Full Text Available The adoption of detailed mechanisms for chemical kinetics often poses two types of severe challenges: First, the number of degrees of freedom is large; and second, the dynamics is characterized by widely disparate time scales. As a result, reactive flow solvers with detailed chemistry often become intractable even for large clusters of CPUs, especially when dealing with direct numerical simulation (DNS) of turbulent combustion problems. This has motivated the development of several techniques for reducing the complexity of such kinetics models, where, eventually, only a few variables are considered in the development of the simplified model. Unfortunately, no generally applicable a priori recipe for selecting suitable parameterizations of the reduced model is available, and the choice of slow variables often relies upon intuition and experience. We present an automated approach to this task, consisting of three main steps. First, the low dimensional manifold of slow motions is (approximately) sampled by brief simulations of the detailed model, starting from a rich enough ensemble of admissible initial conditions. Second, a global parametrization of the manifold is obtained through the Diffusion Map (DMAP) approach, which has recently emerged as a powerful tool in data analysis/machine learning. Finally, a simplified model is constructed and solved on the fly in terms of the above reduced (slow) variables. Clearly, closing this latter model requires nontrivial interpolation calculations, enabling restriction (mapping from the full ambient space to the reduced one) and lifting (mapping from the reduced space to the ambient one). This is a key step in our approach, and a variety of interpolation schemes are reported and compared. The scope of the proposed procedure is presented and discussed by means of an illustrative combustion example.
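
    The parametrization step can be sketched in a few lines: build a Gaussian kernel over the sampled states, row-normalize it into a Markov matrix, and take its leading non-trivial eigenvectors as the reduced (slow) coordinates. The data set and kernel-scale heuristic below are illustrative, not the combustion example of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative "sampled trajectory": points near a 1-D curve in a 3-D state space.
s = rng.uniform(0.0, 2.0 * np.pi, 400)
X = np.column_stack([np.cos(s), np.sin(s), 0.1 * rng.standard_normal(400)])

# Pairwise squared distances and Gaussian kernel.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
eps = np.median(d2)                    # simple kernel-scale heuristic
K = np.exp(-d2 / eps)

# Row-normalize into a Markov matrix and take its leading eigenvectors.
P = K / K.sum(axis=1, keepdims=True)
evals, evecs = np.linalg.eig(P)
order = np.argsort(-evals.real)
# The first non-trivial eigenvectors give the diffusion-map (slow) coordinates.
dmap = evecs[:, order[1:3]].real * evals[order[1:3]].real
print(dmap.shape)                      # (400, 2) reduced coordinates
```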

  4. Implementation of Distance Support (DS) to Reduce Total Ownership Cost (R-TOC)

    Science.gov (United States)

    2012-02-01

    resulting in R-TOC. [Figure 1 - DS Reduces Total Response Time (DS CRM, Distance Support Framework, MS PowerPoint presentation, Oct. 2011)] [Table residue: counts of support requests by category (hardware, software, RMC tech inquiry, logistics, feedbacks, documentation, quality of life (QoL), directory assistance, medical/manning/training, PMS, other), totalling 4,208.] There are three

  5. Determination of effective loss factors in reduced SEA models

    Science.gov (United States)

    Chimeno Manguán, M.; Fernández de las Heras, M. J.; Roibás Millán, E.; Simón Hidalgo, F.

    2017-01-01

    The definition of Statistical Energy Analysis (SEA) models for large complex structures is highly conditioned by the classification of the structure elements into a set of coupled subsystems and the subsequent determination of the loss factors representing both the internal damping and the coupling between subsystems. The accurate definition of the complete system can lead to excessively large models as the size and complexity increase. This fact can also raise practical issues for the experimental determination of the loss factors. This work presents a formulation of reduced SEA models for incomplete systems defined by a set of effective loss factors. This reduced SEA model provides a feasible number of subsystems for the application of the Power Injection Method (PIM). For structures of high complexity, the accessibility of their components can be restricted, for instance internal equipment or panels. For these cases the use of PIM to carry out an experimental SEA analysis is not possible. New methods are presented for this case in combination with the reduced SEA models. These methods allow defining some of the model loss factors that could not be obtained through PIM. The methods are validated with a numerical analysis case and they are also applied to an actual spacecraft structure with accessibility restrictions: a solar wing in folded configuration.

  6. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much shorter time (microseconds instead of hours/days).
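
    The surrogate idea reduces to fitting a cheap approximation to a handful of expensive simulation runs and evaluating that approximation everywhere else. The sketch below uses a quadratic response surface and a stand-in 'expensive_code' function; it is not RAVEN or RELAP output.

```python
import numpy as np

def expensive_code(power, cooling):
    """Stand-in for an expensive thermal-hydraulic run: returns a peak temperature."""
    return 600.0 + 3.0 * power - 40.0 * cooling + 0.5 * power * cooling

# A small design of computer experiments: the only points where the real code runs.
rng = np.random.default_rng(1)
samples = rng.uniform([50.0, 1.0], [120.0, 5.0], size=(30, 2))
y = np.array([expensive_code(p, c) for p, c in samples])

# Quadratic response-surface surrogate fitted by least squares.
def features(p, c):
    return np.array([1.0, p, c, p * c, p ** 2, c ** 2])

A = np.array([features(p, c) for p, c in samples])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# The surrogate can now be evaluated millions of times, e.g. inside risk sampling.
for p, c in rng.uniform([50.0, 1.0], [120.0, 5.0], size=(5, 2)):
    print(expensive_code(p, c), features(p, c) @ coef)
```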

  7. Robust simulation of buckled structures using reduced order modeling

    Science.gov (United States)

    Wiebe, R.; Perez, R. A.; Spottswood, S. M.

    2016-09-01

    Lightweight metallic structures are a mainstay in aerospace engineering. For these structures, stability, rather than strength, is often the critical limit state in design. For example, buckling of panels and stiffeners may occur during emergency high-g maneuvers, while in supersonic and hypersonic aircraft, it may be induced by thermal stresses. The longstanding solution to such challenges was to increase the sizing of the structural members, which is counter to the ever present need to minimize weight for reasons of efficiency and performance. In this work we present some recent results in the area of reduced order modeling of post-buckled thin beams. A thorough parametric study of the response of a beam to changing harmonic loading parameters, which is useful in exposing complex phenomena and exercising numerical models, is presented. Two error metrics that use but require no time stepping of a (computationally expensive) truth model are also introduced. The error metrics are applied to several interesting forcing parameter cases identified from the parametric study and are shown to yield useful information about the quality of a candidate reduced order model. Parametric studies, especially when considering forcing and structural geometry parameters, coupled environments, and uncertainties would be computationally intractable with finite element models. The goal is to make rapid simulation of complex nonlinear dynamic behavior possible for distributed systems via fast and accurate reduced order models. This ability is crucial in allowing designers to rigorously probe the robustness of their designs to account for variations in loading, structural imperfections, and other uncertainties.

  8. Reduced order models for thermal analysis : final report : LDRD Project No. 137807.

    Energy Technology Data Exchange (ETDEWEB)

    Hogan, Roy E., Jr.; Gartling, David K.

    2010-09-01

    This LDRD Seniors' Council project is focused on the development, implementation and evaluation of Reduced Order Models (ROM) for application in the thermal analysis of complex engineering problems. Two basic approaches to developing a ROM for combined thermal conduction and enclosure radiation problems are considered. As a prerequisite to a ROM, a fully coupled solution method for conduction/radiation models is required; a parallel implementation is explored for this class of problems. High-fidelity models of large, complex systems are now used routinely to verify design and performance. However, there are applications where the high-fidelity model is too large to be used repetitively in a design mode. One such application is the design of a control system that oversees the functioning of the complex, high-fidelity model. Examples include control systems for manufacturing processes such as brazing and annealing furnaces as well as control systems for the thermal management of optical systems. A reduced order model (ROM) seeks to reduce the number of degrees of freedom needed to represent the overall behavior of the large system without a significant loss in accuracy. The reduction in the number of degrees of freedom of the ROM leads to immediate increases in computational efficiency and allows many design parameters and perturbations to be quickly and effectively evaluated. Reduced order models are routinely used in solid mechanics where techniques such as modal analysis have reached a high state of refinement. Similar techniques have recently been applied to standard thermal conduction problems, though the general use of ROM for heat transfer is not yet widespread. One major difficulty with the development of ROM for general thermal analysis is the need to include the very nonlinear effects of enclosure radiation in many applications. Many ROM methods have considered only linear or mildly nonlinear problems. In the present study a reduced order model is

  9. Mobile phone model with metamaterials to reduce the exposure

    Science.gov (United States)

    Pinto, Yenny; Begaud, Xavier

    2016-04-01

    This work presents a terminal mobile model where an Inverted-F Antenna (IFA) is associated with three different kinds of metamaterials: artificial magnetic conductor (AMC), electromagnetic band gap (EBG) and resistive high-impedance surface (RHIS). The objective was to evaluate whether some metamaterials may be used to reduce exposure while preserving the antenna performances. The exposure has been evaluated using a simplified phantom model. Two configurations, antenna in front of the phantom and antenna hidden by the ground plane, have been evaluated. Results show that using an optimized RHIS, the SAR 10 g is reduced and the antenna performances are preserved. With RHIS solution, the SAR 10 g peak is reduced by 8 % when the antenna is located in front of the phantom and by 6 % when the antenna is hidden by ground plane.

  10. Reduced Lorenz models for anomalous transport and profile resilience

    DEFF Research Database (Denmark)

    Rypdal, K.; Garcia, Odd Erik

    2007-01-01

    to resilience of the profile. Particular emphasis is put on the diffusionless limit, where these equations reduce to a simple dynamical system depending only on one single forcing parameter. This model is studied numerically, stressing experimentally observable signatures, and some of the perils of dimension...

  11. Implementation of evidence-based home visiting programs aimed at reducing child maltreatment: A meta-analytic review.

    Science.gov (United States)

    Casillas, Katherine L; Fauchier, Angèle; Derkash, Bridget T; Garrido, Edward F

    2016-03-01

    In recent years there has been an increase in the popularity of home visitation programs as a means of addressing risk factors for child maltreatment. The evidence supporting the effectiveness of these programs from several meta-analyses, however, is mixed. One potential explanation for this inconsistency explored in the current study involves the manner in which these programs were implemented. In the current study we reviewed 156 studies associated with 9 different home visitation program models targeted to caregivers of children between the ages of 0 and 5. Meta-analytic techniques were used to determine the impact of 18 implementation factors (e.g., staff selection, training, supervision, fidelity monitoring, etc.) and four study characteristics (publication type, target population, study design, comparison group) in predicting program outcomes. Results from analyses revealed that several implementation factors, including training, supervision, and fidelity monitoring, had a significant effect on program outcomes, particularly child maltreatment outcomes. Study characteristics, including the program's target population and the comparison group employed, also had a significant effect on program outcomes. Implications of the study's results for those interested in implementing home visitation programs are discussed. A careful consideration and monitoring of program implementation is advised as a means of achieving optimal study results.

  12. On reducibility and ergodicity of population projection matrix models

    DEFF Research Database (Denmark)

    Stott, Iain; Townley, Stuart; Carslake, David

    2010-01-01

    1. Population projection matrices (PPMs) are probably the most commonly used empirical population models. To be useful for predictive or prospective analyses, PPM models should generally be irreducible (the associated life cycle graph contains the necessary transition rates to facilitate pathways...... structure used in the population projection). In our sample of published PPMs, 15·6% are non-ergodic. 3. This presents a problem: reducible–ergodic models often defy biological rationale in their description of the life cycle but may or may not prove problematic for analysis as they often behave similarly...... to irreducible models. Reducible–non-ergodic models will usually defy biological rationale in their description of the both the life cycle and population dynamics, hence contravening most analytical methods. 4. We provide simple methods to evaluate reducibility and ergodicity of PPM models, present illustrative...

  13. Meteorological implementation issues in chemistry and transport models

    Directory of Open Access Journals (Sweden)

    S. E. Strahan

    2006-01-01

    Full Text Available Offline chemistry and transport models (CTMs are versatile tools for studying composition and climate issues requiring multi-decadal simulations. They are computationally fast compared to coupled chemistry climate models, making them well-suited for integrating sensitivity experiments necessary for understanding model performance and interpreting results. The archived meteorological fields used by CTMs can be implemented with lower horizontal or vertical resolution than the original meteorological fields in order to shorten integration time, but the effects of these shortcuts on transport processes must be understood if the CTM is to have credibility. In this paper we present a series of sensitivity experiments on a CTM using the Lin and Rood advection scheme, each differing from another by a single feature of the wind field implementation. Transport effects arising from changes in resolution and model lid height are evaluated using process-oriented diagnostics that intercompare CH4, O3, and age tracer carried in the simulations. Some of the diagnostics used are derived from observations and are shown as a reality check for the model. Processes evaluated include tropical ascent, tropical-midlatitude exchange, poleward circulation in the upper stratosphere, and the development of the Antarctic vortex. We find that faithful representation of stratospheric transport in this CTM is possible with a full mesosphere, ~1 km resolution in the lower stratosphere, and relatively low vertical resolution (>4 km spacing in the middle stratosphere and above, but lowering the lid from the upper to lower mesosphere leads to less realistic constituent distributions in the upper stratosphere. Ultimately, this affects the polar lower stratosphere, but the effects are greater for the Antarctic than the Arctic. The fidelity of lower stratospheric transport requires realistic tropical and high latitude mixing barriers which are produced at 2°×2.5°, but not lower

  14. Expectations and implementations of the flipped classroom model in undergraduate mathematics courses

    Science.gov (United States)

    Naccarato, Emilie; Karakok, Gulden

    2015-10-01

    The flipped classroom model is being used more frequently in undergraduate mathematics courses. As with any new teaching model, in-depth investigations of both various implementation styles and how the new model improves student learning are needed. Currently, many practitioners have been sharing their implementations of this model. However, there has not yet been an investigation of the various implementations of the model to discern general trends in this movement. With this research goal in mind, we conducted a study exploring various implementations of the flipped classroom model by interviewing 19 faculty members who experienced using this model at 14 different institutes. Results indicate that participants had similar motivations for implementation; however, subsequent implementations were different. In addition, we share participants' perspectives on (a) student learning of pre-requisite, procedural and conceptual knowledge, and (b) how this particular model promotes such knowledge developments. Finally, we provide suggestions for future implementations and research regarding this particular teaching model.

  15. Implementation of a chest pain management service improves patient care and reduces length of stay.

    Science.gov (United States)

    Scott, Adam C; O'Dwyer, Kristina M; Cullen, Louise; Brown, Anthony; Denaro, Charles; Parsonage, William

    2014-03-01

    Chest pain is one of the most common complaints in patients presenting to an emergency department. Delays in management due to a lack of readily available objective tests to risk stratify patients with possible acute coronary syndromes can lead to an unnecessarily lengthy admission placing pressure on hospital beds or inappropriate discharge. The need for a co-ordinated system of clinical management based on enhanced communication between departments, timely and appropriate triage, clinical investigation, diagnosis, and treatment was identified. An evidence-based Chest Pain Management Service and clinical pathway were developed and implemented, including the introduction of after-hours exercise stress testing. Between November 2005 and March 2013, 5662 patients were managed according to a Chest Pain Management pathway resulting in a reduction of 5181 admission nights by more timely identification of patients at low risk who could then be discharged. In addition, 1360 days were avoided in high-risk patients who received earlier diagnosis and treatment. The creation of a Chest Pain Management pathway and the extended exercise stress testing service resulted in earlier discharge for low-risk patients; and timely treatment for patients with positive and equivocal exercise stress test results. This service demonstrated a significant saving in overnight admissions.

  16. Phylogenetic mixture models can reduce node-density artifacts.

    Science.gov (United States)

    Venditti, Chris; Meade, Andrew; Pagel, Mark

    2008-04-01

    We investigate the performance of phylogenetic mixture models in reducing a well-known and pervasive artifact of phylogenetic inference known as the node-density effect, comparing them to partitioned analyses of the same data. The node-density effect refers to the tendency for the amount of evolutionary change in longer branches of phylogenies to be underestimated compared to that in regions of the tree where there are more nodes and thus branches are typically shorter. Mixture models allow more than one model of sequence evolution to describe the sites in an alignment without prior knowledge of the evolutionary processes that characterize the data or how they correspond to different sites. If multiple evolutionary patterns are common in sequence evolution, mixture models may be capable of reducing node-density effects by characterizing the evolutionary processes more accurately. In gene-sequence alignments simulated to have heterogeneous patterns of evolution, we find that mixture models can reduce node-density effects to negligible levels or remove them altogether, performing as well as partitioned analyses based on the known simulated patterns. The mixture models achieve this without knowledge of the patterns that generated the data and even in some cases without specifying the full or true model of sequence evolution known to underlie the data. The latter result is especially important in real applications, as the true model of evolution is seldom known. We find the same patterns of results for two real data sets with evidence of complex patterns of sequence evolution: mixture models substantially reduced node-density effects and returned better likelihoods compared to partitioning models specifically fitted to these data. We suggest that the presence of more than one pattern of evolution in the data is a common source of error in phylogenetic inference and that mixture models can often detect these patterns even without prior knowledge of their presence in the
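
    The core idea can be sketched numerically (toy numbers, not the authors' implementation): under a mixture model the likelihood of each alignment site is a weighted sum of its likelihoods under the component substitution models, so sites never have to be assigned to a single model in advance.

        import numpy as np

        # Hypothetical per-site likelihoods under two component models
        # (rows: components, columns: alignment sites); in practice these
        # come from Felsenstein's pruning algorithm on the same tree.
        site_lik = np.array([[0.012, 0.034, 0.002, 0.021],
                             [0.025, 0.008, 0.015, 0.019]])
        weights = np.array([0.6, 0.4])          # mixture weights, sum to 1

        # Mixture likelihood of each site is the weighted sum over components;
        # the alignment log-likelihood is the sum of the per-site logs.
        per_site = weights @ site_lik
        print(per_site, np.sum(np.log(per_site)))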

  17. Reduced order modeling of fluid/structure interaction.

    Energy Technology Data Exchange (ETDEWEB)

    Barone, Matthew Franklin; Kalashnikova, Irina; Segalman, Daniel Joseph; Brake, Matthew Robert

    2009-11-01

    This report describes work performed from October 2007 through September 2009 under the Sandia Laboratory Directed Research and Development project titled 'Reduced Order Modeling of Fluid/Structure Interaction.' This project addresses fundamental aspects of techniques for construction of predictive Reduced Order Models (ROMs). A ROM is defined as a model, derived from a sequence of high-fidelity simulations, that preserves the essential physics and predictive capability of the original simulations but at a much lower computational cost. Techniques are developed for construction of provably stable linear Galerkin projection ROMs for compressible fluid flow, including a method for enforcing boundary conditions that preserves numerical stability. A convergence proof and error estimates are given for this class of ROM, and the method is demonstrated on a series of model problems. A reduced order method, based on the method of quadratic components, for solving the von Karman nonlinear plate equations is developed and tested. This method is applied to the problem of nonlinear limit cycle oscillations encountered when the plate interacts with an adjacent supersonic flow. A stability-preserving method for coupling the linear fluid ROM with the structural dynamics model for the elastic plate is constructed and tested. Methods for constructing efficient ROMs for nonlinear fluid equations are developed and tested on a one-dimensional convection-diffusion-reaction equation. These methods are combined with a symmetrization approach to construct a ROM technique for application to the compressible Navier-Stokes equations.
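
    A minimal sketch of the Galerkin-projection step for a linear system (assumed operators and basis, not the report's code): the full-order model dx/dt = A x + B u is projected onto an orthonormal basis Phi built from high-fidelity snapshots, leaving a much smaller system in the reduced coordinates.

        import numpy as np

        def galerkin_rom(A, B, Phi):
            """Project a linear full-order model dx/dt = A x + B u onto an
            orthonormal basis Phi (columns), giving the reduced operators of
            da/dt = Ar a + Br u with x approximated by Phi a."""
            Ar = Phi.T @ A @ Phi
            Br = Phi.T @ B
            return Ar, Br

        # Hypothetical full-order operators and a POD basis with 3 modes.
        rng = np.random.default_rng(0)
        n, r = 200, 3
        A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
        B = rng.standard_normal((n, 1))
        snapshots = rng.standard_normal((n, 50))     # stand-in for simulation data
        Phi, _, _ = np.linalg.svd(snapshots, full_matrices=False)
        Ar, Br = galerkin_rom(A, B, Phi[:, :r])
        print(Ar.shape, Br.shape)                    # (3, 3) (3, 1)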

  18. Reduced Complexity Channel Models for IMT-Advanced Evaluation

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2009-01-01

    Full Text Available Accuracy and complexity are two crucial aspects of the applicability of a channel model for wideband multiple input multiple output (MIMO) systems. For a small number of antenna element pairs, correlation-based models have lower computational complexity, while the geometry-based stochastic models (GBSMs) can provide more accurate modeling of real radio propagation. This paper investigates several potential simplifications of the GBSM to reduce the complexity with minimal impact on accuracy. In addition, we develop a set of broadband metrics which enable a thorough investigation of the differences between the GBSMs and the simplified models. The impact of various random variables which are employed by the original GBSM on the system-level simulation is also studied. Both simulation results and a measurement campaign show that complexity can be reduced significantly with a negligible loss of accuracy in the proposed metrics. As an example, in the presented scenarios, the computational time can be reduced by up to 57% while keeping the relative deviation of 5% outage capacity within 5%.

  19. A new algorithm to reduce noise in microscopy images implemented with a simple program in python.

    Science.gov (United States)

    Papini, Alessio

    2012-03-01

    All microscopy images contain noise, which increases when approaching the resolution limit (e.g., of a transmission electron microscope or light microscope). Many methods are available to reduce noise. One of the most commonly used is image averaging. We propose here to use the mode of pixel values. Simple Python programs process a given number of images, recorded consecutively from the same subject. The programs calculate the mode of the pixel values in a given position (a, b). The result is a new image containing in (a, b) the mode of the values. Therefore, the final pixel value corresponds to one read in at least two of the pixels in position (a, b). The application of the program on a set of images obtained by applying salt and pepper noise and GIMP hurl noise with 10-90% standard deviation showed that the mode performs better than averaging with three to eight images. The data suggest that the mode would be more efficient (in the sense of a lower number of recorded images to process to reduce noise below a given limit) for a lower number of total noisy pixels and high standard deviation (as with impulse noise and salt and pepper noise), while averaging would be more efficient when the number of varying pixels is high and the standard deviation is low, as in many cases of Gaussian noise affected images. The two methods may be used serially. Copyright © 2011 Wiley Periodicals, Inc.
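
    A minimal sketch of the per-pixel mode idea described above (illustrative only, not the author's published program): given a stack of 8-bit frames of the same subject, each output pixel takes the most frequent value observed at that position.

        import numpy as np

        def mode_denoise(stack):
            """stack: (k, h, w) array of k consecutive 8-bit images of the same
            subject. Returns an (h, w) image whose pixel (a, b) is the mode of
            stack[:, a, b], i.e. a value seen in at least two frames whenever
            any value repeats at that position."""
            k, h, w = stack.shape
            flat = stack.reshape(k, -1)
            out = np.empty(flat.shape[1], dtype=stack.dtype)
            for j in range(flat.shape[1]):
                counts = np.bincount(flat[:, j], minlength=256)
                out[j] = counts.argmax()
            return out.reshape(h, w)

        # Hypothetical example: 5 noisy copies of a constant mid-grey image
        # corrupted by 20% impulse ("hurl"-like) noise.
        rng = np.random.default_rng(1)
        clean = np.full((64, 64), 128, dtype=np.uint8)
        noisy = np.stack([clean.copy() for _ in range(5)])
        salt = rng.random(noisy.shape) < 0.2
        noisy[salt] = rng.integers(0, 256, salt.sum(), dtype=np.uint8)
        print(np.mean(mode_denoise(noisy) == clean))   # fraction of pixels recovered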

  20. Reduced Complexity Volterra Models for Nonlinear System Identification

    Directory of Open Access Journals (Sweden)

    Hacıoğlu Rıfat

    2001-01-01

    Full Text Available A broad class of nonlinear systems and filters can be modeled by the Volterra series representation. However, its practical use in nonlinear system identification is sometimes limited due to the large number of parameters associated with the Volterra filter's structure. The parametric complexity also complicates design procedures based upon such a model. This limitation for system identification is addressed in this paper using a Fixed Pole Expansion Technique (FPET) within the Volterra model structure. The FPET approach employs orthonormal basis functions derived from fixed (real or complex) pole locations to expand the Volterra kernels and reduce the number of estimated parameters. That FPET can considerably reduce the number of estimated parameters is demonstrated by a digital satellite channel example in which we use the proposed method to identify the channel dynamics. Furthermore, a gradient-descent procedure that adaptively selects the pole locations in the FPET structure is developed in the paper.
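
    To make the parameter-count problem concrete, here is a sketch of a plain truncated second-order Volterra filter (hypothetical kernels, not the paper's FPET code): with memory length M it already needs M + M^2 kernel coefficients, which is exactly what the fixed-pole expansion is designed to compress.

        import numpy as np

        def volterra2(x, h1, h2):
            """Truncated second-order Volterra filter.
            x  : input signal, shape (N,)
            h1 : first-order kernel, shape (M,)
            h2 : second-order kernel, shape (M, M)
            Parameter count grows as M + M**2, which motivates expanding the
            kernels on a small fixed-pole orthonormal basis instead."""
            M, N = len(h1), len(x)
            y = np.zeros(N)
            for n in range(N):
                past = np.array([x[n - i] if n - i >= 0 else 0.0 for i in range(M)])
                y[n] = h1 @ past + past @ h2 @ past
            return y

        # Hypothetical kernels, memory length 4 -> 4 + 16 = 20 parameters already.
        h1 = np.array([1.0, 0.5, 0.25, 0.1])
        h2 = 0.05 * np.outer(h1, h1)
        y = volterra2(np.sin(np.linspace(0, 10, 100)), h1, h2)
        print(y.shape)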

  1. Reducing Teacher Stress by Implementing Collaborative Problem Solving in a School Setting

    Science.gov (United States)

    Schaubman, Averi; Stetson, Erica; Plog, Amy

    2011-01-01

    Student behavior affects teacher stress levels and the student-teacher relationship. In this pilot study, teachers were trained in Collaborative Problem Solving (CPS), a cognitive-behavioral model that explains challenging behavior as the result of underlying deficits in the areas of flexibility/adaptability, frustration tolerance, and problem…

  3. Implementing the Serial Number Tracking model in telecommunications: a case study of Croatia

    Directory of Open Access Journals (Sweden)

    Neven Polovina

    2012-01-01

    Full Text Available Background: The case study describes the implementation of the SNT (Serial Number Tracking) model in an integrated information system, as a means of business support in a Croatian mobile telecommunications company. Objectives: The goal was to show best practice for SNT implementation in the telecommunications industry, with reference to problems that arose during the implementation. Methods/Approach: The case study approach was used, based on the documentation about the SNT model and the business intelligence system in the Croatian mobile telecommunications company. Results: Economic aspects of the effectiveness of the SNT model are described and confirmed, based on actual tangible and, predominantly, intangible benefits. Conclusions: Advantages of the SNT model are multiple: operating costs for storage and transit of goods were reduced; accuracy of deliveries and physical inventory was improved; a new source of information for the business intelligence system was obtained; operating processes in the distribution of goods were advanced; transit insurance costs decreased; and there were fewer cases of fraudulent behaviour.

  4. A Reduced-Order Model of Transport Phenomena for Power Plant Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Paul Cizmas; Brian Richardson; Thomas Brenner; Raymond Fontenot

    2009-09-30

    A reduced-order model based on proper orthogonal decomposition (POD) has been developed to simulate transient two- and three-dimensional isothermal and non-isothermal flows in a fluidized bed. Reduced-order models of void fraction, gas and solids temperatures, granular energy, and z-direction gas and solids velocity have been added to the previous version of the code. These algorithms are presented and their implementation is discussed. Verification studies are presented for each algorithm. A number of methods to accelerate the computations performed by the reduced-order model are presented. The errors associated with each acceleration method are computed and discussed. Using a combination of acceleration methods, a two-dimensional isothermal simulation using the reduced-order model is shown to be 114 times faster than using the full-order model. In pursuing the objectives of the project and completing the tasks planned for this program, several unplanned and unforeseen results, methods and studies have been generated. These additional accomplishments are also presented and they include: (1) a study of the effect of snapshot sampling time on the computation of the POD basis functions, (2) an investigation of different strategies for generating the autocorrelation matrix used to find the POD basis functions, (3) the development and implementation of a bubble detection and tracking algorithm based on mathematical morphology, (4) a method for augmenting the proper orthogonal decomposition to better capture flows with discontinuities, such as bubbles, and (5) a mixed reduced-order/full-order model, called point-mode proper orthogonal decomposition, designed to avoid unphysical results due to approximation errors. The limitations of the proper orthogonal decomposition method in simulating transient flows with moving discontinuities, such as bubbling flows, are discussed and several methods are proposed to adapt the method for future use.
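
    A compact sketch of the snapshot-based POD construction referred to in items (1) and (2) (assumed array names, not the project's code): the autocorrelation matrix of the snapshot set is formed, its eigenvectors give the temporal coefficients, and the spatial basis functions are recovered by projecting the snapshots onto them.

        import numpy as np

        def pod_method_of_snapshots(X, r):
            """X: snapshot matrix (n_dof, n_snapshots), columns are flow fields
            sampled in time; r: number of retained POD modes.
            Returns Phi (n_dof, r) with orthonormal columns and the modal
            energies (eigenvalues of the snapshot autocorrelation matrix)."""
            Xc = X - X.mean(axis=1, keepdims=True)      # subtract the mean field
            C = Xc.T @ Xc / X.shape[1]                  # autocorrelation matrix
            lam, V = np.linalg.eigh(C)                  # ascending eigenvalues
            lam, V = lam[::-1], V[:, ::-1]              # sort descending
            Phi = Xc @ V[:, :r]
            Phi /= np.linalg.norm(Phi, axis=0)          # orthonormalise columns
            return Phi, lam

        # Hypothetical snapshots from a transient simulation.
        rng = np.random.default_rng(2)
        X = rng.standard_normal((5000, 80))
        Phi, lam = pod_method_of_snapshots(X, r=6)
        print(Phi.shape, lam[:6] / lam.sum())           # modes and captured energy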

  5. EXPERIMENTS OF A REDUCED GRID IN LASG/IAP WORLD OCEAN GENERAL CIRCULATION MODELS (OGCMs)

    Institute of Scientific and Technical Information of China (English)

    LIU Xiying; LIU Hailong; ZHANG Xuehong; YU Rucong

    2006-01-01

    Due to the decrease in grid size associated with the convergence of meridians toward the poles in spherical coordinates, the time steps in many global climate models using the finite-difference method are restricted to be unpleasantly small. To overcome the problem, a reduced grid is introduced to the LASG/IAP world ocean general circulation models. The reduced grid was implemented successfully in the coarser-resolution version model L30T63 at first. Then, it was carried out in the improved version model LICOM with finer resolutions. In the experiment with model L30T63, even with the time step unchanged, execution time per model run is shortened significantly owing to the decrease in the number of grid points and in filtering operations at high latitudes. Results from additional experiments with L30T63 show that the time step of integration can be quadrupled at most in the reduced grid with a refinement ratio of 3. In the experiment with model LICOM, with the model's original time step unchanged, the model domain is extended to the whole globe from its original coverage, with the grid point at the North Pole treated as an isolated island, and the results of the experiment are shown to be acceptable.

  6. HIV and tuberculosis – science and implementation to turn the tide and reduce deaths

    Directory of Open Access Journals (Sweden)

    Rony Zachariah

    2012-07-01

    Full Text Available Introduction: Every year, HIV-associated tuberculosis (TB) deprives 350,000 mainly young people of productive and healthy lives. People die because TB is not diagnosed and treated in those with known HIV infection and HIV infection is not diagnosed in those with TB. Even in those in whom both HIV and TB are diagnosed and treated, this often happens far too late. These deficiencies can be addressed through the application of new scientific evidence and diagnostic tools. Discussion: A strategy of starting antiretroviral therapy (ART) early in the course of HIV infection has the potential to considerably reduce both individual and community burden of TB and needs urgent evaluation for efficacy, feasibility and broader social and economic impact. Isoniazid preventive therapy can reduce the risk of TB and, if given strategically in addition to ART, provides synergistic benefit. Intensified TB screening as part of the “Three I's” strategy should be conducted at every clinic, home or community-based attendance using a symptoms-based algorithm, and new diagnostic tools should increasingly be used to confirm or refute TB diagnoses. Until such time when more sensitive and specific TB diagnostic assays are widely available, bolder approaches such as empirical anti-TB treatment need to be considered and evaluated. Patients with suspected or diagnosed TB must be screened for HIV and given cotrimoxazole preventive therapy and ART if HIV-positive. Three large randomized trials provide conclusive evidence that ART initiated within two to four weeks of start of anti-TB treatment saves lives, particularly in those with severe immunosuppression. The key to ensuring that these collaborative activities are delivered is the co-location and integration of TB and HIV services within the health system and the community. Conclusions: Progress towards reducing HIV-associated TB deaths can be achieved through attention to simple and deliverable actions on the ground

  7. HIV and tuberculosis – science and implementation to turn the tide and reduce deaths

    Science.gov (United States)

    Harries, Anthony D; Lawn, Stephen D; Getahun, Haileyesus; Zachariah, Rony; Havlir, Diane V

    2012-01-01

    Introduction Every year, HIV-associated tuberculosis (TB) deprives 350,000 mainly young people of productive and healthy lives. People die because TB is not diagnosed and treated in those with known HIV infection and HIV infection is not diagnosed in those with TB. Even in those in whom both HIV and TB are diagnosed and treated, this often happens far too late. These deficiencies can be addressed through the application of new scientific evidence and diagnostic tools. Discussion A strategy of starting antiretroviral therapy (ART) early in the course of HIV infection has the potential to considerably reduce both individual and community burden of TB and needs urgent evaluation for efficacy, feasibility and broader social and economic impact. Isoniazid preventive therapy can reduce the risk of TB and, if given strategically in addition to ART, provides synergistic benefit. Intensified TB screening as part of the “Three I's” strategy should be conducted at every clinic, home or community-based attendance using a symptoms-based algorithm, and new diagnostic tools should increasingly be used to confirm or refute TB diagnoses. Until such time when more sensitive and specific TB diagnostic assays are widely available, bolder approaches such as empirical anti-TB treatment need to be considered and evaluated. Patients with suspected or diagnosed TB must be screened for HIV and given cotrimoxazole preventive therapy and ART if HIV-positive. Three large randomized trials provide conclusive evidence that ART initiated within two to four weeks of start of anti-TB treatment saves lives, particularly in those with severe immunosuppression. The key to ensuring that these collaborative activities are delivered is the co-location and integration of TB and HIV services within the health system and the community. Conclusions Progress towards reducing HIV-associated TB deaths can be achieved through attention to simple and deliverable actions on the ground. John Donne, Meditation

  8. The Urgency of Doing: Assessing the System of Sustainable Implementation Model via the Schools Implementing towards Sustainability (SITS) Scale

    Science.gov (United States)

    Moceri, Dominic C.; Elias, Maurice J.; Fishman, Daniel B.; Pandina, Robert; Reyes-Portillo, Jazmin A.

    2012-01-01

    School-based prevention and promotion interventions (SBPPI) improve desirable outcomes (e.g., commitment to school and attendance) and reduce undesirable outcomes (e.g., suspensions and violence). Unfortunately, our understanding of how to effectively implement and sustain SBPPI outside of well-controlled conditions is lacking. To bridge this…

  9. Implementing an HL7 version 3 modeling tool from an Ecore model.

    Science.gov (United States)

    Bánfai, Balázs; Ulrich, Brandon; Török, Zsolt; Natarajan, Ravi; Ireland, Tim

    2009-01-01

    One of the main challenges of achieving interoperability using the HL7 V3 healthcare standard is the lack of clear definition and supporting tools for modeling, testing, and conformance checking. Currently, the knowledge defining the modeling is scattered around in MIF schemas, tools and specifications or simply with the domain experts. Modeling core HL7 concepts, constraints, and semantic relationships in Ecore/EMF encapsulates the domain-specific knowledge in a transparent way while unifying Java, XML, and UML in an abstract, high-level representation. Moreover, persisting and versioning the core HL7 concepts as a single Ecore context allows modelers and implementers to create, edit and validate message models against a single modeling context. The solution discussed in this paper is implemented in the new HL7 Static Model Designer as an extensible toolset integrated as a standalone Eclipse RCP application.

  10. Implementation strategy to reduce environmental impact of energy related activities in Zimbabwe

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-01

    In 1992 UNEP-Collaborating Centre on Energy and Environment (UNEP-CCEE), Denmark and Southern Centre for Energy and Environment (SCEE), Zimbabwe, prepared a country report for Zimbabwe on Greenhouse Gas (GHG) Abatement Costing. Abatement technologies for both supply and demand side were identified in order to reduce GHG emission. The present study addresses environmental impacts of the entire energy cycle focusing on coal use in industry and power generation. Zimbabwe has proven coal reserves of more than 700 million tonnes, and the potential of geological coal resources is estimated beyond 30 billion tonnes. The conventional applications of coal include electricity generation, steam traction in railway transport, industrial boilers, tobacco curing and coking. As coal is the major source of energy for Zimbabwe, the present study aims at identification of environmental impacts of the entire coal cycle from mining to end-users of electrical energy. (EG)

  11. Effect of different implementations of the same ice history in GIA modeling

    Science.gov (United States)

    Barletta, V. R.; Bordoni, A.

    2013-11-01

    This study shows the effect of changing the way ice histories are implemented in Glacial Isostatic Adjustment (GIA) codes to solve the sea level equation. The ice history models are being constantly improved and are provided in different formats. The overall algorithmic design of the sea-level equation solver often forces the ice model to be implemented in a representation that differs from the one originally provided. We show that using different representations of the same ice model produces significant differences and artificial contributions to the sea level estimates, both at global and at regional scale. This study is not a speculative exercise. The ICE-5G model adopted in this work is widely used in present-day sea-level analysis, but discrepancies between the results obtained by different groups for the same ice models still exist, and it was the effort to set a common reference for the sea-level community that inspired this work. Understanding this issue is important in order to reduce the artefacts introduced by an unsuitable ice model representation. This is especially important when developing new GIA models, since neglecting this problem can easily lead to a wrong alignment of the ice and sea-level histories, particularly close to the deglaciation areas, like Antarctica.

  12. An improved model for reduced-order physiological fluid flows

    CERN Document Server

    San, Omer; 10.1142/S0219519411004666

    2012-01-01

    An improved one-dimensional mathematical model based on Pulsed Flow Equations (PFE) is derived by integrating the axial component of the momentum equation over the transient Womersley velocity profile, providing a dynamic momentum equation whose coefficients are smoothly varying functions of the spatial variable. The resulting momentum equation along with the continuity equation and pressure-area relation form our reduced-order model for physiological fluid flows in one dimension, and are aimed at providing accurate and fast-to-compute global models for physiological systems represented as networks of quasi one-dimensional fluid flows. The consequent nonlinear coupled system of equations is solved by the Lax-Wendroff scheme and is then applied to an open model arterial network of the human vascular system containing the largest fifty-five arteries. The proposed model with functional coefficients is compared with current classical one-dimensional theories which assume steady state Hagen-Poiseuille velocity pro...
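
    A generic sketch of the Lax-Wendroff scheme on a scalar 1D conservation law u_t + f(u)_x = 0 (illustrative only; the paper applies the same two-step idea to the coupled area/flow-rate system of the arterial network).

        import numpy as np

        def lax_wendroff_step(u, dt, dx, f):
            """One Richtmyer two-step Lax-Wendroff update for u_t + f(u)_x = 0
            on a periodic 1D grid (scalar version, for illustration only)."""
            u_r = np.roll(u, -1)                               # u_{i+1}
            # predictor: interface values at the half time step
            u_half = 0.5 * (u + u_r) - 0.5 * dt / dx * (f(u_r) - f(u))
            flux = f(u_half)                                   # F_{i+1/2}
            # corrector: conservative update with interface fluxes
            return u - dt / dx * (flux - np.roll(flux, 1))

        # Hypothetical test: linear advection f(u) = a*u of a Gaussian pulse.
        a = 1.0
        f = lambda u: a * u
        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        dx = x[1] - x[0]
        dt = 0.8 * dx / a                                      # CFL number 0.8
        u = np.exp(-200.0 * (x - 0.3) ** 2)
        for _ in range(100):
            u = lax_wendroff_step(u, dt, dx, f)
        print(u.max())                                         # pulse amplitude stays near 1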

  13. Empirical Reduced-Order Modeling for Boundary Feedback Flow Control

    Directory of Open Access Journals (Sweden)

    Seddik M. Djouadi

    2008-01-01

    Full Text Available This paper deals with the practical and theoretical implications of model reduction for aerodynamic flow-based control problems. Various aspects of model reduction are discussed that apply to partial differential equation (PDE) based models in general. Specifically, the proper orthogonal decomposition (POD) of a high dimension system as well as frequency domain identification methods are discussed for initial model construction. Projections on the POD basis give a nonlinear Galerkin model. Then, a model reduction method based on empirical balanced truncation is developed and applied to the Galerkin model. The rationale for doing so is that linear subspace approximations to exact submanifolds associated with nonlinear controllability and observability require only standard matrix manipulations utilizing simulation/experimental data. The proposed method uses a chirp signal as input to produce the output in the eigensystem realization algorithm (ERA). This method estimates the system's Markov parameters that accurately reproduce the output. Balanced truncation is used to show that model reduction is still effective on ERA-produced approximated systems. The method is applied to a prototype convective flow on obstacle geometry. An H∞ feedback flow controller is designed based on the reduced model to achieve tracking and then applied to the full-order model with excellent performance.
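
    A bare-bones sketch of the eigensystem realization algorithm step mentioned above (single-input single-output, assumed inputs, not the paper's code): Markov parameters obtained from the identification experiment are stacked into a Hankel matrix whose SVD yields a reduced (A, B, C) realization, which can then be balanced and truncated further.

        import numpy as np

        def era(markov, r, p=20, q=20):
            """Eigensystem Realization Algorithm (SISO sketch).
            markov: impulse-response samples h[1], h[2], ... (Markov parameters)
            r     : reduced model order
            p, q  : Hankel block dimensions (p + q must not exceed len(markov))."""
            H0 = np.array([[markov[i + j] for j in range(q)] for i in range(p)])
            H1 = np.array([[markov[i + j + 1] for j in range(q)] for i in range(p)])
            U, s, Vt = np.linalg.svd(H0, full_matrices=False)
            Ur, Vr = U[:, :r], Vt[:r, :].T
            Sr = np.diag(np.sqrt(s[:r]))
            Si = np.diag(1.0 / np.sqrt(s[:r]))
            A = Si @ Ur.T @ H1 @ Vr @ Si
            B = (Sr @ Vr.T)[:, :1]
            C = (Ur @ Sr)[:1, :]
            return A, B, C

        # Hypothetical Markov parameters of a damped oscillatory impulse response.
        t = np.arange(1, 60)
        markov = np.exp(-0.05 * t) * np.sin(0.4 * t)
        A, B, C = era(markov, r=2)
        print(np.abs(np.linalg.eigvals(A)))   # discrete-time poles inside the unit circle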

  14. Parallel processing optimization strategy based on MapReduce model in cloud storage environment

    Science.gov (United States)

    Cui, Jianming; Liu, Jiayi; Li, Qiuyan

    2017-05-01

    Currently, many cloud storage workflows package a large number of documents only after all packets have been received. In this stored procedure, from the local transmitter to the server, packing and unpacking consume a lot of time and the transmission efficiency is low. A new parallel processing algorithm is proposed to optimize the transmission mode: following the MapReduce working model, MPI technology is used to execute the Mapper and Reducer mechanisms in parallel. Simulation experiments on the Hadoop cloud computing platform show that this algorithm can not only accelerate the file transfer rate but also shorten the waiting time of the Reducer mechanism. It breaks through traditional sequential transmission constraints and reduces the storage coupling to improve the transmission efficiency.
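
    The record above is light on detail, but the Mapper/Reducer split it relies on can be illustrated with standard-library Python (a stand-in for the MPI-based implementation described, not that code): chunks of a file are mapped to partial results in parallel worker processes and then reduced into a single result.

        from multiprocessing import Pool
        from functools import reduce
        from collections import Counter

        def mapper(chunk):
            """Map phase: turn one chunk of text into partial word counts."""
            return Counter(chunk.split())

        def reducer(acc, partial):
            """Reduce phase: merge partial counts into the running total."""
            acc.update(partial)
            return acc

        if __name__ == "__main__":
            # Hypothetical file already split into chunks for parallel processing.
            chunks = ["to be or not to be", "be quick to map", "reduce then store"]
            with Pool(processes=3) as pool:
                partials = pool.map(mapper, chunks)        # parallel Mapper mechanism
            totals = reduce(reducer, partials, Counter())  # sequential Reducer mechanism
            print(totals.most_common(3))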

  15. Using the Neumann series expansion for assembling Reduced Order Models

    Directory of Open Access Journals (Sweden)

    Nasisi S.

    2014-06-01

    Full Text Available An efficient method is presented to remove the limitation in selecting the master degrees of freedom in a finite element model by means of model order reduction. A major difficulty of the Guyan reduction and the IRS method (Improved Reduced System) is the need to appropriately select the master and slave degrees of freedom for the rate of convergence to be high. This study approaches that limitation by using a particular arrangement of the rows and columns of the assembled matrices K and M and employing a combination of the IRS method with a variant of the analytical selection of masters presented in (Shah, V. N., Raymund, M., Analytical selection of masters for the reduced eigenvalue problem, International Journal for Numerical Methods in Engineering 18(1), 1982) for the case where the first lowest frequencies are sought. One of the most significant characteristics of the approach is the use of the Neumann series expansion, which motivates this particular arrangement of the matrices' entries. The method shows a higher rate of convergence when compared to the standard IRS and very accurate results for the lowest reduced frequencies. To show the effectiveness of the proposed method, two test structures and the human vocal tract model employed in (Vampola, T., Horacek, J., Svec, J. G., FE modeling of human vocal tract acoustics. Part I: Production of Czech vowels, Acta Acustica United with Acustica 94(3), 2008) are presented.
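
    A compact numpy sketch of the classical Guyan/static condensation step that both IRS and the proposed method build on (hypothetical 4-DOF system, not the paper's code): the stiffness and mass matrices are partitioned into master (m) and slave (s) degrees of freedom and condensed with the static transformation T.

        import numpy as np

        def guyan_reduce(K, M, masters):
            """Guyan (static) condensation of stiffness K and mass M onto the
            master DOFs; the remaining DOFs are treated as slaves."""
            n = K.shape[0]
            masters = np.asarray(masters)
            slaves = np.setdiff1d(np.arange(n), masters)
            Kss = K[np.ix_(slaves, slaves)]
            Ksm = K[np.ix_(slaves, masters)]
            # static transformation u = T u_m with u_s = -Kss^{-1} Ksm u_m
            T = np.zeros((n, masters.size))
            T[masters, np.arange(masters.size)] = 1.0
            T[np.ix_(slaves, np.arange(masters.size))] = -np.linalg.solve(Kss, Ksm)
            return T.T @ K @ T, T.T @ M @ T

        # Hypothetical 4-DOF spring-mass chain; keep DOFs 0 and 3 as masters.
        K = np.array([[ 2., -1.,  0.,  0.],
                      [-1.,  2., -1.,  0.],
                      [ 0., -1.,  2., -1.],
                      [ 0.,  0., -1.,  2.]])
        M = np.eye(4)
        Kr, Mr = guyan_reduce(K, M, masters=[0, 3])
        w2 = np.sort(np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real)
        print(Kr.shape, w2[0])   # lowest reduced eigenvalue approximates the full model's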

  16. Simple cortical and thalamic neuron models for digital arithmetic circuit implementation

    Directory of Open Access Journals (Sweden)

    Takuya eNanami

    2016-05-01

    Full Text Available Trade-off between reproducibility of neuronal activities and computational efficiency is one of the crucial subjects in computational neuroscience and neuromorphic engineering. A wide variety of neuronal models have been studied from different viewpoints. The digital spiking silicon neuron (DSSN) model is a qualitative model that focuses on efficient implementation by digital arithmetic circuits. We expanded the DSSN model and found appropriate parameter sets with which it reproduces the dynamical behaviors of the ionic-conductance models of four classes of cortical and thalamic neurons. We first developed a 4-variable model by reducing the number of variables in the ionic-conductance models and elucidated its mathematical structures using bifurcation analysis. Then, expanded DSSN models were constructed that reproduce these mathematical structures and capture the characteristic behavior of each neuron class. We confirmed that statistics of the neuronal spike sequences are similar in the DSSN and the ionic-conductance models. Computational cost of the DSSN model is larger than that of the recent sophisticated Integrate-and-Fire-based models, but smaller than the ionic-conductance models. This model is intended to provide another meeting point for the above trade-off that satisfies the demand for large-scale neuronal network simulation with closer-to-biology models.

  17. Implementation of the ATLAS Run 2 event data model

    CERN Document Server

    Buckley, Andrew; Elsing, Markus; Gillberg, Dag Ingemar; Koeneke, Karsten; Krasznahorkay, Attila; Moyse, Edward; Nowak, Marcin; Snyder, Scott; van Gemmeren, Peter

    2015-01-01

    During the 2013-2014 shutdown of the Large Hadron Collider, ATLAS switched to a new event data model for analysis, called the xAOD. A key feature of this model is the separation of the object data from the objects themselves (the 'auxiliary store'). Rather than being stored as member variables of the analysis classes, all object data are stored separately, as vectors of simple values. Thus, the data are stored in a 'structure of arrays' format, while the user can still access it as an 'array of structures'. This organization allows for on-demand partial reading of objects, the selective removal of object properties, and the addition of arbitrary user-defined properties in a uniform manner. It also improves performance by increasing the locality of memory references in typical analysis code. The resulting data structures can be written to ROOT files with data properties represented as simple ROOT tree branches. This talk will focus on the design and implementation of the auxiliary store and its interaction with RO...
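
    The 'structure of arrays' idea can be illustrated in a few lines (a language-agnostic sketch with made-up property names, not ATLAS code): object data live in parallel vectors inside an auxiliary store, while a thin object view still exposes them as per-object attributes.

        import numpy as np

        class AuxStore:
            """Auxiliary store: one flat array per property ('structure of arrays').
            New user-defined properties can be attached without touching the objects."""
            def __init__(self, n):
                self.data = {"pt": np.zeros(n), "eta": np.zeros(n), "phi": np.zeros(n)}
            def add_property(self, name, values):
                self.data[name] = np.asarray(values)

        class ParticleView:
            """Thin 'array of structures' view: just an index into the auxiliary store."""
            def __init__(self, store, i):
                self._store, self._i = store, i
            def __getattr__(self, name):
                return self._store.data[name][self._i]

        store = AuxStore(3)
        store.data["pt"][:] = [25.0, 40.5, 13.2]
        store.add_property("isTagged", [True, False, True])   # arbitrary user property
        p = ParticleView(store, 1)
        print(p.pt, p.isTagged)                                # accessed like object members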

  18. Implementation of a vibrationally linked chemical reaction model for DSMC

    Science.gov (United States)

    Carlson, A. B.; Bird, Graeme A.

    1994-01-01

    A new procedure closely linking dissociation and exchange reactions in air to the vibrational levels of the diatomic molecules has been implemented in both one- and two-dimensional versions of Direct Simulation Monte Carlo (DSMC) programs. The previous modeling of chemical reactions with DSMC was based on the continuum reaction rates for the various possible reactions. The new method is more closely related to the actual physics of dissociation and is more appropriate to the particle nature of DSMC. Two cases are presented: the relaxation to equilibrium of undissociated air initially at 10,000 K, and the axisymmetric calculation of shuttle forebody heating during reentry at 92.35 km and 7500 m/s. Although reaction rates are not used in determining the dissociations or exchange reactions, the new method produces rates which agree astonishingly well with the published rates derived from experiment. The results for gas properties and surface properties also agree well with the results produced by earlier DSMC models, equilibrium air calculations, and experiment.

  19. Implementing oral care to reduce aspiration pneumonia amongst patients with dysphagia in a South African setting

    Directory of Open Access Journals (Sweden)

    Jaishika Seedat

    2016-02-01

    Full Text Available Oral care is a crucial routine for patients with dysphagia that, when completed routinely, can prevent the development of aspiration pneumonia. There is no standardised protocol for oral care within government hospitals in South Africa. This study aimed to investigate the outcome of an oral care protocol. Participants were patients with oropharyngeal dysphagia, with either stroke or traumatic brain injury as the underlying medical pathology, and nurses. All participants were recruited from one tertiary level government hospital in Gauteng, South Africa. 139 nurses participated in the study and received training on the oral care protocol. There were two groups of participants with oropharyngeal dysphagia. Group one (study group, n = 23) was recruited by consecutive sampling, received regular oral care and were not restricted from drinking water; however, all other liquids were restricted. Group two (comparison group, n = 23) was recruited via a retrospective record review, received inconsistent oral care and were placed on thickened liquids or liquid restricted diets. Results showed that a regimen of regular oral care and free water provision when combined with dysphagia intervention did prevent aspiration pneumonia in patients with oropharyngeal dysphagia. The article highlights two key findings: that regular and routine oral care is manageable within an acute government hospital context and a strict routine of oral care can reduce aspiration pneumonia in patients with oropharyngeal dysphagia. An implication from these findings is confirmation that teamwork in acute care settings in developing contexts must be prioritised to improve dysphagia management and patient prognosis.

  20. COPE Method Implementation Program to Reduce Communication Apprehension Level in Full Day Yunior High School Students

    Science.gov (United States)

    Prasetyo, A. R.

    2017-02-01

    This study aimed to explore the effect of the COPE method in reducing the communication apprehension level of early-adolescent students attending a Full Day Junior High School. Full Day Junior High School students, especially in the Surabaya coastal area, face greater demands to develop communication skills, such as group discussions, presentations and extracurricular activities. Higher demands to develop such aspects of communication may cause them to experience communication apprehension. The subjects were 31 Full Day School students. The research used an experimental design; the experimental method was a non-randomized pretest-posttest control group design, and purposive sampling was used. The COPE method is a process consisting of four main stages in which people try to deal with and control stressful situations resulting from the problem being faced by making cognitive and behavioral changes. The four main stages of the COPE method are Calming the nervous system, Originating an imaginative plan, Persisting in the face of obstacles and failure, and Evaluating and adjusting the plan. Results of quantitative analysis based on the Mann-Whitney U test show a significant effect of the COPE method in decreasing communication apprehension levels (0.000 < 0.005).

  1. Impact of AMS-02 Measurements on Reducing GCR Model Uncertainties

    Science.gov (United States)

    Slaba, T. C.; O'Neill, P. M.; Golge, S.; Norbury, J. W.

    2015-01-01

    For vehicle design, shield optimization, mission planning, and astronaut risk assessment, the exposure from galactic cosmic rays (GCR) poses a significant and complex problem both in low Earth orbit and in deep space. To address this problem, various computational tools have been developed to quantify the exposure and risk in a wide range of scenarios. Generally, the tool used to describe the ambient GCR environment provides the input into subsequent computational tools and is therefore a critical component of end-to-end procedures. Over the past few years, several researchers have independently and very carefully compared some of the widely used GCR models to more rigorously characterize model differences and quantify uncertainties. All of the GCR models studied rely heavily on calibrating to available near-Earth measurements of GCR particle energy spectra, typically over restricted energy regions and short time periods. In this work, we first review recent sensitivity studies quantifying the ions and energies in the ambient GCR environment of greatest importance to exposure quantities behind shielding. Currently available measurements used to calibrate and validate GCR models are also summarized within this context. It is shown that the AMS-II measurements will fill a critically important gap in the measurement database. The emergence of AMS-II measurements also provides a unique opportunity to validate existing models against measurements that were not used to calibrate free parameters in the empirical descriptions. Discussion is given regarding rigorous approaches to implement the independent validation efforts, followed by recalibration of empirical parameters.

  2. Reducing component estimation for varying coefficient models with longitudinal data

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Varying-coefficient models with longitudinal observations are very useful in epidemiology and some other practical fields. In this paper, a reducing component procedure is proposed for estimating the unknown functions and their derivatives in very general models, in which the unknown coefficient functions admit different or the same degrees of smoothness and the covariates can be time-dependent. The asymptotic properties of the estimators, such as consistency, rate of convergence and asymptotic distribution, are derived. The asymptotic results show that the asymptotic variance of the reducing component estimators is smaller than that of the existing estimators when the coefficient functions admit different degrees of smoothness. Finite sample properties of our procedures are studied through Monte Carlo simulations.

  3. Lattice model of reduced jamming by a barrier

    Science.gov (United States)

    Cirillo, Emilio N. M.; Krehel, Oleh; Muntean, Adrian; van Santen, Rutger

    2016-10-01

    We study an asymmetric simple exclusion process in a strip in the presence of a solid impenetrable barrier. We focus on the effect of the barrier on the residence time of the particles, namely, the typical time needed by the particles to cross the whole strip. We explore the conditions for reduced jamming when varying the environment (different drifts, reservoir densities, horizontal diffusion walks, etc.). In particular, we discover an interesting nonmonotonic behavior of the residence time as a function of the barrier length. Besides recovering by means of both the lattice dynamics and the mean-field model well-known aspects like the faster-is-slower effect and the intermittence of the flow, we propose also a birth-and-death process and a reduced one-dimensional (1D) model with variable barrier permeability to capture the behavior of the residence time with respect to the parameters.

  4. Implementation and implications of macrophyte reconfiguration in hydraulic river modeling

    Science.gov (United States)

    Verschoren, Veerle; Schoelynck, Jonas; Buis, Kerst; Meire, Dieter; Bal, Kris; Meire, Patrick; Temmerman, Stijn

    2014-05-01

    In lowland rivers, abundant macrophyte growth can often be observed. The aquatic vegetation has an impact on the flow by creating friction which results in increased water levels and decreased flow velocities. At the same time submerged macrophytes are susceptible to hydrodynamic forces of the water. Their morphology is therefore often flexible and streamlined so that it enables reconfiguration (i.e. bending of macrophytes with water flow) and decreases potential damage at high flow velocities. Knowledge of these mutual interactions is crucial in order to model water flow in vegetated rivers. A correct estimation of flow velocity and water height is indispensable for the calculation of hydraulic, ecological and geomorphological parameters. The total resistance to water flow in a river can be described by a Manning coefficient. This value is influenced by river characteristics as well as by the presence of macrophytes. In this study a simple method is developed to quantify the resistance created by macrophytes after reconfiguration of their canopy. In order to achieve this we derive model formulations and plant parameters for three different macrophyte species and compare model simulation with measured flow velocity data for two case studies. Furthermore, the effect of macrophyte reconfiguration is investigated by modeling the same case studies with and without the implementation of macrophyte reconfiguration. It was found that the local resistance created by the vegetation was overestimated when reconfiguration was not considered. This resulted in an overestimation of stream velocity adjacent to the vegetation and an underestimation of the stream velocity within and behind the vegetation. Another effect was a higher water level gradient and consequently a higher Manning coefficient in the scenario without reconfiguration compared to the scenario with reconfiguration. Reconfiguration had also an influence on ecological and geomorphological parameters. It was found

  5. Anchanling reduces pathology in a lactacystin- induced Parkinson's disease model

    Institute of Scientific and Technical Information of China (English)

    Yinghong Li; Zhengzhi Wu; Xiaowei Gao; Qingwei Zhu; Yu Jin; Anmin Wu; Andrew C. J. Huang

    2012-01-01

    A rat model of Parkinson's disease was induced by injecting lactacystin stereotaxically into the left mesencephalic ventral tegmental area and substantia nigra pars compacta. After rats were intragastrically perfused with Anchanling, a Chinese medicine, mainly composed of magnolol, for 5 weeks, when compared with Parkinson's disease model rats, tyrosine hydroxylase expression was increased, α-synuclein and ubiquitin expression was decreased, substantia nigra cell apoptosis was reduced, and apomorphine-induced rotational behavior was improved. Results suggested that Anchanling can ameliorate Parkinson's disease pathology possibly by enhancing degradation activity of the ubiquitin-proteasome system.

  6. Regularization method for calibrated POD reduced-order models

    Directory of Open Access Journals (Sweden)

    El Majd Badr Abou

    2014-01-01

    Full Text Available In this work we present a regularization method to improve the accuracy of reduced-order models based on Proper Orthogonal Decomposition. The benchmark configuration chosen corresponds to a case of relatively simple dynamics: a two-dimensional flow around a cylinder at a Reynolds number of 200. Finally, we show for this flow configuration that the procedure is efficient in terms of error reduction.

  7. Predictive modeling and reducing cyclic variability in autoignition engines

    Energy Technology Data Exchange (ETDEWEB)

    Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob

    2016-08-30

    Methods and systems are provided for controlling a vehicle engine to reduce cycle-to-cycle combustion variation. A predictive model is applied to predict cycle-to-cycle combustion behavior of an engine based on observed engine performance variables. Conditions are identified, based on the predicted cycle-to-cycle combustion behavior, that indicate high cycle-to-cycle combustion variation. Corrective measures are then applied to prevent the predicted high cycle-to-cycle combustion variation.

  8. CloudNMF: A MapReduce Implementation of Nonnegative Matrix Factorization for Large-scale Biological Datasets

    Directory of Open Access Journals (Sweden)

    Ruiqi Liao

    2014-02-01

    Full Text Available In the past decades, advances in high-throughput technologies have led to the generation of huge amounts of biological data that require analysis and interpretation. Recently, nonnegative matrix factorization (NMF) has been introduced as an efficient way to reduce the complexity of data as well as to interpret them, and has been applied to various fields of biological research. In this paper, we present CloudNMF, a distributed open-source implementation of NMF on a MapReduce framework. Experimental evaluation demonstrated that CloudNMF is scalable and can be used to deal with huge amounts of data, which may enable various kinds of a high-throughput biological data analysis in the cloud. CloudNMF is freely accessible at http://admis.fudan.edu.cn/projects/CloudNMF.html.

  9. CloudNMF: a MapReduce implementation of nonnegative matrix factorization for large-scale biological datasets.

    Science.gov (United States)

    Liao, Ruiqi; Zhang, Yifan; Guan, Jihong; Zhou, Shuigeng

    2014-02-01

    In the past decades, advances in high-throughput technologies have led to the generation of huge amounts of biological data that require analysis and interpretation. Recently, nonnegative matrix factorization (NMF) has been introduced as an efficient way to reduce the complexity of data as well as to interpret them, and has been applied to various fields of biological research. In this paper, we present CloudNMF, a distributed open-source implementation of NMF on a MapReduce framework. Experimental evaluation demonstrated that CloudNMF is scalable and can be used to deal with huge amounts of data, which may enable various kinds of a high-throughput biological data analysis in the cloud. CloudNMF is freely accessible at http://admis.fudan.edu.cn/projects/CloudNMF.html.
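
    A single-machine sketch of the NMF core that CloudNMF distributes (the classical Lee-Seung multiplicative updates with made-up data, not the MapReduce implementation itself): the data matrix V is factored as V ≈ W H with all entries nonnegative.

        import numpy as np

        def nmf(V, rank, n_iter=200, eps=1e-9):
            """Lee-Seung multiplicative updates minimising ||V - W H||_F^2.
            In a MapReduce setting, the large matrix products below are what
            get split into jobs over blocks of the data matrix V."""
            rng = np.random.default_rng(0)
            n, m = V.shape
            W = rng.random((n, rank))
            H = rng.random((rank, m))
            for _ in range(n_iter):
                H *= (W.T @ V) / (W.T @ W @ H + eps)
                W *= (V @ H.T) / (W @ H @ H.T + eps)
            return W, H

        # Hypothetical expression-like matrix with two underlying patterns.
        rng = np.random.default_rng(3)
        V = rng.random((100, 2)) @ rng.random((2, 40))
        W, H = nmf(V, rank=2)
        print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # small relative error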

  10. IMPLEMENTATION MODEL OF MOTOR TRACTION FORCE OF MAGLEV TRAIN

    Directory of Open Access Journals (Sweden)

    V. O. Polyakov

    2016-08-01

    Full Text Available Purpose. Traction force implementation (TFI) by the motor of a magnetic levitation train (MLT) occurs in the process of electric-to-kinetic energy transformation through the interaction of the inductor and armature magnetic fields. Accordingly, the aim of this study is to obtain a correct description of such energy transformation. Methodology. At the present stage, mathematical and, in particular, computer simulation is the main and most universal tool for the analysis and synthesis of processes and systems. At the same time, the radical advantages of this tool make the precision with which a particular research methodology is selected even more important. This is especially important for such a large and complex system as the MLT. Therefore special attention in this work is given to the rationale for choosing the distinctive features of the research paradigm. Findings. The analysis of existing versions of TFI process models indicates that each of them has both advantages and disadvantages. Therefore, one of the main results of this study was the creation of a mathematical model of the process that preserves the advantages of the previous versions but is free from their disadvantages. The work provides a rationale for applying, for the purpose of studying the train motor's TFI, an integrative holistic paradigm that assimilates the advantages of electric circuit theory and magnetic field theory. Originality. The priority of creating such a paradigm and the corresponding version of the TFI model constitutes the originality of the research. Practical value. The main practical value of this research lies in the opportunity, if its results are used, to significantly increase the efficiency of MLT dynamics studies, on the condition that their generalized costs do not rise.

  11. Possibilities of Land Administration Domain Model (LADM) Implementation in Nigeria

    Science.gov (United States)

    Babalola, S. O.; Rahman, A. Abdul; Choon, L. T.; Van Oosterom, P. J. M.

    2015-10-01

    LADM covers essential information associated with components of land administration and management, including those over water and elements above and below the surface of the earth. The LADM standard provides an abstract conceptual model with three packages and one sub-package. LADM defines terminology for a land administration system that allows a shared explanation of different formal, customary or informal tenures. The standard provides the basis for national and regional profiles and enables the combination of land management information from different sources in a coherent manner. Given this, this paper starts with a description of land and land administration in Nigeria. The pre-colonial, colonial and post-colonial eras, together with the organizational structure, are discussed. This discussion is important to present an understanding of the background to any improvement needed for LADM implementation in Nigeria. LADM (ISO 19152) and its packages are then discussed, and the different aspects of each package and its classes are compared with the Nigerian land administration and cadastral system. In this comparison, it was found that the concepts in Nigerian land administration are similar to the LADM packages, although the terminology may not be the same in all cases. Having studied the conceptualization and application of LADM as a model that holds essential information associated with components of land administration, including those on land, over water, and elements above and below the surface of the earth, we conclude that the standard is suitable for the country. The model can therefore be adopted into the Nigerian land administration system by mapping in some of the concepts of LADM.

  12. POSSIBILITIES OF LAND ADMINISTRATION DOMAIN MODEL (LADM) IMPLEMENTATION IN NIGERIA

    Directory of Open Access Journals (Sweden)

    S. O. Babalola

    2015-10-01

    Full Text Available LADM covers essential information associated with components of land administration and management, including those over water and elements above and below the surface of the earth. The LADM standard provides an abstract conceptual model with three packages and one sub-package. LADM defines terminology for a land administration system that allows a shared explanation of different formal, customary or informal tenures. The standard provides the basis for national and regional profiles and enables the combination of land management information from different sources in a coherent manner. Given this, this paper starts with a description of land and land administration in Nigeria. The pre-colonial, colonial and post-colonial eras, together with the organizational structure, are discussed. This discussion is important to present an understanding of the background to any improvement needed for LADM implementation in Nigeria. LADM (ISO 19152) and its packages are then discussed, and the different aspects of each package and its classes are compared with the Nigerian land administration and cadastral system. In this comparison, it was found that the concepts in Nigerian land administration are similar to the LADM packages, although the terminology may not be the same in all cases. Having studied the conceptualization and application of LADM as a model that holds essential information associated with components of land administration, including those on land, over water, and elements above and below the surface of the earth, we conclude that the standard is suitable for the country. The model can therefore be adopted into the Nigerian land administration system by mapping in some of the concepts of LADM.

  13. Systematic development of reduced reaction mechanisms for dynamic modeling

    Science.gov (United States)

    Frenklach, M.; Kailasanath, K.; Oran, E. S.

    1986-01-01

    A method for systematically developing a reduced chemical reaction mechanism for dynamic modeling of chemically reactive flows is presented. The method is based on the postulate that if a reduced reaction mechanism faithfully describes the time evolution of both thermal and chain reaction processes characteristic of a more complete mechanism, then the reduced mechanism will describe the chemical processes in a chemically reacting flow with approximately the same degree of accuracy. Here this postulate is tested by producing a series of mechanisms of reduced accuracy, which are derived from a full detailed mechanism for methane-oxygen combustion. These mechanisms were then tested in a series of reactive flow calculations in which a large-amplitude sinusoidal perturbation is applied to a system that is initially quiescent and whose temperature is high enough to start ignition processes. Comparison of the results for systems with and without convective flow show that this approach produces reduced mechanisms that are useful for calculations of explosions and detonations. Extensions and applicability to flames are discussed.

  14. Modeling and Implementation of Reliable Ternary Arithmetic and Logic Unit Design Using Vhdl

    Directory of Open Access Journals (Sweden)

    Meruva Kumar Raja

    2014-06-01

    Full Text Available Multivalued logic is a reliable method for defining, analyzing, testing, and implementing basic combinational circuitry with a VHDL simulator. It offers better utilization of transmission channels, because more information is carried per signal at high speed, and it gives more efficient performance. One of the main benefits of MVL (ternary) logic is that it reduces the number of required computation steps and brings simplicity and energy efficiency to digital logic design. In this paper, a reliable method is presented for implementing basic combinational, sequential, and TALU (Ternary Arithmetic and Logic Unit) circuitry with a minimum number of ternary switching circuits (multiplexers). The work shows the potential of VHDL modelling and simulation applied to ternary switching circuits to verify their functionality and timing specifications. The intention is to show how the proposed simulator can be used to simulate MVL circuits and to evaluate system performance.
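    As a purely illustrative companion to the record above, the following Python sketch (not the paper's VHDL design) shows the two building blocks it names for unbalanced ternary logic with digits {0, 1, 2}: a 3:1 multiplexer and a ternary half adder. The function names and digit encoding are assumptions made here.

      # Illustrative sketch only: unbalanced ternary (radix-3) logic with digits {0, 1, 2}.
      def tmux(select, in0, in1, in2):
          """3:1 ternary multiplexer: route one of three inputs by a ternary select digit."""
          return (in0, in1, in2)[select]

      def ternary_half_adder(a, b):
          """Radix-3 half adder: return (sum digit, carry) for two ternary digits."""
          total = a + b
          return total % 3, total // 3

      if __name__ == "__main__":
          for a in range(3):
              for b in range(3):
                  s, c = ternary_half_adder(a, b)
                  print(f"{a} + {b} -> sum={s} carry={c}")
          # The same sum digit realised with the multiplexer, selecting by 'a' for fixed b:
          b = 2
          print([tmux(a, (0 + b) % 3, (1 + b) % 3, (2 + b) % 3) for a in range(3)])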

  15. Comprehending Consumption: The Behavioral Basis and Implementation of Driver Feedback for Reducing Vehicle Energy Use

    Science.gov (United States)

    Stillwater, Tai

    A large body of evidence suggests that drivers who receive real-time fuel economy information can increase their vehicle fuel economy by 5%, a process commonly known as ecodriving. However, few studies have directly addressed the human side of the feedback, that is, why drivers would (or would not) be motivated to change their behavior and how to design feedback devices to maximize the motivation to ecodrive. This dissertation approaches the question using a mixed qualitative and quantitative approach to explore driver responses and psychology as well as to quantify the process of behavior change. The first chapter discusses the use of mile-per-gallon fuel economy as a metric for driver feedback and finds that an alternative energy economy metric is superior for real-time feedback. The second chapter reviews behavioral theories and proposes a number of practical solutions for the ecodriving context. In the third chapter the theory of planned behavior is tested against driver responses to an existing feedback system available in the 2008 model Toyota Prius. The fourth chapter presents a novel feedback design based on behavioral theories and drivers' responses to the feedback. Finally, chapter five presents the quantitative results of a natural-driving study of fuel economy feedback. The dissertation findings suggest that behavior theories such as the Theory of Planned Behavior can provide important improvements to existing feedback designs. In addition, a careful analysis of vehicle energy flows indicates that the mile-per-gallon metric is deeply flawed as a real-time feedback metric, and should be replaced. Chapters 2 and 3 conclude that behavior theories have both a theoretical and highly practical role in feedback design, although the driving context requires just as much care in the application. Chapters 4 and 5 find that a theory-inspired interface provides drivers with engaging and motivating feedback, and that integrating personal goal into the feedback is
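    The claim above that miles-per-gallon is a flawed real-time feedback metric can be made concrete with a small worked example (illustrative only, not taken from the dissertation): fuel burned over a fixed distance scales with 1/MPG, so equal MPG gains at the low end of the scale save far more fuel than the same gains at the high end, whereas a consumption metric such as gallons per 100 miles is linear in actual fuel use.

      # Hypothetical numbers chosen only to illustrate the 1/MPG effect.
      def gallons_used(miles, mpg):
          return miles / mpg

      trip = 100.0  # miles
      low_gain = gallons_used(trip, 10) - gallons_used(trip, 12)    # 10 -> 12 MPG
      high_gain = gallons_used(trip, 40) - gallons_used(trip, 42)   # 40 -> 42 MPG
      print(f"10 -> 12 MPG saves {low_gain:.2f} gal per 100 mi")    # about 1.67 gal
      print(f"40 -> 42 MPG saves {high_gain:.2f} gal per 100 mi")   # about 0.12 gal

      # Expressed as gallons per 100 miles, feedback tracks fuel use directly:
      for mpg in (10, 12, 40, 42):
          print(f"{mpg} MPG = {100 / mpg:.2f} gal/100 mi")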

  16. Goal striving strategies and effort mobilization: When implementation intentions reduce effort-related cardiac activity during task performance.

    Science.gov (United States)

    Freydefont, Laure; Gollwitzer, Peter M; Oettingen, Gabriele

    2016-09-01

    Two experiments investigate the influence of goal and implementation intentions on effort mobilization during task performance. Although numerous studies have demonstrated the beneficial effects of setting goals and making plans on performance, the effects of goals and plans on effort-related cardiac activity and especially the cardiac preejection period (PEP) during goal striving have not yet been addressed. According to the Motivational Intensity Theory, participants should increase effort mobilization proportionally to task difficulty as long as success is possible and justified. Forming goals and making plans should allow for reduced effort mobilization when participants perform an easy task. However, when the task is difficult, goals and plans should differ in their effect on effort mobilization. Participants who set goals should disengage, whereas participants who made if-then plans should stay in the field showing high effort mobilization during task performance. As expected, using an easy task in Experiment 1, we observed a lower cardiac PEP in both the implementation intention and the goal intention condition than in the control condition. In Experiment 2, we varied task difficulty and demonstrated that while participants with a mere goal intention disengaged from difficult tasks, participants with an implementation intention increased effort mobilization proportionally with task difficulty. These findings demonstrate the influence of goal striving strategies (i.e., mere goals vs. if-then plans) on effort mobilization during task performance.

  17. Reduced Complexity Modeling (RCM): toward more use of less

    Science.gov (United States)

    Paola, Chris; Voller, Vaughan

    2014-05-01

    Although not exact, there is a general correspondence between reductionism and detailed, high-fidelity models, while 'synthesism' is often associated with reduced-complexity modeling. There is no question that high-fidelity reduction-based computational models are extremely useful in simulating the behaviour of complex natural systems. In skilled hands they are also a source of insight and understanding. We focus here on the case for the other side (reduced-complexity models), not because we think they are 'better' but because their value is more subtle, and their natural constituency less clear. What kinds of problems and systems lend themselves to the reduced-complexity approach? RCM is predicated on the idea that the mechanism of the system or phenomenon in question is, for whatever reason, insensitive to the full details of the underlying physics. There are multiple ways in which this can happen. B.T. Werner argued for the importance of process hierarchies in which processes at larger scales depend on only a small subset of everything going on at smaller scales. Clear scale breaks would seem like a way to test systems for this property but to our knowledge have not been used in this way. We argue that scale-independent physics, as for example exhibited by natural fractals, is another. We also note that the same basic criterion - independence of the process in question from details of the underlying physics - underpins 'unreasonably effective' laboratory experiments. There is thus a link between suitability for experimentation at reduced scale and suitability for RCM. Examples from RCM approaches to erosional landscapes, braided rivers, and deltas illustrate these ideas, and suggest that they are insufficient. There is something of a 'wild west' nature to RCM that puts some researchers off by suggesting a departure from traditional methods that have served science well for centuries. We offer two thoughts: first, that in the end the measure of a model is its

  18. Multipayer patient-centered medical home implementation guided by the chronic care model.

    Science.gov (United States)

    Gabbay, Robert A; Bailit, Michael H; Mauger, David T; Wagner, Edward H; Siminerio, Linda

    2011-06-01

    A unique statewide multipayer initiative in Pennsylvania was undertaken to implement the Patient-Centered Medical Home (PCMH) guided by the Chronic Care Model (CCM) with diabetes as an initial target disease. This project represents the first broad-scale CCM implementation with payment reform across a diverse range of practice organizations and one of the largest PCMH multipayer initiatives. Practices implemented the CCM and PCMH through regional Breakthrough Series learning collaboratives, supported by Improving Performance in Practice (IPIP) practice coaches, with required monthly quality reporting enhanced by multipayer infrastructure payments. Some 105 practices, representing 382 primary care providers, were engaged in the four regional collaboratives. The practices from the Southeast region of Pennsylvania focused on diabetes patients (n = 10,016). During the first intervention year (May 2008-May 2009), all practices achieved at least Level 1 National Committee for Quality Assurance (NCQA) Physician Practice Connections Patient-Centered Medical Home (PPC-PCMH) recognition. There was significant improvement in the percentage of patients who had evidence-based complications screening and who were on therapies to reduce morbidity and mortality (statins, angiotensin-converting enzyme inhibitors). In addition, there were small but statistically significant improvements in key clinical parameters for blood pressure and cholesterol levels, with the greatest absolute improvement in the highest-risk patients. Transforming primary care delivery through implementation of the PCMH and CCM supported by multipayer infrastructure payments holds significant promise to improve diabetes care.

  19. Criteria for evaluating the design of implementation models for integrated coastal management

    CSIR Research Space (South Africa)

    Taljaard, Susan

    2011-09-01

    Full Text Available are unravelled and a theoretically founded set of criteria for evaluating the design of ICM implementation models is provided. First, paradigms in integrated environmental management (IEM) implementation, the broader domain within which ICM practice is nested...

  20. Implementation of Parallelization Contract Mechanism Extension of Map Reduce Framework for the Efficient Execution Time over Geo-Distributed Dataset

    Directory of Open Access Journals (Sweden)

    Ms. Kirtimalini N.

    2014-12-01

    sets by using different techniques. Further, the paper also discloses the implementation of these techniques and compares the results of this method with those of existing systems. Future trends include the use of query optimization techniques to improve query results as well as to reduce the cost of computation. To achieve this, an indexing mechanism is added to the cache system to preserve the query search results.
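    For readers unfamiliar with the programming model being extended, the following minimal in-process Python sketch shows the map, shuffle, and reduce phases on a toy word count; it is an illustration under assumptions made here and does not reproduce the record's parallelization-contract extension, geo-distributed execution, or caching index.

      from collections import defaultdict

      def map_phase(records, map_fn):
          """Apply map_fn to every input record, emitting (key, value) pairs."""
          for record in records:
              yield from map_fn(record)

      def shuffle(pairs):
          """Group intermediate values by key (the 'shuffle' step)."""
          groups = defaultdict(list)
          for key, value in pairs:
              groups[key].append(value)
          return groups

      def reduce_phase(groups, reduce_fn):
          return {key: reduce_fn(key, values) for key, values in groups.items()}

      if __name__ == "__main__":
          docs = ["map reduce on big data", "reduce query cost with an index"]
          counts = reduce_phase(
              shuffle(map_phase(docs, lambda line: ((w, 1) for w in line.split()))),
              lambda _key, values: sum(values),
          )
          print(counts)   # e.g. {'reduce': 2, 'map': 1, ...}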

  1. Predictive models reduce talent development costs in female gymnastics.

    Science.gov (United States)

    Pion, Johan; Hohmann, Andreas; Liu, Tianbiao; Lenoir, Matthieu; Segers, Veerle

    2017-04-01

    This retrospective study focuses on the comparison of different predictive models based on the results of a talent identification test battery for female gymnasts. We studied to what extent these models have the potential to optimise selection procedures and, at the same time, reduce talent development costs in female artistic gymnastics. The dropout rate of 243 female elite gymnasts was investigated, 5 years past talent selection, using linear (discriminant analysis) and non-linear predictive models (Kohonen feature maps and a multilayer perceptron). The coaches classified 51.9% of the participants correctly. Discriminant analysis improved the correct classification rate to 71.6%, while the non-linear technique of Kohonen feature maps reached 73.7%. Application of the multilayer perceptron classified 79.8% of the gymnasts correctly. The combination of different predictive models for talent selection can avoid deselection of high-potential female gymnasts. The selection procedure based on the different statistical analyses results in a 33.3% decrease in cost, because the pool of selected athletes can be reduced to 92 instead of 138 gymnasts (as selected by the coaches). Reducing the costs allows the limited resources to be fully invested in the high-potential athletes.
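    The comparison of linear and non-linear classifiers reported above can be sketched as follows; the snippet uses scikit-learn and synthetic stand-in data (both assumptions made here), not the authors' test-battery results or modelling pipeline.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n = 243                                     # same order of magnitude as the cohort
      X = rng.normal(size=(n, 8))                 # stand-ins for test-battery scores
      y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.8, size=n) > 0.5).astype(int)

      models = {
          "discriminant analysis": LinearDiscriminantAnalysis(),
          "multilayer perceptron": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
      }
      for name, model in models.items():
          acc = cross_val_score(model, X, y, cv=5).mean()   # cross-validated accuracy
          print(f"{name}: mean CV accuracy = {acc:.2f}")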

  2. Design, modelling, implementation, and intelligent fuzzy control of a hovercraft

    Science.gov (United States)

    El-khatib, M. M.; Hussein, W. M.

    2011-05-01

    A hovercraft is an amphibious vehicle that hovers just above the ground or water on an air cushion. The concept of the air cushion vehicle can be traced back to 1719; however, the practical form of the hovercraft dates to 1955. The objective of the paper is to design, simulate, and implement an autonomous model of a small hovercraft equipped with a mine detector that can travel over any terrain. A real-time layered fuzzy navigator for a hovercraft in a dynamic environment is proposed. The system consists of a Takagi-Sugeno-type fuzzy motion planner and a modified proportional-navigation-based fuzzy controller. The system philosophy is inspired by human routing when moving between obstacles based on visual information, including the right and left views, from which the next step towards the goal in free space is chosen. It intelligently combines two behaviours to cope with obstacle avoidance as well as approaching a goal using a proportional navigation path that accounts for hovercraft kinematics. The MATLAB/Simulink software tool is used to design and verify the proposed algorithm.
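    A minimal Takagi-Sugeno-style inference step can be sketched as below; the membership functions, rule consequents, and gains are illustrative assumptions and do not reproduce the paper's planner or its proportional-navigation controller.

      def tri(x, a, b, c):
          """Triangular membership function with feet a, c and peak b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def ts_steering(error):
          """First-order Takagi-Sugeno rules mapping heading error (rad) to a steering command."""
          rules = [
              (tri(error, -3.2, -1.6, 0.0), lambda e: 0.8 * e - 0.2),   # error negative
              (tri(error, -1.6, 0.0, 1.6), lambda e: 0.5 * e),          # error near zero
              (tri(error, 0.0, 1.6, 3.2), lambda e: 0.8 * e + 0.2),     # error positive
          ]
          num = sum(w * f(error) for w, f in rules)
          den = sum(w for w, _ in rules)
          return num / den if den else 0.0

      if __name__ == "__main__":
          for e in (-1.5, -0.5, 0.0, 0.5, 1.5):
              print(f"error={e:+.1f} rad -> command={ts_steering(e):+.3f}")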

  3. A Robust Sound Perception Model Suitable for Neuromorphic Implementation

    Directory of Open Access Journals (Sweden)

    Martin eCoath

    2014-01-01

    Full Text Available We have recently demonstrated the emergence of dynamic feature sensitivity through exposure to formative stimuli in a real-time neuromorphic system implementing a hybrid analogue/digital network of spiking neurons. This network, inspired by models of auditory processing in mammals, includes several mutually connected layers with distance-dependent transmission delays and learning in the form of spike timing dependent plasticity, which effects stimulus-driven changes in the network connectivity. Here we present results that demonstrate that the network is robust to a range of variations in the stimulus pattern, such as are found in naturalistic stimuli and neural responses. This robustness is a property critical to the development of realistic, electronic neuromorphic systems. We analyse the variability of the response of the network to 'noisy' stimuli which allows us to characterize the acuity in information-theoretic terms. This provides an objective basis for the quantitative comparison of networks, their connectivity patterns, and learning strategies, which can inform future design decisions. We also show, using stimuli derived from speech samples, that the principles are robust to other challenges, such as variable presentation rate, that would have to be met by systems deployed in the real world. Finally we demonstrate the potential applicability of the approach to real sounds.

  4. An extended set of Fortran Basic Linear Algebra Subprograms: model implementation and test programs

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J.; Du Croz, J.; Hammarling, S.; Hanson, R.J.

    1987-01-01

    This paper describes a model implementation and test software for the Level 2 Basic Linear Algebra Subprograms (Level 2 BLAS). The Level 2 BLAS are targeted at matrix-vector operations with the aim of providing more efficient, but portable, implementations of algorithms on high-performance computers. The model implementation provides a portable set of Fortran 77 Level 2 BLAS for machines where specialized implementations do not exist or are not required. The test software aims to verify that specialized implementations meet the specification of the Level 2 BLAS and that implementations are correctly installed.
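    The Level 2 operation at the heart of this record, y := alpha*A*x + beta*y (GEMV), can be exercised through SciPy's BLAS bindings as below; this assumes SciPy is available and is not the reference Fortran 77 model implementation described in the paper.

      import numpy as np
      from scipy.linalg import blas

      rng = np.random.default_rng(1)
      A = np.asfortranarray(rng.normal(size=(4, 3)))    # BLAS expects column-major storage
      x = rng.normal(size=3)
      y = rng.normal(size=4)
      alpha, beta = 2.0, 0.5

      y_blas = blas.dgemv(alpha, A, x, beta=beta, y=y)  # Level 2 BLAS matrix-vector update
      y_ref = alpha * A @ x + beta * y                  # plain NumPy reference
      print(np.allclose(y_blas, y_ref))                 # True: both compute alpha*A*x + beta*y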

  5. Expectations and Implementations of the Flipped Classroom Model in Undergraduate Mathematics Courses

    Science.gov (United States)

    Naccarato, Emilie; Karakok, Gulden

    2015-01-01

    The flipped classroom model is being used more frequently in undergraduate mathematics courses. As with any new teaching model, in-depth investigations of both various implementation styles and how the new model improves student learning are needed. Currently, many practitioners have been sharing their implementations of this model. However, there…

  6. Airborne castanea pollen forecasting model for ecological and allergological implementation.

    Science.gov (United States)

    Astray, G; Fernández-González, M; Rodríguez-Rajo, F J; López, D; Mejuto, J C

    2016-04-01

    Castanea sativa Miller belongs to the natural vegetation of many European deciduous forests, with impacts in forestry, ecology, allergology, and the chestnut food industry. The study of Castanea flowering represents an important tool for evaluating the ecological conservation of North-Western Spain woodland and possible changes in chestnut distribution due to recent climatic change. Castanea pollen production and dispersal capacity may cause hypersensitivity reactions in the sensitive human population, given the relationship between patients with chestnut pollen allergy and a potential cross-reactivity risk with other pollens or plant foods. In addition to the importance of Castanea pollen as a pollinosis agent, its study is also essential in North-Western Spain due to the economic impact of the industry around chestnut tree cultivation and its beekeeping interest. The aim of this research is to develop Artificial Neural Networks to predict the Castanea pollen concentration in the atmosphere of the North-West Spain area by means of a 20-year data set. An increasing trend in the total annual Castanea pollen concentration in the atmosphere was detected during the study period. The Artificial Neural Networks (ANNs) implemented in this study show a great ability to predict the Castanea pollen concentration one, two, and three days ahead. The model predicting the Castanea pollen concentration one day ahead shows a high linear correlation coefficient of 0.784 (individual ANN) and 0.738 (multiple ANN). The results improve on those obtained by the classical methodologies used to predict airborne pollen concentrations, such as time-series analysis or models based on the correlation of pollen levels with meteorological variables.
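    A scaled-down version of the one-day-ahead prediction task can be sketched with lagged inputs and a small neural network; the data below are synthetic and scikit-learn is assumed, so neither the network architecture nor the 20-year data set of the study is reproduced.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      days = np.arange(2000)
      pollen = np.clip(80 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 10, days.size), 0, None)

      lags = 3                                           # previous 3 days as inputs
      X = np.column_stack([pollen[i:-(lags - i)] for i in range(lags)])
      y = pollen[lags:]                                  # target: concentration one day ahead

      X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)
      model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
      model.fit(X_train, y_train)
      print(f"R^2 on held-out days: {model.score(X_test, y_test):.2f}")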

  7. Implementation of a PETN failure model using ARIA's general chemistry framework

    Energy Technology Data Exchange (ETDEWEB)

    Hobbs, Michael L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-01-01

    We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA’s new phase change model. This memo briefly describes the model, implementation, and validation.

  8. The i-V curve characteristics of burner-stabilized premixed flames: detailed and reduced models

    KAUST Repository

    Han, Jie

    2016-07-17

    The i-V curve describes the current drawn from a flame as a function of the voltage difference applied across the reaction zone. Since combustion diagnostics and flame control strategies based on electric fields depend on the amount of current drawn from flames, there is significant interest in modeling and understanding i-V curves. We implement and apply a detailed model for the simulation of the production and transport of ions and electrons in one-dimensional premixed flames. An analytical reduced model is developed based on the detailed one, and analytical expressions are used to gain insight into the characteristics of the i-V curve for various flame configurations. In order for the reduced model to capture the spatial distribution of the electric field accurately, the concept of a dead zone region, where voltage is constant, is introduced, and a suitable closure for the spatial extent of the dead zone is proposed and validated. The results from the reduced modeling framework are found to be in good agreement with those from the detailed simulations. The saturation voltage is found to depend significantly on the flame location relative to the electrodes, and on the sign of the voltage difference applied. Furthermore, at sub-saturation conditions, the current is shown to increase linearly or quadratically with the applied voltage, depending on the flame location. These limiting behaviors exhibited by the reduced model elucidate the features of i-V curves observed experimentally. The reduced model relies on the existence of a thin layer where charges are produced, corresponding to the reaction zone of a flame. Consequently, the analytical model we propose is not limited to the study of premixed flames, and may be applied easily to other configurations, e.g., nonpremixed counterflow flames.

  9. Reduced order methods for modeling and computational reduction

    CERN Document Server

    Rozza, Gianluigi

    2014-01-01

    This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics.  Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...

  10. Construction of energy-stable Galerkin reduced order models.

    Energy Technology Data Exchange (ETDEWEB)

    Kalashnikova, Irina; Barone, Matthew Franklin; Arunajatesan, Srinivasan; van Bloemen Waanders, Bart Gustaaf

    2013-05-01

    This report aims to unify several approaches for building stable projection-based reduced order models (ROMs). Attention is focused on linear time-invariant (LTI) systems. The model reduction procedure consists of two steps: the computation of a reduced basis, and the projection of the governing partial differential equations (PDEs) onto this reduced basis. Two kinds of reduced bases are considered: the proper orthogonal decomposition (POD) basis and the balanced truncation basis. The projection step of the model reduction can be done in two ways: via continuous projection or via discrete projection. First, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of PDEs using continuous projection is proposed. The idea is to apply to the set of PDEs a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. The resulting ROM will be energy-stable for any choice of reduced basis. It is shown that, for many PDE systems, the desired transformation is induced by a special weighted L2 inner product, termed the “symmetry inner product”. Attention is then turned to building energy-stable ROMs via discrete projection. A discrete counterpart of the continuous symmetry inner product, a weighted L2 inner product termed the “Lyapunov inner product”, is derived. The weighting matrix that defines the Lyapunov inner product can be computed in a black-box fashion for a stable LTI system arising from the discretization of a system of PDEs in space. It is shown that a ROM constructed via discrete projection using the Lyapunov inner product will be energy-stable for any choice of reduced basis. Connections between the Lyapunov inner product and the inner product induced by the balanced truncation algorithm are made. Comparisons are also made between the symmetry inner product and the Lyapunov inner product. The performance of ROMs constructed
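    The two-step procedure described above (compute a reduced basis, then project the governing equations onto it) can be illustrated with a bare-bones POD/Galerkin projection of a stable linear system; the stability-preserving symmetry and Lyapunov inner products that are the subject of the report are not reproduced here, and the test matrix is an arbitrary assumption.

      import numpy as np

      rng = np.random.default_rng(0)
      n, k = 200, 10                                                  # full order, reduced order
      A = -np.eye(n) + np.triu(0.1 * rng.normal(size=(n, n)), k=1)    # all eigenvalues -1 (stable)

      # Collect snapshots of the full model dx/dt = A x by explicit time stepping.
      dt, steps = 1e-3, 400
      x = rng.normal(size=n)
      snapshots = []
      for _ in range(steps):
          x = x + dt * (A @ x)
          snapshots.append(x.copy())
      X = np.column_stack(snapshots)

      # POD basis: leading left singular vectors of the snapshot matrix.
      U, s, _ = np.linalg.svd(X, full_matrices=False)
      Phi = U[:, :k]

      # Galerkin projection onto the POD basis: reduced operator A_r = Phi^T A Phi.
      A_r = Phi.T @ A @ Phi
      print("reduced operator shape:", A_r.shape)
      print("retained snapshot energy:", (s[:k] ** 2).sum() / (s ** 2).sum())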

  11. Model Meets Data: Challenges and Opportunities to Implement Land Management in Earth System Models

    Science.gov (United States)

    Pongratz, J.; Dolman, A. J.; Don, A.; Erb, K. H.; Fuchs, R.; Herold, M.; Jones, C.; Luyssaert, S.; Kuemmerle, T.; Meyfroidt, P.

    2016-12-01

    Land-based demand for food and fibre is projected to increase in the future. In light of global sustainability challenges only part of this increase will be met by expansion of land use into relatively untouched regions. Additional demand will have to be fulfilled by intensification and other adjustments in management of land that already is under agricultural and forestry use. Such land management today occurs on about half of the ice-free land surface, as compared to only about one quarter that has undergone a change in land cover. As the number of studies revealing substantial biogeophysical and biogeochemical effects of land management is increasing, moving beyond land cover change towards including land management has become a key focus for Earth system modeling. However, a basis for prioritizing land management activities for implementation in models is lacking. We lay this basis for prioritization in a collaborative project across the disciplines of Earth system modeling, land system science, and Earth observation. We first assess the status and plans of implementing land management in Earth system and dynamic global vegetation models. A clear trend towards higher complexity of land use representation is visible. We then assess five criteria for prioritizing the implementation of land management activities: (1) spatial extent, (2) evidence for substantial effects on the Earth system, (3) process understanding, (4) possibility to link the management activity to existing concepts and structures of models, (5) availability of data required as model input. While the first three criteria have been assessed by an earlier study for ten common management activities, we review strategies for implementation in models and the availability of required datasets. We can thus evaluate the management activities for their performance in terms of importance for the Earth system, possibility of technical implementation in models, and data availability. This synthesis reveals

  12. Model meets data: Challenges and opportunities to implement land management in Earth System Models

    Science.gov (United States)

    Pongratz, Julia; Dolman, Han; Don, Axel; Erb, Karl-Heinz; Fuchs, Richard; Herold, Martin; Jones, Chris; Luyssaert, Sebastiaan; Kuemmerle, Tobias; Meyfroidt, Patrick; Naudts, Kim

    2017-04-01

    Land-based demand for food and fibre is projected to increase in the future. In light of global sustainability challenges only part of this increase will be met by expansion of land use into relatively untouched regions. Additional demand will have to be fulfilled by intensification and other adjustments in management of land that already is under agricultural and forestry use. Such land management today occurs on about half of the ice-free land surface, as compared to only about one quarter that has undergone a change in land cover. As the number of studies revealing substantial biogeophysical and biogeochemical effects of land management is increasing, moving beyond land cover change towards including land management has become a key focus for Earth system modeling. However, a basis for prioritizing land management activities for implementation in models is lacking. We lay this basis for prioritization in a collaborative project across the disciplines of Earth system modeling, land system science, and Earth observation. We first assess the status and plans of implementing land management in Earth system and dynamic global vegetation models. A clear trend towards higher complexity of land use representation is visible. We then assess five criteria for prioritizing the implementation of land management activities: (1) spatial extent, (2) evidence for substantial effects on the Earth system, (3) process understanding, (4) possibility to link the management activity to existing concepts and structures of models, (5) availability of data required as model input. While the first three criteria have been assessed by an earlier study for ten common management activities, we review strategies for implementation in models and the availability of required datasets. We can thus evaluate the management activities for their performance in terms of importance for the Earth system, possibility of technical implementation in models, and data availability. This synthesis reveals

  13. Implementing a Peer Mentoring Model in the Clemson Eportfolio Program

    Science.gov (United States)

    Ring, Gail L.

    2015-01-01

    Since the implementation of the ePortfolio Program in 2006, Clemson University has incorporated peer review for the formative feedback process. One of the challenges with this large-scale implementation has been ensuring that all work is reviewed and constructive feedback is provided in a timely manner. In this article, I discuss the strategies…

  14. KNOWLEDGE MANAGEMENT AND ENTERPRISE RESOURCE PLANNING IMPLEMENTATION: A CONCEPTUAL MODEL

    Directory of Open Access Journals (Sweden)

    Sevenpri Candra

    2014-01-01

    Full Text Available The purpose of this research is to examine the influence of organizational learning and knowledge management on enterprise resource planning implementation. This study is based on organizational learning, knowledge management, and enterprise resource planning implementation. The research does not test all organizational factors; it focuses particularly on knowledge management capacity and absorptive capability. Successful enterprise resource planning implementation is a must. In today’s global and competitive business environment, enterprise resource planning is becoming one of the main tools for achieving competitiveness. Enterprise resource planning provides an infrastructure for creating and maintaining business processes that improve front-office and back-office efficiency and effectiveness. This study is significant in that it brings new thinking to determining the key antecedents of successful enterprise resource planning implementation from a knowledge management perspective, and it helps in understanding the key success factors in enterprise resource planning implementation.

  15. Ottawa Model of Implementation Leadership and Implementation Leadership Scale: mapping concepts for developing and evaluating theory-based leadership interventions

    Directory of Open Access Journals (Sweden)

    Gifford W

    2017-03-01

    Full Text Available Wendy Gifford,1 Ian D Graham,2,3 Mark G Ehrhart,4 Barbara L Davies,5,6 Gregory A Aarons7 1School of Nursing, Faculty of Health Sciences, University of Ottawa, ON, Canada; 2Centre for Practice-Changing Research, Ottawa Hospital Research Institute, 3School of Epidemiology, Public Health and Preventive Medicine, Facility of Medicine, University of Ottawa, Ottawa, ON, Canada; 4Department of Psychology, San Diego State University, San Diego, CA, USA; 5Nursing Best Practice Research Center, University of Ottawa, Ottawa, ON, Canada; 6Department of Psychiatry, University of California, San Diego, La Jolla, CA, USA; 7Child and Adolescent Services Research Center, University of California, San Diego, CA, USA Purpose: Leadership in health care is instrumental to creating a supportive organizational environment and positive staff attitudes for implementing evidence-based practices to improve patient care and outcomes. The purpose of this study is to demonstrate the alignment of the Ottawa Model of Implementation Leadership (O-MILe), a theoretical model for developing implementation leadership, with the Implementation Leadership Scale (ILS), an empirically validated tool for measuring implementation leadership. A secondary objective is to describe the methodological process for aligning concepts of a theoretical model with an independently established measurement tool for evaluating theory-based interventions. Methods: Modified template analysis was conducted to deductively map items of the ILS onto concepts of the O-MILe. An iterative process was used in which the model and scale developers (n=5) appraised the relevance, conceptual clarity, and fit of each ILS item with the O-MILe concepts through individual feedback and group discussions until consensus was reached. Results: All 12 items of the ILS correspond to at least one O-MILe concept, demonstrating compatibility of the ILS as a measurement tool for the O-MILe theoretical constructs. Conclusion: The O

  16. A Finite Element Implementation of a Ductile Damage Model for Small Strains

    CERN Document Server

    Gates, Robert Lee

    2013-01-01

    Lemaitre's ductile damage model and a simplified variant excluding kinematic hardening were studied and implemented into computer code. For purposes of verifying the model, results from computations with the finite element method are compared to literature. It is found that the behavior expected from theory is modeled by both implementations. Quadratic levels of convergence were observed for the simplified model, while results show that convergence of the kinematic hardening implementation deteriorates with damage. It is concluded that further examination is needed to verify the correct implementation of the kinematic hardening model.

  17. Glyburide reduces bacterial dissemination in a mouse model of melioidosis.

    Directory of Open Access Journals (Sweden)

    Gavin C K W Koh

    Full Text Available Burkholderia pseudomallei infection (melioidosis) is an important cause of community-acquired Gram-negative sepsis in Northeast Thailand, where it is associated with a ~40% mortality rate despite antimicrobial chemotherapy. We showed in a previous cohort study that patients taking glyburide (= glibenclamide) prior to admission have lower mortality and attenuated inflammatory responses compared to patients not taking glyburide. We sought to define the mechanism underlying this observation in a murine model of melioidosis. Mice (C57BL/6) with streptozocin-induced diabetes were inoculated with ~6 × 10² cfu B. pseudomallei intranasally, then treated with therapeutic ceftazidime (600 mg/kg intraperitoneally twice daily) starting 24 h after inoculation in order to mimic the clinical scenario. Glyburide (50 mg/kg) or vehicle was started 7 d before inoculation and continued until sacrifice. The minimum inhibitory concentration of glyburide for B. pseudomallei was determined by broth microdilution. We also examined the effect of glyburide on interleukin (IL) 1β by bone-marrow-derived macrophages (BMDM). Diabetic mice had increased susceptibility to melioidosis, with increased bacterial dissemination, but no effect was seen of diabetes on inflammation compared to non-diabetic controls. Glyburide treatment did not affect glucose levels but was associated with reduced pulmonary cellular influx, reduced bacterial dissemination to both liver and spleen, and reduced IL1β production when compared to untreated controls. Other cytokines were not different in glyburide-treated animals. There was no direct effect of glyburide on B. pseudomallei growth in vitro or in vivo. Glyburide directly reduced the secretion of IL1β by BMDMs in a dose-dependent fashion. Diabetes increases the susceptibility to melioidosis. We further show, for the first time in any model of sepsis, that glyburide acts as an anti-inflammatory agent by reducing IL1β secretion accompanied by diminished

  18. Variational asymptotic modeling of composite dimensionally reducible structures

    Science.gov (United States)

    Yu, Wenbin

    A general framework to construct accurate reduced models for composite dimensionally reducible structures (beams, plates and shells) was formulated based on two theoretical foundations: decomposition of the rotation tensor and the variational asymptotic method. Two engineering software systems, Variational Asymptotic Beam Sectional Analysis (VABS, new version) and Variational Asymptotic Plate and Shell Analysis (VAPAS), were developed. Several restrictions found in previous work on beam modeling were removed in the present effort. A general formulation of Timoshenko-like cross-sectional analysis was developed, through which the shear center coordinates and a consistent Vlasov model can be obtained. Recovery relations are given to recover the asymptotic approximations for the three-dimensional field variables. A new version of VABS has been developed, which is a much improved program in comparison to the old one. Numerous examples are given for validation. A Reissner-like model being as asymptotically correct as possible was obtained for composite plates and shells. After formulating the three-dimensional elasticity problem in intrinsic form, the variational asymptotic method was used to systematically reduce the dimensionality of the problem by taking advantage of the smallness of the thickness. The through-the-thickness analysis is solved by a one-dimensional finite element method to provide the stiffnesses as input for the two-dimensional nonlinear plate or shell analysis as well as recovery relations to approximately express the three-dimensional results. The known fact that there exists more than one theory that is asymptotically correct to a given order is adopted to cast the refined energy into a Reissner-like form. A two-dimensional nonlinear shell theory consistent with the present modeling process was developed. The engineering computer code VAPAS was developed and inserted into DYMORE to provide an efficient and accurate analysis of composite plates and

  19. Energy consumption model over parallel programs implemented on multicore architectures

    Directory of Open Access Journals (Sweden)

    Ricardo Isidro-Ramirez

    2015-06-01

    Full Text Available In High Performance Computing, energy consumption is becoming an important aspect to consider. Due to the high cost of energy production in all countries, energy holds an important role, and ways to save it are sought. This is reflected in efforts to reduce the energy requirements of hardware components and applications. Some options have been appearing in order to scale down energy use and, consequently, scale up energy efficiency. One of these strategies is the multithread programming paradigm, whose purpose is to produce parallel programs able to use the full amount of computing resources available in a microprocessor. That energy-saving strategy focuses on efficient use of the multicore processors found in various computing devices, such as mobile devices. In fact, as a growing trend, multicore processors have been part of various specific-purpose computers since 2003, from High Performance Computing servers to mobile devices. However, it is not clear how multiprogramming affects energy efficiency. This paper presents an analysis of different types of multicore-based architectures used in computing, and then a valid model is presented. Based on Amdahl’s Law, a model that considers different scenarios of energy use in multicore architectures is proposed. Some interesting results were found from experiments with the developed algorithm, which was executed in both parallel and sequential ways. A lower limit of energy consumption was found for one type of multicore architecture, and this behavior was observed experimentally.
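    The Amdahl's Law basis mentioned above can be written down directly; the simple power model in the sketch (a shared static term plus a per-active-core term) is an assumption for illustration, not the model proposed in the paper, but it reproduces the qualitative behaviour of an energy minimum at an intermediate core count.

      def amdahl_speedup(p, n):
          """Amdahl's Law: p = parallel fraction of the work, n = number of cores."""
          return 1.0 / ((1.0 - p) + p / n)

      def energy(p, n, t1=100.0, p_static=10.0, p_core=5.0):
          """Energy = (shared static power + n active cores) x parallel runtime (assumed model)."""
          t_n = t1 / amdahl_speedup(p, n)
          return (p_static + n * p_core) * t_n

      if __name__ == "__main__":
          p = 0.9                        # 90% of the work parallelizes (hypothetical)
          for n in (1, 2, 4, 8, 16):
              print(f"n={n:2d}  speedup={amdahl_speedup(p, n):5.2f}  energy={energy(p, n):7.1f}")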

  20. An Algorithm and Implementation Based on an Agricultural EOQ Model

    Directory of Open Access Journals (Sweden)

    Hu Zhineng

    2015-01-01

    Full Text Available With improving living standards, agricultural supermarkets are gradually taking the place of farmers markets. However, the supermarkets’ inappropriate inventory strategies are wasteful and inefficient. This paper therefore puts forward an inventory strategy for agricultural supermarkets that guides the operator in deciding when and how much product to shelve. The strategy is significant in that it can reduce losses and increase profit. The research methods are based on inventory theory and the EOQ model, to which the authors add a multiple-cycle treatment because agricultural products deteriorate over time. The research proceeds as follows. First, the authors survey agricultural supermarkets to establish their current practice and then put forward the new strategy. Second, the authors formulate the model. Finally, the authors draw data such as the loss rate and freshness parameters from the specialty agriculture literature and solve the model in MATLAB. The numerical results show that the strategy outperforms current supermarket practice and demonstrate its feasibility.
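    The classic economic order quantity that the paper's multi-cycle, deterioration-aware model builds on is Q* = sqrt(2DK/h); the short sketch below evaluates it with made-up numbers and does not reproduce the authors' extension or their MATLAB solution.

      from math import sqrt

      def eoq(demand_rate, order_cost, holding_cost):
          """Classic economic order quantity Q* = sqrt(2*D*K / h)."""
          return sqrt(2.0 * demand_rate * order_cost / holding_cost)

      D = 5000.0   # units demanded per year (hypothetical)
      K = 40.0     # fixed cost per shelving/order batch (hypothetical)
      h = 2.5      # holding cost per unit per year (hypothetical)

      q_star = eoq(D, K, h)
      print(f"order quantity Q* = {q_star:.1f} units, about {D / q_star:.1f} orders per year")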

  1. Crop model improvement reduces the uncertainty of the response to temperature of multi-model ensembles

    DEFF Research Database (Denmark)

    Maiorano, Andrea; Martre, Pierre; Asseng, Senthold

    2017-01-01

    To improve climate change impact estimates and to quantify their uncertainty, multi-model ensembles (MMEs) have been suggested. Model improvements can improve the accuracy of simulations and reduce the uncertainty of climate change impact assessments. Furthermore, they can reduce the number of mo...

  2. A Technological Innovation to Reduce Prescribing Errors Based on Implementation Intentions: The Acceptability and Feasibility of MyPrescribe.

    Science.gov (United States)

    Keyworth, Chris; Hart, Jo; Thoong, Hong; Ferguson, Jane; Tully, Mary

    2017-08-01

    Although prescribing of medication in hospitals is rarely an error-free process, prescribers receive little feedback on their mistakes and ways to change future practices. Audit and feedback interventions may be an effective approach to modifying the clinical practice of health professionals, but these may pose logistical challenges when used in hospitals. Moreover, such interventions are often labor intensive. Consequently, there is a need to develop effective and innovative interventions to overcome these challenges and to improve the delivery of feedback on prescribing. Implementation intentions, which have been shown to be effective in changing behavior, link critical situations with an appropriate response; however, these have rarely been used in the context of improving prescribing practices. Semistructured qualitative interviews were conducted to evaluate the acceptability and feasibility of providing feedback on prescribing errors via MyPrescribe, a mobile-compatible website informed by implementation intentions. Data relating to 200 prescribing errors made by 52 junior doctors were collected by 11 hospital pharmacists. These errors were populated into MyPrescribe, where prescribers were able to construct their own personalized action plans. Qualitative interviews with a subsample of 15 junior doctors were used to explore issues regarding feasibility and acceptability of MyPrescribe and their experiences of using implementation intentions to construct prescribing action plans. Framework analysis was used to identify prominent themes, with findings mapped to the behavioral components of the COM-B model (capability, opportunity, motivation, and behavior) to inform the development of future interventions. MyPrescribe was perceived to be effective in providing opportunities for critical reflection on prescribing errors and to complement existing training (such as junior doctors' e-portfolio). The participants were able to provide examples of how they would use

  3. Design and Implementation of a Telepediatric Primary-Level and Low-Cost System to Reduce Unnecessary Patient Transfers.

    Science.gov (United States)

    Cifuentes, Christian; Romero, Eduardo; Godoy, Javier

    2017-06-01

    Most inhabitants in Latin America are concentrated in large urban foci with different access to facilities. Although the main hospitals offer specialized services, economically vulnerable populations cannot easily afford these services, the pediatric population being most affected. This article presents the design and implementation of a low cost telepediatric system, applied to primary care hospitals through a study in Bogotá, Colombia, mainly aimed to reduce the number of unnecessary transfers commonly sent to specialized medical services. The system was carried out over 6 months with a higher incidence of acute respiratory illness in children between 0 and 5 years in nine primary care hospitals in Bogotá. Nineteen (n = 19) pediatricians were trained by a group of engineers that supports the system permanently. The reduction of patient transfers was compared with previous reports of the National Statistical Department in Colombia. The system reduced both the number of patient transfers to higher level hospitals by 83% and the waiting times for patient transfer, improving healthcare in pediatric patients at a reasonable cost, affecting more than 700 patients. At the same time, a decrease of about 17% in the use of antibiotics was observed, which is an important current public health issue. The use of telemedicine improves the efficiency of public health resources, even in big cities such as Bogotá, reducing the number of unnecessary patient transfers and the optimization and appropriate use of medicines.

  4. Minnelide reduces tumor burden in preclinical models of osteosarcoma

    Science.gov (United States)

    Banerjee, Sulagna; Thayanithy, Venugopal; Sangwan, Veena; Mackenzie, Tiffany N.; Saluja, Ashok K.; Subramanian, Subbaya

    2015-01-01

    Osteosarcoma is the most common bone cancer in children and adolescents with a five-year survival rate of about 70%. In this study, we have evaluated the preclinical therapeutic efficacy of the novel synthetic drug, Minnelide, a prodrug of triptolide on osteosarcoma. Triptolide was effective in significantly inducing apoptosis in all osteosarcoma cell lines tested but had no significant effect on the human osteoblast cells. Notably, Minnelide treatment significantly reduced tumor burden and lung metastasis in the orthotopic and lung colonization models. Triptolide/Minnelide effectively downregulated the levels of pro-survival proteins such as heat shock proteins, cMYC, survivin and targets NF-κB pathway. PMID:23499892

  5. Reduced parameter model on trajectory tracking data with applications

    Institute of Scientific and Technical Information of China (English)

    王正明; 朱炬波

    1999-01-01

    The data fusion in tracking the same trajectory by multi-measurement units (MMU) is considered. Firstly, the reduced parameter models (RPM) of the trajectory parameter (TP), system error, and random error are presented, and then the RPM on trajectory tracking data (TTD) is obtained; a weighted method on measuring elements (ME) is studied, and criteria for the selection of ME based on residual and accuracy estimation are put forward. According to the RPM, the problem of ME selection and self-calibration of TTD is thoroughly investigated. The method clearly improves data accuracy in trajectory tracking and simultaneously provides an accuracy evaluation of the trajectory tracking system.

  6. Improved Reduced Models for Single-Pass and Reflective Semiconductor Optical Amplifiers

    CERN Document Server

    Dúill, Seán P Ó

    2014-01-01

    We present highly accurate and easy to implement, improved lumped semiconductor optical amplifier (SOA) models for both single-pass and reflective semiconductor optical amplifiers (RSOA). The key feature of the model is the inclusion of the internal losses and we show that a few subdivisions are required to achieve an accuracy of 0.12 dB. For the case of RSOAs, we generalize a recently published model to account for the internal losses that are vital to replicate observed RSOA behavior. The results of the improved reduced RSOA model show large overlap when compared to a full bidirectional travelling wave model over a 40 dB dynamic range of input powers and a 20 dB dynamic range of reflectivity values. The models would be useful for the rapid system simulation of signals in communication systems, i.e. passive optical networks that employ RSOAs, signal processing using SOAs and for implementing digital back propagation to undo amplifier induced signal distortions.

  7. A Reduced Order, One Dimensional Model of Joint Response

    Energy Technology Data Exchange (ETDEWEB)

    DOHNER,JEFFREY L.

    2000-11-06

    As a joint is loaded, the tangent stiffness of the joint reduces due to slip at interfaces. This stiffness reduction continues until the direction of the applied load is reversed or the total interface slips. Total interface slippage in joints is called macro-slip. For joints not undergoing macro-slip, when load reversal occurs the tangent stiffness immediately rebounds to its maximum value. This occurs due to stiction effects at the interface. Thus, for periodic loads, a softening and rebound hardening cycle is produced which defines a hysteretic, energy absorbing trajectory. For many jointed sub-structures, this hysteretic trajectory can be approximated using simple polynomial representations. This allows for complex joint substructures to be represented using simple non-linear models. In this paper a simple one dimensional model is discussed.
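    The softening and rebound-hardening cycle described above can be sketched qualitatively as follows; the polynomial stiffness law, its parameters, and the reversal bookkeeping are illustrative assumptions, not the paper's fitted joint model.

      import numpy as np

      def hysteresis_force(u, k0=1.0, u_slip=1.5, m=2.0):
          """Integrate force with a tangent stiffness that decays with excursion since the
          last load reversal and rebounds to k0 whenever the motion reverses."""
          F = np.zeros_like(u)
          u_rev = u[0]
          for i in range(1, len(u)):
              du = u[i] - u[i - 1]
              if i > 1 and (u[i - 1] - u[i - 2]) * du < 0:
                  u_rev = u[i - 1]                       # direction change: stiffness rebounds
              excursion = abs(u[i - 1] - u_rev)
              k = k0 * max(0.0, 1.0 - excursion / u_slip) ** m
              F[i] = F[i - 1] + k * du                   # zero stiffness mimics macro-slip
          return F

      t = np.linspace(0, 4 * np.pi, 2000)
      u = np.sin(t)                                      # periodic loading
      F = hysteresis_force(u)
      print("energy dissipated over the second cycle ~", np.trapz(F[1000:], u[1000:]))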

  8. Defining Building Information Modeling implementation activities based on capability maturity evaluation: a theoretical model

    Directory of Open Access Journals (Sweden)

    Romain Morlhon

    2015-01-01

    Full Text Available Building Information Modeling (BIM) has become a widely accepted tool to overcome the many hurdles that currently face the Architecture, Engineering and Construction industries. However, implementing such a system is always complex and the recent introduction of BIM does not allow organizations to build their experience on acknowledged standards and procedures. Moreover, data on implementation projects is still disseminated and fragmentary. The objective of this study is to develop an assistance model for BIM implementation. Solutions that are proposed will help develop BIM that is better integrated and better used, and take into account the different maturity levels of each organization. Indeed, based on Critical Success Factors, concrete activities that help in implementation are identified and can be undertaken according to the previous maturity evaluation of an organization. The result of this research consists of a structured model linking maturity, success factors and actions, which operates on the following principle: once an organization has assessed its BIM maturity, it can identify various weaknesses and find relevant answers in the success factors and the associated actions.

  9. Development and Implementation of a Smartphone Application to Promote Physical Activity and Reduce Screen-Time in Adolescent Boys

    Science.gov (United States)

    Lubans, David R.; Smith, Jordan J.; Skinner, Geoff; Morgan, Philip J.

    2014-01-01

    Purpose: To describe the development and implementation of a smartphone application (app) designed to promote physical activity and reduce screen-time in adolescent boys considered “at-risk” of obesity. Methods: An app was developed to support the delivery of a face-to-face school-based obesity prevention program known as the “Active Teen Leaders Avoiding Screen-time” (ATLAS) program. ATLAS was guided by self-determination theory and social cognitive theory and evaluated using a cluster randomized controlled trial with 361 boys (12.7 ± 0.5 years) in 14 secondary schools. Following the completion of the study, participants in the intervention group completed a process evaluation questionnaire and focus groups were conducted with 42 students to explore their general perceptions of the ATLAS program and their experience with the smartphone app. Barriers and challenges encountered in the development, implementation, and evaluation of the app are also described. Results: Participation in the study was not contingent on ownership of a smartphone, but 70% of participants in the intervention group reported having access to a smartphone or tablet device. Focus group participants reported an enjoyment of the program, and felt that it had provided them with new skills, techniques, and routines for the future. However, their engagement with the smartphone app was limited, due to a variety of reasons. Barriers to the implementation and evaluation of the app included limited access to smartphone devices, technical problems with the push notifications, lack of access to usage data, and the challenges of maintaining participants’ interest in using the app. Conclusion: Although participants reported high levels of satisfaction with the ATLAS program in general, the smartphone app was not used extensively. Additional strategies and features may be needed to enhance engagement in adolescent boys. PMID:24904909

  10. Development and Implementation of a Smartphone Application to Promote Physical Activity and Reduce Screen-time in Adolescent Boys

    Directory of Open Access Journals (Sweden)

    David Revalds Lubans

    2014-05-01

    Full Text Available Purpose: The primary aim is to describe the development and implementation of a smartphone application (app) designed to promote physical activity and reduce screen-time in adolescent boys ‘at risk’ of obesity from low-income communities. Methods: An app was developed to support the delivery of a face-to-face school-based obesity prevention program known as the ‘Active Teen Leaders Avoiding Screen-time’ (ATLAS) program. ATLAS was guided by self-determination theory and social cognitive theory and evaluated using a cluster randomized controlled trial with 361 boys (12.7 ± 0.5 years) in 14 secondary schools. Following the completion of the study, participants in the intervention group completed a process evaluation questionnaire and focus groups were conducted with 42 students to explore their general perceptions of the ATLAS program and their experience with the smartphone app. Barriers and challenges encountered in the development, implementation and evaluation of the app are also described. Results: Participation in the study was not contingent on ownership of a smartphone, but 70% of participants in the intervention group reported having access to a smartphone or tablet device. Focus group participants reported an enjoyment of the program, and felt that it had provided them with new skills, techniques, and routines for the future. However, their engagement with the smartphone app was limited, due to a variety of reasons. Barriers to the implementation and evaluation of the app included limited access to smartphone devices, technical problems with the push notifications, lack of access to usage data and the challenges of maintaining participants’ interest in using the app. Conclusions: Although participants reported high levels of satisfaction with the ATLAS program in general, the smartphone app was not used extensively. Additional strategies and features may be needed to enhance engagement in adolescent boys.

  11. What Clinical Interventions Have Been Implemented to Prevent or Reduce Postpartum Hypertension Readmissions? A Clin-IQ

    Directory of Open Access Journals (Sweden)

    Sara O'Meara

    2016-08-01

    Full Text Available A literature review was conducted to determine what clinical interventions have been studied and implemented to prevent and/or reduce postpartum hypertension readmissions. Appropriate verbal and printed educational materials should be given to the patient prior to discharge with use of the “teach back” method. Patients and health care providers within the multidisciplinary team should be educated on the warning signs and symptoms of worsening hypertensive disease and when to appropriately involve the obstetrician. The use of text messaging may be useful in preventing hospital readmissions by increasing patient follow-up and compliance and appropriately managing patients in the postpartum period. Treating postpartum patients with furosemide may decrease blood pressure and prevent postpartum hypertension and the need for antihypertensive therapy.

  12. Implementing multiresolution models and families of models: from entity-level simulation to desktop stochastic models and "repro" models

    Science.gov (United States)

    McEver, Jimmie; Davis, Paul K.; Bigelow, James H.

    2000-06-01

    We have developed and used families of multiresolution and multiple-perspective models (MRM and MRMPM), both in our substantive analytic work for the Department of Defense and to learn more about how such models can be designed and implemented. This paper is a brief case history of our experience with a particular family of models addressing the use of precision fires in interdicting and halting an invading army. Our models were implemented as closed-form analytic solutions, in spreadsheets, and in the more sophisticated AnalyticaTM environment. We also drew on an entity-level simulation for data. The paper reviews the importance of certain key attributes of development environments (visual modeling, interactive languages, friendly use of array mathematics, facilities for experimental design and configuration control, statistical analysis tools, graphical visualization tools, interactive post-processing, and relational database tools). These can go a long way towards facilitating MRMPM work, but many of these attributes are not yet widely available (or available at all) in commercial model-development tools--especially for use with personal computers. We conclude with some lessons learned from our experience.

  13. On-Chip Implementation of High Resolution High Speed Floating Point Adder/Subtractor with Reducing Mean Latency for OFDM

    Directory of Open Access Journals (Sweden)

    Rozita Teymourzadeh

    2010-01-01

    Full Text Available Problem statement: The Fast Fourier Transform (FFT) is widely applied in OFDM transceiver communication systems, so an efficient FFT algorithm is always of interest. Approach: This study proposed an FPGA realization of a high-resolution, high-speed, low-latency floating point adder/subtractor for the FFT in an OFDM transceiver. The design was implemented as a 32 bit pipelined adder/subtractor satisfying the IEEE-754 standard for floating-point arithmetic. The design focused on the trade-off between latency and speed improvement, as well as between resolution and silicon area for the chip implementation. In order to reduce the critical path and decrease the latency, a novel structure was designed and investigated. Results: The synthesis report indicated a latency of 4 clock cycles, since each stage operates within just one clock cycle. The structure of the designed adder resulted in 6691 equivalent gates, yielding a low on-chip area. Conclusion: The Xilinx ISE synthesis software provided results representing the estimated area and delay for the design when it is pipelined to various depths. The report showed a minimum delay of 3.592 ns, corresponding to a maximum frequency of 278.42 MHz.
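
    The record describes the hardware design only at a high level; purely as a hedged illustration of the stages such a pipelined IEEE-754 adder typically contains (unpack, exponent alignment, mantissa addition, normalization and repacking), here is a minimal software sketch in Python. It is not the authors' RTL, and it ignores signs, rounding modes, subnormals and special values.

```python
import struct

def f32_bits(x: float) -> int:
    """Reinterpret a Python float as IEEE-754 single-precision bits."""
    return struct.unpack(">I", struct.pack(">f", x))[0]

def bits_f32(b: int) -> float:
    return struct.unpack(">f", struct.pack(">I", b & 0xFFFFFFFF))[0]

def fp32_add(a: float, b: float) -> float:
    """Toy IEEE-754 addition of two positive normal numbers, staged like a pipeline."""
    # Stage 1: unpack exponent and mantissa (implicit leading 1 restored)
    xa, xb = f32_bits(a), f32_bits(b)
    ea, eb = (xa >> 23) & 0xFF, (xb >> 23) & 0xFF
    ma, mb = (xa & 0x7FFFFF) | 0x800000, (xb & 0x7FFFFF) | 0x800000
    # Stage 2: align the smaller operand's mantissa to the larger exponent
    if ea < eb:
        ea, eb, ma, mb = eb, ea, mb, ma
    mb >>= (ea - eb)
    # Stage 3: add mantissas
    m, e = ma + mb, ea
    # Stage 4: normalize (at most one carry bit for an addition) and repack
    if m & 0x1000000:
        m >>= 1
        e += 1
    return bits_f32((e << 23) | (m & 0x7FFFFF))

assert abs(fp32_add(1.5, 2.25) - 3.75) < 1e-6
```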

  14. Development and Implementation of an Online Chemistry Module to a Large Eddy Simulation Model

    Science.gov (United States)

    Forkel, Renate; Banzhaf, Sabine; Kanani-Sühring, Farah; Ketelsen, Klaus; Khan, Basit; Maronga, Björn; Mauder, Matthias; Raasch, Siegfried

    2017-04-01

    Large Eddy Simulation (LES) models are able to resolve the relevant scales of turbulent motion, so these models can capture the inherent unsteadiness of atmospheric turbulence and advection. However, LES models have so far hardly been applied to urban air quality studies, in particular to the chemical transformation of pollutants. Within the BMBF (Bundesministerium für Bildung und Forschung) funded joint project MOSAIK (Modellbasierte Stadtplanung und Anwendung im Klimawandel / Model-based city planning and application in climate change), the state-of-the-art LES model PALM (Parallelized LES Model; Maronga et al., 2015, Geosci. Model Dev., 8, doi:10.5194/gmd-8-2515-2015) is extended by an atmospheric chemistry scheme. Due to the high computational demands of an LES-based model, compromises in the description of chemical processes are required. Therefore, a reduced chemistry mechanism, which includes only major pollutants, namely O3, NO, NO2 and CO, a highly simplified VOC chemistry and a small number of products, has been implemented. For practical applications, our approach is to go beyond the simulation of single street canyons and to treat chemical transformation, advection and deposition of air pollutants in the larger urban canopy. Tests of the chemistry schemes and initial studies of chemistry-turbulence interactions are presented.
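
    The reduced mechanism itself is not listed in the record; purely as a hedged illustration of what a "major pollutants only" scheme can look like, the sketch below advances the basic NO-NO2-O3 photostationary cycle with a forward-Euler step, as might be called once per LES time step under operator splitting. The rate constants and species set are placeholders, not the MOSAIK/PALM implementation.

```python
def chem_step(c, dt, j_no2=8.0e-3, k_no_o3=4.0e-4):
    """Advance a minimal NO/NO2/O3 mechanism by one time step.

    c : dict of concentrations (e.g. ppb); illustrative rates:
        R1: NO2 + hv -> NO + O3   (net of photolysis + fast O3 formation, j_no2 in 1/s)
        R2: NO + O3  -> NO2       (k_no_o3 in 1/(ppb s))
    """
    r1 = j_no2 * c["NO2"]
    r2 = k_no_o3 * c["NO"] * c["O3"]
    out = dict(c)
    out["NO2"] += dt * (r2 - r1)
    out["NO"]  += dt * (r1 - r2)
    out["O3"]  += dt * (r1 - r2)
    return out

# Example: chemistry sub-step applied after advection/diffusion (operator splitting)
c = {"NO": 20.0, "NO2": 30.0, "O3": 40.0}
for _ in range(600):            # 600 steps of 1 s
    c = chem_step(c, dt=1.0)
print(c)
```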

  15. The Implementation of C-ID, R2D2 Model on Learning Reading Comprehension

    Science.gov (United States)

    Rayanto, Yudi Hari; Rusmawan, Putu Ngurah

    2016-01-01

    The purposes of this research are to find out (1) whether the C-ID, R2D2 model is effective to be implemented on learning Reading comprehension, (2) college students' activity during the implementation of the C-ID, R2D2 model on learning Reading comprehension, and (3) college students' learning achievement during the implementation of C-ID, R2D2 model on…

  16. The computational implementation of the landscape model: modeling inferential processes and memory representations of text comprehension.

    Science.gov (United States)

    Tzeng, Yuhtsuen; van den Broek, Paul; Kendeou, Panayiota; Lee, Chengyuan

    2005-05-01

    The complexity of text comprehension demands a computational approach to describe the cognitive processes involved. In this article, we present the computational implementation of the landscape model of reading. This model captures both on-line comprehension processes during reading and the off-line memory representation after reading is completed, incorporating both memory-based and coherence-based mechanisms of comprehension. The overall architecture and specific parameters of the program are described, and a running example is provided. Several studies comparing computational and behavioral data indicate that the implemented model is able to account for cycle-by-cycle comprehension processes and memory for a variety of text types and reading situations.
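
    The article gives no listing; a minimal sketch of the kind of cycle-by-cycle activation update the landscape model family is built on (activation carries over with decay, concepts mentioned in the current cycle are activated, and co-activated concepts strengthen their connections, yielding an off-line memory representation) could look as follows. Parameter names and values are illustrative, not those of the published program.

```python
import numpy as np

def landscape_run(cycles, n_concepts, decay=0.5, learn=0.1, max_act=1.0):
    """Toy landscape-style reading model.

    cycles: list of lists; each inner list holds the indices of concepts
            explicitly mentioned in that reading cycle.
    Returns the per-cycle activation history and the final connection matrix
    (a stand-in for the off-line memory representation).
    """
    act = np.zeros(n_concepts)
    conn = np.zeros((n_concepts, n_concepts))
    history = []
    for mentioned in cycles:
        act *= decay                          # carry-over with decay
        act += learn * conn @ act             # coherence-based reactivation
        act[mentioned] = max_act              # concepts named in the current cycle
        act = np.clip(act, 0.0, max_act)
        conn += np.outer(act, act)            # strengthen co-activated links
        np.fill_diagonal(conn, 0.0)           # no self-connections
        history.append(act.copy())
    return np.array(history), conn

hist, memory = landscape_run([[0, 1], [1, 2], [2, 3], [0, 3]], n_concepts=4)
print(hist.round(2))
```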

  17. Implementing a new model for on-the-job training: critical success factors.

    NARCIS (Netherlands)

    van Zolingen, S.J.; Streumer, Jan; van der Klink, Marcel; de Jong, Rolinda

    2000-01-01

    Post Offices Inc. in The Netherlands has developed and implemented a new instruction model for the training of desk employees. The quality of the new instruction model was assessed by means of the evaluation model of Jacobs and Jones for on-the-job training. It is concluded that the implementation

  18. Optimizing Crawler4j using MapReduce Programming Model

    Science.gov (United States)

    Siddesh, G. M.; Suresh, Kavya; Madhuri, K. Y.; Nijagal, Madhushree; Rakshitha, B. R.; Srinivasa, K. G.

    2016-08-01

    The World Wide Web is a decentralized system that consists of a repository of information in the form of web pages. These web pages act as a source of information or data in the present analytics world. Web crawlers are used for extracting useful information from web pages for different purposes. Firstly, they are used in web search engines, where the web pages are indexed to form a corpus of information that allows users to query the web pages. Secondly, they are used for web archiving, where the web pages are stored for later analysis phases. Thirdly, they can be used for web mining, where the web pages are monitored for copyright purposes. The amount of information processed by a web crawler needs to be improved by using the capabilities of modern parallel processing technologies. In order to solve the problem of parallelism and improve the throughput of crawling, this work proposes to optimize Crawler4j using the Hadoop MapReduce programming model by parallelizing the processing of large input data. Crawler4j is a web crawler that retrieves useful information about the pages that it visits. Crawler4j coupled with the data and computational parallelism of the Hadoop MapReduce programming model improves the throughput and accuracy of web crawling. The experimental results demonstrate that the proposed solution achieves significant improvements with respect to performance and throughput. Hence the proposed approach intends to carve out a new methodology towards optimizing web crawling by achieving significant performance gains.
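
    No code accompanies the record; as a hedged sketch of how one crawl round can be cast in the MapReduce model (here via Hadoop Streaming in Python rather than the authors' Java/Crawler4j integration), a mapper can fetch the URLs it is handed and emit discovered out-links, which a reducer de-duplicates into the next frontier. Paths and the job invocation are illustrative.

```python
#!/usr/bin/env python3
"""Hadoop Streaming sketch of one crawl round: map fetches pages, reduce de-duplicates links.

Illustrative invocation:
  hadoop jar hadoop-streaming.jar -input frontier/ -output frontier_next/ \
    -mapper 'crawl_round.py map' -reducer 'crawl_round.py reduce' -file crawl_round.py
"""
import re
import sys
import urllib.request

LINK_RE = re.compile(rb'href="(https?://[^"#]+)"', re.IGNORECASE)

def mapper():
    for line in sys.stdin:
        url = line.strip()
        if not url:
            continue
        try:
            html = urllib.request.urlopen(url, timeout=5).read()
        except Exception:
            continue                      # skip unreachable pages
        for link in LINK_RE.findall(html):
            print(f"{link.decode()}\t1")  # key = discovered URL

def reducer():
    last = None
    for line in sys.stdin:
        url = line.rstrip("\n").split("\t")[0]
        if url != last:                   # keys arrive sorted; keep the first occurrence
            print(url)
            last = url

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```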

  20. Reduced M(atrix) theory models: ground state solutions

    CERN Document Server

    López, J L

    2015-01-01

    We propose a method to find exact ground state solutions to reduced models of the SU($N$) invariant matrix model arising from the quantization of the 11-dimensional supermembrane action in the light-cone gauge. We illustrate the method by applying it to lower dimensional toy models and for the SU(2) group. This approach could, in principle, be used to find ground state solutions to the complete 9-dimensional model and for any SU($N$) group. The Hamiltonian, the supercharges and the constraints related to the SU($2$) symmetry are built from operators that generate a multicomponent spinorial wave function. The procedure is based on representing the fermionic degrees of freedom by means of Dirac-like gamma matrices, as was already done in the first proposal of supersymmetric (SUSY) quantum cosmology. We exhibit a relation between these finite $N$ matrix theory ground state solutions and SUSY quantum cosmology wave functions giving a possible physical significance of the theory even for finite $N$.

  1. Reducing Psychiatric Inpatient Readmissions Using an Organizational Change Model.

    Science.gov (United States)

    Molfenter, Todd; Connor, Tim; Ford, James H; Hyatt, John; Zimmerman, Dan

    2016-06-01

    Thirty-day hospital readmission rates have become a quality indicator for many regulators and payers, but published accounts of reducing these rates across a patient population are lacking. This article describes and evaluates the Wisconsin Mental Health Readmissions Project, which aimed to reduce psychiatric inpatient 30-day readmission rates in Wisconsin. Nineteen county human services boards representing 23 of Wisconsin's 72 counties and 61% of the state's residential admissions participated in a statewide quality improvement collaborative from January 1, 2010 to December 31, 2013. Participants applied a standardized organizational change model, called NIATx, in the context of a multicounty quality improvement collaborative to reduce 30-day readmission rates. Readmission rates were tracked through national and state databases, using 2009 as a baseline, and analyzed using a chi-square analysis to test the proportion of means. The study team compared readmission rates of Wisconsin counties that participated in the statewide collaborative with those that did not. Between 2009 and 2013, the 30-day readmission rates in Wisconsin declined significantly for counties that participated in the project when compared to those that did not (2009-2013) [Χ2(4) = 54.503, P < .001], based on a 2.5% decline for participants vs a 0.7% decline for nonparticipants. Reductions to behavioral health inpatient readmission rates beyond individual case examples have been difficult to document. This analysis evaluates a method that Wisconsin behavioral health providers applied as part of a multicounty program addressing readmission rates. The findings highlight quality improvement program design elements and interventions to consider in reducing inpatient behavioral health readmissions, as well as the need for further research on this complex systems issue.

  2. Implementation of SNS Model for Intrusion Prevention in Wireless Local Area Network

    DEFF Research Database (Denmark)

    Isah, Abdullahi

    The thesis has proposed and implemented a so-called SNS (Social network security) model for intrusion prevention in the Wireless Local Area Network of an organization. An experimental design was used to implement and test the model at a university in Nigeria.

  3. Modeling the Capacity and Emissions Impacts of Reduced Electricity Demand. Part 1. Methodology and Preliminary Results

    Energy Technology Data Exchange (ETDEWEB)

    Coughlin, Katie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; Shen, Hongxia [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; Chan, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; McDevitt, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division; Sturges, Andrew [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Environmental Energy Technologies Division

    2013-02-07

    Policies aimed at energy conservation and efficiency have broad environmental and economic impacts. Even if these impacts are relatively small, they may be significant compared to the cost of implementing the policy. Methodologies that quantify the marginal impacts of reduced demand for energy have an important role to play in developing accurate measures of both the benefits and costs of a given policy choice. This report presents a methodology for estimating the impacts of reduced demand for electricity on the electric power sector as a whole. The approach uses the National Energy Modeling System (NEMS), a mid-range energy forecast model developed and maintained by the U.S. Department of Energy, Energy Information Administration (EIA) (DOE EIA 2013). The report is organized as follows: In the rest of this section the traditional NEMS-BT approach is reviewed and an outline of the new reduced form NEMS methodology is presented. Section 2 provides an overview of how the NEMS model works, and describes the set of NEMS-BT runs that are used as input to the reduced form approach. Section 3 presents our NEMS-BT simulation results and post-processing methods. In Section 4 we show how the NEMS-BT output can be generalized to apply to a broader set of end-uses. In Section 5 we discuss the application of this approach to policy analysis, and summarize some of the issues that will be further investigated in Part 2 of this study.

  4. The Road of ERP Success: A Framework Model for Successful ERP Implementation

    Directory of Open Access Journals (Sweden)

    Sevenpri Candra

    2011-11-01

    Full Text Available To compete in today's business environment, organizations implement technology and align it with their business strategy. One technology that is commonly implemented is Enterprise Resource Planning (ERP). This research examines the critical success factors of ERP and their impact on business outcomes. A framework model for ERP implementation success is constructed from several previous studies on ERP implementation. This study extends the research field of successful ERP implementation and its implications for business practice, offering more knowledge on ERP implementation and its relation to business strategy.

  5. Results of the 2013 UT modeling benchmark obtained with models implemented in CIVA

    Energy Technology Data Exchange (ETDEWEB)

    Toullelan, Gwénaël; Raillon, Raphaële; Chatillon, Sylvain [CEA, LIST, 91191Gif-sur-Yvette (France); Lonne, Sébastien [EXTENDE, Le Bergson, 15 Avenue Emile Baudot, 91300 MASSY (France)

    2014-02-18

    The 2013 Ultrasonic Testing (UT) modeling benchmark concerns direct echoes from side drilled holes (SDH), flat bottom holes (FBH) and corner echoes from backwall-breaking artificial notches inspected with a matrix phased array probe. This communication presents the results obtained with the models implemented in the CIVA software: the pencil model is used to compute the field radiated by the probe, the Kirchhoff approximation is applied to predict the response of FBH and notches, and the SOV (Separation Of Variables) model is used for the SDH responses. The comparison between simulated and experimental results is presented and discussed.

  6. Angle- and distance-constrained matcher with parallel implementations for model-based vision

    Science.gov (United States)

    Anhalt, David J.; Raney, Steven; Severson, William E.

    1992-02-01

    The matching component of a model-based vision system hypothesizes one-to-one correspondences between 2D image features and locations on the 3D model. As part of Wright Laboratory's ARAGTAP program [a synthetic aperture radar (SAR) object recognition program], we developed a matcher that searches for feature matches based on the hypothesized object type and aspect angle. Search is constrained by the presumed accuracy of the hypothesized aspect angle and scale. These constraints reduce the search space for matches, thus improving match performance and quality. The algorithm is presented and compared with a matcher based on geometric hashing. Parallel implementations on commercially available shared memory MIMD machines, distributed memory MIMD machines, and SIMD machines are presented and contrasted.

  7. A computational fluid dynamics model for wind simulation:model implementation and experimental validation

    Institute of Scientific and Technical Information of China (English)

    Zhuo-dong ZHANG; Ralf WIELAND; Matthias REICHE; Roger FUNK; Carsten HOFFMANN; Yong LI; Michael SOMMER

    2012-01-01

    To provide physically based wind modelling for wind erosion research at regional scale, a 3D computational fluid dynamics (CFD) wind model was developed. The model was programmed in C language based on the Navier-Stokes equations, and it is freely available as open source. Integrated with the spatial analysis and modelling tool (SAMT), the wind model has convenient input preparation and powerful output visualization. To validate the wind model, a series of experiments was conducted in a wind tunnel. A blocking inflow experiment was designed to test the performance of the model on simulation of basic fluid processes. A round obstacle experiment was designed to check if the model could simulate the influences of the obstacle on the wind field. Results show that measured and simulated wind fields have high correlations, and the wind model can simulate both the basic processes of the wind and the influences of the obstacle on the wind field. These results show the high reliability of the wind model. A digital elevation model (DEM) of an area (3800 m long and 1700 m wide) in the Xilingele grassland in Inner Mongolia (autonomous region, China) was applied to the model, and a 3D wind field has been successfully generated. The clear implementation of the model and the adequate validation by wind tunnel experiments laid a solid foundation for the prediction and assessment of wind erosion at regional scale.

  8. Design of sensor networks for instantaneous inversion of modally reduced order models in structural dynamics

    Science.gov (United States)

    Maes, K.; Lourens, E.; Van Nimmen, K.; Reynders, E.; De Roeck, G.; Lombaert, G.

    2015-02-01

    In structural dynamics, the forces acting on a structure are often not well known. System inversion techniques may be used to estimate these forces from the measured response of the structure. This paper first derives conditions for the invertibility of linear system models that apply to any instantaneous input estimation or joint input-state estimation algorithm. The conditions ensure the identifiability of the dynamic forces and system states, their stability and uniqueness. The present paper considers the specific case of modally reduced order models, which are generally obtained from a physical, finite element model, or from experimental data. It is shown how in this case the conditions can be directly expressed in terms of the modal properties of the structure. A distinction is made between input estimation and joint input-state estimation. Each of the conditions is illustrated by a conceptual example. The practical implementation is discussed for a case study where a sensor network for a footbridge is designed.
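
    As a hedged illustration of the modally reduced order models that the paper's invertibility conditions apply to (not the authors' footbridge case), the sketch below assembles a small mass-spring-damper chain, keeps its lowest modes, and forms the corresponding state-space matrices whose inputs and outputs a sensor-network design would then be checked against.

```python
import numpy as np
from scipy.linalg import eigh

def modal_reduction(M, K, C_damp, n_keep, sensor_dofs, force_dofs):
    """Reduce (M, C, K) to its first n_keep modes and return state-space matrices."""
    # Undamped eigenproblem K phi = lambda M phi -> natural frequencies and mode shapes
    lam, phi = eigh(K, M)
    phi = phi[:, :n_keep]                    # mass-normalized modes (scipy convention)
    omega2 = lam[:n_keep]
    # Modal damping (assumes C_damp is diagonalized well enough by the kept modes)
    Cm = phi.T @ C_damp @ phi
    # State x = [modal displacements; modal velocities]
    A = np.block([[np.zeros((n_keep, n_keep)), np.eye(n_keep)],
                  [-np.diag(omega2),           -Cm]])
    B = np.vstack([np.zeros((n_keep, len(force_dofs))), phi[force_dofs, :].T])
    # Displacement outputs at the sensor DOFs
    Cout = np.hstack([phi[sensor_dofs, :], np.zeros((len(sensor_dofs), n_keep))])
    return A, B, Cout

# 5-DOF chain example: keep 2 modes, sensors at DOFs 0 and 4, force applied at DOF 2
n = 5
M = np.eye(n)
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
C_damp = 0.02 * K
A, B, Cout = modal_reduction(M, K, C_damp, n_keep=2, sensor_dofs=[0, 4], force_dofs=[2])
print(A.shape, B.shape, Cout.shape)   # (4, 4) (4, 1) (2, 4)
```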

  9. Developing an active implementation model for a chronic disease management program

    Directory of Open Access Journals (Sweden)

    Margrethe Smidth

    2013-06-01

    Full Text Available Background: Introduction and diffusion of new disease management programs in healthcare is usually slow, but active theory-driven implementation seems to outperform other implementation strategies. However, we have only scarce evidence on the feasibility and real effect of such strategies in complex primary care settings where municipalities, general practitioners and hospitals should work together. The Central Denmark Region recently implemented a disease management program for chronic obstructive pulmonary disease (COPD) which presented an opportunity to test an active implementation model against the usual implementation model. The aim of the present paper is to describe the development of an active implementation model using the Medical Research Council’s model for complex interventions and the Chronic Care Model. Methods: We used the Medical Research Council’s five-stage model for developing complex interventions to design an implementation model for a disease management program for COPD. First, literature on implementing change in general practice was scrutinised and empirical knowledge was assessed for suitability. In phase I, the intervention was developed; and in phases II and III, it was tested in a block- and cluster-randomised study. In phase IV, we evaluated the feasibility for others to use our active implementation model. Results: The Chronic Care Model was identified as a model for designing efficient implementation elements. These elements were combined into a multifaceted intervention, and a timeline for the trial in a randomised study was decided upon in accordance with the five stages in the Medical Research Council’s model; this was captured in a PaTPlot, which allowed us to focus on the structure and the timing of the intervention. The implementation strategies identified as efficient were use of the Breakthrough Series, academic detailing, provision of patient material and meetings between providers. The active

  11. Developing an active implementation model for a chronic disease management program.

    Science.gov (United States)

    Smidth, Margrethe; Christensen, Morten Bondo; Olesen, Frede; Vedsted, Peter

    2013-04-01

    Introduction and diffusion of new disease management programs in healthcare is usually slow, but active theory-driven implementation seems to outperform other implementation strategies. However, we have only scarce evidence on the feasibility and real effect of such strategies in complex primary care settings where municipalities, general practitioners and hospitals should work together. The Central Denmark Region recently implemented a disease management program for chronic obstructive pulmonary disease (COPD) which presented an opportunity to test an active implementation model against the usual implementation model. The aim of the present paper is to describe the development of an active implementation model using the Medical Research Council's model for complex interventions and the Chronic Care Model. We used the Medical Research Council's five-stage model for developing complex interventions to design an implementation model for a disease management program for COPD. First, literature on implementing change in general practice was scrutinised and empirical knowledge was assessed for suitability. In phase I, the intervention was developed; and in phases II and III, it was tested in a block- and cluster-randomised study. In phase IV, we evaluated the feasibility for others to use our active implementation model. The Chronic Care Model was identified as a model for designing efficient implementation elements. These elements were combined into a multifaceted intervention, and a timeline for the trial in a randomised study was decided upon in accordance with the five stages in the Medical Research Council's model; this was captured in a PaTPlot, which allowed us to focus on the structure and the timing of the intervention. The implementation strategies identified as efficient were use of the Breakthrough Series, academic detailing, provision of patient material and meetings between providers. The active implementation model was tested in a randomised trial

  12. Stochastic reduced order models for inverse problems under uncertainty.

    Science.gov (United States)

    Warner, James E; Aquino, Wilkins; Grigoriu, Mircea D

    2015-03-01

    This work presents a novel methodology for solving inverse problems under uncertainty using stochastic reduced order models (SROMs). Given statistical information about an observed state variable in a system, unknown parameters are estimated probabilistically through the solution of a model-constrained, stochastic optimization problem. The point of departure and crux of the proposed framework is the representation of a random quantity using a SROM - a low dimensional, discrete approximation to a continuous random element that permits efficient and non-intrusive stochastic computations. Characterizing the uncertainties with SROMs transforms the stochastic optimization problem into a deterministic one. The non-intrusive nature of SROMs facilitates efficient gradient computations for random vector unknowns and relies entirely on calls to existing deterministic solvers. Furthermore, the method is naturally extended to handle multiple sources of uncertainty in cases where state variable data, system parameters, and boundary conditions are all considered random. The new and widely-applicable SROM framework is formulated for a general stochastic optimization problem in terms of an abstract objective function and constraining model. For demonstration purposes, however, we study its performance in the specific case of inverse identification of random material parameters in elastodynamics. We demonstrate the ability to efficiently recover random shear moduli given material displacement statistics as input data. We also show that the approach remains effective for the case where the loading in the problem is random as well.
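
    As a minimal, hedged illustration of the central object only (not the authors' inverse-problem framework): a stochastic reduced order model replaces a continuous random quantity by a small set of samples with optimized probabilities. The sketch below fixes the sample locations and fits the probabilities so that low-order moments and the CDF at those points are approximately matched; the target distribution and the error weighting are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

target = lognorm(s=0.4, scale=1.0)                        # the random quantity to compress
x = np.linspace(target.ppf(0.05), target.ppf(0.95), 6)    # fixed SROM sample locations

def srom_error(p):
    """Mismatch in mean, second moment and CDF values at the sample points."""
    e_mean = (p @ x - target.mean())**2
    e_m2 = (p @ x**2 - target.moment(2))**2
    e_cdf = np.sum((np.cumsum(p) - target.cdf(x))**2)
    return e_mean + e_m2 + e_cdf

cons = ({"type": "eq", "fun": lambda p: p.sum() - 1.0},)  # probabilities sum to one
res = minimize(srom_error, x0=np.full(len(x), 1 / len(x)),
               bounds=[(0.0, 1.0)] * len(x), constraints=cons)
print("samples:       ", x.round(3))
print("probabilities: ", res.x.round(3))
```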

  13. High-Fidelity Battery Model for Model Predictive Control Implemented into a Plug-In Hybrid Electric Vehicle

    Directory of Open Access Journals (Sweden)

    Nicolas Sockeel

    2017-04-01

    Full Text Available Power management strategies have impacts on fuel economy and greenhouse gas (GHG) emissions, as well as effects on the durability of power-train components. This is why different off-line and real-time optimal control approaches are being developed. However, real-time control seems to be more attractive than off-line control because it can be directly implemented for managing power and energy flows inside an actual vehicle. One interesting illustration of these power management strategies is the model predictive control (MPC) based algorithm. Inside an MPC, a cost function is optimized while system constraints are validated in real time. The MPC algorithm relies on dynamic models of the vehicle and the battery. The complexity and accuracy of the battery model are usually neglected to benefit the development of new cost functions or better MPC algorithms. The contribution of this manuscript consists of developing and evaluating a high-fidelity battery model of a plug-in hybrid electric vehicle (PHEV) that has been used for MPC. Via empirical work and simulation, the impact of a high-fidelity battery model has been evaluated and compared to a simpler model in the context of MPC. It is proven that the new battery model reduces the absolute voltage, state of charge (SoC), and battery power loss errors by factors of 3.2, 1.9 and 2.1 on average, respectively, compared to the simpler battery model.
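
    The paper's high-fidelity model is not reproduced in the abstract; for orientation only, the sketch below shows the kind of simpler equivalent-circuit (Thevenin) battery model that MPC studies often start from, with a coulomb-counting SoC state and one RC polarization branch. All parameter values and the open-circuit-voltage curve are placeholders, not the PHEV data from the paper.

```python
import numpy as np

class TheveninBattery:
    """1-RC equivalent-circuit battery model (illustrative parameters)."""

    def __init__(self, capacity_ah=40.0, r0=0.01, r1=0.015, c1=2400.0):
        self.q = capacity_ah * 3600.0     # capacity in coulombs
        self.r0, self.r1, self.c1 = r0, r1, c1
        self.soc, self.v_rc = 0.8, 0.0    # states: SoC and RC-branch voltage

    def ocv(self, soc):
        # Placeholder open-circuit-voltage curve for a ~360 V pack
        return 320.0 + 60.0 * soc

    def step(self, current_a, dt):
        """Advance one step; positive current = discharge. Returns terminal voltage."""
        self.soc -= current_a * dt / self.q                     # coulomb counting
        alpha = np.exp(-dt / (self.r1 * self.c1))               # exact RC discretization
        self.v_rc = alpha * self.v_rc + self.r1 * (1 - alpha) * current_a
        return self.ocv(self.soc) - self.v_rc - self.r0 * current_a

batt = TheveninBattery()
for _ in range(60):                       # one minute at 50 A discharge
    v = batt.step(current_a=50.0, dt=1.0)
print(f"SoC={batt.soc:.4f}, terminal voltage={v:.1f} V")
```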

  14. Implementation and Development of an Eulerian Spray Model for CFD simulations of diesel Sprays

    OpenAIRE

    2016-01-01

    [EN] The main objective of this work is the modeling of diesel sprays under engine conditions, including the atomization, transport and evaporation processes pivotal in the diesel spray formation and its development. For this purpose, an Eulerian single fluid model, embedded in a RANS environment, is implemented in the CFD platform OpenFOAM. The modeling approach implemented here is based on the Σ-Y model. The model is founded on the assumption of flow scales separation. In actual i...

  15. Determinations of |V(ub)| from inclusive semileptonic decays with reduced model dependence.

    Science.gov (United States)

    Aubert, B; Barate, R; Boutigny, D; Couderc, F; Karyotakis, Y; Lees, J P; Poireau, V; Tisserand, V; Zghiche, A; Grauges, E; Palano, A; Pappagallo, M; Pompili, A; Chen, J C; Qi, N D; Rong, G; Wang, P; Zhu, Y S; Eigen, G; Ofte, I; Stugu, B; Abrams, G S; Battaglia, M; Best, D S; Brown, D N; Button-Shafer, J; Cahn, R N; Charles, E; Day, C T; Gill, M S; Gritsan, A V; Groysman, Y; Jacobsen, R G; Kadel, R W; Kadyk, J A; Kerth, L T; Kolomensky, Yu G; Kukartsev, G; Lynch, G; Mir, L M; Oddone, P J; Orimoto, T J; Pripstein, M; Roe, N A; Ronan, M T; Wenzel, W A; Barrett, M; Ford, K E; Harrison, T J; Hart, A J; Hawkes, C M; Morgan, S E; Watson, A T; Fritsch, M; Goetzen, K; Held, T; Koch, H; Lewandowski, B; Pelizaeus, M; Peters, K; Schroeder, T; Steinke, M; Boyd, J T; Burke, J P; Cottingham, W N; Walker, D; Cuhadar-Donszelmann, T; Fulsom, B G; Hearty, C; Knecht, N S; Mattison, T S; McKenna, J A; Khan, A; Kyberd, P; Saleem, M; Teodorescu, L; Blinov, A E; Blinov, V E; Bukin, A D; Druzhinin, V P; Golubev, V B; Kravchenko, E A; Onuchin, A P; Serednyakov, S I; Skovpen, Yu I; Solodov, E P; Yushkov, A N; Bondioli, M; Bruinsma, M; Chao, M; Curry, S; Eschrich, I; Kirkby, D; Lankford, A J; Lund, P; Mandelkern, M; Mommsen, R K; Roethel, W; Stoker, D P; Abachi, S; Buchanan, C; Foulkes, S D; Gary, J W; Long, O; Shen, B C; Wang, K; Zhang, L; del Re, D; Hadavand, H K; Hill, E J; MacFarlane, D B; Paar, H P; Rahatlou, S; Sharma, V; Berryhill, J W; Campagnari, C; Cunha, A; Dahmes, B; Hong, T M; Mazur, M A; Richman, J D; Beck, T W; Eisner, A M; Flacco, C J; Heusch, C A; Kroseberg, J; Lockman, W S; Nesom, G; Schalk, T; Schumm, B A; Seiden, A; Spradlin, P; Williams, D C; Wilson, M G; Albert, J; Chen, E; Dubois-Felsmann, G P; Dvoretskii, A; Hitlin, D G; Minamora, J S; Narsky, I; Piatenko, T; Porter, F C; Ryd, A; Samuel, A; Andreassen, R; Mancinelli, G; Meadows, B T; Sokoloff, M D; Blanc, F; Bloom, P C; Chen, S; Ford, W T; Hirschauer, J F; Kreisel, A; Nauenberg, U; Olivas, A; Ruddick, W O; Smith, J G; Ulmer, K A; Wagner, S R; Zhang, J; Chen, A; Eckhart, E A; Soffer, A; Toki, W H; Wilson, R J; Winklmeier, F; Zeng, Q; Altenburg, D D; Feltresi, E; Hauke, A; Spaan, B; Brandt, T; Dickopp, M; Klose, V; Lacker, H M; Nogowski, R; Otto, S; Petzold, A; Schubert, J; Schubert, K R; Schwierz, R; Sundermann, J E; Bernard, D; Bonneaud, G R; Grenier, P; Latour, E; Schrenk, S; Thiebaux, Ch; Vasileiadis, G; Verderi, M; Bard, D J; Clark, P J; Gradl, W; Muheim, F; Playfer, S; Xie, Y; Andreotti, M; Bettoni, D; Bozzi, C; Calabrese, R; Cibinetto, G; Luppi, E; Negrini, M; Piemontese, L; Anulli, F; Baldini-Ferroli, R; Calcaterra, A; de Sangro, R; Finocchiaro, G; Patteri, P; Peruzzi, I M; Piccolo, M; Zallo, A; Buzzo, A; Capra, R; Contri, R; Lo Vetere, M; Macri, M M; Monge, M R; Passaggio, S; Patrignani, C; Robutti, E; Santroni, A; Tosi, S; Brandenburg, G; Chaisanguanthum, K S; Morii, M; Wu, J; Dubitzky, R S; Langenegger, U; Marks, J; Schenk, S; Uwer, U; Bhimji, W; Bowerman, D A; Dauncey, P D; Egede, U; Flack, R L; Gaillard, J R; Nash, J A; Nikolich, M B; Vazquez, W Panduro; Chai, X; Charles, M J; Mader, W F; Mallik, U; Ziegler, V; Cochran, J; Crawley, H B; Dong, L; Eyges, V; Meyer, W T; Prell, S; Rosenberg, E I; Rubin, A E; Yi, J I; Schott, G; Arnaud, N; Davier, M; Giroux, X; Grosdidier, G; Höcker, A; Diberder, F Le; Lepeltier, V; Lutz, A M; Oyanguren, A; Petersen, T C; Pruvot, S; Rodier, S; Roudeau, P; Schune, M H; Stocchi, A; Wang, W F; Wormser, G; Cheng, C H; Lange, D J; Wright, D M; Bevan, A J; Chavez, C A; Forster, I J; Fry, J R; Gabathuler, 
E; Gamet, R; George, K A; Hutchcroft, D E; Parry, R J; Payne, D J; Schofield, K C; Touramanis, C; Di Lodovico, F; Menges, W; Sacco, R; Brown, C L; Cowan, G; Flaecher, H U; Green, M G; Hopkins, D A; Jackson, P S; McMahon, T R; Ricciardi, S; Salvatore, F; Brown, D N; Davis, C L; Allison, J; Barlow, N R; Barlow, R J; Chia, Y M; Edgar, C L; Kelly, M P; Lafferty, G D; Naisbit, M T; Williams, J C; Chen, C; Hulsbergen, W D; Jawahery, A; Kovalskyi, D; Lae, C K; Roberts, D A; Simi, G; Blaylock, G; Dallapiccola, C; Hertzbach, S S; Kofler, R; Li, X; Moore, T B; Saremi, S; Staengle, H; Willocq, S Y; Cowan, R; Koeneke, K; Sciolla, G; Sekula, S J; Spitznagel, M; Taylor, F; Yamamoto, R K; Kim, H; Patel, P M; Robertson, S H; Lazzaro, A; Lombardo, V; Palombo, F F; Bauer, J M; Cremaldi, L; Eschenburg, V; Godang, R; Kroeger, R; Reidy, J; Sanders, D A; Summers, D J; Zhao, H W; Brunet, S; Côté, D; Taras, P; Viaud, F B; Nicholson, H; Cavallo, N; De Nardo, G; Fabozzi, F; Gatto, C; Lista, L; Monorchio, D; Paolucci, P; Piccolo, D; Sciacca, C; Baak, M; Bulten, H; Raven, G; Snoek, H L; Wilden, L; Jessop, C P; LoSecco, J M; Allmendinger, T; Benelli, G; Gan, K K; Honscheid, K; Hufnagel, D; Jackson, P D; Kagan, H; Kass, R; Pulliam, T; Rahimi, A M; Ter-Antonyan, R; Wong, Q K; Blount, N L; Brau, J; Frey, R; Igonkina, O; Lu, M; Potter, C T; Rahmat, R; Sinev, N B; Strom, D; Strube, J; Torrence, E; Galeazzi, F; Margoni, M; Morandin, M; Posocco, M; Rotondo, M; Simonetto, F; Stroili, R; Voci, C; Benayoun, M; Chauveau, J; David, P; Del Buono, L; de la Vaissière, Ch; Hamon, O; Hartfiel, B L; John, M J J; Leruste, Ph; Malclès, J; Ocariz, J; Roos, L; Therin, G; Behera, P K; Gladney, L; Panetta, J; Biasini, M; Covarelli, R; Pacetti, S; Pioppi, M; Angelini, C; Batignani, G; Bettarini, S; Bucci, F; Calderini, G; Carpinelli, M; Cenci, R; Forti, F; Giorgi, M A; Lusiani, A; Marchiori, G; Morganti, M; Neri, N; Paoloni, E; Rama, M; Rizzo, G; Walsh, J; Haire, M; Judd, D; Wagoner, D E; Biesiada, J; Danielson, N; Elmer, P; Lau, Y P; Lu, C; Olsen, J; Smith, A J S; Telnov, A V; Bellini, F; Cavoto, G; D'Orazio, A; Di Marco, E; Faccini, R; Ferrarotto, F; Ferroni, F; Gaspero, M; Gioi, L Li; Mazzoni, M A; Morganti, S; Piredda, G; Polci, F; Tehrani, F Safai; Voena, C; Schröder, H; Waldi, R; Adye, T; De Groot, N; Franek, B; Gopal, G P; Olaiya, E O; Wilson, F F; Aleksan, R; Emery, S; Gaidot, A; Ganzhur, S F; Graziani, G; de Monchenault, G Hamel; Kozanecki, W; Legendre, M; Mayer, B; Vasseur, G; Yèche, Ch; Zito, M; Purohit, M V; Weidemann, A W; Wilson, J R; Abe, T; Allen, M T; Aston, D; Bartoldus, R; Berger, N; Boyarski, A M; Buchmueller, O L; Claus, R; Coleman, J P; Convery, M R; Cristinziani, M; Dingfelder, J C; Dong, D; Dorfan, J; Dujmic, D; Dunwoodie, W; Fan, S; Field, R C; Glanzman, T; Gowdy, S J; Hadig, T; Halyo, V; Hast, C; Hryn'ova, T; Innes, W R; Kelsey, M H; Kim, P; Kocian, M L; Leith, D W G S; Libby, J; Luitz, S; Luth, V; Lynch, H L; Marsiske, H; Messner, R; Muller, D R; O'Grady, C P; Ozcan, V E; Perazzo, A; Perl, M; Ratcliff, B N; Roodman, A; Salnikov, A A; Schindler, R H; Schwiening, J; Snyder, A; Stelzer, J; Su, D; Sullivan, M K; Suzuki, K; Swain, S K; Thompson, J M; Va'vra, J; van Bakel, N; Weaver, M; Weinstein, A J R; Wisniewski, W J; Wittgen, M; Wright, D H; Yarritu, A K; Yi, K; Young, C C; Burchat, P R; Edwards, A J; Majewski, S A; Petersen, B A; Roat, C; Ahmed, S; Alam, M S; Bula, R; Ernst, J A; Pan, B; Saeed, M A; Wappler, F R; Zain, S B; Bugg, W; Krishnamurthy, M; Spanier, S M; Eckmann, R; Ritchie, J L; Satpathy, A; Schwitters, R 
F; Izen, J M; Kitayama, I; Lou, X C; Ye, S; Bianchi, F; Bona, M; Gallo, F; Gamba, D; Bomben, M; Bosisio, L; Cartaro, C; Cossutti, F; Ricca, G Della; Dittongo, S; Grancagnolo, S; Lanceri, L; Vitale, L; Azzolini, V; Martinez-Vidal, F; Panvini, R S; Banerjee, Sw; Bhuyan, B; Brown, C M; Fortin, D; Hamano, K; Kowalewski, R; Nugent, I M; Roney, J M; Sobie, R J; Back, J J; Harrison, P F; Latham, T E; Mohanty, G B; Band, H R; Chen, X; Cheng, B; Dasu, S; Datta, M; Eichenbaum, A M; Flood, K T; Graham, M T; Hollar, J J; Johnson, J R; Kutter, P E; Li, H; Liu, R; Mellado, B; Mihalyi, A; Mohapatra, A K; Pan, Y; Pierini, M; Prepost, R; Tan, P; Wu, S L; Yu, Z; Neal, H

    2006-06-09

    We report two novel determinations of |V(ub)| with reduced model dependence, based on measurements of the mass distribution of the hadronic system in semileptonic B decays. Events are selected by fully reconstructing the decay of one B meson and identifying a charged lepton from the decay of the other B meson from Upsilon(4S)-->BB events. In one approach, we combine the inclusive B-->X(u) l nu rate, integrated up to a maximum hadronic mass, with the B-->X(s) gamma photon energy spectrum. We obtain |V(ub)| = (4.43 +/- 0.38(stat) +/- 0.25(syst) +/- 0.29(theo)) x 10^-3. In another approach we measure the total B-->X(u) l nu rate over the full phase space and find |V(ub)| = (3.84 +/- 0.70(stat) +/- 0.30(syst) +/- 0.10(theo)) x 10^-3.

  16. Computational design of patterned interfaces using reduced order models

    Science.gov (United States)

    Vattré, A. J.; Abdolrahim, N.; Kolluri, K.; Demkowicz, M. J.

    2014-01-01

    Patterning is a familiar approach for imparting novel functionalities to free surfaces. We extend the patterning paradigm to interfaces between crystalline solids. Many interfaces have non-uniform internal structures comprised of misfit dislocations, which in turn govern interface properties. We develop and validate a computational strategy for designing interfaces with controlled misfit dislocation patterns by tailoring interface crystallography and composition. Our approach relies on a novel method for predicting the internal structure of interfaces: rather than obtaining it from resource-intensive atomistic simulations, we compute it using an efficient reduced order model based on anisotropic elasticity theory. Moreover, our strategy incorporates interface synthesis as a constraint on the design process. As an illustration, we apply our approach to the design of interfaces with rapid, 1-D point defect diffusion. Patterned interfaces may be integrated into the microstructure of composite materials, markedly improving performance. PMID:25169868

  17. Fragile DNA Repair Mechanism Reduces Ageing in Multicellular Model

    DEFF Research Database (Denmark)

    Bendtsen, Kristian Moss; Juul, Jeppe Søgaard; Trusina, Ala

    2012-01-01

    DNA damages, as well as mutations, increase with age. It is believed that these result from increased genotoxic stress and decreased capacity for DNA repair. The two causes are not independent, DNA damage can, for example, through mutations, compromise the capacity for DNA repair, which in turn...... to DNA damage can undergo full repair, go apoptotic, or accumulate mutations thus reducing DNA repair capacity. Our model predicts that at the tissue level repair rate does not continuously decline with age, but instead has a characteristic extended period of high and non-declining DNA repair capacity...... of compromised cells, thus freeing the space for healthy peers. This finding might be a first step toward understanding why a mutation in single DNA repair protein (e.g. Wrn or Blm) is not buffered by other repair proteins and therefore, leads to severe ageing disorders...

  18. Community perspective on a model to reduce teenage pregnancy.

    Science.gov (United States)

    Tabi, Marian M

    2002-11-01

    Qualitative methodology was used to validate elements of an educational career youth developmental model (ECYDM) to reduce teenage pregnancy among African American teens in two inner city urban communities. The specific aims of the study were to gain understanding of the factors contributing to teenage pregnancy and to identify a pregnancy prevention programme relevant to the needs of African American youth. Data were collected from a convenience purposive sample of 43 African American teens and adults. Teen participants included males and non-pregnant, pregnant, and parent females. Adult participants included parents, school staff, and community clergies. Data were collected using demographic questionnaires, structured individual and focus group interviews. Approval from the Institutional Review Board was obtained before conducting the study. Findings supported elements of the ECYDM as a pregnancy prevention programme for African American teens in inner city urban communities. Participants identified an educational-career motivational programme that utilizes mentoring to teach, counsel, and provide information to improve youths' health, education, career, and social outcomes as the pregnancy prevention programme for youth in urban communities. These findings have important implications for future programme design and research. Teenage pregnancy must be addressed within the context of the individual, family, and community. Community partnership and collaboration of resources is necessary to reduce teenage pregnancy. Educational-career programmes are needed to provide information and knowledge to young men and women to make sound informed decisions. Continued qualitative research is also needed to gain understanding of pregnancy prevention programmes.

  19. Modeling and Implementation of PID Control for Autonomous Robots

    Science.gov (United States)

    2007-06-01

    [Only front matter and reference fragments were extracted for this record: a thesis by Todd A. Williamson, June 2007; Thesis Advisor: Richard Harkins; Second Reader: Peter Crooker. Cited works include Richard Dorf, Modern Control Systems (Addison-Wesley, 1995) and Rodrigo Cabezas, "Design of a Bore Sight Camera for the…".]
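
    Since only front matter survives of this record and the thesis topic is PID control for autonomous robots, a generic discrete PID loop is sketched below for orientation; the gains, the anti-windup scheme and the toy plant are illustrative assumptions unrelated to the thesis hardware.

```python
class PID:
    """Discrete PID controller with a simple anti-windup clamp (illustrative gains)."""

    def __init__(self, kp, ki, kd, out_limits=(-1.0, 1.0)):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.lo, self.hi = out_limits
        self.integral, self.prev_error = 0.0, None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        if u > self.hi or u < self.lo:        # clamp output and stop integrating
            self.integral -= error * dt
            u = max(self.lo, min(self.hi, u))
        return u

# Toy heading control: first-order plant driven toward a 1.0 rad setpoint
pid = PID(kp=2.0, ki=0.5, kd=0.1)
heading, dt = 0.0, 0.02
for _ in range(500):
    u = pid.update(setpoint=1.0, measurement=heading, dt=dt)
    heading += u * dt                        # crude plant model
print(f"final heading = {heading:.3f} rad")
```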

  20. Cost Estimates for Designing and Implementing a Novel Team Care Model for Chronically Ill Patients.

    Science.gov (United States)

    Panattoni, Laura; Dillon, Ellis; Hurlimann, Lily; Durbin, Meg; Tai-Seale, Ming

    2017-09-25

    Little is known about the cost of implementing chronic care models. We estimate the human resource cost of implementing a novel team-based chronic care model, "Champion," at a large multispecialty group practice. We used activity-based costing to calculate costs from development through rollout and stabilization in 1 clinic with 12 000 chronic care patients. Data analyzed included Microsoft Outlook meeting metadata, supporting documents, and 2014 employee wages. Implementation took more than 29 months, involved 168 employees, and cost the organization $2 304 787. Payers may need to consider a mixed-payment model to support both the implementation and maintenance costs of team-based chronic care.

  1. A reduced-dynamics variational approach for the assimilation of altimeter data into eddy-resolving ocean models

    Science.gov (United States)

    Yu, Peng; Morey, Steven L.; O'Brien, James J.

    A new method of assimilating sea surface height (SSH) data into ocean models is introduced and tested. Many features observable by satellite altimetry are approximated by the first baroclinic mode over much of the ocean, especially in the lower (but non-equatorial) and mid latitude regions. Based on this dynamical trait, a reduced-dynamics adjoint technique is developed and implemented with a three-dimensional model using vertical normal mode decomposition. To reduce the complexity of the variational data assimilation problem, the adjoint equations are based on a one-active-layer reduced-gravity model, which approximates the first baroclinic mode, as opposed to the full three-dimensional model equations. The reduced dimensionality of the adjoint model leads to lower computational cost than a traditional variational data assimilation algorithm. The technique is applicable to regions of the ocean where the SSH variability is dominated by the first baroclinic mode. The adjustment of the first baroclinic mode model fields dynamically transfers the SSH information to the deep ocean layers. The technique is developed in a modular fashion that can be readily implemented with many three-dimensional ocean models. For this study, the method is tested with the Navy Coastal Ocean Model (NCOM) configured to simulate the Gulf of Mexico.

  2. A Model Of The Underlying Philosophy And Criteria For Effective Implementation Of Performance Management

    Directory of Open Access Journals (Sweden)

    C. M. Whitford

    2006-11-01

    Full Text Available The objective of this study was to develop a model that assists organisations in implementing performance management effectively. A model describing the philosophical paradigm underpinning best practice in performance management and the criteria for effective implementation of performance management was developed. The sample used in this study was a convenience sample of 615 employees. Exploratory factor analysis revealed three reliable philosophical dimensions. Moderate correlations were found between the three dimensions and some of the implementation criteria.

  3. Reducing Ambulance Diversion at Hospital and Regional Levels: Systemic Review of Insights from Simulation Models

    Directory of Open Access Journals (Sweden)

    M Kit Delgado

    2013-09-01

    Full Text Available Introduction: Optimal solutions for reducing diversion without worsening emergency department (ED) crowding are unclear. We performed a systematic review of published simulation studies to identify: 1) the tradeoff between ambulance diversion and ED wait times; 2) the predicted impact of patient flow interventions on reducing diversion; and 3) the optimal regional strategy for reducing diversion. Methods: Data Sources: Systematic review of articles using MEDLINE, Inspec, Scopus. Additional studies identified through bibliography review, Google Scholar, and scientific conference proceedings. Study Selection: Only simulations modeling ambulance diversion as a result of ED crowding or inpatient capacity problems were included. Data extraction: Independent extraction by two authors using predefined data fields. Results: We identified 5,116 potentially relevant records; 10 studies met inclusion criteria. In models that quantified the relationship between ED throughput times and diversion, diversion was found to only minimally improve ED waiting room times. Adding holding units for inpatient boarders and ED-based fast tracks, improving lab turnaround times, and smoothing elective surgery caseloads were found to reduce diversion considerably. While two models found a cooperative agreement between hospitals is necessary to prevent defensive diversion behavior by a hospital when a nearby hospital goes on diversion, one model found there may be more optimal solutions for reducing region-wide wait times than a regional ban on diversion. Conclusion: Smoothing elective surgery caseloads, adding ED fast tracks as well as holding units for inpatient boarders, improving ED lab turnaround times, and implementing regional cooperative agreements among hospitals. [West J Emerg Med. 2013;14(5):489-498.]

  4. Ultraefficient reduced model for countercurrent two-layer flows

    Science.gov (United States)

    Lavalle, Gianluca; Vila, Jean-Paul; Lucquiaud, Mathieu; Valluri, Prashant

    2017-01-01

    We investigate the dynamics of two superposed layers with density contrast flowing countercurrent inside a channel, when the lower layer is much thinner than the wavelength of interfacial waves. We apply a low-dimensional film model to the bottom (heavier) layer and introduce a fast and efficient method to predict the onset of flow reversal in this phase. We study three vertical scenarios with different applied pressure gradients and compare the temporal growth rates of linear and weakly nonlinear waves to the Orr-Sommerfeld problem and to the weakly nonlinear theory, respectively. At the loading point, i.e., when a large wave hump stands at the interface, our spatiotemporal analysis shows that the system is absolutely unstable. We then present profiles of nonlinear saturated waves, pressure field, and streamline distribution in agreement with direct numerical simulation. The reduced model presented here allows us to explore the effect of the upper-layer speed on the wave pattern, showing that the wave profile is very sensitive when the mean film thickness, rather than the liquid flow rate, is maintained constant in the simulation. In addition, we show the strong effect of surface tension on both the maximum wave hump and the crest steepness before the loading point. Finally, we reveal how the nonlinear wave speed affects the vortex distribution within the lower layer by analyzing the stream function under different scenarios.

  5. Design and Implement a MapReduce Framework for Executing Standalone Software Packages in Hadoop-based Distributed Environments

    Directory of Open Access Journals (Sweden)

    Chao-Chun Chen

    2013-12-01

    Full Text Available Hadoop MapReduce is a programming model for designing automatically scalable distributed computing applications. It provides developers with an effective environment for attaining automatic parallelization. However, most existing manufacturing systems are arduous and restrictive to migrate to a MapReduce private cloud, due to platform incompatibility and the tremendous complexity of system reconstruction. To increase the efficiency of manufacturing systems with minimal modification of existing systems, we design a framework in this thesis, called MC-Framework: Multi-uses-based Cloudizing-Application Framework. It provides a simple interface for users to fairly execute requested tasks that work with traditional standalone software packages in MapReduce-based private cloud environments. Moreover, this thesis focuses on multiuser workloads, for which the default Hadoop scheduling scheme, i.e., FIFO, would increase delay. Hence, we also propose a new scheduling mechanism, called Job-Sharing Scheduling, to explore and fairly share the jobs among machines in the MapReduce-based private cloud. Then, we prototype an experimental virtual-metrology module of a manufacturing system as a case study to verify and analyze the proposed MC-Framework. The results of our experiments indicate that the proposed framework greatly improves time performance compared with the original package.
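
    The MC-Framework code itself is not given; one common way to run an existing standalone package under the MapReduce model, sketched below under that assumption, is a streaming-style mapper that invokes the legacy executable through a subprocess for each input record and re-emits its result as key/value pairs, leaving distribution and scheduling to the cluster. The executable name and its arguments are placeholders.

```python
#!/usr/bin/env python3
"""Streaming-style mapper that wraps a standalone tool (placeholder name: 'vm_tool').

Each input line is treated as one work item; the legacy executable is run
unchanged and its outcome is re-emitted as key/value pairs for the reducer.
"""
import shlex
import subprocess
import sys

TOOL = "vm_tool"                      # hypothetical legacy executable

def main():
    for line in sys.stdin:
        item = line.strip()
        if not item:
            continue
        try:
            proc = subprocess.run([TOOL] + shlex.split(item),
                                  capture_output=True, text=True, timeout=300)
            status = "ok" if proc.returncode == 0 else "failed"
            first = proc.stdout.splitlines()[0] if proc.stdout else ""
        except (OSError, subprocess.TimeoutExpired) as exc:
            status, first = "error", str(exc)
        # key = work item, value = status and first line of the tool's output
        print(f"{item}\t{status}\t{first}")

if __name__ == "__main__":
    main()
```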

  6. A Model of Microteaching Lesson Study Implementation in the Prospective History Teacher Education

    Science.gov (United States)

    Utami, Indah Wahyu Puji; Mashuri; Nafi'ah, Ulfatun

    2016-01-01

    Microteaching lesson study is a model to improve prospective teacher quality by incorporating several elements of microteaching and lesson study. This study concerns the implementation of microteaching lesson study in prospective history teacher education. The microteaching lesson study model implemented in this study consists of three stages: plan,…

  7. Flexible Design and Implementation of Cognitive Models for Predicting Pilot Errors in Cockpit Design

    NARCIS (Netherlands)

    Diggelen, J. van; Janssen, J.; Mioch, T.

    2010-01-01

    This paper describes an integrated design and implementation framework for cognitive models in complex task environments. We propose a task- and human-centered development methodology for deriving the cognitive models, and present a goal-based framework for implementing them. We illustrate our approa

  8. A hybrid architecture for the implementation of the Athena neural net model

    Science.gov (United States)

    Koutsougeras, C.; Papachristou, C.

    1989-01-01

    The implementation of an earlier introduced neural net model for pattern classification is considered. Data flow principles are employed in the development of a machine that efficiently implements the model and can be useful for real time classification tasks. Further enhancement with optical computing structures is also considered.

  9. Models and methods for design and implementation of computer based control and monitoring systems for production cells

    DEFF Research Database (Denmark)

    Lynggaard, Hans Jørgen Birk

    This dissertation is concerned with the engineering, i.e. the designing and making, of industrial cell control systems. The focus is on automated robot welding cells in the shipbuilding industry. The industrial research project defines models and methods for design and implementation of computer...... through the implementation of two cell control systems for robot welding cells in production at Odense Steel Shipyard. It is concluded that cell control technology provides for increased performance in production systems, and that the Cell Control Engineering concept reduces the effort for providing high...

  10. State reduced order models for the modelling of the thermal behavior of buildings

    Energy Technology Data Exchange (ETDEWEB)

    Menezo, Christophe; Bouia, Hassan; Roux, Jean-Jacques; Depecker, Patrick [Institute National de Sciences Appliquees de Lyon, Villeurbanne Cedex, (France). Centre de Thermique de Lyon (CETHIL). Equipe Thermique du Batiment]. E-mail: menezo@insa-cethil-etb.insa-lyon.fr; bouia@insa-cethil-etb.insa-lyon.fr; roux@insa-cethil-etb.insa-lyon.fr; depecker@insa-cethil-etb.insa-lyon.fr

    2000-07-01

    This work is devoted to the field of building physics and to the reduction of heat conduction models. The aim is to enlarge the model libraries of heat and mass transfer codes by limiting the considerable dimensions reached by the numerical systems when modelling a multizone building. We show that the balanced realization technique, specifically adapted to the coupling of reduced order models with the other thermal phenomena, turns out to be very efficient. (author)

  11. Numerical implementation of a state variable model for friction

    Energy Technology Data Exchange (ETDEWEB)

    Korzekwa, D.A. [Los Alamos National Lab., NM (United States); Boyce, D.E. [Cornell Univ., Ithaca, NY (United States)

    1995-03-01

    A general state variable model for friction has been incorporated into a finite element code for viscoplasticity. A contact area evolution model is used in a finite element model of a sheet forming friction test. The results show that a state variable model can be used to capture complex friction behavior in metal forming simulations. It is proposed that simulations can play an important role in the analysis of friction experiments and the development of friction models.

  12. Reduced-dimension model of liquid plug propagation in tubes

    Science.gov (United States)

    Fujioka, Hideki; Halpern, David; Ryans, Jason; Gaver, Donald P.

    2016-09-01

    We investigate the flow resistance caused by the propagation of a liquid plug in a liquid-lined tube and propose a simple semiempirical formula for the flow resistance as a function of the plug length, the capillary number, and the precursor film thickness. These formulas are based on computational investigations of three key contributors to the plug resistance: the front meniscus, the plug core, and the rear meniscus. We show that the nondimensional flow resistance in the front meniscus varies as a function of the capillary number and the precursor film thickness. For a fixed capillary number, the flow resistance increases with decreasing precursor film thickness. The flow in the core region is modeled as Poiseuille flow and the flow resistance is a linear function of the plug length. For the rear meniscus, the flow resistance increases monotonically with decreasing capillary number. We investigate the maximum mechanical stress behavior at the wall, such as the wall pressure gradient, the wall shear stress, and the wall shear stress gradient, and propose empirical formulas for the maximum stresses in each region. These wall mechanical stresses vary as a function of the capillary number: For semi-infinite fingers of air propagating through pulmonary airways, the epithelial cell damage correlates with the pressure gradient. However, for shorter plugs the front meniscus may provide substantial mechanical stresses that could modulate this behavior and provide a major cause of cell injury when liquid plugs propagate in pulmonary airways. Finally, we propose that the reduced-dimension models developed herein may be of importance for the creation of large-scale models of interfacial flows in pulmonary networks, where full computational fluid dynamics calculations are untenable.

  13. Spindle speed variation technique in turning operations: Modeling and real implementation

    Science.gov (United States)

    Urbikain, G.; Olvera, D.; de Lacalle, L. N. López; Elías-Zúñiga, A.

    2016-11-01

    Chatter is still one of the most challenging problems in machining vibrations. Researchers have focused their efforts on preventing, avoiding or reducing chatter vibrations by introducing more accurate predictive physical methods. Among these, techniques based on varying the rotational speed of the spindle (spindle speed variation, or SSV) have gained great relevance. However, several problems need to be addressed for technical and practical reasons. On the one hand, SSV can generate harmful overheating of the spindle, especially at high speeds. On the other hand, the machine may be unable to perform the interpolation properly. Moreover, it is not trivial to select the most appropriate tuning parameters. This paper studies the real implementation of the SSV technique in turning systems. First, a stability model based on perturbation theory was developed for simulation purposes. Second, the procedure to realistically implement the technique in a conventional turning center was developed and tested. The balance between improved stability margins and acceptable behavior of the spindle is ensured by energy consumption measurements. The mathematical model shows good agreement with experimental cutting tests.

  14. Interpretive Structural Modeling Of Implementation Enablers For Just In Time In ICPI

    Directory of Open Access Journals (Sweden)

    Nitin Upadhye

    2014-12-01

    Full Text Available Indian Corrugated Packaging Industries (ICPI) face tough competition in terms of product cost, quality, product delivery, flexibility and, ultimately, customer demand. As their customers, mostly OEMs, are asking for just-in-time (JIT) deliveries, ICPI must implement JIT in their systems. The term "JIT" denotes a system that utilizes fewer inputs to create the same outputs as those created by a traditional mass production system, while contributing increased variety for the end customer (Womack et al. 1990). JIT focuses on abolishing or reducing muda ("muda", the Japanese word for waste) and on maximizing or fully utilizing activities that add value from the customer's perspective. There is a lack of awareness in identifying the right enablers of JIT implementation. This study has therefore identified enablers from a literature review and from experts' opinions in corrugated packaging industries, and developed a relationship matrix to examine the driving power and dependence between them. Modelling has been carried out using Interpretive Structural Modeling (ISM) and Cross-Impact Matrix Multiplication Applied to Classification (MICMAC) analysis to reveal the interrelationships between the enablers and their implications for the performance of Indian corrugated packaging industries.
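
    In ISM/MICMAC analysis, the driving power of an enabler is the row sum of the final reachability matrix and its dependence is the column sum; the two values place each enabler in a driver, linkage, dependent or autonomous quadrant. A minimal sketch follows; the matrix and enabler labels are illustrative, not the paper's data.

```python
import numpy as np

# Illustrative final reachability matrix for 4 hypothetical JIT enablers
# (entry [i, j] = 1 means enabler i helps achieve enabler j, transitive links included).
R = np.array([
    [1, 1, 1, 1],   # e.g. top management commitment
    [0, 1, 1, 1],   # e.g. supplier integration
    [0, 0, 1, 1],   # e.g. pull production
    [0, 0, 0, 1],   # e.g. on-time delivery
])

driving_power = R.sum(axis=1)   # row sums: how many enablers each one drives
dependence    = R.sum(axis=0)   # column sums: how many enablers drive it

half = R.shape[0] / 2
for i, (dp, dep) in enumerate(zip(driving_power, dependence), start=1):
    # MICMAC quadrants: linkage, driver, dependent or autonomous variable
    quadrant = ("linkage" if dp > half and dep > half else
                "driver" if dp > half else
                "dependent" if dep > half else "autonomous")
    print(f"enabler {i}: driving power={dp}, dependence={dep}, {quadrant}")
```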

  15. Thermodynamics of a physical model implementing a Maxwell demon

    OpenAIRE

    Strasberg, Philipp; Schaller, Gernot; Brandes, Tobias; Esposito, Massimiliano

    2012-01-01

    We present a physical implementation of a Maxwell demon which consists of a conventional single electron transistor (SET) capacitively coupled to another quantum dot detecting its state. Altogether, the system is described by stochastic thermodynamics. We identify the regime where the energetics of the SET is not affected by the detection, but where its coarse-grained entropy production is shown to contain a new contribution compared to the isolated SET. This additional contribution...

  16. Model for the evaluation of implementation programs and professional pharmacy services.

    Science.gov (United States)

    Moullin, Joanna C; Sabater-Hernández, Daniel; Benrimoj, Shalom I

    2016-01-01

    Pharmacy practice and pharmaceutical care research on professional services has largely focused on patient outcomes and cost-effectiveness. Research studies have, for the most part, been conducted under controlled conditions prior to full-scale implementation, and there appears to be a dearth of reported process and implementation evaluation. Conducting implementation research, or adding implementation measures to an impact study, adds external validity to service and patient outcomes. Evaluations are required for all aspects of implementation, including indicators of movement through the implementation stages (formative and summative implementation process evaluation), measures of influencing factors (barriers and facilitators) and change in factors over time (implementation impact), assessment of strategies and/or the implementation program, and overall measures to generate a level of implementation (implementation outcomes). The level of implementation of a professional pharmacy service can be estimated from the level of service delivery (reach and fidelity) and the level as a service provider (integration and strength of support in the service environment). The model may be used for evaluating professional pharmacy services and for evaluating implementation programs.

  17. Implementing the Mother-Baby Model of Nursing Care Using Models and Quality Improvement Tools.

    Science.gov (United States)

    Brockman, Vicki

    As family-centered care has become the expected standard, many facilities follow the mother-baby model, in which care is provided to both a woman and her newborn in the same room by the same nurse. My facility employed a traditional model of nursing care, which was not evidence-based or financially sustainable. After implementing the mother-baby model, we experienced an increase in exclusive breastfeeding rates at hospital discharge, increased patient satisfaction, improved staff productivity and decreased salary costs, all while the number of births increased. Our change was successful because it was guided by the use of quality improvement tools, change theory and evidence-based practice models. © 2015 AWHONN.

  18. Wind Farm Flow Modeling using an Input-Output Reduced-Order Model

    Energy Technology Data Exchange (ETDEWEB)

    Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter

    2016-08-01

    Wind turbines in a wind farm operate individually to maximize their own power, regardless of the impact of aerodynamic interactions on neighboring turbines. There is potential to increase power and reduce overall structural loads by properly coordinating the turbines. To perform control design and analysis, a model needs to be of low computational cost while retaining the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
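
    As context for the first step of such an approach, proper orthogonal decomposition of flow snapshots can be computed with a singular value decomposition. The sketch below uses synthetic data and omits the subsequent system identification of the input-output dynamics, so it is only an illustration of the general technique, not the paper's implementation.

```python
import numpy as np

def pod_modes(snapshots, r):
    """Return the leading r POD modes, temporal coefficients and energy fraction.

    snapshots: array of shape (n_points, n_times), each column one flow snapshot.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                          # fluctuations about the mean flow
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = U[:, :r]                              # spatial POD modes
    coeffs = np.diag(s[:r]) @ Vt[:r, :]           # temporal coefficients
    energy = (s[:r] ** 2).sum() / (s ** 2).sum()  # captured "energy" fraction
    return modes, coeffs, energy

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 60))            # synthetic snapshot matrix
    modes, a, frac = pod_modes(X, r=5)
    print(modes.shape, a.shape, f"energy captured: {frac:.2f}")
```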

  19. Transmural care in the rehabilitation sector: implementation experiences with a transmural care model for people with spinal cord injury

    Directory of Open Access Journals (Sweden)

    J.H.A. Bloemen-Vrencken

    2005-06-01

    Full Text Available Purposes: The purpose of this article is first to describe the development and content of a transmural care model in the rehabilitation sector, which aims to reduce the number and severity of health problems of people with spinal cord injury (SCI) and improve the continuity of care. Second, the purpose is to describe the applicability and implementation experiences of a transmural care model in the rehabilitation sector. Methods: The transmural care model was developed in cooperation with the Dutch Association of Spinal Cord Injured Patients, community nurses, general practitioners, rehabilitation nurses, rehabilitation managers, physiatrists and researchers. The core component of the care model is a transmural nurse, who 'liaises' between people with SCI living in the community, primary care professionals and the rehabilitation centre. The transmural care model provides a job description containing activities to support people with SCI and their family/partners and activities to promote continuity of care. The transmural care model was implemented in two Dutch rehabilitation centres. The following three aspects, as experienced by the transmural nurses, were evaluated: the extent to which the care model was implemented; enabling factors and barriers for implementation; and strengths and weaknesses of the care model. Results: The transmural care model was not implemented in all its details, with a clear difference between the two rehabilitation centres. Enabling factors and barriers for implementation were found at three levels: 1. the level of the individual professional (e.g. competencies, attitude and motivation), 2. the organisational and financing level (e.g. availability of facilities and finances), and 3. the social context (the opinion of colleagues, managers and other professionals involved with the care). The most important weakness experienced was that there was not enough time to put all the activities into practice.

  20. A Reduced-Complexity Fast Algorithm for Software Implementation of the IFFT/FFT in DMT Systems

    Directory of Open Access Journals (Sweden)

    Kuo Jen-Chih

    2002-01-01

    Full Text Available The discrete multitone (DMT) modulation/demodulation scheme is the standard transmission technique for asymmetric digital subscriber lines (ADSL) and very-high-speed digital subscriber lines (VDSL). Although DMT can achieve a higher data rate than other modulation/demodulation schemes, its computational complexity is too high for cost-efficient implementations. For example, it requires a 512-point IFFT/FFT as the modulation/demodulation kernel in ADSL systems, and even larger transforms in VDSL systems. The large block size results in a heavy computational load when running on programmable digital signal processors (DSPs). In this paper, we derive a computationally efficient fast algorithm for the IFFT/FFT. The proposed algorithm avoids complex-domain operations that are inevitable in conventional IFFT/FFT computation, and the resulting software function requires less computational complexity: we show that it requires only 17% of the multiplications needed to compute the IFFT and FFT with the Cooley-Tukey algorithm. Hence, the proposed fast algorithm is very suitable for firmware development aimed at reducing the MIPS count in programmable DSPs.
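
    The reduced-complexity algorithm itself is not reproduced in the abstract. The property it exploits is that DMT modulation is an IFFT of a Hermitian-symmetric subcarrier vector, so the time-domain output is purely real; the numpy sketch below illustrates that mapping only (it is not the paper's optimized routine, and the 512-point size and QPSK-like symbols are assumptions).

```python
import numpy as np

def dmt_modulate(qam_symbols, n_fft=512):
    """Map complex symbols onto DMT subcarriers and return a real time-domain block.

    Subcarriers 1 .. n_fft/2 - 1 carry data; DC and Nyquist bins are zero and the
    upper half mirrors the complex conjugates, so the IFFT output is real-valued.
    """
    n_data = n_fft // 2 - 1
    assert len(qam_symbols) == n_data
    X = np.zeros(n_fft, dtype=complex)
    X[1:n_data + 1] = qam_symbols                    # positive-frequency bins
    X[n_fft - n_data:] = np.conj(qam_symbols[::-1])  # Hermitian mirror
    x = np.fft.ifft(X)
    return x.real                                    # imaginary part is ~0

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    syms = rng.choice([-1, 1], 255) + 1j * rng.choice([-1, 1], 255)
    x = dmt_modulate(syms)
    print(x.shape, x.dtype)   # (512,) float64 -- the DMT symbol is real
```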

  1. Reduced nonlinear prognostic model construction from high-dimensional data

    Science.gov (United States)

    Gavrilov, Andrey; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander

    2017-04-01

    Construction of a data-driven model of an evolution operator using universal approximating functions can only be statistically justified when the dimension of its phase space is small enough, especially in the case of short time series. At the same time, in many applications the measured data are high-dimensional, e.g. space-distributed and multivariate in climate science. Therefore it is necessary to use efficient dimensionality reduction methods which are also able to capture key dynamical properties of the system from observed data. To address this problem we present a Bayesian approach to evolution operator construction which incorporates two key reduction steps. First, the data are decomposed into a set of certain empirical modes, such as standard empirical orthogonal functions or the recently suggested nonlinear dynamical modes (NDMs) [1], and the reduced space of corresponding principal components (PCs) is obtained. Then, the model of the evolution operator for the PCs is constructed, which maps a number of states in the past to the current state. The second step is to reduce this time-extended space in the past using appropriate decomposition methods. Such a reduction allows us to capture only the most significant spatio-temporal couplings. The functional form of the evolution operator includes separate linear, nonlinear (based on artificial neural networks) and stochastic terms. Explicit separation of the linear term from the nonlinear one allows us to more easily interpret the degree of nonlinearity as well as to deal better with smooth PCs which can naturally occur in decompositions like NDM, as they provide a time scale separation. Results of applying the proposed method to climate data are demonstrated and discussed. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical
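
    A stripped-down illustration of the two ideas described above (projection onto leading principal components followed by a least-squares fit of a one-step evolution operator in the reduced space) is sketched below; the Bayesian treatment, neural-network nonlinearity, stochastic term and time-extended past embedding of the full method are all omitted, and the data are synthetic.

```python
import numpy as np

def fit_reduced_linear_operator(data, n_pc=3):
    """data: array (n_times, n_channels) of observations.

    Step 1: project onto the leading principal components (EOF-like reduction).
    Step 2: fit a linear one-step evolution operator x_{t+1} ~ A x_t by least
            squares in the reduced space.
    """
    X = data - data.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ Vt[:n_pc].T                      # principal components (n_times, n_pc)

    past, future = pcs[:-1], pcs[1:]
    A, *_ = np.linalg.lstsq(past, future, rcond=None)
    return Vt[:n_pc], pcs, A.T                 # EOFs, PCs, evolution operator

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.arange(400)
    signal = np.outer(np.sin(0.1 * t), rng.standard_normal(20))
    data = signal + 0.1 * rng.standard_normal((400, 20))
    eofs, pcs, A = fit_reduced_linear_operator(data)
    print(A.shape)   # (3, 3) reduced linear evolution operator
```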

  2. Modelling obesity outcomes: reducing obesity risk in adulthood may have greater impact than reducing obesity prevalence in childhood

    NARCIS (Netherlands)

    Lhachimi, S.K.; Nusselder, W.J.; Lobstein, T.J.; Smit, H.A.; Baili, P.; Bennett, K.; Kulik, M.C.; Jackson-Leach, R.; Boshuizen, H.C.; Mackenbach, J.P.

    2013-01-01

    A common policy response to the rise in obesity prevalence is to undertake interventions in childhood, but it is an open question whether this is more effective than reducing the risk of becoming obese during adulthood. In this paper, we model the effect on health outcomes of (i) reducing the

  3. Modelling obesity outcomes : reducing obesity risk in adulthood may have greater impact than reducing obesity prevalence in childhood

    NARCIS (Netherlands)

    Lhachimi, S. K.; Nusselder, W. J.; Lobstein, T. J.; Smit, H. A.; Baili, P.; Bennett, K.; Kulik, M. C.; Jackson-Leach, R.; Boshuizen, H. C.; Mackenbach, J. P.

    2013-01-01

    A common policy response to the rise in obesity prevalence is to undertake interventions in childhood, but it is an open question whether this is more effective than reducing the risk of becoming obese during adulthood. In this paper, we model the effect on health outcomes of (i) reducing the preval

  4. Modelling obesity outcomes: reducing obesity risk in adulthood may have greater impact than reducing obesity prevalence in childhood

    NARCIS (Netherlands)

    Lhachimi, S.K.; Nusselder, W.J.; Lobstein, T.J.; Smit, H.A.; Baili, P.; Bennett, K.; Kulik, M.C.; Jackson-Leach, R.; Boshuizen, H.C.; Mackenbach, J.P.

    2013-01-01

    A common policy response to the rise in obesity prevalence is to undertake interventions in childhood, but it is an open question whether this is more effective than reducing the risk of becoming obese during adulthood. In this paper, we model the effect on health outcomes of (i) reducing the preval

  5. Hopes and Cautions in Implementing Bayesian Structural Equation Modeling

    Science.gov (United States)

    MacCallum, Robert C.; Edwards, Michael C.; Cai, Li

    2012-01-01

    Muthen and Asparouhov (2012) have proposed and demonstrated an approach to model specification and estimation in structural equation modeling (SEM) using Bayesian methods. Their contribution builds on previous work in this area by (a) focusing on the translation of conventional SEM models into a Bayesian framework wherein parameters fixed at zero…

  6. Implementation challenges for designing integrated in vitro testing strategies (ITS) aiming at reducing and replacing animal experimentation.

    Science.gov (United States)

    De Wever, Bart; Fuchs, Horst W; Gaca, Marianna; Krul, Cyrille; Mikulowski, Stan; Poth, Albrecht; Roggen, Erwin L; Vilà, Maya R

    2012-04-01

    At the IVTIP (in vitro testing industrial platform) meeting of November 26th 2009, entitled 'Toxicology in the 21st century ('21C')--working our way towards a visionary reality', all delegates endorsed the emerging concept of the '21C' vision as the way forward to enable a thorough, reliable and systematic approach to future toxicity testing without the use of animals. One of the emerging concepts focused on integrating a defined number of tests modelling in vivo-relevant and well-characterised toxicity pathways representing mechanistic endpoints. At this meeting the importance of Integrated Testing Strategies (ITS) as tools towards reduction and eventually replacement of the animals currently used for hazard identification and risk assessment was recognised. A follow-up IVTIP Spring 2010 meeting entitled 'Integrated In Vitro Testing Strategies (ITS)--Implementation Challenges' was organised to address pending questions about ITS. This report is not a review of the ITS literature, but a summary of the discussions triggered by presented examples of how to develop and implement ITS. Contrasts between the pharmaceutical and chemical industries, as well as a list of general but practical aspects to be considered while developing an ITS, emerged from the discussions. In addition, current recommendations on the validation of ITS were discussed. In conclusion, the outcome of this workshop improved the participants' understanding of some important factors that may impact the design of an ITS as a function of its purpose (e.g., screening or early decision making versus regulatory), the context in which it needs to be applied (e.g., ICH guidelines, REACH) and the status and quality of the available tools. A set of recommendations on best practices was established, and the importance of the applicability of the individual tests as well as the testing strategy itself was highlighted. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. The role of public policies in reducing smoking: the Minnesota SimSmoke tobacco policy model.

    Science.gov (United States)

    Levy, David T; Boyle, Raymond G; Abrams, David B

    2012-11-01

    Following the landmark lawsuit and settlement with the tobacco industry, Minnesota pursued the implementation of stricter tobacco control policies, including tax increases, mass media campaigns, smokefree air laws, and cessation treatment policies. Modeling is used to examine policy effects on smoking prevalence and smoking-attributable deaths. To estimate the effect of tobacco control policies in Minnesota on smoking prevalence and smoking-attributable deaths using the SimSmoke simulation model. Minnesota data starting in 1993 are applied to SimSmoke, a simulation model used to examine the effect of tobacco control policies over time on smoking initiation and cessation. Upon validating the model against smoking prevalence, SimSmoke is used to distinguish the effect of policies implemented since 1993 on smoking prevalence. Using standard attribution methods, SimSmoke also estimates deaths averted as a result of the policies. SimSmoke predicts smoking prevalence accurately between 1993 and 2011. Since 1993, a relative reduction in smoking rates of 29% by 2011 and of 41% by 2041 can be attributed to tobacco control policies, mainly tax increases, smokefree air laws, media campaigns, and cessation treatment programs. Moreover, 48,000 smoking-attributable deaths will be averted by 2041. Minnesota SimSmoke demonstrates that tobacco control policies, especially taxes, have substantially reduced smoking prevalence and smoking-attributable deaths. Taxes, smokefree air laws, mass media, cessation treatment policies, and youth-access enforcement contributed to the decline in prevalence and deaths averted, with the strongest component being taxes. With stronger policies, for example, increasing cigarette taxes to $4.00 per pack, Minnesota's smoking rate could be reduced by another 13%, and 7200 deaths could be averted by 2041. Copyright © 2012 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  8. Azithromycin reduces inflammation in a rat model of acute conjunctivitis

    Science.gov (United States)

    Fernandez-Robredo, Patricia; Recalde, Sergio; Moreno-Orduña, Maite; García-García, Laura; Zarranz-Ventura, Javier; García-Layana, Alfredo

    2013-01-01

    Purpose Macrolide antibiotics are known to have various anti-inflammatory effects in addition to their antimicrobial activity, but the mechanisms are still unclear. The effect of azithromycin on inflammatory molecules in the lipopolysaccharide-induced rat conjunctivitis model was investigated. Methods Twenty-four Wistar rats were divided into two groups receiving topical ocular azithromycin (15 mg/g) or vehicle. In total, six doses (25 µl) were administered as one dose twice a day for three days before subconjunctival lipopolysaccharide injection (3 mg/ml). Before the rats were euthanized, mucus secretion, conjunctival and palpebral edema and redness were evaluated. Real-time polymerase chain reaction was used to determine gene expression for interleukin-6, cyclooxygenase-2, tumor necrosis factor-α, matrix metalloproteinase (MMP)-2, and MMP-9. Interleukin-6 was determined with enzyme-linked immunosorbent assay, nuclear factor-kappa B with western blot, and MMP-2 activity with gelatin zymogram. Four eyes per group were processed for histology and subsequent periodic acid-Schiff staining and CD68 for immunofluorescence. The Student t test or the Wilcoxon test for independent samples was applied (SPSS v.15.0). Results Azithromycin-treated animals showed a significant reduction in all clinical signs (p<0.05) compared to controls. Interleukin-6 (p<0.05), nuclear factor-kappa B protein expression (p<0.01), and MMP-2 activity (p<0.05) in conjunctival homogenates were significantly reduced compared with the control animals. MMP-2 gene expression showed a tendency to decrease in the azithromycin group (p=0.063). Mucus secretion by goblet cells and the macrophage count in conjunctival tissue were also decreased in the azithromycin group (p<0.05). Conclusions These results suggest that azithromycin administration ameliorates induced inflammation effects in a rat model of acute conjunctivitis. PMID:23378729

  9. Accurate mask model implementation in OPC model for 14nm nodes and beyond

    Science.gov (United States)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle

    2015-10-01

    In a previous work [1] we demonstrated that the current OPC model, which assumes the mask pattern to be analogous to the designed data, is no longer valid. Indeed, as depicted in figure 1, an extreme case of line-end shortening shows a gap of up to 10 nm (at mask level). For that reason an accurate mask model has been calibrated for a 14nm logic gate level. A model with a total RMS of 1.38nm at mask level was obtained. 2D structures such as line-end shortening and corner rounding were well predicted using SEM pictures overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in the current flow. The improved model consists of a mask model capturing mask process and writing effects and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part focuses on results from the comparison of the two models, the new and the regular, as depicted in figure 2.

  10. Accurate mask model implementation in optical proximity correction model for 14-nm nodes and beyond

    Science.gov (United States)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Farys, Vincent; Huguennet, Frederic; Armeanu, Ana-Maria; Bork, Ingo; Chomat, Michael; Buck, Peter; Schanen, Isabelle

    2016-04-01

    In a previous work, we demonstrated that the current optical proximity correction model assuming the mask pattern to be analogous to the designed data is no longer valid. An extreme case of line-end shortening shows a gap up to 10 nm difference (at mask level). For that reason, an accurate mask model has been calibrated for a 14-nm logic gate level. A model with a total RMS of 1.38 nm at mask level was obtained. Two-dimensional structures, such as line-end shortening and corner rounding, were well predicted using scanning electron microscopy pictures overlaid with simulated contours. The first part of this paper is dedicated to the implementation of our improved model in current flow. The improved model consists of a mask model capturing mask process and writing effects, and a standard optical and resist model addressing the litho exposure and development effects at wafer level. The second part will focus on results from the comparison of the two models, the new and the regular.

  11. Electron-scale reduced fluid models with gyroviscous effects

    Science.gov (United States)

    Passot, T.; Sulem, P. L.; Tassi, E.

    2017-08-01

    Reduced fluid models for collisionless plasmas including electron inertia and finite Larmor radius corrections are derived for scales ranging from the ion to the electron gyroradii. Based either on pressure balance or on the incompressibility of the electron fluid, they respectively capture kinetic Alfvén waves (KAWs) or whistler waves (WWs), and can provide suitable tools for reconnection and turbulence studies. Both isothermal regimes and Landau fluid closures permitting anisotropic pressure fluctuations are considered. For small values of the electron beta parameter $\beta_e$, a perturbative computation of the gyroviscous force valid at scales comparable to the electron inertial length is performed at order $O(\beta_e)$, which requires second-order contributions in a scale expansion. Comparisons with kinetic theory are performed in the linear regime. The spectrum of transverse magnetic fluctuations for strong and weak turbulence energy cascades is also phenomenologically predicted for both types of waves. In the case of moderate ion to electron temperature ratio, a new regime of KAW turbulence at scales smaller than the electron inertial length is obtained, where the magnetic energy spectrum decays like $k_\perp^{-13/3}$, thus faster than the $k_\perp^{-11/3}$ spectrum of WW turbulence.

  12. A mathematical model for reducing the composting time

    Directory of Open Access Journals (Sweden)

    Estefanía Larreategui

    2014-06-01

    Full Text Available The environment is still affected by the inappropriate handling of organic waste, but a culture of recycling and reuse has been promoted in Ecuador to reduce the carbon footprint. Composting, a technique to digest organic matter, traditionally takes 16-24 weeks and is therefore inefficient to use. This paper therefore concerns the optimization of the composting process in both quality and production time. The variables studied were: type of waste (fruits and vegetables) and type of bioaccelerator (yeast and indigenous microorganisms). By using a full factorial randomized 2² design, a quality compost was obtained in 7 weeks of processing. Quality factors such as temperature, density, moisture content, pH and carbon-nitrogen ratio allowed the best conditions for composting to be identified in the San Gabriel del Baba community (Santo Domingo de los Colorados, Ecuador). As a result of this study, a mathematical surface model explaining the relationship between temperature and the digestion time of organic matter was obtained.

  13. Global horizontal irradiance clear sky models : implementation and analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Hansen, Clifford W.; Reno, Matthew J.

    2012-03-01

    Clear sky models estimate the terrestrial solar radiation under a cloudless sky as a function of the solar elevation angle, site altitude, aerosol concentration, water vapor, and various atmospheric conditions. This report provides an overview of a number of global horizontal irradiance (GHI) clear sky models from very simple to complex. Validation of clear-sky models requires comparison of model results to measured irradiance during clear-sky periods. To facilitate validation, we present a new algorithm for automatically identifying clear-sky periods in a time series of GHI measurements. We evaluate the performance of selected clear-sky models using measured data from 30 different sites, totaling about 300 site-years of data. We analyze the variation of these errors across time and location. In terms of error averaged over all locations and times, we found that complex models that correctly account for all the atmospheric parameters are slightly more accurate than other models, but, primarily at low elevations, comparable accuracy can be obtained from some simpler models. However, simpler models often exhibit errors that vary with time of day and season, whereas the errors for complex models vary less over time.
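
    As a concrete example of the 'very simple' end of this model spectrum, the Haurwitz clear-sky model estimates GHI from the solar zenith angle alone. The sketch below uses the coefficient values commonly quoted for that model (e.g. in the pvlib implementation); they should be checked against the report before reuse.

```python
import numpy as np

def haurwitz_ghi(zenith_deg):
    """Haurwitz clear-sky global horizontal irradiance in W/m^2.

    zenith_deg: apparent solar zenith angle in degrees. The coefficients
    (1098, 0.059) follow common implementations of the Haurwitz model;
    when the sun is below the horizon the irradiance is set to zero.
    """
    z = np.radians(np.asarray(zenith_deg, dtype=float))
    cosz = np.cos(z)
    safe = np.clip(cosz, 1e-6, None)             # avoid division by non-positive values
    ghi = np.where(cosz > 0, 1098.0 * cosz * np.exp(-0.059 / safe), 0.0)
    return ghi

if __name__ == "__main__":
    for zen in (0, 30, 60, 85, 95):
        print(zen, round(float(haurwitz_ghi(zen)), 1))
```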

  14. Modeling the dynamics of evaluation: a multilevel neural network implementation of the iterative reprocessing model.

    Science.gov (United States)

    Ehret, Phillip J; Monroe, Brian M; Read, Stephen J

    2015-05-01

    We present a neural network implementation of central components of the iterative reprocessing (IR) model. The IR model argues that the evaluation of social stimuli (attitudes, stereotypes) is the result of the IR of stimuli in a hierarchy of neural systems: The evaluation of social stimuli develops and changes over processing. The network has a multilevel, bidirectional feedback evaluation system that integrates initial perceptual processing and later developing semantic processing. The network processes stimuli (e.g., an individual's appearance) over repeated iterations, with increasingly higher levels of semantic processing over time. As a result, the network's evaluations of stimuli evolve. We discuss the implications of the network for a number of different issues involved in attitudes and social evaluation. The success of the network supports the IR model framework and provides new insights into attitude theory.

  15. Implementing land use change models in the developing world

    CSIR Research Space (South Africa)

    Le Roux, Alize

    2013-07-01

    Full Text Available Presentation delivered at the Esri International User Conference, July 8-12, 2013, San Diego, California: 'Implementing land use change models in the developing world: reshaping cities through urban land use modeling' (Alize le Roux, ALeroux1@csir.co.za). The slide outline covers urban land use change models, the value of these models, their data requirements, and their potential for projecting municipal consumption (water, energy, waste water, solid waste, public transport, libraries and revenue).

  16. Commercial Implementation of Model-Based Manufacturing of Nanostructured Metals

    Energy Technology Data Exchange (ETDEWEB)

    Lowe, Terry C. [Los Alamos National Laboratory

    2012-07-24

    Computational modeling is an essential tool for the commercial production of nanostructured metals. At the high strength levels achievable in nanostructured metals, strength is limited by imperfections, so processing to achieve homogeneity at the micro- and nano-scales is critical. Manufacturing of nanostructured metal products requires computer control, monitoring and modeling. Large-scale manufacturing of bulk nanostructured metals by Severe Plastic Deformation is intrinsically a multi-scale problem: computational modeling at all scales is essential, and multiple scales of modeling must be integrated to predict and control nanostructural, microstructural and macrostructural product characteristics as well as the production processes.

  17. A Novel Approach to Implement Takagi-Sugeno Fuzzy Models.

    Science.gov (United States)

    Chang, Chia-Wen; Tao, Chin-Wang

    2017-09-01

    This paper proposes new algorithms based on the fuzzy c-regression model algorithm for Takagi-Sugeno (T-S) fuzzy modeling of complex nonlinear systems. The fuzzy c-regression state model (FCRSM) algorithm builds a T-S fuzzy model in which a functional antecedent and a state-space-model-type consequent are considered with the available input-output data. The antecedent and consequent forms of the proposed FCRSM offer two main advantages: first, the FCRSM has a low computation load because only one input variable is considered in the antecedent part; second, the unknown system can be modeled not only in polynomial form but also in state-space form. Moreover, the FCRSM can be extended to the FCRSM-ND and FCRSM-Free algorithms. The FCRSM-ND algorithm is presented to find the T-S fuzzy state-space model of a nonlinear system when the input-output data cannot be precollected and an assumed effective controller is available. In practical applications, the mathematical model of the controller may be hard to obtain. In this case, an online tuning algorithm, FCRSM-Free, is designed such that the parameters of a T-S fuzzy controller and the T-S fuzzy state model of an unknown system can be tuned online simultaneously. Four numerical simulations are given to demonstrate the effectiveness of the proposed approach.
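
    For readers unfamiliar with the Takagi-Sugeno structure being identified here, a generic T-S model blends local linear consequents with normalized antecedent membership weights. The sketch below evaluates such a model; the Gaussian membership functions, rule parameters and single antecedent variable are illustrative assumptions, not the FCRSM algorithm itself.

```python
import numpy as np

def gaussian_mf(x, center, sigma):
    """Gaussian membership degree of the scalar antecedent variable x."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def ts_fuzzy_predict(x, z, rules):
    """Evaluate a Takagi-Sugeno fuzzy model.

    x     : input vector fed to the linear consequents
    z     : scalar antecedent (premise) variable
    rules : list of dicts with keys 'center', 'sigma', 'a' (weights), 'b' (bias)
    The output is the membership-weighted average of the local linear models.
    """
    weights = np.array([gaussian_mf(z, r["center"], r["sigma"]) for r in rules])
    weights = weights / weights.sum()                  # normalized firing strengths
    local_outputs = np.array([r["a"] @ x + r["b"] for r in rules])
    return float(weights @ local_outputs)

if __name__ == "__main__":
    rules = [
        {"center": -1.0, "sigma": 1.0, "a": np.array([2.0, 0.5]), "b": 0.0},
        {"center":  1.0, "sigma": 1.0, "a": np.array([0.5, 2.0]), "b": 1.0},
    ]
    print(ts_fuzzy_predict(np.array([1.0, 2.0]), z=0.2, rules=rules))
```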

  18. Estimating the opportunity costs of reducing carbon dioxide emissions via avoided deforestation, using integrated assessment modelling

    NARCIS (Netherlands)

    Overmars, K.P.; Stehfest, E.; Tabeau, A.A.; Meijl, van J.C.M.; Beltran, A.M.; Kram, T.

    2014-01-01

    Estimates show that, in recent years, deforestation and forest degradation accounted for about 17% of global greenhouse gas emissions. The implementation of REDD (Reducing Emissions from Deforestation and Forest Degradation in Developing Countries) is suggested to provide substantial emission reduct

  19. Thermal modelling approaches to enable mitigation measures implementation for salmonid gravel stages in hydropeaking rivers

    Science.gov (United States)

    Casas-Mulet, R.; Alfredsen, K. T.

    2016-12-01

    The dewatering of salmon spawning redds caused by hydropeaking operations can lead to mortality of early life stages, with a higher impact on the alevin stage, which has a lower tolerance to dewatering than the eggs. Targeted flow-related mitigation measures can reduce such mortality, but it is essential to understand how hydropeaking changes thermal regimes in rivers and may impact embryo development; only then can optimal measures be implemented at the right development stage. We present a set of experimental approaches and modelling tools for the estimation of hatch and swim-up dates based on water temperature data in the river Lundesokna (Norway). We identified critical periods for gravel-stage survival and, by comparing hydropeaking with unregulated thermal and hydrological regimes, established potential flow-release measures to minimise mortality. Modelling outcomes were then used to assess the cost-efficiency of each measure. The combination of modelling tools used in this study was overall satisfactory, and their application can be useful especially in systems where little field data is available. Targeted measures built on well-informed modelling approaches can be pre-tested on the basis of their efficiency in mitigating dewatering effects versus the hydropower system's capacity to release or conserve water for power production. Overall, environmental flow releases targeting specific ecological objectives can provide more cost-effective options than conventional operational rules complying with general legislation.
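
    Hatch and swim-up dates in this kind of study are commonly estimated from accumulated thermal units (degree-days) computed from a water temperature series; a simplified sketch is given below. The thresholds, base temperature and temperature series are placeholders, not the values used for the Lundesokna.

```python
import numpy as np

def predict_event_day(daily_temp_c, degree_day_threshold, base_temp_c=0.0):
    """Return the index of the first day on which accumulated degree-days
    (sum of daily mean temperature above a base temperature) reach the
    development threshold, or None if it is never reached."""
    effective = np.clip(np.asarray(daily_temp_c) - base_temp_c, 0.0, None)
    cumulative = np.cumsum(effective)
    reached = np.nonzero(cumulative >= degree_day_threshold)[0]
    return int(reached[0]) if reached.size else None

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Placeholder winter-to-spring water temperatures (deg C) after spawning.
    temps = np.clip(2.0 + 0.03 * np.arange(250) + rng.normal(0, 0.5, 250), 0, None)
    hatch = predict_event_day(temps, degree_day_threshold=480)   # illustrative threshold
    swimup = predict_event_day(temps, degree_day_threshold=900)  # illustrative threshold
    print("hatch day:", hatch, "swim-up day:", swimup)
```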

  20. Sepsis Alert - a triage model that reduces time to antibiotics and length of hospital stay.

    Science.gov (United States)

    Rosenqvist, Mari; Fagerstrand, Emma; Lanbeck, Peter; Melander, Olle; Åkesson, Per

    2017-07-01

    To study whether a modified triage system at an Emergency Department (ED), combined with educational efforts, resulted in reduced time to antibiotics and decreased length of hospital stay (LOS) for patients with severe infection. A retrospective, observational study comparing patients before and after the start of a new triage model at the ED of a University Hospital. After the implementation of the model, patients with fever and abnormal vital signs were triaged into a designated sepsis line (Sepsis Alert) for rapid evaluation by the attending physician supported by an infectious diseases (ID) specialist. In addition, all ED staff participated in a designated sepsis education programme before Sepsis Alert was introduced. Medical records were evaluated for patients during a 3-month period after the triage system was started in 2012, and also during the corresponding months in 2010 and 2014. A total of 1837 patients presented with abnormal vital signs. Of these, 221 patients presented with fever and were thus at risk of having severe sepsis. Among patients triaged according to the new model, median time to antibiotics was 58.5 minutes at startup and 24.5 minutes at follow-up two years later. This was significantly less than for patients treated before the new model, 190 minutes. Also, median LOS was significantly decreased after introduction of the new triage model, from nine to seven days. A triage model at the ED with special attention to severe sepsis patients led to sustained improvements in time to antibiotic treatment and LOS.

  1. A reduced-order, single-bubble cavitation model with applications to therapeutic ultrasound.

    Science.gov (United States)

    Kreider, Wayne; Crum, Lawrence A; Bailey, Michael R; Sapozhnikov, Oleg A

    2011-11-01

    Cavitation often occurs in therapeutic applications of medical ultrasound such as shock-wave lithotripsy (SWL) and high-intensity focused ultrasound (HIFU). Because cavitation bubbles can affect an intended treatment, it is important to understand the dynamics of bubbles in this context. The relevant context includes very high acoustic pressures and frequencies as well as elevated temperatures. Relative to much of the prior research on cavitation and bubble dynamics, such conditions are unique. To address the relevant physics, a reduced-order model of a single, spherical bubble is proposed that incorporates phase change at the liquid-gas interface as well as heat and mass transport in both phases. Based on the energy lost during the inertial collapse and rebound of a millimeter-sized bubble, experimental observations were used to tune and test model predictions. In addition, benchmarks from the published literature were used to assess various aspects of model performance. Benchmark comparisons demonstrate that the model captures the basic physics of phase change and diffusive transport, while it is quantitatively sensitive to specific model assumptions and implementation details. Given its performance and numerical stability, the model can be used to explore bubble behaviors across a broad parameter space relevant to therapeutic ultrasound.
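
    The reduced-order model described here adds phase change and heat and mass transport to the inertial dynamics of a spherical bubble; those inertial dynamics alone are commonly described by the Rayleigh-Plesset equation, which the sketch below integrates with illustrative water-like parameters and a hypothetical acoustic drive. It is meant only as context, not as the authors' model, whose thermal and diffusive terms are omitted here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative water-like properties and an equilibrium gas bubble.
RHO, MU, SIGMA = 998.0, 1.0e-3, 0.072     # density, viscosity, surface tension (SI)
P0, R0 = 101325.0, 50e-6                  # ambient pressure (Pa), initial radius (m)
PG0 = P0 + 2 * SIGMA / R0                 # initial gas pressure at equilibrium
KAPPA = 1.4                               # polytropic exponent for the gas

def p_inf(t):
    """Hypothetical far-field acoustic pressure: 30 kPa drive at 100 kHz."""
    return P0 - 30e3 * np.sin(2 * np.pi * 100e3 * t)

def rayleigh_plesset(t, y):
    R, Rdot = y
    p_gas = PG0 * (R0 / R) ** (3 * KAPPA)                 # polytropic gas pressure
    p_wall = p_gas - 2 * SIGMA / R - 4 * MU * Rdot / R    # pressure at the bubble wall
    Rddot = (p_wall - p_inf(t)) / (RHO * R) - 1.5 * Rdot ** 2 / R
    return [Rdot, Rddot]

sol = solve_ivp(rayleigh_plesset, (0.0, 20e-6), [R0, 0.0],
                rtol=1e-8, atol=1e-12, max_step=1e-8)
print(f"radius range: {sol.y[0].min()*1e6:.2f} to {sol.y[0].max()*1e6:.2f} micrometres")
```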

  2. An implementation of a barotropic quasigeostrophic model of ocean circulation on the MPP

    Science.gov (United States)

    Grosch, C. E.; Fatoohi, R.

    1987-01-01

    The implementation on the Massively Parallel Processor (MPP) of a barotropic quasigeostrophic model of ocean circulation is discussed. The mathematical model, including scalings and boundary conditions, is discussed. The numerical scheme, which uses compact differencing, is also discussed. The implementation of this model on the MPP is then presented. Finally, some performance results are given and compared to results obtained using the VPS-32 and one processor of a CRAY-2.

  3. Potential Improvement of Building Information Modeling (BIM) Implementation in Malaysian Construction Projects

    OpenAIRE

    Latiffi, Aryani,; Mohd, Suzila; Rakiman, Umol,

    2015-01-01

    Part 4: Building Information Modeling (BIM); International audience; Applications of building information modeling (BIM), such as previewing design clashes and visualizing the project model, increase effectiveness in managing construction projects. However, BIM implementation in Malaysian construction projects has been slow, so its benefits are not yet being seen or gained. This paper therefore aims to explore potential improvements that could increase BIM implementation in construction projects. A literature review...

  4. Computer Implementation of a New Therapeutic Model for GBM Tumor

    Directory of Open Access Journals (Sweden)

    Ali Jamali Nazari

    2014-01-01

    Full Text Available Modeling tumor behavior in the host organ as a function of time and radiation dose has been a major field of study in previous decades. Here, an effort to estimate cancerous and normal cell proliferation and growth in glioblastoma multiforme (GBM) tumors is presented. This paper introduces a new mathematical model of tumor growth in the form of a differential equation. The model contains the delivered dose in the treatment scheme as an input term, and it can also be utilized to optimize the treatment process in order to increase the patient survival period. Gene expression programming (GEP), as a new concept, is used for estimating this model. The LQ model has also been applied to GEP as an initial value, accelerating and improving the algorithm's estimation. The model gives the number of tumor and normal brain cells during the treatment process using the status of normal and cancerous cells at the initiation of treatment, the timing and amount of dose delivered to the patient, and a coefficient that describes the brain condition. A critical level of normal cells is defined at which the patient's death occurs. Finally, the model has been verified using clinical data obtained from previously accepted formulae and some of our experimental resources. The proposed model helps to predict tumor growth during the treatment process, so that further treatment can be controlled.
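
    The GEP-derived growth equation itself is not given in the abstract. As a generic illustration of the kind of model described (tumor-cell growth with a dose-dependent kill term of linear-quadratic form), the sketch below integrates a simple logistic growth law with an LQ surviving fraction applied on treatment days; every parameter value and the functional form are hypothetical, not the paper's fitted model.

```python
import numpy as np

# Hypothetical parameters: growth rate, carrying capacity, LQ radiosensitivity.
RHO, K = 0.05, 1e11          # per day, cells
ALPHA, BETA = 0.3, 0.03      # Gy^-1, Gy^-2

def simulate_tumor(n0, days, dose_schedule):
    """Logistic tumor growth with an instantaneous LQ cell kill on treatment days.

    dose_schedule: dict mapping day index -> dose in Gy.
    Returns an array of cell counts, one entry per day.
    """
    n = float(n0)
    history = []
    for day in range(days):
        n += RHO * n * (1.0 - n / K)                  # one day of logistic growth
        d = dose_schedule.get(day, 0.0)
        if d > 0:
            n *= np.exp(-(ALPHA * d + BETA * d * d))  # LQ surviving fraction
        history.append(n)
    return np.array(history)

if __name__ == "__main__":
    # 2 Gy per weekday for 6 weeks (a conventional fractionation pattern).
    schedule = {day: 2.0 for day in range(42) if day % 7 < 5}
    cells = simulate_tumor(1e9, days=120, dose_schedule=schedule)
    print(f"cells at day 42: {cells[41]:.2e}, at day 120: {cells[-1]:.2e}")
```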

  5. Development and Implementation of Mechanistic Terry Turbine Models in RELAP-7 to Simulate RCIC Normal Operation Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zou, Ling [Idaho National Lab. (INL), Idaho Falls, ID (United States); Zhang, Hongbin [Idaho National Lab. (INL), Idaho Falls, ID (United States); O' Brien, James Edward [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC (Reactor Core Isolation Cooling) systems in Fukushima accidents and extend BWR RCIC and PWR AFW (Auxiliary Feed Water) operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia’s original work [1], have been developed and implemented in the RELAP-7 code to simulate the RCIC system. In 2016, our effort has been focused on normal working conditions of the RCIC system. More complex off-design conditions will be pursued in later years when more data are available. In the Sandia model, the turbine stator inlet velocity is provided according to a reduced-order model which was obtained from a large number of CFD (computational fluid dynamics) simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions for the turbine stator inlet. The models include both an adiabatic expansion process inside the nozzle and a free expansion process outside of the nozzle to ambient pressure. The combined models are able to predict the steam mass flow rate and supersonic velocity to the Terry turbine bucket entrance, which are the necessary input information for the Terry turbine rotor model. The analytical models for the nozzle were validated with experimental data and benchmarked with CFD simulations. The analytical models generally agree well with the experimental data and CFD simulations. The analytical models are suitable for implementation into a reactor system analysis code or severe accident code as part of mechanistic and dynamical models to understand the RCIC behaviors. The newly developed nozzle models and modified turbine rotor model according to the Sandia’s original work have been implemented into RELAP-7, along with the original Sandia Terry turbine model. A new pump model has also been developed and implemented to couple with the Terry turbine model. An input

  6. Intentional systems: Review of neurodynamics, modeling, and robotics implementation

    Science.gov (United States)

    Kozma, Robert

    2008-03-01

    We present an intentional neurodynamic theory for higher cognition and intelligence. This theory provides a unifying framework for integrating symbolic and subsymbolic methods as complementary aspects of human intelligence. Top-down symbolic approaches benefit from the vast experience with logical reasoning and with high-level knowledge processing in humans. Connectionist methods use bottom-up approach to generate intelligent behavior by mimicking subsymbolic aspects of the operation of brains and nervous systems. Neurophysiological correlates of intentionality and cognition include sequences of oscillatory patterns of mesoscopic neural activity. Oscillatory patterns are viewed as intermittent representations of generalized symbol systems, with which brains compute. These dynamical symbols are not rigid but flexible. They disappear soon after they emerged through spatio-temporal phase transitions. Intentional neurodynamics provides a solution to the notoriously difficult symbol grounding problem. Some examples of implementations of the corresponding dynamic principles are described in this review.

  7. Community Intervention Model to Reduce Inappropriate Antibiotic Use

    Science.gov (United States)

    Alder, Stephen; Wuthrich, Amy; Haddadin, Bassam; Donnelly, Sharon; Hannah, Elizabeth Lyon; Stoddard, Greg; Benuzillo, Jose; Bateman, Kim; Samore, Matthew

    2010-01-01

    Background: The Inter-Mountain Project on Antibiotic Resistance and Therapy (IMPART) is an intervention that addresses emerging antimicrobial resistance and the reduction of unnecessary antimicrobial use. Purpose: This study assesses the design and implementation of the community intervention component of IMPART. Methods: The study was conducted…

  8. Impact of implementation choices on quantitative predictions of cell-based computational models

    Science.gov (United States)

    Kursawe, Jochen; Baker, Ruth E.; Fletcher, Alexander G.

    2017-09-01

    'Cell-based' models provide a powerful computational tool for studying the mechanisms underlying the growth and dynamics of biological tissues in health and disease. An increasing amount of quantitative data with cellular resolution has paved the way for the quantitative parameterisation and validation of such models. However, the numerical implementation of cell-based models remains challenging, and little work has been done to understand to what extent implementation choices may influence model predictions. Here, we consider the numerical implementation of a popular class of cell-based models called vertex models, which are often used to study epithelial tissues. In two-dimensional vertex models, a tissue is approximated as a tessellation of polygons and the vertices of these polygons move due to mechanical forces originating from the cells. Such models have been used extensively to study the mechanical regulation of tissue topology in the literature. Here, we analyse how the model predictions may be affected by numerical parameters, such as the size of the time step, and non-physical model parameters, such as length thresholds for cell rearrangement. We find that vertex positions and summary statistics are sensitive to several of these implementation parameters. For example, the predicted tissue size decreases with decreasing cell cycle durations, and cell rearrangement may be suppressed by large time steps. These findings are counter-intuitive and illustrate that model predictions need to be thoroughly analysed and implementation details carefully considered when applying cell-based computational models in a quantitative setting.
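
    For context on the class of models analysed, the mechanics of a two-dimensional vertex model are typically encoded in an energy with cell-area elasticity, perimeter contractility and line-tension terms, and vertices move to reduce that energy. The single-cell sketch below evaluates such an energy; the coefficients are illustrative, and the tissue-level coupling, force computation and rearrangement rules examined in the paper are not shown.

```python
import numpy as np

def polygon_area_perimeter(vertices):
    """vertices: (n, 2) array of a cell's vertex coordinates, ordered counter-clockwise."""
    x, y = vertices[:, 0], vertices[:, 1]
    x_next, y_next = np.roll(x, -1), np.roll(y, -1)
    area = 0.5 * np.sum(x * y_next - x_next * y)          # shoelace formula
    perimeter = np.sum(np.hypot(x_next - x, y_next - y))
    return area, perimeter

def cell_energy(vertices, a0=1.0, k_area=1.0, gamma=0.04, lam=-0.1):
    """Common vertex-model energy: area elasticity + perimeter contractility
    + line tension. Signs and magnitudes here are illustrative only."""
    a, p = polygon_area_perimeter(vertices)
    return k_area * (a - a0) ** 2 + gamma * p ** 2 + lam * p

if __name__ == "__main__":
    # Regular hexagon of unit area as a test cell.
    angles = np.arange(6) * np.pi / 3
    r = np.sqrt(2.0 / (3.0 * np.sqrt(3.0)))   # circumradius giving area = 1
    hexagon = np.column_stack([r * np.cos(angles), r * np.sin(angles)])
    print(f"energy of unit-area hexagon: {cell_energy(hexagon):.4f}")
```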

  9. Hemocoagulase atrox reduces vascular modeling in rabbit carotid artery adventitia

    Science.gov (United States)

    Wan, Sheng-Yun; Hu, Yuan-Cheng; Zhan, Yan-Qing; Qin, Dan-Dan; Ding, Yang

    2013-01-01

    Objective: This study aimed to compare the effects of hemocoagulase atrox and cauterization hemostasis on intimal hyperplasia and to explore the effect of hemocoagulase atrox on vascular modeling in the rabbit carotid artery adventitia. Methods: A total of 27 rabbits were randomly divided into 3 groups (0 d, 14 d, 28 d). They were anaesthetized using an intramuscular injection of phenobarbital sodium (1 ml/kg). The left and right common carotid arteries were exposed, and capillary hemorrhage was produced by blunt dissection of the adventitial layers of the common carotid arteries. The nine rabbits in each group were again randomly divided into 3 subgroups, in which animals were treated with hemocoagulase (2 U/ml), cauterization (power = 40 W) or saline (as control), respectively. Groups of animals were euthanized at 0, 14 and 28 days after surgery. The samples were divided at the middle of the adventitia-removal section to obtain equal parts for histologic, immunohistochemical and molecular biologic analysis. Vascular repair after adventitial stripping was observed by HE staining, Masson staining and transmission electron microscopy. The expression of carotid MCP-1, PCNA, TGF-β1, α-SMA and VEGF was measured at different time points by RT-PCR and immunohistochemical staining. Results: HE staining and Masson staining showed that hemocoagulase atrox had a significantly stronger effect on reducing intimal hyperplasia than cauterization after 14 and 28 days. The RT-PCR results showed that the expression of MCP-1, TGF-β1, α-SMA and VEGF in hemocoagulase atrox-treated animals was lower than in cauterization-treated animals. Conclusion: Our results suggest that hemocoagulase atrox is a safe and efficient topical hemostatic that can accelerate adventitial restoration and decrease intimal proliferation. PMID:24228100

  10. Pramipexole reduces inflammation in the experimental animal models of inflammation.

    Science.gov (United States)

    Sadeghi, Heibatollah; Parishani, Mohammad; Akbartabar Touri, Mehdi; Ghavamzadeh, Mehdi; Jafari Barmak, Mehrzad; Zarezade, Vahid; Delaviz, Hamdollah; Sadeghi, Hossein

    2017-04-01

    Pramipexole is a dopamine (DA) agonist (D2 subfamily receptors) that is widely used in the treatment of Parkinson's disease. Some epidemiological and genetic studies propose a role of inflammation in the pathophysiology of Parkinson's disease. To our knowledge, there is no study regarding the anti-inflammatory activity of pramipexole; the aim of this study was therefore to investigate its anti-inflammatory effect. Anti-inflammatory effects of pramipexole were studied in three well-characterized animal models of inflammation, including carrageenan- or formalin-induced paw inflammation in rats and 12-O-tetradecanoylphorbol-13-acetate (TPA)-induced ear edema in mice. The animals received pramipexole (0.25, 0.5 and 1 mg/kg, I.P.) 30 min before subplantar injection of carrageenan or formalin. Pramipexole (0.5 and 1 mg/kg) was also injected 30 min before topical application of TPA on the mouse ear. Serum malondialdehyde (MDA) levels were evaluated in the carrageenan test. Finally, pathological examination of the inflamed tissues was carried out. Pramipexole significantly inhibited paw inflammation 1, 2, 3 and 4 h after carrageenan challenge compared with the control group, and also showed considerable anti-inflammatory activity against formalin-evoked paw edema over a period of 24 h; TPA-induced ear edema was likewise attenuated, and pramipexole reduced tissue injury, neutrophil infiltration, and subcutaneous edema. Pramipexole did not alter the increased serum levels of MDA due to carrageenan injection. These data clearly indicate that pramipexole possesses significant anti-inflammatory activity; its antioxidant properties do not appear to play an important role in these effects.

  11. Implementing the water framework directive: contract design and the cost of measures to reduce nitrogen pollution from agriculture.

    Science.gov (United States)

    Bartolini, Fabio; Gallerani, Vittorio; Raggi, Meri; Viaggi, Davide

    2007-10-01

    The performance of different policy design strategies is a key issue in evaluating programmes for water quality improvement under the Water Framework Directive (60/2000). This issue is emphasised by information asymmetries between regulator and agents. Using an economic model under asymmetric information, the aim of this paper is to compare the cost-effectiveness of selected methods of designing payments to farmers in order to reduce nitrogen pollution in agriculture. A principal-agent model is used, based on profit functions generated through farm-level linear programming. This allows a comparison of flat rate payments and a menu of contracts developed through mechanism design. The model is tested in an area of Emilia Romagna (Italy) in two policy contexts: Agenda 2000 and the 2003 Common Agricultural Policy (CAP) reform. The results show that different policy design options lead to differences in policy costs as great as 200-400%, with clear advantages for the menu of contracts. However, different policy scenarios may strongly affect such differences. Hence, the paper calls for greater attention to the interplay between CAP scenarios and water quality measures.

  13. Development of Comprehensive Reduced Kinetic Models for Supersonic Reacting Shear Layer Simulations

    Science.gov (United States)

    Zambon, A. C.; Chelliah, H. K.; Drummond, J. P.

    2006-01-01

    Large-scale simulations of multi-dimensional unsteady turbulent reacting flows with detailed chemistry and transport can be computationally extremely intensive even on distributed computing architectures. With the development of suitable reduced chemical kinetic models, the number of scalar variables to be integrated can be decreased, leading to a significant reduction in the computational time required for the simulation with limited loss of accuracy in the results. A general MATLAB-based automated mechanism reduction procedure is presented to reduce any complex starting mechanism (detailed or skeletal) with minimal human intervention. Based on the application of the quasi steady-state (QSS) approximation for certain chemical species and on the elimination of the fast reaction rates in the mechanism, several comprehensive reduced models, capable of handling different fuels such as C2H4, CH4 and H2, have been developed and thoroughly tested for several combustion problems (ignition, propagation and extinction) and physical conditions (reactant compositions, temperatures, and pressures). A key feature of the present reduction procedure is the explicit solution of the concentrations of the QSS species, needed for the evaluation of the elementary reaction rates. In contrast, previous approaches relied on an implicit solution due to the strong coupling between QSS species, requiring computationally expensive inner iterations. A novel algorithm, based on the definition of a QSS species coupling matrix, is presented to (i) introduce appropriate truncations to the QSS algebraic relations and (ii) identify the optimal sequence for the explicit solution of the concentration of the QSS species. With the automatic generation of the relevant source code, the resulting reduced models can be readily implemented into numerical codes.
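
    As a rough illustration of the explicit QSS solution described above, the sketch below (Python, with invented species, rate constants and truncated balance expressions) shows how an ordering derived from a QSS coupling matrix lets each QSS concentration be computed directly from already-known quantities, without inner iterations. It is not taken from the paper's automatically generated code.

    ```python
    # Hypothetical sketch of the explicit, sequential QSS solution idea.
    # Species, rate constants and balance expressions are invented; real reduced
    # mechanisms use automatically generated source code for these algebra steps.

    def qss_concentrations(c, k):
        """Solve QSS species one at a time, in an order chosen (via the QSS
        coupling matrix) so each truncated balance depends only on transported
        species and previously solved QSS species."""
        qss = {}
        # 1) Balance involves only transported species: solve directly.
        qss["CH3"] = (k["f1"] * c["CH4"] * c["OH"]) / (k["c1"] * c["O2"] + k["c2"] * c["H"])
        # 2) Balance may use the QSS species solved above, but not vice versa,
        #    after truncating the weak reverse coupling.
        qss["CH2O"] = (k["f2"] * qss["CH3"] * c["O2"]) / (k["c3"] * c["OH"] + k["c4"])
        return qss

    # Made-up numbers, for illustration only:
    c = {"CH4": 1e-3, "OH": 1e-6, "O2": 5e-3, "H": 1e-7}
    k = {"f1": 1e8, "c1": 1e7, "c2": 1e9, "f2": 5e7, "c3": 2e8, "c4": 1e3}
    print(qss_concentrations(c, k))
    ```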

  14. Reduced Order Aeroservoelastic Models with Rigid Body Modes Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Complex aeroelastic and aeroservoelastic phenomena can be modeled on complete aircraft configurations generating models with millions of degrees of freedom. Starting...

  15. Implementation and Validation of IEC Generic Type 1A Wind Turbine Generator Model

    DEFF Research Database (Denmark)

    Zhao, Haoran; Wu, Qiuwei; Margaris, Ioannis

    2015-01-01

    This paper presents the implementation of the International Electrotechnical Commission (IEC) generic Type 1A wind turbine generator (WTG) model in Power Factory (PF) and the validation of the implemented model against field measurements. The IEC generic Type 1A WTG model structure is briefly...... described. The details are explained regarding how the two mass mechanical model is implemented when the generator mass is included in the PF built-in generator model. In order to verify the IEC generic Type 1A WTG model, the model to field measurement validation method was employed. The model to field...... the simulation results and measurements were calculated according to the voltage dip windows and the index definition specified in the IEC 61400-27-1 committee draft. Copyright © 2014 John Wiley & Sons, Ltd....

  16. Modelling and implementing electronic health records in Denmark

    DEFF Research Database (Denmark)

    Bernstein, Knut; Rasmussen, Morten Bruun; Vingtoft, Søren;

    2003-01-01

    The Danish Health IT strategy points out that integration between electronic health record (EHR) systems has a high priority. This paper reports new tendencies in modelling and integration platforms globally and how these are reflected in the national development.

  18. Some combinatorial models for reduced expressions in Coxeter groups

    CERN Document Server

    Denoncourt, Hugh

    2011-01-01

    Stanley's formula for the number of reduced expressions of a permutation regarded as a Coxeter group element raises the question of how to enumerate the reduced expressions of an arbitrary Coxeter group element. We provide a framework for answering this question by constructing combinatorial objects that represent the inversion set and the reduced expressions for an arbitrary Coxeter group element. The framework also provides a formula for the length of an element formed by deleting a generator from a Coxeter group element. Fan and Hagiwara, et al. showed that for certain Coxeter groups, the short-braid avoiding elements characterize those elements that give reduced expressions when any generator is deleted from a reduced expression. We provide a characterization that holds in all Coxeter groups. Lastly, we give applications to the freely braided elements introduced by Green and Losonczy, generalizing some of their results that hold in simply-laced Coxeter groups to the arbitrary Coxeter group setting.

  19. FPGA Implementation of Gaussian Mixture Model Algorithm for 47 fps Segmentation of 1080p Video

    Directory of Open Access Journals (Sweden)

    Mariangela Genovese

    2013-01-01

    Circuits and systems able to process high-quality video in real time are fundamental in today's imaging systems. The circuit proposed in the paper, aimed at the robust identification of the background in video streams, implements the improved formulation of the Gaussian Mixture Model (GMM) algorithm that is included in the OpenCV library. An innovative, hardware-oriented formulation of the GMM equations, the use of truncated binary multipliers, and ROM compression techniques allow reduced hardware complexity and increased processing capability. The proposed circuit has been designed with commercial FPGA devices as the target and provides speed and logic-resource occupation that surpass previously proposed implementations. The circuit, when implemented on a Virtex6 or StratixIV device, processes more than 45 frames per second in 1080p format and uses only a few percent of the FPGA logic resources.
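
    For reference, the same improved GMM formulation is available in software through OpenCV's MOG2 background subtractor; a minimal sketch of its use is given below. The input file name and parameter values are placeholders, and no claim is made that they match the hardware parameterization.

    ```python
    # Minimal software reference for the GMM background/foreground segmentation
    # that the FPGA circuit implements in hardware, using OpenCV's MOG2
    # background subtractor (the library's improved-GMM formulation).
    # The video file name is a placeholder.

    import cv2

    cap = cv2.VideoCapture("test_1080p.mp4")          # hypothetical input stream
    mog2 = cv2.createBackgroundSubtractorMOG2(history=500,
                                              varThreshold=16,
                                              detectShadows=False)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg_mask = mog2.apply(frame)                   # per-pixel GMM update + classification
        # fg_mask is 0 for background and 255 for foreground pixels
        cv2.imshow("foreground", fg_mask)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()
    ```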

  20. Model checking a cache coherence protocol for a Java DSM implementation

    NARCIS (Netherlands)

    J. Pang; W.J. Fokkink (Wan); R. Hofman (Rutger); R. Veldema

    2007-01-01

    Jackal is a fine-grained distributed shared memory implementation of the Java programming language. It aims to implement Java's memory model and allows multithreaded Java programs to run unmodified on a distributed memory system. It employs a multiple-writer cache coherence protocol.

  2. Teacher Candidates' Implementation of the Personal and Social Responsibility Model in Field Experiences

    Science.gov (United States)

    Lee, Okseon

    2012-01-01

    With the teacher concerns theory (Fuller, 1969) as a theoretical framework, this study has set out to examine how physical education teacher candidates perceive their implementation of the Personal and Social Responsibility Model (Hellison, 2003) and how they actually implement it during field experience. Five teacher candidates (three female, two…

  3. Moving toward Change: Institutionalizing Reform through Implementation of the Learning Assistant Model and Open Source Tutorials

    Science.gov (United States)

    Goertzen, Renee Michelle; Brewe, Eric; Kramer, Laird H.; Wells, Leanne; Jones, David

    2011-01-01

    Florida International University has undergone a reform in the introductory physics classes by focusing on the laboratory component of these classes. We present results from the secondary implementation of two research-based instructional strategies: the implementation of the Learning Assistant model as developed by the University of Colorado at…

  4. Career Guidance: An Implementation Model for Small High Schools. A Maxi I Practicum.

    Science.gov (United States)

    Stevens, Richard; And Others

    The purpose of this practicum was to design, develop, and implement a career guidance program for small high schools. The program description would act as a model for implementation at other high schools desiring a career guidance program. The method of communicating the program to others was the writing of a "how to" book which others would use…

  5. Integrating operational watershed and coastal models for the Iberian Coast: Watershed model implementation - A first approach

    Science.gov (United States)

    Brito, David; Campuzano, F. J.; Sobrinho, J.; Fernandes, R.; Neves, R.

    2015-12-01

    River discharges and loads are essential inputs to coastal seas, and thus to coastal-sea modelling, and their properties are the result of all activities and policies carried out inland. For these reasons the main rivers have been the object of intense monitoring programmes, which have generated an important amount of historical data. Due to the decline in the Portuguese hydrometric network, and in order to quantify and forecast surface water streamflow and nutrients reaching coastal areas, the MOHID Land model was applied to the Western Iberia region with a 2 km horizontal resolution and to the Iberian Peninsula with a 10 km horizontal resolution. The domains were populated with land use and soil properties and forced with existing meteorological models. This approach also makes it possible to understand how the flows and loads are generated and to forecast their values, which is of utmost importance for coastal ocean and estuarine forecasts. The final purpose of the implementation is to obtain fresh water quantity and quality estimates that can support management decisions in the watershed and reservoirs as well as in estuaries and coastal areas. A process-oriented model such as MOHID Land is essential to perform this type of simulation, as the model is independent of the number of river catchments. In this work, the MOHID Land model equations and parameterisations are described and an innovative methodology for watershed modelling is presented and validated for a large international river, the Tagus River, and the largest national river of Portugal, the Mondego River. Precipitation, streamflow and nutrient modelling results for these two rivers were compared with observations near their coastal outlets in order to evaluate the model's capacity to represent the main watershed trends. Finally, an annual budget of fresh water and nutrients transported by the main twenty-five rivers discharging on the Portuguese coast is presented.

  6. Modelling and Implementation of Network Coding for Video

    Directory of Open Access Journals (Sweden)

    Can Eyupoglu

    2016-08-01

    In this paper, we investigate Network Coding for Video (NCV), which we apply to video streaming over wireless networks. NCV provides a basis for network coding. We use the NCV algorithm to increase throughput and video quality. When designing the NCV algorithm, we take into account the deadline as well as the decodability of the video packet at the receiver. In network coding, packets from different flows are combined into a single packet at intermediate nodes and forwarded to other nodes over the wireless network. Many problems can occur during transmission on the wireless channel, and network coding plays an important role in dealing with them. We observe the benefits of network coding for throughput increase thanks to the broadcast nature of wireless networks. The aim of this study is to implement the NCV algorithm in the C programming language, taking as input the video packets generated by the H.264 video codec. In our experiments, we investigated improvements in terms of video quality and throughput under different scenarios.
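
    The toy sketch below illustrates only the basic inter-flow coding idea behind such schemes, in which an intermediate node XORs packets from two flows and broadcasts the result; the NCV algorithm itself additionally weighs packet deadlines and decodability at the receivers, which is not modelled here.

    ```python
    # Toy illustration of inter-flow network coding: an intermediate node XORs
    # packets from two flows and broadcasts the coded packet; each receiver
    # recovers the missing packet using the one it already holds. Deadlines and
    # decodability handling of the NCV algorithm are deliberately omitted.

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        n = max(len(a), len(b))
        a = a.ljust(n, b"\x00")
        b = b.ljust(n, b"\x00")
        return bytes(x ^ y for x, y in zip(a, b))

    p1 = b"video packet of flow A"
    p2 = b"video packet of flow B"

    coded = xor_bytes(p1, p2)        # single broadcast transmission

    # Receiver 1 already holds p2 (overheard), so it recovers p1:
    assert xor_bytes(coded, p2).rstrip(b"\x00") == p1
    # Receiver 2 already holds p1, so it recovers p2:
    assert xor_bytes(coded, p1).rstrip(b"\x00") == p2
    ```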

  7. Implementing the WebSocket Protocol Based on Formal Modelling and Automated Code Generation

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael

    2014-01-01

    ...with pragmatic annotations for automated code generation of protocol software. The contribution of this paper is an application of the approach as implemented in the PetriCode tool to obtain protocol software implementing the IETF WebSocket protocol. This demonstrates the scalability of our approach to real... protocols. Furthermore, we perform formal verification of the CPN model prior to code generation, and test the implementation for interoperability against the Autobahn WebSocket test-suite resulting in 97% and 99% success rate for the client and server implementation, respectively. The tests show...

  8. Implementation of Gravity Model to Estimation of Transportation Market Shares

    Science.gov (United States)

    Krata, Przemysław

    2010-03-01

    The theoretical considerations presented in the paper are inspired by market gravity models, an interesting approach to operations research on a market. Transportation market issues are emphasised. The mathematical model of the relations taking place between transportation companies and their customers on the market, which is applied in the course of the research, is based on continuous functions. This approach enables the use of notions from field theory. The resulting vector-type utility function facilitates obtaining competitive-advantage areas for all transportation companies located on the considered transportation market.
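
    As a rough illustration of the gravity-model idea (not the paper's continuous vector-field utility formulation), the sketch below computes market shares for a single customer location from hypothetical company "masses" and distances.

    ```python
    # Generic gravity-model sketch: the attraction a customer located at distance
    # d_j from transport company j feels is proportional to the company's "mass"
    # (e.g. capacity or service level) and decays with distance; market shares
    # follow by normalisation. All numbers are invented for illustration.

    def market_shares(masses, distances, beta=2.0):
        """Share of each company for one customer location."""
        attraction = [m / d**beta for m, d in zip(masses, distances)]
        total = sum(attraction)
        return [a / total for a in attraction]

    masses = [120.0, 80.0, 50.0]      # hypothetical company sizes
    distances = [10.0, 6.0, 15.0]     # hypothetical distances to the customer (km)
    shares = market_shares(masses, distances)
    print([round(s, 3) for s in shares])   # roughly [0.33, 0.61, 0.06]
    ```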

  9. A model program to reduce patient failure to keep scheduled medical appointments.

    Science.gov (United States)

    Schmalzried, Hans D; Liszak, Joseph

    2012-06-01

    Community health center clinics that rely on scheduled appointments lose revenue and time when patients do not keep their appointments. Various approaches have been used to improve the rate of patient appointments kept. This article provides a model intervention program developed by a quality improvement committee at a Northwest Ohio community health center that is credited with significantly reducing rates of patient failure to keep scheduled medical and dental clinic appointments. The approach of this intervention program is different from others in that it was primarily designed to help patients learn how to become part of the solution to the problem. Community health center staff accomplishes this through engaging patients in a respectful and courteous manner and helping them understand the importance of their involvement in maintaining an efficient scheduling process to benefit all patients. Data collected from outpatient appointment records before and after implementation of the program indicate that missed appointments dropped to less than half the pre-intervention rate.

  10. Understanding the Effect of Baseline Modeling Implementation Choices on Analysis of Demand Response Performance

    Energy Technology Data Exchange (ETDEWEB)

    University of California, Berkeley; Addy, Nathan; Kiliccote, Sila; Mathieu, Johanna; Callaway, Duncan S.

    2012-06-13

    Accurate evaluation of the performance of buildings participating in Demand Response (DR) programs is critical to the adoption and improvement of these programs. Typically, we calculate load sheds during DR events by comparing observed electric demand against counterfactual predictions made using statistical baseline models. Many baseline models exist and these models can produce different shed calculations. Moreover, modelers implementing the same baseline model can make different modeling implementation choices, which may affect shed estimates. In this work, using real data, we analyze the effect of different modeling implementation choices on shed predictions. We focused on five issues: weather data source, resolution of data, methods for determining when buildings are occupied, methods for aligning building data with temperature data, and methods for power outage filtering. Results indicate sensitivity to the weather data source and data filtration methods as well as an immediate potential for automation of methods to choose building occupied modes.
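
    A minimal sketch of one such baseline choice is given below (Python/pandas): the counterfactual for an event day is the average demand profile of the previous N eligible non-event days, and the shed is baseline minus observed. The column names, the choice of N, and the absence of weather adjustment are all assumptions; the paper's point is precisely that implementation choices of this kind move the resulting shed estimates.

    ```python
    # Sketch of a load-shed calculation against a simple "average of previous
    # similar days" baseline. Data layout and parameters are assumptions.

    import pandas as pd

    def shed_estimate(demand: pd.DataFrame, event_day, baseline_days, n=10):
        """demand: kW readings with a DatetimeIndex and a 'kw' column."""
        days = baseline_days[-n:]                        # last n eligible non-event days
        profiles = []
        for d in days:
            day = demand.loc[str(d), "kw"]
            profiles.append(day.groupby(day.index.time).mean())
        baseline = pd.concat(profiles, axis=1).mean(axis=1)   # average profile by time of day
        observed = demand.loc[str(event_day), "kw"]
        observed = observed.groupby(observed.index.time).mean()
        return baseline - observed                       # positive values = load shed (kW)
    ```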

  11. Transgenic Mouse Model for Reducing Oxidative Damage in Bone

    Science.gov (United States)

    Schreurs, A.-S.; Torres, S.; Truong, T.; Kumar, A.; Alwood, J. S.; Limoli, C. L.; Globus, R. K.

    2014-01-01

    Exposure to musculoskeletal disuse and radiation result in bone loss; we hypothesized that these catabolic treatments cause excess reactive oxygen species (ROS), and thereby alter the tight balance between bone resorption by osteoclasts and bone formation by osteoblasts, culminating in bone loss. To test this, we used transgenic mice which over-express the human gene for catalase, targeted to mitochondria (MCAT). Catalase is an anti-oxidant that converts the ROS hydrogen peroxide into water and oxygen. MCAT mice were shown previously to display reduced mitochondrial oxidative stress and radiosensitivity of the CNS compared to wild type controls (WT). As expected, MCAT mice expressed the transgene in skeletal tissue, and in marrow-derived osteoblasts and osteoclast precursors cultured ex vivo, and also showed greater catalase activity compared to WT mice (3-6-fold). Colony expansion in marrow cells cultured under osteoblastogenic conditions was 2-fold greater in the MCAT mice compared to WT mice, while the extent of mineralization was unaffected. MCAT mice had slightly longer tibiae than WT mice (2%, P < 0.01), although cortical bone area was slightly lower in MCAT mice than WT mice (10%, p=0.09). To challenge the skeletal system, mice were treated by exposure to combined disuse (2 wk Hindlimb Unloading) and total body irradiation (Cs-137, 2 Gy, 0.8 Gy/min), then bone parameters were analyzed by 2-factor ANOVA to detect possible interaction effects. Treatment caused a 2-fold increase (p=0.015) in malondialdehyde levels of bone tissue (ELISA) in WT mice, but had no effect in MCAT mice. These findings indicate that the transgene conferred protection from oxidative damage caused by treatment. Unexpected differences between WT and MCAT mice emerged in skeletal responses to treatment. In WT mice, treatment did not alter osteoblastogenesis, cortical bone area, moment of inertia, or bone perimeter, whereas in MCAT mice, treatment increased these

  12. Modeling performance measurement applications and implementation issues in DEA

    CERN Document Server

    Cook, Wade D

    2005-01-01

    Addresses advanced/new DEA methodology and techniques developed for modeling unique and new performance evaluation issues; presents new DEA methodology and techniques via discussions on how to solve managerial problems; and provides an easy-to-use DEA software package, DEAFrontier (www.deafrontier.com), which is an excellent tool for both DEA researchers and practitioners.

  13. Implementing Relevance Feedback in the Bayesian Network Retrieval Model.

    Science.gov (United States)

    de Campos, Luis M.; Fernandez-Luna, Juan M.; Huete, Juan F.

    2003-01-01

    Discussion of relevance feedback in information retrieval focuses on a proposal for the Bayesian Network Retrieval Model. Bases the proposal on the propagation of partial evidences in the Bayesian network, representing new information obtained from the user's relevance judgments to compute the posterior relevance probabilities of the documents…

  14. I-STEM Ed Exemplar: Implementation of the PIRPOSAL Model

    Science.gov (United States)

    Wells, John G.

    2016-01-01

    The opening pages of the first PIRPOSAL (Problem Identification, Ideation, Research, Potential Solutions, Optimization, Solution Evaluation, Alterations, and Learned Outcomes) article make the case that the instructional models currently used in K-12 Science, Technology, Engineering, and Mathematics (STEM) Education fall short of conveying their…

  15. Inferring Requirement Goals from Model Implementing in UML

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    UML is widely used in many software development processes. However, it does not make requirement goals explicit. Here we describe a method intended to establish the semantic relationship between requirement goals and UML models. Before the method is introduced, some relevant concepts are described.

  16. Child Health Improvement through Implementation of Food Safety Model

    Directory of Open Access Journals (Sweden)

    Arief Safari

    2016-06-01

    Access to food is a basic component in developing quality human resources. However, problems remain in achieving this, among them food safety, with the percentage of food poisoning cases still high. This study aims to analyse the current implementation of food safety for school children's snack food (Pangan Jajanan Anak Sekolah, PJAS) and to select the alternative food safety model that is most effective and efficient to apply in micro and small enterprises (UMK) in order to improve the safety of the food they produce and thereby improve children's health. The research was conducted from the second quarter to the beginning of the third quarter of 2015 through field surveys and expert surveys, taking primary schools as case studies. The field survey involved 102 respondents to enable a situational analysis, and an expert survey was used to select the most effective and efficient food safety model to be applied to PJAS micro and small enterprises using the Analytical Hierarchy Process. The field survey showed that 91% of schoolchild respondents had experienced health problems after consuming PJAS. In addition, 100% of the PJAS enterprise respondents did not use masks and gloves before processing food or drinks, 62% still used well water as the water source for PJAS production, and 86% used food additives. The expert survey showed that the Five Keys to Safer Food model was selected as the most effective and efficient food safety model for PJAS micro and small enterprises.

  17. High power electronics package: from modeling to implementation

    NARCIS (Netherlands)

    Yuan, C.A.; Kregting, R.; Ye, H.; Driel, W. van; Gielen, A.W.J.; Zhang, G.Q.

    2011-01-01

    Power electronics, such as high power RF components and high power LEDs, requires the combination of robust and reliable package structures, materials, and processes to guarantee their functional performance and lifetime. We started with the thermal and thermal-mechanical modeling of such component

  18. Implementing Kuhlthau: A New Model for Library and Reference Instruction.

    Science.gov (United States)

    Isbell, Dennis; Kammerlocher, Lisa

    1998-01-01

    Summarizes Carol Kuhlthau's research on the information search process. Discusses how Kuhlthau's model of students' information search process (ISP) has been integrated into a course at Arizona State University and is being used experimentally as a training tool in the library's reference services. Selected student responses to research process…

  19. The Implementing Model of Empowering Eight for Information Literacy

    Science.gov (United States)

    Boeriswati, Endry

    2012-01-01

    Information literacy is the awareness and skills to identify, locate, evaluate, organize, create, use and communicate information to solve or resolve problems. This article is the result of the research on the efforts to improve students' problem-solving skills in the "Research Methods" course through "Empowering Eight: Information Literacy Model"…

  20. Foreign Models of Financial Equalization, Prospects for Implementation in Ukraine

    Directory of Open Access Journals (Sweden)

    Piontko Nataliia B.

    2015-09-01

    The article aims to identify the models of financial equalization applied in foreign countries and to substantiate the possibilities for using foreign experience with financial equalization, or individual elements of such models, in Ukraine, since taking the foreign tools of financial equalization into consideration in the context of State regional policy reform is a priority and urgent task of the present day. Through the generalization and systematization of scientific works by numerous domestic and foreign scientists, models of financial equalization have been identified depending on the form of state structure in the country. Determinants of the need for financial equalization were analyzed, such as the imbalance between own financial resources and the level of assigned tasks, and the level of fiscal decentralization. Methods of income and expenditure equalization, applied at the vertical or horizontal level to balance regional development, have been substantiated. Features of expanding the financial security of budgets by using innovative equalization tools have been determined. A comparison of the models of financial equalization in foreign countries was made, and the major tasks for improving the mechanism and organization of the financial equalization of budgets in Ukraine were defined. Prospects for further research in this area include the diversification of tools for financial equalization, defining the investment component in the structure of budget revenues, and studying the activities of sub-central authorities in the financial market.

  1. Possibilities of Land Administration Domain Model (LADM) implementation in Nigeria

    NARCIS (Netherlands)

    Babalola, S.O.; Rahman, A.A.; Choon, L.T.; Van Oosterom, P.J.M.

    2015-01-01

    LADM covers the essential information components associated with land administration and management, including those over water and elements above and below the surface of the earth. The LADM standard provides an abstract conceptual model with three packages and one sub-package. LADM defined terminology for a

  2. MODELS AND SOLUTIONS FOR THE IMPLEMENTATION OF DISTRIBUTED SYSTEMS

    Directory of Open Access Journals (Sweden)

    Tarca Naiana

    2011-07-01

    Software applications may have different degrees of complexity depending on the problems they try to solve, and can integrate very complex elements that bring together functionality that is sometimes competing or conflicting. Take, for example, a mobile communications system. The functionalities of such a system are difficult to understand, and added to them are non-functional requirements such as usability, performance, cost, durability and security. The transition from local computer networks to wide-area networks that connect millions of machines around the world at speeds exceeding one gigabit per second has allowed universal access to data and the design of applications that require the simultaneous use of the computing power of several interconnected systems. These technologies have enabled the evolution from centralized to distributed systems that connect a large number of computers. To exploit the advantages of distributed systems, software and communication tools have been developed that enable the implementation of distributed processing for complex solutions. The objective of this document is to present the hardware, software and communication tools closely related to the possibility of their application at an integrated social and economic level, as a result of globalization and the evolution of the e-society. These objectives and national priorities are based on the current needs and realities of Romanian society, while being consistent with the requirements of Romania's European orientation towards the knowledge society, strengthening the information society, the target goal being the accomplishment of e-Romania, with its strategic e-government component. Achieving this objective repositions Romania and provides an advantage for sustainable growth, a positive international image, rapid convergence in Europe, inclusion and the strengthening of areas of high competence, in line with Europe 2020, launched by the

  3. Roadmap for Lean implementation in Indian automotive component manufacturing industry: comparative study of UNIDO Model and ISM Model

    Science.gov (United States)

    Jadhav, J. R.; Mantha, S. S.; Rane, S. B.

    2014-07-01

    The demand for automobiles increased drastically in the last two and a half decades in India. Many global automobile manufacturers and Tier-1 suppliers have already set up research, development and manufacturing facilities in India. The Indian automotive component industry started implementing Lean practices to fulfill the demand of these customers. The United Nations Industrial Development Organization (UNIDO), in association with the Automotive Component Manufacturers Association of India (ACMA) and the Government of India, has taken a proactive approach since 1999 to assist Indian SMEs in various clusters and make them globally competitive. The primary objectives of this research are to study the UNIDO-ACMA Model as well as the ISM Model of Lean implementation and to validate the ISM Model by comparing it with the UNIDO-ACMA Model. It also aims at presenting a roadmap for Lean implementation in the Indian automotive component industry. This paper is based on secondary data, which include research articles, web articles, doctoral theses, survey reports and books on the automotive industry in the fields of Lean, JIT and ISM. The ISM Model for Lean practice bundles was developed by the authors in consultation with Lean practitioners. The UNIDO-ACMA Model has six stages whereas the ISM Model has eight phases for Lean implementation. The ISM-based Lean implementation model is validated through a high degree of similarity with the UNIDO-ACMA Model. The major contribution of this paper is the proposed ISM Model for sustainable Lean implementation. The ISM-based Lean implementation framework presents greater insight into the implementation process at a more micro level compared to the UNIDO-ACMA Model.

  4. Caries risk assessment in school children using a reduced Cariogram model without saliva tests

    DEFF Research Database (Denmark)

    Petersson, Gunnel Hänsel; Isberg, Per-Erik; Twetman, Svante

    2010-01-01

    To investigate the caries predictive ability of a reduced Cariogram model without salivary tests in schoolchildren.

  5. Development of Transformations from Business Process Models to Implementations by Reuse

    NARCIS (Netherlands)

    Dirgahayu, Teduh; Quartel, Dick; Sinderen, van Marten

    2007-01-01

    This paper presents an approach for developing transformations from business process models to implementations that facilitates reuse. A transformation is developed as a composition of three smaller tasks: pattern recognition, pattern realization and activity transformation. The approach allows one

  6. Development of transformations from business process models to implementations by reuse

    NARCIS (Netherlands)

    Dirgahayu, Teduh; Quartel, Dick; Sinderen, van Marten; Ferreira Pires, L.; Hammoudi, S.

    2007-01-01

    This paper presents an approach for developing transformations from business process models to implementations that facilitates reuse. A transformation is developed as a composition of three smaller tasks: pattern recognition, pattern realization and activity transformation. The approach allows one

  7. Modelling and implementation of 1-3 piezocomposite side scan sonar array

    CSIR Research Space (South Africa)

    Shatalov, MY

    2005-06-01

    Conference “Underwater Acoustic Measurements: Technologies & Results”, Heraklion, Crete, Greece, 28th June – 1st July 2005. Modelling and Implementation of 1-3 Piezocomposite Side Scan Sonar Array. Michael Shatalov*, Jeremy Wallis*, Kiri...

  8. Reduced Noise Effect in Nonlinear Model Estimation Using Multiscale Representation

    Directory of Open Access Journals (Sweden)

    Mohamed N. Nounou

    2010-01-01

    Nonlinear process models are widely used in various applications. In the absence of fundamental models, empirical models, which are estimated from measurements of the process variables, are usually relied upon. Unfortunately, measured data are usually corrupted with measurement noise that degrades the accuracy of the estimated models. Multiscale wavelet-based representation of data has been shown to be a powerful data analysis and feature extraction tool. In this paper, these characteristics of multiscale representation are utilized to improve the estimation accuracy of linear-in-the-parameters nonlinear models by developing a multiscale nonlinear (MSNL) modeling algorithm. The main idea in this MSNL modeling algorithm is to decompose the data at multiple scales, construct multiple nonlinear models at multiple scales, and then select, among all scales, the model which best describes the process. The main advantage of the developed algorithm is that it integrates modeling and feature extraction to improve the robustness of the estimated model to the presence of measurement noise in the data. This advantage of MSNL modeling is demonstrated using a nonlinear reactor model.
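
    A simplified sketch of this multiscale idea is shown below (Python with NumPy and PyWavelets): the noisy output is filtered at several wavelet scales, the same linear-in-the-parameters model is fitted at each scale, and the scale whose model best predicts a validation set is kept. The wavelet, model basis and scale-selection rule are assumptions, not the paper's exact algorithm.

    ```python
    # Simplified multiscale nonlinear (MSNL) modeling sketch.

    import numpy as np
    import pywt

    def denoise_at_scale(y, level, wavelet="db4"):
        coeffs = pywt.wavedec(y, wavelet, mode="periodization")
        level = min(level, len(coeffs) - 1)          # never zero the approximation band
        # zero the `level` finest detail bands
        for i in range(len(coeffs) - level, len(coeffs)):
            coeffs[i] = np.zeros_like(coeffs[i])
        return pywt.waverec(coeffs, wavelet, mode="periodization")[: len(y)]

    def fit_model(x, y):
        # linear-in-the-parameters nonlinear model: y ~ a*x + b*x**2 + c
        X = np.column_stack([x, x**2, np.ones_like(x)])
        theta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return theta

    def msnl(x, y, x_val, y_val, max_level=4):
        best = None
        for level in range(max_level + 1):           # level 0 = raw (unfiltered) data
            ys = y if level == 0 else denoise_at_scale(y, level)
            theta = fit_model(x, ys)
            pred = np.column_stack([x_val, x_val**2, np.ones_like(x_val)]) @ theta
            err = np.mean((pred - y_val) ** 2)
            if best is None or err < best[0]:
                best = (err, level, theta)
        return best   # (validation MSE, chosen scale, parameters)
    ```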

  9. A Parallel and Distributed Surrogate Model Implementation for Computational Steering

    KAUST Repository

    Butnaru, Daniel

    2012-06-01

    Understanding the influence of multiple parameters in a complex simulation setting is a difficult task. In the ideal case, the scientist can freely steer such a simulation and is immediately presented with the results for a certain configuration of the input parameters. Such an exploration process is however not possible if the simulation is computationally too expensive. For these cases we present in this paper a scalable computational steering approach utilizing a fast surrogate model as substitute for the time-consuming simulation. The surrogate model we propose is based on the sparse grid technique, and we identify the main computational tasks associated with its evaluation and its extension. We further show how distributed data management combined with the specific use of accelerators allows us to approximate and deliver simulation results to a high-resolution visualization system in real-time. This significantly enhances the steering workflow and facilitates the interactive exploration of large datasets. © 2012 IEEE.
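
    The paper's surrogate is based on adaptive sparse grids evaluated with accelerators; as a stand-in, the sketch below shows only the generic surrogate-in-the-loop pattern (fit a cheap interpolant to sampled simulation runs, then answer interactive steering queries from the interpolant). The RBF interpolant and the placeholder run_simulation function are illustrative assumptions.

    ```python
    # Generic surrogate-in-the-loop sketch: precompute simulation results at
    # sampled parameter points, fit a cheap interpolant, and answer interactive
    # steering queries from the interpolant instead of the expensive simulation.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def run_simulation(p):                      # placeholder for the expensive model
        x, y = p
        return np.sin(3 * x) * np.cos(2 * y)

    rng = np.random.default_rng(0)
    params = rng.uniform(0.0, 1.0, size=(200, 2))       # sampled input parameters
    values = np.array([run_simulation(p) for p in params])

    surrogate = RBFInterpolator(params, values, smoothing=1e-6)

    # Interactive steering: evaluate the surrogate on a fine grid in milliseconds
    # and hand the field to the visualization front-end.
    grid = np.mgrid[0:1:100j, 0:1:100j].reshape(2, -1).T
    field = surrogate(grid).reshape(100, 100)
    ```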

  10. Numerical Implementation of the Hoek-Brown Material Model with Strain Hardening

    DEFF Research Database (Denmark)

    Sørensen, Emil Smed; Clausen, Johan; Damkilde, Lars

    2013-01-01

    A numerical implementation of the Hoek-Brown criterion is presented, which is capable of modeling important aspects of the different post-failure behaviors observed in jointed rock mass. This is done by varying the material parameters based on the accumulated plastic strains. The implementation....... The constitutive model is demonstrated on a simulation of a tunnel excavation and the results are compared with an analytical solution for a tunnel excavation in elastic-brittle rock material....
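
    For context, the strength envelope of the generalized Hoek-Brown criterion can be evaluated as sketched below (Python, with invented rock-mass parameters); the paper's actual contribution, making these parameters functions of accumulated plastic strain inside a finite-element implementation, is not reproduced here.

    ```python
    # Evaluation of the generalized Hoek-Brown failure criterion with invented
    # parameter values; the strain hardening/softening and the FE return mapping
    # described in the paper are not reproduced.

    import numpy as np

    def hoek_brown_params(gsi, mi, D=0.0):
        """Rock-mass parameters from GSI, intact-rock constant mi and disturbance D."""
        mb = mi * np.exp((gsi - 100.0) / (28.0 - 14.0 * D))
        s = np.exp((gsi - 100.0) / (9.0 - 3.0 * D))
        a = 0.5 + (np.exp(-gsi / 15.0) - np.exp(-20.0 / 3.0)) / 6.0
        return mb, s, a

    def sigma1_at_failure(sigma3, sigma_ci, mb, s, a):
        """Major principal stress at failure for a given confinement (compression positive)."""
        return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

    # Hypothetical rock mass:
    mb, s, a = hoek_brown_params(gsi=60.0, mi=12.0, D=0.0)
    sig3 = np.linspace(0.0, 5.0, 6)                 # MPa
    print(sigma1_at_failure(sig3, sigma_ci=50.0, mb=mb, s=s, a=a))
    ```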

  11. Object relationship notation (ORN) for database applications enhancing the modeling and implementation of associations

    CERN Document Server

    Ehlmann, Bryon K

    2009-01-01

    Conceptually, a database consists of objects and relationships. Object Relationship Notation (ORN) is a simple notation that more precisely defines relationships by combining UML multiplicities with uniquely defined referential actions. ""Object Relationship Notation (ORN) for Database Applications: Enhancing the Modeling and Implementation of Associations"" shows how ORN can be used in UML class diagrams and database definition languages (DDLs) to better model and implement relationships and thus more productively develop database applications. For the database developer, it presents many exa

  12. Using memristor crossbar structure to implement a novel adaptive real time fuzzy modeling algorithm

    OpenAIRE

    Afrakoti, Iman Esmaili Paeen; Shouraki, Saeed Bagheri; Merrikhbayat, Farnood

    2013-01-01

    Although fuzzy techniques promise fast and accurate modeling and control abilities for complicated systems, different difficulties have been revealed in real-world implementations. Usually there is no escape from iterative optimization based on crisp-domain algorithms. Recently, memristor structures have appeared promising for implementing neural network structures and fuzzy algorithms. In this paper a novel adaptive real-time fuzzy modeling algorithm is proposed which uses active learning me...

  14. Re-constructible CMM software system modeling and its implementation

    Science.gov (United States)

    Bai, Y. W.; Wei, S. Y.; Yang, X. H.; Liu, S. P.

    2008-12-01

    This paper presents a novel approach to modeling a re-constructible CMM software system by taking advantage of a tiered modeling strategy. It consists of four tiers: (1) the bottom layer is the CAD model manager, which encapsulates the geometric engine and the 3D object display engine as a COM component; (2) the middle layer comprises the kernel components, designed to manage the objects of geometric entities, coordinate systems, probes and the system environment parameters, etc.; (3) the third layer is the function-module layer, used to manage and handle the messages and events of the windows/dialogs, menus and toolbars; (4) the top layer is the GUI module, designed to initialize the application with GUI resources in a dynamic-loading manner. The method has been applied in a commercial CMM software package, Direct DMIS, developed in an R&D project of the China National Institute of Measuring and Test Technology (NIMTT). The results show that the developed system can effectively integrate modules distributed in different layers and developed with C++ or C#, and that the proposed method is feasible.

  15. A Model for Usability Evaluation for the Development and Implementation of Consumer eHealth Interventions.

    Science.gov (United States)

    Parry, David; Carter, Philip; Koziol-McLain, Jane; Feather, Jacqueline

    2015-01-01

    Consumer eHealth products are often used by people in their own homes or other settings without dedicated clinical supervision, and often with minimal training and limited support, much as eCommerce and eGovernment applications are currently deployed. Internet-based self-care systems have been advocated for over a decade as a way to reduce costs and allow more convenient care, and, because of the expectation that they will reduce health costs, to increase self-care and avoid hospitalization. However, the history of consumer eHealth interventions is mixed, with many unsuccessful implementations. Many consumer eHealth products will form part of a broader complex intervention, with many possible benefits and effects on both individuals and society. This poster describes a model of consumer eHealth assessment based on multiple methods of usability evaluation at different stages in the design and fielding of eHealth systems. We argue that different methods of usability evaluation are able to give valuable insights into the likely effects of an intervention in a way that is congruent with software development processes.

  16. Implementation of depth-dependent soil concentrations in multimedia mass balance models.

    NARCIS (Netherlands)

    Hollander, A.; Hessels, L.; Voogt, P De; Meent, D. van de

    2004-01-01

    In standard multimedia mass balance models, the soil compartment is modeled as a box with uniform concentrations, which often does not correspond with actual field situations. Therefore, the theoretically expected decrease of soil concentrations with depth was implemented in the multimedia model Sim
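
    A minimal sketch of such a depth-dependent profile is given below (Python): the concentration decreases exponentially with depth with a characteristic penetration depth, and a depth-averaged value over the soil layer follows analytically. The numerical values are invented, and the sketch does not reproduce the exact formulation implemented in the multimedia model.

    ```python
    # Exponentially decreasing soil concentration profile and its depth average.
    # Penetration depth and layer thickness are invented illustration values.

    import numpy as np

    def concentration_profile(c_surface, z, z_p):
        """C(z) = C(0) * exp(-z / z_p), with z in m and penetration depth z_p in m."""
        return c_surface * np.exp(-z / z_p)

    def depth_averaged(c_surface, depth, z_p):
        """Average concentration over a soil layer of the given depth."""
        return c_surface * z_p / depth * (1.0 - np.exp(-depth / z_p))

    c0 = 1.0                              # arbitrary surface concentration
    z = np.linspace(0.0, 0.2, 5)          # depths in m
    print(concentration_profile(c0, z, z_p=0.05))
    print(depth_averaged(c0, depth=0.2, z_p=0.05))   # roughly 0.245 * c0
    ```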

  18. Description and Rationale for the Planning, Monitoring, and Implementation (PMI) Model: Description.

    Science.gov (United States)

    Ford, Valeria A.

    The design of the Planning, Monitoring, and Implementation Model (PMI) and the aspects of the model that make it useful in public schools are the topics of this paper. After the objectives of a program or operation have been identified, the model specifies three additional pieces of information that are needed for an evaluation: inputs, processes,…

  19. Implementation of Structured Inquiry Based Model Learning toward Students' Understanding of Geometry

    Science.gov (United States)

    Salim, Kalbin; Tiawa, Dayang Hjh

    2015-01-01

    The purpose of this study is the implementation of a structured inquiry learning model in the instruction of geometry. A quasi-experimental design was used, with two sample classes selected from a population of ten classes using a cluster random sampling technique. The data collection tool consists of a test item…

  20. The Implementation of Character Education Model Based on Empowerment Theatre for Primary School Students

    Science.gov (United States)

    Anggraini, Purwati; Kusniarti, Tuti

    2016-01-01

    This study aimed at constructing a character education model implemented in primary schools. The research method was qualitative, with five samples in total, comprising primary schools in Malang city and regency and one school serving as a pilot model. The pilot model was instructed by a theatre coach teacher, parents, and the school community. The result showed that…

  1. A Conceptual Model to Implement an Interactive and Collaborative Enterprise 2.0

    Directory of Open Access Journals (Sweden)

    Domenico CONSOLI

    2013-01-01

    To implement an interactive and collaborative Enterprise 2.0, it is important to have, inside the company, the organizational and technological preconditions. In this model of advanced enterprise, internal workers must collaborate among themselves to communicate with all external subjects of the supply chain and achieve business goals. The implementation process is a critical and complex procedure that requires a strategic plan for the introduction and adoption of the innovation. In this paper the individual actions to follow for the implementation of the new business model, with all the determinant factors and variables, are described.

  2. Digital hardware implementation of a stochastic two-dimensional neuron model.

    Science.gov (United States)

    Grassia, F; Kohno, T; Levi, T

    2017-02-22

    This study explores the feasibility of stochastic neuron simulation in digital systems (FPGA), realizing an implementation of a two-dimensional neuron model. Stochasticity is added via a source of current noise in the silicon neuron, modelled as an Ornstein-Uhlenbeck process. The approach uses digital computation to emulate individual neuron behavior using fixed-point arithmetic operations. The neuron model's computations are performed in arithmetic pipelines. It was designed in the VHDL language and simulated prior to mapping onto the FPGA. The experimental results confirmed the validity of the developed stochastic FPGA implementation, which makes the implementation of the silicon neuron more biologically plausible for future hybrid experiments.
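
    A floating-point sketch of the stochastic scheme is given below (Python), with an Ornstein-Uhlenbeck noise current integrated by Euler-Maruyama alongside a two-dimensional neuron model; FitzHugh-Nagumo is used here only as a stand-in, since the silicon neuron implements a different qualitative 2-D model in fixed-point pipelines.

    ```python
    # Ornstein-Uhlenbeck noise current driving a two-dimensional neuron model
    # (FitzHugh-Nagumo stand-in); parameters are illustrative assumptions.

    import numpy as np

    def simulate(T=200.0, dt=0.01, tau_n=5.0, sigma=0.3, I0=0.5, seed=0):
        rng = np.random.default_rng(seed)
        n_steps = int(T / dt)
        v, w, eta = -1.0, -0.5, 0.0
        trace = np.empty(n_steps)
        for i in range(n_steps):
            # Ornstein-Uhlenbeck noise current (Euler-Maruyama step)
            eta += (-eta / tau_n) * dt + sigma * np.sqrt(2.0 * dt / tau_n) * rng.standard_normal()
            # Two-dimensional neuron dynamics driven by deterministic + noise current
            dv = v - v**3 / 3.0 - w + I0 + eta
            dw = 0.08 * (v + 0.7 - 0.8 * w)
            v += dv * dt
            w += dw * dt
            trace[i] = v
        return trace

    membrane = simulate()
    ```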

  3. A system dynamics evaluation model: implementation of health information exchange for public health reporting.

    Science.gov (United States)

    Merrill, Jacqueline A; Deegan, Michael; Wilson, Rosalind V; Kaushal, Rainu; Fredericks, Kimberly

    2013-06-01

    To evaluate the complex dynamics involved in implementing electronic health information exchange (HIE) for public health reporting at a state health department, and to identify policy implications to inform similar implementations. Qualitative data were collected over 8 months from seven experts at New York State Department of Health who implemented web services and protocols for querying, receipt, and validation of electronic data supplied by regional health information organizations. Extensive project documentation was also collected. During group meetings experts described the implementation process and created reference modes and causal diagrams that the evaluation team used to build a preliminary model. System dynamics modeling techniques were applied iteratively to build causal loop diagrams representing the implementation. The diagrams were validated iteratively by individual experts followed by group review online, and through confirmatory review of documents and artifacts. Three casual loop diagrams captured well-recognized system dynamics: Sliding Goals, Project Rework, and Maturity of Resources. The findings were associated with specific policies that address funding, leadership, ensuring expertise, planning for rework, communication, and timeline management. This evaluation illustrates the value of a qualitative approach to system dynamics modeling. As a tool for strategic thinking on complicated and intense processes, qualitative models can be produced with fewer resources than a full simulation, yet still provide insights that are timely and relevant. System dynamics techniques clarified endogenous and exogenous factors at play in a highly complex technology implementation, which may inform other states engaged in implementing HIE supported by federal Health Information Technology for Economic and Clinical Health (HITECH) legislation.

  4. Development and Implementation of a Program Management Maturity Model

    Energy Technology Data Exchange (ETDEWEB)

    Hartwig, Laura; Smith, Matt

    2008-12-15

    In 2006, Honeywell Federal Manufacturing & Technologies (FM&T) announced an updated vision statement for the organization. The vision is “To be the most admired team within the NNSA [National Nuclear Security Administration] for our relentless drive to convert ideas into the highest quality products and services for National Security by applying the right technology, outstanding program management and best commercial practices.” The challenge to provide outstanding program management was taken up by the Program Management division and the Program Integration Office (PIO) of the company. This article describes how Honeywell developed and deployed a program management maturity model to drive toward excellence.

  5. The Implementation of Life Space Crisis Intervention as a School-Wide Strategy for Reducing Violence and Supporting Students' Continuation in Public Schools

    Science.gov (United States)

    Ramin, John E.

    2011-01-01

    The purpose of this study was to explore the effectiveness of implementing Life Space Crisis Intervention as a school-wide strategy for reducing school violence. Life Space Crisis Intervention (LSCI) is a strength-based verbal interaction strategy (Long, Fecser, Wood, 2001). LSCI utilizes naturally occurring crisis situations as teachable…

  6. Reducing fragmentation in the care of frail older people: the successful development and implementation of the Health and Welfare Information Portal

    NARCIS (Netherlands)

    Robben, S.H.M.; Heinen, M.M.; Makai, P.; Olde Rikkert, M.G.M.; Perry, M.; Schers, H.J.; Melis, R.J.F.

    2013-01-01

    REDUCING FRAGMENTATION IN THE CARE OF FRAIL OLDER PEOPLE: THE SUCCESSFUL DEVELOPMENT AND IMPLEMENTATION OF THE HEALTH AND WELFARE INFORMATION PORTAL: Our fragmented health care systems are insufficiently equipped to provide frail older people with high quality of care. Therefore, we developed the He

  8. Chemically enhancing primary clarifiers: model-based development of a dosing controller and full-scale implementation.

    Science.gov (United States)

    Tik, Sovanna; Vanrolleghem, Peter A

    2017-03-01

    Chemically enhanced primary treatment (CEPT) can be used to mitigate the adverse effect of wet weather flow on wastewater treatment processes. In particular, it can reduce the particulate pollution load to subsequent secondary unit processes, such as biofiltration, which may suffer from clogging by an overload of particulate matter. In this paper, a simple primary clarifier model able to take into account the effect of the addition of chemicals on particle settling is presented. Control strategies that optimize the treatment process by chemical addition were designed and tested by running simulations with this CEPT model. The most adequate control strategy in terms of treatment performance, chemicals saving, and maintenance effort was selected. Full-scale implementation of the controller was performed during the autumn of 2015, and the results obtained confirmed the behaviour of the controlled system. Practical issues related to the implementation are presented.
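
    As a toy illustration of the kind of dosing rule such a controller might compute, the sketch below combines a flow-proportional feedforward dose with a feedback trim on a measured particulate signal; the setpoint, gains and signal names are assumptions and do not reproduce the model-based controller developed in the paper.

    ```python
    # Toy coagulant-dosing rule: flow-proportional feedforward dose trimmed by
    # feedback on a measured effluent TSS proxy. All parameters are invented.

    def coagulant_dose(flow_m3_h, tss_out_mg_l, base_dose_g_m3=20.0,
                       tss_setpoint_mg_l=60.0, kp=0.2,
                       dose_min=0.0, dose_max=60.0):
        """Return the coagulant dose in g/m3 and the dosing rate in kg/h."""
        feedback = kp * (tss_out_mg_l - tss_setpoint_mg_l)   # trim dose on deviation
        dose = min(max(base_dose_g_m3 + feedback, dose_min), dose_max)
        pump_rate_kg_h = dose * flow_m3_h / 1000.0
        return dose, pump_rate_kg_h

    print(coagulant_dose(flow_m3_h=3500.0, tss_out_mg_l=95.0))   # wet-weather example
    ```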

  9. Numerical simulations of a reduced model for blood coagulation

    Science.gov (United States)

    Pavlova, Jevgenija; Fasano, Antonio; Sequeira, Adélia

    2016-04-01

    In this work, the three-dimensional numerical resolution of a complex mathematical model for the blood coagulation process is presented. The model was illustrated in Fasano et al. (Clin Hemorheol Microcirc 51:1-14, 2012), Pavlova et al. (Theor Biol 380:367-379, 2015). It incorporates the action of the biochemical and cellular components of blood as well as the effects of the flow. The model is characterized by a reduction in the biochemical network and considers the impact of the blood slip at the vessel wall. Numerical results showing the capacity of the model to predict different perturbations in the hemostatic system are discussed.

  10. Reduced model-based decision-making in schizophrenia.

    Science.gov (United States)

    Culbreth, Adam J; Westbrook, Andrew; Daw, Nathaniel D; Botvinick, Matthew; Barch, Deanna M

    2016-08-01

    Individuals with schizophrenia have a diminished ability to use reward history to adaptively guide behavior. However, tasks traditionally used to assess such deficits often rely on multiple cognitive and neural processes, leaving etiology unresolved. In the current study, we adopted recent computational formalisms of reinforcement learning to distinguish between model-based and model-free decision-making in hopes of specifying mechanisms associated with reinforcement-learning dysfunction in schizophrenia. Under this framework, decision-making is model-free to the extent that it relies solely on prior reward history, and model-based if it relies on prospective information such as motivational state, future consequences, and the likelihood of obtaining various outcomes. Model-based and model-free decision-making was assessed in 33 schizophrenia patients and 30 controls using a 2-stage 2-alternative forced choice task previously demonstrated to discern individual differences in reliance on the 2 forms of reinforcement-learning. We show that, compared with controls, schizophrenia patients demonstrate decreased reliance on model-based decision-making. Further, parameter estimates of model-based behavior correlate positively with IQ and working memory measures, suggesting that model-based deficits seen in schizophrenia may be partially explained by higher-order cognitive deficits. These findings demonstrate specific reinforcement-learning and decision-making deficits and thereby provide valuable insights for understanding disordered behavior in schizophrenia. (PsycINFO Database Record

  11. Mesh-free Hamiltonian implementation of two dimensional Darwin model

    Science.gov (United States)

    Siddi, Lorenzo; Lapenta, Giovanni; Gibbon, Paul

    2017-08-01

    A new approach to Darwin or magnetoinductive plasma simulation is presented, which combines a mesh-free field solver with a robust time-integration scheme avoiding numerical divergence errors in the solenoidal field components. The mesh-free formulation employs an efficient parallel Barnes-Hut tree algorithm to speed up the computation of fields summed directly from the particles, avoiding the necessity of divergence cleaning procedures typically required by particle-in-cell methods. The time-integration scheme employs a Hamiltonian formulation of the Lorentz force, circumventing the development of violent numerical instabilities associated with time differentiation of the vector potential. It is shown that a semi-implicit scheme converges rapidly and is robust to further numerical instabilities which can develop from a dominant contribution of the vector potential to the canonical momenta. The model is validated by various static and dynamic benchmark tests, including a simulation of the Weibel-like filamentation instability in beam-plasma interactions.

  12. IMPLEMENTATION OF INERTIAL NAVIGATION SYSTEM MODEL DURING AIRCRAFT TESTING

    Directory of Open Access Journals (Sweden)

    2016-01-01

    Full Text Available Control of the flight subset is required during aviation equipment test flights. To achieve this objective, a complex consisting of a strapdown inertial navigation system (SINS) and the user equipment of satellite navigation systems (SNS) can be used. Such a combination is needed to correct the positioning error that accumulates in the SINS over time. This article presents research results for the inertial navigation system (INS) model. The results of the positioning error calculation for various INS classes are given; each of the examined INS has a different accumulated error for the same time lag. The methods of combining INS and SRNS information are covered. The results obtained can be applied to upgrading aircraft flight and navigation complexes. In particular, they allow continuous determination of speed, coordinates, angular orientation and the rate of change of the instrument-frame axes.
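
    The record does not spell out the fusion algorithm, but the core idea, bounding the position error that accumulates in the inertial solution with occasional satellite fixes, can be illustrated with a one-dimensional scalar Kalman filter. The sketch below is a hypothetical toy example (the bias, noise variances and update rates are assumptions, not values from the article):

```python
import numpy as np

# Illustrative 1-D position example (not the INS/SNS complex of the record):
# an inertial solution drifts because of a small uncorrected accelerometer
# bias, while occasional satellite fixes are fused with a scalar Kalman filter.
rng = np.random.default_rng(0)
dt, steps = 0.1, 600           # 60 s of flight, 10 Hz inertial updates
bias = 0.05                    # assumed accelerometer bias, m/s^2
q, r = 5.0, 25.0               # process and satellite-fix noise variances

truth_v, truth_p = 0.0, 0.0
est_p, P = 0.0, 1.0            # fused position estimate and its variance
ins_p, ins_v = 0.0, 0.0        # pure inertial (drifting) solution

for k in range(steps):
    a_true = np.sin(0.1 * k * dt)             # arbitrary true acceleration
    truth_v += a_true * dt
    truth_p += truth_v * dt

    a_meas = a_true + bias                    # biased accelerometer reading
    ins_v += a_meas * dt
    ins_p += ins_v * dt                       # inertial-only position drifts

    # Prediction step: propagate the fused position with the INS velocity.
    est_p += ins_v * dt
    P += q * dt

    if k % 10 == 0:                           # 1 Hz satellite position fix
        z = truth_p + rng.normal(0.0, np.sqrt(r))
        K = P / (P + r)                       # Kalman gain
        est_p += K * (z - est_p)
        P *= (1.0 - K)

print(f"inertial-only error: {abs(ins_p - truth_p):.1f} m, "
      f"fused error: {abs(est_p - truth_p):.1f} m")
```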

  13. The R Package threg to Implement Threshold Regression Models

    Directory of Open Access Journals (Sweden)

    Tao Xiao

    2015-08-01

    This new package includes four functions: threg, and the methods hr, predict and plot for threg objects returned by threg. The threg function is the model-fitting function which is used to calculate regression coefficient estimates, asymptotic standard errors and p values. The hr method for threg objects is the hazard-ratio calculation function which provides the estimates of hazard ratios at selected time points for specified scenarios (based on given categories or value settings of covariates). The predict method for threg objects is used for prediction. The plot method for threg objects provides plots for curves of estimated hazard functions, survival functions and probability density functions of the first-hitting-time; function curves corresponding to different scenarios can be overlaid in the same plot for comparison to give additional research insights.
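
    The package itself is written in R, and the functions named above are its public interface. For readers unfamiliar with the underlying first-hitting-time idea, the following Python sketch (not part of threg) evaluates the inverse-Gaussian density of the first time a latent Wiener health process with an assumed initial level y0 and negative drift mu reaches zero, and derives survival and hazard curves from it numerically:

```python
import numpy as np

def fht_density(t, y0=2.0, mu=-0.5, sigma=1.0):
    """Inverse-Gaussian density of the first time a Wiener process with
    initial level y0 > 0 and drift mu (< 0, toward the boundary at zero)
    hits zero -- the latent construction behind threshold regression."""
    return (y0 / (sigma * np.sqrt(2.0 * np.pi * t**3))
            * np.exp(-(y0 + mu * t) ** 2 / (2.0 * sigma**2 * t)))

# Survival and hazard functions obtained numerically from the density.
t = np.linspace(1e-3, 20.0, 2000)
f = fht_density(t)
F = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(t))))
S = np.clip(1.0 - F, 1e-12, None)   # survival function
h = f / S                            # hazard function

print(f"P(T <= 10) ~ {F[t.searchsorted(10.0)]:.3f}")
```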

  14. Modernizing Corporate MIS: from Information System Modelling to Implementation

    CERN Document Server

    Dheur, E; Martens, R; Petrilli, A; Smale, B

    1992-01-01

    This paper presents CERN's Advanced Informatics Support (AIS) project which aims at an innovative modernization of all aspects of informatics applied to the administrative processes of the Laboratory. The project began in August 1990 and is based on a proprietary operating system, relational data base system and scalable multi-processor server hardware. An analysis of the existing applications environment and its weaknesses is given in the introduction. The objectives of the project are then described. A justification of the strategic choices is given in the CERN context. The Information System Study, leading to a global data and function model for the Laboratory and the detailed area analysis of the top priority three areas are then described. An analysis is made of the benefits and disadvantages of the use of Oracle CASE in such a study and the compromises required when commercial applications packages were chosen.

  15. Systematic model for lean product development implementation in an automotive related company

    Directory of Open Access Journals (Sweden)

    Daniel Osezua Aikhuele

    2017-07-01

    Full Text Available Lean product development is a major innovative business strategy that employs sets of practices to achieve efficient, innovative and sustainable product development. Despite the many benefits and high hopes placed in the lean strategy, many companies are still struggling and unable to either achieve or sustain substantial positive results with their lean implementation efforts. As a first step towards addressing this issue, this paper proposes a systematic model that considers the administrative and implementation limitations of lean thinking practices in the product development process. The model, which integrates fuzzy Shannon’s entropy with the Modified Technique for Order Preference by Similarity to the Ideal Solution (M-TOPSIS) to rank lean product development practices against criteria including management and leadership, financial capabilities, skills and expertise, and organizational culture, provides a guide or roadmap for product development managers on the lean implementation route.
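
    The abstract names fuzzy Shannon's entropy and M-TOPSIS but gives no formulas. As a simplified, crisp sketch of the two-step idea (entropy-based criterion weights followed by a TOPSIS closeness ranking, leaving out the fuzzy and modified-ranking refinements of the paper), with an entirely hypothetical decision matrix:

```python
import numpy as np

# Hypothetical decision matrix: rows = lean practices, columns = criteria
# (management and leadership, financial capability, skills and expertise,
# organisational culture), all treated as benefit criteria.
X = np.array([[7.0, 5.0, 8.0, 6.0],
              [6.0, 7.0, 5.0, 8.0],
              [8.0, 6.0, 6.0, 5.0],
              [5.0, 8.0, 7.0, 7.0]])

# 1) Shannon entropy weights (crisp version of the fuzzy entropy step).
p = X / X.sum(axis=0)
k = 1.0 / np.log(X.shape[0])
entropy = -k * (p * np.log(p)).sum(axis=0)
w = (1.0 - entropy) / (1.0 - entropy).sum()

# 2) TOPSIS: weighted vector-normalised matrix, ideal and anti-ideal points,
#    Euclidean distances and relative closeness.
V = w * X / np.sqrt((X**2).sum(axis=0))
ideal, anti = V.max(axis=0), V.min(axis=0)
d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
closeness = d_neg / (d_pos + d_neg)

print("entropy weights:", np.round(w, 3))
print("ranking (best practice first):", np.argsort(-closeness) + 1)
```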

  16. A PROPOSAL OF A PROCESS MODEL FOR POSTAL ELECTRONIC SERVICE IMPLEMENTATION

    Directory of Open Access Journals (Sweden)

    Bystrík Nemček

    2015-09-01

    Full Text Available This article is dedicated to one of the main business processes: implementation of the postal electronic service. The theoretical part focuses on Business Process Management (BPM), describing it as a field in systems engineering concerned with representing the processes of an enterprise so that the current process may be analyzed or improved. The main aim of the practical part was to design a model of postal electronic service implementation. The proposed model is designed in Business Process Model Notation (BPMN), which is a graphical representation for specifying business processes in a business process model

  17. Implementation of a Fractional Model-Based Fault Detection Algorithm into a PLC Controller

    Science.gov (United States)

    Kopka, Ryszard

    2014-12-01

    This paper presents results related to the implementation of model-based fault detection and diagnosis procedures into a typical PLC controller. To construct the mathematical model and to implement the PID regulator, a non-integer order differential/integral calculation was used. Such an approach allows for more exact control of the process and more precise modelling. This is very crucial in model-based diagnostic methods. The theoretical results were verified on a real object in the form of a supercapacitor connected to a PLC controller by a dedicated electronic circuit controlled directly from the PLC outputs.
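
    The abstract does not state which discretisation of the non-integer order operators was used in the PLC. A common choice for such implementations is the truncated Grünwald-Letnikov sum, sketched below for an order-alpha derivative (the test signal and step size are illustrative only):

```python
import numpy as np

def gl_fractional_derivative(y, alpha, h):
    """Truncated Grunwald-Letnikov approximation of the order-alpha
    derivative of the sampled signal y with step h.  The binomial
    coefficients follow the standard recursion
    c_0 = 1, c_j = c_{j-1} * (1 - (alpha + 1) / j)."""
    n = len(y)
    c = np.empty(n)
    c[0] = 1.0
    for j in range(1, n):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    d = np.empty(n)
    for k in range(n):
        d[k] = np.dot(c[:k + 1], y[k::-1]) / h**alpha
    return d

# Example: half-order derivative of f(t) = t, whose exact value is 2*sqrt(t/pi).
h = 0.001
t = np.arange(0.0, 2.0, h)
approx = gl_fractional_derivative(t, 0.5, h)
exact = 2.0 * np.sqrt(t / np.pi)
print(f"max error on (0, 2]: {np.abs(approx - exact)[1:].max():.4f}")
```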

  18. Implementing Badhwar-O'Neill Galactic Cosmic Ray Model for the Analysis of Space Radiation Exposure

    Science.gov (United States)

    Kim, Myung-Hee Y.; O'Neill, Patrick M.; Slaba, Tony C.

    2014-01-01

    For the analysis of radiation risks to astronauts and the planning of exploratory space missions, an accurate energy spectrum of galactic cosmic radiation (GCR) is necessary. Characterization of the ionizing radiation environment is challenging because the interplanetary plasma and radiation fields are modulated by solar disturbances, and the radiation doses received by astronauts in interplanetary space are likewise influenced. A model of the Badhwar-O'Neill 2011 (BO11) GCR environment, which is represented by the GCR deceleration potential theta, has been derived by utilizing all of the GCR measurements from balloons, satellites, and the newer NASA Advanced Composition Explorer (ACE). In the BO11 model, the solar modulation level is derived from the mean international sunspot numbers with a time delay, which has been calibrated with actual flight instrument measurements to produce a better GCR flux data fit during solar minima. GCR fluxes provided by the BO11 model were compared with various spacecraft measurements at 1 AU, and further comparisons were made with tissue equivalent proportional counter measurements in low Earth orbit using the high-charge and energy transport (HZETRN) code and various GCR models. For the comparison of the absorbed dose and dose equivalent calculations with the measurements by the Radiation Assessment Detector (RAD) at Gale crater on Mars, the intensities and energies of GCR entering the heliosphere were calculated using the BO11 model, which accounts for time-dependent attenuation of the local interstellar spectrum of each element. The BO11 model, which emphasizes the last 24 solar minima, showed relatively good agreement with the RAD data for the first 200 sols, but performed less well near the solar maximum of solar cycle 24 due to subtleties in the changing heliospheric conditions. By performing an error analysis of the BO11 model and optimizing it to reduce the overall uncertainty, the resultant BO13 model

  19. Numerical simulation and reduced-order modeling of a flapping airfoil

    Science.gov (United States)

    Lewin, Gregory Carl

    Recent advances in many fields have made the design of micro-aerial vehicles that implement flapping wings a possibility. However, there are many outstanding problems that must be solved before flapping flight can be implemented as a practical means of propulsion. This dissertation focuses on two important aspects of flapping flight: the physics of the flow of a fluid around a heaving airfoil and the development of a reduced-order model for the control of a flapping airfoil. To study the physics of the flow, a numerical model for two-dimensional flow around an airfoil undergoing prescribed oscillatory motions in a viscous flow is developed. The model is used to examine the flow characteristics and power coefficients of a symmetric airfoil heaving sinusoidally over a range of frequencies and amplitudes. Both periodic and aperiodic solutions are found. Additionally, some flows are asymmetric in that the up-stroke is not a mirror image of the down-stroke. For a given Strouhal number---defined as the product of dimensionless frequency and heave amplitude---the maximum efficiency occurs at an intermediate heaving frequency. This is in contrast to ideal flow models, in which efficiency increases monotonically as frequency decreases. Below a threshold frequency, the separation of the leading edge vortices early in each stroke reduces the force on the airfoil and leads to diminished thrust and efficiency. Above the optimum frequency, the efficiency decreases similarly to inviscid theory. For most cases, the efficiency can be correlated to interactions between leading and trailing edge vortices, with positive reinforcement leading to relatively high efficiency, and negative reinforcement leading to relatively low efficiency. Additionally, the efficiency is related to the proximity of the heaving frequency to the frequency of the most spatially unstable mode of the average velocity profile of the wake; the greatest efficiency occurs when the two frequencies are nearly
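
    As a small numerical illustration of the quantities discussed above, the sketch below forms the Strouhal number as defined in the text (dimensionless frequency times heave amplitude) and a commonly used propulsive efficiency, mean thrust power over mean heaving input power, from entirely made-up force histories (none of the numbers come from the dissertation):

```python
import numpy as np

# Made-up kinematics and force histories for a heaving foil, used only to
# show how the quantities above are typically formed.
U = 1.0                                  # free-stream speed (dimensionless)
f_star, h_star = 0.25, 0.6               # dimensionless frequency and heave amplitude
St = f_star * h_star                     # Strouhal number as defined in the text

t = np.linspace(0.0, 4.0 / f_star, 4000)
h_dot = 2.0 * np.pi * f_star * h_star * np.cos(2.0 * np.pi * f_star * t)
thrust = 0.10 + 0.05 * np.cos(4.0 * np.pi * f_star * t)   # synthetic thrust history
lift = -0.8 * np.cos(2.0 * np.pi * f_star * t)            # synthetic transverse force

# A common definition of propulsive efficiency: mean thrust power delivered
# to the stream divided by the mean power needed to heave the foil.
input_power = np.mean(-lift * h_dot)
efficiency = thrust.mean() * U / input_power
print(f"St = {St:.2f}, propulsive efficiency = {efficiency:.2f}")
```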

  20. Model-reduced gradient-based history matching

    NARCIS (Netherlands)

    Kaleta, M.P.

    2011-01-01

    Since the world's energy demand increases every year, the oil & gas industry makes a continuous effort to improve fossil fuel recovery. Physics-based petroleum reservoir modeling and closed-loop model-based reservoir management concept can play an important role here. In this concept measured data a

  1. Sensitivity study of reduced models of the activated sludge process ...

    African Journals Online (AJOL)

    2009-08-07

    Aug 7, 2009 ... primary task of any modern control design is to construct and identify a model ... In this case the problem can be solved if the influence of the parameters ..... the main concept of the enzyme reactions in the UCT model is Sads ...

  3. Implementation of a capsular bag model to enable sufficient lens stabilization within a mechanical eye model

    Science.gov (United States)

    Bayer, Natascha; Rank, Elisabet; Traxler, Lukas; Beckert, Erik; Drauschke, Andreas

    2015-03-01

    Cataract remains the leading cause of blindness, affecting 20 million people worldwide. To restore the patient's vision, the natural lens is removed and replaced by an intraocular lens (IOL). In modern cataract surgery the posterior capsular bag is maintained to prevent inflammation and to enable stabilization of the implant. Refractive changes following cataract surgery are attributable to lens misalignments occurring due to postoperative shifts and tilts of the artificial lens. Mechanical eye models allow a preoperative investigation of the impact of such misalignments and are crucial to improve the quality of the patients' sense of sight. Furthermore, the success of sophisticated IOLs that correct high order aberrations depends on a critical evaluation of the lens position. A new type of IOL holder is designed and implemented into a preexisting mechanical eye model. A physiological representation of the capsular bag is realized with an integrated film element to guarantee lens stabilization and centering. The positioning sensitivity of the IOL is evaluated by performing shifts and tilts in reference to the optical axis. The modulation transfer function is used to measure the optical quality at each position. Lens stability tests within the holder itself are performed by determining the modulation transfer function before and after the measurement sequence. Mechanical stability and reproducible measurement results are guaranteed with the novel capsular bag model that allows a precise interpretation of postoperative lens misalignments. The integrated film element offers additional stabilization during the measurement routine without damaging the haptics or deteriorating the optical performance.

  4. Exploring the Process of Implementing Healthy Workplace Initiatives: Mapping to Kotter's Leading Change Model.

    Science.gov (United States)

    Chappell, Stacie; Pescud, Melanie; Waterworth, Pippa; Shilton, Trevor; Roche, Dee; Ledger, Melissa; Slevin, Terry; Rosenberg, Michael

    2016-10-01

    The aim of this study was to use Kotter's leading change model to explore the implementation of workplace health and wellbeing initiatives. Qualitative interviews were conducted with 31 workplace representatives with a healthy workplace initiative. None of the workplaces used a formal change management model when implementing their healthy workplace initiatives. Not all of the steps in Kotter's model were considered necessary and the order of the steps was challenged. For example, interviewees perceived that communicating the vision, developing the vision, and creating a guiding coalition were integral parts of the process, although there was less emphasis on the importance of creating a sense of urgency and consolidating change. Although none of the workplaces reported using a formal organizational change model when implementing their healthy workplace initiatives, there did appear to be perceived merit in using the steps in Kotter's model.

  5. Implementation of meso-scale radioactive dispersion model for GPU

    Energy Technology Data Exchange (ETDEWEB)

    Sunarko [National Nuclear Energy Agency of Indonesia (BATAN), Jakarta (Indonesia). Nuclear Energy Assessment Center; Suud, Zaki [Bandung Institute of Technology (ITB), Bandung (Indonesia). Physics Dept.

    2017-05-15

    The Lagrangian Particle Dispersion Method (LPDM) is applied to model the atmospheric dispersion of radioactive material at a meso-scale of a few tens of kilometers for site study purposes. Empirical relationships are used to determine the dispersion coefficient for various atmospheric stabilities. A diagnostic 3-D wind field is solved based on data from one meteorological station using the mass-conservation principle. Particles representing the radioactive pollutant are released from a point source and dispersed in the wind field. The time-integrated air concentration is calculated using a kernel density estimator (KDE) in the lowest layer of the atmosphere. A parallel code is developed with CUDA for a GTX-660Ti GPU with a total of 1 344 scalar processors. A test of a 1-hour release shows that linear speedup is achieved from 28 800 particles per hour (pph) up to about 20x at 144 000 pph. Another test simulating a 6-hour release with 36 000 pph resulted in a speedup of about 60x. Statistical analysis reveals that resulting grid doses are nearly identical in both CPU and GPU versions of the code.
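
    The abstract does not give the kernel form or bandwidth used. A minimal sketch of how a time-integrated concentration field can be estimated from Lagrangian particle positions with a two-dimensional Gaussian kernel density estimator, with illustrative particle counts, grid and bandwidth, might look as follows:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical particle positions (km), already restricted to the lowest
# atmospheric layer and accumulated over the time steps of a release.
n_particles = 5000
x = rng.normal(5.0, 1.5, n_particles)
y = rng.normal(0.0, 0.8, n_particles)

weight = 1.0 / n_particles        # notional activity carried by each particle
bandwidth = 0.5                   # kernel width in km (an assumption)

# Regular grid on which the time-integrated concentration field is evaluated.
gx, gy = np.meshgrid(np.linspace(0.0, 10.0, 101), np.linspace(-4.0, 4.0, 81))

conc = np.zeros_like(gx)
norm = weight / (2.0 * np.pi * bandwidth**2)
for px, py in zip(x, y):          # add one 2-D Gaussian kernel per particle
    r2 = (gx - px) ** 2 + (gy - py) ** 2
    conc += norm * np.exp(-0.5 * r2 / bandwidth**2)

print(f"peak of the kernel-density field (arbitrary units): {conc.max():.4f}")
```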

  6. Designing and implementing a regional urban modeling system using the SLEUTH cellular urban model

    Science.gov (United States)

    Jantz, C.A.; Goetz, S.J.; Donato, D.; Claggett, P.

    2010-01-01

    This paper presents a fine-scale (30 meter resolution) regional land cover modeling system, based on the SLEUTH cellular automata model, that was developed for a 257000 km2 area comprising the Chesapeake Bay drainage basin in the eastern United States. As part of this effort, we developed a new version of the SLEUTH model (SLEUTH-3r), which introduces new functionality and fit metrics that substantially increase the performance and applicability of the model. In addition, we developed methods that expand the capability of SLEUTH to incorporate economic, cultural and policy information, opening up new avenues for the integration of SLEUTH with other land-change models. SLEUTH-3r is also more computationally efficient (by a factor of 5) and uses less memory (reduced 65%) than the original software. With the new version of SLEUTH, we were able to achieve high accuracies at both the aggregate level of 15 sub-regional modeling units and at finer scales. We present forecasts to 2030 of urban development under a current trends scenario across the entire Chesapeake Bay drainage basin, and three alternative scenarios for a sub-region within the Chesapeake Bay watershed to illustrate the new ability of SLEUTH-3r to generate forecasts across a broad range of conditions. © 2009 Elsevier Ltd.

  7. Crop Model Improvement Reduces the Uncertainty of the Response to Temperature of Multi-Model Ensembles

    Science.gov (United States)

    Maiorano, Andrea; Martre, Pierre; Asseng, Senthold; Ewert, Frank; Mueller, Christoph; Roetter, Reimund P.; Ruane, Alex C.; Semenov, Mikhail A.; Wallach, Daniel; Wang, Enli

    2016-01-01

    To improve climate change impact estimates and to quantify their uncertainty, multi-model ensembles (MMEs) have been suggested. Model improvements can improve the accuracy of simulations and reduce the uncertainty of climate change impact assessments. Furthermore, they can reduce the number of models needed in an MME. Herein, 15 wheat growth models of a larger MME were improved through re-parameterization and/or incorporating or modifying heat stress effects on phenology, leaf growth and senescence, biomass growth, and grain number and size using detailed field experimental data from the USDA Hot Serial Cereal experiment (calibration data set). Simulation results from before and after model improvement were then evaluated with independent field experiments from a CIMMYT worldwide field trial network (evaluation data set). Model improvements decreased the variation (10th to 90th model ensemble percentile range) of grain yields simulated by the MME on average by 39% in the calibration data set and by 26% in the independent evaluation data set for crops grown in mean seasonal temperatures greater than 24 °C. MME mean squared error in simulating grain yield decreased by 37%. A reduction in MME uncertainty range by 27% increased MME prediction skills by 47%. Results suggest that the mean level of variation observed in field experiments and used as a benchmark can be reached with half the number of models in the MME. Improving crop models is therefore important to increase the certainty of model-based impact assessments and allow more practical, i.e., smaller, MMEs to be used effectively.
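
    The uncertainty metric used here, the 10th to 90th percentile range of ensemble-simulated yields, together with the ensemble mean squared error, is straightforward to compute; the sketch below does so for a hypothetical 15-member ensemble before and after an assumed improvement (the numbers are synthetic, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)
observed = 4.0                                   # t/ha, hypothetical trial yield
before = rng.normal(4.0, 1.2, 15)                # 15 models before improvement
after = rng.normal(4.0, 0.7, 15)                 # the same models after improvement

def spread(sims):
    """Ensemble uncertainty as the 10th to 90th percentile range of yields."""
    lo, hi = np.percentile(sims, [10, 90])
    return hi - lo

def mse(sims, obs):
    """Mean squared error of the ensemble members against the observation."""
    return np.mean((sims - obs) ** 2)

reduction = 100.0 * (1.0 - spread(after) / spread(before))
print(f"spread reduced by {reduction:.0f}%, "
      f"MSE {mse(before, observed):.2f} -> {mse(after, observed):.2f}")
```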

  8. A Model for Providing Guidance Services in Elementary Schools: A Generalist-Preventive Approach. Implemented Model. Maxi II Practicum.

    Science.gov (United States)

    Horne, Sydney B.

    The purpose of this practicum was to develop, implement, and evaluate a model for elementary school guidance at Northwoods Elementary School, if the need for such a model could be demonstrated. The need was demonstrated, the model was developed and tested. Subsequent investigation demonstrated that guidance services were increased as a result of…

  9. Design and implementation of space physics multi-model application integration based on web

    Science.gov (United States)

    Jiang, Wenping; Zou, Ziming

    With the growth of research on the space environment and space science, providing an online computing environment for space weather, space environment and space physics models has become increasingly important for the Chinese scientific community in recent years. Currently, a space physics multi-model application integrated system (SPMAIS) can follow one of two software modes, C/S or B/S. The traditional, stand-alone C/S mode demands a team or workshop drawn from many disciplines and specialties to build its own multi-model application integrated system, and requires the client software to be deployed in different physical regions whenever a user visits the integrated system. This requirement brings two shortcomings: it reduces the efficiency of the researchers who use the models for computation, and it makes accessing the data inconvenient. It is therefore necessary to create a shared network resource access environment that lets users quickly reach the computing resources of space physics models from a terminal for space science research and space environment forecasting. The SPMAIS is developed in B/S mode as a high-performance, first-principles system based on computational models of the space environment, and uses these models to predict space weather, to interpret space mission data and to further our understanding of the solar system. The main goal of the SPMAIS is to provide an easy and convenient user-driven online model operating environment. Up to now, the SPMAIS has contained dozens of space environment models, including the international AP8/AE8, IGRF and T96 models, as well as a solar proton prediction model, a geomagnetic transmission model and other models developed by Chinese scientists. Another function of the SPMAIS is to integrate space observation data sets that supply input data for online high-speed model computation. In this paper, a service-oriented architecture (SOA) concept that divides

  10. Overweight in young males reduce fertility in rabbit model

    National Research Council Canada - National Science Library

    Francisco Marco-Jiménez; José Salvador Vicente

    2017-01-01

    ... parameters and fertility success in randomized controlled trial in a rabbit model. Fourteen male rabbits were randomly assigned to a control group in which nutritional requirements were satisfied or a group fed...

  11. Quantum mechanics can reduce the complexity of classical models.

    Science.gov (United States)

    Gu, Mile; Wiesner, Karoline; Rieper, Elisabeth; Vedral, Vlatko

    2012-03-27

    Mathematical models are an essential component of quantitative science. They generate predictions about the future, based on information available in the present. In the spirit of 'simpler is better', should two models make identical predictions, the one that requires less input is preferred. Yet, for almost all stochastic processes, even the provably optimal classical models waste information. The amount of input information they demand exceeds the amount of predictive information they output. Here we show how to systematically construct quantum models that break this classical bound, and that the system of minimal entropy that simulates such processes must necessarily feature quantum dynamics. This indicates that many observed phenomena could be significantly simpler than classically possible should quantum effects be involved.

  12. Reducing Lag in Virtual Displays Using Multiple Model Adaptive Estimation

    Science.gov (United States)

    1995-12-01

    Kleinman [4] proposed an optimal control model for human performance in a closed loop.

  13. Reducing uncertainty in high-resolution sea ice models.

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, Kara J.; Bochev, Pavel Blagoveston

    2013-07-01

    Arctic sea ice is an important component of the global climate system, reflecting a significant amount of solar radiation, insulating the ocean from the atmosphere and influencing ocean circulation by modifying the salinity of the upper ocean. The thickness and extent of Arctic sea ice have shown a significant decline in recent decades with implications for global climate as well as regional geopolitics. Increasing interest in exploration as well as climate feedback effects make predictive mathematical modeling of sea ice a task of tremendous practical import. Satellite data obtained over the last few decades have provided a wealth of information on sea ice motion and deformation. The data clearly show that ice deformation is focused along narrow linear features and this type of deformation is not well-represented in existing models. To improve sea ice dynamics we have incorporated an anisotropic rheology into the Los Alamos National Laboratory global sea ice model, CICE. Sensitivity analyses were performed using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA) to determine the impact of material parameters on sea ice response functions. Two material strength parameters that exhibited the most significant impact on responses were further analyzed to evaluate their influence on quantitative comparisons between model output and data. The sensitivity analysis along with ten year model runs indicate that while the anisotropic rheology provides some benefit in velocity predictions, additional improvements are required to make this material model a viable alternative for global sea ice simulations.

  14. Implementation of Bessel's method for solar eclipses prediction in the WRF-ARW model

    Science.gov (United States)

    Montornes, Alex; Codina, Bernat; Zack, John W.; Sola, Yolanda

    2016-05-01

    Solar eclipses are predictable astronomical events that abruptly reduce the incoming solar radiation into the Earth's atmosphere, which frequently results in non-negligible changes in meteorological fields. The meteorological impacts of these events have been analyzed in many studies since the late 1960s. The recent growth in the solar energy industry has greatly increased the interest in providing more detail in the modeling of solar radiation variations in numerical weather prediction (NWP) models for use in solar resource assessment and forecasting applications. The significant impact of the recent partial and total solar eclipses that occurred in the USA (23 October 2014) and Europe (20 March 2015) on solar power generation has provided additional motivation and interest for including these astronomical events in the current solar parameterizations. Although some studies added solar eclipse episodes within NWP codes in the 1990s and 2000s, they used eclipse parameterizations designed for a particular case study. In contrast to these earlier implementations, this paper documents a new package for the Weather Research and Forecasting-Advanced Research WRF (WRF-ARW) model that can simulate any partial, total or hybrid solar eclipse for the period 1950 to 2050 and is also extensible to a longer period. The algorithm analytically computes the trajectory of the Moon's shadow and the degree of obscuration of the solar disk at each grid point of the domain based on Bessel's method and the Five Millennium Catalog of Solar Eclipses provided by NASA, with a negligible computational time. Then, the incoming radiation is modified accordingly at each grid point of the domain. This contribution is divided into three parts. First, the implementation of Bessel's method is validated for solar eclipses in the period 1950-2050, by comparing the shadow trajectory with values provided by NASA. Latitude and longitude are determined with a bias lower than 5 × 10^-3 degrees (i
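
    The Besselian-element computation itself is lengthy, but the final step described here, the degree of obscuration of the solar disk at a grid point, reduces to the overlap area of two disks given their apparent radii and the angular separation of their centres. A sketch under that assumption (ignoring limb darkening, with illustrative input values):

```python
import numpy as np

def obscuration(r_sun, r_moon, sep):
    """Fraction of the solar disk covered by the Moon, from the overlap
    area of two circles with apparent radii r_sun, r_moon and angular
    separation sep between their centres (all in the same angular units)."""
    if sep >= r_sun + r_moon:                 # no contact
        return 0.0
    if sep <= abs(r_sun - r_moon):            # one disk entirely inside the other
        return min(1.0, (r_moon / r_sun) ** 2)
    a1 = r_sun**2 * np.arccos((sep**2 + r_sun**2 - r_moon**2) / (2 * sep * r_sun))
    a2 = r_moon**2 * np.arccos((sep**2 + r_moon**2 - r_sun**2) / (2 * sep * r_moon))
    a3 = 0.5 * np.sqrt((-sep + r_sun + r_moon) * (sep + r_sun - r_moon)
                       * (sep - r_sun + r_moon) * (sep + r_sun + r_moon))
    return (a1 + a2 - a3) / (np.pi * r_sun**2)

# Illustrative partial-eclipse geometry (apparent radii and separation in degrees).
print(f"obscuration: {obscuration(0.266, 0.268, 0.15):.2f}")
```

    In a radiation scheme, the incoming shortwave flux at that grid point would then be scaled, to a first approximation, by one minus this obscuration fraction.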

  15. Constitutive models based on dislocation density:formulation and implementation into finite element codes

    OpenAIRE

    Domkin, Konstantin

    2005-01-01

    Correct description of the material behaviour is an extra challenge in simulation of the materials processing and manufacturing processes such as metal forming. Material models must account for varying strain, strain rate and temperature, and changing microstructure. This study is devoted to the physically based models of metal plasticity - dislocation density models, their numerical implementation and parameter identification. The basic concepts of dislocation density modelling are introduce...

  16. [Design, implementation and evaluation of a management model of patient safety in hospitals in Catalonia, Spain].

    Science.gov (United States)

    Saura, Rosa Maria; Moreno, Pilar; Vallejo, Paula; Oliva, Glòria; Alava, Fernando; Esquerra, Miquel; Davins, Josep; Vallès, Roser; Bañeres, Joaquim

    2014-07-01

    Since its inception in 2006, the Alliance for Patient Safety in Catalonia has played a major role in promoting and shaping a series of projects related to the strategy of the Ministry of Health, Social Services and Equality for improving patient safety. One such project was the creation of functional units or committees of safety in hospitals in order to facilitate the management of patient safety. The strategy has been implemented in hospitals in Catalonia selected on criteria of representativeness. The intervention was based on two lines of action, one to define the model framework and the other to develop it. Firstly, the strategy for safety management, based on EFQM (European Foundation for Quality Management), was defined through the development of standards, targets and indicators to implement safety, while the second part involved introducing tools, methodologies and knowledge to support the management of patient safety and risk prevention. The project was developed in four hospital areas considered higher risk, each assuming six goals for safety management. Some of these targets, such as the security control panel or the adverse event reporting system, were shared. 23 hospitals joined the project in Catalonia. Despite the different situations in each centre, high compliance was achieved in the development of the objectives. In each of the participating areas the security control panel was developed. Stable structures for safety management were established or strengthened. Training in patient safety played an important role: 1,415 professionals participated. Through this kind of project, not only are programs of proven effectiveness in reducing risks introduced, but the facilities are also provided with a work system that allows autonomy in the diagnosis and analysis of different risk situations or centre-specific safety issues. Copyright © 2014. Published by Elsevier Espana.

  17. A pair natural orbital implementation of the coupled cluster model CC2 for excitation energies.

    Science.gov (United States)

    Helmich, Benjamin; Hättig, Christof

    2013-08-28

    We demonstrate how to extend the pair natural orbital (PNO) methodology for excited states, presented in a previous work for the perturbative doubles correction to configuration interaction singles (CIS(D)), to iterative coupled cluster methods such as the approximate singles and doubles model CC2. The original O(N^5) scaling of the PNO construction is reduced by using orbital-specific virtuals (OSVs) as an intermediate step without spoiling the initial accuracy of the PNO method. Furthermore, a slower error convergence for charge-transfer states is analyzed and resolved by a numerical Laplace transformation during the PNO construction, so that an equally accurate treatment of local and charge-transfer excitations is achieved. With state-specific truncated PNO expansions, the eigenvalue problem is solved by combining the Davidson algorithm with deflation to project out roots that have already been determined and an automated refresh with a generation of new PNOs to achieve self-consistency of the PNO space. For a large test set, we found that truncation errors for PNO-CC2 excitation energies are only slightly larger than for PNO-CIS(D). The computational efficiency of PNO-CC2 is demonstrated for a large organic dye, where a reduction of the doubles space by a factor of more than 1000 is obtained compared to the canonical calculation. A compression of the doubles space by a factor 30 is achieved by a unified OSV space only. Moreover, calculations with the still preliminary PNO-CC2 implementation on a series of glycine oligomers revealed an early break even point with a canonical RI-CC2 implementation between 100 and 300 basis functions.

  18. A pair natural orbital implementation of the coupled cluster model CC2 for excitation energies

    Science.gov (United States)

    Helmich, Benjamin; Hättig, Christof

    2013-08-01

    We demonstrate how to extend the pair natural orbital (PNO) methodology for excited states, presented in a previous work for the perturbative doubles correction to configuration interaction singles (CIS(D)), to iterative coupled cluster methods such as the approximate singles and doubles model CC2. The original O(N^5) scaling of the PNO construction is reduced by using orbital-specific virtuals (OSVs) as an intermediate step without spoiling the initial accuracy of the PNO method. Furthermore, a slower error convergence for charge-transfer states is analyzed and resolved by a numerical Laplace transformation during the PNO construction, so that an equally accurate treatment of local and charge-transfer excitations is achieved. With state-specific truncated PNO expansions, the eigenvalue problem is solved by combining the Davidson algorithm with deflation to project out roots that have already been determined and an automated refresh with a generation of new PNOs to achieve self-consistency of the PNO space. For a large test set, we found that truncation errors for PNO-CC2 excitation energies are only slightly larger than for PNO-CIS(D). The computational efficiency of PNO-CC2 is demonstrated for a large organic dye, where a reduction of the doubles space by a factor of more than 1000 is obtained compared to the canonical calculation. A compression of the doubles space by a factor 30 is achieved by a unified OSV space only. Moreover, calculations with the still preliminary PNO-CC2 implementation on a series of glycine oligomers revealed an early break even point with a canonical RI-CC2 implementation between 100 and 300 basis functions.
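
    The numerical Laplace transformation mentioned in both records exploits the identity 1/x = ∫_0^∞ exp(-x t) dt to factorise orbital-energy denominators. The toy sketch below illustrates the idea with a plain Gauss-Laguerre rule; production codes instead fit a small set of minimax-optimised exponents, which is considerably more accurate for the same number of points:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

def laplace_inverse(x, n_points=8):
    """Approximate 1/x via the Laplace identity 1/x = int_0^inf exp(-x t) dt,
    evaluated with an n-point Gauss-Laguerre rule:
    int_0^inf exp(-t) g(t) dt ~ sum_i w_i g(t_i), with g(t) = exp(-(x - 1) t)."""
    t, w = laggauss(n_points)
    return np.sum(w * np.exp(-(x - 1.0) * t))

# Orbital-energy denominators are positive; check a few representative values.
for x in (0.5, 1.0, 2.0, 5.0):
    approx = laplace_inverse(x)
    print(f"x = {x:4.1f}: 1/x = {1.0/x:.6f}, Laplace quadrature = {approx:.6f}")
```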

  19. Comparative effectiveness of implementation of a nursing-driven protocol in reducing bronchodilator utilization for hospitalized children with bronchiolitis.

    Science.gov (United States)

    Pinto, Jamie M; Schairer, Janet L; Petrova, Anna

    2014-06-01

    The goal of our study was to determine whether the administration of bronchodilators is affected by implementation of a nursing-driven protocol in the care of children hospitalized with bronchiolitis. We included children less than 2 years old, hospitalized with bronchiolitis, but without chronic lung problems, immunodeficiencies or congenital heart disease in the 1-year periods before, during and after implementation of a nursing-driven bronchiolitis protocol. The protocol is based on nursing assessments of respiratory status prior to initiation and continuation of bronchodilator therapy. Utilization rates of bronchodilators were compared with respect to implementation of the nursing-driven protocol using Chi-square, analysis of variance, and regression analysis that is presented as adjusted odds ratio (OR) and 95% confidence interval (95% CI) of the OR. Among the 80 children who were hospitalized before, 63 during and 89 after the implementation of the nursing-driven bronchiolitis protocol, 70.0, 60.3, and 29.2%, respectively, received treatment with bronchodilators (P bronchiolitis protocol was also observed after controlling for the child's age and evidence of pneumonia (OR 0.68, 95% CI 0.61-0.79). The mean number of bronchodilator doses administered among patients in the three groups who received at least one treatment was comparable. Implementation of a nursing-driven bronchiolitis protocol was associated with significant reduction in initiation of bronchodilator treatments, which suggests a benefit from nursing involvement in the promotion of evidence-based recommendations in the management of children hospitalized with bronchiolitis. © 2014 John Wiley & Sons, Ltd.

  20. Finite element implementation of the Hoek-Brown material model with general strain softening behavior

    DEFF Research Database (Denmark)

    Sørensen, Emil Smed; Clausen, Johan Christian; Damkilde, Lars

    2015-01-01

    A numerical implementation of the Hoek–Brown criterion is presented, which is capable of modeling different post-failure behaviors observed in jointed rock mass. This is done by making the material parameters a function of the accumulated plastic strain. The implementation is for use in finite element calculations, and is based on the return mapping framework. The updated stress state together with the consistent constitutive matrix is found in principal stress space based on the principles of boundary planes. The implementation is verified through the simulation of a tunnel excavation...

  1. Implementation of the Modified Hoek-Brown Model into the Finite Element Method

    DEFF Research Database (Denmark)

    Sørensen, Emil Smed; Clausen, Johan Christian; Merifield, Richard S.;

    2015-01-01

    The Hoek-Brown model for near-homogeneous rock masses will, in some cases, overpredict the tensile strength of the material, which can lead to unsafe design of structures. Therefore, a tension cut-off is introduced and the model is implemented into an elasto-plastic framework for use...

  2. On the Implementation of AM/AM AM/PM Behavioral Models in System Level Simulation

    NARCIS (Netherlands)

    Shen, Y.; Tauritz, J.L.

    2003-01-01

    The use of nonlinear device behavioral models offers an economical way of simulating the performance of complex communication systems. A concrete method for implementing the AM/AM AM/PM behavioral model in system level simulation using ADS is developed. This method seamlessly transfers the data from

  3. Description and Rationale for the Planning, Monitoring, and Implementation (PMI) Model: Rationale.

    Science.gov (United States)

    Cort, H. Russell

    The rationale for the Planning, Monitoring, and Implementation Model (PMI) is the subject of this paper. The Superintendent of the District of Columbia Public Schools requested a model for systematic evaluation of educational programs to determine their effectiveness. The school system's emphasis on objective-referenced instruction and testing,…

  4. Measurement of a model of implementation for health care : toward a testable theory

    NARCIS (Netherlands)

    Cook, Joan M.; O'Donnell, Casey; Dinnen, Stephanie; Coyne, James C.; Ruzek, Josef I.; Schnurr, Paula P.

    2012-01-01

    Background: Greenhalgh et al. used a considerable evidence-base to develop a comprehensive model of implementation of innovations in healthcare organizations [1]. However, these authors did not fully operationalize their model, making it difficult to test formally. The present paper represents a fir

  5. Implementing a New Model for Teachers' Professional Learning in Papua New Guinea

    Science.gov (United States)

    Honan, Eileen; Evans, Terry; Muspratt, Sandy; Paraide, Patricia; Reta, Medi; Baroutsis, Aspa

    2012-01-01

    This article reports on a study that investigates the possibilities of developing a professional learning model based on action research that could lead to sustained improvements in teaching and learning in schools in remote areas of Papua New Guinea. The issues related to the implementation of this model are discussed using a critical lens that…

  6. MATRIX-VBS: implementing an evolving organic aerosol volatility in an aerosol microphysics model

    OpenAIRE

    Gao, Chloe Y.; Tsigaridis, Kostas; Bauer, Susanne E.

    2016-01-01

    We have implemented an existing aerosol microphysics scheme into a box model framework and extended it to represent gas-particle partitioning and chemical ageing of semi-volatile organic aerosols. We then applied this new research tool to investigate the effects of semi-volatile organic species on the growth, composition and mixing state of aerosol particles in case studies representing several different environments. The volatility-basis set (VBS) framework is implemented into the aerosol mi...

  7. Reducing uncertainty based on model fitness: Application to a ...

    African Journals Online (AJOL)

    2015-01-07

    Jan 7, 2015 ... 2Hydrology and Water Quality, Agricultural and Biological Engineering ... This general methodology is applied to a reservoir model of the Okavango ... Global sensitivity and uncertainty analysis (GSA/UA) system- ... and weighing risks between decisions (Saltelli et al., 2008). ...... resources and support.

  8. FPGA implementation of a biological neural network based on the Hodgkin-Huxley neuron model.

    Science.gov (United States)

    Yaghini Bonabi, Safa; Asgharian, Hassan; Safari, Saeed; Nili Ahmadabadi, Majid

    2014-01-01

    A set of techniques for the efficient implementation of a Hodgkin-Huxley-based (H-H) neural network model on an FPGA (Field Programmable Gate Array) is presented. The central implementation challenge is the complexity of the H-H model, which puts limits on the network size and on the execution speed. However, the basics of the original model cannot be compromised when the effect of synaptic specifications on network behavior is the subject of study. To solve the problem, we used computational techniques such as the CORDIC (Coordinate Rotation Digital Computer) algorithm and step-by-step integration in the implementation of the arithmetic circuits. We also employed techniques such as resource sharing to preserve the details of the model and to increase the network size while keeping the network execution speed close to real time with high precision. An implementation of a two-mini-column network with 120/30 excitatory/inhibitory neurons is provided to investigate the characteristics of our method in practice. The implementation techniques provide an opportunity to construct large FPGA-based network models to investigate the effect of different neurophysiological mechanisms, like voltage-gated channels and synaptic activities, on the behavior of a neural network in an appropriate execution time. In addition to the inherent properties of FPGAs, like parallelism and re-configurability, our approach makes the FPGA-based system a proper candidate for studies on neural control of cognitive robots and systems as well.
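
    The CORDIC and fixed-point details are hardware-specific, but the step-by-step integration the pipeline implements corresponds to an explicit update of the Hodgkin-Huxley equations. A floating-point reference sketch for a single neuron with the standard squid-axon parameters (not the paper's two-mini-column synaptic network) is:

```python
import numpy as np

# Standard squid-axon Hodgkin-Huxley parameters (mV, ms, uF/cm^2, mS/cm^2).
C_M, G_NA, G_K, G_L = 1.0, 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.387

def alpha_beta(v):
    """Standard voltage-dependent rate constants for the n, m and h gates."""
    a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
    a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
    a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
    b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    return (a_n, b_n), (a_m, b_m), (a_h, b_h)

dt, steps, i_ext = 0.01, 50000, 10.0           # 500 ms, 10 uA/cm^2 injected
v, n, m, h = -65.0, 0.317, 0.053, 0.596        # resting-state initial values
spikes, above = 0, False

for _ in range(steps):                         # explicit step-by-step update
    (a_n, b_n), (a_m, b_m), (a_h, b_h) = alpha_beta(v)
    i_ion = (G_NA * m**3 * h * (v - E_NA)
             + G_K * n**4 * (v - E_K)
             + G_L * (v - E_L))
    v += dt * (i_ext - i_ion) / C_M
    n += dt * (a_n * (1.0 - n) - b_n * n)
    m += dt * (a_m * (1.0 - m) - b_m * m)
    h += dt * (a_h * (1.0 - h) - b_h * h)
    if v > 0.0 and not above:                  # count upward zero crossings
        spikes += 1
    above = v > 0.0

print(f"spikes in 500 ms at {i_ext} uA/cm^2: {spikes}")
```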

  9. The role of public policies in reducing smoking prevalence: results from the Michigan SimSmoke tobacco policy simulation model.

    Science.gov (United States)

    Levy, David T; Huang, An-Tsun; Havumaki, Joshua S; Meza, Rafael

    2016-05-01

    Michigan has implemented several of the tobacco control policies recommended by the World Health Organization MPOWER goals. We consider the effect of those policies and additional policies consistent with MPOWER goals on smoking prevalence and smoking-attributable deaths (SADs). The SimSmoke tobacco control policy simulation model is used to examine the effect of past policies and a set of additional policies to meet the MPOWER goals. The model is adapted to Michigan using state population, smoking, and policy data starting in 1993. SADs are estimated using standard attribution methods. Upon validating the model, SimSmoke is used to distinguish the effect of policies implemented since 1993 against a counterfactual with policies kept at their 1993 levels. The model is then used to project the effect of implementing stronger policies beginning in 2014. SimSmoke predicts smoking prevalence accurately between 1993 and 2010. Since 1993, a relative reduction in smoking rates of 22 % by 2013 and of 30 % by 2054 can be attributed to tobacco control policies. Of the 22 % reduction, 44 % is due to taxes, 28 % to smoke-free air laws, 26 % to cessation treatment policies, and 2 % to youth access. Moreover, 234,000 SADs are projected to be averted by 2054. With additional policies consistent with MPOWER goals, the model projects that, by 2054, smoking prevalence can be further reduced by 17 % with 80,000 deaths averted relative to the absence of those policies. Michigan SimSmoke shows that tobacco control policies, including cigarette taxes, smoke-free air laws, and cessation treatment policies, have substantially reduced smoking and SADs. Higher taxes, strong mass media campaigns, and cessation treatment policies would further reduce smoking prevalence and SADs.

  10. Safety-relevant mode confusions-modelling and reducing them

    Energy Technology Data Exchange (ETDEWEB)

    Bredereke, Jan [Universitaet Bremen, FB 3, P.O. Box 330 440, D-28334 Bremen (Germany)]. E-mail: brederek@tzi.de; Lankenau, Axel [Universitaet Bremen, FB 3, P.O. Box 330 440, D-28334 Bremen (Germany)

    2005-06-01

    Mode confusions are a significant safety concern in safety-critical systems, for example in aircraft. A mode confusion occurs when the observed behaviour of a technical system is out of sync with the user's mental model of its behaviour. But the notion is described only informally in the literature. We present a rigorous way of modelling the user and the machine in a shared-control system. This enables us to propose precise definitions of 'mode' and 'mode confusion' for safety-critical systems. We then validate these definitions against the informal notions in the literature. A new classification of mode confusions by cause leads to a number of design recommendations for shared-control systems. These help in avoiding mode confusion problems. Our approach supports the automated detection of remaining mode confusion problems. We apply our approach practically to a wheelchair robot.

  11. Evidence that brief self-affirming implementation intentions can reduce work-related anxiety in downsize survivors.

    OpenAIRE

    Morgan, JI; Harris, PR

    2015-01-01

    Background and Objectives: Workers were recruited from a UK further education college during a period of organisational downsizing. The study assessed the effects of a brief health psychology intervention on work-related stress in downsize survivors. Design and Methods: Sixty-six employees were randomly allocated to one of two conditions: one in which they were asked to create a work-related self-affirming implementation intention (WS-AII), or a control. Feelings of anxiety and depression wer...

  12. Certified reduced basis model validation: A frequentistic uncertainty framework

    OpenAIRE

    Patera, A. T.; Huynh, Dinh Bao Phuong; Knezevic, David; Patera, Anthony T.

    2011-01-01

    We introduce a frequentistic validation framework for assessment — acceptance or rejection — of the consistency of a proposed parametrized partial differential equation model with respect to (noisy) experimental data from a physical system. Our method builds upon the Hotelling T^2 statistical hypothesis test for bias first introduced by Balci and Sargent in 1984 and subsequently extended by McFarland and Mahadevan (2008). Our approach introduces two new elements: a spectral repre...

  13. Finite Element Model to Reduce Fire and Blast Vulnerability

    Science.gov (United States)

    2013-01-01

    ... fit the existing spine model (Figure 2). The ribs were connected by the use of rigid body constraints between the rib ends and the thoracic vertebrae ... The cervical, lumbar and thoracic spine model used in this program underwent a rigorous verification and validation process.

  14. Modeling Reduced Human Performance as a Complex Adaptive System

    Science.gov (United States)

    2003-09-01

    Fittingly, the latest research paper describes these types of components as LEGOs (listener event graph objects). “The name is also a metaphor for how ...”

  15. Verilog-A implementation of a double-gate junctionless compact model for DC circuit simulations

    Science.gov (United States)

    Alvarado, J.; Flores, P.; Romero, S.; Ávila-Herrera, F.; González, V.; Soto-Cruz, B. S.; Cerdeira, A.

    2016-07-01

    A physically based model of the double-gate junctionless transistor which is capable of describing accumulation and depletion regions is implemented in Verilog-A in order to perform DC circuit simulations. An analytical description of the difference of potentials between the center and the surface of the silicon layer allows the determination of the mobile charges. Furthermore, mobility degradation, series resistance, as well as threshold voltage roll-off, drain saturation voltage, channel shortening and velocity saturation are also considered. In order to provide this model to all of the community, the implementation of this model is performed in Ngspice, which is a free circuit simulator with an ADMS interface to integrate Verilog-A models. Validation of the model implementation is done through 2D numerical simulations of transistors with 1 μm and 40 nm silicon channel lengths and 1 × 10^19 or 5 × 10^18 cm^-3 doping concentrations of the silicon layer with 10 and 15 nm silicon thicknesses. Good agreement between the numerically simulated behavior and the model implementation is obtained, where only eight model parameters are used.

  16. Implementation of an Analytical Model for Leakage Neutron Equivalent Dose in a Proton Radiotherapy Planning System

    Energy Technology Data Exchange (ETDEWEB)

    Eley, John [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Houston, TX 77030 (United States); Graduate School of Biomedical Sciences, The University of Texas, 6767 Bertner Ave., Houston, TX 77030 (United States); Newhauser, Wayne, E-mail: newhauser@lsu.edu [Department of Physics and Astronomy, Louisiana State University and Agricultural and Mechanical College, 202 Nicholson Hall, Tower Drive, Baton Rouge, LA 70803 (United States); Mary Bird Perkins Cancer Center, 4950 Essen Lane, Baton Rouge, LA 70809 (United States); Homann, Kenneth; Howell, Rebecca [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Blvd., Houston, TX 77030 (United States); Graduate School of Biomedical Sciences, The University of Texas, 6767 Bertner Ave., Houston, TX 77030 (United States); Schneider, Christopher [Department of Physics and Astronomy, Louisiana State University and Agricultural and Mechanical College, 202 Nicholson Hall, Tower Drive, Baton Rouge, LA 70803 (United States); Mary Bird Perkins Cancer Center, 4950 Essen Lane, Baton Rouge, LA 70809 (United States); Durante, Marco; Bert, Christoph [GSI Helmholtzzentrum für Schwerionenforschung, Planckstr. 1, Darmstadt 64291 (Germany)

    2015-03-11

    Equivalent dose from neutrons produced during proton radiotherapy increases the predicted risk of radiogenic late effects. However, out-of-field neutron dose is not taken into account by commercial proton radiotherapy treatment planning systems. The purpose of this study was to demonstrate the feasibility of implementing an analytical model to calculate leakage neutron equivalent dose in a treatment planning system. Passive scattering proton treatment plans were created for a water phantom and for a patient. For both the phantom and patient, the neutron equivalent doses were small but non-negligible and extended far beyond the therapeutic field. The time required for neutron equivalent dose calculation was 1.6 times longer than that required for proton dose calculation, with a total calculation time of less than 1 h on one processor for both treatment plans. Our results demonstrate that it is feasible to predict neutron equivalent dose distributions using an analytical dose algorithm for individual patients with irregular surfaces and internal tissue heterogeneities. Eventually, personalized estimates of neutron equivalent dose to organs far from the treatment field may guide clinicians to create treatment plans that reduce the risk of late effects.

  17. Computational implementation of the multi-mechanism deformation coupled fracture model for salt

    Energy Technology Data Exchange (ETDEWEB)

    Koteras, J.R.; Munson, D.E.

    1996-05-01

    The Multi-Mechanism Deformation (M-D) model for creep in rock salt has been used in three-dimensional computations for the Waste Isolation Pilot Plant (WIPP), a potential waste repository. These computational studies are relied upon to make key predictions about long-term behavior of the repository. Recently, the M-D model was extended to include creep-induced damage. The extended model, the Multi-Mechanism Deformation Coupled Fracture (MDCF) model, is considerably more complicated than the M-D model and required a different technology from that of the M-D model for a computational implementation.

  18. Implementation of Electrical Simulation Model for IEC Standard Type-3A Generator

    DEFF Research Database (Denmark)

    Subramanian, Chandrasekaran; Casadei, Domenico; Tani, Angelo

    2013-01-01

    This paper describes the implementation of an electrical simulation model for the IEC 61400-27-1 standard Type-3A generator. A general overview of the different wind electric generator (WEG) types is given, with the main focus on the Type-3A WEG standard model, namely a model for a variable speed wind turbine with a partial-scale power converter, including a two-mass mechanical model. The generic models for fixed and variable speed WEGs are suitable for fundamental frequency positive sequence response simulations during short events in the power system such as voltage dips. The wind power...

  19. Implementing The Automated Phases Of The Partially-Automated Digital Triage Process Model

    Directory of Open Access Journals (Sweden)

    Gary D Cantrell

    2012-12-01

    Full Text Available Digital triage is a pre-digital-forensic phase that sometimes takes place as a way of gathering quick intelligence. Although effort has been undertaken to model the digital forensics process, little has been done to date to model digital triage. This work discusses the further development of a model that does attempt to address digital triage: the Partially-automated Crime Specific Digital Triage Process model. The model itself will be presented along with a description of how its automated functionality was implemented to facilitate model testing.

  20. Development and verification of fuel burn-up calculation model in a reduced reactor geometry

    Energy Technology Data Exchange (ETDEWEB)

    Sembiring, Tagor Malem [Center for Reactor Technology and Nuclear Safety (PTKRN), National Nuclear Energy Agency (BATAN), Kawasan PUSPIPTEK Gd. No. 80, Serpong, Tangerang 15310 (Indonesia)], E-mail: tagorms@batan.go.id; Liem, Peng Hong [Research Laboratory for Nuclear Reactor (RLNR), Tokyo Institute of Technology (Tokyo Tech), O-okayama, Meguro-ku, Tokyo 152-8550 (Japan)

    2008-02-15

    A fuel burn-up model in a reduced reactor geometry (2-D) is successfully developed and implemented in the Batan in-core fuel management code, Batan-FUEL. Considering the bank mode operation of the control rods, several interpolation functions are investigated which best approximate the 3-D fuel assembly radial power distributions across the core as a function of the insertion depth of the control rods. Concerning the applicability of the interpolation functions, it can be concluded that the optimal coefficients of the interpolation functions are not very sensitive to the core configuration and core or fuel composition in the RSG GAS (MPR-30) reactor. Consequently, once the optimal interpolation function and its coefficients are derived, they can be used for routine 2-D operational in-core fuel management without repeating the expensive 3-D neutron diffusion calculations. At the selected fuel elements (at the H-9 and G-6 core grid positions), the discrepancy of the FECFs (fuel element channel power peaking factors) between the 2-D and 3-D models is within the range of 3.637 × 10^-4, 3.241 × 10^-4 and 7.556 × 10^-4 for the oxide and silicide cores with 250 g 235U/FE and the silicide core with 300 g 235U/FE, respectively.