WorldWideScience

Sample records for model process steps

  1. Modeling printed circuit board curvature in relation to manufacturing process steps

    NARCIS (Netherlands)

    Schuerink, G.A.; Slomp, M.; Wits, Wessel Willems; Legtenberg, R.; Kappel, E.A.

    2013-01-01

    This paper presents an analytical method to predict deformations of Printed Circuit Boards (PCBs) in relation to their manufacturing process steps. Classical Lamination Theory (CLT) is used as a basis. The model tracks internal stresses and includes the results of subsequent production steps, such

  2. Modeling and analysis of the affinity filtration process, including broth feeding, washing, and elution steps.

    Science.gov (United States)

    He, L Z; Dong, X Y; Sun, Y

    1998-01-01

    Affinity filtration is a developing protein purification technique that combines the high selectivity of affinity chromatography and the high processing speed of membrane filtration. In this work a lumped kinetic model was developed to describe the whole affinity filtration process, including broth feeding, contaminant washing, and elution steps. Affinity filtration experiments were conducted to evaluate the model using bovine serum albumin as a model protein and a highly substituted Blue Sepharose as an affinity adsorbent. The model with nonadjustable parameters agreed fairly well with the experimental results. Thus, the performance of the affinity filtration in processing a crude broth containing contaminant proteins was analyzed by computer simulations using the lumped model. The simulation results show that there is an optimal protein loading for obtaining the maximum recovery yield of the desired protein with a constant purity at each operating condition. Concentrating the crude broth is beneficial in increasing the recovery yield of the desired protein. Using a constant amount of the affinity adsorbent, the recovery yield can be enhanced by decreasing the solution volume in the stirred tank due to the increase of the adsorbent weight fraction. It was found that the lumped kinetic model was simple and useful in analyzing the whole affinity filtration process.
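
    The lumped kinetic approach described above can be illustrated with a generic stirred-tank adsorption model. The sketch below is only a minimal illustration of that idea; the rate law, parameter names and values are assumptions and do not reproduce the paper's model of the feeding, washing and elution steps.

    ```python
    # Illustrative sketch only: a generic lumped Langmuir-type batch adsorption model of the
    # kind described above (protein binding in a stirred tank). All parameter names and
    # values are assumed for demonstration, not taken from the paper.
    from scipy.integrate import solve_ivp

    k_a, k_d = 0.5, 0.05    # adsorption / desorption rate constants (assumed units L/(mg*h), 1/h)
    q_max = 100.0           # adsorbent binding capacity (mg/g), assumed
    W, V = 5.0, 1.0         # adsorbent mass (g) and liquid volume (L), assumed

    def lumped_model(t, y):
        c, q = y                                   # c: protein in solution (mg/L), q: bound protein (mg/g)
        dq_dt = k_a * c * (q_max - q) - k_d * q    # lumped binding kinetics
        dc_dt = -(W / V) * dq_dt                   # mass balance over the stirred tank
        return [dc_dt, dq_dt]

    sol = solve_ivp(lumped_model, (0.0, 10.0), [1000.0, 0.0])
    print(sol.y[:, -1])     # final liquid-phase and adsorbed concentrations
    ```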

  3. Multivariate modelling of the pharmaceutical two-step process of wet granulation and tableting with multiblock partial least squares

    NARCIS (Netherlands)

    Westerhuis, J.A.; Coenegracht, P.M.J.

    1997-01-01

    The pharmaceutical process of wet granulation and tableting is described as a two-step process. Besides the process variables of both steps and the composition variables of the powder mixture, the physical properties of the intermediate granules are also used to model the crushing strength and

  4. UOE Pipe Numerical Model: Manufacturing Process And Von Mises Residual Stresses Resulted After Each Technological Step

    Science.gov (United States)

    Delistoian, Dmitri; Chirchor, Mihael

    2017-12-01

    Fluid transportation from production areas to the final customer is carried out by pipelines. For the oil and gas industry, pipeline safety and reliability are a priority. For this reason, pipe quality assurance directly influences the designed pipeline life and, above all, protects the environment. A significant number of longitudinally welded pipes for onshore/offshore pipelines are manufactured by the UOE method, which is based on cold forming. In the present study, the UOE pipe manufacturing process is modeled using the finite element method and the von Mises stresses are obtained for each step. The numerical simulation is performed for an L415 MB (X60) steel plate with 7.9 mm thickness, 30 mm length and 1250 mm width; as a result, a DN 400 pipe is obtained.

  5. The RiverFish Approach to Business Process Modeling: Linking Business Steps to Control-Flow Patterns

    Science.gov (United States)

    Zuliane, Devanir; Oikawa, Marcio K.; Malkowski, Simon; Alcazar, José Perez; Ferreira, João Eduardo

    Despite the recent advances in the area of Business Process Management (BPM), today’s business processes have largely been implemented without clearly defined conceptual modeling. This results in growing difficulties for identification, maintenance, and reuse of rules, processes, and control-flow patterns. To mitigate these problems in future implementations, we propose a new approach to business process modeling using conceptual schemas, which represent hierarchies of concepts for rules and processes shared among collaborating information systems. This methodology bridges the gap between conceptual model description and identification of actual control-flow patterns for workflow implementation. We identify modeling guidelines that are characterized by clear phase separation, step-by-step execution, and process building through diagrams and tables. The separation of business process modeling in seven mutually exclusive phases clearly delimits information technology from business expertise. The sequential execution of these phases leads to the step-by-step creation of complex control-flow graphs. The process model is refined through intuitive table and diagram generation in each phase. Not only does the rigorous application of our modeling framework minimize the impact of rule and process changes, but it also facilitates the identification and maintenance of control-flow patterns in BPM-based information system architectures.

  6. A 2-D process-based model for suspended sediment dynamics: A first step towards ecological modeling

    Science.gov (United States)

    Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.

    2015-01-01

    In estuaries suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. Sediment dynamics differ depending on sediment supply and hydrodynamic forcing conditions that vary over space and over time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters and the determination of a yearly sediment budget as well as an assessment of model results in terms of turbidity levels for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool in assessing sediment dynamics and their related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without a large amount of process data.

  7. A 2-D process-based model for suspended sediment dynamics: a first step towards ecological modeling

    Science.gov (United States)

    Achete, F. M.; van der Wegen, M.; Roelvink, D.; Jaffe, B.

    2015-06-01

    In estuaries suspended sediment concentration (SSC) is one of the most important contributors to turbidity, which influences habitat conditions and ecological functions of the system. Sediment dynamics differ depending on sediment supply and hydrodynamic forcing conditions that vary over space and over time. A robust sediment transport model is a first step in developing a chain of models enabling simulations of contaminants, phytoplankton and habitat conditions. This work aims to determine turbidity levels in the complex-geometry delta of the San Francisco estuary using a process-based approach (Delft3D Flexible Mesh software). Our approach includes a detailed calibration against measured SSC levels, a sensitivity analysis on model parameters and the determination of a yearly sediment budget as well as an assessment of model results in terms of turbidity levels for a single year, water year (WY) 2011. Model results show that our process-based approach is a valuable tool in assessing sediment dynamics and their related ecological parameters over a range of spatial and temporal scales. The model may act as the base model for a chain of ecological models assessing the impact of climate change and management scenarios. Here we present a modeling approach that, with limited data, produces reliable predictions and can be useful for estuaries without a large amount of process data.

  8. Step-by-step cyclic processes scheduling

    DEFF Research Database (Denmark)

    Bocewicz, G.; Nielsen, Izabela Ewa; Banaszak, Z.

    2013-01-01

    is to provide a declarative model enabling one to state a constraint satisfaction problem aimed at AGV fleet scheduling subject to assumed itineraries of concurrently manufactured product types. In other words, assuming a given layout of the FMS's material handling and production routes of simultaneously manufactured...

  9. Modeling of the steam hydrolysis in a two-step process for hydrogen production by solar concentrated energy

    Science.gov (United States)

    Valle-Hernández, Julio; Romero-Paredes, Hernando; Pacheco-Reyes, Alejandro

    2017-06-01

    In this paper the simulation of the steam hydrolysis step for hydrogen production through the decomposition of cerium oxide is presented. The thermochemical cycle for hydrogen production consists of the endothermic reduction of CeO2 to a lower-valence cerium oxide at high temperature, where concentrated solar energy is used as the source of heat, and of the subsequent steam hydrolysis of the resulting cerium oxide to produce hydrogen. The modeling of the endothermic reduction step was presented at SolarPACES 2015. This work presents the modeling of the exothermic step: the hydrolysis of the cerium(III) oxide to form H2 and regenerate the initial cerium oxide, carried out at lower temperature inside the solar reactor. For this model, three sections of the pipe where the reaction occurs were considered: the steam inlet, the porous medium and the outlet of the hydrogen produced. The mathematical model describes the fluid mechanics and the mass and energy transfer occurring inside the tungsten pipe. The thermochemical process model was simulated in CFD. The results show the temperature distribution in the solar reaction pipe and allow the fluid dynamics and the heat transfer within the pipe to be obtained. This work is part of the project "Solar Fuels and Industrial Processes" of the Mexican Center for Innovation in Solar Energy (CEMIE-Sol).
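
    For orientation, the two-step ceria cycle referred to above is commonly written as follows in the solar-fuel literature (a generic textbook form, not the paper's specific reaction scheme or stoichiometry):

    ```latex
    % Generic two-step ceria cycle; the paper models the second (exothermic hydrolysis) step.
    \begin{align}
      2\,\mathrm{CeO_2} &\longrightarrow \mathrm{Ce_2O_3} + \tfrac{1}{2}\,\mathrm{O_2}
          && \text{(endothermic solar reduction, high } T\text{)} \\
      \mathrm{Ce_2O_3} + \mathrm{H_2O} &\longrightarrow 2\,\mathrm{CeO_2} + \mathrm{H_2}
          && \text{(exothermic steam hydrolysis, lower } T\text{)}
    \end{align}
    ```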

  10. Statistical modeling of tear strength for one step fixation process of reactive printing and easy care finishing

    International Nuclear Information System (INIS)

    Asim, F.; Mahmood, M.

    2017-01-01

    Statistical modeling plays a significant role in predicting the impact of potential factors affecting the one-step fixation process of reactive printing and easy care finishing. In this research work, the significant factors affecting the tear strength of cotton fabric for single-step fixation of reactive printing and easy care finishing were investigated using an experimental design technique. The potential design factors were: concentration of reactive dye, concentration of crease resistant, fixation method and fixation temperature. The experiments were designed using DoE (Design of Experiments) and analyzed with the software Design Expert. A detailed analysis of the significant factors and interactions, including ANOVA (Analysis of Variance), residuals, model accuracy and the statistical model for tear strength, is presented. The interaction and contour plots of the vital factors have been examined. The statistical analysis showed that each factor interacts with the other factors, and most of the investigated factors showed a curvature effect with respect to the other factors. After critical examination of the significant plots, a quadratic model of tear strength with significant terms and their interactions at alpha = 0.05 was developed. The calculated correlation coefficient R² of the developed model is 0.9056. This high correlation coefficient indicates that the developed equation will precisely predict the tear strength over the range of values studied. (author)
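
    As a rough illustration of how such a quadratic response-surface model can be fit outside Design Expert, the snippet below uses ordinary least squares with interaction and squared terms. The data file, column names and selected terms are hypothetical; the paper's actual model is not reproduced.

    ```python
    # Hedged sketch: fitting a quadratic response-surface model with interaction terms.
    # File name, column names and the chosen terms are placeholders, not the paper's model.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("tear_strength_doe.csv")   # assumed DoE results: factors + measured tear strength

    model = smf.ols(
        "tear_strength ~ dye_conc * crease_resist_conc"
        " + fixation_temp + C(fixation_method)"
        " + I(dye_conc**2) + I(crease_resist_conc**2)",
        data=df,
    ).fit()

    print(model.summary())    # coefficients, ANOVA-style statistics
    print(model.rsquared)     # compare with the reported R-squared of about 0.91
    ```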

  11. Modeling heat and mass transfer in the heat treatment step of yerba maté processing

    Directory of Open Access Journals (Sweden)

    J. M. Peralta

    2007-03-01

    The aim of this research was to estimate the leaf and twig temperature and moisture content of yerba maté branches (Ilex paraguariensis Saint Hilaire) during heat treatment carried out in a rotary kiln dryer. These variables had to be estimated by modeling the heat and mass transfer due to the difficulty of experimental measurement in the dryer. For modeling, the equipment was divided into two zones: the flame or heat treatment zone and the drying zone. The model developed fit the experimental data well when water loss took place only in leaves. In the first zone, leaf temperature increased until it reached 135°C and then slowly decreased to 88°C at the exit, even though the gas temperature varied in this zone from 460°C to 120°C. Twig temperature increased across the two zones from its inlet value (25°C) up to 75°C. A model error of about 3% was estimated based on theoretical and experimental data on leaf moisture content.

  12. A two-step approach for fluidized bed granulation in pharmaceutical processing: Assessing different models for design and control.

    Science.gov (United States)

    Ming, Liangshan; Li, Zhe; Wu, Fei; Du, Ruofei; Feng, Yi

    2017-01-01

    Various modeling techniques were used to understand fluidized bed granulation using a two-step approach. First, Plackett-Burman design (PBD) was used to identify the high-risk factors. Then, Box-Behnken design (BBD) was used to analyze and optimize those high-risk factors. The relationship between the high-risk input variables (inlet air temperature X1, binder solution rate X3, and binder-to-powder ratio X5) and quality attributes (flowability Y1, temperature Y2, moisture content Y3, aggregation index Y4, and compactability Y5) of the process was investigated using response surface model (RSM), partial least squares method (PLS) and artificial neural network of multilayer perceptron (MLP). The morphology of the granules was also investigated using a scanning electron microscope. The results showed that X1, X3, and X5 significantly affected the granule properties. The RSM, PLS and MLP models were found to be useful statistical analysis tools for a better mechanistic understanding of granulation. The statistical analysis results showed that the RSM model had a better ability to fit the quality attributes of granules compared to the PLS and MLP models. Understanding the effect of process parameters on granule properties provides the basis for modulating the granulation parameters and optimizing the product performance at the early development stage of pharmaceutical products.
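
    The PLS part of such a comparison can be sketched with scikit-learn as below; the data file and column names are assumed, and the snippet does not reproduce the paper's RSM or MLP models.

    ```python
    # Minimal PLS sketch relating the three high-risk factors (X1, X3, X5) to the five
    # granule attributes (Y1..Y5). Data file and column names are assumed, not from the paper.
    import pandas as pd
    from sklearn.cross_decomposition import PLSRegression

    df = pd.read_csv("granulation_runs.csv")
    X = df[["inlet_air_temp", "binder_rate", "binder_powder_ratio"]]
    Y = df[["flowability", "temperature", "moisture", "aggregation_index", "compactability"]]

    pls = PLSRegression(n_components=2)   # two latent variables, chosen arbitrarily here
    pls.fit(X, Y)
    print(pls.score(X, Y))                # aggregate R^2 over the five responses
    print(pls.coef_)                      # regression coefficients relating factors to responses
    ```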

  13. A two-step approach for fluidized bed granulation in pharmaceutical processing: Assessing different models for design and control.

    Directory of Open Access Journals (Sweden)

    Liangshan Ming

    Various modeling techniques were used to understand fluidized bed granulation using a two-step approach. First, Plackett-Burman design (PBD) was used to identify the high-risk factors. Then, Box-Behnken design (BBD) was used to analyze and optimize those high-risk factors. The relationship between the high-risk input variables (inlet air temperature X1, binder solution rate X3, and binder-to-powder ratio X5) and quality attributes (flowability Y1, temperature Y2, moisture content Y3, aggregation index Y4, and compactability Y5) of the process was investigated using response surface model (RSM), partial least squares method (PLS) and artificial neural network of multilayer perceptron (MLP). The morphology of the granules was also investigated using a scanning electron microscope. The results showed that X1, X3, and X5 significantly affected the granule properties. The RSM, PLS and MLP models were found to be useful statistical analysis tools for a better mechanistic understanding of granulation. The statistical analysis results showed that the RSM model had a better ability to fit the quality attributes of granules compared to the PLS and MLP models. Understanding the effect of process parameters on granule properties provides the basis for modulating the granulation parameters and optimizing the product performance at the early development stage of pharmaceutical products.

  14. A two-step approach for fluidized bed granulation in pharmaceutical processing: Assessing different models for design and control

    Science.gov (United States)

    Ming, Liangshan; Li, Zhe; Wu, Fei; Du, Ruofei; Feng, Yi

    2017-01-01

    Various modeling techniques were used to understand fluidized bed granulation using a two-step approach. First, Plackett-Burman design (PBD) was used to identify the high-risk factors. Then, Box-Behnken design (BBD) was used to analyze and optimize those high-risk factors. The relationship between the high-risk input variables (inlet air temperature X1, binder solution rate X3, and binder-to-powder ratio X5) and quality attributes (flowability Y1, temperature Y2, moisture content Y3, aggregation index Y4, and compactability Y5) of the process was investigated using response surface model (RSM), partial least squares method (PLS) and artificial neural network of multilayer perceptron (MLP). The morphology of the granules was also investigated using a scanning electron microscope. The results showed that X1, X3, and X5 significantly affected the granule properties. The RSM, PLS and MLP models were found to be useful statistical analysis tools for a better mechanistic understanding of granulation. The statistical analysis results showed that the RSM model had a better ability to fit the quality attributes of granules compared to the PLS and MLP models. Understanding the effect of process parameters on granule properties provides the basis for modulating the granulation parameters and optimizing the product performance at the early development stage of pharmaceutical products. PMID:28662115

  15. Global Sensitivity Analysis as Good Modelling Practices tool for the identification of the most influential process parameters of the primary drying step during freeze-drying

    DEFF Research Database (Denmark)

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F.C.; Corver, Jos

    2018-01-01

    Pharmaceutical batch freeze-drying is commonly used to improve the stability of biological therapeutics. The primary drying step is regulated by the dynamic settings of the adaptable process variables, shelf temperature Ts and chamber pressure Pc. Mechanistic modelling of the primary drying step...

  16. Ten steps to successful software process improvement

    Science.gov (United States)

    Kandt, R. K.

    2003-01-01

    This paper identifies ten steps for managing change that address organizational and cultural issues. Four of these steps are critical; if they are not done, failure is almost guaranteed. This ten-step program emphasizes the alignment of business goals, change process goals, and the work performed by the employees of an organization.

  17. 'Steps in the learning Process'

    International Nuclear Information System (INIS)

    Cheung, Kyung Mo; Cheung, Hwan

    1984-01-01

    The process by which a student learns is extremely complicated. Whether he is simply learning facts, laws or formulae, changing his values or mastering a skill, the way in which his brain functions is impossible to describe. The idea of learning domains is put forward not to explain in biological terms what happens in the brain, but simply to attempt to break the system down into simpler units so that the learning process can be organized in an easier, more systematic way. The most commonly used description of this process is the one given by Bloom, namely Bloom's Taxonomy. In addition, this is compared with the work of Lewis (Levels of Knowledge and Understanding). On this basis, the most effective teaching methods for delivering high-quality education are discussed.

  18. Key Steps in the Special Review Process

    Science.gov (United States)

    EPA uses this process when it has reason to believe that the use of a pesticide may result in unreasonable adverse effects on people or the environment. Steps include comprehensive risk and benefit analyses and multiple Position Documents.

  19. Developing a framework to model the primary drying step of a continuous freeze-drying process based on infrared radiation

    DEFF Research Database (Denmark)

    Van Bockstal, Pieter-Jan; Corver, Jos; Mortier, Séverine Thérèse F.C.

    2018-01-01

    The continuous freeze-drying concept, based on spinning the vials during freezing and on non-contact energy transfer via infrared (IR) radiation during drying, improves process efficiency and product quality (uniformity) compared to conventional batch freeze-drying. Automated control of this process ... These results assist in the selection of proper materials which could serve as the IR window in the continuous freeze-drying prototype. The modelling framework presented in this paper fits the model-based design approach used for the development of this prototype and shows the potential benefits of this design...

  20. Mechanistic modelling of infrared mediated energy transfer during the primary drying step of a continuous freeze-drying process.

    Science.gov (United States)

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; De Meyer, Laurens; Corver, Jos; Vervaet, Chris; Nopens, Ingmar; De Beer, Thomas

    2017-05-01

    Conventional pharmaceutical freeze-drying is an inefficient and expensive batch-wise process, associated with several disadvantages leading to an uncontrolled end product variability. The proposed continuous alternative, based on spinning the vials during freezing and on optimal energy supply during drying, strongly increases process efficiency and improves product quality (uniformity). The heat transfer during continuous drying of the spin frozen vials is provided via non-contact infrared (IR) radiation. The energy transfer to the spin frozen vials should be optimised to maximise the drying efficiency while avoiding cake collapse. Therefore, a mechanistic model was developed which allows computing the optimal, dynamic IR heater temperature as a function of the primary drying progress and which, hence, also allows predicting the primary drying endpoint based on the applied dynamic IR heater temperature. The model was validated by drying spin frozen vials containing the model formulation (3.9 mL in 10R vials) according to the computed IR heater temperature profile. In total, 6 validation experiments were conducted. The primary drying endpoint was experimentally determined via in-line near-infrared (NIR) spectroscopy and compared with the endpoint predicted by the model (50 min). The mean ratio of the experimental drying time to the predicted value was 0.91, indicating a good agreement between the model predictions and the experimental data. The end product had an elegant product appearance (visual inspection) and an acceptable residual moisture content (Karl Fischer). Copyright © 2017 Elsevier B.V. All rights reserved.
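
    Mechanistic primary-drying models of this type are typically built around a dried-layer mass-transfer relation of the following generic form (shown here only for orientation; the paper's full energy- and mass-balance formulation is not reproduced):

    ```latex
    % Generic primary-drying sublimation relation (textbook form, not the paper's exact model):
    % sublimation rate driven by the ice vapour pressure at the front versus the chamber pressure.
    \[
      \dot{m}_{\mathrm{sub}} \;=\; \frac{A_{p}\,\bigl(P_{w,i}(T_{i}) - P_{c}\bigr)}{R_{p}}
    \]
    % A_p: sublimating product area, P_{w,i}(T_i): vapour pressure of ice at the sublimation-front
    % temperature T_i, P_c: chamber pressure, R_p: dried-product mass-transfer resistance.
    ```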

  1. Global Sensitivity Analysis as Good Modelling Practices tool for the identification of the most influential process parameters of the primary drying step during freeze-drying.

    Science.gov (United States)

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas

    2018-02-01

    Pharmaceutical batch freeze-drying is commonly used to improve the stability of biological therapeutics. The primary drying step is regulated by the dynamic settings of the adaptable process variables, shelf temperature Ts and chamber pressure Pc. Mechanistic modelling of the primary drying step leads to the optimal dynamic combination of these adaptable process variables as a function of time. According to Good Modelling Practices, a Global Sensitivity Analysis (GSA) is essential for appropriate model building. In this study, both a regression-based and a variance-based GSA were conducted on a validated mechanistic primary drying model to estimate the impact of several model input parameters on two output variables, the product temperature at the sublimation front Ti and the sublimation rate ṁsub. Ts was identified as the most influential parameter on both Ti and ṁsub, followed by Pc and the dried product mass transfer resistance αRp for Ti and ṁsub, respectively. The GSA findings were experimentally validated for ṁsub via a Design of Experiments (DoE) approach. The results indicated that GSA is a very useful tool for the evaluation of the impact of different process variables on the model outcome, leading to essential process knowledge, without the need for time-consuming experiments (e.g., DoE). Copyright © 2017 Elsevier B.V. All rights reserved.
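
    A variance-based (Sobol) GSA of the kind described above can be set up, for example, with the SALib package. In the sketch below the primary_drying() function, parameter names and bounds are placeholders, not the validated model or the ranges used in the paper.

    ```python
    # Sketch of a variance-based (Sobol) global sensitivity analysis using SALib.
    # The model function, parameter names and bounds below are illustrative placeholders.
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["T_s", "P_c", "R_p"],              # shelf temperature, chamber pressure, product resistance
        "bounds": [[243.0, 273.0], [5.0, 30.0], [1e4, 1e5]],
    }

    def primary_drying(x):
        """Placeholder for a mechanistic model returning e.g. the sublimation rate."""
        T_s, P_c, R_p = x
        return T_s / R_p - 0.01 * P_c                # dummy response, for illustration only

    X = saltelli.sample(problem, 1024)               # Saltelli sampling scheme
    Y = np.array([primary_drying(x) for x in X])
    Si = sobol.analyze(problem, Y)                   # first-order (S1) and total-order (ST) indices
    print(Si["S1"], Si["ST"])
    ```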

  2. The partner selection process : Steps, effectiveness, governance

    NARCIS (Netherlands)

    Duisters, D.; Duijsters, G.M.; de Man, A.P.

    2011-01-01

    Selecting the right partner is important for creating value in alliances. Even though prior research suggests that a structured partner selection process increases alliance success, empirical research remains scarce. This paper presents an explorative empirical study that shows that some steps in

  3. Lyondell develops one step isobutylene process

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    This paper reports that Lyondell Petrochemical Co., Houston, has developed a one-step process to convert normal butylenes to isobutylene, a key component of methyl tertiary butyl ether (MTBE). MTBE is expected to become the additive of choice among U.S. refiners to blend oxygenated gasolines required by the 1990 amendments to the Clean Air Act. Lyondell President and Chief Executive Officer Bob Gower said the new process could help assure adequate supplies of MTBE to meet U.S. demand for cleaner burning fuels. Lyondell estimates the capital cost of building a grassroots plant to produce isobutylene with the new process would be less than half the cost of a grassroots plant to produce isobutylene with existing technology starting with normal butane

  4. Positive steps turning into a process

    Directory of Open Access Journals (Sweden)

    Božičević Goran

    2004-01-01

    The conclusion of the research conducted in Croatia for QPSW in 2003 is that there is no systematic, accountable and structural confrontation with the past in Croatia, but there is growing concern within civil society about the problems incurred by the lack of such a confrontation. Two different approaches can be discerned: individual work with particular persons or target groups, and advocacy that could influence public opinion and decision-making. Both levels are necessary and should unfold simultaneously. The systematization and regional cooperation of documentation centers, cooperation between victim organizations and peace initiatives, the inclusion of former combatants in peace-building processes, and the cooperation of artists and activists represent some of the new and promising steps on the civilian scene in Croatia. The constant strengthening of the independent media and the judiciary, coupled with constant efforts on both levels - the personal and the public - raises hopes that the confrontation with the past in Croatia is a process and not a trend.

  5. One step processing for future diesel specifications

    International Nuclear Information System (INIS)

    Brierley, G.R.

    1997-01-01

    The trend in diesel fuel specifications is to limit the sulfur level to less than 0.05 wt %. Many regions have also specified that diesel fuels must have lower aromatic levels, higher cetane numbers, and lower distillation end points. These changes will require significant refinery investment to meet the new diesel fuel specifications. The changes may also significantly affect the value of synthetic crude stocks. UOP has developed a new hydroprocessing catalyst which makes it possible to meet the new diesel specifications in a single processing step and at minimal cost. The catalyst saturates aromatics while opening ring structures at the same time. By selectively cracking heavy components into the diesel range with minimal cracking to gas or naphtha, heavier feedstocks can be upgraded to diesel, and refinery diesel yield can be augmented. Synthetic crude distillate is often high in aromatics and low in cetane number. This new UOP hydroprocessing system will allow synthetic crude producers and refiners to produce diesel fuels with higher cetane numbers, high-quality distillate blendstocks and distillate fuels. 26 figs

  6. Multiple Steps Prediction with Nonlinear ARX Models

    OpenAIRE

    Zhang, Qinghua; Ljung, Lennart

    2007-01-01

    NLARX (NonLinear AutoRegressive with eXogenous inputs) models are frequently used in black-box nonlinear system identification. Though it is easy to make one step ahead predictions with such models, multiple steps prediction is far from trivial. The main difficulty is that in general there is no easy way to compute the mathematical expectation of an output conditioned on past measurements. An optimal solution would require intensive numerical computations related to nonlinear filtering. The pur...

  7. Two-step estimation for inhomogeneous spatial point processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Guan, Yongtao

    This paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second order properties (K-function). Regression parameters are estimated using a Poisson likelihood score estimating function and in a second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rain forests.

  8. Two-Step Production of Phenylpyruvic Acid from L-Phenylalanine by Growing and Resting Cells of Engineered Escherichia coli: Process Optimization and Kinetics Modeling.

    Directory of Open Access Journals (Sweden)

    Ying Hou

    Phenylpyruvic acid (PPA) is widely used in the pharmaceutical, food, and chemical industries. Here, a two-step bioconversion process, involving growing and resting cells, was established to produce PPA from L-phenylalanine using the engineered Escherichia coli constructed previously. First, the biotransformation conditions for growing cells were optimized (L-phenylalanine concentration 20.0 g·L-1, temperature 35°C) and a two-stage temperature control strategy (keep 20°C for 12 h, then increase the temperature to 35°C until the end of biotransformation) was applied. The biotransformation conditions for resting cells were then optimized in a 3-L bioreactor; the optimized conditions were as follows: agitation speed 500 rpm, aeration rate 1.5 vvm, and L-phenylalanine concentration 30 g·L-1. The total maximal production (mass conversion rate) reached 29.8 ± 2.1 g·L-1 (99.3%) and 75.1 ± 2.5 g·L-1 (93.9%) in the flask and the 3-L bioreactor, respectively. Finally, a kinetic model was established, and it revealed that substrate and product inhibition were the main limiting factors for resting cell biotransformation.
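
    The substrate and product inhibition identified as limiting can be illustrated with a generic inhibited rate law integrated over a batch; the rate form and all constants below are assumptions for demonstration only, not the kinetic model or parameters reported in the paper.

    ```python
    # Illustrative kinetics sketch: a generic substrate- and product-inhibited rate law for the
    # resting-cell conversion of L-phenylalanine (S) into PPA (P). Rate form and constants are
    # assumed for demonstration, not taken from the paper.
    from scipy.integrate import solve_ivp

    v_max, K_m = 3.0, 2.0      # maximum rate (g/(L*h)) and half-saturation constant (g/L), assumed
    K_is, K_ip = 40.0, 25.0    # substrate / product inhibition constants (g/L), assumed

    def rate(S, P):
        return v_max * S / (K_m + S + S**2 / K_is) * K_ip / (K_ip + P)

    def batch(t, y):
        S, P = y
        v = rate(S, P)
        return [-v, v]         # 1:1 stoichiometry, S -> P

    sol = solve_ivp(batch, (0.0, 24.0), [30.0, 0.0])   # start from 30 g/L substrate
    print(sol.y[:, -1])        # remaining substrate and accumulated product after 24 h
    ```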

  9. Two-Step Plasma Process for Cleaning Indium Bonding Bumps

    Science.gov (United States)

    Greer, Harold F.; Vasquez, Richard P.; Jones, Todd J.; Hoenk, Michael E.; Dickie, Matthew R.; Nikzad, Shouleh

    2009-01-01

    A two-step plasma process has been developed as a means of removing surface oxide layers from indium bumps used in flip-chip hybridization (bump bonding) of integrated circuits. The two-step plasma process makes it possible to remove surface indium oxide, without incurring the adverse effects of the acid etching process.

  10. A model for two-step ageing

    Indian Academy of Sciences (India)

    Unknown

    matrix are not considered. In the present work, a model is developed which takes into account the coherency strains between cluster and matrix and defines a new stability criterion, inclusive of strain energy term. Experiments were done on AA 7010 aluminium alloy by carrying out a two-step ageing treatment and the.

  11. Two-step estimation for inhomogeneous spatial point processes

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus; Guan, Yongtao

    2009-01-01

    The paper is concerned with parameter estimation for inhomogeneous spatial point processes with a regression model for the intensity function and tractable second-order properties (K-function). Regression parameters are estimated by using a Poisson likelihood score estimating function and in the second step minimum contrast estimation is applied for the residual clustering parameters. Asymptotic normality of parameter estimates is established under certain mixing conditions and we exemplify how the results may be applied in ecological studies of rainforests.

  12. Towards Algorithmic Generation of Business Processes: From Business Step Dependencies to Process Algebra Expressions

    Science.gov (United States)

    Oikawa, Márcio K.; Ferreira, João E.; Malkowski, Simon; Pu, Calton

    Recently, a lot of work has been done on formalization of business process specification, in particular, using Petri nets and process algebra. However, these efforts usually do not explicitly address complex business process development, which necessitates the specification, coordination, and synchronization of a large number of business steps. It is imperative that these atomic tasks are associated correctly and monitored for countless dependencies. Moreover, as these business processes grow, they become critically reliant on a large number of split and merge points, which additionally increases modeling complexity. Therefore, one of the central challenges in complex business process modeling is the composition of dependent business steps. We address this challenge and introduce a formally correct method for automated composition of algebraic expressions in complex business process modeling based on acyclic directed graph reductions. We show that our method generates an equivalent algebraic expression from an appropriate acyclic directed graph if the graph is well-formed and series-parallel. Additionally, we encapsulate the reductions in an algorithm that transforms business step dependencies described by users into digraphs, recognizes structural conflicts, identifies Wheatstone bridges, and finally generates algebraic expressions.
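
    The core series/parallel reduction idea can be illustrated with a small sketch on a two-terminal edge list, where "." denotes sequence and "+" denotes choice. This toy version only handles well-formed series-parallel graphs; the conflict detection and Wheatstone-bridge handling of the authors' algorithm are not reproduced.

    ```python
    # Toy series-parallel reduction: repeatedly merges series chains and parallel edges of a
    # two-terminal DAG into a single algebraic expression. Illustrative simplification only.
    def reduce_sp(edges, source, sink):
        edges = list(edges)                            # edges are (u, v, label) triples
        changed = True
        while changed and len(edges) > 1:
            changed = False
            # parallel reduction: two edges with the same endpoints become a choice
            for i in range(len(edges)):
                for j in range(i + 1, len(edges)):
                    if edges[i][:2] == edges[j][:2]:
                        u, v, a = edges[i]
                        b = edges[j][2]
                        edges[i] = (u, v, f"({a} + {b})")
                        del edges[j]
                        changed = True
                        break
                if changed:
                    break
            if changed:
                continue
            # series reduction: an internal node with one in-edge and one out-edge becomes a sequence
            nodes = {n for e in edges for n in e[:2]} - {source, sink}
            for w in nodes:
                ins = [e for e in edges if e[1] == w]
                outs = [e for e in edges if e[0] == w]
                if len(ins) == 1 and len(outs) == 1:
                    u, _, a = ins[0]
                    _, v, b = outs[0]
                    edges = [e for e in edges if e not in (ins[0], outs[0])]
                    edges.append((u, v, f"({a} . {b})"))
                    changed = True
                    break
        return edges[0][2] if len(edges) == 1 else None    # None: not (or not yet) series-parallel

    # Example: business steps A;(B+C);D between terminals s and t
    print(reduce_sp([("s", "n1", "A"), ("n1", "n2", "B"),
                     ("n1", "n2", "C"), ("n2", "t", "D")], "s", "t"))
    ```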

  13. Crystal growth processes : The role of steps and of mass transfer in the fluid phase

    NARCIS (Netherlands)

    Janssen-Van Rosmalen, R.

    1977-01-01

    The step model introduced by Burton, Cabrera and Frank is known to give a good description of the surface processes, when growth spirals are available as a consequence of screw dislocations. Their model is based on an infinite sequence of equidistant steps. In the first part of the research

  14. Two-step two-stage fission gas release model

    International Nuclear Information System (INIS)

    Kim, Yong-soo; Lee, Chan-bock

    2006-01-01

    Based on a recent theoretical model, a two-step two-stage model is developed which incorporates two-stage diffusion processes, grain lattice and grain boundary diffusion, coupled with a two-step burn-up factor for the low and high burn-up regimes. The FRAPCON-3 code and its in-pile data sets have been used for the benchmarking and validation of this model. Results reveal that its predictions are in better agreement with the experimental measurements than those of any model contained in the FRAPCON-3 code, such as ANS 5.4, modified ANS 5.4, and the Forsberg-Massih model, over the whole burn-up range up to 70,000 MWd/MTU. (author)

  15. Multivariate statistical analysis of a multi-step industrial processes

    DEFF Research Database (Denmark)

    Reinikainen, S.P.; Høskuldsson, Agnar

    2007-01-01

    Monitoring and quality control of industrial processes often produce information on how the data have been obtained. In batch processes, for instance, the process is carried out in stages; some process or control parameters are set at each stage. However, the obtained data might not be utilized efficiently, even if this information may reveal significant knowledge about process dynamics or ongoing phenomena. When studying the process data, it may be important to analyse the data in the light of the physical or time-wise development of each process step. In this paper, a unified approach to analyse ... This approach will show how the process develops from a data point of view. The procedure is illustrated on a relatively simple industrial batch process, but it is also applicable in a general context, where knowledge about the variables is available.

  16. 48 CFR 15.202 - Advisory multi-step process.

    Science.gov (United States)

    2010-10-01

    ... Contracting Methods and Contract Types; Contracting by Negotiation; Solicitation and Receipt of Proposals and Information; 15.202 Advisory multi-step process. (a) The agency may publish a presolicitation notice (see 5.204) that provides a general description of the scope or purpose of the acquisition and invites potential...

  17. Two-step processing of in vivo synthesized rice lectin.

    Science.gov (United States)

    Stinissen, H M; Peumans, W J; Carlier, A R

    1983-01-01

    The synthesis and processing of rice lectin was followed in vivo in developing rice embryos. Using labelling and pulse-chase labelling experiments, the sequence of events in the synthesis and post-translational modifications of this protein could be determined. The primary lectin product observed in vivo is a high molecular weight precursor (28 K), which is post-translationally converted to a 23 K lectin protein, and in a further step cleaved into two smaller 12 K and 10 K polypeptides. The first step of the processing of the rice lectin is a rather slow process (the precursor has a half-life of about 3 h) and resembles the so-called vectorial processing of cytoplasmically made organellar proteins. The second modification consists of a (slow) proteolytic cleavage of the basic lectin subunit into two smaller polypeptides and resembles somewhat the cleavage of some legume (storage) proteins in their protein bodies.

  18. Dynamics Of Innovation Diffusion With Two Step Decision Process

    Directory of Open Access Journals (Sweden)

    Szymczyk Michał

    2014-02-01

    The paper discusses the dynamics of innovation diffusion among heterogeneous consumers. We assume that the customers' decision-making process is divided into two steps: testing the innovation and, later, potentially adopting it. Such a model setup is designed to imitate the mobile applications market. An innovation provider can, to some extent, control the innovation diffusion through two parameters: product quality and marketing activity. Using a multi-agent approach we identify factors influencing the saturation level and the speed of innovation adoption in the artificial population. The results show that the expected level of innovation adoption among a customer's friends, the relative product quality and the marketing campaign intensity are the crucial explanatory factors. It has to be stressed that product quality is more important for the innovation saturation level, while the marketing campaign has a bigger influence on the speed of diffusion. The topology of the social network between customers is found to be important, but within the investigated parameter range it has a lower impact on innovation diffusion dynamics than the factors mentioned above. A toy simulation of this two-step mechanism is sketched after this record.
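
    The sketch below is a toy agent-based illustration of the two-step (test, then adopt) mechanism. The network model, probabilities and parameter values are assumptions for illustration and do not reproduce the paper's simulation setup.

    ```python
    # Toy agent-based sketch of a two-step (test, then adopt) diffusion mechanism.
    # Network, thresholds and parameters are assumed for illustration only.
    import random
    import networkx as nx

    QUALITY, MARKETING = 0.6, 0.1          # relative product quality, campaign intensity (assumed)
    G = nx.watts_strogatz_graph(500, 6, 0.1, seed=1)
    state = {n: "unaware" for n in G}      # unaware -> tested -> adopted

    for step in range(50):
        for n in list(G):
            if state[n] == "unaware":
                # step 1: testing triggered by marketing or by adopting friends
                peers = sum(state[m] == "adopted" for m in G[n]) / max(G.degree(n), 1)
                if random.random() < MARKETING + 0.5 * peers:
                    state[n] = "tested"
            elif state[n] == "tested":
                # step 2: adoption depends on perceived quality
                if random.random() < QUALITY:
                    state[n] = "adopted"

    print(sum(s == "adopted" for s in state.values()) / len(state))   # saturation level
    ```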

  19. High pressure as an alternative processing step for ham production.

    Science.gov (United States)

    Pingen, Sylvia; Sudhaus, Nadine; Becker, André; Krischek, Carsten; Klein, Günter

    2016-08-01

    As high pressure processing (HPP) is becoming more and more important in the food industry, this study examined the application of HPP (500 and 600 MPa) as a manufacturing step during simulated ham production. Replacing conventional heating with HPP steps did not achieve ham-like texture or color attributes. HPP products showed a less pale, less red appearance, softer texture and higher yields. However, a combination of mild temperature (53°C) and 500 MPa resulted in parameters more comparable to cooked ham. We conclude that HPP can be used for novel food development, providing novel textures and colors. However, when it comes to ham production, a heating step seems to be unavoidable to obtain characteristic ham properties. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. A model for two-step ageing

    Indian Academy of Sciences (India)

    Unknown

    While this is true in Al–Zn–Mg alloys, two-step ageing leads to inferior properties in Al–Mg–Si alloys. This controversial behaviour in different alloys can be ... Experiments were done on AA 7010 aluminium alloy by carrying out a two-step ageing treatment and the results fit the new stability criterion. Thus it is found that the ...

  1. Modelling Flow over Stepped Spillway with Varying Chute Geometry ...

    African Journals Online (AJOL)

    This study has modeled some characteristics of the flow over a stepped spillway with varying chute geometry through a laboratory investigation. Six physically built stepped spillway models were used, each having six horizontal plain steps of 4 cm constant height and 30 cm width, with respective chute slope angles of 31°, 32°, ...

  2. PID controller auto-tuning based on process step response and damping optimum criterion.

    Science.gov (United States)

    Pavković, Danijel; Polak, Siniša; Zorc, Davor

    2014-01-01

    This paper presents a novel method of PID controller tuning suitable for higher-order aperiodic processes and aimed at step response-based auto-tuning applications. The PID controller tuning is based on the identification of so-called n-th order lag (PTn) process model and application of damping optimum criterion, thus facilitating straightforward algebraic rules for the adjustment of both the closed-loop response speed and damping. The PTn model identification is based on the process step response, wherein the PTn model parameters are evaluated in a novel manner from the process step response equivalent dead-time and lag time constant. The effectiveness of the proposed PTn model parameter estimation procedure and the related damping optimum-based PID controller auto-tuning have been verified by means of extensive computer simulations. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
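
    For orientation, the general step-response auto-tuning workflow can be sketched as below using a simplified first-order-plus-dead-time approximation and a classical open-loop tuning rule. This is explicitly not the PTn identification and damping-optimum procedure proposed in the paper.

    ```python
    # Rough sketch of step-response-based auto-tuning: approximate the process as first-order
    # plus dead time from two points of the recorded step response, then apply a classical
    # open-loop tuning rule. This simplification is NOT the paper's PTn / damping-optimum method.
    import numpy as np

    def fopdt_from_step(t, y, u_step):
        """Two-point (28.3% / 63.2%) identification of gain K, lag T and dead time L."""
        K = (y[-1] - y[0]) / u_step
        y_norm = (y - y[0]) / (y[-1] - y[0])
        t28 = np.interp(0.283, y_norm, t)
        t63 = np.interp(0.632, y_norm, t)
        T = 1.5 * (t63 - t28)
        L = max(t63 - T, 0.0)
        return K, T, L

    def pid_open_loop(K, T, L):
        """Classical Ziegler-Nichols open-loop rule, used here purely for illustration."""
        Kp = 1.2 * T / (K * max(L, 1e-6))
        Ti, Td = 2.0 * L, 0.5 * L
        return Kp, Ti, Td

    # Synthetic monotonic step response for demonstration (true K=2, T=8, dead time 3)
    t = np.linspace(0.0, 50.0, 500)
    y = 2.0 * (1.0 - np.exp(-np.maximum(t - 3.0, 0.0) / 8.0))
    print(pid_open_loop(*fopdt_from_step(t, y, u_step=1.0)))
    ```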

  3. STEP - Product Model Data Sharing and Exchange

    DEFF Research Database (Denmark)

    Kroszynski, Uri

    1998-01-01

    - Product Data Representation and Exchange", featuring at present some 30 released parts, and growing continuously. Many of the parts are Application Protocols (AP). This article presents an overview of STEP, based upon years of involvement in three ESPRIT projects, which contributed to the development...

  4. A Four-Step Model for Teaching Selection Interviewing Skills

    Science.gov (United States)

    Kleiman, Lawrence S.; Benek-Rivera, Joan

    2010-01-01

    The topic of selection interviewing lends itself well to experience-based teaching methods. Instructors often teach this topic by using a two-step process. The first step consists of lecturing students on the basic principles of effective interviewing. During the second step, students apply these principles by role-playing mock interviews with…

  5. Multi-enzyme Process Modeling

    DEFF Research Database (Denmark)

    Andrade Santacoloma, Paloma de Gracia

    The subject of this thesis is to develop a methodological framework that can systematically guide mathematical model building for better understanding of multi-enzyme processes. In this way, opportunities for process improvements can be identified by analyzing simulations of either existing...... features of the process and provides the information required to structure the process model by using a step-by-step procedure with the required tools and methods. In this way, this framework increases efficiency of the model development process with respect to time and resources needed (fast and effective...... in the scientific literature. Reliable mathematical models of such multi-catalytic schemes can exploit the potential benefit of these processes. In this way, the best outcome of the process can be obtained understanding the types of modification that are required for process optimization. An effective evaluation...

  6. STEPS: Modeling and Simulating Complex Reaction-Diffusion Systems with Python

    OpenAIRE

    Wils, Stefan; Schutter, Erik De

    2009-01-01

    We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation and b...
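
    As a minimal illustration of the kind of stochastic reaction kinetics STEPS simulates, the sketch below runs a Gillespie-style simulation of a single bimolecular reaction in a well-mixed volume. It does not use the STEPS API and omits diffusion and 3-D geometry entirely.

    ```python
    # Minimal Gillespie-style sketch (well-mixed, single reaction A + B -> C). This does NOT
    # use the STEPS API; species counts and the rate constant are illustrative only.
    import random

    def gillespie(x_a, x_b, x_c, k=0.001, t_end=10.0):
        t = 0.0
        while t < t_end:
            propensity = k * x_a * x_b               # propensity of A + B -> C
            if propensity == 0:
                break
            t += random.expovariate(propensity)      # exponentially distributed waiting time
            x_a, x_b, x_c = x_a - 1, x_b - 1, x_c + 1
            # with several reactions, the next event would be chosen proportionally to its propensity
        return x_a, x_b, x_c

    print(gillespie(1000, 800, 0))
    ```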

  7. Stepwise hydrogeological modeling and groundwater flow analysis on site scale (Step 0 and Step 1)

    International Nuclear Information System (INIS)

    Ohyama, Takuya; Saegusa, Hiromitsu; Onoe, Hironori

    2005-05-01

    One of the main goals of the Mizunami Underground Research Laboratory Project is to establish comprehensive techniques for investigation, analysis, and assessment of the deep geological environment. To achieve this goal, a variety of investigations, analyses, and evaluations have been conducted using an iterative approach. In this study, hydrogeological modeling and groundwater flow analyses have been carried out using the data from surface-based investigations at Step 0 and Step 1, in order to synthesize the investigation results, to evaluate the uncertainty of the hydrogeological model, and to specify items for further investigation. The results of this study are summarized as follows: 1) As the investigation progressed from Step 0 to Step 1, the understanding of groundwater flow was enhanced and the hydrogeological model could be revised, 2) The importance of faults as major groundwater flow pathways was demonstrated, 3) Geological and hydrogeological characteristics of faults with orientations of NNW and NE were shown to be especially significant. The main item specified for further investigations is summarized as follows: the geological and hydrogeological characteristics of NNW and NE trending faults are important. (author)

  8. Data-based control of a multi-step forming process

    Science.gov (United States)

    Schulte, R.; Frey, P.; Hildenbrand, P.; Vogel, M.; Betz, C.; Lechner, M.; Merklein, M.

    2017-09-01

    The fourth industrial revolution represents a new stage in the organization and management of the entire value chain. However, in the field of forming technology, the fourth industrial revolution has so far only arrived gradually. In order to make a valuable contribution to the digital factory, the control of a multistage forming process was investigated. Within the framework of the investigation, an abstracted and transferable model is used to outline which data have to be collected, how an interface between the different forming machines can be designed in a tangible way, and which control tasks must be fulfilled. The goal of this investigation was to control the subsequent process step based on the data recorded in the first step. The investigated process chain links various metal forming processes which are typical elements of a multi-step forming process. Data recorded in the first step of the process chain are analyzed and processed for an improved process control of the subsequent process. On the basis of the scientific knowledge gained, it is possible to make forming operations more robust and at the same time more flexible, and thus create the foundation for linking various production processes in an efficient way.

  9. Neptunium control in co-decontamination step of purex process

    International Nuclear Information System (INIS)

    Zhang Zefu; He Jianyu; Zhu Zhaowu; Ye Guoan; Zhao Zhiqiang

    2002-01-01

    A new alternative method for separation of Np in the first co-decontamination step is proposed. It comprises two steps, namely, preconditioning of the Np valence state in the dissolved spent fuel solution by NO gas bubbling in HNO3 medium to produce HNO2, which is considered a salt-free process to convert Np(VI) to Np(V), and stabilization of Np(V) with urea; finally, a demonstrative counter-current cascade extraction of Np(IV) and Np(V) in a miniature mixer-settler was carried out. The batch experiments show that Np(V) produced after conditioning may be slowly oxidized again to Np(VI) during standing. Addition of urea to the HNO3 solution might enhance the stability of Np(V). On the other hand, solvent extraction by 30% TBP/kerosene could greatly accelerate the oxidation rate of Np(V). The chemical flow sheet study at 25°C shows that more than 98% of the Np can be routed into HLLW if urea is added to the HNO3 solution. The operating temperature has a great influence on the kinetics of Np(V) oxidation. If the operating temperature rises to 36°C and urea is not added, about 38% of the Np will follow U and Pu into the organic phase. The behavior of Np(IV) during extraction shows great accumulation in the middle stages of the battery. (author)

  10. Modelling step-families: exploratory findings.

    Science.gov (United States)

    Bartlema, J

    1988-01-01

    "A combined macro-micro model is applied to a population similar to that forecast for 2035 in the Netherlands in order to simulate the effect on kinship networks of a mating system of serial monogamy. The importance of incorporating a parameter for the degree of concentration of childbearing over the female population is emphasized. The inputs to the model are vectors of fertility rates by age of mother, and by age of father, a matrix of first-marriage rates by age of both partners (used in the macro-analytical expressions), and two parameters H and S (used in the micro-simulation phase). The output is a data base of hypothetical individuals, whose records contain identification number, age, sex, and the identification numbers of their relatives." (SUMMARY IN FRE) excerpt

  11. One-step electrodeposition process to fabricate cathodic superhydrophobic surface

    Energy Technology Data Exchange (ETDEWEB)

    Chen Zhi, E-mail: c2002z@nwpu.edu.cn [Department of Applied Physics, Northwestern Polytechnical University, Xi'an 710129 (China)]; Li Feng [Department of Applied Physics, Northwestern Polytechnical University, Xi'an 710129 (China)]; Hao Limei [Department of Applied Physics, Xi'an University of Science and Technology, Xi'an 710054 (China)]; Chen Anqi; Kong Youchao [Department of Applied Physics, Northwestern Polytechnical University, Xi'an 710129 (China)]

    2011-12-01

    In this work, a rapid one-step process is developed to fabricate a superhydrophobic cathodic surface by electrodepositing a copper plate in an electrolyte solution containing manganese chloride (MnCl2·4H2O), myristic acid (CH3(CH2)12COOH) and ethanol. The superhydrophobic surfaces were characterized by means of scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR) and X-ray diffraction (XRD). The shortest electrolysis time for fabricating a superhydrophobic surface is about 1 min; the measured maximum contact angle is 163° and the rolling angle is less than 3°. Furthermore, this method can be easily extended to other conductive materials. The approach is time-saving and cheap, and it is expected to have a promising future in industrial fields.

  12. Fitting three-level meta-analytic models in R: A step-by-step tutorial

    Directory of Open Access Journals (Sweden)

    Assink, Mark

    2016-10-01

    Applying a multilevel approach to meta-analysis is a strong method for dealing with dependency of effect sizes. However, this method is relatively unknown among researchers and, to date, has not been widely used in meta-analytic research. Therefore, the purpose of this tutorial was to show how a three-level random effects model can be applied to meta-analytic models in R using the rma.mv function of the metafor package. This application is illustrated by taking the reader through a step-by-step guide to the multilevel analyses comprising the steps of (1) organizing a data file; (2) setting up the R environment; (3) calculating an overall effect; (4) examining heterogeneity of within-study variance and between-study variance; (5) performing categorical and continuous moderator analyses; and (6) examining a multiple moderator model. By example, the authors demonstrate how the multilevel approach can be applied to meta-analytically examining the association between mental health disorders of juveniles and juvenile offender recidivism. In our opinion, the rma.mv function of the metafor package provides an easy and flexible way of applying a multilevel structure to meta-analytic models in R. Further, the multilevel meta-analytic models can be easily extended so that the potential moderating influence of variables can be examined.

  13. Integrated modelling in materials and process technology

    DEFF Research Database (Denmark)

    Hattel, Jesper Henri

    2008-01-01

    Integrated modelling of entire process sequences and the subsequent in-service conditions, and multiphysics modelling of the single process steps are areas that increasingly support optimisation of manufactured parts. In the present paper, three different examples of modelling manufacturing...... processes from the viewpoint of combined materials and process modelling are presented: solidification of thin walled ductile cast iron, integrated modelling of spray forming and multiphysics modelling of friction stir welding. The fourth example describes integrated modelling applied to a failure analysis...

  14. STEP wastewater treatment: a solar thermal electrochemical process for pollutant oxidation.

    Science.gov (United States)

    Wang, Baohui; Wu, Hongjun; Zhang, Guoxue; Licht, Stuart

    2012-10-01

    A solar thermal electrochemical production (STEP) pathway was established to utilize solar energy to drive useful chemical processes. In this paper, we use experimental chemistry for efficient STEP wastewater treatment, and suggest a theory based on the decreasing stability of organic pollutants (hydrocarbon oxidation potentials) with increasing temperature. Exemplified by the solar thermal electrochemical oxidation of phenol, the fundamental model and experimental system components of this process outline a general method for the oxidation of environmentally stable organic pollutants into carbon dioxide, which is easily removed. Using thermodynamic calculations we show a sharply decreasing phenol oxidation potential with increasing temperature. The experimental results demonstrate that this increased temperature can be supplied by solar thermal heating. In combination this drives electrochemical phenol removal with enhanced oxidation efficiency through (i) a thermodynamically driven decrease in the energy needed to fuel the process and (ii) improved kinetics to sustain high rates of phenol oxidation at low electrochemical overpotential. The STEP wastewater treatment process is synergistic in that it is performed with higher efficiency than either electrochemical or photovoltaic conversion process acting alone. STEP is a green, efficient, safe, and sustainable process for organic wastewater treatment driven solely by solar energy. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Dynamic Virus-Bacterium Interactions in a Porcine Precision-Cut Lung Slice Coinfection Model: Swine Influenza Virus Paves the Way for Streptococcus suis Infection in a Two-Step Process.

    Science.gov (United States)

    Meng, F; Wu, N H; Nerlich, A; Herrler, G; Valentin-Weigand, P; Seitz, M

    2015-07-01

    Swine influenza virus (SIV) and Streptococcus suis are common pathogens of the respiratory tract in pigs, with both being associated with pneumonia. The interactions of both pathogens and their contribution to copathogenesis are only poorly understood. In the present study, we established a porcine precision-cut lung slice (PCLS) coinfection model and analyzed the effects of a primary SIV infection on secondary infection by S. suis at different time points. We found that SIV promoted adherence, colonization, and invasion of S. suis in a two-step process. First, in the initial stages, these effects were dependent on bacterial encapsulation, as shown by selective adherence of encapsulated, but not unencapsulated, S. suis to SIV-infected cells. Second, at a later stage of infection, SIV promoted S. suis adherence and invasion of deeper tissues by damaging ciliated epithelial cells. This effect was seen with a highly virulent SIV subtype H3N2 strain but not with a low-virulence subtype H1N1 strain, and it was independent of the bacterial capsule, since an unencapsulated S. suis mutant behaved in a way similar to that of the encapsulated wild-type strain. In conclusion, the PCLS coinfection model established here revealed novel insights into the dynamic interactions between SIV and S. suis during infection of the respiratory tract. It showed that at least two different mechanisms contribute to the beneficial effects of SIV for S. suis, including capsule-mediated bacterial attachment to SIV-infected cells and capsule-independent effects involving virus-mediated damage of ciliated epithelial cells. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  16. Optimal filtering for systems with finite-step autocorrelated process noises, random one-step sensor delay and missing measurements

    Science.gov (United States)

    Chen, Dongyan; Xu, Long; Du, Junhua

    2016-03-01

    The optimal filtering problem is investigated for a class of discrete stochastic systems with finite-step autocorrelated process noises, random one-step sensor delay and missing measurements. The random disturbances existing in the system are characterized by the multiplicative noises and the phenomena of sensor delay and missing measurements occur in a random way. The random sensor delay and missing measurements are described by two Bernoulli distributed random variables with known conditional probabilities. By using the state augmentation approach, the original system is converted into a new discrete system where the random one-step sensor delay and missing measurements exist in the sensor output. The new process noises and observation noises consist of the original stochastic terms, and the process noises are still autocorrelated. Then, based on the minimum mean square error (MMSE) principle, a new linear optimal filter is designed such that, for the finite-step autocorrelated process noises, random one-step sensor delay and missing measurements, the estimation error is minimized. By solving the recursive matrix equation, the filter gain is designed. Finally, a simulation example is given to illustrate the feasibility and effectiveness of the proposed filtering scheme.
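    The filter in the paper augments the state to handle autocorrelated process noise and random one-step delay; as a much simpler point of reference, the sketch below is a standard Kalman (linear MMSE) recursion in which a Bernoulli indicator merely switches the measurement update off when an observation is missing. All matrices and probabilities are hypothetical.

```python
import numpy as np

# Minimal sketch: Kalman filter with Bernoulli-distributed missing measurements.
# Not the augmented-state filter of the paper; illustrative simplification only.
def kalman_with_missing(A, C, Q, R, x0, P0, ys, gammas):
    """ys: measurements, gammas: 0/1 arrival indicators."""
    x, P = x0, P0
    estimates = []
    for y, g in zip(ys, gammas):
        x = A @ x                              # time update
        P = A @ P @ A.T + Q
        if g:                                  # update only when a measurement arrived
            S = C @ P @ C.T + R
            K = P @ C.T @ np.linalg.inv(S)
            x = x + K @ (y - C @ x)
            P = (np.eye(len(x)) - K @ C) @ P
        estimates.append(x.copy())
    return estimates

# toy scalar example with 80% measurement arrival probability (hypothetical values)
rng = np.random.default_rng(0)
A = np.array([[0.95]]); C = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[0.1]])
x_true, ys, gammas = np.array([1.0]), [], []
for _ in range(50):
    x_true = A @ x_true + rng.normal(0, 0.1, 1)
    gammas.append(rng.random() < 0.8)
    ys.append(C @ x_true + rng.normal(0, 0.3, 1))
est = kalman_with_missing(A, C, Q, R, np.array([0.0]), np.array([[1.0]]), ys, gammas)
```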

  17. Computational Fluid Dynamics Modeling of Flow over Stepped Spillway

    Directory of Open Access Journals (Sweden)

    Raad Hoobi Irzooki

    2017-12-01

    Full Text Available In the present paper, the computational fluid dynamics (CFD) program Flow-3D was used to analyze and study the characteristics of flow energy dissipation over stepped spillways. Three different spillway heights (15, 20 and 25 cm) were used. For each of these models, three numbers of steps (N = 5, 10 and 25) and three spillway slopes (S = 0.5, 1 and 1.25) were used. Eight different discharges ranging from 600 to 8500 cm³/s were passed over each of these models; therefore the total number of runs in this study is 216. The energy dissipation over these models and the pressure distribution on the horizontal and vertical step faces over some models were studied. For verification of the CFD program, experimental work was conducted on four models of stepped spillway and five different discharges were passed over each model. The magnitude of dissipated energy on the models was compared with the results of the numerical program under the same conditions. The comparison showed good agreement between them, with the standard percentage error ranging between -2.01 and 11.13%. Thus, the program Flow-3D is a reasonable numerical program which can be used in this study. Results showed that the energy dissipation increases with increased spillway height and decreased number of steps and spillway slope. Also, the energy dissipation decreases with increasing flow rate. An empirical equation for measuring the energy dissipation was derived using dimensional analysis. The coefficient of determination (R²) of this equation equals 0.766.
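    The energy-dissipation figure reported in such studies is a simple specific-energy balance between the spillway crest and toe. The sketch below shows that bookkeeping for one hypothetical geometry and unit discharge; none of the numbers are taken from the paper.

```python
import math

# Relative energy dissipation over a stepped spillway (hypothetical values).
g = 9.81          # m/s^2
q = 0.02          # unit discharge, m^2/s (discharge per metre width)
H_dam = 0.25      # spillway height, m
y1 = 0.015        # measured flow depth at the spillway toe, m

yc = (q**2 / g) ** (1.0 / 3.0)          # critical depth at the crest
E0 = H_dam + 1.5 * yc                   # upstream specific energy (critical flow at crest)
E1 = y1 + q**2 / (2 * g * y1**2)        # residual specific energy at the toe
dissipation = (E0 - E1) / E0            # relative energy dissipation

print(f"critical depth = {yc*1000:.1f} mm, relative dissipation = {dissipation:.1%}")
```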

  18. Core-shell polymer nanorods by a two-step template wetting process

    International Nuclear Information System (INIS)

    Dougherty, S; Liang, J

    2009-01-01

    One-dimensional core-shell polymer nanowires offer many advantages and great potential for many different applications. In this paper we introduce a highly versatile two-step template wetting process to fabricate two-component core-shell polymer nanowires with controllable shell thickness. PLLA and PMMA were chosen as model polymers to demonstrate the feasibility of this process. Solution wetting with different concentrations of polymer solutions was used to fabricate the shell layer and melt wetting was used to fill the shell with the core polymer. The shell thickness was analyzed as a function of the polymer solution concentration and viscosity, and the core-shell morphology was observed with TEM. This paper demonstrates the feasibility of fabricating polymer core-shell nanostructures using our two-step template wetting process and opens the arena for optimization and future experiments with polymers that are desirable for specific applications.

  19. A Two Step Face Alignment Approach Using Statistical Models

    Directory of Open Access Journals (Sweden)

    Ying Cui

    2012-10-01

    Full Text Available Although face alignment using the Active Appearance Model (AAM) is relatively stable, it is known to be sensitive to initial values and not robust under inconstant circumstances. In order to strengthen the ability of AAM performance for face alignment, a two-step approach for face alignment combining AAM and the Active Shape Model (ASM) is proposed. In the first step, AAM is used to locate the inner landmarks of the face. In the second step, the extended ASM is used to locate the outer landmarks of the face under the constraint of the estimated inner landmarks by AAM. The two kinds of landmarks are then combined together to form the whole facial landmarks. The proposed approach is compared with the basic AAM and the progressive AAM methods. Experimental results show that the proposed approach gives a much more effective performance.

  20. Step-by-Step Model for the Study of the Apriori Algorithm for Predictive Analysis

    Directory of Open Access Journals (Sweden)

    Daniel Grigore ROŞCA

    2015-06-01

    Full Text Available The goal of this paper was to develop an education-oriented application based on the Data Mining Apriori Algorithm which facilitates both the research and the study of data mining by graduate students. The application could be used to discover interesting patterns in the corpus of data and to measure the impact on the speed of execution as a function of problem constraints (value of the support and confidence variables, or size of the transactional database). The paper presents a brief overview of the Apriori Algorithm, aspects about the implementation of the algorithm using a step-by-step process, a discussion of the education-oriented user interface and the process of data mining of a test transactional database. The impact of some constraints on the speed of the algorithm is also experimentally measured without a systematic review of different approaches to increase execution speed. Possible applications of the implementation, as well as its limits, are briefly reviewed.
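    For readers who want to see the algorithm the application teaches, a minimal frequent-itemset version of Apriori (join and prune steps only, no rule generation or timing instrumentation) might look like the following; the basket data are invented.

```python
from itertools import combinations

# Minimal Apriori sketch: frequent-itemset generation with join and prune steps.
def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    items = {i for t in transactions for i in t}
    frequent = {frozenset([i]) for i in items if support(frozenset([i])) >= min_support}
    result, k = dict(), 1
    while frequent:
        result.update({s: support(s) for s in frequent})
        k += 1
        # join step: combine frequent (k-1)-itemsets into k-item candidates
        candidates = {a | b for a in frequent for b in frequent if len(a | b) == k}
        # prune step: every (k-1)-subset must itself be frequent, then check support
        frequent = {c for c in candidates
                    if all(frozenset(sub) in result for sub in combinations(c, k - 1))
                    and support(c) >= min_support}
    return result

baskets = [{"bread", "milk"}, {"bread", "butter"}, {"milk", "butter", "bread"}, {"milk"}]
for itemset, sup in sorted(apriori(baskets, 0.5).items(), key=lambda kv: -kv[1]):
    print(set(itemset), f"support={sup:.2f}")
```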

  1. Treatment of fish processing wastewater in a one-step or two-step upflow anaerobic sludge blanket (UASB) reactor

    NARCIS (Netherlands)

    Paluenzuela-Rollon, A.; Zeeman, G.; Lubberding, H.J.; Lettinga, G.; Alaerts, G.J.

    2002-01-01

    The performance of one-step UASB reactors treating fish processing wastewater of different lipid levels was determined using artificially generated influent simulating that of the canning of sardines and tuna. The organic loading rates (OLR) and the hydraulic retention times (HRT) were 5-8 g
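    The two operating parameters quoted above are simple ratios of flow, substrate concentration and reactor volume. A hypothetical illustration (design-style numbers, not data from the study):

```python
# OLR and HRT of a UASB reactor; values below are hypothetical, not from the paper.
def uasb_parameters(flow_m3_per_d, cod_g_per_L, volume_m3):
    hrt_h = volume_m3 / flow_m3_per_d * 24            # hydraulic retention time, h
    olr = flow_m3_per_d * cod_g_per_L / volume_m3     # organic loading rate, g COD/(L*d)
    return hrt_h, olr

hrt, olr = uasb_parameters(flow_m3_per_d=4.0, cod_g_per_L=2.0, volume_m3=1.0)
print(f"HRT = {hrt:.1f} h, OLR = {olr:.1f} g COD/(L*d)")
```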

  2. Stutter-Step Models of Performance in School

    Science.gov (United States)

    Morgan, Stephen L.; Leenman, Theodore S.; Todd, Jennifer J.; Kentucky; Weeden, Kim A.

    2013-01-01

    To evaluate a stutter-step model of academic performance in high school, this article adopts a unique measure of the beliefs of 12,591 high school sophomores from the Education Longitudinal Study, 2002-2006. Verbatim responses to questions on occupational plans are coded to capture specific job titles, the listing of multiple jobs, and the listing…

  3. Problem Resolution through Electronic Mail: A Five-Step Model.

    Science.gov (United States)

    Grandgenett, Neal; Grandgenett, Don

    2001-01-01

    Discusses the use of electronic mail within the general resolution and management of administrative problems and emphasizes the need for careful attention to problem definition and clarity of language. Presents a research-based five-step model for the effective use of electronic mail based on experiences at the University of Nebraska at Omaha.…

  4. Transport processes investigation: A necessary first step in site scale characterization plans

    International Nuclear Information System (INIS)

    Roepke, C.; Glass, R.J.; Brainard, J.; Mann, M.; Kriel, K.; Holt, R.; Schwing, J.

    1995-01-01

    We propose an approach, which we call the Transport Processes Investigation or TPI, to identify and verify site-scale transport processes and their controls. The TPI aids in the formulation of an accurate conceptual model of flow and transport, an essential first step in the development of a cost effective site characterization strategy. The TPI is demonstrated in the highly complex vadose zone of glacial tills that underlie the Fernald Environmental Remediation Project (FEMP) in Fernald, Ohio. As a result of the TPI, we identify and verify the pertinent flow processes and their controls, such as extensive macropore and fracture flow through layered clays, which must be included in an accurate conceptual model of site-scale contaminant transport. We are able to conclude that the classical modeling and sampling methods employed in some site characterization programs will be insufficient to characterize contaminant concentrations or distributions at contaminated or hazardous waste facilities sited in such media

  5. Step-indexed Kripke models over recursive worlds

    DEFF Research Database (Denmark)

    Birkedal, Lars; Reus, Bernhard; Schwinghammer, Jan

    2011-01-01

    worlds that are recursively defined in a category of metric spaces. In this paper, we broaden the scope of this technique from the original domain-theoretic setting to an elementary, operational one based on step indexing. The resulting method is widely applicable and leads to simple, succinct models...... of complicated language features, as we demonstrate in our semantics of Charguéraud and Pottier’s type-and-capability system for an ML-like higher-order language. Moreover, the method provides a high-level understanding of the essence of recent approaches based on step indexing....

  6. Block factorization of step response model predictive control problems

    DEFF Research Database (Denmark)

    Kufoalor, D. K.M.; Frison, Gianluca; Imsland, L.

    2017-01-01

    implemented in the HPMPC framework, and the performance is evaluated through simulation studies. The results confirm that a computationally fast controller is achieved, compared to the traditional step response MPC scheme that relies on an explicit prediction formulation. Moreover, the tailored condensing......By introducing a stage-wise prediction formulation that enables the use of highly efficient quadratic programming (QP) solution methods, this paper expands the computational toolbox for solving step response MPC problems. We propose a novel MPC scheme that is able to incorporate step response data...... algorithm exhibits superior performance and produces solution times comparable to that achieved when using a condensing scheme for an equivalent (but much smaller) state-space model derived from first-principles. Implementation aspects necessary for high performance on embedded platforms are discussed...
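    For orientation, the "explicit prediction formulation" that the paper avoids is the classical dynamic matrix built from step-response coefficients. The sketch below shows that construction and the unconstrained control move for a hypothetical plant; it is not the condensed HPMPC scheme proposed in the paper.

```python
import numpy as np

# Generic dynamic-matrix (DMC-style) step-response MPC construction, for illustration only.
def dynamic_matrix(step_coeffs, horizon_p, horizon_m):
    """step_coeffs[i] = plant output i+1 samples after a unit input step."""
    A = np.zeros((horizon_p, horizon_m))
    for i in range(horizon_p):
        for j in range(horizon_m):
            if i >= j:
                A[i, j] = step_coeffs[i - j]
    return A

# step response of a stable first-order-like plant (hypothetical coefficients)
s = [1 - 0.8 ** (k + 1) for k in range(30)]
A = dynamic_matrix(s, horizon_p=10, horizon_m=3)

# unconstrained MPC: du = (A'A + lambda*I)^-1 A' * (reference - free response)
lam = 0.1
error = np.ones(10)                          # hypothetical future error vector
du = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ error)
print("first control move:", du[0])
```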

  7. Coupling of two non-processive myosin 5c dimers enables processive stepping along actin filaments.

    Science.gov (United States)

    Gunther, Laura K; Furuta, Ken'ya; Bao, Jianjun; Urbanowski, Monica K; Kojima, Hiroaki; White, Howard D; Sakamoto, Takeshi

    2014-05-09

    Myosin 5c (Myo5c) is a low duty ratio, non-processive motor unable to move continuously along actin filaments though it is believed to participate in secretory vesicle trafficking in vertebrate cells. Here, we measured the ATPase kinetics of Myo5c dimers and tested the possibility that the coupling of two Myo5c molecules enables processive movement. Steady-state ATPase activity and ADP dissociation kinetics demonstrated that a dimer of Myo5c-HMM (double-headed heavy meromyosin 5c) has a 6-fold lower Km for actin filaments than Myo5c-S1 (single-headed myosin 5c subfragment-1), indicating that the two heads of Myo5c-HMM increase F-actin-binding affinity. Nanometer-precision tracking analyses showed that two Myo5c-HMM dimers linked with each other via a DNA scaffold and moved processively along actin filaments. Moreover, the distance between the Myo5c molecules on the DNA scaffold is an important factor for the processive movement. Individual Myo5c molecules in two-dimer complexes move stochastically in 30-36 nm steps. These results demonstrate that two dimers of Myo5c molecules on a DNA scaffold increased the probability of rebinding to F-actin and enabled processive steps along actin filaments, which could be used for collective cargo transport in cells.

  8. Process modeling style

    CERN Document Server

    Long, John

    2014-01-01

    Process Modeling Style focuses on other aspects of process modeling beyond notation that are very important to practitioners. Many people who model processes focus on the specific notation used to create their drawings. While that is important, there are many other aspects to modeling, such as naming, creating identifiers, descriptions, interfaces, patterns, and creating useful process documentation. Experienced author John Long focuses on those non-notational aspects of modeling, which practitioners will find invaluable. Gives solid advice for creating roles, work produ

  9. Towards protein crystallization as a process step in downstream processing of therapeutic antibodies: screening and optimization at microbatch scale.

    Directory of Open Access Journals (Sweden)

    Yuguo Zang

    Full Text Available Crystallization conditions of an intact monoclonal IgG4 (immunoglobulin G, subclass 4) antibody were established in vapor diffusion mode by sparse matrix screening and subsequent optimization. The procedure was transferred to microbatch conditions and a phase diagram was built showing surprisingly low solubility of the antibody at equilibrium. With up-scaling to process scale in mind, purification efficiency of the crystallization step was investigated. Added model protein contaminants were excluded from the crystals to more than 95%. No measurable loss of Fc-binding activity was observed in the crystallized and redissolved antibody. Conditions could be adapted to crystallize the antibody directly from concentrated and diafiltrated cell culture supernatant, showing purification efficiency similar to that of Protein A chromatography. We conclude that crystallization has the potential to be included in downstream processing as a low-cost purification or formulation step.

  10. Theoretical intercomparison of multi-step direct reaction models and computational intercomparison of multi-step direct reaction models

    International Nuclear Information System (INIS)

    Koning, A.J.

    1992-08-01

    In recent years several statistical theories have been developed concerning multistep direct (MSD) nuclear reactions. In addition, dominant in applications is a whole class of semiclassical models that may be subsumed under the heading of 'generalized exciton models'. These are basically MSD-type extensions on top of compound-like concepts. In this report the relationship between their underlying statistical MSD postulates is highlighted. A common framework is outlined that makes it possible to generate the various MSD theories by assigning statistical properties to different parts of the nuclear Hamiltonian. It is then shown that distinct forms of nuclear randomness are embodied in the mentioned theories. All these theories appear to be very similar at a qualitative level. In order to explain the high-energy tails and forward-peaked angular distributions typical of particles emitted in MSD reactions, it is imagined that the incident continuum particle stepwise loses its energy and direction in a sequence of collisions, thereby creating new particle-hole pairs in the target system. At each step emission may take place. The statistical aspect comes in because many continuum states are involved in the process. These are supposed to display chaotic behavior, the associated randomness assumption giving rise to important simplifications in the expression for MSD emission cross sections. This picture suggests that the mentioned MSD models can be interpreted as variants of essentially one and the same theory. However, this appears not to be the case. To show this, the usual MSD distinction within the composite reacting nucleus, between the fast leading particle and the residual system, has to be made explicit: the nucleons of the residual core are to be distinguished from the leading particle and its residual interactions. This distinction will turn out to be crucial to the present analysis. 27 refs.; 5 figs.; 1 tab

  11. Industrial Process Identification and Control Design Step-test and Relay-experiment-based Methods

    CERN Document Server

    Liu, Tao

    2012-01-01

    Industrial Process Identification and Control Design is devoted to advanced identification and control methods for the operation of continuous-time processes, both with and without time delay, in industrial and chemical engineering practice. The simple and practical step- or relay-feedback test is employed when applying the proposed identification techniques, which are classified in terms of common industrial process type: open-loop stable, integrating, and unstable, respectively. Correspondingly, the control system design and tuning models that follow are presented for single-input-single-output processes. Furthermore, new two-degree-of-freedom control strategies and cascade control system design methods are explored with reference to independently improving set-point tracking and load disturbance rejection. Decoupling, multi-loop, and decentralized control techniques for the operation of multiple-input-multiple-output processes are also detailed. Perfect tracking of a desired output trajectory is realiz...
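    As background to what a step-test-based identification yields, the classical two-point (28.3%/63.2%) reading of an open-loop step response gives a first-order-plus-dead-time model. This is a textbook recipe, not one of the specific algorithms developed in the book; the synthetic response below uses assumed parameters.

```python
import numpy as np

# Two-point FOPDT identification from an open-loop step test (textbook method).
def fopdt_from_step(t, y, u_step):
    y0, y_inf = y[0], y[-1]
    K = (y_inf - y0) / u_step                      # process gain
    y283 = y0 + 0.283 * (y_inf - y0)
    y632 = y0 + 0.632 * (y_inf - y0)
    t283 = np.interp(y283, y, t)                   # assumes a monotonic response
    t632 = np.interp(y632, y, t)
    tau = 1.5 * (t632 - t283)                      # time constant
    theta = max(t632 - tau, 0.0)                   # dead time
    return K, tau, theta

# synthetic step response of a true FOPDT process (K=2, tau=5, theta=1; assumed values)
t = np.linspace(0, 40, 2001)
y = 2.0 * np.where(t > 1.0, 1 - np.exp(-(t - 1.0) / 5.0), 0.0)
print(fopdt_from_step(t, y, u_step=1.0))           # should recover roughly (2, 5, 1)
```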

  12. A Ten-Step Process for Developing Teaching Units

    Science.gov (United States)

    Butler, Geoffrey; Heslup, Simon; Kurth, Lara

    2015-01-01

    Curriculum design and implementation can be a daunting process. Questions quickly arise, such as who is qualified to design the curriculum and how do these people begin the design process. According to Graves (2008), in many contexts the design of the curriculum and the implementation of the curricular product are considered to be two mutually…

  13. Product and Process Modelling

    DEFF Research Database (Denmark)

    Cameron, Ian T.; Gani, Rafiqul

    This book covers the area of product and process modelling via a case study approach. It addresses a wide range of modelling applications with emphasis on modelling methodology and the subsequent in-depth analysis of mathematical models to gain insight via structural aspects of the models....... These approaches are put into the context of life cycle modelling, where multiscale and multiform modelling is increasingly prevalent in the 21st century. The book commences with a discussion of modern product and process modelling theory and practice followed by a series of case studies drawn from a variety...... to biotechnology applications, food, polymer and human health application areas. The book highlights the important nature of modern product and process modelling in the decision making processes across the life cycle. As such it provides an important resource for students, researchers and industrial practitioners....

  14. Standard Model processes

    CERN Document Server

    Mangano, M.L.; Aguilar-Saavedra, Juan Antonio; Alekhin, S.; Badger, S.; Bauer, C.W.; Becher, T.; Bertone, V.; Bonvini, M.; Boselli, S.; Bothmann, E.; Boughezal, R.; Cacciari, M.; Carloni Calame, C.M.; Caola, F.; Campbell, J.M.; Carrazza, S.; Chiesa, M.; Cieri, L.; Cimaglia, F.; Febres Cordero, F.; Ferrarese, P.; D'Enterria, D.; Ferrera, G.; Garcia i Tormo, X.; Garzelli, M.V.; Germann, E.; Hirschi, V.; Han, T.; Ita, H.; Jäger, B.; Kallweit, S.; Karlberg, A.; Kuttimalai, S.; Krauss, F.; Larkoski, A.J.; Lindert, J.; Luisoni, G.; Maierhöfer, P.; Mattelaer, O.; Martinez, H.; Moch, S.; Montagna, G.; Moretti, M.; Nason, P.; Nicrosini, O.; Oleari, C.; Pagani, D.; Papaefstathiou, A.; Petriello, F.; Piccinini, F.; Pierini, M.; Pierog, T.; Pozzorini, S.; Re, E.; Robens, T.; Rojo, J.; Ruiz, R.; Sakurai, K.; Salam, G.P.; Salfelder, L.; Schönherr, M.; Schulze, M.; Schumann, S.; Selvaggi, M.; Shivaji, A.; Siodmok, A.; Skands, P.; Torrielli, P.; Tramontano, F.; Tsinikos, I.; Tweedie, B.; Vicini, A.; Westhoff, S.; Zaro, M.; Zeppenfeld, D.; CERN. Geneva. ATS Department

    2017-06-22

    This report summarises the properties of Standard Model processes at the 100 TeV pp collider. We document the production rates and typical distributions for a number of benchmark Standard Model processes, and discuss new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  15. Mechanism and performance for adsorption of 2-chlorophenol onto zeolite with surfactant by one-step process from aqueous phase.

    Science.gov (United States)

    Peng, Sha; Tang, Zheng; Jiang, Wei; Wu, Di; Hong, Song; Xing, Baoshan

    2017-03-01

    To decrease the power, material, and time consumption in wastewater treatment, a one-step process was performed to remove 2-chlorophenol (2-CP) from aqueous phase using zeolite and cetyltrimethylammonium bromide (CTAB). Compared with the traditional two-step process, the one-step process used in this study achieved almost eight times higher 2-CP adsorption capacity within a shorter time and maintained high removal efficiencies (around 65%) in reuse tests, thus becoming an efficient and economically acceptable alternative process. For the one-step process, the kinetic data fitted well with a nonlinear pseudo-second-order model, and the isotherm data fitted well with the Dubinin-Astakhov (DA) model. The uptake of 2-CP was highly dependent on pH, increasing in the pH range of 3-6. The enhanced 2-CP removal in a one-step adsorption process can be explained by the larger amount of surfactant loading (≥0.056mmol/g), as determined from the total organic carbon (TOC) and zeta potential. Due to the formation of a loose CTAB bilayer, the hydrophobic partition and the interaction with the positively charged "head" of CTAB bilayers were decisive for the enhancement of pollutant adsorption. Therefore, organic pollutants could be removed from water alongside the synthesis of hydrophobic zeolite in a one-step process, which is a promising technology for the in-situ treatment of organic wastewater. Copyright © 2016. Published by Elsevier B.V.
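    The pseudo-second-order model cited above has the closed form q(t) = k2·qe²·t/(1 + k2·qe·t), which can be fitted directly in its nonlinear form. The sketch below does this for invented uptake data; qe and k2 are the fitted equilibrium capacity and rate constant.

```python
import numpy as np
from scipy.optimize import curve_fit

# Nonlinear fit of the pseudo-second-order kinetic model to hypothetical uptake data.
def pso(t, qe, k2):
    return k2 * qe**2 * t / (1 + k2 * qe * t)

t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)     # min
q = np.array([12.0, 18.5, 24.0, 28.0, 29.5, 30.5, 31.0])     # mg/g (hypothetical)

(qe_fit, k2_fit), _ = curve_fit(pso, t, q, p0=[q.max(), 0.01])
print(f"qe = {qe_fit:.1f} mg/g, k2 = {k2_fit:.4f} g/(mg*min)")
```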

  16. Modified two-step potential model: Heavy mesons | Sharma | JASSA ...

    African Journals Online (AJOL)

    Modified two-step potential model: Heavy mesons. L K Sharma, P K Jain, V R Mundembe. http://dx.doi.org/10.4314/jassa.v4i2.16898

  17. More steps towards process automation for optical fabrication

    Science.gov (United States)

    Walker, David; Yu, Guoyu; Beaucamp, Anthony; Bibby, Matt; Li, Hongyu; McCluskey, Lee; Petrovic, Sanja; Reynolds, Christina

    2017-06-01

    In the context of Industrie 4.0, we have previously described the roles of robots in optical processing, and their complementarity with classical CNC machines, providing both processing and automation functions. After having demonstrated robotic moving of parts between a CNC polisher and metrology station, and auto-fringe-acquisition, we have moved on to automate the wash-down operation. This is part of a wider strategy we describe in this paper, leading towards automating the decision-making operations required before and throughout an optical manufacturing cycle.

  18. STEPS: modeling and simulating complex reaction-diffusion systems with Python

    Directory of Open Access Journals (Sweden)

    Stefan Wils

    2009-06-01

    Full Text Available We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation and becoming increasingly complex as new features and new simulation algorithms were added. We solved all of these problems by tightly integrating STEPS with Python, using SWIG to expose our existing simulation code.

  19. Downstream processing of Isochrysis galbana: a step towards microalgal biorefinery

    NARCIS (Netherlands)

    Gilbert-López, B.; Mendiola, J.A.; Fontecha, J.; Broek, van den L.A.M.; Sijtsma, L.; Cifuentes, A.; Herrero, M.; Ibáñez, E.

    2015-01-01

    An algae-based biorefinery relies on the efficient use of algae biomass through its fractionation of several valuable/bioactive compounds that can be used in industry. If this biorefinery includes green platforms as downstream processing technologies able to fulfill the requirements of green

  20. SAR processing with stepped chirps and phased array antennas.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter

    2006-09-01

    Wideband radar signals are problematic for phased array antennas. Wideband radar signals can be generated from series or groups of narrow-band signals centered at different frequencies. An equivalent wideband LFM chirp can be assembled from lesser-bandwidth chirp segments in the data processing. The chirp segments can be transmitted as separate narrow-band pulses, each with their own steering phase operation. This overcomes the problematic dilemma of steering wideband chirps with phase shifters alone, that is, without true time-delay elements.

  1. FEA Simulation of Free-Bending - a Preforming Step in the Hydroforming Process Chain

    Science.gov (United States)

    Beulich, N.; Craighero, P.; Volk, W.

    2017-09-01

    High-strength steel and aluminum alloys are essential for developing innovative, lightweight space frame concepts. The intended design is built from car body parts with high geometrical complexity and reduced material thickness. Over the past few years, many complex car body parts have been produced using hydroforming. To increase the accuracy of hydroforming in relation to prospective car concepts, the virtual manufacturing of forming becomes more important. As a part of process digitalization, it is necessary to develop a simulation model for the hydroforming process chain. The preforming of longitudinally welded tubes is therefore implemented by the use of three-dimensional free-bending. This technique is able to reproduce complex deflection curves in combination with innovative low-thickness material design for hydroforming processes. As a first step towards the complete process simulation, this paper deals with the development of a finite element simulation model for the free-bending process with 6 degrees of freedom. A mandrel built from spherical segments connected by a steel rope is located inside the tube to prevent geometrical instability. Critical parameters for the result of the bending process are therefore evaluated and optimized. The simulation model is verified by surface measurements of a two-dimensional bending test.

  2. Gaussian Processes: the Next Step in Exoplanet Data Analysis

    Science.gov (United States)

    Aigrain, Suzanne; Gibson, N.; Roberts, S.; Evans, T.; McQuillan, A.; Reece, S.; Osborne, M.

    2011-09-01

    When searching for or characterising exoplanets, we typically need to isolate a deterministic signal from stochastic processes - astrophysical or instrumental "noise" - in time-series data. Gaussian processes (GPs) enable us to construct distributions over random functions, and to infer the properties of "signal" and "noise" in a way that is both flexible and robust. I will give a brief overview of the principles of GPs and show two example applications which are both interesting in their own right, and highlight some specific strengths of the technique. The first is a new re-analysis of the controversial HST/NICMOS transmission spectrum of HD189733b. The second is the measurement of stellar rotation periods from light curves, when the spot distribution evolves over the duration of the dataset. NB: I could also present another topic: stellar variability studies in Kepler data, based on a new systematics correction which preserves stellar variability. I opted for the GPs because I think it's important to alert the exoplanet community to the potential of this technique, but I'm happy to talk about either.
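    A minimal example of the idea, using scikit-learn's GP regressor to model a smooth correlated trend plus white noise in a synthetic light curve; the kernel choice and data are illustrative assumptions, not the analysis of HD 189733b described in the abstract.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

# Model correlated "noise" (systematics) with a GP on a synthetic time series.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 200)[:, None]
systematics = 0.5 * np.sin(0.7 * t).ravel()                 # smooth correlated trend
flux = systematics + rng.normal(0, 0.05, t.shape[0])        # plus white noise

kernel = ConstantKernel(1.0) * RBF(length_scale=2.0) + WhiteKernel(noise_level=0.05**2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t, flux)

mean, std = gp.predict(t, return_std=True)                  # posterior over the trend
print(gp.kernel_)                                           # optimized hyperparameters
```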

  3. An improved algorithm to convert CAD model to MCNP geometry model based on STEP file

    International Nuclear Information System (INIS)

    Zhou, Qingguo; Yang, Jiaming; Wu, Jiong; Tian, Yanshan; Wang, Junqiong; Jiang, Hai; Li, Kuan-Ching

    2015-01-01

    Highlights: • Fully exploits common features of cells, making the processing efficient. • Accurately provide the cell position. • Flexible to add new parameters in the structure. • Application of novel structure in INP file processing, conveniently evaluate cell location. - Abstract: MCNP (Monte Carlo N-Particle Transport Code) is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron, or coupled neutron/photon/electron transport. Its input file, the INP file, has the characteristics of complicated form and is error-prone when describing geometric models. Due to this, a conversion algorithm that can solve the problem by converting general geometric model to MCNP model during MCNP aided modeling is highly needed. In this paper, we revised and incorporated a number of improvements over our previous work (Yang et al., 2013), which was proposed and targeted after STEP file and INP file were analyzed. Results of experiments show that the revised algorithm is more applicable and efficient than previous work, with the optimized extraction of geometry and topology information of the STEP file, as well as the production efficiency of output INP file. This proposed research is promising, and serves as valuable reference for the majority of researchers involved with MCNP-related researches

  4. WWTP Process Tank Modelling

    DEFF Research Database (Denmark)

    Laursen, Jesper

    solution of the Navier-Stokes equations in a multiphase scheme. After a general introduction to the activated sludge tank as a system, the activated sludge tank model is gradually set up in separate stages. The individual sub-processes that are often occurring in activated sludge tanks are initially......-process models, the last part of the thesis, where the integrated process tank model is tested on three examples of activated sludge systems, is initiated. The three case studies are introduced with an increasing degree of model complexity. All three cases are based on Danish municipal wastewater treatment...... plants. The first case study involves the modeling of an activated sludge tank undergoing a special control strategy with the intention of minimizing the sludge loading on the subsequent secondary settlers during storm events. The applied model is a two-phase model, where the sedimentation of sludge...

  5. Quantummechanical multi-step direct models for nuclear data applications

    International Nuclear Information System (INIS)

    Koning, A.J.

    1992-10-01

    Various multi-step direct models have been derived and compared on a theoretical level. Subsequently, these models have been implemented in the computer code system KAPSIES, enabling a consistent comparison on the basis of the same set of nuclear parameters and same set of numerical techniques. Continuum cross sections in the energy region between 10 and several hundreds of MeV have successfully been analysed. Both angular distributions and energy spectra can be predicted in an essentially parameter-free manner. It is demonstrated that the quantum-mechanical MSD models (in particular the FKK model) give an improved prediction of pre-equilibrium angular distributions as compared to the experiment-based systematics of Kalbach. This makes KAPSIES a reliable tool for nuclear data applications in the afore-mentioned energy region. (author). 10 refs., 2 figs

  6. Two-step processing of oil shale to linear hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Eliseev, O.L.; Ryzhov, A.N.; Latypova, D.Zh.; Lapidus, A.L. [Russian Academy of Sciences, Moscow (Russian Federation). N.D. Zelinsky Institute of Organic Chemistry; Avakyan, T.A. [Gubkin Russian State University of Oil and Gas, Moscow (Russian Federation)

    2013-11-01

    Thermal and catalytic steam reforming of oil shale mined from the Leningrad and Kashpir deposits was studied. Experiments were performed in a fixed bed reactor by varying the temperature and steam flow rate. The data obtained were approximated by empirical formulas whose parameters were calculated by the least-squares method. Thus, the amounts of hydrogen, carbon monoxide and methane in the producer gas can be predicted for a given kind of oil shale, temperature and steam flow rate. Adding a Ni catalyst enriches the hydrogen and depletes the CO content in the effluent gas at low gasification temperatures. A model gas simulating the steam reforming gases (a H2, CO, CO2 and N2 mixture) was tested in hydrocarbon synthesis over a Co-containing supported catalyst. The selectivity of CO conversion into C5+ hydrocarbons reaches 84%, while the selectivity to methane is 7%. The molecular weight distribution of the synthesized alkanes obeys the Anderson-Schulz-Flory equation with a chain growth probability of 0.84. (orig.)
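    The quoted Anderson-Schulz-Flory statistics are fully determined by the chain-growth probability. A short restatement for alpha = 0.84 (the figure given in the abstract):

```python
# Anderson-Schulz-Flory mass-fraction distribution, W_n = n*(1-alpha)^2*alpha^(n-1).
alpha = 0.84

def asf_mass_fraction(n, alpha=alpha):
    return n * (1 - alpha) ** 2 * alpha ** (n - 1)

w_c5_plus = sum(asf_mass_fraction(n) for n in range(5, 200))   # tail beyond C200 is negligible
print(f"W(C1) = {asf_mass_fraction(1):.3f}, predicted C5+ mass fraction = {w_c5_plus:.2f}")
```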

  7. Calculation of the MSD two-step process with the sudden approximation

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Shiro [Tohoku Univ., Sendai (Japan). Dept. of Physics; Kawano, Toshihiko [Kyushu Univ., Advanced Energy Engineering Science, Kasuga, Fukuoka (Japan)

    2000-03-01

    A calculation of the two-step process with the sudden approximation is described. The Green's function which connects the one-step matrix element to the two-step one is represented in γ-space to avoid the on-energy-shell approximation. Microscopically calculated two-step cross sections are averaged together with an appropriate level density to give a two-step cross section. The calculated cross sections are compared with the experimental data; however, the calculation still contains several simplifications at this moment. (author)

  8. The method validation step of biological dosimetry accreditation process

    International Nuclear Information System (INIS)

    Roy, L.; Voisin, P.A.; Guillou, A.C.; Busset, A.; Gregoire, E.; Buard, V.; Delbos, M.; Voisin, Ph.

    2006-01-01

    One of the missions of the Laboratory of Biological Dosimetry (L.D.B.) of the Institute for Radiation and Nuclear Safety (I.R.S.N.) is to assess the radiological dose after a suspected accidental overexposure to ionising radiation, by using radio-induced changes of some biological parameters. The 'gold standard' is the yield of dicentrics observed in the patient's lymphocytes, and this yield is converted into dose using dose-effect relationships. This method is complementary to clinical and physical dosimetry for the medical team in charge of the patients. To obtain formal recognition of its operational activity, the laboratory decided three years ago to seek accreditation, by following the recommendations of both ISO/IEC 17025 (General Requirements for the Competence of Testing and Calibration Laboratories) and ISO 19238 (Performance criteria for service laboratories performing biological dosimetry by cytogenetics). Diagnostic and risk analyses were carried out to control the whole analysis process, leading to the writing of documents. Purchasing, the personnel department and vocational training were also included in the quality system. Audits were very helpful in improving the quality system. One specificity of this technique is that it is not standardised; therefore, apart from quality management aspects, several technical points needed validation. An inventory of potentially influential factors was carried out. To estimate their real effect on the yield of dicentrics, a Plackett-Burman experimental design was conducted. The effect of seven parameters was tested: the BUdr (bromodeoxyuridine), PHA (phytohemagglutinin) and colcemid concentrations, the culture duration, the incubator temperature, the blood volume and the medium volume. The chosen values were calculated according to the uncertainties on the way they were measured, i.e. pipettes, thermometers, test tubes. None of the factors has a significant impact on the yield of dicentrics. Therefore the uncertainty linked to their use was considered as
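    For reference, an 8-run Plackett-Burman design accommodates exactly seven two-level factors such as those listed above. The sketch below builds one from a commonly tabulated seed row; the actual factor levels used by the laboratory are not reproduced here, and the seed row should be checked against a design table before use.

```python
import numpy as np

# 8-run Plackett-Burman screening design for seven two-level factors (illustrative).
seed = [1, 1, 1, -1, 1, -1, -1]                      # one commonly tabulated seed row
rows = [np.roll(seed, k) for k in range(7)]          # cyclic shifts give runs 1-7
design = np.vstack(rows + [np.full(7, -1)])          # final run at the low level of every factor

factors = ["BUdr", "PHA", "colcemid", "culture time",
           "incubator T", "blood volume", "medium volume"]
print("   " + "  ".join(f"{f[:7]:>7}" for f in factors))
for i, run in enumerate(design, 1):
    print(f"{i:2d} " + "  ".join(f"{int(v):>7d}" for v in run))
```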

  9. Process Mining: A Two-Step Approach to Balance Between Underfitting and Overfitting

    DEFF Research Database (Denmark)

    van der Aalst, W.M.P.; Rubin, V.; Verbeek, H.M.W.

    behavior. At best, one has seen a representative subset. Therefore, classical synthesis techniques are not suitable as they aim at finding a model that is able to exactly reproduce the log. Existing process mining techniques try to avoid such "overfitting" by generalizing the model to allow for more...... support for it). None of the existing techniques enables the user to control the balance between "overfitting" and "underfitting". To address this, we propose a two-step approach. First, using a configurable approach, a transition system is constructed. Then, using the "theory of regions", the model...... is synthesized. The approach has been implemented in the context of ProM and overcomes many of the limitations of traditional approaches....

  10. Diagnostic and Prognostic Models for Generator Step-Up Transformers

    Energy Technology Data Exchange (ETDEWEB)

    Vivek Agarwal; Nancy J. Lybeck; Binh T. Pham

    2014-09-01

    In 2014, the online monitoring (OLM) of active components project under the Light Water Reactor Sustainability program at Idaho National Laboratory (INL) focused on diagnostic and prognostic capabilities for generator step-up transformers (GSUs). INL worked with subject matter experts from the Electric Power Research Institute (EPRI) to augment and revise the GSU fault signatures previously implemented in EPRI's Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. Two prognostic models were identified and implemented for GSUs in the FW-PHM Suite software, and INL and EPRI demonstrated the use of prognostic capabilities for GSUs. The complete set of fault signatures developed for GSUs in the Asset Fault Signature Database of the FW-PHM Suite is presented in this report. Two prognostic models are described for paper insulation: the Chendong model for degree of polymerization, and an IEEE model that uses a loading profile to calculate life consumption based on hot spot winding temperatures. Both models are life consumption models, which are examples of type II prognostic models. Use of the models in the FW-PHM Suite was successfully demonstrated at the August 2014 Utility Working Group Meeting, Idaho Falls, Idaho, to representatives from different utilities, EPRI, and the Halden Research Project.
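    As an illustration of what a loading-profile life-consumption model does, the sketch below applies a hot-spot-driven aging-acceleration factor of the IEEE C57.91 type to an invented daily temperature profile; the reference constants and the assumed normal insulation life are stated in the code and should be checked against the standard actually used.

```python
import math

# Hot-spot-driven aging acceleration and life consumption (illustrative values only).
def aging_acceleration(theta_hotspot_c, theta_ref_c=110.0, b=15000.0):
    return math.exp(b / (theta_ref_c + 273.0) - b / (theta_hotspot_c + 273.0))

profile = [(95.0, 8), (110.0, 10), (120.0, 6)]        # (hot-spot temp in C, hours) - hypothetical
hours = sum(h for _, h in profile)
f_eqa = sum(aging_acceleration(temp) * h for temp, h in profile) / hours

normal_life_h = 180_000                               # assumed normal insulation life
daily_consumption_pct = f_eqa * hours / normal_life_h * 100
print(f"equivalent aging factor = {f_eqa:.2f}, life consumed per day = {daily_consumption_pct:.4f}%")
```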

  11. Biodiesel production from microalgae Spirulina maxima by two step process: Optimization of process variable

    Directory of Open Access Journals (Sweden)

    M.A. Rahman

    2017-04-01

    Full Text Available Biodiesel from green energy sources is gaining tremendous attention for its eco-friendly and economical aspects. In this investigation, a two-step process was developed for the production of biodiesel from the microalga Spirulina maxima, and the best operating conditions for both steps were determined. In the first stage, acid esterification was conducted to lessen the acid value (AV) of the feedstock from 10.66 to 0.51 mg KOH/g; the optimal conditions for maximum esterified oil yield were found at a molar ratio of 12:1, a temperature of 60°C, 1 wt% H2SO4, and a mixing intensity of 400 rpm for a reaction time of 90 min. The second-stage alkali transesterification was carried out for maximum biodiesel yield (86.1%), with optimal conditions at a molar ratio of 9:1, a temperature of 65°C, a mixing intensity of 600 rpm, and a catalyst concentration of 0.75 wt% KOH for a reaction time of 20 min. The biodiesel was analyzed according to ASTM standards and the results were within the standard limits. These results will be helpful for producing third-generation algal biodiesel from the microalga Spirulina maxima in an efficient manner.

  12. The importance of time-stepping errors in ocean models

    Science.gov (United States)

    Williams, P. D.

    2011-12-01

    Many ocean models use leapfrog time stepping. The Robert-Asselin (RA) filter is usually applied after each leapfrog step, to control the computational mode. However, it will be shown in this presentation that the RA filter generates very large amounts of numerical diapycnal mixing. In some ocean models, the numerical diapycnal mixing from the RA filter is as large as the physical diapycnal mixing. This lowers our confidence in the fidelity of the simulations. In addition to the above problem, the RA filter also damps the physical solution and degrades the numerical accuracy. These two concomitant problems occur because the RA filter does not conserve the mean state, averaged over the three time slices on which it operates. The presenter has recently proposed a simple modification to the RA filter, which does conserve the three-time-level mean state. The modified filter has become known as the Robert-Asselin-Williams (RAW) filter. When used in conjunction with the leapfrog scheme, the RAW filter eliminates the numerical damping of the physical solution and increases the amplitude accuracy by two orders, yielding third-order accuracy. The phase accuracy is unaffected and remains second-order. The RAW filter can easily be incorporated into existing models of the ocean, typically via the insertion of just a single line of code. Better simulations are obtained, at almost no additional computational expense. Results will be shown from recent implementations of the RAW filter in various ocean models. For example, in the UK Met Office Hadley Centre ocean model, sea-surface temperature and sea-ice biases in the North Atlantic Ocean are found to be reduced. These improvements are encouraging for the use of the RAW filter in other ocean models.
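    A schematic reconstruction of the two filters on a simple oscillator makes the amplitude effect easy to see; this is illustrative code, not an excerpt from any ocean model, and the filter parameters are typical textbook values.

```python
import numpy as np

# Leapfrog integration of dx/dt = i*omega*x with the Robert-Asselin (RA) filter
# and the Robert-Asselin-Williams (RAW) variant (alpha = 1 recovers plain RA here).
omega, dt, nu, alpha = 1.0, 0.2, 0.1, 0.53

def leapfrog(nsteps, use_raw):
    x = np.zeros(nsteps + 1, dtype=complex)
    x[0] = 1.0
    x[1] = x[0] * np.exp(1j * omega * dt)            # exact first step
    for n in range(1, nsteps):
        x[n + 1] = x[n - 1] + 2 * dt * 1j * omega * x[n]
        d = 0.5 * nu * (x[n - 1] - 2 * x[n] + x[n + 1])   # filter displacement
        if use_raw:                                   # RAW: split correction, mean-conserving
            x[n] += alpha * d
            x[n + 1] -= (1 - alpha) * d
        else:                                         # classical RA filter
            x[n] += d
    return x

for label, sol in [("RA ", leapfrog(500, False)), ("RAW", leapfrog(500, True))]:
    print(label, "amplitude after 100 time units:", abs(sol[-1]))   # exact answer is 1.0
```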

  13. Model Process Control Language

    Data.gov (United States)

    National Aeronautics and Space Administration — The MPC (Model Process Control) language enables the capture, communication and preservation of a simulation instance, with sufficient detail that it can be...

  14. Business Model Process Configurations

    DEFF Research Database (Denmark)

    Taran, Yariv; Nielsen, Christian; Thomsen, Peter

    2015-01-01

    strategic preference, as part of their business model innovation activity planned. Practical implications – This paper aimed at strengthening researchers and, particularly, practitioner’s perspectives into the field of business model process configurations. By insuring an [abstracted] alignment between......Purpose – The paper aims: 1) To develop systematically a structural list of various business model process configuration and to group (deductively) these selected configurations in a structured typological categorization list. 2) To facilitate companies in the process of BM innovation......, by developing (inductively) an ontological classification framework, in view of the BM process configurations typology developed. Design/methodology/approach – Given the inconsistencies found in the business model studies (e.g. definitions, configurations, classifications) we adopted the analytical induction...

  15. Biosphere Process Model Report

    Energy Technology Data Exchange (ETDEWEB)

    J. Schmitt

    2000-05-25

    To evaluate the postclosure performance of a potential monitored geologic repository at Yucca Mountain, a Total System Performance Assessment (TSPA) will be conducted. Nine Process Model Reports (PMRs), including this document, are being developed to summarize the technical basis for each of the process models supporting the TSPA model. These reports cover the following areas: (1) Integrated Site Model; (2) Unsaturated Zone Flow and Transport; (3) Near Field Environment; (4) Engineered Barrier System Degradation, Flow, and Transport; (5) Waste Package Degradation; (6) Waste Form Degradation; (7) Saturated Zone Flow and Transport; (8) Biosphere; and (9) Disruptive Events. Analysis/Model Reports (AMRs) contain the more detailed technical information used to support TSPA and the PMRs. The AMRs consists of data, analyses, models, software, and supporting documentation that will be used to defend the applicability of each process model for evaluating the postclosure performance of the potential Yucca Mountain repository system. This documentation will ensure the traceability of information from its source through its ultimate use in the TSPA-Site Recommendation (SR) and in the National Environmental Policy Act (NEPA) analysis processes. The objective of the Biosphere PMR is to summarize (1) the development of the biosphere model, and (2) the Biosphere Dose Conversion Factors (BDCFs) developed for use in TSPA. The Biosphere PMR does not present or summarize estimates of potential radiation doses to human receptors. Dose calculations are performed as part of TSPA and will be presented in the TSPA documentation. The biosphere model is a component of the process to evaluate postclosure repository performance and regulatory compliance for a potential monitored geologic repository at Yucca Mountain, Nevada. The biosphere model describes those exposure pathways in the biosphere by which radionuclides released from a potential repository could reach a human receptor

  16. Biosphere Process Model Report

    International Nuclear Information System (INIS)

    Schmitt, J.

    2000-01-01

    To evaluate the postclosure performance of a potential monitored geologic repository at Yucca Mountain, a Total System Performance Assessment (TSPA) will be conducted. Nine Process Model Reports (PMRs), including this document, are being developed to summarize the technical basis for each of the process models supporting the TSPA model. These reports cover the following areas: (1) Integrated Site Model; (2) Unsaturated Zone Flow and Transport; (3) Near Field Environment; (4) Engineered Barrier System Degradation, Flow, and Transport; (5) Waste Package Degradation; (6) Waste Form Degradation; (7) Saturated Zone Flow and Transport; (8) Biosphere; and (9) Disruptive Events. Analysis/Model Reports (AMRs) contain the more detailed technical information used to support TSPA and the PMRs. The AMRs consists of data, analyses, models, software, and supporting documentation that will be used to defend the applicability of each process model for evaluating the postclosure performance of the potential Yucca Mountain repository system. This documentation will ensure the traceability of information from its source through its ultimate use in the TSPA-Site Recommendation (SR) and in the National Environmental Policy Act (NEPA) analysis processes. The objective of the Biosphere PMR is to summarize (1) the development of the biosphere model, and (2) the Biosphere Dose Conversion Factors (BDCFs) developed for use in TSPA. The Biosphere PMR does not present or summarize estimates of potential radiation doses to human receptors. Dose calculations are performed as part of TSPA and will be presented in the TSPA documentation. The biosphere model is a component of the process to evaluate postclosure repository performance and regulatory compliance for a potential monitored geologic repository at Yucca Mountain, Nevada. The biosphere model describes those exposure pathways in the biosphere by which radionuclides released from a potential repository could reach a human receptor

  17. Transient kinetics and rate limiting steps for the processive cellobiohydrolase Cel7A

    DEFF Research Database (Denmark)

    Cruys-Bagger, Nicolaj; Hirosuke, Tatsumi; Robin Ren, Guilin

    2013-01-01

    Cellobiohydrolases are exo-acting, processive enzymes, which effectively hydrolyze crystalline cellulose. They have attracted considerable interest due to their role both in natural carbon cycling and in industrial enzyme cocktails used for the deconstruction of cellulosic biomass, but many...... mechanistic and regulatory aspects of their heterogeneous catalysis remain poorly understood. Here we address this by applying a deterministic model to real-time kinetic data with high temporal resolution. We used two variants of the cellobiohydrolase Cel7A from H. jecorina, and three types of cellulose...... as substrate. Analysis of the pre-steady state regime allowed delineation of rate constants for both fast and slow steps in the enzymatic cycle and assessment of how these constants influenced the rate of hydrolysis at quasi-steady state. Processive movement on the cellulose strand advanced with characteristic...

  18. Global seismic inversion as the next standard step in the processing sequence

    Energy Technology Data Exchange (ETDEWEB)

    Maver, Kim G.; Hansen, Lars S.; Jepsen, Anne-Marie; Rasmussen, Klaus B.

    1998-12-31

    Seismic inversion of post-stack seismic data has until recently been regarded as a reservoir-oriented method, since the standard inversion techniques rely on extensive well control and a detailed user-derived input model. Most seismic inversion techniques further require a stable wavelet. As a consequence, seismic inversion is mainly utilised in mature areas, focusing on specific zones only after the seismic data has been interpreted and is well understood. By using an advanced 3-D global technique, seismic inversion is presented as the next standard step in the processing sequence. The technique is robust towards noise within the seismic data, utilizes a time-variant wavelet, and derives a low-frequency model utilizing the stacking velocities and only limited well control. 4 figs.

  19. 42 CFR 50.406 - What are the steps in the process?

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false What are the steps in the process? 50.406 Section 50.406 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES GRANTS POLICIES OF GENERAL APPLICABILITY Public Health Service Grant Appeals Procedure § 50.406 What are the steps in...

  20. Foam process models.

    Energy Technology Data Exchange (ETDEWEB)

    Moffat, Harry K.; Noble, David R.; Baer, Thomas A. (Procter & Gamble Co., West Chester, OH); Adolf, Douglas Brian; Rao, Rekha Ranjana; Mondy, Lisa Ann

    2008-09-01

    In this report, we summarize our work on developing a production level foam processing computational model suitable for predicting the self-expansion of foam in complex geometries. The model is based on a finite element representation of the equations of motion, with the movement of the free surface represented using the level set method, and has been implemented in SIERRA/ARIA. An empirically based time- and temperature-dependent density model is used to encapsulate the complex physics of foam nucleation and growth in a numerically tractable model. The change in density with time is at the heart of the foam self-expansion as it creates the motion of the foam. This continuum-level model uses an homogenized description of foam, which does not include the gas explicitly. Results from the model are compared to temperature-instrumented flow visualization experiments giving the location of the foam front as a function of time for our EFAR model system.

  1. Hydrothermal decomposition of industrial jarosite in alkaline media: The rate determining step of the process kinetics

    Directory of Open Access Journals (Sweden)

    González-Ibarra A.A.

    2016-01-01

    Full Text Available This work examines the role of NaOH and Ca(OH)2 in the hydrothermal decomposition of industrial jarosite deposited by a Mexican company in a tailings dam. The industrial jarosite is mainly composed of natrojarosite and contains 150 g Ag/t, showing a narrow particle size distribution, as revealed by XRD, fire assay, SEM-EDS and laser-diffraction analysis. The effect of pH, when using NaOH or Ca(OH)2 as the alkalinizing agent, was studied by carrying out decomposition experiments at different pH values and 60°C in a homogeneous particle size system (pH = 8, 9, 10 and 11) and in a heterogeneous particle size system (pH = 11). Also, the kinetics of the process and the controlling step of the decomposition reaction when NaOH and Ca(OH)2 are used were determined by fitting the data obtained to the shrinking core model for spherical particles of constant size. These results, supported by chemical (EDS), morphological (SEM) and elemental mapping (EDS) analyses of a partially reacted jarosite particle, allowed us to conclude that when NaOH is used, the process kinetics is controlled by the chemical reaction, and when Ca(OH)2 is used, the rate-determining step changes to diffusion control through a layer of solid products.
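    The two limiting conversion-time expressions of the shrinking-core model (constant particle size) that underlie this diagnosis are easy to apply: whichever expression is linear in time identifies the rate-determining step. The sketch below demonstrates the check on synthetic, reaction-controlled data.

```python
import numpy as np

# Shrinking-core model diagnostics for spherical particles of constant size.
def g_reaction(X):      # surface chemical reaction control: g(X) = 1 - (1-X)^(1/3)
    return 1 - (1 - X) ** (1 / 3)

def g_diffusion(X):     # diffusion through the product layer: g(X) = 1 - 3(1-X)^(2/3) + 2(1-X)
    return 1 - 3 * (1 - X) ** (2 / 3) + 2 * (1 - X)

t = np.linspace(0, 100, 11)                    # min
X = 1 - (1 - t / 120) ** 3                     # synthetic, reaction-controlled conversion data
X = np.clip(X, 0, 0.999)

for name, g in [("reaction", g_reaction), ("diffusion", g_diffusion)]:
    r = np.corrcoef(t, g(X))[0, 1]
    print(f"{name:9s} control: linearity of g(X) vs t, r = {r:.4f}")
```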

  2. On different results for new three step iteration process in Banach spaces.

    Science.gov (United States)

    Ullah, Kifayat; Arshad, Muhammad

    2016-01-01

    In this paper we propose a new iteration process, called AK iteration process, for approximation of fixed points for contraction mappings. We show that our iteration process is faster than the leading Vatan Two-step iteration process for contraction mappings. Numerical examples are given to support the analytic proofs. Stability of AK iteration process and data dependence result for contraction mappings by employing AK iteration process are also discussed.

  3. A multi-step electrochemical etching process for a three-dimensional micro probe array

    International Nuclear Information System (INIS)

    Kim, Yoonji; Youn, Sechan; Cho, Young-Ho; Park, HoJoon; Chang, Byeung Gyu; Oh, Yong Soo

    2011-01-01

We present a simple, fast, and cost-effective process for three-dimensional (3D) micro probe array fabrication using multi-step electrochemical metal foil etching. Compared to the previous electroplating (add-on) process, the present electrochemical (subtractive) process results in well-controlled material properties of the metallic microstructures. In the experimental study, we describe the single-step and multi-step electrochemical aluminum foil etching processes. In the single-step process, the depth etch rate and the bias etch rate of an aluminum foil have been measured as 1.50 ± 0.10 and 0.77 ± 0.03 µm min⁻¹, respectively. On the basis of the single-step process results, we have designed and performed the two-step electrochemical etching process for the 3D micro probe array fabrication. The fabricated 3D micro probe array shows vertical and lateral fabrication errors of 15.5 ± 5.8% and 3.3 ± 0.9%, respectively, with a surface roughness of 37.4 ± 9.6 nm. The contact force and the contact resistance of the 3D micro probe array have been measured to be 24.30 ± 0.98 mN and 2.27 ± 0.11 Ω, respectively, for an overdrive of 49.12 ± 1.25 µm.

  4. Women's steps of change and entry into drug abuse treatment. A multidimensional stages of change model.

    Science.gov (United States)

    Brown, V B; Melchior, L A; Panter, A T; Slaughter, R; Huba, G J

    2000-04-01

The Transtheoretical, or Stages of Change, Model has been applied to the investigation of help-seeking related to a number of addictive behaviors. Overall, the model has been shown to be important in understanding the process of help-seeking. However, substance abuse rarely exists in isolation from other health, mental health, and social problems. The present work extends the original Stages of Change Model by proposing "Steps of Change" as they relate to entry into substance abuse treatment programs for women. Readiness to make life changes in four domains (domestic violence, HIV sexual risk behavior, substance abuse, and mental health) is examined in relation to entry into four substance abuse treatment modalities (12-step, detoxification, outpatient, and residential). The Steps of Change Model hypothesizes that the help-seeking behavior of substance-abusing women may reflect a hierarchy of readiness based on the immediacy, or time urgency, of their treatment issues. For example, women in battering relationships may be ready to make changes to reduce their exposure to violence before admitting readiness to seek substance abuse treatment. The Steps of Change Model was examined in a sample of 451 women contacted through a substance abuse treatment-readiness program in Los Angeles, California. A series of logistic regression analyses predicts entry into each of the four treatment modalities. Results suggest a multidimensional Stages of Change Model that may extend to other populations and to other types of help-seeking behaviors.

  5. One step process for producing dense aluminum nitride and composites thereof

    Science.gov (United States)

    Holt, J. Birch; Kingman, Donald D.; Bianchini, Gregory M.

    1989-01-01

    A one step combustion process for the synthesis of dense aluminum nitride compositions is disclosed. The process comprises igniting pure aluminum powder in a nitrogen atmosphere at a pressure of about 1000 atmospheres or higher. The process enables the production of aluminum nitride bodies to be formed directly in a mold of any desired shape.

  6. Two-Step Estimation of Models Between Latent Classes and External Variables.

    Science.gov (United States)

    Bakk, Zsuzsa; Kuha, Jouni

    2017-11-17

    We consider models which combine latent class measurement models for categorical latent variables with structural regression models for the relationships between the latent classes and observed explanatory and response variables. We propose a two-step method of estimating such models. In its first step, the measurement model is estimated alone, and in the second step the parameters of this measurement model are held fixed when the structural model is estimated. Simulation studies and applied examples suggest that the two-step method is an attractive alternative to existing one-step and three-step methods. We derive estimated standard errors for the two-step estimates of the structural model which account for the uncertainty from both steps of the estimation, and show how the method can be implemented in existing software for latent variable modelling.
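
    A minimal sketch of the two-step idea, under simplifying assumptions (two latent classes, four binary indicators, one covariate, simulated illustrative data, and no correction of standard errors, which is the part the paper derives): step 1 estimates the measurement model alone by EM, and step 2 maximizes the likelihood over the structural logit coefficients with the measurement parameters held fixed.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# --- Simulated illustrative data (not from the paper) -------------------
n, J = 2000, 4
x = rng.normal(size=n)                          # observed covariate
c = (rng.random(n) < expit(-0.5 + 1.0 * x)).astype(int)   # latent class
pi_true = np.array([[0.2, 0.8]] * J)            # P(Y_j = 1 | class)
y = (rng.random((n, J)) < pi_true[:, c].T).astype(float)

# --- Step 1: estimate the measurement model alone by EM -----------------
def em_measurement(y, n_iter=200):
    pi = np.array([[0.3, 0.7]] * y.shape[1])    # start values
    w = np.array([0.5, 0.5])                    # class proportions
    for _ in range(n_iter):
        log_lik = (y[:, :, None] * np.log(pi) +
                   (1 - y[:, :, None]) * np.log(1 - pi)).sum(axis=1)
        post = w * np.exp(log_lik)              # E-step: class posteriors
        post /= post.sum(axis=1, keepdims=True)
        w = post.mean(axis=0)                   # M-step
        pi = (post[:, None, :] * y[:, :, None]).sum(axis=0) / post.sum(axis=0)
    return pi

pi_hat = em_measurement(y)

# --- Step 2: structural logit with measurement parameters held fixed ----
def neg_log_lik(theta):
    a, b = theta
    prior = np.column_stack([1 - expit(a + b * x), expit(a + b * x)])
    log_meas = (y[:, :, None] * np.log(pi_hat) +
                (1 - y[:, :, None]) * np.log(1 - pi_hat)).sum(axis=1)
    return -np.log((prior * np.exp(log_meas)).sum(axis=1)).sum()

res = minimize(neg_log_lik, x0=[0.0, 0.0], method="BFGS")
print("step-2 structural estimates (a, b):", res.x)
```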

  7. Design of Sensor Data Processing Steps in an Air Pollution Monitoring System

    Directory of Open Access Journals (Sweden)

    Kwang Woo Nam

    2011-11-01

Environmental monitoring is required to understand the effects of various kinds of phenomena such as a flood, a typhoon, or a forest fire. To detect the environmental conditions in remote places, monitoring applications employ the sensor networks to detect conditions, context models to understand phenomena, and computing technology to process the large volumes of data. In this paper, we present an air pollution monitoring system to provide alarm messages about potentially dangerous areas with sensor data analysis. We design the data analysis steps to understand the detected air pollution regions and levels. The analyzed data is used to track the pollution and to give an alarm. This implemented monitoring system is used to mitigate the damages caused by air pollution.

  8. Design of sensor data processing steps in an air pollution monitoring system.

    Science.gov (United States)

    Jung, Young Jin; Lee, Yang Koo; Lee, Dong Gyu; Lee, Yongmi; Nittel, Silvia; Beard, Kate; Nam, Kwang Woo; Ryu, Keun Ho

    2011-01-01

    Environmental monitoring is required to understand the effects of various kinds of phenomena such as a flood, a typhoon, or a forest fire. To detect the environmental conditions in remote places, monitoring applications employ the sensor networks to detect conditions, context models to understand phenomena, and computing technology to process the large volumes of data. In this paper, we present an air pollution monitoring system to provide alarm messages about potentially dangerous areas with sensor data analysis. We design the data analysis steps to understand the detected air pollution regions and levels. The analyzed data is used to track the pollution and to give an alarm. This implemented monitoring system is used to mitigate the damages caused by air pollution.
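
    The concrete data-analysis steps are not spelled out in the record; the sketch below only illustrates the general pattern described (classify each sensor reading against pollution-level thresholds, then issue alarm messages for dangerous readings). The thresholds, the Reading class, and all field names are assumptions, not the system's actual design.

```python
from dataclasses import dataclass

# Hypothetical PM10 thresholds in µg/m^3 (placeholders, not the paper's values)
PM10_LEVELS = [(50, "good"), (100, "moderate"), (150, "unhealthy")]

@dataclass
class Reading:
    sensor_id: str
    location: tuple       # (lat, lon)
    pm10: float           # µg/m^3

def classify(pm10: float) -> str:
    """Map a raw PM10 value to a pollution level."""
    for limit, label in PM10_LEVELS:
        if pm10 <= limit:
            return label
    return "hazardous"

def alarms(readings, alert_levels=("unhealthy", "hazardous")):
    """Step-wise processing: classify each reading, keep the dangerous ones."""
    classified = [(r, classify(r.pm10)) for r in readings]
    return [f"ALERT {r.sensor_id} at {r.location}: PM10={r.pm10} ({lvl})"
            for r, lvl in classified if lvl in alert_levels]

readings = [Reading("s1", (36.35, 127.38), 42.0),
            Reading("s2", (36.36, 127.40), 160.0)]
print("\n".join(alarms(readings)))
```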

  9. Coupling of Spinosad Fermentation and Separation Process via Two-Step Macroporous Resin Adsorption Method.

    Science.gov (United States)

    Zhao, Fanglong; Zhang, Chuanbo; Yin, Jing; Shen, Yueqi; Lu, Wenyu

    2015-08-01

    In this paper, a two-step resin adsorption technology was investigated for spinosad production and separation as follows: the first step resin addition into the fermentor at early cultivation period to decrease the timely product concentration in the broth; the second step of resin addition was used after fermentation to adsorb and extract the spinosad. Based on this, a two-step macroporous resin adsorption-membrane separation process for spinosad fermentation, separation, and purification was established. Spinosad concentration in 5-L fermentor increased by 14.45 % after adding 50 g/L macroporous at the beginning of fermentation. The established two-step macroporous resin adsorption-membrane separation process got the 95.43 % purity and 87 % yield for spinosad, which were both higher than that of the conventional crystallization of spinosad from aqueous phase that were 93.23 and 79.15 % separately. The two-step macroporous resin adsorption method has not only carried out the coupling of spinosad fermentation and separation but also increased spinosad productivity. In addition, the two-step macroporous resin adsorption-membrane separation process performs better in spinosad yield and purity.

  10. Two-step flash light sintering process for crack-free inkjet-printed Ag films

    International Nuclear Information System (INIS)

    Park, Sung-Hyeon; Kim, Hak-Sung; Jang, Shin; Lee, Dong-Jun; Oh, Jehoon

    2013-01-01

    In this paper, a two-step flash light sintering process for inkjet-printed Ag films is investigated with the aim of improving the quality of sintered Ag films. The flash light sintering process is divided into two steps: a preheating step and a main sintering step. The preheating step is used to remove the organic binder without abrupt vaporization. The main sintering step is used to complete the necking connections among the silver nanoparticles and achieve high electrical conductivity. The process minimizes the damage on the polymer substrate and the interface between the sintered Ag film and polymer substrate. The electrical conductivity is calculated by measuring the resistance and cross-sectional area with an LCR meter and 3D optical profiler, respectively. It is found that the resistivity of the optimal flash light-sintered Ag films (36.32 nΩ m), which is 228.86% of that of bulk silver, is lower than that of thermally sintered ones (40.84 nΩ m). Additionally, the polyimide film used as the substrate is preserved with the inkjet-printed pattern shape during the flash light sintering process without delamination or defects. (paper)

  11. Planning of step-stress accelerated degradation test based on the inverse Gaussian process

    International Nuclear Information System (INIS)

    Wang, Huan; Wang, Guan-jun; Duan, Feng-jun

    2016-01-01

The step-stress accelerated degradation test (SSADT) is a useful tool for assessing the lifetime distribution of highly reliable or expensive products. Some efficient SSADT plans have been proposed when the underlying degradation follows the Wiener process or Gamma process. However, how to design an efficient SSADT plan for the inverse Gaussian (IG) process is still a problem to be solved. The aim of this paper is to provide an optimal SSADT plan for the IG degradation process. A cumulative exposure model for the SSADT is adopted, in which the product degradation path depends only on the current stress level and the degradation accumulated, not on how it was accumulated. Under the constraint of the total experimental budget, some design variables are optimized by minimizing the asymptotic variance of the estimated p-quantile of the lifetime distribution of the product. Finally, we use the proposed method to deal with the optimal SSADT design for a type of electrical connector based on a set of stress relaxation data. The sensitivity and stability of the SSADT plan are studied, and we find that the optimal test plan is quite robust to a moderate departure from the values of the parameters. - Highlights: • We propose an optimal SSADT plan for the IG degradation process. • A CE model is assumed in describing the degradation path of the SSADT. • The asymptotic variance of the estimated p-quantile is used as the objective function. • A set of stress relaxation data is analyzed and used for illustration of our method.
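
    A hedged sketch of how a single degradation path under such a step-stress plan might be simulated with an inverse Gaussian process under the cumulative exposure assumption: each stress level only rescales the operational time, so the accumulated degradation depends on total exposure, not on how it was accumulated. The log-linear stress link and all parameter values are assumptions for illustration, not the paper's optimization.

```python
import numpy as np
from scipy.stats import invgauss

rng = np.random.default_rng(1)

def ig_increment(mean, shape):
    """One inverse Gaussian increment IG(mean, shape); scipy's invgauss
    uses (mu=mean/shape, scale=shape)."""
    return invgauss.rvs(mean / shape, scale=shape, random_state=rng)

def simulate_ssadt_path(stresses, durations, dt=1.0,
                        mu=0.05, lam=2.0, beta=0.8, s0=1.0):
    """Degradation path under a step-stress ADT with cumulative exposure.
    Stress enters through an assumed log-linear acceleration factor
    r(s) = exp(beta * (s - s0)); all values are illustrative."""
    t, y, path = 0.0, 0.0, [(0.0, 0.0)]
    for s, dur in zip(stresses, durations):
        rate = np.exp(beta * (s - s0))          # acceleration factor r(s)
        for _ in range(int(dur / dt)):
            d_lambda = rate * dt                # increment of exposure
            y += ig_increment(mu * d_lambda, lam * d_lambda**2)
            t += dt
            path.append((t, y))
    return np.array(path)

path = simulate_ssadt_path(stresses=[1.0, 1.5, 2.0], durations=[100, 100, 100])
print(path[-1])      # (total test time, accumulated degradation)
```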

  12. Adaptive step goals and rewards: a longitudinal growth model of daily steps for a smartphone-based walking intervention.

    Science.gov (United States)

    Korinek, Elizabeth V; Phatak, Sayali S; Martin, Cesar A; Freigoun, Mohammad T; Rivera, Daniel E; Adams, Marc A; Klasnja, Pedja; Buman, Matthew P; Hekler, Eric B

    2018-02-01

Adaptive interventions are an emerging class of behavioral interventions that allow for individualized tailoring of intervention components over time to a person's evolving needs. The purpose of this study was to evaluate an adaptive step goal + reward intervention, grounded in Social Cognitive Theory and delivered via a smartphone application (Just Walk), using a mixed modeling approach. Participants (N = 20) were overweight (mean BMI = 33.8 ± 6.82 kg/m²), sedentary adults (90% female) interested in participating in a 14-week walking intervention. All participants received a Fitbit Zip that automatically synced with Just Walk to track daily steps. Step goals and expected points were delivered through the app every morning and were designed using a pseudo-random multisine algorithm that was a function of each participant's median baseline steps. Self-report measures were also collected each morning and evening via daily surveys administered through the app. The linear mixed effects model showed that, on average, participants significantly increased their daily steps by 2650 (t = 8.25). A model with a quadratic time variable indicated an inflection point for increasing steps near the midpoint of the intervention, and this effect was significant (t² coefficient = -247, t = -5.01). An adaptive step goal + rewards intervention using a smartphone app appears to be a feasible approach for increasing walking behavior in overweight adults. App satisfaction was high and participants enjoyed receiving variable goals each day. Future mHealth studies should consider the use of adaptive step goals + rewards in conjunction with other intervention components for increasing physical activity.
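
    The published multisine design is not reproduced in the record; the sketch below only shows, under assumed frequencies and amplitude, how a pseudo-random multisine sequence of daily step goals centred on a participant's median baseline might be generated.

```python
import numpy as np

def multisine_goals(median_baseline, n_days=98, n_sines=5,
                    rel_amplitude=0.3, seed=7):
    """Pseudo-random multisine sequence of daily step goals.

    A sum of sinusoids with randomly drawn phases is scaled to vary around
    the participant's median baseline steps. The number of sinusoids,
    amplitude, and frequency grid are illustrative assumptions, not the
    published Just Walk design.
    """
    rng = np.random.default_rng(seed)
    days = np.arange(n_days)
    freqs = rng.uniform(1 / 21, 1 / 3, size=n_sines)   # cycles per day
    phases = rng.uniform(0, 2 * np.pi, size=n_sines)
    signal = sum(np.sin(2 * np.pi * f * days + p) for f, p in zip(freqs, phases))
    signal /= np.max(np.abs(signal))                    # normalise to [-1, 1]
    goals = median_baseline * (1 + rel_amplitude * signal)
    return np.round(goals / 100) * 100                  # round to nearest 100 steps

print(multisine_goals(5000)[:7])   # first week of goals for a 5000-step baseline
```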

  13. Time ordering of two-step processes in energetic ion-atom collisions: Basic formalism

    International Nuclear Information System (INIS)

    Stolterfoht, N.

    1993-01-01

The semiclassical approximation is applied in second order to describe time ordering of two-step processes in energetic ion-atom collisions. Emphasis is given to the conditions for interferences between first- and second-order terms. In systems with two active electrons, time ordering gives rise to a pair of associated paths involving a second-order process and its time-inverted process. Combining these paths within the independent-particle frozen orbital model, time ordering is lost. It is shown that the loss of time ordering modifies the second-order amplitude so that its ability to interfere with the first-order amplitude is essentially reduced. Time ordering and the capability for interference is regained, as one path is blocked by means of the Pauli exclusion principle. The time-ordering formalism is prepared for papers dealing with collision experiments of single excitation [Stolterfoht et al., following paper, Phys. Rev. A 48, 2986 (1993)] and double excitation [Stolterfoht et al. (unpublished)].

  14. Comparison of microbial community shifts in two parallel multi-step drinking water treatment processes.

    Science.gov (United States)

    Xu, Jiajiong; Tang, Wei; Ma, Jun; Wang, Hong

    2017-07-01

    Drinking water treatment processes remove undesirable chemicals and microorganisms from source water, which is vital to public health protection. The purpose of this study was to investigate the effects of treatment processes and configuration on the microbiome by comparing microbial community shifts in two series of different treatment processes operated in parallel within a full-scale drinking water treatment plant (DWTP) in Southeast China. Illumina sequencing of 16S rRNA genes of water samples demonstrated little effect of coagulation/sedimentation and pre-oxidation steps on bacterial communities, in contrast to dramatic and concurrent microbial community shifts during ozonation, granular activated carbon treatment, sand filtration, and disinfection for both series. A large number of unique operational taxonomic units (OTUs) at these four treatment steps further illustrated their strong shaping power towards the drinking water microbial communities. Interestingly, multidimensional scaling analysis revealed tight clustering of biofilm samples collected from different treatment steps, with Nitrospira, the nitrite-oxidizing bacteria, noted at higher relative abundances in biofilm compared to water samples. Overall, this study provides a snapshot of step-to-step microbial evolvement in multi-step drinking water treatment systems, and the results provide insight to control and manipulation of the drinking water microbiome via optimization of DWTP design and operation.

  15. Efficient Hydrolysis of Rice Straw into Xylose and Glucose by a Two-step Process

    Directory of Open Access Journals (Sweden)

    YAN Lu-lu

    2016-07-01

The hydrolysis of rice straw into xylose and glucose in dilute sulfuric acid aqueous solution was studied with a two-step process in a batch autoclave reactor. The results showed that, compared with the traditional one-step acid hydrolysis, both xylose and glucose could be produced in high yields from rice straw by using the two-step acid hydrolysis process. The effects of reaction temperature, reaction time, the amount of rice straw and acid concentration on the hydrolysis of rice straw were systematically studied, and showed that, except for the initial rice straw loading amount, these parameters had a remarkable influence on product distribution and yields. In the first step of the hydrolysis process, a high xylose yield of 162.6 g·kg⁻¹ was obtained at 140℃ after 120 min of reaction time. When the solid residues from the first step were subjected to a second-step hydrolysis, a glucose yield as high as 216.5 g·kg⁻¹ could be achieved at 180℃ after 120 min. This work provides a promising strategy for the efficient and value-added utilization of agricultural wastes such as rice straw.

  16. Modelling flow over stepped spillway with varying chute geometry

    African Journals Online (AJOL)

    2012-07-02

    A basic dimensional analysis of the flow over the chute of the stepped spillway, assuming that the dominant feature is the momentum exchange between the free stream and the cavity flow within the steps of the spillway [2,3,4], is as presented in equation (1): f1(Ho, Hn, Um, hw, Li, Ls, Ks, tan(θ), µw, ρw, g) = 0. (1)

  17. Statistical optimization for biodiesel production from waste frying oil through two-step catalyzed process

    Energy Technology Data Exchange (ETDEWEB)

    Charoenchaitrakool, Manop [Department of Chemical Engineering, Faculty of Engineering, Kasetsart University, Bangkok (Thailand); Center for Advanced Studies in Nanotechnology and its Applications in Chemical, Food and Agricultural Industries, Kasetsart University, Bangkok (Thailand); Thienmethangkoon, Juthagate [Department of Chemical Engineering, Faculty of Engineering, Kasetsart University, Bangkok (Thailand)

    2011-01-15

The aim of this work was to investigate the optimum conditions for biodiesel production from waste frying oil using a two-step catalyzed process. In the first step, sulfuric acid was used as a catalyst for the esterification reaction of free fatty acid and methanol in order to reduce the free fatty acid content to approximately 0.5%. In the second step, the product from the first step was further reacted with methanol using potassium hydroxide as a catalyst. The Box-Behnken design of experiments was carried out using MINITAB RELEASE 14, and the results were analyzed using response surface methodology. The optimum conditions for biodiesel production were obtained when using a methanol-to-oil molar ratio of 6.1:1 and 0.68 wt.% sulfuric acid at 51 °C with a reaction time of 60 min in the first step, followed by a 9.1:1 molar ratio of methanol to first-step product and 1 wt.% KOH at 55 °C with a reaction time of 60 min in the second step. The percentage of methyl ester in the obtained product was 90.56 ± 0.28%. In addition, the fuel properties of the produced biodiesel were within the acceptable ranges according to the Thai standard for community biodiesel. (author)
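
    As an illustration of the experimental design mentioned above, the sketch below constructs a coded three-factor Box-Behnken design (12 edge runs plus centre points) and maps it to factor ranges. The ranges are hypothetical and chosen only to bracket the reported first-step optimum; they are not the authors' actual design levels.

```python
import itertools
import numpy as np

def box_behnken_3(center_points=3):
    """Coded three-factor Box-Behnken design: for each pair of factors,
    all +/-1 combinations with the remaining factor at 0, plus centre runs."""
    runs = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0, 0, 0]
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0, 0, 0]] * center_points
    return np.array(runs, dtype=float)

# Hypothetical ranges for the first (acid-catalysed) step, for illustration only:
# methanol:oil molar ratio, H2SO4 (wt.%), temperature (deg C)
low = np.array([4.0, 0.5, 45.0])
high = np.array([8.0, 1.0, 55.0])
coded = box_behnken_3()
actual = (low + high) / 2 + coded * (high - low) / 2
print(actual)
```

    The resulting design matrix would then be run experimentally and fitted with a quadratic response surface, which is the role response surface methodology plays in the study.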

  18. Exploratory Research on Novel Coal Liquefaction Concept - Task 2: Evaluation of Process Steps.

    Energy Technology Data Exchange (ETDEWEB)

    Brandes, S.D.; Winschel, R.A.

    1997-05-01

A novel direct coal liquefaction technology is being investigated in a program being conducted by CONSOL Inc. with the University of Kentucky, Center for Applied Energy Research and LDP Associates under DOE Contract DE-AC22-95PC95050. The novel concept consists of a new approach to coal liquefaction chemistry which avoids some of the inherent limitations of current high-temperature thermal liquefaction processes. The chemistry employed is based on hydride ion donation to solubilize coal at temperatures (350-400 °C) significantly lower than those typically used in conventional coal liquefaction. The process concept being explored consists of two reaction stages. In the first stage, the coal is solubilized by hydride ion donation. In the second, the products are catalytically upgraded to acceptable refinery feedstocks. The program explores not only the initial solubilization step, but integration of the subsequent processing steps, including an interstage solids-separation step, to produce distillate products. A unique feature of the process concept is that many of the individual reaction steps can be decoupled, because little recycle around the liquefaction system is expected. This allows for considerable latitude in the process design. Furthermore, this has allowed for each key element in the process to be explored independently in laboratory work conducted under Task 2 of the program.

  19. Xylose Isomerization with Zeolites in a Two-Step Alcohol–Water Process

    DEFF Research Database (Denmark)

    Paniagua, Marta; Shunmugavel, Saravanamurugan; Melián Rodriguez, Mayra

    2015-01-01

Isomerization of xylose to xylulose was efficiently catalyzed by large-pore zeolites in a two-step methanol–water process that enhanced the product yield significantly. The reaction pathway involves xylose isomerization to xylulose, which, in part, subsequently reacts with methanol to form methyl xyluloside (step 1), followed by hydrolysis after water addition to form additional xylulose (step 2). NMR spectroscopy studies performed with 13C-labeled xylose confirmed the proposed reaction pathway. The most active catalyst examined was zeolite Y, which proved more active than zeolite beta, ZSM-5, and mordenite. The yield of xylulose obtained over H-USY (Si/Al=6) after 1 h of reaction at 100 °C was 39%. After water hydrolysis in the second reaction step, the yield increased to 47%. Results obtained from pyridine adsorption studies confirm that H-USY (6) is a catalyst that combines Brønsted and Lewis acid ...

  20. Process and Context in Choice Models

    DEFF Research Database (Denmark)

    Ben-Akiva, Moshe; Palma, André de; McFadden, Daniel

    2012-01-01

We develop a general framework that extends choice models by including an explicit representation of the process and context of decision making. Process refers to the steps involved in decision making. Context refers to factors affecting the process, focusing in this paper on social networks. The extended choice framework includes more behavioral richness through the explicit representation of the planning process preceding an action and its dynamics and the effects of context (family, friends, and market) on the process leading to a choice, as well as the inclusion of new types of subjective data ...

  1. Two-step estimation procedures for inhomogeneous shot-noise Cox processes

    DEFF Research Database (Denmark)

    Prokesová, Michaela; Dvorák, Jirí; Jensen, Eva B. Vedel

    In the present paper we develop several two-step estimation procedures for inhomogeneous shot-noise Cox processes. The intensity function is parametrized by the inhomogeneity parameters while the pair-correlation function is parametrized by the interaction parameters. The suggested procedures...

  2. Low-temperature process steps for realization of non-volatile memory devices

    NARCIS (Netherlands)

    Brunets, I.; Boogaard, A.; Aarnink, Antonius A.I.; Kovalgin, Alexeij Y.; Wolters, Robertus A.M.; Holleman, J.; Schmitz, Jurriaan

    2007-01-01

    In this work, the low-temperature process steps required for the realization of nano-crystal non-volatile memory cells are discussed. An amorphous silicon film, crystallized using a diode pumped solid state green laser irradiating at 532 nm, is proposed as an active layer. The deposition of the

  3. The typical steps of radiation processes development. Experience in investigation, designing and application

    International Nuclear Information System (INIS)

    Babkin, I. Yu.

    1983-01-01

The typical steps in the development of radiation processes are discussed: primary laboratory investigations; primary economic evaluation; more exact estimation of the situation in industry; comparative analysis; development of a flow sheet; pilot plant; obtaining of initial data for the design of an industrial-scale plant; prediction of the industrial situation; design of a semi-industrial or industrial plant; industrial plant. (U.K.)

  4. A facile two-step dipping process based on two silica systems for a superhydrophobic surface.

    Science.gov (United States)

    Li, Xiaoguang; Shen, Jun

    2011-10-14

A silica microsphere suspension and a silica sol are employed in a two-step dipping process for the preparation of a superhydrophobic surface. It is not only a facile way to achieve the lotus effect, but can also create a multi-functional surface with different wettabilities, adhesive forces and transparencies.

  5. The Mixing of Methods: a three-step process for improving rigour in impact evaluations

    NARCIS (Netherlands)

    Ton, G.

    2012-01-01

    This article describes a systematic process that is helpful in improving impact evaluation assignments, within restricted budgets and timelines. It involves three steps: a rethink of the key questions of the evaluation to develop more relevant, specific questions; a way of designing a mix of

  6. Preparation of biodiesel from waste cooking oil via two-step catalyzed process

    International Nuclear Information System (INIS)

    Wang Yong; Liu Pengzhan; Ou Shiyi; Zhang Zhisen

    2007-01-01

Waste cooking oils (WCO), which contain large amounts of free fatty acids and are produced in restaurants, are collected by the environmental protection agency in the main cities of China and should be disposed of in a suitable way. In this research, a two-step catalyzed process was adopted to prepare biodiesel from waste cooking oil whose acid value was 75.92 ± 0.036 mg KOH/g. The free fatty acids of WCO were esterified with methanol catalyzed by ferric sulfate in the first step, and the triglycerides (TGs) in WCO were transesterified with methanol catalyzed by potassium hydroxide in the second step. The results showed that ferric sulfate had high activity in catalyzing the esterification of free fatty acids (FFA) with methanol. The conversion rate of FFA reached 97.22% when 2 wt% of ferric sulfate was added to a reaction system containing methanol and TG in a 10:1 mole ratio and reacted at 95 °C for 4 h. The methanol was vacuum evaporated, and transesterification of the remaining triglycerides was performed at 65 °C for 1 h in a reaction system containing 1 wt% of potassium hydroxide and a 6:1 mole ratio of methanol to TG. The final product, containing 97.02% biodiesel and obtained after the two-step catalyzed process, was analyzed by gas chromatography. This new process has many advantages compared with the old processes, such as no acidic waste water, high efficiency, low equipment cost and easy recovery of the catalyst.

  7. Empirical Modeling of Oxygen Uptake of Flow Over Stepped Chutes ...

    African Journals Online (AJOL)

The present investigation evaluates the influence of three different stepped chute geometries when skimming flow was allowed over them, with the aim of determining the aerated flow length, which is a significant factor when developing empirical equations for estimating the aeration efficiency of flow. Overall, forty experiments were ...

  8. Multi-Step Deep Reactive Ion Etching Fabrication Process for Silicon-Based Terahertz Components

    Science.gov (United States)

    Jung-Kubiak, Cecile (Inventor); Reck, Theodore (Inventor); Chattopadhyay, Goutam (Inventor); Perez, Jose Vicente Siles (Inventor); Lin, Robert H. (Inventor); Mehdi, Imran (Inventor); Lee, Choonsup (Inventor); Cooper, Ken B. (Inventor); Peralta, Alejandro (Inventor)

    2016-01-01

A multi-step silicon etching process has been developed to fabricate silicon-based terahertz (THz) waveguide components. This technique provides precise dimensional control across multiple etch depths with batch processing capabilities. Nonlinear and passive components such as mixers and multipliers, waveguides, hybrids, OMTs and twists have been fabricated and integrated into a small silicon package. This fabrication technique enables a wafer-stacking architecture to provide ultra-compact multi-pixel receiver front-ends in the THz range.

  9. Towards a comprehensive framework for cosimulation of dynamic models with an emphasis on time stepping

    Science.gov (United States)

    Hoepfer, Matthias

Over the last two decades, computer modeling and simulation have evolved as the tools of choice for the design and engineering of dynamic systems. With increased system complexities, modeling and simulation become essential enablers for the design of new systems. Some of the advantages that modeling and simulation-based system design allows for are the replacement of physical tests to ensure product performance, reliability and quality, the shortening of design cycles due to the reduced need for physical prototyping, the design for mission scenarios, the invoking of currently nonexisting technologies, and the reduction of technological and financial risks. Traditionally, dynamic systems are modeled in a monolithic way. Such monolithic models include all the data, relations and equations necessary to represent the underlying system. With increased complexity of these models, the monolithic model approach reaches certain limits regarding, for example, model handling and maintenance. Furthermore, while the available computer power has been steadily increasing according to Moore's Law (roughly a doubling in computational power every two years), the ever-increasing complexities of new models have negated the increased resources available. Lastly, modern systems and design processes are interdisciplinary, enforcing the necessity to make models more flexible to be able to incorporate different modeling and design approaches. The solution to bypassing the shortcomings of monolithic models is cosimulation. In a very general sense, co-simulation addresses the issue of linking together different dynamic sub-models to a model which represents the overall, integrated dynamic system. It is therefore an important enabler for the design of interdisciplinary, interconnected, highly complex dynamic systems. While a basic co-simulation setup can be very easy, complications can arise when sub-models display behaviors such as algebraic loops, singularities, or constraints. This work frames the
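
    As a concrete illustration of the coupling and time-stepping issues discussed above, the sketch below runs an explicit (Jacobi-type) co-simulation of two mass-spring subsystems that exchange coupling forces only at fixed macro time steps, each integrating internally with its own smaller micro step. The example system, solvers, and step sizes are assumptions for illustration, not part of the cited work.

```python
class MassSpring:
    """One subsystem: mass on a spring, driven by a coupling force that is
    held constant (zero-order hold) during each macro step."""
    def __init__(self, m, k, x0, v0):
        self.m, self.k = m, k
        self.x, self.v = x0, v0

    def step(self, coupling_force, dt_macro, dt_micro=1e-3):
        n = int(round(dt_macro / dt_micro))
        for _ in range(n):                         # semi-implicit Euler micro steps
            a = (-self.k * self.x + coupling_force) / self.m
            self.v += dt_micro * a
            self.x += dt_micro * self.v
        return self.x

# Two oscillators coupled by a spring of stiffness kc (illustrative values)
s1 = MassSpring(m=1.0, k=10.0, x0=1.0, v0=0.0)
s2 = MassSpring(m=2.0, k=15.0, x0=0.0, v0=0.0)
kc, dt_macro = 5.0, 0.01
x1, x2 = s1.x, s2.x

for _ in range(500):                               # Jacobi coupling: exchange at macro steps
    f12 = kc * (x2 - x1)                           # force on subsystem 1 from 2
    x1_new = s1.step(+f12, dt_macro)
    x2_new = s2.step(-f12, dt_macro)
    x1, x2 = x1_new, x2_new

print(round(x1, 4), round(x2, 4))
```

    Because the coupling force is extrapolated (held constant) over each macro step, the macro step size controls the coupling error; this is exactly the kind of time-stepping trade-off the dissertation addresses.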

  10. DEFORMATION DEPENDENT TUL MULTI-STEP DIRECT MODEL

    International Nuclear Information System (INIS)

    WIENKE, H.; CAPOTE, R.; HERMAN, M.; SIN, M.

    2007-01-01

The Multi-Step Direct (MSD) module TRISTAN in the nuclear reaction code EMPIRE has been extended in order to account for nuclear deformation. The new formalism was tested in calculations of neutron emission spectra emitted from the 232Th(n,xn) reaction. These calculations include vibration-rotational Coupled Channels (CC) for the inelastic scattering to low-lying collective levels, "deformed" MSD with quadrupole deformation for inelastic scattering to the continuum, Multi-Step Compound (MSC) and Hauser-Feshbach with advanced treatment of the fission channel. Prompt fission neutrons were also calculated. The comparison with experimental data shows clear improvement over the "spherical" MSD calculations and the JEFF-3.1 and JENDL-3.3 evaluations.

  11. Deformation dependent TUL multi-step direct model

    International Nuclear Information System (INIS)

    Wienke, H.; Capote, R.; Herman, M.; Sin, M.

    2008-01-01

The Multi-Step Direct (MSD) module TRISTAN in the nuclear reaction code EMPIRE has been extended to account for nuclear deformation. The new formalism was tested in calculations of neutron emission spectra emitted from the 232Th(n,xn) reaction. These calculations include vibration-rotational Coupled Channels (CC) for the inelastic scattering to low-lying collective levels, 'deformed' MSD with quadrupole deformation for inelastic scattering to the continuum, Multi-Step Compound (MSC) and Hauser-Feshbach with advanced treatment of the fission channel. Prompt fission neutrons were also calculated. The comparison with experimental data shows clear improvement over the 'spherical' MSD calculations and the JEFF-3.1 and JENDL-3.3 evaluations. (authors)

  12. Adaptive process control strategy for a two-step bending process

    NARCIS (Netherlands)

    Dallinger, F.N.; Roux, E.P.; Havinga, Gosse Tjipke; d'Ippolito, R.; van Tijum, R.; van Ravenswaaij, R.; Hora, P.; van den Boogaard, Antonius H.; Setchi, R.; Howlett, R.J.; Naim, M.; Seinz, H.

    2014-01-01

A robust production is an important goal in sheet metal forming in order to make the process outcome insensitive to variations in input and process conditions. This would guarantee a minimum number of defects and reduced press downtime. However, for complex parts it is difficult to achieve robust

  13. Impacting student anxiety for the USMLE Step 1 through process-oriented preparation

    Directory of Open Access Journals (Sweden)

    Roy E. Strowd

    2010-02-01

Background: Standardized examinations are key components of medical education. The USMLE Step 1 is the first of these important milestones. Success on this examination requires both content competency and efficient strategies for study and review. Students employ a wide variety of techniques in studying for this examination, with heavy reliance on personal study habits and advice from other students. Nevertheless, few medical curricula formally address these strategies. Methods: In response to student-generated critique at our institution, a five-part seminar series on process-oriented preparation was developed and implemented to address such concerns. The series focused on early guidance and preparation strategies for Step 1 and the many other important challenges in medical school. Emphasis was placed on facilitating conversation and mentorship opportunities between students. Results & Conclusions: Our medical students reported a profoundly positive experience, including a decreased anxiety level for the Step 1 examination.

  14. Configuring the thermochemical hydrogen sulfuric acid process step for the Tandem Mirror Reactor

    International Nuclear Information System (INIS)

    Galloway, T.R.

    1981-01-01

This paper identifies the sulfuric acid step as the critical part of the thermochemical cycle in dictating the thermal demands and temperature requirements of the heat source. The General Atomic Sulfur-Iodine Cycle is coupled to a Tandem Mirror. The sulfuric acid decomposition process step is focused on specifically since this step can use the high efficiency electrical power of the direct converter together with the other thermal-produced electricity to Joule-heat a non-catalytic SO3 decomposer to approximately 1250 K. This approach uses concepts originally suggested by Dick Werner and Oscar Krikorian. The blanket temperature can be lowered to about 900 K, greatly alleviating materials problems, the level of technology required, safety problems, and costs. A moderate degree of heat integration has been applied to keep the cycle efficiency around 48%, but the number of heat exchangers has been limited in order to keep hydrogen production costs within reasonable bounds

  15. Multivariate modelling of the tablet manufacturing process with wet granulation for tablet optimization and in-process control

    NARCIS (Netherlands)

    Westerhuis, J.A; Coenegracht, P.M J; Lerk, C.F

    1997-01-01

    The process of tablet manufacturing with granulation is described as a two-step process. The first step comprises wet granulation of the powder mixture, and in the second step the granules are compressed into tablets. For the modelling of the pharmaceutical process of wet granulation and tableting,

  16. ADDING A NEW STEP WITH SPATIAL AUTOCORRELATION TO IMPROVE THE FOUR-STEP TRAVEL DEMAND MODEL WITH FEEDBACK FOR A DEVELOPING CITY

    Directory of Open Access Journals (Sweden)

    Xuesong FENG, Ph.D Candidate

    2009-01-01

It is expected that improvement of transport networks could give rise to changes in the spatial distributions of population-related factors and car ownership, which are expected to further influence travel demand. To properly reflect such an interdependence mechanism, an aggregate multinomial logit (A-MNL) model was first applied to represent the spatial distributions of these exogenous variables of the travel demand model by reflecting the influence of transport networks. Next, spatial autocorrelation analysis is introduced into the log-transformed A-MNL model (called the SPA-MNL model). Thereafter, the SPA-MNL model is integrated into the four-step travel demand model with feedback (called the 4-STEP model). As a result, an integrated travel demand model is newly developed and named the SPA-STEP model. Using person trip data collected in Beijing, the performance of the SPA-STEP model is empirically compared with the 4-STEP model. It was proven that the SPA-STEP model is superior to the 4-STEP model in accuracy; most of the estimated parameters showed statistical differences in values. Moreover, though the simulations of the same set of assumed scenarios by the 4-STEP model and the SPA-STEP model consistently suggested the same sustainable path for the future development of Beijing, it was found that the environmental sustainability and the traffic congestion for these scenarios were generally overestimated by the 4-STEP model compared with the corresponding analyses by the SPA-STEP model. Such differences were clearly generated by the introduction of the new modeling step with spatial autocorrelation.
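
    The SPA-MNL spatial-autocorrelation step is not reproduced here; as background, the sketch below only illustrates the feedback idea of the four-step model on a toy two-zone network (gravity trip distribution, binary logit mode split, and a BPR volume-delay function iterated until congested car times stabilize). All numbers are assumed for illustration.

```python
import numpy as np

# Toy two-zone illustration of the four-step feedback loop (all values assumed)
productions = np.array([2000.0, 1500.0])           # trips produced per zone
attractions = np.array([1800.0, 1700.0])           # trips attracted per zone
free_flow_time = np.array([[5.0, 20.0],
                           [20.0, 5.0]])           # car time (min)
capacity = np.array([[9999.0, 1200.0],
                     [1200.0, 9999.0]])            # veh/h per OD "link"
transit_time = free_flow_time + 12.0               # fixed transit time (min)

def gravity(prod, attr, time, beta=0.1):
    """Trip distribution: gravity model with exp(-beta*t) impedance."""
    w = attr * np.exp(-beta * time)
    return prod[:, None] * w / w.sum(axis=1, keepdims=True)

def mode_split(car_t, transit_t, theta=0.3):
    """Mode choice: binary logit on in-vehicle times, returns car share."""
    return 1.0 / (1.0 + np.exp(-theta * (transit_t - car_t)))

def bpr(t0, volume, cap, alpha=0.15, power=4):
    """Assignment feedback: BPR volume-delay function."""
    return t0 * (1.0 + alpha * (volume / cap) ** power)

car_time = free_flow_time.copy()
for _ in range(20):                                 # feedback loop
    od = gravity(productions, attractions, car_time)        # distribution
    car_volume = od * mode_split(car_time, transit_time)    # mode choice
    new_time = bpr(free_flow_time, car_volume, capacity)    # congested times
    if np.abs(new_time - car_time).max() < 1e-3:
        break
    car_time = 0.5 * (car_time + new_time)                  # averaging for stability

print(np.round(car_time, 2))
```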

  17. Structure of turbulent non-premixed flames modeled with two-step chemistry

    Science.gov (United States)

    Chen, J. H.; Mahalingam, S.; Puri, I. K.; Vervisch, L.

    1992-01-01

    Direct numerical simulations of turbulent diffusion flames modeled with finite-rate, two-step chemistry, A + B yields I, A + I yields P, were carried out. A detailed analysis of the turbulent flame structure reveals the complex nature of the penetration of various reactive species across two reaction zones in mixture fraction space. Due to this two zone structure, these flames were found to be robust, resisting extinction over the parameter ranges investigated. As in single-step computations, mixture fraction dissipation rate and the mixture fraction were found to be statistically correlated. Simulations involving unequal molecular diffusivities suggest that the small scale mixing process and, hence, the turbulent flame structure is sensitive to the Schmidt number.

  18. GREENSCOPE: Sustainable Process Modeling

    Science.gov (United States)

    EPA researchers are responding to environmental problems by incorporating sustainability into process design and evaluation. EPA researchers are also developing a tool that allows users to assess modifications to existing and new chemical processes to determine whether changes in...

  19. Multi-step process for concentrating magnetic particles in waste sludges

    Science.gov (United States)

    Watson, J.L.

    1990-07-10

    This invention involves a multi-step, multi-force process for dewatering sludges which have high concentrations of magnetic particles, such as waste sludges generated during steelmaking. This series of processing steps involves (1) mixing a chemical flocculating agent with the sludge; (2) allowing the particles to aggregate under non-turbulent conditions; (3) subjecting the mixture to a magnetic field which will pull the magnetic aggregates in a selected direction, causing them to form a compacted sludge; (4) preferably, decanting the clarified liquid from the compacted sludge; and (5) using filtration to convert the compacted sludge into a cake having a very high solids content. Steps 2 and 3 should be performed simultaneously. This reduces the treatment time and increases the extent of flocculation and the effectiveness of the process. As partially formed aggregates with active flocculating groups are pulled through the mixture by the magnetic field, they will contact other particles and form larger aggregates. This process can increase the solids concentration of steelmaking sludges in an efficient and economic manner, thereby accomplishing either of two goals: (a) it can convert hazardous wastes into economic resources for recycling as furnace feed material, or (b) it can dramatically reduce the volume of waste material which must be disposed. 7 figs.

  20. Models and Modelling Tools for Chemical Product and Process Design

    DEFF Research Database (Denmark)

    Gani, Rafiqul

    2016-01-01

The design, development and reliability of a chemical product, and the process to manufacture it, need to be consistent with the end-use characteristics of the desired product. One of the common ways to match the desired product-process characteristics is through trial-and-error based experiments, which can be expensive and time consuming. An alternative approach is the use of a systematic model-based framework according to an established work-flow in product-process design, replacing some of the time consuming and/or repetitive experimental steps. The advantages of the use of a model-based approach in product-process design will be presented through illustrative examples highlighting the need for efficient model-based systems, where the need for predictive models for innovative chemical product-process design will be highlighted. The examples will cover aspects of chemical product-process design where the idea of the grand ...

  1. Auditory processing models

    DEFF Research Database (Denmark)

    Dau, Torsten

    2008-01-01

    The Handbook of Signal Processing in Acoustics will compile the techniques and applications of signal processing as they are used in the many varied areas of Acoustics. The Handbook will emphasize the interdisciplinary nature of signal processing in acoustics. Each Section of the Handbook...... will present topics on signal processing which are important in a specific area of acoustics. These will be of interest to specialists in these areas because they will be presented from their technical perspective, rather than a generic engineering approach to signal processing. Non-specialists, or specialists...

  2. Four-dimensional transform fault processes: progressive evolution of step-overs and bends

    Science.gov (United States)

    Wakabayashi, John; Hengesh, James V.; Sawyer, Thomas L.

    2004-11-01

    Many bends or step-overs along strike-slip faults may evolve by propagation of the strike-slip fault on one side of the structure and progressive shut-off of the strike-slip fault on the other side. In such a process, new transverse structures form, and the bend or step-over region migrates with respect to materials that were once affected by it. This process is the progressive asymmetric development of a strike-slip duplex. Consequences of this type of step-over evolution include: (1) the amount of structural relief in the restraining step-over or bend region is less than expected; (2) pull-apart basin deposits are left outside of the active basin; and (3) local tectonic inversion occurs that is not linked to regional plate boundary kinematic changes. This type of evolution of step-overs and bends may be common along the dextral San Andreas fault system of California; we present evidence at different scales for the evolution of bends and step-overs along this fault system. Examples of pull-apart basin deposits related to migrating releasing (right) bends or step-overs are the Plio-Pleistocene Merced Formation (tens of km along strike), the Pleistocene Olema Creek Formation (several km along strike) along the San Andreas fault in the San Francisco Bay area, and an inverted colluvial graben exposed in a paleoseismic trench across the Miller Creek fault (meters to tens of meters along strike) in the eastern San Francisco Bay area. Examples of migrating restraining bends or step-overs include the transfer of slip from the Calaveras to Hayward fault, and the Greenville to the Concord fault (ten km or more along strike), the offshore San Gregorio fold and thrust belt (40 km along strike), and the progressive transfer of slip from the eastern faults of the San Andreas system to the migrating Mendocino triple junction (over 150 km along strike). Similar 4D evolution may characterize the evolution of other regions in the world, including the Dead Sea pull-apart, the Gulf

  3. Four-Dimensional Transform Fault Processes: Evolution of Step-Overs and Bends at Different Scales

    Science.gov (United States)

    Wakabayashi, J.; Hengesh, J. V.; Sawyer, T. L.

    2002-12-01

Many bends or step-overs along strike-slip faults may evolve by propagation of the strike-slip fault on one side of the structure and progressive shut off of the strike-slip fault on the other side. In such a process, new transverse structures form, old ones become inactive, and the bend or step-over region migrates with respect to materials that were once affected by it. This process is the progressive asymmetric development of a strike-slip duplex. Consequences of this type of step-over evolution include the following: 1. the amount of vertical structural relief in restraining step-over or bend regions is less than expected (apatite fission track ages associated with these step-over regions predate the strike-slip faulting); 2. pull-apart basin deposits are left outside of the active basin and commonly subjected to contractional deformation and uplift; and 3. local basin inversion occurs that is not linked to regional plate motion changes. This type of evolution of step-overs and bends may be common along the dextral San Andreas fault system of California. Examples of pull-apart basin deposits related to migrating releasing (right) bends or step-overs are the Plio-Pleistocene Merced Formation (tens of km along strike), the Pleistocene Olema Creek Formation (several km along strike) along the San Andreas fault in the San Francisco Bay area, and an inverted colluvial graben exposed in a paleoseismic trench across the Miller Creek fault (meters to tens of meters along strike) in the eastern San Francisco Bay area. Examples of migrating restraining bends or step-overs include the transfer of slip from the Calaveras to Hayward fault in the Mission Peak area, and the Greenville to the Concord fault at Mount Diablo (10 km or more along strike), the offshore San Gregorio fold and thrust belt (40 km along strike), and the progressive transfer of slip from the eastern faults of the San Andreas system to the migrating Mendocino triple junction (over 150 km along strike). Another

  4. Developing novel one-step processes for obtaining food-grade O/W emulsions from pressurized fluid extracts: processes description, state of the art and perspectives

    Directory of Open Access Journals (Sweden)

    Diego Tresinari SANTOS

    2015-01-01

In this work, a novel on-line process for the production of food-grade emulsions containing oily extracts, i.e. oil-in-water (O/W) emulsions, in only one step is presented. This process has been called ESFE, Emulsions from Supercritical Fluid Extraction. With this process, emulsions containing supercritical fluid extracts can be obtained directly from plant materials. The aim in the conception of this process is to propose a new, rapid way to obtain emulsions from supercritical fluid extracts. Nowadays the conventional emulsion formulation method is a two-step procedure: first, supercritical fluid extraction to obtain an extract; second, emulsion formulation using another device. Another variation of the process was tested and successfully validated, originating a new acronymed process: EPFE (Emulsions from Pressurized Fluid Extractions). Both processes exploit the miscibility of supercritical CO2 and essential oils; in addition, the EPFE process exploits the emulsification properties of saponin-rich pressurized aqueous plant extracts. The feasibility of this latter process was demonstrated using Pfaffia glomerata roots as the source of saponin-rich extract, water as the extracting solvent and clove essential oil, directly extracted using supercritical CO2, as a model dispersed phase. In addition, examples of pressurized fluid-based coupled processes developed in the past five years for adding value to food bioactive compounds are reviewed.

  5. Design scope and level for standard design certification under a two step licensing process

    Energy Technology Data Exchange (ETDEWEB)

    Huh, Chang Wook; Suh, Nam Duk [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2012-08-15

A small integral reactor, SMART (System Integrated Modular Advanced ReacTor), which has been under development in Korea since the late 1990s and was targeted at obtaining a standard design approval by the end of 2011, is introduced. The design scope and level for design certification (DC) are well described in the U.S. NRC SECY documents published in the early 1990s. However, those documents are valid for a one-step licensing process called a combined operating license (COL) by the U.S. NRC, while Korea still uses a two-step licensing process. Thus, referencing the concept of the SECY documents, we have established the design scope and level for the SMART DC using the contexts of the standard review plan (SRP). Some examples of the results and issues raised during our review are briefly presented in this paper. The same methodology will be applied to other types of reactor under development in Korea, such as future VHTR reactors.

  6. Process simulation of single-step dimethyl ether production via biomass gasification.

    Science.gov (United States)

    Ju, Fudong; Chen, Hanping; Ding, Xuejun; Yang, Haiping; Wang, Xianhua; Zhang, Shihong; Dai, Zhenghua

    2009-01-01

In this study, we simulated the single-step process of dimethyl ether (DME) synthesis via biomass gasification using ASPEN Plus. The whole process comprised four parts: gasification, water gas shift reaction, gas purification, and single-step DME synthesis. We analyzed the influence of the oxygen/biomass and steam/biomass ratios on biomass gasification and synthesis performance. The syngas H2/CO ratio after the water gas shift process was adjusted to 1, and the syngas was then purified to remove H2S and CO2 using the Rectisol process. The syngas still contained trace amounts of H2S and about 3% CO2 after purification, which satisfied the synthesis demands. However, the high level of cold energy consumption was a problem during the purification process. The DME yield in this study was 0.37, assuming that the DME selectivity was 0.91 and that CO was totally converted. We performed environmental and economic analyses, and propose the development of a poly-generation process based on economic considerations.

  7. A Three Step B2B Sales Model Based on Satisfaction Judgments

    DEFF Research Database (Denmark)

    Grünbaum, Niels Nolsøe

    2015-01-01

... companies' perspective. The buying center members applied satisfaction dimensions when forming satisfaction judgments. Moreover, the focus and importance of the identified satisfaction dimensions fluctuated depending on the phase of the buying process. Based on the findings, a three-step sales model is proposed, comprising (1) identification of the satisfaction dimensions the buying center members apply in the buying process, (2) identification of the fluctuation in importance of the satisfaction dimensions, and finally (3) identification of the degree of expectations adjacent to the identified satisfaction dimensions. The insights produced can be applied by selling companies to craft close collaborative customer relationships in a systematic and efficient way. The process of building customer relationships will be guided through actions that yield higher satisfaction judgments, leading to loyal customers and finally ...


  9. Development of F2 two-step fluorination process for non-aqueous reprocessing

    International Nuclear Information System (INIS)

    1976-02-01

To establish the F2 two-step fluorination process for stable and high recoveries of plutonium, the fluorination process has been studied with simulated FBR fuel containing UO2-PuO2 and non-radioactive fission products in a 2-inch-diameter fluidized bed. The process principle was demonstrated, and the effect of fission products on the fluorination of U and Pu and the possibility of reducing the Pu loss were clarified. The feasibility of separating PuF6 from UF6 by adsorption onto UO2F2 was also indicated. (auth.)

  10. Integrating social media and social marketing: a four-step process.

    Science.gov (United States)

    Thackeray, Rosemary; Neiger, Brad L; Keller, Heidi

    2012-03-01

    Social media is a group of Internet-based applications that allows individuals to create, collaborate, and share content with one another. Practitioners can realize social media's untapped potential by incorporating it as part of the larger social marketing strategy, beyond promotion. Social media, if used correctly, may help organizations increase their capacity for putting the consumer at the center of the social marketing process. The purpose of this article is to provide a template for strategic thinking to successfully include social media as part of the social marketing strategy by using a four-step process.

  11. INNOVATION PROCESS MODELLING

    Directory of Open Access Journals (Sweden)

    JANUSZ K. GRABARA

    2011-01-01

Modelling phenomena in accordance with the structural approach enables one to simplify the observed relations and to present the classification grounds. An example may be a model of organisational structure identifying the logical relations between particular units and presenting the division of authority and work.

  12. Electrochemical model of polyaniline-based memristor with mass transfer step

    International Nuclear Information System (INIS)

    Demin, V.A.; Erokhin, V.V.; Kashkarov, P.K.; Kovalchuk, M.V.

    2015-01-01

The electrochemical organic memristor with a polyaniline active layer is a stand-alone device designed and realized to reproduce some synapse properties in innovative electronic circuits, such as new field-programmable gate arrays or neuromorphic networks capable of learning. In this work a new theoretical model of the polyaniline memristor is presented. The developed model of organic memristor functioning is based on a detailed consideration of the possible electrochemical processes occurring in the active zone of the device, including the mass transfer step of the ionic reactants. The calculated results not only qualitatively explain the characteristics observed in the experiment, but also show quantitative agreement with the measured current values. This model can establish a basis for the design and prediction of properties of more complicated circuits and systems (including stochastic ones) based on organic memristive devices.
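
    The authors' electrochemical equations are not given in the record; the following is only a generic memristive-device sketch in which the oxidized fraction of the active layer evolves at the slower of a voltage-driven reaction rate and a fixed mass-transfer (ion-supply) rate, and the conductance interpolates between reduced and oxidized states. All functional forms and parameter values are assumptions, not the model of the cited work.

```python
import numpy as np

def simulate_memristor(voltage, dt=1e-3, g_ox=1e-4, g_red=1e-6,
                       k_reaction=20.0, k_transport=5.0):
    """Generic memristive-device sketch with a mass-transfer-limited state.

    w in [0, 1] is the oxidized fraction of the active layer; its rate of
    change is limited by the slower of a voltage-driven redox rate and a
    fixed ion-supply (mass-transfer) rate. Illustrative assumptions only.
    """
    w, currents = 0.05, []
    for v in voltage:
        reaction = k_reaction * np.tanh(v)       # voltage-driven redox rate
        transport = k_transport * np.sign(v)     # ion-supply limit
        rate = np.sign(v) * min(abs(reaction), abs(transport))
        w = np.clip(w + dt * rate * w * (1.0 - w), 0.0, 1.0)
        g = g_red + (g_ox - g_red) * w           # conductance between states
        currents.append(g * v)
    return np.array(currents)

t = np.linspace(0.0, 4.0, 4000)
v = 0.8 * np.sin(2 * np.pi * 0.5 * t)            # slow voltage sweep
i = simulate_memristor(v)
print(i.min(), i.max())                          # asymmetric (hysteretic) response
```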

  13. ELECTRODIALYSIS IN THE CONVERSION STEP OF THE CONCENTRATED SALT SOLUTIONS IN THE PROCESS OF BATTERY SCRAP

    Directory of Open Access Journals (Sweden)

    S. I. Niftaliev

    2014-01-01

    Full Text Available A concentrated sodium sulfate solution is formed during the processing of waste battery scrap. A promising way to further treat this concentrated salt solution is the conversion of the salts into acid and base by electrodialysis, so that they can be reused in the same process cycle. For the conversion of salts into the corresponding acid and base, several cell schemes with different combinations of cation-exchange, anion-exchange and bipolar membranes are used. In this article a comparative analysis of these cells is carried out. The membranes MC-40, MA-41 and MB-2I were used in the cells. Acid and base solutions with higher concentration may be obtained by electrodialysis in circulation mode, when a predetermined amount of salt in a closed loop is run through the membrane stack until the desired product concentration is obtained. The disadvantages of this method are the high cost of buffer tanks and the need to work with small volumes of treated solutions. In industrial applications it is advisable to use continuous electrodialysis with bipolar membranes, since this configuration allows the number of repeating sections to be increased, which is necessary to reduce energy costs. The salt removal rate can be increased by adding process steps, and an electrodialysis concentrator or an evaporator can be applied after the conversion step to produce more concentrated products.

  14. Ehrenfest's theorem and the validity of the two-step model for strong-field ionization

    DEFF Research Database (Denmark)

    Shvetsov-Shilovskiy, Nikolay; Dimitrovski, Darko; Madsen, Lars Bojer

    By comparison with the solution of the time-dependent Schrodinger equation we explore the validity of the two-step semiclassical model for strong-field ionization in elliptically polarized laser pulses. We find that the discrepancy between the two-step model and the quantum theory correlates...

  15. The "step feature" of suprathermal ion distributions: a discriminator between acceleration processes?

    Directory of Open Access Journals (Sweden)

    H. J. Fahr

    2012-09-01

    Full Text Available The discussion of exactly which process is causing the preferred build-up of v⁻⁵ power-law tails of the velocity distribution of suprathermal particles in the solar wind is still ongoing. Criteria allowing one to discriminate between the various suggestions that have been made would be useful in order to clarify the physics behind these tails. With this study, we draw attention to the so-called "step feature" of the velocity distributions and offer a criterion that allows one to distinguish between those scenarios that employ velocity diffusion, i.e. second-order Fermi processes, which are prime candidates in the present debate. With an analytical approximation to the self-consistently obtained velocity diffusion coefficient, we solve the transport equation for suprathermal particles. The numerical simulation reveals that this form of the diffusion coefficient naturally leads to the step feature of the velocity distributions. This finding favours – at least in regions of the appearance of the step feature (i.e. for heliocentric distances up to about 11 AU and at lower energies) – the standard velocity diffusion as a consequence of the particles' interactions with the plasma wave turbulence as opposed to that caused by velocity fluctuation-induced compressions and rarefactions.
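
    The velocity-diffusion (second-order Fermi) scenario singled out above rests on a transport equation containing a diffusion term in particle speed. As a point of orientation only, a commonly used generic form of such a term is sketched below in LaTeX; the isotropic distribution f(v, t) and the diffusion coefficient D_vv(v) are placeholder symbols, not the specific self-consistent coefficient derived by the authors.

```latex
% Generic velocity-diffusion (second-order Fermi) term for an isotropic
% distribution f(v,t); D_{vv}(v) is an assumed, model-dependent coefficient.
\frac{\partial f}{\partial t}
  = \frac{1}{v^{2}}\,\frac{\partial}{\partial v}
    \left[ v^{2}\, D_{vv}(v)\, \frac{\partial f}{\partial v} \right]
  + \text{(convection and other transport terms)}
```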

  16. First step of the project for implementation of two non-symmetric cooling loops modeled by the ALMOD3 code

    International Nuclear Information System (INIS)

    Dominguez, L.; Camargo, C.T.M.

    1984-09-01

    The first step of the project for implementation of two non-symmetric cooling loops modeled by the ALMOD3 computer code is presented. This step consists of the introduction of a simplified model for simulating the steam generator. This model is the GEVAP computer code, an integral part of the LOOP code, which simulates the primary coolant circuit of PWR nuclear power plants during transients. The ALMOD3 computer code has a very detailed model for the steam generator, called UTSG. This model has spatial dependence, correlations for two-phase flow, and distinct correlations for different heat transfer processes. The GEVAP model assumes thermal equilibrium between phases (a homogeneous gaseous-liquid mixture), has no spatial dependence and uses only one generalized correlation to treat several heat transfer processes. (Author) [pt

  17. Pedagogic process modeling: Humanistic-integrative approach

    Directory of Open Access Journals (Sweden)

    Boritko Nikolaj M.

    2007-01-01

    Full Text Available The paper deals with some current problems of modeling the dynamics of the development of the subject-features of the individual. The term "process" is considered in the context of the humanistic-integrative approach, in which the principles of self-education are regarded as criteria for efficient pedagogic activity. Four basic characteristics of the pedagogic process are pointed out: intentionality reflects the logicality and regularity of the development of the process; discreteness (stageability) indicates the qualitative stages through which the pedagogic phenomenon passes; nonlinearity explains the crisis character of pedagogic processes and reveals inner factors of self-development; situationality requires a selection of pedagogic conditions in accordance with the inner factors, which would enable steering the pedagogic process. Two steps for singling out a particular stage and an algorithm for developing an integrative model for it are offered. The suggested conclusions might be of use for further theoretical research, analyses of educational practices and realistic prediction of pedagogical phenomena.

  18. Innovative process engineering: a generic model of the innovation process

    OpenAIRE

    Pénide, Thomas; Gourc, Didier; Pingaud, Hervé; Peillon, Philippe

    2013-01-01

    International audience; Innovation can be represented as a knowledge transformation process perceived at different levels of granularity. The milestones of this process allow assessment of each of its steps and set up feedback loops that will be highlighted. This innovation process is a good starting point for understanding innovation and then managing it. Best practices being patterns of processes, we describe innovation best practices as compulsory steps in our innovation process. To put into p...

  19. Is it Feasible to Use Students' Self-reported Step Data in a Local School Policy Process?

    DEFF Research Database (Denmark)

    Bonde, Ane Høstgaard; Bruselius-Jensen, Maria

    2017-01-01

    Objective: We examined students’ self-reported step data and discussed the feasibility of using these data in a local school policy process. Methods: For 5 days during school hours, 281 students from grades 5–7, participating in a health education program, measured their steps using a pedometer......: Student-collected data showed similar patterns as reported in the literature, and therefore, a feasible perspective could be to use students’ self-reported step data in a local school policy process.

  20. Study of defect formation from process step anomalies in limited boron source diffusion in crystalline silicon

    Energy Technology Data Exchange (ETDEWEB)

    Singha, Bandana, E-mail: bandana@iitb.ac.in; Solanki, Chetan Singh, E-mail: chetanss@iitb.ac.in [Department of Energy Science and Engineering, IIT Bombay, Mumbai-400076, Maharashtra (India)]

    2016-05-06

    In the limited dopant source diffusion process, the deposition and drive-in conditions of the source play an important role in p-n junction formation. Pre-diffusion anomalies can introduce defects in the emitter region during diffusion, which can glide into the bulk region. The defects formed in the emitter region due to different pre-diffusion issues are therefore studied in this work, with a boron spin-on dopant source diffused into n-type crystalline Si. The samples are prepared for different diffusion times at a diffusion temperature of 900°C. Different characterization techniques used in this work confirm the presence of these defects in the emitter region. The dopant distribution under the same diffusion conditions results in non-uniformity, varying the junction depth of the emitter. A single process step anomaly is sufficient to degrade the emitter performance and should be avoided.

  1. TMS field modelling-status and next steps

    DEFF Research Database (Denmark)

    Thielscher, Axel

    2013-01-01

    In the recent years, an increasing number of studies used geometrically accurate head models and finite element (FEM) or finite difference methods (FDM) to estimate the electric field induced by non-invasive neurostimulation techniques such as transcranial magnetic stimulation (TMS) or transcranial...... necessary.Focusing on motor cortex stimulation by TMS, our goal is to explore to which extent the field estimates based on advanced models correlate with the physiological stimulation effects. For example, we aim at testing whether interindividual differences in the field estimates are also reflected...... in differences in the MEP responses. This would indicate that the field calculations accurately capture the impact of individual macroanatomical features of the head and brain on the induced field distribution, in turn strongly supporting their plausibility.Our approach is based on the SimNIBS software pipeline...

  2. MODELLING PURCHASING PROCESSES FROM QUALITY ASPECTS

    Directory of Open Access Journals (Sweden)

    Zora Arsovski

    2008-12-01

    Full Text Available Management has a fundamental task to identify and direct the primary and specific processes within the purchasing function, applying up-to-date information infrastructure. ISO 9001:2000 defines a process as a number of interrelated or interactive activities transforming inputs into outputs, and the "process approach" as the systematic identification and management of the processes employed within the organization and, particularly, of the relationships among the processes. To direct a quality management system using the process approach, the organization has to determine the map of its general (basic) processes. Primary processes are determined on the grounds of their interrelationship and impact on satisfying customers' needs. To make a proper choice of general business processes, it is necessary to determine the entire business flow, beginning with the customer demand and ending with the delivery of the products or services provided. In the next step the process model is to be converted into a data model, which is essential for the implementation of an information system enabling automation, monitoring, measuring, inspection, analysis and improvement of key purchasing processes. This paper presents the methodology and some results of an investigation into the development of an IS for the purchasing process from quality aspects.

  3. Step by step control of a deep drawing process with piezo-electric actuators in serial operation

    Directory of Open Access Journals (Sweden)

    Bäume Tobias

    2015-01-01

    Full Text Available Due to the design-driven increase in complexity of forming car body parts, it becomes more difficult to ensure a stable forming process. Piezoelectric actuators can influence the material flow of stamping parts effectively. In this article the implementation of piezoelectric actuators in a large scale sheet metal forming tool of a car manufacturer is described. Additionally, it is shown that part quality can be assessed with the help of triangulation laser sensors, which are mounted on the blankholder. The resulting flange draw-in signals were used to reduce the occurrence of wrinkling or the rate of cracking. It was shown that process control improved the quality of the stamping parts significantly.

  4. Improving the two-step remediation process for CCA-treated wood. Part I, Evaluating oxalic acid extraction

    Science.gov (United States)

    Carol Clausen

    2004-01-01

    In this study, three possible improvements to a remediation process for chromated-copper-arsenate (CCA) treated wood were evaluated. The process involves two steps: oxalic acid extraction of wood fiber followed by bacterial culture with Bacillus licheniformis CC01. The three potential improvements to the oxalic acid extraction step were (1) reusing oxalic acid for...

  5. Modeling of column apparatus processes

    CERN Document Server

    Boyadjiev, Christo; Boyadjiev, Boyan; Popova-Krumova, Petya

    2016-01-01

    This book presents a new approach for the modeling of chemical and interphase mass transfer processes in industrial column apparatuses, using convection-diffusion and average-concentration models. The convection-diffusion type models are used for a qualitative analysis of the processes and to assess the main, small and slight physical effects, and then reject the slight effects. As a result, the process mechanism can be identified. It also introduces average concentration models for quantitative analysis, which use the average values of the velocity and concentration over the cross-sectional area of the column. The new models are used to analyze different processes (simple and complex chemical reactions, absorption, adsorption and catalytic reactions), and make it possible to model the processes of gas purification with sulfur dioxide, which form the basis of several patents.

  6. UML in business process modeling

    Directory of Open Access Journals (Sweden)

    Bartosz Marcinkowski

    2013-03-01

    Full Text Available The selection and proper application of business process modeling methods and techniques have a significant impact on organizational improvement capabilities as well as on a proper understanding of the functionality of the information systems that are to support the activity of the organization. A number of business process modeling notations have been popularized in practice in recent decades. The most significant of these notations include the Business Process Modeling Notation (OMG BPMN) and several Unified Modeling Language (OMG UML) extensions. In this paper we assess whether one of the most flexible and strictly standardized contemporary business process modeling notations, i.e. the Rational UML Profile for Business Modeling, enables business analysts to prepare business models that are all-embracing and understandable by all the stakeholders. After the introduction, the methodology of the research is discussed. Section 2 presents selected case study results. The paper is concluded with a summary.

  7. Business Process Modeling: Perceived Benefits

    Science.gov (United States)

    Indulska, Marta; Green, Peter; Recker, Jan; Rosemann, Michael

    The process-centered design of organizations and information systems is globally seen as an appropriate response to the increased economic pressure on organizations. At the methodological core of process-centered management is process modeling. However, business process modeling in large initiatives can be a time-consuming and costly exercise, making it potentially difficult to convince executive management of its benefits. To date, and despite substantial interest and research in the area of process modeling, the understanding of the actual benefits of process modeling in academia and practice is limited. To address this gap, this paper explores the perception of benefits derived from process modeling initiatives, as reported through a global Delphi study. The study incorporates the views of three groups of stakeholders - academics, practitioners and vendors. Our findings lead to the first identification and ranking of 19 unique benefits associated with process modeling. The study in particular found that process modeling benefits vary significantly between practitioners and academics. We argue that the variations may point to a disconnect between research projects and practical demands.

  8. Effect of urea on formation of hydroxyapatite through double-step hydrothermal processing

    International Nuclear Information System (INIS)

    Parthiban, S. Prakash; Kim, Ill Yong; Kikuta, Koichi; Ohtsuki, Chikara

    2011-01-01

    The effect of urea on the formation of hydroxyapatite (HAp) was studied by employing double-step hydrothermal processing of a powder mixture of beta-tricalcium phosphate (β-TCP) and dicalcium phosphate dihydrate (DCPD). Co-existence of urea was found to sustain the morphology of HAp crystals in the compacts at initial concentrations of 2 mol dm⁻³ and below. A homogeneous morphology of needle-like crystals was observed on the compacts carbonated owing to the decomposition of urea. Carbonate ions (CO₃²⁻) were found to be substituted in both the phosphate and hydroxide sites of the HAp lattice. The synthesized HAp was calcium deficient, as it had a Ca/P atomic ratio of 1.62, and the phase was identified as calcium-deficient hydroxyapatite (CDHA). The release of CO₃²⁻ ions from urea during the hydrothermal treatment determined the morphology of the CDHA in the compacts. The use of urea for the morphological control of carbonate-substituted HAp (CHAp) employing the double-step hydrothermal method is established. Highlights: → Carbonate-substituted hydroxyapatite (CHAp) compacts were developed by a new method, namely double-step hydrothermal processing. → CHAp compacts with uniform micromorphology were obtained by using urea as solvent. → Morphology was sustained even at higher concentrations of urea, which emphasizes the versatility of urea. → A homogeneous morphology of the CHAp compacts was obtained at higher concentrations of urea; pores were also formed on the CHAp compacts at higher concentrations. → The slow dissociation of urea under hydrothermal conditions is the reason for the morphology control.

  9. Chemical Process Modeling and Control.

    Science.gov (United States)

    Bartusiak, R. Donald; Price, Randel M.

    1987-01-01

    Describes some of the features of Lehigh University's (Pennsylvania) process modeling and control program. Highlights the creation and operation of the Chemical Process Modeling and Control Center (PMC). Outlines the program's philosophy, faculty, technical program, current research projects, and facilities. (TW)

  10. Chapter 1: Standard Model processes

    OpenAIRE

    Becher, Thomas

    2017-01-01

    This chapter documents the production rates and typical distributions for a number of benchmark Standard Model processes, and discusses new dynamical phenomena arising at the highest energies available at this collider. We discuss the intrinsic physics interest in the measurement of these Standard Model processes, as well as their role as backgrounds for New Physics searches.

  11. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    A. Leitao Rodriguez (Álvaro); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)

    2017-01-01

    textabstractIn this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl.
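
    As a point of reference for the dynamics being priced, a minimal sketch of a plain Euler Monte Carlo simulation of SABR paths is given below; it is emphatically not the multiple time step scheme proposed in the paper, and all parameter values (f0, sigma0, alpha, beta, rho, the strike) are made up for illustration.

```python
import numpy as np

def sabr_euler_paths(f0, sigma0, alpha, beta, rho, T, n_steps, n_paths, seed=0):
    """Plain Euler scheme for dF = sigma*F^beta dW1, dsigma = alpha*sigma dW2,
    corr(dW1, dW2) = rho. Illustrative only; not the paper's multiple time step method."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    f = np.full(n_paths, f0, dtype=float)
    sig = np.full(n_paths, sigma0, dtype=float)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        f = f + sig * np.maximum(f, 0.0) ** beta * np.sqrt(dt) * z1
        # the volatility SDE is lognormal, so it can be stepped exactly
        sig = sig * np.exp(-0.5 * alpha**2 * dt + alpha * np.sqrt(dt) * z2)
    return f

# Toy usage: crude Monte Carlo price of a European call on the forward
paths = sabr_euler_paths(f0=0.05, sigma0=0.20, alpha=0.4, beta=0.7, rho=-0.3,
                         T=1.0, n_steps=200, n_paths=100_000)
call_price = float(np.mean(np.maximum(paths - 0.05, 0.0)))
```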

  12. On an efficient multiple time step Monte Carlo simulation of the SABR model

    NARCIS (Netherlands)

    Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.

    2017-01-01

    In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.

  13. A Semi-Empirical Two Step Carbon Corrosion Reaction Model in PEM Fuel Cells

    Energy Technology Data Exchange (ETDEWEB)

    Young, Alan; Colbow, Vesna; Harvey, David; Rogers, Erin; Wessel, Silvia

    2013-01-01

    The cathode CL of a polymer electrolyte membrane fuel cell (PEMFC) was exposed to high potentials, 1.0 to 1.4 V versus a reversible hydrogen electrode (RHE), that are typically encountered during start-up/shut-down operation. While both platinum dissolution and carbon corrosion occurred, the carbon corrosion effects were isolated and modeled. The presented model separates the carbon corrosion process into two reaction steps: (1) oxidation of the carbon surface to carbon-oxygen groups, and (2) further corrosion of the oxidized surface to carbon dioxide/monoxide. To oxidize and corrode the cathode catalyst carbon support, the CL was subjected to an accelerated stress test that cycled the potential from 0.6 V_RHE to an upper potential limit (UPL) ranging from 0.9 to 1.4 V_RHE at varying dwell times. The reaction rate constants and specific capacitances of carbon and platinum were fitted by evaluating the double-layer capacitance (Cdl) trends. Carbon surface oxidation increased the Cdl due to the increased specific capacitance of carbon surfaces with carbon-oxygen groups, while the second corrosion reaction decreased the Cdl due to loss of the overall carbon surface area. The first oxidation step differed between carbon types, while both reaction rate constants were found to depend on UPL, temperature, and gas relative humidity.
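
    To make the two-step structure concrete, the sketch below integrates a generic "clean carbon -> oxidized carbon -> CO2" surface model and reconstructs a double-layer capacitance trend from the two surface fractions. The rate constants and specific capacitances are assumed placeholder values, not the fitted parameters reported in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed, illustrative parameters (not the fitted values from the paper)
k1, k2 = 1.0e-3, 2.0e-4        # s^-1: surface oxidation and further corrosion rates
c_clean, c_ox = 10e-6, 40e-6   # F cm^-2: specific capacitances of clean / oxidized carbon

def two_step_corrosion(t, y):
    theta_clean, theta_ox = y                  # surface fractions
    return [-k1 * theta_clean,                 # clean carbon is oxidized
            k1 * theta_clean - k2 * theta_ox]  # oxidized carbon corrodes away

sol = solve_ivp(two_step_corrosion, (0.0, 2.0e4), [1.0, 0.0], dense_output=True)
t = np.linspace(0.0, 2.0e4, 200)
theta_clean, theta_ox = sol.sol(t)
cdl_trend = c_clean * theta_clean + c_ox * theta_ox   # rises, then falls as surface area is lost
```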

  14. Biodiesel from dewatered wastewater sludge: a two-step process for a more advantageous production.

    Science.gov (United States)

    Pastore, Carlo; Lopez, Antonio; Lotito, Vincenzo; Mascolo, Giuseppe

    2013-07-01

    Alternative approaches for obtaining biodiesel from municipal sludge have been successfully investigated. In order to avoid the expensive conventional preliminary step of sludge drying, dewatered sludge (TSS: 15 wt%) was used as starting material. The best performance in terms of yield of fatty acid methyl esters (18 wt%) with the lowest energy demand (17 MJ kg⁻¹ FAME) was obtained by a new two-step approach based on hexane extraction carried out directly on dewatered acidified (H2SO4) sludge followed by methanolysis of the extracted lipids. It was found that sulphuric acid plays a key role in the whole process, not only for the transesterification of glycerides but also for the production of new free fatty acids from soaps and their esterification with methanol. In addition to biodiesel production, the investigated process allows valorization of primary sludge as it turns it into a valuable source of chemicals, namely sterols (2.5 wt%), aliphatic alcohols (0.8 wt%) and waxes (2.3 wt%). Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Controlling individual steps in the production process of paracetamol tablets by use of NIR spectroscopy.

    Science.gov (United States)

    Blanco, M; Cueva-Mestanza, R; Peguero, A

    2010-03-11

    Various physical and chemical parameters of interest to the pharmaceutical industry were determined by NIR spectroscopy with a view to assessing the potential of this technique as an effective, expeditious alternative to conventional methods for this purpose. To this end, the following two steps in the production process of tablets containing 1 g of paracetamol were studied: (1) the intermediate granulation, which was characterized in terms of Active Pharmaceutical Ingredient (API) content, average particle size and particle size distribution, and (2) the manufactured tablets, which were examined in relation to compaction pressure and API content. The ultimate aim was to identify critical attributes of the process influencing the quality of the end-product. Based on the results, a new method for determining the API in the end-product was developed and validated for its quality control. Copyright 2009 Elsevier B.V. All rights reserved.
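
    NIR determinations of this kind are usually built on a multivariate calibration such as partial least squares regression. The sketch below shows that generic workflow with scikit-learn on purely synthetic placeholder spectra; the data shapes, number of latent variables and reference values are assumptions and do not reproduce the calibration reported in the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 700))            # placeholder NIR spectra (60 samples x 700 wavelengths)
y = rng.uniform(90.0, 110.0, size=60)     # placeholder reference API contents (% of label claim)

pls = PLSRegression(n_components=5)                  # number of latent variables is assumed
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()   # cross-validated predictions
rmsecv = float(np.sqrt(np.mean((y - y_cv) ** 2)))    # typical calibration figure of merit
pls.fit(X, y)                                        # final model for routine predictions
```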

  16. A Conversation on Data Mining Strategies in LC-MS Untargeted Metabolomics: Pre-Processing and Pre-Treatment Steps

    Directory of Open Access Journals (Sweden)

    Fidele Tugizimana

    2016-11-01

    Full Text Available Untargeted metabolomic studies generate information-rich, high-dimensional, and complex datasets that remain challenging to handle and fully exploit. Despite the remarkable progress in the development of tools and algorithms, the "exhaustive" extraction of information from these metabolomic datasets is still a non-trivial undertaking. A conversation on data mining strategies for maximal information extraction from metabolomic data is needed. Using a liquid chromatography-mass spectrometry (LC-MS)-based untargeted metabolomic dataset, this study explored the influence of collection parameters in the data pre-processing step, of scaling and data transformation on the statistical models generated, and of feature selection thereafter. Data obtained in positive mode from an LC-MS-based untargeted metabolomic study (sorghum plants responding dynamically to infection by a fungal pathogen) were used. Raw data were pre-processed with MarkerLynx™ software (Waters Corporation, Manchester, UK). Here, two parameters were varied: the intensity threshold (50–100 counts) and the mass tolerance (0.005–0.01 Da). After the pre-processing, the datasets were imported into SIMCA (Umetrics, Umeå, Sweden) for further data cleaning and statistical modeling. In addition, different scaling (unit variance, Pareto, etc.) and data transformation (log and power) methods were explored. The results showed that the pre-processing parameters (or algorithms) influence the output dataset with regard to the number of defined features. Furthermore, the study demonstrates that the pre-treatment of data prior to statistical modeling affects the subspace approximation outcome: e.g., the amount of variation in the X-data that the model can explain and predict. The pre-processing and pre-treatment steps subsequently influence the number of statistically significant extracted/selected features (variables). Thus, as informed by the results, to maximize the value of untargeted metabolomic data
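
    For readers unfamiliar with the pre-treatment options compared here, the short sketch below implements log transformation, Pareto scaling and unit-variance (auto) scaling on a feature table with NumPy; the feature matrix is a random placeholder, and the function names are ours, not those of any specific metabolomics package.

```python
import numpy as np

def log_transform(X, offset=1.0):
    """Log10 transformation; the offset avoids log(0) for zero intensities."""
    return np.log10(X + offset)

def pareto_scale(X):
    """Mean-center each feature and divide by the square root of its standard deviation."""
    return (X - X.mean(axis=0)) / np.sqrt(X.std(axis=0, ddof=1))

def unit_variance_scale(X):
    """Mean-center each feature and divide by its standard deviation (autoscaling)."""
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Hypothetical feature table: rows = samples, columns = aligned LC-MS features
X = np.abs(np.random.default_rng(0).normal(1000.0, 300.0, size=(24, 500)))
X_prepared = pareto_scale(log_transform(X))   # one of the combinations explored in the study
```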

  17. Business process modeling in healthcare.

    Science.gov (United States)

    Ruiz, Francisco; Garcia, Felix; Calahorra, Luis; Llorente, César; Gonçalves, Luis; Daniel, Christel; Blobel, Bernd

    2012-01-01

    The importance of the process point of view is not restricted to a specific enterprise sector. In the field of health, as a result of the nature of the service offered, health institutions' processes are also the basis for decision making, which is focused on achieving their objective of providing quality medical assistance. In this chapter the application of business process modelling using the Business Process Modelling Notation (BPMN) standard is described. The main challenges of business process modelling in healthcare are the definition of healthcare processes, the multi-disciplinary nature of healthcare, the flexibility and variability of the activities involved in healthcare processes, the need for interoperability between multiple information systems, and the continuous updating of scientific knowledge in healthcare.

  18. Modeling nuclear processes by Simulink

    Science.gov (United States)

    Rashid, Nahrul Khair Alang Md

    2015-04-01

    Modelling and simulation are essential parts in the study of dynamic systems behaviours. In nuclear engineering, modelling and simulation are important to assess the expected results of an experiment before the actual experiment is conducted or in the design of nuclear facilities. In education, modelling can give insight into the dynamic of systems and processes. Most nuclear processes can be described by ordinary or partial differential equations. Efforts expended to solve the equations using analytical or numerical solutions consume time and distract attention from the objectives of modelling itself. This paper presents the use of Simulink, a MATLAB toolbox software that is widely used in control engineering, as a modelling platform for the study of nuclear processes including nuclear reactor behaviours. Starting from the describing equations, Simulink models for heat transfer, radionuclide decay process, delayed neutrons effect, reactor point kinetic equations with delayed neutron groups, and the effect of temperature feedback are used as examples.
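
    As a rough, non-Simulink illustration of one of the building blocks mentioned above, the sketch below integrates a two-member radionuclide decay chain with SciPy; the decay constants and initial inventory are assumed values, and the block-diagram character of the original Simulink models is of course lost in this script form.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed two-member chain A -> B -> (stable); half-lives are illustrative only
lam_a = np.log(2.0) / 8.0    # d^-1, parent decay constant (assumed ~8-day half-life)
lam_b = np.log(2.0) / 2.4    # d^-1, daughter decay constant (assumed)

def decay_chain(t, n):
    n_a, n_b = n
    return [-lam_a * n_a,                 # parent decays
            lam_a * n_a - lam_b * n_b]    # daughter is produced and decays

sol = solve_ivp(decay_chain, (0.0, 60.0), [1.0e20, 0.0],
                t_eval=np.linspace(0.0, 60.0, 301))
activity_parent = lam_a * sol.y[0]        # activities follow from N(t) * lambda
activity_daughter = lam_b * sol.y[1]
```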

  19. Modeling nuclear processes by Simulink

    Energy Technology Data Exchange (ETDEWEB)

    Rashid, Nahrul Khair Alang Md, E-mail: nahrul@iium.edu.my [Faculty of Engineering, International Islamic University Malaysia, Jalan Gombak, Selangor (Malaysia)

    2015-04-29

    Modelling and simulation are essential parts in the study of dynamic systems behaviours. In nuclear engineering, modelling and simulation are important to assess the expected results of an experiment before the actual experiment is conducted or in the design of nuclear facilities. In education, modelling can give insight into the dynamic of systems and processes. Most nuclear processes can be described by ordinary or partial differential equations. Efforts expended to solve the equations using analytical or numerical solutions consume time and distract attention from the objectives of modelling itself. This paper presents the use of Simulink, a MATLAB toolbox software that is widely used in control engineering, as a modelling platform for the study of nuclear processes including nuclear reactor behaviours. Starting from the describing equations, Simulink models for heat transfer, radionuclide decay process, delayed neutrons effect, reactor point kinetic equations with delayed neutron groups, and the effect of temperature feedback are used as examples.

  20. Path modeling and process control

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar; Rodionova, O.; Pomerantsev, A.

    2007-01-01

    and having three or more stages. The methods are applied to a process control of a multi-stage production process having 25 variables and one output variable. When moving along the process, variables change their roles. It is shown how the methods of path modeling can be applied to estimate variables......Many production processes are carried out in stages. At the end of each stage, the production engineer can analyze the intermediate results and correct process parameters (variables) of the next stage. Both analysis of the process and correction to process parameters at next stage should...... be performed regarding the foreseeable output property y, and with respect to an admissible range of correcting actions for the parameters of the next stage. In this paper the basic principles of path modeling is presented. The mathematics is presented for processes having only one stage, having two stages...

  1. Measurement and modeling of advanced coal conversion processes

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, P.R.; Serio, M.A.; Hamblen, D.G. (Advanced Fuel Research, Inc., East Hartford, CT (United States)); Smoot, L.D.; Brewster, B.S. (Brigham Young Univ., Provo, UT (United States))

    1991-09-25

    The objectives of this study are to establish the mechanisms and rates of basic steps in coal conversion processes, to integrate and incorporate this information into comprehensive computer models for coal conversion processes, to evaluate these models and to apply them to gasification, mild gasification and combustion in heat engines. (VC)

  2. Two-dimensional modeling of stepped planing hulls with open and pressurized air cavities

    Directory of Open Access Journals (Sweden)

    Konstantin I. Matveev

    2012-06-01

    Full Text Available A method of hydrodynamic discrete sources is applied for two-dimensional modeling of stepped planing surfaces. The water surface deformations, wetted hull lengths, and pressure distribution are calculated at given hull attitude and Froude number. Pressurized air cavities that improve hydrodynamic performance can also be modeled with the current method. Presented results include validation examples, parametric calculations of a single-step hull, effect of trim tabs, and performance of an infinite series of periodic stepped surfaces. It is shown that transverse steps can lead to higher lift-drag ratio, although at reduced lift capability, in comparison with a stepless hull. Performance of a multi-step configuration is sensitive to the wave pattern between hulls, which depends on Froude number and relative hull spacing.

  3. Step Flow Model of Radial Growth and Shape Evolution of Semiconductor Nanowires

    Science.gov (United States)

    Filimonov, S. N.; Hervieu, Yu. Yu.

    2016-12-01

    A model of the radial growth of vertically aligned nanowires (NWs) via the formation and propagation of monoatomic steps at the nanowire sidewalls is developed. The model allows the step dynamics and the axial growth of the NW to be described self-consistently. It is shown that the formation of NWs with an abrupt change of wire diameter and a non-tapered section at the top might be explained by the bunching of sidewall steps due to the presence of a strong sink for adatoms at the NW top. The Ehrlich-Schwoebel barrier for the attachment of adatoms to the descending step favors step bunching at the beginning of the radial growth and promotes the decay of the bunch at a later time of the NW growth.

  4. Online integrity monitoring in the protein A step of mAb production processes-increasing reliability and process robustness.

    Science.gov (United States)

    Bork, Christopher; Holdridge, Sarah; Walter, Mark; Fallon, Eric; Pohlscheidt, Michael

    2014-01-01

    The purification of recombinant proteins and antibodies using large packed-bed columns is a key component of most biotechnology purification processes. Because of its efficiency and established practice in the industry, column chromatography is a state-of-the-art technology with a proven capability for removal of impurities, viral clearance, and process efficiency. In general, the validation and monitoring of chromatographic operations, especially of critical process parameters, is required to ensure robust product quality and compliance with health authority expectations. One key aspect of chromatography that needs to be monitored is the integrity of the packed bed, since this is often critical to achieving sufficient separation of protein species. Identification of potential column integrity issues before they occur is important for both product quality and economic efficiency. In this article, we examine how transition analysis techniques can be utilized to monitor column integrity. A case study on the application of this method during a large-scale Protein A capture step in an antibody purification process shows how it can assist with improving process knowledge and increasing the efficiency of manufacturing operations. © 2013 American Institute of Chemical Engineers.
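
    Transition analysis generally evaluates how sharply a step change (for example in conductivity during a buffer switch) passes through the packed bed; a broadened or asymmetric transition indicates a compromised bed. The sketch below shows one generic moment-based variant on a synthetic sigmoidal trace; the trace, bed height and the HETP-style figure of merit are assumptions, not the exact procedure of the cited case study.

```python
import numpy as np

def transition_hetp(volume, signal, bed_height_cm):
    """Moment analysis of a step transition: differentiate the breakthrough-like trace,
    then use the first and second central moments of the resulting peak."""
    peak = np.clip(np.gradient(signal, volume), 0.0, None)    # derivative of the transition
    area = np.trapz(peak, volume)
    mu = np.trapz(volume * peak, volume) / area               # first moment
    var = np.trapz((volume - mu) ** 2 * peak, volume) / area  # second central moment
    return bed_height_cm * var / mu**2                        # HETP-style integrity metric

# Hypothetical conductivity step versus flow-through volume
v = np.linspace(0.0, 200.0, 2001)
trace = 1.0 / (1.0 + np.exp(-(v - 100.0) / 4.0))
metric = transition_hetp(v, trace, bed_height_cm=20.0)
```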

  5. The indirect link between perceived parenting and adolescent future orientation : A multiple-step model

    NARCIS (Netherlands)

    Seginer, R.; Vermulst, A.A.; Shoyer, S.

    2004-01-01

    The indirect links between perceived mothers' and fathers' autonomous-accepting parenting and future orientation were examined in a mediational model consisting of five steps: perceived mothers' and fathers' autonomous-accepting parenting, self-evaluation, and the motivational, cognitive

  6. Focused particle beam nano-machining: the next evolution step towards simulation aided process prediction

    International Nuclear Information System (INIS)

    Plank, Harald

    2015-01-01

    During the last decade, focused ion beam processing has been developed from the traditionally used Ga+ liquid ion sources towards higher-resolution gas field ion sources (He+ and Ne+). Process simulations not only improve the fundamental understanding of the relevant ion–matter interactions, but also enable a certain predictive power to accelerate advances. The historic ‘gold’ standard in ion–solid simulations is the SRIM/TRIM Monte Carlo package released by Ziegler, Ziegler and Biersack 2010 Nucl. Instrum. Methods B 268 1818–23. While SRIM/TRIM is very useful for a myriad of applications, it is not applicable for the understanding of the nanoscale evolution associated with ion beam nano-machining, as the substrate does not evolve with the sputtering process. As a solution for this problem, a new, adapted simulation code, which addresses these issues, is briefly overviewed. By that, experimentally observed Ne+ beam sputter profiles can be explained from a fundamental point of view. Due to their very good agreement, these simulations contain the potential for computer-aided optimization towards predictable sputter processes for different nanotechnology applications. With these benefits in mind, the discussed simulation approach represents an enormous step towards a computer-based master tool for adaptable ion beam applications in the context of industrial applications. (viewpoint)

  7. Enriching step-based product information models to support product life-cycle activities

    Science.gov (United States)

    Sarigecili, Mehmet Ilteris

    The representation and management of product information over its life-cycle requires standardized data exchange protocols. The Standard for the Exchange of Product Model Data (STEP) is such a standard and has been used widely by industry. Even though STEP-based product models are well defined and syntactically correct, populating product data according to these models is not easy because they are too big and disorganized. Data exchange specifications (DEXs) and templates provide re-organized information models required in the data exchange of specific activities for various businesses. DEXs show that it is possible to organize STEP-based product models in order to support different engineering activities at various stages of the product life-cycle. In this study, STEP-based models are enriched and organized to support two engineering activities: materials information declaration and tolerance analysis. Due to new environmental regulations, the substance and materials information in products has to be screened closely by manufacturing industries. This requires a fast, unambiguous and complete product information exchange between the members of a supply chain. The tolerance analysis activity, on the other hand, is used to verify the functional requirements of an assembly considering the worst-case (i.e., maximum and minimum) conditions for the part/assembly dimensions. Another issue with STEP-based product models is that the semantics of product data are represented implicitly. Hence, it is difficult to interpret the semantics of data for different product life-cycle phases and various application domains. OntoSTEP, developed at NIST, provides semantically enriched product models in OWL. In this thesis, we present how to interpret GD&T specifications in STEP for tolerance analysis by utilizing OntoSTEP.

  8. Validation Testing of the Nitric Acid Dissolution Step Within the K Basin Sludge Pretreatment Process

    International Nuclear Information System (INIS)

    AJ Schmidt; CH Delegard; KL Silvers; PR Bredt; CD Carlson; EW Hoppe; JC Hayes; DE Rinehart; SR Gano; BM Thornton

    1999-01-01

    The work described in this report involved comprehensive bench-scale testing of nitric acid (HNO3) dissolution of actual sludge materials from the Hanford K East (KE) Basin to confirm the baseline chemical pretreatment process. In addition, process monitoring and material balance information was collected to support the development and refinement of process flow diagrams. The testing was performed by Pacific Northwest National Laboratory (PNNL) for the US Department of Energy's Office of Spent Fuel Stabilization (EM-67) and Numatec Hanford Corporation (NHC) to assist in the development of the K Basin Sludge Pretreatment Process. The baseline chemical pretreatment process for K Basin sludge is nitric acid dissolution of all particulate material passing a 1/4-in. screen. The acid-insoluble fraction (residual solids) will be stabilized (possibly by chemical leaching/rinsing and grouting), packaged, and transferred to the Hanford Environmental Restoration Disposal Facility (ERDF). The liquid fraction is to be diluted with depleted uranium for uranium criticality safety and iron nitrate for plutonium criticality safety, and neutralized with sodium hydroxide. The liquid fraction and associated precipitates are to be stored in the Hanford Tank Waste Remediation Systems (TWRS) pending vitrification. It is expected that most of the polychlorinated biphenyls (PCBs), associated with some K Basin sludges, will remain with the residual solids for ultimate disposal to ERDF. Filtration and precipitation during the neutralization step will further remove trace quantities of PCBs within the liquid fraction. The purpose of the work discussed in this report was to examine the dissolution behavior of actual KE Basin sludge materials at baseline flowsheet conditions and validate the dissolution process step through bench-scale testing. The progress of the dissolution was evaluated by measuring the solution electrical conductivity and concentrations of key species in the dissolver

  9. Validation Testing of the Nitric Acid Dissolution Step Within the K Basin Sludge Pretreatment Process

    Energy Technology Data Exchange (ETDEWEB)

    AJ Schmidt; CH Delegard; KL Silvers; PR Bredt; CD Carlson; EW Hoppe; JC Hayes; DE Rinehart; SR Gano; BM Thornton

    1999-03-24

    The work described in this report involved comprehensive bench-scale testing of nitric acid (HNO3) dissolution of actual sludge materials from the Hanford K East (KE) Basin to confirm the baseline chemical pretreatment process. In addition, process monitoring and material balance information was collected to support the development and refinement of process flow diagrams. The testing was performed by Pacific Northwest National Laboratory (PNNL) for the US Department of Energy's Office of Spent Fuel Stabilization (EM-67) and Numatec Hanford Corporation (NHC) to assist in the development of the K Basin Sludge Pretreatment Process. The baseline chemical pretreatment process for K Basin sludge is nitric acid dissolution of all particulate material passing a 1/4-in. screen. The acid-insoluble fraction (residual solids) will be stabilized (possibly by chemical leaching/rinsing and grouting), packaged, and transferred to the Hanford Environmental Restoration Disposal Facility (ERDF). The liquid fraction is to be diluted with depleted uranium for uranium criticality safety and iron nitrate for plutonium criticality safety, and neutralized with sodium hydroxide. The liquid fraction and associated precipitates are to be stored in the Hanford Tank Waste Remediation Systems (TWRS) pending vitrification. It is expected that most of the polychlorinated biphenyls (PCBs), associated with some K Basin sludges, will remain with the residual solids for ultimate disposal to ERDF. Filtration and precipitation during the neutralization step will further remove trace quantities of PCBs within the liquid fraction. The purpose of the work discussed in this report was to examine the dissolution behavior of actual KE Basin sludge materials at baseline flowsheet conditions and validate the dissolution process step through bench-scale testing. The progress of the dissolution was evaluated by measuring the solution electrical conductivity and concentrations of key species in the

  10. STEPS: efficient simulation of stochastic reaction–diffusion models in realistic morphologies

    Directory of Open Access Journals (Sweden)

    Hepburn Iain

    2012-05-01

    Full Text Available Abstract Background: Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role, the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. Results: We describe STEPS, a stochastic reaction–diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction–diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. Conclusion: STEPS simulates
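
    For readers new to the underlying kinetics, a minimal, well-mixed Gillespie direct-method SSA is sketched below; STEPS itself uses the more scalable composition-and-rejection variant and adds diffusion between tetrahedral voxels, so this is only the conceptual core. The example reaction network and rate constants are made up.

```python
import numpy as np

def gillespie_direct(x0, stoich, propensities, t_end, seed=0):
    """Basic Gillespie direct method for a well-mixed reaction system (illustration only)."""
    rng = np.random.default_rng(seed)
    t = 0.0
    x = np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = propensities(x)                 # propensity of each reaction channel
        a0 = a.sum()
        if a0 <= 0.0:
            break                           # no further reactions possible
        t += rng.exponential(1.0 / a0)      # waiting time to the next event
        j = rng.choice(len(a), p=a / a0)    # which channel fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Hypothetical reversible binding A + B <-> C with assumed rate constants
stoich = np.array([[-1, -1, +1],            # forward: A + B -> C
                   [+1, +1, -1]])           # reverse: C -> A + B
rates = lambda x: np.array([1.0e-3 * x[0] * x[1], 0.1 * x[2]])
t, traj = gillespie_direct([100, 80, 0], stoich, rates, t_end=50.0)
```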

  11. Markov Decision Process Measurement Model.

    Science.gov (United States)

    LaMar, Michelle M

    2018-03-01

    Within-task actions can provide additional information on student competencies but are challenging to model. This paper explores the potential of using a cognitive model for decision making, the Markov decision process, to provide a mapping between within-task actions and latent traits of interest. Psychometric properties of the model are explored, and simulation studies report on parameter recovery within the context of a simple strategy game. The model is then applied to empirical data from an educational game. Estimates from the model are found to correlate more strongly with posttest results than a partial-credit IRT model based on outcome data alone.
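
    To fix ideas about the decision-making core of such a model, the sketch below runs plain value iteration on a toy finite MDP; it covers only the planning side, not the psychometric layer that maps observed actions to latent traits, and the transition probabilities and rewards are invented.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P[a, s, s'] = transition probabilities, R[a, s] = expected rewards (both assumed)."""
    v = np.zeros(P.shape[1])
    while True:
        q = R + gamma * np.einsum('ast,t->as', P, v)   # action values for every state
        v_new = q.max(axis=0)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmax(axis=0)             # optimal values and greedy policy
        v = v_new

# Toy 2-state, 2-action MDP with made-up dynamics and rewards
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
values, policy = value_iteration(P, R)
```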

  12. Mechanical, thermal and morphological characterization of polycarbonate/oxidized carbon nanofiber composites produced with a lean 2-step manufacturing process.

    Science.gov (United States)

    Lively, Brooks; Kumar, Sandeep; Tian, Liu; Li, Bin; Zhong, Wei-Hong

    2011-05-01

    In this study we report the advantages of a 2-step method that incorporates an additional process pre-conditioning step for rapid and precise blending of the constituents prior to the commonly used melt compounding method for preparing polycarbonate/oxidized carbon nanofiber composites. This additional step (equivalent to a manufacturing cell) involves the formation of a highly concentrated solid nano-nectar of polycarbonate/carbon nanofiber composite using a solution mixing process followed by melt mixing with pure polycarbonate. This combined method yields excellent dispersion and improved mechanical and thermal properties as compared to the 1-step melt mixing method. The test results indicated that inclusion of carbon nanofibers into composites via the 2-step method resulted in dramatically reduced (48% lower) coefficient of thermal expansion compared to that of pure polycarbonate and 30% lower than that from the 1-step processing, at the same loading of 1.0 wt%. Improvements were also found in dynamic mechanical analysis and flexural mechanical properties. The 2-step approach is more precise and leads to better dispersion, higher quality, consistency, and improved performance in critical application areas. It is also consistent with Lean Manufacturing principles in which manufacturing cells are linked together using less of the key resources and creates a smoother production flow. Therefore, this 2-step process can be more attractive for industry.

  13. Modeling of X-ray Images and Energy Spectra Produced by Stepping Lightning Leaders

    Science.gov (United States)

    Xu, Wei; Marshall, Robert A.; Celestin, Sebastien; Pasko, Victor P.

    2017-11-01

    Recent ground-based measurements at the International Center for Lightning Research and Testing (ICLRT) have greatly improved our knowledge of the energetics, fluence, and evolution of X-ray emissions during natural cloud-to-ground (CG) and rocket-triggered lightning flashes. In this paper, using Monte Carlo simulations and the response matrix of unshielded detectors in the Thunderstorm Energetic Radiation Array (TERA), we calculate the energy spectra of X-rays as would be detected by TERA and directly compare with the observational data during event MSE 10-01. The good agreement obtained between TERA measurements and theoretical calculations supports the mechanism of X-ray production by thermal runaway electrons during the negative corona flash stage of stepping lightning leaders. Modeling results also suggest that measurements of X-ray bursts can be used to estimate the approximate range of potential drop of lightning leaders. Moreover, the X-ray images produced during the leader stepping process in natural negative CG discharges, including both the evolution and morphological features, are theoretically quantified. We show that the compact emission pattern as recently observed in X-ray images is likely produced by X-rays originating from the source region, and the diffuse emission pattern can be explained by the Compton scattering effects.
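
    Comparing a simulated source spectrum with detector data of this kind usually involves folding the spectrum through the instrument response matrix. The core of that step is sketched below with purely random placeholder numbers; the real TERA response matrix and the Monte Carlo photon spectra are of course not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_src = 64, 120                                 # assumed numbers of channels / source bins
response = np.abs(rng.normal(size=(n_meas, n_src)))     # placeholder response matrix R[i, j]
response /= response.sum(axis=0, keepdims=True)         # crude per-source-bin normalization
source = np.exp(-np.linspace(0.03, 1.0, n_src) / 0.2)   # placeholder falling photon spectrum
expected_counts = response @ source                     # spectrum as the detector would record it
```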

  14. Optical Properties of ZnO Nanowires and Nanorods Synthesized by Two Step Oxidation Process

    Directory of Open Access Journals (Sweden)

    Vahid ghafouri

    2013-12-01

    Full Text Available ZnO nanowires with a diameter of 70 nm and nanorods with a diameter in the range of 100-150 nm and two micrometers in length were grown on glass substrates by the resistive evaporation method, applying a two-step oxidation process at low temperatures, without using any catalyst, template or buffer layer. The XRD patterns of these nanostructures indicated good crystallinity with a wurtzite hexagonal structure. Photoluminescence measurements revealed three band emissions: one sharp, strong peak in the UV region and two weaker peaks in the visible region, indicating good optical properties of the nanorods synthesized by this method. Heat treatment in an oxygen-rich atmosphere results in a decrease of the deep-level emission intensity in the PL spectra. The relatively high intensity of the UV emission implies that this approach is a simple and promising method for fabricating ZnO nanorods to be used in optoelectronic devices, especially in the UV range of the spectrum.

  15. Characteristic analysis of laser isotope separation process by two-step photodissociation method

    International Nuclear Information System (INIS)

    Okamoto, Tsuyoshi; Suzuki, Atsuyuki; Kiyose, Ryohei

    1981-01-01

    A large number of laser isotope separation experiments have been performed in many countries. In this paper, the selective two-step photodissociation method is chosen and the simultaneous nonlinear differential equations that express the separation process are solved directly by computer. Predicted separation factors are investigated in relation to the incident pulse energy and the concentration of the desired molecules. Furthermore, the concept of separative work is used to evaluate the results of separation for this method. It is shown from an example numerical calculation that a very large separation factor can be obtained if the concentration of the desired molecules is lowered, and that close synchronization of the two laser pulses is not always required for the photodissociation of the molecules. (author)
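
    The "simultaneous nonlinear differential equations" referred to above are, in schematic form, rate equations for the ground, intermediate and dissociated populations of the selected isotope. A minimal sketch under assumed cross sections, photon fluxes and relaxation rate is given below; it is not the actual equation set or parameter values of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed, purely illustrative parameters
sigma1, sigma2 = 1.0e-18, 5.0e-19   # cm^2: excitation and dissociation cross sections
phi1, phi2 = 1.0e24, 1.0e24         # photons cm^-2 s^-1: fluxes of the two lasers
relax = 1.0e6                       # s^-1: relaxation of the intermediate state

def rate_equations(t, n):
    n0, n1, nd = n                  # ground, excited, dissociated fractions
    return [-sigma1 * phi1 * n0 + relax * n1,
            sigma1 * phi1 * n0 - (relax + sigma2 * phi2) * n1,
            sigma2 * phi2 * n1]

sol = solve_ivp(rate_equations, (0.0, 2.0e-6), [1.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 2.0e-6, 201))
dissociated_fraction = float(sol.y[2, -1])
```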

  16. Two-step optimization of pressure and recovery of reverse osmosis desalination process.

    Science.gov (United States)

    Liang, Shuang; Liu, Cui; Song, Lianfa

    2009-05-01

    Driving pressure and recovery are two primary design variables of a reverse osmosis process that largely determine the total cost of seawater and brackish water desalination. A two-step optimization procedure was developed in this paper to determine the values of driving pressure and recovery that minimize the total cost of RO desalination. It was demonstrated that the optimal net driving pressure is solely determined by the electricity price and the membrane price index, which is a lumped parameter to collectively reflect membrane price, resistance, and service time. On the other hand, the optimal recovery is determined by the electricity price, initial osmotic pressure, and costs for pretreatment of raw water and handling of retentate. Concise equations were derived for the optimal net driving pressure and recovery. The dependences of the optimal net driving pressure and recovery on the electricity price, membrane price, and costs for raw water pretreatment and retentate handling were discussed.
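
    The closed-form optima derived in the paper are not reproduced here; as a numerical stand-in, the sketch below minimizes a hypothetical total-cost function over net driving pressure and recovery with SciPy. Every cost coefficient and functional form in it is an assumption chosen only to make the two-variable optimization concrete.

```python
import numpy as np
from scipy.optimize import minimize

pi0 = 2.5e5                 # Pa: feed osmotic pressure (assumed)
c_energy = 2.0e-8           # $ per J of pumping work (assumed)
c_membrane = 5.0            # $ proxy for membrane cost per unit net driving force (assumed)
c_pre, c_ret = 0.05, 0.08   # $ per m^3: pretreatment of feed / handling of retentate (assumed)

def total_cost(x):
    dp, r = x                                       # net driving pressure [Pa], recovery [-]
    if r <= 0.0 or r >= 1.0 or dp <= pi0 / (1.0 - r):
        return 1.0e9                                # infeasible: pressure below brine osmotic pressure
    energy = c_energy * dp / r                      # pumping cost per m^3 of permeate
    membrane = c_membrane / (dp - pi0 / (1.0 - r))  # more membrane area needed at low net pressure
    feed = (c_pre + c_ret * (1.0 - r)) / r          # feed-related costs per m^3 of permeate
    return energy + membrane + feed

res = minimize(total_cost, x0=[8.0e5, 0.5], method='Nelder-Mead')
opt_pressure_pa, opt_recovery = res.x
```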

  17. Effects of Process Parameters on the Characteristics of Mixed-Halide Perovskite Solar Cells Fabricated by One-Step and Two-Step Sequential Coating.

    Science.gov (United States)

    Ahmadian-Yazdi, Mohammad Reza; Zabihi, Fatemeh; Habibi, Mehran; Eslamian, Morteza

    2016-12-01

    In this paper, two-step sequential spin-dip and spin-spin coating, as well as one-step spin coating, methods are used to fabricate methylammonium lead mixed-halide perovskites to study the effect of process parameters, including the choice of the solvent, annealing temperature, spin velocity, and dipping time on the characteristics of the perovskite film. Our results show that using a mixture of DMF and DMSO, with volume ratio of 1:1, as the organic solvents for PbCl2 results in the best mixed-halide perovskite because of the effective coordination between DMSO and PbCl2. Surface dewetting due to two effects, i.e., crystallization and thin liquid film instability, is observed and discussed, where an intermediate spin velocity of about 4000 rpm is found suitable to suppress dewetting. The perovskite film fabricated using the one-step method followed by anti-solvent treatment shows the best perovskite conversion in XRD patterns, and the planar device fabricated using the same method exhibited the highest efficiency among the employed methods. The perovskite layer made by sequential spin-dip coating is found thicker with higher absorbance, but the device shows a lower efficiency because of the challenges associated with perovskite conversion in the sequential method. The one-step deposition method is found easier to control and more promising than the sequential deposition methods.

  18. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    the production trait evaluation of Nordic Red dairy cattle. Genotyped bulls with daughters are used as training animals, and genotyped bulls and producing cows as candidate animals. For simplicity, the size of the data is chosen so that the full inverses of the mixed model equation coefficient matrices can......Model-based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account for animal models. The study data is extracted from...... be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by the animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally, the bivariate blending method was...

  19. Simple Models for Process Control

    Czech Academy of Sciences Publication Activity Database

    Gorez, R.; Klán, Petr

    2011-01-01

    Roč. 22, č. 2 (2011), s. 58-62 ISSN 0929-2268 Institutional research plan: CEZ:AV0Z10300504 Keywords: process models * PID control * second order dynamics Subject RIV: JB - Sensors, Measurement, Regulation

  20. A novel multimodal chromatography based single step purification process for efficient manufacturing of an E. coli based biotherapeutic protein product.

    Science.gov (United States)

    Bhambure, Rahul; Gupta, Darpan; Rathore, Anurag S

    2013-11-01

    Methionine oxidized, reduced and fMet forms of a native recombinant protein product are often the critical product variants associated with proteins expressed as bacterial inclusion bodies in E. coli. Such product variants differ from the native protein in their structural and functional aspects, and may lead to loss of biological activity and an immunogenic response in patients. This investigation focuses on the evaluation of multimodal chromatography for selective removal of these product variants using recombinant human granulocyte colony stimulating factor (GCSF) as the model protein. Unique selectivity in the separation of closely related product variants was obtained using combined pH- and salt-based elution gradients in hydrophobic charge induction chromatography. Simultaneous removal of process related impurities was also achieved in the flow-through, leading to a single-step purification process for GCSF. Results indicate that product recovery of up to 90.0% can be obtained with purity levels greater than 99.0%. Binding the target protein at pH… Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Model feedstock supply processing plants

    Directory of Open Access Journals (Sweden)

    V. M. Bautin

    2013-01-01

    Full Text Available A model of raw-material supply is developed for processing enterprises that belong to a vertically integrated structure for the production and processing of dairy raw materials. The model is distinguished by its orientation toward the cumulative effect achieved by the integrated structure, which acts as the criterion function; this function is maximized by optimizing capacities, the volumes and quality characteristics of raw-material deliveries, the costs of industrial processing of raw materials, and the demand for dairy products.

  2. Sato Processes in Default Modeling

    DEFF Research Database (Denmark)

    Kokholm, Thomas; Nicolato, Elisa

    In reduced form default models, the instantaneous default intensity is classically the modeling object. Survival probabilities are then given by the Laplace transform of the cumulative hazard defined as the integrated intensity process. Instead, recent literature has shown a tendency towards...

  3. Sato Processes in Default Modeling

    DEFF Research Database (Denmark)

    Kokholm, Thomas; Nicolato, Elisa

    2010-01-01

    In reduced form default models, the instantaneous default intensity is the classical modeling object. Survival probabilities are then given by the Laplace transform of the cumulative hazard defined as the integrated intensity process. Instead, recent literature tends to specify the cumulative...

  4. Kinetic analysis on the two-step processes of AOB and NOB in aerobic nitrifying granules.

    Science.gov (United States)

    Fang, Fang; Ni, Bing-Jie; Li, Xiao-Yan; Sheng, Guo-Ping; Yu, Han-Qing

    2009-07-01

    Complete granulation of nitrifying sludge was achieved in a sequencing batch reactor. For the granular sludge, batch experiments were conducted to characterize the kinetic features of ammonia oxidizers (AOB) and nitrite oxidizers (NOB) in the granules using the respirometric method. A two-step nitrification model was established to determine the kinetic parameters of both AOB and NOB. In addition to nitrification reactions, the new model also took into account biomass maintenance and mass transfer through the granules. The yield coefficient, maximum specific growth rate, and affinity constant for ammonium for AOB were 0.21 g chemical oxygen demand (COD) g(-1) N, 0.09 h(-1), and 9.1 mg N L(-1), respectively, whereas the corresponding values for NOB were 0.05 g COD g(-1) N, 0.11 h(-1), and 4.85 mg N L(-1), respectively. The model developed in this study performed well in simulating the oxygen uptake rate and nitrogen conversion kinetics and in predicting the oxygen consumption of the AOB and NOB in aerobic granules.
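
    To make the reported kinetics concrete, the sketch below integrates a minimal Monod-type two-step nitrification scheme (AOB: ammonium to nitrite, NOB: nitrite to nitrate) using the yield coefficients, maximum specific growth rates, and affinity constants quoted in the abstract. The biomass concentrations and initial ammonium level are assumed, and oxygen limitation, maintenance, and intragranular mass transfer are omitted, so this is not the authors' full model.

```python
# Minimal two-step nitrification sketch using the kinetic constants quoted above.
# Biomass levels and the initial NH4-N are assumed; oxygen limitation, maintenance
# and diffusion into the granules (all part of the full model) are omitted.
from scipy.integrate import solve_ivp

Y_AOB, MU_AOB, K_NH4 = 0.21, 0.09, 9.1    # g COD/g N, 1/h, mg N/L (from the abstract)
Y_NOB, MU_NOB, K_NO2 = 0.05, 0.11, 4.85   # g COD/g N, 1/h, mg N/L (from the abstract)
X_AOB, X_NOB = 50.0, 25.0                 # assumed biomass concentrations, mg COD/L

def rates(t, y):
    nh4, no2, no3 = y
    r_aob = MU_AOB / Y_AOB * X_AOB * nh4 / (K_NH4 + nh4)  # mg N/(L*h) oxidized by AOB
    r_nob = MU_NOB / Y_NOB * X_NOB * no2 / (K_NO2 + no2)  # mg N/(L*h) oxidized by NOB
    return [-r_aob, r_aob - r_nob, r_nob]

sol = solve_ivp(rates, (0.0, 12.0), [30.0, 0.0, 0.0])  # start from 30 mg NH4-N/L
print("final NH4-N, NO2-N, NO3-N:", sol.y[:, -1])
```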

  5. Multi-step processes in the (d, t) and (d, 3He) reactions on 116Sn and 208Pb targets at Ed = 200 MeV

    International Nuclear Information System (INIS)

    Langevin-Joliot, H.; Van de Wiele, J.; Guillot, J.; Koning, A.J.

    2000-01-01

    The role of multi-step processes in the reactions 116Sn(d,t), 208Pb(d,t) and 116Sn(d,3He), previously studied at E_d = 200 MeV at forward angles and for relatively low energy transfers, has been investigated. We have performed for the first time multi-step calculations taking into account systematically collective excitations in the second and higher order step inelastic transitions. A calculation code based on the Feshbach, Kerman and Koonin model has been modified to handle explicitly these collective excitations, most important in the forward angle domain. One-step double differential pick-up cross sections were built from finite range distorted wave results spread in energy using known or estimated hole state characteristics. It is shown that two-step cross sections calculated using the above method compare rather well with those deduced via coupled channel calculations for the same collective excitations. The multi-step calculations performed up to 6 steps reproduce reasonably well the 115Sn, 207Pb and 115In experimental spectra measured up to E_x ≈ 40 MeV and 15 deg. The relative contributions of steps of increasing order to pick-up cross sections at E_d = 200 MeV and 150 MeV are discussed. (authors)

  6. Improving powder flow properties of a direct compression formulation using a two-step glidant mixing process.

    Science.gov (United States)

    Abe, Hidaka; Yasui, Shinichiro; Kuwata, Aya; Takeuchi, Hirofumi

    2009-07-01

    To improve powder flow of a high-dose direct compression formulation (drug content 30%), we compared a two-step operation for mixing glidants with a conventional one-step glidant mixing process. This two-step mixing operation was studied with two kinds of mixtures; an active pharmaceutical ingredient (API)-glidant combination and a direct compression excipient-glidant combination. The two-step operation permitted the selection of the optimum glidant type and concentration in each glidant-mixing procedure even though the formulation had different powder properties such as micronized API and enlarged direct compression vehicles, whereas the conventional approaches forced the selection of a certain glidant type and concentration at one-step mixing. The addition of 0.5% nonporous silica markedly improved API flow. In contrast, 1.0% porous silica was the appropriate glidant to enhance excipient flow at direct compression excipient-glidant mixing. The two-step operation dominantly enhanced powder flow when the appropriate API-glidant mixture and the suitable direct compression excipients-glidant mixture were blended compared to the one-step operation with its optimum glidant concentration. The results showed that the angle of repose was 43 degrees and the critical orifice diameter was 10 mm in the two-step operation, whereas it was 47 degrees and 16 mm in the one-step operation. The two-step operation of glidant mixing enhanced powder flow of the high-dose direct compression formulation compared with the one-step operation. The two-step operation eliminates the bottleneck of powder flow and allows direct compression to be more worth applying for formulation and process development trials.

  7. Command Process Modeling & Risk Analysis

    Science.gov (United States)

    Meshkat, Leila

    2011-01-01

    Commanding Errors may be caused by a variety of root causes. It's important to understand the relative significance of each of these causes for making institutional investment decisions. One of these causes is the lack of standardized processes and procedures for command and control. We mitigate this problem by building periodic tables and models corresponding to key functions within it. These models include simulation analysis and probabilistic risk assessment models.

  8. Modelling of transport and biogeochemical processes in pollution plumes: Literature review of model development

    DEFF Research Database (Denmark)

    Brun, A.; Engesgaard, Peter Knudegaard

    2002-01-01

    A literature survey shows how biogeochemical (coupled organic and inorganic reaction processes) transport models are based on considering the complete biodegradation process as either a single-step or a two-step process. It is demonstrated that some two-step process models rely on the Partial Equilibrium approach… A second paper [J. Hydrol. 256 (2002) 230-249] reports the application of the model to a field study of biogeochemical transport processes in a landfill plume in Denmark (Vejen). © 2002 Elsevier Science B.V. All rights reserved.

  9. Modelling Hospital Materials Management Processes

    Directory of Open Access Journals (Sweden)

    Raffaele Iannone

    2013-06-01

    The paper first provides an integrated and detailed analysis and description model for hospital materials management data and tasks, which is able to tackle information from patient requirements to usage, from replenishment requests to supplying and handling activities. The model takes account of medical risk reduction, traceability and streamlined process perspectives. Second, the paper translates this information into a business process model and a mathematical formalization. The study provides a useful guide to the various relevant technology-related, management and business issues, laying the foundations of an efficient reengineering of the supply chain to reduce healthcare costs and improve the quality of care.

  10. Determination of the mass transfer limiting step of dye adsorption onto commercial adsorbent by using mathematical models.

    Science.gov (United States)

    Marin, Pricila; Borba, Carlos Eduardo; Módenes, Aparecido Nivaldo; Espinoza-Quiñones, Fernando R; de Oliveira, Silvia Priscila Dias; Kroumov, Alexander Dimitrov

    2014-01-01

    Reactive blue 5G dye removal in a fixed-bed column packed with Dowex Optipore SD-2 adsorbent was modelled. Three mathematical models were tested in order to determine the limiting step of the mass transfer of the dye adsorption process onto the adsorbent. The mass transfer resistance was considered to be a criterion for the determination of the difference between models. The models contained information about the external, internal, or surface adsorption limiting step. In the model development procedure, two hypotheses were applied to describe the internal mass transfer resistance. First, the mass transfer coefficient constant was considered. Second, the mass transfer coefficient was considered as a function of the dye concentration in the adsorbent. The experimental breakthrough curves were obtained for different particle diameters of the adsorbent, flow rates, and feed dye concentrations in order to evaluate the predictive power of the models. The values of the mass transfer parameters of the mathematical models were estimated by using the downhill simplex optimization method. The results showed that the model that considered internal resistance with a variable mass transfer coefficient was more flexible than the other ones and this model described the dynamics of the adsorption process of the dye in the fixed-bed column better. Hence, this model can be used for optimization and column design purposes for the investigated systems and similar ones.
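
    The column models used in the paper are solved numerically and are not reproduced here; purely to illustrate the downhill simplex (Nelder-Mead) estimation step mentioned above, the sketch below fits a simple two-parameter breakthrough expression to synthetic data by minimizing the sum of squared errors. The model form, the data, and the parameter names are placeholders.

```python
# Illustration of the downhill simplex (Nelder-Mead) parameter-estimation step only:
# a simple logistic-type breakthrough expression is fitted to synthetic C/C0 data.
# The actual models in the paper are more detailed; this form and the data are placeholders.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 300.0, 31)                         # time, min
c_exp = 1.0 / (1.0 + np.exp(-(t - 150.0) / 20.0))       # synthetic "measured" C/C0

def breakthrough(params, t):
    k, t_b = params                                     # rate-type and breakthrough-time parameters
    return 1.0 / (1.0 + np.exp(-k * (t - t_b)))

def sse(params):
    return np.sum((breakthrough(params, t) - c_exp) ** 2)

fit = minimize(sse, x0=[0.01, 100.0], method="Nelder-Mead")
print("estimated parameters:", fit.x)                   # should approach k=0.05, t_b=150
```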

  11. Perovskite Hollow Fibers with Precisely Controlled Cation Stoichiometry via One-Step Thermal Processing.

    Science.gov (United States)

    Zhu, Jiawei; Zhang, Guangru; Liu, Gongping; Liu, Zhengkun; Jin, Wanqin; Xu, Nanping

    2017-05-01

    The practical applications of perovskite hollow fibers (HFs) are limited by challenges in producing these easily, cheaply, and reliably. Here, a one-step thermal processing approach is reported for the efficient production of high performance perovskite HFs, with precise control over their cation stoichiometry. In contrast to traditional production methods, this approach directly uses earth-abundant raw chemicals in a single thermal process. This approach can control cation stoichiometry by avoiding interactions between the perovskites and polar solvents/nonsolvents, optimizes sintering, and results in high performance HFs. Furthermore, this method saves much time and energy (≈ 50%), therefore pollutant emissions are greatly reduced. One successful example is Ba0.5Sr0.5Co0.8Fe0.2O3-δ HFs, which are used in an oxygen-permeable membrane. This exhibits high oxygen permeation flux values that exceed desired commercial targets and compares favorably with previously reported oxygen-permeable membranes. Studies on other perovskites have produced similarly successful results. Overall, this approach could lead to energy efficient, solid-state devices for industrial application in energy and environmental fields. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Process of motion by unit steps over a surface provided with elements regularly arranged

    International Nuclear Information System (INIS)

    Cooper, D.E.; Hendee, L.C. III; Hill, W.G. Jr.; Leshem, Adam; Marugg, M.L.

    1977-01-01

    This invention concerns a process for moving by unit steps an apparatus travelling over a surface provided with an array of orifices aligned and evenly spaced in several lines and several parallel rows regularly spaced, the lines and rows being parallel to axes x and y of Cartesian co-ordinates, each orifice having a separate address in the Cartesian co-ordinate system. The surface travelling apparatus has two previously connected arms arranged in directions transversal to each other thus forming an angle corresponding to the intersection of axes x and y. In the inspection and/or repair of nuclear or similar steam generator tubes, it is desirable that such an apparatus should be able to move in front of a surface comprising an array of orifices by the selective alternate introduction and retraction of two sets of anchoring claws of the two respective arms, in relation to the orifices of the array, it being possible to shift the arms in a movement of translation, transversally to each other, as a set of claws is withdrawn from the orifices. The invention concerns a process and apparatus as indicated above that reduces to a minimum the path length of the apparatus between the orifice it is currently opposite and a given orifice [fr]

  13. Model predictive control of two-step nitrification and its validation via short-cut nitrification tests.

    Science.gov (United States)

    Sui, Jun; Luo, Fan; Li, Jie

    2016-10-01

    Short-cut nitrification (SCN) is an attractive technology due to its savings in aeration and external carbon source addition costs. However, the fact that nitrite nitrogen is not included as a model state in the Activated Sludge Model limits the model predictive control of biological nitrogen removal via SCN. In this paper, a two-step kinetic model was developed that introduces pH and temperature as process controllers, and it was implemented in an SBR reactor. The simulation results for optimizing operating conditions showed that, with increasing dissolved oxygen (DO), the rates of ammonia oxidation and nitrite accumulation first increase, so that an SCN process is achieved. By further increasing DO, the SCN process is transformed into a complete nitrification process. In addition, within a certain range, increasing the sludge retention time and aeration time is beneficial to the accumulation of nitrite. The implementation results in the SBR reactor showed that the data predicted by the kinetic model agree with the data obtained, which indicates that the two-step kinetic model is appropriate for simulating ammonia removal and nitrite production kinetics.

  14. The Brookhaven Process Optimization Models

    Energy Technology Data Exchange (ETDEWEB)

    Pilati, D. A.; Sparrow, F. T.

    1979-01-01

    The Brookhaven National Laboratory Industry Model Program (IMP) has undertaken the development of a set of industry-specific process-optimization models. These models are to be used for energy-use projections, energy-policy analyses, and process technology assessments. Applications of the models currently under development show that system-wide energy impacts may be very different from engineering estimates, selected investment tax credits for cogeneration (or other conservation strategies) may have the perverse effect of increasing industrial energy use, and that a proper combination of energy taxes and investment tax credits is more socially desirable than either policy alone. A section is included describing possible extensions of these models to answer questions or address other systems (e.g., a single plant instead of an entire industry).

  15. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occurs within the porous adsorbent. The theoret…

  16. Ehrenfest's theorem and the validity of the two-step model for strong-field ionization

    DEFF Research Database (Denmark)

    Shvetsov-Shilovskiy, Nikolay; Dimitrovski, Darko; Madsen, Lars Bojer

    2013-01-01

    By comparison with the solution of the time-dependent Schrödinger equation we explore the validity of the two-step semiclassical model for strong-field ionization in elliptically polarized laser pulses. We find that the discrepancy between the two-step model and the quantum theory correlates with situations where the ensemble average of the force deviates considerably from the force calculated at the average position of the trajectories of the ensemble. We identify the general trends for the applicability of the semiclassical model in terms of intensity, ellipticity, and wavelength of the laser pulse…

  17. Process modeling for the Integrated Nonthermal Treatment System (INTS) study

    Energy Technology Data Exchange (ETDEWEB)

    Brown, B.W.

    1997-04-01

    This report describes the process modeling done in support of the Integrated Nonthermal Treatment System (INTS) study. This study was performed to supplement the Integrated Thermal Treatment System (ITTS) study and comprises five conceptual treatment systems that treat DOE contact-handled mixed low-level wastes (MLLW) at temperatures of less than 350°F. ASPEN PLUS, a chemical process simulator, was used to model the systems. Nonthermal treatment systems were developed as part of the INTS study and include sufficient processing steps to treat the entire inventory of MLLW. The final result of the modeling is a process flowsheet with a detailed mass and energy balance. In contrast to the ITTS study, which modeled only the main treatment system, the INTS study modeled each of the various processing steps with ASPEN PLUS, release 9.1-1. Trace constituents, such as radionuclides and minor pollutant species, were not included in the calculations.

  18. Decision-Oriented Health Technology Assessment: One Step Forward in Supporting the Decision-Making Process in Hospitals.

    Science.gov (United States)

    Ritrovato, Matteo; Faggiano, Francesco C; Tedesco, Giorgia; Derrico, Pietro

    2015-06-01

    This article outlines the Decision-Oriented Health Technology Assessment: a new implementation of the European network for Health Technology Assessment Core Model, integrating the multicriteria decision-making analysis by using the analytic hierarchy process to introduce a standardized methodological approach as a valued and shared tool to support health care decision making within a hospital. Following the Core Model as guidance (European network for Health Technology Assessment. HTA core model for medical and surgical interventions. Available from: http://www.eunethta.eu/outputs/hta-core-model-medical-and-surgical-interventions-10r. [Accessed May 27, 2014]), it is possible to apply the analytic hierarchy process to break down a problem into its constituent parts and identify priorities (i.e., assigning a weight to each part) in a hierarchical structure. Thus, it quantitatively compares the importance of multiple criteria in assessing health technologies and how the alternative technologies perform in satisfying these criteria. The verbal ratings are translated into a quantitative form by using the Saaty scale (Saaty TL. Decision making with the analytic hierarchy process. Int J Serv Sci 2008;1:83-98). An eigenvector analysis is used for deriving the weights' systems (i.e., local and global weights' systems) that reflect the importance assigned to the criteria and the priorities related to the performance of the alternative technologies. Compared with the Core Model, this methodological approach supplies more timely and contextualized evidence for a specific technology, making it possible to obtain data that are more relevant and easier to interpret, and therefore more useful for decision makers to make investment choices with greater awareness. We reached the conclusion that although there may be scope for improvement, this implementation is a step forward toward the goal of building a "solid bridge" between the scientific evidence and the final decision.

  19. Application of the Two-Step Filter to Process Ranging Measurements for Relative Navigation in an Elliptical Orbit

    Science.gov (United States)

    Garrison, James L.; Axelrad, Penina

    1997-01-01

    This estimator breaks a nonlinear estimation problem into a set of overdetermined 'first step' states, which are linear in the observations, and 'second step' states, which are ultimately the states of interest. Linear estimation methods are applied to filter the observations and produce the optimal first step state estimate. The 'second step' states are obtained through iterative nonlinear parameter estimation, considering the first step states as observations. It has been shown that this process exactly minimizes the least squares cost function for static problems and provides a better solution than the iterated extended Kalman filter (IEKF) for dynamic problems. The two-step filter is applied in this paper to process range and range rate measurements between the two spacecraft. Details of the application of the two-step estimator to this problem will be given, highlighting the use of a test for ill-conditioned covariance estimates that can result from the first order covariance propagation. A comparison will be made between the performance of the two-step filter and the IEKF.
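
    A minimal static sketch of the two-step idea described above is given below: the 'first step' states are linear in the observations and are found by linear least squares, and the 'second step' states (the parameters of interest) are then recovered from the first-step estimates by an iterative nonlinear least-squares fit. The toy signal model is an assumption for illustration and is not the relative-navigation ranging model used in the paper.

```python
# Toy two-step estimation: y = A*sin(t + phi) + noise.
# First step: z = [A*cos(phi), A*sin(phi)] is linear in the observations.
# Second step: x = [A, phi] is recovered from z_hat by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
A_true, phi_true = 2.0, 0.7
y = A_true * np.sin(t + phi_true) + 0.05 * rng.standard_normal(t.size)

# First step: linear least squares for the overdetermined first-step states
H = np.column_stack([np.sin(t), np.cos(t)])     # y = H @ z + noise
z_hat, *_ = np.linalg.lstsq(H, y, rcond=None)

# Second step: treat z_hat as the observation of a nonlinear function of x = [A, phi]
def residual(x):
    A, phi = x
    return np.array([A * np.cos(phi), A * np.sin(phi)]) - z_hat

x_hat = least_squares(residual, x0=[1.0, 0.0]).x
print("estimated A, phi:", x_hat)               # should approach 2.0, 0.7
```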

  20. Neuroscientific model of motivational process.

    Science.gov (United States)

    Kim, Sung-Il

    2013-01-01

    Considering the neuroscientific findings on reward, learning, value, decision-making, and cognitive control, motivation can be parsed into three sub processes, a process of generating motivation, a process of maintaining motivation, and a process of regulating motivation. I propose a tentative neuroscientific model of motivational processes which consists of three distinct but continuous sub processes, namely reward-driven approach, value-based decision-making, and goal-directed control. Reward-driven approach is the process in which motivation is generated by reward anticipation and selective approach behaviors toward reward. This process recruits the ventral striatum (reward area) in which basic stimulus-action association is formed, and is classified as an automatic motivation to which relatively less attention is assigned. By contrast, value-based decision-making is the process of evaluating various outcomes of actions, learning through positive prediction error, and calculating the value continuously. The striatum and the orbitofrontal cortex (valuation area) play crucial roles in sustaining motivation. Lastly, the goal-directed control is the process of regulating motivation through cognitive control to achieve goals. This consciously controlled motivation is associated with higher-level cognitive functions such as planning, retaining the goal, monitoring the performance, and regulating action. The anterior cingulate cortex (attention area) and the dorsolateral prefrontal cortex (cognitive control area) are the main neural circuits related to regulation of motivation. These three sub processes interact with each other by sending reward prediction error signals through dopaminergic pathway from the striatum and to the prefrontal cortex. The neuroscientific model of motivational process suggests several educational implications with regard to the generation, maintenance, and regulation of motivation to learn in the learning environment.

  1. One-Step Dynamic Classifier Ensemble Model for Customer Value Segmentation with Missing Values

    Directory of Open Access Journals (Sweden)

    Jin Xiao

    2014-01-01

    Full Text Available Scientific customer value segmentation (CVS is the base of efficient customer relationship management, and customer credit scoring, fraud detection, and churn prediction all belong to CVS. In real CVS, the customer data usually include lots of missing values, which may affect the performance of CVS model greatly. This study proposes a one-step dynamic classifier ensemble model for missing values (ODCEM model. On the one hand, ODCEM integrates the preprocess of missing values and the classification modeling into one step; on the other hand, it utilizes multiple classifiers ensemble technology in constructing the classification models. The empirical results in credit scoring dataset “German” from UCI and the real customer churn prediction dataset “China churn” show that the ODCEM outperforms four commonly used “two-step” models and the ensemble based model LMF and can provide better decision support for market managers.

  2. Multi-Step Usage of in Vivo Models During Rational Drug Design and Discovery

    Directory of Open Access Journals (Sweden)

    Charles H. Williams

    2011-04-01

    Full Text Available In this article we propose a systematic development method for rational drug design while reviewing paradigms in industry, emerging techniques and technologies in the field. Although the process of drug development today has been accelerated by emergence of computational methodologies, it is a herculean challenge requiring exorbitant resources; and often fails to yield clinically viable results. The current paradigm of target based drug design is often misguided and tends to yield compounds that have poor absorption, distribution, metabolism, and excretion, toxicology (ADMET properties. Therefore, an in vivo organism based approach allowing for a multidisciplinary inquiry into potent and selective molecules is an excellent place to begin rational drug design. We will review how organisms like the zebrafish and Caenorhabditis elegans can not only be starting points, but can be used at various steps of the drug development process from target identification to pre-clinical trial models. This systems biology based approach paired with the power of computational biology; genetics and developmental biology provide a methodological framework to avoid the pitfalls of traditional target based drug design.

  3. Process Models for Security Architectures

    Directory of Open Access Journals (Sweden)

    Floarea NASTASE

    2006-01-01

    Full Text Available This paper presents a model for an integrated security system, which can be implemented in any organization. It is based on security-specific standards and taxonomies as ISO 7498-2 and Common Criteria. The functionalities are derived from the classes proposed in the Common Criteria document. In the paper we present the process model for each functionality and also we focus on the specific components.

  4. Inorganic metallodielectric materials fabricated using two single-step methods based on the Tollen's process.

    Science.gov (United States)

    Peterson, Molly S M; Bouwman, Jason; Chen, Aiqing; Deutsch, Miriam

    2007-02-01

    Two methods for preparing polycrystalline silver shells on colloidal silica spheres are reported. These do not include the use of organic ligands or metal seeding steps and are based on the Tollen's process for silvering glass. Reaction parameters such as temperature and reactant concentrations are adjusted to slow the reaction kinetics, which we find leads to preferential silver growth on the spheres. The resulting shells are polycrystalline and granular, showing highly uniform sphere coverage. Surface morphologies range from sparsely interconnected grains for shells approximately 20 nm thick, to complete (yet porous) shells of interconnected silver clusters which are up to approximately 140 nm in thickness. The extinction spectra of the core-shell materials are markedly different from those of smooth continuous shells, showing clear evidence that the granular shell geometry influences the plasmon resonance of the composite system. Spheres coated with shells 20-40 nm thick are also suitable for colloidal crystallization. Monolayers of self-assembled spheres with long-range ordering are demonstrated.

  5. Mother Vocal Recognition in Antarctic Fur Seal Arctocephalus gazella Pups: A Two-Step Process.

    Directory of Open Access Journals (Sweden)

    Thierry Aubin

    Full Text Available In otariids, recognition of the mother by pups is essential to pup survival, since females nurse exclusively their own young and can be very aggressive towards non-kin. Antarctic fur seals, Arctocephalus gazella, come ashore to breed and form dense colonies. During the 4-month lactation period, females alternate foraging trips at sea with suckling periods ashore. On each return to the colony, females and pups first use vocalizations to find each other among several hundred conspecifics, and olfaction is used as a final check. Such vocal identification has to be highly efficient. In the present study, we investigated the components of the individual vocal signature used by pups to identify their mothers by performing playback experiments on pups with synthetic signals. We tested the efficiency of this individual vocal signature by performing propagation tests and by testing pups at different playback distances. Pups use both amplitude and frequency modulations to identify their mother's voice, as well as the energy spectrum. Propagation tests showed that frequency modulations propagated reliably up to 64 m, whereas amplitude modulations and spectral content were highly degraded at distances over 8 m. Playback on pups at different distances suggested that individual identification is a two-step process: at long range, pups first identify the frequency modulation pattern of their mother's calls, and the other components of the vocal signature at closer range.

  6. A two-step annealing process for enhancing the ferroelectric properties of poly(vinylidene fluoride) (PVDF) devices

    KAUST Repository

    Park, Jihoon

    2015-01-01

    We report a simple two-step annealing scheme for the fabrication of stable non-volatile memory devices employing poly(vinylidene fluoride) (PVDF) polymer thin-films. The proposed two-step annealing scheme comprises the crystallization of the ferroelectric gamma-phase during the first step and enhancement of the PVDF film dense morphology during the second step. Moreover, when we extended the processing time of the second step, we obtained good hysteresis curves down to 1 Hz, the first such report for ferroelectric PVDF films. The PVDF films also exhibit a coercive field of 113 MV m-1 and a ferroelectric polarization of 5.4 μC cm-2. © The Royal Society of Chemistry 2015.

  7. Virtual models of the HLA class I antigen processing pathway.

    Science.gov (United States)

    Petrovsky, Nikolai; Brusic, Vladimir

    2004-12-01

    Antigen recognition by cytotoxic CD8 T cells is dependent upon a number of critical steps in MHC class I antigen processing including proteosomal cleavage, TAP transport into the endoplasmic reticulum, and MHC class I binding. Based on extensive experimental data relating to each of these steps there is now the capacity to model individual antigen processing steps with a high degree of accuracy. This paper demonstrates the potential to bring together models of individual antigen processing steps, for example proteosome cleavage, TAP transport, and MHC binding, to build highly informative models of functional pathways. In particular, we demonstrate how an artificial neural network model of TAP transport was used to mine a HLA-binding database so as to identify HLA-binding peptides transported by TAP. This integrated model of antigen processing provided the unique insight that HLA class I alleles apparently constitute two separate classes: those that are TAP-efficient for peptide loading (HLA-B27, -A3, and -A24) and those that are TAP-inefficient (HLA-A2, -B7, and -B8). Hence, using this integrated model we were able to generate novel hypotheses regarding antigen processing, and these hypotheses are now capable of being tested experimentally. This model confirms the feasibility of constructing a virtual immune system, whereby each additional step in antigen processing is incorporated into a single modular model. Accurate models of antigen processing have implications for the study of basic immunology as well as for the design of peptide-based vaccines and other immunotherapies.

  8. Nanomembrane Canister Architectures for the Visualization and Filtration of Oxyanion Toxins with One-Step Processing.

    Science.gov (United States)

    Aboelmagd, Ahmed; El-Safty, Sherif A; Shenashen, Mohamed A; Elshehy, Emad A; Khairy, Mohamed; Sakai, Masaru; Yamaguchi, Hitoshi

    2015-11-01

    Nanomembrane canister-like architectures were fabricated by using hexagonal mesocylinder-shaped aluminosilica nanotubes (MNTs)-porous anodic alumina (PAA) hybrid nanochannels. The engineering pattern of the MNTs inside a 60 μm-long membrane channel enabled the creation of unique canister-like channel necks and cavities. The open-tubular canister architecture design provides controllable, reproducible, and one-step processing patterns of visual detection and rejection/permeation of oxyanion toxins such as selenite (SeO3(2-)) in aquatic environments (i.e., in ground and river water sources) in the Ibaraki Prefecture of Japan. The decoration of organic ligand moieties such as omega chrome black blue (OCG) into inorganic Al2O3@tubular SiO2/Al2O3 canister membrane channel cavities led to the fabrication of an optical nanomembrane sensor (ONS). The OCG ligand was not leached from the canister as observed in washing, sensing, and recovery assays of selenite anions in solution, which enabled its multiple reuse. The ONS makes a variety of alternate processing analyses of selective quantification, visual detection, rejection/permeation, and recovery of toxic selenite quick and simple without using complex instrumentation. Under optimal conditions, the ONS canister exhibited a high selectivity toward selenite anions relative to other ions and a low-level detection limit of 0.0093 μM. Real analytical data showed that approximately 96% of SeO3(2-) anions can be recovered from aquatic and wastewater samples. The ONS canister holds potential for field recovery applications of toxic selenite anions from water. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Multi-step chemical and radiation process for the production of gas

    International Nuclear Information System (INIS)

    O'Neal, R.D.; Leffert, C.B.; Teichmann, T.; Teitel, R.J.

    1979-01-01

    It has previously been proposed to use the radiation energy within the central reaction chamber of a thermonuclear reactor for the dissociation of water into hydrogen and oxygen in one step. However, the coefficient of recombination of pure hydrogen and oxygen at the elevated reaction chamber temperature is relatively high so that the overall yield is low. Furthermore, it is desirable to recover any unspent tritium from the reaction chamber exhaust, but separation of residual tritium from pure hydrogen in the chamber exhaust is relatively difficult. In the process provided in this patent pure carbon dioxide rather than steam is injected into the central reaction chamber of a thermonuclear reactor. Radiolysis of carbon dioxide yields carbon monoxide and pure oxygen. While the oxygen may be separated and collected at the exhaust vent of the reaction chamber, the carbon monoxide is separated and then combined with water to produce pure hydrogen and reconstituted carbon dioxide, which may be collected and recycled so that the overall closed-loop system produces pure hydrogen and oxygen at the expense of water. The efficiency of the process is high due, in large part, to the relatively low coefficient of recombination of carbon monoxide and oxygen at the reaction chamber temperature. Heat required for the reaction of carbon monoxide with water may be provided by suitable heat transfer from the heated reaction chamber. The chamber exhaust contains carbon monoxide and oxygen, so that any unburnt tritium in the exhaust stream may be easily collected and recycled to form additional pellet fuel. The carbon dioxide molecules injected into the reaction chamber provide protection for the reaction chamber walls from the deleterious effects of charged-particle and x-ray bombardment. (LL)

  10. Effect of two-step functionalization of Ti by chemical processes on protein adsorption

    Science.gov (United States)

    Pisarek, M.; Roguska, A.; Andrzejczuk, M.; Marcon, L.; Szunerits, S.; Lewandowska, M.; Janik-Czachor, M.

    2011-07-01

    Titanium and its alloys are widely used for orthopedic and dental implants because of their superior mechanical properties, low modulus, excellent corrosion resistance and good biocompatibility. However, it takes several months for titanium implants and bone tissue to reach integration. Hence, there is growing interest in shortening the process of osseointegration and thereby reducing surgical restrictions. Various surface modifications have been applied to form a bioactive titanium oxide layer on the metal surface, which is known to accelerate osseointegration. The present work shows that titanium dioxide (TiO2) layers formed on titanium substrates by etching in a solution of sodium hydroxide (NaOH) or hydrogen peroxide/phosphoric acid (H3PO4/H2O2, with a volume ratio of 1:1) are highly suitable pre-treatments for apatite-like coating deposition. Using a two-step procedure (etching in an alkaline or acidic solution followed by soaking in Hanks' medium), biomimetic calcium phosphate coatings were deposited on porous TiO2 layers. The combined effects of surface topography and chemistry on the formation of the calcium phosphate layer are presented. The topography of the TiO2 layers was characterized using HR-SEM and AFM techniques. The nucleation and growth of calcium phosphate (Ca-P) coatings deposited on TiO2 porous layers from Hanks' solution was investigated using HR-SEM microscopy. AES, XPS and FTIR surface analytical techniques were used to characterize the titanium dioxide layers before and after deposition of the calcium phosphate coatings, as well as after the process of protein adsorption. To evaluate the potential use of such materials for biomedical applications, the adsorption of serum albumin, the most abundant protein in the blood, was studied on such surfaces.

  11. THREE PRE-PROCESSING STEPS TO INCREASE THE QUALITY OF KINECT RANGE DATA

    Directory of Open Access Journals (Sweden)

    M. Davoodianidaliki

    2013-09-01

    Full Text Available With technology developing at its current rate and the increasing use of active sensors in close-range photogrammetry and computer vision, range images have become the main additional data type added to the existing collection. Although the main output of these data is a point cloud, range images themselves can be considered important pieces of information. Acting as a bridge between 2D and 3D data enables them to hold unique and important attributes. Three such properties are taken advantage of in this study. The first attribute to be considered is the "neighborhood of null pixels", which adds a new field describing the accuracy of parameters to the point cloud. This new field can later be used for data registration and integration: when there is a conflict between points from different stations, those with the lower accuracy value can be discarded. Next, polynomial fitting to known planar regions is applied. This step can help to smooth the final point cloud and applies only to some applications; classification and region tracking in a series of images is needed for this process to be applicable. Finally, there is the break-line created by errors in the data transfer software. The break-line is caused by the loss of some pixels during data transfer and storage, and the image shifts along the break-line. This error usually occurs when the camera moves fast and the processor cannot handle the transfer process entirely. The proposed method is based on edge detection, where horizontal lines are used to recognize the break-line and near-vertical lines are used to determine the shift value.

  12. Cognitive processing for step precision increases beta and gamma band modulation during overground walking

    DEFF Research Database (Denmark)

    Oliveira, Anderson Souza; Arguissain, Federico Gabriel; Andersen, Ole Kæseler

    2018-01-01

    ); (2) walking in a pre-defined pathway forcing variation in step width and length by stepping on green marks on the floor (only one color: W1C), and (3) walking in the same pre-defined W1C pathway while evaluating different combinations among the colors green, yellow and red, in which only one color...

  13. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of detailed simulation models is presented. This kind of validation is always related to an experimental case. Empirical validation has a residual sense, because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures of the simulation model and, furthermore, can be used as a guide in the design of subsequent experiments. Three steps can be clearly differentiated. Sensitivity analysis: this can be performed with DSA, differential sensitivity analysis, and with MCSA, Monte-Carlo sensitivity analysis. Searching for the optimal domains of the input parameters: a procedure based on Monte-Carlo methods and cluster techniques has been developed to find the optimal domains of these parameters. Residual analysis: this analysis has been made in the time domain and in the frequency domain, using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings is presented, studying the behavior of building components in a Test Cell of the LECE at CIEMAT (Spain). (Author) 17 refs

  14. Mathematical modelling in economic processes.

    Directory of Open Access Journals (Sweden)

    L.V. Kravtsova

    2008-06-01

    Full Text Available The article considers a number of methods for the mathematical modelling of economic processes and the possibilities of using Excel spreadsheets to obtain optimal solutions to problems or to perform financial calculations with the help of the built-in functions.

  15. Modeling of biopharmaceutical processes. Part 2: Process chromatography unit operation

    DEFF Research Database (Denmark)

    Kaltenbrunner, Oliver; McCue, Justin; Engel, Philip

    2008-01-01

    Process modeling can be a useful tool to aid in process development, process optimization, and process scale-up. When modeling a chromatography process, one must first select the appropriate models that describe the mass transfer and adsorption that occurs within the porous adsorbent. The theoret...

  16. Uncertainty modeling process for semantic technology

    Directory of Open Access Journals (Sweden)

    Rommel N. Carvalho

    2016-08-01

    Full Text Available The ubiquity of uncertainty across application domains generates a need for principled support for uncertainty management in semantically aware systems. A probabilistic ontology provides constructs for representing uncertainty in domain ontologies. While the literature has been growing on formalisms for representing uncertainty in ontologies, there remains little guidance in the knowledge engineering literature for how to design probabilistic ontologies. To address the gap, this paper presents the Uncertainty Modeling Process for Semantic Technology (UMP-ST, a new methodology for modeling probabilistic ontologies. To explain how the methodology works and to verify that it can be applied to different scenarios, this paper describes step-by-step the construction of a proof-of-concept probabilistic ontology. The resulting domain model can be used to support identification of fraud in public procurements in Brazil. While the case study illustrates the development of a probabilistic ontology in the PR-OWL probabilistic ontology language, the methodology is applicable to any ontology formalism that properly integrates uncertainty with domain semantics.

  17. Specification of a STEP Based Reference Model for Exchange of Robotics Models

    DEFF Research Database (Denmark)

    Haenisch, Jochen; Kroszynski, Uri; Ludwig, Arnold

    The growing need for accurate information about manufacturing data (models of robots and other mechanisms) in diverse industrial applications has initiated ESPRIT Project 6457: InterRob. Besides the topics associated with standards for industrial …, the project specified a reference model combining geometric, dynamic, process and robot specific data, on which a set of pilot processor programs are based. The processors allow for the exchange of product data models between Analysis systems (e.g. ADAMS), CAD systems (e.g. CATIA, BRAVO), Simulation and off-line programming systems (e.g. GRASP, KISMET, ROPSIM).

  18. One-Step Dynamic Classifier Ensemble Model for Customer Value Segmentation with Missing Values

    OpenAIRE

    Jin Xiao; Bing Zhu; Geer Teng; Changzheng He; Dunhu Liu

    2014-01-01

    Scientific customer value segmentation (CVS) is the base of efficient customer relationship management, and customer credit scoring, fraud detection, and churn prediction all belong to CVS. In real CVS, the customer data usually include lots of missing values, which may affect the performance of CVS model greatly. This study proposes a one-step dynamic classifier ensemble model for missing values (ODCEM) model. On the one hand, ODCEM integrates the preprocess of missing values and the classif...

  19. Single-step brazing process for mono-block joints and mechanical testing

    International Nuclear Information System (INIS)

    Casalegno, V.; Ferraris, M.; Salvo, M.; Rizzo, S.; Merola, M.

    2007-01-01

    Full text of publication follows: Plasma facing components act as actively cooled thermal shields to sustain thermal and particle loads during normal and transient operations in ITER (International Thermonuclear Experimental Reactor). The plasma-facing layer is referred to as 'armour', which is made of either carbon fibre reinforced carbon composite (CFC) or tungsten (W). CFC is the reference design solution for the lower part of the vertical target of the ITER divertor. The armour is joined onto an actively cooled substrate, the heat sink, made of precipitation hardened copper alloy CuCrZr through a thin pure copper interlayer to decrease, by plastic deformation, the joint interface stresses; in fact, the CFC to Cu joint is affected by the CTE mismatch between the ceramic and metallic material. A new method of joining CFC to copper and CFC/Cu to CuCrZr alloy was effectively developed for the flat-type configuration; the feasibility of this process also for mono-block geometry and the development of a procedure for testing mono-block-type mock-ups is described in this work. The mono-block configuration consists of a copper alloy pipe shielded by CFC blocks. It is worth noting that in the mono-block configuration, the large thermal expansion mismatch between CFC and copper alloy is more significant than for the flat-tile configuration, due to curved interfaces. The joining technique foresees a single-step brazing process: the brazing of the three materials (CFC-Cu-CuCrZr) can be performed in a single heat treatment using the same Cu/Ge based braze. The composite surface was modified by solid state reaction with chromium with the purpose of increasing the wettability of CFC by the brazing alloy. The CFC substrate reacts with Cr which, forming a carbide layer, allows a large reduction of the contact angle; then, the brazing of CFC to pure copper and pure copper to CuCrZr by the same treatment is feasible. This process allows good joints to be obtained using a non-active brazing…

  20. Investigating the Process of Process Modeling with Eye Movement Analysis

    OpenAIRE

    Pinggera, Jakob; Furtner, Marco; Martini, Markus; Sachse, Pierre; Reiter, Katharina; Zugal, Stefan; Weber, Barbara

    2015-01-01

    Research on quality issues of business process models has recently begun to explore the process of creating process models by analyzing the modeler's interactions with the modeling environment. In this paper we aim to complement previous insights on the modeler's modeling behavior with data gathered by tracking the modeler's eye movements when engaged in the act of modeling. We present preliminary results and outline directions for future research to triangulate toward a more comprehensive un...

  1. Integrated Site Model Process Model Report

    International Nuclear Information System (INIS)

    Booth, T.

    2000-01-01

    The Integrated Site Model (ISM) provides a framework for discussing the geologic features and properties of Yucca Mountain, which is being evaluated as a potential site for a geologic repository for the disposal of nuclear waste. The ISM is important to the evaluation of the site because it provides 3-D portrayals of site geologic, rock property, and mineralogic characteristics and their spatial variabilities. The ISM is not a single discrete model; rather, it is a set of static representations that provide three-dimensional (3-D), computer representations of site geology, selected hydrologic and rock properties, and mineralogic-characteristics data. These representations are manifested in three separate model components of the ISM: the Geologic Framework Model (GFM), the Rock Properties Model (RPM), and the Mineralogic Model (MM). The GFM provides a representation of the 3-D stratigraphy and geologic structure. Based on the framework provided by the GFM, the RPM and MM provide spatial simulations of the rock and hydrologic properties, and mineralogy, respectively. Functional summaries of the component models and their respective output are provided in Section 1.4. Each of the component models of the ISM considers different specific aspects of the site geologic setting. Each model was developed using unique methodologies and inputs, and the determination of the modeled units for each of the components is dependent on the requirements of that component. Therefore, while the ISM represents the integration of the rock properties and mineralogy into a geologic framework, the discussion of ISM construction and results is most appropriately presented in terms of the three separate components. This Process Model Report (PMR) summarizes the individual component models of the ISM (the GFM, RPM, and MM) and describes how the three components are constructed and combined to form the ISM

  2. Evaluating Bank Profitability in Ghana: A five step Du-Pont Model Approach

    Directory of Open Access Journals (Sweden)

    Baah Aye Kusi

    2015-09-01

    Full Text Available We investigate bank profitability in Ghana using periods before, during and after the global financial crisis with the five-step Du-Pont model for the first time. We adapt the variables of the five-step Du-Pont model to explain bank profitability with a panel data set of twenty-five banks in Ghana from 2006 to 2012. To ensure meaningful generalization, fixed and random effects models with robust errors are used. Our empirical results suggest that bank operating activities (operating profit margin), bank efficiency (asset turnover), bank leverage (asset to equity) and financing cost (interest burden) were positive and significant determinants of bank profitability (ROE) during the period of study, implying that banks in Ghana can boost returns to equity holders through the above mentioned variables. We further report that the five-step Du-Pont model better explains the total variation (94%) in bank profitability in Ghana as compared to earlier findings, suggesting that bank specific variables are key in explaining ROE in banks in Ghana. We cited no empirical study that has employed the five-step Du-Pont model, making our study unique and different from earlier studies, as we assert that bank specific variables are core to explaining bank profitability.
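
    For reference, the five-step Du-Pont decomposition referred to in the abstract is commonly written as the product below (standard textbook form; the paper's exact variable definitions may differ):

```latex
\mathrm{ROE}
  = \underbrace{\frac{\text{Net income}}{\text{Pre-tax income}}}_{\text{tax burden}}
  \times \underbrace{\frac{\text{Pre-tax income}}{\text{EBIT}}}_{\text{interest burden}}
  \times \underbrace{\frac{\text{EBIT}}{\text{Sales}}}_{\text{operating profit margin}}
  \times \underbrace{\frac{\text{Sales}}{\text{Total assets}}}_{\text{asset turnover}}
  \times \underbrace{\frac{\text{Total assets}}{\text{Equity}}}_{\text{leverage (asset to equity)}}
```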

  3. A local time stepping algorithm for GPU-accelerated 2D shallow water models

    Science.gov (United States)

    Dazzi, Susanna; Vacondio, Renato; Dal Palù, Alessandro; Mignosa, Paolo

    2018-01-01

    In the simulation of flooding events, mesh refinement is often required to capture local bathymetric features and/or to detail areas of interest; however, if an explicit finite volume scheme is adopted, the presence of small cells in the domain can restrict the allowable time step due to the stability condition, thus reducing the computational efficiency. With the aim of overcoming this problem, the paper proposes the application of a Local Time Stepping (LTS) strategy to a GPU-accelerated 2D shallow water numerical model able to handle non-uniform structured meshes. The algorithm is specifically designed to exploit the computational capability of GPUs, minimizing the overheads associated with the LTS implementation. The results of theoretical and field-scale test cases show that the LTS model guarantees appreciable reductions in the execution time compared to the traditional Global Time Stepping strategy, without compromising the solution accuracy.
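
    The sketch below illustrates only the bookkeeping behind a level-based LTS scheme of the kind described above: cells are grouped into levels with a time step of 2^level times the minimum step, finer levels are advanced more often, and all levels meet at the synchronization time. The cell update is a placeholder, not the GPU shallow-water finite-volume solver of the paper.

```python
# Schematic local time stepping (LTS): cells at level L use dt = 2**L * dt_min and are
# updated every 2**L substeps, so all cells are synchronized after one global step.
# update_cell() is a placeholder, not a shallow-water finite-volume update.
import numpy as np

def assign_levels(dt_allowed, dt_min, max_level=3):
    """Largest power-of-two multiple of dt_min not exceeding each cell's stable time step."""
    levels = np.floor(np.log2(np.maximum(dt_allowed, dt_min) / dt_min)).astype(int)
    return np.clip(levels, 0, max_level)

def update_cell(state, dt):
    return state + dt * (-0.1 * state)          # placeholder "flux" update

def lts_advance(state, dt_allowed, dt_min, max_level=3):
    levels = assign_levels(dt_allowed, dt_min, max_level)
    n_sub = 2 ** max_level                      # substeps of size dt_min per global step
    for sub in range(n_sub):
        for lvl in range(max_level + 1):
            if sub % (2 ** lvl) == 0:           # a level-lvl cell is updated every 2**lvl substeps
                mask = levels == lvl
                state[mask] = update_cell(state[mask], (2 ** lvl) * dt_min)
    return state                                # advanced by one global step of 2**max_level * dt_min

state = np.ones(10)
dt_allowed = np.array([0.01, 0.02, 0.04, 0.08, 0.08, 0.04, 0.02, 0.01, 0.08, 0.08])
print(lts_advance(state, dt_allowed, dt_min=0.01))
```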

  4. Modified Step Variational Iteration Method for Solving Fractional Biochemical Reaction Model

    Directory of Open Access Journals (Sweden)

    R. Yulita Molliq

    2011-01-01

    Full Text Available A new method, called the modification of the step variational iteration method (MoSVIM), is introduced and used to solve the fractional biochemical reaction model. The MoSVIM uses general Lagrange multipliers to construct the correction functional for the problem and runs by a step approach: the interval is divided into subintervals with a fixed time step, the solutions are obtained at each subinterval, and a nonzero auxiliary parameter ℏ is adopted to control the convergence region of the series solutions. The MoSVIM yields an analytical solution as a rapidly convergent infinite power series with easily computable terms and produces a good approximate solution on enlarged intervals for the fractional biochemical reaction model. The accuracy of the results obtained is in excellent agreement with the Adams–Bashforth–Moulton method (ABMM).
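
    For orientation, the variational iteration method on which MoSVIM builds generates successive approximations through the standard correction functional below (general textbook form, with L the linear operator, N the nonlinear operator, g the source term, lambda the general Lagrange multiplier and u-tilde a restricted variation); the modification described in the abstract applies this iteration piecewise on subintervals with the auxiliary parameter ℏ.

```latex
u_{n+1}(t) \;=\; u_n(t) \;+\; \int_{0}^{t} \lambda(s)\,\bigl[\, L\,u_n(s) + N\,\tilde{u}_n(s) - g(s) \,\bigr]\,\mathrm{d}s
```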

  5. Step process for selecting and testing surrogates and indicators of afrotemperate forest invertebrate diversity.

    Directory of Open Access Journals (Sweden)

    Charmaine Uys

    Full Text Available BACKGROUND: The diversity and complexity of invertebrate communities usually result in their exclusion from conservation activities. Here we provide a step process for assessing predominantly ground-dwelling Afrotemperate forest invertebrates' (earthworms, centipedes, millipedes, ants, molluscs potential as surrogates for conservation and indicators for monitoring. We also evaluated sampling methods (soil and litter samples, pitfall traps, active searching quadrats and tree beating and temporal (seasonal effects. METHODOLOGY/PRINCIPAL FINDINGS: Lack of congruence of species richness across taxa indicated poor surrogacy potential for any of the focus taxa. Based on abundance and richness, seasonal stability, and ease of sampling, molluscs were the most appropriate taxon for use in monitoring of disturbance impacts. Mollusc richness was highest in March (Antipodal late summer wet season. The most effective and efficient methods were active searching quadrats and searching litter samples. We tested the effectiveness of molluscs as indicators for monitoring by contrasting species richness and community structure in burned relative to unburned forests. Both species richness and community structure changed significantly with burning. Some mollusc species (e.g. Macroptychia africana showed marked negative responses to burning, and these species have potential for use as indicators. CONCLUSIONS/SIGNIFICANCE: Despite habitat type (i.e., Afrotemperate forest being constant, species richness and community structure varied across forest patches. Therefore, in conservation planning, setting targets for coarse filter features (e.g., habitat type requires fine filter features (e.g., localities for individual species. This is especially true for limited mobility taxa such as those studied here. Molluscs have high potential for indicators for monitoring, and this requires broader study.

  6. Validation of the baking process as a kill-step for controlling Salmonella in muffins.

    Science.gov (United States)

    Channaiah, Lakshmikantha H; Michael, Minto; Acuff, Jennifer C; Phebus, Randall K; Thippareddi, Harshavardhan; Olewnik, Maureen; Milliken, George

    2017-06-05

This research investigates the potential risk of Salmonella in muffins when contamination is introduced via flour, the main ingredient. Flour was inoculated with a 3-strain cocktail of Salmonella serovars (Newport, Typhimurium, and Senftenberg) and re-dried to achieve a target concentration of ~8 log CFU/g. The inoculated flour was then used to prepare muffin batter following a standard commercial recipe. The survival of Salmonella during and after baking at 190.6°C for 21 min was analyzed by plating samples on selective and injury-recovery media at regular intervals. The thermal inactivation parameters (D and z values) of the 3-strain Salmonella cocktail were determined. A ≥5 log CFU/g reduction in the Salmonella population was demonstrated by 17 min of baking, and a 6.1 log CFU/g reduction by 21 min of baking. The D-values of the Salmonella serovar cocktail in muffin batter were 62.2±3.0, 40.1±0.9 and 16.5±1.7 min at 55, 58 and 61°C, respectively, and the z-value was 10.4±0.6°C. The water activity (aw) of the muffin crumb (0.928) after baking and 30 min of cooling was similar to that of the pre-baked muffin batter, whereas the aw of the muffin crust decreased to 0.700. This study validates a typical commercial muffin baking process utilizing an oven temperature of 190.6°C for at least 17 min as an effective kill-step in reducing a Salmonella serovar population by ≥5 log CFU/g. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
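
    A short worked example of the log-linear D- and z-value relations quoted above. Only D_ref, T_ref and z are taken from the reported values; the crumb time-temperature profile is not given in the abstract, so the profile below is purely illustrative.

      import numpy as np

      D_ref, T_ref, z = 16.5, 61.0, 10.4      # min, deg C, deg C (values reported for the cocktail)

      def D_value(T):
          # Decimal reduction time at temperature T (log-linear thermal-death model).
          return D_ref * 10 ** ((T_ref - T) / z)

      def log_reduction(times_min, temps_C):
          # Cumulative log10 reduction: trapezoidal integral of dt / D(T(t)).
          t = np.asarray(times_min, dtype=float)
          lethal = 1.0 / D_value(np.asarray(temps_C, dtype=float))
          return float(np.sum(np.diff(t) * 0.5 * (lethal[1:] + lethal[:-1])))

      # Sanity check against the abstract: D(55 C) should be close to the reported 62.2 min.
      print(round(D_value(55.0), 1))

      # Hypothetical crumb profile rising towards ~95 deg C over a 21 min bake.
      t = np.linspace(0.0, 21.0, 200)
      T = 25.0 + 70.0 * (1.0 - np.exp(-t / 6.0))
      print(round(log_reduction(t, T), 1), "log10 CFU/g (illustrative)")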

  7. In situ biosynthesis of bacterial nanocellulose-CaCO3 hybrid bionanocomposite: One-step process

    International Nuclear Information System (INIS)

    Mohammadkazemi, Faranak; Faria, Marisa; Cordeiro, Nereida

    2016-01-01

In this work, a simple and green route to the synthesis of bacterial nanocellulose-calcium carbonate (BNC/CaCO3) hybrid bionanocomposites using one-step in situ biosynthesis was studied. The CaCO3 was incorporated in the bacterial nanocellulose structure during cellulose biosynthesis by Gluconacetobacter xylinus PTCC 1734 bacteria. Hestrin-Schramm (HS) and Zhou (Z) culture media were used for the hybrid bionanocomposite production, and the effect of ethanol addition was investigated. Attenuated total reflection Fourier transform infrared spectroscopy, field emission scanning electron microscopy, X-ray diffraction, energy-dispersive X-ray spectroscopy, inverse gas chromatography and thermogravimetric analysis were used to characterize the samples. The experimental results demonstrated that the ethanol and the culture medium play an important role in the production, structure and properties of the BNC/CaCO3 hybrid bionanocomposites. The BNC/CaCO3 biosynthesized in Z culture medium revealed a higher O/C ratio and an amphoteric surface character, which justify the highest CaCO3 content incorporation. The CaCO3 was incorporated into the cellulosic matrix, decreasing the bacterial nanocellulose crystallinity. This work reveals the high potential of in situ biosynthesis of BNC/CaCO3 hybrid bionanocomposites and opens a new way to high value-added applications of bacterial nanocellulose. - Highlights: • BNC/CaCO3 hybrid bionanocomposites were produced using an in situ biosynthesis process. • Ethanol and culture medium play an important role in the production and properties. • Z-BNC/CaCO3 bionanocomposites revealed a higher O/C ratio and an amphoteric surface character. • CaCO3 incorporated into the BNC decreased its crystallinity.

  8. Novel three-step pseudo-absence selection technique for improved species distribution modelling.

    Directory of Open Access Journals (Sweden)

    Senait D Senay

Full Text Available Pseudo-absence selection for spatial distribution models (SDMs) is the subject of ongoing investigation. Numerous techniques continue to be developed, and reports of their effectiveness vary. Because the quality of presence and absence data is key for acceptable accuracy of correlative SDM predictions, determining an appropriate method to characterise pseudo-absences for SDMs is vital. The main methods currently used to generate pseudo-absence points are: (1) randomly generated pseudo-absence locations from background data; (2) pseudo-absence locations generated within a delimited geographical distance from recorded presence points; and (3) pseudo-absence locations selected in areas that are environmentally dissimilar from presence points. There is a need for a method that considers both geographical extent and environmental requirements to produce pseudo-absence points that are spatially and ecologically balanced. We use a novel three-step approach that satisfies both spatial and ecological reasons why the target species is likely to find a particular geo-location unsuitable. Step 1 comprises establishing a geographical extent around species presence points from which pseudo-absence points are selected, based on analyses of environmental variable importance at different distances. This step gives an ecologically meaningful explanation for the spatial range of the background data, as opposed to using an arbitrary radius. Step 2 determines locations that are environmentally dissimilar to the presence points within the distance specified in step one. Step 3 performs K-means clustering to reduce the number of potential pseudo-absences to the desired set by taking the centroids of clusters in the most environmentally dissimilar class identified in step 2. By considering spatial, ecological and environmental aspects, the three-step method identifies appropriate pseudo-absence points for correlative SDMs. We illustrate this method by predicting the New
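
    A compressed sketch of the three steps, with the variable-importance analysis of step 1 replaced by a fixed buffer radius and standardised Euclidean distances standing in for the dissimilarity analysis; function and variable names are assumptions, not the authors' code.

      import numpy as np
      from scipy.spatial.distance import cdist
      from sklearn.cluster import KMeans

      def select_pseudo_absences(bg_xy, bg_env, pres_xy, pres_env, radius, n_points):
          # Step 1: restrict background candidates to a geographical extent around
          # the presence points (here simply a fixed radius for brevity).
          near = cdist(bg_xy, pres_xy).min(axis=1) <= radius
          xy, env = bg_xy[near], bg_env[near]

          # Step 2: rank candidates by environmental dissimilarity to the presences
          # and keep only the most dissimilar quartile.
          mu, sd = pres_env.mean(axis=0), pres_env.std(axis=0) + 1e-9
          dissim = cdist((env - mu) / sd, (pres_env - mu) / sd).min(axis=1)
          keep = dissim >= np.quantile(dissim, 0.75)
          xy, env = xy[keep], env[keep]

          # Step 3: K-means in environmental space; the candidate closest to each
          # centroid becomes one pseudo-absence point.
          km = KMeans(n_clusters=n_points, n_init=10).fit(env)
          idx = [int(np.argmin(((env - c) ** 2).sum(axis=1))) for c in km.cluster_centers_]
          return xy[idx]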

  9. Rotordynamic analysis for stepped-labyrinth gas seals using moody's friction-factor model

    International Nuclear Information System (INIS)

    Ha, Tae Woong

    2001-01-01

The governing equations are derived for the analysis of a stepped labyrinth gas seal generally used in high-performance compressors, gas turbines, and steam turbines. Bulk flow is assumed for a single-cavity control volume set up in a stepped labyrinth cavity, and the flow is assumed to be completely turbulent in the circumferential direction. Moody's wall-friction-factor model is used for the calculation of wall shear stresses in the single-cavity control volume. For the reaction force developed by the stepped labyrinth gas seal, linearized zeroth-order and first-order perturbation equations are developed for small motion about a centered position. Integration of the resultant first-order pressure distribution along and around the seal defines the rotordynamic coefficients of the stepped labyrinth gas seal. The resulting leakage and rotordynamic characteristics of the stepped labyrinth gas seal are presented and compared with Scharrer's theoretical analysis using Blasius' wall-friction-factor model. The present analysis shows good qualitative agreement of the leakage characteristics with Scharrer's analysis, but underpredicts them by about 20%. For the rotordynamic coefficients, the present analysis generally yields smaller predicted values compared with Scharrer's analysis.
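
    For reference, the Moody wall-friction-factor model named above has the explicit form sketched below (the generic Moody approximation; the seal-specific bulk-flow equations and the roughness and Reynolds-number values used in the paper are not reproduced, and the numbers are placeholders).

      def moody_friction_factor(reynolds, rel_roughness):
          # Moody (1947) explicit approximation:
          #   f = 0.0055 * [1 + (2e4 * (e/D) + 1e6 / Re) ** (1/3)]
          return 0.0055 * (1.0 + (2.0e4 * rel_roughness + 1.0e6 / reynolds) ** (1.0 / 3.0))

      # Illustrative use: wall shear stress from the Darcy friction factor, tau = f * rho * U**2 / 8.
      rho, U = 5.0, 120.0                                   # kg/m3 and m/s, placeholder values
      f = moody_friction_factor(reynolds=2.0e5, rel_roughness=1.0e-3)
      print(f, f * rho * U ** 2 / 8.0)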

  10. Animal models and conserved processes

    Directory of Open Access Journals (Sweden)

    Greek Ray

    2012-09-01

    Full Text Available Abstract Background The concept of conserved processes presents unique opportunities for using nonhuman animal models in biomedical research. However, the concept must be examined in the context that humans and nonhuman animals are evolved, complex, adaptive systems. Given that nonhuman animals are examples of living systems that are differently complex from humans, what does the existence of a conserved gene or process imply for inter-species extrapolation? Methods We surveyed the literature including philosophy of science, biological complexity, conserved processes, evolutionary biology, comparative medicine, anti-neoplastic agents, inhalational anesthetics, and drug development journals in order to determine the value of nonhuman animal models when studying conserved processes. Results Evolution through natural selection has employed components and processes both to produce the same outcomes among species but also to generate different functions and traits. Many genes and processes are conserved, but new combinations of these processes or different regulation of the genes involved in these processes have resulted in unique organisms. Further, there is a hierarchy of organization in complex living systems. At some levels, the components are simple systems that can be analyzed by mathematics or the physical sciences, while at other levels the system cannot be fully analyzed by reducing it to a physical system. The study of complex living systems must alternate between focusing on the parts and examining the intact whole organism while taking into account the connections between the two. Systems biology aims for this holism. We examined the actions of inhalational anesthetic agents and anti-neoplastic agents in order to address what the characteristics of complex living systems imply for inter-species extrapolation of traits and responses related to conserved processes. Conclusion We conclude that even the presence of conserved processes is

  11. Animal models and conserved processes.

    Science.gov (United States)

    Greek, Ray; Rice, Mark J

    2012-09-10

    The concept of conserved processes presents unique opportunities for using nonhuman animal models in biomedical research. However, the concept must be examined in the context that humans and nonhuman animals are evolved, complex, adaptive systems. Given that nonhuman animals are examples of living systems that are differently complex from humans, what does the existence of a conserved gene or process imply for inter-species extrapolation? We surveyed the literature including philosophy of science, biological complexity, conserved processes, evolutionary biology, comparative medicine, anti-neoplastic agents, inhalational anesthetics, and drug development journals in order to determine the value of nonhuman animal models when studying conserved processes. Evolution through natural selection has employed components and processes both to produce the same outcomes among species but also to generate different functions and traits. Many genes and processes are conserved, but new combinations of these processes or different regulation of the genes involved in these processes have resulted in unique organisms. Further, there is a hierarchy of organization in complex living systems. At some levels, the components are simple systems that can be analyzed by mathematics or the physical sciences, while at other levels the system cannot be fully analyzed by reducing it to a physical system. The study of complex living systems must alternate between focusing on the parts and examining the intact whole organism while taking into account the connections between the two. Systems biology aims for this holism. We examined the actions of inhalational anesthetic agents and anti-neoplastic agents in order to address what the characteristics of complex living systems imply for inter-species extrapolation of traits and responses related to conserved processes. We conclude that even the presence of conserved processes is insufficient for inter-species extrapolation when the trait or response

  12. Bereday and Hilker: Origins of the "Four Steps of Comparison" Model

    Science.gov (United States)

    Adick, Christel

    2018-01-01

    The article draws attention to the forgotten ancestry of the "four steps of comparison" model (description--interpretation--juxtaposition--comparison). Comparativists largely attribute this to George Z. F. Bereday [1964. "Comparative Method in Education." New York: Holt, Rinehart and Winston], but among German scholars, it is…

  13. Anticipated tt-bar states in the quark-confining two-step potential model

    International Nuclear Information System (INIS)

    Kulshreshtha, D.S.

    1980-12-01

The mass spectrum, thresholds and decay widths of the anticipated tt-bar states are studied as a function of quark mass in a simple, analytically solvable, quark-confining two-step potential model previously used for charmonium and bottomonium. (author)

  14. The cc-bar and bb-bar spectroscopy in the two-step potential model

    International Nuclear Information System (INIS)

    Kulshreshtha, D.S.; Kaiserslautern Univ.

    1984-07-01

We investigate the spectroscopy of the charmonium (cc-bar) and bottomonium (bb-bar) bound states in a static, flavour-independent, nonrelativistic quark-antiquark (qq-bar) two-step potential model proposed earlier. Our predictions are in good agreement with experimental data and with other theoretical predictions. (author)

  15. Model for amorphous aggregation processes

    Science.gov (United States)

    Stranks, Samuel D.; Ecroyd, Heath; van Sluyter, Steven; Waters, Elizabeth J.; Carver, John A.; von Smekal, Lorenz

    2009-11-01

The amorphous aggregation of proteins is associated with many phenomena, ranging from the formation of protein wine haze to the development of cataract in the eye lens and the precipitation of recombinant proteins during their expression and purification. While much literature exists describing models for linear protein aggregation, such as amyloid fibril formation, there are few reports of models which address amorphous aggregation. Here, we propose a model to describe the amorphous aggregation of proteins which is also more widely applicable to other situations where a similar process occurs, such as in the formation of colloids and nanoclusters. As first applications of the model, we have tested it against experimental turbidimetry data for three proteins relevant to the wine industry and biochemistry, namely, thaumatin, a thaumatin-like protein, and α-lactalbumin. The model is very robust and describes the amorphous aggregation data to a high degree of accuracy. Details about the aggregation process, such as shape parameters of the aggregates and rate constants, can also be extracted.

  16. A permeation theory for single-file ion channels: one- and two-step models.

    Science.gov (United States)

    Nelson, Peter Hugo

    2011-04-28

    How many steps are required to model permeation through ion channels? This question is investigated by comparing one- and two-step models of permeation with experiment and MD simulation for the first time. In recent MD simulations, the observed permeation mechanism was identified as resembling a Hodgkin and Keynes knock-on mechanism with one voltage-dependent rate-determining step [Jensen et al., PNAS 107, 5833 (2010)]. These previously published simulation data are fitted to a one-step knock-on model that successfully explains the highly non-Ohmic current-voltage curve observed in the simulation. However, these predictions (and the simulations upon which they are based) are not representative of real channel behavior, which is typically Ohmic at low voltages. A two-step association/dissociation (A/D) model is then compared with experiment for the first time. This two-parameter model is shown to be remarkably consistent with previously published permeation experiments through the MaxiK potassium channel over a wide range of concentrations and positive voltages. The A/D model also provides a first-order explanation of permeation through the Shaker potassium channel, but it does not explain the asymmetry observed experimentally. To address this, a new asymmetric variant of the A/D model is developed using the present theoretical framework. It includes a third parameter that represents the value of the "permeation coordinate" (fractional electric potential energy) corresponding to the triply occupied state n of the channel. This asymmetric A/D model is fitted to published permeation data through the Shaker potassium channel at physiological concentrations, and it successfully predicts qualitative changes in the negative current-voltage data (including a transition to super-Ohmic behavior) based solely on a fit to positive-voltage data (that appear linear). The A/D model appears to be qualitatively consistent with a large group of published MD simulations, but no

  17. The G2 erosion model: An algorithm for month-time step assessments.

    Science.gov (United States)

    Karydas, Christos G; Panagos, Panos

    2018-02-01

A detailed description of the G2 erosion model is presented in order to support potential users. G2 is a complete, quantitative algorithm for mapping soil loss and sediment yield rates at month-time intervals. G2 has been designed to run in a GIS environment, taking input from geodatabases made available by European or other international institutions. G2 adopts fundamental equations from the Revised Universal Soil Loss Equation (RUSLE) and the Erosion Potential Method (EPM), especially for rainfall erosivity, soil erodibility, and sediment delivery ratio. However, it has developed its own equations and matrices for the vegetation cover and management factor and for the effect of landscape alterations on erosion. Provision of month-time step assessments is expected to improve understanding of erosion processes, especially in relation to land uses and climate change. In parallel, G2 has full potential to support decision-making with standardised maps on a regular basis. Geospatial layers of rainfall erosivity, soil erodibility, and terrain influence, recently developed by the Joint Research Centre (JRC) on a European or global scale, will further facilitate applications of G2. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
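
    A schematic month-time-step calculation in the multiplicative, RUSLE-style form that G2 builds on. The exact G2 factor definitions (vegetation retention, landscape alteration, sediment delivery ratio) are not reproduced here, and all numbers are placeholders.

      import numpy as np

      # Monthly rainfall erosivity and cover-management factors (placeholder values).
      R_month = np.array([45, 40, 55, 60, 80, 30, 20, 25, 70, 90, 65, 50], dtype=float)
      C_month = np.array([.30, .28, .22, .15, .10, .08, .08, .09, .12, .18, .25, .30])
      K, LS, P = 0.032, 1.8, 1.0     # soil erodibility, terrain influence, support practice

      soil_loss = R_month * K * C_month * LS * P        # t/ha per month (schematic)
      print("annual:", round(soil_loss.sum(), 1), "t/ha; peak month:", int(soil_loss.argmax()) + 1)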

  18. Branching Patterns and Stepped Leaders in an Electric-Circuit Model for Creeping Discharge

    Science.gov (United States)

Hidetsugu Sakaguchi; Sahim M. Kourkouss

    2010-06-01

We construct a two-dimensional electric circuit model for creeping discharge. Two types of discharge, surface corona and surface leader, are modeled by a two-step function of conductance. Branched patterns of surface leaders surrounded by the surface corona appear in numerical simulations. The fractal dimension of the branched discharge patterns is calculated as the voltage and capacitance are varied. We find that surface leaders often grow stepwise in time, as is observed in the stepped leaders of lightning.

  19. Geochemical modelization of differentiation processes by crystallization

    International Nuclear Information System (INIS)

    Cebria, J.M.; Lopez Ruiz, J.

    1994-01-01

During crystallization processes, major and trace elements and stable isotopes fractionate, whereas radiogenic isotopes do not change. The different equations proposed allow us to reproduce the variation in major and trace elements during these differentiation processes. In the case of simple fractional crystallization, the residual liquid is impoverished in compatible elements faster than it is enriched in incompatible elements as crystallization proceeds. During in situ crystallization the incompatible elements evolve in a similar way to the case of simple fractional crystallization, but the enrichment rate of the moderately incompatible elements is slower and the compatible elements do not suffer a depletion as strong as the one observed during simple fractional crystallization, even for higher f values. In a periodically replenished magma chamber, if all the liquid present is removed at the end of each cycle, the magma follows patterns similar to those generated by simple fractional crystallization. On the contrary, if the liquid fraction that crystallizes during each cycle and the one that is extruded at the end of the cycle are small, the residual liquid shows compositions similar to those that would be obtained by equilibrium crystallization. Modelling of crystallization processes is in general less difficult than that of partial melting. If a rock series is the result of simple fractional crystallization, a C_i^L versus C_j^L plot, in which i is a compatible element and j a highly incompatible one, allows us to obtain a good approximation to the initial liquid composition. Additionally, log C_i^L versus log C_j^L diagrams, in which i is a highly incompatible element, allow us to identify steps in the process and to calculate the bulk distribution coefficients of the trace elements during each step
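
    The simple fractional crystallization case mentioned above follows the Rayleigh law C_L = C_0 * F**(D - 1), so a log C_i^L versus log C_j^L plot is linear with slope (D_i - 1)/(D_j - 1). The snippet below illustrates this with arbitrary values, not data from the paper.

      import numpy as np

      F = np.linspace(1.0, 0.3, 50)                 # melt fraction remaining

      def rayleigh(C0, D):
          # Residual-liquid concentration during simple fractional crystallization.
          return C0 * F ** (D - 1.0)

      C_compatible   = rayleigh(C0=200.0, D=4.0)    # compatible element: rapid depletion
      C_incompatible = rayleigh(C0=10.0,  D=0.05)   # highly incompatible element: slow enrichment

      slope = np.polyfit(np.log10(C_incompatible), np.log10(C_compatible), 1)[0]
      print(slope, (4.0 - 1.0) / (0.05 - 1.0))      # both about -3.16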

  20. Balancing Opposing Forces—A Nested Process Evaluation Study Protocol for a Stepped Wedge Designed Cluster Randomized Controlled Trial of an Experience Based Codesign Intervention

    Directory of Open Access Journals (Sweden)

    Victoria Jane Palmer

    2016-10-01

Full Text Available Background: Process evaluations are essential to understand the contextual, relational, and organizational and system factors of complex interventions. The guidance for developing process evaluations for randomized controlled trials (RCTs) has, however, until recently been fairly limited. Method/Design: A nested process evaluation (NPE) was designed and embedded across all stages of a stepped wedge cluster RCT called the CORE study. The aim of the CORE study is to test the effectiveness of an experience-based codesign methodology for improving psychosocial recovery outcomes for people living with severe mental illness (service users). Process evaluation data collection combines qualitative and quantitative methods with four aims: (1) to describe organizational characteristics, service models, policy contexts, and government reforms and examine the interaction of these with the intervention; (2) to understand how the codesign intervention works, the cluster variability in implementation, and if the intervention is or is not sustained in different settings; (3) to assist in the interpretation of the primary and secondary outcomes and determine if the causal assumptions underpinning the codesign interventions are accurate; and (4) to determine the impact of a purposefully designed engagement model on the broader study retention and knowledge transfer in the trial. Discussion: Process evaluations require prespecified study protocols, but finding a balance between their iterative nature and the structure offered by protocol development is an important step forward. Taking this step will advance the role of qualitative research within trials research and enable more focused data collection to occur at strategic points within studies.

  1. Operational Control Procedures for the Activated Sludge Process, Part III-B: Calculation Procedures for Step-Feed Process Responses and Addendum No. 1.

    Science.gov (United States)

    West, Alfred W.

    This is the third in a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. This document deals with the calculation procedures associated with a step-feed process. Illustrations and examples are included to…

  2. Effect of the selective adsorption on the reactive scattering process of molecular beams from stepped surfaces

    International Nuclear Information System (INIS)

    Garcia, N.

    1977-01-01

    An indicative proposal which may explain the diffusion of incident atomic beams scattered by a crystal surface is made in terms of the selective adsorption mechanism. In this sense, the stepped metallic surfaces present characteristics which enhance the displacements and the lifetimes of the beams on the surface. This may be important for increasing the exchange reactive scattering of molecules from crystal surfaces

  3. Models of memory: information processing.

    Science.gov (United States)

    Eysenck, M W

    1988-01-01

    A complete understanding of human memory will necessarily involve consideration of the active processes involved at the time of learning and of the organization and nature of representation of information in long-term memory. In addition to process and structure, it is important for theory to indicate the ways in which stimulus-driven and conceptually driven processes interact with each other in the learning situation. Not surprisingly, no existent theory provides a detailed specification of all of these factors. However, there are a number of more specific theories which are successful in illuminating some of the component structures and processes. The working memory model proposed by Baddeley and Hitch (1974) and modified subsequently has shown how the earlier theoretical construct of the short-term store should be replaced with the notion of working memory. In essence, working memory is a system which is used both to process information and to permit the transient storage of information. It comprises a number of conceptually distinct, but functionally interdependent components. So far as long-term memory is concerned, there is evidence of a number of different kinds of representation. Of particular importance is the distinction between declarative knowledge and procedural knowledge, a distinction which has received support from the study of amnesic patients. Kosslyn has argued for a distinction between literal representation and propositional representation, whereas Tulving has distinguished between episodic and semantic memories. While Tulving's distinction is perhaps the best known, there is increasing evidence that episodic and semantic memory differ primarily in content rather than in process, and so the distinction may be of less theoretical value than was originally believed.(ABSTRACT TRUNCATED AT 250 WORDS)

  4. Use of the step-up action research model to improve trauma-related nursing educational practice.

    Science.gov (United States)

    Seale, Ielse; De Villiers, Johanna C

    2015-10-23

A lack of authentic learning opportunities influences the quality of emergency training of nursing students. The purpose of this article is to describe how the step-up action research model was used to improve the quality of trauma-related educational practice for undergraduate nursing students. To reduce deaths caused by trauma, healthcare workers should be competent to provide emergency care and collaborate effectively with one another. A simulated mass casualty incident, structured to support the integration of theory into practice, became a more rigorous action research activity which focused on the quality improvement of the mass casualty incident. The results indicated improved student learning; partnership appreciation; improved student coping mechanisms, and increased student exposure. Quality emergency training thus results in better real-life collaboration in emergency contexts. The step-up action research model proved to be a collaborative and flexible process. To improve the quality and rigour of educational programmes, it is therefore recommended that the step-up action research model be routinely used in the execution of educational practices.

  5. How the Use of ICT can Contribute to a Misleading Picture of Conditions – A Five-Step Process

    Directory of Open Access Journals (Sweden)

    Stefan Holgersson

    2015-11-01

Full Text Available This paper contributes to the limited research on the roles ICT can play in impression-management strategies and is based on case studies conducted in the Swedish Police. It also makes a theoretical contribution by adopting a holistic approach to explain how ICT can contribute to giving a misleading picture of conditions. Output generated by ICT nowadays has a central role in follow-up activities and decision-making. Even if this type of output, often in colourful, presentable, graphical arrangements, gives the impression of being accurate and reliable, there is a risk of defective data quality. The phenomenon can be described as a process divided into five steps. The first step concerns how the data is generated and/or collected. The second step is linked to how the data is registered. The third step concerns the output generated from the ICT systems. The fourth step is how the output of ICT is selected for presentation. The fifth step concerns how output generated by ICT is interpreted. This paper shows that ICT can easily be used in impression-management strategies, for example when personnel take shortcuts to affect the statistics rather than applying methods that may give the desired effects.

  6. Mathematical modeling of biological processes

    CERN Document Server

    Friedman, Avner

    2014-01-01

    This book on mathematical modeling of biological processes includes a wide selection of biological topics that demonstrate the power of mathematics and computational codes in setting up biological processes with a rigorous and predictive framework. Topics include: enzyme dynamics, spread of disease, harvesting bacteria, competition among live species, neuronal oscillations, transport of neurofilaments in axon, cancer and cancer therapy, and granulomas. Complete with a description of the biological background and biological question that requires the use of mathematics, this book is developed for graduate students and advanced undergraduate students with only basic knowledge of ordinary differential equations and partial differential equations; background in biology is not required. Students will gain knowledge on how to program with MATLAB without previous programming experience and how to use codes in order to test biological hypothesis.

  7. On the two steps threshold selection for over-threshold modelling of extreme events

    Science.gov (United States)

    Bernardara, Pietro; Mazas, Franck; Weiss, Jerome; Andreewsky, Marc; Kergadallan, Xavier; Benoit, Michel; Hamm, Luc

    2013-04-01

The estimation of the probability of occurrence of extreme events is traditionally achieved by fitting a probability distribution to a sample of extreme observations. In particular, the extreme value theory (EVT) states that values exceeding a given threshold converge to a Generalized Pareto Distribution (GPD) if the original sample is composed of independent and identically distributed values. However, the time series of sea and ocean variables usually show strong temporal autocorrelation. Traditionally, in order to select independent events for the subsequent statistical analysis, the concept of a physical threshold is introduced: events that exceed that threshold are defined as "extreme events". This is the so-called "Peak Over Threshold (POT)" sampling, widespread in the literature and currently used for engineering applications, among many others. In the past, the threshold for the statistical sampling of extreme values asymptotically converging toward the GPD and the threshold for the physical selection of independent extreme events were confused, as the same threshold was used both for sampling the data and for meeting the hypothesis of extreme value convergence, leading to some incoherencies. In particular, if the two steps are performed simultaneously, the number of peaks over the threshold can increase but also decrease when the threshold decreases. This is logical from a physical point of view, since the definition of the sample of "extreme events" changes, but it is not coherent with the statistical theory. We introduce a two-step threshold selection for over-threshold modelling, aiming to discriminate (i) a physical threshold for the selection of extreme and independent events, and (ii) a statistical threshold for the optimization of the coherence with the hypothesis of the EVT. The former is a physical event identification procedure (also called "declustering") aiming at selecting independent extreme events. The latter is a purely statistical optimization
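
    A bare-bones sketch of the two-step idea, with a run-length declustering rule standing in for the physical event identification and a simple shape-stability scan standing in for the statistical threshold optimisation; neither matches the paper's exact procedure, and all names are illustrative.

      import numpy as np
      from scipy.stats import genpareto

      def decluster(series, u_phys, min_separation):
          # Step 1 (physical threshold): keep one peak per cluster of exceedances
          # separated by at least min_separation samples, to obtain independent events.
          idx = np.where(series > u_phys)[0]
          peaks, cluster = [], [idx[0]]
          for i in idx[1:]:
              if i - cluster[-1] > min_separation:
                  peaks.append(cluster[int(np.argmax(series[cluster]))])
                  cluster = []
              cluster.append(i)
          peaks.append(cluster[int(np.argmax(series[cluster]))])
          return series[peaks]

      def gpd_scan(events, stat_thresholds):
          # Step 2 (statistical threshold): fit a GPD to the excesses above each
          # candidate threshold and inspect the stability of the fitted shape parameter.
          fits = []
          for u in stat_thresholds:
              exc = events[events > u] - u
              if exc.size >= 10:
                  shape, _, scale = genpareto.fit(exc, floc=0.0)
                  fits.append((u, exc.size, shape, scale))
          return fits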

  8. Modeling pellet impact drilling process

    Science.gov (United States)

    Kovalyov, A. V.; Ryabchikov, S. Ya; Isaev, Ye D.; Ulyanova, O. S.

    2016-03-01

The paper describes pellet impact drilling, which could be used to increase the drilling speed and the rate of penetration when drilling hard rocks. Pellet impact drilling implies rock destruction by metal pellets with high kinetic energy in the immediate vicinity of the earth formation encountered. The pellets are circulated in the bottom hole by a high-velocity fluid jet, which is the principal component of the ejector pellet impact drill bit. The experiments conducted have allowed modeling the process of pellet impact drilling, which creates the scientific and methodological basis for the engineering design of drilling operations under different geotechnical conditions.

  9. Modeling and experimental characterization of stepped and v-shaped (311) defects in silicon

    Energy Technology Data Exchange (ETDEWEB)

    Marqués, Luis A., E-mail: lmarques@ele.uva.es; Aboy, María [Departamento de Electrónica, Universidad de Valladolid, E.T.S.I. de Telecomunicación, 47011 Valladolid (Spain); Dudeck, Karleen J.; Botton, Gianluigi A. [Department of Materials Science and Engineering, McMaster University, 1280 Main Street West, Hamilton, Ontario L8S 4L7 (Canada); Knights, Andrew P. [Department of Engineering Physics, McMaster University, 1280 Main Street West, Hamilton, Ontario L8S 4L7 (Canada); Gwilliam, Russell M. [Surrey Ion Beam Centre, University of Surrey, Guildford, Surrey GU2 7XH (United Kingdom)

    2014-04-14

    We propose an atomistic model to describe extended (311) defects in silicon. It is based on the combination of interstitial and bond defect chains. The model is able to accurately reproduce not only planar (311) defects but also defect structures that show steps, bends, or both. We use molecular dynamics techniques to show that these interstitial and bond defect chains spontaneously transform into extended (311) defects. Simulations are validated by comparing with precise experimental measurements on actual (311) defects. The excellent agreement between the simulated and experimentally derived structures, regarding individual atomic positions and shape of the distinct structural (311) defect units, provides strong evidence for the robustness of the proposed model.

  10. A visual analysis of the process of process modeling

    NARCIS (Netherlands)

    Claes, J.; Vanderfeesten, I.; Pinggera, J.; Reijers, H.A.; Weber, B.; Poels, G.

    2015-01-01

The construction of business process models has become an important requisite in the analysis and optimization of processes. The success of the analysis and optimization efforts heavily depends on the quality of the models. Therefore, a research domain emerged that studies the process of process modeling.

  11. Dissolution Processes at Step Edges of Calcite in Water Investigated by High-Speed Frequency Modulation Atomic Force Microscopy and Simulation.

    Science.gov (United States)

    Miyata, Kazuki; Tracey, John; Miyazawa, Keisuke; Haapasilta, Ville; Spijker, Peter; Kawagoe, Yuta; Foster, Adam S; Tsukamoto, Katsuo; Fukuma, Takeshi

    2017-07-12

The microscopic understanding of crystal growth and dissolution processes has been greatly advanced by the direct imaging of nanoscale step flows by atomic force microscopy (AFM), optical interferometry, and X-ray microscopy. However, one of the most fundamental aspects governing their kinetics, namely the atomistic events at the step edges, has not been well understood. In this study, we have developed high-speed frequency modulation AFM (FM-AFM) and enabled true atomic-resolution imaging in liquid at ∼1 s/frame, which is ∼50 times faster than conventional FM-AFM. With the developed AFM, we have directly imaged subnanometer-scale surface structures around the moving step edges of calcite during its dissolution in water. The obtained images reveal that a transition region with a typical width of a few nanometers is formed along the step edges. Building upon insight from previous studies, our simulations suggest that the transition region is most likely a Ca(OH)2 monolayer formed as an intermediate state in the dissolution process. On the basis of this finding, we improve our understanding of the atomistic dissolution model of calcite in water. These results open up a wide range of future applications of high-speed FM-AFM to studies of various dynamic processes at solid-liquid interfaces with true atomic resolution.

  12. Development of a three dimensional circulation model based on fractional step method

    Directory of Open Access Journals (Sweden)

    Mazen Abualtayef

    2010-03-01

Full Text Available A numerical model was developed for simulating three-dimensional multilayer hydrodynamics and thermodynamics in domains with irregular bottom topography. The model was designed for examining the interactions between flow and topography. It was based on the three-dimensional Navier-Stokes equations and was solved using the fractional step method, which combines the finite difference method in the horizontal plane and the finite element method in the vertical plane. The numerical techniques were described and the model test and application were presented. For the model application to the northern part of the Ariake Sea, the hydrodynamic and thermodynamic results were predicted. The numerically predicted amplitudes and phase angles agreed well with the field observations.
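
    A one-dimensional toy illustration of the fractional-step (operator-splitting) idea: each time step is split into an advection sub-step and a diffusion sub-step solved in sequence. The paper's hybrid finite-difference/finite-element discretisation of the full 3D equations is of course far more involved; the scheme and values below are only conceptual.

      import numpy as np

      def fractional_step(u, c, nu, dx, dt, nsteps):
          for _ in range(nsteps):
              # Sub-step 1: explicit upwind advection, u* = u - c*dt*du/dx.
              u_star = u - c * dt / dx * (u - np.roll(u, 1))
              # Sub-step 2: explicit diffusion applied to the intermediate field u*.
              u = u_star + nu * dt / dx ** 2 * (np.roll(u_star, 1) - 2 * u_star + np.roll(u_star, -1))
          return u

      x = np.linspace(0.0, 1.0, 200, endpoint=False)
      u0 = np.exp(-200.0 * (x - 0.3) ** 2)
      u = fractional_step(u0.copy(), c=1.0, nu=1e-4, dx=x[1] - x[0], dt=2e-3, nsteps=100)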

  13. FIRST STEPS TOWARDS AN INTEGRATED CITYGML-BASED 3D MODEL OF VIENNA

    Directory of Open Access Journals (Sweden)

    G. Agugiaro

    2016-06-01

This paper reports on the experience gained so far: it describes the test area and the available data sources, illustrates the data integration issues, and presents the strategies developed to solve them in order to obtain the integrated 3D city model. The first results, as well as some comments about their quality and limitations, are presented, together with a discussion of the next steps and some planned improvements.

  14. The portal protein plays essential roles at different steps of the SPP1 DNA packaging process

    International Nuclear Information System (INIS)

    Isidro, Anabela; Henriques, Adriano O.; Tavares, Paulo

    2004-01-01

    A large number of viruses use a specialized portal for entry of DNA to the viral capsid and for its polarized exit at the beginning of infection. These families of viruses assemble an icosahedral procapsid containing a portal protein oligomer in one of its 12 vertices. The viral ATPase (terminase) interacts with the portal vertex to form a powerful molecular motor that translocates DNA to the procapsid interior against a steep concentration gradient. The portal protein is an essential component of this DNA packaging machine. Characterization of single amino acid substitutions in the portal protein gp6 of bacteriophage SPP1 that block DNA packaging identified sequential steps in the packaging mechanism that require its action. Gp6 is essential at early steps of DNA packaging and for DNA translocation to the capsid interior, it affects the efficiency of DNA packaging, it is a central component of the headful sensor that determines the size of the packaged DNA molecule, and is essential for closure of the portal pore by the head completion proteins to prevent exit of the DNA encapsidated. Functional regions of gp6 necessary at each step are identified within its primary structure. The similarity between the architecture of portal oligomers and between the DNA packaging strategies of viruses using portals strongly suggests that the portal protein plays the same roles in a large number of viruses

  15. Collapse models and perceptual processes

    International Nuclear Information System (INIS)

    Ghirardi, Gian Carlo; Romano, Raffaele

    2014-01-01

Theories including a collapse mechanism were presented various years ago. They are based on a modification of standard quantum mechanics in which nonlinear and stochastic terms are added to the evolution equation. Their principal merits derive from the fact that they are mathematically precise schemes accounting, on the basis of a unique universal dynamical principle, both for the quantum behavior of microscopic systems and for the reduction associated with measurement processes and the classical behavior of macroscopic objects. Since such theories qualify themselves not as new interpretations but as modifications of the standard theory, they can, in principle, be tested against quantum mechanics. Recently, various investigations identifying possible crucial tests have been discussed. In spite of the extreme difficulty of performing such tests, it seems that recent technological developments allow at least precise limits to be put on the parameters characterizing the modifications of the evolution equation. Here we will simply mention some of the recent investigations in this direction, while we will mainly concentrate our attention on the way in which collapse theories account for definite perceptual processes. The differences between the case of reductions induced by perceptions and those related to measurement procedures by means of standard macroscopic devices will be discussed. On this basis, we suggest a precise experimental test of collapse theories involving conscious observers. We make plausible, by discussing in detail a toy model, that the modified dynamics can give rise to quite small but systematic errors in the visual perceptual process.

  16. A new methodology to determine kinetic parameters for one- and two-step chemical models

    Science.gov (United States)

    Mantel, T.; Egolfopoulos, F. N.; Bowman, C. T.

    1996-01-01

In this paper, a new methodology to determine kinetic parameters for the simple chemical models and simple transport properties classically used in DNS of premixed combustion is presented. First, a one-dimensional code is used to compute steady unstrained laminar methane-air flames in order to verify intrinsic features of laminar flames such as the burning velocity and the temperature and concentration profiles. Second, the flame response to steady and unsteady strain in the opposed-jet configuration is numerically investigated. It appears that, for a well-determined set of parameters, one- and two-step mechanisms reproduce the extinction limit of a laminar flame submitted to a steady strain. Computations with the GRI-Mech mechanism (177 reactions, 39 species) and multicomponent transport properties are used to validate these simplified models. A sensitivity analysis of the preferential diffusion of heat and reactants when the Lewis number is close to unity indicates that the response of the flame to an oscillating strain is very sensitive to this number. As an application of this methodology, the interaction between a two-dimensional vortex pair and a premixed laminar flame is simulated by Direct Numerical Simulation (DNS) using the one- and two-step mechanisms. Comparison with the experimental results of Samaniego et al. (1994) shows a significant improvement in the description of the interaction when the two-step model is used.
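
    For context, the kind of one-step global mechanism these simplified models use reduces the chemistry to a single Arrhenius rate; the pre-exponential factor, reaction orders and activation energy below are placeholders, not the values calibrated in the paper.

      import numpy as np

      R_GAS = 8.314                                    # J/(mol K)

      def one_step_rate(Y_F, Y_O, rho, T, A=1.6e9, a=1.0, b=1.0, Ea=1.3e5):
          # Global reaction rate  w = A * (rho*Y_F)**a * (rho*Y_O)**b * exp(-Ea / (R*T)).
          return A * (rho * Y_F) ** a * (rho * Y_O) ** b * np.exp(-Ea / (R_GAS * T))

      # The calibration idea: tune (A, Ea) so that the predicted burning velocity and
      # extinction strain rate match detailed-chemistry references.
      print(one_step_rate(Y_F=0.055, Y_O=0.22, rho=0.5, T=1800.0))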

  17. The role of particle jamming on the formation and stability of step-pool morphology: insight from a reduced-complexity model

    Science.gov (United States)

    Saletti, M.; Molnar, P.; Hassan, M. A.

    2017-12-01

Granular processes have been recognized as key drivers in earth surface dynamics, especially in steep landscapes, because of the large size of sediment found in channels. In this work we focus on step-pool morphologies, studying the effect of particle jamming on step formation. Starting from the jammed-state hypothesis, we assume that grains generate steps because of particle jamming and that those steps are inherently more stable because of additional force chains in the transverse direction. We test this hypothesis with a particle-based reduced-complexity model, CAST2, where sediment is organized in patches and the entrainment, transport and deposition of grains depend on flow stage and local topography through simplified phenomenological rules. The model operates with two grain sizes: fine grains, which can be mobilized by both large and moderate flows, and coarse grains, which are mobile only during large floods. First, we identify the minimum set of processes necessary to generate and maintain steps in a numerical channel: (a) occurrence of floods, (b) particle jamming, (c) low sediment supply, and (d) presence of sediment with different entrainment probabilities. Numerical results are compared with field observations collected in different step-pool channels in terms of step density, a variable that captures the proportion of the channel occupied by steps. Not only do the longitudinal profiles of the numerical channels display step sequences similar to those observed in real step-pool streams, but the values of step density are also very similar when all the processes mentioned before are considered. Moreover, with CAST2 it is possible to run long simulations with repeated flood events, to test the effect of flood frequency on step formation. Numerical results indicate that larger step densities belong to systems more frequently perturbed by floods, compared to systems having a lower flood frequency. Our results highlight the important interactions between external hydrological forcing and

  18. A Single-step Process to Convert Karanja Oil to Fatty Acid Methyl Esters Using Amberlyst15 as a Catalyst

    Directory of Open Access Journals (Sweden)

    Arun K. Gupta

    2018-03-01

Full Text Available Karanja oil was successfully converted to fatty acid methyl esters (FAME) in a single-step process using Amberlyst15 as a catalyst. A methanol to oil ratio of 6 was required to retain the physical structure of the Amberlyst15 catalyst. At higher methanol to oil ratios, the Amberlyst15 catalyst disintegrated. Disintegration of Amberlyst15 caused an irreversible loss in catalytic activity. This loss in activity was due to a decrease in surface area of Amberlyst15, which was caused by a decrease in its mesoporous volume. It appeared that the chemical nature of Amberlyst15 was unaffected. Reuse of Amberlyst15 with a methanol to oil ratio of 6:1 also revealed a loss in FAME yield. However, this loss in activity was recovered by heating the used Amberlyst15 catalyst to 393 K. The kinetic parameters of a power law model were successfully determined for a methanol to oil ratio of 6:1. An activation energy of 54.9 kJ/mol was obtained.

  19. A proposed adaptive step size perturbation and observation maximum power point tracking algorithm based on photovoltaic system modeling

    Science.gov (United States)

    Huang, Yu

Solar energy has become one of the major alternative renewable energy options because of its abundance and accessibility. Due to its intermittent nature, there is a high demand for Maximum Power Point Tracking (MPPT) techniques when a photovoltaic (PV) system is used to extract energy from sunlight. This thesis proposes an advanced Perturbation and Observation (P&O) algorithm aimed at relatively practical circumstances. Firstly, a practical PV system model is studied, including determination of the series and shunt resistances, which are neglected in some research. Moreover, in the proposed algorithm, the duty ratio of a boost DC-DC converter is the perturbed variable, and input impedance conversion is deployed to adjust the operating voltage. Based on this control strategy, an adaptive duty-ratio step-size P&O algorithm is proposed, with major modifications made for sharp insolation changes as well as low-insolation scenarios. Matlab/Simulink simulations of the PV model, the boost converter control strategy and the various MPPT processes are conducted step by step. The proposed adaptive P&O algorithm is validated by the simulation results and a detailed analysis of sharp insolation changes, low-insolation conditions and continuous insolation variation.
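
    A skeleton of a perturb-and-observe loop with an adaptive duty-ratio step of the general kind discussed above. The scaling rule, the limits and the measure_pv interface are illustrative assumptions and do not reproduce the thesis' specific modifications for sharp or low insolation.

      def adaptive_po_mppt(measure_pv, d0=0.5, step0=0.02, k=5e-4, n_iter=200):
          # measure_pv(duty) -> (V, I) at the PV terminals for a given boost-converter duty ratio.
          d, step, direction = d0, step0, +1
          V_prev, I_prev = measure_pv(d)
          P_prev = V_prev * I_prev
          for _ in range(n_iter):
              d = min(max(d + direction * step, 0.05), 0.95)   # perturb the duty ratio
              V, I = measure_pv(d)
              P = V * I
              if P < P_prev:                                   # power dropped: reverse direction
                  direction = -direction
              dV = V - V_prev
              # Adaptive step: proportional to |dP/dV|, large far from the MPP, small near it.
              step = min(abs(k * (P - P_prev) / dV), step0) if dV != 0 else step0
              V_prev, P_prev = V, P
          return d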

  20. School Experience Comes Alive in an Integrated B.Ed. Programme. (A Step by Step Descriptive Model).

    Science.gov (United States)

    Kelliher, Marie H.; Balint, Margaret S.

    A model is presented of a preservice teacher education program (used at Polding College, Sydney, Australia) that consists of three years' academic preparation, and at least one year of classroom teaching experience, followed by one year of full-time study or its equivalent. The school experience is the core of this program and it is integrated…

  1. Modeling single-file diffusion with step fractional Brownian motion and a generalized fractional Langevin equation

    International Nuclear Information System (INIS)

    Lim, S C; Teo, L P

    2009-01-01

Single-file diffusion behaves as normal diffusion at small time and as subdiffusion at large time. These properties can be described in terms of fractional Brownian motion with variable Hurst exponent or multifractional Brownian motion. We introduce a new stochastic process called Riemann–Liouville step fractional Brownian motion which can be regarded as a special case of multifractional Brownian motion with a step function type of Hurst exponent tailored for single-file diffusion. Such a step fractional Brownian motion can be obtained as a solution of the fractional Langevin equation with zero damping. Various kinds of fractional Langevin equations and their generalizations are then considered in order to decide whether their solutions provide the correct description of the long and short time behaviors of single-file diffusion. The cases where the dissipative memory kernel is a Dirac delta function, a power-law function and a combination of these functions are studied in detail. In addition to the case where the short time behavior of single-file diffusion behaves as normal diffusion, we also consider the possibility of a process that begins as ballistic motion.

  2. Modeling single-file diffusion with step fractional Brownian motion and a generalized fractional Langevin equation

    Science.gov (United States)

    Lim, S. C.; Teo, L. P.

    2009-08-01

    Single-file diffusion behaves as normal diffusion at small time and as subdiffusion at large time. These properties can be described in terms of fractional Brownian motion with variable Hurst exponent or multifractional Brownian motion. We introduce a new stochastic process called Riemann-Liouville step fractional Brownian motion which can be regarded as a special case of multifractional Brownian motion with a step function type of Hurst exponent tailored for single-file diffusion. Such a step fractional Brownian motion can be obtained as a solution of the fractional Langevin equation with zero damping. Various kinds of fractional Langevin equations and their generalizations are then considered in order to decide whether their solutions provide the correct description of the long and short time behaviors of single-file diffusion. The cases where the dissipative memory kernel is a Dirac delta function, a power-law function and a combination of these functions are studied in detail. In addition to the case where the short time behavior of single-file diffusion behaves as normal diffusion, we also consider the possibility of a process that begins as ballistic motion.
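
    A direct discretisation of the Riemann-Liouville integral with a step Hurst exponent, as a numerical sketch of the process described above (H = 1/2 at short times giving normal diffusion, H = 1/4 at long times giving subdiffusion). The Langevin-equation derivation itself is not reproduced, and the parameter values are illustrative.

      import numpy as np
      from scipy.special import gamma

      def rl_step_fbm(n=2000, dt=1e-3, t_switch=1.0, H_short=0.5, H_long=0.25, seed=0):
          # B_H(t) = Gamma(H + 1/2)**-1 * integral_0^t (t - s)**(H - 1/2) dW(s),
          # with the Hurst exponent switching from H_short to H_long at t_switch.
          rng = np.random.default_rng(seed)
          dW = rng.normal(0.0, np.sqrt(dt), n)
          t = (np.arange(n) + 1) * dt
          B = np.zeros(n)
          for i in range(1, n):
              H = H_short if t[i] < t_switch else H_long    # step Hurst exponent
              kernel = (t[i] - t[:i]) ** (H - 0.5)          # Riemann-Liouville kernel
              B[i] = (kernel * dW[:i]).sum() / gamma(H + 0.5)
          return t, B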

  3. Improved Reproducibility for Perovskite Solar Cells with 1 cm2 Active Area by a Modified Two-Step Process.

    Science.gov (United States)

    Shen, Heping; Wu, Yiliang; Peng, Jun; Duong, The; Fu, Xiao; Barugkin, Chog; White, Thomas P; Weber, Klaus; Catchpole, Kylie R

    2017-02-22

With rapid progress in recent years, organohalide perovskite solar cells (PSC) are promising candidates for a new generation of highly efficient thin-film photovoltaic technologies, for which up-scaling is an essential step toward commercialization. In this work, we propose a modified two-step method to deposit the CH3NH3PbI3 (MAPbI3) perovskite film that improves the uniformity, photovoltaic performance, and repeatability of large-area perovskite solar cells. This method is based on the commonly used two-step method, with one additional process involving treating the perovskite film with concentrated methylammonium iodide (MAI) solution. This additional treatment is proved to be helpful for tailoring the residual PbI2 level to an optimal range that is favorable for both optical absorption and inhibition of recombination. Scanning electron microscopy and photoluminescence image analysis further reveal that, compared to the standard two-step and one-step methods, this method is very robust for achieving uniform and pinhole-free large-area films. This is validated by the photovoltaic performance of the prototype devices with an active area of 1 cm2, where we achieved the champion efficiency of ∼14.5% and an average efficiency of ∼13.5%, with excellent reproducibility.

  4. Bioactive Carbohydrates and Peptides in Foods: An Overview of Sources, Downstream Processing Steps and Associated Bioactivities

    Directory of Open Access Journals (Sweden)

    Maria Hayes

    2015-09-01

Full Text Available Bioactive peptides and carbohydrates are sourced from a myriad of plant, animal and insect sources and have huge potential for use as food ingredients and pharmaceuticals. However, downstream processing bottlenecks hinder the potential use of these natural bioactive compounds and add cost to production processes. This review discusses the health benefits and bioactivities associated with peptides and carbohydrates of natural origin and downstream processing methodologies and novel processes which may be used to overcome these bottlenecks.

  5. Bioactive Carbohydrates and Peptides in Foods: An Overview of Sources, Downstream Processing Steps and Associated Bioactivities

    OpenAIRE

    Maria Hayes; Brijesh K. Tiwari

    2015-01-01

Bioactive peptides and carbohydrates are sourced from a myriad of plant, animal and insect sources and have huge potential for use as food ingredients and pharmaceuticals. However, downstream processing bottlenecks hinder the potential use of these natural bioactive compounds and add cost to production processes. This review discusses the health benefits and bioactivities associated with peptides and carbohydrates of natural origin and downstream processing methodologies and novel processes which ma...

  6. Bioactive Carbohydrates and Peptides in Foods: An Overview of Sources, Downstream Processing Steps and Associated Bioactivities.

    Science.gov (United States)

    Hayes, Maria; Tiwari, Brijesh K

    2015-09-17

Bioactive peptides and carbohydrates are sourced from a myriad of plant, animal and insect sources and have huge potential for use as food ingredients and pharmaceuticals. However, downstream processing bottlenecks hinder the potential use of these natural bioactive compounds and add cost to production processes. This review discusses the health benefits and bioactivities associated with peptides and carbohydrates of natural origin and downstream processing methodologies and novel processes which may be used to overcome these bottlenecks.

  7. Optimized spray drying process for preparation of one-step calcium-alginate gel microspheres

    Energy Technology Data Exchange (ETDEWEB)

    Popeski-Dimovski, Riste [Department of physic, Faculty of Natural Sciences and Mathematics, “ss. Cyril and Methodius” University, Arhimedova 3, 1000 Skopje, R. Macedonia (Macedonia, The Former Yugoslav Republic of)

    2016-03-25

    Calcium-alginate micro particles have been used extensively in drug delivery systems. Therefore we establish a one-step method for preparation of internally gelated micro particles with spherical shape and narrow size distribution. We use four types of alginate with different G/M ratio and molar weight. The size of the particles is measured using light diffraction and scanning electron microscopy. Measurements showed that with this method, micro particles with size distribution around 4 micrometers can be prepared, and SEM imaging showed that those particles are spherical in shape.

  8. Model for Simulating a Spiral Software-Development Process

    Science.gov (United States)

    Mizell, Carolyn; Curley, Charles; Nayak, Umanath

    2010-01-01

    ), productivity (number of lines of code per hour), and number of defects per source line of code. The user provides the number of resources, the overall percent of effort that should be allocated to each process step, and the number of desired staff members for each step. The output of PATT includes the size of the product, a measure of effort, a measure of rework effort, the duration of the entire process, and the numbers of injected, detected, and corrected defects as well as a number of other interesting features. In the development of the present model, steps were added to the IEEE 12207 waterfall process, and this model and its implementing software were made to run repeatedly through the sequence of steps, each repetition representing an iteration in a spiral process. Because the IEEE 12207 model is founded on a waterfall paradigm, it enables direct comparison of spiral and waterfall processes. The model can be used throughout a software-development project to analyze the project as more information becomes available. For instance, data from early iterations can be used as inputs to the model, and the model can be used to estimate the time and cost of carrying the project to completion.

  9. Comparison of Different Turbulence Models for Numerical Simulation of Pressure Distribution in V-Shaped Stepped Spillway

    Directory of Open Access Journals (Sweden)

    Zhaoliang Bai

    2017-01-01

    V-shaped stepped spillway is a new type of stepped spillway, and its pressure distribution is quite different from that of the traditional stepped spillway. In this paper, five turbulence models were used to simulate the pressure distribution in the skimming flow regimes. Compared with values from the physical model, the realizable k-ε model had better precision in simulating the pressure distribution. The flow patterns of the V-shaped and traditional stepped spillways were then given to illustrate the unique pressure distribution, using the realizable k-ε turbulence model.

  10. Statistical efficiency and optimal design for stepped cluster studies under linear mixed effects models.

    Science.gov (United States)

    Girling, Alan J; Hemming, Karla

    2016-06-15

    In stepped cluster designs the intervention is introduced into some (or all) clusters at different times and persists until the end of the study. Instances include traditional parallel cluster designs and the more recent stepped-wedge designs. We consider the precision offered by such designs under mixed-effects models with fixed time and random subject and cluster effects (including interactions with time), and explore the optimal choice of uptake times. The results apply both to cross-sectional studies where new subjects are observed at each time-point, and longitudinal studies with repeat observations on the same subjects. The efficiency of the design is expressed in terms of a 'cluster-mean correlation' which carries information about the dependency-structure of the data, and two design coefficients which reflect the pattern of uptake-times. In cross-sectional studies the cluster-mean correlation combines information about the cluster-size and the intra-cluster correlation coefficient. A formula is given for the 'design effect' in both cross-sectional and longitudinal studies. An algorithm for optimising the choice of uptake times is described and specific results obtained for the best balanced stepped designs. In large studies we show that the best design is a hybrid mixture of parallel and stepped-wedge components, with the proportion of stepped wedge clusters equal to the cluster-mean correlation. The impact of prior uncertainty in the cluster-mean correlation is considered by simulation. Some specific hybrid designs are proposed for consideration when the cluster-mean correlation cannot be reliably estimated, using a minimax principle to ensure acceptable performance across the whole range of unknown values. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
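
    The abstract's headline result, that in large studies the best design mixes parallel and stepped-wedge clusters with the stepped-wedge proportion equal to the cluster-mean correlation, can be turned into a small allocation helper. The cross-sectional formula used below for the cluster-mean correlation (combining cluster size and the intra-cluster correlation) is the standard one and is assumed here rather than quoted from the paper.

```python
# Hedged sketch: allocate clusters between parallel and stepped-wedge arms
# using the rule stated in the abstract. The cluster-mean correlation formula
# is the standard cross-sectional form and is an assumption, not a quotation.

def cluster_mean_correlation(m, icc):
    """Correlation of cluster means for cluster size m and intra-cluster correlation icc."""
    return m * icc / (1 + (m - 1) * icc)

def hybrid_allocation(n_clusters, m, icc):
    r = cluster_mean_correlation(m, icc)
    n_stepped = round(r * n_clusters)        # proportion of stepped-wedge clusters = r
    return {"cluster_mean_correlation": r,
            "stepped_wedge_clusters": n_stepped,
            "parallel_clusters": n_clusters - n_stepped}

print(hybrid_allocation(n_clusters=30, m=50, icc=0.05))
```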

  11. A green two-step process for adipic acid production from cyclohexene. A study on parameters affecting selectivity

    Energy Technology Data Exchange (ETDEWEB)

    Cavani, F.; Macchia, F.; Pino, R.; Raabova, K.; Rozhko, E. [Bologna Univ. (Italy). Dipt. di Chimica Industriale e dei Materiali; Alini, S.; Accorinti, P.; Babini, G. [Radici Chimica SpA, Novara (Italy)

    2011-07-01

    In this paper, we report on the effect of reaction parameters on catalytic behavior in a two-step process aimed at the synthesis of adipic acid from cyclohexene. In the first step, cyclohexene reacts with an aqueous solution of hydrogen peroxide under conditions leading to the formation of trans-1,2-cyclohexanediol as the prevailing product; the reaction is catalysed by tungstic acid in the presence of phosphoric acid and a phase-transfer (PT) agent. In the second step, 1,2-cyclohexanediol is oxidized with air in the presence of a heterogeneous catalyst made of alumina-supported Ru(OH)3. This process is aimed at using the minimal amount of the costly hydrogen peroxide, since only one mole is theoretically needed per mole of cyclohexene. The first step afforded a very high yield of the glycol, using only a slight excess of hydrogen peroxide. However, the second step turned out to be the more critical one, since the selectivity to adipic acid was very low because of the concomitant occurrence of several undesired side reactions. The latter were in part due to the reaction conditions used, which were necessary for the activation of the cyclohexanediol. (orig.)

  12. Microstructural evolution of a superaustenitic stainless steel during a two-step deformation process

    Science.gov (United States)

    Bayat, N.; Ebrahimi, G. R.; Momeni, A.; Ezatpour, H. R.

    2018-02-01

    Single- and two-step hot compression experiments were carried out on 16Cr25Ni6Mo superaustenitic stainless steel in the temperature range from 950 to 1150°C and at a strain rate of 0.1 s⁻¹. In the two-step tests, the first pass was interrupted at a strain of 0.2; after an interpass time of 5, 20, 40, 60, or 80 s, the test was resumed. The progress of dynamic recrystallization at the interruption strain was less than 10%. The static softening in the interpass period increased with increasing deformation temperature and increasing interpass time. The static recrystallization was found to be responsible for fast static softening in the temperature range from 950 to 1050°C. However, the gentle static softening at 1100 and 1150°C was attributed to the combination of static and metadynamic recrystallizations. The correlation between calculated fractional softening and microstructural observations showed that approximately 30% of interpass softening could be attributed to the static recovery. The microstructural observations illustrated the formation of fine recrystallized grains at the grain boundaries at longer interpass time. The Avrami kinetics equation was used to establish a relationship between the fractional softening and the interpass period. The activation energy for static softening was determined as 276 kJ/mol.
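
    The Avrami relationship mentioned at the end of the abstract is not written out in the record. A generic JMAK form commonly fitted to interpass softening data, with an Arrhenius temperature dependence carrying the reported activation energy, would read as follows; the exponent n and prefactor k_0 are illustrative fit parameters, and only the 276 kJ/mol value comes from the abstract.

```latex
% Generic JMAK (Avrami) softening kinetics with an Arrhenius rate constant;
% only Q_s ~ 276 kJ/mol is taken from the abstract, n and k_0 are fit parameters.
X_s(t) = 1 - \exp\!\left(-k\,t^{\,n}\right),
\qquad
k = k_0 \exp\!\left(-\frac{Q_s}{RT}\right),
\qquad Q_s \approx 276\ \mathrm{kJ\,mol^{-1}} .
```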

  13. How to define 'best practice' for use in Knowledge Translation research: a practical, stepped and interactive process.

    Science.gov (United States)

    Bosch, Marije; Tavender, Emma; Bragge, Peter; Gruen, Russell; Green, Sally

    2013-10-01

    Defining 'best practice' is one of the first and crucial steps in any Knowledge Translation (KT) research project. Without a sound understanding of what exactly should happen in practice, it is impossible to measure the extent of existing gaps between 'desired' and 'actual' care, set implementation goals, and monitor performance. The aim of this paper is to present a practical, stepped and interactive process to develop best practice recommendations that are actionable, locally applicable and in line with the best available research-based evidence, with a view to adapt these into process measures (quality indicators) for KT research purposes. Our process encompasses the following steps: (1) identify current, high-quality clinical practice guidelines (CPGs) and extract recommendations; (2) select strong recommendations in key clinical management areas; (3) update evidence and create evidence overviews; (4) discuss evidence and produce agreed 'evidence statements'; (5) discuss the relevance of the evidence with local stakeholders; and (6) develop locally applicable actionable best practice recommendations, suitable for use as the basis of quality indicators. Actionable definitions of local best practice are a prerequisite for doing KT research. As substantial resources go into rigorously synthesizing evidence and developing CPGs, it is important to make best use of such available resources. We developed a process for efficiently developing locally applicable actionable best practice recommendations from existing high-quality CPGs that are in line with current research evidence. © 2012 John Wiley & Sons Ltd.

  14. Properties of spatial Cox process models

    DEFF Research Database (Denmark)

    Møller, Jesper

    Probabilistic properties of Cox processes of relevance for statistical modelling and inference are studied. Particularly, we study the most important classes of Cox processes, including log Gaussian Cox processes, shot noise Cox processes, and permanent Cox processes. We consider moment properties … and point process operations such as thinning, displacements, and superpositioning. We also discuss how to simulate specific Cox processes …

  15. A new heat transfer analysis in machining based on two steps of 3D finite element modelling and experimental validation

    Science.gov (United States)

    Haddag, B.; Kagnaya, T.; Nouari, M.; Cutard, T.

    2013-01-01

    Modelling machining operations allows estimating cutting parameters that are difficult to obtain experimentally, in particular quantities characterizing the tool-workpiece interface. Temperature is one of these quantities; it has an impact on tool wear, so its estimation is important. This study deals with a new modelling strategy, based on two steps of calculation, for the analysis of heat transfer into the cutting tool. Unlike classical methods, which consider only the cutting tool and apply an approximate heat flux at the cutting face estimated from experimental data (e.g. measured cutting force, cutting power), the proposed approach consists of two successive 3D Finite Element calculations and is fully independent of experimental measurements; only the definition of the behaviour of the tool-workpiece couple is necessary. The first one is a 3D thermomechanical modelling of the chip formation process, which allows estimating cutting forces, chip morphology and its flow direction. The second calculation is a 3D thermal modelling of the heat diffusion into the cutting tool, using an adequate thermal loading (applied uniform or non-uniform heat flux). This loading is estimated using quantities obtained from the first-step calculation, such as contact pressure, sliding velocity distributions and contact area. Comparisons between experimental data and the first calculation on the one hand, and between temperatures measured with embedded thermocouples and the second calculation on the other, show good agreement in terms of cutting forces, chip morphology and cutting temperature.
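
    The abstract says the second-step thermal load is built from the contact pressure, sliding velocity and contact area computed in the first step, but does not give the expression. A common way to construct such a flux, given here only as an assumed illustration, is frictional dissipation at the tool-chip interface with a heat-partition coefficient sending part of the heat into the tool:

```latex
% Assumed illustrative form of the tool-side heat flux built from first-step outputs:
% p = contact pressure, v_s = sliding velocity, \mu = friction coefficient,
% \eta = fraction of the frictional heat entering the tool, A_c = contact area.
q_{\mathrm{tool}}(x,y) = \eta\,\mu\,p(x,y)\,v_s(x,y),
\qquad
Q_{\mathrm{tool}} = \int_{A_c} q_{\mathrm{tool}}\,\mathrm{d}A .
```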

  16. Dynamic stepping information process method in mobile bio-sensing computing environments.

    Science.gov (United States)

    Lee, Tae-Gyu; Lee, Seong-Hoon

    2014-01-01

    Recently, interest in disease-free human longevity has been converging into a single system framework, along with the development of mobile computing environments, the diversification of remote medical systems, and the aging of society. Such a converged system enables the implementation of a bioinformatics system that provides various supplementary information services by sensing and gathering the health conditions and bio-information of mobile users to build up medical information. The existing bio-information system performs a static, unchanging process: once the bio-information process defined at the initial system configuration has been set up, the system executes it without change. However, such a static process is ineffective in mobile bio-information systems that perform mobile computing. In particular, configuring the bio-information process or changing its method entails the inconvenient duty of redefining and re-executing it. This study proposes a dynamic process design and execution method to overcome such ineffectiveness.

  17. Four wind speed multi-step forecasting models using extreme learning machines and signal decomposing algorithms

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei

    2015-01-01

    Highlights: • A hybrid architecture is proposed for wind speed forecasting. • Four algorithms are used for the wind speed multi-scale decomposition. • Extreme learning machines are employed for the wind speed forecasting. • All the proposed hybrid models can generate accurate results. - Abstract: Accurate wind speed forecasting is important to guarantee the safety of wind power utilization. In this paper, a new hybrid forecasting architecture is proposed to realize accurate wind speed forecasting. In this architecture, four different hybrid models are presented by combining four signal decomposing algorithms (Wavelet Decomposition/Wavelet Packet Decomposition/Empirical Mode Decomposition/Fast Ensemble Empirical Mode Decomposition) and Extreme Learning Machines. The originality of the study is to investigate the improvements brought to the Extreme Learning Machines by these mainstream signal decomposing algorithms in multi-step wind speed forecasting. The results of two forecasting experiments indicate that: (1) the Extreme Learning Machine method is suitable for wind speed forecasting; (2) by utilizing the decomposing algorithms, all the proposed hybrid algorithms perform better than the single Extreme Learning Machines; (3) in the comparison of the decomposing algorithms within the proposed hybrid architecture, the Fast Ensemble Empirical Mode Decomposition has the best performance in the three-step forecasting results, while the Wavelet Packet Decomposition has the best performance in the one- and two-step forecasting results; at the same time, the Wavelet Packet Decomposition and the Fast Ensemble Empirical Mode Decomposition are better than the Wavelet Decomposition and the Empirical Mode Decomposition, respectively, at all prediction steps; and (4) the proposed algorithms are effective for accurate wind speed predictions
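
    The decompose-then-forecast architecture described above can be sketched compactly: split the series into sub-series, train one extreme learning machine per sub-series on lagged values, forecast each sub-series and sum. The crude moving-average split below merely stands in for the WD/WPD/EMD/FEEMD algorithms, and the tiny ELM is a generic implementation rather than the authors' code; all settings are placeholders.

```python
import numpy as np

# Hedged sketch of the hybrid architecture: decompose, train one ELM per
# sub-series, forecast each, sum. The moving-average "decomposition" is a
# stand-in for the signal decomposition algorithms named in the record.

def decompose(series, window=12):
    trend = np.convolve(series, np.ones(window) / window, mode="same")
    return [trend, series - trend]            # [low-frequency part, residual]

def make_lagged(series, lags):
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    return X, y

class ELM:
    def __init__(self, hidden=30, seed=0):
        self.hidden, self.rng = hidden, np.random.default_rng(seed)
    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.hidden))  # random input weights
        self.b = self.rng.normal(size=self.hidden)
        H = np.tanh(X @ self.W + self.b)
        self.beta = np.linalg.pinv(H) @ y      # least-squares output weights
        return self
    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

def one_step_forecast(series, lags=6):
    total = 0.0
    for comp in decompose(series):
        X, y = make_lagged(comp, lags)
        model = ELM().fit(X, y)
        total += model.predict(comp[-lags:].reshape(1, -1))[0]
    return total

wind = 8 + 2 * np.sin(np.arange(300) / 10) + np.random.default_rng(1).normal(0, 0.3, 300)
print(one_step_forecast(wind))
```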

  18. Influence of fermentation and other processing steps on the folate content of a traditional African cereal-based fermented food.

    Science.gov (United States)

    Saubade, Fabien; Hemery, Youna M; Rochette, Isabelle; Guyot, Jean-Pierre; Humblot, Christèle

    2018-02-02

    Folate deficiency can cause a number of diseases, including neural tube defects and megaloblastic anemia, and still occurs in both developed and developing countries. Cereal-based food products are staple foods in many countries and may therefore be useful sources of folate. The production of folate by microorganisms has been demonstrated in some cereal-based fermented foods, but has never been studied in a spontaneously fermented traditional African cereal-based food. The microbiota of ben-saalga, a pearl-millet-based fermented porridge frequently consumed in Burkina Faso, has a good genetic potential for the synthesis of folate, but the folate content of ben-saalga is rather low, suggesting that folate is lost during the different processing steps. The aim of this study was therefore to monitor changes in folate content during the different steps of preparing ben-saalga, from pearl-millet grains to porridge. Traditional processing involves seven different steps: washing, soaking, grinding, kneading, sieving, (spontaneous) fermentation, and cooking. Two types of porridge were prepared, one using a process adapted from the traditional process, the other a modified process based on fermentation by backslopping. Dry matter and total folate contents were measured at each step, and a mass balance assessment was performed to follow folate losses and gains. Folate production was observed during the soaking of pearl-millet grains (+26% to +79%), but the folate content of the sieved batters (2.5 to 3.4 μg/100 g fresh weight) was drastically lower than that of the milled soaked grains (17.3 to 19.4 μg/100 g FW). The final folate content of the porridges was very low (1.5 to 2.4 μg/100 g FW). The fermentation had no significant impact on folate content, whatever the duration and the process used. This study led to a better understanding of the impact of the different processing steps involved in the preparation of ben-saalga on folate. Copyright © 2017 Elsevier B.V. All rights reserved.
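
    The mass-balance assessment mentioned above amounts to comparing the folate carried by the material leaving each step with that entering it, on a fresh-weight basis. A minimal sketch, with purely hypothetical numbers rather than the measured values of the study, is:

```python
# Minimal per-step mass-balance check of folate retention. The numbers in the
# example call are placeholders, not measured values from the study.

def folate_balance(mass_in_g, folate_in_ug_per_100g, mass_out_g, folate_out_ug_per_100g):
    folate_in = mass_in_g * folate_in_ug_per_100g / 100.0     # total µg entering the step
    folate_out = mass_out_g * folate_out_ug_per_100g / 100.0  # total µg leaving the step
    return {"gain_or_loss_ug": folate_out - folate_in,
            "retention_pct": 100.0 * folate_out / folate_in}

# e.g. a hypothetical soaking step
print(folate_balance(mass_in_g=1000, folate_in_ug_per_100g=12.0,
                     mass_out_g=1300, folate_out_ug_per_100g=15.0))
```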

  19. A novel, substrate independent three-step process for the growth of uniform ZnO nanorod arrays

    International Nuclear Information System (INIS)

    Byrne, D.; McGlynn, E.; Henry, M.O.; Kumar, K.; Hughes, G.

    2010-01-01

    We report a three-step deposition process for uniform arrays of ZnO nanorods, involving chemical bath deposition of aligned seed layers followed by nanorod nucleation sites and subsequent vapour phase transport growth of nanorods. This combines chemical bath deposition techniques, which enable substrate independent seeding and nucleation site generation with vapour phase transport growth of high crystalline and optical quality ZnO nanorod arrays. Our data indicate that the three-step process produces uniform nanorod arrays with narrow and rather monodisperse rod diameters (∼ 70 nm) across substrates of centimetre dimensions. X-ray photoelectron spectroscopy, scanning electron microscopy and X-ray diffraction were used to study the growth mechanism and characterise the nanostructures.

  20. New process model proves accurate in tests on catalytic reformer

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar-Rodriguez, E.; Ancheyta-Juarez, J. (Inst. Mexicano del Petroleo, Mexico City (Mexico))

    1994-07-25

    A mathematical model has been devised to represent the process that takes place in a fixed-bed, tubular, adiabatic catalytic reforming reactor. Since its development, the model has been applied to the simulation of a commercial semiregenerative reformer. The development of mass and energy balances for this reformer led to a model that predicts both concentration and temperature profiles along the reactor. A comparison of the model's results with experimental data illustrates its accuracy at predicting product profiles. Simple steps show how the model can be applied to simulate any fixed-bed catalytic reformer.
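
    The record does not give the model equations. For orientation only, the sketch below integrates the kind of coupled mass and energy balances solved along an adiabatic fixed-bed reactor, with a single lumped first-order endothermic reaction standing in for the real reforming network; every value is a hypothetical placeholder, not a parameter of the published model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch of steady-state mass and energy balances along an adiabatic
# plug-flow bed. A single lumped first-order endothermic reaction stands in
# for the reforming kinetics; all numbers are placeholders.

R = 8.314                   # J/(mol K)
k0, Ea = 5.0e5, 9.0e4       # 1/s, J/mol (hypothetical kinetics)
dH = 7.0e4                  # J/mol, endothermic heat of reaction (hypothetical)
rho_cp = 1.2e3              # J/(m3 K), volumetric heat capacity of the gas (hypothetical)
u = 1.0                     # m/s, superficial velocity

def balances(z, y):
    C, T = y                                  # mol/m3, K
    r = k0 * np.exp(-Ea / (R * T)) * C        # reaction rate, mol/(m3 s)
    dCdz = -r / u                             # mass balance
    dTdz = -dH * r / (rho_cp * u)             # energy balance (adiabatic)
    return [dCdz, dTdz]

sol = solve_ivp(balances, (0.0, 5.0), [2.0, 770.0], max_step=0.05)
print("outlet concentration %.2f mol/m3, outlet temperature %.1f K"
      % (sol.y[0, -1], sol.y[1, -1]))
```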

  1. Reconstructing Genetic Regulatory Networks Using Two-Step Algorithms with the Differential Equation Models of Neural Networks.

    Science.gov (United States)

    Chen, Chi-Kan

    2017-07-26

    The identification of genetic regulatory networks (GRNs) provides insights into complex cellular processes. A class of recurrent neural networks (RNNs) captures the dynamics of GRNs. Algorithms combining the RNN and machine learning schemes were proposed to reconstruct small-scale GRNs using gene expression time series. We present new GRN reconstruction methods with neural networks. The RNN is extended to a class of recurrent multilayer perceptrons (RMLPs) with latent nodes. Our methods contain two steps: the edge rank assignment step and the network construction step. The former assigns ranks to all possible edges by a recursive procedure based on the estimated weights of wires of the RNN/RMLP (RE_RNN/RE_RMLP), and the latter constructs a network consisting of top-ranked edges under which the optimized RNN simulates the gene expression time series. Particle swarm optimization (PSO) is applied to optimize the parameters of the RNNs and RMLPs in a two-step algorithm. The proposed RE_RNN-RNN and RE_RMLP-RNN algorithms are tested on synthetic and experimental gene expression time series of small GRNs of about 10 genes. The experimental time series are from studies of yeast cell cycle regulated genes and E. coli DNA repair genes. The unstable estimation of the RNN using experimental time series with limited data points can lead to fairly arbitrary predicted GRNs. Our methods incorporate the RNN and RMLP into a two-step structure learning procedure. Results show that RE_RMLP, using the RMLP with a suitable number of latent nodes to reduce the parameter dimension, often results in more accurate edge ranks than RE_RNN using the regularized RNN on short simulated time series. Combining, by a weighted majority voting rule, the networks derived by RE_RMLP-RNN using different numbers of latent nodes in step one to infer the GRN, the method performs consistently and outperforms published algorithms for GRN reconstruction on most benchmark time series. The framework of two-step
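
    The two-step structure (edge-rank assignment, then network construction from the top-ranked edges) can be sketched independently of the RNN/RMLP details. In the toy sketch below, a simple lagged-correlation score stands in for the estimated wire weights and the PSO fitting; it only illustrates the recursive ranking idea and is not the published algorithm.

```python
import numpy as np

# Hedged sketch of the two-step idea: (1) rank all candidate edges by
# recursively removing the weakest edge of a fitted scoring model, (2) keep
# the top-ranked edges. `fit_weights` is a placeholder for the RNN/RMLP + PSO
# estimation described in the abstract.

def fit_weights(expression, active_edges):
    """Placeholder scoring: lagged correlation for each allowed edge (i -> j)."""
    X = expression                      # shape (time, genes)
    return {(i, j): abs(np.corrcoef(X[:-1, i], X[1:, j])[0, 1])
            for (i, j) in active_edges}

def rank_edges(expression):
    genes = expression.shape[1]
    active = {(i, j) for i in range(genes) for j in range(genes) if i != j}
    ranking = []                        # weakest edges first
    while active:
        scores = fit_weights(expression, active)
        weakest = min(scores, key=scores.get)
        ranking.append(weakest)
        active.remove(weakest)
    return list(reversed(ranking))      # strongest edges first

def build_network(expression, top_k):
    return rank_edges(expression)[:top_k]

data = np.random.default_rng(0).normal(size=(50, 5))   # toy expression time series
print(build_network(data, top_k=6))
```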

  2. Enhanced pharmaceutical removal from water in a three step bio-ozone-bio process

    NARCIS (Netherlands)

    Wilt, de Arnoud; Gijn, van Koen; Verhoek, Tom; Vergnes, Amber; Hoek, Mirit; Rijnaarts, Huub; Langenhoff, Alette

    2018-01-01

    Individual treatment processes like biological treatment or ozonation have their limitations for the removal of pharmaceuticals from secondary clarified effluents with high organic matter concentrations (i.e. 17 mg TOC/L). These limitations can be overcome by combining these two processes for a

  3. Formation of complex wedding-cake morphologies during homoepitaxial film growth of Ag on Ag(111): atomistic, step-dynamics, and continuum modeling

    Science.gov (United States)

    Li, Maozhi; Han, Yong; Thiel, P. A.; Evans, J. W.

    2009-02-01

    An atomistic lattice-gas model is developed which successfully describes all key features of the complex mounded morphologies which develop during deposition of Ag films on Ag(111) surfaces. We focus on this homoepitaxial thin film growth process below 200 K. The unstable multilayer growth mode derives from the presence of a large Ehrlich-Schwoebel step-edge barrier, for which we characterize both the step-orientation dependence and the magnitude. Step-dynamics modeling is applied to further characterize and elucidate the evolution of the vertical profiles of these wedding-cake-like mounds. Suitable coarse-graining of these step-dynamics equations leads to instructive continuum formulations for mound evolution.
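
    For readers unfamiliar with step-dynamics models, the following is an illustrative (assumed, not quoted from the paper) form of the step-flow equations underlying such descriptions: the step bounding layer n advances by capturing deposited atoms from the adjacent terraces, with capture from the terrace above suppressed by the Ehrlich-Schwoebel barrier.

```latex
% Illustrative step-dynamics form (assumed): r_n is the radius of the step edge
% bounding layer n, F the deposition flux, A^{below}_n and A^{above}_n the parts
% of the lower and upper terraces feeding step n, and s the suppression factor
% associated with the Ehrlich--Schwoebel barrier (s -> 0 for a large barrier).
2\pi r_n \frac{\mathrm{d}r_n}{\mathrm{d}t}
  \;\approx\; F\left( A^{\mathrm{below}}_n + s\, A^{\mathrm{above}}_n \right),
\qquad 0 \le s \le 1 .
```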

  4. Formation of complex wedding-cake morphologies during homoepitaxial film growth of Ag on Ag(111): atomistic, step-dynamics, and continuum modeling

    Energy Technology Data Exchange (ETDEWEB)

    Li Maozhi [Department of Physics, Renmin University of China, Beijing 100872 (China); Han, Yong [Institute of Physical Research and Technology, Iowa State University, Ames, IA 50011 (United States); Thiel, P A [Departments of Chemistry and Materials Science and Engineering and Ames Laboratory-USDOE, Iowa State University, Ames, IA 50011 (United States); Evans, J W [Department of Mathematics and Ames Laboratory-USDOE, Iowa State University, Ames, IA 50010 (United States)

    2009-02-25

    An atomistic lattice-gas model is developed which successfully describes all key features of the complex mounded morphologies which develop during deposition of Ag films on Ag(111) surfaces. We focus on this homoepitaxial thin film growth process below 200 K. The unstable multilayer growth mode derives from the presence of a large Ehrlich-Schwoebel step-edge barrier, for which we characterize both the step-orientation dependence and the magnitude. Step-dynamics modeling is applied to further characterize and elucidate the evolution of the vertical profiles of these wedding-cake-like mounds. Suitable coarse-graining of these step-dynamics equations leads to instructive continuum formulations for mound evolution.

  5. Formation of complex wedding-cake morphologies during homoepitaxial film growth of Ag on Ag(111): atomistic, step-dynamics, and continuum modeling

    International Nuclear Information System (INIS)

    Li Maozhi; Han, Yong; Thiel, P A; Evans, J W

    2009-01-01

    An atomistic lattice-gas model is developed which successfully describes all key features of the complex mounded morphologies which develop during deposition of Ag films on Ag(111) surfaces. We focus on this homoepitaxial thin film growth process below 200 K. The unstable multilayer growth mode derives from the presence of a large Ehrlich-Schwoebel step-edge barrier, for which we characterize both the step-orientation dependence and the magnitude. Step-dynamics modeling is applied to further characterize and elucidate the evolution of the vertical profiles of these wedding-cake-like mounds. Suitable coarse-graining of these step-dynamics equations leads to instructive continuum formulations for mound evolution.

  6. Numerical modelling of the jet nozzle enrichment process

    International Nuclear Information System (INIS)

    Vercelli, P.

    1983-01-01

    A numerical model was developed for the simulation of the isotopic enrichment produced by the jet nozzle process. The flow was considered stationary and under ideal gas conditions. The model calculates, for any position of the skimmer piece: (a) values of the radial mass concentration profiles for each isotopic species and (b) values of the elementary separation effect (Σ_A) and the uranium cut (θ). The comparison of the numerical results obtained with the experimental values given in the literature proves the validity of the present work as an initial step in the modelling of the process. (Author) [pt]
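
    The two reported quantities can be stated explicitly. Using standard isotope-separation notation (assumed here; the original report's symbols may differ), with the feed flow L split into a light fraction L' and a heavy fraction L'', and x denoting the 235U mole fraction:

```latex
% Standard definitions (assumed notation): uranium cut and elementary separation effect.
\theta = \frac{L'}{L},
\qquad
\varepsilon_A \;=\; \frac{x'/(1-x')}{x''/(1-x'')} \;-\; 1 .
```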

  7. Cupola Furnace Computer Process Model

    Energy Technology Data Exchange (ETDEWEB)

    Seymour Katz

    2004-12-31

    The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing furnaces (electric) and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions with just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an 'Expert System' to permit optimization in real time. The program has been combined with 'neural network' programs to enable very easy scanning of a wide range of furnace operations. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems will be found in the 'Cupola Handbook', Chapter 27, American Foundry Society, Des Plaines, IL (1999).

  8. Computer Aided Continuous Time Stochastic Process Modelling

    DEFF Research Database (Denmark)

    Kristensen, N.R.; Madsen, Henrik; Jørgensen, Sten Bay

    2001-01-01

    A grey-box approach to process modelling that combines deterministic and stochastic modelling is advocated for identification of models for model-based control of batch and semi-batch processes. A computer-aided tool designed for supporting decision-making within the corresponding modelling cycle...
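
    The grey-box combination of deterministic and stochastic modelling referred to above is usually formulated as a continuous-discrete stochastic state-space model; the generic form below is given for orientation and is not claimed to be the exact model class of the cited work.

```latex
% Generic continuous-discrete grey-box model: physically based drift f,
% diffusion term capturing model uncertainty, discrete-time noisy observations.
\mathrm{d}x_t = f(x_t, u_t, t, \theta)\,\mathrm{d}t + \sigma(u_t, t, \theta)\,\mathrm{d}\omega_t,
\qquad
y_k = h(x_{t_k}, u_{t_k}, t_k, \theta) + e_k .
```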

  9. Calibration process of highly parameterized semi-distributed hydrological model

    Science.gov (United States)

    Vidmar, Andrej; Brilly, Mitja

    2017-04-01

    Hydrological phenomena take place in the hydrological system, which is governed by nature, and are essentially stochastic. These phenomena are unique, non-recurring, and changeable across space and time. Since any river basin, with its own natural characteristics, and any hydrological event therein are unique, modelling them is a complex process that has not been researched enough. Calibration is the procedure of determining those parameters of a model that are not known well enough. Input and output variables and the mathematical model expressions are known, while only some parameters are unknown; these are determined by calibrating the model. The software used for hydrological modelling nowadays is equipped with sophisticated calibration algorithms that leave the modeller little possibility to manage the process, and the results are not the best. We have therefore developed a procedure for an expert-driven calibration process. We use the HBV-light-CLI hydrological model, which has a command line interface, and couple it with PEST. PEST is a parameter estimation tool that is widely used in groundwater modelling and can also be applied to surface waters. A calibration process managed directly by the expert, in proportion to the expert's knowledge, affects the outcome of the inversion procedure and achieves better results than if the procedure had been left to the selected optimization algorithm alone. The first step is to properly define the spatial characteristics and structural design of the semi-distributed model, including all morphological and hydrological phenomena, such as karstic, alluvial and forest areas. This step includes and requires the geological, meteorological, hydraulic and hydrological knowledge of the modeller. The second step is to set initial parameter values at their preferred values based on expert knowledge. In this step we also define all parameter and observation groups. Peak data are essential in the calibration process if we are mainly interested in flood events. Each sub-catchment in the model has its own observation group
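
    The role of observation groups and peak data described above can be illustrated with a simple peak-weighted least-squares objective of the kind an expert might configure. This mimics the idea only; it is not PEST's control-file syntax or algorithm, and all numbers are placeholders.

```python
import numpy as np

# Hedged sketch: a weighted sum-of-squares objective that emphasizes flood
# peaks, in the spirit of defining observation groups with different weights.

def weighted_objective(simulated, observed, peak_quantile=0.9,
                       peak_weight=5.0, base_weight=1.0):
    simulated, observed = np.asarray(simulated), np.asarray(observed)
    threshold = np.quantile(observed, peak_quantile)   # observations above this count as "peaks"
    weights = np.where(observed >= threshold, peak_weight, base_weight)
    residuals = weights * (simulated - observed)
    return float(np.sum(residuals ** 2))

obs = np.array([1.2, 1.0, 0.9, 4.8, 7.5, 3.1, 1.4])   # hypothetical discharge series
sim = np.array([1.0, 1.1, 1.0, 4.0, 6.2, 3.4, 1.3])
print(weighted_objective(sim, obs))
```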

  10. Use of Anion Exchange Resins for One-Step Processing of Algae from Harvest to Biofuel

    OpenAIRE

    Jessica Jones; Cheng-Han Lee; James Wang; Martin Poenie

    2012-01-01

    Some microalgae are particularly attractive as a renewable feedstock for biodiesel production due to their rapid growth, high content of triacylglycerols, and ability to be grown on non-arable land. Unfortunately, obtaining oil from algae is currently cost prohibitive in part due to the need to pump and process large volumes of dilute algal suspensions. In an effort to circumvent this problem, we have explored the use of anion exchange resins for simplifying the processing of algae to biofuel...

  11. Process Correlation Analysis Model for Process Improvement Identification

    Directory of Open Access Journals (Sweden)

    Su-jin Choi

    2014-01-01

    software development process. However, in the current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to significant efforts and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  12. Demystifying process mapping: a key step in neurosurgical quality improvement initiatives.

    Science.gov (United States)

    McLaughlin, Nancy; Rodstein, Jennifer; Burke, Michael A; Martin, Neil A

    2014-08-01

    Reliable delivery of optimal care can be challenging for care providers. Health care leaders have integrated various business tools to assist them and their teams in ensuring consistent delivery of safe and top-quality care. The cornerstone of all quality improvement strategies is a detailed understanding of the current state of a process, captured by process mapping. Process mapping empowers caregivers to audit how they are currently delivering care and then strategically plan improvement initiatives. As a community, neurosurgery has clearly shown dedication to enhancing patient safety and delivering quality care. A care redesign strategy named NERVS (Neurosurgery Enhanced Recovery after surgery, Value, and Safety) is currently being developed and piloted within our department. Through this initiative, a multidisciplinary team led by a clinician neurosurgeon has process mapped the way care is currently delivered throughout the entire episode of care. Neurosurgeons are becoming leaders in quality programs, and their education in quality improvement strategies and tools is essential. The authors present a comprehensive review of process mapping, demystifying its planning, its building, and its analysis. The particularities of using process maps, initially a business tool, in the health care arena are discussed, and their specific use in an academic neurosurgical department is presented.

  13. A two-step acid-catalyzed process for the production of biodiesel from rice bran oil

    Energy Technology Data Exchange (ETDEWEB)

    Zullaikah, S.; Lai, Chao Chin; Vali, S.R.; Ju, Yi Hsu [National Taiwan Univ. of Science and Technology, Taipei (China). Dept. of Chemical Engineering

    2005-11-15

    A study was undertaken to examine the effect of temperature, moisture and storage time on the accumulation of free fatty acid in rice bran. In rice bran stored at room temperature, most of the triacylglycerides were hydrolyzed and the free fatty acid (FFA) content rose to 76% within six months. A two-step acid-catalyzed methanolysis process was employed for the efficient conversion of rice bran oil into fatty acid methyl ester (FAME). The first step was carried out at 60 °C. Depending on the initial FFA content of the oil, a FAME content of 55-90% in the reaction product was obtained. More than 98% of the FFA and less than 35% of the TG were reacted in 2 h. The organic phase of the first-step reaction product was used as the substrate for a second acid-catalyzed methanolysis at 100 °C. By this two-step methanolysis reaction, more than 98% FAME in the product can be obtained in less than 8 h. Distillation of the reaction product gave 99.8% FAME (biodiesel) with a recovery of more than 96%. The residue contains enriched nutraceuticals such as γ-oryzanol (16-18%) and a mixture of phytosterols, tocols and steryl esters (19-21%). (author)

  14. Migration of additive molecules in a polymer filament obtained by melt spinning: Influence of the fiber processing steps

    Science.gov (United States)

    Gesta, E.; Skovmand, O.; Espuche, E.; Fulchiron, R.

    2015-12-01

    The purpose of this study is to understand the influence of the yarn processing on the migration of additive molecules, especially insecticide, within polyethylene (PE) yarns. Yarns were manufactured in the laboratory focusing on three key steps (spinning, post-stretching and heat-setting). The influence of each step on yarn properties was investigated using tensile tests, differential scanning calorimetry and wide-angle X-ray diffraction. The post-stretching step was proved to be critical in defining yarn mechanical and structural properties. Although a first orientation of polyethylene crystals was induced during spinning, the optimal orientation was only reached by post-stretching. The results also showed that the heat-setting did not significantly change these properties. The presence of additive crystals at the yarn surface was evidenced by scanning electron microscopy. These studies, performed at each yarn production step, allowed a detailed analysis of the additives' ability to migrate. It is concluded that while post-stretching decreased the migration rate, heat-setting seems to boost this migration.

  15. Migration of additive molecules in a polymer filament obtained by melt spinning: Influence of the fiber processing steps

    Energy Technology Data Exchange (ETDEWEB)

    Gesta, E. [Ingénierie des Matériaux Polymères - UMR CNRS 5223, Université de Lyon - Université Lyon 1, Bâtiment POLYTECH Lyon - 15 boulevard Latarjet, 69622, Villeurbanne (France); Intelligent Insect Control, 118 Chemin des Alouettes, Castelnau-le-Lez, 34170 (France); Skovmand, O., E-mail: osk@insectcontrol.net [Intelligent Insect Control, 118 Chemin des Alouettes, Castelnau-le-Lez, 34170 (France); Espuche, E., E-mail: eliane.espuche@univ-lyon1.fr; Fulchiron, R., E-mail: rene.fulchiron@univ-lyon1.fr [Ingénierie des Matériaux Polymères - UMR CNRS 5223, Université de Lyon - Université Lyon 1, Bâtiment POLYTECH Lyon - 15 boulevard Latarjet, 69622, Villeurbanne (France)

    2015-12-17

    The purpose of this study is to understand the influence of the yarn processing on the migration of additive molecules, especially insecticide, within polyethylene (PE) yarns. Yarns were manufactured in the laboratory focusing on three key steps (spinning, post-stretching and heat-setting). The influence of each step on yarn properties was investigated using tensile tests, differential scanning calorimetry and wide-angle X-ray diffraction. The post-stretching step was proved to be critical in defining yarn mechanical and structural properties. Although a first orientation of polyethylene crystals was induced during spinning, the optimal orientation was only reached by post-stretching. The results also showed that the heat-setting did not significantly change these properties. The presence of additive crystals at the yarn surface was evidenced by scanning electron microscopy. These studies, performed at each yarn production step, allowed a detailed analysis of the additives’ ability to migrate. It is concluded that while post-stretching decreased the migration rate, heat-setting seems to boost this migration.

  16. Migration of additive molecules in a polymer filament obtained by melt spinning: Influence of the fiber processing steps

    International Nuclear Information System (INIS)

    Gesta, E.; Skovmand, O.; Espuche, E.; Fulchiron, R.

    2015-01-01

    The purpose of this study is to understand the influence of the yarn processing on the migration of additive molecules, especially insecticide, within polyethylene (PE) yarns. Yarns were manufactured in the laboratory focusing on three key steps (spinning, post-stretching and heat-setting). The influence of each step on yarn properties was investigated using tensile tests, differential scanning calorimetry and wide-angle X-ray diffraction. The post-stretching step was proved to be critical in defining yarn mechanical and structural properties. Although a first orientation of polyethylene crystals was induced during spinning, the optimal orientation was only reached by post-stretching. The results also showed that the heat-setting did not significantly change these properties. The presence of additive crystals at the yarn surface was evidenced by scanning electron microscopy. These studies, performed at each yarn production step, allowed a detailed analysis of the additives’ ability to migrate. It is concluded that while post-stretching decreased the migration rate, heat-setting seems to boost this migration

  17. Large-scale fabrication of In2S3 porous films via one-step hydrothermal process.

    Science.gov (United States)

    Chen, Fei; Deng, Dan; Lei, Yinlin

    2013-10-01

    Large-scale indium sulfide (In2S3) porous films were fabricated via a facile one-step, non-template hydrothermal process using L-cysteine as a capping agent. The impact of reaction conditions such as reaction time, temperatures, and capping agents on the synthesis of the In2S3 porous films was studied. The morphology, structure, and phase composition of the In2S3 porous films were characterized by X-ray diffraction (XRD), field-emission scanning electron microscopy (FESEM), and transmission electron microscopy (TEM). The formation process and the optical properties of the In2S3 porous films were also evaluated.

  18. A simple one-step chemistry model for partially premixed hydrocarbon combustion

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez-Tarrazo, Eduardo [Instituto Nacional de Tecnica Aeroespacial, Madrid (Spain); Sanchez, Antonio L. [Area de Mecanica de Fluidos, Universidad Carlos III de Madrid, Leganes 28911 (Spain); Linan, Amable [ETSI Aeronauticos, Pl. Cardenal Cisneros 3, Madrid 28040 (Spain); Williams, Forman A. [Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA 92093-0411 (United States)

    2006-10-15

    This work explores the applicability of one-step irreversible Arrhenius kinetics with unity reaction order to the numerical description of partially premixed hydrocarbon combustion. Computations of planar premixed flames are used in the selection of the three model parameters: the heat of reaction q, the activation temperature T_a, and the preexponential factor B. It is seen that changes in q with the equivalence ratio φ need to be introduced in fuel-rich combustion to describe the effect of partial fuel oxidation on the amount of heat released, leading to a universal linear variation q(φ) for φ > 1 for all hydrocarbons. The model also employs a variable activation temperature T_a(φ) to mimic changes in the underlying chemistry in rich and very lean flames. The resulting chemistry description is able to reproduce propagation velocities of diluted and undiluted flames accurately over the whole flammability range. Furthermore, computations of methane-air counterflow diffusion flames are used to test the proposed chemistry under nonpremixed conditions. The model not only predicts the critical strain rate at extinction accurately but also gives near-extinction flames with oxygen leakage, thereby overcoming known predictive limitations of one-step Arrhenius kinetics. (author)
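
    Written out, the model described above amounts to a single irreversible reaction with a rate that is first order in the fuel (one reading of 'unity reaction order') and composition-dependent parameters; the explicit functional forms below are hedged reconstructions for orientation, not the paper's exact expressions.

```latex
% Illustrative one-step description: \omega is the fuel consumption rate,
% Y_F the fuel mass fraction, \phi the equivalence ratio. The form of the rate
% and the linear decrease of q on the rich side are assumed reconstructions.
\omega = B\,\rho\,Y_{\mathrm{F}} \exp\!\left(-\frac{T_a(\phi)}{T}\right),
\qquad
q(\phi) =
\begin{cases}
q_0, & \phi \le 1,\\[2pt]
q_0\left[1 - c\,(\phi - 1)\right], & \phi > 1,
\end{cases}
```
    with c > 0 setting the linear reduction of the heat release per unit mass of fuel in rich mixtures.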

  19. Analog modelling of obduction processes

    Science.gov (United States)

    Agard, P.; Zuo, X.; Funiciello, F.; Bellahsen, N.; Faccenna, C.; Savva, D.

    2012-04-01

    Obduction corresponds to one of plate tectonics oddities, whereby dense, oceanic rocks (ophiolites) are presumably 'thrust' on top of light, continental ones, as for the short-lived, almost synchronous Peri-Arabic obduction (which took place along thousands of km from Turkey to Oman in c. 5-10 Ma). Analog modelling experiments were performed to study the mechanisms of obduction initiation and test various triggering hypotheses (i.e., plate acceleration, slab hitting the 660 km discontinuity, ridge subduction; Agard et al., 2007). The experimental setup comprises (1) an upper mantle, modelled as a low-viscosity transparent Newtonian glucose syrup filling a rigid Plexiglas tank and (2) high-viscosity silicone plates (Rhodrosil Gomme with PDMS iron fillers to reproduce densities of continental or oceanic plates), located at the centre of the tank above the syrup to simulate the subducting and the overriding plates - and avoid friction on the sides of the tank. Convergence is simulated by pushing on a piston at one end of the model with velocities comparable to those of plate tectonics (i.e., in the range 1-10 cm/yr). The reference set-up includes, from one end to the other (~60 cm): (i) the piston, (ii) a continental margin containing a transition zone to the adjacent oceanic plate, (iii) a weakness zone with variable resistance and dip (W), (iv) an oceanic plate - with or without a spreading ridge, (v) a subduction zone (S) dipping away from the piston and (vi) an upper, active continental margin, below which the oceanic plate is being subducted at the start of the experiment (as is known to have been the case in Oman). Several configurations were tested and over thirty different parametric tests were performed. Special emphasis was placed on comparing different types of weakness zone (W) and the extent of mechanical coupling across them, particularly when plates were accelerated. Displacements, together with along-strike and across-strike internal deformation in all

  20. One-step electrodeposition process to fabricate corrosion-resistant superhydrophobic surface on magnesium alloy.

    Science.gov (United States)

    Liu, Qin; Chen, Dexin; Kang, Zhixin

    2015-01-28

    A simple, one-step method has been developed to construct a superhydrophobic surface by electrodepositing Mg-Mn-Ce magnesium plate in an ethanol solution containing cerium nitrate hexahydrate and myristic acid. Scanning electron microscopy, energy-dispersive X-ray spectroscopy, X-ray photoelectron spectroscopy, and Fourier transform infrared spectroscopy were employed to characterize the surfaces. The shortest electrodeposition time to obtain a superhydrophobic surface was about 1 min, and the as-prepared superhydrophobic surfaces had a maximum contact angle of 159.8° and a sliding angle of less than 2°. Potentiodynamic polarization and electrochemical impedance spectroscopy measurements demonstrated that the superhydrophobic surface greatly improved the corrosion properties of magnesium alloy in 3.5 wt % aqueous solutions of NaCl, Na2SO4, NaClO3, and NaNO3. Besides, the chemical stability and mechanical durability of the as-prepared superhydrophobic surface were also examined. The presented method is rapid, low-cost, and environmentally friendly and thus should be of significant value for the industrial fabrication of anticorrosive superhydrophobic surfaces and should have a promising future in expanding the applications of magnesium alloys.

  1. In situ biosynthesis of bacterial nanocellulose-CaCO3 hybrid bionanocomposite: One-step process.

    Science.gov (United States)

    Mohammadkazemi, Faranak; Faria, Marisa; Cordeiro, Nereida

    2016-08-01

    In this work, a simple and green route to the synthesis of the bacterial nanocellulose-calcium carbonate (BNC/CaCO3) hybrid bionanocomposites using one-step in situ biosynthesis was studied. The CaCO3 was incorporated in the bacterial nanocellulose structure during the cellulose biosynthesis by Gluconacetobacter xylinus PTCC 1734 bacteria. Hestrin-Schramm (HS) and Zhou (Z) culture media were used to the hybrid bionanocomposites production and the effect of ethanol addition was investigated. Attenuated total reflection Fourier transform infrared spectroscopy, field emission scanning electron microscopy, X-ray diffraction, energy-dispersive X-ray spectroscopy, inverse gas chromatography and thermogravimetric analysis were used to characterize the samples. The experimental results demonstrated that the ethanol and culture medium play an important role in the BNC/CaCO3 hybrid bionanocomposites production, structure and properties. The BNC/CaCO3 biosynthesized in Z culture medium revealed higher O/C ratio and amphoteric surface character, which justify the highest CaCO3 content incorporation. The CaCO3 was incorporated into the cellulosic matrix decreasing the bacterial nanocellulose crystallinity. This work reveals the high potential of in situ biosynthesis of BNC/CaCO3 hybrid bionanocomposites and opens a new way to the high value-added applications of bacterial nanocellulose. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. From business value model to coordination process model

    NARCIS (Netherlands)

    Fatemi, Hassan; Wieringa, Roelf J.; Poler, R.; van Sinderen, Marten J.; Sanchis, R.

    2009-01-01

    The increased complexity of business webs calls for modeling the collaboration of enterprises from different perspectives, in particular the business and process perspectives, and for mutually aligning these perspectives. Business value modeling and coordination process modeling both are necessary

  3. Business process modeling for processing classified documents using RFID technology

    Directory of Open Access Journals (Sweden)

    Koszela Jarosław

    2016-01-01

    The article outlines the application of the process approach to the functional description of a designed IT system supporting the operations of a secret office which processes classified documents. The article describes the application of the method of incremental modeling of business processes according to the BPMN model to the description of the processes currently implemented (“as is”) in a manual manner and the target processes (“to be”) using RFID technology for the purpose of their automation. Additionally, examples of applying the method of structural and dynamic analysis of the processes (process simulation) to verify their correctness and efficiency were presented. An extension of the process analysis method is the possibility of applying a process warehouse and process mining methods.

  4. Process correlation analysis model for process improvement identification.

    Science.gov (United States)

    Choi, Su-jin; Kim, Dae-Kyoo; Park, Sooyong

    2014-01-01

    Software process improvement aims at improving the development process of software systems. It is initiated by a process assessment that identifies strengths and weaknesses; based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the software process improvement effort as the base. CMMI defines a set of process areas involved in software development and what is to be carried out in these process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort required and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

  5. Accounting for differences in dieting status: steps in the refinement of a model.

    Science.gov (United States)

    Huon, G; Hayne, A; Gunewardene, A; Strong, K; Lunn, N; Piira, T; Lim, J

    1999-12-01

    The overriding objective of this paper is to outline the steps involved in refining a structural model to explain differences in dieting status. Cross-sectional data (representing the responses of 1,644 teenage girls) derive from the preliminary testing in a 3-year longitudinal study. A battery of measures assessed social influence, vulnerability (to conformity) disposition, protective (social coping) skills, and aspects of positive familial context as core components in a model proposed to account for the initiation of dieting. Path analyses were used to establish the predictive ability of those separate components and their interrelationships in accounting for differences in dieting status. Several components of the model were found to be important predictors of dieting status. The model incorporates significant direct, indirect (or mediated), and moderating relationships. Taking all variables into account, the strongest prediction of dieting status was from peer competitiveness, using a new scale developed specifically for this study. Systematic analyses are crucial for the refinement of models to be used in large-scale multivariate studies. In the short term, the model investigated in this study has been shown to be useful in accounting for cross-sectional differences in dieting status. The refined model will be most powerfully employed in large-scale time-extended studies of the initiation of dieting to lose weight. Copyright 1999 by John Wiley & Sons, Inc.

  6. Comprehension of Multiple Documents with Conflicting Information: A Two-Step Model of Validation

    Science.gov (United States)

    Richter, Tobias; Maier, Johanna

    2017-01-01

    In this article, we examine the cognitive processes that are involved when readers comprehend conflicting information in multiple texts. Starting from the notion of routine validation during comprehension, we argue that readers' prior beliefs may lead to a biased processing of conflicting information and a one-sided mental model of controversial…

  7. A four-step model: Initiating writing development at faculty level using Master’s thesis workshops as a vehicle

    DEFF Research Database (Denmark)

    Jensen, Tine Wirenfeldt; Jensen, Eva Naur; Bay, Gina

    In recent years, Danish university education has seen a rise of regulation from central government, intended to significantly reduce students' degree completion time (The Study Progress Reform, 2013). One of the many effects of the reform is a reduction of the time available for students to write a Master's thesis, as well as less flexibility regarding when the Master's thesis process begins and ends. The reform has created an immediate need for increased support of academic writing development. This presents a challenge to all faculties, but especially those without writing centers or prior traditions for addressing academic writing development at a central level. This paper presents a four-step model for initiating development of academic writing skills at such faculties. The model was developed, tested and evaluated in the fall of 2015 in collaboration with all seven departments at Aarhus …

  8. Production of acetic acid by hydrothermal two-step process of vegetable wastes for use as a road deicer

    Energy Technology Data Exchange (ETDEWEB)

    Jin, F; Watanabe, Y; Kishita, A; Enomoto, H [Graduate School of Environmental Studies, Tohoku University, Sendai 980-8579 (Japan); Kishida, H [Environmental Systems Headquarters, Environmental Research and Development Center Hitachi Zosen Corporation, Kyoto 625-8501 (Japan)], E-mail: fmjin@mail.tongji.edu.cn

    2008-07-15

    This study aimed to produce acetic acid from vegetable wastes by a new hydrothermal two-step process. A continuous flow reaction system developed by us, with a maximum treatment capacity of 2 kg/h of dry biomass, was used. Five kinds of vegetables (carrots, white radish, Chinese cabbage, cabbage and potato) were selected as representative vegetable wastes. First, batch experiments with the selected vegetables were performed at 300 °C for 1 min in the first step, and at 300 °C for 1 min with a 70% oxygen supply in the second step, which are the optimum conditions for producing acetic acid when starch is used as the test material. The highest yields of acetic acid from the five vegetables were almost the same as those obtained from starch. Subsequently, similarly high acetic acid yields were also obtained successfully from the vegetables under comparable experimental conditions using the continuous flow reaction system. These results should be useful for developing an industrial-scale process.

  9. Towards the Automated Annotation of Process Models

    NARCIS (Netherlands)

    Leopold, H.; Meilicke, C.; Fellmann, M.; Pittke, F.; Stuckenschmidt, H.; Mendling, J.

    2016-01-01

    Many techniques for the advanced analysis of process models build on the annotation of process models with elements from predefined vocabularies such as taxonomies. However, the manual annotation of process models is cumbersome and sometimes even hardly manageable taking the size of taxonomies into

  10. A Risk Management Process for Consumers: The Next Step in Information Security

    NARCIS (Netherlands)

    van Cleeff, A.

    2010-01-01

    Simply by using information technology, consumers expose themselves to considerable security risks. Because no technical or legal solutions are readily available, and awareness programs have limited impact, the only remedy is to develop a risk management process for consumers. Consumers need to

  11. Single step preparation of NdFeB alloy by magnesiothermic reduction-diffusion process

    International Nuclear Information System (INIS)

    Singha, Vinay Kant; Surendranathana, A.O.; John Berchmans, L.

    2014-01-01

    Magnesiothermic reduction is a new approach to produce the NdFeB alloy on a commercial scale. Similar studies were conducted for the preparation of LaNi5 and SmCo5 using magnesium as the reductant. In the present investigation, NdFeB hard magnetic bulk materials were synthesized by a metallothermic 'Reduction-Diffusion (R-D) process' using magnesium as a reductant. For this process, oxide precursors of Nd, Fe and B were blended with flux (LiCl/CaCl2), and Mg chips were sandwiched in alternate layers. Thermal analysis (TGA/DTA) was carried out to find the dissociation and decomposition temperatures of the reactants. The phase composition, structure, and elemental composition were assessed by X-ray diffraction (XRD) and energy-dispersive spectrometry (EDS). The infrared (IR) spectra were recorded by Fourier transform infrared spectrometry (FTIR). The morphological features and particle size were assessed by scanning electron microscopy (SEM). The magnetic behaviour of the alloy was assessed using electron paramagnetic resonance (EPR) and a vibrating sample magnetometer (VSM). From these studies it has been concluded that NdFeB magnetic particles can be prepared using magnesium as the reductant. The process is faster and consumes much less energy than the conventional calciothermic reduction process. Traces of MgO were detected in the alloy, which increase the perpendicular anisotropy, thus increasing the coercivity of the material

  12. Two step estimation for Neyman-Scott point process with inhomogeneous cluster centers

    Czech Academy of Sciences Publication Activity Database

    Mrkvička, T.; Muška, Milan; Kubečka, Jan

    2014-01-01

    Roč. 24, č. 1 (2014), s. 91-100 ISSN 0960-3174 R&D Projects: GA ČR(CZ) GA206/07/1392 Institutional support: RVO:60077344 Keywords : bayesian method * clustering * inhomogeneous point process Subject RIV: EH - Ecology, Behaviour Impact factor: 1.623, year: 2014

  13. ACSEPT a European project for a new step in the future demonstration of advanced fuel processing

    Energy Technology Data Exchange (ETDEWEB)

    Bourg, S.; Hill, C. [CEA, DRCP - Bat 181, CEA Marcoule, BP17171, 30207 Bagnols/Ceze (France); Caravaca, C.; Espartero, A. [CIEMAT, Avda. Complutense, 22 - 28040 Madrid (Spain); Rhodes, C.; Taylor, R.; Harrison, M. [National Nuclear Laboratory, Sellafield, Seascale, Cumbria, CA20 1PG (United Kingdom); EKBERG, C. [Chalmers tekniska hoegskola, Institutionen foer kemi- och bioteknik, Aemnesomraadets namn, 412 96 Goeteborg (Sweden); GEIST, A. [Forschungszentrum Karlsruhe, Institut fuer Nukleare Entsorgungstechnik, P.O.B. 3640, D-76021 Karlsruhe (Germany); Modolo, G. [Forschungszentrum Juelich - FZJ, D-52425 Juelich (Germany); Cassayre, L. [CNRS, Laboratoire de Genie Chimique, Toulouse (France); Malmbeck, R. [JRC-ITU, Karlsruhe (Germany); De Angelis, G. [ENEA, Casaccia, Rome (Italy); Bouvet, S. [Rio Tinto Alcan, Centre de Recherche de Voreppe, Voreppe (France); Klaassen, F. [NRG, PO Box 25, NL-1755 ZG Petten (Netherlands)

    2010-07-01

    For more than fifteen years, a European scientific community has joined its effort to develop and optimise processes for the partitioning of actinides from fission products. In an international context of 'nuclear renaissance', the upcoming of a new generation of nuclear reactor (Gen IV) will require the development of associated advanced closed fuel cycles which answer the needs of a sustainable nuclear energy: the minimization of the production of long lived radioactive waste but also the optimization of the use of natural resources with an increased resistance to proliferation. Actually, Partitioning and Transmutation (P and T), associated to a multi-recycling of all transuranics (TRUs), should play a key role in the development of this sustainable nuclear energy. By joining together 34 Partners coming from European universities, nuclear research bodies and major industrial players in a multidisciplinary consortium, the FP7 EURATOM-Fission Collaborative Project ACSEPT (Actinide recycling by Separation and Transmutation), started in 2008 for four year duration, provides the sound basis and fundamental improvements for future demonstrations of fuel treatment in strong connection with fuel fabrication techniques. Consistently with potentially viable recycling strategies, ACSEPT therefore provides a structured R and D framework to develop chemical separation processes compatible with fuel fabrication techniques, with a view to their future demonstration at the pilot level. ACSEPT is organized into three technical domains: (i) Considering technically mature aqueous separation processes, ACSEPT works to optimize and select the most promising ones dedicated either to actinide partitioning or to group actinide separation. (ii) Concerning high temperature pyrochemical separation processes, ACSEPT focuses on the enhancement of the two reference cores of process selected within previous projects. R and D efforts are now devoted to key scientific and technical

  14. ACSEPT a European project for a new step in the future demonstration of advanced fuel processing

    International Nuclear Information System (INIS)

    Bourg, S.; Hill, C.; Caravaca, C.; Espartero, A.; Rhodes, C.; Taylor, R.; Harrison, M.; EKBERG, C.; GEIST, A.; Modolo, G.; Cassayre, L.; Malmbeck, R.; De Angelis, G.; Bouvet, S.; Klaassen, F.

    2010-01-01

    For more than fifteen years, a European scientific community has joined its effort to develop and optimise processes for the partitioning of actinides from fission products. In an international context of 'nuclear renaissance', the upcoming of a new generation of nuclear reactor (Gen IV) will require the development of associated advanced closed fuel cycles which answer the needs of a sustainable nuclear energy: the minimization of the production of long lived radioactive waste but also the optimization of the use of natural resources with an increased resistance to proliferation. Actually, Partitioning and Transmutation (P and T), associated to a multi-recycling of all transuranics (TRUs), should play a key role in the development of this sustainable nuclear energy. By joining together 34 Partners coming from European universities, nuclear research bodies and major industrial players in a multidisciplinary consortium, the FP7 EURATOM-Fission Collaborative Project ACSEPT (Actinide recycling by Separation and Transmutation), started in 2008 for four year duration, provides the sound basis and fundamental improvements for future demonstrations of fuel treatment in strong connection with fuel fabrication techniques. Consistently with potentially viable recycling strategies, ACSEPT therefore provides a structured R and D framework to develop chemical separation processes compatible with fuel fabrication techniques, with a view to their future demonstration at the pilot level. ACSEPT is organized into three technical domains: (i) Considering technically mature aqueous separation processes, ACSEPT works to optimize and select the most promising ones dedicated either to actinide partitioning or to group actinide separation. (ii) Concerning high temperature pyrochemical separation processes, ACSEPT focuses on the enhancement of the two reference cores of process selected within previous projects. R and D efforts are now devoted to key scientific and technical points

  15. Ozonisation of model compounds as a pretreatment step for the biological wastewater treatment

    International Nuclear Information System (INIS)

    Degen, U.

    1979-11-01

    Biological degradability and toxicity of organic substances are two basic criteria determining their behaviour in the natural environment and during the biological treatment of waste waters. In this work, oxidation products of model compounds (p-toluenesulfonic acid, benzenesulfonic acid and aniline) generated by ozonation were tested in a two-step laboratory plant with activated sludge. The organic oxidation products and the initial compounds were the sole source of carbon for the microbes of the adapted activated sludge. The progress of elimination of the compounds was studied by measuring DOC, COD, UV spectra of the initial compounds, and sulfate. Initial concentrations of the model compounds were 2-4 mmole/l, with 25-75% oxidation of the sulfonic acids. As oxidation products of p-toluenesulfonic acid, the following compounds were identified and quantitatively measured: methylglyoxal, pyruvic acid, oxalic acid, acetic acid, formic acid and sulfate. With all the various solutions with different concentrations of initial compounds and oxidation products, the biological activity in the two-step laboratory plant could be maintained. p-Toluenesulfonic acid and the oxidation products are biologically degraded. The degradation of p-toluenesulfonic acid is measured by following the increase of the sulfate concentration after biological treatment. This shows that the elimination of p-toluenesulfonic acid is not an adsorption but a mineralization step. At high p-toluenesulfonic acid concentration and low concentration of oxidation products, p-toluenesulfonic acid is eliminated with a high efficiency (4.3 mole/d m 3 = 0.34 kg p-toluenesulfonic acid/d m 3 ). However, at high concentrations of oxidation products, p-toluenesulfonic acid is less degraded. The oxidation products are always degraded with an elimination efficiency of 70%. A high load of biologically degradable oxidation products diminished the elimination efficiency of p-toluenesulfonic acid. (orig.) [de]

  16. Process mapping evaluation of medication reconciliation in academic teaching hospitals: a critical step in quality improvement.

    Science.gov (United States)

    Holbrook, Anne; Bowen, James M; Patel, Harsit; O'Brien, Chris; You, John J; Tahavori, Roshan; Doleweerd, Jeff; Berezny, Tim; Perri, Dan; Nieuwstraten, Carmine; Troyan, Sue; Patel, Ameen

    2016-12-30

    Medication reconciliation (MedRec) has been a mandated or recommended activity in Canada, the USA and the UK for nearly 10 years. Accreditation bodies in North America will soon require MedRec for every admission, transfer and discharge of every patient. Studies of MedRec have revealed unintentional discrepancies in prescriptions but no clear evidence that clinically important outcomes are improved, leading to widely variable practices. Our objective was to apply process mapping methodology to MedRec to clarify current processes and resource usage, identify potential efficiencies and gaps in care, and make recommendations for improvement in the light of current literature evidence of effectiveness. Process engineers observed and recorded all MedRec activities at 3 academic teaching hospitals, from initial emergency department triage to patient discharge, for general internal medicine patients. Process maps were validated with frontline staff, then with the study team, managers and patient safety leads to summarise current problems and discuss solutions. Across all of the 3 hospitals, 5 general problem themes were identified: lack of use of all available medication sources, duplication of effort creating inefficiency, lack of timeliness of completion of the Best Possible Medication History, lack of standardisation of the MedRec process, and suboptimal communication of MedRec issues between physicians, pharmacists and nurses. MedRec as practised in this environment requires improvements in quality, timeliness, consistency and dissemination. Further research exploring efficient use of resources, in terms of personnel and costs, is required.

  17. Evaluation of hydrodynamic ocean models as a first step in larval dispersal modelling

    Science.gov (United States)

    Vasile, Roxana; Hartmann, Klaas; Hobday, Alistair J.; Oliver, Eric; Tracey, Sean

    2018-01-01

    Larval dispersal modelling, a powerful tool in studying population connectivity and species distribution, requires accurate estimates of the ocean state, on a high-resolution grid in both space (e.g. 0.5-1 km horizontal grid) and time (e.g. hourly outputs), particularly of current velocities and water temperature. These estimates are usually provided by hydrodynamic models based on which larval trajectories and survival are computed. In this study we assessed the accuracy of two hydrodynamic models around Australia - Bluelink ReANalysis (BRAN) and Hybrid Coordinate Ocean Model (HYCOM) - through comparison with empirical data from the Australian National Moorings Network (ANMN). We evaluated the models' predictions of seawater parameters most relevant to larval dispersal - temperature, u and v velocities and current speed and direction - on the continental shelf where spawning and nursery areas for major fishery species are located. The performance of each model in estimating ocean parameters was found to depend on the parameter investigated and to vary from one geographical region to another. Both BRAN and HYCOM models systematically overestimated the mean water temperature, particularly in the top 140 m of water column, with over 2 °C bias at some of the mooring stations. HYCOM model was more accurate than BRAN for water temperature predictions in the Great Australian Bight and along the east coast of Australia. Skill scores between each model and the in situ observations showed lower accuracy in the models' predictions of u and v ocean current velocities compared to water temperature predictions. For both models, the lowest accuracy in predicting ocean current velocities, speed and direction was observed at 200 m depth. Low accuracy of both model predictions was also observed in the top 10 m of the water column. BRAN had more accurate predictions of both u and v velocities in the upper 50 m of water column at all mooring station locations. While HYCOM
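
    As an aside for readers reproducing this kind of validation, the comparison in the record above reduces to computing a bias and a skill metric between model output and mooring observations. The short Python sketch below illustrates one common way to do this; the skill-score definition (1 - MSE / variance of the observations), the variable names and the synthetic data are assumptions of this illustration, not the exact metrics used by the cited authors.

```python
# Illustrative sketch (not the authors' code): comparing hydrodynamic model output
# against mooring observations with a bias and a simple skill score.
import numpy as np

def bias(model, obs):
    """Mean difference, model minus observations (e.g. degC for temperature)."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return np.nanmean(model - obs)

def skill_score(model, obs):
    """1 - MSE / variance of the observations; 1 is perfect, <= 0 means no skill."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 1.0 - np.nanmean((model - obs) ** 2) / np.nanvar(obs)

# toy example: hourly temperature at one mooring depth; the "model" runs ~2 degC warm
obs = 18.0 + 0.5 * np.sin(np.linspace(0.0, 20.0, 500))
model = obs + 2.1 + 0.3 * np.random.default_rng(0).normal(size=obs.size)

print(f"bias  = {bias(model, obs):+.2f} degC")
print(f"skill = {skill_score(model, obs):.2f}")
```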

  18. From Nanoparticles to Process: An Aberration-Corrected TEM Study of Fischer-Tropsch Catalysts at Various Steps of the Process

    International Nuclear Information System (INIS)

    Braidy, N.; Blanchard, J.; Abatzoglou, N.; Andrei, C.

    2011-01-01

    The nanostructure of Fischer-Tropsch (FT) Fe carbides is investigated using aberration-corrected high-resolution transmission electron microscopy (TEM). The plasma-generated Fe carbides are analyzed just after synthesis, following reduction via a H 2 treatment step, and once used as FT catalyst and deactivated. The as-produced nanoparticles (NPs) are seen to be abundantly covered with graphitic and amorphous carbon. Using the extended information limit from the spherical aberration-corrected TEM, the NPs could be indexed as a mixture of NPs in the θ-Fe 3 C and χ-Fe 5 C 2 phases. The reduction treatment exposed the NPs by removing most of the carbonaceous species while retaining the χ-Fe 5 C 2 . Fe-carbide NPs submitted to conditions typical of FT synthesis develop a Fe 3 O 4 shell which eventually consumes the NPs up to a point where a 3-4 nm residual carbide is left at the center of the particle. Various mechanisms explaining the formation of such a microstructure are discussed. (author)

  19. The Role of Peer Tutoring: Steps to Describing a Three-Dimensional Model.

    Science.gov (United States)

    Davis, Kevin

    A comprehensive, three-dimensional model of peer tutoring, constructed by gathering current theories and research and locating them on a dynamic continuum of the tutoring process, allows researchers to break new ground in tutor research and might eventually offer a new heuristic for training peer tutors. The first axis in the model, the focus…

  20. Study of the effect of stripping in the two-step anodizing process on pore arrangement of nano-porous alumina

    International Nuclear Information System (INIS)

    Rahimi, M.H.; Saramad, S.; Tabaian, S.H.; Marashi, S.P.; Zolfaghari, A.; Mohammadalinezhad, M.

    2009-01-01

    Two-step anodic oxidation of aluminum is generally employed to produce ordered porous anodized alumina (PAA). Dissolving away (stripping) the oxide film after the first anodizing step plays a key role in the final arrangement of the nano-pores. In this work, different stripping durations between 1 and 6 h were applied to the sample that was initially anodized at a constant voltage of 40 V at 17 deg. C for 15 h. A stripping duration of 3 h was found to be the optimum time for achieving the best degree of pore ordering. Scanning electron microscopy (SEM) was used during and at the end of the process to examine the cross section and finished surface of the specimens. Linear-angular fast Fourier transform (LA-FFT), an in-house technique based on MATLAB software, was employed to assess the degree of ordering of the anodized samples.
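
    The LA-FFT technique referenced above is in-house MATLAB code that is not reproduced here, but the underlying idea, judging pore ordering from the angular distribution of the 2-D FFT of a pore image, can be sketched in a few lines. The Python snippet below is only an illustration of that idea; the synthetic lattice image, the radius cut-off and the peak-to-mean ratio used as an ordering proxy are assumptions.

```python
# Illustrative sketch only -- not the authors' in-house MATLAB LA-FFT code.
# A well-ordered (hexagonal) pore array produces sharp spots in the 2-D FFT, so the
# angular distribution of spectral power is one crude proxy for the degree of ordering.
import numpy as np

def angular_spectrum(pore_image, n_bins=360):
    """Angular distribution of 2-D FFT power, with the low-frequency centre removed."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(pore_image))) ** 2
    ny, nx = power.shape
    y, x = np.indices(power.shape)
    y, x = y - ny // 2, x - nx // 2
    radius = np.hypot(x, y)
    theta = np.mod(np.degrees(np.arctan2(y, x)), 360.0)
    mask = radius > 2                      # drop the DC / very-low-frequency region
    hist, _ = np.histogram(theta[mask], bins=n_bins, weights=power[mask])
    return hist / hist.sum()

# toy "SEM" image: a regular lattice of bright pores stands in for a well-ordered PAA scan
img = np.zeros((256, 256))
for row in range(0, 256, 16):
    for col in range(0, 256, 16):
        img[(row + (col // 16 % 2) * 8) % 256, col] = 1.0

spectrum = angular_spectrum(img)
print("peak-to-mean ratio of angular power:", round(float(spectrum.max() / spectrum.mean()), 1))
```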

  1. Step training in a rat model for complex aneurysmal vascular microsurgery

    Directory of Open Access Journals (Sweden)

    Martin Dan

    2015-12-01

    Full Text Available Introduction: Microsurgery training is a key step for young neurosurgeons. In both vascular and peripheral nerve pathology, microsurgical techniques are useful tools for proper treatment. Many training models have been described, including ex vivo (chicken wing) and in vivo (rat, rabbit) ones. Complex microsurgery training includes termino-terminal vessel anastomosis and nerve repair. The aim of this study was to describe a reproducible complex microsurgery training model in rats. Materials and methods: The experimental animals were Brown Norway male rats between 10-16 weeks of age (average 13) and weighing between 250-400 g (average 320 g). We performed n=10 rat hind limb replantations. The surgical steps and preoperative management are carefully described. We evaluated the vascular patency by clinical assessment (color, temperature, capillary refill). The rats were inspected daily for any signs of infection. Nerve regeneration was assessed by the footprint method. Results: There were no cases of vascular compromise or autophagia. All rats had long-term survival (>90 days). Nerve regeneration was clinically complete at 6 months postoperatively. The mean operative time was 183 minutes, and the ischemia time was 25 minutes.

  2. Generalized equivalent circuit model for ultra wideband antenna structure with double steps for energy scavenging

    International Nuclear Information System (INIS)

    Heong, Oon Kheng; Hock, Goh Chin; Chakrabarty, Chandan Kumar; Hock, Goh Tian

    2013-01-01

    There are various types of UWB antennas that can be used to scavenge energy from the air, and one of them is the printed disc monopole antenna. One of the new challenges in ultra wideband design is a generalized antenna circuit model, developed in order to extract the inductance and capacitance values of UWB antennas. In this research work, the developed circuit model is used to represent a rectangular printed disc monopole antenna with double steps. The antenna structure is simulated with CST Microwave Studio, while the circuit model is simulated with AWR Microwave Office. In order to ensure that the simulation result from the circuit model is accurate, the circuit model is also simulated using a MATLAB program. The developed circuit model is found to be able to depict the actual UWB antenna. Harvesting energy wirelessly from the environment is an emerging method, which forms a promising alternative to existing energy scavenging systems. The developed UWB antenna can be used to scavenge wideband energy from electromagnetic waves present in the environment.
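
    The record above describes extracting inductance and capacitance values for a lumped equivalent circuit of the antenna. As a generic illustration (not the circuit topology or element values published in the paper), the sketch below evaluates the input impedance of a series connection of parallel RLC resonators, a form often used for wideband monopoles, and the resulting reflection against a 50-ohm feed; all element values are placeholders.

```python
# Minimal sketch, not the published circuit: a wideband monopole is often approximated
# by several parallel RLC resonators in series; the R, L, C values below are placeholders.
import numpy as np

def parallel_rlc_impedance(freq, R, L, C):
    w = 2.0 * np.pi * freq
    return 1.0 / (1.0 / R + 1.0 / (1j * w * L) + 1j * w * C)

def antenna_input_impedance(freq, stages):
    """Series connection of parallel-RLC stages: Zin(f) is the sum of stage impedances."""
    return sum(parallel_rlc_impedance(freq, R, L, C) for R, L, C in stages)

freq = np.linspace(2e9, 11e9, 500)                                # UWB band, Hz
stages = [(60.0, 1.2e-9, 0.5e-12), (55.0, 0.6e-9, 0.3e-12)]       # placeholder R, L, C values
zin = antenna_input_impedance(freq, stages)
s11_db = 20.0 * np.log10(np.abs((zin - 50.0) / (zin + 50.0)))     # reflection vs a 50-ohm feed
print("best return loss in band:", round(float(s11_db.min()), 1), "dB")
```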

  3. A Step Forward to Closing the Loop between Static and Dynamic Reservoir Modeling

    Directory of Open Access Journals (Sweden)

    Cancelliere M.

    2014-12-01

    Full Text Available The current trend for history matching is to find multiple calibrated models instead of a single set of model parameters that match the historical data. The advantage of several current workflows involving assisted history matching techniques, particularly those based on heuristic optimizers or direct search, is that they lead to a number of calibrated models that partially address the problem of the non-uniqueness of the solutions. The importance of achieving multiple solutions is that calibrated models can be used for a true quantification of the uncertainty affecting the production forecasts, which represent the basis for technical and economic risk analysis. In this paper, the importance of incorporating the geological uncertainties in a reservoir study is demonstrated. A workflow, which includes the analysis of the uncertainty associated with the facies distribution for a fluvial depositional environment in the calibration of the numerical dynamic models and, consequently, in the production forecast, is presented. The first step in the workflow was to generate a set of facies realizations starting from different conceptual models. After facies modeling, the petrophysical properties were assigned to the simulation domains. Then, each facies realization was calibrated separately by varying permeability and porosity fields. Data assimilation techniques were used to calibrate the models in a reasonable span of time. Results showed that even the adoption of a conceptual model for facies distribution clearly representative of the reservoir internal geometry might not guarantee reliable results in terms of production forecast. Furthermore, results also showed that realizations which seem fully acceptable after calibration were not representative of the true reservoir internal configuration and provided wrong production forecasts; conversely, realizations which did not show a good fit of the production data could reliably predict the reservoir

  4. Development of Two-Step Temperature Process to Modulate the Physicochemical Properties of β-lactoglobulin Nanoparticles.

    Science.gov (United States)

    Ha, Ho-Kyung; Nam, Gyeong-Won; Khang, Dongwoo; Park, Sung Jean; Lee, Mee-Ryung; Lee, Won-Jae

    2017-01-01

    The development of a new manufacturing process, a two-step temperature treatment, to modulate the physicochemical properties of nanoparticles, including their size, is critical, because these physicochemical properties can be key factors affecting the cellular uptake and the bioavailability of bioactive compounds encapsulated in nanoparticles. The aims of this study were to produce β-lactoglobulin (β-lg) nanoparticles and to understand how a two-step temperature treatment could affect the formation and physicochemical properties of β-lg nanoparticles. The morphological and physicochemical properties of β-lg nanoparticles were determined using atomic force microscopy and a particle size analyzer, respectively. Circular dichroism spectroscopy was used to investigate the secondary structure of β-lg. The surface hydrophobicity and free thiol groups of β-lg increased with a decrease in sub-ambient temperature and an increase in mild heat temperature. As the sub-ambient temperature was decreased, a decrease in α-helical content and an increase in β-sheet content were observed. The two-step temperature treatment firstly involved a sub-ambient temperature treatment from 5 to 20°C for 30 min, followed secondly by a mild heat temperature treatment from 55 to 75°C for 10 min. This resulted in the production of spherically shaped particles with a size ranging from 61 to 214 nm. Two-way ANOVA indicated that both sub-ambient and mild heat temperature had significant (p < 0.05) effects. The two-step temperature treatment was thus shown to play an important role in the manufacturing process, both through its inducement of the conformational changes of β-lg during nanoparticle formation and through its modulation of the physicochemical properties of β-lg nanoparticles.

  5. Efficient Preparation and Performance Characterization of the HMX/F2602 Microspheres by One-Step Granulation Process

    Directory of Open Access Journals (Sweden)

    Conghua Hou

    2017-01-01

    Full Text Available A new one-step granulation process for preparing high melting explosive (HMX)-based PBX was developed. HMX/F2602 microspheres were successfully prepared using HMX and F2602 as the main explosive and binder, respectively. The particle morphology, particle size, crystal structure, thermal stability, and impact sensitivity of the as-prepared HMX/F2602 microspheres were characterized by scanning electron microscopy (SEM), X-ray diffraction (XRD), laser particle size analysis, differential scanning calorimetry (DSC), and an impact sensitivity test, respectively. The SEM analysis indicated successful coating of F2602 on the surface of HMX, and the resulting particles are ellipsoidal or spherical with a median particle size of 940 nm; the XRD analysis did not show any change in the crystal structure after the coating, the material retaining the β-HMX crystal structure; according to the DSC analysis, HMX/F2602 prepared by the new method has better thermal stability than that prepared by the water suspension process. The impact sensitivity of HMX/F2602 prepared by this one-step granulation process decreased, and its characteristic height H50 increased from 37.62 to 40.13 cm, thus significantly improving the safety performance. More importantly, this method does not need a freeze-drying step after recrystallization, thus increasing the efficiency by 2 to 3 times.

  6. Two-step sulfonation process for the conversion of polymer fibers to carbon fibers

    Energy Technology Data Exchange (ETDEWEB)

    Barton, Bryan E.; Patton, Jasson T.; Hukkanen, Eric J.; Bernius, Mark T.

    2017-11-14

    Disclosed herein are processes for preparing carbon fibers, comprising: sulfonating a polymer fiber with a sulfonating agent that is fuming sulfuric acid, sulfuric acid, chlorosulfonic acid, or a combination thereof; treating the sulfonated polymer with a heated solvent, wherein the temperature of the heated solvent is at least 95.degree. C.; and carbonizing the resulting product by heating it to a temperature of 501-3000.degree. C. Carbon fibers prepared according to these methods are also disclosed herein.

  7. Process and structures for fabrication of solar cells with laser ablation steps to form contact holes

    Science.gov (United States)

    Harley, Gabriel; Smith, David D; Dennis, Tim; Waldhauer, Ann; Kim, Taeseok; Cousins, Peter John

    2013-11-19

    Contact holes of solar cells are formed by laser ablation to accommodate various solar cell designs. Use of a laser to form the contact holes is facilitated by replacing films formed on the diffusion regions with a film that has substantially uniform thickness. Contact holes may be formed to deep diffusion regions to increase the laser ablation process margins. The laser configuration may be tailored to form contact holes through dielectric films of varying thickness.

  8. ACSEPT, a European project for a new step in the future demonstration of advanced fuel processing

    Energy Technology Data Exchange (ETDEWEB)

    Bourg, S. [CEA Marcoule 30 (France); Hill, C. [CEA Saclay, 91 - Gif sur Yvette (France); Caravaca, C.; Espartero, A. [Ciemat, Madrid (Spain); Rhodes, C.; Taylor, R.; Harrison, M. [National Nuclear Laboratory (United Kingdom); Geist, A. [Fachinformationszentrum Karlsruhe - INE (Germany); Modolo, G. [Forschungszentrum Juelich - FZJ (Germany); Cassayre, L. [Centre National de la Recherche Scientifique (CNRS), 91 - Orsay (France); Malmbeck, R. [Joint Research Centre (JRC) - Institute for Transuranium Elements (ITU) (Germany); De Angelis, G. [ENEA, Bologna (Italy); Bouvet, S. [Alcan, 92 - Courbevoie (France); Klaassen, F. [Nuclear Research and consultancy Group (NRG) (Netherlands); Ekber, C.

    2010-11-15

    Partitioning and transmutation, associated to a multi-recycling of all transuranics should play a key role in the development of sustainable nuclear energy. By joining together 34 partners coming from European universities, nuclear research laboratories and major industrial players, in a multi-disciplinary consortium, the FP7-Euratom-Fission collaborative project ACSEPT (Actinide recycling by separation and transmutation), provides the sound basis and future improvements for future demonstrations of fuel treatment in strong connection with fuel fabrication techniques. ACSEPT is organized into 3 technical domains: 1) selecting and optimizing mature aqueous separation processes (Diamex-Sanex, Ganex); 2) high temperature pyrochemical separation processes, and 3) carrying out engineering and systems studies on hydro- and pyro-chemical processes to prepare for future demonstration at a pilot level. After 2 years of work, 2 successful hot-tests were performed in hydrometallurgy, validating the Sanex and i-Sanex routes. Efforts are now devoted to the Ganex concept. Progress was also made in fuel dissolution and fuel re-fabrication. In pyrometallurgy, promising routes are almost demonstrated for the actinide recovery from aluminium. (A.C.)

  9. Model-based optimization of the primary drying step during freeze-drying

    DEFF Research Database (Denmark)

    Mortier, Séverine Thérèse F.C.; Van Bockstal, Pieter-Jan; Nopens, Ingmar

    2015-01-01

    Since large molecules are considered the key driver for growth of the pharmaceutical industry, the focus of the pharmaceutical industry is shifting from small molecules to biopharmaceuticals: around 50% of the approved biopharmaceuticals are freeze-dried products. Therefore, freeze-drying is an important technology to stabilise biopharmaceutical drug products which are unstable in an aqueous solution. However, the freeze-drying process is an energy- and time-consuming process. The use of mechanistic modelling to gather process knowledge can assist in optimisation of the process parameters during the operation of the freeze-drying process. By applying a dynamic shelf temperature and chamber pressure, which are the only controllable process variables, the processing time can be decreased by a factor 2 to 3.
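
    To make the idea of mechanistic primary-drying optimisation concrete, the sketch below integrates a drastically simplified sublimation model: vapour flux driven by the difference between the ice vapour pressure at the sublimation front and the chamber pressure, divided by a dried-layer resistance. The vapour-pressure correlation, resistance parameters, fixed front temperature and geometry are all assumptions for illustration, not the model or values of the cited work; the sketch only shows how a more aggressive temperature/pressure setting shortens the computed drying time.

```python
# Heavily simplified sketch of a mechanistic primary-drying (sublimation) model;
# all parameter values and the constant-front-temperature assumption are illustrative.
import numpy as np

def ice_vapour_pressure(T):
    """Vapour pressure of ice in Pa (common correlation); T in kelvin."""
    return np.exp(28.8912 - 6139.9 / T)

def primary_drying_time(T_front, P_chamber, L_ice=0.005, rho_ice=919.0,
                        eps=0.95, Rp0=1.0e4, Rp1=3.0e7):
    """Hours needed to sublime a 5 mm ice layer, with dried-layer resistance
    Rp(l) = Rp0 + Rp1 * l (Pa*s*m2/kg); assumes a constant front temperature
    and a positive driving force throughout."""
    dt, t, dried = 1.0, 0.0, 0.0                    # s, s, dried-layer thickness (m)
    while dried < L_ice:
        flux = (ice_vapour_pressure(T_front) - P_chamber) / (Rp0 + Rp1 * dried)
        dried += flux * dt / (rho_ice * eps)        # convert kg/(m2 s) to front motion in m/s
        t += dt
    return t / 3600.0

print(round(primary_drying_time(T_front=250.0, P_chamber=10.0), 2), "h")  # milder cycle
print(round(primary_drying_time(T_front=255.0, P_chamber=5.0), 2), "h")   # more aggressive cycle
```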

  10. TWO-STEP ALGORITHM OF TRAINING INITIALIZATION FOR ACOUSTIC MODELS BASED ON DEEP NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    I. P. Medennikov

    2016-03-01

    Full Text Available This paper presents a two-step initialization algorithm for training of acoustic models based on deep neural networks. The algorithm is focused on reducing the impact of the non-speech segments on the acoustic model training. The idea of the proposed algorithm is to reduce the percentage of non-speech examples in the training set. Effectiveness evaluation of the algorithm has been carried out on the example of English spontaneous telephone speech recognition (Switchboard. The application of the proposed algorithm has led to 3% relative word error rate reduction, compared with the training initialization by restricted Boltzmann machines. The results presented in the paper can be applied in the development of automatic speech recognition systems.
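
    The core of the two-step initialization described above is reducing the share of non-speech frames seen during the first training stage. The sketch below shows one plausible way to rebalance a frame-level training set before pretraining; the 10% target ratio, the label convention and the random subsampling are assumptions of this illustration rather than the authors' exact recipe.

```python
# Sketch of the core idea only: keep all speech frames and subsample non-speech frames
# so that they form at most a target fraction of the training set.
import numpy as np

def rebalance_frames(features, is_speech, max_nonspeech_ratio=0.1, seed=0):
    """Return (features, labels) with the non-speech share capped at max_nonspeech_ratio."""
    rng = np.random.default_rng(seed)
    speech_idx = np.flatnonzero(is_speech)
    nonspeech_idx = np.flatnonzero(~is_speech)
    n_keep = int(max_nonspeech_ratio / (1.0 - max_nonspeech_ratio) * speech_idx.size)
    keep_nonspeech = rng.choice(nonspeech_idx, size=min(n_keep, nonspeech_idx.size),
                                replace=False)
    keep = np.sort(np.concatenate([speech_idx, keep_nonspeech]))
    return features[keep], is_speech[keep]

# toy data: 40-dim filterbank frames, 30% of which are non-speech
X = np.random.default_rng(1).normal(size=(10000, 40))
speech = np.random.default_rng(2).random(10000) > 0.3
X_bal, speech_bal = rebalance_frames(X, speech)
print("non-speech share before/after:",
      round(1 - speech.mean(), 2), round(1 - speech_bal.mean(), 2))
```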

  11. THE PROCESSING STEPS IN THE RENEWAL OF PLUG-FORMING DETAILS OF PIPELINE FITTINGS

    Directory of Open Access Journals (Sweden)

    Vladimir A. Skryabin

    2016-06-01

    Full Text Available Introduction. In the production and repair of pipeline fittings, grinding-in (lapping) is considered one of the major technological operations. Its main task is to ensure the impermeability of the closure. Whenever problems arise in achieving impermeability, the diagnosed reason is practically always the same: the lapping of the sealing surfaces has not been carried out well enough. There is a large measure of truth in this answer; however, it is not the whole story, and the problem lies not only in lapping. Lapping is the finishing operation for the sealing surfaces, and the effectiveness of its application depends not only on exact observance of the recommended conditions and modes of the process. The operations that form quality and precede the lapping of the sealing surfaces are of major importance: if these prior operations are executed poorly, the finishing lapping operation will be inefficient. Materials and Methods. The article addresses the growing requirement to improve quality and productivity and to increase the longevity and reliability of machines and products. The lapping (polishing) process makes it possible to obtain machined surfaces with high-quality characteristics. The quality of the finishing operation is assessed by the following criteria: dimensional accuracy, form error, surface waviness indices, surface roughness indices, light-reflecting ability, and the quality characteristics of the surface layer. For the renewal of the wedge bolt body, the main task is to ensure the impermeability of the closure. Strict requirements are imposed for this purpose, namely a small surface roughness and tight tolerances on form and location; moreover, the sealing surface of the wedge bolt body must be homogeneous. Results. In order to attain the specified roughness of the sealing surface, the trajectory of the tool motion must have a certain character. Because on this machine-tool a

  12. Modelling noninvasively measured cerebral signals during a hypoxemia challenge: steps towards individualised modelling.

    Directory of Open Access Journals (Sweden)

    Beth Jelfs

    Full Text Available Noninvasive approaches to measuring cerebral circulation and metabolism are crucial to furthering our understanding of brain function. These approaches also have considerable potential for clinical use "at the bedside". However, a highly nontrivial task and precondition if such methods are to be used routinely is the robust physiological interpretation of the data. In this paper, we explore the ability of a previously developed model of brain circulation and metabolism to explain and predict quantitatively the responses of physiological signals. The five signals, all measured noninvasively during hypoxemia in healthy volunteers, include four near-infrared spectroscopy (NIRS) signals along with middle cerebral artery blood flow measured using transcranial Doppler flowmetry. We show that optimising the model using partial data from an individual can increase its predictive power, thus aiding the interpretation of NIRS signals in individuals. At the same time, such optimisation can also help refine model parametrisation and provide confidence intervals on model parameters. Discrepancies between model and data which persist despite model optimisation are used to flag important questions concerning the underlying physiology, and the reliability and physiological meaning of the signals.
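
    The optimisation step mentioned above (fitting the model to partial data from an individual, then predicting the rest) can be illustrated with a toy example. The sketch below fits a two-parameter toy response model to the first part of a synthetic record with least squares and evaluates it on the held-out part; the toy model, the SpO2 trace and the train/held-out split are assumptions, standing in for the detailed circulation/metabolism model used in the paper.

```python
# Sketch of the optimisation step only: a toy two-parameter model fitted to part of one
# subject's data, then used to predict the remaining data (not the paper's actual model).
import numpy as np
from scipy.optimize import least_squares

def toy_model(params, spo2):
    """Toy response of a cerebral signal (e.g. blood flow velocity) to arterial SpO2."""
    gain, baseline = params
    return baseline + gain * (spo2 - spo2.mean())

rng = np.random.default_rng(3)
spo2 = 97.0 - 15.0 * np.sin(np.linspace(0.0, np.pi, 200))          # hypoxemia swing, %
true = toy_model([-0.8, 60.0], spo2) + rng.normal(scale=1.0, size=spo2.size)

fit_idx = slice(0, 120)                      # "partial data from an individual"
res = least_squares(lambda p: toy_model(p, spo2[fit_idx]) - true[fit_idx], x0=[0.0, 50.0])

pred = toy_model(res.x, spo2)
rmse_heldout = np.sqrt(np.mean((pred[120:] - true[120:]) ** 2))
print("fitted params:", res.x.round(2), " held-out RMSE:", round(float(rmse_heldout), 2))
```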

  13. Business Process Modelling based on Petri nets

    Directory of Open Access Journals (Sweden)

    Qin Jianglong

    2017-01-01

    Full Text Available Business process modelling is the way business processes are expressed. Business process modelling is the foundation of business process analysis, reengineering, reorganization and optimization. It can not only help enterprises achieve internal information system integration and reuse, but also help them collaborate with external partners. Based on the prototype Petri net, this paper adds time and cost factors to form an extended generalized stochastic Petri net, which provides a formal description of the business process. A semi-formalized business process modelling algorithm based on Petri nets is proposed. Finally, a case from a logistics company showed that the modelling algorithm is correct and effective.
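
    To illustrate what "a Petri net extended with time and cost factors" can mean in code, the sketch below implements a minimal marked Petri net whose transitions carry a duration and a cost that accumulate as the process executes. The net, the transition names and the numbers are invented, not the logistics case from the article.

```python
# Minimal sketch of a Petri net whose transitions carry time and cost annotations.
from dataclasses import dataclass

@dataclass
class Transition:
    name: str
    inputs: dict        # place -> tokens consumed
    outputs: dict       # place -> tokens produced
    duration: float     # e.g. hours
    cost: float         # e.g. euros

@dataclass
class Net:
    marking: dict
    time: float = 0.0
    cost: float = 0.0

    def enabled(self, t):
        return all(self.marking.get(p, 0) >= n for p, n in t.inputs.items())

    def fire(self, t):
        if not self.enabled(t):
            raise ValueError(f"{t.name} is not enabled")
        for p, n in t.inputs.items():
            self.marking[p] -= n
        for p, n in t.outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n
        self.time += t.duration
        self.cost += t.cost

# toy order-handling process (invented numbers)
receive = Transition("receive order", {"order": 1}, {"accepted": 1}, duration=0.5, cost=2.0)
ship = Transition("ship", {"accepted": 1}, {"delivered": 1}, duration=24.0, cost=15.0)

net = Net(marking={"order": 1})
for t in (receive, ship):
    net.fire(t)
print(net.marking, "total time:", net.time, "total cost:", net.cost)
```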

  14. The development and evaluation of single cell suspension from wheat and barley as a model system; a first step towards functional genomics application

    DEFF Research Database (Denmark)

    Dong, Jing; Bowra, Steve; Vincze, Éva

    2010-01-01

    Background The overall research objective was to develop single cell plant cultures as a model system to facilitate functional genomics of monocots, in particular wheat and barley. The essential first step towards achieving the stated objective was the development of a robust, viable single cell...... suspension culture from both species. Results We established growth conditions to allow routine culturing of somatic cells in 24 well microtiter plate format. Evaluation of the wheat and barley cell suspension as model cell system is a multi step process. As an initial step in the evaluation procedure we...... level of genes (P5CS, P5CR) under various treatments and we suggest that the cells can be used as a model host system to study gene expression and regulation in monocots....

  15. Modeling process flow using diagrams

    NARCIS (Netherlands)

    Kemper, B.; de Mast, J.; Mandjes, M.

    2010-01-01

    In the practice of process improvement, tools such as the flowchart, the value-stream map (VSM), and a variety of ad hoc variants of such diagrams are commonly used. The purpose of this paper is to present a clear, precise, and consistent framework for the use of such flow diagrams in process

  16. Duration of the first steps of the human rRNA processing

    Czech Academy of Sciences Publication Activity Database

    Popov, A.; Smirnov, E.; Kováčik, L.; Raška, O.; Hagen, G.; Stixová, Lenka; Raška, I.

    2013-01-01

    Roč. 4, č. 2 (2013), s. 134-141 ISSN 1949-1034 R&D Projects: GA ČR(CZ) GBP302/12/G157; GA MŠk(CZ) EE2.3.30.0030 Grant - others:GA ČR(CZ) GAP302/12/1885 Institutional research plan: CEZ:AV0Z50040702 Institutional support: RVO:68081707 Keywords : rRNA processing * cleavage * half-life time Subject RIV: BO - Biophysics Impact factor: 3.148, year: 2013

  17. Dynamic modeling and validation of a lignocellulosic enzymatic hydrolysis process

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail; Sin, Gürkan

    2013-01-01

    The enzymatic hydrolysis process is one of the key steps in second generation biofuel production. After being thermally pretreated, the lignocellulosic material is liquefied by enzymes prior to fermentation. The scope of this paper is to evaluate a dynamic model of the hydrolysis process on a demonstration scale reactor. The following novel features are included: the application of the Convection–Diffusion–Reaction equation to a hydrolysis reactor to assess transport and mixing effects; the extension of a competitive kinetic model with enzymatic pH dependency and hemicellulose hydrolysis
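
    The Convection-Diffusion-Reaction formulation mentioned above can be illustrated with a one-dimensional explicit finite-difference sketch: the concentration along the reactor is advected, dispersed and consumed by a first-order reaction. All parameter values, the first-order kinetics and the boundary handling below are illustrative assumptions, not the demonstration-scale model of the paper.

```python
# 1-D convection-diffusion-reaction sketch with an explicit upwind/central scheme.
import numpy as np

L, n_cells = 10.0, 100                 # reactor length (m), grid cells
dx = L / n_cells
u, D, k = 1e-3, 1e-3, 1e-4             # velocity (m/s), dispersion (m2/s), rate constant (1/s)
dt, n_steps = 1.0, 4 * 3600            # time step (s), simulated horizon (4 h)
c_in = 50.0                            # inlet substrate concentration (g/L), assumed

c = np.zeros(n_cells)
for _ in range(n_steps):
    c_left = np.concatenate(([c_in], c[:-1]))        # inlet Dirichlet condition
    c_right = np.concatenate((c[1:], [c[-1]]))       # zero-gradient outlet
    convection = -u * (c - c_left) / dx              # first-order upwind
    dispersion = D * (c_left - 2.0 * c + c_right) / dx ** 2
    c = c + dt * (convection + dispersion - k * c)   # explicit Euler update

print("outlet concentration after 4 h:", round(float(c[-1]), 2), "g/L")
```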

  18. Use of Anion Exchange Resins for One-Step Processing of Algae from Harvest to Biofuel

    Directory of Open Access Journals (Sweden)

    Martin Poenie

    2012-07-01

    Full Text Available Some microalgae are particularly attractive as a renewable feedstock for biodiesel production due to their rapid growth, high content of triacylglycerols, and ability to be grown on non-arable land. Unfortunately, obtaining oil from algae is currently cost prohibitive in part due to the need to pump and process large volumes of dilute algal suspensions. In an effort to circumvent this problem, we have explored the use of anion exchange resins for simplifying the processing of algae to biofuel. Anion exchange resins can bind and accumulate the algal cells out of suspension to form a dewatered concentrate. Treatment of the resin-bound algae with sulfuric acid/methanol elutes the algae and regenerates the resin while converting algal lipids to biodiesel. Hydrophobic polymers can remove biodiesel from the sulfuric acid/methanol, allowing the transesterification reagent to be reused. We show that in situ transesterification of algal lipids can efficiently convert algal lipids to fatty acid methyl esters while allowing the resin and transesterification reagent to be recycled numerous times without loss of effectiveness.

  19. The next step in real time data processing for large scale physics experiments

    CERN Document Server

    Paramesvaran, Sudarshan

    2016-01-01

    Run 2 of the LHC represents one of the most challenging scientific environments for real time data analysis and processing. The steady increase in instantaneous luminosity will result in the CMS detector producing around 150 TB/s of data, only a small fraction of which is useful for interesting Physics studies. During 2015 the CMS collaboration will be completing a total upgrade of its Level 1 Trigger to deal with these conditions. In this talk a description of the major components of this complex system will be described. This will include a discussion of custom-designed electronic processing boards, built to the uTCA specification with AMC cards based on Xilinx 7 FPGAs and a network of high-speed optical links. In addition, novel algorithms will be described which deliver excellent performance in FPGAs and are combined with highly stable software frameworks to ensure a minimal risk of downtime. This upgrade is planned to take data from 2016. However a system of parallel running has been developed that will ...

  20. SOME STEPS TOWARDS A SOCIO-COGNITIVE INTERPRETATION OF SECOND LANGUAGE COMPOSITION PROCESSES

    Directory of Open Access Journals (Sweden)

    Julio Roca de Larios

    2001-12-01

    Full Text Available There has been a tendency in research to interpret L2 composition processes in cognitive terms and to consider the social aspects of L2 writing as incommensurate with the former. In an attempt to initiate a more integrated interpretation of results, the present paper identifies three areas, within the field of process-oriented L2 composition research, where individual text production is shown to be socially mediated. These areas, which have been derived from the expertise approach to writing, include (i) the impact on writers' performance of the task environment; (ii) the situated nature of the skilled-unskilled distinction; and (iii) the role played by previous literacy experiences in the development of a number of aspects of composing. Recommendations for future research include the analysis of social and contextual factors mediating the transfer of writing skills across languages and the possibility of looking at individual writing as a dialogic phenomenon through a reconceptualisation of the notion of problem-space.

  1. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Heng [Pacific Northwest National Laboratory, Richland Washington USA; Ye, Ming [Department of Scientific Computing, Florida State University, Tallahassee Florida USA; Walker, Anthony P. [Environmental Sciences Division and Climate Change Science Institute, Oak Ridge National Laboratory, Oak Ridge Tennessee USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA

    2017-04-01

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
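
    The essence of such an index, the variance of conditional expectations taken over everything belonging to one process (its alternative models and their parameters) divided by the total output variance, can be sketched with a small Monte Carlo experiment. The toy output function, the two recharge models, their parameter priors and the equal model weights below are assumptions of this illustration, not the synthetic groundwater case of the paper.

```python
# Monte Carlo sketch of a grouped (process-level) sensitivity index: the "recharge"
# process is represented by two alternative models with their own random parameters.
import numpy as np

rng = np.random.default_rng(0)

def recharge(choice, a, precip=1.0):
    return a * precip if choice == 0 else a * precip ** 0.5      # two alternative process models

def output(r, conductivity):
    return r / conductivity                                       # toy head-like model output

def sample_recharge():
    choice = rng.integers(2)                                      # equal model weights, assumed
    a = rng.uniform(0.1, 0.3) if choice == 0 else rng.uniform(0.2, 0.4)
    return choice, a

N_outer, N_inner = 400, 200
cond_means, all_y = [], []
for _ in range(N_outer):
    choice, a = sample_recharge()                                 # fix the whole recharge process
    y = [output(recharge(choice, a), rng.lognormal(0.0, 0.5)) for _ in range(N_inner)]
    cond_means.append(np.mean(y))                                 # average over the geology process
    all_y.extend(y)

ps_recharge = np.var(cond_means) / np.var(all_y)                  # Var(E[Y | recharge]) / Var(Y)
print("process sensitivity index of recharge ~", round(float(ps_recharge), 2))
```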

  2. Modeling grinding processes as micro processes

    African Journals Online (AJOL)

    eobe

    into two parts: static specific chip formation energy and dynamic specific chip formation ... the ratio of static normal chip formation force to static tangential chip formation force and the ratio ... grinding processing parameters to the friction coefficient between workpiece and grinding wheel. From equation. (20), the calculation ...

  3. C-C1-02: Data Extraction From Text, Step 1: Preparing Test for Machine Processing

    Science.gov (United States)

    Carrell, David

    2010-01-01

    multi-step and sometimes iterative process.

  4. Model-based software process improvement

    Science.gov (United States)

    Zettervall, Brenda T.

    1994-01-01

    The activities of a field test site for the Software Engineering Institute's software process definition project are discussed. Products tested included the improvement model itself, descriptive modeling techniques, the CMM level 2 framework document, and the use of process definition guidelines and templates. The software process improvement model represents a five stage cyclic approach for organizational process improvement. The cycles consist of the initiating, diagnosing, establishing, acting, and leveraging phases.

  5. THE BC CRIBS & TRENCHES GEOPHYSICAL CHARACTERIZATION PROJECT ONE STEP FORWARD IN HANFORDS CLEANUP PROCESS

    Energy Technology Data Exchange (ETDEWEB)

    BENECKE, MN.W.

    2006-02-22

    A geophysical characterization project was conducted at the BC Cribs and Trenches Area, located south of 200 East at the Hanford Site. The area consists of 26 waste disposal trenches and cribs, which received approximately 30 million gallons of liquid waste from the uranium recovery process and the ferrocyanide processes associated with wastes generated by reprocessing nuclear fuel. Waste discharges to BC Cribs contributed perhaps the largest liquid fraction of contaminants to the ground in the 200 Areas. The site also includes possibly the largest inventory of Tc-99 ever disposed to the soil at Hanford with an estimated quantity of 400 Ci. Other waste constituents included high volumes of nitrate and U-238. The geophysical characterization at the 50 acre site primarily included high resolution resistivity (HRR). The resistivity technique is a non-invasive method by which electrical resistivity data are collected along linear transects, and data are presented as continuous profiles of subsurface electrical properties. The transects ranged in size from about 400-700 meters and provided information down to depths of 60 meters. The site was characterized by a network of 51 HRR lines with a total of approximately 19.7 line kilometers of data collected parallel and perpendicular to the trenches and cribs. The data were compiled to form a three-dimensional representation of low resistivity values. Low resistivity, or high conductivity, is indicative of high ionic strength soil and porewater resulting from the migration of nitrate and other inorganic constituents through the vadose zone. High spatial density soil data from a single borehole, that included coincident nitrate concentrations, electrical conductivity, and Tc-99, were used to transform the electrical resistivity data into a nitrate plume. The plume was shown to extend laterally beyond the original boundaries of the waste site and, in one area, to depths that exceeded the characterization strategy. It is
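
    The transform from resistivity to a nitrate plume described above amounts to calibrating a petrophysical relation on coincident borehole data and then applying it to the inverted resistivity volume. The sketch below shows one simple way such a calibration could look, using a log-log linear fit; the relation, noise level, grid and plume threshold are invented for illustration and may differ from the project's actual transform.

```python
# Sketch of the data-fusion idea only: fit a log-log relation between bulk resistivity
# and porewater nitrate on borehole data, then apply it to a 3-D resistivity model.
import numpy as np

rng = np.random.default_rng(6)

# synthetic "borehole" calibration data: low resistivity corresponds to high nitrate
res_bh = rng.uniform(5.0, 200.0, size=40)                       # ohm-m
nitrate_bh = 5e4 * res_bh ** -1.2 * rng.lognormal(0.0, 0.2, 40)  # mg/L, assumed relation + noise

# fit log10(nitrate) = a + b * log10(resistivity)
b, a = np.polyfit(np.log10(res_bh), np.log10(nitrate_bh), deg=1)

def nitrate_from_resistivity(res):
    return 10.0 ** (a + b * np.log10(res))

# apply to an inverted resistivity volume (here just a toy 3-D grid)
res_grid = rng.uniform(5.0, 200.0, size=(20, 20, 10))
nitrate_grid = nitrate_from_resistivity(res_grid)
plume = nitrate_grid > 500.0                                     # mg/L threshold, assumed
print("fitted exponent b =", round(float(b), 2), "; plume cells:", int(plume.sum()))
```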

  6. THE BC CRIBS & TRENCHES GEOPHYSICAL CHARACTERIZATION PROJECT ONE STEP FORWARD IN HANFORDS CLEANUP PROCESS

    Energy Technology Data Exchange (ETDEWEB)

    BENECKE, M.W.

    2005-11-17

    A geophysical characterization project was conducted at the BC Cribs and Trenches Area, located south of 200 East at the Hanford Site. The area consists of 26 waste disposal trenches and cribs, which received approximately 30 million gallons of liquid waste from the uranium recovery process and the ferrocyanide processes associated with wastes generated by reprocessing nuclear fuel. Waste discharges to BC Cribs contributed perhaps the largest liquid fraction of contaminants to the ground in the 200 Areas. The site also includes possibly the largest inventory of Tc-99 ever disposed to the soil at Hanford with an estimated quantity of 400 Ci. Other waste constituents included high volumes of nitrate and U-238. The geophysical characterization at the 50-acre site primarily included high resolution resistivity (HRR). The resistivity technique is a non-invasive method by which electrical resistivity data are collected along linear transects, and data are presented as continuous profiles of subsurface electrical properties. The transects ranged in size from about 400-700 meters and provided information down to depths of 60 meters. The site was characterized by a network of 51 HRR lines with a total of approximately 19.7 line kilometers of data collected parallel and perpendicular to the trenches and cribs. The data were compiled to form a three-dimensional representation of low resistivity values. Low resistivity, or high conductivity, is indicative of high ionic strength soil and porewater resulting from the migration of nitrate and other inorganic constituents through the vadose zone. High spatial density soil data from a single borehole, that included coincident nitrate concentrations, electrical conductivity, and Tc-99, were used to transform the electrical resistivity data into a nitrate plume. The plume was shown to extend laterally beyond the original boundaries of the waste site and, in one area, to depths that exceeded the characterization strategy.

  7. THE BC CRIBS AND TRENCHES GEOPHYSICAL CHARACTERIZATION PROJECT: ONE STEP FORWARD IN HANFORD'S CLEANUP PROCESS

    International Nuclear Information System (INIS)

    BENECKE, MN.W.

    2006-01-01

    A geophysical characterization project was conducted at the BC Cribs and Trenches Area, located south of 200 East at the Hanford Site. The area consists of 26 waste disposal trenches and cribs, which received approximately 30 million gallons of liquid waste from the uranium recovery process and the ferrocyanide processes associated with wastes generated by reprocessing nuclear fuel. Waste discharges to BC Cribs contributed perhaps the largest liquid fraction of contaminants to the ground in the 200 Areas. The site also includes possibly the largest inventory of Tc-99 ever disposed to the soil at Hanford with an estimated quantity of 400 Ci. Other waste constituents included high volumes of nitrate and U-238. The geophysical characterization at the 50 acre site primarily included high resolution resistivity (HRR). The resistivity technique is a non-invasive method by which electrical resistivity data are collected along linear transects, and data are presented as continuous profiles of subsurface electrical properties. The transects ranged in size from about 400-700 meters and provided information down to depths of 60 meters. The site was characterized by a network of 51 HRR lines with a total of approximately 19.7 line kilometers of data collected parallel and perpendicular to the trenches and cribs. The data were compiled to form a three-dimensional representation of low resistivity values. Low resistivity, or high conductivity, is indicative of high ionic strength soil and porewater resulting from the migration of nitrate and other inorganic constituents through the vadose zone. High spatial density soil data from a single borehole, that included coincident nitrate concentrations, electrical conductivity, and Tc-99, were used to transform the electrical resistivity data into a nitrate plume. The plume was shown to extend laterally beyond the original boundaries of the waste site and, in one area, to depths that exceeded the characterization strategy. It is

  8. One step phase separation process to fabricate superhydrophobic PVC films and its corrosion prevention for AZ91D magnesium alloy

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Na; Li, Jicheng; Bai, Ningning; Xu, Lan; Li, Qing, E-mail: liqingswu@163.com

    2016-07-15

    Highlights: • An independent superhydrophobic polyvinyl chloride (PVC) film was prepared by a phase separation process. • The superhydrophobic PVC film showed excellent stability in acid, alkali and salt corrosive solutions. • The film was prepared on a magnesium surface, protecting it from corrosion. • The method is simple and universal. - Abstract: A one-step, simple fabrication method to prepare an independent superhydrophobic polyvinyl chloride (PVC) coating is reported in this paper. The rough surface structure and low surface energy could be obtained simply by a phase separation process. The independent superhydrophobic PVC film was also applied to AZ91D magnesium alloy. Scanning electron microscopy (SEM), water contact angle measurements, electrochemical tests and adhesion tests were performed to characterize the surface morphology, wettability, anti-corrosion behaviour and adhesion strength of the independent PVC film and the superhydrophobic magnesium alloy, respectively. The results indicated that both the PVC film and the superhydrophobic magnesium show static contact angles higher than 150°, an excellent anti-corrosion effect and good adhesion strength. We believe that the presented method provides a straightforward and simple route to fabricate low-cost, anti-corrosion coatings on various substrate materials. Moreover, this one-step process may find application in industry because of its simplicity and universality.

  9. Modelling of Batch Process Operations

    DEFF Research Database (Denmark)

    Abdul Samad, Noor Asma Fazli; Cameron, Ian; Gani, Rafiqul

    2011-01-01

    Here a batch cooling crystalliser is modelled and simulated as is a batch distillation system. In the batch crystalliser four operational modes of the crystalliser are considered, namely: initial cooling, nucleation, crystal growth and product removal. A model generation procedure is shown that s...

  10. Mathematical Modeling: A Structured Process

    Science.gov (United States)

    Anhalt, Cynthia Oropesa; Cortez, Ricardo

    2015-01-01

    Mathematical modeling, in which students use mathematics to explain or interpret physical, social, or scientific phenomena, is an essential component of the high school curriculum. The Common Core State Standards for Mathematics (CCSSM) classify modeling as a K-12 standard for mathematical practice and as a conceptual category for high school…

  11. Introducing a Clustering Step in a Consensus Approach for the Scoring of Protein-Protein Docking Models

    KAUST Repository

    Chermak, Edrisse

    2016-11-15

    Correctly scoring protein-protein docking models to single out native-like ones is an open challenge. It is also an object of assessment in CAPRI (Critical Assessment of PRedicted Interactions), the community-wide blind docking experiment. We introduced in the field the first pure consensus method, CONSRANK, which ranks models based on their ability to match the most conserved contacts in the ensemble they belong to. In CAPRI, scorers are asked to evaluate a set of available models and select the top ten ones, based on their own scoring approach. Scorers' performance is ranked based on the number of targets/interfaces for which they could provide at least one correct solution. In such terms, blind testing in CAPRI Round 30 (a joint prediction round with CASP11) has shown that critical cases for CONSRANK are represented by targets showing multiple interfaces or for which only a very small number of correct solutions are available. To address these challenging cases, CONSRANK has now been modified to include a contact-based clustering of the models as a preliminary step of the scoring process. We used an agglomerative hierarchical clustering based on the number of common inter-residue contacts within the models. Two criteria, with different thresholds, were explored in the cluster generation, setting either the number of common contacts or of total clusters. For each clustering approach, after selecting the top (most populated) ten clusters, CONSRANK was run on these clusters and the top-ranked model for each cluster was selected, in the limit of 10 models per target. We have applied our modified scoring approach, Clust-CONSRANK, to SCORE_SET, a set of CAPRI scoring models made recently available by CAPRI assessors, and to the subset of homodimeric targets in CAPRI Round 30 for which CONSRANK failed to include a correct solution within the ten selected models. Results show that, for the challenging cases, the clustering step typically enriches the ten top ranked
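
    The clustering step added to CONSRANK can be illustrated independently of the scoring itself: models are clustered hierarchically on shared inter-residue contacts, the most populated clusters are retained, and one representative per cluster is selected. The sketch below uses a Jaccard-style contact distance and SciPy's 'maxclust' criterion as simplifying assumptions; the paper instead explores thresholds on the number of common contacts or on the number of clusters, and uses CONSRANK scores rather than the random scores used here.

```python
# Sketch of the clustering step only (not the CONSRANK scoring): hierarchical clustering
# of docking models on shared contacts, then one representative per populated cluster.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def contact_distance(models):
    """models: list of sets of inter-residue contacts; returns condensed distances."""
    n = len(models)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            common = len(models[i] & models[j])
            d[i, j] = d[j, i] = 1.0 - common / max(len(models[i] | models[j]), 1)
    return squareform(d)

def select_representatives(models, scores, n_clusters=10, top=10):
    labels = fcluster(linkage(contact_distance(models), method="average"),
                      t=n_clusters, criterion="maxclust")
    order = sorted(set(labels), key=lambda c: -(labels == c).sum())[:top]  # most populated first
    picks = []
    for c in order:
        members = np.flatnonzero(labels == c)
        picks.append(int(members[np.argmax(scores[members])]))   # best-scored model per cluster
    return picks

# toy data: 30 models, each a random set of contacts, with random "scores"
rng = np.random.default_rng(4)
models = [set(map(tuple, rng.integers(0, 20, size=(15, 2)))) for _ in range(30)]
scores = rng.random(30)
print("selected model indices:", select_representatives(models, scores))
```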

  12. Recruitment to a university alcohol program: evaluation of social marketing theory and stepped approach model.

    Science.gov (United States)

    Gries, J A; Black, D R; Coster, D C

    1995-07-01

    This study was a first initiative to evaluate the application of social marketing theory (SMT) to increase attendance at an alcohol abuse education program for university residence hall students and to ascertain whether aggressive recruitment strategies are necessary as part of the stepped approach model (SAM) of service delivery. SMT and public health strategies that include focus groups, in-depth interviews, and intercept interviews were used to develop recruitment materials in a Test Hall. These new recruitment materials were introduced to the residents in the Treatment Hall (N = 727) and were compared to the Usual Care, Control Hall (N = 706) which received the recruitment materials normally provided to residents as well as to three Historical Halls separately and combined which had used the Usual Care recruitment materials in the past. The Treatment Hall percentage attendance was significantly superior (0.001 marketing literature expectations. The projections for campus-wide attendance for residence hall students were between 207 and 243 participants and for nationwide attendance, 36,900 +/- 8,185. The results suggest that the SMT and public health methods used are helpful in developing recruitment strategies and are an important initial step of the SAM and that a "minimal intervention" recruitment strategy is a cost-effective approach that can have a dramatic impact.

  13. A two-step framework for over-threshold modelling of environmental extremes

    Science.gov (United States)

    Bernardara, P.; Mazas, F.; Kergadallan, X.; Hamm, L.

    2014-03-01

    The evaluation of the probability of occurrence of extreme natural events is important for the protection of urban areas, industrial facilities and others. Traditionally, extreme value theory (EVT) offers a valid theoretical framework on this topic. In an over-threshold modelling (OTM) approach, Pickands' theorem (Pickands, 1975) states that, for a sample composed of independent and identically distributed (i.i.d.) values, the distribution of the data exceeding a given threshold converges to a generalized Pareto distribution (GPD). Following this theoretical result, the analysis of realizations of environmental variables exceeding a threshold has spread widely in the literature. However, applying this theorem to an auto-correlated time series logically involves two successive and complementary steps: the first is required to build a sample of i.i.d. values from the available information, as required by the EVT; the second is to set the threshold for the optimal convergence toward the GPD. In the past, the same threshold was often employed both for sampling observations and for meeting the hypothesis of extreme value convergence. This confusion can lead to an erroneous understanding of the methodologies and tools available in the literature. This paper aims at clarifying the conceptual framework involved in threshold selection, reviewing the available methods for the application of both steps and illustrating them with a double threshold approach.
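
    A minimal sketch of the two-step logic, assuming a univariate time series: a first (physical) sampling threshold and a minimum separation are used to decluster the series into approximately i.i.d. event peaks, and a second (statistical) threshold is then used to fit the GPD to the excesses. The thresholds, the separation window and the synthetic data are illustrative, not the paper's recommendations.

```python
# Sketch of the two-step over-threshold logic: (1) decluster the series into
# (approximately) i.i.d. event peaks, (2) fit a GPD above a second threshold.
import numpy as np
from scipy.stats import genpareto


def decluster(series, sampling_threshold, min_separation):
    """Step 1: keep one peak per exceedance cluster (i.i.d. sample of events)."""
    peaks, current = [], []
    for i, x in enumerate(series):
        if x > sampling_threshold:
            current.append((i, x))
        elif current and i - current[-1][0] >= min_separation:
            peaks.append(max(v for _, v in current))
            current = []
    if current:
        peaks.append(max(v for _, v in current))
    return np.array(peaks)


def fit_gpd(peaks, statistical_threshold):
    """Step 2: fit a generalized Pareto distribution to the excesses."""
    excesses = peaks[peaks > statistical_threshold] - statistical_threshold
    shape, _, scale = genpareto.fit(excesses, floc=0.0)
    return shape, scale


# Example with synthetic data (for illustration only).
rng = np.random.default_rng(0)
series = rng.gumbel(loc=2.0, scale=1.0, size=10_000)
peaks = decluster(series, sampling_threshold=4.0, min_separation=3)
print(fit_gpd(peaks, statistical_threshold=5.0))
```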

  14. Toward a General Research Process for Using Dubin's Theory Building Model

    Science.gov (United States)

    Holton, Elwood F.; Lowe, Janis S.

    2007-01-01

    Dubin developed a widely used methodology for theory building, which describes the components of the theory building process. Unfortunately, he does not define a research process for implementing his theory building model. This article proposes a seven-step general research process for implementing Dubin's theory building model. An example of a…

  15. Modeling business processes: theoretical and practical aspects

    Directory of Open Access Journals (Sweden)

    V.V. Dubininа

    2015-06-01

    Full Text Available The article examines the essence of process-oriented enterprise management. Given the complexity and differentiation of existing methods, as well as the specific language and terminology of enterprise business process modeling, the content and types of the relevant information technologies are analyzed. The theoretical aspects of business process modeling are reviewed, and modern modeling techniques that have found practical application in visualizing the activity of retailers are studied. The theoretical analysis of modeling methods showed that the UFO-toolkit method, developed by Ukrainian scientists, is the most suitable for structural and object analysis of retailers' business processes owing to its integrated systemological capabilities. A visualized simulation model of the "as is" business process "sales" of retailers was designed using a combination of UFO elements, with the aim of further practical formalization and optimization of this business process.

  16. Highly Magneto-Responsive Elastomeric Films Created by a Two-Step Fabrication Process

    KAUST Repository

    Marchi, Sophie

    2015-08-24

    An innovative method for the preparation of elastomeric magnetic films with increased magneto-responsivity is presented. Polymeric films containing aligned magnetic microchains throughout their thickness are formed upon the magnetophoretic transport and assembly of microparticles during polymer curing. The obtained films are subsequently magnetized at a high magnetic field of 3 T directed parallel to the orientation of the microchains. We prove that the combination of both alignment of the particles along a favorable direction during curing and the subsequent magnetization of the solid films induces an impressive increase of the films’ deflection. Specifically, the displacements reach a few millimeters, up to 85 times higher than those of the nontreated films with the same particle concentration. Such a process can improve the performance of the magnetic films without increasing the amount of magnetic fillers and, thus, without compromising the mechanical properties of the resulting composites. The proposed method can be used for the fabrication of magnetic films suitable as components in systems in which large displacements at relatively low magnetic fields are required, such as sensors and drug delivery or microfluidic systems, especially where remote control of valves is needed to achieve appropriate flow and mixing of liquids.

  17. Modelling heat processing of dairy products

    NARCIS (Netherlands)

    Hotrum, N.; Fox, M.B.; Lieverloo, H.; Smit, E.; Jong, de P.; Schutyser, M.A.I.

    2010-01-01

    This chapter discusses the application of computer modelling to optimise the heat processing of milk. The chapter first reviews types of heat processing equipment used in the dairy industry. Then, the types of objectives that can be achieved using model-based process optimisation are discussed.

  18. How visual cognition influences process model comprehension

    NARCIS (Netherlands)

    Petrusel, Razvan; Mendling, Jan; Reijers, Hajo A.

    2017-01-01

    Process analysts and other professionals extensively use process models to analyze business processes and identify performance improvement opportunities. Therefore, it is important that such models can be easily and properly understood. Previous research has mainly focused on two types of factors

  19. Viability of probiotic Lactobacillus casei in yoghurt: defining the best processing step to its addition.

    Science.gov (United States)

    Bandiera, Nataly Simões; Carneiro, Isadora; da Silva, Alisson Santana; Honjoya, Edson Renato; de Santana, Elsa Helena Walter; Aragon-Alegro, Lina Casale; de Souza, Cínthia Hoch Batista

    2013-03-01

    Probiotics are live microorganisms capable of producing beneficial effects on their host when consumed in adequate amounts. To exert these effects, foods must contain probiotic microorganisms in populations above 10^6 CFU/g or mL throughout their shelf life. One of the strategies to ensure a high population of probiotics in fermented milk is to add them during or after the fermentation process, separately from the starter cultures. The objective of this study was to investigate the behavior of the probiotic microorganism Lactobacillus casei added to yoghurt at different stages of production. Yoghurts with L. casei were produced with the probiotic added at different stages: before addition of the starter (Streptococcus salivarius subsp. thermophilus and Lactobacillus delbrueckii subsp. bulgaricus), together with this culture, and at the end of fermentation. Yoghurt without added probiotic was produced as a control. The products were stored at 4 °C and analyzed after 1, 7, 14 and 21 days of storage. In these periods, the populations of probiotic and starter cultures were enumerated and the parameters pH and acidity were analyzed. The results were evaluated using analysis of variance and Tukey's test, both at the 5% significance level. L. casei remained viable in populations of more than 10^8 CFU/g during 21 days of storage, which is suitable to define the formulations as probiotic. When the different stages of addition of the probiotic to the yoghurts were evaluated, there was no statistical difference between the formulations at the 5% significance level for populations of L. casei, except for the first day of storage.

  20. Modeling process flow using diagrams

    OpenAIRE

    Kemper, B.; de Mast, J.; Mandjes, M.

    2010-01-01

    In the practice of process improvement, tools such as the flowchart, the value-stream map (VSM), and a variety of ad hoc variants of such diagrams are commonly used. The purpose of this paper is to present a clear, precise, and consistent framework for the use of such flow diagrams in process improvement projects. The paper finds that traditional diagrams, such as the flowchart, the VSM, and OR-type of diagrams, have severe limitations, miss certain elements, or are based on implicit but cons...

  1. Analytic observations for the d=1+1 bridge site (or single-step) deposition model

    International Nuclear Information System (INIS)

    Evans, J.W.; Kang, H.C.

    1991-01-01

    Some exact results for a reversible version of the d=1+1 bridge site (or single-step) deposition model are presented. Exact steady-state properties are determined directly for finite systems with various mean slopes. These show explicitly how the asymptotic growth velocity and fluctuations are quenched as the slope approaches its maximum allowed value. Next, exact hierarchical equations for the dynamics are presented. For the special case of "equilibrium growth," these are analyzed exactly at the pair-correlation level directly for an infinite system. This provides further insight into asymptotic scaling behavior. Finally, the above hierarchy is compared with one generated from a discrete form of the Kardar-Parisi-Zhang equation. Some differences are described.

  2. Finite element method for incompressible two-fluid model using a fractional step method

    International Nuclear Information System (INIS)

    Uchiyama, Tomomi

    1997-01-01

    This paper presents a finite element method for an incompressible two-fluid model. The solution algorithm is based on the fractional step method, which is frequently used in the finite element calculation for single-phase flows. The calculating domain is divided into quadrilateral elements with four nodes. The Galerkin method is applied to derive the finite element equations. Air-water two-phase flows around a square cylinder are calculated by the finite element method. The calculation demonstrates the close relation between the volumetric fraction of the gas-phase and the vortices shed from the cylinder, which is favorably compared with the existing data. It is also confirmed that the present method allows the calculation with less CPU time than the SMAC finite element method proposed in my previous paper. (author)

  3. Bayesian Inference for Step-Stress Partially Accelerated Competing Failure Model under Type II Progressive Censoring

    Directory of Open Access Journals (Sweden)

    Xiaolin Shi

    2016-01-01

    Full Text Available This paper deals with the Bayesian inference on step-stress partially accelerated life tests using Type II progressive censored data in the presence of competing failure causes. Suppose that the occurrence time of the failure cause follows a Pareto distribution under use stress levels. Based on the tampered failure rate model, the objective Bayesian estimates, Bayesian estimates, and E-Bayesian estimates of the unknown parameters and acceleration factor are obtained under the squared loss function. To evaluate the performance of the obtained estimates, the average relative errors (AREs) and mean squared errors (MSEs) are calculated. In addition, comparisons of the three estimates of the unknown parameters and acceleration factor for different sample sizes and different progressive censoring schemes are conducted through Monte Carlo simulations.
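
    The paper's step-stress, progressively censored setting is not reproduced here; as a much simpler, hedged illustration of the underlying idea, the sketch below computes the Bayes estimate under squared error loss (the posterior mean) for the shape of a complete Pareto sample with a conjugate gamma prior, and evaluates AREs and MSEs by Monte Carlo. All parameter values are illustrative.

```python
# Hedged illustration of evaluating a Bayes estimator (posterior mean, i.e. the
# estimator under squared error loss) by Monte Carlo, for the shape parameter
# of a Pareto lifetime with known scale and a conjugate Gamma(a, b) prior.
# This is a simplified stand-in for the paper's censored, step-stress setting.
import numpy as np

rng = np.random.default_rng(1)
alpha_true, x_m = 2.0, 1.0        # Pareto shape and (known) scale
a, b = 2.0, 1.0                   # Gamma prior hyperparameters (shape, rate)
n, reps = 30, 5_000               # sample size and Monte Carlo replications

errors = []
for _ in range(reps):
    x = x_m * (1.0 - rng.random(n)) ** (-1.0 / alpha_true)   # Pareto(alpha, x_m) sample
    t = np.sum(np.log(x / x_m))
    alpha_bayes = (a + n) / (b + t)      # posterior is Gamma(a+n, b+t); its mean is the Bayes estimate
    errors.append(alpha_bayes - alpha_true)

errors = np.asarray(errors)
print("ARE:", np.mean(np.abs(errors)) / alpha_true)   # average relative error
print("MSE:", np.mean(errors ** 2))                   # mean squared error
```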

  4. A model of hydrogen impact induced chemical erosion of carbon based on elementary reaction steps

    International Nuclear Information System (INIS)

    Wittmann, M.; Kueppers, J.

    1996-01-01

    Based on the elementary reaction steps for chemical erosion of carbon by hydrogen, a model is developed which allows calculation of the amount of carbon erosion at a hydrogenated carbon surface under the impact of hydrogen ions and neutrals. Hydrogen ion and neutral flux energy distributions prevailing at target plates in the ASDEX Upgrade experiment are chosen in the present calculation. The range of hydrogen particles in the target plates is calculated using the TRIDYN code. Based upon the TRIDYN results, the extent of the erosion reaction as a function of depth is estimated. The results show that both target temperature and impinging particle flux energy distribution determine the hydrogen flux density dependent erosion yield and the location of the erosion below the surface. (orig.)

  5. Prevention of Post-herpetic Neuralgia from Dream to Reality: A Ten-step Model.

    Science.gov (United States)

    Makharita, Mohamed Younis

    2017-02-01

    Herpes zoster (HZ) is a painful, blistering skin eruption in a dermatomal distribution caused by reactivation of a latent varicella zoster virus in the dorsal root ganglia (DRG). Post-herpetic neuralgia (PHN) is the most common complication of acute herpes zoster (AHZ). Severe prodrome, greater acute pain and dermatomal injury, and the density of the eruption are the risk factors and predictors for developing PHN. PHN has a substantial effect on the quality of life; many patients develop severe physical, occupational, social, and psychosocial disabilities as a result of the unceasing pain. The long-term suffering and the limited efficacy of the currently available medications can lead to drug dependency, hopelessness, depression, and even suicide. Family and society are also affected in terms of cost and lost productivity. The pathophysiology of PHN remains unclear. Viral reactivation in the dorsal root ganglion and its spread through the affected nerve result in severe ganglionitis and neuritis, which induce a profound sympathetic stimulation and vasoconstriction of the endoneural arterioles; this decreases the blood flow in the intraneural capillary bed, resulting in nerve ischemia. Our rationale is based on previous studies which have postulated that early interventions could reduce repetitive painful stimuli and prevent vasospasm of the endoneural arterioles during the acute phase of HZ. Hence, they might attenuate the central sensitization, prevent the ischemic nerve damage, and finally account for PHN prevention. The author introduces a new Ten-step Model for the prevention of PHN. The idea of this newly suggested approach is to increase the awareness of the health care team and the community about the nature of HZ and its complications, especially in the high-risk groups. In addition, it emphasizes the importance of prompt antiviral therapy and early sympathetic blockades for preventing PHN. Key words: Acute herpes zoster, prevention, post

  6. Steady-State Process Modelling

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    illustrate the “equation oriented” approach as well as the “sequential modular” approach to solving complex flowsheets for steady state applications. The applications include the Williams-Otto plant, the hydrodealkylation (HDA) of toluene, conversion of ethylene to ethanol and a bio-ethanol process....

  7. Double-step processes in the 12C(p,d)11C reaction at 45 MeV

    International Nuclear Information System (INIS)

    Couvert, Pierre.

    1974-01-01

    The ¹²C(p,d)¹¹C pick-up reaction was performed with a 45 MeV proton beam. A 130 keV energy resolution was obtained and angular distributions of nine of the first ten levels of ¹¹C have been extracted within a large angular range. Assuming only direct neutron transfer, the strong relative excitation of high-spin levels cannot be reproduced by a DWBA analysis. The double-step process assumption seems to be verified by a systematic analysis of the (p,d) reaction mechanisms. This analysis is done in the coupled-channel formalism for the five first negative-parity states of ¹¹C. The 3/2⁻ ground state is essentially populated by the direct transfer of a p3/2 neutron. The contribution of a double-step process, via the 2⁺ inelastic excitation of ¹²C, is important for the four other states. A mechanism which assumes deuteron inelastic scattering on the ¹¹C final nucleus after the neutron transfer cannot be neglected and improves the fits when it is taken into account [fr

  8. Improvement of Thermo-Mechanical Properties of Short Natural Fiber Reinforced Recycled Polypropylene Composites through Double Step Grafting Process

    Science.gov (United States)

    Saputra, O. A.; Rini, K. S.; Susanti, T. D.; Mustofa, R. E.; Prameswari, M. D.; Pramono, E.

    2017-07-01

    This study focused on the effect of the addition of a compatibilizer, maleic anhydride (MAH), on the mechanical, thermal and water absorption properties of oil palm empty fruit bunch (EFB) fiber reinforced recycled polypropylene (rPP) biocomposites. A two-step grafting process was conducted by incorporating MAH onto both rPP and EFB to improve the surface adhesion between these materials, resulting in good mechanical properties as well as biocompatibility. Chemical characterization was carried out using the FTIR (Fourier Transform Infra-Red) spectroscopy technique to evaluate the grafting process. Mechanical testing showed that the addition of 10 phr MAH to both rPP and EFB improved the mechanical strength of the biocomposites more than the other formulations. In this study, the thermal properties of the biocomposites were also characterized. Water absorption (WA) analysis showed that the presence of EFB fiber increased the water uptake of the material.

  9. Preparation of TiC/W core–shell structured powders by one-step activation and chemical reduction process

    International Nuclear Information System (INIS)

    Ding, Xiao-Yu; Luo, Lai-Ma; Huang, Li-Mei; Luo, Guang-Nan; Zhu, Xiao-Yong; Cheng, Ji-Gui; Wu, Yu-Cheng

    2015-01-01

    Highlights: • A novel wet chemical method was used to prepare TiC/W core–shell structure powders. • TiC nanoparticles were well-encapsulated by W shells. • TiC phase was present in the interior of tungsten grains. - Abstract: In the present study, a one-step activation and chemical reduction process was performed as a novel wet-chemical route for the preparation of TiC/W core–shell structured ultra-fine powders. The XRD, FE-SEM, TEM and EDS results demonstrated that the as-synthesized powders are of high purity and uniform, with a diameter of approximately 500 nm. It is also found that the TiC nanoparticles were well-encapsulated by W shells. Such a unique process suggests a new method for preparing X/W (where X refers to water-insoluble nanoparticles) core–shell nanoparticles with different cores.

  10. Process modeling of a HLA research lab

    Science.gov (United States)

    Ribeiro, Bruna G. C.; Sena, Alexandre C.; Silva, Dilson; Marzulo, Leandro A. J.

    2017-11-01

    Bioinformatics has provided tremendous breakthroughs in the field of molecular biology. All this evolution has generated a large volume of biological data that increasingly requires the use of computing for the analysis and storage of this information. The identification of human leukocyte antigen (HLA) genotypes is critical to the success of organ transplants in humans. HLA typing involves not only laboratory tests but also DNA sequencing, with the participation of several professionals responsible for different stages of the process. Thus, the objective of this paper is to map the main steps in HLA typing in a laboratory specialized in performing such procedures, analyzing each process and proposing solutions to speed up these steps and avoid mistakes.

  11. A generalized logarithmic image processing model based on the gigavision sensor model.

    Science.gov (United States)

    Deng, Guang

    2012-03-01

    The logarithmic image processing (LIP) model is a mathematical theory providing generalized linear operations for image processing. The gigavision sensor (GVS) is a new imaging device that can be described by a statistical model. In this paper, by studying these two seemingly unrelated models, we develop a generalized LIP (GLIP) model. With the LIP model being its special case, the GLIP model not only provides new insights into the LIP model but also defines new image representations and operations for solving general image processing problems that are not necessarily related to the GVS. A new parametric LIP model is also developed. To illustrate the application of the new scalar multiplication operation, we propose an energy-preserving algorithm for tone mapping, which is a necessary step in image dehazing. By comparing with results using two state-of-the-art algorithms, we show that the new scalar multiplication operation is an effective tool for tone mapping.
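
    For background, the classical LIP operations that the GLIP model generalizes can be sketched as follows for gray tones in [0, M); this is the standard Jourlin-Pinoli formulation, not the paper's GLIP model or its new scalar multiplication operation.

```python
# Classical LIP operations, sketched for gray tones in [0, M); the paper's GLIP
# model generalizes these and is not reproduced here.
import numpy as np

M = 256.0  # upper bound of the gray-tone range


def lip_add(f, g):
    """LIP addition: f (+) g = f + g - f*g/M (result stays inside [0, M))."""
    return f + g - f * g / M


def lip_scalar_mul(lam, f):
    """LIP scalar multiplication: lam (x) f = M - M*(1 - f/M)**lam."""
    return M - M * (1.0 - f / M) ** lam


# Toy usage: adjust an image-like array while preserving the bounded range.
img = np.linspace(0.0, 255.0, 5)
print(lip_add(img, 50.0))
print(lip_scalar_mul(0.5, img))   # lam < 1 acts like a tone-mapping curve
```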

  12. Pavement maintenance optimization model using Markov Decision Processes

    Science.gov (United States)

    Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.

    2017-09-01

    This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). Some particular characteristics of the MDP developed in this paper distinguish it from other similar studies or optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average-cost MDP method, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these aspects of stochastic optimization models for road network management motivates this study. This paper uses a data set acquired from the road authority of the state of Victoria, Australia, to test the model and recommends steps in the computation of the MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
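
    A hedged sketch of the dual-LP route to an average-cost MDP policy, with a constraint included directly in the formulation, is given below for a toy three-state pavement problem; the states, actions, transition matrices, costs and budget are hypothetical placeholders rather than the calibrated Victorian data used in the paper.

```python
# Hedged sketch: average-cost MDP for a toy pavement problem, solved through the
# dual LP over occupancy measures x(s, a), with a maintenance-budget constraint
# included directly in the formulation. All numbers below are hypothetical.
import numpy as np
from scipy.optimize import linprog

S, A = 3, 2                      # states: good/fair/poor; actions: do-nothing/repair
P = np.array([                   # P[a, s, s'] transition probabilities (illustrative)
    [[0.7, 0.3, 0.0], [0.0, 0.6, 0.4], [0.0, 0.0, 1.0]],   # do nothing: deterioration
    [[0.9, 0.1, 0.0], [0.8, 0.2, 0.0], [0.7, 0.2, 0.1]],   # repair: improvement
])
cost = np.array([[0.0, 5.0], [2.0, 6.0], [10.0, 8.0]])      # user + agency cost c(s, a)
maint = np.array([[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]])      # maintenance effort m(s, a)
budget = 0.4                                                # average effort allowed per period

c = cost.ravel()                              # objective: long-run average cost
A_eq = np.zeros((S + 1, S * A))
for j in range(S):                            # flow balance for every state j
    for s in range(S):
        for a in range(A):
            A_eq[j, s * A + a] = (1.0 if s == j else 0.0) - P[a, s, j]
A_eq[S, :] = 1.0                              # occupancy measures sum to one
b_eq = np.append(np.zeros(S), 1.0)
A_ub = maint.ravel()[None, :]                 # average maintenance-budget constraint
b_ub = [budget]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
x = res.x.reshape(S, A)
occupancy = x.sum(axis=1, keepdims=True)
# Randomized maintenance policy per state (rows with zero occupancy stay zero).
policy = np.divide(x, occupancy, out=np.zeros_like(x), where=occupancy > 1e-9)
print("average cost per period:", round(res.fun, 3))
print(policy.round(3))
```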

  13. Alcoholics Anonymous and twelve-step recovery: a model based on social and cognitive neuroscience.

    Science.gov (United States)

    Galanter, Marc

    2014-01-01

    In the course of achieving abstinence from alcohol, longstanding members of Alcoholics Anonymous (AA) typically experience a change in their addiction-related attitudes and behaviors. These changes are reflective of physiologically grounded mechanisms which can be investigated within the disciplines of social and cognitive neuroscience. This article is designed to examine recent findings associated with these disciplines that may shed light on the mechanisms underlying this change. Literature review and hypothesis development. Pertinent aspects of the neural impact of drugs of abuse are summarized. After this, research regarding specific brain sites, elucidated primarily by imaging techniques, is reviewed relative to the following: Mirroring and mentalizing are described in relation to experimentally modeled studies on empathy and mutuality, which may parallel the experiences of social interaction and influence on AA members. Integration and retrieval of memories acquired in a setting like AA are described, and are related to studies on storytelling, models of self-schema development, and value formation. A model for ascription to a Higher Power is presented. The phenomena associated with AA reflect greater complexity than the empirical studies on which this article is based, and certainly require further elucidation. Despite this substantial limitation in currently available findings, there is heuristic value in considering the relationship between the brain-based and clinical phenomena described here. There are opportunities for the study of neuroscientific correlates of Twelve-Step-based recovery, and these can potentially enhance our understanding of related clinical phenomena. © American Academy of Addiction Psychiatry.

  14. Heavy mesons in a simple quark-confining two-step potential model

    International Nuclear Information System (INIS)

    Kulshreshtha, D.S.; Kaushal, R.S.

    1980-10-01

    We study the mass spectra and decay widths of upsilon resonances in a simple quark-confining, analytically solvable, two-step potential model used earlier to study the charmonium system and even light mesons like π, ρ, K, etc. Results are found to be in good agreement with experiments and also with the values predicted by others. We also calculate, within our model, the masses of the lowest-lying bottom mesons, which we denote by B(π) or B, B(ρ) or B*, B(K) or B_s, B(K*) or B_s*, and B(ψ) or B_c, showing that these agree well with other theoretical predictions. In this way we put the B̄B threshold at 10.242 GeV, which means that in our model the first three radial upsilon excitations, viz. Y(9.4345), Y'(9.9930) and Y''(10.1988), are stable with respect to the Zweig-allowed decay Y'''... → B̄B. (author)

  15. A stochastic step model of replicative senescence explains ROS production rate in ageing cell populations.

    Directory of Open Access Journals (Sweden)

    Conor Lawless

    Full Text Available Increases in cellular Reactive Oxygen Species (ROS) concentration with age have been observed repeatedly in mammalian tissues. Concomitant increases in the proportion of replicatively senescent cells in ageing mammalian tissues have also been observed. Populations of mitotic human fibroblasts cultured in vitro, undergoing the transition from proliferation competence to replicative senescence, are useful models of ageing human tissues. Similar exponential increases in ROS with age have been observed in this model system. Tracking individual cells in dividing populations is difficult, and so the vast majority of observations have been cross-sectional, at the population level, rather than longitudinal observations of individual cells. One possible explanation for these observations is an exponential increase in ROS in individual fibroblasts with time (e.g. resulting from a vicious cycle between cellular ROS and damage). However, we demonstrate an alternative, simple hypothesis, equally consistent with these observations, which does not depend on any gradual increase in ROS concentration: the Stochastic Step Model of Replicative Senescence (SSMRS). We also demonstrate that, consistent with the SSMRS, neither proliferation-competent human fibroblasts of any age, nor populations of hTERT-overexpressing human fibroblasts passaged beyond the Hayflick limit, display high ROS concentrations. We conclude that longitudinal studies of single cells and their lineages are now required for testing hypotheses about roles and mechanisms of ROS increase during replicative senescence.
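
    The stochastic-step idea can be illustrated with a toy simulation (parameter values hypothetical, not the paper's fit): each proliferation-competent cell carries a fixed per-passage probability of switching abruptly and irreversibly to a high-ROS senescent state, and the population mean ROS then rises with passage number even though no individual cell ever increases its ROS gradually.

```python
# Toy illustration of the stochastic-step idea: no individual cell ramps up its
# ROS; instead each passage carries a fixed probability of an abrupt, one-step
# switch to a senescent, high-ROS state. Parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n_cells, passages = 10_000, 30
p_step = 0.05                 # per-passage probability of the senescence step
ros_low, ros_high = 1.0, 8.0  # arbitrary ROS units for competent / senescent cells

senescent = np.zeros(n_cells, dtype=bool)
mean_ros = []
for _ in range(passages):
    # Proliferation-competent cells may take the irreversible step this passage.
    senescent |= (~senescent) & (rng.random(n_cells) < p_step)
    mean_ros.append(np.mean(np.where(senescent, ros_high, ros_low)))

# Population-level ROS rises with passage number even though every individual
# cell only ever occupies one of two constant ROS levels.
print([round(v, 2) for v in mean_ros[::5]])
```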

  16. Numerical modelling of reflood processes

    International Nuclear Information System (INIS)

    Glynn, D.R.; Rhodes, N.; Tatchell, D.G.

    1983-01-01

    The use of a detailed computer model to investigate the effects of grid size and the choice of wall-to-fluid heat-transfer correlations on the predictions obtained for reflooding of a vertical heated channel is described. The model employs equations for the momentum and enthalpy of vapour and liquid and hence accounts for both thermal non-equilibrium and slip between the phases. Empirical correlations are used to calculate interphase and wall-to-fluid friction and heat-transfer as functions of flow regime and local conditions. The empirical formulae have remained fixed with the exception of the wall-to-fluid heat-transfer correlations. These have been varied according to the practices adopted in other computer codes used to model reflood, namely REFLUX, RELAP and TRAC. Calculations have been performed to predict the CSNI standard problem number 7, and the results are compared with experiment. It is shown that the results are substantially grid-independent, and that the choice of correlation has a significant influence on the general flow behaviour, the rate of quenching and on the maximum cladding temperature predicted by the model. It is concluded that good predictions of reflooding rates can be obtained with particular correlation sets. (author)

  17. Branching process models of cancer

    CERN Document Server

    Durrett, Richard

    2015-01-01

    This volume develops results on continuous time branching processes and applies them to study the rate of tumor growth, extending classic work on the Luria-Delbruck distribution. As a consequence, the authors calculate the probability that mutations that confer resistance to treatment are present at detection and quantify the extent of tumor heterogeneity. As applications, the authors evaluate ovarian cancer screening strategies and give rigorous proofs for results of Haeno and Michor concerning tumor metastasis. These notes should be accessible to students who are familiar with Poisson processes and continuous time Markov chains. Richard Durrett is a mathematics professor at Duke University, USA. He is the author of 8 books and over 200 journal articles, and has supervised more than 40 Ph.D. students. Most of his current research concerns the applications of probability to biology: ecology, genetics, and most recently cancer.

  18. An Iterative Ensemble Kalman Filter with One-Step-Ahead Smoothing for State-Parameters Estimation of Contaminant Transport Models

    KAUST Repository

    Gharamti, M. E.

    2015-05-11

    The ensemble Kalman filter (EnKF) is a popular method for state-parameters estimation of subsurface flow and transport models based on field measurements. The common filtering procedure is to directly update the state and parameters as one single vector, which is known as the Joint-EnKF. In this study, we follow the one-step-ahead smoothing formulation of the filtering problem to derive a new joint-based EnKF which involves a smoothing step of the state between two successive analysis steps. The new state-parameters estimation scheme is derived in a consistent Bayesian filtering framework and results in separate update steps for the state and the parameters. This new algorithm bears a strong resemblance to the Dual-EnKF, but unlike the latter, which first propagates the state with the model and then updates it with the new observation, the proposed scheme starts with an update step, followed by a model integration step. We exploit this new formulation of the joint filtering problem and propose an efficient model-integration-free iterative procedure on the update step of the parameters only for further improved performance. Numerical experiments are conducted with a two-dimensional synthetic subsurface transport model simulating the migration of a contaminant plume in a heterogeneous aquifer domain. Contaminant concentration data are assimilated to estimate both the contaminant state and the hydraulic conductivity field. Assimilation runs are performed under imperfect modeling conditions and various observational scenarios. Simulation results suggest that the proposed scheme efficiently recovers both the contaminant state and the aquifer conductivity, providing more accurate estimates than the standard Joint and Dual EnKFs in all tested scenarios. Iterating on the update step of the new scheme further enhances the proposed filter’s behavior. In terms of computational cost, the new Joint-EnKF is almost equivalent to that of the Dual-EnKF, but requires twice more model
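
    For orientation, a generic stochastic (perturbed-observation) EnKF analysis step on an augmented state-parameter ensemble is sketched below; this is textbook EnKF and deliberately does not reproduce the paper's one-step-ahead smoothing scheme with its separate state and parameter updates.

```python
# Minimal stochastic (perturbed-observation) EnKF analysis step on an augmented
# state-parameter ensemble. Generic textbook EnKF, not the paper's scheme.
import numpy as np


def enkf_update(ensemble, H, y, R, rng):
    """ensemble: (n_members, n_vars); H: (n_obs, n_vars); y: (n_obs,); R: (n_obs, n_obs)."""
    n_members = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)              # state anomalies
    HX = ensemble @ H.T                               # predicted observations
    HA = HX - HX.mean(axis=0)                         # observation anomalies

    Pf_Ht = X.T @ HA / (n_members - 1)                # P_f H^T
    S = HA.T @ HA / (n_members - 1) + R               # H P_f H^T + R
    K = Pf_Ht @ np.linalg.inv(S)                      # Kalman gain

    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n_members)
    return ensemble + (y_pert - HX) @ K.T             # analysis ensemble


# Toy usage: 2 state variables + 1 parameter, one observed component.
rng = np.random.default_rng(0)
ens = rng.normal(size=(50, 3))
H = np.array([[1.0, 0.0, 0.0]])
updated = enkf_update(ens, H, y=np.array([0.7]), R=np.eye(1) * 0.1, rng=rng)
print(updated.mean(axis=0))
```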

  19. A two-step enzymatic resolution process for large-scale production of (S)- and (R)-ethyl-3-hydroxybutyrate.

    Science.gov (United States)

    Fishman, A; Eroshov, M; Dee-Noor, S S; van Mil, J; Cogan, U; Effenberger, R

    2001-08-05

    An efficient two-step enzymatic process for production of (R)- and (S)-ethyl-3-hydroxybutyrate (HEB), two important chiral intermediates for the pharmaceutical market, was developed and scaled-up to a multikilogram scale. Both enantiomers were obtained at 99% chemical purity and over 96% enantiomeric excess, with a total process yield of 73%. The first reaction involved a solvent-free acetylation of racemic HEB with vinylacetate for the production of (S)-HEB. In the second reaction, (R)-enriched ethyl-3-acetoxybutyrate (AEB) was subjected to alcoholysis with ethanol to derive optically pure (R)-HEB. Immobilized Candida antarctica lipase B (CALB) was employed in both stages, with high productivity and selectivity. The type of butyric acid ester influenced the enantioselectivity of the enzyme. Thus, extending the ester alkyl chain from ethyl to octyl resulted in a decrease in enantiomeric excess, whereas using bulky groups such as benzyl or t-butyl, improved the enantioselectivity of the enzyme. A stirred reactor was found unsuitable for large-scale production due to attrition of the enzyme particles and, therefore, a batchwise loop reactor system was used for bench-scale production. The immobilized enzyme was confined to a column and the reactants were circulated through the enzyme bed until the targeted conversion was reached. The desired products were separated from the reaction mixture in each of the two stages by fractional distillation. The main features of the process are the exclusion of solvent (thus ensuring high process throughput), and the use of the same enzyme for both the acetylation and the alcoholysis steps. Kilogram quantities of (S)-HEB and (R)-HEB were effectively prepared using this unit, which can be easily scaled-up to produce industrial quantities. Copyright 2001 John Wiley & Sons, Inc.

  20. Coconut Model for Learning First Steps of Craniotomy Techniques and Cerebrospinal Fluid Leak Avoidance.

    Science.gov (United States)

    Drummond-Braga, Bernardo; Peleja, Sebastião Berquó; Macedo, Guaracy; Drummond, Carlos Roberto S A; Costa, Pollyana H V; Garcia-Zapata, Marco T; Oliveira, Marcelo Magaldi

    2016-12-01

    Neurosurgery simulation has gained attention recently due to changes in the medical system. First-year neurosurgical residents in low-income countries usually perform their first craniotomy on a real subject. Development of high-fidelity, cheap, and largely available simulators is a challenge in residency training. An original model for the first steps of craniotomy with cerebrospinal fluid leak avoidance practice using a coconut is described. The coconut is a drupe from Cocos nucifera L. (coconut tree). The green coconut has 4 layers, and some similarity can be seen between these layers and the human skull. The materials used in the simulation are the same as those used in the operating room. The coconut is placed on the head holder support with the face up. The burr holes are made until endocarp is reached. The mesocarp is dissected, and the conductor is passed from one hole to the other with the Gigli saw. The hook handle for the wire saw is positioned, and the mesocarp and endocarp are cut. After sawing the 4 margins, mesocarp is detached from endocarp. Four burr holes are made from endocarp to endosperm. Careful dissection of the endosperm is done, avoiding liquid albumen leak. The Gigli saw is passed through the trephine holes. Hooks are placed, and the endocarp is cut. After cutting the 4 margins, it is dissected from the endosperm and removed. The main goal of the procedure is to remove the endocarp without fluid leakage. The coconut model for learning the first steps of craniotomy and cerebrospinal fluid leak avoidance has some limitations. It is more realistic while trying to remove the endocarp without damage to the endosperm. It is also cheap and can be widely used in low-income countries. However, the coconut does not have anatomic landmarks. The mesocarp makes the model less realistic because it has fibers that make the procedure more difficult and different from a real craniotomy. The model has a potential pedagogic neurosurgical application for

  1. Discovering Process Reference Models from Process Variants Using Clustering Techniques

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, Andreas

    2008-01-01

    In today's dynamic business world, success of an enterprise increasingly depends on its ability to react to changes in a quick and flexible way. In response to this need, process-aware information systems (PAIS) emerged, which support the modeling, orchestration and monitoring of business processes

  2. Developmental Steps in Metaphorical Language Abilities: The Influence of Age, Gender, Cognitive Flexibility, Information Processing Speed, and Analogical Reasoning.

    Science.gov (United States)

    Willinger, Ulrike; Deckert, Matthias; Schmöger, Michaela; Schaunig-Busch, Ines; Formann, Anton K; Auff, Eduard

    2017-12-01

    Metaphor is a specific type of figurative language that is used in various important fields such as in the work with children in clinical or teaching contexts. The aim of the study was to investigate the developmental course, developmental steps, and possible cognitive predictors regarding metaphor processing in childhood and early adolescence. One hundred sixty-four typically developing children (7-year-olds, 9-year-olds) and early adolescents (11-year-olds) were tested for metaphor identification, comprehension, comprehension quality, and preference by the Metaphoric Triads Task as well as for analogical reasoning, information processing speed, cognitive flexibility under time pressure, and cognitive flexibility without time pressure. Metaphor identification and comprehension consecutively increased with age. Eleven-year-olds showed significantly higher metaphor comprehension quality and preference scores than seven- and nine-year-olds, whilst these younger age groups did not differ. Age, cognitive flexibility under time pressure, information processing speed, analogical reasoning, and cognitive flexibility without time pressure significantly predicted metaphor comprehension. Metaphorical language ability shows an ongoing development and seemingly changes qualitatively at the beginning of early adolescence. These results can possibly be explained by a greater synaptic reorganization in early adolescents. Furthermore, cognitive flexibility under time pressure and information processing speed possibly facilitate the ability to adapt metaphor processing strategies in a flexible, quick, and appropriate way.

  3. Systematic approach for the identification of process reference models

    CSIR Research Space (South Africa)

    Van Der Merwe, A

    2009-02-01

    Full Text Available Process models are used in different application domains to capture knowledge on the process flow. Process reference models (PRM) are used to capture reusable process models, which should simplify the identification process of process models...

  4. The Throw-and-Catch Model of Human Gait: Evidence from Coupling of Pre-Step Postural Activity and Step Location.

    Science.gov (United States)

    Bancroft, Matthew J; Day, Brian L

    2016-01-01

    Postural activity normally precedes the lift of a foot from the ground when taking a step, but its function is unclear. The throw-and-catch hypothesis of human gait proposes that the pre-step activity is organized to generate momentum for the body to fall ballistically along a specific trajectory during the step. The trajectory is appropriate for the stepping foot to land at its intended location while at the same time being optimally placed to catch the body and regain balance. The hypothesis therefore predicts a strong coupling between the pre-step activity and step location. Here we examine this coupling when stepping to visually-presented targets at different locations. Ten healthy, young subjects were instructed to step as accurately as possible onto targets placed in five locations that required either different step directions or different step lengths. In 75% of trials, the target location remained constant throughout the step. In the remaining 25% of trials, the intended step location was changed by making the target jump to a new location 96 ms ± 43 ms after initiation of the pre-step activity, long before foot lift. As predicted by the throw-and-catch hypothesis, when the target location remained constant, the pre-step activity led to body momentum at foot lift that was coupled to the intended step location. When the target location jumped, the pre-step activity was adjusted (median latency 223 ms) and prolonged (on average by 69 ms), which altered the body's momentum at foot lift according to where the target had moved. We conclude that whenever possible the coupling between the pre-step activity and the step location is maintained. This provides further support for the throw-and-catch hypothesis of human gait.

  5. Community Learning Process: A Model of Solid Waste Reduction and Separation

    OpenAIRE

    Jittree Pothimamaka

    2008-01-01

    The main purpose of this research was to study and develop an appropriate model of waste reduction and separation in the community under the community learning process. This is a research and development (R&D) study with mixed methodology consisting of four steps. Step One: Research was conducted to obtain information on solid waste disposal in Bang Sue District, Bangkok Metropolis, Thailand, employing group discussions with community members and data collection from the field. Step Two: The ...

  6. 2-D edge modelling: convergence and results for next step devices in the high recycling regime

    International Nuclear Information System (INIS)

    Pacher, H.D.; D'haeseleer, W.D.; Pacher, G.W.

    1992-01-01

    In the present work, we apply the Braams B-2 code to a next step device, with particular emphasis on convergence of the solutions and on the scaling of the peak power load per unit area on the divertor plate, f_p, and of the electron temperature T_e,p at the point of peak power load. A double null geometry is chosen (ITER as defined, 22 MA, 6 m). The input power P to one outer divertor is 0.4 of the total power to the SOL and 0.05 of the fusion power, and f_p is given without safety and peaking factors. Recycling is treated by an analytical model including atoms and molecules. The model is appropriate for the high recycling regime down to T_e,p ∼ 5-10 eV as long as impurity radiation can be neglected, but not beyond, since sideways neutral motion is not included. DT ionization and radiation losses are included. Only D-T ions are treated, but collision frequencies are corrected for impurities. Radial transport coefficients are uniform in space, with χ_e = 2 m²/s and D = χ_i = χ_e/3 = const for most of the cases, but Bohm-like scaling is also investigated. (author) 10 refs., 7 figs

  7. Two-step biocatalytic process using lipase and whole cell catalysts for biodiesel production from unrefined jatropha oil.

    Science.gov (United States)

    Zhou, Gui-xiong; Chen, Guan-yi; Yan, Bei-bei

    2015-10-01

    To avoid lipase deactivation by methanol in the enzymatic transesterification process, a two-step biocatalytic process for biodiesel production from unrefined jatropha oil was developed. Unrefined jatropha oil was first hydrolyzed to free fatty acids (FFAs) by the commercial enzyme Candida rugosa lipase. The maximum FFA yield achieved was 90.3% at 40 °C, water/oil ratio 0.75:1 (v/v), and lipase content 2% (w/w) after 8 h of reaction. After hydrolysis, the FFAs were separated and converted to biodiesel by using Rhizopus oryzae IFO4697 cells immobilized within biomass support particles as a whole-cell biocatalyst. Molecular sieves (3 Å) were added to the esterification reaction mixture to remove the byproduct water. The maximum fatty acid methyl ester yield reached 88.6% at 35 °C, molar ratio of methanol to FFAs 1.2:1, and molecular sieves (3 Å) content 60% (w/w) after 42 h. In addition, both the C. rugosa lipase and the R. oryzae whole-cell catalyst showed excellent reusability in the process, retaining 89 and 79% yields, respectively, even after six batches of reactions. This novel process, combining the advantages of enzyme and whole-cell catalysts, reduced the consumption of commercial enzyme and avoided enzyme deactivation by methanol.

  8. Integrated Process Modeling-A Process Validation Life Cycle Companion.

    Science.gov (United States)

    Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-17

    During the regulatory-requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications are shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows companies to anticipate out-of-specification (OOS) events, identify critical process parameters, and take risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
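
    The general mechanics can be sketched as follows, hedged: sample process parameters within their ranges, propagate an intermediate quality attribute through stacked unit operations, and count the fraction of simulated batches that fall out of specification. The transfer functions, parameter ranges and specification limit below are hypothetical placeholders, not the IPM of the characterization study.

```python
# Hedged sketch of the integrated-process-model idea: stack simple unit-operation
# transfer functions, Monte Carlo-sample process parameters, propagate a CQA
# through the chain and count out-of-specification (OOS) batches. All transfer
# functions, parameter ranges and specification limits below are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
n_batches = 100_000

# Process parameters sampled within their assumed normal operating ranges.
temp = rng.normal(37.0, 0.5, n_batches)   # fermentation temperature [degC]
ph = rng.normal(7.0, 0.1, n_batches)      # fermentation pH
load = rng.normal(25.0, 2.0, n_batches)   # chromatography column load [g/L]

# Unit operation 1 (fermentation): titer responds to temperature and pH.
titer = 5.0 - 0.8 * (temp - 37.0) ** 2 - 2.0 * (ph - 7.0) ** 2 \
        + rng.normal(0.0, 0.2, n_batches)

# Unit operation 2 (capture step): final purity depends on the column load and
# on the incoming titer, so the unit operations interact through the
# intermediate quality attribute rather than acting independently.
purity = 99.2 - 0.05 * (load - 25.0) - 0.6 * np.maximum(titer - 5.0, 0.0) \
         + rng.normal(0.0, 0.3, n_batches)

lower_spec_limit = 98.0                     # hypothetical specification for the CQA
oos_probability = np.mean(purity < lower_spec_limit)
print(f"predicted OOS probability: {oos_probability:.3%}")
```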

  9. Process modelling on a canonical basis

    Energy Technology Data Exchange (ETDEWEB)

    Siepmann, Volker

    2006-12-20

    Based on an equation oriented solving strategy, this thesis investigates a new approach to process modelling. Homogeneous thermodynamic state functions represent consistent mathematical models of thermodynamic properties. Such state functions of solely extensive canonical state variables are the basis of this work, as they are natural objective functions in optimisation nodes to calculate thermodynamic equilibrium regarding phase interaction and chemical reactions. Analytical state function derivatives are utilised within the solution process as well as interpreted as physical properties. By this approach, only a limited range of imaginable process constraints is considered, namely linear balance equations of state variables. A second-order update of source contributions to these balance equations is obtained by an additional constitutive equation system. These equations are generally dependent on state variables and first-order sensitivities, and therefore cover practically all potential process constraints. Symbolic computation technology efficiently provides sparsity and derivative information of active equations to avoid performance problems regarding robustness and computational effort. A benefit of detaching the constitutive equation system is that the structure of the main equation system remains unaffected by these constraints, and a priori information allows implementation of an efficient solving strategy and a concise error diagnosis. A tailor-made linear algebra library handles the sparse recursive block structures efficiently. The optimisation principle for single modules of thermodynamic equilibrium is extended to host entire process models. State variables of different modules interact through balance equations, representing material flows from one module to the other. To account for reusability and encapsulation of process module details, modular process modelling is supported by a recursive module structure. The second-order solving algorithm makes it

  10. Evaluation and optimisation of phenomenological multi-step soot model for spray combustion under diesel engine-like operating conditions

    Science.gov (United States)

    Pang, Kar Mun; Jangi, Mehdi; Bai, Xue-Song; Schramm, Jesper

    2015-05-01

    In this work, a two-dimensional computational fluid dynamics study of an n-heptane combustion event and the associated soot formation process in a constant volume combustion chamber is reported. The key interest here is to evaluate the sensitivity of the chemical kinetics and the submodels of a semi-empirical soot model in predicting the associated events. Numerical computation is performed using an open-source code, and a chemistry coordinate mapping approach is used to expedite the calculation. A library consisting of various phenomenological multi-step soot models is constructed and integrated with the spray combustion solver. Prior to the soot modelling, combustion simulations are carried out. Numerical results show that the ignition delay times and lift-off lengths exhibit good agreement with the experimental measurements across a wide range of operating conditions, apart from the cases with ambient temperatures lower than 850 K. The variation of the soot precursor production with respect to the change of ambient oxygen levels qualitatively agrees with that of the conceptual models when the skeletal n-heptane mechanism is integrated with a reduced pyrene chemistry. Subsequently, a comprehensive sensitivity analysis is carried out to appraise the existing soot formation and oxidation submodels. It is revealed that the soot formation is captured when the surface growth rate is calculated using a square root function of the soot specific surface area and when a pressure-dependent model constant is considered. An optimised soot model is then proposed based on the knowledge gained through this exercise. With the implementation of the optimised model, the simulated soot onset and transport phenomena before reaching quasi-steady state agree reasonably well with the experimental observations. Also, the variation of the spatial soot distribution and the soot mass produced at oxygen molar fractions ranging from 10.0 to 21.0% for both low and high density conditions is reproduced.

  11. Propagation of Uncertainty in Bayesian Kernel Models - Application to Multiple-Step Ahead Forecasting

    DEFF Research Database (Denmark)

    Quinonero, Joaquin; Girard, Agathe; Larsen, Jan

    2003-01-01

    The object of Bayesian modelling is the predictive distribution, which, in a forecasting scenario, enables evaluation of forecasted values and their uncertainties. We focus on reliably estimating the predictive mean and variance of forecasted values using Bayesian kernel-based models such as the Gaussian process and the relevance vector machine. We derive novel analytic expressions for the predictive mean and variance for Gaussian kernel shapes under the assumption of a Gaussian input distribution in the static case, and of a recursive Gaussian predictive density in iterative forecasting...
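
    As a point of reference, the naive iterative multiple-step-ahead forecast that such work improves upon can be sketched as follows: the one-step predictive mean is fed back as the next input and its variance is discarded, so the reported standard deviation understates the true k-step uncertainty. The data are synthetic, and scikit-learn's GP regressor is used purely for illustration.

```python
# Naive iterative multiple-step-ahead forecasting with a Gaussian process: the
# predictive mean is fed back as the next input while the input uncertainty is
# ignored, which is exactly the limitation that analytic propagation addresses.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Synthetic series and one-step lag embedding x_t -> x_{t+1}.
series = np.sin(0.3 * np.arange(200)) + 0.05 * rng.normal(size=200)
X, y = series[:-1, None], series[1:]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                              normalize_y=True).fit(X, y)

# k-step-ahead forecast by iterating one-step predictions on the mean only.
x, means, stds = series[-1], [], []
for _ in range(20):
    m, s = gp.predict(np.array([[x]]), return_std=True)
    means.append(float(m[0]))
    stds.append(float(s[0]))        # under-estimates the true k-step uncertainty
    x = float(m[0])                 # feed the mean back, discarding its variance

print(np.round(means, 3))
```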

  12. Fabrication of extremely thermal-stable GaN template on Mo substrate using double bonding and step annealing process

    Science.gov (United States)

    Qing, Wang; Yang, Liu; Yongjian, Sun; Yuzhen, Tong; Guoyi, Zhang

    2016-08-01

    A new layer transfer technique comprising double bonding and a step annealing process was utilized to transfer a GaN epilayer from a sapphire substrate to a Mo substrate. Combined with the application of a thermally stable bonding medium, the resulting two-inch-diameter GaN template showed extremely good stability at high temperature and in a low-stress state. Moreover, no cracks or wrinkles were observed. The transferred GaN template was suitable for homoepitaxial growth and thus could be used for the direct fabrication of vertical LED chips as well as power electronic devices. It has been confirmed that the double bonding and step annealing technique, together with the thermally stable bonding layer, could significantly improve the bonding strength and stress relief, finally enhancing the thermal stability of the transferred GaN template. Project supported by the Guangdong Innovative Research Team Program (No. 2009010044), the China Postdoctoral Science Foundation (No. 2014M562233), the National Natural Science Foundation of Guangdong, China (No. 2015A030312011), and the Opened Fund of the State Key Laboratory on Integrated Optoelectronics (No. IOSKL2014KF17).

  13. First steps in translating human cognitive processes of cane pruning grapevines into AI rules for automated robotic pruning

    Directory of Open Access Journals (Sweden)

    Saxton Valerie

    2014-01-01

    Full Text Available Cane pruning of grapevines is a skilled task for which, internationally, there is a dire shortage of human pruners. As part of a larger project developing an automated robotic pruner, we have used artificial intelligence (AI) algorithms to create an expert system for selecting new canes and cutting off unwanted canes. A domain and ontology have been created for the AI, reflecting the expertise of expert human pruners. The first step in the creation of the expert system was to generate virtual vines, which were then ‘pruned’ by human pruners and also by the expert system in its infancy. Here we examined the decisions of 12 human pruners, for consistency of decision, on 60 virtual vines. 96.7% of the 12 pruners agreed on at least one cane choice, after which there was diminishing agreement on which further canes to select for laying. Our results indicate that techniques developed in computational intelligence can be used to co-ordinate and synthesise the expertise of human pruners into a best-practice format. This paper describes the first steps in this knowledge elicitation process, and discusses the fit between cane pruning expertise and the expertise that can be elicited using AI-based expert system techniques.

  14. Superthermostability of nanoscale TiC-reinforced copper alloys manufactured by a two-step ball-milling process

    Science.gov (United States)

    Wang, Fenglin; Li, Yunping; Xu, Xiandong; Koizumi, Yuichiro; Yamanaka, Kenta; Bian, Huakang; Chiba, Akihiko

    2015-12-01

    A Cu-TiC alloy, with nanoscale TiC particles highly dispersed in the submicron-grained Cu matrix, was manufactured by a self-developed two-step ball-milling process applied to Cu, Ti and C powders. The thermostability of the composite was evaluated by high-temperature isothermal annealing treatments, with temperatures ranging from 727 to 1273 K. The nanoscale TiC particles, semicoherent with the Cu matrix and mainly located along the grain boundaries, were found to exhibit the promising trait of blocking grain boundary migration, which leads to a super-stabilized microstructure up to approximately the melting point of copper (1223 K). Furthermore, the Cu-TiC alloys after annealing at 1323 K showed a slight decrease in Vickers hardness as well as a duplex microstructure due to selective grain growth, which are discussed in terms of hardness contributions from various mechanisms.

  15. One-step spray-coating process for the fabrication of colorful superhydrophobic coatings with excellent corrosion resistance.

    Science.gov (United States)

    Li, Jian; Wu, Runni; Jing, Zhijiao; Yan, Long; Zha, Fei; Lei, Ziqiang

    2015-10-06

    A simple method was used to generate colorful hydrophobic stearate particles via chemical reactions between inorganic salts and sodium stearate. Colored self-cleaning superhydrophobic coatings were prepared through a facile one-step spray-coating process by spraying the stearate particle suspensions onto stainless steel substrates. Furthermore, the colorful superhydrophobic coating maintains excellent chemical stability under both harsh acidic and alkaline circumstances. After being immersed in a 3.5 wt % NaCl aqueous solution for 1 month, the as-prepared coatings remained superhydrophobic; however, they lost their self-cleaning property with a sliding angle of about 46 ± 3°. The corrosion behavior of the superhydrophobic coatings on the Al substrate was characterized by the polarization curve and electrochemical impedance spectroscopy (EIS). The electrochemical corrosion test results indicated that the superhydrophobic coatings possessed excellent corrosion resistance, which could supply efficient and long-term preservation for the bare Al substrate.

  16. Measures of Quality in Business Process Modelling

    Directory of Open Access Journals (Sweden)

    Radek Hronza

    2015-06-01

    Full Text Available Business process modelling and analysis is undoubtedly one of the most important parts of Applied (Business) Informatics. The quality of business process models (diagrams) is crucial for any purpose in this area. The goal of a process analyst’s work is to create generally understandable, explicit and error-free models. If a process is properly described, the created models can be used as an input into deep analysis and optimization. It can be assumed that properly designed business process models (similarly to correctly written algorithms) contain characteristics that can be mathematically described, and that it should therefore be possible to create a tool that helps process analysts to design proper models. As part of this review, a systematic literature review was conducted in order to find and analyse measures of business process model design and quality. It was found that the mentioned area had already been the subject of research investigation in the past. Thirty-three suitable scientific publications and twenty-two quality measures were found. The analysed scientific publications and existing quality measures do not reflect all important attributes of business process model clarity, simplicity and completeness. Therefore it would be appropriate to add new measures of quality.
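
    Measures of the kind surveyed here are typically computed directly on the model graph. As a purely illustrative sketch (the node types, the size measure and the CFC-style complexity measure below are common proposals from the literature, assumed here for illustration and not taken from this paper):

    # Illustrative sketch: two simple quality measures computed on a BPMN-like
    # model represented as a list of nodes with a type and a fan-out count.
    # The metric definitions (size, control-flow complexity) are assumptions
    # based on commonly cited proposals, not measures taken from the paper.
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        kind: str        # "task", "xor", "or", "and", "event"
        fan_out: int     # number of outgoing sequence flows

    def size(model):
        """Size = number of nodes in the diagram."""
        return len(model)

    def control_flow_complexity(model):
        """CFC-style measure: an XOR split adds its fan-out, an OR split adds
        2**fan_out - 1, an AND split adds 1; only splits (fan_out > 1) count."""
        cfc = 0
        for n in model:
            if n.fan_out <= 1:
                continue
            if n.kind == "xor":
                cfc += n.fan_out
            elif n.kind == "or":
                cfc += 2 ** n.fan_out - 1
            elif n.kind == "and":
                cfc += 1
        return cfc

    toy = [Node("receive order", "task", 1), Node("check stock", "xor", 2),
           Node("pick and pack", "and", 2), Node("ship", "task", 1)]
    print(size(toy), control_flow_complexity(toy))   # -> 4 3

    Measures of this kind are cheap to compute, which is what makes it feasible to flag overly complex diagrams automatically during model review.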

  17. The Use of an Eight-Step Instructional Model to Train School Staff in Partner-Augmented Input

    Science.gov (United States)

    Senner, Jill E.; Baud, Matthew R.

    2017-01-01

    An eight-step instruction model was used to train a self-contained classroom teacher, speech-language pathologist, and two instructional assistants in partner-augmented input, a modeling strategy for teaching augmentative and alternative communication use. With the exception of a 2-hr training session, instruction primarily was conducted during…

  18. A Two-Step Hybrid Approach for Modeling the Nonlinear Dynamic Response of Piezoelectric Energy Harvesters

    Directory of Open Access Journals (Sweden)

    Claudio Maruccio

    2018-01-01

    Full Text Available An effective hybrid computational framework is described here in order to assess the nonlinear dynamic response of piezoelectric energy harvesting devices. The proposed strategy basically consists of two steps. First, fully coupled multiphysics finite element (FE analyses are performed to evaluate the nonlinear static response of the device. An enhanced reduced-order model is then derived, where the global dynamic response is formulated in the state-space using lumped coefficients enriched with the information derived from the FE simulations. The electromechanical response of piezoelectric beams under forced vibrations is studied by means of the proposed approach, which is also validated by comparing numerical predictions with some experimental results. Such numerical and experimental investigations have been carried out with the main aim of studying the influence of material and geometrical parameters on the global nonlinear response. The advantage of the presented approach is that the overall computational and experimental efforts are significantly reduced while preserving a satisfactory accuracy in the assessment of the global behavior.
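
    The record does not reproduce the reduced-order model itself; as background, a commonly used single-degree-of-freedom electromechanical formulation for a base-excited piezoelectric harvester with a resistive load (an assumed textbook form, not necessarily the authors' enriched model) is

        m\ddot{x} + c\dot{x} + kx + \theta v = -m\ddot{y},
        \qquad
        C_p\dot{v} + \frac{v}{R} = \theta\dot{x},

    where x is the displacement of the beam tip relative to the base y, v is the voltage across the load resistance R, \theta is the electromechanical coupling coefficient and C_p the piezoelectric capacitance. Writing this in state-space form with state (x, \dot{x}, v) gives exactly the kind of lumped-coefficient model that the fully coupled FE analyses can then enrich, for example by replacing kx with a nonlinear stiffness identified from the static FE response.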

  19. The Palliser Rockslide, Canadian Rocky Mountains: Characterization and modeling of a stepped failure surface

    Science.gov (United States)

    Sturzenegger, M.; Stead, D.

    2012-02-01

    This paper presents the results of an investigation of the prehistoric Palliser Rockslide, Rocky Mountains, Canada. Conventional aerial photograph interpretation and field mapping are complemented by terrestrial digital photogrammetry. These techniques allow quantification of the rockslide debris volume and reconstruction of the pre-slide topography. It has been estimated that the volume of rock involved in the most recent large rockslide is 8 Mm³. Terrestrial digital photogrammetry is used in the characterization of the failure surface morphology, which is subdivided into four types of step-path geometry comprising both pre-existing discontinuities and intact rock fractures. Incorporation of these data into various rock slope stability numerical modeling methods highlights a complex failure mechanism, which includes sliding along a large-scale curved failure surface, intact rock bridge fracturing and lateral confinement. A preliminary quantification of the contribution of intact rock bridges to the shear strength of the failure surface is presented in terms of the apparent cohesion, apparent tensile strength and cumulative length of the intact rock segments.
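
    The record does not state the relation used for the apparent cohesion; a classical way of expressing the strength contribution of intact rock bridges along a step-path surface (a Jennings-type weighted average, quoted here as background rather than as the authors' exact formulation) is

        \tau = (1-k)\,(c_r + \sigma_n\tan\phi_r) + k\,(c_j + \sigma_n\tan\phi_j),
        \qquad
        k = \frac{\sum L_j}{\sum L_j + \sum L_r},

    so that the apparent cohesion of the composite surface is c_a = (1-k)c_r + k c_j, where the subscripts r and j refer to intact rock bridges and joints, \sigma_n is the normal stress and k is the joint persistence (cumulative joint length divided by total length along the surface). The cumulative length of intact rock segments reported in the paper enters such an expression through (1-k).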

  20. Step-Growth Polymerization.

    Science.gov (United States)

    Stille, J. K.

    1981-01-01

    Following a comparison of chain-growth and step-growth polymerization, focuses on the latter process by describing requirements for high molecular weight, step-growth polymerization kinetics, synthesis and molecular weight distribution of some linear step-growth polymers, and three-dimensional network step-growth polymers. (JN)
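
    The step-growth kinetics referred to are usually summarized by the Carothers relation between functional-group conversion p and chain length; as a standard textbook illustration (not specific to this article),

        \bar{X}_n = \frac{1}{1-p}, \qquad \bar{X}_w = \frac{1+p}{1-p}, \qquad \frac{\bar{X}_w}{\bar{X}_n} = 1+p,

    so a number-average degree of polymerization of 100 already requires p = 0.99, i.e. 99% conversion of the functional groups, which is why exact stoichiometry and very pure monomers are the practical requirements for high molecular weight. With a stoichiometric imbalance r between the two functional groups, \bar{X}_n = (1+r)/(1+r-2rp).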

  1. Modelling income processes with lots of heterogeneity

    DEFF Research Database (Denmark)

    Browning, Martin; Ejrnæs, Mette; Alvarez, Javier

    2010-01-01

    We model earnings processes allowing for lots of heterogeneity across agents. We also introduce an extension to the linear ARMA model which allows the initial convergence in the long run to be different from that implied by the conventional ARMA model. This is particularly important for unit root...
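
    The record is truncated; purely to illustrate what "lots of heterogeneity" can mean in practice, a simulation sketch in which persistence, shock variance and fixed effects are all agent-specific (the AR(1)-type specification and all distributions below are illustrative assumptions, not the authors' model):

    # Illustrative sketch: earnings panel with agent-specific (heterogeneous)
    # parameters. The AR(1)-type specification and the parameter distributions
    # are assumptions for illustration only, not the model of the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_earnings(n_agents=1000, n_periods=30):
        alpha = rng.normal(0.0, 0.3, n_agents)      # individual fixed effects
        rho   = rng.uniform(0.4, 0.99, n_agents)    # individual persistence
        sigma = rng.uniform(0.05, 0.3, n_agents)    # individual shock s.d.
        y = np.zeros((n_agents, n_periods))
        y[:, 0] = alpha + rng.normal(0.0, sigma)    # initial condition
        for t in range(1, n_periods):
            y[:, t] = alpha * (1 - rho) + rho * y[:, t - 1] + rng.normal(0.0, sigma)
        return y

    panel = simulate_earnings()
    print(panel.shape, round(panel[:, -1].std(), 3))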

  2. Counting Processes for Retail Default Modeling

    DEFF Research Database (Denmark)

    Kiefer, Nicholas Maximilian; Larson, C. Erik

    in a discrete state space. In a simple case, the states could be default/non-default; in other models relevant for credit modeling the states could be credit scores or payment status (30 dpd, 60 dpd, etc.). Here we focus on the use of stochastic counting processes for mortgage default modeling, using data...
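
    As a minimal sketch of the occurrence/exposure logic that underlies counting-process models of account-state transitions (the states, counts and exposures below are invented for illustration):

    # Illustrative sketch: crude transition intensities for delinquency states
    # estimated as observed transition counts divided by time-at-risk (exposure).
    # All states and numbers are invented for illustration.
    from collections import defaultdict

    # (from_state, to_state) -> number of observed transitions
    counts = {("current", "30dpd"): 120, ("30dpd", "current"): 80,
              ("30dpd", "60dpd"): 25, ("60dpd", "default"): 10}

    # state -> total account-years spent in that state
    exposure = {"current": 5000.0, "30dpd": 400.0, "60dpd": 60.0}

    intensities = defaultdict(float)
    for (src, dst), n in counts.items():
        intensities[(src, dst)] = n / exposure[src]   # occurrence/exposure rate

    for pair, lam in sorted(intensities.items()):
        print(pair, round(lam, 4))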

  3. Distillation modeling for a uranium refining process

    International Nuclear Information System (INIS)

    Westphal, B.R.

    1996-01-01

    As part of the spent fuel treatment program at Argonne National Laboratory, a vacuum distillation process is being employed for the recovery of uranium following an electrorefining process. Distillation of a salt electrolyte, containing a eutectic mixture of lithium and potassium chlorides, from uranium is achieved by a simple batch operation and is termed “cathode processing”. The incremental distillation of electrolyte salt will be modeled by an equilibrium expression and on a molecular basis since the operation is conducted under moderate vacuum conditions. As processing continues, the two models will be compared and analyzed for correlation with actual operating results. Possible factors that may contribute to aberrations from the models include impurities at the vapor-liquid boundary, distillate reflux, anomalous pressure gradients, and mass transport phenomena at the evaporating surface. Ultimately, the purpose of either process model is to enable the parametric optimization of the process.
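
    The "molecular basis" mentioned for distillation under vacuum is commonly expressed with a Hertz–Knudsen-type evaporation flux (quoted here as general background; the report's own formulation may differ):

        J \;=\; \frac{\alpha_e\,\bigl(P_{\mathrm{sat}}(T) - P\bigr)}{\sqrt{2\pi M R T}} \quad [\mathrm{mol\ m^{-2}\ s^{-1}}],

    where \alpha_e is an evaporation coefficient, P_sat(T) is the equilibrium vapour pressure of the salt at the melt temperature, P the pressure above the melt and M the molar mass. The equilibrium-expression alternative instead assumes that the vapour leaving the surface is saturated at the melt temperature, and comparing the two descriptions against the actual cathode-processing results is precisely the comparison the report proposes.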

  4. Piecewise deterministic processes in biological models

    CERN Document Server

    Rudnicki, Ryszard

    2017-01-01

    This book presents a concise introduction to piecewise deterministic Markov processes (PDMPs), with particular emphasis on their applications to biological models. Further, it presents examples of biological phenomena, such as gene activity and population growth, where different types of PDMPs appear: continuous time Markov chains, deterministic processes with jumps, processes with switching dynamics, and point processes. Subsequent chapters present the necessary tools from the theory of stochastic processes and semigroups of linear operators, as well as theoretical results concerning the long-time behaviour of stochastic semigroups induced by PDMPs and their applications to biological models. As such, the book offers a valuable resource for mathematicians and biologists alike. The first group will find new biological models that lead to interesting and often new mathematical questions, while the second can observe how to include seemingly disparate biological processes into a unified mathematical theory, and...

  5. Modeling closed nuclear fuel cycles processes

    Energy Technology Data Exchange (ETDEWEB)

    Shmidt, O.V. [A.A. Bochvar All-Russian Scientific Research Institute for Inorganic Materials, Rogova, 5a street, Moscow, 123098 (Russian Federation); Makeeva, I.R. [Zababakhin All-Russian Scientific Research Institute of Technical Physics, Vasiliev street 13, Snezhinsk, Chelyabinsk region, 456770 (Russian Federation); Liventsov, S.N. [Tomsk Polytechnic University, Tomsk, Lenin Avenue, 30, 634050 (Russian Federation)

    2016-07-01

    Computer models of processes are necessary for determining optimal operating conditions for closed nuclear fuel cycle (NFC) processes, and they can be updated quickly as new experimental data become available. Three kinds of process simulation are needed. First, the VIZART software package is used to develop balance models for calculating material flow in technological processes, taking into account equipment capacity, transport lines and storage volumes. Secondly, it is necessary to simulate the physico-chemical processes involved in the closure of the NFC. The third kind of simulation is the development of software for the optimization, diagnostics and control of the processes, which implies real-time simulation of product flows across the whole plant or on separate lines of the plant. (A.C.)

  6. Efficient transformation of use case main success scenario steps into business object relation (BORM) diagrams for effective business process requirement analysis

    Czech Academy of Sciences Publication Activity Database

    Podaras, A.; Moravec, J.; Papík, Martin

    2012-01-01

    Roč. 2, č. 1 (2012), s. 86-88 ISSN 1804-7890 Institutional research plan: CEZ:AV0Z10750506 Keywords : Business process requirement Analysis * UCBTA Algorithm * UCBTA Transition Rules * Use Case Main Success Scenario Steps * BORM Diagrams Subject RIV: IN - Informatics, Computer Science http://library.utia.cas.cz/separaty/2012/ZOI/papik-efficient transformation of use case main success scenario steps into bussiness object relation (borm) diagrams for effective bussiness process requirement analysis.pdf

  7. Decomposition of business process models into reusable sub-diagrams

    Directory of Open Access Journals (Sweden)

    Wiśniewski Piotr

    2017-01-01

    Full Text Available In this paper, an approach to automatic decomposition of business process models is proposed. According to our method, an existing BPMN diagram is disassembled into reusable parts containing the desired number of elements. Such elements and structures can work as design patterns and be validated by a user in terms of correctness. In the next step, these component models are categorised considering their parameters, such as resources used, as well as input and output data. The classified components may be considered a repository of reusable parts that can be further applied in the design of new models. The proposed technique may play a significant role in facilitating the business process redesign procedure, which is of great importance for engineering and industrial applications.

  8. Improving Genetic Evaluation of Litter Size Using a Single-step Model

    DEFF Research Database (Denmark)

    Guo, Xiangyu; Christensen, Ole Fredslund; Ostersen, Tage

    A recently developed single-step method allows genetic evaluation based on information from phenotypes, pedigree and markers simultaneously. This paper compared reliabilities of predicted breeding values obtained from single-step method and the traditional pedigree-based method for two litter size...... traits, total number of piglets born (TNB), and litter size at five days after birth (Ls 5) in Danish Landrace and Yorkshire pigs. The results showed that the single-step method combining phenotypic and genotypic information provided more accurate predictions than the pedigree-based method, not only...
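
    The single-step method referred to combines pedigree and genomic information through a single relationship matrix; its standard form in the literature (general background, not reproduced from this abstract) is

        \mathbf{H}^{-1} \;=\; \mathbf{A}^{-1} +
        \begin{pmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{G}^{-1} - \mathbf{A}_{22}^{-1} \end{pmatrix},

    where A is the pedigree-based relationship matrix, G the genomic relationship matrix of the genotyped animals and A_22 the pedigree relationships among the genotyped animals. H simply replaces A in the usual mixed-model equations, so phenotyped, pedigreed and genotyped animals are evaluated simultaneously, which is why reliabilities improve relative to the pedigree-only evaluation.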

  9. Mathematical Modelling of Coal Gasification Processes

    Science.gov (United States)

    Sundararajan, T.; Raghavan, V.; Ajilkumar, A.; Vijay Kumar, K.

    2017-07-01

    Experimental and modelling work has been undertaken to investigate the gasification characteristics of high-ash Indian coals and compare the yield with those of high-grade Australian and Japanese coals. A 20 kW capacity entrained flow gasifier has been constructed and the gasification characteristics have been studied for Indian coals for different particle sizes, system pressures and air flow rates. The theoretical model incorporates the effects of Knudsen diffusion, devolatilization and various heterogeneous and homogeneous kinetic steps as well as two-phase flow interactions involving the gaseous and particle phases. Output parameters such as carbon conversion, cold gas efficiency and syngas composition have been compared for different grades of coals under a wide range of operating conditions. The model developed for the entrained flow gasifier predicts the gasification characteristics of both Indian and foreign coals well. Apart from the entrained flow gasifier, a bubbling bed gasifier of 100 kW capacity has also been studied. A pilot plant for the gasification of Indian coals has been set up for this capacity and its performance has been investigated experimentally as well as theoretically at different air and steam flow rates. A carbon conversion efficiency of more than 80% has been achieved.
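
    The heterogeneous and homogeneous kinetic steps referred to normally include the standard gasification reactions, listed here as textbook background rather than as the exact reaction scheme of the model:

        C + 1/2 O2  ->  CO
        C + H2O     ->  CO + H2        (steam gasification)
        C + CO2     ->  2 CO           (Boudouard)
        CO + H2O   <=>  CO2 + H2       (water-gas shift)
        CO + 1/2 O2 ->  CO2

    with the cold gas efficiency usually defined as the heating value of the product syngas divided by the heating value of the coal fed to the gasifier.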

  10. Cintichem modified process - ⁹⁹Mo precipitation step: application of statistical analysis tools over the reaction parameters

    Energy Technology Data Exchange (ETDEWEB)

    Teodoro, Rodrigo; Dias, Carla R.B.R.; Osso Junior, Joao A., E-mail: jaosso@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Fernandez Nunez, Eutimio Gustavo [Universidade de Sao Paulo (EP/USP), SP (Brazil). Escola Politecnica. Dept. de Engenharia Quimica

    2011-07-01

    Precipitation of ⁹⁹Mo by α-benzoin oxime (α-Bz) is a standard precipitation method for molybdenum owing to the high selectivity of this agent. Nowadays, statistical analysis tools are employed in analytical systems to prove their efficiency and feasibility. IPEN has a project aiming at the production of ⁹⁹Mo via the ²³⁵U fission route. The processing uses as its first step the precipitation of ⁹⁹Mo with α-Bz, a step that involves many key reaction parameters. The aim of this work is the development of the already known acidic route to produce ⁹⁹Mo as well as the optimization of the reaction parameters by applying statistical tools. In order to simulate ⁹⁹Mo precipitation, the study was conducted in acidic media using HNO₃, with α-Bz as precipitant agent and NaOH/1% H₂O₂ as dissolver solution. Then a Mo carrier, KMnO₄ solutions and a ⁹⁹Mo tracer were added to the reaction flask. The reaction parameters (α-Bz/Mo ratio, Mo carrier, reaction time and temperature, and cooling time before filtration) were evaluated under a fractional factorial design of resolution V. The best value of each reaction parameter was then determined by response surface statistical planning. The precipitation and recovery yields of ⁹⁹Mo were measured using an HPGe detector. Statistical analysis of the experimental data suggested that the α-Bz/Mo ratio, reaction time and temperature have a significant impact on ⁹⁹Mo precipitation. The optimization planning showed that higher α-Bz/Mo ratios, room temperature, and shorter reaction times lead to higher ⁹⁹Mo yields. (author)
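
    A resolution V design for five factors is typically the half-fraction 2^(5-1) with generator E = ABCD; a sketch of how such a design matrix can be constructed (the coded -1/+1 layout below reproduces the generic design structure, not the authors' actual run table):

    # Illustrative sketch: 2^(5-1) resolution V fractional factorial design
    # (generator E = ABCD, defining relation I = ABCDE) for the five reaction
    # parameters named in the abstract. Coded levels only; not the actual runs.
    from itertools import product

    factors = ["aBz_Mo_ratio", "Mo_carrier", "reaction_time",
               "reaction_temperature", "cooling_time"]

    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        e = a * b * c * d            # generator E = ABCD
        runs.append((a, b, c, d, e))

    print(len(runs), "runs")         # 16 runs instead of the full 2^5 = 32
    for row in runs[:4]:
        print(dict(zip(factors, row)))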

  11. Spectral Difference in the Image Domain for Large Neighborhoods, a GEOBIA Pre-Processing Step for High Resolution Imagery

    Directory of Open Access Journals (Sweden)

    Roeland de Kok

    2012-08-01

    Full Text Available Contrast plays an important role in the visual interpretation of imagery. To mimic visual interpretation and use contrast in a Geographic Object Based Image Analysis (GEOBIA) environment, it is useful to consider an analysis for single-pixel objects. This should be done before applying homogeneity criteria in the aggregation of pixels for the construction of meaningful image objects. The habit or “best practice” of starting GEOBIA with pixel aggregation into homogeneous objects should come with the awareness that feature attributes for single pixels are at risk of becoming less accessible for further analysis. Single-pixel contrast with image convolution on close neighborhoods is a standard technique, also applied in edge detection. This study elaborates on the analysis of close as well as much larger neighborhoods inside the GEOBIA domain. The applied calculations are limited to the first segmentation step for single-pixel objects in order to produce additional feature attributes for objects of interest to be generated in further aggregation processes. The equation presented operates at a level that is considered an intermediate product in the sequential processing of imagery. The procedure requires intensive processor and memory capacity. The resulting feature attributes highlight not only contrasting pixels (edges) but also contrasting areas of local pixel groups. The suggested approach can be extended and becomes useful for classifying artificial areas at national scales using high-resolution satellite mosaics.
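
    The single-pixel contrast idea can be sketched as the difference between each pixel value and the mean of a large surrounding neighbourhood, computed per band; the window size and edge handling below are illustrative choices, not the equation of the paper:

    # Illustrative sketch: per-band spectral difference of each pixel against
    # the mean of a large neighbourhood, using a uniform convolution kernel.
    # Window size and edge handling are illustrative, not the paper's equation.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def spectral_difference(band, window=61):
        """Pixel value minus the local mean over a window x window neighbourhood."""
        local_mean = uniform_filter(band.astype(float), size=window, mode="reflect")
        return band.astype(float) - local_mean

    band = np.random.default_rng(1).integers(0, 255, (512, 512)).astype(float)
    sd = spectral_difference(band)
    print(round(sd.min(), 2), round(sd.max(), 2))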

  12. Tubing-Electrospinning: A One-Step Process for Fabricating Fibrous Matrices with Spatial, Chemical, and Mechanical Gradients.

    Science.gov (United States)

    Kim, Jung-Suk; Im, Byung Gee; Jin, Gyuhyung; Jang, Jae-Hyung

    2016-08-31

    Guiding newly generated tissues in a gradient pattern, thereby precisely mimicking inherent tissue morphology and subsequently arranging the intimate networks between adjacent tissues, is essential to raise the technical levels of tissue engineering and facilitate its transition into the clinic. In this study, a straightforward electrospinning method (the tubing-electrospinning technique) was developed to create fibrous matrices readily with diverse gradient patterns and to induce patterned cellular responses. Gradient fibrous matrices can be produced simply by installing a series of polymer-containing lengths of tubing into an electrospinning circuit and sequentially processing polymers without a time lag. The loading of polymer samples with different characteristics, including concentration, wettability, and mechanical properties, into the tubing system enabled unique features in fibrous matrices, such as longitudinal gradients in fiber density, surface properties, and mechanical stiffness. The resulting fibrous gradients were shown to arrange cellular migration and residence in a gradient manner, thereby offering efficient cues to mediate patterned tissue formation. The one-step process using tubing-electrospinning apparatus can be used without significant modifications regardless of the type of fibrous gradient. Hence, the tubing-electrospinning system can serve as a platform that can be readily used by a wide-range of users to induce patterned tissue formation in a gradient manner, which will ultimately improve the functionality of tissue engineering scaffolds.

  13. Development strategy and process models for phased automation of design and digital manufacturing electronics

    Science.gov (United States)

    Korshunov, G. I.; Petrushevskaya, A. A.; Lipatnikov, V. A.; Smirnova, M. S.

    2018-03-01

    A strategy for assuring the quality of electronics is presented as the most important concern. To provide quality, the sequence of processes is considered and modeled by a Markov chain. The improvement is supported by simple database means of design for manufacturing, intended for future step-by-step development. Phased automation of the design and digital manufacturing of electronics is proposed. MATLAB modelling results showed an increase in effectiveness, indicating that new tools and software should be more effective. A primary digital model is proposed to represent the product across the process sequence, from several individual processes up to the whole life cycle.
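
    As a minimal illustration of modelling a design-and-manufacturing process sequence with a Markov chain (the states and transition probabilities below are invented, not those of the study):

    # Illustrative sketch: a process sequence modelled as an absorbing Markov
    # chain (design -> layout -> assembly -> test -> done) with rework loops.
    # All transition probabilities are invented for illustration.
    import numpy as np

    states = ["design", "layout", "assembly", "test", "done"]
    P = np.array([
        [0.10, 0.90, 0.00, 0.00, 0.00],   # design
        [0.05, 0.10, 0.85, 0.00, 0.00],   # layout: 5% rework back to design
        [0.00, 0.08, 0.12, 0.80, 0.00],   # assembly: 8% back to layout
        [0.00, 0.00, 0.15, 0.10, 0.75],   # test: 15% rework in assembly
        [0.00, 0.00, 0.00, 0.00, 1.00],   # done (absorbing)
    ])

    Q = P[:4, :4]                          # transitions among transient states
    N = np.linalg.inv(np.eye(4) - Q)       # fundamental matrix
    # expected number of visits to each stage before completion, starting at design
    print(dict(zip(states[:4], N[0].round(2))))

    The fundamental matrix gives the expected rework effort per stage, which is one simple way to quantify how much a process improvement (a lower rework probability) shortens the overall sequence.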

  14. A novel two-step coprecipitation process using Fe(III) and Al(III) for the removal and immobilization of arsenate from acidic aqueous solution.

    Science.gov (United States)

    Jia, Yongfeng; Zhang, Danni; Pan, Rongrong; Xu, Liying; Demopoulos, George P

    2012-02-01

    Lime neutralization and coprecipitation of arsenate with iron is widely practiced for the removal and immobilization of arsenic from mineral processing effluents. However, the stability of the generated iron-arsenate coprecipitate is still of concern. In this work, we developed a two-step coprecipitation process involving the use of iron and aluminum and tested the stability of the resultant coprecipitates. The two-step Fe-As-Fe or Fe-As-Al coprecipitation process involved an initial Fe/As = 2 coprecipitation at pH 4 to remove arsenic from water down to 0.25 mg/L, followed by introduction of iron or aluminum (Fe/As = 2, Al/As = 1.5 or 2). The two-step coprecipitates showed higher stability than the traditional Fe/As = 4 coprecipitate under both oxic and anoxic conditions. Leaching stability was enhanced when aluminum was applied in the second step. The use of aluminum in the second step also inhibited microbially mediated arsenate reduction and arsenic remobilization. The results suggest that the two-step coprecipitation process is superior to conventional coprecipitation methods with respect to the stability of the generated arsenic-bearing solid waste, and that Al is better than Fe in the second step for enhancing the stability. This work may have important implications for the development of new technologies for efficient arsenic removal from hydrometallurgical solutions and safe disposal in both oxic and anoxic environments. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Additive N-step Markov chains as prototype model of symbolic stochastic dynamical systems with long-range correlations

    International Nuclear Information System (INIS)

    Mayzelis, Z.A.; Apostolov, S.S.; Melnyk, S.S.; Usatenko, O.V.; Yampol'skii, V.A.

    2007-01-01

    A theory of symbolic dynamic systems with long-range correlations based on the consideration of the binary N-step Markov chains developed earlier in Phys Rev Lett 2003;90:110601 is generalized to the biased case (non-equal numbers of zeros and unities in the chain). In the model, the conditional probability that the ith symbol in the chain equals zero (or unity) is a linear function of the number of unities (zeros) among the preceding N symbols. The correlation and distribution functions as well as the variance of number of symbols in the words of arbitrary length L are obtained analytically and verified by numerical simulations. A self-similarity of the studied stochastic process is revealed and the similarity group transformation of the chain parameters is presented. The diffusion Fokker-Planck equation governing the distribution function of the L-words is explored. If the persistent correlations are not extremely strong, the distribution function is shown to be the Gaussian with the variance being nonlinearly dependent on L. An equation connecting the memory and correlation function of the additive Markov chain is presented. This equation allows reconstructing a memory function using a correlation function of the system. Effectiveness and robustness of the proposed method is demonstrated by simple model examples. Memory functions of concrete coarse-grained literary texts are found and their universal power-law behavior at long distances is revealed
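
    The defining rule, a conditional probability that is linear in the number of unities among the preceding N symbols, can be simulated directly; the parameterisation below (bias b and memory strength mu) is a generic linear form chosen for illustration, not necessarily the notation of the paper:

    # Illustrative sketch: binary additive N-step Markov chain in which
    # P(a_i = 1) depends linearly on the number of ones among the previous N
    # symbols. The bias b and memory strength mu are illustrative parameters.
    import numpy as np

    def additive_markov_chain(length, N=20, b=0.5, mu=0.2, seed=0):
        rng = np.random.default_rng(seed)
        a = list((rng.random(N) < b).astype(int))   # arbitrary first N symbols
        for i in range(N, length):
            k = sum(a[i - N:i])                     # ones among the preceding N
            p1 = b + mu * (k / N - b)               # linear in k; persistent for mu > 0
            a.append(int(rng.random() < min(max(p1, 0.0), 1.0)))
        return np.array(a)

    chain = additive_markov_chain(100_000)
    print(round(chain.mean(), 3))                   # stays close to the bias b

    Word statistics (such as the variance of the number of ones in windows of length L) can then be computed from the generated chain and compared with the analytical results quoted in the abstract.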

  16. Additive N-step Markov chains as prototype model of symbolic stochastic dynamical systems with long-range correlations

    Energy Technology Data Exchange (ETDEWEB)

    Mayzelis, Z.A. [Department of Physics, Kharkov National University, 4 Svoboda Sq., Kharkov 61077 (Ukraine); Apostolov, S.S. [Department of Physics, Kharkov National University, 4 Svoboda Sq., Kharkov 61077 (Ukraine); Melnyk, S.S. [A. Ya. Usikov Institute for Radiophysics and Electronics, Ukrainian Academy of Science, 12 Proskura Street, 61085 Kharkov (Ukraine); Usatenko, O.V. [A. Ya. Usikov Institute for Radiophysics and Electronics, Ukrainian Academy of Science, 12 Proskura Street, 61085 Kharkov (Ukraine)]. E-mail: usatenko@ire.kharkov.ua; Yampol' skii, V.A. [A. Ya. Usikov Institute for Radiophysics and Electronics, Ukrainian Academy of Science, 12 Proskura Street, 61085 Kharkov (Ukraine)

    2007-10-15

    A theory of symbolic dynamic systems with long-range correlations based on the consideration of the binary N-step Markov chains developed earlier in Phys Rev Lett 2003;90:110601 is generalized to the biased case (non-equal numbers of zeros and unities in the chain). In the model, the conditional probability that the ith symbol in the chain equals zero (or unity) is a linear function of the number of unities (zeros) among the preceding N symbols. The correlation and distribution functions as well as the variance of number of symbols in the words of arbitrary length L are obtained analytically and verified by numerical simulations. A self-similarity of the studied stochastic process is revealed and the similarity group transformation of the chain parameters is presented. The diffusion Fokker-Planck equation governing the distribution function of the L-words is explored. If the persistent correlations are not extremely strong, the distribution function is shown to be the Gaussian with the variance being nonlinearly dependent on L. An equation connecting the memory and correlation function of the additive Markov chain is presented. This equation allows reconstructing a memory function using a correlation function of the system. Effectiveness and robustness of the proposed method is demonstrated by simple model examples. Memory functions of concrete coarse-grained literary texts are found and their universal power-law behavior at long distances is revealed.

  17. Integrated Modeling of Process, Structures and Performance in Cast Parts

    DEFF Research Database (Denmark)

    Kotas, Petr

    This thesis deals with numerical simulations of gravity sand casting processes for the production of large steel parts. The entire manufacturing process is numerically modeled and evaluated, taking into consideration mould filling, solidification, solid state cooling and the subsequent stress build-up ... and to defects occurrence. In other words, it is desired to eliminate all of the potential casting defects and at the same time to maximize the casting yield. The numerical optimization algorithm then takes these objectives and searches for a set of the investigated process, design or material parameters, e.g. chill design, riser design, gating system design, etc., which would satisfy these objectives the most. The first step in the numerical casting process simulation is to analyze mould filling, where the emphasis is put on the gating system design. There are still a lot of foundry specialists who ignore...

  18. Next Step for STEP

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Claire [CTSI; Bremner, Brenda [CTSI

    2013-08-09

    The Siletz Tribal Energy Program (STEP), housed in the Tribe’s Planning Department, will hire a data entry coordinator to collect, enter, analyze and store all the current and future energy efficiency and renewable energy data pertaining to administrative structures the tribe owns and operates and for homes in which tribal members live. The proposed data entry coordinator will conduct an energy options analysis in collaboration with the rest of the Siletz Tribal Energy Program and Planning Department staff. An energy options analysis will result in a thorough understanding of tribal energy resources and consumption, if energy efficiency and conservation measures being implemented are having the desired effect, analysis of tribal energy loads (current and future energy consumption), and evaluation of local and commercial energy supply options. A literature search will also be conducted. In order to educate additional tribal members about renewable energy, we will send four tribal members to be trained to install and maintain solar panels, solar hot water heaters, wind turbines and/or micro-hydro.

  19. Three-dimensional model analysis and processing

    CERN Document Server

    Yu, Faxin; Luo, Hao; Wang, Pinghui

    2011-01-01

    This book focuses on five hot research directions in 3D model analysis and processing in computer science:  compression, feature extraction, content-based retrieval, irreversible watermarking and reversible watermarking.

  20. Value-Oriented Coordination Process Modeling

    NARCIS (Netherlands)

    Fatemi, Hassan; van Sinderen, Marten J.; Wieringa, Roelf J.; Hull, Richard; Mendling, Jan; Tai, Stefan

    Business webs are collections of enterprises designed to jointly satisfy a consumer need. Designing business webs calls for modeling the collaboration of enterprises from different perspectives, in particular the business value and coordination process perspectives, and for mutually aligning these perspectives.

  1. Finite element analysis on multi-step rolling process and controlling quality defect for steel wheel rim

    Directory of Open Access Journals (Sweden)

    Ping Lu

    2015-07-01

    Full Text Available To conduct an in-depth analysis of the wheel rim forming processes and effectively control rim forming quality defects, three-dimensional elastic–plastic finite element models of the flaring and three rolling processes for a 22.5 × 9.0-type steel wheel rim were established using ABAQUS software. Some key techniques in establishing the models were investigated, such as methods of imposing the boundary conditions given by the side guide wheels and enforcing the load curve. The accuracy of the models was verified by comparing the simulation results with the point-cloud model of the actual produced rim in terms of exterior shape and thickness. Distributions and changes in the equivalent stress and equivalent plastic strain were analysed. The results indicate that the rim misalignment defect often occurs when the widths of the reserved material at the two ends of the rim are unequal in the first rolling process. An improved die design is proposed. The results of the finite element analysis indicate that the improved dies are conducive to the flow of material in the gap between the upper and lower rollers, and the difference in rim width is significantly reduced.

  2. Influence of processing steps in cold-smoked salmon production on survival and growth of persistent and presumed non-persistent Listeria monocytogenes

    DEFF Research Database (Denmark)

    Porsby, Cisse Hedegaard; Vogel, Birte Fonnesbech; Mohr, Mona

    2008-01-01

    Cold-smoked salmon is a ready-to-eat product in which Listeria monocytogenes sometimes can grow to high numbers. The bacterium can colonize the processing environment and it is believed to survive or even grow during the processing steps. The purpose of the present study was to determine......, cold-smoking and process-freezing (a freezing step after smoking and before slicing). The prevalence of L. monocytogenes in the commercial production facility was too low to determine any quantitative effects; however, one of nine samples was positive before processing and none after. Taken together...

  3. Modeling of Dielectric Heating within Lyophilization Process

    Directory of Open Access Journals (Sweden)

    Jan Kyncl

    2014-01-01

    Full Text Available A process of lyophilization of paper books is modeled. The process of drying is controlled by a dielectric heating system. From the physical viewpoint, the task represents a 2D coupled problem described by two partial differential equations for the electric and temperature fields. The material parameters are supposed to be temperature-dependent functions. The continuous mathematical model is solved numerically. The methodology is illustrated with some examples whose results are discussed.
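
    In a frequently used quasi-static form, the two coupled fields can be written (general background; the coefficients and boundary conditions of the paper's model are not reproduced here) as

        \nabla\cdot\bigl[(\sigma + \mathrm{j}\omega\varepsilon_0\varepsilon_r)\nabla\varphi\bigr] = 0,
        \qquad
        \rho c_p\,\frac{\partial T}{\partial t} = \nabla\cdot(\lambda\nabla T) + \omega\varepsilon_0\varepsilon''\,|\mathbf{E}|_{\mathrm{rms}}^{2},
        \qquad
        \mathbf{E} = -\nabla\varphi,

    where \varphi is the electric potential at angular frequency \omega, \varepsilon'' the dielectric loss factor, \lambda the thermal conductivity and \rho c_p the volumetric heat capacity. The electric solution supplies the volumetric heat source for the temperature equation, and the temperature-dependent material parameters \sigma(T), \varepsilon''(T) and \lambda(T) feed back into the electric problem, which is the coupling described in the paper.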

  4. The role of mass media in adolescents' sexual behaviors: exploring the explanatory value of the three-step self-objectification process.

    Science.gov (United States)

    Vandenbosch, Laura; Eggermont, Steven

    2015-04-01

    This longitudinal study (N = 730) explored whether the three-step process of self-objectification (internalization of appearance ideals, valuing appearance over competence, and body surveillance) could explain the influence of sexual media messages on adolescents' sexual behaviors. A structural equation model showed that reading sexualizing magazines (Time 1) was related to the internalization of appearance ideals and valuing appearance over competence (Time 2). In turn, the internalization of appearance ideals was positively associated with body surveillance and valuing appearance over competence (all at Time 2). Valuing appearance over competence was also positively associated with body surveillance (all at Time 2). Lastly, body surveillance (Time 2) positively related to the initiation of French kissing (Time 3) whereas valuing appearance over competence (Time 2) positively related to the initiation of sexual intercourse (Time 3). No significant relationship was observed for intimate touching. The discussion focused on the explanatory role of self-objectification in media effects on adolescents' sexual behaviors.

  5. Cost Models for MMC Manufacturing Processes

    Science.gov (United States)

    Elzey, Dana M.; Wadley, Haydn N. G.

    1996-01-01

    Processes for the manufacture of advanced metal matrix composites are rapidly approaching maturity in the research laboratory and there is growing interest in their transition to industrial production. However, research conducted to date has almost exclusively focused on overcoming the technical barriers to producing high-quality material, and little attention has been given to the economic feasibility of these laboratory approaches and to process cost issues. A quantitative cost modeling (QCM) approach was developed to address these issues. QCMs are cost analysis tools based on predictive process models relating process conditions to the attributes of the final product. An important attribute of the QCM approach is the ability to predict the sensitivity of material production costs to product quality and to quantitatively explore trade-offs between cost and quality. Applications of the cost models allow more efficient direction of future MMC process technology development and a more accurate assessment of MMC market potential. Cost models were developed for two state-of-the-art metal matrix composite (MMC) manufacturing processes: tape casting and plasma spray deposition. Quality and cost models are presented for both processes and the resulting predicted quality-cost curves are presented and discussed.

  6. Improving pain care through implementation of the Stepped Care Model at a multisite community health center

    Directory of Open Access Journals (Sweden)

    Anderson DR

    2016-11-01

    Full Text Available Daren R Anderson,1 Ianita Zlateva,1 Emil N Coman,2 Khushbu Khatri,1 Terrence Tian,1 Robert D Kerns3 1Weitzman Institute, Community Health Center, Inc., Middletown, 2UCONN Health Disparities Institute, University of Connecticut, Farmington, 3VA Connecticut Healthcare System, West Haven, CT, USA Purpose: Treating pain in primary care is challenging. Primary care providers (PCPs) receive limited training in pain care and express low confidence in their knowledge and ability to manage pain effectively. Models to improve pain outcomes have been developed, but not formally implemented in safety net practices where pain is particularly common. This study evaluated the impact of implementing the Stepped Care Model for Pain Management (SCM-PM) at a large, multisite Federally Qualified Health Center. Methods: The Promoting Action on Research Implementation in Health Services framework guided the implementation of the SCM-PM. The multicomponent intervention included: education on pain care, new protocols for pain assessment and management, implementation of an opioid management dashboard, telehealth consultations, and enhanced onsite specialty resources. Participants included 25 PCPs and their patients with chronic pain (3,357 preintervention and 4,385 postintervention) cared for at Community Health Center, Inc. Data were collected from the electronic health record and supplemented by chart reviews. Surveys were administered to PCPs to assess knowledge, attitudes, and confidence. Results: Providers’ pain knowledge scores increased to an average of 11% from baseline; self-rated confidence in ability to manage pain also increased. Use of opioid treatment agreements and urine drug screens increased significantly by 27.3% and 22.6%, respectively. Significant improvements were also noted in documentation of pain, pain treatment, and pain follow-up. Referrals to behavioral health providers for patients with pain increased by 5.96% (P=0.009). There was no

  7. Physical and mathematical modelling of extrusion processes

    DEFF Research Database (Denmark)

    Arentoft, Mogens; Gronostajski, Z.; Niechajowics, A.

    2000-01-01

    The main objective of the work is to study the extrusion process using physical modelling and to compare the findings of the study with finite element predictions. The possibilities and advantages of the simultaneous application of both of these methods for the analysis of metal forming processes...

  8. Business Process Modeling Notation - An Overview

    Directory of Open Access Journals (Sweden)

    Alexandra Fortiş

    2006-01-01

    Full Text Available BPMN represents an industrial standard created to offer a common and user-friendly notation to all the participants in a business process. The present paper aims to briefly present the main features of this notation, as well as an interpretation of some of the main patterns characterizing a business process modelled by means of workflows.

  9. Towards an understanding of business model innovation processes

    DEFF Research Database (Denmark)

    Taran, Yariv; Boer, Harry; Lindgren, Peter

    2009-01-01

    Companies today, in some industries more than others, invest more capital and resources just to stay competitive, develop more diverse solutions, and increasingly start to think more radically, when considering to innovate their business model. However, the development and innovation of business...... models is a complex venture and has not been widely researched yet. The objective of this paper is therefore 1) to build a [descriptive] theoretical understanding, based on Christensen’s (2005) three-step procedure, to business models and their innovation and, as a result of that, 2) to strengthen...... researchers’ and practitioners’ perspectives as to how the process of business model innovation can be realized. By using various researchers’ perspectives and assumptions, we identify relevant inconsistencies, which consequently lead us to propose possible supplementary solutions. We conclude our paper...

  10. Theory Building- Towards an understanding of business model innovation processes

    DEFF Research Database (Denmark)

    Taran, Yariv; Boer, Harry; Lindgren, Peter

    2009-01-01

    Companies today, in some industries more than others, invest more capital and resources just to stay competitive, develop more diverse solutions, and increasingly start to think more radically, when considering to innovate their business model. However, the development and innovation of business...... models is a complex venture and has not been widely researched yet. The objective of this paper is therefore 1) to build a [descriptive] theoretical understanding, based on Christensen's (2005) three-step procedure, to business models and their innovation and, as a result of that, 2) to strengthen...... researchers' and practitioners' perspectives as to how the process of business model innovation can be realized. By using various researchers' perspectives and assumptions, we identify relevant inconsistencies, which consequently lead us to propose possible supplementary solutions. We conclude our paper...

  11. Qualitative simulation in formal process modelling

    International Nuclear Information System (INIS)

    Sivertsen, Elin R.

    1999-01-01

    In relation to several different research activities at the OECD Halden Reactor Project, the usefulness of formal process models has been identified. Represented in an appropriate representation language, these models are intended to describe process plants and plant automatics in a unified way, allowing verification and computer-aided design of control strategies. The present report discusses qualitative simulation and the tool QSIM as one approach to formal process models. In particular, the report aims at investigating how recent improvements of the tool facilitate the use of the approach in areas like process system analysis, procedure verification, and control software safety analysis. An important long-term goal is to provide a basis for using qualitative reasoning in combination with other techniques to facilitate the treatment of embedded programmable systems in Probabilistic Safety Analysis (PSA). This is motivated by the potential of such a combination in safety analysis based on models comprising software, hardware, and operator. It is anticipated that the research results from this activity will benefit V and V in a wide variety of applications where formal process models can be utilized. Examples are operator procedures, intelligent decision support systems, and common model repositories. (author) (ml)

  12. The (Mathematical) Modeling Process in Biosciences.

    Science.gov (United States)

    Torres, Nestor V; Santos, Guido

    2015-01-01

    In this communication, we introduce a general framework and discussion on the role of models and the modeling process in the field of biosciences. The objective is to sum up the common procedures during the formalization and analysis of a biological problem from the perspective of Systems Biology, which approaches the study of biological systems as a whole. We begin by presenting the definitions of (biological) system and model. Particular attention is given to the meaning of mathematical model within the context of biology. Then, we present the process of modeling and analysis of biological systems. Three stages are described in detail: conceptualization of the biological system into a model, mathematical formalization of the previous conceptual model and optimization and system management derived from the analysis of the mathematical model. Throughout this work the main features and shortcomings of the process are analyzed and a set of rules that could help in the task of modeling any biological system is presented. Special regard is given to the formative requirements and the interdisciplinary nature of this approach. We conclude with some general considerations on the challenges that modeling is posing to current biology.

  13. Mathematical modeling of the voloxidation process. Final report

    International Nuclear Information System (INIS)

    Stanford, T.G.

    1979-06-01

    A mathematical model of the voloxidation process, a head-end reprocessing step for the removal of volatile fission products from spent nuclear fuel, has been developed. Three types of voloxidizer operation have been considered: co-current operation, in which the gas and solid streams flow in the same direction; countercurrent operation, in which the gas and solid streams flow in opposite directions; and semi-batch operation, in which the gas stream passes through the reactor while the solids remain in it and are processed batchwise. Because of the complexity of the physical and chemical processes which occur during the voloxidation process and the lack of currently available kinetic data, a global kinetic model has been adopted for this study. Test cases for each mode of operation have been simulated using representative values of the model parameters. To process 714 kg/day of spent nuclear fuel, using an oxidizing atmosphere containing 20 mole percent oxygen, it was found that a reactor 0.7 m in diameter and 2.49 m in length would be required for both co-current and countercurrent modes of operation, while for semi-batch operation a 0.3 m³ reactor and an 88,200 s batch processing time would be required.
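
    The global-kinetics idea can be illustrated for the semi-batch case with a simple first-order release law (the rate constant below is an invented number and the model is a sketch, not the report's global kinetic model):

    # Illustrative sketch: semi-batch voloxidation with a first-order global
    # rate for volatile fission-product release, dx/dt = k (1 - x).
    # The rate constant is an invented value; only the batch time (88,200 s)
    # is taken from the test case quoted in the abstract.
    import numpy as np

    k = 5.0e-5                                  # assumed global rate constant, 1/s
    t = np.linspace(0.0, 88_200.0, 500)         # batch processing time, s
    x = 1.0 - np.exp(-k * t)                    # fractional release of volatiles

    print(f"release after {t[-1]:.0f} s: {x[-1]:.3f}")
    print(f"time for 99% release: {-np.log(0.01) / k:.0f} s")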

  14. Extending Model Checking To Object Process Validation

    NARCIS (Netherlands)

    van Rein, H.

    2002-01-01

    Object-oriented techniques allow the gathering and modelling of system requirements in terms of an application area. The expression of data and process models at that level is a great asset in communication with non-technical people in that area, but it does not necessarily lead to consistent

  15. Hierarchical Structured Model for Nonlinear Dynamical Processes ...

    African Journals Online (AJOL)

    The mathematical representation of the process, in this context, is given by a set of linear stochastic differential equations (SDEs) with unique solutions. The problem of realization is that of constructing the dynamical system by looking at the problem of scientific model building. In model building, one must be able to calculate the ...

  16. Exogenous ROS-induced cell sheet transfer based on hematoporphyrin-polyketone film via a one-step process.

    Science.gov (United States)

    Koo, Min-Ah; Lee, Mi Hee; Kwon, Byeong-Ju; Seon, Gyeung Mi; Kim, Min Sung; Kim, Dohyun; Nam, Ki Chang; Park, Jong-Chul

    2018-04-01

    To date, most invasive cell sheet harvesting methods have used variations of culture surface properties, such as wettability, pH, electricity, and magnetism, to induce cell detachment. These methods, which rely on surface property changes, are effective when cell detachment prior to application is necessary, but of limited use for cell sheet transfer to target regions. This study reports a new reactive oxygen species (ROS)-induced strategy based on a hematoporphyrin-incorporated polyketone film (Hp-PK film) to transfer cell sheets directly to target areas without an intermediate harvesting process. After green LED (510 nm) irradiation, production of exogenous ROS from the Hp-PK films induces cell sheet detachment and transfer. The study suggests that the ROS-induced cell detachment property of the Hp-PK film is closely related to conformational changes of extracellular matrix (ECM) proteins. This strategy can also be applied, by regulating the production rate of exogenous ROS, to various types of cells, including fibroblasts, mesenchymal stem cells and keratinocytes. In conclusion, the ROS-induced method using the Hp-PK film can be used for one-step cell sheet transplantation and has potential in biomedical applications. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Removal of pharmaceutical residues using ozonation as intermediate process step at Linköping WWTP, Sweden.

    Science.gov (United States)

    Baresel, Christian; Malmborg, Jonas; Ek, Mats; Sehlén, Robert

    2016-01-01

    Pilot tests were performed as the basis for the design, implementation and operation of a future full-scale oxidation plant completing the existing sewage treatment in Linköping, Sweden. Using an ozonation step between the bio-sedimentation and post-denitrification processes, the primary goal was the removal of the highest-priority substances to effluent water levels that will not cause adverse effects in the recipient, considering the natural dilution. The study included initial emission screenings, dose control trials, treatment performance studies and eco-toxicity studies. At an ozone dose of 5 mg O3/L, most substances could be removed. Ecotoxicological tests showed no negative effect for the tested ozone doses. The high levels of oxygen carried into the denitrification stage could be rapidly reduced in the biological treatment. The number of bacteria in the treated water could be significantly reduced even at relatively low ozone doses. Based on these results, planning for the full-scale implementation of the treatment system was initiated in 2015.

  18. Interprofessional practice in primary care: development of a tailored process model

    Directory of Open Access Journals (Sweden)

    Stans SEA

    2013-04-01

    Full Text Available Steffy EA Stans, JG Anita Stevens, Anna JHM Beurskens Research Center of Autonomy and Participation for Persons with a Chronic Illness, Zuyd University of Applied Sciences, Heerlen, The Netherlands Purpose: This study investigated the improvement of interprofessional practice in primary care by performing the first three steps of the implementation model described by Grol et al. This article describes the targets for improvement in a setting for children with complex care needs (step 1), the identification of barriers and facilitators influencing interprofessional practice (step 2), and the development of a tailored interprofessional process model (step 3). Methods: In step 2, thirteen qualitative semistructured interviews were held with several stakeholders, including parents of children, an occupational therapist, a speech and language therapist, a physical therapist, the manager of the team, two general practitioners, a psychologist, and a primary school teacher. The data were analyzed using directed content analysis and using the domains of the Chronic Care Model as a framework. In step 3, a project group was formed to develop helpful strategies, including the development of an interprofessional process through process mapping. Results: In step 2, it was found that the most important barriers to implementing interprofessional practice related to the lack of structure in the care process. A process model for interprofessional primary care was developed for the target group. Conclusion: The lack of a shared view of what is involved in the process of interprofessional practice was the most important barrier to its successful implementation. It is suggested that the tailored process developed, supported with the appropriate tools, may provide both professional staff and their clients, in this setting but also in other areas of primary care, with insight into the care process and a clear representation of "who should do what, when, and how." Keywords

  19. Filament winding cylinders. I - Process model

    Science.gov (United States)

    Lee, Soo-Yong; Springer, George S.

    1990-01-01

    A model was developed which describes the filament winding process of composite cylinders. The model relates the significant process variables such as winding speed, fiber tension, and applied temperature to the thermal, chemical and mechanical behavior of the composite cylinder and the mandrel. Based on the model, a user friendly code was written which can be used to calculate (1) the temperature in the cylinder and the mandrel, (2) the degree of cure and viscosity in the cylinder, (3) the fiber tensions and fiber positions, (4) the stresses and strains in the cylinder and in the mandrel, and (5) the void diameters in the cylinder.

  20. Evolution of quantum-like modeling in decision making processes

    International Nuclear Information System (INIS)

    Khrennikova, Polina

    2012-01-01

    The application of the mathematical formalism of quantum mechanics to model behavioral patterns in social science and economics is a novel and constantly emerging field. The aim of the so called 'quantum like' models is to model the decision making processes in a macroscopic setting, capturing the particular 'context' in which the decisions are taken. Several subsequent empirical findings proved that when making a decision people tend to violate the axioms of expected utility theory and Savage's Sure Thing principle, thus violating the law of total probability. A quantum probability formula was devised to describe more accurately the decision making processes. A next step in the development of QL-modeling in decision making was the application of Schrödinger equation to describe the evolution of people's mental states. A shortcoming of Schrödinger equation is its inability to capture dynamics of an open system; the brain of the decision maker can be regarded as such, actively interacting with the external environment. Recently the master equation, by which quantum physics describes the process of decoherence as the result of interaction of the mental state with the environmental 'bath', was introduced for modeling the human decision making. The external environment and memory can be referred to as a complex 'context' influencing the final decision outcomes. The master equation can be considered as a pioneering and promising apparatus for modeling the dynamics of decision making in different contexts.
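
    The master equation referred to is usually taken in the Gorini–Kossakowski–Sudarshan–Lindblad form (a standard open-quantum-systems result, quoted as background rather than from this paper):

        \frac{d\rho}{dt} = -\frac{\mathrm{i}}{\hbar}\,[H,\rho]
        + \sum_k \gamma_k\Bigl(L_k\,\rho\,L_k^{\dagger} - \tfrac{1}{2}\bigl\{L_k^{\dagger}L_k,\rho\bigr\}\Bigr),

    where \rho is the density operator representing the mental state, H generates the isolated (Schrödinger-type) evolution, and the operators L_k with rates \gamma_k model the interaction with the environmental "bath" (context and memory). Decision probabilities are then read off from the diagonal elements of the asymptotic, decohered \rho.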

  1. Evolution of quantum-like modeling in decision making processes

    Science.gov (United States)

    Khrennikova, Polina

    2012-12-01

    The application of the mathematical formalism of quantum mechanics to model behavioral patterns in social science and economics is a novel and constantly emerging field. The aim of the so called 'quantum like' models is to model the decision making processes in a macroscopic setting, capturing the particular 'context' in which the decisions are taken. Several subsequent empirical findings proved that when making a decision people tend to violate the axioms of expected utility theory and Savage's Sure Thing principle, thus violating the law of total probability. A quantum probability formula was devised to describe more accurately the decision making processes. A next step in the development of QL-modeling in decision making was the application of Schrödinger equation to describe the evolution of people's mental states. A shortcoming of Schrödinger equation is its inability to capture dynamics of an open system; the brain of the decision maker can be regarded as such, actively interacting with the external environment. Recently the master equation, by which quantum physics describes the process of decoherence as the result of interaction of the mental state with the environmental 'bath', was introduced for modeling the human decision making. The external environment and memory can be referred to as a complex 'context' influencing the final decision outcomes. The master equation can be considered as a pioneering and promising apparatus for modeling the dynamics of decision making in different contexts.

  2. Modeling Aspects of Activated Sludge Processes Part l l: Mathematical Process Modeling and Biokinetics of Activated Sludge Processes

    International Nuclear Information System (INIS)

    AbdElHaleem, H.S.; EI-Ahwany, A. H.; Ibrahim, H.I.; Ibrahim, G.

    2004-01-01

    Mathematical process modeling and the biokinetics of the activated sludge process were reviewed considering different types of models. The task group models ASM1, ASM2 and ASM3, versioned by Henze et al., were evaluated considering the conditions of each model and the different processes of which every model consists. It is revealed that ASM1 contains some defects that are avoided in ASM3. Based on homogeneity, models can be classified into homogeneous models, characterized by treating the activated sludge process as one phase; in this type of model, the internal mass transfer inside the flocs is neglected, and hence the kinetic parameters produced can be considered inaccurate. The other type is the heterogeneous model, which considers the mass transfer operations in addition to the biochemical reaction processes; hence, the resulting kinetic parameters can be considered more accurate than those of the homogeneous type.

  3. Direct construction of predictive models for describing growth Salmonella enteritidis in liquid eggs – a one-step approach

    Science.gov (United States)

    The objective of this study was to develop a new approach using a one-step approach to directly construct predictive models for describing the growth of Salmonella Enteritidis (SE) in liquid egg white (LEW) and egg yolk (LEY). A five-strain cocktail of SE, induced to resist rifampicin at 100 mg/L, ...

  4. Numerical Investigation of Transitional Flow over a Backward Facing Step Using a Low Reynolds Number k-ε Model

    DEFF Research Database (Denmark)

    Skovgaard, M.; Nielsen, Peter V.

    In this paper it is investigated whether it is possible to simulate and capture some of the low Reynolds number effects numerically using time averaged momentum equations and a low Reynolds number k-ε model. The test case is the laminar to turbulent transitional flow over a backward facing step...

  5. Optimization of a semiconductor manufacturing process using a reentrant model

    Directory of Open Access Journals (Sweden)

    Sarah Abuhab Valente

    2015-01-01

    Full Text Available The scope of this work is the simulation of a semiconductor manufacturing model in Arena® software and the subsequent optimization and sensitivity analysis of this model. The process is considered extremely complex given the number of steps, machines and parameters and its highly reentrant characteristics, which make it difficult to stabilize the production process. The production model used was the Intel Five-Machine Six-Step Mini-fab developed by Karl Kempf (1994). It was programmed in Arena® and optimized with OptQuest®, an add-on. We concluded that varying the number of machines and operators affects cycle time only when at least one more unit of a resource is added beyond the optimized configuration. Among the results, the scenario that stood out added one extra unit to the second machine group, yielding a 7.41% reduction in cycle time.

  6. Process modeling study of the CIF incinerator

    International Nuclear Information System (INIS)

    Hang, T.

    1995-01-01

    The Savannah River Site (SRS) plans to begin operating the Consolidated Incineration Facility (CIF) in 1996. The CIF will treat liquid and solid low-level radioactive, mixed and RCRA hazardous wastes generated at SRS. In addition to experimental test programs, process modeling was applied to provide guidance in areas of safety, environmental regulation compliance, process improvement and optimization. A steady-state flowsheet model was used to calculate material/energy balances and to track key chemical constituents throughout the process units. Dynamic models were developed to predict the CIF transient characteristics in normal and abnormal operation scenarios. Predictions include the rotary kiln heat transfer, dynamic responses of the CIF to fluctuations in the solid waste feed or upsets in the system equipment, performance of the control system, air inleakage in the kiln, etc. This paper reviews the modeling study performed to assist in the deflagration risk assessment

  7. From Business Value Model to Coordination Process Model

    Science.gov (United States)

    Fatemi, Hassan; van Sinderen, Marten; Wieringa, Roel

    The increased complexity of business webs calls for modeling the collaboration of enterprises from different perspectives, in particular the business and process perspectives, and for mutually aligning these perspectives. Business value modeling and coordination process modeling both are necessary for a good e-business design, but these activities have different goals and use different concepts. Nevertheless, the resulting models should be consistent with each other because they refer to the same system from different perspectives. Hence, checking the consistency between these models or producing one based on the other would be of high value. In this paper we discuss the issue of achieving consistency in multi-level e-business design and give guidelines to produce consistent coordination process models from business value models in a stepwise manner.

  8. Numerical modeling of atmospheric washout processes

    International Nuclear Information System (INIS)

    Bayer, D.; Beheng, K.D.; Herbert, F.

    1987-01-01

    For the washout of particles from the atmosphere by clouds and rain, one has to distinguish between processes that act in the first phase of cloud development, when condensation nuclei build up in saturated air (Nucleation Aerosol Scavenging, NAS), and processes that act during subsequent cloud development. In the second case particles are collected by cloud droplets or by falling raindrops via collision (Collision Aerosol Scavenging, CAS). The physics of both processes is described. For the CAS process a numerical model is presented. The report contains a documentation of the mathematical equations and the computer programs (FORTRAN). (KW) [de
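
    For the CAS (collision) pathway, washout is often summarized by a scavenging coefficient that makes the below-cloud particle concentration decay exponentially. The sketch below uses an assumed empirical power-law dependence of the coefficient on rain rate; the constants are illustrative, not those of the cited model.

```python
import numpy as np

# Sketch: below-cloud (CAS-type) washout with a scavenging coefficient Lambda
# parameterized from rain rate R by an assumed power law Lambda = a * R**b.
# Constants are illustrative placeholders.

def scavenging_coefficient(R_mm_per_h, a=8.4e-5, b=0.79):
    """Scavenging coefficient in 1/s for rain rate in mm/h (empirical form)."""
    return a * R_mm_per_h ** b

R = 5.0                                # rain rate, mm/h
lam = scavenging_coefficient(R)
t = np.linspace(0.0, 3600.0, 7)        # one hour in 10-minute steps
C_over_C0 = np.exp(-lam * t)           # solution of dC/dt = -Lambda * C

for ti, ci in zip(t, C_over_C0):
    print("t = %4.0f s   C/C0 = %.3f" % (ti, ci))
```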

  9. Scientific Opinion on the safety evaluation of the process “INTERSEROH Step 1” used to recycle polypropylene crates for use as food contact material

    OpenAIRE

    EFSA Panel on Food Contact Materials, Enzymes, Flavourings and Processing Aids (CEF)

    2012-01-01

    The EFSA Panel on Food Contact Materials, Enzymes, Flavourings and Processing Aids (CEF) provides a scientific opinion dealing with the safety evaluation of the recycling process “INTERSEROH Step 1” with the EC register number RECYC069. The process recycles pre-washed damaged food contact re-usable polypropylene crates (RPC) which have been used in a closed and controlled product loop into new recycled polypropylene crates. Through this process, cleaned damaged crates are firstly gro...

  10. Assessment of advanced step models for steady state Monte Carlo burnup calculations in application to prismatic HTGR

    Directory of Open Access Journals (Sweden)

    Kępisty Grzegorz

    2015-09-01

    Full Text Available In this paper, we compare the methodology of different time-step models in the context of Monte Carlo burnup calculations for nuclear reactors. We discuss the differences between the staircase step model, the slope model, the bridge scheme and the stochastic implicit Euler method proposed in the literature. We focus on the spatial stability of the depletion procedure and put additional emphasis on the problem of normalization of the neutron source strength. The considered methodology has been implemented in our continuous energy Monte Carlo burnup code (MCB5). The burnup simulations have been performed using a simplified high temperature gas-cooled reactor (HTGR) system with and without modeling of control rod withdrawal. Useful conclusions have been formulated on the basis of the results.
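
    The difference between the step models compared in this record can be illustrated on a toy depletion problem. In the sketch below, a single absorber is depleted with a flux that rises as the absorber burns out; the 'staircase' scheme freezes the beginning-of-step reaction rate, while a bridge-like predictor-corrector averages beginning- and end-of-step rates. All numbers are invented and the feedback law is a crude stand-in for flux normalization.

```python
# Sketch: effect of the time-step treatment on a toy burnup problem.
# One absorber nuclide N is depleted at rate sigma*phi(N)*N, where the flux
# phi rises as the absorber burns out.  "staircase" holds the beginning-of-step
# rate constant over the step; "bridge" averages beginning- and predicted
# end-of-step rates.  All values are invented for illustration.

sigma = 1.0e-22                                  # cm^2
phi0, N0 = 1.0e15, 1.0e21                        # 1/cm^2/s, 1/cm^3

def phi(N):
    """Flux rises as the absorber is depleted (illustrative feedback)."""
    return phi0 * (1.0 + 1.5 * (1.0 - N / N0))

def deplete(scheme, dt, n_steps):
    N = N0
    for _ in range(n_steps):
        r0 = sigma * phi(N) * N                  # beginning-of-step removal rate
        if scheme == "staircase":
            N -= r0 * dt
        else:                                    # bridge / predictor-corrector
            N_pred = N - r0 * dt
            r1 = sigma * phi(N_pred) * N_pred
            N -= 0.5 * (r0 + r1) * dt
    return N

reference = deplete("bridge", dt=1.0e4, n_steps=2000)      # fine steps as reference
for scheme in ("staircase", "bridge"):
    coarse = deplete(scheme, dt=1.0e6, n_steps=20)         # same total irradiation time
    print("%-10s relative error with coarse steps: %.2e"
          % (scheme, abs(coarse - reference) / reference))
```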

  11. Modelling of subcritical free-surface flow over an inclined backward-facing step in a water channel

    Directory of Open Access Journals (Sweden)

    Šulc Jan

    2012-04-01

    Full Text Available The contribution deals with the experimental and numerical modelling of subcritical turbulent flow in an open channel with an inclined backward-facing step. The step, with an inclination angle α = 20°, was placed in a water channel of cross-section 200×200 mm. Experiments were carried out by means of the PIV and LDA measuring techniques. Numerical simulations were performed with the commercial software ANSYS CFX 12.0. Numerical results obtained for two-equation models and an EARSM turbulence model, completed by transport equations for turbulent energy and specific dissipation rate, were compared with experimental data. The modelling focused particularly on the development of the flow separation and on the corresponding changes of the free surface.

  12. Using Field Data for Energy Efficiency Based on Maintenance and Operational Optimisation. A Step towards PHM in Process Plants

    Directory of Open Access Journals (Sweden)

    Micaela Demichela

    2018-03-01

    Full Text Available Energy saving is an important issue for any industrial sector; in particular, for the process industry, it can help to minimize both energy costs and environmental impact. Maintenance optimization and operational procedures can offer margins to increase energy efficiency in process plants, even if they are seldom explicitly taken into account in the predictive models guiding the energy saving policies. To ensure that the plant achieves the desired performance, maintenance operations and maintenance results should be monitored, and the connection between the inputs and the outcomes of the maintenance process, in terms of total contribution to manufacturing performance, should be made explicit. In this study, a model for energy efficiency analysis was developed, based on a cost and benefit balance. It is aimed at supporting decision making on technical and operational solutions for energy efficiency, through the optimization of maintenance interventions and operational procedures. A case study is described: the effects on energy efficiency of technical and operational optimization measures for bituminous materials production process equipment. The idea of the Conservation Supply Curve (CSC) was used to capture both the cost effectiveness and the energy efficiency effectiveness of the measures. The optimization was thus based on the energy consumption data registered on-site: data collection and modelling of the relevant data were used as a basis to implement a prognostic and health management (PHM) policy in the company. Based on the results of the analysis, efficiency measures for the industrial case study were proposed, also in relation to maintenance optimization and operating procedures. In the end, the impacts of the implementation of energy saving measures on the performance of the system, in terms of technical and economic feasibility, were demonstrated. The results showed that maintenance optimization could help in reaching
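
    A Conservation Supply Curve ranks measures by their cost of conserved energy (CCE) against cumulative savings. The sketch below computes a CCE ranking for a handful of invented measures under an assumed discount rate, lifetime and energy price; none of the figures come from the case study.

```python
# Sketch: a Conservation Supply Curve (CSC) ranking.  Measures are ordered by
# their cost of conserved energy (CCE) and compared with an assumed energy
# price.  All measures and numbers are invented for illustration.

DISCOUNT, LIFETIME = 0.08, 10                      # capital recovery assumptions

def cce(capital_cost, annual_saving_mwh, d=DISCOUNT, n=LIFETIME):
    """Cost of conserved energy = annualized capital cost / annual saving."""
    crf = d * (1 + d) ** n / ((1 + d) ** n - 1)    # capital recovery factor
    return capital_cost * crf / annual_saving_mwh  # EUR/MWh

measures = [                                       # (name, capital cost EUR, MWh/year saved)
    ("burner maintenance schedule", 12_000, 900),
    ("drum insulation upgrade",     45_000, 1500),
    ("variable-speed drives",       80_000, 1100),
    ("heat recovery on exhaust",   150_000, 2400),
]

energy_price = 60.0                                # EUR/MWh, assumed
cumulative = 0.0
print("measure                          CCE[EUR/MWh]  cum.saving[MWh/y]  cost-effective?")
for name, cost, saving in sorted(measures, key=lambda m: cce(m[1], m[2])):
    c = cce(cost, saving)
    cumulative += saving
    print("%-32s %11.1f %18.0f  %s" % (name, c, cumulative, c < energy_price))
```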

  13. A Quantitative, Time-Dependent Model of Oxygen Isotopes in the Solar Nebula: Step one

    Science.gov (United States)

    Nuth, J. A.; Paquette, J. A.; Farquhar, A.; Johnson, N. M.

    2011-01-01

    The remarkable discovery that oxygen isotopes in primitive meteorites were fractionated along a line of slope 1, rather than along the typical slope 0.52 terrestrial fractionation line, occurred almost 40 years ago. However, a satisfactory, quantitative explanation for this observation has yet to be found, though many different explanations have been proposed. The first of these explanations proposed that the observed line represented the final product produced by mixing molecular cloud dust with a nucleosynthetic component, rich in O-16, possibly resulting from a nearby supernova explosion. Donald Clayton suggested that Galactic Chemical Evolution would gradually change the oxygen isotopic composition of the interstellar grain population by steadily producing O-16 in supernovae, then producing the heavier isotopes as secondary products in lower mass stars. Thiemens and collaborators proposed a chemical mechanism that relied on the availability of additional active rotational and vibrational states in otherwise-symmetric molecules, such as CO2, O3 or SiO2, containing two different oxygen isotopes, and a second, photochemical process in which differential photochemical dissociation could fractionate oxygen. This second line of research has been pursued by several groups, though none of the current models is quantitative.

  14. Visible-light photocatalytic decolorization of reactive brilliant red X-3B on Cu{sub 2}O/crosslinked-chitosan nanocomposites prepared via one step process

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Chunhua [College of Resource and Environmental Science, Wuhan University, Wuhan 430072 (China); Key Laboratory of Optoelectronic Chemical Materials and Devices of Ministry of Education, College of Chemical and Environmental Engineering, Jianghan University, Wuhan 430056 (China); Xiao, Ling, E-mail: xiaoling9119@yahoo.cn [College of Resource and Environmental Science, Wuhan University, Wuhan 430072 (China); Liu, Li; Zhu, Huayue [College of Resource and Environmental Science, Wuhan University, Wuhan 430072 (China); Chen, Chunhua; Gao, Lin [Key Laboratory of Optoelectronic Chemical Materials and Devices of Ministry of Education, College of Chemical and Environmental Engineering, Jianghan University, Wuhan 430056 (China)

    2013-04-15

    Cu{sub 2}O/crosslinked-chitosan nanocomposites (Cu{sub 2}O/CS NCs) were prepared in situ via a simple one-step liquid phase precipitation–reduction process and characterized by XRD, FT-IR, SEM, TEM, BET, XPS and UV–vis/DRS. The characterization results showed that the Cu{sub 2}O/CS NCs were nearly spherical or ellipsoidal and that their surface was rough and porous because the Cu{sub 2}O particles were wrapped in chitosan. The chitosan layer was especially favorable for improving the adsorption of dye and molecular oxygen and for restraining the recombination of electron–hole pairs. The visible-light photocatalytic decolorization behavior of Cu{sub 2}O/CS NCs was evaluated using reactive brilliant red X-3B (X-3B) as a model pollutant. The influences of various experimental factors on X-3B decolorization were investigated. It was found that the photocatalytic decolorization process on Cu{sub 2}O/CS NCs followed an apparent pseudo-first-order kinetics model. The dye X-3B could be decolorized more efficiently in acidic media than in alkaline media. Cu{sub 2}O/CS NCs exhibited enhanced visible-light photocatalytic activity compared with other photocatalysts reported previously under similar experimental conditions.
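
    The apparent pseudo-first-order kinetics reported here correspond to ln(C0/C) = k_app·t. The sketch below fits k_app by linear regression to synthetic concentration readings, purely to illustrate the data treatment.

```python
import numpy as np

# Sketch: apparent pseudo-first-order kinetics, ln(C0/C) = k_app * t, fitted by
# linear regression.  The concentration readings are synthetic.

t = np.array([0, 20, 40, 60, 80, 100, 120], dtype=float)   # min
C = np.array([100.0, 81.0, 66.0, 53.5, 43.0, 35.0, 28.5])  # mg/L (synthetic)

y = np.log(C[0] / C)
k_app, intercept = np.polyfit(t, y, 1)      # slope = apparent rate constant
r2 = 1.0 - ((y - (k_app * t + intercept)) ** 2).sum() / ((y - y.mean()) ** 2).sum()

print("k_app = %.4f 1/min   R^2 = %.4f" % (k_app, r2))
print("half-life = %.0f min" % (np.log(2.0) / k_app))
```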

  15. Novel Ordered Stepped-Wedge Cluster Trial Designs for Detecting Ebola Vaccine Efficacy Using a Spatially Structured Mathematical Model.

    Directory of Open Access Journals (Sweden)

    Ibrahim Diakite

    2016-08-01

    Full Text Available During the 2014 Ebola virus disease (EVD) outbreak, policy-makers were confronted with difficult decisions on how best to test the efficacy of EVD vaccines. On one hand, many were reluctant to withhold a vaccine that might prevent a fatal disease from study participants randomized to a control arm. On the other, regulatory bodies called for rigorous placebo-controlled trials to permit direct measurement of vaccine efficacy prior to approval of the products. A stepped-wedge cluster study (SWCT) was proposed as an alternative to a more traditional randomized controlled vaccine trial to address these concerns. Here, we propose novel "ordered stepped-wedge cluster trial" (OSWCT) designs to further mitigate tradeoffs between ethical concerns, logistics, and statistical rigor. We constructed a spatially structured mathematical model of the EVD outbreak in Sierra Leone. We used the output of this model to simulate and compare a series of stepped-wedge cluster vaccine studies. Our model reproduced the observed order of first case occurrence within districts of Sierra Leone. Depending on the infection risk within the trial population and the trial start dates, the statistical power to detect a vaccine efficacy of 90% varied from 14% to 32% for a standard SWCT, and from 67% to 91% for OSWCTs, at an alpha error of 5%. The model's projection of first case occurrence was robust to changes in disease natural history parameters. Ordering clusters in a stepped-wedge trial based on each cluster's underlying risk of infection, as predicted by a spatial model, can increase the statistical power of a SWCT. In the event of another hemorrhagic fever outbreak, implementation of our proposed OSWCT designs could improve statistical power when a stepped-wedge study is desirable based on either ethical concerns or logistical constraints.
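
    A much-simplified sketch of power estimation by simulation is given below: infection counts in a vaccinated and a control group are drawn as binomials and compared with a crude one-sided two-proportion z-test. It ignores the clustering, spatial structure and stepped roll-out of the OSWCT designs; attack rates, efficacy and group sizes are assumptions chosen only to show how power rises with the underlying infection risk.

```python
import math
import numpy as np

# Sketch: statistical power by Monte Carlo for a two-arm vaccine comparison.
# Infection counts are binomial; the test is a crude one-sided two-proportion
# z-test (normal approximation).  This is not the spatially structured OSWCT
# model of the record above; all numbers are illustrative.

rng = np.random.default_rng(1)

def one_sided_p(cases_vacc, cases_ctrl, n):
    """P-value for H1: vaccinated attack rate < control attack rate."""
    p_pool = (cases_vacc + cases_ctrl) / (2.0 * n)
    se = math.sqrt(p_pool * (1.0 - p_pool) * 2.0 / n)
    if se == 0.0:
        return 1.0
    z = (cases_ctrl - cases_vacc) / (n * se)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def power(attack_rate, efficacy=0.9, n_per_arm=3000, alpha=0.05, n_sim=2000):
    rejections = 0
    for _ in range(n_sim):
        cases_ctrl = rng.binomial(n_per_arm, attack_rate)
        cases_vacc = rng.binomial(n_per_arm, attack_rate * (1.0 - efficacy))
        rejections += one_sided_p(cases_vacc, cases_ctrl, n_per_arm) < alpha
    return rejections / n_sim

for attack_rate in (0.002, 0.005, 0.01):
    print("attack rate %.3f -> estimated power %.2f" % (attack_rate, power(attack_rate)))
```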

  16. A Two-Step Model for Assessing Relative Interest in E-Books Compared to Print

    Science.gov (United States)

    Knowlton, Steven A.

    2016-01-01

    Librarians often wish to know whether readers in a particular discipline favor e-books or print books. Because print circulation and e-book usage statistics are not directly comparable, it can be hard to determine the relative interest of readers in the two types of books. This study demonstrates a two-step method by which librarians can assess…

  17. Multi-step Attack Modelling and Simulation (MsAMS) Framework based on Mobile Ambients

    NARCIS (Netherlands)

    Nunes Leal Franqueira, V.; Lopes, R H C; van Eck, Pascal

    Attackers take advantage of any security breach to penetrate an organisation perimeter and exploit hosts as stepping stones to reach valuable assets, deeper in the network. The exploitation of hosts is possible not only when vulnerabilities in commercial off-the-shelf (COTS) software components are

  18. Study of the Inception Length of Flow over Stepped Spillway Models ...

    African Journals Online (AJOL)

    The results showed that the inception (development) length increases as the unit discharge increases and it decreases with an increase in both stepped roughness height and chute angle. The ratio of the development length, in this study, to that of Bauer's was found to be 4:5. Finally, SMM-5 produced the least velocity of ...

  19. Modeling nonhomogeneous Markov processes via time transformation.

    Science.gov (United States)

    Hubbard, R A; Inoue, L Y T; Fann, J R

    2008-09-01

    Longitudinal studies are a powerful tool for characterizing the course of chronic disease. These studies are usually carried out with subjects observed at periodic visits giving rise to panel data. Under this observation scheme the exact times of disease state transitions and sequence of disease states visited are unknown and Markov process models are often used to describe disease progression. Most applications of Markov process models rely on the assumption of time homogeneity, that is, that the transition rates are constant over time. This assumption is not satisfied when transition rates depend on time from the process origin. However, limited statistical tools are available for dealing with nonhomogeneity. We propose models in which the time scale of a nonhomogeneous Markov process is transformed to an operational time scale on which the process is homogeneous. We develop a method for jointly estimating the time transformation and the transition intensity matrix for the time transformed homogeneous process. We assess maximum likelihood estimation using the Fisher scoring algorithm via simulation studies and compare performance of our method to homogeneous and piecewise homogeneous models. We apply our methodology to a study of delirium progression in a cohort of stem cell transplantation recipients and show that our method identifies temporal trends in delirium incidence and recovery.
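
    The time-transformation idea can be illustrated with a small simulation: a two-state process is homogeneous on an operational time scale u = h(t), and its sojourn times are mapped back to calendar time through the inverse transformation. The power-law form of h and the intensities below are assumptions for illustration, not the model fitted in the cited study.

```python
import numpy as np

# Sketch: a two-state Markov process that is time-homogeneous on an
# operational time scale u = h(t).  Here h(t) = t**a is an assumed form;
# sojourn times are drawn with constant intensities on the u-scale and mapped
# back to calendar time via h^{-1}(u) = u**(1/a).  Intensities and 'a' are
# illustrative.

rng = np.random.default_rng(0)
a = 0.6                            # a < 1: transition intensities decline over calendar time
exit_rate = {0: 1.2, 1: 0.8}       # constant exit rates on the operational scale

def simulate_path(t_max=10.0, state=0):
    """Return (calendar time, new state) pairs of the jumps up to t_max."""
    u, path = 0.0, [(0.0, state)]
    while True:
        u += rng.exponential(1.0 / exit_rate[state])   # homogeneous sojourn on the u-scale
        t = u ** (1.0 / a)                             # back-transform to calendar time
        if t > t_max:
            return path
        state = 1 - state
        path.append((t, state))

for t, s in simulate_path():
    print("t = %6.3f   state = %d" % (t, s))
```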

  20. The significance of multi-step partitioning : Processing-structure-property relationship in governing high strength-high ductility combination in medium-manganese steels

    NARCIS (Netherlands)

    Liu, S.; Xiong, Z.; Guo, H.; Shang, C.; Misra, R. D.K.

    2017-01-01

    Intercritical annealing, flash process and tempering were innovatively combined to obtain a high strength-high ductility combination in a 0.12C–4.89Mn-1.57Al steel. The process, referred to as multi-step partitioning (MSP), was designed to accomplish the following objectives: (a) enrichment of austenite with

  1. Software Engineering Laboratory (SEL) cleanroom process model

    Science.gov (United States)

    Green, Scott; Basili, Victor; Godfrey, Sally; Mcgarry, Frank; Pajerski, Rose; Waligora, Sharon

    1991-01-01

    The Software Engineering Laboratory (SEL) cleanroom process model is described. The term 'cleanroom' originates in the integrated circuit (IC) production process, where IC's are assembled in dust free 'clean rooms' to prevent the destructive effects of dust. When applying the clean room methodology to the development of software systems, the primary focus is on software defect prevention rather than defect removal. The model is based on data and analysis from previous cleanroom efforts within the SEL and is tailored to serve as a guideline in applying the methodology to future production software efforts. The phases that are part of the process model life cycle from the delivery of requirements to the start of acceptance testing are described. For each defined phase, a set of specific activities is discussed, and the appropriate data flow is described. Pertinent managerial issues, key similarities and differences between the SEL's cleanroom process model and the standard development approach used on SEL projects, and significant lessons learned from prior cleanroom projects are presented. It is intended that the process model described here will be further tailored as additional SEL cleanroom projects are analyzed.

  2. Business modeling process for university’s technology transfer offices

    Directory of Open Access Journals (Sweden)

    Marin Alexandru

    2017-07-01

    Full Text Available The present paper analyzes recommendations to increase the effectiveness of the technology transfer centers of the Romanian National Network for Innovation and Technology Transfer - ReNITT, hosted by universities. The study focuses on defining a conceptual framework for developing specific business models, by the specialized compartments of technology/knowledge transfer entities, using the specific instruments of the business modeling process. The qualitative and quantitative analysis of an eight-step scheme for pairing the building blocks of the Business Model Canvas with the specific technology transfer models, taking into account the elements of the technology transfer value chain and their connection with technology readiness levels, helps clarify this relatively “fuzzy” and complicated modeling process of university Technology Transfer Office activities, gathering all necessary information in a concentrated format. According to their mission, objectives and strategies, universities decide upon a certain business model for their Technology Transfer Offices, adaptable to the client segment and the value proposition to be attained through the offered services portfolio. In conclusion, in their activities Technology Transfer Offices identify, validate and exploit the opportunities arising from applied research results, by “technology push” methods. Specific competences (human and material) are also necessary to develop externally aware business models starting from the real needs of clients, by “market pull” techniques, which would contribute to enhancing the endogenous innovation potential of firms.

  3. Causally nonseparable processes admitting a causal model

    International Nuclear Information System (INIS)

    Feix, Adrien; Araújo, Mateus; Brukner, Caslav

    2016-01-01

    A recent framework of quantum theory with no global causal order predicts the existence of ‘causally nonseparable’ processes. Some of these processes produce correlations incompatible with any causal order (they violate so-called ‘causal inequalities’ analogous to Bell inequalities) while others do not (they admit a ‘causal model’ analogous to a local model). Here we show for the first time that bipartite causally nonseparable processes with a causal model exist, and give evidence that they have no clear physical interpretation. We also provide an algorithm to generate processes of this kind and show that they have nonzero measure in the set of all processes. We demonstrate the existence of processes which stop violating causal inequalities but are still causally nonseparable when mixed with a certain amount of ‘white noise’. This is reminiscent of the behavior of Werner states in the context of entanglement and nonlocality. Finally, we provide numerical evidence for the existence of causally nonseparable processes which have a causal model even when extended with an entangled state shared among the parties. (paper)

  4. Stochastic differential equation model to Prendiville processes

    International Nuclear Information System (INIS)

    Granita; Bahar, Arifah

    2015-01-01

    The Prendiville process is another variation of the logistic model which assumes a linearly decreasing population growth rate. It is a continuous time Markov chain (CTMC) taking integer values in a finite interval. The continuous time Markov chain can be approximated by a stochastic differential equation (SDE). This paper discusses the stochastic differential equation of the Prendiville process. The work started with the forward Kolmogorov equation of the continuous time Markov chain of the Prendiville process. It was then formulated in the form of a central-difference approximation. The approximation was then used in the Fokker-Planck equation in relation to the stochastic differential equation of the Prendiville process. The explicit solution of the Prendiville process was obtained from the stochastic differential equation. Therefore, the mean and variance functions of the Prendiville process could be easily found from the explicit solution.

  5. Stochastic differential equation model to Prendiville processes

    Energy Technology Data Exchange (ETDEWEB)

    Granita, E-mail: granitafc@gmail.com [Dept. of Mathematical Science, Universiti Teknologi Malaysia, 81310, Johor Malaysia (Malaysia); Bahar, Arifah [Dept. of Mathematical Science, Universiti Teknologi Malaysia, 81310, Johor Malaysia (Malaysia); UTM Center for Industrial & Applied Mathematics (UTM-CIAM) (Malaysia)

    2015-10-22

    The Prendiville process is another variation of the logistic model which assumes a linearly decreasing population growth rate. It is a continuous time Markov chain (CTMC) taking integer values in a finite interval. The continuous time Markov chain can be approximated by a stochastic differential equation (SDE). This paper discusses the stochastic differential equation of the Prendiville process. The work started with the forward Kolmogorov equation of the continuous time Markov chain of the Prendiville process. It was then formulated in the form of a central-difference approximation. The approximation was then used in the Fokker-Planck equation in relation to the stochastic differential equation of the Prendiville process. The explicit solution of the Prendiville process was obtained from the stochastic differential equation. Therefore, the mean and variance functions of the Prendiville process could be easily found from the explicit solution.
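
    A minimal Euler-Maruyama sketch of the SDE approximation described in these two records is given below. The bounded linear birth and death rates are the usual Prendiville (logistic) specification, but the exact parameterization and values are assumptions; the drift is birth minus death and the squared diffusion is birth plus death.

```python
import numpy as np

# Sketch: Euler-Maruyama integration of the SDE approximation to the
# Prendiville (logistic) process on [n1, n2], with birth rate a*(n2 - x) and
# death rate b*(x - n1) (assumed parameterization).  Values are illustrative.

rng = np.random.default_rng(42)
n1, n2, a, b = 10.0, 100.0, 0.05, 0.03

def birth(x): return a * (n2 - x)
def death(x): return b * (x - n1)

def euler_maruyama(x0=20.0, t_max=200.0, dt=0.1):
    n_steps = int(t_max / dt)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        drift = birth(x[k]) - death(x[k])
        diffusion = np.sqrt(max(birth(x[k]) + death(x[k]), 0.0))
        x[k + 1] = x[k] + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = min(max(x[k + 1], n1), n2)       # keep the path inside the state space
    return x

final = np.array([euler_maruyama()[-1] for _ in range(100)])
print("mean of simulated paths at t_max : %.2f" % final.mean())
print("deterministic equilibrium        : %.2f" % ((a * n2 + b * n1) / (a + b)))
```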

  6. Influence of a novel two-step austempering process on the strain-hardening behavior of austempered ductile cast iron (ADI)

    Energy Technology Data Exchange (ETDEWEB)

    Yang Jianghuai; Putatunda, Susil K

    2004-09-25

    An investigation was carried out to examine the influence of a novel two-step austempering process on the strain-hardening behavior of austempered ductile cast iron (ADI). Strain-hardening exponent (n value) of specimens austempered by conventional single-step austempering process as well as the novel two-step process were determined over the entire plastic deformation regions of the stress-strain curves. Optical microscopy and X-ray diffraction analysis were performed to examine mechanisms of strain-hardening behavior in ADI under monotonic (tensile) loading. Test results show that this novel two-step process has resulted in improved microstructural variables in the ADI matrix, and higher hardness, yield strength and tensile strengths, but lower ductility and strain-hardening exponent values compared to the conventional single-step austempering process. Test results also indicate that strain-hardening exponent of ADI is a function of amount and morphology of microstructural constituents and interaction intensities between carbon atoms and dislocations in the matrix.

  7. The Probability Model of Expectation Disconfirmation Process

    Directory of Open Access Journals (Sweden)

    Hui-Hsin HUANG

    2015-06-01

    Full Text Available This paper proposes a probability model to explore the dynamic process of customer satisfaction. Based on expectation disconfirmation theory, satisfaction is constructed from the customer's expectation before the purchase and the perceived performance after the purchase. An experimental method is designed to measure expectation disconfirmation effects, and the collected data are used to estimate overall satisfaction and calibrate the model. The results show a good fit between the model and the real data. The model has applications in business marketing for managing relationship satisfaction.

  8. Chain binomial models and binomial autoregressive processes.

    Science.gov (United States)

    Weiss, Christian H; Pollett, Philip K

    2012-09-01

    We establish a connection between a class of chain-binomial models of use in ecology and epidemiology and binomial autoregressive (AR) processes. New results are obtained for the latter, including expressions for the lag-conditional distribution and related quantities. We focus on two types of chain-binomial model, extinction-colonization and colonization-extinction models, and present two approaches to parameter estimation. The asymptotic distributions of the resulting estimators are studied, as well as their finite-sample performance, and we give an application to real data. A connection is made with standard AR models, which also has implications for parameter estimation. © 2011, The International Biometric Society.
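
    The connection described here can be made concrete with a small simulation: a binomial AR(1) process built from two binomial thinnings reproduces the colonization-extinction mechanism of a chain-binomial model. Parameters below are illustrative; the printed stationary mean and lag-1 autocorrelation follow the standard binomial AR(1) formulas.

```python
import numpy as np

# Sketch: binomial AR(1) process as a colonization-extinction chain-binomial
# model with N patches: each occupied patch stays occupied with probability
# alpha, each empty patch is colonized with probability beta.  Parameters are
# illustrative.

rng = np.random.default_rng(7)
N, alpha, beta, T = 50, 0.8, 0.1, 10_000

x = np.empty(T, dtype=int)
x[0] = 25
for t in range(1, T):
    survivors = rng.binomial(x[t - 1], alpha)        # alpha o X_{t-1} (binomial thinning)
    colonizers = rng.binomial(N - x[t - 1], beta)    # beta o (N - X_{t-1})
    x[t] = survivors + colonizers

pi = beta / (1.0 - alpha + beta)                     # stationary occupancy probability
rho1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print("empirical mean %.2f   vs  N*pi = %.2f" % (x.mean(), N * pi))
print("lag-1 autocorrelation %.3f   vs  alpha - beta = %.3f" % (rho1, alpha - beta))
```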

  9. Fiscal 2000 project for development of international standards for supporting novel industries. Standardization of production process system (Development of basic STEP standards); 2000 nendo shinki sangyo shiengata kokusai hyojun kaihatsu jigyo. Seisan process system no hyojunka (STEP kiban kikaku no kaihatsu)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    Efforts are under way to develop STEP (standard for the exchange of product model data) of ISO-10303 into specifications for expressing the total life cycle of products, from designing and manufacturing to disposition. Japan's study centers on product model expression involving 'thing-making for machine products.' In the study, the functions of the mechanical production process system ranging from designing to manufacturing were analyzed, and the results were built into an integrated application activity model (I-AAM). The I-AAM was analyzed, and an assembly model capable of expressing detailed relations between parts, such as mechanical linking and binding, was developed, and the model was accepted as a new work item at TC184/SC4 in February 2001. In relation to the parametric assembly of Part 108, moreover, the fruit of this research and development effort was adopted. In relation to manufacturing, problems were extracted involving important production designs between the processes of designing and manufacturing. They were raised at SC4, and a data model was proposed. (NEDO)

  10. A neurolinguistic model of grammatical construction processing.

    Science.gov (United States)

    Dominey, Peter Ford; Hoen, Michel; Inui, Toshio

    2006-12-01

    One of the functions of everyday human language is to communicate meaning. Thus, when one hears or reads the sentence, "John gave a book to Mary," some aspect of an event concerning the transfer of possession of a book from John to Mary is (hopefully) transmitted. One theoretical approach to language referred to as construction grammar emphasizes this link between sentence structure and meaning in the form of grammatical constructions. The objective of the current research is to (1) outline a functional description of grammatical construction processing based on principles of psycholinguistics, (2) develop a model of how these functions can be implemented in human neurophysiology, and then (3) demonstrate the feasibility of the resulting model in processing languages of typologically diverse natures, that is, English, French, and Japanese. In this context, particular interest will be directed toward the processing of novel compositional structure of relative phrases. The simulation results are discussed in the context of recent neurophysiological studies of language processing.

  11. Cell therapy-processing economics: small-scale microfactories as a stepping stone toward large-scale macrofactories.

    Science.gov (United States)

    Harrison, Richard P; Medcalf, Nicholas; Rafiq, Qasim A

    2018-03-01

    Manufacturing methods for cell-based therapies differ markedly from those established for noncellular pharmaceuticals and biologics. Attempts to 'shoehorn' these into existing frameworks have yielded poor outcomes. Some excellent clinical results have been realized, yet the emergence of a 'blockbuster' cell-based therapy has so far proved elusive. The pressure to provide these innovative therapies, even at a smaller scale, remains. In this process-economics research paper, we utilize cell expansion research data combined with operational cost modeling in a case study to demonstrate the alternative ways in which a novel mesenchymal stem cell-based therapy could be provided at small scale. This research outlines the feasibility of cell microfactories but highlights that there is strong pressure to automate processes and to split the quality-control cost burden over larger production batches. The study explores one potential paradigm of cell-based therapy provisioning as an exemplar on which to base manufacturing strategy.

  12. Similarity metrics for surgical process models.

    Science.gov (United States)

    Neumuth, Thomas; Loebe, Frank; Jannin, Pierre

    2012-01-01

    The objective of this work is to introduce a set of similarity metrics for comparing surgical process models (SPMs). SPMs are progression models of surgical interventions that support quantitative analyses of surgical activities, supporting systems engineering or process optimization. Five different similarity metrics are presented and proven. These metrics deal with several dimensions of process compliance in surgery, including granularity, content, time, order, and frequency of surgical activities. The metrics were experimentally validated using 20 clinical data sets each for cataract interventions, craniotomy interventions, and supratentorial tumor resections. The clinical data sets were controllably modified in simulations, which were iterated ten times, resulting in a total of 600 simulated data sets. The simulated data sets were subsequently compared to the original data sets to empirically assess the predictive validity of the metrics. We show that the results of the metrics for the surgical process models correlate significantly with the simulated modifications; the metrics thus meet predictive validity. The clinical use of the metrics was exemplarily demonstrated by assessing the learning curves of observers during surgical process model acquisition. Measuring similarity between surgical processes is a complex task. However, metrics for computing the similarity between surgical process models are needed in many uses in the field of medical engineering. These metrics are essential whenever two SPMs need to be compared, such as during the evaluation of technical systems, the education of observers, or the determination of surgical strategies. These metrics are key figures that provide a solid base for medical decisions, such as during validation of sensor systems for use in operating rooms in the future. Copyright © 2011 Elsevier B.V. All rights reserved.
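
    As a simple stand-in for the content and frequency dimensions mentioned above (not the five metrics formally defined in the paper), the sketch below scores two surgical process models, represented as activity sequences, with a Jaccard index and an overlap of relative activity frequencies.

```python
from collections import Counter

# Sketch: two simple similarity scores between surgical process models given
# as sequences of activity labels: a content similarity (Jaccard index of the
# activity sets) and a frequency similarity (overlap of relative activity
# frequencies).  These are illustrative stand-ins, not the metrics defined in
# the cited paper; the activity labels are invented.

spm_a = ["incise", "retract", "dissect", "coagulate", "dissect", "suture"]
spm_b = ["incise", "dissect", "coagulate", "coagulate", "irrigate", "suture"]

def content_similarity(a, b):
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def frequency_similarity(a, b):
    ca, cb = Counter(a), Counter(b)
    na, nb = len(a), len(b)
    labels = set(ca) | set(cb)
    # overlap of the two relative-frequency distributions (1.0 = identical)
    return sum(min(ca[l] / na, cb[l] / nb) for l in labels)

print("content similarity  : %.2f" % content_similarity(spm_a, spm_b))
print("frequency similarity: %.2f" % frequency_similarity(spm_a, spm_b))
```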

  13. A process algebra model of QED

    International Nuclear Information System (INIS)

    Sulis, William

    2016-01-01

    The process algebra approach to quantum mechanics posits a finite, discrete, determinate ontology of primitive events which are generated by processes (in the sense of Whitehead). In this ontology, primitive events serve as elements of an emergent space-time and of emergent fundamental particles and fields. Each process generates a set of primitive elements, using only local information, causally propagated as a discrete wave, forming a causal space termed a causal tapestry. Each causal tapestry forms a discrete and finite sampling of an emergent causal manifold (space-time) M and emergent wave function. Interactions between processes are described by a process algebra which possesses 8 commutative operations (sums and products) together with a non-commutative concatenation operator (transitions). The process algebra possesses a representation via nondeterministic combinatorial games. The process algebra connects to quantum mechanics through the set valued process and configuration space covering maps, which associate each causal tapestry with sets of wave functions over M. Probabilities emerge from interactions between processes. The process algebra model has been shown to reproduce many features of the theory of non-relativistic scalar particles to a high degree of accuracy, without paradox or divergences. This paper extends the approach to a semi-classical form of quantum electrodynamics. (paper)

  14. The Effect of Multi-step Oral-revision Processes on Iranian EFL Learners’ Argumentative Writing Achievement

    Directory of Open Access Journals (Sweden)

    Farrokhlagha Heidari

    2010-05-01

    Full Text Available The purpose of this study was to explore the role of two multi-step oral-revision processes as feedback-providing tools in Iranian EFL learners' argumentative writing achievement. The participants were 45 Iranian EFL students who were randomly assigned to three groups. The participants of the groups were given three argumentative writing assignments, each demanding three separate drafts. In the control group, the participants revised their essays in response to the teacher's written feedback, while the participants of the two experimental groups experienced oral-revision talks with their teacher or a peer. Two sets of quantitative and qualitative data were collected: argumentative essays written at the beginning and the end of the semester, and interviews. The results of the quantitative aspect of the study revealed a significant outperformance of the two experimental groups. Moreover, the data provided through interviews revealed some differences in the effectiveness of feedback between the two experimental groups. The participants of the peer-led group reported more awareness of rhetorical structures and an ability to revise surface errors, while the teacher-led group reported more global writing concerns such as content, organization of ideas, and discourse. The results indicate that the mutual co-construction of participation roles and certain combinations of negotiation and scaffolding allowed the teacher to provide a supportive conversational environment and assistance matched to the learners' proficiency in the teacher-led group, promoting greater learner participation.

  15. In situ biosynthesis of bacterial nanocellulose-CaCO{sub 3} hybrid bionanocomposite: One-step process

    Energy Technology Data Exchange (ETDEWEB)

    Mohammadkazemi, Faranak, E-mail: f_mkazemi@sbu.ac.ir [Department of Cellulose and Paper Technology, Faculty of New Technologies Engineering, Shahid Beheshti University, Science and Research Campus, Zirab, Savadkooh, Mazandaran (Iran, Islamic Republic of); Faria, Marisa; Cordeiro, Nereida [Faculty of Exact Science and Engineering, University of Madeira, Funchal (Portugal)

    2016-08-01

    In this work, a simple and green route to the synthesis of bacterial nanocellulose-calcium carbonate (BNC/CaCO{sub 3}) hybrid bionanocomposites using one-step in situ biosynthesis was studied. The CaCO{sub 3} was incorporated in the bacterial nanocellulose structure during cellulose biosynthesis by Gluconacetobacter xylinus PTCC 1734 bacteria. Hestrin-Schramm (HS) and Zhou (Z) culture media were used for the hybrid bionanocomposite production, and the effect of ethanol addition was investigated. Attenuated total reflection Fourier transform infrared spectroscopy, field emission scanning electron microscopy, X-ray diffraction, energy-dispersive X-ray spectroscopy, inverse gas chromatography and thermogravimetric analysis were used to characterize the samples. The experimental results demonstrated that the ethanol and culture medium play an important role in the BNC/CaCO{sub 3} hybrid bionanocomposite production, structure and properties. The BNC/CaCO{sub 3} biosynthesized in Z culture medium revealed a higher O/C ratio and amphoteric surface character, which justify the highest CaCO{sub 3} content incorporation. The CaCO{sub 3} was incorporated into the cellulosic matrix, decreasing the bacterial nanocellulose crystallinity. This work reveals the high potential of in situ biosynthesis of BNC/CaCO{sub 3} hybrid bionanocomposites and opens a new way to high value-added applications of bacterial nanocellulose. - Highlights: • BNC/CaCO{sub 3} hybrid bionanocomposites were produced using an in situ biosynthesis process. • Ethanol and culture medium play an important role in the production and properties. • Z-BNC/CaCO{sub 3} bionanocomposites revealed higher O/C ratio and amphoteric surface character. • CaCO{sub 3} incorporated into the BNC decreased crystallinity.

  16. Determinantal point process models on the sphere

    DEFF Research Database (Denmark)

    Møller, Jesper; Nielsen, Morten; Porcu, Emilio

    We consider determinantal point processes on the d-dimensional unit sphere Sd. These are finite point processes exhibiting repulsiveness and with moment properties determined by a certain determinant whose entries are specified by a so-called kernel which we assume is a complex covariance function ... and eigenfunctions in a spectral representation for the kernel, and we figure out how repulsive isotropic DPPs can be. Moreover, we discuss the shortcomings of adapting existing models for isotropic covariance functions and consider strategies for developing new models, including a useful spectral approach ...

  17. Performance assessment modeling of pyrometallurgical process wasteforms

    International Nuclear Information System (INIS)

    Nutt, W.M.; Hill, R.N.; Bullen, D.B.

    1995-01-01

    Performance assessment analyses have been completed to estimate the behavior of high-level nuclear wasteforms generated from the pyrometallurgical processing of liquid metal reactor (LMR) and light water reactor (LWR) spent nuclear fuel. Waste emplaced in the proposed repository at Yucca Mountain is investigated as the basis for the study. The resulting cumulative actinide and fission product releases to the accessible environment within a 100,000 year period from the various pyrometallurgical process wasteforms are compared to those of directly disposed LWR spent fuel using the same total repository system model. The impact of differing radionuclide transport models on the overall release characteristics is investigated

  18. A stepwise model for simulation-based curriculum development for clinical skills, a modification of the six-step approach.

    Science.gov (United States)

    Khamis, Nehal N; Satava, Richard M; Alnassar, Sami A; Kern, David E

    2016-01-01

    Despite the rapid growth in the use of simulation in health professions education, courses vary considerably in quality. Many do not integrate efficiently into an overall school/program curriculum or conform to academic accreditation requirements. Moreover, some of the guidelines for simulation design are specialty specific. We designed a model that integrates best practices for effective simulation-based training and a modification of Kern et al.'s 6-step approach for curriculum development. We invited international simulation and health professions education experts to complete a questionnaire evaluating the model. We reviewed comments and suggested modifications from respondents and reached consensus on a revised version of the model. We recruited 17 simulation and education experts. They expressed a consensus on the seven proposed curricular steps: problem identification and general needs assessment, targeted needs assessment, goals and objectives, educational strategies, individual assessment/feedback, program evaluation, and implementation. We received several suggestions for descriptors that applied the steps to simulation, leading to some revisions in the model. We have developed a model that integrates principles of curriculum development and simulation design that is applicable across specialties. Its use could lead to high-quality simulation courses that integrate efficiently into an overall curriculum.

  19. Retort process modelling for Indian traditional foods.

    Science.gov (United States)

    Gokhale, S V; Lele, S S

    2014-11-01

    Indian traditional staple and snack food is typically a heterogeneous recipe that incorporates varieties of vegetables, lentils and other ingredients. Modelling the retorting process of multilayer pouch-packed Indian food was achieved using a lumped-parameter approach. A unified model is proposed to estimate the cold point temperature. Initial process conditions, retort temperature and % solid content were the significant independent variables. A model was developed using a combination of vegetable solids and water, which was then validated using four traditional Indian vegetarian products: Pulav (steamed rice with vegetables), Sambar (south Indian style curry containing mixed vegetables and lentils), Gajar Halawa (carrot based sweet product) and Upama (wheat based snack product). The predicted and experimental temperature profiles matched within ±10 % error, which is a good match considering the food is a multi-component system. Thus the model will be useful as a tool to reduce the number of trials required to optimize retorting of various Indian traditional vegetarian foods.
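
    A lumped-parameter cold-point model of the kind referred to here can be written as dT/dt = (T_retort - T)/tau. The sketch below integrates heating and cooling phases and accumulates the F0 lethality; the time constant, process schedule and z-value are assumed for illustration and are not the fitted values of the study.

```python
import math

# Sketch: lumped-parameter cold-point heating, dT/dt = (T_env - T)/tau, over a
# heating phase followed by cooling, with F0 lethality accumulated along the
# way (reference temperature 121.1 C, z = 10 C).  tau and the schedule are
# assumed values, not the fitted model of the study.

tau = 18.0                                  # min, lumped time constant (assumed)
T_retort, T_cool, T0 = 121.0, 25.0, 70.0    # deg C
t_heat, t_cool, dt = 45.0, 20.0, 0.1        # min

T, F0, t = T0, 0.0, 0.0
while t < t_heat + t_cool:
    T_env = T_retort if t < t_heat else T_cool
    T += dt * (T_env - T) / tau                     # lumped-capacitance response
    F0 += dt * 10.0 ** ((T - 121.1) / 10.0)         # lethality increment
    t += dt

T_end_heat = T_retort - (T_retort - T0) * math.exp(-t_heat / tau)
print("cold-point temperature at end of heating ~ %.1f C" % T_end_heat)
print("accumulated F0 over the whole process    ~ %.1f min" % F0)
```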

  20. Models of transport processes in concrete

    International Nuclear Information System (INIS)

    Pommersheim, J.M.; Clifton, J.R.

    1991-01-01

    An approach being considered by the US Nuclear Regulatory Commission for disposal of low-level radioactive waste is to place the waste forms in concrete vaults buried underground. The vaults would need a service life of 500 years. Approaches for predicting the service life of concrete of such vaults include the use of mathematical models. Mathematical models are presented in this report for the major degradation processes anticipated for the concrete vaults, which are corrosion of steel reinforcement, sulfate attack, acid attack, and leaching. The models mathematically represent rate controlling processes including diffusion, convection, and reaction and sorption of chemical species. These models can form the basis for predicting the life of concrete under in-service conditions. 33 refs., 6 figs., 7 tabs
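
    For the purely diffusion-controlled ingress underlying several of these degradation models, the penetration profile under a constant surface concentration follows the complementary error function solution. The sketch below evaluates it for an assumed effective diffusivity and damage threshold; the values are illustrative only.

```python
import numpy as np
from scipy.special import erfc

# Sketch: diffusion-controlled ingress into concrete with constant surface
# concentration Cs: C(x, t) = Cs * erfc(x / (2*sqrt(D*t))).  The effective
# diffusivity and the damage threshold are assumed values.

D = 1.0e-12                       # m^2/s, effective diffusion coefficient (assumed)
threshold = 0.1                   # C/Cs level taken as the penetration front

x = np.linspace(0.0, 0.5, 501)    # depth, m
for years in (10, 100, 500):
    t = years * 365.25 * 24 * 3600.0
    C_over_Cs = erfc(x / (2.0 * np.sqrt(D * t)))
    front = x[np.argmax(C_over_Cs < threshold)]     # first depth below the threshold
    print("after %3d years the C/Cs = %.1f front is at ~%3.0f mm"
          % (years, threshold, front * 1000))
```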