WorldWideScience

Sample records for modeling techniques employed

  1. Automatic video segmentation employing object/camera modeling techniques

    NARCIS (Netherlands)

    Farin, D.S.

    2005-01-01

    Practically established video compression and storage techniques still process video sequences as rectangular images without further semantic structure. However, humans watching a video sequence immediately recognize acting objects as semantic units. This semantic object separation is currently not

  2. Atmospheric and dispersion modeling in areas of highly complex terrain employing a four-dimensional data assimilation technique

    International Nuclear Information System (INIS)

    Fast, J.D.; O'Steen, B.L.

    1994-01-01

    The results of this study indicate that the current data assimilation technique can have a positive impact on the mesoscale flow fields; however, care must be taken in its application to grids of relatively fine horizontal resolution. Continuous FDDA is a useful tool in producing high-resolution mesoscale analysis fields that can be used to (1) create better initial conditions for mesoscale atmospheric models and (2) drive transport models for dispersion studies. While RAMS is capable of predicting the qualitative flow during this evening, additional experiments need to be performed to improve the prognostic forecasts made by RAMS and refine the FDDA procedure so that the overall errors are reduced even further. Although a great deal of computational time is necessary to execute RAMS and LPDM in the configuration employed in this study, recent advances in workstations are making applications such as this more practical. As the speed of these machines increases in the next few years, it will become feasible to employ prognostic, three-dimensional mesoscale/transport models to routinely predict atmospheric dispersion of pollutants, even over highly complex terrain. For example, the version of RAMS in this study could be run in a "nowcasting" mode that would continually assimilate local and regional observations as soon as they become available. The atmospheric physics in the model would be used to determine the wind field where no observations are available. The three-dimensional flow fields could be used as dynamic initial conditions for a model forecast. The output from this type of modeling system will have to be compared to existing diagnostic, mass-consistent models to determine whether the wind field and dispersion forecasts are significantly improved.
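
    The continuous FDDA described above is, in essence, Newtonian relaxation ("nudging"): the model state is continuously relaxed toward observations through an extra tendency term. A minimal sketch of the idea with a scalar state and illustrative coefficients (not taken from RAMS) might look like:

```python
def nudge(state, obs, model_tendency, G, dt, steps):
    """Euler-integrate d(state)/dt = model_tendency + G * (obs - state).

    G is the relaxation (nudging) coefficient; larger G pulls the model
    state toward the observation more strongly.
    """
    for _ in range(steps):
        state += (model_tendency + G * (obs - state)) * dt
    return state

# Hypothetical values: relax a scalar "wind component" toward an observation.
final = nudge(state=0.0, obs=10.0, model_tendency=0.0, G=0.1, dt=1.0, steps=100)
```

    With zero physical tendency the state relaxes exponentially toward the observed value; in a real mesoscale model the same relaxation term is added to each prognostic equation at every grid point.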

  3. Graphology: A Nontraditional Employment-Screening Technique.

    Science.gov (United States)

    Augustin, Harriet M.; Hinkson, Diana; Halatin, Ted J.

    2001-01-01

    Explains procedures for handwriting analysis (graphology) and the benefits and limitations of its use in employment screening. Presents implications for teaching penmanship in business education. (SK)

  4. Employing innovative techniques to reduce inspection times

    International Nuclear Information System (INIS)

    Heumueller, R.; Guse, G.; Dirauf, F.; Fischer, E.

    1997-01-01

    Shorter inspection periods mean lower revision costs and less tight revision schedules, but must not detract from the quality of inspection findings. This requirement imposes upon the company performing the inspection the need for top achievements both in quality management and in the use of innovative techniques. Flexible equipment systems and inspection techniques adapted to specific purposes are able to reduce inspection times in many inspection jobs. As part of a complete system designed to reduce inspection times, the new Saphir (Siemens Alok Phased Array Integrated Reliable UT-System) inspection equipment system is the core of most of the recent innovations. Being an integrated inspection equipment system, it is able to handle conventional US probes as well as arrays and phased arrays. It is open for further matching to specific inspection and administrative requirements and developments, and it may be incorporated in the network of an integrated system with a database. A technological leap in probe design in the past few years has allowed controllable wave fields to be generated which are in no way inferior to those of conventional probes with fixed angles of incidence. In this way, a number of inspection techniques can be implemented with a single probe. This reduces inspection times, setup and retooling times, and doses. Typical examples already used in practice are the LLT (longitudinal-longitudinal-transverse waves) technique and the integration of inspections for longitudinal and transverse defects in a single run. In the near future, surfaces with complicated curvatures will be inspected by novel modular robot systems consisting of individual modules of linear axes and rotational axes. (orig.) [de]

  5. Evaluation of fish handling techniques employed by artisanal fishers ...

    African Journals Online (AJOL)

    Evaluation of fish handling techniques employed by artisanal fishers on quality of Lethrinids and Siganids fish genera at landing time along the Kenyan coast using sensory and microbiological methods.

  6. Pedagogical Techniques Employed by the Television Show "MythBusters"

    Science.gov (United States)

    Zavrel, Erik

    2016-11-01

    "MythBusters," the long-running though recently discontinued Discovery Channel science entertainment television program, has proven itself to be far more than just a highly rated show. While its focus is on entertainment, the show employs an array of pedagogical techniques to communicate scientific concepts to its audience. These techniques include: achieving active learning, avoiding jargon, employing repetition to ensure comprehension, using captivating demonstrations, cultivating an enthusiastic disposition, and increasing intrinsic motivation to learn. In this content analysis, episodes from the show's 10-year history were examined for these techniques. "MythBusters" represents an untapped source of pedagogical techniques, which science educators may consider availing themselves of in their tireless effort to better reach their students. Physics educators in particular may look to "MythBusters" for inspiration and guidance in how to incorporate these techniques into their own teaching and help their students in the learning process.

  7. Employing 3R Techniques in Managing Cement Industry Waste

    Directory of Open Access Journals (Sweden)

    Lamyaa Mohammed Dawood

    2018-01-01

    Full Text Available Waste management protects human health, property, and the environment, and conserves valuable natural resources. The lean-green waste of an organization’s operations can be decreased through implementation of 3R (Reduce, Reuse, and Recycle) techniques that reduce manufacturing system wastes. This research aims to integrate the lean-green waste of the manufacturing system through employing 3R techniques and the weighted properties method in order to manage waste. The Al-Kufa cement plant is employed as a case study. Results are generated using Edraw Max Version 7 and Excel. Overall results show that the reduce technique makes the major contribution to lean-green waste management at 55%, while the recycling technique makes the minor contribution at 18%. Defect waste shows the greatest lean-green integration, while air-emissions waste shows the least.

  8. Employing lighting techniques during on-orbit operations

    Science.gov (United States)

    Wheelwright, Charles D.; Toole, Jennifer R.

    As a result of past space missions and evaluations, many procedures have been established and shown to be prudent applications for use in present and future space environment scenarios. However, recent procedures to employ the use of robotics to assist crewmembers in performing tasks which require viewing remote and obstructed locations have led to a need to pursue alternative methods to assist in these operations. One of those techniques which is under development entails incorporating the use of suitable lighting aids/techniques with a closed circuit television (CCTV) camera/monitor system to supervise the robotics operations. The capability to provide adequate lighting during grappling, deploying, docking and berthing operations under all on-orbit illumination conditions is essential to a successful mission. Using automated devices such as the Remote Manipulator System (RMS) to dock and berth a vehicle during payload retrieval, under nighttime, earthshine, solar, or artificial illumination conditions can become a cumbersome task without first incorporating lighting techniques that provide the proper target illumination, orientation, and alignment cues. Studies indicate that the use of visual aids such as the CCTV with a pretested and properly oriented lighting system can decrease the time necessary to accomplish grappling tasks. Evaluations have been and continue to be performed to assess the various on-orbit conditions in order to predict and determine the appropriate lighting techniques and viewing angles necessary to assist crewmembers in payload operations.

  9. Developing the EFL Students’ Reading Comprehension By Employing Herringbone Technique

    Directory of Open Access Journals (Sweden)

    Yunisrina Qismullah Yusuf

    2016-11-01

    Full Text Available This research aims to find out whether the Herringbone Technique can develop EFL students’ reading comprehension, especially in finding the main idea and supporting details in a recount text. The literature notes that Herringbone is known to be one of the techniques that can help students find important information and the main idea in a text by seeking answers to six comprehension questions on a diagram shaped like a fish’s skeleton. The sample of this research was 30 second-grade junior high school students. The researchers employed simple random sampling to choose the sample. The study used a quantitative experimental approach with a pre-experimental, one-group pre-test/post-test design. The pre-test was given before treatments, while the post-test was given after the students were taught reading using the Herringbone Technique. Each test consisted of ten questions. Statistical formulas were used to analyze the data from the students’ test scores. The mean score of the pre-test was 59 and of the post-test was 76. Furthermore, the t-test result was 10.15, while the critical value at the 0.05 significance level was 2.045, with 29 degrees of freedom. Since the t-test value (10.15) was higher than the t-table value (2.045), this study concludes that this technique could increase the EFL students’ reading comprehension. Although the researchers found in practice that this technique reduced the students’ note-taking effort, it did not inhibit them from obtaining improved scores in their reading tests.
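
    The statistic reported above can be reproduced in outline with a paired (dependent-samples) t-test on pre-test and post-test scores. The score lists below are hypothetical stand-ins, not the study's data:

```python
import math
import statistics

def paired_t(pre, post):
    """t = mean(d) / (stdev(d) / sqrt(n)) for differences d = post - pre."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))

# Hypothetical scores for 10 students (df = 9, critical value 2.262 at 0.05).
pre  = [55, 60, 58, 62, 57, 59, 61, 56, 63, 60]
post = [70, 78, 74, 80, 72, 76, 79, 71, 81, 77]
t = paired_t(pre, post)
```

    If the computed t exceeds the tabled critical value for n - 1 degrees of freedom, the gain from pre-test to post-test is judged significant, which is the comparison the study makes with t = 10.15 against 2.045 at df = 29.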

  10. Markowitz portfolio optimization model employing fuzzy measure

    Science.gov (United States)

    Ramli, Suhailywati; Jaaman, Saiful Hafizah

    2017-04-01

    Markowitz in 1952 introduced the mean-variance methodology for portfolio selection problems. His pioneering research shaped the portfolio risk-return model and became one of the most important research fields in modern finance. This paper extends the classical Markowitz mean-variance portfolio selection model by applying a fuzzy measure to determine risk and return. We use the original mean-variance model as a benchmark and compare it with fuzzy mean-variance models in which the returns are modeled by specific types of fuzzy numbers. The models with the fuzzy approach give better performance than the mean-variance approach. Numerical examples employing Malaysian share market data are included to illustrate these models.
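
    The classical mean-variance benchmark referred to above has a closed form in the two-asset case: the minimum-variance weights follow directly from the variances and covariance. A minimal sketch with illustrative numbers (not Malaysian market data):

```python
def min_variance_weights(var1, var2, cov12):
    """Closed-form minimum-variance weights for a two-asset portfolio."""
    w1 = (var2 - cov12) / (var1 + var2 - 2 * cov12)
    return w1, 1 - w1

def portfolio_stats(w1, w2, mu1, mu2, var1, var2, cov12):
    """Portfolio expected return and variance under the Markowitz model."""
    ret = w1 * mu1 + w2 * mu2
    var = w1 ** 2 * var1 + w2 ** 2 * var2 + 2 * w1 * w2 * cov12
    return ret, var

# Illustrative inputs: asset variances 0.04 and 0.09, covariance 0.01.
w1, w2 = min_variance_weights(var1=0.04, var2=0.09, cov12=0.01)
ret, var = portfolio_stats(w1, w2, mu1=0.08, mu2=0.12,
                           var1=0.04, var2=0.09, cov12=0.01)
```

    Diversification shows up directly: the combined variance falls below that of either asset alone. The fuzzy extensions the paper studies replace the crisp returns with fuzzy numbers but keep this same risk-return structure.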

  11. Employing image processing techniques for cancer detection using microarray images.

    Science.gov (United States)

    Dehghan Khalilabad, Nastaran; Hassanpour, Hamid

    2017-02-01

    Microarray technology is a powerful genomic tool for simultaneously studying and analyzing the behavior of thousands of genes. The analysis of images obtained from this technology plays a critical role in the detection and treatment of diseases. The aim of the current study is to develop an automated system for analyzing data from microarray images in order to detect cancerous cases. The proposed system consists of three main phases, namely image processing, data mining, and detection of the disease. The image processing phase performs operations such as refining image rotation, gridding (locating genes), and extracting raw data from the images; the data mining phase includes normalizing the extracted data and selecting the more effective genes. Finally, cancerous cells are recognized from the extracted data. To evaluate the performance of the proposed system, microarray databases are employed covering breast cancer, myeloid leukemia, and lymphomas from the Stanford Microarray Database. The results indicate that the proposed system is able to identify the type of cancer from the data sets with accuracies of 95.45%, 94.11%, and 100%, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
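
    The gridding and raw-data-extraction step described above can be sketched as follows: once the image is aligned to a regular spot grid, each gene's raw value is the mean intensity inside its grid cell. The tiny synthetic 4x4 "image" with a 2x2 spot grid is illustrative only; real pipelines also refine rotation and correct for background:

```python
def grid_means(image, rows, cols):
    """Mean pixel intensity per grid cell for a rows x cols spot grid."""
    h, w = len(image), len(image[0])
    ch, cw = h // rows, w // cols   # cell height and width in pixels
    means = []
    for r in range(rows):
        row_means = []
        for c in range(cols):
            cell = [image[y][x]
                    for y in range(r * ch, (r + 1) * ch)
                    for x in range(c * cw, (c + 1) * cw)]
            row_means.append(sum(cell) / len(cell))
        means.append(row_means)
    return means

# Synthetic image: four uniform 2x2 spots with intensities 10, 90, 50, 20.
image = [[10, 10, 90, 90],
         [10, 10, 90, 90],
         [50, 50, 20, 20],
         [50, 50, 20, 20]]
means = grid_means(image, rows=2, cols=2)   # [[10.0, 90.0], [50.0, 20.0]]
```

    The resulting per-spot intensities are the raw expression values that the data mining phase then normalizes and filters.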

  12. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    "Engaging, elegantly written." - Applied Mathematical Modelling. Mathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models. The author begins with a discussion of the term "model," followed by clearly presented examples of the different types of models.

  13. Laser and optical techniques employed in enviromental monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Sunesson, A

    1988-03-01

    Monitoring of trace gases in the atmosphere has been performed with differential absorption lidar and differential optical absorption spectroscopy. A new mobile differential absorption lidar system has been constructed; it is described in detail and examples of SO2 and NO2 measurements are given. Studies of the NO2 concentration distribution in an urban area during a temperature inversion were performed. Detection of Cl2 with DIAL was investigated. Using a narrow-bandwidth laser system, range-resolved measurements of NO and Hg were investigated. Mapping of NO and Hg plumes was performed. The use of mercury as a tracer gas for geothermal energy exploration was investigated during a field test in Iceland. Contrary to expectations, very low mercury concentrations were detected. A high-resolution differential optical absorption system has been constructed and applied in long-path monitoring of SO2 and NO2. Detection of atmospheric atomic mercury was investigated. A multipass absorption cell (White cell) has been constructed and used for spectroscopic measurements. Weak oxygen absorption lines in the vicinity of the Hg line were studied both with laser and optical techniques. (With 176 refs.)

  14. Boundary representation modelling techniques

    CERN Document Server

    2006-01-01

    Provides the most complete presentation of boundary representation solid modelling yet published. Offers basic reference information for software developers, application developers and users. Includes a historical perspective as well as a background for modern research.

  15. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  16. An Elderly Employment Model For The Thai Automotive Industry

    Directory of Open Access Journals (Sweden)

    Pornrat Sadangharn

    2017-10-01

    Full Text Available This research aims at studying an elderly employment model for the Thai automotive industry. Mixed methods with a sequential exploratory strategy were utilized. Grounded theory was employed, using in-depth interviews to investigate the characteristics of elderly employment in the Thai automotive industry. For this stage of the research, theoretical and purposive sampling was used to select 32 key informants from four groups of stakeholders: (1) elderly workers, (2) employers or human resource managers, (3) government officers, and (4) academics. The findings were then validated using a quantitative approach with structural equation modelling (SEM). A total of 308 elderly workers and human resource managers were surveyed regarding their opinions about elderly employment. Based on the survey, the elderly employment model in the Thai automotive industry, which comprises the approach to elderly employment, elderly employment preparation, and key success factors for elderly employment, was revealed to be a good fit.

  17. New techniques for subdivision modelling

    OpenAIRE

    BEETS, Koen

    2006-01-01

    In this dissertation, several tools and techniques for modelling with subdivision surfaces are presented. Based on the huge amount of theoretical knowledge about subdivision surfaces, we present techniques to facilitate practical 3D modelling which make subdivision surfaces even more useful. Subdivision surfaces have reclaimed attention several years ago after their application in full-featured 3D animation movies, such as Toy Story. Since then and due to their attractive properties an ever i...

  18. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages has been attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  19. Possibilities to employ noise analysis techniques in controlling nuclear power stations

    International Nuclear Information System (INIS)

    Alfonso Pallares, C.; Iglesias Ferrer, R.; Sarabia Molina, I.

    1998-01-01

    This work sets out the basic requirements that, in the authors' view, must be met by monitoring systems for operational surveillance based on noise analysis techniques, which can in turn be employed in regulatory control.

  20. An employer brand predictive model for talent attraction and retention

    Directory of Open Access Journals (Sweden)

    Annelize Botha

    2011-11-01

    Full Text Available Orientation: In an ever-shrinking global talent pool, organisations use employer brand to attract and retain talent; however, in the absence of theoretical pointers, many organisations are losing out on a powerful business tool by not developing or maintaining their employer brand correctly. Research purpose: This study explores the current state of knowledge about employer brand and identifies the various employer brand building blocks which are conceptually integrated in a predictive model. Motivation for the study: The need for scientific progress through the accurate representation of a set of employer brand phenomena and propositions, which can be empirically tested, motivated this study. Research design, approach and method: This study was nonempirical in approach and searched for linkages between theoretical concepts by making use of relevant contextual data. Theoretical propositions which explain the identified linkages were developed for the purpose of further empirical research. Main findings: Key findings suggested that employer brand is influenced by target group needs, a differentiated Employer Value Proposition (EVP), the people strategy, brand consistency, communication of the employer brand and measurement of Human Resources (HR) employer branding efforts. Practical/managerial implications: The predictive model provides corporate leaders and their human resource functionaries a theoretical pointer relative to employer brand which could guide more effective talent attraction and retention decisions. Contribution/value add: This study adds to the small base of research available on employer brand and contributes to both scientific progress as well as an improved practical understanding of factors which influence employer brand.

  1. ANALYSIS MODEL FOR RETURN ON CAPITAL EMPLOYED

    Directory of Open Access Journals (Sweden)

    BURJA CAMELIA

    2013-02-01

    Full Text Available At the microeconomic level, the appreciation of capital profitability is a very complex action which is of interest for stakeholders. The main purpose of this study is to extend the traditional analysis model for capital profitability, based on the ratio “Return on capital employed”. In line with this, the objectives of this work are to identify the factors that influence the profitability of the capital a company employs and to measure their contribution to the phenomenon. The proposed analysis model is validated on the use case of a representative company from the agricultural sector. The results obtained reveal that several factors can act positively on capital profitability: capital turnover, sales efficiency, an increased share of sales in total revenues, and improvement of expense efficiency. The findings are useful both for decision makers in substantiating economic strategies and for capital owners who are interested in the efficiency of their investments.
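
    The kind of factor decomposition the abstract describes can be sketched in its simplest form: return on capital employed factors into sales efficiency (operating margin) times capital turnover, so a change in either factor flows directly into ROCE. The figures below are illustrative, not the case-study company's data:

```python
def roce(ebit, capital_employed):
    """Return on capital employed, computed directly."""
    return ebit / capital_employed

def roce_factored(ebit, sales, capital_employed):
    """ROCE as sales efficiency (EBIT/sales) times capital turnover
    (sales/capital employed); algebraically identical to the direct ratio."""
    margin = ebit / sales
    turnover = sales / capital_employed
    return margin * turnover

# Hypothetical figures: EBIT 120, sales 800, capital employed 1000.
direct = roce(ebit=120.0, capital_employed=1000.0)
factored = roce_factored(ebit=120.0, sales=800.0, capital_employed=1000.0)
```

    Extended models of this type simply insert further factors (e.g. the share of sales in total revenues) into the same multiplicative chain, so each factor's contribution to the change in ROCE can be measured separately.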

  2. Advanced Atmospheric Ensemble Modeling Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Chiswell, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kurzeja, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Maze, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Viner, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-29

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension to work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied, a coastal release (SF6) and an inland release (Freon) which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce required computing resources for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release where the spatial and temporal differences due to interior valley heating lead to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement shown in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data is assimilated into the simulation and enhances SRNL’s capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.

  3. Managing age discrimination: an examination of the techniques used when seeking employment.

    Science.gov (United States)

    Berger, Ellie D

    2009-06-01

    This article examines the age-related management techniques used by older workers in their search for employment. Data are drawn from interviews with individuals aged 45-65 years (N = 30). Findings indicate that participants develop "counteractions" and "concealments" to manage perceived age discrimination. Individuals counteract employers' ageist stereotypes by maintaining their skills and changing their work-related expectations and conceal age by altering their résumés, physical appearance, and language used. This research suggests that there is a need to reexamine the hiring practices of employers and to improve legislation in relation to their accountability.

  4. Multichannel and Multispectral Image Restoration Employing Fuzzy Theory and Directional Techniques

    OpenAIRE

    Rosales, Alberto; Ponomaryov, Volodymyr

    2009-01-01

    A novel, robust framework has been designed to remove impulse noise and additive noise in images and multichannel video sequences. Unlike existing techniques, the designed approach employs fuzzy and directional techniques to estimate motion and noise in the past and present frames, showing good results. The designed fuzzy rules characterize the presence of motion and noise between the pixels in two frames (past and present). It has been demonstrated that the combined use of gra...

  5. Managing Age Discrimination: An Examination of the Techniques Used when Seeking Employment

    Science.gov (United States)

    Berger, Ellie D.

    2009-01-01

    Purpose: This article examines the age-related management techniques used by older workers in their search for employment. Design and Methods: Data are drawn from interviews with individuals aged 45-65 years (N = 30). Results: Findings indicate that participants develop "counteractions" and "concealments" to manage perceived age discrimination.…

  6. Sequential mass spectrometric analysis of uranium and plutonium employing resin bead technique

    International Nuclear Information System (INIS)

    Ramakumar, K.L.; Aggarwal, S.K.; Chitambar, S.A.; Jain, H.C.

    1985-01-01

    Sequential mass spectrometric analysis of uranium and plutonium employing the anion exchange resin bead technique is reported, using a highly sensitive single-stage magnetic analyser instrument, the routinely employed rhenium double-filament assembly, and 0.5M HNO3 as a wetting agent for loading the resin beads. A precision of better than 0.3 per cent (2σ) is obtained on the isotopic ratio measurements. However, extreme care has to be exercised to carry out the resin bead experiments under ultra-clean conditions so as to avoid pick-up of contamination. (author)

  7. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    In this era, there are many business process modeling techniques. This article reports research on the differences among business process modeling techniques; for each technique, the definition and structure are explained. The paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation and how the technique works when implemented in Somerleyton Animal Park. The discussion of each technique ends with its advantages and disadvantages. The final conclusion recommends business process modeling techniques that are easy to use and serves as the basis for evaluating further modelling techniques.

  8. The Swedish Model in Employment. Post-Crisis Solutions

    Directory of Open Access Journals (Sweden)

    Mirela Ionela ACELEANU

    2012-04-01

    Full Text Available The Swedish model in the field of employment is part of the Nordic model, which combines labour market flexibility with a generous welfare system. This paper presents the characteristics of this model and the effects of its application in Sweden before and after the current crisis. According to specialized studies and registered indicators, Sweden has very good results in terms of general well-being, quality of life, and employment of labour resources, ranking among the top countries. Thus, since the experience of the developed countries should be a benchmark for developing countries, the purpose of the paper is to identify measures that can be applied in Romania to improve employment and economic and social policies.

  9. Employment, Production and Consumption model: Patterns of phase transitions

    Science.gov (United States)

    Lavička, H.; Lin, L.; Novotný, J.

    2010-04-01

    We have simulated the model of Employment, Production and Consumption (EPC) using Monte Carlo. The EPC model is an agent based model that mimics very basic rules of industrial economy. From the perspective of physics, the nature of the interactions in the EPC model represents multi-agent interactions where the relations among agents follow the key laws for circulation of capital and money. Monte Carlo simulations of the stochastic model reveal phase transition in the model economy. The two phases are the phase with full unemployment and the phase with nearly full employment. The economy switches between these two states suddenly as a reaction to a slight variation in the exogenous parameter, thus the system exhibits strong non-linear behavior as a response to the change of the exogenous parameters.
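
    The sudden switching between the two phases can be illustrated with a stylized mean-field map: the employment rate feeds back on itself (employed agents consume, sustaining further employment), and a small change in an exogenous parameter b flips the economy between a low- and a high-employment fixed point. This toy map is illustrative only, not the actual EPC agent model:

```python
import math

def steady_employment(b, k=10.0, e0=0.5, steps=200):
    """Iterate e <- sigmoid(k * (e - 0.5) + b) to its fixed point.

    k sets the strength of the employment feedback; b is the exogenous
    parameter whose small variation switches the phase.
    """
    e = e0
    for _ in range(steps):
        e = 1.0 / (1.0 + math.exp(-(k * (e - 0.5) + b)))
    return e

low  = steady_employment(b=-1.0)   # settles near full unemployment
high = steady_employment(b=+1.0)   # settles near full employment
```

    The strong non-linearity the abstract reports appears here as the same feedback rule producing two very different steady states under a modest shift in b.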

  10. Techniques employed for detection of hot particles in the marine environment

    International Nuclear Information System (INIS)

    Pillsbury, G D

    2007-01-01

    During the decommissioning of the Maine Yankee nuclear plant, several methods were developed and employed to survey for hot particles in the marine environment surrounding the site. The methods used and the sensitivities achieved in the search for environmentally dispersed particles during the various decommissioning activities performed are described in detail. Surveys were performed on dry soil, exposed marine sediment and submerged marine sediment. Survey techniques ranged from the use of the basic NaI detector coupled to a count rate meter to an intrinsic germanium detector deployed in a submarine housing coupled to a multi-channel analyser. The initial surveys consisted of collecting samples of marine sediment, spreading them out over a 1 m² surface in a thin layer, and scanning the deposited sediment by hand using a 5 cm by 5 cm NaI detector coupled to a standard count rate meter. This technique was later replaced by walkover scans with the 5 cm by 5 cm NaI detector moved in a serpentine pattern over the sediment surface. By coupling the detector to a 'smart meter', an alarm set point could be used to alert the surveyor to the presence of a particle within the instrument's field of view. A similar technique, with the detector mounted in a watertight housing secured to the end of a pole, was also employed to scan underwater locations. The most sensitive method developed for performing underwater surveys was the use of the intrinsic germanium detector placed in a submarine housing. Detailed descriptions of the methods employed and the results obtained are presented. This work demonstrates that there are several approaches to surveying for discrete particles in the marine environment and the relative merits of each are considered

  11. Construction and use of an optical semi-automatic titrator employing the technique of reflectance photometry

    International Nuclear Information System (INIS)

    Hwang, Hoon

    2001-01-01

    An optical semi-automatic titrator was constructed employing the technique of reflectance spectrometry and was tested for determining the end points of acid-base, precipitation, and EDTA titrations. Since the current optical semi-automatic titrator, built on the principle of reflectance spectrometry, could be used successfully even to determine the end point in precipitation titrations, where solid particles form during the titration process, it appears feasible that a completely automated optical titrator could be designed and built based on the current findings

  12. Characterization of ultrashort laser pulses employing self-phase modulation dispersion-scan technique

    Science.gov (United States)

    Sharba, A. B.; Chekhlov, O.; Wyatt, A. S.; Pattathil, R.; Borghesi, M.; Sarri, G.

    2018-03-01

    We present a new phase characterization technique for ultrashort laser pulses that employs self-phase modulation (SPM) in the dispersion-scan approach. The method can be implemented by recording a set of nonlinearly modulated spectra generated with a set of known chirp values. The unknown phase of the pulse is retrieved by linking the recorded spectra to the initial spectrum of the pulse via a phase function guessed by an iterative function-minimization algorithm. This technique has many advantages over dispersion-scan techniques that use frequency-conversion processes. Mainly, the use of SPM cancels out the phase- and group-velocity mismatch errors and dramatically widens the spectral acceptance of the nonlinear medium and the range of working wavelengths. The robustness of the technique is demonstrated with smooth and complex phase retrievals using numerical examples. The method is shown to be unaffected by the spatial distribution of the beam or the presence of nonlinear absorption. In addition, we present an efficient method for phase representation based on a summation of a set of Gaussian functions. The independence of the functions from each other prevents phase coupling of any kind and facilitates a flexible phase representation.
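
    The Gaussian-sum phase representation mentioned at the end of this abstract can be sketched as follows. This is a minimal illustration; the shared basis width, the center positions, and the function names are assumptions, not the authors' exact parameterization.

```python
import numpy as np

def gaussian_sum_phase(omega, amplitudes, centers, width):
    """Spectral phase represented as a sum of Gaussian basis functions:
    phi(w) = sum_i a_i * exp(-(w - w_i)**2 / (2 * width**2)).
    `omega` is an array of angular frequencies; `amplitudes` and `centers`
    are the coefficients a_i and centers w_i of the basis functions."""
    omega = np.asarray(omega, dtype=float)[:, None]
    centers = np.asarray(centers, dtype=float)[None, :]
    basis = np.exp(-(omega - centers) ** 2 / (2.0 * width ** 2))
    return basis @ np.asarray(amplitudes, dtype=float)
```

    Because each basis function is local in frequency, adjusting one coefficient reshapes the phase only near its center, which is what keeps the representation flexible and free of coupling between terms.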

  13. Industrial scale production of stable isotopes employing the technique of plasma separation

    International Nuclear Information System (INIS)

    Stevenson, N.R.; Bigelow, T.S.; Tarallo, F.J.

    2003-01-01

    Calutrons, centrifuges, diffusion and distillation processes are some of the devices and techniques that have been employed to produce substantial quantities of enriched stable isotopes. Nevertheless, the availability of enriched isotopes in sufficient quantities for industrial applications remains very restricted. Industries such as those involved with medicine, semiconductors, nuclear fuel, propulsion, and national defense have identified a potential need for various enriched isotopes in large quantities. Economically producing most enriched (non-gaseous) isotopes in sufficient quantities has so far eluded commercial producers. The plasma separation process (PSP) is a commercial technique now available for producing large quantities of a wide range of enriched isotopes. Until recently, this technique had mainly been explored with small-scale ('proof-of-principle') devices built and operated at research institutes. The new Theragenics™ facility at Oak Ridge, TN houses the only existing commercial-scale PSP system. This device, which operated successfully in the 1980s, has recently been re-commissioned and is planned to be used to produce a variety of isotopes. Progress, the capabilities of this device, and its potential for impacting the world's supply of stable isotopes in the future are summarized. This technique now holds promise of opening the door to new and exciting applications of these isotopes in the future. (author)

  14. Motion Model Employment using interacting Motion Model Algorithm

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar

    2006-01-01

    The paper presents a simulation study to track a maneuvering target using a selective approach in choosing the Interacting Multiple Model (IMM) algorithm to provide wider coverage for tracking such targets.  Initially, there are two motion models in the system to track a target.  Probability of each m...

  15. Nanosecond pulsed laser ablation of Ge investigated by employing photoacoustic deflection technique and SEM analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yaseen, Nazish; Bashir, Shazia; Shabbir, Muhammad Kaif; Jalil, Sohail Abdul; Akram, Mahreen; Hayat, Asma; Mahmood, Khaliq; Haq, Faizan-ul; Ahmad, Riaz; Hussain, Tousif

    2016-06-01

    The nanosecond pulsed laser ablation of single-crystal Ge (100) has been investigated by employing photoacoustic deflection as well as SEM analysis techniques. An Nd:YAG laser (1064 nm, 10 ns, 1–10 Hz) at various laser fluences ranging from 0.2 to 11 J cm⁻² is employed as the pump beam to ablate Ge targets. In order to evaluate the in situ ablation threshold fluence of Ge by the photoacoustic deflection technique, a continuous-wave (CW) He–Ne laser (632 nm, power 10 mW) is employed as the probe beam. It travels parallel to the target surface at a distance of 3 mm and, after passing through the Ge plasma, is deflected by the density gradient of the acoustic waves. The deflected signal is detected by a photodiode and recorded by an oscilloscope. The threshold fluence of Ge, the velocity of the ablated species and the amplitude of the deflected signal are evaluated. The threshold fluence of Ge is found to be 0.5 J cm⁻² and is comparable with the analytical value. In order to compare the estimated threshold with ex situ measurements, quantitative analysis of the laser-irradiated Ge is performed using SEM. For this purpose Ge is exposed to single shots and to multiple shots of 5, 10, 50 and 100 pulses at various laser fluences ranging from 0.2 to 11 J cm⁻². The threshold fluences for single and multiple shots as well as the incubation coefficients are evaluated. It is observed that the value of the incubation coefficient decreases with increasing number of pulses and is therefore responsible for lowering the threshold fluence of Ge. SEM analysis also reveals the growth of various features such as porous structures, non-uniform ripples and blisters on the laser-irradiated Ge. It is observed that both the fluence and the number of laser shots play a significant role in the growth of these structures.
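
    The dependence of threshold fluence on shot number reported here is commonly described by the standard incubation model F_th(N) = F_th(1)·N^(ξ−1), where ξ is the incubation coefficient. Below is a minimal sketch under that assumption; the numbers used are illustrative, not the paper's measured values.

```python
def multi_shot_threshold(f_th_single, n_shots, xi):
    """Standard incubation model: F_th(N) = F_th(1) * N**(xi - 1).
    An incubation coefficient xi < 1 means accumulated damage lowers
    the ablation threshold as the number of shots grows."""
    return f_th_single * n_shots ** (xi - 1)
```

    Fitting the measured single- and multi-shot thresholds to this relation on a log-log scale yields ξ from the slope, which is how incubation coefficients are usually extracted in such studies.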

  16. Nanosecond pulsed laser ablation of Ge investigated by employing photoacoustic deflection technique and SEM analysis

    International Nuclear Information System (INIS)

    Yaseen, Nazish; Bashir, Shazia; Shabbir, Muhammad Kaif; Jalil, Sohail Abdul; Akram, Mahreen; Hayat, Asma; Mahmood, Khaliq; Haq, Faizan-ul; Ahmad, Riaz; Hussain, Tousif

    2016-01-01

    The nanosecond pulsed laser ablation of single-crystal Ge (100) has been investigated by employing photoacoustic deflection as well as SEM analysis techniques. An Nd:YAG laser (1064 nm, 10 ns, 1–10 Hz) at various laser fluences ranging from 0.2 to 11 J cm⁻² is employed as the pump beam to ablate Ge targets. In order to evaluate the in situ ablation threshold fluence of Ge by the photoacoustic deflection technique, a continuous-wave (CW) He–Ne laser (632 nm, power 10 mW) is employed as the probe beam. It travels parallel to the target surface at a distance of 3 mm and, after passing through the Ge plasma, is deflected by the density gradient of the acoustic waves. The deflected signal is detected by a photodiode and recorded by an oscilloscope. The threshold fluence of Ge, the velocity of the ablated species and the amplitude of the deflected signal are evaluated. The threshold fluence of Ge is found to be 0.5 J cm⁻² and is comparable with the analytical value. In order to compare the estimated threshold with ex situ measurements, quantitative analysis of the laser-irradiated Ge is performed using SEM. For this purpose Ge is exposed to single shots and to multiple shots of 5, 10, 50 and 100 pulses at various laser fluences ranging from 0.2 to 11 J cm⁻². The threshold fluences for single and multiple shots as well as the incubation coefficients are evaluated. It is observed that the value of the incubation coefficient decreases with increasing number of pulses and is therefore responsible for lowering the threshold fluence of Ge. SEM analysis also reveals the growth of various features such as porous structures, non-uniform ripples and blisters on the laser-irradiated Ge. It is observed that both the fluence and the number of laser shots play a significant role in the growth of these structures.

  17. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  18. Technique for enhancing the power output of an electrostatic generator employing parametric resonance

    Science.gov (United States)

    Post, Richard F.

    2016-02-23

    A circuit-based technique enhances the power output of electrostatic generators employing an array of axially oriented rods or tubes or azimuthal corrugated metal surfaces for their electrodes. During generator operation, the peak voltage across the electrodes occurs at an azimuthal position that is intermediate between the position of minimum gap and maximum gap. If this position is also close to the azimuthal angle where the rate of change of capacity is a maximum, then the highest rf power output possible for a given maximum allowable voltage at the minimum gap can be attained. This rf power output is then coupled to the generator load through a coupling condenser that prevents suppression of the dc charging potential by conduction through the load. Optimized circuit values produce phase shifts in the rf output voltage that allow higher power output to occur at the same voltage limit at the minimum gap position.

  19. Employment of an ion implantation technique for catalyst coating on various substrates

    International Nuclear Information System (INIS)

    Bannikov, M.G.; Chattha, J.A.; Zlobin, V.N.; Vasilve, I.P.; Cherkasov, J.A.; Gawrilenko, P.N.

    2001-01-01

    Catalysts are widely used in the chemical industry as well as in the production of vehicle catalytic converters. Precious metals are employed increasingly as catalytic materials. Traditional methods of coating, such as impregnation, are thought to reduce the porosity and specific area of the catalyst, thus reducing the catalytic efficiency. Apart from that, impregnation technology entails a high expenditure of precious metals. To reduce the content of noble metals in catalysts, the ion implantation method of coating has been investigated. Several samples of catalysts on various substrates were prepared by the ion implantation technique and tested. The new catalysts have shown high nitrogen oxide (NO/sub x/) and carbon monoxide (CO) conversion efficiency, with the content of noble metals reduced substantially. Experiments have also shown that the specific area of substrates coated by ion implantation did not decrease. A schematic of the ion implanter and experimental results are provided. (author)

  20. A VLSI Implementation of Rank-Order Searching Circuit Employing a Time-Domain Technique

    Directory of Open Access Journals (Sweden)

    Trong-Tu Bui

    2013-01-01

    We present a compact and low-power rank-order searching (ROS) circuit that can be used for building associative memories and rank-order filters (ROFs) by employing time-domain computation and floating-gate MOS techniques. The architecture inherits the accuracy and programmability of digital implementations as well as the compactness and low power consumption of analog ones. We aim to implement the identification function as the first-priority objective; the filtering function would be implemented once location identification has been carried out. The prototype circuit was designed and fabricated in a 0.18 μm CMOS technology. It consumes only 132.3 μW for an eight-input demonstration case.
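
    As a plain software reference for the rank-order operation that such a circuit realizes in the time domain (an illustrative sketch only; the paper's implementation is analog, floating-gate VLSI, and the function and parameter names here are hypothetical):

```python
def rank_order_filter(signal, window, rank):
    """For each full window position, output the element with the given
    rank within the window (0 = minimum, window - 1 = maximum; the median
    is rank window // 2). This is the operation a rank-order filter (ROF)
    built from a rank-order searching circuit would compute."""
    half = window // 2
    out = []
    for i in range(half, len(signal) - half):
        out.append(sorted(signal[i - half:i + half + 1])[rank])
    return out
```

    Choosing the rank turns the same search primitive into a min, max, or median filter, which is why identification of the ranked location is the natural first objective before filtering.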

  1. Surgical Anatomy and Microvascular Surgical Technique Relevant to Experimental Renal Transplant in Rat Employing Aortic and Inferior Venacaval Conduits.

    Science.gov (United States)

    Shrestha, Badri Man; Haylor, John

    2017-11-15

    Rat models of renal transplant are used to investigate immunologic processes and responses to therapeutic agents before their translation into routine clinical practice. In this study, we have described details of rat surgical anatomy and our experiences with the microvascular surgical technique relevant to renal transplant by employing donor inferior vena cava and aortic conduits. For this study, 175 rats (151 Lewis and 24 Fisher) were used to establish the Fisher-Lewis rat model of chronic allograft injury at our institution. Anatomic and technical details were recorded during the period of training and establishment of the model. A final group of 12 transplanted rats were studied for an average duration of 51 weeks for the Lewis-to-Lewis isografts (5 rats) and 42 weeks for the Fisher-to-Lewis allografts (7 rats). Functional measurements and histology confirmed the diagnosis of chronic allograft injury. Mastering the anatomic details and microvascular surgical techniques can lead to the successful establishment of an experimental renal transplant model.

  2. Probabilistic evaluation of process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2016-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  3. Model Checking Markov Chains: Techniques and Tools

    NARCIS (Netherlands)

    Zapreev, I.S.

    2008-01-01

    This dissertation deals with four important aspects of model checking Markov chains: the development of efficient model-checking tools, the improvement of model-checking algorithms, the efficiency of the state-space reduction techniques, and the development of simulation-based model-checking

  4. Large-scale inverse model analyses employing fast randomized data reduction

    Science.gov (United States)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
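
    The "sketching" idea described in this abstract can be illustrated on a plain overdetermined least-squares problem: multiplying by a short random Gaussian sketching matrix S reduces the number of rows while retaining the information needed for the fit. This is a minimal sketch of the principle, not the RGA/PCGA implementation from the paper.

```python
import numpy as np

def sketched_least_squares(A, b, sketch_rows, seed=0):
    """Solve min_x ||Ax - b|| approximately by first reducing the row
    dimension with a random Gaussian sketching matrix S, then solving
    the much smaller problem min_x ||(SA)x - Sb||. Costs scale with the
    sketch size rather than the number of observations."""
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((sketch_rows, A.shape[0])) / np.sqrt(sketch_rows)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x
```

    For a consistent system the sketched solve recovers the exact solution; for noisy data it trades a small, controllable accuracy loss for a large reduction in computation and memory, which is the trade-off the RGA exploits.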

  5. An employer brand predictive model for talent attraction and retention

    OpenAIRE

    Annelize Botha; Mark Bussin; Lukas de Swardt

    2011-01-01

    Orientation: In an ever-shrinking global talent pool, organisations use employer brand to attract and retain talent; however, in the absence of theoretical pointers, many organisations are losing out on a powerful business tool by not developing or maintaining their employer brand correctly. Research purpose: This study explores the current state of knowledge about employer brand and identifies the various employer brand building blocks which are conceptually integrated in a predictiv...

  6. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines, assumes an understanding of graduate level multivariate statistics, including an introduction to SEM.

  7. Towards Model Validation and Verification with SAT Techniques

    OpenAIRE

    Gogolla, Martin

    2010-01-01

    After sketching how system development and the UML (Unified Modeling Language) and the OCL (Object Constraint Language) are related, validation and verification with the tool USE (UML-based Specification Environment) is demonstrated. As a more efficient alternative for verification tasks, two approaches using SAT-based techniques are put forward: First, a direct encoding of UML and OCL with Boolean variables and propositional formulas, and second, an encoding employing an...

  8. Employing socially driven techniques for framing, contextualization, and collaboration in complex analytical threads

    Science.gov (United States)

    Wollocko, Arthur; Danczyk, Jennifer; Farry, Michael; Jenkins, Michael; Voshell, Martin

    2015-05-01

    The proliferation of sensor technologies continues to impact Intelligence Analysis (IA) work domains. Historical procurement focus on sensor platform development and acquisition has resulted in increasingly advanced collection systems; however, such systems often demonstrate classic data overload conditions by placing increased burdens on already overtaxed human operators and analysts. Support technologies and improved interfaces have begun to emerge to ease that burden, but these often focus on single modalities or sensor platforms rather than underlying operator and analyst support needs, resulting in systems that do not adequately leverage their natural human attentional competencies, unique skills, and training. One particular reason why emerging support tools often fail is due to the gap between military applications and their functions, and the functions and capabilities afforded by cutting edge technology employed daily by modern knowledge workers who are increasingly "digitally native." With the entry of Generation Y into these workplaces, "net generation" analysts, who are familiar with socially driven platforms that excel at giving users insight into large data sets while keeping cognitive burdens at a minimum, are creating opportunities for enhanced workflows. By using these ubiquitous platforms, net generation analysts have trained skills in discovering new information socially, tracking trends among affinity groups, and disseminating information. However, these functions are currently under-supported by existing tools. In this paper, we describe how socially driven techniques can be contextualized to frame complex analytical threads throughout the IA process. This paper focuses specifically on collaborative support technology development efforts for a team of operators and analysts. Our work focuses on under-supported functions in current working environments, and identifies opportunities to improve a team's ability to discover new information and

  9. Verification of Orthogrid Finite Element Modeling Techniques

    Science.gov (United States)

    Steeve, B. E.

    1996-01-01

    The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such "short cuts" is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam, a shell, and a mixed beam-and-shell element model. Results show that the shell element model performs the best, but that the simpler beam and beam-and-shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.

  10. A Strategy Modelling Technique for Financial Services

    OpenAIRE

    Heinrich, Bernd; Winter, Robert

    2004-01-01

    Strategy planning processes often suffer from a lack of conceptual models that can be used to represent business strategies in a structured and standardized form. If natural language is replaced by an at least semi-formal model, the completeness, consistency, and clarity of strategy descriptions can be drastically improved. A strategy modelling technique is proposed that is based on an analysis of modelling requirements, a discussion of related work and a critical analysis of generic approach...

  11. Soft computing techniques toward modeling the water supplies of Cyprus.

    Science.gov (United States)

    Iliadis, L; Maris, F; Tachos, S

    2011-10-01

    This research effort aims at the application of soft computing techniques to water resources management. More specifically, the target is the development of reliable soft computing models capable of estimating the water supply for the case of the "Germasogeia" mountainous watersheds in Cyprus. Initially, ε-regression support vector machine (ε-RSVM) and fuzzy weighted ε-RSVM models were developed that accept five input parameters. At the same time, reliable artificial neural networks were developed to perform the same job. The 5-fold cross-validation approach was employed in order to eliminate bad local behaviors and to produce a more representative training data set. Thus, fuzzy weighted support vector regression (SVR) combined with the fuzzy partition was employed in an effort to enhance the quality of the results. Several rational and reliable models have been produced that can enhance the efficiency of water policy designers. Copyright © 2011 Elsevier Ltd. All rights reserved.
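
    A minimal sketch of ε-SVR scored with 5-fold cross-validation, in the spirit of this abstract (scikit-learn on synthetic data; the five inputs, the RBF kernel, and the hyperparameters are assumptions here, not the paper's configuration or the Germasogeia data):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Synthetic stand-in for the five watershed input parameters.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = X @ np.array([0.5, -1.0, 2.0, 0.3, 0.8]) + 0.1 * rng.standard_normal(200)

# epsilon-SVR evaluated with 5-fold cross-validation; each fold is held
# out in turn, which guards against the "bad local behaviors" the
# abstract mentions.
scores = cross_val_score(SVR(kernel="rbf", C=10.0, epsilon=0.1), X, y,
                         cv=5, scoring="r2")
```

    Comparing the per-fold R² scores, rather than a single train/test split, gives a more representative estimate of how the regressor would generalize to unseen supply data.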

  12. Employ Simulation Techniques. Second Edition. Module C-5 of Category C--Instructional Execution. Professional Teacher Education Module Series.

    Science.gov (United States)

    Ohio State Univ., Columbus. National Center for Research in Vocational Education.

    One of a series of performance-based teacher education learning packages focusing upon specific professional competencies of vocational teachers, this learning module deals with employing simulation techniques. It consists of an introduction and four learning experiences. Covered in the first learning experience are various types of simulation…

  13. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to identify the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and the estimation of those parameters in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO 2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO 2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness within grid cells, were found; these can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open-channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D floodplain model be developed that considers all the hydrological and high-resolution topographic parameters discussed in this review, to enhance the findings on the causes and effects of flooding.

  14. Race, Employment Disadvantages, and Heavy Drinking: A Multilevel Model.

    Science.gov (United States)

    Lo, Celia C; Cheng, Tyrone C

    2015-01-01

    We intended to determine (1) whether stress from employment disadvantages led to increased frequency of heavy drinking and (2) whether race had a role in the relationship between such disadvantages and heavy drinking. Study data came from the National Longitudinal Survey of Youth, a prospective study that has followed a representative sample of youth since 1979. Our study employed data from 11 particular years, during which the survey included items measuring respondents' heavy drinking. Our final sample numbered 10,171 respondents, which generated 75,394 person-waves for data analysis. Both of our hypotheses were supported by results from multilevel mixed-effects linear regression capturing the time-varying nature of three employment disadvantages and of the heavy-drinking outcome. Results show that more-frequent heavy drinking was associated with employment disadvantages, and that disadvantages' effects on drinking were stronger for Blacks and Hispanics than for Whites. That worsening employment disadvantages have worse effects on minority groups' heavy drinking (compared to Whites) probably contributes to the racial health disparities in our nation. Policies and programs addressing such disparities are especially important during economic downturns.

  15. Full employment and competition in the Aspen economic model: implications for modeling acts of terrorism.

    Energy Technology Data Exchange (ETDEWEB)

    Sprigg, James A.; Ehlen, Mark Andrew

    2004-11-01

    Acts of terrorism could have a range of broad impacts on an economy, including changes in consumer (or demand) confidence and the ability of productive sectors to respond to changes. As a first step toward a model of terrorism-based impacts, we develop here a model of production and employment that characterizes dynamics in ways useful toward understanding how terrorism-based shocks could propagate through the economy; subsequent models will introduce the role of savings and investment into the economy. We use Aspen, a powerful economic modeling tool developed at Sandia, to demonstrate for validation purposes that a single-firm economy converges to the known monopoly equilibrium price, output, and employment levels, while multiple-firm economies converge toward the competitive equilibria typified by lower prices and higher output and employment. However, we find that competition also leads to churn by consumers seeking lower prices, making it difficult for firms to optimize with respect to wages, prices, and employment levels. Thus, competitive firms generate market "noise" in the steady state as they search for prices and employment levels that will maximize profits. In the context of this model, not only could terrorism depress overall consumer confidence and economic activity but terrorist acts could also cause normal short-run dynamics to be misinterpreted by consumers as a faltering economy.

  16. Employer-driven consumerism: integrating health into the business model.

    Science.gov (United States)

    Thompson, Michael; Checkley, Joseph

    2006-01-01

    Consumer-driven health care is a misnomer. Notwithstanding the enormous role the individual consumer has to play in reshaping the U.S. health care delivery system, this article will focus on the employer as the key driver of change and innovation in the consumerism revolution. American Standard provides a case study of how one major employer has evaluated health care in the context of its business and aggressively integrated consumerism and health into the core of its business. Other companies will appropriately execute consumerism strategies in a fashion consistent with their own needs, culture, resources and populations. However, the principles supporting those strategies will be very much consistent.

  17. Multi-beam backscatter image data processing techniques employed to EM 1002 system

    Digital Repository Service at National Institute of Oceanography (India)

    Fernandes, W.A.; Chakraborty, B.

    to compensate outer-beam backscatter strength data in such a way that the effect of angular backscatter strength is removed. In this work we have developed backscatter data processing techniques for the EM1002 multi-beam system...

  18. Student Employment as a Model for Experiential Learning

    Science.gov (United States)

    Fede, Jacquelyn H.; Gorman, Kathleen S.; Cimini, Maria E.

    2018-01-01

    Background: Evidence suggests experiential learning promotes the development of a range of transferrable skills including communication, responsibility, and social skills. However, many students are unable to participate in internships or other common forms of experiential education because they need to work for pay. University employment has been…

  19. Structural Modeling Using "Scanning and Mapping" Technique

    Science.gov (United States)

    Amos, Courtney L.; Dash, Gerald S.; Shen, J. Y.; Ferguson, Frederick; Noga, Donald F. (Technical Monitor)

    2000-01-01

    Supported by NASA Glenn Center, we are in the process of developing a structural damage diagnostic and monitoring system for rocket engines, which consists of five modules: Structural Modeling, Measurement Data Pre-Processor, Structural System Identification, Damage Detection Criterion, and Computer Visualization. The function of the system is to detect damage as it is incurred by the engine structures. The scientific principle for identifying damage is to utilize the changes in the vibrational properties between the pre-damaged and post-damaged structures. The vibrational properties of the pre-damaged structure can be obtained from an analytic computer model of the structure. Thus, as the first stage of the whole research plan, we currently focus on the first module - Structural Modeling. Three computer software packages have been selected and will be integrated for this purpose: PhotoModeler-Pro, AutoCAD-R14, and MSC/NASTRAN. AutoCAD is the most popular PC-CAD system currently available on the market. For our purpose, it serves as an interface to generate structural models of particular engine parts or assemblies, which are then passed to MSC/NASTRAN for extracting structural dynamic properties. Although AutoCAD is a powerful structural modeling tool, the complexity of engine components requires a further improvement in structural modeling techniques. We are working on a relatively new technique called "scanning and mapping". The basic idea is to produce a full and accurate 3D structural model by tracing over multiple overlapping photographs taken from different angles. There is no need to input point positions, angles, distances or axes. Photographs can be taken by any type of camera with different lenses. With the integration of such a modeling technique, the capability of structural modeling will be enhanced. The prototypes of any complex structural components will be produced by PhotoModeler first based on existing similar

  20. Employer Brand Opens up for a Gender Process Model

    Directory of Open Access Journals (Sweden)

    Hans Lundkvist

    2011-11-01

    Full Text Available Regardless of a long tradition of legislation, policymaking and practical achievements, the issues of gender equality and of the segregated labor market still remain a matter of concern in Sweden. This paper describes a collaborative process between a research project and an engineering enterprise. It describes the point of departure, based on the concept of employer brand, of a long-term change process and the different phases and activities during an intensive period in 2009. The collaboration aimed to develop innovative methods, and to apply them in order to achieve increased gender awareness, and thereby to be able to retain and attract the best labor for tomorrow. Different approaches and methods such as analogies, anecdotes, and pictures were used to nourish the process. Findings showed that the interactive process contributed to increased awareness. During the process the enterprise became more conscious of the potential of being a gender-equal employer

  1. An electromagnetic signals monitoring and analysis wireless platform employing personal digital assistants and pattern analysis techniques

    Science.gov (United States)

    Ninos, K.; Georgiadis, P.; Cavouras, D.; Nomicos, C.

    2010-05-01

    This study presents the design and development of a mobile wireless platform, employing Personal Digital Assistants (PDAs), to be used for monitoring and analysis of seismic events and related electromagnetic (EM) signals. A prototype custom-developed application was deployed on a 3G-enabled PDA that could connect to the FTP server of the Institute of Geodynamics of the National Observatory of Athens and receive and display EM signals at four receiver frequencies (3 kHz (E-W, N-S), 10 kHz (E-W, N-S), 41 MHz and 46 MHz). Signals may originate from any one of the 16 field stations located around the Greek territory. Employing continuous recordings of EM signals gathered from January 2003 till December 2007, a Support Vector Machines (SVM)-based classification system was designed to distinguish EM precursor signals within noisy background. EM signals corresponding to recordings preceding major seismic events (Ms≥5R) were segmented by an experienced scientist, and five features (mean, variance, skewness, kurtosis, and a wavelet-based feature) derived from the EM signals were calculated. These features were used to train the SVM-based classification scheme. The performance of the system was evaluated by the exhaustive-search and leave-one-out methods, giving 87.2% overall classification accuracy in correctly identifying EM precursor signals within noisy background employing all calculated features. Due to the insufficient processing power of the PDAs, this task was performed on a typical desktop computer. The optimal trained context of the SVM classifier was then integrated in the PDA-based application, rendering the platform capable of discriminating between EM precursor signals and noise. The system's efficiency was evaluated by an expert who reviewed (1) multiple EM signals, up to 18 days prior to corresponding past seismic events, and (2) the possible EM activity of a specific region employing the trained SVM classifier. Additionally, the proposed architecture can form a
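
    The five features described above can be sketched in a few lines. The study does not specify which wavelet-based feature was used, so a first-level Haar detail energy stands in for it here; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def haar_detail_energy(x):
    """Energy of first-level Haar wavelet detail coefficients.

    A stand-in for the unspecified 'wavelet based feature' in the abstract.
    """
    x = x[: len(x) // 2 * 2]                      # even length
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return float(np.sum(detail ** 2))

def em_features(segment):
    """The five statistical features per EM-signal segment:
    mean, variance, skewness, kurtosis and a wavelet-based feature."""
    x = np.asarray(segment, dtype=float)
    mu = x.mean()
    var = x.var()
    sd = np.sqrt(var)
    skew = np.mean((x - mu) ** 3) / sd ** 3
    kurt = np.mean((x - mu) ** 4) / var ** 2
    return np.array([mu, var, skew, kurt, haar_detail_energy(x)])

# Feature vectors like these would then train the SVM classifier,
# e.g. sklearn.svm.SVC(kernel="rbf").fit(features, labels).
rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 1024)              # background-like segment
precursor = noise + np.sin(np.arange(1024))     # segment with added structure
print(em_features(noise).shape)                 # -> (5,)
```

    Segments labeled by the expert would be turned into such vectors before training.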

  2. Map Your Way to Speech Success! Employing Mind Mapping as a Speech Preparation Technique

    Science.gov (United States)

    Paxman, Christina G.

    2011-01-01

    Mind mapping has gained considerable credibility recently in corporations such as Boeing and Nabisco, as well as in the classroom in terms of preparing for examinations and preparing for speeches. A mind map is a graphic technique for organizing an individual's thoughts and other information. It harnesses the full range of cortical skills--word,…

  3. Detection of Pseudomonas aeruginosa Metabolite Pyocyanin in Water and Saliva by Employing the SERS Technique

    Directory of Open Access Journals (Sweden)

    Olga Žukovskaja

    2017-07-01

    Full Text Available Pyocyanin (PYO) is a metabolite specific for Pseudomonas aeruginosa. In the case of immunocompromised patients, it is currently considered a biomarker for life-threatening Pseudomonas infections. In the frame of this study, it is shown that PYO can be detected in aqueous solution by employing surface-enhanced Raman spectroscopy (SERS) combined with a microfluidic platform. The achieved limit of detection is 0.5 μM. This is ~2 orders of magnitude below the concentration of PYO found in clinical samples. Furthermore, as proof of principle, the SERS detection of PYO in the saliva of three volunteers was also investigated. This body fluid can be collected in a non-invasive manner and is highly chemically complex, making the detection of the target molecule challenging. Nevertheless, PYO was successfully detected in two saliva samples down to 10 μM and in one sample at a concentration of 25 μM. This indicates that the molecules present in saliva do not inhibit the efficient adsorption of PYO on the surface of the employed SERS-active substrates.

  4. Operational techniques employed for the liquid sodium source term control loops

    International Nuclear Information System (INIS)

    Chulos, L.E.

    1976-01-01

    Four Source Term Control Loops (STCLs) have been designed, constructed, and placed into operation at the Hanford Engineering Development Laboratory (HEDL) as part of the Radioactivity Control Technology program. The data obtained are used to determine the corrosion and deposition of LMFBR materials, including corrosion product radionuclides, in a non-isothermal flowing sodium system. The paper discusses operation of the STCL Facilities and, in particular, the methods used for controlling the oxygen content of the liquid sodium. These methods include cold trapping techniques, hot trapping, seeding the cold traps with sodium oxide, and precipitating the oxygen in the cold trap in a controlled manner. Operational problems encountered with the STCL Facilities and the techniques for correcting these problems are also discussed

  5. A 320 mV, 6 kb subthreshold 10T SRAM employing voltage lowering techniques

    International Nuclear Information System (INIS)

    Cai Jiangzheng; Zhang Sumin; Yuan Jia; Shang Xinchao; Chen Liming; Hei Yong

    2015-01-01

    This paper presents a 6 kb SRAM that uses a novel 10T cell to achieve a minimum operating voltage of 320 mV in a 130 nm CMOS process. A number of low power circuit techniques are included to enable the proposed SRAM to operate in the subthreshold region. The reverse short channel effect and the reverse narrow channel effect are utilized to improve the performance of the SRAM. A novel subthreshold pulse generation circuit produces an ideal pulse to make read operation stable. A floating write bit-line effectively reduces the standby leakage consumption. Finally, a short read bit-line makes the read operation fast and energy-saving. Measurements indicate that these techniques are effective, the SRAM can operate at 800 kHz and consume 1.94 μW at its lowest voltage (320 mV). (paper)

  6. Modeling techniques for quantum cascade lasers

    Energy Technology Data Exchange (ETDEWEB)

    Jirauschek, Christian [Institute for Nanoelectronics, Technische Universität München, D-80333 Munich (Germany); Kubis, Tillmann [Network for Computational Nanotechnology, Purdue University, 207 S Martin Jischke Drive, West Lafayette, Indiana 47907 (United States)

    2014-03-15

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.

  7. Modeling techniques for quantum cascade lasers

    Science.gov (United States)

    Jirauschek, Christian; Kubis, Tillmann

    2014-03-01

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.
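
    As a concrete illustration of the finite difference method mentioned in the review, the sketch below solves the one-dimensional time-independent Schrödinger equation for a single quantum well, the basic building block of the multiple-quantum-well structures discussed above. The well geometry and material parameters are illustrative assumptions, not values from any particular laser design.

```python
import numpy as np

hbar = 1.054571817e-34         # J s
m_eff = 0.067 * 9.1093837e-31  # GaAs-like effective mass, kg
eV = 1.602176634e-19

L = 30e-9                      # simulation box, m
N = 600
z = np.linspace(0.0, L, N)
dz = z[1] - z[0]

# 10 nm well of depth 0.3 eV centred in the box (conduction-band offset)
V = np.full(N, 0.3 * eV)
V[(z > 10e-9) & (z < 20e-9)] = 0.0

# Discretised Hamiltonian: -(hbar^2/2m) d^2/dz^2 + V, second-order stencil
t = hbar ** 2 / (2.0 * m_eff * dz ** 2)
H = (np.diag(2.0 * t + V)
     - np.diag(np.full(N - 1, t), 1)
     - np.diag(np.full(N - 1, t), -1))

E, psi = np.linalg.eigh(H)     # eigen-energies (ascending) and wavefunctions
bound = E[E < 0.3 * eV] / eV   # bound-state energies in eV
print(bound[:2])               # lowest two quantized levels
```

    In a real quantum cascade laser simulation this step would be iterated with the Poisson equation and fed into one of the transport models surveyed in the review.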

  8. Significant enhancement in thermoelectric performance of nanostructured higher manganese silicides synthesized employing a melt spinning technique.

    Science.gov (United States)

    Muthiah, Saravanan; Singh, R C; Pathak, B D; Avasthi, Piyush Kumar; Kumar, Rishikesh; Kumar, Anil; Srivastava, A K; Dhar, Ajay

    2018-01-25

    The limited thermoelectric performance of p-type Higher Manganese Silicides (HMS) in terms of their low figure of merit (ZT), which is far below unity, is the main bottleneck for realising an efficient HMS-based thermoelectric generator. HMS has been recognized as one of the most promising materials for harnessing waste heat in the mid-temperature range, owing to its thermal stability and the earth-abundant, environmentally friendly nature of its constituent elements. We report a significant enhancement in the thermoelectric performance of nanostructured HMS synthesized using rapid solidification, by optimizing the cooling rates during melt-spinning followed by spark plasma sintering of the resulting melt-spun ribbons. By employing this experimental strategy, an unprecedented ZT ∼ 0.82 at 800 K was realized in spark-plasma-sintered 5 at% Al-doped MnSi1.73 HMS, melt-spun at an optimized high cooling rate of ∼2 × 10⁷ K s⁻¹. This enhancement in ZT represents a ∼25% increase over the best values reported thus far for HMS and primarily originates from a nano-crystalline microstructure consisting of an HMS matrix (20-40 nm) with excess Si (3-9 nm) uniformly distributed in it. This nanostructure, resulting from the high cooling rates employed during the melt-spinning of HMS, introduces a high density of nano-crystallite boundaries in a wide spectrum of nano-scale dimensions, which scatter the low-to-mid-wavelength heat-carrying phonons. This abundant phonon scattering results in a significantly reduced thermal conductivity of ∼1.5 W m⁻¹ K⁻¹ at 800 K, which primarily contributes to the enhancement in ZT.
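
    For orientation, the figure of merit is ZT = S²σT/κ, where S is the Seebeck coefficient, σ the electrical conductivity and κ the thermal conductivity. The quoted κ and T can be combined with the quoted ZT to back out the implied power factor; this is only a consistency sketch using the numbers above, not data from the paper.

```python
# ZT = S^2 * sigma * T / kappa  =>  power factor S^2*sigma = ZT * kappa / T
T = 800.0        # K
kappa = 1.5      # W m^-1 K^-1
ZT = 0.82
power_factor = ZT * kappa / T          # S^2 * sigma, in W m^-1 K^-2
print(round(power_factor * 1e3, 2))    # in mW m^-1 K^-2 -> 1.54
```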

  9. Recent Trends in Microextraction Techniques Employed in Analytical and Bioanalytical Sample Preparation

    Directory of Open Access Journals (Sweden)

    Abuzar Kabir

    2017-12-01

    Full Text Available Sample preparation has been recognized as a major step in the chemical analysis workflow. As such, substantial efforts have been made in recent years to simplify the overall sample preparation process. Major focuses of these efforts have included miniaturization of the extraction device; minimizing/eliminating toxic and hazardous organic solvent consumption; eliminating sample pre-treatment and post-treatment steps; reducing the sample volume requirement; reducing the extraction equilibrium time; and maximizing extraction efficiency. All these improved attributes are congruent with the Green Analytical Chemistry (GAC) principles. Classical sample preparation techniques such as solid phase extraction (SPE) and liquid-liquid extraction (LLE) are being rapidly replaced with emerging miniaturized and environmentally friendly techniques such as Solid Phase Micro Extraction (SPME), Stir Bar Sorptive Extraction (SBSE), Micro Extraction by Packed Sorbent (MEPS), Fabric Phase Sorptive Extraction (FPSE), and Dispersive Liquid-Liquid Micro Extraction (DLLME). In addition to the development of many new generic extraction sorbents in recent years, a large number of molecularly imprinted polymers (MIPs) created using different template molecules have also enriched the large cache of microextraction sorbents. Application of nanoparticles as high-performance extraction sorbents has undoubtedly elevated the extraction efficiency and method sensitivity of modern chromatographic analyses to a new level. Combining magnetic nanoparticles with many microextraction sorbents has opened up new possibilities to extract target analytes from sample matrices containing high volumes of matrix interferents. The aim of the current review is to critically audit the progress of microextraction techniques in recent years, which has indisputably transformed analytical chemistry practices, from biological and therapeutic drug monitoring to the environmental field; from foods to phyto

  10. A microcontroller-based compensated optical proximity detector employing the switching-mode synchronous detection technique

    International Nuclear Information System (INIS)

    Rakshit, Anjan; Chatterjee, Amitava

    2012-01-01

    This paper describes the development of a microcontroller-based optical proximity detector that can provide a low-cost yet powerful obstacle-sensing mechanism for mobile robots. The system employs the switching-mode synchronous detection technique to provide satisfactory performance over a wide range of operating conditions, and includes the facility of externally setting a threshold for reliable operation. The system is dynamically compensated against ambient illumination variations. Experimental studies demonstrate how the minimum distance of activation can be varied with different choices of thresholds. (paper)
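
    The switching-mode synchronous detection principle can be sketched numerically: the emitter is driven by a square-wave reference, the received signal is multiplied by the same ±1 reference and averaged, so ambient illumination and uncorrelated noise cancel while the synchronous echo survives. All values below are illustrative assumptions, not the paper's circuit parameters.

```python
import numpy as np

fs = 100_000.0                      # sample rate, Hz
f_ref = 1_000.0                     # modulation frequency, Hz
t = np.arange(0, 0.1, 1.0 / fs)

ref = np.sign(np.sin(2 * np.pi * f_ref * t))       # switching reference, +/-1
echo = 0.02 * (ref + 1) / 2                         # weak reflected pulse train
ambient = 0.5 + 0.3 * np.sin(2 * np.pi * 100 * t)   # ambient light drift
rng = np.random.default_rng(1)
signal = echo + ambient + rng.normal(0, 0.05, t.size)

# Synchronous demodulation: multiply by reference and low-pass (here: average)
demod = np.mean(signal * ref)
print(round(demod, 3))   # ~0.01, i.e. half the 0.02 echo amplitude
```

    A comparison of `demod` against an externally set threshold would then decide obstacle presence, as in the detector described above.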

  11. Rabbit tissue model (RTM) harvesting technique.

    Science.gov (United States)

    Medina, Marelyn

    2002-01-01

    A method for creating a tissue model using a female rabbit for laparoscopic simulation exercises is described. The specimen is called a Rabbit Tissue Model (RTM). Dissection techniques are described for transforming the rabbit carcass into a small, compact unit that can be used for multiple training sessions. Preservation is accomplished by using saline and refrigeration. Only the animal trunk is used, with the rest of the animal carcass being discarded. Practice exercises are provided for using the preserved organs. Basic surgical skills, such as dissection, suturing, and knot tying, can be practiced on this model. In addition, the RTM can be used with any pelvic trainer that permits placement of larger practice specimens within its confines.

  12. An integrative model for measuring graduates’ employability skills—A study in China

    Directory of Open Access Journals (Sweden)

    Wenping Su

    2015-12-01

    Full Text Available Employability is a key issue in graduates' job-hunting, but little research has been done on that of the graduates of Chinese universities, whose graduate employment has been declining over the past decade. This paper attempts to tackle this issue. It reviews the relevant research on employability and develops a research-based theoretical framework to evaluate and analyze the graduates' employability in China. It adopts multiple approaches to establish the skills that will enhance university students' employability. Surveying around 100 employers and 200 undergraduates from universities in Beijing, the paper explores the characteristics of, and factors influencing, the graduates' employability. Subsequently, it proposes a qualitative model to measure graduates' employability. Based on the findings, it discusses the theoretical and practical implications and provides advice for Chinese graduates to improve their employability.

  13. Layer-by-Layer technique employed to construct multitask interfaces in polymer composites

    Directory of Open Access Journals (Sweden)

    Luísa Sá Vitorino

    Full Text Available Abstract The properties of glass fiber-reinforced polymer composites are closely related to the fiber-matrix interface. Interfacial treatments to improve mechanical properties are usually limited to enhancing interfacial adhesion. In this work, the Layer-by-Layer (LbL) technique was introduced to build a novel interface in polymer composites. Different numbers of bilayers of poly(diallyldimethylammonium chloride) and poly(sodium 4-styrenesulfonate) with carbon nanotubes were deposited through LbL on the surface of woven glass fibers (GFs). Polypropylene composites containing the modified GFs were prepared by compression molding. Thermogravimetric analysis, scanning electron microscopy and Raman spectroscopy proved that multilayers of polymers with carbon nanotubes could be deposited on the GF surface. Mechanical tests on composites with modified GFs revealed an increase in flexural modulus and toughness. The overall results attested that the LbL technique can be used to design interfaces with different compositions to perform diverse tasks, such as to improve the stiffness of composites and to encapsulate active nanocomponents.

  14. Techniques Employed to Conduct Postshot Drilling at the former Nevada Test Site

    Energy Technology Data Exchange (ETDEWEB)

    Dekin, W D

    2011-04-14

    Postshot drilling provided essential data on the results of the underground nuclear tests conducted at the Nevada Test Site (NTS), now identified as the Nevada National Security Site (NNSS). It was the means by which samples from the zone of interest were obtained for radiochemical analysis. This handbook describes how Lawrence Livermore National Laboratory (LLNL) conducted postshot drilling operations at the NTS, and it provides a general understanding of the process. Postshot drilling is a specialized application of rotary drilling. Accordingly, this handbook gives a brief description of rotary drilling in Section 2 to acquaint the reader with the general subject before proceeding to the specialized techniques used in postshot drilling. In Section 3, the handbook describes the typical postshot drilling situation at the former NTS and the drilling methods used. Section 4 describes the typical sequence of operations in postshot drilling at the former NTS. Detailed information on special equipment and techniques is given in a series of appendices (A through F) at the end of the handbook.

  15. Characterisation of the suspended particulate matter in a stratified estuarine environment employing complementary techniques

    Science.gov (United States)

    Thomas, Luis P.; Marino, Beatriz M.; Szupiany, Ricardo N.; Gallo, Marcos N.

    2017-09-01

    The ability to predict the sediment and nutrient circulation within estuarine waters is of significant economic and ecological importance. In these complex systems, flocculation is a dynamically active process that is directly affected by the prevalent environmental conditions. Consequently, the floc properties continuously change, which greatly complicates the characterisation of the suspended particulate matter (SPM). In the present study, three different techniques are combined in a stratified estuary under quiet weather conditions and with a low river discharge to search for a solution to this problem. The challenge is to obtain the concentration, size and flux of suspended elements through selected cross-sections using the method based on the simultaneous backscatter records of 1200 and 600 kHz ADCPs, isokinetic sampling data and LISST-25X measurements. The two-ADCP method is highly effective for determining the SPM size distributions in a non-intrusive way. The isokinetic sampling and the LISST-25X diffractometer offer point measurements at specific depths, which are especially useful for calibrating the ADCP backscatter intensity as a function of the SPM concentration and size, and providing complementary information on the sites where acoustic records are not available. Limitations and potentials of the techniques applied are discussed.
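
    The calibration step, relating ADCP backscatter to the concentrations measured at the sampling points, is commonly done as a linear fit of log10(concentration) against relative backscatter in dB. The sketch below uses synthetic numbers standing in for the isokinetic/LISST point samples, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(42)
backscatter_db = np.linspace(60.0, 90.0, 12)      # relative backscatter, dB
true_a, true_b = 0.045, -1.8                      # assumed calibration curve
log10_conc = true_a * backscatter_db + true_b + rng.normal(0, 0.02, 12)

# Least-squares fit: log10(C) = a * RB + b
a, b = np.polyfit(backscatter_db, log10_conc, 1)

def concentration(rb_db):
    """Estimate SPM concentration (arbitrary mg/L scale) from backscatter."""
    return 10.0 ** (a * rb_db + b)

print(round(a, 3))   # recovered slope, close to the assumed 0.045
```

    Once calibrated, the fit converts the full ADCP cross-section of backscatter into concentration, and hence flux, fields.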

  16. Discrimination techniques employing both reflective and thermal multispectral signals. [for remote sensor technology

    Science.gov (United States)

    Malila, W. A.; Crane, R. B.; Richardson, W.

    1973-01-01

    Recent improvements in remote sensor technology carry implications for data processing. Multispectral line scanners now exist that can collect data simultaneously and in registration in multiple channels at both reflective and thermal (emissive) wavelengths. Progress in dealing with two resultant recognition processing problems is discussed: (1) more channels mean higher processing costs; to combat these costs, a new and faster procedure for selecting subsets of channels has been developed. (2) Differences between thermal and reflective characteristics influence recognition processing; to illustrate the magnitude of these differences, some explanatory calculations are presented. Also introduced is a different way to process multispectral scanner data, namely radiation balance mapping and related procedures. Techniques and potentials are discussed and examples presented.

  17. A Generic Current Mode Design for Multifunction Grounded Capacitor Filters Employing Log-Domain Technique

    Directory of Open Access Journals (Sweden)

    N. A. Shah

    2011-01-01

    Full Text Available A generic design (GD) for realizing an nth-order log-domain multifunction filter (MFF), which can yield four possible stable filter configurations, each offering simultaneously lowpass (LP), highpass (HP), and bandpass (BP) frequency responses, is presented. These filters are very simple, consisting of merely a few exponential transconductor cells and capacitors; all elements are grounded and capable of absorbing the shunt parasitic capacitances, the responses are electronically tuneable, and the design is suitable for monolithic integration. Furthermore, being designed using the log-domain technique, it offers all of its advantages. As an example, 5th-order MFFs are designed in each case and their performances are evaluated through simulation. Lastly, a comparative study of the MFFs is also carried out, which helps in selecting a better high-order MFF for a given application.

  18. Does the technique employed for skin temperature assessment alter outcomes? A systematic review

    International Nuclear Information System (INIS)

    Bach, Aaron J E; Stewart, Ian B; Minett, Geoffrey M; Costello, Joseph T

    2015-01-01

    Skin temperature is an important physiological measure that can reflect the presence of illness and injury, as well as provide insight into the localised interactions between the body and the environment. The aim of this systematic review was to analyse the agreement between conductive and infrared means of assessing skin temperature, which are commonly employed in clinical, occupational, sports medicine, public health and research settings. Full-text eligibility was determined independently by two reviewers. Studies meeting the following criteria were included in the review: (1) the literature was written in English, (2) participants were human (in vivo), (3) skin surface temperature was assessed at the same site, (4) with at least two commercially available devices employed (one conductive and one infrared), and (5) skin temperature data were reported in the study. A computerised search of four electronic databases, using a combination of 21 keywords, and citation tracking was performed in January 2015. A total of 8,602 records were returned. Methodological quality was assessed by two authors independently, using the Cochrane risk of bias tool. A total of 16 articles (n = 245) met the inclusion criteria. Devices were classified to be in agreement if they met the clinically meaningful recommendations of mean differences within ±0.5 °C and limits of agreement of ±1.0 °C. Twelve of the included studies found mean differences greater than ±0.5 °C between conductive and infrared devices. In the presence of an external stimulus (e.g. exercise and/or heat), five studies found exacerbated measurement differences between conductive and infrared devices. This is the first review that has attempted to investigate the presence of any systematic bias between infrared and conductive measures by collectively evaluating the current evidence base. There was also a consistently high risk of bias across the studies, in terms of sample size, random sequence generation, allocation

  19. Off-the-job training for VATS employing anatomically correct lung models.

    Science.gov (United States)

    Obuchi, Toshiro; Imakiire, Takayuki; Miyahara, Sou; Nakashima, Hiroyasu; Hamanaka, Wakako; Yanagisawa, Jun; Hamatake, Daisuke; Shiraishi, Takeshi; Moriyama, Shigeharu; Iwasaki, Akinori

    2012-02-01

    We evaluated our simulated major lung resection employing anatomically correct lung models as "off-the-job training" for video-assisted thoracic surgery trainees. A total of 76 surgeons voluntarily participated in our study. They performed video-assisted thoracic surgical lobectomy employing anatomically correct lung models, which are made of sponges so that vessels and bronchi can be cut using usual surgical techniques with typical forceps. After the simulation surgery, participants answered questionnaires on a visual analogue scale, in terms of their level of interest and the reality of our training method as off-the-job training for trainees. We considered that the closer a score was to 10, the more useful our method would be for training new surgeons. Regarding the appeal or level of interest in this simulation surgery, the mean score was 8.3 of 10, and regarding reality, it was 7.0. The participants could feel some of the real sensations of the surgery and seemed to be satisfied to perform the simulation lobectomy. Our training method is considered to be suitable as an appropriate type of surgical off-the-job training.

  20. Employment of the technique of radiotracers for analysis of industrial filters

    International Nuclear Information System (INIS)

    Ramos, Vitor Santos; Crispim, Verginia Reis

    2007-01-01

    The main aim of this work is to develop a methodology to evaluate the characteristics of the porous media in filters using the radiotracer technique. To do this, an experimental prototype filter was developed, made up of an acrylic cylinder mounted vertically and supported on the lower side by a controlled leaking valve. Two filter media (acrylic spheres and silica crystals) were used to check the movement of water through the porous media, using ¹²³I in its MIBG (iodine-123-meta-iodobenzylguanidine) form. Upstream of the filter, an instantaneous injection of the tracer makes it possible to follow the passage of the radioactive cloud with two NaI (2″ × 2″) scintillation detectors positioned before and immediately after the cylinder containing the filtering element (porous medium). The curves recorded by the detectors on the passage of the radioactive cloud are analyzed through statistical functions using the weighted-moments method, which makes it possible to calculate the residence time (the amount of time the tracer takes to pass completely through the filter) via the equation of dispersion in tubular flow and the one-directional flow of the radiotracer in the porous medium. (author)
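
    The weighted-moments analysis reduces, at its simplest, to comparing the first moments (centroids) of the upstream and downstream detector curves. The sketch below uses synthetic Gaussian-like responses, assuming nothing about the actual experiment.

```python
import numpy as np

t = np.linspace(0.0, 60.0, 601)                       # time axis, s

def first_moment(t, c):
    """Centroid of a detector response: sum(t*C) / sum(C)."""
    return np.sum(t * c) / np.sum(c)

c_in  = np.exp(-0.5 * ((t - 10.0) / 1.5) ** 2)        # upstream detector
c_out = 0.6 * np.exp(-0.5 * ((t - 25.0) / 3.0) ** 2)  # downstream, dispersed

residence_time = first_moment(t, c_out) - first_moment(t, c_in)
print(round(residence_time, 2))   # -> 15.0
```

    Higher moments of the same curves characterize the dispersion, feeding the tubular-flow dispersion model mentioned above.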

  1. Development of cobalt ferrite powder preparation employing the sol-gel technique and its structural characterization

    International Nuclear Information System (INIS)

    Sajjia, M.; Oubaha, M.; Prescott, T.; Olabi, A.G.

    2010-01-01

    Research highlights: This work focuses on the sol-gel process and the effects that the initial parameters, in addition to the heat treatment, have on the final product, the cobalt ferrite powder. Particular interest is devoted to understanding how the crosslinker and the chelating agent work and affect the final product. - Abstract: This work focuses on the development of a method to make cobalt ferrite powder using the sol-gel process. A particular emphasis is devoted to understanding the role of the chemical parameters involved in the sol-gel technique, and of the heat treatment, on the structures and morphologies of the materials obtained. Several samples of cobalt ferrite powder were obtained by varying the initial parameters of the process in addition to the heat treatment temperature. X-ray diffraction and scanning electron microscopy were used to identify the structure and morphology of the samples, demonstrating the influence of the initial parameters. DTA/TGA was carried out on two standard samples to identify important reaction temperatures during the heat treatment. The average size of the nanocrystallites was estimated for one sample from the full width at half maximum (FWHM) of the strongest X-ray diffraction (XRD) peak. It has been found that the chelating agent and the crosslinker have a critical influence on the resultant structure, the particle size and the particle size distribution.
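
    A crystallite-size estimate from the FWHM of the strongest XRD peak conventionally uses the Scherrer equation, D = Kλ/(β cos θ). The peak position and width below are illustrative assumptions, not values from the paper.

```python
import numpy as np

K = 0.9                      # shape factor
lam = 1.5406e-10             # Cu K-alpha wavelength, m
fwhm_deg = 0.45              # FWHM of the strongest peak, degrees 2-theta
two_theta_deg = 35.4         # peak position, degrees (spinel (311)-like)

beta = np.radians(fwhm_deg)              # FWHM in radians
theta = np.radians(two_theta_deg / 2.0)  # Bragg angle
D = K * lam / (beta * np.cos(theta))     # Scherrer crystallite size, m
print(round(D * 1e9, 1))                 # size in nm -> 18.5
```

    Instrumental broadening would normally be subtracted from the measured FWHM before applying the equation.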

  2. Optical Coherence Tomography Angiography: Employing a Novel Technique for Investigation in Vogt-Koyanagi-Harada Disease.

    Science.gov (United States)

    Giannakouras, Panagiotis; Andreanos, Konstantinos; Giavi, Barbara; Diagourtas, Andreas

    2017-01-01

    To report a case of Vogt-Koyanagi-Harada (VKH) disease and describe the imaging findings by means of optical coherence tomography angiography (OCTA). Medical and ophthalmological history, ophthalmological examination, laboratory evaluation, B-scan ultrasonography, fluorescein and indocyanine green angiography, and optical coherence tomography (OCT) were performed at baseline, as well as OCTA. A 50-year-old healthy female presented with decreased vision in both eyes. A Topcon DRI OCT Triton Plus swept-source OCT system was used to visualize and evaluate the retinal and choroidal vascular plexus. Patchy and confluent dark areas in the superficial and deep retinal capillary plexus and choriocapillaris corresponded to areas of hypoperfusion, interpreted as areas of ischemia. VKH disease is characterized by ocular, neurological, and integumentary findings in its complete form. We present a case of incomplete disease in a 50-year-old female evaluated by means of OCTA, a novel technique that provides depth-resolved images of the retinal and choroidal microvasculature without dye injection, allowing better visualization and detailed evaluation of the retinal and choroidal vascular plexus.

  3. Optical Coherence Tomography Angiography: Employing a Novel Technique for Investigation in Vogt-Koyanagi-Harada Disease

    Directory of Open Access Journals (Sweden)

    Panagiotis Giannakouras

    2017-07-01

    Full Text Available Purpose: To report a case of Vogt-Koyanagi-Harada (VKH) disease and describe the imaging findings by means of optical coherence tomography angiography (OCTA). Methods: Medical and ophthalmological history, ophthalmological examination, laboratory evaluation, B-scan ultrasonography, fluorescein and indocyanine green angiography, and optical coherence tomography (OCT) were performed at baseline, as well as OCTA. Results: A 50-year-old healthy female presented with decreased vision in both eyes. A Topcon DRI OCT Triton Plus swept-source OCT system was used to visualize and evaluate the retinal and choroidal vascular plexus. Patchy and confluent dark areas in the superficial and deep retinal capillary plexus and choriocapillaris corresponded to areas of hypoperfusion, interpreted as areas of ischemia. Conclusions and Importance: VKH disease is characterized by ocular, neurological, and integumentary findings in its complete form. We present a case of incomplete disease in a 50-year-old female evaluated by means of OCTA, a novel technique that provides depth-resolved images of the retinal and choroidal microvasculature without dye injection, allowing better visualization and detailed evaluation of the retinal and choroidal vascular plexus.

  4. Modeling energy fluxes in heterogeneous landscapes employing a mosaic approach

    Science.gov (United States)

    Klein, Christian; Thieme, Christoph; Priesack, Eckart

    2015-04-01

    Recent studies show that uncertainties in regional and global climate and weather simulations are partly due to inadequate descriptions of the energy flux exchanges between the land surface and the atmosphere. One major shortcoming is the limitation of the grid-cell resolution, which is recommended to be at least about 3x3 km² in most models due to limitations in the model physics. To represent each individual grid cell, most models select one dominant soil type and one dominant land use type. This resolution, however, is often too coarse in regions where the spatial diversity of soil and land use types is high, e.g. in Central Europe. An elegant method to avoid the shortcoming of grid-cell resolution is the so-called mosaic approach. This approach is part of the recently developed ecosystem model framework Expert-N 5.0. The aim of this study was to analyze the impact of the characteristics of two managed fields, planted with winter wheat and potato, on the near-surface soil moisture and on the near-surface energy flux exchanges at the soil-plant-atmosphere interface. The simulated energy fluxes were compared with eddy flux tower measurements between the respective fields at the research farm Scheyern, north-west of Munich, Germany. To perform these simulations, we coupled the ecosystem model Expert-N 5.0 to an analytical footprint model. The coupled model system has the ability to calculate the mixing ratio of the surface energy fluxes at a given point within one grid cell (in this case at the flux tower between the two fields). This approach accounts for the differences between the two soil types, land use managements, and canopy properties due to footprint size dynamics. Our preliminary simulation results show that a mosaic approach can improve the modeling and analysis of energy fluxes when the land surface is heterogeneous. In this case our applied method is a promising approach to extend weather and climate models on the regional and global scale.
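
    The core of the mosaic idea — weighting the fluxes of the contributing surface tiles by their share of the footprint — can be sketched as follows. This is a minimal illustration with hypothetical flux values and weights, not the Expert-N 5.0 implementation.

```python
# Minimal sketch of the mosaic approach: the flux "seen" at a point (e.g. a
# flux tower) is the footprint-weighted mean of the fluxes of the contributing
# tiles. Tile fluxes and weights below are hypothetical.

def mosaic_flux(tile_fluxes, footprint_weights):
    """Footprint-weighted mean flux (weights need not be normalized)."""
    total_w = sum(footprint_weights)
    return sum(f * w for f, w in zip(tile_fluxes, footprint_weights)) / total_w

# Example: the wheat field contributes 70% of the footprint, the potato 30%.
latent_heat = {"wheat": 120.0, "potato": 180.0}   # W/m^2, hypothetical
flux = mosaic_flux([latent_heat["wheat"], latent_heat["potato"]], [0.7, 0.3])
print(flux)  # 138.0
```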

  5. Absorbed dose in fibrotic microenvironment models employing Monte Carlo simulation

    International Nuclear Information System (INIS)

    Zambrano Ramírez, O.D.; Rojas Calderón, E.L.; Azorín Vega, E.P.; Ferro Flores, G.; Martínez Caballero, E.

    2015-01-01

    The presence or absence of fibrosis and, even more, the multimeric and multivalent nature of the radiopharmaceutical have recently been reported to have an effect on the radiation absorbed dose in tumor microenvironment models. Fibroblast and myofibroblast cells produce the extracellular matrix by the secretion of proteins which provide structural and biochemical support to cells. The reactive and reparative mechanisms triggered during the inflammatory process cause the production and deposition of extracellular matrix proteins; the abnormal, excessive growth of the connective tissue leads to fibrosis. In this work, microenvironment (either fibrotic or not fibrotic) models composed of seven spheres representing cancer cells of 10 μm in diameter, each with a 5 μm diameter inner sphere (cell nucleus), were created in two distinct radiation transport codes (PENELOPE and MCNP). The purpose of creating these models was to determine the radiation absorbed dose in the nucleus of cancer cells, based on previously reported retention percentages (by HeLa cells) of the 177 Lu-Tyr 3 -octreotate (monomeric) and 177 Lu-Tyr 3 -octreotate-AuNP (multimeric) radiopharmaceuticals. A comparison of the results between PENELOPE and MCNP was made, and good agreement between the codes was found. The percent difference between the increase percentages of the absorbed dose in the not fibrotic model with respect to the fibrotic model was found to be under 1% for both radiopharmaceuticals and both codes. (authors)

  6. Self-employment in an equilibrium model of the labor market

    Directory of Open Access Journals (Sweden)

    Jake Bradley

    2016-06-01

    Full Text Available Abstract Self-employed workers account for between 8 and 30 % of participants in the labor markets of OECD countries (Blanchflower, Self-employment: more may not be better, 2004). This paper develops and estimates a general equilibrium model of the labor market that accounts for this sizable proportion. The model incorporates self-employed workers, some of whom hire paid employees in the market. Employment rates and earnings distributions are determined endogenously and are estimated to match their empirical counterparts. The model is estimated using the British Household Panel Survey (BHPS). The model is able to estimate nonpecuniary amenities associated with employment in different labor market states, accounting for both different employment dynamics within each state and the misreporting of earnings by self-employed workers. Structural parameter estimates are then used to assess the impact of an increase in the generosity of unemployment benefits on the aggregate employment rate. Findings suggest that modeling the self-employed, some of whom hire paid employees, implies that small increases in unemployment benefits lead to an expansion in aggregate employment. JEL Classification J21, J24, J28, J64

  7. Landsat analysis of tropical forest succession employing a terrain model

    Science.gov (United States)

    Barringer, T. H.; Robinson, V. B.; Coiner, J. C.; Bruce, R. C.

    1980-01-01

    Landsat multispectral scanner (MSS) data have yielded a dual classification of rain forest and shadow in an analysis of a semi-deciduous forest on Mindoro Island, Philippines. Both a spatial terrain model, using a fifth-order polynomial trend surface analysis to quantitatively estimate the general spatial variation in the data set, and a spectral terrain model, based on the MSS data, have been set up. A discriminant analysis, using both sets of data, has suggested that shadowing effects may be due primarily to local variations in the spectral regions and can therefore be compensated for through the decomposition of the spatial variation in both the elevation and MSS data.
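
    A polynomial trend surface of the kind used for the spatial terrain model is an ordinary least-squares fit of elevation (or spectral response) against map coordinates. The sketch below fits a first-order surface to synthetic data for brevity (the paper uses a fifth-order surface); all names and values are illustrative.

```python
import numpy as np

# Illustrative trend-surface fit by least squares. The abstract's analysis
# uses a fifth-order surface; a first-order plane is shown here for brevity,
# and the elevation data are synthetic.

def trend_surface(x, y, z, order=1):
    """Fit a polynomial trend surface z ~ sum c_ij * x^i * y^j, i+j <= order."""
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return terms, coeffs

# Synthetic elevations lying exactly on the plane z = 2 + 0.5x - 0.25y
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 10, 50), rng.uniform(0, 10, 50)
z = 2 + 0.5 * x - 0.25 * y
terms, coeffs = trend_surface(x, y, z, order=1)
print(dict(zip(terms, np.round(coeffs, 3))))
```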

  8. Employment, Production and Consumption model: Patterns of phase transitions

    Czech Academy of Sciences Publication Activity Database

    Lavička, H.; Lin, L.; Novotný, Jan

    2010-01-01

    Roč. 389, č. 8 (2010), s. 1708-1720 ISSN 0378-4371 Institutional research plan: CEZ:AV0Z10480505 Keywords : EPC * Agent based model * Phase transition Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 1.521, year: 2010

  9. A Simple Forecasting Model Linking Macroeconomic Policy to Industrial Employment Demand.

    Science.gov (United States)

    Malley, James R.; Hady, Thomas F.

    The study further detailed a model linking monetary and fiscal policy to industrial employment in metropolitan and nonmetropolitan areas of four United States regions. The model was used to simulate the impacts on area and regional employment of three events in the economy: changing real gross national product (GNP) via monetary policy, holding the…

  10. An Alternative Analog Circuit Design Methodology Employing Integrated Artificial Intelligence Techniques

    Science.gov (United States)

    Tuttle, Jeffery L.

    In consideration of the computer processing power now available to the designer, an alternative analog circuit design methodology is proposed. Computer memory capacities no longer require the reduction of the transistor operational characteristics to an imprecise formulation. Therefore, it is proposed that transistor modelling be abandoned in favor of fully characterized transistor data libraries. Secondly, availability of the transistor libraries would facilitate an automated selection of the most appropriate device(s) for the circuit being designed. More specifically, a preprocessor computer program to a more sophisticated circuit simulator (e.g. SPICE) is developed to assist the designer in developing the basic circuit topology and the selection of the most appropriate transistor. Once this is achieved, the circuit topology and selected transistor data library would be downloaded to the simulator for full circuit operational characterization and subsequent design modifications. It is recognized that the design process is enhanced by the use of heuristics as applied to iterative design results. Accordingly, an artificial intelligence (AI) interface is developed to assist the designer in applying the preprocessor results. To demonstrate the retrofitability of the AI interface to established programs, the interface is specifically designed to be as non-intrusive to the host code as possible. Implementation of the proposed methodology offers the potential to speed the design process, since the preprocessor both minimizes the required number of simulator runs and provides a higher acceptance potential of the initial and subsequent simulator runs. Secondly, part count reductions may be realizable since the circuit topologies are not as strongly driven by transistor limitations. Thirdly, the predicted results should more closely match actual circuit operations since the inadequacies of the transistor models have been virtually eliminated. Finally, the AI interface

  11. Trends in demolition planning and in the cutting techniques employed; Trends bei der Rueckbauplanung und den verwendeten Schneidverfahren

    Energy Technology Data Exchange (ETDEWEB)

    Brueggemann, Pascal [NUKEM Technologies GmbH, Alzenau (Germany)

    2009-08-15

    The past couple of years have seen the continuous further development of cutting techniques employed in demolishing nuclear facilities. This trend is continuing. After mostly mechanical cutting techniques had been used initially, there has been a tendency in the past few years to make increasing use of thermal cutting. Reasons for this preference are the reduced space requirement and the advantages this technique offers in remote handling because of the low weight and the small dimensions of the cutting heads, and the high flexibility in materials to be cut. This results in greater planning reliability in calculating quotations and performing the demolition work. The emissions generated by these techniques are manageable reliably at low additional expense for the required extraction and filter systems. Thermal cutting is likely to achieve wide use in dismantling and demolition of nuclear plant components also in the future. Advanced developments in computer hardware and software allow more and more detailed planning of demolition work. While, in the past, CAD had been applied only to generate two-dimensional drawings, current 3D applications allow animated representations of entire demolition scenes with high accuracy in detail. In this way, potential problems associated with handling large components or using manipulators for cutting can be detected and solved much more easily in the preparatory phase. Although current CAD use can be no substitute for mockup tests, it does help to enhance clearly reliability in planning prior to running mockup tests. (orig.)

  12. Development of multi-channel high power rectangular RF window for LHCD system employing high temperature vacuum brazing technique

    International Nuclear Information System (INIS)

    Sharma, P K; Ambulkar, K K; Parmar, P R; Virani, C G; Thakur, A L; Joshi, L M; Nangru, S C

    2010-01-01

    A 3.7 GHz, 120 kW (pulsed) lower hybrid current drive (LHCD) system is employed to drive non-inductive plasma current in the ADITYA tokamak. The rf power is coupled to the plasma through a grill antenna placed in a vacuum environment. A vacuum break between the pressurized transmission line and the grill antenna is achieved with the help of a multi-channel (eight-channel) rectangular RF vacuum window. The phasing between adjacent channels of the 8-channel window (arranged in two rows) is important for launching lower hybrid waves, and each channel should have an independent vacuum window so that the phase information is retained. The geometrical parameters of the grill antenna, like periodicity (9 mm) and channel dimensions (cross-sectional dimension of 76 mm x 7 mm), are to be maintained. These design constraints demanded the development of a multi-channel rectangular RF vacuum window. To handle rf losses and thermal effects, a high-temperature vacuum brazing technique is desired. Based on the above requirements we have successfully developed a multi-channel rectangular rf vacuum window employing the high-temperature vacuum brazing technique. During the development process we optimized the chemical processing parameters, the brazing process parameters, and the jigs and fixtures for high-temperature brazing and leak testing. Finally, the window was tested for low-power rf performance using a VNA. In this paper we present the development of the said window in detail along with its mechanical, vacuum and rf performances.

  13. Development of multi-channel high power rectangular RF window for LHCD system employing high temperature vacuum brazing technique

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, P K; Ambulkar, K K; Parmar, P R; Virani, C G; Thakur, A L [Institute for Plasma Research, Bhat, Gandhinagar 382428 (India); Joshi, L M; Nangru, S C, E-mail: pramod@ipr.res.i [Central Electronics Engineering Research Institute, Pilani, Rajasthan 333 031 (India)

    2010-02-01

    A 3.7 GHz, 120 kW (pulsed) lower hybrid current drive (LHCD) system is employed to drive non-inductive plasma current in the ADITYA tokamak. The rf power is coupled to the plasma through a grill antenna placed in a vacuum environment. A vacuum break between the pressurized transmission line and the grill antenna is achieved with the help of a multi-channel (eight-channel) rectangular RF vacuum window. The phasing between adjacent channels of the 8-channel window (arranged in two rows) is important for launching lower hybrid waves, and each channel should have an independent vacuum window so that the phase information is retained. The geometrical parameters of the grill antenna, like periodicity (9 mm) and channel dimensions (cross-sectional dimension of 76 mm x 7 mm), are to be maintained. These design constraints demanded the development of a multi-channel rectangular RF vacuum window. To handle rf losses and thermal effects, a high-temperature vacuum brazing technique is desired. Based on the above requirements we have successfully developed a multi-channel rectangular rf vacuum window employing the high-temperature vacuum brazing technique. During the development process we optimized the chemical processing parameters, the brazing process parameters, and the jigs and fixtures for high-temperature brazing and leak testing. Finally, the window was tested for low-power rf performance using a VNA. In this paper we present the development of the said window in detail along with its mechanical, vacuum and rf performances.

  14. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction among the various blade rows and by blade-to-blade variation of flow properties. The tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in a high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from various blade rows, and non-deterministic (including random) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the design and development cycle of turbomachinery.
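
    The deterministic/non-deterministic split behind these mixing stress terms can be illustrated with a toy triple decomposition: phase-averaging a blade-periodic signal over revolutions isolates the deterministic component, and the residual is the non-deterministic (random) part. The data below are synthetic and the decomposition is a simplified sketch, not the program's analysis.

```python
import math
import statistics

# Toy triple decomposition of a blade-passing-periodic signal into
# time mean + deterministic (phase-averaged) part + random residual.
# Synthetic data: 8 identical revolutions, so the residual should vanish.

n_rev, samples_per_rev = 8, 16
signal = []
for r in range(n_rev):
    for s in range(samples_per_rev):
        periodic = math.sin(2 * math.pi * s / samples_per_rev)  # deterministic
        signal.append(5.0 + periodic)                           # mean = 5.0

time_mean = statistics.fmean(signal)
# Phase average: mean over revolutions at each phase within the revolution.
phase_avg = [statistics.fmean(signal[s::samples_per_rev])
             for s in range(samples_per_rev)]
deterministic = [phase_avg[i % samples_per_rev] - time_mean
                 for i in range(len(signal))]
random_part = [signal[i] - time_mean - deterministic[i]
               for i in range(len(signal))]
print(round(time_mean, 6), round(max(abs(r) for r in random_part), 6))  # 5.0 0.0
```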

  15. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and to propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification
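
    The parameter-selection step described above — flagging parameters whose perturbation has minimal impact on the model response — can be sketched with a finite-difference sensitivity ranking. The model and numbers below are hypothetical toys, not the dissertation's HIV or heat models.

```python
# Toy sketch of sensitivity-based parameter selection (not the dissertation's
# method): parameters whose perturbation barely changes the model response are
# candidates for removal. The model and parameter values are hypothetical.

def model(p):
    # Hypothetical response: strong dependence on p[0], weak on p[1],
    # none on p[2] (so p[2] plays the role of an unidentifiable parameter).
    return p[0] ** 2 + 0.01 * p[1]

def sensitivities(params, h=1e-6):
    """Forward finite-difference sensitivity of the response to each parameter."""
    base = model(params)
    sens = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += h
        sens.append(abs(model(bumped) - base) / h)
    return sens

s = sensitivities([2.0, 1.0, 3.0])
print([round(v, 2) for v in s])  # ~[4.0, 0.01, 0.0] -> p[2] has no influence
```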

  16. Employing X-ray absorption technique for better detector resolution and measurement of low cross-section events

    Science.gov (United States)

    Sharma, Gaurav; Puri, Nitin K.; Kumar, Pravin; Nandi, T.

    2018-03-01

    The versatility of the X-ray absorption technique is experimentally employed to enhance the detector resolution and to recover low-probability transitions obscured in the pile-up region during a beam-foil spectroscopy experiment. Multiple aluminum absorber layers (10 μm each) are used to suppress the pile-up contribution drastically and to restore a weak transition which is about 1.38 × 10⁴ times weaker than the one-electron-one-photon transitions, viz. Kα and Khα. The weak line possibly originates from a two-electron-one-photon transition in He-like Ti. Further, transitions which were obscured in the spectra due to a high intensity ratio are revived by dissimilar line intensity attenuation using this technique. The measured lifetimes of the Kα line with and without intensity attenuation match well within error bars. The present technique has potential implications for understanding the structure of multiple-core-vacant ions and other low cross-section processes in ion-solid collisions.
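
    The dissimilar attenuation that suppresses the strong line relative to the weak one follows the Beer-Lambert law, I = I₀ exp(−μx), applied through the stacked foils. The attenuation coefficients below are hypothetical, chosen only to illustrate how stacking absorbers reshapes the intensity ratio; they are not values from the experiment.

```python
import math

# Sketch of differential attenuation through stacked absorber foils
# (Beer-Lambert law). Coefficients are hypothetical: a softer, strongly
# absorbed line is suppressed far more than a harder, weakly absorbed one,
# which is how an obscured weak transition can be revived.

def transmitted(intensity, mu_per_um, thickness_um, n_foils):
    """I = I0 * exp(-mu * x) through n stacked foils of equal thickness."""
    return intensity * math.exp(-mu_per_um * thickness_um * n_foils)

# Hypothetical: the strong line attenuates 5x faster than the weak line.
strong = transmitted(1.38e4, 0.050, 10.0, 4)  # strong, heavily absorbed line
weak = transmitted(1.0, 0.010, 10.0, 4)       # weak, lightly absorbed line
print(strong / weak)  # intensity ratio after 4 x 10 um of absorber
```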

  17. Investigations of ultrafast charge dynamics in laser-irradiated targets by a self probing technique employing laser driven protons

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, H. [School of Mathematics and Physics, Queen's University Belfast, BT7 1NN (United Kingdom); Kar, S., E-mail: s.kar@qub.ac.uk [School of Mathematics and Physics, Queen's University Belfast, BT7 1NN (United Kingdom); Cantono, G. [School of Mathematics and Physics, Queen's University Belfast, BT7 1NN (United Kingdom); Department of Physics “E. Fermi”, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Consiglio Nazionale delle Ricerche, Istituto Nazionale di Ottica, Research Unit Adriano Gozzini, via G. Moruzzi 1, Pisa 56124 (Italy); Nersisyan, G. [School of Mathematics and Physics, Queen's University Belfast, BT7 1NN (United Kingdom); Brauckmann, S. [Institut für Laser- und Plasmaphysik, Heinrich-Heine-Universität, Düsseldorf (Germany); Doria, D.; Gwynne, D. [School of Mathematics and Physics, Queen's University Belfast, BT7 1NN (United Kingdom); Macchi, A. [Department of Physics “E. Fermi”, Largo B. Pontecorvo 3, 56127 Pisa (Italy); Consiglio Nazionale delle Ricerche, Istituto Nazionale di Ottica, Research Unit Adriano Gozzini, via G. Moruzzi 1, Pisa 56124 (Italy); Naughton, K. [School of Mathematics and Physics, Queen's University Belfast, BT7 1NN (United Kingdom); Willi, O. [Institut für Laser- und Plasmaphysik, Heinrich-Heine-Universität, Düsseldorf (Germany); Lewis, C.L.S.; Borghesi, M. [School of Mathematics and Physics, Queen's University Belfast, BT7 1NN (United Kingdom)]

    2016-09-01

    The divergent and broadband proton beams produced by the target normal sheath acceleration mechanism provide the unique opportunity to probe, in a point-projection imaging scheme, the dynamics of the transient electric and magnetic fields produced during laser-plasma interactions. Commonly such an experimental setup entails two intense laser beams, where the interaction produced by one beam is probed with the protons produced by the second. We present here experimental studies of the ultra-fast charge dynamics along a wire connected to a laser-irradiated target, carried out by employing a ‘self’ proton probing arrangement – i.e. by connecting the wire to the target generating the probe protons. The experimental data show that an electromagnetic pulse carrying a significant amount of charge is launched along the wire, which travels as a unified pulse of tens of ps duration with a velocity close to the speed of light. The experimental capabilities and the analysis procedure of this specific type of proton probing technique are discussed. - Highlights: • Prompt charging of a laser-irradiated target generates ultra-short EM pulses. • Its ultrafast propagation along a wire was studied by the self-proton probing technique. • Self-proton probing is proton probing with a single laser pulse. • The pulse temporal profile and speed along the wire were measured with high resolution.

  18. A Conceptual Model for Employer Training to Manage Employee Counter-Productive Behaviors

    Science.gov (United States)

    Rock, Naomi Spickard

    2011-01-01

    The purpose of this study was to develop a model for employer training to manage employees who possess counter-productive behaviors. With the increasing encouragement for employers to hire without discriminating, the number of individuals with disabilities in the workforce will rise. There is limited training in universities and businesses to…

  19. Testing a Model of Undergraduate Competence in Employability Skills and Its Implications for Stakeholders

    Science.gov (United States)

    Jackson, Denise

    2014-01-01

    Despite the development of employability skills being firmly entrenched in higher education's strategic agenda worldwide, recent graduates' standards in certain skills are not meeting industry expectations. This paper presents and tests a model of undergraduate competence in employability skills. It highlights those factors which impact on…

  20. Developing a Sustainable Practical Model of Graduate Employability for Higher Education

    Science.gov (United States)

    Rufai, Ahmed Umar; Bakar, Ab Rahim Bin; Rashi, Abdullah Bin Mat

    2015-01-01

    The purpose of this paper is to evolve a sustainable practical model of employability skills that is sure to capture relevant learning aspects of a particular occupational discipline to be used as framework for Undergraduate students to develop their employability potentials. The study was conducted in three Universities and Polytechnics each with…

  1. Developing a Sustainable Practical Model of Graduate Employability for Higher Education

    Directory of Open Access Journals (Sweden)

    Ahmed Umar Rufai

    2015-01-01

    Full Text Available The purpose of this paper is to evolve a sustainable practical model of employability skills that captures the relevant learning aspects of a particular occupational discipline, to be used as a framework for undergraduate students to develop their employability potential. The study was conducted in three universities and polytechnics, each with three multi-national companies. Data were collected using semi-structured interviews. Using purposeful sampling, 18 academic staff and 3 professionals representing company employers were selected as the study participants. The study evolved a model that is work-based, explicit in its outcome, fully articulated, and realistic in terms of employability skill experiences. The proposed model can be used to establish a common higher education programme or curriculum that is work-based and skill-experience oriented, that can encourage students in higher education to think about workplace learning more explicitly and reflectively, and that will in turn help them to develop a broad range of knowledge, skills, attitudes, and values, each of which ultimately contributes in some manner to graduate employability. The paper is considered a contribution to the evolution and growth of knowledge on the linkage between higher education and the workplace, through which the ‘skill gap’ between the demands of employment and the level of educational preparation of graduates can be bridged. Keywords: Employability, Higher Education, Graduates, Model/Framework, academic staff, Employers/Professionals

  2. Analysis of the chemical elements behavior in leaves infected with fumagina employing X-ray fluorescence technique

    International Nuclear Information System (INIS)

    Blonski, Maria Selia

    2002-02-01

    Fumagina (sooty mold) is produced by the fungus Capnodium, of the order Dothideales, which lives in association with the yellow aphid (Monelliopsis pecanis); the aphid excretes a heavy load of a sugary substance that supports the sooty fungus development, covering, in some cases totally, the surfaces of leaves, fruits and branches. One of the big problems caused by this disease is the physical blocking of photosynthesis, reducing the photosynthetically active radiation by 98% and the net photosynthesis by 70%. This photosynthesis reduction affects the plants' carbohydrate reserves, resulting in reduced and poor-quality crops. The energy-dispersive X-ray fluorescence (EDXRF) technique was employed to study the fumagina disease. The measurements were performed with infected and healthy leaves of citric plants (orange and lemon trees), collected at the Farm School of the State University of Londrina, and also leaves of an ixora plant from Piracicaba-SP. Through this technique a metabolic disorder related to the Fe and Ca elements was observed. By comparing the EDXRF spectra of the healthy plant leaves with the infected ones, it was verified that the concentration of Ca is higher than the concentration of Fe in leaves of healthy plants, while for the leaves of plants infected with fumagina a higher concentration of Fe was observed. Employing EDXRF and the emission/transmission method for the matrix effect correction, samples of the citric plants were also analysed in pellets. The elements K, Ca, Ti, Mn, Fe, Cu and Zn were quantified. A higher Fe concentration was verified in fumagina samples compared to the samples without fumagina. For all the elements of interest the minimum detection limit found was of the order of μg g⁻¹. (author)

  3. Atmospheric Deposition of Heavy Metals in the Transilvanian Plateau of Romania Studied by the Moss Biomonitoring Technique Employing Nuclear and Related Analytical Techniques and GIS Technology

    International Nuclear Information System (INIS)

    Lucaciu, Adriana; Craciun, L.; Cuculeanu, V.; Eseanu, D.

    2001-01-01

    This paper presents data for 39 elements in 69 moss samples (Hypnum cupressiforme) collected in the Transilvanian Plateau of Romania. These results were obtained in the framework of the project Atmospheric Deposition of Heavy Metals in Rural and Urban Areas of Romania Studied by the Moss Biomonitoring Technique Employing Nuclear and Related Analytical Techniques and GIS Technology, carried out under the auspices of the International Atomic Energy Agency, Vienna. The collected samples were analyzed by ENAA, with the exception of Cu, Cd, and Pb, which were determined by AAS. IAEA certified reference materials were used to ensure the quality of the measurements. The regional concentration variations of selected elements are presented in the form of maps constructed with GIS technology. Extremely high values are observed for elements such as Cu, Zn, As and Sb in parts of this territory affected by local metal industries. The levels are among the highest observed in the world, and could be partly responsible for the unfortunate health situation in some of these areas

  4. Improved ceramic slip casting technique. [application to aircraft model fabrication]

    Science.gov (United States)

    Buck, Gregory M. (Inventor); Vasquez, Peter (Inventor)

    1993-01-01

    A primary concern in modern fluid dynamics research is the experimental verification of computational aerothermodynamic codes. This research requires high precision and detail in the test model employed. Ceramic materials are used for these models because of their low heat conductivity and their survivability at high temperatures. To fabricate such models, slip casting techniques were developed to provide net-form, precision casting capability for high-purity ceramic materials in aqueous solutions. In previous slip casting techniques, block, or flask, molds made of plaster-of-paris were used to draw liquid from the slip material. Upon setting, parts were removed from the flask mold and cured in a kiln at high temperatures. Casting detail was usually limited with this technique -- detailed parts were frequently damaged upon separation from the flask mold, as the molded parts are extremely delicate in the uncured state, and the flask mold is inflexible. Ceramic surfaces were also marred by 'parting lines' caused by mold separation. This adversely affected the aerodynamic surface quality of the model as well. (Parting lines are invariably necessary on or near the leading edges of wings, nosetips, and fins for mold separation. These areas are also critical for flow boundary layer control.) Parting agents used in the casting process also affected surface quality. These agents eventually soaked into the mold or the model, or flaked off when releasing the cast model. Different materials were tried, such as oils, paraffin, and even algae. The algae released best, but some of it remained on the model and imparted an uneven texture and discoloration on the model surface when cured. According to the present invention, a wax pattern for a shell mold is provided, and an aqueous mixture of a calcium sulfate-bonded investment material is applied as a coating to the wax pattern.
The coated wax pattern is then dried, followed by curing to vaporize the wax pattern and leave a shell

  5. Full experimental modelling of a liver tissue mimicking phantom for medical ultrasound studies employing different hydrogels.

    Science.gov (United States)

    Casciaro, Sergio; Conversano, Francesco; Musio, Stefano; Casciaro, Ernesto; Demitri, Christian; Sannino, Alessandro

    2009-04-01

    Tissue mimicking phantoms have been widely reported to be an important tool for development, optimisation and performance testing of ultrasound-based diagnostic techniques. In particular, modern applications of tissue mimicking phantoms often include characterisation of the nonlinear behaviour of experimental ultrasound contrast agents. In such cases, the tissue-mimicking materials should be chosen not only based on the values of their density, speed of sound and attenuation coefficient, but also considering their effect on the appearance of "native harmonics" due to nonlinear distortion of the ultrasound signal during propagation. In a previous paper it was demonstrated that a cellulose-based hydrogel is suitable for simulating the nonlinear acoustical behaviour of liver tissue for thicknesses up to 8 cm. In this paper we present the experimental characterisation of the nonlinear acoustical behaviour of a different, polyethylene glycol diacrylate (PEGDA)-based hydrogel, in order to assess whether and how it can improve the performance and overcome some limitations of the cellulose-based hydrogel as a liver tissue-mimicking material. Samples of pig liver tissue, cellulose-based hydrogel and PEGDA-based hydrogel were insonified in a through-transmission set-up, employing 2.25-MHz pulses with different mechanical index (MI) values. Second harmonic and first harmonic amplitudes were extracted from the spectra of the received signals and their difference was then used to compare sample behaviours. The obtained results show that a new, more accurate combined experimental model of the linear and nonlinear acoustical behaviour of liver tissue is feasible. In fact, a further confirmation of the cellulose-based hydrogel's effectiveness in precisely simulating liver tissue for penetration depths up to 8 cm was provided, and it was also shown that the employment of the PEGDA-based hydrogel can extend the range of useful tissue-mimicking material thicknesses up to 11 cm, moreover allowing a

  6. Structural modeling techniques by finite element method

    International Nuclear Information System (INIS)

    Kang, Yeong Jin; Kim, Geung Hwan; Ju, Gwan Jeong

    1991-01-01

    This book includes: an introduction and table of contents; Chapter 1 (finite element idealization): introduction, summary of the finite element method, equilibrium and compatibility in the finite element solution, degrees of freedom, symmetry and anti-symmetry, modeling guidelines, local analysis, example, references; Chapter 2 (static analysis): structural geometry, finite element models, analysis procedure, modeling guidelines, references; Chapter 3 (dynamic analysis): models for dynamic analysis, dynamic analysis procedures, modeling guidelines.

  7. Relationship Between Self-Assessed Fidelity and Self-Reported Employment in the Individual Placement and Support Model of Supported Employment.

    Science.gov (United States)

    Margolies, Paul J; Humensky, Jennifer L; Chiang, I-Chin; Covell, Nancy H; Jewell, Thomas C; Broadway-Wilson, Karen; Gregory, Raymond; Scannevin, Gary; Dixon, Lisa B

    2018-05-01

    A growing body of literature demonstrates that high-fidelity implementation of the individual placement and support (IPS) model of supported employment increases the chances of achieving desired outcomes. This study examined the relationship between IPS fidelity, as self-reported by program sites, and employment outcomes and determined whether this relationship was maintained over time. A total of 78 outpatient programs in New York State provided data on self-reported IPS fidelity and employment outcomes. Pearson correlations were used to determine the relationship between fidelity scores and competitive employment rates. A mixed-effects model examined the relationship between repeated fidelity and employment measures over time. A significant positive relationship was found between better self-reported IPS fidelity and greater employment. The relationship between IPS fidelity and employment was sustained over time (up to one year). Higher-fidelity implementation of the IPS model, as self-assessed by program sites, was associated with higher employment rates, which were sustained over time.
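    The core statistic in the study above is a Pearson correlation between fidelity scores and employment rates. A minimal sketch with invented data (the study's real scores are not reproduced here):

```python
# Pearson correlation between self-reported IPS fidelity and employment rates.
# The fidelity scores and employment rates below are hypothetical.
import math

def pearson_r(xs, ys):
    """Plain-Python Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-program fidelity scores and competitive employment rates
fidelity   = [90, 105, 110, 120, 125, 130]
employment = [0.18, 0.22, 0.25, 0.31, 0.33, 0.38]

r = pearson_r(fidelity, employment)
print(f"r = {r:.3f}")   # a positive r echoes the reported fidelity-employment link
```

    The study additionally fits a mixed-effects model to handle repeated measures over time, which is beyond this sketch.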

  8. Assessing the Effectiveness of Statistical Classification Techniques in Predicting Future Employment of Participants in the Temporary Assistance for Needy Families Program

    Science.gov (United States)

    Montoya, Isaac D.

    2008-01-01

    Three classification techniques (Chi-square Automatic Interaction Detection [CHAID], Classification and Regression Tree [CART], and discriminant analysis) were tested to determine their accuracy in predicting Temporary Assistance for Needy Families program recipients' future employment. Technique evaluation was based on proportion of correctly…
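    CHAID, CART and discriminant analysis are not available in the Python standard library, so the sketch below substitutes two simple stand-ins on invented toy data: a CART-style decision stump and a nearest-centroid rule (a crude discriminant). The features, labels and split threshold are all assumptions for illustration.

```python
# Toy comparison of two classifier families on hypothetical TANF-style data:
# (months of prior employment, years of education) -> later employment (1/0).
data = [
    ((2, 10), 0), ((1, 9), 0), ((3, 10), 0), ((0, 8), 0),
    ((9, 12), 1), ((12, 13), 1), ((8, 12), 1), ((11, 14), 1),
]

def stump(x):                      # CART-style: a single split on one feature
    return 1 if x[0] >= 6 else 0

def centroid_rule(x):              # discriminant-style: nearest class mean
    means = {}
    for label in (0, 1):
        pts = [f for f, y in data if y == label]
        means[label] = tuple(sum(c) / len(pts) for c in zip(*pts))
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(means, key=lambda lab: dist2(x, means[lab]))

def accuracy(clf):
    """Proportion of correctly classified records, as in the evaluation above."""
    return sum(clf(x) == y for x, y in data) / len(data)

print("stump:", accuracy(stump), "centroid:", accuracy(centroid_rule))
```

    On this cleanly separable toy set both rules classify every record correctly; the study's point is that on real data the proportions differ by technique.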

  9. Modeling of critical experiments employing Raschig rings in uranyl nitrate solution

    International Nuclear Information System (INIS)

    Tanner, J.E.

    1989-01-01

    Four critical experiments employing borated glass rings in concentrated uranyl nitrate solution yielded a k-eff higher by 0.04 when modeled with a flux-weighted, homogenized cross-section set than when modeled with discrete rings. k-eff varied by 0.014 for a 10% boron uncertainty and by up to 0.04 for a 10% packing-fraction uncertainty
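    The flux-weighted homogenization mentioned above amounts to a flux-and-volume-weighted average of each region's cross section. A minimal sketch with invented two-region values (not the experiments' actual data):

```python
# Flux-volume-weighted homogenization of region cross sections.
# The cross sections, fluxes and volumes below are illustrative only.

def homogenize(sigmas, fluxes, volumes):
    """Return the flux-volume-weighted average cross section."""
    num = sum(s * f * v for s, f, v in zip(sigmas, fluxes, volumes))
    den = sum(f * v for f, v in zip(fluxes, volumes))
    return num / den

# Two regions: borated-glass ring vs. uranyl nitrate solution (made-up values)
sigma_hom = homogenize(sigmas=[10.0, 0.5], fluxes=[0.6, 1.0], volumes=[0.2, 0.8])
print(f"homogenized sigma = {sigma_hom:.3f}")
```

    Because the weighting smears the absorber over the whole cell, the homogenized set can over- or under-predict reactivity relative to a discrete-ring model, which is the 0.04 k-eff discrepancy reported above.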

  10. BIOMECHANICAL MODEL OF THE GOLF SWING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Milan Čoh

    2011-08-01

    Golf is an extremely complex game which depends on a number of interconnected factors. One of the most important elements is undoubtedly the golf swing technique. High performance of the golf swing technique is generated by: the level of motor abilities, a high degree of movement control, the level of movement structure stabilisation, morphological characteristics, inter- and intra-muscular coordination, motivation, and concentration. The golf swing technique was investigated using the biomechanical analysis method. Kinematic parameters were registered using two synchronised high-speed cameras at a frequency of 2,000 Hz. The sample of subjects consisted of three professional golf players. The study results showed a relatively high variability of the swing technique. The maximum velocity of the ball after a wood swing ranged from 227 to 233 km/h. The velocity of the ball after an iron swing was lower by 10 km/h on average. The elevation angle of the ball ranged from 11.7 to 15.3 degrees. In the final phase of the golf swing, i.e. the downswing, the trunk rotators play the key role.

  11. Respirometry techniques and activated sludge models

    NARCIS (Netherlands)

    Benes, O.; Spanjers, H.; Holba, M.

    2002-01-01

    This paper aims to explain results of respirometry experiments using Activated Sludge Model No. 1. In cases of insufficient fit of ASM No. 1, further modifications to the model were carried out and the so-called "Enzymatic model" was developed. The best-fit method was used to determine the effect of

  12. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    Autonomous unmanned aerial vehicle (UAV) systems depend on state-estimation feedback to control flight operations. Estimating the correct state improves navigation accuracy and allows the flight mission to be achieved safely. One sensor configuration used for UAV state estimation is the Attitude Heading and Reference System (AHRS), with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
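    A scalar Kalman filter illustrates the predict/update cycle that an EKF applies to AHRS attitude states. The noise parameters, gyro rate and measurements below are invented for illustration; a real AHRS EKF works on a multi-dimensional attitude state:

```python
# Minimal scalar Kalman filter sketch: predict the angle with a gyro rate,
# correct it with a noisy accelerometer-derived angle measurement.

def kalman_step(angle, P, gyro_rate, meas, dt, Q=0.01, R=0.5):
    # Predict: integrate the gyro rate; process noise grows the variance
    angle += gyro_rate * dt
    P += Q
    # Update: blend in the absolute but noisy measurement
    K = P / (P + R)                 # Kalman gain
    angle += K * (meas - angle)
    P *= (1 - K)
    return angle, P

angle, P = 0.0, 1.0                 # initial estimate and variance
for meas in [0.9, 1.1, 1.0, 0.95]:  # hypothetical noisy angle readings (deg)
    angle, P = kalman_step(angle, P, gyro_rate=0.0, meas=meas, dt=0.1)
print(f"estimate = {angle:.2f} deg, variance = {P:.3f}")
```

    Each update shrinks the variance, so the estimate settles near the measurement cluster even though every individual reading is noisy.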

  13. Employing the intelligence cycle process model within the Homeland Security Enterprise

    OpenAIRE

    Stokes, Roger L.

    2013-01-01

    CHDS State/Local The purpose of this thesis was to examine the employment and adherence of the intelligence cycle process model within the National Network of Fusion Centers and the greater Homeland Security Enterprise by exploring the customary intelligence cycle process model established by the United States Intelligence Community (USIC). This thesis revealed there are various intelligence cycle process models used by the USIC and taught to the National Network. Given the numerous differ...

  14. Applying the nominal group technique in an employment relations conflict situation: A case study of a university maintenance section in South Africa

    Directory of Open Access Journals (Sweden)

    Cornelis (Kees) S. van der Waal

    2009-09-01

    After a breakdown in employment relations in the maintenance section of a higher education institution, the authors were asked to intervene in order to try to resolve the employment relations conflict. It was decided to employ the Nominal Group Technique (NGT) as a tool for problem identification during conflict in the workplace. An initial investigation of documentation and interviews with prominent individuals in the organisation was carried out. The NGT was then used in four focus group discussions to determine the important issues as seen by staff members. The NGT facilitates the determination of shared perceptions and the ranking of ideas. The NGT was used in diverse groups, necessitating adaptations to the technique. The perceived causes of the conflict were established. The NGT can be used in a conflict situation in the workplace in order to establish the perceived causes of employment relations conflict.
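    The NGT ranking step aggregates each participant's point allocations into a group ranking. A minimal sketch; the idea names and votes are invented, not taken from the case study:

```python
# NGT-style aggregation: each participant assigns points to their top ideas;
# ideas are ranked by total points across all participants.
from collections import Counter

votes = [  # each dict: idea -> points from one participant (5 = most important)
    {"unclear roles": 5, "pay grading": 3, "communication": 1},
    {"communication": 5, "unclear roles": 4},
    {"pay grading": 5, "communication": 2, "unclear roles": 1},
]

totals = Counter()
for ballot in votes:
    totals.update(ballot)          # sums points per idea across ballots

ranking = totals.most_common()     # highest-scoring perceived cause first
print(ranking)
```

    The top-ranked items are then treated as the group's shared perception of the main causes of the conflict.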

  15. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  16. Materials and techniques for model construction

    Science.gov (United States)

    Wigley, D. A.

    1985-01-01

    The problems confronting the designer of models for cryogenic wind tunnel models are discussed with particular reference to the difficulties in obtaining appropriate data on the mechanical and physical properties of candidate materials and their fabrication technologies. The relationship between strength and toughness of alloys is discussed in the context of maximizing both and avoiding the problem of dimensional and microstructural instability. All major classes of materials used in model construction are considered in some detail and in the Appendix selected numerical data is given for the most relevant materials. The stepped-specimen program to investigate stress-induced dimensional changes in alloys is discussed in detail together with interpretation of the initial results. The methods used to bond model components are considered with particular reference to the selection of filler alloys and temperature cycles to avoid microstructural degradation and loss of mechanical properties.

  17. The Employment of spatial autoregressive models in predicting demand for natural gas

    International Nuclear Information System (INIS)

    Castro, Jorge Henrique de; Silva, Alexandre Pinto Alves da

    2010-01-01

    Developing the natural gas network is a critical success factor for the distribution company. The decision requires estimating demand at a given location 'x' and a future time 't' so that the network offers the best conditions for the return of the capital. In this segment, a typical network industry, a spatial view of the infrastructure associated with the market allows a better evaluation of the business by mitigating costs and risks. In fact, economic models have paid little attention to the question of location, owing to its limited employment by economists. The objective of this article is to analyse the application of a spatial perspective to natural gas demand forecasting and to identify the models that can be employed, observing issues of spatial dependency and heterogeneity, as well as the capacity for mapping the variables associated with the problem. (author)
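    A first-order spatial autoregressive (SAR) model of the kind discussed above can be written y = rho*W*y + X*beta + eps, with reduced form y = (I - rho*W)^-1 X beta. The sketch below uses an invented 3-zone weight matrix and coefficients, not the paper's data:

```python
# SAR reduced-form sketch: demand in each zone is lifted by its neighbours.
import numpy as np

W = np.array([[0.0, 0.5, 0.5],      # row-standardised spatial weight matrix
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
X = np.array([[1.0, 10.0],          # intercept + a made-up local covariate
              [1.0, 20.0],
              [1.0, 30.0]])
beta = np.array([2.0, 0.3])
rho = 0.4                           # spatial spillover strength (assumed)

# Solve (I - rho*W) y = X beta instead of inverting explicitly
y = np.linalg.solve(np.eye(3) - rho * W, X @ beta)
print(np.round(y, 2))
```

    With rho > 0 every zone's predicted demand exceeds its own X beta term, which is the spatial-dependency effect the article argues plain (aspatial) models miss.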

  18. Model measurements for new accelerating techniques

    International Nuclear Information System (INIS)

    Aronson, S.; Haseroth, H.; Knott, J.; Willis, W.

    1988-06-01

    We summarize the work carried out for the past two years, concerning some different ways for achieving high-field gradients, particularly in view of future linear lepton colliders. These studies and measurements on low power models concern the switched power principle and multifrequency excitation of resonant cavities. 15 refs., 12 figs

  19. Self-Observation Model Employing an Instinctive Interface for Classroom Active Learning

    Science.gov (United States)

    Chen, Gwo-Dong; Nurkhamid; Wang, Chin-Yeh; Yang, Shu-Han; Chao, Po-Yao

    2014-01-01

    In a classroom, obtaining active, whole-focused, and engaging learning results from a design is often difficult. In this study, we propose a self-observation model that employs an instinctive interface for classroom active learning. Students can communicate with virtual avatars in the vertical screen and can react naturally according to the…

  20. Employing organizational modeling and simulation of the KC-135 depot's flight controls repair cell

    OpenAIRE

    Paskin, Matthew A.; Trevino, Alice W.; Ferrer, Geraldo; Dillard, John T.

    2008-01-01

    Today’s environment of increased operations tempo is stressing the KC-135 Stratotanker fleet. With an 80-year life span expectancy, effectively maintaining these aircraft is challenging. This research modeled the KC-135 programmed depot maintenance (PDM) flight controls repair cell to identify improvement opportunities within the repair process. Computational organizational modeling (COM) incorporates the human element along with organizational design theory. By employing ...

  1. A probabilistic evaluation procedure for process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2018-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  2. A Review of Domain Modelling and Domain Imaging Techniques in Ferroelectric Crystals

    Directory of Open Access Journals (Sweden)

    John E. Huber

    2011-02-01

    The present paper reviews models of domain structure in ferroelectric crystals, thin films and bulk materials. Common crystal structures in ferroelectric materials are described and the theory of compatible domain patterns is introduced. Applications to multi-rank laminates are presented. Alternative models employing phase-field and related techniques are reviewed. The paper then presents methods of observing ferroelectric domain structure, including optical, polarized light, scanning electron microscopy, X-ray and neutron diffraction, atomic force microscopy and piezo-force microscopy. Use of more than one technique for unambiguous identification of the domain structure is also described.

  3. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...
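    Static (Guyan) condensation, one of the techniques the book compares, eliminates "slave" degrees of freedom via K_red = K_mm - K_ms K_ss^-1 K_sm. A minimal sketch on a made-up 3-DOF spring-chain stiffness matrix:

```python
# Guyan (static) condensation of a 3-DOF stiffness matrix down to 2 master DOFs.
# The stiffness matrix below is an illustrative spring chain, not from the book.
import numpy as np

K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])

m = [0, 2]          # master DOFs kept in the reduced model
s = [1]             # slave DOF condensed out

K_mm = K[np.ix_(m, m)]
K_ms = K[np.ix_(m, s)]
K_sm = K[np.ix_(s, m)]
K_ss = K[np.ix_(s, s)]

# Reduced stiffness: exact for static loads applied at the master DOFs
K_red = K_mm - K_ms @ np.linalg.inv(K_ss) @ K_sm
print(K_red)
```

    The reduced matrix reproduces the static response at the master DOFs exactly; the dynamic condensation variants the book surveys refine this idea for eigenvalue problems.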

  4. AUTHENTIC SELF-ASSESSMENT MODEL FOR DEVELOPING EMPLOYABILITY SKILLS OF STUDENTS IN HIGHER VOCATIONAL EDUCATION

    Directory of Open Access Journals (Sweden)

    I Made Suarta

    2015-06-01

    AUTHENTIC SELF-ASSESSMENT MODEL FOR DEVELOPING EMPLOYABILITY SKILLS OF STUDENTS IN HIGHER VOCATIONAL EDUCATION. Abstract: The purpose of this research is to develop assessment tools to evaluate achievement of employability skills integrated into the learning of database applications. The assessment model developed is a combination of self-assessment and authentic assessment, proposed as an authentic self-assessment model. The steps in developing the authentic self-assessment model include: identifying the standards, selecting an authentic task, identifying the criteria for the task, and creating the rubric. The assessment tools developed include: (1) a problem-solving skills assessment model, (2) a self-management skills assessment model, and (3) a database applications competence assessment model. This model can be used to assess cognitive, affective, and psychomotor achievement. The results indicate that achievement of problem-solving and self-management ability was in the good category, and competence in designing conceptual and logical databases was in the high category. This model also meets the basic principles of assessment, i.e.: validity, reliability, focus on competencies, comprehensiveness, objectivity, and the principle of educating. Keywords: authentic assessment, self-assessment, problem solving skills, self-management skills, vocational education

  5. EXCHANGE-RATES FORECASTING: EXPONENTIAL SMOOTHING TECHNIQUES AND ARIMA MODELS

    Directory of Open Access Journals (Sweden)

    Dezsi Eva

    2011-07-01

    Exchange rates forecasting is, and has been, a challenging task in finance. Statistical and econometric models are widely used in the analysis and forecasting of foreign exchange rates. This paper investigates the behavior of the daily exchange rates of the Romanian Leu against the Euro, United States Dollar, British Pound, Japanese Yen, Chinese Renminbi and Russian Ruble. Smoothing techniques are generated and compared with each other. These models include the Simple Exponential Smoothing technique, the Double Exponential Smoothing technique, the Simple Holt-Winters and the Additive Holt-Winters techniques, as well as the Autoregressive Integrated Moving Average (ARIMA) model.
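    Simple exponential smoothing, the first of the techniques compared above, recursively blends each new observation into a smoothed level. A minimal sketch; the rate series is invented, not actual RON/EUR data:

```python
# Simple exponential smoothing: level_t = alpha*x_t + (1 - alpha)*level_{t-1}.

def ses(series, alpha):
    """Return the smoothed level after each observation."""
    level = series[0]
    smoothed = [level]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
        smoothed.append(level)
    return smoothed

rates = [4.10, 4.12, 4.15, 4.13, 4.18]     # hypothetical daily RON/EUR quotes
f = ses(rates, alpha=0.3)
print([round(v, 4) for v in f])
```

    The final level serves as the one-step-ahead forecast; the double and Holt-Winters variants the paper compares add trend and seasonal components to this recursion.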

  6. A partnership-based model for embedding employability in urban planning education

    Directory of Open Access Journals (Sweden)

    Neale Blair

    2016-01-01

    This paper proposes a partnership-based model for embedding employability in urban planning education. The model is based on the author’s experiences of implementing an international project which supported the development of employability skills in urban and regional planning education in Malawi. Since independence, urban planners have typically trained outside the country, attending university in the UK and other Commonwealth countries. More recently, the paradigm has shifted towards in-country education delivered by academic staff cognisant with the opportunities and challenges of development in Malawi. There remains, though, a gap between graduate knowledge of the subject and the skills necessary to pursue a professional career in the sector. Although there is no consensus yet on the meaning of employability in the literature, lessons from the project indicate that academic–public–private collaboration helps incorporate in curriculum skills that employers anticipate. Applicability of these principles is however context dependent, particularly in the emerging economy context where institutional capacity may be less developed compared to elsewhere.

  7. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    Campbell and Shiller (1987) proposed a graphical technique for the present value model, which consists of plotting estimates of the spread and the theoretical spread as calculated from the cointegrated vector autoregressive model without imposing the restrictions implied by the present value model. In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectations models and give a general definition of spread...
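    Two of the comparison statistics named above, correlation and variance ratio, are easy to sketch. Both series below are invented for illustration, not estimates from any model:

```python
# Comparing an observed spread with a model-implied (theoretical) spread.
import math

spread  = [1.2, 0.8, -0.3, 0.5, 1.0, -0.6]   # hypothetical observed spread
implied = [1.0, 0.7, -0.2, 0.4, 0.9, -0.5]   # hypothetical theoretical spread

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / math.sqrt(var(xs) * var(ys))

vr = var(implied) / var(spread)              # variance ratio
print(f"corr = {corr(spread, implied):.3f}, variance ratio = {vr:.3f}")
```

    A correlation near one with a variance ratio near one indicates the two spreads move together and have similar magnitude, the informal notion of fit behind the graphical technique.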

  8. Maintaining stability of standalone Micro-Grid by employing electrical and mechanical fault ride through techniques upon fixed speed wind generation systems

    International Nuclear Information System (INIS)

    Kamel, Rashad M.

    2013-01-01

    Highlights: • This paper presents two fault ride through techniques for fixed speed wind systems. • The first technique is implemented by inserting a series resistance at the terminals of the wind generator. • The second technique is performed by adapting the gear ratio to spill some mechanical power. - Abstract: This study presents two different Fault Ride Through (FRT) techniques to maintain and restore the stability of a Fixed Speed Wind Generation system (FSWGs) installed in a standalone Micro-Grid (MG). The first technique is an electrical FRT, implemented by inserting a series resistance at the terminals of the FSWGs during a fault to maintain a reasonable terminal voltage and consequently help restore stability. The second controller is a mechanical FRT controller, performed by changing the gear ratio of the wind generation system to spill part of the extracted mechanical power and consequently improve stability. The obtained results proved that each controller is able to maintain the stability of the FSWGs under the most severe disturbance conditions (a 400 ms three-phase fault at the FSWGs terminals). The first controller is faster than the second in restoring FSWGs stability. Superior results and performance are obtained when the two FRT techniques are employed simultaneously. Without either of the two FRT techniques, the FSWGs are not able to maintain or restore stability after fault clearing. Consequently, the MG will lose one of its micro-sources and cannot maintain stability during standalone mode unless a load-shedding strategy is activated. The two proposed controllers are simple, effective, and economically attractive

  9. The advancement of the built environment research through employment of structural equation modeling (SEM)

    Science.gov (United States)

    Wasilah, S.; Fahmyddin, T.

    2018-03-01

    The employment of structural equation modeling (SEM) in research has attracted increasing attention among researchers in the built environment. There is a gap in understanding the attributes, application, and importance of this approach to data analysis in built environment studies. This paper intends to provide a fundamental comprehension of the SEM method in data analysis, unveiling its attributes, employment and significance, and presenting cases that assess associations among variables and constructs. The study uses key literature to grasp the essence of SEM with regard to built environment research. A better acknowledgment of this analytical tool may assist researchers in the built environment in analyzing data under complex research questions and in testing multivariate models in a single study.

  10. Alternative technique of cervical spinal stabilization employing lateral mass plate and screw and intra-articular spacer fixation

    Directory of Open Access Journals (Sweden)

    Atul Goel

    2013-01-01

    Aim: The author discusses an alternative technique of segmental cervical spinal fixation. Material and Methods: The subtleties of the technique are discussed on the basis of experience with three cases with a follow-up of between 30 and 36 months. Technique: The technique involves debridement of the facetal articular cartilage, distraction of the facets, jamming of a 'Goel spacer' into the articular cavity, and fortification of the fixation by lateral mass plate and screw fixation. This 'double-insurance' method of fixation is safe for the vertebral artery, nerve roots and spinal neural structures, and the fixation is strong. Conclusions: The discussed technique is safe and provides a strong fixation and a ground for ultimate arthrodesis.

  11. Theoretical modelling of experimental diagnostic procedures employed during pre-dose dosimetry of quartz

    International Nuclear Information System (INIS)

    Pagonis, V.; Chen, R.; Kitis, G.

    2006-01-01

    The pre-dose technique in thermoluminescence (TL) is used for dating archaeological ceramics and for accident dosimetry. During routine applications of this technique, the sensitisation of the quartz samples is measured as a function of the annealing temperature, yielding the so-called thermal activation characteristic (TAC). The measurement of multiple TACs and the study of the effect of UV-radiation on the TL sensitivity of quartz are important analytical and diagnostic tools. In this paper, it is shown that a modified Zimmerman model for quartz can successfully model the experimental steps undertaken during a measurement of multiple TACs. (authors)

  12. Virtual 3D City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing graphic representations of buildings and other objects in 2.5D or 3D. Generally, three main geomatics approaches are used for generating virtual 3D city models: in the first approach, researchers use conventional techniques such as vector map data, DEMs and aerial images; the second approach is based on high-resolution satellite images with laser scanning; in the third, many researchers use terrestrial images, applying close-range photogrammetry with DSM and texture mapping. This paper starts with an introduction to the various geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on automation (automatic, semi-automatic and manual methods), and another based on data-input techniques (photogrammetry and laser techniques). The paper gives an overview of the techniques for generating virtual 3D city models using geomatics techniques and of the applications of virtual 3D city models, and closes with conclusions, a short discussion of justification and analysis, and present trends in 3D city modeling. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern geomatics techniques play a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3D city models.
Photo-realistic, Scalable, Geo-referenced virtual 3

  13. The psychological contract: is the UK National Health Service a model employer?

    Science.gov (United States)

    Fielden, Sandra; Whiting, Fiona

    2007-05-01

    The UK National Health Service (NHS) is facing recruitment challenges that mean it will need to become an 'employer of choice' if it is to continue to attract high-quality employees. This paper reports the findings from a study focusing on allied health professional staff (n = 67), aimed at establishing the expectations of the NHS inherent in their current psychological contract and to consider whether the government's drive to make the NHS a model employer meets those expectations. The findings show that the most important aspects of the psychological contract were relational and based on the investment made in the employment relationship by both parties. The employment relationship was one of high involvement but also one where transactional contract items, such as pay, were still of some importance. Although the degree of employee satisfaction with the relational content of the psychological contract was relatively positive, there was, nevertheless, a mismatch between levels of importance placed on such aspects of the contract and levels of satisfaction, with employees increasingly placing greater emphasis on those items the NHS is having the greatest difficulty providing. Despite this apparent disparity between employee expectation and the fulfilment of those expectations, the overall health of the psychological contract was still high.

  14. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. The book begins with an introduction to circuit analysis techniques, laws, and frequency and time domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  15. Artificial intelligence techniques for modeling database user behavior

    Science.gov (United States)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering use access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.
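
    The pattern-discovery idea in this record can be illustrated as a simple co-occurrence count over a query log; the `mine_access_patterns` helper, its threshold, and the sample log below are hypothetical sketches, not the system described.

```python
from collections import Counter
from itertools import combinations

def mine_access_patterns(query_log, min_support=2):
    """Count attribute pairs that co-occur in queries; frequent pairs
    suggest columns an adaptive interface could present together
    (an illustrative heuristic, not the paper's actual rule base)."""
    pair_counts = Counter()
    for attrs in query_log:
        for pair in combinations(sorted(set(attrs)), 2):
            pair_counts[pair] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_support}

# a made-up log of attribute sets referenced by successive user queries
log = [
    ["name", "salary"], ["name", "salary", "dept"],
    ["dept", "budget"], ["name", "salary"],
]
patterns = mine_access_patterns(log)
# ("name", "salary") co-occurs in three queries and survives the threshold
```

    A real system would feed such frequent pairs into query optimization or interface layout, as the abstract suggests.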

  16. [Intestinal lengthening techniques: an experimental model in dogs].

    Science.gov (United States)

    Garibay González, Francisco; Díaz Martínez, Daniel Alberto; Valencia Flores, Alejandro; González Hernández, Miguel Angel

    2005-01-01

    To compare two intestinal lengthening procedures in an experimental dog model. Intestinal lengthening is one of the methods of gastrointestinal reconstruction used to treat short bowel syndrome, and the modification of Bianchi's technique is an alternative. The modified technique reduces the number of anastomoses to a single one, thus reducing the risk of leaks and strictures. To our knowledge there is no clinical or experimental report comparing both techniques, so we undertook the present study. Twelve creole dogs underwent the Bianchi intestinal lengthening technique (group A) and another 12 creole dogs of the same breed and weight underwent the modified technique (group B). The groups were compared with respect to operating time, technical difficulty, cost, intestinal lengthening, and anastomosis diameter. There was no statistically significant difference in anastomosis diameter (A = 9.0 mm vs. B = 8.5 mm, p = 0.3846). Operating time (142 min vs. 63 min), cost, and technical difficulty were lower in group B. The anastomoses (of group B) and intestinal segments had good blood supply and were patent along their full length. The Bianchi technique and the modified technique are two reliable alternatives for the treatment of short bowel syndrome; the modified technique improved operating time, cost, and technical difficulty.

  17. Model describing the effect of employment of the United States military in a complex emergency.

    Science.gov (United States)

    MacMillan, Donald S

    2005-01-01

    The end of the Cold War vastly altered the worldwide political landscape. With the loss of a main competitor, the United States (US) military has had to adapt its strategic, operational, and tactical doctrines to an ever-increasing variety of non-traditional missions, including humanitarian operations. Complex emergencies (CEs) are defined in this paper from a political and military perspective, various factors that contribute to their development are described, and issues resulting from the employment of US military forces are discussed. A model was developed to illustrate the course of a humanitarian emergency and the potential impact of a military response. The US interventions in Haiti, Northern Iraq, Kosovo, Somalia, Bosnia, and Rwanda serve as examples. A CE develops when there is civil conflict, loss of national governmental authority, a mass population movement, and massive economic failure, each leading to a general decline in food security. The military can alleviate a CE in four ways: (1) provide security for relief efforts; (2) enforce negotiated settlements; (3) provide security for non-combatants; and/or (4) employ logistical capabilities. The model incorporates Norton and Miskel's taxonomy of identifying failing states and helps illustrate the factors that lead to a CE. The model can be used to determine if and when military intervention will have the greatest impact. The model demonstrates that early military intervention and mission assignment within the core competencies of the forces can reverse the course of a CE. Further study will be needed to verify the model.

  18. Predictive models to determine imagery strategies employed by children to judge hand laterality.

    Science.gov (United States)

    Spruijt, Steffie; Jongsma, Marijtje L A; van der Kamp, John; Steenbergen, Bert

    2015-01-01

    A commonly used paradigm to study motor imagery is the hand laterality judgment task. The present study aimed to determine which strategies young children employ to successfully perform this task. Children of 5 to 8 years old (N = 92) judged laterality of back and palm view hand pictures in different rotation angles. Response accuracy and response duration were registered. Response durations of the trials with a correct judgment were fitted to a-priori defined predictive sinusoid models, representing different strategies to successfully perform the hand laterality judgment task. The first model predicted systematic changes in response duration as a function of rotation angle of the displayed hand. The second model predicted that response durations are affected by biomechanical constraints of hand rotation. If observed data could be best described by the first model, this would argue for a mental imagery strategy that does not involve motor processes to solve the task. The second model reflects a motor imagery strategy to solve the task. In line with previous research, we showed an age-related increase in response accuracy and decrease in response duration in children. Observed data for both back and palm view showed that motor imagery strategies were used to perform hand laterality judgments, but that not all the children use these strategies (appropriately) at all times. A direct comparison of response duration patterns across age sheds new light on age-related differences in the strategies employed to solve the task. Importantly, the employment of the motor imagery strategy for successful task performance did not change with age.

  19. Integrated approach to model decomposed flow hydrograph using artificial neural network and conceptual techniques

    Science.gov (United States)

    Jain, Ashu; Srinivasulu, Sanaga

    2006-02-01

    This paper presents the findings of a study aimed at decomposing a flow hydrograph into different segments based on physical concepts in a catchment, and modelling the different segments using different techniques, viz. conceptual techniques and artificial neural networks (ANNs). An integrated modelling framework is proposed, capable of modelling infiltration, base flow, evapotranspiration, soil moisture accounting, and certain segments of the decomposed flow hydrograph using conceptual techniques, and the complex, non-linear, and dynamic rainfall-runoff process using an ANN technique. Specifically, five different multi-layer perceptron (MLP) and two self-organizing map (SOM) models have been developed. The rainfall and streamflow data derived from the Kentucky River catchment were employed to test the proposed methodology and develop all the models. The performance of all the models was evaluated using seven different standard statistical measures. The results obtained in this study indicate that (a) the rainfall-runoff relationship in a large catchment consists of at least three or four different mappings corresponding to different dynamics of the underlying physical processes, (b) an integrated approach that models the different segments of the decomposed flow hydrograph using different techniques is better than a single ANN in modelling the complex, dynamic, non-linear, and fragmented rainfall-runoff process, (c) a simple model based on the concept of flow recession is better than an ANN for modelling the falling limb of a flow hydrograph, and (d) decomposing a flow hydrograph into different segments corresponding to the different dynamics based on physical concepts is better than the soft decomposition obtained using a SOM.
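
    The flow-recession concept that the study found preferable to an ANN for the falling limb can be sketched with the classical linear-reservoir model; the peak flow and recession constant below are made-up values, not the Kentucky River calibration.

```python
import math

def recession_flow(q_peak, k, t):
    """Falling-limb discharge from a linear reservoir: Q(t) = Q_peak * exp(-t / k),
    where k is the recession constant (here in days)."""
    return q_peak * math.exp(-t / k)

# hypothetical peak flow of 120 m^3/s with a 5-day recession constant
flows = [recession_flow(120.0, 5.0, t) for t in range(11)]
# discharge decays smoothly from the peak toward base flow
```

    In the integrated framework, such a conceptual curve would replace the ANN only on the falling-limb segment of the decomposed hydrograph.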

  20. Modelling Technique for Demonstrating Gravity Collapse Structures in Jointed Rock.

    Science.gov (United States)

    Stimpson, B.

    1979-01-01

    Described is a base-friction modeling technique for studying the development of collapse structures in jointed rocks. A moving belt beneath weak material is designed to simulate gravity. A description is given of the model frame construction. (Author/SA)

  1. A pilot modeling technique for handling-qualities research

    Science.gov (United States)

    Hess, R. A.

    1980-01-01

    A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.

  2. Summary on several key techniques in 3D geological modeling.

    Science.gov (United States)

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
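
    Of the key techniques summarized here, spatial interpolation is the most compact to make concrete; below is a minimal inverse-distance-weighting sketch (one common interpolation choice, with an illustrative function name and sample points, not an algorithm from the paper).

```python
def idw_interpolate(points, target, power=2.0):
    """Inverse-distance-weighted estimate of z at `target` from (x, y, z)
    samples, as used when densifying a geological interface surface."""
    num = den = 0.0
    for x, y, z in points:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return z  # target coincides with a sample point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * z
        den += w
    return num / den

# three scattered borehole-style samples of an interface elevation
samples = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0), (0.0, 1.0, 30.0)]
z = idw_interpolate(samples, (0.5, 0.5))
# equidistant from all three samples, so z is their plain average, 20.0
```

    Interpolated values like `z` would populate the nodes of a planar mesh before the surface-intersection step builds closed volumes.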

  3. The stucco decorations from St. Lorenzo in Laino (Como, Italy): The materials and the techniques employed by the 'Magistri Comacini'

    International Nuclear Information System (INIS)

    Rampazzi, Laura; Rizzo, Biagio; Colombo, Chiara; Conti, Claudia; Realini, Marco; Bartolucci, Ugo; Colombini, Maria Perla; Spiriti, Andrea; Facchin, Laura

    2008-01-01

    The paper reports a cross-cutting and systematic approach to the analytical study of Baroque stucco decorations by the Italian artists known as 'Magistri Comacini'. Samples from the Church of St. Lorenzo in Laino (Como, Italy) were analysed using chemical and mineralogical techniques, and the results were interpreted with the help of art historians in order to elucidate the artistic techniques. The characterisation of the binder, aggregate and organic fractions was carried out through optical microscopy, scanning electron microscopy with microprobe, X-ray powder diffraction, Fourier transform infrared spectroscopy, gas chromatography-mass spectrometry and laser ablation mass spectrometry with a plasma source. The results on major, minor and trace components ascertained various recipes. In particular, the investigation of the finishing layers corroborated hypotheses of attribution and chronology and stressed the different skills of the artists working in the building, shedding light on details of the decoration technique

  4. 'Marginal Employment' and the Demand for Heterogenous Labour: Empirical Evidence from a Multi-factor Labour Demand Model for Germany

    OpenAIRE

    Ronny Freier; Viktor Steiner

    2007-01-01

    We develop a structural multi-factor labour demand model which distinguishes between eight labour categories including non-standard types of employment such as marginal employment. The model is estimated for both the number of workers and total working hours using a new panel data set. For unskilled and skilled workers in full-time employment, we find labour demand elasticities similar to previous estimates for the west German economy. Our new estimates of own-wage elasticities for marginal e...

  5. Study of Belousov—Zhabotinsky oscillators in water—acetonitrile medium employing EMF and EPR techniques with o-vanillin, p-vanillin and adrenaline as substrates

    Science.gov (United States)

    Lalitha, P. V.; Ramaswamy, R.; Ramakrishnan, Geetha; Rao, P. Sambasiva

    1994-12-01

    The oscillatory behaviour of three substrates, ortho-vanillin, para-vanillin and adrenaline, in mixed media (water plus 20% acetonitrile) has been investigated using EPR and EMF techniques. All these substrates exhibit two types of oscillation, involving the Br2/Br- and Mn(III)/Mn(II) redox couples. From the oscillatory characteristics (total time, number of oscillations and time per oscillation) obtained by employing these techniques, the reactivities of the vanillins have been correlated. The Field-Koros-Noyes mechanism, suggested for catalysed systems in pure aqueous medium, is established to be applicable even in mixed media.

  6. Technique employed to seal a tube leaking in a heat exchanger of the tube type by explosives with supporting means for the adjacent tubes

    International Nuclear Information System (INIS)

    Larson, G.C.

    1978-01-01

    This invention concerns a technique for sealing a leaking tube in a heat exchanger of the tube and tube-plate type by detonating explosive-activated metal plugs inserted in both ends of the tube. It refers in particular to an apparatus and process in which the deformation or distortion of the adjacent tubes and tube-plate ties under the effect of the explosive forces is significantly reduced.

  7. 3D Modeling Techniques for Print and Digital Media

    Science.gov (United States)

    Stephens, Megan Ashley

    In developing my thesis, I looked to gain skills using ZBrush to create 3D models, 3D scanning, and 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models from other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for the project, but may be of use for later projects. I was able to 3D print using two different techniques as well.

  8. Development of scan analysis techniques employing a small computer. Progress report, August 1, 1974--July 31, 1975

    International Nuclear Information System (INIS)

    Kuhl, D.E.

    1975-01-01

    Progress is reported in the development of equipment and counting techniques for transverse section scanning of the brain following the administration of radiopharmaceuticals to evaluate regional blood flow. The scanning instrument has an array of 32 scintillation detectors that surround the head and scan data are analyzed using a small computer. (U.S.)

  9. The "Robustness" of Vocabulary Intervention in the Public Schools: Targets and Techniques Employed in Speech-Language Therapy

    Science.gov (United States)

    Justice, Laura M.; Schmitt, Mary Beth; Murphy, Kimberly A.; Pratt, Amy; Biancone, Tricia

    2014-01-01

    This study examined vocabulary intervention--in terms of targets and techniques--for children with language impairment receiving speech-language therapy in public schools (i.e., non-fee-paying schools) in the United States. Vocabulary treatments and targets were examined with respect to their alignment with the empirically validated practice of…

  10. A New ABCD Technique to Analyze Business Models & Concepts

    OpenAIRE

    Aithal P. S.; Shailasri V. T.; Suresh Kumar P. M.

    2015-01-01

    Various techniques are used to analyze individual characteristics or organizational effectiveness, such as SWOT analysis, SWOC analysis, PEST analysis, etc. These techniques provide an easy and systematic way of identifying the various issues affecting a system and provide an opportunity for further development. Whereas these provide a broad-based assessment of individual institutions and systems, they suffer limitations when applied to a business context. The success of any business model depends on ...

  11. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
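
    The 'goodness of fit' criteria the paper mentions (BPI-2400, ASHRAE-14) are typically expressed as CV(RMSE) and NMBE; a minimal sketch follows, normalising by n rather than the guidelines' adjusted degrees of freedom, with made-up surrogate data.

```python
import math

def cv_rmse(measured, simulated):
    """Coefficient of variation of the RMSE, in percent (normalised here
    by n, not the guideline's n - p degrees of freedom)."""
    n = len(measured)
    mean_m = sum(measured) / n
    sse = sum((m - s) ** 2 for m, s in zip(measured, simulated))
    return 100.0 * math.sqrt(sse / n) / mean_m

def nmbe(measured, simulated):
    """Normalised mean bias error, in percent."""
    n = len(measured)
    mean_m = sum(measured) / n
    return 100.0 * sum(m - s for m, s in zip(measured, simulated)) / (n * mean_m)

bills = [100.0, 110.0, 90.0]   # surrogate utility-bill data (illustrative)
model = [102.0, 108.0, 95.0]   # calibrated-model predictions (illustrative)
fit = cv_rmse(bills, model)
bias = nmbe(bills, model)
```

    In the proposed testing method, passing such fit thresholds is figure of merit (3); the surrogate-data approach then checks whether a good fit also delivers figures of merit (1) and (2).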

  12. Modeling Techniques for a Computational Efficient Dynamic Turbofan Engine Model

    Directory of Open Access Journals (Sweden)

    Rory A. Roberts

    2014-01-01

    Full Text Available A transient two-stream engine model has been developed. Individual component models developed exclusively in MATLAB/Simulink, including the fan, high-pressure compressor, combustor, high-pressure turbine, low-pressure turbine, plenum volumes, and exit nozzle, have been combined to investigate the behavior of a turbofan two-stream engine. Special attention has been paid to the development of transient capabilities throughout the model, increasing the fidelity of the physics models, eliminating algebraic constraints, and reducing simulation time through enabling the use of advanced numerical solvers. Reducing computation time is paramount for conducting future aircraft system-level design trade studies and optimization. The new engine model is simulated for a fuel perturbation and a specified mission while tracking critical parameters. These results, as well as the simulation times, are presented. The new approach significantly reduces the simulation time.

  13. Ferromagnetic linewidth measurements employing electrodynamic model of the magnetic plasmon resonance

    Science.gov (United States)

    Krupka, Jerzy; Aleshkevych, Pavlo; Salski, Bartlomiej; Kopyt, Pawel

    2018-02-01

    The mode of uniform precession, or Kittel mode, in a magnetized ferromagnetic sphere, has recently been proven to be the magnetic plasmon resonance. In this paper we show how to apply the electrodynamic model of the magnetic plasmon resonance for accurate measurements of the ferromagnetic resonance linewidth ΔH. Two measurement methods are presented. The first one employs Q-factor measurements of the magnetic plasmon resonance coupled to the resonance of an empty metallic cavity. Such coupled modes are known as magnon-polariton modes, i.e. hybridized modes between the collective spin excitation and the cavity excitation. The second one employs direct Q-factor measurements of the magnetic plasmon resonance in a filter setup with two orthogonal semi-loops used for coupling. Q-factor measurements are performed employing a vector network analyser. The methods presented in this paper allow one to extend the measurement range of the ferromagnetic resonance linewidth ΔH well beyond the limits of the commonly used measurement standards in terms of the size of the samples and the lowest measurable linewidths. Samples that can be measured with the newly proposed methods may have larger size as compared to the size of samples that were used in the standard methods restricted by the limits of perturbation theory.

  14. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique for updating numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical models to closely match experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model developed with the experimental results obtained. The variables for the updating can be either material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close relationship can be brought between the experimental and numerical models.
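
    The updating loop can be sketched for the cantilever tip-deflection case: a firefly search over Young's modulus until the predicted deflection matches a measured value. Every numeric constant below (load, geometry, algorithm parameters) is illustrative, not taken from the paper, and this is a minimal variant of the firefly algorithm rather than the authors' implementation.

```python
import math
import random

random.seed(1)

def tip_deflection(E, P=1000.0, L=2.0, I=8e-6):
    """Cantilever tip deflection: delta = P * L**3 / (3 * E * I)."""
    return P * L ** 3 / (3.0 * E * I)

TARGET = tip_deflection(2.0e11)  # stand-in for an experimentally measured deflection

def firefly_search(bounds=(1.0e11, 3.0e11), n=15, iters=60,
                   beta0=1.0, gamma=1e-22, alpha=1.0e9):
    """Minimal firefly search for the Young's modulus whose predicted
    tip deflection matches TARGET."""
    def cost(E):
        return abs(tip_deflection(E) - TARGET)
    pop = [random.uniform(*bounds) for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost(pop[j]) < cost(pop[i]):  # firefly j is "brighter"
                    r2 = (pop[i] - pop[j]) ** 2
                    pop[i] += (beta0 * math.exp(-gamma * r2) * (pop[j] - pop[i])
                               + alpha * (random.random() - 0.5))
        alpha *= 0.97  # damp the random walk as the search converges
    return min(pop, key=cost)

best_E = firefly_search()  # should land near the 2.0e11 Pa used to build TARGET
```

    For the space frame, the same loop would run over several stiffness parameters with natural frequencies as the response.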

  15. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-11-01

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.

  16. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures ... a bound of tu·tq = Ω(lg^(d-1) n). For ball range searching, we get a lower bound of tu·tq = Ω(n^(1-1/d)). The highest previous lower bound proved in the group model does not exceed Ω((lg n / lg lg n)^2) on the maximum of tu and tq. Finally, we present a new technique for proving lower bounds ...

  17. Employing Tropospheric Numerical Weather Prediction Model for High-Precision GNSS Positioning

    Science.gov (United States)

    Alves, Daniele; Gouveia, Tayna; Abreu, Pedro; Magário, Jackes

    2014-05-01

    In the past few years, the need for high-accuracy positioning has been increasing, and spatial technologies have been widely used to meet it. The GNSS (Global Navigation Satellite System) has revolutionized geodetic positioning activities. Among the existing methods one can emphasize Precise Point Positioning (PPP) and network-based positioning. To attain high accuracy with these methods, especially in real time, it is indispensable to model the atmosphere (ionosphere and troposphere) accordingly. Concerning the troposphere, there are empirical models (for example Saastamoinen and Hopfield), but when highly accurate results (errors of a few centimeters) are desired, these models may not be appropriate to the Brazilian reality. To minimize this limitation, NWP (Numerical Weather Prediction) models arise. In Brazil the CPTEC/INPE (Center for Weather Prediction and Climate Studies / Brazilian Institute for Spatial Researches) provides a regional NWP model, currently used to produce Zenithal Tropospheric Delay (ZTD) predictions (http://satelite.cptec.inpe.br/zenital/). The current version, called the eta15km model, has a spatial resolution of 15 km and a temporal resolution of 3 hours. In this paper the main goal is to carry out experiments and analyses concerning the use of the tropospheric NWP model (eta15km model) in PPP and network-based positioning. Concerning PPP, data from dozens of stations over Brazilian territory were used, including the Amazon forest. The results obtained with the NWP model were compared with the Hopfield one; the NWP model presented the best results in all experiments. For network-based positioning, data from the GNSS/SP Network in São Paulo State, Brazil, were used. This network presents the best configuration in the country for this kind of positioning. Currently the network is composed of twenty stations (http://www.fct.unesp.br/#!/pesquisa/grupos-de-estudo-e-pesquisa/gege//gnss-sp-network2789/). The
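
    For context on the empirical models the study compares against (Saastamoinen, Hopfield), the hydrostatic part of the Saastamoinen zenith delay is compact enough to sketch; the surface-pressure, latitude, and height inputs below are illustrative.

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_deg, height_m):
    """Zenith hydrostatic delay (metres) from the Saastamoinen model:
    ZHD = 0.0022768 * P / (1 - 0.00266*cos(2*lat) - 0.00028*h_km)."""
    lat = math.radians(lat_deg)
    f = 1.0 - 0.00266 * math.cos(2.0 * lat) - 0.00028 * (height_m / 1000.0)
    return 0.0022768 * pressure_hpa / f

# standard sea-level pressure at the equator gives roughly 2.3 m of delay
zhd = saastamoinen_zhd(1013.25, 0.0, 0.0)
```

    An NWP-based ZTD product like the eta15km model replaces this closed-form estimate with values derived from forecast pressure, temperature, and humidity fields.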

  18. The 'robustness' of vocabulary intervention in the public schools: targets and techniques employed in speech-language therapy.

    Science.gov (United States)

    Justice, Laura M; Schmitt, Mary Beth; Murphy, Kimberly A; Pratt, Amy; Biancone, Tricia

    2014-01-01

    This study examined vocabulary intervention-in terms of targets and techniques-for children with language impairment receiving speech-language therapy in public schools (i.e., non-fee-paying schools) in the United States. Vocabulary treatments and targets were examined with respect to their alignment with the empirically validated practice of rich vocabulary intervention. Participants were forty-eight 5-7-year-old children participating in kindergarten or the first-grade year of school, all of whom had vocabulary-specific goals on their individualized education programmes. Two therapy sessions per child were coded to determine what vocabulary words were being directly targeted and what techniques were used for each. Study findings showed that the majority of words directly targeted during therapy were lower-level basic vocabulary words (87%) and very few (1%) were academically relevant. On average, three techniques were used per word to promote deep understanding. Interpreting findings against empirical descriptions of rich vocabulary intervention indicates that children were exposed to some but not all aspects of this empirically supported practice. © 2013 Royal College of Speech and Language Therapists.

  19. Cost-benefit analysis of the industrial evaluations employing radioactive tracer techniques in the sugar-cane industry

    International Nuclear Information System (INIS)

    Aguila, D.; Jerez, P.F.

    1998-01-01

    A practice involving radioactivity is justifiable if the benefit it brings is greater than the detriment to health it provokes. This is achieved through optimization of radiological protection on the basis of the ALARA principle (doses must be kept as low as reasonably achievable). Cost-benefit analysis helps in deciding which optimized practice to use. Based on the cost-benefit criterion in the framework of industrial radioprotection, a study of industrial evaluations employing 99mTc and 131I in the Cuban sugar-cane industry was carried out. The results of the analysis demonstrated that the use of 99mTc as a radiotracer is the better option.

  20. Automatic Classification of the Sub-Techniques (Gears) Used in Cross-Country Ski Skating Employing a Mobile Phone

    Directory of Open Access Journals (Sweden)

    Thomas Stöggl

    2014-10-01

    Full Text Available The purpose of the current study was to develop and validate an automatic algorithm for classification of cross-country (XC) ski-skating gears (G) using Smartphone accelerometer data. Eleven XC skiers (seven men, four women) with regional-to-international levels of performance carried out roller skiing trials on a treadmill using fixed gears (G2left, G2right, G3, G4left, G4right) and a 950-m trial using different speeds and inclines, applying gears and sides as they normally would. Gear classification by the Smartphone (on the chest) and based on video recordings were compared. For machine learning, a collective database was compared to individual data. The Smartphone application identified the trials with fixed gears correctly in all cases. In the 950-m trial, participants executed 140 ± 22 cycles as assessed by video analysis, with the automatic Smartphone application giving a similar value. Based on collective data, gears were identified correctly 86.0% ± 8.9% of the time, a value that rose to 90.3% ± 4.1% (P < 0.01) with machine learning from individual data. Classification was most often incorrect during transition between gears, especially to or from G3. Identification was most often correct for skiers who made relatively few transitions between gears. The accuracy of the automatic procedure for identifying G2left, G2right, G3, G4left and G4right was 96%, 90%, 81%, 88% and 94%, respectively. The algorithm identified gears correctly 100% of the time when a single gear was used and 90% of the time when different gears were employed during a variable protocol. This algorithm could be improved with respect to identification of transitions between gears or the side employed within a given gear.

  1. Automatic Classification of the Sub-Techniques (Gears) Used in Cross-Country Ski Skating Employing a Mobile Phone

    Science.gov (United States)

    Stöggl, Thomas; Holst, Anders; Jonasson, Arndt; Andersson, Erik; Wunsch, Tobias; Norström, Christer; Holmberg, Hans-Christer

    2014-01-01

    The purpose of the current study was to develop and validate an automatic algorithm for classification of cross-country (XC) ski-skating gears (G) using Smartphone accelerometer data. Eleven XC skiers (seven men, four women) with regional-to-international levels of performance carried out roller skiing trials on a treadmill using fixed gears (G2left, G2right, G3, G4left, G4right) and a 950-m trial using different speeds and inclines, applying gears and sides as they normally would. Gear classification by the Smartphone (on the chest) and based on video recordings were compared. For machine learning, a collective database was compared to individual data. The Smartphone application identified the trials with fixed gears correctly in all cases. In the 950-m trial, participants executed 140 ± 22 cycles as assessed by video analysis, with the automatic Smartphone application giving a similar value. Based on collective data, gears were identified correctly 86.0% ± 8.9% of the time, a value that rose to 90.3% ± 4.1% (P < 0.01) with machine learning from individual data. Classification was most often incorrect during transition between gears, especially to or from G3. Identification was most often correct for skiers who made relatively few transitions between gears. The accuracy of the automatic procedure for identifying G2left, G2right, G3, G4left and G4right was 96%, 90%, 81%, 88% and 94%, respectively. The algorithm identified gears correctly 100% of the time when a single gear was used and 90% of the time when different gears were employed during a variable protocol. This algorithm could be improved with respect to identification of transitions between gears or the side employed within a given gear. PMID:25365459
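
    The record above classifies gears from windowed Smartphone accelerometer signals with machine learning. A minimal sketch of that general idea (not the authors' algorithm; the synthetic gear signatures, the two features and the nearest-centroid rule are all hypothetical) might look like:

```python
import math
import random

random.seed(0)

def make_window(freq, amp, n=64):
    """Synthetic one-axis accelerometer window: a sinusoid plus noise.
    Hypothetical gears differ in cycle frequency and movement amplitude."""
    return [amp * math.sin(2 * math.pi * freq * i / n) + random.gauss(0, 0.05)
            for i in range(n)]

def features(w):
    """Two simple window features: mean absolute value and zero-crossing rate."""
    mav = sum(abs(x) for x in w) / len(w)
    zcr = sum(1 for a, b in zip(w, w[1:]) if a * b < 0) / len(w)
    return (mav, zcr)

# Hypothetical training windows for two "gears" with distinct signatures.
train = {"G2": [features(make_window(2, 1.0)) for _ in range(20)],
         "G4": [features(make_window(5, 0.4)) for _ in range(20)]}

# Nearest-centroid classifier over the feature space.
centroids = {g: tuple(sum(f[i] for f in fs) / len(fs) for i in range(2))
             for g, fs in train.items()}

def classify(w):
    f = features(w)
    return min(centroids, key=lambda g: sum((a - b) ** 2
                                            for a, b in zip(f, centroids[g])))

pred = classify(make_window(2, 1.0))   # a fresh "G2-like" window
```

    A real implementation would segment the continuous signal into cycles and train a multi-class model, but the window/feature/classify pipeline has the same shape.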

  2. Modeling with data: tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods

  3. Internal Physical Features of a Land Surface Model Employing a Tangent Linear Model

    Science.gov (United States)

    Yang, Runhua; Cohn, Stephen E.; daSilva, Arlindo; Joiner, Joanna; Houser, Paul R.

    1997-01-01

    The Earth's land surface, including its biomass, is an integral part of the Earth's weather and climate system. Land surface heterogeneity, such as the type and amount of vegetative cover, has a profound effect on local weather variability and therefore on regional variations of the global climate. Surface conditions affect local weather and climate through a number of mechanisms. First, they determine the re-distribution of the net radiative energy received at the surface, through the atmosphere, from the sun. A certain fraction of this energy increases the surface ground temperature, another warms the near-surface atmosphere, and the rest evaporates surface water, which in turn creates clouds and causes precipitation. Second, they determine how much rainfall and snowmelt can be stored in the soil and how much instead runs off into waterways. Finally, surface conditions influence the near-surface concentration and distribution of greenhouse gases such as carbon dioxide. The processes through which these mechanisms interact with the atmosphere can be modeled mathematically, to within some degree of uncertainty, on the basis of underlying physical principles. Such a land surface model provides predictive capability for surface variables including ground temperature, surface humidity, and soil moisture and temperature. This information is important for agriculture and industry, as well as for addressing fundamental scientific questions concerning global and local climate change. In this study we apply a methodology known as tangent linear modeling to help us understand more deeply the behavior of the Mosaic land surface model, a model that has been developed over the past several years at NASA/GSFC. This methodology allows us to examine, directly and quantitatively, the dependence of prediction errors in land surface variables upon different vegetation conditions. The work also highlights the importance of accurate soil moisture information. Although surface
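
    Tangent linear modeling, as applied above, linearizes a nonlinear model about a reference trajectory so the evolution of small perturbations (e.g., in initial soil moisture) can be examined directly and quantitatively. A toy sketch with a hypothetical one-variable water-balance model (not the Mosaic model) could be:

```python
# Hypothetical nonlinear "bucket" model: soil water w gains fixed
# precipitation P and loses evaporation c*w**2 each step.
P, c = 0.1, 0.2

def step(w):
    return w + P - c * w * w

def run(w0, n=10):
    traj = [w0]
    for _ in range(n):
        traj.append(step(traj[-1]))
    return traj

def tlm(traj, dw0):
    """Tangent linear model: propagate a perturbation dw0 with the
    step derivative d(step)/dw = 1 - 2*c*w evaluated on the reference."""
    dw = dw0
    for w in traj[:-1]:
        dw *= 1.0 - 2.0 * c * w
    return dw

w0, dw0 = 0.5, 1e-4
ref = run(w0)
dw_tlm = tlm(ref, dw0)               # TLM prediction of the final-state change
dw_nl = run(w0 + dw0)[-1] - ref[-1]  # direct perturbed run, for comparison
```

    For a small enough perturbation the two numbers agree to first order, which is what makes the TLM a tool for tracing how prediction errors depend on surface conditions.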

  4. Uncertainty analysis of constant amplitude fatigue test data employing the six parameters random fatigue limit model

    Directory of Open Access Journals (Sweden)

    Leonetti Davide

    2018-01-01

    Full Text Available Estimating and reducing uncertainty in fatigue test data analysis is a relevant task in order to assess the reliability of a structural connection with respect to fatigue. Several statistical models have been proposed in the literature with the aim of representing the stress range vs. endurance trend of fatigue test data under constant amplitude loading and the scatter in the finite and infinite life regions. In order to estimate the safety level of the connection, the uncertainty related to the amount of information available also needs to be estimated using the methods provided by the theory of statistics. Bayesian analysis is employed to reduce the uncertainty due to the often small amount of test data by introducing prior information related to the parameters of the statistical model. In this work, the inference of fatigue test data belonging to cover-plated steel beams is presented. The uncertainty is estimated by making use of Bayesian and frequentist methods. The 5% quantile of the fatigue life is estimated by taking into account the uncertainty related to the sample size, for both a dataset containing few samples and one containing more data. The S-N curves resulting from the application of the employed methods are compared and the effect of the reduction of uncertainty in the infinite life region is quantified.
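
    The six-parameter random fatigue limit model is too involved for a short example, but the underlying task, fitting a stress-range-vs.-endurance trend and extracting a 5% quantile of fatigue life, can be sketched with a log-linear (Basquin-type) stand-in on hypothetical test data:

```python
import math

# Hypothetical constant-amplitude fatigue tests: (stress range [MPa], cycles).
data = [(200, 2.1e5), (200, 3.0e5), (150, 9.0e5), (150, 1.4e6),
        (120, 3.5e6), (120, 5.2e6), (100, 1.1e7), (100, 1.8e7)]

# Least-squares fit of log10(N) = a + b*log10(S).
xs = [math.log10(s) for s, n in data]
ys = [math.log10(n) for s, n in data]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
a = my - b * mx

# Residual standard deviation of log-life; 1.645 is the standard-normal
# 5% point, so this gives a lower 5% quantile curve under normal scatter.
resid = [y - (a + b * x) for x, y in zip(xs, ys)]
s = math.sqrt(sum(r * r for r in resid) / (len(resid) - 2))

def n_quantile05(stress):
    return 10 ** (a + b * math.log10(stress) - 1.645 * s)
```

    The Bayesian route described in the record replaces the point estimates a, b and s with posterior distributions, so the quantile curve also reflects how much data was available.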

  5. Trimming a hazard logic tree with a new model-order-reduction technique

    Science.gov (United States)

    Porter, Keith; Field, Edward; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
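
    The tornado-diagram approach mentioned above ranks logic-tree parameters by the swing each one induces in the output while the others are held at base values; low-swing parameters can then be fixed. A minimal sketch on hypothetical branch multipliers (not the UCERF3 tree) could be:

```python
# Hypothetical logic-tree parameters, each with discrete branch values
# acting as multipliers on a base loss.
params = {"fault_model": [1.00, 1.10],
          "slip_rate":   [0.90, 1.00, 1.25],
          "mag_scaling": [0.98, 1.00, 1.02],
          "gmpe":        [0.80, 1.00, 1.30]}

def loss(choice):
    """Toy loss model: base loss scaled by every chosen branch multiplier."""
    out = 100.0
    for v in choice.values():
        out *= v
    return out

base = {k: v[len(v) // 2] for k, v in params.items()}  # one base branch each

# Swing of each parameter: vary it across its branches, hold the rest at base.
swing = {}
for k, vals in params.items():
    ls = [loss({**base, k: v}) for v in vals]
    swing[k] = max(ls) - min(ls)

ranked = sorted(swing, key=swing.get, reverse=True)  # tornado-diagram order
```

    Parameters at the bottom of the ranking are candidates for fixing at their base branch, which is the kind of pruning that collapses 57,600 leaves into a much smaller reduced-order tree.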

  6. The role of political affiliation in employment decisions: A model and research agenda.

    Science.gov (United States)

    Roth, Philip L; Goldberg, Caren B; Thatcher, Jason B

    2017-09-01

    Organizational researchers have studied how individuals identify with groups and organizations and how this affiliation influences behavior for decades (e.g., Tajfel, 1982). Interestingly, investigation into political affiliation and political affiliation similarity in the organizational sciences is extremely rare. This is striking, given the deep political divides that exist between groups of individuals described in the political science literature. We draw from theories based on similarity, organizational identification, and person-environment fit, as well as theoretical notions related to individuating information, to develop a model, the political affiliation model (PAM), which describes the implications of political affiliation and political similarity for employment decisions. We set forth a number of propositions based on PAM, to spur future research in the organizational sciences for a timely topic which has received little attention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Plant status monitor: Modelling techniques and inherent benefits

    International Nuclear Information System (INIS)

    Breeding, R.J.; Lainoff, S.M.; Rees, D.C.; Prather, W.A.; Fickiessen, K.O.E.

    1987-01-01

    The Plant Status Monitor (PSM) is designed to provide plant personnel with information on the operational status of the plant and compliance with the plant technical specifications. The PSM software evaluates system models using a 'distributed processing' technique, in which detailed models of individual systems are processed rather than a single, plant-level model. In addition, development of the system models for PSM provides inherent benefits to the plant by forcing detailed reviews of the technical specifications, system design and operating procedures, and plant documentation. (orig.)

  8. A New Profile Learning Model for Recommendation System based on Machine Learning Technique

    Directory of Open Access Journals (Sweden)

    Shereen H. Ali

    2016-03-01

    Full Text Available Recommender systems (RSs) have been used to successfully address the information overload problem by providing personalized and targeted recommendations to the end users. RSs are software tools and techniques providing suggestions for items to be of use to a user; hence, they typically apply techniques and methodologies from Data Mining. The main contribution of this paper is to introduce a new user profile learning model to promote the recommendation accuracy of vertical recommendation systems. The proposed profile learning model employs the vertical classifier that has been used in the multi-classification module of the Intelligent Adaptive Vertical Recommendation (IAVR) system to discover the user’s area of interest, and then build the user’s profile accordingly. Experimental results have proven the effectiveness of the proposed profile learning model, which accordingly will promote the recommendation accuracy.
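
    A user profile of the broad content-based kind described above can be sketched as the average feature vector of the items a user rated positively; the items, topic weights and ratings below are purely hypothetical:

```python
# Hypothetical items described by topic weights.
items = {"a": {"sport": 1.0, "tech": 0.0},
         "b": {"sport": 0.8, "tech": 0.2},
         "c": {"sport": 0.0, "tech": 1.0},
         "d": {"sport": 0.1, "tech": 0.9}}
liked = ["a", "b"]                      # items the user rated positively
topics = ["sport", "tech"]

# Profile learning: average the feature vectors of the liked items.
profile = {t: sum(items[i][t] for i in liked) / len(liked) for t in topics}

def score(item):
    """Similarity between the learned profile and an item (dot product)."""
    return sum(profile[t] * items[item][t] for t in topics)

# Recommend unseen items, best match first.
recs = sorted((i for i in items if i not in liked), key=score, reverse=True)
```

    The IAVR system in the record additionally routes users through a vertical classifier before building the profile; that step is not modeled here.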

  9. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method
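
    One way to re-use original code results directly for sensitivity analysis, with no response surface, is to importance-reweight the existing runs under a perturbed input distribution. Whether this matches the paper's particular method is not claimed; the sketch below only illustrates the general reuse-the-runs idea on a stand-in "code":

```python
import math
import random

random.seed(1)

def code(x):
    """Stand-in for an expensive computer code with one uncertain input."""
    return math.exp(0.5 * x)

# One batch of runs sampled from the baseline input distribution N(0, 1).
xs = [random.gauss(0, 1) for _ in range(20000)]
ys = [code(x) for x in xs]

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def reweighted_mean(mu_new, sd_new):
    """Output mean under a changed input distribution, re-using the stored
    runs via importance weights; no new code executions are needed."""
    w = [normal_pdf(x, mu_new, sd_new) / normal_pdf(x, 0.0, 1.0) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

base_mean = sum(ys) / len(ys)            # analytically exp(1/8), about 1.133
shift_mean = reweighted_mean(0.5, 1.0)   # analytically exp(3/8), about 1.455
```

    Shifting the assumed input distribution changes the output mean without a single additional run, which is the economy such direct methods aim for.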

  10. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through changes in equipment, and the model can be easily applied to both manufacturing and service industries.
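
    The paper's mixed integer program is not reproduced here, but the selection problem it describes has the shape of a 0-1 knapsack: choose techniques to maximize total productivity gain within a budget. A brute-force sketch on hypothetical gains and costs (a real instance with fifty-four techniques would need a MIP solver):

```python
from itertools import combinations

# Hypothetical improvement techniques: (name, productivity gain, cost).
techs = [("training", 0.08, 30), ("maintenance", 0.05, 20),
         ("layout", 0.10, 50), ("automation", 0.20, 90), ("5S", 0.04, 10)]
budget = 100

# Enumerate every 0/1 selection; keep the feasible one with the best gain.
best_gain, best_set = 0.0, ()
for r in range(len(techs) + 1):
    for combo in combinations(techs, r):
        cost = sum(c for _, _, c in combo)
        gain = sum(g for _, g, _ in combo)
        if cost <= budget and gain > best_gain:
            best_gain, best_set = gain, tuple(n for n, _, _ in combo)
```

    Enumeration works for five binary choices; at fifty-four it is hopeless, which is why the paper formulates the selection as a mixed integer program instead.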

  11. Constructing canine carotid artery stenosis model by endovascular technique

    International Nuclear Information System (INIS)

    Cheng Guangsen; Liu Yizhi

    2005-01-01

    Objective: To establish a carotid artery stenosis model by an endovascular technique suitable for neuro-interventional therapy. Methods: Twelve dogs were anesthetized, and the tunica media and intima of unilateral segments of the carotid arteries were damaged with a home-made corneous guiding wire. Twenty-four carotid artery stenosis models were thus created. DSA examination was performed on post-procedural weeks 2, 4, 8 and 10 to assess the changes in the stenotic carotid arteries. Results: Twenty-four carotid artery stenosis models were successfully created in twelve dogs. Conclusions: Canine carotid artery stenosis models can be created with the endovascular method, with pathological characteristics and hemodynamic changes similar to those in humans. This is useful for further research on new techniques and new materials for interventional treatment. (authors)

  12. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    .... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.
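
    The random number synchronization finding above is the common-random-numbers technique: drive both system variants with identical random draws so that their difference reflects the parameter change rather than sampling noise. A sketch with a hypothetical single-server queue:

```python
import random

def simulate(service_rate, seed):
    """Mean waiting time of a crude single-server queue over 1000 customers.
    Seeding the generator fixes the arrival/service draws."""
    rng = random.Random(seed)
    wait = total = 0.0
    for _ in range(1000):
        arrival = rng.expovariate(1.0)             # inter-arrival time
        service = rng.expovariate(service_rate)    # service time
        wait = max(0.0, wait + service - arrival)  # Lindley recursion
        total += wait
    return total / 1000

# Same seed for both variants: identical underlying uniforms, so the faster
# server is better on every sample path, not just on average.
slow = simulate(1.2, seed=42)
fast = simulate(1.5, seed=42)
```

    With independent seeds the comparison would need many replications to overcome run-to-run noise; with a shared seed the difference slow - fast is itself a low-variance estimate of the improvement.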

  13. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  14. Techniques for discrimination-free predictive models (Chapter 12)

    NARCIS (Netherlands)

    Kamiran, F.; Calders, T.G.K.; Pechenizkiy, M.; Custers, B.H.M.; Calders, T.G.K.; Schermer, B.W.; Zarsky, T.Z.

    2013-01-01

    In this chapter, we give an overview of the techniques developed ourselves for constructing discrimination-free classifiers. In discrimination-free classification the goal is to learn a predictive model that classifies future data objects as accurately as possible, yet the predicted labels should be

  15. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    Full Text Available When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom’s taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.

  16. NMR and modelling techniques in structural and conformation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, R J [Liverpool Univ. (United Kingdom)]

    1994-12-31

    The use of Lanthanide Induced Shifts (L.I.S.) and modelling techniques in conformational analysis is presented. The use of Co(III) porphyrins as shift reagents is discussed, with examples of their use in the conformational analysis of some heterocyclic amines. (author) 13 refs., 9 figs.

  17. Air quality modelling using chemometric techniques | Azid | Journal ...

    African Journals Online (AJOL)

    This study shows that chemometric techniques and modelling are an excellent tool for API assessment and for air pollution source identification and apportionment, and can address setbacks in designing an API monitoring network for effective air pollution resource management. Keywords: air pollutant index; chemometric; ANN; ...

  18. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale one (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. Even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable in certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the computational cost of RISMC analyses by decreasing the number of simulation runs; for this improvement we used surrogate models instead of the actual simulation codes. This article focuses on reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results in a much shorter time (microseconds instead of hours/days).
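
    The surrogate-model idea above (replace the expensive simulation with a cheap fitted approximation) can be sketched in one dimension. The "expensive" function and design points below are hypothetical stand-ins for a simulation code; the surrogate is a quadratic fitted by least squares via its 3x3 normal equations:

```python
import math

def expensive(x):
    """Stand-in for an expensive simulation: one input, one figure of merit."""
    return math.sin(x) + 0.1 * x * x

# A handful of full runs at design points...
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [expensive(x) for x in xs]

def fit_quadratic(xs, ys):
    """Solve the normal equations of y ~ c0 + c1*x + c2*x^2 (Gauss-Jordan)."""
    sx = [sum(x ** k for x in xs) for k in range(5)]
    sy = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    a = [[sx[0], sx[1], sx[2]],
         [sx[1], sx[2], sx[3]],
         [sx[2], sx[3], sx[4]]]
    b = sy[:]
    for i in range(3):
        p = a[i][i]
        a[i] = [v / p for v in a[i]]
        b[i] /= p
        for j in range(3):
            if j != i:
                f = a[j][i]
                a[j] = [vj - f * vi for vj, vi in zip(a[j], a[i])]
                b[j] -= f * b[i]
    return b  # [c0, c1, c2]

c0, c1, c2 = fit_quadratic(xs, ys)

def surrogate(x):
    return c0 + c1 * x + c2 * x * x   # microseconds instead of hours

err = max(abs(surrogate(x) - expensive(x)) for x in xs)
```

    Once fitted, the surrogate can be evaluated millions of times for risk sampling; the open question in practice, as in the article, is keeping the approximation error acceptable.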

  19. Developing a workplace breast feeding support model for employed lactating mothers.

    Science.gov (United States)

    Yimyam, Susanha; Hanpa, Wasana

    2014-06-01

    Resuming work is often considered an obstacle for continued breast feeding. The objectives of this participatory action research study were to develop a breast feeding support model in the workplace and to compare breast feeding rates before and after implementation of the breast feeding support campaign. Twenty-four women participated before the implementation of the breast feeding support campaign, whereas 31 women participated after the campaign. Data were collected by interviewing employed women about their breast feeding practices within six months post partum. Additional data were collected through interviews with the workplace administrator and head of work sections as well as observation of the breast feeding support campaigns. Qualitative data were analysed using thematic analysis, whereas quantitative data were analysed using descriptive statistics and χ(2) test. The workplace breast feeding support model was developed based on the concept of Mother-Friendly Workplace Initiatives by the World Alliance for Breastfeeding Action (WABA) and the Thai government's promotion of a workplace breast feeding corner. Within this model, a committee for breast feeding support was created for working with the research team to develop breast feeding activities and media for breast feeding education and breast feeding support campaigns in the workplace. Breast feeding rates at six months after implementation of the breast feeding support campaign were significantly higher than rates before, both for exclusive breast feeding and any breast feeding at levels .004 and .033, respectively. These results suggest that breast feeding should be encouraged in every workplace depending on context. Individual advice and help for employed mothers should be provided from pregnancy through weaning in the postpartum period. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    International Nuclear Information System (INIS)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J.

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques

  1. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J. (comps.)

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques.

  2. Re-engineering pre-employment check-up systems: a model for improving health services.

    Science.gov (United States)

    Rateb, Said Abdel Hakim; El Nouman, Azza Abdel Razek; Rateb, Moshira Abdel Hakim; Asar, Mohamed Naguib; El Amin, Ayman Mohammed; Gad, Saad abdel Aziz; Mohamed, Mohamed Salah Eldin

    2011-01-01

    The purpose of this paper is to develop a model for improving health services provided by the pre-employment medical fitness check-up system affiliated to Egypt's Health Insurance Organization (HIO). Operations research, notably system re-engineering, is used in six randomly selected centers and findings before and after re-engineering are compared. The re-engineering model follows a systems approach, focusing on three areas: structure, process and outcome. The model is based on six main components: electronic booking, standardized check-up processes, protected medical documents, advanced archiving through an electronic content management (ECM) system, infrastructure development, and capacity building. The model originates mainly from customer needs and expectations. The centers' monthly customer flow increased significantly after re-engineering. The mean time spent per customer cycle improved after re-engineering--18.3 +/- 5.5 minutes as compared to 48.8 +/- 14.5 minutes before. Appointment delay was also significantly decreased from an average 18 to 6.2 days. Both beneficiaries and service providers were significantly more satisfied with the services after re-engineering. The model proves that re-engineering program costs are exceeded by increased revenue. Re-engineering in this study involved multiple structure and process elements. The literature review did not reveal similar re-engineering healthcare packages. Therefore, each element was compared separately. This model is highly recommended for improving service effectiveness and efficiency. This research is the first in Egypt to apply the re-engineering approach to public health systems. Developing user-friendly models for service improvement is an added value.

  3. Increasing Employability by Implementing a Work-Integrated Learning Partnership Model in South Africa--A Student Perspective

    Science.gov (United States)

    Taylor, Susanne; Govender, Cookie M.

    2017-01-01

    In South Africa, 70 per cent of the population is under 35 years old. South Africa has a vision to increase youth employment by focusing on education, training and skills development that will promote employment opportunities. A work-integrated learning (WIL) partnership model was developed to provide students with work experience and to increase…

  4. Predictive models for the assessment of occupational exposure to chemicals: A new challenge for employers

    Directory of Open Access Journals (Sweden)

    Jan Piotr Gromiec

    2013-10-01

    Full Text Available Employers are obliged to carry out and document the assessment of risk associated with the use of chemical substances. The best but most expensive method is to measure workplace concentrations of chemicals. At present no "measureless" method for risk assessment is available in Poland, but predictive models for such assessments have been developed in some countries. The purpose of this work is to review and evaluate the applicability of selected predictive methods for assessing occupational inhalation exposure and related risk, to check compliance with Occupational Exposure Limits (OELs), as well as compliance with REACH obligations. Based on the literature data, HSE COSHH Essentials, EASE, ECETOC TRA, Stoffenmanager, and EMKG-Expo-Tool were evaluated. The data on validation of predictive models were also examined. It seems that predictive models may be used as a useful method for Tier 1 assessment of occupational exposure by inhalation. Since the levels of exposure are frequently overestimated, they should be considered as "rational worst cases" for the selection of proper control measures. Bearing in mind that the number of available exposure scenarios and PROC categories is limited, further validation by field surveys is highly recommended. Predictive models may serve as a good tool for preliminary risk assessment and selection of the most appropriate risk control measures in Polish small and medium size enterprises (SMEs), providing that they are available in the Polish language. This also requires extensive training of their future users. Med Pr 2013;64(5):699–716

  5. Fabrication and characterisation of ligand-functionalised ultrapure monodispersed metal nanoparticle nanoassemblies employing advanced gas deposition technique

    Science.gov (United States)

    Geremariam Welearegay, Tesfalem; Cindemir, Umut; Österlund, Lars; Ionescu, Radu

    2018-02-01

    Here, we report for the first time the fabrication of ligand-functionalised ultrapure monodispersed metal nanoparticles (Au, Cu, and Pt) from their pure metal precursors using the advanced gas deposition technique. The experimental conditions during nanoparticle formation were adjusted in order to obtain ultrafine isolated nanoparticles on different substrates. The morphology and surface analysis of the as-deposited metal nanoparticles were investigated using scanning electron microscopy, x-ray diffraction and Fourier transform infra-red spectroscopy, which demonstrated the formation of highly ordered pure crystalline nanoparticles with a relatively uniform size distribution of ∼10 nm (Au), ∼4 nm (Cu) and ∼3 nm (Pt), respectively. A broad range of organic ligands containing thiol or amine functional groups were attached to the nanoparticles to form continuous networks of nanoparticle-ligand nanoassemblies, which were characterised by scanning electron microscopy and x-ray photoelectron spectroscopy. The electrical resistance of the functional nanoassemblies deposited in the gap spacing of two microfabricated parallel Au electrodes patterned on silicon substrates ranged between tens of kΩ and tens of MΩ, which is suitable for use in many applications including (bio)chemical sensors, surface-enhanced Raman spectroscopy and molecular electronic rectifiers.

  6. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)

    2016-06-15

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scales, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details, and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.

  7. A fermionic molecular dynamics technique to model nuclear matter

    International Nuclear Information System (INIS)

    Vantournhout, K.; Jachowicz, N.; Ryckebusch, J.

    2009-01-01

    Full text: At sub-nuclear densities of about 10¹⁴ g/cm³, nuclear matter arranges itself in a variety of complex shapes. This can be the case in the crust of neutron stars and in core-collapse supernovae. These slab-like and rod-like structures, designated as nuclear pasta, have been modelled with classical molecular dynamics techniques. We present a technique, based on fermionic molecular dynamics, to model nuclear matter at sub-nuclear densities in a semi-classical framework. The dynamical evolution of an antisymmetric ground state is described under the assumption of periodic boundary conditions. Adding the concepts of antisymmetry, spin and probability distributions to classical molecular dynamics brings the dynamical description of nuclear matter to a quantum mechanical level. Applications of this model range from the investigation of macroscopic observables and the equation of state to the study of the influence of fundamental interactions on the microscopic structure of the matter. (author)

  8. Model technique for aerodynamic study of boiler furnace

    Energy Technology Data Exchange (ETDEWEB)

    1966-02-01

    The help of the Division was recently sought to improve the heat transfer and reduce the exit gas temperature in a pulverized-fuel-fired boiler at an Australian power station. One approach adopted was to construct from Perspex a 1:20 scale cold-air model of the boiler furnace and to use a flow-visualization technique to study the aerodynamic patterns established when air was introduced through the p.f. burners of the model. The work established good correlations between the behaviour of the model and of the boiler furnace.

  9. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions...... into the operation of the grid-connected power converters. This paper describes a quasi passive method for estimating the line impedance of the distribution electricity network. The method uses the model based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi...
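    The resistive–inductive estimation step can be illustrated with a simple sketch (this is not the paper's quasi-passive method; the waveforms, sampling rate and parameter values below are invented): fit v(t) = R·i(t) + L·di/dt to sampled voltage and current by least squares.

```python
import numpy as np

# Hypothetical illustration: estimate the resistive and inductive parts
# of a line impedance by least-squares fitting of
#   v(t) = R*i(t) + L*di/dt
# to sampled waveforms.
fs = 10_000                        # sampling frequency [Hz]
t = np.arange(0, 0.1, 1 / fs)      # 100 ms of samples
R_true, L_true = 0.5, 2e-3         # assumed "true" line parameters

i = 10 * np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 250 * t)
di_dt = np.gradient(i, 1 / fs)     # numerical derivative of the current
v = R_true * i + L_true * di_dt    # voltage drop across the line

# Solve [i, di/dt] @ [R, L]^T = v in the least-squares sense.
A = np.column_stack([i, di_dt])
R_est, L_est = np.linalg.lstsq(A, v, rcond=None)[0]
print(R_est, L_est)                # recovers R_true and L_true
```

    With measured (noisy) waveforms the same fit yields approximate values, and a recursive formulation of the fit can track the impedance online.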

  10. Simplified Model Surgery Technique for Segmental Maxillary Surgeries

    Directory of Open Access Journals (Sweden)

    Namit Nagar

    2011-01-01

    Full Text Available Model surgery is the dental cast version of cephalometric prediction of surgical results. Patients having vertical maxillary excess with prognathism invariably require Lefort I osteotomy with maxillary segmentation and maxillary first premolar extractions during surgery. Traditionally, model surgeries in these cases have been done by sawing the model through the first premolar interproximal area and removing that segment. This clinical innovation employed the use of X-ray film strips as separators in maxillary first premolar interproximal area. The method advocated is a time-saving procedure where no special clinical or laboratory tools, such as plaster saw (with accompanying plaster dust, were required and reusable separators were made from old and discarded X-ray films.

  11. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
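    A deliberately simplified sketch of the cumulative-residual idea (it omits the paper's correction for parameter estimation, and all data are synthetic): cumulate the residuals over a covariate and approximate the null distribution of the supremum statistic by perturbing the residuals with zero-mean normal multipliers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data that truly follow a linear model in x.
n = 200
x = rng.uniform(0, 1, n)
y = 2 + 3 * x + rng.normal(0, 0.5, n)

# Fit the working model y = b0 + b1*x and keep the raw residuals.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Observed cumulative-residual process over the covariate x.
order = np.argsort(x)
W_obs = np.cumsum(resid[order]) / np.sqrt(n)
sup_obs = np.max(np.abs(W_obs))

# Approximate the null distribution of the supremum by perturbing the
# residuals with zero-mean standard-normal multipliers (simplified:
# the effect of estimating beta is ignored here).
sims = []
for _ in range(500):
    g = rng.normal(size=n)
    W_sim = np.cumsum((resid * g)[order]) / np.sqrt(n)
    sims.append(np.max(np.abs(W_sim)))
p_value = np.mean(np.array(sims) >= sup_obs)
print(p_value)
```

    A small p-value would indicate that the observed cumulative-residual process is extreme relative to the simulated null realizations, i.e. evidence of a misspecified functional form.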

  12. Prediction of Monthly Summer Monsoon Rainfall Using Global Climate Models Through Artificial Neural Network Technique

    Science.gov (United States)

    Nair, Archana; Singh, Gurjeet; Mohanty, U. C.

    2018-01-01

    The monthly prediction of summer monsoon rainfall is very challenging because of its complex and chaotic nature. In this study, a non-linear technique known as Artificial Neural Network (ANN) has been employed on the outputs of Global Climate Models (GCMs) to bring out the vagaries inherent in monthly rainfall prediction. The GCMs that are considered in the study are from the International Research Institute (IRI) (2-tier CCM3v6) and the National Centre for Environmental Prediction (Coupled-CFSv2). The ANN technique is applied to different ensemble members of the individual GCMs to obtain monthly scale prediction over India as a whole and over its spatial grid points. In the present study, a double-cross-validation and simple randomization technique was used to avoid over-fitting during the training process of the ANN model. The performance of the ANN-predicted rainfall from GCMs is judged by analysing the absolute error, box plots, percentile and difference in linear error in probability space. Results suggest that there is significant improvement in prediction skill of these GCMs after applying the ANN technique. The performance analysis reveals that the ANN model is able to capture the year-to-year variations in monsoon months with fairly good accuracy in extreme years as well. The ANN model is also able to simulate the correct signs of rainfall anomalies over different spatial points of the Indian domain.

  13. Modeling the cost-benefit of nerve conduction studies in pre-employment screening for carpal tunnel syndrome.

    Science.gov (United States)

    Evanoff, Bradley; Kymes, Steve

    2010-06-01

    The aim of this study was to evaluate the costs associated with pre-employment nerve conduction testing as a screening tool for carpal tunnel syndrome (CTS) in the workplace. We used a Markov decision analysis model to compare the costs associated with a strategy of screening all prospective employees for CTS and not hiring those with abnormal nerve conduction, versus a strategy of not screening for CTS. The variables in our model included employee turnover rate, the incidence of CTS, the prevalence of median nerve conduction abnormalities, the relative risk of developing CTS conferred by abnormal nerve conduction screening, the costs of pre-employment screening, and the workers' compensation costs to the employer for each case of CTS. In our base case, total employer costs for CTS from the perspective of the employer (cost of screening plus costs for workers' compensation associated with CTS) were higher when screening was used. Median costs per employee position over five years were US$503 for the screening strategy versus US$200 for a no-screening strategy. A sensitivity analysis showed that a strategy of screening was cost-beneficial from the perspective of the employer only under a few circumstances. Using Monte Carlo simulation varying all parameters, we found a 30% probability that screening would be cost-beneficial. A strategy of pre-employment screening for CTS should be carefully evaluated for yield and social consequences before being implemented. Our model suggests such screening is not appropriate for most employers.
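    The cost comparison can be sketched with a toy Monte Carlo model; every parameter value below is invented for illustration and is not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# All parameter values below are invented for illustration.
n_sim = 10_000             # simulated employee positions per strategy
screen_cost = 150.0        # cost of one nerve-conduction test
cts_cost = 15_000.0        # workers' compensation cost per CTS case
prev_abnormal = 0.15       # prevalence of abnormal nerve conduction
p_cts_normal = 0.01        # CTS risk given a normal screening result
rel_risk = 3.0             # relative CTS risk given an abnormal result

# No screening: hires are a mix of normal and abnormal results.
p_mix = prev_abnormal * rel_risk * p_cts_normal \
        + (1 - prev_abnormal) * p_cts_normal
cost_no_screen = cts_cost * (rng.random(n_sim) < p_mix)

# Screening: rejected applicants are replaced and re-screened, so the
# expected number of tests per filled position is 1/(1 - prevalence).
tests_per_hire = 1 / (1 - prev_abnormal)
cost_screen = screen_cost * tests_per_hire \
              + cts_cost * (rng.random(n_sim) < p_cts_normal)

mean_no, mean_screen = cost_no_screen.mean(), cost_screen.mean()
print(mean_no, mean_screen)   # screening costs the employer more here
```

    Re-running this comparison while drawing the parameters themselves from plausible ranges is the essence of the probabilistic sensitivity analysis the study describes.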

  14. Employing Organizational Modeling and Simulation to Reduce F/A-18E/F F414 Engine Maintenance Time

    National Research Council Canada - National Science Library

    Hagan, Joel J; Slack, William G

    2006-01-01

    ...) at Naval Air Station (NAS) Lemoore, California. To achieve this goal, organizational modeling was employed to evaluate how changes to the organizational structure of the Lemoore AIMD affected engine throughput time...

  15. ABOUT RISK PROCESS ESTIMATION TECHNIQUES EMPLOYED BY A VIRTUAL ORGANIZATION WHICH IS DIRECTED TOWARDS THE INSURANCE BUSINESS

    Directory of Open Access Journals (Sweden)

    Covrig Mihaela

    2008-05-01

    Full Text Available In a virtual organization directed towards the insurance business, the estimation of the risk process and of the ruin probability are important concerns: for researchers, at the theoretical level, and for the management of the company, as these influence the insurer's strategy. We consider the evolution over an extended period of time of the insurer's surplus process. In this paper, we present some methods for the estimation of the ruin probability and for the evaluation of a reserve fund. We discuss the ruin probability with respect to: the parameters of the individual claim distribution, the load factor of premiums and the intensity parameter of the number-of-claims process. We analyze the model in which the premiums are computed according to the mean value principle. We also consider the case in which the initial capital is proportional to the expected value of the individual claim, and we provide numerical illustrations.
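    A minimal simulation sketch of the ruin probability in an assumed Cramér–Lundberg setting (exponential claims, premiums by the mean value principle, initial capital proportional to the expected individual claim; all parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

lam = 2.0             # claim arrival intensity (Poisson process)
mean_claim = 1.0      # expected individual claim E[X], exponential claims
theta = 0.2           # premium load factor
u0 = 5 * mean_claim   # initial capital proportional to E[X]
horizon = 100.0       # finite time horizon
c = (1 + theta) * lam * mean_claim   # mean value principle premium rate

def ruined(u, T):
    """Simulate one surplus path; True if it drops below zero before T."""
    t, total_claims = 0.0, 0.0
    while True:
        t += rng.exponential(1 / lam)        # next claim arrival
        if t > T:
            return False
        total_claims += rng.exponential(mean_claim)
        if u + c * t - total_claims < 0:     # surplus at claim instant
            return True

n_paths = 2000
psi_hat = sum(ruined(u0, horizon) for _ in range(n_paths)) / n_paths
print(psi_hat)
```

    For exponential claims the infinite-horizon ruin probability is known in closed form, ψ(u) = exp(−θu/((1+θ)E[X]))/(1+θ) ≈ 0.36 for these values, which the finite-horizon estimate approaches as the horizon grows.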

  16. Data-driven remaining useful life prognosis techniques stochastic models, methods and applications

    CERN Document Server

    Si, Xiao-Sheng; Hu, Chang-Hua

    2017-01-01

    This book introduces data-driven remaining useful life prognosis techniques, and shows how to utilize the condition monitoring data to predict the remaining useful life of stochastic degrading systems and to schedule maintenance and logistics plans. It is also the first book that describes the basic data-driven remaining useful life prognosis theory systematically and in detail. The emphasis of the book is on the stochastic models, methods and applications employed in remaining useful life prognosis. It includes a wealth of degradation monitoring experiment data, practical prognosis methods for remaining useful life in various cases, and a series of applications incorporated into prognostic information in decision-making, such as maintenance-related decisions and ordering spare parts. It also highlights the latest advances in data-driven remaining useful life prognosis techniques, especially in the contexts of adaptive prognosis for linear stochastic degrading systems, nonlinear degradation modeling based pro...

  17. Use of hydrological modelling and isotope techniques in Guvenc basin

    International Nuclear Information System (INIS)

    Altinbilek, D.

    1991-07-01

    The study covers the work performed under Project No. 335-RC-TUR-5145 entitled "Use of Hydrologic Modelling and Isotope Techniques in Guvenc Basin" and is an initial part of a program for estimating runoff from Central Anatolia Watersheds. The study presented herein consists mainly of three parts: 1) the acquisition of a library of rainfall excess, direct runoff and isotope data for Guvenc basin; 2) the modification of the SCS model to be applied to Guvenc basin first and then to other basins of Central Anatolia for predicting the surface runoff from gaged and ungaged watersheds; and 3) the use of the environmental isotope technique in order to define the basin components of streamflow of Guvenc basin. 31 refs, figs and tabs

  18. Construct canine intracranial aneurysm model by endovascular technique

    International Nuclear Information System (INIS)

    Liang Xiaodong; Liu Yizhi; Ni Caifang; Ding Yi

    2004-01-01

    Objective: To construct canine bifurcation aneurysm models suitable for evaluating endovascular devices for interventional therapy. Methods: The right common carotid artery of six dogs was expanded with a pliable balloon by means of an endovascular technique, and embolization with a detachable balloon was then performed at its origin. DSA examinations were performed 1, 2 and 3 days after the procedure. Results: 6 aneurysm models were created in the six dogs successfully, with the mean width and height of the aneurysms decreasing within 3 days. Conclusions: This canine aneurysm model resembles human cerebral bifurcation saccular aneurysms in size and shape on DSA images and is suitable for evaluating endovascular devices for aneurysm therapy. The procedure is quick, reliable and reproducible. (authors)

  19. Multi-factor models and signal processing techniques: application to quantitative finance

    CERN Document Server

    Darolles, Serges; Jay, Emmanuelle

    2013-01-01

    With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices.This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an intere

  20. [Preparation of simulated craniocerebral models via three-dimensional printing technique].

    Science.gov (United States)

    Lan, Q; Chen, A L; Zhang, T; Zhu, Q; Xu, T

    2016-08-09

    A three-dimensional (3D) printing technique was used to prepare simulated craniocerebral models, which were applied to preoperative planning and surgical simulation. The image data were collected from a PACS system. Image data of skull bone, brain tissue and tumors, cerebral arteries and aneurysms, and functional regions and related neural tracts of the brain were extracted from thin-slice scans (slice thickness 0.5 mm) of computed tomography (CT), magnetic resonance imaging (MRI, slice thickness 1 mm), computed tomography angiography (CTA), and functional magnetic resonance imaging (fMRI) data, respectively. MIMICS software was applied to reconstruct colored virtual models by identifying and differentiating tissues according to their gray scales. The colored virtual models were then submitted to a 3D printer, which produced life-sized craniocerebral models for surgical planning and surgical simulation. The 3D-printed craniocerebral models allowed neurosurgeons to perform complex procedures in specific clinical cases through detailed surgical planning. They offered great convenience for evaluating the size of the spatial fissure of the sellar region before surgery, which helped to optimize surgical approach planning. These 3D models also provided detailed information about the location of aneurysms and their parent arteries, which helped surgeons to choose appropriate aneurysm clips as well as perform surgical simulation. The models further gave clear indications of the depth and extent of tumors and their relationship to eloquent cortical areas and adjacent neural tracts, helping to avoid surgical damage to important neural structures. As a novel and promising technique, the application of 3D-printed craniocerebral models can improve surgical planning by converting virtual visualization into real life-sized models. It also contributes to the study of functional anatomy.

  1. Skin fluorescence model based on the Monte Carlo technique

    Science.gov (United States)

    Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.

    2003-10-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores following the packing of collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The simulation results suggest that the distribution of auto-fluorescence is significantly suppressed in the NIR spectral region, while the fluorescence of a sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate skin fluorescence spectra.
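    The underlying sampling scheme can be sketched for a homogeneous medium with isotropic scattering, which is far simpler than the paper's layered skin model; the optical coefficients below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative optical coefficients for a homogeneous medium [1/mm].
mu_a, mu_s = 0.1, 10.0     # absorption and scattering coefficients
mu_t = mu_a + mu_s
albedo = mu_s / mu_t

def absorption_depth():
    """Random-walk one photon; return its absorption depth [mm], or
    None if it escapes back through the surface."""
    z, cos_t = 0.0, 1.0                           # launched straight down
    while True:
        z += cos_t * rng.exponential(1 / mu_t)    # free path ~ Exp(mu_t)
        if z < 0:
            return None                           # escaped the medium
        if rng.random() > albedo:
            return z                              # absorbed at depth z
        cos_t = rng.uniform(-1, 1)                # isotropic rescattering

depths = [d for d in (absorption_depth() for _ in range(5000))
          if d is not None]
print(len(depths), np.mean(depths))
```

    Tallying such absorption sites per layer, weighting them by the local fluorophore concentration, and re-launching emission photons from those sites is the basic loop that a fluorescence Monte Carlo builds on.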

  2. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time-invariant and time-varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary.
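    The Latin Hypercube Sampling step described above can be sketched as follows; the function and parameter names are illustrative, not taken from GRAPE.

```python
import numpy as np

rng = np.random.default_rng(4)

def latin_hypercube(n_samples, bounds):
    """Return an (n_samples, len(bounds)) array of LHS points: exactly
    one point per equal-probability stratum in every dimension."""
    d = len(bounds)
    # jitter one point inside each of the n strata, per dimension
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):            # decouple the strata across dimensions
        rng.shuffle(u[:, j])
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# e.g. 10 candidate (mass, damping) pairs for a bank of 10 filter models;
# adding a third parameter dimension would still need only 10 samples.
pts = latin_hypercube(10, [(0.5, 2.0), (0.0, 1.0)])
print(pts.shape)
```

    This is why the model count need not grow with the number of parameters: each added dimension reuses the same set of strata rather than multiplying them as a full grid would.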

  3. Application of object modeling technique to medical image retrieval system

    International Nuclear Information System (INIS)

    Teshima, Fumiaki; Abe, Takeshi

    1993-01-01

    This report describes the results of discussions on the object-oriented analysis methodology, which is one of the object-oriented paradigms. In particular, we considered application of the object modeling technique (OMT) to the analysis of a medical image retrieval system. The object-oriented methodology places emphasis on the construction of an abstract model from real-world entities. The effectiveness of and future improvements to OMT are discussed from the standpoint of the system's expandability. These discussions have elucidated that the methodology is sufficiently well-organized and practical to be applied to commercial products, provided that it is applied to the appropriate problem domain. (author)

  4. The Anglo-Saxon model of employment in the current economic context. The case of United Kingdom

    Directory of Open Access Journals (Sweden)

    Mirela Ionela ACELEANU

    2012-11-01

    Full Text Available The Anglo-Saxon model of employment has certain features that derive from the specificity of the political and economic system of Great Britain. The labour market policies in the United Kingdom were different depending on the historical and political period; however, the Anglo-Saxon model is characterized mainly by high flexibility but low security. This paper presents the main features of the Anglo-Saxon model of employment, compared to the Nordic and the Mediterranean model and shows Britain's economic situation before and after the crisis.

  5. Level-set techniques for facies identification in reservoir modeling

    Science.gov (United States)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
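    The level-set update at the heart of such schemes can be sketched on a toy problem; here the velocity is a constant outward speed purely for illustration, whereas in the paper it is derived from the shape derivative of the data misfit.

```python
import numpy as np

# Signed-distance function of a circle of radius 0.6 on a [-1,1]^2 grid.
n = 100
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
phi = np.sqrt(X**2 + Y**2) - 0.6

# Evolve the zero level set with a constant outward normal speed v via
#   phi <- phi - dt * v * |grad(phi)|    (explicit time stepping).
v, dt, h = 1.0, 0.01, x[1] - x[0]
for _ in range(20):
    gy, gx = np.gradient(phi, h)
    phi -= dt * v * np.sqrt(gx**2 + gy**2)

# After t = 0.2 the front has moved from radius 0.6 to roughly 0.8.
area = np.mean(phi < 0) * 4.0      # area of {phi < 0} in the box
print(area)                        # about pi * 0.8**2 ~ 2.01
```

    In the identification setting, a velocity chosen from the shape derivative makes each such step a descent step on the misfit functional, so the region {phi < 0} converges toward the sought facies.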

  6. Level-set techniques for facies identification in reservoir modeling

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2011-01-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil–water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301–29; 2004 Inverse Problems 20 259–82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg–Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush–Kuhn–Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies

  7. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.

  8. Validation of transport models using additive flux minimization technique

    International Nuclear Information System (INIS)

    Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.

    2013-01-01

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile

  9. Extending a model of precarious employment: A qualitative study of immigrant workers in Spain.

    Science.gov (United States)

    Porthé, Victoria; Ahonen, Emily; Vázquez, M Luisa; Pope, Catherine; Agudelo, Andrés Alonso; García, Ana M; Amable, Marcelo; Benavides, Fernando G; Benach, Joan

    2010-04-01

    Since the 1980s, changes in the labor market have modified power relations between capital and labor, leading to greater levels of precarious employment among workers. Globalization has led to a growth in migration, as people leave their countries in search of work. We aimed to describe the dimensions of precarious employment for immigrant workers in Spain. Qualitative study using analytic induction. Criterion sampling was used to recruit 129 immigrant workers in Spain with documented and undocumented administrative status. Data quality was ensured by triangulation. Immigrant workers reported that precarious employment is characterized by high job instability, a lack of power for negotiating employment conditions, and defenselessness against high labor demands. They described insufficient wages, long working hours, limited social benefits, and difficulty in exercising their rights. Undocumented workers reported greater defenselessness and worse employment conditions. This study allowed us to describe the dimensions of precarious employment in immigrant workers. (c) 2010 Wiley-Liss, Inc.

  10. Enrichment techniques employed in phosphoproteomics

    Czech Academy of Sciences Publication Activity Database

    Fíla, Jan; Honys, David

    2012-01-01

    Roč. 43, č. 3 (2012), s. 1025-1047 ISSN 0939-4451 R&D Projects: GA ČR(CZ) GAP501/11/1462; GA ČR GA522/09/0858; GA ČR GA525/09/0994; GA MŠk OC08011 Institutional research plan: CEZ:AV0Z50380511 Keywords : Phosphoproteomics * Enrichment * IMAC Subject RIV: ED - Physiology Impact factor: 3.914, year: 2012

  11. Analytical Modelling of the Effects of Different Gas Turbine Cooling Techniques on Engine Performance

    Science.gov (United States)

    Uysal, Selcuk Can

    In this research, MATLAB Simulink® was used to develop a cooled engine model for industrial gas turbines and aero-engines. The model consists of uncooled on-design, mean-line turbomachinery design and a cooled off-design analysis in order to evaluate the engine performance parameters by using operating conditions, polytropic efficiencies, material information and cooling system details. The cooling analysis algorithm involves a second-law analysis to calculate losses from the applied cooling technique. The model is used in a sensitivity analysis that evaluates the impacts of variations in metal Biot number, thermal barrier coating Biot number, film cooling effectiveness, internal cooling effectiveness and maximum allowable blade temperature on the main engine performance parameters of aero and industrial gas turbine engines. The model is subsequently used to analyze the relative performance impact of employing Anti-Vortex Film Cooling holes (AVH) by means of data obtained for these holes by Detached Eddy Simulation-CFD techniques that are valid for engine-like turbulence intensity conditions. Cooled blade configurations with AVH and other different external cooling techniques were used in a performance comparison study. (Abstract shortened by ProQuest.).
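The kind of one-at-a-time sensitivity sweep described above can be illustrated with the standard overall cooling-effectiveness relation; the temperatures below are hypothetical round numbers, not values from the thesis:

```python
T_gas, T_coolant = 1700.0, 900.0   # turbine inlet / coolant supply temperatures (K)

def required_effectiveness(T_blade_max):
    # Overall cooling effectiveness: eps = (T_gas - T_blade) / (T_gas - T_coolant).
    # A higher allowable blade temperature relaxes the cooling demand.
    return (T_gas - T_blade_max) / (T_gas - T_coolant)

# Sweep the maximum allowable blade temperature, one parameter at a time.
sweep = {Tb: round(required_effectiveness(Tb), 3) for Tb in (1100.0, 1200.0, 1300.0)}
print(sweep)   # {1100.0: 0.75, 1200.0: 0.625, 1300.0: 0.5}
```

Each step of allowable blade temperature directly reduces the required effectiveness, which is the qualitative trade-off the model's sensitivity analysis quantifies for full engine cycles.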

  12. Combining variational and model-based techniques to register PET and MR images in hand osteoarthritis

    International Nuclear Information System (INIS)

    Magee, Derek; Tanner, Steven F; Jeavons, Alan P; Waller, Michael; Tan, Ai Lyn; McGonagle, Dennis

    2010-01-01

    Co-registration of clinical images acquired using different imaging modalities and equipment is finding increasing use in patient studies. Here we present a method for registering high-resolution positron emission tomography (PET) data of the hand acquired using high-density avalanche chambers with magnetic resonance (MR) images of the finger obtained using a 'microscopy coil'. This allows the identification of the anatomical location of the PET radiotracer and thereby locates areas of active bone metabolism/'turnover'. Image fusion involving data acquired from the hand is demanding because rigid-body transformations cannot be employed to accurately register the images. The non-rigid registration technique that has been implemented in this study uses a variational approach to maximize the mutual information between images acquired using these different imaging modalities. A piecewise model of the fingers is employed to ensure that the methodology is robust and that it generates an accurate registration. Evaluation of the accuracy of the technique is tested using both synthetic data and PET and MR images acquired from patients with osteoarthritis. The method outperforms some established non-rigid registration techniques and results in a mean registration error that is less than approximately 1.5 mm in the vicinity of the finger joints.
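The mutual-information objective that drives the registration can be sketched with a simple joint-histogram estimator. This is a generic implementation of the quantity being maximized, not the authors' variational scheme:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    # Joint intensity histogram -> joint probability distribution.
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)
    nz = p_ab > 0                  # skip empty histogram cells
    # MI = sum over cells of p(a,b) * log( p(a,b) / (p(a) p(b)) ).
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / np.outer(p_a, p_b)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
mi_self = mutual_information(img, img)                    # registered: high MI
mi_indep = mutual_information(img, rng.random((64, 64)))  # unrelated: low MI
print(mi_self > mi_indep)   # True
```

Registration searches over transformations of one image to maximize this score against the other; a perfect alignment of identical content gives the largest value.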

  13. Numerical and modeling techniques used in the EPIC code

    International Nuclear Information System (INIS)

    Pizzica, P.A.; Abramson, P.B.

    1977-01-01

    EPIC models the fuel and coolant motion that results from internal fuel-pin pressure (from fission gas or fuel vapor) and/or from the generation of sodium vapor pressure in the coolant channel subsequent to pin failure in an LMFBR. The modeling includes the ejection of molten fuel from the pin into a coolant channel with any amount of voiding, through a clad rip that may be of any length or may expand with time. One-dimensional Eulerian hydrodynamics is used to model both the motion of fuel and fission gas inside a molten fuel cavity and the mixture of two-phase sodium and fission gas in the channel. The motion of molten fuel particles in the coolant channel is tracked with a particle-in-cell technique.
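The particle-in-cell idea, Lagrangian particles advected through a fixed Eulerian grid, can be sketched as follows. The grid, velocity field, and particle positions are illustrative stand-ins, not EPIC's coolant-channel solution:

```python
dx, nx = 0.1, 20                                 # cell width (m), cell count
u = [0.5 + 0.5 * i / nx for i in range(nx)]      # velocity stored per cell (m/s)

particles = [0.05, 0.10, 0.15, 0.20]             # fuel particle positions (m)
dt = 0.01
for _ in range(100):                             # advance 1 s of motion
    # Each particle moves with the velocity of the Eulerian cell it occupies.
    particles = [x + dt * u[min(int(x / dx), nx - 1)] for x in particles]
print(particles)   # all particles moved downstream; ordering is preserved
```

In a full PIC scheme the particles would also feed mass and momentum back onto the grid each step; here only the advection half of the coupling is shown.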

  14. [Hierarchy structuring for mammography technique by interpretive structural modeling method].

    Science.gov (United States)

    Kudo, Nozomi; Kurowarabi, Kunio; Terashita, Takayoshi; Nishimoto, Naoki; Ogasawara, Katsuhiko

    2009-10-20

    Participation in screening mammography is currently encouraged in Japan because of the increase in breast cancer morbidity. However, the pain and discomfort of mammography are recognized as a significant deterrent for women considering this examination. Quick procedures, sufficient experience, and advanced skills are therefore required of radiologic technologists. The aim of this study was to make the key points of the imaging technique explicit and to aid understanding of this complicated procedure. We interviewed three technologists who were highly skilled in mammography, and 14 factors were retrieved by using brainstorming and the KJ method. We then applied Interpretive Structural Modeling (ISM) to the factors and developed a hierarchical concept structure. The result was a six-layer hierarchy whose top node was explanation of the entire mammography procedure. The presence of male technologists was identified as a negative factor. Factors concerned with explanation occupied the upper nodes, and particular attention was given to X-ray techniques and related considerations. The findings will help beginners improve their skills.
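The ISM machinery itself, building a reachability matrix and partitioning factors into levels, is compact enough to sketch. The four factors and the influence matrix below are hypothetical stand-ins for the study's 14 factors:

```python
# Hypothetical factors, abbreviated from a mammography-like task.
factors = ["explain procedure", "positioning", "compression", "x-ray exposure"]
# direct[i][j] = 1 if factor i directly influences factor j (illustrative).
direct = [
    [0, 1, 1, 1],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]

n = len(factors)
# Reachability matrix: identity + direct links, closed under transitivity (Warshall).
reach = [[1 if i == j else direct[i][j] for j in range(n)] for i in range(n)]
for k in range(n):
    for i in range(n):
        for j in range(n):
            reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])

# Level partitioning: a factor sits at the current level when its reachability
# set (within the remaining factors) equals the intersection of its
# reachability and antecedent sets.
levels, remaining = [], set(range(n))
while remaining:
    level = set()
    for i in remaining:
        r = {j for j in remaining if reach[i][j]}
        a = {j for j in remaining if reach[j][i]}
        if r == r & a:
            level.add(i)
    levels.append(sorted(level))
    remaining -= level

print([[factors[i] for i in lvl] for lvl in levels])
```

The driving factor (here "explain procedure") ends up in the final partition, mirroring how the study's explanation factor anchored the six-layer hierarchy.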

  15. Teaching scientific concepts through simple models and social communication techniques

    International Nuclear Information System (INIS)

    Tilakaratne, K.

    2011-01-01

    For science education, it is important to demonstrate to students the relevance of scientific concepts in every-day life experiences. Although there are methods available for achieving this goal, it is more effective if cultural flavor is also added to the teaching techniques and thereby the teacher and students can easily relate the subject matter to their surroundings. Furthermore, this would bridge the gap between science and day-to-day experiences in an effective manner. It could also help students to use science as a tool to solve problems faced by them and consequently they would feel science is a part of their lives. In this paper, it has been described how simple models and cultural communication techniques can be used effectively in demonstrating important scientific concepts to the students of secondary and higher secondary levels by using two consecutive activities carried out at the Institute of Fundamental Studies (IFS), Sri Lanka. (author)

  16. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.
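A minimal example of the kind of model such packages analyze: a ratio-dependent predator-prey system integrated with a fourth-order Runge-Kutta scheme. The functional form is the standard ratio-dependent response and all parameter values are illustrative, not the paper's:

```python
def deriv(state, r=1.0, K=10.0, a=1.0, h=1.0, e=0.5, m=0.2):
    # Ratio-dependent predator-prey model (illustrative parameters).
    n, p = state
    pred = a * n * p / (n + h * p)          # ratio-dependent functional response
    return (r * n * (1 - n / K) - pred,     # prey: logistic growth minus predation
            e * pred - m * p)               # predator: conversion minus mortality

def rk4_step(state, dt):
    # Classical fourth-order Runge-Kutta step.
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (w + 2 * x + 2 * y + z)
                 for s, w, x, y, z in zip(state, k1, k2, k3, k4))

state = (5.0, 2.0)                          # initial prey, predator densities
for _ in range(5000):                       # integrate to t = 500
    state = rk4_step(state, 0.1)
print(state)                                # ~ (4.0, 6.0), a coexistence equilibrium
```

For these parameter values the coexistence equilibrium (4, 6) is locally stable, so the trajectory spirals in; dynamical-systems packages automate exactly this kind of equilibrium and stability bookkeeping across parameter space.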

  17. Joining the Dots: Theoretically Connecting the Vona du Toit Model of Creative Ability (VdTMoCA) with Supported Employment

    OpenAIRE

    de Bruyn, Marna; Wright, Jon

    2017-01-01

    The Vona du Toit Model of Creative Ability (VdTMoCA) presents a framework for understanding client motivation and action in occupational therapy, emphasising the relationship between motivation and action. Similarly, motivation to work is regarded as the primary and in some instances, the only eligibility criterion for inclusion in supported employment services. This commentary explores the potential theoretical link between the VdTMoCA and supported employment, primarily applied to the South...

  18. A new cerebral vasospasm model established with endovascular puncture technique

    International Nuclear Information System (INIS)

    Tu Jianfei; Liu Yizhi; Ji Jiansong; Zhao Zhongwei

    2011-01-01

    Objective: To investigate the method of establishing cerebral vasospasm (CVS) models in rabbits by using endovascular puncture technique. Methods: Endovascular puncture procedure was performed in 78 New Zealand white rabbits to produce subarachnoid hemorrhage (SAH). The survival rabbits were randomly divided into seven groups (3 h, 12 h, 1 d, 2 d, 3 d, 7 d and 14 d), with five rabbits in each group for both study group (SAH group) and control group. Cerebral CT scanning was carried out in all rabbits both before and after the operation. The inner diameter and the thickness of vascular wall of both posterior communicating artery (PcoA) and basilar artery (BA) were determined after the animals were sacrificed, and the results were analyzed. Results: Of 78 experimental rabbits, CVS model was successfully established in 45, including 35 of SAH group and 10 control subgroup. The technical success rate was 57.7%. Twelve hours after the procedure, the inner diameter of PcoA and BA in SAH group was decreased by 45.6% and 52.3%, respectively, when compared with these in control group. The vascular narrowing showed biphasic changes, the inner diameter markedly decreased again at the 7th day when the decrease reached its peak to 31.2% and 48.6%, respectively. Conclusion: Endovascular puncture technique is an effective method to establish CVS models in rabbits. The death rate of experimental animals can be decreased if new interventional material is used and the manipulation is carefully performed. (authors)

  19. Analysis of thermoluminescence kinetics of Mg2SiO4:Tb compounds employing an interactive model

    International Nuclear Information System (INIS)

    Marcazzo, J.; Prokic, M.; Santiago, M.; Molina, P.; Caselli, E.

    2009-01-01

    The kinetics involved in the thermoluminescence (TL) of Mg2SiO4:Tb compounds has been investigated by unfolding glow curves employing both the General Order model and a model that takes into account interactions among traps. The dependence of the glow curve shape on dose is correctly described only if interactions among traps are included in the analysis.

  20. Analysis of the glow curve of SrB4O7:Dy compounds employing the GOT model

    Science.gov (United States)

    Ortega, F.; Molina, P.; Santiago, M.; Spano, F.; Lester, M.; Caselli, E.

    2006-02-01

    The glow curve of SrB4O7:Dy phosphors has been analysed with the general one-trap (GOT) model. To solve the differential equation describing the GOT model, a novel algorithm has been employed, which significantly reduces the deconvolution time with respect to the time required by usual integration algorithms, such as the Runge-Kutta method.
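A sketch of the direct numerical approach the novel algorithm improves on: integrating the GOT rate equation with a Runge-Kutta method and reading off the glow peak. The kinetic parameters are illustrative, and the rate expression below is the commonly used one-trap form (retrapping ratio R), assumed here rather than taken from the paper:

```python
import math

# Assumed GOT kinetics under a linear heating ramp:
#   -dn/dT = (s / beta) * exp(-E / (k T)) * n^2 / ((N - n) R + n)
E, s, k = 1.0, 1e12, 8.617e-5      # trap depth (eV), frequency factor (1/s), k (eV/K)
beta, N, R = 1.0, 1e5, 0.1         # heating rate (K/s), trap density, retrapping ratio

def dndT(T, n):
    return -(s / beta) * math.exp(-E / (k * T)) * n * n / ((N - n) * R + n)

T, dT, n = 300.0, 0.01, 1e5        # start fully filled
glow = []                          # (temperature, intensity) pairs
while T < 500.0 and n > 1e-6:
    glow.append((T, -dndT(T, n)))
    # One classical RK4 step for n(T).
    k1 = dndT(T, n)
    k2 = dndT(T + dT / 2, n + dT / 2 * k1)
    k3 = dndT(T + dT / 2, n + dT / 2 * k2)
    k4 = dndT(T + dT, n + dT * k3)
    n += dT / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    T += dT

T_peak = max(glow, key=lambda tp: tp[1])[0]
print(T_peak)   # glow-peak temperature, a few hundred kelvin for these values
```

Deconvolution fits repeat this integration many times inside an optimizer, which is why a faster special-purpose solver pays off.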

  1. Analysis of the glow curve of SrB4O7:Dy compounds employing the GOT model

    International Nuclear Information System (INIS)

    Ortega, F.; Molina, P.; Santiago, M.; Spano, F.; Lester, M.; Caselli, E.

    2006-01-01

    The glow curve of SrB4O7:Dy phosphors has been analysed with the general one-trap (GOT) model. To solve the differential equation describing the GOT model, a novel algorithm has been employed, which significantly reduces the deconvolution time with respect to the time required by usual integration algorithms, such as the Runge-Kutta method.

  2. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed
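For contrast with the coupled model discussed above, a conduction-only calculation is easy to sketch as an explicit 1-D finite-difference scheme. The wall properties, boundary temperatures, and fixed-temperature boundary conditions are rough illustrative values, not the study's validated model:

```python
nx = 21
dx = 0.0508 / (nx - 1)            # 2-inch (5.08 cm) wall, discretized
alpha = 1.4e-7                    # PUR-foam-like thermal diffusivity (m^2/s), approximate
r = 0.4                           # explicit-scheme stability requires r <= 0.5
dt = r * dx * dx / alpha          # time step (s)

T = [5.0] * nx                    # start at product temperature (deg C)
T[-1] = 35.0                      # hot summer-profile boundary held fixed
steps = int(96 * 3600 / dt)       # simulate the 96-hour profile
for _ in range(steps):
    # Interior update: T_i += r * (T_{i+1} - 2 T_i + T_{i-1}); boundaries fixed.
    T = ([T[0]] +
         [T[i] + r * (T[i + 1] - 2 * T[i] + T[i - 1]) for i in range(1, nx - 1)] +
         [T[-1]])
print(T[nx // 2])                 # mid-wall temperature, ~20 C at steady state
```

With fixed 5 °C and 35 °C faces the profile relaxes to a straight line, so the mid-wall node settles at the average; capturing the air-cavity convection the authors model requires coupling this conduction solve to a fluid solver.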

  3. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    Energy Technology Data Exchange (ETDEWEB)

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and "smart" wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow

  4. Short term load forecasting technique based on the seasonal exponential adjustment method and the regression model

    International Nuclear Information System (INIS)

    Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao

    2013-01-01

    Highlights: ► The seasonal and trend items of the data series are forecasted separately. ► The seasonal item in the data series is verified by Kendall τ correlation testing. ► Different regression models are applied to forecasting the trend item. ► We examine the superiority of the combined models by comparing quartile values. ► Paired-sample T tests confirm the superiority of the combined models. - Abstract: For an energy-limited economic system, it is crucial to forecast load demand accurately. This paper develops a 1-week-ahead daily load forecasting approach in which load demand series are predicted by employing information from earlier days that are similar to the forecast day. As in many nonlinear systems, a seasonal item and a trend item coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified using the Kendall τ correlation testing method. Then, in the belief that forecasting the seasonal item and the trend item separately would improve accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed for seasonal- and trend-item forecasting, respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique can significantly improve accuracy, even though eleven different models are applied to the trend-item forecasting. The superior performance of this separate forecasting technique is further confirmed by paired-sample T tests.
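The separate-forecasting idea can be sketched in a few lines: estimate a multiplicative seasonal index, deseasonalize, fit a regression model to the trend item, and recombine. The data are synthetic and a single linear regression stands in for the paper's eleven trend models:

```python
import numpy as np

pattern = [1.1, 1.2, 1.15, 1.1, 1.05, 0.8, 0.7]   # synthetic weekly shape
t = np.arange(28)
load = (100 + 2.0 * t) * np.tile(pattern, 4)       # 4 weeks of daily load

# 1) Seasonal item: a multiplicative index per weekday.
index = load.reshape(4, 7).mean(axis=0) / load.mean()

# 2) Trend item: deseasonalize, then fit a regression model (linear here).
slope, intercept = np.polyfit(t, load / index[t % 7], 1)

# 3) Recombine seasonal and trend items for the 1-week-ahead forecast.
t_new = np.arange(28, 35)
forecast = (slope * t_new + intercept) * index[t_new % 7]
rel_err = np.max(np.abs(forecast / ((100 + 2.0 * t_new) * np.array(pattern)) - 1))
print(rel_err)   # small relative error on this synthetic series
```

Treating the weekly pattern and the growth trend separately is what lets a simple regression track a strongly seasonal series.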

  5. Monte Carlo technique for very large ising models

    Science.gov (United States)

    Kalle, C.; Winkelmann, V.

    1982-08-01

    Rebbi's multispin coding technique is improved and applied to the kinetic Ising model with size 600×600×600. We give the central part of our computer program (for a CDC Cyber 76), which will also be helpful in simulations of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 Tc is found to decay asymptotically as exp(-t/2.90) if t is measured in Monte Carlo steps per spin, with M(t = 0) = 1 initially.
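A plain (non-multispin-coded) Metropolis version of the kinetic Ising simulation, shrunk to a 2-D lattice for brevity, shows the same qualitative behavior: starting from M(t=0) = 1, the magnetization decays toward zero above the critical temperature:

```python
import math, random

random.seed(2)
L = 32
T = 1.4 * 2.269                       # 1.4 * Tc for the 2-D model (J = kB = 1)
spins = [[1] * L for _ in range(L)]   # fully ordered start: M(t=0) = 1

def sweep():
    # One Monte Carlo step per spin (Metropolis single-spin flips).
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb     # energy cost of flipping spin (i, j)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] *= -1

magnetization = [1.0]
for _ in range(30):
    sweep()
    magnetization.append(sum(map(sum, spins)) / (L * L))
print(magnetization[-1])   # near zero: the system has disordered
```

Multispin coding packs many such spins into one machine word and updates them with bitwise operations, which is what made the 600³ lattice feasible on 1982 hardware.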

  6. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic, complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Miniaturizing world phenomena within the framework of a model in order to simulate them is therefore a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI’s ArcGIS, OpenMap, GeoTools) for geospatial modelling is an indication of growing user interest in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, such models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge, however, is the difficulty of validating and verifying ABMS: because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify such models by conventional validation methods. Finding appropriate validation techniques for ABM is therefore necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  7. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic, complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Miniaturizing world phenomena within the framework of a model in order to simulate them is therefore a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools) for geospatial modelling is an indication of growing user interest in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, such models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge, however, is the difficulty of validating and verifying ABMS: because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify such models by conventional validation methods. Finding appropriate validation techniques for ABM is therefore necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.
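One common statistical validation technique for ABMS is comparing an emergent statistic against a known analytic result. A minimal sketch (not from the paper): random-walk agents whose mean squared displacement should match the theoretical value, which for an unbiased lattice walk equals the number of steps:

```python
import random

random.seed(0)

# Minimal agent-based model: independent random walkers on a 2-D grid.
class Agent:
    def __init__(self):
        self.x = self.y = 0

    def step(self):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        self.x += dx
        self.y += dy

agents = [Agent() for _ in range(2000)]
steps = 100
for _ in range(steps):
    for a in agents:
        a.step()

# Statistical validation: expected mean squared displacement after t steps is t.
msd = sum(a.x ** 2 + a.y ** 2 for a in agents) / len(agents)
print(msd)   # close to 100, the analytic expectation
```

When the emergent statistic falls within sampling error of the analytic value, the implementation passes this check; real ABMS validation layers several such tests (docking against other models, sensitivity analysis, empirical comparison).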

  8. Laparoscopic anterior resection: new anastomosis technique in a pig model.

    Science.gov (United States)

    Bedirli, Abdulkadir; Yucel, Deniz; Ekim, Burcu

    2014-01-01

    Bowel anastomosis after anterior resection is one of the most difficult tasks to perform during laparoscopic colorectal surgery. This study aims to evaluate a new feasible and safe intracorporeal anastomosis technique after laparoscopic left-sided colon or rectum resection in a pig model. The technique was evaluated in 5 pigs. The OrVil device (Covidien, Mansfield, Massachusetts) was inserted into the anus and advanced proximally to the rectum. A 0.5-cm incision was made in the sigmoid colon, and the 2 sutures attached to its delivery tube were cut. After the delivery tube was evacuated through the anus, the tip of the anvil was removed through the perforation. The sigmoid colon was transected just distal to the perforation with an endoscopic linear stapler. The rectosigmoid segment to be resected was removed through the anus with a grasper, and distal transection was performed. A 25-mm circular stapler was inserted and combined with the anvil, and end-to-side intracorporeal anastomosis was then performed. We performed the technique in 5 pigs. Anastomosis required an average of 12 minutes. We observed that the proximal and distal donuts were completely removed in all pigs. No anastomotic air leakage was observed in any of the animals. This study shows the efficacy and safety of intracorporeal anastomosis with the OrVil device after laparoscopic anterior resection.

  9. A Continuous Dynamic Traffic Assignment Model From Plate Scanning Technique

    Energy Technology Data Exchange (ETDEWEB)

    Rivas, A.; Gallego, I.; Sanchez-Cambronero, S.; Ruiz-Ripoll, L.; Barba, R.M.

    2016-07-01

    This paper presents a methodology for the dynamic estimation of traffic flows on all links of a network from observable field data, assuming the first-in-first-out (FIFO) hypothesis. The traffic flow intensities recorded at the exits of the scanned links are propagated to obtain the flow waves on unscanned links. For that, the model calculates the flow-cost functions from information registered with the plate scanning technique. The model also addresses the concern that the parameters of the flow-cost functions must replicate real traffic flow behaviour: it includes a new algorithm that adjusts the parameter values to link characteristics when their quality is questionable. This requires an a priori study of the locations of the scanning devices so that all path flows can be identified and travel times measured on all links. A synthetic network is used to illustrate the proposed method and to prove its usefulness and feasibility. (Author)
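The FIFO propagation step can be sketched with a BPR-style flow-cost function. The functional form and every parameter value below are illustrative assumptions, not the paper's calibrated functions:

```python
# BPR-style flow-cost function: travel time grows with the flow/capacity ratio.
def travel_time(flow, t0=60.0, capacity=1800.0, alpha=0.15, beta=4):
    return t0 * (1 + alpha * (flow / capacity) ** beta)   # seconds

# Flow intensities (veh/h) recorded at the link entrance, one reading per minute.
inflow = [600, 900, 1500, 1800, 1500, 900, 600]

# Under FIFO, vehicles entering at minute m exit at m plus the travel time,
# which propagates the recorded flow wave to the downstream end of the link.
exit_times = [m * 60 + travel_time(q) for m, q in enumerate(inflow)]
assert exit_times == sorted(exit_times), "FIFO would be violated"
print([round(s) for s in exit_times])
```

Because the exit times stay ordered, the entry flow profile can be shifted wave-by-wave to the link exit; this is the mechanism used to fill in flows on links without scanning devices.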

  10. Learning at the workplace and sustainable employability: a multi-source model moderated by age

    NARCIS (Netherlands)

    Marjan Gorgievski; Annet de Lange; Beatrice van der Heijden

    2015-01-01

    This study, among 330 pairs of employees and their supervisors, tested whether self- versus supervisor ratings of five employability dimensions (occupational expertise, corporate sense, personal flexibility, anticipation and optimization, and balance) are associated with different learning

  11. Learning at the workplace and sustainable employability: A multi-source model moderated by age

    NARCIS (Netherlands)

    Heijden, B.I.J.M. van der; Gorgievski, M.J.; Lange, A.H. de

    2016-01-01

    This study, among 330 pairs of employees and their supervisors, tested whether self- versus supervisor ratings of five employability dimensions (occupational expertise, corporate sense, personal flexibility, anticipation and optimization, and balance) are associated with different learning

  12. Learning at the workspace and sustainable employability: a multi source model moderated by age

    NARCIS (Netherlands)

    Annet de Lange; Beatrice van der Heijden; Marjan Gorgievski

    2015-01-01

    This study, among 330 pairs of employees and their supervisors, tested whether self- versus supervisor ratings of five employability dimensions (occupational expertise, corporate sense, personal flexibility, anticipation and optimization, and balance) are associated with different learning

  13. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    International Nuclear Information System (INIS)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-01-01

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. 
The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  14. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. 
The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  15. Improving default risk prediction using Bayesian model uncertainty techniques.

    Science.gov (United States)

    Kazemi, Reza; Mosleh, Ali

    2012-11-01

    Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. The potential of exposure is measured in terms of probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessment of probability of default and transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses the techniques developed in physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis. © 2012 Society for Risk Analysis.
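The flavor of the approach, blending several agencies' estimates weighted by their historical accuracy, can be sketched as a simple grid posterior. All probabilities and error figures are hypothetical, and a Gaussian likelihood stands in for the article's full uncertainty treatment:

```python
import numpy as np

# Grid over possible default probabilities, flat prior.
p = np.linspace(1e-4, 0.2, 2000)
posterior = np.ones_like(p)

# Each agency contributes (estimate, historical standard error); the standard
# error encodes that agency's past performance.
estimates = [(0.030, 0.010),   # agency A: historically more accurate
             (0.045, 0.020)]   # agency B: historically less accurate

for est, sigma in estimates:
    posterior *= np.exp(-0.5 * ((p - est) / sigma) ** 2)   # Gaussian likelihood
posterior /= posterior.sum()

p_default = float((p * posterior).sum())
print(p_default)   # precision-weighted blend, pulled toward the accurate agency
```

The posterior mean lands nearer the historically accurate agency's estimate, which is the qualitative behavior the article's methodology formalizes (and extends to transition probabilities and shock events).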

  16. Mechanical Properties of Nanostructured Materials Determined Through Molecular Modeling Techniques

    Science.gov (United States)

    Clancy, Thomas C.; Gates, Thomas S.

    2005-01-01

    The potential for gains in material properties over conventional materials has motivated an effort to develop novel nanostructured materials for aerospace applications. These novel materials typically consist of a polymer matrix reinforced with particles on the nanometer length scale. In this study, molecular modeling is used to construct fully atomistic models of a carbon nanotube embedded in an epoxy polymer matrix. Functionalization of the nanotube, which consists of the introduction of direct chemical bonding between the polymer matrix and the nanotube and hence provides a load transfer mechanism, is systematically varied. The relative effectiveness of functionalization in a nanostructured material may depend on a variety of factors related to the details of the chemical bonding and the polymer structure at the nanotube-polymer interface. The objective of this modeling is to determine what influence the details of functionalization of the carbon nanotube with the polymer matrix have on the resulting mechanical properties. By considering a range of degrees of functionalization, the structure-property relationships of these materials are examined and the mechanical properties of these models are calculated using standard techniques.

  17. Towards intelligent diagnostic system employing integration of mathematical and engineering model

    International Nuclear Information System (INIS)

    Isa, Nor Ashidi Mat

    2015-01-01

    as mathematical models of clustering techniques have been widely used in developing medical diagnostic systems. The selected features are classified using mathematical models that embed engineering theories such as artificial intelligence, support vector machines, neural networks and fuzzy-neuro systems. These classifiers provide the diagnostic results without human intervention. Among many published works, several prototypes have been developed, namely NeuralPap, Neural Mammo, and Cervix Kit. The first system (NeuralPap) is an automatic intelligent diagnostic system for classifying and distinguishing between normal and cervical cancerous cells. Meanwhile, the Cervix Kit is a portable field-programmable gate array (FPGA)-based cervical diagnostic kit that can automatically diagnose cancerous cells based on the images obtained during a sampling test. Besides the cervical diagnostic systems, the Neural Mammo system was developed specifically to aid the diagnosis of breast cancer using fine needle aspiration images.

  18. Towards intelligent diagnostic system employing integration of mathematical and engineering model

    Science.gov (United States)

    Isa, Nor Ashidi Mat

    2015-05-01

    as mathematical models of clustering techniques have been widely used in developing medical diagnostic systems. The selected features are classified using mathematical models that embed engineering theories such as artificial intelligence, support vector machines, neural networks and fuzzy-neuro systems. These classifiers provide the diagnostic results without human intervention. Among many published works, several prototypes have been developed, namely NeuralPap, Neural Mammo, and Cervix Kit. The first system (NeuralPap) is an automatic intelligent diagnostic system for classifying and distinguishing between normal and cervical cancerous cells. Meanwhile, the Cervix Kit is a portable field-programmable gate array (FPGA)-based cervical diagnostic kit that can automatically diagnose cancerous cells based on the images obtained during a sampling test. Besides the cervical diagnostic systems, the Neural Mammo system was developed specifically to aid the diagnosis of breast cancer using fine needle aspiration images.

  19. Towards intelligent diagnostic system employing integration of mathematical and engineering model

    Energy Technology Data Exchange (ETDEWEB)

    Isa, Nor Ashidi Mat [Imaging and Intelligent System Research Team (ISRT), School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Penang (Malaysia)

    2015-05-15

    as mathematical models of clustering techniques have been widely used in developing medical diagnostic systems. The selected features are classified using mathematical models that embed engineering theories such as artificial intelligence, support vector machines, neural networks and fuzzy-neuro systems. These classifiers provide the diagnostic results without human intervention. Among many published works, several prototypes have been developed, namely NeuralPap, Neural Mammo, and Cervix Kit. The first system (NeuralPap) is an automatic intelligent diagnostic system for classifying and distinguishing between normal and cervical cancerous cells. Meanwhile, the Cervix Kit is a portable field-programmable gate array (FPGA)-based cervical diagnostic kit that can automatically diagnose cancerous cells based on the images obtained during a sampling test. Besides the cervical diagnostic systems, the Neural Mammo system was developed specifically to aid the diagnosis of breast cancer using fine needle aspiration images.

  20. Characterising and modelling regolith stratigraphy using multiple geophysical techniques

    Science.gov (United States)

    Thomas, M.; Cremasco, D.; Fotheringham, T.; Hatch, M. A.; Triantifillis, J.; Wilford, J.

    2013-12-01

    Regolith is the weathered, typically mineral-rich layer from fresh bedrock to land surface. It encompasses soil (A, E and B horizons) that has undergone pedogenesis. Below is the weathered C horizon that retains at least some of the original rocky fabric and structure. At the base of this is the lower regolith boundary of continuous hard bedrock (the R horizon). Regolith may be absent, e.g. at rocky outcrops, or may be many tens of metres deep. Comparatively little is known about regolith, and critical questions remain regarding composition and characteristics - especially deeper where the challenge of collecting reliable data increases with depth. In Australia research is underway to characterise and map regolith using consistent methods at scales ranging from local (e.g. hillslope) to continental scales. These efforts are driven by many research needs, including Critical Zone modelling and simulation. Pilot research in South Australia using digitally-based environmental correlation techniques modelled the depth to bedrock to 9 m for an upland area of 128 000 ha. One finding was the inability to reliably model local scale depth variations over horizontal distances of 2 - 3 m and vertical distances of 1 - 2 m. The need to better characterise variations in regolith to strengthen models at these fine scales was discussed. Addressing this need, we describe high intensity, ground-based multi-sensor geophysical profiling of three hillslope transects in different regolith-landscape settings to characterise fine resolution (i.e. a number of frequencies; multiple frequency, multiple coil electromagnetic induction; and high resolution resistivity. These were accompanied by georeferenced, closely spaced deep cores to 9 m - or to core refusal. The intact cores were sub-sampled to standard depths and analysed for regolith properties to compile core datasets consisting of: water content; texture; electrical conductivity; and weathered state. After preprocessing (filtering, geo

  1. Modelling Diverse Soil Attributes with Visible to Longwave Infrared Spectroscopy Using PLSR Employed by an Automatic Modelling Engine

    Directory of Open Access Journals (Sweden)

    Veronika Kopačková

    2017-02-01

    Full Text Available The study tested a data mining engine (PARACUDA®) to predict various soil attributes (BC, CEC, BS, pH, Corg, Pb, Hg, As, Zn and Cu) using reflectance data acquired for both the optical and thermal infrared regions. The engine was designed to utilize large datasets in parallel and automatic processing to build and process hundreds of diverse models in a unified manner while avoiding bias and deviations caused by the operator(s). The system is able to systematically assess the effect of diverse preprocessing techniques; additionally, it analyses other parameters, such as different spectral resolutions and spectral coverages, that affect soil properties. Accordingly, the system was used to extract models across both the optical and thermal infrared spectral regions, which hold significant chromophores. In total, 2880 models were evaluated, where each model was generated with a different preprocessing scheme of the input spectral data. The models were assessed using statistical parameters such as the coefficient of determination (R2), square error of prediction (SEP), relative percentage difference (RPD) and by physical explanation (spectral assignments). It was found that the smoothing procedure is the most beneficial preprocessing stage, especially when combined with spectral derivation (1st or 2nd derivatives). Automatically and without the need of an operator, the data mining engine enabled the best prediction models to be found among all the combinations tested. Furthermore, the data mining approach used in this study and its processing scheme proved to be efficient tools for gaining a better understanding of the geochemical properties of the samples studied (e.g., mineral associations).
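    The two preprocessing steps the study found most beneficial, smoothing followed by spectral derivation, can be sketched in a few lines. This is a minimal stand-in (moving-average smoothing and a finite-difference first derivative on an invented reflectance spectrum), not the engine's actual pipeline; a real workflow would then feed the preprocessed spectra to a PLSR model.

```python
# Minimal sketch of spectral preprocessing: smoothing, then 1st derivative.

def smooth(spectrum, window=3):
    """Centred moving-average smoothing; edge samples keep their original values."""
    half = window // 2
    out = list(spectrum)
    for i in range(half, len(spectrum) - half):
        out[i] = sum(spectrum[i - half:i + half + 1]) / window
    return out

def first_derivative(spectrum):
    """Finite-difference approximation of the first spectral derivative."""
    return [b - a for a, b in zip(spectrum, spectrum[1:])]

# Made-up reflectance values, purely for illustration.
reflectance = [0.20, 0.22, 0.35, 0.21, 0.23, 0.40, 0.42]
preprocessed = first_derivative(smooth(reflectance))
```

    Derivation after smoothing suppresses baseline offsets while the smoothing keeps the derivative from amplifying noise, which is consistent with the study's finding that the two steps work best in combination.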

  2. IMAGE-BASED MODELING TECHNIQUES FOR ARCHITECTURAL HERITAGE 3D DIGITALIZATION: LIMITS AND POTENTIALITIES

    Directory of Open Access Journals (Sweden)

    C. Santagati

    2013-07-01

    Full Text Available 3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS), the different techniques of image matching, feature extraction and mesh optimization are part of an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on their computer; whereas desktop systems demand long processing times and heavyweight approaches. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but the approaches to verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to Architectural Heritage Documentation. Our approach to this challenging problem is to compare 3D models by Autodesk 123D Catch with 3D models by terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large scale buildings, for practitioner purposes.

  3. Viable Techniques, Leontief’s Closed Model, and Sraffa’s Subsistence Economies

    Directory of Open Access Journals (Sweden)

    Alberto Benítez

    2014-11-01

    Full Text Available This paper studies the production techniques employed in economies that reproduce themselves. Special attention is paid to the distinction usually made between those that do not produce a surplus and those that do, which are referred to as first and second class economies, respectively. Based on this, we present a new definition of viable economies and show that every viable economy of the second class can be represented as a viable economy of the first class under two different forms, Leontief's closed model and Sraffa's subsistence economies. This allows us to present some remarks concerning the economic interpretation of the two models. On the one hand, we argue that the participation of each good in the production of every good can be considered as a normal characteristic of the first model and, on the other hand, we provide a justification for the same condition to be considered a characteristic of the second model. Furthermore, we discuss three definitions of viable techniques advanced by other authors and show that they differ from ours because they admit economies that do not reproduce themselves completely.

  4. Image-Based Modeling Techniques for Architectural Heritage 3d Digitalization: Limits and Potentialities

    Science.gov (United States)

    Santagati, C.; Inzerillo, L.; Di Paola, F.

    2013-07-01

    3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS), the different techniques of image matching, feature extraction and mesh optimization are part of an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on their computer; whereas desktop systems demand long processing times and heavyweight approaches. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but the approaches to verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to Architectural Heritage Documentation. Our approach to this challenging problem is to compare 3D models by Autodesk 123D Catch with 3D models by terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large scale buildings, for practitioner purposes.

  5. Optimization of DNA Sensor Model Based Nanostructured Graphene Using Particle Swarm Optimization Technique

    Directory of Open Access Journals (Sweden)

    Hediyeh Karimi

    2013-01-01

    Full Text Available It has been predicted that graphene nanomaterials will be among the candidate materials for post-silicon electronics due to their astonishing properties such as high carrier mobility, thermal conductivity, and biocompatibility. Graphene is a zero-gap semimetal nanomaterial with a demonstrated ability to serve as an excellent candidate for DNA sensing. Graphene-based DNA sensors have been used to detect DNA adsorption in order to examine the DNA concentration in an analyte solution. In particular, there is an essential need for developing cost-effective DNA sensors, given that they are suitable for the diagnosis of genetic or pathogenic diseases. In this paper, a particle swarm optimization technique is employed to optimize the analytical model of a graphene-based DNA sensor used for electrical detection of DNA molecules. The results are reported for 5 different concentrations, covering a range from 0.01 nM to 500 nM. The comparison of the optimized model with the experimental data shows an accuracy of more than 95%, which verifies that the optimized model is reliable for use in any application of the graphene-based DNA sensor.
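    The optimization technique named above can be sketched generically. The snippet below is a textbook particle swarm optimizer minimizing a toy squared-error objective; the sensor model's actual objective function and parameter bounds are not reproduced here and all constants are illustrative.

```python
import random

# Toy particle swarm optimisation: minimise f(x) = (x - 3)^2 over one parameter.

def pso(objective, n_particles=20, iters=100, lo=-10.0, hi=10.0, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                    # each particle's best position so far
    gbest = min(pos, key=objective)   # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # Standard velocity update: inertia + cognitive + social terms.
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i]
            if objective(pos[i]) < objective(gbest):
                gbest = pos[i]
    return gbest

best = pso(lambda x: (x - 3.0) ** 2)  # converges close to the minimum at x = 3
```

    In a model-fitting setting like the paper's, the objective would be the squared error between the analytical sensor model's output and the measured data across the tested concentrations.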

  6. Model assessment using a multi-metric ranking technique

    Science.gov (United States)

    Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.

    2017-12-01

    Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and the identification of adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Metrics include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as rank correlation, were also explored but removed when their information was found to be generally duplicative of other metrics. While equal weights are applied here, the weights could be altered to favor preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts, instead of distance, along-track, and cross-track errors, is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process but were found useful in an independent context, and will be briefly reported.
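    The weighted-tally idea can be sketched as follows: rank every model on every metric, sum the ranks with equal weights, and let the smallest total win. The metric values below are invented for illustration, and for simplicity all metrics are treated as lower-is-better.

```python
# Equal-weight multi-metric ranking: sum each model's per-metric rank.

def rank_models(scores):
    """scores: {model: {metric: value}}, where a lower value is better."""
    models = list(scores)
    metrics = next(iter(scores.values())).keys()
    totals = {m: 0 for m in models}
    for metric in metrics:
        ordered = sorted(models, key=lambda m: scores[m][metric])
        for rank, m in enumerate(ordered, start=1):
            totals[m] += rank  # equal weights; scale here to favor preferred metrics
    return sorted(models, key=lambda m: totals[m])

scores = {
    "model_a": {"abs_error": 1.2, "bias": 0.3, "rmsd": 1.5},
    "model_b": {"abs_error": 0.9, "bias": 0.5, "rmsd": 1.1},
    "model_c": {"abs_error": 1.6, "bias": 0.2, "rmsd": 1.8},
}
ranking = rank_models(scores)  # best-to-worst order by total rank
```

    A production version would also normalize metrics where higher is better (e.g. correlation) before ranking.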

  7. VLF surface-impedance modelling techniques for coal exploration

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, G.; Thiel, D.; O'Keefe, S. [Central Queensland University, Rockhampton, Qld. (Australia). Faculty of Engineering and Physical Systems]

    2000-10-01

    New and efficient computational techniques are required for geophysical investigations of coal. This will allow automated inverse analysis procedures to be used for interpretation of field data. In this paper, a number of methods of modelling electromagnetic surface impedance measurements are reviewed, particularly as applied to typical coal seam geology found in the Bowen Basin. At present, the Impedance method and the finite-difference time-domain (FDTD) method appear to offer viable solutions although both have problems. The Impedance method is currently slightly inaccurate, and the FDTD method has large computational demands. In this paper both methods are described and results are presented for a number of geological targets. 17 refs., 14 figs.

  8. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Full Text Available Demand management (DM) is the process that helps companies sell the right product to the right customer, at the right time, and for the right price. The challenge for any company is therefore to determine how much to sell, at what price, and to which market segment, while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a dynamical system analogy based on an active suspension, and a stability analysis is provided via the Lyapunov direct method.
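    The receding-horizon logic behind MPC-based pricing can be sketched with a toy example. The linear demand curve d(p) = a - b*p, the candidate prices, and the horizon length below are all assumptions for illustration; the paper's actual active-suspension-analogy dynamics are not reproduced.

```python
import itertools

# Toy MPC-style pricing: plan over a short horizon, apply only the first price.

def demand(price, a=100.0, b=2.0):
    """Units demanded at a given price under a simple linear demand model."""
    return max(0.0, a - b * price)

def best_first_price(horizon=3, prices=(10, 20, 25, 30, 40)):
    """Exhaustively search candidate price sequences over the horizon and
    return the first price of the revenue-maximising sequence; in a real MPC
    loop this step repeats every period with an updated model state."""
    best_seq = max(itertools.product(prices, repeat=horizon),
                   key=lambda seq: sum(p * demand(p) for p in seq))
    return best_seq[0]

chosen = best_first_price()
```

    Because revenue p*(100 - 2p) peaks at p = 25 among these candidates, the planner picks 25; re-planning each period is what lets MPC react when demand parameters drift.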

  9. Effectiveness of Facebook Based Learning to Enhance Creativity among Islamic Studies Students by Employing Isman Instructional Design Model

    Science.gov (United States)

    Alias, Norlidah; Siraj, Saedah; Daud, Mohd Khairul Azman Md; Hussin, Zaharah

    2013-01-01

    The study examines the effectiveness of Facebook based learning to enhance creativity among Islamic Studies students in the secondary educational setting in Malaysia. It describes the design process by employing the Isman Instructional Design Model. A quantitative study was carried out using experimental method and background survey. The…

  10. Predictive models to determine imagery strategies employed by children to judge hand laterality.

    NARCIS (Netherlands)

    Spruijt, S.; Jongsma, M.L.; Kamp, J. van der; Steenbergen, B.

    2015-01-01

    A commonly used paradigm to study motor imagery is the hand laterality judgment task. The present study aimed to determine which strategies young children employ to successfully perform this task. Children of 5 to 8 years old (N = 92) judged laterality of back and palm view hand pictures in

  11. A Peer-to-Peer Support Model for Developing Graduate Students' Career and Employability Skills

    Science.gov (United States)

    Jones, Narelle; Torezani, Silvia; Luca, Joseph

    2012-01-01

    Career opportunities for Australian research graduates have expanded in recent years into areas outside academia. However, the employment market is highly competitive, and Australian universities have recognised the need to produce graduates with transferable skills across all sectors, not just academia. The need to provide an infrastructure to…

  12. Predictive models to determine imagery strategies employed by children to judge hand laterality

    NARCIS (Netherlands)

    Spruijt, S; Jongsma, M.L.A.; van der Kamp, J.; Steenbergen, B.

    2015-01-01

    A commonly used paradigm to study motor imagery is the hand laterality judgment task. The present study aimed to determine which strategies young children employ to successfully perform this task. Children of 5 to 8 years old (N = 92) judged laterality of back and palm view hand pictures in

  13. Enhancing photogrammetric 3d city models with procedural modeling techniques for urban planning support

    International Nuclear Information System (INIS)

    Schubiger-Banz, S; Arisona, S M; Zhong, C

    2014-01-01

    This paper presents a workflow to increase the level of detail of reality-based 3D urban models. It combines the established workflows from photogrammetry and procedural modeling in order to exploit distinct advantages of both approaches. The combination has advantages over purely automatic acquisition in terms of visual quality, accuracy and model semantics. Compared to manual modeling, procedural techniques can be much more time effective while maintaining the qualitative properties of the modeled environment. In addition, our method includes processes for procedurally adding additional features such as road and rail networks. The resulting models meet the increasing needs in urban environments for planning, inventory, and analysis

  14. Improvement of the supply chain for the sugar cane exportation process employing discrete events simulation techniques - doi: 10.4025/actascitechnol.v35i4.17468

    Directory of Open Access Journals (Sweden)

    Maria Alejandra Guerrero Hernandez

    2013-10-01

    Full Text Available This paper proposes a reduction of all logistic costs involved in the sugar cane exportation process for an important medium-size cargo harbor in South America. The strategy consisted in studying and improving the efficiency of the logistics transportation flow of merchandise between the sugar cane mills and the cargo ports by employing discrete event simulation as a methodology and the ProModel software as a simulation platform. The simulated scenarios show an 89% reduction in additional inventory management costs; a 50% reduction in the resources used for loading the containers; and an 11.4% increase in the efficiency of operations. In addition, the new planning strategy was also evaluated under both positive and negative variations of the demand. It should be underscored that the improved scenarios (out of those simulated) do not involve investments in facilities or in machinery purchasing.
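    The discrete-event approach the study applies (via ProModel) can be illustrated with a minimal event loop. The scenario below, trucks arriving at fixed intervals at a single loading bay, uses invented numbers and deterministic times; it only demonstrates the simulation mechanism, not the harbor model itself.

```python
import heapq

# Minimal discrete-event simulation: one loading bay, deterministic times.

def simulate(arrival_interval, service_time, n_trucks):
    """Return total waiting time for n_trucks served by a single bay."""
    events = [(i * arrival_interval, i) for i in range(n_trucks)]  # (arrival, id)
    heapq.heapify(events)
    free_at, total_wait = 0.0, 0.0
    while events:
        arrival, _ = heapq.heappop(events)
        start = max(arrival, free_at)  # queue if the bay is still busy
        total_wait += start - arrival
        free_at = start + service_time
    return total_wait

# Service slower than arrivals -> queue builds; faster -> no waiting at all.
congested = simulate(arrival_interval=5, service_time=8, n_trucks=4)
smooth_flow = simulate(arrival_interval=5, service_time=4, n_trucks=4)
```

    Real studies replace the deterministic times with sampled distributions and add resources (cranes, yard space), but the event-queue core stays the same.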

  15. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses

    NARCIS (Netherlands)

    Kuiper, Rebecca M.; Nederhoff, Tim; Klugkist, Irene

    2015-01-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is

  16. A review on reflective remote sensing and data assimilation techniques for enhanced agroecosystem modeling

    Science.gov (United States)

    Dorigo, W. A.; Zurita-Milla, R.; de Wit, A. J. W.; Brazile, J.; Singh, R.; Schaepman, M. E.

    2007-05-01

    During the last 50 years, the management of agroecosystems has been undergoing major changes to meet the growing demand for food, timber, fibre and fuel. As a result of this intensified use, the ecological status of many agroecosystems has severely deteriorated. Modeling the behavior of agroecosystems is, therefore, of great help since it allows the definition of management strategies that maximize (crop) production while minimizing the environmental impacts. Remote sensing can support such modeling by offering information on the spatial and temporal variation of important canopy state variables which would be very difficult to obtain otherwise. In this paper, we present an overview of different methods that can be used to derive biophysical and biochemical canopy state variables from optical remote sensing data in the VNIR-SWIR regions. The overview is based on an extensive literature review where both statistical-empirical and physically based methods are discussed. Subsequently, the prevailing techniques of assimilating remote sensing data into agroecosystem models are outlined. The increasing complexity of data assimilation methods and of models describing agroecosystem functioning has significantly increased computational demands. For this reason, we include a short section on the potential of parallel processing to deal with the complex and computationally intensive algorithms described in the preceding sections. The studied literature reveals that many valuable techniques have been developed both for retrieving canopy state variables from reflective remote sensing data and for assimilating the retrieved variables into agroecosystem models. However, for agroecosystem modeling and remote sensing data assimilation to be commonly employed on a global operational basis, emphasis will have to be put on bridging the mismatch between data availability and accuracy on one hand, and model and user requirements on the other. This could be achieved by

  17. The phase field technique for modeling multiphase materials

    Science.gov (United States)

    Singer-Loginova, I.; Singer, H. M.

    2008-10-01

    This paper reviews methods and applications of the phase field technique, one of the fastest growing areas in computational materials science. The phase field method is used as a theory and computational tool for predictions of the evolution of arbitrarily shaped morphologies and complex microstructures in materials. In this method, the interface between two phases (e.g. solid and liquid) is treated as a region of finite width having a gradual variation of different physical quantities, i.e. it is a diffuse interface model. An auxiliary variable, the phase field or order parameter φ(x), is introduced, which distinguishes one phase from the other. Interfaces are identified by the variation of the phase field. We begin with presenting the physical background of the phase field method and give a detailed thermodynamical derivation of the phase field equations. We demonstrate how equilibrium and non-equilibrium physical phenomena at the phase interface are incorporated into the phase field methods. Then we address in detail dendritic and directional solidification of pure and multicomponent alloys, effects of natural convection and forced flow, grain growth, nucleation, solid-solid phase transformation and highlight other applications of the phase field methods. In particular, we review the novel phase field crystal model, which combines atomistic length scales with diffusive time scales. We also discuss aspects of quantitative phase field modeling such as thin interface asymptotic analysis and coupling to thermodynamic databases. The phase field methods result in a set of partial differential equations, whose solutions require time-consuming large-scale computations and often limit the applicability of the method. Subsequently, we review numerical approaches to solve the phase field equations and present a finite difference discretization of the anisotropic Laplacian operator.
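    The diffuse-interface idea described above can be demonstrated with a minimal 1D Allen-Cahn-type sketch: the order parameter varies smoothly between -1 (one phase) and +1 (the other) across a finite-width interface. Explicit Euler time stepping on a periodic grid; all parameters are illustrative, not taken from any specific model in the review.

```python
# 1D Allen-Cahn sketch: d(phi)/dt = eps2 * laplacian(phi) - (phi^3 - phi),
# i.e. diffusion plus the derivative of the double-well potential (phi^2-1)^2/4.

def evolve(phi, dx=1.0, dt=0.1, eps2=1.0, steps=200):
    """Explicit Euler integration on a periodic grid."""
    n = len(phi)
    for _ in range(steps):
        lap = [(phi[(i - 1) % n] - 2.0 * phi[i] + phi[(i + 1) % n]) / dx ** 2
               for i in range(n)]
        phi = [phi[i] + dt * (eps2 * lap[i] - (phi[i] ** 3 - phi[i]))
               for i in range(n)]
    return phi

# Sharp step initial condition; evolution relaxes it to a smooth tanh-like
# profile while the bulk values stay pinned near the two wells at -1 and +1.
n = 40
phi = evolve([-1.0] * (n // 2) + [1.0] * (n // 2))
```

    The time step respects the explicit-scheme stability bound dt < dx²/(2·eps2); real phase field codes use the implicit or adaptive schemes the review discusses.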

  18. Smoker-free workplace policies: developing a model of public health consequences of workplace policies barring employment to smokers.

    Science.gov (United States)

    Houle, B; Siegel, M

    2009-02-01

    A marked shift in tobacco-related workplace health promotion intervention involves the adoption of policies barring employment to smokers. We discuss the potential public health consequences of these policies on those affected: smokers, their families, the surrounding community and society at large. We find a lack of published evidence evaluating the effectiveness and consequences of these policies. By developing a model of policy effects, we outline possible unintended consequences. With such large gaps in the evidence base and the potential for deleterious consequences, we argue for increased discussion about the use of smoker-free employment policies as a public health intervention and for increased engagement of employers by the public health community in worksite health promotion.

  19. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    In experimental musculoskeletal oncology, there remains a need for animal models that can be used to assess the efficacy of new and innovative treatment methodologies for bone tumors. The rat plays a very important role in the bone field, especially in the evaluation of metabolic bone diseases. The objective of this study was to develop a rat osteosarcoma model for evaluation of new surgical and molecular methods of treatment for extremity sarcoma. One hundred male SD rats weighing 125.45 ± 8.19 g were divided into 5 groups and anesthetized intraperitoneally with 10% chloral hydrate. Orthotopic implantation models of rat osteosarcoma were performed by injecting directly into the SD rat femur with a needle for inoculation with SD tumor cells. In the first step of the experiment, 2×10^5 to 1×10^6 UMR106 cells in 50 µl were injected intraosseously into the median or distal part of the femoral shaft and the tumor take rate was determined. The second stage consisted of determining tumor volume, correlating findings from ultrasound with findings from necropsy and determining time of survival. In the third stage, the orthotopically implanted tumors and lung nodules were resected entirely, sectioned, and then counterstained with hematoxylin and eosin for histopathologic evaluation. The tumor take rate was 100% for implants with 8×10^5 tumor cells or more, which was much less than the amount required for subcutaneous implantation, with a high lung metastasis rate of 93.0%. Ultrasound and necropsy findings matched closely (r=0.942; p<0.01), which demonstrated that Doppler ultrasonography is a convenient and reliable technique for measuring cancer at any stage. The tumor growth curve showed that orthotopically implanted tumors expanded vigorously over time, especially in the first 3 weeks. The median time of survival was 38 days and surgical mortality was 0%. The UMR106 cell line has strong carcinogenic capability and high lung metastasis frequency. The present rat

  20. Modelling and analysis of ozone concentration by artificial intelligent techniques for estimating air quality

    Science.gov (United States)

    Taylan, Osman

    2017-02-01

    High ozone concentration is an important cause of air pollution, mainly due to its role in greenhouse gas emission. Ozone is produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower atmosphere. Monitoring and controlling the quality of air in the urban environment is therefore very important for public health. However, air quality prediction is a highly complex and non-linear process; usually several attributes have to be considered. Artificial intelligence (AI) techniques can be employed to monitor and evaluate the ozone concentration level. The aim of this study is to develop an adaptive neuro-fuzzy inference system (ANFIS) approach to determine the influence of peripheral factors on air quality and pollution, an arising problem due to ozone levels in Jeddah city. The concentration of ozone was considered as a factor to predict the Air Quality (AQ) under the atmospheric conditions. Using the Air Quality Standards of Saudi Arabia, the ozone concentration level was modelled by employing factors such as nitrogen oxide (NOx), atmospheric pressure, temperature, and relative humidity. Hence, an ANFIS model was developed to observe the ozone concentration level, and the model performance was assessed with testing data obtained from the monitoring stations established by the General Authority of Meteorology and Environment Protection of the Kingdom of Saudi Arabia. The outcomes of the ANFIS model were re-assessed by fuzzy quality charts using quality specifications and control limits based on US-EPA air quality standards. The results of the present study show that the ANFIS model is a comprehensive approach for the estimation and assessment of ozone levels and is a reliable approach for producing more genuine outcomes.
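    The kind of model ANFIS trains can be illustrated with a single first-order Sugeno inference step. The membership functions, rule consequents, and the choice of temperature as the only input below are invented for illustration; they are not the paper's fitted Jeddah model.

```python
import math

# Toy first-order Sugeno fuzzy inference: two rules map temperature to an
# ozone-level estimate. ANFIS would learn these parameters from data.

def gauss(x, c, s):
    """Gaussian membership function centred at c with width s."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def sugeno_ozone(temp):
    # Rule firing strengths: "temperature is low" / "temperature is high".
    w_low, w_high = gauss(temp, 15.0, 8.0), gauss(temp, 35.0, 8.0)
    # Each rule's consequent is a linear function of the input (hypothetical).
    z_low, z_high = 0.5 * temp + 10.0, 1.5 * temp + 5.0
    # Weighted average of rule outputs (defuzzification).
    return (w_low * z_low + w_high * z_high) / (w_low + w_high)

cool, hot = sugeno_ozone(15.0), sugeno_ozone(35.0)
```

    ANFIS tunes the membership centres/widths and the consequent coefficients by gradient descent and least squares; the inference structure shown here is what the trained network evaluates.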

  1. High frequency magnetic field technique: mathematical modelling and development of a full scale water fraction meter

    Energy Technology Data Exchange (ETDEWEB)

    Cimpan, Emil

    2004-09-15

    water fraction. The model intended to employ existing formulas for the medium parameters worked out by Maxwell, Bruggeman, and Ramu and Rao. However, to calculate the loss due to induced eddy currents within the medium in the particular case of an oil continuous phase, other mathematical models expressing the (equivalent) medium conductivity and permittivity were required and were developed in this work. Although the resonance frequency of the coil decreased with increasing medium conductivity, this variation was not as significant as the variation of the coil impedance. This raised the question of whether coils having the same self-resonance frequency in different media could be constructed. This was worth investigating because it could simplify the mathematical modelling. This was indeed the case, and coils featuring approximately the same resonance frequency in different media were made. In conclusion, the measuring device based on the HFMFT, which was constructed, investigated and described in this work, can be developed into a practical instrument for monitoring the water fraction in multiphase flows. The overall measurement accuracy when using this technique depends on the analytical models expressing the medium parameters and circumscribing the HFMFT itself. When the mathematical modelling of the HFMFT was finalised, it became clear that many other applications of the technique were also possible. Some of these applications which might be of interest, such as a conductivity meter and a three-component ratio meter, are briefly discussed.
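
The classical mixing formulas mentioned have closed forms. A minimal sketch of the Maxwell-Garnett rule for water droplets dispersed in an oil continuous phase, with illustrative permittivities (not the thesis's own equivalent-medium expressions), is:

```python
def maxwell_garnett(eps_m, eps_i, phi):
    """Effective permittivity of inclusions (eps_i) dispersed at volume
    fraction phi in a continuous medium (eps_m): Maxwell-Garnett mixing rule."""
    num = eps_i + 2 * eps_m + 2 * phi * (eps_i - eps_m)
    den = eps_i + 2 * eps_m - phi * (eps_i - eps_m)
    return eps_m * num / den

# Water droplets (eps ~ 80) dispersed in an oil continuous phase (eps ~ 2.2):
eps_10 = maxwell_garnett(2.2, 80.0, 0.10)  # 10 % water fraction
eps_30 = maxwell_garnett(2.2, 80.0, 0.30)  # 30 % water fraction
```

The effective permittivity rises monotonically with the water fraction, which is what makes a coil-based permittivity measurement usable as a water fraction meter.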

  2. A comparative analysis of reactor lower head debris cooling models employed in the existing severe accident analysis codes

    International Nuclear Information System (INIS)

    Ahn, K.I.; Kim, D.H.; Kim, S.B.; Kim, H.D.

    1998-08-01

    MELCOR and MAAP4 are representative severe accident analysis codes which have been developed for the integral analysis of the phenomenological reactor lower head corium cooling behavior. The main objective of the present study is to identify the merits and disadvantages of each relevant model through a comparative analysis of the lower plenum corium cooling models employed in these two codes. The final results will be utilized for the development of the LILAC phenomenological models and for the continuous improvement of the existing MELCOR reactor lower head models, which are currently being performed at KAERI. For these purposes, nine reference models featuring the lower head corium behavior were first selected based on the existing experimental evidence and related models. The main features of the selected models were then critically analyzed, and finally the merits and disadvantages of each corresponding model were summarized from the viewpoint of realistic corium behavior and reasonable modeling. Based on this evidence, potential improvements for developing more advanced models are summarized and presented. The present study has focused on the qualitative comparison of each model, so a more detailed quantitative analysis is strongly required before final conclusions on their merits and disadvantages can be drawn. In addition, in order to compensate for the limitations of the current models, further studies are required that closely relate detailed mechanistic models of molten material movement and phase-change heat transfer in porous media to the existing simple models. (author). 36 refs

  3. Using ecosystem modelling techniques in exposure assessments of radionuclides - an overview

    International Nuclear Information System (INIS)

    Kumblad, L.

    2005-01-01

    The risk to humans from potential releases from nuclear facilities is evaluated in safety assessments. Essential components of these assessments are exposure models, which estimate the transport of radionuclides in the environment, their uptake in biota, and their transfer to humans. Recently, there has been growing concern for radiological protection of the whole environment, not only humans, and a first attempt has been to employ model approaches based on stylized environments and transfer functions to biota based exclusively on bioconcentration factors (BCFs). These are generally of a non-mechanistic nature and involve no knowledge of the actual processes involved, which is a severe limitation when assessing real ecosystems. In this paper, the possibility of using an ecological modelling approach as a complement or an alternative to BCF-based models is discussed. The paper gives an overview of ecological and ecosystem modelling and examples of studies where ecosystem models have been used in association with ecological risk assessment studies for pollutants other than radionuclides. It also discusses the potential to use this technique in exposure assessments of radionuclides, with a few examples from the safety assessment work performed by the Swedish nuclear fuel and waste management company (SKB). Finally, there is a comparison of the characteristics of ecosystem models and the traditional exposure models used to estimate the radionuclide exposure of biota. The evaluation of ecosystem models already applied in safety assessments has shown that the ecosystem approach can be used to assess exposure to biota, and that it can handle many of the identified modelling problems related to BCF models. The findings in this paper suggest that both national and international assessment frameworks for protection of the environment from ionising radiation would benefit from striving to adopt methodologies based on ecologically sound principles and

  4. Where do inmmigrants fare worse? Modeling workplace wage gap variation with longitudinal employer-employee data.

    Science.gov (United States)

    Tomaskovic-Devey, Donald; Hällsten, Martin; Avent-Holt, Dustin

    2015-01-01

    The authors propose a strategy for observing and explaining workplace variance in categorically linked inequalities. Using Swedish economy-wide linked employer-employee panel data, the authors examine variation in workplace wage inequalities between native Swedes and non-Western immigrants. Consistent with relational inequality theory, the authors' findings are that immigrant-native wage gaps vary dramatically across workplaces, even net of strong human capital controls. The authors also find that, net of observed and fixed-effect controls for individual traits, workplace immigrant-native wage gaps decline with increased workplace immigrant employment and managerial representation and increase when job segregation rises. These results are stronger in high-inequality workplaces and for white-collar employees: contexts in which one expects status-based claims on organizational resources, the central causal mechanism identified by relational inequality theory, to be stronger. The authors conclude that workplace variation in the non-Western immigrant-native wage gaps is contingent on organizational variationin the relative power of groups and the institutional context in which that power is exercised.

  5. Efficient sampling techniques for uncertainty quantification in history matching using nonlinear error models and ensemble level upscaling techniques

    KAUST Repository

    Efendiev, Y.

    2009-11-01

    The Markov chain Monte Carlo (MCMC) is a rigorous sampling method to quantify uncertainty in subsurface characterization. However, the MCMC usually requires many flow and transport simulations in evaluating the posterior distribution and can be computationally expensive for fine-scale geological models. We propose a methodology that combines coarse- and fine-scale information to improve the efficiency of MCMC methods. The proposed method employs off-line computations for modeling the relation between coarse- and fine-scale error responses. This relation is modeled using nonlinear functions with prescribed error precisions which are used in efficient sampling within the MCMC framework. We propose a two-stage MCMC where inexpensive coarse-scale simulations are performed to determine whether or not to run the fine-scale (resolved) simulations. The latter is determined on the basis of a statistical model developed off line. The proposed method is an extension of the approaches considered earlier where linear relations are used for modeling the response between coarse-scale and fine-scale models. The approach considered here does not rely on the proximity of approximate and resolved models and can employ much coarser and more inexpensive models to guide the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in the history matching. Copyright 2009 by the American Geophysical Union.

  6. A structural econometric model of family valuation and choice of employer-sponsored health insurance in the United States.

    Science.gov (United States)

    Vanness, David J

    2003-09-01

    This paper estimates a fully structural unitary household model of employment and health insurance decisions for dual wage-earner families with children in the United States, using data from the 1987 National Medical Expenditure Survey. Families choose hours of work and the breakdown of compensation between cash wages and health insurance benefits for each wage earner in order to maximize expected utility under uncertain need for medical care. Heterogeneous demand for the employer-sponsored health insurance is thus generated directly from variations in health status and earning potential. The paper concludes by discussing the benefits of using structural models for simulating welfare effects of insurance reform relative to the costly assumptions that must be imposed for identification. Copyright 2003 John Wiley & Sons, Ltd.

  7. Experimental modeling of aerosols produced by microorganisms in working area air as risk factor exerting hazardous impacts on health of workers employed at biotechnological production

    Directory of Open Access Journals (Sweden)

    N.V. Dudchik

    2017-09-01

    Full Text Available Scientific foundation and practices in the sphere of hygienic and ecological standardization concerning biological factors of the environment have a number of peculiarities and are methodically less developed than chemical factors standardization. Efficient industrial control over maximum permissible concentrations of standardized microorganisms-producers in working area air is based on validated instrumental techniques of quantitative assessment. Our goal was to create experimental models for microorganisms-producers' aerosols of a multi-component microbe specimen in working area air as a risk factor causing impacts on health of workers employed at biotechnological production; another task was to work out a procedure for measuring Pseudomonas aurantiaca B-162/255.17concentration and cells and spores of Bacillus sp. BB58-3 strain in working area air. We gave grounds for a technology aimed at quantitative determination of microorganisms-producers in working area air in a modeling experiment; it was based on conventional stages and tech-niques accepted in microbiological practices, namely air samples taking via aspiration technique allowing for a volume taken; cultivation under conditions which are optimal for examined microorganisms-producers in a nutrient medium with reduced composition; calculation of evolved colonies with specific morphological features; morphologic identification of microorganisms and colonies; calculation of microorganisms' quantity on dishes with recalculation per 1 m3 of air. Bas-ing on the detected regular concentration dependences of microbe contamination dynamics in air we worked out a proce-dure for quantitative determination of microorganisms-producers; we also performed metrological estimate of opera-tional properties for assessing microorganisms-producers of a multi-component microbe specimen as a risk factor caus-ing hazardous impacts on health of workers employed at biotechnological production. We validated our

  8. MULTIREGION: a simulation-forecasting model of BEA economic area population and employment. [Bureau of Economic Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, R.J.; Westley, G.W.; Herzog, H.W. Jr.; Kerley, C.R.; Bjornstad, D.J.; Vogt, D.P.; Bray, L.G.; Grady, S.T.; Nakosteen, R.A.

    1977-10-01

    This report documents the development of MULTIREGION, a computer model of regional and interregional socio-economic development. The MULTIREGION model interprets the economy of each BEA economic area as a labor market, measures all activity in terms of people as members of the population (labor supply) or as employees (labor demand), and simultaneously simulates or forecasts the demands and supplies of labor in all BEA economic areas at five-year intervals. In general the outputs of MULTIREGION are intended to resemble those of the Water Resource Council's OBERS projections and to be put to similar planning and analysis purposes. This report has been written at two levels to serve the needs of multiple audiences. The body of the report serves as a fairly nontechnical overview of the entire MULTIREGION project; a series of technical appendixes provide detailed descriptions of the background empirical studies of births, deaths, migration, labor force participation, natural resource employment, manufacturing employment location, and local service employment used to construct the model.

  9. Modelling of 3D fractured geological systems - technique and application

    Science.gov (United States)

    Cacace, M.; Scheck-Wenderoth, M.; Cherubini, Y.; Kaiser, B. O.; Bloecher, G.

    2011-12-01

    All rocks in the earth's crust are fractured to some extent. Faults and fractures are important in different scientific and industry fields comprising engineering, geotechnical and hydrogeological applications. Many petroleum, gas and geothermal and water supply reservoirs form in faulted and fractured geological systems. Additionally, faults and fractures may control the transport of chemical contaminants into and through the subsurface. Depending on their origin and orientation with respect to the recent and palaeo stress field as well as on the overall kinematics of chemical processes occurring within them, faults and fractures can act either as hydraulic conductors providing preferential pathways for fluid to flow or as barriers preventing flow across them. The main challenge in modelling processes occurring in fractured rocks is related to the way of describing the heterogeneities of such geological systems. Flow paths are controlled by the geometry of faults and their open void space. To correctly simulate these processes an adequate 3D mesh is a basic requirement. Unfortunately, the representation of realistic 3D geological environments is limited by the complexity of embedded fracture networks often resulting in oversimplified models of the natural system. A technical description of an improved method to integrate generic dipping structures (representing faults and fractures) into a 3D porous medium is out forward. The automated mesh generation algorithm is composed of various existing routines from computational geometry (e.g. 2D-3D projection, interpolation, intersection, convex hull calculation) and meshing (e.g. triangulation in 2D and tetrahedralization in 3D). All routines have been combined in an automated software framework and the robustness of the approach has been tested and verified. 
These techniques and methods can be applied for fractured porous media including fault systems and therefore found wide applications in different geo-energy related

  10. Determination of concentration distribution and velocity of a catalyst in a model of a fluidized bed reactor using nuclear techniques

    International Nuclear Information System (INIS)

    Santos, V.A. dos.

    1981-09-01

    A simplified model of a cracking unit was construct. The gaseous phase consisted of air, the solid phase (zeolite catalyst cracking) and both the phases circulate at the ambiente temperature in the steady state with 500 g of catalyst and air flow of 1600 1/h. Measurements for the circulation time of the solid phase (catalyst), concentration and radial distribution of catalyst have been carried out. The reduced experimental model of the cracking reactor (FCC) was used and radioctive tracer and attenuation of γ-radiation techniques were employed. (E.G.) [pt

  11. A COGNITIVE APPROACH TO CORPORATE GOVERNANCE: A VISUALIZATION TEST OF MENTAL MODELS WITH THE COGNITIVE MAPPING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Garoui NASSREDDINE

    2012-01-01

    Full Text Available The idea of this paper is to determine the mental models of actors in the fi rm with respect to the cognitive approach of corporate governance. The paper takes a corporate governance perspective, discusses mental models and uses the cognitive map to view the diagrams showing the ways of thinking and the conceptualization of the cognitive approach. In addition, it employs a cognitive mapping technique. Returning to the systematic exploration of grids for each actor, it concludes that there is a balance of concepts expressing their cognitive orientation.

  12. Simulation of Electric Faults in Doubly-Fed Induction Generators Employing Advanced Mathematical Modelling

    DEFF Research Database (Denmark)

    Martens, Sebastian; Mijatovic, Nenad; Holbøll, Joachim

    2015-01-01

    in many areas of electrical machine analysis. However, for fault investigations, the phase-coordinate representation has been found more suitable. This paper presents a mathematical model in phase coordinates of the DFIG with two parallel windings per rotor phase. The model has been implemented in Matlab...

  13. A Titration Technique for Demonstrating a Magma Replenishment Model.

    Science.gov (United States)

    Hodder, A. P. W.

    1983-01-01

    Conductiometric titrations can be used to simulate subduction-setting volcanism. Suggestions are made as to the use of this technique in teaching volcanic mechanisms and geochemical indications of tectonic settings. (JN)

  14. Modelling skin penetration using the Laplace transform technique.

    Science.gov (United States)

    Anissimov, Y G; Watkinson, A

    2013-01-01

    The Laplace transform is a convenient mathematical tool for solving ordinary and partial differential equations. The application of this technique to problems arising in drug penetration through the skin is reviewed in this paper. © 2013 S. Karger AG, Basel.

  15. [Employment and urban growth; an application of Czamanski's model to the Mexican case].

    Science.gov (United States)

    Verduzco Chavez, B

    1991-01-01

    The author applies the 1964 model developed by Stanislaw Czamanski, based on theories of urban growth and industrial localization, to the analysis of urban growth in Mexico. "The advantages of this model in its application as a support instrument in the process of urban planning when the information available is incomplete are...discussed...." Census data for 44 cities in Mexico are used. (SUMMARY IN ENG) excerpt

  16. On-line detection of apnea/hypopnea events using SpO2 signal: a rule-based approach employing binary classifier models.

    Science.gov (United States)

    Koley, Bijoy Laxmi; Dey, Debangshu

    2014-01-01

    This paper presents an online method for automatic detection of apnea/hypopnea events, with the help of oxygen saturation (SpO2) signal, measured at fingertip by Bluetooth nocturnal pulse oximeter. Event detection is performed by identifying abnormal data segments from the recorded SpO2 signal, employing a binary classifier model based on a support vector machine (SVM). Thereafter the abnormal segment is further analyzed to detect different states within the segment, i.e., steady, desaturation, and resaturation, with the help of another SVM-based binary ensemble classifier model. Finally, a heuristically obtained rule-based system is used to identify the apnea/hypopnea events from the time-sequenced decisions of these classifier models. In the developmental phase, a set of 34 time domain-based features was extracted from the segmented SpO2 signal using an overlapped windowing technique. Later, an optimal set of features was selected on the basis of recursive feature elimination technique. A total of 34 subjects were included in the study. The results show average event detection accuracies of 96.7% and 93.8% for the offline and the online tests, respectively. The proposed system provides direct estimation of the apnea/hypopnea index with the help of a relatively inexpensive and widely available pulse oximeter. Moreover, the system can be monitored and accessed by physicians through LAN/WAN/Internet and can be extended to deploy in Bluetooth-enabled mobile phones.

  17. A comparative analysis of molten corium-concrete interaction models employed in MELCOR and MAAP codes

    International Nuclear Information System (INIS)

    Park, Soo Yong; Song, Y. M.; Kim, D. H.; Kim, H. D.

    1999-03-01

    The purpose of this report are to identify the modelling differences by review phenomenological models related to MCCI, and to investigate modelling uncertainty by performing sensitivity analysis, and finally to identify models to be improved in MELCOR. As the results, the most important uncertain parameter in the MCCI area is the debris stratification/mixing, and heat transfer between molten corium and overlying water pool. MAAP has a very simple and flexible corium-water heat transfer model, which seems to be needed in MELCOR for evaluation of real plants as long as large phenomenological uncertainty still exists. During the corium-concrete interaction, there is a temperature distribution inside basemat concrete. This would affect the amount or timing of gas generation. While MAAP calculates the temperature distribution through nodalization methodology, MELCOR calculates concrete response based on one-dimensional steady-state ablation, with no consideration given to conduction into the concrete or to decomposition in advanced of the ablation front. The code may be inaccurate for analysis of combustible gas generation during MCCI. Thus there is a necessity to improve the concrete decomposition model in MELCOR. (Author). 12 refs., 5 tabs., 42 figs

  18. Optimization of Dimensional accuracy in plasma arc cutting process employing parametric modelling approach

    Science.gov (United States)

    Naik, Deepak kumar; Maity, K. P.

    2018-03-01

    Plasma arc cutting (PAC) is a high temperature thermal cutting process employed for the cutting of extensively high strength material which are difficult to cut through any other manufacturing process. This process involves high energized plasma arc to cut any conducting material with better dimensional accuracy in lesser time. This research work presents the effect of process parameter on to the dimensional accuracy of PAC process. The input process parameters were selected as arc voltage, standoff distance and cutting speed. A rectangular plate of 304L stainless steel of 10 mm thickness was taken for the experiment as a workpiece. Stainless steel is very extensively used material in manufacturing industries. Linear dimension were measured following Taguchi’s L16 orthogonal array design approach. Three levels were selected to conduct the experiment for each of the process parameter. In all experiments, clockwise cut direction was followed. The result obtained thorough measurement is further analyzed. Analysis of variance (ANOVA) and Analysis of means (ANOM) were performed to evaluate the effect of each process parameter. ANOVA analysis reveals the effect of input process parameter upon leaner dimension in X axis. The results of the work shows that the optimal setting of process parameter values for the leaner dimension on the X axis. The result of the investigations clearly show that the specific range of input process parameter achieved the improved machinability.

  19. Delamination of plasters applied to historical masonry walls: analysis by acoustic emission technique and numerical model

    Science.gov (United States)

    Grazzini, A.; Lacidogna, G.; Valente, S.; Accornero, F.

    2018-06-01

    Masonry walls of historical buildings are subject to rising damp effects due to capillary or rain infiltrations, which in the time produce decay and delamination of historical plasters. In the restoration of masonry buildings, the plaster detachment frequently occurs because of mechanical incompatibility in repair mortar. An innovative laboratory procedure is described for test mechanical adhesion of new repair mortars. Compression static tests were carried out on composite specimens stone block-repair mortar, which specific geometry can test the de-bonding process of mortar in adherence with a stone masonry structure. The acoustic emission (AE) technique was employed for estimating the amount of energy released from fracture propagation in adherence surface between mortar and stone. A numerical simulation was elaborated based on the cohesive crack model. The evolution of detachment process of mortar in a coupled stone brick-mortar system was analysed by triangulation of AE signals, which can improve the numerical model and predict the type of failure in the adhesion surface of repair plaster. Through the cohesive crack model, it was possible to interpret theoretically the de-bonding phenomena occurring at the interface between stone block and mortar. Therefore, the mechanical behaviour of the interface is characterized.

  20. Learning-based computing techniques in geoid modeling for precise height transformation

    Science.gov (United States)

    Erol, B.; Erol, S.

    2013-03-01

    Precise determination of local geoid is of particular importance for establishing height control in geodetic GNSS applications, since the classical leveling technique is too laborious. A geoid model can be accurately obtained employing properly distributed benchmarks having GNSS and leveling observations using an appropriate computing algorithm. Besides the classical multivariable polynomial regression equations (MPRE), this study attempts an evaluation of learning based computing algorithms: artificial neural networks (ANNs), adaptive network-based fuzzy inference system (ANFIS) and especially the wavelet neural networks (WNNs) approach in geoid surface approximation. These algorithms were developed parallel to advances in computer technologies and recently have been used for solving complex nonlinear problems of many applications. However, they are rather new in dealing with precise modeling problem of the Earth gravity field. In the scope of the study, these methods were applied to Istanbul GPS Triangulation Network data. The performances of the methods were assessed considering the validation results of the geoid models at the observation points. In conclusion the ANFIS and WNN revealed higher prediction accuracies compared to ANN and MPRE methods. Beside the prediction capabilities, these methods were also compared and discussed from the practical point of view in conclusions.

  1. Design and fabrication of a hybrid maglev model employing PML and SML

    Science.gov (United States)

    Sun, R. X.; Zheng, J.; Zhan, L. J.; Huang, S. Y.; Li, H. T.; Deng, Z. G.

    2017-10-01

    A hybrid maglev model combining permanent magnet levitation (PML) and superconducting magnetic levitation (SML) was designed and fabricated to explore a heavy-load levitation system advancing in passive stability and simple structure. In this system, the PML was designed to levitate the load, and the SML was introduced to guarantee the stability. In order to realize different working gaps of the two maglev components, linear bearings were applied to connect the PML layer (for load) and the SML layer (for stability) of the hybrid maglev model. Experimental results indicate that the hybrid maglev model possesses excellent advantages of heavy-load ability and passive stability at the same time. This work presents a possible way to realize a heavy-load passive maglev concept.

  2. Perspectives of employability skills

    OpenAIRE

    ANNE LOUISE NEWTON

    2017-01-01

    The study investigated the different perspectives held by young people, employers and policy makers around Employability Skills and it examined how young people learnt these skills. This study draws young peoples’ perspectives into the research around Employability Skills and highlights the way in which social and cultural capital mediate their development. The research points to a model to re-vision employability skills which recognises the many ways in which they are learnt, over time a...

  3. Review of air quality modeling techniques. Volume 8

    International Nuclear Information System (INIS)

    Rosen, L.C.

    1977-01-01

    Air transport and diffusion models which are applicable to the assessment of the environmental effects of nuclear, geothermal, and fossil-fuel electric generation are reviewed. The general classification of models and model inputs are discussed. A detailed examination of the statistical, Gaussian plume, Gaussian puff, one-box and species-conservation-of-mass models is given. Representative models are discussed with attention given to the assumptions, input data requirement, advantages, disadvantages and applicability of each

  4. University-Community Partnership Models: Employing Organizational Management Theories of Paradox and Strategic Contradiction

    Science.gov (United States)

    Bowers, Amanda M.

    2017-01-01

    University-Community (U-C) partnerships have the potential to respond to society's most pressing needs through engaged scholarship. Despite this promise, partnerships face paradoxical tensions and inherent contradictions that are often not fully addressed in U-C partnership models or frameworks, or in practice. This article seeks to explore the…

  5. Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems

    Science.gov (United States)

    Yang, Le; Wang, Shuo; Feng, Jianghua

    2017-11-01

    Electromagnetic interference (EMI) causes electromechanical damage to the motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike fundamental frequency components in motor drive systems, high-frequency EMI noise, coupled with the parasitic parameters of the trough system, are difficult to analyze and reduce. In this article, EMI modeling techniques for different function units in a VFD system, including induction motors, motor bearings, and rectifierinverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques that are used to eliminate bearing current are evaluated. Modeling techniques for conventional rectifierinverter systems are also summarized. EMI noise suppression techniques, including passive filter, Wheatstone bridge balance, active filter, and optimized modulation, are reviewed and compared based on the VFD system models.

  6. Employer Branding

    OpenAIRE

    Stroblová, Zuzana

    2017-01-01

    The aim of the Master Thesis is to describe how to build Employer Brand a company. It is based on the description of Employer Branding project of a particular company and the evaluation its process. The thesis is a case study and consists of theoretical and practical part. The theoretical part focuses on trends and changes in leadership approach, definition of Employer Branding and HR Marketing. The practical part deals with the brand building process itself, describes the outputs of the proj...

  7. Employer branding

    OpenAIRE

    Mičková, Kateřina

    2008-01-01

    The demand for qualified employees is higher than the supply, both in the Czech Republic and internationally. Demand for specific skills, in addition to a greater demand for the workforce generally, is making employee recruitment and retention much more difficult and expensive. Employer Branding claims to be an answer to this new challenge. This international concept focuses on developing an "employer brand" - the mental image of a company as an employer. To achieve this, it is necessary to demonstrate...

  8. The preparation of aneurysm model in rabbits by vessel ligation and elastase-induced technique

    International Nuclear Information System (INIS)

    Lu Chuan; Xie Qianyu; Liu Linxiang

    2010-01-01

    Objective: To establish an aneurysm model in rabbits, closely resembling the human intracranial aneurysm in morphology, by means of vessel ligation together with an elastase-induced technique. Methods: Sixteen New Zealand white rabbits were used in this study. Distal carotid ligation and intraluminal elastase incubation were employed in ten rabbits (study group) to create an aneurysm on the right common carotid artery, while surgical suture of a segment of the left common carotid artery was carried out in six rabbits (control group) to establish the aneurysm model. DSA examination of the created aneurysms via femoral artery catheterization was performed at one week and at one month after surgery. The patency, morphology and pathology of the aneurysms were observed, and the results were statistically analyzed. Results: The aneurysms in both groups remained patent immediately after they were created. Angiography one week after surgery showed that all the aneurysms in the study group were patent, while in the control group only two aneurysms showed opacification with contrast medium and the remaining four were occluded. DSA at one month demonstrated that all the aneurysms in the study group remained patent, while the two previously patent aneurysms in the control group had become occluded. The mean width and length of the aneurysmal cavity in the study group immediately after the procedure were (3.70 ± 0.16) mm and (6.53 ± 0.65) mm respectively, which enlarged to (5.06 ± 0.31) mm and (9.0 ± 0.52) mm respectively one month after surgery. The difference in size was statistically significant (P < 0.05). Pathologically, almost complete absence of the internal elastic lamina and medial wall elastin of the aneurysms was observed. Conclusion: The aneurysm model prepared with vessel ligation together with the elastase-induced technique carries a high patency rate and possesses the feature of spontaneous growth; moreover, its morphology is quite similar to the

  9. Examining Interior Grid Nudging Techniques Using Two-Way Nesting in the WRF Model for Regional Climate Modeling

    Science.gov (United States)

    This study evaluates interior nudging techniques using the Weather Research and Forecasting (WRF) model for regional climate modeling over the conterminous United States (CONUS) using a two-way nested configuration. NCEP–Department of Energy Atmospheric Model Intercomparison Pro...

  10. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance...... the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger...... the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region....

  11. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  12. Application of integrated modeling technique for data services ...

    African Journals Online (AJOL)

    This paper, therefore, describes the application of the integrated simulation technique for deriving the optimum resources required for data services in an asynchronous transfer mode (ATM) based private wide area network (WAN) to guarantee specific QoS requirement. The simulation tool drastically cuts the simulation ...

  13. Prescribed wind shear modelling with the actuator line technique

    DEFF Research Database (Denmark)

    Mikkelsen, Robert Flemming; Sørensen, Jens Nørkær; Troldborg, Niels

    2007-01-01

    A method for prescribing arbitrary steady atmospheric wind shear profiles combined with CFD is presented. The method is furthermore combined with the actuator line technique governing the aerodynamic loads on a wind turbine. Computations are carried out on a wind turbine exposed to a representative...

  14. On a Numerical and Graphical Technique for Evaluating some Models Involving Rational Expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  15. On a numerical and graphical technique for evaluating some models involving rational expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  16. Crustal Models Assessment in Western Part of Romania Employing Active Seismic and Seismologic Methods

    Science.gov (United States)

    Bala, Andrei; Toma-Danila, Dragos; Tataru, Dragos; Grecu, Bogdan

    2017-12-01

    In the years 1999 - 2000, two regional seismic refraction lines were recorded in close cooperation with German partners from the University of Karlsruhe. One of these lines is Vrancea 2001, 420 km in length, almost half of it recorded in the Transylvanian Basin. The structure of the crust along the seismic line revealed a very complicated crustal structure, beginning with the Eastern Carpathians and continuing in the Transylvanian Basin until Medias. As a result of the development of the National Seismic Network in the last ten years, more than 100 permanent broadband stations are now continuously operating in Romania. Complementary to this national dataset, maintained and developed at the National Institute for Earth Physics, new data emerged from the temporary seismologic networks established during joint projects with European partners over the last decades. The data gathered so far are valuable both for seismology purposes and for crustal structure studies, especially for the western part of the country, where this kind of data was sparse until now. Between 2009 and 2011, a new reference model for the Earth's crust and mantle of the European Plate was defined within the NERIES project from existing data and models. The database gathered from different kinds of measurements in the Transylvanian Basin and the eastern Pannonian Basin was included in this NERIES model, and an improved and upgraded model of the Earth's crust emerged for the western part of Romania. Although the dataset has its origins in several periods over the last 50 years, the results are homogeneous and they improve and strengthen our image of the depth of the principal boundaries in the crust. In the last chapter two maps regarding these boundaries are constructed, one for the mid-crustal boundary and one for the Moho. They were built considering all the punctual information available from different sources in active seismic and seismology, which is introduced in the general maps from the NERIES project for

  17. Employer Toolkit.

    Science.gov (United States)

    Thuli, Kelli J.; Hong, Esther

    This document consists of two guides intended for either employers or service providers involved in school to work partnerships for students with disabilities. "Tools for Service Providers" is intended to be used for training local-level providers who are developing school to work linkages with employers. Following an introduction, this…

  18. Enthalpy-based equation of state for highly porous materials employing modified soft sphere fluid model

    Science.gov (United States)

    Nayak, Bishnupriya; Menon, S. V. G.

    2018-01-01

    An enthalpy-based equation of state, built on a modified soft sphere model for the fluid phase that includes vaporization and ionization effects, is formulated for highly porous materials. Earlier developments and applications of the enthalpy-based approach had not accounted for the fact that shocked states of materials with high porosity (e.g., porosity more than two for Cu) lie in the expanded fluid region. We supplement the well known soft sphere model with a generalized Lennard-Jones formula for the zero temperature isotherm, with parameters determined from the cohesive energy, specific volume and bulk modulus of the solid at normal conditions. Specific heats at constant pressure, ionic and electronic enthalpy parameters and thermal excitation effects are calculated using the modified approach and used in the enthalpy-based equation of state. We also incorporate the energy loss from the shock due to expansion of the shocked material when calculating the porous Hugoniot. Results obtained for Cu, even up to initial porosities of ten, show good agreement with experimental data.
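    The zero-temperature isotherm described above can be illustrated with a generic two-term generalized Lennard-Jones energy, E(V) = aV^(-m) - bV^(-n); the exponents and the unit-free numbers below are illustrative assumptions, not the paper's actual parameterization. Pinning a and b by the cohesive energy, E(V0) = -Ec, and the equilibrium condition, P(V0) = 0, fixes the cold curve, and for this particular form the bulk modulus follows in closed form as B0 = Ec·m·n/V0:

    ```python
    import numpy as np

    def glj_cold_curve(Ec, V0, m=4.0, n=2.0):
        """Two-term generalized Lennard-Jones cold energy E(V) = a V^-m - b V^-n,
        constrained so that E(V0) = -Ec and P(V0) = -dE/dV|V0 = 0 (illustrative)."""
        a = Ec * V0**m * n / (m - n)
        b = (m / n) * a * V0**(n - m)
        E = lambda V: a * V**-m - b * V**-n
        P = lambda V: m * a * V**-(m + 1) - n * b * V**-(n + 1)  # P = -dE/dV
        return E, P

    Ec, V0 = 3.5, 1.0            # cohesive energy and specific volume, arbitrary units
    E, P = glj_cold_curve(Ec, V0)
    B0 = Ec * 4.0 * 2.0 / V0     # closed-form bulk modulus V0*E''(V0) for this form
    ```

    A quick numerical second derivative of E at V0 confirms the closed-form B0, mirroring how the paper's parameters are tied to solid properties at normal conditions.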

  19. The application of neural networks with artificial intelligence technique in the modeling of industrial processes

    International Nuclear Information System (INIS)

    Saini, K. K.; Saini, Sanju

    2008-01-01

    Neural networks are a relatively new artificial intelligence technique that emulates the behavior of biological neural systems in digital software or hardware. These networks can automatically 'learn' complex relationships among data. This feature makes the technique very useful in modeling processes for which mathematical modeling is difficult or impossible. The work described here outlines some examples of the application of neural networks with artificial intelligence techniques in the modeling of industrial processes.

  20. Application of the Electronic Nose Technique to Differentiation between Model Mixtures with COPD Markers

    Directory of Open Access Journals (Sweden)

    Jacek Namieśnik

    2013-04-01

    Full Text Available The paper presents the potential of an electronic nose technique in the field of fast diagnostics of patients suspected of Chronic Obstructive Pulmonary Disease (COPD). The investigations were performed using a simple electronic nose prototype equipped with a set of six semiconductor sensors manufactured by FIGARO Co. They were aimed at verification of the possibility of differentiation between model reference mixtures with potential COPD markers (N,N-dimethylformamide and N,N-dimethylacetamide). These mixtures contained volatile organic compounds (VOCs) such as acetone, isoprene, carbon disulphide, propan-2-ol, formamide, benzene, toluene, acetonitrile, acetic acid, dimethyl ether, dimethyl sulphide, acrolein, furan, propanol and pyridine, recognized as the components of exhaled air. The model reference mixtures were prepared at three concentration levels—10 ppb, 25 ppb, 50 ppb v/v—of each component, except for the COPD markers. Concentration of the COPD markers in the mixtures was from 0 ppb to 100 ppb v/v. Interpretation of the obtained data employed principal component analysis (PCA). The investigations revealed the usefulness of the electronic device only in the case when the concentration of the COPD markers was twice as high as the concentration of the remaining components of the mixture and for a limited number of basic mixture components.
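    The PCA interpretation step can be sketched with a numpy-only decomposition. The six-element sensor signature and the two concentration levels below are invented for illustration; the point is that mean-centered array responses projected onto the first principal component separate high-marker from low-marker mixtures:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical response gains of a six-sensor semiconductor array
    signature = np.array([1.0, 0.5, -0.3, 0.8, 0.2, -0.6])

    def responses(conc, n):
        # Array response = marker concentration * sensor signature + noise
        return conc * signature + rng.normal(0.0, 1.0, size=(n, 6))

    X = np.vstack([responses(100.0, 20),   # high-marker model mixtures
                   responses(25.0, 20)])   # low-marker model mixtures

    Xc = X - X.mean(axis=0)                # mean-center before PCA
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[0]                    # projection onto PC1
    high, low = scores[:20], scores[20:]
    ```

    With a strong concentration contrast the two clusters are far apart on PC1, echoing the finding that the markers must clearly dominate before the device discriminates.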

  1. Constraint-based Student Modelling in Probability Story Problems with Scaffolding Techniques

    Directory of Open Access Journals (Sweden)

    Nabila Khodeir

    2018-01-01

    Full Text Available Constraint-based student modelling (CBM) is an important technique employed in intelligent tutoring systems to model student knowledge to provide relevant assistance. This paper introduces the Math Story Problem Tutor (MAST), a Web-based intelligent tutoring system for probability story problems, which is able to generate problems of different contexts, types and difficulty levels for self-paced learning. Constraints in MAST are specified at a low level of granularity to allow fine-grained diagnosis of the student error. Furthermore, MAST extends CBM to address errors due to misunderstanding of the narrative story. It can locate and highlight keywords that may have been overlooked or misunderstood, leading to an error. This is achieved by utilizing the role of sentences and keywords that are defined through the Natural Language Generation (NLG) methods deployed in the story problem generation. MAST also integrates CBM with scaffolding questions and feedback to provide various forms of help and guidance to the student. This allows the student to discover and correct any errors in his/her solution. MAST has been preliminarily evaluated empirically and the results show its potential effectiveness in tutoring students, with a decrease in the percentage of violated constraints along the learning curve. Additionally, there is a significant improvement in the results of the post-test exam in comparison to the pre-test exam for the students using MAST in comparison to those relying on the textbook.
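    The constraint-based core of such a tutor can be sketched as pairs of relevance and satisfaction predicates evaluated against a student solution. The two constraints below are made-up examples, not MAST's actual constraint base:

    ```python
    # Each constraint: (id, relevance predicate, satisfaction predicate, feedback)
    CONSTRAINTS = [
        ("and-independent",
         lambda p: p["op"] == "and" and p["independent"],
         lambda p, s: abs(s["answer"] - p["pa"] * p["pb"]) < 1e-9,
         "For independent events, P(A and B) = P(A) * P(B)."),
        ("probability-range",
         lambda p: True,
         lambda p, s: 0.0 <= s["answer"] <= 1.0,
         "A probability must lie between 0 and 1."),
    ]

    def violated(problem, solution):
        """Return feedback for every relevant-but-unsatisfied constraint."""
        return [msg for _, rel, sat, msg in CONSTRAINTS
                if rel(problem) and not sat(problem, solution)]

    problem = {"op": "and", "independent": True, "pa": 0.5, "pb": 0.4}
    ```

    A violated relevant constraint yields its feedback message, which is the natural hook for the scaffolding questions described above.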

  2. Workplace accommodations for employees with disabilities: A multilevel model of employer decision-making.

    Science.gov (United States)

    Telwatte, Apsara; Anglim, Jeromy; Wynton, Sarah K A; Moulding, Richard

    2017-02-01

    Existing research suggests that the decision to grant or deny workplace accommodations for people with disabilities is influenced by a range of legal and nonlegal factors. However, less is known about how these factors operate at the within-person level. Thus, we proposed and tested a multilevel model of the accommodation decision-making process, which we applied to better understand why people with psychological disabilities often experience greater challenges in obtaining accommodations. A sample of 159 Australian adults, composed mostly of managers and HR professionals, read 12 vignettes involving requests for accommodations from existing employees. The requests differed in whether they were for psychological or physical disabilities. For each vignette, participants rated their empathy with the employee, the legitimacy of the employee's disability, the necessity for productivity, the perceived cost, and the reasonableness, and indicated whether they would grant the accommodation. Multilevel modeling indicated that greater empathy, legitimacy, and necessity, and lower perceived cost predicted perceptions of greater reasonableness and greater granting. Accommodation requests from employees with psychological disabilities were seen as less reasonable and were less likely to be granted; much of this effect seemed to be driven by perceptions that such accommodations were less necessary for productivity. Ratings on accommodations were influenced both by general between-person tendencies and within-person appraisals of particular scenarios. The study points to a need for organizations to more clearly establish guidelines for how decision-makers should fairly evaluate accommodation requests for employees with psychological disabilities and disability more broadly. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. An eigenexpansion technique for modelling plasma start-up

    International Nuclear Information System (INIS)

    Pillsbury, R.D.

    1989-01-01

    An algorithm has been developed and implemented in a computer program that allows the estimation of PF coil voltages required to start-up an axisymmetric plasma in a tokamak in the presence of eddy currents in toroidally continuous conducting structures. The algorithm makes use of an eigen-expansion technique to solve the lumped parameter circuit loop voltage equations associated with the PF coils and passive (conducting) structures. An example of start-up for CIT (Compact Ignition Tokamak) is included
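    The eigen-expansion step can be illustrated on a generic lumped RL network; the 3×3 inductance and resistance matrices below are arbitrary stand-ins, not CIT parameters. With L di/dt + R i = 0, diagonalizing A = -L⁻¹R as VΛV⁻¹ gives the closed-form decay i(t) = V e^(Λt) V⁻¹ i(0):

    ```python
    import numpy as np

    # Arbitrary symmetric positive-definite inductance and resistance matrices
    # (stand-ins for a PF-coil/passive-structure loop model, not CIT data)
    Lm = 1e-3 * np.array([[1.0, 0.2, 0.1],
                          [0.2, 2.0, 0.3],
                          [0.1, 0.3, 1.5]])
    Rm = np.array([[2.0, 0.5, 0.0],
                   [0.5, 3.0, 0.4],
                   [0.0, 0.4, 1.0]])

    A = -np.linalg.solve(Lm, Rm)      # loop voltage equations: di/dt = A i
    lam, V = np.linalg.eig(A)         # eigen-expansion A = V diag(lam) V^-1
    Vinv = np.linalg.inv(V)

    def current(t, i0):
        """Closed-form homogeneous decay i(t) = V exp(lam*t) V^-1 i0."""
        return (V @ (np.exp(lam * t) * (Vinv @ i0))).real

    i0 = np.array([10.0, -5.0, 2.0])  # initial loop currents, arbitrary
    ```

    The driven (PF-coil voltage) case adds a particular solution in the same eigenbasis; the decomposition is what makes the circuit system cheap to evaluate at any time.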

  4. Wave propagation in fluids models and numerical techniques

    CERN Document Server

    Guinot, Vincent

    2012-01-01

    This second edition with four additional chapters presents the physical principles and solution techniques for transient propagation in fluid mechanics and hydraulics. The application domains vary including contaminant transport with or without sorption, the motion of immiscible hydrocarbons in aquifers, pipe transients, open channel and shallow water flow, and compressible gas dynamics. The mathematical formulation is covered from the angle of conservation laws, with an emphasis on multidimensional problems and discontinuous flows, such as steep fronts and shock waves. Finite

  5. A vortex model for Darrieus turbine using finite element techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ponta, Fernando L. [Universidad de Buenos Aires, Dept. de Electrotecnia, Grupo ISEP, Buenos Aires (Argentina); Jacovkis, Pablo M. [Universidad de Buenos Aires, Dept. de Computacion and Inst. de Calculo, Buenos Aires (Argentina)

    2001-09-01

    Since 1970 several aerodynamic prediction models have been formulated for the Darrieus turbine. We can identify two families of models: stream-tube and vortex. The former needs much less computation time but the latter is more accurate. The purpose of this paper is to show a new option for modelling the aerodynamic behaviour of Darrieus turbines. The idea is to combine a classic free vortex model with a finite element analysis of the flow in the surroundings of the blades. This avoids some of the remaining deficiencies in classic vortex models. The agreement between analysis and experiment when predicting instantaneous blade forces and near wake flow behind the rotor is better than the one obtained in previous models. (Author)

  6. Employment of Lithuanian Statistical Data Into Tax-Benefit Micro-Simulation Models

    Directory of Open Access Journals (Sweden)

    Viginta Ivaškaitė-Tamošiūnė

    2012-07-01

    Full Text Available In this study, we aim to assess the "best fit" of the existing Lithuanian micro-datasets for constructing a national micro-simulation model. Specifically, we compare and evaluate the potential of two (state-level) representative micro-data surveys in terms of their potential to simulate Lithuanian (direct) taxes, social contributions and social benefits. Both selected datasets contain rich information on the socio-economic and demographic conditions of the country: the Household Budget Survey (HBS) for the years 2004 and 2005 and the European Community Statistics on Income and Living Conditions (EU-SILC) in Lithuania for the year 2005. The selected databases offer the most comprehensive range of income and other socio-demographic attributes needed for simulation of tax and contribution payers/amounts and benefit recipients/amounts. The evaluation of the dataset capacity to simulate these measures is done by a comparative statistical analysis. Among the comparative categories are definitions (of households, incomes), survey collection modes, the level of aggregation of various variables, demographic and income variables, and the corresponding numbers (amounts). The comparative analysis of the HBS and EU-SILC datasets shows that, despite embedded differences and shortages regarding the simulation capacities of both surveys, these datasets contain valuable and sufficient information for the purpose of simulation of Lithuanian tax-benefit policies. In general, a conclusion could be drawn that the HBS offers higher possibilities for simulating the Lithuanian tax-benefit system. This dataset contains more detailed national income categories (i.e. recipients of maternity/paternity insurance, diverse pensions, etc.), information on which is not available in the EU-SILC. The latter dataset does not contain national policy system specific components, but offers information on income aggregates, such as old-age pensions, social exclusion benefits, etc. Additionally

  7. Reduction of thermal models of buildings: improvement of techniques using meteorological influence models; Reduction de modeles thermiques de batiments: amelioration des techniques par modelisation des sollicitations meteorologiques

    Energy Technology Data Exchange (ETDEWEB)

    Dautin, S.

    1997-04-01

    This work concerns the modeling of thermal phenomena inside buildings for the evaluation of the energy operating costs of thermal installations and for the modeling of thermal and aeraulic transient phenomena. This thesis comprises 7 chapters dealing with: (1) the thermal phenomena inside buildings and the CLIM2000 calculation code, (2) the ETNA and GENEC experimental cells and their modeling, (3) the model reduction techniques tested (Marshall's truncation, the Michailesco aggregation method and Moore truncation) with their algorithms and their encoding in the MATRED software, (4) the application of the model reduction methods to the GENEC and ETNA cells and to a medium-size dual-zone building, (5) the modeling of the meteorological influences classically applied to buildings (external temperature and solar flux), (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower-inertia building. These new methods are compared to classical methods. (J.S.) 69 refs.

  8. Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU.

    Science.gov (United States)

    Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid

    2017-12-01

    Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, whereby the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ∼600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ∼0.25 s/excitation source.
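    One software-level contribution to such speedups, independent of the GPU, is batching: the FEM system matrix is identical for every excitation source, so it can be factorized once and solved against all right-hand sides together. Below is a dense numpy sketch with a random SPD stand-in for the diffusion-approximation FEM matrix (a real implementation of this kind would use sparse solvers):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_nodes, n_sources = 400, 8

    # Random symmetric positive-definite stand-in for the FEM system matrix
    M = rng.normal(size=(n_nodes, n_nodes))
    K = M @ M.T + n_nodes * np.eye(n_nodes)

    # One right-hand-side column per excitation source
    B = rng.normal(size=(n_nodes, n_sources))

    # LAPACK factorizes K once and back-substitutes every column in one call,
    # instead of re-solving the full system per source
    X = np.linalg.solve(K, B)
    ```

    The same factorize-once pattern is what makes per-source costs like the quoted ∼0.25 s achievable once the factorization is amortized over all sources.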

  9. Toward real-time diffuse optical tomography: accelerating light propagation modeling employing parallel computing on GPU and CPU

    Science.gov (United States)

    Doulgerakis, Matthaios; Eggebrecht, Adam; Wojtkiewicz, Stanislaw; Culver, Joseph; Dehghani, Hamid

    2017-12-01

    Parameter recovery in diffuse optical tomography is a computationally expensive algorithm, especially when used for large and complex volumes, as in the case of human brain functional imaging. The modeling of light propagation, also known as the forward problem, is the computational bottleneck of the recovery algorithm, whereby the lack of a real-time solution is impeding practical and clinical applications. The objective of this work is the acceleration of the forward model, within a diffusion approximation-based finite-element modeling framework, employing parallelization to expedite the calculation of light propagation in realistic adult head models. The proposed methodology is applicable for modeling both continuous wave and frequency-domain systems with the results demonstrating a 10-fold speed increase when GPU architectures are available, while maintaining high accuracy. It is shown that, for a very high-resolution finite-element model of the adult human head with ˜600,000 nodes, consisting of heterogeneous layers, light propagation can be calculated at ˜0.25 s/excitation source.

  10. Multiparous Ewe as a Model for Teaching Vaginal Hysterectomy Techniques.

    Science.gov (United States)

    Kerbage, Yohan; Cosson, Michel; Hubert, Thomas; Giraudet, Géraldine

    2017-12-01

    Despite being linked to improving patient outcomes and limiting costs, the use of vaginal hysterectomy is on the wane. Although a combination of reasons might explain this trend, one cause is a lack of practical training. An appropriate teaching model must therefore be devised. Currently, only low-fidelity simulators exist. Ewes provide an appropriate model for pelvic anatomy and are well-suited for testing vaginal mesh properties. This article sets out a vaginal hysterectomy procedure for use as an education and training model. A multiparous ewe was the model. Surgery was performed under general anesthesia. The ewe was in a lithotomy position resembling that assumed by women on the operating table. Two vaginal hysterectomies were performed on two ewes, following every step precisely as if the model were human. Each surgical step of vaginal hysterectomy performed on the ewe and on a woman were compared side by side. We identified that all surgical steps were particularly similar. The main limitations of this model are costs ($500/procedure), logistic problems (housing large animals), and public opposition to animal training models. The ewe appears to be an appropriate model for teaching and training of vaginal hysterectomy.

  11. VIA Employability

    DEFF Research Database (Denmark)

    Andersen, Henrik Mariendal

    2017-01-01

    ’s realized at the entrance to the labor market and in the future career. The purpose is to find opportunities to improve employability-developing activities and to adapt it to specific needs from the students. Based on a number of qualitative interviews and personality tests of the graduates, an increased......The fact that students develop employability during their education is a key point for educational institutions and the focus on this issue has never been greater. This project looks into personal experience from VIA-graduates of "developing their employability" during the education and how it...

  12. Employment of Lithuanian Statistical Data Into Tax-Benefit Micro-Simulation Models

    Directory of Open Access Journals (Sweden)

    Viginta Ivaškaitė-Tamošiūnė

    2013-01-01

    Full Text Available In this study, we aim to assess the "best fit" of the existing Lithuanian micro-datasets for constructing a national micro-simulation model. Specifically, we compare and evaluate the potential of two (state-level) representative micro-data surveys in terms of their potential to simulate Lithuanian (direct) taxes, social contributions and social benefits. Both selected datasets contain rich information on the socio-economic and demographic conditions of the country: the Household Budget Survey (HBS) for the years 2004 and 2005 and the European Community Statistics on Income and Living Conditions (EU-SILC) in Lithuania for the year 2005. The selected databases offer the most comprehensive range of income and other socio-demographic attributes needed for simulation of tax and contribution payers/amounts and benefit recipients/amounts. The evaluation of the dataset capacity to simulate these measures is done by a comparative statistical analysis. Among the comparative categories are definitions (of households, incomes), survey collection modes, the level of aggregation of various variables, demographic and income variables, and the corresponding numbers (amounts). The comparative analysis of the HBS and EU-SILC datasets shows that, despite embedded differences and shortages regarding the simulation capacities of both surveys, these datasets contain valuable and sufficient information for the purpose of simulation of Lithuanian tax-benefit policies. In general, a conclusion could be drawn that the HBS offers higher possibilities for simulating the Lithuanian tax-benefit system. This dataset contains more detailed national income categories (i.e. recipients of maternity/paternity insurance, diverse pensions, etc.), information on which is not available in the EU-SILC. The latter dataset does not contain national policy system specific components, but offers information on income aggregates, such as old-age pensions, social exclusion benefits, etc

  13. Modeling rainfall-runoff process using soft computing techniques

    Science.gov (United States)

    Kisi, Ozgur; Shiri, Jalal; Tombul, Mustafa

    2013-02-01

    Rainfall-runoff process was modeled for a small catchment in Turkey, using 4 years (1987-1991) of measurements of independent variables of rainfall and runoff values. The models used in the study were Artificial Neural Networks (ANNs), Adaptive Neuro-Fuzzy Inference System (ANFIS) and Gene Expression Programming (GEP) which are Artificial Intelligence (AI) approaches. The applied models were trained and tested using various combinations of the independent variables. The goodness of fit for the model was evaluated in terms of the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), coefficient of efficiency (CE) and scatter index (SI). A comparison was also made between these models and traditional Multi Linear Regression (MLR) model. The study provides evidence that GEP (with RMSE=17.82 l/s, MAE=6.61 l/s, CE=0.72 and R2=0.978) is capable of modeling rainfall-runoff process and is a viable alternative to other applied artificial intelligence and MLR time-series methods.
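    The goodness-of-fit measures quoted above are standard and easy to reproduce; in the numpy sketch below, CE is the Nash-Sutcliffe coefficient of efficiency and R2 is taken as the squared Pearson correlation (the paper may define it differently), while the runoff arrays are invented example values:

    ```python
    import numpy as np

    def rmse(obs, sim):
        return float(np.sqrt(np.mean((obs - sim) ** 2)))

    def mae(obs, sim):
        return float(np.mean(np.abs(obs - sim)))

    def ce(obs, sim):
        """Nash-Sutcliffe coefficient of efficiency: 1 is perfect,
        0 means no better than predicting the observed mean."""
        return float(1.0 - np.sum((obs - sim) ** 2)
                     / np.sum((obs - obs.mean()) ** 2))

    def r2(obs, sim):
        """Squared Pearson correlation between observed and simulated runoff."""
        return float(np.corrcoef(obs, sim)[0, 1] ** 2)

    obs = np.array([12.0, 30.0, 18.0, 44.0, 25.0])   # invented runoff, l/s
    sim = np.array([14.0, 27.0, 20.0, 40.0, 26.0])
    ```

    Reporting CE alongside RMSE and MAE, as the study does, guards against a model that tracks the mean well but misses the dynamics.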

  14. Employment protection

    OpenAIRE

    Stefano Scarpetta

    2014-01-01

    Laws on hiring and firing are intended to protect workers from unfair behavior by employers, to counter imperfections in financial markets that limit workers’ ability to insure themselves against job loss, and to preserve firm-specific human capital. But by imposing costs on firms’ adaptation to changes in demand and technology, employment protection legislation may reduce not only job destruction but also job creation, hindering the efficient allocation of labor and productivity growth....

  15. Validation techniques of agent based modelling for geospatial simulations

    OpenAIRE

    Darvishi, M.; Ahmadi, G.

    2014-01-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent...

  16. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Full Text Available Snow exerts a significant regulating effect on the land hydrological cycle since it controls the intensity of heat and water exchange between the soil-vegetative cover and the atmosphere. Estimating spring flood runoff or rain floods on mountainous rivers requires understanding of the snow cover dynamics on a watershed. In our work, the problem of snow cover depth modeling is solved based on both available databases of hydro-meteorological observations and easily accessible scientific software that allows complete reproduction of the investigation results and further development of this theme by the scientific community. In this research we used daily observational data on the snow cover and surface meteorological parameters, obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankyla (Finland), and Snoqualmie Pass (USA). Statistical modeling of the snow cover depth is based on a set of freely distributed, state-of-the-art machine learning models: Decision Trees, Adaptive Boosting, and Gradient Boosting. It is demonstrated that the combination of modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow cover depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees; this model reproduces well both the periods of snow cover accumulation and its melting. The purposeful character of the learning process for models of the gradient boosting type, their ensemble character, and the use of combined redundancy of a test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.
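    The winning approach, gradient boosting over decision trees, can be sketched with scikit-learn; the predictors and the toy snow-depth relation below are assumptions for illustration, not the paper's station data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical daily predictors: air temperature (degC) and precipitation (mm)
temp = rng.uniform(-20, 10, n)
precip = rng.uniform(0, 30, n)
# Toy response: snow depth grows with precipitation on sub-zero days, plus noise
depth = np.where(temp < 0, precip * 1.5, 0.0) + rng.normal(0, 1, n)

X = np.column_stack([temp, precip])
X_tr, X_te, y_tr, y_te = train_test_split(X, depth, random_state=0)

# Ensemble of shallow regression trees fitted by gradient boosting
model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
model.fit(X_tr, y_tr)
score = model.score(X_te, y_te)  # R^2 on held-out days
```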

  17. Modeling and Control of Multivariable Process Using Intelligent Techniques

    Directory of Open Access Journals (Sweden)

    Subathra Balasubramanian

    2010-10-01

    Full Text Available For nonlinear dynamic systems, first-principles-based modeling and control is difficult to implement. In this study, a fuzzy controller and a recurrent fuzzy controller are developed for a MIMO process. A fuzzy logic controller is a model-free controller designed based on knowledge about the process. Two types of rule-based fuzzy models are available: the linguistic (Mamdani) model and the Takagi–Sugeno (TS) model. Of these two, the Takagi–Sugeno model has attracted the most attention. The application of fuzzy controllers is limited to static processes due to their feedforward structure. However, most real-time processes are dynamic and require the history of input/output data. In order to store the past values a memory unit is needed, which is introduced by the recurrent structure. The proposed recurrent fuzzy structure is used to develop a controller for a two-tank heating process. Both controllers are designed and implemented in a real-time environment and their performance is compared.
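    A zero-order Takagi–Sugeno model reduces to a firing-strength-weighted average of rule consequents; a minimal illustrative sketch (the rule centres, widths and consequents are invented, not taken from the paper's two-tank controller):

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function centred at c with width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def ts_infer(x):
    """Zero-order Takagi-Sugeno inference with two rules:
       IF x is LOW  THEN y = 0
       IF x is HIGH THEN y = 10
    The crisp output is the firing-strength-weighted average of consequents."""
    w_low = gauss(x, c=0.0, s=2.0)
    w_high = gauss(x, c=10.0, s=2.0)
    return (w_low * 0.0 + w_high * 10.0) / (w_low + w_high)

# Halfway between the two rule centres, both rules fire equally, so y = 5
y_mid = ts_infer(5.0)
```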

  18. A study on the modeling techniques using LS-INGRID

    Energy Technology Data Exchange (ETDEWEB)

    Ku, J. H.; Park, S. W.

    2001-03-01

    For the development of radioactive material transport packages, the verification of the structural safety of a package against the free-drop impact accident should be carried out. The use of LS-DYNA, a code specially developed for impact analysis, is essential for impact analysis of the package. LS-INGRID is a pre-processor for LS-DYNA with considerable capability to deal with complex geometries, and it allows for parametric modeling. LS-INGRID is most effective in combination with the LS-DYNA code. Although the usage of LS-INGRID seems very difficult relative to many commercial mesh generators, the productivity of users performing parametric modeling tasks with LS-INGRID can be much higher in some cases. This report presents basic explanations of the structure and commands of LS-INGRID, together with basic and advanced modelling examples, so that it can be used for the impact analysis of various packages. New users can easily build complex models by studying the basic examples presented in this report, from the modelling through to the loading and constraint conditions.

  19. Employing Model-Based Reasoning in Interdisciplinary Research Teams: Evidence-Based Practices for Integrating Knowledge Across Systems

    Science.gov (United States)

    Pennington, D. D.; Vincent, S.

    2017-12-01

    The NSF-funded project "Employing Model-Based Reasoning in Socio-Environmental Synthesis (EMBeRS)" has developed a generic model for exchanging knowledge across disciplines that is based on findings from the cognitive, learning, social, and organizational sciences addressing teamwork in complex problem-solving situations. Two ten-day summer workshops for PhD students from large, NSF-funded interdisciplinary projects working on a variety of water issues were conducted in 2016 and 2017, testing the model by collecting a variety of data, including surveys, interviews, audio/video recordings, material artifacts and documents, and photographs. This presentation will introduce the EMBeRS model, the design of workshop activities based on the model, and results from surveys and interviews with the participating students. Findings suggest that this approach is very effective for developing a shared, integrated research vision across disciplines, compared with activities typically provided by most large research projects, and that students believe the skills developed in the EMBeRS workshops are unique and highly desirable.

  20. Computational modelling of the HyperVapotron cooling technique

    Energy Technology Data Exchange (ETDEWEB)

    Milnes, Joseph, E-mail: Joe.Milnes@ccfe.ac.uk [Euratom/CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon, OX14 3DB (United Kingdom); Burns, Alan [School of Process Material and Environmental Engineering, CFD Centre, University of Leeds, Leeds, LS2 9JT (United Kingdom); ANSYS UK, Milton Park, Oxfordshire (United Kingdom); Drikakis, Dimitris [Department of Engineering Physics, Cranfield University, Cranfield, MK43 0AL (United Kingdom)

    2012-09-15

    Highlights: ► The heat transfer mechanisms within a HyperVapotron are examined. ► A multiphase CFD model is developed. ► Modelling choices for turbulence and wall boiling are evaluated. ► Considerable improvements in accuracy are found compared to standard boiling models. ► The model should enable significant virtual prototyping to be performed. - Abstract: Efficient heat transfer technologies are essential for magnetically confined fusion reactors; this applies both to the current generation of experimental reactors and to future power plants. A number of High Heat Flux devices have therefore been developed specifically for this application. One of the most promising candidates is the HyperVapotron, a water-cooled device which relies on internal fins and boiling heat transfer to maximise the heat transfer capability. Over the past 30 years, numerous variations of the HyperVapotron have been built and tested at fusion research centres around the globe, resulting in devices that can now sustain heat fluxes in the region of 20-30 MW/m² in steady state. Until recently, there had been few attempts to model or understand the internal heat transfer mechanisms responsible for this exceptional performance, with the result that design improvements have traditionally been sought experimentally, which is both inefficient and costly. This paper presents a successful attempt to develop an engineering model of the HyperVapotron device through customisation of commercial Computational Fluid Dynamics software. To establish the most appropriate modelling choices, in-depth studies were performed examining turbulence models (within the Reynolds Averaged Navier Stokes framework), near-wall methods, grid resolution and boiling submodels.
Comparing the CFD solutions with HyperVapotron experimental data suggests that a RANS-based, multiphase

  1. A parametric model order reduction technique for poroelastic finite element models.

    Science.gov (United States)

    Lappano, Ettore; Polanz, Markus; Desmet, Wim; Mundo, Domenico

    2017-10-01

    This research presents a parametric model order reduction approach for vibro-acoustic problems in the frequency domain of systems containing poroelastic materials (PEM). The method is applied to the Finite Element (FE) discretization of the weak u-p integral formulation based on the Biot-Allard theory and makes use of reduced basis (RB) methods typically employed for parametric problems. The parametric reduction is obtained rewriting the Biot-Allard FE equations for poroelastic materials using an affine representation of the frequency (therefore allowing for RB methods) and projecting the frequency-dependent PEM system on a global reduced order basis generated with the proper orthogonal decomposition instead of standard modal approaches. This has proven to be better suited to describe the nonlinear frequency dependence and the strong coupling introduced by damping. The methodology presented is tested on two three-dimensional systems: in the first experiment, the surface impedance of a PEM layer sample is calculated and compared with results of the literature; in the second, the reduced order model of a multilayer system coupled to an air cavity is assessed and the results are compared to those of the reference FE model.
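    The core of the approach, building a global reduced-order basis by proper orthogonal decomposition (POD) of solution snapshots and projecting the full-order operator onto it, can be sketched in a few lines; the snapshot matrix below is synthetic and stands in for the paper's Biot-Allard FE solutions at sampled frequencies:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200, 40                      # full-order DOFs, number of snapshots
# Hypothetical snapshot matrix: solutions sampled on a 3-dimensional manifold
modes = rng.normal(size=(n, 3))
coeffs = rng.normal(size=(3, k))
snapshots = modes @ coeffs

# POD: left singular vectors of the snapshot matrix give the reduced basis;
# keep enough vectors to capture 99.99% of the snapshot "energy"
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1
V = U[:, :r]                        # global reduced-order basis (n x r)

# Galerkin projection of a full-order operator onto the basis: A_r = V^T A V
A = rng.normal(size=(n, n))
A_r = V.T @ A @ V                   # r x r reduced operator
```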

  2. Territories typification technique with use of statistical models

    Science.gov (United States)

    Galkin, V. I.; Rastegaev, A. V.; Seredin, V. V.; Andrianov, A. V.

    2018-05-01

    Territory typification is required for the solution of many problems. The results of geological zoning obtained by means of various methods do not always agree. That is why the main goal of this research is to develop a technique for obtaining a multidimensional standard classified indicator for geological zoning. In the course of the research, a probabilistic approach was used. In order to increase the reliability of the classification of geological information, the authors suggest using the complex multidimensional probabilistic indicator P_K as a criterion of the classification. The second criterion chosen is the multidimensional standard classified indicator Z. These can serve as characteristics of classification in geological-engineering zoning. The above-mentioned indicators P_K and Z are in good correlation: correlation coefficient values for the entire territory, regardless of structural solidity, equal r = 0.95, so each indicator can be used in geological-engineering zoning. The method suggested has been tested and a schematic zoning map has been drawn.

  3. A Review of Models for Dose Assessment Employed by SKB in the Renewed Safety Assessment for SFR 1

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, George [Imperial College of Science Technology and Medicine (United Kingdom)

    2002-09-01

    This document provides a critical review, on behalf of SSI, of the models employed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for dose assessment in the renewed safety assessment for the final repository for radioactive operational waste (SFR 1) in Forsmark, Sweden. The main objective of the review is to examine the models used by SKB for radiological dose assessment in a series of evolving biotopes in the vicinity of the Forsmark repository within a time frame beginning in 3000 AD and extending beyond 7500 AD. Five biosphere models (for coasts, lakes, agriculture, mires and wells) are described in Report TR-01-04. The principal consideration of the review is to determine whether these models are fit for the purpose of dose evaluation over the time frames involved and in the evolving sequence of biotopes specified. As well as providing general observations and comments on the modelling approach taken, six specific questions are addressed, as follows. Are the assumptions underlying the models justifiable? Are all reasonably foreseeable environmental processes considered? Has parameter uncertainty been sufficiently and reasonably addressed? Have sufficient models been used to address all reasonably foreseeable biotopes? Are the transitions between biotopes modelled adequately (specifically, are initial conditions for developing biotopes adequately specified by calculations for subsiding biotopes)? Have all critical radionuclides been identified? It is concluded that, in general, the assumptions underlying most of the models are justifiable. The exceptions are a) the rather simplistic approach taken in the Coastal Model and b) the lack of consideration of wild foods and age-dependence when calculating exposures of humans to radionuclides via dietary pathways. 
Most foreseeable processes appear to have been accounted for within the constraints of the models used, although it is recommended that attention be paid to future climate states when considering

  4. Nuclear-fuel-cycle optimization: methods and modelling techniques

    International Nuclear Information System (INIS)

    Silvennoinen, P.

    1982-01-01

    This book presents methods applicable to analyzing fuel-cycle logistics and optimization, as well as to evaluating the economics of different reactor strategies. After an introduction to the phases of a fuel cycle, uranium cost trends are assessed in a global perspective. Subsequent chapters deal with the fuel-cycle problems faced by a power utility. The fuel-cycle models cover the entire cycle, from the supply of uranium to the disposition of spent fuel. The chapter headings are: Nuclear Fuel Cycle, Uranium Supply and Demand, Basic Model of the LWR (light water reactor) Fuel Cycle, Resolution of Uncertainties, Assessment of Proliferation Risks, Multigoal Optimization, Generalized Fuel-Cycle Models, Reactor Strategy Calculations, and Interface with Energy Strategies. 47 references, 34 figures, 25 tables

  5. A method to employ the spatial organization of catchments into semi-distributed rainfall–runoff models

    Directory of Open Access Journals (Sweden)

    H. Oppel

    2017-08-01

    Full Text Available A distributed or semi-distributed deterministic hydrological model should consider the hydrologically most relevant catchment characteristics. These are heterogeneously distributed within a watershed but often interrelated and subject to a certain spatial organization which results in archetypes of combined characteristics. In order to reproduce the natural rainfall–runoff response the reduction of variance of catchment properties as well as the incorporation of the spatial organization of the catchment are desirable. In this study the width-function approach is utilized as a basic characteristic to analyse the succession of catchment characteristics. By applying this technique we were able to assess the context of catchment properties like soil or topology along the streamflow length and the network geomorphology, giving indications of the spatial organization of a catchment. Moreover, this information and this technique have been implemented in an algorithm for automated sub-basin ascertainment, which included the definition of zones within the newly defined sub-basins. The objective was to provide sub-basins that were less heterogeneous than common separation schemes. The algorithm was applied to two parameters characterizing the topology and soil of four mid-European watersheds. Resulting partitions indicated a wide range of applicability for the method and the algorithm. Additionally, the intersection of derived zones for different catchment characteristics could give insights into sub-basin similarities. Finally, a HBV96 case study demonstrated the potential benefits of modelling with the new subdivision technique.
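    The width-function idea, binning catchment area by flow distance to the outlet and then reading a catchment property along the same distance axis, can be illustrated as follows; the flow lengths and the soil attribute are randomly generated placeholders, not the paper's mid-European data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical flow-path lengths (km) from each raster cell to the outlet
flow_length = rng.gamma(shape=3.0, scale=5.0, size=10_000)

# Width function: share of catchment area found in each 1-km distance band
bins = np.arange(0, flow_length.max() + 1.0, 1.0)
counts, edges = np.histogram(flow_length, bins=bins)
width = counts / counts.sum()

# A catchment property (here a hypothetical soil storage capacity, mm) can be
# summarised along the same axis to expose its succession toward the outlet
soil = rng.uniform(50, 150, size=flow_length.size)
mean_soil_per_band = np.array([
    soil[(flow_length >= a) & (flow_length < b)].mean()
    if ((flow_length >= a) & (flow_length < b)).any() else np.nan
    for a, b in zip(edges[:-1], edges[1:])
])
```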

  6. Increasing the reliability of ecological models using modern software engineering techniques

    Science.gov (United States)

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  7. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.

  8. Determining Plutonium Mass in Spent Fuel with Nondestructive Assay Techniques -- Preliminary Modeling Results Emphasizing Integration among Techniques

    International Nuclear Information System (INIS)

    Tobin, S.J.; Fensin, M.L.; Ludewigt, B.A.; Menlove, H.O.; Quiter, B.J.; Sandoval, N.P.; Swinhoe, M.T.; Thompson, S.J.

    2009-01-01

    There are a variety of motivations for quantifying Pu in spent (used) fuel assemblies by means of nondestructive assay (NDA), including the following: strengthening the capability of the International Atomic Energy Agency to safeguard nuclear facilities, quantifying shipper/receiver differences, determining the input accountability value at reprocessing facilities, and providing quantitative input to burnup credit determination for repositories. For the purpose of determining the Pu mass in spent fuel assemblies, twelve NDA techniques were identified that provide information about the composition of an assembly. A key point motivating the present research path is the realization that none of these techniques, in isolation, is capable of both (1) quantifying the elemental Pu mass of an assembly and (2) detecting the diversion of a significant number of pins. As such, the focus of this work is determining how to best integrate 2 or 3 techniques into a system that can quantify elemental Pu, and assessing how well this system can detect material diversion. Furthermore, it is important economically to down-select among the various techniques before advancing to the experimental phase. In order to achieve this dual goal of integration and down-selection, a Monte Carlo library of PWR assemblies was created; it is described in another paper at Global 2009 (Fensin et al.). The research presented here emphasizes integration among techniques. An overview of a five-year research plan starting in 2009 is given. Preliminary modeling results for the Monte Carlo assembly library are presented for 3 NDA techniques: Delayed Neutrons, Differential Die-Away, and Nuclear Resonance Fluorescence. As part of the focus on integration, the concept of 'Pu isotopic correlation' is discussed, along with the role of cooling time determination.

  9. Parameter estimation in stochastic mammogram model by heuristic optimization techniques.

    NARCIS (Netherlands)

    Selvan, S.E.; Xavier, C.C.; Karssemeijer, N.; Sequeira, J.; Cherian, R.A.; Dhala, B.Y.

    2006-01-01

    The appearance of disproportionately large amounts of high-density breast parenchyma in mammograms has been found to be a strong indicator of the risk of developing breast cancer. Hence, the breast density model is popular for risk estimation or for monitoring breast density change in prevention or

  10. Discovering Process Reference Models from Process Variants Using Clustering Techniques

    NARCIS (Netherlands)

    Li, C.; Reichert, M.U.; Wombacher, Andreas

    2008-01-01

    In today's dynamic business world, success of an enterprise increasingly depends on its ability to react to changes in a quick and flexible way. In response to this need, process-aware information systems (PAIS) emerged, which support the modeling, orchestration and monitoring of business processes

  11. Teaching Behavioral Modeling and Simulation Techniques for Power Electronics Courses

    Science.gov (United States)

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of behavioral modeling of switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The methodology is oriented toward electrical engineering (EE) students at the undergraduate level, enrolled in courses such as "Power…

  12. Biliary System Architecture: Experimental Models and Visualization Techniques

    Czech Academy of Sciences Publication Activity Database

    Sarnová, Lenka; Gregor, Martin

    2017-01-01

    Roč. 66, č. 3 (2017), s. 383-390 ISSN 0862-8408 R&D Projects: GA MŠk(CZ) LQ1604; GA ČR GA15-23858S Institutional support: RVO:68378050 Keywords : Biliary system * Mouse model * Cholestasis * Visualisation * Morphology Subject RIV: EB - Genetics ; Molecular Biology OBOR OECD: Cell biology Impact factor: 1.461, year: 2016

  13. Testing Model with "Check Technique" for Physics Education

    Science.gov (United States)

    Demir, Cihat

    2016-01-01

    Because the number, timing and format of written tests are structured and teacher-oriented, they are considered to create fear and anxiety among students. It has therefore been found necessary and important to form a testing model which will keep students away from test anxiety and allow them to concentrate only on the lesson. For this study,…

  14. Evaluation of a Multiple Mediator Model of the Relationship between Core Self-Evaluations and Job Satisfaction in Employed Individuals with Disabilities

    Science.gov (United States)

    Smedema, Susan Miller; Kesselmayer, Rachel Friefeld; Peterson, Lauren

    2018-01-01

    Purpose: To test a mediation model of the relationship between core self-evaluations (CSE) and job satisfaction in employed individuals with disabilities. Method: A quantitative descriptive design using Hayes's (2012) PROCESS macro for SPSS and multiple regression analysis. Two hundred fifty-nine employed persons with disabilities were recruited…

  15. Ureteric Embolization for Lower Urinary Tract Fistulae: Use of Two Amplatzer Vascular Plugs and N-Butyl Cyanoacrylate Employing the 'Sandwich' Technique

    Energy Technology Data Exchange (ETDEWEB)

    Saad, Wael E. A., E-mail: ws6r@virginia.edu; Kalagher, S.; Turba, U. C.; Sabri, S. S.; Park, A.-W.; Stone, J.; Angle, J. F.; Matsumoto, A. H. [University of Virginia Health System, Department of Radiology and Medical Imaging, Division of Vascular Interventional Radiology (United States)

    2013-08-01

    Purpose: This study describes and evaluates the effectiveness of occluding distal ureters in the clinical setting of urinary-vaginal (vesicovaginal or enterovesicovaginal) fistulae utilizing a new technique which combines Amplatzer vascular plugs and N-butyl cyanoacrylate. Materials: This is a retrospective study (January 2007-December 2010) of patients with urinary-vaginal fistulae undergoing distal ureter embolization utilizing an Amplatzer-N-butyl cyanoacrylate-Amplatzer sandwich technique. An 8-12-mm type-I or type-II Amplatzer vascular plug was delivered using the sheath and deployed in the ureter distal to the pelvic brim. Instillation of 0.8-1.5 cc of N-butyl cyanoacrylate into the ureter proximal to the Amplatzer plug was performed. This was followed by another set of 8-12-mm type-I or type-II Amplatzer vascular plugs, in a technique referred to as the 'sandwich technique.' Results: Five ureters in three patients were occluded utilizing the above-described technique during the 4-year study period. The mean maximum Amplatzer size used per ureter was 10.8 mm (range, 8-12). One ureter required three Amplatzer plugs and the rest required two. Two patients (3 ureters) had clinical success with complete resolution of symptoms in 36-48 h. The third patient (2 ureters) had partial success and required a second Amplatzer-N-butyl cyanoacrylate sandwich embolization. The mean clinical follow-up was 11.3 months (range, 1.7-29.2). Conclusions: The Amplatzer-N-butyl cyanoacrylate-Amplatzer sandwich technique for occluding the distal ureter is safe and effective, with a quick (probably due to the N-butyl cyanoacrylate) and durable (probably due to the Amplatzer plugs) clinical response.

  16. Data assimilation techniques and modelling uncertainty in geosciences

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    Full Text Available "You cannot step into the same river twice". Perhaps this ancient quote is the best phrase to describe the dynamic nature of the earth system. If we regard the earth as several coupled systems, we want to know the state of the system at any time. The state could be time-evolving and complex (such as the atmosphere) or simple, and finding the current state requires complete knowledge of all aspects of the system. On the one hand, measurements (in situ and satellite data) are often incomplete and affected by errors. On the other hand, modelling cannot be exact; therefore, the optimal combination of the measurements with the model information is the best choice for estimating the true state of the system. Data assimilation (DA) methods are powerful tools to combine observations and a numerical model. DA is an interaction between uncertainty analysis, physical modelling and mathematical algorithms. DA improves knowledge of the past, present or future system states. DA provides a forecast of the state of complex systems and better scientific understanding of calibration, validation, data errors and their probability distributions. Nowadays, the high performance and capabilities of DA have led to its extensive use in different sciences such as meteorology, oceanography, hydrology and nuclear cores. In this paper, after a brief overview of the history of DA and a comparison with conventional statistical methods, we investigate the accuracy and computational efficiency of the two main classical families of DA algorithms, stochastic DA (BLUE and the Kalman filter) and variational DA (3D- and 4D-Var), and then evaluate the quantification and modelling of the errors. Finally, some applications of DA in the geosciences and the challenges facing DA are discussed.
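    A scalar Kalman filter illustrates the stochastic DA cycle the paper reviews: a model forecast followed by a gain-weighted correction toward the observation. All numbers below are illustrative:

```python
import numpy as np

def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.25):
    """One forecast/analysis cycle of a scalar Kalman filter.
    x, P: state estimate and its variance; z: new observation."""
    # Forecast (model) step
    x_f = F * x
    P_f = F * P * F + Q
    # Analysis (assimilation) step: the gain weighs model against data
    K = P_f * H / (H * P_f * H + R)
    x_a = x_f + K * (z - H * x_f)
    P_a = (1.0 - K * H) * P_f
    return x_a, P_a

# Assimilate noisy observations of a constant true state (value 1.0)
rng = np.random.default_rng(3)
x, P = 0.0, 1.0                      # prior guess and its variance
for z in 1.0 + rng.normal(0, 0.5, 50):
    x, P = kalman_step(x, P, z)
```

After repeated cycles the analysis variance P settles near its steady-state value, and the estimate tracks the observations' underlying mean.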

  17. Altered vocal fold kinematics in synthetic self-oscillating models that employ adipose tissue as a lateral boundary condition.

    Science.gov (United States)

    Saidi, Hiba; Erath, Byron D.

    2015-11-01

    The vocal folds play a major role in human communication by initiating voiced sound production. During voiced speech, the vocal folds are set into sustained vibrations. Synthetic self-oscillating vocal fold models are regularly employed to gain insight into flow-structure interactions governing the phonation process. Commonly, a fixed boundary condition is applied to the lateral, anterior, and posterior sides of the synthetic vocal fold models. However, physiological observations reveal the presence of adipose tissue on the lateral surface between the thyroid cartilage and the vocal folds. The goal of this study is to investigate the influence of including this substrate layer of adipose tissue on the dynamics of phonation. For a more realistic representation of the human vocal folds, synthetic multi-layer vocal fold models have been fabricated and tested while including a soft lateral layer representative of adipose tissue. Phonation parameters have been collected and are compared to those of the standard vocal fold models. Results show that vocal fold kinematics are affected by adding the adipose tissue layer as a new boundary condition.

  18. Employer Branding

    DEFF Research Database (Denmark)

    Frimann, Søren; Mønsted, Bolette Rye

    2012-01-01

    Employer branding has become a way for both the private and the public sector to respond to changed labour-market conditions and organisational challenges in a postmodern, globalised world. The current financial crisis has created new challenges for organisations in their efforts to attract and retain attractive employees. But what does it mean when Grundfos says "People are in focus", and what does "diversity" say about Københavns Kommune (the City of Copenhagen) as a workplace in relation to employer branding? Is there in fact coherence between the thinking behind employer branding and the externally communicated employer brand products, or is what is unique about these workplaces replaced by buzzwords without substance and without involvement of employees and stakeholders? The article aims to assess these questions on the basis of analyses of two employer branding cases.

  19. The impact of applying product-modelling techniques in configurator projects

    DEFF Research Database (Denmark)

    Hvam, Lars; Kristjansdottir, Katrin; Shafiee, Sara

    2018-01-01

    This paper aims to increase understanding of the impact of using product-modelling techniques to structure and formalise knowledge in configurator projects. Companies that provide customised products increasingly apply configurators in support of sales and design activities, reaping benefits that include shorter lead times, improved quality of specifications and products, and lower overall product costs. The design and implementation of configurators are a challenging task that calls for scientifically based modelling techniques to support the formal representation of configurator knowledge. Three approaches are distinguished: (1) UML-based modelling techniques, in which the phenomenon model and information model are considered visually, (2) non-UML-based modelling techniques, in which only the phenomenon model is considered, and (3) non-formal modelling techniques. This study analyses the impact to companies from increased availability of product knowledge and improved control...

  20. A comparison between EGR and lean-burn strategies employed in a natural gas SI engine using a two-zone combustion model

    Energy Technology Data Exchange (ETDEWEB)

    Ibrahim, Amr; Bari, Saiful [Sustainable Energy Centre, School of Advanced Manufacturing and Mechanical Engineering, Univ. of South Australia, Mawson Lakes SA 5095 (Australia)

    2009-12-15

    The exhaust gas recirculation (EGR) strategy has recently been employed in natural gas SI engines as an alternative to the lean-burn technique in order to satisfy increasingly stringent emission standards. However, the effect of EGR on some engine performance parameters, compared to lean burn, is not yet well established. In the current study, the effects of EGR and lean burn on natural gas SI engine performance were compared at similar operating conditions. This was achieved numerically by developing a computer simulation of a four-stroke spark-ignition natural gas engine. A two-zone combustion model was developed to simulate the in-cylinder conditions during combustion, and a kinetic model based on the extended Zeldovich mechanism was developed to predict NO emission. The combustion model was validated against experimental data, with good agreement between the results. It was demonstrated that adding EGR to the stoichiometric inlet charge at a constant inlet pressure of 130 kPa decreased power more rapidly than adding excess air; however, the power loss was recovered by increasing the inlet pressure from 130 kPa at zero dilution to 150 kPa at 20% EGR dilution. Engine fuel consumption increased by 10% with 20% EGR dilution at an inlet pressure of 150 kPa compared to 20% air dilution at 130 kPa. However, the EGR dilution strategy was found to produce substantially lower NO emission than the lean-burn technique: NO emission was reduced by about 70% when the inlet charge was diluted at a rate of 20% using EGR instead of excess air. (author)
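    The NO prediction in the abstract rests on the extended Zeldovich mechanism, whose initial formation rate is dominated by the strongly temperature-sensitive step O + N2 → NO + N. The following is an illustrative sketch only (not the authors' model): the rate constant is the commonly quoted value from standard combustion references, and the concentrations are arbitrary placeholders. It shows how the charge-cooling effect of dilution cuts the NO formation rate:

```python
import math

def zeldovich_no_rate(T, O_conc, N2_conc):
    """Initial NO formation rate d[NO]/dt ~ 2*k1*[O][N2] (mol cm^-3 s^-1),
    using the rate-limiting step O + N2 -> NO + N of the extended
    Zeldovich mechanism (rate constant from standard combustion texts)."""
    k1 = 1.8e14 * math.exp(-38370.0 / T)  # cm^3 mol^-1 s^-1
    return 2.0 * k1 * O_conc * N2_conc

# Illustrative: a ~100 K drop in peak burned-gas temperature (the main
# thermal effect of dilution) cuts the initial NO formation rate sharply.
rate_hot = zeldovich_no_rate(2600.0, 1e-9, 1e-5)
rate_cool = zeldovich_no_rate(2500.0, 1e-9, 1e-5)
print(rate_cool / rate_hot)  # substantially below 1
```

    The exponential sensitivity to temperature is why a modest extra charge-cooling advantage of EGR over excess air translates into the large NO reductions reported above.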

  1. A comparison between EGR and lean-burn strategies employed in a natural gas SI engine using a two-zone combustion model

    International Nuclear Information System (INIS)

    Ibrahim, Amr; Bari, Saiful

    2009-01-01

    The exhaust gas recirculation (EGR) strategy has recently been employed in natural gas SI engines as an alternative to the lean-burn technique in order to satisfy increasingly stringent emission standards. However, the effect of EGR on some engine performance parameters, compared to lean burn, is not yet well established. In the current study, the effects of EGR and lean burn on natural gas SI engine performance were compared at similar operating conditions. This was achieved numerically by developing a computer simulation of a four-stroke spark-ignition natural gas engine. A two-zone combustion model was developed to simulate the in-cylinder conditions during combustion, and a kinetic model based on the extended Zeldovich mechanism was developed to predict NO emission. The combustion model was validated against experimental data, with good agreement between the results. It was demonstrated that adding EGR to the stoichiometric inlet charge at a constant inlet pressure of 130 kPa decreased power more rapidly than adding excess air; however, the power loss was recovered by increasing the inlet pressure from 130 kPa at zero dilution to 150 kPa at 20% EGR dilution. Engine fuel consumption increased by 10% with 20% EGR dilution at an inlet pressure of 150 kPa compared to 20% air dilution at 130 kPa. However, the EGR dilution strategy was found to produce substantially lower NO emission than the lean-burn technique: NO emission was reduced by about 70% when the inlet charge was diluted at a rate of 20% using EGR instead of excess air.

  2. Ionospheric scintillation forecasting model based on NN-PSO technique

    Science.gov (United States)

    Sridhar, M.; Venkata Ratnam, D.; Padma Raju, K.; Sai Praharsha, D.; Saathvika, K.

    2017-09-01

    The forecasting and modeling of ionospheric scintillation effects are crucial for precise satellite positioning and navigation applications. In this paper, a neural network model, trained using the Particle Swarm Optimization (PSO) algorithm, has been implemented for the prediction of amplitude scintillation index (S4) observations. Global Positioning System (GPS) and ionosonde data available at Darwin, Australia (12.4634° S, 130.8456° E) during 2013 were considered. A correlation analysis between GPS S4 and ionosonde drift velocities (hmF2 and foF2) data was conducted for forecasting the S4 values. The results indicate that the forecasted S4 values closely follow the measured S4 values for both quiet and disturbed conditions. The outcome of this work will be useful for understanding ionospheric scintillation phenomena over low-latitude regions.
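    A minimal sketch of the NN-PSO idea: a tiny feedforward network is trained with a plain global-best particle swarm instead of gradient descent. The data here are synthetic stand-ins for the ionosonde-to-S4 mapping, and every setting (network size, swarm parameters) is an assumption for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for (hmF2, foF2) -> S4 samples (hypothetical).
X = rng.uniform(-1, 1, size=(200, 2))
y = np.tanh(X[:, 0] - 0.5 * X[:, 1])           # synthetic target

H = 5                                           # hidden neurons
DIM = 2 * H + H + H + 1                         # W1 + b1 + w2 + b2

def predict(params, X):
    W1 = params[:2 * H].reshape(2, H)
    b1 = params[2 * H:3 * H]
    w2 = params[3 * H:4 * H]
    b2 = params[4 * H]
    return np.tanh(X @ W1 + b1) @ w2 + b2

def mse(params):
    return np.mean((predict(params, X) - y) ** 2)

# Plain global-best PSO over the flattened network weights.
n_particles, iters = 30, 200
pos = rng.uniform(-1, 1, size=(n_particles, DIM))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([mse(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1 = rng.random((n_particles, DIM))
    r2 = rng.random((n_particles, DIM))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(mse(gbest))  # far below the error of a random network
```

    PSO is attractive here because it needs only loss evaluations, no gradients, which suits non-differentiable or noisy objectives.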

  3. Techniques for studies of unbinned model independent CP violation

    Energy Technology Data Exchange (ETDEWEB)

    Bedford, Nicholas; Weisser, Constantin; Parkes, Chris; Gersabeck, Marco; Brodzicka, Jolanta; Chen, Shanzhen [University of Manchester (United Kingdom)

    2016-07-01

    Charge-Parity (CP) violation is a known part of the Standard Model and has been observed and measured in both the B and K meson systems. The observed levels, however, are insufficient to explain the observed matter-antimatter asymmetry in the Universe, and so other sources need to be found. One area of current investigation is the D meson system, where predicted levels of CP violation are much lower than in the B and K meson systems. This means that more sensitive methods are required when searching for CP violation in this system. Several unbinned model independent methods have been proposed for this purpose, all of which need to be optimised and their sensitivities compared.
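    One frequently proposed unbinned model-independent method for such searches is the energy test, which compares two samples through a distance-weighting function rather than bins. A toy sketch with a Gaussian weighting on synthetic 2D samples (this is a generic illustration, not necessarily one of the specific methods under study here):

```python
import numpy as np

def energy_test_statistic(a, b, sigma=0.3):
    """Unbinned two-sample 'energy test' statistic with Gaussian
    weighting psi(d) = exp(-d^2 / (2 sigma^2)).
    T ~ 0 when both samples come from the same density; T > 0 otherwise."""
    def mean_psi(x, y, same):
        d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
        psi = np.exp(-d2 / (2 * sigma ** 2))
        if same:
            n = len(x)
            return (psi.sum() - n) / (n * (n - 1))   # drop self-pairs
        return psi.mean()
    return 0.5 * mean_psi(a, a, True) + 0.5 * mean_psi(b, b, True) \
        - mean_psi(a, b, False)

rng = np.random.default_rng(1)
same = energy_test_statistic(rng.normal(0, 1, (300, 2)),
                             rng.normal(0, 1, (300, 2)))
diff = energy_test_statistic(rng.normal(0, 1, (300, 2)),
                             rng.normal(0.5, 1, (300, 2)))
print(same, diff)  # diff exceeds same
```

    In practice the significance of T is assessed with permutations of the combined sample, which keeps the procedure fully model independent.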

  4. The stucco decorations from St. Lorenzo in Laino (Como, Italy): The materials and the techniques employed by the 'Magistri Comacini'

    Energy Technology Data Exchange (ETDEWEB)

    Rampazzi, Laura [Dipartimento di Scienze Chimiche e Ambientali, Universita degli Studi dell'Insubria, via Valleggio 11, 22100 Como (Italy)], E-mail: laura.rampazzi@uninsubria.it; Rizzo, Biagio [Dipartimento di Scienze Chimiche e Ambientali, Universita degli Studi dell'Insubria, via Valleggio 11, 22100 Como (Italy); Colombo, Chiara; Conti, Claudia; Realini, Marco [Istituto per la Conservazione e la Valorizzazione dei Beni Culturali, Unita di Milano 'Gino Bozza', Area della Ricerca Milano 3 Bicocca, via Cozzi 53, 20125 Milano (Italy); Bartolucci, Ugo; Colombini, Maria Perla [Dipartimento di Chimica e Chimica Industriale, Universita degli Studi di Pisa, Via Risorgimento 35, 56126 Pisa (Italy); Spiriti, Andrea; Facchin, Laura [Dipartimento di Informatica e Comunicazione, Universita degli Studi dell'Insubria, Via Mazzini 5, 21100 Varese (Italy)

    2008-12-07

    The paper reports a cross-cutting and systematic approach to the analytical study of Baroque stucco decorations by the Italian artists known as the 'Magistri Comacini'. Samples from the Church of St. Lorenzo in Laino (Como, Italy) were analysed using chemical and mineralogical techniques, and the results were interpreted with the help of art historians in order to elucidate the artistic techniques. The binder, aggregate and organic fractions were characterised by optical microscopy, scanning electron microscopy with microprobe, X-ray powder diffraction, Fourier transform infrared spectroscopy, gas chromatography-mass spectrometry and laser ablation mass spectrometry with a plasma source. The results on major, minor and trace components revealed various recipes. In particular, the investigation of the finishing layers corroborated hypotheses of attribution and chronology and highlighted the different skills of the artists working in the building, shedding light on details of the decoration technique.

  5. Student employment

    DEFF Research Database (Denmark)

    Jacob, Marita; Gerth, Maria; Weiss, Felix

    2018-01-01

    In this article, we examine social origin differences in employment patterns across different stages of higher education and compare these differences between vocational and academic fields of study. Using data from a large-scale German student survey, we study the development of inequality, according to social origins, in student employment from first-year students through graduating students. We show that inequality in job quality exists and is partly attributable to the need for students from lower social origins to work to finance their studies. We hypothesise that initial inequalities…

  6. A titration model for evaluating calcium hydroxide removal techniques

    Directory of Open Access Journals (Sweden)

    Mark PHILLIPS

    2015-02-01

    Objective: Calcium hydroxide (Ca(OH)2) has been used in endodontics as an intracanal medicament due to its antimicrobial effects and its ability to inactivate bacterial endotoxin. The inability to totally remove this intracanal medicament from the root canal system, however, may interfere with the setting of eugenol-based sealers or inhibit bonding of resin to dentin, thus presenting clinical challenges with endodontic treatment. This study used a chemical titration method to measure residual Ca(OH)2 left after different endodontic irrigation methods. Material and Methods: Eighty-six human canine roots were prepared for obturation. Thirty teeth were filled with known but different amounts of Ca(OH)2 for 7 days, which were dissolved out and titrated to quantitate the residual Ca(OH)2 recovered from each root and produce a standard curve. Forty-eight of the remaining teeth were filled with equal amounts of Ca(OH)2 followed by gross Ca(OH)2 removal using hand files and randomized treatment of either: (1) syringe irrigation; (2) syringe irrigation with use of an apical file; (3) syringe irrigation with added 30 s of passive ultrasonic irrigation (PUI); or (4) syringe irrigation with apical file and PUI (n=12/group). Residual Ca(OH)2 was dissolved with glycerin and titrated to measure residual Ca(OH)2 left in the root. Results: No method completely removed all residual Ca(OH)2. The addition of 30 s PUI, with or without apical file use, removed Ca(OH)2 significantly better than irrigation alone. Conclusions: This technique allowed quantification of residual Ca(OH)2. The use of PUI (with or without apical file) resulted in significantly lower Ca(OH)2 residue compared to irrigation alone.
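    The quantitation step amounts to fitting a standard curve from the calibration roots and inverting it for each treated root. A hypothetical sketch of that calibration, with every number invented for illustration:

```python
import numpy as np

# Hypothetical standard-curve data: known Ca(OH)2 masses (mg) placed in
# roots vs. titrant volume (mL) recovered, mimicking the approach of
# titrating known amounts to build a calibration line.
known_mg = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
titrant_ml = np.array([0.05, 1.02, 2.10, 3.01, 4.08, 4.95])

# Least-squares line through the calibration points.
slope, intercept = np.polyfit(known_mg, titrant_ml, 1)

def residual_caoh2_mg(measured_ml):
    """Invert the standard curve: titrant volume -> residual Ca(OH)2."""
    return (measured_ml - intercept) / slope

print(round(residual_caoh2_mg(2.5), 1))  # ~12.3 mg on this toy curve
```

    Each experimental root's titrant volume is then read back through the inverted line to report residual Ca(OH)2 in milligrams.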

  7. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it presents all well-known algorithms in detail, elucidating their merits and weaknesses, and accompanies the theoretical content with a variety of applications. In the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro…

  8. Optimization of the design of thick, segmented scintillators for megavoltage cone-beam CT using a novel, hybrid modeling technique

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Langechuan; Antonuk, Larry E., E-mail: antonuk@umich.edu; El-Mohri, Youcef; Zhao, Qihua; Jiang, Hao [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109 (United States)

    2014-06-15

    Purpose: Active matrix flat-panel imagers (AMFPIs) incorporating thick, segmented scintillators have demonstrated order-of-magnitude improvements in detective quantum efficiency (DQE) at radiotherapy energies compared to systems based on conventional phosphor screens. Such improved DQE values facilitate megavoltage cone-beam CT (MV CBCT) imaging at clinically practical doses. However, the MV CBCT performance of such AMFPIs is highly dependent on the design parameters of the scintillators. In this paper, optimization of the design of segmented scintillators was explored using a hybrid modeling technique which encompasses both radiation and optical effects. Methods: Imaging performance in terms of the contrast-to-noise ratio (CNR) and spatial resolution of various hypothetical scintillator designs was examined through a hybrid technique involving Monte Carlo simulation of radiation transport in combination with simulation of optical gain distributions and optical point spread functions. The optical simulations employed optical parameters extracted from a best fit to measurement results reported in a previous investigation of a 1.13 cm thick, 1016 μm pitch prototype BGO segmented scintillator. All hypothetical designs employed BGO material with a thickness and element-to-element pitch ranging from 0.5 to 6 cm and from 0.508 to 1.524 mm, respectively. In the CNR study, for each design, full tomographic scans of a contrast phantom incorporating various soft-tissue inserts were simulated at a total dose of 4 cGy. Results: Theoretical values for contrast, noise, and CNR were found to be in close agreement with empirical results from the BGO prototype, strongly supporting the validity of the modeling technique. CNR and spatial resolution for the various scintillator designs demonstrate complex behavior as scintillator thickness and element pitch are varied, with a clear trade-off between these two imaging metrics up to a thickness of ∼3 cm. Based on these results, an…
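    A common definition of the CNR metric used in such contrast-phantom studies (not necessarily the authors' exact formula) is the insert-to-background mean difference divided by the background noise, which can be sketched on a synthetic reconstructed slice:

```python
import numpy as np

def cnr(image, insert_mask, background_mask):
    """Contrast-to-noise ratio of a soft-tissue insert in a reconstructed
    slice: |mean(insert) - mean(background)| / std(background)."""
    ins = image[insert_mask]
    bg = image[background_mask]
    return abs(ins.mean() - bg.mean()) / bg.std()

# Synthetic slice: uniform noisy background with one low-contrast disc.
rng = np.random.default_rng(0)
img = rng.normal(100.0, 2.0, size=(128, 128))
yy, xx = np.mgrid[:128, :128]
r2 = (yy - 64) ** 2 + (xx - 64) ** 2
disc = r2 < 15 ** 2
img[disc] += 5.0                          # +5% contrast insert

bg_ring = (r2 > 25 ** 2) & (r2 < 40 ** 2)  # background annulus
val = cnr(img, disc, bg_ring)
print(val)  # ≈ 2.5 for these noise settings
```

    In a design study like the one above, this quantity is evaluated per insert and per scintillator design at matched dose.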

  9. Using the Social Enterprise Intervention (SEI) and Individual Placement and Support (IPS) models to improve employment and clinical outcomes of homeless youth with mental illness.

    Science.gov (United States)

    Ferguson, Kristin M

    2013-09-01

    Prior research reveals high unemployment rates among homeless youth. The literature offers many examples of using evidence-informed and evidence-based supported employment models with vulnerable populations to assist them in obtaining and maintaining employment and concurrently addressing mental health challenges. However, there are few examples to date of these models with homeless youth with mental illness. The purpose of this article was thus to describe a methodology for establishing a university-agency research partnership to design, implement, evaluate, and replicate evidence-informed and evidence-based interventions with homeless youth with mental illness to enhance their employment, mental health, and functional outcomes. Data from two studies are used to illustrate the relationship between vocational skill-building/employment and mental health among homeless youth. The article concludes with a discussion of the implications of conducting community-based participatory employment and clinical intervention research. The author highlights the opportunities and tensions associated with this approach.

  10. Who will have Sustainable Employment After a Back Injury? The Development of a Clinical Prediction Model in a Cohort of Injured Workers.

    Science.gov (United States)

    Shearer, Heather M; Côté, Pierre; Boyle, Eleanor; Hayden, Jill A; Frank, John; Johnson, William G

    2017-09-01

    Purpose Our objective was to develop a clinical prediction model to identify workers with sustainable employment following an episode of work-related low back pain (LBP). Methods We used data from a cohort study of injured workers with incident LBP claims in the USA to predict employment patterns 1 and 6 months following a workers' compensation claim. We developed three sequential models to determine the contribution of three domains of variables: (1) basic demographic/clinical variables; (2) health-related variables; and (3) work-related factors. Multivariable logistic regression was used to develop the predictive models. We constructed receiver operator curves and used the c-index to measure predictive accuracy. Results Seventy-nine percent and 77 % of workers had sustainable employment at 1 and 6 months, respectively. Sustainable employment at 1 month was predicted by initial back pain intensity, mental health-related quality of life, claim litigation and employer type (c-index = 0.77). At 6 months, sustainable employment was predicted by physical and mental health-related quality of life, claim litigation and employer type (c-index = 0.77). Adding health-related and work-related variables to models improved predictive accuracy by 8.5 and 10 % at 1 and 6 months respectively. Conclusion We developed clinically-relevant models to predict sustainable employment in injured workers who made a workers' compensation claim for LBP. Inquiring about back pain intensity, physical and mental health-related quality of life, claim litigation and employer type may be beneficial in developing programs of care. Our models need to be validated in other populations.
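    The modeling approach, multivariable logistic regression scored by the c-index, can be sketched on synthetic data. Variable names and coefficients below are hypothetical stand-ins for predictors like pain intensity, mental-health score, and claim litigation, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic cohort: three standardized predictors, binary outcome
# "sustainable employment at follow-up" drawn from a logistic model.
n = 1000
X = rng.normal(size=(n, 3))
logit = 1.2 - 0.8 * X[:, 0] + 0.6 * X[:, 1] - 0.5 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Multivariable logistic regression fitted by plain gradient ascent
# on the log-likelihood.
Xb = np.hstack([np.ones((n, 1)), X])
beta = np.zeros(4)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ beta))
    beta += 0.1 * Xb.T @ (y - p) / n

# c-index: probability that a random positive case is scored above a
# random negative case (equivalent to the area under the ROC curve).
scores = Xb @ beta
pos, neg = scores[y == 1], scores[y == 0]
c_index = (pos[:, None] > neg[None, :]).mean() \
    + 0.5 * (pos[:, None] == neg[None, :]).mean()
print(round(c_index, 2))
```

    A c-index of 0.5 means no discrimination and 1.0 perfect discrimination; values around 0.77, as reported above, indicate useful but imperfect prediction.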

  11. Application of nonlinear reduction techniques in chemical process modeling: a review

    International Nuclear Information System (INIS)

    Muhaimin, Z; Aziz, N.; Abd Shukor, S.R.

    2006-01-01

    Model reduction techniques have been used widely in engineering fields, for electrical and mechanical as well as chemical engineering. The basic idea of a reduction technique is to replace the original system by an approximating system with a much smaller state-space dimension. A reduced-order model is more beneficial for process and industrial applications in terms of control purposes. This paper provides a review of the application of nonlinear reduction techniques in chemical processes. The advantages and disadvantages of each technique reviewed are also highlighted.
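    A simple instance of the basic idea, replacing the original system by an approximation with a much smaller state-space dimension, is SVD-based projection (proper orthogonal decomposition). The sketch below uses a generic stable linear system, not a specific chemical process:

```python
import numpy as np

rng = np.random.default_rng(0)

# Full-order stable linear system x' = A x with n = 100 states,
# standing in for a large process model.
n = 100
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n)) / np.sqrt(n)
x0 = rng.normal(size=n)

dt, steps = 0.01, 500
def simulate(A, x0):
    xs, x = [x0], x0
    for _ in range(steps):
        x = x + dt * (A @ x)            # explicit Euler step
        xs.append(x)
    return np.array(xs)

X_full = simulate(A, x0)

# POD: project the dynamics onto the r dominant left singular vectors
# of the snapshot matrix (an SVD-based reduction technique).
U, s, _ = np.linalg.svd(X_full.T, full_matrices=False)
r = 10
V = U[:, :r]                            # n x r projection basis
Ar = V.T @ A @ V                        # reduced r x r system
X_red = simulate(Ar, V.T @ x0) @ V.T    # lift back to full space

err = np.linalg.norm(X_full - X_red) / np.linalg.norm(X_full)
print(err)  # small relative error despite 10x fewer states
```

    Nonlinear reduction methods reviewed in such papers extend this projection idea with techniques that also approximate the nonlinear terms cheaply.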

  12. Nuclear fuel cycle optimization - methods and modelling techniques

    International Nuclear Information System (INIS)

    Silvennoinen, P.

    1982-01-01

    This book is aimed at presenting methods applicable in the analysis of fuel cycle logistics and optimization as well as in evaluating the economics of different reactor strategies. After a succinct introduction to the phases of a fuel cycle, uranium cost trends are assessed in a global perspective and subsequent chapters deal with the fuel cycle problems faced by a power utility. A fundamental material flow model is introduced first in the context of light water reactor fuel cycles. Besides the minimum cost criterion, the text also deals with other objectives providing for a treatment of cost uncertainties and of the risk of proliferation of nuclear weapons. Methods to assess mixed reactor strategies, comprising also other reactor types than the light water reactor, are confined to cost minimization. In the final Chapter, the integration of nuclear capacity within a generating system is examined. (author)

  13. Application of nonlinear forecasting techniques for meteorological modeling

    Directory of Open Access Journals (Sweden)

    V. Pérez-Muñuzuri

    2000-10-01

    A nonlinear forecasting method was used to predict the behavior of a cloud coverage time series several hours in advance. The method is based on the reconstruction of a chaotic strange attractor using four years of cloud absorption data obtained from half-hourly Meteosat infrared images of Northwestern Spain. An exhaustive nonlinear analysis of the time series was carried out to reconstruct the phase space of the underlying chaotic attractor. The forecast values are used by the non-hydrostatic meteorological model ARPS for daily weather prediction, and the results are compared with surface temperature measurements from a meteorological station and a vertical sounding. The effect of noise in the time series is analyzed in terms of the prediction results. Key words: Meteorology and atmospheric dynamics (mesoscale meteorology; general); General (new fields)
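    The core of such nonlinear forecasting is delay embedding of the scalar series followed by prediction from nearest neighbours in the reconstructed phase space. A sketch using a logistic-map series in place of the cloud-absorption data, with illustrative choices of embedding dimension and delay:

```python
import numpy as np

# Chaotic scalar series (logistic map) standing in for the cloud data.
N = 3000
x = np.empty(N)
x[0] = 0.4
for i in range(N - 1):
    x[i + 1] = 3.9 * x[i] * (1 - x[i])

# Takens delay embedding: states (x_{t-2*tau}, x_{t-tau}, x_t).
m, tau = 3, 1
def embed(series, m, tau):
    rows = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau: i * tau + rows]
                            for i in range(m)])

train, test_start = x[:2500], 2500
E = embed(train, m, tau)
states, targets = E[:-1], train[(m - 1) * tau + 1:]

def forecast(history):
    """Predict the next value: find the nearest embedded neighbour of
    the current state and return what followed it (method of analogues)."""
    q = history[-(m - 1) * tau - 1::tau]          # current delay vector
    d = np.linalg.norm(states - q, axis=1)
    return targets[d.argmin()]

# One-step-ahead forecasts over the held-out part of the series.
errs = [abs(forecast(x[:t + 1]) - x[t + 1])
        for t in range(test_start, N - 1)]
print(np.mean(errs))  # far below the series' own spread
```

    Multi-hour forecasts follow by iterating the one-step prediction, with errors growing at a rate set by the attractor's Lyapunov exponents.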

  14. Application of nonlinear forecasting techniques for meteorological modeling

    Directory of Open Access Journals (Sweden)

    V. Pérez-Muñuzuri

    A nonlinear forecasting method was used to predict the behavior of a cloud coverage time series several hours in advance. The method is based on the reconstruction of a chaotic strange attractor using four years of cloud absorption data obtained from half-hourly Meteosat infrared images of Northwestern Spain. An exhaustive nonlinear analysis of the time series was carried out to reconstruct the phase space of the underlying chaotic attractor. The forecast values are used by the non-hydrostatic meteorological model ARPS for daily weather prediction, and the results are compared with surface temperature measurements from a meteorological station and a vertical sounding. The effect of noise in the time series is analyzed in terms of the prediction results.

    Key words: Meteorology and atmospheric dynamics (mesoscale meteorology; general); General (new fields)

  15. Fuel element transfer cask modelling using MCNP technique

    International Nuclear Information System (INIS)

    Rosli Darmawan

    2009-01-01

    After operating for more than 25 years, some of the Reaktor TRIGA PUSPATI (RTP) fuel elements will have been depleted, and a few fuel addition and reconfiguration exercises have to be conducted in order to maintain RTP capacity. Presently, RTP spent fuels are stored in the storage area inside the RTP tank. The need to transfer fuel elements out of the RTP tank may become prevalent in the near future, so preparations should start now. A fuel element transfer cask has been designed according to the recommendations of the fuel manufacturer and the experience of other countries, and a model using the MCNP code has been developed to analyse the design. The results show that the transfer cask design is safe for handling fuel elements outside the RTP tank according to current regulatory requirements. (author)

  16. Fuel Element Transfer Cask Modelling Using MCNP Technique

    International Nuclear Information System (INIS)

    Darmawan, Rosli; Topah, Budiman Naim

    2010-01-01

    After operating for more than 25 years, some of the Reaktor TRIGA Puspati (RTP) fuel elements will have been depleted, and a few fuel addition and reconfiguration exercises have to be conducted in order to maintain RTP capacity. Presently, RTP spent fuels are stored in the storage area inside the RTP tank. The need to transfer fuel elements out of the RTP tank may become prevalent in the near future, so preparations should start now. A fuel element transfer cask has been designed according to the recommendations of the fuel manufacturer and the experience of other countries, and a model using the MCNP code has been developed to analyse the design. The results show that the transfer cask design is safe for handling fuel elements outside the RTP tank according to current regulatory requirements.

  17. Maine's Employability Skills Program

    Science.gov (United States)

    McMahon, John M.; Wolffe, Karen E.; Wolfe, Judy; Brooker, Carrie

    2013-01-01

    This Practice Report describes the development and implementation of the "Maine Employability Skills Program," a model employment program developed by the Maine Division for the Blind and Visually Impaired (DBVI). The program was designed to support the efforts of the chronically unemployed or underemployed. These consumers were either…

  18. Advancing botnet modeling techniques for military and security simulations

    Science.gov (United States)

    Banks, Sheila B.; Stytz, Martin R.

    2011-06-01

    Simulation environments serve many purposes, but they are only as good as their content. One of the most challenging and pressing areas calling for improved content is the simulation of bot armies (botnets) and their effects upon networks and computer systems. Botnets are a new type of malware, one that is more powerful and potentially dangerous than any other type. A botnet's power derives from several capabilities, including the following: (1) the botnet's capability to be controlled and directed throughout all phases of its activity, (2) a command and control structure that grows increasingly sophisticated, and (3) the ability of a bot's software to be updated at any time by the owner of the bot (a person commonly called a bot master or bot herder). Not only is a bot army powerful and agile in its technical capabilities, it can also be extremely large, comprising tens of thousands, if not millions, of compromised computers, or as small as a few thousand targeted systems. In all botnets, the members can surreptitiously communicate with each other and their command and control centers. In sum, these capabilities allow a bot army to execute attacks that are technically sophisticated, difficult to trace, tactically agile, massive, and coordinated. To improve our understanding of their operation and potential, we believe that it is necessary to develop computer security simulations that accurately portray bot army activities, with the goal of including bot army simulations within military simulation environments. In this paper, we investigate issues that arise when simulating bot armies and propose a combination of the biologically inspired MSEIR infection spread model coupled with the jump-diffusion infection spread model to portray botnet propagation.
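    The deterministic skeleton of such compartmental propagation models can be illustrated with a simplified SEIR sketch (dropping the M class and the jump-diffusion coupling of the proposed MSEIR approach; all rate values are invented for illustration):

```python
# Simplified SEIR compartment model of botnet propagation: hosts are
# Susceptible, Exposed (compromised but dormant), Infectious (active
# bots), or Recovered/patched. Rates below are illustrative only.
beta, sigma, gamma = 0.4, 0.2, 0.05   # contact, activation, cleanup rates
S, E, I, R = 0.999, 0.0, 0.001, 0.0   # fractions of the host population

dt, T = 0.1, 200.0
history = []
for _ in range(int(T / dt)):          # forward-Euler integration
    dS = -beta * S * I
    dE = beta * S * I - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    S += dt * dS
    E += dt * dE
    I += dt * dI
    R += dt * dR
    history.append(I)

print(max(history))  # epidemic peak: fraction of hosts as active bots
```

    With a contact rate well above the cleanup rate, the model produces the explosive growth and large eventual footprint characteristic of real botnet outbreaks.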

  19. Effect of the Impeller Design on Degasification Kinetics Using the Impeller Injector Technique Assisted by Mathematical Modeling

    Directory of Open Access Journals (Sweden)

    Diego Abreu-López

    2017-04-01

    A mathematical model was developed to describe the hydrodynamics of a batch reactor for aluminum degassing utilizing the rotor-injector technique. The mathematical model uses an Eulerian algorithm to represent the two-phase system, including the simulation of vortex formation at the free surface, and the RNG k-ε model to account for turbulence in the system. The model was employed to test the performance of three different impeller designs, two of which are available commercially, while the third is a new design proposed in previous work. The model simulates the hydrodynamics of a water physical model and consequently helps to explain and connect the degassing kinetics and gas consumption performances found in the physical modeling previously reported. The model reveals that the new impeller design distributes the bubbles more uniformly throughout the ladle and exhibits a better-agitated bath, since the transfer of momentum to the fluids is better. Gas is evenly distributed with this design because both phases, gas and liquid, are dragged to the bottom of the ladle as a result of the higher pumping effect in comparison to the commercial designs.
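    Degassing kinetics of this kind are often summarised by a first-order approach to the equilibrium gas content. A hypothetical sketch comparing rate constants for two impellers (all numbers invented, not the paper's measurements):

```python
import numpy as np

# First-order degassing model dC/dt = -k (C - C_eq): the better an
# impeller disperses the purge-gas bubbles, the larger the rate
# constant k. All values below are illustrative placeholders.
def concentration(t, C0, C_eq, k):
    return C_eq + (C0 - C_eq) * np.exp(-k * t)

C0, C_eq = 0.30, 0.05                      # dissolved gas, e.g. mL/100 g
for name, k in [("commercial design", 0.005), ("new design", 0.009)]:
    t90 = np.log(10) / k                   # time to remove 90% of excess gas
    print(f"{name}: t90 = {t90:.0f} s, "
          f"C(t90) = {concentration(t90, C0, C_eq, k):.3f}")
```

    Comparing fitted k values (or the derived t90 times) gives a single-number summary of how much faster one impeller degasses than another.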

  20. A wideband LNA employing gate-inductive-peaking and noise-canceling techniques in 0.18 μm CMOS

    International Nuclear Information System (INIS)

    Bao Kuan; Fan Xiangning; Li Wei; Zhang Li; Wang Zhigong

    2012-01-01

    This paper presents a wideband low noise amplifier (LNA) for multi-standard radio applications. The low noise characteristic is achieved by a noise-canceling technique, while the bandwidth is enhanced by a gate-inductive-peaking technique. High-frequency noise performance is consequently improved by the flattened gain over the entire operating frequency band. Fabricated in a 0.18 μm CMOS process, the LNA achieves 2.5 GHz of −3 dB bandwidth and 16 dB of gain. The gain variation is within ±0.8 dB from 300 MHz to 2.2 GHz. The measured noise figure (NF) and average IIP3 are 3.4 dB and −2 dBm, respectively. The proposed LNA occupies 0.39 mm² of core chip area. Operating at 1.8 V, the LNA draws a current of 11.7 mA. (semiconductor integrated circuits)

  1. Multivariate moment closure techniques for stochastic kinetic models

    International Nuclear Information System (INIS)

    Lakatos, Eszter; Ale, Angelique; Kirk, Paul D. W.; Stumpf, Michael P. H.

    2015-01-01

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive, and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, an interplay emerges between the nonlinearities and the stochastic dynamics, which is much harder for such approximations to the true stochastic processes to capture correctly. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closures and illustrate their use in the context of two models that have proved challenging to previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinase signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.
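    The moment-expansion idea can be checked on the linear birth-death process, for which the mean and variance equations close exactly without any higher-order closure. This is a sanity-check sketch, not one of the paper's nonlinear examples:

```python
# Moment equations for the linear birth-death process
#   0 -> X at rate k,  X -> 0 at rate g*X.
# For this linear system the mean/variance ODEs close exactly (no
# Gaussian/gamma/lognormal closure needed), and the stationary law is
# Poisson with mean = variance = k/g.
k, g = 10.0, 1.0
m, v = 0.0, 0.0                 # mean and variance, X(0) = 0
dt = 0.001
for _ in range(20000):          # integrate to t = 20 (>> 1/g)
    dm = k - g * m
    dv = k + g * m - 2 * g * v
    m += dt * dm
    v += dt * dv

print(m, v)  # both approach k/g = 10
```

    For nonlinear reaction systems the corresponding moment ODEs contain higher-order moments, and closures like those developed in the paper are what truncate that hierarchy.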

  2. Multivariate moment closure techniques for stochastic kinetic models

    Energy Technology Data Exchange (ETDEWEB)

    Lakatos, Eszter, E-mail: e.lakatos13@imperial.ac.uk; Ale, Angelique; Kirk, Paul D. W.; Stumpf, Michael P. H., E-mail: m.stumpf@imperial.ac.uk [Department of Life Sciences, Centre for Integrative Systems Biology and Bioinformatics, Imperial College London, London SW7 2AZ (United Kingdom)

    2015-09-07

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive, and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, an interplay emerges between the nonlinearities and the stochastic dynamics, which is much harder for such approximations to the true stochastic processes to capture correctly. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closures and illustrate their use in the context of two models that have proved challenging to previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinase signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.

  3. Effect of Credit Risk Management Techniques on the Performance of Unsecured Bank Loans Employed by Commercial Banks in Kenya

    Directory of Open Access Journals (Sweden)

    Prof. R.W Gakure

    2013-07-01

    Financial risk in a banking organization is the possibility that the outcome of an action or event could have adverse impacts. Such outcomes could either result in a direct loss of earnings/capital or may result in the imposition of constraints on the bank's ability to meet its business objectives. The purpose of this study was to investigate the effect of credit risk management techniques on the performance of unsecured bank loans by commercial banks in Kenya.

  4. A 3D Resistivity Model of the Pyramid of La Luna (Teotihuacan, Mexico) Subsoil by Employing Non-Conventional ERT Arrays

    Science.gov (United States)

    Chavez, R. E.; Tejero, A.; Cifuentes, G.; Garcia-Serrano, A.; Argote-Espino, D. L.; HernaNdez-Quintero, J. E.; Ortega, V.

    2017-12-01

    The Pyramid of La Luna is found within the archaeological site of Teotihuacan, located to the NE of Mexico City. This pre-Hispanic city developed between 250 AD and 450 AD, with a population of 100,000 people. The most important edifices are the pyramids of El Sol and La Luna. The pyramid of El Sol is one of the largest pre-Hispanic structures found in Mexico today (a square base of approximately 200 m x 225 m). The pyramid of La Luna (with a base of 140 m x 150 m), smaller in size, is located towards the northern portion of this ancient city. In front of it lies a large plaza surrounded by pyramids of different ages. Previous archaeological studies carried out within the plaza discovered small shallow pipes for water discharge, so it is possible that deeper structures exist within the square. A geophysical survey was carried out in the Plaza of La Luna employing 3D electrical resistivity tomography (ERT-3D) to build a 3D resistivity model. Four ERT profiles, each 80 m long, were deployed in the area in the E-W direction, with electrodes inserted 3 m apart. A roll-along technique was employed to obtain a 3D view of the plaza subsoil. Gradient (G), Equatorial (Eq), and Minimum Coupling (MC) arrays were applied, yielding a total of 2,600 apparent resistivity observations. In addition, the pyramid was surrounded with 105 electrodes to illuminate the subsoil beneath the structure, employing the 'L' and 'Corner' arrays with the settings already mentioned. Electrodes were separated by 5 m on the E and N sides and 6 m on the W and S sides, and topographic correction was added to the interpretation; 7,200 apparent resistivity values were obtained. Processing of the data included noise filtering, correction for true electrode positions, and removal of spikes. Finally, the data were inverted to compute a 3D resistivity distribution of the subsoil. Preliminary results indicate the presence of high-resistivity anomalies probably associated with infill or archaeological features. However, an interesting resistivity
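
The quantity these arrays measure, and which the inversion step works on, is the apparent resistivity of a four-electrode spread. A minimal sketch (a generic helper, not code from the survey) computes it from the standard geometric-factor formula for current electrodes A, B and potential electrodes M, N on a line:

```python
import math

# Apparent resistivity for a four-electrode array:
#   rho_a = k * dV / I,  with geometric factor
#   k = 2*pi / (1/AM - 1/BM - 1/AN + 1/BN)
# where AM, BM, AN, BN are electrode separations in metres.

def geometric_factor(A, B, M, N):
    am, bm, an, bn = abs(M - A), abs(M - B), abs(N - A), abs(N - B)
    return 2.0 * math.pi / (1.0 / am - 1.0 / bm - 1.0 / an + 1.0 / bn)

def apparent_resistivity(A, B, M, N, dV, I):
    return geometric_factor(A, B, M, N) * dV / I

# Sanity check: a Wenner-alpha spread A-M-N-B with spacing a gives k = 2*pi*a.
a = 3.0
k = geometric_factor(0.0, 3 * a, a, 2 * a)
print(k, 2 * math.pi * a)
```

Non-conventional arrays like the gradient or 'L'/'Corner' layouts mentioned above reduce to the same formula with different electrode positions.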

  5. Northern employment

    International Nuclear Information System (INIS)

    Zavitz, J.

    1997-01-01

    Hiring practices and policies and employment opportunities that were available in the Beaufort Sea and Mackenzie Delta project, for local residents and for people from southern Canada, were dealt with in this chapter. Depending on the source, Northern hiring was a mere token, or a genuine and successful effort on the part of the companies to involve the native population and to share with them the benefits of the project. The fact remains that opening up job opportunities for Northerners was not easily attained, and would never have been realized without the involvement of government and community organizations. Government also played a major role in developing policies and training regimes. By the end of exploration operations, the hiring of Northern residents in the oil and gas industry had become a requirement of drilling applications. Training programs were also created to ensure that Northern residents received the means necessary to take advantage of Northern employment opportunities.

  6. Effectiveness of behavioral change techniques employed in eHealth interventions designed to improve glycemic control in persons with poorly controlled type 2 diabetes: a systematic review and meta-analysis protocol

    Directory of Open Access Journals (Sweden)

    Mihiretu Kebede

    2017-10-01

    Full Text Available Abstract Background The incorporation of Behavioral Change Techniques (BCTs) in eHealth interventions for the management of non-communicable diseases (NCDs), such as type 2 diabetes mellitus (T2DM), might be a promising approach to improve clinical and behavioral outcomes of NCDs in the long run. This paper reports a protocol for a systematic review that aims to (a) identify the effects of individual BCTs in eHealth interventions for lowering glycated hemoglobin (HbA1c) levels and (b) investigate which additional intervention features (duration of intervention, tailoring, theory base, and mode of delivery) affect levels of HbA1c in this population. The protocol follows the Preferred Reporting Items for Systematic review and Meta-Analysis Protocols (PRISMA-P) 2015 guideline. Methods/design To identify eligible studies, an extensive systematic database search (PubMed, Web of Science, and PsycINFO) using keywords will be conducted. This review will include randomized controlled trials examining the effects of eHealth interventions on HbA1c in persons with poorly controlled T2DM over a minimum follow-up period of 3 months. Relevant data will be extracted from the included studies using Microsoft Excel. The content of the interventions will be extracted from the description of interventions and will be classified according to the BCT taxonomy v1 tool. The quality of studies will be independently assessed by two reviewers using the Cochrane risk of bias tool. If the studies have adequate homogeneity, meta-analysis will be considered. The effect sizes of each BCT will be calculated using the random effects model. The quality of the synthesized evidence will be evaluated employing the Grading of Recommendations Assessment, Development and Evaluation (GRADE) criteria. Discussion This systematic review is one of the first to appraise the effectiveness of eHealth interventions employing BCTs aimed at improving glycemic control in persons with poorly

  7. Predictive modeling in Clostridium acetobutylicum fermentations employing Raman spectroscopy and multivariate data analysis for real-time culture monitoring

    Science.gov (United States)

    Zu, Theresah N. K.; Liu, Sanchao; Germane, Katherine L.; Servinsky, Matthew D.; Gerlach, Elliot S.; Mackie, David M.; Sund, Christian J.

    2016-05-01

    The coupling of optical fibers with Raman instrumentation has proven to be effective for real-time monitoring of chemical reactions and fermentations when combined with multivariate statistical data analysis. Raman spectroscopy is relatively fast, with little interference from the water peak present in fermentation media. Medical research has explored this technique for analysis of mammalian cultures for potential diagnosis of some cancers. Other organisms studied via this route include Escherichia coli, Saccharomyces cerevisiae, and some Bacillus sp., though very little work has been performed on Clostridium acetobutylicum cultures. C. acetobutylicum is a gram-positive anaerobic bacterium, which is highly sought after due to its ability to use a broad spectrum of substrates and produce useful byproducts through the well-known Acetone-Butanol-Ethanol (ABE) fermentation. In this work, real-time Raman data was acquired from C. acetobutylicum cultures grown on glucose. Samples were collected concurrently for comparative off-line product analysis. Partial-least squares (PLS) models were built both for agitated cultures and for static cultures from both datasets. Media components and metabolites monitored include glucose, butyric acid, acetic acid, and butanol. Models were cross-validated with independent datasets. Experiments with agitation were more favorable for modeling with goodness of fit (QY) values of 0.99 and goodness of prediction (Q2Y) values of 0.98. Static experiments did not model as well as agitated experiments. Raman results showed the static experiments were chaotic, especially during and shortly after manual sampling.
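
The core of such real-time monitoring is the PLS calibration that maps spectra onto concentrations. A minimal NIPALS PLS1 sketch (not the paper's pipeline; the synthetic data stand in for Raman spectra and an off-line assay such as butanol concentration) illustrates the fit-and-predict cycle:

```python
import numpy as np

# NIPALS PLS1: extract latent components from centered X and y, then
# assemble the regression vector B = W (P^T W)^{-1} q.

def pls1_fit(X, y, n_components):
    Xr, yr = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)            # weight vector
        t = Xr @ w                        # scores
        tt = t @ t
        p = Xr.T @ t / tt                 # X loadings
        qk = yr @ t / tt                  # y loading
        Xr = Xr - np.outer(t, p)          # deflate
        yr = yr - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, X.mean(0), y.mean()

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))             # stand-in for preprocessed spectra
beta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta                              # noiseless synthetic "assay" values

B, xm, ym = pls1_fit(X, y, n_components=5)
y_hat = (X - xm) @ B + ym
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - ym) ** 2)
print(r2)
```

In practice the number of components is chosen by cross-validation against independent samples, exactly the QY/Q2Y distinction the abstract reports.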

  8. Interdependencies of acquisition, detection, and reconstruction techniques on the accuracy of iodine quantification in varying patient sizes employing dual-energy CT

    Energy Technology Data Exchange (ETDEWEB)

    Marin, Daniele; Pratts-Emanuelli, Jose J.; Mileto, Achille; Bashir, Mustafa R.; Nelson, Rendon C.; Boll, Daniel T. [Duke University Medical Center, Department of Radiology, Durham, NC (United States); Husarik, Daniela B. [University Hospital Zurich, Diagnostic and Interventional Radiology, Zurich (Switzerland)

    2014-10-03

    To assess the impact of patient habitus, acquisition parameters, detector efficiencies, and reconstruction techniques on the accuracy of iodine quantification using dual-source dual-energy CT (DECT). Two phantoms simulating small and large patients contained 20 iodine solutions mimicking vascular and parenchymal enhancement from saline isodensity to 400 HU and 30 iodine solutions simulating enhancement of the urinary collecting system from 400 to 2,000 HU. DECT acquisition (80/140 kVp and 100/140 kVp) was performed using two DECT systems equipped with standard and integrated electronics detector technologies. DECT raw datasets were reconstructed using filtered backprojection (FBP) and iterative reconstruction (SAFIRE I/V). Accuracy for iodine quantification was significantly higher for the small compared to the large phantoms (9.2 % ± 7.5 vs. 24.3 % ± 26.1, P = 0.0001), the integrated compared to the conventional detectors (14.8 % ± 20.6 vs. 18.8 % ± 20.4, respectively; P = 0.006), and SAFIRE V compared to SAFIRE I and FBP reconstructions (15.2 % ± 18.1 vs. 16.1 % ± 17.6 and 18.9 % ± 20.4, respectively; P ≤ 0.003). A significant synergism was observed when the most effective detector and reconstruction techniques were combined with habitus-adapted dual-energy pairs. In a second-generation dual-source DECT system, the accuracy of iodine quantification can be substantially improved by an optimal choice and combination of acquisition parameters, detector, and reconstruction techniques. (orig.)

  9. Addressing the midwifery workforce crisis: evaluating an employment model for undergraduate midwifery students at a tertiary maternity hospital in Melbourne, Australia.

    Science.gov (United States)

    McLachlan, Helen L; Forster, Della A; Ford, Rachael L; Farrell, Tanya

    2011-12-01

    In Victoria, maternity services are under significant strain due to increased numbers of women giving birth and critical workforce shortages. Hospitals have experienced challenges in adequately staffing maternity units, particularly on postnatal wards. In 2008, a tertiary maternity hospital in Melbourne introduced a model where undergraduate midwifery students were employed as Division 2 nurses (SMW_Div2) (enrolled nurses), to work in the postnatal area only. This study explored the pilot employment model from the perspective of the SMW_Div2 and hospital midwives. A web-based survey was administered to hospital midwives and the SMW_Div2s in the employment model in January 2010. The survey explored the views of midwives and SMW_Div2s regarding the perceived impact of the model on workforce readiness, recruitment and retention, and clinical competence and confidence. Forty-seven of 158 midwives (30%) and five of nine SMW_Div2s employed in the model responded to the survey. Both groups considered the model to have benefits for the organisation, including increased: student workforce readiness; clinical confidence and competence; and organisational loyalty. Both groups also considered that the model would facilitate: workforce recruitment; a teaching and learning culture within the organisation; and enhanced partnerships between students, hospitals and universities. Caution was expressed regarding workload and the need for ongoing support for SMW_Div2s working in the model. SMW_Div2s and midwives were positive about the introduction of the paid employment model at the Women's. The findings are consistent with evaluations of similar programs in the nursing setting. The employment model has potential short and long term individual and organisational advantages, which is important in the context of increasing births and workforce shortages. Progression of such models will be contingent on the collaboration and cooperation of the various stakeholders involved in maternity

  10. Evaluation of data assimilation techniques for a mesoscale meteorological model and their effects on air quality model results

    Energy Technology Data Exchange (ETDEWEB)

    Amicarelli, A; Pelliccioni, A [ISPESL - Dipartimento Insediamenti Produttivi e Interazione con l' Ambiente, Via Fontana Candida, 1 00040 Monteporzio Catone (RM) Italy (Italy); Finardi, S; Silibello, C [ARIANET, via Gilino 9, 20128 Milano (Italy); Gariazzo, C

    2008-05-01

    Data assimilation techniques are methods to limit the growth of errors in a dynamical model by allowing observations distributed in space and time to force (nudge) model solutions. They have become common for meteorological model applications in recent years, especially to enhance weather forecasts and to support air-quality studies. In order to investigate the influence of different data assimilation techniques on the meteorological fields produced by the RAMS model, and to evaluate their effects on the ozone and PM10 concentrations predicted by the FARM model, several numerical experiments were conducted over the urban area of Rome, Italy, during a summer episode.

  11. Evaluation of data assimilation techniques for a mesoscale meteorological model and their effects on air quality model results

    Science.gov (United States)

    Amicarelli, A.; Gariazzo, C.; Finardi, S.; Pelliccioni, A.; Silibello, C.

    2008-05-01

    Data assimilation techniques are methods to limit the growth of errors in a dynamical model by allowing observations distributed in space and time to force (nudge) model solutions. They have become common for meteorological model applications in recent years, especially to enhance weather forecasts and to support air-quality studies. In order to investigate the influence of different data assimilation techniques on the meteorological fields produced by the RAMS model, and to evaluate their effects on the ozone and PM10 concentrations predicted by the FARM model, several numerical experiments were conducted over the urban area of Rome, Italy, during a summer episode.

  12. Evaluation of data assimilation techniques for a mesoscale meteorological model and their effects on air quality model results

    International Nuclear Information System (INIS)

    Amicarelli, A; Pelliccioni, A; Finardi, S; Silibello, C; Gariazzo, C

    2008-01-01

    Data assimilation techniques are methods to limit the growth of errors in a dynamical model by allowing observations distributed in space and time to force (nudge) model solutions. They have become common for meteorological model applications in recent years, especially to enhance weather forecasts and to support air-quality studies. In order to investigate the influence of different data assimilation techniques on the meteorological fields produced by the RAMS model, and to evaluate their effects on the ozone and PM10 concentrations predicted by the FARM model, several numerical experiments were conducted over the urban area of Rome, Italy, during a summer episode.
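
The "nudging" idea behind these experiments (and the FDDA runs discussed earlier) can be shown on a toy scalar model. This is a minimal sketch, not the RAMS/FARM configuration: an extra relaxation term G*(obs - x) is added to the model tendency, pulling the state toward a constant observation of 1.0.

```python
# Newtonian relaxation ("nudging"): d(x)/dt = model_tendency + G*(obs - x).
# With G = 0 we recover the free-running model; larger G trusts the
# observations more. Model "physics" here is a simple decay term.

def run(nudge_G, x0=0.0, x_obs=1.0, dt=0.1, n_steps=500):
    x = x0
    for _ in range(n_steps):
        dxdt = -0.2 * x                   # free-model tendency (pure decay)
        dxdt += nudge_G * (x_obs - x)     # nudging toward the observation
        x += dt * dxdt
    return x

x_free = run(nudge_G=0.0)
x_nudged = run(nudge_G=1.0)
print(abs(x_free - 1.0), abs(x_nudged - 1.0))
```

The nudged run settles at G/(0.2 + G) of the observed value rather than exactly on it: nudging balances the model tendency against the observational pull instead of overwriting the state, which is why it preserves dynamical consistency.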

  13. Development of Reservoir Characterization Techniques and Production Models for Exploiting Naturally Fractured Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Michael L.; Brown, Raymon L.; Civan, Frauk; Hughes, Richard G.

    2001-08-15

    Research continues on characterizing and modeling the behavior of naturally fractured reservoir systems. Work has progressed on developing techniques for estimating fracture properties from seismic and well log data, developing naturally fractured wellbore models, and developing a model to characterize the transfer of fluid from the matrix to the fracture system for use in the naturally fractured reservoir simulator.

  14. Comparison between two meshless methods based on collocation technique for the numerical solution of four-species tumor growth model

    Science.gov (United States)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-03-01

    As is said in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations with surface effects through diffuse-interface models [27]. Simulations of this practical model using numerical methods can be applied to evaluate it. The present paper investigates the solution of the tumor growth model with meshless techniques. The meshless methods are applied based on the collocation technique, employing multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantage of these choices stems from the natural behavior of meshless approaches. Moreover, a meshless method can easily be applied to find the solution of partial differential equations in high dimensions, using any distribution of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations that describes a four-species tumor growth model. To handle the time variable, two procedures are used: a semi-implicit finite difference method based on the Crank-Nicolson scheme, and an explicit Runge-Kutta time integration. The first gives a linear system of algebraic equations to be solved at each time step; the second is efficient but conditionally stable. The obtained numerical results are reported to confirm the ability of these techniques to solve the two- and three-dimensional tumor-growth equations.
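
RBF collocation (Kansa's method) with multiquadrics, the kind of meshless technique the paper applies, can be illustrated on a much simpler 1D test problem than the tumor-growth system: solve u''(x) = -pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0, whose exact solution is u(x) = sin(pi x). The node count and shape parameter below are illustrative choices.

```python
import numpy as np

def mq(r2, c):            # multiquadric: phi(r) = sqrt(r^2 + c^2)
    return np.sqrt(r2 + c * c)

def mq_dxx(d, c):         # with d = x - x_j:  phi''(x) = c^2 / (d^2 + c^2)^(3/2)
    return c * c / (d * d + c * c) ** 1.5

n, c = 25, 0.1
x = np.linspace(0.0, 1.0, n)
D = x[:, None] - x[None, :]

# Collocation matrix: PDE rows at interior nodes, Dirichlet rows at the ends.
A = mq_dxx(D, c)
A[0, :] = mq(D[0, :] ** 2, c)
A[-1, :] = mq(D[-1, :] ** 2, c)

rhs = -np.pi ** 2 * np.sin(np.pi * x)
rhs[0] = rhs[-1] = 0.0

w = np.linalg.solve(A, rhs)           # expansion coefficients
u = mq(D ** 2, c) @ w                 # evaluate the RBF expansion at the nodes
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)
```

Because the nodes can be scattered arbitrarily, the same assembly carries over to irregular 2D and 3D domains, which is the advantage the abstract emphasizes.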

  15. The Effect of Environmental Regulation on Employment in Resource-Based Areas of China-An Empirical Research Based on the Mediating Effect Model.

    Science.gov (United States)

    Cao, Wenbin; Wang, Hui; Ying, Huihui

    2017-12-19

    While environmental pollution is becoming more and more serious, many countries are adopting policies to control pollution. At the same time, environmental regulation will inevitably affect economic and social development, especially employment growth. Environmental regulation will not only affect the scale of employment directly, but will also have indirect effects by stimulating upgrades in the industrial structure and in technological innovation. This paper examines the impact of environmental regulation on employment, using a mediating model based on data from five typical resource-based provinces in China from 2000 to 2015. The estimation is performed with the system GMM (Generalized Method of Moments) estimator. The results show that the implementation of environmental regulation in resource-based areas has both a direct effect and a mediating effect on employment. These findings provide policy implications for these resource-based areas to promote coordinated development between the environment and employment.
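
The product-of-coefficients logic behind a mediating-effect model can be sketched on synthetic data (not the Chinese provincial panel, and ordinary least squares rather than system GMM): regulation X affects employment Y directly and indirectly through a mediator M such as industrial upgrading. All path coefficients below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
a, b, c_direct = 0.8, 0.5, 0.3            # assumed true path coefficients

X = rng.normal(size=n)
M = a * X + 0.1 * rng.normal(size=n)      # mediator equation: M ~ X
Y = c_direct * X + b * M + 0.1 * rng.normal(size=n)

def ols_coefs(y, cols):
    Z = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(Z, y, rcond=None)[0]

a_hat = ols_coefs(M, [X])[1]              # regress M on X
_, c_hat, b_hat = ols_coefs(Y, [X, M])    # regress Y on X and M

indirect = a_hat * b_hat                  # mediated (indirect) effect, ~ a*b
print(indirect, c_hat)
```

The direct effect c_hat and the indirect effect a_hat*b_hat together recover the total effect, which is the decomposition the mediating-effect framework tests.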

  16. Personal recommender systems for learners in lifelong learning: requirements, techniques and model

    NARCIS (Netherlands)

    Drachsler, Hendrik; Hummel, Hans; Koper, Rob

    2007-01-01

    Drachsler, H., Hummel, H. G. K., & Koper, R. (2008). Personal recommender systems for learners in lifelong learning: requirements, techniques and model. International Journal of Learning Technology, 3(4), 404-423.

  17. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    KAUST Repository

    Khaki, M.; Hoteit, Ibrahim; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A.; Schumacher, M.; Pattiaratchi, C.

    2017-01-01

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques
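
A minimal sketch of the sequential update at the heart of such assimilation schemes (a scalar Kalman-type analysis step; the numbers, the scalar state, and the ensemble are illustrative, not from the GRACE study): the gain K = P/(P + R) blends a biased model forecast with a GRACE-like observation according to their error variances.

```python
import numpy as np

rng = np.random.default_rng(42)

truth = 10.0                        # "true" terrestrial water storage anomaly
ens_f = truth + 2.0 + 1.5 * rng.normal(size=50)   # biased forecast ensemble
y_obs, R = 10.0, 0.5 ** 2           # observation and its error variance

xf = ens_f.mean()                   # forecast mean
P = ens_f.var(ddof=1)               # forecast error variance from the ensemble
K = P / (P + R)                     # Kalman gain, 0 < K < 1
xa = xf + K * (y_obs - xf)          # analysis (updated) state

print(abs(xf - truth), abs(xa - truth))
```

Since xa - y_obs = (1 - K)(xf - y_obs), the analysis always lies between forecast and observation; ensemble methods such as the EnKF repeat this update member by member as new GRACE solutions arrive.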

  18. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lubna Moin

    2009-04-01

    Full Text Available This research paper explores and compares different modeling and analysis techniques, and then explores the model order reduction approach and its significance. Traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. Bond graphs are also used for analyzing linear and nonlinear dynamic production systems, artificial intelligence, image processing, robotics, and industrial automation. This paper describes a technique for generating a genetic design from the tree-structured transfer function obtained from a bond graph. The work combines bond graphs for model representation with genetic programming for exploring the design space; the tree-structured transfer function results from replacing each typical bond graph element with its impedance equivalent, specifying impedance laws for the bond graph multiport. The tree-structured form thus obtained from the bond graph is applied to generate the genetic tree. Application studies will identify key issues important for advancing this approach towards becoming an effective and efficient design tool for synthesizing electrical system designs. In the first phase, the system is modeled using the Bond Graph technique; its system response and transfer function are analyzed with both the conventional and the Bond Graph method, and an approach towards model order reduction is then pursued. The suggested algorithm and other known modern model order reduction techniques are applied, with different approaches, to an 11th-order high-pass filter [1]. The model order reduction technique developed in this paper has the smallest reduction errors, and the final model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and by the Bond Graph method are compared and
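
A hedged sketch of one classic model order reduction idea (modal truncation, not the paper's bond-graph/genetic-programming algorithm): given a transfer function in partial-fraction form G(s) = sum_i r_i/(s - p_i), keep the modes with the largest steady-state contribution |r_i/p_i| and drop the rest. The poles and residues below are invented for illustration.

```python
import numpy as np

poles = np.array([-1.0, -2.0, -50.0, -80.0, -120.0])   # 5th-order example
resid = np.array([4.0, 3.0, 0.5, 0.2, 0.1])

def dc_gain(r, p):
    return float(np.sum(-r / p))        # G(0) = sum_i r_i / (0 - p_i)

contrib = np.abs(resid / poles)         # each mode's DC contribution
keep = np.argsort(contrib)[-2:]         # retain the two dominant (slow) modes

G0_full = dc_gain(resid, poles)
G0_red = dc_gain(resid[keep], poles[keep])
print(G0_full, G0_red)
```

Here the fast, small-residue modes contribute under 1% of the DC gain, so the 2nd-order model reproduces the low-frequency behavior of the 5th-order one; more sophisticated schemes (balanced truncation, the paper's approach) additionally control the error across all frequencies.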

  19. Revolutionary introduction of RIA/IRMA methodology in medical diagnostics: a study employing the technique for hyperprolactinemia and its correlation with hypothyroidism

    International Nuclear Information System (INIS)

    Tasneem, A.

    2011-01-01

    The aim of the study was to determine the incidence of hyperprolactinemia, its underlying causes and consequences, and to study its correlation with hypothyroidism. The study was carried out on 1365 male and female subjects referred to the Centre for Nuclear Medicine Lahore for hormonal estimation. Serum prolactin and thyroid stimulating hormone (TSH) levels were measured using IRMA kits. The prevalence of hyperprolactinemia turned out to be 4.90%. Menstrual irregularity appeared as a major consequence. The incidence rate was highest in the age range of 21-27 years. Hypothyroidism in hyperprolactinemic subjects was observed to be 22.7%. (i) Immunoradiometric assay is a micro-analytical technique which can measure very minute amounts of antigens in serum. (ii) The prevalence of hypothyroidism in hyperprolactinemic subjects in our population is high enough to warrant estimating thyroid hormone levels in hyperprolactinemic patients. (author)

  20. The shape modulation of osteoblast-osteocyte transformation and its correlation with the fibrillar organization in secondary osteons: a SEM study employing the graded osmic maceration technique.

    Science.gov (United States)

    Pazzaglia, Ugo E; Congiu, Terenzio; Marchese, Marcella; Dell'Orbo, Carlo

    2010-06-01

    Cortex fractured surface and graded osmic maceration techniques were used to study the secretory activity of osteoblasts, the transformation of osteoblasts to osteocytes, and the structural organization of the matrix around the cells with scanning electron microscopy (SEM). A specialized membrane differentiation at the base of the cell was observed, with finger-like, flattened processes which formed a diffuse meshwork. These findings suggested that this membrane differentiation below the cells functioned not only in transporting collagen through the membrane but also in orienting the fibrils once assembled. Thin ramifications arose from the large and flat membrane foldings, oriented perpendicular to the plane of the osteoblasts. This meshwork of fine filaments could not be visualized with SEM because the filaments were obscured within the matrix substance. Their 3-D structure, however, should be similar to the canalicular system. The meshwork of large, flattened processes was no longer evident in cells which had completed their transformation into osteocytes.

  1. Sabots, Obturator and Gas-In-Launch Tube Techniques for Heat Flux Models in Ballistic Ranges

    Science.gov (United States)

    Bogdanoff, David W.; Wilder, Michael C.

    2013-01-01

    For thermal protection system (heat shield) design for space vehicle entry into earth and other planetary atmospheres, it is essential to know the augmentation of the heat flux due to vehicle surface roughness. At the NASA Ames Hypervelocity Free Flight Aerodynamic Facility (HFFAF) ballistic range, a campaign of heat flux studies on rough models, using infrared camera techniques, has been initiated. Several phenomena can interfere with obtaining good heat flux data when using this measuring technique. These include leakage of the hot drive gas in the gun barrel through joints in the sabot (model carrier), which creates spurious thermal imprints on the model forebody; deposition of sabot material on the model forebody, thereby changing the thermal properties of the model surface; and unknown in-barrel heating of the model. This report presents developments in launch techniques to greatly reduce or eliminate these problems. The techniques include the use of obturator cups behind the launch package, enclosed versus open-front sabot designs, and the use of hydrogen gas in the launch tube. Attention also had to be paid to the problem of the obturator drafting behind the model and impacting it. Of the techniques presented, the obturator cups and hydrogen in the launch tube were successful when properly implemented.

  2. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    Directory of Open Access Journals (Sweden)

    L Potgieter

    2012-12-01

    Full Text Available A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of and interaction between normal and sterile E. saccharina moths in a temporally variable, but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified and according to which E. saccharina infestation levels and the associated sugarcane damage may be measured. Although many models have been formulated in the past describing the sterile insect technique, few of these models describe the technique for Lepidopteran species with more than one life stage and where F1-sterility is relevant. In addition, none of these models consider the technique when fully sterile females and partially sterile males are being released. The model formulated is also the first to describe the technique applied specifically to E. saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.
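
The mechanism the sterile insect technique exploits can be shown with a toy Knipling-style difference equation, far simpler than the paper's multi-stage, F1-sterility model: if S sterile males are released each generation, only a fraction N/(N + S) of matings are fertile, so N_{t+1} = R N_t N_t/(N_t + S) for per-generation growth rate R. All numbers below are illustrative.

```python
# Toy sterile-release model: a large enough constant release S pushes the
# effective reproduction R*N/(N+S) below 1 and the population collapses.

def simulate(N0, R, S, generations=25):
    N = N0
    for _ in range(generations):
        N = R * N * N / (N + S)    # only fertile matings reproduce
    return N

N0, R = 1000.0, 5.0
print(simulate(N0, R, S=0.0))      # no releases: unchecked geometric growth
print(simulate(N0, R, S=9000.0))   # heavy releases: population collapses
```

The critical release ratio follows directly: collapse begins once R*N/(N + S) < 1, i.e. S > (R - 1)N, which is why release ratios and frequencies are the key decision variables the abstract mentions.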

  3. Soldiers’ employment attitude and employability: An exploratory study

    Directory of Open Access Journals (Sweden)

    Peng Gao

    2015-04-01

    Full Text Available Purpose: Nowadays it is very difficult for Chinese retired soldiers to find proper jobs, and the primary reason is the significant gap between job requirements and the job skills soldiers possess. Therefore, it is very important to improve soldiers' job skills and enhance their understanding of employment. Design/methodology/approach: This paper expands the study scope from soldiers' job skills to employability, introduces the employment attitude which has an obvious impact on the employment of soldiers, and analyses the influence that employment attitude can have on employability. Finally, this paper develops a statistical method to find the relationship between soldiers' employment attitude and employability. Findings: The empirical analysis shows that soldiers' employment attitude is positively linked with employability, which makes the employment attitude a measurable variable for employability rather than an absolute standard. Research limitations/implications: According to the research purpose, more variables should be considered in the model; consequently, there are only three indicators to describe soldiers' employment attitude and four indicators to describe soldiers' employability. Originality/value: This paper approaches research on soldiers' employability from a new perspective. The soldiers' employment attitude serves as the entry point, showing the influence that soldiers' employment attitude has on employability.

  4. The Integrated Use of Enterprise and System Dynamics Modelling Techniques in Support of Business Decisions

    Directory of Open Access Journals (Sweden)

    K. Agyapong-Kodua

    2012-01-01

    Full Text Available Enterprise modelling techniques support business process (re)engineering by capturing existing processes and, based on perceived outputs, support the design of future process models capable of meeting enterprise requirements. System dynamics modelling tools, on the other hand, are used extensively for policy analysis and for modelling aspects of dynamics which impact on businesses. In this paper, the use of enterprise and system dynamics modelling techniques has been integrated to facilitate qualitative and quantitative reasoning about the structures and behaviours of processes and resource systems used by a manufacturing enterprise during the production of composite bearings. The case study testing reported has led to the specification of a new modelling methodology for analysing and managing dynamics and complexities in production systems. This methodology is based on a systematic transformation process, which synergises the use of a selection of public-domain enterprise modelling, causal loop, and continuous simulation modelling techniques. The success of the modelling process defined relies on the creation of useful CIMOSA process models, which are then converted to causal loops. The causal loop models are then structured and translated into equivalent dynamic simulation models using the proprietary continuous simulation modelling tool iThink.
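
The end product of such a causal-loop-to-continuous-simulation pipeline is a stock-and-flow model integrated over time. A generic sketch (a work-in-progress balancing loop, not the composite-bearing case study; the rates are assumed) shows the kind of equation an iThink-style tool integrates:

```python
# Balancing loop: d(WIP)/dt = order_rate - WIP / cycle_time.
# The outflow depends on the stock itself, so WIP settles at the level
# where completions match arrivals.

order_rate = 20.0     # jobs per day entering the process (assumed)
cycle_time = 2.5      # average days a job spends in process (assumed)
wip, dt = 0.0, 0.01

for _ in range(int(60 / dt)):            # simulate 60 days by forward Euler
    completion_rate = wip / cycle_time
    wip += dt * (order_rate - completion_rate)

print(wip)
```

The steady state, order_rate * cycle_time = 50 jobs, is just Little's law emerging from the balancing loop, which is the sort of quantitative check the integrated methodology enables on top of the qualitative CIMOSA process maps.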

  5. Employment relations, flexibility and risk

    DEFF Research Database (Denmark)

    Jensen, Carsten Strøby

    Employment relations literature often distinguishes between social democratic/corporatist models of employment relations and liberal models of employment relations, as they are seen as opposite or at least different ways of organizing labor markets. They are often characterized as having very different risk profiles in terms of relationships between employees, employers, and the state. Low levels of labor market regulation very often characterize the liberal models of employment relations as we know them from, for instance, the USA and the UK. This means that employment conditions are very often insecure and that the burden of unemployment risk mostly lies with the employees rather than the employer. Corporatist – or social democratic – employment relations models are, in contrast to the liberal models, often characterized by stricter regulation of the labor market and by high standards...

  6. A Hybrid Technique for De-Noising Multi-Modality Medical Images by Employing Cuckoo’s Search with Curvelet Transform

    Directory of Open Access Journals (Sweden)

    Qaisar Javaid

    2018-01-01

    Full Text Available De-noising of medical images is a very difficult task. To improve the overall visual representation, contrast enhancement techniques need to be applied; this representation provides physicians and clinicians with better and more reliable diagnosis results. Various de-noising and contrast enhancement methods have been developed. However, some of these methods fail to provide results with adequate accuracy and efficiency. In this paper we de-noise and enhance medical images without any loss of information. We use the curvelet transform in combination with the ridgelet transform, along with the Cuckoo Search (CS) algorithm. The curvelet transform adapts to and represents sparse pixel information along all edges. Edges play a very important role in the understanding of images, and the curvelet transform computes edges very efficiently where wavelets fail. We use CS to optimize the de-noising coefficients without loss of structural and morphological information. The proposed method is accurate and efficient in de-noising medical images and attempts to remove both multiplicative and additive noise. It proves to be efficient and reliable in removing all kinds of noise from medical images. Results indicate that the proposed approach is better than other approaches at removing impulse, Gaussian, and speckle noise.

  7. Improvements in the technique of vascular perfusion-fixation employing a fluorocarbon-containing perfusate and a peristaltic pump controlled by pressure feedback

    DEFF Research Database (Denmark)

    Rostgaard, J; Qvortrup, Klaus; Poulsen, Steen Seier

    1993-01-01

    A new improved technique for whole-body perfusion-fixation of rats and other small animals is described. The driving force is a peristaltic pump which is feedback regulated by a pressure transducer that monitors the blood-perfusion pressure in the left ventricle of the heart. The primary perfusate...... to cannulate the heart; the outer and inner barrels of the cannula are connected to the peristaltic pump and to the pressure transducer, respectively. The tissue oxygen tension in the rat is monitored by a subcutaneous oxygen electrode. Measurements showed that tissue hypoxia/anoxia did not develop before......-fixative is composed of a blood substitute--13.3% oxygenated fluorocarbon FC-75--in 0.05 M cacodylate buffer (pH 7.4) with a 2% glutaraldehyde. The secondary perfusate-fixative is composed of 2% glutaraldehyde in 0.05 M cacodylate buffer (pH 7.4) with 20 mM CaCl2. A double-barrelled, self-holding cannula is used...

  8. Employing the nominal group technique to explore the views of pharmacists, pharmacy assistants and women on community pharmacy weight management services and educational resources.

    Science.gov (United States)

    Fakih, Souhiela; Marriott, Jennifer L; Hussainy, Safeera Y

    2016-04-01

    The objectives of this study were to investigate how pharmacists, pharmacy assistants and women feel about community pharmacy involvement in weight management, and to identify what pharmacists, pharmacy assistants and women want in weight management educational resources. Three homogeneous and one heterogeneous nominal group (NG) sessions of up to 120-min duration were conducted with nine women, ten pharmacists and eight pharmacy assistants. The NG technique was used to conduct each session to determine the most important issues that should be considered surrounding community pharmacy weight management services and the development of any educational resources. The heterogeneous NG session was used to finalise what women, pharmacists and pharmacy assistants want in educational resources. Overall, pharmacists, pharmacy assistants and women believe that pharmacy staff have an important role in the management of overweight and obesity because of their accessibility, trust and the availability of products in pharmacy. Regarding the most suitable healthcare professional(s) to treat overweight and obesity, the majority of participants believed that no one member of the healthcare team was most suitable and that overweight and obesity need to be treated by a multidisciplinary team. The importance of having weight management educational resources for pharmacy staff and women that come from trustworthy sources without financial gain or commercialisation was also emphasised. Pharmacists, pharmacy assistants and women feel that community pharmacies have a definite role to play in weight management. Pharmacy-specific weight management educational resources that are readily available to pharmacy staff and women are highly desirable. © 2015 Royal Pharmaceutical Society.

  9. Who will have Sustainable Employment After a Back Injury? The Development of a Clinical Prediction Model in a Cohort of Injured Workers

    DEFF Research Database (Denmark)

    Shearer, Heather M.; Côté, Pierre; Boyle, Eleanor

    2017-01-01

    to develop the predictive models. We constructed receiver operator curves and used the c-index to measure predictive accuracy. Results Seventy-nine percent and 77 % of workers had sustainable employment at 1 and 6 months, respectively. Sustainable employment at 1 month was predicted by initial back pain...... intensity, mental health-related quality of life, claim litigation and employer type (c-index = 0.77). At 6 months, sustainable employment was predicted by physical and mental health-related quality of life, claim litigation and employer type (c-index = 0.77). Adding health-related and work......-related variables to models improved predictive accuracy by 8.5 and 10 % at 1 and 6 months respectively. Conclusion We developed clinically-relevant models to predict sustainable employment in injured workers who made a workers’ compensation claim for LBP. Inquiring about back pain intensity, physical and mental...

  10. Modelling of ground penetrating radar data in stratified media using the reflectivity technique

    International Nuclear Information System (INIS)

    Sena, Armando R; Sen, Mrinal K; Stoffa, Paul L

    2008-01-01

    Horizontally layered media are often encountered in shallow exploration geophysics. Ground penetrating radar (GPR) data in these environments can be modelled by techniques that are more efficient than finite difference (FD) or finite element (FE) schemes because the lateral homogeneity of the media allows us to reduce the dependence on the horizontal spatial variables through Fourier transforms on these coordinates. We adapt and implement the invariant embedding or reflectivity technique used to model elastic waves in layered media to model GPR data. The results obtained with the reflectivity and FDTD modelling techniques are in excellent agreement and the effects of the air–soil interface on the radiation pattern are correctly taken into account by the reflectivity technique. Comparison with real wide-angle GPR data shows that the reflectivity technique can satisfactorily reproduce the real GPR data. These results and the computationally efficient characteristics of the reflectivity technique (compared to FD or FE) demonstrate its usefulness in interpretation and possible model-based inversion schemes of GPR data in stratified media
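
    The invariant-embedding recursion at the heart of the reflectivity method can be sketched for the simplest case: a normally incident plane wave on lossless, non-magnetic layers. This is a strong simplification of the paper's full treatment (no air-soil radiation pattern, no obliquity); the function name and permittivity values are illustrative only.

```python
import numpy as np

def reflectivity(freq, eps_r, d):
    """Normal-incidence reflection response of a horizontally layered stack.

    freq  : frequency in Hz
    eps_r : relative permittivities, top half-space to bottom half-space
    d     : thicknesses (m) of the intermediate layers, len(eps_r) - 2 values
    """
    c = 299792458.0
    omega = 2.0 * np.pi * freq
    n = np.sqrt(np.asarray(eps_r, dtype=complex))   # refractive indices
    k = omega * n / c                               # wavenumber in each layer
    # Fresnel reflection coefficient at each interface
    r = (n[:-1] - n[1:]) / (n[:-1] + n[1:])
    # recurse upward from the deepest interface (invariant embedding)
    R = r[-1]
    for j in range(len(r) - 2, -1, -1):
        phase = np.exp(2j * k[j + 1] * d[j])        # two-way travel in layer j+1
        R = (r[j] + R * phase) / (1.0 + r[j] * R * phase)
    return R
```

    For two half-spaces the recursion reduces to the single Fresnel coefficient, which is a convenient sanity check.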

  11. Modelling techniques for predicting the long term consequences of radiation on natural aquatic populations

    International Nuclear Information System (INIS)

    Wallis, I.G.

    1978-01-01

    The purpose of this working paper is to describe modelling techniques for predicting the long-term consequences of radiation on natural aquatic populations. Ideally, it would be possible to use aquatic population models: (1) to predict changes in the health and well-being of all aquatic populations as a result of changing the composition, amount and location of radionuclide discharges; (2) to compare the effects of steady, fluctuating and accidental releases of radionuclides; and (3) to evaluate the combined impact of the discharge of radionuclides and other wastes, and of natural environmental stresses, on aquatic populations. At the outset it should be stated that no existing model can achieve this ideal performance. However, modelling skills and techniques are available to develop useful aquatic population models. This paper discusses the considerations involved in developing these models and briefly describes the various types of population models which have been developed to date

  12. A hybrid SEA/modal technique for modeling structural-acoustic interior noise in rotorcraft.

    Science.gov (United States)

    Jayachandran, V; Bonilha, M W

    2003-03-01

    This paper describes a hybrid technique that combines Statistical Energy Analysis (SEA) predictions for structural vibration with acoustic modal summation techniques to predict interior noise levels in rotorcraft. The method was applied for predicting the sound field inside a mock-up of the interior panel system of the Sikorsky S-92 helicopter. The vibration amplitudes of the frame and panel systems were predicted using a detailed SEA model and these were used as inputs to the model of the interior acoustic space. The spatial distribution of the vibration field on individual panels, and their coupling to the acoustic space were modeled using stochastic techniques. Leakage and nonresonant transmission components were accounted for using space-averaged values obtained from a SEA model of the complete structural-acoustic system. Since the cabin geometry was quite simple, the modeling of the interior acoustic space was performed using a standard modal summation technique. Sound pressure levels predicted by this approach at specific microphone locations were compared with measured data. Agreement within 3 dB in one-third octave bands above 40 Hz was observed. A large discrepancy in the one-third octave band in which the first acoustic mode is resonant (31.5 Hz) was observed. Reasons for such a discrepancy are discussed in the paper. The developed technique provides a method for modeling helicopter cabin interior noise in the frequency mid-range where neither FEA nor SEA is individually effective or accurate.

  13. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    International Nuclear Information System (INIS)

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. (paper)
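
    The separable (variable-projection) idea can be sketched for a single curve, using a hypothetical two-exponential model rather than an actual multi-tracer compartment model: the nonlinear rate constants are scanned exhaustively, as in the paper's exhaustive-search fits, while the linear amplitudes are recovered by ordinary linear least squares at each candidate. All names and values below are illustrative.

```python
import numpy as np
from itertools import product

def separable_fit(t, y, k_grid):
    """Fit y(t) ~ a1*exp(-k1*t) + a2*exp(-k2*t).

    The nonlinear parameters (k1, k2) are found by exhaustive grid search;
    for each candidate pair the linear amplitudes (a1, a2) are obtained in
    closed form by least squares, so the search is only two-dimensional.
    Returns (residual sum of squares, (k1, k2), amplitudes).
    """
    best = (np.inf, None, None)
    for k1, k2 in product(k_grid, repeat=2):
        if k1 >= k2:                      # skip duplicate/degenerate pairs
            continue
        A = np.column_stack([np.exp(-k1 * t), np.exp(-k2 * t)])
        amps, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = np.sum((A @ amps - y) ** 2)
        if rss < best[0]:
            best = (rss, (k1, k2), amps)
    return best
```

    Because the amplitudes drop out analytically, only the rate constants need to be searched, which is the dimensionality reduction the paper exploits.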

  14. Techniques to extract physical modes in model-independent analysis of rings

    International Nuclear Information System (INIS)

    Wang, C.-X.

    2004-01-01

    A basic goal of Model-Independent Analysis is to extract the physical modes underlying the beam histories collected at a large number of beam position monitors so that beam dynamics and machine properties can be deduced independent of specific machine models. Here we discuss techniques to achieve this goal, especially the Principal Component Analysis and the Independent Component Analysis.
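
    The Principal Component Analysis step can be illustrated with synthetic turn-by-turn data; the variable names and the betatron-like test signal are invented for the sketch. The physical modes are obtained from the SVD of the mean-subtracted history matrix.

```python
import numpy as np

def principal_modes(history, n_modes=2):
    """Extract leading modes from BPM beam histories via SVD.

    history : (n_turns, n_bpms) array of beam positions.
    Returns the leading singular values and spatial mode vectors.
    """
    X = history - history.mean(axis=0)          # remove the closed orbit
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return s[:n_modes], Vt[:n_modes]
```

    A pure betatron oscillation sampled at many BPMs spans only two modes (a cosine-like and a sine-like pattern), so the third singular value of such data is essentially zero; that makes a simple consistency check.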

  15. Using Game Theory Techniques and Concepts to Develop Proprietary Models for Use in Intelligent Games

    Science.gov (United States)

    Christopher, Timothy Van

    2011-01-01

    This work is about analyzing games as models of systems. The goal is to understand the techniques that have been used by game designers in the past, and to compare them to the study of mathematical game theory. Through the study of a system or concept a model often emerges that can effectively educate students about making intelligent decisions…

  16. Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils

    Directory of Open Access Journals (Sweden)

    Fatimah Khaleel Ibrahim

    2017-08-01

    Full Text Available Soft computing techniques such as Artificial Neural Networks (ANN) have improved predictive capability and have found application in geotechnical engineering. The aim of this research is to utilize soft computing techniques and Multiple Regression (MLR) models for forecasting the California Bearing Ratio (CBR) of soil from its index properties. The CBR of a soil can be predicted from various soil-characterizing parameters with the assistance of MLR and ANN methods. The database was collected in the laboratory by conducting tests on 86 soil samples gathered from different projects in Basrah districts. Data gained from the experimental results were used in the regression models and in the soft computing technique using an artificial neural network. The liquid limit, plasticity index, modified compaction test and the CBR test were determined. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the developed models were examined in terms of regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. From the results of this paper, it was noticed that all the proposed ANN models perform better than the MLR model. In particular, the ANN model with all input parameters reveals better outcomes than the other ANN models.
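
    The MLR baseline used for comparison can be sketched in a few lines of numpy; the data and the exact metric definitions below are illustrative and may differ from the paper's.

```python
import numpy as np

def mlr_fit_metrics(X, y):
    """Ordinary least-squares multiple regression with evaluation metrics.

    Returns the coefficients (intercept first) and a dict with the three
    measures used in the study: R^2, MSE and mean relative error in %.
    """
    A = np.column_stack([np.ones(len(X)), X])    # design matrix with intercept
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    mse = np.mean(resid ** 2)
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)
    re_pct = 100.0 * np.mean(np.abs(resid) / np.abs(y))
    return coef, {"R2": r2, "MSE": mse, "RE%": re_pct}
```

    An ANN would replace the linear design matrix with a learned nonlinear mapping; the metrics are computed the same way for both, which is what makes the R2/RE%/MSE comparison in the paper possible.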

  17. A fully blanketed early B star LTE model atmosphere using an opacity sampling technique

    International Nuclear Information System (INIS)

    Phillips, A.P.; Wright, S.L.

    1980-01-01

    A fully blanketed LTE model of a stellar atmosphere with T_e = 21914 K (θ_e = 0.23), log g = 4 is presented. The model includes an explicit representation of the opacity due to the strongest lines, and uses a statistical opacity-sampling technique to represent the weaker line opacity. The sampling technique is subjected to several tests and the model is compared with an atmosphere calculated using the line-distribution-function method. The limitations of the distribution-function method and of the particular opacity-sampling method used here are discussed in the light of the results obtained. (author)

  18. Identification techniques for phenomenological models of hysteresis based on the conjugate gradient method

    International Nuclear Information System (INIS)

    Andrei, Petru; Oniciuc, Liviu; Stancu, Alexandru; Stoleriu, Laurentiu

    2007-01-01

    An identification technique for the parameters of phenomenological models of hysteresis is presented. The basic idea of our technique is to set up a system of equations for the parameters of the model as a function of known quantities on the major or minor hysteresis loops (e.g. coercive force, susceptibilities at various points, remanence), or other magnetization curves. This system of equations can be either over or underspecified and is solved by using the conjugate gradient method. Numerical results related to the identification of parameters in the Energetic, Jiles-Atherton, and Preisach models are presented
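
    The identification idea, minimising the misfit between model-predicted and known magnetization-curve quantities with a conjugate gradient method, can be sketched on a deliberately simple toy curve. The tanh model and all numbers here are hypothetical placeholders, not the Energetic, Jiles-Atherton or Preisach models of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical anhysteretic-like curve M(H) = Ms * tanh(H / a); identify
# (Ms, a) from a few known (H, M) points, mimicking the paper's scheme of
# building an (over)determined system and solving it by conjugate gradients.
H = np.array([0.5, 1.0, 2.0, 4.0])
M_meas = 1.6 * np.tanh(H / 1.2)          # synthetic "measurements"

def residual_norm(p):
    """Sum of squared residuals between model and known quantities."""
    Ms, a = p
    return np.sum((Ms * np.tanh(H / a) - M_meas) ** 2)

res = minimize(residual_norm, x0=[1.0, 1.0], method="CG")
Ms_fit, a_fit = res.x
```

    With four known points and two parameters the system is overdetermined, exactly the situation the conjugate gradient formulation is meant to handle.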

  19. The Effect of Learning Based on Technology Model and Assessment Technique toward Thermodynamic Learning Achievement

    Science.gov (United States)

    Makahinda, T.

    2018-02-01

    The purpose of this research is to determine the effect of a technology-based learning model and of assessment technique on thermodynamics achievement, controlling for student intelligence. This is an experimental study. The sample was taken through cluster random sampling, with a total of 80 student respondents. The results show that the thermodynamics achievement of students taught with the environmental-utilization learning model is higher than that of students taught with animation simulation, after controlling for student intelligence. There is an interaction effect between the technology-based learning model and the assessment technique on students' thermodynamics achievement, after controlling for student intelligence. Based on these findings, lectures should employ the environmental-utilization learning model together with project-based assessment techniques.

  20. Employment contract and insecurity jobs: asymmetric information modelling of atypical situation, case of poor and less developed countries

    OpenAIRE

    El Bouhadi, Abdelhamid

    2006-01-01

    This paper studies the recruiting relationship between employee and employer in a setting characterized by insecure job conditions and large-scale unemployment. The recruiting relationship studied is atypical insofar as the offered salary is not a fixed minimum wage. The hiring contract between employer and employee in developing labour markets is made under conditions different from those found elsewhere, mainly in developed countries...

  1. Modeling and simulation of defects detection in conductive multi-layered pieces by the eddy current technique

    International Nuclear Information System (INIS)

    Bennoud, S; Zergoug, M

    2015-01-01

    It has been shown that the eddy current method is one of the most effective techniques for the detection and characterization of surface and near-surface defects in conductive media, especially in aluminum alloys. It is one of the most widely applied methods in industries which require maximum reliability and security (aerospace, aeronautics, nuclear, etc.). In this study, a code to solve electromagnetic problems by the finite element method is developed. The suggested model can simulate the probe response to the presence of a defect hidden in a multi-layered or riveted structure of aluminum alloy. The developed code is based on a three-dimensional discretization of Maxwell's equations in harmonic mode, using a finite element formulation with combined potentials. This enables us to interpret the results, present them in graphical form, and carry out simulations for various applications

  2. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    Full Text Available In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distribution of model residuals, and the deterioration rate of prediction performance during the testing phase. The Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike the two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance could be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and, if appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi-linear data, which cover a wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques.
K-nn is also successful in linear situations, and it

  3. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Science.gov (United States)

    Elshorbagy, A.; Corzo, G.; Srinivasulu, S.; Solomatine, D. P.

    2010-10-01

    In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), Support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distribution of model residuals, and the deterioration rate of prediction performance during the testing phase. Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance could be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and if appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi linear data, which cover wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it should
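
    Of the techniques compared, K-nearest neighbors is simple enough to sketch in a few lines. This is a generic illustration of the method, not the authors' implementation.

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """K-nearest-neighbors regression: each query point is predicted as
    the mean target of its k closest training samples (Euclidean distance)."""
    preds = []
    for q in np.atleast_2d(X_query):
        d = np.linalg.norm(X_train - q, axis=1)   # distance to every sample
        idx = np.argsort(d)[:k]                   # indices of k nearest
        preds.append(y_train[idx].mean())
    return np.array(preds)
```

    Because the prediction is a local average of observed targets, K-nn tracks linear and mildly nonlinear relationships well but cannot extrapolate beyond the range of the training data, which is consistent with its behaviour reported in the experiment.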

  4. Towards a Business Process Modeling Technique for Agile Development of Case Management Systems

    Directory of Open Access Journals (Sweden)

    Ilia Bider

    2017-12-01

    Full Text Available A modern organization needs to adapt its behavior to changes in the business environment by changing its Business Processes (BP) and the corresponding Business Process Support (BPS) systems. One way of achieving such adaptability is via separation of the system code from the process description/model by applying the concept of executable process models. Furthermore, to ease the introduction of changes, such a process model should separate different perspectives, for example, the control-flow, human resources, and data perspectives, from each other. In addition, for developing a completely new process, it should be possible to start with a reduced process model to get a BPS system quickly running, and then continue to develop it in an agile manner. This article consists of two parts. The first part sets requirements on modeling techniques that could be used in tools that support agile development of BPs and BPS systems. The second part suggests a business process modeling technique that allows modeling to start with the data/information perspective, which would be appropriate for processes supported by Case or Adaptive Case Management (CM/ACM) systems. In a model produced by this technique, called a data-centric business process model, a process instance/case is defined as a sequence of states in a specially designed instance database, while the process model is defined as a set of rules that set restrictions on allowed states and transitions between them. The article details the background of the project of developing the data-centric process modeling technique, presents an outline of the structure of the model, and gives formal definitions for a substantial part of the model.
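
    The data-centric idea, a case as a sequence of states constrained by rules on allowed states and transitions, can be illustrated with a deliberately tiny sketch. The case attributes and rules below are invented for illustration and are not the authors' notation.

```python
# A case state is a record in a hypothetical instance database; the process
# model is a set of rules, each a predicate over (old_state, new_state).
initial_state = {"application": None, "review": None, "decision": None}

def rule_review_needs_application(old, new):
    """A review may only exist once an application has been filed."""
    return new["review"] is None or new["application"] is not None

def rule_decision_needs_review(old, new):
    """A decision may only exist once a review has been recorded."""
    return new["decision"] is None or new["review"] is not None

RULES = [rule_review_needs_application, rule_decision_needs_review]

def transition(state, **updates):
    """Apply updates only if every rule permits the resulting state."""
    new = {**state, **updates}
    if all(rule(state, new) for rule in RULES):
        return new
    raise ValueError("transition violates process rules")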

  5. The noise analysis and optimum filtering techniques for a two-dimensional position sensitive orthogonal strip gamma ray detector employing resistive charge division

    International Nuclear Information System (INIS)

    Gerber, M.S.; Muller, D.W.

    1976-01-01

    The analysis of an orthogonal strip, two-dimensional position sensitive high purity germanium gamma ray detector is discussed. Position sensitivity is obtained by connecting each electrode strip on the detector to a resistor network. Charge, entering the network, divides in relation to the resistance between its entry point and the virtual earth points of the charge sensitive preamplifiers located at the end of each resistor network. The difference of the voltage pulses at the output of each preamplifier is proportional to the position at which the charge entered the resistor network and the sum of the pulse is proportional to the energy of the detected gamma ray. The analysis and spatial noise resolution is presented for this type of position sensitive detector. The results of the analysis show that the position resolution is proportional to the square root of the filter amplifier's output pulse time constant and that for energy measurement the resolution is maximized at the filter amplifier's noise corner time constant. The design of the electronic noise filtering system for the prototype gamma ray camera was based on the mathematical energy and spatial resolution equations. For the spatial channel a Gaussian trapezoidal filtering system was developed. Gaussian filtering was used for the energy channel. The detector noise model was verified by taking rms noise measurements of the filtered energy and spatial pulses from resistive readout charge dividing detectors. These measurements were within 10% of theory. (Auth.)
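
    The readout arithmetic described above is simple to sketch. Normalising the difference by the sum, as below, is the usual practice for charge-division readout, though the abstract states only the proportionality; names and units are illustrative.

```python
def position_and_energy(v_a, v_b):
    """Charge-division readout from the two preamplifier outputs.

    The difference of the two pulses encodes the position along the
    resistor chain (normalised to -1 .. +1 here), and their sum is
    proportional to the deposited energy (arbitrary units).
    """
    total = v_a + v_b
    position = (v_a - v_b) / total
    return position, total
```

    Normalising by the sum removes the energy dependence of the position signal, so events of different energies map to the same spatial coordinate.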

  6. Employment, Family Union, and Childbearing Decisions in Great Britain

    OpenAIRE

    Arnstein Aassve; Simon Burgess; Matt Dickson; Carol Propper

    2004-01-01

    The paper investigates the relationship between work and family life in Britain. Using appropriate statistical techniques we estimate a five-equation model, which includes birth events, union formation, union dissolution, employment and non-employment events. The model allows for unobserved heterogeneity that is correlated across all five equations. We use information from the British Household Panel Survey, including the retrospective histories concerning work, union, and child bearing, to e...

  7. Application of data assimilation technique for flow field simulation for Kaiga site using TAPM model

    International Nuclear Information System (INIS)

    Shrivastava, R.; Oza, R.B.; Puranik, V.D.; Hegde, M.N.; Kushwaha, H.S.

    2008-01-01

    Data assimilation techniques are becoming popular as a means of obtaining realistic flow field simulations for a site under consideration. The present paper describes a data assimilation technique for flow field simulation at the Kaiga site using the air pollution model (TAPM) developed by CSIRO, Australia. The TAPM model was run for the Kaiga site for a period of one month (November 2004) using the analysed meteorological data supplied with the model for the Central Asian (CAS) region, and the model solutions were nudged with the observed wind speed and wind direction data available for the site. The model was run with 4 nested grids with grid spacings of 30 km, 10 km, 3 km and 1 km, respectively. The model-generated results with and without nudging are statistically compared with the observations. (author)
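
    Nudging a model solution toward observations is essentially Newtonian relaxation: a term proportional to the observation-minus-model difference is added each time step. The one-line sketch below uses an illustrative relaxation timescale and is not TAPM's actual scheme.

```python
def nudge(model_value, obs_value, dt, tau=3600.0):
    """One explicit step of Newtonian relaxation (nudging).

    Adds (dt / tau) * (obs - model), pulling the model state toward the
    observed value on a relaxation timescale tau (seconds).
    """
    return model_value + (dt / tau) * (obs_value - model_value)
```

    Repeated application drives the model value toward the observation while the model physics (omitted here) fills in the evolution between observation points.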

  8. Modelling of thermal field and point defect dynamics during silicon single crystal growth using CZ technique

    Science.gov (United States)

    Sabanskis, A.; Virbulis, J.

    2018-05-01

    Mathematical modelling is employed to numerically analyse the dynamics of Czochralski (CZ) silicon single crystal growth. The model is axisymmetric; its thermal part describes heat transfer by conduction and thermal radiation and allows prediction of the time-dependent shape of the crystal-melt interface. Besides the thermal field, the point defect dynamics is modelled using the finite element method. The considered process consists of cone-growth and cylindrical phases, including a short period of reduced crystal pull rate and a power jump to avoid large diameter changes. The influence of thermal stresses on the point defects is also investigated.

  9. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced to reflect the characteristics of fault-tolerant techniques in the reliability model of the digital protection system in NPPs. • The integrated fault coverage covers the process of a fault-tolerant technique from fault detection to fail-safe generation. • With integrated fault coverage, the unavailability of a repairable component of the DPS can be estimated. • The newly developed reliability model reveals the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to track changes in unavailability as diverse factors vary. - Abstract: With the improvement of digital technologies, digital protection systems (DPS) incorporate multiple sophisticated fault-tolerant techniques (FTTs) in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor in an FTT's contribution to reliability. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in a reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection to fail-safe generation. A model has been developed to estimate the unavailability of a repairable component of the DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain the important variables which affect the integrated fault coverage and unavailability
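
    As a hypothetical first-order illustration of how fault coverage can enter an unavailability estimate (this is not the paper's model): faults detected by the FTT are assumed to be revealed immediately and repaired, while undetected faults remain latent until a periodic proof test.

```python
def mean_unavailability(failure_rate, coverage, mttr, proof_test_interval):
    """Hypothetical first-order unavailability of a repairable component.

    failure_rate        : failures per hour
    coverage            : fraction of faults caught by the fault-tolerant
                          technique (0..1)
    mttr                : mean time to repair a detected fault (hours)
    proof_test_interval : interval of the periodic test that reveals
                          uncovered faults (hours)
    """
    detected = coverage * failure_rate * mttr                     # short outages
    hidden = (1.0 - coverage) * failure_rate * proof_test_interval / 2.0
    return detected + hidden
```

    The sketch makes the qualitative point of the paper visible: raising the coverage shifts contributions from the large latent term to the small repair term, so unavailability falls sharply with better fault-tolerant techniques.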

  10. Transfer of physics detector models into CAD systems using modern techniques

    International Nuclear Information System (INIS)

    Dach, M.; Vuoskoski, J.

    1996-01-01

    Designing high energy physics detectors for future experiments requires sophisticated computer aided design and simulation tools. In order to satisfy the future demands in this domain, modern techniques, methods, and standards have to be applied. We present an interface application, designed and implemented using object-oriented techniques, for the widely used GEANT physics simulation package. It converts GEANT detector models into the future industrial standard, STEP. (orig.)

  11. Comparison of Analysis and Spectral Nudging Techniques for Dynamical Downscaling with the WRF Model over China

    OpenAIRE

    Ma, Yuanyuan; Yang, Yi; Mai, Xiaoping; Qiu, Chongjian; Long, Xiao; Wang, Chenghai

    2016-01-01

    To overcome the problem that the horizontal resolution of global climate models may be too low to resolve features which are important at the regional or local scales, dynamical downscaling has been extensively used. However, dynamical downscaling results generally drift away from large-scale driving fields. The nudging technique can be used to balance the performance of dynamical downscaling at large and small scales, but the performances of the two nudging techniques (analysis nudging and s...

  12. A critical review of field techniques employed in the survey of large woody debris in river corridors: a central European perspective.

    Science.gov (United States)

    Máčka, Zdeněk; Krejčí, Lukáš; Loučková, Blanka; Peterková, Lucie

    2011-10-01

    In forested watersheds, large woody debris (LWD) is an integral component of river channels and floodplains. Fallen trees have a significant impact on physical and ecological processes in fluvial ecosystems. An enormous body of literature concerning LWD in river corridors is currently available. However, synthesis and statistical treatment of the published data are hampered by the heterogeneity of methodological approaches. Likewise, the precision and accuracy of data arising out of published surveys have yet to be assessed. For this review, a literature scrutiny of 100 randomly selected research papers was made to examine the most frequently surveyed LWD variables and field procedures. Some 29 variables arose for individual LWD pieces, and 15 variables for wood accumulations. The literature survey revealed a large variability in field procedures for LWD surveys. In many studies (32), description of field procedure proved less than adequate, rendering the results impossible to reproduce in comparable fashion by other researchers. This contribution identifies the main methodological problems and sources of error associated with the mapping and measurement of the most frequently surveyed variables of LWD, both as individual pieces and in accumulations. The discussion stems from our own field experience with LWD survey in river systems of various geomorphic styles and types of riparian vegetation in the Czech Republic in the 2004-10 period. We modelled variability in terms of LWD number, volume, and biomass for three geomorphologically contrasting river systems. The results appeared to be sensitive, in the main, to sampling strategy and prevailing field conditions; less variability was produced by errors of measurement. Finally, we propose a comprehensive standard field procedure for LWD surveyors, including a total of 20 variables describing spatial position, structural characteristics and the functions and dynamics of LWD. However, resources are only rarely

  13. A Model-Based Systems Engineering Methodology for Employing Architecture In System Analysis: Developing Simulation Models Using Systems Modeling Language Products to Link Architecture and Analysis

    Science.gov (United States)

    2016-06-01

    Excerpts (figure list and page numbers omitted): Tallahassee, FL: Florida Department of Transportation, Memorandum No. 1. ... The spiral model of system development, first introduced in Boehm... Since incorporating system capabilities into the waterfall model would prove quite difficult, the spiral model assumes that available technologies will change over the

  14. A new wind speed forecasting strategy based on the chaotic time series modelling technique and the Apriori algorithm

    International Nuclear Information System (INIS)

    Guo, Zhenhai; Chi, Dezhong; Wu, Jie; Zhang, Wenyu

    2014-01-01

    Highlights: • The impact of meteorological factors on wind speed forecasting is taken into account. • Forecasted wind speed results are corrected by the association rules. • Forecasting accuracy is improved by the new wind speed forecasting strategy. • Robustness of the proposed model is validated with data sampled from different sites. - Abstract: Wind energy has been the fastest growing renewable energy resource in recent years. Because of the intermittent nature of wind, wind power is a fluctuating source of electrical energy. Therefore, to minimize the impact of wind power on the electrical grid, accurate and reliable wind power forecasting is mandatory. In this paper, a new wind speed forecasting approach based on the chaotic time series modelling technique and the Apriori algorithm has been developed. The new approach consists of four procedures: (I) clustering by using the k-means clustering approach; (II) employing the Apriori algorithm to discover the association rules; (III) forecasting the wind speed according to the chaotic time series forecasting model; and (IV) correcting the forecasted wind speed data using the association rules discovered previously. This procedure has been verified by 31-day-ahead daily average wind speed forecasting case studies, which employed the wind speed and other meteorological data collected from four meteorological stations located in the Hexi Corridor area of China. The results of these case studies reveal that the chaotic forecasting model can efficiently improve the accuracy of the wind speed forecasting, and the Apriori algorithm can effectively discover the association rules between the wind speed and other meteorological factors. In addition, the correction results demonstrate that the association rules discovered by the Apriori algorithm are highly effective in correcting the forecasted wind speed values when those values do not match the classification implied by the association rules
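
Steps (I), (II) and (IV) of the pipeline can be sketched on toy data. The chaotic forecasting step (III) is replaced here by a fixed toy forecast, and the clusters, the mined rule and all numbers are illustrative, not taken from the Hexi Corridor data.

```python
def kmeans_1d(values, k, iters=50):
    """Step I: plain 1-D k-means with deterministic quantile initialisation."""
    sv = sorted(values)
    centers = [sv[i * (len(sv) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
    return sorted(centers)

def assign(v, centers):
    return min(range(len(centers)), key=lambda i: abs(v - centers[i]))

speeds = [2.1, 2.4, 2.0, 7.8, 8.1, 8.4, 14.9, 15.2, 15.0]  # toy daily means, m/s
centers = kmeans_1d(speeds, 3)

# Step II stand-in: suppose Apriori mining of historical records yielded the rule
# "pressure class 'high' -> wind class 0" with sufficient support and confidence.
rules = {"high": 0}

def correct(forecast, pressure_class, centers, rules):
    """Step IV: snap a forecast to its rule-implied cluster centre when it
    disagrees with the class the mined rule predicts."""
    expected = rules.get(pressure_class)
    if expected is not None and assign(forecast, centers) != expected:
        return centers[expected]
    return forecast

print([round(c, 2) for c in centers])
print(correct(7.9, "high", centers, rules))
```

A real Apriori pass would mine many such class-association rules from discretised meteorological variables; the correction step then only overrides forecasts that contradict a sufficiently confident rule.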

  15. An experimental technique for the modelling of air flow movements in nuclear plant

    International Nuclear Information System (INIS)

    Ainsworth, R.W.; Hallas, N.J.

    1986-01-01

    This paper describes an experimental technique developed at Harwell to model ventilation flows in plant at 1/5th scale. The technique achieves dynamic similarity not only for forced convection imposed by the plant ventilation system, but also for the interaction between natural convection (from heated objects) and forced convection. The use of a scale model to study the flow of fluids is a well established technique, relying upon various criteria, expressed in terms of dimensionless numbers, to achieve dynamic similarity. For forced convective flows, simulation of the Reynolds number is sufficient, but to model natural convection and its interaction with forced convection, the Rayleigh, Grashof and Prandtl numbers must be simulated at the same time. This paper describes such a technique, used in experiments on a hypothetical glove box cell to study the interaction between forced and natural convection. The model contained features typically present in a cell, such as a man, motor, stairs, glove box, etc. The aim of the experiment was to study the overall flow patterns, especially around the model man 'working' at the glove box. The cell ventilation was theoretically designed to produce a downward flow over the face of the man working at the glove box. However, the results showed that the flow velocities produced an upward flow over the face of the man. The work has indicated the viability of simultaneously modelling the forced and natural convection processes in a cell. It has also demonstrated that simplistic assumptions cannot be made about ventilation flow patterns. (author)
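
The competing similarity requirements can be illustrated with back-of-the-envelope scaling for a 1/5-scale air model. The velocities and temperature differences below are assumed values, not Harwell's.

```python
import math

scale = 1.0 / 5.0    # model length / full-scale length
U_full = 0.5         # full-scale ventilation velocity, m/s (assumed)
dT_full = 10.0       # full-scale surface-to-air temperature difference, K (assumed)

# Matching Re = U*L/nu with the same fluid (air) forces U_model = U_full / scale.
U_model_Re = U_full / scale

# Matching Gr = g*beta*dT*L**3/nu**2 with the same fluid forces dT_model = dT_full / scale**3.
dT_model_Gr = dT_full / scale ** 3

# Matching the Richardson number Ri = Gr/Re**2 = g*beta*dT*L/U**2 with the same
# fluid and the same dT instead only requires U_model = U_full * sqrt(scale).
U_model_Ri = U_full * math.sqrt(scale)

print(U_model_Re, round(dT_model_Gr), round(U_model_Ri, 4))
```

The unphysically large temperature difference (1250 K) shows why Re and Gr cannot both be matched exactly with the same fluid at reduced scale; a common compromise is to match the ratio Gr/Re², the Richardson number, so that the balance between forced and natural convection is preserved.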

  16. The Employment of spatial autoregressive models in predicting demand for natural gas; O Emprego de modelos auto-regressivos espaciais na previsao de demanda para gas natural

    Energy Technology Data Exchange (ETDEWEB)

    Castro, Jorge Henrique de [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil); Silva, Alexandre Pinto Alves da [Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Eletrica

    2010-07-01

    Developing the natural gas network is a critical success factor for the distribution company. It is a decision that depends on the demand at a given location 'x' and a future time 't', so that the network provides the best conditions for the return on capital. In this segment, a typical network industry, a spatial view of the infrastructure associated with the market allows better evaluation of the business because it mitigates costs and risks. In fact, economic models addressing the question of location are little developed, owing to their limited use by economists. The objective of this article is to analyze the application of a spatial perspective to natural gas demand forecasting and to identify the models that can be employed, observing issues of dependency and spatial heterogeneity, as well as the capacity for mapping the variables associated with the problem. (author)

  17. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    Full Text Available This research work is aimed at optimizing the availability of a framework comprising two units linked together in series configuration, utilizing the Markov Model and Monte Carlo (MC) Simulation techniques. In this article, effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, both with and without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. Validation of the results achieved is later carried out with the help of MC Simulation. In addition, MC Simulation based codes also work well for frameworks that follow non-exponential failure and repair distributions, and thus overcome the limitations of the Markov Model.
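
A minimal version of the analytic-versus-simulation cross-check can be sketched for a single repairable unit with exponential failure and repair times. The rates are assumed; the paper's three-state deterioration model with opportunistic maintenance is considerably richer.

```python
import random

def analytic_availability(lam, mu):
    """Steady-state availability from the two-state Markov model: A = mu/(lam+mu)."""
    return mu / (lam + mu)

def simulated_availability(lam, mu, horizon=2e5, seed=1):
    """Monte Carlo estimate: alternate exponential up-times and repair times."""
    rng = random.Random(seed)
    t = up = 0.0
    while t < horizon:
        life = rng.expovariate(lam)
        up += min(life, horizon - t)
        t += life
        if t < horizon:
            t += rng.expovariate(mu)
    return up / horizon

lam, mu = 0.01, 0.1            # failure and repair rates per hour (assumed)
a_unit = analytic_availability(lam, mu)
a_series = a_unit ** 2          # two identical independent units in series
a_mc = simulated_availability(lam, mu)
print(round(a_unit, 4), round(a_series, 4), round(a_mc, 4))
```

The simulation loop is where non-exponential distributions slot in: replacing `expovariate` with, say, a Weibull sampler leaves the estimator valid, which is exactly the flexibility the Markov model lacks.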

  18. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling

    DEFF Research Database (Denmark)

    Dotto, C. B.; Mannina, G.; Kleidorfer, M.

    2012-01-01

    -UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multiobjective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty...... techniques, common criteria have been set for the likelihood formulation, defining the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside...... the same dataset. The comparison results for a well-posed rainfall/runoff model showed that the four methods provide similar probability distributions of model parameters, and model prediction intervals. For ill-posed water quality model the differences between the results were much wider; and the paper...

  19. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    Science.gov (United States)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents a package called GRAVTool, developed in MATLAB, to compute local geoid models by the RCR technique, and its application in a study area. The study area comprises the Federal District of Brazil, covering ~6000 km² of wavy relief with heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example on the study area show the local geoid model computed by the GRAVTool package, using 1377 terrestrial gravity data, SRTM data with 3 arc seconds of resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ± 0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ± 0.073 m) of 21 randomly spaced points where the geoid was determined by the geometrical levelling technique supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ± 0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).
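
The remove-compute-restore bookkeeping can be sketched at a single station. All values are illustrative, and a stand-in linear operator replaces the real Stokes integration performed by packages such as GRAVTool.

```python
g_observed = 25.0   # observed gravity anomaly, mGal (assumed)
g_global = 18.0     # long-wavelength part from the geopotential model (assumed)
g_terrain = 4.5     # short-wavelength terrain effect from the DTM (assumed)

# Remove: the residual anomaly is what actually enters the Stokes computation.
g_residual = g_observed - g_global - g_terrain

# Compute: residual geoid height from the residual anomaly (a stand-in linear
# operator here; the real step is Stokes' integration over the anomaly field).
N_residual = 0.01 * g_residual

# Restore: add back the geoid contributions of the removed parts.
N_global, N_terrain = 12.0, 0.05    # metres (assumed)
N = N_residual + N_global + N_terrain
print(g_residual, round(N, 3))
```

The point of the decomposition is that each wavelength band is handled by the data source that resolves it best; only the small residual has to be computed by integration.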

  20. A neuro-fuzzy computing technique for modeling hydrological time series

    Science.gov (United States)

    Nayak, P. C.; Sudheer, K. P.; Rangan, D. M.; Ramasastri, K. S.

    2004-05-01

    Intelligent computing tools such as artificial neural networks (ANN) and fuzzy logic approaches have proven efficient when applied individually to a variety of problems. Recently there has been growing interest in combining the two approaches, and as a result neuro-fuzzy computing techniques have evolved. This approach has been tested and evaluated in the field of signal processing and related areas, but researchers have only begun evaluating the potential of this neuro-fuzzy hybrid approach in hydrologic modeling studies. This paper presents the application of an adaptive neuro-fuzzy inference system (ANFIS) to hydrologic time series modeling, illustrated by an application to model the river flow of the Baitarani River in Orissa state, India. An introduction to the ANFIS modeling approach is also presented. The advantage of the method is that it does not require the model structure to be known a priori, in contrast to most time series modeling techniques. The results showed that the ANFIS-forecasted flow series preserves the statistical properties of the original flow series. The model showed good performance in terms of various statistical indices. The results are highly promising, and a comparative analysis suggests that the proposed modeling approach outperforms ANNs and other traditional time series models in terms of computational speed, forecast errors, efficiency, peak flow estimation, etc. It was observed that the ANFIS model fully preserves the potential of the ANN approach and eases the model building process.
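
The core of an ANFIS forecast is first-order Sugeno (TSK) inference, which can be sketched with two hand-set rules on a lagged flow value. The membership and consequent parameters below are assumed; ANFIS would learn them by hybrid least-squares/gradient training.

```python
import math

def gauss(x, c, s):
    """Gaussian membership function."""
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def tsk_forecast(x, rules):
    """First-order Sugeno inference: firing-strength-weighted average of the
    linear rule consequents y_i = a_i * x + b_i."""
    ws = [gauss(x, c, s) for (c, s, _, _) in rules]
    ys = [a * x + b for (_, _, a, b) in rules]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)

# Two hand-set rules on the lagged flow (parameters assumed):
#   IF flow(t) is LOW  THEN flow(t+1) = 0.9 * flow(t) + 5
#   IF flow(t) is HIGH THEN flow(t+1) = 0.7 * flow(t) + 40
rules = [(50.0, 30.0, 0.9, 5.0),
         (200.0, 60.0, 0.7, 40.0)]

print(round(tsk_forecast(50.0, rules), 2), round(tsk_forecast(200.0, rules), 2))
```

Because the output is a smooth blend of local linear models, no global model structure has to be fixed a priori, which is the advantage the abstract highlights.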

  1. Mathematical modeling of flow-injection techniques and their applications for environmental monitoring

    International Nuclear Information System (INIS)

    Begum, N.N.; Ahmed, J.

    2006-01-01

    A classification of the existing mathematical models of flow-injection (FI) manifolds, based on the main principles on which they are built, has been proposed. Numerous mathematical models of FI systems employing ideas from different scientific areas (e.g. mathematical statistics, chemical engineering, chromatography) have been developed so far. The models have been compared with respect to their predictive power, the complexity of their mathematical treatment, and the requirements for computation time when applied to single-line, multi-channel and conjugated two-line FI systems. It is concluded that the axially dispersed plug flow model deserves special attention because it offers an acceptable compromise between the conflicting requirements for maximal possible mathematical simplicity and maximal possible precision. The applicability of these existing flow-injection models to single-line, multi-channel and conjugated two-line systems for environmental monitoring is discussed. (author)
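
The axially dispersed plug flow model singled out above can be illustrated with its open-vessel residence-time distribution, which shows the response peak sharpening toward the plug-flow limit as the Peclet number grows. The Peclet values are illustrative.

```python
import math

def rtd(theta, Pe):
    """Open-vessel residence-time distribution of the axially dispersed
    plug-flow model; theta = t / t_mean, Pe = u * L / D."""
    return math.sqrt(Pe / (4 * math.pi * theta)) * math.exp(-Pe * (1 - theta) ** 2 / (4 * theta))

thetas = [i / 100 for i in range(1, 301)]   # dimensionless time grid
peaks = {}
for Pe in (10.0, 100.0):
    peaks[Pe] = max(thetas, key=lambda th: rtd(th, Pe))
print(peaks)   # the response peak moves toward theta = 1 as dispersion falls
```

A single parameter, Pe, thus spans the range from strong dispersion to near plug flow, which is what makes the model both simple and reasonably precise for FI peak shapes.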

  2. An overview of techniques for linking high-dimensional molecular data to time-to-event endpoints by risk prediction models.

    Science.gov (United States)

    Binder, Harald; Porzelius, Christine; Schumacher, Martin

    2011-03-01

    Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
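
Stage-wise fitting, the alternative to classical stepwise regression discussed above, can be sketched as componentwise L2 boosting on toy data. The learning rate, step count and data-generating model are assumed, and this least-squares sketch stands in for the component-wise likelihood-based boosting used for the lymphoma data.

```python
import random

def componentwise_boost(X, y, steps=200, nu=0.1):
    """Componentwise L2 boosting: each step gives only the single best-fitting
    covariate a small (nu-shrunken) least-squares coefficient update."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    resid = list(y)
    for _ in range(steps):
        best = None
        for j in range(p):
            xj = [row[j] for row in X]
            coef = sum(v * r for v, r in zip(xj, resid)) / sum(v * v for v in xj)
            rss = sum((r - coef * v) ** 2 for v, r in zip(xj, resid))
            if best is None or rss < best[2]:
                best = (j, coef, rss)
        j, coef, _ = best
        beta[j] += nu * coef
        resid = [r - nu * coef * row[j] for r, row in zip(resid, X)]
    return beta

rng = random.Random(0)
# 5 covariates; only the first two matter: y = 2*x0 - x1 + noise.
X = [[rng.gauss(0, 1) for _ in range(5)] for _ in range(100)]
y = [2 * row[0] - row[1] + rng.gauss(0, 0.1) for row in X]
beta = componentwise_boost(X, y)
print([round(b, 2) for b in beta])
```

Unlike stepwise selection, no covariate is ever added or dropped wholesale; irrelevant coefficients simply stay near zero, which is what makes the approach workable when p far exceeds n.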

  3. Forecasting performances of three automated modelling techniques during the economic crisis 2007-2009

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    2014-01-01

    . The performances of these three model selectors are compared by looking at the accuracy of the forecasts of the estimated neural network models. We apply the neural network model and the three modelling techniques to monthly industrial production and unemployment series from the G7 countries and the four......In this work we consider the forecasting of macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feed-forward autoregressive neural network models. What makes these models interesting in the present context is the fact...... that they form a class of universal approximators and may be expected to work well during exceptional periods such as major economic crises. Neural network models are often difficult to estimate, and we follow the idea of White (2006) of transforming the specification and nonlinear estimation problem...

  4. Combining Quality Work-Integrated Learning and Career Development Learning through the Use of the SOAR Model to Enhance Employability

    Science.gov (United States)

    Reddan, Gregory; Rauchle, Maja

    2017-01-01

    This paper presents students' perceptions of the benefits to employability of a suite of courses that incorporate both work-integrated learning (WIL) and career development learning (CDL). Field Project A and Field Project B are elective courses in the Bachelor of Exercise Science at Griffith University. These courses engage students in active and…

  5. Employment and the Reduction of the Work Week: A Comparison of Seven European Macro-economic Models.

    Science.gov (United States)

    van Ginneken, Wouter

    1984-01-01

    Analyzes the impact of a reduced work week on employment, productivity, wages, investment, economic growth, inflation, and government deficits. Concludes that reducing working hours would have greater effect if accompanied by wage reductions and limitation of overtime, but would not affect underlying causes of unemployment. (SK)

  6. Toward a Mediation Model of Employability Enhancement: A Study of Employee-Supervisor Pairs in the Building Sector

    Science.gov (United States)

    van der Heijden, Beatrice I. J. M.; Bakker, Arnold B.

    2011-01-01

    This study examines whether jobs that enable competence development and a constructive leadership style enhance workers' employability or career potential through their assumed positive relationship with work-related flow (absorption, work enjoyment, and intrinsic work motivation). The authors conducted an explorative study with 303 pairs of…

  7. Preparing At-Risk Youth for a Changing World: Revisiting a Person-in-Context Model for Transition to Employment

    Science.gov (United States)

    DeLuca, Christopher; Godden, Lorraine; Hutchinson, Nancy L.; Versnel, Joan

    2015-01-01

    Background: The current global cohort of youth has been called "a generation at-risk", marked by a dramatic rise in youth who are not in employment, education or training programmes. In 2010, youth were three times as likely as adults to be unemployed, with youth unemployment worsening in 2012 and 2013. Accordingly, there is an urgent…

  8. Machine Learning Techniques for Modelling Short Term Land-Use Change

    Directory of Open Access Journals (Sweden)

    Mileva Samardžić-Petrović

    2017-11-01

    Full Text Available The representation of land use change (LUC is often achieved by using data-driven methods that include machine learning (ML techniques. The main objectives of this research study are to implement three ML techniques, Decision Trees (DT, Neural Networks (NN, and Support Vector Machines (SVM for LUC modeling, in order to compare these three ML techniques and to find the appropriate data representation. The ML techniques are applied on the case study of LUC in three municipalities of the City of Belgrade, the Republic of Serbia, using historical geospatial data sets and considering nine land use classes. The ML models were built and assessed using two different time intervals. The information gain ranking technique and the recursive attribute elimination procedure were implemented to find the most informative attributes that were related to LUC in the study area. The results indicate that all three ML techniques can be used effectively for short-term forecasting of LUC, but the SVM achieved the highest agreement of predicted changes.
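
The information gain ranking mentioned above can be sketched for two toy categorical attributes. The parcels and classes are illustrative, not the Belgrade data set.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(attr_values, labels):
    """Information gain of one categorical attribute w.r.t. the class labels."""
    total, n = entropy(labels), len(labels)
    for v in set(attr_values):
        subset = [l for a, l in zip(attr_values, labels) if a == v]
        total -= len(subset) / n * entropy(subset)
    return total

# Toy parcels: (slope class, distance-to-road class) -> land-use change yes/no.
slope = ["flat", "flat", "steep", "steep", "flat", "steep"]
road = ["near", "far", "near", "far", "near", "far"]
changed = ["yes", "yes", "no", "no", "yes", "no"]

ranking = sorted([("slope", info_gain(slope, changed)),
                  ("road", info_gain(road, changed))],
                 key=lambda t: -t[1])
print(ranking)   # slope perfectly separates the classes, so it ranks first
```

The same score drives both the attribute ranking and the split choice inside the decision tree learner; recursive attribute elimination then discards the attributes that contribute the least.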

  9. Electricity market price spike analysis by a hybrid data model and feature selection technique

    International Nuclear Information System (INIS)

    Amjady, Nima; Keynia, Farshid

    2010-01-01

    In a competitive electricity market, energy price forecasting is an important activity for both suppliers and consumers. For this reason, many techniques have been proposed to predict electricity market prices in recent years. However, electricity price is a complex volatile signal exhibiting many spikes. Most electricity price forecast techniques focus on normal price prediction, while price spike forecasting is a different and more complex prediction process. Price spike forecasting has two main aspects: prediction of price spike occurrence and of its value. In this paper, a novel technique for price spike occurrence prediction is presented, composed of a new hybrid data model, a novel feature selection technique and an efficient forecast engine. The hybrid data model includes both wavelet and time domain variables as well as calendar indicators, comprising a large candidate input set. The set is refined by the proposed feature selection technique, which evaluates both relevancy and redundancy of the candidate inputs. The forecast engine is a probabilistic neural network, which is fed by the candidate inputs selected by the feature selection technique and predicts price spike occurrence. The efficiency of the whole proposed method for price spike occurrence forecasting is evaluated by means of real data from the Queensland and PJM electricity markets. (author)
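
The forecast engine, a probabilistic neural network, reduces to a Parzen-window classifier: one Gaussian kernel per training pattern and an arg-max over class-averaged activations. The feature vectors and patterns below are toy values, not the Queensland/PJM inputs.

```python
import math

def pnn_predict(x, train, sigma=0.5):
    """Probabilistic neural network: one Gaussian kernel per training pattern;
    class score = mean kernel activation; predict the arg-max class."""
    scores = {}
    for label in set(l for _, l in train):
        pats = [p for p, l in train if l == label]
        scores[label] = sum(
            math.exp(-sum((a - b) ** 2 for a, b in zip(x, p)) / (2 * sigma ** 2))
            for p in pats) / len(pats)
    return max(scores, key=scores.get)

# Toy patterns [demand level, price volatility], scaled to [0, 1]:
train = [([0.9, 0.8], "spike"), ([1.0, 0.9], "spike"), ([0.8, 1.0], "spike"),
         ([0.1, 0.2], "normal"), ([0.2, 0.1], "normal"), ([0.3, 0.3], "normal")]

print(pnn_predict([0.85, 0.9], train))   # near the spike patterns
print(pnn_predict([0.15, 0.2], train))   # near the normal patterns
```

Training is just pattern storage, so the engine's quality hinges on which inputs reach it, which is why the relevancy/redundancy feature selection step matters so much in the proposed method.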

  10. Evaluation of mesh morphing and mapping techniques in patient specific modeling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2013-01-01

    Robust generation of pelvic finite element models is necessary to understand the variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmarked-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis and their strain distributions evaluated. Morphing and mapping techniques were effectively applied to generate good quality geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.
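
The landmark-based morphing step, shifting exterior source nodes along a normal onto the target surface, can be sketched in 2-D with circles standing in for the pelvic surfaces. The geometry is illustrative only.

```python
import math

def morph_to_target(source_nodes, target_radius):
    """Move each boundary node along its outward normal (radial, for a circle)
    onto the target surface."""
    morphed = []
    for x, y in source_nodes:
        r = math.hypot(x, y)
        morphed.append((x / r * target_radius, y / r * target_radius))
    return morphed

# Source mesh boundary: unit circle sampled at 8 nodes; target surface: radius 1.3.
src = [(math.cos(2 * math.pi * i / 8), math.sin(2 * math.pi * i / 8)) for i in range(8)]
dst = morph_to_target(src, 1.3)
print([round(math.hypot(x, y), 3) for x, y in dst])
```

The real pipeline constrains each shift along the node normal and snaps to CT-derived target vertices rather than an analytic surface; the subsequent mapping stage then re-projects surface patches to tighten the fit.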

  11. Evaluation of mesh morphing and mapping techniques in patient specific modelling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2012-08-01

    Robust generation of pelvic finite element models is necessary to understand variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmarked-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good quality and geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.

  12. Control System Design for Cylindrical Tank Process Using Neural Model Predictive Control Technique

    Directory of Open Access Journals (Sweden)

    M. Sridevi

    2010-10-01

    Full Text Available Chemical manufacturing and process industries require innovative technologies for process identification. This paper deals with model identification and control of a cylindrical tank process. Model identification of the process was done using the ARMAX technique. A neural model predictive controller was designed for the identified model. The performance of the controllers was evaluated using MATLAB software. The performance of the NMPC controller was compared with a Smith predictor controller and an IMC controller on the basis of rise time, settling time, overshoot and ISE, and it was found that the NMPC controller is better suited for this process.

  13. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo techniques for analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall, PMCMC provides a very compelling, computationally fast...... and efficient framework for estimation. These advantages are used, for instance, to estimate stochastic volatility models with leverage effects or with Student-t distributed errors. We also model changing time series characteristics of the US inflation rate by considering a heteroskedastic ARFIMA model where

  14. Fuzzy modeling and control of rotary inverted pendulum system using LQR technique

    International Nuclear Information System (INIS)

    Fairus, M A; Mohamed, Z; Ahmad, M N

    2013-01-01

    Rotary inverted pendulum (RIP) systems are nonlinear, non-minimum phase, unstable and underactuated. Controlling such a system can be a challenge and is considered a benchmark problem in control theory. Prior to designing a controller, equations that represent the behaviour of the RIP system must be developed as accurately as possible without compromising the complexity of the equations. Through the Takagi-Sugeno (T-S) fuzzy modeling technique, the nonlinear system model is transformed into several local linear time-invariant models, which are then blended together to reproduce, or approximate, the nonlinear system model within a local region. A parallel distributed compensation (PDC) based fuzzy controller using the linear quadratic regulator (LQR) technique is designed to control the RIP system. The results show that the designed controller is able to balance the RIP system
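
The LQR design applied to each local linear model can be sketched for a scalar unstable system via the discrete-time Riccati recursion. The system and weights below are assumed values, not the RIP parameters.

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Discrete-time LQR gain for x[k+1] = a*x[k] + b*u[k], cost q*x^2 + r*u^2:
    iterate the Riccati recursion to its fixed point, then return the gain."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * (a - b * k)
    return (b * p * a) / (r + b * p * b)

a, b, q, r = 1.2, 1.0, 1.0, 1.0   # unstable open loop (|a| > 1), assumed weights
k = dlqr_scalar(a, b, q, r)
closed = a - b * k                 # closed-loop pole under u = -k*x
print(round(k, 4), round(closed, 4))
```

In the PDC scheme, one such gain is computed per T-S local model and the control action is the firing-strength-weighted blend of the local feedbacks.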

  15. Advanced particle-in-cell simulation techniques for modeling the Lockheed Martin Compact Fusion Reactor

    Science.gov (United States)

    Welch, Dale; Font, Gabriel; Mitchell, Robert; Rose, David

    2017-10-01

    We report on particle-in-cell developments for the study of the Compact Fusion Reactor. Millisecond, two- and three-dimensional simulations (cubic meter volume) of confinement and neutral beam heating of the magnetic confinement device require accurate representation of the complex orbits, near-perfect energy conservation, and significant computational power. In order to determine the initial plasma fill and neutral beam heating, these simulations include ionization, elastic and charge exchange hydrogen reactions. To this end, we are pursuing fast electromagnetic kinetic modeling algorithms, including two implicit techniques and a hybrid quasi-neutral algorithm with kinetic ions. The kinetic modeling includes use of Poisson-corrected direct implicit, magnetic implicit, and second-order cloud-in-cell techniques. The hybrid algorithm, which ignores electron inertial effects, is two orders of magnitude faster than the kinetic approach but not as accurate with respect to confinement. The advantages and disadvantages of these techniques will be presented. Funded by Lockheed Martin.

  16. Analysis of Multipath Mitigation Techniques with Land Mobile Satellite Channel Model

    Directory of Open Access Journals (Sweden)

    M. Z. H. Bhuiyan; J. Zhang

    2012-12-01

    Full Text Available Multipath is undesirable for Global Navigation Satellite System (GNSS) receivers, since the reception of multipath can create a significant distortion to the shape of the correlation function, leading to an error in the receivers’ position estimate. Many multipath mitigation techniques exist in the literature to deal with the multipath propagation problem in the context of GNSS. The multipath studies in the literature are often based on optimistic assumptions, for example, assuming a static two-path channel or a fading channel with a Rayleigh or a Nakagami distribution. But, in reality, there are many channel modeling issues, for example, satellite-to-user geometry, variable number of paths, variable path delays and gains, Non-Line-Of-Sight (NLOS) path conditions, receiver movements, etc., that are left out of consideration when analyzing the performance of these techniques. Therefore, it is of utmost importance to analyze the performance of different multipath mitigation techniques in realistic measurement-based channel models, for example, the Land Mobile Satellite channel model.

  17. Robotic and endoscopic transoral thyroidectomy: feasibility and description of the technique in the cadaveric model.

    Science.gov (United States)

    Kahramangil, Bora; Mohsin, Khuzema; Alzahrani, Hassan; Bu Ali, Daniah; Tausif, Syed; Kang, Sang-Wook; Kandil, Emad; Berber, Eren

    2017-12-01

    Numerous new approaches have been described over the years to improve the cosmetic outcomes of thyroid surgery. The transoral approach is a new technique that aims to achieve superior cosmetic outcomes by concealing the incision in the oral cavity. Transoral thyroidectomy through the vestibular approach was performed on cadaveric models at two institutions. The procedure was performed endoscopically at one institution, while the robotic technique was utilized at the other. Transoral thyroidectomy was successfully performed at both institutions with robotic and endoscopic techniques. All vital structures were identified and preserved. Transoral thyroidectomy has been performed in animal and cadaveric models, as well as in some clinical studies. Our initial experience indicates the feasibility of this approach. More clinical studies are required to elucidate its full utility.

  18. Modeling of high-pressure generation using the laser colliding foil technique

    Energy Technology Data Exchange (ETDEWEB)

    Fabbro, R.; Faral, B.; Virmont, J.; Cottet, F.; Romain, J.P.

    1989-03-01

    An analytical model describing the collision of two foils is presented and applied to the collision of laser-accelerated foils. Numerical simulations have been made to verify this model and to compare its results in the case of laser-accelerated foils. Scaling laws relating the different parameters (shock pressure, laser intensity, target material, etc.) have been established. The application of this technique to high-pressure equation of state experiments is then discussed.

  19. Modeling of high-pressure generation using the laser colliding foil technique

    International Nuclear Information System (INIS)

    Fabbro, R.; Faral, B.; Virmont, J.; Cottet, F.; Romain, J.P.

    1989-01-01

    An analytical model describing the collision of two foils is presented and applied to the collision of laser-accelerated foils. Numerical simulations have been made to verify this model and to compare its results in the case of laser-accelerated foils. Scaling laws relating the different parameters (shock pressure, laser intensity, target material, etc.) have been established. The application of this technique to high-pressure equation of state experiments is then discussed.

  20. Stakeholder approach, Stakeholders mental model: A visualization test with cognitive mapping technique

    Directory of Open Access Journals (Sweden)

    Garoui Nassreddine

    2012-04-01

    Full Text Available The idea of this paper is to determine the mental models of actors in the firm with respect to the stakeholder approach to corporate governance. Cognitive maps are used to visualize these mental models and to show the ways in which the stakeholder approach is thought about and conceptualized. The paper takes a corporate governance perspective and discusses the stakeholder model, employing a cognitive mapping technique.

  1. Application of class-modelling techniques to infrared spectra for analysis of pork adulteration in beef jerky.

    Science.gov (United States)

    Kuswandi, Bambang; Putri, Fitra Karima; Gani, Agus Abdul; Ahmad, Musa

    2015-12-01

    The use of chemometrics to analyse infrared spectra to predict pork adulteration in beef jerky (dendeng) was explored. In the first step, pork in the beef jerky formulation was analysed by blending the beef jerky with pork at 5-80 % levels. The samples were then powdered and divided into a training set and a test set. In the second step, the spectra of the two sets were recorded by Fourier Transform Infrared (FTIR) spectroscopy using an attenuated total reflection (ATR) cell on the basis of spectral data in the frequency region 4000-700 cm(-1). The spectra were categorised into four data sets, i.e. (a) spectra in the whole region as data set 1; (b) spectra in the fingerprint region (1500-600 cm(-1)) as data set 2; (c) spectra in the whole region with treatment as data set 3; and (d) spectra in the fingerprint region with treatment as data set 4. In the third step, chemometric analysis was carried out on the data sets using three class-modelling techniques (i.e. LDA, SIMCA, and SVM). Finally, the model performing best on the adulteration analysis of the samples was selected and compared with the ELISA method. From the chemometric results, the LDA model on data set 1 was found to be the best model, since it could classify and predict the tested samples with 100 % accuracy. The LDA model was applied to real samples of beef jerky marketed in Jember, and the results showed that the LDA model developed was in good agreement with the ELISA method.
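
    As a rough illustration of the winning class-modelling technique, a two-class Fisher LDA can be written directly in NumPy. The synthetic "spectra" below are invented stand-ins for the FTIR data (feature count, class separation and noise level are all assumptions), so only the mechanics, not the 100 % figure, carry over.

```python
import numpy as np

# Two-class Fisher LDA sketch on synthetic "spectra" (assumed data).
rng = np.random.default_rng(0)
n_feat = 20
mean_pure = np.zeros(n_feat)           # pure beef class (assumed)
mean_adult = np.full(n_feat, 0.8)      # pork-adulterated class (assumed)
X0 = rng.normal(mean_pure, 1.0, size=(50, n_feat))
X1 = rng.normal(mean_adult, 1.0, size=(50, n_feat))

m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
# Pooled within-class scatter, slightly regularized for invertibility.
Sw = np.cov(X0.T) + np.cov(X1.T) + 1e-6 * np.eye(n_feat)
w = np.linalg.solve(Sw, m1 - m0)       # discriminant direction
thresh = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()

def predict(X):
    # 0 = pure beef, 1 = adulterated
    return (X @ w > thresh).astype(int)

acc = np.mean(np.r_[predict(X0) == 0, predict(X1) == 1])
print(acc)
```

    The discriminant direction is the solve of the within-class scatter against the mean difference; classification is a one-dimensional threshold on the projected spectra.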

  2. The need for novel model order reduction techniques in the electronics industry (Chapter 1)

    NARCIS (Netherlands)

    Schilders, W.H.A.; Benner, P.; Hinze, M.; Maten, ter E.J.W.

    2011-01-01

    In this paper, we discuss the present and future needs of the electronics industry with regard to model order reduction. The industry has always been one of the main motivating fields for the development of MOR techniques, and continues to play this role. We discuss the search for provably passive

  3. Combined rock-physical modelling and seismic inversion techniques for characterisation of stacked sandstone reservoir

    NARCIS (Netherlands)

    Justiniano, A.; Jaya, Y.; Diephuis, G.; Veenhof, R.; Pringle, T.

    2015-01-01

    The objective of the study is to characterise the Triassic massive stacked sandstone deposits of the Main Buntsandstein Subgroup at Block Q16 located in the West Netherlands Basin. The characterisation was carried out through combining rock-physics modelling and seismic inversion techniques. The

  4. Application of modelling techniques in the food industry: determination of shelf-life for chilled foods

    NARCIS (Netherlands)

    Membré, J.M.; Johnston, M.D.; Bassett, J.; Naaktgeboren, G.; Blackburn, W.; Gorris, L.G.M.

    2005-01-01

    Microbiological modelling techniques (predictive microbiology, the Bayesian Markov Chain Monte Carlo method and a probability risk assessment approach) were combined to assess the shelf-life of an in-pack heat-treated, low-acid sauce intended to be marketed under chilled conditions. From a safety

  5. Extending the reach of strong-coupling: an iterative technique for Hamiltonian lattice models

    International Nuclear Information System (INIS)

    Alberty, J.; Greensite, J.; Patkos, A.

    1983-12-01

    The authors propose an iterative method for doing lattice strong-coupling-like calculations in a range of medium to weak couplings. The method is a modified Lanczos scheme, with greatly improved convergence properties. The technique is tested on the Mathieu equation and on a Hamiltonian finite-chain XY model, with excellent results. (Auth.)
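
    For reference, the plain (unmodified) Lanczos scheme that the paper builds on can be sketched on a generic symmetric matrix; the paper's convergence improvements are not reproduced here, and the matrix below is random rather than a physical Hamiltonian.

```python
import numpy as np

# Plain Lanczos iteration with full reorthogonalization on a random
# symmetric matrix (a stand-in for a lattice Hamiltonian).
rng = np.random.default_rng(1)
M = rng.normal(size=(30, 30))
H = (M + M.T) / 2

def lanczos(H, k):
    n = H.shape[0]
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    q = rng.normal(size=n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(k):
        w = H @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        # Full reorthogonalization keeps the basis numerically orthogonal.
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    # Eigenvalues of the tridiagonal T approximate extreme eigenvalues of H.
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

ritz = lanczos(H, 20)
exact = np.linalg.eigvalsh(H)
print(abs(ritz[-1] - exact[-1]))
```

    The extreme Ritz values converge first, which is why Lanczos-type schemes suit ground-state (strong-coupling-like) estimates.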

  6. New model reduction technique for a class of parabolic partial differential equations

    NARCIS (Netherlands)

    Vajta, Miklos

    1991-01-01

    A model reduction (or lumping) technique for a class of parabolic-type partial differential equations is given, and its application is discussed. The frequency response of the temperature distribution in any multilayer solid is developed and given by a matrix expression. The distributed transfer

  7. The commercial use of segmentation and predictive modeling techniques for database marketing in the Netherlands

    NARCIS (Netherlands)

    Verhoef, PC; Spring, PN; Hoekstra, JC; Leeflang, PSH

    Although the application of segmentation and predictive modeling is an important topic in the database marketing (DBM) literature, no study has yet investigated the extent of adoption of these techniques. We present the results of a Dutch survey involving 228 database marketing companies. We find

  8. Evaluation of inverse modeling techniques for pinpointing water leakages at building constructions

    NARCIS (Netherlands)

    Schijndel, van A.W.M.

    2015-01-01

    The location and nature of the moisture leakages are sometimes difficult to detect. Moreover, the relation between observed inside surface moisture patterns and where the moisture enters the construction is often not clear. The objective of this paper is to investigate inverse modeling techniques as

  9. Numerical modelling of radon-222 entry into houses: An outline of techniques and results

    DEFF Research Database (Denmark)

    Andersen, C.E.

    2001-01-01

    Numerical modelling is a powerful tool for studies of soil gas and radon-222 entry into houses. It is the purpose of this paper to review some main techniques and results. In the past, modelling has focused on Darcy flow of soil gas (driven by indoor–outdoor pressure differences) and combined diffusive and advective transport of radon. Models of different complexity have been used. The simpler ones are finite-difference models with one or two spatial dimensions. The more complex models allow for full three-dimensionality and time dependency. Advanced features include: soil heterogeneity, anisotropy, fractures, moisture, non-uniform soil temperature, non-Darcy flow of gas, and flow caused by changes in the atmospheric pressure. Numerical models can be used to estimate the importance of specific factors for radon entry. Models are also helpful when results obtained in special laboratory or test structure...
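
    The simpler finite-difference class of models mentioned above can be sketched in one dimension: steady-state diffusive plus advective radon transport with decay and generation in a soil column. All soil parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# 1D steady state:  D c'' - v c' - lam c + G = 0
# with c = c_deep at depth and c = 0 at the soil surface.
n = 201
L = 2.0                      # column length, m (assumed)
x = np.linspace(0.0, L, n)
h = x[1] - x[0]
D = 2e-6                     # effective diffusivity, m^2/s (assumed)
v = 1e-6                     # advective (Darcy) velocity, m/s (assumed)
lam = 2.1e-6                 # Rn-222 decay constant, 1/s
G = 1e-2                     # generation rate, Bq/(m^3 s) (assumed)
c_deep = G / lam             # deep-soil equilibrium concentration

A = np.zeros((n, n))
b = np.full(n, -G)
for i in range(1, n - 1):
    A[i, i - 1] = D / h**2 + v / (2 * h)   # central differences
    A[i, i]     = -2 * D / h**2 - lam
    A[i, i + 1] = D / h**2 - v / (2 * h)
A[0, 0] = 1.0;  b[0] = c_deep              # deep-soil boundary
A[-1, -1] = 1.0; b[-1] = 0.0               # surface boundary
c = np.linalg.solve(A, b)

# Combined diffusive + advective entry flux at the surface.
flux = -D * (c[-1] - c[-2]) / h + v * c[-1]
print(flux)
```

    The concentration profile stays between the surface value and the deep-soil equilibrium, and the surface flux is positive, i.e., radon enters the building side.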

  10. Nonlinear modelling of polymer electrolyte membrane fuel cell stack using nonlinear cancellation technique

    Energy Technology Data Exchange (ETDEWEB)

    Barus, R. P. P., E-mail: rismawan.ppb@gmail.com [Engineering Physics, Faculty of Industrial Technology, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung and Centre for Material and Technical Product, Jalan Sangkuriang No. 14 Bandung (Indonesia); Tjokronegoro, H. A.; Leksono, E. [Engineering Physics, Faculty of Industrial Technology, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung (Indonesia); Ismunandar [Chemistry Study, Faculty of Mathematics and Science, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung (Indonesia)

    2014-09-25

    Fuel cells are promising new energy conversion devices that are friendly to the environment. A set of control systems is required in order to operate a fuel cell based power plant system optimally. For the purpose of control system design, an accurate fuel cell stack model describing the dynamics of the real system is needed. Currently, linear models are widely used for fuel cell stack control purposes, but they are limited to a narrow operation range, while nonlinear models lead to nonlinear control implementations, which are more complex and computationally demanding. In this research, the nonlinear cancellation technique is used to transform a nonlinear model into a linear form while maintaining the nonlinear characteristics. The transformation is done by replacing the input of the original model with a virtual input that has a nonlinear relationship with the original input. The equivalence of the two models is then tested by running a series of simulations. Input variations of H2, O2 and H2O, as well as the disturbance input I (current load), are studied by simulation. The error between the proposed model and the original nonlinear model is less than 1 %. Thus we conclude that the nonlinear cancellation technique can represent a fuel cell nonlinear model in a simple linear form while maintaining the nonlinear characteristics and therefore retaining the wide operation range.

  11. Nonlinear modelling of polymer electrolyte membrane fuel cell stack using nonlinear cancellation technique

    International Nuclear Information System (INIS)

    Barus, R. P. P.; Tjokronegoro, H. A.; Leksono, E.; Ismunandar

    2014-01-01

    Fuel cells are promising new energy conversion devices that are friendly to the environment. A set of control systems is required in order to operate a fuel cell based power plant system optimally. For the purpose of control system design, an accurate fuel cell stack model describing the dynamics of the real system is needed. Currently, linear models are widely used for fuel cell stack control purposes, but they are limited to a narrow operation range, while nonlinear models lead to nonlinear control implementations, which are more complex and computationally demanding. In this research, the nonlinear cancellation technique is used to transform a nonlinear model into a linear form while maintaining the nonlinear characteristics. The transformation is done by replacing the input of the original model with a virtual input that has a nonlinear relationship with the original input. The equivalence of the two models is then tested by running a series of simulations. Input variations of H2, O2 and H2O, as well as the disturbance input I (current load), are studied by simulation. The error between the proposed model and the original nonlinear model is less than 1 %. Thus we conclude that the nonlinear cancellation technique can represent a fuel cell nonlinear model in a simple linear form while maintaining the nonlinear characteristics and therefore retaining the wide operation range.
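
    The input-replacement idea can be demonstrated on a scalar toy plant rather than the fuel cell stack model (the plant, the gain a and the virtual-input signal below are assumptions for illustration): choosing the real input to cancel the nonlinearity makes the plant respond to the virtual input exactly like a linear system.

```python
import numpy as np

# Toy plant (assumed):  x' = -x**3 + u.
# Real input  u = x**3 - a*x + v  cancels the cubic term, so the plant
# becomes the linear system  x' = -a*x + v  driven by virtual input v.
a = 2.0
dt = 1e-3
t = np.arange(0.0, 5.0, dt)
v = np.sin(t)                            # arbitrary virtual-input signal

x_nl, x_lin = 1.0, 1.0
err = 0.0
for k in range(len(t)):
    u = x_nl**3 - a * x_nl + v[k]        # cancellation law (uses plant state)
    x_nl += dt * (-x_nl**3 + u)          # original nonlinear model
    x_lin += dt * (-a * x_lin + v[k])    # equivalent linear model
    err = max(err, abs(x_nl - x_lin))
print(err)
```

    The two simulated trajectories coincide to rounding error, mirroring the paper's comparison of the cancelled model against the original nonlinear model.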

  12. CT and MRI assessment and characterization using segmentation and 3D modeling techniques: applications to muscle, bone and brain

    Directory of Open Access Journals (Sweden)

    Paolo Gargiulo

    2014-03-01

    Full Text Available This paper reviews the novel use of CT and MRI data and image processing tools to segment and reconstruct tissue images in 3D in order to determine characteristics of muscle, bone and brain, and to study and simulate the structural changes occurring in healthy and pathological conditions as well as in response to clinical treatments. Here we report the application of this methodology to evaluate and quantify: 1. progression of atrophy in human muscle subsequent to permanent lower motor neuron (LMN) denervation, 2. muscle recovery as induced by functional electrical stimulation (FES), 3. bone quality in patients undergoing total hip replacement, and 4. the electrical activity of the brain. Study 1: CT data and segmentation techniques were used to quantify changes in muscle density and composition by associating the Hounsfield unit values of muscle, adipose and fibrous connective tissue with different colors. This method was employed to monitor patients who have permanent LMN denervation in the lower extremities under two different conditions: permanently denervated without electrical stimulation, and stimulated. Study 2: CT data and segmentation techniques were again employed; in this work, however, we assessed bone and muscle conditions in the pre-operative CT scans of patients scheduled to undergo total hip replacement. The overall anatomical structure, the bone mineral density (BMD) and the compactness of the quadriceps muscles and the proximal femur were computed to provide a more complete view for surgeons when deciding which implant technology to use. Furthermore, a finite element analysis provided a map of the strains around the proximal femur socket under the typical stresses caused by press-fitting an implant. Study 3 describes a method to model the electrical behavior of the human brain using segmented MR images. The aim of the work is to use these models to predict the electrical activity of the human brain under normal and pathological conditions.

  13. Experience with the Large Eddy Simulation (LES) Technique for the Modelling of Premixed and Non-premixed Combustion

    OpenAIRE

    Malalasekera, W; Ibrahim, SS; Masri, AR; Gubba, SR; Sadasivuni, SK

    2013-01-01

    Compared to RANS based combustion modelling, the Large Eddy Simulation (LES) technique has recently emerged as a more accurate and very adaptable technique in terms of handling complex turbulent interactions in combustion modelling problems. In this paper application of LES based combustion modelling technique and the validation of models in non-premixed and premixed situations are considered. Two well defined experimental configurations where high quality data are available for validation is...

  14. The Effect of Off-Farm Employment on Forestland Transfers in China: A Simultaneous-Equation Tobit Model Estimation

    Directory of Open Access Journals (Sweden)

    Han Zhang

    2017-09-01

    Full Text Available China’s new round of tenure reform has devolved collective forests to individuals on an egalitarian basis. To balance the equity–efficiency dilemma, forestland transfers are strongly advocated by policymakers. However, the forestland rental market has remained inactive since the reform. To examine the role of off-farm employment in forestland transfers, a simultaneous Tobit system of equations was employed to account for endogeneity, interdependency, and censoring issues. Accordingly, the Nelson–Olson two-stage procedure, embedded with a multivariate Tobit estimator, was applied to a nationally representative dataset. The estimation results showed that off-farm employment plays a significantly negative role in forestland rent-in, at the 5% risk level. However, off-farm activities had no significant effect on forestland rent-out. Considering China’s specific situation, a reasonable explanation is that households hold forestland as a crucial means of social security against the risk of unemployment. In both the rent-in and rent-out equations, high transaction costs are one of the main obstacles impeding forestland transfer. A remarkable finding was that forestland transactions occurred with a statistically significant factor-equalization effect, which would help adjust the mismatched labor–land ratio and improve land-use efficiency.

  15. Finite element modeling of penetration of rigid cylindrical bar impacting on a clamped circular plate, employing a strain rate dependent flow rate and a Gruneisen equation of state

    International Nuclear Information System (INIS)

    Tariq, M.; Khan, I.A.

    2003-01-01

    A time-dependent finite element simulation of a rigid cylindrical bar penetrating a copper plate is conducted to demonstrate how material behavior appears to change when the Johnson-Cook plasticity rule is employed along with a Gruneisen equation of state with a cubic shock velocity-particle velocity relationship, defining pressure for both compressed and expanded materials, as compared to the behavior when only an isotropic strain-hardening model is employed. The bar impacts the plate with a velocity of 1000 m/s and penetrates it, a portion of it emerging from the other side. Results are obtained and compared for both an isotropic strain-hardening model and a model incorporating the Johnson-Cook flow rule along with the Gruneisen equation of state. (author)
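
    The Johnson-Cook flow rule referred to above multiplies a strain-hardening term, a strain-rate term and a thermal-softening term. A sketch with commonly quoted constants for copper follows; the constants are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Johnson-Cook flow stress:
#   sigma = (A + B*eps^n) * (1 + C*ln(edot/edot0)) * (1 - T*^m)
# Constants below are commonly quoted OFHC-copper values (assumed).
A, B, n = 90e6, 292e6, 0.31      # Pa, Pa, -
C, m = 0.025, 1.09
edot0 = 1.0                      # reference strain rate, 1/s
T_room, T_melt = 293.0, 1356.0   # K

def jc_stress(eps, edot, T):
    Tstar = (T - T_room) / (T_melt - T_room)   # homologous temperature
    rate = max(edot / edot0, 1.0)              # clamp below reference rate
    return (A + B * eps**n) * (1 + C * np.log(rate)) * (1 - Tstar**m)

s1 = jc_stress(0.1, 1e4, 400.0)   # high strain rate, warm
s2 = jc_stress(0.1, 1.0, 400.0)   # quasi-static, warm
print(s1, s2)
```

    Rate hardening raises the flow stress at impact-like strain rates, strain hardening raises it with accumulated plastic strain, and thermal softening lowers it toward the melt temperature, which is why the penetration response differs from a purely isotropic strain-hardening model.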

  16. Analysis of the chemical elements behavior in leaves infected with fumagina employing X-ray fluorescence technique; Analise do comportamento de elementos quimicos em folhas infectadas com fumagina atraves da tecnica de fluorescencia de raios-X

    Energy Technology Data Exchange (ETDEWEB)

    Blonski, Maria Selia

    2002-02-01

    Fumagina (sooty mold) is produced by Capnodium fungi, of the order Dothideales, that live in association with the yellow aphid (Monelliopsis pecanis), which excretes a heavy load of a sugary substance that supports the sooty fungus development, covering, in some cases totally, the surfaces of leaves, fruits and branches. One of the main problems caused by this disease is the physical blockage of photosynthesis, reducing the photosynthetically active radiation by 98% and the net photosynthesis by 70%. This reduction in photosynthesis affects the plants' carbohydrate reserves, resulting in reduced and poor-quality crops. The Energy Dispersive X-Ray Fluorescence (EDXRF) technique was employed to study the fumagina disease. The measurements were performed on infected and healthy leaves of citric plants (orange and lemon trees), collected at the School Farm of the State University of Londrina, and also on leaves of the icsora plant from Piracicaba-SP. Through this technique a metabolic disorder related to the elements Fe and Ca was observed. By comparing the EDXRF spectra of the healthy plant leaves with the infected ones, it was verified that the concentration of Ca is higher than the concentration of Fe in leaves of healthy plants, while for the leaves of plants infected with fumagina a higher concentration of Fe was observed. Employing EDXRF and the emission/transmission method for matrix effect correction, samples of the citric plants were also analysed in pellet form. The elements K, Ca, Ti, Mn, Fe, Cu and Zn were quantified. A higher Fe concentration was verified in fumagina samples compared to the samples without fumagina. For all the elements of interest the minimum level of detection found was of the order of {mu}g g{sup -1}. (author)

  17. A note on the multi model super ensemble technique for reducing forecast errors

    International Nuclear Information System (INIS)

    Kantha, L.; Carniel, S.; Sclavo, M.

    2008-01-01

    The multi model super ensemble (SE) technique has been used with considerable success to improve meteorological forecasts and is now being applied to ocean models. Although the technique has been shown to produce deterministic forecasts that can be superior to the individual models in the ensemble or a simple multi model ensemble forecast, there is a clear need to understand its strengths and limitations. This paper is an attempt to do so in simple, easily understood contexts. The results demonstrate that the SE forecast is almost always better than the simple ensemble forecast, the degree of improvement depending on the properties of the models in the ensemble. However, the skill of the SE forecast with respect to the true forecast depends on a number of factors, principal among which is the skill of the models in the ensemble. As can be expected, if the ensemble consists of models with poor skill, the SE forecast will also be poor, although better than the ensemble forecast. On the other hand, the inclusion of even a single skillful model in the ensemble increases the forecast skill significantly.
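
    A toy version of the superensemble idea can be written in a few lines. The "truth" and member-model time series below are synthetic, invented for illustration: regression weights are fit against observations over a training window and then applied in the forecast window, where the weighted sum beats the plain ensemble mean whenever member biases are systematic.

```python
import numpy as np

# Synthetic truth and three biased, noisy "models" of differing skill.
rng = np.random.default_rng(2)
t = np.arange(400)
truth = np.sin(2 * np.pi * t / 100)
models = np.stack([
    1.2 * truth + 0.3 + 0.10 * rng.normal(size=t.size),
    0.8 * truth - 0.2 + 0.20 * rng.normal(size=t.size),
    0.5 * truth + 0.5 + 0.40 * rng.normal(size=t.size),
])

train, test = slice(0, 300), slice(300, 400)

# Superensemble: least-squares weights (plus intercept) on the training
# period, applied unchanged in the forecast period.
X = np.vstack([models[:, train], np.ones(300)]).T
coef, *_ = np.linalg.lstsq(X, truth[train], rcond=None)
Xt = np.vstack([models[:, test], np.ones(100)]).T
se = Xt @ coef

ens = models[:, test].mean(axis=0)       # simple ensemble mean

def rmse(f):
    return np.sqrt(np.mean((f - truth[test]) ** 2))

print(rmse(se), rmse(ens))
```

    The regression removes each member's systematic bias and amplitude error, which is exactly why the SE forecast outperforms the simple ensemble mean in the paper's contexts.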

  18. A characteristic study of CCF modeling techniques and optimization of CCF defense strategies

    International Nuclear Information System (INIS)

    Kim, Min Chull

    2000-02-01

    Common Cause Failures (CCFs) are among the major contributors to risk and core damage frequency (CDF) from operating nuclear power plants (NPPs). Our study on CCF focused on the following aspects: 1) a characteristic study of CCF modeling techniques and 2) development of an optimal CCF defense strategy. First, the characteristics of CCF modeling techniques were studied through a sensitivity study of CCF occurrence probability with respect to system redundancy. The modeling techniques considered in this study include those most widely used worldwide, i.e., the beta factor, MGL, alpha factor, and binomial failure rate models. We found that the MGL and alpha factor models are essentially identical in terms of the CCF probability. Second, in the study of CCF defense, the various methods identified in previous studies for defending against CCF were classified into five categories. Based on these categories, we developed a generic method by which the optimal CCF defense strategy can be selected. The method is not only qualitative but also quantitative in nature: the selection of the optimal strategy among candidates is based on the analytic hierarchy process (AHP). We applied this method to two motor-driven valves for containment sump isolation in the Ulchin 3 and 4 nuclear power plants. The result indicates that the method for developing an optimal CCF defense strategy is effective.
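
    The redundancy sensitivity mentioned above can be sketched with the simplest of the listed models, the beta factor model. The numbers below are illustrative assumptions, not values from the thesis: the common-cause part beta*Q fails all trains together, so it puts a floor under the system failure probability that added redundancy cannot remove.

```python
# Beta-factor sketch: per-train failure probability Q is split into an
# independent part (1 - beta) * Q and a common-cause part beta * Q that
# fails all redundant trains at once.  Q and beta are assumed values.
Q = 1e-3
beta = 0.1

def q_system(n_trains, beta, Q):
    """System failure probability when all n identical trains must fail."""
    q_independent = (1 - beta) * Q
    return q_independent ** n_trains + beta * Q

for n in (2, 3, 4):
    print(n, q_system(n, beta, Q))
```

    Already for two trains the common-cause floor beta*Q dominates, which is why CCF defense measures (diversity, staggered testing, and so on) rather than extra redundancy are the effective countermeasure.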

  19. HIGHLY-ACCURATE MODEL ORDER REDUCTION TECHNIQUE ON A DISCRETE DOMAIN

    Directory of Open Access Journals (Sweden)

    L. D. Ribeiro

    2015-09-01

    Full Text Available Abstract In this work, we present a highly-accurate technique of model order reduction applied to staged processes. The proposed method reduces the dimension of the original system based on null values of moment-weighted sums of heat and mass balance residuals on real stages. To compute these sums of weighted residuals, a discrete form of Gauss-Lobatto quadrature was developed, allowing a high degree of accuracy in these calculations. The locations where the residuals are cancelled vary with time and operating conditions, characterizing a desirable adaptive nature of this technique. Balances related to upstream and downstream devices (such as the condenser, reboiler, and feed tray of a distillation column) are considered as boundary conditions of the corresponding difference-differential equation system. The chosen number of moments is the dimension of the reduced model, which is much lower than the dimension of the complete model and does not depend on the size of the original model. Scaling of the discrete independent variable related to the stages was crucial for the computational implementation of the proposed method, avoiding the accumulation of round-off errors present even in low-degree polynomial approximations in the original discrete variable. Dynamical simulations of distillation columns were carried out to check the performance of the proposed model order reduction technique. The obtained results show the superiority of the proposed procedure in comparison with the orthogonal collocation method.

  20. Dynamic model reduction: An overview of available techniques with application to power systems

    Directory of Open Access Journals (Sweden)

    Đukić Savo D.

    2012-01-01

    Full Text Available This paper summarises the model reduction techniques used for the reduction of large-scale linear and nonlinear dynamic models, described by the differential and algebraic equations that are commonly used in control theory. The groups of methods discussed in this paper for reduction of the linear dynamic model are based on singular perturbation analysis, modal analysis, singular value decomposition, moment matching and methods based on a combination of singular value decomposition and moment matching. Among the nonlinear dynamic model reduction methods, proper orthogonal decomposition, the trajectory piecewise linear method, balancing-based methods, reduction by optimising system matrices and projection from a linearised model are described. Part of the paper is devoted to the techniques commonly used for reduction (equivalencing) of large-scale power systems, which are based on coherency, synchrony, singular perturbation analysis, modal analysis and identification. Two of the most interesting of the described techniques are applied to the reduction of the commonly used New England 10-generator, 39-bus test power system.
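
    Among the listed linear methods, modal truncation is the easiest to sketch: project onto the slowest eigenmodes and discard the fast ones, which barely affect low-frequency behavior. The five-mode system below is synthetic, with eigenvalues and input/output vectors chosen for illustration only.

```python
import numpy as np

# Synthetic stable system x' = A x + B u, y = C x with well-separated
# time scales, built from a known modal form via a deterministic basis.
lams = np.array([-0.5, -1.0, -40.0, -60.0, -80.0])
V = np.eye(5) + 0.1 * np.triu(np.ones((5, 5)), 1)
A = V @ np.diag(lams) @ np.linalg.inv(V)
B = V @ np.ones((5, 1))
C = np.ones((1, 5)) @ np.linalg.inv(V)

# Modal truncation: keep the two slowest eigenmodes.
w, T = np.linalg.eig(A)
keep = np.argsort(-w.real)[:2]            # eigenvalues closest to zero
Ti = np.linalg.inv(T)
Ar, Br, Cr = np.diag(w[keep]), (Ti @ B)[keep], (C @ T)[:, keep]

# Compare DC gains  y/u = -C A^{-1} B  of the full and reduced models.
dc_full = -(C @ np.linalg.solve(A, B))[0, 0]
dc_red = (-(Cr @ np.linalg.solve(Ar, Br))[0, 0]).real
print(dc_full, dc_red)
```

    The fast modes contribute only a small fraction of the DC gain here, so a fifth-order model collapses to second order with little steady-state error; balanced truncation and the other listed methods refine which directions to keep.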

  1. Effects of pump recycling technique on stimulated Brillouin scattering threshold: a theoretical model.

    Science.gov (United States)

    Al-Asadi, H A; Al-Mansoori, M H; Ajiya, M; Hitam, S; Saripan, M I; Mahdi, M A

    2010-10-11

    We develop a theoretical model that can be used to predict the stimulated Brillouin scattering (SBS) threshold in optical fibers as it arises through the Brillouin pump recycling technique. Simulation results obtained from our model are in close agreement with our experimental results. The developed model utilizes single-mode optical fiber of different lengths as the Brillouin gain medium. For a 5-km-long single-mode fiber, the calculated threshold power for SBS is about 16 mW with the conventional technique. This value is reduced to about 8 mW when the residual Brillouin pump is recycled at the end of the fiber. The reduction of the SBS threshold is due to the longer interaction length between the Brillouin pump and the Stokes wave.
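
    For context, the classical order-of-magnitude SBS threshold estimate (the Smith "21" criterion) gives the same milliwatt scale as the thresholds quoted in the abstract. The fiber parameters below are typical single-mode-fiber values assumed for illustration, not the paper's measured ones.

```python
import numpy as np

# Classical estimate:  P_th ≈ 21 * K * A_eff / (g_B * L_eff),
# L_eff = (1 - exp(-alpha L)) / alpha.  Typical SMF values (assumed).
g_B = 5e-11                          # Brillouin gain coefficient, m/W
A_eff = 80e-12                       # effective mode area, m^2
alpha = 0.2 * np.log(10) / 10 / 1e3  # 0.2 dB/km converted to 1/m
K = 1.0                              # polarization factor (assumed)

def p_threshold(L):
    L_eff = (1 - np.exp(-alpha * L)) / alpha
    return 21 * K * A_eff / (g_B * L_eff)

for L_km in (5, 10, 25):
    print(L_km, 1e3 * p_threshold(L_km * 1e3), "mW")
```

    The threshold falls as the effective interaction length grows, which is the same mechanism the recycling technique exploits: re-injecting the residual pump effectively lengthens the pump-Stokes interaction.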

  2. Finite-element-model updating using computational intelligence techniques applications to structural dynamics

    CERN Document Server

    Marwala, Tshilidzi

    2010-01-01

    Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned better to reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction a number of computational intelligence techniques to facilitate the updating process are proposed; they include: • multi-layer perceptron neural networks for real-time FEM updating; • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models; • simulated annealing to put the methodologies into a sound statistical basis; and • response surface methods and expectation m...

  3. Optimization models and techniques for implementation and pricing of electricity markets

    International Nuclear Information System (INIS)

    Madrigal Martinez, M.

    2001-01-01

    The operation and planning of vertically integrated electric power systems can be optimized using models that simulate solutions to problems. As the electric power industry is going through a period of restructuring, there is a need for new optimization tools. This thesis describes the importance of optimization tools and presents techniques for implementing them. It also presents methods for pricing primary electricity markets. Three modeling groups are studied. The first considers a simplified continuous and discrete model for power pool auctions. The second considers the unit commitment problem, and the third makes use of a new type of linear network-constrained clearing system model for daily markets for power and spinning reserve. The newly proposed model considers bids for supply and demand and bilateral contracts. It is a direct current model for the transmission network

  4. The Effect of Group Investigation Learning Model with Brainstorming Technique on Students' Learning Outcomes

    Directory of Open Access Journals (Sweden)

    Astiti Kade kAyu

    2018-01-01

    Full Text Available This study aims to determine the effect of the group investigation (GI) learning model with the brainstorming technique on students' physics learning outcomes (PLO) compared to the jigsaw learning model with the brainstorming technique. The learning outcomes in this research are results in the cognitive domain. The method used is an experiment with a Randomised Posttest-Only Control Group Design. The population in this research is all students of class XI IPA, SMA Negeri 9 Kupang, in the 2015/2016 academic year. The selected sample comprises 40 students of class XI IPA 1 as the experimental class and 38 students of class XI IPA 2 as the control class, chosen using a simple random sampling technique. The instrument used is a 13-item essay test. The first hypothesis was tested using a two-tailed t-test; H0 was rejected, which means there is a difference in students' physics learning outcomes. The second hypothesis was tested using a one-tailed t-test; H0 was again rejected, which means the students' PLO in the experimental class was higher than in the control class. Based on these results, the researchers recommend the use of the GI learning model with the brainstorming technique to improve PLO, especially in the cognitive domain.
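
    The two-sample t-tests described above can be sketched in a few lines; the scores below are hypothetical stand-ins for the experimental (GI + brainstorming) and control (jigsaw) classes, and Welch's unequal-variance form is used for illustration.

```python
import math
from statistics import mean, stdev

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    return (mean(sample_a) - mean(sample_b)) / math.sqrt(va / na + vb / nb)

# Hypothetical posttest scores: experimental (GI + brainstorming) vs. control (jigsaw)
experiment = [78, 82, 75, 88, 91, 79, 84, 86]
control = [70, 74, 68, 77, 72, 75, 69, 73]

t = welch_t(experiment, control)
# A large positive t (compared against the one-tailed critical value)
# supports "experimental outcomes higher than control".
```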

  5. Floating Node Method and Virtual Crack Closure Technique for Modeling Matrix Cracking-Delamination Migration

    Science.gov (United States)

    DeCarvalho, Nelson V.; Chen, B. Y.; Pinho, Silvestre T.; Baiz, P. M.; Ratcliffe, James G.; Tay, T. E.

    2013-01-01

    A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.
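
    As a minimal illustration of the VCCT ingredient, the Mode I energy release rate at a crack-tip node can be computed from the nodal closure force and the relative opening displacement behind the tip; the numbers below are hypothetical, not taken from the study.

```python
def vcct_mode_one(f_tip, delta_w, da, width):
    """Virtual Crack Closure Technique, Mode I:
    G_I = F * dw / (2 * da * b), with crack-tip nodal force F,
    opening displacement dw behind the tip, element length da, width b."""
    return f_tip * delta_w / (2.0 * da * width)

# Hypothetical values (N, m): 120 N closure force, 0.2 mm opening,
# 1 mm virtual crack extension, 5 mm specimen width
g_one = vcct_mode_one(f_tip=120.0, delta_w=2.0e-4, da=1.0e-3, width=5.0e-3)
G_IC = 300.0  # hypothetical critical energy release rate, J/m^2
delamination_grows = g_one >= G_IC  # fracture-mechanics failure criterion
```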

  6. Forecasting performance of three automated modelling techniques during the economic crisis 2007-2009

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this work we consider forecasting macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feedforward autoregressive neural network models. What makes these models interesting in the present context is that they form a cla...... during the economic crisis 2007–2009. Forecast accuracy is measured by the root mean square forecast error. Hypothesis testing is also used to compare the performance of the different techniques with each other....

  7. Constructing an Urban Population Model for Medical Insurance Scheme Using Microsimulation Techniques

    Directory of Open Access Journals (Sweden)

    Linping Xiong

    2012-01-01

    Full Text Available China launched a pilot project of medical insurance reform in 79 cities in 2007 to cover urban non-working residents. In this paper, an urban population model was created for China's medical insurance scheme using microsimulation techniques. The model clarifies for policy makers the population distributions of the different groups of potential urban residents entering the medical insurance scheme. The income trends of individual and family units were also obtained. These factors are essential to the challenging policy decisions involved in balancing the long-term financial sustainability of the medical insurance scheme.

  8. Accuracy Enhanced Stability and Structure Preserving Model Reduction Technique for Dynamical Systems with Second Order Structure

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    A method for model reduction of dynamical systems with the second order structure is proposed in this paper. The proposed technique preserves the second order structure of the system, and also preserves the stability of the original systems. The method uses the controllability and observability...... gramians within the time interval to build the appropriate Petrov-Galerkin projection for dynamical systems within the time interval of interest. The bound on approximation error is also derived. The numerical results are compared with the counterparts from other techniques. The results confirm...

  9. Comparing modelling techniques when designing VPH gratings for BigBOSS

    Science.gov (United States)

    Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James

    2012-09-01

    BigBOSS is a Stage IV Dark Energy instrument based on the Baryon Acoustic Oscillations (BAO) and Red Shift Distortions (RSD) techniques, using spectroscopic data of 20 million ELG and LRG galaxies at redshifts z > 0.5. Volume Phase Holographic (VPH) gratings have been identified as a key technology which will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which they are more capable of accurately predicting measured performance. Finally, we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.
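
    One common first-pass model for VPH grating performance is Kogelnik's coupled-wave approximation; the sketch below evaluates its on-Bragg first-order diffraction efficiency for a hypothetical grating design (richer models would be compared against this kind of baseline).

```python
import math

def kogelnik_efficiency(dn, thickness, wavelength, bragg_angle):
    """On-Bragg first-order efficiency of a lossless volume phase
    transmission grating (Kogelnik's coupled-wave approximation):
    eta = sin^2(pi * dn * d / (lambda * cos(theta_B)))."""
    nu = math.pi * dn * thickness / (wavelength * math.cos(bragg_angle))
    return math.sin(nu) ** 2

# Hypothetical design point: index modulation 0.05, 4 um layer,
# 600 nm wavelength, 10 degree Bragg angle
eta = kogelnik_efficiency(dn=0.05, thickness=4e-6,
                          wavelength=600e-9, bragg_angle=math.radians(10))
```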

  10. Investigating the probability of detection of typical cavity shapes through modelling and comparison of geophysical techniques

    Science.gov (United States)

    James, P.

    2011-12-01

    With a growing need for housing in the U.K., the government has proposed increased development of brownfield sites. However, old mine workings and natural cavities represent a potential hazard before, during and after construction on such sites, and add further complication to subsurface parameters. Cavities are hence a limitation to certain redevelopment and their detection is an ever more important consideration. The current standard technique for cavity detection is a borehole grid, which is intrusive, non-continuous, slow and expensive. A new, robust investigation standard for the detection of cavities is sought, and geophysical techniques offer an attractive alternative. Geophysical techniques have previously been utilised successfully in the detection of cavities in various geologies, but still have an uncertain reputation in the engineering industry. Engineers are unsure of the techniques and are inclined to rely on well-known methods rather than utilise new technologies. Bad experiences with geophysics are commonly due to the indiscriminate choice of particular techniques. It is imperative that a geophysical survey is designed with the specific site and target in mind at all times, with the ability and judgement to rule out some, or all, techniques. To this author's knowledge no comparative software exists to aid technique choice. Also, previous modelling software limits the shapes of bodies, and hence typical cavity shapes are not represented. Here, we introduce 3D modelling software (Matlab) which computes and compares the response to various cavity targets from a range of techniques (gravity, gravity gradient, magnetic, magnetic gradient and GPR). Typical near-surface cavity shapes are modelled, including shafts, bellpits, various lining and capping materials, and migrating voids. The probability of cavity detection is assessed in typical subsurface and noise conditions across a range of survey parameters. Techniques can be compared and the limits of detection distance
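
    A minimal forward model of the kind such comparison software relies on is the vertical gravity anomaly of a buried spherical void; the target parameters below are hypothetical.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_anomaly(x, depth, radius, delta_rho):
    """Vertical gravity anomaly (m/s^2) at horizontal offset x from the point
    above a buried sphere; delta_rho < 0 for an air-filled cavity."""
    mass = (4.0 / 3.0) * math.pi * radius ** 3 * delta_rho
    return G * mass * depth / (x ** 2 + depth ** 2) ** 1.5

# Hypothetical target: 2 m radius cavity at 5 m depth in 2000 kg/m^3 ground
peak_ugal = sphere_anomaly(0.0, 5.0, 2.0, -2000.0) * 1e8  # convert to microGal
# Comparing |peak_ugal| with instrument noise indicates detectability.
```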

  11. Forecasting Macroeconomic Variables using Neural Network Models and Three Automated Model Selection Techniques

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    such as the neural network model is not appropriate if the data is generated by a linear mechanism. Hence, it might be appropriate to test the null of linearity prior to building a nonlinear model. We investigate whether this kind of pretesting improves the forecast accuracy compared to the case where...

  12. A novel modular multilevel converter modelling technique based on semi-analytical models for HVDC application

    Directory of Open Access Journals (Sweden)

    Ahmed Zama

    2016-12-01

    Full Text Available Thanks to its scalability, performance and efficiency, the Modular Multilevel Converter (MMC) has, since its invention, become an attractive topology in industrial applications such as high voltage direct current (HVDC) transmission systems. However, modelling challenges related to the high number of switching elements in the MMC are highlighted when such systems are integrated into large simulated networks for stability studies or protection algorithm testing. In this work, novel dynamic models for the MMC are proposed. The proposed models are intended to simplify the modelling challenges related to the high number of switching elements in the MMC. The models can easily be used to simulate the converter for stability analysis or for testing protection algorithms for HVDC grids.

  13. Identifying and quantifying energy savings on fired plant using low cost modelling techniques

    International Nuclear Information System (INIS)

    Tucker, Robert; Ward, John

    2012-01-01

    Research highlights: → Furnace models based on the zone method for radiation calculation are described. → Validated steady-state and transient models have been developed. → We show how these simple models can identify the best options for saving energy. → High emissivity coatings are predicted to give performance enhancement on a fired heater. → Optimal heat recovery strategies on a steel reheating furnace are predicted. -- Abstract: Combustion in fired heaters, boilers and furnaces often accounts for the major energy consumption of industrial processes. Small improvements in efficiency can result in large reductions in energy consumption, CO2 emissions, and operating costs. This paper describes some useful low-cost modelling techniques based on the zone method to help identify energy saving opportunities on high-temperature fuel-fired process plant. The zone method has, for many decades, been successfully applied to small batch furnaces through to large steel-reheating furnaces, glass tanks, boilers and fired heaters on petrochemical plant. Zone models can simulate both steady-state furnace operation and the more complex transient operation typical of a production environment. These models can be used to predict thermal efficiency and performance and, more importantly, to assist in identifying and predicting energy saving opportunities from such measures as: improving air/fuel ratio and temperature controls; improved insulation; use of oxygen or oxygen enrichment; air preheating via flue gas heat recovery; and modification of furnace geometry and hearth loading. There is also increasing interest in the application of refractory coatings for increasing surface radiation in fired plant. All of these techniques can yield savings ranging from a few percent upwards and can deliver rapid financial payback, but their evaluation often requires robust and reliable models in order to increase confidence in making financial investment decisions. This paper gives
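
    The radiation-exchange core of a zone model reduces, in its simplest two-zone form, to a gray-body network; the sketch below computes the net radiative flow between two surfaces with assumed areas, view factor and emissivities.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiation(t1, t2, a1, a2, f12, eps1, eps2):
    """Net radiative heat flow (W) between two gray, diffuse surfaces --
    the simplest building block of zone-method furnace models."""
    resistance = ((1 - eps1) / (eps1 * a1)
                  + 1.0 / (a1 * f12)
                  + (1 - eps2) / (eps2 * a2))
    return SIGMA * (t1 ** 4 - t2 ** 4) / resistance

# Hypothetical furnace roof at 1600 K radiating to stock at 900 K
q = net_radiation(1600.0, 900.0, a1=10.0, a2=10.0, f12=0.8, eps1=0.85, eps2=0.7)
```

    Raising either emissivity lowers the surface resistance and increases the exchanged heat, which is the mechanism behind the high-emissivity coating claim above.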

  14. Out- and insourcing, an analysis model for use of instrumented techniques

    DEFF Research Database (Denmark)

    Bang, Henrik Peter; Grønbæk, Niels; Larsen, Claus Richard

    2017-01-01

    We sketch an outline of a model for analyzing the use of ICT-tools, in particular CAS, in teaching designs employed by ‘generic’ teachers. Our model uses the business economics concepts out- and insourcing as metaphors within the dialectics of tool and content in planning of teaching. Outsourcing...... is done in order to enhance outcome through external partners. The converse concept of insourcing refers to internal sourcing. We shall adhere to the framework of the anthropological theory of the didactic, viewing out- and insourcing primarily as decisions about the technology component of praxeologies....... We use the model on a concrete example from Danish upper secondary mathematics to uncover what underlies teachers’ decisions (deliberate or colloquial) on incorporating instrumented approaches....

  15. Modelling phase separation in Fe-Cr system using different atomistic kinetic Monte Carlo techniques

    International Nuclear Information System (INIS)

    Castin, N.; Bonny, G.; Terentyev, D.; Lavrentiev, M.Yu.; Nguyen-Manh, D.

    2011-01-01

    Atomistic kinetic Monte Carlo (AKMC) simulations were performed to study α-α' phase separation in Fe-Cr alloys. Two different energy models and two approaches to estimate the local vacancy migration barriers were used. The energy models considered are a two-band model Fe-Cr potential and a cluster expansion, both fitted to ab initio data. The classical Kang-Weinberg decomposition, based on the total energy change of the system, and an Artificial Neural Network (ANN), employed as a regression tool, were used to predict the local vacancy migration barriers 'on the fly'. The results are compared with experimental thermal annealing data and differences between the applied AKMC approaches are discussed. The ability of the ANN regression method to accurately predict migration barriers not present in the training list is also addressed by performing cross-check calculations using the nudged elastic band method.
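
    The AKMC machinery rests on the residence-time (BKL) algorithm: convert the local migration barriers (however they are estimated) into rates, pick a jump proportionally to its rate, and advance the clock. A minimal sketch with hypothetical barriers:

```python
import math
import random

def kmc_step(barriers, temperature, nu0=1e13, kb=8.617e-5):
    """One residence-time (BKL) kinetic Monte Carlo step.
    barriers: candidate vacancy-jump migration barriers in eV.
    Returns (index of chosen jump, time increment in seconds)."""
    rates = [nu0 * math.exp(-eb / (kb * temperature)) for eb in barriers]
    total = sum(rates)
    pick = random.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if pick <= acc:
            break
    dt = -math.log(random.random()) / total  # exponential waiting time
    return i, dt

random.seed(0)
# Hypothetical barriers for the four candidate jumps of one vacancy
jump, dt = kmc_step([0.62, 0.70, 0.58, 0.65], temperature=800.0)
```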

  16. Applications of soft computing in time series forecasting simulation and modeling techniques

    CERN Document Server

    Singh, Pritpal

    2016-01-01

    This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...

  17. A review of cutting mechanics and modeling techniques for biological materials.

    Science.gov (United States)

    Takabi, Behrouz; Tai, Bruce L

    2017-07-01

    This paper presents a comprehensive survey on the modeling of tissue cutting, including both soft tissue and bone cutting processes. In order to achieve higher accuracy in tissue cutting, as a critical process in surgical operations, the meticulous modeling of such processes is important in particular for surgical tool development and analysis. This review paper is focused on the mechanical concepts and modeling techniques utilized to simulate tissue cutting such as cutting forces and chip morphology. These models are presented in two major categories, namely soft tissue cutting and bone cutting. Fracture toughness is commonly used to describe tissue cutting while Johnson-Cook material model is often adopted for bone cutting in conjunction with finite element analysis (FEA). In each section, the most recent mathematical and computational models are summarized. The differences and similarities among these models, challenges, novel techniques, and recommendations for future work are discussed along with each section. This review is aimed to provide a broad and in-depth vision of the methods suitable for tissue and bone cutting simulations. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
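
    The Johnson-Cook flow stress mentioned for bone cutting is straightforward to evaluate; the parameter values below are hypothetical placeholders, not calibrated bone data.

```python
import math

def johnson_cook_stress(strain, strain_rate, temp, A, B, n, C, m,
                        rate0=1.0, t_room=293.0, t_melt=1793.0):
    """Johnson-Cook flow stress:
    sigma = (A + B*eps^n) * (1 + C*ln(rate/rate0)) * (1 - T*^m),
    with homologous temperature T* = (T - T_room) / (T_melt - T_room)."""
    t_star = (temp - t_room) / (t_melt - t_room)
    return ((A + B * strain ** n)
            * (1 + C * math.log(strain_rate / rate0))
            * (1 - t_star ** m))

# Hypothetical parameters (stress in MPa), for illustration only
sigma = johnson_cook_stress(strain=0.02, strain_rate=100.0, temp=310.0,
                            A=50.0, B=101.0, n=0.08, C=0.03, m=1.0)
```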

  18. Hydrological time series modeling: A comparison between adaptive neuro-fuzzy, neural network and autoregressive techniques

    Science.gov (United States)

    Lohani, A. K.; Kumar, Rakesh; Singh, R. D.

    2012-06-01

    Summary: Time series modeling is necessary for the planning and management of reservoirs. More recently, soft computing techniques have been used in hydrological modeling and forecasting. In this study, the potential of artificial neural networks and neuro-fuzzy systems in monthly reservoir inflow forecasting is examined by developing and comparing monthly reservoir inflow prediction models based on autoregressive (AR) models, artificial neural networks (ANNs) and an adaptive neural-based fuzzy inference system (ANFIS). To account for the effect of monthly periodicity in the flow data, cyclic terms are also included in the ANN and ANFIS models. Working with time series flow data of the Sutlej River at Bhakra Dam, India, several ANN and adaptive neuro-fuzzy models are trained with different input vectors. To evaluate the performance of the selected ANN and ANFIS models, a comparison is made with the autoregressive (AR) models. The ANFIS model trained with an input data vector including previous inflows and cyclic terms of monthly periodicity shows a significant improvement in forecast accuracy in comparison with ANFIS models trained with input vectors considering only previous inflows. In all cases ANFIS gives more accurate forecasts than the AR and ANN models. The proposed ANFIS model coupled with the cyclic terms is shown to provide a better representation of monthly inflow forecasting for the planning and operation of the reservoir.
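
    The AR benchmark in such comparisons can be as simple as a least-squares AR(1) fit with iterated forecasts; the inflow series below is invented for illustration.

```python
from statistics import mean

def fit_ar1(series):
    """Least-squares fit of a mean-removed AR(1) model
    x_t = phi * x_{t-1} + e_t -- the simplest member of the AR family."""
    mu = mean(series)
    x = [v - mu for v in series]
    num = sum(x[t - 1] * x[t] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return mu, num / den

def forecast_ar1(series, mu, phi, steps):
    """Iterated multi-step forecasts from the last observation."""
    dev = series[-1] - mu
    forecasts = []
    for _ in range(steps):
        dev *= phi
        forecasts.append(mu + dev)
    return forecasts

# Hypothetical monthly inflow series (arbitrary units)
inflows = [210, 180, 150, 140, 160, 200, 260, 300, 280, 240, 220, 205]
mu, phi = fit_ar1(inflows)
preds = forecast_ar1(inflows, mu, phi, steps=12)
```

    For a stationary fit (|phi| < 1) the iterated forecasts decay toward the series mean, which is exactly the weakness that the periodic (cyclic-term) inputs in the ANN/ANFIS models are meant to address.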

  19. Composite use of numerical groundwater flow modeling and geoinformatics techniques for monitoring Indus Basin aquifer, Pakistan.

    Science.gov (United States)

    Ahmad, Zulfiqar; Ashraf, Arshad; Fryar, Alan; Akhter, Gulraiz

    2011-02-01

    The integration of Geographic Information Systems (GIS) with groundwater modeling and satellite remote sensing capabilities has provided an efficient way of analyzing and monitoring groundwater behavior and its associated land conditions. A 3-dimensional finite element model (Feflow) has been used for regional groundwater flow modeling of the Upper Chaj Doab in the Indus Basin, Pakistan. The approach of using GIS techniques that partially fulfill the data requirements and define the parameters of existing hydrologic models was adopted. The numerical groundwater flow model is developed to configure the groundwater equipotential surface and hydraulic head gradient and to estimate the groundwater budget of the aquifer. GIS is used for spatial database development and for integration with remote sensing and numerical groundwater flow modeling capabilities. Thematic layers of soils, land use, hydrology, infrastructure, and climate were developed using GIS. The ArcView GIS software is used as an additional tool to develop supportive data for the numerical groundwater flow model and for the integration and presentation of image processing and modeling results. The groundwater flow model was calibrated to simulate future changes in piezometric heads for the period 2006 to 2020. Different scenarios were developed to study the impact of extreme climatic conditions (drought/flood) and variable groundwater abstraction on the regional groundwater system. The model results indicated a significant response of the water table to external influences. The developed model provides an effective tool for evaluating management options for monitoring future groundwater development in the study area.

  20. Efficiency assessment of runoff harvesting techniques using a 3D coupled surface-subsurface hydrological model

    International Nuclear Information System (INIS)

    Verbist, K.; Cronelis, W. M.; McLaren, R.; Gabriels, D.; Soto, G.

    2009-01-01

    In arid and semi-arid zones runoff harvesting techniques are often applied to increase the water retention and infiltration on steep slopes. Additionally, they act as an erosion control measure to reduce land degradation hazards. Both in literature and in the field, a large variety of runoff collecting systems are found, as well as large variations in design and dimensions. Therefore, detailed measurements were performed on a semi-arid slope in central Chile to allow identification of the effect of a simple water harvesting technique on soil water availability. For this purpose, twenty two TDR-probes were installed and were monitored continuously during and after a simulated rainfall event. These data were used to calibrate the 3D distributed flow model HydroGeoSphere, to assess the runoff components and soil water retention as influenced by the water harvesting technique, both under simulated and natural rainfall conditions. (Author) 6 refs.

  1. Proposal of a congestion control technique in LAN networks using an econometric model ARIMA

    Directory of Open Access Journals (Sweden)

    Joaquín F Sánchez

    2017-01-01

    Full Text Available Hasty software development can produce expedient implementations with unnecessarily complex, hardly readable source code. These small kinds of software decay generate a technical debt that can be big enough to seriously affect future maintenance activities. This work presents an analysis technique for identifying architectural technical debt related to non-uniformity of naming patterns; the technique is based on term frequency over package hierarchies. The proposal has been evaluated on projects of two popular organizations, Apache and Eclipse. The results show that most of the projects have frequent occurrences of the proposed naming patterns, and that using a graph model and aggregated data can enable the elaboration of simple queries for debt identification. The technique has features that favour its applicability to emergent architectures and agile software development.
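
    The term-frequency signal underlying the naming-pattern analysis can be sketched as follows; the class names are hypothetical and the camel-case splitting rule is one simple choice among many.

```python
import re
from collections import Counter

def term_frequencies(qualified_names):
    """Split dotted, camel-case qualified names into lowercase terms and
    count them -- the raw signal behind naming-uniformity analysis."""
    counts = Counter()
    for name in qualified_names:
        for part in name.split("."):
            terms = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", part)
            counts.update(term.lower() for term in terms)
    return counts

# Hypothetical class names from one package hierarchy
names = ["util.StringHelper", "util.StringUtils", "util.TextHelper",
         "io.FileHelper"]
freq = term_frequencies(names)
# Frequent terms ("helper") suggest a dominant naming pattern;
# rare variants ("utils") are candidates for naming debt.
```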

  2. Chronology of DIC technique based on the fundamental mathematical modeling and dehydration impact.

    Science.gov (United States)

    Alias, Norma; Saipol, Hafizah Farhah Saipan; Ghani, Asnida Che Abd

    2014-12-01

    A chronology of mathematical models for the heat and mass transfer equations is proposed for the prediction of moisture and temperature behavior during drying using the DIC (Détente Instantanée Contrôlée, or instant controlled pressure drop) technique. The DIC technique has potential as a commonly used dehydration method for high-value foods, maintaining nutrition and the best possible storage quality. The model is governed by a regression model, followed by 2D Fick's and Fourier's parabolic equations and a 2D elliptic-parabolic equation in a rectangular slice. The models neglect shrinkage and radiation effects. Simulations of the heat and mass transfer equations of parabolic and elliptic-parabolic types using numerical methods based on the finite difference method (FDM) are illustrated. Intel® Core™2 Duo processors with a Linux operating system and the C programming language were the computational platform for the simulation. Qualitative and quantitative differences between the DIC technique and conventional drying methods are presented as a comparison.
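
    The FDM treatment of the parabolic model can be illustrated with an explicit (FTCS) scheme for the 1D heat equation; the grid and material values below are hypothetical.

```python
def heat_1d_ftcs(u0, alpha, dx, dt, steps):
    """Explicit (FTCS) finite-difference scheme for u_t = alpha * u_xx,
    the parabolic building block of the drying models.
    Fixed (Dirichlet) boundary values; stable for alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "FTCS stability limit violated"
    u = list(u0)
    for _ in range(steps):
        nxt = u[:]
        for i in range(1, len(u) - 1):
            nxt[i] = u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
        u = nxt
    return u

# Hypothetical slab: ends held at 20 degC, interior initially 80 degC
profile = heat_1d_ftcs([20.0] + [80.0] * 9 + [20.0],
                       alpha=1e-7, dx=1e-3, dt=4.0, steps=200)
```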

  3. Modelling, analysis and validation of microwave techniques for the characterisation of metallic nanoparticles

    Science.gov (United States)

    Sulaimalebbe, Aslam

    In the last decade, the study of nanoparticle (NP) systems has become a large and interesting research area due to their novel properties and functionalities, which differ from those of the bulk materials, and also their potential applications in different fields. It is vital to understand the behaviour and properties of nano-materials when aiming at implementing nanotechnology, controlling their behaviour and designing new material systems with superior performance. Physical characterisation of NPs falls into two main categories, property and structure analysis, where the properties of NPs cannot be studied without knowledge of their size and structure. The direct measurement of the electrical properties of metal NPs presents a key challenge and necessitates the use of innovative experimental techniques. There have been numerous reports of two/four-point resistance measurements of NP films and also of the electrical conductivity of NP films using interdigitated microarray (IDA) electrodes. However, microwave techniques such as the open-ended coaxial probe (OCP) and the microwave dielectric resonator (DR) are much more accurate and effective for the electrical characterisation of metallic NPs than other traditional techniques, because they are inexpensive, convenient, non-destructive, contactless, hazardless (i.e. at low power) and require no special sample preparation. This research is the first attempt to determine the microwave properties of Pt and Au NP films, appealing materials for nano-scale electronics, using the aforementioned microwave techniques. Ease of synthesis, relatively low cost, unique catalytic activities and control over size and shape were the main considerations in choosing Pt and Au NPs for the present study.
The initial phase of this research was to implement and validate the aperture admittance model for the OCP measurement through experiments and 3D full wave simulation using the commercially available Ansoft

  4. Development of pathological anthropomorphic models using 3D modelling techniques for numerical dosimetry

    International Nuclear Information System (INIS)

    Costa, Kleber Souza Silva; Barbosa, Antonio Konrado de Santana; Vieira, Jose Wilson; Lima, Fernando Roberto de Andrade

    2011-01-01

    Computational exposure models can be used to estimate the dose absorbed by the human body in a series of situations such as diagnostic X-ray exams, accidents and medical treatments. These models are fundamentally composed of an anthropomorphic simulator (phantom), an algorithm that simulates a radioactive source, and a Monte Carlo code. The accuracy of data obtained in the simulation is strongly connected to how well the simulation matches the real situation. The phantom is one of the key factors under the researcher's control. Phantoms are generally developed in the supine position and their anatomy is standardized from data compiled by international institutions such as the ICRP or ICRU. Several pathologies modify the structure of organs and body tissues. In order to measure how significant these alterations are, an anthropomorphic model representing a mastectomy patient was developed for this study. This model was built using the voxel phantom FASH and then coupled with the EGSnrc Monte Carlo code

  5. Data-driven techniques to estimate parameters in a rate-dependent ferromagnetic hysteresis model

    International Nuclear Information System (INIS)

    Hu Zhengzheng; Smith, Ralph C.; Ernstberger, Jon M.

    2012-01-01

    The quantification of rate-dependent ferromagnetic hysteresis is important in a range of applications including high speed milling using Terfenol-D actuators. There exist a variety of frameworks for characterizing rate-dependent hysteresis including the magnetic model in Ref. , the homogenized energy framework, Preisach formulations that accommodate after-effects, and Prandtl-Ishlinskii models. A critical issue when using any of these models to characterize physical devices concerns the efficient estimation of model parameters through least squares data fits. A crux of this issue is the determination of initial parameter estimates based on easily measured attributes of the data. In this paper, we present data-driven techniques to efficiently and robustly estimate parameters in the homogenized energy model. This framework was chosen due to its physical basis and its applicability to ferroelectric, ferromagnetic and ferroelastic materials.
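
    The idea of seeding least-squares fits with estimates read directly off the data can be illustrated on a simple tanh saturation curve (a stand-in for the homogenized energy model, which is considerably richer): the saturation level is read from the tail of the curve and the shape parameter from the zero-field slope.

```python
import math

def initial_estimates(h_data, m_data):
    """Data-driven starting values for an anhysteretic saturation curve
    M(H) = Ms * tanh(H / a): Ms from the saturated tail, and a from the
    zero-field slope (which equals Ms / a)."""
    ms0 = max(abs(m) for m in m_data)
    pairs = sorted(zip(h_data, m_data), key=lambda p: abs(p[0]))
    (h1, m1), (h2, m2) = pairs[0], pairs[1]
    slope = (m2 - m1) / (h2 - h1)
    return ms0, ms0 / slope

def sse(h_data, m_data, ms, a):
    """Sum of squared residuals for a candidate (Ms, a)."""
    return sum((m - ms * math.tanh(h / a)) ** 2
               for h, m in zip(h_data, m_data))

# Synthetic "measured" data generated with Ms = 1.2, a = 50 (assumed values)
H = [-300.0, -150.0, -60.0, -20.0, 20.0, 60.0, 150.0, 300.0]
M = [1.2 * math.tanh(h / 50.0) for h in H]
ms0, a0 = initial_estimates(H, M)
# ms0, a0 would now seed an iterative least-squares refinement.
```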

  6. Characterization and modelling techniques for gas metal arc welding of DP 600 sheet steels

    Energy Technology Data Exchange (ETDEWEB)

    Mukherjee, K.; Prahl, U.; Bleck, W. [RWTH Aachen University, Department of Ferrous Metallurgy (IEHK) (Germany); Reisgen, U.; Schleser, M.; Abdurakhmanov, A. [RWTH Aachen University, Welding and Joining Institute (ISF) (Germany)

    2010-11-15

    The objectives of the present work are to characterize the gas metal arc welding process for DP 600 sheet steel and to summarize the modelling techniques. The time-temperature evolution during the welding cycle was measured experimentally and modelled with the software tool SimWeld. To model the phase transformations during the welding cycle, dilatometer tests were done to quantify the parameters for phase field modelling with MICRESS®. The important input parameters are interface mobility, nucleation density, etc. A contribution was made to include the austenite to bainite transformation in MICRESS®. This is useful for predicting the microstructure in the fast cooling segments. The phase transformation model is capable of predicting the microstructure along the heating and cooling cycles of welding. Tensile tests showed evidence of failure at the heat affected zone, which has a ferrite-tempered martensite microstructure. (orig.)

  7. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik

    2015-01-01

    from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can......In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two...... approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge...

  8. A stochastic delay model for pricing debt and equity: Numerical techniques and applications

    Science.gov (United States)

    Tambue, Antoine; Kemajou Brown, Elisabeth; Mohammed, Salah

    2015-01-01

    Delayed nonlinear models for pricing corporate liabilities and European options were recently developed. Using a self-financed strategy and duplication, we were able to derive a Random Partial Differential Equation (RPDE) whose solutions describe the evolution of debt and equity values of a corporate in the last delay period interval in the accompanying paper (Kemajou et al., 2012) [14]. In this paper, we provide robust numerical techniques to solve the delayed nonlinear model for the corporate value, along with the corresponding RPDEs modeling the debt and equity values of the corporate. Using financial data from some firms, we forecast and compare numerical solutions from both the nonlinear delayed model and the classical Merton model with the real corporate data. From this comparison, it emerges that in corporate finance the past dependence of the firm value process may be an important feature and therefore should not be ignored.
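
    The classical Merton model used as the comparison baseline prices equity as a European call on firm value; a minimal sketch with hypothetical inputs:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_equity(v, d, r, sigma, t):
    """Classical (no-delay) Merton model: equity is a European call on the
    firm value v with strike d (face value of debt) maturing at t."""
    d1 = (math.log(v / d) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return v * norm_cdf(d1) - d * math.exp(-r * t) * norm_cdf(d2)

# Hypothetical firm: value 100, debt face value 80 due in 1 year
equity = merton_equity(v=100.0, d=80.0, r=0.05, sigma=0.25, t=1.0)
debt_value = 100.0 - equity  # firm value = debt value + equity value
```

    The gap between the risky debt value and the face value discounted at the risk-free rate is the model's credit spread.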

  9. New Diagnostic, Launch and Model Control Techniques in the NASA Ames HFFAF Ballistic Range

    Science.gov (United States)

    Bogdanoff, David W.

    2012-01-01

    This report presents new diagnostic, launch, and model control techniques used in the NASA Ames HFFAF ballistic range. High-speed movies were used to view the sabot separation process and the passage of the model through the model splash paper. Cavities in the rear of the sabot, designed to catch the muzzle blast of the gun, were used to control sabot finger separation angles and distances. Inserts were installed in the powder chamber to greatly reduce the ullage volume (empty space) in the chamber. This resulted in much more complete and repeatable combustion of the powder and hence in much more repeatable muzzle velocities. Sheets of paper or cardstock, impacting one half of the model, were used to control the amplitudes of the model pitch oscillations.

  10. Rapid customization system for 3D-printed splint using programmable modeling technique - a practical approach.

    Science.gov (United States)

    Li, Jianyou; Tanaka, Hiroya

    2018-01-01

    Traditional splinting processes are skill-dependent and irreversible, and patient satisfaction levels during rehabilitation are invariably lowered by the heavy structure and poor ventilation of splints. To overcome these drawbacks, the use of 3D-printing technology has been proposed in recent years, and public awareness of it has increased. However, application of 3D-printing technologies is limited by the low CAD proficiency of clinicians as well as unforeseen scan flaws within anatomic models. A programmable modeling tool has been employed to develop a semi-automatic design system for generating a printable splint model. The modeling process was divided into five stages, and the detailed steps involved in construction of the proposed system, as well as the automatic thickness calculation, the lattice structure, and the assembly method, are thoroughly described. The proposed approach allows clinicians to verify the state of the splint model at every stage, thereby facilitating adjustment of input content and/or other parameters to help solve possible modeling issues. A finite element analysis simulation was performed to evaluate the structural strength of generated models. A fit investigation was conducted with fabricated splints and volunteers to assess the wearing experience. Manual modeling steps involved in complex splint designs have been programmed into the proposed automatic system. Clinicians define the splinting region by drawing two curves, thereby obtaining the final model within minutes. The proposed system is capable of automatically patching up minor flaws within the limb model as well as calculating the thickness and lattice density of various splints. Large splints can be divided into three parts for simultaneous multiple printing. This study highlights the advantages, limitations, and possible strategies concerning application of programmable modeling tools in clinical processes, thereby aiding clinicians with lower CAD proficiency to become adept…

  11. Evaluating Employability Skills: Employer and Student Perceptions

    Science.gov (United States)

    Saunders, Venetia; Zuzel, Katherine

    2010-01-01

    Graduate employability is a key issue for Higher Education. In this two-part study, student employability skills have been evaluated from the perspective of sandwich students and graduates in biomolecular science, and their employers. A strong correlation was found between employer and sandwich student/graduate perceptions of the relative…

  12. Evaluation of 3-dimensional superimposition techniques on various skeletal structures of the head using surface models.

    Directory of Open Access Journals (Sweden)

    Nikolaos Gkantidis

    Full Text Available To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data, transformed to triangulated surface data. Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for analyses. There was no difference among operators or between time points in the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate. The detected structural changes differed significantly between the different techniques (p<0.05). Bland-Altman difference plots showed that BZ superimposition was comparable to AC, though it presented slightly higher random error. Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. In the present study population, the BZ superimposition was comparable to AC, with the added advantage of being applicable to scans with a smaller field of view.

  13. Evaluation of 3-dimensional superimposition techniques on various skeletal structures of the head using surface models.

    Science.gov (United States)

    Gkantidis, Nikolaos; Schauseil, Michael; Pazera, Pawel; Zorkun, Berna; Katsaros, Christos; Ludwig, Björn

    2015-01-01

    To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data, transformed to triangulated surface data. Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for analyses. There was no difference among operators or between time points in the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate. The detected structural changes differed significantly between the different techniques (p<0.05). Bland-Altman difference plots showed that BZ superimposition was comparable to AC, though it presented slightly higher random error. Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. In the present study population, the BZ superimposition was comparable to AC, with the added advantage of being applicable to scans with a smaller field of view.
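
    The accuracy metric in this record is a distance D between datasets after rigid superimposition on reference structures. The core operation is a least-squares rigid registration followed by a point-to-point distance. A minimal sketch, reduced to 2D landmarks with known correspondences for brevity (the study itself registers 3D surface models; names and the closed-form 2D rotation are illustrative):

```python
import math

def superimpose_2d(ref, mov):
    """Least-squares rigid registration (rotation + translation) of 2D
    point sets with known correspondences; returns transformed `mov`."""
    n = len(ref)
    rcx = sum(p[0] for p in ref) / n; rcy = sum(p[1] for p in ref) / n
    mcx = sum(p[0] for p in mov) / n; mcy = sum(p[1] for p in mov) / n
    # Optimal rotation angle from centered cross- and dot-products (2D Kabsch)
    s_cross = s_dot = 0.0
    for (rx, ry), (mx, my) in zip(ref, mov):
        ax, ay = mx - mcx, my - mcy
        bx, by = rx - rcx, ry - rcy
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    th = math.atan2(s_cross, s_dot)
    c, s = math.cos(th), math.sin(th)
    out = []
    for mx, my in mov:
        x, y = mx - mcx, my - mcy
        out.append((c * x - s * y + rcx, s * x + c * y + rcy))
    return out

def mean_distance(a, b):
    """Accuracy metric D: mean point-to-point distance after superimposition."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
```

    For two scans of the same patient, registering on form-stable structures and then measuring `mean_distance` elsewhere separates registration error from true anatomical change, which is the logic behind comparing the 3P, AC, AC + F, BZ, and 1Z techniques.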

  14. Modeling of an Aged Porous Silicon Humidity Sensor Using ANN Technique

    Directory of Open Access Journals (Sweden)

    Tarikul ISLAM

    2006-10-01

    Full Text Available Porous silicon (PS) sensors based on the capacitive technique used for measuring relative humidity have the advantages of low cost, ease of fabrication with controlled structure, and CMOS compatibility. But the response of the sensor is a nonlinear function of humidity and suffers from errors due to aging and poor stability. One adaptive linear (ADALINE) ANN model has been developed to model the behavior of the sensor with a view to estimating these errors and compensating for them. The response of the sensor is represented by a third-order polynomial basis function whose coefficients are determined by the ANN technique. The drift in sensor output due to aging of the PS layer is also modeled by adapting the weights of the polynomial function. ANN-based modeling is found to be more suitable than conventional physical modeling of the PS humidity sensor in a changing environment and under drift due to aging. It enables online estimation of nonlinearity as well as fault monitoring of the PS humidity sensor using the coefficients of the model.
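
    The ADALINE scheme described here is a linear combiner over a polynomial basis, trained with the LMS (Widrow-Hoff) rule; drift compensation amounts to continuing the same weight updates on fresh readings from the aged sensor. A minimal sketch (basis order and learning rate follow the record; the data and hyperparameter values are illustrative):

```python
def features(h):
    # Third-order polynomial basis for relative humidity h (normalized to 0..1)
    return [1.0, h, h * h, h ** 3]

def predict(w, h):
    """ADALINE output: linear combination of the polynomial basis."""
    return sum(wi * xi for wi, xi in zip(w, features(h)))

def train_adaline(samples, w=None, lr=0.1, epochs=2000):
    """LMS (Widrow-Hoff) training; pass the previous weights `w` to adapt
    an already-trained model to drifted (aged) sensor data."""
    w = list(w) if w is not None else [0.0] * 4
    for _ in range(epochs):
        for h, target in samples:
            x = features(h)
            err = target - sum(wi * xi for wi, xi in zip(w, x))
            for i in range(4):
                w[i] += lr * err * x[i]  # per-sample gradient step
    return w
```

    Because the model is linear in its weights, the fitted coefficients themselves can be monitored: a sudden jump in a coefficient during online adaptation flags a sensor fault, which is the monitoring use mentioned in the record.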

  15. Parameter estimation techniques and uncertainty in ground water flow model predictions

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Davis, P.A.

    1990-01-01

    Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs
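
    The effect described here, that the choice of parameter estimation technique alone changes the travel-time prediction, can be illustrated with a toy 1D flow path: sparse conductivity measurements are interpolated by two different techniques, and the resulting advective travel times are compared. A minimal sketch (the estimators, geometry, and parameter values are illustrative, not those of the study):

```python
def idw(xs, ks, x, power=2.0):
    """Inverse-distance-weighted estimate of conductivity at position x."""
    num = den = 0.0
    for xi, ki in zip(xs, ks):
        d = abs(x - xi)
        if d < 1e-12:
            return ki
        w = d ** -power
        num += w * ki
        den += w
    return num / den

def nearest(xs, ks, x):
    """Nearest-neighbour estimate of conductivity at position x."""
    return min(zip(xs, ks), key=lambda p: abs(p[0] - x))[1]

def travel_time(estimator, xs, ks, length=100.0, n=50,
                gradient=0.01, porosity=0.3):
    """1D advective ground-water travel time along a flow path whose
    conductivity field is filled in by the given estimation technique."""
    dx = length / n
    t = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        k = estimator(xs, ks, x)
        v = k * gradient / porosity  # seepage velocity from Darcy's law
        t += dx / v
    return t
```

    With three measurements, `travel_time(idw, ...)` and `travel_time(nearest, ...)` give noticeably different results from identical data, which is precisely the source of prediction uncertainty the record sets out to quantify.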

  16. The Optimal Employment of a Deep Seaweb Acoustic Network for Submarine Communications at Speed and Depth Using a Defender-Attacker-Defender Model

    Science.gov (United States)

    2013-09-01

    …location costs of an LRP. Berger, Coullard, and Daskin (2007) utilized a set-partitioning-based formulation of an uncapacitated location-routing… Thesis by Andrew D. Hendricksen, September 2013. Thesis Advisor: W. Matthew Carlyle; Thesis Co-Advisor: Joseph A. Rice; Second Reader: Robert E. Burks.

  17. Soil temperature modeling at different depths using neuro-fuzzy, neural network, and genetic programming techniques

    Science.gov (United States)

    Kisi, Ozgur; Sanikhani, Hadi; Cobaner, Murat

    2017-08-01

    The applicability of artificial neural networks (ANN), adaptive neuro-fuzzy inference system (ANFIS), and genetic programming (GP) techniques in estimating soil temperatures (ST) at different depths is investigated in this study. Weather data from two stations, Mersin and Adana, Turkey, were used as inputs to the applied models in order to model monthly STs. The first part of the study focused on comparison of the ANN, ANFIS, and GP models in modeling ST of the two stations at depths of 10, 50, and 100 cm. GP was found to perform better than the ANN and ANFIS-SC in estimating monthly ST. The effect of periodicity (month of the year) on the models' accuracy was also investigated. Including the periodicity component in the models' inputs considerably increased their accuracy. The root mean square error (RMSE) of the ANN models decreased by 34 and 27%, respectively, for the depths of 10 and 100 cm when the periodicity input was added. In the second part of the study, the accuracies of the ANN, ANFIS, and GP models were compared in estimating ST of Mersin Station using the climatic data of Adana Station. The ANN models generally performed better than the ANFIS-SC and GP models in modeling ST of Mersin Station without local climatic inputs.
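
    The benefit of a periodicity input can be seen even with a plain least-squares regressor: soil temperature lags air temperature, so a model that sees only air temperature misses the phase shift, while sine/cosine month features capture it. A minimal sketch with synthetic seasonal data (the regressor and data are illustrative; the study used ANN, ANFIS, and GP):

```python
import math

def features(t_air, month, periodic=True):
    """Regression inputs: air temperature, optionally plus month-of-year
    encoded as sin/cos of the annual cycle (the 'periodicity' input)."""
    th = 2.0 * math.pi * month / 12.0
    return [1.0, t_air] + ([math.sin(th), math.cos(th)] if periodic else [])

def lstsq(X, y):
    """Ordinary least squares via normal equations + Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for c in range(k):  # forward elimination with partial pivoting
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    w = [0.0] * k
    for c in reversed(range(k)):
        w[c] = (b[c] - sum(A[c][j] * w[j] for j in range(c + 1, k))) / A[c][c]
    return w

def rmse(X, y, w):
    return math.sqrt(sum((sum(wi * xi for wi, xi in zip(w, r)) - yi) ** 2
                         for r, yi in zip(X, y)) / len(y))
```

    On synthetic data where soil temperature is a phase-lagged annual sinusoid, the model with periodicity features fits almost exactly while the air-temperature-only model leaves a large residual, mirroring the RMSE reductions reported in the record.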

  18. Comparison Between Laser Scanning and Automated 3d Modelling Techniques to Reconstruct Complex and Extensive Cultural Heritage Areas

    Science.gov (United States)

    Fassi, F.; Fregonese, L.; Ackermann, S.; De Troia, V.

    2013-02-01

    In the Cultural Heritage field, the need to survey objects quickly, with the ability to repeat the measurements several times for deformation or degradation monitoring purposes, is increasing. In this paper, two significant cases, an architectural one and an archaeological one, are presented. Due to different reasons and emergency situations, finding the optimal solution that enables a quick and well-timed survey for a complete digital reconstruction of the object is required. In both cases, two survey methods have been tested and used: a laser scanning approach that yields high-resolution and complete scans within a short time, and a photogrammetric one that allows three-dimensional reconstruction of the object from images. In recent months, several methodologies, including free or low-cost techniques, have arisen. These kinds of software allow fully automatic three-dimensional reconstruction of objects from images, giving back a dense point cloud and, in some cases, a surfaced mesh model. In this paper some comparisons between the two methodologies mentioned above are presented, using some real cases of study as examples. The surveys have been performed by employing both photogrammetry and laser scanner techniques. The methodological and operational choices, depending on the required goal, the difficulties encountered during the survey with these methods, the execution time (the key parameter), and finally the obtained results are fully described and examined. On the final 3D models, an analytical comparison has been made to analyse the differences, the tolerances, the possibility of accuracy improvement, and future developments.
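
    The analytical comparison of two 3D models of the same object typically reduces to a cloud-to-cloud distance: for every point of one survey, the distance to the nearest point of the other. A minimal brute-force sketch (illustrative only; production tools such as those used in such studies rely on kd-trees or surface-based signed distances):

```python
import math

def cloud_distance(a, b):
    """Cloud-to-cloud comparison: for every point of scan `a`, the distance
    to its nearest neighbour in scan `b` (O(n*m) brute force)."""
    return [min(math.dist(p, q) for q in b) for p in a]

def summarize(d):
    """Deviation statistics used to judge agreement between two surveys."""
    return {"mean": sum(d) / len(d), "max": max(d)}
```

    Comparing the laser-scan cloud against the photogrammetric cloud this way gives the deviation and tolerance figures on which conclusions about the two techniques can be based.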

  19. Model Predictive Control techniques with application to photovoltaic, DC Microgrid, and a multi-sourced hybrid energy system

    Science.gov (United States)

    Shadmand, Mohammad Bagher

    Renewable energy sources continue to gain popularity. However, two major limitations prevent widespread adoption: the availability and variability of the electricity generated, and the cost of the equipment. The focus of this dissertation is Model Predictive Control (MPC) for optimally sized photovoltaic (PV), DC Microgrid, and multi-sourced hybrid energy systems. The main applications considered are: maximum power point tracking (MPPT) by MPC, droop predictive control of a DC microgrid, MPC of a grid-interaction inverter, and MPC of a capacitor-less VAR compensator based on a matrix converter (MC). This dissertation first investigates a multi-objective optimization technique for a hybrid distribution system. The variability of a high-penetration PV scenario is also studied when incorporated into the microgrid concept. Emerging PV technologies have enabled the creation of contoured and conformal PV surfaces; the effect of using non-planar PV modules on variability is also analyzed. The proposed predictive control to achieve the maximum power point for isolated and grid-tied PV systems speeds up the control loop, since it predicts the error before the switching signal is applied to the converter. The low conversion efficiency of PV cells means the system should always operate at the maximum possible power point to be economical. Thus the proposed MPPT technique can capture more energy than conventional MPPT techniques from the same amount of installed solar panels. Because of the MPPT requirement, the output voltage of the converter may vary. Therefore a droop control is needed to feed multiple arrays of photovoltaic systems to a DC bus in a microgrid community. Development of a droop control technique by means of predictive control is another application in this dissertation. Reactive power, denoted as Volt-Ampere Reactive (VAR), has several undesirable consequences on the AC power system network, such as reduction in power transfer capability and increase in…
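
    The predictive-MPPT idea of evaluating candidate control actions against a model before switching can be sketched with a finite-control-set loop: at each step, the PV model predicts the power for each candidate operating voltage, and the best one is applied. This is a toy illustration (the I-V model, parameter values, and voltage-perturbation control set are assumptions, not the dissertation's converter-level formulation):

```python
import math

def pv_current(v, isc=8.0, voc=36.0, vt=2.0):
    """Simplified exponential PV I-V curve (illustrative parameters)."""
    return max(0.0, isc * (1.0 - math.exp((v - voc) / vt)))

def mpc_mppt_step(v_now, dv=0.5):
    """Finite-control-set predictive MPPT: predict power for each candidate
    operating voltage, then apply the one maximizing predicted power."""
    candidates = (v_now - dv, v_now, v_now + dv)
    return max(candidates, key=lambda v: v * pv_current(v))

def track(v0=10.0, steps=200):
    """Run the predictive loop until it settles near the maximum power point."""
    v = v0
    for _ in range(steps):
        v = mpc_mppt_step(v)
    return v
```

    Because the prediction happens before the switching signal is applied, the controller never has to wait for the plant to respond to a wrong perturbation, which is the speed-up the record attributes to MPC-based MPPT over conventional perturb-and-observe.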

  20. A model compound study: the ecotoxicological evaluation of five organic contaminants employing a battery of marine bioassays.

    Science.gov (United States)

    Macken, Ailbhe; Giltrap, Michelle; Foley, Barry; McGovern, Evin; McHugh, Brendan; Davoren, Maria

    2008-06-01

    This paper describes the ecotoxicological evaluation of five organic contaminants frequently detected in marine sediments (tributyltin, triphenyltin, benzo[a]pyrene, fluoranthene, and PCB 153) using three marine species (Vibrio fischeri, Tetraselmis suecica, and Tisbe battagliai). The sensitivity of each species varied for all compounds. The triorganotins were consistently the most toxic to all species. The applicability of each test system to assess the acute toxicity of environmental contaminants and their use in Toxicity Identification Evaluation (TIE) is discussed. The suitability of the Microtox and T. battagliai tests for employment in TIE studies was further assessed through spiking experiments with tributyltin. Results demonstrated that the most effective treatment to remove organotin toxicity from the sample was the C18 resin. The results of this study have important implications for risk assessment in estuarine and coastal waters in Ireland, where, at present, the monitoring of sediment and water quality is predominantly reliant on chemical analysis alone.