WorldWideScience

Sample records for depth-service volume model

  1. Depth of Field Effects for Interactive Direct Volume Rendering

    KAUST Repository

    Schott, Mathias; Pascal Grosset, A.V.; Martin, Tobias; Pegoraro, Vincent; Smith, Sean T.; Hansen, Charles D.

    2011-01-01

    In this paper, a method for interactive direct volume rendering is proposed for computing depth of field effects, which previously were shown to aid observers in depth and size perception of synthetically generated images. The presented technique extends those benefits to volume rendering visualizations of 3D scalar fields from CT/MRI scanners or numerical simulations. It is based on incremental filtering and as such does not depend on any precomputation, thus allowing interactive explorations of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions. © 2011 The Author(s).
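
    The following is a minimal sketch (in Python) of the thin-lens circle-of-confusion calculation that depth-of-field techniques such as this one build on; it is not the paper's incremental filtering algorithm, and the lens parameters are illustrative only.

```python
# Hypothetical thin-lens example: blur-disc (circle of confusion) diameter on the
# image plane for a point at distance d when the camera focuses at focus_dist.
def blur_diameter(d, focus_dist, focal_len, aperture):
    """All arguments in the same length unit; `aperture` is the aperture diameter."""
    return aperture * focal_len * abs(focus_dist - d) / (d * (focus_dist - focal_len))

# A 50 mm lens with a 25 mm aperture (f/2) focused at 2 m:
for d_mm in (500.0, 1000.0, 2000.0, 4000.0, 8000.0):
    print(f"{d_mm / 1000:4.1f} m -> blur {blur_diameter(d_mm, 2000.0, 50.0, 25.0):5.2f} mm")
```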

  3. A hybrid finite-volume and finite difference scheme for depth-integrated non-hydrostatic model

    Science.gov (United States)

    Yin, Jing; Sun, Jia-wen; Wang, Xing-gang; Yu, Yong-hai; Sun, Zhao-chen

    2017-06-01

    A depth-integrated, non-hydrostatic model with hybrid finite difference and finite volume numerical algorithm is proposed in this paper. By utilizing a fraction step method, the governing equations are decomposed into hydrostatic and non-hydrostatic parts. The first part is solved by using the finite volume conservative discretization method, whilst the latter is considered by solving discretized Poisson-type equations with the finite difference method. The second-order accuracy, both in time and space, of the finite volume scheme is achieved by using an explicit predictor-correction step and linear construction of variable state in cells. The fluxes across the cell faces are computed in a Godunov-based manner by using MUSTA scheme. Slope and flux limiting technique is used to equip the algorithm with total variation dimensioning property for shock capturing purpose. Wave breaking is treated as a shock by switching off the non-hydrostatic pressure in the steep wave front locally. The model deals with moving wet/dry front in a simple way. Numerical experiments are conducted to verify the proposed model.
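
    As an illustration of the Godunov-type finite volume step underlying the hydrostatic part of such models, the sketch below advances the one-dimensional shallow water equations with a first-order HLL flux. It is a deliberately simplified stand-in, not the paper's second-order MUSTA scheme or its non-hydrostatic Poisson correction, and all names and values are illustrative.

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def physical_flux(h, hu):
    """Flux vector of the 1-D shallow-water equations for state (h, hu)."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def hll_flux(hL, huL, hR, huR):
    """HLL approximate Riemann flux between a left and a right cell state."""
    uL, uR = huL / hL, huR / hR
    cL, cR = np.sqrt(g * hL), np.sqrt(g * hR)
    sL = min(uL - cL, uR - cR)          # fastest left-going wave speed
    sR = max(uL + cL, uR + cR)          # fastest right-going wave speed
    FL, FR = physical_flux(hL, huL), physical_flux(hR, huR)
    if sL >= 0.0:
        return FL
    if sR <= 0.0:
        return FR
    UL, UR = np.array([hL, huL]), np.array([hR, huR])
    return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

def step(h, hu, dx, dt):
    """One first-order Godunov update on a uniform grid (end cells held fixed)."""
    fluxes = [hll_flux(h[i], hu[i], h[i + 1], hu[i + 1]) for i in range(h.size - 1)]
    for i in range(1, h.size - 1):
        dU = (fluxes[i] - fluxes[i - 1]) * dt / dx
        h[i] -= dU[0]
        hu[i] -= dU[1]
    return h, hu

# Dam-break test: 2 m of water on the left, 1 m on the right, initially at rest.
h = np.where(np.arange(200) < 100, 2.0, 1.0)
hu = np.zeros_like(h)
for _ in range(100):
    h, hu = step(h, hu, dx=1.0, dt=0.05)
print(h[95:105].round(3))
```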

  4. Magnetotelluric Detection Thresholds as a Function of Leakage Plume Depth, TDS and Volume

    Energy Technology Data Exchange (ETDEWEB)

    Yang, X. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Buscheck, T. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mansoor, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carroll, S. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-04-21

    We conducted a synthetic magnetotelluric (MT) data analysis to establish a set of specific thresholds of plume depth, TDS concentration and volume for detection of brine and CO2 leakage from legacy wells into shallow aquifers in support of Strategic Monitoring Subtask 4.1 of the US DOE National Risk Assessment Partnership (NRAP Phase II), which is to develop geophysical forward modeling tools. The 900 synthetic MT data sets span 9 plume depths, 10 TDS concentrations and 10 plume volumes. The monitoring protocol consisted of 10 MT stations in a 2×5 grid laid out along the flow direction. We model the MT response in the audio frequency range of 1 Hz to 10 kHz with a 50 Ωm baseline resistivity and a maximum depth of 2000 m. Scatter plots show the MT detection thresholds for a trio of plume depth, TDS concentration and volume. Plumes with a large volume and high TDS located at a shallow depth produce a strong MT signal. We demonstrate that the MT method with surface-based sensors can detect a brine and CO2 plume so long as the plume depth, TDS concentration and volume are above the thresholds. However, it is unlikely to detect a plume at a depth larger than 1000 m when the change in TDS concentration is smaller than 10%. Simulated aquifer impact data based on the Kimberlina site provide a more realistic view of the leakage plume distribution than the rectangular synthetic plumes in this sensitivity study, and they will be used to estimate MT responses over simulated brine and CO2 plumes and to evaluate the leakage detectability. Integration of the simulated aquifer impact data and the MT method into the NRAP DREAM tool may provide an optimized MT survey configuration for MT data collection. This study presents a viable approach for sensitivity studies of geophysical monitoring methods for leakage detection, and the results enable rapid assessment of leakage detectability.
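
    A back-of-the-envelope check (not taken from the report) using the standard magnetotelluric skin-depth rule of thumb shows why the 1 Hz to 10 kHz band over a 50 Ωm background spans investigation depths from tens of metres to well beyond the 2000 m maximum modeled here:

```python
import math

def mt_skin_depth(resistivity_ohm_m, frequency_hz):
    """Electromagnetic skin depth in metres: roughly 503 * sqrt(rho / f)."""
    return 503.0 * math.sqrt(resistivity_ohm_m / frequency_hz)

# 50 ohm-m baseline resistivity at audio-band frequencies used in the study
for f in (1.0, 10.0, 100.0, 1_000.0, 10_000.0):
    print(f"{f:8.0f} Hz -> skin depth ~ {mt_skin_depth(50.0, f):7.0f} m")
```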

  5. Single Tree Vegetation Depth Estimation Tool for Satellite Services Link Design

    Directory of Open Access Journals (Sweden)

    Z. Hasirci

    2016-04-01

    Attenuation caused by tree shadowing is an important factor for describing the propagation channel of satellite services. Thus, vegetation effects should be determined by experimental studies or empirical formulations. In this study, tree types in the Black Sea Region of Turkey are classified based on their geometrical shapes into four groups: conic, ellipsoid, spherical and hemispherical. The variations of the vegetation depth according to different tree shapes are calculated with the ray tracing method. It is shown that different geometrical shapes have different vegetation depths at different elevation angles, even if they have the same foliage volume. The proposed method is validated against the related literature in terms of average single tree attenuation. On the other hand, to reduce the system requirements (speed, memory usage, etc.) of the ray tracing method, an artificial neural network is proposed as an alternative. A graphical user interface for the above processes, named the vegetation depth estimation tool (VdET), is created in the MATLAB environment.
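
    A minimal sketch of the geometric quantity the ray tracing evaluates: the path length of a receiver-to-satellite ray through a spherical crown at a given elevation angle. The crown geometry and angles below are hypothetical, and the routine is not the authors' VdET implementation.

```python
import math

def path_length_through_sphere(origin, direction, centre, radius):
    """Length of the part of the ray origin + t*direction (t >= 0) lying inside
    a sphere; a stand-in for the vegetation depth of a spherical canopy."""
    ox, oy, oz = (o - c for o, c in zip(origin, centre))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc <= 0.0:
        return 0.0                       # the ray misses the crown entirely
    t1 = max((-b - math.sqrt(disc)) / (2.0 * a), 0.0)
    t2 = max((-b + math.sqrt(disc)) / (2.0 * a), 0.0)
    return t2 - t1

# Receiver at the origin, satellite at 20 degrees elevation, spherical crown of
# radius 3 m centred 15 m away horizontally and 6 m above the ground.
elev = math.radians(20.0)
ray = (math.cos(elev), 0.0, math.sin(elev))
print(round(path_length_through_sphere((0.0, 0.0, 0.0), ray, (15.0, 0.0, 6.0), 3.0), 2), "m")
```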

  6. Volume measurement of the leg with the depth camera for quantitative evaluation of edema

    Science.gov (United States)

    Kiyomitsu, Kaoru; Kakinuma, Akihiro; Takahashi, Hiroshi; Kamijo, Naohiro; Ogawa, Keiko; Tsumura, Norimichi

    2017-02-01

    Volume measurement of the leg is important in the evaluation of leg edema. Recently, a measurement method using a depth camera was proposed. However, many depth cameras are expensive. Therefore, we propose a method using the Microsoft Kinect. We obtain a point cloud of the leg with the Kinect Fusion technique and calculate the volume. We measured the leg volume of three healthy students over three days. In each measurement, an increase in volume from morning to evening was confirmed. It is known that leg volume increases during office work, so our experimental results meet this expectation.
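
    A rough sketch of one way to turn a leg point cloud into a volume: slice the cloud horizontally and sum each slice's convex-hull area times the slice thickness. It assumes NumPy and SciPy are available and is not the authors' Kinect Fusion pipeline; the synthetic cylinder only checks the arithmetic.

```python
import numpy as np
from scipy.spatial import ConvexHull

def volume_from_point_cloud(points, slice_height=0.01):
    """Approximate volume (m^3) of a limb-like point cloud (x, y, z in metres):
    horizontal slices, 2-D convex hull area per slice, summed up."""
    z = points[:, 2]
    volume = 0.0
    for z0 in np.arange(z.min(), z.max(), slice_height):
        sl = points[(z >= z0) & (z < z0 + slice_height)]
        if len(sl) >= 3:
            volume += ConvexHull(sl[:, :2]).volume * slice_height  # .volume = area in 2-D
    return volume

# Sanity check with a synthetic cylinder, radius 5 cm, height 40 cm (~3.14 L).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 20000)
cloud = np.column_stack([0.05 * np.cos(theta), 0.05 * np.sin(theta),
                         rng.uniform(0.0, 0.4, 20000)])
print(f"{volume_from_point_cloud(cloud) * 1000:.2f} litres")
```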

  7. Effect of Climate Change on Service Life of High Volume Fly Ash Concrete Subjected to Carbonation—A Korean Case Study

    Directory of Open Access Journals (Sweden)

    Ki-Bong Park

    2017-01-01

    The increase in CO2 concentrations and global warming will increase the carbonation depth of concrete. Furthermore, temperature rise will increase the rate of corrosion of steel rebar after carbonation. On the other hand, compared with normal concrete, high volume fly ash (HVFA) concrete is more vulnerable to carbonation-induced corrosion. Carbonation durability design that accounts for climate change is therefore crucial to the rational use of HVFA concrete. This study presents a probabilistic approach that predicts the service life of HVFA concrete structures subjected to carbonation-induced corrosion resulting from increasing CO2 concentrations and temperatures. First, in the corrosion initiation stage, a hydration-carbonation integration model is used to evaluate the contents of the carbonatable material, porosity, and carbonation depth of HVFA concrete. The Monte Carlo method is adopted to determine the probability of corrosion initiation. Second, in the corrosion propagation stage, an updated model is proposed to evaluate the rate of corrosion, the degree of corrosion required for cover cracking of concrete, and the probability of corrosion cracking. Third, the whole service life is determined considering both the corrosion initiation and corrosion propagation stages. The analysis results show that climate change has a significant impact on the service life of durable concrete.
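
    A compact Monte Carlo sketch of the corrosion-initiation probability idea: sample a carbonation coefficient and a cover depth, apply the common square-root-of-time carbonation law, and count how often the carbonation front reaches the rebar. The distributions and parameter values are invented for illustration and are not those of the study's hydration-carbonation model.

```python
import numpy as np

def corrosion_initiation_probability(years, k_mean=4.0, k_cov=0.3,
                                     cover_mean=30.0, cover_cov=0.1,
                                     n=100_000, seed=1):
    """P(carbonation depth >= concrete cover) at a given age, with the simple
    law x_c = k * sqrt(t); k in mm/yr^0.5, cover in mm (illustrative values)."""
    rng = np.random.default_rng(seed)
    k = rng.normal(k_mean, k_cov * k_mean, n)              # carbonation coefficient
    cover = rng.normal(cover_mean, cover_cov * cover_mean, n)
    return float(np.mean(k * np.sqrt(years) >= cover))

for t in (10, 25, 50, 75, 100):
    print(f"t = {t:3d} yr : P(corrosion initiated) = {corrosion_initiation_probability(t):.3f}")
```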

  8. Comparison of Statistically Modeled Contaminated Soil Volume Estimates and Actual Excavation Volumes at the Maywood FUSRAP Site - 13555

    Energy Technology Data Exchange (ETDEWEB)

    Moore, James [U.S. Army Corps of Engineers - New York District 26 Federal Plaza, New York, New York 10278 (United States); Hays, David [U.S. Army Corps of Engineers - Kansas City District 601 E. 12th Street, Kansas City, Missouri 64106 (United States); Quinn, John; Johnson, Robert; Durham, Lisa [Argonne National Laboratory, Environmental Science Division 9700 S. Cass Ave., Argonne, Illinois 60439 (United States)

    2013-07-01

    As part of the ongoing remediation process at the Maywood Formerly Utilized Sites Remedial Action Program (FUSRAP) properties, Argonne National Laboratory (Argonne) assisted the U.S. Army Corps of Engineers (USACE) New York District by providing contaminated soil volume estimates for the main site area, much of which is fully or partially remediated. As part of the volume estimation process, an initial conceptual site model (ICSM) was prepared for the entire site that captured existing information (with the exception of soil sampling results) pertinent to the possible location of surface and subsurface contamination above cleanup requirements. This ICSM was based on historical anecdotal information, aerial photographs, and the logs from several hundred soil cores that identified the depth of fill material and the depth to bedrock under the site. Specialized geostatistical software developed by Argonne was used to update the ICSM with historical sampling results and down-hole gamma survey information for hundreds of soil core locations. The updating process yielded both a best guess estimate of contamination volumes and a conservative upper bound on the volume estimate that reflected the estimate's uncertainty. Comparison of model results to actual removed soil volumes was conducted on a parcel-by-parcel basis. Where sampling data density was adequate, the actual volume matched the model's average or best guess results. Where contamination was un-characterized and unknown to the model, the actual volume exceeded the model's conservative estimate. Factors affecting volume estimation were identified to assist in planning further excavations. (authors)

  9. Industrial Sector Technology Use Model (ISTUM): industrial energy use in the United States, 1974-2000. Volume 3. Appendix on service and fuel demands. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1979-10-01

    This book is the third volume of the ISTUM report. The first volume of the report describes the primary model logic and the model's data inputs. The second volume lists and evaluates the results of one model run. This and the fourth volume give supplementary information in two sets of model data - the energy consumption base and technology descriptions. Chapter III of Vol. I, Book 1 describes the ISTUM demand base and explains how that demand base was developed. This volume serves as a set of appendices to that chapter. The chapter on demands in Vol. I describes the assumptions and methodology used in constructing the ISTUM demand base; this volume simply lists tables of data from that demand base. This book divides the demand tables into two appendices. Appendix III-1 contains detailed tables on ISTUM fuel-consumption estimates, service-demand forecasts, and size and load-factor distributions. Appendix III-2 contains tables detailing ISTUM allocations of each industry's fuel consumption to service sectors. The tables show how the ECDB was used to develop the ISTUM demand base.

  10. Service Modeling for Service Engineering

    Science.gov (United States)

    Shimomura, Yoshiki; Tomiyama, Tetsuo

    Intensification of service and knowledge contents within product life cycles is considered crucial for dematerialization, in particular, to design optimal product-service systems from the viewpoint of environmentally conscious design and manufacturing in advanced post-industrial societies. In addition to the environmental limitations, we are facing social limitations, including the limited capacity of markets to absorb increasing numbers of mass-produced artifacts, and these environmental and social limitations are restraining economic growth. To address these problems, we need to reconsider the current mass production paradigm and to give products more added value, derived largely from knowledge and service contents, to compensate for volume reduction under the concept of dematerialization. Namely, dematerialization of products needs to enrich service contents. However, service has mainly been discussed within marketing and has been mostly neglected within traditional engineering. Therefore, we need new engineering methods to look at services, rather than just functions, called "Service Engineering." To establish service engineering, this paper proposes a modeling technique of service.

  11. Individual Global Navigation Satellite Systems in the Space Service Volume

    Science.gov (United States)

    Force, Dale A.

    2015-01-01

    Besides providing position, navigation, and timing (PNT) to terrestrial users, GPS is currently used to provide for precision orbit determination, precise time synchronization, real-time spacecraft navigation, and three-axis control of Earth orbiting satellites. With additional Global Navigation Satellite Systems (GNSS) coming into service (GLONASS, Beidou, and Galileo), it will be possible to provide these services by using other GNSS constellations. The paper, "GPS in the Space Service Volume," presented at the ION GNSS 19th International Technical Meeting in 2006 (Ref. 1), defined the Space Service Volume, and analyzed the performance of GPS out to 70,000 km. This paper will report a similar analysis of the performance of each of the additional GNSS and compare them with GPS alone. The Space Service Volume is defined as the volume between 3,000 km altitude and geosynchronous altitude, as compared with the Terrestrial Service Volume, which extends from the surface up to 3,000 km. In the Terrestrial Service Volume, GNSS performance will be similar to performance on the Earth's surface. The GPS system has established signal requirements for the Space Service Volume. A separate paper presented at the conference covers the use of multiple GNSS in the Space Service Volume.
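
    A simplified geometric sketch of the visibility test at the heart of Space Service Volume analyses: is a user spacecraft inside a GNSS satellite's Earth-pointing transmit beam, and is the line of sight clear of the Earth? The beam half-angle, margin, and positions below are assumed values for illustration, not figures from the paper.

```python
import numpy as np

R_EARTH = 6378.0        # km
R_GPS = 26_560.0        # km, approximate GPS orbit radius
HALF_BEAM = 23.5        # deg, assumed half-angle of the main transmit lobe

def visible(sat, user, half_beam_deg=HALF_BEAM, margin_km=50.0):
    """True if `user` lies inside the satellite's Earth-pointing beam and the
    straight-line path is not blocked by a spherical Earth."""
    sat, user = np.asarray(sat, float), np.asarray(user, float)
    to_user = user - sat
    to_earth = -sat                                   # boresight: towards Earth's centre
    cos_angle = to_user @ to_earth / (np.linalg.norm(to_user) * np.linalg.norm(to_earth))
    if np.degrees(np.arccos(cos_angle)) > half_beam_deg:
        return False                                  # outside the transmit beam
    # Closest approach of the sight line (as a segment) to the Earth's centre.
    t = np.clip(-(sat @ to_user) / (to_user @ to_user), 0.0, 1.0)
    return np.linalg.norm(sat + t * to_user) > R_EARTH + margin_km

geo_user = np.array([42_164.0, 0.0, 0.0])             # user at geosynchronous radius
print(visible([-R_GPS, 0.0, 0.0], geo_user))          # directly opposite: Earth blocks it
offset = np.radians(30.0)                             # same GNSS orbit, 30 deg along track
print(visible([-R_GPS * np.cos(offset), R_GPS * np.sin(offset), 0.0], geo_user))
```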

  12. Care Models of eHealth Services: A Case Study on the Design of a Business Model for an Online Precare Service.

    Science.gov (United States)

    van Meeuwen, Dorine Pd; van Walt Meijer, Quirine J; Simonse, Lianne Wl

    2015-03-24

    With a growing population of health care clients in the future, the organization of high-quality and cost-effective service providing becomes an increasing challenge. New online eHealth services are proposed as innovative options for the future. Yet, a major barrier to these services appears to be the lack of new business model designs. Although design efforts generally result in visual models, no such artifacts have been found in the literature on business model design. This paper investigates business model design in eHealth service practices from a design perspective. It adopts a research by design approach and seeks to unravel what characteristics of business models determine an online service and what are important value exchanges between health professionals and clients. The objective of the study was to analyze the construction of care models in-depth, framing the essential elements of a business model, and design a new care model that structures these elements for the particular context of an online pre-care service in practice. This research employs a qualitative method of an in-depth case study in which different perspectives on constructing a care model are investigated. Data are collected by using the visual business modeling toolkit, designed to cocreate and visualize the business model. The cocreated models are transcribed and analyzed per actor perspective, transactions, and value attributes. We revealed eight new actors in the business model for providing the service. Essential actors are: the intermediary network coordinator connecting companies, the service dedicated information technology specialists, and the service dedicated health specialist. In the transactions for each service provided we found a certain type of contract, such as a license contract and service contracts for precare services and software products. In addition to efficiency, quality, and convenience, important value attributes appeared to be: timeliness, privacy and

  13. Volume of Home and Community Based Services and...

    Data.gov (United States)

    U.S. Department of Health & Human Services. Volume of Home- and Community-Based Services and Time to Nursing-Home Placement. The purpose of this study was to determine whether the volume of Home and Community...

  14. Care Models of eHealth Services: A Case Study on the Design of a Business Model for an Online Precare Service

    Science.gov (United States)

    2015-01-01

    Background With a growing population of health care clients in the future, the organization of high-quality and cost-effective service providing becomes an increasing challenge. New online eHealth services are proposed as innovative options for the future. Yet, a major barrier to these services appears to be the lack of new business model designs. Although design efforts generally result in visual models, no such artifacts have been found in the literature on business model design. This paper investigates business model design in eHealth service practices from a design perspective. It adopts a research by design approach and seeks to unravel what characteristics of business models determine an online service and what are important value exchanges between health professionals and clients. Objective The objective of the study was to analyze the construction of care models in-depth, framing the essential elements of a business model, and design a new care model that structures these elements for the particular context of an online pre-care service in practice. Methods This research employs a qualitative method of an in-depth case study in which different perspectives on constructing a care model are investigated. Data are collected by using the visual business modeling toolkit, designed to cocreate and visualize the business model. The cocreated models are transcribed and analyzed per actor perspective, transactions, and value attributes. Results We revealed eight new actors in the business model for providing the service. Essential actors are: the intermediary network coordinator connecting companies, the service dedicated information technology specialists, and the service dedicated health specialist. In the transactions for every service providing we found a certain type of contract, such as a license contract and service contracts for precare services and software products. In addition to the efficiency, quality, and convenience, important value attributes

  15. Shared Communications: Volume 2. In-Depth Systems Research

    Energy Technology Data Exchange (ETDEWEB)

    Truett, LF

    2004-09-22

    This report is the second of two documents that examine the literature for actual examples of organizations and agencies that share communications resources. While the primary emphasis is on rural, intelligent transportation system (ITS) communications involving transit, examples will not be limited to rural activities, nor to ITS implementation, nor even to transit. In addition, the term "communication" will be broadly applied to include all information resources. The first document of this series, "Shared Communications: Volume I. A Summary and Literature Review", defines the meaning of the term "shared communication resources" and provides many examples of agencies that share resources. This document, "Shared Communications: Volume II. In-Depth Systems Research", reviews attributes that contributed to successful applications of the shared communication resources concept. A few examples of each type of communication sharing are provided. Based on the issues and best-practice real-world examples, recommendations for potential usage and recommended approaches for field operational tests are provided.

  16. Are PCI Service Volumes Associated with 30-Day Mortality? A Population-Based Study from Taiwan.

    Science.gov (United States)

    Yu, Tsung-Hsien; Chou, Ying-Yi; Wei, Chung-Jen; Tung, Yu-Chi

    2017-11-09

    The volume-outcome relationship has been discussed for over 30 years; however, the findings are inconsistent. This might be due to the heterogeneity of service volume definitions and categorization methods. This study takes percutaneous coronary intervention (PCI) as an example to examine whether the service volume was associated with PCI 30-day mortality, given different service volume definitions and categorization methods. A population-based, cross-sectional multilevel study was conducted. Two definitions of physician and hospital volume were used: (1) the cumulative PCI volume in a previous year before each PCI; (2) the cumulative PCI volume within the study period. The volume was further treated in three ways: (1) a categorical variable based on the American Heart Association's recommendation; (2) a semi-data-driven categorical variable based on k-means clustering algorithm; and (3) a data-driven categorical variable based on the Generalized Additive Model. The results showed that, after adjusting the patient-, physician-, and hospital-level covariates, physician volume was associated inversely with PCI 30-day mortality, but hospital volume was not, no matter which definitions and categorization methods of service volume were applied. Physician volume is negatively associated with PCI 30-day mortality, but the results might vary because of definition and categorization method.
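
    A brief sketch of the semi-data-driven categorisation mentioned above: let k-means place the cut points between low-, medium-, and high-volume operators. The physician volumes are synthetic and scikit-learn is assumed to be available; this is not the study's dataset or exact procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical annual PCI counts per physician (not the study's data).
rng = np.random.default_rng(0)
volumes = np.concatenate([rng.poisson(15, 200),     # low-volume operators
                          rng.poisson(60, 120),     # mid-volume operators
                          rng.poisson(150, 40)])    # high-volume operators

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(volumes.reshape(-1, 1))
rank = np.argsort(np.argsort(km.cluster_centers_.ravel()))   # cluster index -> 0/1/2 by centre
names = np.array(["low", "medium", "high"])[rank][km.labels_]

for name in ("low", "medium", "high"):
    grp = volumes[names == name]
    print(f"{name:>6}: n = {grp.size:3d}, volume range {grp.min()}-{grp.max()}")
```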

  17. A global reference model of Moho depths based on WGM2012

    Science.gov (United States)

    Zhou, D.; Li, C.

    2017-12-01

    The crust-mantle boundary (Moho discontinuity) represents the largest density contrast in the lithosphere, which can be detected by the Bouguer gravity anomaly. We present our recent inversion of global Moho depths from the World Gravity Map 2012. Because oceanic lithospheres increase in density as they cool, we perform a thermal correction based on the plate cooling model. We adopt a temperature Tm=1300°C at the bottom of the lithosphere. The plate thickness is tested in 5 km increments from 90 to 140 km, and taken as 130 km, which gives a best-fit crustal thickness constrained by seismic crustal thickness profiles. We obtain the residual Bouguer gravity anomalies by subtracting the thermal correction from WGM2012, and then estimate Moho depths based on the Parker-Oldenburg algorithm. Taking the global model Crust1.0 as a priori constraint, we adopt Moho density contrasts of 0.43 and 0.4 g/cm3, and initial mean Moho depths of 37 and 20 km in the continental and oceanic domains, respectively. The number of iterations in the inversion is set to 150, which is large enough to obtain an error lower than a pre-assigned convergence criterion. The estimated Moho depths range between 0 and 76 km, and average 36 and 15 km in the continental and oceanic domains, respectively. Our results correlate very well with Crust1.0, with differences mostly within ±5.0 km. Compared to the low resolution of Crust1.0 in the oceanic domain, our results have a much larger depth range reflecting diverse structures such as ridges, seamounts, volcanic chains and subduction zones. Based on this model, we find that at fast spreading rates (above 95 mm/yr) we observe relatively thicker crust. Conductive cooling of the lithosphere may constrain the melting of the mantle at ultraslow spreading centers. Lower mantle temperatures indicated by deeper Curie depths at slow and fast spreading ridges may decrease the volume of magmatism and crustal thickness. This new global model of gravity-derived Moho depth, combined with geochemical and Curie
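
    For a sense of scale, the infinite-slab (Bouguer) approximation below converts a residual gravity anomaly into Moho relief using the 0.43 g/cm3 continental density contrast quoted above. It is only a rule-of-thumb check, far cruder than the iterative Parker-Oldenburg inversion the study actually uses.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
MGAL = 1.0e-5        # 1 mGal expressed in m/s^2

def slab_moho_relief(delta_g_mgal, density_contrast_kg_m3=430.0):
    """Moho relief (m) producing a given residual anomaly in the infinite-slab
    approximation: delta_g = 2 * pi * G * delta_rho * delta_h."""
    return delta_g_mgal * MGAL / (2.0 * math.pi * G * density_contrast_kg_m3)

for dg in (-100.0, -50.0, 50.0, 100.0):
    print(f"{dg:+6.0f} mGal -> Moho relief {slab_moho_relief(dg) / 1000.0:+5.1f} km")
```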

  18. Trends in Medicare Service Volume for Cataract Surgery and the Impact of the Medicare Physician Fee Schedule.

    Science.gov (United States)

    Gong, Dan; Jun, Lin; Tsai, James C

    2017-08-01

    To calculate the associations between Medicare payment and service volume for complex and noncomplex cataract surgeries. The 2005-2009 CMS Part B National Summary Data Files, CMS Part B Carrier Summary Data Files, and the Medicare Physician Fee Schedule. Conducting a retrospective, longitudinal analysis using a fixed-effects model of Medicare Part B carriers representing all 50 states and the District of Columbia from 2005 to 2009, we calculated the Medicare payment-service volume elasticities for noncomplex (CPT 66984) and complex (CPT 66982) cataract surgeries. Service volume data were extracted from the CMS Part B National Summary and Carrier Summary Data Files. Payment data were extracted from the Medicare Physician Fee Schedule. From 2005 to 2009, the proportion of total cataract services billed as complex increased from 3.2 to 6.7 percent. Every 1 percent decrease in Medicare payment was associated with a nonsignificant change in noncomplex cataract service volume (elasticity = 0.15, 95 percent CI [-0.09, 0.38]) but a statistically significant increase in complex cataract service volume (elasticity = -1.12, 95 percent CI [-1.60, -0.63]). Reduced Medicare payment was associated with a significant increase in complex cataract service volume but not in noncomplex cataract service volume, resulting in a shift toward performing a greater proportion of complex cataract surgeries from 2005 to 2009. © Health Research and Educational Trust.
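
    A schematic sketch of how a payment-volume elasticity of this kind can be estimated: a log-log regression of volume on payment with carrier and year fixed effects, here implemented as plain least squares with dummy variables on a synthetic panel. It illustrates the design rather than reproducing the study's exact estimator or data.

```python
import numpy as np

def payment_volume_elasticity(log_payment, log_volume, carrier_ids, year_ids):
    """Two-way fixed-effects OLS of log(volume) on log(payment); the slope on
    log(payment) is the payment-volume elasticity."""
    X = [log_payment]
    X += [(carrier_ids == c).astype(float) for c in np.unique(carrier_ids)[1:]]
    X += [(year_ids == y).astype(float) for y in np.unique(year_ids)[1:]]
    X = np.column_stack([np.ones_like(log_payment)] + X)
    beta, *_ = np.linalg.lstsq(X, log_volume, rcond=None)
    return beta[1]

# Synthetic carrier-by-year panel with a true elasticity of -1.1.
rng = np.random.default_rng(0)
carriers = np.repeat(np.arange(50), 5)               # 50 carriers x 5 years
years = np.tile(np.arange(5), 50)
log_pay = rng.normal(6.0, 0.1, carriers.size) - 0.02 * years
log_vol = 3.0 - 1.1 * log_pay + 0.05 * years + rng.normal(0.0, 0.02, carriers.size)
print(round(payment_volume_elasticity(log_pay, log_vol, carriers, years), 2))
```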

  19. Combined Global Navigation Satellite Systems in the Space Service Volume

    Science.gov (United States)

    Force, Dale A.; Miller, James J.

    2013-01-01

    Besides providing position, velocity, and timing (PVT) for terrestrial users, the Global Positioning System (GPS) is also being used to provide PVT information for Earth orbiting satellites. In 2006, F. H. Bauer, et al., defined the Space Service Volume in the paper "GPS in the Space Service Volume", presented at the ION 19th International Technical Meeting of the Satellite Division, and looked at GPS coverage for orbiting satellites. With GLONASS already operational, and the first satellites of the Galileo and Beidou/COMPASS constellations already in orbit, it is time to look at the use of the new Global Navigation Satellite Systems (GNSS) coming into service to provide PVT information for Earth orbiting satellites. This presentation extends "GPS in the Space Service Volume" by examining the coverage capability of combinations of the new constellations with GPS. GPS was first explored as a system for refining the position, velocity, and timing of other spacecraft equipped with GPS receivers in the early eighties. Because of this, a new GPS utility developed beyond the original purpose of providing position, velocity, and timing services for land, maritime, and aerial applications. GPS signals are now received and processed by spacecraft both above and below the GPS constellation, including signals that spill over the limb of the Earth. Support of GPS space applications is now part of the system plan for GPS, and support of the Space Service Volume by other GNSS providers has been proposed to the UN International Committee on GNSS (ICG). GPS has been demonstrated to provide decimeter-level position accuracy in real time for satellites in low Earth orbit (centimeter level in non-real-time applications). GPS has been proven useful for satellites in geosynchronous orbit, and also for satellites in highly elliptical orbits. Depending on how many satellites are in view, one can keep time locked to the GNSS standard, and through that to Universal Time as long as at least one

  20. Evaluation of Depth of Field for depth perception in DVR

    KAUST Repository

    Grosset, A.V.Pascal; Schott, Mathias; Bonneau, Georges-Pierre; Hansen, Charles D.

    2013-01-01

    In this paper we present a user study on the use of Depth of Field for depth perception in Direct Volume Rendering. Direct Volume Rendering with Phong shading and perspective projection is used as the baseline. Depth of Field is then added to see its impact on the correct perception of ordinal depth. Accuracy and response time are used as the metrics to evaluate the usefulness of Depth of Field. The onsite user study has two parts: static and dynamic. Eye tracking is used to monitor the gaze of the subjects. From our results we see that though Depth of Field does not act as a proper depth cue in all conditions, it can be used to reinforce the perception of which feature is in front of the other. The best results (high accuracy & fast response time) for correct perception of ordinal depth occurs when the front feature (out of the two features users were to choose from) is in focus and perspective projection is used. © 2013 IEEE.

  2. A Methodology for Selection of a Satellite Servicing Architecture. Volume 3. Appendices.

    Science.gov (United States)

    1985-12-01

    model transfers between inclined circular orbits. If OSV time of flight becomes more critical, then a choice between the other two techniques is...

  3. An order insertion scheduling model of logistics service supply chain considering capacity and time factors.

    Science.gov (United States)

    Liu, Weihua; Yang, Yi; Wang, Shuqing; Liu, Yang

    2014-01-01

    Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC), which disturbs normal time scheduling, especially in the environment of mass customization logistics service. This study analyses the order similarity coefficient and the order insertion operation process and then establishes an order insertion scheduling model of an LSSC that considers service capacity and time factors. This model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. In order to verify the viability and effectiveness of our model, a specific example is numerically analyzed. Some interesting conclusions are obtained. First, along with the increase of the completion time delay coefficient permitted by customers, the possible inserting order volume first increases and then tends to stabilize. Second, supply chain performance is best when the volume of the inserting order is equal to the surplus volume of the normal operation capacity in the mass service process. Third, the larger the normal operation capacity in the mass service process is, the bigger the possible inserting order's volume will be. Moreover, compared to increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful.

  4. Hanford analytical services quality assurance requirements documents. Volume 1: Administrative Requirements

    International Nuclear Information System (INIS)

    Hyatt, J.E.

    1997-01-01

    The Hanford Analytical Services Quality Assurance Requirements Document (HASQARD) is issued by the Analytical Services Program of the Waste Management Division, US Department of Energy (US DOE), Richland Operations Office (DOE-RL). The HASQARD establishes quality requirements in response to DOE Order 5700.6C (DOE 1991b). The HASQARD is designed to meet the needs of DOE-RL for maintaining a consistent level of quality for sampling and field and laboratory analytical services provided by contractor and commercial field and laboratory analytical operations. The HASQARD serves as the quality basis for all sampling and field/laboratory analytical services provided to DOE-RL through the Analytical Services Program of the Waste Management Division in support of Hanford Site environmental cleanup efforts. This includes work performed by contractor and commercial laboratories and covers radiological and nonradiological analyses. The HASQARD applies to field sampling, field analysis, and research and development activities that support work conducted under the Hanford Federal Facility Agreement and Consent Order Tri-Party Agreement and regulatory permit applications and applicable permit requirements described in subsections of this volume. The HASQARD applies to work done to support process chemistry analysis (e.g., ongoing site waste treatment and characterization operations) and research and development projects related to Hanford Site environmental cleanup activities. This ensures a uniform quality umbrella for analytical site activities predicated on the concepts contained in the HASQARD. Using the HASQARD will ensure data of known quality and technical defensibility of the methods used to obtain those data. The HASQARD is made up of four volumes: Volume 1, Administrative Requirements; Volume 2, Sampling Technical Requirements; Volume 3, Field Analytical Technical Requirements; and Volume 4, Laboratory Technical Requirements. Volume 1 describes the administrative requirements

  5. Evaluation of carburization depth in service exposed ferritic steel using magnetic Barkhausen noise analysis

    International Nuclear Information System (INIS)

    Vaidyanathan, S.; Moorthy, V.; Jayakumar, T.; Baldev Raj

    1996-01-01

    The feasibility of using magnetic Barkhausen noise (MBN) measurements for the evaluation of carburization depth in ferritic steels is reported in this paper. MBN measurements were carried out on samples from a service-exposed 0.5Cr-0.5Mo ferritic steel tube at different depths (across the cross section) from the carburised ID surface, to simulate the variation in the carbon concentration gradient within the skin depth of the MBN with increasing time of exposure to carburization. It has been observed that the MBN level increases with increasing depth of measurement. An inverse relation between the MBN level and the carbon content/hardness value has been observed. This study suggests that MBN measurements on the carburised surface can be correlated with the concentration gradient within the skin depth of the MBN, which would help in predicting the approximate depth of the carburised layer with proper prior calibration. (author)

  6. Energy modeling issues in quick service restaurants

    Energy Technology Data Exchange (ETDEWEB)

    Smith, V.A.; Johnson, K.F.

    1997-03-01

    The complexity of monitoring and modeling the energy performance of food-service facilities was discussed. Usually, less than one third of the energy consumed in a commercial food-service facility is used by equipment and systems typically modeled in building simulation software such as DOE-2. Algorithms have not yet been developed to handle independent makeup air units and the kitchen and dining room HVAC systems. The energy used by food process equipment and water heating is based on customer-volume and operation-hours. Monitoring projects have been undertaken to provide detailed energy use profiles of individual appliances and whole restaurants. Some technical issues that are unique to food-service modeling in current versions of DOE-2.1E software in the context of quick service restaurants, such as difficulties in modelling internal heat gains of hooded cooking appliances and walk-in refrigeration, and system and zone limitations on tracking energy consumption, were discussed. 1 fig.

  7. Knowledge service decision making in business incubators based on the supernetwork model

    Science.gov (United States)

    Zhao, Liming; Zhang, Haihong; Wu, Wenqing

    2017-08-01

    As valuable resources for incubating firms, knowledge resources have received gradually increasing attention from all types of business incubators, and business incubators use a variety of knowledge services to stimulate rapid growth in incubating firms. Based on previous research, we generalize the knowledge transfer and knowledge networking services of two main forms of knowledge services and further divide knowledge transfer services into knowledge depth services and knowledge breadth services. Then, we construct the business incubators' knowledge supernetwork model, describe the evolution mechanism among heterogeneous agents and utilize a simulation to explore the performance variance of different business incubators' knowledge services. The simulation results show that knowledge stock increases faster when business incubators are able to provide knowledge services to more incubating firms and that the degree of discrepancy in the knowledge stock increases during the process of knowledge growth. Further, knowledge transfer services lead to greater differences in the knowledge structure, while knowledge networking services lead to smaller differences. Regarding the two types of knowledge transfer services, knowledge depth services are more conducive to knowledge growth than knowledge breadth services, but knowledge depth services lead to greater gaps in knowledge stocks and greater differences in knowledge structures. Overall, it is optimal for business incubators to select a single knowledge service or portfolio strategy based on the amount of time and energy expended on the two types of knowledge services.

  8. Comparison of depth-averaged concentration and bed load flux sediment transport models of dam-break flow

    Directory of Open Access Journals (Sweden)

    Jia-heng Zhao

    2017-10-01

    This paper presents numerical simulations of dam-break flow over a movable bed. Two different mathematical models were compared: a fully coupled formulation of shallow water equations with erosion and deposition terms (a depth-averaged concentration flux model), and shallow water equations with a fully coupled Exner equation (a bed load flux model). Both models were discretized using the cell-centered finite volume method, and a second-order Godunov-type scheme was used to solve the equations. The numerical flux was calculated using a Harten, Lax, and van Leer approximate Riemann solver with the contact wave restored (HLLC). A novel slope source term treatment that considers the density change was introduced to the depth-averaged concentration flux model to obtain higher-order accuracy. A source term that accounts for the sediment flux was added to the bed load flux model to reflect the influence of sediment movement on the momentum of the water. In a one-dimensional test case, a sensitivity study on different model parameters was carried out. For the depth-averaged concentration flux model, Manning's coefficient and sediment porosity values showed an almost linear relationship with the bottom change, and for the bed load flux model, the sediment porosity was identified as the most sensitive parameter. The capabilities and limitations of both model concepts are demonstrated in a benchmark experimental test case dealing with dam-break flow over variable bed topography.

  9. A Quantitative Analysis of the Relationship between Medicare Payment and Service Volume for Glaucoma Procedures from 2005 through 2009.

    Science.gov (United States)

    Gong, Dan; Jun, Lin; Tsai, James C

    2015-05-01

    To calculate the association between Medicare payment and service volume for 6 commonly performed glaucoma procedures. Retrospective, longitudinal database study. A 100% dataset of all glaucoma procedures performed on Medicare Part B beneficiaries within the United States from 2005 to 2009. Fixed-effects regression model using Medicare Part B carrier data for all 50 states and the District of Columbia, controlling for time-invariant carrier-specific characteristics, national trends in glaucoma service volume, Medicare beneficiary population, number of ophthalmologists, and income per capita. Payment-volume elasticities, defined as the percent change in service volume per 1% change in Medicare payment, for laser trabeculoplasty (Current Procedural Terminology [CPT] code 65855), trabeculectomy without previous surgery (CPT code 66170), trabeculectomy with previous surgery (CPT code 66172), aqueous shunt to reservoir (CPT code 66180), laser iridotomy (CPT code 66761), and scleral reinforcement with graft (CPT code 67255). The payment-volume elasticity was nonsignificant for 4 of 6 procedures studied: laser trabeculoplasty (elasticity, -0.27; 95% confidence interval [CI], -1.31 to 0.77; P = 0.61), trabeculectomy without previous surgery (elasticity, -0.42; 95% CI, -0.85 to 0.01; P = 0.053), trabeculectomy with previous surgery (elasticity, -0.28; 95% CI, -0.83 to 0.28; P = 0.32), and aqueous shunt to reservoir (elasticity, -0.47; 95% CI, -3.32 to 2.37; P = 0.74). Two procedures yielded significant associations between Medicare payment and service volume. For laser iridotomy, the payment-volume elasticity was -1.06 (95% CI, -1.39 to -0.72): for every 1% decrease in CPT code 66761 payment, laser iridotomy service volume increased by 1.06%. For scleral reinforcement with graft, the payment-volume elasticity was -2.92 (95% CI, -5.72 to -0.12; P = 0.041): for every 1% decrease in CPT code 67255 payment, scleral reinforcement with graft service volume increased by 2.92%. This study calculated the association

  10. [In-depth interviews and the Kano model to determine user requirements in a burns unit].

    Science.gov (United States)

    González-Revaldería, J; Holguín-Holgado, P; Lumbreras-Marín, E; Núñez-López, G

    To determine the healthcare requirements of patients in a Burns Unit, using qualitative techniques such as in-depth personal interviews and Kano's methodology. Qualitative methodology using in-depth personal interviews (12 patients), Kano's conceptual model, and the SERVQHOS questionnaire (24 patients). All patients had been hospitalised in the last 12 months in the Burns Unit. Using Kano's methodology, service attributes were grouped by affinity diagrams, and classified as follows: must-be, attractive (unexpected, great satisfaction), and one-dimensional (linked to the degree of functionality of the service). The outcomes were compared with those obtained with the SERVQHOS questionnaire. From the analysis of the in-depth interviews, 11 requirements were obtained, referring to hotel aspects, information, the need for a closer staff relationship, and organisational aspects. The attributes classified as must-be were free television and automatic TV disconnection at midnight. Those classified as attractive were: an individual room for more privacy, information about dressing change times in order to avoid anxiety, and additional staff for in-patients. The results were complementary to those obtained with the SERVQHOS questionnaire. In-depth personal interviews provide extra knowledge about patient requirements, complementing the information obtained with questionnaires. With this methodology, more active patient participation is achieved and the companion's opinion is also taken into account. Copyright © 2016 SECA. Published by Elsevier España, S.L.U. All rights reserved.
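
    For readers unfamiliar with the method, the snippet below applies the standard Kano evaluation table to classify an attribute from its paired functional/dysfunctional questionnaire answers. The example answers are hypothetical and only loosely echo attributes mentioned in the study.

```python
# Standard Kano evaluation table: rows = answer to the functional question,
# columns = answer to the dysfunctional question.
ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]
TABLE = [
    #  like  must-be neutral live-with dislike
    ["Q",   "A",    "A",    "A",      "O"],   # like
    ["R",   "I",    "I",    "I",      "M"],   # must-be
    ["R",   "I",    "I",    "I",      "M"],   # neutral
    ["R",   "I",    "I",    "I",      "M"],   # live-with
    ["R",   "R",    "R",    "R",      "Q"],   # dislike
]
CATEGORY = {"A": "attractive", "O": "one-dimensional", "M": "must-be",
            "I": "indifferent", "R": "reverse", "Q": "questionable"}

def kano_category(functional, dysfunctional):
    """Classify one service attribute from a pair of questionnaire answers."""
    return CATEGORY[TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]]

print(kano_category("like", "neutral"))      # e.g. individual room  -> attractive
print(kano_category("must-be", "dislike"))   # e.g. free television  -> must-be
print(kano_category("like", "dislike"))      # e.g. staff attention  -> one-dimensional
```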

  11. Age-depth modelling with radiocarbon

    International Nuclear Information System (INIS)

    Howarth, J.D.

    2017-01-01

    Chronology is a critical component of any study into the Quaternary because the information about climate and environmental change preserved in sedimentary deposits can only be placed in a useful context when it is associated with a robust chronological framework. This overview will introduce you to the key concepts in age depth modelling.
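
    The simplest age-depth model is linear interpolation between dated horizons, sketched below with hypothetical calibrated radiocarbon dates; real applications usually prefer Bayesian accumulation models that carry the dating uncertainty through to every depth.

```python
import numpy as np

# Hypothetical dated horizons: depths in cm, ages in calibrated years BP
# (a negative age means the core top is younger than AD 1950).
dated_depths = np.array([0.0, 55.0, 120.0, 210.0])
dated_ages = np.array([-60.0, 1450.0, 3900.0, 7200.0])

def age_at(depth_cm):
    """Interpolated age for any depth within the dated part of the core."""
    return np.interp(depth_cm, dated_depths, dated_ages)

print(age_at(80.0))   # about 2390 cal yr BP
```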

  12. Volumetrically-Derived Global Navigation Satellite System Performance Assessment from the Earth's Surface through the Terrestrial Service Volume and the Space Service Volume

    Science.gov (United States)

    Welch, Bryan W.

    2016-01-01

    NASA is participating in the International Committee on Global Navigation Satellite Systems (GNSS) (ICG)'s efforts towards demonstrating the benefits to the space user from the Earth's surface through the Terrestrial Service Volume (TSV) to the edge of the Space Service Volume (SSV), when a multi-GNSS solution space approach is utilized. The ICG Working Group: Enhancement of GNSS Performance, New Services and Capabilities has started a three phase analysis initiative as an outcome of recommendations at the ICG-10 meeting, in preparation for the ICG-11 meeting. The first phase of that increasing complexity and fidelity analysis initiative was recently expanded to compare nadir-facing and zenith-facing user hemispherical antenna coverage with omnidirectional antenna coverage at different distances of 8,000 km altitude and 36,000 km altitude. This report summarizes the performance using these antenna coverage techniques at distances ranging from 100 km altitude to 36,000 km to be all encompassing, as well as the volumetrically-derived system availability metrics.

  13. Forecasting the Future Food Service World of Work. Final Report. Volume II. Centralized Food Service Systems. Service Management Reports.

    Science.gov (United States)

    Powers, Thomas F., Ed.; Swinton, John R., Ed.

    Volume II of a three-volume study on the future of the food service industry considers the effects that centralized food production will have on the future of food production systems. Based on information from the Fair Acres Project and the Michigan State University Vegetable Processing Center, the authors describe the operations of a centralized…

  14. A depth-averaged debris-flow model that includes the effects of evolving dilatancy. I. physical basis

    Science.gov (United States)

    Iverson, Richard M.; George, David L.

    2014-01-01

    To simulate debris-flow behaviour from initiation to deposition, we derive a depth-averaged, two-phase model that combines concepts of critical-state soil mechanics, grain-flow mechanics and fluid mechanics. The model's balance equations describe coupled evolution of the solid volume fraction, m, basal pore-fluid pressure, flow thickness and two components of flow velocity. Basal friction is evaluated using a generalized Coulomb rule, and fluid motion is evaluated in a frame of reference that translates with the velocity of the granular phase, v_s. Source terms in each of the depth-averaged balance equations account for the influence of the granular dilation rate, defined as the depth integral of ∇·v_s. Calculation of the dilation rate involves the effects of an elastic compressibility and an inelastic dilatancy angle proportional to m − m_eq, where m_eq is the value of m in equilibrium with the ambient stress state and flow rate. Normalization of the model equations shows that predicted debris-flow behaviour depends principally on the initial value of m − m_eq and on the ratio of two fundamental timescales. One of these timescales governs downslope debris-flow motion, and the other governs pore-pressure relaxation that modifies Coulomb friction and regulates evolution of m. A companion paper presents a suite of model predictions and tests.

  15. Depth and stratigraphy of regolith. Site descriptive modelling SDM-Site Laxemar

    International Nuclear Information System (INIS)

    Nyman, Helena; Sohlenius, Gustav; Stroemgren, Maarten; Brydsten, Lars

    2008-06-01

    At the Laxemar-Simpevarp site, numerical and descriptive modelling are performed both for the deep bedrock and for the surface systems. The surface geology and regolith depth are important parameters for e.g. hydrogeological and geochemical modelling and for the overall understanding of the area. Regolith refers to all the unconsolidated deposits overlying the bedrock. The regolith depth model (RDM) presented here visualizes the stratigraphical distribution of the regolith as well as the elevation of the bedrock surface. The model covers 280 km2, including both terrestrial and marine areas. In the model the stratigraphy is represented by six layers (Z1-Z6) that correspond to different types of regolith. The model is geometric and the properties of the layers are assigned by the user according to the purpose. The GeoModel program, which is an ArcGIS extension, was used for modelling the regolith depths. A detailed topographical Digital Elevation Model (DEM) and a map of Quaternary deposits were used as input to the model. Altogether 319 boreholes and 440 other stratigraphical observations were also used. Furthermore, a large number of depth data interpreted from geophysical investigations were used: refraction seismic measurements from 51 profiles, 11,000 observation points from resistivity measurements and almost 140,000 points from seismic and sediment echo sounding data. The results from the refraction seismic and resistivity measurements give information about the total regolith depths, whereas most other data also give information about the stratigraphy of the regolith. Some of the used observations did not reach the bedrock surface. They do, however, describe the minimum regolith depth at each location and were therefore used where the modelled regolith depth would otherwise have been thinner. A large proportion of the modelled area has a low data density and the area was therefore divided into nine domains. These domains were defined based

  16. Depth and stratigraphy of regolith. Site descriptive modelling SDM-Site Laxemar

    Energy Technology Data Exchange (ETDEWEB)

    Nyman, Helena (SWECO Position, Stockholm (Sweden)); Sohlenius, Gustav (Geological Survey of Sweden (SGU), Uppsala (Sweden)); Stroemgren, Maarten; Brydsten, Lars (Umeaa Univ., Umeaa (Sweden))

    2008-06-15

    At the Laxemar-Simpevarp site, numerical and descriptive modelling are performed both for the deep bedrock and for the surface systems. The surface geology and regolith depth are important parameters for e.g. hydrogeological and geochemical modelling and for the overall understanding of the area. Regolith refers to all the unconsolidated deposits overlying the bedrock. The regolith depth model (RDM) presented here visualizes the stratigraphical distribution of the regolith as well as the elevation of the bedrock surface. The model covers 280 km2, including both terrestrial and marine areas. In the model the stratigraphy is represented by six layers (Z1-Z6) that correspond to different types of regolith. The model is geometric and the properties of the layers are assigned by the user according to the purpose. The GeoModel program, which is an ArcGIS extension, was used for modelling the regolith depths. A detailed topographical Digital Elevation Model (DEM) and a map of Quaternary deposits were used as input to the model. Altogether 319 boreholes and 440 other stratigraphical observations were also used. Furthermore, a large number of depth data interpreted from geophysical investigations were used: refraction seismic measurements from 51 profiles, 11,000 observation points from resistivity measurements and almost 140,000 points from seismic and sediment echo sounding data. The results from the refraction seismic and resistivity measurements give information about the total regolith depths, whereas most other data also give information about the stratigraphy of the regolith. Some of the used observations did not reach the bedrock surface. They do, however, describe the minimum regolith depth at each location and were therefore used where the modelled regolith depth would otherwise have been thinner. A large proportion of the modelled area has a low data density and the area was therefore divided into nine domains. These domains were defined based on

  17. Improved Model for Depth Bias Correction in Airborne LiDAR Bathymetry Systems

    Directory of Open Access Journals (Sweden)

    Jianhu Zhao

    2017-07-01

    Airborne LiDAR bathymetry (ALB) is efficient and cost-effective in obtaining shallow water topography, but often produces a low-accuracy sounding solution due to the effects of ALB measurements and ocean hydrological parameters. In bathymetry estimates, peak shifting of the green bottom return caused by pulse stretching induces a depth bias, which is the largest error source in ALB depth measurements. The traditional depth bias model is often applied to reduce the depth bias, but it is insufficient when used with various ALB system parameters and ocean environments. Therefore, an accurate model that considers all of the influencing factors must be established. In this study, an improved depth bias model is developed through stepwise regression in consideration of the water depth, laser beam scanning angle, sensor height, and suspended sediment concentration. The proposed improved model and a traditional one are used in an experiment. The results show that the systematic deviation of the depth bias corrected by the traditional and improved models is reduced significantly. Standard deviations of 0.086 and 0.055 m are obtained with the traditional and improved models, respectively. The accuracy of the ALB-derived depth corrected by the improved model is better than that corrected by the traditional model.
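
    A sketch of the regression idea behind such a bias model: fit the depth bias against water depth, scan angle, sensor height and suspended sediment concentration, then subtract the predicted bias from the raw soundings. Plain least squares on invented numbers stands in here for the paper's stepwise regression.

```python
import numpy as np

def fit_bias_model(depth, scan_angle, sensor_height, ssc, bias):
    """Least-squares fit of depth bias against the four candidate predictors."""
    X = np.column_stack([np.ones_like(depth), depth, scan_angle, sensor_height, ssc])
    coef, *_ = np.linalg.lstsq(X, bias, rcond=None)
    return coef                              # [intercept, b_depth, b_angle, b_height, b_ssc]

def correct_depths(depth, scan_angle, sensor_height, ssc, coef):
    """Subtract the predicted bias from the raw ALB depths."""
    X = np.column_stack([np.ones_like(depth), depth, scan_angle, sensor_height, ssc])
    return depth - X @ coef

# Synthetic calibration data (invented coefficients, for illustration only).
rng = np.random.default_rng(2)
d = rng.uniform(1.0, 20.0, 500)              # water depth, m
a = rng.uniform(10.0, 20.0, 500)             # scan angle, deg
h = rng.uniform(300.0, 500.0, 500)           # sensor height, m
s = rng.uniform(2.0, 30.0, 500)              # suspended sediment concentration, mg/L
bias = 0.05 + 0.01 * d + 0.004 * a - 0.0001 * h + 0.003 * s + rng.normal(0.0, 0.01, 500)
print(np.round(fit_bias_model(d, a, h, s, bias), 4))
```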

  18. Estimated probabilities, volumes, and inundation-area depths of potential postwildfire debris flows from Carbonate, Slate, Raspberry, and Milton Creeks, near Marble, Gunnison County, Colorado

    Science.gov (United States)

    Stevens, Michael R.; Flynn, Jennifer L.; Stephens, Verlin C.; Verdin, Kristine L.

    2011-01-01

    During 2009, the U.S. Geological Survey, in cooperation with Gunnison County, initiated a study to estimate the potential for postwildfire debris flows to occur in the drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble, Colorado. Currently (2010), these drainage basins are unburned but could be burned by a future wildfire. Empirical models derived from statistical evaluation of data collected from recently burned basins throughout the intermountain western United States were used to estimate the probability of postwildfire debris-flow occurrence and debris-flow volumes for drainage basins occupied by Carbonate, Slate, Raspberry, and Milton Creeks near Marble. Data for the postwildfire debris-flow models included drainage basin area; area burned and burn severity; percentage of burned area; soil properties; rainfall total and intensity for the 5- and 25-year-recurrence, 1-hour-duration-rainfall; and topographic and soil property characteristics of the drainage basins occupied by the four creeks. A quasi-two-dimensional floodplain computer model (FLO-2D) was used to estimate the spatial distribution and the maximum instantaneous depth of the postwildfire debris-flow material during debris flow on the existing debris-flow fans that issue from the outlets of the four major drainage basins. The postwildfire debris-flow probabilities at the outlet of each drainage basin range from 1 to 19 percent for the 5-year-recurrence, 1-hour-duration rainfall, and from 3 to 35 percent for 25-year-recurrence, 1-hour-duration rainfall. The largest probabilities for postwildfire debris flow are estimated for Raspberry Creek (19 and 35 percent), whereas estimated debris-flow probabilities for the three other creeks range from 1 to 6 percent. The estimated postwildfire debris-flow volumes at the outlet of each creek range from 7,500 to 101,000 cubic meters for the 5-year-recurrence, 1-hour-duration rainfall, and from 9,400 to 126,000 cubic meters for
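
    The USGS empirical models referred to above are logistic regressions, so the debris-flow probability follows the form sketched below. The coefficients and basin values here are purely illustrative placeholders, not the published regression coefficients used for these drainage basins.

```python
import math

def debris_flow_probability(x):
    """Logistic link used by the empirical models: P = exp(x) / (1 + exp(x))."""
    return math.exp(x) / (1.0 + math.exp(x))

# Illustrative linear predictor: intercept plus weighted basin and storm terms.
coeff = {"intercept": -3.0, "pct_steep_slopes": 0.03, "pct_burned_mod_high": 0.02,
         "clay_content_pct": 0.05, "storm_intensity_mm_per_h": 0.04}
basin = {"pct_steep_slopes": 40.0, "pct_burned_mod_high": 60.0,
         "clay_content_pct": 10.0, "storm_intensity_mm_per_h": 25.0}
x = coeff["intercept"] + sum(coeff[k] * basin[k] for k in basin)
print(f"P(debris flow) = {debris_flow_probability(x):.2f}")   # ~0.71 for these inputs
```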

  19. Service expectations from high- and low-volume customers in the alcoholic beverage industry

    Directory of Open Access Journals (Sweden)

    Jacques Beukes

    2013-08-01

    Research purpose: This research study investigated the relationship between the volume a customer buys from an alcoholic beverage supply company and what influence this volume has on their customer service expectations. Motivation for the study: The main purpose of this study was to evaluate what influence the volume an organisation buys from alcoholic beverage suppliers has on their service quality expectations. Research design, approach and method: A non-probability judgement sample method was used, with a sample size of 220 respondents. The questionnaire requested respondents (high- and low-volume) to rank their customer service expectations and opinions with reference to Parasuraman’s service delivery dimensions. Ranking was done using a five-point Likert scale. Main findings: The findings of the study indicated that both the high- and low-volume customers felt that alcoholic beverage supply companies had to deliver on all five service delivery dimensions but failed to do so to full satisfaction. Practical and managerial implications: It is recommended that the alcoholic beverage supply companies should address the problem areas identified in this study to avoid defection of customers. Contribution and value add: This may assist alcoholic beverage supply companies to better understand the customers’ demographic profiles. The study also revealed shortfalls in the satisfaction level experienced by customers in both sections of the study (high- and low-demand), with a considerable gap between expectations and opinions within the empathy dimension.

  20. Modelling the evolution of composition-and stress-depth profiles in austenitic stainless steels during low-temperature nitriding

    DEFF Research Database (Denmark)

    Jespersen, Freja Nygaard; Hattel, Jesper Henri; Somers, Marcel A. J.

    2016-01-01

    Nitriding of stainless steel causes a surface zone of expanded austenite, which improves the wear resistance of the stainless steel while preserving the stainless behaviour. During nitriding huge residual stresses are introduced in the treated zone, arising from the volume expansion that accompanies the dissolution of high nitrogen contents in expanded austenite. An intriguing phenomenon during low-temperature nitriding is that the residual stresses evoked by dissolution of nitrogen in the solid state affect the thermodynamics and the diffusion kinetics of nitrogen dissolution. In the present paper solid mechanics was combined with thermodynamics and diffusion kinetics to simulate the evolution of composition-depth and stress-depth profiles resulting from nitriding. The model takes into account a composition-dependent diffusion coefficient of nitrogen in expanded austenite, short range...

  1. A Simple Model of the Variability of Soil Depths

    Directory of Open Access Journals (Sweden)

    Fang Yu

    2017-06-01

    Full Text Available Soil depth tends to vary from a few centimeters to several meters, depending on many natural and environmental factors. We hypothesize that the cumulative effect of these factors on soil depth, which is chiefly dependent on the process of biogeochemical weathering, is particularly affected by soil porewater (i.e., solute) transport and infiltration from the land surface. Taking into account evidence for a non-Gaussian distribution of rock weathering rates, we propose a simple mathematical model to describe the relationship between soil depth and infiltration flux. The model was tested using several areas in mostly semi-arid climate zones. The application of this model demonstrates the use of fundamental principles of physics to quantify the coupled effects of the five principal soil-forming factors of Dokuchaev.

  2. Price volatility, trading volume, and market depth in Asian commodity futures exchanges

    Directory of Open Access Journals (Sweden)

    Tanachote Boonvorachote

    2016-01-01

    Full Text Available This paper empirically investigates the impact of trading activity including trading volume and open interest on price volatility in Asian futures exchanges. Trading volume and open interest represent market information for investors. This study uses three different definitions of volatility: (1) daily volatility measured by close-to-close returns, (2) non-trading volatility measured by close-to-open returns, and (3) trading volatility measured by open-to-close returns. The impact of trading volume and open interest on price volatility is investigated. Following Bessembinder and Seguin (1993), volume and open interest are divided into expected and unexpected components. The GARCH(1,1) model is employed using expected and unexpected components of trading activity (volume and open interest) as explanatory variables. The results show a positive contemporaneous relationship between expected and unexpected trading volume and volatility, while open interest mitigates volatility. Policy makers can use these findings to suggest to investors that trading activity (volume and open interest) is a proxy of market information flowing to exchanges, especially unexpected trading activity. New information flowing to exchanges can mostly be noticed in unexpected trading volumes and open interests.
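
    As a rough illustration of the Bessembinder and Seguin (1993) style setup summarized above (not the authors' estimation code), the sketch below splits volume into expected and unexpected components with a trailing moving average and adds both to a hand-coded GARCH(1,1) variance equation; the data, the moving-average window, and the normal-likelihood assumption are all illustrative choices.

```python
# Sketch: GARCH(1,1) variance equation augmented with expected and
# unexpected trading volume (Bessembinder & Seguin style). Illustrative only.
import numpy as np
from scipy.optimize import minimize

def split_volume(volume, window=20):
    """Expected volume = trailing moving average; unexpected = residual."""
    expected = np.convolve(volume, np.ones(window) / window, mode="same")
    return expected, volume - expected

def garch_x_negloglik(params, returns, exog):
    omega, alpha, beta, *gamma = params
    gamma = np.asarray(gamma)
    T = len(returns)
    sigma2 = np.empty(T)
    sigma2[0] = np.var(returns)
    for t in range(1, T):
        sigma2[t] = (omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
                     + exog[t] @ gamma)
        sigma2[t] = max(sigma2[t], 1e-12)       # keep variance positive
    return 0.5 * np.sum(np.log(2 * np.pi * sigma2) + returns ** 2 / sigma2)

# Synthetic inputs so the example runs end to end.
rng = np.random.default_rng(1)
returns = rng.normal(0, 0.01, 1000)
volume = rng.lognormal(10, 0.3, 1000)
expected_v, unexpected_v = split_volume(volume)
exog = np.column_stack([expected_v, unexpected_v])
exog = (exog - exog.mean(axis=0)) / exog.std(axis=0)   # scale for stability

x0 = [1e-5, 0.05, 0.90, 0.0, 0.0]
res = minimize(garch_x_negloglik, x0, args=(returns, exog), method="Nelder-Mead")
print(res.x)   # omega, alpha, beta, gamma_expected, gamma_unexpected
```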

  3. A holistic water depth simulation model for small ponds

    Science.gov (United States)

    Ali, Shakir; Ghosh, Narayan C.; Mishra, P. K.; Singh, R. K.

    2015-10-01

    Estimation of time varying water depth and time to empty of a pond is a prerequisite for comprehensive and coordinated planning of water resources for their effective utilization. A holistic water depth simulation (HWDS) and time to empty (TE) model for small, shallow ephemeral ponds have been derived by employing the generalized model based on the Green-Ampt equation in the basic water balance equation. The HWDS model includes time varying rainfall, runoff, surface water evaporation, outflow and advancement of wetting front length as external inputs. The TE model includes two external inputs: surface water evaporation and advancement of wetting front length. Both models also consider saturated hydraulic conductivity and fillable porosity of the pond's bed material as their parameters. The solution of the HWDS model involved numerical iteration in successive time intervals. The HWDS model has been successfully evaluated with 3 years of field data from two small ponds located within a watershed in a semi-arid region in western India. The HWDS model simulated time varying water depth in the ponds with high accuracy as shown by correlation coefficient (R2 ⩾ 0.9765), index of agreement (d ⩾ 0.9878), root mean square errors (RMSE ⩽ 0.20 m) and percent bias (PB ⩽ 6.23%) for the pooled data sets of the measured and simulated water depth. The statistical F and t-tests also confirmed the reliability of the HWDS model at a probability level of p ⩽ 0.0001. The response of the TE model showed its ability to estimate the time to empty the ponds. An additional field calibration and validation of the HWDS and TE models with observed field data in varied hydro-climatic conditions could be conducted to increase the applicability and credibility of the models.
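
    The HWDS equations themselves are not reproduced in the abstract; the following is only a schematic daily water-balance loop with a Green-Ampt-type infiltration term to show the general shape of such a model. All parameter values, and the simplified treatment of the wetting front, are invented for illustration.

```python
# Minimal sketch of a pond water-balance iteration with a Green-Ampt style
# infiltration term. Parameter values are illustrative, not from the paper.
K_S = 0.005        # saturated hydraulic conductivity of bed material (m/day)
PSI = 0.15         # wetting-front suction head (m)
FILLABLE = 0.30    # fillable porosity of the bed material (-)

def infiltration_rate(cumulative_f):
    """Green-Ampt rate: f = Ks * (1 + psi * fillable_porosity / F)."""
    if cumulative_f <= 0:
        return K_S * 10.0          # cap the initial (theoretically infinite) rate
    return K_S * (1.0 + PSI * FILLABLE / cumulative_f)

def simulate_depth(depth0, rainfall, runoff_in, evaporation, outflow, dt=1.0):
    """March the water depth forward one day at a time until the pond empties."""
    depth, F = depth0, 0.0
    depths = [depth]
    for p, q_in, e, q_out in zip(rainfall, runoff_in, evaporation, outflow):
        f = min(infiltration_rate(F), depth / dt)   # cannot infiltrate more than stored
        F += f * dt                                  # advance the wetting front
        depth = max(depth + (p + q_in - e - q_out - f) * dt, 0.0)
        depths.append(depth)
        if depth == 0.0:
            break
    return depths

# Example: 30 dry days after the monsoon, 5 mm/day evaporation.
series = simulate_depth(1.2, [0.0]*30, [0.0]*30, [0.005]*30, [0.0]*30)
print(len(series) - 1, "days simulated, final depth =", round(series[-1], 3), "m")
```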

  4. When Models and Observations Collide: Journeying towards an Integrated Snow Depth Product

    Science.gov (United States)

    Webster, M.; Petty, A.; Boisvert, L.; Markus, T.; Kurtz, N. T.; Kwok, R.; Perovich, D. K.

    2017-12-01

    Knowledge of snow depth is essential for assessing changes in sea ice mass balance due to snow's insulating and reflective properties. In remote sensing applications, the accuracy of sea ice thickness retrievals from altimetry crucially depends on snow depth. Despite the need for snow depth data, we currently lack continuous observations that capture the basin-scale snow depth distribution and its seasonal evolution. Recent in situ and remote sensing observations are sparse in space and time, and contain uncertainties, caveats, and/or biases that often require careful interpretation. Likewise, using model output for remote sensing applications is limited due to uncertainties in atmospheric forcing and different treatments of snow processes. Here, we summarize our efforts in bringing observational and model data together to develop an approach for an integrated snow depth product. We start with a snow budget model and incrementally incorporate snow processes to determine the effects on snow depth and to assess model sensitivity. We discuss lessons learned in model-observation integration and ideas for potential improvements to the treatment of snow in models.

  5. Finite volume model for two-dimensional shallow environmental flow

    Science.gov (United States)

    Simoes, F.J.M.

    2011-01-01

    This paper presents the development of a two-dimensional, depth integrated, unsteady, free-surface model based on the shallow water equations. The development was motivated by the desire of balancing computational efficiency and accuracy by selective and conjunctive use of different numerical techniques. The base framework of the discrete model uses Godunov methods on unstructured triangular grids, but the solution technique emphasizes the use of a high-resolution Riemann solver where needed, switching to a simpler and computationally more efficient upwind finite volume technique in the smooth regions of the flow. Explicit time marching is accomplished with strong stability preserving Runge-Kutta methods, with additional acceleration techniques for steady-state computations. A simplified mass-preserving algorithm is used to deal with wet/dry fronts. Application of the model is made to several benchmark cases that show the interplay of the diverse solution techniques.
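
    The model described is two-dimensional on unstructured triangular grids with a Riemann/upwind switch; as a much smaller illustration of the underlying finite-volume Godunov idea, the sketch below advances the one-dimensional shallow water equations with a first-order HLL flux on a dam-break test. It is not the paper's scheme.

```python
# 1D first-order Godunov / HLL sketch of the shallow water equations,
# illustrating the finite-volume building block of such models (not the
# paper's 2D unstructured-grid scheme).
import numpy as np

G = 9.81

def flux(h, hu):
    u = np.where(h > 1e-8, hu / h, 0.0)
    return np.array([hu, hu * u + 0.5 * G * h ** 2])

def hll_flux(hL, huL, hR, huR):
    uL = huL / hL if hL > 1e-8 else 0.0
    uR = huR / hR if hR > 1e-8 else 0.0
    sL = min(uL - np.sqrt(G * hL), uR - np.sqrt(G * hR))
    sR = max(uL + np.sqrt(G * hL), uR + np.sqrt(G * hR))
    FL, FR = flux(hL, huL), flux(hR, huR)
    if sL >= 0:
        return FL
    if sR <= 0:
        return FR
    dU = np.array([hR, huR]) - np.array([hL, huL])
    return (sR * FL - sL * FR + sL * sR * dU) / (sR - sL)

def step(h, hu, dx, dt):
    n = len(h)
    F = np.zeros((n + 1, 2))
    for i in range(1, n):                                  # interior interfaces
        F[i] = hll_flux(h[i - 1], hu[i - 1], h[i], hu[i])
    F[0], F[-1] = flux(h[0], hu[0]), flux(h[-1], hu[-1])   # transmissive ends
    h_new = h - dt / dx * (F[1:, 0] - F[:-1, 0])
    hu_new = hu - dt / dx * (F[1:, 1] - F[:-1, 1])
    return h_new, hu_new

# Dam-break test: 2 m of water on the left, 1 m on the right.
n, dx, dt = 200, 0.05, 0.005
h = np.where(np.arange(n) < n // 2, 2.0, 1.0)
hu = np.zeros(n)
for _ in range(100):
    h, hu = step(h, hu, dx, dt)
print(h.min(), h.max())
```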

  6. Climate Model Diagnostic Analyzer Web Service System

    Science.gov (United States)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Kubar, T. L.; Li, J.; Zhang, J.; Wang, W.

    2015-12-01

    Both the National Research Council Decadal Survey and the latest Intergovernmental Panel on Climate Change Assessment Report stressed the need for the comprehensive and innovative evaluation of climate models with the synergistic use of global satellite observations in order to improve our weather and climate simulation and prediction capabilities. The abundance of satellite observations for fundamental climate parameters and the availability of coordinated model outputs from CMIP5 for the same parameters offer a great opportunity to understand and diagnose model biases in climate models. In addition, the Obs4MIPs efforts have created several key global observational datasets that are readily usable for model evaluations. However, a model diagnostic evaluation process requires physics-based multi-variable comparisons that typically involve large-volume and heterogeneous datasets, making them both computationally- and data-intensive. In response, we have developed a novel methodology to diagnose model biases in contemporary climate models and implementing the methodology as a web-service based, cloud-enabled, provenance-supported climate-model evaluation system. The evaluation system is named Climate Model Diagnostic Analyzer (CMDA), which is the product of the research and technology development investments of several current and past NASA ROSES programs. The current technologies and infrastructure of CMDA are designed and selected to address several technical challenges that the Earth science modeling and model analysis community faces in evaluating and diagnosing climate models. In particular, we have three key technology components: (1) diagnostic analysis methodology; (2) web-service based, cloud-enabled technology; (3) provenance-supported technology. The diagnostic analysis methodology includes random forest feature importance ranking, conditional probability distribution function, conditional sampling, and time-lagged correlation map. We have implemented the
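
    CMDA's internals are not shown here; as a generic illustration of one named ingredient, random forest feature importance ranking, the sketch below ranks which (synthetic, placeholder) model fields best explain a model-minus-observation bias. Variable names are assumptions, not CMDA's actual interface.

```python
# Generic sketch of "random forest feature importance ranking": rank which
# model fields best explain the model-minus-observation bias.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 2000
fields = pd.DataFrame({
    "cloud_fraction": rng.uniform(0, 1, n),
    "sst_k": rng.uniform(285, 303, n),
    "wind_speed": rng.uniform(0, 15, n),
    "water_vapor": rng.uniform(10, 60, n),
})
# Synthetic "bias" target just so the example runs.
bias = 0.8 * fields.cloud_fraction - 0.02 * fields.water_vapor + rng.normal(0, 0.1, n)

rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(fields, bias)
ranking = sorted(zip(fields.columns, rf.feature_importances_),
                 key=lambda kv: kv[1], reverse=True)
for name, importance in ranking:
    print(f"{name:15s} {importance:.3f}")
```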

  7. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    Science.gov (United States)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. The potential of sound both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals is large. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
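
    As a toy illustration of the two cues studied, overall volume attenuation and high-frequency loss, the snippet below applies a distance-dependent gain and low-pass filter to a mono signal. The inverse-distance gain law and the cutoff mapping are arbitrary choices for illustration, not the article's stimulus settings.

```python
# Sketch of the two auditory depth cues studied: overall volume attenuation
# and high-frequency loss, applied to a mono signal as a function of distance.
# The attenuation law and cutoff mapping are illustrative choices only.
import numpy as np
from scipy.signal import butter, lfilter

FS = 48_000  # sample rate (Hz)

def apply_depth_cues(signal, distance_m, ref_distance_m=1.0):
    # Volume attenuation: inverse-distance (-6 dB per doubling of distance).
    gain = ref_distance_m / max(distance_m, ref_distance_m)
    # High-frequency loss: lower the low-pass cutoff as the source recedes.
    cutoff_hz = np.clip(16_000.0 / max(distance_m, 1.0), 500.0, 16_000.0)
    b, a = butter(2, cutoff_hz / (FS / 2), btype="low")
    return gain * lfilter(b, a, signal)

# Example: the same 1-second noise burst rendered at 1 m and at 20 m.
noise = np.random.default_rng(0).normal(0, 0.1, FS)
near = apply_depth_cues(noise, 1.0)
far = apply_depth_cues(noise, 20.0)
print(np.std(near), np.std(far))   # the distant version is quieter and duller
```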

  8. A new method for depth profiling reconstruction in confocal microscopy

    Science.gov (United States)

    Esposito, Rosario; Scherillo, Giuseppe; Mensitieri, Giuseppe

    2018-05-01

    Confocal microscopy is commonly used to reconstruct depth profiles of chemical species in multicomponent systems and to image nuclear and cellular details in human tissues via image intensity measurements of optical sections. However, the performance of this technique is reduced by inherent effects related to wave diffraction phenomena, refractive index mismatch and finite beam spot size. All these effects distort the optical wave and cause an image to be captured of a small volume around the desired illuminated focal point within the specimen rather than an image of the focal point itself. The size of this small volume increases with depth, thus causing a further loss of resolution and distortion of the profile. Recently, we proposed a theoretical model that accounts for the above wave distortion and allows for a correct reconstruction of the depth profiles for homogeneous samples. In this paper, this theoretical approach has been adapted for describing the profiles measured from non-homogeneous distributions of emitters inside the investigated samples. The intensity image is built by summing the intensities collected from each of the emitters planes belonging to the illuminated volume, weighed by the emitters concentration. The true distribution of the emitters concentration is recovered by a new approach that implements this theoretical model in a numerical algorithm based on the Maximum Entropy Method. Comparisons with experimental data and numerical simulations show that this new approach is able to recover the real unknown concentration distribution from experimental profiles with an accuracy better than 3%.

  9. A depth semi-averaged model for coastal dynamics

    Science.gov (United States)

    Antuono, M.; Colicchio, G.; Lugni, C.; Greco, M.; Brocchini, M.

    2017-05-01

    The present work extends the semi-integrated method proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)], which comprises a subset of depth-averaged equations (similar to Boussinesq-like models) and a Poisson equation that accounts for vertical dynamics. Here, the subset of depth-averaged equations has been reshaped in a conservative-like form and both the Poisson equation formulations proposed by Antuono and Brocchini ["Beyond Boussinesq-type equations: Semi-integrated models for coastal dynamics," Phys. Fluids 25(1), 016603 (2013)] are investigated: the former uses the vertical velocity component (formulation A) and the latter a specific depth semi-averaged variable, ϒ (formulation B). Our analyses reveal that formulation A is prone to instabilities as wave nonlinearity increases. On the contrary, formulation B allows an accurate, robust numerical implementation. Test cases derived from the scientific literature on Boussinesq-type models—i.e., solitary and Stokes wave analytical solutions for linear dispersion and nonlinear evolution and experimental data for shoaling properties—are used to assess the proposed solution strategy. It is found that the present method gives reliable predictions of wave propagation in shallow to intermediate waters, in terms of both semi-averaged variables and conservation properties.

  10. Depths of Intraplate Indian Ocean Earthquakes from Waveform Modeling

    Science.gov (United States)

    Baca, A. J.; Polet, J.

    2014-12-01

    The Indian Ocean is a region of complex tectonics and anomalous seismicity. The ocean floor in this region exhibits many bathymetric features, most notably the multiple inactive fracture zones within the Wharton Basin and the Ninetyeast Ridge. The 11 April 2012 MW 8.7 and 8.2 strike-slip events that took place in this area are unique because their rupture appears to have extended to a depth where brittle failure, and thus seismic activity, was considered to be impossible. We analyze multiple intraplate earthquakes that have occurred throughout the Indian Ocean to better constrain their focal depths in order to enhance our understanding of how deep intraplate events are occurring and more importantly determine if the ruptures are originating within a ductile regime. Selected events are located within the Indian Ocean away from major plate boundaries. A majority are within the deforming Indo-Australian tectonic plate. Events primarily display thrust mechanisms with some strike-slip or a combination of the two. All events are between MW5.5-6.5. Event selections were handled this way in order to facilitate the analysis of teleseismic waveforms using a point source approximation. From these criteria we gathered a suite of 15 intraplate events. Synthetic seismograms of direct P-waves and depth phases are computed using a 1-D propagator matrix approach and compared with global teleseismic waveform data to determine a best depth for each event. To generate our synthetic seismograms we utilized the CRUST1.0 software, a global crustal model that generates velocity values at the hypocenter of our events. Our waveform analysis results reveal that our depths diverge from the Global Centroid Moment Tensor (GCMT) depths, which underestimate our deep lithosphere events and overestimate our shallow depths by as much as 17 km. We determined a depth of 45km for our deepest event. We will show a comparison of our final earthquake depths with the lithospheric thickness based on

  11. Theoretical performance model for single image depth from defocus.

    Science.gov (United States)

    Trouvé-Peloux, Pauline; Champagnat, Frédéric; Le Besnerais, Guy; Idier, Jérôme

    2014-12-01

    In this paper we present a performance model for depth estimation using single image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD. We then study the influence on the performance of the optical parameters of a conventional camera such as the focal length, the aperture, and the position of the in-focus plane (IFP). We derive an approximate analytical expression of the CRB away from the IFP, and we propose an interpretation of the SIDFD performance in this domain. Finally, we illustrate the predictive capacity of our performance model on experimental data comparing several settings of a consumer camera.

  12. TRENDS IN ESTIMATED MIXING DEPTH DAILY MAXIMUMS

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R; Amy DuPont, A; Robert Kurzeja, R; Matt Parker, M

    2007-11-12

    Mixing depth is an important quantity in the determination of air pollution concentrations. Fireweather forecasts depend strongly on estimates of the mixing depth as a means of determining the altitude and dilution (ventilation rates) of smoke plumes. The Savannah River United States Forest Service (USFS) routinely conducts prescribed fires at the Savannah River Site (SRS), a heavily wooded Department of Energy (DOE) facility located in southwest South Carolina. For many years, the Savannah River National Laboratory (SRNL) has provided forecasts of weather conditions in support of the fire program, including an estimated mixing depth using potential temperature and turbulence change with height at a given location. This paper examines trends in the average estimated mixing depth daily maximum at the SRS over an extended period of time (4.75 years) derived from numerical atmospheric simulations using two versions of the Regional Atmospheric Modeling System (RAMS). This allows for differences to be seen between the model versions, as well as trends on a multi-year time frame. In addition, comparisons of predicted mixing depth for individual days in which special balloon soundings were released are also discussed.

  13. High volume acupuncture clinic (HVAC) for chronic knee pain--audit of a possible model for delivery of acupuncture in the National Health Service.

    Science.gov (United States)

    Berkovitz, Saul; Cummings, Mike; Perrin, Chris; Ito, Rieko

    2008-03-01

    Recent research has established the efficacy, effectiveness and cost effectiveness of acupuncture for some forms of chronic musculoskeletal pain. However, there are practical problems with delivery which currently prevent its large scale implementation in the National Health Service. We have developed a delivery model at our hospital, a 'high volume' acupuncture clinic (HVAC) in which patients are treated in a group setting for single conditions using standardised or semi-standardised electroacupuncture protocols by practitioners with basic training. We discuss our experiences using this model for chronic knee pain and present an outcome audit for the first 77 patients, demonstrating satisfactory initial (eight week) clinical results. Longer term (one year) data are currently being collected and the model should next be tested in primary care to confirm its feasibility.

  14. Structural and functional correlates of hypnotic depth and suggestibility.

    Science.gov (United States)

    McGeown, William Jonathan; Mazzoni, Giuliana; Vannucci, Manila; Venneri, Annalena

    2015-02-28

    This study explores whether self-reported depth of hypnosis and hypnotic suggestibility are associated with individual differences in neuroanatomy and/or levels of functional connectivity. Twenty-nine people varying in suggestibility were recruited and underwent structural, and after a hypnotic induction, functional magnetic resonance imaging at rest. We used voxel-based morphometry to assess the correlation of grey matter (GM) and white matter (WM) against the independent variables: depth of hypnosis, level of relaxation and hypnotic suggestibility. Functional networks identified with independent components analysis were regressed with the independent variables. Hypnotic depth ratings were positively correlated with GM volume in the frontal cortex and the anterior cingulate cortex (ACC). Hypnotic suggestibility was positively correlated with GM volume in the left temporal-occipital cortex. Relaxation ratings did not correlate significantly with GM volume and none of the independent variables correlated with regional WM volume measures. Self-reported deeper levels of hypnosis were associated with less connectivity within the anterior default mode network. Taken together, the results suggest that the greater GM volume in the medial frontal cortex and ACC, and lower connectivity in the DMN during hypnosis facilitate experiences of greater hypnotic depth. The patterns of results suggest that hypnotic depth and hypnotic suggestibility should not be considered synonyms. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  15. Service Level Decision-making in Rural Physiotherapy: Development of Conceptual Models.

    Science.gov (United States)

    Adams, Robyn; Jones, Anne; Lefmann, Sophie; Sheppard, Lorraine

    2016-06-01

    Understanding decision-making about health service provision is increasingly important in an environment of increasing demand and constrained resources. Multiple factors are likely to influence decisions about which services will be provided, yet workforce is the most noted factor in the rural physiotherapy literature. This paper draws together results obtained from exploration of service level decision-making (SLDM) to propose 'conceptual' models of rural physiotherapy SLDM. A prioritized qualitative approach enabled exploration of participant perspectives about rural physiotherapy decision-making. Stakeholder perspectives were obtained through surveys and in-depth interviews. Interviews were transcribed verbatim and reviewed by participants. Participant confidentiality was maintained by coding both participants and sites. A system theory-case study heuristic provided a framework for exploration across sites within the investigation area: a large area of one Australian state with a mix of regional, rural and remote communities. Thirty-nine surveys were received from participants in 11 communities. Nineteen in-depth interviews were conducted with physiotherapists and key decision-makers. Results reveal the complexity of factors influencing rural physiotherapy service provision and the value of a systems approach when exploring decision-making about rural physiotherapy service provision. Six key features were identified that formed the rural physiotherapy SLDM system: capacity and capability; contextual influences; layered decision-making; access issues; value and beliefs; and tensions and conflict. Rural physiotherapy SLDM is not a one-dimensional process but results from the complex interaction of clusters of systems issues. Decision-making about physiotherapy service provision is influenced by both internal and external factors. Similarities in influencing factors and the iterative nature of decision-making emerged, which enabled linking physiotherapy SLDM with

  16. Mapping the global depth to bedrock for land surface modelling

    Science.gov (United States)

    Shangguan, W.; Hengl, T.; Yuan, H.; Dai, Y. J.; Zhang, S.

    2017-12-01

    Depth to bedrock serves as the lower boundary of land surface models, which controls hydrologic and biogeochemical processes. This paper presents a framework for global estimation of depth to bedrock (DTB). Observations were extracted from a global compilation of soil profile data (ca. 130,000 locations) and borehole data (ca. 1.6 million locations). Additional pseudo-observations generated by expert knowledge were added to fill in large sampling gaps. The model training points were then overlaid on a stack of 155 covariates including DEM-based hydrological and morphological derivatives, lithologic units, MODIS surface reflectance bands and vegetation indices derived from the MODIS land products. Global spatial prediction models were developed using random forests and Gradient Boosting Tree algorithms. The final predictions were generated at a spatial resolution of 250 m as an ensemble prediction of the two independently fitted models. The 10-fold cross-validation shows that the models explain 59% of the variation in absolute DTB and 34% in censored DTB (depths deeper than 200 cm are predicted as 200 cm). The model for the occurrence of the R horizon (bedrock) within 200 cm performs well. Visual comparisons of predictions in the study areas where more detailed maps of depth to bedrock exist show that there is a general match with spatial patterns from similar local studies. Limitations of the data set and extrapolation in data-sparse areas should not be ignored in applications. To improve accuracy of spatial prediction, more borehole drilling logs will need to be added to supplement the existing training points in under-represented areas.
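
    The actual maps were built from a 155-layer covariate stack; the sketch below only mirrors the stated recipe on placeholder data: fit random forests and gradient boosted trees, evaluate with 10-fold cross-validation, and average the two independently fitted models.

```python
# Sketch of the stated recipe: fit random forests and gradient boosted trees
# on DTB observations vs. covariates, cross-validate, and ensemble-average.
# Covariates here are synthetic placeholders, not the 155 real layers.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 5000
X = rng.normal(size=(n, 10))                 # stand-in for DEM/lithology/MODIS covariates
dtb = 50 + 30 * X[:, 0] - 15 * X[:, 1] + rng.normal(0, 20, n)
dtb = np.clip(dtb, 0, None)                  # depth to bedrock cannot be negative

rf = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
gbt = GradientBoostingRegressor(n_estimators=200, random_state=0)

for name, model in [("random forest", rf), ("gradient boosting", gbt)]:
    r2 = cross_val_score(model, X, dtb, cv=10, scoring="r2")
    print(f"{name}: mean 10-fold R2 = {r2.mean():.2f}")

# Final map values = simple average of the two independently fitted models.
rf.fit(X, dtb)
gbt.fit(X, dtb)
ensemble_prediction = 0.5 * (rf.predict(X) + gbt.predict(X))
print(ensemble_prediction[:5])
```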

  17. Exploring the potential of multivariate depth-damage and rainfall-damage models

    DEFF Research Database (Denmark)

    van Ootegem, Luc; van Herck, K.; Creten, T.

    2018-01-01

    In Europe, floods are among the natural catastrophes that cause the largest economic damage. This article explores the potential of two distinct types of multivariate flood damage models: ‘depth-damage’ models and ‘rainfall-damage’ models. We use survey data of 346 Flemish households that were victims of pluvial floods, complemented with rainfall data from both rain gauges and weather radars. In the econometric analysis, a Tobit estimation technique is used to deal with the issue of zero damage observations. The results show that in the ‘depth-damage’ models flood depth has a significant impact on the damage. In the ‘rainfall-damage’ models there is a significant impact of rainfall accumulation on the damage when using the gauge rainfall data as predictor, but not when using the radar rainfall data. Finally, non-hazard indicators are found to be important for explaining pluvial flood...
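
    The article's estimation is not reproduced here; since common Python statistics libraries lack a built-in Tobit, the sketch below hand-codes a left-censored (zero-damage) Tobit likelihood and fits it to synthetic data. Predictors and coefficients are placeholders, not the survey variables.

```python
# Hand-coded Tobit (left-censored at zero) sketch for damage data with many
# zero-damage observations. Predictors are placeholders, not the survey items.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def tobit_negloglik(params, y, X):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    censored = y <= 0
    ll = np.zeros_like(y, dtype=float)
    ll[censored] = norm.logcdf(-xb[censored] / sigma)
    ll[~censored] = norm.logpdf((y[~censored] - xb[~censored]) / sigma) - np.log(sigma)
    return -ll.sum()

# Synthetic data: a sizeable share of households report zero damage.
rng = np.random.default_rng(3)
n = 346
X = np.column_stack([np.ones(n),
                     rng.uniform(0, 0.5, n),     # flood depth (m)
                     rng.uniform(0, 60, n)])     # rainfall accumulation (mm)
latent = -2000 + 8000 * X[:, 1] + 40 * X[:, 2] + rng.normal(0, 1500, n)
damage = np.clip(latent, 0, None)

x0 = np.zeros(X.shape[1] + 1)
x0[-1] = np.log(damage[damage > 0].std())
res = minimize(tobit_negloglik, x0, args=(damage, X), method="BFGS")
print(res.x[:-1], "sigma =", np.exp(res.x[-1]))
```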

  18. The maximum economic depth of groundwater abstraction for irrigation

    Science.gov (United States)

    Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.

    2017-12-01

    Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of the global food production and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped for it to be still economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where costs of well drilling and the energy costs of pumping, which are a function of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries based on GDP/capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000 and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas in the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. Most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of
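
    The study's cost and revenue sub-models are only summarized above; the toy calculation below shows the break-even logic with invented prices: increase the depth until yearly pumping energy plus amortized drilling cost exceeds the irrigation revenue.

```python
# Toy break-even calculation: at what pumping depth do yearly costs
# (energy + amortized well drilling) exceed the irrigation revenue?
# All prices and efficiencies below are invented for illustration.
RHO_G = 1000 * 9.81          # rho * g (N/m3)
PUMP_EFFICIENCY = 0.6
ENERGY_PRICE = 0.10          # $/kWh
DRILL_COST_PER_M = 100.0     # $/m, amortized over the well lifetime
WELL_LIFETIME_YR = 20
ABSTRACTION = 200_000.0      # m3/yr of irrigation water
REVENUE = 30_000.0           # $/yr from the irrigated crop

def yearly_cost(depth_m):
    energy_j = RHO_G * ABSTRACTION * depth_m / PUMP_EFFICIENCY
    energy_cost = energy_j / 3.6e6 * ENERGY_PRICE          # J -> kWh -> $
    drilling_cost = DRILL_COST_PER_M * depth_m / WELL_LIFETIME_YR
    return energy_cost + drilling_cost

depth = 1.0
while yearly_cost(depth) < REVENUE and depth < 2000:
    depth += 1.0
print("maximum economic depth ~", depth, "m")
```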

  19. Industrial Sector Technology Use Model (ISTUM): industrial energy use in the United States, 1974-2000. Volume 1. Primary model documentation. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Bohn, Roger E.; Herod, J. Steven; Andrews, Gwen L.; Budzik, Philip M.; Eissenstat, Richard S.; Grossmann, John R.; Reiner, Gary M.; Roschke, Thomas E.; Shulman, Michael J.; Toppen, Timothy R.; Veno, William R.; Violette, Daniel M.; Smolinski, Michael D.; Habel, Deborah; Cook, Alvin E.

    1979-10-01

    ISTUM is designed to predict the commercial market penetration of various energy technologies in the industrial sector out to the year 2000. It is a refinement and further development of Market Oriented Program Planning Study task force in 1977. ISTUM assesses the comparative economic competitiveness of each technology and competes over 100 energy technologies - conventionals, fossil/energy, conservation, cogeneration, solar, and geothermal. A broad overview of the model, the solution of the model, and an in-depth discussion of strength and limitations of the model are provided in Volume I. (MCW)

  20. Using Service Scenarios to Model Business Services

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    The purpose of the paper is to present and evaluate the notion of service scenarios. A service is work done by a service executor in interaction with a service consumer. A service scenario is a model of a service system and the roles that are played by the actors participating and interacting during the execution of a service. The model represents the roles and the interactions between the participants. Service scenarios can be used to model specific services and roles played by human beings and IT systems in the execution of services. The use of service scenarios is demonstrated by means of a case study in a public library. The case study indicates that service systems should be understood as socio-technical systems in which service executors and service consumers co-create value in mutual interaction with each other and with a set of shared resources.

  1. Feasibility of imaging epileptic seizure onset with EIT and depth electrodes.

    Science.gov (United States)

    Witkowska-Wrobel, Anna; Aristovich, Kirill; Faulkner, Mayo; Avery, James; Holder, David

    2018-06-01

    Imaging ictal and interictal activity with Electrical Impedance Tomography (EIT) using intracranial electrode mats has been demonstrated in animal models of epilepsy. In human epilepsy subjects undergoing presurgical evaluation, depth electrodes are often preferred. The purpose of this work was to evaluate the feasibility of using EIT to localise epileptogenic areas with intracranial electrodes in humans. The accuracy of localisation of the ictal onset zone was evaluated in computer simulations using 9M element FEM models derived from three subjects. 5 mm radius perturbations imitating a single seizure onset event were placed in several locations forming two groups: under depth electrode coverage and in the contralateral hemisphere. Simulations were made for impedance changes of 1% expected for neuronal depolarisation over milliseconds and 10% for cell swelling over seconds. Reconstructions were compared with EEG source modelling for a radially orientated dipole with respect to the closest EEG recording contact. The best accuracy of EIT was obtained using all depth and 32 scalp electrodes, greater than the equivalent accuracy with EEG inverse source modelling. The localisation error was 5.2 ± 1.8, 4.3 ± 0 and 46.2 ± 25.8 mm for perturbations within the volume enclosed by depth electrodes and 29.6 ± 38.7, 26.1 ± 36.2, 54.0 ± 26.2 mm for those without (EIT 1% change, EIT 10% change, and EEG source modelling, respectively; n = 15 in 3 subjects, p < …). EIT was insensitive to source dipole orientation: all 15 perturbations within the volume enclosed by depth electrodes were localised, whereas the standard clinical method of visual inspection of EEG voltages only localised 8 out of 15 cases. This suggests that adding EIT to SEEG measurements could be beneficial in localising the onset of seizures. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  2. International Nuclear Model. Volume 3. Program description

    International Nuclear Information System (INIS)

    Andress, D.

    1985-01-01

    This is Volume 3 of three volumes of documentation of the International Nuclear Model (INM). This volume presents the Program Description of the International Nuclear Model, which was developed for the Nuclear and Alternate Fuels Division (NAFD), Office of Coal, Nuclear, Electric and Alternate Fuels, Energy Information Administration (EIA), US Department of Energy (DOE). The International Nuclear Model (INM) is a comprehensive model of the commercial nuclear power industry. It simulates economic decisions for reactor deployment and fuel management decision based on an input set of technical economic and scenario parameters. The technical parameters include reactor operating characteristics, fuel cycle timing and mass loss factors, and enrichment tails assays. Economic parameters include fuel cycle costs, financial data, and tax alternatives. INM has a broad range of scenario options covering, for example, process constraints, interregional activities, reprocessing, and fuel management selection. INM reports reactor deployment schedules, electricity generation, and fuel cycle requirements and costs. It also has specialized reports for extended burnup and permanent disposal. Companion volumes to Volume 3 are: Volume 1 - Model Overview, and Volume 2 - Data Base Relationships

  3. On modeling of beryllium molten depths in simulated plasma disruptions

    International Nuclear Information System (INIS)

    Tsotridis, G.; Rother, H.

    1996-01-01

    Plasma-facing components in tokamak-type fusion reactors are subjected to intense heat loads during plasma disruptions. The influence of high heat fluxes on the depth of heat-affected zones of pure beryllium metal and beryllium containing very low levels of surface active impurities is studied by using a two-dimensional transient computer model that solves the equations of motion and energy. Results are presented for a range of energy densities and disruption times. Under certain conditions, impurities, through their effect on surface tension, create convective flows and hence influence the flow intensities and the resulting depths of the beryllium molten layers during plasma disruptions. The calculated depths of the molten layers are also compared with other mathematical models that are based on the assumption that heat is transported through the material by conduction only. 32 refs., 6 figs., 1 tab

  4. A Water Temperature Simulation Model for Rice Paddies With Variable Water Depths

    Science.gov (United States)

    Maruyama, Atsushi; Nemoto, Manabu; Hamasaki, Takahiro; Ishida, Sachinobu; Kuwagata, Tsuneo

    2017-12-01

    A water temperature simulation model was developed to estimate the effects of water management on the thermal environment in rice paddies. The model was based on two energy balance equations: for the ground and for the vegetation, and considered the water layer and changes in the aerodynamic properties of its surface with water depth. The model was examined with field experiments for water depths of 0 mm (drained conditions) and 100 mm (flooded condition) at two locations. Daily mean water temperatures in the flooded condition were mostly higher than in the drained condition in both locations, and the maximum difference reached 2.6°C. This difference was mainly caused by the difference in surface roughness of the ground. Heat exchange by free convection played an important role in determining water temperature. From the model simulation, the temperature difference between drained and flooded conditions was more apparent under low air temperature and small leaf area index conditions; the maximum difference reached 3°C. Most of this difference occurred when the range of water depth was lower than 50 mm. The season-long variation in modeled water temperature showed good agreement with an observation data set from rice paddies with various rice-growing seasons, for a diverse range of water depths (root mean square error of 0.8-1.0°C). The proposed model can estimate water temperature for a given water depth, irrigation, and drainage conditions, which will improve our understanding of the effect of water management on plant growth and greenhouse gas emissions through the thermal environment of rice paddies.

  5. Prediction of Cavitation Depth in an Al-Cu Alloy Melt with Bubble Characteristics Based on Synchrotron X-ray Radiography

    Science.gov (United States)

    Huang, Haijun; Shu, Da; Fu, Yanan; Zhu, Guoliang; Wang, Donghong; Dong, Anping; Sun, Baode

    2018-04-01

    The size of the cavitation region is a key parameter for estimating the metallurgical effect of ultrasonic melt treatment (UST) on preferential structure refinement. We present a simple numerical model to predict the characteristic length of the cavitation region, termed the cavitation depth, in a metal melt. The model is based on wave propagation with acoustic attenuation caused by cavitation bubbles, which depends on bubble characteristics and ultrasonic intensity. In situ synchrotron X-ray imaging of cavitation bubbles has been performed to quantitatively measure the size of the cavitation region and the volume fraction and size distribution of cavitation bubbles in an Al-Cu melt. The results show that cavitation bubbles maintain a log-normal size distribution, and the volume fraction of cavitation bubbles obeys a tanh function of the applied ultrasonic intensity. Using the experimental values of bubble characteristics as input, the predicted cavitation depth agrees well with observations except for a slight deviation at higher acoustic intensities. Further analysis shows that increases in bubble volume and bubble size both lead to higher attenuation by cavitation bubbles and, hence, a smaller cavitation depth. The current model offers a guideline to implement UST, especially for structural refinement.
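
    The paper's model is only summarized here; the following schematic (with invented constants) illustrates the general idea of a plane wave attenuated by a bubble population whose volume fraction follows a tanh of the local intensity, the cavitation depth being where the intensity falls below a threshold. It is not the published formulation.

```python
# Schematic sketch of the idea: cavitation bubbles attenuate the ultrasonic
# wave, and the cavitation depth is where the local acoustic intensity falls
# below the cavitation threshold. All constants are invented for illustration.
import numpy as np

I0 = 100.0             # applied intensity at the sonotrode face (W/cm2)
I_THRESHOLD = 10.0     # cavitation threshold intensity (W/cm2)
BETA_MAX = 0.01        # maximum bubble volume fraction
K_INTENSITY = 0.05     # steepness of the tanh(volume fraction vs intensity) law
ALPHA_PER_BETA = 2000  # attenuation coefficient per unit volume fraction (1/m)

def cavitation_depth(dz=1e-4, z_max=0.5):
    """March the wave downward, attenuating it by the local bubble content."""
    intensity, z = I0, 0.0
    while intensity > I_THRESHOLD and z < z_max:
        beta = BETA_MAX * np.tanh(K_INTENSITY * intensity)   # bubble volume fraction
        alpha = ALPHA_PER_BETA * beta                        # attenuation (1/m)
        intensity *= np.exp(-2.0 * alpha * dz)               # intensity decay over dz
        z += dz
    return z

print("cavitation depth ~", round(1000 * cavitation_depth(), 1), "mm")
```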

  6. Surface area and the seabed area, volume, depth, slope, and topographic variation for the world's seas, oceans, and countries.

    Science.gov (United States)

    Costello, Mark John; Cheung, Alan; De Hauwere, Nathalie

    2010-12-01

    Depth and topography directly and indirectly influence most ocean environmental conditions, including light penetration and photosynthesis, sedimentation, current movements and stratification, and thus temperature and oxygen gradients. These parameters are thus likely to influence species distribution patterns and productivity in the oceans. They may be considered the foundation for any standardized classification of ocean ecosystems and important correlates of metrics of biodiversity (e.g., species richness and composition, fisheries). While statistics on ocean depth and topography are often quoted, how they were derived is rarely cited, and unless calculated using the same spatial resolution the resulting statistics will not be strictly comparable. We provide such statistics using the best available resolution (1-min) global bathymetry, and open source digital maps of the world's seas and oceans and countries' Exclusive Economic Zones, using a standardized methodology. We created a terrain map and calculated sea surface and seabed area, volume, and mean, standard deviation, maximum, and minimum, of both depth and slope. All the source data and our database are freely available online. We found that although the ocean is flat, and up to 71% of the area has a …, the ocean volume exceeds 1.3 billion km³ (or 1.3 sextillion liters), and sea surface and seabed areas over 354 million km². We propose the coefficient of variation of slope as an index of topographic heterogeneity. Future studies may improve on this database, for example by using a more detailed bathymetry, and in situ measured data. The database could be used to classify ocean features, such as abyssal plains, ridges, and slopes, and thus provide the basis for a standards based classification of ocean topography.

  7. Health services for reproductive tract infections among female migrant workers in industrial zones in Ha Noi, Viet Nam: an in-depth assessment

    Directory of Open Access Journals (Sweden)

    Kim Le

    2012-02-01

    Full Text Available Background: Rural-to-urban migration involves a high proportion of females because job opportunities for female migrants have increased in urban industrial areas. Those who migrate may be healthier than those staying in the village and they may benefit from better health care services at destination, but the 'healthy' effect can be reversed at destination due to migration-related health risk factors. The study aimed to explore the need for health care services for reproductive tract infections (RTIs) among female migrants working in the Sai Dong industrial zone as well as their service utilization. Methods: The cross-sectional study employed a mixed method approach. A cohort of 300 female migrants was interviewed to collect quantitative data. Two focus groups and 20 in-depth interviews were conducted to collect qualitative data. We used frequency and cross-tabulation techniques to analyze the quantitative data, and the qualitative data were used to triangulate and to provide more in-depth information. Results: The need for health care services for RTIs was high, as 25% of participants had RTI syndromes. Only 21.6% of female migrants with RTI syndromes ever sought help from health care services. Barriers preventing migrants from accessing services were traditional values, long working hours, lack of information, and high cost of services. Employers had limited interest in the reproductive health of female migrants, and there was ineffective collaboration between the local health system and enterprises. These barriers were partly caused by lack of health promotion programs suitable for migrants. Most respondents needed more information on RTIs and preferred to receive it from their employers since they commonly work shifts and spend most of their daytime at work. Conclusion: While RTIs are a common health problem among female migrant workers in industrial zones, female migrants had many obstacles in accessing RTI care services. The findings

  8. Simulating streamflow and water table depth with a coupled hydrological model

    Directory of Open Access Journals (Sweden)

    Alphonce Chenjerayi Guzha

    2010-09-01

    Full Text Available A coupled model integrating MODFLOW and TOPNET, with the models interacting through the exchange of recharge and baseflow and river-aquifer interactions, was developed and applied to the Big Darby Watershed in Ohio, USA. Calibration and validation results show that there is generally good agreement between measured streamflow and simulated results from the coupled model. At two gauging stations, average goodness of fit (R2), percent bias (PB), and Nash-Sutcliffe efficiency (ENS) values of 0.83, 11.15%, and 0.83, respectively, were obtained for simulation of streamflow during calibration, and values of 0.84, 8.75%, and 0.85, respectively, were obtained for validation. The simulated water table depths yielded average R2 values of 0.77 and 0.76 for calibration and validation, respectively. The good match between measured and simulated streamflows and water table depths demonstrates that the model is capable of adequately simulating streamflows and water table depths in the watershed and also capturing the influence of spatial and temporal variation in recharge.
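
    For reference, the three goodness-of-fit statistics quoted above can be computed as below; the observed and simulated series are placeholders, and the percent-bias sign convention shown is one common choice.

```python
# The three goodness-of-fit statistics quoted above (R2, percent bias PB, and
# Nash-Sutcliffe efficiency ENS), computed for any observed/simulated series.
import numpy as np

def r_squared(obs, sim):
    return np.corrcoef(obs, sim)[0, 1] ** 2

def percent_bias(obs, sim):
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Placeholder series standing in for measured and simulated streamflow.
rng = np.random.default_rng(5)
observed = np.abs(rng.normal(10, 4, 365))
simulated = observed + rng.normal(0.5, 1.5, 365)

print("R2  =", round(r_squared(observed, simulated), 2))
print("PB  =", round(percent_bias(observed, simulated), 2), "%")
print("ENS =", round(nash_sutcliffe(observed, simulated), 2))
```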

  9. Depth geological model building: application to the 3D high resolution 'ANDRA' seismic block

    International Nuclear Information System (INIS)

    Mari, J.L.; Yven, B.

    2012-01-01

    Document available in extended abstract form only. 3D seismic blocks and logging data, mainly acoustic and density logs, are often used for geological model building in time. The geological model must then be converted from time to depth. A geostatistical approach for time-to-depth conversion of seismic horizons is used in many geo-modelling projects. From a geostatistical point of view, the time-to-depth conversion of seismic horizons is a classical estimation problem involving one or more secondary variables. The Bayesian approach [1] provides an excellent estimator which is more general than traditional kriging with external drift(s) and fits the needs of time-to-depth conversion of seismic horizons very well. The time-to-depth conversion of the selected seismic horizons is used to compute a time-to-depth conversion model at the time sampling rate (1 ms). The 3D depth conversion model allows the computation of an interval velocity block which is compared with the acoustic impedance block to estimate a density block as QC. Non-realistic density values are edited, and the interval velocity block as well as the depth conversion model are updated. The proposed procedure has been applied to a 3D data set. The dataset comes from a High Resolution 3D seismic survey recorded in France at the boundary of the Meuse and Haute-Marne departments in the vicinity of the Andra Center (National radioactive waste management Agency). The 3D design is a cross spread. The active spread is composed of 12 receiver lines with 120 stations each. The source lines are perpendicular to the receiver lines. The receiver and source line spacings are respectively 80 m and 120 m. The receiver and source point spacings are 20 m. The source is a Vibroseis source generating a signal in the 14-140 Hz frequency bandwidth. The bin size is 10 × 10 m². The nominal fold is 60. A conventional seismic sequence was applied to the data set. It includes amplitude recovery, deconvolution and wave

  10. An artificial intelligence tool for complex age-depth models

    Science.gov (United States)

    Bradley, E.; Anderson, K. A.; de Vesine, L. R.; Lai, V.; Thomas, M.; Nelson, T. H.; Weiss, I.; White, J. W. C.

    2017-12-01

    CSciBox is an integrated software system for age modeling of paleoenvironmental records. It incorporates an array of data-processing and visualization facilities, ranging from 14C calibrations to sophisticated interpolation tools. Using CSciBox's GUI, a scientist can build custom analysis pipelines by composing these built-in components or adding new ones. Alternatively, she can employ CSciBox's automated reasoning engine, Hobbes, which uses AI techniques to perform an in-depth, autonomous exploration of the space of possible age-depth models and presents the results—both the models and the reasoning that was used in constructing and evaluating them—to the user for her inspection. Hobbes accomplishes this using a rulebase that captures the knowledge of expert geoscientists, which was collected over the course of more than 100 hours of interviews. It works by using these rules to generate arguments for and against different age-depth model choices for a given core. Given a marine-sediment record containing uncalibrated 14C dates, for instance, Hobbes tries CALIB-style calibrations using a choice of IntCal curves, with reservoir age correction values chosen from the 14CHRONO database using the lat/long information provided with the core, and finally composes the resulting age points into a full age model using different interpolation methods. It evaluates each model—e.g., looking for outliers or reversals—and uses that information to guide the next steps of its exploration, and presents the results to the user in human-readable form. The most powerful of CSciBox's built-in interpolation methods is BACON, a Bayesian sedimentation-rate algorithm—a powerful but complex tool that can be difficult to use. Hobbes adjusts BACON's many parameters autonomously to match the age model to the expectations of expert geoscientists, as captured in its rulebase. It then checks the model against the data and iteratively re-calculates until it is a good fit to the data.

  11. A boundary element model for diffraction of water waves on varying water depth

    Energy Technology Data Exchange (ETDEWEB)

    Poulin, Sanne

    1997-12-31

    In this thesis a boundary element model for calculating diffraction of water waves on varying water depth is presented. The varying water depth is approximated with a perturbed constant depth in the mild-slope wave equation. By doing this, the domain integral which is a result of the varying depth is no longer a function of the unknown wave potential but only a function of position and the constant depth wave potential. The number of unknowns in the resulting system of equations is thus reduced significantly. The integration procedures in the model are tested very thoroughly and it is found that a combination of analytical integration in the singular region and standard numerical integration outside works very well. The gradient of the wave potential is evaluated successfully using a hypersingular integral equation. Deviations from the analytical solution are only found on the boundary or very close to it, but these deviations have no significant influence on the accuracy of the solution. The domain integral is evaluated using the dual reciprocity method. The results are compared with a direct integration of the integral, and the accuracy is quite satisfactory. The problem with irregular frequencies is taken care of by the CBIEM (or CHIEF-method) together with a singular value decomposition technique. This method is simple to implement and works very well. The model is verified using Homma's island as a test case. The test cases are limited to shallow water since the analytical solution is only valid in this region. Several depth ratios are examined, and it is found that the accuracy of the model increases with increasing wave period and decreasing depth ratio. Short waves, e.g. wind generated waves, can allow depth ratios up to approximately 2 before the error exceeds 10%, while long waves can allow larger depth ratios. It is concluded that the perturbation idea is highly usable. A study of (partially) absorbing boundary conditions is also conducted. (EG)

  12. Spherical and cylindrical cavity expansion models based prediction of penetration depths of concrete targets.

    Directory of Open Access Journals (Sweden)

    Xiaochao Jin

    Full Text Available The cavity expansion theory is the most widely used approach to predict the depth of penetration of concrete targets. The main purpose of this work is to clarify the differences between the spherical and cylindrical cavity expansion models and their scope of application in predicting the penetration depths of concrete targets. The factors that influence the dynamic cavity expansion process of concrete materials were first examined. Based on numerical results, the relationship between expansion pressure and velocity was established. Then the parameters in Forrestal's formula were fitted to provide a convenient and effective prediction of the penetration depth. Results showed that both the spherical and cylindrical cavity expansion models can accurately predict the depth of penetration when the initial velocity is lower than 800 m/s. However, the prediction accuracy decreases with increasing initial velocity and projectile diameter. Based on our results, it can be concluded that when the initial velocity is higher than the critical velocity, the cylindrical cavity expansion model performs better than the spherical cavity expansion model in predicting the penetration depth, while when the initial velocity is lower than the critical velocity the conclusion is quite the contrary. This work provides a basic principle for selecting the spherical or cylindrical cavity expansion model to predict the penetration depth of concrete targets.

  13. Effect on tracer concentrations of ABL depth models in complex terrain

    Energy Technology Data Exchange (ETDEWEB)

    Galmarini, S.; Salin, P. [Joint Research Center Ispra (Italy); Anfossi, D.; Trini-Castelli, S. [CNR-ICGF, Turin (Italy); Schayes, G. [Univ. Louvain-la-Neuve, Louvain (Belgium)

    1997-10-01

    In the present preliminary study we use different ABL (atmospheric boundary layer) depth formulations to study atmospheric dispersion in complex-terrain conditions. The flow in an Alpine valley during the tracer experiment TRANSALP is simulated by means of a mesoscale model and a tracer dispersion is reproduced using a Lagrangian particle model. The ABL depth enters as a key parameter in the particle model turbulent-dispersion formulation. The preliminary results reveal that the ABL depth parameter can influence the dispersion process, but that in the case of dispersion in a valley daytime flow the results depend much more strongly on the model horizontal and vertical resolution. A relatively coarse horizontal resolution implies a considerable smoothing of the topography that largely affects the dispersion characteristics. The vertical resolution does not allow one to resolve with sufficient detail the rapid and large variations of the flow characteristics as the terrain features vary. Two of the methods used to determine the ABL depth depend strongly on the resolution. The method that instead depends only on surface parameters, such as heat flux and surface-based stability, gave satisfactory results for the dispersion process: quite consistent with the flow model results, less dependent on the numerics, and more physically sound. (LN)

  14. Value-added strategy models to provide quality services in senior health business.

    Science.gov (United States)

    Yang, Ya-Ting; Lin, Neng-Pai; Su, Shyi; Chen, Ya-Mei; Chang, Yao-Mao; Handa, Yujiro; Khan, Hafsah Arshed Ali; Elsa Hsu, Yi-Hsin

    2017-06-20

    The rapid population aging is now a global issue. The increase in the elderly population will impact the health care industry and health enterprises; various senior needs will promote the growth of the senior health industry. Most senior health studies are focused on the demand side and scarcely on supply. Our study selected quality enterprises focused on aging health and analyzed the different strategies they use to provide excellent quality services to seniors. We selected 33 quality senior health enterprises in Taiwan and investigated their quality-service strategies by face-to-face semi-structured in-depth interviews with the CEO and managers of each enterprise in 2013. A total of 33 senior health enterprises in Taiwan. Overall, 65 CEOs and managers of 33 enterprises were interviewed individually. None. Core values and vision, organization structure, quality services provided, strategies for quality services. This study's results indicated four types of value-added strategy models adopted by senior enterprises to offer quality services: (i) residential care and co-residence model, (ii) home care and living-in-place model, (iii) community e-business experience model and (iv) virtual and physical portable device model. The common part in these four strategy models is that the services provided are elderly centered. These models offer virtual and physical integration, and also offer total solutions for the elderly and their caregivers. Through investigation of successful strategy models for providing quality services to seniors, we identified opportunities to develop innovative service models and successful characteristics, and policy implications were summarized. The observations from this study will serve as a primary evidence base for enterprises developing their senior market and also for promoting the value co-creation possibility through dialogue between customers and those that deliver service. © The Author 2017. Published by Oxford

  15. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Full Text Available Snow exerts a significant regulating effect on the land hydrological cycle since it controls the intensity of heat and water exchange between the soil-vegetative cover and the atmosphere. Estimating a spring flood runoff or a rain flood on mountainous rivers requires understanding of the snow cover dynamics on a watershed. In our work, the problem of snow cover depth modeling is solved based on both available databases of hydro-meteorological observations and easily accessible scientific software that allows complete reproduction of the investigation results and further development of this theme by the scientific community. In this research we used the daily observational data on the snow cover and surface meteorological parameters, obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankyla (Finland), and Snoqualmie Pass (USA). Statistical modeling of the snow cover depth is based on a set of freely distributed, present-day machine learning models: decision trees, adaptive boosting, and gradient boosting. It is demonstrated that the use of a combination of modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow cover depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees; this model reproduces well both the periods of snow cover accumulation and melting. The purposeful character of the learning process for gradient-boosting models, their ensemble character, and the use of the combined redundancy of a test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating the snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.
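
    As a rough illustration of the setup the abstract describes (not the authors' code), the sketch below fits a gradient-boosting regressor over decision trees to daily station meteorology to predict snow depth. The file name and feature columns (air temperature, precipitation, day of year) are assumptions made for the example.

      # Hypothetical snow-depth model: gradient boosting over decision trees.
      import pandas as pd
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.metrics import mean_absolute_error
      from sklearn.model_selection import train_test_split

      df = pd.read_csv("station_daily.csv")            # placeholder daily station records
      X = df[["tair_mean", "precip", "day_of_year"]]   # assumed predictor columns
      y = df["snow_depth"]                             # observed snow depth (cm)

      # Keep the time ordering so the test period follows the training period
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, shuffle=False)

      model = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, max_depth=3)
      model.fit(X_tr, y_tr)
      print("MAE [cm]:", mean_absolute_error(y_te, model.predict(X_te)))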

  16. Effects of burn location and investigator on burn depth in a porcine model.

    Science.gov (United States)

    Singer, Adam J; Toussaint, Jimmy; Chung, Won Taek; Thode, Henry C; McClain, Steve; Raut, Vivek

    2016-02-01

    In order to be useful, animal models should be reproducible and consistent regardless of sampling bias, investigator creating the burn, and burn location. We determined the variability in burn depth based on biopsy location, burn location and investigator in a porcine model of partial-thickness burns. 24 partial-thickness burns (2.5 cm by 2.5 cm each) were created on the backs of 2 anesthetized pigs by 2 investigators (one experienced, one inexperienced) using a previously validated model. In one of the pigs, the necrotic epidermis covering each burn was removed. Five full-thickness 4 mm punch biopsies were obtained 1 h after injury from the four corners and center of the burns and stained with Hematoxylin and Eosin and Masson's trichrome for determination of burn depth by a board-certified dermatopathologist blinded to burn location and investigator. Comparisons of burn depth by biopsy location, burn location and investigator were performed with t-tests and ANOVA as appropriate. The mean (SD) depth of injury to blood vessels (the main determinant of burn progression) in debrided and non-debrided pigs pooled together was 1.8 (0.3) mm, which included 75% of the dermal depth. Non-debrided burns were 0.24 mm deeper than debrided burns; burn depth did not differ significantly among locations in debrided burns. Additionally, there were also no statistical differences in burn depths from midline to lateral in either of these burn types. Burn depth was similar for both investigators and among biopsy locations. Burn depth was greater for caudal locations in non-debrided burns and overall non-debrided burns were deeper than debrided burns. However, burn depth did not differ based on investigator, biopsy site, and medial-lateral location. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.

  17. A two-phase debris-flow model that includes coupled evolution of volume fractions, granular dilatancy, and pore-fluid pressure

    Science.gov (United States)

    George, David L.; Iverson, Richard M.

    2011-01-01

    Pore-fluid pressure plays a crucial role in debris flows because it counteracts normal stresses at grain contacts and thereby reduces intergranular friction. Pore-pressure feedback accompanying debris deformation is particularly important during the onset of debris-flow motion, when it can dramatically influence the balance of forces governing downslope acceleration. We consider further effects of this feedback by formulating a new, depth-averaged mathematical model that simulates coupled evolution of granular dilatancy, solid and fluid volume fractions, pore-fluid pressure, and flow depth and velocity during all stages of debris-flow motion. To illustrate implications of the model, we use a finite-volume method to compute one-dimensional motion of a debris flow descending a rigid, uniformly inclined slope, and we compare model predictions with data obtained in large-scale experiments at the USGS debris-flow flume. Predictions for the first 1 s of motion show that increasing pore pressures (due to debris contraction) cause liquefaction that enhances flow acceleration. As acceleration continues, however, debris dilation causes dissipation of pore pressures, and this dissipation helps stabilize debris-flow motion. Our numerical predictions of this process match experimental data reasonably well, but predictions might be improved by accounting for the effects of grain-size segregation.

  18. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...

  19. Generation of synthetic Kinect depth images based on empirical noise model

    DEFF Research Database (Denmark)

    Iversen, Thorbjørn Mosekjær; Kraft, Dirk

    2017-01-01

    The development, training and evaluation of computer vision algorithms rely on the availability of a large number of images. The acquisition of these images can be time-consuming if they are recorded using real sensors. An alternative is to rely on synthetic images which can be rapidly generated. This Letter describes a novel method for the simulation of Kinect v1 depth images. The method is based on an existing empirical noise model from the literature. The authors show that their relatively simple method is able to provide depth images which have a high similarity with real depth images.

  20. Quantifying spatial distribution of snow depth errors from LiDAR using Random Forests

    Science.gov (United States)

    Tinkham, W.; Smith, A. M.; Marshall, H.; Link, T. E.; Falkowski, M. J.; Winstral, A. H.

    2013-12-01

    There is increasing need to characterize the distribution of snow in complex terrain using remote sensing approaches, especially in isolated mountainous regions that are often water-limited, the principal source of terrestrial freshwater, and sensitive to climatic shifts and variations. We apply intensive topographic surveys, multi-temporal LiDAR, and Random Forest modeling to quantify snow volume and characterize associated errors across seven land cover types in a semi-arid mountainous catchment at 1 and 4 m spatial resolutions. The LiDAR-based estimates of both snow-off surface topology and snow depths were validated against ground-based measurements across the catchment. Comparison of LiDAR-derived snow depths to manual snow depth surveys revealed that LiDAR-based estimates were more accurate in areas of low-lying vegetation such as shrubs (RMSE = 0.14 m) as compared to areas consisting of tree cover (RMSE = 0.20-0.35 m). The highest errors were found along the edge of conifer forests (RMSE = 0.35 m); however, a second conifer transect outside the catchment had much lower errors (RMSE = 0.21 m). This difference is attributed to the wind exposure of the first site, which led to highly variable snow depths over short spatial distances. The Random Forest modeled errors deviated from the field-measured errors with an RMSE of 0.09-0.34 m across the different cover types. Results show that snow drifts, which are important for maintaining spring and summer stream flows and establishing and sustaining water-limited plant species, contained 30 ± 5-6% of the snow volume while only occupying 10% of the catchment area, similar to findings by prior physically-based modeling approaches. This study demonstrates the potential utility of combining multi-temporal LiDAR with Random Forest modeling to quantify the distribution of snow depth with a reasonable degree of accuracy. Future work could explore the utility of Terrestrial LiDAR Scanners to produce validation of snow-on surface
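
    A minimal sketch of the error-modeling step described above, under the assumption that field-validated cells with both LiDAR and probe depths are available in a table (column names are invented for the example): fit a Random Forest to the signed LiDAR depth error and report its out-of-bag skill.

      # Illustrative Random Forest model of LiDAR snow-depth error (not the study's code).
      import numpy as np
      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor

      cells = pd.read_csv("validation_cells.csv")      # placeholder: field-validated grid cells
      predictors = ["elevation", "slope", "aspect", "canopy_height", "cover_class"]
      # cover_class is assumed to be already encoded as an integer category
      error = cells["lidar_depth"] - cells["probe_depth"]   # signed depth error (m)

      rf = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0)
      rf.fit(cells[predictors], error)

      print("Out-of-bag R^2:", rf.oob_score_)
      print("OOB RMSE [m]:", np.sqrt(np.mean((rf.oob_prediction_ - error) ** 2)))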

  1. Study of Models for Estimating Surface Runoff Volume, Peak Discharge, and Soil Erosion with the Soil Conservation Service (SCS), Rational, and Modified Universal Soil Loss Equation (MUSLE) Models (Case Study in the Keduang Watershed, Wonogiri)

    Directory of Open Access Journals (Sweden)

    Ugro Hari Murtiono

    2008-12-01

    Full Text Available Hydrologic modelling has been developing and is useful for providing basic data for managing water resources. The aim of the research is to estimate runoff volume, maximum discharge, and soil erosion with the SCS, Rational, and MUSLE models in the Keduang Watershed. The data analysis and the workflow used to obtain the data are explained. The SCS model parameters used are: runoff, rainfall, and the difference between rainfall and runoff. The rainfall-runoff relationship is expressed by the runoff coefficient curve (Curve Number/CN). This coefficient is connected with the soil hydrologic group (antecedent moisture content/AMC), land use, and cultivation method. The Rational model parameters used are: runoff coefficient, soil type, slope, land cover, rainfall intensity, and watershed area. The MUSLE model parameters used are: rainfall erosivity (RM), soil erodibility (K), slope length (L), slope (S), land cover (C), and soil conservation practice (P). The results show that, applied to the Keduang Watershed, Wonogiri, the Soil Conservation Service model overestimates by about 29.54%, the Rational model by about 49.96%, and the MUSLE model by about 48.47%.
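
    For reference, the SCS Curve Number relation that underlies the first of these models can be written down in a few lines; this is the textbook form of the method, not code from the study, and the storm and CN values are made up for the example.

      # Standard SCS-CN direct runoff: Q = (P - 0.2*S)^2 / (P + 0.8*S), with S = 25400/CN - 254 (mm).
      def scs_runoff_mm(p_mm: float, cn: float) -> float:
          """Direct runoff depth (mm) for storm rainfall p_mm and curve number cn."""
          s = 25400.0 / cn - 254.0       # potential maximum retention S (mm)
          ia = 0.2 * s                   # initial abstraction
          if p_mm <= ia:
              return 0.0
          return (p_mm - ia) ** 2 / (p_mm - ia + s)

      # Example: an 80 mm storm on a watershed with CN = 75 yields roughly 27 mm of direct runoff
      print(round(scs_runoff_mm(80.0, 75.0), 1), "mm")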

  2. Depth Compensation Model for Gaze Estimation in Sport Analysis

    DEFF Research Database (Denmark)

    Batista Narcizo, Fabricio; Hansen, Dan Witzner

    2015-01-01

    is tested in a totally controlled environment with the aim of checking the influence of eye-tracker parameters and ocular biometric parameters on its behavior. We also present a gaze estimation method based on epipolar geometry for binocular eye tracking setups. The depth compensation model has shown very...

  3. Mathematical modeling of the heat transfer for determining the depth of thawing basin buildings with long service life

    Science.gov (United States)

    Sirditov, Ivan; Stepanov, Sergei

    2017-11-01

    In this paper, a numerical study of the problem of determining the thawing basin in permafrost soil beneath buildings with a long service life is carried out using two methods: the formulas of the set of rules 25.13330.2012, "Soil bases and foundations on permafrost soils", and a mathematical model.

  4. Comparison of actual tidal volume in neonatal lung model volume control ventilation using three ventilators.

    Science.gov (United States)

    Toyama, H; Endo, Y; Ejima, Y; Matsubara, M; Kurosawa, S

    2011-07-01

    In neonates, small changes in tidal volumes (V(T)) may lead to complications. Previous studies have shown a significant difference between ventilator-measured tidal volume and the tidal volume actually delivered (actual V(T)). We evaluated the accuracy of three different ventilators in delivering small V(T) during volume-controlled ventilation. We tested the Servo 300 (SV300), 840 (PB840), and Evita 4 Neoflow (EV4N) ventilators with lung models simulating normal and injured neonatal lung compliance. Gas volume delivered from the ventilator into the test circuit (V(TV)) and actual V(T) to the test lung were measured using Ventrak respiration monitors at a set V(T) of 30 ml. The gas volume increase of the breathing circuit was then calculated. Tidal volumes of the SV300 and PB840 in both lung models were similar to the set V(T), and the actual tidal volumes in the injured model (20.7 ml and 19.8 ml, respectively) were significantly less than those in the normal model (27.4 ml and 23.4 ml). The PB840 with circuit compliance compensation could not improve the actual V(T). V(TV) of the EV4N in the normal and injured models (37.8 ml and 46.6 ml) was markedly increased compared with the set V(T), and actual V(T) was similar to the set V(T) in the normal and injured models (30.2 ml and 31.9 ml, respectively). The EV4N, measuring V(T) close to the lung, could match the actual V(T) to almost the same value as the set V(T); however, the gas volume of the breathing circuit was increased. If an accurate value for the patient's actual V(T) is needed, this V(T) must be measured by a sensor located between the Y-piece and the tracheal tube.

  5. Monitoring massive fracture growth at 2-km depths using surface tiltmeter arrays

    Science.gov (United States)

    Wood, M.D.

    1979-01-01

    Tilt due to massive hydraulic fractures induced in sedimentary rocks at depths of up to 2.2 km has been recorded by surface tiltmeters. Injection of fluid volumes up to 4 × 10^5 liters and masses of propping agent up to 5 × 10^5 kg is designed to produce fractures approximately 1 km long, 50-100 m high and about 1 cm wide. The surface tilt data adequately fit a dislocation model of a tensional fault in a half-space. Theoretical and observational results indicate that maximum tilt occurs at a distance off the strike of the fracture equivalent to 0.4 of the depth to the fracture. Azimuth and extent of the fracture deduced from the geometry of the tilt field agree with other kinds of geophysical measurements. Detailed correlation of the tilt signatures with pumping parameters (pressure, rate, volume, mass) has provided details on asymmetry in geometry and growth rate. Whereas amplitude variations in tilt vary inversely with the square of the depth, changes in flow rate or pressure gradient can produce a cubic change in width. These studies offer a large-scale experimental approach to the study of problems involving fracturing, mass transport, and dilatancy processes. © 1979.

  6. Fitness-for-service and decisions for petroleum and chemical equipment. PVP-Volume 315

    International Nuclear Information System (INIS)

    Prager, M.; Becht, C. IV; Depadova, T.A.; Okazaki, M.; Onyewuenyi, O.A.; Smith, J.P.; Takezono, S.; Weingart, L.J.; Yagi, K.

    1995-01-01

    This volume is part of a series of publications intended to present the technical foundation for broadly accepted practices to establish the mechanical integrity of equipment in service. A focal point for this activity has been a Materials Properties Council program on fitness-for-service (FFS) reported in earlier PVP volumes. Work reported here covers the full range of equipment of interest to petroleum and chemical companies, from LNG to creep service, and provides a snapshot of current Codes, methods, concerns, and problems. It encompasses crack-like flaws and local thinning situations, welds, clad vessels, storage tanks, and pressure vessels. The work in progress is only a start, and the papers herein should be viewed as part of the process of validating the techniques used. While most of the applications are to petroleum refineries and natural gas processing plants, some papers deal with fossil-fuel power plants, nuclear power plants, synthetic fuels refineries, and materials for high-temperature applications. Papers have been processed separately for inclusion in the data base

  7. Lake and wetland ecosystem services measuring water storage and local climate regulation

    Science.gov (United States)

    Wong, Christina P.; Jiang, Bo; Bohn, Theodore J.; Lee, Kai N.; Lettenmaier, Dennis P.; Ma, Dongchun; Ouyang, Zhiyun

    2017-04-01

    Developing interdisciplinary methods to measure ecosystem services is a scientific priority; however, progress remains slow in part because we lack ecological production functions (EPFs) to quantitatively link ecohydrological processes to human benefits. In this study, we tested a new approach, combining a process-based model with regression models, to create EPFs to evaluate water storage and local climate regulation from a green infrastructure project on the Yongding River in Beijing, China. Seven artificial lakes and wetlands were established to improve local water storage and human comfort; evapotranspiration (ET) regulates both services. Managers want to minimize the trade-off between water losses and cooling to sustain water supplies while lowering the heat index (HI) to improve human comfort. We selected human benefit indicators using water storage targets and Beijing's HI, and the Variable Infiltration Capacity model to determine the change in ET from the new ecosystems. We created EPFs to quantify the ecosystem services as marginal values [Δfinal ecosystem service/Δecohydrological process]: (1) Δwater loss (lake evaporation/volume)/Δdepth and (2) Δsummer HI/ΔET. We estimate the new ecosystems increased local ET by 0.7 mm/d (20.3 W/m2) on the Yongding River. However, ET rates are causing water storage shortfalls while producing no improvements in human comfort. The shallow lakes/wetlands are vulnerable to drying when inflow rates fluctuate; low depths lead to higher evaporative losses, causing water storage shortfalls with minimal cooling effects. We recommend managers make the lakes deeper to increase water storage, and plant shade trees to improve human comfort in the parks.

  8. Yields and Nutritional Quality of Greenhouse Tomato in Response to Different Soil Aeration Volume at Two Depths of Subsurface Drip Irrigation

    Science.gov (United States)

    Li, Yuan; Niu, Wenquan; Dyck, Miles; Wang, Jingwei; Zou, Xiaoyang

    2016-01-01

    This study investigated the effects of 4 aeration levels (varied by injection of air to the soil through subsurface irrigation lines) at two subsurface irrigation line depths (15 and 40 cm) on plant growth, yield and nutritional quality of greenhouse tomato. In all experiments, fruit number, width and length, yield, vitamin C, lycopene and sugar/acid ratio of tomato markedly increased in response to the aeration treatments. Vitamin C, lycopene, and sugar/acid ratio increased by 41%, 2%, and 43%, respectively, in the 1.5 times standard aeration volume compared with the no-aeration treatment. An interaction between aeration level and depth of irrigation line was also observed with yield, fruit number, fruit length, vitamin C and sugar/acid ratio of greenhouse tomato increasing at each aeration level when irrigation lines were placed at 40 cm depth. However, when the irrigation lines were 15 cm deep, the trend of total fruit yields, fruit width, fruit length and sugar/acid ratio first increased and then decreased with increasing aeration level. Total soluble solids and titrable acid decreased with increasing aeration level both at 15 and 40 cm irrigation line placement. When all of the quality factors, yields and economic benefit are considered together, the combination of 40 cm line depth and “standard” aeration level was the optimum combination. PMID:27995970

  9. GPU-accelerated depth map generation for X-ray simulations of complex CAD geometries

    Science.gov (United States)

    Grandin, Robert J.; Young, Gavin; Holland, Stephen D.; Krishnamurthy, Adarsh

    2018-04-01

    Interactive x-ray simulations of complex computer-aided design (CAD) models can provide valuable insights for better interpretation of defect signatures such as porosity from x-ray CT images. Generating the depth map along a particular direction for the given CAD geometry is the most compute-intensive step in x-ray simulations. We have developed a GPU-accelerated method for real-time generation of depth maps of complex CAD geometries. We preprocess complex components designed using commercial CAD systems using a custom CAD module and convert them into a fine user-defined surface tessellation. Our CAD module can be used by different simulators as well as handle complex geometries, including those that arise from complex castings and composite structures. We then make use of a parallel algorithm that runs on a graphics processing unit (GPU) to convert the finely-tessellated CAD model to a voxelized representation. The voxelized representation can enable heterogeneous modeling of the volume enclosed by the CAD model by assigning heterogeneous material properties in specific regions. The depth maps are generated from this voxelized representation with the help of a GPU-accelerated ray-casting algorithm. The GPU-accelerated ray-casting method enables interactive (> 60 frames per second) generation of the depth maps of complex CAD geometries. This enables arbitrary rotation and slicing of the CAD model, leading to better interpretation of the x-ray images by the user. In addition, the depth maps can be used to aid directly in CT reconstruction algorithms.
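
    One simple reading of the per-ray depth-map step, sketched here on the CPU with NumPy (a GPU version would parallelize the same per-ray work): for rays cast along one axis of a boolean occupancy grid, accumulate the thickness of material each ray traverses. The grid and voxel size are made-up examples, not the paper's data.

      # Per-ray traversed material thickness through a voxelized part (illustrative only).
      import numpy as np

      def material_thickness_z(voxels: np.ndarray, voxel_size: float) -> np.ndarray:
          """voxels: (nx, ny, nz) boolean occupancy grid; returns an (nx, ny) thickness map."""
          return voxels.sum(axis=2) * voxel_size   # count filled voxels along each z-ray

      # Example: a 64^3 grid containing a solid slab 20 voxels thick
      grid = np.zeros((64, 64, 64), dtype=bool)
      grid[:, :, 20:40] = True
      print(material_thickness_z(grid, voxel_size=0.5)[0, 0])   # 10.0 if voxels are 0.5 units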

  10. Fog Density Estimation and Image Defogging Based on Surrogate Modeling for Optical Depth.

    Science.gov (United States)

    Jiang, Yutong; Sun, Changming; Zhao, Yu; Yang, Li

    2017-05-03

    In order to estimate fog density correctly and to remove fog from foggy images appropriately, a surrogate model for optical depth is presented in this paper. We comprehensively investigate various fog-relevant features and propose a novel feature based on the hue, saturation, and value color space which correlates well with the perception of fog density. We use a surrogate-based method to learn a refined polynomial regression model for optical depth with informative fog-relevant features such as dark-channel, saturation-value, and chroma, which are selected on the basis of sensitivity analysis. Based on the obtained accurate surrogate model for optical depth, an effective method for fog density estimation and image defogging is proposed. The effectiveness of our proposed method is verified quantitatively and qualitatively by the experimental results on both synthetic and real-world foggy images.
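
    A hedged sketch of what such a surrogate could look like in practice: a degree-2 polynomial regression from image-level fog-relevant features to an optical-depth label. The training file, feature names and the sample feature vector are assumptions for illustration, not the paper's data.

      # Polynomial-regression surrogate for optical depth (illustrative sketch).
      import pandas as pd
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import PolynomialFeatures

      data = pd.read_csv("foggy_patches.csv")                   # placeholder training set
      X = data[["dark_channel", "saturation_value", "chroma"]]  # assumed feature columns
      y = data["optical_depth"]

      surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
      surrogate.fit(X, y)

      # Estimate fog density (optical depth) from a new image's features
      print(surrogate.predict([[0.42, 0.31, 0.18]]))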

  11. Accuracy of spatio-temporal RARX model predictions of water table depths

    NARCIS (Netherlands)

    Knotters, M.; Bierkens, M.F.P.

    2002-01-01

    Time series of water table depths (Ht) are predicted in space using a regionalised autoregressive exogenous variable (RARX) model with precipitation surplus (Pt) as input variable. Because of their physical basis, RARX model parameters can be guessed from auxiliary information such as a digital

  12. Model for hydrogen isotope backscattering, trapping and depth profiles in C and a-Si

    International Nuclear Information System (INIS)

    Cohen, S.A.; McCracken, G.M.

    1979-03-01

    A model of low energy hydrogen trapping and backscattering in carbon and a-silicon is described. Depth profiles are calculated and numerical results presented for various incident angular and energy distributions. The calculations yield a relation between depth profiles and the incident ion energy distribution. The use of this model for tokamak plasma diagnosis is discussed

  13. The Everglades Depth Estimation Network (EDEN) surface-water model, version 2

    Science.gov (United States)

    Telis, Pamela A.; Xie, Zhixiao; Liu, Zhongwei; Li, Yingru; Conrads, Paul

    2015-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of water-level gages, interpolation models that generate daily water-level and water-depth data, and applications that compute derived hydrologic data across the freshwater part of the greater Everglades landscape. The U.S. Geological Survey Greater Everglades Priority Ecosystems Science provides support for EDEN in order for EDEN to provide quality-assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan.

  14. A global reference model of Curie-point depths based on EMAG2

    Science.gov (United States)

    Li, Chun-Feng; Lu, Yu; Wang, Jian

    2017-03-01

    In this paper, we use a robust inversion algorithm, which we have tested in many regional studies, to obtain the first global model of Curie-point depth (GCDM) from magnetic anomaly inversion based on fractal magnetization. Statistically, the oceanic Curie depth mean is smaller than the continental one, but continental Curie depths are almost bimodal, showing shallow Curie points in some old cratons. Oceanic Curie depths show modifications by hydrothermal circulations in young oceanic lithosphere and thermal perturbations in old oceanic lithosphere. Oceanic Curie depths also show strong dependence on the spreading rate along active spreading centers. Curie depths and heat flow are correlated, following optimal theoretical curves of average thermal conductivities K = ~2.0 W (m °C)^-1 for the ocean and K = ~2.5 W (m °C)^-1 for the continent. The calculated heat flow from Curie depths and large-interval gridding of measured heat flow all indicate that the global heat flow average is about 70.0 mW/m^2, leading to a global heat loss ranging from ~34.6 to 36.6 TW.
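
    As a back-of-envelope illustration of why Curie depth and heat flow are linked (a purely conductive, steady-state 1-D simplification, not the paper's inversion), the surface heat flow implied by a Curie depth follows q = K (T_Curie - T_surface) / z_Curie:

      # 1-D conductive approximation relating Curie depth to surface heat flow (illustrative).
      def heat_flow_mw_m2(curie_depth_km: float, k_w_mk: float = 2.5,
                          t_curie_c: float = 580.0, t_surface_c: float = 0.0) -> float:
          """Heat flow (mW/m^2) for a Curie depth (km) and thermal conductivity K (W/(m °C))."""
          gradient = (t_curie_c - t_surface_c) / (curie_depth_km * 1000.0)  # °C per metre
          return k_w_mk * gradient * 1000.0                                 # W/m^2 -> mW/m^2

      # Example: a 20 km continental Curie depth with K = 2.5 W/(m °C) gives roughly 72 mW/m^2
      print(round(heat_flow_mw_m2(20.0), 1))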

  15. Depth resolution and preferential sputtering in depth profiling of sharp interfaces

    International Nuclear Information System (INIS)

    Hofmann, S.; Han, Y.S.; Wang, J.Y.

    2017-01-01

    Highlights: • Interfacial depth resolution from MRI model depends on sputtering rate differences. • Depth resolution critically depends on the dominance of roughness or atomic mixing. • True (depth scale) and apparent (time scale) depth resolutions are different. • Average sputtering rate approximately yields true from apparent depth resolution. • Profiles by SIMS and XPS are different but similar to surface concentrations. - Abstract: The influence of preferential sputtering on depth resolution of sputter depth profiles is studied for different sputtering rates of the two components at an A/B interface. Surface concentration and intensity depth profiles on both the sputtering time scale (as measured) and the depth scale are obtained by calculations with an extended Mixing-Roughness-Information depth (MRI)-model. The results show a clear difference for the two extreme cases (a) preponderant roughness and (b) preponderant atomic mixing. In case (a), the interface width on the time scale (Δt(16–84%)) increases with preferential sputtering if the faster sputtering component is on top of the slower sputtering component, but the true resolution on the depth scale (Δz(16–84%)) stays constant. In case (b), the interface width on the time scale stays constant but the true resolution on the depth scale varies with preferential sputtering. For similar order of magnitude of the atomic mixing and the roughness parameters, a transition state between the two extremes is obtained. While the normalized intensity profile of SIMS represents that of the surface concentration, an additional broadening effect is encountered in XPS or AES by the influence of the mean electron escape depth which may even cause an additional matrix effect at the interface.

  16. Depth resolution and preferential sputtering in depth profiling of sharp interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Hofmann, S. [Max Planck Institute for Intelligent Systems (formerly MPI for Metals Research), Heisenbergstrasse 3, D-70569 Stuttgart (Germany); Han, Y.S. [Department of Physics, Shantou University, 243 Daxue Road, Shantou, 515063 Guangdong (China); Wang, J.Y., E-mail: wangjy@stu.edu.cn [Department of Physics, Shantou University, 243 Daxue Road, Shantou, 515063 Guangdong (China)

    2017-07-15

    Highlights: • Interfacial depth resolution from MRI model depends on sputtering rate differences. • Depth resolution critically depends on the dominance of roughness or atomic mixing. • True (depth scale) and apparent (time scale) depth resolutions are different. • Average sputtering rate approximately yields true from apparent depth resolution. • Profiles by SIMS and XPS are different but similar to surface concentrations. - Abstract: The influence of preferential sputtering on depth resolution of sputter depth profiles is studied for different sputtering rates of the two components at an A/B interface. Surface concentration and intensity depth profiles on both the sputtering time scale (as measured) and the depth scale are obtained by calculations with an extended Mixing-Roughness-Information depth (MRI)-model. The results show a clear difference for the two extreme cases (a) preponderant roughness and (b) preponderant atomic mixing. In case (a), the interface width on the time scale (Δt(16–84%)) increases with preferential sputtering if the faster sputtering component is on top of the slower sputtering component, but the true resolution on the depth scale (Δz(16–84%)) stays constant. In case (b), the interface width on the time scale stays constant but the true resolution on the depth scale varies with preferential sputtering. For similar order of magnitude of the atomic mixing and the roughness parameters, a transition state between the two extremes is obtained. While the normalized intensity profile of SIMS represents that of the surface concentration, an additional broadening effect is encountered in XPS or AES by the influence of the mean electron escape depth which may even cause an additional matrix effect at the interface.

  17. Long term warranty and after sales service concept, policies and cost models

    CERN Document Server

    Rahman, Anisur

    2015-01-01

    This volume presents concepts, policies and cost models for various long-term warranty and maintenance contracts. It offers several numerical examples for estimating costs to both the manufacturer and the consumer. Long-term warranties and maintenance contracts are becoming increasingly popular, as these types of after-sales services provide assurance to consumers that they can enjoy long, reliable service, and protect them from defects and the potentially high costs of repairs. Studying long-term warranty and service contracts is important to manufacturers and consumers alike, as offering long-term warranty and maintenance contracts produces additional costs for manufacturers / service providers over the product's service life. These costs must be factored into the price, or the manufacturer / dealer will incur losses instead of making a profit. On the other hand, the buyer / consumer needs to weigh the cost of maintaining the product over its service life and to decide whether or not these policies are worth purchasing....

  18. Soil depth modelling using terrain analysis and satellite imagery: the case study of Qeshlaq mountainous watershed (Kurdistan, Iran)

    Directory of Open Access Journals (Sweden)

    Salahudin Zahedi

    2017-09-01

    Full Text Available Soil depth is a major soil characteristic, which is commonly used in distributed hydrological modelling in order to represent watershed subsurface attributes. This study aims at developing a statistical model for predicting the spatial pattern of soil depth over a mountainous watershed from environmental variables derived from a digital elevation model (DEM) and remote sensing data. Among the explanatory variables used in the models, seven are derived from a 10 m resolution DEM, namely specific catchment area, wetness index, aspect, slope, plan curvature, elevation and sediment transport index. Three variables (landuse, NDVI and pca1) are derived from Landsat 8 imagery and are used for predicting soil depth by the models. Soil attributes, soil moisture, topographic curvature, and training samples for each landuse and major vegetation type are considered at 429 profiles within four subwatersheds. Random forests (RF), support vector machines (SVM) and artificial neural networks (ANN) are used to predict soil depth using the explanatory variables. The models are run using 336 data points in the calibration dataset with all 31 explanatory variables, and soil depth as the response of the models. Variable selection is performed using the mean decrease in permutation accuracy. Testing is done by comparing modelled soil depth values with observations at the testing locations (93 points) using different efficiency criteria. Prediction error is computed for both the calibration and testing datasets. Results show that the variables landuse, specific catchment area, slope, pca1, NDVI and aspect are the most important explanatory variables in predicting soil depth. The RF and SVM models are appropriate for mountainous watershed areas with limited soil depth, and the ANN model is more suitable for watersheds with agricultural fields and deep soils.
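
    The variable-selection step mentioned above (mean decrease in accuracy under permutation) can be sketched with scikit-learn's permutation importance; the file layout and column names below stand in for the study's terrain and imagery covariates.

      # Permutation importance of soil-depth predictors (illustrative, not the study's code).
      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.inspection import permutation_importance
      from sklearn.model_selection import train_test_split

      profiles = pd.read_csv("soil_profiles.csv")     # placeholder: sampled soil profiles
      covariates = ["landuse", "sca", "slope", "pca1", "ndvi", "aspect", "wetness"]
      X_tr, X_te, y_tr, y_te = train_test_split(
          profiles[covariates], profiles["soil_depth"], test_size=0.22, random_state=1)

      rf = RandomForestRegressor(n_estimators=500, random_state=1).fit(X_tr, y_tr)
      imp = permutation_importance(rf, X_te, y_te, n_repeats=30, random_state=1)

      # Rank covariates by their mean decrease in test-set accuracy when shuffled
      for name, score in sorted(zip(covariates, imp.importances_mean), key=lambda p: -p[1]):
          print(f"{name:8s} {score:.3f}")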

  19. FEE-SCHEDULE INCREASES IN CANADA: IMPLICATION FOR SERVICE VOLUMES AMONG FAMILY AND SPECIALIST PHYSICIANS.

    Science.gov (United States)

    Ariste, Ruolz

    2015-01-01

    Physician spending has substantially increased over the last few years in Canada, reaching $27.4 billion in 2010. Total clinical payment to physicians grew at an average annual rate of 7.6% from 2004 to 2010. The key policy question is whether or not this additional money has bought more physician services. So, the purpose of this study is to understand whether we are paying more for the same amount of medical services in Canada or getting more bang for our buck. At the same time, the paper attempts to find out whether or not there is a productivity difference between family physician services and surgical procedures. Using the Baumol theory and data from the National Physician Database for the period 2004-2010, the paper breaks down growth in physician remuneration into growth in unit cost and number of services, both from the physician and the payer perspectives. After removing general inflation and population growth from the 7.6% growth in total clinical payment, we found that real payment per service and volume of services per capita grew at average annual rates of 3.2% and 1.4%, respectively, suggesting that payment per service was the main cost driver of physician remuneration at the national level. Taking the payer perspective, it was found that, for the fee-for-service (FFS) scheme, volume of services per physician decreased at an average annual rate of -0.6%, which is a crude indicator that labour productivity of physicians on FFS has fallen during the period. However, the situation differs for the surgical procedures. Results also vary by province. Overall, our finding is consistent with the Baumol theory, which hypothesizes higher productivity growth in technology-driven sectors.

  20. Mapping potential carbon and timber losses from hurricanes using a decision tree and ecosystem services driver model.

    Science.gov (United States)

    Delphin, S; Escobedo, F J; Abd-Elrahman, A; Cropper, W

    2013-11-15

    Information on the effect of direct drivers such as hurricanes on ecosystem services is relevant to landowners and policy makers due to predicted effects from climate change. We identified forest damage risk zones due to hurricanes and estimated the potential loss of 2 key ecosystem services: aboveground carbon storage and timber volume. Using land cover, plot-level forest inventory data, the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) model, and a decision tree-based framework, we determined potential damage to subtropical forests from hurricanes in the Lower Suwannee River (LS) and Pensacola Bay (PB) watersheds in Florida, US. Results show that 31% and 0.5% of the total aboveground carbon storage in the LS and PB, respectively, was located in high forest damage risk (HR) zones. Overall, 15% and 0.7% of the total timber net volume in the LS and PB, respectively, was in HR zones. This model can also be used for identifying timber salvage areas, developing ecosystem service provision and management scenarios, and assessing the effect of other drivers on ecosystem services and goods. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Revenue Potential for Inpatient IR Consultation Services: A Financial Model.

    Science.gov (United States)

    Misono, Alexander S; Mueller, Peter R; Hirsch, Joshua A; Sheridan, Robert M; Siddiqi, Assad U; Liu, Raymond W

    2016-05-01

    Interventional radiology (IR) has historically failed to fully capture the value of evaluation and management services in the inpatient setting. Understanding financial benefits of a formally incorporated billing discipline may yield meaningful insights for interventional practices. A revenue modeling tool was created deploying standard financial modeling techniques, including sensitivity and scenario analyses. Sensitivity analysis calculates revenue fluctuation related to dynamic adjustment of discrete variables. In scenario analysis, possible future scenarios as well as revenue potential of different-size clinical practices are modeled. Assuming a hypothetical inpatient IR consultation service with a daily patient census of 35 patients and two new consults per day, the model estimates annual charges of $2.3 million and collected revenue of $390,000. Revenues are most sensitive to provider billing documentation rates and patient volume. A range of realistic scenarios-from cautious to optimistic-results in a range of annual charges of $1.8 million to $2.7 million and a collected revenue range of $241,000 to $601,000. Even a small practice with a daily patient census of 5 and 0.20 new consults per day may expect annual charges of $320,000 and collected revenue of $55,000. A financial revenue modeling tool is a powerful adjunct in understanding economics of an inpatient IR consultation service. Sensitivity and scenario analyses demonstrate a wide range of revenue potential and uncover levers for financial optimization. Copyright © 2016 SIR. Published by Elsevier Inc. All rights reserved.
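
    A toy version of the kind of revenue model the abstract describes is shown below; the encounter charge, documentation rate and collection rate are illustrative placeholders, not figures from the study.

      # Simple revenue model for an inpatient consult service with one-way sensitivity analysis.
      def annual_revenue(census: float, new_consults_per_day: float,
                         charge_per_encounter: float = 180.0,
                         documentation_rate: float = 0.85,
                         collection_rate: float = 0.17,
                         days_per_year: int = 365):
          """Return (annual charges, collected revenue) for the assumed service size."""
          encounters = (census + new_consults_per_day) * days_per_year
          charges = encounters * charge_per_encounter * documentation_rate
          return charges, charges * collection_rate

      # Sensitivity of collections to the billing documentation rate
      for doc_rate in (0.60, 0.75, 0.90):
          charges, collected = annual_revenue(35, 2, documentation_rate=doc_rate)
          print(f"doc rate {doc_rate:.0%}: charges ${charges:,.0f}, collected ${collected:,.0f}")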

  2. Estimating floodwater depths from flood inundation maps and topography

    Science.gov (United States)

    Cohen, Sagy; Brakenridge, G. Robert; Kettner, Albert; Bates, Bradford; Nelson, Jonathan M.; McDonald, Richard R.; Huang, Yu-Fen; Munasinghe, Dinuke; Zhang, Jiaqi

    2018-01-01

    Information on flood inundation extent is important for understanding societal exposure, water storage volumes, flood wave attenuation, future flood hazard, and other variables. A number of organizations now provide flood inundation maps based on satellite remote sensing. These data products can efficiently and accurately provide the areal extent of a flood event, but do not provide floodwater depth, an important attribute for first responders and damage assessment. Here we present a new methodology and a GIS-based tool, the Floodwater Depth Estimation Tool (FwDET), for estimating floodwater depth based solely on an inundation map and a digital elevation model (DEM). We compare the FwDET results against water depth maps derived from hydraulic simulation of two flood events, a large-scale event for which we use medium resolution input layer (10 m) and a small-scale event for which we use a high-resolution (LiDAR; 1 m) input. Further testing is performed for two inundation maps with a number of challenging features that include a narrow valley, a large reservoir, and an urban setting. The results show FwDET can accurately calculate floodwater depth for diverse flooding scenarios but also leads to considerable bias in locations where the inundation extent does not align well with the DEM. In these locations, manual adjustment or higher spatial resolution input is required.
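
    The core of the approach can be sketched in a few lines of NumPy/SciPy (a simplification of the published tool, with a tiny made-up grid): take the DEM elevation along the flood boundary as the local water-surface elevation, assign each wet cell the elevation of its nearest boundary cell, and subtract the DEM.

      # Simplified floodwater-depth estimate from an inundation mask and a DEM (illustrative).
      import numpy as np
      from scipy import ndimage

      def floodwater_depth(dem: np.ndarray, wet: np.ndarray) -> np.ndarray:
          """dem: elevations (m); wet: boolean inundation mask. Returns depth (m) per cell."""
          boundary = wet & ndimage.binary_dilation(~wet)        # wet cells touching dry cells
          _, (iy, ix) = ndimage.distance_transform_edt(~boundary, return_indices=True)
          water_surface = dem[iy, ix]                           # nearest boundary elevation
          depth = np.where(wet, water_surface - dem, 0.0)
          return np.clip(depth, 0.0, None)                      # clip DEM/extent mismatches

      dem = np.array([[5.0, 4.5, 4.0, 4.5],
                      [5.0, 4.0, 3.5, 4.5],
                      [5.0, 4.5, 4.0, 4.5]])
      wet = np.array([[0, 1, 1, 1],
                      [0, 1, 1, 1],
                      [0, 1, 1, 1]], dtype=bool)
      print(floodwater_depth(dem, wet))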

  3. Salient regions detection using convolutional neural networks and color volume

    Science.gov (United States)

    Liu, Guang-Hai; Hou, Yingkun

    2018-03-01

    Convolutional neural networks are an important technique in machine learning, pattern recognition and image processing. In order to reduce the computational burden and extend the classical LeNet-5 model to the field of saliency detection, we propose a simple and novel computing model based on the LeNet-5 network. In the proposed model, hue, saturation and intensity are utilized to extract depth cues, and then we integrate depth cues and color volume for saliency detection following the basic structure of the feature integration theory. Experimental results show that the proposed computing model outperforms some existing state-of-the-art methods on the MSRA1000 and ECSSD datasets.

  4. Organisational and extraorganisational determinants of volume of service delivery by English community pharmacies: a cross-sectional survey and secondary data analysis

    Science.gov (United States)

    Hann, Mark; Schafheutle, Ellen I; Bradley, Fay; Elvey, Rebecca; Wagner, Andrew; Halsall, Devina; Hassell, Karen

    2017-01-01

    Objectives This study aimed to identify the organisational and extraorganisational factors associated with existing variation in the volume of services delivered by community pharmacies. Design and setting Linear and ordered logistic regression of linked national data from secondary sources—community pharmacy activity, socioeconomic and health need datasets—and primary data from a questionnaire survey of community pharmacies in nine diverse geographical areas in England. Outcome measures Annual dispensing volume; annual volume of medicines use reviews (MURs). Results National dataset (n=10 454 pharmacies): greater dispensing volume was significantly associated with pharmacy ownership type (large chains>independents>supermarkets), greater deprivation, higher local prevalence of cardiovascular disease and depression, older people (aged >75 years) and infants (aged 0–4 years) but lower prevalence of mental health conditions. Greater volume of MURs was significantly associated with pharmacy ownership type (large chains/supermarkets>>independents), greater dispensing volume, and lower disease prevalence. Survey dataset (n=285 pharmacies; response=34.6%): greater dispensing volume was significantly associated with staffing, skill-mix, organisational culture, years open and greater deprivation. Greater MUR volume was significantly associated with pharmacy ownership type (large chains/supermarkets>>independents), greater dispensing volume, weekly opening hours and lower asthma prevalence. Conclusions Organisational and extraorganisational factors were found to impact differently on dispensing volume and MUR activity, the latter being driven more by corporate ownership than population need. While levels of staffing and skill-mix were associated with dispensing volume, they did not influence MUR activity. Despite recent changes to the contractual framework, the existing fee-for-service reimbursement may therefore not be the most appropriate for the delivery of

  5. Service systems concepts, modeling, and programming

    CERN Document Server

    Cardoso, Jorge; Poels, Geert

    2014-01-01

    This SpringerBrief explores the internal workings of service systems. The authors propose a lightweight semantic model for an effective representation to capture the essence of service systems. Key topics include modeling frameworks, service descriptions and linked data, creating service instances, tool support, and applications in enterprises. Previous books on service system modeling and various streams of scientific developments used an external perspective to describe how systems can be integrated. This brief introduces the concept of white-box service system modeling as an approach to mo

  6. Sensory memory of illusory depth in structure-from-motion.

    Science.gov (United States)

    Pastukhov, Alexander; Lissner, Anna; Füllekrug, Jana; Braun, Jochen

    2014-01-01

    When multistable displays (stimuli consistent with two or more equally plausible perceptual interpretations) are presented intermittently, their perceptions are stabilized by sensory memory. Independent memory traces are generated not only for different types of multistable displays (Maier, Wilke, Logothetis, & Leopold, Current Biology 13:1076-1085, 2003), but also for different ambiguous features of binocular rivalry (Pearson & Clifford, Journal of Vision 4:196-202, 2004). In the present study, we examined whether a similar independence of sensory memories is observed in structure-from-motion (SFM), a multistable display with two ambiguous properties. In SFM, a 2-D planar motion creates a vivid impression of a rotating 3-D volume. Both the illusory rotation and illusory depth (i.e., how close parts of an object appear to the observer) of an SFM object are ambiguous. We dissociated the sensory memories of these two ambiguous properties by using an intermittent presentation in combination with a forced-ambiguous-switch paradigm (Pastukhov, Vonau, & Braun, PLoS ONE 7:e37734, 2012). We demonstrated that the illusory depth of SFM generates a sensory memory trace that is independent of that of illusory rotation. Despite this independence, the specificity of the sensory memories was identical for illusory depth and illusory rotation. The history effect was weakened by a change in the volumetric property of a shape (whether it was a hollow band or a filled drum volume), but not by changes in color or size. We discuss how these new results constrain models of sensory memory and SFM processing.

  7. Maximum Neutral Buoyancy Depth of Juvenile Chinook Salmon: Implications for Survival during Hydroturbine Passage

    Energy Technology Data Exchange (ETDEWEB)

    Pflugrath, Brett D.; Brown, Richard S.; Carlson, Thomas J.

    2012-03-01

    This study investigated the maximum depth at which juvenile Chinook salmon Oncorhynchus tshawytscha can acclimate by attaining neutral buoyancy. Depth of neutral buoyancy is dependent upon the volume of gas within the swim bladder, which greatly influences the occurrence of injuries to fish passing through hydroturbines. We used two methods to obtain maximum swim bladder volumes that were transformed into depth estimations - the increased excess mass test (IEMT) and the swim bladder rupture test (SBRT). In the IEMT, weights were surgically added to the fish's exterior, requiring the fish to increase swim bladder volume in order to remain neutrally buoyant. SBRT entailed removing and artificially increasing swim bladder volume through decompression. From these tests, we estimate the maximum acclimation depth for juvenile Chinook salmon to be a median of 6.7 m (range = 4.6-11.6 m). These findings have important implications for survival estimates, studies using tags, hydropower operations, and survival of juvenile salmon that pass through large Kaplan turbines typical of those found within the Columbia and Snake River hydropower system.
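
    The volume-to-depth transformation mentioned above can be illustrated with a Boyle's-law back-of-envelope calculation (a simplification, not the study's exact procedure): the deepest acclimation depth is where hydrostatic pressure compresses the largest gas volume the bladder can hold down to the volume needed for neutral buoyancy.

      # Boyle's-law depth estimate from a maximum swim-bladder volume (illustrative only).
      def max_acclimation_depth_m(v_max_ml: float, v_neutral_ml: float) -> float:
          """Depth (m) at which a gas volume v_max (referenced to 1 atm) is compressed to v_neutral."""
          pressure_ratio = v_max_ml / v_neutral_ml   # Boyle's law: P1 * V1 = P2 * V2
          return (pressure_ratio - 1.0) * 10.0       # ~1 extra atmosphere per 10 m of water

      # Example: a bladder able to hold 1.7x the neutral-buoyancy volume -> about 7 m
      print(round(max_acclimation_depth_m(1.7, 1.0), 1))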

  8. A staged service innovation model

    NARCIS (Netherlands)

    Song, L.Z.; Song, Michael; Benedetto, Di A.C.

    2009-01-01

    Drawing from the new product development (NPD) literature, service quality literature (SERVQUAL), and empirically grounded research with 53 service innovation decision makers, we develop a staged service innovation model (SIM) for decision makers. We tested the model using empirical data from 329

  9. Developing a change model for peer worker interventions in mental health services: a qualitative research study.

    Science.gov (United States)

    Gillard, S; Gibson, S L; Holley, J; Lucock, M

    2015-10-01

    A range of peer worker roles are being introduced into mental health services internationally. There is some evidence that attests to the benefits of peer workers for the people they support, but formal trial evidence is inconclusive, in part because the change model underpinning peer support-based interventions is underdeveloped. Complex intervention evaluation guidance suggests that understandings of how an intervention is associated with change in outcomes should be modelled, theoretically and empirically, before the intervention can be robustly evaluated. This paper aims to model the change mechanisms underlying peer worker interventions. In a qualitative, comparative case study of ten peer worker initiatives in statutory and voluntary sector mental health services in England, in-depth interviews were carried out with 71 peer workers, service users, staff and managers, exploring their experiences of peer working. Using a Grounded Theory approach we identified core processes within the peer worker role that were productive of change for service users supported by peer workers. Key change mechanisms were: (i) building trusting relationships based on shared lived experience; (ii) role-modelling individual recovery and living well with mental health problems; (iii) engaging service users with mental health services and the community. Mechanisms could be further explained by theoretical literature on role-modelling and relationships in mental health services. We were able to model process and downstream outcomes potentially associated with peer worker interventions. An empirically and theoretically grounded change model can be articulated that usefully informs the development, evaluation and planning of peer worker interventions.

  10. Models of Public Service Provision

    DEFF Research Database (Denmark)

    Andersen, Lotte Bøgh; Kristensen, Nicolai; Pedersen, Lene Holm

    2013-01-01

    This article extends the framework of Le Grand (2003, 2010) to encompass responsiveness, and the main argument is that the combination of employee motivation, user capacity, and models of public service provision potentially has serious implications for responsiveness across service areas. Although research on employee motivation thrives, especially in the public service motivation (PSM) literature, few studies have investigated user capacity empirically, and we know little about the combination of PSM, user capacity and models of service provision. Analyzing four central service areas (day care, schools, hospitals, and universities), we find variations in both user capacity and PSM. Taking this variation as a point of departure we discuss what implications different combinations of employee motivation, user capacity, and models of public service provision may have for responsiveness.

  11. Tsunami waves generated by submarine landslides of variable volume: analytical solutions for a basin of variable depth

    Directory of Open Access Journals (Sweden)

    I. Didenkulova

    2010-11-01

    Full Text Available Tsunami wave generation by submarine landslides of a variable volume in a basin of variable depth is studied within the shallow-water theory. The problem of landslide-induced tsunami wave generation and propagation is studied analytically for two specific convex bottom profiles (h ~ x^(4/3) and h ~ x^4). In these cases the basic equations can be reduced to the constant-coefficient wave equation with the forcing determined by the landslide motion. For certain conditions on the landslide characteristics (speed and volume per unit cross-section) the wave field can be described explicitly. It is represented by one forced wave propagating with the speed of the landslide and following its offshore direction, and two free waves propagating in opposite directions with the wave celerity. For the case of a near-resonant motion of the landslide along the power bottom profile h ~ x^γ the dynamics of the waves propagating offshore is studied using the asymptotic approach. If the landslide is moving in the fully resonant regime, the explicit formula for the amplitude of the wave can be derived. It is demonstrated that generally the tsunami wave amplitude varies non-monotonically with distance.

  12. Identification of hydrological model parameters for flood forecasting using data depth measures

    Science.gov (United States)

    Krauße, T.; Cullmann, J.

    2011-03-01

    The development of methods for estimating the parameters of hydrological models considering uncertainties has been of high interest in hydrological research over the last years. Besides the very popular Markov Chain Monte Carlo (MCMC) methods, which estimate the uncertainty of model parameters in the settings of a Bayesian framework, the development of depth-based sampling methods, also entitled robust parameter estimation (ROPE), has attracted an increasing research interest. These methods understand the estimation of model parameters as a geometric search for a set of robustly performing parameter vectors by application of the concept of data depth. Recent studies showed that the parameter vectors estimated by depth-based sampling perform more robustly in validation. One major advantage of this kind of approach over the MCMC methods is that the formulation of a likelihood function within a Bayesian uncertainty framework becomes obsolete, and arbitrary purpose-oriented performance criteria defined by the user can be integrated without any further complications. In this paper we present an advanced ROPE method entitled the Advanced Robust Parameter Estimation by Monte Carlo algorithm (AROPEMC). The AROPEMC algorithm is a modified version of the original robust parameter estimation algorithm ROPEMC developed by Bárdossy and Singh (2008). AROPEMC performs by merging iterative Monte Carlo simulations, identifying well-performing parameter vectors, sampling robust parameter vectors according to the principle of data depth, and applying a well-founded stopping criterion used in supervised machine learning. The principles of the algorithm are illustrated by means of Rosenbrock's and Rastrigin's functions, two well-known performance benchmarks for optimisation algorithms. Two case studies demonstrate the advantage of AROPEMC compared to state-of-the-art global optimisation algorithms. A distributed process-oriented hydrological model is calibrated and
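
    An illustrative sketch of the ROPE-style idea (Monte Carlo sampling, retention of well-performing parameter vectors, preference for geometrically deep points) is given below; Rosenbrock's function stands in for the hydrological model's performance criterion, and Mahalanobis distance is used as a crude stand-in for a true data-depth measure.

      # Monte Carlo search with a depth-like ranking of good parameter vectors (toy example).
      import numpy as np

      rng = np.random.default_rng(42)

      def rosenbrock(p: np.ndarray) -> np.ndarray:
          x, y = p[:, 0], p[:, 1]
          return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

      candidates = rng.uniform(-2, 2, size=(5000, 2))          # Monte Carlo sample
      scores = rosenbrock(candidates)
      good = candidates[scores <= np.quantile(scores, 0.10)]   # keep best-performing 10%

      # Rank retained vectors by centrality (inverse Mahalanobis distance) as a depth proxy
      mean, cov = good.mean(axis=0), np.cov(good.T)
      diff = good - mean
      d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
      deepest = good[np.argsort(d2)[:50]]                      # the 50 most central good vectors
      print("centre of robust set:", deepest.mean(axis=0))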

  13. IRI related data and model services at NSSDC

    Science.gov (United States)

    Bilitza, D.; Papitashvili, N.; King, J.

    NASA's National Space Science Data Center (NSSDC) provides internet access to a large number of space physics data sets and models. We will review and explain the different products and services that might be of interest to the IRI community. Data can be obtained directly through anonymous ftp or through the SPyCAT WWW interface to a large volume of space physics data on juke-box type mass storage devices. A newly developed WWW system, the ATMOWeb, provides browse and sub-setting capabilities for selected atmospheric and thermospheric data. NSSDC maintains an archive of space physics models that includes a subset of ionospheric models. The model software can be retrieved via anonymous ftp. A selection of the most frequently requested models can be run on-line through special WWW interfaces. Currently supported models include the International Reference Ionosphere (IRI), the Mass Spectrometer and Incoherent Scatter (MSIS) atmospheric model, the International Geomagnetic Reference Field (IGRF) and the AE-8/AP-8 radiation belt models. In this article special emphasis will be given to the IRI interface and its various input/output options. Several new options and a Java-based plotting capability were recently added to the Web interface.

  14. Nuclear Test Depth Determination with Synthetic Modelling: Global Analysis from PNEs to DPRK-2016

    Science.gov (United States)

    Rozhkov, Mikhail; Stachnik, Joshua; Baker, Ben; Epiphansky, Alexey; Bobrov, Dmitry

    2016-04-01

    Seismic event depth determination is critical for the event screening process at the International Data Center (IDC), CTBTO. A thorough determination of the event depth can mostly be conducted only through additional special analysis, because the IDC's Event Definition Criteria are based, in particular, on depth estimation uncertainties. This causes a large number of events in the Reviewed Event Bulletin to have their depth constrained to the surface, making the depth screening criterion inapplicable. Further, it may result in a heavier workload to manually distinguish between subsurface and deeper crustal events. Since the shape of the first few seconds of the signal of very shallow events is very sensitive to the depth phases, cross-correlation between observed and synthetic seismograms can provide a basis for event depth estimation, and thus an extension of the screening process. We applied this approach mostly to events at teleseismic and partially regional distances. The approach was found efficient for the seismic event screening process, with certain caveats related mostly to poorly defined source and receiver crustal models, which can shift the depth estimate. An adjustable teleseismic attenuation model (t*) was used for the synthetics, since this characteristic is not known for most of the rays we studied. We studied a wide set of historical records of nuclear explosions, including so-called Peaceful Nuclear Explosions (PNEs) with presumably known depths, and recent DPRK nuclear tests. The teleseismic synthetic approach is based on the stationary phase approximation with the hudson96 program, and the regional modelling was done with the generalized ray technique of Vlastislav Cerveny, modified to account for complex source topography. The software prototype is designed to be used for Expert Technical Analysis at the IDC. The design effectively reuses the NDC-in-a-Box code and can be comfortably utilized by NDC users. The package uses Geotool as a front-end for data
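
    A minimal sketch of the underlying principle (not the IDC software) follows: a synthetic trace built from a direct P pulse plus a delayed, reversed-polarity pP depth phase is correlated against an "observed" trace for a range of trial depths, and the depth with the highest correlation is selected. The velocity, wavelet, and vertical-ray delay approximation are placeholder assumptions.

```python
import numpy as np

dt = 0.05                      # sample interval, s
t = np.arange(0, 20, dt)
vp = 6.0                       # assumed near-source P velocity, km/s

def wavelet(tt, t0, f=1.0):
    """Simple Ricker wavelet centred at t0."""
    a = (np.pi * f * (tt - t0)) ** 2
    return (1 - 2 * a) * np.exp(-a)

def synthetic(depth, t0=5.0):
    """Direct P plus a delayed, reversed-polarity pP depth phase.
    The pP-P delay ~ 2*depth/vp is a crude vertical-ray approximation."""
    delay = 2.0 * depth / vp
    return wavelet(t, t0) - 0.7 * wavelet(t, t0 + delay)

# "Observed" trace: an event at 1.2 km depth plus noise
rng = np.random.default_rng(1)
obs = synthetic(1.2) + 0.05 * rng.normal(size=t.size)

def norm_corr(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Scan trial depths and keep the one whose synthetic correlates best
depths = np.arange(0.2, 5.0, 0.1)
cc = np.array([norm_corr(obs, synthetic(d)) for d in depths])
print(f"best-fitting depth: {depths[np.argmax(cc)]:.1f} km "
      f"(correlation {cc.max():.3f})")
```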

  15. Radon depth migration

    International Nuclear Information System (INIS)

    Hildebrand, S.T.; Carroll, R.J.

    1993-01-01

    A depth migration method is presented that uses Radon-transformed common-source seismograms as input. It is shown that the Radon depth migration method can be extended to spatially varying velocity depth models by using asymptotic ray theory (ART) to construct wavefield continuation operators. These operators downward continue an incident receiver-array plane wave and an assumed point-source wavefield into the subsurface. The migration velocity model is constrained to have longer characteristic wavelengths than the dominant source wavelength such that the ART approximations for the continuation operators are valid. This method is used successfully to migrate two synthetic data examples: (1) a point diffractor, and (2) a dipping layer and syncline interface model. It is shown that the Radon migration method has a computational advantage over the standard Kirchhoff migration method in that fewer rays are computed in a main memory implementation.

  16. Usage of Geoprocessing Services in Precision Forestry for Wood Volume Calculation and Wind Risk Assessment

    Directory of Open Access Journals (Sweden)

    Tomáš Mikita

    2015-01-01

    Full Text Available This paper outlines the idea of a precision forestry tool for optimizing clearcut size and shape within the process of forest recovery, and its publication in the form of a web processing service for forest owners on the Internet. The designed tool, titled COWRAS (Clearcut Optimization and Wind Risk Assessment), is developed for the optimization of clearcuts (their location, shape, size, and orientation) with subsequent wind risk assessment. The tool primarily works with airborne LiDAR data previously processed into a digital surface model (DSM) and a digital elevation model (DEM). In the first step, the growing stock on the planned clearcut, determined by its location and area in a feature class, is calculated by the method of individual tree detection. Subsequently, tree heights are extracted from the canopy height model (CHM), and diameters at breast height (DBH) and wood volume are then calculated using regressions. Information about the wood volume of each tree in the clearcut is exported and summarized in a table. In the next step, all trees in the clearcut are removed and a new DSM without trees in the clearcut is generated. This canopy model subsequently serves as an input for the evaluation of wind risk damage by the MAXTOPEX tool (Mikita et al., 2012). In the final raster, the predisposition of uncovered forest stand edges (around the clearcut) to wind risk is calculated based on this analysis. The entire tool works in the background of ArcGIS Server as a spatial decision support system for foresters.
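
    The per-tree chain of calculations described above (tree height from the CHM, DBH from a height regression, then stem volume) can be sketched as follows; the regression coefficients and form factor are placeholders for illustration, not the values used in COWRAS.

```python
import numpy as np

# Illustrative per-tree wood volume estimation from CHM-derived heights.
# Coefficients below are placeholders; real applications fit them to
# local inventory data for the species and site in question.

def dbh_from_height(h_m, a=1.3, b=1.1):
    """Assumed height-DBH regression: DBH [cm] = a * h^b."""
    return a * np.asarray(h_m) ** b

def stem_volume(dbh_cm, h_m, form_factor=0.45):
    """Stem volume [m^3] from DBH and height using a simple form-factor
    model: V = f * (pi/4) * d^2 * h."""
    d_m = np.asarray(dbh_cm) / 100.0
    return form_factor * np.pi / 4.0 * d_m ** 2 * np.asarray(h_m)

# Tree tops detected on the clearcut (heights in metres from the CHM)
heights = np.array([18.4, 22.1, 25.7, 19.9, 28.3])
dbh = dbh_from_height(heights)
volumes = stem_volume(dbh, heights)

print("per-tree volume [m^3]:", np.round(volumes, 3))
print("growing stock on clearcut [m^3]:", round(volumes.sum(), 2))
```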

  17. Definition of technology development missions for early Space Station satellite servicing. Volume 1: Executive summary

    Science.gov (United States)

    1984-01-01

    The Executive Summary, Volume 1, includes an overview of both phases of the Definition of Technology Development Missions for Early Space Station Satellite Servicing. The primary purpose of Phase 1 of the Marshall Space Flight Center (MSFC) Satellite Servicing study was to establish requirements for demonstrating the capability of performing satellite servicing activities on a permanently manned Space Station in the early 1990s. The scope of Phase 1 included TDM definition, outlining of servicing objectives, derivation of initial Space Station servicing support requirements, and generation of the associated programmatic schedules and cost. The purpose of Phase 2 of the satellite servicing study was to expand and refine the overall understanding of how best to use the manned Space Station as a test bed for demonstration of satellite servicing capabilities.

  18. Effects of topographic data quality on estimates of shallow slope stability using different regolith depth models

    Science.gov (United States)

    Baum, Rex L.

    2017-01-01

    Thickness of colluvium or regolith overlying bedrock or other consolidated materials is a major factor in determining stability of unconsolidated earth materials on steep slopes. Many efforts to model spatially distributed slope stability, for example to assess susceptibility to shallow landslides, have relied on estimates of constant thickness, constant depth, or simple models of thickness (or depth) based on slope and other topographic variables. Assumptions of constant depth or thickness rarely give satisfactory results. Geomorphologists have devised a number of different models to represent the spatial variability of regolith depth and applied them to various settings. I have applied some of these models that can be implemented numerically to different study areas with different types of terrain and tested the results against available depth measurements and landslide inventories. The areas include crystalline rocks of the Colorado Front Range, and gently dipping sedimentary rocks of the Oregon Coast Range. Model performance varies with model, terrain type, and with quality of the input topographic data. Steps in contour-derived 10-m digital elevation models (DEMs) introduce significant errors into the predicted distribution of regolith and landslides. Scan lines, facets, and other artifacts further degrade DEMs and model predictions. Resampling to a lower grid-cell resolution can mitigate effects of facets in lidar DEMs of areas where dense forest severely limits ground returns. Due to its higher accuracy and ability to penetrate vegetation, lidar-derived topography produces more realistic distributions of cover and potential landslides than conventional photogrammetrically derived topographic data.
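
    For orientation, one simple member of the family of regolith-depth models mentioned above expresses depth as a decreasing function of local slope. The sketch below evaluates such a model on a DEM-derived slope grid; the exponential form, its coefficients, and the synthetic DEM are illustrative assumptions, not the calibrated models tested in the study.

```python
import numpy as np

def slope_from_dem(z, cell):
    """Slope (radians) from a DEM using central differences."""
    dzdy, dzdx = np.gradient(z, cell)
    return np.arctan(np.hypot(dzdx, dzdy))

def regolith_depth(slope_rad, h_max=2.5, k=4.0):
    """Illustrative slope-dependent depth model:
    depth = h_max * exp(-k * tan(slope)), thick on flats, thin on steep
    slopes. Coefficients are placeholders to be calibrated to depth data."""
    return h_max * np.exp(-k * np.tan(slope_rad))

# Tiny synthetic DEM (10 m cells) just to exercise the functions
x, y = np.meshgrid(np.arange(0, 500, 10), np.arange(0, 500, 10))
dem = 100 + 0.2 * x + 15 * np.sin(y / 80.0)

slope = slope_from_dem(dem, cell=10.0)
depth = regolith_depth(slope)
print(f"predicted depth range: {depth.min():.2f}-{depth.max():.2f} m")
```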

  19. Food waste volume and origin: Case studies in the Finnish food service sector.

    Science.gov (United States)

    Silvennoinen, Kirsi; Heikkilä, Lotta; Katajajuuri, Juha-Matti; Reinikainen, Anu

    2015-12-01

    We carried out a project to map the volume and composition of food waste in the Finnish food service sector. The amount, type and origin of avoidable food waste were investigated in 51 food service outlets, including schools, day-care centres, workplace canteens, petrol stations, restaurants and diners. Food service outlet personnel kept diaries and weighed the food produced and wasted during a one-week or one-day period. For weighing and sorting, the food waste was divided into two categories: originally edible (OE) food waste was separated from originally inedible (OIE) waste, such as vegetable peelings, bones and coffee grounds. In addition, food waste (OE) was divided into three categories in accordance with its origins: kitchen waste, service waste and customer leftovers. According to the results, about 20% of all food handled and prepared in the sector was wasted. The findings also suggest that the main drivers of wasted food are buffet services and overproduction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Service models and realization of differentiated services networks

    Science.gov (United States)

    Elizondo, Antonio J.; Garcia Osma, Maria L.; Einsiedler, Hans J.; Roth, Rudolf; Smirnov, Michael I.; Bartoli, Maurizio; Castelli, Paolo; Varga, Balazs; Krampell, Magnus

    2001-07-01

    Internet Service Providers need to offer Quality of Service (QoS) to fulfil the requirements of their customers' applications. Moreover, in a competitive market environment, costs must be low. The selected service model must be effective and low in complexity, but it should still provide the high quality and service differentiation that the current Internet is not yet capable of supporting. The Differentiated Services (DiffServ) Architecture has been proposed for enabling a range of different Classes of Service (CoS). In the EURESCOM project P1006 several European service providers co-operated to examine various aspects involved in the introduction of service differentiation using the DiffServ approach. The project explored a set of service models for Expedited Forwarding (EF) and Assured Forwarding (AF) and identified requirements for network nodes. We also addressed measurement, charging and accounting issues. Special attention has been devoted to the requirements of elastic traffic, which adapts its sending rate to the congestion state and the available bandwidth. QoS mechanisms must prove Transmission Control Protocol (TCP) friendly. TCP performance degrades under multiple losses. Since RED-based queue management may still cause multiple discards, a modified marking scheme called Capped Leaky Bucket is proposed to improve the performance of elastic applications.
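
    The Capped Leaky Bucket marker is described as a modification of a standard leaky-bucket policer; since its capping details are not given in the abstract, the sketch below implements only a plain single-rate leaky-bucket marker that tags packets as in-profile or out-of-profile, to illustrate the class of mechanism being modified. Rates and sizes are arbitrary.

```python
from dataclasses import dataclass

@dataclass
class LeakyBucketMarker:
    """Plain single-rate leaky-bucket marker (illustrative only; the
    Capped Leaky Bucket of the project adds further logic on top)."""
    rate: float            # committed rate, bytes per second
    bucket_size: float     # bucket depth, bytes
    level: float = 0.0     # current bucket occupancy, bytes
    last_t: float = 0.0    # time of last packet, seconds

    def mark(self, t: float, size: int) -> str:
        # Drain the bucket for the time elapsed since the last packet.
        self.level = max(0.0, self.level - (t - self.last_t) * self.rate)
        self.last_t = t
        if self.level + size <= self.bucket_size:
            self.level += size
            return "in-profile"      # e.g. mark with low drop precedence
        return "out-of-profile"      # e.g. mark with high drop precedence

# Example: 1500-byte packets arriving faster than the committed rate
marker = LeakyBucketMarker(rate=125_000, bucket_size=6_000)
for i in range(8):
    t = i * 0.01                     # packets every 10 ms (~1.2 Mbit/s)
    print(f"t={t:.2f}s -> {marker.mark(t, 1500)}")
```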

  1. Organisational and extraorganisational determinants of volume of service delivery by English community pharmacies: a cross-sectional survey and secondary data analysis.

    Science.gov (United States)

    Hann, Mark; Schafheutle, Ellen I; Bradley, Fay; Elvey, Rebecca; Wagner, Andrew; Halsall, Devina; Hassell, Karen; Jacobs, Sally

    2017-10-10

    This study aimed to identify the organisational and extraorganisational factors associated with existing variation in the volume of services delivered by community pharmacies. Linear and ordered logistic regression of linked national data from secondary sources – community pharmacy activity, socioeconomic and health need datasets – and primary data from a questionnaire survey of community pharmacies in nine diverse geographical areas in England. Annual dispensing volume; annual volume of medicines use reviews (MURs). National dataset (n=10 454 pharmacies): greater dispensing volume was significantly associated with pharmacy ownership type (large chains>independents>supermarkets), greater deprivation, higher local prevalence of cardiovascular disease and depression, older people (aged >75 years) and infants (aged 0-4 years) but lower prevalence of mental health conditions. Greater volume of MURs was significantly associated with pharmacy ownership type (large chains/supermarkets>independents), greater dispensing volume, and lower disease prevalence. Survey dataset (n=285 pharmacies; response=34.6%): greater dispensing volume was significantly associated with staffing, skill-mix, organisational culture, years open and greater deprivation. Greater MUR volume was significantly associated with pharmacy ownership type (large chains/supermarkets>independents), greater dispensing volume, weekly opening hours and lower asthma prevalence. Organisational and extraorganisational factors were found to impact differently on dispensing volume and MUR activity, the latter being driven more by corporate ownership than population need. While levels of staffing and skill-mix were associated with dispensing volume, they did not influence MUR activity. Despite recent changes to the contractual framework, the existing fee-for-service reimbursement may therefore not be the most appropriate for the delivery of cognitive (rather than supply) services, still appearing to incentivise quantity

  2. RELAP5/MOD3 code manual. Volume 4, Models and correlations

    International Nuclear Information System (INIS)

    1995-08-01

    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I presents modeling theory and associated numerical schemes; Volume II details instructions for code application and input data preparation; Volume III presents the results of developmental assessment cases that demonstrate and verify the models used in the code; Volume IV discusses in detail RELAP5 models and correlations; Volume V presents guidelines that have evolved over the past several years through the use of the RELAP5 code; Volume VI discusses the numerical scheme used in RELAP5; and Volume VII presents a collection of independent assessment calculations

  3. Parametric packet-based audiovisual quality model for IPTV services

    CERN Document Server

    Garcia, Marie-Neige

    2014-01-01

    This volume presents a parametric packet-based audiovisual quality model for Internet Protocol TeleVision (IPTV) services. The model is composed of three quality modules for the respective audio, video and audiovisual components. The audio and video quality modules take as input a parametric description of the audiovisual processing path, and deliver an estimate of the audio and video quality. These outputs are sent to the audiovisual quality module which provides an estimate of the audiovisual quality. Estimates of perceived quality are typically used both in the network planning phase and as part of the quality monitoring. The same audio quality model is used for both these phases, while two variants of the video quality model have been developed for addressing the two application scenarios. The addressed packetization scheme is MPEG2 Transport Stream over Real-time Transport Protocol over Internet Protocol. In the case of quality monitoring, that is the case for which the network is already set-up, the aud...

  4. Virtual-view PSNR prediction based on a depth distortion tolerance model and support vector machine.

    Science.gov (United States)

    Chen, Fen; Chen, Jiali; Peng, Zongju; Jiang, Gangyi; Yu, Mei; Chen, Hua; Jiao, Renzhi

    2017-10-20

    Quality prediction of virtual-views is important for free viewpoint video systems, and can be used as feedback to improve the performance of depth video coding and virtual-view rendering. In this paper, an efficient virtual-view peak signal to noise ratio (PSNR) prediction method is proposed. First, the effect of depth distortion on virtual-view quality is analyzed in detail, and a depth distortion tolerance (DDT) model that determines the DDT range is presented. Next, the DDT model is used to predict the virtual-view quality. Finally, a support vector machine (SVM) is utilized to train and obtain the virtual-view quality prediction model. Experimental results show that the Spearman's rank correlation coefficient and root mean square error between the actual PSNR and the predicted PSNR by DDT model are 0.8750 and 0.6137 on average, and by the SVM prediction model are 0.9109 and 0.5831. The computational complexity of the SVM method is lower than the DDT model and the state-of-the-art methods.
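
    A minimal sketch of the final stage (training a support vector machine to map depth-distortion features to virtual-view PSNR) is given below; the features and data are synthetic stand-ins, since the DDT-derived features and training sets are not reproduced in the abstract.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from scipy.stats import rankdata

# Synthetic stand-in data: features describing depth distortion
# (e.g. mean depth error, fraction of pixels outside the tolerated range)
# versus the PSNR of the rendered virtual view.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))
psnr = 40 - 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 0.5, 200)

X_train, X_test = X[:150], X[150:]
y_train, y_test = psnr[:150], psnr[150:]

# Epsilon-SVR with an RBF kernel, a common choice for this kind of regression
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_train, y_train)
pred = model.predict(X_test)

# Spearman rank correlation and RMSE between actual and predicted PSNR
rho = np.corrcoef(rankdata(y_test), rankdata(pred))[0, 1]
rmse = np.sqrt(np.mean((y_test - pred) ** 2))
print(f"Spearman rho: {rho:.3f}, RMSE: {rmse:.3f} dB")
```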

  5. Prediction model for carbonation depth of concrete subjected to freezing-thawing cycles

    Science.gov (United States)

    Xiao, Qian Hui; Li, Qiang; Guan, Xiao; Xian Zou, Ying

    2018-03-01

    Through indoor simulation tests of concrete durability under the coupled effect of freeze-thaw cycles and carbonation, the variation of concrete neutralization depth under freeze-thaw and carbonation was obtained. Based on the concrete carbonation mechanism, the relationship between the air diffusion coefficient and porosity in concrete was analyzed, and the calculation of porosity in Portland cement concrete and fly ash cement concrete was investigated, considering the influence of freeze-thaw damage on the concrete diffusion coefficient. Finally, a prediction model for the carbonation depth of concrete under freeze-thaw conditions was established. The results obtained using this prediction model agreed well with the experimental results and provide a theoretical reference and basis for concrete durability analysis under multi-factor environments.
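
    Carbonation depth is commonly modelled as growing with the square root of exposure time, with the coefficient tied to an effective CO2 diffusion coefficient, and freeze-thaw damage can then be represented as an increase of that coefficient through increased porosity. The sketch below encodes that generic structure with placeholder parameters; it is not the calibrated model of the paper.

```python
import numpy as np

def carbonation_depth(t_years, D_eff, c0=8e-4, a=75.0):
    """Generic square-root-of-time carbonation model:
        x(t) = sqrt(2 * D_eff * c0 / a) * sqrt(t)
    D_eff : effective CO2 diffusion coefficient [m^2/s]
    c0    : CO2 concentration at the surface [kg/m^3]
    a     : CO2 binding capacity of the concrete [kg/m^3]
    All values here are illustrative placeholders."""
    t_s = np.asarray(t_years) * 365.25 * 24 * 3600.0
    return np.sqrt(2.0 * D_eff * c0 / a * t_s) * 1000.0   # depth in mm

def damaged_diffusivity(D0, n_cycles, beta=0.004):
    """Assumed increase of diffusivity with freeze-thaw cycles via a
    simple exponential damage factor."""
    return D0 * np.exp(beta * n_cycles)

years = np.array([1, 5, 10, 25, 50])
D0 = 1.0e-8                              # undamaged diffusivity, m^2/s
for cycles in (0, 100, 300):
    D = damaged_diffusivity(D0, cycles)
    print(cycles, "cycles:", np.round(carbonation_depth(years, D), 1), "mm")
```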

  6. On the effect of velocity gradients on the depth of correlation in μPIV

    International Nuclear Information System (INIS)

    Mustin, B; Stoeber, B

    2016-01-01

    The present work revisits the effect of velocity gradients on the depth of the measurement volume (depth of correlation) in microscopic particle image velocimetry (μPIV). General relations between the μPIV weighting functions and the local correlation function are derived from the original definition of the weighting functions. These relations are used to investigate under which circumstances the weighting functions are related to the curvature of the local correlation function. Furthermore, this work proposes a modified definition of the depth of correlation that leads to more realistic results than previous definitions for the case when flow gradients are taken into account. Dimensionless parameters suitable to describe the effect of velocity gradients on μPIV cross correlation are derived and visual interpretations of these parameters are proposed. We then investigate the effect of the dimensionless parameters on the weighting functions and the depth of correlation for different flow fields with spatially constant flow gradients and with spatially varying gradients. Finally this work demonstrates that the results and dimensionless parameters are not strictly bound to a certain model for particle image intensity distributions but are also meaningful when other models for particle images are used. (paper)

  7. On the effect of velocity gradients on the depth of correlation in μPIV

    Science.gov (United States)

    Mustin, B.; Stoeber, B.

    2016-03-01

    The present work revisits the effect of velocity gradients on the depth of the measurement volume (depth of correlation) in microscopic particle image velocimetry (μPIV). General relations between the μPIV weighting functions and the local correlation function are derived from the original definition of the weighting functions. These relations are used to investigate under which circumstances the weighting functions are related to the curvature of the local correlation function. Furthermore, this work proposes a modified definition of the depth of correlation that leads to more realistic results than previous definitions for the case when flow gradients are taken into account. Dimensionless parameters suitable to describe the effect of velocity gradients on μPIV cross correlation are derived and visual interpretations of these parameters are proposed. We then investigate the effect of the dimensionless parameters on the weighting functions and the depth of correlation for different flow fields with spatially constant flow gradients and with spatially varying gradients. Finally this work demonstrates that the results and dimensionless parameters are not strictly bound to a certain model for particle image intensity distributions but are also meaningful when other models for particle images are used.
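
    For reference, the widely cited gradient-free expression for the depth of correlation (after Olsen and Adrian) can be evaluated as below; it is the classical form that the paper's modified, gradient-aware definition revisits, so the numbers it produces do not reflect the corrections proposed here. The optical parameters in the example are arbitrary.

```python
import numpy as np

def depth_of_correlation(dp, M, NA, wavelength, eps=0.01):
    """Classical (gradient-free) depth of correlation after Olsen & Adrian:
        z_corr = sqrt( (1 - sqrt(eps))/sqrt(eps)
                       * (f#^2 dp^2 + 5.95 (M+1)^2 lam^2 f#^4 / M^2) )
    dp         : particle diameter [m]
    M          : magnification
    NA         : numerical aperture (f# approximated as 1/(2 NA))
    wavelength : light wavelength [m]
    eps        : relative weighting threshold (typically 0.01)"""
    f_num = 1.0 / (2.0 * NA)
    term = f_num**2 * dp**2 + 5.95 * (M + 1)**2 * wavelength**2 * f_num**4 / M**2
    return np.sqrt((1.0 - np.sqrt(eps)) / np.sqrt(eps) * term)

# Example: 1 um particles, 20x / NA 0.4 objective, 532 nm light
z = depth_of_correlation(dp=1e-6, M=20, NA=0.4, wavelength=532e-9)
print(f"total measurement depth ~ {2 * z * 1e6:.1f} um (2 * z_corr)")
```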

  8. The service dominant business model : a service focused conceptualization

    NARCIS (Netherlands)

    Lüftenegger, E.R.; Comuzzi, M.; Grefen, P.W.P.J.; Weisleder, C.A.

    2013-01-01

    Existing approaches on business model tools are constrained by the goods dominant way of doing business. Nowadays, the shift from goods based approaches towards a service dominant strategy requires novel business model tools specially focused for service business. In this report we present the

  9. The Incredible Shrinking Cup Lab: An Investigation of the Effect of Depth and Water Pressure on Polystyrene

    Science.gov (United States)

    This activity familiarizes students with the effects of increased depth on pressure and volume. Students will determine the volume of polystyrene cups before and after they are submerged to differing depths in the ocean and the Laurentian Great Lakes. Students will also calculate...

  10. Optimal models of extreme volume-prices are time-dependent

    International Nuclear Information System (INIS)

    Rocha, Paulo; Boto, João Pedro; Raischel, Frank; Lind, Pedro G

    2015-01-01

    We present evidence that the best model for empirical volume-price distributions is not always the same and that it strongly depends on (i) the region of the volume-price spectrum that one wants to model and (ii) the period in time being modelled. To show these two features we analyze stocks of the New York stock market with four different models: Γ, Γ-inverse, log-normal, and Weibull distributions. To evaluate the accuracy of each model we use standard relative deviations as well as the Kullback-Leibler distance, and we introduce an additional distance particularly suited to evaluating how accurate the models are for the distribution tails (large volume-prices). Finally, we put our findings in perspective and discuss how they can be extended to other situations in financial engineering.
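
    The model-comparison step can be sketched with SciPy: fit the candidate distributions to a sample of volume-prices and compare them with a discretised Kullback-Leibler distance. The data below are synthetic stand-ins for real volume-price series, and the binning choices are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic positive "volume-price" sample standing in for real data
sample = rng.lognormal(mean=2.0, sigma=0.6, size=5000)

candidates = {
    "gamma": stats.gamma,
    "inverse gamma": stats.invgamma,
    "log-normal": stats.lognorm,
    "weibull": stats.weibull_min,
}

# Empirical bin probabilities on a common grid for the KL distance
edges = np.linspace(sample.min(), np.quantile(sample, 0.999), 200)
hist, _ = np.histogram(sample, bins=edges, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
width = np.diff(edges)
p = hist * width + 1e-12

for name, dist in candidates.items():
    params = dist.fit(sample, floc=0)          # fix location at zero
    q = dist.pdf(centres, *params) * width + 1e-12
    kl = np.sum(p * np.log(p / q))             # KL(empirical || model)
    print(f"{name:14s} KL distance: {kl:.4f}")
```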

  11. Soil temperature modeling at different depths using neuro-fuzzy, neural network, and genetic programming techniques

    Science.gov (United States)

    Kisi, Ozgur; Sanikhani, Hadi; Cobaner, Murat

    2017-08-01

    The applicability of artificial neural networks (ANN), adaptive neuro-fuzzy inference system (ANFIS), and genetic programming (GP) techniques in estimating soil temperatures (ST) at different depths is investigated in this study. Weather data from two stations, Mersin and Adana, Turkey, were used as inputs to the applied models in order to model monthly STs. The first part of the study focused on a comparison of ANN, ANFIS, and GP models in modeling the ST of the two stations at depths of 10, 50, and 100 cm. GP was found to perform better than ANN and ANFIS-SC in estimating monthly ST. The effect of periodicity (month of the year) on the models' accuracy was also investigated. Including a periodicity component in the models' inputs considerably increased their accuracy. The root mean square error (RMSE) of the ANN models decreased by 34% and 27% for the depths of 10 and 100 cm, respectively, when the periodicity input was added. In the second part of the study, the accuracies of the ANN, ANFIS, and GP models were compared in estimating the ST of Mersin Station using the climatic data of Adana Station. The ANN models generally performed better than ANFIS-SC and GP in modeling the ST of Mersin Station without local climatic inputs.
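
    The reported benefit of a periodicity input can be illustrated with a small neural-network regression in which the month is encoded as sine/cosine components so that December and January are adjacent. The weather and soil-temperature data below are synthetic stand-ins, and the network is a generic scikit-learn regressor rather than the models used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 600
month = rng.integers(1, 13, n)
air_t = 18 + 12 * np.sin(2 * np.pi * (month - 4) / 12) + rng.normal(0, 2, n)
solar = 15 + 8 * np.sin(2 * np.pi * (month - 3) / 12) + rng.normal(0, 2, n)
# Synthetic soil temperature at 50 cm: damped, lagged seasonal signal
soil_t = (16 + 9 * np.sin(2 * np.pi * (month - 5) / 12) + 0.2 * air_t
          + rng.normal(0, 1, n))

def fit_and_score(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20, 10),
                                       max_iter=3000, random_state=1))
    model.fit(X_tr, y_tr)
    return np.sqrt(mean_squared_error(y_te, model.predict(X_te)))

X_plain = np.column_stack([air_t, solar])
X_period = np.column_stack([air_t, solar,
                            np.sin(2 * np.pi * month / 12),
                            np.cos(2 * np.pi * month / 12)])

print("RMSE without periodicity:", round(fit_and_score(X_plain, soil_t), 2))
print("RMSE with periodicity:   ", round(fit_and_score(X_period, soil_t), 2))
```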

  12. Ultrasound data for laboratory calibration of an analytical model to calculate crack depth on asphalt pavements

    Directory of Open Access Journals (Sweden)

    Miguel A. Franesqui

    2017-08-01

    Full Text Available This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwaves exposure in the research article entitled “Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves” (Franesqui et al., 2017) [1].

  13. Ultrasound data for laboratory calibration of an analytical model to calculate crack depth on asphalt pavements.

    Science.gov (United States)

    Franesqui, Miguel A; Yepes, Jorge; García-González, Cándida

    2017-08-01

    This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwaves exposure in the research article entitled "Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves" (Franesqui et al., 2017) [1].
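
    A hedged sketch of the calibration idea follows: an assumed diffraction-path geometry for a surface-breaking crack predicts how the transit-time ratio grows with crack depth, and its free parameter is fitted to notched-beam measurements with curve_fit. Both the functional form and the numbers are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np
from scipy.optimize import curve_fit

S = 0.10   # assumed transducer separation [m] in the test geometry

def time_ratio(depth, a):
    """Assumed diffraction-path model for a surface-breaking crack midway
    between the transducers: the pulse travels over the crack tip, so
        t_crack / t_sound = 2*sqrt((S/2)^2 + (a*d)^2) / S,
    where 'a' absorbs deviations from the ideal geometry and material."""
    return 2.0 * np.sqrt((S / 2.0) ** 2 + (a * depth) ** 2) / S

# Laboratory-style calibration data: notch depths [m] vs measured ratios
# (synthetic numbers standing in for the published dataset)
notch_depth = np.array([0.00, 0.01, 0.02, 0.03, 0.04, 0.05])
measured_ratio = np.array([1.00, 1.02, 1.08, 1.16, 1.27, 1.40])

(a_fit,), cov = curve_fit(time_ratio, notch_depth, measured_ratio, p0=[1.0])
print(f"fitted coefficient a = {a_fit:.3f} +/- {np.sqrt(cov[0, 0]):.3f}")

def crack_depth(ratio, a):
    """Invert the calibrated model to estimate crack depth from a ratio."""
    return np.sqrt(np.maximum((ratio * S / 2.0) ** 2 - (S / 2.0) ** 2, 0)) / a

print("depth for ratio 1.20:", round(crack_depth(1.20, a_fit) * 100, 2), "cm")
```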

  14. Enabling Real-time Water Decision Support Services Using Model as a Service

    Science.gov (United States)

    Zhao, T.; Minsker, B. S.; Lee, J. S.; Salas, F. R.; Maidment, D. R.; David, C. H.

    2014-12-01

    Through application of computational methods and an integrated information system, data and river modeling services can help researchers and decision makers more rapidly understand river conditions under alternative scenarios. To enable this capability, workflows (i.e., analysis and model steps) are created and published as Web services delivered through an internet browser, including model inputs, a published workflow service, and visualized outputs. The RAPID model, which is a river routing model developed at University of Texas Austin for parallel computation of river discharge, has been implemented as a workflow and published as a Web application. This allows non-technical users to remotely execute the model and visualize results as a service through a simple Web interface. The model service and Web application has been prototyped in the San Antonio and Guadalupe River Basin in Texas, with input from university and agency partners. In the future, optimization model workflows will be developed to link with the RAPID model workflow to provide real-time water allocation decision support services.

  15. Mental health service user participation in Chinese culture: a model of independence or interdependence?

    Science.gov (United States)

    Tang, Jessica Pui-Shan; Tse, Samson Shu-Ki; Davidson, Larry; Cheng, Patrick

    2017-12-22

    Current models of user participation in mental health services were developed within Western culture and thus may not be applicable to Chinese communities. To present a new model of user participation, which emerged from research within a Chinese community, for understanding the processes of and factors influencing user participation in a non-Western culture. Multiple qualitative methods, including focus groups, individual in-depth interviews, and photovoice, were applied within the framework of constructivist grounded theory and collaborative research. Diverging from conceptualizations of user participation with emphasis on civil rights and the individual as a central agent, participants in the study highlighted the interpersonal dynamics between service users and different players affecting the participation intensity and outcomes. They valued a reciprocal relationship with their caregivers in making treatment decisions, cooperated with staff to observe power hierarchies and social harmony, identified the importance of peer support in enabling service engagement and delivery, and emphasized professional facilitation in advancing involvement at the policy level. User participation in Chinese culture embeds dynamic interdependence. The proposed model adds this new dimension to the existing frameworks and calls for attention to the complex local ecology and cultural consistency in realizing user participation.

  16. Chironomid-based water depth reconstructions: an independent evaluation of site-specific and local inference models

    NARCIS (Netherlands)

    Engels, S.; Cwynar, L.C.; Rees, A.B.H.; Shuman, B.N.

    2012-01-01

    Water depth is an important environmental variable that explains a significant portion of the variation in the chironomid fauna of shallow lakes. We developed site-specific and local chironomid water-depth inference models using 26 and 104 surface-sediment samples, respectively, from seven

  17. Model documentation Natural Gas Transmission and Distribution Model of the National Energy Modeling System. Volume 1

    International Nuclear Information System (INIS)

    1996-01-01

    The Natural Gas Transmission and Distribution Model (NGTDM) of the National Energy Modeling System is developed and maintained by the Energy Information Administration (EIA), Office of Integrated Analysis and Forecasting. This report documents the archived version of the NGTDM that was used to produce the natural gas forecasts presented in the Annual Energy Outlook 1996, (DOE/EIA-0383(96)). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic approach, and provides detail on the methodology employed. Previously this report represented Volume I of a two-volume set. Volume II reported on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues

  18. Model documentation Natural Gas Transmission and Distribution Model of the National Energy Modeling System. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-02-26

    The Natural Gas Transmission and Distribution Model (NGTDM) of the National Energy Modeling System is developed and maintained by the Energy Information Administration (EIA), Office of Integrated Analysis and Forecasting. This report documents the archived version of the NGTDM that was used to produce the natural gas forecasts presented in the Annual Energy Outlook 1996, (DOE/EIA-0383(96)). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic approach, and provides detail on the methodology employed. Previously this report represented Volume I of a two-volume set. Volume II reported on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.

  19. On the role of model depth and hydraulic properties for groundwater flow modelling during glacial climate conditions

    International Nuclear Information System (INIS)

    Vidstrand, Patrik; Rhen, Ingvar

    2011-03-01

    A common assumption in regional groundwater flow simulations of periods with glacial climate conditions is that the salinity at the bottom boundary of the model domain is stable (constant over time). This assumption is partly based on the general fact that water density increases with increasing salinity, but also supported by measurements, e.g. the mobile (fracture water) and immobile (porewater) salinity typically increase with depth, whereas the conductive fracture frequency and fracture transmissivity often decrease with depth. Plausibly, the depth to stable hydrogeological conditions varies between sites, and the question studied here is whether hydrogeological disturbances could occur at 2-4 km depth during glacial climate conditions. With regard to the results of SDM-Site and SR-Site, the hydrogeological conditions at repository depth indicate less groundwater flow during glacial climate conditions at Forsmark than at Laxemar. For this reason, this study uses the Laxemar site as a hypothetical site of potentially more permeable conditions, hence more readily affected during glacial climate conditions. A series of flow simulations conducted with DarcyTools in an approximately 5 km deep, super-regional model domain centred on the Laxemar site are reported. The studied cases (model variants) represent a variety of different property specifications along with variations in initial conditions concerning salinity. The model domain is subjected to a transient top boundary representing an advancing ice sheet margin. The behaviour of the grid cell pressure, Darcy flux and mobile salinity is monitored at four different elevations along a vertical scan line through the centre of the suggested location for a KBS-3 repository at Laxemar. The studied monitoring points are located at -0.5 km, -2.5 km, -3.0 km, and -3.5 km. These elevations are chosen with the objective to study the range of hydrogeological disturbance that could occur at 2-4 km depth. The flow model is run

  20. On the role of model depth and hydraulic properties for groundwater flow modelling during glacial climate conditions

    Energy Technology Data Exchange (ETDEWEB)

    Vidstrand, Patrik (TerraSolve AB (Sweden)); Rhen, Ingvar (SWECO Environment AB (Sweden))

    2011-03-15

    A common assumption in regional groundwater flow simulations of periods with glacial climate conditions is that the salinity at the bottom boundary of the model domain is stable (constant over time). This assumption is partly based on the general fact that water density increases with increasing salinity, but also supported by measurements, e.g. the mobile (fracture water) and immobile (porewater) salinity typically increase with depth, whereas the conductive fracture frequency and fracture transmissivity often decrease with depth. Plausibly, the depth to stable hydrogeological conditions varies between sites, and the question studied here is whether hydrogeological disturbances could occur at 2-4 km depth during glacial climate conditions. With regard to the results of SDM-Site and SR-Site, the hydrogeological conditions at repository depth indicate less groundwater flow during glacial climate conditions at Forsmark than at Laxemar. For this reason, this study uses the Laxemar site as a hypothetical site of potentially more permeable conditions, hence more readily affected during glacial climate conditions. A series of flow simulations conducted with DarcyTools in an approximately 5 km deep, super-regional model domain centred on the Laxemar site are reported. The studied cases (model variants) represent a variety of different property specifications along with variations in initial conditions concerning salinity. The model domain is subjected to a transient top boundary representing an advancing ice sheet margin. The behaviour of the grid cell pressure, Darcy flux and mobile salinity is monitored at four different elevations along a vertical scan line through the centre of the suggested location for a KBS-3 repository at Laxemar. The studied monitoring points are located at -0.5 km, -2.5 km, -3.0 km, and -3.5 km. These elevations are chosen with the objective to study the range of hydrogeological disturbance that could occur at 2-4 km depth. The flow model is run

  1. Customer premises services market demand assessment 1980 - 2000: Volume 2

    Science.gov (United States)

    Gamble, R. B.; Saporta, L.; Heidenrich, G. A.

    1983-01-01

    Potential customer premises service (CPS), telecommunication services, potential CPS user classes, a primary research survey, comparative economics, market demand forecasts, distance distribution of traffic, segmentation of market demand, and a nationwide traffic distribution model are discussed.

  2. Lanchester-Type Models of Warfare, Volume II

    OpenAIRE

    Taylor, James G.

    1980-01-01

    This monograph is a comprehensive treatise on Lanchester-type models of warfare, i.e. differential-equation models of attrition in force-on-force combat operations. Its goal is to provide both an introduction to and a current state-of-the-art overview of Lanchester-type models of warfare, as well as a comprehensive and unified in-depth treatment of them. Both deterministic as well as stochastic models are considered. Such models have been widely used in the United States and elsewhere for the...
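
    As a concrete illustration of the kind of model covered by the monograph, the sketch below integrates the deterministic Lanchester square-law (aimed-fire) equations dx/dt = -a*y, dy/dt = -b*x; this is a textbook example with arbitrary coefficients, not material reproduced from the volume.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Deterministic Lanchester square-law (aimed-fire) attrition model:
#   dx/dt = -a * y,   dy/dt = -b * x
a, b = 0.06, 0.04          # per-unit kill rates of Y against X and X against Y
x0, y0 = 1000.0, 900.0     # initial force sizes

def attrition(t, state):
    x, y = state
    return [-a * y, -b * x]

# Stop integrating when either force is annihilated
def x_destroyed(t, s): return s[0]
def y_destroyed(t, s): return s[1]
x_destroyed.terminal = y_destroyed.terminal = True

sol = solve_ivp(attrition, (0, 200), [x0, y0], max_step=0.1,
                events=[x_destroyed, y_destroyed])
x_end, y_end = sol.y[:, -1]
print(f"battle ends at t = {sol.t[-1]:.1f}: x = {x_end:.0f}, y = {y_end:.0f}")

# Square-law prediction of the winner: compare b*x0^2 against a*y0^2
print("square-law state quantity b*x0^2 - a*y0^2 =", b * x0**2 - a * y0**2)
```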

  3. Two-Dimensional Depth-Averaged Beach Evolution Modeling: Case Study of the Kizilirmak River Mouth, Turkey

    DEFF Research Database (Denmark)

    Baykal, Cüneyt; Ergin, Ayşen; Güler, Işikhan

    2014-01-01

    This study presents an application of a two-dimensional beach evolution model to a shoreline change problem at the Kizilirmak River mouth, which has been facing severe coastal erosion problems for more than 20 years. The shoreline changes at the Kizilirmak River mouth have been thus far investigated by satellite images, physical model tests, and one-dimensional numerical models. The current study uses a two-dimensional depth-averaged numerical beach evolution model, developed based on existing methodologies. This model is mainly composed of four main submodels: a phase-averaged spectral wave transformation model, a two-dimensional depth-averaged numerical wave-induced circulation model, a sediment transport model, and a bottom evolution model. To validate and verify the numerical model, it is applied to several cases of laboratory experiments. Later, the model is applied to a shoreline change problem...

  4. Investigating automated depth modelling of archaeo-magnetic datasets

    Science.gov (United States)

    Cheyney, Samuel; Hill, Ian; Linford, Neil; Leech, Christopher

    2010-05-01

    Magnetic surveying is a commonly used tool for first-pass non-invasive archaeological surveying, and is often used to target areas for more detailed geophysical investigation, or excavation. Quick and routine processing of magnetic datasets means that survey results are typically viewed as 2D greyscale maps and the shapes of anomalies are interpreted in terms of likely archaeological structures. This technique is simple, but ignores some of the information content of the data. The data collected using dense spatial sampling with modern precise instrumentation are capable of yielding numerical estimates of the depths to buried structures, and their physical properties. The magnetic field measured at the surface is a superposition of the responses to all anomalous magnetic susceptibilities in the subsurface, and is therefore capable of revealing a 3D model of the magnetic properties. The application of mathematical modelling techniques to very-near-surface surveys such as for archaeology is quite rare; however, similar methods are routinely used in regional-scale mineral exploration surveys. Inverse modelling techniques have inherent ambiguity due to the nature of the mathematical "inverse problem". Often, although a good fit to the recorded values can be obtained, the final model will be non-unique and may be heavily biased by the starting model provided. Also, the run time and computer resources required can be restrictive. Our approach is to derive as much information as possible from the data directly, and use this to define a starting model for inversion. This addresses both the ambiguity of the inverse problem and reduces the task for the inversion computation. A number of alternative methods exist that can be used to obtain parameters for source bodies in potential field data. Here, methods involving the derivatives of the total magnetic field are used in association with advanced image processing techniques to outline the edges of anomalous bodies more accurately
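
    One of the simplest derivative-based quantities used to outline body edges is the total horizontal derivative of the field, whose maxima tend to sit over susceptibility boundaries. A minimal sketch on a gridded anomaly map follows; the grid is synthetic and the method generic, not the specific workflow of the study.

```python
import numpy as np

def total_horizontal_derivative(tmi, dx, dy):
    """Total horizontal derivative of a gridded total-magnetic-intensity
    (TMI) map; its ridges approximately outline the edges of anomalous
    bodies."""
    dT_dy, dT_dx = np.gradient(tmi, dy, dx)
    return np.hypot(dT_dx, dT_dy)

# Synthetic anomaly grid standing in for a survey: a single smooth
# near-surface anomaly on a 0.5 m grid
x, y = np.meshgrid(np.arange(0, 40, 0.5), np.arange(0, 40, 0.5))
tmi = 25.0 * np.exp(-(((x - 20) / 4) ** 2 + ((y - 18) / 6) ** 2))  # nT

thd = total_horizontal_derivative(tmi, dx=0.5, dy=0.5)
edge_mask = thd > 0.6 * thd.max()     # crude edge outline by thresholding
print("cells flagged as edges:", int(edge_mask.sum()))
```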

  5. Total tree, merchantable stem and branch volume models for ...

    African Journals Online (AJOL)

    Total tree, merchantable stem and branch volume models for miombo woodlands of Malawi. Daud J Kachamba, Tron Eid. Abstract. The objective of this study was to develop general (multispecies) models for prediction of total tree, merchantable stem and branch volume including options with diameter at breast height (dbh) ...

  6. Modeling the economics of LLW volume reduction

    International Nuclear Information System (INIS)

    Voth, M.H.; Witzig, W.F.

    1986-01-01

    Generators of low-level (radioactive) waste (LLW) are under pressure to implement volume reduction (VR) programs for political and economic reasons. Political reasons include the appearance of generating less waste or meeting quotas. Economic reasons include avoiding high disposal costs and associated surcharges. Volume reduction results in less total volume over which fixed disposal costs are allocated and therefore higher unit costs for disposal. As numerous small compacts are developed, this often overlooked effect becomes more pronounced. The described model presents two unique significant features. First, a feedback loop considers the impact of VR on disposal rates, and second, it appeals to logic without extensive knowledge of VR technology or computer modeling. The latter feature is especially useful in conveying information to students and nontechnical decision makers, demonstrating the impact of each of a complicated set of variables with reproducible results

  7. Reference depth for geostrophic computation - A new method

    Digital Repository Service at National Institute of Oceanography (India)

    Varkey, M.J.; Sastry, J.S.

    Various methods are available for the determination of reference depth for geostrophic computation. A new method based on the vertical profiles of mean and variance of the differences of mean specific volume anomaly (delta x 10) for different layers...

  8. Funding models for outreach ophthalmology services.

    Science.gov (United States)

    Turner, Angus W; Mulholland, Will; Taylor, Hugh R

    2011-01-01

    This paper aims to describe funding models used and compare the effects of funding models for remuneration on clinical activity and cost-effectiveness in outreach eye services in Australia. Cross-sectional case study based in remote outreach ophthalmology services in Australia. Key stake-holders from eye services in nine outreach regions participated in the study. Semistructured interviews were conducted to perform a qualitative assessment of outreach eye services' funding mechanisms. Records of clinical activity were used to statistically compare funding models. Workforce availability (supply of ophthalmologists), costs of services, clinical activity (surgery and clinic consultation rates) and waiting times. The supply of ophthalmologists (full-time equivalence) to all remote regions was below the national average (up to 19 times lower). Cataract surgery rates were also below national averages (up to 10 times lower). Fee-for-service funding significantly increased clinical activity. There were also trends to shorter waiting times and lower costs per attendance. For outreach ophthalmology services, the funding model used for clinician reimbursement may influence the efficiency and costs of the services. Fee-for-service funding models, safety-net funding options or differential funding/incentives need further exploration to ensure isolated disadvantaged areas prone to poor patient attendance are not neglected. In order for outreach eye health services to be sustainable, remuneration rates need to be comparable to those for urban practice. © 2011 The Authors. Clinical and Experimental Ophthalmology © 2011 Royal Australian and New Zealand College of Ophthalmologists.

  9. Computational Model for Spacecraft/Habitat Volume

    Data.gov (United States)

    National Aeronautics and Space Administration — Please note that funding to Dr. Simon Hsiang, a critical co-investigator for the development of the Spacecraft Optimization Layout and Volume (SOLV) model, was...

  10. Markov chain model helps predict pitting corrosion depth and rate in underground pipelines

    Energy Technology Data Exchange (ETDEWEB)

    Caleyo, F.; Velazquez, J.C.; Hallen, J. M. [ESIQIE, Instituto Politecnico Nacional, Mexico D. F. (Mexico); Esquivel-Amezcua, A. [PEMEX PEP Region Sur, Villahermosa, Tabasco (Mexico); Valor, A. [Universidad de la Habana, Vedado, La Habana (Cuba)

    2010-07-01

    Recent reports place pipeline corrosion costs in North America at seven billion dollars per year. Pitting corrosion causes the highest percentage of failures among corrosion mechanisms. This has motivated multiple modelling studies focused on pitting corrosion of underground pipelines. In this study, a continuous-time, non-homogeneous pure birth Markov chain serves to model external pitting corrosion in buried pipelines. The analytical solution of Kolmogorov's forward equations for this type of Markov process gives the transition probability function in a discrete space of pit depths. The transition probability function can be completely identified by correlating the stochastic mean pit depth with the deterministic mean obtained experimentally. The model proposed in this study can be applied to pitting corrosion data from repeated in-line pipeline inspections. Case studies presented in this work show how pipeline inspection and maintenance planning can be improved by using the proposed Markovian model for pitting corrosion.
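
    A hedged sketch of this kind of pure-birth construction: with birth intensities proportional to the state index, the forward Kolmogorov equations have a closed-form geometric solution, and its time dependence can be chosen so that the stochastic mean follows an empirical power-law pit-growth curve. The increment size and growth parameters below are placeholders, not values identified in the study.

```python
import numpy as np

# Pure-birth Markov model of pit depth: state n means the pit has crossed
# n depth increments of size delta. For a (non-homogeneous) linear birth
# process started in state 1, the Kolmogorov forward equations give
#   P[N(t) = n] = exp(-rho(t)) * (1 - exp(-rho(t)))**(n - 1),
# with mean exp(rho(t)). Choosing rho(t) = ln(mean_depth(t)/delta) makes
# the stochastic mean follow an assumed power-law growth K * t**alpha.
delta = 0.1            # depth increment per state, mm (placeholder)
K, alpha = 0.15, 0.55  # placeholder power-law pit growth parameters

def state_probabilities(t_years, n_max=200):
    mean_states = max(K * t_years**alpha / delta, 1.0)
    rho = np.log(mean_states)
    n = np.arange(1, n_max + 1)
    p = np.exp(-rho) * (1.0 - np.exp(-rho)) ** (n - 1)
    return n, p

for t in (5, 15, 30):
    n, p = state_probabilities(t)
    mean_depth = delta * np.sum(n * p)
    p95_depth = delta * n[np.searchsorted(np.cumsum(p), 0.95)]
    print(f"t = {t:2d} y: mean pit depth {mean_depth:.2f} mm, "
          f"95th percentile {p95_depth:.2f} mm")
```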

  11. Oxygen input and oxygen yield of tanks of great depth in theory and practice; Theorie und Praxis von Sauerstoffeintrag und -ertrag bei groesseren Beckentiefen

    Energy Technology Data Exchange (ETDEWEB)

    Poepel, H.J.; Wagner, M. [Technische Hochschule Darmstadt (Germany). Inst. fuer Wasserversorgung, Abwasserbeseitigung und Raumplanung; Weidmann, F.

    1999-07-01

    Activated sludge tanks are nowadays planned and built with greater depths (8.00 to 12.00 m) than hitherto (4.00 to 6.00 m); some are already in operation. For such depths there exist no confirmed dimensioning approaches. The fundamentals of oxygen transfer in deep tanks are pointed out and a model for calculating the influence of depth is set up. The model is confirmed via an extensive test program with variation of water depth, the coverage density of membrane aerators, and air volume flows. It permits converting the oxygen supply parameters of an aeration system established for a certain blow-in depth to any other given depth. For oxygen yield, too, relations are developed, which indicate the gross yield for different compressor types (sliding vane rotary compressor, turbo compressor, screw-type compressor) as a function of blow-in depth, air volume flow and coverage density with great accuracy. (orig.)

  12. Hydrologic controls on equilibrium soil depths

    Science.gov (United States)

    Nicótina, L.; Tarboton, D. G.; Tesfa, T. K.; Rinaldo, A.

    2011-04-01

    This paper deals with modeling the mutual feedbacks between runoff production and geomorphological processes and attributes that lead to patterns of equilibrium soil depth. Our primary goal is an attempt to describe spatial patterns of soil depth resulting from long-term interactions between hydrologic forcings and soil production, erosion, and sediment transport processes under the framework of landscape dynamic equilibrium. Another goal is to set the premises for exploiting the role of soil depths in shaping the hydrologic response of a catchment. The relevance of the study stems from the massive improvement in hydrologic predictions for ungauged basins that would be achieved by using directly soil depths derived from geomorphic features remotely measured and objectively manipulated. Hydrological processes are here described by explicitly accounting for local soil depths and detailed catchment topography. Geomorphological processes are described by means of well-studied geomorphic transport laws. The modeling approach is applied to the semiarid Dry Creek Experimental Watershed, located near Boise, Idaho. Modeled soil depths are compared with field data obtained from an extensive survey of the catchment. Our results show the ability of the model to describe properly the mean soil depth and the broad features of the distribution of measured data. However, local comparisons show significant scatter whose origins are discussed.
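
    The idea of an equilibrium soil depth can be made concrete with the widely used exponential soil-production function: production declines with soil thickness, and equilibrium occurs where it balances the local erosion rate. The sketch below solves for that depth; the production parameters are illustrative, not those of the Dry Creek application.

```python
import numpy as np

def equilibrium_soil_depth(erosion_rate, p0=2.5e-4, h0=0.5):
    """Equilibrium depth where the exponential soil-production function
        P(h) = p0 * exp(-h / h0)
    balances the erosion rate E: h_eq = h0 * ln(p0 / E). Where E exceeds
    the maximum production rate p0, bedrock is exposed (depth 0).
    p0 [m/yr] and h0 [m] are illustrative placeholders."""
    E = np.asarray(erosion_rate, dtype=float)
    return np.where(E < p0, h0 * np.log(p0 / np.maximum(E, 1e-12)), 0.0)

# Erosion rates increasing from gentle hollows to steep noses [m/yr]
erosion = np.array([2e-5, 5e-5, 1e-4, 2e-4, 3e-4])
print("equilibrium depths [m]:", np.round(equilibrium_soil_depth(erosion), 2))
```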

  13. Model-driven Service Engineering with SoaML

    Science.gov (United States)

    Elvesæter, Brian; Carrez, Cyril; Mohagheghi, Parastoo; Berre, Arne-Jørgen; Johnsen, Svein G.; Solberg, Arnor

    This chapter presents a model-driven service engineering (MDSE) methodology that uses OMG MDA specifications such as BMM, BPMN and SoaML to identify and specify services within a service-oriented architecture. The methodology takes advantage of business modelling practices and provides a guide to service modelling with SoaML. The presentation is case-driven and illuminated using the telecommunication example. The chapter focuses in particular on the use of the SoaML modelling language as a means for expressing service specifications that are aligned with business models and can be realized in different platform technologies.

  14. Service business model framework and the service innovation scope

    NARCIS (Netherlands)

    van der Aa, W.; van der Rhee, B.; Victorino, L.

    2011-01-01

    In this paper we present a framework for service business models. We build on three streams of research. The first stream is the service management and marketing literature that focuses on the specific challenges of managing a service business. The second stream consists of research on e-business

  15. Models for estimation of tree volume in the miombo woodlands of ...

    African Journals Online (AJOL)

    Volume of trees is an important parameter in forest management, but only volume models with limited geographical and tree size coverage have previously been developed for Tanzanian miombo woodlands. This study developed models for estimating total, merchantable stem and branches volume applicable for the entire ...

  16. Expert cube development with SQL server analysis services 2012 multidimensional models

    CERN Document Server

    Ferrari, Alberto; Russo, Marco

    2014-01-01

    An easy-to-follow guide full of hands-on examples of real-world Analysis Services cube development tasks. Each topic is explained and placed in context, and for the more inquisitive reader, there are also more in-depth details of the concepts used. If you are an Analysis Services cube designer wishing to learn more advanced topics and best practices for cube design, this book is for you. You are expected to have some prior experience with Analysis Services cube development.

  17. Stem biomass and volume models of selected tropical tree species ...

    African Journals Online (AJOL)

    Stem biomass and stem volume were modelled as a function of diameter (at breast height; Dbh) and stem height (height to the crown base). Logarithmic models are presented that utilise Dbh and height data to predict tree component biomass and stem volumes. Alternative models are given that afford prediction based on ...
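
    The logarithmic model form referred to above, e.g. ln(V) = b0 + b1*ln(Dbh) + b2*ln(H), can be fitted by ordinary least squares, with a correction factor applied when back-transforming from the log scale. The sketch below uses synthetic data in place of the study's measurements, so the coefficients it recovers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data: diameter at breast height [cm], stem height [m]
dbh = rng.uniform(8, 60, 80)
height = 1.3 + 0.8 * dbh ** 0.7 + rng.normal(0, 1.0, 80)
# Assumed "true" stem volume [m^3] with multiplicative noise
volume = 6e-5 * dbh ** 1.9 * height ** 0.95 * np.exp(rng.normal(0, 0.1, 80))

# Fit ln(V) = b0 + b1 ln(Dbh) + b2 ln(H) by ordinary least squares
X = np.column_stack([np.ones_like(dbh), np.log(dbh), np.log(height)])
y = np.log(volume)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
cf = np.exp(resid.var(ddof=X.shape[1]) / 2.0)   # log-bias correction factor

def predict_volume(dbh_cm, h_m):
    ln_v = coef[0] + coef[1] * np.log(dbh_cm) + coef[2] * np.log(h_m)
    return cf * np.exp(ln_v)

print("fitted coefficients b0, b1, b2:", np.round(coef, 3))
print("predicted volume for Dbh=35 cm, H=20 m:",
      round(float(predict_volume(35.0, 20.0)), 3), "m^3")
```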

  18. From Cloister to Commons: Concepts and Models for Service-Learning in Religious Studies. AAHE's Series on Service-Learning in the Disciplines.

    Science.gov (United States)

    Devine, Richard, Ed.; Favazza, Joseph A., Ed.; McLain, F. Michael, Ed.

    This essays in this volume, 19th in a series, discuss why and how service-learning can be implemented in Religious Studies and what that discipline contributes to the pedagogy of service-learning. Part 1, "Service-Learning and the Dilemma of Religious Studies," contains: (1) "Service-Learning and the Dilemma of Religious Studies: Descriptive or…

  19. Modelling of soil depth and lake sediments. An application of the GeoEditor at the Forsmark site

    International Nuclear Information System (INIS)

    Vikstroem, Maria

    2005-02-01

    This report aims at describing the modelled soil depth according to three layers with different hydrogeological properties at the Forsmark site, based on available data from boreholes, observation points, seismic data and radar profiles. For the lakes in the area, the sediment has been modelled according to six layers of the most common deposits in the area. The peat layer at Stenroesmossen has also been visualized. The program used in the modelling of soil depths is the GeoEditor, which is an ArcView3.3-extension. The input data used in the model consist of 1,532 points based on seismic measurements, 31 profiles of interpreted ground penetrating radar data, 119 boreholes and 472 observation points. The western and south eastern part of the area has a low data density. In the southern parts the data density with respect to estimated bedrock elevation is low. Observation points in this area are generally not very deep and do not describe the actual bedrock elevation. They do, however, describe the minimum soil depth at each location. A detailed topographical DEM, bathymetry and map of Quaternary deposits were also used. The model is based on a three-layer-principle where each layer is assumed to have similar hydrological characteristics. The uppermost layer, Z1, is characterized by the impact from surface processes, roots and biological activity. The bottom layer, Z3, is characterized by contact with the bedrock. The middle layer, Z2, is assumed to have different hydraulic qualities than Z1 and Z3. The lake sediments have been modelled according to six classes of typical deposits. The modelled soil depths show a relatively high bedrock elevation and thus small total soil depth in the major part of the area. The median soil depth has been calculated to 1.9 m, based on model results in areas with higher data density. The maximum modelled soil depth is about 13 m, just north of Lake Stocksjoen. Generally, the sediment layers in the lakes of the area consists of a

  20. Definition of technology development missions for early Space Station satellite servicing. Volume 2: Technical

    Science.gov (United States)

    Cable, D. A.; Diewald, C. A.; Hills, T. C.; Parmentier, T. J.; Spencer, R. A.; Stone, G. E.

    1984-01-01

    Volume 2 contains the Technical Report of the approach and results of the Phase 2 study. The Phase 2 servicing study was initiated in June 1983, and is being reported in this document. The scope of the contract was to: (1) define in detail five selected technology development missions (TDM); (2) conduct a design requirement analysis to refine definitions of satellite servicing requirements at the space station; and (3) develop a technology plan that would identify and schedule prerequisite precursor technology development, associated STS flight experiments and space station experiments needed to provide on-orbit validation of the evolving technology.

  1. MODEL PERUBAHAN VOLUME KERIPIK BUAH SELAMA PROSES PENGGORENGAN SECARA VAKUM [Model for Volume Changes in Fruit Chips during Vacuum Frying]

    Directory of Open Access Journals (Sweden)

    Jamaluddin

    2011-06-01

    Full Text Available Expansion and puffing are specific characteristics of fried products critical for consumer preferences. To obtain expanded and puffed dried products that fit well with consumer acceptance criteria, it is necessary to pay attention to the process conditions which change the raw material characteristics during frying. The important changes include volume and density ratio of the products during frying. Hypothetically, these changes are due to water vaporization and the decrease in dry matter in the products. The objective of this research is to develop a mathematical model of volume and density ratio changes for jack fruit during vacuum frying as a function of water and starch content reductions. Samples were vacuum fried at 70–100 °C and a pressure of 80–90 kPa for 15–60 min. The parameters observed were volume and density as well as water and starch contents of samples before and after vacuum frying. The results showed that the developed model can be used to predict changes in volume and density ratio of jack fruit during vacuum frying.

  2. Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Condition Random Field Model

    Directory of Open Access Journals (Sweden)

    Dan Liu

    2018-04-01

    Full Text Available This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN and a continuous pairwise Conditional Random Field (CRF model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN network is firstly used to automatically learn a hierarchical feature representation of the image. To get more local details in the image, the relative depth trends of local regions are incorporated into the network. Combined with semantic information of the image, a continuous pairwise CRF is then established and is used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and that the approach obtains satisfactory results.

  3. Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Condition Random Field Model.

    Science.gov (United States)

    Liu, Dan; Liu, Xuejun; Wu, Yiguang

    2018-04-24

    This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN network is firstly used to automatically learn a hierarchical feature representation of the image. To get more local details in the image, the relative depth trends of local regions are incorporated into the network. Combined with semantic information of the image, a continuous pairwise CRF is then established and is used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and that the approach obtains satisfactory results.
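
    To make the pipeline concrete, the sketch below is a deliberately small PyTorch stand-in for this kind of approach, not the authors' network: a tiny convolutional encoder-decoder regresses a depth map, and a pairwise smoothness penalty between neighbouring pixels plays the role of the continuous pairwise CRF term. All layer sizes, the loss weight and the helper names are illustrative assumptions.

        # Minimal sketch (assumed architecture, not the authors' model).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class TinyDepthNet(nn.Module):
            def __init__(self):
                super().__init__()
                # small encoder-decoder; input H and W assumed divisible by 4
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                )
                self.decoder = nn.Sequential(
                    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
                )

            def forward(self, x):
                return self.decoder(self.encoder(x))

        def loss_fn(pred, target, lam=0.1):
            # unary (data) term: per-pixel regression error
            data = F.l1_loss(pred, target)
            # pairwise term: penalise depth differences between neighbouring pixels,
            # a crude stand-in for the continuous pairwise CRF described above
            dx = (pred[..., :, 1:] - pred[..., :, :-1]).abs().mean()
            dy = (pred[..., 1:, :] - pred[..., :-1, :]).abs().mean()
            return data + lam * (dx + dy)

        # usage: pred = TinyDepthNet()(rgb_batch); loss = loss_fn(pred, depth_batch)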

  4. Biological modelling of fuzzy target volumes in 3D radiotherapy

    International Nuclear Information System (INIS)

    Levegruen, S.; Kampen, M. van; Waschek, T.; Engenhart, R.; Schlegel, W.

    1995-01-01

    Purpose/Objective: The outcome of each radiotherapy depends critically on the optimal choice of the target volume. The goal of the radiotherapist is to include all tumor spread while at the same time saving as much healthy tissue as possible. Even when the information of all imaging modalities is combined, the diagnostic techniques are not sensitive and specific enough to visualize all microscopic tumor cell spread. Due to this lack of information there is room for different interpretations concerning the extent of the target volume, leading to a fuzzy target volume. The aim of this work is to develop a model to score different target volume boundaries within the region of diagnostic uncertainty in terms of tumor control probability (TCP) and normal tissue complication probabilities (NTCP). Materials and Methods: In order to assess the region of diagnostic uncertainty, the radiotherapist interactively defines a minimal planning target volume that absolutely must be irradiated according to the diagnostic information available and a maximal planning target volume outside which no tumor cell spread is expected. For the NTCP calculation we use the Lyman four-parameter model to estimate the response of an organ at risk to a uniform partial volume irradiation. The TCP calculation is based on the Poisson model of cell killing. The TCP estimation depends not only on volume, dose, clonogenic cell density and the α parameter of the linear quadratic model but also on the probability of finding clonogenic cells in the considered volume. Inside the minimal PTV this probability is 1, outside the maximal PTV it is 0. Therefore all voxels inside the minimal PTV are assigned the value of 1 with respect to the target volume, and all voxels outside the maximal PTV the value of 0. For voxels in the region of uncertainty in between, a 3D linear interpolation is performed. Here we assume that the probability follows the interpolated values. Starting with the minimal PTV, the expected gain in TCP and
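
    As a rough illustration of the Poisson TCP calculation sketched in this abstract, the snippet below weights each voxel's clonogen content by an occupancy probability (1 inside the minimal PTV, 0 outside the maximal PTV, interpolated in between). The single-parameter linear-quadratic survival term and all numerical values are assumptions for illustration, not the study's settings.

        # Minimal sketch of a Poisson TCP estimate with voxel occupancy probabilities.
        import numpy as np

        def tcp(dose_gy, p_occupancy, voxel_volume_cm3,
                rho_clonogens_per_cm3=1e7, alpha=0.3):
            """Poisson TCP: exp(-expected number of surviving clonogens)."""
            surviving_fraction = np.exp(-alpha * dose_gy)        # LQ model, alpha term only
            expected_survivors = (p_occupancy * rho_clonogens_per_cm3
                                  * voxel_volume_cm3 * surviving_fraction)
            return float(np.exp(-expected_survivors.sum()))

        # usage: tcp(dose, p, 0.001) for 1 mm^3 voxels, with per-voxel dose and occupancy p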

  5. Associations of olfactory bulb and depth of olfactory sulcus with basal ganglia and hippocampus in patients with Parkinson's disease.

    Science.gov (United States)

    Tanik, Nermin; Serin, Halil Ibrahim; Celikbilek, Asuman; Inan, Levent Ertugrul; Gundogdu, Fatma

    2016-05-04

    Parkinson's disease (PD) is a neurodegenerative disorder characterized by hyposmia in the preclinical stages. We investigated the relationships of olfactory bulb (OB) volume and olfactory sulcus (OS) depth with basal ganglia and hippocampal volumes. The study included 25 patients with PD and 40 age- and sex-matched control subjects. Idiopathic PD was diagnosed according to published diagnostic criteria. The Hoehn and Yahr (HY) scale, the motor subscale of the Unified Parkinson's Disease Rating Scale (UPDRS III), and the Mini-Mental State Examination (MMSE) were administered to participants. Volumetric measurements of olfactory structures, the basal ganglia, and hippocampus were performed using magnetic resonance imaging (MRI). OB volume and OS depth were significantly reduced in PD patients compared to healthy control subjects (p<0.001 and p<0.001, respectively). The OB and left putamen volumes were significantly correlated (p=0.048), and the depth of the right OS was significantly correlated with right hippocampal volume (p=0.018). We found significant correlations between OB and putamen volumes and OS depth and hippocampal volume. Our study is the first to demonstrate associations of olfactory structures with the putamen and hippocampus using MRI volumetric measurements. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Client/consultant model services agreement

    CERN Document Server

    International Federation of Consulting Engineers

    2006-01-01

    The terms of the Client Consultant Model Services agreement (The White Book) have been prepared by the Fédération Internationale des Ingénieurs-Conseils (FIDIC) and are recommended for general use for the purposes of pre-investment and feasibility studies, designs and administration of construction and project management, where proposals for such services are invited on an international basis. They are equally adaptable for domestic agreements. See more at: http://fidic.org/books/clientconsultant-model-services-agreement-4th-ed-2006-white-book

  7. Forest volume-to-biomass models and estimates of mass for live and standing dead trees of U.S. forests.

    Science.gov (United States)

    James E. Smith; Linda S. Heath; Jennifer C. Jenkins

    2003-01-01

    Includes methods and equations for nationally consistent estimates of tree-mass density at the stand level (Mg/ha) as predicted by growing-stock volumes reported by the USDA Forest Service for forests of the conterminous United States. Developed for use in FORCARB, a carbon budget model for U.S. forests, the equations also are useful for converting plot-, stand- and...

  8. Study on e-government services quality: The integration of online and offline services

    Directory of Open Access Journals (Sweden)

    Jing Fan

    2015-05-01

    Full Text Available Purpose: E-Government, as a new bond linking the government and the public, has gradually become the focus of innovation in government services. The paper focuses on e-Government service quality issues from the perspective of users. Design/methodology/approach: Drawing on both online and offline service quality perceptions, and based on the IS Success model and the SERVQUAL model, an e-Government Services Quality model has been set up with information quality, system quality and service quality as key factors. A survey was then used to collect data and test the model. Findings: It was found that users' perception of offline service quality has a significant effect on improving their perception of online service quality, and online service quality perception has a significant effect on public satisfaction with e-Government services; information clarity, system security and stability, interactive services and "one-stop" services all have a significant effect on public satisfaction with e-Government services. However, offline service quality perception has a positive but modest effect on public satisfaction with e-Government services. Research limitations/implications: Mobile e-Government is an important direction for the development of e-Government; in future work we will study mobile e-Government service channels in more depth. Originality/value: This study further develops the theory of information system service quality, and also provides a theoretical reference for government departments. On the one hand, based on the characteristics of the e-Government system, information quality, system quality and service quality in the previous system service model are further discussed; on the other hand, both online and offline services are taken into consideration in the information system service model, thus establishing the e-Government services quality model and making an in-depth study of the

  9. Formal Modeling of Service Session Management

    NARCIS (Netherlands)

    Le, V.M.; van Beijnum, Bernhard J.F.; de Goede, Leo; Almeroth, Kevin C.; Hasan, Masum

    2002-01-01

    This paper proposes a concept to apply modeling tools to Multi-Provider Telematics Service Management. The service architecture is based on the framework called “Open Service Components” which serves as building blocks to compose end-to-end telematics services in terms of service components offered

  10. Business Models, Vaccination Services, and Public Health Relationships of Retail Clinics: A Qualitative Study.

    Science.gov (United States)

    Arthur, Bayo C; Fisher, Allison Kennedy; Shoemaker, Sarah J; Pozniak, Alyssa; Stokley, Shannon

    2015-01-01

    Despite the rapid growth of retail clinics (RCs), literature is limited in terms of how these facilities offer preventive services, particularly vaccination services. The purpose of this study was to obtain an in-depth understanding of the RC business model pertaining to vaccine offerings, profitability, and decision making. From March to June 2009, we conducted 15 interviews with key individuals from three types of organizations: 12 representatives of RC corporations, 2 representatives of retail hosts (i.e., stores in which the RCs are located), and 1 representative of an industry association. We analyzed interview transcripts qualitatively. Our results indicate that consumer demand and profitability were the main drivers in offering vaccinations. RCs in this sample primarily offered vaccinations to adults and adolescents, and they were not well integrated with local public health and immunization registries. Our findings demonstrate the potential for stronger linkages with public health in these settings. The findings also may help inform future research to increase patient access to vaccination services at RCs.

  11. Ultrasonic Measurement of Corrosion Depth Development in Concrete Exposed to Acidic Environment

    Directory of Open Access Journals (Sweden)

    Fan Yingfang

    2012-01-01

    Full Text Available The corrosion depth of concrete can reflect the damage state of the load-carrying capacity and durability of concrete structures in service in severe environments. Ultrasonic technology was studied to evaluate the corrosion depth quantitatively. Three acidic environments with pH levels of 3.5, 2.5, and 1.5 were simulated by mixtures of sulfate and nitric acid solutions in the laboratory. 354 prism specimens with dimensions of 150 mm × 150 mm × 300 mm were prepared. The prepared specimens were first immersed in the acidic mixture for certain periods, followed by physical, mechanical, computerized tomography (CT) and ultrasonic tests. Damage depths of the concrete specimens under different corrosion states were obtained from both the CT and ultrasonic tests. Based on the ultrasonic test, a bilinear regression model is proposed to estimate the corrosion depth. It is shown that the results achieved by the ultrasonic and CT tests are in good agreement with each other. The relation between the corrosion depth of the concrete specimens and mechanical indices such as mass loss, compressive strength, and elastic modulus is discussed in detail. It can be concluded that the ultrasonic test is a reliable nondestructive way to measure the damage depth of concrete exposed to an acidic environment.
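
    A bilinear (two-segment) regression of the kind mentioned above can be fitted, for example, by scanning candidate breakpoints and solving a least-squares problem for each; the sketch below assumes a generic ultrasonic measurement x and corrosion depth y and is not the authors' model.

        # Minimal sketch: fit a continuous two-segment piecewise-linear relation.
        import numpy as np

        def fit_bilinear(x, y):
            x, y = np.asarray(x, float), np.asarray(y, float)
            best = None
            for xb in np.linspace(x.min(), x.max(), 50)[1:-1]:      # candidate breakpoints
                # basis: intercept, slope below xb, extra slope above xb (continuous at xb)
                A = np.column_stack([np.ones_like(x), x, np.clip(x - xb, 0, None)])
                coef, *_ = np.linalg.lstsq(A, y, rcond=None)
                sse = float(((A @ coef - y) ** 2).sum())
                if best is None or sse < best[0]:
                    best = (sse, xb, coef)
            return best   # (sse, breakpoint, [intercept, slope1, slope2_increment])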

  12. A service-oriented data access control model

    Science.gov (United States)

    Meng, Wei; Li, Fengmin; Pan, Juchen; Song, Song; Bian, Jiali

    2017-01-01

    The development of mobile computing, cloud computing and distributed computing meets growing individual service needs. For complex application systems, ensuring real-time, dynamic, and fine-grained data access control is an urgent problem. By analyzing common data access control models and building on the mandatory access control model, the paper proposes a service-oriented access control model. By regarding system services as subjects and database data as objects, the model defines access levels and access identifications for subjects and objects, and ensures that system services access databases securely.
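
    A minimal sketch of the idea, assuming a mandatory-access-control style rule set (the class names, levels and the "no read up" rule below are illustrative, not taken from the paper):

        # Treat system services as subjects and database objects as objects,
        # each carrying an access level and a set of access identifications.
        from dataclasses import dataclass, field

        @dataclass
        class Subject:                 # a system service
            name: str
            level: int                 # clearance level
            ids: set = field(default_factory=set)   # access identifications

        @dataclass
        class DataObject:              # a table or data set
            name: str
            level: int                 # classification level
            ids: set = field(default_factory=set)

        def can_read(service: Subject, data: DataObject) -> bool:
            # "no read up": the service's level must dominate the data's level,
            # and it must hold every identification attached to the data
            return service.level >= data.level and data.ids <= service.ids

        # usage
        billing = Subject("billing-service", level=2, ids={"finance"})
        ledger = DataObject("ledger", level=2, ids={"finance"})
        assert can_read(billing, ledger)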

  13. Shave-off depth profiling: Depth profiling with an absolute depth scale

    International Nuclear Information System (INIS)

    Nojima, M.; Maekawa, A.; Yamamoto, T.; Tomiyasu, B.; Sakamoto, T.; Owari, M.; Nihei, Y.

    2006-01-01

    Shave-off depth profiling provides profiling with an absolute depth scale. This method uses a focused ion beam (FIB) micro-machining process to provide the depth profile. We show that the shave-off depth profile of a particle reflected the spherical shape of the sample and signal intensities had no relationship to the depth. Through the introduction of FIB micro-sampling, the shave-off depth profiling of a dynamic random access memory (DRAM) tip was carried out. The shave-off profile agreed with a blue print from the manufacturing process. Finally, shave-off depth profiling is discussed with respect to resolutions and future directions

  14. Application of Cauchy-type integrals in developing effective methods for depth-to-basement inversion of gravity and gravity gradiometry data

    DEFF Research Database (Denmark)

    Cai, Hongzhu; Zhdanov, Michael

    2015-01-01

    One of the most important applications of gravity surveys in regional geophysical studies is determining the depth to basement. Conventional methods of solving this problem are based on the spectrum and/or Euler deconvolution analysis of the gravity field and on parameterization of the earth ... to be discretized for the calculation of the gravity field. This was especially significant in the modeling and inversion of gravity data for determining the depth to the basement. Another important result was developing a novel method of inversion of gravity data to recover the depth to basement, based on the 3D Cauchy-type integral representation. Our numerical studies determined that the new method is much faster than the conventional volume discretization method for computing the gravity response. Our synthetic model studies also showed that the developed inversion algorithm based on Cauchy-type integrals is capable ...

  15. A Diffusion Model for Two-sided Service Systems

    Science.gov (United States)

    Homma, Koichi; Yano, Koujin; Funabashi, Motohisa

    A diffusion model is proposed for two-sided service systems. ‘Two-sided’ refers to the existence of an economic network effect between two different and interrelated groups, e.g., card holders and merchants in an electronic money service. The service benefit for a member of one side depends on the number and quality of the members on the other side. A mathematical model by J. H. Rohlfs explains the network (or bandwagon) effect of communications services. In Rohlfs' model, only the users' group exists and the model is one-sided. This paper extends Rohlfs' model to a two-sided model. We propose, first, a micro model that explains individual behavior in regard to service subscription of both sides and a computational method that drives the proposed model. Second, we develop macro models with two diffusion-rate variables by simplifying the micro model. As a case study, we apply the models to an electronic money service and discuss the simulation results and actual statistics.
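
    One possible discrete-time reading of such a two-sided macro model, with two diffusion-rate variables whose growth each depends on the other side's penetration, is sketched below; the coefficients and functional form are assumptions for illustration, not Rohlfs' or the authors' equations.

        # Minimal sketch of a two-sided diffusion recursion.
        def simulate(steps=60, a=0.4, b=0.3, x0=0.01, y0=0.01):
            # x: penetration on side 1 (e.g. card holders), y: side 2 (e.g. merchants)
            x, y = x0, y0
            path = [(x, y)]
            for _ in range(steps):
                # each side's growth is proportional to its remaining non-adopters
                # times the perceived benefit, which grows with the other side's penetration
                x = x + a * y * (1.0 - x)
                y = y + b * x * (1.0 - y)
                path.append((x, y))
            return path

        # usage: penetration = simulate(); penetration[-1] approaches full adoption (1.0, 1.0)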

  16. The LIFEspan model of transitional rehabilitative care for youth with disabilities: healthcare professionals' perspectives on service delivery.

    Science.gov (United States)

    Hamdani, Yani; Proulx, Meghann; Kingsnorth, Shauna; Lindsay, Sally; Maxwell, Joanne; Colantonio, Angela; Macarthur, Colin; Bayley, Mark

    2014-01-01

    LIFEspan is a service delivery model of continuous coordinated care developed and implemented by a cross-organization partnership between a pediatric and an adult rehabilitation hospital. Previous work explored enablers and barriers to establishing the partnership service. This paper examines healthcare professionals' (HCPs') experiences of 'real world' service delivery aimed at supporting transitional rehabilitative care for youth with disabilities. This qualitative study - part of an ongoing mixed method longitudinal study - elicited HCPs' perspectives on their experiences of LIFEspan service delivery through in-depth interviews. Data were categorized into themes of service delivery activities, then interpreted from the lens of a service integration/coordination framework. Five main service delivery themes were identified: 1) addressing youth's transition readiness and capacities; 2) shifting responsibility for healthcare management from parents to youth; 3) determining services based on organizational resources; 4) linking between pediatric and adult rehabilitation services; and, 5) linking with multi-sector services. LIFEspan contributed to service delivery activities that coordinated care for youth and families and integrated inter-hospital services. However, gaps in service integration with primary care, education, social, and community services limited coordinated care to the rehabilitation sector. Recommendations are made to enhance service delivery using a systems/sector-based approach.

  17. Flexible Scenarios to build a Service Oriented Architecture on Demand

    African Journals Online (AJOL)

    pc

    2018-03-22

    Mar 22, 2018 ... technology and business requirements that could be used in a real Enterprise ... Keywords - Service Oriented Architecture, Service Enterprise, Lego Model, On ... In recent years, many companies have chosen ... management and more formal approaches through in-depth interviews ... Eng., 12: 74-78.

  18. Chalk porosity and sonic velocity versus burial depth

    DEFF Research Database (Denmark)

    Fabricius, Ida Lykke; Gommesen, Lars; Krogsbøll, Anette Susanne

    2008-01-01

    Seventy chalk samples from four formations in the overpressured Danish central North Sea have been analyzed to investigate how correlations of porosity and sonic velocity with burial depth are affected by varying mineralogy, fluid pressure, and early introduction of petroleum. The results show that porosity and sonic velocity follow the most consistent depth trends when fluid pressure and pore-volume compressibility are considered. Quartz content up to 10% has no marked effect, but more than 5% clay causes lower porosity and velocity. The mineralogical effect differs between P-wave and shear velocity ... for fluid pressure because the cementing ions originate from stylolites, which are mechanically similar to fractures. We find that cementation occurs over a relatively short depth interval.

  19. Radar-Based Depth Area Reduction Factors for Colorado

    Science.gov (United States)

    Curtis, D. C.; Humphrey, J. H.; Bare, D.

    2011-12-01

    More than 340,000 fifteen-minute storm cells, nearly 45,000 one-hour cells, and over 20,000 three-hour cells found in 21 months of gage-adjusted radar-rainfall estimates (GARR) over El Paso County, CO, were identified and evaluated using TITAN (Thunderstorm Identification, Tracking, Analysis and Nowcasting) software. TITAN's storm cell identification capability enabled the analysis of the geometric properties of storms, time step by time step. The gage-adjusted radar-rainfall data set was derived for months containing runoff producing events observed in the Fountain Creek Watershed within El Paso County from 1994-2008. Storm-centered Depth Area Reduction Factors (DARFs) were computed and compared to DARFs published by the U.S. National Weather Service (NWS) in Technical Paper 29, which are widely used in stormwater infrastructure design. Radar-based storm-centered DARFs decay much more sharply than the NWS standard curves. The results suggest lower watershed average rainfall inputs from radar-based storm-centered DARFs than from standard NWS DARFs for a given watershed area. The results also suggest that DARFs are variable by return period and, perhaps, by location. Both findings could have significant impacts on design storm standards. Lower design volumes for a given return period translate to lower capacity requirements and lower cost infrastructure. Conversely, the higher volume requirements implied for the NWS DARFs translate to higher capacity requirements, higher costs, but lower risk of failure. Ultimately, a decision about which approach to use depends on the risk tolerance of the decision maker. However, the growing volume of historical radar rainfall estimates coupled with the type of analysis described herein, supports a better understanding of risk and more informed decision-making by local officials.
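
    As an illustration of how a storm-centred DARF curve can be derived from a gridded rainfall field (a generic computation, not the authors' code), see the sketch below.

        # Minimal sketch: rank cells by depth, then express the running areal-mean
        # depth as a fraction of the peak (storm-centre) cell depth.
        import numpy as np

        def storm_centered_darf(depth_grid, cell_area_km2=1.0):
            depths = np.sort(np.asarray(depth_grid, float).ravel())[::-1]   # descending
            areas = cell_area_km2 * np.arange(1, depths.size + 1)
            running_mean = np.cumsum(depths) / np.arange(1, depths.size + 1)
            darf = running_mean / depths[0]          # equals 1.0 at the storm centre
            return areas, darf

        # usage: areas, darf = storm_centered_darf(radar_depths)   # then plot darf vs. areas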

  20. Prototyping an online wetland ecosystem services model using open model sharing standards

    Science.gov (United States)

    Feng, M.; Liu, S.; Euliss, N.H.; Young, Caitlin; Mushet, D.M.

    2011-01-01

    Great interest currently exists for developing ecosystem models to forecast how ecosystem services may change under alternative land use and climate futures. Ecosystem services are diverse and include supporting services or functions (e.g., primary production, nutrient cycling), provisioning services (e.g., wildlife, groundwater), regulating services (e.g., water purification, floodwater retention), and even cultural services (e.g., ecotourism, cultural heritage). Hence, the knowledge base necessary to quantify ecosystem services is broad and derived from many diverse scientific disciplines. Building the required interdisciplinary models is especially challenging as modelers from different locations and times may develop the disciplinary models needed for ecosystem simulations, and these models must be identified and made accessible to the interdisciplinary simulation. Additional difficulties include inconsistent data structures, formats, and metadata required by geospatial models as well as limitations on computing, storage, and connectivity. Traditional standalone and closed network systems cannot fully support sharing and integrating interdisciplinary geospatial models from variant sources. To address this need, we developed an approach to openly share and access geospatial computational models using distributed Geographic Information System (GIS) techniques and open geospatial standards. We included a means to share computational models compliant with Open Geospatial Consortium (OGC) Web Processing Services (WPS) standard to ensure modelers have an efficient and simplified means to publish new models. To demonstrate our approach, we developed five disciplinary models that can be integrated and shared to simulate a few of the ecosystem services (e.g., water storage, waterfowl breeding) that are provided by wetlands in the Prairie Pothole Region (PPR) of North America.

  1. Soil depth influence on Amazonian ecophysiology

    Science.gov (United States)

    Fagerstrom, I.; Baker, I. T.; Gallup, S.; Denning, A. S.

    2017-12-01

    Models of land-atmosphere interaction are important for simulating present day weather and critical for predictions of future climate. Land-atmosphere interaction models have become increasingly complex in the last 30 years, leading to the need for further studies examining their intricacies and improvement. This research focuses on the effect of variable soil depth on Amazonian Gross Primary Production (GPP), respiration, and their combination into overall carbon flux. We evaluate a control, which has a universal soil depth of 10 meters, with two experiments of variable soil depths. To conduct this study we ran the 3 models for the period 2000-2012, evaluating similarities and differences between them. We focus on the Amazon rain forest, and compare differences in components of carbon flux. Not surprisingly, we find that the main differences between the models arises in regions where the soil depth is dissimilar between models. However, we did not observe significant differences in GPP between known drought, wet, and average years; interannual variability in carbon dynamics was less than anticipated. We also anticipated that differences between models would be most significant during the dry season, but found discrepancies that persisted through the entire annual cycle.

  2. Regional Disparities in Online Map User Access Volume and Determining Factors

    Science.gov (United States)

    Li, R.; Yang, N.; Li, R.; Huang, W.; Wu, H.

    2017-09-01

    The regional disparity in online map user access volume ('user access volume' for short in this paper) is a topic of growing interest as online maps become more popular with public users, and understanding it helps to target the construction of geographic information services for different areas. We first statistically analysed the online map user access logs and quantified these regional access disparities on different scales. The results show that the volume of user access decreases from east to west across China as a whole, with East China producing the most access volume; the cities with the highest access are also the crucial economic and transport centres. Principal Component Regression (PCR) is then applied to explore the regional disparities in user access volume. A determining model for Online Map access volume is proposed, which indicates that area scale is the primary determining factor for regional disparities, followed by public transport development level and public service development level. Other factors such as the user quality index and financial index have very limited influence on user access volume. Based on this study of regional disparities in user access volume, map providers can reasonably dispatch and allocate data and service resources in each area and improve the operational efficiency of the Online Map server cluster.
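
    Principal Component Regression of the kind used above can be set up in a few lines; the sketch below uses hypothetical regional indicators as features and synthetic data, purely to illustrate the PCA-then-regression structure.

        # Minimal PCR sketch: standardize, project onto leading components, regress.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # columns (illustrative): area scale, public transport index, public service index,
        # user quality index, financial index
        X = np.random.rand(100, 5)
        y = X @ np.array([2.0, 1.2, 1.0, 0.1, 0.1]) + 0.1 * np.random.randn(100)

        pcr = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
        pcr.fit(X, y)
        print(pcr.score(X, y))   # R^2 of the fitted determining model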

  3. An operator calculus for surface and volume modeling

    Science.gov (United States)

    Gordon, W. J.

    1984-01-01

    The mathematical techniques which form the foundation for most of the surface and volume modeling techniques used in practice are briefly described. An outline of what may be termed an operator calculus for the approximation and interpolation of functions of more than one independent variable is presented. By considering the linear operators associated with bivariate and multivariate interpolation/approximation schemes, it is shown how they can be compounded by operator multiplication and Boolean addition to obtain a distributive lattice of approximation operators. It is then demonstrated via specific examples how this operator calculus leads to practical techniques for sculptured surface and volume modeling.
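
    For readers unfamiliar with the construction alluded to here, the Boolean sum and tensor product of two commuting linear interpolation operators P and Q are conventionally written as follows (standard notation from the transfinite-interpolation literature, not quoted from the paper):

        (P \oplus Q)\,f \;=\; P f + Q f - P Q f , \qquad (P \otimes Q)\,f \;=\; P Q f

    The Boolean sum reproduces all data interpolated by either operator, which is why it serves as the building block for Coons-type sculptured surface and volume schemes.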

  4. Forecasted Flood Depth Grids Providing Early Situational Awareness to FEMA during the 2017 Atlantic Hurricane Season

    Science.gov (United States)

    Jones, M.; Longenecker, H. E., III

    2017-12-01

    The 2017 hurricane season brought the unprecedented landfall of three Category 4 hurricanes (Harvey, Irma and Maria). FEMA is responsible for coordinating the federal response and recovery efforts for large disasters such as these. FEMA depends on timely and accurate depth grids to estimate hazard exposure, model damage assessments, plan flight paths for imagery acquisition, and prioritize response efforts. In order to produce riverine or coastal depth grids based on observed flooding, the methodology requires peak crest water levels at stream gauges, tide gauges, high water marks, and best-available elevation data. Because peak crest data isn't available until the apex of a flooding event and high water marks may take up to several weeks for field teams to collect for a large-scale flooding event, final observed depth grids are not available to FEMA until several days after a flood has begun to subside. Within the last decade NOAA's National Weather Service (NWS) has implemented the Advanced Hydrologic Prediction Service (AHPS), a web-based suite of accurate forecast products that provide hydrograph forecasts at over 3,500 stream gauge locations across the United States. These forecasts have been newly implemented into an automated depth grid script tool, using predicted instead of observed water levels, allowing FEMA access to flood hazard information up to 3 days prior to a flooding event. Water depths are calculated from the AHPS predicted flood stages and are interpolated at 100m spacing along NHD hydrolines within the basin of interest. A water surface elevation raster is generated from these water depths using an Inverse Distance Weighted interpolation. Then, elevation (USGS NED 30m) is subtracted from the water surface elevation raster so that the remaining values represent the depth of predicted flooding above the ground surface. This automated process requires minimal user input and produced forecasted depth grids that were comparable to post
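
    The depth-grid step described above (interpolate a water surface from point forecasts, then subtract the DEM) can be sketched as follows; the inverse-distance-weighting helper and all array shapes are illustrative assumptions, not the operational tool.

        # Minimal sketch: IDW water-surface elevation minus ground elevation.
        import numpy as np

        def idw_surface(points_xy, values, grid_x, grid_y, power=2.0):
            gx, gy = np.meshgrid(grid_x, grid_y)
            wse = np.zeros_like(gx, dtype=float)
            weight_sum = np.zeros_like(gx, dtype=float)
            for (px, py), v in zip(points_xy, values):
                d = np.hypot(gx - px, gy - py)
                w = 1.0 / np.maximum(d, 1e-6) ** power   # avoid division by zero
                wse += w * v
                weight_sum += w
            return wse / weight_sum

        def flood_depth(points_xy, wse_values, grid_x, grid_y, dem):
            # dem must have shape (len(grid_y), len(grid_x))
            wse = idw_surface(points_xy, wse_values, grid_x, grid_y)
            return np.clip(wse - dem, 0.0, None)   # keep only depth above ground

        # usage: depth = flood_depth(gauge_xy, forecast_wse, xs, ys, dem_grid)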

  5. Integration of services into workflow applications

    CERN Document Server

    Czarnul, Pawel

    2015-01-01

    Describing state-of-the-art solutions in distributed system architectures, Integration of Services into Workflow Applications presents a concise approach to the integration of loosely coupled services into workflow applications. It discusses key challenges related to the integration of distributed systems and proposes solutions, both in terms of theoretical aspects such as models and workflow scheduling algorithms, and technical solutions such as software tools and APIs.The book provides an in-depth look at workflow scheduling and proposes a way to integrate several different types of services

  6. NOAA ESRI Grid - depth predictions bathymetry model in New York offshore planning area from Biogeography Branch

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset represents depth predictions from a bathymetric model developed for the New York offshore spatial planning area. The model also includes...

  7. Diameter structure modeling and the calculation of plantation volume of black poplar clones

    Directory of Open Access Journals (Sweden)

    Andrašev Siniša

    2004-01-01

    Full Text Available A method of diameter structure modeling was applied in the calculation of plantation (stand) volume of two black poplar clones in the section Aigeiros (Duby): 618 (Lux) and S1-8. Diameter structure modeling by the Weibull function makes it possible to calculate the plantation volume by a volume line. Based on the comparison of the proposed method with the existing methods, the obtained error of plantation volume was less than 2%. Diameter structure modeling and the calculation of plantation volume by a diameter structure model, using the regularity of the diameter distribution, enable a better analysis of the production level and assortment structure, and can be used in the construction of yield and increment tables.
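
    The calculation the abstract describes can be illustrated as follows: a Weibull density models the diameter distribution, a volume line v(d) gives single-tree volume by diameter class, and their product summed over classes gives stand volume per hectare. The Weibull and volume-line parameters below are placeholders, not the clone-specific fits.

        # Minimal sketch of stand volume from a Weibull diameter distribution.
        import numpy as np

        def stand_volume(n_trees_per_ha, shape=3.0, scale=22.0,
                         d_classes=np.arange(6.0, 60.0, 2.0)):
            # two-parameter Weibull density over diameter at breast height d (cm)
            f = (shape / scale) * (d_classes / scale) ** (shape - 1) \
                * np.exp(-(d_classes / scale) ** shape)
            class_width = d_classes[1] - d_classes[0]
            freq = f * class_width                      # share of trees in each class
            v = 0.00006 * d_classes ** 2.2              # assumed volume line v(d), m^3 per tree
            return n_trees_per_ha * np.sum(freq * v)    # m^3 per hectare

        # usage: stand_volume(400)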

  8. Wavefield Extrapolation in Pseudo-depth Domain

    KAUST Repository

    Ma, Xuxin

    2011-12-11

    Wave-equation based seismic migration and inversion tools are widely used by the energy industry to explore hydrocarbon and mineral resources. By design, most of these techniques simulate wave propagation in a space domain with the vertical axis being depth measured from the surface. Vertical depth is popular because it is a straightforward mapping of the subsurface space. It is, however, not computationally cost-effective because the wavelength changes with local elastic wave velocity, which in general increases with depth in the Earth. As a result, the sampling per wavelength also increases with depth. To avoid spatial aliasing in deep fast media, the seismic wave is oversampled in shallow slow media, which increases the total computation cost. This issue is effectively tackled by using the vertical time axis instead of vertical depth. This is because in a vertical time representation, the "wavelength" is essentially the time period for vertical rays. This thesis extends the vertical time axis to the pseudo-depth axis, which has units of distance while preserving the properties of the vertical time representation. To explore the potential of doing wave-equation based imaging in the pseudo-depth domain, a Partial Differential Equation (PDE) is derived to describe acoustic waves in this new domain. This new PDE is inherently anisotropic because of the use of a constant vertical velocity to convert between depth and vertical time. Such anisotropy results in lower reflection coefficients compared with conventional space domain modeling results. This feature helps to suppress the low-wavenumber artifacts in reverse-time migration images, which are caused by the widely used cross-correlation imaging condition. This thesis illustrates modeling acoustic waves in both the conventional space domain and the pseudo-depth domain. The numerical tool used to model acoustic waves is built on the low-rank approximation of Fourier integral operators. To investigate the potential

  9. Utilization of laser Doppler flowmetry and tissue spectrophotometry for burn depth assessment using a miniature swine model.

    Science.gov (United States)

    Lotter, Oliver; Held, Manuel; Schiefer, Jennifer; Werner, Ole; Medved, Fabian; Schaller, Hans-Eberhard; Rahmanian-Schwarz, Afshin; Jaminet, Patrick; Rothenberger, Jens

    2015-01-01

    Currently, the diagnosis of burn depth is primarily based on a visual assessment and can be dependent on the surgeons' experience. The goal of this study was to determine the ability of laser Doppler flowmeter combined with a tissue spectrophotometer to discriminate burn depth in a miniature swine burn model. Burn injuries of varying depth, including superficial-partial, deep-partial, and full thickness, were created in seven Göttingen minipigs using an aluminium bar (100 °C), which was applied to the abdominal skin for periods of 1, 3, 6, 12, 30, and 60 seconds with gravity alone. The depth of injury was evaluated histologically using hematoxylin and eosin staining. All burns were assessed 3 hours after injury using a device that combines a laser light and a white light to determine blood flow, hemoglobin oxygenation, and relative amount of hemoglobin. The blood flow (41 vs. 124 arbitrary units [AU]) and relative amount of hemoglobin (32 vs. 52 AU) were significantly lower in full thickness compared with superficial-partial thickness burns. However, no significant differences in hemoglobin oxygenation were observed between these depths of burns (61 vs. 60%). These results show the ability of laser Doppler flowmeter and tissue spectrophotometer in combination to discriminate between various depths of injury in the minipig model, suggesting that this device may offer a valuable tool for burn depth assessment influencing burn management. © 2014 by the Wound Healing Society.

  10. INTRA/Mod3.2. Manual and Code Description. Volume I - Physical Modelling

    International Nuclear Information System (INIS)

    Andersson, Jenny; Edlund, O.; Hermann, J.; Johansson, Lise-Lotte

    1999-01-01

    The INTRA Manual consists of two volumes. Volume I of the manual is a thorough description of the code INTRA, the Physical modelling of INTRA and the ruling numerical methods and volume II, the User's Manual is an input description. This document, the Physical modelling of INTRA, contains code characteristics, integration methods and applications

  11. INTRA/Mod3.2. Manual and Code Description. Volume I - Physical Modelling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Jenny; Edlund, O; Hermann, J; Johansson, Lise-Lotte

    1999-01-01

    The INTRA Manual consists of two volumes. Volume I of the manual is a thorough description of the code INTRA, the Physical modelling of INTRA and the ruling numerical methods and volume II, the User's Manual is an input description. This document, the Physical modelling of INTRA, contains code characteristics, integration methods and applications

  12. Comparison of depth-dose distributions of proton therapeutic beams calculated by means of logical detectors and ionization chamber modeled in Monte Carlo codes

    Energy Technology Data Exchange (ETDEWEB)

    Pietrzak, Robert [Department of Nuclear Physics and Its Applications, Institute of Physics, University of Silesia, Katowice (Poland); Konefał, Adam, E-mail: adam.konefal@us.edu.pl [Department of Nuclear Physics and Its Applications, Institute of Physics, University of Silesia, Katowice (Poland); Sokół, Maria; Orlef, Andrzej [Department of Medical Physics, Maria Sklodowska-Curie Memorial Cancer Center, Institute of Oncology, Gliwice (Poland)

    2016-08-01

    The success of proton therapy depends strongly on the precision of treatment planning. Dose distributions in biological tissue may be obtained from Monte Carlo simulations using various scientific codes, making it possible to perform very accurate calculations. However, there are many factors affecting the accuracy of the modeling. One of them is the structure of the objects, called bins, that register the dose. In this work the influence of the bin structure on the dose distributions was examined. MCNPX calculations of the Bragg curve for a 60 MeV proton beam were done in two ways: using simple logical detectors, i.e. volumes defined in water, and using a precise model of an ionization chamber used in clinical dosimetry. The results of the simulations were verified experimentally in a water phantom with a Marcus ionization chamber. The average local dose difference between the measured relative doses in the water phantom and those calculated by means of the logical detectors was 1.4% over the first 25 mm, whereas over the full depth range this difference was 1.6%, with a maximum calculation uncertainty of less than 2.4% and a maximum measurement error of 1%. For the relative doses calculated with the ionization chamber model this average difference was somewhat greater, being 2.3% at depths up to 25 mm and 2.4% over the full range of depths, with a maximum calculation uncertainty of 3%. In the dose calculations the ionization chamber model does not offer any additional advantages over the logical detectors. The results provided by both models are similar and in good agreement with the measurements; however, the logical detector approach is a more time-effective method. - Highlights: • Influence of the bin structure on the proton dose distributions was examined for the MC simulations. • The considered relative proton dose distributions in water correspond to the clinical application. • MC simulations performed with the logical detectors and the

  13. Modeling of Filtration Processes—Microfiltration and Depth Filtration for Harvest of a Therapeutic Protein Expressed in Pichia pastoris at Constant Pressure

    Directory of Open Access Journals (Sweden)

    Muthukumar Sampath

    2014-12-01

    Full Text Available Filtration steps are ubiquitous in biotech processes due to the simplicity of operation, ease of scalability and the myriad of operations that they can be used for. Microfiltration, depth filtration, ultrafiltration and diafiltration are some of the most commonly used biotech unit operations. For clean feed streams, when fouling is minimal, scaling of these unit operations is performed linearly based on the filter area per unit volume of feed stream. However, for cases when considerable fouling occurs, such as the case of harvesting a therapeutic product expressed in Pichia pastoris, linear scaling may not be possible and current industrial practices involve use of 20–30% excess filter area over and above the calculated filter area to account for the uncertainty in scaling. In view of the fact that filters used for harvest are likely to have a very limited lifetime, this oversizing of the filters can add considerable cost of goods for the manufacturer. Modeling offers a way out of this conundrum. In this paper, we examine feasibility of using the various proposed models for filtration of a therapeutic product expressed in Pichia pastoris at constant pressure. It is observed that none of the individual models yield a satisfactory fit of the data, thus indicating that more than one fouling mechanism is at work. Filters with smaller pores were found to undergo fouling via complete pore blocking followed by cake filtration. On the other hand, filters with larger pores were found to undergo fouling via intermediate pore blocking followed by cake filtration. The proposed approach can be used for more accurate sizing of microfilters and depth filters.
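
    For background on the fouling mechanisms named above, the classical constant-pressure blocking laws (Hermia's framework, summarized here rather than reproduced from the paper) can all be written as one characteristic equation in filtrate volume V and time t:

        \frac{d^{2} t}{dV^{2}} \;=\; k \left( \frac{dt}{dV} \right)^{\! n}

    with n = 2 for complete pore blocking, n = 1.5 for standard (internal) blocking, n = 1 for intermediate blocking and n = 0 for cake filtration; the complete-blocking-then-cake and intermediate-blocking-then-cake behaviours reported above correspond to transitions between these exponents during the run.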

  14. Independent evaluation of the SNODAS snow depth product using regional-scale lidar-derived measurements

    Science.gov (United States)

    Hedrick, A.; Marshall, H.-P.; Winstral, A.; Elder, K.; Yueh, S.; Cline, D.

    2015-01-01

    Repeated light detection and ranging (lidar) surveys are quickly becoming the de facto method for measuring spatial variability of montane snowpacks at high resolution. This study examines the potential of a 750 km2 lidar-derived data set of snow depths, collected during the 2007 northern Colorado Cold Lands Processes Experiment (CLPX-2), as a validation source for an operational hydrologic snow model. The SNOw Data Assimilation System (SNODAS) model framework, operated by the US National Weather Service, combines a physically based energy-and-mass-balance snow model with satellite, airborne and automated ground-based observations to provide daily estimates of snowpack properties at nominally 1 km resolution over the conterminous United States. Independent validation data are scarce due to the assimilating nature of SNODAS, compelling the need for an independent validation data set with substantial geographic coverage. Within 12 distinctive 500 × 500 m study areas located throughout the survey swath, ground crews performed approximately 600 manual snow depth measurements during each of the CLPX-2 lidar acquisitions. This supplied a data set for constraining the uncertainty of upscaled lidar estimates of snow depth at the 1 km SNODAS resolution, resulting in a root-mean-square difference of 13 cm. Upscaled lidar snow depths were then compared to the SNODAS estimates over the entire study area for the dates of the lidar flights. The remotely sensed snow depths provided a more spatially continuous comparison data set and agreed more closely to the model estimates than that of the in situ measurements alone. Finally, the results revealed three distinct areas where the differences between lidar observations and SNODAS estimates were most drastic, providing insight into the causal influences of natural processes on model uncertainty.

  15. Modelling and analysing interoperability in service compositions using COSMO

    NARCIS (Netherlands)

    Quartel, Dick; van Sinderen, Marten J.

    2008-01-01

    A service composition process typically involves multiple service models. These models may represent the composite and composed services from distinct perspectives, e.g. to model the role of some system that is involved in a service, and at distinct abstraction levels, e.g. to model the goal,

  16. Industrial Sector Technology Use Model (ISTUM): industrial energy use in the United States, 1974-2000. Volume 4. Technology appendix. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    1979-10-01

    Volume IV of the ISTUM documentation gives information on the individual technology specifications, but relates closely to Chapter II of Volume I. The emphasis in that chapter is on providing an overview of where each technology fits into the general model logic. Volume IV presents the actual cost structure and specification of every technology modeled in ISTUM. The first chapter presents a general overview of the ISTUM technology data base. It includes an explanation of the data base printouts and how the separate cost building blocks are combined to derive an aggregate technology cost. The remaining chapters are devoted to documenting the specific technology cost specifications. Technologies included are: conventional technologies (boiler and non-boiler conventional technologies); fossil-energy technologies (atmospheric fluidized bed combustion, low Btu coal and medium Btu coal gasification); cogeneration (steam, machine drive, and electrolytic service sectors); solar and geothermal technologies (solar steam, solar space heat, and geothermal steam technologies); and conservation technologies.

  17. Energy extension service pilot program evaluation report: the first year. Volume II: pilot state reports

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-01

    Volume II of the Energy Extension Service Evaluation presents a discussion of the operations of the ten EES pilot-state programs during the period from October 1, 1977 through September 30, 1978. Each of the ten pilot states - Alabama, Connecticut, Michigan, New Mexico, Pennsylvania, Tennessee, Texas, Washington, Wisconsin, and Wyoming - received a grant of approximately $1.1 million to develop and implement a 19-month program beginning on October 1, 1977. Volume II provides a case-study description of the operations of the pilot program in each state, with special attention given to the two programs selected in each state for more detailed study and survey research. Some survey data and analysis are presented for the emphasis programs.

  18. Service creation: a model-based approach

    NARCIS (Netherlands)

    Quartel, Dick; van Sinderen, Marten J.; Ferreira Pires, Luis

    1999-01-01

    This paper presents a model-based approach to support service creation. In this approach, services are assumed to be created from (available) software components. The creation process may involve multiple design steps in which the requested service is repeatedly decomposed into more detailed

  19. Measuring depth profiles of residual stress with Raman spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Enloe, W.S.; Sparks, R.G.; Paesler, M.A.

    1988-12-01

    Knowledge of the variation of residual stress is a very important factor in understanding the properties of machined surfaces. The nature of the residual stress can determine a part's susceptibility to wear, deformation, and cracking. Raman spectroscopy is known to be a very useful technique for measuring residual stress in many materials. These measurements are routinely made with a lateral resolution of 1 µm and an accuracy of 0.1 kbar. The variation of stress with depth, however, has not received much attention in the past. A novel technique has been developed that allows quantitative measurement of the variation of the residual stress with depth with an accuracy of 10 nm in the z direction. Qualitative techniques for determining whether the stress is varying with depth are presented. It is also demonstrated that when the stress is changing over the volume sampled, errors can be introduced if the variation of the stress with depth is ignored. Computer aided data analysis is used to determine the depth dependence of the residual stress.

  20. Developing a Long Short-Term Memory (LSTM) based model for predicting water table depth in agricultural areas

    Science.gov (United States)

    Zhang, Jianfeng; Zhu, Yan; Zhang, Xiaoping; Ye, Ming; Yang, Jinzhong

    2018-06-01

    Predicting water table depth over the long-term in agricultural areas presents great challenges because these areas have complex and heterogeneous hydrogeological characteristics, boundary conditions, and human activities; also, nonlinear interactions occur among these factors. Therefore, a new time series model based on Long Short-Term Memory (LSTM), was developed in this study as an alternative to computationally expensive physical models. The proposed model is composed of an LSTM layer with another fully connected layer on top of it, with a dropout method applied in the first LSTM layer. In this study, the proposed model was applied and evaluated in five sub-areas of Hetao Irrigation District in arid northwestern China using data of 14 years (2000-2013). The proposed model uses monthly water diversion, evaporation, precipitation, temperature, and time as input data to predict water table depth. A simple but effective standardization method was employed to pre-process data to ensure data on the same scale. 14 years of data are separated into two sets: training set (2000-2011) and validation set (2012-2013) in the experiment. As expected, the proposed model achieves higher R2 scores (0.789-0.952) in water table depth prediction, when compared with the results of traditional feed-forward neural network (FFNN), which only reaches relatively low R2 scores (0.004-0.495), proving that the proposed model can preserve and learn previous information well. Furthermore, the validity of the dropout method and the proposed model's architecture are discussed. Through experimentation, the results show that the dropout method can prevent overfitting significantly. In addition, comparisons between the R2 scores of the proposed model and Double-LSTM model (R2 scores range from 0.170 to 0.864), further prove that the proposed model's architecture is reasonable and can contribute to a strong learning ability on time series data. Thus, one can conclude that the proposed model can
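
    A minimal Keras sketch of the architecture described above (an LSTM layer with dropout followed by a fully connected output) is given below; the sequence length, feature count and layer width are illustrative assumptions rather than the paper's settings, and inputs are assumed to be standardized beforehand.

        # Minimal sketch (assumed hyperparameters, not the paper's configuration).
        import tensorflow as tf

        seq_len, n_features = 12, 5   # e.g. 12 monthly steps of diversion, evaporation, precipitation, temperature, time
        model = tf.keras.Sequential([
            tf.keras.layers.LSTM(64, dropout=0.2, input_shape=(seq_len, n_features)),  # LSTM layer with dropout
            tf.keras.layers.Dense(1),                                                  # predicted water table depth
        ])
        model.compile(optimizer="adam", loss="mse")

        # usage with standardized arrays X of shape (samples, 12, 5) and y of shape (samples, 1):
        # model.fit(X, y, epochs=200, batch_size=32, validation_split=0.15)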

  1. GlobalSoilMap France: High-resolution spatial modelling the soils of France up to two meter depth.

    Science.gov (United States)

    Mulder, V L; Lacoste, M; Richer-de-Forges, A C; Arrouays, D

    2016-12-15

    This work presents the first GlobalSoilMap (GSM) products for France. We developed an automatic procedure for mapping the primary soil properties (clay, silt, sand, coarse elements, pH, soil organic carbon (SOC), cation exchange capacity (CEC) and soil depth). The procedure employed a data-mining technique and a straightforward method for estimating the 90% confidence intervals (CIs). The most accurate models were obtained for pH, sand and silt. Next, CEC, clay and SOC were predicted reasonably accurately. The models for coarse elements and soil depth were the least accurate. Overall, all models were considered robust; important indicators for this were 1) the small difference in model diagnostics between the calibration and cross-validation set, 2) the unbiased mean predictions, 3) the smaller spatial structure of the prediction residuals in comparison to the observations and 4) the similar performance compared to other developed GlobalSoilMap products. Nevertheless, the confidence intervals (CIs) were rather wide for all soil properties. The median predictions became less reliable with increasing depth, as indicated by the increase of CIs with depth. In addition, model accuracy and the corresponding CIs varied depending on the soil variable of interest, soil depth and geographic location. These findings indicated that the CIs are as informative as the model diagnostics. In conclusion, the presented method resulted in reasonably accurate predictions for the majority of the soil properties. End users can employ the products for different purposes, as was demonstrated with some practical examples. The mapping routine is flexible for cloud computing and provides ample opportunity to be further developed when desired by its users. This allows regional and international GSM partners with fewer resources to develop their own products or, otherwise, to improve the current routine and work together towards a robust high-resolution digital soil map of the world

  2. Forecasted electric power demands for the Baltimore Gas and Electric Company. Volume 1 and Volume 2. Documentation manual

    International Nuclear Information System (INIS)

    Estomin, S.L.; Beach, J.E.; Goldsmith, J.V.

    1991-05-01

    The two-volume report presents the results of an econometric forecast of peak load and electric power demand for the Baltimore Gas and Electric Company (BG&E) through the year 2009. Separate energy sales models were estimated for residential sales in Baltimore City, residential sales in the BG&E service area excluding Baltimore City, commercial sales, industrial sales, streetlighting sales, and Company use plus losses. Econometric equations were also estimated for electric space heating and air conditioning saturation in Baltimore City and in the remainder of the BG&E service territory. In addition to the energy sales models and the electric space conditioning saturation models, econometric models of summer and winter peak demand on the BG&E system were estimated

  3. Evaluation of the Effective Moisture Penetration Depth Model for Estimating Moisture Buffering in Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Woods, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Winkler, J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Christensen, D. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-01-01

    This study examines the effective moisture penetration depth (EMPD) model, and its suitability for building simulations. The EMPD model is a compromise between the simple, inaccurate effective capacitance approach and the complex, yet accurate, finite-difference approach. Two formulations of the EMPD model were examined, including the model used in the EnergyPlus building simulation software. An error in the EMPD model we uncovered was fixed with the release of EnergyPlus version 7.2, and the EMPD model in earlier versions of EnergyPlus should not be used.

  4. Model-Driven Development of Context-Aware Services

    NARCIS (Netherlands)

    Andrade Almeida, João; Iacob, Maria Eugenia; Jonkers, Henk; Quartel, Dick; Eliassen, Frank; Montresor, Alberto

    2006-01-01

    In this paper, we define a model-driven design trajectory for context-aware services consisting of three levels of models with different degrees of abstraction and platform independence. The models at the highest level of platform independence describe the behaviour of a context-aware service and

  5. Fluctuating water depths affect American alligator (Alligator mississippiensis) body condition in the Everglades, Florida, USA

    Science.gov (United States)

    Brandt, Laura A.; Beauchamp, Jeffrey S.; Jeffery, Brian M.; Cherkiss, Michael S.; Mazzotti, Frank J.

    2016-01-01

    Successful restoration of wetland ecosystems requires knowledge of wetland hydrologic patterns and an understanding of how those patterns affect wetland plant and animal populations. Within the Everglades (Florida, USA) restoration effort, an applied science strategy including conceptual ecological models linking drivers to indicators is being used to organize current scientific understanding to support restoration. A key driver of the ecosystem affecting the distribution and abundance of organisms is the timing, distribution, and volume of water flows that result in water depth patterns across the landscape. American alligators (Alligator mississippiensis) are one of the ecological indicators being used to assess Everglades restoration because they are a keystone species and integrate biological impacts of hydrological operations through all life stages. Alligator body condition (the relative fatness of an animal) is one of the metrics being used, and targets have been set to allow us to track progress. We examined trends in alligator body condition using Fulton's K over a 15-year period (2000–2014) at seven different wetland areas within the Everglades ecosystem, assessed patterns and trends relative to restoration targets, and related those trends to hydrologic variables. We developed a series of 17 a priori hypotheses that we tested with an information-theoretic approach to identify which hydrologic factors affect alligator body condition. Alligator body condition was highest throughout the Everglades during the early 2000s and is approximately 5–10% lower now (2014). Values have varied by year, area, and hydrology. Body condition was positively correlated with range in water depth and fall water depth. Our top model was the "Current" model and included variables that describe current year hydrology (spring depth, fall depth, hydroperiod, range, interaction of range and fall depth, interaction of range and hydroperiod). Across all models, interaction
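
    Since the study's body-condition metric is Fulton's K, a short illustration of the index is sketched below; the scaling constant and the measurement units (mass in grams, length in centimetres) are assumptions for illustration and are not taken from the paper.

        # Fulton's condition factor: K = constant * mass / length**3.
        # The constant (100) and the units are illustrative assumptions; studies
        # choose them so that K falls in a convenient numerical range.
        def fultons_k(mass_g: float, length_cm: float, constant: float = 100.0) -> float:
            return constant * mass_g / length_cm ** 3

        # Example: relative change in condition between two hypothetical surveys
        k_2000 = fultons_k(mass_g=45000, length_cm=180)
        k_2014 = fultons_k(mass_g=42000, length_cm=180)
        print(f"Relative change in K: {100 * (k_2014 - k_2000) / k_2000:.1f}%")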

  6. A conceptual model of nurses' goal orientation, service behavior, and service performance.

    Science.gov (United States)

    Chien, Chun-Cheng; Chou, Hsin-Kai; Hung, Shuo-Tsung

    2008-01-01

    Based on the conceptual framework known as the "service triangle," the authors constructed a model of nurses' goal orientation, service behavior, and service performance to investigate the antecedents and consequences of the medical service behavior provided by nurses. This cross-sectional study collected data from 127 nurses in six hospitals using a mail-in questionnaire. Analysis of the model revealed that the customer-oriented behavior of nurses had a positive influence on organizational citizenship behavior; and both of these behaviors had a significant positive influence on service performance. The results also indicate that a higher learning goal orientation among nurses was associated with the performance of both observable customer-oriented behavior and organizational-citizenship behavior.

  7. A web service for service composition to aid geospatial modelers

    Science.gov (United States)

    Bigagli, L.; Santoro, M.; Roncella, R.; Mazzetti, P.

    2012-04-01

    The identification of appropriate mechanisms for process reuse, chaining and composition is considered a key enabler for the effective uptake of a global Earth Observation infrastructure, currently pursued by the international geospatial research community. In the Earth and Space Sciences, such a facility could primarily enable integrated and interoperable modeling, for which several approaches have been proposed and developed over recent years. In fact, GEOSS is specifically tasked with the development of the so-called "Model Web". At increasing levels of abstraction and generalization, the initial stove-pipe software tools have evolved to community-wide modeling frameworks, to Component-Based Architecture solutions, and, more recently, have started to embrace Service-Oriented Architecture technologies, such as the OGC WPS specification and the WS-* stack of W3C standards for service composition. However, so far, the level of abstraction seems too low for implementing the Model Web vision, and far too complex technological aspects must still be addressed by both providers and users, resulting in limited usability and, eventually, difficult uptake. Following the recent ICT trend of resource virtualization, it has been suggested that users in need of a particular processing capability, required by a given modeling workflow, may benefit from outsourcing the composition activities to an external first-class service, according to the Composition as a Service (CaaS) approach. A CaaS system provides the necessary interoperability service framework for adaptation, reuse and complementation of existing processing resources (including models and geospatial services in general) in the form of executable workflows. This work introduces the architecture of a CaaS system, as a distributed information system for creating, validating, editing, storing, publishing, and executing geospatial workflows. This way, the users can be freed from the need of a composition infrastructure and

  8. Perceived Service Quality models: Are They Still Relevant?

    OpenAIRE

    Polyakova, Olga; Mirza, Mohammed T.

    2015-01-01

    This paper reviews the concept of perceived service quality and provides an update to the body of service quality knowledge. It consolidates the pathway of perceived service quality concept, from its emergence to the research model’s development. It also critically reviews service characteristics as prerequisites of perceived service quality conceptualisation. The examination of six perceived service quality models is intended to identify a superior model that could be used by further researc...

  9. Customer premises services market demand assessment 1980 - 2000. Volume 1: Executive summary

    Science.gov (United States)

    Gamble, R. B.; Saporta, L.; Heidenrich, G. A.

    1983-01-01

    Estimates of market demand for domestic civilian telecommunications services for the years 1980 to 2000 are provided. Overall demand, demand for satellite services, demand for satellite-delivered Customer Premises Service (CPS), and demand for 30/20 GHz Customer Premises Services are covered. Emphasis is placed on the CPS market, and demand is segmented by market, by service, by user class and by geographic region. Prices for competing services are discussed and the distribution of traffic with respect to distance is estimated. A nationwide traffic distribution model for CPS, in terms of demand for CPS traffic and earth stations for each of the major SMSAs in the United States, is provided.

  10. Determination of rock depth using artificial intelligence techniques

    Institute of Scientific and Technical Information of China (English)

    R. Viswanathan; Pijush Samui

    2016-01-01

    This article adopts three artificial intelligence techniques, Gaussian Process Regression (GPR), Least Square Support Vector Machine (LSSVM) and Extreme Learning Machine (ELM), for prediction of rock depth (d) at any point in Chennai. GPR, ELM and LSSVM have been used as regression techniques. Latitude and longitude are also adopted as inputs of the GPR, ELM and LSSVM models. The performance of the ELM, GPR and LSSVM models has been compared. The developed ELM, GPR and LSSVM models produce spatial variability of rock depth and offer robust models for the prediction of rock depth.
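
    As an illustration of one of the three regression techniques named above, a Gaussian Process Regression sketch on (latitude, longitude) inputs is given below using scikit-learn; the kernel choice and the synthetic coordinates and depths are assumptions, not the article's data or settings.

        # Gaussian Process Regression on (latitude, longitude) -> rock depth (m).
        # Kernel choice and the synthetic training data are assumptions.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        X = rng.uniform([12.9, 80.1], [13.2, 80.3], size=(50, 2))       # lat, lon
        y = 5.0 + 3.0 * np.sin(40 * X[:, 0]) + rng.normal(0, 0.3, 50)   # rock depth

        gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.05) + WhiteKernel(),
                                       normalize_y=True).fit(X, y)
        depth_mean, depth_std = gpr.predict([[13.05, 80.2]], return_std=True)
        print(depth_mean[0], depth_std[0])   # prediction and its uncertainty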

  11. Service-oriented infrastructure for scientific data mashups

    Science.gov (United States)

    Baru, C.; Krishnan, S.; Lin, K.; Moreland, J. L.; Nadeau, D. R.

    2009-12-01

    An important challenge in informatics is the development of concepts and corresponding architecture and tools to assist scientists with their data integration tasks. A typical Earth Science data integration request may be expressed, for example, as "For a given region (i.e. lat/long extent, plus depth), return a 3D structural model with accompanying physical parameters of density, seismic velocities, geochemistry, and geologic ages, using a cell size of 10km." Such requests create "mashups" of scientific data. Currently, such integration is hand-crafted and depends heavily upon a scientist's intimate knowledge of how to process, interpret, and integrate data from individual sources. In most cases, the ultimate "integration" is performed by overlaying output images from individual processing steps using image manipulation software such as, say, Adobe Photoshop, leading to "Photoshop science", where it is neither easy to repeat the integration steps nor to share the data mashup. As a result, scientists share only the final images and not the mashup itself. A more capable information infrastructure is needed to support the authoring and sharing of scientific data mashups. The infrastructure must include services for data discovery, access, and transformation and should be able to create mashups that are interactive, allowing users to probe and manipulate the data and follow its provenance. We present an architectural framework based on a service-oriented architecture for scientific data mashups in a distributed environment. The framework includes services for Data Access, Data Modeling, and Data Interaction. The Data Access services leverage capabilities for discovery and access to distributed data resources provided by efforts such as GEON and the EarthScope Data Portal, and services for federated metadata catalogs under development by projects like the Geosciences Information Network (GIN). The Data Modeling services provide 2D, 3D, and 4D modeling

  12. Validation of a laser-assisted wound measurement device in a wound healing model.

    Science.gov (United States)

    Constantine, Ryan S; Bills, Jessica D; Lavery, Lawrence A; Davis, Kathryn E

    2016-10-01

    In the treatment and monitoring of a diabetic or chronic wound, accurate and repeatable measurement of the wound provides indispensable data for the patient's medical record. This study aims to measure the accuracy of the laser-assisted wound measurement (LAWM) device against traditional methods in the measurement of area, depth and volume. We measured four 'healing' wounds in a Play-Doh®-based model over five subsequent states of wound healing progression, in which the model was irregularly filled in to replicate the healing process. We evaluated the LAWM device against traditional methods, including digital photograph assessment with National Institutes of Health ImageJ software, measurement of depth with a ruler and weight-to-volume assessment with dental paste. Statistical analyses included analysis of variance (ANOVA) and paired t-tests. We demonstrate differences between traditional ruler depth measurement and LAWM device measurement that were significant or nearly statistically significant, but no statistically significant differences in area measurement. Volume measurements were found to be significantly different in two of the wounds. Rate of percentage change was analysed for volume and depth in the wound healing model, and the LAWM device was not significantly different from the traditional measurement technique. While occasionally inaccurate in its absolute measurements, the LAWM device is a useful tool in the clinician's arsenal, as it reliably measures the rate of percentage change in depth and volume and offers a potentially aseptic alternative to traditional measurement techniques. © 2014 The Authors. International Wound Journal © 2014 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  13. A model for ageing-home-care service process improvement

    OpenAIRE

    Yu, Shu-Yan; Shie, An-Jin

    2017-01-01

    The purpose of this study was to develop an integrated model to improve service processes in ageing-home-care. According to the literature, existing service processes have potential service failures that affect service quality and efficacy. However, most previous studies have only focused on conceptual model development using New Service Development (NSD) and fail to provide a systematic model to analyse potential service failures and facilitate managers developing solutions to improve the se...

  14. Manager personality, manager service quality orientation, and service climate: test of a model.

    Science.gov (United States)

    Salvaggio, Amy Nicole; Schneider, Benjamin; Nishii, Lisa H; Mayer, David M; Ramesh, Anuradha; Lyon, Julie S

    2007-11-01

    This article conceptually and empirically explores the relationships among manager personality, manager service quality orientation, and climate for customer service. Data were collected from 1,486 employees and 145 managers in grocery store departments (N = 145) to test the authors' theoretical model. Largely consistent with hypotheses, results revealed that core self-evaluations were positively related to managers' service quality orientation, even after dimensions of the Big Five model of personality were controlled, and that service quality orientation fully mediated the relationship between personality and global service climate. Implications for personality and organizational climate research are discussed. (c) 2007 APA

  15. An analysis on Public Service Announcements (PSA) within the scope of Elaboration Likelihood Model: Orange and Hazelnut Consumption Samples

    OpenAIRE

    Bical, Adil; Yılmaz, R. Ayhan

    2018-01-01

    The purpose of the study is to reveal how persuasion works in public service announcements on hazelnut and orange consumption broadcast in Turkey. According to Petty and Cacioppo, the Elaboration Likelihood Model explains the process of persuasion through two routes: central and peripheral. In-depth interviews were conducted to achieve the goal of the study. Respondents were asked whether they processed the message of the PSA centrally or peripherally. Advertisements on consumption of hazelnu...

  16. Energy Extension Service Pilot Program: evaluation report after two years. Volume I. Evaluation summary

    Energy Technology Data Exchange (ETDEWEB)

    None

    1980-04-01

    The EES pilot program was initiated in August 1977, when 10 states were selected on a competitive basis for participation. The pilot states (Alabama, Connecticut, Michigan, New Mexico, Pennsylvania, Tennessee, Texas, Washington, Wisconsin, and Wyoming) devoted the first 6 months to start-up activities. This document is a follow-up report to the three volume Evaluation Summary of the first year of the pilot EES program published in September 1979. The purpose of this report is to provide an overview of the impacts and costs of the two years of the pilot program, and to check the consistency of findings over the two year period. The analysis addresses the following: (1) were the impact findings of Year I and Year II consistent, or did Year I and Year II attitudes and behavior vary. If variation existed, could it be attributed to program changes as the EES progressed from a start-up phase (Year I) to more normal service delivery (Year II); and (2) did costs of service delivery change (again reflecting start-up and normal service delivery costs). Did cost changes affect conclusions about the relative cost effectiveness of delivering services to different target audiences.

  17. ADVANCED EARTH OBSERVATION APPROACH FOR MULTISCALE FOREST ECOSYSTEM SERVICES MODELING AND MAPPING (MIMOSE

    Directory of Open Access Journals (Sweden)

    G. Chirici

    2014-04-01

    In the last decade ecosystem services (ES) have been proposed as a method for quantifying the multifunctional role of forest ecosystems. Their spatial distribution over large areas is frequently limited by the lack of information, because field data collection with traditional methods requires much effort in terms of time and cost. In this contribution we propose a methodology (namely, MultIscale Mapping Of ecoSystem servicEs - MIMOSE) based on the integration of remotely sensed images and field observations to produce a wall-to-wall geodatabase of forest parcels accompanied by several pieces of information useful as a basis for future trade-off analysis of different ES. Here, we present the application of the MIMOSE approach to a study area of 443,758 hectares coincident with the administrative Molise Region in Central Italy. The procedure is based on a local high-resolution forest types map integrated with information on the main forest management approaches. Through the non-parametric k-Nearest Neighbors technique, we produced a growing stock volume map integrating a local forest inventory with multispectral satellite IRS LISS III imagery. With the growing stock volume map we derived a forest age map for even-aged forest types. This information was later used to automatically create a vector forest parcels map by multidimensional image segmentation, which was finally populated with information useful for spatial ES estimation. The contribution briefly introduces the MIMOSE methodology and presents the preliminary results we achieved, which constitute the basis for a future implementation of ES modeling.
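
    The growing stock volume mapping step described above rests on a non-parametric k-Nearest Neighbours estimate that links field plot volumes to multispectral predictors. A generic sketch of that kind of k-NN imputation is given below; the band names, the value of k and the synthetic data are assumptions, not the study's configuration.

        # k-Nearest Neighbours imputation of growing stock volume (m^3/ha) from
        # multispectral predictors. Bands, k and the synthetic data are assumptions.
        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor

        rng = np.random.default_rng(1)
        bands = rng.uniform(0, 255, size=(200, 4))                  # e.g. green, red, NIR, SWIR
        volume = 50 + 0.8 * bands[:, 2] + rng.normal(0, 10, 200)    # field plot volumes

        knn = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(bands, volume)
        wall_to_wall = knn.predict(rng.uniform(0, 255, size=(1000, 4)))  # map pixels
        print(wall_to_wall.mean())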

  18. 1988 DOE model conference proceedings: Volume 4

    Energy Technology Data Exchange (ETDEWEB)

    1988-01-01

    These Proceedings of the October 3-7, 1988, DOE Model Conference are a compilation of the papers that were presented in the technical or poster sessions at the conference. Papers and posters not submitted for publication are not included in the Proceedings. The Table of Contents lists the titles of papers as well as the names of the presenters. These individuals are not, in all cases, the primary authors of the papers published. The actual title pages, appearing later with the papers, show the primary author(s) and all co-authors. The papers in all three volumes of the Proceedings appear as they were originally submitted for publication and have not been edited or changed in any way. Topics discussed in Volume 4 include site characterization and remediation projects, environmental monitoring and modeling; disposal site selection and facility design, risk assessment, safety and health issues, and site remediation technology.

  19. 1988 DOE model conference proceedings: Volume 4

    International Nuclear Information System (INIS)

    1988-01-01

    These Proceedings of the October 3-7, 1988, DOE Model Conference are a compilation of the papers that were presented in the technical or poster sessions at the conference. Papers and posters not submitted for publication are not included in the Proceedings. The Table of Contents lists the titles of papers as well as the names of the presenters. These individuals are not, in all cases, the primary authors of the papers published. The actual title pages, appearing later with the papers, show the primary author(s) and all co-authors. The papers in all three volumes of the Proceedings appear as they were originally submitted for publication and have not been edited or changed in any way. Topics discussed in Volume 4 include site characterization and remediation projects, environmental monitoring and modeling; disposal site selection and facility design, risk assessment, safety and health issues, and site remediation technology

  20. Independent evaluation of the SNODAS snow depth product using regional scale LiDAR-derived measurements

    Science.gov (United States)

    Hedrick, A.; Marshall, H.-P.; Winstral, A.; Elder, K.; Yueh, S.; Cline, D.

    2014-06-01

    Repeated Light Detection and Ranging (LiDAR) surveys are quickly becoming the de facto method for measuring spatial variability of montane snowpacks at high resolution. This study examines the potential of a 750 km2 LiDAR-derived dataset of snow depths, collected during the 2007 northern Colorado Cold Lands Processes Experiment (CLPX-2), as a validation source for an operational hydrologic snow model. The SNOw Data Assimilation System (SNODAS) model framework, operated by the US National Weather Service, combines a physically-based energy-and-mass-balance snow model with satellite, airborne and automated ground-based observations to provide daily estimates of snowpack properties at nominally 1 km resolution over the coterminous United States. Independent validation data is scarce due to the assimilating nature of SNODAS, compelling the need for an independent validation dataset with substantial geographic coverage. Within twelve distinctive 500 m × 500 m study areas located throughout the survey swath, ground crews performed approximately 600 manual snow depth measurements during each of the CLPX-2 LiDAR acquisitions. This supplied a dataset for constraining the uncertainty of upscaled LiDAR estimates of snow depth at the 1 km SNODAS resolution, resulting in a root-mean-square difference of 13 cm. Upscaled LiDAR snow depths were then compared to the SNODAS-estimates over the entire study area for the dates of the LiDAR flights. The remotely-sensed snow depths provided a more spatially continuous comparison dataset and agreed more closely to the model estimates than that of the in situ measurements alone. Finally, the results revealed three distinct areas where the differences between LiDAR observations and SNODAS estimates were most drastic, suggesting natural processes specific to these regions as causal influences on model uncertainty.
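
    The comparison described above hinges on aggregating fine-scale LiDAR snow depths to the nominal 1 km model grid and computing a root-mean-square difference against the model estimates. A sketch of that block-averaging and RMSD computation is shown below; grid sizes and the synthetic arrays are assumptions.

        # Upscale fine LiDAR snow depths to a coarse model grid by block averaging,
        # then compute the root-mean-square difference against model estimates.
        import numpy as np

        lidar = np.random.gamma(2.0, 0.5, size=(2000, 2000))    # snow depth (m), ~1 m cells
        block = 1000                                             # fine cells per 1 km cell
        coarse = lidar.reshape(2, block, 2, block).mean(axis=(1, 3))   # 2 x 2 km grid

        snodas = coarse + np.random.normal(0, 0.13, coarse.shape)      # model estimates

        rmsd = np.sqrt(np.mean((coarse - snodas) ** 2))
        print(f"RMSD: {rmsd:.3f} m")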

  1. A Model-Driven, Science Data Product Registration Service

    Science.gov (United States)

    Hardman, S.; Ramirez, P.; Hughes, J. S.; Joyner, R.; Cayanan, M.; Lee, H.; Crichton, D. J.

    2011-12-01

    The Planetary Data System (PDS) has undertaken an effort to overhaul the PDS data architecture (including the data model, data structures, data dictionary, etc.) and to deploy an upgraded software system (including data services, distributed data catalog, etc.) that fully embraces the PDS federation as an integrated system while taking advantage of modern innovations in information technology (including networking capabilities, processing speeds, and software breakthroughs). A core component of this new system is the Registry Service that will provide functionality for tracking, auditing, locating, and maintaining artifacts within the system. These artifacts can range from data files and label files, schemas, dictionary definitions for objects and elements, documents, services, etc. This service offers a single reference implementation of the registry capabilities detailed in the Consultative Committee for Space Data Systems (CCSDS) Registry Reference Model White Book. The CCSDS Reference Model in turn relies heavily on the Electronic Business using eXtensible Markup Language (ebXML) standards for registry services and the registry information model, managed by the OASIS consortium. Registries are pervasive components in most information systems. For example, data dictionaries, service registries, LDAP directory services, and even databases provide registry-like services. These all include an account of informational items that are used in large-scale information systems ranging from data values such as names and codes, to vocabularies, services and software components. The problem is that many of these registry-like services were designed with their own data models associated with the specific type of artifact they track. Additionally these services each have their own specific interface for interacting with the service. This Registry Service implements the data model specified in the ebXML Registry Information Model (RIM) specification that supports the various

  2. California Integrated Service Delivery Evaluation Report. Phase I

    Science.gov (United States)

    Moore, Richard W.; Rossy, Gerard; Roberts, William; Chapman, Kenneth; Sanchez, Urte; Hanley, Chris

    2010-01-01

    This study is a formative evaluation of the OneStop Career Center Integrated Service Delivery (ISD) Model within the California Workforce System. The study was sponsored by the California Workforce Investment Board. The study completed four in-depth case studies of California OneStops to describe how they implemented the ISD model which brings…

  3. Prediction of resource volumes at untested locations using simple local prediction models

    Science.gov (United States)

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2006-01-01

    This paper shows how local spatial nonparametric prediction models can be applied to estimate volumes of recoverable gas resources at individual undrilled sites, at multiple sites on a regional scale, and to compute confidence bounds for regional volumes based on the distribution of those estimates. An approach that combines cross-validation, the jackknife, and bootstrap procedures is used to accomplish this task. Simulation experiments show that cross-validation can be applied beneficially to select an appropriate prediction model. The cross-validation procedure worked well for a wide range of different states of nature and levels of information. Jackknife procedures are used to compute individual prediction estimation errors at undrilled locations. The jackknife replicates also are used with a bootstrap resampling procedure to compute confidence bounds for the total volume. The method was applied to data (partitioned into a training set and target set) from the Devonian Antrim Shale continuous-type gas play in the Michigan Basin in Otsego County, Michigan. The analysis showed that the model estimate of total recoverable volumes at prediction sites is within 4 percent of the total observed volume. The model predictions also provide frequency distributions of the cell volumes at the production unit scale. Such distributions are the basis for subsequent economic analyses. © Springer Science+Business Media, LLC 2007.
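
    The abstract combines cross-validation, the jackknife and the bootstrap; a simplified sketch of the jackknife/bootstrap part is given below, with a deliberately simple local predictor (the mean of the k nearest drilled sites). The predictor, k and the synthetic data are assumptions and do not reproduce the paper's model.

        # Jackknife prediction errors at drilled sites, then a bootstrap of the
        # replicates to put confidence bounds on a regional total volume.
        import numpy as np

        rng = np.random.default_rng(2)
        xy = rng.uniform(0, 10, size=(80, 2))                 # drilled-site coordinates
        vol = rng.lognormal(mean=1.0, sigma=0.6, size=80)     # recoverable volumes

        def local_predict(target, xy_train, vol_train, k=5):
            d = np.linalg.norm(xy_train - target, axis=1)
            return vol_train[np.argsort(d)[:k]].mean()

        # Jackknife: predict each drilled site from the remaining sites
        jack = np.array([local_predict(xy[i], np.delete(xy, i, 0), np.delete(vol, i))
                         for i in range(len(vol))])

        # Bootstrap the jackknife replicates to bound the regional total
        totals = [rng.choice(jack, size=len(jack), replace=True).sum()
                  for _ in range(2000)]
        print(np.percentile(totals, [5, 95]))                 # 90% confidence bounds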

  4. A simple formula for depth dose calculation for Co-60 teletherapy beam dosimetry

    International Nuclear Information System (INIS)

    Tripathi, U.B.; Kelkar, N.Y.

    1979-01-01

    Knowledge of dose at all points of interest in the plane of the tumour is essential for treatment planning. A very simple formula for scatter dose calculation along the central axis of a Co-60 beam has been derived. This formula uses the primary dose at depth d, the scatter air ratio at the depth of maximum ionisation, and the effective depth of the volume irradiating the medium. The method for calculation of percentage depth dose at any point in the principal plane has been explained in detail. The simple form of the formulation will help in improving treatment plans for lesions treated using Co-60 teletherapy machines. (orig.)

  5. NOAA ESRI Grid - depth uncertainty predictions in New York offshore planning area from Biogeography Branch bathymetry model

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This dataset represents depth uncertainty predictions from a bathymetric model developed for the New York offshore spatial planning area. The model also includes...

  6. Revenue-Sharing Contract Models for Logistics Service Supply Chains with Mass Customization Service

    Directory of Open Access Journals (Sweden)

    Weihua Liu

    2015-01-01

    The revenue-sharing contract is one of the most important supply chain coordination contracts, and it has been applied in various supply chains. However, studies related to service supply chains with mass customization (MC) are lacking. Considering the equity of benefit distribution between the members of service supply chains, in this paper we designed two revenue-sharing contracts. The first contract, for the maximum equity of a single logistics service integrator (LSI) and a single functional logistics service provider (FLSP) in a two-echelon logistics service supply chain, was designed by introducing the fair entropy function (the "one to one" model). Furthermore, the method is extended to a more complex supply chain, which consists of a single LSI and multiple FLSPs. A new contract was designed to consider not only the equity of the LSI and each FLSP but also the equity between the FLSPs (the "one to N" model). The "one to one" model in a three-echelon LSSC is also provided. The results exemplify that, whether in the "one to one" model or the "one to N" model, there exists a best interval of the customization level at which the revenue-sharing coefficient reaches its maximum.

  7. Improved pulmonary function in working divers breathing nitrox at shallow depths

    Science.gov (United States)

    Fitzpatrick, Daniel T.; Conkin, Johnny

    2003-01-01

    INTRODUCTION: There are limited data about the long-term pulmonary effects of nitrox use in divers at shallow depths. This study examined changes in pulmonary function in a cohort of working divers breathing a 46% oxygen-enriched mixture while diving at depths less than 12 m. METHODS: A total of 43 working divers from the Neutral Buoyancy Laboratory (NBL), NASA-Johnson Space Center completed a questionnaire providing information on diving history prior to NBL employment, diving history outside the NBL since employment, and smoking history. Cumulative dive hours were obtained from the NBL dive-time database. Medical records were reviewed to obtain the diver's height, weight, and pulmonary function measurements from initial pre-dive, first-year and third-year annual medical examinations. RESULTS: The initial forced vital capacity (FVC) and forced expiratory volume in 1 s (FEV1) were greater than predicted, 104% and 102%, respectively. After 3 yr of diving at the NBL, both the FVC and FEV1 showed a significant increase over the initial volumes. Regular diving with nitrox at shallow depths over a 3-yr period did not impair pulmonary function. Improvements in FVC and FEV1 were primarily due to a training effect.

  8. MOD-AGE - an algorithm for age-depth model construction; U-series dated speleothems case study

    Science.gov (United States)

    Hercman, H.; Pawlak, J.

    2012-04-01

    We present MOD-AGE - a new system for chronology construction. MOD-AGE can be used for profiles that have been dated by different methods. As input data, the system uses the following basic measurements: activities, atomic ratios or ages, as well as depth measurements. Based on probability distributions describing the measurement results, MOD-AGE estimates the age-depth relation and its confidence bands. To avoid the use of difficult-to-meet assumptions, MOD-AGE uses nonparametric methods. We applied a Monte Carlo simulation to model age and depth values based on the real distribution of counted data (activities, atomic ratios, depths, etc.). Several fitting methods could be applied for estimating the relationship; based on several tests, we decided to use the LOESS method (locally weighted scatterplot smoothing). The stratigraphic correction procedure applied in the MOD-AGE program uses a probability calculus, which assumes that the ages of all the samples are correctly estimated. Information about the probability distribution of the samples' ages is used to estimate the most probable sequence that is concordant with the superposition rule. MOD-AGE is presented as a tool for the chronology construction of speleothems that have been analyzed by the U-series method, and it is compared to the StalAge algorithm presented by D. Scholz and D. L. Hoffmann (2011). Scholz, D., Hoffmann, D. L., 2011. StalAge - An algorithm designed for construction of speleothem age models. Quaternary Geochronology 6, 369-382.
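
    A minimal sketch of the core idea (Monte Carlo perturbation of the dated ages within their errors, a LOESS fit per realisation, and percentile confidence bands) is given below using the lowess smoother from statsmodels (its xvals argument requires statsmodels 0.12 or later). The dates, errors and smoothing fraction are assumptions, and the stratigraphic correction step is omitted.

        # Monte Carlo age-depth modelling with a LOESS (lowess) fit per realisation.
        # Dates, errors and the smoothing fraction are illustrative assumptions;
        # MOD-AGE's stratigraphic correction step is not reproduced here.
        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        rng = np.random.default_rng(3)
        depth = np.array([5.0, 20.0, 45.0, 70.0, 95.0, 120.0])      # mm below top
        age = np.array([1.2, 3.5, 6.1, 8.0, 10.4, 12.9])            # ka, U-series ages
        age_err = np.array([0.2, 0.3, 0.3, 0.4, 0.4, 0.5])          # 1-sigma errors

        grid = np.linspace(depth.min(), depth.max(), 50)
        sims = np.array([lowess(rng.normal(age, age_err), depth, frac=0.7, xvals=grid)
                         for _ in range(500)])

        median_age = np.median(sims, axis=0)
        lo, hi = np.percentile(sims, [2.5, 97.5], axis=0)           # ~95% confidence band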

  9. Investigation of Scour Depth at Bridge Piers using Bri-Stars Model in Iran

    OpenAIRE

    Gh. Saeidifar; F. Raeiszadeh

    2011-01-01

    BRI-STARS (BRIdge Stream Tube model for Alluvial River Simulation) was used to investigate the scour depth around bridge piers in some of the major river systems in Iran. Model calibration was performed by collecting different field data. The field data are catalogued in three categories: first, bridges whose river beds are formed of fine material; second, bridges whose river beds are formed of sand; and finally, bridges whose river beds a...

  10. Evaluating methods for controlling depth perception in stereoscopic cinematography

    Science.gov (United States)

    Sun, Geng; Holliman, Nick

    2009-02-01

    Existing stereoscopic imaging algorithms can create static stereoscopic images with perceived depth control function to ensure a compelling 3D viewing experience without visual discomfort. However, current algorithms do not normally support standard Cinematic Storytelling techniques. These techniques, such as object movement, camera motion, and zooming, can result in dynamic scene depth change within and between a series of frames (shots) in stereoscopic cinematography. In this study, we empirically evaluate the following three types of stereoscopic imaging approaches that aim to address this problem. (1) Real-Eye Configuration: set camera separation equal to the nominal human eye interpupillary distance. The perceived depth on the display is identical to the scene depth without any distortion. (2) Mapping Algorithm: map the scene depth to a predefined range on the display to avoid excessive perceived depth. A new method that dynamically adjusts the depth mapping from scene space to display space is presented in addition to an existing fixed depth mapping method. (3) Depth of Field Simulation: apply Depth of Field (DOF) blur effect to stereoscopic images. Only objects that are inside the DOF are viewed in full sharpness. Objects that are far away from the focus plane are blurred. We performed a human-based trial using the ITU-R BT.500-11 Recommendation to compare the depth quality of stereoscopic video sequences generated by the above-mentioned imaging methods. Our results indicate that viewers' practical 3D viewing volumes are different for individual stereoscopic displays and viewers can cope with much larger perceived depth range in viewing stereoscopic cinematography in comparison to static stereoscopic images. Our new dynamic depth mapping method does have an advantage over the fixed depth mapping method in controlling stereo depth perception. The DOF blur effect does not provide the expected improvement for perceived depth quality control in 3D cinematography
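
    The "Mapping Algorithm" above remaps scene depth into a bounded perceived-depth budget on the display, and the dynamic variant recomputes the scene depth range per frame or per shot. A minimal sketch of that remapping is shown below; the linear mapping and the comfort-range bounds are illustrative assumptions rather than the paper's algorithm.

        # Remap scene depth into a bounded display depth budget (e.g. screen disparity).
        # The linear form and the bounds are illustrative assumptions.
        def map_depth(z_scene, scene_near, scene_far, disp_near=-20.0, disp_far=30.0):
            t = (z_scene - scene_near) / (scene_far - scene_near)
            return disp_near + t * (disp_far - disp_near)

        # Dynamic variant: recompute scene_near/scene_far for each frame (or shot) so
        # camera motion or zooming never pushes content outside the comfort range.
        frame_depths = [2.0, 3.5, 8.0, 15.0]
        near, far = min(frame_depths), max(frame_depths)
        print([round(map_depth(z, near, far), 1) for z in frame_depths])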

  11. International Digital Elevation Model Service (IDEMS): A Revived IAG Service

    Science.gov (United States)

    Kelly, K. M.; Hirt, C., , Dr; Kuhn, M.; Barzaghi, R.

    2017-12-01

    A newly developed International Digital Elevation Model Service (IDEMS) is now available under the umbrella of the International Gravity Field Service of the International Association of Geodesy. Hosted and operated by Environmental Systems Research Institute (Esri) (http://www.esri.com/), the new IDEMS website is available at: https://idems.maps.arcgis.com/home/index.html. IDEMS provides a focus for distribution of data and information about various digital elevation models, including spherical-harmonic models of Earth's global topography and lunar and planetary DEM. Related datasets, such as representation of inland water within DEMs, and relevant software which are available in the public domain are also provided. Currently, IDEMS serves as repository of links to providers of global terrain and bathymetry, terrain related Earth models and datasets such as digital elevation data services managed and maintained by Esri (Terrain and TopoBathy), Bedmap2-Ice thickness and subglacial topographic model of Antarctica and Ice, Cloud, and Land Elevation ICESat/GLAS Data, as well as planetary terrain data provided by PDS Geosciences Node at Washington University, St. Louis. These services provide online access to a collection of multi-resolution and multi-source elevation and bathymetry data, including metadata and source information. In addition to IDEMS current holdings of terrestrial and planetary DEMs, some topography related products IDEMS may include in future are: dynamic ocean topography, 3D crustal density models, Earth's dynamic topography, etc. IDEMS may also consider terrain related products such as quality assessments, global terrain corrections, global height anomaly-to-geoid height corrections and other geodesy-relevant studies and products. IDEMS encourages contributions to the site from the geodetic community in any of the product types listed above. Please contact the authors if you would like to contribute or recommend content you think appropriate for

  12. Quantification of effective plant rooting depth: advancing global hydrological modelling

    Science.gov (United States)

    Yang, Y.; Donohue, R. J.; McVicar, T.

    2017-12-01

    Plant rooting depth (Zr) is a key parameter in hydrological and biogeochemical models, yet the global spatial distribution of Zr is largely unknown due to the difficulties in its direct measurement. Moreover, Zr observations are usually only representative of a single plant or several plants, which can differ greatly from the effective Zr over a modelling unit (e.g., catchment or grid-box). Here, we provide a global parameterization of an analytical Zr model that balances the marginal carbon cost and benefit of deeper roots, and produce a climatological (i.e., 1982-2010 average) global Zr map. To test the Zr estimates, we apply the estimated Zr in a highly transparent hydrological model (i.e., the Budyko-Choudhury-Porporato (BCP) model) to estimate mean annual actual evapotranspiration (E) across the globe. We then compare the estimated E with both water balance-based E observations at 32 major catchments and satellite grid-box retrievals across the globe. Our results show that the BCP model, when implemented with Zr estimated herein, optimally reproduced the spatial pattern of E at both scales and provides improved model outputs when compared to BCP model results from two already existing global Zr datasets. These results suggest that our Zr estimates can be effectively used in state-of-the-art hydrological models, and potentially biogeochemical models, where the determination of Zr currently largely relies on biome type-based look-up tables.
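
    The hydrological core of the BCP model is Choudhury's form of the Budyko curve; in the BCP model its shape parameter is tied to plant-available water storage and hence to Zr. A sketch of the Choudhury curve is given below; the linkage between the shape parameter and Zr is not reproduced, so the parameter is simply passed in as an assumed value.

        # Choudhury's form of the Budyko curve:
        #   E = P * PET / (P**n + PET**n) ** (1 / n)
        # The shape parameter n is treated here as a free input; in the BCP model it
        # is related to plant-available water storage (and hence rooting depth Zr).
        def annual_evapotranspiration(p_mm: float, pet_mm: float, n: float) -> float:
            return p_mm * pet_mm / (p_mm ** n + pet_mm ** n) ** (1.0 / n)

        # Larger n (roughly, larger storage) pushes E closer to its supply/demand limit
        for n in (1.2, 1.8, 2.6):
            print(n, round(annual_evapotranspiration(800.0, 1200.0, n), 1))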

  13. Three-Dimensional Service Value Creation Model Based on Multidisciplinary Framework: Service Value Transition in Flower Tourism and Robotized Music Appreciation Services

    OpenAIRE

    Nakamura, Kotaro; Imahori, Takahiro; Ikawa, Yasuo

    2010-01-01

    This paper is focused on the process of value creation in the service business and on the shift of service value created in actual service businesses. The analysis is used to demonstrate proposed models for explaining this shift. Service value is successfully created when customers enjoy the benefits of services proposed through a system of service businesses. The model visualizes the shift of service value, focusing on the three axes of width (of the place for service providing/usage in the ...

  14. Volume and aboveground biomass models for dry Miombo woodland in Tanzania

    DEFF Research Database (Denmark)

    Mwakalukwa, Ezekiel Edward; Meilby, Henrik; Treue, Thorsten

    2014-01-01

    Tools to accurately estimate tree volume and biomass are scarce for most forest types in East Africa, including Tanzania. Based on a sample of 142 trees and 57 shrubs from a 6,065 ha area of dry miombo woodland in Iringa rural district in Tanzania, regression models were developed for volume...... and biomass of three important species, Brachystegia spiciformis Benth. (n=40), Combretum molle G. Don (n=41), and Dalbergia arbutifolia Baker (n=37) separately, and for broader samples of trees (28 species, n=72), shrubs (16 species, n=31), and trees and shrubs combined (44 species, n=104). Applied...... of the predictions tended to increase from general to species-specific models. Except for a few volume and biomass models developed for shrubs, all models had R2 values of 96–99%. Thus, the models appear robust and should be applicable to forests with similar site conditions, species, and diameter ranges....
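
    The abstract does not give the fitted model forms, but a common starting point for such volume and biomass models is an allometric regression fitted on log-transformed data; the sketch below uses that generic form with synthetic data as an assumption and does not reproduce the paper's equations.

        # Generic allometric model ln(V) = a + b*ln(DBH), fitted by ordinary least
        # squares on log-transformed data. Functional form and data are assumptions.
        import numpy as np

        rng = np.random.default_rng(4)
        dbh = rng.uniform(5, 50, 60)                                          # cm
        vol = np.exp(-8.5 + 2.4 * np.log(dbh)) * rng.lognormal(0, 0.15, 60)   # m^3

        b, a = np.polyfit(np.log(dbh), np.log(vol), 1)                        # slope, intercept
        resid = np.log(vol) - (a + b * np.log(dbh))
        r2 = 1 - np.sum(resid ** 2) / np.sum((np.log(vol) - np.log(vol).mean()) ** 2)
        print(round(a, 2), round(b, 2), round(r2, 3))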

  15. Flow and transport simulation of Madeira River using three depth-averaged two-equation turbulence closure models

    Directory of Open Access Journals (Sweden)

    Li-ren Yu

    2012-03-01

    This paper describes a numerical simulation in the Amazon water system, aiming to develop a quasi-three-dimensional numerical tool for refined modeling of turbulent flow and passive transport of mass in natural waters. Three depth-averaged two-equation turbulence closure models, k~-ε~, k~-w~, and k~-ω~, were used to close the non-simplified quasi-three-dimensional hydrodynamic fundamental governing equations. The discretized equations were solved with the advanced multi-grid iterative method using non-orthogonal body-fitted coarse and fine grids with collocated variable arrangement. In addition to steady flow computation, the processes of contaminant inpouring and plume development at the beginning of discharge, caused by a side-discharge of a tributary, have also been numerically investigated. The three depth-averaged two-equation closure models are all suitable for modeling strong mixing turbulence. The newly established turbulence models, such as the k~-ω~ model, with a higher order of magnitude of the turbulence parameter, provide a possibility for improving computational precision.

  16. Lighting design for globally illuminated volume rendering.

    Science.gov (United States)

    Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    With the evolution of graphics hardware, high quality global illumination becomes available for real-time volume rendering. Compared to local illumination, global illumination can produce realistic shading effects which are closer to real world scenes, and has proven useful for enhancing volume data visualization to enable better depth and shape perception. However, setting up optimal lighting could be a nontrivial task for average users. There were lighting design works for volume visualization but they did not consider global light transportation. In this paper, we present a lighting design method for volume visualization employing global illumination. The resulting system takes into account view and transfer-function dependent content of the volume data to automatically generate an optimized three-point lighting environment. Our method fully exploits the back light which is not used by previous volume visualization systems. By also including global shadow and multiple scattering, our lighting system can effectively enhance the depth and shape perception of volumetric features of interest. In addition, we propose an automatic tone mapping operator which recovers visual details from overexposed areas while maintaining sufficient contrast in the dark areas. We show that our method is effective for visualizing volume datasets with complex structures. The structural information is more clearly and correctly presented under the automatically generated light sources.

  17. Peculiarities of designing Holistic Electronic Government Services Integration Model

    Directory of Open Access Journals (Sweden)

    Tadas Limba

    2011-12-01

    Purpose – the aim of this paper is to develop a Holistic Electronic Government Services Integration Model which could ensure the efficient integration of electronic government services at the local self-government level. Methodology – the following analyses have been carried out in this paper: theoretical-systematic, normative, and conceptual comparative analysis of the research. A method of modeling has also been applied. Findings – the scientific work analyzes the improvement opportunities of the models of electronic government services and their application alternatives in Lithuanian municipalities. The newly developed model of electronic government services, designed on the principle of integrating online expert consultation, is primarily targeted at improving an organization's internal processes. Applying that model at the local self-government level, starting with the improvement of an organization's internal processes, should help adapt more accurately and efficiently to the changing needs of society while providing electronic government services, thus establishing a higher public value. Practical implications – the practical novelty of the work is reflected not only in the assessment of opportunities for integrating the principle of online expert consultation services into the theoretical models of electronic government services already developed by researchers, but also in the "Holistic Electronic Government Services Integration Model" created on the basis of this principle in accordance with the "E-Diamond" model, together with its practical application realized in "The project of implementing the principle of online expert consultation on the model of electronic government services" for future investigations. Originality – the systematic, comparative analysis of the models of electronic government services carried out in the scientific

  18. COSMO: a conceptual framework for service modelling and refinement

    NARCIS (Netherlands)

    Quartel, Dick; Steen, Maarten W.A.; Pokraev, S.; van Sinderen, Marten J.

    This paper presents a conceptual framework for service modelling and refinement, called the COSMO (COnceptual Service MOdelling) framework. This framework provides concepts to model and reason about services, and to support operations, such as composition and discovery, which are performed on them

  19. Whole object surface area and volume of partial-view 3D models

    International Nuclear Information System (INIS)

    Mulukutla, Gopal K; Proussevitch, Alexander A; Genareau, Kimberly D; Durant, Adam J

    2017-01-01

    Micro-scale 3D models, important components of many studies in science and engineering, are often used to determine morphological characteristics such as shape, surface area and volume. The application of techniques such as stereoscopic scanning electron microscopy to whole objects often results in 'partial-view' models, with a portion of the object outside the field of view and thus not captured in the 3D model. The nature and extent of the surface not captured depend on the complex interaction of imaging system attributes (e.g. working distance, viewing angle) with object size, shape and morphology. As a result, any simplistic assumption made in estimating whole-object surface area or volume can lead to significant errors. In this study, we report on a novel technique to estimate the physical fraction of an object captured in a partial-view 3D model of an otherwise whole object. This allows a more accurate estimate of surface area and volume. Using 3D models, we demonstrate the robustness of this method and the accuracy of surface area and volume estimates relative to true values. (paper)

  20. Using a sand wave model for optimal monitoring of navigation depth

    NARCIS (Netherlands)

    Knaapen, Michiel; Hulscher, Suzanne J.M.H.; Tiessen, Meinard C.H.; van den Berg, J.; Parker, G.; García, M.H.

    2005-01-01

    In the Euro Channel to Rotterdam Harbor, sand waves reduce the navigable depth to an unacceptable level. To avoid the risk of grounding, the navigation depth is monitored and sand waves that reduce the navigation depth unacceptably are dredged. After the dredging, the sand waves slowly regain their

  1. Evaluation Model of Tea Industry Information Service Quality

    OpenAIRE

    Shi , Xiaohui; Chen , Tian’en

    2015-01-01

    According to the characteristics of tea industry information services, this paper builds a service quality evaluation index system for tea industry information service quality; R-cluster analysis and multiple regression are used together to construct an evaluation model with high practicality and credibility. As proved by the experiment, the evaluation model of information service quality has good precision, which has guidance significance to a certain extent to e...

  2. Fixed site neutralization model programmer's manual. Volume II

    International Nuclear Information System (INIS)

    Engi, D.; Chapman, L.D.; Judnick, W.; Blum, R.; Broegler, L.; Lenz, J.; Weinthraub, A.; Ballard, D.

    1979-12-01

    This report relates to protection of nuclear materials at nuclear facilities. This volume presents the source listings for the Fixed Site Neutralization Model and its supporting modules, the Plex Preprocessor and the Data Preprocessor

  3. Spatial variation in void volume during charged particle bombardment: the effects of injected interstitials

    International Nuclear Information System (INIS)

    Lee, E.H.; Mansur, L.K.; Yoo, M.H.

    1979-01-01

    Experimental observations of the void volume at several depths along the range of 4 MeV Ni ions in 316 stainless steel are reported. The specimens were first preconditioned by neutron irradiation at temperatures of 450 and 584 °C to fluences of approximately 8 × 10^26 n m^-2. After ion bombardment to 60 dpa at the peak damage depth, the void volume is significantly lower at that depth than in the region between it and the free surface. The ratio of the step height to void volume at the depth of peak energy deposition, between regions masked from and exposed to the beam, is strongly dependent on bombardment temperature. The reduction of void volume near the peak damage depth is larger for the 584 °C than for the 450 °C preconditioned material. These observations are consistent with recent theoretical results which account for the injection of the bombarding ions as self-interstitials. The theory necessary to understand the effect is developed

  4. Modelling mid-span water table depth and drainage discharge ...

    African Journals Online (AJOL)

    Results of simulated WTDs at various combinations of drain depth and spacing indicated that in clay soil a WTD of 1.0 to 1.5 m from the soil surface can be achieved by installing drain pipes at drain spacing ranging from 25 to 40 m and drain depth between 1.4 and 1.8 m. On the other hand, in clay-loam soil, the same 1.0 to ...

  5. Mathematical models for volume rendering and neutron transport

    International Nuclear Information System (INIS)

    Max, N.

    1994-09-01

    This paper reviews several different models for light interaction with volume densities of absorbing, glowing, reflecting, or scattering material. They include absorption only, glow only, glow and absorption combined, single scattering of external illumination, and multiple scattering. The models are derived from differential equations, and illustrated on a data set representing a cloud. They are related to corresponding models in neutron transport. The multiple scattering model uses an efficient method to propagate the radiation which does not suffer from the ray effect
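
    The simplest of the reviewed models, glow (emission) plus absorption along a ray, follows the differential equation dI/ds = g(s) - tau(s) I(s). A discretised front-to-back compositing sketch of that model is shown below; the density and emission fields and the step size are illustrative assumptions.

        # Emission-absorption volume rendering along a single ray, discretised with
        # front-to-back compositing. Fields and step size are assumptions.
        import numpy as np

        def integrate_ray(tau, glow, ds):
            """tau, glow: per-sample extinction and emission along the ray."""
            intensity, transmittance = 0.0, 1.0
            for t, g in zip(tau, glow):
                alpha = 1.0 - np.exp(-t * ds)        # opacity of this segment
                intensity += transmittance * g * alpha
                transmittance *= 1.0 - alpha
            return intensity

        samples = 128
        s = np.linspace(0.0, 1.0, samples)
        tau = 4.0 * np.exp(-((s - 0.5) / 0.1) ** 2)  # a cloud-like density bump
        glow = np.ones(samples)                      # uniform emission
        print(integrate_ray(tau, glow, ds=1.0 / samples))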

  6. Average fetal depth in utero: data for estimation of fetal absorbed radiation dose

    International Nuclear Information System (INIS)

    Ragozzino, M.W.; Breckle, R.; Hill, L.M.; Gray, J.E.

    1986-01-01

    To estimate fetal absorbed dose from radiographic examinations, the depth from the anterior maternal surface to the midline of the fetal skull and abdomen was measured by ultrasound in 97 pregnant women. The relationships between fetal depth, fetal presentation, and maternal parameters of height, weight, anteroposterior (AP) thickness, gestational age, placental location, and bladder volume were analyzed. Maternal AP thickness (MAP) can be estimated from gestational age, maternal height, and maternal weight. Fetal midskull and abdominal depths were nearly equal. Fetal depth normalized to MAP was independent or nearly independent of maternal parameters and fetal presentation. These data enable a reasonable estimation of absorbed dose to fetal brain, abdomen, and whole body

  7. A Model-Driven Approach for Telecommunications Network Services Definition

    Science.gov (United States)

    Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.

    Present day Telecommunications market imposes a short concept-to-market time for service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool, with support for collaborative work and for checking properties on models. We started by defining a prototype of the Meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific for service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstractions Layers (NALs). In the future, we will investigate approaches to ensure the support for collaborative work and for checking properties on models.

  8. Statistical Modeling of Ultrawideband Body-Centric Wireless Channels Considering Room Volume

    Directory of Open Access Journals (Sweden)

    Miyuki Hirose

    2012-01-01

    This paper presents the results of a statistical modeling of on-body ultrawideband (UWB) radio channels for wireless body area network (WBAN) applications. Measurements were conducted in five different rooms. A measured delay profile can be divided into two domains at around 4 ns; the second domain (>4 ns) has multipath components that are dominant and dependent on room volume. The first domain was modeled with a conventional power decay law model, and the second domain with a modified Saleh-Valenzuela model considering the room volume. Realizations of the impulse responses are presented based on the composite model and compared with the measured average power delay profiles.

  9. Inference of viscosity jump at 670 km depth and lower mantle viscosity structure from GIA observations

    Science.gov (United States)

    Nakada, Masao; Okuno, Jun'ichi; Irie, Yoshiya

    2018-03-01

    A viscosity model with an exponential profile described by temperature (T) and pressure (P) distributions and constant activation energy (E*_um for the upper mantle and E*_lm for the lower mantle) and activation volume (V*_um and V*_lm) is employed in inferring the viscosity structure of the Earth's mantle from observations of glacial isostatic adjustment (GIA). We first construct standard viscosity models with an average upper-mantle viscosity (η_um) of 2 × 10^20 Pa s, a typical value for the oceanic upper-mantle viscosity, satisfying three observationally derived GIA-related observables: the GIA-induced rate of change of the degree-two zonal harmonic of the geopotential, J̇_2, the differential relative sea level (RSL) change between the Last Glacial Maximum sea levels at Barbados and Bonaparte Gulf in Australia, and the differential RSL change at 6 kyr BP between Karumba and Halifax Bay in Australia. Standard viscosity models inferred from the three GIA-related observables are characterized by a viscosity of ~10^23 Pa s in the deep mantle for an assumed viscosity at 670 km depth, η_lm(670), of (1-50) × 10^21 Pa s. Postglacial RSL changes at Southport, Bermuda and Everglades, in the intermediate region of the North American ice sheet and largely dependent on its gross melting history, have a crucial potential for inference of a viscosity jump at 670 km depth. The analyses of these RSL changes based on the viscosity models with η_um ≥ 2 × 10^20 Pa s and lower-mantle viscosity structures for the standard models yield permissible η_um and η_lm(670) values, although there is a trade-off between the viscosity and ice history models. Our preferred η_um and η_lm(670) values are ~(7-9) × 10^20 and ~10^22 Pa s, respectively, and this η_um is higher than the typical value for the oceanic upper mantle, which may reflect a moderate laterally heterogeneous upper

  10. Airline service quality evaluation: A review on concepts and models

    OpenAIRE

    Navid Haghighat

    2017-01-01

    This paper reviews the major service quality concepts and models which led to great developments in evaluating service quality, focusing on the improvement process of the models through discussing criticisms of each model. Criticisms against these models are discussed to clarify the development steps of newer models which led to the improvement of airline service quality models. The precise and accurate evaluation of service quality requires a reliable concept with comprehensive crite...

  11. Modelling of excavation depth and fractures in rock caused by tool indentation

    International Nuclear Information System (INIS)

    Kou Shaoquan; Tan Xiangchun; Lindqvist, P.A.

    1997-10-01

    The hydraulic regime after excavation in the near-field rock around deposition holes and deposition tunnels in a spent nuclear fuel repository is of concern for prediction of the saturation process of bentonite buffer and tunnel backfill. The hydraulic condition of main interest in this context is a result of the fracture network that is caused by the excavation. Modelling of the excavation disturbed zone in hard rocks caused by mechanical excavation has been carried out in the Division of Mining Engineering since 1993. This report contains an overview of the work conducted. The mechanical excavation is reasonably simplified as an indentation process of the interaction between rigid indenters and rocks. A large number of experiments have been carried out in the laboratory, and the results used for identifying crushed zones and fracture systems in rock under indentation are presented based on these experiments. The indentation causes crushing and damage of the rock and results in a crushed zone and a cracked zone. The indenter penetrates the rock with a certain depth when the force is over a threshold value relevant to the rock and tool. Outside the cracked zone there are basically three systems of cracks: median cracks, radial cracks, and side cracks. Fully developed radial cracks on each side of the indented area can connect with each other and join with median crack. This forms the so-called radial/median crack system. The influence of the mechanical properties of the rock is discussed based on our conceptual model, and the main factors governing the indentation event are summarised. The cracked zone is dealt with by an analytical fracture model. The side crack is simulated by applying the boundary element method coupled with fracture mechanics. Functional relationships are established relating either the indentation depth or the length of radial/median cracks to the various quantities characterising the physical event, namely the shape and the size of the

  12. Modelling of excavation depth and fractures in rock caused by tool indentation

    Energy Technology Data Exchange (ETDEWEB)

    Kou Shaoquan; Tan Xiangchun; Lindqvist, P.A. [Luleaa Univ. of Technology (Sweden)

    1997-10-01

    The hydraulic regime after excavation in the near-field rock around deposition holes and deposition tunnels in a spent nuclear fuel repository is of concern for prediction of the saturation process of bentonite buffer and tunnel backfill. The hydraulic condition of main interest in this context is a result of the fracture network that is caused by the excavation. Modelling of the excavation disturbed zone in hard rocks caused by mechanical excavation has been carried out in the Division of Mining Engineering since 1993. This report contains an overview of the work conducted. The mechanical excavation is reasonably simplified as an indentation process of the interaction between rigid indenters and rocks. A large number of experiments have been carried out in the laboratory, and the results used for identifying crushed zones and fracture systems in rock under indentation are presented based on these experiments. The indentation causes crushing and damage of the rock and results in a crushed zone and a cracked zone. The indenter penetrates the rock with a certain depth when the force is over a threshold value relevant to the rock and tool. Outside the cracked zone there are basically three systems of cracks: median cracks, radial cracks, and side cracks. Fully developed radial cracks on each side of the indented area can connect with each other and join with median crack. This forms the so-called radial/median crack system. The influence of the mechanical properties of the rock is discussed based on our conceptual model, and the main factors governing the indentation event are summarised. The cracked zone is dealt with by an analytical fracture model. The side crack is simulated by applying the boundary element method coupled with fracture mechanics. Functional relationships are established relating either the indentation depth or the length of radial/median cracks to the various quantities characterising the physical event, namely the shape and the size of the

  13. Model-driven development of smart grid services using SoaML

    DEFF Research Database (Denmark)

    Kosek, Anna Magdalena; Gehrke, Oliver

    2014-01-01

    This paper presents a model-driven software development process which can be applied to the design of smart grid services. The Service Oriented Architecture Modelling Language (SoaML) is used to describe the architecture as well as the roles and interactions between service participants. The individual modelling steps and an example design of a SoaML model for a voltage control service are presented and explained. Finally, the paper discusses a proof-of-concept implementation of the modelled service in a smart grid testing laboratory.

  14. Integration of EEG lead placement templates into traditional technologist-based staffing models reduces costs in continuous video-EEG monitoring service.

    Science.gov (United States)

    Kolls, Brad J; Lai, Amy H; Srinivas, Anang A; Reid, Robert R

    2014-06-01

    The purpose of this study was to determine the relative cost reductions within different staffing models for continuous video-electroencephalography (cvEEG) service by introducing a template system for 10/20 lead application. We compared six staffing models using decision tree modeling based on historical service line utilization data from the cvEEG service at our center. Templates were integrated into technologist-based service lines in six different ways. The six models studied were templates for all studies, templates for intensive care unit (ICU) studies, templates for on-call studies, templates for studies of ≤ 24-hour duration, technologists for on-call studies, and technologists for all studies. Cost was linearly related to the study volume for all models with the "templates for all" model incurring the lowest cost. The "technologists for all" model carried the greatest cost. Direct cost comparison shows that any introduction of templates results in cost savings, with the templates being used for patients located in the ICU being the second most cost efficient and the most practical of the combined models to implement. Cost difference between the highest and lowest cost models under the base case produced an annual estimated savings of $267,574. Implementation of the ICU template model at our institution under base case conditions would result in a $205,230 savings over our current "technologist for all" model. Any implementation of templates into a technologist-based cvEEG service line results in cost savings, with the most significant annual savings coming from using the templates for all studies, but the most practical implementation approach with the second highest cost reduction being the template used in the ICU. The lowered costs determined in this work suggest that a template-based cvEEG service could be supported at smaller centers with significantly reduced costs and could allow for broader use of cvEEG patient monitoring.

  15. Modeling Accumulated Volume of Landslides Using Remote Sensing and DTM Data

    Directory of Open Access Journals (Sweden)

    Zhengchao Chen

    2014-02-01

    Full Text Available Landslides, like other natural hazards such as avalanches, floods, and debris flows, can cause extensive property damage and human casualties. The volume of landslide deposits is a key parameter for landslide studies and disaster relief. Using remote sensing and digital terrain model (DTM) data, this paper analyzes errors that can occur in calculating landslide volumes using conventional models. To improve existing models, the mechanisms and laws governing the material deposited by landslides are studied, and the mass balance principle and mass balance line are defined. Based on these ideas, a novel and improved model (the Mass Balance Model, MBM) is proposed. By using a parameter called the "height adaptor", MBM translates the volume calculation into an automatic search for the mass balance line within the scope of the landslide. Due to the use of mass balance constraints and the height adaptor, MBM is much more effective and reliable. A test of MBM was carried out on a typical landslide triggered by the Wenchuan Earthquake of 12 May 2008.
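
    As a rough illustration of the mass-balance idea (a sketch only, not the MBM algorithm of the paper), one can search over candidate elevations for the level at which material lost above roughly balances material gained below, using pre- and post-event DEMs; the array names and parameters below are hypothetical:

        import numpy as np

        def mass_balance_volume(pre_dem, post_dem, cell_area):
            """Illustrative search for a 'mass balance' elevation: material lost above
            that elevation should roughly equal material gained below it, so the
            deposit volume can be read off directly. A sketch of the idea only."""
            dz = post_dem - pre_dem                              # elevation change per cell
            elev = pre_dem.ravel()
            order = np.argsort(elev)                             # cells sorted by elevation
            loss = np.where(dz < 0, -dz, 0.0).ravel()[order] * cell_area
            gain = np.where(dz > 0, dz, 0.0).ravel()[order] * cell_area
            gain_below = np.cumsum(gain)                         # gain in cells at or below each elevation
            loss_above = loss.sum() - np.cumsum(loss)            # loss in cells above each elevation
            k = np.argmin(np.abs(gain_below - loss_above))       # best-balancing elevation
            return elev[order][k], gain_below[k]                 # balance elevation, deposit volume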

  16. ACCURACY ANALYSIS OF KINECT DEPTH DATA

    Directory of Open Access Journals (Sweden)

    K. Khoshelham

    2012-09-01

    Full Text Available This paper presents an investigation of the geometric quality of depth data obtained by the Kinect sensor. Based on the mathematical model of depth measurement by the sensor, a theoretical error analysis is presented, which provides insight into the factors influencing the accuracy of the data. Experimental results show that the random error of depth measurement increases with increasing distance to the sensor, and ranges from a few millimetres up to about 4 cm at the maximum range of the sensor. The accuracy of the data is also found to be influenced by the low resolution of the depth measurements.
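
    The quadratic growth of the random depth error with range follows from first-order error propagation through the triangulation relation z = f·b/d. A minimal sketch with assumed, illustrative Kinect-like parameter values (not the paper's calibration), chosen so the output roughly matches the few-millimetre-to-~4 cm range quoted above:

        import numpy as np

        # Assumed, illustrative triangulation parameters (not calibrated values)
        f_px = 580.0      # focal length, pixels
        b_m = 0.075       # baseline between IR projector and camera, metres
        sigma_d = 0.07    # random disparity error, pixels (chosen for illustration)

        def depth_random_error(z_m):
            """First-order propagation through z = f*b/d gives
            sigma_z = z**2 / (f*b) * sigma_d, i.e. the error grows quadratically with range."""
            return (z_m ** 2) / (f_px * b_m) * sigma_d

        for z in (1.0, 3.0, 5.0):
            print(f"range {z:.0f} m -> random depth error ~{depth_random_error(z) * 100:.1f} cm")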

  17. EARTHWORK VOLUME CALCULATION FROM DIGITAL TERRAIN MODELS

    Directory of Open Access Journals (Sweden)

    JANIĆ Milorad

    2015-06-01

    Full Text Available Accurate calculation of cut and fill volumes is of essential importance in many fields. This article presents a new method, involving no approximation, based on Digital Terrain Models. A relatively new mathematical model is developed for that purpose and implemented in a software solution. Both have been tested and verified in practice on several large opencast mines. The application is developed in the AutoLISP programming language and works in the AutoCAD environment.
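
    For orientation only: a much simpler raster (grid) cut/fill estimate than the exact method of the article might look like the following; the DEM inputs and cell size are hypothetical:

        import numpy as np

        def cut_fill_volumes(design_dem, survey_dem, cell_size):
            """Grid-based cut/fill estimate: positive differences count as fill,
            negative differences as cut. A raster illustration, not the article's
            exact Digital Terrain Model method."""
            diff = survey_dem - design_dem
            cell_area = cell_size * cell_size
            fill = diff[diff > 0].sum() * cell_area
            cut = -diff[diff < 0].sum() * cell_area
            return cut, fill

        design = np.zeros((3, 3))
        survey = np.array([[0.2, 0.1, 0.0], [-0.3, 0.0, 0.1], [0.0, -0.2, 0.4]])
        print(cut_fill_volumes(design, survey, cell_size=5.0))   # (cut m^3, fill m^3)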

  18. Tensor-guided fitting of subduction slab depths

    Science.gov (United States)

    Bazargani, Farhad; Hayes, Gavin P.

    2013-01-01

    Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data‐fitting approach to address the problem. Earthquakes and active‐source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.
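
    As a loose, hypothetical illustration of strike-guided spatial correlation (not the blended neighbor interpolation of the record, which uses spatially varying strikes and the Earth's curved geometry), distances can be measured with an anisotropic metric so that depth information is weighted more strongly along strike than across it:

        import numpy as np

        def anisotropic_idw(xy_samples, depths, strike_deg, xy_query,
                            along=1.0, across=4.0, power=2.0):
            """Toy inverse-distance interpolation with an anisotropic metric:
            offsets across strike are penalized more heavily than offsets along strike."""
            theta = np.deg2rad(strike_deg)
            u = np.array([np.cos(theta), np.sin(theta)])    # along-strike direction
            v = np.array([-np.sin(theta), np.cos(theta)])   # across-strike direction
            d = xy_samples - xy_query                       # offsets to each sample
            dist2 = along * (d @ u) ** 2 + across * (d @ v) ** 2
            w = 1.0 / np.maximum(dist2, 1e-12) ** (power / 2)
            return float(np.sum(w * depths) / np.sum(w))

        samples = np.array([[0.0, 0.0], [10.0, 2.0], [-8.0, -1.0], [2.0, 9.0]])
        depths = np.array([60.0, 75.0, 50.0, 120.0])
        print(anisotropic_idw(samples, depths, strike_deg=10.0, xy_query=np.array([3.0, 1.0])))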

  19. Depth distribution of 2-keV helium-ion irradiation-induced cavities in nickel

    International Nuclear Information System (INIS)

    Fenske, G.; Das, S.K.; Kaminsky, M.

    1981-01-01

    Transmission electron microscopy has been used to study the effect of total dose on the depth distribution of cavities (voids or bubbles) in nickel irradiated at 500 °C with 20-keV ⁴He⁺ ions. A transverse sectioning technique allowed us to obtain the entire depth distribution of cavities from a single specimen. The diameter, number density and volume fraction of cavities were measured as a function of depth from micrographs taken from samples sectioned parallel to the direction of the incident beam. Results for doses of 2.9 × 10^15 and 2.9 × 10^16 ions/cm² show an increase in the average cavity diameter, number density and volume fraction with increasing dose. A further increase in dose from 2.9 × 10^16 to 2.9 × 10^17 ions/cm² also shows an increase in the average cavity diameter but a decrease in the number density. This observation is interpreted as evidence for the coalescence of cavities. 3 figures, 1 table

  20. Predictive models of turbidity and water depth in the Doñana marshes using Landsat TM and ETM+ images.

    Science.gov (United States)

    Bustamante, Javier; Pacios, Fernando; Díaz-Delgado, Ricardo; Aragonés, David

    2009-05-01

    We have used Landsat-5 TM and Landsat-7 ETM+ images together with simultaneous ground-truth data at sample points in the Doñana marshes to predict water turbidity and depth from band reflectance using Generalized Additive Models. We have point samples for 12 different dates simultaneous with 7 Landsat-5 and 5 Landsat-7 overpasses. The best model for water turbidity in the marsh explained 38% of variance in ground-truth data and included as predictors band 3 (630-690 nm), band 5 (1550-1750 nm) and the ratio between bands 1 (450-520 nm) and 4 (760-900 nm). Water turbidity is easier to predict for water bodies like the Guadalquivir River and artificial ponds that are deep and not affected by bottom soil reflectance and aquatic vegetation. For the latter, a simple model using band 3 reflectance explains 78.6% of the variance. Water depth is easier to predict than turbidity. The best model for water depth in the marsh explains 78% of the variance and includes as predictors band 1, band 5, the ratio between band 2 (520-600 nm) and band 4, and bottom soil reflectance in band 4 in September, when the marsh is dry. The water turbidity and water depth models have been developed in order to reconstruct historical changes in Doñana wetlands during the last 30 years using the Landsat satellite images time series.
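
    A minimal sketch of fitting an additive model of turbidity on band reflectances, assuming the Python pygam package as a stand-in for whatever GAM implementation the authors used; the data below are synthetic placeholders, not the Doñana ground truth:

        import numpy as np
        from pygam import LinearGAM, s

        rng = np.random.default_rng(0)
        n = 200
        # Synthetic stand-ins for band reflectances at ground-truth points (illustration only)
        band1, band3, band4, band5 = rng.uniform(0.02, 0.4, size=(4, n))
        turbidity = 30 * band3 + 10 * band5 + 5 * (band1 / band4) + rng.normal(0, 2, n)

        # Predictors named in the record: band 3, band 5, and the band1/band4 ratio
        X = np.column_stack([band3, band5, band1 / band4])
        gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, turbidity)   # one smooth term per predictor
        print(gam.predict(X[:5]))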

  1. Modelling effective soil depth at field scale from soil sensors and geomorphometric indices

    Directory of Open Access Journals (Sweden)

    Mauricio Castro Franco

    2017-04-01

    Full Text Available The effective soil depth (ESD) affects both hydrological dynamics and plant growth. In the southeast of Buenos Aires province, the presence of a petrocalcic horizon constitutes a limitation to the ESD. The aim of this study was to develop a statistical model to predict spatial patterns of ESD using apparent electrical conductivity at two depths, 0-30 cm (ECa_30) and 0-90 cm (ECa_90), together with geomorphometric indices. To do this, a Random Forest (RF) analysis was applied. RF was able to rank the variables according to their predictive potential for ESD; ECa_90, catchment slope, elevation and ECa_30 had the greatest importance, in that order. For validation purposes, 3035 ESD measurements were carried out in five fields. ECa and ESD values showed complex spatial patterns at short distances. The RF parameters with the lowest out-of-bag error (OOB error) were calibrated. A simplified RF model using only the main predictors performed similarly to the model using all predictors, and it delineated patterns similar to those obtained from in situ ESD measurements in all fields. In general, RF was an effective and easy-to-apply method. However, further studies adding other variable-importance measures, a greater number of fields and other candidate predictors are needed to improve these results.
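
    A minimal Random Forest regression sketch with out-of-bag scoring and variable importances, using scikit-learn and synthetic stand-ins for the predictors named above (not the study's data or tuning):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(42)
        n = 500
        # Synthetic stand-ins for the predictors named in the record (illustration only)
        eca_30, eca_90, slope, elevation = rng.normal(size=(4, n))
        esd = 40 + 8 * eca_90 + 4 * slope + 3 * elevation + 2 * eca_30 + rng.normal(0, 3, n)

        X = np.column_stack([eca_30, eca_90, slope, elevation])
        rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0).fit(X, esd)

        print("OOB R^2:", rf.oob_score_)
        for name, imp in zip(["ECa_30", "ECa_90", "slope", "elevation"], rf.feature_importances_):
            print(name, round(imp, 3))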

  2. 1987 Oak Ridge model conference: Proceedings: Volume 2, Environmental protection

    Energy Technology Data Exchange (ETDEWEB)

    1987-01-01

    See the abstract for Volume I for general information on the conference. Topics discussed in Volume II include data management techniques for environmental protection efforts, the use of models in environmental auditing, in emergency plans, chemical accident emergency response, risk assessment, monitoring of waste sites, air and water monitoring of waste sites, and in training programs. (TEM)

  3. 1987 Oak Ridge model conference: Proceedings: Volume 2, Environmental protection

    International Nuclear Information System (INIS)

    1987-01-01

    See the abstract for Volume I for general information on the conference. Topics discussed in Volume II include data management techniques for environmental protection efforts, the use of models in environmental auditing, in emergency plans, chemical accident emergency response, risk assessment, monitoring of waste sites, air and water monitoring of waste sites, and in training programs

  4. Volume dependence of the melting temperature for alkali metals with Debye's model

    International Nuclear Information System (INIS)

    Soma, T.; Kagaya, H.M.; Nishigaki, M.

    1983-01-01

    Using the volume dependence of the Grueneisen constant at higher temperatures, the volume effect on the melting temperature of alkali metals is studied by Lindemann's melting law and Debye's model. The obtained melting curve increases as a function of the compressed volume and shows a maximum melting point at a characteristic volume. The resultant data are qualitatively in agreement with the observed tendency for alkali metals. (author)
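
    For orientation, a textbook form of the ingredients named above (not necessarily the exact expressions used in the record): in the high-temperature Debye model the mean-square atomic displacement is

        \langle u^{2}\rangle \simeq \frac{9\hbar^{2} T}{M k_{B}\Theta_{D}^{2}(V)},

    and the Lindemann criterion \langle u^{2}\rangle = (c_{L} a)^{2} at melting then gives

        T_{m}(V) = \frac{c_{L}^{2} a^{2} M k_{B}\Theta_{D}^{2}(V)}{9\hbar^{2}},

    where c_{L} is the Lindemann fraction, a the interatomic spacing, and M the atomic mass, so that the volume dependence of T_{m} follows from that of the Debye temperature through the Grüneisen parameter \gamma(V) = -\partial\ln\Theta_{D}/\partial\ln V.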

  5. Relative Wave Energy based Adaptive Neuro-Fuzzy Inference System model for the Estimation of Depth of Anaesthesia.

    Science.gov (United States)

    Benzy, V K; Jasmin, E A; Koshy, Rachel Cherian; Amal, Frank; Indiradevi, K P

    2018-01-01

    The advancement in medical research and intelligent modeling techniques has led to developments in anaesthesia management. The present study aims to estimate the depth of anaesthesia using cognitive signal processing and intelligent modeling techniques. The neurophysiological signal that reflects the cognitive state under anaesthetic drugs is the electroencephalogram. The information available in electroencephalogram signals during anaesthesia is drawn out by extracting relative wave energy features from the anaesthetic electroencephalogram signals. The discrete wavelet transform is used to decompose the electroencephalogram signals into four levels, and the relative wave energy is then computed from the approximate and detail coefficients of the sub-band signals. Relative wave energy is extracted to find out the degree of importance of the different electroencephalogram frequency bands associated with the different anaesthetic phases: awake, induction, maintenance and recovery. The Kruskal-Wallis statistical test is applied to the relative wave energy features to check their capability to discriminate between awake, light anaesthesia, moderate anaesthesia and deep anaesthesia. A novel depth-of-anaesthesia index is generated by implementing an adaptive neuro-fuzzy inference system based on a fuzzy c-means clustering algorithm, which uses the relative wave energy features as inputs. Finally, the generated depth-of-anaesthesia index is compared with a commercially available depth-of-anaesthesia monitor, the Bispectral index.
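
    A minimal sketch of the relative wave energy feature extraction described above, using the PyWavelets package with an assumed db4 mother wavelet and a synthetic EEG epoch (the record does not state the wavelet or sampling rate):

        import numpy as np
        import pywt

        def relative_wave_energy(eeg, wavelet="db4", level=4):
            """Decompose an EEG epoch with a 4-level discrete wavelet transform and
            return the fraction of signal energy in each sub-band
            (approximation plus detail coefficients)."""
            coeffs = pywt.wavedec(eeg, wavelet, level=level)
            energies = np.array([np.sum(c ** 2) for c in coeffs])
            return energies / energies.sum()

        # Toy epoch: 4 s of synthetic EEG-like noise at 128 Hz (illustration only)
        rng = np.random.default_rng(1)
        epoch = rng.normal(size=512)
        print(relative_wave_energy(epoch))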

  6. Model documentation: Natural Gas Transmission and Distribution Model of the National Energy Modeling System; Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-02-24

    The Natural Gas Transmission and Distribution Model (NGTDM) is a component of the National Energy Modeling System (NEMS) used to represent the domestic natural gas transmission and distribution system. NEMS is the third in a series of computer-based, midterm energy modeling systems used since 1974 by the Energy Information Administration (EIA) and its predecessor, the Federal Energy Administration, to analyze domestic energy-economy markets and develop projections. This report documents the archived version of NGTDM that was used to produce the natural gas forecasts used in support of the Annual Energy Outlook 1994, DOE/EIA-0383(94). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic design, provides detail on the methodology employed, and describes the model inputs, outputs, and key assumptions. It is intended to fulfill the legal obligation of the EIA to provide adequate documentation in support of its models (Public Law 94-385, Section 57.b.2). This report represents Volume 1 of a two-volume set. (Volume 2 will report on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.) Subsequent chapters of this report provide: (1) an overview of the NGTDM (Chapter 2); (2) a description of the interface between the National Energy Modeling System (NEMS) and the NGTDM (Chapter 3); (3) an overview of the solution methodology of the NGTDM (Chapter 4); (4) the solution methodology for the Annual Flow Module (Chapter 5); (5) the solution methodology for the Distributor Tariff Module (Chapter 6); (6) the solution methodology for the Capacity Expansion Module (Chapter 7); (7) the solution methodology for the Pipeline Tariff Module (Chapter 8); and (8) a description of model assumptions, inputs, and outputs (Chapter 9).

  7. 1987 Oak Ridge model conference: Proceedings: Volume 3, Health and safety

    International Nuclear Information System (INIS)

    1987-01-01

    See the abstract for Volume I for general information on the conference. Topics discussed in Volume III include the use of models in handling hazardous materials, communication at waste sites, asbestos, regulatory decisions, emergency planning, training programs, occupational hazards, and protection of subcontractors

  8. Applying ARIMA model for annual volume time series of the Magdalena River

    Directory of Open Access Journals (Sweden)

    Gloria Amaris

    2017-04-01

    Conclusions: The simulated results obtained with the ARIMA model, compared to the observed data, showed a fairly good fit to the minimum and maximum magnitudes. This suggests that the model is a good tool for estimating minimum and maximum volumes, even though it is not capable of simulating the exact behaviour of an annual volume time series.
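
    A minimal sketch of fitting and forecasting an ARIMA model in Python statsmodels; the series and the (p, d, q) order below are assumptions for illustration, not the study's data or identified model:

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(7)
        # Synthetic stand-in for an annual volume series (not Magdalena River data)
        volumes = 200 + np.cumsum(rng.normal(0, 10, size=60))

        model = ARIMA(volumes, order=(1, 1, 1))   # assumed order; the record does not state it
        fitted = model.fit()
        print(fitted.summary())
        print(fitted.forecast(steps=5))           # five-step-ahead volume forecast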

  9. Helium-induced blistering and volume swelling in nickel

    International Nuclear Information System (INIS)

    Fenske, G.R.

    1979-01-01

    The results of an experimental investigation of He-induced blistering are presented. The mechanisms involved in blistering were examined by observing the microstructure of the implanted region using TEM. The volume swelling was measured as a function of the implant depth. The investigation revealed factors important in understanding the mechanisms involved in blister formation. First, a direct comparison of measured skin thicknesses with the location of the maximum volume swelling demonstrated that the skin separates at the peak swelling depth, not at the end of the swelling profile. Second, an examination of the assumptions that have been used to predict skin thicknesses revealed that the differences between predicted and measured skin thicknesses at low energies can be attributed to: failure to account for volume swelling in the skin, use of a Gaussian approximation to the range profile or of one generated with a Monte-Carlo code, and uncertainties in the electronic stopping powers. Beyond a certain dose, the density of cavities in the peak-swelling region decreased with increasing dose, indicating that cavity coalescence does occur. A calculation of the He concentration required to fracture the load-bearing cross section between the cavities revealed that a sufficient quantity of He was available to generate the required gas pressures. These observations indicate that models based on coalescence followed by gas-driven deformation provide an accurate description of the mechanisms involved in blistering, and that they can accurately predict skin thicknesses at low energies

  10. Utilization of Advanced Diagnostic Methods for Texture and Rut Depth Analysis on a Testing Pavement Section

    Directory of Open Access Journals (Sweden)

    Slabej Martin

    2015-05-01

    Full Text Available The qualitative characteristics of a pavement largely reflect its serviceability, which summarizes the properties of the pavement that provide fast, smooth, economical and, above all, safe driving of motor vehicles. The key factor for pavement serviceability and road safety is the quality of the surface properties. In the framework of research activities performed in the Research Centre founded under the auspices of the University of Žilina, individual parameters of pavement serviceability were monitored by pavement surface scanning. This paper describes the creation of a 3D road-surface model and its analysis and evaluation from the viewpoint of two pavement serviceability parameters: the rut depth and the texture. Measurements were performed on an experimental pavement section currently used in an Accelerated Pavement Testing experiment. The long-term goal is to derive functions predicting the degradation of these two pavement serviceability parameters.

  11. Data mining for service

    CERN Document Server

    2014-01-01

    Virtually all nontrivial, modern service-related problems and systems involve data volumes and types that clearly fall into what is presently meant by "big data": they are huge, heterogeneous, complex, distributed, etc. Data mining is a series of processes that includes collecting and accumulating data, modeling phenomena, and discovering new information, and it is one of the most important steps in the scientific analysis of service processes. Data mining application in services requires a thorough understanding of the characteristics of each service and knowledge of the compatibility of data mining technology within each particular service, rather than knowledge only of calculation speed and prediction accuracy. The varied examples of services provided in this book will help readers understand the relation between services and data mining technology. This book is intended to stimulate interest among researchers and practitioners in the relation between data mining technology and its application to ...

  12. Acceptance of Swedish e-health services

    Science.gov (United States)

    Jung, Mary-Louise; Loria, Karla

    2010-01-01

    Objective: To investigate older people’s acceptance of e-health services, in order to identify determinants of, and barriers to, their intention to use e-health. Method: Based on one of the best-established models of technology acceptance, Technology Acceptance Model (TAM), in-depth exploratory interviews with twelve individuals over 45 years of age and of varying backgrounds are conducted. Results: This investigation could find support for the importance of usefulness and perceived ease of use of the e-health service offered as the main determinants of people’s intention to use the service. Additional factors critical to the acceptance of e-health are identified, such as the importance of the compatibility of the services with citizens’ needs and trust in the service provider. Most interviewees expressed positive attitudes towards using e-health and find these services useful, convenient, and easy to use. Conclusion: E-health services are perceived as a good complement to traditional health care service delivery, even among older people. These people, however, need to become aware of the e-health alternatives that are offered to them and the benefits they provide. PMID:21289860

  13. Modulation depth of Michelson interferometer with Gaussian beam.

    Science.gov (United States)

    Välikylä, Tuomas; Kauppinen, Jyrki

    2011-12-20

    Mirror misalignment or the tilt angle of the Michelson interferometer can be estimated from the modulation depth measured with collimated monochromatic light. The intensity of the light beam is usually assumed to be uniform, but, for example, with gas lasers it generally has a Gaussian distribution, which makes the modulation depth less sensitive to the tilt angle. With this assumption, the tilt angle may be underestimated by about 50%. We have derived a mathematical model for modulation depth with a circular aperture and Gaussian beam. The model reduces the error of the tilt angle estimate to below 1%. The results of the model have been verified experimentally.

  14. Estimating the Rut Depth by UAV Photogrammetry

    Directory of Open Access Journals (Sweden)

    Paavo Nevalainen

    2017-12-01

    Full Text Available Rut formation during forest operations is an undesirable phenomenon. A methodology is proposed to measure the rut depth distribution of a logging site from photogrammetric point clouds produced by unmanned aerial vehicles (UAVs). The methodology includes five processing steps that aim at reducing the noise from the surrounding trees and undergrowth in order to identify the trails. A canopy height model is produced to focus the point cloud on the open pathway around the forest machine trail. A triangulated ground model is formed by a point cloud filtering method. The ground model is vectorized using the histogram of directed curvatures (HOC) method to produce an overall ground visualization. Finally, a manual selection of the trails leads to an automated rut depth profile analysis. The bivariate correlation (Pearson's r) between rut depths measured manually and by UAV photogrammetry is r = 0.67. The two-class accuracy a of detecting rut depths exceeding 20 cm is a = 0.65. There is potential for enabling automated large-scale evaluation of forestry areas by using autonomous drones and the process described.
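
    As a toy reduction of the automated rut depth profile analysis (not the paper's algorithm), each cross-section of ground elevations across the trail can be reduced to a single depth value:

        import numpy as np

        def rut_depth_profile(cross_sections):
            """For each cross-section of ground elevations across the trail (metres),
            estimate rut depth as the drop from the surrounding surface level
            (here simply the average of the edge elevations) to the deepest point."""
            depths = []
            for z in cross_sections:
                reference = 0.5 * (z[0] + z[-1])   # assume the section edges are undisturbed ground
                depths.append(reference - z.min())
            return np.array(depths)

        # Two hypothetical cross-sections of the wheel track (illustration only)
        sections = [np.array([0.00, -0.05, -0.22, -0.10, 0.01]),
                    np.array([0.02, -0.03, -0.28, -0.12, 0.00])]
        print(rut_depth_profile(sections))   # approx. 0.23 m and 0.29 m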

  15. SOA Modeling Patterns for Service Oriented Discovery and Analysis

    CERN Document Server

    Bell, Michael

    2010-01-01

    Learn the essential tools for developing a sound service-oriented architecture. SOA Modeling Patterns for Service-Oriented Discovery and Analysis introduces a universal, easy-to-use, and nimble SOA modeling language to facilitate the service identification and examination life cycle stage. This business and technological vocabulary will benefit your service development endeavors and foster organizational software asset reuse and consolidation, and reduction of expenditure. Whether you are a developer, business architect, technical architect, modeler, business analyst, team leader, or manager,

  16. Operating cost model for local service airlines

    Science.gov (United States)

    Anderson, J. L.; Andrastek, D. A.

    1976-01-01

    Several mathematical models now exist which determine the operating economics for a United States trunk airline. These models are valuable in assessing the impact of new aircraft into an airline's fleet. The use of a trunk airline cost model for the local service airline does not result in representative operating costs. A new model is presented which is representative of the operating conditions and resultant costs for the local service airline. The calculated annual direct and indirect operating costs for two multiequipment airlines are compared with their actual operating experience.

  17. Correction of fluorescence for depth-specific optical and vascular properties using reflectance and differential path-length spectroscopy during PDT

    Science.gov (United States)

    van Zaane, F.; Middelburg, T. A.; de Bruijn, H. S.; van der Ploeg-van den Heuvel, A.; de Haas, E. R. M.; Sterenborg, H. J. C. M.; Neumann, H. A. M.; Robinson, D. J.

    2009-06-01

    Introduction: The rate of PpIX fluorescence photobleaching is routinely used as a dose metric for ALA-PDT. Diffuse reflection spectroscopy is often used to account for variations in tissue optical properties at the photosensitizer excitation and emission bands. It can be used to quantify changes in vascular parameters, such as blood volume fraction and saturation, and can aid understanding of the tissue response to PDT. The volume and/or depth over which these signals are acquired is critical. The aim of this study is to use quantitative reflectance spectroscopy (DPS) to correct fluorescence for changes in tissue optical properties and to monitor PDT. Materials & Methods: ALA was topically applied to the skin of hairless mice and the incubated spot was treated with PDT according to fractionated illumination schemes. DPS measurements of vascular parameters and optical properties were performed directly before and after illumination. The differential signal, the delivery-and-collection-fiber signal and the collection-fiber signal, which all probe different measurement volumes, are analyzed. Results & Conclusions: Analysis of the DPS measurements shows that at the depth where most fluorescence originates, there is almost no blood present. During PDT the vascular parameters at this depth stay constant. In more oxygenated layers of the tissue, the optical properties do change during PDT, suggesting that only a small part of the PpIX fluorescence originates from the interesting depths where the vascular response occurs. Correcting fluorescence emission spectra for optical changes at specific depths, rather than for the total of changes in a larger volume as is usually done now, can improve PpIX photobleaching-based treatment monitoring.

  18. Liquidity Effects on the Simultaneity of Trading Volume and Order Imbalance

    Directory of Open Access Journals (Sweden)

    Erman Denny Arfianto

    2016-10-01

    Full Text Available The purpose of this research is to analyze the simultaneity between trading volume and order imbalance, the influence of past performance, market risk, market capitalization and tick size on trading volume, and the influence of tick size, depth and bid-ask spread on order imbalance for companies listed on the LQ 45 index. The samples in this research were selected using the purposive sampling method with several criteria; fifty-five companies listed on the 2014 LQ 45 index were chosen. The results showed that trading volume is simultaneously related to order imbalance; past performance, market risk and market capitalization have a positive and significant effect on trading volume; tick size has a negative and significant effect on trading volume; order imbalance has a negative but insignificant effect on trading volume; and tick size, depth, bid-ask spread and trading volume have no significant effect on order imbalance.

  19. Depth-Sizing Technique for Crack Indications in Steam Generator Tubing

    International Nuclear Information System (INIS)

    Cho, Chan Hee; Lee, Hee Jeong; Kim, Hong Deok

    2009-01-01

    Nuclear power plants have been operated safely by plugging steam generator tubes that show crack indications. Tube rupture events can occur if analysts fail to detect crack indications during in-service inspection. There are various types of crack indication in steam generator tubes, and they have been detected by the eddy current test. An integrity assessment should be performed using the crack-sizing results from eddy current data when a crack indication is detected. However, it is not easy to evaluate the crack depth precisely and consistently due to the complexity of the methods. The current crack-sizing methods were reviewed in this paper and suitable ones were selected through laboratory tests. The retired steam generators of Kori Unit 1 were used for this study. Round robin tests by domestic qualified analysts were carried out, and statistical models were introduced to establish appropriate depth-sizing techniques. It is expected that the techniques proposed in this study can be utilized in the Steam Generator Management Program

  20. High-Resolution Assimilation of GRACE Terrestrial Water Storage Observations to Represent Local-Scale Water Table Depths

    Science.gov (United States)

    Stampoulis, D.; Reager, J. T., II; David, C. H.; Famiglietti, J. S.; Andreadis, K.

    2017-12-01

    Despite the numerous advances in hydrologic modeling and improvements in Land Surface Models, an accurate representation of the water table depth (WTD) still does not exist. Data assimilation of observations of the joint NASA and DLR mission, Gravity Recovery and Climate Experiment (GRACE), leads to statistically significant improvements in the accuracy of hydrologic models, ultimately resulting in more reliable estimates of water storage. However, the usually shallow groundwater compartment of the models presents a problem with GRACE assimilation techniques, as these satellite observations account for much deeper aquifers. To improve the accuracy of groundwater estimates and allow the representation of the WTD at fine spatial scales we implemented a novel approach that enables a large-scale data integration system to assimilate GRACE data. This was achieved by augmenting the Variable Infiltration Capacity (VIC) hydrologic model, which is the core component of the Regional Hydrologic Extremes Assessment System (RHEAS), a high-resolution modeling framework developed at the Jet Propulsion Laboratory (JPL) for hydrologic modeling and data assimilation. The model has insufficient subsurface characterization and therefore, to reproduce groundwater variability not only at shallow depths but also in deep aquifers, as well as to allow GRACE assimilation, a fourth soil layer of varying depth (~1,000 meters) was added in VIC as the bottom layer. To initialize a water table in the model we used gridded global WTD data at 1 km resolution which were spatially aggregated to match the model's resolution. Simulations were then performed to test the augmented model's ability to capture seasonal and inter-annual trends of groundwater. The 4-layer version of VIC was run with and without assimilating GRACE Total Water Storage anomalies (TWSA) over the Central Valley in California. This is the first-ever assimilation of GRACE TWSA for the determination of realistic water table depths, at

  1. Towards operationalization of business models : Designing service compositions for service-dominant business models

    NARCIS (Netherlands)

    Suratno, B.; Grefen, P.; Turetken, O.

    2017-01-01

    The new trend of service-dominant business which produces so-called value-in-use as a competitive advantage demands rapidly changing business models and collaboration of organizations in a cross-organizational business network. As information technology nowadays largely contributes to the way of

  2. Estimating tree bole volume using artificial neural network models for four species in Turkey.

    Science.gov (United States)

    Ozçelik, Ramazan; Diamantopoulou, Maria J; Brooks, John R; Wiant, Harry V

    2010-01-01

    Tree bole volumes of 89 Scots pine (Pinus sylvestris L.), 96 Brutian pine (Pinus brutia Ten.), 107 Cilicica fir (Abies cilicica Carr.) and 67 Cedar of Lebanon (Cedrus libani A. Rich.) trees were estimated using Artificial Neural Network (ANN) models. Neural networks offer a number of advantages including the ability to implicitly detect complex nonlinear relationships between input and output variables, which is very helpful in tree volume modeling. Two different neural network architectures were used and produced the Back propagation (BPANN) and the Cascade Correlation (CCANN) Artificial Neural Network models. In addition, tree bole volume estimates were compared to other established tree bole volume estimation techniques including the centroid method, taper equations, and existing standard volume tables. An overview of the features of ANNs and traditional methods is presented and the advantages and limitations of each one of them are discussed. For validation purposes, actual volumes were determined by aggregating the volumes of measured short sections (average 1 meter) of the tree bole using Smalian's formula. The results reported in this research suggest that the selected cascade correlation artificial neural network (CCANN) models are reliable for estimating the tree bole volume of the four examined tree species since they gave unbiased results and were superior to almost all methods in terms of error (%) expressed as the mean of the percentage errors. 2009 Elsevier Ltd. All rights reserved.
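
    A minimal sketch of a back-propagation-style feed-forward network for bole volume prediction using scikit-learn's MLPRegressor (cascade correlation networks are not available there), with synthetic tree data standing in for the measured stems:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(3)
        n = 150
        dbh = rng.uniform(10, 60, n)                               # diameter at breast height, cm (synthetic)
        height = 1.3 + 0.6 * dbh ** 0.8 + rng.normal(0, 1.5, n)    # tree height, m (synthetic)
        volume = 3.6e-5 * dbh ** 1.9 * height * np.exp(rng.normal(0, 0.08, n))  # bole volume, m^3

        X = np.column_stack([dbh, height])
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
        model.fit(X, volume)
        print(model.predict([[35.0, 22.0]]))   # predicted bole volume for a hypothetical tree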

  3. Volume and Aboveground Biomass Models for Dry Miombo Woodland in Tanzania

    Directory of Open Access Journals (Sweden)

    Ezekiel Edward Mwakalukwa

    2014-01-01

    Full Text Available Tools to accurately estimate tree volume and biomass are scarce for most forest types in East Africa, including Tanzania. Based on a sample of 142 trees and 57 shrubs from a 6,065 ha area of dry miombo woodland in Iringa rural district in Tanzania, regression models were developed for volume and biomass of three important species, Brachystegia spiciformis Benth. (n = 40), Combretum molle G. Don (n = 41), and Dalbergia arbutifolia Baker (n = 37) separately, and for broader samples of trees (28 species, n = 72), shrubs (16 species, n = 32), and trees and shrubs combined (44 species, n = 104). Applied independent variables were log-transformed diameter, height, and wood basic density, and in each case a range of different models were tested. The general tendency among the final models is that the fit improved when height and wood basic density were included. Also, the precision and accuracy of the predictions tended to increase from general to species-specific models. Except for a few volume and biomass models developed for shrubs, all models had R2 values of 96-99%. Thus, the models appear robust and should be applicable to forests with similar site conditions, species, and diameter ranges.
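
    A minimal sketch of the kind of log-transformed regression described above, fitted by ordinary least squares on synthetic stand-ins for diameter, height and wood basic density (not the miombo data):

        import numpy as np

        rng = np.random.default_rng(5)
        n = 120
        d = rng.uniform(5, 45, n)                           # diameter, cm (synthetic)
        h = 2 + 0.7 * d ** 0.75 + rng.normal(0, 1, n)       # height, m (synthetic)
        rho = rng.uniform(0.45, 0.75, n)                    # wood basic density, g/cm^3 (synthetic)
        v = np.exp(-9.5 + 1.9 * np.log(d) + 0.9 * np.log(h) + 0.8 * np.log(rho)
                   + rng.normal(0, 0.1, n))                 # "true" volume, m^3

        # Fit ln(V) = b0 + b1 ln(d) + b2 ln(h) + b3 ln(rho) by ordinary least squares
        X = np.column_stack([np.ones(n), np.log(d), np.log(h), np.log(rho)])
        beta, *_ = np.linalg.lstsq(X, np.log(v), rcond=None)
        print(dict(zip(["b0", "ln d", "ln h", "ln rho"], np.round(beta, 3))))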

  4. Updating default depths in the ISC bulletin

    Science.gov (United States)

    Bolton, Maiclaire K.; Storchak, Dmitry A.; Harris, James

    2006-09-01

    The International Seismological Centre (ISC) publishes the definitive global bulletin of earthquake locations. In the ISC bulletin, we aim to obtain a free depth, but often this is not possible. Subsequently, the first option is to obtain a depth derived from depth phases. If depth phases are not available, we then use the reported depth from a reputable local agency. Finally, as a last resort, we set a default depth. In the past, common depths of 10, 33, or multiples of 50 km have been assigned. Assigning a more meaningful default depth, specific to a seismic region will increase the consistency of earthquake locations within the ISC bulletin and allow the ISC to publish better positions and magnitude estimates. It will also improve the association of reported secondary arrivals to corresponding seismic events. We aim to produce a global set of default depths, based on a typical depth for each area, from well-constrained events in the ISC bulletin or where depth could be constrained using a consistent set of depth phase arrivals provided by a number of different reporters. In certain areas, we must resort to using other assumptions. For these cases, we use a global crustal model (Crust2.0) to set default depths to half the thickness of the crust.

  5. Femtosecond laser excitation of dielectric materials: experiments and modeling of optical properties and ablation depths

    DEFF Research Database (Denmark)

    Wædegaard, Kristian Juncher; Frislev, Martin Thomas; Balling, Peter

    2013-01-01

    Modeling of the interaction between a dielectric material and ultrashort laser pulses provides the temporal evolution of the electronic excitation and the optical properties of the dielectric. Experimentally determined reflectances and ablation depths for sapphire are compared to the calculations. A decrease in reflectance at high fluences is observed experimentally, which demonstrates the necessity of a temperature-dependent electron scattering rate in the model. The comparison thus provides new constraints on the optical parameters of the model.

  6. Modeling water flow, depth and inundation extent over the rivers of the Contiguous US within a Catchment-based Land Surface Modeling Framework

    Science.gov (United States)

    Liu, Z.; David, C. H.; Famiglietti, J. S.

    2013-12-01

    With population growth and increasing demand for water supply, the need for integrated continental- and global-scale surface water dynamics simulation systems relying on both observations and models is ever increasing. In this study we characterize how accurately we can estimate river discharge, river depth and the corresponding inundation extent over the contiguous U.S. by combining observations and models. We present a continental-scale implementation of the Catchment-based Hydrological And Routing Modeling System (CHARMS) that includes an explicit representation of the river networks from a Geographic Information System (GIS) dataset. The river networks and contributing catchment boundaries of the contiguous U.S. are upscaled from the NHDPlus dataset. The average upscaled catchment size is 2773 km2 and the unique main river channel contained in each catchment consists of several river reaches of average length 1.6 km. We derive 18 sets of empirical relationships between channel dimensions (bankfull depth and bankfull width) and drainage area, based on USGS gauge observations, to describe river dynamics for the 18 water resource regions of the NHDPlus representation of the United States. These relationships are used to separate the main river channel and floodplain. Modeled monthly and daily streamflow show reasonable agreement with gauge observations, and initial results show that basins with fewer anthropogenic modifications are more accurately simulated. Modeled monthly and daily river depth and floodplain extent associated with each river reach are also explicitly estimated over the U.S., although such simulations are more challenging to validate. Our results have implications for capturing the seasonal-to-interannual dynamics of surface water in climate models. Such a continental-scale modeling framework development would, by design, facilitate the use of existing in situ observations and be suitable for integrating the upcoming NASA Surface Water and Ocean
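
    A minimal sketch of deriving one such empirical relationship, fitting a power law between bankfull depth and drainage area in log-log space; the gauge-like data below are synthetic placeholders (the record fits one relationship per water resource region):

        import numpy as np

        rng = np.random.default_rng(11)
        area_km2 = 10 ** rng.uniform(1, 5, 300)                                # drainage area (synthetic)
        depth_m = 0.27 * area_km2 ** 0.30 * np.exp(rng.normal(0, 0.2, 300))    # bankfull depth (synthetic)

        # Fit depth = a * area**b by linear regression in log-log space
        b, ln_a = np.polyfit(np.log(area_km2), np.log(depth_m), 1)
        print(f"bankfull depth ~= {np.exp(ln_a):.2f} * A^{b:.2f}")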

  7. Airline service quality evaluation: A review on concepts and models

    Directory of Open Access Journals (Sweden)

    Navid Haghighat

    2017-12-01

    Full Text Available This paper reviews the major service quality concepts and models which led to great developments in evaluating service quality, focusing on the improvement process of the models through discussing criticisms of each model. Criticisms against these models are discussed to clarify the development steps of newer models which led to the improvement of airline service quality models. The precise and accurate evaluation of service quality requires a reliable concept with comprehensive criteria and effective measurement techniques as the fundamentals of a valuable framework. In this paper, the improvement of service quality models is described based on three major service quality concepts, the disconfirmation, performance and hierarchical concepts, which were developed subsequently. Reviewing various criteria and different measurement techniques, such as statistical analysis and multi-criteria decision making, assists researchers in gaining a clear understanding of the development of the evaluation framework in the airline industry. This study aims at promoting reliable frameworks for evaluating airline service quality in different countries and societies, given the economic, cultural and social aspects of each society.

  8. Defect distribution in low-temperature molecular beam epitaxy grown Si/Si(100), improved depth profiling with monoenergetic positrons

    International Nuclear Information System (INIS)

    Szeles, C.; Asoka-Kumar, P.; Lynn, K.G.; Gossmann, H.; Unterwald, F.C.; Boone, T.

    1995-01-01

    The depth distribution of open-volume defects has been studied in Si(100) crystals grown by molecular beam epitaxy at 300 °C using the variable-energy monoenergetic positron beam technique combined with well-controlled chemical etching. This procedure gave a 10 nm depth resolution, which is a significant improvement over the inherent depth resolving power of the positron beam technique. The epitaxial layer was found to grow defect-free up to 80 nm from the interface, where small vacancy clusters, larger than divacancies, appear. The defect density then sharply increases toward the film surface. The result clearly shows that the nucleation of small open-volume defects is a precursor state to the breakdown of epitaxy and to the evolution of an amorphous film

  9. Developing a community based service model for disability: Listening to the needs of all beneficiaries and providers.

    Science.gov (United States)

    Collins, Katrina

    2017-12-11

    To inform the strategic and operational development of a community-based service model at the Crann Centre, Cork, Ireland for SB children, adults, their families and providers. A needs assessment was conducted by gathering multiple stakeholder perspectives within the SB community in the geographical region the Centre will serve. The intention is to create project deliverables that are responsive to the needs highlighted through this research. The study used a multi-method design with a participatory research approach to explore the needs of SB individuals, families and providers. This involved in-depth interviews, focus groups and online surveys. One hundred and fifty-nine respondents contributed to this qualitative needs assessment. The research established a range of psychosocial, clinical, vocational and educational issues causing ongoing difficulties for SB individuals and families. Providers highlighted supports that would benefit the social and clinical wellbeing of persons with SB. Collectively, participants in the study reported an absence of coordinated, continuous and comprehensive service delivery for the SB community in the region. This was amplified by the geographical location of services and access to relevant supports. Consensus across stakeholders pointed to the necessity for an innovative model of community-based provision at the Crann Centre, described as a service with the family at the core of an assets-based model of practice. A key finding was the lack of importance placed on the social and emotional development of SB individuals. Traditionally, participants described a singular focus on physical health through clinically defined treatment models. The desire for a social model of disability informing the health and wellbeing of SB individuals and families emerged as a prominent recommendation from the research.

  10. Modeling Medical Services with Mobile Health Applications

    Directory of Open Access Journals (Sweden)

    Zhenfei Wang

    2018-01-01

    Full Text Available The rapid development of mobile health technology (m-Health) provides unprecedented opportunities for improving health services. As the bridge between doctors and patients, mobile health applications enable patients to communicate with doctors through their smartphones, which is becoming more and more popular. To evaluate the influence of m-Health applications on the medical service market, we propose a medical service equilibrium model. The model can balance the supply of doctors and the demand of patients and reflect the possible options for both doctors and patients, with or without m-Health applications, in the medical service market. In the meantime, we analyze the behavior of patients and the activities of doctors to minimize patients' full costs of healthcare and doctors' futility. Then, we provide a resolution algorithm through mathematical reasoning. Lastly, based on an artificially generated dataset, experiments are conducted to evaluate the medical services of m-Health applications.

  11. Modelling ecosystem service flows under uncertainty with stochiastic SPAN

    Science.gov (United States)

    Johnson, Gary W.; Snapp, Robert R.; Villa, Ferdinando; Bagstad, Kenneth J.

    2012-01-01

    Ecosystem service models are increasingly in demand for decision making. However, the data required to run these models are often patchy, missing, outdated, or untrustworthy. Further, communication of data and model uncertainty to decision makers is often either absent or unintuitive. In this work, we introduce a systematic approach to addressing both the data gap and the difficulty in communicating uncertainty through a stochastic adaptation of the Service Path Attribution Networks (SPAN) framework. The SPAN formalism assesses ecosystem services through a set of up to 16 maps, which characterize the services in a study area in terms of flow pathways between ecosystems and human beneficiaries. Although the SPAN algorithms were originally defined deterministically, we present them here in a stochastic framework which combines probabilistic input data with a stochastic transport model in order to generate probabilistic spatial outputs. This enables a novel feature among ecosystem service models: the ability to spatially visualize uncertainty in the model results. The stochastic SPAN model can analyze areas where data limitations are prohibitive for deterministic models. Greater uncertainty in the model inputs (including missing data) should lead to greater uncertainty expressed in the model’s output distributions. By using Bayesian belief networks to fill data gaps and expert-provided trust assignments to augment untrustworthy or outdated information, we can account for uncertainty in input data, producing a model that is still able to run and provide information where strictly deterministic models could not. Taken together, these attributes enable more robust and intuitive modelling of ecosystem services under uncertainty.

  12. Semantic Web Services Challenge, Results from the First Year. Series: Semantic Web And Beyond, Volume 8.

    Science.gov (United States)

    Petrie, C.; Margaria, T.; Lausen, H.; Zaremba, M.

    Explores trade-offs among existing approaches. Reveals strengths and weaknesses of proposed approaches, as well as which aspects of the problem are not yet covered. Introduces software engineering approach to evaluating semantic web services. Service-Oriented Computing is one of the most promising software engineering trends because of the potential to reduce the programming effort for future distributed industrial systems. However, only a small part of this potential rests on the standardization of tools offered by the web services stack. The larger part of this potential rests upon the development of sufficient semantics to automate service orchestration. Currently there are many different approaches to semantic web service descriptions and many frameworks built around them. A common understanding, evaluation scheme, and test bed to compare and classify these frameworks in terms of their capabilities and shortcomings, is necessary to make progress in developing the full potential of Service-Oriented Computing. The Semantic Web Services Challenge is an open source initiative that provides a public evaluation and certification of multiple frameworks on common industrially-relevant problem sets. This edited volume reports on the first results in developing common understanding of the various technologies intended to facilitate the automation of mediation, choreography and discovery for Web Services using semantic annotations. Semantic Web Services Challenge: Results from the First Year is designed for a professional audience composed of practitioners and researchers in industry. Professionals can use this book to evaluate SWS technology for their potential practical use. The book is also suitable for advanced-level students in computer science.

  13. Mixed layer depth calculation in deep convection regions in ocean numerical models

    Science.gov (United States)

    Courtois, Peggy; Hu, Xianmin; Pennelly, Clark; Spence, Paul; Myers, Paul G.

    2017-12-01

    Mixed Layer Depths (MLDs) diagnosed by conventional numerical models are generally based on a density difference with the surface (e.g., 0.01 kg.m-3). However, the temperature-salinity compensation and the lack of vertical resolution contribute to over-estimated MLD, especially in regions of deep convection. In the present work, we examined the diagnostic MLD, associated with the deep convection of the Labrador Sea Water (LSW), calculated with a simple density difference criterion. The over-estimated MLD led us to develop a new tool, based on an observational approach, to recalculate MLD from model output. We used an eddy-permitting, 1/12° regional configuration of the Nucleus for European Modelling of the Ocean (NEMO) to test and discuss our newly defined MLD. We compared our new MLD with that from observations, and we showed a major improvement with our new algorithm. To show the new MLD is not dependent on a single model and its horizontal resolution, we extended our analysis to include 1/4° eddy-permitting simulations, and simulations using the Modular Ocean Model (MOM) model.
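
    A minimal sketch of the conventional density-difference criterion discussed above, assuming a single profile of potential density on discrete depth levels. The 10 m reference depth and 0.01 kg m^-3 threshold are common choices rather than values prescribed by this record, and the authors' recalculated MLD algorithm is not reproduced here.

```python
import numpy as np

def mixed_layer_depth(depth, density, ref_depth=10.0, threshold=0.01):
    """Shallowest depth where potential density exceeds its value at `ref_depth`
    by `threshold` (kg m^-3); linear interpolation between bounding levels."""
    sigma_ref = np.interp(ref_depth, depth, density)
    exceed = np.where(density > sigma_ref + threshold)[0]
    if exceed.size == 0:
        return depth[-1]          # criterion never met: mixed over the whole profile
    i = exceed[0]
    if i == 0:
        return depth[0]
    return np.interp(sigma_ref + threshold, density[i - 1:i + 1], depth[i - 1:i + 1])

# Example profile (depth in m, potential density in kg m^-3)
z = np.array([0.0, 10.0, 50.0, 100.0, 200.0, 500.0, 1000.0])
sigma = np.array([1027.000, 1027.000, 1027.004, 1027.008, 1027.050, 1027.300, 1027.600])
print(mixed_layer_depth(z, sigma))   # roughly 105 m for this profile
```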

  14. A Computational Model of Hydraulic Volume Displacement Drive

    Directory of Open Access Journals (Sweden)

    V. N. Pil'gunov

    2014-01-01

    Full Text Available The paper offers a computational model of an industrial-purpose hydraulic drive with two hydraulic volume adjustable working chamber machines (pump and motor). An adjustable pump equipped with the pressure control unit can be run together with several adjustable hydraulic motors on the principle of a three-phase hydraulic socket-outlet with high-pressure lines, drain, and drainage system. The paper considers the pressure-controlled hydrostatic transmission with a hydraulic motor as an output link. It shows a possibility to create a saving hydraulic drive using a functional tie between the adjusting parameters of the pump and hydraulic motor through the pressure difference, torque, and angular rate of the hydraulic motor shaft rotation. The programmable logic controller can implement such a tie. The Coulomb and viscous frictions are taken into consideration when developing a computational model of the hydraulic volume displacement drive. The discharge balance considers external and internal leakages in equivalent clearances of hydraulic machines, as well as compression loss volume caused by hydraulic fluid compressibility and deformation of pipe walls. To correct dynamic properties of the hydraulic drive, the paper proposes that the discharge balance include additional regulated external leakages in the open circuit of the hydraulic drive and regulated internal leakages in the closed-loop circuit. Generalized differential equations having functional multipliers and a multilinked nature have been obtained to describe the operation of the hydraulic positioning and speed drive with two hydraulic volume adjustable working chamber machines. It is shown that the proposed computational model of the hydraulic drive can be taken into consideration in the development of LS («Load-Sensing») drives, in which the pumping pressure is tuned to the value required for the most loaded slave motor to overcome the load. Results attained can be used both in designing the industrial-purpose heavy

  15. Electric field depth-focality tradeoff in transcranial magnetic stimulation: simulation comparison of 50 coil designs.

    Science.gov (United States)

    Deng, Zhi-De; Lisanby, Sarah H; Peterchev, Angel V

    2013-01-01

    Various transcranial magnetic stimulation (TMS) coil designs are available or have been proposed. However, key coil characteristics such as electric field focality and attenuation in depth have not been adequately compared. Knowledge of the coil focality and depth characteristics can help TMS researchers and clinicians with coil selection and interpretation of TMS studies. To quantify the electric field focality and depth of penetration of various TMS coils. The electric field distributions induced by 50 TMS coils were simulated in a spherical human head model using the finite element method. For each coil design, we quantified the electric field penetration by the half-value depth, d(1/2), and focality by the tangential spread, S(1/2), defined as the half-value volume (V(1/2)) divided by the half-value depth, S(1/2) = V(1/2)/d(1/2). The 50 TMS coils exhibit a wide range of electric field focality and depth, but all followed a depth-focality tradeoff: coils with larger half-value depth cannot be as focal as more superficial coils. The ranges of achievable d(1/2) are similar between coils producing circular and figure-8 electric field patterns, ranging 1.0-3.5 cm and 0.9-3.4 cm, respectively. However, figure-8 field coils are more focal, having S(1/2) as low as 5 cm(2) compared to 34 cm(2) for circular field coils. For any coil design, the ability to directly stimulate deeper brain structures is obtained at the expense of inducing wider electrical field spread. Novel coil designs should be benchmarked against comparison coils with consistent metrics such as d(1/2) and S(1/2). Copyright © 2013 Elsevier Inc. All rights reserved.
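
    The two metrics used above can be computed directly from a simulated field distribution. The sketch below assumes a uniformly sampled spherical head model with hypothetical 1 mm^3 voxels and an 8.5 cm scalp radius; it only illustrates the definitions d(1/2) and S(1/2) = V(1/2)/d(1/2), not the authors' finite element pipeline.

```python
import numpy as np

def depth_focality_metrics(e_field, coords, scalp_radius=0.085):
    """Half-value depth d_1/2 and tangential spread S_1/2 = V_1/2 / d_1/2 from
    sampled |E| values on a spherical head model grid (SI units: m, m^2, m^3).
    e_field: (N,) field magnitudes; coords: (N, 3) sample positions in metres."""
    e_half = 0.5 * e_field.max()
    above = e_field >= e_half                    # half-value region
    r = np.linalg.norm(coords, axis=1)
    d_half = scalp_radius - r[above].min()       # deepest point still above half maximum
    voxel_volume = 0.001 ** 3                    # assumes uniform 1 mm^3 sampling
    v_half = above.sum() * voxel_volume          # half-value volume V_1/2
    s_half = v_half / d_half                     # tangential spread S_1/2
    return d_half, s_half
```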

  16. Combining 3d Volume and Mesh Models for Representing Complicated Heritage Buildings

    Science.gov (United States)

    Tsai, F.; Chang, H.; Lin, Y.-W.

    2017-08-01

    This study developed a simple but effective strategy to combine 3D volume and mesh models for representing complicated heritage buildings and structures. The idea is to seamlessly integrate 3D parametric or polyhedral models and mesh-based digital surfaces to generate a hybrid 3D model that can take advantages of both modeling methods. The proposed hybrid model generation framework is separated into three phases. Firstly, after acquiring or generating 3D point clouds of the target, these 3D points are partitioned into different groups. Secondly, a parametric or polyhedral model of each group is generated based on plane and surface fitting algorithms to represent the basic structure of that region. A "bare-bones" model of the target can subsequently be constructed by connecting all 3D volume element models. In the third phase, the constructed bare-bones model is used as a mask to remove points enclosed by the bare-bones model from the original point clouds. The remaining points are then connected to form 3D surface mesh patches. The boundary points of each surface patch are identified and these boundary points are projected onto the surfaces of the bare-bones model. Finally, new meshes are created to connect the projected points and original mesh boundaries to integrate the mesh surfaces with the 3D volume model. The proposed method was applied to an open-source point cloud data set and point clouds of a local historical structure. Preliminary results indicated that the reconstructed hybrid models using the proposed method can retain both fundamental 3D volume characteristics and accurate geometric appearance with fine details. The reconstructed hybrid models can also be used to represent targets in different levels of detail according to user and system requirements in different applications.

  17. Merging Real-Time Channel Sensor Networks with Continental-Scale Hydrologic Models: A Data Assimilation Approach for Improving Accuracy in Flood Depth Predictions

    Directory of Open Access Journals (Sweden)

    Amir Javaheri

    2018-01-01

    Full Text Available This study proposes a framework that (i) uses data assimilation as a post-processing technique to increase the accuracy of water depth prediction, (ii) updates streamflow generated by the National Water Model (NWM), and (iii) proposes a scope for updating the initial condition of continental-scale hydrologic models. Flows predicted by the NWM for each stream were converted to water depth using the Height Above Nearest Drainage (HAND) method. The water level measurements from the Iowa Flood Inundation System (a test bed sensor network in this study) were converted to water depths and then assimilated into the HAND model using the ensemble Kalman filter (EnKF). The results showed that after assimilating the water depth using the EnKF, for a flood event during 2015, the normalized root mean square error was reduced by 0.50 m (51%) for training tributaries. Comparison of the updated modeled water stage values with observations at testing locations showed that the proposed methodology was also effective on the tributaries with no observations. The overall error reduced from 0.89 m to 0.44 m for testing tributaries. The updated depths were then converted to streamflow using rating curves generated by the HAND model. The error between updated flows and observations at the United States Geological Survey (USGS) station at Squaw Creek decreased by 35%. For future work, updated streamflows could also be used to dynamically update initial conditions in the continental-scale National Water Model.
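
    The assimilation step described above can be sketched as a standard stochastic ensemble Kalman filter analysis. The linear observation operator, Gaussian observation error, and array shapes below are assumptions for illustration and do not reproduce the HAND/NWM configuration of the study.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_error_var, H, seed=0):
    """Stochastic EnKF analysis step.
    ensemble: (n_state, n_members) forecast water depths
    obs:      (n_obs,) observed depths at sensor locations
    H:        (n_obs, n_state) observation operator selecting sensor cells."""
    n_members = ensemble.shape[1]
    x_mean = ensemble.mean(axis=1, keepdims=True)
    X = ensemble - x_mean                                    # state anomalies
    Y = H @ X                                                # observation-space anomalies
    P_yy = (Y @ Y.T) / (n_members - 1) + obs_error_var * np.eye(len(obs))
    P_xy = (X @ Y.T) / (n_members - 1)
    K = P_xy @ np.linalg.inv(P_yy)                           # Kalman gain
    rng = np.random.default_rng(seed)
    analysis = np.empty_like(ensemble)
    for m in range(n_members):
        perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_error_var), size=len(obs))
        innovation = perturbed_obs - H @ ensemble[:, m]
        analysis[:, m] = ensemble[:, m] + K @ innovation
    return analysis
```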

  18. Grid Enabled Geospatial Catalogue Web Service

    Science.gov (United States)

    Chen, Ai-Jun; Di, Li-Ping; Wei, Ya-Xing; Liu, Yang; Bui, Yu-Qi; Hu, Chau-Min; Mehrotra, Piyush

    2004-01-01

    Geospatial Catalogue Web Service is a vital service for sharing and interoperating volumes of distributed heterogeneous geospatial resources, such as data, services, applications, and their replicas over the web. Based on Grid technology and the Open Geospatial Consortium (OGC)'s Catalogue Service - Web information model, this paper proposes a new information model for the Geospatial Catalogue Web Service, named GCWS, which can securely provide Grid-based publishing, managing and querying of geospatial data and services, and transparent access to replica data and related services under the Grid environment. This information model integrates the information model of the Grid Replica Location Service (RLS)/Monitoring & Discovery Service (MDS) with the information model of the OGC Catalogue Service (CSW), and refers to the geospatial data metadata standards from ISO 19115, FGDC and the NASA EOS Core System, and the service metadata standards from ISO 19119, to extend itself for expressing geospatial resources. Using GCWS, any valid geospatial user who belongs to an authorized Virtual Organization (VO) can securely publish and manage geospatial resources, and especially query on-demand data in the virtual community and get it back through the data-related services which provide functions such as subsetting, reformatting, reprojection etc. This work facilitates geospatial resource sharing and interoperation under the Grid environment, and makes geospatial resources Grid-enabled and Grid technologies geospatial-enabled. It also allows researchers to focus on science, and not on issues with computing capability, data location, processing and management. GCWS is also a key component for workflow-based virtual geospatial data production.

  19. A formal model for classifying trusted Semantic Web Services

    OpenAIRE

    Galizia, Stefania; Gugliotta, Alessio; Pedrinaci, Carlos

    2008-01-01

    Semantic Web Services (SWS) aim to alleviate Web service limitations, by combining Web service technologies with the potential of Semantic Web. Several open issues have to be tackled yet, in order to enable a safe and efficient Web services selection. One of them is represented by trust. In this paper, we introduce a trust definition and formalize a model for managing trust in SWS. The model approaches the selection of trusted Web services as a classification problem, and it is realized by an...

  20. Variation of Shrinkage Strain within the Depth of Concrete Beams

    Directory of Open Access Journals (Sweden)

    Jong-Hyun Jeong

    2015-11-01

    Full Text Available The variation of shrinkage strain within beam depth was examined through four series of time-dependent laboratory experiments on unreinforced concrete beam specimens. Two types of beam specimens, horizontally cast and vertically cast, were tested; shrinkage variation was observed in the horizontally cast specimens. This indicated that the shrinkage variation within the beam depth was due to water bleeding and tamping during the placement of the fresh concrete. Shrinkage strains were measured within the beam depth by two types of strain gages, surface-attached and embedded. The shrinkage strain distribution within the beam depth showed a consistent tendency for the two types of gages. The test beams were cut into four sections after completion of the test, and the cutting planes were divided into four equal sub-areas to measure the aggregate concentration for each sub-area of the cutting plane. The aggregate concentration increased towards the bottom of the beam. The shrinkage strain distribution was estimated by Hobbs’ equation, which accounts for the change of aggregate volume concentration.

  1. Flow of groundwater from great depths into the near surface deposits - modelling of a local domain in northeast Uppland

    International Nuclear Information System (INIS)

    Holmen, Johan G.; Forsman, Jonas

    2005-01-01

    Purpose: To study the flow of groundwater from rock masses at great depths and into the near-surface deposits by use of mathematical models, and to estimate the spatial and temporal distribution of groundwater from great depths in the near-surface deposits (quaternary deposits). The study is about the hydraulic interaction between the geosphere and the biosphere. Methodology: The system studied is represented by time dependent three dimensional mathematical models. The models include groundwater flows in the rock mass and in the quaternary deposits as well as surface water flows. The established groundwater models have such a resolution (degree of detail) that both rock masses at great depth and near-surface deposits are included in the flow system studied. The modelling includes simulations under both steady state conditions and transient conditions. The transient simulations represent the varying state of the groundwater system studied, caused by the variation in hydro-meteorological conditions during a normal year, a wet year and a dry year. The boundary condition along the topography of the model is a non-linear boundary condition, representing the ground surface above the sea and the varying actual groundwater recharge. Area studied: The area studied is located in Sweden, in the Northeast of the Uppland province, close to the Forsmark nuclear power plant. Water balance modelling: To obtain three significantly different groundwater recharge periods for the transient groundwater flow simulations, a water balance modelling was carried out based on a statistical analysis of available hydro-meteorological data. To obtain a temporal distribution of the runoff (i.e. potential groundwater recharge), we have conducted a numerical time dependent water balance modelling. General conclusions of groundwater modelling: The discharge areas for the flow paths from great depth are given by the topography and located along valleys and lakes; the spatial and temporal extension of

  2. Flow of groundwater from great depths into the near surface deposits - modelling of a local domain in northeast Uppland

    Energy Technology Data Exchange (ETDEWEB)

    Holmen, Johan G.; Forsman, Jonas [Golder Associates, Stockholm (Sweden)

    2005-01-15

    Purpose: To study the flow of groundwater from rock masses at great depths and into the near-surface deposits by use of mathematical models, and to estimate the spatial and temporal distribution of groundwater from great depths in the near-surface deposits (quaternary deposits). The study is about the hydraulic interaction between the geosphere and the biosphere. Methodology: The system studied is represented by time dependent three dimensional mathematical models. The models include groundwater flows in the rock mass and in the quaternary deposits as well as surface water flows. The established groundwater models have such a resolution (degree of detail) that both rock masses at great depth and near-surface deposits are included in the flow system studied. The modelling includes simulations under both steady state conditions and transient conditions. The transient simulations represent the varying state of the groundwater system studied, caused by the variation in hydro-meteorological conditions during a normal year, a wet year and a dry year. The boundary condition along the topography of the model is a non-linear boundary condition, representing the ground surface above the sea and the varying actual groundwater recharge. Area studied: The area studied is located in Sweden, in the Northeast of the Uppland province, close to the Forsmark nuclear power plant. Water balance modelling: To obtain three significantly different groundwater recharge periods for the transient groundwater flow simulations, a water balance modelling was carried out based on a statistical analysis of available hydro-meteorological data. To obtain a temporal distribution of the runoff (i.e. potential groundwater recharge), we have conducted a numerical time dependent water balance modelling. General conclusions of groundwater modelling: The discharge areas for the flow paths from great depth are given by the topography and located along valleys and lakes; the spatial and temporal extension of

  3. Developing a stochastic traffic volume prediction model for public-private partnership projects

    Science.gov (United States)

    Phong, Nguyen Thanh; Likhitruangsilp, Veerasak; Onishi, Masamitsu

    2017-11-01

    Transportation projects require an enormous amount of capital investment resulting from their tremendous size, complexity, and risk. Due to the limitation of public finances, the private sector is invited to participate in transportation project development. The private sector can entirely or partially invest in transportation projects in the form of a Public-Private Partnership (PPP) scheme, which has been an attractive option for several developing countries, including Vietnam. There are many factors affecting the success of PPP projects. The accurate prediction of traffic volume is considered one of the key success factors of PPP transportation projects. However, only a few research works have investigated how to predict traffic volume over a long period of time. Moreover, conventional traffic volume forecasting methods are usually based on deterministic models which predict a single value of traffic volume but do not consider risk and uncertainty. This knowledge gap makes it difficult for concessionaires to estimate PPP transportation project revenues accurately. The objective of this paper is to develop a probabilistic traffic volume prediction model. First, traffic volumes were estimated following the Geometric Brownian Motion (GBM) process. The Monte Carlo technique is then applied to simulate different scenarios. The results show that this stochastic approach can systematically analyze variations in the traffic volume and yield more reliable estimates for PPP projects.
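
    A minimal sketch of the GBM-plus-Monte-Carlo idea described above; the initial volume, drift, and volatility values are hypothetical and would in practice be calibrated from historical traffic counts.

```python
import numpy as np

def simulate_traffic_gbm(v0, mu, sigma, years, n_paths, seed=1):
    """Simulate annual traffic volumes following Geometric Brownian Motion:
    V_{t+dt} = V_t * exp((mu - 0.5*sigma^2)*dt + sigma*sqrt(dt)*Z)."""
    rng = np.random.default_rng(seed)
    dt = 1.0
    z = rng.standard_normal((n_paths, years))
    log_increments = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    return v0 * np.exp(np.cumsum(log_increments, axis=1))

# Hypothetical concession: 30-year horizon, 4% drift, 10% volatility
paths = simulate_traffic_gbm(v0=20_000, mu=0.04, sigma=0.10, years=30, n_paths=10_000)
p5, p50, p95 = np.percentile(paths[:, -1], [5, 50, 95])   # year-30 traffic scenarios
```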

  4. Penetration Depth and Defect Image Contrast Formation in Grazing-Incidence X-ray Topography of 4H-SiC Wafers

    Science.gov (United States)

    Yang, Yu; Guo, Jianqiu; Goue, Ouloide Yannick; Kim, Jun Gyu; Raghothamachar, Balaji; Dudley, Michael; Chung, Gill; Sanchez, Edward; Manning, Ian

    2018-02-01

    Synchrotron x-ray topography in grazing-incidence geometry is useful for discerning defects at different depths below the crystal surface, particularly for 4H-SiC epitaxial wafers. However, the penetration depths measured from x-ray topographs are much larger than theoretical values. To interpret this discrepancy, we have simulated the topographic contrast of dislocations based on two of the most basic contrast formation mechanisms, viz. orientation and kinematical contrast. Orientation contrast considers merely displacement fields associated with dislocations, while kinematical contrast considers also diffraction volume, defined as the effective misorientation around dislocations and the rocking curve width for given diffraction vector. Ray-tracing simulation was carried out to visualize dislocation contrast for both models, taking into account photoelectric absorption of the x-ray beam inside the crystal. The results show that orientation contrast plays the key role in determining both the contrast and x-ray penetration depth for different types of dislocation.

  5. Quebec mental health services networks: models and implementation

    Directory of Open Access Journals (Sweden)

    Marie-Josée Fleury

    2005-06-01

    Full Text Available Purpose: In the transformation of health care systems, the introduction of integrated service networks is considered to be one of the main solutions for enhancing efficiency. In the last few years, a wealth of literature has emerged on the topic of services integration. However, the question of how integrated service networks should be modelled to suit different implementation contexts has barely been touched. To fill that gap, this article presents four models for the organization of mental health integrated networks. Data sources: The proposed models are drawn from three recently published studies on mental health integrated services in the province of Quebec (Canada), with the author as principal investigator. Description: Following an explanation of the concept of integrated service network and a description of the Quebec context for mental health networks, the models, applicable in all settings: rural, urban or semi-urban, and metropolitan, and summarized in four figures, are presented. Discussion and conclusion: To apply the models successfully, the necessity of rallying all the actors of a system, from the strategic, tactical and operational levels, according to the type of integration involved (functional/administrative, clinical and physician-system), is highlighted. The importance of formalizing activities among organizations and actors in a network and reinforcing the governing mechanisms at the local level is also underlined. Finally, a number of integration strategies and key conditions of success to operationalize integrated service networks are suggested.

  6. Disc volume reduction with percutaneous nucleoplasty in an animal model.

    Directory of Open Access Journals (Sweden)

    Richard Kasch

    Full Text Available STUDY DESIGN: We assessed volume following nucleoplasty disc decompression in lower lumbar spines from cadaveric pigs using 7.1-Tesla magnetic resonance imaging (MRI). PURPOSE: To investigate coblation-induced volume reductions as a possible mechanism underlying nucleoplasty. METHODS: We assessed volume following nucleoplastic disc decompression in pig spines using 7.1-Tesla MRI. Volumetry was performed in lumbar discs of 21 postmortem pigs. A preoperative image data set was obtained, volume was determined, and either disc decompression or placebo therapy was performed in a randomized manner. Group 1 (nucleoplasty group) was treated according to the usual nucleoplasty protocol with coblation current applied to 6 channels for 10 seconds each in an application field of 360°; in group 2 (placebo group) the same procedure was performed but without coblation current. After the procedure, a second data set was generated and volumes calculated and matched with the preoperative measurements in a blinded manner. To analyze the effectiveness of nucleoplasty, volumes between treatment and placebo groups were compared. RESULTS: The average preoperative nucleus volume was 0.994 ml (SD: 0.298 ml). In the nucleoplasty group (n = 21) volume was reduced by an average of 0.087 ml (SD: 0.110 ml) or 7.14%. In the placebo group (n = 21) volume was increased by an average of 0.075 ml (SD: 0.075 ml) or 8.94%. The average nucleoplasty-induced volume reduction was 0.162 ml (SD: 0.124 ml) or 16.08%. Volume reduction in lumbar discs was significant in favor of the nucleoplasty group (p<0.0001). CONCLUSIONS: Our study demonstrates that nucleoplasty has a volume-reducing effect on the lumbar nucleus pulposus in an animal model. Furthermore, we show the volume reduction to be a coblation effect of nucleoplasty in porcine discs.

  7. Signs of depth-luminance covariance in 3-D cluttered scenes.

    Science.gov (United States)

    Scaccia, Milena; Langer, Michael S

    2018-03-01

    In three-dimensional (3-D) cluttered scenes such as foliage, deeper surfaces often are more shadowed and hence darker, and so depth and luminance often have negative covariance. We examined whether the sign of depth-luminance covariance plays a role in depth perception in 3-D clutter. We compared scenes rendered with negative and positive depth-luminance covariance where positive covariance means that deeper surfaces are brighter and negative covariance means deeper surfaces are darker. For each scene, the sign of the depth-luminance covariance was given by occlusion cues. We tested whether subjects could use this sign information to judge the depth order of two target surfaces embedded in 3-D clutter. The clutter consisted of distractor surfaces that were randomly distributed in a 3-D volume. We tested three independent variables: the sign of the depth-luminance covariance, the colors of the targets and distractors, and the background luminance. An analysis of variance showed two main effects: Subjects performed better when the deeper surfaces were darker and when the color of the target surfaces was the same as the color of the distractors. There was also a strong interaction: Subjects performed better under a negative depth-luminance covariance condition when targets and distractors had different colors than when they had the same color. Our results are consistent with a "dark means deep" rule, but the use of this rule depends on the similarity between the color of the targets and color of the 3-D clutter.

  8. On the Use of Generalized Volume Scattering Models for the Improvement of General Polarimetric Model-Based Decomposition

    Directory of Open Access Journals (Sweden)

    Qinghua Xie

    2017-01-01

    Full Text Available Recently, a general polarimetric model-based decomposition framework was proposed by Chen et al., which addresses several well-known limitations in previous decomposition methods and implements a simultaneous full-parameter inversion by using complete polarimetric information. However, it only employs four typical models to characterize the volume scattering component, which limits the parameter inversion performance. To overcome this issue, this paper presents two general polarimetric model-based decomposition methods by incorporating the generalized volume scattering model (GVSM) or the simplified adaptive volume scattering model (SAVSM), proposed by Antropov et al. and Huang et al., respectively, into the general decomposition framework proposed by Chen et al. By doing so, the final volume coherency matrix structure is selected from a wide range of volume scattering models within a continuous interval according to the data itself without adding unknowns. Moreover, the new approaches rely on one nonlinear optimization stage instead of four as in the previous method proposed by Chen et al. In addition, the parameter inversion procedure adopts the modified algorithm proposed by Xie et al. which leads to higher accuracy and more physically reliable output parameters. A number of Monte Carlo simulations of polarimetric synthetic aperture radar (PolSAR) data are carried out and show that the proposed method with GVSM yields an overall improvement in the final accuracy of estimated parameters and outperforms both the version using SAVSM and the original approach. In addition, C-band Radarsat-2 and L-band AIRSAR fully polarimetric images over the San Francisco region are also used for testing purposes. A detailed comparison and analysis of decomposition results over different land-cover types are conducted. According to this study, the use of general decomposition models leads to a more accurate quantitative retrieval of target parameters. However, there

  9. Control volume based modelling of compressible flow in reciprocating machines

    DEFF Research Database (Denmark)

    Andersen, Stig Kildegård; Thomsen, Per Grove; Carlsen, Henrik

    2004-01-01

    An approach to modelling unsteady compressible flow that is primarily one dimensional is presented. The approach was developed for creating distributed models of machines with reciprocating pistons but it is not limited to this application. The approach is based on the integral form of the unsteady conservation laws for mass, energy, and momentum applied to a staggered mesh consisting of two overlapping strings of control volumes. Loss mechanisms can be included directly in the governing equations of models by including them as terms in the conservation laws. Heat transfer, flow friction, and multidimensional effects must be calculated using empirical correlations; correlations for steady state flow can be used as an approximation. A transformation that assumes ideal gas is presented for transforming equations for masses and energies in control volumes into the corresponding pressures and temperatures.
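
    The ideal-gas transformation mentioned above, from the conserved mass and internal energy of a control volume to its pressure and temperature, amounts to two algebraic relations. The sketch below assumes constant specific heats with air-like values, which is an assumption for illustration rather than a detail taken from the record.

```python
def p_T_from_mass_energy(m, U, V, R=287.0, cv=718.0):
    """Recover temperature and pressure of a control volume from its conserved
    mass m [kg] and internal energy U [J], assuming an ideal gas with constant
    specific heats (R and cv here are approximate values for air)."""
    T = U / (m * cv)          # from U = m * cv * T
    p = m * R * T / V         # ideal gas law: p * V = m * R * T
    return p, T

p, T = p_T_from_mass_energy(m=0.01, U=2154.0, V=0.001)   # ~300 K, ~861 kPa
```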

  10. A model to incorporate organ deformation in the evaluation of dose/volume relationship

    International Nuclear Information System (INIS)

    Yan, D.; Jaffray, D.; Wong, J.; Brabbins, D.; Martinez, A. A.

    1997-01-01

    Purpose: Measurements of internal organ motion have demonstrated that daily organ deformation exists during the course of radiation treatment. However, a model to evaluate the resultant dose delivered to a daily deformed organ remains a difficult challenge. Current methods which model such organ deformation as rigid body motion in the dose calculation for treatment planning evaluation are incorrect and misleading. In this study, a new model for treatment planning evaluation is introduced which incorporates patient specific information of daily organ deformation and setup variation. The model was also used to retrospectively analyze the actual treatment data measured using daily CT scans for 5 patients with prostate treatment. Methods and Materials: The model assumes that for each patient, the organ of interest can be measured during the first few treatment days. First, the volume of each organ is delineated from each of the daily measurements and cumulated in a 3D bit-map. A tissue occupancy distribution is then constructed with the 50% isodensity representing the mean, or effective, organ volume. During the course of treatment, each voxel in the effective organ volume is assumed to move inside a local 3D neighborhood with a specific distribution function. The neighborhood and the distribution function are deduced from the positions and shapes of the organ in the first few measurements using the biomechanics model of viscoelastic body. For each voxel, the local distribution function is then convolved with the spatial dose distribution. The latter includes also the variation in dose due to daily setup error. As a result, the cumulative dose to the voxel incorporates the effects of daily setup variation and organ deformation. A ''variation adjusted'' dose volume histogram, aDVH, for the effective organ volume can then be constructed for the purpose of treatment evaluation and optimization. Up to 20 daily CT scans and daily portal images for 5 patients with prostate
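
    The central operation described above, convolving each voxel's local motion distribution with the spatial dose distribution, can be illustrated with a simplified one-dimensional Monte Carlo sketch. The Gaussian motion model, dose profile, and voxel positions are hypothetical; the study's biomechanics-based neighbourhoods and setup-error handling are not reproduced.

```python
import numpy as np

def adjusted_voxel_dose(dose_grid, voxel_positions, motion_sd, n_samples=1000, seed=0):
    """Monte Carlo estimate of the cumulative dose per organ voxel when each voxel
    moves inside a local neighbourhood (here an isotropic Gaussian of width
    motion_sd, a simplifying assumption) and dose is sampled at the displaced position."""
    rng = np.random.default_rng(seed)
    grid_coords = np.arange(len(dose_grid))          # 1D dose grid for brevity
    doses = np.empty(len(voxel_positions))
    for i, x in enumerate(voxel_positions):
        displaced = x + rng.normal(0.0, motion_sd, n_samples)
        doses[i] = np.interp(displaced, grid_coords, dose_grid).mean()
    return doses

dose = np.linspace(0, 70, 100)                       # toy dose profile (Gy)
voxels = np.array([20.0, 50.0, 80.0])                # voxel positions in grid units
adjusted = adjusted_voxel_dose(dose, voxels, motion_sd=3.0)
hist, edges = np.histogram(adjusted, bins=10)        # basis for an adjusted DVH (aDVH)
```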

  11. [Geothermal system temperature-depth database and model for data analysis]. 5. quarterly technical progress report

    Energy Technology Data Exchange (ETDEWEB)

    Blackwell, D.D.

    1998-04-25

    During this first quarter of the second year of the contract activity has involved several different tasks. The author has continued to work on three tasks most intensively during this quarter: the task of implementing the data base for geothermal system temperature-depth, the maintenance of the WWW site with the heat flow and gradient data base, and finally the development of a modeling capability for analysis of the geothermal system exploration data. The author has completed the task of developing a data base template for geothermal system temperature-depth data that can be used in conjunction with the regional data base that he had already developed and is now implementing it. Progress is described.

  12. A resource oriented web service for environmental modeling

    Science.gov (United States)

    Ferencik, Ioan

    2013-04-01

    Environmental modeling is a largely adopted practice in the study of natural phenomena. Environmental models can be difficult to build and use, and thus sharing them within the community is an important aspect. The most common approach to sharing a model is to expose it as a web service. In practice, the interaction with this web service is cumbersome due to the lack of a standardized contract and the complexity of the model being exposed. In this work we investigate the use of a resource oriented approach to exposing environmental models as web services. We view a model as a layered resource built atop the object concept from Object Oriented Programming, augmented with persistence capabilities provided by an embedded object database to keep track of its state, and implementing the four basic principles of resource oriented architectures: addressability, statelessness, representation and uniform interface. For the implementation we use exclusively open source software: the Django framework, the dyBase object oriented database and the Python programming language. We developed a generic framework of resources structured into a hierarchy of types and subsequently extended this typology with resources specific to the domain of environmental modeling. To test our web service we used cURL, a robust command-line based web client.
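
    The record describes a Django/dyBase implementation; as a framework-free illustration of the resource-oriented principles it lists (addressability, statelessness, representation, uniform interface), the sketch below exposes a model resource over HTTP using only the Python standard library, with an in-memory dict standing in for the embedded object database. The names, routes, and stored fields are hypothetical.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory stand-in for the embedded object database that persists model state.
MODELS = {"1": {"name": "runoff-model", "state": {"soil_moisture": 0.25}}}

class ModelResource(BaseHTTPRequestHandler):
    """Uniform interface: GET returns a JSON representation of a model,
    PUT replaces its stored state. Addressability comes from the URL
    /models/<id>; the handler itself keeps no session state."""

    def do_GET(self):
        model_id = self.path.rstrip("/").split("/")[-1]
        if model_id in MODELS:
            body = json.dumps(MODELS[model_id]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def do_PUT(self):
        model_id = self.path.rstrip("/").split("/")[-1]
        length = int(self.headers.get("Content-Length", 0))
        MODELS[model_id] = json.loads(self.rfile.read(length))
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ModelResource).serve_forever()
```

    A client such as cURL could then address an individual model with, for example, GET or PUT requests to http://localhost:8000/models/1.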

  13. Development and Analysis of Volume Multi-Sphere Method Model Generation using Electric Field Fitting

    Science.gov (United States)

    Ingram, G. J.

    Electrostatic modeling of spacecraft has wide-reaching applications such as detumbling space debris in the Geosynchronous Earth Orbit regime before docking, servicing and tugging space debris to graveyard orbits, and Lorentz augmented orbits. The viability of electrostatic actuation control applications relies on faster-than-realtime characterization of the electrostatic interaction. The Volume Multi-Sphere Method (VMSM) seeks the optimal placement and radii of a small number of equipotential spheres to accurately model the electrostatic force and torque on a conducting space object. Current VMSM models tuned using force and torque comparisons with commercially available finite element software are subject to the modeled probe size and numerical errors of the software. This work first investigates fitting of VMSM models to Surface-MSM (SMSM) generated electrical field data, removing modeling dependence on probe geometry while significantly increasing performance and speed. A proposed electric field matching cost function is compared to a force and torque cost function, the inclusion of a self-capacitance constraint is explored and 4 degree-of-freedom VMSM models generated using electric field matching are investigated. The resulting E-field based VMSM development framework is illustrated on a box-shaped hub with a single solar panel, and convergence properties of select models are qualitatively analyzed. Despite the complex non-symmetric spacecraft geometry, elegantly simple 2-sphere VMSM solutions provide force and torque fits within a few percent.

  14. Uncertainty in solid precipitation and snow depth prediction for Siberia using the Noah and Noah-MP land surface models

    Science.gov (United States)

    Suzuki, Kazuyoshi; Zupanski, Milija

    2018-01-01

    In this study, we investigate the uncertainties associated with land surface processes in an ensemble prediction context. Specifically, we compare the uncertainties produced by a coupled atmosphere-land modeling system with two different land surface models, the Noah-MP land surface model (LSM) and the Noah LSM, by using the Maximum Likelihood Ensemble Filter (MLEF) data assimilation system as a platform for ensemble prediction. We carried out 24-hour prediction simulations in Siberia with 32 ensemble members beginning at 00:00 UTC on 5 March 2013. We then compared the model prediction uncertainty of snow depth and solid precipitation with observation-based research products and evaluated the standard deviation of the ensemble spread. The prediction skill and ensemble spread exhibited high positive correlation for both LSMs, indicating a realistic uncertainty estimation. The inclusion of a multiple snow-layer model in the Noah-MP LSM was beneficial for reducing the uncertainties of snow depth and snow depth change compared to the Noah LSM, but the uncertainty in daily solid precipitation showed minimal difference between the two LSMs. The impact of LSM choice in reducing temperature uncertainty was limited to surface layers of the atmosphere. In summary, we found that the more sophisticated Noah-MP LSM reduces uncertainties associated with land surface processes compared to the Noah LSM. Thus, using prediction models with improved skill implies improved predictability and greater certainty of prediction.
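
    The spread-skill comparison described above can be illustrated with a small diagnostic: compute the per-cell ensemble standard deviation, the error of the ensemble mean against a reference field, and the spatial correlation between the two. The synthetic fields and ensemble size below are placeholders, not the MLEF/Noah-MP output of the study.

```python
import numpy as np

def spread_and_skill(ensemble, reference):
    """ensemble: (n_members, ny, nx) snow-depth forecasts; reference: (ny, nx).
    Returns the per-cell ensemble spread (std), the absolute error of the
    ensemble mean, and their spatial correlation as a spread-skill diagnostic."""
    mean = ensemble.mean(axis=0)
    spread = ensemble.std(axis=0, ddof=1)
    error = np.abs(mean - reference)
    corr = np.corrcoef(spread.ravel(), error.ravel())[0, 1]
    return spread, error, corr

rng = np.random.default_rng(3)
truth = rng.uniform(0.2, 1.0, size=(40, 40))                  # synthetic snow depth (m)
members = truth + rng.normal(0.0, 0.1, size=(32, 40, 40))     # 32-member ensemble
spread, error, corr = spread_and_skill(members, truth)
```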

  15. Constitutive Modelling in Thermomechanical Processes, Using The Control Volume Method on Staggered Grid

    DEFF Research Database (Denmark)

    Thorborg, Jesper

    ..., however, is constituted by the implementation of the $J_2$ flow theory in the control volume method. To apply the control volume formulation to the process of hardening concrete, viscoelastic stress-strain models have been examined in terms of various rheological models. The generalized 3D models are based on two different suggestions in the literature, that is, compressible or incompressible behaviour of the viscous response in the dashpot element. Numerical implementation of the models has shown very good agreement with corresponding analytical solutions. The viscoelastic solid mechanical model is used...

  16. Water depth, filling time and volume of wick irrigation equipment and determination of water distribution uniformity in substrates

    Directory of Open Access Journals (Sweden)

    Rhuanito Soranz Ferrarezi

    2012-01-01

    Full Text Available The aims of this experiment were to evaluate the water depth, filling time and filling volume of a wick irrigation equipment using auto-compensating gutters, and to determine the water distribution uniformity (WDU) of this equipment with commercial organic substrates (pine bark and coconut coir). Two experimental modules were assembled in a completely randomized design with five replications. There was a large variation in the water depth measurements (1.6 to 4.0 cm), even with the equipment levelled. The average filling time was 6 h 22 min for Module 1 (pine bark) and 3 h 45 min for Module 2 (coconut coir). The filling volume was variable: the gutters at the extremities (no. 1 and 5) presented the smallest volumes in Module 1, and the gutters at the beginning (no. 1 and 2) in Module 2. In Module 1, the volumetric water content (θ) ranged from 42% to 94%, and in Module 2 from 24% to 72%, with isolated points of drying and/or waterlogging. The water depth, filling time and filling volume of the gutters were non-uniform in both experimental modules and in the five auto-compensating gutters, indicating imperfections in the equipment. Water distribution varied between the substrates because of their physical-hydraulic characteristics and the water depth in the gutters, with higher moisture and more uniform water distribution in pine bark than in coconut coir.

  17. Simultaneous retrieval of sea ice thickness and snow depth using concurrent active altimetry and passive L-band remote sensing data

    Science.gov (United States)

    Zhou, L.; Xu, S.; Liu, J.

    2017-12-01

    The retrieval of sea ice thickness mainly relies on satellite altimetry, and the freeboard measurements are converted to sea ice thickness (hi) under certain assumptions over snow loading. The uncertainty in snow depth (hs) is a major source of uncertainty in the retrieved sea ice thickness and total volume for both radar and laser altimetry. In this study, novel algorithms for the simultaneous retrieval of hi and hs are proposed for the data synergy of L-band (1.4 GHz) passive remote sensing and both types of active altimetry: (1) L-band (1.4 GHz) brightness temperature (TB) from the Soil Moisture Ocean Salinity (SMOS) satellite and sea ice freeboard (FBice) from radar altimetry, (2) L-band TB data and snow freeboard (FBsnow) from laser altimetry. Two physical models serve as the forward models for the retrieval: an L-band radiation model and the hydrostatic equilibrium model. Verification with SMOS and Operation IceBridge (OIB) data is carried out, showing overall good retrieval accuracy for both sea ice parameters. Specifically, we show that the covariability between hs and FBsnow is crucial for the synergy between TB and FBsnow. Comparison with existing algorithms shows lower uncertainty in both sea ice parameters, and that the uncertainty in the retrieved sea ice thickness as caused by that of snow depth is spatially uncorrelated, with the potential reduction of the volume uncertainty through spatial sampling. The proposed algorithms can be applied to the retrieval of sea ice parameters at basin scale, using concurrent active and passive remote sensing data from satellites.
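
    The hydrostatic equilibrium forward model named above links freeboard, snow depth, and ice thickness through bulk densities. A minimal sketch, using typical literature density values rather than values from this record:

```python
def ice_thickness_from_freeboard(freeboard, snow_depth, kind="radar",
                                 rho_w=1024.0, rho_i=917.0, rho_s=320.0):
    """Hydrostatic-equilibrium conversion of freeboard to sea ice thickness.
    kind='radar' -> freeboard is the ice freeboard FBice
    kind='laser' -> freeboard is the snow freeboard FBsnow = FBice + snow_depth
    Densities (kg m^-3) are typical assumed values, not from this study."""
    if kind == "radar":
        return (rho_w * freeboard + rho_s * snow_depth) / (rho_w - rho_i)
    return (rho_w * freeboard - (rho_w - rho_s) * snow_depth) / (rho_w - rho_i)

# 0.35 m snow freeboard with 0.20 m of snow -> roughly 2 m of ice
print(ice_thickness_from_freeboard(0.35, 0.20, kind="laser"))
```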

  18. Using UML to Model Web Services for Automatic Composition

    OpenAIRE

    Amal Elgammal; Mohamed El-Sharkawi

    2010-01-01

    There is a great interest paid to the web services paradigm nowadays. One of the most important problems related to the web service paradigm is the automatic composition of web services. Several frameworks have been proposed to achieve this novel goal. The most recent and richest framework (model) is the Colombo model. However, even for experienced developers, working with Colombo formalisms is low-level, very complex and time-consuming. We propose to use UML (Unified Modeling Language) to mod...

  19. Long-Term Prediction of Emergency Department Revenue and Visitor Volume Using Autoregressive Integrated Moving Average Model

    Directory of Open Access Journals (Sweden)

    Chieh-Fan Chen

    2011-01-01

    Full Text Available This study analyzed meteorological, clinical and economic factors in terms of their effects on monthly ED revenue and visitor volume. Monthly data from January 1, 2005 to September 30, 2009 were analyzed. Spearman correlation and cross-correlation analyses were performed to identify the correlation between each independent variable, ED revenue, and visitor volume. Autoregressive integrated moving average (ARIMA model was used to quantify the relationship between each independent variable, ED revenue, and visitor volume. The accuracies were evaluated by comparing model forecasts to actual values with mean absolute percentage of error. Sensitivity of prediction errors to model training time was also evaluated. The ARIMA models indicated that mean maximum temperature, relative humidity, rainfall, non-trauma, and trauma visits may correlate positively with ED revenue, but mean minimum temperature may correlate negatively with ED revenue. Moreover, mean minimum temperature and stock market index fluctuation may correlate positively with trauma visitor volume. Mean maximum temperature, relative humidity and stock market index fluctuation may correlate positively with non-trauma visitor volume. Mean maximum temperature and relative humidity may correlate positively with pediatric visitor volume, but mean minimum temperature may correlate negatively with pediatric visitor volume. The model also performed well in forecasting revenue and visitor volume.
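
    A minimal sketch of fitting and evaluating an ARIMA forecast of monthly visitor volume with statsmodels; the synthetic series, the (1, 1, 1) order, and the 12-month holdout are illustrative assumptions, not the specification selected in the study.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly ED visitor counts (the study used Jan 2005 - Sep 2009 data).
rng = np.random.default_rng(7)
index = pd.date_range("2005-01-01", periods=57, freq="MS")
visits = pd.Series(4000 + 30 * np.arange(57) + rng.normal(0, 150, 57), index=index)

train, test = visits[:-12], visits[-12:]
model = ARIMA(train, order=(1, 1, 1))          # order chosen for illustration only
fit = model.fit()
forecast = fit.forecast(steps=12)

# Mean absolute percentage error of the 12-month out-of-sample forecast
mape = np.mean(np.abs((test.values - forecast.values) / test.values)) * 100
print(f"12-month forecast MAPE: {mape:.1f}%")
```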

  20. Service Learning In Physics: The Consultant Model

    Science.gov (United States)

    Guerra, David

    2005-04-01

    Each year thousands of students across the country and across the academic disciplines participate in service learning. Unfortunately, with no clear model for integrating community service into the physics curriculum, there are very few physics students engaged in service learning. To overcome this shortfall, a consultant based service-learning program has been developed and successfully implemented at Saint Anselm College (SAC). As consultants, students in upper level physics courses apply their problem solving skills in the service of others. Most recently, SAC students provided technical and managerial support to a group from Girl's Inc., a national empowerment program for girls in high-risk, underserved areas, who were participating in the national FIRST Lego League Robotics competition. In their role as consultants the SAC students provided technical information through brainstorming sessions and helped the girls stay on task with project management techniques, like milestone charting. This consultant model of service-learning, provides technical support to groups that may not have a great deal of resources and gives physics students a way to improve their interpersonal skills, test their technical expertise, and better define the marketable skill set they are developing through the physics curriculum.

  1. Social Responsibility and Sustainability: Multidisciplinary Perspectives through Service Learning. Service Learning for Civic Engagement Series

    Science.gov (United States)

    McDonald, Tracy, Ed.

    2011-01-01

    This concluding volume in the series presents the work of faculty who have been moved to make sustainability the focus of their work, and to use service learning as one method of teaching sustainability to their students. The chapters in the opening section of this book-- Environmental Awareness--offer models for opening students to the awareness…

  2. DEPTH-CHARGE static and time-dependent perturbation/sensitivity system for nuclear reactor core analysis. Revision I. [DEPTH-CHARGE code

    Energy Technology Data Exchange (ETDEWEB)

    White, J.R.

    1985-04-01

    This report provides the background theory, user input, and sample problems required for the efficient application of the DEPTH-CHARGE system - a code block for both static and time-dependent perturbation theory and data sensitivity analyses. The DEPTH-CHARGE system is of modular construction and has been implemented within the VENTURE-BURNER computational system at Oak Ridge National Laboratory. The DEPTH module (coupled with VENTURE) solves for the three adjoint functions of Depletion Perturbation Theory and calculates the desired time-dependent derivatives of the response with respect to the nuclide concentrations and nuclear data utilized in the reference model. The CHARGE code is a collection of utility routines for general data manipulation and input preparation and considerably extends the usefulness of the system through the automatic generation of adjoint sources, estimated perturbed responses, and relative data sensitivity coefficients. Combined, the DEPTH-CHARGE system provides, for the first time, a complete generalized first-order perturbation/sensitivity theory capability for both static and time-dependent analyses of realistic multidimensional reactor models. This current documentation incorporates minor revisions to the original DEPTH-CHARGE documentation (ORNL/CSD-78) to reflect some new capabilities within the individual codes.

  3. Model of service-oriented catering supply chain performance evaluation

    Directory of Open Access Journals (Sweden)

    Juanqiong Gou

    2013-03-01

    Full Text Available Purpose: The aim of this paper is to construct a performance evaluation model for the service-oriented catering supply chain. Design/methodology/approach: Based on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain and then presents a service-oriented catering supply chain model built on a platform of logistics and information. Finally, the fuzzy AHP method is used to evaluate the performance of the service-oriented catering supply chain. Findings: From the analysis of the characteristics of the catering supply chain, we construct a performance evaluation model intended to safeguard food safety, logistics efficiency, price stability and so on. Practical implications: To evolve an efficient and effective service supply chain, the model can be used not only for an enterprise's own improvement, but also for selecting different customers and choosing a different model of development. Originality/value: This paper offers a new definition of the service-oriented catering supply chain and a model to evaluate its performance.

  4. 1988 DOE model conference proceedings: Volume 5

    Energy Technology Data Exchange (ETDEWEB)

    1988-01-01

    These Proceedings of the October 3--7, 1988 DOE Model Conference are a compilation of the papers that were presented in the technical or poster sessions at the conference. Papers and posters not submitted for publication are not included in the Proceedings. The Table of Contents lists the titles of papers as well as the names of the presenters. These individuals are not, in all cases, the primary authors of the papers published. The actual title pages, appearing later with the papers, show the primary author(s) and all co-authors. The papers in all three volumes of the Proceedings appear as they were originally submitted for publication and have not been edited or changed in any way. Topics discussed in Volume 5 include environmental assessments and program strategies, waste treatment technologies, and regulations and compliance studies.

  5. 1988 DOE model conference proceedings: Volume 3

    International Nuclear Information System (INIS)

    1988-01-01

    These Proceedings of the October 3 - 7, 1988, DOE Model Conference are a compilation of the papers that were presented in the technical or poster sessions at the conference. Papers and posters not submitted for publication are not included in the Proceedings. The Table of Contents lists the titles of papers as well as the names of the presenters. These individuals are not, in all cases, the primary authors of the papers published. The actual title pages, appearing later with the papers, show the primary author(s) and all co-authors. The papers in all three volumes of the proceedings appear as they were originally submitted for publication and have not been edited or changed in any way. Topics included in Volume 3 include treatment of soils, waste characterization and certification, waste minimization site remediation management plans and programs, and training programs

  6. 1988 DOE model conference proceedings: Volume 5

    International Nuclear Information System (INIS)

    1988-01-01

    These Proceedings of the October 3--7, 1988 DOE Model Conference are a compilation of the papers that were presented in the technical or poster sessions at the conference. Papers and posters not submitted for publication are not included in the Proceedings. The Table of Contents lists the titles of papers as well as the names of the presenters. These individuals are not, in all cases, the primary authors of the papers published. The actual title pages, appearing later with the papers, show the primary author(s) and all co-authors. The papers in all three volumes of the Proceedings appear as they were originally submitted for publication and have not been edited or changed in any way. Topics discussed in Volume 5 include environmental assessments and program strategies, waste treatment technologies, and regulations and compliance studies.

  7. Remote information service access system based on a client-server-service model

    Science.gov (United States)

    Konrad, A.M.

    1996-08-06

    The system comprises a local host computing system, a remote host computing system connected to it by a network, and three service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality; a Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component. That Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems, and provides the illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service. 16 figs.

  8. An innovative service process development based on a reference model

    Directory of Open Access Journals (Sweden)

    Lorenzo Sanfelice Frazzon

    2015-06-01

    Full Text Available This article examines the new service development (NSD) process, focusing specifically on the case of a financial service, guided by the following research questions: what are the processes and practices used in the development and design of new financial services? How do the results of the financial NSD proposal reflect on the NSD area as a whole? Accordingly, the study aims to show and describe a financial service development conducted at Helpinveste. The paper focuses on the Conceptual Design (activities: definition of specifications and development of alternative solutions for the service) and Service Process Design (Service Representation) phases. The methodological procedures are based on the process approach, using a reference model for developing new services. In order to operationalize the model, several techniques were used for the various stages of the project, e.g. QFD and Service Blueprint. Lastly, the conclusions report contributions from the reference model application, both theoretical and practical, as well as the limitations and recommendations for further research.

  9. Heavy-traffic limits for polling models with exhaustive service and non-FCFS service orders

    NARCIS (Netherlands)

    P. Vis (Petra); R. Bekker (Rene); R.D. van der Mei (Rob)

    2015-01-01

    We study cyclic polling models with exhaustive service at each queue under a variety of non-FCFS (first-come-first-served) local service orders, namely last-come-first-served with and without preemption, random-order-of-service, processor sharing, the multi-class priority scheduling with

  10. Determination of bone mineral volume fraction using impedance analysis and Bruggeman model

    Energy Technology Data Exchange (ETDEWEB)

    Ciuchi, Ioana Veronica; Olariu, Cristina Stefania, E-mail: oocristina@yahoo.com; Mitoseriu, Liliana, E-mail: lmtsr@uaic.ro

    2013-11-20

    Highlights: • Mineral volume fraction of a bone sample was determined. • Dielectric properties of the bone sample and of collagen type I were determined by impedance spectroscopy. • The Bruggeman effective medium approximation was applied in order to evaluate the mineral volume fraction of the sample. • The computed values were compared with ones derived from a histogram test performed on SEM micrographs. -- Abstract: Measurements by impedance spectroscopy and the Bruggeman effective medium approximation model were employed in order to determine the mineral volume fraction of dry bone. This approach assumes that two or more phases are present in the composite: the matrix (environment) and one or more inclusion phases. A fragment of femur diaphysis dense bone from a young pig was investigated in its dehydrated state. By measuring the dielectric properties of bone and its main components (hydroxyapatite and collagen) and using the Bruggeman approach, the mineral volume filling factor was determined. The computed value of the mineral volume fraction was confirmed by a histogram test analysis based on the SEM microstructures. In spite of its simplicity, the method provides a good approximation for the bone mineral volume fraction. The method, which uses impedance spectroscopy and EMA modeling, can be further developed, by considering the conductive components of the bone tissue, into a non-invasive in situ impedance technique for bone composition evaluation and monitoring.
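
    For the two-phase case, the symmetric Bruggeman relation can be inverted in closed form for the inclusion volume fraction once the effective permittivity has been measured. The permittivity values in the sketch below are hypothetical and only illustrate the algebra, not the paper's measured data.

```python
def bruggeman_mineral_fraction(eps_eff, eps_matrix, eps_mineral):
    """Solve the two-phase symmetric Bruggeman relation
        f*(eps_mineral - eps_eff)/(eps_mineral + 2*eps_eff)
      + (1 - f)*(eps_matrix - eps_eff)/(eps_matrix + 2*eps_eff) = 0
    for the mineral (inclusion) volume fraction f, given permittivities
    obtained from impedance spectroscopy."""
    a = (eps_mineral - eps_eff) / (eps_mineral + 2.0 * eps_eff)
    b = (eps_matrix - eps_eff) / (eps_matrix + 2.0 * eps_eff)
    return b / (b - a)

# Hypothetical permittivities: collagen-like matrix, hydroxyapatite-like mineral, bulk bone
print(bruggeman_mineral_fraction(eps_eff=9.0, eps_matrix=5.0, eps_mineral=15.0))  # ~0.49
```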

  11. Determination of bone mineral volume fraction using impedance analysis and Bruggeman model

    International Nuclear Information System (INIS)

    Ciuchi, Ioana Veronica; Olariu, Cristina Stefania; Mitoseriu, Liliana

    2013-01-01

    Highlights: • The mineral volume fraction of a bone sample was determined. • Dielectric properties of the bone sample and of type I collagen were determined by impedance spectroscopy. • The Bruggeman effective medium approximation was applied to evaluate the mineral volume fraction of the sample. • The computed values were compared with ones derived from a histogram test performed on SEM micrographs. -- Abstract: Impedance spectroscopy measurements and the Bruggeman effective medium approximation model were employed to determine the mineral volume fraction of dry bone. This approach assumes that two or more phases are present in the composite: one is the matrix (environment) and the others are inclusion phases. A fragment of dense femur diaphysis bone from a young pig was investigated in its dehydrated state. By measuring the dielectric properties of bone and of its main components (hydroxyapatite and collagen) and using the Bruggeman approach, the mineral volume filling factor was determined. The computed mineral volume fraction was confirmed by a histogram analysis based on the SEM microstructures. In spite of its simplicity, the method provides a good approximation of the bone mineral volume fraction. The method, which uses impedance spectroscopy and EMA modeling, can be further developed by considering the conductive components of the bone tissue into a non-invasive in situ impedance technique for bone composition evaluation and monitoring.
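
    The two records above invert the Bruggeman effective medium approximation to obtain a mineral volume fraction from measured permittivities. As a rough illustration of that inversion (not the paper's implementation), the sketch below solves the symmetric two-phase Bruggeman relation in closed form; the permittivity values are placeholders, not data from the study.

```python
# Two-phase Bruggeman effective-medium inversion (illustrative sketch):
#   f*(e1 - ee)/(e1 + 2*ee) + (1 - f)*(e2 - ee)/(e2 + 2*ee) = 0
# Solve for the inclusion (mineral) volume fraction f, given the measured
# effective permittivity ee and the phase permittivities e1 (mineral inclusion)
# and e2 (collagen matrix).

def bruggeman_volume_fraction(eps_inclusion, eps_matrix, eps_effective):
    a = (eps_inclusion - eps_effective) / (eps_inclusion + 2.0 * eps_effective)
    b = (eps_matrix - eps_effective) / (eps_matrix + 2.0 * eps_effective)
    return b / (b - a)

# Placeholder permittivities (hypothetical, for illustration only).
f_mineral = bruggeman_volume_fraction(eps_inclusion=15.0, eps_matrix=4.0, eps_effective=8.0)
print(f"Estimated mineral volume fraction: {f_mineral:.2f}")
```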

  12. Precise determination of universal finite volume observables in the Gross-Neveu model

    Energy Technology Data Exchange (ETDEWEB)

    Korzec, T.

    2007-01-26

    The Gross-Neveu model is a quantum field theory in two space-time dimensions that shares many features with quantum chromodynamics. In this thesis the continuum model and its discretized versions are reviewed and a finite volume renormalization scheme is introduced and tested. Calculations in the limit of infinitely many fermion flavors as well as perturbative computations are carried out. In extensive Monte-Carlo simulations of the one-flavor and the four-flavor lattice models with Wilson fermions a set of universal finite volume observables is calculated to high precision. In the one-flavor model, which is equivalent to the massless Thirring model, the continuum-extrapolated Monte-Carlo results are confronted with an exact solution of the model. (orig.)

  13. Precise determination of universal finite volume observables in the Gross-Neveu model

    International Nuclear Information System (INIS)

    Korzec, T.

    2007-01-01

    The Gross-Neveu model is a quantum field theory in two space-time dimensions that shares many features with quantum chromodynamics. In this thesis the continuum model and its discretized versions are reviewed and a finite volume renormalization scheme is introduced and tested. Calculations in the limit of infinitely many fermion flavors as well as perturbative computations are carried out. In extensive Monte-Carlo simulations of the one-flavor and the four-flavor lattice models with Wilson fermions a set of universal finite volume observables is calculated to high precision. In the one-flavor model, which is equivalent to the massless Thirring model, the continuum-extrapolated Monte-Carlo results are confronted with an exact solution of the model. (orig.)

  14. BPMN4SOA : A service oriented process modelling language

    OpenAIRE

    Bergstøl, Eivind

    2010-01-01

    Service oriented architectures have become very popular over the last few years. The abstraction of computer systems into a service paradigm brings many new solutions, both for cross-business processes to aid interoperability and for the reuse of existing legacy systems in a new network-centric world. In the wake of this, service modelling has become a part of OMG's Model Driven Architecture, and new modelling languages that build on past experience are emerging for the new paradigm. BPMN 2.0 and...

  15. The cloud services innovation platform- enabling service-based environmental modelling using infrastructure-as-a-service cloud computing

    Science.gov (United States)

    Service oriented architectures allow modelling engines to be hosted over the Internet, abstracting physical hardware configuration and software deployments from model users. Many existing environmental models are deployed as desktop applications running on users' personal computers (PCs). Migration ...

  16. A mechanistic model of an upper bound on oceanic carbon export as a function of mixed layer depth and temperature

    Directory of Open Access Journals (Sweden)

    Z. Li

    2017-11-01

    Full Text Available Export production reflects the amount of organic matter transferred from the ocean surface to depth through biological processes. This export is in large part controlled by nutrient and light availability, which are conditioned by mixed layer depth (MLD). In this study, building on Sverdrup's critical depth hypothesis, we derive a mechanistic model of an upper bound on carbon export based on the metabolic balance between photosynthesis and respiration as a function of MLD and temperature. We find that the upper bound is a positively skewed bell-shaped function of MLD. Specifically, the upper bound increases with deepening mixed layers down to a critical depth, beyond which a long tail of decreasing carbon export is associated with increasing heterotrophic activity and decreasing light availability. We also show that in cold regions the upper bound on carbon export decreases with increasing temperature when mixed layers are deep, but increases with temperature when mixed layers are shallow. A meta-analysis shows that our model envelopes field estimates of carbon export from the mixed layer. When compared to satellite export production estimates, our model indicates that export production in some regions of the Southern Ocean, particularly the subantarctic zone, is likely limited by light for a significant portion of the growing season.

  17. Moving beyond the age-depth model paradigm in deep-sea palaeoclimate archives: dual radiocarbon and stable isotope analysis on single foraminifera

    Science.gov (United States)

    Lougheed, Bryan C.; Metcalfe, Brett; Ninnemann, Ulysses S.; Wacker, Lukas

    2018-04-01

    Late-glacial palaeoclimate reconstructions from deep-sea sediment archives provide valuable insight into past rapid changes in ocean chemistry. Unfortunately, only a small proportion of the ocean floor with sufficiently high sediment accumulation rate (SAR) is suitable for such reconstructions using the long-standing age-depth model approach. We employ ultra-small radiocarbon (14C) dating on single microscopic foraminifera to demonstrate that the long-standing age-depth model method conceals large age uncertainties caused by post-depositional sediment mixing, meaning that existing studies may underestimate total geochronological error. We find that the age-depth distribution of our 14C-dated single foraminifera is in good agreement with existing bioturbation models only after one takes the possibility of Zoophycos burrowing into account. To overcome the problems associated with the age-depth paradigm, we use the first ever dual 14C and stable isotope (δ18O and δ13C) analysis on single microscopic foraminifera to produce a palaeoclimate time series independent of the age-depth paradigm. This new state of the art essentially decouples single foraminifera from the age-depth paradigm to provide multiple floating, temporal snapshots of ocean chemistry, thus allowing for the successful extraction of temporally accurate palaeoclimate data from low-SAR deep-sea archives. This new method can address large geographical gaps in late-glacial benthic palaeoceanographic reconstructions by opening up vast areas of previously disregarded, low-SAR deep-sea archives to research, which will lead to an improved understanding of the global interaction between oceans and climate.

  18. Moving beyond the age–depth model paradigm in deep-sea palaeoclimate archives: dual radiocarbon and stable isotope analysis on single foraminifera

    Directory of Open Access Journals (Sweden)

    B. C. Lougheed

    2018-04-01

    Full Text Available Late-glacial palaeoclimate reconstructions from deep-sea sediment archives provide valuable insight into past rapid changes in ocean chemistry. Unfortunately, only a small proportion of the ocean floor with sufficiently high sediment accumulation rate (SAR) is suitable for such reconstructions using the long-standing age–depth model approach. We employ ultra-small radiocarbon (14C) dating on single microscopic foraminifera to demonstrate that the long-standing age–depth model method conceals large age uncertainties caused by post-depositional sediment mixing, meaning that existing studies may underestimate total geochronological error. We find that the age–depth distribution of our 14C-dated single foraminifera is in good agreement with existing bioturbation models only after one takes the possibility of Zoophycos burrowing into account. To overcome the problems associated with the age–depth paradigm, we use the first ever dual 14C and stable isotope (δ18O and δ13C) analysis on single microscopic foraminifera to produce a palaeoclimate time series independent of the age–depth paradigm. This new state of the art essentially decouples single foraminifera from the age–depth paradigm to provide multiple floating, temporal snapshots of ocean chemistry, thus allowing for the successful extraction of temporally accurate palaeoclimate data from low-SAR deep-sea archives. This new method can address large geographical gaps in late-glacial benthic palaeoceanographic reconstructions by opening up vast areas of previously disregarded, low-SAR deep-sea archives to research, which will lead to an improved understanding of the global interaction between oceans and climate.

  19. The effects of forward speed and depth of conservation tillage on soil bulk density

    Directory of Open Access Journals (Sweden)

    A Mahmoudi

    2015-09-01

    , besides the importance of tillage depth and speed in the performance of different tillers. Materials and methods: This investigation was carried out as a randomized block, split-plot experimental design. The main factor, tillage depth, was tested at two levels (10 and 20 cm) and the second factor, tillage speed, at four levels (6, 8, 10 and 12 km h-1 for Bostan-Abad and 8, 10, 12 and 14 km h-1 for Hashtrood), with four replications. The experiment was carried out using a combined tillage implement made by the Sazeh Keshte Bukan Company, which is widely used in Eastern Azerbaijan, with Massey Ferguson 285 and 399 tractors in Bostan-Abad and Hashtrood, respectively. The soil bulk density was studied at two sampling depths of 7 and 17 centimeters. Bulk density is an indicator of soil compaction. It is calculated as the dry weight of soil divided by its volume; this volume includes the volume of soil particles and the volume of pores among soil particles. Bulk density is typically expressed in g cm-3. Results and discussion: The effect of both factors on soil bulk density at the sampling depths of 5-10 and 15-20 cm was examined. In Bostan-Abad, the effect of tillage speed on soil bulk density was significant at the 1% probability level and the effect of tillage depth at the 5% probability level. The interaction effect of tillage speed and depth on soil bulk density was significant at the 1% probability level, and the effect of sampling depth at the 5% probability level. In Hashtrood, the effect of tillage speed on soil bulk density was significant at the 1% probability level and the effect of tillage depth at the 5% level. The interaction effect of tillage speed and depth on soil bulk density was significant at the 5% level. Regarding the depth of sampling, it was significant on soil bulk
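
    As a minimal worked example of the bulk-density definition quoted in the record above (oven-dry soil mass divided by total core volume, reported in g cm-3), the sketch below uses illustrative core dimensions and mass, not data from the study.

```python
import math

# Illustrative soil-core sample (hypothetical numbers).
core_diameter_cm = 5.0   # core diameter
core_height_cm = 5.0     # sampling increment height
dry_mass_g = 130.0       # oven-dry mass of the core

# Total core volume = solids + pores.
core_volume_cm3 = math.pi * (core_diameter_cm / 2.0) ** 2 * core_height_cm
bulk_density = dry_mass_g / core_volume_cm3  # g cm^-3
print(f"Bulk density = {bulk_density:.2f} g cm^-3")
```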

  20. Vegetation root zone storage and rooting depth, derived from local calibration of a global hydrological model

    Science.gov (United States)

    van der Ent, R.; Van Beek, R.; Sutanudjaja, E.; Wang-Erlandsson, L.; Hessels, T.; Bastiaanssen, W.; Bierkens, M. F.

    2017-12-01

    The storage and dynamics of water in the root zone control many important hydrological processes such as saturation excess overland flow, interflow, recharge, capillary rise, soil evaporation and transpiration. These processes are parameterized in hydrological models or land-surface schemes and the effect on runoff prediction can be large. Root zone parameters in global hydrological models are very uncertain as they cannot be measured directly at the scale on which these models operate. In this paper we calibrate the global hydrological model PCR-GLOBWB using a state-of-the-art ensemble of evaporation fields derived by solving the energy balance for satellite observations. We focus our calibration on the root zone parameters of PCR-GLOBWB and derive spatial patterns of maximum root zone storage. We find these patterns to correspond well with previous research. The parameterization of our model allows for the conversion of maximum root zone storage to root zone depth and we find that these correspond quite well to the point observations where available. We conclude that climate and soil type should be taken into account when regionalizing measured root depth for a certain vegetation type. We equally find that using evaporation rather than discharge better allows for local adjustment of root zone parameters within a basin and thus provides orthogonal data to diagnose and optimize hydrological models and land surface schemes.

  1. Wind Wave Behavior in Fetch and Depth Limited Estuaries

    Science.gov (United States)

    Karimpour, Arash; Chen, Qin; Twilley, Robert R.

    2017-01-01

    Wetland dominated estuaries serve as some of the most productive natural ecosystems through their ecological, economic and cultural services, such as nursery grounds for fisheries, nutrient sequestration, and ecotourism. The ongoing deterioration of wetland ecosystems in many shallow estuaries raises concerns about the contributing erosive processes and their roles in restraining coastal restoration efforts. Given the combination of wetlands and shallow bays as landscape components that determine the function of estuaries, successful restoration strategies require knowledge of wind wave behavior in fetch- and depth-limited water as a critical design feature. We experimentally evaluate the physics of wind wave growth in fetch- and depth-limited estuaries. We demonstrate that the wave growth rate in shallow estuaries is a function of the wind fetch to water depth ratio, which helps to develop a new set of parametric wave growth equations. We find that the final stage of wave growth in shallow estuaries can be represented by the product of water depth and wave number, which approaches 1.363 as either depth or wave energy increases. The suggested wave growth equations and their asymptotic constraints establish the magnitude of the wave forces acting on wetland erosion that must be included in ecosystem restoration design.
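
    The asymptotic limit reported above (the product of wavenumber and water depth approaching 1.363 in the depth-limited stage) can be turned into a back-of-the-envelope estimate of the limiting wave once the linear dispersion relation is assumed; the sketch below is such an illustration, not the paper's parametric growth equations, and the 2 m depth is an arbitrary example.

```python
import math

G = 9.81          # gravitational acceleration, m s^-2
KH_LIMIT = 1.363  # asymptotic k*h reported for depth-limited wave growth

def depth_limited_wave(depth_m):
    """Limiting wavelength and period for a given depth, assuming k*h -> 1.363
    and the linear dispersion relation omega^2 = g*k*tanh(k*h)."""
    k = KH_LIMIT / depth_m
    wavelength = 2.0 * math.pi / k
    omega = math.sqrt(G * k * math.tanh(KH_LIMIT))
    period = 2.0 * math.pi / omega
    return wavelength, period

L, T = depth_limited_wave(depth_m=2.0)  # e.g. a 2 m deep estuarine bay
print(f"Limiting wavelength ~ {L:.1f} m, period ~ {T:.1f} s")
```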

  2. PRAGMATICS DRIVEN LAND COVER SERVICE COMPOSITION UTILIZING BEHAVIOR-INTENTION MODEL

    Directory of Open Access Journals (Sweden)

    H. Wu

    2016-06-01

    Full Text Available Web service composition is one of the key issues in developing a global land cover (GLC) information service portal. Addressing the limitation that traditional syntactic and semantic service composition struggles to take pragmatic information into account, the paper first analyses the three tiers of web service languages and their succession relations, discusses the conceptual model of the pragmatic web service, and, based on the analysis of some examples, proposes a pragmatics-oriented adaptive composition method. On this basis it puts forward a pragmatic web service model based on Behavior-Intention, through the presetting and expression of service usability, users' intention, and other pragmatic information. It develops an on-demand assembly method based on agent theory together with a matching and reconstruction method for heterogeneous messages, thereby solving the key technological issues of algorithm applicability and heterogeneous message transformation in land cover web service composition. Applying these methods to service combination, the paper puts forward a pragmatics-driven service composition method based on the behavior-intention model and effectively settles the issue of coordination and interaction in composite service invocation.

  3. 105-KW Sandfilter Backwash Pit sludge volume calculation

    International Nuclear Information System (INIS)

    Dodd, E.N. Jr.

    1995-01-01

    The volume of sludge contained in the 100-KW Sandfilter Backwash Pit (SFBWP) was calculated from depth measurements of the sludge, pit dimension measurements and analysis of video tape recordings taken by an underwater camera. The term sludge as used in this report is any combination of sand, sediment, or corrosion products visible in the SFBWP area. This work was performed to determine a baseline volume for use in determining the quantities of uranium and plutonium deposited in the pit from sandfilter backwashes. The SFBWP has three areas where sludge is deposited: (1) the main pit floor, (2) the transfer channel floor, and (3) the surfaces and structures in the SFBWP. The depths of sludge and the uniformity of deposition vary significantly between these three areas. As a result, each of the areas was evaluated separately. The total volume of sludge determined was 3.75 m³ (132.2 ft³)

  4. Environmental Models as a Service: Enabling Interoperability ...

    Science.gov (United States)

    Achieving interoperability in environmental modeling has evolved as software technology has progressed. The recent rise of cloud computing and proliferation of web services initiated a new stage for creating interoperable systems. Scientific programmers increasingly take advantage of streamlined deployment processes and affordable cloud access to move algorithms and data to the web for discoverability and consumption. In these deployments, environmental models can become available to end users through RESTful web services and consistent application program interfaces (APIs) that consume, manipulate, and store modeling data. RESTful modeling APIs also promote discoverability and guide usability through self-documentation. Embracing the RESTful paradigm allows models to be accessible via a web standard, and the resulting endpoints are platform- and implementation-agnostic while simultaneously presenting significant computational capabilities for spatial and temporal scaling. RESTful APIs present data in a simple verb-noun web request interface: the verb dictates how a resource is consumed using HTTP methods (e.g., GET, POST, and PUT) and the noun represents the URL reference of the resource on which the verb will act. The RESTful API can self-document in both the HTTP response and an interactive web page using the Open API standard. This lets models function as an interoperable service that promotes sharing, documentation, and discoverability. Here, we discuss the
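
    To make the verb-noun pattern described above concrete, here is a minimal sketch of a model exposed as a RESTful service using Flask; the endpoint names and the toy runoff calculation are assumptions for illustration, not part of the record.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/models/streamflow", methods=["GET"])
def describe_model():
    # GET on the model resource (noun) returns self-documenting metadata.
    return jsonify({"name": "streamflow", "inputs": ["precip_mm", "area_km2"]})

@app.route("/models/streamflow/runs", methods=["POST"])
def run_model():
    # POST creates a new model run from the JSON payload.
    payload = request.get_json(force=True)
    # Toy calculation: 30 % of rainfall over the catchment becomes runoff.
    runoff_m3 = 0.3 * payload["precip_mm"] * payload["area_km2"] * 1000.0
    return jsonify({"runoff_m3": runoff_m3}), 201

if __name__ == "__main__":
    app.run(port=5000)
```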

  5. Maintenance Personnel Performance Simulation (MAPPS) model: description of model content, structure, and sensitivity testing. Volume 2

    International Nuclear Information System (INIS)

    Siegel, A.I.; Bartter, W.D.; Wolf, J.J.; Knee, H.E.

    1984-12-01

    This volume of NUREG/CR-3626 presents details of the content, structure, and sensitivity testing of the Maintenance Personnel Performance Simulation (MAPPS) model that was described in summary in volume one of this report. The MAPPS model is a generalized stochastic computer simulation model developed to simulate the performance of maintenance personnel in nuclear power plants. The MAPPS model considers workplace, maintenance technician, motivation, human factors, and task oriented variables to yield predictive information about the effects of these variables on successful maintenance task performance. All major model variables are discussed in detail and their implementation and interactive effects are outlined. The model was examined for disqualifying defects from a number of viewpoints, including sensitivity testing. This examination led to the identification of some minor recalibration efforts which were carried out. These positive results indicate that MAPPS is ready for initial and controlled applications which are in conformity with its purposes

  6. Volume-Targeted Ventilation in the Neonate: Benchmarking Ventilators on an Active Lung Model.

    Science.gov (United States)

    Krieger, Tobias J; Wald, Martin

    2017-03-01

    Mechanically ventilated neonates have been observed to receive substantially different ventilation after switching ventilator models, despite identical ventilator settings. This study aims at establishing the range of output variability among 10 neonatal ventilators under various breathing conditions. Relative benchmarking test of 10 neonatal ventilators on an active neonatal lung model. Neonatal ICU. Ten current neonatal ventilators. Ventilators were set identically to flow-triggered, synchronized, volume-targeted, pressure-controlled, continuous mandatory ventilation and connected to a neonatal lung model. The latter was configured to simulate three patients (500, 1,500, and 3,500 g) in three breathing modes each (passive breathing, constant active breathing, and variable active breathing). Averaged across all weight conditions, the included ventilators delivered between 86% and 110% of the target tidal volume in the passive mode, between 88% and 126% during constant active breathing, and between 86% and 120% under variable active breathing. The largest relative deviation occurred during the 500 g constant active condition, where the highest output machine produced 147% of the tidal volume of the lowest output machine. All machines deviate significantly in volume output and ventilation regulation. These differences depend on ventilation type, respiratory force, and patient behavior, preventing the creation of a simple conversion table between ventilator models. Universal neonatal tidal volume targets for mechanical ventilation cannot be transferred from one ventilator to another without considering necessary adjustments.

  7. VOLUMNECT: measuring volumes with Kinect

    Science.gov (United States)

    Quintino Ferreira, Beatriz; Griné, Miguel; Gameiro, Duarte; Costeira, João. Paulo; Sousa Santos, Beatriz

    2014-03-01

    This article presents a solution for volume measurement in object packing using 3D cameras (such as the Microsoft KinectTM). We target application scenarios, such as warehouses or distribution and logistics companies, where it is important to promptly compute package volumes, yet high accuracy is not pivotal. Our application automatically detects cuboid objects using the depth camera data, computes their volumes and sorts them, allowing space optimization. The proposed methodology applies simple computer vision and image processing methods to the point cloud, such as connected components, morphological operations and the Harris corner detector, producing encouraging results, namely an accuracy in volume measurement of 8 mm. Aspects that can be further improved are identified; nevertheless, the current solution is already promising, turning out to be cost effective for the envisaged scenarios.
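
    As a simplified illustration of the volume step in a pipeline like the one above (the published method uses connected components and Harris corner detection; the axis-aligned bounding box below is a stand-in), the sketch estimates a cuboid package's volume from its segmented depth-camera points.

```python
import numpy as np

def bounding_box_volume(points_m):
    """points_m: (N, 3) array of x, y, z coordinates (metres) for one package."""
    extents = points_m.max(axis=0) - points_m.min(axis=0)  # box edge lengths
    return float(np.prod(extents))                          # volume in m^3

# Synthetic example: a 0.4 m x 0.3 m x 0.2 m box sampled with random points.
rng = np.random.default_rng(0)
cloud = rng.uniform([0.0, 0.0, 0.0], [0.4, 0.3, 0.2], size=(5000, 3))
print(f"Estimated volume: {bounding_box_volume(cloud) * 1000:.1f} litres")
```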

  8. A hybrid ARIMA and neural network model applied to forecast catch volumes of Selar crumenophthalmus

    Science.gov (United States)

    Aquino, Ronald L.; Alcantara, Nialle Loui Mar T.; Addawe, Rizavel C.

    2017-11-01

    Selar crumenophthalmus, known in English as the big-eyed scad and locally as matang-baka, is one of the fishes commonly caught in the waters of La Union, Philippines. The study deals with the forecasting of catch volumes of big-eyed scad for commercial consumption. The data used are quarterly catch volumes of big-eyed scad from 2002 to the first quarter of 2017. These data are available from the OpenSTAT database published by the Philippine Statistics Authority (PSA), whose task is to collect, compile, analyze and publish information concerning different aspects of the Philippine setting. Autoregressive Integrated Moving Average (ARIMA) models, an Artificial Neural Network (ANN) model and a hybrid model combining ARIMA and ANN were developed to forecast catch volumes of big-eyed scad. Statistical errors such as the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) were computed and compared to choose the most suitable model for forecasting the catch volume over the next few quarters. A comparison of the results of each model and the corresponding statistical errors reveals that the hybrid model, ARIMA-ANN (2,1,2)(6:3:1), is the most suitable model to forecast the catch volumes of big-eyed scad for the next few quarters.
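
    A hedged sketch of the hybrid scheme the record describes is given below: an ARIMA(2,1,2) model captures the linear structure of the series and a small neural network is trained on lagged ARIMA residuals, with the one-step forecast formed as the sum of the two. The "(6:3:1)" notation is read here as a 6-input, 3-hidden-unit, 1-output network, which is an assumption, and the series is synthetic rather than the PSA data.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Synthetic quarterly catch-volume series (placeholder for the PSA data).
y = 500 + 30 * np.sin(np.arange(60) * np.pi / 2) + rng.normal(0, 10, 60)

# 1) Linear component: ARIMA(2,1,2).
arima_fit = ARIMA(y, order=(2, 1, 2)).fit()
resid = arima_fit.resid

# 2) Nonlinear component: ANN on 6 lagged residuals (assumed 6:3:1 architecture).
lags = 6
X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
target = resid[lags:]
ann = MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000, random_state=0).fit(X, target)

# 3) One-quarter-ahead hybrid forecast = ARIMA forecast + ANN residual correction.
arima_next = arima_fit.forecast(steps=1)[0]
resid_next = ann.predict(resid[-lags:].reshape(1, -1))[0]
print(f"Hybrid forecast for next quarter: {arima_next + resid_next:.1f}")
```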

  9. Coupling between cracking and permeability, a model for structure service life prediction

    International Nuclear Information System (INIS)

    Lasne, M.; Gerard, B.; Breysse, D.

    1993-01-01

    Many authors have chosen permeability coefficients (permeation, diffusion) as a reference for material durability and for structure service life prediction. When designing engineered barriers for radioactive waste storage, these macroscopic parameters are essential. In order to work with a predictive model of the evolution of transfer properties in a porous medium (concrete, mortar, rock), we introduce a 'micro-macro' hierarchical model of permeability whose input data are the total porosity and the pore size distribution. In spite of the simplicity of the model (very low CPU cost), comparative studies show predictive results for sound cement pastes, mortars and concretes. In connection with this work, we apply a model of damage due to hydration processes at early ages to a container, as a preliminary project for the definitive storage of low-level radioactive waste (LLW). The input data are the geometry, the cement properties and damage measurements of the concrete. This model takes into account the mechanics of concrete maturation (volume variations during cement hydration can damage the structures). Some local microcracking can appear and affect the long-term durability. Following this work, we introduce our research program for concrete cracking analysis. An experimental campaign is designed in order to determine the damage-cracking-porosity-permeability coupling. (authors). 12 figs., 16 refs

  10. A Multilateral Negotiation Model for Cloud Service Market

    Science.gov (United States)

    Yoo, Dongjin; Sim, Kwang Mong

    Trading cloud services between consumers and providers is a complicated issue in cloud computing. Since a consumer can negotiate with multiple providers to acquire the same service, and each provider can receive many requests from multiple consumers, a multilateral negotiation model for the cloud market is necessary to facilitate the trading of cloud services among multiple consumers and providers. The contribution of this work is the proposal of a business model supporting multilateral price negotiation for trading cloud services. The design of the proposed system for the cloud service market includes a many-to-many negotiation protocol and a price-determining factor derived from service level features. Two negotiation strategies are implemented for the cloud service market: 1) MDA (Market Driven Agent); and 2) adaptive concession making that responds to changes in bargaining position. Empirical results show that although MDA achieved better performance in some cases than the adaptive concession-making strategy, unlike the MDA the adaptive concession-making strategy does not assume that an agent has information about the number of competitors (e.g., a consumer agent adopting the adaptive concession-making strategy need not know the number of consumer agents competing for the same service).

  11. Nonlinear ecosystem services response to groundwater availability under climate extremes

    Science.gov (United States)

    Qiu, J.; Zipper, S. C.; Motew, M.; Booth, E.; Kucharik, C. J.; Steven, L. I.

    2017-12-01

    Depletion of groundwater has been accelerating at regional to global scales. Besides serving domestic, industrial and agricultural needs, in situ groundwater is also a key control on biological, physical and chemical processes across the critical zone, all of which underpin supply of ecosystem services essential for humanity. While there is a rich history of research on groundwater effects on subsurface and surface processes, understanding interactions, nonlinearity and feedbacks between groundwater and ecosystem services remain limited, and almost absent in the ecosystem service literature. Moreover, how climate extremes may alter groundwater effects on services is underexplored. In this research, we used a process-based ecosystem model (Agro-IBIS) to quantify groundwater effects on eight ecosystem services related to food, water and biogeochemical processes in an urbanizing agricultural watershed in the Midwest, USA. We asked: (1) Which ecosystem services are more susceptible to shallow groundwater influences? (2) Do effects of groundwater on ecosystem services vary under contrasting climate conditions (i.e., dry, wet and average)? (3) Where on the landscape are groundwater effects on ecosystem services most pronounced? (4) How do groundwater effects depend on water table depth? Overall, groundwater significantly impacted all services studied, with the largest effects on food production, water quality and quantity, and flood regulation services. Climate also mediated groundwater effects with the strongest effects occurring under dry climatic conditions. There was substantial spatial heterogeneity in groundwater effects across the landscape that is driven in part by spatial variations in water table depth. Most ecosystem services responded nonlinearly to groundwater availability, with most apparent groundwater effects occurring when the water table is shallower than a critical depth of 2.5-m. Our findings provide compelling evidence that groundwater plays a vital

  12. Performance modeling of network data services

    Energy Technology Data Exchange (ETDEWEB)

    Haynes, R.A.; Pierson, L.G.

    1997-01-01

    Networks at major computational organizations are becoming increasingly complex. The introduction of large massively parallel computers and supercomputers with gigabyte memories are requiring greater and greater bandwidth for network data transfers to widely dispersed clients. For networks to provide adequate data transfer services to high performance computers and remote users connected to them, the networking components must be optimized from a combination of internal and external performance criteria. This paper describes research done at Sandia National Laboratories to model network data services and to visualize the flow of data from source to sink when using the data services.

  13. External validation of a forest inventory and analysis volume equation and comparisons with estimates from multiple stem-profile models

    Science.gov (United States)

    Christopher M. Oswalt; Adam M. Saunders

    2009-01-01

    Sound estimation procedures are a desideratum for generating credible population estimates to evaluate the status of and trends in resource conditions. As such, volume estimation is an integral component of the U.S. Department of Agriculture, Forest Service, Forest Inventory and Analysis (FIA) program's reporting. In effect, reliable volume estimation procedures are...

  14. Hypersonic - Model Analysis as a Service

    DEFF Research Database (Denmark)

    Acretoaie, Vlad; Störrle, Harald

    2014-01-01

    Hypersonic is a Cloud-based tool that proposes a new approach to the deployment of model analysis facilities. It is implemented as a RESTful Web service API offering analysis features such as model clone detection. This approach allows the migration of resource intensive analysis algorithms from...

  15. 1988 DOE model conference proceedings: Volume 1

    International Nuclear Information System (INIS)

    1988-01-01

    These Proceedings of the October 3-7, 1988, DOE Model Conference are a compilation of the papers that were presented in the technical or poster sessions at the conference. Papers and posters not submitted for publication are not included in the Proceedings. The Table of Contents lists the titles of papers as well as the names of the presenters. These individuals are not, in all cases, the primary authors of the papers published. The actual title pages, appearing later with the papers, show the primary author(s) and all co-authors. The papers in all three volumes of the Proceedings appear as they were originally submitted for publication and have not been edited or changed in any way. Topics included in Volume 1 are Environmental Data Management, Site characterization technology, Wastewater treatment, Waste management in foreign countries, Transuranic waste management, and Groundwater characterization and treatment

  16. 1988 DOE model conference proceedings: Volume 2

    International Nuclear Information System (INIS)

    1988-01-01

    These Proceedings of the October 3 - 7, 1988, DOE Model Conference are a compilation of the papers that were presented in the technical or poster sessions at the conference. Papers and posters not submitted for publication are not included in the proceedings. The Table of Contents lists the titles of papers as well as the names of the presenters. These individuals are not, in all cases, the primary authors of the papers published. The actual title pages, appearing later with the papers, show the primary author(s) and all co-authors. The papers in all three volumes of the Proceedings appear as they were originally submitted for publication and have not been edited or changed in any way. Volume 2 contains information on environmental restoration at federal facilities, waste disposal technology, quality assurance, contingency planning and emergency response, decontamination and decommissioning, environmental restoration, and public involvement in waste management

  17. 1988 DOE model conference proceedings: Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    1988-01-01

    These Proceedings of the October 3-7, 1988, DOE Model Conference are a compilation of the papers that were presented in the technical or poster sessions at the conference. Papers and posters not submitted for publication are not included in the Proceedings. The Table of Contents lists the titles of papers as well as the names of the presenters. These individuals are not, in all cases, the primary authors of the papers published. The actual title pages, appearing later with the papers, show the primary author(s) and all co-authors. The papers in all three volumes of the Proceedings appear as they were originally submitted for publication and have not been edited or changed in any way. Topics included in Volume 1 are Environmental Data Management, Site characterization technology, Wastewater treatment, Waste management in foreign countries, Transuranic waste management, and Groundwater characterization and treatment.

  18. A systematic composite service design modeling method using graph-based theory.

    Science.gov (United States)

    Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh

    2015-01-01

    Composite service design modeling is an essential process of the service-oriented software development life cycle, in which the candidate services, composite services, operations and their dependencies must be identified and specified before their design. However, systematic service-oriented design modeling methods for composite services are still in their infancy, as most of the existing approaches provide modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling service-oriented design so as to increase reusability and decrease the complexity of the system while keeping service composition considerations in mind. Furthermore, the ComSDM method provides a mathematical representation of the components of service-oriented design using graph-based theory to facilitate design quality measurement. To demonstrate that the ComSDM method is suitable for composite service design modeling of distributed embedded real-time systems as well as enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate its complexity and reusability. This also guides future research towards design quality measurement, such as using the ComSDM method to measure the quality of composite service design in service-oriented software systems.
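
    As a toy illustration of the graph-based idea attributed to ComSDM above (the service names and metrics below are invented, not the paper's formal definitions), composite services and their invocation dependencies can be held as a directed graph from which simple design indicators such as fan-out (coupling) and fan-in (reuse) are computed.

```python
from collections import defaultdict

# Edge "u -> v" means composite or atomic service u invokes service v.
dependencies = {
    "OrderComposite": ["Payment", "Inventory", "Shipping"],
    "Shipping": ["Notification"],
    "Payment": ["Notification"],
}

graph = defaultdict(list, dependencies)
fan_out = {service: len(callees) for service, callees in graph.items()}

fan_in = defaultdict(int)  # how often each service is reused by others
for callees in graph.values():
    for callee in callees:
        fan_in[callee] += 1

print("Coupling (fan-out):", fan_out)
print("Reuse (fan-in):", dict(fan_in))
```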

  19. Model-based segmentation in orbital volume measurement with cone beam computed tomography and evaluation against current concepts.

    Science.gov (United States)

    Wagner, Maximilian E H; Gellrich, Nils-Claudius; Friese, Karl-Ingo; Becker, Matthias; Wolter, Franz-Erich; Lichtenstein, Juergen T; Stoetzer, Marcus; Rana, Majeed; Essig, Harald

    2016-01-01

    Objective determination of the orbital volume is important in the diagnostic process and in evaluating the efficacy of medical and/or surgical treatment of orbital diseases. Tools designed to measure orbital volume with computed tomography (CT) often cannot be used with cone beam CT (CBCT) because of inferior tissue representation, although CBCT has the benefit of greater availability and lower patient radiation exposure. Therefore, a model-based segmentation technique is presented as a new method for measuring orbital volume and compared to alternative techniques. Both eyes from thirty subjects with no known orbital pathology who had undergone CBCT as a part of routine care were evaluated (n = 60 eyes). Orbital volume was measured with manual, atlas-based, and model-based segmentation methods. Volume measurements, volume determination time, and usability were compared between the three methods. Differences in means were tested for statistical significance using two-tailed Student's t tests. Neither atlas-based (26.63 ± 3.15 cm³) nor model-based (26.87 ± 2.99 cm³) measurements were significantly different from manual volume measurements (26.65 ± 4.0 cm³). However, the time required to determine orbital volume was significantly longer for manual measurements (10.24 ± 1.21 min) than for atlas-based (6.96 ± 2.62 min, p < 0.001) or model-based (5.73 ± 1.12 min, p < 0.001) measurements. All three orbital volume measurement methods examined can accurately measure orbital volume, although atlas-based and model-based methods seem to be more user-friendly and less time-consuming. The new model-based technique achieves fully automated segmentation results, whereas all atlas-based segmentations at least required manipulations to the anterior closing. Additionally, model-based segmentation can provide reliable orbital volume measurements when CT image quality is poor.

  20. Hydrologic regulation of plant rooting depth.

    Science.gov (United States)

    Fan, Ying; Miguez-Macho, Gonzalo; Jobbágy, Esteban G; Jackson, Robert B; Otero-Casal, Carlos

    2017-10-03

    Plant rooting depth affects ecosystem resilience to environmental stress such as drought. Deep roots connect deep soil/groundwater to the atmosphere, thus influencing the hydrologic cycle and climate. Deep roots enhance bedrock weathering, thus regulating the long-term carbon cycle. However, we know little about how deep roots go and why. Here, we present a global synthesis of 2,200 root observations of >1,000 species along biotic (life form, genus) and abiotic (precipitation, soil, drainage) gradients. Results reveal strong sensitivities of rooting depth to local soil water profiles determined by precipitation infiltration depth from the top (reflecting climate and soil), and groundwater table depth from below (reflecting topography-driven land drainage). In well-drained uplands, rooting depth follows infiltration depth; in waterlogged lowlands, roots stay shallow, avoiding oxygen stress below the water table; in between, high productivity and drought can send roots many meters down to the groundwater capillary fringe. This framework explains the contrasting rooting depths observed under the same climate for the same species but at distinct topographic positions. We assess the global significance of these hydrologic mechanisms by estimating root water-uptake depths using an inverse model, based on observed productivity and atmosphere, at 30″ (∼1-km) global grids to capture the topography critical to soil hydrology. The resulting patterns of plant rooting depth bear a strong topographic and hydrologic signature at landscape to global scales. They underscore a fundamental plant-water feedback pathway that may be critical to understanding plant-mediated global change.

  1. Hydrologic regulation of plant rooting depth

    Science.gov (United States)

    Fan, Ying; Miguez-Macho, Gonzalo; Jobbágy, Esteban G.; Jackson, Robert B.; Otero-Casal, Carlos

    2017-10-01

    Plant rooting depth affects ecosystem resilience to environmental stress such as drought. Deep roots connect deep soil/groundwater to the atmosphere, thus influencing the hydrologic cycle and climate. Deep roots enhance bedrock weathering, thus regulating the long-term carbon cycle. However, we know little about how deep roots go and why. Here, we present a global synthesis of 2,200 root observations of >1,000 species along biotic (life form, genus) and abiotic (precipitation, soil, drainage) gradients. Results reveal strong sensitivities of rooting depth to local soil water profiles determined by precipitation infiltration depth from the top (reflecting climate and soil), and groundwater table depth from below (reflecting topography-driven land drainage). In well-drained uplands, rooting depth follows infiltration depth; in waterlogged lowlands, roots stay shallow, avoiding oxygen stress below the water table; in between, high productivity and drought can send roots many meters down to the groundwater capillary fringe. This framework explains the contrasting rooting depths observed under the same climate for the same species but at distinct topographic positions. We assess the global significance of these hydrologic mechanisms by estimating root water-uptake depths using an inverse model, based on observed productivity and atmosphere, at 30″ (˜1-km) global grids to capture the topography critical to soil hydrology. The resulting patterns of plant rooting depth bear a strong topographic and hydrologic signature at landscape to global scales. They underscore a fundamental plant-water feedback pathway that may be critical to understanding plant-mediated global change.

  2. Fracture evaluation of an in-service piping flaw caused by microbiologically induced corrosion

    International Nuclear Information System (INIS)

    Rudland, D.L.; Scott, P.M.; Wilkowski, G.M.; Rahman, S.

    1996-01-01

    A pipe fracture experiment was conducted on a section of 6-inch nominal diameter pipe which was degraded by microbiologically induced corrosion (MIC) at a circumferential girth weld. The pipe was a section of one of the service water piping systems to one of the emergency diesel generators at the Haddam Neck (Connecticut Yankee) plant. The experimental results will help validate future ASME Section XI pipe flaw evaluation criteria for other than Class 1 piping. A critical aspect of this experiment was an assessment of the degree of conservatism embodied in the ASME definition of flaw size. The ASME flaw size definition assumes a rectangular shaped, constant depth flaw with a depth equal to its maximum depth for its entire length. Since most service flaws are irregular in shape, this definition may be overly conservative. Results from several fracture prediction models are compared with the experimental results. These results show that, for this case, the ASME Appendix H criteria significantly underpredicted the experimental maximum moment, while other fracture prediction models provided good predictions when accurate pipe, weld and flaw dimensions were used

  3. Glass Transition Temperature of Saccharide Aqueous Solutions Estimated with the Free Volume/Percolation Model.

    Science.gov (United States)

    Constantin, Julian Gelman; Schneider, Matthias; Corti, Horacio R

    2016-06-09

    The glass transition temperature of trehalose, sucrose, glucose, and fructose aqueous solutions has been predicted as a function of the water content by using the free volume/percolation model (FVPM). This model only requires the molar volume of water in the liquid and supercooled regimes, the molar volumes of the hypothetical pure liquid sugars at temperatures below their pure glass transition temperatures, and the molar volumes of the mixtures at the glass transition temperature. The model is simplified by assuming that the excess thermal expansion coefficient is negligible for saccharide-water mixtures, and this ideal FVPM becomes identical to the Gordon-Taylor model. It was found that the behavior of the water molar volume in trehalose-water mixtures at low temperatures can be obtained by assuming that the FVPM holds for this mixture. The temperature dependence of the water molar volume in the supercooled region of interest seems to be compatible with the recent hypothesis on the existence of two structures of liquid water, with high-density liquid water being the state of water in the sugar solutions. The idealized FVPM describes the measured glass transition temperature of sucrose, glucose, and fructose aqueous solutions with much better accuracy than both the Gordon-Taylor model based on an empirical kGT constant dependent on the saccharide glass transition temperature and the Couchman-Karasz model using experimental heat capacity changes of the components at the glass transition temperature. Thus, FVPM seems to be an excellent tool to predict the glass transition temperature of other aqueous saccharide and polyol solutions by resorting to easily available volumetric information.
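
    Since the idealized FVPM reduces to the Gordon-Taylor mixing rule, a rough sketch of that rule is given below; the Tg values and the k constant are approximate illustrative figures for a sucrose-water system, not the parameters fitted in the paper.

```python
def gordon_taylor_tg(w_water, tg_sugar_k=335.0, tg_water_k=136.0, k=5.0):
    """Glass transition temperature (K) of a sugar-water mixture via
    Tg = (w1*Tg1 + k*w2*Tg2) / (w1 + k*w2), with component 1 = sugar, 2 = water."""
    w_sugar = 1.0 - w_water
    return (w_sugar * tg_sugar_k + k * w_water * tg_water_k) / (w_sugar + k * w_water)

for w in (0.0, 0.1, 0.2, 0.3):
    print(f"water mass fraction {w:.1f} -> Tg ~ {gordon_taylor_tg(w):.0f} K")
```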

  4. A STRATEGIC MANAGEMENT MODEL FOR SERVICE ORGANIZATIONS

    OpenAIRE

    Andreea ZAMFIR

    2013-01-01

    This paper provides a knowledge-based strategic management of services model, with a view to emphasise an approach to gaining competitive advantage through knowledge, people and networking. The long-term evolution of the service organization is associated with the way in which the strategic management is practised.

  5. On the retrieval of sea ice thickness and snow depth using concurrent laser altimetry and L-band remote sensing data

    Science.gov (United States)

    Zhou, Lu; Xu, Shiming; Liu, Jiping; Wang, Bin

    2018-03-01

    The accurate knowledge of sea ice parameters, including sea ice thickness and snow depth over the sea ice cover, is key to both climate studies and data assimilation in operational forecasts. Large-scale active and passive remote sensing is the basis for the estimation of these parameters. In traditional altimetry or the retrieval of snow depth with passive microwave remote sensing, although the sea ice thickness and the snow depth are closely related, the retrieval of one parameter is usually carried out under assumptions about the other. For example, climatological snow depth data, or snow depths derived from reanalyses, contain large or unconstrained uncertainties, which result in large uncertainty in the derived sea ice thickness and volume. In this study, we explore the potential of a combined retrieval of both sea ice thickness and snow depth using concurrent active altimetry and passive microwave remote sensing of the sea ice cover. Specifically, laser altimetry and L-band passive remote sensing data are combined using two forward models: the L-band radiation model and the isostatic relationship based on the buoyancy model. Since laser altimetry usually features much higher spatial resolution than L-band data from the Soil Moisture Ocean Salinity (SMOS) satellite, there is potential covariability between the snow freeboard observed by altimetry and the retrieval target of snow depth on the spatial scale of altimetry samples. A statistically significant correlation is discovered based on high-resolution observations from Operation IceBridge (OIB), and with a nonlinear fitting the covariability is incorporated in the retrieval algorithm. By using fitting parameters derived from large-scale surveys, the retrievability is greatly improved compared with the retrieval that assumes flat snow cover (i.e., no covariability). Verifications with OIB data show a good match between the observed and the retrieved parameters, including both sea ice thickness and snow depth. With
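
    The buoyancy-based forward model mentioned above relates laser-altimetry snow freeboard, snow depth and ice thickness through hydrostatic equilibrium; the sketch below shows that relation with typical literature densities, which are illustrative assumptions rather than the values used in the study.

```python
RHO_WATER = 1024.0  # sea water density, kg m^-3 (typical value)
RHO_ICE = 915.0     # sea ice density, kg m^-3 (typical value)
RHO_SNOW = 320.0    # snow density, kg m^-3 (typical value)

def ice_thickness_from_freeboard(snow_freeboard_m, snow_depth_m):
    """Hydrostatic balance rho_i*h_i + rho_s*h_s = rho_w*(h_i - f_i),
    with ice freeboard f_i = snow freeboard - snow depth, solved for h_i."""
    return (RHO_WATER * snow_freeboard_m
            - (RHO_WATER - RHO_SNOW) * snow_depth_m) / (RHO_WATER - RHO_ICE)

# Example: 0.45 m snow freeboard with 0.25 m of snow on top of the ice.
print(f"Ice thickness ~ {ice_thickness_from_freeboard(0.45, 0.25):.2f} m")
```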

  6. Efficient Business Service Consumption by Customization with Variability Modelling

    Directory of Open Access Journals (Sweden)

    Michael Stollberg

    2010-07-01

    Full Text Available The establishment of service orientation in industry determines the need for efficient engineering technologies that properly support the whole life cycle of service provision and consumption. A central challenge is adequate support for the efficient employment of complex services in their individual application context. This becomes particularly important for large-scale enterprise technologies where generic services are designed for reuse in several business scenarios. In this article we complement our work regarding Service Variability Modelling presented in a previous publication. There we presented an approach for the customization of services for individual application contexts by creating simplified variants, based on model-driven variability management. The present article describes our revised service variability metamodel, new features of the variability tools and an applicability study, which reveals that substantial improvements in the efficiency of standard business service consumption can be achieved under both usability and economic aspects.

  7. Above the cloud computing: applying cloud computing principles to create an orbital services model

    Science.gov (United States)

    Straub, Jeremy; Mohammad, Atif; Berk, Josh; Nervold, Anders K.

    2013-05-01

    Large satellites and exquisite planetary missions are generally self-contained. They have, onboard, all of the computational, communications and other capabilities required to perform their designated functions. Because of this, the satellite or spacecraft carries hardware that may be utilized only a fraction of the time; however, the full cost of development and launch is still borne by the program. Small satellites do not have this luxury. Due to mass and volume constraints, they cannot afford to carry numerous pieces of barely utilized equipment or large antennas. This paper proposes a cloud-computing model for exposing satellite services in an orbital environment. Under this approach, each satellite with available capabilities broadcasts a service description for each service that it can provide (e.g., general computing capacity, DSP capabilities, specialized sensing capabilities, transmission capabilities, etc.) and its orbital elements. Consumer spacecraft retain a cache of service providers and select one using decision-making heuristics (e.g., suitability of performance, opportunity to transmit instructions and receive results - based on the orbits of the two craft). The two craft negotiate service provisioning (e.g., when the service can be available and for how long) according to operating rules that prioritize use of (and allow access to) the service on the provider craft, based on the credentials of the consumer. Service description, negotiation and sample service performance protocols are presented. The required components of each consumer or provider spacecraft are reviewed. These include fully autonomous control capabilities (for provider craft), a lightweight orbit determination routine (to determine when consumer and provider craft can see each other and, possibly, pointing requirements for craft with directional antennas) and an authentication and resource utilization priority-based access decision making subsystem (for provider craft
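
    A minimal sketch of the broadcast-and-cache pattern the record outlines is shown below; the field names, capability labels and selection heuristic are assumptions for illustration, not the paper's protocol definition.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceDescription:
    provider_id: str
    capability: str            # e.g. "dsp", "compute", "downlink"
    available_from_s: float    # start of next availability window (epoch seconds)
    duration_s: float          # how long the service can be provided
    orbital_elements: dict = field(default_factory=dict)

class ServiceCache:
    """Consumer-side cache of advertised provider services."""
    def __init__(self):
        self._services = []

    def advertise(self, description):
        self._services.append(description)

    def select(self, capability):
        # Simplest heuristic: earliest-available provider offering the capability.
        matches = [s for s in self._services if s.capability == capability]
        return min(matches, key=lambda s: s.available_from_s, default=None)

cache = ServiceCache()
cache.advertise(ServiceDescription("sat-A", "dsp", 120.0, 300.0, {"a_km": 6878}))
cache.advertise(ServiceDescription("sat-B", "dsp", 60.0, 200.0, {"a_km": 6921}))
print(cache.select("dsp").provider_id)  # -> sat-B
```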

  8. Magmatic densities control erupted volumes in Icelandic volcanic systems

    Science.gov (United States)

    Hartley, Margaret; Maclennan, John

    2018-04-01

    Magmatic density and viscosity exert fundamental controls on the eruptibility of magmas. In this study, we investigate the extent to which magmatic physical properties control the eruptibility of magmas from Iceland's Northern Volcanic Zone (NVZ). By studying subaerial flows of known age and volume, we are able to directly relate erupted volumes to magmatic physical properties, a task that has been near-impossible when dealing with submarine samples dredged from mid-ocean ridges. We find a strong correlation between magmatic density and observed erupted volumes on the NVZ. Over 85% of the total volume of erupted material lies close to a density and viscosity minimum that corresponds to the composition of basalts at the arrival of plagioclase on the liquidus. These magmas are buoyant with respect to the Icelandic upper crust. However, a number of small-volume eruptions with densities greater than typical Icelandic upper crust are also found in Iceland's neovolcanic zones. We use a simple numerical model to demonstrate that the eruption of magmas with higher densities and viscosities is facilitated by the generation of overpressure in magma chambers in the lower crust and uppermost mantle. This conclusion is in agreement with petrological constraints on the depths of crystallisation under Iceland.

  9. Magmatic Densities Control Erupted Volumes in Icelandic Volcanic Systems

    Directory of Open Access Journals (Sweden)

    Margaret Hartley

    2018-04-01

    Full Text Available Magmatic density and viscosity exert fundamental controls on the eruptibility of magmas. In this study, we investigate the extent to which magmatic physical properties control the eruptibility of magmas from Iceland's Northern Volcanic Zone (NVZ. By studying subaerial flows of known age and volume, we are able to directly relate erupted volumes to magmatic physical properties, a task that has been near-impossible when dealing with submarine samples dredged from mid-ocean ridges. We find a strong correlation between magmatic density and observed erupted volumes on the NVZ. Over 85% of the total volume of erupted material lies close to a density and viscosity minimum that corresponds to the composition of basalts at the arrival of plagioclase on the liquidus. These magmas are buoyant with respect to the Icelandic upper crust. However, a number of small-volume eruptions with densities greater than typical Icelandic upper crust are also found in Iceland's neovolcanic zones. We use a simple numerical model to demonstrate that the eruption of magmas with higher densities and viscosities is facilitated by the generation of overpressure in magma chambers in the lower crust and uppermost mantle. This conclusion is in agreement with petrological constraints on the depths of crystallization under Iceland.

  10. Acceptance of Swedish e-health services

    Directory of Open Access Journals (Sweden)

    Mary-Louise Jung

    2010-11-01

    Full Text Available Objective: To investigate older people’s acceptance of e-health services, in order to identify determinants of, and barriers to, their intention to use e-health. Method: Based on one of the best-established models of technology acceptance, the Technology Acceptance Model (TAM), in-depth exploratory interviews with twelve individuals over 45 years of age and of varying backgrounds are conducted. Results: This investigation could find support for the importance of usefulness and perceived ease of use of the e-health service offered as the main determinants of people’s intention to use the service. Additional factors critical to the acceptance of e-health are identified, such as the importance of the compatibility of the services with citizens’ needs and trust in the service provider. Most interviewees expressed positive attitudes towards using e-health and find these services useful, convenient, and easy to use. Conclusion: E-health services are perceived as a good complement to traditional health care service delivery, even among older people. These people, however, need to become aware of the e-health alternatives that are offered to them and the benefits they provide. Keywords: health services, elderly, technology, Internet, TAM, patient acceptance, health-seeking behavior

  11. Web service availability-impact of error recovery and traffic model

    International Nuclear Information System (INIS)

    Martinello, Magnos; Kaâniche, Mohamed; Kanoun, Karama

    2005-01-01

    The Internet is often used for transaction-based applications such as online banking, stock trading and shopping, where service interruptions or outages are unacceptable. Therefore, it is important for designers of such applications to analyze how hardware, software and performance-related failures affect the quality of service delivered to users. This paper presents analytical models for evaluating the service availability of web cluster architectures. A composite performance and availability modeling approach is defined considering various causes of service unavailability. In particular, web cluster systems are modeled taking into account two error recovery strategies (client-transparent and non-client-transparent) as well as two traffic models (Poisson and modulated Poisson). Sensitivity analysis results are presented to show their impact on web service availability. The obtained results provide useful guidelines to web designers.

  12. Web Services and Model-Driven Enterprise Information Services. Proceedings of the Joint Workshop on Web Services and Model-Driven Enterprise Information Services, WSMDEIS 2005.

    NARCIS (Netherlands)

    Bevinakoppa, S.; Ferreira Pires, Luis; Hammoudi, S.

    2005-01-01

    Web services and Model-driven development are two emerging research fields and have been receiving a lot of attention in the recent years. New approaches on these two areas can bring many benefits to the development of information systems, distribution flexibility, interoperability, maintainability

  13. A model to estimate the cost of the National Essential Public Health Services Package in Beijing, China.

    Science.gov (United States)

    Yin, Delu; Wong, Sabrina T; Chen, Wei; Xin, Qianqian; Wang, Lihong; Cui, Mingming; Yin, Tao; Li, Ruili; Zheng, Xiaoguo; Yang, Huiming; Yu, Juanjuan; Chen, Bowen; Yang, Weizhong

    2015-06-06

    In order to address several health challenges, the Chinese government issued the National Essential Public Health Services Package (NEPHSP) in 2009. In China's large cities, the lack of funding for community health centers and consequent lack of comprehensive services and high quality care has become a major challenge. However, no study has been carried out to estimate the cost of delivering the services in the package. This project was to develop a cost estimation approach appropriate to the context and use it to calculate the cost of the NEPHSP in Beijing in 2011. By adjusting models of cost analysis of primary health care and workload indicators of staffing need developed by the World Health Organization, a model was developed to estimate the cost of the services in the package through an intensive interactive process. A total of 17 community health centers from eight administrative districts in Beijing were selected. Their service volume and expenditure data in 2010 were used to evaluate the costs of providing the NEPHSP in Beijing based on the applied model. The total workload of all types of primary health care in 17 sample centers was equivalent to the workload requirement for 14,056,402 standard clinic visits. The total expenditure of the 17 sample centers was 26,329,357.62 USD in 2010. The cost of the workload requirement of one standard clinic visit was 1.87 USD. The workload of the NEPHSP was equivalent to 5,514,777 standard clinic visits (39.23 % of the total workload). The model suggests that the cost of the package in Beijing was 7.95 USD per capita in 2010. The cost of the NEPHSP in urban areas was lower than suburban areas: 7.31 and 8.65 USD respectively. The average investment of 3.97 USD per capita in NEPHSP was lower than the amount needed to meet its running costs. NEPHSP in Beijing is therefore underfunded. Additional investment is needed, and a dynamic cost estimate mechanism should be introduced to ensure services remain adequately funded.

  14. Open-source Software for Demand Forecasting of Clinical Laboratory Test Volumes Using Time-series Analysis.

    Science.gov (United States)

    Mohammed, Emad A; Naugler, Christopher

    2017-01-01

    Demand forecasting is the area of predictive analytics devoted to predicting future volumes of services or consumables. Fair understanding and estimation of how demand will vary facilitates the optimal utilization of resources. In a medical laboratory, accurate forecasting of future demand, that is, test volumes, can increase efficiency and facilitate long-term laboratory planning. Importantly, in an era of utilization management initiatives, accurately predicted volumes compared to the realized test volumes can form a precise way to evaluate utilization management initiatives. Laboratory test volumes are often highly amenable to forecasting by time-series models; however, the statistical software needed to do this is generally either expensive or highly technical. In this paper, we describe an open-source web-based software tool for time-series forecasting and explain how to use it as a demand forecasting tool in clinical laboratories to estimate test volumes. This tool has three different models, that is, Holt-Winters multiplicative, Holt-Winters additive, and simple linear regression. Moreover, these models are ranked and the best one is highlighted. This tool will allow anyone with historic test volume data to model future demand.

  15. Open-source software for demand forecasting of clinical laboratory test volumes using time-series analysis

    Directory of Open Access Journals (Sweden)

    Emad A Mohammed

    2017-01-01

    Full Text Available Background: Demand forecasting is the area of predictive analytics devoted to predicting future volumes of services or consumables. Fair understanding and estimation of how demand will vary facilitates the optimal utilization of resources. In a medical laboratory, accurate forecasting of future demand, that is, test volumes, can increase efficiency and facilitate long-term laboratory planning. Importantly, in an era of utilization management initiatives, accurately predicted volumes compared to the realized test volumes can form a precise way to evaluate utilization management initiatives. Laboratory test volumes are often highly amenable to forecasting by time-series models; however, the statistical software needed to do this is generally either expensive or highly technical. Method: In this paper, we describe an open-source web-based software tool for time-series forecasting and explain how to use it as a demand forecasting tool in clinical laboratories to estimate test volumes. Results: This tool has three different models, that is, Holt-Winters multiplicative, Holt-Winters additive, and simple linear regression. Moreover, these models are ranked and the best one is highlighted. Conclusion: This tool will allow anyone with historic test volume data to model future demand.
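
    As a rough illustration of the kind of model the tool ranks (a sketch, not the tool itself), a Holt-Winters additive model can be fitted to historic monthly test volumes with the statsmodels library; the monthly counts below are invented for illustration.

        import pandas as pd
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        # Hypothetical monthly test volumes; in practice, load the laboratory's historic counts
        volumes = pd.Series(
            [4200, 4350, 4500, 4400, 4600, 4800, 4700, 4900, 5100, 5000, 5200, 5400] * 3,
            index=pd.date_range("2014-01-01", periods=36, freq="MS"),
        )

        # Holt-Winters additive model with a yearly (12-month) seasonal cycle
        model = ExponentialSmoothing(volumes, trend="add", seasonal="add", seasonal_periods=12)
        fit = model.fit()

        print(fit.forecast(12).round(0))   # demand forecast for the next 12 months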

  16. Molar concentration-depth profiles at the solution surface of a cationic surfactant reconstructed with angle resolved X-ray photoelectron spectroscopy

    International Nuclear Information System (INIS)

    Wang Chuangye; Morgner, Harald

    2011-01-01

    In the current work, we first reconstructed the molar fraction-depth profiles of cation and anion near the surface of tetrabutylammonium iodide dissolved in formamide by a refined calculation procedure, based on angle resolved X-ray photoelectron spectroscopy experiments. In this calculation procedure, both the transmission functions of the core levels and the inelastic mean free paths of the photoelectrons have been taken into account. We have evaluated the partial molar volumes of surfactant and solvent by the densities of such solutions with different bulk concentrations. With those partial molar volumes, the molar concentration-depth profiles of tetrabutylammonium ion and iodide ion were determined. The surface excesses of both surfactant ions were then achieved directly by integrating these depth profiles. The anionic molar concentration-depth profiles and surface excesses have been compared with their counterparts determined by neutral impact ion scattering spectroscopy. The comparisons exhibit good agreements. Being capable of determining molar concentration-depth profiles of surfactant ions by core levels with different kinetic energies may extend the applicable range of ARXPS in investigating solution surfaces.

  17. Innovative health service delivery models in low and middle income countries - what can we learn from the private sector?

    Science.gov (United States)

    Bhattacharyya, Onil; Khor, Sara; McGahan, Anita; Dunne, David; Daar, Abdallah S; Singer, Peter A

    2010-07-15

    The poor in low and middle income countries have limited access to health services due to limited purchasing power, residence in underserved areas, and inadequate health literacy. This produces significant gaps in health care delivery among a population that has a disproportionately large burden of disease. They frequently use the private health sector, due to perceived or actual gaps in public services. A subset of private health organizations, some called social enterprises, have developed novel approaches to increase the availability, affordability and quality of health care services to the poor through innovative health service delivery models. This study aims to characterize these models and identify areas of innovation that have led to effective provision of care for the poor. An environmental scan of peer-reviewed and grey literature was conducted to select exemplars of innovation. A case series of organizations was then purposively sampled to maximize variation. These cases were examined using content analysis and constant comparison to characterize their strategies, focusing on business processes. After an initial sample of 46 studies, 10 case studies of exemplars were developed spanning different geography, disease areas and health service delivery models. These ten organizations had innovations in their marketing, financing, and operating strategies. These included approaches such as social marketing, cross-subsidy, high-volume, low cost models, and process reengineering. They tended to have a narrow clinical focus, which facilitates standardizing processes of care, and experimentation with novel delivery models. Despite being well-known, information on the social impact of these organizations was variable, with more data on availability and affordability and less on quality of care. These private sector organizations demonstrate a range of innovations in health service delivery that have the potential to better serve the poor's health needs and be

  18. Innovative health service delivery models in low and middle income countries - what can we learn from the private sector?

    Directory of Open Access Journals (Sweden)

    Daar Abdallah S

    2010-07-01

    Full Text Available Abstract Background The poor in low and middle income countries have limited access to health services due to limited purchasing power, residence in underserved areas, and inadequate health literacy. This produces significant gaps in health care delivery among a population that has a disproportionately large burden of disease. They frequently use the private health sector, due to perceived or actual gaps in public services. A subset of private health organizations, some called social enterprises, have developed novel approaches to increase the availability, affordability and quality of health care services to the poor through innovative health service delivery models. This study aims to characterize these models and identify areas of innovation that have led to effective provision of care for the poor. Methods An environmental scan of peer-reviewed and grey literature was conducted to select exemplars of innovation. A case series of organizations was then purposively sampled to maximize variation. These cases were examined using content analysis and constant comparison to characterize their strategies, focusing on business processes. Results After an initial sample of 46 studies, 10 case studies of exemplars were developed spanning different geography, disease areas and health service delivery models. These ten organizations had innovations in their marketing, financing, and operating strategies. These included approaches such as social marketing, cross-subsidy, high-volume, low cost models, and process reengineering. They tended to have a narrow clinical focus, which facilitates standardizing processes of care, and experimentation with novel delivery models. Despite being well-known, information on the social impact of these organizations was variable, with more data on availability and affordability and less on quality of care. Conclusions These private sector organizations demonstrate a range of innovations in health service delivery that have

  19. Modeling Road Traffic Using Service Center

    Directory of Open Access Journals (Sweden)

    HARAGOS, I.-M.

    2012-05-01

    Full Text Available Transport systems have an essential role in modern society because they facilitate access to natural resources and they stimulate trade. Current studies aim at improving transport networks by developing new optimization methods. Because of the increase in the global number of cars, one of the most common problems facing the transport network is congestion. By creating traffic models and simulating them, we can avoid this problem and find appropriate solutions. In this paper we propose a new method for modeling traffic. This method considers road intersections as being service centers. A service center represents a set consisting of a queue followed by one or multiple servers. This model was used to simulate real situations in an urban traffic area. Based on this simulation, we have successfully determined the optimal functioning and we have computed the performance measures.
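
    A minimal sketch of the service-center idea under textbook assumptions (Poisson arrivals, one exponential server, i.e. an M/M/1 queue); the arrival and service rates below are invented, and the paper's own model may use multiple servers and different distributions.

        def mm1_metrics(arrival_rate, service_rate):
            """Steady-state metrics of a single-server service center (M/M/1 queue)."""
            rho = arrival_rate / service_rate            # utilisation; must be < 1 for stability
            if rho >= 1.0:
                raise ValueError("unstable service center: arrivals exceed service capacity")
            return {
                "utilisation": rho,
                "mean_number_in_system": rho / (1.0 - rho),
                "mean_time_in_system": 1.0 / (service_rate - arrival_rate),
                "mean_waiting_time": rho / (service_rate - arrival_rate),
            }

        # Hypothetical intersection: 600 vehicles/h arriving, capacity 720 vehicles/h (rates per second)
        print(mm1_metrics(arrival_rate=600 / 3600, service_rate=720 / 3600))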

  20. Temporal validation for landsat-based volume estimation model

    Science.gov (United States)

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  1. Quality Service Standard of Food and Beverage Service Staff in Hotel

    OpenAIRE

    Thanasit Suksutdhi

    2014-01-01

    This survey research aims to study the standard of service quality of food and beverage service staff in the hotel business by studying the service standard of three sample hotels: Siam Kempinski Hotel Bangkok, Four Seasons Resort Chiang Mai, and Banyan Tree Phuket. In order to find the international service standard of food and beverage service, research triangulation, i.e. quantitative, qualitative, and survey methods, was employed. In this research, questionnaires and in-depth interviews were used for ge...

  2. Systemic Model for Optimal Regulation in Public Service

    Directory of Open Access Journals (Sweden)

    Lucica Matei

    2006-05-01

    Full Text Available The current paper belongs to those approaching the issue of public services from an interdisciplinary perspective. Public service development and the imposition of standards of efficiency, effectiveness and citizen satisfaction bring to the forefront systemic modelling and the establishment of optimal policies for the organisation and functioning of public services. The issue under discussion interfaces strongly with social determinants; consequently, the most adequate modelling is probabilistic and statistical in nature. The fundamental idea of this paper, which can obviously be developed further, is to assimilate the organisation and functioning of a public service to a waiting line, to which hypotheses are attached concerning the order of service and performance measurement through costs or waiting time in the system, etc. We emphasise the openness and dynamics of the public service system, as well as modelling that draws on statistical knowledge and research, and we do not make detailed remarks on the cybernetic characteristics of this system. The optimal adjustment is achieved through analysis of the feedback and its comparison with current standards or good practices.

  3. Measuring the Quality of Ecotourism Services

    Directory of Open Access Journals (Sweden)

    Nor’Aini Yusof

    2014-06-01

    Full Text Available Ecotourism forms the pillar of the country’s tourism industry. Ecotourists make up more than 10% of international tourists in Malaysia. When service quality is thought of as an important factor to the success of tourism service providers, the importance of estimating service quality provided to tourists becomes apparent. Estimating service quality provides tourism service providers with the necessary information needed to manage their marketing operations appropriately. Therefore, this estimation should be performed with the right measurement scales. Despite the high volume of research on service quality (SERVQUAL) models in recent years, limited effort has been directed toward improving the tool for measuring service quality, particularly to apply to the ecotourism sector in developing countries. This article aims to improve a SERVQUAL model that is suitable for ecotourism areas in developing countries using five dimensions of the original model and one additional sustainability dimension. Based on a survey of 127 tourists in Tasik Kenyir, an exploratory factor analysis (EFA) was conducted to discover the underlying dimension of ecotourism services and test for reliability and validity. Using EFA resulted in seven factors totaling 27 items. These factors are labeled as follows: tangible sustainability, sustainable practices, tangibility, reliability, responsiveness, assurance, and empathy. The results reveal that when SERVQUAL is applied within the ecotourism context, new dimensions of tangible sustainability and sustainable practices may emerge. The result implies the need to refine the SERVQUAL model when used in different contexts.

  4. Modelling river bank erosion processes and mass failure mechanisms using 2-D depth averaged numerical model

    Science.gov (United States)

    Die Moran, Andres; El kadi Abderrezzak, Kamal; Tassi, Pablo; Hervouet, Jean-Michel

    2014-05-01

    Bank erosion is a key process that may cause a large number of economic and environmental problems (e.g. land loss, damage to structures and aquatic habitat). Stream bank erosion (toe erosion and mass failure) represents an important form of channel morphology change and a significant source of sediment. With the advances made in computational techniques, two-dimensional (2-D) numerical models have become valuable tools for investigating flow and sediment transport in open channels at large temporal and spatial scales. However, the implementation of the mass failure process in 2-D numerical models is still a challenging task. In this paper, a simple, innovative algorithm is implemented in the Telemac-Mascaret modeling platform to handle bank failure: failure occurs whenever the actual slope of a given bed element is higher than the internal friction angle. The unstable bed elements are rotated around an appropriate axis, ensuring mass conservation. Mass failure of a bank due to slope instability is applied at the end of each sediment transport evolution iteration, once the bed evolution due to bed load (and/or suspended load) has been computed, but before the global sediment mass balance is verified. This bank failure algorithm is successfully tested using two laboratory experimental cases. Then, bank failure in a 1:40 scale physical model of the Rhine River composed of non-uniform material is simulated. The main features of the bank erosion and failure are correctly reproduced in the numerical simulations, namely the mass wasting at the bank toe, followed by failure at the bank head, and subsequent transport of the mobilised material in an aggradation front. Volumes of eroded material obtained are of the same order of magnitude as the volumes measured during the laboratory tests.
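
    The failure rule described above can be sketched schematically as follows (a simplified one-dimensional illustration, not the Telemac-Mascaret implementation): wherever the local bed slope exceeds the internal friction angle, material is exchanged between the two bed elements so that the limiting slope is restored and the total mass is conserved.

        import numpy as np

        def relax_bank_failure(z, dx, friction_angle_deg, max_iter=1000):
            """Redistribute bed elevations until no slope exceeds the internal friction angle.
            The sum of elevations (a proxy for sediment mass) is conserved."""
            z = np.asarray(z, dtype=float).copy()
            limit = np.tan(np.radians(friction_angle_deg)) * dx   # maximum stable elevation difference
            for _ in range(max_iter):
                dz = np.diff(z)
                unstable = np.abs(dz) > limit
                if not unstable.any():
                    break
                for i in np.where(unstable)[0]:
                    excess = (abs(z[i + 1] - z[i]) - limit) / 2.0
                    if z[i] > z[i + 1]:
                        z[i] -= excess; z[i + 1] += excess        # move material downslope
                    else:
                        z[i] += excess; z[i + 1] -= excess
            return z

        # Hypothetical bank profile (m) on a 1 m grid, internal friction angle of 30 degrees
        print(relax_bank_failure([0.0, 0.1, 0.3, 1.8, 2.0], dx=1.0, friction_angle_deg=30.0))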

  5. Service use and unmet service needs in grandparents raising grandchildren.

    Science.gov (United States)

    Yancura, Loriena A

    2013-01-01

    Most in-depth studies of grandparents raising grandchildren use samples recruited from service providers, so little is known about those who do not use formal services. A sample of 200 grandparents registered with a public school district completed a survey on service use and unmet service needs. Of the 131 who did not use services, 82 reported unmet service needs, and 49 reported no needs. Those with unmet needs were younger, more likely to be Native Hawaiian, and less likely to receive public assistance. These findings indicate that some grandparents are falling through the cracks of the service provision network.

  6. Evaluating Service Quality in Universities: A Service Department Perspective

    Science.gov (United States)

    Smith, Gareth; Smith, Alison; Clarke, Alison

    2007-01-01

    Purpose: The purpose of the study is to report on an in-depth exploration of service quality in an Information Technology service department in a Higher Education Institute (HEI) and to evaluate the instrument used. Design/methodology/approach: The study surveys customers using the SERVQUAL instrument, which is one of the most widely used and…

  7. Research on new information service model of the contemporary library

    International Nuclear Information System (INIS)

    Xin Pingping; Lu Yan

    2010-01-01

    With the development of the internet and multimedia technology, the information service models of the contemporary library now combine traditional and digital information services. Libraries in every country strive to integrate the voluminous information and complex technology in back-end management, while making the front-end interface increasingly convenient for users. The essential characteristics of the information service of the contemporary library are being all-in-one and human-centred. In this article, we describe several new and prominent information service models of the contemporary library in detail, such as individualized service, reference service and strategic information service. (authors)

  8. METHODOLOGY OF THE DRUGS MARKET VOLUME MODELING ON THE EXAMPLE OF HEMOPHILIA A

    Directory of Open Access Journals (Sweden)

    N. B. Molchanova

    2015-01-01

    Full Text Available Hemophilia A is a serious genetic disease which, without the required therapy, may lead to disability even at an early age. The only therapeutic approach is replacement therapy with blood coagulation factor VIII (FVIII) drugs. Modeling the market volume of coagulation drugs allows evaluation of the extent to which patients are provided with the necessary therapy. The purpose of the study was to model a “perfect” drugs market and compare it with the real one. In modeling the market volume we used data on the number of hemophilia A patients from the federal registry, Russian and international morbidity indices, real-practice data on average consumption of blood coagulation factor drugs, and data on drug prescription according to the standards and protocols of care. According to the standards of care delivery, the average annual volume of FVIII drug consumption amounted to 406 325 244 IU for children and 964 578 678 IU for adults, i.e. the average volume of a “perfect” market equals 1 370 903 922 IU for all patients. This is 1.8 times larger than the real volume of FVIII drugs which, according to data from the IMS marketing agency, amounted to 765 000 000 IU in 2013. The modeling conducted has shown that, despite relatively high patient coverage, there is potential for almost twofold growth.

  9. A model for effective planning of SME support services.

    Science.gov (United States)

    Rakićević, Zoran; Omerbegović-Bijelović, Jasmina; Lečić-Cvetković, Danica

    2016-02-01

    This paper presents a model for effective planning of support services for small and medium-sized enterprises (SMEs). The idea is to scrutinize and measure the suitability of support services in order to give recommendations for the improvement of a support planning process. We examined the applied support services and matched them with the problems and needs of SMEs, based on the survey conducted in 2013 on a sample of 336 SMEs in Serbia. We defined and analysed the five research questions that refer to support services, their consistency with the SMEs' problems and needs, and the relation between the given support and SMEs' success. The survey results have shown a statistically significant connection between them. Based on this result, we proposed an eight-phase model as a method for the improvement of support service planning for SMEs. This model helps SMEs to better plan their support requirements, and helps government and administration bodies at all levels, as well as organizations that provide support services, to better understand SMEs' problems and needs for support. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Evaluation of SNODAS snow depth and snow water equivalent estimates for the Colorado Rocky Mountains, USA

    Science.gov (United States)

    Clow, David W.; Nanus, Leora; Verdin, Kristine L.; Schmidt, Jeffrey

    2012-01-01

    The National Weather Service's Snow Data Assimilation (SNODAS) program provides daily, gridded estimates of snow depth, snow water equivalent (SWE), and related snow parameters at a 1-km2 resolution for the conterminous USA. In this study, SNODAS snow depth and SWE estimates were compared with independent, ground-based snow survey data in the Colorado Rocky Mountains to assess SNODAS accuracy at the 1-km2 scale. Accuracy also was evaluated at the basin scale by comparing SNODAS model output to snowmelt runoff in 31 headwater basins with US Geological Survey stream gauges. Results from the snow surveys indicated that SNODAS performed well in forested areas, explaining 72% of the variance in snow depths and 77% of the variance in SWE. However, SNODAS showed poor agreement with measurements in alpine areas, explaining 16% of the variance in snow depth and 30% of the variance in SWE. At the basin scale, snowmelt runoff was moderately correlated (R2 = 0.52) with SNODAS model estimates. A simple method for adjusting SNODAS SWE estimates in alpine areas was developed that uses relations between prevailing wind direction, terrain, and vegetation to account for wind redistribution of snow in alpine terrain. The adjustments substantially improved agreement between measurements and SNODAS estimates, with the R2 of measured SWE values against SNODAS SWE estimates increasing from 0.42 to 0.63 and the root mean square error decreasing from 12 to 6 cm. Results from this study indicate that SNODAS can provide reliable data for input to moderate-scale to large-scale hydrologic models, which are essential for creating accurate runoff forecasts. Refinement of SNODAS SWE estimates for alpine areas to account for wind redistribution of snow could further improve model performance. Published 2011. This article is a US Government work and is in the public domain in the USA.

  11. BioModels.net Web Services, a free and integrated toolkit for computational modelling software.

    Science.gov (United States)

    Li, Chen; Courtot, Mélanie; Le Novère, Nicolas; Laibe, Camille

    2010-05-01

    Exchanging and sharing scientific results are essential for researchers in the field of computational modelling. BioModels.net defines agreed-upon standards for model curation. A fundamental one, MIRIAM (Minimum Information Requested in the Annotation of Models), standardises the annotation and curation process of quantitative models in biology. To support this standard, MIRIAM Resources maintains a set of standard data types for annotating models, and provides services for manipulating these annotations. Furthermore, BioModels.net creates controlled vocabularies, such as SBO (Systems Biology Ontology) which strictly indexes, defines and links terms used in Systems Biology. Finally, BioModels Database provides a free, centralised, publicly accessible database for storing, searching and retrieving curated and annotated computational models. Each resource provides a web interface to submit, search, retrieve and display its data. In addition, the BioModels.net team provides a set of Web Services which allows the community to programmatically access the resources. A user is then able to perform remote queries, such as retrieving a model and resolving all its MIRIAM Annotations, as well as getting the details about the associated SBO terms. These web services use established standards. Communications rely on SOAP (Simple Object Access Protocol) messages and the available queries are described in a WSDL (Web Services Description Language) file. Several libraries are provided in order to simplify the development of client software. BioModels.net Web Services take researchers one step further towards simulating and understanding the entirety of a biological system, by allowing them to retrieve biological models in their own tools, combine queries in workflows and efficiently analyse models.

  12. M-Health Service for Train Passengers Using Mobile GPS System: An ArchiMate Service Layer Model

    Directory of Open Access Journals (Sweden)

    MUHAMMAD SAJID

    2017-01-01

    Full Text Available EA (Enterprise Architecture) is an instrument that is employed to describe the organization's structure, business layout and operations within the IT (Information Technology) environment. Different types of organizations have extensively employed EA for aligning their business and operations with IT resources. EA may also be employed in non-organizational settings such as service-providing agencies: rescue, medical emergency and education services. This paper suggests an EAF (Enterprise Architecture Framework) for non-organizational settings by critically analyzing the top four EAs. The paper also proposes a new m-Health service model based on the mobile GPS (Global Positioning System) for train/rail passengers by employing the ArchiMate modeling language, and compares the proposed model with existing service providers.

  13. Knowledge-based model of competition in restaurant industry: a qualitative study about culinary competence, creativity, and innovation in five full-service restaurants in Jakarta

    OpenAIRE

    NAPITUPULU JOSHUA H.; ASTUTI ENDANG SITI; HAMID DJAMHUR; RAHARDJO KUSDI

    2016-01-01

    The purpose of the study is to provide an in-depth description, in the form of an analysis of culinary competence, creativity and innovation, that develops a knowledge-based model of competence in the full-service restaurant business. Studies on restaurants have generally focused on customers, particularly customer satisfaction and loyalty, and very few studies have discussed internal competitive factors in the restaurant business. The study aims at filling this research gap, using a knowledge-based approach t...

  14. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    Science.gov (United States)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

    The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by an adopted algorithm for a VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by the variables' error. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. Especially, how to quantify the uncertainty of VCs is proposed by a confidence interval based on truncation error (TE). In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich uncertainty modelling and analysis-related theories of geographic information science.
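
    For reference, the trapezoidal double rule (TDR) mentioned above can be written in a few lines for a regular grid DEM; the Gauss synthetic surface below is an invented stand-in for the simulated DEMs used in the paper.

        import numpy as np

        def dem_volume_tdr(z, dx, dy, base=0.0):
            """Volume between a regular-grid DEM and a horizontal base plane (trapezoidal double rule)."""
            h = np.asarray(z, dtype=float) - base          # heights above the base plane
            return np.trapz(np.trapz(h, dx=dx, axis=1), dx=dy)

        # Hypothetical Gauss synthetic surface on a 101 x 101 grid with 1 m spacing
        x = np.linspace(-50.0, 50.0, 101)
        X, Y = np.meshgrid(x, x)
        dem = 10.0 * np.exp(-(X**2 + Y**2) / (2.0 * 20.0**2))
        print(dem_volume_tdr(dem, dx=1.0, dy=1.0))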

  15. A reference model for space data system interconnection services

    Science.gov (United States)

    Pietras, John; Theis, Gerhard

    1993-01-01

    The widespread adoption of standard packet-based data communication protocols and services for spaceflight missions provides the foundation for other standard space data handling services. These space data handling services can be defined as increasingly sophisticated processing of data or information received from lower-level services, using a layering approach made famous in the International Organization for Standardization (ISO) Open System Interconnection Reference Model (OSI-RM). The Space Data System Interconnection Reference Model (SDSI-RM) incorporates the conventions of the OSI-RM to provide a framework within which a complete set of space data handling services can be defined. The use of the SDSI-RM is illustrated through its application to data handling services and protocols that have been defined by, or are under consideration by, the Consultative Committee for Space Data Systems (CCSDS).

  16. Decision support system in an international-voice-services business company

    Science.gov (United States)

    Hadianti, R.; Uttunggadewa, S.; Syamsuddin, M.; Soewono, E.

    2017-01-01

    We consider a problem faced by an international telecommunication services company in maximizing its profit from voice services by controlling costs and business partnerships. Competitiveness in this industry is very high, so any efficiency gained from controlling costs and business partnerships can help the company survive. The company trades voice traffic with a large number of business partners. There are four trading schemes that can be chosen by this company, namely flat rate, class tiering, volume commitment, and revenue capped. Each scheme has specific characteristics regarding the rate and volume deal, and the last three schemes are regarded as strategic schemes to be offered to business partners to ensure incoming traffic volume for both parties. The company and each business partner need to choose, for a given period of time, an optimal agreement that maximizes the company's profit. In this agreement, both parties agree to use a certain trading scheme, rate and rate/volume/revenue deal. A decision support system is then needed to give comprehensive information to the sales officers dealing with the business partners. This paper discusses the mathematical model of the optimal decision for incoming traffic volume control, which is part of the analysis needed to build the decision support system. The mathematical model is built by first performing data analysis to see how elastic the incoming traffic volume is. Once the level of elasticity is obtained, we derive a mathematical model that can simulate the impact of any trading decision on the revenue of the company. The optimal decision can be obtained from these simulation results. To evaluate the performance of the proposed method we apply our decision model to historical data. A software tool incorporating our methodology is currently under construction.

  17. The use of consumer depth cameras for 3D surface imaging of people with obesity: A feasibility study.

    Science.gov (United States)

    Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R

    2018-05-21

    Three dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system in imaging people with obesity. The concurrent validity of the depth camera based system was investigated by comparing measures of mid-trunk volume to a gold-standard. The repeatability and acceptability of the depth camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r2 = 0.997), with little evidence of proportional bias. The depth camera system was highly repeatable - low typical error (0.192 L), high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting. Copyright © 2018 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.

  18. Engaging Students in Mathematical Modeling through Service-Learning

    Science.gov (United States)

    Carducci, Olivia M.

    2014-01-01

    I have included a service-learning project in my mathematical modeling course for the last 6 years. This article describes my experience with service-learning in this course. The article includes a description of the course and the service-learning projects. There is a discussion of how to connect with community partners and identify…

  19. Comparison between Brain Atrophy and Subdural Volume to Predict Chronic Subdural Hematoma: Volumetric CT Imaging Analysis.

    Science.gov (United States)

    Ju, Min-Wook; Kim, Seon-Hwan; Kwon, Hyon-Jo; Choi, Seung-Won; Koh, Hyeon-Song; Youm, Jin-Young; Song, Shi-Hun

    2015-10-01

    Brain atrophy and subdural hygroma are well-known factors that enlarge the subdural space, which induces formation of chronic subdural hematoma (CSDH). Thus, we identified the subdural volume that could be used to predict the rate of future CSDH after head trauma using computed tomography (CT) volumetric analysis. A single-institution case-control study was conducted involving 1,186 patients who visited our hospital after head trauma from January 1, 2010 to December 31, 2014. Fifty-one patients with delayed CSDH were identified, and 50 age- and sex-matched patients served as controls. Intracranial volume (ICV), the brain parenchyma, and the subdural space were segmented using CT image-based software. To adjust for variations in head size, volume ratios were assessed as a percentage of ICV [brain volume index (BVI), subdural volume index (SVI)]. The maximum depth of the subdural space on both sides was used to estimate the SVI. Before adjusting for cranium size, brain volume tended to be smaller, and subdural space volume was significantly larger, in the CSDH group (p=0.138, p=0.021, respectively). The BVI and SVI were significantly different (p=0.003, p=0.001, respectively). SVI [area under the curve (AUC), 77.3%; p=0.008] was a more reliable predictor of CSDH than BVI (AUC, 68.1%; p=0.001). Bilateral subdural depth (sum of subdural depth on both sides) increased linearly with SVI. Subdural space volume was significantly larger in the CSDH group. SVI was a more reliable technique for predicting CSDH. Bilateral subdural depth was useful for estimating SVI.
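
    A small sketch of how the volume indices described above are formed; the segmented volumes below are invented for illustration, and the predictive thresholds are those reported in the study rather than anything computed here.

        def volume_indices(brain_volume_ml, subdural_volume_ml, icv_ml):
            """Volume ratios expressed as a percentage of intracranial volume (ICV)."""
            bvi = 100.0 * brain_volume_ml / icv_ml       # brain volume index
            svi = 100.0 * subdural_volume_ml / icv_ml    # subdural volume index
            return bvi, svi

        # Hypothetical CT-segmented volumes in millilitres
        print(volume_indices(brain_volume_ml=1150.0, subdural_volume_ml=95.0, icv_ml=1400.0))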

  20. Model of service-oriented catering supply chain performance evaluation

    OpenAIRE

    Gou, Juanqiong; Shen, Guguan; Chai, Rui

    2013-01-01

    Purpose: The aim of this paper is to construct a performance evaluation model for the service-oriented catering supply chain. Design/methodology/approach: Drawing on research into the current situation of the catering industry, this paper summarizes the characteristics of the catering supply chain, and then presents the service-oriented catering supply chain model based on a platform of logistics and information. Finally, the fuzzy AHP method is used to evaluate the performance of service-oriented catering ...

  1. Service Quality in Distance Education using the Gronroos Model

    OpenAIRE

    Hamid, Fazelina Sahul; Yip, Nick

    2016-01-01

    Demand for distance education programs has been increasing rapidly over the years. As a result, assessment of the quality of distance education programs has become a strategic issue that is very pertinent for program survival. This study uses the Gronroos Model for assessing the service quality of Malaysian distance education institutions. This model is chosen because it takes into account the service delivery process as well as the service outcome. Our study confirms the multidimensional natur...

  2. Alternative free volume models and positron cages for the characterisation of nanoporosity in materials

    International Nuclear Information System (INIS)

    Felix, M.V.; Morones, R.; Castano, V.M.

    2004-01-01

    Three semi-empirical positron stationary quantum models were developed for the study of nanoporosity in a wide range of solid porous materials. The cubic, conic and cylindrical well potentials were considered and their geometric parameters related to the Positron Annihilation LifeTime (PALT) measurements. If a conic or a cubic symmetry is assumed, a resonance lifetime phenomenon is found, which enables a technique for trapping positrons in free volume sites to be proposed. In the cylindrical case, an alternative method to determine free volume sizes in materials was developed. The free volume equations of these new models were then compared to the well-known and widely utilised Spherical Free Volume Model (SFVM) and remarkable differences were found. A strong variation of the free volume size-positron lifetime relation with the geometry involved was observed, as well as a remarkable dependence of the electron layer thickness parameter ΔR on the hole shape under study and on the nature of the material considered. The mathematical functions appearing in the conic and cylindrical cases are superpositions of Bessel functions of the first kind, and trigonometric functions in the cubic case. Generalised free volume diagrams were constructed and a brief geometrical scheme of the diverse cases considered was obtained. (author)
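
    For reference, the spherical free volume model (SFVM) against which the alternative geometries are compared is usually written in the Tao-Eldrup form (quoted here as background, with the conventional value of the electron layer thickness):

        \tau_{o\text{-}Ps} = \frac{1}{2}\left[1 - \frac{R}{R + \Delta R}
                             + \frac{1}{2\pi}\sin\!\left(\frac{2\pi R}{R + \Delta R}\right)\right]^{-1}\ \mathrm{ns},
        \qquad \Delta R \approx 0.166\ \mathrm{nm},

    where R is the radius of the spherical free-volume hole and τ is the ortho-positronium pick-off lifetime.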

  3. ATHENA code manual. Volume 1. Code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Carlson, K.E.; Roth, P.A.; Ransom, V.H.

    1986-09-01

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code has been developed to perform transient simulation of the thermal hydraulic systems which may be found in fusion reactors, space reactors, and other advanced systems. A generic modeling approach is utilized which permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of a complete facility. Several working fluids are available to be used in one or more interacting loops. Different loops may have different fluids with thermal connections between loops. The modeling theory and associated numerical schemes are documented in Volume I in order to acquaint the user with the modeling base and thus aid effective use of the code. The second volume contains detailed instructions for input data preparation

  4. Improvement effect on the depth-dose distribution by CSF drainage and air infusion of a tumour-removed cavity in boron neutron capture therapy for malignant brain tumours

    International Nuclear Information System (INIS)

    Sakurai, Yoshinori; Ono, Koji; Miyatake, Shin-ichi; Maruhashi, Akira

    2006-01-01

    Boron neutron capture therapy (BNCT) without craniotomy for malignant brain tumours was started using an epi-thermal neutron beam at the Kyoto University Reactor in June 2002. We have tried some techniques to overcome the treatable-depth limit in BNCT. One of the effective techniques is void formation utilizing a tumour-removed cavity. The tumorous part is removed by craniotomy about 1 week before a BNCT treatment in our protocol. Just before the BNCT irradiation, the cerebro-spinal fluid (CSF) in the tumour-removed cavity is drained out, air is infused to the cavity and then the void is made. This void improves the neutron penetration, and the thermal neutron flux at depth increases. The phantom experiments and survey simulations modelling the CSF drainage and air infusion of the tumour-removed cavity were performed for the size and shape of the void. The advantage of the CSF drainage and air infusion is confirmed for the improvement in the depth-dose distribution. From the parametric surveys, it was confirmed that the cavity volume had good correlation with the improvement effect, and the larger effect was expected as the cavity volume was larger

  5. Improvement effect on the depth-dose distribution by CSF drainage and air infusion of a tumour-removed cavity in boron neutron capture therapy for malignant brain tumours

    Science.gov (United States)

    Sakurai, Yoshinori; Ono, Koji; Miyatake, Shin-ichi; Maruhashi, Akira

    2006-03-01

    Boron neutron capture therapy (BNCT) without craniotomy for malignant brain tumours was started using an epi-thermal neutron beam at the Kyoto University Reactor in June 2002. We have tried some techniques to overcome the treatable-depth limit in BNCT. One of the effective techniques is void formation utilizing a tumour-removed cavity. The tumorous part is removed by craniotomy about 1 week before a BNCT treatment in our protocol. Just before the BNCT irradiation, the cerebro-spinal fluid (CSF) in the tumour-removed cavity is drained out, air is infused to the cavity and then the void is made. This void improves the neutron penetration, and the thermal neutron flux at depth increases. The phantom experiments and survey simulations modelling the CSF drainage and air infusion of the tumour-removed cavity were performed for the size and shape of the void. The advantage of the CSF drainage and air infusion is confirmed for the improvement in the depth-dose distribution. From the parametric surveys, it was confirmed that the cavity volume had good correlation with the improvement effect, and the larger effect was expected as the cavity volume was larger.

  6. Hydrologic controls on the development of equilibrium soil depths

    Science.gov (United States)

    Nicotina, L.; Tarboton, D. G.; Tesfa, T. K.; Rinaldo, A.

    2010-12-01

    The object of the present work was to study the coevolution of runoff production and geomorphological processes, and their effects on the formation of equilibrium soil depth, by focusing on their mutual feedbacks. The primary goal of this work is to describe spatial patterns of soil depth resulting, under the hypothesis of dynamic equilibrium, from long-term interactions between hydrologic forcings and soil production, erosion and sediment transport processes. These processes dominate the formation of actual soil depth patterns, which represent the boundary condition for water redistribution; thus this paper also attempts to set the premises for decoding their individual roles and mutual interactions in shaping the hydrologic response of a catchment. The relevance of the study stems from the massive improvement in hydrologic predictions for ungauged basins that would be achieved by directly using soil depths derived from remotely measured, objectively manipulated geomorphic features. Moreover, the setup of a coupled hydrologic-geomorphologic approach represents a first step in the study of such interactions, and in particular of the effects of soil moisture in determining soil production functions. Hydrological processes are here described by explicitly accounting for local soil depths and detailed catchment topography from high resolution digital terrain models (DTM). Geomorphological processes are described by means of well-studied geomorphic transport laws. Soil depth is used, in the exponential soil production function, as a proxy for all the mechanisms that induce mechanical disruption of bedrock and its conversion into soil. This formulation, although empirical, has been widely used in the literature and is currently accepted. The modeling approach is applied to the semi-arid Dry Creek Experimental Watershed, located near Boise, Idaho, USA. Modeled soil depths are compared with field data obtained from an extensive survey of the catchment
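
    The exponential soil production function referred to above is commonly written as follows (a standard form, quoted here as an assumption about the exact parameterisation used in the paper):

        P(h) = -\frac{\partial z_b}{\partial t} = P_0\, e^{-h/h_0},

    where h is the local soil depth, z_b the bedrock surface elevation, P_0 the bedrock-to-soil conversion rate at zero soil depth and h_0 a characteristic decay depth; under dynamic equilibrium this production rate balances erosion, which sets the local equilibrium soil depth.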

  7. Measuring Compositions in Organic Depth Profiling: Results from a VAMAS Interlaboratory Study

    Energy Technology Data Exchange (ETDEWEB)

    Shard, A. G.; Havelund, Rasmus; Spencer, Steve J.; Gilmore, I. S.; Alexander, Morgan R.; Angerer, Tina B.; Aoyagi, Satoka; Barnes, Jean P.; Benayad, Anass; Bernasik, Andrzej; Ceccone, Giacomo; Counsell, Jonathan D.; Deeks, Christopher; Fletcher, John S.; Graham, Daniel J.; Heuser, Christian; Lee, Tae G.; Marie, Camille; Marzec, Mateusz M.; Mishra, Gautam; Rading, Derk; Renault, Oliver; Scurr, David J.; Shon, Hyun K.; Spampinato, Valentina; Tian, Hua; Wang, Fuyi; Winograd, Nicholas; Wu, Kui; Wucher, Andreas; Zhou, Yufan; Zhu, Zihua

    2015-07-23

    We report the results of a VAMAS (Versailles Project on Advanced Materials and Standards) interlaboratory study on the measurement of composition in organic depth profiling. Layered samples with known binary compositions of Irganox 1010 and either Irganox 1098 or Fmoc-pentafluoro-L-phenylalanine in each layer were manufactured in a single batch and distributed to more than 20 participating laboratories. The samples were analyzed using argon cluster ion sputtering and either X-ray Photoelectron Spectroscopy (XPS) or Time-of-Flight Secondary Ion Mass Spectrometry (ToF-SIMS) to generate depth profiles. Participants were asked to estimate the volume fractions in two of the layers and were provided with the compositions of all other layers. Participants using XPS provided volume fractions within 0.03 of the nominal values. Participants using ToF-SIMS either made no attempt, or used various methods that gave results ranging in error from 0.02 to over 0.10 in volume fraction, the latter representing a 50% relative error for a nominal volume fraction of 0.2. Error was predominantly caused by inadequacy in the ability to compensate for primary ion intensity variations and the matrix effect in SIMS. Matrix effects in these materials appear to be more pronounced as the number of atoms in both the primary analytical ion and the secondary ion increase. Using the participants’ data we show that organic SIMS matrix effects can be measured and are remarkably consistent between instruments. We provide recommendations for identifying and compensating for matrix effects. Finally we demonstrate, using a simple normalization method, that virtually all ToF-SIMS participants could have obtained estimates of volume fraction that were at least as accurate and consistent as XPS.

  8. Modelling OAIS Compliance for Disaggregated Preservation Services

    Directory of Open Access Journals (Sweden)

    Gareth Knight

    2007-07-01

    Full Text Available The reference model for the Open Archival Information System (OAIS) is well established in the research community as a method of modelling the functions of a digital repository and as a basis in which to frame digital curation and preservation issues. In reference to the 5th anniversary review of the OAIS, it is timely to consider how it may be interpreted by an institutional repository. The paper examines methods of sharing essential functions and requirements of an OAIS between two or more institutions, outlining the practical considerations of outsourcing. It also details the approach taken by the SHERPA DP Project to introduce a disaggregated service model for institutional repositories that wish to implement preservation services.

  9. Exploration of depth modeling mode one lossless wedgelets storage strategies for 3D-high efficiency video coding

    Science.gov (United States)

    Sanchez, Gustavo; Marcon, César; Agostini, Luciano Volcan

    2018-01-01

    3D-high efficiency video coding has introduced tools to obtain higher efficiency in 3-D video coding, most of them related to depth map coding. Among these tools, depth modeling mode-1 (DMM-1) focuses on better encoding the edge regions of depth maps. The large memory required for storing all wedgelet patterns is one of the bottlenecks in the DMM-1 hardware design of both encoder and decoder, since many patterns must be stored. Three algorithms to reduce the DMM-1 memory requirements and a hardware design targeting the most efficient among these algorithms are presented. Experimental results demonstrate that the proposed solutions surpass related works, reducing the wedgelet memory by up to 78.8% without degrading encoding efficiency. Synthesis results demonstrate that the proposed algorithm reduces power dissipation by almost 75% compared to the standard approach.

  10. Application of Service Quality Model in Education Environment

    Directory of Open Access Journals (Sweden)

    Ting Ding Hooi

    2016-02-01

    Full Text Available Most of the ideas on service quality stem from the West. The importance of the extensive research carried out in the West is undeniable; it leads to the generation and development of new ideas, which are subsequently channeled to developing countries. The ideas obtained are then adapted and used by these developing countries in order to obtain a better approach to delivering service quality. There is much to be learnt from the service quality model SERVQUAL, which has attained high acceptance in the West. Service quality in the education system is important to guarantee the effectiveness and quality of education. Effective, high-quality education will produce quality graduates, who will contribute to the development of the nation. This paper discusses the application of the SERVQUAL model in the education environment.
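
    A minimal sketch of how SERVQUAL scores are conventionally computed, i.e. perception-minus-expectation gap scores averaged per dimension; the dimension names, items and ratings below are invented for illustration.

        from statistics import mean

        # Hypothetical 1-7 Likert responses grouped by SERVQUAL dimension
        expectations = {"tangibles": [6, 7, 6], "reliability": [7, 7, 6], "responsiveness": [6, 6, 7]}
        perceptions  = {"tangibles": [5, 6, 5], "reliability": [6, 6, 5], "responsiveness": [6, 5, 6]}

        def servqual_gaps(expect, perceive):
            """Per-dimension gap score = mean(perception) - mean(expectation)."""
            return {dim: mean(perceive[dim]) - mean(expect[dim]) for dim in expect}

        gaps = servqual_gaps(expectations, perceptions)
        print(gaps)                                   # negative gaps indicate a quality shortfall
        print("overall:", mean(gaps.values()))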

  11. Prediction of lake depth across a 17-state region in the United States

    Science.gov (United States)

    Oliver, Samantha K.; Soranno, Patricia A.; Fergus, C. Emi; Wagner, Tyler; Winslow, Luke A.; Scott, Caren E.; Webster, Katherine E.; Downing, John A.; Stanley, Emily H.

    2016-01-01

    Lake depth is an important characteristic for understanding many lake processes, yet it is unknown for the vast majority of lakes globally. Our objective was to develop a model that predicts lake depth using map-derived metrics of lake and terrestrial geomorphic features. Building on previous models that use local topography to predict lake depth, we hypothesized that regional differences in topography, lake shape, or sedimentation processes could lead to region-specific relationships between lake depth and the mapped features. We therefore used a mixed modeling approach that included region-specific model parameters. We built models using lake and map data from LAGOS, which includes 8164 lakes with maximum depth (Zmax) observations. The model was used to predict depth for all lakes ≥4 ha (n = 42 443) in the study extent. Lake surface area and maximum slope in a 100 m buffer were the best predictors of Zmax. Interactions between surface area and topography occurred at both the local and regional scale; surface area had a larger effect in steep terrain, so large lakes embedded in steep terrain were much deeper than those in flat terrain. Despite a large sample size and inclusion of regional variability, model performance (R2 = 0.29, RMSE = 7.1 m) was similar to other published models. The relative error varied by region, however, highlighting the importance of taking a regional approach to lake depth modeling. Additionally, we provide the largest known collection of observed and predicted lake depth values in the United States.
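
    In the spirit of the region-specific modelling described above, a linear mixed model with region-level random effects can be sketched with statsmodels; the file name, column names (log_zmax, log_area, max_slope, region) and random-effects structure are assumptions, not the published LAGOS model.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical lake table: log maximum depth, log surface area, maximum slope in a 100 m buffer, region id
        lakes = pd.read_csv("lakes.csv")

        # Fixed effects: area, slope and their interaction; random intercept and area slope by region
        model = smf.mixedlm("log_zmax ~ log_area * max_slope",
                            data=lakes,
                            groups=lakes["region"],
                            re_formula="~log_area")
        result = model.fit()
        print(result.summary())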

  12. Efficient Depth Enhancement Using a Combination of Color and Depth Information.

    Science.gov (United States)

    Lee, Kyungjae; Ban, Yuseok; Lee, Sangyoun

    2017-07-01

    Studies on depth images containing three-dimensional information have been performed for many practical applications. However, the depth images acquired from depth sensors have inherent problems, such as missing values and noisy boundaries. These problems significantly affect the performance of applications that use a depth image as their input. This paper describes a depth enhancement algorithm based on a combination of color and depth information. To fill depth holes and recover object shapes, asynchronous cellular automata with neighborhood distance maps are used. Image segmentation and a weighted linear combination of spatial filtering algorithms are applied to extract object regions and fill disocclusion in the object regions. Experimental results on both real-world and public datasets show that the proposed method enhances the quality of the depth image with low computational complexity, outperforming conventional methods on a number of metrics. Furthermore, to verify the performance of the proposed method, we present stereoscopic images generated by the enhanced depth image to illustrate the improvement in quality.
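
    The paper's asynchronous cellular automata and segmentation pipeline are not reproduced here; the following minimal sketch only illustrates the general idea of using color similarity to fill missing depth values, via a naive joint-bilateral-style weighted fill. The array layouts and parameter values are assumptions.

        # Minimal sketch of color-guided depth hole filling (a joint-bilateral-style fill).
        # This is NOT the paper's cellular-automata method, only an illustration of
        # combining color and depth information. Inputs are assumed to be numpy arrays.
        import numpy as np

        def fill_depth_holes(depth, color, radius=5, sigma_c=10.0, sigma_s=3.0):
            """depth: HxW float array, 0 marks missing values; color: HxWx3 uint8 image."""
            filled = depth.copy()
            H, W = depth.shape
            ys, xs = np.nonzero(depth == 0)                 # hole pixels
            for y, x in zip(ys, xs):
                y0, y1 = max(0, y - radius), min(H, y + radius + 1)
                x0, x1 = max(0, x - radius), min(W, x + radius + 1)
                d = depth[y0:y1, x0:x1]
                valid = d > 0
                if not valid.any():
                    continue                                # nothing to borrow from
                c_patch = color[y0:y1, x0:x1].astype(float)
                c_diff = np.linalg.norm(c_patch - color[y, x].astype(float), axis=2)
                yy, xx = np.mgrid[y0:y1, x0:x1]
                dist2 = (yy - y) ** 2 + (xx - x) ** 2
                w = np.exp(-c_diff**2 / (2 * sigma_c**2)) * np.exp(-dist2 / (2 * sigma_s**2))
                w = w * valid                               # use only valid depth samples
                if w.sum() > 0:
                    filled[y, x] = (w * d).sum() / w.sum()
            return filled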

  13. Heat flow of standard depth

    International Nuclear Information System (INIS)

    Cull, J.P.

    1981-01-01

    Secular and long-term periodic changes in surface temperature cause perturbations to the geothermal gradient which may be significant to depths of at least 1000 m, and major corrections are required to determine absolute values of heat flow from the Earth's interior. However, detailed climatic models remain contentious and estimates of error in geothermal gradients differ widely. Consequently, regions of anomalous heat flow which could contain geothermal resources may be more easily resolved by measuring relative values at a standard depth (e.g. 100 m) so that all data are subject to similar corrections. (orig./ME)

  14. Improved rainfall-runoff approach using lumped and conceptual modelling

    OpenAIRE

    Durán Barroso, Pablo

    2016-01-01

    Rainfall-runoff quantification is one of the most important tasks in both engineering and watershed management as it allows to identify, forecast and explain watershed response. The division of the rainfall depth between infiltration and runoff has a high level of complexity due to the spatial heterogeneity in real catchments and the temporal precipitation variability, which provide scale effects on the overall runoff volumes. The Natural Resources Conservation Service Curve Number (NRCS CN) ...
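
    For context, the baseline NRCS Curve Number relationship referred to above divides rainfall depth between abstraction and direct runoff; the sketch below implements the standard textbook equations (US customary units), not the thesis's improved approach.

        # Standard NRCS Curve Number runoff equations (US customary units, inches),
        # shown for context only; the thesis proposes improvements beyond this baseline.
        def nrcs_runoff_depth(P_in, CN, ia_ratio=0.2):
            """Direct runoff depth Q (in) for storm rainfall P (in) and curve number CN."""
            S = 1000.0 / CN - 10.0          # potential maximum retention (in)
            Ia = ia_ratio * S               # initial abstraction (commonly 0.2*S)
            if P_in <= Ia:
                return 0.0                  # all rainfall abstracted, no runoff
            return (P_in - Ia) ** 2 / (P_in - Ia + S)

        # Example: 3 in of rain on a watershed with CN = 75
        print(round(nrcs_runoff_depth(3.0, 75), 2))   # ~0.96 in of runoff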

  15. Facilitation of intermediate-depth earthquakes by eclogitization-related stresses and H2O

    Science.gov (United States)

    Nakajima, J.; Uchida, N.; Hasegawa, A.; Shiina, T.; Hacker, B. R.; Kirby, S. H.

    2012-12-01

    Generation of intermediate-depth earthquakes is an ongoing enigma because high lithostatic pressures render ordinary dry frictional failure unlikely. A popular hypothesis to solve this conundrum is fluid-related embrittlement (e.g., Kirby et al., 1996; Preston et al., 2003), which is known to work even for dehydration reactions with negative volume change (Jung et al., 2004). One consequence of reaction with the negative volume change is the formation of a paired stress field as a result of strain compatibility across the reaction front (Hacker, 1996; Kirby et al., 1996). Here we analyze waveforms of a tiny seismic cluster in the lower crust of the downgoing Pacific plate at a depth of 155 km and propose new evidence in favor of this mechanism: tensional earthquakes lying 1 km above compressional earthquakes, and earthquakes with highly similar waveforms lying on well-defined planes with complementary rupture areas. The tensional stress is interpreted to be caused by the dimensional mismatch between crust transformed to eclogite and underlying untransformed crust, and the earthquakes are interpreted to be facilitated by fluid produced by eclogitization. These observations provide seismic evidence for the dual roles of volume-change related stresses and fluid-related embrittlement as viable processes for nucleating earthquakes in downgoing oceanic lithosphere.

  16. Sensitivity Analysis of Wavelet Neural Network Model for Short-Term Traffic Volume Prediction

    Directory of Open Access Journals (Sweden)

    Jinxing Shen

    2013-01-01

    In order to achieve a more accurate and robust traffic volume prediction model, the sensitivity of the wavelet neural network model (WNNM) is analyzed in this study. Based on real loop detector data provided by the traffic police detachment of Maanshan, the WNNM is examined with different numbers of input neurons, different numbers of hidden neurons, and traffic volumes aggregated over different time intervals. The test results show that the performance of the WNNM depends heavily on the network parameters and on the time interval of the traffic volume. The WNNM with 4 input neurons and 6 hidden neurons is the optimal predictor in terms of accuracy, stability, and adaptability, and prediction is markedly better when the traffic volume is aggregated over 15-minute intervals. In addition, the optimized WNNM is compared with the widely used back-propagation neural network (BPNN). The comparison results indicate that the WNNM produces much lower values of MAE, MAPE, and VAPE than the BPNN, which shows that the WNNM performs better for short-term traffic volume prediction.

  17. Real-time bladder volume monitoring by the application of a new implantable bladder volume sensor for a small animal model

    Directory of Open Access Journals (Sweden)

    Dong Sup Lee

    2011-04-01

    Although real-time monitoring of bladder volume together with intravesical pressure can provide more information for understanding functional changes of the urinary bladder, accurate real-time prediction of bladder volume remains difficult in urodynamic studies with small animal models. We studied a new implantable bladder volume monitoring device in eight rats. During cystometry, microelectrodes prepared by a microelectromechanical systems (MEMS) process were placed symmetrically on both lateral walls of the bladder, and the expanded bladder volume was calculated. Immunohistological studies were performed after 1 week and after 4 weeks to evaluate the biocompatibility of the microelectrodes. Once the volume of saline infused into the bladder exceeded 0.6 mL, the estimated bladder volume was statistically correlated with the volume of saline injected (p<0.01). Additionally, the MEMS microelectrodes used in this study showed reliable biocompatibility. Therefore, the device can be used to evaluate changes in bladder volume in studies with small animals, and it may help to provide more information about functional changes in the bladder in laboratory studies. Furthermore, owing to its biocompatibility, the device could be chronically implanted in conscious, ambulating animals, allowing novel longitudinal studies to be performed.

  18. MELSAR: a mesoscale air quality model for complex terrain. Volume 2. Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Allwine, K.J.; Whiteman, C.D.

    1985-04-01

    This final report is submitted as part of the Green River Ambient Model Assessment (GRAMA) project conducted at the US Department of Energy's Pacific Northwest Laboratory for the US Environmental Protection Agency. The GRAMA Program has, as its ultimate goal, the development of validated air quality models that can be applied to the complex terrain of the Green River Formation of western Colorado, eastern Utah and southern Wyoming. The Green River Formation is a geologic formation containing large reserves of oil shale, coal, and other natural resources. Development of these resources may lead to a degradation of the air quality of the region. Air quality models are needed immediately for planning and regulatory purposes to assess the magnitude of these regional impacts. This report documents one of the models being developed for this purpose within GRAMA - specifically a model to predict short averaging time (less than or equal to 24 h) pollutant concentrations resulting from the mesoscale transport of pollutant releases from multiple sources. MELSAR has not undergone any rigorous operational testing, sensitivity analyses, or validation studies. Testing and evaluation of the model are needed to gain a measure of confidence in the model's performance. This report consists of two volumes. This volume contains the Appendices, which include listings of the FORTRAN code and Volume 1 contains the model overview, technical description, and user's guide. 13 figs., 10 tabs.

  19. Model for the radionuclide measurement of ascitic fluid volumes

    International Nuclear Information System (INIS)

    Kaplan, W.D.; Davis, M.A.; Uren, R.F.; Wisotsky, T.; LaTegola, M.

    1978-01-01

    Technetium-99m phytate colloids formed in vitro and in vivo were examined as radioindicators for estimation of the volume of third-space fluid in an ovarian ascites model using C3HeB/FeJ mice. In double-label experiments, the accuracy of the colloids for dilution analysis was found to be equal or superior to that of I-125 HSA. Sampling times 3 to 5 min after intraperitoneal administration were found to produce the best volume estimates. Four needle-stopcock assemblies inserted sequentially into the quadrants of the peritoneal cavity were used for administration and sampling of the radioindicators. The stopcocks could be closed to prevent leakage of ascitic fluid during the procedure. In contrast to radiolabeled albumin, Tc-99m phytate colloids have clinical use for simultaneous imaging of radiotracer migration to assess potential occlusion of diaphragmatic lymphatics by neoplastic cells, and for dilution analysis to estimate volume of ascitic fluid
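
    The dilution analysis mentioned above rests on the indicator-dilution principle: once the tracer has mixed, the fluid volume is the administered activity divided by the sampled concentration. A minimal sketch with hypothetical numbers:

        # Indicator-dilution estimate of ascitic fluid volume: after the tracer has mixed,
        # volume = administered activity / sampled concentration. Numbers are hypothetical.
        def dilution_volume(injected_activity_kBq, sample_activity_kBq_per_mL):
            return injected_activity_kBq / sample_activity_kBq_per_mL

        # e.g. 3700 kBq of Tc-99m phytate injected, sample measures 7.4 kBq/mL
        print(dilution_volume(3700.0, 7.4))   # -> 500.0 mL of ascitic fluid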

  20. Stress variations during a glacial cycle at 500 m depth in Forsmark and Oskarshamn: Earth model effects

    Energy Technology Data Exchange (ETDEWEB)

    Lund, Bjoern [Uppsala Univ. (Sweden). Dept. of Earth Sciences

    2006-06-15

    This study has considered the response to a glaciation of Earth models of increasingly complex structure in elastic parameters and viscosity. The models are one-dimensional in the sense that they vary only in the depth direction, i.e. there are only uniform, horizontal layers in the models. I find that as the complexity of the models increases, and the properties of the uppermost kilometer of the Earth become less affected by average properties from deeper down, the flexural stresses at 500 m depth decrease, as expected. A lower Young's modulus, lower compressibility and lower density in the uppermost layer all act to lower the stresses. However, the three properties act differently on the resulting response. Introducing layering in Young's modulus generally decreases the stresses all along a profile through the ice model. Going from incompressible to compressible models affects the stresses outside the ice edge significantly more than the stresses under the ice sheet. Introducing layering in density conversely affects the stresses under the ice sheet more than those outside the ice edge. The combined effects of the most complex models tested here show that the glacially induced horizontal stresses at 500 m depth decrease to levels very similar in magnitude to the loading stress. There are, however, temporal variations in these horizontal stresses that do not follow the loading stress and which induce tensional or compressional horizontal stresses that persist when no ice is present. As is well known, changes in viscosity structure have a very large effect on the Earth response. Viscosity affects both the magnitudes of the induced stresses and the temporal behavior of the stress evolution. This is confirmed in the current study. The glacially induced stresses for some of the models have been used in combination with the current background stress field at Forsmark and Oskarshamn, as estimated in SKB's site models, to evaluate fault stability throughout a glacial cycle.

  2. Modeling of Cementitious Representative Volume Element with Additives

    Science.gov (United States)

    Shahzamanian, M. M.; Basirun, W. J.

    CEMHYD3D has been employed to simulate the representative volume element (RVE) of cementitious systems (Type I cement) containing fly ash (Class F) through a voxel-based finite element analysis (FEA) approach. Three-dimensional microstructures composed of voxels are generated for a heterogeneous cementitious material consisting of various constituent phases. The primary focus is to simulate a cementitious RVE containing fly ash and to present the homogenized macromechanical properties obtained from its analysis. Simple kinematic uniform boundary conditions as well as periodic boundary conditions were imposed on the RVE to obtain the principal and shear moduli. Our current work considers the effect of fly ash percentage on the elastic properties based on the mass and volume replacements. RVEs with lengths of 50, 100 and 200μm at different degrees of hydration are generated, and the elastic properties are modeled and simulated. In general, the elastic properties of a cementitious RVE with fly ash replacement for cement based on mass and volume differ from each other. Moreover, the finite element (FE) mesh density effect is studied. Results indicate that mechanical properties decrease with increasing mesh density.

  3. Mechanistic Fluid Transport Model to Estimate Gastrointestinal Fluid Volume and Its Dynamic Change Over Time.

    Science.gov (United States)

    Yu, Alex; Jackson, Trachette; Tsume, Yasuhiro; Koenigsknecht, Mark; Wysocki, Jeffrey; Marciani, Luca; Amidon, Gordon L; Frances, Ann; Baker, Jason R; Hasler, William; Wen, Bo; Pai, Amit; Sun, Duxin

    2017-11-01

    Gastrointestinal (GI) fluid volume and its dynamic change are integral to study drug disintegration, dissolution, transit, and absorption. However, key questions regarding the local volume and its absorption, secretion, and transit remain unanswered. The dynamic fluid compartment absorption and transit (DFCAT) model is proposed to estimate in vivo GI volume and GI fluid transport based on magnetic resonance imaging (MRI) quantified fluid volume. The model was validated using GI local concentration of phenol red in human GI tract, which was directly measured by human GI intubation study after oral dosing of non-absorbable phenol red. The measured local GI concentration of phenol red ranged from 0.05 to 168 μg/mL (stomach), to 563 μg/mL (duodenum), to 202 μg/mL (proximal jejunum), and to 478 μg/mL (distal jejunum). The DFCAT model characterized observed MRI fluid volume and its dynamic changes from 275 to 46.5 mL in stomach (from 0 to 30 min) with mucus layer volume of 40 mL. The volumes of the 30 small intestine compartments were characterized by a max of 14.98 mL to a min of 0.26 mL (0-120 min) and a mucus layer volume of 5 mL per compartment. Regional fluid volumes over 0 to 120 min ranged from 5.6 to 20.38 mL in the proximal small intestine, 36.4 to 44.08 mL in distal small intestine, and from 42 to 64.46 mL in total small intestine. The DFCAT model can be applied to predict drug dissolution and absorption in the human GI tract with future improvements.

  4. An u-Service Model Based on a Smart Phone for Urban Computing Environments

    Science.gov (United States)

    Cho, Yongyun; Yoe, Hyun

    In urban computing environments, all services should be based on the interaction between humans and the environments around them, interactions that occur frequently and ordinarily at home and in the office. This paper proposes a u-service model based on a smart phone for urban computing environments. The suggested service model includes a context-aware and personalized service scenario development environment that can instantly describe a user's u-service demand or situation information with smart devices. To do this, the architecture of the suggested service model consists of a graphical service editing environment for smart devices, a u-service platform, and an infrastructure with sensors and WSN/USN. The graphical editor expresses contexts as execution conditions of a new service through an ontology-based context model. The service platform handles the service scenario according to the contexts. With the suggested service model, a user in an urban computing environment can quickly and easily compose a new u-service using smart devices.

  5. Mid-depth temperature maximum in an estuarine lake

    Science.gov (United States)

    Stepanenko, V. M.; Repina, I. A.; Artamonov, A. Yu; Gorin, S. L.; Lykossov, V. N.; Kulyamin, D. V.

    2018-03-01

    The mid-depth temperature maximum (TeM) was measured in the estuarine Bol’shoi Vilyui Lake (Kamchatka peninsula, Russia) in summer 2015. We applied the 1D k-ɛ model LAKE to the case and found that it successfully simulates the phenomenon. We argue that the main prerequisite for mid-depth TeM development is a salinity increase below the freshwater mixed layer that is sharp enough for the temperature increase with depth not to cause convective mixing and double diffusion there. Given that this condition is satisfied, the TeM magnitude is controlled by physical factors which we identified as: radiation absorption below the mixed layer, mixed-layer temperature dynamics, vertical heat conduction and water-sediment heat exchange. In addition to these, we formulate the mechanism of temperature maximum ‘pumping’, resulting from the phase shift between the diurnal cycles of mixed-layer depth and temperature maximum magnitude. Based on the LAKE model results we quantify the contribution of the mechanisms listed above and find their individual significance highly sensitive to water turbidity. Relying on the physical mechanisms identified, we define the environmental conditions favouring summertime TeM development in salinity-stratified lakes as a shallow mixed layer, weak winds and cloudless weather. We exemplify the effect of mixed-layer depth on the TeM with a set of selected lakes.

  6. Validation of Service Blueprint models by means of formal simulation techniques

    OpenAIRE

    Estañol Lamarca, Montserrat; Marcos, Esperanza; Oriol Hilari, Xavier; Pérez, Francisco J.; Teniente López, Ernest; Vara, Juan M.

    2017-01-01

    As service design has gained interest in recent years, so has one of its primary tools: the Service Blueprint. In essence, a service blueprint is a graphical tool for the design of business models, specifically for the design of business service operations. Despite its level of adoption, tool support for service design tasks is still in its early days, and available tools for service blueprint modeling are mainly focused on enhancing usability and enabling collaborative editing, disre...

  7. 39 CFR 3050.25 - Volume and revenue data.

    Science.gov (United States)

    2010-07-01

    39 CFR § 3050.25, Volume and revenue data (Postal Regulatory Commission, Periodic Reporting). The record excerpt mentions billing determinants, broken out by quarter, to be provided within 90 days of the close of each fiscal year, and an item (c) concerning revenue; the remainder of the section text is truncated in this record.

  8. Service Capacity Reserve under Uncertainty by Hospital’s ER Analogies: A Practical Model for Car Services

    Directory of Open Access Journals (Sweden)

    Miguel Ángel Pérez Salaverría

    2014-01-01

    We define a capacity reserve model to dimension passenger car service installations according to the demographic distribution of the area to be serviced, using analogies with hospital emergency rooms. Service facilities are usually designed using empirical methods, but customers arrive under uncertain conditions not included in the original estimates, so there is a gap between customers' real demand and the service's capacity. Our research establishes a valid methodology, addressing the absence of recent research and the lack of statistical techniques in this area, by integrating demand uncertainty into a single model built in stages that combines ARIMA forecasting, queueing theory, and Monte Carlo simulation to optimize service capacity and occupancy while minimizing the implicit cost of the capacity that must be reserved to serve unexpected customers. The model has proved to be a useful tool for optimal decision making under uncertainty, integrating the prediction of the cost implicit in the capacity reserved to serve unexpected demand and defining a set of new process indicators, such as capacity, occupancy, and cost of capacity reserve, not studied before. The new indicators are intended to optimize the service operation and could be implemented in the information systems used by passenger car services.
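
    As a small illustration of the queueing-theory ingredient mentioned above (and not the authors' combined ARIMA/Monte Carlo model), an M/M/c Erlang C calculation estimates the probability that an arriving customer must wait for a given number of service bays, which is one way to reason about how much capacity to reserve; the arrival and service rates below are hypothetical.

        # Illustrative M/M/c (Erlang C) calculation for the queueing-theory step:
        # probability that an arriving customer has to wait, given c service bays.
        # This is a generic textbook formula, not the authors' full model.
        from math import factorial

        def erlang_c_wait_probability(arrival_rate, service_rate, c):
            """arrival_rate, service_rate in customers per unit time; c = number of servers."""
            a = arrival_rate / service_rate                 # offered load (Erlangs)
            if a >= c:
                return 1.0                                  # unstable queue: everyone waits
            top = (a ** c / factorial(c)) * (c / (c - a))
            bottom = sum(a ** k / factorial(k) for k in range(c)) + top
            return top / bottom

        # Example: 10 cars/hour arriving, each bay services 3 cars/hour, 5 bays
        print(round(erlang_c_wait_probability(10, 3, 5), 3))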

  9. Climate Model Diagnostic Analyzer Web Service System

    Science.gov (United States)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Jiang, J. H.

    2014-12-01

    We have developed a cloud-enabled web-service system that empowers physics-based, multi-variable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. We have developed a methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks. The web-service system, called Climate Model Diagnostic Analyzer (CMDA), currently supports (1) all the observational datasets from Obs4MIPs and a few ocean datasets from NOAA and Argo, which can serve as observation-based reference data for model evaluation, (2) many of CMIP5 model outputs covering a broad range of atmosphere, ocean, and land variables from the CMIP5 specific historical runs and AMIP runs, and (3) ECMWF reanalysis outputs for several environmental variables in order to supplement observational datasets. Analysis capabilities currently supported by CMDA are (1) the calculation of annual and seasonal means of physical variables, (2) the calculation of time evolution of the means in any specified geographical region, (3) the calculation of correlation between two variables, (4) the calculation of difference between two variables, and (5) the conditional sampling of one physical variable with respect to another variable. A web user interface is chosen for CMDA because it not only lowers the learning curve and removes the adoption barrier of the tool but also enables instantaneous use, avoiding the hassle of local software installation and environment incompatibility. CMDA will be used as an educational tool for the summer school organized by JPL's Center for Climate Science in 2014. In order to support 30+ simultaneous users during the school, we have deployed CMDA to the Amazon cloud environment. The cloud-enabled CMDA will provide each student with a virtual machine while the user interaction with the system will remain the same
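
    The abstract describes wrapping existing science code as a web service with a Python wrapper and a web framework; a minimal Flask sketch of that general pattern is shown below. The endpoint, parameter names, and the placeholder analysis function are hypothetical and are not CMDA's actual API.

        # Minimal sketch of the general pattern described above: an existing analysis
        # routine wrapped as a web service with a Python web framework. The endpoint,
        # parameters, and function names are hypothetical, not CMDA's actual API.
        from flask import Flask, jsonify, request

        app = Flask(__name__)

        def seasonal_mean(dataset, variable, season):
            """Placeholder for an existing science routine (e.g. a seasonal-mean calculation)."""
            return {"dataset": dataset, "variable": variable, "season": season, "mean": 287.4}

        @app.route("/seasonal_mean")
        def seasonal_mean_service():
            result = seasonal_mean(
                dataset=request.args.get("dataset", "obs4mips_example"),
                variable=request.args.get("variable", "tas"),
                season=request.args.get("season", "DJF"),
            )
            return jsonify(result)

        if __name__ == "__main__":
            app.run(host="0.0.0.0", port=8080)   # e.g. one virtual machine per user, as in the cloud deployment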

  10. Models for Primary Eye Care Services in India

    Directory of Open Access Journals (Sweden)

    Vasundhra Misra

    2015-01-01

    In the current situation, an integrated health care system with primary eye care promoted by government of India is apparently the best answer. This model is both cost effective and practical for the prevention and control of blindness among the underprivileged population. Other models functioning with the newer technology of tele-ophthalmology or mobile clinics also add to the positive outcome in providing primary eye care services. This review highlights the strengths and weaknesses of various models presently functioning in the country with the idea of providing useful inputs for eye care providers and enabling them to identify and adopt an appropriate model for primary eye care services.

  11. Individualizing Services, Individualizing Responsibility

    DEFF Research Database (Denmark)

    Garsten, Christina; Hollertz, Katarina; Jacobsson, Kerstin

    possibilities for individual voice, autonomy and self-determination in the local delivery of activation policy? What barriers do specific organisational models and practices imply for clients to choose, determine and access tailor-made programmes and services? What policy technologies are at work in governing......-oriented, and the normative demands placed on individuals appear increasingly totalizing, concerning the whole individual rather than the job-related aspects only. The paper is based on 23 in-depth interviews with individual clients as well as individual caseworkers and other professionals engaged in client-related work...

  12. Conversion of a Surface Model of a Structure of Interest into a Volume Model for Medical Image Retrieval

    Directory of Open Access Journals (Sweden)

    Sarmad ISTEPHAN

    2015-06-01

    Volumetric medical image datasets contain vital information for noninvasive diagnosis, treatment planning and prognosis. However, direct and unlimited querying of such datasets is hindered by the unstructured nature of the imaging data. This study is a step towards the unlimited querying of medical image datasets by focusing on specific Structures of Interest (SOI). A requirement for achieving this objective is having both the surface and volume models of the SOI; typically, however, only the surface model is available. Therefore, this study focuses on creating a fast method to convert a surface model to a volume model. Three methods (1D, 2D and 3D) are proposed and evaluated using simulated and real data of the Deep Perisylvian Area (DPSA) within the human brain. The 1D method takes 80 msec for the DPSA model, about 4 times faster than the 2D method and 7.4-fold faster than the 3D method, with over 97% accuracy. The proposed 1D method is feasible for surface-to-volume conversion in computer-aided diagnosis, treatment planning and prognosis systems containing large amounts of unstructured medical images.

  13. An agent-based model for energy service companies

    International Nuclear Information System (INIS)

    Robinson, Marguerite; Varga, Liz; Allen, Peter

    2015-01-01

    Highlights: • An agent-based model for household energy efficiency upgrades is considered. • Energy service companies provide an alternative to traditional utility providers. • Household self-financing is a limiting factor to widespread efficiency upgrading. • Longer term service contracts can lead to reduced household energy costs. • Future energy price increases enable service providers to retain their customer base. - Abstract: The residential housing sector is a major consumer of energy accounting for approximately one third of carbon emissions in the United Kingdom. Achieving a sustainable, low-carbon infrastructure necessitates a reduced and more efficient use of domestic energy supplies. Energy service companies offer an alternative to traditional providers, which supply a single utility product to satisfy the unconstrained demand of end users, and have been identified as a potentially important actor in sustainable future economies. An agent-based model is developed to examine the potential of energy service companies to contribute to the large scale upgrading of household energy efficiency, which would ultimately lead to a more sustainable and secure energy infrastructure. The migration of households towards energy service companies is described by an attractiveness array, through which potential customers can evaluate the future benefits, in terms of household energy costs, of changing provider. It is shown that self-financing is a limiting factor to the widespread upgrading of residential energy efficiency. Greater reductions in household energy costs could be achieved by committing to longer term contracts, allowing upgrade costs to be distributed over greater time intervals. A steadily increasing cost of future energy usage lends an element of stability to the market, with energy service companies displaying the ability to retain customers on contract expiration. The model highlights how a greater focus on the provision of energy services, as

  14. Dual beam organic depth profiling using large argon cluster ion beams

    Science.gov (United States)

    Holzweber, M; Shard, AG; Jungnickel, H; Luch, A; Unger, WES

    2014-01-01

    Argon cluster sputtering of an organic multilayer reference material consisting of two organic components, 4,4′-bis[N-(1-naphthyl-1-)-N-phenyl-amino]-biphenyl (NPB) and aluminium tris-(8-hydroxyquinolate) (Alq3), materials commonly used in the organic light-emitting diode industry, was carried out using time-of-flight SIMS in dual beam mode. The sample used in this study consists of a ~400-nm-thick NPB matrix with 3-nm marker layers of Alq3 at depths of ~50, 100, 200 and 300 nm. Argon cluster sputtering provides a constant sputter yield throughout the depth profiles, and the sputter yield volumes and depth resolution are presented for Ar-cluster sizes of 630, 820, 1000, 1250 and 1660 atoms at a kinetic energy of 2.5 keV. The effect of cluster size in this material and over this range is shown to be negligible. © 2014 The Authors. Surface and Interface Analysis published by John Wiley & Sons Ltd. PMID:25892830

  15. Predicting oropharyngeal tumor volume throughout the course of radiation therapy from pretreatment computed tomography data using general linear models.

    Science.gov (United States)

    Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E

    2014-05-01

    The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracies of these models in predicting daily tumor volumes were compared with those of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
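
    A hedged sketch of the second model type described above (a power-fit relationship between daily and initial tumor volumes) with leave-one-out cross-validation is given below; it is not the authors' implementation, and the volume arrays are hypothetical.

        # Sketch of the "power fit" idea in model (2): daily volume is modeled as a power
        # function of the initial volume, fit in log-log space and evaluated with
        # leave-one-out cross-validation. Data arrays are hypothetical.
        import numpy as np

        def fit_power_model(v0, vt):
            """Fit vt = a * v0**b by ordinary least squares in log-log space."""
            X = np.column_stack([np.ones_like(v0), np.log(v0)])
            coef, *_ = np.linalg.lstsq(X, np.log(vt), rcond=None)
            return np.exp(coef[0]), coef[1]                  # a, b

        def loo_errors(v0, vt):
            errors = []
            for i in range(len(v0)):
                mask = np.arange(len(v0)) != i               # hold out tumor i
                a, b = fit_power_model(v0[mask], vt[mask])
                errors.append(vt[i] - a * v0[i] ** b)
            return np.array(errors)

        v0 = np.array([12.0, 30.0, 8.5, 22.0, 15.0])         # initial volumes (cm^3), hypothetical
        vt = np.array([9.5, 21.0, 7.2, 16.5, 11.8])          # volumes at a later fraction
        print(np.sqrt(np.mean(loo_errors(v0, vt) ** 2)))     # LOO root-mean-square error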

  17. Magnetotelluric inversion for depth-to-basement estimation

    DEFF Research Database (Denmark)

    Cai, Hongzhu; Zhdanov, Michael

    2015-01-01

    The magnetotelluric (MT) method can be effectively applied for depth-to-basement estimation, because there exists a strong contrast in resistivity between a conductive sedimentary basin and a resistive crystalline basement. Conventional inversions of MT data are usually aimed at determining...... the volumetric distribution of the conductivity within the inversion domain. By the nature of the MT method, the recovered distribution of the subsurface conductivity is typically diffusive, which makes it difficult to select the sediment-basement interface. This paper develops a novel approach to 3D MT...... inversion for the depth-to-basement estimate. The key to this approach is selection of the model parameterization with the depth to basement being the major unknown parameter. In order to estimate the depth to the basement, the inversion algorithm recovers both the thickness and the conductivities...

  18. A physical multifield model predicts the development of volume and structure in the human brain

    Science.gov (United States)

    Rooij, Rijk de; Kuhl, Ellen

    2018-03-01

    The prenatal development of the human brain is characterized by a rapid increase in brain volume and a development of a highly folded cortex. At the cellular level, these events are enabled by symmetric and asymmetric cell division in the ventricular regions of the brain followed by an outwards cell migration towards the peripheral regions. The role of mechanics during brain development has been suggested and acknowledged in past decades, but remains insufficiently understood. Here we propose a mechanistic model that couples cell division, cell migration, and brain volume growth to accurately model the developing brain between weeks 10 and 29 of gestation. Our model accurately predicts a 160-fold volume increase from 1.5 cm3 at week 10 to 235 cm3 at week 29 of gestation. In agreement with human brain development, the cortex begins to form around week 22 and accounts for about 30% of the total brain volume at week 29. Our results show that cell division and coupling between cell density and volume growth are essential to accurately model brain volume development, whereas cell migration and diffusion contribute mainly to the development of the cortex. We demonstrate that complex folding patterns, including sinusoidal folds and creases, emerge naturally as the cortex develops, even for low stiffness contrasts between the cortex and subcortex.

  19. Estimating traffic volume on Wyoming low volume roads using linear and logistic regression methods

    Directory of Open Access Journals (Sweden)

    Dick Apronti

    2016-12-01

    Traffic volume is an important parameter in most transportation planning applications. Low volume roads make up about 69% of road miles in the United States. Estimating traffic on low volume roads is a cost-effective alternative to taking traffic counts, because traditional traffic counts are expensive and impractical for low priority roads. The purpose of this paper is to present the development of two alternative means of cost-effectively estimating traffic volumes for low volume roads in Wyoming and to make recommendations for their implementation. The study methodology involved reviewing existing studies, identifying data sources, and carrying out the model development. The utility of the models developed was then verified by comparing actual traffic volumes to those predicted by the models. The study resulted in two regression models that are inexpensive and easy to implement. The first was a linear regression model that utilized pavement type, access to highways, predominant land use types, and population to estimate traffic volume; in verification, an R2 value of 0.64 and a root mean square error of 73.4% were obtained. The second was a logistic regression model that identified the level of traffic on roads using five thresholds or levels; it was verified by estimating traffic volume thresholds and determining the percentage of roads accurately classified as belonging to the given thresholds, which ranged from 79% to 88% across the five thresholds. In conclusion, the verification indicated both model types to be useful for accurate and cost-effective estimation of traffic volumes for low volume Wyoming roads, and the models were recommended for use in traffic volume estimation for low volume roads in pavement management and environmental impact assessment studies.
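
    The two model types described above can be sketched as follows with scikit-learn; the data file and predictor columns (pavement type, land use, highway access, population, volume level) are hypothetical stand-ins for the Wyoming dataset, not the study's actual variables or coding.

        # Sketch of the two model types described (hypothetical columns, not the Wyoming data):
        # (1) linear regression of traffic volume, (2) logistic regression of volume level.
        import pandas as pd
        from sklearn.linear_model import LinearRegression, LogisticRegression

        roads = pd.read_csv("low_volume_roads.csv")      # hypothetical file
        X = pd.get_dummies(roads[["pavement_type", "land_use"]]).join(
            roads[["highway_access", "population"]])

        # (1) Continuous estimate of annual average daily traffic.
        lin = LinearRegression().fit(X, roads["aadt"])

        # (2) Classification into one of five volume thresholds/levels.
        logit = LogisticRegression(max_iter=1000).fit(X, roads["volume_level"])

        print(lin.score(X, roads["aadt"]))               # R^2 on the training data
        print((logit.predict(X) == roads["volume_level"]).mean())  # share classified correctly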

  20. Model construction of nursing service satisfaction in hospitalized tumor patients.

    Science.gov (United States)

    Chen, Yongyi; Liu, Jingshi; Xiao, Shuiyuan; Liu, Xiangyu; Tang, Xinhui; Zhou, Yujuan

    2014-01-01

    This study aims to construct a model of satisfaction with nursing services among hospitalized tumor patients. Using questionnaires, data on hospitalized tumor patients' expectations, quality perception and satisfaction with hospital nursing services were obtained. A satisfaction model for nursing services in hospitalized tumor patients was established through an empirical study using the structural equation method. The model was suitable for a specialized tumor hospital and showed good reliability and validity. Patient satisfaction was significantly affected by quality perception and patient expectation, and patient satisfaction and patient loyalty were also affected by disease pressure. Hospital brand was positively correlated with patient satisfaction and patient loyalty, and negatively correlated with patient complaints. Patient satisfaction was positively correlated with patient loyalty, patient complaints, and quality perception, and negatively correlated with disease pressure and patient expectation. The satisfaction model for nursing services in hospitalized tumor patients fits well and may be used to improve the quality of hospital nursing care.

  1. Affordability Funding Models for Early Childhood Services

    Science.gov (United States)

    Purcal, Christiane; Fisher, Karen

    2006-01-01

    This paper presents a model of the approaches open to government to ensure that early childhood services are affordable to families. We derived the model from a comparative literature review of affordability approaches taken by government, both in Australia and internationally. The model adds significantly to the literature by proposing a means to…

  2. Volume-weighted particle-tracking method for solute-transport modeling; Implementation in MODFLOW–GWT

    Science.gov (United States)

    Winston, Richard B.; Konikow, Leonard F.; Hornberger, George Z.

    2018-02-16

    In the traditional method of characteristics for groundwater solute-transport models, advective transport is represented by moving particles that track concentration. This approach can lead to global mass-balance problems because in models of aquifers having complex boundary conditions and heterogeneous properties, particles can originate in cells having different pore volumes and (or) be introduced (or removed) at cells representing fluid sources (or sinks) of varying strengths. Use of volume-weighted particles means that each particle tracks solute mass. In source or sink cells, the changes in particle weights will match the volume of water added or removed through external fluxes. This enables the new method to conserve mass in source or sink cells as well as globally. This approach also leads to potential efficiencies by allowing the number of particles per cell to vary spatially—using more particles where concentration gradients are high and fewer where gradients are low. The approach also eliminates the need for the model user to have to distinguish between “weak” and “strong” fluid source (or sink) cells. The new model determines whether solute mass added by fluid sources in a cell should be represented by (1) new particles having weights representing appropriate fractions of the volume of water added by the source, or (2) distributing the solute mass added over all particles already in the source cell. The first option is more appropriate for the condition of a strong source; the latter option is more appropriate for a weak source. At sinks, decisions whether or not to remove a particle are replaced by a reduction in particle weight in proportion to the volume of water removed. A number of test cases demonstrate that the new method works well and conserves mass. The method is incorporated into a new version of the U.S. Geological Survey’s MODFLOW–GWT solute-transport model.
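
    A toy sketch of the weight bookkeeping described above: at a sink cell, each particle's solute mass is reduced in proportion to the fraction of the cell's water volume removed, so the mass leaving the cell plus the mass remaining equals the original total. This is an illustration only, not the MODFLOW–GWT code.

        # Toy illustration (not the MODFLOW-GWT implementation) of the weight bookkeeping at
        # a sink: instead of removing whole particles, each particle's solute mass is reduced
        # in proportion to the fraction of the cell's water volume removed by the sink.
        def apply_sink(particle_masses, cell_pore_volume, sink_volume):
            """Return updated particle masses and the solute mass removed by the sink."""
            fraction_removed = sink_volume / cell_pore_volume
            removed = [m * fraction_removed for m in particle_masses]
            remaining = [m - r for m, r in zip(particle_masses, removed)]
            return remaining, sum(removed)

        masses = [2.0, 1.5, 0.5]                 # solute mass carried by particles in one cell (g)
        remaining, out_mass = apply_sink(masses, cell_pore_volume=10.0, sink_volume=2.5)
        print(remaining, out_mass)               # mass remaining + mass removed == original total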

  3. Depth information in natural environments derived from optic flow by insect motion detection system: A model analysis

    Directory of Open Access Journals (Sweden)

    Alexander eSchwegmann

    2014-08-01

    Knowing the depth structure of the environment is crucial for moving animals in many behavioral contexts, such as collision avoidance, targeting objects, or spatial navigation. An important source of depth information is motion parallax. This powerful cue is generated on the eyes during translatory self-motion with the retinal images of nearby objects moving faster than those of distant ones. To investigate how the visual motion pathway represents motion-based depth information we analyzed its responses to image sequences recorded in natural cluttered environments with a wide range of depth structures. The analysis was done on the basis of an experimentally validated model of the visual motion pathway of insects, with its core elements being correlation-type elementary motion detectors (EMDs). It is the key result of our analysis that the absolute EMD responses, i.e. the motion energy profile, represent the contrast-weighted nearness of environmental structures during translatory self-motion at a roughly constant velocity. In other words, the output of the EMD array highlights contours of nearby objects. This conclusion is largely independent of the scale over which EMDs are spatially pooled and was corroborated by scrutinizing the motion energy profile after eliminating the depth structure from the natural image sequences. Hence, the well-established dependence of correlation-type EMDs on both velocity and textural properties of motion stimuli appears to be advantageous for representing behaviorally relevant information about the environment in a computationally parsimonious way.
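
    A minimal numerical sketch of a correlation-type (Hassenstein–Reichardt) elementary motion detector, the core element mentioned above, is shown below; the delay, spacing and stimulus are illustrative choices, not the parameters of the validated insect model.

        # Minimal correlation-type (Hassenstein-Reichardt) EMD sketch: each detector
        # correlates the delayed signal of one photoreceptor with the undelayed signal of
        # its neighbour, and subtracts the mirror-symmetric term. Parameters are illustrative.
        import numpy as np

        def emd_responses(frames, delay=1, spacing=1):
            """frames: (T, N) array of luminance at N retinal positions over T time steps."""
            now = frames[delay:]                    # undelayed signals
            past = frames[:-delay]                  # delayed signals (low-pass approximated by a shift)
            left, right = slice(None, -spacing), slice(spacing, None)
            # delayed-left x right  minus  delayed-right x left
            return past[:, left] * now[:, right] - past[:, right] * now[:, left]

        # A bright point drifting rightward produces positive responses; the absolute
        # response (motion energy) grows with contrast and apparent speed, i.e. with the
        # nearness of the structure during translation.
        T, N = 20, 40
        frames = np.zeros((T, N))
        for t in range(T):
            frames[t, (5 + t) % N] = 1.0            # moving bright point
        print(np.abs(emd_responses(frames)).sum())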

  4. Supporting Collaborative Model and Data Service Development and Deployment with DevOps

    Science.gov (United States)

    David, O.

    2016-12-01

    Adopting DevOps practices for model service development and deployment enables a community to engage in service-oriented modeling and data management. The Cloud Services Integration Platform (CSIP), developed over the last 5 years at Colorado State University, provides for collaborative integration of environmental models into scalable model and data services as a micro-services platform with API and deployment infrastructure. Originally developed to support USDA natural resource applications, it proved suitable for a wider range of applications in the environmental modeling domain. While extending its scope and visibility, it became apparent that community integration and adequate workflow support throughout the full model development and application cycle drove successful outcomes. DevOps provides best practices, tools, and organizational structures to optimize the transition from model service development to deployment by minimizing the (i) operational burden and (ii) turnaround time for modelers. We have developed and implemented a methodology to fully automate a suite of applications for application lifecycle management, version control, continuous integration, container management, and container scaling to enable model and data service developers in various institutions to collaboratively build, run, deploy, test, and scale services within minutes. To date more than 160 model and data services are available for applications in hydrology (PRMS, Hydrotools, CFA, ESP), water and wind erosion prediction (WEPP, WEPS, RUSLE2), soil quality trends (SCI, STIR), water quality analysis (SWAT-CP, WQM, CFA, AgES-W), stream degradation assessment (SWAT-DEG), hydraulics (cross-section), and grazing management (GRAS). In addition, supporting data services include soil (SSURGO), ecological site (ESIS), climate (CLIGEN, WINDGEN), land management and crop rotations (LMOD), and pesticides (WQM), developed using this workflow automation and decentralized governance.

  5. Exclusive queueing model including the choice of service windows

    Science.gov (United States)

    Tanaka, Masahiro; Yanagisawa, Daichi; Nishinari, Katsuhiro

    2018-01-01

    In a queueing system involving multiple service windows, choice behavior is a significant concern. This paper incorporates the choice of service windows into a queueing model with a floor represented by discrete cells. We contrived a logit-based choice algorithm for agents considering the numbers of agents and the distances to all service windows. Simulations were conducted with various parameters of agent choice preference for these two elements and for different floor configurations, including the floor length and the number of service windows. We investigated the model from the viewpoint of transit times and entrance block rates. The influences of the parameters on these factors were surveyed in detail and we determined that there are optimum floor lengths that minimize the transit times. In addition, we observed that the transit times were determined almost entirely by the entrance block rates. The results of the presented model are relevant to understanding queueing systems including the choice of service windows and can be employed to optimize facility design and floor management.
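
    A small sketch of a logit-based window choice rule of the kind described above: each agent weighs queue lengths and distances and selects a window with softmax probabilities. The preference weights are hypothetical parameters, analogous to those varied in the simulations.

        # Sketch of a logit-based window choice: each agent weighs queue length and distance,
        # and picks window i with probability proportional to exp(utility_i).
        # The preference weights correspond to the parameters varied in the simulations.
        import numpy as np

        def window_choice_probabilities(queue_lengths, distances, w_queue=0.5, w_dist=0.2):
            utility = -w_queue * np.asarray(queue_lengths) - w_dist * np.asarray(distances)
            exp_u = np.exp(utility - utility.max())       # numerically stabilized softmax
            return exp_u / exp_u.sum()

        # Three service windows: the nearer but more crowded window competes with farther ones.
        print(window_choice_probabilities([6, 2, 3], [4.0, 10.0, 12.0]))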

  6. A review of international pharmacy-based minor ailment services and proposed service design model.

    Science.gov (United States)

    Aly, Mariyam; García-Cárdenas, Victoria; Williams, Kylie; Benrimoj, Shalom I

    2018-01-05

    The need to consider sustainable healthcare solutions is essential. An innovative strategy used to promote minor ailment care is the utilisation of community pharmacists to deliver minor ailment services (MASs). Promoting higher levels of self-care can potentially reduce the strain on existing resources. To explore the features of international MASs, including their similarities and differences, and consider the essential elements to design a MAS model. A grey literature search strategy was completed in June 2017 to comply with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses standard. This included (1) Google/Yahoo! search engines, (2) targeted websites, and (3) contact with commissioning organisations. Executive summaries, table of contents and title pages of documents were reviewed. Key characteristics of MASs were extracted and a MAS model was developed. A total of 147 publications were included in the review. Key service elements identified included eligibility, accessibility, staff involvement, reimbursement systems. Several factors need to be considered when designing a MAS model; including contextualisation of MAS to the market. Stakeholder engagement, service planning, governance, implementation and review have emerged as key aspects involved with a design model. MASs differ in their structural parameters. Consideration of these parameters is necessary when devising MAS aims and assessing outcomes to promote sustainability and success of the service. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Absorption and scattering coefficient dependence of laser-Doppler flowmetry models for large tissue volumes

    International Nuclear Information System (INIS)

    Binzoni, T; Leung, T S; Ruefenacht, D; Delpy, D T

    2006-01-01

    Based on quasi-elastic scattering theory (and random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated, for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood flow related parameters. It is also demonstrated that the 'classical' blood volume parameter, derived from LDF instruments, may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware

  8. Depth-profiling using X-ray photoelectron spectroscopy

    International Nuclear Information System (INIS)

    Pijolat, M.; Hollinger, G.

    1980-12-01

    The possibilities of X-ray photoelectron spectroscopy (or ESCA) for depth-profiling at shallow depths (approximately 10-100 Å) have been studied. The method of ion-sputtering removal was first investigated in order to improve its depth resolution (approximately 50-150 Å). A procedure which eliminates the effects of the resolution function of the instrumental probe (analysed depth approximately 50 Å) has been established; but it is not yet sufficient, and the sputter broadening due to ion-induced damage must be taken into account (broadening function approximately 50 Å for approximately 150 Å of removal). Because of serious difficulties in estimating the broadening function, an alternative is to develop non-destructive methods, so a new method based on the dependence of the analysed depth on the electron emission angle is presented. The extraction of the concentration profile from angular distribution experiments is achieved, in the framework of a flat-layer model, by minimizing the difference between theoretical and experimental relative intensities. The applicability and limitations of the method are discussed on the basis of computer simulation results. The depth probed is of the order of 3 lambda (lambda being the inelastic mean free path, typically 10-20 Å) and the depth resolution is of the order of lambda/3. [fr]

  9. Estimating Stand Volume and Above-Ground Biomass of Urban Forests Using LiDAR

    Directory of Open Access Journals (Sweden)

    Vincenzo Giannico

    2016-04-01

    Assessing forest stand conditions in urban and peri-urban areas is essential to support ecosystem service planning and management, as most of the ecosystem services provided are a consequence of forest stand characteristics. However, collecting data for assessing forest stand conditions is time consuming and labor intensive. A plausible approach for addressing this issue is to establish a relationship between in situ measurements of stand characteristics and data from airborne laser scanning (LiDAR). In this study we assessed forest stand volume and above-ground biomass (AGB) in a broadleaved urban forest, using a combination of LiDAR-derived metrics, which takes the form of a forest allometric model. We tested various methods for extracting proxies of basal area (BA) and mean stand height (H) from the LiDAR point-cloud distribution and evaluated the performance of different models in estimating forest stand volume and AGB. The best predictors for both models were the scale parameter of the Weibull distribution of all returns except the first (a proxy of BA) and the 95th percentile of the distribution of all first returns (a proxy of H). The R2 values were 0.81 (p < 0.01) for the stand volume model and 0.77 (p < 0.01) for the AGB model, with RMSEs of 23.66 m3·ha−1 (23.3%) and 19.59 Mg·ha−1 (23.9%), respectively. We found that a combination of two LiDAR-derived variables (i.e., the proxy of BA and the proxy of H), which takes the form of a forest allometric model, can be used to estimate stand volume and above-ground biomass in broadleaved urban forest areas. Our results can be compared to other studies conducted using LiDAR in broadleaved forests with similar methods.

  10. A Depth Video-based Human Detection and Activity Recognition using Multi-features and Embedded Hidden Markov Models for Health Care Monitoring Systems

    Directory of Open Access Journals (Sweden)

    Ahmad Jalal

    2017-08-01

    Full Text Available The growing number of elderly people living independently requires special care in the form of healthcare monitoring systems. Recent advancements in depth video technologies have made human activity recognition (HAR) realizable for elderly healthcare applications. In this paper, a novel depth video-based method for HAR is presented, using robust multi-features and embedded Hidden Markov Models (HMMs) to recognize the daily life activities of elderly people living alone in indoor environments such as smart homes. In the proposed HAR framework, depth maps are first analyzed by a temporal motion identification method to segment human silhouettes from the noisy background, and the depth silhouette area is computed for each activity to track human movements in a scene. Several representative features, including invariant, multi-view differentiation and spatiotemporal body-joint features, are fused together to capture gradient orientation change, intensity differentiation, temporal variation and local motion of specific body parts. These features are then processed according to the dynamics of their respective class and learned, modeled, trained and recognized with a specific embedded HMM having active feature values. Furthermore, we construct a new online human activity dataset with a depth sensor to evaluate the proposed features. Our experiments on three depth datasets demonstrate that the proposed multi-features are efficient and robust compared with state-of-the-art features for human action and activity recognition.
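
    A per-class HMM classifier of the kind described above (one model per activity, decision by maximum log-likelihood) can be sketched with the hmmlearn package. The feature matrices, model sizes and class names below are placeholders, and this plain GaussianHMM stands in for the paper's embedded HMMs:

        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(0)
        # Placeholder training data: per-frame feature vectors for two activities.
        train = {"walking": rng.normal(0.0, 1.0, size=(200, 6)),
                 "sitting": rng.normal(2.0, 1.0, size=(200, 6))}

        # Train one HMM per activity class.
        models = {}
        for activity, feats in train.items():
            m = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
            m.fit(feats)
            models[activity] = m

        # Classify a new feature sequence by the highest log-likelihood.
        test_seq = rng.normal(1.9, 1.0, size=(50, 6))
        scores = {a: m.score(test_seq) for a, m in models.items()}
        print("predicted activity:", max(scores, key=scores.get))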

  11. Scheimpflug with computational imaging to extend the depth of field of iris recognition systems

    Science.gov (United States)

    Sinharoy, Indranil

    Despite the enormous success of iris recognition in close-range and well-regulated spaces for biometric authentication, it has hitherto failed to gain wide-scale adoption in less controlled, public environments. The problem arises from a limitation in imaging called the depth of field (DOF): the limited range of distances beyond which subjects appear blurry in the image. The loss of spatial details in the iris image outside the small DOF limits the iris image capture to a small volume-the capture volume. Existing techniques to extend the capture volume are usually expensive, computationally intensive, or afflicted by noise. Is there a way to combine the classical Scheimpflug principle with the modern computational imaging techniques to extend the capture volume? The solution we found is, surprisingly, simple; yet, it provides several key advantages over existing approaches. Our method, called Angular Focus Stacking (AFS), consists of capturing a set of images while rotating the lens, followed by registration, and blending of the in-focus regions from the images in the stack. The theoretical underpinnings of AFS arose from a pair of new and general imaging models we developed for Scheimpflug imaging that directly incorporates the pupil parameters. The model revealed that we could register the images in the stack analytically if we pivot the lens at the center of its entrance pupil, rendering the registration process exact. Additionally, we found that a specific lens design further reduces the complexity of image registration making AFS suitable for real-time performance. We have demonstrated up to an order of magnitude improvement in the axial capture volume over conventional image capture without sacrificing optical resolution and signal-to-noise ratio. The total time required for capturing the set of images for AFS is less than the time needed for a single-exposure, conventional image for the same DOF and brightness level. The net reduction in capture time can
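
    The blending step of a focus-stacking pipeline like the one described above (applied after registration) can be sketched as choosing, per pixel, the frame with the highest local sharpness. The window size and the random stack standing in for registered captures are illustrative only; this is not the authors' AFS implementation:

        import numpy as np
        from scipy.ndimage import laplace, uniform_filter

        def blend_focus_stack(stack, window=9):
            """Blend a registered focus stack (n_frames, H, W) by per-pixel sharpness.

            Sharpness is the local mean of the squared Laplacian; each output pixel
            is taken from the frame where that measure is largest.
            """
            sharpness = np.stack([uniform_filter(laplace(frame) ** 2, size=window)
                                  for frame in stack])
            best = np.argmax(sharpness, axis=0)          # (H, W) winning frame index
            rows, cols = np.indices(best.shape)
            return stack[best, rows, cols]

        stack = np.random.rand(5, 128, 128)              # stand-in for registered frames
        fused = blend_focus_stack(stack)
        print(fused.shape)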

  12. MR determination of neonatal spinal canal depth.

    Science.gov (United States)

    Arthurs, Owen; Thayyil, Sudhin; Wade, Angie; Chong, W K Kling; Sebire, Neil J; Taylor, Andrew M

    2012-08-01

    Lumbar punctures (LPs) are frequently performed in neonates and often result in traumatic haemorrhagic taps. Knowledge of the distance from the skin to the middle of the spinal canal (mid-spinal canal depth - MSCD) may reduce the incidence of traumatic taps, but there is little data in extremely premature or low birth weight neonates. Here, we determined the spinal canal depth at post-mortem in perinatal deaths using magnetic resonance imaging (MRI). Spinal canal depth was measured in 78 post-mortem foetuses and perinatal cases (mean gestation 26 weeks; mean weight 1.04kg) at the L3/L4 inter-vertebral space at post-mortem MRI. Both anterior (ASCD) and posterior (PSCD) spinal canal depth were measured; MSCD was calculated and modelled against weight and gestational age. ASCD and PSCD (mm) correlated significantly with weight and gestational age (all r>0.8). A simple linear model MSCD (mm)=3×Weight (kg)+5 was the best fit, identifying an SCD value within the correct range for 87.2% (68/78) (95% CI (78.0, 92.9%)) cases. Gestational age did not add significantly to the predictive value of the model. There is a significant correlation between MSCD and body weight at post-mortem MRI in foetuses and perinatal deaths. If this association holds in preterm neonates, use of the formula MSCD (mm)=3×Weight (kg)+5 could result in fewer traumatic LPs in this population. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
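
    The reported linear model is simple enough to apply directly; a minimal helper (illustrative only, and only meaningful within the weight range studied) is:

        def mid_spinal_canal_depth_mm(weight_kg):
            """MSCD (mm) = 3 x weight (kg) + 5, the best-fit model reported above."""
            return 3.0 * weight_kg + 5.0

        # Example: at the cohort's mean weight of 1.04 kg the model predicts ~8.1 mm.
        print(round(mid_spinal_canal_depth_mm(1.04), 1))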

  13. MR determination of neonatal spinal canal depth

    Energy Technology Data Exchange (ETDEWEB)

    Arthurs, Owen, E-mail: owenarthurs@uk2.net [Centre for Cardiovascular MR, Great Ormond Street Hospital for Children, London WC1N 3JH (United Kingdom); Thayyil, Sudhin, E-mail: s.thayyil@ucl.ac.uk [Academic Neonatology, Institute for Women' s Health, London WC1E 6AU (United Kingdom); Wade, Angie, E-mail: a.wade@ucl.ac.uk [Centre for Paediatric Epidemiology and Biostatistics, UCL Institute of Child Health, London (United Kingdom); Chong, W.K., E-mail: Kling.Chong@gosh.nhs.uk [Paediatric Neuroradiology, Great Ormond Street Hospital for Children, London (United Kingdom); Sebire, Neil J., E-mail: Neil.Sebire@gosh.nhs.uk [Histopathology, Great Ormond Street Hospital for Children, London WC1E 6AU (United Kingdom); Taylor, Andrew M., E-mail: a.taylor76@ucl.ac.uk [Centre for Cardiovascular MR, Cardiorespiratory Unit, Great Ormond Street Hospital for Children and UCL Institute of Cardiovascular Science, London WC1E 6AU (United Kingdom)

    2012-08-15

    Objectives: Lumbar punctures (LPs) are frequently performed in neonates and often result in traumatic haemorrhagic taps. Knowledge of the distance from the skin to the middle of the spinal canal (mid-spinal canal depth - MSCD) may reduce the incidence of traumatic taps, but there is little data in extremely premature or low birth weight neonates. Here, we determined the spinal canal depth at post-mortem in perinatal deaths using magnetic resonance imaging (MRI). Patients and methods: Spinal canal depth was measured in 78 post-mortem foetuses and perinatal cases (mean gestation 26 weeks; mean weight 1.04 kg) at the L3/L4 inter-vertebral space at post-mortem MRI. Both anterior (ASCD) and posterior (PSCD) spinal canal depth were measured; MSCD was calculated and modelled against weight and gestational age. Results: ASCD and PSCD (mm) correlated significantly with weight and gestational age (all r > 0.8). A simple linear model MSCD (mm) = 3 × Weight (kg) + 5 was the best fit, identifying an SCD value within the correct range for 87.2% (68/78) (95% CI (78.0, 92.9%)) cases. Gestational age did not add significantly to the predictive value of the model. Conclusion: There is a significant correlation between MSCD and body weight at post-mortem MRI in foetuses and perinatal deaths. If this association holds in preterm neonates, use of the formula MSCD (mm) = 3 × Weight (kg) + 5 could result in fewer traumatic LPs in this population.

  14. How the Kano model contributes to Kansei engineering in services.

    Science.gov (United States)

    Hartono, Markus; Chuan, Tan Kay

    2011-11-01

    Recent studies show that products and services hold great appeal if they are attractively designed to elicit emotional feelings from customers. Kansei engineering (KE) has good potential to provide a competitive advantage to those able to read and translate customer affect and emotion in actual product and services. This study introduces an integrative framework of the Kano model and KE, applied to services. The Kano model was used and inserted into KE to exhibit the relationship between service attribute performance and customer emotional response. Essentially, the Kano model categorises service attribute quality into three major groups (must-be [M], one-dimensional [O] and attractive [A]). The findings of a case study that involved 100 tourists who stayed in luxury 4- and 5-star hotels are presented. As a practical matter, this research provides insight on which service attributes deserve more attention with regard to their significant impact on customer emotional needs. STATEMENT OF RELEVANCE: Apart from cognitive evaluation, emotions and hedonism play a big role in service encounters. Through a focus on delighting qualities of service attributes, this research enables service providers and managers to establish the extent to which they prioritise their improvement efforts and to always satisfy their customer emotions beyond expectation.

  15. Urology Group Compensation and Ancillary Service Models in an Era of Value-based Care.

    Science.gov (United States)

    Shore, Neal D; Jacoby, Dana

    2016-01-01

    Changes involving the health care economic landscape have affected physicians' workflow, productivity, compensation structures, and culture. Ongoing Federal legislation regarding regulatory documentation and imminent payment-changing methodologies have encouraged physician consolidation into larger practices, creating affiliations with hospitals, multidisciplinary medical specialties, and integrated delivery networks. As subspecialization and evolution of care models have accelerated, independent medical groups have broadened ancillary service lines by investing in enterprises that compete with hospital-based (academic and nonacademic) entities, as well as non-physician- owned multispecialty enterprises, for both outpatient and inpatient services. The looming and dramatic shift from volume- to value-based health care compensation will assuredly affect urology group compensation arrangements and productivity formulae. For groups that can implement change rapidly, efficiently, and harmoniously, there will be opportunities to achieve the Triple Aim goals of the Patient Protection and Affordable Care Act, while maintaining a successful medical-financial practice. In summary, implementing new payment algorithms alongside comprehensive care coordination will assist urology groups in addressing the health economic cost and quality challenges that have been historically encountered with fee-for-service systems. Urology group leadership and stakeholders will need to adjust internal processes, methods of care coordination, cultural dependency, and organizational structures in order to create better systems of care and management. In response, ancillary services and patient throughput will need to evolve in order to adequately align quality measurement and reporting systems across provider footprints and patient populations.

  16. Wind-wave amplification mechanisms: possible models for steep wave events in finite depth

    Directory of Open Access Journals (Sweden)

    P. Montalvo

    2013-11-01

    Full Text Available We extend the Miles mechanism of wind-wave generation to finite depth. A β-Miles linear growth rate depending on the depth and wind velocity is derived and allows the study of linear growth rates of surface waves from weak to moderate winds in finite depth h. The evolution of β is plotted, for several values of the dispersion parameter kh with k the wave number. For constant depths we find that no matter what the values of wind velocities are, at small enough wave age the β-Miles linear growth rates are in the known deep-water limit. However winds of moderate intensities prevent the waves from growing beyond a critical wave age, which is also constrained by the water depth and is less than the wave age limit of deep water. Depending on wave age and wind velocity, the Jeffreys and Miles mechanisms are compared to determine which of them dominates. A wind-forced nonlinear Schrödinger equation is derived and the Akhmediev, Peregrine and Kuznetsov–Ma breather solutions for weak wind inputs in finite depth h are obtained.

  17. A Review on Broker Based Cloud Service Model

    Directory of Open Access Journals (Sweden)

    Nagarajan Rajganesh

    2016-09-01

    Full Text Available Cloud computing emerged as a utility-oriented form of computing that facilitates resource sharing under a pay-as-you-go model. Nowadays, cloud offerings are not limited to a fixed range of services, and anything can be shared as a service through the Internet. In this work, a detailed literature survey of cloud service discovery and composition is presented. A proposed architecture that includes a cloud broker is described. It highlights the importance of suitable service selection and ranking in fulfilling the customer's service requirements. The proposed cloud broker employs techniques such as reasoning and decision-making capabilities for improved cloud service selection and composition.

  18. Improving traffic signal management and operations : a basic service model.

    Science.gov (United States)

    2009-12-01

    This report provides a guide for achieving a basic service model for traffic signal management and operations. The basic service model is based on simply stated and defensible operational objectives that consider the staffing level, expertise and...

  19. Comparing consumer-directed and agency models for providing supportive services at home.

    Science.gov (United States)

    Benjamin, A E; Matthias, R; Franke, T M

    2000-04-01

    To examine the service experiences and outcomes of low-income Medicaid beneficiaries with disabilities under two different models for organizing home-based personal assistance services: agency-directed and consumer-directed. A survey of a random sample of 1,095 clients, age 18 and over, who receive services in California's In-Home Supportive Services (IHSS) program funded primarily by Medicaid. Other data were obtained from the California Management and Payrolling System (CMIPS). The sample was stratified by service model (agency-directed or consumer-directed), client age (over or under age 65), and severity. Data were collected on client demographics, condition/functional status, and supportive service experience. Outcome measures were developed in three areas: safety, unmet need, and service satisfaction. Factor analysis was used to reduce multiple outcome measures to nine dimensions. Multiple regression analysis was used to assess the effect of service model on each outcome dimension, taking into account the client-provider relationship, client demographics, and case mix. Recipients of IHSS services as of mid-1996 were interviewed by telephone. The survey was conducted in late 1996 and early 1997. On various outcomes, recipients in the consumer-directed model report more positive outcomes than those in the agency model, or they report no difference. Statistically significant differences emerge on recipient safety, unmet needs, and service satisfaction. A family member present as a paid provider is also associated with more positive reported outcomes within the consumer-directed model, but model differences persist even when this is taken into account. Although both models have strengths and weaknesses, from a recipient perspective the consumer-directed model is associated with more positive outcomes. Although health professionals have expressed concerns about the capacity of consumer direction to assure quality, particularly with respect to safety, meeting unmet

  20. Continuous modelling study of numerical volumes - Applications to the visualization of anatomical structures

    International Nuclear Information System (INIS)

    Goret, C.

    1990-12-01

    Several imaging techniques (MRI, image scanners, tomoscintigraphy, echography) provide numerical information in the form of a stack of parallel cross-sectional images. For many years, 3-D mathematical tools have been developed that allow the synthesis of 3-D surface images. In the first part, we describe techniques for exploiting numerical volumes and their medical applications to diagnosis and therapy. The second part concerns a continuous modelling of the volume with a tensor product of cubic splines. We study the characteristics of this representation and its clinical validation. Finally, we treat the problem of surface visualization of objects contained in the volume. The results show the interest of this model and allow specifications for a 3-D workstation to be proposed [fr
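
    Continuous evaluation of a stack of parallel slices at arbitrary positions, in the spirit of the tensor-product cubic-spline model above, can be sketched with SciPy's spline-based resampling. The synthetic volume and query points are placeholders:

        import numpy as np
        from scipy.ndimage import map_coordinates

        # Synthetic volume standing in for a stack of parallel cross-sectional
        # images (axes: slice index z, then row y and column x).
        vol = np.random.rand(20, 64, 64)

        # Evaluate the volume at arbitrary (z, y, x) positions with cubic
        # B-splines (order=3), i.e. tensor-product spline interpolation.
        coords = np.array([[5.3, 10.7, 12.1],      # z of three query points
                           [2.0, 31.5, 8.25],      # y of three query points
                           [17.75, 40.25, 55.5]])  # x of three query points
        print(map_coordinates(vol, coords, order=3, mode="nearest"))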

  1. Primary School Pre-Service Mathematics Teachers' Views on Mathematical Modeling

    Science.gov (United States)

    Karali, Diren; Durmus, Soner

    2015-01-01

    The current study aimed to identify the views of pre-service teachers, who attended a primary school mathematics teaching department but did not take mathematical modeling courses. The mathematical modeling activity used by the pre-service teachers was developed with regards to the modeling activities utilized by Lesh and Doerr (2003) in their…

  2. Glass Property Data and Models for Estimating High-Level Waste Glass Volume

    Energy Technology Data Exchange (ETDEWEB)

    Vienna, John D.; Fluegel, Alexander; Kim, Dong-Sang; Hrma, Pavel R.

    2009-10-05

    This report describes recent efforts to develop glass property models that can be used to help estimate the volume of high-level waste (HLW) glass that will result from vitrification of Hanford tank waste. The compositions of acceptable and processable HLW glasses need to be optimized to minimize the waste-form volume and, hence, to save cost. A database of properties and associated compositions for simulated waste glasses was collected for developing property-composition models. This database, although not comprehensive, represents a large fraction of data on waste-glass compositions and properties that were available at the time of this report. Glass property-composition models were fit to subsets of the database for several key glass properties. These models apply to a significantly broader composition space than those previously published. These models should be considered for interim use in calculating properties of Hanford waste glasses.

  3. Glass Property Data and Models for Estimating High-Level Waste Glass Volume

    International Nuclear Information System (INIS)

    Vienna, John D.; Fluegel, Alexander; Kim, Dong-Sang; Hrma, Pavel R.

    2009-01-01

    This report describes recent efforts to develop glass property models that can be used to help estimate the volume of high-level waste (HLW) glass that will result from vitrification of Hanford tank waste. The compositions of acceptable and processable HLW glasses need to be optimized to minimize the waste-form volume and, hence, to save cost. A database of properties and associated compositions for simulated waste glasses was collected for developing property-composition models. This database, although not comprehensive, represents a large fraction of data on waste-glass compositions and properties that were available at the time of this report. Glass property-composition models were fit to subsets of the database for several key glass properties. These models apply to a significantly broader composition space than those previously published. These models should be considered for interim use in calculating properties of Hanford waste glasses.

  4. Development of a hip joint model for finite volume simulations.

    Science.gov (United States)

    Cardiff, P; Karač, A; FitzPatrick, D; Ivanković, A

    2014-01-01

    This paper establishes a procedure for numerical analysis of a hip joint using the finite volume method. Patient-specific hip joint geometry is segmented directly from computed tomography and magnetic resonance imaging datasets and the resulting bone surfaces are processed into a form suitable for volume meshing. A high resolution continuum tetrahedral mesh has been generated, where a sandwich model approach is adopted; the bones are represented as stiffer cortical shells surrounding more flexible cancellous cores. Cartilage is included as a uniform-thickness extruded layer and the effect of layer thickness is investigated. To realistically position the bones, gait analysis has been performed giving the 3D positions of the bones for the full gait cycle. Three phases of the gait cycle are examined using a finite volume based custom structural contact solver implemented in the open-source software OpenFOAM.

  5. Development of a Comprehensive Energy Service Business Model

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S.K. [Korea Energy Economics Institute, Euiwang (Korea)

    2001-11-01

    Traditionally, energy industry has been regarded as supply- oriented and characterized by its vertically integrated structure and monopolized market. In particular, network industries such as electricity, natural gas, and district heating and cooling, because of their large initial capital investments, were inevitably state-owned. As their sizes have ever increased, government-owned corporations are confronted with internal crises such as financial limitations, increasing internal transaction costs, etc. In addition to these internal problems, fundamental changes in the external environment such as advances in communication and modular technologies, globalization, and market liberalization have forced energy industries to undergo a restructuring process. Restructuring in the energy industry is intended to introduce competition in the market by unbundling the energy service into production, transportation, and distribution. The energy service has been vertically as well as horizontally integrated by suppliers. Restructuring, which has been implemented in the United States and the European countries, is now being introduced in Korea. However, energy services have not yet been regulated as a separate industry even in those countries which are well advanced in their restructuring. WTO negotiation is under way to separate the service sector in 'General Agreement on Tariffs and Trade(GATT).' Areas that can be categorized as a service sector are 17 subdivided sub-sectors as dispersed in each sector. Energy service business models expected to be emerged in the restructuring process of energy industry include: transmission/trunk pipeline projects, distribution/local pipeline projects, trading, brokerage, metering service, local distribution of natural gas, to name a few, for electricity and natural gas sectors; rational use of energy such as CES business and ESCOs for district heating and cooling, and energy conservation sector. Rational use of energy, in

  6. OPTIMUM VOLUME OF BANK RESERVE: FORECASTING OF OVERDUE CREDIT INDEBTEDNESS USING COPULA MODELS

    Directory of Open Access Journals (Sweden)

    Kazakova K. A.

    2015-12-01

    Full Text Available The article proposes applying RLUF-copulas to construct joint distributions of overdue credit indebtedness ranks and macroeconomic indicators, both for forecasting indebtedness and for determining the optimum volume of reserve requirements for the corresponding losses. The study compares multivariate distributions estimated with the RLUF-copula against such classical copulas as the FGM copula, Frank's copula and the Gaussian copula. The method of maximum likelihood is used to obtain estimates of the model parameters; for the RLUF-copula, Bayesian estimates of the parameters are obtained using a Metropolis algorithm with random volatility. Bank reserve volumes are forecast for all fitted models by generating random samples from the joint distribution with an acceptance-rejection algorithm applied to the copula density function. From one hundred simulated scenarios of indebtedness volumes, a 95% confidence level for the possible volume of credit indebtedness is obtained, which can serve as the optimum volume of reserve requirements for the corresponding credit losses.
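
    As a stand-in for the RLUF-copula above (whose form is not given here), a plain Gaussian-copula sketch shows the mechanics: fit a dependence parameter from rank correlation, simulate joint scenarios of indebtedness and a macroeconomic indicator, and read the candidate reserve off the 95% quantile. All figures are placeholders:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        # Placeholder history: overdue indebtedness (currency units) and a macro indicator.
        overdue = np.array([12., 15., 11., 18., 22., 19., 25., 24., 28., 31.])
        macro = np.array([5.0, 5.2, 4.9, 5.8, 6.1, 6.0, 6.7, 6.5, 7.0, 7.4])

        # Gaussian-copula dependence parameter from the Spearman rank correlation.
        rho_s, _ = stats.spearmanr(overdue, macro)
        rho = 2.0 * np.sin(np.pi * rho_s / 6.0)

        # Simulate joint scenarios: correlated normals -> uniforms -> empirical marginals.
        cov = np.array([[1.0, rho], [rho, 1.0]])
        z = rng.multivariate_normal([0.0, 0.0], cov, size=100)
        u = stats.norm.cdf(z)
        overdue_sim = np.quantile(overdue, u[:, 0])

        # 95% quantile of simulated indebtedness as the candidate reserve volume.
        print("reserve requirement ~", round(float(np.quantile(overdue_sim, 0.95)), 1))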

  7. Understanding the relationship between the Centers for Medicare and Medicaid Services' Hospital Compare star rating, surgical case volume, and short-term outcomes after major cancer surgery.

    Science.gov (United States)

    Kaye, Deborah R; Norton, Edward C; Ellimoottil, Chad; Ye, Zaojun; Dupree, James M; Herrel, Lindsey A; Miller, David C

    2017-11-01

    Both the Centers for Medicare and Medicaid Services' (CMS) Hospital Compare star rating and surgical case volume have been publicized as metrics that can help patients to identify high-quality hospitals for complex care such as cancer surgery. The current study evaluates the relationship between the CMS' star rating, surgical volume, and short-term outcomes after major cancer surgery. National Medicare data were used to evaluate the relationship between hospital star ratings and cancer surgery volume quintiles. Then, multilevel logistic regression models were fit to examine the association between cancer surgery outcomes and both star rankings and surgical volumes. Lastly, a graphical approach was used to compare how well star ratings and surgical volume predicted cancer surgery outcomes. This study identified 365,752 patients undergoing major cancer surgery for 1 of 9 cancer types at 2,550 hospitals. Star rating was not associated with surgical volume. However, both the star rating and surgical volume were associated with cancer surgery outcomes (mortality, complication rate, readmissions, and prolonged length of stay). The adjusted predicted probabilities for 5- and 1-star hospitals were 2.3% and 4.5% for mortality, 39% and 48% for complications, 10% and 15% for readmissions, and 8% and 16% for a prolonged length of stay, respectively. The adjusted predicted probabilities for hospitals with the highest and lowest quintile cancer surgery volumes were 2.7% and 5.8% for mortality, 41% and 55% for complications, 12.2% and 11.6% for readmissions, and 9.4% and 13% for a prolonged length of stay, respectively. Furthermore, surgical volume and the star rating were similarly associated with mortality and complications, whereas the star rating was more highly associated with readmissions and prolonged length of stay. In the absence of other information, these findings suggest that the star rating may be useful to patients when they are selecting a hospital for major cancer surgery. However, more research is needed before these ratings can

  8. Modeling for Ecosystem Services: Challenges and Opportunities

    Science.gov (United States)

    Guswa, A. J.; Brauman, K. A.; Ghile, Y.

    2012-12-01

    Ecosystem services are those values provided to human society by the structure and processes of ecosystems and landscapes. Water-related services include the transformation of precipitation impulses into supplies of water for hydropower, irrigation, and industrial and municipal uses, the retention and removal of applied nutrients and pollutants, flood-damage mitigation, recreation, and the provision of cultural and aesthetic values. Incorporation of changes to the value of these services in land-use planning and decision making requires identification of the relevant services, engagement of stakeholders, knowledge of how land-use changes impact water quality, quantity, and timing, and mechanisms for putting value on the hydrologic and biogeochemical changes. We present three examples that highlight the characteristics, challenges, and opportunities associated with prototypical decisions that incorporate ecosystem services values: scenario analysis, payment for ecosystem services, and optimal spatial planning. Through these examples, we emphasize the challenges of data availability, model resolution and complexity, and attribution of value. We also provide some suggestions for ways forward.

  9. Prehospital tidal volume influences hospital tidal volume: A cohort study.

    Science.gov (United States)

    Stoltze, Andrew J; Wong, Terrence S; Harland, Karisa K; Ahmed, Azeemuddin; Fuller, Brian M; Mohr, Nicholas M

    2015-06-01

    The purposes of the study are to describe current practice of ventilation in a modern air medical system and to measure the association of ventilation strategy with subsequent ventilator care and acute respiratory distress syndrome (ARDS). Retrospective observational cohort study of intubated adult patients (n = 235) transported by a university-affiliated air medical transport service to a 711-bed tertiary academic center between July 2011 and May 2013. Low tidal volume ventilation was defined as tidal volumes less than or equal to 8 mL/kg predicted body weight. Multivariable regression was used to measure the association between prehospital tidal volume, hospital ventilation strategy, and ARDS. Most patients (57%) were ventilated solely with bag valve ventilation during transport. Mean tidal volume of mechanically ventilated patients was 8.6 mL/kg predicted body weight (SD, 0.2 mL/kg). Low tidal volume ventilation was used in 13% of patients. Patients receiving low tidal volume ventilation during air medical transport were more likely to receive low tidal volume ventilation in the emergency department. The development of ARDS was not associated with prehospital tidal volume (P = .840). Low tidal volume ventilation was rare during air medical transport. Air transport ventilation strategy influenced subsequent ventilation but was not associated with ARDS. Copyright © 2015 Elsevier Inc. All rights reserved.
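
    The 8 mL/kg predicted-body-weight cut-off used above can be checked with the commonly published ARDSNet predicted-body-weight formula; the example numbers below are hypothetical:

        def predicted_body_weight_kg(height_cm, male=True):
            """ARDSNet PBW: 50 (male) or 45.5 (female) + 0.91 * (height_cm - 152.4)."""
            base = 50.0 if male else 45.5
            return base + 0.91 * (height_cm - 152.4)

        def is_low_tidal_volume(tidal_volume_ml, height_cm, male=True, threshold=8.0):
            """True if the set tidal volume is <= threshold mL per kg of predicted body weight."""
            return tidal_volume_ml / predicted_body_weight_kg(height_cm, male) <= threshold

        # Example: 500 mL in a 175 cm male is about 7.1 mL/kg PBW, i.e. low tidal volume.
        print(is_low_tidal_volume(500, 175, male=True))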

  10. Franchising reproductive health services.

    Science.gov (United States)

    Stephenson, Rob; Tsui, Amy Ong; Sulzbach, Sara; Bardsley, Phil; Bekele, Getachew; Giday, Tilahun; Ahmed, Rehana; Gopalkrishnan, Gopi; Feyesitan, Bamikale

    2004-12-01

    Networks of franchised health establishments, providing a standardized set of services, are being implemented in developing countries. This article examines associations between franchise membership and family planning and reproductive health outcomes for both the member provider and the client. Regression models are fitted examining associations between franchise membership and family planning and reproductive health outcomes at the service provider and client levels in three settings. Franchising has a positive association with both general and family planning client volumes, and the number of family planning brands available. Similar associations with franchise membership are not found for reproductive health service outcomes. In some settings, client satisfaction is higher at franchised than other types of health establishments, although the association between franchise membership and client outcomes varies across the settings. Franchise membership has apparent benefits for both the provider and the client, providing an opportunity to expand access to reproductive health services, although greater attention is needed to shift the focus from family planning to a broader reproductive health context.

  11. Data Modeling for Mobile Services in the Real World

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Pedersen, Torben Bach; Speicys, L.

    2003-01-01

    Research contributions on data modeling, data structures, query processing, and indexing for mobile services may have an impact in the longer term, but each contribution typically offers an isolated solution to one small part of the practical problem of delivering mobile services in the real world. In contrast, this paper describes holistic concepts and techniques for mobile data modeling that are readily applicable in practice. Focus is on services to be delivered to mobile users, such as route guidance, point-of-interest search, road pricing, parking payment, traffic monitoring, etc. While geo

  12. Modelling shallow landslide susceptibility by means of a subsurface flow path connectivity index and estimates of soil depth spatial distribution

    Directory of Open Access Journals (Sweden)

    C. Lanni

    2012-11-01

    Full Text Available Topographic index-based hydrological models have gained wide use to describe the hydrological control on the triggering of rainfall-induced shallow landslides at the catchment scale. A common assumption in these models is that a spatially continuous water table occurs simultaneously across the catchment. However, during a rainfall event isolated patches of subsurface saturation form above an impeding layer and their hydrological connectivity is a necessary condition for lateral flow initiation at a point on the hillslope.

    Here, a new hydrological model is presented, which allows us to account for the concept of hydrological connectivity while keeping the simplicity of the topographic index approach. A dynamic topographic index is used to describe the transient lateral flow that is established at a hillslope element when the rainfall amount exceeds a threshold value allowing for (a) development of a perched water table above an impeding layer, and (b) hydrological connectivity between the hillslope element and its own upslope contributing area. A spatially variable soil depth is the main control of hydrological connectivity in the model. The hydrological model is coupled with the infinite slope stability model and with a scaling model for the rainfall frequency–duration relationship to determine the return period of the critical rainfall needed to cause instability on three catchments located in the Italian Alps, where a survey of soil depth spatial distribution is available. The model is compared with a quasi-dynamic model in which the dynamic nature of the hydrological connectivity is neglected. The results show a better performance of the new model in predicting observed shallow landslides, implying that soil depth spatial variability and connectivity bear a significant control on shallow landsliding.
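
    The stability half of the coupled model is the standard infinite-slope factor of safety. A minimal sketch follows, using the textbook form with illustrative soil parameters rather than values from the three catchments:

        import numpy as np

        def infinite_slope_fs(slope_deg, soil_depth_m, wetness,
                              cohesion_pa=2000.0, phi_deg=33.0,
                              gamma_soil=18000.0, gamma_water=9810.0):
            """Infinite-slope factor of safety.

            wetness = h_w / z, the saturated fraction of the soil column supplied by
            the hydrological model; gamma values are unit weights in N/m3.
            """
            beta = np.radians(slope_deg)
            phi = np.radians(phi_deg)
            normal = (gamma_soil - wetness * gamma_water) * soil_depth_m * np.cos(beta) ** 2
            resisting = cohesion_pa + normal * np.tan(phi)
            driving = gamma_soil * soil_depth_m * np.sin(beta) * np.cos(beta)
            return resisting / driving

        # Example: a 35 degree slope with 1 m of soil, fully saturated.
        print(round(infinite_slope_fs(35.0, 1.0, wetness=1.0), 2))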

  13. A comparison of tools for modeling freshwater ecosystem services.

    Science.gov (United States)

    Vigerstol, Kari L; Aukema, Juliann E

    2011-10-01

    Interest in ecosystem services has grown tremendously among a wide range of sectors, including government agencies, NGO's and the business community. Ecosystem services entailing freshwater (e.g. flood control, the provision of hydropower, and water supply), as well as carbon storage and sequestration, have received the greatest attention in both scientific and on-the-ground applications. Given the newness of the field and the variety of tools for predicting water-based services, it is difficult to know which tools to use for different questions. There are two types of freshwater-related tools--traditional hydrologic tools and newer ecosystem services tools. Here we review two of the most prominent tools of each type and their possible applications. In particular, we compare the data requirements, ease of use, questions addressed, and interpretability of results among the models. We discuss the strengths, challenges and most appropriate applications of the different models. Traditional hydrological tools provide more detail whereas ecosystem services tools tend to be more accessible to non-experts and can provide a good general picture of these ecosystem services. We also suggest gaps in the modeling toolbox that would provide the greatest advances by improving existing tools. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Experimental evidence concerning the significant information depth of electron backscatter diffraction (EBSD)

    Energy Technology Data Exchange (ETDEWEB)

    Wisniewski, Wolfgang, E-mail: wolfgang.w@uni-jena.de [Otto-Schott-Institut, Jena University, Fraunhoferstr. 6, 07743 Jena (Germany); Saager, Stefan [Fraunhofer Institute for Organic Electronics, Electron Beam and Plasma Technology FEP, Winterbergstraße 28, 01277 Dresden (Germany); Böbenroth, Andrea [Fraunhofer Institute for the Microstructure of Materials and Systems IMWS, Walter-Huelse-Straße 1, 06120 Halle (Saale) (Germany); Rüssel, Christian [Otto-Schott-Institut, Jena University, Fraunhoferstr. 6, 07743 Jena (Germany)

    2017-02-15

    Experiments concerning the information depth of electron backscatter diffraction (EBSD) are performed on samples featuring an amorphous wedge on a crystalline substrate and a crystalline wedge on an amorphous substrate. The effects of the acceleration voltage and exemplary software settings on the ability to measure through an amorphous layer are presented. Changes in the EBSD-signal could be detected through a ≈142 nm thick layer of amorphous Si while orientation measurements could be performed through a ≈116 nm thick layer when using a voltage of 30 kV. The complexity of the information depth significant to a given EBSD-pattern and the multiple parameters influencing it are discussed. It is suggested that a “core information depth” is significant to high quality patterns while a larger “maximum information depth” becomes relevant when the pattern quality decreases or the sample is inhomogeneous within the information volume, i.e. in the form of partially crystalline materials or crystal layers in the nm scale. - Highlights: • Experimental evidence of the significant information depth of EBSD is presented. • Effects of the voltage and exemplary software settings are discussed. • Dependence of the significant information depth on the pattern quality is proposed. • The information depth may reach up to 142 nm in Si when using a voltage of 30 kV. • The information depth depends on the available technology.

  15. Replicating the Ice-Volume Signal of the Early Pleistocene with a Complex Earth System Model

    Science.gov (United States)

    Tabor, C. R.; Poulsen, C. J.; Pollard, D.

    2013-12-01

    Milankovitch theory proposes that high-latitude summer insolation intensity paces the ice ages by controlling perennial snow cover amounts (Milankovitch, 1941). According to theory, the ~21 kyr cycle of precession should dominate the ice-volume records since it has the greatest influence on high-latitude summer insolation. Modeling experiments frequently support Milankovitch theory by attributing the majority of Northern Hemisphere high-latitude summer snowmelt to changes in the cycle of precession (e.g. Jackson and Broccoli, 2003). However, ice-volume proxy records, especially those of the Early Pleistocene (2.6-0.8 Ma), display variability with a period of ~41 kyr (Raymo and Lisiecki, 2005), indicative of insolation forcing from obliquity, which has a much smaller influence on summer insolation intensity than precession. Several hypotheses attempt to explain the discrepancies between Milankovitch theory and the proxy records by invoking phenomena such as insolation gradients (Raymo and Nisancioglu, 2003), hemispheric offset (Raymo et al., 2006; Lee and Poulsen, 2009), and integrated summer energy (Huybers, 2006); however, all of these hypotheses contain caveats (Ruddiman, 2006) and have yet to be supported by modeling studies that use a complex GCM. To explore potential solutions to this '41 kyr problem,' we use an Earth system model composed of the GENESIS GCM and Land Surface model, the BIOME4 vegetation model, and the Pennsylvania State ice-sheet model. Using an asynchronous coupling technique, we run four idealized transient combinations of obliquity and precession, representing the orbital extremes of the Pleistocene (Berger and Loutre, 1991). Each experiment is run through several complete orbital cycles with a dynamic ice domain spanning North America and Greenland, and fixed preindustrial greenhouse-gas concentrations. For all orbital configurations, model results produce greater ice-volume spectral power at the frequency of obliquity despite significantly

  16. A Study on Intelligent User-Centric Logistics Service Model Using Ontology

    Directory of Open Access Journals (Sweden)

    Saraswathi Sivamani

    2014-01-01

    Full Text Available Much research has been undertaken in the smart logistics environment to deliver products promptly, in the right place and at the right time. Most existing services have been based on time management, routing techniques, and location-based services. Services in the recent logistics environment aim for situation-based logistics centered around the user, utilizing various information technologies such as mobile devices, computer systems, and GPS. This paper proposes a smart logistics service model for providing user-centric intelligent logistics services by utilizing smartphones in a smart environment. We also develop an OWL-based ontology model for smart logistics to support better understanding of the context information. In addition to basic delivery information, the proposed service model makes use of the location and situation information of the delivery vehicle and the user to derive route information according to the user's requirements. With the growth of Internet usage and the Internet of Things, real-time situation information can be received, which helps to create a more reliable relationship. Through this service model, it is possible to develop various IT and logistics convergence services based on situation information exchanged between the deliverer and the user in real time.

  17. Launch Services, a Proven Model

    Science.gov (United States)

    Trafton, W. C.; Simpson, J.

    2002-01-01

    From a commercial perspective, the ability to justify "leap frog" technology such as reusable systems has been difficult to justify because the estimated 5B to 10B investment is not supported in the current flat commercial market coupled with an oversupply of launch service suppliers. The market simply does not justify investment of that magnitude. Currently, next generation Expendable Launch Systems, including Boeing's Delta IV, Lockheed Martin's Atlas 5, Ariane V ESCA and RSC's H-IIA are being introduced into operations signifying that only upgrades to proven systems are planned to meet the changes in anticipated satellite demand (larger satellites, more lifetime, larger volumes, etc.) in the foreseeable future. We do not see a new fleet of ELVs emerging beyond that which is currently being introduced, only continuous upgrades of the fleet to meet the demands. To induce a radical change in the provision of launch services, a Multinational Government investment must be made and justified by World requirements. The commercial market alone cannot justify such an investment. And if an investment is made, we cannot afford to repeat previous mistakes by relying on one system such as shuttle for commercial deployment without having any back-up capability. Other issues that need to be considered are national science and security requirements, which to a large extent fuels the Japanese, Chinese, Indian, Former Soviet Union, European and United States space transportation entries. Additionally, this system must support or replace current Space Transportation Economies with across-the-board benefits. For the next 10 to 20 years, Multinational cooperation will be in the form of piecing together launch components and infrastructure to supplement existing launch systems and reducing the amount of non-recurring investment while meeting the future requirements of the End-User. Virtually all of the current systems have some form of multinational participation: Sea Launch

  18. Influence of Anchoring on Burial Depth of Submarine Pipelines.

    Science.gov (United States)

    Zhuang, Yuan; Li, Yang; Su, Wei

    2016-01-01

    Since the beginning of the twenty-first century, there has been widespread construction of submarine oil-gas transmission pipelines due to an increase in offshore oil exploration. Vessel anchoring operations are causing increasing damage to submarine pipelines as shipping traffic also grows. Therefore, it is essential that the influence of anchoring on the required burial depth of submarine pipelines is determined. In this paper, mathematical models for ordinary anchoring and emergency anchoring have been established to derive an anchor impact energy equation for each condition. The required effective burial depth for submarine pipelines has then been calculated via an energy absorption equation for the protection layer covering the submarine pipelines. Finally, the results of the model calculation have been verified by accident case analysis, and the impact of the anchoring height, anchoring water depth and the anchor weight on the required burial depth of submarine pipelines has been further analyzed.
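
    A much-simplified version of the energy balance described above (fall to terminal velocity in water, impact energy absorbed by the protection layer) can be sketched as follows. Every parameter, including the absorption capacity per metre, is an illustrative assumption rather than a value from the paper:

        import numpy as np

        def required_burial_depth_m(anchor_mass_kg, drop_height_m,
                                    frontal_area_m2=0.5, drag_coeff=1.0,
                                    anchor_volume_m3=0.15, rho_water=1025.0,
                                    absorption_j_per_m=5.0e5):
            """Illustrative burial-depth estimate from a simple anchor energy balance."""
            g = 9.81
            submerged_weight = (anchor_mass_kg - rho_water * anchor_volume_m3) * g
            # Terminal velocity where drag balances the submerged weight.
            v_term = np.sqrt(2.0 * submerged_weight /
                             (rho_water * drag_coeff * frontal_area_m2))
            # Impact velocity: fall under the submerged weight, capped at terminal velocity.
            g_eff = submerged_weight / anchor_mass_kg
            v_impact = min(np.sqrt(2.0 * g_eff * drop_height_m), v_term)
            impact_energy = 0.5 * anchor_mass_kg * v_impact ** 2
            # Thickness of protection layer needed to absorb the impact energy.
            return impact_energy / absorption_j_per_m

        # Example: a 5-tonne anchor released 20 m above the seabed.
        print(round(required_burial_depth_m(5000.0, 20.0), 2))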

  19. A blueprint for mapping and modelling ecosystem services

    NARCIS (Netherlands)

    Crossman, N.; Burkhard, B.; Nedkov, S.; Willemen, L.L.J.; Petz, K.; Palomo, I.; Drakou, E.G.; Martín-Lopez, B.; McPhearson, T.; Boyanova, K.; Alkemade, R.; Egoh, B.; Dunbar, M.D.; Maes, J.

    2013-01-01

    The inconsistency in methods to quantify and map ecosystem services challenges the development of robust values of ecosystem services in national accounts and broader policy and natural resource management decision-making. In this paper we develop and test a blueprint to give guidance on modelling

  20. Computational study of depth completion consistent with human bi-stable perception for ambiguous figures.

    Science.gov (United States)

    Mitsukura, Eiichi; Satoh, Shunji

    2018-03-01

    We propose a computational model that is consistent with human perception of depth in "ambiguous regions," in which no binocular disparity exists. Results obtained from our model reveal a new characteristic of depth perception. Random dot stereograms (RDS) are often used as examples because RDS provides sufficient disparity for depth calculation. A simple question confronts us: "How can we estimate the depth of a no-texture image region, such as one on white paper?" In such ambiguous regions, mathematical solutions related to binocular disparities are not unique or indefinite. We examine a mathematical description of depth completion that is consistent with human perception of depth for ambiguous regions. Using computer simulation, we demonstrate that resultant depth-maps qualitatively reproduce human depth perception of two kinds. The resultant depth maps produced using our model depend on the initial depth in the ambiguous region. Considering this dependence from psychological viewpoints, we conjecture that humans perceive completed surfaces that are affected by prior-stimuli corresponding to the initial condition of depth. We conducted psychological experiments to verify the model prediction. An ambiguous stimulus was presented after a prior stimulus removed ambiguity. The inter-stimulus interval (ISI) was inserted between the prior stimulus and post-stimulus. Results show that correlation of perception between the prior stimulus and post-stimulus depends on the ISI duration. Correlation is positive, negative, and nearly zero in the respective cases of short (0-200 ms), medium (200-400 ms), and long ISI (>400 ms). Furthermore, based on our model, we propose a computational model that can explain the dependence. Copyright © 2017 Elsevier Ltd. All rights reserved.
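
    A toy version of depth completion in an ambiguous region, and of its dependence on the initial (prior-stimulus-like) depth when relaxation is stopped early, can be sketched by fixed-boundary neighbour averaging. This is a stand-in illustration, not the authors' model:

        import numpy as np

        def complete_depth(observed, init_value, n_iter=30):
            """Fill NaN (zero-disparity, ambiguous) pixels by iterative neighbour averaging.

            Pixels with valid disparity stay fixed; the interior starts at init_value,
            standing in for the depth imposed by a prior stimulus.
            """
            depth = np.where(np.isnan(observed), init_value, observed)
            known = ~np.isnan(observed)
            for _ in range(n_iter):
                avg = 0.25 * (np.roll(depth, 1, 0) + np.roll(depth, -1, 0)
                              + np.roll(depth, 1, 1) + np.roll(depth, -1, 1))
                depth = np.where(known, depth, avg)
            return depth

        # A square patch whose border disparity is 1.0 and whose interior is ambiguous.
        obs = np.full((21, 21), np.nan)
        obs[0, :] = obs[-1, :] = obs[:, 0] = obs[:, -1] = 1.0
        # With limited iterations the completed centre still reflects the initial depth.
        print(complete_depth(obs, init_value=0.0)[10, 10],
              complete_depth(obs, init_value=2.0)[10, 10])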

  1. Three Software Tools for Viewing Sectional Planes, Volume Models, and Surface Models of a Cadaver Hand.

    Science.gov (United States)

    Chung, Beom Sun; Chung, Min Suk; Shin, Byeong Seok; Kwon, Koojoo

    2018-02-19

    The hand anatomy, including the complicated hand muscles, can be grasped by using computer-assisted learning tools with high quality two-dimensional images and three-dimensional models. The purpose of this study was to present up-to-date software tools that promote learning of stereoscopic morphology of the hand. On the basis of horizontal sectioned images and outlined images of a male cadaver, vertical planes, volume models, and surface models were elaborated. Software to browse pairs of the sectioned and outlined images in orthogonal planes and software to peel and rotate the volume models, as well as a portable document format (PDF) file to select and rotate the surface models, were produced. All of the software tools were downloadable free of charge and usable off-line. The three types of tools for viewing multiple aspects of the hand could be adequately employed according to individual needs. These new tools involving the realistic images of a cadaver and the diverse functions are expected to improve comprehensive knowledge of the hand shape. © 2018 The Korean Academy of Medical Sciences.

  2. An integrated service excellence model for military test and ...

    African Journals Online (AJOL)

    The purpose of this article is to introduce an Integrated Service Excellence Model (ISEM) for empowering the leadership core of the capital-intensive military test and evaluation facilities to provide strategic military test and evaluation services and to continuously improve service excellence by ensuring that all activities ...

  3. Resilience to Changing Snow Depth in a Shrubland Ecosystem.

    Science.gov (United States)

    Loik, M. E.

    2008-12-01

    Snowfall is the dominant hydrologic input for high elevations and latitudes of the arid- and semi-arid western United States. Sierra Nevada snowpack provides numerous important services for California, but is vulnerable to anthropogenic forcing of the coupled ocean-atmosphere system. GCM and RCM scenarios envision reduced snowpack and earlier melt under a warmer climate, but how will these changes affect soil and plant water relations and ecosystem processes? And, how resilient will this ecosystem be to short- and long-term forcing of snow depth and melt timing? To address these questions, our experiments utilize large-scale, long-term roadside snow fences to manipulate snow depth and melt timing in eastern California, USA. Interannual snow depth averages 1344 mm with a CV of 48% (April 1, 1928-2008). Snow fences altered snow melt timing by up to 18 days in high-snowfall years, and affected short-term soil moisture pulses less in low- than medium- or high-snowfall years. Sublimation in this arid location accounted for about 2 mol m-2 of water loss from the snowpack in 2005. Plant water potential increased after the ENSO winter of 2005 and stayed relatively constant for the following three years, even after the low snowfall of winter 2007. Over the long-term, changes in snow depth and melt timing have impacted cover or biomass of Achnatherum thurberianum, Elymus elemoides, and Purshia tridentata. Growth of adult conifers (Pinus jeffreyi and Pi. contorta) was not equally sensitive to snow depth. Thus, complex interactions between snow depth, soil water inputs, physiological processes, and population patterns help drive the resilience of this ecosystem to changes in snow depth and melt timing.

  4. Models for high cell density bioreactors must consider biomass volume fraction: Cell recycle example.

    Science.gov (United States)

    Monbouquette, H G

    1987-06-01

    Intrinsic models, which take into account biomass volume fraction, must be formulated for adequate simulation of high-biomass-density fermentations with cell recycle. Through comparison of corresponding intrinsic and non-intrinsic models in dimensionless form, constraints for non-intrinsic model usage in terms of biokinetic and fermenter operating parameters can be identified a priori. Analysis of a simple product-inhibition model indicates that the non-intrinsic approach is suitable only when the attainable biomass volume fraction in the fermentation broth is less than about 0.10. Inappropriate application of a non-intrinsic model can lead to gross errors in calculated substrate and product concentrations, substrate conversion, and volumetric productivity.
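
    The ~0.10 volume-fraction rule of thumb can be checked from the biomass concentration and an assumed specific cell volume; the 2.5 mL per g dry weight figure below is an illustrative assumption, not a value from the paper:

        def biomass_volume_fraction(biomass_g_per_l, cell_volume_ml_per_g_dw=2.5):
            """Volume fraction of broth occupied by cells, phi = X * v_cell."""
            return biomass_g_per_l * cell_volume_ml_per_g_dw / 1000.0

        for x in (10.0, 40.0, 80.0):  # g dry cell weight per litre
            phi = biomass_volume_fraction(x)
            flag = "intrinsic model needed" if phi > 0.10 else "non-intrinsic acceptable"
            print(f"X = {x:5.1f} g/L  phi = {phi:.2f}  ({flag})")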

  5. Models for high cell density bioreactors must consider biomass volume fraction: cell recycle example

    Energy Technology Data Exchange (ETDEWEB)

    Monbouquette, H.G.

    1987-06-01

    Intrinsic models, which take into account biomass volume fraction, must be formulated for adequate simulation of high-biomass-density fermentations with cell recycle. Through comparison of corresponding intrinsic and non-intrinsic models in dimensionless form, constraints for non-intrinsic model usage in terms of biokinetic and fermenter operating parameters can be identified a priori. Analysis of a simple product-inhibition model indicates that the non-intrinsic approach is suitable only when the attainable biomass volume fraction in the fermentation broth is less than about 0.10. Inappropriate application of a non-intrinsic model can lead to gross errors in calculated substrate and product concentrations, substrate conversion, and volumetric productivity. (Refs. 14).

  6. Clinical Predictive Modeling Development and Deployment through FHIR Web Services.

    Science.gov (United States)

    Khalilia, Mohammed; Choi, Myung; Henderson, Amelia; Iyengar, Sneha; Braunstein, Mark; Sun, Jimeng

    2015-01-01

    Clinical predictive modeling involves two challenging tasks: model development and model deployment. In this paper we demonstrate a software architecture for developing and deploying clinical predictive models using web services via the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard. The services enable model development using electronic health records (EHRs) stored in OMOP CDM databases and model deployment for scoring individual patients through FHIR resources. The MIMIC2 ICU dataset and a synthetic outpatient dataset were transformed into OMOP CDM databases for predictive model development. The resulting predictive models are deployed as FHIR resources, which receive requests of patient information, perform prediction against the deployed predictive model and respond with prediction scores. To assess the practicality of this approach we evaluated the response and prediction time of the FHIR modeling web services. We found the system to be reasonably fast with one second total response time per patient prediction.
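
    The deployment half of the pipeline (a web service that receives patient features and returns a prediction score) can be sketched as a minimal HTTP endpoint. This is a generic JSON stand-in with made-up coefficients, not the HL7 FHIR resource definitions or the authors' services:

        import math

        from flask import Flask, jsonify, request

        app = Flask(__name__)

        # Placeholder logistic-regression coefficients (feature name -> weight).
        COEFS = {"age": 0.03, "heart_rate": 0.02, "creatinine": 0.5}
        INTERCEPT = -4.0

        @app.route("/score", methods=["POST"])
        def score():
            """Accept a JSON map of features and return a risk score in [0, 1]."""
            features = request.get_json(force=True)
            logit = INTERCEPT + sum(COEFS[k] * float(features.get(k, 0.0)) for k in COEFS)
            return jsonify({"risk": 1.0 / (1.0 + math.exp(-logit))})

        if __name__ == "__main__":
            app.run(port=5000)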

  7. Measurement of Optic Disc Cup Surface Depth Using Cirrus HD-OCT.

    Science.gov (United States)

    Kim, Young Kook; Ha, Ahnul; Lee, Won June; Jeoung, Jin Wook; Park, Ki Ho

    2017-12-01

    To introduce the measurement method of optic disc cup surface depth using spectral-domain optical coherence tomography (SD-OCT) and then evaluate the rates of cup surface depression at 3 different stages of glaucoma. We retrospectively identified 52 eyes with preperimetric glaucoma, 56 with mild-or-moderate glaucoma and 50 with severe glaucoma and followed them for at least 48 months. Eyes were imaged using SD-OCT (Cirrus HD-OCT) at 12-month intervals. The mean cup surface depth was calculated using the following formula: Cup volume/(disc area×average cup-to-disc ratio)-200 μm. The rates of mean cup surface depression (μm/y) were significantly greater in mild-or-moderate glaucoma (-7.96±1.03) than in preperimetric (-3.11±0.61) and severe glaucoma (-0.70±0.12; all Pcup surface depression (%/y) were significantly greater than those of average of retinal nerve fiber layer (RNFL) thinning (%/y) in preperimetric glaucoma (-1.64±0.12 vs. -1.11±0.07; Pcup surface depth changed slower than did average RNFL thickness (-0.64±0.06 vs. -0.75±0.08%/y; Pcup surface depth changed faster than did the RNFL thickness. These results signify the possibility that SD-OCT-based estimation of cup surface depth might be useful for monitoring of glaucoma development and progression.
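
    The mean cup surface depth formula quoted above can be applied directly to the OCT outputs; a minimal helper with hypothetical example values (units assumed here: cup volume in mm3, disc area in mm2, result converted to micrometres) is:

        def mean_cup_surface_depth_um(cup_volume_mm3, disc_area_mm2, avg_cd_ratio):
            """Mean cup surface depth (um) = cup volume / (disc area x average C/D ratio) - 200 um."""
            depth_mm = cup_volume_mm3 / (disc_area_mm2 * avg_cd_ratio)
            return depth_mm * 1000.0 - 200.0

        # Hypothetical eye: 0.45 mm3 cup volume, 2.0 mm2 disc area, average C/D ratio 0.7.
        print(round(mean_cup_surface_depth_um(0.45, 2.0, 0.7), 1))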

  8. An integrated tiered service delivery model (ITSDM) based on local CD4 testing demands can improve turn-around times and save costs whilst ensuring accessible and scalable CD4 services across a national programme.

    Directory of Open Access Journals (Sweden)

    Deborah K Glencross

    Full Text Available The South African National Health Laboratory Service (NHLS responded to HIV treatment initiatives with two-tiered CD4 laboratory services in 2004. Increasing programmatic burden, as more patients access anti-retroviral therapy (ART, has demanded extending CD4 services to meet increasing clinical needs. The aim of this study was to review existing services and develop a service-model that integrated laboratory-based and point-of-care testing (POCT, to extend national coverage, improve local turn-around/(TAT and contain programmatic costs.NHLS Corporate Data Warehouse CD4 data, from 60-70 laboratories and 4756 referring health facilities was reviewed for referral laboratory workload, respective referring facility volumes and related TAT, from 2009-2012.An integrated tiered service delivery model (ITSDM is proposed. Tier-1/POCT delivers CD4 testing at single health-clinics providing ART in hard-to-reach areas (350-1500 tests/day, serving ≥ 200 health-clinics. Tier-6 provides national support for standardisation, harmonization and quality across the organization.The ITSDM offers improved local TAT by extending CD4 services into rural/remote areas with new Tier-3 or Tier-2/POC-Hub services installed in existing community laboratories, most with developed infrastructure. The advantage of lower laboratory CD4 costs and use of existing infrastructure enables subsidization of delivery of more expensive POC services, into hard-to-reach districts without reasonable access to a local CD4 laboratory. Full ITSDM implementation across 5 service tiers (as opposed to widespread implementation of POC testing to extend service can facilitate sustainable 'full service coverage' across South Africa, and save>than R125 million in HIV/AIDS programmatic costs. ITSDM hierarchical parental-support also assures laboratory/POC management, equipment maintenance, quality control and on-going training between tiers.

  9. Soil map, area and volume calculations in Orrmyrberget catchment basin at Gideaa, Northern Sweden

    International Nuclear Information System (INIS)

    Ittner, T.; Tammela, P.T.; Gustafsson, E.

    1991-06-01

    Fallout studies in the Gideaa study site after the Chernobyl fallout in 1986 have reached the point where a more exact surface mapping of the studied catchment basin is needed. This surface mapping is made mainly for area calculations of the different soil types within the study site. The mapping focuses on the surface, as the study concerns fallout redistribution, but it is extended to include materials down to a depth of 0.5 m. Volume calculations are made for the various soil materials within the top 0.5 m. These volume and area calculations will then be used in modelling the migration and redistribution of the fallout radionuclides within the studied catchment basin. (au)
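
    The volume calculation described here is simply the mapped area of each soil type multiplied by the 0.5 m mapping depth. A minimal sketch follows; the soil classes and areas are hypothetical, for illustration only.

        # Hypothetical mapped areas per soil type (m^2) within the catchment basin.
        mapped_areas_m2 = {"peat": 12000.0, "till": 48000.0, "sand": 7500.0}
        MAPPING_DEPTH_M = 0.5  # the mapping extends down to a depth of 0.5 m

        volumes_m3 = {soil: area * MAPPING_DEPTH_M for soil, area in mapped_areas_m2.items()}
        print(volumes_m3)                # volume of each soil material in the top 0.5 m
        print(sum(volumes_m3.values()))  # total volume over all mapped soil types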

  10. Variation in Measurements of Transtibial Stump Model Volume: A Comparison of Five Methods

    NARCIS (Netherlands)

    Bolt, A.; de Boer-Wilzing, V. G.; Geertzen, J. H. B.; Emmelot, C. H.; Baars, E. C. T.; Dijkstra, P. U.

    Objective: To determine the right moment for fitting the first prosthesis, it is necessary to know when the volume of the stump has stabilized. The aim of this study is to analyze variation in measurements of transtibial stump model volumes using the water immersion method, the Design TT system, the

  11. Performance Prediction Modelling for Flexible Pavement on Low Volume Roads Using Multiple Linear Regression Analysis

    Directory of Open Access Journals (Sweden)

    C. Makendran

    2015-01-01

    Full Text Available Prediction models for low volume village roads in India are developed to evaluate the progression of different types of distress such as roughness, cracking, and potholes. Even though the Government of India invests a huge quantum of money in road construction every year, poor control over the quality of road construction and of subsequent maintenance is leading to faster road deterioration. In this regard, it is essential that scientific maintenance procedures be evolved on the basis of the performance of low volume flexible pavements. Considering the above, an attempt has been made in this research endeavour to develop prediction models to understand the progression of roughness, cracking, and potholes in flexible pavements exposed to little or no routine maintenance. Distress data were collected from low volume rural roads covering about 173 stretches spread across Tamil Nadu state in India. Based on the collected data, distress prediction models have been developed using multiple linear regression analysis. Further, the models have been validated using independent field data. It can be concluded that the models developed in this study can serve as useful tools for practising engineers maintaining flexible pavements on low volume roads.
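
    A distress-progression model of this kind is an ordinary multiple linear regression of a distress measure on explanatory variables. The sketch below fits such a model with scikit-learn; the predictors (pavement age, cumulative traffic, rainfall) and the synthetic data are assumptions for illustration, not the variables or coefficients reported in the study.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)

        # Hypothetical predictors: pavement age (years), cumulative traffic (1000 CVPD-years), annual rainfall (mm).
        X = rng.uniform([1.0, 5.0, 600.0], [10.0, 80.0, 1400.0], size=(173, 3))
        # Hypothetical roughness response (IRI, m/km) with noise, for illustration only.
        y = 2.0 + 0.25 * X[:, 0] + 0.02 * X[:, 1] + 0.001 * X[:, 2] + rng.normal(0.0, 0.3, 173)

        model = LinearRegression().fit(X, y)                   # multiple linear regression
        print("intercept:", model.intercept_)
        print("coefficients:", model.coef_)
        print("R^2 on the fitting data:", model.score(X, y))   # validation should use independent field data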

  12. Alligator Rivers Analogue project. Geochemical modelling of secondary uranium ore formation. Final Report - Volume 11

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, D. [The Johns Hopkins Univ, Dept of Earth and Planetary Sciences, Baltimore (United States)]; Bennett, D.G.; Read, D. [W.S. Atkins Science and Technology, Epsom, Surrey (United Kingdom)]

    1992-12-31

    The purpose of the present study was to establish how the uranyl phosphate zone at the Koongarra site was formed. The overall approach taken in the present study employed theoretical chemical mass transfer calculations and models that permit investigation and reconstruction of the kinds of waters that could produce the uranyl phosphate zone. These calculations have used the geological and mineralogical data for the Koongarra weathered zone (Volumes 2, 8, and 9 of this series), to constrain the initial compositions and reactions undergone by groundwater during the formation of the uranyl phosphate zone. In carrying out these calculations the present-day analyses of Koongarra waters are used only as a guide to the possible initial composition of the fluids associated with the formation of the phosphate zone. Aqueous speciation, saturation state and chemical mass transfer calculations were carried out using the computer programs EQ3NR and EQ6 (Wolery, 1983; Wolery et al., 1984) and a thermodynamic database generated at The Johns Hopkins University over the last eight years which is tabulated in the Appendix 1 to Volume 12 of this series. Despite uncertainties in the thermodynamic characterisation of species, all the above calculations suggest that the uranyl phosphate zone at Koongarra has not formed from present-day groundwaters (Volume 12 of this series). The present-day groundwaters in the weathered zone (eg. at 13 m depth) appear to be undersaturated with respect to saleeite. Furthermore, as present-day groundwaters descend below the water table they rapidly lose their atmospheric oxygen imprint, as is typical of most groundwaters, and become even more reducing in character. Under these circumstances, the groundwaters become more undersaturated with respect to saleeite than the shallow groundwaters. Because much of the phosphate zone is currently below the water table, under saturated zone conditions, it is suggested in the present study that the uranyl phosphate
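
    The saturation-state arguments in this record rest on the standard saturation index, SI = log10(IAP/Ksp): a water is undersaturated with respect to a mineral when SI < 0 and cannot precipitate it. A minimal sketch of that calculation follows; the ion activities and the equilibrium constant are hypothetical placeholders, not values from the EQ3NR/EQ6 thermodynamic database used in the study.

        import math

        def saturation_index(ion_activity_product: float, log10_ksp: float) -> float:
            """SI = log10(IAP) - log10(Ksp); SI < 0 means undersaturated, SI > 0 oversaturated."""
            return math.log10(ion_activity_product) - log10_ksp

        # Hypothetical activities for a generic mineral A2B (IAP = a_A^2 * a_B) and a hypothetical log10 Ksp.
        a_A, a_B = 1.0e-6, 2.0e-9
        iap = a_A ** 2 * a_B
        print(saturation_index(iap, log10_ksp=-19.0))  # negative -> the water is undersaturated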

  13. Alligator Rivers Analogue project. Geochemical modelling of secondary uranium ore formation. Final Report - Volume 11

    International Nuclear Information System (INIS)

    Sverjensky, D.; Bennett, D.G.; Read, D.

    1992-01-01

    The purpose of the present study was to establish how the uranyl phosphate zone at the Koongarra site was formed. The overall approach taken in the present study employed theoretical chemical mass transfer calculations and models that permit investigation and reconstruction of the kinds of waters that could produce the uranyl phosphate zone. These calculations have used the geological and mineralogical data for the Koongarra weathered zone (Volumes 2, 8, and 9 of this series), to constrain the initial compositions and reactions undergone by groundwater during the formation of the uranyl phosphate zone. In carrying out these calculations the present-day analyses of Koongarra waters are used only as a guide to the possible initial composition of the fluids associated with the formation of the phosphate zone. Aqueous speciation, saturation state and chemical mass transfer calculations were carried out using the computer programs EQ3NR and EQ6 (Wolery, 1983; Wolery et al., 1984) and a thermodynamic database generated at The Johns Hopkins University over the last eight years which is tabulated in the Appendix 1 to Volume 12 of this series. Despite uncertainties in the thermodynamic characterisation of species, all the above calculations suggest that the uranyl phosphate zone at Koongarra has not formed from present-day groundwaters (Volume 12 of this series). The present-day groundwaters in the weathered zone (eg. at 13 m depth) appear to be undersaturated with respect to saleeite. Furthermore, as present-day groundwaters descend below the water table they rapidly lose their atmospheric oxygen imprint, as is typical of most groundwaters, and become even more reducing in character. Under these circumstances, the groundwaters become more undersaturated with respect to saleeite than the shallow groundwaters. Because much of the phosphate zone is currently below the water table, under saturated zone conditions, it is suggested in the present study that the uranyl phosphate

  14. Alligator Rivers Analogue project. Geochemical modelling of secondary uranium ore formation. Final Report - Volume 11

    Energy Technology Data Exchange (ETDEWEB)

    Sverjensky, D [The Johns Hopkins Univ, Dept of Earth and Planetary Sciences, Baltimore (United States)]; Bennett, D G; Read, D [W.S. Atkins Science and Technology, Epsom, Surrey (United Kingdom)]

    1993-12-31

    The purpose of the present study was to establish how the uranyl phosphate zone at the Koongarra site was formed. The overall approach taken in the present study employed theoretical chemical mass transfer calculations and models that permit investigation and reconstruction of the kinds of waters that could produce the uranyl phosphate zone. These calculations have used the geological and mineralogical data for the Koongarra weathered zone (Volumes 2, 8, and 9 of this series), to constrain the initial compositions and reactions undergone by groundwater during the formation of the uranyl phosphate zone. In carrying out these calculations the present-day analyses of Koongarra waters are used only as a guide to the possible initial composition of the fluids associated with the formation of the phosphate zone. Aqueous speciation, saturation state and chemical mass transfer calculations were carried out using the computer programs EQ3NR and EQ6 (Wolery, 1983; Wolery et al., 1984) and a thermodynamic database generated at The Johns Hopkins University over the last eight years which is tabulated in the Appendix 1 to Volume 12 of this series. Despite uncertainties in the thermodynamic characterisation of species, all the above calculations suggest that the uranyl phosphate zone at Koongarra has not formed from present-day groundwaters (Volume 12 of this series). The present-day groundwaters in the weathered zone (eg. at 13 m depth) appear to be undersaturated with respect to saleeite. Furthermore, as present-day groundwaters descend below the water table they rapidly lose their atmospheric oxygen imprint, as is typical of most groundwaters, and become even more reducing in character. Under these circumstances, the groundwaters become more undersaturated with respect to saleeite than the shallow groundwaters. Because much of the phosphate zone is currently below the water table, under saturated zone conditions, it is suggested in the present study that the uranyl phosphate

  15. Past climate changes and permafrost depth at the Lake El'gygytgyn site: implications from data and thermal modeling

    Directory of Open Access Journals (Sweden)

    D. Mottaghy

    2013-01-01

    Full Text Available This study focuses on the temperature field observed in boreholes drilled as part of an interdisciplinary scientific campaign targeting the El'gygytgyn Crater Lake in NE Russia. Temperature data are available from two sites: the lake borehole 5011-1, located near the center of the lake and reaching 400 m depth, and the land borehole 5011-3 at the rim of the lake, with a depth of 140 m. Constraints on permafrost depth and past climate changes are derived from numerical simulation of the thermal regime associated with the lake-related talik structure. The thermal properties of the subsurface needed for these simulations are based on laboratory measurements of representative cores from the Quaternary sediments and the underlying impact-affected rock, complemented by information from geophysical logs and data from the published literature. The temperature observations in the lake borehole 5011-1 are dominated by thermal perturbations related to the drilling process, and thus yield a reliable value only at the lowermost point of the borehole. Undisturbed temperature data recorded over more than two years are available in the 140 m deep land-based borehole 5011-3. The analysis of these observations allows determination not only of the recent mean annual ground surface temperature, but also of the ground surface temperature history, though with large uncertainties. Although the depth of this borehole is far from sufficient for a complete reconstruction of past temperatures back to the Last Glacial Maximum, that glacial history still affects the thermal regime, and thus the permafrost depth. This effect is constrained by numerical modeling: assuming that the lake borehole observations are hardly influenced by past changes in surface air temperature, an estimate of steady-state conditions is possible, leading to a meaningful value of 14 ± 5 K for the post-glacial warming. The strong curvature of the temperature data at shallower depths, around 60 m, can be explained by a
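
    The steady-state argument in this record can be illustrated with one-dimensional heat conduction: at steady state, temperature increases linearly with depth, T(z) = T_surface + (q/k)·z, and the permafrost base lies where T(z) reaches 0 °C. A minimal sketch follows; the surface temperature, heat flow and conductivity values are hypothetical, not the site parameters used by the authors.

        def steady_state_permafrost_depth_m(surface_temp_c: float,
                                            heat_flow_w_m2: float,
                                            conductivity_w_mk: float) -> float:
            """Depth at which T(z) = T_surface + (q / k) * z crosses 0 degC (1-D steady-state conduction)."""
            if surface_temp_c >= 0.0:
                return 0.0  # no permafrost under a non-freezing surface
            gradient_k_per_m = heat_flow_w_m2 / conductivity_w_mk  # geothermal gradient, K/m
            return -surface_temp_c / gradient_k_per_m

        # Hypothetical values: -8 degC mean annual ground surface temperature,
        # 60 mW/m^2 heat flow and 2.5 W/(m K) thermal conductivity.
        print(steady_state_permafrost_depth_m(-8.0, 0.060, 2.5))  # roughly 330 m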

  16. On the relation between cost and service models for general inventory systems

    NARCIS (Netherlands)

    Houtum, van G.J.J.A.N.; Zijm, W.H.M.

    2000-01-01

    In this paper, we present a systematic overview of possible relations between cost and service models for fairly general single- and multi-stage inventory systems. In particular, we relate various types of penalty costs in pure cost models to equivalent types of service measures in service models.
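
    A classic instance of such a cost-service equivalence is the single-period newsvendor relation: a backorder penalty cost b and holding cost h imply an optimal cycle service level of b/(b+h), so imposing that service level reproduces the cost-optimal policy. The sketch below illustrates the idea; the cost figures and the normal demand assumption are hypothetical, and this particular equivalence is only an illustration, not the paper's general result.

        from statistics import NormalDist

        def cost_optimal_service_level(penalty_cost: float, holding_cost: float) -> float:
            """Newsvendor critical ratio: the cycle service level implied by the cost parameters."""
            return penalty_cost / (penalty_cost + holding_cost)

        def base_stock_level(mean_demand: float, sd_demand: float, service_level: float) -> float:
            """Order-up-to level achieving the given cycle service level under normal demand."""
            return mean_demand + NormalDist().inv_cdf(service_level) * sd_demand

        alpha = cost_optimal_service_level(penalty_cost=9.0, holding_cost=1.0)  # -> 0.9
        print(alpha, base_stock_level(mean_demand=100.0, sd_demand=20.0, service_level=alpha))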

  17. Competition between Plant-Populations with Different Rooting Depths. 2. Pot Experiments

    NARCIS (Netherlands)

    Berendse, F.

    1981-01-01

    In a previous paper in this series a model was proposed for the competition between plant populations with different rooting depths. This model predicts that in mixtures of plant populations with different rooting depths the Relative Yield Total will exceed unity. Secondly, it predicts that in these
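
    The Relative Yield Total referred to here is the standard replacement-series index, RYT = Y1,mix/Y1,mono + Y2,mix/Y2,mono, with RYT > 1 indicating that the two populations exploit partly different resources (here, different rooting depths). A minimal sketch with hypothetical yields:

        def relative_yield_total(y1_mix: float, y1_mono: float, y2_mix: float, y2_mono: float) -> float:
            """RYT: sum of each population's mixture yield relative to its monoculture yield."""
            return y1_mix / y1_mono + y2_mix / y2_mono

        # Hypothetical dry-matter yields (g per pot): shallow-rooting vs. deep-rooting population.
        print(relative_yield_total(y1_mix=30.0, y1_mono=50.0, y2_mix=28.0, y2_mono=45.0))  # > 1.0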

  18. Ontology Driven Meta-Modeling of Service Oriented Architecture

    African Journals Online (AJOL)

    2018-03-05

  19. Business model driven service architecture design for enterprise application integration

    OpenAIRE

    Gacitua-Decar, Veronica; Pahl, Claus

    2008-01-01

    Increasingly, organisations are using a Service-Oriented Architecture (SOA) as an approach to Enterprise Application Integration (EAI), which is required for the automation of business processes. This paper presents an architecture development process which guides the transition from business models to a service-based software architecture. The process is supported by business reference models and patterns. Firstly, the business process models are enhanced with domain model elements, applicat...

  20. Does Categorization Method Matter in Exploring Volume-Outcome Relation? A Multiple Categorization Methods Comparison in Coronary Artery Bypass Graft Surgery Surgical Site Infection.

    Science.gov (United States)

    Yu, Tsung-Hsien; Tung, Yu-Chi; Chung, Kuo-Piao

    2015-08-01

    Volume-infection relation studies have been published for high-risk surgical procedures, although the conclusions remain controversial. Inconsistent results may be caused by inconsistent categorization methods, differing definitions of service volume, and different statistical approaches. The purpose of this study was to examine whether a relation exists between provider volume and coronary artery bypass graft (CABG) surgical site infection (SSI) using different categorization methods. A population-based cross-sectional multi-level study was conducted. A total of 10,405 patients who received CABG surgery between 2006 and 2008 in Taiwan were recruited. The outcome of interest was surgical site infection following CABG surgery. The associations between SSI and several patient, surgeon, and hospital characteristics were examined. Surgeons' and hospitals' service volumes were defined as the cumulative CABG volume in the year preceding each CABG operation and were categorized using three types of approaches: continuous, quartile, and k-means clustering. The results of multi-level mixed effects modeling showed that hospital volume had no association with SSI. Although the relation between surgeon volume and surgical site infection was negative, it was inconsistent among the different categorization methods. Categorization of service volume is an important issue in volume-infection studies. The findings of the current study suggest that different categorization methods might influence the relation between volume and SSI. The selection of an optimal cutoff point should be taken into account in future research.
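
    To make the three categorization approaches concrete, the sketch below treats a synthetic surgeon-volume variable continuously, bins it by quartiles, and clusters it with k-means; the data and the choice of three clusters are hypothetical, for illustration only.

        import numpy as np
        import pandas as pd
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(1)
        volume = rng.poisson(lam=40, size=200).astype(float)  # hypothetical prior-year CABG volumes

        df = pd.DataFrame({"volume": volume})                 # continuous: use the raw value directly
        df["quartile"] = pd.qcut(df["volume"], q=4, labels=["Q1", "Q2", "Q3", "Q4"])
        df["kmeans"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(df[["volume"]])

        print(df.groupby("quartile", observed=True)["volume"].agg(["min", "max", "count"]))
        print(df.groupby("kmeans")["volume"].agg(["min", "max", "count"]))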