WorldWideScience

Sample records for san file systems

  1. 75 FR 15429 - San Diego Gas & Electric Co.; California Independent System Operator; Notice of Filing

    Science.gov (United States)

    2010-03-29

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission San Diego Gas & Electric Co.; California Independent System Operator; Notice of Filing March 22, 2010. Take notice that on July 20, 2009, Avista Energy, Inc. pursuant to the...

  2. COSMOS (County of San Mateo Online System). A Searcher's Manual.

    Science.gov (United States)

    San Mateo County Superintendent of Schools, Redwood City, CA. Educational Resources Center.

    Operating procedures are explained for COSMOS (County of San Mateo Online System), a computerized information retrieval system designed for the San Mateo Educational Resources Center (SMERC), which provides interactive access to both ERIC and a local file of fugitive documents. COSMOS hardware and modem compatibility requirements are reviewed,…

  3. U.S. Department of Energy Best Practices Workshop on File Systems & Archives San Francisco, CA September 26-27, 2011 Position Paper

    Energy Technology Data Exchange (ETDEWEB)

    Hedges, R M

    2011-09-01

    This position paper discusses issues of usability of the large parallel file systems in the Livermore Computing Center. The primary uses of these file systems are the storage and access of data created during the course of a simulation running on an LC system. The Livermore Computing Center has multiple, globally mounted parallel file systems in each of its computing environments. The single biggest issue of file system usability that we have encountered through the years is maintaining continuous file system responsiveness. Given the back-end storage hardware that our file systems are provisioned with, it is easily possible for a particularly I/O-intensive application, or one with particularly inefficiently coded I/O operations, to bring the file system to an apparent halt. The practice that we will be addressing is the ability to identify, diagnose, analyze and optimize the I/O quickly and effectively.

  4. File System Virtual Appliances

    Science.gov (United States)

    2010-05-01

    ... file system virtual appliances ... (4) We analyze the sources of latency in traditional inter-VM communication techniques and present a novel energy- and ... "multiple file system implementations within the Sun UNIX kernel" [52]. This was achieved through two techniques. First, outside the file system layer ...

  5. 75 FR 27338 - San Diego Gas & Electric Company; Notice of Filing

    Science.gov (United States)

    2010-05-14

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission San Diego Gas & Electric Company; Notice of Filing May 7, 2010. Take notice that on May 4, 2010, The California Power Exchange Corporation filed a refund report, pursuant to the...

  6. Formalizing a Hierarchical File System

    NARCIS (Netherlands)

    Hesselink, Wim H.; Lali, M.I.

    2009-01-01

    In this note, we define an abstract file system as a partial function from (absolute) paths to data. Such a file system determines the set of valid paths. It allows the file system to be read and written at a valid path, and it allows the system to be modified by the Unix operations for removal (rm)

  7. Formalizing a hierarchical file system

    NARCIS (Netherlands)

    Hesselink, Wim H.; Lali, Muhammad Ikram

    2012-01-01

    An abstract file system is defined here as a partial function from (absolute) paths to data. Such a file system determines the set of valid paths. It allows the file system to be read and written at a valid path, and it allows the system to be modified by the Unix operations for creation, removal, a
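    The model just described, a file system as a partial function from absolute paths to data, is easy to make concrete. Below is a minimal dictionary-based sketch; the operation names (create, read, write, rm) are illustrative and not the paper's exact signatures:

```python
# Sketch of an abstract file system as a partial function from
# (absolute) paths to data, in the spirit of Hesselink & Lali.
# A path is modeled as a tuple of names; the function is "defined"
# exactly at the valid paths.

class AbstractFS:
    def __init__(self):
        self._map = {}  # partial function: path (tuple of names) -> data

    def valid(self, path):
        # A path is valid iff the partial function is defined there.
        return path in self._map

    def create(self, path, data):
        self._map[path] = data

    def read(self, path):
        if not self.valid(path):
            raise FileNotFoundError(path)
        return self._map[path]

    def write(self, path, data):
        if not self.valid(path):
            raise FileNotFoundError(path)
        self._map[path] = data

    def rm(self, path):
        # Unix-style removal: drop the path itself and, if it is a
        # directory prefix, everything below it.
        self._map = {p: d for p, d in self._map.items()
                     if p != path and p[:len(path)] != path}
```

Reading and writing are only permitted at valid paths, matching the definition in the abstract.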

  8. Research on Metadata Caching for a Cluster File System in a SAN Environment

    Institute of Scientific and Technical Information of China (English)

    许祥; 罗宇

    2012-01-01

    To exploit the storage-access advantages of a SAN environment, this paper describes a cluster file system prototyped on the CIFS protocol. Based on an analysis of the special relationship between data and metadata in this architecture, which are mutually independent yet mutually constraining, a metadata caching method is proposed. To reduce the performance loss that metadata retrieval imposes on data reads and writes, the original metadata management method is abandoned so that metadata can be cached on the client side as far as possible. Situations that may cause metadata inconsistency between client and server are analyzed and a solution is proposed. Finally, the approach is preliminarily validated through testing.

  9. Mixed-Media File Systems

    NARCIS (Netherlands)

    Bosch, Hendrikus Gerardus Petrus

    1999-01-01

    This thesis addresses the problem of implementing mixed-media storage systems. In this work a mixed-media file system is defined to be a system that stores both conventional (best-effort) file data and real-time continuous-media data. Continuous-media data is usually bulky, and servers storing and r

  10. PFS: A Distributed and Customizable File System

    NARCIS (Netherlands)

    Bosch, Peter

    1996-01-01

    In this paper we present our ongoing work on the Pegasus File System (PFS), a distributed and customizable file system that can be used for off-line file system experiments and on-line file system storage. PFS is best described as an object-oriented component library from which either a true file sy

  11. Introduction to Hadoop Distributed File System

    Directory of Open Access Journals (Sweden)

    Vaibhav Gopal korat

    2012-04-01

    Full Text Available HDFS is a distributed file system designed to hold very large amounts of data (terabytes or even petabytes) and provide high-throughput access to this information. Files are stored in a redundant fashion across multiple machines to ensure their durability against failure and their high availability to highly parallel applications. This paper gives a step-by-step introduction from file systems to distributed file systems and then to the Hadoop Distributed File System. Section I introduces what a file system is, the need for file systems, conventional file systems and their advantages, the need for a distributed file system, what a distributed file system is, and the benefits of distributed file systems. It also covers the analysis of large datasets and compares MapReduce with the approaches of the RDBMS, HPC, and Grid Computing communities, which have been doing large-scale data processing for years. Section II introduces the concept of the Hadoop Distributed File System. Lastly, Section III contains the conclusion, followed by the references.
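    The redundant, block-oriented storage the abstract describes can be illustrated with a toy placement routine. This is a sketch only: real HDFS uses rack-aware replica placement, while here replicas are simply round-robined across hypothetical datanodes:

```python
# Toy sketch of redundant block placement in the spirit of HDFS.
# Real HDFS places replicas rack-aware; here we round-robin them.
# All names, sizes, and the policy itself are illustrative.

BLOCK_SIZE = 4   # bytes per block (tiny, for demonstration)
REPLICATION = 3  # copies of each block

def place_blocks(data: bytes, datanodes: list) -> dict:
    """Split data into blocks and assign each block to REPLICATION nodes."""
    placement = {}
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    for idx, block in enumerate(blocks):
        # Replica r of block idx goes to node (idx + r) mod N, so the
        # block survives the loss of any REPLICATION - 1 datanodes.
        nodes = [datanodes[(idx + r) % len(datanodes)]
                 for r in range(REPLICATION)]
        placement[idx] = (block, nodes)
    return placement
```

With four datanodes, each block of a file ends up on three distinct machines, which is the durability property the abstract refers to.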

  12. Hybrid energy system cost analysis: San Nicolas Island, California

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, T.L.; McKenna, E.

    1996-07-01

    This report analyzes the local wind resource and evaluates the costs and benefits of supplementing the current diesel-powered energy system on San Nicolas Island, California (SNI), with wind turbines. In Section 2.0 the SNI site, naval operations, and current energy system are described, as are the data collection and analysis procedures. Section 3.0 summarizes the wind resource data and analyses that were presented in NREL/TP 442-20231. Sections 4.0 and 5.0 present the conceptual design and cost analysis of a hybrid wind and diesel energy system on SNI, with conclusions following in Section 6. Appendix A presents summary pages of the hybrid system spreadsheet model, and Appendix B contains input and output files for the HYBRID2 program.

  13. Design and Implementation of Ceph: A Scalable Distributed File System

    Energy Technology Data Exchange (ETDEWEB)

    Weil, S A; Brandt, S A; Miller, E L; Long, D E; Maltzahn, C

    2006-04-19

    File system designers continue to look to new architectures to improve scalability. Object-based storage diverges from server-based (e.g. NFS) and SAN-based storage systems by coupling processors and memory with disk drives, delegating low-level allocation to object storage devices (OSDs) and decoupling I/O (read/write) from metadata (file open/close) operations. Even recent object-based systems inherit decades-old architectural choices going back to early UNIX file systems, however, limiting their ability to effectively scale to hundreds of petabytes. We present Ceph, a distributed file system that provides excellent performance and reliability with unprecedented scalability. Ceph maximizes the separation between data and metadata management by replacing allocation tables with a pseudo-random data distribution function (CRUSH) designed for heterogeneous and dynamic clusters of unreliable OSDs. We leverage OSD intelligence to distribute data replication, failure detection and recovery with semi-autonomous OSDs running a specialized local object storage file system (EBOFS). Finally, Ceph is built around a dynamic distributed metadata management cluster that provides extremely efficient metadata management that seamlessly adapts to a wide range of general purpose and scientific computing file system workloads. We present performance measurements under a variety of workloads that show superior I/O performance and scalable metadata management (more than a quarter million metadata ops/sec).
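    The key idea behind CRUSH, that any client can compute an object's location from a function rather than look it up in an allocation table, can be conveyed with a much simpler stand-in. The sketch below uses rendezvous (highest-random-weight) hashing; it is not the actual CRUSH algorithm, which additionally handles device weights, hierarchies, and failure domains:

```python
# Much-simplified illustration of table-free data placement in the
# spirit of CRUSH. This is rendezvous hashing, NOT the real CRUSH
# algorithm; it only shows why no allocation table is needed.

import hashlib

def place(object_id: str, osds: list, replicas: int = 2) -> list:
    """Deterministically choose `replicas` OSDs for an object."""
    def score(osd):
        # Hash the (object, OSD) pair; every client computes the same
        # ranking, so placement needs no central lookup table.
        h = hashlib.sha256(f"{object_id}:{osd}".encode()).hexdigest()
        return int(h, 16)
    return sorted(osds, key=score, reverse=True)[:replicas]
```

Because the ranking depends only on the object id and the OSD list, adding or removing an OSD moves only the objects that ranked it highly, a property CRUSH also provides for dynamic clusters.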

  14. Dynamic Metadata Management in Semantic File Systems

    Directory of Open Access Journals (Sweden)

    T. Anand

    2015-03-01

    Full Text Available The growth in data volume and complexity poses great challenges for file systems. To address these challenges, an innovative namespace management scheme is urgently needed to deliver both ease and efficiency of data access. For scalability, each server makes only local, autonomous decisions about relocation for load balancing. Associative access is provided by a conventional extension to existing tree-structured file system protocols, and by protocols designed specifically for content-based access. Rapid attribute-based access to file system contents is achieved by automatic extraction and indexing of key properties of file system objects. The automatic indexing of files and directories is called "semantic" because user-programmable transducers use information about the semantics of updated file system objects to extract the properties for indexing. Experimental results from a semantic file system implementation support the thesis that semantic file systems present a more effective storage abstraction than traditional tree-structured file systems for data sharing and command-level programming. The semantic file system is implemented as middleware on top of conventional file systems and works orthogonally with hierarchical directory trees. The semantic relationships and file groups recognized in file systems can also be used to facilitate file prefetching, among other system-level optimizations. Extensive trace-driven experiments on our prototype implementation validate its effectiveness and efficiency.

  15. Cut-and-paste file-systems: integrating simulators and file-systems

    NARCIS (Netherlands)

    Bosch, Peter; Mullender, Sape J.

    1996-01-01

    We have implemented an integrated and configurable file system called the PFS and a trace-driven file-system simulator called Patsy. Patsy is used for off-line analysis of file-system algorithms, PFS is used for on-line file-system data storage. Algorithms are first analyzed in Patsy and when we are

  16. Solar sanitary system (SOL-SAN)

    Energy Technology Data Exchange (ETDEWEB)

    Cobb, J.C.

    1996-11-01

    Ordinary composting toilets, because of cooling by evaporation, do not heat the product (humus) hot enough to kill all pathogenic viruses, bacteria, or parasite eggs and cysts. The SOL-SAN system uses direct radiation to pasteurize incoming river water for drinking and also, separately, to pasteurize and dry the humus, and to pasteurize the effluent gray/brown water. Work is in progress on simple fool-proof methods of insuring that the water will not flow out unless it has been pasteurized. Heat exchangers recapture the heat from these very hot pasteurized liquids, thereby warming more in-coming water for washing, which is important for preventing transmission of pathogenic microbes. When pasteurized, the humus and gray/brown water can safely be recycled to fertilize and water the family vegetable garden. Thus no sewer would be needed, and the vegetables or fish would grow well. Widespread use of the SOL-SAN system would save water and nutrients, reduce the prevalence of infectious diseases, improve the nutrition and vitality of the population, and save the large fraction of human food now consumed by parasites.

  17. Cut-and-paste file-systems: integrating simulators and file systems

    NARCIS (Netherlands)

    Bosch, Peter; Mullender, Sape J.

    1995-01-01

    We have implemented an integrated and configurable file system called the Pegasus file system (PFS) and a trace-driven file-system simulator called Patsy. Patsy is used for off-line analysis of file-system algorithms, PFS is used for on-line file-system data storage. Algorithms are first analyzed in Pa

  18. Verifying compiled file system code

    OpenAIRE

    Mühlberg, Jan Tobias; Lüttgen, Gerald

    2011-01-01

    This article presents a case study on retrospective verification of the Linux Virtual File System (VFS), which is aimed at checking violations of API usage rules and memory properties. Since VFS maintains dynamic data structures and is written in a mixture of C and inlined assembly, modern software model checkers cannot be applied. Our case study centres around our novel automated software verification tool, the SOCA Verifier, which symbolically executes and analyses compi...

  19. A Metadata-Rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2009-01-07

    Despite continual improvements in the performance and reliability of large scale file systems, the management of file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, metadata, and file relationships are all first-class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS includes Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.

  20. File Transfer Algorithm for Autonomous Decentralized System

    Institute of Scientific and Technical Information of China (English)

    GUI Xun; TAN Yong-dong; Qian Qing-quan

    2008-01-01

    A file transfer algorithm based on ADP (autonomous decentralized protocol) was proposed to solve the problem that the ADS (autonomous decentralized system) middleware (NeXUS/Dlink) lacks file transfer functions for Windows. The algorithm realizes peer-to-peer file transfer, one-to-N inquiry/multi-response file transfer, and one-to-N file distribution in the same data field based on communication patterns provided by the ADP. The peer-to-peer file transfer is implemented through a peer-to-peer communication path; one-to-N inquiry/multi-response file transfer and one-to-N file distribution are implemented through multicast communication. In this algorithm, a file to be transferred is named with a GUID (globally unique identifier), every data packet is marked with a sequence number, and parallel file receiving is implemented by caching DPOs (data processing objects) and multithreading technologies. The algorithm is applied in a simulation system of the decentralized control platform, and the test results and long-term stable running prove the feasibility of the algorithm.
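    The reassembly scheme the abstract describes, a GUID per file plus a sequence number per packet, with received packets cached until the file is complete, can be sketched as follows (illustrative names only; this is not the NeXUS/Dlink implementation):

```python
# Sketch of sequence-numbered packet reassembly as described in the
# abstract: each file is named with a GUID, each packet carries a
# sequence number, and out-of-order packets are cached until the
# file is complete. Packet layout and names are assumptions.

import uuid

def make_packets(data: bytes, chunk: int = 4):
    guid = str(uuid.uuid4())  # globally unique file identifier
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    total = len(chunks)
    return [(guid, seq, total, payload) for seq, payload in enumerate(chunks)]

class Receiver:
    def __init__(self):
        self.cache = {}  # guid -> {seq: payload}

    def receive(self, packet):
        guid, seq, total, payload = packet
        self.cache.setdefault(guid, {})[seq] = payload
        if len(self.cache[guid]) == total:  # every packet has arrived
            parts = self.cache.pop(guid)
            return b"".join(parts[s] for s in range(total))
        return None  # file not yet complete
```

Because each packet is self-describing (GUID, sequence number, total count), packets for several files can be received in parallel and in any order.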

  1. Clockwise: a mixed-media file system

    NARCIS (Netherlands)

    Bosch, Peter; Jansen, Pierre G.; Mullender, Sape J.

    1999-01-01

    This paper presents Clockwise, a mixed-media file system. The primary goal of Clockwise is to provide a storage architecture that supports the storage and retrieval of best-effort and real-time file system data. Clockwise provides an abstraction called a dynamic partition that groups lists of relate

  2. Athos: Efficient Authentication of Outsourced File Systems

    DEFF Research Database (Denmark)

    Triandopoulos, Nikolaos; Goodrich, Michael T.; Papamanthou, Charalampos

    2008-01-01

    We study the problem of authenticated storage, where we wish to construct protocols that allow one to outsource any complex file system to an untrusted server and yet ensure the file system's integrity. We introduce Athos, a new, platform-independent and user-transparent architecture for authenticated outsourced storage. Using light-weight cryptographic primitives and efficient data-structuring techniques, we design authentication schemes that allow a client to efficiently verify that the file system is fully consistent with the exact history of updates and queries requested by the client. In Athos, file-system operations are verified in time that is logarithmic in the size of the file system using optimal storage complexity: constant storage overhead at the client and asymptotically no extra overhead at the server. We provide a prototype implementation of Athos validating its performance and its authentication...

  3. Experiences on File Systems: Which is the best file system for you?

    CERN Document Server

    Blomer, J

    2015-01-01

    The distributed file system landscape is scattered. Besides a plethora of research file systems, there is also a large number of production grade file systems with various strengths and weaknesses. The file system, as an abstraction of permanent storage, is appealing because it provides application portability and integration with legacy and third-party applications, including UNIX utilities. On the other hand, the general and simple file system interface makes it notoriously difficult for a distributed file system to perform well under a variety of different workloads. This contribution provides a taxonomy of commonly used distributed file systems and points out areas of research and development that are particularly important for high-energy physics.

  4. Protecting your files on the AFS file system

    CERN Multimedia

    2011-01-01

    The Andrew File System is a world-wide distributed file system linking hundreds of universities and organizations, including CERN. Files can be accessed from anywhere, via dedicated AFS client programs or via web interfaces that export the file contents on the web. Due to the ease of access to AFS it is of utmost importance to properly protect access to sensitive data in AFS. As the use of AFS access control mechanisms is not obvious to all users, passwords, private SSH keys or certificates have been exposed in the past. In one specific instance, this also led to bad publicity due to a journalist accessing supposedly "private" AFS folders (SonntagsZeitung 2009/11/08). This problem does not only affect the individual user but also has a bad impact on CERN's reputation when it comes to IT security. Therefore, all departments and LHC experiments agreed in April 2010 to apply more stringent folder protections to all AFS user folders. The goal of this data protection policy is to assist users in...

  5. Protecting your files on the DFS file system

    CERN Multimedia

    Computer Security Team

    2011-01-01

    The Windows Distributed File System (DFS) hosts user directories for all NICE users plus much more data. Files can be accessed from anywhere, via a dedicated web portal (http://cern.ch/dfs). Due to the ease of access to DFS within CERN it is of utmost importance to properly protect access to sensitive data. As the use of DFS access control mechanisms is not obvious to all users, passwords, certificates or sensitive files might get exposed. At least this happened in the past to the Andrew File System (AFS, the Linux equivalent of DFS) and led to bad publicity due to a journalist accessing supposedly "private" AFS folders (SonntagsZeitung 2009/11/08). This problem does not only affect the individual user but also has a bad impact on CERN's reputation when it comes to IT security. Therefore, all departments and LHC experiments agreed recently to apply more stringent protections to all DFS user folders. The goal of this data protection policy is to assist users in pro...

  6. Small file aggregation in a parallel computing system

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang

    2014-09-02

    Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
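    The offset/length metadata scheme described above is straightforward to sketch. The helper names below are hypothetical, not taken from the patent:

```python
# Minimal sketch of small-file aggregation: concatenate many small
# files into one aggregated blob and record (offset, length) metadata
# per file, which is all that is needed to unpack any one of them.
# Illustrative only, not the patented implementation.

def aggregate(files: dict) -> tuple:
    """files: name -> bytes. Returns (aggregated_bytes, metadata)."""
    blob = b""
    metadata = {}
    for name, data in files.items():
        metadata[name] = (len(blob), len(data))  # (offset, length)
        blob += data
    return blob, metadata

def unpack(blob: bytes, metadata: dict, name: str) -> bytes:
    """Recover one original file from the aggregated blob."""
    offset, length = metadata[name]
    return blob[offset:offset + length]
```

In a parallel computing system this turns many tiny writes into one large sequential write, with the metadata kept small (two integers per file).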

  7. NCPC Central Files Information System (CFIS)

    Data.gov (United States)

    National Capital Planning Commission — This dataset contains records from NCPC's Central Files Information System (CFIS), which is a comprehensive database of projects submitted to NCPC for design review...

  8. 77 FR 45348 - Combined Notice of Filings #2

    Science.gov (United States)

    2012-07-31

    ... following electric rate filings: Docket Numbers: ER12-2055-001. Applicants: San Gorgonio Farms, Inc... filings are accessible in the Commission's eLibrary system by clicking on the links or querying the docket...

  9. System for Multicast File Transfer

    Directory of Open Access Journals (Sweden)

    Dorin Custura

    2012-03-01

    Full Text Available The distribution of big files over the network from a single source to a large number of recipients is not efficient using standard client-server or even peer-to-peer file transfer protocols. Thus, the transfer of a hierarchy of big files to multiple destinations can be optimized in terms of bandwidth usage and data storage reads by using multicast networking. In order to achieve that, a simple application-layer protocol can be imagined. It uses multicast UDP as transport and provides a mechanism for data ordering and retransmission. Some security problems are also considered in this protocol, because at this time the Internet standards supporting multicast security are still in the development stage.
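    The ordering-and-retransmission mechanism the abstract mentions can be illustrated without sockets: packets carry sequence numbers, and the receiver tracks gaps so it can ask the sender to retransmit what is missing. The class below is a sketch under assumed names, since the article does not specify the protocol at this level of detail:

```python
# Socket-free sketch of receiver-side bookkeeping for an application-
# layer multicast file transfer: packets carry sequence numbers; the
# receiver records what arrived, reports gaps (for retransmission
# requests), and reassembles the file once nothing is missing.

class MulticastReceiver:
    def __init__(self, total_packets: int):
        self.total = total_packets
        self.received = {}  # seq -> payload

    def on_packet(self, seq: int, payload: bytes):
        # Duplicate deliveries (common with multicast) are harmless.
        self.received[seq] = payload

    def missing(self) -> list:
        """Sequence numbers still to be requested from the sender."""
        return [s for s in range(self.total) if s not in self.received]

    def assemble(self) -> bytes:
        assert not self.missing(), "retransmissions still outstanding"
        return b"".join(self.received[s] for s in range(self.total))
```

This gap-reporting ("NACK") style is the usual way reliability is layered over unreliable multicast UDP, since per-receiver acknowledgments would not scale to many recipients.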

  10. Tuning HDF5 for Lustre File Systems

    Energy Technology Data Exchange (ETDEWEB)

    Howison, Mark; Koziol, Quincey; Knaak, David; Mainzer, John; Shalf, John

    2010-09-24

    HDF5 is a cross-platform parallel I/O library that is used by a wide variety of HPC applications for the flexibility of its hierarchical object-database representation of scientific data. We describe our recent work to optimize the performance of the HDF5 and MPI-IO libraries for the Lustre parallel file system. We selected three different HPC applications to represent the diverse range of I/O requirements, and measured their performance on three different systems to demonstrate the robustness of our optimizations across different file system configurations and to validate our optimization strategy. We demonstrate that the combined optimizations improve HDF5 parallel I/O performance by up to 33 times, in some cases running close to the achievable peak performance of the underlying file system, and demonstrate scalable performance up to 40,960-way concurrency.

  11. San Juan implements one-man survey system

    Energy Technology Data Exchange (ETDEWEB)

    Andrae, S. (San Juan Coal Co., Waterflow, NM (United States))

    1994-07-01

    Describes the one-man survey system which has been implemented at the San Juan surface mine in northwestern New Mexico. The Geodimeter System 4000, produced by Geotronics of Sweden, consists of a tripod-mounted electronic total station and a range rod-mounted remote positioning unit (RPU). A radio link between the tripod-mounted total station and the RPU enables one person to control the instrument and collect data. At San Juan the system has been used to survey overburden removal and mining. Only in cases where the pits become very long, and control cannot be set in the pit, is a two-person crew used. The system is useful for surveys of compliance projects and lends itself well to regrading work. 3 photos.

  12. Solar energy system performance evaluation-seasonal report for Elcam San Diego, San Diego, California

    Science.gov (United States)

    1980-01-01

    The solar energy system, Elcam San Diego, was designed to supply domestic hot water heating for a single family residence located in Encinitas, California. System description, performance assessment, operating energy, energy savings, maintenance, and conclusions are presented. The system is a 'Sunspot' two tank cascade type, where solar energy is supplied to either a 66 gallon preheat tank (solar storage) or a 40 gallon domestic hot water tank. Water is pumped directly from one of the two tanks, through the 65 square feet collector array and back into the same tank. Freeze protection is provided by automatically circulating hot water from the hot water tank through the collectors and exposed plumbing when freezing conditions exist. Auxiliary energy is supplied by natural gas. Analysis is based on instrumented system data monitored and collected for one full season of operation.

  13. Collective operations in a file system based execution model

    Science.gov (United States)

    Shinde, Pravin; Van Hensbergen, Eric

    2013-02-19

    A mechanism is provided for group communications using a MULTI-PIPE synthetic file system. A master application creates a multi-pipe synthetic file in the MULTI-PIPE synthetic file system, the master application indicating a multi-pipe operation to be performed. The master application then writes a header-control block of the multi-pipe synthetic file specifying at least one of a multi-pipe synthetic file system name, a message type, a message size, a specific destination, or a specification of the multi-pipe operation. Any other application participating in the group communications then opens the same multi-pipe synthetic file. A MULTI-PIPE file system module then implements the multi-pipe operation as identified by the master application. The master application and the other applications then either read or write operation messages to the multi-pipe synthetic file and the MULTI-PIPE synthetic file system module performs appropriate actions.

  14. Public Document Room file classification system

    Energy Technology Data Exchange (ETDEWEB)

    1982-06-01

    This listing contains detailed descriptions of the file classification system for documents available from the Public Document Room (PDR) of the US Nuclear Regulatory Commission. As a public service branch of the agency, the PDR maintains facilities for receiving, processing, storing, and retrieving documents which NRC generates or receives in performing its regulatory function. Unlike a library, the PDR does not maintain collections of formally published materials, such as books, monographs, serials, periodicals, or general indexes. The documents on file at the PDR can be reports, written records of meetings (transcripts), existing or proposed regulations, the text of licenses or their amendments, and correspondence.

  15. 76 FR 70651 - Fee for Filing a Patent Application Other Than by the Electronic Filing System

    Science.gov (United States)

    2011-11-15

    .... Information concerning electronic filing via EFS-Web is available from the USPTO's Patent Electronic Business... the Electronic Filing System AGENCY: United States Patent and Trademark Office, Commerce. ACTION... for a design, plant, or provisional application, that is not filed by electronic means as...

  16. PCT Reforms Its Patent Filing System

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    As of January 1, 2004, the first critical steps in seeking patent protection in multiple countries will be easier as a result of reforms to the international patent filing system. A series of reforms to the World Intellectual Property Organisation's (WIPO) Patent Cooperation Treaty (PCT), ranging from a new simplified system of designating countries in which patent protection is sought to an enhanced search and preliminary examination system, will simplify the complex procedure of obtaining patent protection in severa...

  17. File Assignment Policy in Network Storage System

    Institute of Scientific and Technical Information of China (English)

    Cao Qiang; Xie Chang-sheng

    2003-01-01

    Network storage increases the capacity and scalability of a storage system, improves data availability, and enables the sharing of data among clients. However, even as developing network technology reduces the performance gap between disk and network, mismatched policies and access patterns can significantly reduce network storage performance. The strategy of data placement is therefore an important factor in the performance of the overall system. In this paper, two file-assignment algorithms are presented. One is Greedy partition, which aims at load balance across all NADs (Network-Attached Disks). The other is Sort partition, which tries to minimize the variance of service time in each NAD. Moreover, we also compare the performance of the two algorithms in a practical environment. Our experimental results show that when the size distribution (load characteristics) of the assigned files is closer and larger, Sort partition provides consistently better response times than the Greedy algorithm. However, when the range of assigned file sizes is wider, with more small files and higher access rates, the Greedy algorithm has superior performance compared with Sort partition off-line.
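    The two policies can be sketched directly: Greedy partition assigns each file to the currently least-loaded NAD, while Sort partition sorts files by size so that each NAD serves similarly sized files. The abstract gives no pseudocode, so the details below are illustrative:

```python
# Hedged sketch of the two file-assignment policies described above.
# The abstract gives no pseudocode; sizes stand in for files.

def greedy_partition(file_sizes, n_disks):
    """Assign each file to the least-loaded disk (balances total load)."""
    disks = [[] for _ in range(n_disks)]
    loads = [0] * n_disks
    for size in sorted(file_sizes, reverse=True):  # largest first
        i = loads.index(min(loads))  # currently least-loaded NAD
        disks[i].append(size)
        loads[i] += size
    return disks

def sort_partition(file_sizes, n_disks):
    """Sort files by size and give each disk a contiguous run, so every
    NAD serves similarly sized files (low service-time variance)."""
    ordered = sorted(file_sizes)
    per_disk = -(-len(ordered) // n_disks)  # ceiling division
    return [ordered[i:i + per_disk] for i in range(0, len(ordered), per_disk)]
```

Greedy partition equalizes aggregate load, at the cost of mixing large and small files on one disk; Sort partition keeps each disk's requests homogeneous, which is why it wins when file sizes cluster tightly.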

  19. The global unified parallel file system (GUPFS) project: FY 2002 activities and results

    Energy Technology Data Exchange (ETDEWEB)

    Butler, Gregory F.; Lee, Rei Chi; Welcome, Michael L.

    2003-04-07

    The Global Unified Parallel File System (GUPFS) project is a multiple-phase, five-year project at the National Energy Research Scientific Computing (NERSC) Center to provide a scalable, high performance, high bandwidth, shared file system for all the NERSC production computing and support systems. The primary purpose of the GUPFS project is to make it easier to conduct advanced scientific research using the NERSC systems. This is to be accomplished through the use of a shared file system providing a unified file namespace, operating on consolidated shared storage that is directly accessed by all the NERSC production computing and support systems. During its first year, FY 2002, the GUPFS project focused on identifying, testing, and evaluating existing and emerging shared/cluster file system, SAN fabric, and storage technologies; identifying NERSC user input/output (I/O) requirements, methods, and mechanisms; and developing appropriate benchmarking methodologies and benchmark codes for a parallel environment. This report presents the activities and progress of the GUPFS project during its first year, the results of the evaluations conducted, and plans for near-term and longer-term investigations.

  20. Analyzing Service Rates for File Transfers in Peer-to-peer File Sharing Systems

    Institute of Scientific and Technical Information of China (English)

    WANG Kai; PAN Li; LI Jian-hua

    2008-01-01

    When examining the file transfer performance in a peer-to-peer file sharing system, a fundamental problem is how to describe the service rate for a file transfer.In this paper, the problem is examined by analyzing the distribution of server-like nodes' upstream-bandwidth among their concurrent transfers.A sufficient condition for the service rate, what a receiver obtains for downloading a file, to asymptotically be uniform is presented.On the aggregate service rate for transferring a file in a system, a sufficient condition for it to asymptotically follow a Zipf distribution is presented.These asymptotic equalities are both in the mean square sense.These analyses and the sufficient conditions provide a mathematic base for modeling file transfer processes in peer-to-peer file sharing systems.
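
The asymptotic Zipf form of the aggregate service rate can be illustrated with a small sketch. The exponent `s` and the allocation function below are illustrative assumptions of ours, not taken from the paper's analysis:

```python
def zipf_rates(n_files, total_rate, s=1.0):
    """Allocate an aggregate service rate across files so that the i-th
    most popular file receives a rate proportional to 1/i**s, the Zipf
    form the analysis derives asymptotically."""
    weights = [1.0 / (i ** s) for i in range(1, n_files + 1)]
    norm = sum(weights)
    return [total_rate * w / norm for w in weights]
```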

  1. Distributed Data Management and Distributed File Systems

    CERN Document Server

    Girone, Maria

    2015-01-01

    The LHC program has been successful in part due to the globally distributed computing resources used for collecting, serving, processing, and analyzing the large LHC datasets. The introduction of distributed computing early in the LHC program spawned the development of new technologies and techniques to synchronize information and data between physically separated computing centers. Two of the most challenging services are the distributed file systems and the distributed data management systems. In this paper I will discuss how we have evolved from local site services to more globally independent services in the areas of distributed file systems and data management and how these capabilities may continue to evolve into the future. I will address the design choices, the motivations, and the future evolution of the computing systems used for High Energy Physics.

  2. Electronic Document Management Using Inverted Files System

    Science.gov (United States)

    Suhartono, Derwin; Setiawan, Erwin; Irwanto, Djon

    2014-03-01

    The number of documents is increasing very fast, and those documents exist not only in paper-based but also in electronic-based form. This can be seen from the data sample taken by the SpringerLink publisher in 2010, which showed an increase in the number of digital document collections from 2003 to mid-2010. How to manage them well therefore becomes an important need. This paper describes a new method of managing documents called the inverted file system. For electronic-based documents, the inverted file system is used so that they can be searched over the Internet using a search engine. It improves both the document search mechanism and the document storage mechanism.
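
The inverted-file idea the paper builds on can be sketched in a few lines. The tokenization and the `search` helper here are simplifications we introduce for illustration, not the paper's implementation:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Return ids of documents containing every query term (AND search)."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = set(index.get(terms[0], set()))
    for t in terms[1:]:
        result &= index.get(t, set())
    return result
```

Because lookups go term-to-documents rather than scanning every document, queries touch only the postings for the query terms.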

  3. Electronic Document Management Using Inverted Files System

    Directory of Open Access Journals (Sweden)

    Suhartono Derwin

    2014-03-01

    Full Text Available The number of documents is increasing very fast, and those documents exist not only in paper-based but also in electronic-based form. This can be seen from the data sample taken by the SpringerLink publisher in 2010, which showed an increase in the number of digital document collections from 2003 to mid-2010. How to manage them well therefore becomes an important need. This paper describes a new method of managing documents called the inverted file system. For electronic-based documents, the inverted file system is used so that they can be searched over the Internet using a search engine. It improves both the document search mechanism and the document storage mechanism.

  4. A History of the Andrew File System

    CERN Document Server

    CERN. Geneva; Altman, Jeffrey

    2011-01-01

    Derrick Brashear and Jeffrey Altman will present a technical history of the evolution of Andrew File System starting with the early days of the Andrew Project at Carnegie Mellon through the commercialization by Transarc Corporation and IBM and a decade of OpenAFS. The talk will be technical with a focus on the various decisions and implementation trade-offs that were made over the course of AFS versions 1 through 4, the development of the Distributed Computing Environment Distributed File System (DCE DFS), and the course of the OpenAFS development community. The speakers will also discuss the various AFS branches developed at the University of Michigan, Massachusetts Institute of Technology and Carnegie Mellon University.

  5. Design and Implementation of a Metadata-rich File System

    Energy Technology Data Exchange (ETDEWEB)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
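
A toy version of the graph data model described above, with files, user-defined attributes, and typed relationships as first-class objects, might look like the following. The class and method names are invented here for illustration and are not QFS's actual API:

```python
import collections

class FileGraph:
    """Minimal sketch of a graph data model over files: nodes carry
    user-defined attributes, edges carry a relationship type."""
    def __init__(self):
        self.attrs = {}                              # file -> {attr: value}
        self.edges = collections.defaultdict(list)   # file -> [(rel, file)]

    def add_file(self, name, **attrs):
        self.attrs[name] = attrs

    def relate(self, src, rel, dst):
        """Record a typed relationship between two files."""
        self.edges[src].append((rel, dst))

    def related(self, src, rel):
        """Follow edges of a given relationship type from a file."""
        return [d for r, d in self.edges[src] if r == rel]
```

A Quasar-style query such as "find the plots derived from run1.dat" then reduces to an edge traversal rather than a join against an external database.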

  6. Digital Libraries: The Next Generation in File System Technology.

    Science.gov (United States)

    Bowman, Mic; Camargo, Bill

    1998-01-01

    Examines file sharing within corporations that use wide-area, distributed file systems. Applications and user interactions strongly suggest that the addition of services typically associated with digital libraries (content-based file location, strongly typed objects, representation of complex relationships between documents, and extrinsic…

  7. Optimizing Input/Output Using Adaptive File System Policies

    Science.gov (United States)

    Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.

    1996-01-01

    Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.
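
A minimal sketch of classification-based policy selection, assuming a trivial sequential-vs-random classifier; the detection rule and the policy names are illustrative assumptions, not the paper's classifier:

```python
def choose_policy(offsets):
    """Classify a trace of file offsets and select caching/prefetching
    policies accordingly: a constant positive stride is treated as a
    sequential pattern that benefits from read-ahead."""
    steps = [b - a for a, b in zip(offsets, offsets[1:])]
    if steps and all(s == steps[0] and s > 0 for s in steps):
        return "sequential", {"prefetch": True, "cache": "read-ahead"}
    return "random", {"prefetch": False, "cache": "lru"}
```

In a full system, performance sensors would then tune parameters such as the read-ahead depth for the specific environment.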

  8. Benchmarking File System Benchmarking: It *IS* Rocket Science

    OpenAIRE

    Seltzer, Margo I.; Tarasov, Vasily; Bhanage, Saumitra; Zadok, Erez

    2011-01-01

    The quality of file system benchmarking has not improved in over a decade of intense research spanning hundreds of publications. Researchers repeatedly use a wide range of poorly designed benchmarks, and in most cases, develop their own ad-hoc benchmarks. Our community lacks a definition of what we want to benchmark in a file system. We propose several dimensions of file system benchmarking and review the wide range of tools and techniques in widespread use. We experimentally show that even t...

  9. 78 FR 21930 - Aquenergy Systems, Inc.; Notice of Intent To File License Application, Filing of Pre-Application...

    Science.gov (United States)

    2013-04-12

    ... Energy Regulatory Commission Aquenergy Systems, Inc.; Notice of Intent To File License Application... Filing: Notice of Intent to File License Application and Request to Use the Traditional Licensing Process. b. Project No.: P-2428-004. c. Date Filed: November 11, 2012. d. Submitted by: Aquenergy...

  10. A Reputation System with Anti-Pollution Mechanism in P2P File Sharing Systems

    OpenAIRE

    Qi Mei; Guo Yajun; Yan Huifang

    2009-01-01

    File pollution has become a very serious problem in peer-to-peer file sharing systems because it greatly reduces their effectiveness. Users who download polluted files not only waste bandwidth but are also likely to share the polluted files without checking them. If these polluted files carry a virus, Trojan horse, or other malicious code, the loss to users can be disastrous. Much research has been done on reputation-based anti-pollution mechanisms. Peer reputation systems and...

  12. SCR Algorithm: Saving/Restoring States of File Systems

    Institute of Scientific and Technical Information of China (English)

    魏晓辉; 鞠九滨

    2000-01-01

    Fault-tolerance is very important in cluster computing and has been implemented in many well-known cluster-computing systems using checkpoint/restart mechanisms. But existing checkpointing algorithms cannot restore the state of a file system when rolling back the execution of a program, so there are many restrictions on file access in existing fault-tolerance systems. The SCR algorithm, an algorithm based on atomic operations and consistent scheduling that can restore the state of file systems, is presented in this paper. In the SCR algorithm, system calls on file systems are classified into idempotent operations and non-idempotent operations. A non-idempotent operation modifies a file system's state, while an idempotent operation does not. The SCR algorithm tracks changes of file system state: it logs each non-idempotent operation used by user programs, together with the information needed to undo that operation, on disk. When rolling the program back to a checkpoint, the SCR algorithm reverts the file system state to that of the last checkpoint. By using the SCR algorithm, users are allowed to use any file operation in their programs.

  13. Adding Data Management Services to Parallel File Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brandt, Scott [Univ. of California, Santa Cruz, CA (United States)

    2015-03-04

    The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades the high-end computing community has adopted middleware with multiple layers of abstractions and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in the (unstructured) file systems can only optimize to the extent that file system interfaces permit; the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file

  14. Stochastic Petri net analysis of a replicated file system

    Science.gov (United States)

    Bechta Dugan, Joanne; Ciardo, Gianfranco

    1989-01-01

    A stochastic Petri-net model of a replicated file system is presented for a distributed environment where replicated files reside on different hosts and a voting algorithm is used to maintain consistency. Witnesses, which simply record the status of the file but contain no data, can be used in addition to or in place of files to reduce overhead. A model sufficiently detailed to include file status (current or out-of-date), as well as failure and repair of hosts where copies or witnesses reside, is presented. The number of copies and witnesses is a parameter of the model. Two different majority protocols are examined, one where a majority of all copies and witnesses is necessary to form a quorum, and the other where only a majority of the copies and witnesses on operational hosts is needed. The latter, known as adaptive voting, is shown to increase file availability in most cases.
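
The difference between the two majority protocols can be shown with two one-line predicates. This is a hypothetical sketch to illustrate the quorum rules only; the paper's actual model is a stochastic Petri net, not code:

```python
def has_quorum_static(votes_up, total_votes):
    """Static majority: need a majority of ALL copies and witnesses."""
    return votes_up > total_votes / 2

def has_quorum_adaptive(votes_up, votes_on_operational_hosts):
    """Adaptive voting: need only a majority of the copies and witnesses
    residing on hosts that are currently operational."""
    return votes_up > votes_on_operational_hosts / 2
```

With 5 votes in total and only 2 of them on operational hosts, the static protocol loses its quorum while adaptive voting keeps the file available, which is why adaptive voting increases availability in most cases.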

  15. Design and Implementation of Log Structured FAT and ExFAT File Systems

    Directory of Open Access Journals (Sweden)

    Keshava Munegowda

    2014-08-01

    Full Text Available The File Allocation Table (FAT) file system is supported by multiple Operating Systems (OS); hence, the FAT file system is a universal exchange format for files/directories used in Solid State Drives (SSD) and Hard Disk Drives (HDD). Microsoft Corporation introduced a new file system called the Extended FAT file system (ExFAT) to support larger-capacity storage devices; the ExFAT file system is optimized for use with SSDs. But both FAT and ExFAT are not power-fail safe. This means that an uncontrolled power loss, or abrupt removal of the storage device from the computer system during a file system update, corrupts the file system metadata and hence leads to loss of data on the storage device. This paper implements logging and committing features for the FAT and ExFAT file systems and ensures that the file system metadata remains consistent across abrupt power loss or removal of the device from the computer system.

  16. The global unified parallel file system (GUPFS) project: FY 2003 activities and results

    Energy Technology Data Exchange (ETDEWEB)

    Butler, Gregory F.; Baird William P.; Lee, Rei C.; Tull, Craig E.; Welcome, Michael L.; Whitney Cary L.

    2004-04-30

    The Global Unified Parallel File System (GUPFS) project is a multiple-phase project at the National Energy Research Scientific Computing (NERSC) Center whose goal is to provide a scalable, high-performance, high-bandwidth, shared file system for all of the NERSC production computing and support systems. The primary purpose of the GUPFS project is to make the scientific users more productive as they conduct advanced scientific research at NERSC by simplifying the scientists' data management tasks and maximizing storage and data availability. This is to be accomplished through the use of a shared file system providing a unified file namespace, operating on consolidated shared storage that is accessible by all the NERSC production computing and support systems. In order to successfully deploy a scalable high-performance shared file system with consolidated disk storage, three major emerging technologies must be brought together: (1) shared/cluster file systems software, (2) cost-effective, high-performance storage area network (SAN) fabrics, and (3) high-performance storage devices. Although they are evolving rapidly, these emerging technologies individually are not targeted towards the needs of scientific high-performance computing (HPC). The GUPFS project is in the process of assessing these emerging technologies to determine the best combination of solutions for a center-wide shared file system, to encourage the development of these technologies in directions needed for HPC, particularly at NERSC, and to then put them into service. With the development of an evaluation methodology and benchmark suites, and with the updating of the GUPFS testbed system, the project did a substantial number of investigations and evaluations during FY 2003. The investigations and evaluations involved many vendors and products. From our evaluation of these products, we have found that most vendors and many of the products are more focused on the commercial market. Most vendors

  17. Dynamic Non-Hierarchical File Systems for Exascale Storage

    Energy Technology Data Exchange (ETDEWEB)

    Long, Darrell E. [PI; Miller, Ethan L [Co PI

    2015-02-24

    This constitutes the final report for “Dynamic Non-Hierarchical File Systems for Exascale Storage”. The ultimate goal of this project was to improve data management in scientific computing and high-end computing (HEC) applications, and to achieve this goal we proposed: to develop the first, HEC-targeted, file system featuring rich metadata and provenance collection, extreme scalability, and future storage hardware integration as core design goals, and to evaluate and develop a flexible non-hierarchical file system interface suitable for providing more powerful and intuitive data management interfaces to HEC and scientific computing users. Data management is swiftly becoming a serious problem in the scientific community – while copious amounts of data are good for obtaining results, finding the right data is often daunting and sometimes impossible. Scientists participating in a Department of Energy workshop noted that most of their time was spent “...finding, processing, organizing, and moving data and it’s going to get much worse”. Scientists should not be forced to become data mining experts in order to retrieve the data they want, nor should they be expected to remember the naming convention they used several years ago for a set of experiments they now wish to revisit. Ideally, locating the data you need would be as easy as browsing the web. Unfortunately, existing data management approaches are usually based on hierarchical naming, a 40 year-old technology designed to manage thousands of files, not exabytes of data. Today’s systems do not take advantage of the rich array of metadata that current high-end computing (HEC) file systems can gather, including content-based metadata and provenance1 information. As a result, current metadata search approaches are typically ad hoc and often work by providing a parallel management system to the “main” file system, as is done in Linux (the locate utility), personal computers, and enterprise search

  18. Building a Portable File System for Heterogeneous Clusters

    Institute of Scientific and Technical Information of China (English)

    HUANG Qifeng; YANG Guangwen; ZHENG Weimin; SHEN Meiming; DENG Yiyan

    2005-01-01

    Existing in-kernel distributed file systems cannot cope with the higher requirements in well-equipped cluster environments, especially when the system becomes larger and inevitably heterogeneous. TH-CluFS is a cluster file system designed for large heterogeneous systems. TH-CluFS is implemented completely in the user space by emulating the network file system (NFS) V2 server, and is easily portable to other portable operating system interface (POSIX)-compliant platforms with application programming/binary interface API/ABI compliance. In addition, TH-CluFS uses a serverless architecture which flexibly distributes data at file granularity and achieves a consistent file system view from distributed metadata. The global cache makes full use of the aggregated memories and disks in the cluster to optimize system performance. Experimental results suggest that although TH-CluFS is implemented as user-level components, it functions as a portable, single system image, and scalable cluster file system with acceptable performance sacrifices.

  19. Non-POSIX File System for LHCb Online Event Handling

    CERN Document Server

    Garnier, J C; Cherukuwada, S S

    2011-01-01

    LHCb aims to use its O(20000) CPU cores in the high level trigger (HLT) and its 120 TB Online storage system for data reprocessing during LHC shutdown periods. These periods can last a few days for technical maintenance or only a few hours during beam interfill gaps. These jobs run on files which are staged in from tape storage to the local storage buffer. The results are again one or more files. Efficient file writing and reading is essential for the performance of the system. Rather than using a traditional shared file system such as NFS or CIFS we have implemented a custom, light-weight, non-POSIX network file system for the handling of these files. Streaming this file system for data access allows high performance to be obtained while at the same time keeping resource consumption low, and adds nice features not found in NFS such as high availability and transparent fail-over of the read and write service. The writing part of this streaming service is in successful use for the Online, real-time writing of the d...

  20. Bin-Carver: Automatic Recovery of Binary Executable Files

    Science.gov (United States)

    2012-05-01

    With the vast amount of malware code appearing in the wild daily, recovery of binary executable files becomes an important problem, especially for the case in which malware deletes itself after compromising... Traditional file carving mainly focuses on document and image files such as PDF and JPEG.

  1. Efficient load rebalancing for distributed file system in Clouds

    Directory of Open Access Journals (Sweden)

    Mr. Mohan S. Deshmukh

    2016-05-01

    Full Text Available Cloud computing is an emerging era in the software industry: a vast and rapidly developing technology. Distributed file systems play an important role in cloud computing applications based on MapReduce techniques. When a distributed file system is used for cloud computing, nodes serve computing and storage functions at the same time, and a given file is divided into small parts so that MapReduce algorithms can run in parallel. But a problem arises here: in cloud computing, nodes may be added, deleted, or modified at any time, and operations on files may be performed dynamically. This causes unequal distribution of load among the nodes, which leads to a load imbalance problem in the distributed file system. Newly developed distributed file systems mostly depend on a central node for load distribution, but this method is not helpful at large scale and where the chances of failure are high. Use of a central node for load distribution creates a single point of dependency and increases the chance of a performance bottleneck. In addition, issues like movement cost and network traffic caused by migration of nodes and file chunks need to be resolved. We therefore propose an algorithm that overcomes these problems and helps to achieve uniform load distribution efficiently. To verify the feasibility and efficiency of our algorithm, we use a simulation setup and compare our algorithm with existing techniques with respect to load imbalance factor, movement cost, and network traffic.
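
A toy sketch of the load-imbalance factor and a greedy chunk-migration loop follows. The threshold value and the unit-sized moves are our own simplifying assumptions for illustration, not the proposed algorithm:

```python
def imbalance_factor(loads):
    """Maximum node load divided by the ideal (average) load."""
    avg = sum(loads) / len(loads)
    return max(loads) / avg if avg else 0.0

def rebalance(loads, threshold=1.25):
    """Greedily move one unit of load from the heaviest node to the
    lightest until the imbalance factor drops below the threshold.
    Returns the new loads and the movement cost (units moved)."""
    loads = list(loads)
    moved = 0
    while imbalance_factor(loads) > threshold:
        hi = loads.index(max(loads))
        lo = loads.index(min(loads))
        loads[hi] -= 1
        loads[lo] += 1
        moved += 1
    return loads, moved
```

Counting `moved` captures the movement-cost metric the abstract mentions: a rebalancing scheme wants a low imbalance factor *and* few migrated chunks.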

  2. Cross-system log file analysis for hypothesis testing

    NARCIS (Netherlands)

    Glahn, Christian

    2008-01-01

    Glahn, C. (2008). Cross-system log file analysis for hypothesis testing. Presented at Empowering Learners for Lifelong Competence Development: pedagogical, organisational and technological issues. 4th TENCompetence Open Workshop. April, 10, 2008, Madrid, Spain.

  3. National Child Abuse and Neglect Data System (NCANDS) Child File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The National Child Abuse and Neglect Data System (NCANDS) Child File data set consists of child-specific data of all reports of maltreatment to State child...

  5. High-Performance, Multi-Node File Copies and Checksums for Clustered File Systems

    Science.gov (United States)

    Kolano, Paul Z.; Ciotti, Robert B.

    2012-01-01

    Modern parallel file systems achieve high performance using a variety of techniques, such as striping files across multiple disks to increase aggregate I/O bandwidth and spreading disks across multiple servers to increase aggregate interconnect bandwidth. To achieve peak performance from such systems, it is typically necessary to utilize multiple concurrent readers/writers from multiple systems to overcome various single-system limitations, such as number of processors and network bandwidth. The standard cp and md5sum tools of GNU coreutils found on every modern Unix/Linux system, however, utilize a single execution thread on a single CPU core of a single system, and hence cannot take full advantage of the increased performance of clustered file systems. Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems. Multi-threading is used to ensure that nodes are kept as busy as possible. Read/write parallelism allows individual operations of a single copy to be overlapped using asynchronous I/O. Multi-node cooperation allows different nodes to take part in the same copy/checksum. Split-file processing allows multiple threads to operate concurrently on the same file. Finally, hash trees allow inherently serial checksums to be performed in parallel. The total speed-ups from all improvements are significant: mcp improves cp performance over 27x, msum improves md5sum performance almost 19x, and the combination of mcp and msum improves verified copies via cp and md5sum by almost 22x. These improvements come in the form of drop-in replacements for cp and md5sum, so are easily used and are available for download as open source software at http://mutil.sourceforge.net.
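
The hash-tree trick that lets an inherently serial checksum run in parallel can be sketched as a flat two-level tree: hash fixed-size chunks concurrently, then hash the concatenated chunk digests. The chunk size, hash choice, and thread pool below are illustrative assumptions, not msum's actual parameters:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def tree_checksum(data: bytes, chunk_size: int = 1 << 20) -> str:
    """Two-level hash tree: leaf digests are computed in parallel over
    fixed-size chunks, then the root digest covers the leaf digests."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:
        digests = list(pool.map(lambda c: hashlib.md5(c).digest(), chunks))
    return hashlib.md5(b"".join(digests)).hexdigest()
```

Note that the result depends on the chunk size, so both sides of a verified copy must agree on it; a plain md5sum of the same bytes would yield a different value.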

  6. Secure Deletion on Log-structured File Systems

    CERN Document Server

    Reardon, Joel; Capkun, Srdjan; Basin, David

    2011-01-01

    We address the problem of secure data deletion on log-structured file systems. We focus on the YAFFS file system, widely used on Android smartphones. We show that these systems provide no temporal guarantees on data deletion and that deleted data still persists for nearly 44 hours with average phone use and indefinitely if the phone is not used after the deletion. Furthermore, we show that file overwriting and encryption, methods commonly used for secure deletion on block-structured file systems, do not ensure data deletion in log-structured file systems. We propose three mechanisms for secure deletion on log-structured file systems. Purging is a user-level mechanism that guarantees secure deletion at the cost of negligible device wear. Ballooning is a user-level mechanism that runs continuously and gives probabilistic improvements to secure deletion. Zero overwriting is a kernel-level mechanism that guarantees immediate secure deletion without device wear. We implement these mechanisms on Nexus One smartphon...

  7. Building Hot Snapshot Copy Based on Windows File System

    Institute of Scientific and Technical Information of China (English)

    WANG Lina; GUO Chi; WANG Dejun; ZHU Qin

    2006-01-01

    This paper describes a method for building a hot snapshot copy based on the Windows file system (HSCF). The architecture and running mechanism of HSCF are discussed after a comparison with other on-line backup technologies. HSCF, based on a file system filter driver, protects computer data and ensures their integrity and consistency with the following three steps: access to open files, synchronization, and copy-on-write. Its strategies for improving system performance are analyzed, including priority setting, incremental snapshots, and load balancing. HSCF is a new kind of snapshot technology to solve the data integrity and consistency problem in online backup, which is different from other storage-level snapshots and Open File Solution.

  8. Design and Implementation of a Storage Virtualization System Based on SCSI Target Simulator in SAN

    Institute of Scientific and Technical Information of China (English)

    LI Bigang; SHU Jiwu; ZHENG Weimin

    2005-01-01

    The ideal storage virtualization system is compatible with all operating systems in storage area networks (SANs). However, current storage systems on clustered hosts and multiple operating systems are not practical. This paper presents a storage virtualization system based on a SCSI target simulator in a SAN to solve these problems. This storage virtualization system runs in the target hosts of the SAN, dynamically stores the physical information, and uses the mapping table method to modify the SCSI command addresses. The system uses the bitmap technique to manage the free space. The storage virtualization system provides various functions, such as logical volume resizing, data mirroring, and snapshots, and is compatible with clustered hosts and multiple operating systems, such as Windows NT and RedHat.
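
The two bookkeeping structures the abstract mentions, a logical-to-physical mapping table for rewriting SCSI command addresses and a bitmap for free-space management, can be sketched together. The class below is an illustration of the technique, not the paper's implementation:

```python
class BitmapAllocator:
    """Toy virtualization layer: a bitmap tracks free physical blocks,
    and a mapping table translates logical block addresses (as seen in
    SCSI commands) to physical ones."""
    def __init__(self, n_blocks):
        self.free = [True] * n_blocks   # one flag per physical block
        self.mapping = {}               # logical block -> physical block

    def map_block(self, logical):
        """Bind a logical block to the first free physical block."""
        for phys, is_free in enumerate(self.free):
            if is_free:
                self.free[phys] = False
                self.mapping[logical] = phys
                return phys
        raise RuntimeError("no free blocks")

    def translate(self, logical):
        """Rewrite a logical SCSI block address to its physical address."""
        return self.mapping[logical]

    def release(self, logical):
        """Unbind a logical block and mark its physical block free."""
        phys = self.mapping.pop(logical)
        self.free[phys] = True
```

Features like resizing, mirroring, and snapshots then become manipulations of the mapping table rather than of the physical layout.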

  9. The Design of a Secure File Storage System

    Science.gov (United States)

    1979-12-01

...into the FM process memory to check for proper discretionary access. The complete pathname, in terms of the FSS file system, is passed to... research shows that a viable approach to the question of internal computer security exists. This approach, sometimes termed the "security kernel approach"... a significant advantage if the data file is long. After the file is stored by the IO process, the FM process gets a ticket to the

  10. [PVFS 2000: An operational parallel file system for Beowulf

    Science.gov (United States)

    Ligon, Walt

    2004-01-01

The approach has been to develop Parallel Virtual File System version 2 (PVFS2), retaining the basic philosophy of the original file system but completely rewriting the code. The report shows the architecture of the server and client components. BMI is the network abstraction layer. It is designed with a common driver and modules for each protocol supported. The interface is non-blocking and provides mechanisms for optimizations, including pinning user buffers. Currently, TCP/IP and GM (Myrinet) modules have been implemented. Trove is the storage abstraction layer. It provides for storing both data spaces and name/value pairs. Trove can also be implemented using different underlying storage mechanisms, including native files, raw disk partitions, SQL and other databases. The current implementation uses native files for data spaces and Berkeley DB for name/value pairs.
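A Trove-style storage abstraction, one interface over byte-stream data spaces and name/value pairs with a swappable backend, might look like this in-memory sketch (a stand-in for the native-file and Berkeley DB backends the record describes):

```python
# Sketch of a Trove-style storage abstraction: one interface, two kinds of
# objects (byte-stream "data spaces" and name/value keyvals). This in-memory
# backend is illustrative only; PVFS2's Trove uses files and Berkeley DB.

class MemoryTrove:
    def __init__(self):
        self.dataspaces = {}          # handle -> bytearray (file-like streams)
        self.keyval = {}              # handle -> {name: value}

    def ds_write(self, handle, offset, data):
        buf = self.dataspaces.setdefault(handle, bytearray())
        buf[offset:offset + len(data)] = data

    def ds_read(self, handle, offset, size):
        return bytes(self.dataspaces[handle][offset:offset + size])

    def kv_set(self, handle, name, value):
        self.keyval.setdefault(handle, {})[name] = value

    def kv_get(self, handle, name):
        return self.keyval[handle][name]

trove = MemoryTrove()
trove.ds_write(42, 0, b"hello world")       # file data goes to a data space
trove.kv_set(42, "owner", "ligon")          # metadata goes to a keyval space
print(trove.ds_read(42, 6, 5))   # b'world'
print(trove.kv_get(42, "owner")) # ligon
```

The point of the design is that `ds_*` and `kv_*` form a stable interface, so a backend can be swapped without touching the server code above it.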

  11. Predictive Upper Cretaceous to Early Miocene Paleogeography of the San Andreas Fault System

    Science.gov (United States)

    Burnham, K.

    2006-12-01

    Paleogeographic reconstruction of the region of the San Andreas fault was hampered for more than twenty years by the apparent incompatibility of authoritative lithologic correlations. These led to disparate estimates of dextral strike-slip offsets, notably 315 km between Pinnacles and Neenach Volcanics (Matthews, 1976), versus 563 km between Anchor Bay and Eagle Rest Peak (Ross et al., 1973). In addition, estimates of total dextral slip on the San Gregorio fault have ranged from 5 km to 185 km. Sixteen upper Cretaceous and Paleogene conglomerates of the California Coast Ranges, from Anchor Bay to Simi Valley, have been included in a multidisciplinary study. Detailed analysis, including microscopic petrography and microprobe geochemistry, verified Seiders and Cox's (1992) and Wentworth's (1996) correlation of the upper Cretaceous Strata of Anchor Bay with an unnamed conglomerate east of Half Moon Bay. Similar detailed study, with the addition of SHRIMP U/Pb zircon dating, verified that the Paleocene or Eocene Point Reyes Conglomerate at Point Reyes is a tectonically displaced segment of the Carmelo Formation of Point Lobos. These studies centered on identification of matching unique clast varieties, rather than on simply counting general clast types, and included analyses of matrices, fossils, paleocurrents, diagenesis, adjacent rocks, and stratigraphy. The work also led to three new correlations: the Point Reyes Conglomerate with granitic source rock at Point Lobos; a magnetic anomaly at Black Point with a magnetic anomaly near San Gregorio; and the Strata of Anchor Bay with previously established source rock, the potassium-poor Logan Gabbro (Ross et al., 1973) at a more recently recognized location (Brabb and Hanna, 1981; McLaughlin et al., 1996) just east of the San Gregorio fault, south of San Gregorio. From these correlations, an upper Cretaceous early Oligocene paleogeography of the San Andreas fault system was constructed that honors both the Anchor Bay

  12. BIM SYSTEM FOR THE CONSERVATION AND PRESERVATION OF THE MOSAICS OF SAN MARCO IN VENICE

    Directory of Open Access Journals (Sweden)

    F. Fassi

    2017-08-01

Full Text Available The Basilica of San Marco in Venice is a well-known masterpiece of World Heritage. It is a real multi-faceted architecture. The management of the church and its construction site is very complicated, and requires an efficient system to collect and manage different kinds of data. The BIM approach appeared to be the most suitable to collect multi-source data, to monitor activities and guarantee the well-timed operations inside the church. The purpose of this research was to build a BIM of the Basilica, considering all aspects that characterize it and that require particular care. Many problems affected the phase of the acquisition of data, and forced the team to establish a clear working pipeline that allowed the survey simultaneously, hand in hand, with all the usual activities of the church. The fundamental principle for the organization of the whole work was the subdivision of the entire complex in smaller parts, which could be managed independently, both in the acquisition and the modelling stage. This subdivision also reflects the method used for the photogrammetric acquisition. The complexity of some elements, as capitals and statues, was acquired with different Level of Detail (LoD) using various photogrammetric acquisitions: from the most general ones to describe the space, to the most detailed one 1:1 scale renderings. In this way, different LoD point clouds correspond to different areas or details. As evident, this pipeline allows to work in a more efficient way during the survey stage, but it involves more difficulties in the modelling stage. Because of the complexity of the church and the presence of sculptural elements represented by a mesh, from the beginning the problem of the amount of data was evident: it is nonsense to manage all models in a single file. The challenging aspect of the research job was the precise requirement of the Procuratoria di San Marco: to obtain the 1:1 representation of all the mosaics of the Basilica. This

  13. Bim System for the Conservation and Preservation of the Mosaics of San Marco in Venice

    Science.gov (United States)

    Fassi, F.; Fregonese, L.; Adami, A.; Rechichi, F.

    2017-08-01

    The Basilica of San Marco in Venice is a well-known masterpiece of World Heritage. It is a real multi-faceted architecture. The management of the church and its construction site is very complicated, and requires an efficient system to collect and manage different kinds of data. The BIM approach appeared to be the most suitable to collect multi-source data, to monitor activities and guarantee the well-timed operations inside the church. The purpose of this research was to build a BIM of the Basilica, considering all aspects that characterize it and that require particular care. Many problems affected the phase of the acquisition of data, and forced the team to establish a clear working pipeline that allowed the survey simultaneously, hand in hand, with all the usual activities of the church. The fundamental principle for the organization of the whole work was the subdivision of the entire complex in smaller parts, which could be managed independently, both in the acquisition and the modelling stage. This subdivision also reflects the method used for the photogrammetric acquisition. The complexity of some elements, as capitals and statues, was acquired with different Level of Detail (LoD) using various photogrammetric acquisitions: from the most general ones to describe the space, to the most detailed one 1:1 scale renderings. In this way, different LoD point clouds correspond to different areas or details. As evident, this pipeline allows to work in a more efficient way during the survey stage, but it involves more difficulties in the modelling stage. Because of the complexity of the church and the presence of sculptural elements represented by a mesh, from the beginning the problem of the amount of data was evident: it is nonsense to manage all models in a single file. The challenging aspect of the research job was the precise requirement of the Procuratoria di San Marco: to obtain the 1:1 representation of all the mosaics of the Basilica. This requirement

  14. NVRAM as Main Storage of Parallel File System

    Directory of Open Access Journals (Sweden)

    MALINOWSKI Artur

    2016-05-01

Full Text Available The main trouble in modern cluster environments used to be the lack of computational power provided by CPUs and GPUs, but recently they suffer more and more from insufficient performance of input and output operations. Apart from better network infrastructure and more sophisticated processing algorithms, many solutions build on emerging memory technologies. This paper presents an evaluation of using non-volatile random-access memory as the main storage of a parallel file system. The author justifies the feasibility of such a configuration and evaluates it with MPI I/O, OrangeFS as the file system, two popular cluster I/O benchmarks, and software memory simulation. The results suggest that, with a parallel file system highly optimized for block devices, small differences in access time and memory bandwidth do not influence system performance.

  15. Geophysical Surveys of the San Andreas and Crystal Springs Reservoir System Including Seismic-Reflection Profiles and Swath Bathymetry, San Mateo County, California

    Science.gov (United States)

    Finlayson, David P.; Triezenberg, Peter J.; Hart, Patrick E.

    2010-01-01

This report describes geophysical data acquired by the U.S. Geological Survey (USGS) in San Andreas Reservoir and Upper and Lower Crystal Springs Reservoirs, San Mateo County, California, as part of an effort to refine knowledge of the location of traces of the San Andreas Fault within the reservoir system and to provide improved reservoir bathymetry for estimates of reservoir water volume. The surveys were conducted by the Western Coastal and Marine Geology (WCMG) Team of the USGS for the San Francisco Public Utilities Commission (SFPUC). The data were acquired in three separate surveys: (1) in June 2007, personnel from WCMG completed a three-day survey of San Andreas Reservoir, collecting approximately 50 km of high-resolution Chirp subbottom seismic-reflection data; (2) in November 2007, WCMG conducted a swath-bathymetry survey of San Andreas reservoir; and finally (3) in April 2008, WCMG conducted a swath-bathymetry survey of both the upper and lower Crystal Springs Reservoir system. For more information, contact David Finlayson.

  16. Tectonic history of the north portion of the San Andreas fault system, California, inferred from gravity and magnetic anomalies

    Science.gov (United States)

    Griscom, A.; Jachens, R.C.

    1989-01-01

    Geologic and geophysical data for the San Andreas fault system north of San Francisco suggest that the eastern boundary of the Pacific plate migrated eastward from its presumed original position at the base of the continental slope to its present position along the San Andreas transform fault by means of a series of eastward jumps of the Mendocino triple junction. These eastward jumps total a distance of about 150 km since 29 Ma. Correlation of right-laterally displaced gravity and magnetic anomalies that now have components at San Francisco and on the shelf north of Point Arena indicates that the presently active strand of the San Andreas fault north of the San Francisco peninsula formed recently at about 5 Ma when the triple junction jumped eastward a minimum of 100 km to its present location at the north end of the San Andreas fault. -from Authors

  17. Petrophysical Analysis and Geographic Information System for San Juan Basin Tight Gas Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Martha Cather; Robert Lee; Robert Balch; Tom Engler; Roger Ruan; Shaojie Ma

    2008-10-01

The primary goal of this project is to increase the availability and ease of access to critical data on the Mesaverde and Dakota tight gas reservoirs of the San Juan Basin. Secondary goals include tuning well log interpretations through integration of core, water chemistry, and production analysis data to help identify bypassed pay zones; increased knowledge of permeability ratios and how they affect well drainage and thus infill drilling plans; improved time-depth correlations through regional mapping of sonic logs; and improved understanding of the variability of formation waters within the basin through spatial analysis of water chemistry data. The project will collect, integrate, and analyze a variety of petrophysical and well data concerning the Mesaverde and Dakota reservoirs of the San Juan Basin, with particular emphasis on data available in the areas defined as tight gas areas for purposes of FERC. A relational, geo-referenced database (a geographic information system, or GIS) will be created to archive this data. The information will be analyzed using neural networks, kriging, and other statistical interpolation/extrapolation techniques to fine-tune regional well log interpretations, improve pay zone recognition from old logs or cased-hole logs, determine permeability ratios, and analyze water chemistries and compatibilities within the study area. This single-phase project will be accomplished through four major tasks: Data Collection, Data Integration, Data Analysis, and User Interface Design. Data will be extracted from existing databases as well as paper records, then cleaned and integrated into a single GIS database. Once the data warehouse is built, several methods of data analysis will be used both to improve pay zone recognition in single wells and to extrapolate a variety of petrophysical properties on a regional basis. A user interface will provide tools to make the data and results of the study accessible and useful. The final deliverable
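The spatial interpolation step named above (kriging and related techniques) can be illustrated with a much simpler stand-in, inverse-distance weighting; the well coordinates and porosity values below are invented for the sketch:

```python
# Inverse-distance weighting (IDW): estimate a petrophysical property at an
# unsampled location as a distance-weighted average of nearby well readings.
# A hedged stand-in for the kriging the project describes; data are invented.
import math

def idw(samples, x, y, power=2):
    """samples: list of (x, y, value); returns the interpolated value at (x, y)."""
    num = den = 0.0
    for sx, sy, v in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0:
            return v                  # exactly on a sample point
        w = 1.0 / d ** power          # closer wells get larger weights
        num += w * v
        den += w
    return num / den

wells = [(0, 0, 10.0), (4, 0, 20.0)]  # hypothetical porosity readings
print(idw(wells, 2, 0))               # 15.0 -- the midpoint weights both equally
print(idw(wells, 1, 0))               # 11.0 -- pulled toward the nearer well
```

Unlike kriging, IDW ignores spatial correlation structure, but the read-samples-then-estimate workflow is the same shape.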

  18. Solar energy system economic evaluation for Elcam-Tempe, Tempe, Arizona and Elcam-San Diego, San Diego, California

    Science.gov (United States)

    1980-01-01

The long-term economic performance of the solar energy system at its installation site is analyzed, and four additional locations are selected to demonstrate the viability of the design over a broad range of environmental and economic conditions. The economic analysis of the solar energy systems that were installed at Tempe, Arizona and San Diego, California is developed for these and four other sites typical of a wide range of environmental and economic conditions in the continental United States. This analysis is based on the technical and economic models in the f-Chart design procedure, with inputs based on the characteristics of the installed system and local conditions. The results are expressed in terms of the economic parameters of present worth of system cost over a projected twenty-year life: life cycle savings, year of positive savings, and year of payback for the optimized solar energy system at each of the analysis sites. The sensitivity of the economic evaluation to uncertainties in constituent system and economic variables is also investigated. The results demonstrate that the solar energy system is economically viable at all of the sites for which the analysis was conducted.
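The economic parameters named above (present worth, life-cycle savings, year of payback) follow a standard discounted cash-flow calculation; this sketch uses invented costs and rates, not figures from the report:

```python
# Discounted cash-flow sketch of the report's economic parameters.
# install_cost, annual_saving and discount_rate below are invented examples.

def payback_year(install_cost, annual_saving, discount_rate, life_years=20):
    """First year in which cumulative discounted savings exceed installed cost."""
    cumulative = 0.0
    for year in range(1, life_years + 1):
        cumulative += annual_saving / (1 + discount_rate) ** year
        if cumulative >= install_cost:
            return year
    return None                       # never pays back within the system life

def life_cycle_savings(install_cost, annual_saving, discount_rate, life_years=20):
    # Present worth of all savings over the system life, net of installed cost
    pw = sum(annual_saving / (1 + discount_rate) ** y
             for y in range(1, life_years + 1))
    return pw - install_cost

print(payback_year(5000, 600, 0.05))              # 12
print(round(life_cycle_savings(5000, 600, 0.05), 2))
```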

  19. JavaFIRE: A Replica and File System for Grids

    Science.gov (United States)

    Petek, Marko; da Silva Gomes, Diego; Resin Geyer, Claudio Fernando; Santoro, Alberto; Gowdy, Stephen

    2012-12-01

The work is focused on the creation of a replica and file transfer system for computational Grids, inspired by the needs of High Energy Physics (HEP). Due to the high volume of data created by the HEP experiments, an efficient file and dataset replica system may play an important role in the computing model. Data replica systems allow the creation of copies distributed between the different storage elements on the Grid. In the HEP context, the data files are basically immutable. This eases the task of the replica system, because given sufficient local storage resources any dataset just needs to be replicated to a particular site once. Concurrent with the advent of computational Grids, another important theme in the distributed systems area that has also seen significant interest is that of peer-to-peer networks (p2p). P2p networks are an important and evolving mechanism that eases the use of distributed computing and storage resources by end users. One common technique to achieve faster file downloads from possibly overloaded storage elements over congested networks is to split the files into smaller pieces. This way, each piece can be transferred from a different replica, in parallel or not, exploiting the moments when the network conditions are better suited to the transfer. The main tasks achieved by the system are: the creation of replicas, the development of a system for replica transfer (RFT) and for replica location (RLS) with a different architecture than the one provided by Globus, and the development of a system for file transfer in pieces on computational grids with interfaces for several storage elements. The RLS uses a p2p overlay based on the Kademlia algorithm.
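The piece-wise transfer idea can be sketched as follows; the replica sites and latencies are invented, and a real implementation would fetch pieces concurrently from remote storage elements rather than from in-memory dictionaries:

```python
# Split an immutable file into fixed-size pieces, fetch each piece from
# whichever replica currently looks fastest, then reassemble and verify.
import hashlib

PIECE = 4                                      # piece size in bytes (tiny demo)

def split(data):
    return [data[i:i + PIECE] for i in range(0, len(data), PIECE)]

def fetch(replicas, piece_index):
    # Pick the replica with the lowest current latency for this piece;
    # immutability means any replica's copy of the piece is equally valid.
    best = min(replicas, key=lambda r: r["latency_ms"])
    return best["pieces"][piece_index]

original = b"immutable HEP dataset!"
pieces = split(original)
replicas = [                                   # invented sites and latencies
    {"site": "siteA", "latency_ms": 40, "pieces": pieces},
    {"site": "siteB", "latency_ms": 15, "pieces": pieces},
]
rebuilt = b"".join(fetch(replicas, i) for i in range(len(pieces)))
assert hashlib.sha256(rebuilt).digest() == hashlib.sha256(original).digest()
print(rebuilt == original)  # True
```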

  20. Efficient methodology for implementation of Encrypted File System in User Space

    CERN Document Server

    Kumar, Dr Shishir; Jasra, Sameer Kumar; Jain, Akshay Kumar

    2009-01-01

The Encrypted File System (EFS) pushes encryption services into the file system itself. EFS supports secure storage at the system level through a standard UNIX file system interface to encrypted files. Users can associate a cryptographic key with the directories they wish to protect. Files in these directories (as well as their pathname components) are transparently encrypted and decrypted with the specified key without further user intervention; cleartext is never stored on a disk or sent to a remote file server. EFS can use any available file system for its underlying storage without modifications, including remote file servers such as NFS. System management functions, such as file backup, work in a normal manner and without knowledge of the key. Performance is an important factor to users since encryption can be time-consuming. This paper describes the design and implementation of EFS in user space using faster cryptographic algorithms on the UNIX operating system. Implementing EFS in user space makes it porta...
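The transparent encrypt-on-write/decrypt-on-read idea can be sketched with a toy keystream cipher. The XOR construction below is a placeholder only, not the cipher EFS uses; a real system would rely on a vetted algorithm:

```python
# Toy illustration of the EFS idea: data is enciphered before it reaches the
# underlying store and deciphered on read, keyed per directory.
import hashlib

def keystream(key, length):
    out = b""
    counter = 0
    while len(out) < length:                    # hash-based stream, CTR style
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def transform(key, data):                        # XOR: same op encrypts/decrypts
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

backing_store = {}                               # stands in for the real FS

def write_file(key, path, plaintext):
    backing_store[path] = transform(key, plaintext)   # only ciphertext is stored

def read_file(key, path):
    return transform(key, backing_store[path])        # decrypted on the way out

key = b"per-directory-key"
write_file(key, "/secret/notes.txt", b"meet at noon")
print(read_file(key, "/secret/notes.txt"))       # b'meet at noon'
```

The underlying store never sees cleartext, so any file system (including NFS, as the paper notes) can hold the ciphertext unchanged.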

  1. Delay Scheduling Based Replication Scheme for Hadoop Distributed File System

    Directory of Open Access Journals (Sweden)

    S. Suresh

    2015-03-01

Full Text Available The data generated and processed by modern computing systems grow rapidly. MapReduce is an important programming model for large-scale data-intensive applications. Hadoop is a popular open source implementation of MapReduce and the Google File System (GFS). The scalability and fault-tolerance features of Hadoop make it a standard for Big Data processing. Hadoop uses the Hadoop Distributed File System (HDFS) for storing data. Data reliability and fault-tolerance are achieved through replication in HDFS. In this paper, a new technique called Delay Scheduling Based Replication Algorithm (DSBRA) is proposed to identify and replicate (or de-replicate) the popular (or unpopular) files/blocks in HDFS based on information collected from the scheduler. Experimental results show that the proposed method achieves 13% and 7% improvements in response time and locality, respectively, over existing algorithms.
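The popularity-driven replication decision can be sketched as a simple policy function; the thresholds and counts below are invented, whereas the paper derives popularity from scheduler information:

```python
# Popularity-based replication policy sketch: raise the replication factor of
# hot blocks and lower it for cold ones, within bounds. Thresholds are invented.

def target_replicas(access_count, base=3, hot=100, cold=5, max_r=10, min_r=2):
    if access_count >= hot:
        # scale up with demand: one extra replica per 'hot' accesses observed
        return min(max_r, base + access_count // hot)
    if access_count <= cold:
        return min_r          # de-replicate unpopular blocks to save space
    return base               # leave typical blocks at the default factor

print(target_replicas(350))   # 6  -> replicate a popular block
print(target_replicas(2))     # 2  -> de-replicate an unpopular one
print(target_replicas(50))    # 3  -> default for a typical block
```

A scheduler-integrated version would feed real access counts per block into this decision and ask the name node to adjust replication accordingly.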

  2. Dual-system Tectonics of the San Luis Range and Vicinity, Coastal Central California

    Science.gov (United States)

    Hamilton, D. H.

    2010-12-01

The M 6.5 "San Simeon" earthquake of December 22, 2003, occurred beneath the Santa Lucia Range in coastal central California and resulted in around $250,000,000 in property damage and two deaths from the collapse of a historic building in the town of Paso Robles, located 40 km from the epicenter. The earthquake and more than 10,000 aftershocks were well recorded by nearby seismographs, which permitted detailed analysis of the event (e.g., McLaren et al., 2008). This analysis facilitated evaluation of the hazard of a similar event occurring in the nearby San Luis Range, located along the coast west of the city of San Luis Obispo some 55 km south of the San Simeon epicenter. The future occurrence of earthquakes analogous to the 2003 event in this area had been proposed in the late 1960s (e.g., Benioff and Smith, 1967; Richter, 1969), but the apparent hazard of such occurrences came to be overshadowed by the discovery of the "Hosgri" strike-slip fault passing close to the area offshore. However, data accumulated since the early 1970s clearly demonstrate the hazard as being partitioned between nearby earthquakes of strike-slip origin and underlying earthquakes of thrust origin analogous to the 2003 San Simeon earthquake. For the onshore San Luis Range area, an underlying, actively seismogenic thrust wedge appears to provide the maximum potential seismic ground motion for onshore sites, exceeding that potentially resulting from large events on nearby strike-slip faults of the San Simeon-Hosgri system. Understanding and documentation of the geology, geomorphology, tectonics, and seismogenesis of the San Luis Range and vicinity has recently seen a quantum improvement as both new and accumulated data have been analysed. An integrated interpretation of all available data now clearly shows that a dual "side by side" system of active tectonics exists in the region. Essentially the most obvious evidence for this is seen simply in the

  3. A compact file format for labeled transition systems

    NARCIS (Netherlands)

    Langevelde, I.A. van

    2001-01-01

    A compact open file format for labeled transition systems, which are commonly used in specification and verification of concurrent systems, is introduced. This combination of openness, both in specification and implementation, and compactness is unprecedented, since existing formats in this field

  4. Deploying Server-side File System Monitoring at NERSC

    Energy Technology Data Exchange (ETDEWEB)

    Uselton, Andrew

    2009-05-01

    The Franklin Cray XT4 at the NERSC center was equipped with the server-side I/O monitoring infrastructure Cerebro/LMT, which is described here in detail. Insights gained from the data produced include a better understanding of instantaneous data rates during file system testing, file system behavior during regular production time, and long-term average behaviors. Information and insights gleaned from this monitoring support efforts to proactively manage the I/O infrastructure on Franklin. A simple model for I/O transactions is introduced and compared with the 250 million observations sent to the LMT database from August 2008 to February 2009.

  5. Efficient Search in P2P File Sharing System

    Institute of Scientific and Technical Information of China (English)

    Xiao Bo; Jin Wei; Hou Mengshu

    2006-01-01

A new routing algorithm for a peer-to-peer file sharing system with routing indices was proposed, in which a node forwards a query to the neighbors that are more likely to have answers based on its statistics. The proposed algorithm was tested by creating a P2P simulator and varying the input parameters, and was compared to search algorithms using flooding (FLD) and random walk (RW). The results show that with the proposed design, queries are routed effectively, network traffic is reduced remarkably, and the peer-to-peer file sharing system scales well.
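Query forwarding with routing indices can be sketched as follows: each node keeps, per neighbor, a count of documents reachable through that neighbor on each topic, and forwards only to the most promising neighbors instead of flooding (all statistics below are invented):

```python
# Routing-index forwarding sketch: rank neighbors by how many documents on the
# query topic are reachable through them, and forward to the top few only.

def forward_targets(routing_index, topic, fanout=1):
    """routing_index: {neighbor: {topic: reachable_doc_count}}"""
    ranked = sorted(routing_index,
                    key=lambda n: routing_index[n].get(topic, 0),
                    reverse=True)
    # keep only neighbors that are likely to have any answers at all
    return [n for n in ranked if routing_index[n].get(topic, 0) > 0][:fanout]

index_at_node = {                 # invented statistics for three neighbors
    "peerA": {"music": 120, "papers": 3},
    "peerB": {"music": 5, "papers": 90},
    "peerC": {"video": 40},
}
print(forward_targets(index_at_node, "papers"))            # ['peerB']
print(forward_targets(index_at_node, "music", fanout=2))   # ['peerA', 'peerB']
```

Compared with flooding, each query traverses only a handful of links, which is the traffic reduction the abstract reports.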

  6. The SNS/HFIR Web Portal System for SANS

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, Stuart I [ORNL; Miller, Stephen D [ORNL; Bilheux, Jean-Christophe [ORNL; Reuter, Michael A [ORNL; Peterson, Peter F [ORNL; Kohl, James Arthur [ORNL; Trater, James R [ORNL; Vazhkudai, Sudharshan S [ORNL; Lynch, Vickie E [ORNL

    2010-01-01

In a busy world, continuing with the status quo and doing things the way we already know often seems the most efficient way to conduct our work. We look for the value-add to decide if investing in a new method is worth the effort. How shall we evaluate whether we have reached this tipping point for change? For contemporary researchers, understanding the properties of the data is a good starting point. The new generation of neutron scattering instruments being built have higher resolution and produce one or more orders of magnitude more data than the previous generation of instruments. For instance, we have grown out of being able to perform some important tasks with our laptops: the data are too big and the computations would simply take too long. These large datasets can be problematic, as facility users now begin to grapple with many of the same issues faced by more established computing communities. These issues include data access, management, and movement, data format standards, distributed computing, and collaboration, among others. The Neutron Science Portal has been architected, designed, and implemented to provide users with an easy-to-use interface for managing and processing data, while also keeping an eye on meeting modern cybersecurity requirements imposed on institutions. The cost of entry for users has been lowered by using a web interface that provides access to backend portal resources. Users can browse or search for data they are allowed to see, data reduction applications can be run without having to load the software, sample activation calculations can be performed for SNS and HFIR beamlines, McStas simulations can be run on TeraGrid and ORNL computers, and advanced analysis applications such as those being produced by the DANSE project can be run. Behind the scenes is a live cataloging system which automatically catalogs and archives experiment data via the data management system and provides proposal team members access to their

  7. 77 FR 35376 - San Antonio Water System; Notice of Petition for Declaratory Order and Soliciting Comments...

    Science.gov (United States)

    2012-06-13

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission San Antonio Water System; Notice of Petition for Declaratory Order and... Antonio Water System (SAWS). e. Name of Project: SAWS Naco Hydroelectric Project. f. Location:...

  8. NASIS data base management system - IBM 360/370 OS MVT implementation. 6: NASIS message file

    Science.gov (United States)

    1973-01-01

    The message file for the NASA Aerospace Safety Information System (NASIS) is discussed. The message file contains all the message and term explanations for the system. The data contained in the file can be broken down into three separate sections: (1) global terms, (2) local terms, and (3) system messages. The various terms are defined and their use within the system is explained.

  9. NASIS data base management system: IBM 360 TSS implementation. Volume 6: NASIS message file

    Science.gov (United States)

    1973-01-01

    The message file for the NASA Aerospace Safety Information System (NASIS) is discussed. The message file contains all the message and term explanations for the system. The data contained in the file can be broken down into three separate sections: (1) global terms, (2) local terms, and (3) system messages. The various terms are defined and their use within the system is explained.

  10. Screw-in forces during instrumentation by various file systems

    Science.gov (United States)

    2016-01-01

Objectives The purpose of this study was to compare the maximum screw-in forces generated during the movement of various nickel-titanium (NiTi) file systems. Materials and Methods Forty simulated canals in resin blocks were randomly divided into 4 groups for the following instruments: Mtwo size 25/0.07 (MTW, VDW GmbH), Reciproc R25 (RPR, VDW GmbH), ProTaper Universal F2 (PTU, Dentsply Maillefer), and ProTaper Next X2 (PTN, Dentsply Maillefer) (n = 10 each). All the artificial canals were prepared to obtain a standardized lumen using ProTaper Universal F1. Screw-in forces were measured using a custom-made experimental device (AEndoS-k, DMJ system) during instrumentation with each NiTi file system using the designated movement. The rotation speed was set at 350 rpm with an automatic 4 mm pecking motion at a speed of 1 mm/sec. The pecking depth was increased by 1 mm for each pecking motion until the file reached the working length. Forces were recorded during file movement, and the maximum force was extracted from the data. Maximum screw-in forces were analyzed by one-way ANOVA and Tukey's post hoc comparison at a significance level of 95%. Results Reciproc and ProTaper Universal files generated the highest maximum screw-in forces among all the instruments, while Mtwo and ProTaper Next showed the lowest (p < 0.05). Conclusions Geometrical differences, rather than shaping motion and alloys, may affect the screw-in force during canal instrumentation. To reduce screw-in forces, the use of NiTi files with a smaller cross-sectional area for higher flexibility is recommended. PMID:27847752

  11. Storing files in a parallel computing system based on user or application specification

    Science.gov (United States)

    Faibish, Sorin; Bent, John M.; Nick, Jeffrey M.; Grider, Gary; Torres, Aaron

    2016-03-29

    Techniques are provided for storing files in a parallel computing system based on a user-specification. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a specification from the distributed application indicating how the plurality of files should be stored; and storing one or more of the plurality of files in one or more storage nodes of a multi-tier storage system based on the specification. The plurality of files comprise a plurality of complete files and/or a plurality of sub-files. The specification can optionally be processed by a daemon executing on one or more nodes in a multi-tier storage system. The specification indicates how the plurality of files should be stored, for example, identifying one or more storage nodes where the plurality of files should be stored.
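A minimal sketch of the claimed mechanism, assuming a spec format and tier names invented here (the patent does not fix either):

```python
# Sketch: a distributed application hands the storage layer a specification,
# and a placement routine (standing in for the patent's daemon) maps each file
# to a tier of a multi-tier storage system. Spec format and tiers are invented.

def place_files(files, spec):
    """files: {name: size_in_bytes}; spec: per-file rules plus '_'-prefixed
    defaults. Returns {name: tier}."""
    placement = {}
    for name, size in files.items():
        rule = spec.get(name)                       # explicit per-file rule wins
        if rule:
            placement[name] = rule
        elif size >= spec.get("_large_threshold", 1 << 20):
            placement[name] = spec.get("_large_tier", "disk")
        else:
            placement[name] = spec.get("_default_tier", "burst-buffer")
    return placement

spec = {"checkpoint.dat": "disk",                   # pin this file to disk
        "_default_tier": "burst-buffer",
        "_large_threshold": 1 << 20,
        "_large_tier": "disk"}
files = {"checkpoint.dat": 10, "analysis.h5": 2 << 20, "log.txt": 500}
print(place_files(files, spec))
# {'checkpoint.dat': 'disk', 'analysis.h5': 'disk', 'log.txt': 'burst-buffer'}
```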

  12. Generalized File Management Systems: Their Implication for a California Junior College Data Base System.

    Science.gov (United States)

    Fedrick, Robert John

    Criteria to use in evaluating data processing efficiency, factors of file and record definitions, convenience of use for non-programmers, report generating capabilities, and customer support for generalized file management systems for use by the California junior colleges are indicated by the author. The purchase of such a system at the state…

  13. Cross-system log file analysis for hypothesis testing

    NARCIS (Netherlands)

    Glahn, Christian; Specht, Marcus; Schoonenboom, Judith; Sligte, Henk; Moghnieh, Ayman; Hernández-Leo, Davinia; Stefanov, Krassen; Lemmers, Ruud; Koper, Rob

    2008-01-01

    Glahn, C., Specht, M., Schoonenboom, J., Sligte, H., Moghnieh, A., Hernández-Leo, D. Stefanov, K., Lemmers, R., & Koper, R. (2008). Cross-system log file analysis for hypothesis testing. In H. Sligte & R. Koper (Eds.), Proceedings of the 4th TENCompetence Open Workshop. Empowering Learners for Lifel

  14. AliEnFS - a Linux File System for the AliEn Grid Services

    CERN Document Server

    Peters, A J; Buncic, P; Peters, Andreas J.

    2003-01-01

Among the services offered by the AliEn (ALICE Environment http://alien.cern.ch) Grid framework there is a virtual file catalogue to allow transparent access to distributed data-sets using various file transfer protocols. $alienfs$ (AliEn File System) integrates the AliEn file catalogue as a new file system type into the Linux kernel using LUFS, a hybrid user space file system framework (Open Source http://lufs.sourceforge.net). LUFS uses a special kernel interface level called VFS (Virtual File System Switch) to communicate via a generalised file system interface with the AliEn file system daemon. The AliEn framework is used for authentication, catalogue browsing, file registration and read/write transfer operations. A C++ API implements the generic file system operations. The goal of AliEnFS is to allow users easy interactive access to a worldwide distributed virtual file system using familiar shell commands (e.g., cp, ls, rm, ...). The paper discusses general aspects of Grid File Systems, the AliEn implementation...

  15. THE SUSTAINABILITY OF THE AGRICULTURAL SYSTEMS WITH SMALL IRRIGATION. THE CASE OF SAN PABLO ACTIPAN

    OpenAIRE

    René Neri Noriega; Ignacio Ocampo Fletes; Juan Francisco Escobedo Castillo; Andrés Pérez Magaña; Susana Edith Rappo Miguez

    2008-01-01

    An analysis was carried out of the sustainability of the agricultural systems with small-scale irrigation that use groundwater in San Pablo Actipan, Tepeaca, Puebla state. The analysis took an agroecological approach, using the Framework for the Evaluation of Management Systems Incorporating Sustainability Indicators (MESMIS). A transversal study was conducted comparing two irrigation societies: "The Chamizal" (reference system) and "Lázaro Cárdenas" (alternative system...

  16. AliEnFS - a Linux File System for the AliEn Grid Services

    OpenAIRE

    Peters, Andreas J.; Saiz, P.; Buncic, P.

    2003-01-01

    Among the services offered by the AliEn (ALICE Environment http://alien.cern.ch) Grid framework there is a virtual file catalogue to allow transparent access to distributed data-sets using various file transfer protocols. $alienfs$ (AliEn File System) integrates the AliEn file catalogue as a new file system type into the Linux kernel using LUFS, a hybrid user space file system framework (Open Source http://lufs.sourceforge.net). LUFS uses a special kernel interface level called VFS (Virtual F...

  17. 76 FR 71019 - Amendment of Inspector General's Operation and Reporting (IGOR) System Investigative Files (EPA-40)

    Science.gov (United States)

    2011-11-16

    ... AGENCY Amendment of Inspector General's Operation and Reporting (IGOR) System Investigative Files (EPA-40... General's Operation and Reporting (IGOR) System Investigative Files (EPA-40) to the Inspector General... The Inspector General's Operation and Reporting (IGOR) System Investigative Files (EPA-40) will...

  18. 5 CFR 293.504 - Composition of, and access to, the Employee Medical File System.

    Science.gov (United States)

    2010-01-01

    ... Employee Medical File System. 293.504 Section 293.504 Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PERSONNEL RECORDS Employee Medical File System Records § 293.504 Composition of, and access to, the Employee Medical File System. (a) All employee occupational medical...

  19. Mantle strength of the San Andreas fault system and the role of mantle-crust feedbacks

    NARCIS (Netherlands)

    Chatzaras, V.; Tikoff, B.; Newman, J.; Withers, A.C.; Drury, M.R.

    2015-01-01

    In lithospheric-scale strike-slip fault zones, upper crustal strength is well constrained from borehole observations and fault rock deformation experiments, but mantle strength is less well known. Using peridotite xenoliths, we show that the upper mantle below the San Andreas fault system (Californi

  20. INNOVATION IN PRACTICE--THE INSTANT STUDENT RESPONSE SYSTEM WITH EMPHASIS ON MOUNT SAN JACINTO COLLEGE.

    Science.gov (United States)

    PHILLIPS, PEYTON H.

    DEVELOPED AND CONSTRUCTED AT MT. SAN JACINTO COLLEGE, CALIFORNIA, A CLASSROOM RESPONSE SYSTEM PERMITS THE INSTRUCTOR TO NOTE INDIVIDUAL STUDENT RESPONSES TO QUESTIONS AND TO PROVIDE IMMEDIATE FEEDBACK. PRESENTATION OF QUESTIONS BY MEANS OF AN OVERHEAD PROJECTOR HAS PROVED TO BE MORE SATISFACTORY THAN PRESENTING THEM ORALLY, KNOWLEDGE OF STUDENT…

  1. Overview and Status of the Ceph File System

    CERN Document Server

    CERN. Geneva

    2017-01-01

    The Ceph file system (CephFS) is the POSIX-compatible distributed file system running on top of Ceph's powerful and stable object store. This presentation will give a general introduction of CephFS and detail the recent work the Ceph team has done to improve its stability and usability. In particular, we will cover directory fragmentation, multiple active metadata servers, and directory subtree pinning to metadata servers, features slated for stability in the imminent Luminous release. This talk will also give an overview of how we are measuring performance of multiple active metadata servers using large on-demand cloud deployments. The results will highlight how CephFS distributes metadata load across metadata servers to achieve scaling. About the speaker Patrick Donnelly is a software engineer at Red Hat, Inc. currently working on the Ceph distributed file system. In 2016 he completed his Ph.D. in computer science at the University of Notre Dame with a dissertation on the topic of file transfer management...
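    The talk abstract mentions two placement mechanisms for metadata: automatic distribution of directory subtrees across active metadata servers (MDS), and explicit pinning of a subtree to a chosen rank. The sketch below illustrates that interplay only conceptually; it is not Ceph's implementation, and the function and parameter names are made up for illustration.

```python
import hashlib

# Conceptual sketch (not Ceph code): a path is normally assigned to an MDS
# rank by hashing its top-level directory, but an explicit pin on the
# nearest ancestor directory overrides the automatic placement, as subtree
# pinning does in CephFS.

def mds_rank_for(path, num_mds, pins):
    parts = path.rstrip("/").split("/")
    # Walk up the path; the nearest pinned ancestor wins.
    for i in range(len(parts), 0, -1):
        ancestor = "/".join(parts[:i]) or "/"
        if ancestor in pins:
            return pins[ancestor]
    # Otherwise hash the top-level directory to spread load deterministically.
    top = "/".join(parts[:2]) or "/"
    h = int(hashlib.sha1(top.encode()).hexdigest(), 16)
    return h % num_mds

pins = {"/home/alice": 1}  # pin one subtree to MDS rank 1
print(mds_rank_for("/home/alice/project/file.txt", 4, pins))  # 1
```

    The design trade-off this illustrates: hashing balances load automatically, while pinning lets an administrator isolate a hot or latency-sensitive subtree on a dedicated server.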

  2. Benefits of an image-oriented parallel file system

    Science.gov (United States)

    Hersch, Roger D.

    1993-04-01

    Professionals in various fields such as medical imaging, biology, and civil engineering require rapid access to huge amounts of uncompressed pixmap image data. In order to fulfill these requirements, a parallel image server architecture is proposed, based on arrays of intelligent disk nodes, each disk node being composed of one processor and one disk. Pixmap image data is partitioned into rectangular extents, whose size and distribution among disk nodes minimize overall image access times. Disk node processors are responsible for maintaining both the data structure associated with their image file extents and an extent cache offering fast access to recently used data. Disk node processors may also be used for applying image processing operations to locally retrieved image parts. This contribution introduces the concept of an image oriented file system, where the file system is aware of image size, extent size, and extent distribution. Such an image oriented file system provides a natural way of combining parallel disk accesses and processing operations. The performance of the proposed multiprocessor-multidisk architecture is bounded either by communication throughput or by disk access speed. However, when disk accesses are combined with low-level local processing operations such as image size reduction (zooming), close to linear speedup factors can be obtained by increasing the number of intelligent disk nodes.
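    The paragraph above hinges on partitioning a pixmap into rectangular extents and distributing them among disk nodes so that neighbouring tiles land on different disks. A minimal sketch of one such distribution, assuming a simple round-robin assignment over row-major extent numbering (the paper's actual distribution may differ):

```python
# Sketch of an extent-to-disk-node mapping: the image is tiled into
# rectangular extents, extents are numbered row-major, and consecutive
# extents are assigned round-robin to disk nodes. Parameter names are
# illustrative.

def extent_node(x, y, extent_w, extent_h, extents_per_row, num_nodes):
    ex, ey = x // extent_w, y // extent_h     # which extent holds pixel (x, y)
    extent_index = ey * extents_per_row + ex  # row-major extent number
    return extent_index % num_nodes           # round-robin node assignment

# A 1024x1024 image tiled into 64x64 extents (16 per row) over 8 disk nodes:
print(extent_node(0, 0, 64, 64, 16, 8))    # 0
print(extent_node(64, 0, 64, 64, 16, 8))   # 1  (next extent, next node)
print(extent_node(0, 64, 64, 64, 16, 8))   # 0  (16 % 8: row wraps around)
```

    Because horizontally adjacent extents map to different nodes, a scanline or zoom request fans out across many disks at once, which is the source of the near-linear speedups the abstract reports.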

  3. Storage Area Networks and The High Performance Storage System

    Energy Technology Data Exchange (ETDEWEB)

    Hulen, H; Graf, O; Fitzgerald, K; Watson, R W

    2002-03-04

    The High Performance Storage System (HPSS) is a mature Hierarchical Storage Management (HSM) system that was developed around a network-centered architecture, with client access to storage provided through third-party controls. Because of this design, HPSS is able to leverage today's Storage Area Network (SAN) infrastructures to provide cost effective, large-scale storage systems and high performance global file access for clients. Key attributes of SAN file systems are found in HPSS today, and more complete SAN file system capabilities are being added. This paper traces the HPSS storage network architecture from the original implementation using HIPPI and IPI-3 technology, through today's local area network (LAN) capabilities, and to SAN file system capabilities now in development. At each stage, HPSS capabilities are compared with capabilities generally accepted today as characteristic of storage area networks and SAN file systems.
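    The "network-centered architecture with third-party controls" described above separates the control path (a server that knows where data lives) from the data path (direct device-to-client transfer over the SAN or LAN). A schematic sketch of that separation, with all class and method names invented for illustration:

```python
# Schematic control-path / data-path split in the spirit of the HPSS design
# described above: the control server hands out a location "ticket", and the
# bytes move directly between storage device and client, never through the
# control server. Names are illustrative, not the HPSS API.

class ControlServer:
    def __init__(self, placement):
        self.placement = placement  # file name -> storage device id

    def open_for_read(self, name):
        # Control path: return only a description of where the data lives.
        return {"name": name, "device": self.placement[name]}

class StorageDevice:
    def __init__(self, blocks):
        self.blocks = blocks

    def read(self, name):
        # Data path: direct device-to-client transfer.
        return self.blocks[name]

devices = {"san-dev-0": StorageDevice({"exp.dat": b"payload"})}
ctrl = ControlServer({"exp.dat": "san-dev-0"})
ticket = ctrl.open_for_read("exp.dat")
data = devices[ticket["device"]].read(ticket["name"])
print(data)  # b'payload'
```

    The payoff of this split is scalability: the control server handles small metadata messages while bulk data rides the SAN fabric, so adding devices adds bandwidth without loading the server.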

  4. Sharing lattice QCD data over a widely distributed file system

    Science.gov (United States)

    Amagasa, T.; Aoki, S.; Aoki, Y.; Aoyama, T.; Doi, T.; Fukumura, K.; Ishii, N.; Ishikawa, K.-I.; Jitsumoto, H.; Kamano, H.; Konno, Y.; Matsufuru, H.; Mikami, Y.; Miura, K.; Sato, M.; Takeda, S.; Tatebe, O.; Togawa, H.; Ukawa, A.; Ukita, N.; Watanabe, Y.; Yamazaki, T.; Yoshie, T.

    2015-12-01

    JLDG is a data grid for the lattice QCD (LQCD) community in Japan. Several large research groups in Japan have been working on lattice QCD simulations using supercomputers distributed over distant sites. The JLDG provides such collaborations with an efficient method of data management and sharing. File servers installed at 9 sites are connected to the NII SINET VPN and are bound into a single file system with Gfarm. The file system looks the same from any site, so users can run analyses on a supercomputer at one site using data generated and stored in the JLDG at a different site. We present a brief description of the hardware and software of the JLDG, including a recently developed subsystem for cooperating with the HPCI shared storage, and report performance and statistics of the JLDG. As of April 2015, 15 research groups (61 users) store their daily research data, amounting to 4.7 PB including replicas and 68 million files in total. The number of publications from work that used the JLDG is 98. The large number of publications and the recent rapid increase in disk usage convince us that the JLDG has grown into a useful infrastructure for the LQCD community in Japan.

  5. San Pedro Mártir mid-infrared photometric system

    Directory of Open Access Journals (Sweden)

    Luis Salas

    2006-01-01

    Full Text Available To define the San Pedro Mártir Mid-Infrared Photometric System, well-studied calibration stars were observed with the mid-infrared camera CID-BIB (2-28 μm) of the Observatorio Astronómico Nacional during 9 observing seasons between 2000 and 2005. A set of 9 filters was used, the "silicate" series SiN, SiO, SiP, SiQ, SiR, SiS, the broad-band filter N (10.8 μm), and the narrow-band filters QH2 (17.15 μm) and Q2 (18.7 μm), to determine the extinction coefficients and zero-point magnitudes. Corrections for atmospheric extinction were carried out using Padé approximants, and the coefficients involved were obtained through a linear relation with the extinction coefficient at low air mass. The atmospheric transmission curves and extinction coefficients of SPM are presented and compared with those of the Mauna Kea astronomical site. Using a set of IRAS LSR sources observed with the CID-BIB, color terms are found.

  6. 29 CFR 4902.11 - Specific exemptions: Office of Inspector General Investigative File System.

    Science.gov (United States)

    2010-07-01

    ... Investigative File System. 4902.11 Section 4902.11 Labor Regulations Relating to Labor (Continued) PENSION... General Investigative File System. (a) Criminal Law Enforcement. (1) Exemption. Under the authority... Inspector General Investigative File System—PBGC” from the provisions of 5 U.S.C. 552a (c)(3), (c)(4),...

  7. 29 CFR 1602.43 - Commission's remedy for school systems' or districts' failure to file report.

    Science.gov (United States)

    2010-07-01

    ...' failure to file report. Any school system or district failing or refusing to file report EEO-5 when... 29 Labor 4 2010-07-01 2010-07-01 false Commission's remedy for school systems' or districts' failure to file report. 1602.43 Section 1602.43 Labor Regulations Relating to Labor (Continued)...

  8. Nickel-Titanium Single-file System in Endodontics.

    Science.gov (United States)

    Dagna, Alberto

    2015-10-01

    This work describes clinical cases treated with an innovative single-use, single-file nickel-titanium (NiTi) system used in continuous rotation. Nickel-titanium files are commonly used for root canal treatment, but they tend to break under bending and torsional stresses. New instruments intended for a single treatment have now been introduced. They make root canal shaping easier and safer for the clinician because they do not require sterilization and are discarded after use. A new sterile instrument is used for each treatment in order to reduce the possibility of fracture inside the canal. The new One Shape NiTi single-file instrument belongs to this group. One Shape is used for complete shaping of the root canal after adequate preflaring. Its protocol is simple, and some clinical cases are presented. It is helpful for easy cases and reliable for difficult canals. After 2 years of clinical practice, One Shape appears suitable for the treatment of most root canals, with low risk of separation. After each treatment the instrument is discarded rather than autoclaved and re-used. This single-use file simplifies endodontic therapy, because only one instrument is required for canal shaping in many cases. Respecting the clinical protocol guarantees predictably good results.

  9. Part III: AFS - A Secure Distributed File System

    Energy Technology Data Exchange (ETDEWEB)

    Wachsmann, A.; /SLAC

    2005-06-29

    AFS is a secure distributed global file system that provides location independence, scalability and transparent migration capabilities for data. AFS works across a multitude of Unix and non-Unix operating systems and has been used in production at many large sites for many years. AFS still provides unique features that are not available in other distributed file systems, even though AFS is almost 20 years old. This age might make it less appealing to some, but with IBM making AFS available as open source in 2000, new interest in its use and development was sparked. When talking about AFS, people often mention other file systems as potential alternatives. Coda (http://www.coda.cs.cmu.edu/) with its disconnected mode will always be a research project and never reach production quality. Intermezzo (http://www.inter-mezzo.org/) is now in the Linux kernel but not available for any other operating system. NFSv4 (http://www.nfsv4.org/), which picked up many ideas from AFS and Coda, is not yet mature enough to be used in serious production mode. This article presents the rich features of AFS and invites readers to play with it.

  10. Distributed PACS using distributed file system with hierarchical meta data servers.

    Science.gov (United States)

    Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato

    2012-01-01

    In this research, we propose a new distributed PACS (Picture Archiving and Communication System) that can integrate the several PACSs existing in individual medical institutions. A conventional PACS stores DICOM files in a single database. In the proposed system, by contrast, each DICOM file is separated into metadata and image data, which are stored individually. Because the entire file need not be accessed for every operation, tasks such as finding files or changing titles can be performed at high speed. At the same time, because a distributed file system is used, access to image files is both fast and fault tolerant. A further significant point of the system is the simplicity of integrating several PACSs: only the metadata servers need to be integrated to construct the combined system. The system also scales file access with the number and size of files. On the other hand, because the metadata server is centralized, it is the weak point of the system. To address this defect, hierarchical metadata servers are introduced, which increases both fault tolerance and the scalability of file access. To evaluate the proposed system, a prototype using Gfarm was implemented, and the file search times of Gfarm and NFS were compared.
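    The key mechanism in the abstract above is the separation of each record into a small searchable metadata document and a large opaque image blob, so that searches and renames never touch the bulk data. A minimal sketch of that split, with field names invented for illustration (they are not DICOM tags):

```python
# Sketch of the metadata/image-data split described above: two stores, one
# small and searchable, one large and opaque. Queries scan only the
# metadata store; image blobs are never read during a search.
# Field names are illustrative, not the DICOM standard's tags.

meta_store = {}   # file id -> metadata dict (small, on the metadata server)
image_store = {}  # file id -> raw pixel bytes (large, on a distributed FS)

def store(file_id, metadata, pixel_bytes):
    meta_store[file_id] = metadata
    image_store[file_id] = pixel_bytes

def find(**criteria):
    # Only metadata is examined, which is why lookups stay fast even when
    # the image blobs are huge.
    return [fid for fid, m in meta_store.items()
            if all(m.get(k) == v for k, v in criteria.items())]

store("img-1", {"patient": "P001", "modality": "CT"}, b"\x00" * 512)
store("img-2", {"patient": "P002", "modality": "MR"}, b"\x00" * 512)
print(find(modality="CT"))  # ['img-1']
```

    Integrating two such PACSs then reduces to merging (or federating) the metadata stores, while each site keeps serving its own image blobs, which is the integration simplicity the abstract claims.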

  11. The SNS/HFIR Web Portal System for SANS

    Science.gov (United States)

    Campbell, Stuart I.; Miller, Stephen D.; Bilheux, Jean-Christophe; Reuter, Michael A.; Peterson, Peter F.; Kohl, James A.; Trater, James R.; Vazhkudai, Sudharshan S.; Lynch, Vickie E.; Green, Mark L.

    2010-10-01

    The new generation of neutron scattering instruments being built offers higher resolution and produces one or more orders of magnitude more data than the previous generation of instruments. We have, for instance, grown out of being able to perform some important tasks on our laptops: the data sizes are too big and the computation would take too long. These large datasets can be problematic, as facility users now begin to struggle with many of the same issues faced by more established computing communities, including data access, management and movement, data format standards, distributed computing, and collaboration with others. The Neutron Science Portal has been designed and implemented to provide users with an easy-to-use interface for managing and processing data, while also meeting the modern computer security requirements now being imposed on institutions. Users can browse or search for data they are allowed to see, run data reduction and analysis applications, perform sample activation calculations and run McStas simulations. Collaboration is facilitated by providing users with a readable/writable common area shared across all experiment team members. The portal currently has over 370 registered users, almost 7 TB of experiment and user data, approximately 1,000,000 files cataloged, and had almost 10,000 unique visits last year. Future directions for enhancing portal robustness include examining how to mirror data and portal services, better facilitation of collaborations via virtual organizations, enhancing disconnected service via "thick client" applications, and better inter-facility connectivity to support cross-cutting research.

  12. The SNS/HFIR Web Portal System for SANS

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, Stuart I; Miller, Stephen D; Bilheux, Jean-Christophe; Reuter, Michael A; Peterson, Peter F; Kohl, James A; Trater, James R; Vazhkudai, Sudharshan S; Lynch, Vickie E [Oak Ridge National Laboratory (United States); Green, Mark L, E-mail: campbellsi@ornl.go [Tech-X Corporation, Boulder, CO (United States)

    2010-10-01

    The new generation of neutron scattering instruments being built offers higher resolution and produces one or more orders of magnitude more data than the previous generation of instruments. We have, for instance, grown out of being able to perform some important tasks on our laptops: the data sizes are too big and the computation would take too long. These large datasets can be problematic, as facility users now begin to struggle with many of the same issues faced by more established computing communities, including data access, management and movement, data format standards, distributed computing, and collaboration with others. The Neutron Science Portal has been designed and implemented to provide users with an easy-to-use interface for managing and processing data, while also meeting the modern computer security requirements now being imposed on institutions. Users can browse or search for data they are allowed to see, run data reduction and analysis applications, perform sample activation calculations and run McStas simulations. Collaboration is facilitated by providing users with a readable/writable common area shared across all experiment team members. The portal currently has over 370 registered users, almost 7 TB of experiment and user data, approximately 1,000,000 files cataloged, and had almost 10,000 unique visits last year. Future directions for enhancing portal robustness include examining how to mirror data and portal services, better facilitation of collaborations via virtual organizations, enhancing disconnected service via 'thick client' applications, and better inter-facility connectivity to support cross-cutting research.

  13. External impacts of an intraurban air transportation system in the San Francisco Bay area

    Science.gov (United States)

    Lu, J. Y.; Gebman, J. R.; Kirkwood, T. F.; Mcclure, P. T.; Stucker, J. P.

    1972-01-01

    The effects are studied of an intraurban V/STOL commuter system on the economic, social, and physical environment of the San Francisco Bay Area. The Bay Area was chosen mainly for a case study; the real intent of the analysis is to develop methods by which the effects of such a system could be evaluated for any community. Aspects of the community life affected include: income and employment, benefits and costs, noise, air pollution, and road congestion.

  14. World coordinate system keywords for FITS files from Lick Observatory

    Science.gov (United States)

    Allen, Steven L.; Gates, John; Kibrick, Robert I.

    2010-07-01

    Every bit of metadata added at the time of acquisition increases the value of image data, facilitates automated processing of those data, and decreases the effort required during subsequent data curation activities. In 2002 the FITS community completed a standard for World Coordinate System (WCS) information which describes the celestial coordinates of pixels in astronomical image data. Most of the instruments in use at Lick Observatory and Keck Observatory predate this standard. None of them was designed to produce FITS files with celestial WCS information. We report on the status of WCS keywords in the FITS files of various astronomical detectors at Lick and Keck. These keywords combine the information from sources which include the telescope pointing system, the optics of the telescope and instrument, a description of the pixel layout of the detector focal plane, and the hardware and software mappings between the silicon pixels of the detector and the pixels in the data array of the FITS file. The existing WCS keywords include coordinates which refer to the detector structure itself (for locating defects and artifacts), but not celestial coordinates. We also present proof-of-concept from the first data acquisition system at Lick Observatory which inserts the WCS keywords for a celestial coordinate system.
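    The celestial WCS keywords discussed above (CRPIX, CRVAL and the CD matrix, as defined by the FITS WCS standard) encode a linear pixel-to-sky transform. Below is a worked sketch of just that linear part; a real solution also applies a sky projection (e.g. TAN), which is omitted here, and the keyword values are made up for illustration.

```python
# Linear part of a FITS celestial WCS: pixel offsets from the reference
# pixel CRPIX are scaled through the CD matrix (degrees/pixel) and added
# to the reference coordinate CRVAL. The projection step is omitted.

def linear_wcs(x, y, crpix, crval, cd):
    dx, dy = x - crpix[0], y - crpix[1]
    ra  = crval[0] + cd[0][0] * dx + cd[0][1] * dy
    dec = crval[1] + cd[1][0] * dx + cd[1][1] * dy
    return ra, dec

# Hypothetical detector: 0.5 arcsec/pixel, north up, RA increasing left,
# so CD is diagonal with a negative RA term.
scale = 0.5 / 3600.0  # degrees per pixel
ra, dec = linear_wcs(1034.0, 1024.0,
                     crpix=(1024.0, 1024.0),
                     crval=(180.0, 30.0),
                     cd=[[-scale, 0.0], [0.0, scale]])
# 10 pixels east of the reference pixel -> 5 arcsec lower RA, same Dec.
print(ra, dec)
```

    In practice these four keywords come from exactly the sources the paragraph lists: the pointing system supplies CRVAL, the optics supply the CD matrix scale and rotation, and the detector mosaic layout fixes CRPIX per amplifier.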

  15. Motivators, Barriers and Concerns in Adoption of Electronic Filing System: Survey Evidence from Malaysian Professional Accountants

    Directory of Open Access Journals (Sweden)

    Ming-Ling Lai

    2010-01-01

    Full Text Available Problem statement: Worldwide, the electronic filing (e-filing) system and its adoption have attracted much attention; however, scholarly study of accounting professionals' acceptance of e-filing systems is scant. Approach: This study aimed (i) to examine factors that motivated professional accountants to use e-filing, (ii) to solicit their usage experience and (iii) to assess the barriers to adoption and other compliance considerations. A questionnaire survey was administered to 700 professionals from tax practice and commercial sectors who attended "Budget 2008" Tax Seminars, organized by the Malaysian Institute of Accountants in Peninsular Malaysia. In total, 456 usable responses from accounting and tax professionals were collected and analyzed. Results: The survey found that of the 456 respondents, just 23.7% had used e-filing in 2007 to file personal tax return forms. The majority of e-filers opted for e-filing for the sake of convenience (55.8%), in the expectation of a faster tax refund (16.8%) or for speed of filing (15.9%). For those who did not use e-filing, the key impediments were concerns over security and a lack of trust in the e-filing system. Some (4.8%) were unable to access the e-filing website. Overall, just 26.1% of the professionals surveyed had confidence in the IRBM to manage the e-filing system successfully. The majority (41.2%) thought a speedy tax refund to be the most desirable incentive to motivate individuals to use e-filing. Conclusion: As the IRBM is counting on professional accountants to promote the use of the e-filing system, this study provides important insights for the IRBM in developing marketing and business strategies to motivate professional accountants in business to use e-filing, in order to accelerate the diffusion of the e-filing system in a developing country like Malaysia.

  16. NVRAM as Main Storage of Parallel File System

    OpenAIRE

    MALINOWSKI Artur

    2016-01-01

    The main limitation of modern cluster environments used to be the computational power provided by CPUs and GPUs, but recently they suffer more and more from insufficient performance of input and output operations. Apart from better network infrastructure and more sophisticated processing algorithms, many solutions are based on emerging memory technologies. This paper presents an evaluation of using non-volatile random-access memory as the main storage of a parallel file system. The author justifies fea...

  17. Thesaurus of descriptors for the vertical file system

    Energy Technology Data Exchange (ETDEWEB)

    1977-09-01

    The Thesaurus used for the NYIT Energy Information Center is presented. The center is a comprehensive information service covering every aspect of energy conservation and related technology, including conservation programs and practices, alternative energy systems, energy legislation, and public policy development in the United States and abroad. The Thesaurus includes all subject headings found in the Vertical File as well as other cross referenced terms likely to come to mind when seeking information on a specific energy area.

  18. Mechanical insights into tectonic reorganization of the southern San Andreas fault system at ca. 1.1-1.5 Ma

    Science.gov (United States)

    Fattaruso, L.; Cooke, M. L.; Dorsey, R. J.

    2013-12-01

    Reorganization of active fault systems may result from changes in relative plate motion and evolving fault geometries. Between ~1.5 and 1.1 Ma the southern San Andreas fault system underwent a major reorganization that included initiation of the San Jacinto fault zone, termination of slip on the extensional West Salton detachment fault, and reorganization of structures in the Mecca Hills northeast of the San Andreas fault during a local change from transtension to transpression conditions with no known change in Pacific-North America relative plate motion. The active trace of the southern San Andreas fault itself also evolved during this time, with shifts in activity from the Mission Creek to Mill Creek to the present-day active fault geometry of the San Bernardino, Garnet Hill, and Banning strands of the San Andreas fault. Although there is a rich geologic record of these changes, the mechanisms that controlled abandonment of active faults, initiation of new strands, and shifting loci of uplift are poorly understood. We use three-dimensional mechanical Boundary Element Method models to investigate this major tectonic reorganization at ~1.1-1.5 Ma. Previous mechanical modeling studies have examined the evolution of the southern San Andreas fault geometry in the San Gorgonio Pass using a series of snapshot models of the succession of active fault geometries. We use the same approach to explore the role of fault interaction and tectonic loading in abandonment of the West Salton detachment fault and initiation of the San Jacinto fault. The snapshots include: (1) regional transtension with an active West Salton detachment fault and active Mission Creek strand of the San Andreas fault; (2) cessation of local extension in combination with initiation of the San Jacinto fault in which we explore both north-to-south propagation and simultaneous growth; (3) shift of activity to the Mill Creek strand of the San Andreas fault; and (4) shift of activity to the present

  19. Health Care Information System (HCIS) Data File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The data was derived from the Health Care Information System (HCIS), which contains Medicare Part A (Inpatient, Skilled Nursing Facility, Home Health Agency (Part A...

  20. Parallel file system with metadata distributed across partitioned key-value store c

    Energy Technology Data Exchange (ETDEWEB)

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron

    2017-09-19

    Improved techniques are provided for storing metadata associated with a plurality of sub-files associated with a single shared file in a parallel file system. The shared file is generated by a plurality of applications executing on a plurality of compute nodes. A compute node implements a Parallel Log Structured File System (PLFS) library to store at least one portion of the shared file generated by an application executing on the compute node and metadata for the at least one portion of the shared file on one or more object storage servers. The compute node is also configured to implement a partitioned data store for storing a partition of the metadata for the shared file, wherein the partitioned data store communicates with partitioned data stores on other compute nodes using a message passing interface. The partitioned data store can be implemented, for example, using Multidimensional Data Hashing Indexing Middleware (MDHIM).
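    The partitioned data store described above lets each compute node own one shard of the shared file's metadata, with any key (e.g. a sub-file's offset record) hashing to exactly one shard. A toy sketch of that partitioning follows; it stands in for the MDHIM-backed store the patent names, and all identifiers are illustrative.

```python
import hashlib

# Toy hash-partitioned key-value store in the spirit of the metadata
# partitioning described above: each key is owned by exactly one partition
# (one compute node's store), so a lookup contacts a single node.
# Class and key formats are illustrative, not MDHIM's API.

class PartitionedStore:
    def __init__(self, num_partitions):
        self.parts = [dict() for _ in range(num_partitions)]

    def _owner(self, key):
        # Deterministic hash routing: same key, same partition, every time.
        return int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self.parts)

    def put(self, key, value):
        self.parts[self._owner(key)][key] = value

    def get(self, key):
        return self.parts[self._owner(key)][key]

store = PartitionedStore(4)
# Each writer records where its portion of the shared file physically lives.
store.put("shared.out:rank3:off=4096", {"object": "oss7/obj42", "len": 4096})
print(store.get("shared.out:rank3:off=4096")["object"])  # oss7/obj42
```

    In the real system the partitions live on different compute nodes and exchange requests over MPI rather than sharing Python dictionaries, but the routing invariant is the same.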

  1. Generation and use of the Goddard trajectory determination system SLP ephemeris files

    Science.gov (United States)

    Armstrong, M. G.; Tomaszewski, I. B.

    1973-01-01

    Information is presented to acquaint users of the Goddard Trajectory Determination System Solar/Lunar/Planetary ephemeris files with the details connected with the generation and use of these files. In particular, certain sections constitute a user's manual for the ephemeris files.

  2. Methods and apparatus for multi-resolution replication of files in a parallel computing system using semantic information

    Energy Technology Data Exchange (ETDEWEB)

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Torres, Aaron

    2015-10-20

    Techniques are provided for storing files in a parallel computing system using different resolutions. A method is provided for storing at least one file generated by a distributed application in a parallel computing system. The file comprises one or more of a complete file and a sub-file. The method comprises the steps of obtaining semantic information related to the file; generating a plurality of replicas of the file with different resolutions based on the semantic information; and storing the file and the plurality of replicas of the file in one or more storage nodes of the parallel computing system. The different resolutions comprise, for example, a variable number of bits and/or a different sub-set of data elements from the file. A plurality of the sub-files can be merged to reproduce the file.
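    One of the resolution schemes the abstract names is keeping "a different sub-set of data elements from the file". A minimal sketch of generating such replicas by subsampling, where the stride choices are illustrative and not taken from the patent:

```python
# Sketch of multi-resolution replication by subsampling: stride 1 is the
# complete file; larger strides keep every Nth element, trading fidelity
# for storage. The stride set is an illustrative choice.

def make_replicas(data, strides=(1, 4, 16)):
    return {s: data[::s] for s in strides}

data = list(range(64))
replicas = make_replicas(data)
print(len(replicas[1]), len(replicas[4]), len(replicas[16]))  # 64 16 4
print(replicas[16])  # [0, 16, 32, 48]
```

    A reader that only needs a coarse preview fetches the smallest replica; merging the sub-files at full stride reproduces the original, matching the abstract's note that sub-files can be merged to reproduce the file.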

  3. Management Concerns for Optical Based Filing Systems

    Science.gov (United States)

    1990-03-01


  4. Reform of lawsuit system and revision of filing conditions

    Institute of Scientific and Technical Information of China (English)

    ZHANG Weiping

    2006-01-01

    Currently the Civil Procedure Law stipulates rather high conditions for filing lawsuits, because in the institutional design we have equated the conditions for adjudicating the merits with the conditions for filing and initiating lawsuits. The trial of the conditions for adjudicating the merits is usually conducted after a lawsuit begins, while in China it is carried out before the lawsuit begins; the related procedures have thus become a kind of "pre-lawsuit procedure", and theoretical and institutional confusions and contradictions arise. This article is of the opinion that filing conditions should be separated from the conditions for adjudicating the merits, and that the trial of the latter should be incorporated into the proceedings. A "dual" trial structure should be constructed, in which the trial of the conditions for adjudicating the merits runs parallel with the trial of the merit disputes. In the attempt to improve civil procedure, attention should be given to institutionalizing the conditions for adjudicating the merits, which should be reasonably designed and integrated into relevant systems. When reforming the lawsuit system, we should also adjust the courts' trial organs. We recommend not setting up any case-filing or appeal divisions and removing the existing separation of case filing and trial.

  5. Sediment transport in the San Francisco Bay Coastal System: an overview

    Science.gov (United States)

    Barnard, Patrick L.; Schoellhamer, David H.; Jaffe, Bruce E.; McKee, Lester J.

    2013-01-01

    The papers in this special issue feature state-of-the-art approaches to understanding the physical processes related to sediment transport and geomorphology of complex coastal–estuarine systems. Here we focus on the San Francisco Bay Coastal System, extending from the lower San Joaquin–Sacramento Delta, through the Bay, and along the adjacent outer Pacific Coast. San Francisco Bay is an urbanized estuary that is impacted by numerous anthropogenic activities common to many large estuaries, including a mining legacy, channel dredging, aggregate mining, reservoirs, freshwater diversion, watershed modifications, urban run-off, ship traffic, exotic species introductions, land reclamation, and wetland restoration. The Golden Gate strait is the sole inlet connecting the Bay to the Pacific Ocean, and serves as the conduit for a tidal flow of ~8 × 10⁹ m³/day, in addition to the transport of mud, sand, biogenic material, nutrients, and pollutants. Despite this physical, biological and chemical connection, resource management and prior research have often treated the Delta, Bay and adjacent ocean as separate entities, compartmentalized by artificial geographic or political boundaries. The body of work herein presents a comprehensive analysis of system-wide behavior, extending a rich heritage of sediment transport research that dates back to the groundbreaking hydraulic mining-impact research of G.K. Gilbert in the early 20th century.

  6. DDNFS: a Distributed Digital Notary File System

    Directory of Open Access Journals (Sweden)

    Alexander Zangerl

    2011-10-01

    Full Text Available Safeguarding online communications using public key cryptography is a well-established practice today, but with the increasing reliance on “faceless”, solely online entities one of the core aspects of public key cryptography is becoming a substantial problem in practice: Who can we trust to introduce us to and vouch for some online party whose public key we see for the first time? Most existing certification models lack flexibility and have come under attack repeatedly in recent years [1, 2], and finding practical improvements has a high priority. We propose that the real-world concept of a notary or certifying witness can be adapted to today’s online environment quite easily, and that such a system, when combined with peer-to-peer technologies for defense in depth, is a viable alternative to monolithic trust infrastructures. Instead of trusting assurances from a single party, integrity certifications (and data replication) can be provided among a group of independent parties in a peer-to-peer fashion. As the likelihood of all such assurance providers being subverted at the very same time is very much less than that of a single party, overall robustness is improved. This paper presents the design and the implementation of our prototype online notary system, where independent computer notaries provide integrity certification and highly-available replicated storage, and discusses how this online notary system handles some common threat patterns.
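    The "group of independent parties" idea can be sketched minimally as follows. The keyed-hash "signatures", the `quorum` parameter, and all names here are illustrative assumptions; the actual system described in the paper uses public-key cryptography and replicated storage rather than this toy scheme.

```python
import hashlib

def notarize(document: bytes, notaries):
    """Collect a certification of the document digest from each notary.

    `notaries` maps a notary id to its signing function. For illustration
    the "signature" is a keyed hash; a real notary would sign with its
    private key.
    """
    digest = hashlib.sha256(document).hexdigest()
    return {name: sign(digest) for name, sign in notaries.items()}

def verify(document: bytes, certs, notaries, quorum=2):
    """Accept the document only if at least `quorum` notaries vouch for it."""
    digest = hashlib.sha256(document).hexdigest()
    good = sum(1 for name, sign in notaries.items()
               if certs.get(name) == sign(digest))
    return good >= quorum

# Three independent "notaries", each with its own (toy) signing key.
notaries = {f"notary{i}": (lambda d, k=i: hashlib.sha256(f"{k}:{d}".encode()).hexdigest())
            for i in range(3)}
certs = notarize(b"contract", notaries)
```

    The robustness argument in the abstract corresponds to `quorum`: an attacker must subvert at least `quorum` independent notaries at the same time, rather than a single certification authority.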

  7. Deep crustal heterogeneity along and around the San Andreas fault system in central California and its relation to the segmentation

    Science.gov (United States)

    Nishigami, Kin'ya

    2000-04-01

    The three-dimensional distribution of scatterers in the crust along and around the San Andreas fault system in central California is estimated using an inversion analysis of coda envelopes from local earthquakes. I analyzed 3801 wave traces from 157 events recorded at 140 stations of the Northern California Seismic Network. The resulting scatterer distribution shows a correlation with the San Gregorio, San Andreas, Hayward, and Calaveras faults. These faults seem to be almost vertical from the surface to ~15 km depth. Some of the other scatterers are estimated to be at shallow depths, 0-5 km, below the Diablo Range, and these may be interpreted as being generated by topographic roughness. The depth distribution of scatterers shows relatively stronger scattering in the lower crust, at ~15-25 km depth, especially between the San Andreas fault and the Hayward-Calaveras faults. This suggests a subhorizontal detachment structure connecting these two faults in the lower crust. Several clusters of scatterers are located along the San Andreas fault at intervals of ~20-30 km from south of San Francisco to the intersection with the Calaveras fault. This part of the San Andreas fault appears to consist of partially locked segments, also ~20-30 km long, which rupture during M6-7 events, and segment boundaries characterized by stronger scattering and stationary microseismicity. The segment boundaries delineated by the present analysis correspond with those estimated from the slip distribution of the great 1906 San Francisco earthquake, and from the fault geometry as reported by the Working Group on California Earthquake Probabilities [1990], although the segment boundaries along the San Andreas fault in and around the San Francisco Bay area are still uncertain.

  8. Evidence for Late Oligocene-Early Miocene episode of transtension along San Andreas Fault system in central California

    Energy Technology Data Exchange (ETDEWEB)

    Stanley, R.G.

    1986-04-01

    The San Andreas is one of the most intensely studied fault systems in the world, but many aspects of its kinematic history remain controversial. For example, the period from the late Eocene to early Miocene is widely believed to have been a time of negligible strike-slip movement along the San Andreas fault proper, based on the rough similarity of offset of the Eocene Butano-Point of Rocks submarine fan, the early Miocene Pinnacles-Neenach volcanic center, and an early Miocene shoreline in the northern Gabilan Range and San Emigdio Mountains. Nonetheless, evidence indicates that a late Oligocene-early Miocene episode of transtension, or strike-slip motion with a component of extension, occurred within the San Andreas fault system. The evidence includes: (1) about 22-24 Ma, widespread, synchronous volcanic activity occurred at about 12 volcanic centers along a 400-km-long segment of the central California coast; (2) most of these volcanic centers are located along faults of the San Andreas system, including the San Andreas fault proper, the San Gregorio-Hosgri fault, and the Zayante-Vergeles fault, suggesting that these and other faults were active and served as conduits for magmas rising from below; (3) during the late Oligocene and early Miocene, a pull-apart basin developed adjacent to the San Andreas fault proper in the La Honda basin near Santa Cruz; and (4) during the late Oligocene and early Miocene, active faulting, rapid subsidence, and marine transgression occurred in the La Honda and other sedimentary basins in central California. The amount of right-lateral displacement along the San Andreas fault proper during this transtensional episode is unknown but was probably about 7.5-35 km, based on model studies of pull-apart basin formation. This small amount of movement is well within the range of error in published estimates of the offset of the Eocene to early Miocene geologic features noted.

  9. DDNFS: a Distributed Digital Notary File System

    CERN Document Server

    Zangerl, Alexander

    2011-01-01

    Safeguarding online communications using public key cryptography is a well-established practice today, but with the increasing reliance on `faceless', solely online entities one of the core aspects of public key cryptography is becoming a substantial problem in practice: Who can we trust to introduce us to and vouch for some online party whose public key we see for the first time? Most existing certification models lack flexibility and have come under attack repeatedly in recent years, and finding practical improvements has a high priority. We propose that the real-world concept of a notary or certifying witness can be adapted to today's online environment quite easily, and that such a system when combined with peer-to-peer technologies for defense in depth is a viable alternative to monolithic trust infrastructures. Instead of trusting assurances from a single party, integrity certifications (and data replication) can be provided among a group of independent parties in a peer-to-peer fashion. As the likeliho...

  10. Recent deformation on the San Diego Trough and San Pedro Basin fault systems, offshore Southern California: Assessing evidence for fault system connectivity.

    Science.gov (United States)

    Bormann, J. M.; Kent, G. M.; Driscoll, N. W.; Harding, A. J.

    2016-12-01

    The seismic hazard posed by offshore faults for coastal communities in Southern California is poorly understood and may be considerable, especially when these communities are located near long faults that have the ability to produce large earthquakes. The San Diego Trough fault (SDTF) and San Pedro Basin fault (SPBF) systems are active northwest striking, right-lateral faults in the Inner California Borderland that extend offshore between San Diego and Los Angeles. Recent work shows that the SDTF slip rate accounts for 25% of the 6-8 mm/yr of deformation accommodated by the offshore fault network, and seismic reflection data suggest that these two fault zones may be one continuous structure. Here, we use recently acquired CHIRP, high-resolution multichannel seismic (MCS) reflection, and multibeam bathymetric data in combination with USGS and industry MCS profiles to characterize recent deformation on the SDTF and SPBF zones and to evaluate the potential for an end-to-end rupture that spans both fault systems. The SDTF offsets young sediments at the seafloor for 130 km between the US/Mexico border and Avalon Knoll. The northern SPBF has robust geomorphic expression and offsets the seafloor in the Santa Monica Basin. The southern SPBF lies within a 25-km gap between high-resolution MCS surveys. Although there does appear to be a through-going fault at depth in industry MCS profiles, the low vertical resolution of these data inhibits our ability to confirm recent slip on the southern SPBF. Empirical scaling relationships indicate that a 200-km-long rupture of the SDTF and its southern extension, the Bahia Soledad fault, could produce a M7.7 earthquake. If the SDTF and the SPBF are linked, the length of the combined fault increases to >270 km. This may allow ruptures initiating on the SDTF to propagate within 25 km of the Los Angeles Basin. At present, the paleoseismic histories of the faults are unknown. We present new observations from CHIRP and coring surveys at

  11. Distributed Plate Boundary Deformation Across the San Andreas Fault System, Central California

    Science.gov (United States)

    Dyson, M.; Titus, S. J.; Demets, C.; Tikoff, B.

    2007-12-01

    Plate boundaries are now recognized as broad zones of complex deformation as opposed to narrow zones with discrete offsets. When assessing how plate boundary deformation is accommodated, both spatially and temporally, it is therefore crucial to understand the relative contribution of the discrete and distributed components of deformation. The creeping segment of the San Andreas fault is an ideal location to study the distribution of plate boundary deformation for several reasons. First, the geometry of the fault system in central California is relatively simple. Plate motion is dominated by slip along the relatively linear strike-slip San Andreas fault, but also includes lesser slip along the adjacent and parallel Hosgri-San Gregorio and Rinconada faults, as well as within the borderlands between the three fault strands. Second, the aseismic character of the San Andreas fault in this region allows for the application of modern geodetic techniques to assess creep rates along the fault and across the region. Third, geologic structures within the borderlands are relatively well-preserved allowing comparison between modern and ancient rates and styles of deformation. Continuous GPS stations, alignment arrays surveys, and other geodetic methods demonstrate that approximately 5 mm/yr of distributed slip is accumulated (on top of the fault slip rate) across a 70-100 km wide region centered on the San Andreas fault. New campaign GPS data also suggest 2-5 mm/yr of deformation in the borderlands. These rates depend on the magnitude of the coseismic and postseismic corrections that must be made to our GPS time series to compensate for the 2003 San Simeon and 2004 Parkfield earthquakes, which rupture faults outside, but near the edges of our GPS network. The off-fault deformation pattern can be compared to the style of permanent deformation recorded in the geologic record. 
Fold and thrust belts in the borderlands are better developed in the Tertiary sedimentary rocks west of

  12. 76 FR 2681 - Amended Environmental Impact Statement Filing System Guidance for Implementing 40 CFR 1506.9 and...

    Science.gov (United States)

    2011-01-14

    ... AGENCY Amended Environmental Impact Statement Filing System Guidance for Implementing 40 CFR 1506.9 and... guidelines to implement its EIS filing responsibilities. The purpose of the EPA Filing System Guidelines is to provide guidance to Federal agencies on filing EISs, including draft, final, and supplemental...

  13. 77 FR 51530 - Amended Environmental Impact Statement Filing System Guidance for Implementing 40 CFR 1506.9 and...

    Science.gov (United States)

    2012-08-24

    ... AGENCY Amended Environmental Impact Statement Filing System Guidance for Implementing 40 CFR 1506.9 and... guidelines to implement its EIS filing responsibilities. The purpose of the EPA Filing System Guidelines is to provide guidance to Federal agencies on filing EISs, including draft, final, and supplemental...

  14. Heuristic file sorted assignment algorithm of parallel I/O on cluster computing system

    Institute of Scientific and Technical Information of China (English)

    CHEN Zhi-gang; ZENG Bi-qing; XIONG Ce; DENG Xiao-heng; ZENG Zhi-wen; LIU An-feng

    2005-01-01

    A new file assignment strategy for parallel I/O, named the heuristic file sorted assignment algorithm, was proposed for a cluster computing system. Based on load balancing, it assigns files with similar service times to the same disk. Firstly, the files are sorted and stored in a set I in descending order of their service time; then, when files are to be assigned, one disk of a cluster node is selected randomly; finally, consecutive files are taken in order from set I and placed on that disk until the disk reaches its load maximum. The experimental results show that the new strategy improves performance by 20.2% when the system load is light and by 31.6% when the load is heavy. Moreover, the higher the data access rate, the more evident the performance improvement obtained by the heuristic file sorted assignment algorithm.
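    The steps described in this abstract (sort by service time, pick a disk at random, fill it with consecutive files until its load maximum) can be sketched as a greedy loop. The function name, the uniform per-disk `capacity`, and the file representation are assumptions for illustration; the paper's actual cost model may differ.

```python
import random

def assign_files(service_times, n_disks, capacity):
    """Greedy sketch of the heuristic file sorted assignment algorithm.

    Files are sorted into set I in descending order of service time;
    disks are then selected in random order, and each disk takes
    consecutive files from set I until it reaches its load maximum.
    """
    # Set I: file indices sorted by service time, descending.
    files = sorted(range(len(service_times)),
                   key=lambda i: service_times[i], reverse=True)
    disk_order = list(range(n_disks))
    random.shuffle(disk_order)       # "one disk ... selected randomly"
    assignment, i = {}, 0
    for d in disk_order:
        load, assignment[d] = 0.0, []
        # Take consecutive files until this disk is full.
        while i < len(files) and load + service_times[files[i]] <= capacity:
            assignment[d].append(files[i])
            load += service_times[files[i]]
            i += 1
    if i < len(files):
        raise RuntimeError("total load exceeds aggregate disk capacity")
    return assignment

assignment = assign_files([5, 1, 3, 2, 4], n_disks=3, capacity=6)
```

    Grouping files with similar service times on one disk keeps per-disk queueing behavior uniform, which is the load-balancing intuition the abstract gives.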

  15. Enabling Large-Scale Storage in Sensor Networks with the Coffee File System

    OpenAIRE

    Tsiftes, Nicolas; Dunkels, Adam; He, Zhitao; Voigt, Thiemo

    2009-01-01

    Persistent storage offers multiple advantages for sensor networks, yet the available storage systems have been unwieldy because of their complexity and device-specific designs. We present the Coffee file system for flash-based sensor devices. Coffee provides a programming interface for building efficient and portable storage abstractions. Unlike previous flash file systems, Coffee uses a small and constant RAM footprint per file, making it scale elegantly with workloads consisting of large fi...

  16. Implementing Journaling in a Linux Shared Disk File System

    Science.gov (United States)

    Preslan, Kenneth W.; Barry, Andrew; Brassow, Jonathan; Cattelan, Russell; Manthei, Adam; Nygaard, Erling; VanOort, Seth; Teigland, David; Tilstra, Mike; O'Keefe, Matthew; Erickson, Grant; Agarwal, Manish

    2000-01-01

    In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer systems implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.

  17. Using the Sirocco File System for high-bandwidth checkpoints.

    Energy Technology Data Exchange (ETDEWEB)

    Klundt, Ruth Ann; Curry, Matthew L.; Ward, H. Lee

    2012-02-01

    The Sirocco File System, a file system for exascale under active development, is designed to allow the storage software to maximize quality of service through increased flexibility and local decision-making. By allowing the storage system to manage a range of storage targets that have varying speeds and capacities, the system can increase the speed and surety of storage to the application. We instrument CTH to use a group of RAM-based Sirocco storage servers allocated within the job as a high-performance storage tier to accept checkpoints, allowing computation to potentially continue asynchronously of checkpoint migration to slower, more permanent storage. The result is a 10-60x speedup in constructing and moving checkpoint data from the compute nodes. This demonstration of early Sirocco functionality shows a significant benefit for a real I/O workload, checkpointing, in a real application, CTH. By running Sirocco storage servers within a job as RAM-only stores, CTH was able to store checkpoints 10-60x faster than storing to PanFS, allowing the job to continue computing sooner. While this prototype did not include automatic data migration, the checkpoint was available to be pushed or pulled to disk-based storage as needed after the compute nodes continued computing. Future developments include the ability to dynamically spawn Sirocco nodes to absorb checkpoints, expanding this mechanism to other fast tiers of storage like flash memory, and sharing of dynamic Sirocco nodes between multiple jobs as needed.

  18. The 2007 San Diego Wildfire impact on the Emergency Department of the University of California, San Diego Hospital System.

    Science.gov (United States)

    Schranz, Craig I; Castillo, Edward M; Vilke, Gary M

    2010-01-01

    In October 2007, San Diego County experienced a severe firestorm resulting in the burning of more than 368,000 acres, the destruction of more than 1,700 homes, and the evacuation of more than 500,000 people. The goal of this study was to assess the impact of the 2007 San Diego Wildfires, and the acute change in air quality that followed, on the patient volume and types of complaints in the emergency department. A retrospective review was performed of a database of all patients presenting to the Emergency Departments of University of California, San Diego (UCSD) hospitals for a six-day period both before (14-19 October 2007) and after (21-26 October 2007) the start of the 2007 firestorm. Charts were abstracted for data, including demographics, chief complaints, past medical history, fire-related injuries and disposition status. As a measure of pollution, levels of 2.5 micron Particulate Matter (PM 2.5) also were calculated from data provided by the San Diego Air Pollution Control District. Emergency department volume decreased by 5.8% for the period following the fire. A rapid rise in PM2.5 levels coincided with the onset of the fires. The admission rate was higher in the period following the fires (19.8% vs. 15.2%) from the baseline period. Additionally, the Left Without Being Seen (LWBS) rate doubled to 4.6% from 2.3%. There was a statistically significant increase in patients presenting with a chief complaint of shortness of breath (6.5% vs. 4.2% p = 0.028) and smoke exposure (1.1% vs. 0% p = 0.001) following the fires. Patients with significant cardiac or pulmonary histories were no more likely to present to the emergency department during the fires. Despite the decreased volume, the admission and LWBS rate did increase following the onset of the firestorm. The cause of this increase is unclear. Despite a sudden decline in air quality, patients with significant cardiac and pulmonary morbidity did not vary their emergency department utilization rate. 
Based on the

  19. SAN/CXFS test report to LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Ruwart, T M; Eldel, A

    2000-01-01

    The primary objectives of this project were to evaluate the performance of the SGI CXFS File System in a Storage Area Network (SAN) and compare/contrast it to the performance of a locally attached XFS file system on the same computer and storage subsystems. The University of Minnesota participants were asked to verify that the performance of the SAN/CXFS configuration did not fall below 85% of the performance of the XFS local configuration. There were two basic hardware test configurations constructed from the following equipment: Two Onyx 2 computer systems, each with two Qlogic-based Fibre Channel/XIO Host Bus Adapters (HBAs); One 8-Port Brocade Silkworm 2400 Fibre Channel Switch; and Four Ciprico RF7000 RAID Disk Arrays populated with Seagate Barracuda 50GB disk drives. The Operating System on each of the ONYX 2 computer systems was IRIX 6.5.6. The first hardware configuration consisted of directly connecting the Ciprico arrays to the Qlogic controllers without the Brocade switch. The purpose of this configuration was to establish baseline performance data on the Qlogic controller / Ciprico disk raw subsystem. This baseline performance data would then be used to demonstrate any performance differences arising from the addition of the Brocade Fibre Channel Switch. Furthermore, the performance of the Qlogic controllers could be compared to that of the older, Adaptec-based XIO dual-channel Fibre Channel adapters previously used on these systems. It should be noted that only raw device tests were performed on this configuration; no file system testing was performed on it. The second hardware configuration introduced the Brocade Fibre Channel Switch. Two FC ports from each of the ONYX2 computer systems were attached to four ports of the switch, and the four Ciprico arrays were attached to the remaining four.
Raw disk subsystem tests were performed on the SAN configuration in order to demonstrate the performance differences between the direct-connect and the

  20. 75 FR 3939 - Merit Systems Protection Board (MSPB) Provides Notice of Opportunity To File Amicus Briefs

    Science.gov (United States)

    2010-01-25

    ... From the Federal Register Online via the Government Publishing Office MERIT SYSTEMS PROTECTION BOARD Merit Systems Protection Board (MSPB) Provides Notice of Opportunity To File Amicus Briefs AGENCY..., the Merit Systems Protection Board (MSPB) is providing notice of the opportunity to file amicus...

  1. Application of Distributed File System in Marketing File System

    Institute of Scientific and Technical Information of China (English)

    方舟; 裴旭斌; 裘炜浩

    2015-01-01

    The marketing file system of Zhejiang Provincial Electric Power Company is now in use across the whole province. The fast growth of file data places stricter requirements on the storage of electronic files and other unstructured data. By investigating the features of distributed file systems, and taking the application of the MooseFS distributed file system in the marketing file system as an example, it is verified that a distributed file system is superior to traditional centralized storage in terms of load balancing, online capacity expansion, and other respects. The application of a distributed file system provides a feasible solution for the massive data storage of the marketing file system.

  2. Development of a simultaneous SANS / FTIR measuring system and its application to polymer cocrystals

    Science.gov (United States)

    Kaneko, F.; Seto, N.; Sato, S.; Radulescu, A.; Schiavone, M. M.; Allgaier, J.; Ute, K.

    2016-09-01

    In order to provide structural information that can assist in the analysis and interpretation of small angle neutron scattering (SANS) profiles, a novel method for the simultaneous time-resolved measurement of SANS and Fourier transform infrared (FTIR) spectroscopy has been developed. The method was realized by building a device consisting of a portable FTIR spectrometer and an optical system equipped with two aluminum-coated quartz plates that are fully transparent to neutron beams but act as mirrors for infrared radiation. The optical system allows both a neutron beam and an infrared beam to pass through the same position on a test specimen coaxially. The device was installed on a small angle neutron diffractometer, KWS2 of the Jülich Centre for Neutron Science (JCNS) outstation at the Heinz Maier-Leibnitz Center (MLZ) in Garching, Germany. In order to check the performance of this simultaneous measuring system, the structural changes in the cocrystals of syndiotactic polystyrene during the course of heating were followed. It has been confirmed that the FTIR spectra measured in parallel are able to provide information about the behavior of each component, and are also useful for grasping in real time what is actually happening in the sample system.

  3. Recurrence of seismic migrations along the central California segment of the San Andreas fault system

    Science.gov (United States)

    Wood, M.D.; Allen, S.S.

    1973-01-01

    Verifications of tectonic concepts1 concerning seafloor spreading are emerging in a manner that has direct bearing on earthquake prediction. Although the gross pattern of worldwide seismicity contributed to the formulation of the plate tectonic hypothesis, it is the space-time characteristics of this seismicity that may contribute more toward understanding the kinematics and dynamics of the driving mechanism long speculated to originate in the mantle. If the lithosphere is composed of plates that move essentially as rigid bodies, then there should be seismic edge effects associated with this movement. It is these interplate effects, especially seismic migration patterns, that we discuss here. The unidirectional propagation at constant velocity (80 km yr-1 east to west) for earthquakes (M ≥ 7.2) on the Anatolian fault for the period 1939 to 1956 (ref. 2) is one of the earliest observations of such a phenomenon. Similar studies3,4 of the Alaska Aleutian seismic zone and certain regions of the west coast of South America suggest unidirectional and recurring migrations of earthquakes (M ≥ 7.7) occur in these areas. Between these two regions along the great transform faults of the west coast of North America, there is some evidence5 for unidirectional, constant velocity and recurrent migration of great earthquakes. The small population of earthquakes (M>7.2) in Savage's investigation5 indicates a large spatial gap along the San Andreas system in central California from 1830 to 1970. Previous work on the seismicity of this gap in central California indicates that the recurrence curves remain relatively constant, independent of large earthquakes, for periods up to a century6. Recurrence intervals for earthquakes along the San Andreas Fault have been calculated empirically by Wallace7 on the basis of geological evidence, surface measurements and assumptions restricted to the surficial seismic layer. Here we examine the evidence for recurrence of seismic migrations along

  4. Experience on QA in the CernVM File System

    CERN Document Server

    CERN. Geneva; MEUSEL, Rene

    2015-01-01

    The CernVM-File System (CVMFS) delivers experiment software installations to thousands of globally distributed nodes in the WLCG and beyond. In recent years it became a mission-critical component for offline data processing of the LHC experiments and many other collaborations. From a software engineering perspective, CVMFS is a medium-sized C++ system-level project. Following the growth of the project, we introduced a number of measures to improve the code quality, testability, and maintainability. In particular, we found very useful code reviews through github pull requests and automated unit- and integration testing. We are also transitioning to a test-driven development for new features and bug fixes. These processes are supported by a number of tools, such as Google Test, Jenkins, Docker, and others. We would like to share our experience on problems we encountered and on which processes and tools worked well for us.

  5. Nodal aberration theory for wide-field asymmetric optical systems

    Science.gov (United States)

    Chen, Yang; Cheng, Xuemin; Hao, Qun

    2016-10-01

    Nodal Aberration Theory (NAT) was used to calculate the zero field position in the Full Field Display (FFD) for a given aberration term. Aiming at wide-field, non-rotationally symmetric decentered optical systems, we have presented the nodal geography behavior of the family of third-order and fifth-order aberrations. Meanwhile, we have calculated the wavefront aberration expressions when one optical element in the system is tilted and is not at the entrance pupil. Using a three-piece cellphone lens example in the optical design software CodeV, the nodal geography is verified under several situations, and the wavefront aberrations are calculated when the optical element is tilted. The properties of the nodal aberrations are analyzed using Fringe Zernike coefficients, which are directly related to the wavefront aberration terms and are usually obtained by real ray tracing and wavefront surface fitting.

  6. Heavy mineral analysis for assessing the provenance of sandy sediment in the San Francisco Bay Coastal System

    Science.gov (United States)

    Wong, F. L.; Woodrow, D. L.; McGann, M. L.

    2012-12-01

    Heavy minerals have been used to trace the sources and transportation of sandy sediment in San Francisco Bay and nearby coastal areas since the 1960s. We have the opportunity to sample similar environments and revisit the heavy mineral populations under the current San Francisco Coastal System study of the provenance of beach sand. Most of the sandy sediment in San Francisco Bay can be traced to distant sources including the Sierra Nevada batholith and associated terranes with local contributions from the Franciscan Complex. Heavy minerals from Sierran sources include ordinary hornblende, metamorphic amphiboles, and hypersthene while those from the Franciscan Complex include other types of pyroxene, epidote, basaltic hornblende, and glaucophane... Tertiary strata and volcanics in the surrounding hills and displaced Sierran rocks found on the continental shelf west of the San Andreas Fault Zone introduce similar minerals, but perhaps in a lesser volume to be identified as major contributors... The primary result of cluster analysis of heavy minerals separated from sand-sized sediment taken within San Francisco Bay, the adjacent continental shelf, local beaches, cliffs outside the Golden Gate, and upstream drainages indicate a widespread occurrence of sediment traceable to the Sierra Nevada. A second cluster of samples identifies samples of mixed Sierran and Franciscan lineage within the strait of the Golden Gate, on the San Francisco bar, and on coastal beaches. Sediment samples with predominantly Franciscan mineral content appear on beaches around Point Reyes, possibly transported from the Russian River. The heavy mineral composition supports transport from the east, through San Francisco Bay and out the Golden Gate to the San Francisco bar and southward.

  7. New System for Secure Cover File of Hidden Data in the Image Page within Executable File Using Statistical Steganography Techniques

    CERN Document Server

    Islam, Rafiqul; Zaidan, A A; Zaidan, B B

    2010-01-01

    Traditional methods were once sufficient to protect information, since their simplicity matched the threats of the past, but with the progress of information technology it has become easy to attack systems, and protection methods must evolve in parallel with the differing techniques used by hackers; embedding methods may also be placed under surveillance by system managers in organizations that require a high level of security. This fact motivates research on new hiding methods and on new cover objects in which hidden information is embedded. One result of this research is embedding information in executable files, but using an executable file as a cover raises challenges that must be taken into consideration: first, any change made to the file may be detected by antivirus software; second, the file must remain functional. In this paper, a new information hiding system is presented. The aim of the proposed system is to ...

  8. Utilizing Lustre file system with dCache for CMS analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Y; Kim, B; Fu, Y; Bourilkov, D; Avery, P [Department of Physics, University of Florida, Gainesville, FL 32611 (United States); Rodriguez, J L [Department of Physics, Florida International University, Miami, FL 33199 (United States)

    2010-04-01

    This paper presents storage implementations that utilize the Lustre file system for CMS analysis with direct POSIX file access while keeping dCache as the frontend for data distribution and management. We describe two implementations that integrate dCache with Lustre and how to enable user data access without going through the dCache file read protocol. Our initial CMS analysis job measurement and transfer performance results are shown and the advantages of different implementations are briefly discussed.

  9. GPS Seismology and Earthquake Early Warning along the Southern San Andreas Fault System

    Science.gov (United States)

    Bock, Y.; Jackson, M. E.

    2007-05-01

    We are in the process of upgrading CGPS stations in southern California to high-rate (1-10 Hz) real-time (latency Cerro Prieto faults, the region of highest strain rate in southern California and the narrowest part of the North America-Pacific plate boundary. South of the Big Bend, the zero velocity contour (the "boundary") between the North America and Pacific plates does not follow the SAF segment, but rather is located just east of the San Jacinto Fault (SJF) segment and then follows the Imperial and Cerro Prieto faults. The primary purpose of the real-time network is to serve as an early warning system for a large earthquake along the southern San Andreas Fault System by quickly measuring coseismic displacements, and also for GPS seismology to rapidly measure the associated dynamic displacements. The network, called the California Real Time Network (CRTN), also supplies data for real GPS surveys within the region and will provide rapid displacement waveforms to the SCEC data archive at Caltech in the event of a medium to large earthquake. Although the real-time data flow is currently at 1 Hz, the PBO stations have an internal buffer that records GPS data at a 10 Hz rate.

  10. Mineralogy of Faults in the San Andreas System That are Characterized by Creep

    Science.gov (United States)

    Moore, D. E.; Rymer, M. J.; McLaughlin, R. J.; Lienkaemper, J. J.

    2011-12-01

    The San Andreas Fault Observatory at Depth (SAFOD) is a deep-drilling program sited in the central creeping section of the San Andreas Fault (SAF) near Parkfield, California. Core was recovered from two locations at ~2.7 km vertical depth that correspond to the places where the well casing is being deformed in response to fault creep. The two creeping strands are narrow zones of fault gouge, 1.6 and 2.6 m in width, respectively, that are the products of shear-enhanced metasomatic reactions between serpentinite tectonically entrained in the fault and adjoining sedimentary wall rocks. Both gouge zones consist of porphyroclasts of serpentinite and sedimentary rock dispersed in a foliated matrix of Mg-rich, saponitic ± corrensitic clays, and porphyroclasts of all types are variably altered to the same Mg-rich clays as the gouge matrix. Some serpentinite porphyroclasts also contain the assemblage talc + actinolite + chlorite + andradite garnet, which is characteristic of reaction zones developed between ultramafic and crustal rocks at greenschist- to subgreenschist-facies conditions. The presence of this higher-temperature assemblage raises the possibility that the serpentinite and its alteration products may extend to significantly greater depths in the fault. Similar fault gouge has also been identified in a serpentinite outcrop near the drill site that forms part of a sheared serpentinite body mapped for several kilometers within the creeping section of the SAF. The SAFOD core thus supports the long-held view that serpentinite is implicated in the origin of creep, as does at least one other creeping fault of the San Andreas System. The Bartlett Springs Fault (BSF) is a right-lateral strike-slip fault located north of San Francisco, California. Its slip rate currently is estimated to be 6 +/- 2 mm/yr, and along a segment that crosses Lake Pillsbury half the surface slip rate is taken up by creep. An exposure of this fault segment near Lake Pillsbury consists of

  11. The storage system of PCM based on random access file system

    Science.gov (United States)

    Han, Wenbing; Chen, Xiaogang; Zhou, Mi; Li, Shunfen; Li, Gezi; Song, Zhitang

    2016-10-01

    Emerging memory technologies such as phase change memory (PCM) offer fast, random access to persistent storage with better scalability. Establishing PCM in the storage hierarchy to narrow the performance gap is a hot topic of academic and industrial research. However, existing file systems do not perform well on emerging PCM storage, because they access the storage medium through a slow, block-based interface. In this paper, we propose a novel file system, RAFS, built for an embedded platform, to exploit the performance of PCM. We attach PCM chips to the memory bus and build RAFS on the physical address space. In the proposed file system, we simplify the traditional system architecture to eliminate block-related operations and layers. Furthermore, we adopt memory mapping and bypass the page cache to reduce copy overhead between the process address space and the storage device. XIP mechanisms are also supported in RAFS. To the best of our knowledge, we are among the first to implement a file system on real PCM chips. We have analyzed and evaluated its performance with the IOZONE benchmark tools. Our experimental results show that RAFS on PCM outperforms Ext4fs on SDRAM for small record lengths. Based on DRAM, RAFS is significantly faster than Ext4fs, by 18% to 250%.
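The copy-avoidance idea in this abstract (mapping storage directly into the process address space so accesses skip an extra buffer copy through read/write calls) can be illustrated with a generic POSIX-style memory mapping. This is a minimal sketch over an ordinary file, not the RAFS implementation or real PCM hardware:

```python
import mmap
import os
import tempfile

# Write a small file, then access it via a memory mapping rather than
# read()/write() calls, so no extra copy passes through a userspace buffer.
fd, path = tempfile.mkstemp()
os.write(fd, b"persistent bytes on byte-addressable storage")
os.close(fd)

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as m:
        # Loads and stores go straight to the mapped pages; on a
        # byte-addressable medium like PCM this is XIP-style access.
        first_word = bytes(m[:10])
        m[:10] = b"PERSISTENT"   # in-place update, no write() copy

with open(path, "rb") as f:
    data = f.read()              # the mapped store reached the file
os.remove(path)
```

The same pattern underlies page-cache bypass: the mapping is the buffer, so there is nothing to copy into or flush out of a separate cache layer.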

  12. San Juan National Forest Land Management Planning Support System (LMPSS) requirements definition

    Science.gov (United States)

    Werth, L. F. (Principal Investigator)

    1981-01-01

    The role of remote sensing data as it relates to a three-component land management planning system (geographic information, data base management, and planning model) can be understood only when user requirements are known. Personnel at the San Juan National Forest in southwestern Colorado were interviewed to determine data needs for managing and monitoring timber, rangelands, wildlife, fisheries, soils, water, geology and recreation facilities. While all the information required for land management planning cannot be obtained using remote sensing techniques, valuable information can be provided for the geographic information system. A wide range of sensors such as small and large format cameras, synthetic aperture radar, and LANDSAT data should be utilized. Because of the detail and accuracy required, high altitude color infrared photography should serve as the baseline data base and be supplemented and updated with data from the other sensors.

  13. Secure File Allocation and Caching in Large-scale Distributed Systems

    DEFF Research Database (Denmark)

    Di Mauro, Alessio; Mei, Alessandro; Jajodia, Sushil

    2012-01-01

    In this paper, we present a file allocation and caching scheme that guarantees high assurance, availability, and load balancing in a large-scale distributed file system that can support dynamic updates of authorization policies. The scheme uses fragmentation and replication to store files with high security requirements in a system composed of a majority of low-security servers. We develop mechanisms to fragment files, to allocate them into multiple servers, and to cache them as close as possible to their readers while preserving the security requirement of the files, providing load-balancing, and reducing delay of read operations. The system offers a trade-off between performance and security that is dynamically tunable according to the current level of threat. We validate our mechanisms with extensive simulations in an Internet-like network.
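The fragment-and-replicate step described in this abstract can be sketched generically. The function names and the round-robin placement below are illustrative assumptions, not the paper's actual scheme:

```python
def fragment(data: bytes, n_fragments: int) -> list:
    """Split a byte string into n roughly equal fragments."""
    size = -(-len(data) // n_fragments)  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def replicate(fragments, servers, copies=2):
    """Place each fragment on `copies` distinct servers (round-robin).
    Returns {server: [fragment_index, ...]}."""
    placement = {s: [] for s in servers}
    for i in range(len(fragments)):
        for c in range(copies):
            placement[servers[(i + c) % len(servers)]].append(i)
    return placement

data = b"high-security payload" * 8
frags = fragment(data, 4)
plan = replicate(frags, ["s1", "s2", "s3"], copies=2)

# Fragments reassemble losslessly, and no single low-security server
# holds the whole file.
assert b"".join(frags) == data
```

A real scheme would add secret sharing or encryption per fragment and threat-aware placement; this only shows the mechanical split-and-place skeleton.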

  14. Geomorphic evidence of active tectonics in the San Gorgonio Pass region of the San Andreas Fault system: an example of discovery-based research in undergraduate teaching

    Science.gov (United States)

    Reinen, L. A.; Yule, J. D.

    2014-12-01

    Student-conducted research in courses during the first two undergraduate years can increase learning and improve student self-confidence in scientific study, and is recommended for engaging and retaining students in STEM fields (PCAST, 2012). At Pomona College, incorporating student research throughout the geology curriculum tripled the number of students conducting research prior to their senior year that culminated in a professional conference presentation (Reinen et al., 2006). Here we present an example of discovery-based research in Neotectonics, a second-tier course predominantly enrolling first- and second-year students; describe the steps involved in the four-week project; and discuss early outcomes of student confidence, engagement and retention. In the San Gorgonio Pass region (SGPR) in southern California, the San Andreas fault undergoes a transition from predominantly strike-slip to a complex system of faults with significant dip-slip, resulting in diffuse deformation and raising the question of whether a large earthquake on the San Andreas could propagate through the region (Yule, 2009). In spring 2014, seven students in the Neotectonics course conducted original research investigating quantifiable geomorphic evidence of tectonic activity in the SGPR. Students addressed questions of [1] unequal uplift in the San Bernardino Mountains, [2] fault activity indicated by stream knick points, [3] the role of fault style on mountain front sinuosity, and [4] characteristic earthquake slip determined via fault scarp degradation models. Students developed and revised individual projects, collaborated with each other on methods, and presented results in a public forum. A final class day was spent reviewing the projects and planning future research directions. Pre- and post-course surveys show increases in students' self-confidence in the design, implementation, and presentation of original scientific inquiries. 
5 of 6 eligible students participated in research the

  15. 76 FR 43206 - Electronic Tariff Filing System (ETFS)

    Science.gov (United States)

    2011-07-20

    ... public reviewing the tariff by including some descriptive information on the Title page of the tariff... nondominant carriers to research their previously filed tariff revisions to include different transmittal... by revising paragraphs (b) and (e) to read as follows: Sec. 61.14 Method of filing...

  16. Geophysical evidence for Quaternary deformation within the offshore San Andreas Fault System, Point Reyes Peninsula, California

    Science.gov (United States)

    Stozek, B.

    2010-12-01

    Our previous work studying the rate and style of uplift of marine terraces on the Point Reyes Peninsula indicates the peninsula has been undergoing differential uplift due to interacting fault geometries in the offshore zone. To better understand offshore fault interactions, recently collected mini-sparker seismic reflection data acquired by the USGS and multi-beam bathymetric data acquired by California State University at Monterey Bay within the 3-mile (5 km) limit offshore of the Point Reyes Peninsula, are being used to reinterpret the tectono-stratigraphic framework of the San Andreas fault (SAF) system. Eight offshore Shell exploratory well logs that provide seismic velocity and paleontologic data are being used in conjunction with industry multichannel (deep-penetration) seismic reflection profiles to provide age control and extend the analyses beyond 3 mile limit of the high-resolution data. Isopach and structure maps of key stratigraphic intervals were generated to show how the stratigraphic units are influenced by fault interactions. These datasets allow for new interpretations of the offshore Neogene stratigraphy and the evolution of the Point Reyes fault, an offshore component of the SAF system. Observations of Quaternary sedimentary sequences in the high-resolution mini-sparker dataset provide evidence of localized areas of subsidence and uplift within the offshore SAF system. For example, the most recent angular unconformity above the Point Reyes fault deepens to the north where the fault bends from an east-west to a more northerly orientation. Stratigraphic horizons in the offshore zone are correlated with the same geologic units exposed on the Point Reyes Peninsula. Both unconformity-bounded sedimentary sequences mapped on reflection profiles in the offshore and marine terraces that have been uplifted on the peninsula are tied to sea-level fluctuations. 
Our new interpretation of the Point Reyes fault zone will be incorporated into a kinematic fault

  17. 77 FR 34376 - Combined Notice of Filings #1

    Science.gov (United States)

    2012-06-11

    ...-001. Applicants: Southern California Edison Company. Description: Amendment to GIA and DSA for San.... Applicants: Southern California Edison Company. Description: GIA and Distribution Service Agmt Palm Springs... Transmission System Operator, Inc. Description: G551 Amended GIA to be effective 6/2/2012. Filed Date:...

  18. Bathymetry--Offshore of San Francisco, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of DS 781 presents data for the bathymetry and shaded-relief maps of Offshore of San Francisco, California (raster data file is included in...

  19. Habitat--Offshore of San Francisco, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of DS 781 presents data for the habitat map of the seafloor of the Offshore of San Francisco map area, California. The vector data file is included in...

  1. Contours--Offshore of San Francisco, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of DS 781 presents data for the bathymetric contours for several seafloor maps of the Offshore of San Francisco map area, California. The vector data file...

  2. Bottom-up, decision support system development: a wetland salinity management application in California's San Joaquin Valley

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Nigel W.T.

    2006-05-10

    Seasonally managed wetlands in the Grasslands Basin of California's San Joaquin Valley provide food and shelter for migratory wildfowl during winter months and sport for waterfowl hunters during the annual duck season. Surface water supplied to these wetlands contains salt which, when drained to the San Joaquin River during the annual drawdown period, negatively impacts downstream agricultural riparian water diverters. Recent environmental regulation, limiting salinity discharges to the San Joaquin River and primarily targeting agricultural non-point sources, now addresses return flows from seasonally managed wetlands. Real-time water quality management has been advocated as a means of matching wetland return flows to the assimilative capacity of the San Joaquin River. Past attempts to build environmental monitoring and decision support systems to implement this concept have failed for reasons that are discussed in this paper. These reasons are discussed in the context of more general challenges facing the successful implementation of environmental monitoring, modelling and decision support systems. The paper then provides details of a current research and development project which, when fully implemented, will provide wetland managers with the means of matching salt exports with the available assimilative capacity of the San Joaquin River. Manipulation of the traditional wetland drawdown comes at a potential cost to the sustainability of optimal wetland moist soil plant habitat in these wetlands - hence the project provides appropriate data and a feedback and response mechanism for wetland managers to balance improvements to San Joaquin River quality with internally-generated information on the health of the wetland resource. The author concludes the paper by arguing that the architecture of the current project decision support system, when coupled with recent advances in environmental data acquisition, data processing and information dissemination technology, holds

  3. SAN CARLOS APACHE PAPERS.

    Science.gov (United States)

    ROESSEL, ROBERT A., JR.

    THE FIRST SECTION OF THIS BOOK COVERS THE HISTORICAL AND CULTURAL BACKGROUND OF THE SAN CARLOS APACHE INDIANS, AS WELL AS AN HISTORICAL SKETCH OF THE DEVELOPMENT OF THEIR FORMAL EDUCATIONAL SYSTEM. THE SECOND SECTION IS DEVOTED TO THE PROBLEMS OF TEACHERS OF THE INDIAN CHILDREN IN GLOBE AND SAN CARLOS, ARIZONA. IT IS DIVIDED INTO THREE PARTS--(1)…

  4. Performance and Scalability Evaluation of the Ceph Parallel File System

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Feiyi [ORNL; Nelson, Mark [Inktank Storage, Inc.; Oral, H Sarp [ORNL; Settlemyer, Bradley W [ORNL; Atchley, Scott [ORNL; Caldwell, Blake A [ORNL; Hill, Jason J [ORNL

    2013-01-01

    Ceph is an open-source, emerging parallel distributed file and storage system technology. By design, Ceph assumes it is running on unreliable, commodity storage and network hardware, and it provides reliability and fault tolerance through controlled object placement and data replication. We evaluated the Ceph technology for scientific high-performance computing (HPC) environments. This paper presents our evaluation methodology, experiments, results, and observations, mostly from parallel I/O performance and scalability perspectives. Our work made two unique contributions. First, our evaluation was performed under a realistic setup for a large-scale capability HPC environment using a commercial high-end storage system. Second, our path of investigation, tuning efforts, and findings made direct contributions to Ceph's development and improved its code quality, scalability, and performance. These changes should benefit both the Ceph and HPC communities at large. Throughout the evaluation, we observed that Ceph is still an evolving technology under fast-paced development, showing great promise.

  5. AgRISTARS: Renewable resources inventory. Land information support system implementation plan and schedule. [San Juan National Forest pilot test

    Science.gov (United States)

    Yao, S. S. (Principal Investigator)

    1981-01-01

    The planning and scheduling of the use of remote sensing and computer technology to support the land management planning effort at the national forests level are outlined. The task planning and system capability development were reviewed. A user evaluation is presented along with technological transfer methodology. A land management planning pilot test of the San Juan National Forest is discussed.

  6. Predictive model of San Andreas fault system paleogeography, Late Cretaceous to early Miocene, derived from detailed multidisciplinary conglomerate correlations

    Science.gov (United States)

    Burnham, Kathleen

    2009-01-01

    Paleogeographic reconstruction of the region of the San Andreas fault system in western California, USA, was hampered for more than two decades by the apparent incompatibility of authoritative lithologic correlations. These led to disparate estimates of dextral strike-slip offsets across the San Andreas fault, notably 315 km between Pinnacles and Neenach Volcanics, versus 563 km offset between Anchor Bay and Eagle Rest peak. Furthermore, one section of the San Andreas fault between Pinnacles and Point Reyes had been reported to have six pairs of features showing only ~ 30 km offset, while several younger features in that same area were reported consistent with ~ 315 km offset. Estimates of total dextral slip on the adjoining San Gregorio fault have ranged from 5 km to 185 km. Sixteen Upper Cretaceous and Paleogene conglomerates of the California Coast Ranges, from Anchor Bay to Simi Valley, were included in a multidisciplinary study centered on identification of matching unique clast varieties, rather than on simply counting general clast types. Detailed analysis verified the prior correlation of the Upper Cretaceous strata of Anchor Bay at Anchor Bay with a then-unnamed conglomerate at Highway 92 and Skyline Road (south of San Francisco); and verified that the Paleocene or Eocene Point Reyes Conglomerate at Point Reyes is a tectonically displaced segment of the Carmelo Formation of Point Lobos (near Monterey). The work also led to three new correlations: Point Reyes Conglomerate with granitic source rock at Point Lobos; a magnetic anomaly at Black Point (near Sea Ranch) with a magnetic anomaly near San Gregorio; and strata of Anchor Bay with previously established source rock, the potassium-poor Logan Gabbro of Eagle Rest peak, at a more recently recognized subsurface location just east of the San Gregorio fault, south of San Gregorio. 
From these correlations, a Late Cretaceous to early Oligocene paleogeography was constructed which was unique in utilizing modern

  7. NVFAT: A FAT-Compatible File System with NVRAM Write Cache for Its Metadata

    Science.gov (United States)

    Doh, In Hwan; Lee, Hyo J.; Moon, Young Je; Kim, Eunsam; Choi, Jongmoo; Lee, Donghee; Noh, Sam H.

    File systems make use of the buffer cache to enhance their performance. Traditionally, part of DRAM, which is volatile memory, is used as the buffer cache. In this paper, we consider the use of Non-Volatile RAM (NVRAM) as a write cache for the metadata of the file system in embedded systems. NVRAM is a state-of-the-art memory that provides both non-volatility and random byte addressability. By employing NVRAM as a write cache for dirty metadata, we retain the same integrity as a file system that always synchronously writes its metadata to storage, while at the same time improving file system performance to the level of a file system that always writes asynchronously. To show quantitative results, we developed an embedded board with NVRAM and modified the VFAT file system provided in Linux 2.6.11 to accommodate the NVRAM write cache. We performed a wide range of experiments on this platform for various synthetic and realistic workloads. The results show that substantial reductions in execution time are possible from an application viewpoint. Another consequence of the write cache is its benefit at the FTL layer, leading to improved wear leveling of Flash memory and increased energy savings, which are important measures in embedded systems. From the real numbers obtained through our experiments, we show that wear leveling is improved considerably, and we also quantify the improvements in terms of energy.
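The write-back argument here (dirty metadata parked in NVRAM is already durable, so the synchronous write to slow storage can be deferred without losing integrity) can be sketched with a dictionary standing in for the NVRAM region. All names below are hypothetical illustrations, not the NVFAT code:

```python
class NVMetadataCache:
    """Toy write-back cache for file-system metadata.

    `nvram` stands in for a byte-addressable non-volatile region: once an
    update lands there it is considered durable, so the slow write to the
    storage device can happen later without risking metadata integrity.
    """
    def __init__(self, storage: dict):
        self.nvram = {}          # dirty metadata, durable on arrival
        self.storage = storage   # slow block device (simulated)

    def update(self, inode: int, meta: dict):
        self.nvram[inode] = meta        # fast, synchronous, durable

    def lookup(self, inode: int):
        # A dirty copy in NVRAM wins over the stale on-device copy.
        return self.nvram.get(inode, self.storage.get(inode))

    def flush(self):
        # Deferred write-back, e.g. when the cache fills or at unmount.
        self.storage.update(self.nvram)
        self.nvram.clear()

disk = {1: {"size": 0}}
cache = NVMetadataCache(disk)
cache.update(1, {"size": 4096})
assert cache.lookup(1) == {"size": 4096}   # visible before flush
assert disk[1] == {"size": 0}              # device not yet written
cache.flush()
assert disk[1] == {"size": 4096}           # write-back completed
```

The integrity property the paper claims corresponds to the `update` path being synchronous into durable memory, while performance comes from `flush` running off the critical path.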

  8. Design and Implementation of Two-Level Metadata Server in Small-Scale Cluster File System

    Institute of Scientific and Technical Information of China (English)

    LIU Yuling; YU Hongfen; SONG Weiwei

    2006-01-01

    The reliability and high performance of the metadata service are crucial to the storage architecture. A novel design of a two-level metadata-server file system (TTMFS) is presented, which provides high reliability and performance. The merits of both centralized management and distributed management are considered simultaneously in our design. In this file system, the advanced-metadata server is responsible for managing directory metadata and the whole namespace, while the double-metadata server is responsible for maintaining file metadata. This paper uses the Markov return model to analyze the reliability of the two-level metadata server. The experimental data indicate that the design can provide high throughput.
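The division of labor between the two servers can be sketched as a simple router: namespace and directory operations go to one store, per-file metadata to the other. This is an illustrative toy following the abstract's terminology, not the TTMFS design:

```python
class TwoLevelMetadataService:
    """Toy two-level metadata split: directory/namespace operations are
    served by the 'advanced' level, per-file attributes by the 'double'
    level (names follow the abstract; the logic is illustrative)."""
    def __init__(self):
        self.namespace = {}    # advanced-metadata server: path -> dir entry
        self.file_meta = {}    # double-metadata server: path -> file attrs

    def mkdir(self, path):
        self.namespace[path] = {"type": "dir", "children": []}

    def create(self, path, size=0):
        parent = path.rsplit("/", 1)[0] or "/"
        self.namespace[parent]["children"].append(path)  # namespace update
        self.file_meta[path] = {"size": size}            # file metadata

svc = TwoLevelMetadataService()
svc.mkdir("/data")
svc.create("/data/a.txt", size=128)
assert "/data/a.txt" in svc.namespace["/data"]["children"]
assert svc.file_meta["/data/a.txt"]["size"] == 128
```

Splitting the load this way lets directory-heavy traffic and file-attribute traffic scale independently, which is the reliability/performance trade the abstract argues for.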

  9. An energy systems view of sustainability: emergy evaluation of the San Luis Basin, Colorado.

    Science.gov (United States)

    Campbell, Daniel E; Garmestani, Ahjond S

    2012-03-01

    Energy Systems Theory (EST) provides a framework for understanding and interpreting sustainability. EST implies that "what is sustainable" for a system at any given level of organization is determined by the cycles of change originating in the next larger system and within the system of concern. The pulsing paradigm explains the ubiquitous cycles of change that apparently govern ecosystems, rather than succession to a steady state that is then sustainable. Therefore, to make robust decisions among environmental policies and alternatives, decision-makers need to know where their system resides in the cycles of change that govern it. This theory was examined by performing an emergy evaluation of the sustainability of a regional system, the San Luis Basin (SLB), CO. By 1980, the SLB contained a climax stage agricultural system with well-developed crop and livestock production along with food and animal waste processing. The SLB is also a hinterland in that it exports raw materials and primary products (exploitation stage) to more developed areas. Emergy indices calculated for the SLB from 1995 to 2005 revealed changes in the relative sustainability of the system over this time. The sustainability of the region as indicated by the renewable emergy used as a percent of total use declined 4%, whereas, the renewable carrying capacity declined 6% over this time. The Emergy Sustainability Index (ESI) showed the largest decline (27%) in the sustainability of the region. The total emergy used by the SLB, a measure of system well-being, was fairly stable (CV = 0.05). In 1997, using renewable emergy alone, the SLB could support 50.7% of its population at the current standard of living, while under similar conditions the U.S. could support only 4.8% of its population. In contrast to other indices of sustainability, a new index, the Emergy Sustainable Use Index (ESUI), which considers the benefits gained by the larger system compared to the potential for local environmental

  10. RRB's SVES Input File - Post Entitlement State Verification and Exchange System (PSSVES)

    Data.gov (United States)

    Social Security Administration — Several PSSVES request files are transmitted to SSA each year for processing in the State Verification and Exchange System (SVES). This is a first step in obtaining...

  11. MASS SMALL-FILE STORAGE FILE SYSTEM RESEARCH OVERVIEW

    Institute of Scientific and Technical Information of China (English)

    王铃惠; 李小勇; 张轶彬

    2012-01-01

    With the development of the Internet, the number of small files in storage is also growing geometrically, and traditional file systems no longer meet the storage performance requirements. Optimization for small-file storage, especially mass small-file storage, is therefore becoming more and more important. This paper first explains the necessity of optimizing small-file storage systems; it then analyzes the problems in present small-file storage and describes optimization approaches, introduces three representative file systems suited to small-file storage, and closes with a summary.

  12. The Fifth Workshop on HPC Best Practices: File Systems and Archives

    Energy Technology Data Exchange (ETDEWEB)

    Hick, Jason; Hules, John; Uselton, Andrew

    2011-11-30

    The workshop on High Performance Computing (HPC) Best Practices on File Systems and Archives was the fifth in a series sponsored jointly by the Department of Energy (DOE) Office of Science and the DOE National Nuclear Security Administration. The workshop gathered technical and management experts on the operation of HPC file systems and archives from around the world. Attendees identified and discussed best practices in use at their facilities and documented their findings for the DOE and HPC community in this report.

  13. A Heat Warning System to Reduce Heat Illness in San Diego County

    Science.gov (United States)

    Tardy, A. O.; Corcus, I.; Guirguis, K.; Gershunov, A.; Basu, R.; Stepanski, B.

    2016-12-01

    The National Weather Service (NWS) has for many years issued official heat alerts to the public and to decision-making partners, developing a single criterion or regional criteria from heat indices that combine temperature and humidity. The criteria have typically relied on fixed thresholds and did not consider the impact of a particular heat episode, nor did they factor in seasonality, population acclimatization, or impacts on the most vulnerable subgroups. In 2013, the NWS San Diego office began modifying its criteria to account for local climatology, with much less dependence on humidity or the heat index. These local changes were based on initial findings from the California Department of Public Health's EpiCenter California Injury Data Online system (EPIC), which documents heat health impacts. The Scripps Institution of Oceanography (SIO), in collaboration with the California Environmental Protection Agency's Office of Environmental Health Hazard Assessment and the NWS, completed a study of hospital visits during heat waves in California showing that significant health impacts occurred in the past when no regional heat warning was issued. The results therefore supported the need for an exploratory project to implement significant modification of the traditional local criteria. To understand the impacts of heat on community health, medical outcome data were provided by the County of San Diego Emergency Medical Services Branch (EMS), supplied by the County's Public Health Officer to monitor heat-related illness and injury daily during specific heat episodes. The data were combined with SIO research to inform the modification of local NWS heat criteria and establish trigger points to pilot new procedures for the issuance of heat alerts. Finally, procedures were customized for each of the county health departments in the NWS area of responsibility across extreme southwest California counties in collaboration with their Offices of Emergency Services (OES). The

  14. Analysis of fish diversion efficiency and survivorship in the fish return system at San Onofre Nuclear Generating Station

    OpenAIRE

    Love, Milton S.; Sandhu, Meenu; Stein, Jeffrey; Herbinson, Kevin T.; Moore, Robert H; Mullin, Michael; Stephens, John S.

    1989-01-01

    This study examined the efficiency of fish diversion and survivorship of diverted fishes in the San Onofre Nuclear Generating Station Fish Return System in 1984 and 1985. Generally, fishes were diverted back to the ocean with high frequency, particularly in 1984. Most species were diverted at rates of 80% or more. Over 90% of the most abundant species, Engraulis mordax, were diverted. The system worked particularly well for strong-swimming forms such as Paralabrax clathratus, Atherinopsis cal...

  15. Design of NAND FLASH File System Based on Loss of Balance Algorithm

    Directory of Open Access Journals (Sweden)

    Jinwu Ju

    2011-02-01

    Full Text Available NAND FLASH is a common large-capacity memory used in embedded systems. It is often used to store the operating system kernel and the file system. NAND FLASH memory supports only a limited number of block erases. When building a file system in NAND FLASH, a loss-of-balance (wear-leveling) method should therefore be adopted: balancing block erase operations across the device can extend the life of the NAND FLASH and improve overall system reliability. The paper analyzes the characteristics of how NAND FLASH works, presents a loss-of-balance algorithm for NAND FLASH memory, and gives the design and implementation of the algorithm.
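As a rough illustration of the idea, here is a minimal wear-leveling ("loss of balance") sketch in Python, assuming a simple allocator that always picks the least-erased free block; this is an invented example, not the paper's algorithm:

```python
# Minimal wear-leveling sketch (illustrative only): spread block erases
# evenly so no single NAND block wears out long before the others.

class WearLeveler:
    """Track per-block erase counts and allocate the least-worn free block."""

    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks
        self.free_blocks = set(range(num_blocks))

    def allocate(self):
        # Choose the free block with the fewest erases so wear spreads evenly.
        block = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(block)
        return block

    def erase(self, block):
        # Erasing frees the block and increments its wear counter.
        self.erase_counts[block] += 1
        self.free_blocks.add(block)
```

With this policy, repeated allocate/erase cycles keep the spread between the most- and least-erased blocks within one erase, which is the property a wear-leveled file system relies on.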

  16. Security in the CernVM File System and the Frontier Distributed Database Caching System

    CERN Document Server

    Dykstra, David

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using HTTP proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently both CVMFS and Frontier have added X.509-based integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  17. Security in the CernVM File System and the Frontier Distributed Database Caching System

    Energy Technology Data Exchange (ETDEWEB)

    Dykstra, D.; Blomer, J. [CERN

    2014-01-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using HTTP proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
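The integrity scheme both systems share can be illustrated with a small sketch: content is addressed by a secure hash, and the hash catalog itself is signed. Here an HMAC stands in for the real X.509/RSA signature, and the key and function names are illustrative assumptions, not either system's actual API:

```python
import hashlib
import hmac

# Sketch of CVMFS/Frontier-style integrity checking over untrusted caches:
# per-file secure hashes plus a signature over the hash catalog.

SIGNING_KEY = b"repository-signing-key"  # hypothetical key material

def publish(files):
    """Server side: return (catalog, signature) for {path: bytes} content."""
    catalog = {path: hashlib.sha256(data).hexdigest()
               for path, data in files.items()}
    serialized = "\n".join(f"{p} {h}" for p, h in sorted(catalog.items())).encode()
    return catalog, hmac.new(SIGNING_KEY, serialized, hashlib.sha256).hexdigest()

def verify(path, data, catalog, signature):
    """Client side: check the catalog signature, then the per-file hash."""
    serialized = "\n".join(f"{p} {h}" for p, h in sorted(catalog.items())).encode()
    expected = hmac.new(SIGNING_KEY, serialized, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # catalog was tampered with in transit
    return catalog.get(path) == hashlib.sha256(data).hexdigest()
```

Because the proxy caches only relay content, a client that verifies both the signed catalog and the per-file hash detects any modification, without the caches needing to be trusted.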

  18. LHCB: Non-POSIX File System for the LHCB Online Event Handling

    CERN Multimedia

    Garnier, J-C; Cherukuwada, S S

    2010-01-01

    LHCb aims to use its O(20000) CPU cores in the High Level Trigger (HLT) and its 120 TB Online storage system for data reprocessing during LHC shutdown periods. These periods can last between a few days and several weeks during the winter shutdown, or even only a few hours during beam interfill gaps. These jobs run on files which are staged in from tape storage to the local storage buffer. The result is again one or more files. Efficient file writing and reading is essential for the performance of the system. Rather than using a traditional shared filesystem such as NFS or CIFS, we have implemented a custom, light-weight, non-POSIX file system for the handling of these files. Streaming this file system for data access allows high performance to be obtained while keeping resource consumption low, and adds features not found in NFS such as high availability and transparent failover of the read and write services. The writing part of this file system is in successful use for the Online, real-time w...

  19. University of California San Francisco automated radiology department system-without picture archival and communication system (PACS)

    Science.gov (United States)

    Quintin, June A.; Simborg, Donald W.

    1982-01-01

    A fully automated and comprehensive Radiology Department system was implemented in the Fall of 1980; it tightly integrates the multiple functions of a large Radiology Department in a major medical center. The major components include patient registration, film tracking, management statistics, patient flow control, radiologist reporting, pathology coding and billing. The highly integrated design allows sharing of critical files to reduce redundancy and errors in communication and allows rapid dissemination of information throughout the department. As one node of an integrated distributed hospital system, the system incorporates information from central hospital functions such as patient identification, and makes reports and other information available to other hospital systems. The system is implemented on a Data General Eclipse S/250 using the MIIS operating system. The management of a radiology department has become sufficiently complex that the application of computer techniques to the smooth operation of the department has become almost a necessity. This system provides statistics on room utilization, technologist productivity, and radiologist activity. Room utilization graphs are a valuable aid for staffing and scheduling of technologists, as well as for analyzing the appropriateness of radiologic equipment in a department. Daily reports summarize, by radiology section, exams not dictated. File room reports indicate which film borrowers are delinquent in returning films for 24 hours, 48 hours, and one week. Letters to the offenders are automatically generated on the high-speed line printer. Although all radiology departments have similar needs, customization is likely to be required to meet the specific priorities and needs of any individual department. It is important in choosing a system vendor that such flexibility be available. If appropriately designed, a system will provide considerable improvements in efficiency and effectiveness.

  20. Virtual file system on NoSQL for processing high volumes of HL7 messages.

    Science.gov (United States)

    Kimura, Eizen; Ishihara, Ken

    2015-01-01

    The Standardized Structured Medical Information Exchange (SS-MIX) is intended to be the standard repository for HL7 messages; it depends on a local file system, however, and its scalability is limited. We implemented a virtual file system using NoSQL to incorporate modern computing technology into SS-MIX and to allow the system to integrate local patient IDs from different healthcare systems into a universal system. We discuss its implementation using the database MongoDB and describe its performance in a case study.
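A minimal sketch of the idea, with a Python list standing in for the MongoDB collection; the field names, path layout, and ID-mapping scheme below are assumptions for illustration, not the authors' implementation:

```python
# Sketch of a virtual file system over a document store: HL7 messages are
# stored under virtual paths keyed by a universal patient ID that unifies
# local IDs from different healthcare systems.

class VirtualHL7Store:
    """Map (facility, local patient ID) to a universal ID and store messages."""

    def __init__(self):
        self.collection = []   # stand-in for a NoSQL (e.g., MongoDB) collection
        self.id_map = {}       # (facility, local_id) -> universal id
        self.next_uid = 1

    def universal_id(self, facility, local_id):
        # Integrate local patient IDs from different systems into one namespace.
        key = (facility, local_id)
        if key not in self.id_map:
            self.id_map[key] = f"U{self.next_uid:06d}"
            self.next_uid += 1
        return self.id_map[key]

    def put(self, facility, local_id, message_type, body):
        """Store one HL7 message under a virtual path and return the path."""
        uid = self.universal_id(facility, local_id)
        path = f"/{uid}/{message_type}"
        self.collection.append({"path": path, "body": body})
        return path

    def get(self, path):
        """Return all message bodies stored under a virtual path, in order."""
        return [doc["body"] for doc in self.collection if doc["path"] == path]
```

The point of the indirection is that clients keep addressing messages by path, as with the file-system-based SS-MIX, while the backing store scales like a document database.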

  1. A deep crustal fluid channel into the San Andreas Fault system near Parkfield, California

    Science.gov (United States)

    Becken, M.; Ritter, O.; Park, S.K.; Bedrosian, P.A.; Weckmann, U.; Weber, M.

    2008-01-01

    Magnetotelluric (MT) data from 66 sites along a 45-km-long profile across the San Andreas Fault (SAF) were inverted to obtain the 2-D electrical resistivity structure of the crust near the San Andreas Fault Observatory at Depth (SAFOD). The most intriguing feature of the resistivity model is a steeply dipping upper crustal high-conductivity zone flanking the seismically defined SAF to the NE, which widens into the lower crust and appears to be connected to a broad conductivity anomaly in the upper mantle. Hypothesis tests of the inversion model suggest that the upper and lower crustal and upper-mantle anomalies may be interconnected. We speculate that the high conductivities are caused by fluids and may represent a deep-rooted channel for crustal and/or mantle fluid ascent. Based on the chemical analysis of well waters, it was previously suggested that fluids can enter the brittle regime of the SAF system from the lower crust and mantle. At high pressures, these fluids can contribute to fault-weakening at seismogenic depths. These geochemical studies predicted the existence of a deep fluid source and a permeable pathway through the crust. Our resistivity model images a conductive pathway, which penetrates the entire crust, in agreement with the geochemical interpretation. However, the resistivity model also shows that the upper crustal branch of the high-conductivity zone is located NE of the seismically defined SAF, suggesting that the SAF does not itself act as a major fluid pathway. This interpretation is supported by both the location of the upper crustal high-conductivity zone and recent studies within the SAFOD main hole, which indicate that pore pressures within the core of the SAF zone are not anomalously high, that mantle-derived fluids are minor constituents of the fault-zone fluid composition, and that both the volume of mantle fluids and the fluid pressure increase to the NE of the SAF. We further infer from the MT model that the resistive Salinian block

  2. Data-Derived Coulomb Stress Rate Uncertainties of the San Andreas Fault System

    Science.gov (United States)

    Smith-Konter, B. R.; Solis, T.; Sandwell, D. T.

    2008-12-01

    Interseismic stress rates of the San Andreas Fault System (SAFS), derived from the present-day geodetic network spanning the North American-Pacific plate boundary, range from 0.5 - 7 MPa/100yrs and vary as a function of fault locking depth, slip rate, and fault geometry. Calculations of accumulated stress over several earthquake cycles, consistent with coseismic stress drops of ~3-7 MPa, also largely depend on the rupture history of a fault over the past few thousand years. However, uncertainties in paleoseismic slip history, combined with ongoing discrepancies in geologic/geodetic slip rates and variable locking depths throughout the earthquake cycle, can introduce uncertainties in stress rate and in present-day stress accumulation calculations. For example, a number of recent geodetic studies have challenged geologic slip rates along the SAFS, varying by as much as 25% of the total slip budget; geodetically determined locking depths, while within the bounds of seismicity, typically have uncertainties that range from 0.5 - 5 km; uncertainties in paleoseismic chronologies can span several decades, with slip uncertainties on the order of a few meters. Here we assess the importance of paleoseismic accuracy, variations in slip rates, and basic stress model components using a 3-D semi-analytic time-dependent deformation model of the SAFS. We perform a sensitivity analysis of Coulomb stress rate and present-day accumulated stress with respect to the six primary parameters of our model: slip rate, locking depth, mantle viscosity, elastic plate thickness, coefficient of friction, and slip history. In each case, we calculate a stress derivative with respect to a parameter over the estimated range of uncertainty, as well as any tradeoffs in parameters. Our results suggest that a 25% variation, or exchange, of slip rates between the primary SAFS and faults of the Eastern California Shear Zone (ECSZ) yields a respective decrease (SAFS) and increase (ECSZ) of stress rate by
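The sensitivity analysis described here, a stress derivative with respect to each parameter over its uncertainty range, can be sketched generically with a central difference; the model and parameter names below are placeholders, not the authors' actual stress model:

```python
# Hedged sketch of parameter sensitivity via a central-difference derivative:
# perturb one model parameter up and down and difference the model output.

def sensitivity(model, params, name, delta):
    """Central-difference derivative of model(params) w.r.t. params[name]."""
    hi = dict(params, **{name: params[name] + delta})
    lo = dict(params, **{name: params[name] - delta})
    return (model(hi) - model(lo)) / (2 * delta)
```

Evaluating this derivative for each of the six parameters (slip rate, locking depth, viscosity, plate thickness, friction, slip history) over its estimated uncertainty range is what ranks their contributions to the total stress-rate uncertainty.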

  3. High resolution measurements of aseismic slip (creep) on the San Andreas fault system from Parkfield to San Francisco Bay area; 1966 to the present

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — These data provide measures of aseismic slip (creep) at approximately 40 sites located on the San Andreas, Hayward, and Calaveras faults in Central California from...

  4. San Francisco District Laboratory (SAN)

    Data.gov (United States)

    Federal Laboratory Consortium — Program Capabilities Food Analysis SAN-DO Laboratory has an expert in elemental analysis who frequently performs field inspections of materials. A recently acquired...

  5. Rehabilitation and certification of the PGPB Cactus-San Fernando gas pipeline system

    Energy Technology Data Exchange (ETDEWEB)

    Graciano, L.S. [Pemex Gas y Petroquimica Basica, Mexico City (Mexico); Clyne, A. [GE Energy PII Pipeline Solutions, Buenos Aires (Argentina); Cazenave, P.; Willis, S. [GE Energy PII Pipeline Solutions, Houston, TX (United States); Kania, R. [GE Energy Pipeline Solutions, Calgary, AB (Canada)

    2004-07-01

    The Cactus-San Fernando gas pipeline system is 650 km in length and was constructed in the late 1970s. The system transports more than 1100 million standard cubic feet per day of dry natural gas to electricity generators in Mexico. This paper described a project undertaken to re-validate the pipeline and demonstrate the future integrity of the pipeline system and ensure that it was suitable for operation to 1219 psig. Pipeline sections were inspected using high resolution magnetic flux leakage (MFL) in-line inspection (ILI) tools, and inertial mapping unit vehicles equipped for global positioning system (GPS) surveys. The combined inspections allowed the project team to accurately identify features of the pipeline that required repairs. External and internal corrosion were identified as the most prevalent defects. RSTRENG methodologies were used to investigate the interaction of individual corrosion anomalies. Corrosion patterns were compared, and above-ground survey data were used to establish the causes of both the external and internal corrosion, as well as to establish future corrosion growth rates. Decision tree analysis was then used to analyze the growth rates and to identify statistical differences between corrosion growth rates as a function of distance along the pipeline. After the ILI reports were generated, an integrity assessment was conducted to identify necessary repair options. Repair plans were then developed along with recommended re-inspection intervals for each section. After the integrity assessments were accepted by a certification company, field work was conducted to locate and measure defects. Defects characteristic of major volumetric welding flaws introduced during pipeline construction were identified and repaired with an epoxy sleeve technique. It was concluded that repairs needed to operate the pipeline at the requested pressure were accomplished within a period of 8 months. 7 refs., 2 tabs., 4 figs.
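One common integrity-management calculation behind such re-inspection intervals can be sketched as follows; the depth limit and safety factor below are invented for illustration and are not taken from the paper:

```python
# Hedged sketch: time for a corrosion defect to grow from its measured depth
# to an allowable limit, halved by a safety factor to set the next inspection.
# Depths and growth rates are expressed as fractions of wall thickness.

def reinspection_interval(depth_frac, growth_frac_per_yr,
                          limit_frac=0.8, safety_factor=2.0):
    """Years until the next in-line inspection for one corrosion defect."""
    if growth_frac_per_yr <= 0:
        raise ValueError("growth rate must be positive")
    years_to_limit = (limit_frac - depth_frac) / growth_frac_per_yr
    return max(years_to_limit / safety_factor, 0.0)
```

For example, a defect at 40% of wall thickness growing at 2% per year reaches an 80% limit in 20 years, giving a 10-year re-inspection interval with a safety factor of 2.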

  6. Enterprise Resource Planning (ERP) : a case study of Space and Naval Warfare Systems Center San Diego's Project Cabrillo

    OpenAIRE

    Hoffman, Dean M.; Oxendine, Eric

    2002-01-01

    Approved for public release; distribution unlimited This thesis examines the Enterprise Resource Planning (ERP) pilot implementation conducted at the Space and Naval Warfare Systems Center San Diego (SSC-SD), the first of four Department of the Navy (DON) pilot implementations. Specifically, comparisons are drawn between both successful and unsuccessful ERP implementations within private sector organizations and that of SSC-SD. Any commonalities in implementation challenges could be...

  7. Effect of Instrumentation Length and Instrumentation Systems: Hand Versus Rotary Files on Apical Crack Formation – An In vitro Study

    Science.gov (United States)

    Mahesh, MC; Bhandary, Shreetha

    2017-01-01

    Introduction Stresses generated during root canal instrumentation have been reported to cause apical cracks. The smaller, less pronounced defects like cracks can later propagate into vertical root fracture, when the tooth is subjected to repeated stresses from endodontic or restorative procedures. Aim This study evaluated occurrence of apical cracks with stainless steel hand files, rotary NiTi RaCe and K3 files at two different instrumentation lengths. Materials and Methods In the present in vitro study, 60 mandibular premolars were mounted in resin blocks with simulated periodontal ligament. Apical 3 mm of the root surfaces were exposed and stained using India ink. Preoperative images of root apices were obtained at 100x using stereomicroscope. The teeth were divided into six groups of 10 each. First two groups were instrumented with stainless steel files, next two groups with rotary NiTi RaCe files and the last two groups with rotary NiTi K3 files. The instrumentation was carried out till the apical foramen (Working Length-WL) and 1 mm short of the apical foramen (WL-1) with each file system. After root canal instrumentation, postoperative images of root apices were obtained. Preoperative and postoperative images were compared and the occurrence of cracks was recorded. Descriptive statistical analysis and Chi-square tests were used to analyze the results. Results Apical root cracks were seen in 30%, 35% and 20% of teeth instrumented with K-files, RaCe files and K3 files respectively. There was no statistical significance among three instrumentation systems in the formation of apical cracks (p=0.563). Apical cracks were seen in 40% and 20% of teeth instrumented with K-files; 60% and 10% of teeth with RaCe files and 40% and 0% of teeth with K3 files at WL and WL-1 respectively. For groups instrumented with hand files there was no statistical significance in number of cracks at WL and WL-1 (p=0.628). But for teeth instrumented with RaCe files and K3 files

  8. Organization of the Inverted Files in a Distributed Information Retrieval System Based on Thesauri.

    Science.gov (United States)

    Mazur, Zygmunt

    1986-01-01

    Describes how operations on local inverted files are to be modified in order to use them in distributed information retrieval systems based on thesauri. The presented rules may be viewed as the logical approach in implementing a distributed retrieval system consisting of n local retrieval systems. (Author/MBR)
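A toy sketch of the arrangement, assuming each of the n local retrieval systems holds its own inverted file and a query is broadcast to all of them and merged; the structure is assumed for illustration, not taken from the paper:

```python
from collections import defaultdict

# Minimal sketch of distributed retrieval over n local inverted files:
# each site indexes its own documents, and a query unions the local postings.

def build_inverted_file(docs):
    """docs: {doc_id: text} -> term -> set of doc_ids (one local inverted file)."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def distributed_search(term, local_indexes):
    """Broadcast the term to every local system and merge (site, doc_id) hits."""
    hits = set()
    for site, index in local_indexes.items():
        for doc_id in index.get(term.lower(), ()):
            hits.add((site, doc_id))
    return sorted(hits)
```

A thesaurus layer, as in the paper, would sit in front of `distributed_search`, expanding the query term into its preferred and related terms before the broadcast.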

  9. 76 FR 4001 - Foreign Trade Regulations (FTR): Mandatory Automated Export System Filing for All Shipments...

    Science.gov (United States)

    2011-01-21

    ... Commerce Census Bureau 15 CFR Part 30 Foreign Trade Regulations (FTR): Mandatory Automated Export System... Foreign Trade Regulations (FTR): Mandatory Automated Export System Filing for All Shipments Requiring... Automated Export System (AES) or through AESDirect for all shipments of used self-propelled...

  10. Electronic teaching files: seven-year experience using a commercial picture archiving and communication system.

    Science.gov (United States)

    Siegel, E; Reiner, B

    2001-06-01

    With the advent of electronic imaging and the internet, the ability to create, search, access, and archive digital imaging teaching files has dramatically improved. Despite the fact that a picture archival and communication system (PACS) has the potential to greatly simplify the creation of, archiving of, and access to a department or multifacility teaching file, this potential has not yet been satisfactorily realized in our own and most other PACS installations. Several limitations of the teaching file tools within our PACS have become apparent over time. At our facility, these have resulted in a substantially reduced role for the teaching file tools in conferences, daily teaching, and research. With the PACS at our institution, academic folders can only be created by the systems engineer, which often impedes the teaching process. Once these folders are created, multiple steps are required to identify the appropriate folders and subsequently save images. Difficulties exist for those attempting to search for the teaching file images: without pre-existing knowledge of the folder name and contents, it is difficult to query the system for specific images, because there is currently no fully satisfactory mechanism for categorizing, indexing, and searching cases using the PACS. There is currently no easy mechanism to save teaching, research, or clinical files onto a CD or other removable media, or to automatically strip demographic or other patient information from the images. PACS vendors should provide much more sophisticated tools to create and annotate teaching file images in an easy-to-use but standard format (possibly the Radiological Society of North America's Medical Image Resource Center [MIRC] format) that could be exchanged with other sites and other vendors' PAC systems. The privilege to create teaching or conference files should be given to individual radiologists, technologists, and other users, and an audit

  11. Development of 2D casting process CAD system based on PDF/image files

    Institute of Scientific and Technical Information of China (English)

    Tang Hongtao; Zhou Jianxin; Wang Lin; Liao Dunming; Tao Qing

    2014-01-01

    A casting process CAD system is put forward to design and draw casting processes. Most current 2D casting process CAD systems are developed on top of one particular version of AutoCAD. However, the application of these 2D casting process CAD systems in foundry enterprises is restricted because they have several deficiencies, such as being overly dependent on the AutoCAD system and being unable to open part files in PDF format directly. To overcome these deficiencies, an innovative 2D casting process CAD system based on PDF and image format files has been proposed for the first time, which breaks through the traditional research and application notion of the 2D casting process CAD system based on AutoCAD. Several key technologies of this system, such as coordinate transformation, interactive CAD drawing, file storage, PDF and image format file display, and image recognition, are described in detail. A practical 2D casting process CAD system named HZCAD2D(PDF) was developed, which is capable of designing and drawing the casting process directly on a part drawing in PDF format, without spending time redrawing the part in AutoCAD. Finally, taking two actual castings as examples, the casting processes were drawn using this system, demonstrating that it can significantly shorten the cycle of casting process design.

  12. BASIS FOR IMPLEMENTING RESTORATION STRATEGIES: SAN NICOLÁS ZOYATLAN SOCIAL-ECOLOGICAL SYSTEM (GUERRERO, MEXICO

    Directory of Open Access Journals (Sweden)

    Virginia Cervantes Gutiérrez

    2014-06-01

    Full Text Available In Latin America, indigenous communities have been distinguished by continuous marginalization and degradation of their social-ecological systems (SES), and this is the case of San Nicolás Zoyatlan (Guerrero, Mexico). With the purpose of establishing restoration strategies, we conducted an in situ diagnosis to assess the current condition of the community and its disturbance factors. We combined methods of environmental and social sciences to analyze, in diachronic and synchronic contexts, how the SES's natural resources are used. Results indicate that the chronic ecological disturbance of the SES and agrarian conflicts have contributed to the degradation of vegetation and soil. Thus, vegetation is impoverished, with secondary tropical deciduous forest (TDF) shrub vegetation predominating, and the risk of soil degradation is high, despite the inhabitants' knowledge and appreciation of this resource. To reverse degradation, we defined and established rehabilitation practices with the involvement of inhabitants. These comprised agroforestry systems (AFSs) and plantations (PLs) on community lands, using 18 native TDF species selected on the basis of local preferences. Our ongoing assessment indicates that after more than a decade, the AFSs and PLs are maintained and managed by the people, as reflected in the current size of plants whose inputs contribute to improve physical and chemical soil characteristics, such as soil aggregate formation, cation exchange capacity and organic matter content. However, we concluded that we need to confirm whether the ecosystem processes and environmental services are recovered and maintained over time, in addition to determining how they have influenced and will continue to influence the socioeconomic factors involved in maintaining rehabilitation practices.

  13. Tables of file names, times, and locations of images collected during unmanned aerial systems (UAS) flights over Coast Guard Beach, Nauset Spit, Nauset Inlet, and Nauset Marsh, Cape Cod National Seashore, Eastham, Massachusetts on 1 March 2016 (text files)

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — These text files contain tables of the file names, times, and locations of images obtained from an unmanned aerial systems (UAS) flown in the Cape Cod National...

  15. Choosing an SAT with SAnTA: A Recommender System for Informal Workplace Learning

    Directory of Open Access Journals (Sweden)

    Frank Linton

    2015-10-01

    Full Text Available Current intelligence community analytic standards encourage the use of structured analytic techniques; these are taught in training programs and supported by analytic tradecraft cells. Yet resources are scarce, timely personalized help is not available to all when it is most needed, and conditions for the improper selection and misapplication of these techniques prevail. The Structured Analytic Technique Advisor (SAnTA) is an electronic interactive job aid that recommends structured analytic techniques to analysts, based on the current state of their analysis and on the synthesized expertise of tradecraft staff. SAnTA is undergoing formative evaluation and iterative development. When released, SAnTA will provide individualized support to a large number of analysts every day.
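A hypothetical sketch of rule-based technique recommendation in SAnTA's spirit: match the analyst's current state against synthesized tradecraft rules. The rules, state fields, and technique names below are invented for illustration, not SAnTA's actual knowledge base:

```python
# Toy rule base: each rule pairs a set of required state conditions with a
# structured analytic technique to recommend when all conditions hold.

RULES = [
    ({"stage": "hypothesis_generation", "alternatives": "few"},
     "Analysis of Competing Hypotheses"),
    ({"stage": "assumption_review"},
     "Key Assumptions Check"),
    ({"stage": "hypothesis_generation", "alternatives": "many"},
     "Quadrant Crunching"),
]

def recommend(state):
    """Return every technique whose rule conditions all hold in the state."""
    return [technique for conditions, technique in RULES
            if all(state.get(k) == v for k, v in conditions.items())]
```

A real advisor would of course weight, rank, and explain its recommendations, but the core selection step is this kind of state-to-rule matching.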

  16. An asynchronous writing method for restart files in the gysela code in prevision of exascale systems*

    Directory of Open Access Journals (Sweden)

    Thomine O.

    2013-12-01

    Full Text Available The present work deals with an optimization procedure developed in the full-f global GYrokinetic SEmi-LAgrangian code (GYSELA). Optimizing the writing of the restart files is necessary to reduce the computing impact of crashes. These files require a very large memory space, particularly so for very large mesh sizes. The limited bandwidth of the data pipe between the computing nodes and the storage system induces a non-scalable part in the GYSELA code, which increases with the mesh size. Indeed, the transfer time from RAM to storage depends linearly on the file size. A non-synchronized file-writing procedure is therefore crucial. A new GYSELA module has been developed. This asynchronous procedure allows the frequent writing of the restart files, whilst preventing a severe slowing down due to the limited writing bandwidth. This method has been improved to generate a checksum control of the restart files, and to automatically rerun the code in case of a crash from any cause.

  17. Petroleum systems of the San Joaquin Basin Province -- geochemical characteristics of gas types: Chapter 10 in Petroleum systems and geologic assessment of oil and gas in the San Joaquin Basin Province, California

    Science.gov (United States)

    Lillis, Paul G.; Warden, Augusta; Claypool, George E.; Magoon, Leslie B.

    2008-01-01

    The San Joaquin Basin Province is a petroliferous basin filled with predominantly Late Cretaceous to Pliocene-aged sediments, with organic-rich marine rocks of Late Cretaceous, Eocene, and Miocene age providing the source of most of the oil and gas. Previous geochemical studies have focused on the origin of the oil in the province, but the origin of the natural gas has received little attention. To identify and characterize natural gas types in the San Joaquin Basin, 66 gas samples were analyzed and combined with analyses of 15 gas samples from previous studies. For the purpose of this resource assessment, each gas type was assigned to the most likely petroleum system. Three general gas types are identified on the basis of bulk and stable carbon isotopic composition—thermogenic dry (TD), thermogenic wet (TW) and biogenic (B). The thermogenic gas types are further subdivided on the basis of the δ13C values of methane and ethane and nitrogen content into TD-1, TD-2, TD-Mixed, TW-1, TW-2, and TW-Mixed. Gas types TD-1 and TD-Mixed, a mixture of biogenic and TD-1 gases, are produced from gas fields in the northern San Joaquin Basin. Type TD-1 gas most likely originated from the Late Cretaceous to Paleocene Moreno Formation, a gas-prone source rock. The biogenic component of the TD-Mixed gas existed in the trap prior to the influx of thermogenic gas. For the assessment, these gas types were assigned to the Winters- Domengine Total Petroleum System, but subsequent to the assessment were reclassified as part of the Moreno-Nortonville gas system. Dry thermogenic gas produced from oil fields in the southern San Joaquin Basin (TD-2 gas) most likely originated from the oil-prone source rock of Miocene age. These samples have low wetness values due to migration fractionation or biodegradation. The thermogenic wet gas types (TW-1, TW-2, TW-Mixed) are predominantly associated gas produced from oil fields in the southern and central San Joaquin Basin. Type TW-1 gas most likely

  18. Holocene slip rates along the San Andreas Fault System in the San Gorgonio Pass and implications for large earthquakes in southern California

    Science.gov (United States)

    Heermance, Richard V.; Yule, Doug

    2017-06-01

    The San Gorgonio Pass (SGP) in southern California contains a 40 km long region of structural complexity where the San Andreas Fault (SAF) bifurcates into a series of oblique-slip faults with unknown slip history. We combine new 10Be exposure ages (Qt4: 8600 (+2100, -2200) and Qt3: 5700 (+1400, -1900) years B.P.) and a radiocarbon age (1260 ± 60 years B.P.) from late Holocene terraces with scarp displacement of these surfaces to document a Holocene slip rate of 5.7 (+2.7, -1.5) mm/yr combined across two faults. Our preferred slip rate is 37-49% of the average slip rates along the SAF outside the SGP (i.e., the Coachella Valley and San Bernardino sections) and implies that strain is transferred off the SAF in this area. Earthquakes here most likely occur in very large, throughgoing SAF events at a lower recurrence than elsewhere on the SAF, so that only approximately one third of SAF ruptures penetrate or originate in the pass.

    Plain Language Summary: How large are earthquakes on the southern San Andreas Fault? The answer to this question depends on whether the earthquake is contained only along individual fault sections, such as the Coachella Valley section north of Palm Springs, or the rupture crosses multiple sections including the area through the San Gorgonio Pass. We have determined the age and offset of faulted stream deposits within the San Gorgonio Pass to document slip rates of these faults over the last 10,000 years. Our results indicate a long-term slip rate of 6 mm/yr, which is almost half of the rates east and west of this area. These new rates, combined with faulted geomorphic surfaces, imply that large magnitude earthquakes must occasionally rupture a 300 km length of the San Andreas Fault from the Salton Sea to the Mojave Desert. Although many (about 65%) earthquakes along the southern San Andreas Fault likely do not rupture through the pass, our new results suggest that large (Mw > 7.5) earthquakes are possible on the southern San Andreas Fault and likely

  19. NADIR: A prototype system for detecting network and file system abuse

    Energy Technology Data Exchange (ETDEWEB)

    Hochberg, J.G.; Jackson, K.A.; Stallings, C.A.; McClary, J.F.; DuBois, D.H.; Ford, J.R.

    1992-01-01

    This paper describes the design of a prototype computer misuse detection system for the Los Alamos National Laboratory's Integrated Computing Network (ICN). This automated expert system, the Network Anomaly Detection and Intrusion Reporter (NADIR), streamlines and supplements the manual audit record review traditionally performed by security auditors. NADIR compares network activity, as summarized in weekly profiles of individual users and the ICN as a whole, against expert rules that define security policy, improper or suspicious behavior, and normal user activity. NADIR reports suspicious behavior to security auditors and provides tools to aid in follow-up investigations. This paper describes analysis by NADIR of two types of ICN activity: user authentication and access control, and mass file storage. It highlights system design issues of data handling, exploiting existing auditing systems, and performing audit analysis at the network level.
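
NADIR's actual rule set is not public; as a hedged illustration of the profile-versus-rules approach the abstract describes, the Python sketch below compares a weekly per-user activity profile against a few invented expert rules and reports any that fire. All rule names and thresholds are hypothetical.

```python
# Hypothetical miniature of profile-vs-rules misuse detection (not NADIR's rules).
RULES = [
    ("excessive failed logins", lambda p: p["failed_logins"] > 10),
    ("off-hours mass storage use", lambda p: p["offhour_file_ops"] > 100),
    ("unusual host count", lambda p: p["distinct_hosts"] > 5),
]

def audit(profiles):
    """Return {user: [names of rules that fired]} for a week of summarized activity."""
    findings = {}
    for user, profile in profiles.items():
        fired = [name for name, test in RULES if test(profile)]
        if fired:
            findings[user] = fired
    return findings

# One week of (made-up) summarized activity per user.
week = {
    "alice": {"failed_logins": 2, "offhour_file_ops": 12, "distinct_hosts": 2},
    "mallory": {"failed_logins": 40, "offhour_file_ops": 300, "distinct_hosts": 1},
}
report = audit(week)
```

In the real system the profiles are built from network-level audit records and the findings go to security auditors for follow-up; here `report` simply maps the flagged user to the rules that fired.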

  1. The self-adjusting file (SAF) system: An evidence-based update.

    Science.gov (United States)

    Metzger, Zvi

    2014-09-01

    Current rotary file systems are effective tools. Nevertheless, they have two main shortcomings: (1) they are unable to effectively clean and shape oval canals and depend too much on the irrigant to do the cleaning, which is an unrealistic illusion; (2) they may jeopardize the long-term survival of the tooth via unnecessary, excessive removal of sound dentin and the creation of micro-cracks in the remaining root dentin. The new Self-Adjusting File (SAF) technology uses a hollow, compressible NiTi file, with no central metal core, through which a continuous flow of irrigant is provided throughout the procedure. The SAF technology allows for effective cleaning of all root canals including oval canals, thus allowing for the effective disinfection and obturation of all canal morphologies. This technology uses a new concept of cleaning and shaping in which a uniform layer of dentin is removed from around the entire perimeter of the root canal, thus avoiding unnecessary excessive removal of sound dentin. Furthermore, the mode of action used by this file system does not machine all root canals to a circular bore, as all other rotary file systems do, and does not cause micro-cracks in the remaining root dentin. The new SAF technology allows for a new concept in cleaning and shaping root canals: Minimally Invasive 3D Endodontics.

  2. 78 FR 67927 - Foreign Trade Regulations (FTR): Mandatory Automated Export System Filing for All Shipments...

    Science.gov (United States)

    2013-11-13

    ... Export System Filing for All Shipments Requiring Shipper's Export Declaration Information: Substantive... the Automated Export System (AES) under control number 0607-0152. DATES: The effective date of the... instrument used for collecting export trade data, which is used by the Census Bureau for statistical...

  3. SF Bayweb 2009: Planting the Seeds of an Observing System in the San Francisco Bay

    Science.gov (United States)

    2010-06-01

    UC Berkeley Berkeley, CA 94720 Toby Garfield SFSU Romberg Tiburon Center Tiburon, CA 94920 John Largier UC Davis / Bodega Marine Laboratory... Bodega Bay, CA 94923 Abstract - A pilot project was recently completed in the San Francisco Bay from May 1-10, 2009, to test the use of advanced

  4. Post-1906 stress recovery of the San Andreas fault system calculated from three-dimensional finite element analysis

    Science.gov (United States)

    Parsons, T.

    2002-01-01

    The M = 7.8 1906 San Francisco earthquake cast a stress shadow across the San Andreas fault system, inhibiting other large earthquakes for at least 75 years. The duration of the stress shadow is a key question in San Francisco Bay area seismic hazard assessment. This study presents a three-dimensional (3-D) finite element simulation of post-1906 stress recovery. The model reproduces observed geologic slip rates on major strike-slip faults and produces surface velocity vectors comparable to geodetic measurements. Fault stressing rates calculated with the finite element model are evaluated against numbers calculated using deep dislocation slip. In the finite element model, tectonic stressing is distributed throughout the crust and upper mantle, whereas tectonic stressing calculated with dislocations is focused mostly on faults. In addition, the finite element model incorporates postseismic effects such as deep afterslip and viscoelastic relaxation in the upper mantle. More distributed stressing and postseismic effects in the finite element model lead to lower calculated tectonic stressing rates and longer stress shadow durations (17-74 years compared with 7-54 years). All models considered indicate that the 1906 stress shadow was completely erased by tectonic loading no later than 1980. However, the stress shadow still affects present-day earthquake probability. Use of stressing rate parameters calculated with the finite element model yields a 7-12% reduction in 30-year probability caused by the 1906 stress shadow as compared with calculations not incorporating interactions. The aggregate interaction-based probability on selected segments (not including the ruptured San Andreas fault) is 53-70% versus the noninteraction range of 65-77%.

  5. Extending the POSIX I/O interface: a parallel file system perspective.

    Energy Technology Data Exchange (ETDEWEB)

    Vilayannur, M.; Lang, S.; Ross, R.; Klundt, R.; Ward, L.; Mathematics and Computer Science; VMWare, Inc.; SNL

    2008-12-11

    The POSIX interface does not lend itself well to enabling good performance for high-end applications. Extensions are needed in the POSIX I/O interface so that high-concurrency HPC applications running on top of parallel file systems perform well. This paper presents the rationale, design, and evaluation of a reference implementation of a subset of the POSIX I/O interfaces on a widely used parallel file system (PVFS) on clusters. Experimental results on a set of micro-benchmarks confirm that the extensions to the POSIX interface greatly improve scalability and performance.
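
The paper's extension APIs are not reproduced here, but the access pattern they target can be sketched with the plain POSIX calls that Python's `os` module wraps (Unix only): concurrent writers using positional `pwrite` at disjoint offsets instead of a shared file pointer, which is how HPC applications typically decompose a shared output file on a parallel file system such as PVFS.

```python
import os
import tempfile

RECORD = 16  # bytes per "rank" (illustrative record size)

def write_rank(fd, rank, payload: bytes):
    """Positional write: no shared file pointer, so no seek+write race."""
    assert len(payload) == RECORD
    os.pwrite(fd, payload, rank * RECORD)

# Each rank writes its own fixed-size record at a disjoint offset.
path = os.path.join(tempfile.mkdtemp(), "shared.dat")
fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
for rank in range(4):
    write_rank(fd, rank, bytes([rank]) * RECORD)

# Read back rank 2's record with a positional read.
assert os.pread(fd, RECORD, 2 * RECORD) == bytes([2]) * RECORD
os.close(fd)
```

The POSIX extensions evaluated in the paper relax semantics (for example, consistency guarantees) around exactly this kind of highly concurrent access so the file system need not serialize it.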

  6. Load-Balance Policy in Two Level-Cluster File System

    Institute of Scientific and Technical Information of China (English)

    LIU Yuling; SONG Weiwei; MA Xiaoxue

    2006-01-01

    In this paper, we explored a load-balancing algorithm in a cluster file system that contains two levels of metadata servers: the primary-level server quickly distributes tasks to second-level servers depending on the latest load-balancing information. At the same time, we explored a method that accurately reflects the I/O traffic and storage use of a storage node: computing the heat value of a file, according to which we realized a more logical storage allocation. According to the experimental results, we conclude that this new algorithm shortens the execution time of tasks and improves system performance compared with other load-balancing algorithms.
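
The paper's heat-value formula is not given in the abstract; the sketch below is a minimal Python illustration of the general idea, with an assumed heat metric (access count decayed by age) and a greedy placement of the hottest files onto the least-loaded second-level server. The metric, half-life, and numbers are all hypothetical.

```python
import math

def heat(access_count: int, age_hours: float, half_life: float = 24.0) -> float:
    """Assumed heat metric: accesses decayed exponentially with age."""
    return access_count * math.exp(-age_hours * math.log(2) / half_life)

def place(files, n_servers: int):
    """Greedily assign the hottest files first to the coolest server."""
    loads = [0.0] * n_servers
    assignment = {}
    for name, h in sorted(files.items(), key=lambda kv: -kv[1]):
        target = loads.index(min(loads))  # coolest server so far
        assignment[name] = target
        loads[target] += h
    return assignment, loads

# Made-up file access statistics.
files = {"a": heat(100, 1), "b": heat(80, 2), "c": heat(60, 0), "d": heat(10, 48)}
assignment, loads = place(files, 2)
```

Greedy largest-first placement is a standard heuristic for this kind of balancing; the paper's algorithm additionally feeds the resulting load information back to the primary-level metadata server.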

  7. Evolutionary Game Theory-Based Evaluation of P2P File-Sharing Systems in Heterogeneous Environments

    Directory of Open Access Journals (Sweden)

    Yusuke Matsuda

    2010-01-01

    Full Text Available Peer-to-Peer (P2P) file sharing is one of the key technologies for achieving attractive P2P multimedia social networking. In P2P file-sharing systems, file availability is improved by cooperative users who cache and share files. Note that file caching carries costs such as storage consumption and processing load. In addition, users have different degrees of cooperativity in file caching and they are in different surrounding environments arising from the topological structure of P2P networks. With evolutionary game theory, this paper evaluates the performance of P2P file-sharing systems in such heterogeneous environments. Using micro-macro dynamics, we analyze the impact of the heterogeneity of user selfishness on the file availability and system stability. Further, through simulation experiments with agent-based dynamics, we reveal how other aspects, for example, synchronization among nodes and topological structure, affect the system performance. Both analytical and simulation results show that the environmental heterogeneity contributes to the file availability and system stability.
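
The paper's micro-macro dynamics are not reproduced here; as a hedged illustration of the evolutionary-game framing, the Python sketch below runs textbook replicator dynamics for a population choosing between caching (which raises availability but costs storage and load) and free-riding. The payoff functions and constants are invented for illustration only.

```python
def replicator(x: float, benefit: float = 1.0, cost: float = 0.3,
               steps: int = 2000, dt: float = 0.01) -> float:
    """Evolve the fraction x of cachers by the replicator equation
    dx/dt = x * (f_cacher - f_mean), under assumed payoffs."""
    for _ in range(steps):
        # Availability grows with the cacher fraction; cachers pay a cost,
        # and free riders see degraded availability (illustrative payoffs).
        f_cacher = benefit * x - cost
        f_free = benefit * x * 0.5
        f_mean = x * f_cacher + (1 - x) * f_free
        x += dt * x * (f_cacher - f_mean)
        x = min(max(x, 0.0), 1.0)
    return x
```

With these payoffs the interior equilibrium is unstable: populations that start with enough cachers converge toward full cooperation, while selfish populations collapse, mirroring the stability question the paper analyzes.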

  8. A convertor and user interface to import CAD files into worldtoolkit virtual reality systems

    Science.gov (United States)

    Wang, Peter Hor-Ching

    1996-01-01

    Virtual Reality (VR) is a rapidly developing human-to-computer interface technology. VR can be considered as a three-dimensional computer-generated Virtual World (VW) which can sense particular aspects of a user's behavior, allow the user to manipulate objects interactively, and render the VW in real time accordingly. The user is totally immersed in the virtual world and feels a sense of being transported into that VW. NASA/MSFC Computer Application Virtual Environments (CAVE) has been developing space-related VR applications since 1990. The VR systems in the CAVE lab are based on the VPL RB2 system, which consists of a VPL RB2 control tower, an LX eyephone, an Isotrak polhemus sensor, two Fastrak polhemus sensors, a Flock of Birds sensor, and two VPL DG2 DataGloves. A dynamics animator called Body Electric from VPL is used as the control system to interface with all the input/output devices and to provide the network communications as well as the VR programming environment. RB2 Swivel 3D is used as the modelling program to construct the VWs. A severe limitation of the VPL VR system is the use of RB2 Swivel 3D, which restricts files to a maximum of 1020 objects and lacks advanced graphics texture mapping. The other limitation is that the VPL VR system is a turn-key system which does not provide the flexibility for users to add new sensors or a C language interface. Recently, the NASA/MSFC CAVE lab has provided VR systems built on Sense8 WorldToolKit (WTK), which is a C library for creating VR development environments. WTK provides device drivers for most of the sensors and eyephones available on the VR market. WTK accepts several CAD file formats, such as the Sense8 Neutral File Format, the AutoCAD DXF and 3D Studio file formats, the WaveFront OBJ file format, the VideoScape GEO file format, and the Intergraph EMS and CATIA stereolithographics STL file formats. WTK functions are object-oriented in their naming convention, are grouped into classes, and provide easy C
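
Of the CAD formats listed, the WaveFront OBJ format is simple enough to illustrate the kind of parsing a converter like the one described must perform. This minimal Python sketch (not the author's converter) reads only vertex (`v`) and face (`f`) records and ignores texture and normal indices.

```python
def parse_obj(text: str):
    """Parse a minimal subset of Wavefront OBJ: v and f records only."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0].startswith("#"):
            continue  # skip blanks and comments
        if parts[0] == "v":
            vertices.append(tuple(float(c) for c in parts[1:4]))
        elif parts[0] == "f":
            # Face entries look like "i", "i/t", or "i/t/n"; OBJ is 1-indexed.
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

sample = """
# unit triangle
v 0 0 0
v 1 0 0
v 0 1 0
f 1 2 3
"""
verts, faces = parse_obj(sample)
```

A real converter would then map these vertex and face lists onto the target toolkit's object classes (for WTK, its C geometry API) rather than plain Python tuples.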

  9. The Design of Medical Consumable Filing System Based on RFID Technology

    Institute of Scientific and Technical Information of China (English)

    WU Min; ZHONG Tian-ping; TANG Li-ming; QIN Xian; HUANG Ya-ping

    2015-01-01

    Objective: To use modern information technology to solve management problems in the records of medical equipment and hospital supplies and to achieve comprehensive, real-time, and accurate management. Methods: A barcode-based file archiving system was introduced to automate identification for borrow-and-return operations and to set alarms for the expiration times of contracts and licenses. Results: Data entry became more efficient and accurate, and mobile data-collection terminals processed data in real time. Conclusion: An RFID-based filing system for medical equipment and supplies, which uses the radio channel as the transmission medium for non-contact automatic identification, enables intelligent, information-based file management.

  10. Further implementation of the user space file system based on CastorFS and Xrootd

    CERN Document Server

    Jiao, Manjun

    The LHC (Large Hadron Collider) experiments use a mass storage system for recording petabytes of experimental data. This system is called CASTOR[1] (CERN Advanced STORage manager), and it is powerful and convenient for many use cases. However, it is impossible to use standard tools and scripts straight away for dealing with the files in the system, since users can access the data only through command-line utilities and by parsing their output. Thus a complete POSIX filesystem, CastorFS[2], was developed based on FUSE[3] (File System in Userspace) and two CASTOR I/O libraries: the RFIO (Remote File I/O) library and the NS (Name Server) library. Although it has been applied successfully, its wider application is seriously limited because the I/O protocols it relies on are very old. Each time the CASTOR side receives a call to access data files from the user side, the load on the CASTOR system increases quickly, can even exceed its upper bound, and the system may crash. Besides that, those two ...

  11. Overcoming the challenges of gathering system requirements for coalbed methane production : a San Juan Basin case study

    Energy Technology Data Exchange (ETDEWEB)

    Midkiff, K. [Burlington Resources Canada Ltd., Calgary, AB (Canada)

    2002-07-01

    For the past 14 years, Burlington Resources has operated the Val Verde Gas Gathering and Treating System in the San Juan Basin of New Mexico and Colorado. It is a world-class CO{sub 2} removal and dehydration facility consisting of 8 individual treating trains. This paper described the company's experiences with material and equipment selection, pressure regimes, design objectives and other issues regarding CO{sub 2} production, including environmental challenges. The coalbed methane (CBM) production contained a substantial quantity of CO{sub 2} (10 per cent) and was not suitable for gathering and processing within the conventional infrastructure of the San Juan Basin. Val Verde gathers gas from 229 wellheads and 10 central delivery points. There are about 465 miles of pipeline in the area. Because of the depletion characteristics of CBM systems, most of the wells gathered by Val Verde currently use wellhead booster compression prior to entry into the system. The paper described the gathering and processing requirements for CBM, pressure regimes, the efficiency of modular design, CO{sub 2} management, and hydraulic modeling systems. A comparison of CBM systems to conventional gathering systems was also presented. 6 figs.

  12. Solar sensor activated and computer controlled shading system -- Field study at the San Francisco New Main Library

    Energy Technology Data Exchange (ETDEWEB)

    Jain, P.

    1999-07-01

    This paper presents the results of the post-occupancy evaluation of the MechoShade Systems{trademark} AAC-PC Window Management Program at the San Francisco New Main Public Library. The AAC-PC Window Management Program is a solar-sensor-activated (Li-Cor radiometers) and PC-controlled shading system. The purpose of this research was to inform the building design community about occupant response to the new technology of automatic shading. There was a double need for this research because of (1) the rising trend of automatic shading system applications, especially in large office or public buildings, and (2) the significant lack of published reports on post-occupancy evaluation of any type of automatic shading system. The automatic shading system in the new main San Francisco Public Library has had continuous problems since the building opened in 1996. The library staff has been experiencing low to high levels of many environmental discomforts. Occupant comfort and satisfaction should be an important design criterion for the long-term success of any new technology in buildings, whether it is for solar control, energy efficiency, or any other purpose.

  13. San Francisco District Laboratory (SAN)

    Data.gov (United States)

    Federal Laboratory Consortium — Program Capabilities: Food Analysis. The SAN-DO Laboratory has an expert in elemental analysis who frequently performs field inspections of materials. A recently acquired...

  14. Structure and mechanics of the San Andreas-San Gregorio fault junction, San Francisco, California

    Science.gov (United States)

    Parsons, Tom; Bruns, Terry R.; Sliter, Ray

    2005-01-01

    The right-lateral San Gregorio and San Andreas faults meet west of the Golden Gate near San Francisco. Coincident seismic reflection and refraction profiling across the San Gregorio and San Andreas faults south of their junction shows the crust between them to have formed shallow extensional basins that are dissected by parallel strike-slip faults. We employ a regional finite element model to investigate the long-term consequences of the fault geometry. Over the course of 2-3 m.y. of slip on the San Andreas-San Gregorio fault system, elongated extensional basins are predicted to form between the two faults. An additional consequence of the fault geometry is that the San Andreas fault is expected to have migrated eastward relative to the San Gregorio fault. We thus propose a model of eastward stepping right-lateral fault formation to explain the observed multiple fault strands and depositional basins. The current manifestation of this process might be the observed transfer of slip from the San Andreas fault east to the Golden Gate fault.

  15. Electrochemical impedance spectroscopy investigation on the clinical lifetime of ProTaper rotary file system.

    Science.gov (United States)

    Penta, Virgil; Pirvu, Cristian; Demetrescu, Ioana

    2014-01-01

    The main objective of the current paper is to show that electrochemical impedance spectroscopy (EIS) could be a method for evaluating and predicting the clinical lifespan of the ProTaper rotary file system. This particular aspect of the everyday use of endodontic files is of great importance in every dental practice and has profound clinical implications. The method used for quantification rests on electrochemical impedance spectroscopy theory and focuses mainly on the characteristics of the surface titanium oxide layer. This electrochemical technique has been adapted successfully to identify the quality of the oxide layer of Ni-Ti files. The modification of this protective layer induces changes in the corrosion behavior of the alloy, modifying the impedance value of the file. In order to assess the method, 14 ProTaper sets utilized on different patients in a dental clinic were submitted for testing using EIS. The information obtained about the surface oxide layer offers an indication of use and proves that the said layer evolves with each clinical application. The novelty of this research is an electrochemical technique successfully adapted for Ni-Ti file investigation and its correlation with surface and clinical aspects.
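
As a numerical aside (not taken from the paper), the impedance spectrum that EIS measures can be sketched for the simplest Randles-type equivalent circuit: a solution resistance R_s in series with a charge-transfer resistance R_ct in parallel with a double-layer capacitance C_dl. Degradation of a file's oxide layer would show up as a change in such fitted circuit values; the component values below are arbitrary.

```python
import math

def randles_z(freq_hz: float, rs: float, rct: float, cdl: float) -> complex:
    """Impedance of Rs in series with (Rct parallel to Cdl)."""
    omega = 2 * math.pi * freq_hz
    z_cap = 1 / (1j * omega * cdl)        # capacitor impedance
    z_par = (rct * z_cap) / (rct + z_cap) # parallel combination
    return rs + z_par

# Arbitrary illustrative values: 50 ohm, 10 kohm, 1 uF.
lo = abs(randles_z(0.01, 50, 10e3, 1e-6))  # low frequency -> approaches Rs + Rct
hi = abs(randles_z(1e6, 50, 10e3, 1e-6))   # high frequency -> approaches Rs
```

Sweeping frequency and fitting measured |Z| and phase to such a model is the standard way EIS data are quantified; the paper's contribution is applying that quantification to clinical wear of Ni-Ti files.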

  16. Electrochemical Impedance Spectroscopy Investigation on the Clinical Lifetime of ProTaper Rotary File System

    Science.gov (United States)

    Pirvu, Cristian; Demetrescu, Ioana

    2014-01-01

    The main objective of the current paper is to show that electrochemical impedance spectroscopy (EIS) could be a method for evaluating and predicting the clinical lifespan of the ProTaper rotary file system. This particular aspect of the everyday use of endodontic files is of great importance in every dental practice and has profound clinical implications. The method used for quantification rests on electrochemical impedance spectroscopy theory and focuses mainly on the characteristics of the surface titanium oxide layer. This electrochemical technique has been adapted successfully to identify the quality of the oxide layer of Ni-Ti files. The modification of this protective layer induces changes in the corrosion behavior of the alloy, modifying the impedance value of the file. In order to assess the method, 14 ProTaper sets utilized on different patients in a dental clinic were submitted for testing using EIS. The information obtained about the surface oxide layer offers an indication of use and proves that the said layer evolves with each clinical application. The novelty of this research is an electrochemical technique successfully adapted for Ni-Ti file investigation and its correlation with surface and clinical aspects. PMID:24605336

  17. Electrochemical Impedance Spectroscopy Investigation on the Clinical Lifetime of ProTaper Rotary File System

    Directory of Open Access Journals (Sweden)

    Virgil Penta

    2014-01-01

    Full Text Available The main objective of the current paper is to show that electrochemical impedance spectroscopy (EIS) could be a method for evaluating and predicting the clinical lifespan of the ProTaper rotary file system. This particular aspect of the everyday use of endodontic files is of great importance in every dental practice and has profound clinical implications. The method used for quantification rests on electrochemical impedance spectroscopy theory and focuses mainly on the characteristics of the surface titanium oxide layer. This electrochemical technique has been adapted successfully to identify the quality of the oxide layer of Ni-Ti files. The modification of this protective layer induces changes in the corrosion behavior of the alloy, modifying the impedance value of the file. In order to assess the method, 14 ProTaper sets utilized on different patients in a dental clinic were submitted for testing using EIS. The information obtained about the surface oxide layer offers an indication of use and proves that the said layer evolves with each clinical application. The novelty of this research is an electrochemical technique successfully adapted for Ni-Ti file investigation and its correlation with surface and clinical aspects.

  18. Sand sources and transport pathways for the San Francisco Bay coastal system, based on X-ray diffraction mineralogy

    Science.gov (United States)

    Hein, James R.; Mizell, Kira; Barnard, Patrick L.; Barnard, P.L.; Jaffee, B.E.; Schoellhamer, D.H.

    2013-01-01

    The mineralogical compositions of 119 samples collected from throughout the San Francisco Bay coastal system, including bayfloor and seafloor, area beaches, cliff outcrops, and major drainages, were determined using X-ray diffraction (XRD). Comparison of the mineral concentrations and application of statistical cluster analysis of XRD spectra allowed for the determination of provenances and transport pathways. The use of XRD mineral identifications provides semi-quantitative compositions needed for comparisons of beach and offshore sands with potential cliff and river sources, but the innovative cluster analysis of XRD diffraction spectra provides a unique visualization of how groups of samples within the San Francisco Bay coastal system are related so that sand-sized sediment transport pathways can be inferred. The main vector for sediment transport as defined by the XRD analysis is from San Francisco Bay to the outer coast, where the sand then accumulates on the ebb tidal delta and also moves alongshore. This mineralogical link defines a critical pathway because large volumes of sediment have been removed from the Bay over the last century via channel dredging, aggregate mining, and borrow pit mining, with comparable volumes of erosion from the ebb tidal delta over the same period, in addition to high rates of shoreline retreat along the adjacent, open-coast beaches. Therefore, while previously only a temporal relationship was established, the transport pathway defined by mineralogical and geochemical tracers support the link between anthropogenic activities in the Bay and widespread erosion outside the Bay. The XRD results also establish the regional and local importance of sediment derived from cliff erosion, as well as both proximal and distal fluvial sources. This research is an important contribution to a broader provenance study aimed at identifying the driving forces for widespread geomorphic change in a heavily urbanized coastal-estuarine system.
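
The cluster analysis described, grouping samples by the similarity of their XRD spectra, can be sketched in a few lines of pure Python. The "spectra" below are toy Gaussian peaks standing in for real diffractograms, and the single-linkage merge rule and cutoff are assumptions for illustration, not the study's actual method.

```python
import math

def toy_spectrum(peak, grid, width=0.3):
    """Synthetic diffractogram: one Gaussian peak on a 2-theta grid."""
    return [math.exp(-0.5 * ((x - peak) / width) ** 2) for x in grid]

def distance(a, b):
    """Euclidean distance between two sampled spectra."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def single_linkage(spectra, cutoff):
    """Merge samples whose spectral distance falls below cutoff (toy clustering)."""
    labels = list(range(len(spectra)))
    for i in range(len(spectra)):
        for j in range(i + 1, len(spectra)):
            if distance(spectra[i], spectra[j]) < cutoff:
                old, new = labels[j], labels[i]
                labels = [new if lab == old else lab for lab in labels]
    return labels

grid = [20 + 0.1 * k for k in range(400)]
# Two hypothetical 'provenances': peaks near 26.6 (quartz-like) and 29.4 (calcite-like).
spectra = [toy_spectrum(p, grid) for p in (26.6, 26.7, 26.5, 29.4, 29.3, 29.5)]
labels = single_linkage(spectra, cutoff=1.0)
```

Samples sharing a peak position end up with the same label, which is the essence of inferring a common sand source from spectral similarity.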

  19. Support-shape Dependent Catalytic Activity in Pt/alumina Systems Using USANS/SANS

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sang Hoon; Han, Sugyeong; Ha, Heonphil; Byun, Jiyoung; Kim, Man-ho [KIST, Seoul (Korea, Republic of)

    2015-10-15

    Pt nanoparticles dispersed on ceramic powders such as alumina and ceria are used as catalyst materials to reduce pollution from automobile exhaust, power plant exhaust, etc. Much effort has been put into investigating the relationship between the type of catalyst support material and the reactivity of the supported metallic particles. The surface shape of the support material can also be expected to control catalyst size. In this presentation, we show our SANS (small-angle neutron scattering) and USANS (ultra-small-angle neutron scattering) analysis of the structural differences between different shapes of the same γ alumina powder with different loadings of Pt nanoparticles. The reactivity of the prepared catalyst materials is then presented and discussed based on the investigation of the structure of the support materials by SANS. The shapes of the gamma alumina, rod-like or plate-like, were determined from the nanometer to the micrometer scale with USANS and SANS analysis. We found that the platelet-like alumina consists of aggregates of 2-3 layers, which further reduce the specific surface area and catalytic activity compared to the rod-like shape. The rod-like shape shows more than 100% enhancement in catalytic activity in model three-way-catalyst (TWC) reactions of CO, NO, and C{sub 3}H{sub 6} at low temperature, around 200 °C.

  20. In Search of an API for Scalable File Systems: Under the Table or Above it?

    Science.gov (United States)

    2009-06-01

    Polte, Wittawat Tantisiroj, and Lin Xiao, Carnegie Mellon University. 1 Introduction: "Big Data" is everywhere – both the IT industry and the scientific... news/specials/bigdata/. [25] PATIL, S. V., AND GIBSON, G. GIGA+: Scalable Directories for Shared File Systems. Tech. Rep. CMU-PDL-08-108, Carnegie

  1. Beyond the Data Archive: The Creation of an Interactive Numeric File Retrieval System.

    Science.gov (United States)

    Chiang, Katherine; And Others

    1993-01-01

    Describes the creation of an interactive retrieval system for electronic numeric files that was developed at Cornell University's (New York) Mann Library. Topics discussed include user characteristics; special data characteristics; quality control; software; hardware; data preparation; database design and construction; interface; subject indexing…

  2. 17 CFR 242.608 - Filing and amendment of national market system plans.

    Science.gov (United States)

    2010-04-01

    ... EXCHANGE COMMISSION (CONTINUED) REGULATIONS M, SHO, ATS, AC, AND NMS AND CUSTOMER MARGIN REQUIREMENTS FOR SECURITY FUTURES Regulation Nms-Regulation of the National Market System § 242.608 Filing and amendment of... Regulation NMS and part 240, subpart A of this chapter shall, in addition to compliance with this...

  3. 78 FR 25740 - Meridian Energy USA, Inc. v. California Independent System Operator Corporation; Notice of Filing

    Science.gov (United States)

    2013-05-02

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Meridian Energy USA, Inc. v. California Independent System Operator Corporation; Notice of Filing Take notice that on April 24, 2013, Meridian Energy USA, Inc....

  4. IMPLEMENTATION OF 2-PHASE COMMIT BY INTEGRATING THE DATABASE & THE FILE SYSTEM

    Directory of Open Access Journals (Sweden)

    V. Raghavendra Prasad,

    2010-10-01

    Full Text Available A transaction is a series of data manipulation statements that must either fully complete or fully fail, leaving the system in a consistent state; transactions are the key to reliable software applications. In J2EE, business-layer components access transactional resource managers such as an RDBMS or a messaging provider. From the database point of view, the resource managers coordinate with the transaction manager to perform the work, which is transparent to the developer; however, this is not possible with regard to an important resource, the file system. Moreover, the DBMS has the capacity to commit or roll back a transaction, but this is independent of the file system. In this paper, we integrate the two-phase commit protocol of the RDBMS with the file system by using Java. As Java IO does not provide transactional support, requiring developers to implement transaction support manually in their applications, this paper aims to develop a transaction-aware resource manager for the manipulation of files in Java.
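
The paper's Java resource manager is not available; as a language-neutral sketch (written in Python here), a file participant in two-phase commit can stage its write during `prepare` and make it visible only at `commit` via an atomic rename, which is the kind of behavior a transaction-aware file manager must provide alongside the RDBMS vote. Class and method names are hypothetical.

```python
import os
import tempfile

class FileParticipant:
    """Toy 2PC participant: prepare stages the write, commit publishes it."""
    def __init__(self, path: str, data: bytes):
        self.path, self.data = path, data
        self.tmp = path + ".prepare"

    def prepare(self) -> bool:
        try:
            with open(self.tmp, "wb") as f:
                f.write(self.data)
                f.flush()
                os.fsync(f.fileno())  # make the staged data durable before voting yes
            return True
        except OSError:
            return False

    def commit(self):
        os.replace(self.tmp, self.path)  # atomic rename on POSIX

    def rollback(self):
        if os.path.exists(self.tmp):
            os.remove(self.tmp)

def two_phase_commit(participants) -> bool:
    """Phase 1: all participants must vote yes; phase 2: commit or roll back."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.rollback()
    return False
```

In the paper's setting the coordinator would collect the file participant's vote together with the RDBMS's prepare result, so a database rollback also discards the staged file.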

  5. Interseismic interactions in geometrically complex fault systems: Implications for San Francisco Bay Area fault creep and tectonics

    Science.gov (United States)

    Evans, E. L.; Meade, B. J.; Loveless, J. P.

    2010-12-01

    Fault systems at active plate boundaries accommodate the differential motion of tectonic plates through slip on anastomosing faults within the seismogenic upper crust. The partitioning of slip across fault systems can be inferred from models of space-based geodetic measurements to estimate both fault slip rates and interseismic fault creep. Covariance between slip rate estimates on sub-parallel faults may be significant but can be reduced with the addition of the fundamental constraint that total slip across a fault system must sum to the differential plate motion rate. Ensuring such kinematic consistency becomes increasingly important in strike-slip fault systems such as the San Francisco Bay Area, where slip is localized across 4-8 sub-parallel faults. We consider models of the San Francisco Bay Area constrained by both GPS and InSAR observations and find that this effect may lead to a substantial revision of interseismic creep estimates on the Hayward fault, by as much as 6 mm/yr at depth.
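
The kinematic-consistency constraint described, that slip rates across sub-parallel faults must sum to the plate rate, can be written as an equality-constrained least-squares problem. The tiny Python sketch below solves the equal-weight case in closed form with a Lagrange multiplier; the two-fault numbers are invented for illustration and are not the study's estimates.

```python
def constrained_rates(observed, plate_rate):
    """Slip rates closest (least squares, equal weights) to the observed
    values, subject to the constraint sum(rates) == plate_rate.
    With equal weights the Lagrange solution spreads the misfit evenly."""
    n = len(observed)
    shift = (plate_rate - sum(observed)) / n
    return [o + shift for o in observed]

# Hypothetical example: two sub-parallel faults with geodetic estimates of
# 17 and 19 mm/yr, while the plate-motion budget across the system is 38 mm/yr.
rates = constrained_rates([17.0, 19.0], 38.0)
```

In practice each fault's estimate carries its own uncertainty, so a weighted version (dividing the shift in proportion to each fault's variance) would be used instead of the even split shown here.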

  6. The incidence of root microcracks caused by 3 different single-file systems versus the protaper system

    NARCIS (Netherlands)

    Liu, R.; Hou, B.X.; Wesselink, P.R.; Wu, M.K.; Shemesh, H.

    2013-01-01

    Introduction: The aim of this study was to compare the incidence of root cracks observed at the apical root surface and/or in the canal wall after canal instrumentation with 3 single-file systems and the ProTaper system (Dentsply Maillefer, Ballaigues, Switzerland). Methods: One hundred mandibular incisors

  7. Petroleum potential of the northern Sinu-San Jacinto Basin, Colombia: an integrated petroleum system and basin modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Nino, Christian H.; Goncalves, Felix T.T.; Bedregal, Ricardo P. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Lab. de Modelagem de Bacias (LAB2M); Azevedo, Debora A. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Inst. de Quimica; Landau, Luis [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE). Lab. de Metodos Computacionais em Engenharia (LAMCE)

    2004-07-01

    The northern Sinu-San Jacinto basin, located in the northwestern corner of South America (Colombia), belongs to the accretionary prism that resulted from the collision and subduction of the Caribbean plate under the South American plate. Despite all the previous exploratory efforts, only a few small subcommercial oil and gas accumulations have been found so far. The geological and geochemical information acquired by different companies during the last decades was integrated with new geochemical analyses and basin modeling to characterize the petroleum systems, to reconstruct the hydrocarbon charge history in the study area, and to better assess the exploratory risk. (author)

  9. DSSTox EPA Integrated Risk Information System Structure-Index Locator File: SDF File and Documentation

    Science.gov (United States)

    EPA's Integrated Risk Information System (IRIS) database was developed and is maintained by EPA's Office of Research and Developement, National Center for Environmental Assessment. IRIS is a database of human health effects that may result from exposure to various substances fou...

  10. Comparison of the amount of apical debris extrusion associated with different retreatment systems and supplementary file application during retreatment process

    Science.gov (United States)

    Çiçek, Ersan; Koçak, Mustafa Murat; Koçak, Sibel; Sağlam, Baran Can

    2016-01-01

    Background: The type of instrument affects the amount of debris extruded. The aim of this study was to compare the effect of retreatment systems and supplementary file application on the amount of apical debris extrusion. Materials and Methods: Forty-eight extracted mandibular premolars with a single canal and similar length were selected. The root canals were prepared with the ProTaper Universal system with a torque-controlled engine. The root canals were dried and obturated using gutta-percha and sealer. The specimens were randomly divided into four equal groups according to the retreatment procedure (Group 1, Mtwo retreatment files; Group 2, Mtwo retreatment files + Mtwo rotary file #30 as a supplementary file; Group 3, ProTaper Universal retreatment (PTUR) files; and Group 4, PTUR files + ProTaper F3 as a supplementary file). The debris extruded during instrumentation was collected into preweighed Eppendorf tubes. The amount of apically extruded debris was calculated by subtracting the initial weight of the tube from the final weight; three consecutive weighings were obtained for each tube. Results: No statistically significant difference was found in the amount of apically extruded debris between Groups 1 and 3 (P = 0.590). A significant difference was observed between Groups 1 and 2 (P < 0.05), and between Groups 3 and 4 (P < 0.05). Conclusions: The use of a supplementary file significantly increased the amount of apically extruded debris. PMID:27563185
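
    The weighing procedure described above amounts to simple arithmetic: average the repeated final weighings and subtract the tube's initial weight. A sketch with made-up values:

    ```python
    def extruded_debris_mass(initial_mg, final_weighings_mg):
        """Mean of repeated final weighings minus the tube's initial
        weight, as in the study's protocol. All values used with this
        function in the example below are made up for illustration."""
        mean_final = sum(final_weighings_mg) / len(final_weighings_mg)
        return mean_final - initial_mg

    # Hypothetical tube: 1000.0 mg empty, three post-retreatment weighings.
    mass = extruded_debris_mass(1000.0, [1000.9, 1001.0, 1001.1])
    ```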

  11. Hybrid system of generating electricity, solar eolic diesel San Juanico, Baja California Sur, Mexico; Sistema hibrido de generacion electrica, eolico solar diesel San Juanico, Baja California Sur, Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Huerta, Javier [Comision Federal de Electricidad, La Paz, Baja California Sur (Mexico); Johnston, Peter [Technology Development, Arizona (United States); Napikoski, Chester [Generation Engineering, Arizona (United States); Escutia, Ricardo [Comision Federal de Electricidad, Baja California Sur (Mexico)

    2000-07-01

    The Comision Federal de Electricidad (CFE) and the U.S. electric company Arizona Public Service (APS) made a collaboration agreement to develop an electricity generation project using renewable resources. The agreed premises were the following: 1. Focus the project on a rural community. 2. The cost of the whole project should be lower than the cost of interconnection to a conventional system. 3. Acceptance by the community and the governmental authorities. 4. Sustainability of the operation of the system. Several technical and economic analyses were carried out, such as the evaluation of the solar and wind resources, a study of the environmental impact, and negotiation of agreements to obtain the economic resources from Niagara Mohawk (NIMO) and USAID, all of this under the supervision of Sandia National Laboratories. After the anemometric and solar radiation measurements were made, the community of San Juanico, in Baja California Sur, Mexico, was considered the most feasible, also taking into account logistics, social aspects, the size of the community, and its potential as a catalyst for the economic activities of tourism and fishing. APS formulated the executive project in accordance with the recommendations of the different areas of CFE. The project basically consists of the installation of 10 wind generators of 10 kW each and a battery bank of 432 kWh, plus an emergency diesel generator of 80 kW, in addition to the civil and electromechanical installation. It was necessary to involve the community in the knowledge and follow-up of the project from the start, since this factor was considered essential to its success. Low-consumption lamps were installed in the houses and street lighting to optimize the system. The patronato, a civil association of the community, is in charge of the administration of the system, with support from CFE personnel. The income

  12. Building A High Performance Parallel File System Using Grid Datafarm and ROOT I/O

    CERN Document Server

    Morita, Y; Watase, Y; Tatebe, Osamu; Sekiguchi, S; Matsuoka, S; Soda, N; Dell'Acqua, A

    2003-01-01

    The sheer amount of petabyte-scale data foreseen in the LHC experiments requires careful consideration of the persistency design and the system design in world-wide distributed computing. Event parallelism of HENP data analysis enables us to take maximum advantage of high-performance cluster computing and networking when we keep the parallelism in the data processing, data management, and data transfer phases. The modular architecture of FADS/Goofy, a versatile detector simulation framework for Geant4, enables an easy choice of plug-in facilities for persistency technologies such as Objectivity/DB and ROOT I/O. The framework is designed to work naturally with the parallel file system of Grid Datafarm (Gfarm). FADS/Goofy is proven to generate 10^6 Geant4-simulated Atlas Mockup events using a 512 CPU PC cluster. The data in ROOT I/O files is replicated using the Gfarm file system. The histogram information is collected from the distributed ROOT files. During the data replicatio...

  13. Folds--Offshore of San Francisco Map Area, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of DS 781 presents data for folds for the geologic and geomorphic map of the Offshore of San Francisco map area, California. The vector data file is...

  14. Faults--Offshore of San Francisco Map Area, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of DS 781 presents data for faults for the geologic and geomorphic map of the Offshore San Francisco map area, California. The vector data file is included...

  18. Creating Interactive Graphical Overlays in the Advanced Weather Interactive Processing System Using Shapefiles and DGM Files

    Science.gov (United States)

    Barrett, Joe H., III; Lafosse, Richard; Hood, Doris; Hoeth, Brian

    2007-01-01

    Graphical overlays can be created in real-time in the Advanced Weather Interactive Processing System (AWIPS) using shapefiles or Denver AWIPS Risk Reduction and Requirements Evaluation (DARE) Graphics Metafile (DGM) files. This presentation describes how to create graphical overlays on-the-fly for AWIPS, by using two examples of AWIPS applications that were created by the Applied Meteorology Unit (AMU) located at Cape Canaveral Air Force Station (CCAFS), Florida. The first example is the Anvil Threat Corridor Forecast Tool, which produces a shapefile that depicts a graphical threat corridor of the forecast movement of thunderstorm anvil clouds, based on the observed or forecast upper-level winds. This tool is used by the Spaceflight Meteorology Group (SMG) at Johnson Space Center, Texas and 45th Weather Squadron (45 WS) at CCAFS to analyze the threat of natural or space vehicle-triggered lightning over a location. The second example is a launch and landing trajectory tool that produces a DGM file that plots the ground track of space vehicles during launch or landing. The trajectory tool can be used by SMG and the 45 WS forecasters to analyze weather radar imagery along a launch or landing trajectory. The presentation will list the advantages and disadvantages of both file types for creating interactive graphical overlays in future AWIPS applications. Shapefiles are a popular format used extensively in Geographical Information Systems. They are usually used in AWIPS to depict static map backgrounds. A shapefile stores the geometry and attribute information of spatial features in a dataset (ESRI 1998). Shapefiles can contain point, line, and polygon features. Each shapefile contains a main file, index file, and a dBASE table. The main file contains a record for each spatial feature, which describes the feature with a list of its vertices. The index file contains the offset of each record from the beginning of the main file. The dBASE table contains records for each
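
    The shapefile layout described above (a main file of feature records, an index file of offsets, and a dBASE table) starts with a fixed 100-byte header in the main file. A minimal sketch of parsing that header, following the published ESRI layout (big-endian file code and length, little-endian version and shape type):

    ```python
    import struct

    def parse_shapefile_header(buf):
        """Parse the fixed 100-byte header of a shapefile main (.shp)
        file. Per the ESRI spec: bytes 0-3 hold the file code (always
        9994, big-endian), bytes 24-27 the file length in 16-bit words
        (big-endian), bytes 28-31 the version (always 1000,
        little-endian), and bytes 32-35 the shape type (little-endian)."""
        file_code, = struct.unpack_from(">i", buf, 0)
        length_words, = struct.unpack_from(">i", buf, 24)
        version, = struct.unpack_from("<i", buf, 28)
        shape_type, = struct.unpack_from("<i", buf, 32)  # e.g. 1=Point, 3=Polyline, 5=Polygon
        return {"file_code": file_code,
                "length_bytes": length_words * 2,
                "version": version,
                "shape_type": shape_type}
    ```

    The same header also appears at the start of the index (.shx) file, which is why a tool reading either file can validate it the same way.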

  19. File System Object Applied in Visual Basic

    Institute of Scientific and Technical Information of China (English)

    陈斌; 贾青

    2001-01-01

    This paper introduces the methods and techniques of realizing I/O operations on disk files using the File System Object (FSO) model.

  20. 76 FR 31954 - San Jose Water Company; Notice of Declaration of Intention and Soliciting Comments, Protests, and...

    Science.gov (United States)

    2011-06-02

    ... of San Jose, Santa Clara County, California. g. Filed Pursuant to: Section 23(b)(1) of the Federal... Water Company, 110 W. Santa Clara Street, San Jose, CA 95196- 0001; Telephone: (408) 279-7814; FAX:...

  1. Endodontic treatment of mandibular molar with root dilaceration using Reciproc single-file system.

    Science.gov (United States)

    Meireles, Daniely Amorin; Bastos, Mariana Mena Barreto; Marques, André Augusto Franco; Garcia, Lucas da Fonseca Roberti; Sponchiado, Emílio Carlos

    2013-08-01

    Biomechanical preparation of root canals with accentuated curvature is challenging. New rotary systems, such as Reciproc, require a shorter period of time to prepare curved canals and have become a viable alternative for endodontic treatment of teeth with root dilaceration. Thus, this study aimed to report a clinical case of endodontic therapy on a root with accentuated dilaceration using the Reciproc single-file system. The mandibular right second molar was diagnosed with asymptomatic irreversible pulpitis. Pulp chamber access was performed, and a glide path was created with a #10 K-file (Dentsply Maillefer) and PathFile #13, #16 and #19 (Dentsply Maillefer) up to the temporary working length. The working length measured corresponded to 20 mm in the mesio-buccal and mesio-lingual canals, and 22 mm in the distal canal. The R25 file (VDW GmbH) was used in all the canals for instrumentation and final preparation, followed by filling with Reciproc gutta-percha cones (VDW GmbH) and AH Plus sealer (Dentsply Maillefer), using the thermal compaction technique. The case has been followed up for 6 months, and no painful symptomatology or periapical lesions have been found. Despite the difficulties, the treatment could be performed in a shorter period of time than with conventional methods.

  2. Endodontic treatment of mandibular molar with root dilaceration using Reciproc single-file system

    Directory of Open Access Journals (Sweden)

    Daniely Amorin Meireles

    2013-08-01

    Full Text Available Biomechanical preparation of root canals with accentuated curvature is challenging. New rotary systems, such as Reciproc, require a shorter period of time to prepare curved canals and have become a viable alternative for endodontic treatment of teeth with root dilaceration. Thus, this study aimed to report a clinical case of endodontic therapy on a root with accentuated dilaceration using the Reciproc single-file system. The mandibular right second molar was diagnosed with asymptomatic irreversible pulpitis. Pulp chamber access was performed, and a glide path was created with a #10 K-file (Dentsply Maillefer) and PathFile #13, #16 and #19 (Dentsply Maillefer) up to the temporary working length. The working length measured corresponded to 20 mm in the mesio-buccal and mesio-lingual canals, and 22 mm in the distal canal. The R25 file (VDW GmbH) was used in all the canals for instrumentation and final preparation, followed by filling with Reciproc gutta-percha cones (VDW GmbH) and AH Plus sealer (Dentsply Maillefer), using the thermal compaction technique. The case has been followed up for 6 months, and no painful symptomatology or periapical lesions have been found. Despite the difficulties, the treatment could be performed in a shorter period of time than with conventional methods.

  3. Aquifer-System Characterization by Integrating Data from the Subsurface and from Space, San Joaquin Valley, California, USA

    Science.gov (United States)

    Sneed, M.; Brandt, J. T.

    2014-12-01

    Extensive groundwater pumping from the aquifer system in the San Joaquin Valley, California, between 1926 and 1970 caused widespread aquifer-system compaction and resultant land subsidence that locally exceeded 8 m. The importation of surface water in the early 1970s resulted in decreased pumping, recovery of water levels, and a reduced rate of subsidence in some areas. Recently, land-use changes and reductions in surface-water availability have caused pumping to increase, water levels to decline, and subsidence to recur. Reduced freeboard and flow capacity of several Federal, State, and local canals have resulted from this subsidence. Vertical land-surface changes during 2005-14 in the San Joaquin Valley were determined by using space-based [Interferometric Synthetic Aperture Radar (InSAR) and Global Positioning System (GPS)] and subsurface (extensometer) data; groundwater-level and lithologic data were used to understand and estimate properties that partly control the stress/strain response of the aquifer system. Results of the InSAR analysis indicate that two areas covering about 7,200 km2 subsided 20-540 mm during 2008-10; GPS data indicate that these rates continued through 2014. Groundwater levels (stress) and vertical land-surface changes (strain) were used to estimate preconsolidation head and aquifer system storage coefficients. Integrating lithology into the analysis indicates that in some parts of the valley, the compaction occurred primarily within quickly-equilibrating fine-grained deposits in deeper parts of the aquifer system. In other parts of the valley, anomalously fine-grained alluvial-fan deposits underlie one of the most rapidly subsiding areas, indicating the shallow sediments may also contribute to total subsidence. This information helps improve hydrologic and aquifer-system compaction models, which in turn can be used to consider land subsidence as a constraint in evaluating water-resource management options.
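
    The stress/strain relation mentioned above can be illustrated with the one-dimensional estimate of a skeletal storage coefficient, Sk = Δb / Δh (measured compaction divided by the head decline that caused it). The numbers in the usage example are hypothetical, not values from the study:

    ```python
    def skeletal_storage_coefficient(compaction_m, head_decline_m):
        """One-dimensional estimate Sk = Δb / Δh relating measured
        aquifer-system compaction (strain) to groundwater head decline
        (stress). Dimensionless when both inputs share units."""
        return compaction_m / head_decline_m

    # Hypothetical example: 50 mm of compaction over a 10 m head decline.
    sk = skeletal_storage_coefficient(0.05, 10.0)  # 0.005
    ```

    In practice this is computed only for stress regimes where the relation is roughly linear (e.g. inelastic compaction below the preconsolidation head), which is why the study pairs water-level records with the InSAR and extensometer strain data.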

  4. Refining previous estimates of groundwater outflows from the Medina/Diversion Lake system, San Antonio area, Texas

    Science.gov (United States)

    Slattery, Richard N.; Asquith, William H.; Gordon, John D.

    2017-02-15

    Introduction: In 2016, the U.S. Geological Survey (USGS), in cooperation with the San Antonio Water System, began a study to refine previously derived estimates of groundwater outflows from Medina and Diversion Lakes in south-central Texas near San Antonio. When full, Medina and Diversion Lakes (hereinafter referred to as the Medina/Diversion Lake system) (fig. 1) impound approximately 255,000 acre-feet and 2,555 acre-feet of water, respectively. Most recharge to the Edwards aquifer occurs as seepage from streams as they cross the outcrop (recharge zone) of the aquifer (Slattery and Miller, 2017). Groundwater outflows from the Medina/Diversion Lake system have also long been recognized as a potentially important additional source of recharge. Puente (1978) published methods for estimating monthly and annual potential recharge to the Edwards aquifer from the Medina/Diversion Lake system. During October 1995–September 1996, the USGS conducted a study to better define short-term rates of recharge and to reduce the error and uncertainty associated with estimates of monthly recharge from the Medina/Diversion Lake system (Lambert and others, 2000). As a follow-up to that study, Slattery and Miller (2017) published estimates of groundwater outflows from detailed water budgets for the Medina/Diversion Lake system during 1955–1964, 1995–1996, and 2001–2002. The water budgets were compiled for selected periods during which the water-budget components were inferred to be relatively stable and the influence of precipitation, stormwater runoff, and changes in storage were presumably minimal. Linear regression analysis techniques were used by Slattery and Miller (2017) to assess the relation between the stage in Medina Lake and groundwater outflows from the Medina/Diversion Lake system.
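
    The stage/outflow relation described above is assessed with ordinary linear regression; the closed-form least-squares slope and intercept can be sketched as below. The data in the usage example are synthetic, not the study's measurements:

    ```python
    def fit_line(x, y):
        """Ordinary least-squares fit y = slope * x + intercept, the
        kind of linear regression used to relate lake stage to
        groundwater outflow. Returns (slope, intercept)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
                 / sum((xi - mx) ** 2 for xi in x))
        return slope, my - slope * mx

    # Synthetic illustration: stage (x) vs outflow (y) on an exact line.
    slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
    ```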

  5. A Mobile P2P File Sharing System Based on a File Routing Table

    Institute of Scientific and Technical Information of China (English)

    樊里略; 苏文莉; 陈佳

    2012-01-01

    A P2P file sharing system for the mobile environment (Mobile P2P File Sharing System, M-P2PFS) is designed. Based on a per-node file routing table, a file search scheme and a transfer protocol are presented. The system is able to adapt file transfers automatically and guarantee complete file downloads when nodes move out of transfer range or leave the system dynamically. The simulation results show that the file search scheme and transfer protocol in M-P2PFS outperform the approaches used in wired-Internet P2P file sharing systems such as Gnutella; M-P2PFS achieves high file search accuracy and a high file transfer success rate.
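
    The file routing table idea can be sketched as a per-node map from file names to the node IDs believed to hold them, so a query can be forwarded directly instead of flooded Gnutella-style. Everything below (names, methods) is a hypothetical illustration, not the paper's protocol:

    ```python
    class FileRoutingTable:
        """Hypothetical sketch of a per-node file routing table:
        each entry maps a file name to the set of node IDs believed
        to hold it, enabling directed lookup instead of flooding."""

        def __init__(self):
            self.table = {}  # file name -> set of node IDs

        def advertise(self, filename, node_id):
            # Record that node_id holds (or claims to hold) the file.
            self.table.setdefault(filename, set()).add(node_id)

        def lookup(self, filename):
            # Candidate holders to forward the query to.
            return self.table.get(filename, set())

        def node_left(self, node_id):
            # Self-adaptive cleanup when a node moves away or leaves,
            # so stale routes do not break file transfers.
            for holders in self.table.values():
                holders.discard(node_id)
    ```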

  6. Operation of a real-time warning system for debris flows in the San Francisco bay area, California

    Science.gov (United States)

    Wilson, Raymond C.; Mark, Robert K.; Barbato, Gary; ,

    1993-01-01

    The United States Geological Survey (USGS) and the National Weather Service (NWS) have developed an operational warning system for debris flows during severe rainstorms in the San Francisco Bay region. The NWS makes quantitative forecasts of precipitation from storm systems approaching the Bay area and coordinates a regional network of radio-telemetered rain gages. The USGS has formulated thresholds for the intensity and duration of rainfall required to initiate debris flows. The first successful public warnings were issued during a severe storm sequence in February 1986. Continued operation of the warning system since 1986 has provided valuable working experience in rainfall forecasting and monitoring, refined rainfall thresholds, and streamlined procedures for issuing public warnings. Advisory statements issued since 1986 are summarized.
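
    Rainfall intensity-duration thresholds like those mentioned above are commonly expressed as a power-law curve I = a * D^b. The sketch below uses the coefficients of Caine's (1980) worldwide curve purely as an illustration; the actual USGS Bay Area thresholds differ from these values:

    ```python
    def exceeds_debris_flow_threshold(duration_h, intensity_mm_per_h,
                                      a=14.82, b=-0.39):
        """Check average rainfall intensity (mm/h) over a storm
        duration (hours) against a power-law threshold I = a * D**b.
        Default coefficients follow Caine's (1980) worldwide curve,
        used here only as an example; operational thresholds are
        calibrated regionally."""
        return intensity_mm_per_h >= a * duration_h ** b
    ```

    For a 10-hour storm the example curve gives a threshold of about 6 mm/h, so a sustained 7 mm/h would trigger a warning evaluation while 3 mm/h would not.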

  7. San Marino.

    Science.gov (United States)

    1985-02-01

    San Marino, an independent republic located in north central Italy, in 1983 had a population of 22,206 growing at an annual rate of 0.9%. The literacy rate is 97% and the infant mortality rate is 9.6/1000. The terrain is mountainous and the climate is moderate. According to local tradition, San Marino was founded by a Christian stonecutter in the 4th century A.D. as a refuge against religious persecution. Its recorded history began in the 9th century, and it has survived assaults on its independence by the papacy, the Malatesta lords of Rimini, Cesare Borgia, Napoleon, and Mussolini. An 1862 treaty with the newly formed Kingdom of Italy has been periodically renewed and amended. The present government is an alliance between the socialists and communists. San Marino has had its own statutes and governmental institutions since the 11th century. Legislative authority at present is vested in a 60-member unicameral parliament. Executive authority is exercised by the 11-member Congress of State, the members of which head the various administrative departments of the government. The posts are divided among the parties which form the coalition government. Judicial authority is partly exercised by Italian magistrates in civil and criminal cases. San Marino's policies are tied to Italy's, and political organizations and labor unions active in Italy are also active in San Marino. Since World War II, there has been intense rivalry between 2 political coalitions: the Popular Alliance, composed of the Christian Democratic Party and the Independent Social Democratic Party, and the Liberty Committee, a coalition of the Communist Party and the Socialist Party. San Marino's gross domestic product was $137 million and its per capita income was $6290 in 1980. The principal economic activities are farming and livestock raising, along with some light manufacturing. Foreign transactions are dominated by tourism. The government derives most of its revenue from the sale of postage stamps to

  8. San Francisco Bay Area Fault Observations Displayed in Google Earth

    Science.gov (United States)

    Lackey, H.; Hernandez, M.; Nayak, P.; Zapata, I.; Schumaker, D.

    2006-12-01

    According to the United States Geological Survey (USGS), the San Francisco Bay Area has a 62% probability of experiencing a major earthquake in the next 30 years. The Hayward fault and the San Andreas fault are the two main faults in the Bay Area that are capable of producing earthquakes of magnitude 6.7 or larger - a size that could profoundly affect many of the 7 million people who live in the Bay Area. The Hayward fault has a 27% probability of producing a major earthquake in the next 30 years, and the San Andreas fault has a 21% probability. Our research group, which is part of the SF-ROCKS high school outreach program, studied the Hayward and San Andreas faults. The goal of our project was to observe these faults at various locations, measure the effects of creep, and present the data in Google Earth, a freeware tool for the public to easily view and interact with these and other seismic-hazard data. We examined the Hayward and San Andreas faults (as mapped by USGS scientists) in Google Earth to identify various sites where we could possibly find evidence of fault creep. We next visited these sites in the field, where we mapped the location using a hand-held Global Positioning System, identified and photographed fault evidence, and measured offset features with a ruler or tape measure. Fault evidence included en echelon shears in pavement, warped buildings, and offset features such as sidewalks. Fault creep offset measurements range from 1.5 to 19 cm. We also identified possible evidence of fault creep along the San Andreas fault in South San Francisco where it had not been previously described. In Google Earth, we plotted our field sites, linked photographs showing evidence of faulting, and included detailed captions to explain the photographs. We will design a webpage containing the data in a Keyhole Markup Language (KML) file format for display in Google Earth. Any interested person needs only to download the free version of Google Earth software and visit our

  9. Implement Instructions Access Files Directly in Operating Systems%在操作系统实现指令对文件的直接寻址

    Institute of Scientific and Technical Information of China (English)

    刘福岩; 尤晋元

    2000-01-01

    In this paper, the concept and implementation of Instructions Directly Access File are discussed, and its advantages and benefits are analysed. The author believes Instructions Directly Access File should be adopted in operating systems.

  10. A File Allocation Strategy for Energy-Efficient Disk Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Otoo, Ekow J; Otoo, Ekow J.; Rotem, Doron; Pinar, Ali; Tsao, Shi-Chiang

    2008-06-27

    Exponential data growth is a reality for most enterprise and scientific data centers. Improvements in price/performance and storage densities of disks have made it both easy and affordable to maintain most of the data in large disk storage farms. The provisioning of disk storage farms, however, is at the expense of high energy consumption due to the large number of spinning disks. The power for spinning the disks and the associated cooling costs is a significant fraction of the total power consumption of a typical data center. Given the trend of rising global fuel and energy prices and the high rate of data growth, the challenge is to implement appropriate configurations of large scale disk storage systems that meet performance requirements for information retrieval across data centers. We present part of the solution to this challenge with an energy efficient file allocation strategy on a large scale disk storage system. Given performance characteristics of the disks, and a profile of the workload in terms of frequencies of file requests and their sizes, the basic idea is to allocate files to disks such that the disks can be configured into two sets of active (constantly spinning), and passive (capable of being spun up or down) disk pools. The goal is to minimize the number of active disks subject to I/O performance constraints. We present an algorithm for solving this problem with guaranteed bounds from the optimal solution. Our algorithm runs in O(n) time where n is the number of files allocated. It uses a mapping of our file allocation problem to a generalization of the bin packing problem known as 2-dimensional vector packing. Detailed simulation results are also provided.
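
    The hot/cold placement idea in the abstract can be sketched as a greedy first-fit allocation under two simultaneous constraints (capacity and I/O rate), which is the 2-dimensional vector-packing view. This is an illustrative simplification, not the paper's O(n) algorithm with optimality bounds:

    ```python
    def allocate_files(files, disk_capacity, disk_bandwidth):
        """Greedy first-fit sketch: files sorted by request rate are
        packed onto 'active' (always-spinning) disks subject to two
        constraints at once -- capacity and I/O rate. Files with no
        request traffic go to the passive (spin-down) pool. The goal
        mirrors the paper's: keep the active-disk count small."""
        files = sorted(files, key=lambda f: f["rate"], reverse=True)
        active, passive = [], []
        for f in files:
            if f["rate"] == 0:
                # Never-requested file: store on the passive pool.
                passive.append(f["name"])
                continue
            for disk in active:
                if disk["free_cap"] >= f["size"] and disk["free_bw"] >= f["rate"]:
                    disk["free_cap"] -= f["size"]
                    disk["free_bw"] -= f["rate"]
                    disk["files"].append(f["name"])
                    break
            else:
                # No active disk fits: provision a new always-spinning disk.
                active.append({"free_cap": disk_capacity - f["size"],
                               "free_bw": disk_bandwidth - f["rate"],
                               "files": [f["name"]]})
        return active, passive
    ```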

  11. Design and Implementation of File Access and Control System Based on Dynamic Web

    Institute of Scientific and Technical Information of China (English)

    GAO Fuxiang; YAO Lan; BAO Shengfei; YU Ge

    2006-01-01

    A dynamic Web application, which can help the departments of an enterprise collaborate with each other conveniently, is proposed. Several popular design solutions are introduced first. Then, a dynamic Web system is chosen for developing the file access and control system. Finally, the paper gives the detailed process of the design and implementation of the system, which includes some key problems such as solutions for document management and system security. Additionally, the limitations of the system as well as suggestions for further improvement are also explained.

  12. Non-volatile main memory management methods based on a file system.

    Science.gov (United States)

    Oikawa, Shuichi

    2014-01-01

    There are upcoming non-volatile (NV) memory technologies that provide byte addressability and high performance; PCM, MRAM, and STT-RAM are such examples. Such NV memory can be used as storage because of its data persistency without power supply, while it can be used as main memory because its performance matches that of DRAM. A number of studies have investigated its use for main memory and storage; they were, however, conducted independently. This paper presents methods that enable the integration of main memory and file system management for NV memory. Such integration allows NV memory to be utilized simultaneously as both main memory and storage. The presented methods use a file system as their basis for NV memory management. We implemented the proposed methods in the Linux kernel and performed an evaluation on the QEMU system emulator. The evaluation results show that 1) the proposed methods can perform comparably to the existing DRAM memory allocator and significantly better than page swapping, 2) their performance is affected by the internal data structures of a file system, and 3) the data structures appropriate for traditional hard disk drives do not always work effectively for byte-addressable NV memory. We also evaluated the effects caused by the longer access latency of NV memory by cycle-accurate full-system simulation. The results show that the effect on page allocation cost is limited if the increase in latency is moderate.
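
    The core idea of letting a file system back byte-addressable memory can be sketched with a memory-mapped file: once mapped, the file's contents are read and written like ordinary memory. This illustrates the general technique only, not the paper's kernel implementation:

    ```python
    import mmap
    import os

    def file_as_memory(path, size=4096):
        """Map a file into the address space so it can be used as
        byte-addressable 'memory' whose contents persist in the file,
        analogous to file-system-managed NV memory."""
        fd = os.open(path, os.O_CREAT | os.O_RDWR)
        os.ftruncate(fd, size)          # reserve the backing region
        mem = mmap.mmap(fd, size)       # shared, read/write mapping
        os.close(fd)                    # the mapping outlives the fd
        return mem
    ```

    Writing through the mapping and flushing it makes the bytes visible in the file, the persistence property that NV main memory provides without an explicit flush-to-disk step.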

  13. 77 FR 43592 - System Energy Resources, Inc.; Notice of Filing

    Science.gov (United States)

    2012-07-25

    From the Federal Register Online via the Government Publishing Office, DEPARTMENT OF ENERGY. Take notice that on ..., 2012, System Energy Resources, Inc. (System Energy Resources) submitted a supplement to its petition... In the supplement, System Energy Resources supplements its March 28 petition to provide additional information and...

  14. Age, distribution, and stratigraphic relationship of rock units in the San Joaquin Basin Province, California: Chapter 5 in Petroleum systems and geologic assessment of oil and gas in the San Joaquin Basin Province, California

    Science.gov (United States)

    Hosford Scheirer, Allegra; Magoon, Leslie B.

    2008-01-01

    The San Joaquin Basin is a major petroleum province that forms the southern half of California’s Great Valley, a 700-km-long, asymmetrical basin that originated between a subduction zone to the west and the Sierra Nevada to the east. Sedimentary fill and tectonic structures of the San Joaquin Basin record the Mesozoic through Cenozoic geologic history of North America’s western margin. More than 25,000 feet (>7,500 meters) of sedimentary rocks overlie the basement surface and provide a nearly continuous record of sedimentation over the past ~100 m.y. Further, depositional geometries and fault structures document the tectonic evolution of the region from forearc setting to strike-slip basin to transpressional margin. Sedimentary architecture in the San Joaquin Basin is complicated because of these tectonic regimes and because of lateral changes in depositional environment and temporal changes in relative sea level. Few formations are widespread across the basin. Consequently, a careful analysis of sedimentary facies is required to unravel the basin’s depositional history on a regional scale. At least three high-quality organic source rocks formed in the San Joaquin Basin during periods of sea level transgression and anoxia. Generated on the basin’s west side, hydrocarbons migrated into nearly every facies type in the basin, from shelf and submarine fan sands to diatomite and shale to nonmarine coarse-grained rocks to schist. In 2003, the U.S. Geological Survey (USGS) completed a geologic assessment of undiscovered oil and gas resources and future additions to reserves in the San Joaquin Valley of California (USGS San Joaquin Basin Province Assessment Team, this volume, chapter 1). Several research aims supported this assessment: identifying and mapping the petroleum systems, modeling the generation, migration, and accumulation of hydrocarbons, and defining the volumes of rock to be analyzed for additional resources. To better understand the three dimensional

  15. HitPeers: A P2P File Sharing System with Category Tree

    Institute of Scientific and Technical Information of China (English)

    Jiang Shouxu(姜守旭); Liang Zhengqiang; Li Jianzhong

    2003-01-01

    HitPeers constitute a scalable and highly efficient P2P file sharing system in which any data file can be shared. The center of HitPeers is the Category Tree (CT). The CT collects published information by category. It is flexible enough to let users customize their own local CT, and its hierarchy helps users find the information they desire most conveniently. To increase robustness while retaining efficiency, HitPeers divides the tree into disjoint parts, each of which is a subtree. Special nodes named Onodes take charge of a subtree and play the role of a service provider. HitPeers produces more and more Onodes to meet service demands in an internet-scale distributed environment. This paper presents a profile of the whole system.
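The Category Tree idea, a hierarchy split into disjoint subtrees with each subtree served by an Onode, can be sketched as below. The node layout and the `publish`/`lookup` helpers are our own illustration of the mechanism, not HitPeers' actual protocol.

```python
class CategoryNode:
    """One category in the tree; may be delegated to an Onode peer."""
    def __init__(self, name):
        self.name = name
        self.children = {}
        self.files = set()   # files published under this category
        self.onode = None    # peer responsible for this subtree, if any

def publish(root, path, filename):
    # Walk down the category path, creating nodes as needed.
    node = root
    for part in path:
        node = node.children.setdefault(part, CategoryNode(part))
    node.files.add(filename)

def lookup(root, path):
    """Walk the tree; if a subtree is owned by an Onode, a real system
    would forward the query to that peer instead of descending locally."""
    node = root
    for part in path:
        if node.onode is not None:
            return ("forward-to", node.onode)
        node = node.children[part]
    return ("files", node.files)

root = CategoryNode("/")
publish(root, ["media", "music"], "song.mp3")
result = lookup(root, ["media", "music"])   # → ('files', {'song.mp3'})
```

Assigning `node.onode = "peer-42"` at any subtree root would make lookups below that point return a forwarding decision, which is how the disjoint-subtree delegation scales out.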

  16. Integrated monitoring system of the San Vito Romano rockslide (Central Italy)

    Science.gov (United States)

    Amato, Gabriele; Fubelli, Giandomenico; Pezzo, Giuseppe; Iasio, Christian

    2017-04-01

    San Vito Romano is a small town 40 km east of Rome (Central Italy). Since the village began to grow during the 1970s, new buildings have shown evidence of landslide activity, including ground settlement and cracking. In response, several boreholes were drilled throughout the municipal area over the following years, and many geotechnical data became available. However, after the landslide event of July 2011, the need to deploy devices to monitor deformation became of paramount importance. As a consequence, rainfall and temperature began to be monitored daily, six extensometers with real-time connections were placed on the most damaged buildings, and periodic inclinometric and piezometric measurements were started. Moreover, a geomorphological field survey was carried out, and PSInSAR data from the ERS, Envisat, and Cosmo-SkyMed satellites were investigated. This work aims to integrate the information provided by all the in-situ and remote sensing techniques in order to reconstruct the dimensions, typology, state of activity, and triggering factors of the phenomena affecting San Vito Romano. The study area is about 2 km2 wide. Here, a siliciclastic formation with a high percentage of clayey minerals crops out, and a significant thickness of landslide deposits has been recognized in many borehole stratigraphies. Preliminary results show that the study area is affected by shallow rapid landslides as well as by deep-seated phenomena. The former mainly involve clayey layers and colluvium/eluvium cover, while the latter is a rockslide that affects the bedrock to depths greater than 20 m. In both cases, ground displacements appear directly connected with rainfall intensity and groundwater level variation. Data have been elaborated through geostatistical, GIS, and time-series analysis, and the overall results will be presented and discussed in the full work.

  17. Register file soft error recovery

    Science.gov (United States)

    Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.

    2013-10-15

    Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
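The recovery scheme in this abstract, a mirrored register file with error detection on reads and repair from the mirror copy, might be sketched in software as follows. Parity is used here as a stand-in for the unspecified detection circuitry, and the class is purely illustrative, not the patented hardware design.

```python
class MirroredRegisterFile:
    """Software sketch: every write updates a mirror and a parity bit;
    a read that fails the parity check is repaired from the mirror,
    playing the role of the inserted error-recovery instruction."""
    def __init__(self, n):
        self.primary = [0] * n
        self.mirror = [0] * n
        self.parity = [0] * n

    def write(self, i, value):
        self.primary[i] = value
        self.mirror[i] = value
        self.parity[i] = bin(value).count("1") % 2

    def read(self, i):
        value = self.primary[i]
        if bin(value).count("1") % 2 != self.parity[i]:
            # Corruption detected: restore from the mirror copy.
            value = self.primary[i] = self.mirror[i]
        return value

rf = MirroredRegisterFile(4)
rf.write(0, 0b1010)
rf.primary[0] ^= 0b0001   # inject a single-bit soft error
repaired = rf.read(0)     # detected and repaired from the mirror
```

Single-bit parity detects any odd number of flipped bits, which matches the single-event-upset model soft-error schemes usually target.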

  18. Endodontic Treatment of Hypertaurodontic Mandibular Molar Using Reciprocating Single-file System: A Case Report.

    Science.gov (United States)

    C do Nascimento, Adriano; A F Marques, André; C Sponchiado-Júnior, Emílio; F R Garcia, Lucas; M A de Carvalho, Fredson

    2016-01-01

    Taurodontism is a developmental tooth disorder characterized by lack of constriction in the cementoenamel junction and consequent vertical stretch of the pulp chamber, accompanied by apical displacement of the pulpal floor. The endodontic treatment of teeth with this type of morpho-anatomical anomaly is challenging. The purpose of this article is to report the successful endodontic treatment of a hypertaurodontic mandibular molar using a reciprocating single-file system.

  19. A Survey of Distributed Capability File Systems and Their Application to Cloud Environments

    Science.gov (United States)

    2014-09-01

    Department of the Navy memorandum N2N6/4U119014. [2] I. R. Porche III, B. Wilson, E.-E. Johnson , S. Tierney, and E. Saltzman, “Data flood: Helping...file systems,” in Proceedings of the 2007 ACM/IEEE Conference on Supercomputing (SC’07), Nov. 2007, pp. 1–12. [61] J. G. Steiner , C. Neuman, and J. I

  20. Uncoupling File System Components for Bridging Legacy and Modern Storage Architectures

    Science.gov (United States)

    Golpayegani, N.; Halem, M.; Tilmes, C.; Prathapan, S.; Earp, D. N.; Ashkar, J. S.

    2016-12-01

    Long-running Earth science projects can span decades of architectural changes in both processing and storage environments. As storage architectures change over the decades, such projects need to adjust their tools, systems, and expertise to integrate the new technologies with their legacy systems. Traditional file systems lack the support needed to accommodate such hybrid storage infrastructure, forcing more complex tool development to encompass all of the storage architectures a project uses. The MODIS Adaptive Processing System (MODAPS) and the Level 1 and Atmospheres Archive and Distribution System (LAADS) are an example of a project spanning several decades that has evolved into a hybrid storage architecture. MODAPS/LAADS has developed the Lightweight Virtual File System (LVFS), which ensures seamless integration of all the different storage architectures: standard block-based POSIX-compliant storage disks, object-based architectures such as the S3-compliant HGST Active Archive System, and Seagate Kinetic disks utilizing the Kinetic Protocol. With LVFS, all analysis and processing tools used for the project continue to function unmodified regardless of the underlying storage architecture, enabling MODAPS/LAADS to easily integrate any new storage architecture without the costly need to modify existing tools. Most file systems are designed as a single application responsible for organizing data into a tree using metadata, determining where data is stored, and providing a method of data retrieval. We will show how LVFS' unique approach of treating these components in a loosely coupled fashion enables it to merge different storage architectures into a single uniform storage system that bridges the underlying hybrid architecture.
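The "loosely coupled components" idea can be sketched as a namespace layer that is independent of the placement backend, so callers see one tree regardless of where the bytes live. The backend classes and method names below are our own illustration, not the actual LVFS interfaces.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Placement/retrieval component, decoupled from the namespace."""
    @abstractmethod
    def put(self, key, data): ...
    @abstractmethod
    def get(self, key): ...

class PosixBackend(StorageBackend):
    """Stand-in for block-based POSIX storage."""
    def __init__(self): self.blobs = {}
    def put(self, key, data): self.blobs[key] = data
    def get(self, key): return self.blobs[key]

class ObjectStoreBackend(StorageBackend):
    """Stand-in for an S3-style or Kinetic object store."""
    def __init__(self): self.objects = {}
    def put(self, key, data): self.objects[key] = data
    def get(self, key): return self.objects[key]

class VirtualFS:
    """Single namespace over heterogeneous backends: the namespace
    component only records which backend holds each path."""
    def __init__(self):
        self.location = {}   # path -> backend

    def write(self, path, data, backend):
        self.location[path] = backend
        backend.put(path, data)

    def read(self, path):
        return self.location[path].get(path)

fs = VirtualFS()
fs.write("/granules/a.hdf", b"legacy", PosixBackend())
fs.write("/granules/b.hdf", b"object", ObjectStoreBackend())
```

Because tools only call `read`/`write` on the namespace layer, adding a new storage architecture means adding one backend class, with no change to the tools, which is the cost saving the abstract describes.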

  1. Log-less metadata management on metadata server for parallel file systems.

    Science.gov (United States)

    Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning

    2014-01-01

    This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and the metadata server has handled, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, and metadata processing performance improves. Because the client file system backs up these sent metadata requests in its own memory, the overhead of handling the backup requests is much smaller than the overhead incurred by a metadata server that adopts logging or journaling to provide a highly available metadata service. The experimental results show that the proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server crashes or unexpectedly becomes non-operational.

  2. Log-Less Metadata Management on Metadata Server for Parallel File Systems

    Directory of Open Access Journals (Sweden)

    Jianwei Liao

    2014-01-01

    Full Text Available This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and the metadata server has handled, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve a highly available metadata service, and metadata processing performance improves. Because the client file system backs up these sent metadata requests in its own memory, the overhead of handling the backup requests is much smaller than the overhead incurred by a metadata server that adopts logging or journaling to provide a highly available metadata service. The experimental results show that the proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server crashes or unexpectedly becomes non-operational.
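The log-less scheme can be sketched as follows, assuming a toy dict-based MDS: each client retains the metadata requests it has sent, and recovery replays those client-side backups instead of a server-side journal. In a real system the replayed operations would need a globally consistent ordering; this sketch simply replays clients in sequence.

```python
class MetadataServer:
    """Toy MDS: metadata is an in-memory table, no journal on disk."""
    def __init__(self):
        self.table = {}

    def apply(self, op, path, value=None):
        if op == "create":
            self.table[path] = value
        elif op == "remove":
            self.table.pop(path, None)

class Client:
    """Client file system that backs up every request it sends."""
    def __init__(self, mds):
        self.mds = mds
        self.backup = []   # sent requests kept in client memory

    def send(self, op, path, value=None):
        self.backup.append((op, path, value))
        self.mds.apply(op, path, value)

mds = MetadataServer()
c1, c2 = Client(mds), Client(mds)
c1.send("create", "/a", 1)
c2.send("create", "/b", 2)
c1.send("remove", "/a")

mds = MetadataServer()             # crash: all in-memory metadata lost
for client in (c1, c2):            # recovery: replay every client's backup
    for op, path, value in client.backup:
        mds.apply(op, path, value)
```

The fast path never touches nonvolatile storage on the server, which is where the speedup over logging/journaling comes from; the cost is moved to the (rare) recovery path.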

  3. FDA Adverse Event Reporting System (FAERS): Latest Quarterly Data Files

    Data.gov (United States)

    U.S. Department of Health & Human Services — The FDA Adverse Event Reporting System (FAERS) is a database that contains information on adverse event and medication error reports submitted to FDA. The database...

  4. 一种多维度存储文件系统的测试指标体系%Multi-dimensional Test Index System of Storage File System

    Institute of Scientific and Technical Information of China (English)

    王惠峰; 李先国; 李战怀; 张晓; 贺秦禄

    2011-01-01

    现有的文件系统测试工具不能准确、全面地反映文件系统的整体状况.针对该问题,提出一种多维度存储文件系统的测试指标体系,从多个角度探索影响文件系统的因素,阐述存储系统的各项技术指标,为评测和优化存储系统提供支持.介绍自主研发的专用测试工具,并对蓝鲸文件系统和CAPFS文件系统进行测试,结果表明,该文件系统指标体系有效实用.%Existing file system test tools cannot accurately and comprehensively reflect the overall state of a file system. To address this problem, this paper proposes a multi-dimensional test index system for storage file systems. It explores the factors that affect a file system from multiple perspectives and comprehensively presents the various technical indicators of the file system, providing strong support for evaluating and optimizing the performance of storage systems. The paper introduces specialized, self-developed test tools and uses them to test the Blue Whale File System (BWFS) and CAPFS. Results show that the index system is practical and effective.

  5. Discourage free riding in Peer-to-Peer file sharing systems with file migration and workload balancing approach

    Institute of Scientific and Technical Information of China (English)

    YU Yijiao; JIN Hai

    2007-01-01

    Free riding is a great challenge to the development and maintenance of Peer-to-Peer (P2P) networks. A file migration and workload balancing based approach (FMWBBA) to discourage free riding is proposed in this paper. The heart of the mechanism is to migrate some shared files from overloaded peers to neighboring free riders automatically and transparently, which forces free riders to offer services when altruistic peers are heavily overloaded. File migration is the key issue in this approach, and the related strategies are discussed. A simulation is designed to verify the approach, and the results show that it not only alleviates free riding but also efficiently improves the Quality of Service (QoS) and robustness of P2P networks.
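The migration step can be sketched as a simple rebalancing pass: when a peer's load exceeds a threshold, files move to the least-loaded neighbor, which then must serve them. The load model (file count) and the threshold are our own assumptions; the paper's actual migration strategies are richer.

```python
def rebalance(peers, threshold):
    """peers: {name: set_of_files}. Migrate files from overloaded peers
    to the currently least-loaded peer until no peer exceeds the
    threshold. Returns the list of migrations performed."""
    moved = []
    for name, files in peers.items():
        while len(files) > threshold:
            # Pick the least-loaded peer (a free rider, in the common case).
            target = min(peers, key=lambda p: len(peers[p]))
            if target == name:
                break   # nobody is less loaded; nothing to migrate
            f = files.pop()
            peers[target].add(f)
            moved.append((f, name, target))
    return moved

peers = {"altruist": {"f1", "f2", "f3", "f4"}, "freerider": set()}
migrations = rebalance(peers, threshold=2)
```

After the pass the free rider holds half the files and therefore serves requests for them, which is exactly the "enforced service" effect the abstract describes.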

  6. The relationship between carbonate facies, volcanic rocks and plant remains in a late Palaeozoic lacustrine system (San Ignacio Fm, Frontal Cordillera, San Juan province, Argentina)

    Science.gov (United States)

    Busquets, P.; Méndez-Bedia, I.; Gallastegui, G.; Colombo, F.; Cardó, R.; Limarino, O.; Heredia, N.; Césari, S. N.

    2013-07-01

    The San Ignacio Fm, a late Palaeozoic foreland basin succession that crops out in the Frontal Cordillera (Argentinean Andes), contains lacustrine microbial carbonates and volcanic rocks. Modification by extensive pedogenic processes contributed to the massive aspect of the calcareous beds. Most of the volcanic deposits in the San Ignacio Fm consist of pyroclastic rocks and resedimented volcaniclastic deposits. Less frequent lava flows produced during effusive eruptions led to the generation of tabular layers of fine-grained, greenish or grey andesites, trachytes and dacites. The pyroclastic flow deposits correspond mainly to welded ignimbrites made up of former glassy pyroclasts devitrified to a microcrystalline groundmass, scarce crystals of euhedral plagioclase, quartz and K-feldspar, opaque minerals, aggregates of fine-grained phyllosilicates, and fiammes defining a bedding-parallel foliation generated by welding or diagenetic compaction. Widespread silicified and silica-permineralized plant remains and carbonate mud clasts are found, usually embedded within the ignimbrites. The carbonate sequences are underlain and overlain by volcanic rocks. The bottoms of the carbonate sequences are mostly gradational, while their tops are usually sharp. The lower part of the carbonate sequences is made up of carbonate muds that appear progressively, filling interstices in the top of the underlying volcanic rocks, and gradually become more abundant until they form the whole of the rock fabric. Carbonate growth on volcanic sandstones and pyroclastic deposits also occurs, with nucleation of micritic carbonate and associated production of pyrite. Cyanobacteria, which formed the locus of mineral precipitation, were involved in this nucleation. The growth of some of the algal mounds was halted by the progressive accumulation of volcanic ash particles, but in most cases the upper boundary is sharp and suddenly truncated by pyroclastic flows or volcanic avalanches. These pyroclastic flows partially destroyed the

  7. Development of a utility system for nuclear reaction data file: WinNRDF

    Energy Technology Data Exchange (ETDEWEB)

    Aoyama, Shigeyoshi [Information Processing Center, Kitami Inst. of Tech., Hokkaido (Japan); Ohbayasi, Yosihide; Masui, Hiroshi [Meme Media Lab., Hokkaido Univ., Sapporo (Japan); Chiba, Masaki [Graduate School of Science, Hokkaido Univ., Sapporo (Japan); Kato, Kiyoshi; Ohnishi, Akira [Faculty of Social Information, Sapporo Gakuin Univ., Ebetsu, Hokkaido (Japan)

    2000-03-01

    A utility system, WinNRDF, has been developed for the charged-particle nuclear reaction data of NRDF (Nuclear Reaction Data File) on a Windows interface. With this system, the experimental data of a charged-particle nuclear reaction can be searched more easily than with the old retrieval systems on the mainframe, and the experimental data can be viewed graphically through a GUI (Graphical User Interface). A mechanism for building a new keyword index was adopted to make practical use of the time-dependent properties of the NRDF database. (author)

  8. DOC-a file system cache to support mobile computers

    Science.gov (United States)

    Huizinga, D. M.; Heflinger, K.

    1995-09-01

    This paper identifies design requirements of system-level support for mobile computing in small form-factor battery-powered portable computers and describes their implementation in DOC (Disconnected Operation Cache). DOC is a three-level client caching system designed and implemented to allow mobile clients to transition between connected, partially disconnected and fully disconnected modes of operation with minimal user involvement. Implemented for notebook computers, DOC addresses not only typical issues of mobile elements such as resource scarcity and fluctuations in service quality but also deals with the pitfalls of MS-DOS, the operating system which prevails in the commercial notebook market. Our experiments performed in the software engineering environment of AST Research indicate not only considerable performance gains for connected and partially disconnected modes of DOC, but also the successful operation of the disconnected mode.

  9. Gravity change from 2014 to 2015, Sierra Vista Subwatershed, Upper San Pedro Basin, Arizona

    Science.gov (United States)

    Kennedy, Jeffrey R.

    2016-01-01

    Relative-gravity data and absolute-gravity data were collected in the Sierra Vista Subwatershed, Upper San Pedro Basin, Arizona, in May–June 2014 and 2015. Data from 2014 and a description of the survey network were published in USGS Open-File Report 2015–1086. Data presented in the shapefile here are the following: (1) network-adjusted values from 2015, (2) gravity change from 2014 to 2015, and (3) survey-grade coordinates obtained from a Global Positioning System (GPS) survey in 2015. 2015 data and network adjustment results are presented in Kennedy, J.R., 2016, Gravity change from 2014 to 2015, Sierra Vista Subwatershed, Upper San Pedro Basin, Arizona: U.S. Geological Survey Open–File Report 2016–1155, 15 p., http://dx.doi.org/10.3133/ofr20161155. 2014 data and network adjustment results are presented in Kennedy, J.R., 2015, Gravity data from the Sierra Vista Subwatershed, Upper San Pedro Basin, Arizona: U.S. Geological Survey Open–File Report 2015–1086, 26 p., http://dx.doi.org/10.3133/ofr20151086

  10. 基于安全操作系统的透明加密文件系统的设计%Design and Implementation of a Transparent Cryptographic File System Based on Secure Operating System

    Institute of Scientific and Technical Information of China (English)

    魏丕会; 卿斯汉; 刘海峰

    2003-01-01

    Almost all important information is saved on physical media as files and managed by a file system, so file system security is an important guarantee of information security. We present a transparent cryptographic file system based on a secure operating system (SecTCFS). Users are not aware of the encryption process, and authentication ensures that only valid users can access the files in the system.
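The transparent-encryption idea, encrypt on write and decrypt on read so that callers always see plaintext while the stored bytes are ciphertext, can be sketched in user space as below. A keystream XOR stands in for a real cipher, and a kernel-level TCFS would do this below the VFS layer, invisibly to applications; the wrapper class is purely illustrative.

```python
import hashlib
import os
import tempfile

def _keystream(key, n):
    """Derive n pseudo-random bytes from the key (illustrative only;
    not a vetted cipher construction)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

class TransparentFile:
    """Callers read/write plaintext; the file on disk holds ciphertext."""
    def __init__(self, path, key):
        self.path, self.key = path, key

    def write(self, plaintext):
        ks = _keystream(self.key, len(plaintext))
        with open(self.path, "wb") as f:
            f.write(bytes(a ^ b for a, b in zip(plaintext, ks)))

    def read(self):
        with open(self.path, "rb") as f:
            data = f.read()
        ks = _keystream(self.key, len(data))
        return bytes(a ^ b for a, b in zip(data, ks))

path = os.path.join(tempfile.mkdtemp(), "secret.dat")
tf = TransparentFile(path, b"key")
tf.write(b"patient record")
stored = open(path, "rb").read()   # ciphertext on disk
seen = tf.read()                   # plaintext to the user
```

The point of the sketch is the interface: the caller's code is identical to plain file I/O, which is what "transparent" means in TCFS-style designs.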

  11. The San Andreas Transform System and the Tectonics of California: An Alternative Approach

    Science.gov (United States)

    Platt, J. P.; Kaus, B.; Becker, T. W.

    2006-12-01

    Pacific - North America displacement in California is distributed over a zone of intracontinental deformation 400 km wide, and incorporates large regions of transtensional and transpressional deformation. This pattern of deformation is not easily explicable in terms of brittle Coulomb failure, which should localize deformation on to a single fault. There is no consensus at present on what controls the width of this zone or the distribution of strain within it. We model the transform as a weak ductile shear zone, terminating at either end in an effectively stress-free boundary. The shear zone exerts a shear-stress boundary condition on the stronger but deformable continental lithosphere either side. Stress and strain-rate decrease away from the shear zone because of its limited length in relation to the scale of the plates. Force balance in a sheet of deformable material with free upper and lower surfaces requires lateral gradients in horizontal shear-strain rate to be balanced by longitudinal gradients in horizontal stretching rate. Analytical estimates and 3D numerical modeling demonstrate that these gradients will create zones of lithospheric thickening and thinning distributed anti-symmetrically about the shear zone. Lithospheric thickening in the Transverse Ranges and the Klamath Mountains, and thinning in the Eastern California shear zone and the San Francisco Bay area, correspond reasonably well to these predictions. This provides a test for the length- scales concept, and a powerful predictive tool for understanding the tectonics of California and other intracontinental transforms.

  12. Modeling the Gila-San Francisco Basin using system dynamics in support of the 2004 Arizona Water Settlement Act.

    Energy Technology Data Exchange (ETDEWEB)

    Tidwell, Vincent Carroll; Sun, Amy Cha-Tien; Peplinski, William J.; Klise, Geoffrey Taylor

    2012-04-01

    Water resource management requires collaborative solutions that cross institutional and political boundaries. This work describes the development and use of a computer-based tool for assessing the impact of the additional water allocation from the Gila River and the San Francisco River prescribed in the 2004 Arizona Water Settlements Act. Between 2005 and 2010, Sandia National Laboratories engaged concerned citizens, local water stakeholders, and key federal and state agencies to collaboratively create the Gila-San Francisco Decision Support Tool. Based on principles of system dynamics, the tool is founded on a hydrologic balance of surface water and groundwater and the associated coupling between water resources and demands. The tool is fitted with a user interface to facilitate sensitivity studies of various water supply and demand scenarios. The model also projects the consumptive use of water in the region, as well as the potential CUFA diversion (under the Consumptive Use and Forbearance Agreement, which stipulates when and where Arizona Water Settlements Act diversions can be made) over a 26-year horizon. Scenarios are selected to enhance understanding of potential human impacts on the rivers' ecological health in New Mexico; in particular, case studies on water conservation, water rights, and minimum flow are tested using the model. The impacts on potential CUFA diversions, agricultural consumptive use, and surface water availability are assessed relative to the changes imposed in the scenarios. While it has been difficult to gauge the level of acceptance among stakeholders, the technical information the model provides is valuable for facilitating dialogue in the context of the new settlement.

  13. Architecture of scalability file system for meteorological observation data storing

    Science.gov (United States)

    Botygin, I. A.; Popov, V. N.; Tartakovsky, V. A.; Sherstnev, V. S.

    2015-11-01

    The approach makes it possible to organize distributed storage of large amounts of diverse data for subsequent parallel processing in high-performance cluster systems, for problems of analyzing and forecasting climatic processes. For the different classes of data, the established practice of meta descriptions was used: a formalism associated with certain categories of resources. The metadata component was developed based on an analysis of data from surface meteorological observations, vertical atmospheric sounding, atmospheric wind sounding, weather radar observations, satellite observations, and others. A common set of metadata components was formed for their general description. The structure and content of the main components of the generalized meta descriptions are presented in detail using the example of meteorological observation reports from land and sea stations.
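A generalized meta description common to heterogeneous observation classes might look like the sketch below. The field names are our own illustration of the idea, not the authors' actual schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class ObservationMeta:
    """Hypothetical common metadata record covering several observation
    classes (surface, radiosonde, radar, satellite, ...)."""
    station_id: str
    observation_type: str   # e.g. "surface", "radiosonde", "radar"
    timestamp: str          # ISO 8601
    latitude: float
    longitude: float
    storage_node: str       # cluster node holding the raw record

surface = ObservationMeta(
    station_id="27612",
    observation_type="surface",
    timestamp="2015-06-01T00:00:00Z",
    latitude=55.75,
    longitude=37.62,
    storage_node="node-03",
)

# A uniform dict view lets one index or catalog serve every class:
record = asdict(surface)
```

Because every class shares these fields, a single catalog can route a query for any observation type to the node that stores it, which is the prerequisite for the parallel processing the abstract mentions.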

  14. ANALYSIS OF TIME DISTANCES OF ENCRYPTION/DECRYPTION OF MEDICAL INFORMATION SYSTEMS DATABASES FILES

    Directory of Open Access Journals (Sweden)

    Ye. B. Lopin

    2014-02-01

    Full Text Available In this article, the results of studies of the encryption/decryption time of medical information system database files are presented using a specific example. The studies were performed using three fundamentally different algorithms developed by the author, each of which includes the Blowfish encryption algorithm as a component. The studies used a computer program, "Generators" (author's title), specially developed in the Delphi 7 programming environment, and two computers of older configuration, one built around an Intel Core 2 Duo E8400 processor and one around a DualCore Intel Pentium E2180 processor. The studies established that encrypting/decrypting files with the first algorithm, which repeatedly accesses the hard drive to read/write 8-byte blocks of information, takes much longer (about 10 times) than with the second and third algorithms, which access the hard drive to read/write each file only once.
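The I/O pattern the study measures, one OS call per 8-byte cipher block versus a single bulk read, can be reproduced in a small sketch. Actual timings vary by machine and file system cache, so the code only reports them and checks that both paths see identical bytes; the roughly 10x factor in the article is not asserted.

```python
import os
import tempfile
import time

# Create a test file standing in for a database file.
path = os.path.join(tempfile.mkdtemp(), "db.bin")
with open(path, "wb") as f:
    f.write(os.urandom(80_000))

# Path 1: read in 8-byte chunks, one call per Blowfish-sized block.
t0 = time.perf_counter()
chunks = []
with open(path, "rb") as f:
    while (blk := f.read(8)):
        chunks.append(blk)
per_block = b"".join(chunks)
t1 = time.perf_counter()

# Path 2: one bulk read of the whole file.
with open(path, "rb") as f:
    bulk = f.read()
t2 = time.perf_counter()

print(f"8-byte reads: {t1 - t0:.4f}s, bulk read: {t2 - t1:.4f}s")
```

The per-call overhead (syscall, buffer management) is paid 10,000 times in the first path and once in the second, which is the mechanism behind the article's measured gap.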

  15. The stress shadow effect: a mechanical analysis of the evenly-spaced parallel strike-slip faults in the San Andreas fault system

    Science.gov (United States)

    Zuza, A. V.; Yin, A.; Lin, J. C.

    2015-12-01

    Parallel evenly-spaced strike-slip faults are prominent in the southern San Andreas fault system, as well as other settings along plate boundaries (e.g., the Alpine fault) and within continental interiors (e.g., the North Anatolian, central Asian, and northern Tibetan faults). In southern California, the parallel San Jacinto, Elsinore, Rose Canyon, and San Clemente faults to the west of the San Andreas are regularly spaced at ~40 km. In the Eastern California Shear Zone, east of the San Andreas, faults are spaced at ~15 km. These characteristic spacings provide unique mechanical constraints on how the faults interact. Despite the common occurrence of parallel strike-slip faults, the fundamental questions of how and why these fault systems form remain unanswered. We address this issue by using the stress shadow concept of Lachenbruch (1961), developed to explain extensional joints by using the stress-free condition on the crack surface, to present a mechanical analysis of the formation of parallel strike-slip faults that relates fault spacing and brittle-crust thickness to fault strength, crustal strength, and the crustal stress state. We discuss three independent models: (1) a fracture mechanics model, (2) an empirical stress-rise function model embedded in a plastic medium, and (3) an elastic-plate model. The assumptions and predictions of these models are quantitatively tested using scaled analogue sandbox experiments that show that strike-slip fault spacing is linearly related to the brittle-crust thickness. We derive constraints on the mechanical properties of the southern San Andreas strike-slip faults and fault-bounded crust (e.g., local fault strength and crustal/regional stress) given the observed fault spacing and brittle-crust thickness, which is obtained by defining the base of the seismogenic zone with high-resolution earthquake data. Our models allow direct comparison of the parallel faults in the southern San Andreas system with other similar strike

  16. Deep-water turbidites as Holocene earthquake proxies: the Cascadia subduction zone and Northern San Andreas Fault systems

    Directory of Open Access Journals (Sweden)

    J. E. Johnson

    2003-06-01

    Full Text Available New stratigraphic evidence from the Cascadia margin demonstrates that 13 earthquakes ruptured the margin from Vancouver Island to at least the California border following the catastrophic eruption of Mount Mazama. These 13 events have occurred with an average repeat time of ~600 years since the first post-Mazama event ~7500 years ago. The youngest event, ~300 years ago, probably coincides with widespread evidence of coastal subsidence and tsunami inundation in buried marshes along the Cascadia coast. We can extend the Holocene record to at least 9850 years, during which 18 events correlate along the same region. The pattern of repeat times is consistent with the pattern observed at most (but not all) localities onshore, strengthening the contention that both were produced by plate-wide earthquakes. We also observe that the sequence of Holocene events in Cascadia may contain a repeating pattern, a tantalizing look at what may be the long-term behavior of a major fault system. Over the last ~7500 years, the pattern appears to have repeated at least three times, with the most recent A.D. 1700 event being the third of three events following a long interval of 845 years between events T4 and T5. This long interval is one that is also recognized in many of the coastal records, and may serve as an anchor point between the offshore and onshore records. Similar stratigraphic records are found in two piston cores and one box core from Noyo Channel, adjacent to the Northern San Andreas Fault, which show a cyclic record of turbidite beds, with thirty-one turbidite beds above a Holocene/Pleistocene faunal «datum». Thus far, we have determined ages for 20 events, including the uppermost 5 events from these cores. The uppermost event returns a «modern» age, which we interpret as likely recording the 1906 San Andreas earthquake. The penultimate event returns an intercept age of A.D. 1664 (2σ range 1505-1822). The third event and fourth event

  17. Catalog of earthquakes along the San Andreas fault system in Central California, April-June 1972

    Science.gov (United States)

    Wesson, R.L.; Bennett, R.E.; Lester, F.W.

    1973-01-01

    Numerous small earthquakes occur each day in the coast ranges of Central California. The detailed study of these earthquakes provides a tool for gaining insight into the tectonic and physical processes responsible for the generation of damaging earthquakes. This catalog contains the fundamental parameters for earthquakes located within and adjacent to the seismograph network operated by the National Center for Earthquake Research (NCER), U.S. Geological Survey, during the period April - June, 1972. The motivation for these detailed studies has been described by Pakiser and others (1969) and by Eaton and others (1970). Similar catalogs of earthquakes for the years 1969, 1970 and 1971 have been prepared by Lee and others (1972 b, c, d). A catalog for the first quarter of 1972 has been prepared by Wesson and others (1972). The basic data contained in these catalogs provide a foundation for further studies. This catalog contains data on 910 earthquakes in Central California. A substantial portion of the earthquakes reported in this catalog represents a continuation of the sequence of earthquakes in the Bear Valley area which began in February, 1972 (Wesson and others, 1972). Arrival times at 126 seismograph stations were used to locate the earthquakes listed in this catalog. Of these, 101 are telemetered stations operated by NCER. Readings from the remaining 25 stations were obtained through the courtesy of the Seismographic Stations, University of California, Berkeley (UCB); the Earthquake Mechanism Laboratory, National Oceanic and Atmospheric Administration, San Francisco (EML); and the California Department of Water Resources, Sacramento. The Seismographic Stations of the University of California, Berkeley, have for many years published a bulletin describing earthquakes in Northern California and the surrounding area, and readings at UCB Stations from more distant events. 
The purpose of the present catalog is not to replace the UCB Bulletin, but rather to supplement

  18. Catalog of earthquakes along the San Andreas fault system in Central California: January-March, 1972

    Science.gov (United States)

    Wesson, R.L.; Bennett, R.E.; Meagher, K.L.

    1973-01-01

    Numerous small earthquakes occur each day in the Coast Ranges of Central California. The detailed study of these earthquakes provides a tool for gaining insight into the tectonic and physical processes responsible for the generation of damaging earthquakes. This catalog contains the fundamental parameters for earthquakes located within and adjacent to the seismograph network operated by the National Center for Earthquake Research (NCER), U.S. Geological Survey, during the period January - March, 1972. The motivation for these detailed studies has been described by Pakiser and others (1969) and by Eaton and others (1970). Similar catalogs of earthquakes for the years 1969, 1970 and 1971 have been prepared by Lee and others (1972 b,c,d). The basic data contained in these catalogs provide a foundation for further studies. This catalog contains data on 1,718 earthquakes in Central California. Of particular interest is a sequence of earthquakes in the Bear Valley area which contained single shocks with local magnitudes of 5.0 and 4.6. Earthquakes from this sequence make up roughly 66% of the total and are currently the subject of an interpretative study. Arrival times at 118 seismograph stations were used to locate the earthquakes listed in this catalog. Of these, 94 are telemetered stations operated by NCER. Readings from the remaining 24 stations were obtained through the courtesy of the Seismographic Stations, University of California, Berkeley (UCB); the Earthquake Mechanism Laboratory, National Oceanic and Atmospheric Administration, San Francisco (EML); and the California Department of Water Resources, Sacramento. The Seismographic Stations of the University of California, Berkeley, have for many years published a bulletin describing earthquakes in Northern California and the surrounding area, and readings at UCB Stations from more distant events. 
The purpose of the present catalog is not to replace the UCB Bulletin, but rather to supplement it, by describing the

  19. Catalog of earthquakes along the San Andreas fault system in Central California, July-September 1972

    Science.gov (United States)

    Wesson, R.L.; Meagher, K.L.; Lester, F.W.

    1973-01-01

    Numerous small earthquakes occur each day in the Coast Ranges of Central California. The detailed study of these earthquakes provides a tool for gaining insight into the tectonic and physical processes responsible for the generation of damaging earthquakes. This catalog contains the fundamental parameters for earthquakes located within and adjacent to the seismograph network operated by the National Center for Earthquake Research (NCER), U.S. Geological Survey, during the period July - September, 1972. The motivation for these detailed studies has been described by Pakiser and others (1969) and by Eaton and others (1970). Similar catalogs of earthquakes for the years 1969, 1970 and 1971 have been prepared by Lee and others (1972 b, c, d). Catalogs for the first and second quarters of 1972 have been prepared by Wesson and others (1972 a & b). The basic data contained in these catalogs provide a foundation for further studies. This catalog contains data on 1254 earthquakes in Central California. Arrival times at 129 seismograph stations were used to locate the earthquakes listed in this catalog. Of these, 104 are telemetered stations operated by NCER. Readings from the remaining 25 stations were obtained through the courtesy of the Seismographic Stations, University of California, Berkeley (UCB); the Earthquake Mechanism Laboratory, National Oceanic and Atmospheric Administration, San Francisco (EML); and the California Department of Water Resources, Sacramento. The Seismographic Stations of the University of California, Berkeley, have for many years published a bulletin describing earthquakes in Northern California and the surrounding area, and readings at UCB Stations from more distant events. The purpose of the present catalog is not to replace the UCB Bulletin, but rather to supplement it, by describing the seismicity of a portion of central California in much greater detail.

  20. Large-scale right-slip displacement on the East San Francisco Bay Region fault system, California: Implications for location of late Miocene to Pliocene Pacific plate boundary

    Science.gov (United States)

    McLaughlin, R.J.; Sliter, W.V.; Sorg, D.H.; Russell, P.C.; Sarna-Wojcicki, A. M.

    1996-01-01

    A belt of northwardly younging Neogene and Quaternary volcanic rocks and hydrothermal vein systems, together with a distinctive Cretaceous terrane of the Franciscan Complex (the Permanente terrane), exhibits about 160 to 170 km of cumulative dextral offset across faults of the East San Francisco Bay Region (ESFBR) fault system. The offset hydrothermal veins and volcanic rocks range in age from 0.01 Ma at the northwest end to about 17.6 Ma at the southeast end. In the fault block between the San Andreas and ESFBR fault systems, where volcanic rocks are scarce, hydrothermal vein system ages clearly indicate that the northward younging thermal overprint affected these rocks beginning about 18 Ma. The age progression of these volcanic rocks and hydrothermal vein systems is consistent with previously proposed models that relate northward propagation of the San Andreas transform to the opening of an asthenospheric window beneath the North American plate margin in the wake of subducting lithosphere. The similarity in the amount of offset of the Permanente terrane across the ESFBR fault system to that derived by restoring continuity in the northward younging age progression of volcanic rocks and hydrothermal veins suggests a model in which 80-110 km of offset are taken up 8 to 6 Ma on a fault aligned with the Bloomfield-Tolay-Franklin-Concord-Sunol-Calaveras faults. An additional 50-70 km of cumulative slip are taken up ≤ 6 Ma by the Rogers Creek-Hayward and Concord-Franklin-Sunol-Calaveras faults. An alternative model in which the Permanente terrane is offset about 80 km by pre-Miocene faults does not adequately restore the distribution of 8-12 Ma volcanic rocks and hydrothermal veins to a single northwardly younging age trend. If 80-110 km of slip was taken up by the ESFBR fault system between 8 and 6 Ma, dextral slip rates were 40-55 mm/yr. 
Such high rates might occur if the ESFBR fault system rather than the San Andreas fault acted as the transform margin at this time

  1. Computer files.

    Science.gov (United States)

    Malik, M

    1995-02-01

    From what has been said, several recommendations can be made for users of small personal computers regardless of which operating system they use. If your computer has a large hard disk not specially required by any single application, organize the disk into a small number of volumes. You will then be using the computer as if it had several smaller disks, which will help you to create a logical file structure. The size of individual volumes has to be selected carefully with respect to the files kept in each volume. Otherwise, it may be that you will have too much space in one volume and not enough in another. In each volume, organize the structure of directories and subdirectories logically so that they correspond to the logic of your file content. Be aware of the fact that the directories suggested as default when installing new software are often not the optimum. For instance, it is better to put different graphics packages under a common subdirectory rather than to install them at the same level as all other packages including statistics, text processors, etc. Create a special directory for each task for which you use the computer. Note that it is a bad practice to keep many different and logically unsorted files in the root directory of any of your volumes. Only system and important service files should be kept there. Although any file may be written all over the disk, access to it will be faster if it is written over the minimum number of cylinders. From time to time, use special programs that reorganize your files in this way.(ABSTRACT TRUNCATED AT 250 WORDS)
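    The directory recommendations above can be sketched as a small script that lays out such a logical tree: related packages grouped under common parents, one working directory per task, and a nearly empty root. All directory names here are illustrative examples, not part of the original article.

    ```python
    from pathlib import Path

    # Illustrative layout (names are examples, not prescriptions): related
    # packages live under a common parent instead of crowding the top level,
    # each task gets its own working directory, and the volume root stays
    # nearly empty, holding only system and service files.
    LAYOUT = [
        "apps/graphics/plotter",
        "apps/graphics/imageview",
        "apps/statistics",
        "apps/textproc",
        "work/report-1995",
        "work/data-entry",
    ]

    def build_tree(root: Path) -> None:
        """Create the directory skeleton under `root`."""
        for rel in LAYOUT:
            (root / rel).mkdir(parents=True, exist_ok=True)
    ```

    Calling `build_tree(Path("/mnt/vol1"))` would lay out one volume; the same idea applies per volume.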

  2. Improvement of Incentive Mechanism on BitTorrent-like Peer-to-Peer File Sharing Systems

    Institute of Scientific and Technical Information of China (English)

    YU Jia-di; LI Ming-lu; HONG Feng; XUE Guang-tao

    2007-01-01

    BitTorrent is a very popular Peer-to-Peer file sharing system, which adopts a set of incentive mechanisms to encourage contribution and prevent free-riding. However, we find that BitTorrent's incentive mechanism can prevent free-riding effectively in a system with a relatively low number of seeds, but may fail to produce a disincentive for free-riding in a system with a high number of seeds. The reason is that BitTorrent does not provide effective mechanisms for seeds to guard against free-riding. Therefore, we propose a seed bandwidth allocation strategy for the BitTorrent system to reduce the effect of seeds on free-riding. Our target is that a downloader which provides more service to the system will be granted a higher benefit than downloaders which provide less service when downloaders ask to download a file from a seed. Finally, simulation results are given, which validate the effectiveness of the proposed strategy.
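    The abstract describes the strategy only at a high level. A minimal sketch of one plausible reading, assuming a seed divides its upload bandwidth among requesting downloaders in proportion to the service each has provided (the proportional rule and all names are assumptions, not the paper's actual algorithm):

    ```python
    def allocate_seed_bandwidth(total_bw: float,
                                contributions: dict[str, float],
                                floor: float = 0.0) -> dict[str, float]:
        """Split a seed's upload bandwidth (assumed proportional rule, for
        illustration only) among requesting peers according to each peer's
        past contribution, e.g. bytes uploaded to others. A pure free-rider
        (zero contribution) receives only the optional `floor` share, so
        free-riding is no longer costless. Assumes floor * n <= total_bw."""
        n = len(contributions)
        total_contrib = sum(contributions.values())
        if total_contrib == 0:
            return {p: total_bw / n for p in contributions}  # degenerate case
        pool = total_bw - floor * n
        return {p: floor + pool * c / total_contrib
                for p, c in contributions.items()}
    ```

    With this rule a downloader that has uploaded three times as much as another receives three times the seed bandwidth, which is the disincentive for free-riding the abstract argues plain BitTorrent seeds lack.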

  3. Coalbed gas systems, resources, and production and a review of contrasting cases from the San Juan and Powder River basins

    Energy Technology Data Exchange (ETDEWEB)

    Ayers, W.B. [Texas A&M University, College Station, TX (United States)]

    2002-07-01

    Coalbed gas is stored primarily within micropores of the coal matrix in an adsorbed state and secondarily in micropores and fractures as free gas or solution gas in water. The key parameters that control gas resources and producibility are thermal maturity, maceral composition, gas content, coal thickness, fracture density, in-situ stress, permeability, burial history, and hydrologic setting. These parameters vary greatly in the producing fields of the United States and the world. In 2000, the San Juan basin accounted for more than 80% of the United States coalbed gas production. This basin contains a giant coalbed gas play, the Fruitland fairway, which has produced more than 7 tcf (0.2 Tm³) of gas. The Fruitland coalbed gas system and its key elements contrast with the Fort Union coalbed gas play in the Powder River basin. The Fort Union coalbed play is one of the fastest developing gas plays in the United States. Its production escalated from 14 bcf (0.4 Gm³) in 1997 to 147.3 bcf (4.1 Gm³) in 2000, when it accounted for 10.7% of the United States coalbed gas production. By 2001, annual production was 244.7 bcf (6.9 Gm³). Differences between the Fruitland and Fort Union petroleum systems make them ideal for elucidating the key elements of contrasting coalbed gas petroleum systems.

  4. Standard interface file handbook

    Energy Technology Data Exchange (ETDEWEB)

    Shapiro, A.; Huria, H.C. (Cincinnati Univ., OH (United States))

    1992-10-01

    This handbook documents many of the standard interface file formats that have been adopted by the US Department of Energy to facilitate communications between and portability of, various large reactor physics and radiation transport software packages. The emphasis is on those files needed for use of the VENTURE/PC diffusion-depletion code system. File structures, contents and some practical advice on use of the various files are provided.

  5. Characterizing the recent behavior and earthquake potential of the blind western San Cayetano and Ventura fault systems

    Science.gov (United States)

    McAuliffe, L. J.; Dolan, J. F.; Hubbard, J.; Shaw, J. H.

    2011-12-01

    The recent occurrence of several destructive thrust fault earthquakes highlights the risks posed by such events to major urban centers around the world. In order to determine the earthquake potential of such faults in the western Transverse Ranges of southern California, we are studying the activity and paleoearthquake history of the blind Ventura and western San Cayetano faults through a multidisciplinary analysis of strata that have been folded above the fault tiplines. These two thrust faults form the middle section of a >200-km-long, east-west belt of large, interconnected reverse faults that extends across southern California. Although each of these faults represents a major seismic source in its own right, we are exploring the possibility of even larger-magnitude, multi-segment ruptures that may link these faults to other major faults to the east and west in the Transverse Ranges system. The proximity of this large reverse-fault system to several major population centers, including the metropolitan Los Angeles region, and the potential for tsunami generation during offshore ruptures of the western parts of the system, emphasizes the importance of understanding the behavior of these faults for seismic hazard assessment. During the summer of 2010 we used a mini-vibrator source to acquire four, one- to three-km-long, high-resolution seismic reflection profiles. The profiles were collected along the locus of active folding above the blind, western San Cayetano and Ventura faults - specifically, across prominent fold scarps that have developed in response to recent slip on the underlying thrust ramps. These high-resolution data overlap with the uppermost parts of petroleum-industry seismic reflection data, and provide a near-continuous image of recent folding from several km depth to within 50-100 m of the surface. Our initial efforts to document the earthquake history and slip-rate of this large, multi-fault reverse fault system focus on a site above the blind

  6. Geologic Model of a Non-Volcanic Hydrothermal System: San Bartolome de Los Banos, Guanajuato, Mexico; Modelo geologico de un sistema hidrotermal no volcanico: San Bartolome de Los Banos, Guanajuato, Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Lopez Hernandez, Aida [Gerencia de Proyectos Geotermoelectricos de la Comision Federal de Electricidad, Morelia (Mexico)

    1996-01-01

    The San Bartolome de Los Banos area is associated with a stepped system of hydraulically interconnected basins, limited by regional Pliocene faults. The depressions are filled by sedimentary and volcanic products. The thermal manifestations, with temperatures over 90 degrees Celsius, are associated with the main faults. The thermal anomaly is not related to recent volcanic activity; it is probably due to deep circulating water driven by the regional hydraulic gradient. The thermal springs are discharges from the hydraulic system, produced when the fluids are forced to flow upward by hydraulic constrictions that set up forced convection. [Espanol] The San Bartolome de Los Banos hydrothermal zone is formed by a system of stepped, hydrologically interconnected basins bounded by regional faults that originated during the Pliocene. The structures affected a sequence of volcanic rocks whose ages range from the Lower Tertiary to the Pliocene. The depressions are filled with sediments and volcanic products. Thermal manifestations are associated with the zones of weakness generated by the main faults; surface temperatures exceed 90 degrees Celsius. The thermalism in this zone is not associated with recent volcanic activity; it apparently results from deep circulation of fluids driven by the regional hydraulic gradient. The thermal manifestations correspond to the discharge zones of the system and originate because the fluids are forced to ascend on encountering constrictions, producing forced convection.

  7. Retracted: Evaluation of the incidence of microcracks caused by Mtwo and ProTaper NEXT rotary file systems versus the Self Adjusting File: A scanning electron microscopic study.

    Science.gov (United States)

    Saha, S G; Vijaywargiya, N; Dubey, S; Saxena, D; Kala, S

    2015-11-24

    The following article from International Endodontic Journal, 'Evaluation of the incidence of microcracks caused by Mtwo and ProTaper NEXT rotary file systems versus the Self Adjusting File: a scanning electron microscopic study' by S. G. Saha, N. Vijaywargiya, S. Dubey, D. Saxena & S. Kala, published online on 24 November 2015 in Wiley Online Library (wileyonlinelibrary.com), has been retracted by agreement between the authors, the journal Editor in Chief, Prof. Paul Dummer, and John Wiley & Sons Ltd. The retraction has been agreed due to the consideration that the SEM methodology used by the authors has the potential to cause cracks and is thus not suitable for the evaluation of microcracks in roots.

  8. A Measurement Study of the Structured Overlay Network in P2P File-Sharing Systems

    Directory of Open Access Journals (Sweden)

    Mo Zhou

    2007-01-01

    The architecture of P2P file-sharing applications has been developing to meet the needs of large-scale demand. The structured overlay network, also known as a DHT, has been used in these applications to improve the scalability and robustness of the system and to free it from single points of failure. We believe that a measurement study of the overlay network used in real file-sharing P2P systems can provide guidance for the design of such systems and improve their performance. In this paper, we perform the measurement in two different aspects. First, a modified client is designed to provide a view of the overlay network from a single user's perspective. Second, instances of a crawler program deployed on many nodes crawl as much of the user information in the overlay network as possible. We also find a vulnerability in the overlay network which, combined with the character of the DNS service, allows a more serious DDoS attack to be launched.
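    The crawling approach can be sketched as a breadth-first walk of the overlay: starting from bootstrap nodes, repeatedly ask every known node for the entries in its routing table until no new nodes appear. Here the `neighbours` callback stands in for a real DHT lookup RPC (e.g. a Kademlia-style FIND_NODE); the whole sketch is illustrative, not the authors' crawler.

    ```python
    from collections import deque
    from typing import Callable, Iterable

    def crawl(bootstrap: Iterable[str],
              neighbours: Callable[[str], list[str]]) -> set[str]:
        """Breadth-first crawl of an overlay network: query every
        discovered node for its routing-table entries until the frontier
        is exhausted, and return the set of all nodes seen."""
        seen: set[str] = set(bootstrap)
        frontier = deque(seen)
        while frontier:
            node = frontier.popleft()
            for peer in neighbours(node):  # stands in for a FIND_NODE RPC
                if peer not in seen:
                    seen.add(peer)
                    frontier.append(peer)
        return seen
    ```

    In a real measurement, many such crawler instances run in parallel from different vantage points, since a single crawler sees only the part of the overlay reachable before nodes churn away.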

  9. Using the Reputation Score Management for Constructing Fair P2P File Sharing System

    Directory of Open Access Journals (Sweden)

    Jun Han

    2013-06-01

    This paper uses reputation score management to construct a fair P2P file sharing system. The design principle is simple and easy to realize: every node entering the P2P network obtains a certain reputation score and receives resource rewards corresponding to that score. The paper describes fair sharing strategies for node network bandwidth and TTL, and these strategies can be used independently or be combined with other reputation score management schemes for P2P networks. The two strategies are discussed within a specific P2P reputation score management system, EigenTrust, and the test results indicate that, compared with a common P2P network, the fair sharing strategies of this paper achieve faster file download speeds and reduce message traffic during resource lookup. The approach can also be combined with other reputation management systems; it is simple and easy to realize, its main purposes are to share network bandwidth fairly and to decrease communication volume, and it can suppress free-riding behavior to some extent.
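    The abstract does not give the mapping from score to reward, so the following is a hedged sketch of the general idea only: a node's reputation score scales both the bandwidth it is granted and the TTL of its search queries (so higher-scoring nodes' queries travel further). The linear scaling, the function name, and all parameters are illustrative assumptions, not the paper's formulas.

    ```python
    def reward_for_score(score: float, base_bw: float, base_ttl: int,
                         max_score: float = 100.0) -> tuple[float, int]:
        """Map a peer's reputation score onto its resource rewards.
        Linear scaling is an illustrative choice: bandwidth ranges from
        50% to 100% of `base_bw`, and search TTL scales with the score
        but never drops below one hop."""
        frac = max(0.0, min(score, max_score)) / max_score
        bw = base_bw * (0.5 + 0.5 * frac)     # 50%..100% of base bandwidth
        ttl = max(1, round(base_ttl * frac))  # at least one hop
        return bw, ttl
    ```

    Capping the low end (a floor of half bandwidth and one hop) keeps newcomers usable while still making sustained contribution the only way to earn full service.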

  10. 14 CFR 406.113 - Filing documents with the Docket Management System (DMS) and sending documents to the...

    Science.gov (United States)

    2010-01-01

    ... Management System (DMS) and sending documents to the administrative law judge and Assistant Chief Counsel for Litigation. (a) The Federal Docket Management System (FDMS). (1) Documents filed in a civil penalty adjudication are kept in the Federal Docket Management System (FDMS), except for documents that contain...

  11. 75 FR 25885 - The Merit Systems Protection Board (MSPB) is Providing Notice of the Opportunity to File Amicus...

    Science.gov (United States)

    2010-05-10

    ... From the Federal Register Online via the Government Publishing Office ] MERIT SYSTEMS PROTECTION BOARD The Merit Systems Protection Board (MSPB) is Providing Notice of the Opportunity to File Amicus...-0953- I-1. AGENCY: Merit Systems Protection Board. ACTION: Notice. SUMMARY: Mr. Evans is a...

  12. 75 FR 6728 - The Merit Systems Protection Board (MSPB) is Providing Notice of the Opportunity to File Amicus...

    Science.gov (United States)

    2010-02-10

    ... From the Federal Register Online via the Government Publishing Office MERIT SYSTEMS PROTECTION BOARD The Merit Systems Protection Board (MSPB) is Providing Notice of the Opportunity to File Amicus... v. Department of Defense, Docket No. AT-0752-10-0184-I-1 AGENCY: Merit Systems Protection...

  13. 75 FR 20007 - The Merit Systems Protection Board (MSPB) Is Providing Notice of the Opportunity To File Amicus...

    Science.gov (United States)

    2010-04-16

    ... From the Federal Register Online via the Government Publishing Office MERIT SYSTEMS PROTECTION BOARD The Merit Systems Protection Board (MSPB) Is Providing Notice of the Opportunity To File Amicus...- 09-0261-R-1 AGENCY: Merit Systems Protection Board. ACTION: Notice. SUMMARY: Aguzie and several...

  14. Parallel File System I/O Performance Testing On LANL Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Wiens, Isaac Christian [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). High Performance Computing Division. Programming and Runtime Environments; Green, Jennifer Kathleen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). High Performance Computing Division. Programming and Runtime Environments

    2016-08-18

    These are slides from a presentation on parallel file system I/O performance testing on LANL clusters. I/O is a known bottleneck for HPC applications. Performance optimization of I/O is often required. This summer project entailed integrating IOR under Pavilion and automating the results analysis. The slides cover the following topics: scope of the work, tools utilized, IOR-Pavilion test workflow, build script, IOR parameters, how parameters are passed to IOR, *run_ior: functionality, Python IOR-Output Parser, Splunk data format, Splunk dashboard and features, and future work.
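    The slides mention a Python parser that converts IOR output into Splunk's data format. A minimal sketch of that step, turning IOR's per-operation summary lines into key=value events (the sample line format follows common IOR releases but varies by version, so treat the regex and field names as assumptions):

    ```python
    import re

    # Illustrative parser: IOR prints summary lines such as
    #   "Max Write: 1297.33 MiB/sec (1360.41 MB/sec)"
    # The exact layout differs between IOR versions; this is a sketch.
    SUMMARY = re.compile(r"Max (?P<op>Read|Write):\s+(?P<mib>[\d.]+) MiB/sec")

    def to_splunk_events(ior_output: str, cluster: str) -> list[str]:
        """Convert IOR summary lines into key=value events, a format
        Splunk ingests without extra field-extraction configuration."""
        events = []
        for m in SUMMARY.finditer(ior_output):
            events.append(
                f'cluster={cluster} op={m["op"].lower()} mib_per_sec={m["mib"]}'
            )
        return events
    ```

    Each event carries the cluster name alongside the measurement, so a Splunk dashboard can chart read and write bandwidth per cluster over time.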

  15. Evaluation of clinical data in childhood asthma. Application of a computer file system

    Energy Technology Data Exchange (ETDEWEB)

    Fife, D.; Twarog, F.J.; Geha, R.S.

    1983-10-01

    A computer file system was used in our pediatric allergy clinic to assess the value of chest roentgenograms and hemoglobin determinations used in the examination of patients and to correlate exposure to pets and forced hot air with the severity of asthma. Among 889 children with asthma, 20.7% had abnormal chest roentgenographic findings, excluding hyperinflation and peribronchial thickening, and 0.7% had abnormal hemoglobin values. Environmental exposure to pets or forced hot air was not associated with increased severity of asthma, as assessed by five measures of outcome: number of medications administered, requirement for corticosteroids, frequency of clinic visits, frequency of emergency room visits, and frequency of hospitalizations.

  16. The influence of two reciprocating single-file and two rotary-file systems on the apical extrusion of debris and its biological relationship with symptomatic apical periodontitis. A systematic review and meta-analysis.

    Science.gov (United States)

    Caviedes-Bucheli, J; Castellanos, F; Vasquez, N; Ulate, E; Munoz, H R

    2016-03-01

    This systematic review and meta-analysis investigated the influence of the number of files (full-sequence rotary-file versus reciprocating single-file systems) used during root canal preparation on the apical extrusion of debris and its biological relationship with the occurrence of symptomatic apical periodontitis. An extensive literature search was carried out in the Medline, ISI Web of Science and Cochrane databases for relevant articles using a keyword search strategy. Based on inclusion and exclusion criteria, two reviewers independently rated the quality of each study, determining the level of evidence of the articles selected. The primary outcome for the meta-analysis was determined by the amount of debris extruded into the periapical tissue during root canal preparation with multiple- or single-file systems in four laboratory studies. Analysis of in vivo release of neuropeptides (SP and CGRP) after root canal preparation with single- or multiple-file systems was also carried out. Amongst the 128 articles initially found, 113 were excluded for being nonrelevant or not fulfilling the selection criteria. Another four articles were excluded after methodology evaluation. Finally, nine laboratory studies and two in vivo studies were included in the systematic review. Four of the laboratory studies were further included for meta-analysis, which revealed greater debris extrusion after the use of single-file techniques when compared to multiple-file systems. Analysis of in vivo neuropeptide expression in the periodontal ligament suggests that the design of the instrument is more important than the number of files used. Both rotary and reciprocating single-file systems generate apical extrusion of debris in laboratory studies, as well as expression of neuropeptides in vivo. Available evidence is limited, but supports the fact that this inflammatory reaction is influenced not by the number of files but by the type of movement and the instrument design.

  17. Environmental Sensitivity Index (ESI) Atlas: San Francisco Bay - 1998, maps and geographic information systems data (NODC Accession 0036884)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set comprises the Environmental Sensitivity Index (ESI) maps for the shoreline of San Francisco Bay. ESI data characterize estuarine environments and...

  18. Environmental Sensitivity Index (ESI) Atlas: San Francisco Bay, California maps and geographic information systems data (NODC Accession 0013224)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set comprises the Environmental Sensitivity Index (ESI) maps for the shoreline of San Francisco Bay. ESI data characterize estuarine environments and...

  19. Evaluation of habitat management strategies on the flora and fauna of wetland systems in the San Luis Valley, Colorado

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — The importance of maintaining the integrity of lands composing the San Luis Valley has not been fully quantified. Scientific evaluation and study of short- and...

  20. Modeling of the interplay between single-file diffusion and conversion reaction in mesoporous systems

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jing [Iowa State Univ., Ames, IA (United States)

    2013-01-11

    We analyze the spatiotemporal behavior of species concentrations in a diffusion-mediated conversion reaction which occurs at catalytic sites within linear pores of nanometer diameter. A strict single-file (no passing) constraint occurs in the diffusion within such narrow pores. Both transient and steady-state behavior is precisely characterized by kinetic Monte Carlo simulations of a spatially discrete lattice–gas model for this reaction–diffusion process considering various distributions of catalytic sites. Exact hierarchical master equations can also be developed for this model. Their analysis, after application of mean-field type truncation approximations, produces discrete reaction–diffusion type equations (mf-RDE). For slowly varying concentrations, we further develop coarse-grained continuum hydrodynamic reaction–diffusion equations (h-RDE) incorporating a precise treatment of single-file diffusion (SFD) in this multispecies system. Noting the shortcomings of mf-RDE and h-RDE, we then develop a generalized hydrodynamic (GH) formulation of appropriate gh-RDE which incorporates an unconventional description of chemical diffusion in mixed-component quasi-single-file systems based on a refined picture of tracer diffusion for finite-length pores. The gh-RDE elucidate the non-exponential decay of the steady-state reactant concentration into the pore and the non-mean-field scaling of the reactant penetration depth. Then an extended model of a catalytic conversion reaction within a functionalized nanoporous material is developed to assess the effect of varying the reaction product – pore interior interaction from attractive to repulsive. The analysis is performed utilizing the generalized hydrodynamic formulation of the reaction-diffusion equations which can reliably capture the complex interplay between reaction and restricted transport for both irreversible and reversible reactions.
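    The single-file constraint at the core of this model can be illustrated with a toy kinetic Monte Carlo sketch: particles on a 1D lattice hop to an adjacent site only if it is empty (so they cannot pass one another), and reactant A converts to product B while sitting on a catalytic site. Lattice size, rates, and the two-site catalytic region are illustrative choices, not the parameters of the study.

    ```python
    import random

    def kmc_single_file(length=50, n_particles=15, catalytic=(24, 25),
                        p_react=0.1, steps=10_000, seed=0):
        """Toy lattice-gas KMC. Sites hold None, 'A' (reactant) or 'B'
        (product). Each step: pick a random particle and attempt a hop to
        a random neighbour; the hop is rejected if the target site is
        occupied (the single-file, no-passing constraint). A particle on
        a catalytic site converts A -> B with probability p_react."""
        rng = random.Random(seed)
        lattice = [None] * length
        positions = rng.sample(range(length), n_particles)
        for i in positions:
            lattice[i] = 'A'
        for _ in range(steps):
            i = rng.choice(positions)
            j = i + rng.choice((-1, 1))
            if 0 <= j < length and lattice[j] is None:  # hop only into a vacancy
                lattice[j], lattice[i] = lattice[i], None
                positions[positions.index(i)] = j
                i = j
            if i in catalytic and lattice[i] == 'A' and rng.random() < p_react:
                lattice[i] = 'B'  # conversion at the catalytic site
        return lattice
    ```

    Because particles cannot pass, product B formed at the pore center must single-file its way out past incoming reactant, which is the transport restriction behind the non-mean-field concentration profiles the abstract describes.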

  1. Mafic and ultramafic inclusions along the San Andreas Fault System: their geophysical character and effect on earthquake behavior, California, USA

    Science.gov (United States)

    Ponce, D. A.; Langenheim, V. E.; Jachens, R. C.; Hildenbrand, T. G.

    2003-04-01

    Mafic and ultramafic rocks along the San Andreas Fault System (SAFS) influence earthquake processes where their geologic setting often provides information on the tectonic evolution of these large-scale strike-slip faults. In the northern part of the SAFS, along the Hayward Fault (HF), inversion of gravity and magnetic data indicates that seismicity avoids the interior of a large gabbro body, and mechanical models may be able to explain how this massive mafic block influences the distribution of stress. Aftershocks of the M6.7 1989 Loma Prieta earthquake are also spatially related to the distribution of a gabbro body, clustering along the SAF and terminating at the NW end of the gabbro body where it abuts the fault surface. Based on geophysical modeling and a three-dimensional view of the subsurface geology and seismicity, aftershocks do not occur in the interior of the buried gabbro body. In the southern part of the SAFS, aftershocks and ruptures of the M7.1 1999 Hector Mine and M7.3 1992 Landers earthquakes avoid the interior of a Jurassic diorite that extends to depths of approximately 15 km and was probably an important influence on the rupture geometry of these earthquakes. Seismicity prior to the Landers earthquake also tends to avoid the diorite, suggesting that it affects strain distribution. The San Jacinto Fault (SJF), a discontinuity within the Peninsular Ranges batholith (PRB), separates mafic, dense, and magnetic rocks of the western PRB from more felsic, less dense, and weakly magnetic rocks of the eastern PRB. The geophysical gradients do not cross the SJF zone, but instead bend to the northwest and coincide with the fault zone. Because emplacement of the PRB presumably welded across this older crustal boundary, the SJF zone probably developed along the favorably oriented margin of the dense, stronger western PRB. 
Two historical M6.7 earthquakes may have nucleated along the PRB discontinuity suggesting that the PRB may continue to affect how strain

  2. How can climate change and engineered water conveyance affect sediment dynamics in the San Francisco Bay-Delta system?

    Science.gov (United States)

    Achete, Fernanda; Van der Wegen, Mick; Roelvink, Jan Adriaan; Jaffe, Bruce E.

    2017-01-01

    Suspended sediment concentration is an important estuarine health indicator. Estuarine ecosystems rely on the maintenance of habitat conditions, which are changing due to direct human impact and climate change. This study aims to evaluate the impact of climate change relative to engineering measures on estuarine fine sediment dynamics and sediment budgets. We use the highly engineered San Francisco Bay-Delta system as a case study. We apply a process-based modeling approach (Delft3D-FM) to assess the changes in hydrodynamics and sediment dynamics resulting from climate change and engineering scenarios. The scenarios consider a direct human impact (shift in water pumping location), climate change (sea level rise and suspended sediment concentration decrease), and abrupt disasters (island flooding, possibly as the result of an earthquake). Levee failure has the largest impact on the hydrodynamics of the system. Reduction in sediment input from the watershed has the greatest impact on turbidity levels, which are key to primary production and define habitat conditions for endemic species. Sea level rise leads to more sediment suspension and a net sediment export if little room for accommodation is left in the system due to continuous engineering works. Mitigation measures like levee reinforcement are effective for addressing direct human impacts, but less effective for a persistent, widespread, and increasing threat like sea level rise. Progressive adaptive mitigation measures to the changes in sediment and flow dynamics resulting from sea level rise may be a more effective strategy. Our approach shows that a validated process-based model is a useful tool to address long-term (decades to centuries) changes in sediment dynamics in highly engineered estuarine systems. In addition, our modeling approach provides a useful basis for long-term, process-based studies addressing ecosystem dynamics and health.

  3. Permanent-File-Validation Utility Computer Program

    Science.gov (United States)

    Derry, Stephen D.

    1988-01-01

    Errors in files detected and corrected during operation. Permanent File Validation (PFVAL) utility computer program provides CDC CYBER NOS sites with a mechanism to verify the integrity of the permanent file base. Locates and identifies permanent file errors in the Mass Storage Table (MST) and Track Reservation Table (TRT), in permanent file catalog entries (PFC's) in permit sectors, and in disk sector linkage. All detected errors are written to a listing file and to the system and job day files. Program operates by reading system tables, catalog tracks, permit sectors, and disk linkage bytes to validate expected against actual file linkages. Used extensively to identify and locate errors in permanent files and enable online correction, reducing computer-system downtime.
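
The core of such a validator is a cross-check of expected against actual linkages. Below is a hypothetical sketch in Python — not the actual CDC CYBER utility; the catalog and linkage structures are invented for illustration:

```python
# Hypothetical sketch of permanent-file linkage validation (not PFVAL itself).
# catalog: {file_name: [expected_track_ids in order]}
# linkage: {track_id: next_track_id or None}, the actual on-disk chaining.

def validate_catalog(catalog, linkage):
    """Return a list of human-readable error strings for every mismatch
    between the catalog's expected track chain and the linkage table."""
    errors = []
    for name, tracks in catalog.items():
        for i, track in enumerate(tracks):
            if track not in linkage:
                errors.append(f"{name}: track {track} missing from linkage table")
            elif i + 1 < len(tracks) and linkage[track] != tracks[i + 1]:
                errors.append(f"{name}: track {track} links to {linkage[track]}, "
                              f"expected {tracks[i + 1]}")
    return errors
```

Errors found this way can then be reported to a listing file for online correction, as the abstract describes.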

  4. Toward Millions of File System IOPS on Low-Cost, Commodity Hardware.

    Science.gov (United States)

    Zheng, Da; Burns, Randal; Szalay, Alexander S

    2013-01-01

    We describe a storage system that removes I/O bottlenecks to achieve more than one million IOPS based on a user-space file abstraction for arrays of commodity SSDs. The file abstraction refactors I/O scheduling and placement for extreme parallelism and non-uniform memory and I/O. The system includes a set-associative, parallel page cache in the user space. We redesign page caching to eliminate CPU overhead and lock-contention in non-uniform memory architecture machines. We evaluate our design on a 32 core NUMA machine with four, eight-core processors. Experiments show that our design delivers 1.23 million 512-byte read IOPS. The page cache realizes the scalable IOPS of Linux asynchronous I/O (AIO) and increases user-perceived I/O performance linearly with cache hit rates. The parallel, set-associative cache matches the cache hit rates of the global Linux page cache under real workloads.
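
The set-associative design the abstract describes can be sketched in a few lines. This is an illustrative model, not the authors' user-space implementation (theirs is lock-partitioned and NUMA-aware); it shows why hashing pages into many small sets confines each lookup, and hence any contention, to one set:

```python
# Illustrative set-associative page cache (sizes and names are invented).
# Each page maps to exactly one small set; eviction is LRU within that set,
# so no global table or global lock is ever touched on a lookup.

from collections import OrderedDict

class SetAssociativeCache:
    def __init__(self, num_sets=1024, ways=8):
        self.num_sets = num_sets
        self.ways = ways                          # pages held per set
        self.sets = [OrderedDict() for _ in range(num_sets)]
        self.hits = self.misses = 0

    def get(self, page_no, load):
        s = self.sets[page_no % self.num_sets]    # only this set is examined
        if page_no in s:
            s.move_to_end(page_no)                # refresh LRU order in the set
            self.hits += 1
            return s[page_no]
        self.misses += 1
        if len(s) >= self.ways:
            s.popitem(last=False)                 # evict the set-local LRU page
        s[page_no] = data = load(page_no)
        return data
```

In a real implementation each set would carry its own lock, so threads on different NUMA nodes rarely contend.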

  5. I/O Performance of a RAID-10 Style Parallel File System

    Institute of Scientific and Technical Information of China (English)

    Dan Feng; Hong Jiang; Yi-Feng Zhu

    2004-01-01

    Without any additional cost, all the disks on the nodes of a cluster can be connected together through CEFT-PVFS, a RAID-10 style parallel file system, to provide multi-GB/s parallel I/O performance. I/O response time is one of the most important measures of quality of service for a client. When multiple clients submit data-intensive jobs at the same time, the response time experienced by the user is an indicator of the power of the cluster. In this paper, a queuing model is used to analyze in detail the average response time when multiple clients access CEFT-PVFS. The results reveal that response time is a function of several operational parameters: I/O response time decreases with increases in the I/O buffer hit rate for read requests, the write buffer size for write requests, and the number of server nodes in the parallel file system, while the higher the I/O request arrival rate, the longer the I/O response time. On the other hand, the collective power of a large cluster supported by CEFT-PVFS is shown to sustain a steady and stable I/O response time over a relatively large range of request arrival rates.
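
The qualitative trends the abstract reports can be reproduced with a much simpler queuing model than the paper's. As a rough illustration (an M/M/1 approximation, not the authors' actual model), treat cache hits as free and spread misses evenly over the server nodes:

```python
# Rough M/M/1 illustration (not the paper's model) of mean I/O response time.
# Only cache misses reach the disks; misses are divided among the servers.

def mean_response_time(arrival_rate, disk_service_rate, hit_rate, num_servers):
    """Mean response time of one server's queue, treating the file system as
    num_servers independent M/M/1 queues fed by the miss traffic."""
    effective_arrivals = arrival_rate * (1.0 - hit_rate) / num_servers
    if effective_arrivals >= disk_service_rate:
        return float("inf")                       # queue saturated: unstable
    return 1.0 / (disk_service_rate - effective_arrivals)
```

Even this crude model shows the abstract's trends: response time falls with higher hit rate or more servers, and rises (eventually without bound) with the request arrival rate.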

  6. Peer selecting model based on FCM for wireless distributed P2P files sharing systems

    Institute of Scientific and Technical Information of China (English)

    LI Xi; JI Hong; ZHENG Rui-ming

    2010-01-01

    In order to improve the performance of wireless distributed peer-to-peer (P2P) file sharing systems, a general system architecture and a novel peer selecting model based on fuzzy cognitive maps (FCM) are proposed in this paper. The new model provides an effective approach to choosing an optimal peer from several resource-discovery results for the best file transfer. Compared with the traditional min-hops scheme, which uses hop count as the only selection criterion, the proposed model uses FCM to investigate the complex relationships among various relevant factors in wireless environments and gives an overall evaluation score for each candidate. It also has strong scalability, being independent of any specific P2P resource-discovery protocol. Furthermore, a complete implementation is explained in concrete modules. The simulation results show that the proposed model is effective and feasible compared with the min-hops scheme, with the success transfer rate increased by at least 20% and transfer time improved by as much as 34%.
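
The scoring step of such a model is easy to sketch. In the hedged example below, the concepts and edge weights are invented for illustration (they are not taken from the paper): each candidate peer's measurements activate input concepts, the signed weight matrix propagates their influence, and the peer whose "suitability" concept settles highest is selected:

```python
# Hedged FCM peer-scoring sketch; concept names and weights are illustrative.

import math

def fcm_score(inputs, weights, out_node, iters=20):
    """inputs: initial activation per concept in [0, 1];
    weights[i][j]: signed influence of concept i on concept j."""
    squash = lambda x: 1.0 / (1.0 + math.exp(-x))   # keep activations in (0, 1)
    state = list(inputs)
    n = len(state)
    for _ in range(iters):
        state = [
            # concepts with no incoming edges keep their measured value
            squash(sum(state[i] * weights[i][j] for i in range(n)))
            if any(weights[i][j] for i in range(n)) else state[j]
            for j in range(n)
        ]
    return state[out_node]

# Concepts: 0 = hop count, 1 = link quality, 2 = battery level, 3 = suitability.
W = [[0, 0, 0, -0.8],   # more hops depress suitability
     [0, 0, 0,  0.9],   # better links raise it
     [0, 0, 0,  0.6],   # more remaining battery raises it
     [0, 0, 0,  0.0]]
near_good_peer = fcm_score([0.2, 0.9, 0.8, 0.0], W, out_node=3)
far_weak_peer  = fcm_score([0.9, 0.4, 0.5, 0.0], W, out_node=3)
```

Unlike a pure min-hops rule, the weak-link, low-battery peer scores lower even before hop count alone would rule it out.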

  7. 76 FR 2368 - Balance Power Systems, LLC; Supplemental Notice That Initial Market-Based Rate Filing Includes...

    Science.gov (United States)

    2011-01-13

    ... Energy Regulatory Commission Balance Power Systems, LLC; Supplemental Notice That Initial Market-Based... supplemental notice in the above-referenced proceeding of Balance Power Systems, LLC's application for market... 20426. The filings in the above-referenced proceeding are accessible in the Commission's eLibrary...

  8. Geology and oil and gas assessment of the Fruitland Total Petroleum System, San Juan Basin, New Mexico and Colorado: Chapter 6 in Geology and Oil and Gas Assessment of the Fruitland Total Petroleum System, San Juan Basin, New Mexico and Colorado

    Science.gov (United States)

    Ridgley, J.L.; Condon, S.M.; Hatch, J.R.

    2013-01-01

    The Fruitland Total Petroleum System (TPS) of the San Juan Basin Province includes all genetically related hydrocarbons generated from coal beds and organic-rich shales in the Cretaceous Fruitland Formation. Coal beds are considered to be the primary source of the hydrocarbons. Potential reservoir rocks in the Fruitland TPS consist of the Upper Cretaceous Pictured Cliffs Sandstone, the Fruitland Formation (both sandstones and coal beds), and the Farmington Sandstone Member of the Kirtland Formation, as well as the Tertiary Ojo Alamo Sandstone and the Animas, Nacimiento, and San Jose Formations.

  9. Projected evolution of California's San Francisco Bay-Delta-river system in a century of climate change.

    Directory of Open Access Journals (Sweden)

    James E Cloern

    Full Text Available BACKGROUND: Accumulating evidence shows that the planet is warming as a response to human emissions of greenhouse gases. Strategies of adaptation to climate change will require quantitative projections of how altered regional patterns of temperature, precipitation and sea level could cascade to provoke local impacts such as modified water supplies, increasing risks of coastal flooding, and growing challenges to sustainability of native species. METHODOLOGY/PRINCIPAL FINDINGS: We linked a series of models to investigate responses of California's San Francisco Estuary-Watershed (SFEW) system to two contrasting scenarios of climate change. Model outputs for scenarios of fast and moderate warming are presented as 2010-2099 projections of nine indicators of changing climate, hydrology and habitat quality. Trends of these indicators measure rates of: increasing air and water temperatures, salinity and sea level; decreasing precipitation, runoff, snowmelt contribution to runoff, and suspended sediment concentrations; and increasing frequency of extreme environmental conditions such as water temperatures and sea level beyond the ranges of historical observations. CONCLUSIONS/SIGNIFICANCE: Most of these environmental indicators change substantially over the 21st century, and many would present challenges to natural and managed systems. Adaptations to these changes will require flexible planning to cope with growing risks to humans and the challenges of meeting demands for fresh water and sustaining native biota. Programs of ecosystem rehabilitation and biodiversity conservation in coastal landscapes will be most likely to meet their objectives if they are designed from considerations that include: (1) an integrated perspective that river-estuary systems are influenced by effects of climate change operating on both watersheds and oceans; (2) varying sensitivity among environmental indicators to the uncertainty of future climates; (3) inevitability of

  10. Projected evolution of California's San Francisco bay-delta-river system in a century of climate change

    Science.gov (United States)

    Cloern, J.E.; Knowles, N.; Brown, L.R.; Cayan, D.; Dettinger, M.D.; Morgan, T.L.; Schoellhamer, D.H.; Stacey, M.T.; van der Wegen, M.; Wagner, R.W.; Jassby, A.D.

    2011-01-01

    Background: Accumulating evidence shows that the planet is warming as a response to human emissions of greenhouse gases. Strategies of adaptation to climate change will require quantitative projections of how altered regional patterns of temperature, precipitation and sea level could cascade to provoke local impacts such as modified water supplies, increasing risks of coastal flooding, and growing challenges to sustainability of native species. Methodology/Principal Findings: We linked a series of models to investigate responses of California's San Francisco Estuary-Watershed (SFEW) system to two contrasting scenarios of climate change. Model outputs for scenarios of fast and moderate warming are presented as 2010-2099 projections of nine indicators of changing climate, hydrology and habitat quality. Trends of these indicators measure rates of: increasing air and water temperatures, salinity and sea level; decreasing precipitation, runoff, snowmelt contribution to runoff, and suspended sediment concentrations; and increasing frequency of extreme environmental conditions such as water temperatures and sea level beyond the ranges of historical observations. Conclusions/Significance: Most of these environmental indicators change substantially over the 21st century, and many would present challenges to natural and managed systems. Adaptations to these changes will require flexible planning to cope with growing risks to humans and the challenges of meeting demands for fresh water and sustaining native biota. Programs of ecosystem rehabilitation and biodiversity conservation in coastal landscapes will be most likely to meet their objectives if they are designed from considerations that include: (1) an integrated perspective that river-estuary systems are influenced by effects of climate change operating on both watersheds and oceans; (2) varying sensitivity among environmental indicators to the uncertainty of future climates; (3) inevitability of biological community

  11. Projected evolution of California's San Francisco Bay-Delta-River System in a century of continuing climate change

    Science.gov (United States)

    Cloern, James E.; Knowles, Noah; Brown, Larry R.; Cayan, Daniel; Dettinger, Michael D.; Morgan, Tara L.; Schoellhamer, David H.; Stacey, Mark T.; van der Wegen, Mick; Wagner, R. Wayne; Jassby, Alan D.

    2011-01-01

    Background Accumulating evidence shows that the planet is warming as a response to human emissions of greenhouse gases. Strategies of adaptation to climate change will require quantitative projections of how altered regional patterns of temperature, precipitation and sea level could cascade to provoke local impacts such as modified water supplies, increasing risks of coastal flooding, and growing challenges to sustainability of native species. Methodology/Principal Findings We linked a series of models to investigate responses of California's San Francisco Estuary-Watershed (SFEW) system to two contrasting scenarios of climate change. Model outputs for scenarios of fast and moderate warming are presented as 2010–2099 projections of nine indicators of changing climate, hydrology and habitat quality. Trends of these indicators measure rates of: increasing air and water temperatures, salinity and sea level; decreasing precipitation, runoff, snowmelt contribution to runoff, and suspended sediment concentrations; and increasing frequency of extreme environmental conditions such as water temperatures and sea level beyond the ranges of historical observations. Conclusions/Significance Most of these environmental indicators change substantially over the 21st century, and many would present challenges to natural and managed systems. Adaptations to these changes will require flexible planning to cope with growing risks to humans and the challenges of meeting demands for fresh water and sustaining native biota. Programs of ecosystem rehabilitation and biodiversity conservation in coastal landscapes will be most likely to meet their objectives if they are designed from considerations that include: (1) an integrated perspective that river-estuary systems are influenced by effects of climate change operating on both watersheds and oceans; (2) varying sensitivity among environmental indicators to the uncertainty of future climates; (3) inevitability of biological community

  12. l382nc.m77t - MGD77 data file for Geophysical data from field activity L-3-82-NC in Off San Mateo County, Northern California from 02/27/1982 to 03/01/1982

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Single-beam bathymetry, gravity, and magnetic data along with DGPS navigation data was collected as part of field activity L-3-82-NC in Off San Mateo County,...

  13. l282nc.m77t - MGD77 data file for Geophysical data from field activity L-2-82-NC in Off San Mateo, Northern California from 02/07/1982 to 02/12/1982

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — Single-beam bathymetry, gravity, and magnetic data along with DGPS navigation data was collected as part of field activity L-2-82-NC in Off San Mateo, Northern...

  16. National Assessment of Oil and Gas Project - San Juan Basin Province (022) Total Petroleum Systems

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The Total Petroleum System is used in the National Assessment Project and incorporates the Assessment Unit, which is the fundamental geologic unit used for the...

  17. National Assessment of Oil and Gas Project - San Joaquin Basin Province (010) Total Petroleum Systems

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The Total Petroleum System is used in the National Assessment Project and incorporates the Assessment Unit, which is the fundamental geologic unit used for the...

  18. Comparative evaluation of H&H and WFNS grading scales with modified H&H (sans systemic disease): A study on 1000 patients with subarachnoid hemorrhage.

    Science.gov (United States)

    Aggarwal, Ashish; Dhandapani, Sivashanmugam; Praneeth, Kokkula; Sodhi, Harsimrat Bir Singh; Pal, Sudhir Singh; Gaudihalli, Sachin; Khandelwal, N; Mukherjee, Kanchan K; Tewari, M K; Gupta, Sunil Kumar; Mathuriya, S N

    2017-03-15

    The comparative studies on grading in subarachnoid hemorrhage (SAH) had several limitations, such as the unclear grading of Glasgow Coma Scale 15 with neurological deficits in the World Federation of Neurosurgical Societies (WFNS) scale, and the inclusion of systemic disease in the Hunt and Hess (H&H) scale. Their differential incremental impacts and optimum cut-off values for unfavourable outcome are unsettled. This is a prospective comparison of the prognostic impacts of grading schemes to address these issues. SAH patients were assessed using WFNS, H&H (including systemic disease), and modified H&H (sans systemic disease), and followed up with the Glasgow Outcome Score (GOS) at 3 months. Their performance characteristics were analysed as incremental ordinal variables and as different grading-scale dichotomies using rank-order correlation, sensitivity, specificity, positive predictive value, negative predictive value, Youden's J and multivariate analyses. A total of 1016 patients were studied. As a univariate incremental variable, H&H sans systemic disease had the best negative rank-order correlation coefficient (-0.453) with respect to lower GOS. H&H sans systemic disease also had the greatest adjusted incremental impact of 0.72 (95% confidence interval (CI) 0.54-0.91) against a lower GOS, as compared to 0.6 (95% CI 0.45-0.74) and 0.55 (95% CI 0.42-0.68) for H&H and WFNS grades, respectively. In multivariate categorical analysis, H&H grades 4-5 sans systemic disease had the greatest impact on unfavourable GOS, with an adjusted odds ratio of 6.06 (95% CI 3.94-9.32). To conclude, H&H grading sans systemic disease had the greatest impact on unfavourable GOS. Though systemic disease is an important prognostic factor, it should be considered distinctly from grading. Appropriate cut-off values suggesting unfavourable outcome for H&H and WFNS were 4-5 and 3-5, respectively, indicating the importance of neurological deficits in addition to level of consciousness.

  19. Secure File System for Centralized Storage

    Institute of Scientific and Technical Information of China (English)

    周英红

    2011-01-01

    Firstly, the security issues of general-purpose file systems as applied to centralized storage systems are analyzed; the necessity of independently developing a secure file system is then argued, and its implementation technology is explored. A secure file system architecture is proposed that protects stored information through a dedicated file format, file access interfaces, and storage encryption. Measures such as a dedicated file browser, secret-level identifiers, and file access logs regulate and manage the use of sensitive information, technically preventing malicious programs such as viruses and trojans from spreading within the file system and from stealing classified files.
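
As a loose illustration of the "dedicated file format with a secret-level identifier" idea — the layout below is invented for this sketch and is not the system described in the paper:

```python
# Invented container layout: a fixed header carrying a magic number and a
# secret-level identifier, followed by the (notionally encrypted) payload.
# struct gives the header a fixed, portable on-disk layout.

import struct

MAGIC = b"SFS1"
HEADER = struct.Struct("<4sB3x")              # magic, level byte, 3 pad bytes

def pack_file(level, payload):
    """Serialize a classification level and payload into one blob."""
    return HEADER.pack(MAGIC, level) + payload

def unpack_file(blob):
    """Parse a blob; reject anything without the expected magic number."""
    magic, level = HEADER.unpack_from(blob)
    if magic != MAGIC:
        raise ValueError("not a secure-container file")
    return level, blob[HEADER.size:]
```

A dedicated browser would read the level byte before deciding whether to display the payload, and a real system would encrypt the payload and append an access log.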

  20. Habitat use patterns of the invasive red lionfish Pterois volitans: a comparison between mangrove and reef systems in San Salvador, Bahamas

    Science.gov (United States)

    Pimiento, Catalina; Nifong, James C.; Hunter, Margaret E.; Monaco, Eric; Silliman, Brian R.

    2015-01-01

    The Indo-Pacific red lionfish Pterois volitans is widespread both in its native and its non-native habitats. The rapid invasion of this top predator has had a marked negative effect on fish populations in the Western Atlantic and the Caribbean. It is now well documented that lionfish are invading many tropical and sub-tropical habitats. However, there are fewer data available on the change in lionfish abundance over time and the variation of body size and diet across habitats. A recent study in San Salvador, Bahamas, found body size differences between individuals from mangrove and reef systems. That study further suggested that ontogenetic investigation of habitat use patterns could help clarify whether lionfish are using the mangrove areas of San Salvador as nurseries. The aim of the present study is to determine temporal trends in lionfish relative abundance in mangrove and reef systems in San Salvador, and to further assess whether there is evidence suggesting an ontogenetic shift from mangroves to reef areas. Accordingly, we collected lionfish from mangrove and reef habitats and calculated catch per unit effort (a proxy for relative abundance), compared body size distributions across these two systems, and employed a combination of stable isotope, stomach content, and genetic analyses of prey, to evaluate differences in lionfish trophic interactions and habitat use patterns. Our results show that populations may have increased in San Salvador during the last 4 years, and that there is a strong similarity in body size between habitats, stark differences in prey items, and no apparent overlap in the use of habitat and/or food resources. These results suggest that there is no evidence for an ontogenetic shift from mangroves to reefs, and support other studies that propose lionfish are opportunistic foragers with little movement across habitats.

  1. Tracer dynamics in a single-file system with absorbing boundary.

    Science.gov (United States)

    Ryabov, Artem; Chvosta, Petr

    2014-02-01

    The paper addresses single-file diffusion in the presence of an absorbing boundary. The emphasis is on the interplay between the hard-core interparticle interaction and the absorption process. The resulting dynamics exhibits several qualitatively new features. First, starting with the exact probability density function for a given particle (a tracer), we study the long-time asymptotics of its moments. Both the mean position and the mean-square displacement are controlled by dynamical exponents which depend on the initial order of the particle in the file. Second, conditioning on nonabsorption, we study the distribution of long-living particles. In the conditioned framework, the dynamical exponents are the same for all particles; however, a given particle possesses an effective diffusion coefficient which depends on its initial order. After performing the thermodynamic limit, the conditioned dynamics of the tracer is subdiffusive, the generalized diffusion coefficient D(1/2) being different from that reported for the system without absorbing boundary.
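
For orientation, the subdiffusive scaling referred to in the closing sentence is the standard single-file law, written here in generic notation rather than quoted from the paper:

```latex
\left\langle x^2(t) \right\rangle \simeq 2\, D_{1/2}\, t^{1/2}, \qquad t \to \infty .
```

Per the abstract, conditioning on survival preserves the t^{1/2} exponent but changes the generalized coefficient D(1/2), which then depends on the tracer's initial order in the file.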

  2. Architecture of a high-performance PACS based on a shared file system

    Science.gov (United States)

    Glicksman, Robert A.; Wilson, Dennis L.; Perry, John H.; Prior, Fred W.

    1992-07-01

    The Picture Archive and Communication System developed by Loral Western Development Laboratories and Siemens Gammasonics Incorporated utilizes an advanced, high speed, fault tolerant image file server or Working Storage Unit (WSU) combined with 100 Mbit per second fiber optic data links. This central shared file server is capable of supporting the needs of more than one hundred workstations and acquisition devices at interactive rates. If additional performance is required, additional working storage units may be configured in a hyper-star topology. Specialized processing and display hardware is used to enhance Apple Macintosh personal computers to provide a family of low cost, easy to use, yet extremely powerful medical image workstations. The Siemens LiteboxTM application software provides a consistent look and feel to the user interface of all workstations in the family. Modern database and wide area communications technologies combine to support not only large hospital PACS but also outlying clinics and smaller facilities. Basic RIS functionality is integrated into the PACS database for convenience and data integrity.

  3. Kinematics of rotating panels of E-W faults in the San Andreas system: what can we tell from geodesy?

    Science.gov (United States)

    Platt, J. P.; Becker, T. W.

    2013-09-01

    Sets of E- to NE-trending sinistral and/or reverse faults occur within the San Andreas system, and are associated with palaeomagnetic evidence for clockwise vertical-axis rotations. These structures cut across the trend of active dextral faults, posing questions as to how displacement is transferred across them. Geodetic data show that they lie within an overall dextral shear field, but the data are commonly interpreted to indicate little or no slip, nor any significant rate of rotation. We model these structures as rotating by bookshelf slip in a dextral shear field, and show that a combination of sinistral slip and rotation can produce the observed velocity field. This allows prediction of rates of slip, rotation, fault-parallel extension and fault-normal shortening within the panel. We use this method to calculate the kinematics of the central segment of the Garlock Fault, which cuts across the eastern California shear zone at a high angle. We obtain a sinistral slip rate of 6.1 ± 1.1 mm yr-1, comparable to geological evidence, but higher than most previous geodetic estimates, and a rotation rate of 4.0 ± 0.7° Myr-1 clockwise. The western Transverse Ranges transect a similar shear zone in coastal and offshore California, but at an angle of only 40°. As a result, the faults, which were sinistral when they were at a higher angle to the shear zone, have been reactivated in a dextral sense at a low rate, and the rate of rotation of the panel has decreased from its long-term rate of ~5° Myr-1 to 1.6° ± 0.2° Myr-1 clockwise. These results help to resolve some of the apparent discrepancies between geological and geodetic slip-rate estimates, and provide an enhanced understanding of the mechanics of intracontinental transform systems.

  4. Past leaded gasoline emissions as a nonpoint source tracer in riparian systems: A study of river inputs to San Francisco Bay

    Science.gov (United States)

    Dunlap, C.E.; Bouse, R.; Flegal, A.R.

    2000-01-01

    Variations in the isotopic composition of lead in 1995-1998 river waters flowing into San Francisco Bay trace the washout of lead deposited in the drainage basin from leaded gasoline combustion. At the confluence of the Sacramento and San Joaquin rivers where they enter the Bay, the isotopic compositions of lead in the waters define a linear trend away from the measured historical compositions of leaded gas in California. The river waters are shifted away from leaded gasoline values and toward an isotopic composition similar to Sierra Nevadan inputs which became the predominant source of sedimentation in San Francisco Bay following the onset of hydraulic gold mining in 1853. Using lead isotopic compositions of hydraulic mine sediments and average leaded gasoline as mixing end members, we calculate that more than 50% of the lead in the present river water originated from leaded gasoline combustion. The strong adsorption of lead (log K(d) > 7.4) to particulates appears to limit the flushing of gasoline lead from the drainage basin, and the removal of that lead from the system may have reached an asymptotic limit. Consequently, gasoline lead isotopes should prove to be a useful nonpoint source tracer of the environmental distribution of particle- reactive anthropogenic metals in freshwater systems.
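
The "more than 50%" figure comes from two-end-member mixing. As a sketch of the lever-rule calculation — the end-member ratios below are placeholders, not the study's measured values, and real isotope mixing also weights each end member by its lead concentration:

```python
# Two-end-member mixing by the lever rule (placeholder isotope ratios).

def mixing_fraction(sample, end_a, end_b):
    """Fraction of end_a in a two-component mix, assuming linear mixing of
    the measured ratio between the two end-member values."""
    return (sample - end_b) / (end_a - end_b)

# Hypothetical 206Pb/207Pb ratios for the two sources and a river sample:
gasoline, sierran = 1.18, 1.26
f_gasoline = mixing_fraction(1.21, gasoline, sierran)   # fraction from gasoline
```

A sample ratio falling 5/8 of the way from the Sierran end member toward the gasoline end member implies roughly 62% gasoline-derived lead under these assumed values.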

  5. Disaster Tolerance System Building in a File Management System

    Institute of Scientific and Technical Information of China (English)

    孙锋

    2011-01-01

    This paper briefly introduces the basic principles and technology of disaster-tolerance (disaster recovery) systems and their main applications in archive management systems, and proposes measures for building a disaster-tolerance capability into an archive management system.

  6. Using developmental evaluation as a system of organizational learning: An example from San Francisco.

    Science.gov (United States)

    Shea, Jennifer; Taylor, Tory

    2017-07-08

    In the last 20 years, developmental evaluation has emerged as a promising approach to support organizational learning in emergent social programs. Through a continuous system of inquiry, reflection, and application of knowledge, developmental evaluation serves as a system of tools, methods, and guiding principles intended to support constructive organizational learning. However, missing from the developmental evaluation literature is a nuanced framework to guide evaluators in how to elevate the organizational practices and concepts most relevant for emergent programs. In this article, we describe and reflect on work we did to develop, pilot, and refine an integrated pilot framework. Drawing on established developmental evaluation inquiry frameworks and incorporating lessons learned from applying the pilot framework, we put forward the Evaluation-led Learning framework to help fill that gap and encourage others to implement and refine it. We posit that without explicitly incorporating the assessments at the foundation of the Evaluation-led Learning framework, developmental evaluation's ability to affect organizational learning in productive ways will likely be haphazard and limited. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Evaluation of apical extrusion of debris and irrigant using two new reciprocating and one continuous rotation single file systems.

    Directory of Open Access Journals (Sweden)

    Gurudutt Nayak

    2014-06-01

    Full Text Available Apical extrusion of debris and irrigants during cleaning and shaping of the root canal is one of the main causes of periapical inflammation and postoperative flare-ups. The purpose of this study was to quantitatively measure the amount of debris and irrigant extruded apically in single-rooted canals using two reciprocating and one rotary single-file nickel-titanium instrumentation systems. Sixty human mandibular premolars, randomly assigned to three groups (n = 20), were instrumented using two reciprocating (Reciproc and WaveOne) and one rotary (OneShape) single-file nickel-titanium systems. Bidistilled water was used as the irrigant, with a traditional needle irrigation delivery system. Eppendorf tubes were used as the test apparatus for collection of debris and irrigant. The volume of extruded irrigant was collected and quantified via the 0.1-mL increment markings on a disposable plastic insulin syringe. The liquid inside the tubes was dried and the mean weight of debris was assessed using an electronic microbalance. The data were statistically analysed using the Kruskal-Wallis nonparametric test and the Mann-Whitney U test with Bonferroni adjustment. P-values less than 0.05 were considered significant. The Reciproc file system produced significantly more debris compared with the OneShape file system (P < 0.05). Extrusion of irrigant was statistically insignificant irrespective of the instrument or instrumentation technique used (P > 0.05). Although all systems caused apical extrusion of debris and irrigant, continuous rotary instrumentation was associated with less extrusion compared with the reciprocating file systems.
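
The pairwise-comparison step the abstract describes can be sketched on synthetic numbers (these are made-up debris weights, not the study's measurements): compute the Mann-Whitney U statistic from pair counts, take a normal-approximation p-value, and apply a Bonferroni-adjusted threshold for the three pairwise tests.

```python
# Mann-Whitney U with Bonferroni-adjusted threshold, on invented data.

import math
from itertools import combinations

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation
    (illustrative; exact tables are preferable for very small samples)."""
    n1, n2 = len(x), len(y)
    u = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma
    return u, math.erfc(abs(z) / math.sqrt(2))   # (U, two-sided p-value)

groups = {                       # invented debris weights, in milligrams
    "Reciproc": [1.9, 2.1, 2.4, 2.2, 2.0],
    "WaveOne":  [1.6, 1.8, 1.7, 1.9, 1.5],
    "OneShape": [1.1, 1.3, 1.2, 1.0, 1.4],
}
alpha = 0.05 / 3                 # Bonferroni correction: three comparisons
for a, b in combinations(groups, 2):
    u, p = mann_whitney_u(groups[a], groups[b])
    print(f"{a} vs {b}: U={u}, significant={p < alpha}")
```

The Bonferroni division keeps the family-wise error rate at 0.05 across the three pairwise tests, which is why each individual comparison is held to the stricter threshold.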

  8. A multi-dimensional analysis of the upper Rio Grande-San Luis Valley social-ecological system

    Science.gov (United States)

    Mix, Ken

The Upper Rio Grande (URG), located in the San Luis Valley (SLV) of southern Colorado, is the primary contributor of streamflow to the Rio Grande Basin upstream of the confluence of the Rio Conchos at Presidio, TX. The URG-SLV includes a complex irrigation-dependent agricultural social-ecological system (SES), which began development in 1852 and today generates more than 30% of the SLV revenue. The diversions of Rio Grande water for irrigation in the SLV have had a disproportionate impact on the downstream portion of the river. These diversions caused the flow to cease at Ciudad Juarez, Mexico, in the late 1880s, creating international conflict. Similarly, low flows in New Mexico and Texas led to interstate conflict. Understanding the changes in the URG-SLV that led to this event, and the interactions among various drivers of change in the URG-SLV, is a difficult task. One reason is that complex social-ecological systems are adaptive and contain feedbacks, emergent properties, cross-scale linkages, large-scale dynamics and non-linearities. Further, most analyses of SES to date have been qualitative, utilizing conceptual models to understand driver interactions. This study utilizes both qualitative and quantitative techniques to develop an innovative approach for analyzing driver interactions in the URG-SLV. Five drivers were identified for the URG-SLV social-ecological system: water (streamflow), water rights, climate, agriculture, and internal and external water policy. The drivers contained several longitudes (data aspect) relevant to the system, except water policy, for which only discrete events were present. Change point and statistical analyses were applied to the longitudes to identify quantifiable changes, to allow detection of cross-scale linkages between drivers, and presence of feedback cycles. Agriculture was identified as the driver signal. Change points for agricultural expansion defined four distinct periods: 1852--1923, 1924--1948, 1949--1978 and 1979

  9. The dental X-ray file of crew members in the Scandinavian Airlines System (SAS).

    Science.gov (United States)

    Keiser-Nielsen, S; Johanson, G; Solheim, T

    1981-11-01

In 1977, the Scandinavian Airlines System (SAS) established a dental X-ray file of all crew members. Its aim was to have immediately available an adequate set of physical antemortem data useful for identification in case of a fatal crash. Recently, an investigation into the quality and suitability of this material was carried out. The radiographs of 100 Danish, 100 Norwegian, and 100 Swedish pilots were picked at random and evaluated for formal deficiencies, technical deficiencies, treatment pattern as useful for identification purposes, and the presence of pathology. The major results of the investigation were that a number of formal and technical deficiencies were disclosed, that the treatment pattern appeared adequate for identification purposes, and that a number of pathological findings were made, several of which had to be considered possible safety risks in the form of barodontalgia.

  10. Cleaning Effectiveness of a Reciprocating Single-file and a Conventional Rotary Instrumentation System

    Science.gov (United States)

    de Carvalho, Fredson Marcio Acris; Gonçalves, Leonardo Cantanhede de Oliveira; Marques, André Augusto Franco; Alves, Vanessa; Bueno, Carlos Eduardo da Silveira; De Martin, Alexandre Sigrist

    2016-01-01

Objective: To compare, by histological analysis, the cleaning effectiveness of a reciprocating single-file system with ProTaper rotary instruments during the preparation of curved root canals in extracted teeth. Methods: A total of 40 root canals with curvatures between 20 and 40 degrees were divided into two groups of 20 canals. Canals were prepared to the following apical sizes: Reciproc, size 25 (n=20); ProTaper, F2 (n=20). The normal distribution of data was tested by the Kolmogorov-Smirnov test, and the groups were compared with the Mann-Whitney U test; no significant difference was found (P > .05) between the two groups. Conclusion: The application of reciprocating motion during instrumentation did not result in increased debris when compared with continuous rotation motion, even in the apical part of curved canals. Both instruments left debris in the canal lumen, irrespective of the movement kinematics applied. PMID:28217185

  11. Effect of Patency File on Transportation and Curve Straightening in Canal Preparation with ProTaper System.

    Science.gov (United States)

    Hasheminia, Seyed Mohsen; Farhadi, Nastaran; Shokraneh, Ali

    2013-01-01

The aim of this ex vivo study was to evaluate the effect of using a patency file on apical transportation and curve straightening during canal instrumentation with the ProTaper rotary system. Seventy permanent mandibular first molars with mesiobuccal canals, measuring 18-23 mm in length and with a 25-40° curvature (according to the Schneider method), were selected. The working lengths were determined and the teeth were mounted and divided into two experimental groups: (A) prepared with the ProTaper system without using a patency file (n = 35) and (B) prepared with the ProTaper system using a patency file (n = 35). Radiographs taken before and after the preparation were imported into Photoshop software, and the apical transportation and curve straightening were measured. Data were analyzed using the independent t-test. Partial correlation analysis was performed to evaluate the relationship between the initial curvature, transportation, and curve straightening (α = 0.05). Using a patency file during canal preparation significantly decreased both apical transportation and curve straightening (P < 0.05). The use of a patency file is therefore recommended during canal preparation with the ProTaper rotary system.

  12. ESSEA as an Enhancement to K-12 Earth Systems Science Efforts at San José State University

    Science.gov (United States)

    Messina, P.; Metzger, E. P.; Sedlock, R. L.

    2002-12-01

San José State University's Geology Department has implemented and maintained a two-fold approach to teacher education efforts. Both pre-service and in-service populations have been participants in a wide variety of content-area enrichment, training, and professional development endeavors. Spearheading these initiatives is the Bay Area Earth Science Institute (BAESI); organized in 1990, this program has served more than 1,000 teachers in weekend and summer workshops and field trips. It sustains a network of Bay Area teachers via its Website (http://www.baesi.org) and newsletter, and allows teachers to borrow classroom-pertinent materials through the Earth Science Resource Center. The Department has developed a course offering in Earth Systems Science (Geology 103), which targets pre-service teachers within SJSU's multiple-subject credential program. The curriculum satisfies California subject matter competency requirements in the geosciences, and infuses pedagogy into the syllabus. Course activities are intended for pre-service and in-service teachers' adaptation in their own classrooms. The course has been enhanced by two SJSU-NASA collaborations (Project ALERT and the Sun-Earth Connection Education Forum), which have facilitated incorporation of NASA data, imagery, and curricular materials. SJSU's M.A. in Natural Science, a combined effort of the Departments of Geology, Biology, and Program in Science Education, is designed to meet the multi-disciplinary needs of single-subject credential science teachers by providing a flexible, individually-tailored curriculum that combines science course work with a science education project. Several BAESI teachers have extended their Earth science knowledge and teaching skills through such projects as field guides to local sites of geological interest; lab-based modules for teaching about earthquakes, rocks and minerals, water quality, and weather; and interactive online materials for students and teachers of science. In

  13. Distributed Seismic Data File System

    Institute of Scientific and Technical Information of China (English)

    刘永江; 邵庆; 彭淑罗

    2015-01-01

Seismic data processing systems suffer a serious I/O bottleneck when accessing massive seismic data files whose sizes reach the terabyte scale. Targeting the characteristics of seismic data file access, a distributed seismic data file system (DSFS) based on a cluster environment is proposed, composed of database nodes, seismic data file nodes, and computing nodes. A distributed file architecture oriented to seismic gather data is designed: centered on a trace-header index database, it splits one large seismic data file into multiple sub-files that can be operated on independently, and builds an index table on trace-header keys from which the sub-file and data block holding any given trace can be located quickly. A set of operations on DSFS files and on gather data reads and writes is defined, providing data query and input/output on a virtual file by trace-header key values while hiding the details of the distributed files, and strategies for file distribution and parallel access are presented. DSFS has been applied in a seismic data processing system and achieves high data-access efficiency.
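The core idea in this record (split a terabyte-scale volume into sub-files and resolve any trace to its sub-file and byte offset through a trace-header index, without scanning the data) can be sketched as follows. The file names, trace record size, and split granularity are illustrative assumptions, not the actual DSFS layout:

```python
# Sketch of trace-header indexing over split sub-files: the index alone
# resolves a trace key to (sub-file, byte offset); no data I/O needed.
from collections import namedtuple

TraceLocation = namedtuple("TraceLocation", "subfile offset")

TRACE_BYTES = 4000          # assumed fixed size of one trace record
TRACES_PER_SUBFILE = 50000  # assumed split granularity

def build_index(trace_keys):
    """Map each trace-header key to its (sub-file name, byte offset)."""
    index = {}
    for seq, key in enumerate(trace_keys):
        subfile = f"volume.part{seq // TRACES_PER_SUBFILE:04d}"
        offset = (seq % TRACES_PER_SUBFILE) * TRACE_BYTES
        index[key] = TraceLocation(subfile, offset)
    return index

# Example: 120,000 traces keyed by (inline, crossline) header values.
keys = [(il, xl) for il in range(300) for xl in range(400)]
idx = build_index(keys)
loc = idx[(150, 200)]   # resolved from the index alone, no file I/O
print(loc.subfile, loc.offset)
```

A reader would then open only `loc.subfile`, seek to `loc.offset`, and read `TRACE_BYTES` bytes, which is the property that lets the sub-files be accessed independently and in parallel.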

  14. The analysis of the I-node in the Linux file system

    Institute of Scientific and Technical Information of China (English)

    帅峰云; 施展

    2011-01-01

The file system is an important part of the operating system. Because Linux supports file systems in many different formats, how the system accesses file systems of different formats, and how information is shared between them, becomes very important. Taking the Ext2 file system as an example, this paper describes in detail the process by which the Linux operating system accesses file systems of different formats, the complete framework of the Linux file system, and the role of the I-node used when accessing files.
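The I-node's role discussed in this record can be illustrated from user space: `os.stat` exposes a file's inode metadata, and a hard link shows two directory entries resolving to the same inode. A minimal, self-contained example (POSIX systems only, since it uses `os.link`):

```python
# User-space view of the inode: metadata lives in the inode, and hard
# links are extra directory entries pointing at the same inode.
import os
import stat
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "data.txt")
    with open(path, "w") as f:
        f.write("hello")

    st = os.stat(path)
    print("inode number :", st.st_ino)
    print("link count   :", st.st_nlink)               # one directory entry so far
    print("size (bytes) :", st.st_size)
    print("mode string  :", stat.filemode(st.st_mode))

    # A hard link adds a second directory entry for the SAME inode.
    alias = os.path.join(d, "alias.txt")
    os.link(path, alias)
    assert os.stat(alias).st_ino == st.st_ino          # same inode
    assert os.stat(path).st_nlink == 2                 # link count grew
```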

  15. In vitro comparison rate of dental root canal transportation using two single file systems on the simulated resin blocks

    Directory of Open Access Journals (Sweden)

    Mohammad Javad Etesami

    2016-07-01

Full Text Available Background and Aims: Cleaning and shaping is one of the most important stages in endodontic treatment. Single-file systems save time and reduce the risk of transmission of pathogens. This in vitro study aimed to compare the rate of canal transportation after preparation of simulated resin root canals with two single-file systems, namely WaveOne and Reciproc. Materials and Methods: Thirty simulated resin root canal blocks, matching a size 08/0.02 K-file, were randomly divided into two study groups. Preparation in Group A and Group B was performed using Reciproc and WaveOne files, respectively. Pre- and post-preparation photographs were taken, and the images were superimposed to evaluate the curvature tendency of the inner and outer walls at three points (apical, middle, and coronal) using the AutoCAD program. Data were analyzed using the t-test. Results: The degree of transportation in the inner and outer walls of the canal was lower at the 3-mm level (P < 0.05), with no significant differences at the other levels (P > 0.05). Conclusion: WaveOne showed better performance in the middle third of the canal, and this system may be recommended.

  16. Gravity change from 2014 to 2015, Sierra Vista Subwatershed, Upper San Pedro Basin, Arizona

    Science.gov (United States)

    Kennedy, Jeffrey R.

    2016-09-13

    Relative-gravity data and absolute-gravity data were collected at 68 stations in the Sierra Vista Subwatershed, Upper San Pedro Basin, Arizona, in May–June 2015 for the purpose of estimating aquifer-storage change. Similar data from 2014 and a description of the survey network were published in U.S. Geological Survey Open-File Report 2015–1086. Data collection and network adjustment results are presented in this report, which is accompanied by a supporting Web Data Release (http://dx.doi.org/10.5066/F7SQ8XHX). Station positions are presented from a Global Positioning System campaign to determine station elevation.

  17. Dentinal damage and fracture resistance of oval roots prepared with single-file systems using different kinematics.

    Science.gov (United States)

    Abou El Nasr, Hend Mahmoud; Abd El Kader, Karim Galal

    2014-06-01

Vertical root fracture is a common finding in endodontically treated teeth, notably in oval roots. The aim of the present study was to determine the effect of instrumentation kinematics and the material of instrument construction of single-file systems on dentin walls and the fracture resistance of oval roots. Sixty-five roots with oval canals were allocated into a control group (n = 5) and 3 experimental groups of 20 roots each. Group WO was instrumented with the WaveOne primary file (Dentsply Maillefer, Ballaigues, Switzerland), group PT-Rec was prepared with F2 ProTaper files (Dentsply Maillefer, Ballaigues, Switzerland) used in a reciprocating motion, and group PT-Rot was prepared with F2 ProTaper files used in a rotation motion. For crack evaluation, half of the samples (n = 30) were embedded in acrylic resin, and the blocks were sectioned at 3, 6, and 9 mm from the apex. The sections were examined under a stereomicroscope and scored for crack presence. The other half of the specimens (n = 30) were obturated using lateral condensation of gutta-percha and AdSeal sealer (Meta Biomed Co, Ltd, Chungbuk, Korea). The specimens were then subjected to a load at 1 mm/min to determine the force required to fracture the roots. WaveOne instruments induced the fewest cracks and exhibited the greatest resistance to fracture compared with ProTaper F2 files, whether used in reciprocating or rotating motion. The alloy from which the instrument is manufactured is a more important factor in determining the dentin-damaging potential of single-file instruments than the motion of instrumentation. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  18. 75 FR 27986 - Electronic Filing System-Web (EFS-Web) Contingency Option

    Science.gov (United States)

    2010-05-19

    ... inventors and other small entity applicants who file their applications electronically via EFS-Web receive a... Internet page ( http://www.uspto.gov/patents/process/file/efs/index.jsp ). The EFS-Web Contingency Option... see http://www.uspto.gov/ebc/portal/efs/sb130_instructions.doc ); 9. Petitions to accept an...

  19. Digital data from the Questa-San Luis and Santa Fe East helicopter magnetic surveys in Santa Fe and Taos Counties, New Mexico, and Costilla County, Colorado

    Science.gov (United States)

    Bankey, Viki; Grauch, V.J.S.; Drenth, B.J.; ,

    2006-01-01

This report contains digital data, image files, and text files describing data formats and survey procedures for aeromagnetic data collected during high-resolution aeromagnetic surveys in southern Colorado and northern New Mexico in December 2005. One survey covers the eastern edge of the San Luis basin, including the towns of Questa, New Mexico, and San Luis, Colorado. A second survey covers the mountain front east of Santa Fe, New Mexico, including the town of Chimayo and portions of the Pueblos of Tesuque and Nambe. Several derivative products from these data are also presented as grids and images, including reduced-to-pole data and data continued to a reference surface. Images are presented in various formats and are intended to be used as input to geographic information systems, standard graphics software, or map plotting packages.

  20. 77 FR 54811 - Safety Zone; TriRock San Diego, San Diego Bay, San Diego, CA

    Science.gov (United States)

    2012-09-06

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; TriRock San Diego, San Diego Bay, San Diego... safety zone upon the navigable waters of the San Diego Bay, San Diego, CA, in support of a bay swim in San Diego Harbor. This safety zone is necessary to provide for the safety of the participants, crew...

  1. 78 FR 58878 - Safety Zone; San Diego Shark Fest Swim; San Diego Bay, San Diego, CA

    Science.gov (United States)

    2013-09-25

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; San Diego Shark Fest Swim; San Diego Bay, San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is establishing a safety zone upon the navigable waters of the San Diego Bay, San Diego, CA, in support of San...

  2. 78 FR 53243 - Safety Zone; TriRock San Diego, San Diego Bay, San Diego, CA

    Science.gov (United States)

    2013-08-29

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; TriRock San Diego, San Diego Bay, San Diego... temporary safety zone upon the navigable waters of the San Diego Bay, San Diego, CA, in support of a... Bryan Gollogly, Waterways Management, U.S. Coast Guard Sector San Diego; telephone (619) 278-7656, email...

  3. Parallel checksumming of data chunks of a shared data object using a log-structured file system

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-09-06

    Checksum values are generated and used to verify the data integrity. A client executing in a parallel computing system stores a data chunk to a shared data object on a storage node in the parallel computing system. The client determines a checksum value for the data chunk; and provides the checksum value with the data chunk to the storage node that stores the shared object. The data chunk can be stored on the storage node with the corresponding checksum value as part of the shared object. The storage node may be part of a Parallel Log-Structured File System (PLFS), and the client may comprise, for example, a Log-Structured File System client on a compute node or burst buffer. The checksum value can be evaluated when the data chunk is read from the storage node to verify the integrity of the data that is read.
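The read-path verification described in this record can be sketched with a toy in-memory store: the writer keeps each chunk together with its checksum, and the reader recomputes and compares before returning data. This is an illustrative stand-in using CRC32, not the patented PLFS implementation:

```python
# Toy per-chunk checksum verification: (data, checksum) stored together,
# checksum recomputed on every read to detect silent corruption.
import zlib

class ChunkStore:
    """Maps chunk ids to (data, checksum) pairs, verifying on read."""
    def __init__(self):
        self._chunks = {}

    def write(self, chunk_id, data: bytes):
        # The client-side checksum travels with the chunk to the store.
        self._chunks[chunk_id] = (data, zlib.crc32(data))

    def read(self, chunk_id) -> bytes:
        data, stored = self._chunks[chunk_id]
        if zlib.crc32(data) != stored:
            raise IOError(f"checksum mismatch on chunk {chunk_id!r}")
        return data

store = ChunkStore()
store.write(("shared_object", 0), b"first chunk of the shared object")
payload = store.read(("shared_object", 0))  # verified on read

# Simulate silent corruption of the stored bytes; the stale checksum
# no longer matches, so the next read detects the damage.
data, crc = store._chunks[("shared_object", 0)]
store._chunks[("shared_object", 0)] = (b"corrupted" + data[9:], crc)
try:
    store.read(("shared_object", 0))
    detected = False
except IOError:
    detected = True
```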

  4. The Design and Realization of a Test File Uploading System

    Institute of Scientific and Technical Information of China (English)

    李太凤

    2014-01-01

Computer-based examinations produce a large number of exam files that must be submitted. The exam file uploading system obtains information about the files submitted by students through an HTML form, uploads the files to the server using an ASP component-free upload class, and performs operations such as file renaming, achieving unified management of the exam files. By applying suitable programming methods and techniques, exam file uploading systems that meet actual needs can be designed, replacing the traditional way of submitting exam files, lightening teachers' workload, and improving efficiency.
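The server-side renaming step described in this record can be sketched generically in Python (the ASP upload class mentioned in the abstract is not shown). The naming scheme here, a student ID plus a short content hash, is an illustrative assumption chosen to make stored names unique:

```python
# Generic "rename on upload" sketch: store each submitted exam file
# under a unified, collision-free name derived from the student id
# and a hash of the file's contents.
import hashlib
import os
import tempfile

def store_submission(upload_dir, student_id, original_name, content: bytes):
    """Save one uploaded exam file under a unified, collision-free name."""
    ext = os.path.splitext(original_name)[1]           # keep original extension
    digest = hashlib.sha256(content).hexdigest()[:12]  # short content hash
    safe_name = f"{student_id}_{digest}{ext}"
    with open(os.path.join(upload_dir, safe_name), "wb") as f:
        f.write(content)
    return safe_name

with tempfile.TemporaryDirectory() as d:
    name = store_submission(d, "s20240017", "answers.docx", b"exam answers")
    stored = os.listdir(d)
    print("stored as:", name)
```

Deriving the name from the content hash also means a resubmission of identical content maps to the same stored name, which simplifies unified management.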

  5. Fine-Resolution Modeling of the Santa Cruz and San Pedro River Basins for Climate Change and Riparian System Studies

    Science.gov (United States)

    Robles-Morua, A.; Vivoni, E. R.; Volo, T. J.; Rivera, E. R.; Dominguez, F.; Meixner, T.

    2011-12-01

This project is part of a multidisciplinary effort aimed at understanding the impacts of climate variability and change on the ecological services provided by riparian ecosystems in semiarid watersheds of the southwestern United States. Valuing the environmental and recreational services provided by these ecosystems in the future requires a numerical simulation approach to estimate streamflow in ungauged tributaries as well as diffuse and direct recharge to groundwater basins. In this work, we utilize a distributed hydrologic model known as the TIN-based Real-time Integrated Basin Simulator (tRIBS) in the upper Santa Cruz and San Pedro basins with the goal of generating simulated hydrological fields that will be coupled to a riparian groundwater model. With the distributed model, we will evaluate a set of climate change and population scenarios to quantify future conditions in these two river systems and their impacts on flood peaks, recharge events and low flows. Here, we present a model confidence building exercise based on high performance computing (HPC) runs of the tRIBS model in both basins during the period of 1990-2000. Distributed model simulations utilize best-available data across the US-Mexico border on topography, land cover and soils obtained from analysis of remotely-sensed imagery and government databases. Meteorological forcing over the historical period is obtained from a combination of sparse ground networks and weather radar rainfall estimates. We then focus on a comparison of simulation runs using ground-based forcing with cases where the Weather Research Forecast (WRF) model is used to specify the historical conditions. Two spatial resolutions are considered from the WRF model fields: a coarse (35-km) and a downscaled (10-km) forcing. Comparisons will focus on the distribution of precipitation, soil moisture, runoff generation and recharge and assess the value of the WRF coarse and downscaled products.
These results provide confidence in

  6. Applying Integrated ITS Technologies to Parking Management Systems: A Transit-Based Case Study in the San Francisco Bay Area

    OpenAIRE

    Shaheen, Susan; Rodier, Caroline J.; Eaken, Amanda M.

    2004-01-01

California Partners for Advanced Transit and Highways has teamed with the California Department of Transportation, the Bay Area Rapid Transit (BART) District, ParkingCarma™, and Quixote Corporation to launch a smart parking research demonstration at the Rockridge BART station in the East San Francisco Bay Area (California, USA). The results of an extensive literature review demonstrate that different smart parking applications implemented worldwide can ease traveler delays, increase transit...

  7. A brief history of oil and gas exploration in the southern San Joaquin Valley of California: Chapter 3 in Petroleum systems and geologic assessment of oil and gas in the San Joaquin Basin Province, California

    Science.gov (United States)

    Takahashi, Kenneth I.; Gautier, Donald L.

    2007-01-01

The Golden State got its nickname from the Sierra Nevada gold that lured so many miners and settlers to the West, but California has earned much more wealth from so-called “black gold” than from metallic gold. The San Joaquin Valley has been the principal source of most of the petroleum produced in the State during the past 145 years. In attempting to assess future additions to petroleum reserves in a mature province such as the San Joaquin Basin, it helps to be mindful of the history of resource development. In this chapter we present a brief overview of the long and colorful history of petroleum exploration and development in the San Joaquin Valley. This chapter relies heavily upon the work of William Rintoul, who wrote extensively on the history of oil and gas exploration in California and especially in the San Joaquin Valley; no report on this history would be possible without heavily referencing his publications. We also made use of publications by Susan Hodgson and a U.S. Geological Survey Web site, Natural Oil and Gas Seeps in California (http://seeps.wr.usgs.gov/seeps/index.html), for much of the material describing the use of petroleum by Native Americans in the San Joaquin Valley. We also wish to acknowledge the contribution of Don Arnot, who manages the photograph collection at the West Kern Oil Museum in Taft, California; the collection consists of more than 10,000 photographs that have been scanned and preserved in digital form on CD-ROM, and many of the historical photographs used in this paper are from that collection. Finally, to clarify our terminology, we use the term “San Joaquin Valley” when we refer to the geographical or topographical feature and the term “San Joaquin Basin” when we refer to the geological province and the rocks therein.

  8. Visual system for retrieval and combination of information for ENDF (Evaluated Nuclear Data File) format libraries

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Claudia A.S. Velloso [Centro Tecnico Aeroespacial, Sao Jose dos Campos, SP (Brazil); Corcuera, Raquel A. Paviotti [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil)

    1997-04-01

This report presents a data retrieval and merging system for ENDF (Evaluated Nuclear Data File) format libraries, which can be run on personal computers under the Windows™ environment. The input is the name of an ENDF/B library, which can be chosen in a proper window. The system has a display function that allows the user to visualize the reaction data of a specific nuclide and to produce a printed copy of these data. The system allows the user to retrieve and/or combine evaluated data to create a single file of data in ENDF format from a number of different files, each of which is in the ENDF format. The user can also create a mini-library from an ENDF/B library. This interactive and easy-to-handle system is a useful tool for Nuclear Data Centers and is also of interest to nuclear and reactor physics researchers. (author)

  9. SFFS: Low-Latency Small-File-Oriented Distributed File System

    Institute of Scientific and Technical Information of China (English)

    王鲁俊; 龙翔; 吴兴博; 王雷

    2014-01-01

SNS (social networking services) and e-commerce services have developed rapidly and need to store enormous numbers of small files such as pictures, music files, and microblog texts. Traditional distributed storage systems, such as HDFS (Hadoop distributed file system), are designed for large files and suffer from problems such as excessive metadata overhead and high latency when dealing with large numbers of small files, so they do not suit application environments that store massive numbers of small files. This paper analyzes the architecture and read-write flow of TFS (Taobao file system) and finds that TFS has to build several network connections when writing or reading a small file, which increases read-write latency. Aiming at the challenge of storing numerous small files and the problems of TFS, this paper proposes SFFS (small-file file system), a low-latency, highly available small-file-oriented distributed storage system. Performance experiments show that, compared with TFS, SFFS decreases write latency by 76.6% and read latency by about 10%. SFFS also has higher availability than TFS, since the center node in SFFS carries a lighter load and can recover more quickly.

  10. SU-E-T-576: Evaluation of Patient Specific VMAT QA Using Dynalog Files and Treatment Planning System

    Energy Technology Data Exchange (ETDEWEB)

    Defoor, D; Stathakis, S; Mavroidis, P; Papanikolaou, N [University of Texas Health Science Center, UTHSCSA, San Antonio, TX (United States)

    2014-06-01

Purpose: This research investigates the use of Multi-leaf Collimator (MLC) dynalog files to modify a Volumetric Arc Therapy (VMAT) DICOM Radiotherapy Treatment file from the Treatment Planning System (TPS) for quality assurance and treatment plan verification. Methods: Actual MLC positions and gantry angles were retrieved from the MLC dynalog files of an approved and treated VMAT plan. The treatment machine used was a Novalis TX linac equipped with a high-definition MLC. The DICOM RT file of the plan was exported from the TPS (Eclipse, Varian Medical Systems), and the actual MLC leaf positions and gantry angles were inserted in place of the planned positions for each control point. The modified DICOM RT file was then imported back into the TPS, where dose calculations were performed. The resulting dose distributions were then exported to VeriSoft (PTW), where a 3D gamma was calculated using 3 mm-3% and 2 mm-2% criteria. A 2D gamma was also calculated using dose measurements on the Delta4 (ScandiDos) phantom. Results: The 3D gamma calculated in VeriSoft was 99.5% at 3 mm-3% and 99.2% at 2 mm-2%. The pretreatment verification on the Delta4 yielded a 2D gamma of 97.9% at 3 mm-3% and 88.5% at 2 mm-2%. The dose volume histograms of the approved plan and the dynalog plan are virtually identical. Conclusion: Initial results show good agreement of the dynalog dose distribution with the approved plan. Future work will aim to increase the number of patients and replace the planned fractionated dose per control point with the actual fractionated dose.

  11. The San Bernabe power substation; La subestacion San Bernabe

    Energy Technology Data Exchange (ETDEWEB)

    Chavez Sanudo, Andres D. [Luz y Fuerza del Centro, Mexico, D. F. (Mexico)

    1997-12-31

The first planning studies that gave rise to the San Bernabe substation go back to 1985. The main circumstance supporting this decision was the gradual restriction on electric power generation experienced by the Miguel Aleman Hydro System, until its complete disappearance, in order to give priority to the potable water supply through the Cutzamala pumping system, which is a major source for Mexico City and the State of Mexico. In this document the author describes the construction project of the San Bernabe substation, mentions the technological experience gained during its construction, and shows its geographical location along with its one-line diagram.

  12. A LiDAR Survey of an Exposed Magma Plumbing System in the San Rafael Desert, Utah

    Science.gov (United States)

    Richardson, J. A.; Kinman, S.; Connor, L.; Connor, C.; Wetmore, P. H.

    2013-12-01

    Fields of dozens to hundreds of volcanoes are a common occurrence on Earth and are created due to distributed-style volcanism often referred to as "monogenetic." These volcanic fields represent a significant hazard on both local and regional scales. While it is important to understand the physical states of active volcanic fields, it is difficult or impossible to directly observe active magma emplacement. Because of this, observing an exposed magmatic plumbing system may enable further efforts to describe active volcanic fields. The magmatic plumbing system of a Pliocene-aged monogenetic volcanic field is currently exposed as a sill and dike swarm in the San Rafael Desert of Central Utah. Alkali diabase and shonkinitic sills and dikes in this region intruded into Mesozoic sedimentary units of the Colorado Plateau and now make up the most erosion resistant units, forming mesas, ridges, and small peaks associated with sills, dikes, and plug-like bodies respectively. Diez et al. (Lithosphere, 2009) and Kiyosugi et al. (Geology, 2012) provide evidence that each cylindrical plug-like body represents a conduit that once fed one volcano. The approximate original depth of the currently exposed swarm is estimated to be 0.8 km. Volcanic and sedimentary materials may be discriminated at very high resolution with the use of Light Detection and Ranging (LiDAR). LiDAR produces a three dimensional point cloud, where each point has an associated return intensity. High resolution, bare earth digital elevation models (DEMs) can be produced after vegetation is identified and removed from the dataset. The return intensity at each point can enable classification as either sedimentary or volcanic rock. A Terrestrial LiDAR Survey (TLS) has been carried out to map a large hill with at least one volcanic conduit at its core. This survey implements a RIEGL VZ-400 3D Laser Scanner, which successfully maps solid objects in line-of-sight and within 600 meters. The laser used has a near

  13. 48 CFR 1404.802 - Contract files.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Contract files. 1404.802 Section 1404.802 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.802 Contract files. In addition to the requirements in FAR 4.802, files...

  14. Increasing Social–Ecological Resilience by Placing Science at the Decision Table: the Role of the San Pedro Basin (Arizona) Decision-Support System Model

    Directory of Open Access Journals (Sweden)

    Tim Finan

    2009-06-01

    Full Text Available We have analyzed how the collaborative development process of a decision-support system (DSS) model can effectively contribute to increasing the resilience of regional social–ecological systems. In particular, we have focused on the case study of the transboundary San Pedro Basin, in the Arizona-Sonora desert region. This is a semi-arid watershed where water is a scarce resource used to cover competing human and environmental needs. We have outlined the essential traits in the development of the decision-support process that contributed to an improvement of water-resources management capabilities while increasing the potential for consensual problem solving. Comments and feedback from the stakeholders benefiting from the DSS in the San Pedro Basin are presented and analyzed within the regional (United States–Mexico boundary), social, and institutional context. We have indicated how multidisciplinary collaboration between academia and stakeholders can be an effective step toward collaborative management. Such technology transfer and capacity building provides a common arena for testing water-management policies and evaluating future scenarios. Putting science at the service of a participatory decision-making process can provide adaptive capacity to accommodate future change (i.e., building resilience) in the management system.

  15. Seafloor character--Offshore of San Gregorio, California

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This part of SIM 3306 presents data for the seafloor-character map (see sheet 5, SIM 3306) of the Offshore of San Gregorio map area, California. The raster data file...

  16. 77 FR 16220 - Spartanburg Water System; Notice of Preliminary Permit Application Accepted for Filing and...

    Science.gov (United States)

    2012-03-20

    ...'s existing tailrace; (6) a 1,100-foot-long, 4.16 kilo-volt (KV) transmission line to; (7) a proposed... 4.36. Comments, motions to intervene, notices of intent, and competing applications may be filed...

  17. Integration of bed characteristics, geochemical tracers, current measurements, and numerical modeling for assessing the provenance of beach sand in the San Francisco Bay Coastal System

    Science.gov (United States)

    Barnard, Patrick L.; Foxgrover, Amy C.; Elias, Edwin P.L.; Erikson, Li H.; Hein, James R.; McGann, Mary; Mizell, Kira; Rosenbauer, Robert J.; Swarzenski, Peter W.; Takesue, Renee K.; Wong, Florence L.; Woodrow, Donald L.; Barnard, P.L.; Jaffee, B.E.; Schoellhamer, D.H.

    2013-01-01

    Over 150 million m3 of sand-sized sediment has disappeared from the central region of the San Francisco Bay Coastal System during the last half century. This enormous loss may reflect numerous anthropogenic influences, such as watershed damming, bay-fill development, aggregate mining, and dredging. The reduction in Bay sediment also appears to be linked to a reduction in sediment supply and recent widespread erosion of adjacent beaches, wetlands, and submarine environments. A unique, multi-faceted provenance study was performed to definitively establish the primary sources, sinks, and transport pathways of beach-sized sand in the region, thereby identifying the activities and processes that directly limit supply to the outer coast. This integrative program is based on comprehensive surficial sediment sampling of the San Francisco Bay Coastal System, including the seabed, Bay floor, area beaches, adjacent rock units, and major drainages. Analyses of sample morphometrics and biological composition (e.g., Foraminifera) were then integrated with a suite of tracers including 87Sr/86Sr and 143Nd/144Nd isotopes, rare earth elements, semi-quantitative X-ray diffraction mineralogy, and heavy minerals, and with process-based numerical modeling, in situ current measurements, and bedform asymmetry to robustly determine the provenance of beach-sized sand in the region.

  18. Research on Hadoop-Based Distributed Storage Technology for Small Files

    Institute of Scientific and Technical Information of China (English)

    袁晓春

    2014-01-01

    HDFS (Hadoop Distributed File System), with its high fault tolerance and high scalability, allows users to deploy Hadoop on inexpensive hardware and is widely used for storing large files. For massive numbers of small files, however, the metadata memory overhead becomes excessive, which places higher demands on the storage layer. Building on the HDFS architecture, this paper studies processing strategies for small files under Hadoop and, through experiments, compares their read/write and processing speeds with those of a traditional file system.
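The usual remedy for the small-file metadata overhead the record describes is to pack many small files into one large container with an index, in the spirit of Hadoop's HAR or SequenceFile formats. The sketch below is a plain-Python illustration of that packing idea, not the Hadoop API and not the paper's specific strategy.

```python
# Sketch of the common small-file remedy: pack many small files into one
# large blob with an index mapping name -> (offset, length), so one large
# object replaces thousands of per-file metadata entries (the idea behind
# Hadoop HAR / SequenceFile; plain Python here, not the Hadoop API).
import io

def pack(files):
    """files: dict name -> bytes. Returns (blob, index)."""
    buf, index = io.BytesIO(), {}
    for name, data in files.items():
        index[name] = (buf.tell(), len(data))  # record position before writing
        buf.write(data)
    return buf.getvalue(), index

def unpack(blob, index, name):
    off, length = index[name]
    return blob[off:off + length]

blob, idx = pack({"a.txt": b"hello", "b.txt": b"world!"})
```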

  19. Trouble Brewing in San Diego. Policy Brief

    Science.gov (United States)

    Buck, Stuart

    2010-01-01

    The city of San Diego will face enormous budgetary pressures from the growing deficits in public pensions, both at the state and local level. In this policy brief, the author estimates that San Diego faces a total of $45.4 billion, including $7.95 billion for the county pension system, $5.4 billion for the city pension system, and an estimated $30.7…

  20. Polluted File Detection Method in P2P File-Sharing Systems

    Institute of Scientific and Technical Information of China (English)

    黄智勇; 石幸利; 周喜川; 陈新龙

    2012-01-01

    To detect the spread of polluted files in P2P file-sharing systems, this paper proposes a detection method based on a contact tracing tree built on top of a reputation mechanism. By tracking a file's propagation path, a contact tracing tree is constructed from the reputations of the nodes involved; analyzing the topology of this tree yields the probability that the propagated file is polluted, thereby detecting polluted files. Experimental results show that the method effectively improves detection accuracy and reduces the system's false-alarm rate.
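The record does not give the paper's actual formula, so the following is only an illustrative stand-in: walk a contact tracing tree and let the estimated pollution probability grow as the mean reputation of the nodes on the propagation path falls.

```python
# Illustrative sketch only (the paper's exact rule is not given): estimate
# a file's pollution probability from the mean reputation of the nodes on
# its contact tracing tree.
def pollution_probability(tree, root="root"):
    """tree: dict node -> (reputation in [0, 1], list of child nodes)."""
    reps, stack = [], [root]
    while stack:                       # iterative depth-first walk
        node = stack.pop()
        rep, children = tree[node]
        reps.append(rep)
        stack.extend(children)
    return 1.0 - sum(reps) / len(reps)

# Toy tree: a low-reputation subtree raises suspicion about the file.
tree = {
    "root": (0.9, ["n1", "n2"]),
    "n1": (0.2, []),
    "n2": (0.1, []),
}
p = pollution_probability(tree)
```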

  1. Construction and Porting of an Embedded Linux File System

    Institute of Scientific and Technical Information of China (English)

    王小妮

    2015-01-01

    The root file system, an essential component of an embedded Linux system, is built here on the S3C6410 processor. The components of the root file system and its file types are introduced, a basic root file system is constructed with the Busybox toolset, and a method for quickly porting the file system is presented, providing a reference for porting file systems to other processors.

  2. Retardations in fault creep rates before local moderate earthquakes along the San Andreas fault system, central California

    Science.gov (United States)

    Burford, R.O.

    1988-01-01

    Records of shallow aseismic slip (fault creep) obtained along parts of the San Andreas and Calaveras faults in central California demonstrate that significant changes in creep rates often have been associated with local moderate earthquakes. An immediate postearthquake increase followed by gradual, long-term decay back to a previous background rate is generally the most obvious earthquake effect on fault creep. This phenomenon, identified as aseismic afterslip, usually is characterized by above-average creep rates for several months to a few years. In several cases, minor step-like movements, called coseismic slip events, have occurred at or near the times of mainshocks. One extreme case of coseismic slip, recorded at Cienega Winery on the San Andreas fault 17.5 km southeast of San Juan Bautista, consisted of 11 mm of sudden displacement coincident with earthquakes of ML=5.3 and ML=5.2 that occurred 2.5 minutes apart on 9 April 1961. At least one of these shocks originated on the main fault beneath the winery. Creep activity subsequently stopped at the winery for 19 months, then gradually returned to a nearly steady rate slightly below the previous long-term average. The phenomena mentioned above can be explained in terms of simple models consisting of relatively weak material along shallow reaches of the fault responding to changes in load imposed by sudden slip within the underlying seismogenic zone. In addition to coseismic slip and afterslip phenomena, however, pre-earthquake retardations in creep rates also have been observed. Onsets of significant, persistent decreases in creep rates have occurred at several sites 12 months or more before the times of moderate earthquakes. A 44-month retardation before the 1979 ML=5.9 Coyote Lake earthquake on the Calaveras fault was recorded at the Shore Road creepmeter site 10 km northwest of Hollister. 
Creep retardation on the San Andreas fault near San Juan Bautista has been evident in records from one creepmeter site for

  3. User's guide for MODTOOLS: Computer programs for translating data of MODFLOW and MODPATH into geographic information system files

    Science.gov (United States)

    Orzol, Leonard L.

    1997-01-01

    MODTOOLS is a set of computer programs for translating data of the ground-water model, MODFLOW, and the particle-tracker, MODPATH, into a Geographic Information System (GIS). MODTOOLS translates data into a GIS software called ARC/INFO. MODFLOW is the recognized name for the U.S. Geological Survey Modular Three-Dimensional Finite-Difference Ground-Water Model. MODTOOLS uses the data arrays input to or output by MODFLOW during a ground-water flow simulation to construct several types of GIS output files. MODTOOLS can also be used to translate data from MODPATH into GIS files. MODPATH and its companion program, MODPATH-PLOT, are collectively called the U.S. Geological Survey Three-Dimensional Particle Tracking Post-Processing Programs. MODPATH is used to calculate ground-water flow paths using the results of MODFLOW and MODPATH-PLOT can be used to display the flow paths in various ways.
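MODTOOLS itself targets ARC/INFO, but the core idea, turning a 2-D model array into a raster a GIS can read, can be sketched in a few lines. The example below writes a toy head array as an ESRI ASCII grid, a widely supported plain-text raster format; the grid origin and cell size are placeholder values.

```python
# Sketch of the MODTOOLS-style idea: export a 2-D model array (e.g.,
# simulated heads from a MODFLOW run) as an ESRI ASCII grid, a plain-text
# raster format most GIS packages import. Origin and cell size are assumed.
def write_ascii_grid(path, array, xll=0.0, yll=0.0, cellsize=100.0, nodata=-9999):
    nrows, ncols = len(array), len(array[0])
    with open(path, "w") as f:
        f.write(f"ncols {ncols}\n")
        f.write(f"nrows {nrows}\n")
        f.write(f"xllcorner {xll}\n")
        f.write(f"yllcorner {yll}\n")
        f.write(f"cellsize {cellsize}\n")
        f.write(f"NODATA_value {nodata}\n")
        for row in array:                       # one raster row per line
            f.write(" ".join(str(v) for v in row) + "\n")

heads = [[10.0, 10.5], [9.8, 10.1]]  # toy 2x2 head array
write_ascii_grid("heads.asc", heads)
```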

  4. Use of a Geographic Information System and lichens to map air pollution in a tropical city: San José, Costa Rica

    Directory of Open Access Journals (Sweden)

    Erich Neurohr Bustamante

    2013-06-01

    Full Text Available There are no studies of air pollution bio-indicators based on Geographic Information Systems (GIS for Costa Rica. In this study we present the results of a project that analyzed tree trunk lichens as bioindicators of air pollution in 40 urban parks located along the passage of wind through the city of San Jose in 2008 and 2009. The data were processed with GIS and are presented in an easy to understand color coded isoline map. Our results are consistent with the generally accepted view that lichens respond to the movement of air masses, decreasing their cover in the polluted areas. Furthermore, lichen cover matched the concentration of atmospheric nitrogen oxides from a previous study of the same area. Our maps should be incorporated to urban regulatory plans for the city of San José to zone the location of schools, hospitals and other facilities in need of clean air and to inexpensively assess the risk for breast cancer and respiratory diseases in several neighborhoods throughout the city.

  5. Exploring proximity based peer clustering in BitTorrent-like Peer-to-Peer file sharing systems

    Institute of Scientific and Technical Information of China (English)

    Yu Jiadi; Li Minglu

    2008-01-01

    A hierarchical clustered BitTorrent (CBT) system is proposed to improve the file sharing performance of the BitTorrent system, in which peers are grouped into clusters in a large-scale BitTorrent-like underlying overlay network in such a way that clusters are evenly distributed and that the peers within the cluster are relatively close to each other. A fluid model is developed to compare the performance of the proposed CBT system with the BitTorrent system, and the result shows that the CBT system can effectively improve the performance of the system. Simulation results also demonstrate that the CBT system improves the system scalability and efficiency while retaining the robustness and incentives of the original BitTorrent paradigm.
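The record says CBT groups peers so that members of a cluster are close to each other. The grouping rule below is our assumption, not the paper's algorithm: each peer joins the first cluster whose head it can reach within an RTT threshold, otherwise it founds a new cluster.

```python
# Hypothetical proximity-clustering sketch in the spirit of CBT (the rule
# is assumed, not taken from the paper): join the first cluster whose head
# is within an RTT threshold, else start a new cluster.
RTT_THRESHOLD_MS = 50.0

def cluster_peers(rtt):
    """rtt: dict (a, b) -> round-trip time in ms (one entry per pair)."""
    clusters = []  # list of (head, members)
    for peer in sorted({p for pair in rtt for p in pair}):
        for head, members in clusters:
            d = rtt.get((head, peer), rtt.get((peer, head), float("inf")))
            if d < RTT_THRESHOLD_MS:
                members.append(peer)
                break
        else:                                  # no nearby head found
            clusters.append((peer, [peer]))
    return clusters

clusters = cluster_peers({("a", "b"): 10.0, ("a", "c"): 200.0, ("b", "c"): 210.0})
```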

  6. Analysis and Design of Access Control in a Network File System for IMA Systems

    Institute of Scientific and Technical Information of China (English)

    段海军; 叶宏; 雷清; 郭勇; 张鹏

    2011-01-01

    To solve the problem of access control in a network file system for IMA (Integrated Modular Avionics) systems, we analyze the access-control requirements and propose a design scheme. A network file lock realizes mutually exclusive access to remote files by multiple partitions through locking and unlocking; an access-control module verifies a user's rights, so that a file can be accessed only after verification passes; and log files record the entire process of accessing remote files. The working principles of the network file lock, the permission-control module, and the logging module are presented.
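The lock/unlock discipline described above is not specific to the paper's IMA implementation; the same mutual-exclusion pattern can be shown with POSIX advisory file locks. This is a generic sketch, not the paper's design: a "partition" takes an exclusive lock before touching the shared file and releases it afterwards.

```python
# Generic POSIX sketch of the lock-then-access discipline (not the paper's
# IMA implementation): take an exclusive advisory lock on the shared file,
# perform the operation, then unlock so other partitions may proceed.
import fcntl
import os

def with_exclusive_lock(path, payload):
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)   # block until we own the file
        os.write(fd, payload)
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)   # release for other partitions
        os.close(fd)

with_exclusive_lock("shared.dat", b"partition-1 record\n")
```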

  7. 吉林省高速公路竣工文件档案体系和档号编制研究%Preparation of Files System and Files Number of Completion Documents of Expressway in Jilin Province

    Institute of Scientific and Technical Information of China (English)

    高福

    2014-01-01

    The preparation and arrangement of completion document files of expressway has experienced a growing process in China. Based on introducing the current situation of files management of expressway in Jilin Province, the article teases the completion document system and files number preparation method of expressway in Jilin Province, explores the improvement measures of completion document preparation files and file number preparation of expressway in Jilin Province.%我国高速公路竣工文件档案编制整理经历了从无到有,逐渐成长的一个过程。在介绍吉林省高速公路档案管理现状的基础上,对吉林省高速公路竣工文件体系和档号编制方法进行梳理,探讨吉林省高速公路竣工文件档案编制体系和档号编制的改良办法。

  8. A Fault-Tolerant File Management Algorithm in the Distributed Computer System “THUDS”

    Institute of Scientific and Technical Information of China (English)

    廖先Shi; 金兰

    1989-01-01

    A concurrency control that keeps independent processes from simultaneously entering a critical section is discussed for the case of two distinct classes of processes, readers and writers. Readers can share the file with one another, but interleaved execution of readers and writers may produce undesirable conflicts. The file management algorithm proposed in this paper avoids such conflicts: it not only guarantees the consistency and integrity of the shared file but also supports optimal parallelism. The concept of a dynamic virtual queue is introduced and serves as the foundation of the algorithm, whose implicit redundancy supports software fault-tolerance techniques.
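The readers/writers discipline the record describes can be sketched with a classic reader-counting lock. This is the textbook readers-preference variant, not the paper's dynamic-virtual-queue algorithm: any number of readers share the file, while a writer needs exclusive access.

```python
# Textbook readers/writers sketch (readers-preference variant; the paper's
# dynamic virtual queue is a different, fairer scheme): readers share,
# writers exclude everyone.
import threading

class ReadWriteLock:
    def __init__(self):
        self._readers = 0
        self._lock = threading.Lock()        # guards the reader count
        self._write_lock = threading.Lock()  # held while anyone writes

    def acquire_read(self):
        with self._lock:
            self._readers += 1
            if self._readers == 1:
                self._write_lock.acquire()   # first reader blocks writers

    def release_read(self):
        with self._lock:
            self._readers -= 1
            if self._readers == 0:
                self._write_lock.release()   # last reader admits writers

    def acquire_write(self):
        self._write_lock.acquire()

    def release_write(self):
        self._write_lock.release()

rw = ReadWriteLock()
rw.acquire_read(); rw.acquire_read()   # two concurrent readers are fine
rw.release_read(); rw.release_read()
rw.acquire_write()                     # now one writer holds exclusive access
rw.release_write()
```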

  9. San Carlo Operaen

    DEFF Research Database (Denmark)

    Holm, Bent

    2005-01-01

    The San Carlo opera house is placed within a cultural-historical context of representation, with particular attention to the concept of napolalità.

  10. Empowering file-based radio production through media asset management systems

    Science.gov (United States)

    Muylaert, Bjorn; Beckers, Tom

    2006-10-01

    In recent years, IT-based production and archiving of media has matured to a level which enables broadcasters to switch over from tape- or CD-based to file-based workflows for the production of their radio and television programs. This technology is essential for the future of broadcasters as it provides the flexibility and speed of execution the customer demands by enabling, among others, concurrent access and production, faster than real-time ingest, edit during ingest, centrally managed annotation and quality preservation of media. In terms of automation of program production, the radio department is the most advanced within the VRT, the Flemish broadcaster. Since a couple of years ago, the radio department has been working with digital equipment and producing its programs mainly on standard IT equipment. Historically, the shift from analogue to digital based production has been a step by step process initiated and coordinated by each radio station separately, resulting in a multitude of tools and metadata collections, some of them developed in-house, lacking integration. To make matters worse, each of those stations adopted a slightly different production methodology. The planned introduction of a company-wide Media Asset Management System allows a coordinated overhaul to a unified production architecture. Benefits include the centralized ingest and annotation of audio material and the uniform, integrated (in terms of IT infrastructure) workflow model. Needless to say, the ingest strategy, metadata management and integration with radio production systems play a major role in the level of success of any improvement effort. This paper presents a data model for audio-specific concepts relevant to radio production. It includes an investigation of ingest techniques and strategies. 
Cooperation with external, professional production tools is demonstrated through a use-case scenario: the integration of an existing, multi-track editing tool with a commercially available

  11. Always-optimally-coordinated candidate selection algorithm for peer-to-peer files sharing system in mobile self-organized networks

    Institute of Scientific and Technical Information of China (English)

    Li Xi; Ji Hong; Zheng Ruiming; Li Ting

    2009-01-01

    In order to improve the performance of peer-to-peer file-sharing systems under mobile distributed environments, a novel always-optimally-coordinated (AOC) criterion and a corresponding candidate selection algorithm are proposed in this paper. Compared with the traditional min-hops criterion, the new approach introduces a fuzzy knowledge combination theory to investigate several important factors that influence file transfer success rate and efficiency. Whereas min-hops based protocols only ask the nearest candidate peer for desired files, the selection algorithm based on AOC comprehensively considers users' preferences and network requirements with flexible balancing rules. Furthermore, its advantage also lies in its independence from any specific resource discovery protocol, allowing for scalability. The simulation results show that when using the AOC based peer selection algorithm, system performance is much better than with the min-hops scheme: the file transfer success rate improves by more than 50% and transfer time is reduced by at least 20%.
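The contrast drawn above, min-hops picks the nearest peer while AOC weighs several factors, can be illustrated with a simple weighted score. The factors and weights below are assumptions for illustration, not the paper's fuzzy combination rules.

```python
# Hypothetical illustration of the AOC idea: score candidates on several
# weighted, normalized factors instead of hop count alone. Factors and
# weights are assumptions, not the paper's fuzzy rules.
WEIGHTS = {"hops": -0.4, "bandwidth": 0.4, "stability": 0.2}

def score(peer):
    return sum(WEIGHTS[k] * peer[k] for k in WEIGHTS)

def select_peer(candidates):
    return max(candidates, key=score)

candidates = [
    # min-hops would pick this one; it is close but slow and flaky
    {"name": "near-but-slow", "hops": 0.1, "bandwidth": 0.2, "stability": 0.3},
    # the weighted score prefers this fast, stable peer despite its distance
    {"name": "far-but-fast", "hops": 0.8, "bandwidth": 0.9, "stability": 0.9},
]
best = select_peer(candidates)
```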

  12. Meant to make a difference, the clinical experience of minimally invasive endodontics with the self-adjusting file system in India.

    Science.gov (United States)

    Pawar, Ajinkya M; Pawar, Mansing G; Kokate, Sharad R

    2014-01-01

    The vital steps in any endodontic treatment are thorough mechanical shaping and chemical cleaning followed by obtaining a fluid tight impervious seal by an inert obturating material. For the past two decades, introduction and use of rotary nickel-titanium (Ni-Ti) files have changed our concepts of endodontic treatment from conventional to contemporary. They have reported good success rates, but still have many drawbacks. The Self-Adjusting File (SAF) introduces a new era in endodontics by performing the vital steps of shaping and cleaning simultaneously. The SAF is a hollow file in design that adapts itself three-dimensionally to the root canal and is a single file system, made up of Ni-Ti lattice. The case series presented in the paper report the clinical experience, while treating primary endodontic cases with the SAF system in India.

  13. Meant to make a difference, the clinical experience of minimally invasive endodontics with the self-adjusting file system in India

    Directory of Open Access Journals (Sweden)

    Ajinkya M Pawar

    2014-01-01

    Full Text Available The vital steps in any endodontic treatment are thorough mechanical shaping and chemical cleaning followed by obtaining a fluid tight impervious seal by an inert obturating material. For the past two decades, introduction and use of rotary nickel-titanium (Ni-Ti) files have changed our concepts of endodontic treatment from conventional to contemporary. They have reported good success rates, but still have many drawbacks. The Self-Adjusting File (SAF) introduces a new era in endodontics by performing the vital steps of shaping and cleaning simultaneously. The SAF is a hollow file in design that adapts itself three-dimensionally to the root canal and is a single file system, made up of Ni-Ti lattice. The case series presented in the paper report the clinical experience, while treating primary endodontic cases with the SAF system in India.

  14. File sharing

    NARCIS (Netherlands)

    van Eijk, N.

    2011-01-01

    ‘File sharing’ has become generally accepted on the Internet. Users share files for downloading music, films, games, software etc. In this note, we have a closer look at the definition of file sharing, the legal and policy-based context as well as enforcement issues. The economic and cultural impact

  15. Statistical analyses of hydrologic system components and simulation of Edwards aquifer water-level response to rainfall using transfer-function models, San Antonio region, Texas

    Science.gov (United States)

    Miller, Lisa D.; Long, Andrew J.

    2006-01-01

    In 2003 the U.S. Geological Survey, in cooperation with the San Antonio Water System, did a study using historical data to statistically analyze hydrologic system components in the San Antonio region of Texas and to develop transfer-function models to simulate water levels at selected sites (wells) in the Edwards aquifer on the basis of rainfall. Water levels for two wells in the confined zone in Medina County and one well in the confined zone in Bexar County were highly correlated and showed little or no lag time between water-level responses. Water levels in these wells also were highly correlated with springflow at Comal Springs. Water-level hydrographs for 35 storms showed that an individual well can respond differently to similar amounts of rainfall. Fourteen water-level-recession hydrographs for a Medina County well showed that recession rates were variable. Transfer-function models were developed to simulate water levels at one confined-zone well and two recharge-zone wells in response to rainfall. For the confined-zone well, 50 percent of the simulated water levels are within 10 feet of the measured water levels, and 80 percent of the simulated water levels are within 15 feet of the measured water levels. For one recharge-zone well, 50 percent of the simulated water levels are within 5 feet of the measured water levels, and 90 percent of the simulated water levels are within 14 feet of the measured water levels. For the other recharge-zone well, 50 percent of the simulated water levels are within 14 feet of the measured water levels, and 90 percent of the simulated water levels are within 27 feet of the measured water levels. The transfer-function models showed that (1) the Edwards aquifer in the San Antonio region responds differently to recharge (effective rainfall) at different wells; and (2) multiple flow components are present in the aquifer. 
If simulated long-term system response results from a change in the hydrologic budget, then water levels would
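The report's transfer-function models are calibrated against historical data; the sketch below only shows the general mechanics of such a model. It convolves an effective-rainfall series with an assumed exponentially decaying impulse response, with decay and gain values chosen purely for illustration.

```python
# Schematic rainfall-to-water-level transfer function (not the report's
# calibrated model): simulated response = effective rainfall convolved
# with an assumed exponentially decaying impulse response.
import math

def impulse_response(n_steps, decay=0.5, gain=2.0):
    return [gain * math.exp(-decay * k) for k in range(n_steps)]

def simulate_levels(rainfall, h):
    """Discrete convolution of rainfall with impulse response h."""
    out = []
    for t in range(len(rainfall)):
        out.append(sum(rainfall[t - k] * h[k] for k in range(min(t + 1, len(h)))))
    return out

rain = [0.0, 1.0, 0.0, 0.0, 0.5]            # effective rainfall per time step
levels = simulate_levels(rain, impulse_response(4))
```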

  16. Central nervous system tuberculosis in children: review of 35 cases at the Hospital Universitario San Vicente de Paúl in Medellín, Colombia.1997-2004. Meningoencefalitis tuberculosa en niños: Revisión de 35 casos en el Hospital Universitario San Vicente de Paúl en Medellín, Colombia. 1997-2004

    OpenAIRE

    José William Cornejo Ochoa; Dagoberto Nicanor Cabrera Hémer; Rodrigo Andrés Solarte Mila

    2005-01-01

    Objective. To document the clinical and diagnostic features and to explore factors associated with central nervous system tuberculosis at the “Hospital San Vicente de Paúl (HUSVP)” in Medellín, Colombia. Patients and methods. Review of the patient’s records to obtain information on demographic data, medical history, clinical manifestations, laboratory results, treatment and complications of 35 children with central nervous system tuberculosis admitted to the hospital between July 1997 and July ...

  17. The Design and Implementation of a Cross-Platform Parallel File System Based on a Heterogeneous Network of Workstations

    Institute of Scientific and Technical Information of China (English)

    赵欣; 陈道蓄; 谢立

    2000-01-01

    To improve the I/O performance of a parallel system, we designed and implemented a parallel file system based on a heterogeneous network of workstations (PFSHNOW). It is a cross-platform parallel file system with characteristics such as parallelism, efficiency, cross-platform operation, ease of management, and intelligence. This article describes the model of PFSHNOW and the advanced techniques used in the file system, and closes with the results of an evaluation.

  18. Study and Realization of a Real-Time File Mirroring System

    Institute of Scientific and Technical Information of China (English)

    丁原; 刘玉树; 朱天焕

    2001-01-01

    By designing a scheme for constructing a real-time file mirroring system, this article studies the key technologies in the construction process. The system is built in the Windows NT/Windows 2000 environment and, together with a cluster server system, can be used as a disaster recovery system.
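A mirroring system of the kind described reduces to detecting changed files and copying them to the mirror. The sketch below uses simple mtime polling; the paper's Windows NT implementation would instead hook file-change notifications, but the copy-on-change logic is the same idea.

```python
# Minimal copy-on-change mirroring sketch (polling on mtime; a production
# system like the paper's would use file-change notifications instead).
import os
import shutil

def sync_once(src_dir, dst_dir, last_mtimes):
    """Copy files from src_dir whose mtime changed since the last pass."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        src = os.path.join(src_dir, name)
        if not os.path.isfile(src):
            continue
        mtime = os.path.getmtime(src)
        if last_mtimes.get(name) != mtime:      # new or modified file
            shutil.copy2(src, os.path.join(dst_dir, name))
            last_mtimes[name] = mtime
    return last_mtimes

os.makedirs("primary", exist_ok=True)
with open(os.path.join("primary", "a.txt"), "w") as f:
    f.write("record 1\n")
seen = sync_once("primary", "mirror", {})
```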

  19. Creating Interactive Graphical Overlays in the Advanced Weather Interactive Processing System (AWIPS) Using Shapefiles and DGM Files

    Science.gov (United States)

    Barrett, Joe H., III; Lafosse, Richard; Hood, Doris; Hoeth, Brian

    2007-01-01

    Graphical overlays can be created in real-time in the Advanced Weather Interactive Processing System (AWIPS) using shapefiles or DARE Graphics Metafile (DGM) files. This presentation describes how to create graphical overlays on-the-fly for AWIPS, by using two examples of AWIPS applications that were created by the Applied Meteorology Unit (AMU). The first example is the Anvil Threat Corridor Forecast Tool, which produces a shapefile that depicts a graphical threat corridor of the forecast movement of thunderstorm anvil clouds, based on the observed or forecast upper-level winds. This tool is used by the Spaceflight Meteorology Group (SMG) and 45th Weather Squadron (45 WS) to analyze the threat of natural or space vehicle-triggered lightning over a location. The second example is a launch and landing trajectory tool that produces a DGM file that plots the ground track of space vehicles during launch or landing. The trajectory tool can be used by SMG and the 45 WS forecasters to analyze weather radar imagery along a launch or landing trajectory. Advantages of both file types will be listed.

  20. Patient Treatment File (PTF)

    Data.gov (United States)

    Department of Veterans Affairs — This database is part of the National Medical Information System (NMIS). The Patient Treatment File (PTF) contains a record for each inpatient care episode provided...

  1. Using surface creep rate to infer fraction locked for sections of the San Andreas fault system in northern California from alignment array and GPS data

    Science.gov (United States)

    Lienkaemper, James J.; McFarland, Forrest S.; Simpson, Robert W.; Caskey, S. John

    2014-01-01

    Surface creep rate, observed along five branches of the dextral San Andreas fault system in northern California, varies considerably from one section to the next, indicating that so too may the depth at which the faults are locked. We model locking on 29 fault sections using each section’s mean long‐term creep rate and the consensus values of fault width and geologic slip rate. Surface creep rate observations from 111 short‐range alignment and trilateration arrays and 48 near‐fault, Global Positioning System station pairs are used to estimate depth of creep, assuming an elastic half‐space model and adjusting depth of creep iteratively by trial and error to match the creep observations along fault sections. Fault sections are delineated either by geometric discontinuities between them or by distinctly different creeping behaviors. We remove transient rate changes associated with five large (M≥5.5) regional earthquakes. Estimates of fraction locked, the ratio of moment accumulation rate to loading rate, on each section of the fault system provide a uniform means to inform source parameters relevant to seismic‐hazard assessment. From its mean creep rates, we infer the main branch (the San Andreas fault) ranges from only 20%±10% locked on its central creeping section to 99%–100% on the north coast. From mean accumulation rates, we infer that four urban faults appear to have accumulated enough seismic moment to produce major earthquakes: the northern Calaveras (M 6.8), Hayward (M 6.8), Rodgers Creek (M 7.1), and Green Valley (M 7.1). The latter three faults are nearing or past their mean recurrence interval.
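The study estimates fraction locked by iteratively fitting an elastic half-space dislocation model; the arithmetic below is only a crude uniform-slip simplification of the same ratio (moment accumulation rate over loading rate), with toy numbers, not values from the paper.

```python
# Back-of-envelope "fraction locked" (our uniform-slip simplification, not
# the paper's iterative elastic-dislocation fit): moment accumulates on the
# part of the fault plane not relieved by shallow creep. The shear modulus
# cancels in the ratio, so it is omitted.
def fraction_locked(slip_rate, fault_width, creep_rate, creep_depth):
    loading = slip_rate * fault_width    # moment loading rate per unit strike length
    relieved = creep_rate * creep_depth  # moment released aseismically by creep
    return (loading - relieved) / loading

# Toy numbers: 20 mm/yr loading over a 12 km wide fault plane, 10 mm/yr
# creep confined to the upper 6 km.
f = fraction_locked(slip_rate=20.0, fault_width=12.0, creep_rate=10.0, creep_depth=6.0)
```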

  2. A Massive-Storage-Oriented File System Evaluation Benchmark

    Institute of Scientific and Technical Information of China (English)

    李鑫; 李战怀; 张晓

    2011-01-01

    To meet the need for file-system-level performance evaluation of massive storage systems, LZpack, a general file system benchmark conforming to POSIX.1, was developed. It provides an effective method for evaluating and comparing the performance of different file systems, and it offers a basis for performance evaluation to application designers who use the POSIX.1 file system API. With its clustered test architecture, LZpack can accurately evaluate both file I/O performance and file system metadata-operation performance. This paper describes the system structure and key design issues of LZpack, analyzes test results obtained with LZpack on different file systems, and proposes directions for its further improvement.
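At its core, a file I/O benchmark of the kind described times reads and writes through the standard file API. The toy single-node loop below shows only that core timing idea; LZpack itself runs coordinated clustered tests and also measures metadata operations.

```python
# Toy single-node analogue of a POSIX file I/O benchmark's core loop (not
# LZpack itself): time sequential write (with fsync) and sequential read,
# then report throughput.
import os
import time

def sequential_io(path, total_bytes, block_size=64 * 1024):
    block = b"\0" * block_size
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_bytes // block_size):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())                 # include time to reach stable storage
    write_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_size):
            pass
    read_s = time.perf_counter() - t0
    return {"write_MBps": total_bytes / write_s / 1e6,
            "read_MBps": total_bytes / read_s / 1e6}

stats = sequential_io("bench.dat", 4 * 1024 * 1024)
```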

  3. Computational system to create an entry file for replicating I-125 seeds simulating brachytherapy case studies using the MCNPX code

    Directory of Open Access Journals (Sweden)

    Leonardo da Silva Boia

    2014-03-01

    Full Text Available Purpose: A computational system was developed in the C++ programming language to create a 125I radioactive-seed entry file, based on the positioning of a virtual grid (template) in voxel geometries, for the purpose of performing prostate cancer treatment simulations using the MCNPX code. Methods: The system is fed with information from the planning system with regard to each seed’s location and its depth, and an entry file is automatically created with all the cards (instructions) for each seed regarding their cell blocks and surfaces spread out spatially in the 3D environment. The system provides a precise reproduction of the clinical scenario for the MCNPX code’s simulation environment, thereby allowing the technique’s in-depth study. Results and Conclusion: In order to validate the computational system, an entry file was created with 88 125I seeds that were inserted in the phantom’s MAX06 prostate region, with the initial activity of the seeds set to 0.27 mCi. Isodose curves were obtained in all the prostate slices in 5 mm steps in the 7 to 10 cm interval, totaling 7 slices. Variance-reduction techniques were applied in order to optimize computational time and reduce uncertainties, including photon and electron energy cutoffs at 4 keV and forced collisions in cells of interest. Through the acquisition of isodose curves, the results obtained show that hot spots have values above 300 Gy, as anticipated in the literature, stressing the importance of the sources’ correct positioning, which the computational system developed provides, in order not to release excessive doses in adjacent organs at risk. The 144 Gy prescription curve showed in the validation process that it covers perfectly a large percentage of the volume, at the same time that it demonstrates a large

  4. 78 FR 16365 - Foreign Trade Regulations: Mandatory Automated Export System Filing for All Shipments Requiring...

    Science.gov (United States)

    2013-03-14

    ... national interest. In August 2003, the Census Bureau, in agreement with U.S. Customs and Border Protection... and CBP plan to continue the moratorium on accepting new applications pending the development of a... the equipment number may not be available at the time of filing and as a result would create a...

  5. 76 FR 24883 - DNB Exports LLC, and AFI Elektromekanikanik Ve Elektronik San. Tic. Ltd. Sti. v. Barsan Global...

    Science.gov (United States)

    2011-05-03

    ... DNB Exports LLC, and AFI Elektromekanikanik Ve Elektronik San. Tic. Ltd. Sti. v. Barsan Global Lojistiks Ve Gumruk Musavirligi A.S., Barsan International, Inc., and Impexia Inc.; Notice of Filing of... Commission (``Commission'') by DNB Exports LLC (``DNB''), and AFI Elektromekanikanik Ve Elektronik San....

  6. 78 FR 4981 - Pacific Imperial Railroad, Inc.-Change in Operator Exemption-Rail Line of San Diego and Arizona...

    Science.gov (United States)

    2013-01-23

    ... San Diego and Arizona Eastern Railway Company Pacific Imperial Railroad, Inc. (PIR), a noncarrier, has filed a verified notice of exemption under 49 CFR 1150.31 to change operators from San Diego & Imperial... Diego and Arizona Eastern Railway Company (SD&AE). The change in operators for the line is being...

  7. The Implementation of Journaling File System on Embedded Memory Device

    Institute of Scientific and Technical Information of China (English)

    郑良辰; 孙玉芳

    2002-01-01

    In embedded systems, unexpected power-off often causes corruption of the file system and loss of data. It is necessary to develop a special kind of file system to prevent such corruption. As a journaling file system designed specifically for embedded memory devices, JFFS is just the file system we need. In order to make JFFS more widely usable, Redflag Software Co., Ltd. has successfully solved the problem of JFFS's implementation on DiskOnChip, a special kind of embedded memory device. This paper mainly discusses the design of JFFS and its implementation on DiskOnChip.

  8. A method of building an embedded Linux root file system

    Institute of Scientific and Technical Information of China (English)

    刘二钢

    2016-01-01

    The root file system is a very important part of building an embedded Linux system. Taking the creation of a Yaffs2 root file system as an example, this paper studies how to use BusyBox to build an embedded Linux root file system, including BusyBox configuration, compilation, and installation, as well as the method of generating the root file system image file in the embedded Linux environment. The method introduced in the paper has been successfully ported to and run on an ARM development board, and offers a simple and feasible approach for the development of embedded systems.
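The directory-skeleton step of building such a root file system can be sketched in Python. This is a hedged illustration only: BusyBox itself is configured and compiled separately (e.g. `make menuconfig`, `make`, `make install`), and the image is produced afterwards with a tool such as mkyaffs2image.

```python
import os, tempfile

def make_rootfs_skeleton(root):
    """Create the directory layout a BusyBox-based root file system expects."""
    for d in ("bin", "sbin", "etc/init.d", "dev", "proc", "sys", "tmp", "usr"):
        os.makedirs(os.path.join(root, d), exist_ok=True)
    # /etc/inittab: BusyBox init runs rcS at boot, then offers a shell.
    with open(os.path.join(root, "etc", "inittab"), "w") as f:
        f.write("::sysinit:/etc/init.d/rcS\n::askfirst:-/bin/sh\n")
    # rcS mounts the pseudo file systems the kernel and BusyBox tools need.
    rcs = os.path.join(root, "etc", "init.d", "rcS")
    with open(rcs, "w") as f:
        f.write("#!/bin/sh\nmount -t proc none /proc\nmount -t sysfs none /sys\n")
    os.chmod(rcs, 0o755)
    return sorted(os.listdir(root))

demo = os.path.join(tempfile.mkdtemp(), "rootfs")
print(make_rootfs_skeleton(demo))
# ['bin', 'dev', 'etc', 'proc', 'sbin', 'sys', 'tmp', 'usr']
```

The BusyBox binaries installed by `make install` would then be copied into `bin/` and `sbin/` before the image is generated.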

  9. A Design of the Files Management System Based on RFID

    Institute of Scientific and Technical Information of China (English)

    李敬红

    2011-01-01

    This paper deals with security problems existing in secret file management, especially for paper-medium files, and designs and realizes a secret file management system based on RFID technology. By means of file monitoring and safety alarm measures, the system achieves efficient and safe management of secret files and decreases the incidence of leakage cases, ultimately realizing the idea of monitoring and managing paper-medium secret files with automated equipment.

  10. Secure Storage Technology Based on Virtual File System

    Institute of Scientific and Technical Information of China (English)

    崔奇

    2013-01-01

    This paper describes an encrypted storage method based on a virtual file system. We design and implement a virtual file system through file system development technology. The virtual file system maps a binary file to a virtual disk and can store and retrieve data on that virtual disk. In order to ensure the security of the files on the virtual disk, we integrate encryption engine, key management, and hardware binding modules into the virtual file system. This secure storage technology is easy to use and provides efficient, real-time encryption/decryption, so it has practical value.
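A minimal sketch of the mapping described above, under an assumed sector-oriented design (not the paper's implementation): a binary file is treated as a virtual disk, and every sector is transparently encrypted on write and decrypted on read. The SHA-256 counter-mode keystream here is only a dependency-free stand-in for a real cipher engine such as AES.

```python
import hashlib, os, tempfile

SECTOR = 512

def _keystream(key: bytes, sector_no: int) -> bytes:
    """Per-sector keystream; SHA-256 in counter mode as a stand-in cipher."""
    out, counter = b"", 0
    while len(out) < SECTOR:
        out += hashlib.sha256(
            key + sector_no.to_bytes(8, "big") + counter.to_bytes(8, "big")
        ).digest()
        counter += 1
    return out[:SECTOR]

class VirtualDisk:
    """A binary file treated as a disk of fixed-size, encrypted sectors."""
    def __init__(self, path, key, sectors=16):
        self.path, self.key = path, key
        with open(path, "wb") as f:              # allocate the backing file
            f.write(b"\x00" * (SECTOR * sectors))

    def write_sector(self, n, data):
        data = data.ljust(SECTOR, b"\x00")       # pad to a full sector
        ct = bytes(a ^ b for a, b in zip(data, _keystream(self.key, n)))
        with open(self.path, "r+b") as f:
            f.seek(n * SECTOR)
            f.write(ct)                          # only ciphertext reaches disk

    def read_sector(self, n):
        with open(self.path, "rb") as f:
            f.seek(n * SECTOR)
            ct = f.read(SECTOR)
        return bytes(a ^ b for a, b in zip(ct, _keystream(self.key, n)))

disk = VirtualDisk(os.path.join(tempfile.mkdtemp(), "vdisk.img"), key=b"demo-key")
disk.write_sector(3, b"secret payload")
print(disk.read_sector(3)[:14])   # b'secret payload'
```

Key management and hardware binding, mentioned in the abstract, would sit above this layer and decide how `key` is derived and released.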

  11. A scalable infrastructure for CMS data analysis based on OpenStack Cloud and Gluster file system

    Science.gov (United States)

    Toor, S.; Osmani, L.; Eerola, P.; Kraemer, O.; Lindén, T.; Tarkoma, S.; White, J.

    2014-06-01

    The challenge of providing a resilient and scalable computational and data management solution for massive scale research environments requires continuous exploration of new technologies and techniques. In this project the aim has been to design a scalable and resilient infrastructure for CERN HEP data analysis. The infrastructure is based on OpenStack components for structuring a private Cloud with the Gluster File System. We integrate the state-of-the-art Cloud technologies with the traditional Grid middleware infrastructure. Our test results show that the adopted approach provides a scalable and resilient solution for managing resources without compromising on performance and high availability.

  12. Long-term rates and the depth extent of fault creep along the San Andreas Fault system in northern California from alinement arrays and GPS data

    Science.gov (United States)

    Lienkaemper, J. J.; McFarland, F. S.; Simpson, R. W.; Caskey, J.

    2013-12-01

    The dextral San Andreas Fault system (SAFS) in northern California comprises five branches that exhibit considerable variation in the amount and spatial extent of aseismic release or creep. We estimate the depth extent of creep with a forward elastic model using the algorithms of Okada (1992) and boundary value dislocation solutions for creep rate and depth of creeping patches. For purposes of analysis we label branches, from west to east: A (San Gregorio), B (San Andreas), C (Calaveras-Hayward-Rodgers Creek-Maacama), D (Northern Calaveras-Green Valley-Bartlett Springs) and E (Greenville). Since the 1960s alinement arrays have provided one of the most accurate means to estimate the long-term creep rate and these rates have been reasonably well determined for much of the San Francisco Bay area (SFBA) southward. Over the past decade we have been installing alinement arrays along the more remote faults, especially northward of the SFBA, to monitor the extent of creep on branches C and D. We currently monitor about 80 such arrays throughout the northern SAFS. To analyze the depth extent of creep over the entire system, we model 30 fault sections on these five branches, delineated either by geometric discontinuities between them or by distinctly different creeping behaviors. We have removed any significant transient rate changes imposed by large regional earthquakes. We use crustal velocities determined for global-positioning station pairs of survey mode and continuous (SGPS, CGPS or mixed pairs) that are located near each fault to provide additional constraint on average creep rates. We estimate the mean depth of creep from the mean observed surface creep rate for each section and the rate uncertainty allows estimation of a depth uncertainty.
Uncertainties are generally much higher where only five years or less of alinement array data are available, but in some cases the addition of CGPS or multiple SGPS station pairs has been essential for a more complete evaluation of

  13. DATA Act File C Award Financial - Social Security

    Data.gov (United States)

    Social Security Administration — The DATA Act Information Model Schema Reporting Submission Specification File C. File C includes the agency award information from the financial accounting system at...

  14. 48 CFR 204.802 - Contract files.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Contract files. 204.802 Section 204.802 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE GENERAL ADMINISTRATIVE MATTERS Contract Files 204.802 Contract files. Official...

  15. 78 FR 34362 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-06-07

    ... Fortistar Methane Group Gas Recovery Systems, LLC. Filed Date: 5/23/13. Accession Number: 20130523-5138.... Description: Amendment to Application and Initial Baseline Tariff Filing to be effective 8/1/2013. Filed...

  16. Comparison of Postoperative Pain after Root Canal Preparation with Two Reciprocating and Rotary Single-File Systems: A Randomized Clinical Trial

    Science.gov (United States)

    Mollashahi, Narges Farhad; Saberi, Eshagh Ali; Havaei, Seyed Rohollah; Sabeti, Mohammad

    2017-01-01

    Introduction: Root canal preparation techniques may cause postoperative pain. The aim of the present study was to compare the intensity of postoperative pain after endodontic treatment using hand files, a single-file rotary system (OneShape), and a single-file reciprocating system (Reciproc). Methods and Materials: In this single-blind, parallel-grouped randomized clinical trial, a total of 150 healthy patients aged 20 to 50 years were diagnosed with symptomatic irreversible pulpitis of one maxillary or mandibular molar. The teeth were randomly assigned to three groups according to the root canal instrumentation technique: hand files (control), OneShape, and Reciproc. Treatment was performed in a single visit by an endodontist. The severity of the postoperative pain was assessed by the visual analogue scale (VAS) after 6, 12, 24, 48 and 72 h. Data were analyzed using the Kruskal-Wallis and Mann-Whitney U tests. Results: The patients in the control group reported significantly higher mean postoperative pain intensity at 12, 24, 48, and 72 h compared to the patients in the two other groups (P<0.05). Conclusion: The instrumentation kinematics (single-file reciprocating or single-file rotary) had no impact on intensity of postoperative pain. PMID:28179917
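The pairwise Mann-Whitney U comparison named in the abstract can be illustrated with fabricated VAS scores (the trial's data are not reproduced here). For small samples, U can be computed exactly by direct pair counting:

```python
def mann_whitney_u(a, b):
    """U for sample a vs b: count of pairs where a_i > b_j, +0.5 per tie."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

hand     = [6, 7, 5, 8, 6]   # hypothetical VAS scores, hand-file control group
oneshape = [3, 4, 2, 5, 3]   # hypothetical VAS scores, single-file rotary group
print(mann_whitney_u(hand, oneshape))   # 24.5
```

A U near the maximum of len(a) * len(b) (here 25) indicates that almost every control score exceeds every rotary score, which is the kind of separation the trial reports between the control and single-file groups.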

  17. Maps of Quaternary Deposits and Liquefaction Susceptibility in the Central San Francisco Bay Region, California

    Science.gov (United States)

    Witter, Robert C.; Knudsen, Keith L.; Sowers, Janet M.; Wentworth, Carl M.; Koehler, Richard D.; Randolph, Carolyn E.; Brooks, Suzanna K.; Gans, Kathleen D.

    2006-01-01

    This report presents a map and database of Quaternary deposits and liquefaction susceptibility for the urban core of the San Francisco Bay region. It supersedes the equivalent area of U.S. Geological Survey Open-File Report 00-444 (Knudsen and others, 2000), which covers the larger 9-county San Francisco Bay region. The report consists of (1) a spatial database, (2) two small-scale colored maps (Quaternary deposits and liquefaction susceptibility), (3) a text describing the Quaternary map and liquefaction interpretation (part 3), and (4) a text introducing the report and describing the database (part 1). All parts of the report are digital; part 1 describes the database and digital files and how to obtain them by downloading across the internet. The nine counties surrounding San Francisco Bay straddle the San Andreas fault system, which exposes the region to serious earthquake hazard (Working Group on California Earthquake Probabilities, 1999). Much of the land adjacent to the Bay and the major rivers and streams is underlain by unconsolidated deposits that are particularly vulnerable to earthquake shaking and liquefaction of water-saturated granular sediment. This new map provides a consistent detailed treatment of the central part of the 9-county region in which much of the mapping of Open-File Report 00-444 was either at smaller (less detailed) scale or represented only preliminary revision of earlier work. Like Open-File Report 00-444, the current mapping uses geomorphic expression, pedogenic soils, inferred depositional environments, and geologic age to define and distinguish the map units. Further scrutiny of the factors controlling liquefaction susceptibility has led to some changes relative to Open-File Report 00-444: particularly the reclassification of San Francisco Bay mud (Qhbm) to have only MODERATE susceptibility and the rating of artificial fills according to the Quaternary map units inferred to underlie them (other than dams - adf).
The two colored

  18. Capteur Tridimensionnel Sans Contact (Three-Dimensional Contactless Sensor)

    Science.gov (United States)

    Magnant, D.

    1986-07-01

    Three-dimensional measurements on the human body using a scanning laser beam. The principle of this active optical apparatus and the image data processing that yields three-dimensional information on complex forms are presented. The output is given as one or several files of real coordinates. The basic components of the system are: a light sheet generated by a laser source, and optical sensors (cameras) with a corresponding hardware-software extractor. This 3D sensing system is especially adapted to partial or total acquisition of body coordinates. The main advantages are: - Vision and measurement capability of complete accessible contours without shadow areas. - Real-time data acquisition and scanning of the object in a few seconds. - Access to distance measurements between significant points. - The accuracy presently obtained is better than 1/1000 in relative units and below one millimeter in absolute terms. - Markers physically stuck on the body are not necessary. - The monochromaticity of the laser light source allows the use of a color filter over the detector (camera) for ambient light rejection. - Fully programmable for any use, allowing adaptation to a large variety of particular cases. - The open hardware system offers many options. - The hardware-software tool is designed for auto-calibration. - The system offers easy connection to a host computer or a production robot.

  19. Research on Quick Start-up Mechanism of File System

    Institute of Scientific and Technical Information of China (English)

    杨琼; 胡宁; 周霆; 徐晓光

    2014-01-01

    This article presents a quick-mount method for file systems. By designing a rational file system consistency scheme to reduce integrity checking, and by scanning the disk only at appropriate times to reduce disk-scan overhead, the file system mount time is effectively shortened.

  20. Efficacy of Two Irrigants Used with Self-Adjusting File System on Smear Layer: A Scanning Electron Microscopy Study.

    Science.gov (United States)

    Genç Şen, Özgür; Kaya, Sadullah; Er, Özgür; Alaçam, Tayfun

    2014-01-01

    Mechanical instrumentation of root canals produces a smear layer that adversely affects the root canal seal. The aim of this study was to evaluate the efficacy of MTAD and citric acid solutions used with the self-adjusting file (SAF) system on the smear layer. Twenty-three single-rooted human teeth were used for the study. Canals were instrumented manually up to a number 20 K-file size. SAF was used to prepare the root canals. The following groups were studied: Group 1: MTAD + 5.25% NaOCl, Group 2: 20% citric acid + 5.25% NaOCl, and Group 3: Control (5.25% NaOCl). All roots were split longitudinally and subjected to scanning electron microscopy. The presence of smear layer in the coronal, middle, and apical thirds was evaluated using a five-score evaluation system. Kruskal-Wallis and Mann-Whitney U tests were used for statistical analysis. In the coronal third, Group 2 exhibited the best results and was statistically different from the other groups (P<0.05). The solutions used in Groups 1 and 2 could effectively remove the smear layer in most of the specimens. However, citric acid was more effective than MTAD in all three thirds of the canal.

  1. Preliminary geologic map of the Fontana 7.5' quadrangle, Riverside and San Bernardino Counties, California

    Science.gov (United States)

    Morton, Douglas M.; Digital preparation by Bovard, Kelly R.

    2003-01-01

    Open-File Report 03-418 is a digital geologic data set that maps and describes the geology of the Fontana 7.5’ quadrangle, Riverside and San Bernardino Counties, California. The Fontana quadrangle database is one of several 7.5’ quadrangle databases that are being produced by the Southern California Areal Mapping Project (SCAMP). These maps and databases are, in turn, part of the nation-wide digital geologic map coverage being developed by the National Cooperative Geologic Map Program of the U.S. Geological Survey (USGS). General Open-File Report 03-418 contains a digital geologic map database of the Fontana 7.5’ quadrangle, Riverside and San Bernardino Counties, California that includes: 1. ARC/INFO (Environmental Systems Research Institute, http://www.esri.com) version 7.2.1 coverages of the various elements of the geologic map. 2. A Postscript file (fon_map.ps) to plot the geologic map on a topographic base, and containing a Correlation of Map Units diagram (CMU), a Description of Map Units (DMU), and an index map. 3. An Encapsulated PostScript (EPS) file (fon_grey.eps) created in Adobe Illustrator 10.0 to plot the geologic map on a grey topographic base, and containing a Correlation of Map Units (CMU), a Description of Map Units (DMU), and an index map. 4. Portable Document Format (.pdf) files of: a. the Readme file; includes in Appendix I, data contained in fon_met.txt b. The same graphics as plotted in 2 and 3 above.Test plots have not produced precise 1:24,000-scale map sheets. Adobe Acrobat page size setting influences map scale. The Correlation of Map Units and Description of Map Units is in the editorial format of USGS Geologic Investigations Series (I-series) maps but has not been edited to comply with I-map standards. Within the geologic map data package, map units are identified by standard geologic map criteria such as formation-name, age, and lithology. Where known, grain size is indicated on the map by a subscripted letter or letters following

  2. Geology of the epithermal Ag-Au Huevos Verdes vein system and San José district, Deseado massif, Patagonia, Argentina

    Science.gov (United States)

    Dietrich, Andreas; Gutierrez, Ronald; Nelson, Eric P.; Layer, Paul W.

    2012-03-01

    The San José district is located in the northwest part of the Deseado massif and hosts a number of epithermal Ag-Au quartz veins of intermediate sulfidation style, including the Huevos Verdes vein system. Veins are hosted by andesitic rocks of the Bajo Pobre Formation and locally by rhyodacitic pyroclastic rocks of the Chon Aike Formation. New 40Ar/39Ar constraints on the age of host rocks and mineralization define Late Jurassic ages of 151.3 ± 0.7 Ma to 144.7 ± 0.1 Ma for volcanic rocks of the Bajo Pobre Formation and of 147.6 ± 1.1 Ma for the Chon Aike Formation. Illite ages of the Huevos Verdes vein system of 140.8 ± 0.2 and 140.5 ± 0.3 Ma are 4 m.y. younger than the volcanic host rock unit. These age dates are among the youngest reported for Jurassic volcanism in the Deseado massif and correlate well with the regional context of magmatic and hydrothermal activity. The Huevos Verdes vein system has a strike length of 2,000 m, with several ore shoots along strike. The vein consists of a pre-ore stage and three main ore stages. Early barren quartz and chalcedony are followed by a mottled quartz stage of coarse saccharoidal quartz with irregular streaks and discontinuous bands of sulfide-rich material. The banded quartz-sulfide stage consists of sulfide-rich bands alternating with bands of quartz and bands of chlorite ± illite. Late-stage sulfide-rich veinlets are associated with kaolinite gangue. Ore minerals are argentite and electrum, together with pyrite, sphalerite, galena, chalcopyrite, minor bornite, covellite, and ruby silver. Wall rock alteration is characterized by narrow (220°C. Kaolinite occurring with the late sulfide-rich veinlet stage indicates pH 315°, whereas strike directions of <315° are predicted with an induced dextral strike-slip movement. The components of the structural model appear to be present on a regional scale and are not restricted to the San José district.

  3. The Marianas-San Marcos vein system: characteristics of a shallow low sulfidation epithermal Au-Ag deposit in the Cerro Negro district, Deseado Massif, Patagonia, Argentina

    Science.gov (United States)

    Vidal, Conrado Permuy; Guido, Diego M.; Jovic, Sebastián M.; Bodnar, Robert J.; Moncada, Daniel; Melgarejo, Joan Carles; Hames, Willis

    2016-08-01

    The Cerro Negro district, within the Argentinian Deseado Massif province, has become one of the most significant recent epithermal discoveries, with estimated reserves plus resources of ˜6.7 Moz Au equivalent. The Marianas-San Marcos vein system contains about 70 % of the Au-Ag resources in the district. Mineralization consists of Upper Jurassic (155 Ma) epithermal Au- and Ag-rich veins of low to intermediate sulfidation style, hosted in and genetically related to Jurassic intermediate composition volcanic rocks (159-156 Ma). Veins have a complex infill history, represented by ten stages with clear crosscutting relationships that can be summarized in four main episodes: a low volume, metal-rich initial episode (E1), an extended banded quartz episode with minor mineralization (E2), a barren waning stage episode (E3), and a silver-rich late tectonic-hydrothermal episode (E4). The first three episodes are interpreted to have formed at the same time and probably from fluids of similar composition: a 290-230 °C fluid dominated by meteoric and volcanic waters (-3‰ to -0‰ δ18Owater), with <3 % NaCl equivalent salinity and with a magmatic source of sulfur (-1 to -2 ‰ δ34Swater). Metal was mainly precipitated at the beginning of vein formation (episode 1) due to a combination of boiling at ˜600 to 800 m below the paleowater table, and associated mixing/cooling processes, as evidenced by sulfide-rich bands showing crustiform-colloform quartz, adularia, and chlorite-smectite banding. During episodes 2 and 3, metal contents progressively decrease during continuing boiling conditions, and veins were filled by quartz and calcite during waning stages of the hydrothermal system, and the influx of bicarbonate waters (-6 to -8.5 ‰ δ18Owater). Hydrothermal alteration is characterized by proximal illite, adularia, and silica zone with chlorite and minor epidote, intermediate interlayered illite-smectite and a distal chlorite halo. This assemblage is in agreement with

  4. A Case for Historic Joint Rupture of the San Andreas and San Jacinto Faults

    Science.gov (United States)

    Lozos, J.

    2015-12-01

    The ~M7.5 southern California earthquake of 8 December 1812 ruptured the San Andreas Fault from Cajon Pass to at least as far north as Pallet Creek (Biasi et al., 2002). The 1812 rupture has also been identified in trenches at Burro Flats to the south (Yule and Howland, 2001). However, the lack of a record of 1812 at Plunge Creek, between Cajon Pass and Burro Flats (McGill et al., 2002), complicates the interpretation of this event as a straightforward San Andreas rupture. Paleoseismic records of a large early 19th century rupture on the northern San Jacinto Fault (Onderdonk et al., 2013; Kendrick and Fumal, 2005) allow for alternate interpretations of the 1812 earthquake. I use dynamic rupture modeling on the San Andreas-San Jacinto junction to determine which rupture behaviors produce slip patterns consistent with observations of the 1812 event. My models implement realistic fault geometry, a realistic velocity structure, and stress orientations based on seismicity literature. Under these simple assumptions, joint rupture of the two faults is the most common behavior. My modeling rules out a San Andreas-only rupture that is consistent with the data from the 1812 earthquake, and also shows that single fault events are unable to match the average slip per event for either fault. The choice of nucleation point affects the details of rupture directivity and slip distribution, but not the first order result that multi-fault rupture is the preferred behavior. While it cannot be definitively said that joint San Andreas-San Jacinto rupture occurred in 1812, these results are consistent with paleoseismic and historic data. This has implications for the possibility of future multi-fault rupture within the San Andreas system, as well as for interpretation of other paleoseismic events in regions of complex fault interactions.

  5. The San Andreas Fault in the San Francisco Bay area, California: a geology fieldtrip guidebook to selected stops on public lands

    Science.gov (United States)

    Stoffer, Philip W.

    2005-01-01

    This guidebook contains a series of geology fieldtrips with selected destinations along the San Andreas Fault in part of the region that experienced surface rupture during the Great San Francisco Earthquake of 1906. Introductory materials present general information about the San Andreas Fault System, landscape features, and ecological factors associated with faults in the South Bay, Santa Cruz Mountains, the San Francisco Peninsula, and the Point Reyes National Seashore regions. Trip stops include roadside areas and recommended hikes along regional faults and to nearby geologic and landscape features that provide opportunities to make casual observations about the geologic history and landscape evolution. Destinations include sites along the San Andreas and Calaveras faults in the San Juan Bautista and Hollister region. Stops on public land along the San Andreas Fault in the Santa Cruz Mountains in Santa Clara and Santa Cruz counties include the Loma Prieta summit area, the Forest of Nisene Marks State Park, Lexington County Park, Sanborn County Park, Castle Rock State Park, and the Mid Peninsula Open Space Preserve. Destinations on the San Francisco Peninsula and along the coast in San Mateo County include the Crystal Springs Reservoir area, Mussel Rock Park, and parts of Golden Gate National Recreation Area, with additional stops associated with the San Gregorio Fault system at Montara State Beach, the James F. Fitzgerald Preserve, and at Half Moon Bay. Field trip destinations in the Point Reyes National Seashore and vicinity provide information about the geology and character of the San Andreas Fault system north of San Francisco.

  6. Evidence for 115 kilometers of right slip on the san gregorio-hosgri fault trend.

    Science.gov (United States)

    Graham, S A; Dickinson, W R

    1978-01-13

    The San Gregorio-Hosgri fault trend is a component of the San Andreas fault system on which there may have been about 115 kilometers of post-early Miocene right-lateral strike slip. If so, right slip on the San Andreas and San Gregorio-Hosgri faults accounts for most of the movement between the Pacific and North American plates since mid-Miocene time. Furthermore, the magnitude of right slip on a Paleogene proto-San Andreas fault inferred from the present distribution of granitic basement is reduced considerably when Neogene-Recent San Gregorio-Hosgri right slip is taken into account.

  7. Systems Pharmacology Based Study of the Molecular Mechanism of SiNiSan Formula for Application in Nervous and Mental Diseases.

    Science.gov (United States)

    Shen, Xia; Zhao, Zhenyu; Luo, Xuan; Wang, Hao; Hu, Benxiang; Guo, Zihu

    2016-01-01

    Background: Mental disorders are a group of systemic diseases characterized by a variety of physical and mental discomforts, and they have become a rising threat to human life. Herbal medicines have been used to treat mental disorders for thousands of years in China, but their molecular mechanism is not yet clear. Objective: To systematically explain the mechanisms of the SiNiSan (SNS) formula in the treatment of mental disorders. Method: A systems pharmacology method, with ADME screening, target prediction, and DAVID enrichment analysis, was employed as the principal approach in our study. Results: 60 active ingredients of the SNS formula were identified, together with 187 mental-disorder-related targets with which they interact. Furthermore, the enrichment analysis of the drug-target network showed that SNS probably acts through "multi-ingredient, multitarget, and multisystem" holistic coordination across different organs by indirectly regulating nutritional and metabolic pathways and their serial complications. Conclusions: Our research provides a reference for the molecular mechanism of medicinal herbs in the treatment of mental disease at a systematic level. Hopefully, it will also provide a theoretical basis for the discovery of lead compounds from natural medicines for other diseases based on traditional medicine.

  8. Parallel compression of data chunks of a shared data object using a log-structured file system

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-10-25

    Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File techniques. The compressed data chunk can be de-compressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node and storing the compressed version of the data chunk to the shared data object on the storage node.
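The scheme can be sketched as follows, under assumed interfaces (this is not the patented implementation): each client compresses its chunk before appending it to a shared log-structured object, and an index of offsets makes later reads and decompression possible.

```python
import zlib

class SharedObjectLog:
    """Toy stand-in for a shared data object on a storage node."""
    def __init__(self):
        self.log = bytearray()   # append-only backing store for the object
        self.index = {}          # chunk_id -> (offset, compressed_length)

    def append_chunk(self, chunk_id, data: bytes) -> int:
        """Client side: compress the chunk, then append it to the shared log."""
        compressed = zlib.compress(data)
        self.index[chunk_id] = (len(self.log), len(compressed))
        self.log += compressed               # log-structured: never overwrite
        return len(compressed)

    def read_chunk(self, chunk_id) -> bytes:
        """Reader side: locate the chunk via the index and decompress it."""
        off, n = self.index[chunk_id]
        return zlib.decompress(bytes(self.log[off:off + n]))

obj = SharedObjectLog()
raw = b"checkpoint data " * 64
stored = obj.append_chunk(0, raw)
print(stored < len(raw), obj.read_chunk(0) == raw)   # True True
```

Because every writer only appends and records its own (offset, length) entry, many clients can compress their chunks in parallel without coordinating on the layout of each other's data.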

  9. Structure and geomorphology of the "big bend" in the Hosgri-San Gregorio fault system, offshore of Big Sur, central California

    Science.gov (United States)

    Johnson, S. Y.; Watt, J. T.; Hartwell, S. R.; Kluesner, J. W.; Dartnell, P.

    2015-12-01

    The right-lateral Hosgri-San Gregorio fault system extends mainly offshore for about 400 km along the central California coast and is a major structure in the distributed transform margin of western North America. We recently mapped a poorly known 64-km-long section of the Hosgri fault offshore Big Sur between Ragged Point and Pfeiffer Point using high-resolution bathymetry, tightly spaced single-channel seismic-reflection and coincident marine magnetic profiles, and reprocessed industry multichannel seismic-reflection data. Regionally, this part of the Hosgri-San Gregorio fault system has a markedly more westerly trend (by 10° to 15°) than parts farther north and south, and thus represents a transpressional "big bend." Through this "big bend," the fault zone is never more than 6 km from the shoreline and is a primary control on the dramatic coastal geomorphology that includes high coastal cliffs, a narrow (2- to 8-km-wide) continental shelf, a sharp shelfbreak, and a steep (as much as 17°) continental slope incised by submarine canyons and gullies. Depth-converted industry seismic data suggest that the Hosgri fault dips steeply to the northeast and forms the eastern boundary of the asymmetric (deeper to the east) Sur Basin. Structural relief on Franciscan basement across the Hosgri fault is about 2.8 km. Locally, we recognize five discrete "sections" of the Hosgri fault based on fault trend, shallow structure (e.g., disruption of young sediments), seafloor geomorphology, and coincidence with high-amplitude magnetic anomalies sourced by ultramafic rocks in the Franciscan Complex. From south to north, section lengths and trends are as follows: (1) 17 km, 312°; (2) 10 km, 322°; (3) 13 km, 317°; (4) 3 km, 329°; (5) 21 km, 318°. Through these sections, the Hosgri surface trace includes several right steps that vary from a few hundred meters to about 1 km wide, none wide enough to provide a barrier to continuous earthquake rupture.

  10. Design of a Network File Management System Based on PHP Technology

    Institute of Scientific and Technical Information of China (English)

    张源伟; 杨铭; 郭昊

    2013-01-01

    To meet the needs of evolving network file management, this paper designs a network file management system based on PHP and MySQL that provides storage services such as file upload and download, and management services such as file browsing, updating, classification, and sharing. The system has a friendly operating interface, high processing efficiency, and sound security, giving users a convenient and reliable way to store, manage, and share files online.
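A toy sketch of the storage-service surface the abstract describes (upload, download, per-user browsing). It is written in Python as a stand-in for the paper's PHP/MySQL implementation; all class and method names are invented for illustration.

```python
# Toy in-memory sketch of the storage services described above (upload,
# download, per-user browsing). The paper's system uses PHP and MySQL;
# this Python stand-in only illustrates the service surface.

class FileStore:
    def __init__(self):
        self._files = {}            # name -> (owner, bytes)

    def upload(self, owner, name, data):
        self._files[name] = (owner, data)

    def download(self, name):
        return self._files[name][1]

    def browse(self, owner):
        """List the names of files belonging to one user."""
        return sorted(n for n, (o, _) in self._files.items() if o == owner)

store = FileStore()
store.upload("alice", "notes.txt", b"hello")
store.upload("bob", "todo.txt", b"x")
print(store.browse("alice"), store.download("notes.txt"))  # → ['notes.txt'] b'hello'
```

A real system would back this with a database table and access control; the in-memory dict simply keeps the sketch self-contained.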

  11. Methods for File Recovery on the NTFS File System

    Institute of Scientific and Technical Information of China (English)

    徐国天

    2015-01-01

    Objective: To study three data recovery modes for NTFS storage devices, testing and comparing their recovery effectiveness to support digital forensic examination. In practice, files often go unrestored because the wrong recovery tool or method is chosen. Methods: On the same NTFS storage device, a self-developed NTFS log inspection tool was used to test recovery based on the NTFS log file; the quick-scan function of Final Data was used to test recovery based on MFT records; and the full-scan function of Final Data was used to test recovery based on file-header signature values. The effectiveness of the three modes was compared and their recovery principles analyzed. Results: Recovery based on the NTFS log and on MFT records retrieved relatively complete information in a short time, but is unsuited to files deleted long ago. Recovery based on file-header signatures can restore files deleted long ago, but takes longer, cannot recover metadata such as file names and creation times, and cannot effectively recover fragmented (non-contiguously stored) files. Conclusion: Combining the three modes according to the circumstances of a case can recover data effectively.
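The third recovery mode above, scanning raw storage for known file-header signature values, can be sketched as follows. The signature table and the in-memory "image" are illustrative assumptions, not Final Data's actual implementation.

```python
# Minimal sketch of signature-based file carving: scan a raw disk image
# for known file-header magic bytes. Signatures and the fake image are
# illustrative assumptions, not any specific tool's implementation.

SIGNATURES = {
    b"\xFF\xD8\xFF": "jpg",
    b"\x89PNG\r\n\x1a\n": "png",
    b"%PDF-": "pdf",
}

def carve_offsets(image_bytes):
    """Return sorted (offset, type) pairs where a known header starts."""
    hits = []
    for magic, ftype in SIGNATURES.items():
        start = 0
        while True:
            pos = image_bytes.find(magic, start)
            if pos == -1:
                break
            hits.append((pos, ftype))
            start = pos + 1
    return sorted(hits)

# Fake 'disk image': a PNG header at offset 4 and a PDF header at 16.
img = b"\x00" * 4 + b"\x89PNG\r\n\x1a\n" + b"\x00" * 4 + b"%PDF-1.4"
print(carve_offsets(img))  # → [(4, 'png'), (16, 'pdf')]
```

This also makes the abstract's limitations concrete: carving finds where a file starts but recovers no file name or timestamps, and it assumes the file body is stored contiguously after the header.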

  12. Scalable I/O Systems via Node-Local Storage: Approaching 1 TB/sec File I/O

    Energy Technology Data Exchange (ETDEWEB)

    Moody, A; Bronevetsky, G

    2008-05-20

    The growth in the computational capability of modern supercomputing systems has been accompanied by corresponding increases in CPU count, total RAM, and total storage capacity. Indeed, systems such as BlueGene/L [3], BlueGene/P, Ranger, and the Cray XT series have grown to more than 100k processors, with 100 TeraBytes of RAM, and are attached to multi-PetaByte storage systems. However, as part of this design evolution, large supercomputers have lost node-local storage elements, such as disks. While this decision was motivated by important considerations like overall system reliability, it also resulted in these systems losing a key level in their memory hierarchy, with nothing to fill the gap between local RAM and the parallel file system. While today's large supercomputers are typically attached to fast parallel file systems, which provide tens of GBs/s of I/O bandwidth, the computational capacity has grown much faster than the storage bandwidth capacity. As such, these machines are now provided with much less than 1GB/s of I/O bandwidth per TeraFlop of compute power, which is below the generally accepted limit required for a well-balanced system [8] [16]. The result is that today's limited I/O bandwidth is choking the capabilities of modern supercomputers, specifically in terms of limiting their working sets and making fault tolerance techniques, such as checkpointing, prohibitively expensive. This paper presents an alternative system design oriented on using node-local storage to improve aggregate system I/O bandwidth. We focus on the checkpointing use-case and present an experimental evaluation of SCR, a new checkpointing library that makes use of node-local storage to significantly improve the performance of checkpointing on large-scale supercomputers. Experiments show that SCR achieves unprecedented write speeds, reaching 700GB/s on 8,752 processors. Our results scale such that we expect a similarly structured system of 12,500 processors to reach approximately 1TB/s.
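The scaling expectation in the abstract can be checked with simple arithmetic: if aggregate checkpoint bandwidth grows linearly with processor count (because every node contributes its own local storage), the measured 700 GB/s on 8,752 processors projects to roughly 1 TB/s at 12,500 processors.

```python
# Back-of-envelope check of the linear-scaling claim: aggregate
# node-local checkpoint bandwidth is assumed proportional to the
# number of processors contributing local storage.

measured_bw_gbs = 700.0     # measured aggregate write bandwidth (GB/s)
measured_procs = 8752       # processors in the measured configuration
target_procs = 12500        # hypothetical larger machine

per_proc_bw = measured_bw_gbs / measured_procs   # ~0.08 GB/s per processor
projected_bw = per_proc_bw * target_procs        # linear extrapolation

print(f"{projected_bw:.0f} GB/s")  # → 1000 GB/s, i.e. ~1 TB/s
```

The assumption of linear scaling is the paper's, and holds only as long as checkpoints go to node-local devices rather than the shared parallel file system.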

  13. The San Diego Panasonic Partnership: A Case Study in Restructuring.

    Science.gov (United States)

    Holzman, Michael; Tewel, Kenneth J.

    1992-01-01

    The Panasonic Foundation provides resources for restructuring school districts. The article examines its partnership with the San Diego City School District, highlighting four schools that demonstrate promising practices and guiding principles. It describes recent partnership work on systemic issues, noting the next steps to be taken in San Diego.…

  14. Voice and Valency in San Luis Potosi Huasteco

    Science.gov (United States)

    Munoz Ledo Yanez, Veronica

    2014-01-01

    This thesis presents an analysis of the system of transitivity, voice and valency alternations in Huasteco of San Luis Potosi (Mayan) within a functional-typological framework. The study is based on spoken discourse and elicited data collected in the municipalities of Aquismon and Tancanhuitz de Santos in the state of San Luis Potosi, Mexico. The…

  16. Scalable I/O Systems via Node-Local Storage: Approaching 1 TB/sec File I/O

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Moody, A

    2009-08-18

    In the race to PetaFLOP-speed supercomputing systems, the increase in computational capability has been accompanied by corresponding increases in CPU count, total RAM, and storage capacity. However, a proportional increase in storage bandwidth has lagged behind. In order to improve system reliability and to reduce maintenance effort for modern large-scale systems, system designers have opted to remove node-local storage from the compute nodes. Today's multi-TeraFLOP supercomputers are typically attached to parallel file systems that provide only tens of GBs/s of I/O bandwidth. As a result, such machines have access to much less than 1GB/s of I/O bandwidth per TeraFLOP of compute power, which is below the generally accepted limit required for a well-balanced system. In many ways, the current I/O bottleneck limits the capabilities of modern supercomputers, specifically in terms of limiting their working sets and restricting fault tolerance techniques, which become critical on systems consisting of tens of thousands of components. This paper resolves the dilemma between high performance and high reliability by presenting an alternative system design which makes use of node-local storage to improve aggregate system I/O bandwidth. In this work, we focus on the checkpointing use-case and present an experimental evaluation of the Scalable Checkpoint/Restart (SCR) library, a new adaptive checkpointing library that uses node-local storage to significantly improve the checkpointing performance of large-scale supercomputers. Experiments show that SCR achieves unprecedented write speeds, reaching a measured 700GB/s of aggregate bandwidth on 8,752 processors and an estimated 1TB/s for a similarly structured machine of 12,500 processors. This corresponds to a speedup of over 70x compared to the bandwidth provided by the 10GB/s parallel file system the cluster uses. Further, SCR can adapt to an environment in which there is wide variation in performance or capacity among

  17. Land Boundary Conditions for the Goddard Earth Observing System Model Version 5 (GEOS-5) Climate Modeling System: Recent Updates and Data File Descriptions

    Science.gov (United States)

    Mahanama, Sarith P.; Koster, Randal D.; Walker, Gregory K.; Takacs, Lawrence L.; Reichle, Rolf H.; De Lannoy, Gabrielle; Liu, Qing; Zhao, Bin; Suarez, Max J.

    2015-01-01

    The Earth's land surface boundary conditions in the Goddard Earth Observing System version 5 (GEOS-5) modeling system were updated using recent high spatial and temporal resolution global data products. The updates include: (i) construction of a global 10-arcsec land-ocean-lakes-ice mask; (ii) incorporation of the 10-arcsec Globcover 2009 land cover dataset; (iii) implementation of Level 12 Pfafstetter hydrologic catchments; (iv) use of hybridized SRTM global topography data; (v) construction of the HWSDv1.21-STATSGO2 merged global 30-arcsec soil mineral and carbon data in conjunction with a highly refined soil classification system; (vi) production of diffuse visible and near-infrared 8-day MODIS albedo climatologies at 30-arcsec from the period 2001-2011; and (vii) production of the GEOLAND2 and MODIS merged 8-day LAI climatology at 30-arcsec for GEOS-5. The global data sets were preprocessed and used to construct global raster data files for the software (mkCatchParam) that computes parameters on catchment-tiles for various atmospheric grids. The updates also include a few bug fixes in mkCatchParam, as well as changes (improvements in algorithms, etc.) that allow it to produce tile-space parameters efficiently for high-resolution AGCM grids. The update process also includes the construction of data files describing the vegetation type fractions, soil background albedo, nitrogen deposition, and mean annual 2 m air temperature to be used with the future Catchment CN model, and the global stream channel network to be used with the future global runoff routing model. This report provides detailed descriptions of the data production process and the data file format of each updated data set.

  18. Integrating buprenorphine treatment into a public healthcare system: the San Francisco Department of Public Health's office-based Buprenorphine Pilot Program.

    Science.gov (United States)

    Hersh, David; Little, Sherri L; Gleghorn, Alice

    2011-01-01

    Despite well-documented efficacy, US physicians have been relatively slow to embrace the use of buprenorphine for the treatment of opioid dependence. In order to introduce and support the use of buprenorphine across the San Francisco Department of Public Health system of care, the Buprenorphine Pilot Program was initiated in 2003. Program treatment sites included a centralized buprenorphine induction clinic and program pharmacy, and three community-based treatment sites: two primary care clinics and a private dual-diagnosis group practice. The target patient population consisted of opioid-dependent patients typically seen in an urban, public health setting, including individuals experiencing extreme poverty, homelessness/unstable housing, unemployment, polysubstance abuse/dependence, coexisting mental health disorders, and/or little psychosocial support. This program evaluation reviews patient characteristics, treatment retention, substance use over time, patient impressions, and provider practices for the 57 patients admitted between 9/1/03 and 8/31/05. At baseline, over 80% of patients were injecting heroin, over 40% were homeless, and over one-third were using cocaine. Outcomes included an overall one-year retention rate of 61%, a rapid and dramatic decline in opioid use, very positive patient impressions of the program and of buprenorphine, and significant shifts in provider practices over time.

  19. Utilizing Multibeam Bathymetry and Geographic Information Systems (GIS) to Expand Our Mapping Ability of Potential Rockfish Benthic Habitats in the San Juan Islands, Washington

    Science.gov (United States)

    Kelly-Slatten, K.

    2013-12-01

    In order to construct an accurate cartographic representation of the potential rockfish habitat zone in the San Juan Archipelago, Washington, bathymetric data is needed to form layers within Geographic Information Systems (GIS) that include, but are not limited to, slope, hillshade, and aspect. Backscatter data is also important in order to demonstrate the induration of the marine floor, which in turn may tell the researcher what type of sediment and substrate makes up that part of the benthic region. Once these layers are added to the GIS map, another layer (referred to as Potential Benthic Habitats) is created and inserted. This layer uses the same induration data but groups them into polygons, which are then color-coded and displayed on the map. With all the layers now pictured, it is clear that the intertidal zones are not complete. Aerial photographs are then added to fill in the gaps according to the GPS coordinates associated with the middle section of each picture. When all pictures and layers have been included, the GIS map is a quasi-three-dimensional, color-coordinated, aerial-photograph-enhanced depiction of Skipjack, Waldron, Orcas, and Sucia Islands. The bathymetric and backscatter data are imported into Excel to graphically illustrate the specific values that represent the various potential habitats. The given data support the idea that potential rockfish habitat (Sedimentary Bedrock and Fractured Bedrock) must be closely monitored and maintained in an attempt to preserve and conserve the three threatened or endangered rockfish species within the Puget Sound locale.
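The slope layer mentioned above is a standard derivative of gridded bathymetry. A minimal pure-Python sketch using central differences follows; the grid and cell size are made-up illustrative data, not the survey's, and a GIS package would normally do this step.

```python
# Minimal sketch of deriving a slope layer from a gridded elevation /
# bathymetry surface via central differences. Toy data, not survey data.

import math

def slope_deg(grid, cell=1.0):
    """Slope in degrees at interior cells of a row-major elevation grid."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            dzdx = (grid[r][c + 1] - grid[r][c - 1]) / (2 * cell)
            dzdy = (grid[r + 1][c] - grid[r - 1][c]) / (2 * cell)
            out[r][c] = math.degrees(math.atan(math.hypot(dzdx, dzdy)))
    return out

# A plane dipping 1 m per 1 m cell in x: interior slope should be 45 deg.
grid = [[float(c) for c in range(4)] for _ in range(4)]
print(round(slope_deg(grid)[1][1], 1))  # → 45.0
```

Hillshade and aspect are computed from the same dzdx/dzdy pair, which is why the abstract lists the three layers together.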

  20. Modeling a Sustainable Salt Tolerant Grass-Livestock Production System under Saline Conditions in the Western San Joaquin Valley of California

    Directory of Open Access Journals (Sweden)

    Stephen R. Kaffka

    2013-09-01

    Salinity and trace mineral accumulation threaten the sustainability of crop production in many semi-arid parts of the world, including California's western San Joaquin Valley (WSJV). We used data from a multi-year field-scale trial in Kings County and related container trials to simulate a forage-grazing system under saline conditions. The model uses rainfall and irrigation water amounts, irrigation water quality, and soil, plant, and atmospheric variables to predict Bermuda grass (Cynodon dactylon (L.) Pers.) growth, quality, and use by cattle. Simulations based on field measurements and a related container study indicate that although soil chemical composition is affected by irrigation water quality, irrigation timing and frequency can be used to mitigate salt and trace mineral accumulation. Bermuda grass yields of up to 12 Mg dry matter (DM) ha−1 were observed at the field site and predicted by the model. Forage yield and quality support un-supplemented cattle stocking rates of 1.0 to 1.2 animal units (AU) ha−1. However, a balance must be achieved between stocking rate, desired average daily gain, accumulation of salts in the soil profile, and potential pollution of ground water from drainage and leaching. Using available weather data, crop-specific parameter values, and field-scale measurements of soil salinity and nitrogen levels, the model can be used by farmers growing forages on saline soils elsewhere to sustain forage and livestock production under similarly marginal conditions.

  1. Zoogeography of the San Andreas Fault system: Great Pacific Fracture Zones correspond with spatially concordant phylogeographic boundaries in western North America.

    Science.gov (United States)

    Gottscho, Andrew D

    2016-02-01

    The purpose of this article is to provide an ultimate tectonic explanation for several well-studied zoogeographic boundaries along the west coast of North America, specifically, along the boundary of the North American and Pacific plates (the San Andreas Fault system). By reviewing 177 references from the plate tectonics and zoogeography literature, I demonstrate that four Great Pacific Fracture Zones (GPFZs) in the Pacific plate correspond with distributional limits and spatially concordant phylogeographic breaks for a wide variety of marine and terrestrial animals, including invertebrates, fish, amphibians, reptiles, birds, and mammals. These boundaries are: (1) Cape Mendocino and the North Coast Divide, (2) Point Conception and the Transverse Ranges, (3) Punta Eugenia and the Vizcaíno Desert, and (4) Cabo Corrientes and the Sierra Transvolcanica. However, discussion of the GPFZs is mostly absent from the zoogeography and phylogeography literature likely due to a disconnect between biologists and geologists. I argue that the four zoogeographic boundaries reviewed here ultimately originated via the same geological process (triple junction evolution). Finally, I suggest how a comparative phylogeographic approach can be used to test the hypothesis presented here.

  2. An Implementation Method for a File System with Non-blocking Reads

    Institute of Scientific and Technical Information of China (English)

    熊安萍; 唐巍; 蒋溢

    2011-01-01

    The fault tolerance and read performance of existing file systems lag far behind those of database systems. Borrowing the multi-version data techniques that database systems use to achieve fast flashback and non-blocking reads, this paper improves the file system's metadata structure and combines it with copy-on-write, adding instant file recovery and non-blocking read capabilities to the file system. This addresses the fault-tolerance shortcomings of existing file systems and improves their read performance. A file system, MVFS, was built using this method; experimental results on MVFS show that a file system generated this way offers excellent read performance, fault tolerance, and reliability.
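The multi-version, copy-on-write idea behind non-blocking reads can be sketched as a toy model (not MVFS itself): writers publish new versions instead of overwriting, so readers always see a stable snapshot, and any earlier version can be "flashed back" instantly.

```python
# Toy model of multi-version / copy-on-write storage: a write appends a
# new version rather than mutating the current one, so reads never block
# on writers and old versions remain available for flashback recovery.
# This is an illustration of the principle, not MVFS's implementation.

class MultiVersionFile:
    def __init__(self, data=b""):
        self._versions = [data]      # full version history

    def read(self, version=-1):
        """Non-blocking read: return an immutable snapshot."""
        return self._versions[version]

    def write(self, data):
        """Copy-on-write: publish a new version, never overwrite."""
        self._versions.append(data)

    def flashback(self, version):
        """Instant recovery: re-publish an earlier version as current."""
        self._versions.append(self._versions[version])

f = MultiVersionFile(b"v0")
f.write(b"v1")
snapshot = f.read(0)   # readers of the old version still see b"v0"
f.flashback(0)         # recover the original contents
print(f.read())        # → b'v0'
```

The key property is that `read` touches only already-published, immutable data, which is what removes the reader/writer blocking that the abstract contrasts with conventional file systems.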

  3. Construction and Research of an Embedded Linux NFS Root File System

    Institute of Scientific and Technical Information of China (English)

    康天下; 支剑锋

    2012-01-01

    In embedded Linux development, the root file system is an essential component of the overall system. To facilitate and simplify debugging during development, this paper examines how to build a basic embedded Linux root file system with Busybox, covering Busybox configuration, compilation, and installation. On this basis, an NFS-based embedded Linux root file system is constructed, and the startup script and configuration file are given. Such a root file system allows programs to be modified and debugged online conveniently, lowering the barrier to embedded systems development.
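For reference, an NFS root is normally selected through kernel boot arguments rather than an on-device file system image. A representative command line is shown below; the server address and export path are placeholders for illustration, not values from the paper.

```text
# Illustrative kernel boot arguments for mounting the root file system
# over NFS. 192.168.1.10 and /srv/nfs/rootfs are placeholder values.
root=/dev/nfs rw nfsroot=192.168.1.10:/srv/nfs/rootfs,v3,tcp ip=dhcp
```

Because the root file system lives in an exported directory on the host, files can be edited there and are immediately visible to the running target, which is the online-debugging convenience the abstract describes.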

  4. Geophysical evidence for wedging in the San Gorgonio Pass structural knot, southern San Andreas fault zone, southern California

    Science.gov (United States)

    Langenheim, V.E.; Jachens, R.C.; Matti, J.C.; Hauksson, E.; Morton, D.M.; Christensen, A.

    2005-01-01

    Geophysical data and surface geology define intertonguing thrust wedges that form the upper crust in the San Gorgonio Pass region. This picture serves as the basis for inferring past fault movements within the San Andreas system, which are fundamental to understanding the tectonic evolution of the San Gorgonio Pass region. Interpretation of gravity data indicates that sedimentary rocks have been thrust at least 5 km in the central part of San Gorgonio Pass beneath basement rocks of the southeast San Bernardino Mountains. Subtle, long-wavelength magnetic anomalies indicate that a magnetic body extends in the subsurface north of San Gorgonio Pass and south under Peninsular Ranges basement, and has a southern edge that is roughly parallel to, but 5-6 km south of, the surface trace of the Banning fault. This deep magnetic body is composed either of upper-plate rocks of San Gabriel Mountains basement or rocks of San Bernardino Mountains basement or both. We suggest that transpression across the San Gorgonio Pass region drove a wedge of Peninsular Ranges basement and its overlying sedimentary cover northward into the San Bernardino Mountains during the Neogene, offsetting the Banning fault at shallow depth. Average rates of convergence implied by this offset are broadly consistent with estimates of convergence from other geologic and geodetic data. Seismicity suggests a deeper detachment surface beneath the deep magnetic body. This interpretation suggests that the fault mapped at the surface evolved not only in map but also in cross-sectional view. Given the multilayered nature of deformation, it is unlikely that the San Andreas fault will rupture cleanly through the complex structures in San Gorgonio Pass. © 2005 Geological Society of America.

  5. 48 CFR 1404.805 - Disposal of contract files.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Disposal of contract files. 1404.805 Section 1404.805 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR GENERAL ADMINISTRATIVE MATTERS Contract Files 1404.805 Disposal of contract files. Disposition of files shall...

  6. 78 FR 79429 - Combined Notice of Filings #1

    Science.gov (United States)

    2013-12-30

    ... 1000 Compliance Filing to be effective 2/17/2014. Filed Date: 12/18/13. Accession Number: 20131218-5197...: Midcontinent Independent System Operator, Inc. submits 12-16-2013 Entergy IAs Succession Filing Part 1 to be...: Midcontinent Independent System Operator, Inc. submits 12-16-2013 Entergy IAs Succession Filing Part 2 to...

  7. San Pascual (1989) n. 272

    OpenAIRE

    Pérez, María Dolores, O.S.C. (Directora)

    1989-01-01

    Editorial. Interview with the mother abbess. Offering. San Pascual: third centenary of his canonization and fourth of his death. San Pascual, a universal saint. Pascual Baylón, poet. The Sant Pasqual Scout group. Contributions, donations, alms, benefactors. Informative bulletin of the temple of San Pascual in Villareal.

  8. Construction and Optimization of the Yaffs File System in Embedded Linux

    Institute of Scientific and Technical Information of China (English)

    汪祖民; 张红梅

    2015-01-01

    Given the important role of the file system in embedded Linux development, this paper describes in detail how to use Busybox to build a minimal Yaffs log-structured file system, configuring and optimizing the file system's physical layout and its subdirectories at all levels so that the resulting file system meets development needs while minimizing its memory footprint. File system users and groups are configured to improve system security, making the file system better suited to embedded development. Addressing the shortcomings of NAND-flash-based Yaffs in wear leveling and garbage collection, optimization strategies are proposed to extend the service life of the NAND flash.
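Wear leveling, one of the optimization targets above, amounts to steering new allocations toward the least-worn blocks so erase cycles spread evenly across the NAND device. A toy sketch of that policy (not Yaffs' actual allocator):

```python
# Toy wear-leveling policy: when a fresh NAND block is needed, pick the
# free block with the lowest erase count. Illustrative only; Yaffs'
# real allocator and garbage collector are more involved.

def pick_block(erase_counts, free_blocks):
    """Choose the free block that has been erased the fewest times."""
    return min(free_blocks, key=lambda b: erase_counts[b])

# Block 3 is free and least worn, so it should be chosen next.
erase_counts = {0: 120, 1: 30, 2: 75, 3: 30}
free_blocks = [0, 2, 3]
print(pick_block(erase_counts, free_blocks))  # → 3
```

Garbage collection interacts with this policy: reclaiming a mostly-stale block frees it for reuse, and a wear-aware collector also prefers to relocate long-lived data into already-worn blocks.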

  9. Work Systems Package/Pontoon Implacement Vehicle Operational Testing at San Clemente Island, 1979. Demonstration of a Technology for Remotely Controlled Deep-Water Recovery of Objects up to 5 Tons from Depths to 20000 Feet.

    Science.gov (United States)

    1980-06-06

    constant bag volume is maintained by a relief valve which keeps the internal bag pressure at 5 to 10 psi over ambient pressure. Buoyancy can be...attachment devices and techniques by operators who were new to the system's operational idiosyncrasies. When considering these facts, the reality of...derived from previous programs, and new recovery techniques which were generated specifically for the San Clemente Island (SCI) operations. The ability to

  10. Annual Systems Engineering Conference (11th) Held in San Diego, California on October 20-23, 2008. Volume 3

    Science.gov (United States)

    2008-10-23

    Integration ASA(ALT) Luncheon with Speaker in the Regatta Pavilion: Dr. Ronald Jost, Deputy Assistant Secretary of Defense, C3, Space & Spectrum. BAYVIEW...ASA(ALT) 12:15 pm - 1:30 pm...Deployment, Systems Acquisition, Operations & Support, Sustainment, FRP Decision Review, FOC, LRIP/IOT&E, Critical Design Review, Pre-Systems Acquisition (Program

  11. Geology, sequence stratigraphy, and oil and gas assessment of the Lewis Shale Total Petroleum System, San Juan Basin, New Mexico and Colorado: Chapter 5 in Total petroleum systems and geologic assessment of undiscovered oil and gas resources in the San Juan Basin Province, exclusive of Paleozoic rocks, New Mexico and Colorado

    Science.gov (United States)

    Dubiel, R.F.

    2013-01-01

    The Lewis Shale Total Petroleum System (TPS) in the San Juan Basin Province contains a continuous gas accumulation in three distinct stratigraphic units deposited in genetically related depositional environments: offshore-marine shales, mudstones, siltstones, and sandstones of the Lewis Shale, and marginal-marine shoreface sandstones and siltstones of both the La Ventana Tongue and the Chacra Tongue of the Cliff House Sandstone. The Lewis Shale was not a completion target in the San Juan Basin (SJB) in early drilling from about the 1950s through 1990. During that time, only 16 wells were completed in the Lewis from natural fracture systems encountered while drilling for deeper reservoir objectives. In 1991, existing wells that penetrated the Lewis Shale were re-entered by petroleum industry operators in order to fracture-stimulate the Lewis and to add Lewis gas production onto preexisting, and presumably often declining, Mesaverde Group production stratigraphically lower in the section. By 1997, approximately 101 Lewis completions had been made, both as re-entries into existing wells and as add-ons to Mesaverde production in new wells. Based on recent industry drilling and completion practices leading to successful gas production from the Lewis and because new geologic models indicate that the Lewis Shale contains both source rocks and reservoir rocks, the Lewis Shale TPS was defined and evaluated as part of this U.S. Geological Survey oil and gas assessment of the San Juan Basin. Gas in the Lewis Shale Total Petroleum System is produced from shoreface sandstones and siltstones in the La Ventana and Chacra Tongues and from distal facies of these prograding clastic units that extend into marine rocks of the Lewis Shale in the central part of the San Juan Basin. Reservoirs are in shoreface sandstone parasequences of the La Ventana and Chacra and their correlative distal parasequences in the Lewis Shale where both natural and artificially enhanced fractures produce

  12. Study on an Administrative System for Official e-files-box Business

    Institute of Scientific and Technical Information of China (English)

    李敏

    2005-01-01

    This paper introduces the areas where an office administration system is feasible in practical applications. Through database management, the e-files-box office administration system was developed; the paper focuses on the system's main functions, performance, and implementation. The system has a friendly, concise, and attractive user interface and good security, fully safeguarding users' operations while ensuring data consistency and uniqueness.

  13. Comparative evaluation of sealing properties of different obturation systems placed over apically fractured rotary NiTi files

    Directory of Open Access Journals (Sweden)

    Sonali Taneja

    2012-01-01

    Aim: To evaluate the sealing properties of different obturation systems placed over apically fractured rotary NiTi files. Materials and Methods: Forty freshly extracted human mandibular premolars were prepared using the ProTaper (Dentsply-Maillefer, Ballaigues, Switzerland) or the RaCe (FKG Dentaire, La Chaux-de-Fonds, Switzerland) systems (n=20 for each), after which half of the specimens were subjected to instrument separation at the apical level. Roots with and without apically separated instruments (n=5) were filled with the two obturation systems, i.e., Thermafil and the cold lateral compaction technique. The modified glucose penetration setup was used to assess microleakage. The leakage data were statistically analyzed. Results: The amount of leakage was significantly lower in specimens containing fractured instruments, regardless of the obturation method used. Roots obturated with Thermafil displayed significantly less leakage than those obturated with cold lateral compaction, both in the presence and absence of separated instruments. There was no significant difference among specimens prepared with ProTaper and RaCe when Thermafil obturation was done, but with the cold lateral compaction technique, the RaCe system showed less leakage than the ProTaper system. Conclusion: The type of obturation may play a more important role than the type of instrument or the retained/non-retained instrument factor.

  14. ACONC Files

    Data.gov (United States)

    U.S. Environmental Protection Agency — ACONC files containing simulated ozone and PM2.5 fields that were used to create the model difference plots shown in the journal article. This dataset is associated...

  15. 831 Files

    Data.gov (United States)

    Social Security Administration — SSA-831 file is a collection of initial and reconsideration adjudicative level DDS disability determinations. (A few hearing level cases are also present, but the...

  16. A Distributed File Replication System for Web Page Tamper Resistance

    Institute of Scientific and Technical Information of China (English)

    赵莉; 梁静

    2012-01-01

    为了解决Web服务器核心内嵌防篡改系统中,文件的分布式发布以及文件篡改后的即时恢复的问题。文中基于J2EE技术架构以及MVC(模型一视图一控制)的软件开发模式,设计了一种分布式文件同步复制系统。该系统能将文件分布式的发送到多个Web服务器上;当检测到某个Web服务器上的文件被篡改了,系统能迅速从原始库同步复制到相应的Web服务器上,以达到网页防篡改的目的。系统通过对文件进行比对,只传输差异部分,提高了系统资源利用率。%Web server core embedded tampervresistant system, in order to solve the distributed release of the document and file tampering after instant recovery, software development model based on J2EE technology architecture and the MVC (Model-View-Controller) design a distributed file replication system, file distributed of the system can send to multiple Web servers; detects that a Web server on the file has been tampered with, can quickly synchronize from the original library is copied to the appropriate Web server to reach pages anti-the purpose of tampering, system through file transfers only the different parts, to improve the utilization of system resources.

  17. Data Recovery Technology Based on the FAT32 File System

    Institute of Scientific and Technical Information of China (English)

    张明旺

    2012-01-01

    This paper presents data recovery methods for the FAT32 file system in cases where hard disk data is lost through accidental user operations such as mistaken deletion. It describes the structure of the FAT32 file system and its file storage characteristics, analyzes the principles of FAT32 data recovery in detail, and on that basis explains the specific methods and process of recovering data.
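One concrete detail behind FAT32 recovery is that deleting a file only overwrites the first byte of its 32-byte directory entry with the marker 0xE5, leaving the remaining name characters, starting cluster, and size readable. A minimal sketch of scanning for such entries follows; the entry bytes are toy data, but the field offsets follow the standard FAT 8.3 directory entry layout.

```python
# Sketch of scanning FAT32 directory entries for deleted files. Each
# short (8.3) entry is 32 bytes; a first byte of 0xE5 marks deletion.
# Field offsets follow the FAT spec: cluster high word at 20, cluster
# low word at 26, file size at 28. Toy entry data for illustration.

import struct

DELETED = 0xE5

def scan_deleted(dir_bytes):
    """Return (name, start_cluster, size) for deleted 8.3 entries."""
    out = []
    for off in range(0, len(dir_bytes), 32):
        entry = dir_bytes[off:off + 32]
        if len(entry) < 32 or entry[0] != DELETED:
            continue
        name = entry[1:8].decode("ascii").rstrip()   # first char is lost
        ext = entry[8:11].decode("ascii").rstrip()
        hi, = struct.unpack_from("<H", entry, 20)    # cluster, high word
        lo, = struct.unpack_from("<H", entry, 26)    # cluster, low word
        size, = struct.unpack_from("<I", entry, 28)  # file size in bytes
        out.append((f"?{name}.{ext}", (hi << 16) | lo, size))
    return out

# Fake deleted entry for a file once named ?ILE.TXT: cluster 5, 1234 bytes.
entry = (bytes([DELETED]) + b"ILE    " + b"TXT" + b"\x00" * 9
         + struct.pack("<H", 0) + b"\x00" * 4
         + struct.pack("<H", 5) + struct.pack("<I", 1234))
print(scan_deleted(entry))  # → [('?ILE.TXT', 5, 1234)]
```

Recovery tools then follow the start cluster into the FAT and data area; the lost first name character is why undeleted files often reappear with a placeholder leading character.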

  18. Silicon Valley's Processing Needs versus San Jose State University's Manufacturing Systems Processing Component: Implications for Industrial Technology

    Science.gov (United States)

    Obi, Samuel C.

    2004-01-01

    Manufacturing professionals within universities tend to view manufacturing systems from a global perspective. This perspective tends to assume that manufacturing processes are employed equally in every manufacturing enterprise, irrespective of the geography and the needs of the people in those diverse regions. But in reality local and societal…

  19. New insights on stress rotations from a forward regional model of the San Andreas fault system near its Big Bend in southern California

    Science.gov (United States)

    Fitzenz, D.D.; Miller, S.A.

    2004-01-01

    Understanding the stress field surrounding and driving active fault systems is an important component of mechanistic seismic hazard assessment. We develop and present results from a time-forward three-dimensional (3-D) model of the San Andreas fault system near its Big Bend in southern California. The model boundary conditions are assessed by comparing model and observed tectonic regimes. The model of earthquake generation along two fault segments is used to target measurable properties (e.g., stress orientations, heat flow) that may allow inferences on the stress state on the faults. It is a quasi-static model, where GPS-constrained tectonic loading drives faults modeled as mostly sealed viscoelastic bodies embedded in an elastic half-space subjected to compaction and shear creep. A transpressive tectonic regime develops southwest of the model bend as a result of the tectonic loading and migrates toward the bend because of fault slip. The strength of the model faults is assessed on the basis of stress orientations, stress drop, and overpressures, showing a departure in the behavior of 3-D finite faults compared to models of 1-D or homogeneous infinite faults. At a smaller scale, stress transfers from fault slip transiently induce significant perturbations in the local stress tensors (where the slip profile is very heterogeneous). These stress rotations disappear when subsequent model earthquakes smooth the slip profile. Maps of maximum absolute shear stress emphasize both that (1) future models should include a more continuous representation of the faults and (2) that hydrostatically pressured intact rock is very difficult to break when no material weakness is considered. Copyright 2004 by the American Geophysical Union.

  20. Ensemble cloud-resolving modelling of a historic back-building mesoscale convective system over Liguria: the San Fruttuoso case of 1915

    Science.gov (United States)

    Parodi, Antonio; Ferraris, Luca; Gallus, William; Maugeri, Maurizio; Molini, Luca; Siccardi, Franco; Boni, Giorgio

    2017-05-01

    Highly localized and persistent back-building mesoscale convective systems represent one of the most dangerous flash-flood-producing storms in the north-western Mediterranean area. Substantial warming of the Mediterranean Sea in recent decades raises concerns over possible increases in frequency or intensity of these types of events as increased atmospheric temperatures generally support increases in water vapour content. However, analyses of the historical record do not provide a univocal answer, but these are likely affected by a lack of detailed observations for older events. In the present study, 20th Century Reanalysis Project initial and boundary condition data in ensemble mode are used to address the feasibility of performing cloud-resolving simulations with 1 km horizontal grid spacing of a historic extreme event that occurred over Liguria: the San Fruttuoso case of 1915. The proposed approach focuses on the ensemble Weather Research and Forecasting (WRF) model runs that show strong convergence over the Ligurian Sea (17 out of 56 members) as these runs are the ones most likely to best simulate the event. It is found that these WRF runs generally do show wind and precipitation fields that are consistent with the occurrence of highly localized and persistent back-building mesoscale convective systems, although precipitation peak amounts are underestimated. Systematic small north-westward position errors with regard to the heaviest rain and strongest convergence areas imply that the reanalysis members may not be adequately representing the amount of cool air over the Po Plain outflowing into the Ligurian Sea through the Apennines gap. Regarding the role of historical data sources, this study shows that in addition to reanalysis products, unconventional data, such as historical meteorological bulletins, newspapers, and even photographs, can be very valuable sources of knowledge in the reconstruction of past extreme events.

  1. Environmental justice implications of arsenic contamination in California's San Joaquin Valley: a cross-sectional, cluster-design examining exposure and compliance in community drinking water systems.

    Science.gov (United States)

    Balazs, Carolina L; Morello-Frosch, Rachel; Hubbard, Alan E; Ray, Isha

    2012-11-14

    Few studies of environmental justice examine inequities in drinking water contamination. Those studies that have done so usually analyze either disparities in exposure/harm or inequitable implementation of environmental policies. The US EPA's 2001 Revised Arsenic Rule, which tightened the drinking water standard for arsenic from 50 μg/L to 10 μg/L, offers an opportunity to analyze both aspects of environmental justice. We hypothesized that Community Water Systems (CWSs) serving a higher proportion of minority residents or residents of lower socioeconomic status (SES) have higher drinking water arsenic levels and higher odds of non-compliance with the revised standard. Using water quality sampling data for arsenic and maximum contaminant level (MCL) violation data for 464 CWSs actively operating from 2005-2007 in California's San Joaquin Valley, we ran bivariate tests and linear regression models. A higher home ownership rate was associated with lower arsenic levels (β-coefficient = -0.27 μg As/L; 95% CI, -0.5, -0.05). This relationship was stronger in smaller systems (β-coefficient = -0.43; 95% CI, -0.84, -0.03). CWSs with higher rates of homeownership had lower odds of receiving an MCL violation (OR, 0.33; 95% CI, 0.16, 0.67); those serving higher percentages of minorities had higher odds (OR, 2.6; 95% CI, 1.2, 5.4) of an MCL violation. We found that higher arsenic levels and higher odds of receiving an MCL violation were most common in CWSs serving predominantly socio-economically disadvantaged communities. Our findings suggest that communities with greater proportions of low-SES residents not only face disproportionate arsenic exposures but also unequal MCL compliance challenges.
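The compliance results above are reported as odds ratios (ORs). As a reminder of the underlying 2×2-table arithmetic (the counts used for illustration are hypothetical, not the study's data):

```python
def odds_ratio(exposed_cases: int, exposed_noncases: int,
               unexposed_cases: int, unexposed_noncases: int) -> float:
    """Odds ratio from a 2x2 table: odds of the outcome (e.g., an MCL
    violation) in one group divided by the odds in the reference group."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)
```

An OR above 1 (like the 2.6 reported for systems serving more minority residents) means the outcome is more likely in that group; below 1 (like the 0.33 for high-homeownership systems) means less likely.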

  2. 77 FR 20379 - San Diego Gas &

    Science.gov (United States)

    2012-04-04

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission San Diego Gas & Electric Company v. Sellers of Energy and Ancillary Services Into Markets Operated by the California Independent System Operator Corporation and the California...

  3. Ecología poblacional de Crocodylus acutus en los sistemas estuarinos de San Blas, Nayarit, México / Population ecology of Crocodylus acutus in the estuarine systems of San Blas, Nayarit, Mexico

    Directory of Open Access Journals (Sweden)

    Helios Hernández-Hurtado

    2011-09-01

    Full Text Available This study was conducted in the San Blas estuaries during June 2005 and February 2006 (dry season) and October 2005 and October 2007 (rainy season). The purpose was to characterize the crocodile population in terms of its distribution and abundance. Nocturnal spotlight surveys counting individuals per kilometer were carried out along 6 transects covering a total of 89 kilometers. Transects A, B, and C had densities from 2.68 to 4.31 crocodiles/km (0.05 crocodiles/ha), with 26 active nests, 6 types of vegetation, and salinity ranging from 11.03‰ in the dry season to 4.92‰ in the rainy season. Transects D, E, and F had densities from 0.014 to 0.36 crocodiles/km (0.002 crocodiles/ha), with no nests found, a single type of vegetation, and salinity ranging from 35.85‰ in the dry season to 24.56‰ in the rainy season. Statistical similarity was 80% among A, B, and C, 70% between D and E, and 10% for F relative to the others. The estimated population was 333 crocodiles, with all size classes recorded; the available habitat presents the characteristics necessary for Crocodylus acutus to complete its life cycle.

  4. Effectiveness of various irrigation activation protocols and the self-adjusting file system on smear layer and debris removal.

    Science.gov (United States)

    Çapar, İsmail Davut; Aydinbelge, Hale Ari

    2014-01-01

    The purpose of the present study was to evaluate smear layer generation and residual debris after using the self-adjusting file (SAF) or rotary instrumentation, and to compare the debris and smear layer removal efficacy of the SAF cleaning/shaping irrigation system against final agitation techniques. One hundred and eight maxillary lateral incisor teeth were randomly divided into nine experimental groups (n = 12), and root canals were prepared using ProTaper Universal rotary files, with the exception of the SAF instrumentation group. During instrumentation, root canals were irrigated with a total of 16 mL of 5% NaOCl. For final irrigation, the rotary-instrumented groups were irrigated with 10 mL of 17% EDTA and 10 mL of 5% NaOCl using different irrigation agitation regimens (syringe irrigation with needles, NaviTip FX, manual dynamic irrigation, CanalBrush, EndoActivator, EndoVac, passive ultrasonic irrigation (PUI), and SAF irrigation). In the SAF instrumentation group, root canals were instrumented for 4 min with 5% NaOCl at a rate of 4 mL/min and received a final flush delivered in the same manner as the syringe-irrigation-with-needles regimen. The surface of the root dentin was observed using a scanning electron microscope. The SAF instrumentation group generated less smear layer and yielded cleaner canals compared to rotary instrumentation. The EndoActivator, EndoVac, PUI, and SAF irrigation groups increased the efficacy of the irrigating solutions in smear layer and debris removal. SAF instrumentation yielded cleaner canal walls than rotary instrumentation. None of the techniques completely removed the smear layer from the root canal walls.

  5. A File System for Effective Data Protection

    Institute of Scientific and Technical Information of China (English)

    张游杰

    2012-01-01

    A new file system, PSFS (pseudo sequence file system), is presented that can effectively protect data deleted by mistake. Built around a pseudo circular queue, it makes full use of the ever-increasing capacity of hard disks, flash drives, and other storage devices, guaranteeing that deleted files are not overwritten as long as the storage medium has free space. The structure of the file system is described in detail, and an example illustrates its working principle and its data recovery method. Compared with existing data recovery technologies, the proposed approach recovers data faster and with considerably higher accuracy.
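The pseudo-circular-queue idea can be sketched as an allocation policy: hand out never-used blocks in order, and only when none remain fall back to the oldest freed block, so deleted data survives as long as free space allows. A minimal sketch with assumed names, not the paper's actual on-disk design:

```python
class PseudoCircularAllocator:
    """Allocate blocks in a fixed circular order, reusing freed blocks only
    when no never-used block remains, so deleted data stays recoverable
    for as long as possible (the PSFS idea, greatly simplified)."""

    def __init__(self, total_blocks: int):
        self.total = total_blocks
        self.next_fresh = 0   # first block never handed out
        self.freed = []       # blocks released by deletions, oldest first

    def allocate(self) -> int:
        if self.next_fresh < self.total:
            block = self.next_fresh       # prefer untouched space ...
            self.next_fresh += 1
            return block
        if self.freed:
            return self.freed.pop(0)      # ... only then overwrite deleted data
        raise RuntimeError("no space left")

    def free(self, block: int) -> None:
        self.freed.append(block)          # deleted file's blocks stay intact
```

Under this policy a deleted file's blocks are the last candidates for reuse, which is what makes recovery fast and reliable until the device genuinely fills up.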

  6. Supplement B: Research Networking Systems Characteristics Profiles. A Companion to the OCLC Research Report, Registering Researchers in Authority Files

    Science.gov (United States)

    Smith-Yoshimura, Karen; Altman, Micah; Conlon, Michael; Cristán, Ana Lupe; Dawson, Laura; Dunham, Joanne; Hickey, Thom; Hill, Amanda; Hook, Daniel; Horstmann, Wolfram; MacEwan, Andrew; Schreur, Philip; Smart, Laura; Wacker, Melanie; Woutersen, Saskia

    2014-01-01

    The OCLC Research Report, "Registering Researchers in Authority Files", [Accessible in ERIC as ED564924] summarizes the results of the research conducted by the OCLC Research Registering Researchers in Authority Files Task Group in 2012-2014. Details of this research are in supplementary data sets: (1) "Supplement A: Use Cases. A…

  7. Identifiable Data Files - Denominator File

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Denominator File combines Medicare beneficiary entitlement status information from administrative enrollment records with third-party payer information and GHP...

  8. [A new computerized system for electroencephalography at the Kochi Medical School Hospital: the present status and problems of electroencephalogram data filing systems].

    Science.gov (United States)

    Doi, T; Kataoka, H; Nishida, M; Sasaki, M

    1990-06-01

    Conventional electroencephalography (EEG) recording consumes large amounts of paper, considerable storage space for the records, and much time and energy for their search and retrieval. In addition, digitized analyses of the records cannot be performed with that method. To solve these problems, our laboratory developed a new computerized EEG system, in which data are retained on optical disks, and which has been in service for routine examinations since December 1988. The functions of the system, including its EEG filing component, cover the collection, retention, retrieval, transmission, and analysis of data, with reproduction of the original EEG and editing of summary reports to be filed in the medical records. A summary report consists of a summary, characteristic wave patterns picked out and edited from the EEG, and spectral arrays and topographic maps produced by digitized analysis of the EEG. EEG data were collected at 200 Hz with 8-bit resolution, and the reproduced wave patterns were accepted by all clinicians. The merits of the system include (i) savings in the paper, space, and time needed for EEG, (ii) enabling comparison of wave patterns in the form of summary reports, and (iii) the capability of digitized EEG analyses by retaining the EEG data in the database. The problems remaining to be improved are the longer time required for examination (5-10 min) and the higher running cost (yen 460/order). Regarding the latter, a revised method that dispenses with recording paper is under consideration: for screening examinations, only the summary reports for the medical records would be delivered to clinicians. This idea has been accepted by some clinicians. To realize the revised system, we are presently planning to establish a method to display the EEG on a CRT.
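The stated acquisition format (200 Hz, 8-bit samples) makes it easy to estimate raw storage needs, which is what drives the optical-disk design. A small sketch (the channel count and recording length in the example are hypothetical; the record does not state them):

```python
SAMPLE_RATE_HZ = 200   # from the record: 200 Hz acquisition
BYTES_PER_SAMPLE = 1   # 8-bit samples


def eeg_bytes(channels: int, seconds: float) -> int:
    """Raw storage for an uncompressed recording at the stated format."""
    return int(channels * seconds * SAMPLE_RATE_HZ * BYTES_PER_SAMPLE)


# e.g. a hypothetical 16-channel, 30-minute routine EEG:
#   eeg_bytes(16, 30 * 60) -> 5,760,000 bytes (~5.8 MB per study)
```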

  9. The Design of a Network File Download System Based on .NET

    Institute of Scientific and Technical Information of China (English)

    韩晓菊

    2013-01-01

    A network file download system is an information storage space that users can access by logging in to a website over the Internet to upload, download, and share data. This design discusses in detail how to use ASP.NET to create the administrator and user interfaces of a network file download system.

  10. A File Security Protection System Based on Minifilter

    Institute of Scientific and Technical Information of China (English)

    戈洋洋; 毛宇光

    2013-01-01

    To address security problems such as file leakage in operating systems, a new mandatory access control model that maintains both the confidentiality and the integrity of data files is proposed, based on a study of the classic BLP and Biba access control models. A file security protection system is then developed and implemented on top of the Minifilter file system filter driver framework, and its core logic and key modules are described in detail. The system effectively protects file security according to predefined policies, providing a new solution for data protection.
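The combination of BLP (confidentiality: no read up, no write down) and Biba (integrity: no read down, no write up) that the record describes can be sketched as two lattice checks that must both pass. This is a textbook illustration of the combined policy, not the paper's actual Minifilter implementation:

```python
def blp_allows(op: str, subj_conf: int, obj_conf: int) -> bool:
    """Bell-LaPadula confidentiality: no read up, no write down."""
    return obj_conf <= subj_conf if op == "read" else subj_conf <= obj_conf


def biba_allows(op: str, subj_int: int, obj_int: int) -> bool:
    """Biba integrity: no read down, no write up."""
    return subj_int <= obj_int if op == "read" else obj_int <= subj_int


def access_allowed(op: str, subject: dict, obj: dict) -> bool:
    """Grant access only when both lattices agree (the combined MAC idea)."""
    return (blp_allows(op, subject["conf"], obj["conf"])
            and biba_allows(op, subject["integrity"], obj["integrity"]))
```

In a Minifilter-based design these checks would run in the filter driver's pre-operation callbacks for IRP_MJ_READ/IRP_MJ_WRITE, with labels looked up per process and per file.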

  11. On a File Management System Based on Radio Frequency Identification (RFID)

    Institute of Scientific and Technical Information of China (English)

    崔霄翔

    2014-01-01

    Although paperless offices are vigorously advocated, paper files still prevail and carry a considerable amount of sensitive and even confidential information. Using the remote detection capability of Radio Frequency Identification (RFID) technology and drawing on mature library management systems, this system adopts file monitoring and security alerting to safely and efficiently manage the circulation and security of confidential paper files, reducing the incidence of information leaks and improving the efficiency of confidential paper file management.

  12. Fail-over file transfer process

    Science.gov (United States)

    Semancik, Susan K. (Inventor); Conger, Annette M. (Inventor)

    2005-01-01

    The present invention provides a fail-over file transfer process to handle data file transfer when the transfer is unsuccessful in order to avoid unnecessary network congestion and enhance reliability in an automated data file transfer system. If a file cannot be delivered after attempting to send the file to a receiver up to a preset number of times, and the receiver has indicated the availability of other backup receiving locations, then the file delivery is automatically attempted to one of the backup receiving locations up to the preset number of times. Failure of the file transfer to one of the backup receiving locations results in a failure notification being sent to the receiver, and the receiver may retrieve the file from the location indicated in the failure notification when ready.
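The retry-then-failover logic described above (try the receiver up to a preset number of times, then each backup location, and finally notify the receiver where to retrieve the file) can be sketched directly; the function and parameter names are ours, not the patent's:

```python
MAX_ATTEMPTS = 3  # the "preset number of times" in the description


def transfer_with_failover(send, filename, primary, backups):
    """Try the primary receiver, then each backup location, each up to
    MAX_ATTEMPTS. `send(filename, dest)` is any callable returning True
    on successful delivery. Returns (delivered_to, failure_notice)."""
    for dest in [primary, *backups]:
        for _ in range(MAX_ATTEMPTS):
            if send(filename, dest):
                return dest, None
    # every location failed: tell the receiver where the file can be fetched
    notice = {"file": filename,
              "retrieve_from": "sender",
              "tried": [primary, *backups]}
    return None, notice
```

Capping attempts per destination is what avoids the unnecessary network congestion the abstract mentions; the failure notice lets the receiver pull the file later instead of the sender retrying forever.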

  13. Chemical controls on fault behavior: weakening of serpentinite sheared against quartz-bearing rocks and its significance for fault creep in the San Andreas system

    Science.gov (United States)

    Moore, Diane E.; Lockner, David A.

    2013-01-01

    The serpentinized ultramafic rocks found in many plate-tectonic settings commonly are juxtaposed against crustal rocks along faults, and the chemical contrast between the rock types potentially could influence the mechanical behavior of such faults. To investigate this possibility, we conducted triaxial experiments under hydrothermal conditions (200-350°C), shearing serpentinite gouge between forcing blocks of granite or quartzite. In an ultramafic chemical environment, the coefficient of friction, µ, of lizardite and antigorite serpentinite is 0.5-0.6, and µ increases with increasing temperature over the tested range. However, when either lizardite or antigorite serpentinite is sheared against granite or quartzite, strength is reduced to µ ~ 0.3, with the greatest strength reductions at the highest temperatures (temperature weakening) and slowest shearing rates (velocity strengthening). The weakening is attributed to a solution-transfer process that is promoted by the enhanced solubility of serpentine in pore fluids whose chemistry has been modified by interaction with the quartzose wall rocks. The operation of this process will promote aseismic slip (creep) along serpentinite-bearing crustal faults at otherwise seismogenic depths. During short-term experiments serpentine minerals reprecipitate in low-stress areas, whereas in longer experiments new Mg-rich phyllosilicates crystallize in response to metasomatic exchanges across the serpentinite-crustal rock contact. Long-term shear of serpentinite against crustal rocks will cause the metasomatic mineral assemblages, which may include extremely weak minerals such as saponite or talc, to play an increasingly important role in the mechanical behavior of the fault. Our results may explain the distribution of creep on faults in the San Andreas system.

  14. Mechanisms of aggradation in fluvial systems influenced by explosive volcanism: An example from the Upper Cretaceous Bajo Barreal Formation, San Jorge Basin, Argentina

    Science.gov (United States)

    Umazano, Aldo M.; Bellosi, Eduardo S.; Visconti, Graciela; Melchor, Ricardo N.

    2008-01-01

    The Late Cretaceous succession of the San Jorge Basin (Patagonia, Argentina) records different continental settings that interacted with explosive volcanism derived from a volcanic arc located in the western part of Patagonia. This paper discusses the contrasting aggradational mechanisms in fluvial systems strongly influenced by explosive volcanism which took place during sedimentation of the Bajo Barreal Formation. During deposition of the lower member of the unit, common ash-fall events and scarce sandy debris-flows occurred, indicating syn-eruptive conditions. However, the record of primary pyroclastic deposits is scarce because they were reworked by river flows. The sandy fluvial channels were braided and show evidence of important variations in water discharge. The overbank flows (sheet-floods) represent the main aggradational mechanism of the floodplain. In places, subordinate crevasse-splays and shallow lakes also contributed to the floodplain aggradation. In contrast, deposition of the upper member occurred in a fluvial-aeolian setting without input of primary volcaniclastic detritus, indicating inter-eruptive conditions. The fluvial channels were also braided and flowed across low-relief floodplains that mainly aggraded by deposition of silt-sized sediments of aeolian origin (loess) and, secondarily by sheet-floods. The Bajo Barreal Formation differs from the classic model of syn-eruptive and inter-eruptive depositional conditions in the presence of a braided fluvial pattern during inter-eruptive periods, at least at one locality. This braided fluvial pattern is attributed to the high input of fine-grained pyroclastic material that composes the loessic sediments.

  15. 77 FR 15026 - Privacy Act of 1974; Farm Records File (Automated) System of Records

    Science.gov (United States)

    2012-03-14

    ... entity data; Combined producer data; production and marketing data; Lease and transfer of allotments and... System, Automated Price Support System, Average Crop Revenue Elections, Asparagus Revenue Market Loss...

  16. Composition and Realization of Source-to-Sink High-Performance Flows: File Systems, Storage, Hosts, LAN and WAN

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chase Qishi [Univ. of Tennessee, Memphis, TN (United States)

    2016-12-01

    A number of Department of Energy (DOE) science applications, involving exascale computing systems and large experimental facilities, are expected to generate large volumes of data, in the range of petabytes to exabytes, which will be transported over wide-area networks for the purpose of storage, visualization, and analysis. To support such capabilities, significant progress has been made in various components including the deployment of 100 Gbps networks with future 1 Tbps bandwidth, increases in end-host capabilities with multiple cores and buses, capacity improvements in large disk arrays, and deployment of parallel file systems such as Lustre and GPFS. High-performance source-to-sink data flows must be composed of these component systems, which requires significant optimizations of the storage-to-host data and execution paths to match the edge and long-haul network connections. In particular, end systems are currently supported by 10-40 Gbps Network Interface Cards (NIC) and 8-32 Gbps storage Host Channel Adapters (HCAs), which carry the individual flows that collectively must reach network speeds of 100 Gbps and higher. Indeed, such data flows must be synthesized using multicore, multibus hosts connected to high-performance storage systems on one side and to the network on the other side. Current experimental results show that the constituent flows must be optimally composed and preserved from storage systems, across the hosts and the networks with minimal interference. Furthermore, such a capability must be made available transparently to the science users without placing undue demands on them to account for the details of underlying systems and networks. And, this task is expected to become even more complex in the future due to the increasing sophistication of hosts, storage systems, and networks that constitute the high-performance flows. The objectives of this proposal are to (1) develop and test the component technologies and their synthesis methods to

  17. 75 FR 55975 - Safety Zone; San Diego Harbor Shark Fest Swim; San Diego Bay, San Diego, CA

    Science.gov (United States)

    2010-09-15

    ... SECURITY Coast Guard 33 CFR Part 165 RIN 1625-AA00 Safety Zone; San Diego Harbor Shark Fest Swim; San Diego Bay, San Diego, CA AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast Guard is establishing a temporary safety zone upon the navigable waters of the San Diego Bay, San Diego, CA, in support...

  18. A File Encryption System Using IBE Service

    Institute of Scientific and Technical Information of China (English)

    施健; 陈铁明; 茆俊康

    2012-01-01

    Identity-based encryption (IBE) uses a user's ID directly as her public key, with no need for public key certificates. Compared with a traditional PKI, IBE is easier and cheaper to develop and deploy, and is especially suitable for enterprises with centralized key management. This paper first proposes IBE Service, a Web-service-based IBE key management system that manages IBE keys for users in different security domains and provides a key service centered on user security policies. A general-purpose file encryption client application is then developed on top of IBE Service, using SOAP to exchange XML-encoded IBE key data. The new file encryption system maps the receiver's ID directly to her public key; the receiver automatically obtains her private key from IBE Service to decrypt the file. The system is secure and convenient, and supports flexible ID security policy management.
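The escrow structure behind such a service — a trusted key server ("private key generator") that deterministically derives each user's private key from their ID and a master secret — can be illustrated with an HMAC stand-in for the "extract" step. To be clear about the hedge: real IBE uses pairing-based cryptography so a sender can encrypt knowing only the ID; this sketch only shows the server-side key-derivation shape, and all names and the secret are hypothetical:

```python
import hashlib
import hmac

MASTER_SECRET = b"held only by the IBE Service"  # hypothetical PKG master secret


def extract_private_key(identity: str) -> bytes:
    """The 'extract' step a key server performs on an authenticated request:
    deterministically derive the key registered to an ID. (HMAC stand-in;
    real IBE key extraction is pairing-based, not an HMAC.)"""
    return hmac.new(MASTER_SECRET, identity.encode("utf-8"),
                    hashlib.sha256).digest()
```

Because derivation is deterministic, the service needs no per-user key database: any authenticated user can fetch "her" key at decryption time, which is the workflow the paper's client relies on.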

  19. Projecting Cumulative Benefits of Multiple River Restoration Projects: An Example from the Sacramento-San Joaquin River System in California

    Science.gov (United States)

    Kondolf, G. Mathias; Angermeier, Paul L.; Cummins, Kenneth; Dunne, Thomas; Healey, Michael; Kimmerer, Wim; Moyle, Peter B.; Murphy, Dennis; Patten, Duncan; Railsback, Steve; Reed, Denise J.; Spies, Robert; Twiss, Robert

    2008-12-01

    Despite increasingly large investments, the potential ecological effects of river restoration programs are still small compared to the degree of human alterations to physical and ecological function. Thus, it is rarely possible to “restore” pre-disturbance conditions; rather restoration programs (even large, well-funded ones) will nearly always involve multiple small projects, each of which can make some modest change to selected ecosystem processes and habitats. At present, such projects are typically selected based on their attributes as individual projects (e.g., consistency with programmatic goals of the funders, scientific soundness, and acceptance by local communities), and ease of implementation. Projects are rarely prioritized (at least explicitly) based on how they will cumulatively affect ecosystem function over coming decades. Such projections require an understanding of the form of the restoration response curve, or at least that we assume some plausible relations and estimate cumulative effects based thereon. Drawing on our experience with the CALFED Bay-Delta Ecosystem Restoration Program in California, we consider potential cumulative system-wide benefits of a restoration activity extensively implemented in the region: isolating/filling abandoned floodplain gravel pits captured by rivers to reduce predation of outmigrating juvenile salmon by exotic warmwater species inhabiting the pits. We present a simple spreadsheet model to show how different assumptions about gravel pit bathymetry and predator behavior would affect the cumulative benefits of multiple pit-filling and isolation projects, and how these insights could help managers prioritize which pits to fill.
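The non-additive cumulative effect the authors model can be sketched in a few lines: if each unfilled pit removes some fraction of outmigrating juveniles, total survival is the product of per-pit survival rates, so the benefit of filling any one pit depends on which others are filled. A toy version of such a spreadsheet model (the pit identifiers and mortality fractions are hypothetical, not CALFED data):

```python
def fraction_surviving(pits: dict, filled: set) -> float:
    """Survival of outmigrating juveniles past a chain of gravel pits.

    `pits` maps pit id -> fraction of passing fish lost to predators in
    that pit; filling a pit removes its mortality. Survival multiplies
    across pits, so benefits of multiple projects do not simply add."""
    survival = 1.0
    for pit_id, mortality in pits.items():
        if pit_id not in filled:
            survival *= (1.0 - mortality)
    return survival
```

Ranking candidate projects by the marginal gain in `fraction_surviving` (rather than treating each project in isolation) is the kind of system-wide prioritization the article argues for.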

  20. Infrared studies of Nova Scorpii 2014: an outburst in a symbiotic system sans an accompanying blast wave

    CERN Document Server

    Joshi, Vishal; Ashok, N M; Venkataraman, V; Walter, F M

    2015-01-01

    Near-IR spectroscopy is presented for Nova Scorpii 2014. It is shown that the outburst occurred in a symbiotic binary system - an extremely rare configuration for a classical nova outburst to occur in, but appropriate for the eruption of a recurrent nova of the T CrB class. We estimate the spectral class of the secondary as M5III ($\pm$ two sub-classes). The maximum magnitude versus rate of decline (MMRD) relations give an unacceptably large value of 37.5 kpc for the distance. The spectra are typical of the He/N class of novae with strong HeI and H lines. The profiles are broad and flat-topped, with full widths at zero intensity (FWZIs) approaching 9000-10000 km s$^{-1}$, and also have a sharp narrow component superposed which is attributable to emission from the giant's wind. Hot shocked gas, accompanied by X-rays and $\gamma$ rays, is expected to form when the high-velocity ejecta from the nova plows into the surrounding giant wind. Although X-ray emission was observed, no $\gamma$-ray emission was reported. It is...

  1. Download this PDF file

    African Journals Online (AJOL)

    Fr. Ikenga

    Nigeria; and the gains and challenges of utilizing e-taxation in tax ... teething problems usually encountered in all new schemes. ... is to help tax authorities reduce and possibly eliminate tax evasion. .... issues that include how to use the tax authorities' electronic tax filing system and ..... financial institution for Direct Deposit.

  2. Isotopic evidence for the infiltration of mantle and metamorphic CO2-H2O fluids from below in faulted rocks from the San Andreas Fault System

    Energy Technology Data Exchange (ETDEWEB)

    Pili, E.; Kennedy, B.M.; Conrad, M.E.; Gratier, J.-P.

    2010-12-15

    To characterize the origin of the fluids involved in the San Andreas Fault (SAF) system, we carried out an isotope study of exhumed faulted rocks from deformation zones, vein fillings and their hosts and the fluid inclusions associated with these materials. Samples were collected from segments along the SAF system selected to provide a depth profile from upper to lower crust. In all, 75 samples from various structures and lithologies from 13 localities were analyzed for noble gas, carbon, and oxygen isotope compositions. Fluid inclusions exhibit helium isotope ratios ({sup 3}He/{sup 4}He) of 0.1-2.5 times the ratio in air, indicating that past fluids percolating through the SAF system contained mantle helium contributions of at least 35%, similar to what has been measured in present-day ground waters associated with the fault (Kennedy et al., 1997). Calcite is the predominant vein mineral and is a common accessory mineral in deformation zones. A systematic variation of C- and O-isotope compositions of carbonates from veins, deformation zones and their hosts suggests percolation by external fluids of similar compositions and origin with the amount of fluid infiltration increasing from host rocks to vein to deformation zones. The isotopic trend observed for carbonates in veins and deformation zones follows that shown by carbonates in host limestones, marbles, and other host rocks, increasing with increasing contribution of deep metamorphic crustal volatiles. At each crustal level, the composition of the infiltrating fluids is thus buffered by deeper metamorphic sources. A negative correlation between calcite {delta}{sup 13}C and fluid inclusion {sup 3}He/{sup 4}He is consistent with a mantle origin for a fraction of the infiltrating CO{sub 2}. Noble gas and stable isotope systematics show consistent evidence for the involvement of mantle-derived fluids combined with infiltration of deep metamorphic H{sub 2}O and CO{sub 2} in faulting, supporting the involvement of

  3. Experience, use, and performance measurement of the Hadoop File System in a typical nuclear physics analysis workflow

    Science.gov (United States)

    Sangaline, E.; Lauret, J.

    2014-06-01

    The quantity of information produced in Nuclear and Particle Physics (NPP) experiments necessitates the transmission and storage of data across diverse collections of computing resources. Robust solutions such as XRootD have been used in NPP, but as the usage of cloud resources grows, the difficulties in the dynamic configuration of these systems become a concern. Hadoop File System (HDFS) exists as a possible cloud storage solution with a proven track record in dynamic environments. Though currently not extensively used in NPP, HDFS is an attractive solution offering both elastic storage and rapid deployment. We will present the performance of HDFS in both canonical I/O tests and for a typical data analysis pattern within the RHIC/STAR experimental framework. These tests explore the scaling with different levels of redundancy and numbers of clients. Additionally, the performance of FUSE and NFS interfaces to HDFS were evaluated as a way to allow existing software to function without modification. Unfortunately, the complicated data structures in NPP are non-trivial to integrate with Hadoop and so many of the benefits of the MapReduce paradigm could not be directly realized. Despite this, our results indicate that using HDFS as a distributed filesystem offers reasonable performance and scalability and that it excels in its ease of configuration and deployment in a cloud environment.
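The "canonical I/O tests" mentioned in the abstract reduce, at their simplest, to timed sequential writes and reads. A minimal sketch in plain Python follows; the local temporary directory stands in for a mounted filesystem (for HDFS this would be a FUSE or NFS mount point), and all sizes and paths are illustrative assumptions, not parameters from the study:

```python
import os
import tempfile
import time

def throughput(path, size_mb=64, block_kb=1024):
    """Write then read `size_mb` MiB in `block_kb` KiB blocks inside `path`;
    return (write_MBps, read_MBps)."""
    block = b"\0" * (block_kb * 1024)
    n_blocks = size_mb * 1024 // block_kb
    fname = os.path.join(path, "iotest.bin")

    t0 = time.perf_counter()
    with open(fname, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())          # force data to the backing store
    write_mbs = size_mb / (time.perf_counter() - t0)

    t0 = time.perf_counter()
    with open(fname, "rb") as f:
        while f.read(block_kb * 1024):  # sequential read until EOF
            pass
    read_mbs = size_mb / (time.perf_counter() - t0)

    os.remove(fname)
    return write_mbs, read_mbs

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        w, r = throughput(d, size_mb=8)
        print(f"write {w:.1f} MB/s, read {r:.1f} MB/s")
```

Scaling tests of the kind the abstract describes would run this concurrently from many clients; the read figure here is typically inflated by the page cache, which real benchmarks must account for.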

  4. Evaluation of the Efficacy of TRUShape and Reciproc File Systems in the Removal of Root Filling Material: An Ex Vivo Micro-Computed Tomographic Study.

    Science.gov (United States)

    de Siqueira Zuolo, Arthur; Zuolo, Mario Luis; da Silveira Bueno, Carlos Eduardo; Chu, Rene; Cunha, Rodrigo Sanches

    2016-02-01

    The purpose of this study was to evaluate the efficacy of TRUShape (Dentsply Tulsa Dental Specialties, Tulsa, OK) compared with the Reciproc file (VDW, Munich, Germany) in the removal of filling material from oval canals filled with 2 different sealers, as well as differences in working time. Sixty-four mandibular canines with oval canals were prepared and divided into 4 groups (n = 16). Half of the specimens were filled with gutta-percha and pulp canal sealer (PCS), and the remainder were filled with gutta-percha and bioceramic sealer (BCS). The specimens were retreated using either the Reciproc or TRUShape files. A micro-computed tomographic scanner was used to assess filling material removal, and the time taken for removal was also recorded. Data were analyzed using the Kruskal-Wallis and Mann-Whitney U tests. The mean volume of the remaining filling material was similar when comparing both files (P ≥ .05). However, in the groups filled with BCS, the percentage of remaining filling material was higher than in the groups filled with PCS (P < .05). There was no difference in the removal of filling material when comparing both file systems; however, Reciproc was faster than TRUShape. BCS groups exhibited significantly more remaining filling material in the canals and required more time for retreatment. Remaining filling material was observed in all samples regardless of the technique or sealer used. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  5. RadNet Air Data From San Diego, CA

    Science.gov (United States)

    This page presents radiation air monitoring and air filter analysis data for San Diego, CA from EPA's RadNet system. RadNet is a nationwide network of monitoring stations that measure radiation in air, drinking water and precipitation.

  6. Use of a Geographic Information System and lichens to map air pollution in a tropical city: San José, Costa Rica

    Directory of Open Access Journals (Sweden)

    Erich Neurohr Bustamante

    2013-06-01

    Full Text Available There are no studies of air pollution bio-indicators based on Geographic Information Systems (GIS) for Costa Rica. In this study we present the results of a project that analyzed tree-trunk lichens as bioindicators of air pollution in 40 urban parks located along the passage of wind through the city of San José in 2008 and 2009. The data were processed with GIS and are presented in an easy-to-understand, color-coded isoline map. Our results are consistent with the generally accepted view that lichens respond to the movement of air masses, decreasing their cover in polluted areas. Furthermore, lichen cover matched the concentration of atmospheric nitrogen oxides from a previous study of the same area. Our maps should be incorporated into urban regulatory plans for the city of San José to zone the location of schools, hospitals, and other facilities in need of clean air, and to inexpensively assess the risk of breast cancer and respiratory diseases in neighborhoods throughout the city.

  7. 77 FR 20793 - Marine Mammals; File No. 16599

    Science.gov (United States)

    2012-04-06

    ... National Oceanic and Atmospheric Administration RIN 0648-XA905 Marine Mammals; File No. 16599 AGENCY... Dorian Houser, Ph.D., National Marine Mammal Foundation, 2240 Shelter Island Drive, 200, San Diego, CA... has been issued under the authority of the Marine Mammal Protection Act of 1972, as amended (MMPA; 16...

  8. 78 FR 7755 - Marine Mammals; File No. 17754

    Science.gov (United States)

    2013-02-04

    ... National Oceanic and Atmospheric Administration RIN 0648-XC477 Marine Mammals; File No. 17754 AGENCY: National Marine Fisheries Service (NMFS), National Oceanic and Atmospheric Administration (NOAA), Commerce...World 1404-18 Higashi-cho, Kamogawa, Chiba, Japan to Sea World San Antonio. The applicant requests...

  9. Tank waste remediation system year 2000 dedicated file server project HNF-3418 project plan

    Energy Technology Data Exchange (ETDEWEB)

    SPENCER, S.G.

    1999-04-26

    The purpose of the Server Project is to ensure that no TWRS supporting hardware (file servers and workstations) causes a system failure because its BIOS or operating system cannot process Year 2000 dates.

  10. Registering Researchers in Authority Files

    NARCIS (Netherlands)

    Altman, M.; Conlon, M.; Cristan, A.L.; Dawson, L.; Dunham, J.; Hickey, T.; Hook, D.; Horstmann, W.; MacEwan, A.; Schreur, P.; Smart, L.; Smith-Yoshimura, K.; Wacker, M.; Woutersen, S.

    2014-01-01

    Registering researchers in some type of authority file or identifier system has become more compelling as both institutions and researchers recognize the need to compile their scholarly output. The report presents functional requirements and recommendations for six stakeholders: researchers, funders

  11. Predicting Parallel File System Performance via Machine Learning

    Institute of Scientific and Technical Information of China (English)

    赵铁柱; 董守斌; Verdi March; Simon See

    2011-01-01

    Parallel file systems can effectively solve the problems of massive data storage and I/O bottlenecks in high-performance computing systems. Because the factors affecting system performance are complex and their impact is not clearly understood, effectively evaluating and predicting that performance is a potential challenge and research hotspot. Taking the performance evaluation and prediction of parallel file systems as the research goal, and after studying the architecture and performance factors of such file systems, we design a machine-learning-based predictive model for parallel file systems. We use feature-selection algorithms to reduce the number of performance factors to be tested, and mine the particular relationship between system performance and its impact factors to predict the performance of a specific file system. We evaluate and predict the performance of a specific Lustre file system through a large set of experimental cases. The results indicate that threads/OST, the number of object storage servers (OSSs), the number of disks, and the RAID organization are the four most important parameters for tuning system performance, and that the mean relative error of the predictions can be kept between 25.1% and 32.1%, showing good prediction accuracy.
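The approach the abstract outlines, ranking candidate performance factors and then fitting a predictive model on the strongest one, can be sketched in miniature. The factor names echo the abstract, but all measurement values below are hypothetical illustrations, not the paper's data:

```python
from statistics import mean

def pearson(xs, ys):
    """Sample correlation coefficient; returns 0.0 when either series is constant."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

def rank_features(factors, target):
    """Feature selection: order factor names by |correlation| with the target metric."""
    return sorted(factors, key=lambda k: abs(pearson(factors[k], target)), reverse=True)

def fit_predict(xs, ys):
    """One-factor least-squares model; returns in-sample predictions."""
    mx, my = mean(xs), mean(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return [my + slope * (x - mx) for x in xs]

# Hypothetical benchmark runs: two candidate factors vs. measured throughput (MB/s).
factors = {
    "threads_per_ost": [1, 2, 4, 8, 16],
    "raid_level":      [5, 5, 6, 6, 5],
}
throughput = [110, 205, 390, 760, 1400]

ranked = rank_features(factors, throughput)          # strongest factor first
preds = fit_predict(factors[ranked[0]], throughput)  # predict from the top factor
rel_err = mean(abs(p - y) / y for p, y in zip(preds, throughput))
```

A real study would use richer feature-selection algorithms and held-out test cases rather than in-sample error, but the pipeline shape is the same: select, fit, then report mean relative error.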

  12. San Francisco Bay Area Baseline Trash Loading Summary Results, San Francisco Bay Area CA, 2012, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — The San Francisco Bay Area stormwater permit sets trash control guidelines for discharges through the storm drain system. The permit covers Alameda, Contra Costa,...

  13. Measuring Hg and MeHg fluxes from dynamic systems using high resolution in situ monitoring - case study: the Sacramento-San Joaquin Delta

    Science.gov (United States)

    Fleck, J. A.; Bergamaschi, B. A.; Downing, B. D.; Lionberger, M. A.; Schoellhamer, D.; Boss, E.; Heim, W.; Stephenson, M.

    2006-12-01

    Quantifying net loads in tidal systems is difficult, time consuming, and often very expensive. Owing to the relatively rapid nature of tidal exchange, numerous measurements are required in a brief amount of time to accurately quantify constituent fluxes between a tidal wetland and its surrounding waters. Further complicating matters, the differences in chemical concentrations of a constituent between the flood and ebb tides are often small, so that the net export of the constituent is orders of magnitude smaller than the bulk exchange in either direction over the tidal cycle. Thus, high-resolution sampling coupled with high-sensitivity instruments over an adequate amount of time is required to accurately determine a net flux. These complications are exacerbated for mercury species because of the difficulties related to clean sampling and trace-level analysis. The USGS currently is collecting data to determine the fluxes of total mercury (Hg) and methyl-Hg (MeHg) in dissolved and particulate phases at Browns Island in the San Francisco Bay-Delta, a tidally influenced estuarine system. Our field deployment package consists of an upward-looking current profiler to quantify water flux, and an array of other instruments measuring the following parameters: UV absorption, DO, pH, salinity, temperature, water depth, optical backscatter, fluorescence, and spectral attenuation. Measurements are collected at 30-minute intervals for seasonal, month-long deployments in the main slough of Brown's Island. We infer Hg and MeHg concentrations by using multivariate analysis of spectral absorbance and fluorescence properties of the continuous measurements, and comparing them to those of discrete samples taken hourly over a 25-hour tidal cycle for each deployment. Preliminary results indicate that in situ measurements can be used to predict MeHg concentrations in a tidal wetland slough in both the filtered (r2=0.96) and unfiltered (r2=0.95) fractions. 
Despite seasonal differences in
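The net-flux computation this study depends on is a discrete time-integral of discharge times concentration over the tidal record. A sketch follows; the semidiurnal series is synthetic and every amplitude is purely illustrative:

```python
import math

def net_flux(discharge, concentration, dt_s):
    """Net load over a record: sum of Q_i * C_i * dt (discrete time-integral).
    Positive discharge is taken as export; units are the caller's choice."""
    return sum(q * c * dt_s for q, c in zip(discharge, concentration))

dt = 1800.0                         # 30-minute sampling, as in the deployment above
t_h = [0.5 * i for i in range(50)]  # ~25-hour tidal cycle
# Synthetic semidiurnal tide (period 12.42 h); values are illustrative only.
Q = [120.0 * math.sin(2 * math.pi * t / 12.42) for t in t_h]            # discharge, m3/s
C = [2.0 + 0.3 * math.sin(2 * math.pi * t / 12.42 - 0.4) for t in t_h]  # concentration proxy

net = net_flux(Q, C, dt)                            # small signed export
gross = sum(abs(q) * c * dt for q, c in zip(Q, C))  # bulk exchange in both directions
# |net| is far smaller than gross, which is why high-resolution,
# high-sensitivity sampling is needed to resolve it.
```

The slight phase lag between Q and C is what produces a non-zero net term; with ebb and flood concentrations identical, the net flux would integrate to nearly zero.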

  14. Design and Implementation of an Android-Based Versatile File Management System

    Institute of Scientific and Technical Information of China (English)

    谭忠兵; 苏斯灿

    2012-01-01

    The system presented in this paper is an Android-based versatile file management system. The aim is to develop an easy-to-use management application on the Android platform; its main functions include file browsing, modifying file permissions, adding files to a file library, browsing file history, file compression, and file encryption.

  15. Establishing a VxWorks TrueFFS File System with NOR Flash

    Institute of Scientific and Technical Information of China (English)

    邵富杰; 徐云宽

    2012-01-01

    This paper describes how to establish the TrueFFS file system of the embedded real-time operating system VxWorks, taking the SST39VF1601 NOR flash as an example. First, full DOS file system support is configured, including the core TrueFFS component and the translation-layer component matching the technology used by the SST39VF1601. Then, the MTD-layer and socket-layer drivers are written. Finally, the VxWorks DOS file system is mounted on the TrueFFS NOR flash drive and a simple test is carried out.

  16. Research on the Scalability of File Systems in a Cloud Storage Environment

    Institute of Scientific and Technical Information of China (English)

    李晓慧; 李战怀; 张晓; 李兆虎

    2012-01-01

    In a cloud storage environment, servers need to allocate storage space to clients dynamically. To make better use of storage resources, this paper addresses how to grow and shrink the recognized size of a file system volume. Focusing on NTFS volumes, it describes the key NTFS data structures related to capacity management and designs and implements a tool that automatically expands or shrinks a system volume according to the disk space a user requests. Experiments show that the tool can quickly expand or shrink the recognized size of a file system volume without data corruption.

  17. Executive summary--2002 assessment of undiscovered oil and gas resources in the San Juan Basin Province, exclusive of Paleozoic rocks, New Mexico and Colorado: Chapter 1 in Total petroleum systems and geologic assessment of undiscovered oil and gas resources in the San Juan Basin Province, exclusive of Paleozoic rocks, New Mexico and Colorado

    Science.gov (United States)

    ,

    2013-01-01

    In 2002, the U.S. Geological Survey (USGS) estimated undiscovered oil and gas resources that have the potential for additions to reserves in the San Juan Basin Province (5022), New Mexico and Colorado (fig. 1). Paleozoic rocks were not appraised. The last oil and gas assessment for the province was in 1995 (Gautier and others, 1996). There are several important differences between the 1995 and 2002 assessments. The area assessed is smaller than that in the 1995 assessment. This assessment of undiscovered hydrocarbon resources in the San Juan Basin Province also used a slightly different approach in the assessment, and hence a number of the plays defined in the 1995 assessment are addressed differently in this report. After 1995, the USGS has applied a total petroleum system (TPS) concept to oil and gas basin assessments. The TPS approach incorporates knowledge of the source rocks, reservoir rocks, migration pathways, and time of generation and expulsion of hydrocarbons; thus the assessments are geologically based. Each TPS is subdivided into one or more assessment units, usually defined by a unique set of reservoir rocks, but which have in common the same source rock. Four TPSs and 14 assessment units were geologically evaluated, and for 13 units, the undiscovered oil and gas resources were quantitatively assessed.

  18. A Javascript library that uses Windows Script Host (WSH) to analyze prostate motion data fragmented across a multitude of Excel files by the Calypso 4D Localization System.

    Science.gov (United States)

    Vali, Faisal S; Hsi, Alex; Cho, Paul; Parsai, Homayon; Garver, Elizabeth; Garza, Richard

    2008-11-06

    The Calypso 4D Localization System records prostate motion continuously during radiation treatment. It stores the data across thousands of Excel files. We developed JavaScript (JScript) libraries for Windows Script Host (WSH) that use ActiveX Data Objects, OLE Automation, and SQL to statistically analyze the data and display the results as a comprehensible Excel table. We then leveraged these libraries in other research to perform vector math on data spread across multiple Microsoft Access databases.
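The original work used JScript/WSH with ActiveX and SQL; the underlying aggregation pattern, pooling records fragmented across many files into one summary, can be sketched in Python with CSV stand-ins for the Excel files. The column names `x`, `y`, `z` and the summary statistics are assumptions for illustration, not the authors' schema:

```python
import csv
import glob
import math
import os
import tempfile

def summarize(folder):
    """Pool x/y/z offsets from every CSV in `folder`; return
    (sample count, mean, max) of the 3-D displacement magnitude."""
    n, total, peak = 0, 0.0, 0.0
    for path in sorted(glob.glob(os.path.join(folder, "*.csv"))):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                r = math.sqrt(float(row["x"]) ** 2
                              + float(row["y"]) ** 2
                              + float(row["z"]) ** 2)
                n, total, peak = n + 1, total + r, max(peak, r)
    return n, (total / n if n else 0.0), peak

if __name__ == "__main__":
    # Build two tiny stand-in data files and summarize them.
    with tempfile.TemporaryDirectory() as d:
        for name, rows in [("f1.csv", [("3", "4", "0")]),
                           ("f2.csv", [("0", "0", "2")])]:
            with open(os.path.join(d, name), "w", newline="") as f:
                w = csv.writer(f)
                w.writerow(["x", "y", "z"])
                w.writerows(rows)
        print(summarize(d))  # (2, 3.5, 5.0)
```

Streaming the per-file records through a single accumulator, rather than loading thousands of files at once, is what makes this pattern tractable at the scale the abstract describes.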

  19. The Principle and Design Method of Windows NT File System Drivers

    Institute of Scientific and Technical Information of China (English)

    韩德志; 钟铭

    2001-01-01

    The paper discusses in detail the working principle of the Windows NT file system driver and the data structures of the file system driver layer, and summarizes a design method for file system drivers through the design process of a practical file system driver.

  20. San Cástulo

    OpenAIRE

    Jaramillo, Tania

    2014-01-01

    Why don't you come closer so we can understand each other; we fall through the greed of the colonia, lose ourselves at the corner of San Cástulo and fly off to Eleuterio, in a night, may the moon watch over us, wait for us, hold back the day, and may the people stay asleep, or awake but fearful of the night, of the police and the criminals, of the rapists and of us, of the nightlife, of that dark place somewhere, where we transform and howl.

  2. Coma blisters sans coma.

    Science.gov (United States)

    Heinisch, Silke; Loosemore, Michael; Cusack, Carrie A; Allen, Herbert B

    2012-09-01

    Coma blisters (CBs) are self-limited lesions that occur in regions of pressure during unconscious states classically induced by barbiturates. We report a case of CBs sans coma that were histologically confirmed in a 41-year-old woman who developed multiple tense abdominal bullae with surrounding erythema following a transatlantic flight. Interestingly, the patient was fully conscious and denied medication use or history of medical conditions. A clinical diagnosis of CBs was confirmed by histopathologic findings of eccrine gland necrosis, a hallmark of these bullous lesions.

  3. Comparison of Apical Extrusion of Debris by Using Single-File, Full-Sequence Rotary and Reciprocating Systems

    Science.gov (United States)

    Ehsani, Maryam; Harandi, Azadeh; Tavanafar, Saeid; Raoof, Maryam; Galledar, Saeedeh

    2016-01-01

    Objectives: During root canal preparation, apical extrusion of debris can cause inflammation, flare-ups, and delayed healing. Therefore, instrumentation techniques that cause the least extrusion of debris are desirable. This study aimed to compare apical extrusion of debris by five single-file, full-sequence rotary and reciprocating systems. Materials and Methods: One hundred twenty human mandibular premolars with similar root lengths, apical diameters, and canal curvatures were selected and randomly assigned to six groups (n=20): Reciproc R25 (25, 0.08), WaveOne Primary (25, 0.08), OneShape (25, 0.06), F360 (25, 0.04), Neoniti A1 (25, 0.08), and ProTaper Universal. Instrumentation of the root canals was performed in accordance with the manufacturers' instructions. Each tooth's debris was collected in a pre-weighed vial. After drying the debris in an incubator, the mass was measured three times consecutively; the mean was then calculated. The preparation time of each system was also measured. For data analysis, one-way ANOVA and the Games-Howell post hoc test were used. Results: The mean masses (±standard deviation) of the apical debris were as follows: 2.071±1.38mg (ProTaper Universal), 1.702±1.306mg (Neoniti A1), 1.295±0.839mg (OneShape), 1.109±0.676mg (WaveOne), 0.976±0.478mg (Reciproc) and 0.797±0.531mg (F360). Compared to ProTaper Universal, F360 generated significantly less debris (P=0.02). The ProTaper system required the longest preparation time (mean=88.6 seconds); the Reciproc (P=0.008), OneShape (P=0.006), and F360 (P=0.001) required significantly less time than ProTaper Universal.

  4. Modeling and simulation of a high-performance PACS based on a shared file system architecture

    Science.gov (United States)

    Meredith, Glenn; Anderson, Kenneth R.; Wirsz, Emil; Prior, Fred W.; Wilson, Dennis L.

    1992-07-01

    Siemens and Loral Western Development Labs have designed a Picture Archiving and Communication System capable of supporting a large, fully digital hospital. Its functions include the management, storage and retrieval of medical images. The system may be modeled as a heterogeneous network of processing elements, transfer devices and storage units. Several discrete event simulation models have been designed to investigate different levels of the design. These models include the System Model, focusing on the flow of image traffic throughout the system, the Workstation Models, focusing on the internal processing in the different types of workstations, and the Communication Network Model, focusing on the control communication and host computer processing. The first two of these models are addressed here, with reference being made to a separate paper regarding the Communication Network Model. This paper describes some of the issues addressed with the models, the modeling techniques used and the performance results from the simulations. Important parameters of interest include: time to retrieve images from different possible storage locations and the utilization levels of the transfer devices and other key hardware components. To understand system performance under fully loaded conditions, the proposed system for the Madigan Army Medical Center was modeled in detail, as part of the Medical Diagnostic Imaging Support System (MDIS) proposal.

  5. Strengthening Quality File Management to Ensure the Effective Running of the Quality Management System

    Institute of Scientific and Technical Information of China (English)

    蔡新华; 朱红琴; 吴卫琴

    2012-01-01

    Quality file management is an important part of the quality management system. The paper discusses the practice of quality file management in terms of establishing and improving the quality file management system, and in terms of the filing, acceptance checking, file numbering, retention periods, custody, lending, appraisal and destruction, and continuous improvement of files. Implementing these measures ensures effective management of the quality files and provides a guarantee for the effective operation of the quality management system.

  6. Improving the Flash Flood Frequency Analysis using dendrogeomorphological evidences in the Arenal River crossing Arenas de San Pedro Village (Spanish Central System)

    Science.gov (United States)

    Ruiz-Villanueva, V.; Ballesteros, J. A.; Díez-Herrero, A.; Bodoque, J. M.

    2009-04-01

    Flash flood frequency analysis in mountainous catchments presents specific scientific challenges. One challenge is the strong gradient of precipitation intensity with altitude; another is the lack of information from rainfall or discharge gauging stations or from documentary sources. Dendrogeomorphology studies the response in wood growth patterns and the botanical signs on trees affected by geomorphological processes. With regard to flood frequency, dendrogeomorphological evidence provides valuable information about individual past events (with annual or even seasonal precision) and their recurrence. The main macro-evidence found in a tree trunk is a stem scar originating from a wound in the bark; as the tree grows, this wound remains recorded in the tree-ring sequence. The best way to analyze the tree-ring sequence is with a complete cross-section of the trunk, which is impossible unless the tree is felled; because felling is usually unfeasible, in dendrogeomorphology it normally suffices to obtain an increment core with a Pressler borer. This study, however, was based on the analysis of complete stem sections made available by felling works in the riverine vegetation of the Arenal River carried out by the Tagus River Water Authority, which allowed us to obtain sections and to analyze tree stumps in situ. In this way, 100 samples of Alnus glutinosa and Fraxinus angustifolia located along the Arenal River where it crosses Arenas de San Pedro Village (Ávila, northern slopes of the Gredos Mountain Range in the Spanish Central System) were analyzed. This village is known for its historical flooding problems during extreme events. Meticulous fieldwork was carried out: every sample was analyzed by recording its geomorphological position, its distance to the riverbed, and the height on the stump at which the evidence was observed.
Using a

  7. A File Encryption System Based on Triple DES and RSA

    Institute of Scientific and Technical Information of China (English)

    胡振

    2012-01-01

    Comparing the characteristics of symmetric and asymmetric cryptography, this paper outlines the basic principles of the Triple DES and RSA algorithms. On the basis of a detailed analysis of file-security issues and an in-depth study of the .NET Framework cryptography classes, it proposes a file encryption scheme that combines the Triple DES and RSA algorithms, designs the system's overall structure and basic workflow, and implements the system in VB.NET. Practice shows that the system makes file encryption simple and convenient.
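The hybrid scheme the paper describes, encrypting the file with a fast symmetric session key and then encrypting that key with RSA, can be sketched with deliberately toy stand-ins: an XOR stream in place of Triple DES and textbook RSA with tiny primes in place of real RSA. This shows the structure only and is in no way secure:

```python
import os

# Toy stand-ins only: an XOR stream replaces Triple DES, and textbook RSA
# with tiny primes replaces real RSA. Never use either in practice.

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Symmetric stand-in: the same call both encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

P, Q, E = 61, 53, 17                 # toy RSA parameters
N = P * Q                            # public modulus
D = pow(E, -1, (P - 1) * (Q - 1))    # private exponent (Python 3.8+ modular inverse)

def rsa_wrap(byte_val: int) -> int:
    """Encrypt one session-key byte with the public key."""
    return pow(byte_val, E, N)

def rsa_unwrap(c: int) -> int:
    """Recover a session-key byte with the private key."""
    return pow(c, D, N)

def encrypt_file_bytes(plaintext: bytes):
    """Hybrid step: random symmetric session key, wrapped byte-by-byte under RSA."""
    session_key = os.urandom(8)
    wrapped = [rsa_wrap(b) for b in session_key]
    return wrapped, xor_stream(session_key, plaintext)

def decrypt_file_bytes(wrapped, ciphertext):
    session_key = bytes(rsa_unwrap(c) for c in wrapped)
    return xor_stream(session_key, ciphertext)
```

The design point is the one the abstract makes: the bulk data moves under the fast symmetric cipher, while only the short session key pays the cost of the asymmetric operation.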

  8. Design of the NSFS File System Based on the Bitmap Method

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

    The file system is the core component of an operating system for data storage; it provides both data storage and data management. An operating system without a file system could only compute on data: the results could not persist beyond random-access memory, and all data and information would be lost when the computer powers off. A file system is therefore an essential component of any modern operating system. As information technology advances rapidly, file systems in embedded operating systems play an increasingly important role in many fields; in the aviation industry in particular, an embedded operating system must provide a file system that is convenient, easy to use, high-performance, and highly reliable. NSFS is a highly available file system based on the bitmap method. Its architecture is concise, and cluster size is allocated adaptively according to file size, so wasted storage space is minimal. NSFS thus offers good performance together with high flexibility and reliability.
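The bitmap method named in the title can be sketched as a one-bit-per-cluster allocator with a first-fit search for a contiguous free run; the adaptive cluster sizing the abstract mentions would be a policy layer above this. All names here are illustrative, not taken from NSFS:

```python
class BitmapAllocator:
    """Sketch of bitmap-style cluster accounting: one bit per cluster,
    first-fit search for a contiguous free run."""

    def __init__(self, n_clusters):
        self.bits = [False] * n_clusters   # False = free, True = in use

    def allocate(self, count):
        """Return the first cluster of a free run of `count`, or -1 if none."""
        run = 0
        for i, used in enumerate(self.bits):
            run = 0 if used else run + 1
            if run == count:
                start = i - count + 1
                for j in range(start, i + 1):
                    self.bits[j] = True    # mark the run as allocated
                return start
        return -1

    def free(self, start, count):
        """Clear the bits for a released extent."""
        for j in range(start, start + count):
            self.bits[j] = False
```

A bitmap keeps the free-space metadata tiny (one bit per cluster), which is why variants of this scheme appear in many production file systems; a real implementation would persist the bitmap and guard it against crashes.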

  9. Design of a File Management System Interface Platform Based on Middleware

    Institute of Scientific and Technical Information of China (English)

    张伟娜

    2011-01-01

    The file management system is used for collecting files from government units and organizing and cataloging them. The interface platform ensures that each file management subsystem and the file management system can communicate directly, and middleware can be used to design and implement the interfaces of the distributed file management system effectively. This paper studies a middleware-based file management system interface, focusing on the specific application of middleware technology in that interface, and designs the interface platform and its security to guarantee a seamless connection between systems. Applying middleware to the interface design realizes data exchange between systems and streamlines the development of the file management system.

  10. 76 FR 46774 - Privacy Act of 1974; System of Records-Federal Student Aid Application File

    Science.gov (United States)

    2011-08-03

    ... reading system, and the entire complex is patrolled by security personnel during non-business hours. The...; Loan Satisfactory Repayment Change; Active Bankruptcy Change; Overpayments Change; Aggregate Loan Change; Defaulted Loan; Discharged Loan; Loan Satisfactory Repayment; Active Bankruptcy; Additional Loans...

  11. Verification of respiratory-gated radiotherapy with new real-time tumour-tracking radiotherapy system using cine EPID images and a log file

    Science.gov (United States)

    Shiinoki, Takehiro; Hanazawa, Hideki; Yuasa, Yuki; Fujimoto, Koya; Uehara, Takuya; Shibuya, Keiko

    2017-02-01

    A combined system comprising the TrueBeam linear accelerator and a new real-time tumour-tracking radiotherapy system, SyncTraX, was installed at our institution. The objectives of this study are to develop a method for the verification of respiratory-gated radiotherapy with SyncTraX using cine electronic portal image device (EPID) images and a log file and to verify this treatment in clinical cases. Respiratory-gated radiotherapy was performed using TrueBeam and the SyncTraX system. Cine EPID images and a log file were acquired for a phantom and three patients during the course of the treatment. Digitally reconstructed radiographs (DRRs) were created for each treatment beam using a planning CT set. The cine EPID images, log file, and DRRs were analysed using a developed software. For the phantom case, the accuracy of the proposed method was evaluated to verify the respiratory-gated radiotherapy. For the clinical cases, the intra- and inter-fractional variations of the fiducial marker used as an internal surrogate were calculated to evaluate the gating accuracy and set-up uncertainty in the superior-inferior (SI), anterior-posterior (AP), and left-right (LR) directions. The proposed method achieved high accuracy for the phantom verification. For the clinical cases, the intra- and inter-fractional variations of the fiducial marker were  ⩽3 mm and  ±3 mm in the SI, AP, and LR directions. We proposed a method for the verification of respiratory-gated radiotherapy with SyncTraX using cine EPID images and a log file and showed that this treatment is performed with high accuracy in clinical cases. This work was partly presented at the 58th Annual meeting of American Association of Physicists in Medicine.

  13. 48 CFR 4.802 - Contract files.

    Science.gov (United States)

    2010-10-01

    ... 48 CFR 4.802 (2010-10-01), Federal Acquisition Regulations System, Federal Acquisition Regulation, General Administrative ... a locator system should be established to ensure the ability to locate promptly any contract files. ...

  14. San Diego's Capital Planning Process

    Science.gov (United States)

    Lytton, Michael

    2009-01-01

    This article describes San Diego's capital planning process. As part of its capital planning process, the San Diego Unified School District has developed a systematic analysis of functional quality at each of its school sites. The advantage of this approach is that it seeks to develop and apply quantifiable metrics and standards for the more…

  15. Los Angeles og San Francisco

    DEFF Research Database (Denmark)

    Ørstrup, Finn Rude

    1998-01-01

    Compendium prepared for a study trip to Los Angeles and San Francisco, April-May 1998. Kunstakademiets Arkitektskole, Institut 3H.

  16. Data Sharing in a File Structured QoS Aware Peer-To-Peer System

    Directory of Open Access Journals (Sweden)

    A. Samydurai

    2014-05-01

    Peer-to-peer (P2P) systems can build an efficient, unified network structure for location-based, secure data transfer with information sharing while minimizing failures. In this study we characterize data transfer by a set of metrics that let users assess how efficiently data reaches the end node. For this purpose we use QoS metrics such as bandwidth, lookup time, delay, response time, and round-trip time. The study concentrates on search operations over the P2P network for data retrieval, and a replication strategy is used to increase the probability of a successful search. The lookup employs two types of search, ring search and binary search. Performance analysis charts the quality of data sharing and searching in the proposed system, and evaluations show it compares favourably with earlier systems.
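The contrast between the two lookup styles the abstract names can be sketched on a toy structured overlay. Everything here is an assumed illustration (node IDs, the successor-replication scheme, and hop counting are not taken from the paper): a ring search walks node by node, while a binary search over the sorted ID space jumps straight to the key's successor.

```python
import bisect

class RingOverlay:
    """Toy structured P2P ring: nodes sorted by ID; each key is replicated
    on the next `replicas` successor nodes (assumed scheme for the sketch)."""

    def __init__(self, node_ids, replicas=2):
        self.nodes = sorted(node_ids)
        self.replicas = replicas
        self.store = {n: set() for n in self.nodes}

    def _successor_index(self, key):
        # First node whose ID >= key, wrapping around the ring.
        return bisect.bisect_left(self.nodes, key) % len(self.nodes)

    def put(self, key):
        # Replicate the key on `replicas` consecutive successors.
        i = self._successor_index(key)
        for r in range(self.replicas):
            self.store[self.nodes[(i + r) % len(self.nodes)]].add(key)

    def ring_search(self, key, start=0):
        """Linear walk around the ring: O(N) hops in the worst case."""
        hops = 0
        for step in range(len(self.nodes)):
            node = self.nodes[(start + step) % len(self.nodes)]
            hops += 1
            if key in self.store[node]:
                return node, hops
        return None, hops

    def binary_search(self, key):
        """Jump to the key's successor via binary search: one logical hop."""
        node = self.nodes[self._successor_index(key)]
        return (node, 1) if key in self.store[node] else (None, 1)
```

Replication means either search succeeds as long as any one replica holder is reachable, which is the failure-minimization effect the abstract points to.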

  17. Common File Formats.

    Science.gov (United States)

    Mills, Lauren

    2014-03-21

    An overview of the many file formats commonly used in bioinformatics and genome sequence analysis is presented, including various data file formats, alignment file formats, and annotation file formats. Example workflows illustrate how some of the different file types are typically used.
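FASTA is one of the sequence data formats such overviews typically cover; as a concrete illustration (not taken from the cited article), a minimal parser for its header-plus-wrapped-sequence layout might look like:

```python
from io import StringIO

def parse_fasta(handle):
    """Yield (header, sequence) pairs from a FASTA-format text handle.

    Minimal sketch: a record starts at a '>' line; subsequent lines are
    sequence data, possibly wrapped, which we join back together.
    """
    header, seq = None, []
    for line in handle:
        line = line.strip()
        if not line:
            continue                      # skip blank lines
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line[1:], []    # drop the leading '>'
        else:
            seq.append(line)
    if header is not None:                # flush the final record
        yield header, "".join(seq)

records = list(parse_fasta(StringIO(">seq1 demo\nACGT\nTTGA\n>seq2\nGGCC\n")))
```

Alignment (e.g. SAM/BAM) and annotation (e.g. GFF/BED) formats are line- or record-oriented in a similar way, which is why such small parsers compose well into workflows.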

  18. Biological and associated water-quality data for lower Olmos Creek and upper San Antonio River, San Antonio, Texas, March-October 1990

    Science.gov (United States)

    Taylor, R. Lynn

    1995-01-01

    Biological and associated water-quality data were collected from lower Olmos Creek and upper San Antonio River in San Antonio, Texas, during March-October 1990, the second year of a multiyear data-collection program. The data will be used to document water-quality conditions prior to implementation of a proposal to reuse treated wastewater to irrigate city properties in Olmos Basin and Brackenridge Parks and to augment flows in the Olmos Creek/San Antonio River system.

  19. 48 CFR 204.805 - Disposal of contract files.

    Science.gov (United States)

    2010-10-01

    ... 48 CFR 204.805 (2010-10-01), Federal Acquisition Regulations System, Defense Acquisition Regulations System, Department of Defense, General Administrative Matters, Contract Files: Disposal of contract files. ...

  20. You Share, I Share: Network Effects and Economic Incentives in P2P File-Sharing System

    CERN Document Server

    Salek, Mahyar; Kempe, David

    2011-01-01

    We study the interaction between network effects and external incentives on file sharing behavior in Peer-to-Peer (P2P) networks. Many current or envisioned P2P networks reward individuals for sharing files, via financial incentives or social recognition. Peers weigh this reward against the cost of sharing incurred when others download the shared file. As a result, if other nearby nodes share files as well, the cost to an individual node decreases. Such positive network sharing effects can be expected to increase the rate of peers who share files. In this paper, we formulate a natural model for the network effects of sharing behavior, which we term the "demand model." We prove that the model has desirable diminishing returns properties, meaning that the network benefit of increasing payments decreases when the payments are already high. This result holds quite generally, for submodular objective functions on the part of the network operator. In fact, we show a stronger result: the demand model leads to a "cov...
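The diminishing-returns property the abstract mentions is the defining inequality of submodular set functions: the marginal gain of an element shrinks as the chosen set grows. A minimal illustration with a coverage objective (an assumed stand-in for the operator's objective, not the paper's demand model):

```python
def coverage(sets, chosen):
    """f(S) = number of distinct items covered by the chosen sets."""
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

def marginal(sets, chosen, i):
    """Marginal gain of adding set i to the already-chosen collection."""
    return coverage(sets, chosen | {i}) - coverage(sets, chosen)

# Submodularity: for S a subset of T and i outside T,
# the gain of i at S is at least the gain of i at T.
sets = {0: {1, 2}, 1: {2, 3}, 2: {3, 4}}
gain_small = marginal(sets, {0}, 2)      # adding set 2 to {0}
gain_large = marginal(sets, {0, 1}, 2)   # adding set 2 to {0, 1}
```

Here `gain_small >= gain_large`, mirroring the claim that raising payments yields less and less additional sharing once payments are already high.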