WorldWideScience

Sample records for model mattek ashland

  1. Ashland Area Support Substation Project

    International Nuclear Information System (INIS)

    1992-06-01

    The Bonneville Power Administration (BPA) provides wholesale electric service to the City of Ashland (the City) by transferring power over Pacific Power & Light Company's (PP&L) 115-kilovolt (kV) transmission lines and through PP&L's Ashland and Oak Knoll Substations. The City distributes power over a 12.5-kV system which is heavily loaded during winter peak periods and which has reached the limit of its ability to serve peak loads reliably. Peak loads under normal winter conditions have exceeded the ratings of the transformers at both the Ashland and Oak Knoll Substations. In 1989, the City modified its distribution system at the request of PP&L to allow transfer of three megawatts (MW) of electric power from the overloaded Ashland Substation to the Oak Knoll Substation. In cooperation with PP&L, BPA installed a temporary 6-8 megavolt-ampere (MVA) 115-12.5-kV transformer for this purpose. This additional transformer, however, is only a temporary remedy. BPA needs to provide additional, reliable long-term service to the Ashland area through additional transformation capacity in order to keep similar power failures from occurring during upcoming winters. The temporary installation of another 20-MVA mobile transformer at the Ashland Substation and additional load curtailment are currently being studied to provide for sustained electrical service through the 1992 winter peak period. Two overall electrical plans-of-service are described and evaluated in this report; one of them is proposed for action. Within that proposed plan-of-service are location options for the substation. Note that descriptions of actions that may be taken by the City of Ashland are based on information provided by the City.

  2. Acquisition of High Field Nuclear Magnetic Resonance Spectrometers for Research in Molecular Structure, Function and Dynamics

    Science.gov (United States)

    2010-09-01

    report here the synthesis of multigram quantities of PCL and employ analytical tools, such as Raman spectroscopy of single PLNP and live-cell imaging of...deconvolution microscopy (DVDM) for live-cell imaging together with zeta potential measurements to further characterize PCL, PLNP, and PLNP-derived polyplexes...and live-cell imaging. In our experiment, Hep G2 cells were seeded on 35-mm-diameter round glass-bottom culture dishes (MatTek Corp., Ashland, MA

  3. 76 FR 41864 - Home Federal Savings and Loan Association, Ashland, KY; Approval of Conversion Application

    Science.gov (United States)

    2011-07-15

    ... DEPARTMENT OF THE TREASURY Office of Thrift Supervision [AC-24 OTS Nos. 04246 and H4776] Home Federal Savings and Loan Association, Ashland, KY; Approval of Conversion Application Notice is hereby... Savings and Loan Association, Ashland, Kentucky, to convert to the stock form of organization. Copies of...

  4. 78 FR 54168 - Special Local Regulation, Cumberland River, Mile 157.0 to 159.0; Ashland City, TN

    Science.gov (United States)

    2013-09-03

    ... Local Regulation, Cumberland River, Mile 157.0 to 159.0; Ashland City, TN AGENCY: Coast Guard, DHS... Special local regulation; Cumberland River, Miles 157.0 to 159.0, Ashland City, TN. (a) Location. The... regulation for the waters of the Cumberland River beginning at mile marker 157.0 and ending at mile marker...

  5. Ashland outlines $261 million in refinery unit construction

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    This paper reports that Ashland Petroleum Co. has spelled out $261 million in projects completed, under way, or planned to produce cleaner fuel and further reduce emissions at two U.S. refineries. The company started up a $13 million pollution control system at its 213,400 b/cd Catlettsburg, Ky., plant, and started construction on six projects at its 67,100 b/cd St. Paul Park, Minn., refinery that will cost about $114 million and enable the plant to produce cleaner burning diesel fuel and further reduce emissions.

  6. 75 FR 16728 - Beaver Creek Landscape Management Project, Ashland Ranger District, Custer National Forest...

    Science.gov (United States)

    2010-04-02

    ... DEPARTMENT OF AGRICULTURE Forest Service Beaver Creek Landscape Management Project, Ashland Ranger... manner that increases resiliency of the Beaver Creek Landscape Management Project area ecosystem to... requirements to require. The Beaver Creek Landscape Management Project includes treatments previously proposed...

  7. 76 FR 13344 - Beaver Creek Landscape Management Project, Ashland Ranger District, Custer National Forest...

    Science.gov (United States)

    2011-03-11

    ... DEPARTMENT OF AGRICULTURE Forest Service Beaver Creek Landscape Management Project, Ashland Ranger... Impact Statement for the Beaver Creek Landscape Management Project was published in the Federal Register... Responsible Official for the Beaver Creek Landscape Management Project. DATES: The Final Environmental Impact...

  8. 75 FR 11511 - Rogue River-Siskiyou National Forest; Mt. Ashland Ski Area Expansion, Jackson County, OR

    Science.gov (United States)

    2010-03-11

    ... Court of Appeals found it necessary to understand the type of habitat the Pacific fisher requires for... Appeals for the Ninth Circuit in CV-05-03004-PA, to conditionally authorize expansion of the Mt. Ashland... Alternative 6. The Forest Service received twenty-eight notices of appeal to the ROD. In December 2004, the...

  9. Recycling and disposal of FUSRAP materials from the Ashland 2 site at a licensed uranium mill

    International Nuclear Information System (INIS)

    Howard, B.; Conboy, D.; Rehmann, M.; Roberts, H.

    1999-01-01

    During World War II the Manhattan Engineer District (MED) used facilities near Buffalo, N.Y., to extract natural uranium from ores. Some of the byproduct material left from the ores (MED byproduct), containing low levels of uranium, thorium, and radium, was deposited on a disposal site known as Ashland 2, located in Tonawanda, N.Y. On behalf of the United States Army Corps of Engineers (USACE, or the Corps), ICF Kaiser Engineers (ICFKE) was tasked to provide the best-value clean-up results that meet all of the criteria established in the Record of Decision for the site. International Uranium (USA) Corporation (IUC), the operator of the White Mesa Uranium Mill, a Nuclear Regulatory Commission (NRC)-licensed mill near Blanding, Utah, was selected to perform uranium extraction on the excavated materials, thereby giving the best value: it provided beneficial use of the material consistent with the Resource Conservation and Recovery Act (RCRA) intent to encourage recycling and recovery, while also providing the most cost-effective means of disposal. Challenges overcome to complete this project included (1) identifying the best-value location to accept the material; (2) meeting regulatory requirements, with IUC obtaining an NRC license amendment to accept and process the material as an alternate feed; (3) excavating and preparing the material for shipment, then shipping the material to the Mill for uranium recovery; and (4) processing the material, followed by disposal of tailings from the process in the Mill's licensed uranium tailings facility. Excavation from Ashland 2 and processing of the Ashland 2 material at the White Mesa Mill resulted in a cleaner environment at Tonawanda, a cost avoidance of up to $16 million, beneficial recovery of source material, and environmentally protective disposal of byproduct material. (author)

  10. Hydrogeochemical and stream sediment reconnaissance basic data for Ashland NTMS Quadrangle, Wisconsin; Michigan; Minnesota

    International Nuclear Information System (INIS)

    1979-01-01

    Results of a reconnaissance geochemical survey of the Ashland Quadrangle, Wisconsin; Michigan; Minnesota are reported. Field and laboratory data are presented for 312 groundwater and 383 stream sediment samples. Statistical and areal distributions of uranium and possible uranium-related variables are displayed. A generalized geologic map of the survey area is provided, and pertinent geologic factors which may be of significance in evaluating the potential for uranium mineralization are briefly discussed. Groundwater data indicate that the most promising area for potential uranium mineralization occurs along the Douglas Thrust Fault in northern Douglas County, Wisconsin. The Douglas Fault brings Fond du Lac Formation sediments in contact with Chengwatana volcanics, where carbonate-rich water derived from the mafic volcanics enters the arkosic Fond du Lac Formation. Another area of interest surrounds the Bad River Indian Reservation in northern Ashland and Iron Counties. The waters here are produced from red lithic sandstone and are also associated with the Douglas Fault. The chemistry of these waters appears similar to that of the waters from the Douglas County area. The stream sediment data are inconclusive because of the extensive cover of glacial deposits. A moderately favorable area is present in a strip along Lake Superior in Douglas County, where sediments are derived from arkoses of the Fond du Lac Formation.

  11. Health-hazard evaluation report No. HETA-88-377-2120, Armco Coke Oven, Ashland, Kentucky

    International Nuclear Information System (INIS)

    Kinnes, G.M.; Fleeger, A.K.; Baron, S.L.

    1991-06-01

    In response to a request from the Oil, Chemical and Atomic Workers International Union, a study was made of possible hazardous working conditions at the ARMCO Coke Oven (SIC-3312), Ashland, Kentucky. The facility produces about 1,000,000 tons of coke annually. Of the approximately 400 total employees at the coke oven site, 55 work in the by-products area. Air quality sampling results indicated overexposure to both benzene (71432) and coal tar pitch volatiles (CTPVs). Airborne levels of benzene ranged as high as 117 parts per million (ppm), with three of 17 samples being above the OSHA limit of 1 ppm. Airborne concentrations of CTPVs ranged as high as 0.38 mg/cu m, with two of six readings being above the OSHA limit of 0.2 mg/cu m. Several polynuclear aromatic hydrocarbons were also detected. The authors conclude that by-products area workers are potentially overexposed to carcinogens, including benzene, CTPVs, and polynuclear aromatic hydrocarbons. An epidemiologic study is considered unlikely to yield meaningful information at this time, due to the small number of workers and the short follow-up period. The authors recommend specific measures for reducing potential employee exposures, including an environmental sampling program, a preventive maintenance program, improved housekeeping procedures, and reducing exposure in operators' booths.

  12. Inter-Individual Variability in Human Response to Low-Dose Ionizing Radiation, Final Report

    International Nuclear Information System (INIS)

    Rocke, David

    2016-01-01

    In order to investigate inter-individual variability in response to low-dose ionizing radiation, we are working with three models: 1) in-vivo irradiated human skin, for which we have a realistic model but only a few subjects, all from a previous project; 2) ex-vivo irradiated human skin, for which we also have a realistic model, though with the limitations involved in keeping skin pieces alive in media; and 3) MatTek EpiDermFT skin plugs, which provide a model that is more realistic than cell lines and more controllable than human samples.

  13. Inter-Individual Variability in Human Response to Low-Dose Ionizing Radiation, Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Rocke, David [Univ. of California, Davis, CA (United States)

    2016-08-01

    In order to investigate inter-individual variability in response to low-dose ionizing radiation, we are working with three models: 1) in-vivo irradiated human skin, for which we have a realistic model but only a few subjects, all from a previous project; 2) ex-vivo irradiated human skin, for which we also have a realistic model, though with the limitations involved in keeping skin pieces alive in media; and 3) MatTek EpiDermFT skin plugs, which provide a model that is more realistic than cell lines and more controllable than human samples.

  14. 40 CFR 81.350 - Wisconsin.

    Science.gov (United States)

    2010-07-01

    ... North: Green Bay West: W. Mason St. and Ashland Ave., along Ashland north to Matter St., west to Crocker St., north on Crocker St. to Bylsby St., then to Green Bay South: W. Mason St. and Ashland Ave., east along Mason to Irwin Ave. East: W. Mason St., and Irwin Ave., along Irwin Ave. north to Green Bay...

  15. An in vitro method for detecting chemical sensitization using human reconstructed skin models and its applicability to cosmetic, pharmaceutical, and medical device safety testing.

    Science.gov (United States)

    McKim, James M; Keller, Donald J; Gorski, Joel R

    2012-12-01

    Chemical sensitization is a serious condition caused by small reactive molecules and is characterized by a delayed-type hypersensitivity known as allergic contact dermatitis (ACD). Contact with these molecules via dermal exposure represents a significant concern for chemical manufacturers. Recent legislation in the EU has created the need to develop non-animal alternative methods for many routine safety studies, including sensitization. Although most of the alternative research has focused on pure chemicals that possess reasonable solubility properties, it is important for any successful in vitro method to have the ability to test compounds with low aqueous solubility. This is especially true for the medical device industry, where device extracts must be prepared in both polar and non-polar vehicles in order to evaluate chemical sensitization. The aim of this research was to demonstrate the functionality and applicability of the human reconstituted skin models (MatTek EpiDerm® and SkinEthic RHE) as a test system for the evaluation of chemical sensitization and its potential use for medical device testing. In addition, the development of the human 3D skin model should allow the in vitro sensitization assay to be used for finished product testing in the personal care, cosmetics, and pharmaceutical industries. This approach combines solubility, chemical reactivity, cytotoxicity, and activation of the Nrf2/ARE expression pathway to identify and categorize chemical sensitizers. Known chemical sensitizers representing extreme/strong-, moderate-, weak-, and non-sensitizing potency categories were first evaluated in the skin models at six exposure concentrations ranging from 0.1 to 2500 µM for 24 h. The expression of eight Nrf2/ARE-, one AhR/XRE-, and two Nrf1/MRE-controlled genes was measured by qRT-PCR. The fold-induction at each exposure concentration was combined with reactivity and cytotoxicity data to determine the sensitization potential. The results demonstrated that
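
    The fold-induction readout described above is conventionally computed from qRT-PCR cycle thresholds by the comparative ΔΔCt method. A minimal sketch in Python, with gene roles and Ct values that are hypothetical rather than taken from the study:

        def fold_induction(ct_target_treated, ct_ref_treated,
                           ct_target_control, ct_ref_control):
            """Fold induction of a target gene by the comparative ddCt method.

            Ct values are qRT-PCR cycle thresholds; the reference gene
            normalizes for input cDNA amount.
            """
            d_ct_treated = ct_target_treated - ct_ref_treated
            d_ct_control = ct_target_control - ct_ref_control
            dd_ct = d_ct_treated - d_ct_control
            return 2.0 ** (-dd_ct)

        # Hypothetical Ct values for one Nrf2/ARE-controlled gene after
        # exposure to a sensitizer, normalized to a housekeeping gene:
        print(fold_induction(22.1, 18.0, 24.6, 18.1))  # ~5.3-fold induction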

  16. 76 FR 37265 - Standard Instrument Approach Procedures, and Takeoff Minimums and Obstacle Departure Procedures...

    Science.gov (United States)

    2011-06-27

    ..., AL, Richard Arthur Field, RNAV (GPS) RWY 36, Amdt 1 Gulf Shores, AL, Jack Edwards, VOR-A, Amdt 3, CANCELLED Crossett, AR, Z M Jack Stell Field, Takeoff Minimums and Obstacle DP, Orig Springerville, AZ..., RNAV (GPS) RWY 10, Amdt 1 Ashland, KY, Ashland Rgnl, RNAV (GPS) RWY 28, Amdt 1 Nantucket, MA, Nantucket...

  17. Integration of Nanofluids into Commercial Antifreeze Concentrates with ASTM D15 Corrosion Testing

    Science.gov (United States)

    2013-05-01

    Lockwood (1). (1) Valvoline New Product Development Laboratory, Ashland Consumer Markets, Lexington, Kentucky; (2) Tank Automotive Research, Development...Development Laboratory, Ashland Consumer Markets, P.O. Box 14000, Lexington, KY 40512. PERFORMING ORGANIZATION REPORT NUMBER: #23826. SPONSORING...Technology, Performance, and Life for Light-Duty Applications, Engine Coolant Testing: Fourth Volume, ASTM STP 1335, R.E. Beale, ed., ASTM, Philadelphia

  18. Study seeks to boost Appalachian gas recovery

    International Nuclear Information System (INIS)

    Land, R.

    1992-01-01

    Ashland Exploration Inc. and the Gas Research Institute (GRI) are trying to find ways to increase gas recovery in the Appalachian basin. They are working together to investigate Mississippian Berea sandstone and Devonian shale in a program designed to achieve better understanding and improved performance of tight natural gas formations in the area. This paper reports that three wells on Ashland Exploration acreage in Pike County, Ky., are involved in the research program. Findings from the first two wells will be used to optimize evaluation and completion of the third well. The first two wells have been drilled. Drilling of the third well was under way at last report. Ashland Exploration has been involved with GRI's Devonian shale research since 1988. GRI's initial focus was on well stimulation because Devonian shale wells it reviewed had much lower recoveries than could be expected, based on estimated gas in place. Research during the past few years was designed to improve the execution and quality control of well stimulation

  19. The Holy Dose: Spiritual adventures with Southern Oregon's psychedelic crusaders

    OpenAIRE

    Weber, Alex L

    2011-01-01

    Ashland, Oregon is a smart little community nestled in the foothills of the Siskiyou Mountains about 20 minutes north of the California border. Home to Southern Oregon University and host to the yearly Shakespeare Festival, Ashland is one of those places both progressive and picturesque that often occupies a top spot on waiting-room magazines' “Best Small Towns” or “Best Places to Retire” lists. It's got a walkable business district with cozy fine-dining bistros, new-age book shops and old-sc...

  20. Energy in Canada: an overview

    International Nuclear Information System (INIS)

    Mawer, S.P.

    1997-01-01

    Future opportunities for Canadian petroleum producers and the impact of competition for a share of the US export market on the Canadian oil industry were discussed. An overview of the market in PADD II, the US mid-continent, as seen from the Ashland Petroleum Company's perspective, was presented. Ashland is the fourth largest importer of Canadian crude. The company firmly believes that despite only slight increases in demand, Canadian supply to the US mid-continent will grow by taking significant market share from Mexican, Venezuelan and Gulf coast competitors

  1. NATO Independent Cost Estimating and the Role of Life Cycle Cost Analysis in Managing the Defence Enterprise (Estimation independante des couts de l’OTAN et role de l’analyse des couts globaux de possesssion au sen de l’OTAN)

    Science.gov (United States)

    2012-08-01

    [Flattened table excerpt: dock landing ships Tortuga (LSD 46, 1990, United States), Rushmore (LSD 47, 1991, United States), Ashland (LSD 48, 1992, United States), Harpers Ferry (LSD 49, ...), and comparable amphibious ships including Svalbard, Carlskrona, Thomaston, Plymouth Rock, Lyme Bay, Anchorage, Whidbey Island, Raleigh, Protecteur, Atle, Largs Bay, Fort..., Comstock, Carter Hall, Oak Hill, Pearl Harbour, Siroco, Albion, Bulwark, Mounts Bay...]

  2. ImmunoPET Imaging of Insulin-Like Growth Factor 1 Receptor in a Subcutaneous Mouse Model of Pancreatic Cancer

    Science.gov (United States)

    2016-06-30

    the cells were lysed using radioimmunoprecipitation assay (RIPA) buffer (Boston BioProducts, Ashland, MA, USA) supplemented with 1:100 protease...was centrifuged at 13,000g for 10 min at 4 °C. The supernatant was removed, and protein concentration was measured using the Pierce Coomassie (Bradford) Protein Assay Kit (Thermo Fisher Scientific, Carlsbad, CA, USA). Next, 20 μg of total protein was loaded into the corresponding wells of a 4−12

  3. Simultaneous Inversion of UXO Parameters and Background Response

    Science.gov (United States)

    2012-03-01

    GIF) PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Sky Research, Inc., 445 Dead Indian Memorial Road, Ashland, OR 97520; University of Brit...bump in the topography (along the line shown in magenta colour) leads to a lower amplitude in the data...0.2. Recall that the forward model for the soil can be written as equation (5), which is garbled in this extraction. The data d is linearly

  4. The Effects of Low Dose Irradiation on Inflammatory Response Proteins in a 3D Reconstituted Human Skin Tissue Model

    Energy Technology Data Exchange (ETDEWEB)

    Varnum, Susan M.; Springer, David L.; Chaffee, Mary E.; Lien, Katie A.; Webb-Robertson, Bobbie-Jo M.; Waters, Katrina M.; Sacksteder, Colette A.

    2012-12-01

    Skin responses to moderate and high doses of ionizing radiation include the induction of DNA repair, apoptosis, and stress response pathways. Additionally, numerous studies indicate that radiation exposure leads to inflammatory responses in skin cells and tissue. However, the inflammatory response of skin tissue to low-dose radiation (<10 cGy) is poorly understood. In order to address this, we have utilized a reconstituted human skin tissue model (MatTek EpiDerm FT) and assessed changes in 23 cytokines twenty-four and forty-eight hours following treatment of skin with low radiation doses of either 3 or 10 cGy. Three cytokines, IFN-γ, IL-2, and MIP-1α, were significantly altered in response to low-dose radiation. In contrast, seven cytokines were significantly altered in response to a high radiation dose of 200 cGy (IL-2, IL-10, IL-13, IFN-γ, MIP-1α, TNF-α, and VEGF) or the tumor promoter 12-O-tetradecanoylphorbol 13-acetate (G-CSF, GM-CSF, IL-1α, IL-8, MIP-1α, MIP-1β, RANTES). Additionally, radiation-induced inflammation appears to have a distinct cytokine response relative to the non-radiation stressor, TPA. Overall, these results indicate that there are subtle changes in inflammatory protein levels following exposure to low-dose radiation and that this response is a subset of what is seen following a high dose in a human skin tissue model.
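
    Flagging cytokines as "significantly altered" in a 23-analyte panel of this kind normally involves a per-cytokine test plus a multiple-comparison correction. A minimal sketch of that analysis pattern; replicate counts, effect sizes, and the choice of Benjamini-Hochberg correction are assumptions for illustration, not details from the study:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Hypothetical pg/mL readings: 4 replicate tissues per group, 23 cytokines.
        control = rng.normal(100.0, 10.0, size=(4, 23))
        irradiated = rng.normal(100.0, 10.0, size=(4, 23))
        irradiated[:, :3] += 30.0  # pretend three cytokines respond, as reported

        t, p = stats.ttest_ind(irradiated, control, axis=0)

        # Benjamini-Hochberg false-discovery-rate correction across the panel.
        m = p.size
        order = np.argsort(p)
        q = p[order] * m / np.arange(1, m + 1)
        q = np.minimum.accumulate(q[::-1])[::-1]
        significant = np.zeros(m, dtype=bool)
        significant[order] = q < 0.05
        print(int(significant.sum()), "cytokines flagged as altered")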

  5. Hydrogen in metals

    CSIR Research Space (South Africa)

    Carter, TJ

    2001-04-01

    Full Text Available: T.J. Carter (a,*), L.A. Cornish (b); (a) Advanced Engineering & Testing Services, MATTEK, CSIR, Private Bag X28, Auckland Park 2006, South Africa; (b) School of Process and Materials Engineering, University of the Witwatersrand, Private Bag 3, P.O. WITS 2050, South Africa... are contrasted, and an unusual case study of hydrogen embrittlement of an alloy steel is presented. © 2001 Published by Elsevier Science Ltd. Keywords: Hydrogen; Hydrogen-assisted cracking; Hydrogen damage; Hydrogen embrittlement. 1. Introduction: Hydrogen su...

  6. 78 FR 35033 - Formations of, Acquisitions by, and Mergers of Bank Holding Companies

    Science.gov (United States)

    2013-06-11

    ... shares of Ashland Bancshares, Inc., and thereby indirectly acquire Centennial Bank, both in Omaha... Holding Company, Inc., and immediately thereafter, Omaha State Bank, will merge with and into Centennial...

  7. 40 CFR 52.926 - Attainment dates for national standards.

    Science.gov (United States)

    2010-07-01

    ... Evansville (Indiana)-Ownsboro-Henderson (Kentucky) Interstate: a. Daviess County g i g g b b b b. Henderson... Virginia)-Ashland (Kentucky)-Portsmouth-Ironton (Ohio) Interstate: a. Boyd County g i g b b b g b. Lawrence...

  8. Regulation of above-ground oil and waste containers. Hearing before the Subcommittee on Transportation, Tourism, and Hazardous Materials of the Committee on Energy and Commerce, House of Representatives, One Hundredth Congress, Second Session, January 26, 1988

    Energy Technology Data Exchange (ETDEWEB)

    1988-01-01

    Representatives from the petroleum industry, US EPA, National Bureau of Standards and Congress were among those testifying at a hearing to discuss one of the worst inland environmental disasters in this Nation's history. The January 2 collapse of Ashland Oil Co.'s storage tank in Floreffe, Pennsylvania resulted in the release of some 4 million gallons of diesel fuel. Approximately a million gallons escaped the containment structures and spilled over into the Monongahela River. This spill contaminated the drinking water sources for millions of people downstream, from Pittsburgh to Cincinnati to Louisville, and beyond. Attention is focused on the causes of the tank's collapse, the response measures taken by Ashland Oil, the Coast Guard, and the EPA, and the need for tighter federal regulations of above-ground tanks used for the storage of petroleum and hazardous substances.

  9. 78 FR 48802 - Safety Zones; Recurring Events in Captain of the Port Duluth Zone

    Science.gov (United States)

    2013-08-12

    ..., specifically the Duluth fourth festival, Cornucopia fireworks, Ashland fireworks and the Madeline Island fireworks, and other historically recurring marine events, including the Lake Superior Dragon Boat Festival... with a Notice of Enforcement and marine information broadcast. (2) Lake Superior Dragon Boat Festival...

  10. Oregon | Midmarket Solar Policies in the United States | Solar Research

    Science.gov (United States)

    purpose funds and offers a wide range of solar incentives. ETO's Solar Electric program offers cash..., or 35% of total project costs. Utility incentive programs and their limitations: Ashland Power Grant (varies by project; 100% of capital costs); Pacific Power - Blue Sky Community Project Funds

  11. 76 FR 30937 - Environmental Impacts Statements; Notice of Availability

    Science.gov (United States)

    2011-05-27

    ... and to Areas within and Adjacent to Wildland Urban Interface near Tennant, Goosenest Ranger District... land Use Development in the Specific Plan Area, City of Folsom, Sacramento County, CA, Review Period... 2004 FEIS, Ashland Ranger District, Rogue River National Forest and Scott River Ranger District...

  12. 16th Annual Expeditionary Warfare Conference: Integrating Future and Present Capabilities

    Science.gov (United States)

    2011-10-27

    undergo modernization • USS ASHLAND (LSD 48) and USS TORTUGA (LSD 46) will swap homeports (Norfolk/Sasebo). Installs per FY: 2 - 2...Swedish RCBs landing in the LSD 41 class (USS TORTUGA) well deck. Unclassified MLP OSRV Lessons Learned (Representative) • All sailors aren't

  13. Robert E. Lee as Operational Artist During the American Civil War

    Science.gov (United States)

    2010-04-21

    the ability to master operational art and design is invaluable. The current operational environment in which our nation is involved requires us to be...of Antietam, ed. Stephen W. Sears. Vol. Compact Disc. Ashland, OR: Blackstone Audiobooks. ---. 2003. Gettysburg. Boston, MA: Houghton Mifflin

  14. 75 FR 14625 - Notice of Proposed Withdrawal Extension and Opportunity for Public Meeting; Oregon

    Science.gov (United States)

    2010-03-26

    ... laws in order to protect scientific and ecological values, scenic and recreational values, and the... withdrawal extension is to continue protecting scientific and ecological research values at the Ashland Research Natural Area and its scenic and recreation values, along with the investment of Federal funds at...

  15. 75 FR 28014 - Petitions Concerning Whether Ammonia or Urea Sold or Distributed and Used for Certain Purposes...

    Science.gov (United States)

    2010-05-19

    ... paperboard products. The notice also makes available for review and public comment documents associated with... Nalco's sale and distribution of its unregistered ammonia product to be contrary to law. Ashland... sale and distribution of its urea product was unlawful. DATES: Comments must be received on or before...

  16. What's Cooking in America's Schoolyard Gardens?

    Science.gov (United States)

    Salter, Cathy

    2010-01-01

    This article discusses what's cooking in America's schoolyard gardens. From First Lady Michelle Obama's world-famous Kitchen Garden, to Alice Waters' groundbreaking Edible Schoolyard in Berkeley, California, to a nationally recognized elementary school learning garden in the small Midwestern town of Ashland, Missouri, school children are planting…

  17. 76 FR 59736 - Public Land Order No. 7778; Extension of Public Land Order No. 6876; Oregon

    Science.gov (United States)

    2011-09-27

    ... is necessary to continue protection of the unique natural and ecological values of the Ashland Research Natural Area (RNA), and the recreational values and the investment of Federal funds at the Jackson... first made requires this extension in order to continue protection of the unique natural and ecological...

  18. 75 FR 49551 - Petition for Waiver of Compliance

    Science.gov (United States)

    2010-08-13

    ... below. Docket Number FRA-1998-4821 Applicant: Mr. Timothy R. Luhm, Canadian National, Manager S&C, 17641 South Ashland Avenue, Homewood, Illinois 60430. The Canadian National (CN) seeks an extension for an... expire on April 25, 2011. In addition to the extension, CN requests that the allowable maximum authorized...

  19. 78 FR 55743 - Notice of Service Delivery Area Designation for the Shinnecock Indian Nation

    Science.gov (United States)

    2013-09-11

    ... the availability of funds, the person's relative medical priority, and the actual availability and..., Reservation, Wyoming. Sublette, WY. Aroostook Band of Micmac Indians of Aroostook, ME.\\3\\ Maine. Assiniboine..., Roosevelt, MT, Sheridan, MT, Valley, MT. Bad River Band of the Lake Superior Ashland, WI, Iron, WI. Tribe of...

  20. 78 FR 55737 - Notice of Service Delivery Area Designation for the Tejon Indian Tribe

    Science.gov (United States)

    2013-09-11

    ... on the availability of funds, the person's relative medical priority, and the actual availability and.... Sublette, WY. Aroostook Band of Micmac Indians of Aroostook, ME.\\3\\ Maine. Assiniboine and Sioux Tribes of..., Sheridan, MT, Valley, MT. Bad River Band of the Lake Superior Ashland, WI, Iron, WI. Tribe of Chippewa...

  1. Learning in the context of distribution drift

    Science.gov (United States)

    2017-05-09

    Figure 3 shows a heatmap of the pairwise drift in the joint distribution on the Landsat-8 French land usage satellite data. This data represents 10 meter...listed under the List of Publications. 1. White, C., Using Big Data for Smarter Decision Making. 2011, BI Research: Ashland, Or. 2. Cook, S., et al
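
    As a rough illustration of the pairwise-drift heatmap this snippet refers to, each data epoch can be histogrammed and a divergence computed between every pair; the Jensen-Shannon distance below is our illustrative metric choice, not necessarily the one used in the report:

        import numpy as np
        from scipy.spatial.distance import jensenshannon

        def pairwise_drift(epochs, bins=32):
            """Matrix of Jensen-Shannon distances between 1-D data epochs."""
            lo = min(float(e.min()) for e in epochs)
            hi = max(float(e.max()) for e in epochs)
            edges = np.linspace(lo, hi, bins + 1)
            # Small constant avoids empty bins; jensenshannon normalizes counts.
            hists = [np.histogram(e, bins=edges)[0] + 1e-12 for e in epochs]
            n = len(hists)
            drift = np.zeros((n, n))
            for i in range(n):
                for j in range(n):
                    drift[i, j] = jensenshannon(hists[i], hists[j])
            return drift  # render as a heatmap, e.g. with matplotlib imshow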

  2. How Teacher Educators Can Address Our Nation's Financial Crisis

    Science.gov (United States)

    Dowdell, John; McElfresh, Dwight; Sikula, John

    2009-01-01

    This article from Ashland University reports on what one university, well known for its Teacher Education programs, is doing in an economically depressed state to address our country's financial crisis. Ohio has mandated that financial literacy be taught in high schools by 2010. Reported herein is what is being done to prepare teachers for this…

  3. Investigate the Role of Cell-to-Cell Adhesion During Wound Healing

    Science.gov (United States)

    Montes, D. R.; Rodriguez, A. J.; Uyoh, E.; Antwi, P.; Olusegun, S.; Murray, P.

    2017-12-01

    Improvements in wound healing can save lives and reduce suffering in health care centers and on the military battlefield. Microscopic examination of how Madin-Darby Canine Kidney (MDCK) epithelial cells respond to wounds under various treatments that block general calcium absorption and E-cadherin mechanics may yield novel wound treatment insights. We used MDCK cells as our tissue model for this experiment. Each tissue was grown on MatTek dishes at high, middle, and low densities, then incubated at 37 degrees Celsius overnight. The next day, we wounded the MDCK tissue by scratching it with a 21G 1 1/2 in. (38.1 mm) syringe needle. We imaged the tissues before the scratch, then recorded a time-lapse movie of the wounded tissue about 3 hours long at an interval of 1 minute. We then compared tissues with and without the addition of EGTA, a calcium chelator that sequesters calcium and thereby blocks E-cadherin mechanics, so that the wound dynamics of treated and healthy tissue could be compared. We asked: will EGTA prevent wound closure?
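
    One common way to turn such a time-lapse movie into a wound-closure curve is to segment the cell-free scratch in each frame and track its area over time. A minimal sketch assuming the scratch can be separated by simple intensity thresholding (file names and threshold choice are hypothetical):

        from skimage import filters

        def wound_area(frame_gray):
            """Estimate the open (cell-free) wound area, in pixels, of one frame.

            Assumes the scratch is separable from the monolayer by an Otsu
            threshold; real pipelines usually add smoothing and morphological
            cleanup before measuring.
            """
            threshold = filters.threshold_otsu(frame_gray)
            return int((frame_gray < threshold).sum())

        # Hypothetical usage over the ~3 h movie at 1 frame/min:
        # frames = [skimage.io.imread(f"frame_{i:03d}.tif", as_gray=True)
        #           for i in range(180)]
        # closure_curve = [wound_area(f) for f in frames]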

  4. Forty-two years of change in an old-growth and second-growth beech-maple forest of north central Ohio

    Science.gov (United States)

    Natalie R. Pinheiro; P. Charles Goebel; David M. Hix

    2008-01-01

    Using data collected in 1964 and 2006, we examined changes in the composition and structure of a second-growth and old-growth beech-maple forest of Crall Woods, located in Ashland County of north central Ohio. Over the 42 years, the old-growth forest (estimated to be at least 250 years old) experienced a significant shift in species composition as American beech,...

  5. DOE hearing on the Draft Area Recommendation Report for the Crystalline Repository Project, Ashland High School, Ashland, Wisconsin, Session 1: Transcript of proceedings

    International Nuclear Information System (INIS)

    1986-01-01

    This document contains a transcript of the comments of more than 35 witnesses. These meetings were held to hear public comments regarding the selection of a site for a second high-level nuclear waste repository

  6. Cultural Resources Investigations of the East and West Bayou Sale Tie-In Levee, St. Mary Parish, Louisiana.

    Science.gov (United States)

    1998-10-01

    Mendez, and Tara Bond 1994 Archaeological Data Recovery at Ashland-Belle Helene Plantation (16AN26), Ascension Parish, Louisiana, Volume III...Carolina. Miller, Daniel 1987 Material Culture and Mass Consumption. Basil Blackwell, Oxford, UK. Miller, Daniel and Christopher Tilley 1984...Jones, R. Mendez, H. Franks, and T. Bond. Volume 1, pp. II-1 - II-40. Submitted to the Louisiana Division of Archaeology, Baton Rouge. Weinstein

  7. Citizen Warrior: Major General Joshua Lawrence Chamberlain, a Study in Command Leadership

    Science.gov (United States)

    2012-04-29

    anticipated―challenges. The complex, fluid environment of war demands the institutionalization of this intellectual energy to affect the necessary...choosing an appropriate course of action. I believe that these characteristics of “Military Genius” are as valuable in today’s environment as they were...1999) p. 303. 44 Wallace, p. 68. 45 James E. Kelly, Generals in Bronze, ed. William B. Styple, read Patrick Cullen (Ashland, Blackstone Audio, Inc

  8. Tactical Economics: The U.S. Army’s Tactical Contribution to Economic Development

    Science.gov (United States)

    2012-06-08

    Sachs, and Dr. William Easterly. Sachs is a world-renowned scholar made popular by his involvement in the transition of the former Soviet Union from...health, education, power and communications, and water and sanitation.30 Dr. William Easterly gives a contrasting view to Sachs. Easterly is an...34 Muhammad Yunus and Ray Porter, Banker to the Poor [Micro-Lending and the Battle against World Poverty] (Ashland, OR: Blackstone Audiobooks). 35

  9. Demonstration of MPV Sensor at Yuma Proving Ground, AZ

    Science.gov (United States)

    2011-06-01

    test plot in Ashland, OR, where magnetic soils have been shown to have a significant effect on EMI sensors (Pasion et al., 2008). The recorded signal...sensors was also investigated during that survey as part of SERDP MM-1573 (PI: Len Pasion, Sky Research). The MPV offers possibilities to defeat...of magnetic soils (Lhomme et al., 2008; Pasion et al., 2008). The MPV response due to sensor motion and topography over magnetic soil is predictable

  10. Discrimination Using the Geonics EM63 in a Cued Interrogation Mode at Fort McClellan, AL

    Science.gov (United States)

    2009-03-01

    vectors of the nine items measured at the Ashland test plot: (a) Pasion-Oldenburg k1 versus k2; (b) Ratio of the primary polarization tensor at the 10th...discrimination potential of the Geonics EM63 at Fort McClellan, Alabama (AL) when deployed in a cued interrogation mode. Pasion-Oldenburg polarization...scrap metal, shrapnel and geology (e.g. Hart et al., 2001; Collins et al., 2001; Pasion & Oldenburg, 2001; Zhang et al., 2003a, 2003b; Billings

  11. Recapitalization of Amphibious Operation and Lift

    Science.gov (United States)

    2012-06-01

    CA; LSD-45 USS Comstock, San Diego, CA; LSD-46 USS Tortuga, Sasebo, Japan; LSD-47 USS Rushmore, San Diego, CA; LSD-48 USS Ashland, Little Creek, VA. Table...elements 8) Conduct land missions to take over hostile nation forces. The LSD: The USS Tortuga (LSD-46), located in its homeport of Sasebo, Japan, is...ATF from surface, subsurface, and air threats while conducting the assault. Once on station, the USS Tortuga is capable of conducting amphibious

  12. DOE hearing on the draft area recommendation report for the Crystalline Repository Project, Ashland Public High School, Ashland, Wisconsin: Session 2, April 9, 1986: Transcript of proceedings

    International Nuclear Information System (INIS)

    1986-01-01

    This document contains a transcript of the comments of 111 witnesses. These meetings were held to hear the public's comments regarding the selection of a site for a second high-level radioactive waste repository

  13. 2nd Ralf Yorque Workshop

    CERN Document Server

    1985-01-01

    These are the proceedings of the Second R. Yorque Workshop on Resource Management, which took place in Ashland, Oregon on July 23-25, 1984. The purpose of the workshop is to provide an informal atmosphere for the discussion of resource assessment and management problems. Each participant presented a one-hour morning talk; afternoons were reserved for informal chatting. The workshop was successful in stimulating ideas and interaction. The papers by R. Deriso, R. Hilborn and C. Walters all address the same basic issue, so they are lumped together. Other than that, the order of the papers in this volume was determined in the same fashion as the order of speakers during the workshop -- by random draw. Marc Mangel, Department of Mathematics, University of California, Davis, California, June 1985. TABLE OF CONTENTS: A General Theory for Fishery Modeling (Jon Schnute); Data Transformations in Regression Analysis with Applications to Stock-Recruitment Relationships (David Ruppert and Raymond J. Carroll)...

  14. PSD Applicability for Ashland Chemical's Maleic Anhydride Plant in Neal, West Virginia

    Science.gov (United States)

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  15. Breckinridge Project, initial effort. Report III, Volume 2. Specifications

    Energy Technology Data Exchange (ETDEWEB)

    None

    1982-01-01

    Report III, Volume 2 contains those specifications numbered K through Y, as follows: Specifications for Compressors (K); Specifications for Piping (L); Specifications for Structures (M); Specifications for Insulation (N); Specifications for Electrical (P); Specifications for Concrete (Q); Specifications for Civil (S); Specifications for Welding (W); Specifications for Painting (X); and Specifications for Special (Y). The standard specifications of Bechtel Petroleum Incorporated have been amended as necessary to reflect the specific requirements of the Breckinridge Project and the more stringent specifications of Ashland Synthetic Fuels, Inc. These standard specifications are available for the Initial Effort (Phase Zero) work performed by all contractors and subcontractors.

  16. Hydrogeochemical and stream sediment reconnaissance program in central United States. Semiannual progress report, October 1, 1978--March 31, 1979

    International Nuclear Information System (INIS)

    Arendt, J.W.

    1979-01-01

    Basic data reports were open filed for eight NTMS quadrangles during the reporting period: Sherman, Houston, Ardmore, Emory Peak, Presidio, Enid, Austin, and Lawton. Basic data reports, which have been prepared and are in the process of being open filed, include Wichita, St. Cloud, Ashland, and Clinton. Results indicate that the most favorable areas for the occurrence of uranium mineralization in the open filed quadrangles reported are in the Austin, Lawton, and Emory Peak Quadrangles. During the period, approximately 13,886 samples of groundwater and stream sediments were collected by the URE Project. Approximately 20,738 samples were analyzed by the URE Laboratory

  17. Cyberknife Relative Output Factor measurements using fiber-coupled luminescence, MOSFETS and RADPOS dosimetry system

    DEFF Research Database (Denmark)

    Ploquin, N.; Kertzscher Schwencke, Gustavo Adolfo Vladimir; Vandervoort, E.

    2012-01-01

    from 5 to 60 mm. ROFs were also measured using a mobileMOSFET system (Best Medical Canada) and EBT1 and EBT2 GAFCHROMIC® (ISP, Ashland) radiochromic films. For cone sizes 12.5–60 mm all detector results were in agreement within the measurement uncertainty. The microMOSFET/RADPOS measurements (published.......3% and 0.865 ± 0.3% for 5, 7.5 and 10 mm cones. Our study shows that the microMOSFET/RADPOS and optical fiber‐coupled RL dosimetry system are well suited for Cyberknife cone output factors measurements over the entire range of field sizes, provided that appropriate correction factors are applied...

  18. Technology of environmental pollution control, 2nd edition

    International Nuclear Information System (INIS)

    Shaheen, E.I.

    1991-01-01

    The final decade of the 20th century is truly the environmental decade of the century because of the gravity of the environmental challenges we are facing. This book covers the environmental spectrum in an attempt to update the reader on new technologies and topics regarding pollution control. Engineers, scientists, plant operators, and students studying the subject of pollution control will use the comprehensive text as a reference for technological advances, regulations, and pollution control. The major disasters witnessed in the last few years, such as the Bhopal gas tragedy, the Chernobyl nuclear disaster, the Exxon Valdez oil spill, and the Ashland oil tank collapse, are described in detail

  19. Novel technologies and an overall strategy to allow hazard assessment and risk prediction of chemicals, cosmetics, and drugs with animal-free methods.

    Science.gov (United States)

    Leist, Marcel; Lidbury, Brett A; Yang, Chihae; Hayden, Patrick J; Kelm, Jens M; Ringeissen, Stephanie; Detroyer, Ann; Meunier, Jean R; Rathman, James F; Jackson, George R; Stolper, Gina; Hasiwa, Nina

    2012-01-01

    Several alternative methods to replace animal experiments have been accepted by legal bodies. An even larger number of tests are under development or already in use for non-regulatory applications or for the generation of information stored in proprietary knowledge bases. The next step for the use of the different in vitro methods is their combination into integrated testing strategies (ITS) to get closer to the overall goal of predictive "in vitro-based risk evaluation processes." We introduce here a conceptual framework as the basis for future ITS and their use for risk evaluation without animal experiments. The framework allows incorporation of both individual tests and already integrated approaches. Illustrative examples for elements to be incorporated are drawn from the session "Innovative technologies" at the 8th World Congress on Alternatives and Animal Use in the Life Sciences, held in Montreal, 2011. For instance, LUHMES cells (conditionally immortalized human neurons) were presented as an example for a 2D cell system. The novel 3D platform developed by InSphero was chosen as an example for the design and use of scaffold-free, organotypic microtissues. The identification of critical pathways of toxicity (PoT) may be facilitated by approaches exemplified by the MatTek 3D model for human epithelial tissues with engineered toxicological reporter functions. The important role of in silico methods and of modeling based on various pre-existing data is demonstrated by Altamira's comprehensive approach to predicting a molecule's potential for skin irritancy. A final example demonstrates how natural variation in human genetics may be overcome using data analytic (pattern recognition) techniques borrowed from computer science and statistics. The overall hazard and risk assessment strategy integrating these different examples has been compiled in a graphical work flow.

  20. An epidermal equivalent assay for identification and ranking potency of contact sensitizers

    Energy Technology Data Exchange (ETDEWEB)

    Gibbs, Susan, E-mail: S.Gibbs@VUMC.nl [Department of Dermatology, VU University Medical Centre, Dept of Oral Cell Biology, ACTA, Amsterdam (Netherlands); Corsini, Emanuela [Laboratory of Toxicology, DiSFeB, Università degli Studi di Milano (Italy); Spiekstra, Sander W. [Department of Dermatology, VU University Medical Centre, Dept of Oral Cell Biology, ACTA, Amsterdam (Netherlands); Galbiati, Valentina [Laboratory of Toxicology, DiSFeB, Università degli Studi di Milano (Italy); Fuchs, Horst W. [CellSystems GmbH, Troisdorf (Germany); DeGeorge, George; Troese, Matthew [MB Research Labs, Spinnerstown, PA (United States); Hayden, Patrick; Deng, Wei [MatTek Corporation, Ashland, MA (United States); Roggen, Erwin [3Rs Management and Consultancy (Denmark)

    2013-10-15

    The purpose of this study was to explore the possibility of combining the epidermal equivalent (EE) potency assay with the assay which assesses release of interleukin-18 (IL-18) to provide a single test for identification and classification of skin sensitizing chemicals, including chemicals of low water solubility or stability. A protocol was developed using different 3D-epidermal models including the in-house VUMC model, epiCS® (previously EST1000™), MatTek EpiDerm™ and SkinEthic™ RHE, and the impact of different vehicles (acetone:olive oil 4:1, 1% DMSO, ethanol, water) was also investigated. Following topical exposure for 24 h to 17 contact allergens and 13 non-sensitizers, a robust increase in IL-18 release was observed only after exposure to contact allergens. A putative prediction model is proposed from data obtained from two laboratories yielding 95% accuracy. Correlating the in vitro EE sensitizer potency data, which assess the chemical concentration resulting in 50% cytotoxicity (EE-EC50), with human and animal data showed a superior correlation with human DSA05 (μg/cm²) data (Spearman r = 0.8500; P value (two-tailed) = 0.0061) compared to LLNA data (Spearman r = 0.5968; P value (two-tailed) = 0.0542). DSA05 = induction dose per skin area that produces a positive response in 5% of the tested population. Also, a good correlation was observed for release of IL-18 (SI-2) into culture supernatants with human DSA05 data (Spearman r = 0.8333; P value (two-tailed) = 0.0154). This easily transferable human in vitro assay appears to be very promising, but additional testing of a larger chemical set with the different EE models is required to fully evaluate the utility of this assay and to establish a definitive prediction model. - Highlights: • A potential epidermal equivalent assay to label and classify sensitizers • IL-18 release distinguishes sensitizers from non-sensitizers • IL-18 release can rank sensitizer potency
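
    For intuition about the two quantities correlated in this record, the sketch below interpolates an EC50 from a viability-versus-concentration series and then computes a Spearman rank correlation against potency data; all numbers are invented for illustration and are not the study's data:

        import numpy as np
        from scipy import interpolate, stats

        # Hypothetical viability (%) versus exposure concentration (uM).
        conc = np.array([1.0, 10.0, 50.0, 100.0, 500.0, 2500.0])
        viability = np.array([98.0, 95.0, 80.0, 55.0, 20.0, 5.0])

        # EC50: concentration giving 50% cytotoxicity, by log-linear interpolation.
        to_log_conc = interpolate.interp1d(viability, np.log10(conc))
        ec50 = 10.0 ** float(to_log_conc(50.0))
        print(f"EC50 ~ {ec50:.0f} uM")  # ~126 uM for these made-up points

        # Spearman rank correlation between in vitro EC50s and human DSA05
        # potency values (both lists hypothetical).
        ec50s = [12.0, 85.0, 300.0, 900.0, 2400.0]
        dsa05 = [0.5, 4.0, 20.0, 80.0, 250.0]
        rho, pval = stats.spearmanr(ec50s, dsa05)
        print(rho, pval)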

  1. An epidermal equivalent assay for identification and ranking potency of contact sensitizers

    International Nuclear Information System (INIS)

    Gibbs, Susan; Corsini, Emanuela; Spiekstra, Sander W.; Galbiati, Valentina; Fuchs, Horst W.; DeGeorge, George; Troese, Matthew; Hayden, Patrick; Deng, Wei; Roggen, Erwin

    2013-01-01

    The purpose of this study was to explore the possibility of combining the epidermal equivalent (EE) potency assay with the assay which assesses release of interleukin-18 (IL-18) to provide a single test for identification and classification of skin sensitizing chemicals, including chemicals of low water solubility or stability. A protocol was developed using different 3D-epidermal models including the in-house VUMC model, epiCS® (previously EST1000™), MatTek EpiDerm™ and SkinEthic™ RHE, and the impact of different vehicles (acetone:olive oil 4:1, 1% DMSO, ethanol, water) was also investigated. Following topical exposure for 24 h to 17 contact allergens and 13 non-sensitizers, a robust increase in IL-18 release was observed only after exposure to contact allergens. A putative prediction model is proposed from data obtained from two laboratories yielding 95% accuracy. Correlating the in vitro EE sensitizer potency data, which assess the chemical concentration resulting in 50% cytotoxicity (EE-EC50), with human and animal data showed a superior correlation with human DSA05 (μg/cm²) data (Spearman r = 0.8500; P value (two-tailed) = 0.0061) compared to LLNA data (Spearman r = 0.5968; P value (two-tailed) = 0.0542). DSA05 = induction dose per skin area that produces a positive response in 5% of the tested population. Also, a good correlation was observed for release of IL-18 (SI-2) into culture supernatants with human DSA05 data (Spearman r = 0.8333; P value (two-tailed) = 0.0154). This easily transferable human in vitro assay appears to be very promising, but additional testing of a larger chemical set with the different EE models is required to fully evaluate the utility of this assay and to establish a definitive prediction model. - Highlights: • A potential epidermal equivalent assay to label and classify sensitizers • IL-18 release distinguishes sensitizers from non-sensitizers • IL-18 release can rank sensitizer potency • EC50 (chemical

  2. Breckinridge Project, initial effort

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    1982-09-01

    Report III, Volume 1 contains those specifications numbered A through J, as follows: General Specifications (A); Specifications for Pressure Vessels (C); Specifications for Tanks (D); Specifications for Exchangers (E); Specifications for Fired Heaters (F); Specifications for Pumps and Drivers (G); and Specifications for Instrumentation (J). The standard specifications of Bechtel Petroleum Incorporated have been amended as necessary to reflect the specific requirements of the Breckinridge Project, and the more stringent specifications of Ashland Synthetic Fuels, Inc. These standard specifications are available to the Initial Effort (Phase Zero) work performed by all contractors and subcontractors. Report III, Volume 1 also contains the unique specifications prepared for Plants 8, 15, and 27. These specifications will be substantially reviewed during Phase I of the project, and modified as necessary for use during the engineering, procurement, and construction of this project.

  3. Hydrogeochemical and Stream Sediment Reconnaissance Program in central United States. Semiannual progress report, October 1, 1978--March 31, 1979

    International Nuclear Information System (INIS)

    Arendt, J.W.

    1979-01-01

    Basic data reports were open filed for eight NTMS quadrangles during the reporting period: Sherman, Houston, Ardmore, Emory Peak, Presidio, Enid, Austin, and Lawton. Basic data reports, which have been prepared and are in the process of being open filed, include Wichita, St. Cloud, Ashland, and Clinton. Results indicate that the most favorable areas for the occurrence of uranium mineralization in the open filed quadrangles are as follows: (1) Austin Quadrangle - Whitsett, Catahoula, Oakville, and Fleming Formations (Tertiary). (2) Lawton Quadrangle - Hennessey and Clearfork Groups, Garber Sandstone, and Post Oak Conglomerate (Lower Permian); and El Reno Group (Upper Permian). (3) Emory Peak Quadrangle - Tertiary tuffaceous ash beds and other igneous rocks, carbonate-dominant Cretaceous strata. During the period, approximately 13,886 samples of groundwater and stream sediments were collected by the URE Project. Approximately 20,738 samples were analyzed by the URE Laboratory

  4. SU-E-T-286: Dose Verification of Spot-Scanning Proton Beam Using GafChromic EBT3 Film

    Energy Technology Data Exchange (ETDEWEB)

    Chen, C; Tang, S; Mah, D [ProCure Proton Therapy Center, Somerset, NJ (United States); Chan, M [Memorial Sloan-Kettering Cancer Center, Basking Ridge, NJ (United States)

    2015-06-15

    Purpose: Dose verification of spot-scanning proton pencil beam is performed via planar dose measurements at several depths using an ionization-chamber array, requiring repeat irradiations of each field for each depth. Here we investigate film dosimetry which has two advantages: higher resolution and efficiency from one-shot irradiation for multiple depths. Methods: Film calibration was performed using an EBT3 film at 20-cm depth of Plastic Water (CIRS, Norfolk, VA) exposed by a 10-level step wedge on a Proteus Plus proton system (IBA, Belgium). The calibration doses ranged from 25–250 cGy(RBE) for proton energies of 170–200 MeV. A uniform 1000 cm³ dose cube and a clinical prostate combined with seminal-vesicle and pelvic-nodes plan were used for this study. All treatment plans were generated in the RayStation (RaySearch Lab, Sweden). The planar doses at different depths for both cases were measured with film using triple-channel dosimetry and the MatriXX PT (IBA Dosimetry, Germany). The Gamma passing rates, dose-difference maps, and profiles of 2D planar doses measured with EBT3 film and MatriXX, versus treatment planning system (TPS) calculations were analyzed and compared using the FilmQA Pro (Ashland Inc., Bridgewater, NJ). Results: The EBT3 film measurement results matched well with the TPS calculation data with an average passing rate >95% for 2%/2mm and are comparable with the MatriXX measurements (0.7%, 1.8%, 3.8% mean differences corresponding to 3%/3mm, 3%/2mm, 2%/2mm, respectively). Overall passing rates for EBT3 films appear higher than those with MatriXX detectors. Conclusion: The energy dependence of the film response could be minimized by calibration using proton beam with mixed energies. The greater efficiency of the dose verification using GafChromic EBT3 results in a potential cost trade-off between room capacity and film cost. EBT3 film may offer distinct advantages in highly intensity-modulated fields due to its higher resolution
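
    The gamma passing rates quoted above compare a measured and a calculated planar dose under a combined dose-difference and distance-to-agreement criterion. A simplified, brute-force global gamma (e.g., 2%/2 mm) is sketched below; commercial tools such as FilmQA Pro add interpolation and other refinements, so this is illustrative only:

        import numpy as np

        def gamma_pass_rate(measured, calculated, spacing_mm,
                            dd=0.02, dta_mm=2.0, threshold=0.10):
            """Percent of points with global gamma <= 1 (here 2%/2 mm).

            Brute-force search within the DTA radius; np.roll wraps at the
            array edges, which is acceptable for an illustration but not
            for clinical use.
            """
            d_max = float(calculated.max())
            reach = int(np.ceil(dta_mm / spacing_mm))
            gamma_sq = np.full(measured.shape, np.inf)
            for dy in range(-reach, reach + 1):
                for dx in range(-reach, reach + 1):
                    dist_sq = (dy * spacing_mm) ** 2 + (dx * spacing_mm) ** 2
                    if dist_sq > dta_mm ** 2:
                        continue
                    shifted = np.roll(np.roll(calculated, dy, axis=0), dx, axis=1)
                    dose_sq = ((measured - shifted) / (dd * d_max)) ** 2
                    gamma_sq = np.minimum(gamma_sq, dose_sq + dist_sq / dta_mm ** 2)
            mask = measured > threshold * d_max  # ignore the low-dose region
            return 100.0 * float((gamma_sq[mask] <= 1.0).mean())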

  5. Relations of water-quality constituent concentrations to surrogate measurements in the lower Platte River corridor, Nebraska, 2007 through 2011

    Science.gov (United States)

    Schaepe, Nathaniel J.; Soenksen, Philip J.; Rus, David L.

    2014-01-01

    The lower Platte River, Nebraska, provides drinking water, irrigation water, and in-stream flows for recreation, wildlife habitat, and vital habitats for several threatened and endangered species. The U.S. Geological Survey (USGS), in cooperation with the Lower Platte River Corridor Alliance (LPRCA) developed site-specific regression models for water-quality constituents at four sites (Shell Creek near Columbus, Nebraska [USGS site 06795500]; Elkhorn River at Waterloo, Nebr. [USGS site 06800500]; Salt Creek near Ashland, Nebr. [USGS site 06805000]; and Platte River at Louisville, Nebr. [USGS site 06805500]) in the lower Platte River corridor. The models were developed by relating continuously monitored water-quality properties (surrogate measurements) to discrete water-quality samples. These models enable existing web-based software to provide near-real-time estimates of stream-specific constituent concentrations to support natural resources management decisions. Since 2007, USGS, in cooperation with the LPRCA, has continuously monitored four water-quality properties seasonally within the lower Platte River corridor: specific conductance, water temperature, dissolved oxygen, and turbidity. During 2007 through 2011, the USGS and the Nebraska Department of Environmental Quality collected and analyzed discrete water-quality samples for nutrients, major ions, pesticides, suspended sediment, and bacteria. These datasets were used to develop the regression models. This report documents the collection of these various water-quality datasets and the development of the site-specific regression models. Regression models were developed for all four monitored sites. Constituent models for Shell Creek included nitrate plus nitrite, total phosphorus, orthophosphate, atrazine, acetochlor, suspended sediment, and Escherichia coli (E. coli) bacteria. Regression models that were developed for the Elkhorn River included nitrate plus nitrite, total Kjeldahl nitrogen, total phosphorus
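
    Surrogate models of the kind described are typically least-squares fits in log space between a continuously monitored property and the sampled constituent. A minimal sketch with invented turbidity and suspended-sediment pairs; the report's actual models, coefficients, and bias corrections will differ:

        import numpy as np

        # Hypothetical paired observations at one site: continuous turbidity
        # (FNU) versus suspended-sediment concentration (mg/L) from samples.
        turbidity = np.array([12.0, 30.0, 55.0, 90.0, 140.0, 260.0, 410.0])
        ssc = np.array([18.0, 40.0, 75.0, 120.0, 190.0, 330.0, 520.0])

        # Fit log10(SSC) = b0 + b1 * log10(turbidity).
        b1, b0 = np.polyfit(np.log10(turbidity), np.log10(ssc), 1)

        def estimate_ssc(turb_fnu):
            """Near-real-time SSC estimate from a monitored turbidity value.

            Production USGS models also apply a retransformation-bias
            correction (e.g., Duan smearing), omitted here.
            """
            return 10.0 ** (b0 + b1 * np.log10(turb_fnu))

        print(estimate_ssc(200.0))  # mg/L, from the made-up fit above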

  6. Breckinridge Project, initial effort. Report XI, Volume V. Critical review of the design basis. [Critical review

    Energy Technology Data Exchange (ETDEWEB)

    None

    1982-01-01

    Report XI, Technical Audit, is a compendium of research material used during the Initial Effort in making engineering comparisons and decisions. Volumes 4 and 5 of Report XI present those studies which provide a Critical Review of the Design Basis. The Critical Review Report, prepared by Intercontinental Econergy Associates, Inc., summarizes findings from an extensive review of the data base for the H-Coal process design. Volume 4 presents this review and assessment, and includes supporting material; specifically, Design Data Tabulation (Appendix A), Process Flow Sheets (Appendix B), and References (Appendix C). Volume 5 is a continuation of the references of Appendix C. Studies of a proprietary nature are noted and referenced, but are not included in these volumes. They are included in the Limited Access versions of these reports and may be reviewed by properly cleared personnel in the offices of Ashland Synthetic Fuels, Inc.

  7. Enhancing Supply Chain Performance with Environmental Cost Information: Examples from Commonwealth Edison, Andersen Corporation, and Ashland Chemical

    Science.gov (United States)

    This collection of case studies illustrates how supply chain management practices can be improved by determining the financial impact of business activities that have an impact on a company’s environmental performance.

  8. Fighting against VOC emissions; Lutter contre les emissions de COV

    Energy Technology Data Exchange (ETDEWEB)

    Fanlo, J.L. [Ecole des Mines d' Ales, 30 (France); Puech, G. [APAVE, 75 - Paris (France); Patoux, R. [Rhodia Rhoditech (France)] [and others]

    2001-12-01

    This document brings together 15 expert testimonies about the processes used in industry to abate volatile organic compound (VOC) emissions. The points addressed include: the first industrial experiments in reducing VOC emissions; how to audit facilities, establish a diagnosis, rank emission sources, and continuously measure VOC emissions; how to anticipate the explosion risks linked with VOC treatment processes; the techniques of VOC abatement at the source implemented by industrial companies; the implementation of an emissions control plan by the Crown Cork and Seal company; the implementation of a solvent management plan by Turbomeca and of a paint strategy by the carmaker Renault; the combination of VOC abatement techniques implemented by industrial companies; the classification of destruction and recovery processes, with feedback from the Sanofi Synthelabo and Air Liquide companies; and the combination of upstream and downstream techniques implemented by the Pechiney Rhenalu, Ashland Polyester, and Quebecor companies. (J.S.)

  9. Effects of potential surface coal mining on dissolved solids in Otter Creek and in the Otter Creek alluvial aquifer, southeastern Montana

    Science.gov (United States)

    Cannon, M.R.

    1985-01-01

    Otter Creek drains an area of 709 square miles in the coal-rich Powder River structural basin of southeastern Montana. The Knobloch coal beds in the Tongue River Member of the Paleocene Fort Union Formation are a shallow aquifer and a target for future surface mining in the downstream part of the Otter Creek basin. A mass-balance model was used to estimate the effects of potential mining on the dissolved-solids concentration in Otter Creek and in the alluvial aquifer in the Otter Creek valley. With extensive mining of the Knobloch coal beds, the annual load of dissolved solids to Otter Creek at Ashland at median streamflow could increase by 2,873 tons, a 32-percent increase over the annual pre-mining load. Monthly load increases to Otter Creek at median streamflow could range from 15 percent in February to 208 percent in August. The post-mining dissolved-solids load to the subirrigated part of the alluvial valley could increase by 71 percent. The median dissolved-solids concentration in the subirrigated part of the valley could be 4,430 milligrams per liter, compared to the pre-mining median concentration of 2,590 milligrams per liter. Post-mining loads from the potentially mined landscape were calculated using saturated-paste-extract data from 506 overburden samples collected from 26 wells and test holes. Post-mining loads to the Otter Creek valley likely would continue at increased rates for hundreds of years after mining. If the actual area of Knobloch coal disturbed by mining were less than that used in the model, post-mining loads to the Otter Creek valley would be proportionally smaller. (USGS)
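
    The mass-balance estimate amounts to flow-weighted mixing of dissolved-solids loads from the stream and from the mined landscape. A minimal sketch of that arithmetic follows; the spoil flow and concentration are purely illustrative, with only the 2,590 mg/L pre-mining median taken from the abstract.

```python
def post_mining_concentration(q_stream, c_stream, q_spoil, c_spoil):
    """Mass-balance mixing estimate of dissolved-solids concentration.

    q_* : flows (e.g. cubic feet per second)
    c_* : dissolved-solids concentrations (mg/L)
    Returns the flow-weighted post-mining concentration.
    """
    return (q_stream * c_stream + q_spoil * c_spoil) / (q_stream + q_spoil)

# Illustrative numbers only (not from the report): a small spoil-aquifer
# discharge with high dissolved solids mixing into median streamflow.
print(post_mining_concentration(q_stream=5.0, c_stream=2590.0,
                                q_spoil=0.5, c_spoil=12000.0))
```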

  10. Effects of highway deicing chemicals on shallow unconsolidated aquifers in Ohio, interim report, 1988-93

    Science.gov (United States)

    Jones, A.L.; Sroka, B.N.

    1997-01-01

    Effects of the application of highway deicing chemicals during winter months on ground-water quality are being studied by the U.S. Geological Survey in cooperation with the Ohio Department of Transportation and the Federal Highway Administration. Eight sites throughout the State were selected along major undivided highways where drainage is by open ditches and ground-water flow is approximately perpendicular to the highway. At these sites, records of deicer application rates are being kept and apparent movement of deicing chemicals through shallow, unconsolidated aquifers is being monitored by means of periodic measurements of specific conductance and concentrations of dissolved sodium, calcium, and chloride. The counties and corresponding sections of state routes being monitored are the following: State Route (SR) 3 in Ashland County, SR 84 in Ashtabula County, SR 29 in Champaign County, SR 4 in Clark County, SR 2 in Lucas County, SR 104 in Pickaway County, SR 14 in Portage County, and SR 97 in Richland County. The study began in January 1988 with background data collection, extensive literature review, and site selection. This process, including drilling of wells at the eight selected sites, lasted 3 years. Routine ground-water sampling at 4- to 6-week intervals began in January 1991. A relatively new type of multilevel, passive flow ground-water sampling device was constructed and used. Other conditions monitored on a regular basis included ground-water level (monitored continuously), specific conductance, air and soil temperature, precipitation, chloride concentration in soil samples, ground conductivity, and deicing chemical application times and rates. For the interim reporting period, water samples were collected from January 1991 through September 1993. Evidence from water analysis, specific conductance measurements, and surface geophysical measurements indicates that four of the eight sites (Ashtabula County, Lucas County, Portage County, and Richland

  11. Studies of ionising radiation induced bystander effects in 3D artificial tissue system and applications for radiation protection

    International Nuclear Information System (INIS)

    Belyakov, Oleg V.; Kuopio Univ.

    2008-01-01

    The universality of the target theory of radiation-induced effects is challenged by observations on non-targeted effects such as bystander effects. Essential features of non-targeted effects are that they do not require direct nuclear exposure by radiation and they are particularly significant at low doses. This new evidence suggests a need for a new paradigm in radiation biology. The new paradigm should cover both the classical (targeted) and the non-targeted effects. The bystander effect cannot be comprehensively explained on the basis of a single-cell reaction. It is well known that an organism is composed of different cell types that interact as functional units to maintain normal tissue function. Therefore the radiation response is not simply the sum of cellular responses, as assumed in classical radiobiology, which derives predominantly from studies using cell cultures. Experimental models that maintain tissue-like intercellular signalling and 3D structure are essential for a proper understanding of the bystander effect. Our work relates to experimentation with novel 3D artificial human tissue systems available from MatTek Corporation (Boston, USA). An air-liquid interface culture technique is used to grow the artificial tissues, which allows conditions present in vivo to be modelled. The Gray Cancer Institute (Northwood, UK) charged-particle microbeam was used to irradiate tissue samples in a known pattern with a known number of {sup 3}He{sup 2+} particles or protons. After irradiation, the tissue models were incubated for 3 days, fixed in 10% NBF, paraffin-embedded, and then sliced into 5-μm histological sections located at varying distances from the plane of the irradiated cells. We studied in situ apoptosis and markers of differentiation. Significantly elevated bystander-induced apoptosis was observed with a 3'-OH DNA end-labelling-based technique in 3D artificial tissue systems. Our results also suggested an importance of proliferation and differentiation status for bystander

  12. THE PEDAGOGY PROGRAM OF THE FEDERAL UNIVERSITY OF MATO GROSSO DO SUL, NAVIRAÍ/MS CAMPUS: THE NURSERY IN INITIAL TEACHER TRAINING

    Directory of Open Access Journals (Sweden)

    Klinger Teodoro Ciríaco

    2015-12-01

    Full Text Available The objective of this research was to investigate the characteristics and contributions of initial teacher training for work in Early Childhood Education (ages 0-3). This paper derives from an undergraduate course conclusion work (TCC); only the first part of the study is presented here. We adopted the precepts of qualitative research, more specifically document analysis, to examine how the specifics of the nursery are addressed in the curriculum of the Pedagogy program of the Federal University of Mato Grosso do Sul, Campus Naviraí/MS (UFMS/CPNv). The data analysis shows that discussions about childhood and the early childhood curriculum in the investigated course are still few, and that many of the area's specific disciplines belong to the elective courses. Thus, we can say that it would be fundamental to create a movement in initial training toward a pedagogy more specific to working with early childhood, because work experience and the students' trajectories have shown us how far the contents of the nursery, and discussions about the early childhood professional, are from being the starting point of initial training; instead, they are treated as second-order concerns.

  13. SU-F-T-574: MLC Based SRS Beam Commissioning - Minimum Target Size Investigation

    Energy Technology Data Exchange (ETDEWEB)

    Zakikhani, R [Florida Cancer Specialists - Largo, Largo, FL (United States); Able, C [Florida Cancer Specialists - New Port Richey, New Port Richey, FL (United States)

    2016-06-15

    Purpose: To implement an MLC-based accelerator SRS program using small fields down to 1 cm × 1 cm and to determine the smallest target size safe for clinical treatment. Methods: Computerized beam scanning was performed in water using a diode detector and a linac-head-attached transmission ion chamber to characterize the small-field dosimetric aspects of a 6 MV photon beam (Trilogy, Varian Medical Systems, Inc.). Output factors, PDDs, and profiles for field sizes of 1, 2, 3, 4, and 10 cm{sup 2} were measured and utilized to create a new treatment planning system (TPS) model (AAA ver. 11021). Static MLC SRS treatment plans were created and delivered to a homogeneous phantom (Cube 20, CIRS, Inc.) for 1.0-cm and 1.5-cm “PTV” targets. A 12-field DMLC plan was created for a 2.1-cm target. Radiochromic film (EBT3, Ashland Inc.) was used to measure the planar dose in the axial, coronal, and sagittal planes. A micro ion chamber (0.007 cc) was used to measure the dose at isocenter for each treatment delivery. Results: The new TPS model was validated using a tolerance criterion of 2% dose and 2-mm distance to agreement. For fields ≤ 3 cm{sup 2}, the maximum PDD, profile, and OF differences were 0.9%, 2%/2mm, and 1.4%, respectively. The measured radiochromic film planar dose distributions had gamma scores of 95.3% or higher using 3%/2mm criteria. Ion chamber measurements for all 3 test plans effectively met our goal of delivering the dose accurately to within 5% when compared to the expected dose reported by the TPS (1 cm plan Δ= −5.2%, 1.5 cm plan Δ= −2.0%, 2 cm plan Δ= 1.5%). Conclusion: End-to-end testing confirmed that MLC-defined SRS for target sizes ≥ 1.0 cm can be safely planned and delivered.

  14. Oregon Low-Temperature-Resource Assessment Program. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    Priest, G.R.; Black, G.L.; Woller, N.M.

    1981-01-01

    Numerous low-temperature hydrothermal systems are available for exploitation throughout the Cascades and eastern Oregon. All of these areas have heat flow significantly higher than crustal averages and many thermal aquifers. In northeastern Oregon, low temperature geothermal resources are controlled by regional stratigraphic aquifers of the Columbia River Basalt Group at shallow depths and possibly by faults at greater depths. In southeastern Oregon most hydrothermal systems are of higher temperature than those of northeastern Oregon and are controlled by high-angle fault zones and layered volcanic aquifers. The Cascades have very high heat flow but few large population centers. Direct use potential in the Cascades is therefore limited, except possibly in the cities of Oakridge and Ashland, where load may be great enough to stimulate development. Absence of large population centers also inhibits initial low temperature geothermal development in eastern Oregon. It may be that uses for the abundant low temperature geothermal resources of the state will have to be found which do not require large nearby population centers. One promising use is generation of electricity from freon-based biphase electrical generators. These generators will be installed on wells at Vale and Lakeview in the summer of 1982 to evaluate their potential use on geothermal waters with temperatures as low as 80°C (176°F).

  15. Poster — Thur Eve — 39: Feasibility of Commissioning HybridArc with the Delta 4 two plane diode phantom: comparisons with Gafchromic Film

    Energy Technology Data Exchange (ETDEWEB)

    Bojechko, C. [University of Calgary, Department of Physics and Astronomy, Tom Baker Cancer Center, Calgary AB (Canada); Ploquin, N. [University of Calgary, Department of Physics and Astronomy, Tom Baker Cancer Center, Calgary AB (Canada); University of Calgary, Department of Oncology, Tom Baker Cancer Center, Calgary AB (Canada); Hudson, A. [University of Calgary, Department of Oncology, Tom Baker Cancer Center, Calgary AB (Canada); Sayous, Y. [Université Paul Sabatier, Toulouse (France)]

    2014-08-15

    HybridArc is a relatively novel radiation therapy technique which combines optimized dynamic conformal arcs (DCA) and intensity-modulated radiation therapy (IMRT). HybridArc has possible dosimetry and efficiency advantages over stand-alone DCA and IMRT treatments and can be readily implemented on any linac capable of DCA and IMRT, giving strong motivation to commission the modality. The Delta4 phantom (Scandidos, Uppsala, Sweden) has been used for IMRT and VMAT clinical dosimetric verification, making it a candidate for HybridArc commissioning. However, the HybridArc modality makes use of several non-coplanar arcs, which creates setup issues due to the geometry of the Delta4, resulting in possible phantom-gantry collisions for plans with non-zero couch angles. An analysis was done determining the feasibility of using the Delta4 fixed at 0° couch angle compared with results obtained using Gafchromic EBT2 film (Ashland, Covington, Kentucky) in an anthropomorphic phantom at the planned couch angles. A gamma index analysis of the measured and planned dose distributions was done using the Delta4 and DoseLab Pro (Mobius Medical Systems, Houston, Texas) software. For both arc and IMRT sub-fields there is reasonable correlation between the gamma indices found from the Delta4 and the Gafchromic film. All results show the feasibility of using the Delta4 for HybridArc commissioning.

  16. Poster — Thur Eve — 39: Feasibility of Commissioning HybridArc with the Delta 4 two plane diode phantom: comparisons with Gafchromic Film

    International Nuclear Information System (INIS)

    Bojechko, C.; Ploquin, N.; Hudson, A.; Sayous, Y.

    2014-01-01

    HybridArc is a relatively novel radiation therapy technique which combines optimized dynamic conformal arcs (DCA) and intensity-modulated radiation therapy (IMRT). HybridArc has possible dosimetry and efficiency advantages over stand-alone DCA and IMRT treatments and can be readily implemented on any linac capable of DCA and IMRT, giving strong motivation to commission the modality. The Delta4 phantom (Scandidos, Uppsala, Sweden) has been used for IMRT and VMAT clinical dosimetric verification, making it a candidate for HybridArc commissioning. However, the HybridArc modality makes use of several non-coplanar arcs, which creates setup issues due to the geometry of the Delta4, resulting in possible phantom-gantry collisions for plans with non-zero couch angles. An analysis was done determining the feasibility of using the Delta4 fixed at 0° couch angle compared with results obtained using Gafchromic EBT2 film (Ashland, Covington, Kentucky) in an anthropomorphic phantom at the planned couch angles. A gamma index analysis of the measured and planned dose distributions was done using the Delta4 and DoseLab Pro (Mobius Medical Systems, Houston, Texas) software. For both arc and IMRT sub-fields there is reasonable correlation between the gamma indices found from the Delta4 and the Gafchromic film. All results show the feasibility of using the Delta4 for HybridArc commissioning.

  17. DOE hearing on the draft area recommendation report for the Crystalline Repository Project, Patrick Henry High School, Ashland, Virginia: Session 2: Transcript of proceedings

    International Nuclear Information System (INIS)

    1986-01-01

    This document contains a transcript of the comments of the witnesses heard at the session. Also included is a list of 2 exhibits. These meetings were held to hear public comments regarding the selection of a site for a second high-level nuclear waste repository.

  18. Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy

    Science.gov (United States)

    Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.

    2018-01-01

    This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM-format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine, and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank in units of cGy/Monitor Unit, and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to the TPS calculation by gamma analysis using the same criteria. Dose profiles from the IDC calculation in a homogeneous water phantom agree with measurements within 2.3% of the global max dose or 1 mm distance to agreement for all except the smallest field size. Comparing the film measurement to calculated dose, 99.9% of all voxels pass gamma analysis; comparing dose calculated by the IDC framework to TPS-calculated dose for the clinical prostate plan shows a 99.0% passing rate. IDC-calculated dose is found to be up to 5.6% lower than dose calculated by the TPS in this case near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
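
    One building block of such a framework is the beam sampling and coordinate transformation engine, which must map phase-space particles from a beam-local frame (beam axis along +z) into patient coordinates for each, possibly non-coplanar, beam. The sketch below shows only that rotation/translation step; all names are hypothetical and it bears no relation to the authors' code.

```python
import numpy as np

def beam_to_patient(positions, directions, source_pos, beam_dir):
    """Rotate/translate phase-space particles from a beam-local frame
    (beam axis = +z) into the patient/CT frame.

    positions, directions : (N, 3) arrays in the beam-local frame
    source_pos            : beam focal spot in patient coordinates
    beam_dir              : unit vector of the beam axis in patient coordinates
    """
    beam_dir = beam_dir / np.linalg.norm(beam_dir)
    # Build an orthonormal basis (u, v, beam_dir) for the beam frame.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(beam_dir @ helper) > 0.9:        # avoid a degenerate cross product
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(helper, beam_dir); u /= np.linalg.norm(u)
    v = np.cross(beam_dir, u)
    R = np.column_stack([u, v, beam_dir])   # beam frame -> patient frame
    return positions @ R.T + source_pos, directions @ R.T

# A particle travelling along +z in the beam frame ends up travelling
# along the beam axis in patient coordinates.
pos = np.zeros((1, 3)); dirs = np.array([[0.0, 0.0, 1.0]])
p, d = beam_to_patient(pos, dirs, source_pos=np.array([0., -800., 0.]),
                       beam_dir=np.array([0., 1., 0.]))
print(p, d)
```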

  19. In vitro human skin permeation of endoxifen: potential for local transdermal therapy for primary prevention and carcinoma in situ of the breast

    Directory of Open Access Journals (Sweden)

    Lee O

    2011-07-01

    Full Text Available Oukseub Lee,1 David Ivancic,1 Robert T Chatterton Jr,2 Alfred W Rademaker,3 Seema A Khan1 (1Department of Surgery, 2Department of Obstetrics/Gynecology, 3Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, IL, USA). Purpose: Oral tamoxifen, a triphenylethylene (TPE), is useful for breast cancer prevention, but its adverse effects limit acceptance by women. Tamoxifen efficacy is related to its major metabolites 4-hydroxytamoxifen (4-OHT) and N-desmethyl-4-hydroxytamoxifen (endoxifen [ENX]). Transdermal delivery of these to the breast may avert the toxicity of oral tamoxifen while maintaining efficacy. We evaluated the relative efficiency of skin permeation of 4-OHT and ENX in vitro, and tested oleic acid (OA) as a permeation enhancer. Methods: 4-OHT, ENX, and estradiol (E2) (0.2 mg/mL, 0.5 µCi ³H/mg) were dissolved in 60% ethanol-phosphate buffer, ±OA (0.1%–5%). Permeation through EpiDerm™ (MatTek Corp, Ashland, MA) and split-thickness human skin was calculated based on the amount of the agents recovered from the receiver fluid and skin using liquid scintillation counting over 24 hours. Results: In the EpiDerm model, the absorption of 4-OHT and ENX was 10%–11%; total penetration (TP) was 26%–29% at 24 hours and was decreased by OA. In normal human skin, the absorption of 4-OHT and ENX was 0.3%; TP was 2%–4% at 24 hours. The addition of 1% OA improved the permeation of ENX significantly more than that of 4-OHT (P < 0.004); further titration of OA at 0.25%–0.5% further improved the permeation of ENX to a level similar to that of estradiol. Conclusion: The addition of OA to ENX results in favorable rapid delivery equivalent to that of estradiol, a widely used transdermal hormone. The transdermal delivery of ENX to the breast should be further developed in preclinical and clinical studies. Keywords: endoxifen, breast cancer prevention, human skin, transdermal, oleic acid

  20. Northeast Empire Limited Partnership No. 2 Alternative Energy, Inc. and Maine Public Service Company (Docket No. EA-95): Environmental Assessment

    International Nuclear Information System (INIS)

    1992-05-01

    This environmental assessment (EA) has been prepared in connection with the application (the Application) by Alternative Energy, Inc. (AEI), Northeast Empire Limited Partnership #2 (the Partnership) and Maine Public Service Company (MPS and, collectively with AEI and the Partnership, the Applicants), filed on January 6, 1992, for authorization to transmit electric energy to a foreign country. The electric energy to be exported would be produced by a 39-megawatt (MW) wood-fired powerplant (the Project) to be developed by AEI and to be built by the Partnership in Ashland, Maine, which is located in MPS' service territory. In the Application, the Applicants requested authorization for MPS to transmit electricity generated by the Project from Maine through New Brunswick, Canada, en route back to the US for domestic use by Central Maine Power Company (CMP). The need to cross the US-Canadian border arises due to the geography of the area and to facilitate access to an existing transmission line. None of the power will be used in Canada. This EA addresses the environmental impacts associated with the export of electric energy and the connected actions which are expected to result from the authorization of those exports. The electricity would be exported over cross-border transmission facilities maintained between MPS and New Brunswick Power Corporation (NB Power). No new cross-border facilities, and no modification of any existing cross-border facilities, are required in connection with the proposed action.

  1. FY 1981 Report on the results of Sunshine Project. Coal energy; 1981 nendo sunshine keikaku seika hokokusho. Sekitan energy

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1981-07-01

    This report presents the results of research on solvolysis liquefaction mechanisms and reaction promotion with oil- and coal-based solvents, conducted as part of the research and development project for coal liquefaction techniques. The FY 1981 program includes research on (1) the effects of liquefaction reaction conditions on liquefaction yield and the production of light products for coal species of low degree of carbonization, including brown coal, (2) the effects of pretreatment of coal on its liquefaction reactivity, and (3) upgrading of the solvolysis coal liquid (SCL). For item (1), HA240 (hydrogenated Ashland A240) is used to investigate its liquefaction capacity for various coal species of low degree of carbonization, including brown coal. For item (2), the effects of pretreatment in a hot water bath with reflux were investigated for sub-bituminous coal in FY 1980; in FY 1981, various pretreatment methods are attempted to enhance the liquefaction reactivity of brown coal. As a result, it is found that the ash content of brown coal decreases to one-third of its initial level when the coal is treated in a dilute hydrochloric acid bath with reflux. For item (3), SCL hydrogenated by Birch reduction (B-SCL) is compared with H-SCL with respect to properties, structure, and thermal crackability, in order to clarify the effectiveness of the catalyst and to identify the requirements for the catalyst to be used in the second stage. (NEDO)

  2. Model-model Perencanaan Strategik

    OpenAIRE

    Amirin, Tatang M

    2005-01-01

    The process of strategic planning, formerly called long-term planning, consists of several components, including strategic analysis, setting the strategic direction (covering mission, vision, and values), and action planning. Many writers develop models representing the steps of the strategic planning process, i.e. the basic planning model, the problem-based planning model, the scenario model, and the organic or self-organizing model.

  3. Model-to-model interface for multiscale materials modeling

    Energy Technology Data Exchange (ETDEWEB)

    Antonelli, Perry Edward [Iowa State Univ., Ames, IA (United States)

    2017-12-17

    A low-level model-to-model interface is presented that will enable independent models to be linked into an integrated system of models. The interface is based on a standard set of functions that contain appropriate export and import schemas that enable models to be linked with no changes to the models themselves. These ideas are presented in the context of a specific multiscale material problem that couples atomistic-based molecular dynamics calculations to continuum calculations of fluid flow. These simulations will be used to examine the influence of interactions of the fluid with an adjacent solid on the fluid flow. The interface will also be examined by adding it to an already existing modeling code, Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), and comparing it with our own molecular dynamics code.
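
    The essence of such a low-level interface is that each model implements only export and import functions against an agreed schema, so models can be linked without changes to the models themselves. A minimal sketch under that reading, with invented classes and field names standing in for the molecular dynamics and continuum codes:

```python
# Each model exposes export/import functions against an agreed schema,
# so neither model needs to know about the other. All names are
# illustrative, not taken from the paper.

class AtomisticModel:
    def export_state(self):
        # Schema: wall data extracted from molecular dynamics.
        return {"wall_velocity": 0.12, "slip_length": 1.8e-9}

    def import_state(self, data):
        self.pressure_gradient = data["pressure_gradient"]

class ContinuumModel:
    def export_state(self):
        return {"pressure_gradient": 4.0e4}

    def import_state(self, data):
        # Use the MD-derived slip length as a boundary condition.
        self.slip_length = data["slip_length"]

def couple(model_a, model_b, steps=10):
    """Generic coupler: only the export/import schema is shared."""
    for _ in range(steps):
        model_b.import_state(model_a.export_state())
        model_a.import_state(model_b.export_state())

couple(AtomisticModel(), ContinuumModel())
```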

  4. Models and role models.

    Science.gov (United States)

    ten Cate, Jacob M

    2015-01-01

    Developing experimental models to understand dental caries has been the theme of our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of action and was also utilized for the formulation of oral care products. In addition, we made use of intra-oral (in situ) models to study other features of the oral environment that drive the de/remineralization balance in individual patients. This model addressed basic questions, such as how enamel and dentine are affected by challenges in the oral cavity, as well as practical issues related to fluoride toothpaste efficacy. The observation that perhaps fluoride is not sufficiently potent to reduce dental caries in present-day society triggered us to expand our knowledge of the bacterial aetiology of dental caries. For this we developed the Amsterdam Active Attachment biofilm model. Different from studies on planktonic ('single') bacteria, this biofilm model captures bacteria in a habitat similar to dental plaque. With data from the combination of these models, it should be possible to study the separate processes which together may lead to dental caries. Products and novel agents that interfere with either of the processes could also be evaluated. Having these separate models in place, a suggestion is made to design computer models to encompass the available information. Models, but also role models, are of the utmost importance in driving and guiding research and researchers. 2015 S. Karger AG, Basel

  5. Hydrological models are mediating models

    Science.gov (United States)

    Babel, L. V.; Karssenberg, D.

    2013-08-01

    Despite the increasing role of models in hydrological research and decision-making processes, only a few accounts of the nature and function of models exist in hydrology. Earlier considerations have traditionally been conducted while making a clear distinction between physically-based and conceptual models. A new philosophical account, primarily based on the fields of physics and economics, transcends classes of models and scientific disciplines by considering models as "mediators" between theory and observations. The core of this approach lies in identifying models as (1) being only partially dependent on theory and observations, (2) integrating non-deductive elements in their construction, and (3) carrying the role of instruments of scientific enquiry about both theory and the world. The applicability of this approach to hydrology is evaluated in the present article. Three widely used hydrological models, each showing a different degree of apparent physicality, are confronted with the main characteristics of the "mediating models" concept. We argue that, irrespective of their kind, hydrological models depend on both theory and observations, rather than merely on one of these two domains. Their construction additionally involves a large number of miscellaneous, external ingredients, such as past experiences, model objectives, knowledge and preferences of the modeller, as well as hardware and software resources. We show that hydrological models fulfil the role of instruments in scientific practice by mediating between theory and the world. It follows from these considerations that the traditional distinction between physically-based and conceptual models is necessarily too simplistic and refers at best to the stage at which theory and observations are steering model construction. The large variety of ingredients involved in model construction deserves closer attention, as they are rarely explicitly presented in the peer-reviewed literature. We believe that devoting

  6. Modeling Documents with Event Model

    Directory of Open Access Journals (Sweden)

    Longhui Wang

    2015-08-01

    Full Text Available Currently deep learning has made great breakthroughs in visual and speech processing, mainly because it draws on the hierarchical mode in which the brain processes images and speech. In the field of NLP, a topic model is one of the important ways of modeling documents. Topic models are built on a generative model that clearly does not match the way humans write. In this paper, we propose Event Model, which is unsupervised and based on the language processing mechanism of neurolinguistics, to model documents. In Event Model, documents are descriptions of concrete or abstract events seen, heard, or sensed by people, and words are objects in the events. Event Model has two stages: word learning and dimensionality reduction. Word learning learns the semantics of words based on deep learning. Dimensionality reduction represents a document as a low-dimensional vector through a linear mode that is completely different from topic models. Event Model achieves state-of-the-art results on document retrieval tasks.

  7. Semiparametric modeling: Correcting low-dimensional model error in parametric models

    International Nuclear Information System (INIS)

    Berry, Tyrus; Harlim, John

    2016-01-01

    In this paper, a semiparametric modeling approach is introduced as a paradigm for addressing model error arising from unresolved physical phenomena. Our approach compensates for model error by learning an auxiliary dynamical model for the unknown parameters. Practically, the proposed approach consists of the following steps. Given a physics-based model and a noisy data set of historical observations, a Bayesian filtering algorithm is used to extract a time-series of the parameter values. Subsequently, the diffusion forecast algorithm is applied to the retrieved time-series in order to construct the auxiliary model for the time evolving parameters. The semiparametric forecasting algorithm consists of integrating the existing physics-based model with an ensemble of parameters sampled from the probability density function of the diffusion forecast. To specify initial conditions for the diffusion forecast, a Bayesian semiparametric filtering method that extends the Kalman-based filtering framework is introduced. In difficult test examples, which introduce chaotically and stochastically evolving hidden parameters into the Lorenz-96 model, we show that our approach can effectively compensate for model error, with forecasting skill comparable to that of the perfect model.
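
    The first step of the procedure, extracting a time series of hidden parameter values with a Bayesian filter, can be pictured on a toy linear-Gaussian system. The sketch below tracks a randomly drifting parameter with a scalar Kalman filter; the paper's actual setting (Lorenz-96 with ensemble filtering and a diffusion forecast) is far richer, and every number here is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: observations y_t = a_t * u_t + noise, where the hidden
# parameter a_t drifts as a random walk and u_t is a known input.
T = 500
a_true = np.cumsum(0.02 * rng.standard_normal(T)) + 1.0
u = rng.uniform(0.5, 1.5, T)
y = a_true * u + 0.1 * rng.standard_normal(T)

# Scalar Kalman filter treating the parameter a_t as the state.
a_est = np.zeros(T)
a, P = 1.0, 1.0          # initial mean and variance
q, r = 0.02**2, 0.1**2   # process and observation noise variances
for t in range(T):
    P += q                                 # predict (random-walk model)
    K = P * u[t] / (u[t] * P * u[t] + r)   # Kalman gain
    a += K * (y[t] - a * u[t])             # measurement update
    P *= (1.0 - K * u[t])
    a_est[t] = a

print(np.mean(np.abs(a_est[100:] - a_true[100:])))  # tracking error
```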

  8. Vector models and generalized SYK models

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Cheng [Department of Physics, Brown University,Providence RI 02912 (United States)

    2017-05-23

    We consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. A chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.

  9. [Bone remodeling and modeling/mini-modeling].

    Science.gov (United States)

    Hasegawa, Tomoka; Amizuka, Norio

    Modeling, which adapts structures to loading by changing bone size and shape, often takes place in bone at the fetal and developmental stages, while bone remodeling, the replacement of old bone by new bone, is predominant in the adult stage. Modeling can be divided into macro-modeling (macroscopic modeling) and mini-modeling (microscopic modeling). In the cellular process of mini-modeling, unlike bone remodeling, bone-lining cells, i.e., resting flattened osteoblasts covering bone surfaces, become the active form of osteoblasts and then deposit new bone onto the old bone without mediating osteoclastic bone resorption. Among the drugs for osteoporotic treatment, eldecalcitol (a vitamin D3 analog) and teriparatide (human PTH[1-34]) could show mini-modeling-based bone formation. Histologically, mature, active osteoblasts are localized on the new bone induced by mini-modeling; however, only a few cell layers of preosteoblasts are formed over the newly formed bone, and accordingly, few osteoclasts are present in the region of mini-modeling. In this review, the histological characteristics of bone remodeling and modeling, including mini-modeling, will be introduced.

  10. ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT

    International Nuclear Information System (INIS)

    Clinton Lum

    2002-01-01

    The purpose of this Analysis and Model Report (AMR) is to document Rock Properties Model (RPM) 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties models are intended principally for use as input to numerical physical-process modeling, such as of ground-water flow and/or radionuclide transport. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. This work was conducted in accordance with the following planning documents: WA-0344, ''3-D Rock Properties Modeling for FY 1998'' (SNL 1997); WA-0358, ''3-D Rock Properties Modeling for FY 1999'' (SNL 1999); and the technical development plan, Rock Properties Model Version 3.1 (CRWMS M&O 1999c). The Interim Change Notices (ICNs), ICN 02 and ICN 03, of this AMR were prepared as part of activities being conducted under the Technical Work Plan, TWP-NBS-GS-000003, ''Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01'' (CRWMS M&O 2000b). The purpose of ICN 03 is to record changes in data input status due to data qualification and verification activities. These work plans describe the scope, objectives, tasks, methodology, and implementing procedures for model construction. The work scope for this activity consists of the following: (1) Conversion of the input data (laboratory measured porosity data, x-ray diffraction mineralogy, petrophysical calculations of bound water, and petrophysical calculations of porosity) for each borehole into stratigraphic coordinates; (2) Re-sampling and merging of data sets; (3) Development of geostatistical simulations of porosity; (4

  11. Automated protein structure modeling with SWISS-MODEL Workspace and the Protein Model Portal.

    Science.gov (United States)

    Bordoli, Lorenza; Schwede, Torsten

    2012-01-01

    Comparative protein structure modeling is a computational approach to build three-dimensional structural models for proteins using experimental structures of related protein family members as templates. Regular blind assessments of modeling accuracy have demonstrated that comparative protein structure modeling is currently the most reliable technique to model protein structures. Homology models are often sufficiently accurate to substitute for experimental structures in a wide variety of applications. Since the usefulness of a model for a specific application is determined by its accuracy, model quality estimation is an essential component of protein structure prediction. Comparative protein modeling has become a routine approach in many areas of life science research, since fully automated modeling systems also allow nonexperts to build reliable models. In this chapter, we describe practical approaches for automated protein structure modeling with SWISS-MODEL Workspace and the Protein Model Portal.

  12. Geologic Framework Model Analysis Model Report

    Energy Technology Data Exchange (ETDEWEB)

    R. Clayton

    2000-12-19

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the

  13. Geologic Framework Model Analysis Model Report

    International Nuclear Information System (INIS)

    Clayton, R.

    2000-01-01

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M and O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and

  14. Model(ing) Law

    DEFF Research Database (Denmark)

    Carlson, Kerstin

    The International Criminal Tribunal for the former Yugoslavia (ICTY) was the first and most celebrated of a wave of international criminal tribunals (ICTs) built in the 1990s designed to advance liberalism through international criminal law. Model(ing) Justice examines the case law of the ICTY...

  15. Comparisons of Multilevel Modeling and Structural Equation Modeling Approaches to Actor-Partner Interdependence Model.

    Science.gov (United States)

    Hong, Sehee; Kim, Soyoung

    2018-01-01

    There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: multilevel modeling (hierarchical linear modeling) and structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how the two approaches work differently. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. The multilevel modeling and the structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions on measurement errors and factor loadings, rendering better model fit indices.
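
    In the multilevel formulation, the actor and partner predictors enter as fixed effects while a random intercept per dyad absorbs the nonindependence of partners. A minimal sketch using the statsmodels mixed-effects API on synthetic dyadic data (all values generated for illustration; not the marital conflict data of the article):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_dyads = 50
dyad = np.repeat(np.arange(n_dyads), 2)
x = rng.standard_normal(2 * n_dyads)
partner_x = x.reshape(-1, 2)[:, ::-1].ravel()             # swap within dyads
dyad_effect = np.repeat(rng.standard_normal(n_dyads), 2)  # shared intercept
y = 0.5 * x + 0.3 * partner_x + dyad_effect + 0.5 * rng.standard_normal(2 * n_dyads)

data = pd.DataFrame({"dyad": dyad, "conflict": y,
                     "actor_x": x, "partner_x": partner_x})

# Multilevel APIM: actor and partner effects as fixed effects, plus a
# random intercept per dyad capturing the nonindependence of partners.
result = smf.mixedlm("conflict ~ actor_x + partner_x", data,
                     groups=data["dyad"]).fit()
print(result.params)  # estimates should be near the true 0.5 and 0.3
```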

  16. Translating building information modeling to building energy modeling using model view definition.

    Science.gov (United States)

    Jeong, WoonSeong; Kim, Jong Bum; Clayton, Mark J; Haberl, Jeff S; Yan, Wei

    2014-01-01

    This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.
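
    The object-mapping declared by such an MVD can be pictured as a translation from a BIM object to the parameters a Modelica-based energy model consumes. The sketch below invents a wall-to-U-value mapping purely for illustration; it is not the paper's Revit2Modelica interface, and all names are hypothetical.

```python
# Hypothetical BIM wall object as it might arrive from a BIM export.
bim_wall = {
    "type": "Wall",
    "area_m2": 12.5,
    "layers": [
        {"material": "brick",      "thickness_m": 0.10, "k_W_mK": 0.72},
        {"material": "insulation", "thickness_m": 0.08, "k_W_mK": 0.035},
    ],
}

def wall_to_bem(wall):
    """Map a BIM wall to BEM parameters (area and overall U-value)."""
    r_total = sum(l["thickness_m"] / l["k_W_mK"] for l in wall["layers"])
    return {"A": wall["area_m2"], "U": 1.0 / r_total}

def to_modelica_binding(name, params):
    """Render a Modelica-style parameter binding for the translated wall."""
    items = ", ".join(f"{k}={v:.3g}" for k, v in params.items())
    return f"WallConstruction {name}({items});"

print(to_modelica_binding("wallSouth", wall_to_bem(bim_wall)))
```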

  17. Translating Building Information Modeling to Building Energy Modeling Using Model View Definition

    Directory of Open Access Journals (Sweden)

    WoonSeong Jeong

    2014-01-01

    Full Text Available This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using the LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.

  18. Models Archive and ModelWeb at NSSDC

    Science.gov (United States)

    Bilitza, D.; Papitashvili, N.; King, J. H.

    2002-05-01

    In addition to its large data holdings, NASA's National Space Science Data Center (NSSDC) also maintains an archive of space physics models for public use (ftp://nssdcftp.gsfc.nasa.gov/models/). The more than 60 model entries cover a wide range of parameters from the atmosphere, to the ionosphere, to the magnetosphere, to the heliosphere. The models are primarily empirical models developed by the respective model authors based on long data records from ground and space experiments. An online model catalog (http://nssdc.gsfc.nasa.gov/space/model/) provides information about these and other models and links to the model software if available. We will briefly review the existing model holdings and highlight some of their uses and users. In response to a growing need by the user community, NSSDC began to develop web interfaces for the most frequently requested models. These interfaces enable users to compute and plot model parameters online for the specific conditions that they are interested in. Currently included in the ModelWeb system (http://nssdc.gsfc.nasa.gov/space/model/) are the following models: the International Reference Ionosphere (IRI) model, the Mass Spectrometer Incoherent Scatter (MSIS) E90 model, the International Geomagnetic Reference Field (IGRF), and the AP/AE-8 models for the radiation belt electrons and protons. User accesses to both systems have been steadily increasing over the last years, with occasional spikes prior to large scientific meetings. The current monthly rate is between 5,000 and 10,000 accesses for either system; in February 2002, 13,872 accesses were recorded to the ModelWeb and 7,092 accesses to the models archive.

  19. Modelling the models

    CERN Multimedia

    Anaïs Schaeffer

    2012-01-01

    By analysing the production of mesons in the forward region of LHC proton-proton collisions, the LHCf collaboration has provided key information needed to calibrate extremely high-energy cosmic ray models. [Figure caption: Average transverse momentum (pT) as a function of rapidity loss ∆y. Black dots represent LHCf data and red diamonds represent SPS experiment UA7 results. Predictions of hadronic interaction models are shown by open boxes (sibyll 2.1), open circles (qgsjet II-03), and open triangles (epos 1.99); among these, epos 1.99 shows the best overall agreement with the LHCf data.] LHCf is dedicated to the measurement of neutral particles emitted at extremely small angles in the very forward region of LHC collisions. Two imaging calorimeters – Arm1 and Arm2 – take data 140 m either side of the ATLAS interaction point. “The physics goal of this type of analysis is to provide data for calibrating the hadron interaction models…

  20. Model Manipulation for End-User Modelers

    DEFF Research Database (Denmark)

    Acretoaie, Vlad

    End-user modelers are domain experts who create and use models as part of their work. They are typically not Software Engineers, and have little or no programming and meta-modeling experience. However, using model manipulation languages developed in the context of Model-Driven Engineering often requires such experience. These languages are therefore only used by a small subset of the modelers that could, in theory, benefit from them. The goals of this thesis are to substantiate this observation, introduce the concepts and tools required to overcome it, and provide empirical evidence in support. The VM* languages proposed in the thesis let end-user modelers specify model queries, constraints, and transformations using their modeling notation and editor of choice. The VM* languages are implemented via a single execution engine, the VM* Runtime, built on top of the Henshin graph-based transformation engine. This approach combines the benefits of flexibility, maturity, and formality.

  1. Modeling energy-economy interactions using integrated models

    International Nuclear Information System (INIS)

    Uyterlinde, M.A.

    1994-06-01

    Integrated models are defined as economic energy models that consist of several submodels, either coupled by an interface module or embedded in one large model. These models can be used for energy policy analysis. Using integrated models yields the following benefits. They provide a framework in which energy-economy interactions can be analyzed better than in stand-alone models. Integrated models can represent both energy sector technological details and the behaviour of the market and the role of prices. Furthermore, the combination of modeling methodologies in one model can compensate for the weaknesses of one approach with the strengths of another. These advantages motivated this survey of the class of integrated models. The purpose of this literature survey was therefore to collect and present information on integrated models. To carry out this task, several goals were identified. The first goal was to give an overview of what is reported on these models in general. The second was to find and describe examples of such models. Other goals were to find out what kinds of models were used as component models, and to examine the linkage methodology. Solution methods and their convergence properties were also a subject of interest. The report has the following structure. In chapter 2, a 'conceptual framework' is given. In chapter 3, a number of integrated models are described, with a table presenting a complete overview of all the models described. Finally, in chapter 4, the report is summarized, and conclusions are drawn regarding the advantages and drawbacks of integrated models. 8 figs., 29 refs
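
    Coupling submodels through an interface module typically reduces to a fixed-point iteration: each submodel is solved with the other's latest outputs until the exchanged quantities stop changing. A toy sketch of that convergence loop, with invented equations and coefficients:

```python
# Toy Gauss-Seidel coupling of an energy submodel and an economy submodel,
# the basic convergence loop behind many integrated models.

def energy_model(gdp):
    """Energy demand rises with economic activity."""
    return 100.0 + 0.4 * gdp

def economy_model(energy_demand):
    """Higher energy demand raises prices and damps GDP slightly."""
    price = 20.0 + 0.05 * energy_demand
    return 1000.0 - 2.0 * price

gdp, tol = 1000.0, 1e-9
for iteration in range(100):
    demand = energy_model(gdp)       # energy submodel uses latest GDP
    new_gdp = economy_model(demand)  # economy submodel uses latest demand
    if abs(new_gdp - gdp) < tol:     # stop when exchanged quantity settles
        break
    gdp = new_gdp

print(f"converged after {iteration} iterations: "
      f"GDP={gdp:.2f}, demand={demand:.2f}")
```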

  2. DOE hearing on the draft area recommendation report for the Crystalline Repository Project, Patrick Henry High School, Ashland, Virginia: Session 1, April 7, 1986: Transcript of proceedings

    International Nuclear Information System (INIS)

    1986-01-01

    This document contains a transcript of the comments of 35 witnesses. These meetings were held to hear public comments regarding the selection of a site for a second high-level nuclear waste repository

  3. On the role of model structure in hydrological modeling : Understanding models

    NARCIS (Netherlands)

    Gharari, S.

    2016-01-01

    Modeling is an essential part of the science of hydrology. Models enable us to formulate what we know and perceive from the real world into a neat package. Rainfall-runoff models are abstract simplifications of how a catchment works. Within the research field of scientific rainfall-runoff modeling,

  4. Evolution of computational models in BioModels Database and the Physiome Model Repository.

    Science.gov (United States)

    Scharm, Martin; Gebhardt, Tom; Touré, Vasundra; Bagnacani, Andrea; Salehzadeh-Yazdi, Ali; Wolkenhauer, Olaf; Waltemath, Dagmar

    2018-04-12

    A useful model is one that is being (re)used. The development of a successful model does not finish with its publication. During reuse, models are being modified, i.e. expanded, corrected, and refined. Even small changes in the encoding of a model can, however, significantly affect its interpretation. Our motivation for the present study is to identify changes in models and make them transparent and traceable. We analysed 13734 models from BioModels Database and the Physiome Model Repository. For each model, we studied the frequencies and types of updates between its first and latest release. To demonstrate the impact of changes, we explored the history of a Repressilator model in BioModels Database. We observed continuous updates in the majority of models. Surprisingly, even the early models are still being modified. We furthermore detected that many updates target annotations, which improves the information one can gain from models. To support the analysis of changes in model repositories we developed MoSt, an online tool for visualisations of changes in models. The scripts used to generate the data and figures for this study are available from GitHub https://github.com/binfalse/BiVeS-StatsGenerator and as a Docker image at https://hub.docker.com/r/binfalse/bives-statsgenerator/ . The website https://most.bio.informatik.uni-rostock.de/ provides interactive access to model versions and their evolutionary statistics. The reuse of models is still impeded by a lack of trust and documentation. A detailed and transparent documentation of all aspects of the model, including its provenance, will improve this situation. Knowledge about a model's provenance can avoid the repetition of mistakes that others already faced. More insights are gained into how the system evolves from initial findings to a profound understanding. We argue that it is the responsibility of the maintainers of model repositories to offer transparent model provenance to their users.

  5. Model documentation report: Transportation sector model of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    1994-03-01

    This report documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Transportation Model (TRAN). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated by the model. This document serves three purposes. First, it is a reference document providing a detailed description of TRAN for model analysts, users, and the public. Second, this report meets the legal requirements of the Energy Information Administration (EIA) to provide adequate documentation in support of its statistical and forecast reports (Public Law 93-275, 57(b)(1)). Third, it permits continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements.

  6. Modeling complexes of modeled proteins.

    Science.gov (United States)

    Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A

    2017-03-01

    Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å Cα RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and the template-based docking is much less sensitive to inaccuracies of protein models than the free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.

  7. Modeling styles in business process modeling

    NARCIS (Netherlands)

    Pinggera, J.; Soffer, P.; Zugal, S.; Weber, B.; Weidlich, M.; Fahland, D.; Reijers, H.A.; Mendling, J.; Bider, I.; Halpin, T.; Krogstie, J.; Nurcan, S.; Proper, E.; Schmidt, R.; Soffer, P.; Wrycza, S.

    2012-01-01

    Research on quality issues of business process models has recently begun to explore the process of creating process models. As a consequence, the question arises whether different ways of creating process models exist. In this vein, we observed 115 students engaged in the act of modeling, recording

  8. Comparison: Binomial model and Black Scholes model

    Directory of Open Access Journals (Sweden)

    Amir Ahmad Dar

    2018-03-01

    Full Text Available The Binomial Model and the Black-Scholes Model are popular methods used to solve option pricing problems. The Binomial Model is a simple statistical method, while the Black-Scholes model requires the solution of a stochastic differential equation. Pricing European call and put options is a demanding task for actuaries. The main goal of this study is to compare the Binomial model and the Black-Scholes model using two statistical tests - the t-test and Tukey's test - at one period. Finally, the results showed that there is no significant difference between the means of the European option prices obtained from the two models.
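
    To make the comparison concrete, the following is a minimal sketch (not taken from the paper) that prices the same European call with a one-period binomial tree and with the Black-Scholes formula; the spot, strike, rate, volatility, and maturity values are illustrative assumptions.

        # Sketch: one-period binomial price vs. Black-Scholes price for a
        # European call. All parameter values are illustrative assumptions.
        import math
        from statistics import NormalDist

        S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0  # spot, strike, rate, vol, maturity

        # One-period binomial model (Cox-Ross-Rubinstein parameterisation)
        u = math.exp(sigma * math.sqrt(T))           # up factor
        d = 1.0 / u                                  # down factor
        q = (math.exp(r * T) - d) / (u - d)          # risk-neutral up probability
        binomial_call = math.exp(-r * T) * (
            q * max(S0 * u - K, 0.0) + (1 - q) * max(S0 * d - K, 0.0))

        # Black-Scholes closed-form price
        d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
        d2 = d1 - sigma * math.sqrt(T)
        N = NormalDist().cdf
        bs_call = S0 * N(d1) - K * math.exp(-r * T) * N(d2)

        print(f"one-period binomial: {binomial_call:.4f}")
        print(f"Black-Scholes:       {bs_call:.4f}")

    With a single period the two prices differ noticeably; as the number of binomial steps grows, the binomial price converges to the Black-Scholes value, which is what makes a one-period comparison of the two models statistically interesting.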

  9. Modelling in Business Model design

    NARCIS (Netherlands)

    Simonse, W.L.

    2013-01-01

    It appears that business model design might not always produce a design or model as the expected result. However when designers are involved, a visual model or artefact is produced. To assist strategic managers in thinking about how they can act, the designers challenge is to combine strategy and

  10. Modelling SDL, Modelling Languages

    Directory of Open Access Journals (Sweden)

    Michael Piefel

    2007-02-01

    Full Text Available Today's software systems are too complex to implement and model using only one language. As a result, modern software engineering uses different languages for different levels of abstraction and different system aspects. Handling an increasing number of related or integrated languages is thus the most challenging task in the development of tools. We use object-oriented metamodelling to describe languages. Object orientation allows us to derive abstract reusable concept definitions (concept classes) from existing languages. This language definition technique concentrates on semantic abstractions rather than syntactical peculiarities. We present a set of common concept classes that describe structure, behaviour, and data aspects of high-level modelling languages. Our models contain syntax modelling using the OMG MOF as well as static semantic constraints written in OMG OCL. We derive metamodels for subsets of SDL and UML from these common concepts, and we show for parts of these languages that they can be modelled and related to each other through the same abstract concepts.

  11. Model integration and a theory of models

    OpenAIRE

    Dolk, Daniel R.; Kottemann, Jeffrey E.

    1993-01-01

    Model integration extends the scope of model management to include the dimension of manipulation as well. This invariably leads to comparisons with database theory. Model integration is viewed from four perspectives: Organizational, definitional, procedural, and implementational. Strategic modeling is discussed as the organizational motivation for model integration. Schema and process integration are examined as the logical and manipulation counterparts of model integr...

  12. Integrated Site Model Process Model Report

    International Nuclear Information System (INIS)

    Booth, T.

    2000-01-01

    The Integrated Site Model (ISM) provides a framework for discussing the geologic features and properties of Yucca Mountain, which is being evaluated as a potential site for a geologic repository for the disposal of nuclear waste. The ISM is important to the evaluation of the site because it provides 3-D portrayals of site geologic, rock property, and mineralogic characteristics and their spatial variabilities. The ISM is not a single discrete model; rather, it is a set of static representations that provide three-dimensional (3-D), computer representations of site geology, selected hydrologic and rock properties, and mineralogic-characteristics data. These representations are manifested in three separate model components of the ISM: the Geologic Framework Model (GFM), the Rock Properties Model (RPM), and the Mineralogic Model (MM). The GFM provides a representation of the 3-D stratigraphy and geologic structure. Based on the framework provided by the GFM, the RPM and MM provide spatial simulations of the rock and hydrologic properties, and mineralogy, respectively. Functional summaries of the component models and their respective output are provided in Section 1.4. Each of the component models of the ISM considers different specific aspects of the site geologic setting. Each model was developed using unique methodologies and inputs, and the determination of the modeled units for each of the components is dependent on the requirements of that component. Therefore, while the ISM represents the integration of the rock properties and mineralogy into a geologic framework, the discussion of ISM construction and results is most appropriately presented in terms of the three separate components. This Process Model Report (PMR) summarizes the individual component models of the ISM (the GFM, RPM, and MM) and describes how the three components are constructed and combined to form the ISM

  13. Concept Modeling vs. Data modeling in Practice

    DEFF Research Database (Denmark)

    Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2015-01-01

    This chapter shows the usefulness of terminological concept modeling as a first step in data modeling. First, we introduce terminological concept modeling with terminological ontologies, i.e. concept systems enriched with characteristics modeled as feature specifications. This enables a formal...... account of the inheritance of characteristics and allows us to introduce a number of principles and constraints which render concept modeling more coherent than earlier approaches. Second, we explain how terminological ontologies can be used as the basis for developing conceptual and logical data models....... We also show how to map from the various elements in the terminological ontology to elements in the data models, and explain the differences between the models. Finally the usefulness of terminological ontologies as a prerequisite for IT development and data modeling is illustrated with examples from...

  14. Automated Protein Structure Modeling with SWISS-MODEL Workspace and the Protein Model Portal

    OpenAIRE

    Bordoli, Lorenza; Schwede, Torsten

    2012-01-01

    Comparative protein structure modeling is a computational approach to build three-dimensional structural models for proteins using experimental structures of related protein family members as templates. Regular blind assessments of modeling accuracy have demonstrated that comparative protein structure modeling is currently the most reliable technique to model protein structures. Homology models are often sufficiently accurate to substitute for experimental structures in a wide variety of appl...

  15. Model documentation report: Transportation sector model of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-02-01

    Over the past year, several modifications have been made to the NEMS Transportation Model, incorporating greater levels of detail and analysis in modules previously represented in the aggregate or under a profusion of simplifying assumptions. This document is intended to amend those sections of the Model Documentation Report (MDR) which describe these superseded modules. Significant changes have been implemented in the LDV Fuel Economy Model, the Alternative Fuel Vehicle Model, the LDV Fleet Module, and the Highway Freight Model. The relevant sections of the MDR have been extracted from the original document, amended, and are presented in the following pages. A brief summary of the modifications follows: In the Fuel Economy Model, modifications have been made which permit the user to employ more optimistic assumptions about the commercial viability and impact of selected technological improvements. This model also explicitly calculates the fuel economy of an array of alternative fuel vehicles (AFV's) which are subsequently used in the estimation of vehicle sales. In the Alternative Fuel Vehicle Model, the results of the Fuel Economy Model have been incorporated, and the program flows have been modified to reflect that fact. In the Light Duty Vehicle Fleet Module, the sales of vehicles to fleets of various size are endogenously calculated in order to provide a more detailed estimate of the impacts of EPACT legislation on the sales of AFV's to fleets. In the Highway Freight Model, the previous aggregate estimation has been replaced by a detailed Freight Truck Stock Model, where travel patterns, efficiencies, and energy intensities are estimated by industrial grouping. Several appendices are provided at the end of this document, containing data tables and supplementary descriptions of the model development process which are not integral to an understanding of the overall model structure.

  16. The IMACLIM model; Le modele IMACLIM

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-07-01

    This document provides annexes to the IMACLIM model, offering an updated description of IMACLIM, a model that allows the design of a tool for evaluating greenhouse gas reduction policies. The model is described in a version coupled with POLES, a technical and economic model of the energy industry. Notations, equations, sources, processing and specifications are proposed and detailed. (A.L.B.)

  17. Modelling Practice

    DEFF Research Database (Denmark)

    Cameron, Ian; Gani, Rafiqul

    2011-01-01

    This chapter deals with the practicalities of building, testing, deploying and maintaining models. It gives specific advice for each phase of the modelling cycle. To do this, a modelling framework is introduced which covers: problem and model definition; model conceptualization; model data...... requirements; model construction; model solution; model verification; model validation and finally model deployment and maintenance. Within the adopted methodology, each step is discussed through the consideration of key issues and questions relevant to the modelling activity. Practical advice, based on many...

  18. Leadership Models.

    Science.gov (United States)

    Freeman, Thomas J.

    This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…

  19. Cognitive models embedded in system simulation models

    International Nuclear Information System (INIS)

    Siegel, A.I.; Wolf, J.J.

    1982-01-01

    If we are to discuss and consider cognitive models, we must first come to grips with two questions: (1) What is cognition? (2) What is a model? Presumably, the answers to these questions can provide a basis for defining a cognitive model. Accordingly, this paper first places these two questions into perspective. Then, cognitive models are set within the context of computer simulation models, and a number of computer simulations of cognitive processes are described. Finally, pervasive issues are discussed vis-a-vis cognitive modeling in the computer simulation context

  20. Better models are more effectively connected models

    Science.gov (United States)

    Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John

    2016-04-01

    The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. The discussion will focus on the different approaches through which connectivity

  1. Constructive Epistemic Modeling: A Hierarchical Bayesian Model Averaging Method

    Science.gov (United States)

    Tsai, F. T. C.; Elshall, A. S.

    2014-12-01

    Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using the hierarchical Bayesian model averaging (HBMA) method [1], this study shows that segregating different uncertain model components through a BMA tree of posterior model probabilities, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool [2]. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater modeling of a siliciclastic aquifer-fault system. The sources of uncertainty considered are from geological architecture, formation dip, boundary conditions and model parameters. The study shows that the HBMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models. [1] Tsai, F. T.-C., and A. S. Elshall (2013), Hierarchical Bayesian model averaging for hydrostratigraphic modeling: Uncertainty segregation and comparative evaluation. Water Resources Research, 49, 5520-5536, doi:10.1002/wrcr.20428. [2] Elshall, A.S., and F. T.-C. Tsai (2014). Constructive epistemic modeling of groundwater flow with geological architecture and boundary condition uncertainty under Bayesian paradigm, Journal of Hydrology, 517, 105-119, doi: 10.1016/j.jhydrol.2014.05.027.
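
    The segregation of within-model and between-model variance described above follows the standard law of total variance under Bayesian model averaging; in assumed standard notation (not quoted from the cited papers), with posterior model probabilities p(M_k | D) and predicted quantity Delta:

        % BMA prediction and variance decomposition (standard notation, assumed)
        E[\Delta \mid D] = \sum_k p(M_k \mid D)\, E[\Delta \mid M_k, D]
        \operatorname{Var}[\Delta \mid D]
          = \sum_k p(M_k \mid D)\, \operatorname{Var}[\Delta \mid M_k, D]                      % within-model variance
          + \sum_k p(M_k \mid D)\, \bigl( E[\Delta \mid M_k, D] - E[\Delta \mid D] \bigr)^2    % between-model variance

    The HBMA method applies this decomposition recursively over a tree of uncertain model components, so each level of the hierarchy contributes its own between-model variance term.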

  2. Models and role models

    NARCIS (Netherlands)

    ten Cate, J.M.

    2015-01-01

    Developing experimental models to understand dental caries has been the theme in our research group. Our first, the pH-cycling model, was developed to investigate the chemical reactions in enamel or dentine, which lead to dental caries. It aimed to leverage our understanding of the fluoride mode of

  3. Multiscale musculoskeletal modelling, data–model fusion and electromyography-informed modelling

    Science.gov (United States)

    Zhang, J.; Heidlauf, T.; Sartori, M.; Besier, T.; Röhrle, O.; Lloyd, D.

    2016-01-01

    This paper proposes methods and technologies that advance the state of the art for modelling the musculoskeletal system across the spatial and temporal scales; and storing these using efficient ontologies and tools. We present population-based modelling as an efficient method to rapidly generate individual morphology from only a few measurements and to learn from the ever-increasing supply of imaging data available. We present multiscale methods for continuum muscle and bone models; and efficient mechanostatistical methods, both continuum and particle-based, to bridge the scales. Finally, we examine both the importance that muscles play in bone remodelling stimuli and the latest muscle force prediction methods that use electromyography-assisted modelling techniques to compute musculoskeletal forces that best reflect the underlying neuromuscular activity. Our proposal is that, in order to have a clinically relevant virtual physiological human, (i) bone and muscle mechanics must be considered together; (ii) models should be trained on population data to permit rapid generation and use underlying principal modes that describe both muscle patterns and morphology; and (iii) these tools need to be available in an open-source repository so that the scientific community may use, personalize and contribute to the database of models. PMID:27051510

  4. Atmospheric statistical dynamic models. Model performance: the Lawrence Livermore Laboratoy Zonal Atmospheric Model

    International Nuclear Information System (INIS)

    Potter, G.L.; Ellsaesser, H.W.; MacCracken, M.C.; Luther, F.M.

    1978-06-01

    Results from the zonal model indicate quite reasonable agreement with observation in terms of the parameters and processes that influence the radiation and energy balance calculations. The model produces zonal statistics similar to those from general circulation models, and has also been shown to produce similar responses in sensitivity studies. Further studies of model performance are planned, including: comparison with July data; comparison of temperature and moisture transport and wind fields for winter and summer months; and a tabulation of atmospheric energetics. Based on these preliminary performance studies, however, it appears that the zonal model can be used in conjunction with more complex models to help unravel the problems of understanding the processes governing present climate and climate change. As can be seen in the subsequent paper on model sensitivity studies, in addition to reduced cost of computation, the zonal model facilitates analysis of feedback mechanisms and simplifies analysis of the interactions between processes

  5. Gradient-based model calibration with proxy-model assistance

    Science.gov (United States)

    Burrows, Wesley; Doherty, John

    2016-02-01

    Use of a proxy model in gradient-based calibration and uncertainty analysis of a complex groundwater model with large run times and problematic numerical behaviour is described. The methodology is general, and can be used with models of all types. The proxy model is based on a series of analytical functions that link all model outputs used in the calibration process to all parameters requiring estimation. In enforcing history-matching constraints during the calibration and post-calibration uncertainty analysis processes, the proxy model is run for the purposes of populating the Jacobian matrix, while the original model is run when testing parameter upgrades; the latter process is readily parallelized. Use of a proxy model in this fashion dramatically reduces the computational burden of complex model calibration and uncertainty analysis. At the same time, the effect of model numerical misbehaviour on calculation of local gradients is mitigated, thus allowing access to the benefits of gradient-based analysis where lack of integrity in finite-difference derivatives calculation would otherwise have impeded such access. Construction of a proxy model, and its subsequent use in calibration of a complex model, and in analysing the uncertainties of predictions made by that model, is implemented in the PEST suite.
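
    The division of labour described above - cheap proxy runs to populate the Jacobian, expensive model runs only to test parameter upgrades - can be sketched as follows. This is an illustration of the idea under stated assumptions (a generic model(p) simulator, a proxy(p) surrogate, and a Levenberg-Marquardt style upgrade), not the PEST implementation.

        # Sketch of proxy-assisted gradient-based calibration (not the PEST code).
        # Assumptions: model(p) is the expensive simulator, proxy(p) a cheap
        # surrogate of it, and obs the observations being history-matched.
        import numpy as np

        def jacobian(f, p, h=1e-6):
            """Finite-difference Jacobian of f at p, run on the cheap proxy only."""
            f0 = f(p)
            J = np.empty((f0.size, p.size))
            for i in range(p.size):
                dp = np.zeros_like(p)
                dp[i] = h
                J[:, i] = (f(p + dp) - f0) / h
            return J

        def calibrate(model, proxy, obs, p, iters=20, lam=1e-3):
            phi = np.sum((model(p) - obs) ** 2)             # expensive objective
            for _ in range(iters):
                J = jacobian(proxy, p)                      # proxy fills the Jacobian
                r = proxy(p) - obs
                # Levenberg-Marquardt upgrade from the proxy linearisation
                step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
                phi_try = np.sum((model(p + step) - obs) ** 2)  # real-model test run
                if phi_try < phi:                           # accept only real improvement
                    p, phi, lam = p + step, phi_try, lam * 0.5
                else:
                    lam *= 10.0                             # damp harder and retry
            return p, phi

    Because each candidate upgrade needs only a single expensive run, and candidate tests are independent of one another, the model(p + step) evaluations are the part that parallelizes readily, as the abstract notes.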

  6. Spike Neural Models Part II: Abstract Neural Models

    Directory of Open Access Journals (Sweden)

    Johnson, Melissa G.

    2018-02-01

    Full Text Available Neurons are complex cells that require a lot of time and resources to model completely. In spiking neural networks (SNN), though, not all that complexity is required. Therefore simple, abstract models are often used. These models save time, use fewer computer resources, and are easier to understand. This tutorial presents two such models: Izhikevich's model, which is biologically realistic in the resulting spike trains but not in the parameters, and the Leaky Integrate and Fire (LIF) model, which is not biologically realistic but does quickly and easily integrate input to produce spikes. Izhikevich's model is based on the Hodgkin-Huxley model but simplified such that it uses only two differential equations and four parameters to produce various realistic spike patterns. LIF is based on a standard electrical circuit and contains one equation. Either of these two models, or any of the many other models in the literature, can be used in an SNN. Choosing a neural model is an important task that depends on the goal of the research and the resources available. Once a model is chosen, network decisions such as connectivity, delay, and sparseness need to be made. Understanding neural models and how they are incorporated into the network is the first step in creating an SNN.
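
    Both models are compact enough to state in a few lines of code. The sketch below uses the textbook Izhikevich constants (0.04, 5, 140) with regular-spiking parameters; the LIF time constant, threshold, and input current are illustrative assumptions, and simple forward-Euler integration is used throughout.

        # Sketch: leaky integrate-and-fire (LIF) and Izhikevich neurons driven
        # by a constant input current. Parameter values are textbook/illustrative.
        dt, T, I = 0.1, 200.0, 20.0                 # ms step, duration, input current
        steps = int(T / dt)

        # --- LIF: tau * dv/dt = -(v - v_rest) + R*I, hard reset on threshold ---
        tau, v_rest, v_reset, v_thresh, R = 10.0, -65.0, -65.0, -50.0, 1.0
        v = v_rest
        lif_spikes = []
        for k in range(steps):
            v += dt / tau * (-(v - v_rest) + R * I)
            if v >= v_thresh:                       # threshold crossing -> spike
                lif_spikes.append(k * dt)
                v = v_reset

        # --- Izhikevich (regular spiking): two ODEs, four parameters a, b, c, d ---
        a, b, c, d = 0.02, 0.2, -65.0, 8.0
        v, u = c, b * c
        izh_spikes = []
        for k in range(steps):
            v += dt * (0.04 * v ** 2 + 5 * v + 140 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:                           # spike cutoff and reset
                izh_spikes.append(k * dt)
                v, u = c, u + d

        print(f"LIF spikes: {len(lif_spikes)}, Izhikevich spikes: {len(izh_spikes)}")

    The contrast the tutorial draws is visible here: the LIF trace is a bare charge-and-reset ramp, while the Izhikevich parameters (a, b, c, d) can be retuned to produce bursting, chattering, and other realistic firing patterns without changing the two equations.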

  7. Population balance models: a useful complementary modelling framework for future WWTP modelling

    DEFF Research Database (Denmark)

    Nopens, Ingmar; Torfs, Elena; Ducoste, Joel

    2015-01-01

    Population balance models (PBMs) represent a powerful modelling framework for the description of the dynamics of properties that are characterised by distributions. This distribution of properties under transient conditions has been demonstrated in many chemical engineering applications. Modelling...

  8. From Product Models to Product State Models

    DEFF Research Database (Denmark)

    Larsen, Michael Holm

    1999-01-01

    A well-known technology designed to handle product data is Product Models. Product Models are in their current form not able to handle all types of product state information. Hence, the concept of a Product State Model (PSM) is proposed. The PSM and in particular how to model a PSM is the Research...

  9. North American Carbon Project (NACP) Regional Model-Model and Model-Data Intercomparison Project

    Science.gov (United States)

    Huntzinger, D. N.; Post, W. M.; Jacobson, A. R.; Cook, R. B.

    2009-05-01

    Available observations are localized and widely separated in both space and time, so we depend heavily on models to characterize, understand, and predict carbon fluxes at regional and global scales. The results from each model differ because they use different approaches (forward vs. inverse), modeling strategies (detailed process, statistical, observation based), process representation, boundary conditions, initial conditions, and driver data. To investigate these differences we conducted a model-model and model-data comparison using available forward ecosystem model and atmospheric inverse output, along with regional scale inventory data. Forward or "bottom-up" models typically estimate carbon fluxes through a set of physiological relationships, and are based on our current mechanistic understanding of how carbon is exchanged within ecosystems. Inverse or "top-down" analyses use measured atmospheric concentrations of CO2, coupled with an atmospheric transport model to infer surface flux distributions. Although bottom-up models do fairly well at reproducing measured fluxes (i.e., net ecosystem exchange) at a given location, they vary considerably in their estimates of carbon flux over regional or continental scales, suggesting difficulty in scaling mechanistic relationships to large areas and/or timescales. Conversely, top-down inverse models predict fluxes that are quantitatively consistent with atmospheric measurements, suggesting that they are capturing large scale variability in flux quite well, but offer limited insights into the processes controlling this variability and how fluxes vary at fine spatial scales. The analyses focused on identifying and quantifying spatial and temporal patterns of carbon fluxes among the models; quantifying across-model variability, as well as comparing simulated or estimated surface fluxes and biomass to observed values at regional to continental scales for the period 2000-2005. The analysis focused on the following three

  10. Population Balance Models: A useful complementary modelling framework for future WWTP modelling

    DEFF Research Database (Denmark)

    Nopens, Ingmar; Torfs, Elena; Ducoste, Joel

    2014-01-01

    Population Balance Models (PBMs) represent a powerful modelling framework for the description of the dynamics of properties that are characterised by statistical distributions. This has been demonstrated in many chemical engineering applications. Modelling efforts of several current and future unit...

  11. A Metric Model for Measuring the Flexibility of Business Process Models (Model Metric untuk Mengukur Fleksibilitas Model Proses Bisnis)

    Directory of Open Access Journals (Sweden)

    Endang Wahyu Pamungkas

    2014-10-01

    Full Text Available Abstract: Business organizations around the world now make extensive use of digital information systems to provide an understanding of the business process management they operate. The use of Enterprise Resource Planning (ERP) systems is one example of technology in business process management. Through such systems a company can build and develop its business processes. In addition, the company can quickly adapt its business processes to changes that occur as needs and information grow, market conditions shift, or policies change. Given how frequently business processes change, the flexibility of the process models that are built must be improved. Supporting that improvement naturally requires a model for measuring the degree of flexibility of a business process model. Analysts can then use such a model to make comparisons and so obtain the business process model that is most flexible and best suited to the company. This can be analysed by drawing on the flexibility aspects examined in previous studies. This paper investigates the flexibility aspects of business process models in order to produce a metric model that can quantify the flexibility of a business process model. The metric model produced in this study is able to compute the flexibility of business process models quantitatively. Keywords: ERP, flexibility, metadata, metric model, business process model, variation. Abstract: Recently, business organizations in the world are making use of digital information systems to provide an understanding of the business process management in which they live. The use of an Enterprise Resource Planning (ERP) system is an example of technology in business process management. Through this system, companies can build and develop business processes and can quickly adjust

  12. Modelling bankruptcy prediction models in Slovak companies

    Directory of Open Access Journals (Sweden)

    Kovacova Maria

    2017-01-01

    Full Text Available Intensive research by academics and practitioners has addressed models for bankruptcy prediction and credit risk management. Despite numerous studies forecasting bankruptcy with traditional statistical techniques (e.g. discriminant analysis and logistic regression) and early artificial intelligence models (e.g. artificial neural networks), there is a trend of transition to machine learning models (support vector machines, bagging, boosting, and random forests) to predict bankruptcy one year prior to the event. Comparing the performance of this unconventional approach with results obtained by discriminant analysis, logistic regression, and neural network applications, it has been found that bagging, boosting, and random forest models outperform the other techniques, and that prediction accuracy in the testing sample improves when additional variables are included. On the other hand, the prediction accuracy of old and well-known bankruptcy prediction models is quite high. Therefore, we aim to analyse these older models on a dataset of Slovak companies to validate their prediction ability in specific conditions. Furthermore, these models will be remodelled according to new trends by calculating the influence of the elimination of selected variables on their overall prediction ability.

  13. The ModelCC Model-Driven Parser Generator

    Directory of Open Access Journals (Sweden)

    Fernando Berzal

    2015-01-01

    Full Text Available Syntax-directed translation tools require the specification of a language by means of a formal grammar. This grammar must conform to the specific requirements of the parser generator to be used. This grammar is then annotated with semantic actions for the resulting system to perform its desired function. In this paper, we introduce ModelCC, a model-based parser generator that decouples language specification from language processing, avoiding some of the problems caused by grammar-driven parser generators. ModelCC receives a conceptual model as input, along with constraints that annotate it. It is then able to create a parser for the desired textual syntax and the generated parser fully automates the instantiation of the language conceptual model. ModelCC also includes a reference resolution mechanism so that ModelCC is able to instantiate abstract syntax graphs, rather than mere abstract syntax trees.

  14. Environmental Satellite Models for a Macroeconomic Model

    International Nuclear Information System (INIS)

    Moeller, F.; Grinderslev, D.; Werner, M.

    2003-01-01

    To support national environmental policy, it is desirable to forecast and analyse environmental indicators consistently with economic variables. However, environmental indicators are physical measures linked to physical activities that are not specified in economic models. One way to deal with this is to develop environmental satellite models linked to economic models. The system of models presented gives a frame of reference where emissions of greenhouse gases, acid gases, and leaching of nutrients to the aquatic environment are analysed in line with - and consistently with - macroeconomic variables. This paper gives an overview of the data and the satellite models. Finally, the results of applying the model system to calculate the impacts on emissions and the economy are reviewed in a few illustrative examples. The models have been developed for Denmark; however, most of the environmental data used are from the CORINAIR system implemented in numerous countries

  15. A model evaluation checklist for process-based environmental models

    Science.gov (United States)

    Jackson-Blake, Leah

    2015-04-01

    Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. The reasons for this were investigated for one commonly-applied model, the INtegrated model of CAtchment Phosphorus (INCA-P). Model output was compared to 18 months of daily water quality monitoring data in a small agricultural catchment in Scotland, and model structure, key model processes and internal model responses were examined. Although the model broadly reproduced dissolved phosphorus dynamics, it struggled with particulates. The reasons for poor performance were explored, together with ways in which improvements could be made. The process of critiquing and assessing model performance was then generalised to provide a broadly-applicable model evaluation checklist, incorporating: (1) Calibration challenges, relating to difficulties in thoroughly searching a high-dimensional parameter space and in selecting appropriate means of evaluating model performance. In this study, for example, model simplification was identified as a necessary improvement to reduce the number of parameters requiring calibration, whilst the traditionally-used Nash Sutcliffe model performance statistic was not able to discriminate between realistic and unrealistic model simulations, and alternative statistics were needed. (2) Data limitations, relating to a lack of (or uncertainty in) input data, data to constrain model parameters, data for model calibration and testing, and data to test internal model processes. In this study, model reliability could be improved by addressing all four kinds of data limitation. For example, there was insufficient surface water monitoring data for model testing against an independent dataset to that used in calibration, whilst additional monitoring of groundwater and effluent phosphorus inputs would help distinguish between alternative plausible model parameterisations. (3) Model structural inadequacies, whereby model structure may inadequately represent

  16. Coupling Climate Models and Forward-Looking Economic Models

    Science.gov (United States)

    Judd, K.; Brock, W. A.

    2010-12-01

    Authors: Dr. Kenneth L. Judd, Hoover Institution, and Prof. William A. Brock, University of Wisconsin Current climate models range from General Circulation Models (GCM’s) with millions of degrees of freedom to models with few degrees of freedom. Simple Energy Balance Climate Models (EBCM’s) help us understand the dynamics of GCM’s. The same is true in economics with Computable General Equilibrium Models (CGE’s) where some models are infinite-dimensional multidimensional differential equations but some are simple models. Nordhaus (2007, 2010) couples a simple EBCM with a simple economic model. One- and two-dimensional EBCM’s do better at approximating damages across the globe and positive and negative feedbacks from anthropogenic forcing (North et al. (1981), Wu and North (2007)). A proper coupling of climate and economic systems is crucial for arriving at effective policies. Brock and Xepapadeas (2010) have used Fourier/Legendre based expansions to study the shape of socially optimal carbon taxes over time at the planetary level in the face of damages caused by polar ice cap melt (as discussed by Oppenheimer, 2005) but in only a “one dimensional” EBCM. Economists have used orthogonal polynomial expansions to solve dynamic, forward-looking economic models (Judd, 1992, 1998). This presentation will couple EBCM climate models with basic forward-looking economic models, and examine the effectiveness and scaling properties of alternative solution methods. We will use a two dimensional EBCM model on the sphere (Wu and North, 2007) and a multicountry, multisector regional model of the economic system. Our aim will be to gain insights into the intertemporal shape of the optimal carbon tax schedule, and its impact on global food production, as modeled by Golub and Hertel (2009). We will initially have limited computing resources and will need to focus on highly aggregated models. However, this will be more complex than existing models with forward

  17. EIA model documentation: Petroleum Market Model of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-12-30

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. Documentation of the model is in accordance with EIA's legal obligation to provide adequate documentation in support of its models (Public Law 94-385, section 57.b.2). The PMM models petroleum refining activities, the marketing of products, and the production of natural gas liquids and domestic methanol, and projects petroleum product prices and sources of supply for meeting demand. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption.

  18. Breckinridge Project, initial effort. Report VII, Volume III. Cultural resource assessment socioeconomic background data

    Energy Technology Data Exchange (ETDEWEB)

    Macfarlane, Heather; Janzen, Donald E.

    1980-11-26

    This report has been prepared in conjunction with an environmental baseline study for a commercial coal conversion facility being conducted by Ashland Synthetic Fuels, Inc. (ASFI) and Airco Energy Company (AECO). This report represents a cultural resource assessment for the proposed plant site and two potential solid waste disposal areas. This assessment presents data collected by Dames and Moore during a recent archaeological reconnaissance of the unsurveyed southeastern portion of the proposed plant site and two potential solid waste disposal areas. Also, results of two previous surveys on the northern and southwestern portion of the plant site for American Smelting and Refining Company (ASARCO) and Kentucky Utilities are included. The Dames and Moore survey of the southeastern portion of the plant site identified one archaeological site, three standing structures and one historic cemetery. In addition, 47 archaeological sites and six standing structures are known from two previous surveys of the remainder of the plant site (Cowan 1975 and Turnbow et al. 1980). Eleven of the previously recorded archaeological sites were recommended for further assessment to evaluate their potential for inclusion within the Holt Bottoms Archaeological District currently listed on the National Register of Historic Places. None of the archaeological sites or standing structures located within the plant site during the Dames and Moore survey were recommended for further assessment. A total of eight archaeological sites were located during the Dames and Moore survey of the two potential solid waste disposal areas. Of this total only two sites were recommended for further assessment. Also, one previously unknown historic cemetery was located in the southernmost potential waste disposal area.

  19. Blasting at a Superfund chemical waste site

    International Nuclear Information System (INIS)

    Burns, D.R.

    1991-01-01

    During the summer of 1989, Maine Drilling and Blasting of Gardiner, Maine was contracted by Cayer Corporation of Harvard, Massachusetts to drill and blast an interceptor trench at the Nyanza Chemical Superfund Site in Ashland, Massachusetts. The interceptor trench was to be 1,365 feet long and to be blasted out of granite. The trench was to be 12 feet wide at the bottom with 1/1 slopes, the deepest cut being 30 feet deep. A French drain 12 feet wide by 15 to 35 feet deep was blasted below the main trench on a 2% slope from its center to each end. A French drain is an excavation where the rock is blasted but not dug. The trench would be used as a perimeter road, with any ground water flow passing through the French drain to both ends of the trench. Being a Superfund project turned a simple blasting project into a regulatory nightmare. The US Environmental Protection Agency performed all the chemical-related functions on site. The US Army Corps of Engineers was overseeing all related excavation and construction on site, as were the Massachusetts Department of Environmental Quality Engineering, the local Hazardous Wastes Council, and the local Fire Department. All parties had some input into the blasting, and all issues had to be addressed. The paper outlines the project and how it was designed and completed. Also included are an outline of the blast plan to be submitted for approval, an outline of the Safety/Hazardous Waste training, and a description of all the problems raised by the various regulatory agencies during the project

  20. Breckinridge Project, initial effort. Report VII, Volume 4. Safety and health plan

    Energy Technology Data Exchange (ETDEWEB)

    None

    1982-01-01

    The Safety and Health Plan recognizes the potential hazards associated with the Project and has been developed specifically to respond to these risks in a positive manner. Prevention, the primary objective of the Plan, starts with building safety controls into the process design and continues through engineering, construction, start-up, and operation of the Project facilities and equipment. Compliance with applicable federal, state, and local health and safety laws, regulations, and codes throughout all Project phases is required and assured. The Plan requires that each major Project phase be thoroughly reviewed and analyzed to determine that those provisions required to assure the safety and health of all employees and the public, and to prevent property and equipment losses, have been provided. The Plan requires followup on those items or situations where corrective action needs were identified to assure that the action was taken and is effective. Emphasis is placed on loss prevention. Exhibit 1 provides a breakdown of Ashland Synthetic Fuels, Inc.'s (ASFI's) Loss Prevention Program. The Plan recognizes that the varied nature of the work is such as to require the services of skilled, trained, and responsible personnel who are aware of the hazards and know that the work can be done safely, if done correctly. Good operating practice is likewise safe operating practice. Training is provided to familiarize personnel with good operational practice, the general sequence of activities, reporting requirements, and above all, the concept that each step in the operating procedures must be successfully concluded before the following step can be safely initiated. The Plan provides for periodic review and evaluation of all safety and loss prevention activities at the plant and departmental levels.

  1. Department of Defense support to spill response operations

    International Nuclear Information System (INIS)

    Ducey, D.L. Jr.; Walker, A.H.

    1993-01-01

    The Department of Defense (DOD), operating through the Directorate of Military Support in the Department of the Army, supports state, local and other federal agency response operations in a wide range of natural and man-caused emergencies. Examples within the past six years include the Ashland Oil tank collapse in Floreffe, Pennsylvania, Exxon Valdez cleanup, Loma Prieta earthquake, hurricanes Hugo and Andrew, Mexico City earthquake, Armero (Colombia) volcanic eruption, and Puerto Rico floods and mud slides. From March 24 to September 27, 1989, the period of the Exxon Valdez initial cleanup operations, DOD provided military and civilian personnel, US Navy ships for housing response workers, cargo and medical evacuation aircraft (fixed wing and helicopters), skimmers, modified dredges, landing craft, Dracones, Zodiac boats, radios, computers, and other miscellaneous equipment. This was in addition to assets of the Alaska Army and Air National Guard, which were committed by the governor. Support was provided to the US Coast Guard on-scene coordinator and supervised by the Alaska Oil Spill Joint Task Force. The General Accounting Office, in its January 1990 report, Federal Costs Resulting from the Exxon Valdez Oil Spill, estimated that DOD spent $62.8 million through September 30, 1989, the largest expenditure by any federal agency. Use of military resources is a realistic scenario in certain situations. Primary responders at the federal and state level should understand how to request and employ these assets. This paper provides a background on DOD support to disaster relief operations, and discusses the types of support available to agencies responding to natural or man-caused emergencies, request and approval mechanisms, the Department of Defense organization to provide support, and reimbursement of the department

  2. Breckinridge Project, initial effort. Report VII, Volume II. Environmental baseline report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1982-01-01

    Ashland Synthetic Fuels, Inc. (ASFI) and Airco Energy Company, Inc. (AECI) have recently formed the Breckinridge Project and are currently conducting a process and economic feasibility study of a commercial scale facility to produce synthetic liquid fuels from coal. The coal conversion process to be used is the H-COAL process, which is in the pilot plant testing stage under the auspices of the US Department of Energy at the H-COAL Pilot Plant Project near Catlettsburg, Kentucky. The preliminary plans for the commercial plant are for an 18,140 metric ton/day (24,000 ton/day) nominal coal consumption capacity utilizing the abundant high sulfur Western Kentucky coals. The Western Kentucky area offers a source of the coal along with adequate water, power, labor, transportation and other factors critical to the successful siting of a plant. Various studies by federal and state governments, as well as private industry, have reached similar conclusions regarding the suitability of such plant sites in western Kentucky. Of the many individual sites evaluated, a site in Breckinridge County, Kentucky, approximately 4 kilometers (2.5 miles) west of the town of Stephensport, has been identified as the plant location. Actions have been taken to obtain options to ensure that this site will be available when needed. This report contains an overview of the regional setting and results of the baseline environmental studies. These studies include collection of data on ambient air and water quality, sound, aquatic and terrestrial biology and geology. This report contains the following chapters: introduction, review of significant findings, ambient air quality monitoring, sound, aquatic ecology, vegetation, wildlife, geology, soils, surface water, and ground water.

  3. Modeling Methods

    Science.gov (United States)

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Simulation models are widely used in all types of hydrologic studies, and many of these models can be used to estimate recharge. Models can provide important insight into the functioning of hydrologic systems by identifying factors that influence recharge. The predictive capability of models can be used to evaluate how changes in climate, water use, land use, and other factors may affect recharge rates. Most hydrological simulation models, including watershed models and groundwater-flow models, are based on some form of water-budget equation, so the material in this chapter is closely linked to that in Chapter 2. Empirical models that are not based on a water-budget equation have also been used for estimating recharge; these models generally take the form of simple estimation equations that define annual recharge as a function of precipitation and possibly other climatic data or watershed characteristics.Model complexity varies greatly. Some models are simple accounting models; others attempt to accurately represent the physics of water movement through each compartment of the hydrologic system. Some models provide estimates of recharge explicitly; for example, a model based on the Richards equation can simulate water movement from the soil surface through the unsaturated zone to the water table. Recharge estimates can be obtained indirectly from other models. For example, recharge is a parameter in groundwater-flow models that solve for hydraulic head (i.e. groundwater level). Recharge estimates can be obtained through a model calibration process in which recharge and other model parameter values are adjusted so that simulated water levels agree with measured water levels. The simulation that provides the closest agreement is called the best fit, and the recharge value used in that simulation is the model-generated estimate of recharge.
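
    As a concrete illustration of the water-budget basis described above, here is a minimal sketch (all numbers illustrative, not from the chapter) of recharge estimated as the residual of an annual budget, alongside the kind of simple empirical precipitation function mentioned; the threshold and coefficient in the empirical form are hypothetical.

        # Sketch: recharge as the residual of a simple annual water budget,
        #   R = P - ET - RO - dS
        # where P is precipitation, ET evapotranspiration, RO surface runoff and
        # dS the change in storage. All values (mm/year) are illustrative.
        def recharge_residual(precip, et, runoff, storage_change):
            return precip - et - runoff - storage_change

        # Simple empirical alternative: annual recharge as a linear function of
        # precipitation above a threshold (coefficients are hypothetical).
        def recharge_empirical(precip, threshold=400.0, coeff=0.35):
            return max(0.0, coeff * (precip - threshold))

        print(recharge_residual(precip=900.0, et=550.0, runoff=180.0, storage_change=20.0))  # 150.0
        print(recharge_empirical(900.0))                                                      # 175.0

    The residual form makes the chapter's caveat concrete: any error in the other budget terms lands directly in the recharge estimate, which is why more detailed process models or calibration against measured heads are often preferred.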

  4. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE, are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random-weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.
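
    The behaviour of a post-model-selection estimator can be made concrete with a small simulation (a sketch, not the paper's own code): estimate beta1 in a two-regressor linear model, first letting AIC choose between the full model and the model that drops the second, correlated regressor. All data-generating values are illustrative assumptions.

        # Sketch: sampling distribution of a post-model-selection estimator (PMSE).
        # AIC selects between y ~ x1 + x2 (full) and y ~ x1 (small); the chosen
        # model supplies the estimate of beta1. Illustrative parameter values.
        import numpy as np

        rng = np.random.default_rng(0)
        n, beta1, beta2, reps = 50, 1.0, 0.3, 2000

        def fit(X, y):
            """OLS coefficients and Gaussian AIC (up to an additive constant)."""
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ coef) ** 2)
            return coef, n * np.log(rss / n) + 2 * X.shape[1]

        estimates = []
        for _ in range(reps):
            x1 = rng.normal(size=n)
            x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)      # correlated regressors
            y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)
            coef_full, aic_full = fit(np.column_stack([x1, x2]), y)
            coef_small, aic_small = fit(x1[:, None], y)
            estimates.append(coef_full[0] if aic_full < aic_small else coef_small[0])

        est = np.asarray(estimates)
        print(f"PMSE mean {est.mean():.3f}, sd {est.std():.3f} (true beta1 = {beta1})")

    Whenever AIC drops x2, the estimate of beta1 absorbs part of beta2's effect, so the resulting mixture distribution is neither centred nor shaped the way naive post-selection inference assumes - the phenomenon the paper studies.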

  5. The DINA model as a constrained general diagnostic model: Two variants of a model equivalency.

    Science.gov (United States)

    von Davier, Matthias

    2014-02-01

    The 'deterministic-input noisy-AND' (DINA) model is one of the more frequently applied diagnostic classification models for binary observed responses and binary latent variables. The purpose of this paper is to show that the model is equivalent to a special case of a more general compensatory family of diagnostic models. Two equivalencies are presented. Both project the original DINA skill space and design Q-matrix using mappings into a transformed skill space as well as a transformed Q-matrix space. Both variants of the equivalency produce a compensatory model that is mathematically equivalent to the (conjunctive) DINA model. This equivalency holds for all DINA models with any type of Q-matrix, not only for trivial (simple-structure) cases. The two versions of the equivalency presented in this paper are not implied by the recently suggested log-linear cognitive diagnosis model or the generalized DINA approach. The equivalencies presented here exist independent of these recently derived models since they solely require a linear - compensatory - general diagnostic model without any skill interaction terms. Whenever it can be shown that one model can be viewed as a special case of another more general one, conclusions derived from any particular model-based estimates are drawn into question. It is widely known that multidimensional models can often be specified in multiple ways while the model-based probabilities of observed variables stay the same. This paper goes beyond this type of equivalency by showing that a conjunctive diagnostic classification model can be expressed as a constrained special case of a general compensatory diagnostic modelling framework. © 2013 The British Psychological Society.
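
    For reference, the conjunctive DINA item response function being re-expressed here is, in assumed standard notation (slip s_j, guess g_j, skill profile alpha_i, Q-matrix entries q_jk):

        % DINA item response function (standard notation, assumed)
        \eta_{ij} = \prod_{k} \alpha_{ik}^{\,q_{jk}}, \qquad
        P(X_{ij} = 1 \mid \boldsymbol{\alpha}_i) = (1 - s_j)^{\eta_{ij}}\, g_j^{\,1 - \eta_{ij}}

    The equivalency result says that this conjunctive form can be reproduced exactly by a compensatory (linear) general diagnostic model after remapping the skill space and the Q-matrix, which is what draws conclusions based on one parameterisation into question.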

  6. Performance Measurement Model A TarBase model with ...

    Indian Academy of Sciences (India)

    rohit

    Model A: 8.0, 2.0, 94.52%, 88.46%, 76, 108, 12, 12, 0.86, 0.91, 0.78, 0.94. Model B: 2.0, 2.0, 93.18%, 89.33%, 64, 95, 10, 9, 0.88, 0.90, 0.75, 0.98. The above results for TEST-1 show details for our two models (Model A and Model B). Performance of Model A after adding the 32-negative dataset of MiRTif on our testing set (MiRecords) ...

  7. Underground economy modelling: simple models with complicated dynamics

    OpenAIRE

    Albu, Lucian-Liviu

    2003-01-01

    The paper aims to model the underground economy using two different models: one based on the labor supply method and a generalized model for the allocation of time. The model based on the labor supply method is conceived as a simulation model, intended to determine some reasonable thresholds for the extension of the underground sector based only on the available macroeconomic statistical data. The generalized model for the allocation of time is based on a direct approach which estimates the underg...

  8. Integrative structure modeling with the Integrative Modeling Platform.

    Science.gov (United States)

    Webb, Benjamin; Viswanath, Shruthi; Bonomi, Massimiliano; Pellarin, Riccardo; Greenberg, Charles H; Saltzberg, Daniel; Sali, Andrej

    2018-01-01

    Building models of a biological system that are consistent with the myriad data available is one of the key challenges in biology. Modeling the structure and dynamics of macromolecular assemblies, for example, can give insights into how biological systems work, evolved, might be controlled, and even designed. Integrative structure modeling casts the building of structural models as a computational optimization problem, for which information about the assembly is encoded into a scoring function that evaluates candidate models. Here, we describe our open source software suite for integrative structure modeling, Integrative Modeling Platform (https://integrativemodeling.org), and demonstrate its use. © 2017 The Protein Society.
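
    The casting of structure building as an optimization problem can be sketched generically. The toy distance restraints and random-restart hill climb below are assumptions chosen for brevity; they illustrate the scoring-function idea and are not the IMP API.

        # Generic sketch of integrative structure modeling as optimization:
        # encode each piece of information as a restraint (a term in a scoring
        # function), then search for coordinates that satisfy all of them.
        # Toy restraints and optimizer are assumptions, not the IMP API.
        import numpy as np

        rng = np.random.default_rng(1)

        def distance_restraint(xyz, i, j, target, weight=1.0):
            """Harmonic penalty for deviating from a measured pairwise distance."""
            d = np.linalg.norm(xyz[i] - xyz[j])
            return weight * (d - target) ** 2

        def score(xyz, restraints):
            """Total score: sum of all restraint terms (lower is better)."""
            return sum(distance_restraint(xyz, *r) for r in restraints)

        # Three particles; cross-link-like distance data as (i, j, target) tuples.
        restraints = [(0, 1, 5.0), (1, 2, 5.0), (0, 2, 8.0)]
        best_xyz, best = None, np.inf
        for _ in range(100):                        # random restarts
            xyz = rng.normal(scale=5.0, size=(3, 3))
            for _ in range(400):                    # crude stochastic hill climb
                trial = xyz + rng.normal(scale=0.1, size=xyz.shape)
                if score(trial, restraints) < score(xyz, restraints):
                    xyz = trial
            s = score(xyz, restraints)
            if s < best:
                best_xyz, best = xyz, s
        print(f"best score: {best:.4f}")            # near 0 when data are satisfiable

    At scale, the same pattern holds: restraints typically encode experimental information such as cross-links or electron microscopy densities, and many candidate models are sampled and scored against all of them at once.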

  9. Modeling volatility using state space models.

    Science.gov (United States)

    Timmer, J; Weigend, A S

    1997-08-01

    In time series problems, noise can be divided into two categories: dynamic noise which drives the process, and observational noise which is added in the measurement process, but does not influence future values of the system. In this framework, we show that empirical volatilities (the squared relative returns of prices) exhibit a significant amount of observational noise. To model and predict their time evolution adequately, we estimate state space models that explicitly include observational noise. We obtain relaxation times for shocks in the logarithm of volatility ranging from three weeks (for foreign exchange) to three to five months (for stock indices). In most cases, a two-dimensional hidden state is required to yield residuals that are consistent with white noise. We compare these results with ordinary autoregressive models (without a hidden state) and find that autoregressive models underestimate the relaxation times by about two orders of magnitude since they do not distinguish between observational and dynamic noise. This new interpretation of the dynamics of volatility in terms of relaxators in a state space model carries over to stochastic volatility models and to GARCH models, and is useful for several problems in finance, including risk management and the pricing of derivative securities. Data sets used: Olsen & Associates high frequency DEM/USD foreign exchange rates (8 years). Nikkei 225 index (40 years). Dow Jones Industrial Average (25 years).
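
    A minimal Python sketch of the kind of linear-Gaussian state-space model described, with a one-dimensional latent log-volatility following an AR(1) plus observational noise (the paper's models are richer, often with two-dimensional hidden states; all parameter values here are illustrative):

        import numpy as np

        def kalman_filter_loglik(y, mu, phi, q, r):
            # State:       h_t = mu + phi * (h_{t-1} - mu) + w_t,  w_t ~ N(0, q)
            # Observation: y_t = h_t + v_t,                        v_t ~ N(0, r)
            h = mu                        # predicted state mean (stationary init)
            p = q / (1.0 - phi ** 2)      # predicted state variance
            loglik, filtered = 0.0, np.empty(len(y))
            for t, yt in enumerate(y):
                s = p + r                 # innovation variance
                k = p / s                 # Kalman gain
                e = yt - h                # innovation
                loglik += -0.5 * (np.log(2.0 * np.pi * s) + e * e / s)
                h, p = h + k * e, (1.0 - k) * p                 # measurement update
                filtered[t] = h
                h, p = mu + phi * (h - mu), phi ** 2 * p + q    # time update
            return loglik, filtered

        # y would be an empirical log-volatility series, e.g. log squared returns:
        # loglik, h_hat = kalman_filter_loglik(np.log(r2 + 1e-12), -9.0, 0.98, 0.02, 1.0)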

  10. Document Models

    Directory of Open Access Journals (Sweden)

    A.A. Malykh

    2017-08-01

    In this paper, the concept of locally simple models is considered. Locally simple models are arbitrarily complex models built from relatively simple components. Many practically important domains of discourse can be described as locally simple models, for example, the business models of enterprises and companies. Up to now, research in the automation of human reasoning has been concentrated mainly on the most intellectually intensive activities, such as automated theorem proving. On the other hand, a retailer's business model is formed from "jobs", and each "job" can be modelled and automated more or less easily, even though the whole retailer model as an integrated system is extremely complex. In this paper, we offer a variant of a mathematical definition of a locally simple model. This definition is intended for modelling a wide range of domains, so we must also take into account perceptual and psychological issues. Logic is elitist, and if we want to attract as many people as possible to our models, we need to hide this elitism behind some metaphor to which 'ordinary' people are accustomed. As such a metaphor, we use the concept of a document, so our locally simple models are called document models. Document models are built in the paradigm of semantic programming. This allows us to achieve another important goal - to make the document models executable. Executable models are models that can act as practical information systems in the described domain of discourse. Thus, if our model is executable, then programming becomes redundant. The direct use of a model, instead of its encoding in program form, brings important advantages, for example, a drastic cost reduction for development and maintenance. Moreover, since the model is well defined and not dissolved within programming modules, we can directly apply AI tools, in particular, machine learning. This significantly expands the possibilities for automation and

  11. An online model composition tool for system biology models.

    Science.gov (United States)

    Coskun, Sarp A; Cicek, A Ercument; Lai, Nicola; Dash, Ranjan K; Ozsoyoglu, Z Meral; Ozsoyoglu, Gultekin

    2013-09-05

    There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) the Model Simulation Interface, which generates a visual plot of the simulation according to the user's input, (2) the iModel Tool, a platform for users to upload their own models to compose, and (3) the SimCom Tool, which provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. The model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. SBML Test Suite models will be a nice starting point for beginners, and, for more advanced purposes, users will be able to access and employ models from the BioModels Database as well.
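
    PathCase-SB itself is a web portal; as a hedged illustration of the underlying SBML handling, a few lines using the standard libsbml Python bindings (the file name is hypothetical):

        import libsbml  # pip install python-libsbml

        doc = libsbml.readSBML("biomodel.xml")    # hypothetical SBML file
        if doc.getNumErrors() > 0:
            doc.printErrors()
        model = doc.getModel()
        print(model.getId(), model.getNumSpecies(), model.getNumReactions())
        for species in model.getListOfSpecies():
            print(species.getId(), species.getInitialConcentration())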

  12. Modeling inputs to computer models used in risk assessment

    International Nuclear Information System (INIS)

    Iman, R.L.

    1987-01-01

    Computer models for various risk assessment applications are closely scrutinized both from the standpoint of questioning the correctness of the underlying mathematical model with respect to the process it is attempting to model and from the standpoint of verifying that the computer model correctly implements the underlying mathematical model. A process that receives less scrutiny, but is nonetheless of equal importance, concerns the individual and joint modeling of the inputs. This modeling effort clearly has a great impact on the credibility of results. Model characteristics are reviewed in this paper that have a direct bearing on the model input process and reasons are given for using probabilities-based modeling with the inputs. The authors also present ways to model distributions for individual inputs and multivariate input structures when dependence and other constraints may be present

  13. A Model of Trusted Measurement Model

    OpenAIRE

    Ma Zhili; Wang Zhihao; Dai Liang; Zhu Xiaoqin

    2017-01-01

    A model of trusted measurement supporting behavior measurement, based on the trusted connection architecture (TCA) with three entities and three levels, is proposed, and a frame to illustrate the model is given. The model synthesizes three trusted measurement dimensions (trusted identity, trusted status and trusted behavior), satisfies the essential requirements of trusted measurement, and unifies the TCA with three entities and three levels.

  14. A unification of RDE model and XCDM model

    International Nuclear Information System (INIS)

    Liao, Kai; Zhu, Zong-Hong

    2013-01-01

    In this Letter, we propose a new generalized Ricci dark energy (NGR) model to unify Ricci dark energy (RDE) and XCDM. Our model can distinguish between RDE and XCDM by introducing a parameter β called the weight factor. When β=1, the NGR model becomes the usual RDE model; the XCDM model corresponds to β=0. Moreover, the NGR model permits situations where neither β=1 nor β=0. We then perform a statefinder analysis on the NGR model to see how β affects the trajectory in the r-s plane. In order to determine the value of β, we constrain the NGR model with the latest observations, including type Ia supernovae (SNe Ia) from the Union2 set (557 data points), the baryonic acoustic oscillation (BAO) observation from the spectroscopic Sloan Digital Sky Survey (SDSS) data release 7 (DR7) galaxy sample, and the cosmic microwave background (CMB) observation from the 7-year Wilkinson Microwave Anisotropy Probe (WMAP7) results. With the Markov Chain Monte Carlo (MCMC) method, the constraint result is β = 0.08 (+0.30/-0.21 at 1σ; +0.43/-0.28 at 2σ), which shows that the observations prefer an XCDM universe to the RDE model; indeed, the RDE model appears to be ruled out in the NGR scenario within the 2σ region. Furthermore, we compare the NGR model with several successful cosmological models using the AIC information criterion. The NGR model seems to be a good choice for describing the universe.

  15. Downscaling GISS ModelE Boreal Summer Climate over Africa

    Science.gov (United States)

    Druyan, Leonard M.; Fulakeza, Matthew

    2015-01-01

    The study examines the perceived added value of downscaling atmosphere-ocean global climate model simulations over Africa and adjacent oceans by a nested regional climate model. NASA/Goddard Institute for Space Studies (GISS) coupled ModelE simulations for June-September 1998-2002 are used to form lateral boundary conditions for synchronous simulations by the GISS RM3 regional climate model. The ModelE computational grid spacing is 2° latitude by 2.5° longitude and the RM3 grid spacing is 0.44°. ModelE precipitation climatology for June-September 1998-2002 is shown to be a good proxy for 30-year means, so results based on the 5-year sample are presumed to be generally representative. Comparison with observational evidence shows several discrepancies in the ModelE configuration of the boreal summer inter-tropical convergence zone (ITCZ). One glaring shortcoming is that ModelE simulations do not advance the West African rain band northward during the summer to represent monsoon precipitation onset over the Sahel. Results for 1998-2002 show that onset simulation is an important added value produced by downscaling with RM3. ModelE Eastern South Atlantic Ocean computed sea-surface temperatures (SST) are some 4 K warmer than reanalysis, contributing to large positive biases in overlying surface air temperatures (Tsfc). ModelE Tsfc are also too warm over most of Africa. RM3 downscaling somewhat mitigates the magnitude of Tsfc biases over the African continent, eliminates the ModelE double ITCZ over the Atlantic, and produces more realistic orographic precipitation maxima. Parallel ModelE and RM3 simulations with observed SST forcing (in place of the predicted ocean) lower Tsfc errors but have mixed impacts on circulation and precipitation biases. Downscaling improvements in the meridional movement of the rain band over West Africa and in the configuration of orographic precipitation maxima are realized irrespective of the SST biases.

  16. Essays on model uncertainty in financial models

    NARCIS (Netherlands)

    Li, Jing

    2018-01-01

    This dissertation studies model uncertainty, particularly in financial models. It consists of two empirical chapters and one theoretical chapter. The first empirical chapter (Chapter 2) classifies model uncertainty into parameter uncertainty and misspecification uncertainty. It investigates the

  17. Mixed models for predictive modeling in actuarial science

    NARCIS (Netherlands)

    Antonio, K.; Zhang, Y.

    2012-01-01

    We start with a general discussion of mixed (also called multilevel) models and continue with illustrating specific (actuarial) applications of this type of models. Technical details on (linear, generalized, non-linear) mixed models follow: model assumptions, specifications, estimation techniques

  18. ModelMate - A graphical user interface for model analysis

    Science.gov (United States)

    Banta, Edward R.

    2011-01-01

    ModelMate is a graphical user interface designed to facilitate use of model-analysis programs with models. This initial version of ModelMate supports one model-analysis program, UCODE_2005, and one model software program, MODFLOW-2005. ModelMate can be used to prepare input files for UCODE_2005, run UCODE_2005, and display analysis results. A link to the GW_Chart graphing program facilitates visual interpretation of results. ModelMate includes capabilities for organizing directories used with the parallel-processing capabilities of UCODE_2005 and for maintaining files in those directories to be identical to a set of files in a master directory. ModelMate can be used on its own or in conjunction with ModelMuse, a graphical user interface for MODFLOW-2005 and PHAST.

  19. Reactor core modeling practice: Operational requirements, model characteristics, and model validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1997-01-01

    The physical models implemented in power plant simulators have greatly increased in performance and complexity in recent years. This process has been enabled by the ever-increasing computing power available at affordable prices. This paper describes the process from several angles. The first section covers the operational requirements that are most critical from the point of view of model performance, both for normal and off-normal operating conditions. A second section discusses core model characteristics in the light of the solutions implemented by Thomson Training and Simulation (TT&S) in several full-scope simulators recently built and delivered for Dutch, German, and French nuclear power plants. Finally, we consider the model validation procedures, which are of course an integral part of model development, and which are becoming more and more severe as performance expectations increase. As a conclusion, it may be asserted that in the core modeling field, as in other areas, the general improvement in the quality of simulation codes has resulted in a fairly rapid convergence towards mainstream engineering-grade calculations. This is remarkable performance in view of the stringent real-time requirements which the simulation codes must satisfy, as well as the extremely wide range of operating conditions that they are called upon to cover with good accuracy. (author)

  20. Mineralogic Model (MM3.0) Analysis Model Report

    Energy Technology Data Exchange (ETDEWEB)

    C. Lum

    2002-02-12

    The purpose of this report is to document the Mineralogic Model (MM), Version 3.0 (MM3.0) with regard to data input, modeling methods, assumptions, uncertainties, limitations and validation of the model results, qualification status of the model, and the differences between Version 3.0 and previous versions. A three-dimensional (3-D) Mineralogic Model was developed for Yucca Mountain to support the analyses of hydrologic properties, radionuclide transport, mineral health hazards, repository performance, and repository design. Version 3.0 of the MM was developed from mineralogic data obtained from borehole samples. It consists of matrix mineral abundances as a function of x (easting), y (northing), and z (elevation), referenced to the stratigraphic framework defined in Version 3.1 of the Geologic Framework Model (GFM). The MM was developed specifically for incorporation into the 3-D Integrated Site Model (ISM). The MM enables project personnel to obtain calculated mineral abundances at any position, within any region, or within any stratigraphic unit in the model area. The significance of the MM for key aspects of site characterization and performance assessment is explained in the following subsections. This work was conducted in accordance with the Development Plan for the MM (CRWMS M&O 2000). The planning document for this Rev. 00, ICN 02 of this AMR is Technical Work Plan, TWP-NBS-GS-000003, Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01 (CRWMS M&O 2000). The purpose of this ICN is to record changes in the classification of input status by the resolution of the use of TBV software and data in this report. Constraints and limitations of the MM are discussed in the appropriate sections that follow. The MM is one component of the ISM, which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1

  2. ERM model analysis for adaptation to hydrological model errors

    Science.gov (United States)

    Baymani-Nezhad, M.; Han, D.

    2018-05-01

    Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models that can lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in the hydrological sciences and has not been entirely solved, due to lack of knowledge about the future state of the catchment under study. Basically, in the flood forecasting process, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecasting model. Hence, to control these errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study focuses on the ability of rainfall-runoff model parameters to cope with three types of error (timing, shape and volume), the common errors in hydrological modelling. A new lumped model, the ERM model, has been selected for this study, and its parameters are evaluated for use in model updating to cope with the stated errors. An investigation of ten events shows that the ERM model parameters can be updated to cope with these errors without the need to recalibrate the model.

  3. Model documentation report: Short-Term Hydroelectric Generation Model

    International Nuclear Information System (INIS)

    1993-08-01

    The purpose of this report is to define the objectives of the Short-Term Hydroelectric Generation Model (STHGM), describe its basic approach, and provide details on the model structure. This report is intended as a reference document for model analysts, users, and the general public. Documentation of the model is in accordance with the Energy Information Administration's (EIA) legal obligation to provide adequate documentation in support of its models (Public Law 94-385, Section 57.b.2). The STHGM performs a short-term (18- to 27-month) forecast of hydroelectric generation in the United States using an autoregressive integrated moving average (ARIMA) time series model with precipitation as an explanatory variable. The model results are used as input for the Short-Term Energy Outlook.
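
    The abstract does not give the exact STHGM specification; a rough sketch of an ARIMA forecast with precipitation as an exogenous explanatory variable, using statsmodels (orders, column names and file name are all assumptions):

        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        # hypothetical monthly series of hydro generation and precipitation
        df = pd.read_csv("hydro_monthly.csv", index_col="month", parse_dates=True)

        model = SARIMAX(df["generation"], exog=df["precipitation"],
                        order=(1, 1, 1), seasonal_order=(1, 0, 0, 12))
        result = model.fit(disp=False)

        # naive assumption: future precipitation repeats the last 24 months
        future_precip = df["precipitation"].iloc[-24:].to_numpy().reshape(-1, 1)
        forecast = result.get_forecast(steps=24, exog=future_precip)
        print(forecast.predicted_mean)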

  4. Geochemistry Model Validation Report: External Accumulation Model

    International Nuclear Information System (INIS)

    Zarrabi, K.

    2001-01-01

    The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached waste package (WP). Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high-pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce the mass of accumulation

  5. Model uncertainty: Probabilities for models?

    International Nuclear Information System (INIS)

    Winkler, R.L.

    1994-01-01

    Like any other type of uncertainty, model uncertainty should be treated in terms of probabilities. The question is how to do this. The most commonly-used approach has a drawback related to the interpretation of the probabilities assigned to the models. If we step back and look at the big picture, asking what the appropriate focus of the model uncertainty question should be in the context of risk and decision analysis, we see that a different probabilistic approach makes more sense, although it raises some implementation questions. Current work that is underway to address these questions looks very promising.
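
    The most commonly-used approach alluded to is Bayesian model averaging, in which probabilities assigned to the candidate models weight their predictions:

        p(y \mid D) = \sum_{i} p(y \mid M_i, D) \, p(M_i \mid D), \qquad
        p(M_i \mid D) \propto p(D \mid M_i) \, p(M_i)

    A common reading of the drawback mentioned above is that the weights p(M_i | D) are only interpretable as probabilities of each model being true if one of the candidate models actually is true.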

  6. Model evaluation methodology applicable to environmental assessment models

    International Nuclear Information System (INIS)

    Shaeffer, D.L.

    1979-08-01

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes
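
    The multiplicative-chain result referred to is the standard lognormal closure property: for independent lognormal inputs the product is again lognormal, because logarithms add,

        Y = \prod_{i=1}^{n} X_i, \quad \ln X_i \sim N(\mu_i, \sigma_i^2)
        \;\;\Longrightarrow\;\; \ln Y \sim N\Bigl(\sum_{i=1}^{n} \mu_i, \; \sum_{i=1}^{n} \sigma_i^2\Bigr)

    so the output distribution of such a chain can be written down exactly rather than sampled.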

  7. Galactic models

    International Nuclear Information System (INIS)

    Buchler, J.R.; Gottesman, S.T.; Hunter, J.H. Jr.

    1990-01-01

    Various papers on galactic models are presented. Individual topics addressed include: observations relating to galactic mass distributions; the structure of the Galaxy; mass distribution in spiral galaxies; rotation curves of spiral galaxies in clusters; grand design, multiple arm, and flocculent spiral galaxies; observations of barred spirals; ringed galaxies; elliptical galaxies; the modal approach to models of galaxies; self-consistent models of spiral galaxies; dynamical models of spiral galaxies; N-body models. Also discussed are: two-component models of galaxies; simulations of cloudy, gaseous galactic disks; numerical experiments on the stability of hot stellar systems; instabilities of slowly rotating galaxies; spiral structure as a recurrent instability; model gas flows in selected barred spiral galaxies; bar shapes and orbital stochasticity; three-dimensional models; polar ring galaxies; dynamical models of polar rings

  8. A Model-Model and Data-Model Comparison for the Early Eocene Hydrological Cycle

    Science.gov (United States)

    Carmichael, Matthew J.; Lunt, Daniel J.; Huber, Matthew; Heinemann, Malte; Kiehl, Jeffrey; LeGrande, Allegra; Loptson, Claire A.; Roberts, Chris D.; Sagoo, Navjit; Shields, Christine

    2016-01-01

    A range of proxy observations has recently provided constraints on how Earth's hydrological cycle responded to early Eocene climatic changes. However, comparisons of proxy data to general circulation model (GCM) simulated hydrology are limited, and inter-model variability remains poorly characterised. In this work, we undertake an intercomparison of GCM-derived precipitation and P-E distributions within the extended EoMIP ensemble (Eocene Modelling Intercomparison Project; Lunt et al., 2012), which includes previously published early Eocene simulations performed using five GCMs differing in boundary conditions, model structure, and precipitation-relevant parameterisation schemes. We show that an intensified hydrological cycle, manifested in enhanced global precipitation and evaporation rates, is simulated for all Eocene simulations relative to preindustrial conditions. This is primarily due to elevated atmospheric paleo-CO2, resulting in elevated temperatures, although the effects of differences in paleogeography and ice sheets are also important in some models. For a given CO2 level, globally averaged precipitation rates vary widely between models, largely arising from different simulated surface air temperatures. Models with a similar global sensitivity of precipitation rate to temperature (dP/dT) display different regional precipitation responses for a given temperature change. Regions that are particularly sensitive to model choice include the South Pacific, tropical Africa, and the Peri-Tethys, which may represent targets for future proxy acquisition. A comparison of early and middle Eocene leaf-fossil-derived precipitation estimates with the GCM output illustrates that GCMs generally underestimate precipitation rates at high latitudes, although a possible seasonal bias of the proxies cannot be excluded. Models which warm these regions, either via elevated CO2 or by varying poorly constrained model parameter values, are most successful in simulating a

  9. The Bond Fluctuation Model and Other Lattice Models

    Science.gov (United States)

    Müller, Marcus

    Lattice models constitute a class of coarse-grained representations of polymeric materials. They have enjoyed a longstanding tradition for investigating the universal behavior of long chain molecules by computer simulations and enumeration techniques. A coarse-grained representation is often necessary to investigate properties on large time- and length scales. First, some justification for using lattice models will be given and the benefits and limitations will be discussed. Then, the bond fluctuation model by Carmesin and Kremer [1] is placed into the context of other lattice models and compared to continuum models. Some specific techniques for measuring the pressure in lattice models will be described. The bond fluctuation model has been employed in more than 100 simulation studies in the last decade and only few selected applications can be mentioned.

  10. A Distributed Snow Evolution Modeling System (SnowModel)

    Science.gov (United States)

    Liston, G. E.; Elder, K.

    2004-12-01

    A spatially distributed snow-evolution modeling system (SnowModel) has been specifically designed to be applicable over a wide range of snow landscapes, climates, and conditions. To reach this goal, SnowModel is composed of four sub-models: MicroMet defines the meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowMass simulates snow depth and water-equivalent evolution, and SnowTran-3D accounts for snow redistribution by wind. While other distributed snow models exist, SnowModel is unique in that it includes a well-tested blowing-snow sub-model (SnowTran-3D) for application in windy arctic, alpine, and prairie environments where snowdrifts are common. These environments comprise 68% of the seasonally snow-covered Northern Hemisphere land surface. SnowModel also accounts for snow processes occurring in forested environments (e.g., canopy interception related processes). SnowModel is designed to simulate snow-related physical processes occurring at spatial scales of 5-m and greater, and temporal scales of 1-hour and greater. These include: accumulation from precipitation; wind redistribution and sublimation; loading, unloading, and sublimation within forest canopies; snow-density evolution; and snowpack ripening and melt. To enhance its wide applicability, SnowModel includes the physical calculations required to simulate snow evolution within each of the global snow classes defined by Sturm et al. (1995), e.g., tundra, taiga, alpine, prairie, maritime, and ephemeral snow covers. The three, 25-km by 25-km, Cold Land Processes Experiment (CLPX) mesoscale study areas (MSAs: Fraser, North Park, and Rabbit Ears) are used as SnowModel simulation examples to highlight model strengths, weaknesses, and features in forested, semi-forested, alpine, and shrubland environments.

  11. Event-based model diagnosis of rainfall-runoff model structures

    International Nuclear Information System (INIS)

    Stanzel, P.

    2012-01-01

    The objective of this research is a comparative evaluation of different rainfall-runoff model structures. Comparative model diagnostics facilitate the assessment of strengths and weaknesses of each model. The application of multiple models allows an analysis of simulation uncertainties arising from the selection of model structure, as compared with effects of uncertain parameters and precipitation input. Four different model structures, including conceptual and physically based approaches, are compared. In addition to runoff simulations, results for soil moisture and the runoff components of overland flow, interflow and base flow are analysed. Catchment runoff is simulated satisfactorily by all four model structures and shows only minor differences. Systematic deviations from runoff observations provide insight into model structural deficiencies. While physically based model structures capture some single runoff events better, they do not generally outperform conceptual model structures. Contributions to uncertainty in runoff simulations stemming from the choice of model structure show similar dimensions to those arising from parameter selection and the representation of precipitation input. Variations in precipitation mainly affect the general level and peaks of runoff, while different model structures lead to different simulated runoff dynamics. Large differences between the four analysed models are detected for simulations of soil moisture and, even more pronounced, runoff components. Soil moisture changes are more dynamical in the physically based model structures, which is in better agreement with observations. Streamflow contributions of overland flow are considerably lower in these models than in the more conceptual approaches. Observations of runoff components are rarely made and are not available in this study, but are shown to have high potential for an effective selection of appropriate model structures. (author)

  12. Observation-Based Modeling for Model-Based Testing

    NARCIS (Netherlands)

    Kanstrén, T.; Piel, E.; Gross, H.G.

    2009-01-01

    One of the single most important reasons that modeling and model-based testing are not yet common practice in industry is the perceived difficulty of building the models up to the level of detail and quality required for their automated processing. Models unleash their full potential only through

  13. Multi-Hypothesis Modelling Capabilities for Robust Data-Model Integration

    Science.gov (United States)

    Walker, A. P.; De Kauwe, M. G.; Lu, D.; Medlyn, B.; Norby, R. J.; Ricciuto, D. M.; Rogers, A.; Serbin, S.; Weston, D. J.; Ye, M.; Zaehle, S.

    2017-12-01

    Large uncertainty is often inherent in model predictions due to imperfect knowledge of how to describe the mechanistic processes (hypotheses) that a model is intended to represent. Yet this model hypothesis uncertainty (MHU) is often overlooked or only informally evaluated, as methods to quantify and evaluate MHU are limited. MHU increases as models become more complex, because each additional process added to a model comes with inherent MHU as well as parametric uncertainty. With the current trend of adding more processes to Earth System Models (ESMs), we are adding uncertainty that can be quantified for parameters but not for MHU. Model inter-comparison projects do allow for some consideration of hypothesis uncertainty, but in an ad hoc and non-independent fashion. This has stymied efforts to evaluate ecosystem models against data and to interpret the results mechanistically, because it is not simple to determine exactly why a model produces the results it does and to identify which model assumptions are key: models combine representations of many sub-systems and processes, each of which may be conceptualised and represented mathematically in various ways. We present a novel modelling framework, the multi-assumption architecture and testbed (MAAT), that automates the combination, generation, and execution of a model ensemble built with different representations of process. We will present the argument that multi-hypothesis modelling needs to be considered in conjunction with other capabilities (e.g. the Predictive Ecosystem Analyzer, PEcAn) and statistical methods (e.g. sensitivity analysis, data assimilation) to aid efforts in robust data-model integration and to enhance our predictive understanding of biological systems.
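
    MAAT's own interface is not reproduced in the abstract; a toy Python sketch of the core idea, enumerating and running every combination of alternative process representations (all function names and numbers are hypothetical):

        from itertools import product

        def photosynthesis_a(env): return 0.8 * env["light"]
        def photosynthesis_b(env): return 100.0 * env["light"] / (50.0 + env["light"])
        def respiration_a(env):    return 0.1 * env["temp"]
        def respiration_b(env):    return 0.05 * env["temp"] ** 1.2

        hypotheses = {
            "photosynthesis": [photosynthesis_a, photosynthesis_b],
            "respiration":    [respiration_a, respiration_b],
        }

        env = {"light": 500.0, "temp": 20.0}

        # one ensemble member per combination of process representations
        for combo in product(*hypotheses.values()):
            processes = dict(zip(hypotheses, combo))
            npp = processes["photosynthesis"](env) - processes["respiration"](env)
            print([f.__name__ for f in combo], round(npp, 2))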

  14. Process models and model-data fusion in dendroecology

    Directory of Open Access Journals (Sweden)

    Joel eGuiot

    2014-08-01

    Dendrochronology (i.e. the study of annually dated tree-ring time series) has proved to be a powerful technique for understanding tree growth. This paper intends to show the value of using ecophysiological modeling not only to understand and predict tree growth (dendroecology) but also to reconstruct past climates (dendroclimatology). Process models have been used for several decades in dendroclimatology, but it is only recently that methods of model-data fusion have led to significant progress in modeling tree growth as a function of climate and in reconstructing past climates. These model-data fusion (MDF) methods, mainly based on the Bayesian paradigm, have been shown to be powerful for both model calibration and model inversion. After a rapid survey of tree-growth modeling, we illustrate MDF with examples based on series of southern France Aleppo pines and central France oaks. These examples show that if plants experienced CO2 fertilization, this would have a significant effect on tree growth, which in turn would bias the climate reconstructions. This bias could extend to other environmental non-climatic factors that directly or indirectly affect annual ring formation and are not taken into account in classical empirical models, which supports the use of more complex process-based models. Finally, we conclude by showing the value of the data assimilation methods applied in climatology to produce climate re-analyses.

  15. `Models of' versus `Models for'. Toward an Agent-Based Conception of Modeling in the Science Classroom

    Science.gov (United States)

    Gouvea, Julia; Passmore, Cynthia

    2017-03-01

    The inclusion of the practice of "developing and using models" in the Framework for K-12 Science Education and in the Next Generation Science Standards provides an opportunity for educators to examine the role this practice plays in science and how it can be leveraged in a science classroom. Drawing on conceptions of models in the philosophy of science, we bring forward an agent-based account of models and discuss the implications of this view for enacting modeling in science classrooms. Models, according to this account, can only be understood with respect to the aims and intentions of a cognitive agent (models for), not solely in terms of how they represent phenomena in the world (models of). We present this contrast as a heuristic ('models of' versus 'models for') that can be used to help educators notice and interpret how models are positioned in standards, curriculum, and classrooms.

  16. Eclipse models

    International Nuclear Information System (INIS)

    Michel, F.C.

    1989-01-01

    Three existing eclipse models for the PSR 1957+20 pulsar are discussed in terms of their requirements and the information they yield about the pulsar wind: the interacting wind from a companion model, the magnetosphere model, and the occulting disk model. It is pointed out that the wind model requires an MHD wind from the pulsar, with enough particles that the Poynting flux of the wind can be thermalized; in this model, a large flux of energetic radiation from the pulsar is required to accompany the wind and drive the wind off the companion. The magnetosphere model requires an EM wind, which is Poynting flux dominated; the advantage of this model over the wind model is that the plasma density inside the magnetosphere can be orders of magnitude larger than in a magnetospheric tail blown back by wind interaction. The occulting disk model also requires an EM wind, so that the interaction is pushed down onto the companion surface, minimizing direct interaction of the wind with the orbiting macroscopic particles.

  17. Thermal unit availability modeling in a regional simulation model

    International Nuclear Information System (INIS)

    Yamayee, Z.A.; Port, J.; Robinett, W.

    1983-01-01

    The System Analysis Model (SAM), developed under the umbrella of PNUCC's System Analysis Committee, is capable of simulating the operation of a given load/resource scenario. This model employs a Monte-Carlo simulation to incorporate uncertainties. Among the uncertainties modeled is thermal unit availability, both for energy simulations (seasonal) and capacity simulations (hourly). This paper presents the availability modeling in the capacity and energy models. The use of regional and national data in deriving the two availability models, the interaction between the two, and the modifications made to the capacity model to reflect regional practices are presented. A sample problem is presented to illustrate the modification process. Results of modeling a nuclear unit using NERC-GADS are presented.

  18. Using the Model Coupling Toolkit to couple earth system models

    Science.gov (United States)

    Warner, J.C.; Perlin, N.; Skyllingstad, E.D.

    2008-01-01

    Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models. © 2008 Elsevier Ltd. All rights reserved.
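
    A serial toy sketch of the coupling pattern described: two models step independently and exchange fields only at predetermined synchronization intervals (in the real systems each model runs on its own processor set and MCT performs the distributed transfers; the physics below is deliberately fake):

        class Ocean:                                    # stand-in for model "1"
            def __init__(self):
                self.sst = 15.0                         # sea-surface temperature, degC
            def step(self, dt, wind_stress):
                self.sst -= 0.001 * wind_stress * dt    # toy wind-mixing cooling

        class Atmosphere:                               # stand-in for model "2"
            def __init__(self):
                self.wind_stress = 0.1                  # N/m^2
            def step(self, dt, sst):
                self.wind_stress = 0.1 + 0.002 * (sst - 15.0)

        ocean, atmos = Ocean(), Atmosphere()
        dt, sync, t_end = 60.0, 3600.0, 6 * 3600.0
        sst, stress = ocean.sst, atmos.wind_stress      # last exchanged fields

        t = 0.0
        while t < t_end:
            ocean.step(dt, stress)                      # each model uses fields
            atmos.step(dt, sst)                         # from the last exchange
            t += dt
            if t % sync == 0.0:                         # synchronization point
                sst, stress = ocean.sst, atmos.wind_stress
        print(ocean.sst, atmos.wind_stress)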

  19. Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.

    Science.gov (United States)

    Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi

    2015-02-01

    We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
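
    A compact sketch of the construction described, with the nonparametric Bayesian marginal replaced by a plain empirical cdf for brevity: Gaussianize the series through a cdf/inverse-cdf transform, fit normal-theory AR dynamics, and map simulated paths back through the marginal.

        import numpy as np
        from scipy import stats

        def fit_copula_ar1(x):
            u = stats.rankdata(x) / (len(x) + 1.0)   # empirical cdf values in (0, 1)
            z = stats.norm.ppf(u)                    # Gaussianized series
            return np.corrcoef(z[:-1], z[1:])[0, 1]  # AR(1) coefficient estimate

        def simulate_copula_ar1(rho, x_obs, n_sim, seed=0):
            rng = np.random.default_rng(seed)
            z = np.empty(n_sim)
            z[0] = rng.standard_normal()
            for t in range(1, n_sim):
                z[t] = rho * z[t - 1] + np.sqrt(1.0 - rho ** 2) * rng.standard_normal()
            # back-transform: normal cdf, then the empirical quantiles of the data
            return np.quantile(x_obs, stats.norm.cdf(z))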

  20. The Real and the Mathematical in Quantum Modeling: From Principles to Models and from Models to Principles

    Science.gov (United States)

    Plotnitsky, Arkady

    2017-06-01

    The history of mathematical modeling outside physics has been dominated by the use of classical mathematical models, C-models, primarily those of a probabilistic or statistical nature. More recently, however, quantum mathematical models, Q-models, based in the mathematical formalism of quantum theory have become more prominent in psychology, economics, and decision science. The use of Q-models in these fields remains controversial, in part because it is not entirely clear whether Q-models are necessary for dealing with the phenomena in question or whether C-models would still suffice. My aim, however, is not to assess the necessity of Q-models in these fields, but instead to reflect on what the possible applicability of Q-models may tell us about the corresponding phenomena there, vis-à-vis quantum phenomena in physics. In order to do so, I shall first discuss the key reasons for the use of Q-models in physics. In particular, I shall examine the fundamental principles that led to the development of quantum mechanics. Then I shall consider a possible role of similar principles in using Q-models outside physics. Psychology, economics, and decision science borrow already available Q-models from quantum theory, rather than derive them from their own internal principles, while quantum mechanics was derived from such principles, because there was no readily available mathematical model to handle quantum phenomena, although the mathematics ultimately used in quantum theory did in fact exist then. I shall argue, however, that the principle perspective on mathematical modeling outside physics might help us to understand better the role of Q-models in these fields and possibly to envision new models, conceptually analogous to but mathematically different from those of quantum theory, helpful or even necessary there or in physics itself. I shall suggest one possible type of such models, singularized probabilistic, SP, models, some of which are time-dependent, TDSP-models. The

  1. Optimisation of BPMN Business Models via Model Checking

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Sharp, Robin

    2013-01-01

    We present a framework for the optimisation of business processes modelled in the business process modelling language BPMN, which builds upon earlier work where we developed a model-checking-based method for the analysis of BPMN models. We define a structure for expressing optimisation goals ... for synthesized BPMN components, based on probabilistic computation tree logic and real-valued reward structures of the BPMN model, allowing for the specification of complex quantitative goals. We here present a simple algorithm, inspired by concepts from evolutionary algorithms, which iteratively generates ...
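
    The abstract does not reproduce the goal syntax; properties of the kind described, written in PRISM-style probabilistic computation tree logic with a named reward structure (labels, bounds and reward names are hypothetical), might look like:

        // require a high probability of completing within 120 time units
        P>=0.95 [ F<=120 "order_completed" ]

        // minimise the expected accumulated cost until the process ends
        R{"cost"}min=? [ F "process_end" ]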

  2. PENGGUNAAN THE ZMIJEWSKI MODEL, THE ALTMAN MODEL, DAN THE SPRINGATE MODEL SEBAGAI PREDIKTOR DELISTING

    Directory of Open Access Journals (Sweden)

    Mila Fatmawati

    2017-03-01

    The purpose of this study was to provide empirical evidence on whether the Zmijewski model, the Altman model, and the Springate model can be used as predictors of delisting. The objects of this study were companies whose shares were removed from trading (delisted) on the Indonesia Stock Exchange in 2003-2009. As a benchmark for the delisted companies, companies that were still listed on the Stock Exchange, of the same size and in the same line of business, were used. Comparison samples were taken randomly over the same period as the delisted companies. The method of analysis used was logistic regression. The results show that, of the three candidate predictor models, only the Zmijewski model could be used to predict delisting in the observation period, while the Altman model and the Springate model could not be used as delisting predictors. This is because the Zmijewski model emphasizes the amount of debt in predicting delisting: the larger the debt, the more accurate the prediction of the company's delisting. Meanwhile, the Altman and Springate models place more emphasis on profitability measures: the smaller the profitability, the more precise the prediction of the company's delisting. The delisted companies under observation tended to still be able to earn a profit, but carried a relatively large amount of debt.
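
    For reference, the three scores in their commonly cited forms (coefficients as usually published; later Altman revisions for private and non-manufacturing firms use different coefficients), written as Python helpers:

        def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
            """Altman (1968) Z-score, original public-company version."""
            return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
                    + 0.6 * mve_tl + 1.0 * sales_ta)

        def springate_s(wc_ta, ebit_ta, ebt_cl, sales_ta):
            """Springate (1978) S-score as commonly cited."""
            return 1.03 * wc_ta + 3.07 * ebit_ta + 0.66 * ebt_cl + 0.4 * sales_ta

        def zmijewski_x(ni_ta, tl_ta, ca_cl):
            """Zmijewski (1984) X-score as commonly cited; higher means more
            distress, and the 5.7 leverage term reflects the debt emphasis
            noted in the abstract."""
            return -4.3 - 4.5 * ni_ta + 5.7 * tl_ta - 0.004 * ca_cl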

  3. Process modelling on a canonical basis

    Energy Technology Data Exchange (ETDEWEB)

    Siepmann, Volker

    2006-12-20

    Based on an equation-oriented solving strategy, this thesis investigates a new approach to process modelling. Homogeneous thermodynamic state functions represent consistent mathematical models of thermodynamic properties. Such state functions of solely extensive canonical state variables are the basis of this work, as they are natural objective functions in optimisation nodes to calculate thermodynamic equilibrium regarding phase interaction and chemical reactions. Analytical state function derivatives are utilised within the solution process as well as interpreted as physical properties. By this approach, only a limited range of imaginable process constraints is considered directly, namely linear balance equations of state variables. A second-order update of source contributions to these balance equations is obtained by an additional constitutive equation system. These equations are generally dependent on state variables and first-order sensitivities, and therefore cover practically all potential process constraints. Symbolic computation technology efficiently provides sparsity and derivative information of active equations to avoid performance problems regarding robustness and computational effort. A benefit of detaching the constitutive equation system is that the structure of the main equation system remains unaffected by these constraints, and a priori information allows an efficient solving strategy and a concise error diagnosis to be implemented. A tailor-made linear algebra library handles the sparse recursive block structures efficiently. The optimisation principle for single modules of thermodynamic equilibrium is extended to host entire process models. State variables of different modules interact through balance equations, representing material flows from one module to the other. To account for reusability and encapsulation of process module details, modular process modelling is supported by a recursive module structure. The second-order solving algorithm makes it

  4. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  5. A Lagrangian mixing frequency model for transported PDF modeling

    Science.gov (United States)

    Turkeri, Hasret; Zhao, Xinyu

    2017-11-01

    In this study, a Lagrangian mixing frequency model is proposed for molecular mixing models within the framework of transported probability density function (PDF) methods. The model is based on the dissipations of mixture fraction and progress variables obtained from Lagrangian particles in PDF methods. The new model is proposed as a remedy to the difficulty in choosing the optimal model constant parameters when using conventional mixing frequency models. The model is implemented in combination with the Interaction by exchange with the mean (IEM) mixing model. The performance of the new model is examined by performing simulations of Sandia Flame D and the turbulent premixed flame from the Cambridge stratified flame series. The simulations are performed using the pdfFOAM solver which is a LES/PDF solver developed entirely in OpenFOAM. A 16-species reduced mechanism is used to represent methane/air combustion, and in situ adaptive tabulation is employed to accelerate the finite-rate chemistry calculations. The results are compared with experimental measurements as well as with the results obtained using conventional mixing frequency models. Dynamic mixing frequencies are predicted using the new model without solving additional transport equations, and good agreement with experimental data is observed.
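
    For context, the IEM closure that the proposed frequency feeds has the standard form

        \frac{d\phi^{*}}{dt} = -\frac{1}{2} \, C_{\phi} \, \omega \, \bigl( \phi^{*} - \langle \phi \rangle \bigr)

    where \phi^{*} is the particle scalar, \langle \phi \rangle the mean, and C_{\phi} \approx 2 the usual model constant; conventionally the mixing frequency is taken from turbulence quantities as \omega = \varepsilon / k, and the paper's contribution is to construct \omega from Lagrangian scalar dissipation instead.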

  6. Modelling MIZ dynamics in a global model

    Science.gov (United States)

    Rynders, Stefanie; Aksenov, Yevgeny; Feltham, Daniel; Nurser, George; Naveira Garabato, Alberto

    2016-04-01

    Exposure of large, previously ice-covered areas of the Arctic Ocean to the wind and surface ocean waves results in the Arctic pack ice cover becoming more fragmented and mobile, with large regions of ice cover evolving into the Marginal Ice Zone (MIZ). The need for better climate predictions, along with growing economic activity in the Polar Oceans, necessitates climate and forecasting models that can simulate fragmented sea ice with a greater fidelity. Current models are not fully fit for the purpose, since they neither model surface ocean waves in the MIZ, nor account for the effect of floe fragmentation on drag, nor include sea ice rheology that represents both the now thinner pack ice and MIZ ice dynamics. All these processes affect the momentum transfer to the ocean. We present initial results from a global ocean model NEMO (Nucleus for European Modelling of the Ocean) coupled to the Los Alamos sea ice model CICE. The model setup implements a novel rheological formulation for sea ice dynamics, accounting for ice floe collisions, thus offering a seamless framework for pack ice and MIZ simulations. The effect of surface waves on ice motion is included through wave pressure and the turbulent kinetic energy of ice floes. In the multidecadal model integrations we examine MIZ and basin scale sea ice and oceanic responses to the changes in ice dynamics. We analyse model sensitivities and attribute them to key sea ice and ocean dynamical mechanisms. The results suggest that the effect of the new ice rheology is confined to the MIZ. However with the current increase in summer MIZ area, which is projected to continue and may become the dominant type of sea ice in the Arctic, we argue that the effects of the combined sea ice rheology will be noticeable in large areas of the Arctic Ocean, affecting sea ice and ocean. With this study we assert that to make more accurate sea ice predictions in the changing Arctic, models need to include MIZ dynamics and physics.

  7. Graphical Rasch models

    DEFF Research Database (Denmark)

    Kreiner, Svend; Christensen, Karl Bang

    Rasch models; Partial Credit models; Rating Scale models; Item bias; Differential item functioning; Local independence; Graphical models

  8. Transforming Graphical System Models to Graphical Attack Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, Rene Rydhof

    2016-01-01

    Manually identifying possible attacks on an organisation is a complex undertaking; many different factors must be considered, and the resulting attack scenarios can be complex and hard to maintain as the organisation changes. System models provide a systematic representation of organisations ... approach to transforming graphical system models to graphical attack models in the form of attack trees. Based on an asset in the model, our transformations result in an attack tree that represents attacks by all possible actors in the model, after which the actor in question has obtained the asset.

  9. Integration of Simulink Models with Component-based Software Models

    DEFF Research Database (Denmark)

    Marian, Nicolae

    2008-01-01

    Model-based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model-based design involves the use of multiple models that represent different views of a system, having different semantics ... of abstract system descriptions. Usually, in mechatronic systems, design proceeds by iterating model construction, model analysis, and model transformation. Constructing a MATLAB/Simulink model, a plant and controller behavior is simulated using graphical blocks to represent mathematical and logical ... constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework, developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has

  10. Pavement Aging Model by Response Surface Modeling

    Directory of Open Access Journals (Sweden)

    Manzano-Ramírez A.

    2011-10-01

    In this work, surface course aging was modeled by Response Surface Methodology (RSM). The Marshall specimens were placed in a conventional oven under time and temperature conditions established on the basis of the environmental factors of the region where the surface course is constructed with AC-20 from the Ing. Antonio M. Amor refinery. Volatilized material (VM), load resistance increment (ΔL), and flow resistance increment (ΔF) models were developed by the RSM. Cylindrical specimens with real aging were extracted from the pilot surface course to evaluate the error of the models. The VM model was adequate; in contrast, the ΔL and ΔF models were only almost adequate, with an error of 20%, which was attributed to other environmental factors that were not considered at the beginning of the research.
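
    As an illustration of the RSM fitting step, the sketch below fits a second-order response surface in oven time and temperature by least squares; the data values are invented for the example and do not come from the study:

        import numpy as np

        # Hypothetical aging data: oven time (h), temperature (degC), volatilized material (%).
        t = np.array([24.0, 24.0, 24.0, 48.0, 48.0, 48.0, 72.0, 72.0, 72.0])
        T = np.array([60.0, 70.0, 80.0, 60.0, 70.0, 80.0, 60.0, 70.0, 80.0])
        vm = np.array([0.8, 1.1, 1.4, 1.1, 1.5, 2.0, 1.5, 2.1, 2.9])

        # Second-order RSM design matrix: 1, t, T, t*T, t^2, T^2.
        X = np.column_stack([np.ones_like(t), t, T, t * T, t**2, T**2])
        beta, *_ = np.linalg.lstsq(X, vm, rcond=None)

        def predict_vm(time_h, temp_c):
            return beta @ [1.0, time_h, temp_c, time_h * temp_c, time_h**2, temp_c**2]

        print(predict_vm(60.0, 75.0))  # interpolated VM at an untested condition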

  11. Variable selection and model choice in geoadditive regression models.

    Science.gov (United States)

    Kneib, Thomas; Hothorn, Torsten; Tutz, Gerhard

    2009-06-01

    Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection.
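
    The selection mechanism can be illustrated with plain componentwise L2-boosting on synthetic data; univariate linear base learners stand in for the penalized splines and tensor-product terms of the paper, so this is a simplified sketch rather than the authors' algorithm:

        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 200, 6
        X = rng.normal(size=(n, p))
        y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=n)  # only x0, x3 matter

        nu, n_iter = 0.1, 300              # step length and number of boosting iterations
        coef = np.zeros(p)
        resid = y - y.mean()
        for _ in range(n_iter):
            b = X.T @ resid / (X**2).sum(axis=0)           # fit each base learner to residuals
            rss = ((resid[:, None] - X * b) ** 2).sum(axis=0)
            j = rss.argmin()                               # componentwise selection step
            coef[j] += nu * b[j]
            resid -= nu * b[j] * X[:, j]

        print(np.round(coef, 2))   # covariates never selected keep a zero coefficient

    Early stopping of the iteration count then plays the role of the model-choice tuning parameter.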

  12. Model coupler for coupling of atmospheric, oceanic, and terrestrial models

    International Nuclear Information System (INIS)

    Nagai, Haruyasu; Kobayashi, Takuya; Tsuduki, Katsunori; Kim, Keyong-Ok

    2007-02-01

    A numerical simulation system, SPEEDI-MP, which is applicable to various environmental studies, consists of dynamical models and material transport models for the atmospheric, terrestrial, and oceanic environments, meteorological and geographical databases for model inputs, and system utilities for file management, visualization, analysis, etc., using graphical user interfaces (GUIs). As a numerical simulation tool, a model coupling program (model coupler) has been developed. It controls parallel calculations of several models and data exchanges among them to realize the dynamical coupling of the models. It is applicable to any model with a three-dimensional structured grid system, which is used by most environmental and hydrodynamic models. A coupled model system for water circulation has been constructed from atmosphere, ocean, wave, hydrology, and land-surface models using the model coupler. Performance tests of the coupled model system for water circulation were also carried out for the flood event in Saudi Arabia in January 2005 and the storm surge caused by Hurricane Katrina in August 2005. (author)
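
    A toy illustration of the coupling pattern described here, with two invented single-field "models" advancing in lockstep and exchanging data through a coupler routine; SPEEDI-MP's actual coupler manages parallel processes and full three-dimensional structured grids:

        import numpy as np

        class ToyAtmosphere:
            def __init__(self, n):
                self.rain = np.zeros(n)
            def step(self, sst):
                self.rain = 0.1 * np.maximum(sst - 290.0, 0.0)   # invented physics

        class ToyOcean:
            def __init__(self, n):
                self.sst = np.full(n, 291.0)
            def step(self, rain):
                self.sst -= 0.01 * rain                          # invented physics

        def couple(atm, ocn, n_steps):
            for _ in range(n_steps):
                # Exchange: each model receives the other's field from the previous
                # step, then both advance one time step.
                sst, rain = ocn.sst.copy(), atm.rain.copy()
                atm.step(sst)
                ocn.step(rain)

        atm, ocn = ToyAtmosphere(8), ToyOcean(8)
        couple(atm, ocn, n_steps=10)
        print(float(ocn.sst[0]))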

  13. ICRF modelling

    International Nuclear Information System (INIS)

    Phillips, C.K.

    1985-12-01

    This lecture provides a survey of the methods used to model fast magnetosonic wave coupling, propagation, and absorption in tokamaks. Three distinct types of modelling codes, whose validity and limitations will be contrasted, are covered: discrete models that utilize ray tracing techniques, approximate continuous field models based on a parabolic approximation of the wave equation, and full field models derived using finite difference techniques. Inclusion of mode conversion effects in these models and modification of the minority distribution function will also be discussed. The lecture concludes with a presentation of time-dependent global transport simulations of ICRF-heated tokamak discharges obtained in conjunction with the ICRF modelling codes. 52 refs., 15 figs

  14. Event Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2001-01-01

    The purpose of this chapter is to discuss conceptual event modeling within a context of information modeling. Traditionally, information modeling has been concerned with the modeling of a universe of discourse in terms of information structures. However, most interesting universes of discourse...... are dynamic and we present a modeling approach that can be used to model such dynamics.We characterize events as both information objects and change agents (Bækgaard 1997). When viewed as information objects events are phenomena that can be observed and described. For example, borrow events in a library can...

  15. Degeneracy of time series models: The best model is not always the correct model

    International Nuclear Information System (INIS)

    Judd, Kevin; Nakamura, Tomomichi

    2006-01-01

    There are a number of good techniques for finding, in some sense, the best model of a deterministic system given a time series of observations. We examine a problem called model degeneracy, which has the consequence that even when a perfect model of a system exists, one does not find it using the best techniques currently available. The problem is illustrated using global polynomial models and the theory of Groebner bases

  16. On the shell model connection of the cluster model

    International Nuclear Information System (INIS)

    Cseh, J.; Levai, G.; Kato, K.

    2000-01-01

    Complete text of publication follows. The interrelation of basic nuclear structure models is a longstanding problem. The connection between the spherical shell model and the quadrupole collective model has been studied extensively, and symmetry considerations proved to be especially useful in this respect. A collective band was interpreted in the shell model language long ago as a set of states (of the valence nucleons) with a specific SU(3) symmetry. Furthermore, the energies of these rotational states are obtained to a good approximation as eigenvalues of an SU(3) dynamically symmetric shell model Hamiltonian. On the other hand, the relation of the shell model and the cluster model is less well explored. The connection of the harmonic oscillator (i.e. SU(3)) bases of the two approaches is known, but it was established only for the unrealistic harmonic oscillator interactions. Here we investigate the question: can an SU(3) dynamically symmetric interaction provide a similar connection between the spherical shell model and the cluster model, like the one between the shell and collective models? In other words: whether or not the energies of the states of the cluster bands, defined by specific SU(3) symmetries, can be obtained from a shell model Hamiltonian (with SU(3) dynamical symmetry). We carried out calculations within the framework of the semimicroscopic algebraic cluster model, in which not only is the cluster model space obtained from the full shell model space by an SU(3) symmetry-dictated truncation, but SU(3) dynamically symmetric interactions are also applied. Hamiltonians of this kind have proved successful in describing the gross features of cluster states in a wide energy range. The novel feature of the present work is that we apply exclusively shell model interactions. The energies obtained from such a Hamiltonian for several bands of the (¹²C, ¹⁴C, ¹⁶O, ²⁰Ne, ⁴⁰Ca) + α systems turn out to be in good agreement with the experimental

  17. Approximating chiral quark models with linear σ-models

    International Nuclear Information System (INIS)

    Broniowski, Wojciech; Golli, Bojan

    2003-01-01

    We study the approximation of chiral quark models with simpler models, obtained via gradient expansion. The resulting Lagrangian of the type of the linear σ-model contains, at the lowest level of the gradient-expanded meson action, an additional term of the form (1/2)A(σ∂_μσ + π∂_μπ)². We investigate the dynamical consequences of this term and its relevance to the phenomenology of the soliton models of the nucleon. It is found that the inclusion of the new term allows for a more efficient approximation of the underlying quark theory, especially in those cases where dynamics allows for a large deviation of the chiral fields from the chiral circle, such as in quark models with non-local regulators. This is of practical importance, since the σ-models with valence quarks only are technically much easier to treat and simpler to solve than the quark models with the full-fledged Dirac sea

  18. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  19. Modeling arson - An exercise in qualitative model building

    Science.gov (United States)

    Heineke, J. M.

    1975-01-01

    A detailed example is given of the role of von Neumann and Morgenstern's 1944 'expected utility theorem' (in the theory of games and economic behavior) in qualitative model building. Specifically, an arsonist's decision as to the amount of time to allocate to arson and related activities is modeled, and the responsiveness of this time allocation to changes in various policy parameters is examined. Both the activity modeled and the method of presentation are intended to provide an introduction to the scope and power of the expected utility theorem in modeling situations of 'choice under uncertainty'. The robustness of such a model is shown to vary inversely with the number of preference restrictions used in the analysis. The fewer the restrictions, the wider is the class of agents to which the model is applicable, and accordingly more confidence is put in the derived results. A methodological discussion on modeling human behavior is included.

  20. Air Quality Dispersion Modeling - Alternative Models

    Science.gov (United States)

    Models not listed in Appendix W that can be used in regulatory applications with case-by-case justification to the Reviewing Authority, as noted in Section 3.2, Use of Alternative Models, in Appendix W.

  1. On the shell-model-connection of the cluster model

    International Nuclear Information System (INIS)

    Cseh, J.

    2000-01-01

    Complete text of publication follows. The interrelation of basic nuclear structure models is a longstanding problem. The connection between the spherical shell model and the quadrupole collective model has been studied extensively, and symmetry considerations proved to be especially useful in this respect. A collective band was interpreted in the shell model language long ago [1] as a set of states (of the valence nucleons) with a specific SU(3) symmetry. Furthermore, the energies of these rotational states are obtained to a good approximation as eigenvalues of an SU(3) dynamically symmetric shell model Hamiltonian. On the other hand, the relation of the shell model and the cluster model is less well explored. The connection of the harmonic oscillator (i.e. SU(3)) bases of the two approaches is known [2], but it was established only for the unrealistic harmonic oscillator interactions. Here we investigate the question: can an SU(3) dynamically symmetric interaction provide a similar connection between the spherical shell model and the cluster model, like the one between the shell and collective models? In other words: whether or not the energies of the states of the cluster bands, defined by specific SU(3) symmetries, can be obtained from a shell model Hamiltonian (with SU(3) dynamical symmetry). We carried out calculations within the framework of the semimicroscopic algebraic cluster model [3,4] in order to find an answer to this question, which seems to be affirmative. In particular, the energies obtained from such a Hamiltonian for several bands of the (¹²C, ¹⁴C, ¹⁶O, ²⁰Ne, ⁴⁰Ca) + α systems turn out to be in good agreement with the experimental values. The present results show that the simple and transparent SU(3) connection between the spherical shell model and the cluster model is valid not only for the harmonic oscillator interactions, but for much more general (SU(3) dynamically symmetric) Hamiltonians as well, which result in realistic energy spectra. Via

  2. Wind tunnel modeling of roadways: Comparison with mathematical models

    International Nuclear Information System (INIS)

    Heidorn, K.; Davies, A.E.; Murphy, M.C.

    1991-01-01

    The assessment of air quality impacts from roadways is a major concern to urban planners. In order to assess future road and building configurations, a number of techniques have been developed, including mathematical models, which simulate traffic emissions and atmospheric dispersion through a series of mathematical relationships, and physical models, which simulate emissions and dispersion by scaling these processes in a wind tunnel. Two roadway mathematical models, HIWAY-2 and CALINE-4, were applied to a proposed development in a large urban area. Physical modeling procedures developed by Rowan Williams Davies and Irwin Inc. (RWDI) in the form of line source simulators were also applied, and the resulting carbon monoxide concentrations were compared. The results indicated a factor-of-two agreement between the mathematical and physical models. The physical model, however, reacted to changes in building massing and configuration; the mathematical models did not, since no provision for such changes is included in them. In general, the RWDI model resulted in higher concentrations than either HIWAY-2 or CALINE-4. Where there was underprediction, it was often due to shielding of the receptor by surrounding buildings. Comparison of these three models with the CALTRANS Tracer Dispersion Experiment showed good results, although concentrations were consistently underpredicted

  3. Analysis of deregulation models; Denryoku shijo jiyuka model no bunseki

    Energy Technology Data Exchange (ETDEWEB)

    Yajima, M. [Central Research Institute of Electric Power Industry, Tokyo (Japan)

    1996-04-01

    Trends toward power market deregulation were investigated in Japan and 16 other countries, and various deregulation models were examined and evaluated for their merits and demerits. There are four basic models: the franchise bidding model, the competitive bidding in power generation model, the wholesale or retail wheeling model, and the mandatory or voluntary pool model. Power market deregulation has been a global tendency since the second half of the 1970s, with various models adopted by different countries. Of these models, it is the retail wheeling model and the pool models (open access models) that allow the final customer to select power suppliers, and the number of countries adopting these models is increasing. These models are characterized in that the disintegration of the vertical transmission-distribution integration (separation of distribution service and retail supply service) and the liberalization of the retail market are accomplished simultaneously. The pool models, in particular, are enjoying favor because conditions for fair competition have already been prepared and because they are believed to be highly efficient. In Japan and France, where importance is attached to atomic power generation, the competitive bidding model is adopted as a means to harmonize the introduction of competition into the source development and power generation sectors. 7 refs., 4 tabs.

  4. Modelling Overview

    DEFF Research Database (Denmark)

    Larsen, Lars Bjørn; Vesterager, Johan

    This report provides an overview of the existing models of global manufacturing, describes the required modelling views and associated methods and identifies tools, which can provide support for this modelling activity.The model adopted for global manufacturing is that of an extended enterprise s...

  5. Statistical Model Checking of Rich Models and Properties

    DEFF Research Database (Denmark)

    Poulsen, Danny Bøgsted

    in undecidability issues for the traditional model checking approaches. Statistical model checking has proven itself a valuable supplement to model checking and this thesis is concerned with extending this software validation technique to stochastic hybrid systems. The thesis consists of two parts: the first part...... motivates why existing model checking technology should be supplemented by new techniques. It also contains a brief introduction to probability theory and concepts covered by the six papers making up the second part. The first two papers are concerned with developing online monitoring techniques...... systems. The fifth paper shows how stochastic hybrid automata are useful for modelling biological systems and the final paper is concerned with showing how statistical model checking is efficiently distributed. In parallel with developing the theory contained in the papers, a substantial part of this work...

  6. Geochemistry Model Validation Report: Material Degradation and Release Model

    Energy Technology Data Exchange (ETDEWEB)

    H. Stockman

    2001-09-28

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to "Technical Work Plan for: Waste Package Design Description for LA" (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, "Analyses and Models" (Ref. 2), and prepared in accordance with the technical work plan (Ref. 17).

  7. Geochemistry Model Validation Report: Material Degradation and Release Model

    International Nuclear Information System (INIS)

    Stockman, H.

    2001-01-01

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to "Technical Work Plan for: Waste Package Design Description for LA" (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, "Analyses and Models" (Ref. 2), and prepared in accordance with the technical work plan (Ref. 17)

  8. Assessing physical models used in nuclear aerosol transport models

    International Nuclear Information System (INIS)

    McDonald, B.H.

    1987-01-01

    Computer codes used to predict the behaviour of aerosols in water-cooled reactor containment buildings after severe accidents contain a variety of physical models. Special models are in place for describing agglomeration processes where small aerosol particles combine to form larger ones. Other models are used to calculate the rates at which aerosol particles are deposited on building structures. Condensation of steam on aerosol particles is currently a very active area in aerosol modelling. In this paper, the physical models incorporated in the current available international codes for all of these processes are reviewed and documented. There is considerable variation in models used in different codes, and some uncertainties exist as to which models are superior. 28 refs

  9. Particle Tracking Model (PTM) with Coastal Modeling System (CMS)

    Science.gov (United States)

    2015-11-04

    The Particle Tracking Model (PTM) is a Lagrangian...currents and waves. The Coastal Inlets Research Program (CIRP) supports the PTM with the Coastal Modeling System (CMS), which provides coupled wave...and current forcing for PTM simulations. CMS-PTM is implemented in the Surface-water Modeling System, a GUI environment for input development

  10. A BRDF statistical model applying to space target materials modeling

    Science.gov (United States)

    Liu, Chenghao; Li, Zhi; Xu, Can; Tian, Qichen

    2017-10-01

    In order to address the poor performance of the five-parameter semi-empirical model in fitting high-density measured BRDF data, a refined statistical BRDF model suitable for modeling multiple classes of space target materials is proposed. The refined model improves on the Torrance-Sparrow model while retaining the modeling advantages of the five-parameter model. Compared with the existing empirical model, it contains six simple parameters, which can approximate the roughness distribution of the material surface, the intensity of the Fresnel reflectance phenomenon, and the attenuation of the reflected light's brightness as the azimuth angle changes. The model achieves fast parameter inversion with no extra loss of accuracy. A genetic algorithm was used to invert the parameters for 11 different samples of materials commonly used on space targets, and the fitting errors for all materials were below 6%, much lower than those of the five-parameter model. The performance of the refined model is verified by comparing the fitting results for the three samples at different incident zenith angles at 0° azimuth. Finally, three-dimensional visualizations of these samples over the upper hemisphere are given, in which the strength of the optical scattering of different materials can be clearly seen. This demonstrates the refined model's ability to characterize materials.
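
    The inversion step can be illustrated with a genetic-style global optimizer. The sketch below fits a deliberately simplified two-lobe (diffuse plus rough-specular) BRDF to synthetic in-plane measurements using SciPy's differential evolution; the functional form, parameters, and bounds are placeholders, not the paper's six-parameter model:

        import numpy as np
        from scipy.optimize import differential_evolution

        def brdf(theta_i, theta_r, kd, ks, m):
            # Simplified diffuse + Torrance-Sparrow-style specular lobe; theta_h is a
            # crude in-plane half-angle approximation.
            theta_h = 0.5 * (theta_i + theta_r)
            spec = ks * np.exp(-np.tan(theta_h) ** 2 / m**2) / np.cos(theta_h) ** 4
            return kd / np.pi + spec

        rng = np.random.default_rng(1)
        ti = np.deg2rad(np.linspace(10, 60, 30))
        tr = np.deg2rad(np.linspace(5, 55, 30))
        data = brdf(ti, tr, 0.3, 0.8, 0.25) * (1 + 0.02 * rng.normal(size=ti.size))

        def loss(p):
            return np.mean((brdf(ti, tr, *p) - data) ** 2)

        result = differential_evolution(loss, bounds=[(0, 1), (0, 2), (0.05, 1)], seed=1)
        print(result.x)   # recovered (kd, ks, m)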

  11. EIA model documentation: Petroleum market model of the national energy modeling system

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-12-28

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. Documentation of the model is in accordance with EIA's legal obligation to provide adequate documentation in support of its models. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level.

  12. EIA model documentation: Petroleum market model of the national energy modeling system

    International Nuclear Information System (INIS)

    1995-01-01

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. Documentation of the model is in accordance with EIA's legal obligation to provide adequate documentation in support of its models. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level

  13. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    Science.gov (United States)

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead

  14. Stochastic biomathematical models with applications to neuronal modeling

    CERN Document Server

    Batzel, Jerry; Ditlevsen, Susanne

    2013-01-01

    Stochastic biomathematical models are becoming increasingly important as new light is shed on the role of noise in living systems. In certain biological systems, stochastic effects may even enhance a signal, thus providing a biological motivation for the noise observed in living systems. Recent advances in stochastic analysis and increasing computing power facilitate the analysis of more biophysically realistic models, and this book provides researchers in computational neuroscience and stochastic systems with an overview of recent developments. Key concepts are developed in chapters written by experts in their respective fields. Topics include: one-dimensional homogeneous diffusions and their boundary behavior, large deviation theory and its application in stochastic neurobiological models, a review of mathematical methods for stochastic neuronal integrate-and-fire models, stochastic partial differential equation models in neurobiology, and stochastic modeling of spreading cortical depression.

  15. Modelling of an homogeneous equilibrium mixture model

    International Nuclear Information System (INIS)

    Bernard-Champmartin, A.; Poujade, O.; Mathiaud, J.; Mathiaud, J.; Ghidaglia, J.M.

    2014-01-01

    We present here a model for two-phase flows which is simpler than the 6-equation models (with two densities, two velocities, two temperatures) but more accurate than the standard 4-equation mixture models (with two densities, one velocity and one temperature). We are interested in the case when the two phases have been interacting long enough for the drag force to be small but still not negligible. The so-called Homogeneous Equilibrium Mixture (HEM) model that we present deals with both mixture and relative quantities, allowing in particular to follow both a mixture velocity and a relative velocity. This relative velocity is not tracked by a conservation law but by a closure law (drift relation), whose expression is related to the drag force terms of the two-phase flow. After the derivation of the model, a stability analysis and numerical experiments are presented. (authors)
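
    The mixture bookkeeping behind this formulation is simple arithmetic; the sketch below computes mixture and relative quantities for an invented two-phase state, with the drift closure reduced to a placeholder terminal-velocity-style balance (the real drift relation depends on the model's drag terms):

        # Invented two-phase state: volume fractions, densities, velocities of phases 1 and 2.
        a1, a2 = 0.3, 0.7
        rho1, rho2 = 1000.0, 1.2
        u1, u2 = 0.8, 1.5

        rho = a1 * rho1 + a2 * rho2                     # mixture density
        u = (a1 * rho1 * u1 + a2 * rho2 * u2) / rho     # mass-weighted mixture velocity
        ur = u1 - u2                                    # relative (drift) velocity

        def drift_closure(rho_d, rho_c, d=1e-3, g=9.81, cd=0.44):
            # Placeholder closure: terminal velocity of a drop of diameter d from a
            # buoyancy/drag balance, standing in for the model's drift relation.
            return (4.0 * g * d * (rho_d - rho_c) / (3.0 * cd * rho_c)) ** 0.5

        print(rho, u, ur, drift_closure(rho1, rho2))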

  16. Temperature-based modeling of reference evapotranspiration using several artificial intelligence models: application of different modeling scenarios

    Science.gov (United States)

    Sanikhani, Hadi; Kisi, Ozgur; Maroufpoor, Eisa; Yaseen, Zaher Mundher

    2018-02-01

    The establishment of an accurate computational model for predicting the reference evapotranspiration (ET0) process is highly essential for several agricultural and hydrological applications, especially for rural water resource systems, water use allocations, utilization and demand assessments, and the management of irrigation systems. In this research, six artificial intelligence (AI) models were investigated for modeling ET0 using a small set of climatic inputs derived from the minimum and maximum air temperatures and extraterrestrial radiation. The investigated models were multilayer perceptron (MLP), generalized regression neural networks (GRNN), radial basis neural networks (RBNN), integrated adaptive neuro-fuzzy inference systems with grid partitioning and subtractive clustering (ANFIS-GP and ANFIS-SC), and gene expression programming (GEP). The monthly time scale data set was collected at the Antalya and Isparta stations, which are located in the Mediterranean Region of Turkey. The Hargreaves-Samani (HS) equation and its calibrated version (CHS) were used to perform a verification analysis of the established AI models. The validation was based on multiple quantitative metrics, including root mean squared error (RMSE), mean absolute error (MAE), correlation coefficient (R²), coefficient of residual mass (CRM), and Nash-Sutcliffe efficiency coefficient (NS). The results of the investigated models were highly practical and reliable for the case studies. At the Antalya station, the performance of the GEP and GRNN models was better than that of the other investigated models, while the performance of the RBNN and ANFIS-SC models was best at the Isparta station. Except for the MLP model, all the investigated models presented better performance accuracy than the HS and CHS empirical models when applied in a cross-station scenario. A cross-station scenario examination implies the
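
    For reference, the Hargreaves-Samani benchmark used in the verification needs only the temperature extremes and extraterrestrial radiation. A direct implementation of the standard form of the equation, with Ra already expressed as equivalent evaporation (mm/day) and the example inputs invented:

        def et0_hargreaves_samani(t_max, t_min, ra, c=0.0023):
            """Reference evapotranspiration (mm/day) via Hargreaves-Samani.

            t_max, t_min : daily maximum/minimum air temperature (degC)
            ra           : extraterrestrial radiation as equivalent evaporation (mm/day)
            c            : empirical coefficient; 0.0023 in the original form, and the
                           value recalibrated locally in a CHS-style variant
            """
            t_mean = 0.5 * (t_max + t_min)
            return c * (t_mean + 17.8) * (t_max - t_min) ** 0.5 * ra

        print(et0_hargreaves_samani(t_max=31.0, t_min=18.0, ra=15.2))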

  17. Deformed baryons: constituent quark model vs. bag model

    International Nuclear Information System (INIS)

    Iwamura, Y.; Nogami, Y.

    1985-01-01

    Recently Bhaduri et al. developed a nonrelativistic constituent quark model for deformed baryons. In that model the quarks move in a deformable mean field, and the deformation parameters are determined by minimizing the quark energy subject to the constraint of volume conservation. This constraint is an ad hoc assumption. It is shown that, starting with a bag model, a model similar to that of Bhaduri et al. can be constructed. The deformation parameters are determined by the pressure balance on the bag surface. There is, however, a distinct difference between the two models with respect to the state dependence of the ''volume''. Implications of this difference are discussed

  18. Business Model Innovation: How Iconic Business Models Emerge

    OpenAIRE

    Mikhalkina, T.; Cabantous, L.

    2015-01-01

    Despite ample research on the topic of business model innovation, little is known about the cognitive processes whereby some innovative business models gain the status of iconic representations of particular types of firms. This study addresses the question: How do iconic business models emerge? In other words: How do innovative business models become prototypical exemplars for new categories of firms? We focus on the case of Airbnb, and analyze how six mainstream business media publications ...

  19. ECONOMIC MODELING STOCKS CONTROL SYSTEM: SIMULATION MODEL

    OpenAIRE

    Климак, М.С.; Войтко, С.В.

    2016-01-01

    Theoretical and applied aspects of the development of simulation models to predict the optimal development of production systems that create tangible products and services are considered. It is proved that the process of inventory control requires economic and mathematical modeling in view of the complexity of theoretical studies. A simulation model of stock control is presented that allows management decisions to be made in production logistics.

  20. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    Science.gov (United States)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in the model input as well as from non-uniqueness in selecting different AI methods. Using a single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of using a single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of the AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from the input data to the AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN), and neurofuzzy (NF) models to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model was nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when a single AI model is used.
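
    The averaging step has a compact form: with BIC as the criterion, each model's weight is proportional to exp(-ΔBIC/2), and the BMA variance is the weighted sum of within-model variances plus a between-model spread term. A minimal sketch with invented per-model estimates:

        import numpy as np

        # Hypothetical outputs of three AI models at one location:
        # hydraulic conductivity estimate, within-model variance, and BIC.
        means = np.array([12.0, 15.5, 14.0])
        wivar = np.array([1.2, 2.0, 1.5])
        bic = np.array([210.0, 211.0, 225.0])

        w = np.exp(-0.5 * (bic - bic.min()))
        w /= w.sum()                                        # BIC-based model weights

        bma_mean = w @ means
        bma_var = w @ wivar + w @ (means - bma_mean) ** 2   # within- + between-model variance

        print(np.round(w, 3), round(bma_mean, 2), round(bma_var, 2))

    Note how the third (high-BIC) model receives a near-zero weight, mirroring the parsimony-driven near-discard of the NF model described above.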

  1. Climate simulations for 1880-2003 with GISS modelE

    International Nuclear Information System (INIS)

    Hansen, J.; Lacis, A.; Miller, R.; Schmidt, G.A.; Russell, G.; Canuto, V.; Del Genio, A.; Hall, T.; Hansen, J.; Sato, M.; Kharecha, P.; Nazarenko, L.; Aleinov, I.; Bauer, S.; Chandler, M.; Faluvegi, G.; Jonas, J.; Ruedy, R.; Lo, K.; Cheng, Y.; Lacis, A.; Schmidt, G.A.; Del Genio, A.; Miller, R.; Cairns, B.; Hall, T.; Baum, E.; Cohen, A.; Fleming, E.; Jackman, C.; Friend, A.; Kelley, M.

    2007-01-01

    We carry out climate simulations for 1880-2003 with GISS modelE driven by ten measured or estimated climate forcings. An ensemble of climate model runs is carried out for each forcing acting individually and for all forcing mechanisms acting together. We compare side-by-side simulated climate change for each forcing, all forcings, observations, unforced variability among model ensemble members, and, if available, observed variability. Discrepancies between observations and simulations with all forcings are due to model deficiencies, inaccurate or incomplete forcing, and imperfect observations. Although there are notable discrepancies between model and observations, the fidelity is sufficient to encourage use of the model for simulations of future climate change. By using a fixed well-documented model and accurately defining the 1880-2003 forcing, we aim to provide a benchmark against which the effect of improvements in the model, climate forcing, and observations can be tested. Principal model deficiencies include unrealistically weak tropical El Nino-like variability and a poor distribution of sea ice, with too much sea ice in the Northern Hemisphere and too little in the Southern Hemisphere. The greatest uncertainties in the forcing are the temporal and spatial variations of anthropogenic aerosols and their indirect effects on clouds. (authors)

  2. ModelMage: a tool for automatic model generation, selection and management.

    Science.gov (United States)

    Flöttmann, Max; Schaber, Jörg; Hoops, Stephan; Klipp, Edda; Mendes, Pedro

    2008-01-01

    Mathematical modeling of biological systems usually involves implementing, simulating, and discriminating several candidate models that represent alternative hypotheses. Generating and managing these candidate models is a tedious and difficult task and can easily lead to errors. ModelMage is a tool that facilitates management of candidate models. It is designed for the easy and rapid development, generation, simulation, and discrimination of candidate models. The main idea of the program is to automatically create a defined set of model alternatives from a single master model. The user provides only one SBML-model and a set of directives from which the candidate models are created by leaving out species, modifiers or reactions. After generating models the software can automatically fit all these models to the data and provides a ranking for model selection, in case data is available. In contrast to other model generation programs, ModelMage aims at generating only a limited set of models that the user can precisely define. ModelMage uses COPASI as a simulation and optimization engine. Thus, all simulation and optimization features of COPASI are readily incorporated. ModelMage can be downloaded from http://sysbio.molgen.mpg.de/modelmage and is distributed as free software.
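
    The candidate-generation idea (one master model, variants produced by leaving elements out) can be sketched generically; the toy reaction strings and set-difference mechanics below are invented for illustration and say nothing about ModelMage's actual SBML handling:

        from itertools import combinations

        master = {"R1: S -> P", "R2: S + M -> P", "R3: P -> 0"}   # toy master reaction set
        optional = ["R2: S + M -> P", "R3: P -> 0"]               # directives: may be left out

        candidates = []
        for k in range(len(optional) + 1):
            for dropped in combinations(optional, k):
                candidates.append(sorted(set(master) - set(dropped)))

        for c in candidates:
            print(c)   # four candidate models generated from a single master model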

  3. Modelling binary data

    CERN Document Server

    Collett, David

    2002-01-01

    INTRODUCTION: Some Examples; The Scope of this Book; Use of Statistical Software. STATISTICAL INFERENCE FOR BINARY DATA: The Binomial Distribution; Inference about the Success Probability; Comparison of Two Proportions; Comparison of Two or More Proportions. MODELS FOR BINARY AND BINOMIAL DATA: Statistical Modelling; Linear Models; Methods of Estimation; Fitting Linear Models to Binomial Data; Models for Binomial Response Data; The Linear Logistic Model; Fitting the Linear Logistic Model to Binomial Data; Goodness of Fit of a Linear Logistic Model; Comparing Linear Logistic Models; Linear Trend in Proportions; Comparing Stimulus-Response Relationships; Non-Convergence and Overfitting; Some other Goodness of Fit Statistics; Strategy for Model Selection; Predicting a Binary Response Probability. BIOASSAY AND SOME OTHER APPLICATIONS: The Tolerance Distribution; Estimating an Effective Dose; Relative Potency; Natural Response; Non-Linear Logistic Regression Models; Applications of the Complementary Log-Log Model. MODEL CHECKING: Definition of Re...

  4. Model theory

    CERN Document Server

    Chang, CC

    2012-01-01

    Model theory deals with a branch of mathematical logic showing connections between a formal language and its interpretations or models. This is the first and most successful textbook in logical model theory. Extensively updated and corrected in 1990 to accommodate developments in model theoretic methods - including classification theory and nonstandard analysis - the third edition added entirely new sections, exercises, and references. Each chapter introduces an individual method and discusses specific applications. Basic methods of constructing models include constants, elementary chains, Sko

  5. Multistate Model Builder (MSMB): a flexible editor for compact biochemical models.

    Science.gov (United States)

    Palmisano, Alida; Hoops, Stefan; Watson, Layne T; Jones, Thomas C; Tyson, John J; Shaffer, Clifford A

    2014-04-04

    Building models of molecular regulatory networks is challenging not just because of the intrinsic difficulty of describing complex biological processes. Writing a model is a creative effort that calls for more flexibility and interactive support than offered by many of today's biochemical model editors. Our model editor MSMB - Multistate Model Builder - supports multistate models created using different modeling styles. MSMB provides two separate advances on existing network model editors. (1) A simple but powerful syntax is used to describe multistate species. This reduces the number of reactions needed to represent certain molecular systems, thereby reducing the complexity of model creation. (2) Extensive feedback is given during all stages of the model creation process on the existing state of the model. Users may activate error notifications of varying stringency on the fly, and use these messages as a guide toward a consistent, syntactically correct model. MSMB default values and behavior during model manipulation (e.g., when renaming or deleting an element) can be adapted to suit the modeler, thus supporting creativity rather than interfering with it. MSMB's internal model representation allows saving a model with errors and inconsistencies (e.g., an undefined function argument; a syntactically malformed reaction). A consistent model can be exported to SBML or COPASI formats. We show the effectiveness of MSMB's multistate syntax through models of the cell cycle and mRNA transcription. Using multistate reactions reduces the number of reactions needed to encode many biochemical network models. This reduces the cognitive load for a given model, thereby making it easier for modelers to build more complex models. The many interactive editing support features provided by MSMB make it easier for modelers to create syntactically valid models, thus speeding model creation. Complete information and the installation package can be found at http

  6. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono

    2011-04-07

    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  7. Modeling Guru: Knowledge Base for NASA Modelers

    Science.gov (United States)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration are becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  8. Modeling soil water content for vegetation modeling improvement

    Science.gov (United States)

    Cianfrani, Carmen; Buri, Aline; Zingg, Barbara; Vittoz, Pascal; Verrecchia, Eric; Guisan, Antoine

    2016-04-01

    Soil water content (SWC) is known to be important for plants as it affects the physiological processes regulating plant growth. Therefore, SWC controls plant distribution over the Earth's surface, ranging from deserts and grassland to rain forests. Unfortunately, only few SWC data are available, as its measurement is very time consuming, costly, and requires specific laboratory tools. The scarcity of SWC measurements in geographic space makes it difficult to model and spatially project SWC over larger areas. In particular, it prevents its inclusion in plant species distribution models (SDMs) as a predictor. The aims of this study were, first, to test a new methodology allowing the problem of the scarcity of SWC measurements to be overcome and, second, to model and spatially project SWC in order to improve plant SDMs with the inclusion of an SWC parameter. The study was developed in four steps. First, SWC was modeled by measuring it at 10 different pressures (expressed in pF and ranging from pF=0 to pF=4.2). The different pF values represent different degrees of soil water availability for plants. An ensemble of bivariate models was built to overcome the problem of having only a few SWC measurements (n = 24) but several predictors to include in the model. Soil texture (clay, silt, sand), organic matter (OM), topographic variables (elevation, aspect, convexity), climatic variables (precipitation) and hydrological variables (river distance, NDWI) were used as predictors. Weighted ensemble models were built using only bivariate models with adjusted R² > 0.5 for each SWC at different pF. The second step consisted of running plant SDMs including the modeled SWC jointly with the conventional topo-climatic variables used for plant SDMs. Third, SDMs were run using only the conventional topo-climatic variables. Finally, comparing the models obtained in the second and third steps allowed assessing the additional predictive power of SWC in plant SDMs. SWC ensemble models remained very good, with
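
    The ensemble-of-bivariate-models device can be sketched directly: fit every bivariate least-squares model, keep those with adjusted R² above 0.5, and weight them by adjusted R². Synthetic data stand in for the 24 SWC samples, and ordinary linear models stand in for whatever model class was actually used:

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(2)
        n, p = 24, 10                            # few samples, many candidate predictors
        X = rng.normal(size=(n, p))
        y = 0.8 * X[:, 0] + 0.6 * X[:, 4] + 0.1 * rng.normal(size=n)

        kept = []
        for i, j in combinations(range(p), 2):   # all bivariate predictor pairs
            A = np.column_stack([np.ones(n), X[:, i], X[:, j]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
            adj = 1.0 - (1.0 - r2) * (n - 1) / (n - 3)   # adjusted R2, two predictors
            if adj > 0.5:
                kept.append((adj, (i, j), beta))

        weights = np.array([m[0] for m in kept])
        weights /= weights.sum()                 # adjusted-R2 ensemble weights
        print(len(kept), "models kept; best pair:", max(kept, key=lambda m: m[0])[1])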

  9. Computer-aided modeling framework – a generic modeling template

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    and test models systematically, efficiently and reliably. In this way, development of products and processes can be made faster, cheaper and more efficient. In this contribution, as part of the framework, a generic modeling template for the systematic derivation of problem specific models is presented....... The application of the modeling template is highlighted with a case study related to the modeling of a catalytic membrane reactor coupling dehydrogenation of ethylbenzene with hydrogenation of nitrobenzene...

  10. A simplified model exploration research of new anisotropic diffuse radiation model

    International Nuclear Information System (INIS)

    Yao, Wanxiang; Li, Zhengrong; Wang, Xiao; Zhao, Qun; Zhang, Zhigang; Lin, Lin

    2016-01-01

    Graphical abstract: The specific process of measuring diffuse radiation data. - Highlights: • A simplified diffuse radiation model is extremely important for solar radiation simulation and energy simulation. • A new simplified anisotropic diffuse radiation model (NSADR model) is proposed. • The accuracy of existing models and the NSADR model is compared based on measured values. • The accuracy of the NSADR model is higher than that of the existing models, making it suitable for calculating diffuse radiation. - Abstract: A more accurate new anisotropic diffuse radiation model (NADR model) has been proposed, but the parameters and calculation process of the NADR model are complex, so it is difficult to use widely in simulation software and engineering calculations. Based on an analysis of the diffuse radiation model and measured diffuse radiation data, this paper puts forward three hypotheses: (1) diffuse radiation from the sky horizontal region is concentrated in a very thin layer close to the line source; (2) diffuse radiation from the circumsolar region is concentrated at the point of the sun; (3) diffuse radiation from the orthogonal region is concentrated at the point located at a 90-degree angle to the sun. Based on these hypotheses, the NADR model is simplified to a new simplified anisotropic diffuse radiation model (NSADR model). The accuracy of the NADR model and its simplified model (NSADR model) is then compared with existing models based on the measured values, and the results show that the Perez model and its simplified model are relatively accurate among existing models. However, the accuracy of these two models is lower than that of the NADR and NSADR models because they neglect the influence of the orthogonal diffuse radiation. The accuracy of the NSADR model is higher than that of the existing models; meanwhile, another advantage is that the NSADR model simplifies the solution of parameters and the calculation. Therefore it is more suitable for

  11. Biosphere Model Report

    Energy Technology Data Exchange (ETDEWEB)

    D. W. Wu

    2003-07-16

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  12. Biosphere Model Report

    Energy Technology Data Exchange (ETDEWEB)

    M. A. Wasiolek

    2003-10-27

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  13. Biosphere Model Report

    International Nuclear Information System (INIS)

    D. W. Wu

    2003-01-01

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), the TSPA-LA. The ERMYN model provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs), the reference biosphere, the human receptor, and assumptions (Section 6.2 and Section 6.3); (3) Building a mathematical model using the biosphere conceptual model and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN model compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN model by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); and (8) Validating the ERMYN model by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7)

  14. A Primer for Model Selection: The Decisive Role of Model Complexity

    Science.gov (United States)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)

  15. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
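
    A minimal sketch of the model-free side discussed above: a Rescorla-Wagner/TD-style value update driven by a reward prediction error. A model-based learner would instead derive the value from a learned model of the task structure; all parameters here are generic.

    ```python
    # Model-free reward prediction error: delta = r - V; V <- V + alpha * delta.
    import numpy as np

    rng = np.random.default_rng(0)
    alpha, n_trials = 0.1, 200
    value = 0.0
    for t in range(n_trials):
        reward = rng.binomial(1, 0.7)   # hypothetical 70%-rewarded action
        delta = reward - value          # reward prediction error
        value += alpha * delta
    print(f"learned value ~ {value:.2f} (true reward rate 0.7)")
    ```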

  16. Sequence modelling and an extensible data model for genomic database

    Energy Technology Data Exchange (ETDEWEB)

    Li, Peter Wei-Der [California Univ., San Francisco, CA (United States); Univ. of California, Berkeley, CA (United States)

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanism for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object-oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework that can incorporate the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented the query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.
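
    The dissertation's own operator set is not reproduced in the abstract; the sketch below is only a toy illustration of what abstract sequence operators (subsequence, concatenation) plus one biological operator (reverse complement) can look like. The class and method names are hypothetical.

    ```python
    class DNASequence:
        """Toy sequence type with abstract and biological operators."""
        COMPLEMENT = str.maketrans("ACGT", "TGCA")

        def __init__(self, bases: str):
            self.bases = bases.upper()

        def subseq(self, start: int, end: int) -> "DNASequence":
            # abstract operator: extract a subsequence
            return DNASequence(self.bases[start:end])

        def concat(self, other: "DNASequence") -> "DNASequence":
            # abstract operator: concatenate two sequences
            return DNASequence(self.bases + other.bases)

        def reverse_complement(self) -> "DNASequence":
            # biological operator: complement each base, then reverse
            return DNASequence(self.bases.translate(self.COMPLEMENT)[::-1])

    s = DNASequence("ATGGCC")
    print(s.subseq(0, 3).bases, s.reverse_complement().bases)
    ```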

  17. Sequence modelling and an extensible data model for genomic database

    Energy Technology Data Exchange (ETDEWEB)

    Li, Peter Wei-Der (California Univ., San Francisco, CA (United States) Lawrence Berkeley Lab., CA (United States))

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanism for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object-oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework that can incorporate the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented the query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  18. Modelling of JET diagnostics using Bayesian Graphical Models

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, J. [IPP Greifswald, Greifswald (Germany); Ford, O. [Imperial College, London (United Kingdom); McDonald, D.; Hole, M.; Nessi, G. von; Meakins, A.; Brix, M.; Thomsen, H.; Werner, A.; Sirinelli, A.

    2011-07-01

    The mapping between physics parameters (such as densities, currents, flows, temperatures, etc.) defining the plasma 'state' under a given model and the raw observations of each plasma diagnostic will 1) depend on the particular physics model used, and 2) be inherently probabilistic, owing to uncertainties in both the observations and instrumental aspects of the mapping, such as calibrations, instrument functions, etc. A flexible and principled way of modelling such interconnected probabilistic systems is through so-called Bayesian graphical models. Being an amalgam of graph theory and probability theory, Bayesian graphical models can simulate the complex interconnections between physics models and diagnostic observations from multiple heterogeneous diagnostic systems, making it relatively easy to optimally combine the observations from multiple diagnostics for joint inference on parameters of the underlying physics model, which can itself be represented as part of the graph. At JET about 10 diagnostic systems have to date been modelled in this way, which has led to a number of new results, including: the reconstruction of the flux surface topology and q-profiles without any specific equilibrium assumption, using information from a number of different diagnostic systems; profile inversions taking into account the uncertainties in the flux surface positions; and a substantial increase in the accuracy of JET electron density and temperature profiles, including improved pedestal resolution, through the joint analysis of three diagnostic systems. It is believed that the Bayesian graph approach could potentially be utilised for very large sets of diagnostics, providing a generic data analysis framework for nuclear fusion experiments that would be able to optimally utilise the information from multiple diagnostics simultaneously, and where the explicit graph representation of the connections to underlying physics models could be used for sophisticated model testing. This
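
    As a toy illustration of the joint-inference idea (not JET's actual graph or calibrations), the sketch below combines two hypothetical diagnostics that observe the same electron density through different forward models; the joint posterior is the normalized product of their likelihoods on a parameter grid.

    ```python
    import numpy as np

    ne = np.linspace(0.5, 5.0, 1000)             # candidate density [1e19 m^-3]

    # Diagnostic A: observes n_e directly with Gaussian noise (hypothetical)
    obs_a, sigma_a = 2.9, 0.4
    like_a = np.exp(-0.5 * ((obs_a - ne) / sigma_a) ** 2)

    # Diagnostic B: observes a line-integrated signal y = 2.0 * n_e (hypothetical)
    obs_b, sigma_b = 6.2, 0.5
    like_b = np.exp(-0.5 * ((obs_b - 2.0 * ne) / sigma_b) ** 2)

    posterior = like_a * like_b                  # flat prior on n_e
    posterior /= np.trapz(posterior, ne)         # normalize
    print("posterior mean n_e:", np.trapz(ne * posterior, ne))
    ```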

  19. Culturicon model: A new model for cultural-based emoticon

    Science.gov (United States)

    Zukhi, Mohd Zhafri Bin Mohd; Hussain, Azham

    2017-10-01

    Emoticons are popular among users of distributed collective interaction for expressing their emotions, gestures and actions. Emoticons have been shown to help avoid misunderstanding of a message, save attention, and improve communication among speakers of different native languages. However, despite the benefits that emoticons provide, research on emoticons from a cultural perspective is still lacking. As emoticons are crucial in global communication, culture should be one of the extensively researched aspects of distributed collective interaction. Therefore, this study attempts to explore and develop a model for cultural-based emoticons. Three cultural models that have been used in Human-Computer Interaction were studied: the Hall Culture Model, the Trompenaars and Hampden Culture Model, and the Hofstede Culture Model. The dimensions from these three models will be used in developing the proposed cultural-based emoticon model.

  20. Business Models and Business Model Innovation

    DEFF Research Database (Denmark)

    Foss, Nicolai J.; Saebi, Tina

    2018-01-01

    While research on business models and business model innovation continue to exhibit growth, the field is still, even after more than two decades of research, characterized by a striking lack of cumulative theorizing and an opportunistic borrowing of more or less related ideas from neighbouring...

  1. The bumper bundle book of modelling NLP modelling made simple

    CERN Document Server

    Burgess, Fran

    2014-01-01

    A Neurolinguistic Programming textbook which focusses on the core activity of NLP - modelling. It covers the thinking behind NLP modelling, presents an extensive range of modelling methodologies and skills, offers applications of modelling, and provides specific details for model and technique construction.

  2. Anatomically accurate, finite model eye for optical modeling.

    Science.gov (United States)

    Liou, H L; Brennan, N A

    1997-08-01

    There is a need for a schematic eye that models vision accurately under various conditions such as refractive surgical procedures, contact lens and spectacle wear, and near vision. Here we propose a new model eye close to anatomical, biometric, and optical realities. This is a finite model with four aspheric refracting surfaces and a gradient-index lens. It has an equivalent power of 60.35 D and an axial length of 23.95 mm. The new model eye provides spherical aberration values within the limits of empirical results and predicts chromatic aberration for wavelengths between 380 and 750 nm. It provides a model for calculating optical transfer functions and predicting optical performance of the eye.

  3. Ecological models and pesticide risk assessment: current modeling practice.

    Science.gov (United States)

    Schmolke, Amelie; Thorbek, Pernille; Chapman, Peter; Grimm, Volker

    2010-04-01

    Ecological risk assessments of pesticides usually focus on risk at the level of individuals, and are carried out by comparing exposure and toxicological endpoints. However, in most cases the protection goal is populations rather than individuals. On the population level, effects of pesticides depend not only on exposure and toxicity, but also on factors such as life history characteristics, population structure, timing of application, presence of refuges in time and space, and landscape structure. Ecological models can integrate such factors and have the potential to become important tools for the prediction of population-level effects of exposure to pesticides, thus allowing extrapolations, for example, from laboratory to field. Indeed, a broad range of ecological models have been applied to chemical risk assessment in the scientific literature, but so far such models have only rarely been used to support regulatory risk assessments of pesticides. To better understand the reasons for this situation, the current modeling practice in this field was assessed in the present study. The scientific literature was searched for relevant models and assessed according to nine characteristics: model type, model complexity, toxicity measure, exposure pattern, other factors, taxonomic group, risk assessment endpoint, parameterization, and model evaluation. The present study found that, although most models were of a high scientific standard, many of them would need modification before they are suitable for regulatory risk assessments. The main shortcomings of currently available models in the context of regulatory pesticide risk assessments were identified. When ecological models are applied to regulatory risk assessments, we recommend reviewing these models according to the nine characteristics evaluated here. (c) 2010 SETAC.

  4. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    In this paper a new model validation procedure for a logistic regression model is presented. First, we give a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for assessing the performance of a given model by using an example taken from a management study.
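
    A minimal sketch of quantitative performance measures for a fitted logistic model on a holdout set, using scikit-learn on synthetic data; the specific measures shown (AUC for discrimination, Brier score for calibration/accuracy) are common choices, not necessarily the paper's.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, brier_score_loss
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    p = model.predict_proba(X_te)[:, 1]
    print("AUC:", roc_auc_score(y_te, p))              # discrimination
    print("Brier score:", brier_score_loss(y_te, p))   # calibration + accuracy
    ```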

  5. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis: Model development and validation of existing models

    Science.gov (United States)

    van der Wijk, Lars; Proost, Johannes H.; Sinha, Bhanu; Touw, Daan J.

    2017-01-01

    Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise using the existing intensive-care model in clinical practice to avoid

  6. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis: Model development and validation of existing models.

    Directory of Open Access Journals (Sweden)

    Anna Gomes

    Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise using the existing intensive-care model in clinical practice to
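
    For concreteness, a small sketch computing MDPE and MDAPE from paired observed and model-predicted serum levels, under the common convention PE = 100 x (predicted - observed) / observed; all numbers are invented.

    ```python
    import numpy as np

    observed  = np.array([4.1, 7.9, 6.2, 9.5, 5.4])   # mg/L (hypothetical)
    predicted = np.array([4.3, 7.5, 6.5, 9.1, 5.6])   # mg/L (hypothetical)

    pe = 100.0 * (predicted - observed) / observed     # percent prediction error
    mdpe  = np.median(pe)            # bias
    mdape = np.median(np.abs(pe))    # precision
    print(f"MDPE {mdpe:.2f}%  MDAPE {mdape:.2f}%")
    ```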

  7. Viscoelastic Model for Lung Parenchyma for Multi-Scale Modeling of Respiratory System, Phase II: Dodecahedral Micro-Model

    Energy Technology Data Exchange (ETDEWEB)

    Freed, Alan D.; Einstein, Daniel R.; Carson, James P.; Jacob, Rick E.

    2012-03-01

    In the first year of this contractual effort a hypo-elastic constitutive model was developed and shown to have great potential in modeling the elastic response of parenchyma. This model resides at the macroscopic level of the continuum. In this, the second year of our support, an isotropic dodecahedron is employed as an alveolar model. This is a microscopic model for parenchyma. A hopeful outcome is that the linkage between these two scales of modeling will be a source of insight and inspiration that will aid us in the final year's activity: creating a viscoelastic model for parenchyma.

  8. Ventilation Model

    International Nuclear Information System (INIS)

    Yang, H.

    1999-01-01

    The purpose of this analysis and model report (AMR) for the Ventilation Model is to analyze the effects of pre-closure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts and provide heat removal data to support EBS design. It will also provide input data (initial conditions, and time varying boundary conditions) for the EBS post-closure performance assessment and the EBS Water Distribution and Removal Process Model. The objective of the analysis is to develop, describe, and apply calculation methods and models that can be used to predict thermal conditions within emplacement drifts under forced ventilation during the pre-closure period. The scope of this analysis includes: (1) Provide a general description of effects and heat transfer process of emplacement drift ventilation. (2) Develop a modeling approach to simulate the impacts of pre-closure ventilation on the thermal conditions in emplacement drifts. (3) Identify and document inputs to be used for modeling emplacement ventilation. (4) Perform calculations of temperatures and heat removal in the emplacement drift. (5) Address general considerations of the effect of water/moisture removal by ventilation on the repository thermal conditions. The numerical modeling in this document will be limited to heat-only modeling and calculations. Only a preliminary assessment of the heat/moisture ventilation effects and modeling method will be performed in this revision. Modeling of moisture effects on heat removal and emplacement drift temperature may be performed in the future
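
    A back-of-envelope steady-state version of the heat-removal calculation described above, using the air heat balance Q = m_dot * cp * (T_out - T_in); all values are illustrative, not repository design figures.

    ```python
    # Steady-state heat removal by drift ventilation (illustrative numbers)
    m_dot = 15.0                 # air mass flow [kg/s]
    cp = 1006.0                  # specific heat of air [J/(kg K)]
    t_in, t_out = 25.0, 45.0     # drift inlet/outlet air temperatures [C]

    q_removed_kw = m_dot * cp * (t_out - t_in) / 1e3
    print(f"heat removed by ventilation: {q_removed_kw:.0f} kW")
    ```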

  9. Methodology for geometric modelling. Presentation and administration of site descriptive models; Metodik foer geometrisk modellering. Presentation och administration av platsbeskrivande modeller

    Energy Technology Data Exchange (ETDEWEB)

    Munier, Raymond [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden); Hermanson, Jan [Golder Associates (Sweden)

    2001-03-01

    This report presents a methodology to construct, visualise and present geoscientific descriptive models based on data from the site investigations that SKB is currently performing in order to build an underground nuclear waste disposal facility in Sweden. It is designed for interaction with SICADA (SKB's site characterisation database) and RVS (SKB's Rock Visualisation System). However, the concepts of the methodology are general and can be used with other tools capable of handling 3D geometries and parameters. The descriptive model is intended to be an instrument where site investigation data from all disciplines are put together to form a comprehensive visual interpretation of the studied rock mass. The methodology has four main components: 1. Construction of a geometrical model of the interpreted main structures at the site. 2. Description of the geoscientific characteristics of the structures. 3. Description and geometrical implementation of the geometric uncertainties in the interpreted model structures. 4. Quality system for the handling of the geometrical model, its associated database and some aspects of the technical auditing. The geometrical model forms a basis for understanding the main elements and structures of the investigated site. Once the interpreted geometries are in place in the model, the system allows for adding descriptive and quantitative data to each modelled object through a system of intuitive menus. The associated database gives each geometrical object a complete quantitative description of all geoscientific disciplines, variabilities, uncertainties in interpretation and full version history. The complete geometrical model and its associated database of object descriptions are to be recorded in a central quality system. Official, new and old versions of the model are administered centrally in order to have complete quality assurance of each step in the interpretation process. The descriptive model is a cornerstone in the understanding of the

  10. Coupled model of INM-IO global ocean model, CICE sea ice model and SCM OIAS framework

    Science.gov (United States)

    Bayburin, Ruslan; Rashit, Ibrayev; Konstantin, Ushakov; Vladimir, Kalmykov; Gleb, Dyakonov

    2015-04-01

    The status of a coupled Arctic model of the ocean and sea ice is presented. The model consists of the high-resolution INM IO global ocean component, the Los Alamos National Laboratory CICE sea ice model, and the SCM OIAS framework for ocean-ice-atmosphere-land coupled modeling on massively parallel architectures. The model is currently under development at the Institute of Numerical Mathematics (INM), the Hydrometeorological Center (HMC) and the P.P. Shirshov Institute of Oceanology (IO), and is aimed at modeling the intra-annual variability of hydrodynamics in the Arctic. The computational characteristics of the world ocean-sea ice coupled model governed by SCM OIAS are presented. The model is parallelized using MPI technologies and can currently use up to 5000 cores efficiently. Details of the programming implementation, computational configuration and parametrization of physical phenomena are analyzed in terms of the intercoupling complex. Results of a five-year computational experiment on the evolution of sea ice, snow and ocean state in the Arctic region, on a tripole grid with a horizontal resolution of 3-5 kilometers, forced by a repeating "normal" annual cycle of atmospheric fields taken from the CORE1 experiment database, are presented and analyzed in terms of the state of vorticity and the expansion of warm Atlantic water.

  11. Models of breast cancer: quo vadis, animal modeling?

    International Nuclear Information System (INIS)

    Wagner, Kay-Uwe

    2004-01-01

    Rodent models for breast cancer have for many decades provided unparalleled insights into cellular and molecular aspects of neoplastic transformation and tumorigenesis. Despite recent improvements in the fidelity of genetically engineered mice, rodent models are still being criticized by many colleagues for not being 'authentic' enough to the human disease. Motives for this criticism are manifold and range from a very general antipathy against the rodent model system to well-founded arguments that highlight physiological variations between species. Newly proposed differences in genetic pathways that cause cancer in humans and mice invigorated the ongoing discussion about the legitimacy of the murine system to model the human disease. The present commentary intends to stimulate a debate on this subject by providing the background about new developments in animal modeling, by disputing suggested limitations of genetically engineered mice, and by discussing improvements but also ambiguous expectations on the authenticity of xenograft models to faithfully mimic the human disease

  12. Mathematical modelling

    CERN Document Server

    2016-01-01

    This book provides a thorough introduction to the challenge of applying mathematics in real-world scenarios. Modelling tasks rarely involve well-defined categories, and they often require multidisciplinary input from mathematics, physics, computer sciences, or engineering. In keeping with this spirit of modelling, the book includes a wealth of cross-references between the chapters and frequently points to the real-world context. The book combines classical approaches to modelling with novel areas such as soft computing methods, inverse problems, and model uncertainty. Attention is also paid to the interaction between models, data and the use of mathematical software. The reader will find a broad selection of theoretical tools for practicing industrial mathematics, including the analysis of continuum models, probabilistic and discrete phenomena, and asymptotic and sensitivity analysis.

  13. Empirical investigation on modeling solar radiation series with ARMA–GARCH models

    International Nuclear Information System (INIS)

    Sun, Huaiwei; Yan, Dong; Zhao, Na; Zhou, Jianzhong

    2015-01-01

    Highlights: • Apply 6 ARMA–GARCH(-M) models to model and forecast solar radiation. • The ARMA–GARCH(-M) models produce more accurate radiation forecasts than conventional methods. • Show that ARMA–GARCH-M models are more effective for forecasting solar radiation mean and volatility. • The ARMA–EGARCH-M model is robust and the ARMA–sGARCH-M model is very competitive. - Abstract: Simulation of radiation is one of the most important issues in solar energy utilization. Time series models are useful tools for the estimation and forecasting of solar radiation series and their changes. In this paper, the effectiveness of autoregressive moving average (ARMA) models with various generalized autoregressive conditional heteroskedasticity (GARCH) processes, namely ARMA–GARCH models, is evaluated for radiation series. Six different GARCH approaches, comprising three different ARMA–GARCH models and the corresponding GARCH-in-mean (ARMA–GARCH-M) models, are applied to radiation data sets from two representative climate stations in China. Multiple evaluation metrics of modeling sufficiency are used to evaluate the performance of the models. The results show that the ARMA–GARCH(-M) models are effective for radiation series estimation. In both fitting and prediction of radiation series, the ARMA–GARCH(-M) models show better modeling sufficiency than traditional models; the ARMA–EGARCH-M model is robust at both sites and the ARMA–sGARCH-M model is very competitive. Comparisons of statistical diagnostics and model performance clearly show that the ARMA–GARCH-M models make the mean radiation equations more sufficient. The ARMA–GARCH(-M) models are therefore recommended as the preferred method for modeling solar radiation series.
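
    A sketch of fitting one member of this family with the Python `arch` package. `arch` supplies AR-type conditional means rather than full ARMA, and has no built-in GARCH-in-mean term, so an AR(1)-GARCH(1,1) stands in here for the paper's ARMA–GARCH(-M) variants; the data are synthetic.

    ```python
    import numpy as np
    from arch import arch_model

    # Synthetic daily "radiation" series: seasonal mean plus noise
    rng = np.random.default_rng(1)
    t = np.arange(1000)
    y = 20 + 5 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, t.size)

    # AR(1) conditional mean with GARCH(1,1) conditional variance
    am = arch_model(y, mean="AR", lags=1, vol="GARCH", p=1, q=1)
    res = am.fit(disp="off")
    print(res.summary())   # parameter estimates, log-likelihood, AIC/BIC
    ```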

  14. Hydrogeological conceptual model development and numerical modelling using CONNECTFLOW, Forsmark modelling stage 2.3

    Energy Technology Data Exchange (ETDEWEB)

    Follin, Sven (SF GeoLogic AB, Taeby (Sweden)); Hartley, Lee; Jackson, Peter; Roberts, David (Serco TAP (United Kingdom)); Marsic, Niko (Kemakta Konsult AB, Stockholm (Sweden))

    2008-05-15

    Three versions of a site descriptive model (SDM) have been completed for the Forsmark area. Version 0 established the state of knowledge prior to the start of the site investigation programme. Version 1.1 was essentially a training exercise and was completed during 2004. Version 1.2 was a preliminary site description and concluded the initial site investigation work (ISI) in June 2005. Three modelling stages are planned for the complete site investigation work (CSI). These are labelled stage 2.1, 2.2 and 2.3, respectively. An important component of each of these stages is to address and continuously try to resolve discipline-specific uncertainties of importance for repository engineering and safety assessment. Stage 2.1 included an updated geological model for Forsmark and aimed to provide a feedback from the modelling working group to the site investigation team to enable completion of the site investigation work. Stage 2.2 described the conceptual understanding and the numerical modelling of the bedrock hydrogeology in the Forsmark area based on data freeze 2.2. The present report describes the modelling based on data freeze 2.3, which is the final data freeze in Forsmark. In comparison, data freeze 2.3 is considerably smaller than data freeze 2.2. Therefore, stage 2.3 deals primarily with model confirmation and uncertainty analysis, e.g. verification of important hypotheses made in stage 2.2 and the role of parameter uncertainty in the numerical modelling. On the whole, the work reported here constitutes an addendum to the work reported in stage 2.2. Two changes were made to the CONNECTFLOW code in stage 2.3. These serve to: 1) improve the representation of the hydraulic properties of the regolith, and 2) improve the conditioning of transmissivity of the deformation zones against single-hole hydraulic tests. The changes to the modelling of the regolith were made to improve the consistency with models made with the MIKE SHE code, which involved the introduction

  15. The Use of Modeling-Based Text to Improve Students' Modeling Competencies

    Science.gov (United States)

    Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan

    2015-01-01

    This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…

  16. Model documentation Natural Gas Transmission and Distribution Model of the National Energy Modeling System. Volume 1

    International Nuclear Information System (INIS)

    1996-01-01

    The Natural Gas Transmission and Distribution Model (NGTDM) of the National Energy Modeling System is developed and maintained by the Energy Information Administration (EIA), Office of Integrated Analysis and Forecasting. This report documents the archived version of the NGTDM that was used to produce the natural gas forecasts presented in the Annual Energy Outlook 1996, (DOE/EIA-0383(96)). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic approach, and provides detail on the methodology employed. Previously this report represented Volume I of a two-volume set. Volume II reported on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues

  17. Model documentation Natural Gas Transmission and Distribution Model of the National Energy Modeling System. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-02-26

    The Natural Gas Transmission and Distribution Model (NGTDM) of the National Energy Modeling System is developed and maintained by the Energy Information Administration (EIA), Office of Integrated Analysis and Forecasting. This report documents the archived version of the NGTDM that was used to produce the natural gas forecasts presented in the Annual Energy Outlook 1996, (DOE/EIA-0383(96)). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic approach, and provides detail on the methodology employed. Previously this report represented Volume I of a two-volume set. Volume II reported on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.

  18. Rotating universe models

    International Nuclear Information System (INIS)

    Tozini, A.V.

    1984-01-01

    A review is made of some properties of rotating Universe models. Godel's model is identified as a generalized tilted model. Some properties of new solutions of Einstein's equations, which are rotating non-stationary Universe models, are presented and analyzed. These models have Godel's model as a particular case. Non-stationary cosmological models are found which generalize Godel's metric in a way analogous to how Friedmann's model generalizes Einstein's. (L.C.) [pt

  19. Ground-water solute transport modeling using a three-dimensional scaled model

    International Nuclear Information System (INIS)

    Crider, S.S.

    1987-01-01

    Scaled models are used extensively in current hydraulic research on sediment transport and solute dispersion in free surface flows (rivers, estuaries), but are neglected in current ground-water model research. Thus, an investigation was conducted to test the efficacy of a three-dimensional scaled model of solute transport in ground water. No previous results from such a model have been reported. Experiments performed on uniform scaled models indicated that some historical problems (e.g., construction and scaling difficulties; disproportionate capillary rise in model) were partly overcome by using simple model materials (sand, cement and water), by restricting model application to selective classes of problems, and by physically controlling the effect of the model capillary zone. Results from these tests were compared with mathematical models. Model scaling laws were derived for ground-water solute transport and used to build a three-dimensional scaled model of a ground-water tritium plume in a prototype aquifer on the Savannah River Plant near Aiken, South Carolina. Model results compared favorably with field data and with a numerical model. Scaled models are recommended as a useful additional tool for prediction of ground-water solute transport

  20. Nonintersecting string model and graphical approach: equivalence with a Potts model

    International Nuclear Information System (INIS)

    Perk, J.H.H.; Wu, F.Y.

    1986-01-01

    Using a graphical method the authors establish the exact equivalence of the partition function of a q-state nonintersecting string (NIS) model on an arbitrary planar, even-valenced lattice with that of a q²-state Potts model on a relaxed lattice. The NIS model considered in this paper is one in which the vertex weights are expressible as sums of those of basic vertex types, and the resulting Potts model generally has multispin interactions. For the square and Kagome lattices this leads to the equivalence of a staggered NIS model with Potts models with anisotropic pair interactions, indicating that these NIS models have a first-order transition for q greater than 2. For the triangular lattice the NIS model turns out to be the five-vertex model of Wu and Lin and it relates to a Potts model with two- and three-site interactions. The most general model the authors discuss is an oriented NIS model which contains the six-vertex model and the NIS models of Stroganov and Schultz as special cases.
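
    A numerical companion to the mapping, illustrative only: it samples configurations of a q-state ferromagnetic Potts model by Metropolis Monte Carlo rather than computing the partition function exactly.

    ```python
    import numpy as np

    def potts_metropolis(L=16, q=3, beta=1.0, sweeps=200, seed=0):
        """Metropolis sampler for a q-state Potts model on a periodic LxL lattice."""
        rng = np.random.default_rng(seed)
        s = rng.integers(q, size=(L, L))
        for _ in range(sweeps * L * L):
            i, j = rng.integers(L, size=2)
            new = rng.integers(q)
            nbrs = (s[(i + 1) % L, j], s[(i - 1) % L, j],
                    s[i, (j + 1) % L], s[i, (j - 1) % L])
            # Delta E for E = -J * (number of equal-neighbour pairs), J = 1
            de = sum(n == s[i, j] for n in nbrs) - sum(n == new for n in nbrs)
            if de <= 0 or rng.random() < np.exp(-beta * de):
                s[i, j] = new
        return s

    state = potts_metropolis()
    counts = np.bincount(state.ravel(), minlength=3)
    print("fraction in majority state:", counts.max() / state.size)
    ```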

  1. BioModels: expanding horizons to include more modelling approaches and formats.

    Science.gov (United States)

    Glont, Mihai; Nguyen, Tung V N; Graesslin, Martin; Hälke, Robert; Ali, Raza; Schramm, Jochen; Wimalaratne, Sarala M; Kothamachu, Varun B; Rodriguez, Nicolas; Swat, Maciej J; Eils, Jurgen; Eils, Roland; Laibe, Camille; Malik-Sheriff, Rahuman S; Chelliah, Vijayalakshmi; Le Novère, Nicolas; Hermjakob, Henning

    2018-01-04

    BioModels serves as a central repository of mathematical models representing biological processes. It offers a platform to make mathematical models easily shareable across the systems modelling community, thereby supporting model reuse. To facilitate hosting a broader range of model formats derived from diverse modelling approaches and tools, a new infrastructure for BioModels has been developed that is available at http://www.ebi.ac.uk/biomodels. This new system allows submitting and sharing of a wide range of models with improved support for formats other than SBML. It also offers a version-control backed environment in which authors and curators can work collaboratively to curate models. This article summarises the features available in the current system and discusses the potential benefit they offer to the users over the previous system. In summary, the new portal broadens the scope of models accepted in BioModels and supports collaborative model curation which is crucial for model reproducibility and sharing. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. Correlation between the model accuracy and model-based SOC estimation

    International Nuclear Information System (INIS)

    Wang, Qianqian; Wang, Jiao; Zhao, Pengju; Kang, Jianqiang; Yan, Few; Du, Changqing

    2017-01-01

    State-of-charge (SOC) estimation is a core technology for battery management systems. Considerable progress has been achieved in the study of SOC estimation algorithms, especially algorithms based on the Kalman filter, to meet the increasing demand of model-based battery management systems. The Kalman filter weakens the influence of white noise and initial error during SOC estimation but cannot eliminate the error of the battery model itself. As such, the accuracy of SOC estimation is directly related to the accuracy of the battery model. Thus far, the quantitative relationship between model accuracy and model-based SOC estimation remains unknown. This study summarizes three equivalent circuit lithium-ion battery models, namely, the Thevenin, PNGV, and DP models. The model parameters are identified through a hybrid pulse power characterization test. The three models are evaluated, and SOC estimation conducted by the EKF-Ah method under three operating conditions is quantitatively studied. The regression and correlation between the standard deviation and normalized RMSE of the model error and those of the SOC estimation error are studied and compared. These parameters exhibit a strong linear relationship. Results indicate that the model accuracy affects the SOC estimation accuracy mainly in two ways: dispersion of the frequency distribution of the error and the overall level of the error. On the basis of the relationship between model error and SOC estimation error, our study provides a strategy for selecting a suitable cell model to meet the requirements of SOC precision using the Kalman filter.
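
    A minimal sketch of the model dependence at issue: coulomb counting as the process model, an OCV(SOC) voltage measurement, and a one-state extended Kalman filter fusing the two. The linear OCV curve and every number are illustrative, not a calibrated Thevenin/PNGV/DP parameter set.

    ```python
    import numpy as np

    capacity_as = 2.5 * 3600            # cell capacity [A s]
    dt, q_proc, r_meas = 1.0, 1e-7, 1e-3

    def ocv(soc):                       # hypothetical linear OCV curve [V]
        return 3.4 + 0.8 * soc

    docv_dsoc = 0.8                     # Jacobian of the linear OCV curve

    soc_true, soc_est, p_cov = 0.80, 0.90, 1e-2   # estimate starts off by 10%
    rng = np.random.default_rng(2)
    for _ in range(600):
        current = 1.0                   # 1 A discharge
        soc_true -= current * dt / capacity_as
        # predict step: coulomb counting
        soc_est -= current * dt / capacity_as
        p_cov += q_proc
        # update step: noisy terminal-voltage measurement (ohmic drop ignored)
        v_meas = ocv(soc_true) + rng.normal(0.0, np.sqrt(r_meas))
        h = docv_dsoc
        gain = p_cov * h / (h * p_cov * h + r_meas)
        soc_est += gain * (v_meas - ocv(soc_est))
        p_cov *= 1.0 - gain * h
    print(f"true SOC {soc_true:.3f}, estimated {soc_est:.3f}")
    ```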

  3. Simulation Model of Membrane Gas Separator Using Aspen Custom Modeler

    Energy Technology Data Exchange (ETDEWEB)

    Song, Dong-keun [Korea Institute of Machinery and Materials, Daejeon (Korea, Republic of); Shin, Gahui; Yun, Jinwon; Yu, Sangseok [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2016-12-15

    Membranes are used to separate pure gases from gas mixtures. In this study, three different models of mass transport through a membrane were developed in order to investigate the gas separation capabilities of a membrane: a lumped model, a multi-cell model, and a discretization model. Although the multi-cell model produces results similar to those of the discretization model, the discretization model was selected for this investigation because the results of a multi-cell model depend on the number of cells. The mass transport model was then used to investigate the effects of pressure difference, flow rate, total exposed area, and permeability. The results showed that the stage cut increased with the pressure difference, but selectivity was traded off against the increasing pressure difference. Additionally, even though permeability is an important parameter, the selectivity and stage cut of the membrane converged as permeability increased.
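
    A sketch of the discretization approach described above: the membrane area is split into slices, and in each slice the local permeate flux of species i follows J_i = Perm_i * (p_f * x_i - p_p * y_i). The permeances, pressures, and the crude local permeate-composition guess are all hypothetical.

    ```python
    import numpy as np

    perm = np.array([3.0e-7, 1.5e-8])   # permeance (CO2, N2) [mol/(s m2 Pa)], hypothetical
    p_feed, p_perm = 5e5, 1e5           # feed / permeate pressures [Pa]
    feed = np.array([0.4, 0.6])         # feed molar flows (CO2, N2) [mol/s]
    feed_in = feed.sum()
    n_slices, total_area = 200, 10.0
    d_area = total_area / n_slices

    permeate = np.zeros(2)
    for _ in range(n_slices):
        x = feed / feed.sum()                    # local retentate composition
        y = perm * p_feed * x
        y = y / y.sum()                          # crude local permeate composition
        flux = np.clip(perm * (p_feed * x - p_perm * y) * d_area, 0.0, None)
        feed -= flux                             # deplete the retentate
        permeate += flux

    print(f"stage cut {permeate.sum() / feed_in:.2f}, "
          f"permeate CO2 fraction {permeate[0] / permeate.sum():.2f}")
    ```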

  4. Mathematical modelling

    DEFF Research Database (Denmark)

    Blomhøj, Morten

    2004-01-01

    Developing competences for setting up, analysing and criticising mathematical models is normally seen as relevant only from and above upper secondary level. The general belief among teachers is that modelling activities presuppose conceptual understanding of the mathematics involved. Mathematical...... roots for the construction of important mathematical concepts. In addition, competences for setting up, analysing and criticising modelling processes and the possible use of models is a formative aim in its own right for mathematics teaching in general education. The paper presents a theoretical...... modelling, however, can be seen as a practice of teaching that places the relation between real life and mathematics at the centre of teaching and learning mathematics, and this is relevant at all levels. Modelling activities may motivate the learning process and help the learner to establish cognitive...

  5. Simple Models for the Dynamic Modeling of Rotating Tires

    Directory of Open Access Journals (Sweden)

    J.C. Delamotte

    2008-01-01

    Large Finite Element (FE) models of tires are currently used to predict low-frequency behavior and to obtain dynamic model coefficients used in multi-body models for ride and comfort. However, to predict higher-frequency behavior, which may explain irregular wear, critical rotating speeds and noise radiation, FE models are not practical. Detailed FE models are not adequate for optimization and uncertainty predictions either, as in such applications the dynamic solution must be computed a number of times. Therefore, there is a need for simpler models that can capture the physics of the tire and be used to compute the dynamic response at a low computational cost. In this paper, the spectral (or continuous) element approach is used to derive such a model. A circular beam spectral element that takes into account the string effect is derived, and a method to simulate the response to a rotating force is implemented in the frequency domain. The behavior of a circular ring under different internal pressures is investigated using modal and frequency/wavenumber representations. Experimental results obtained with a real untreaded truck tire are presented and qualitatively compared with the simple model predictions, with good agreement. No attempt is made to obtain equivalent parameters for the simple model from the real tire results. On the other hand, the simple model fails to represent the correct variation of the quotient of the natural frequency by the number of circumferential wavelengths with the mode count. Nevertheless, some important features of the real tire's dynamic behavior, such as the generation of standing waves and part of the frequency/wavenumber behavior, can be investigated using the proposed simplified model.

  6. BAYESIAN MODELS FOR SPECIES DISTRIBUTION MODELLING WITH ONLY-PRESENCE RECORDS

    Directory of Open Access Journals (Sweden)

    Bartolo de Jesús Villar-Hernández

    2015-08-01

    One of the central issues in ecology is the study of the geographical distribution of species of flora and fauna through Species Distribution Models (SDM). Recently, scientific interest has focused on presence-only records. Two recent approaches have been proposed for this problem: a model based on the maximum likelihood method (Maxlike) and an inhomogeneous Poisson process model (IPP). In this paper we discuss two Bayesian approaches, called MaxBayes and IPPBayes, based on the Maxlike and IPP models, respectively. To illustrate these proposals, we implemented two study examples: (1) both models were implemented on a simulated dataset, and (2) we modeled the potential distribution of the genus Dalea in the Tehuacan-Cuicatlán biosphere reserve with both models, and the results were compared with those of Maxent. The results show that both models, MaxBayes and IPPBayes, are viable alternatives when species distributions are modeled with presence-only records. For the simulated dataset, MaxBayes achieved prevalence estimation even when the number of records was small. In the real dataset example, both models predict potential distributions similar to those of Maxent.
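
    A sketch of the IPP ingredient: the presence-only log-likelihood log L = Σ_i log λ(s_i) − ∫ λ(s) ds with a log-linear intensity λ(s) = exp(b0 + b1 z(s)), evaluated by simple quadrature on a one-dimensional synthetic covariate grid. All data are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    grid = np.linspace(0, 1, 100)          # unit-interval study region
    zx = np.sin(2 * np.pi * grid)          # hypothetical covariate surface
    cell = grid[1] - grid[0]

    def loglik(params, points_z):
        """IPP log-likelihood: sum of log-intensities minus integrated intensity."""
        b0, b1 = params
        lam_grid = np.exp(b0 + b1 * zx)
        integral = lam_grid.sum() * cell   # quadrature over the domain
        return np.sum(b0 + b1 * points_z) - integral

    presence_z = rng.choice(zx, size=40)   # covariate values at presence sites
    print(loglik((3.0, 1.0), presence_z))
    ```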

  7. Modeling of the Global Water Cycle - Analytical Models

    Science.gov (United States)

    Yongqiang Liu; Roni Avissar

    2005-01-01

    Both numerical and analytical models of the coupled atmosphere and its underlying ground components (land, ocean, ice) are useful tools for modeling the global and regional water cycle. Unlike complex three-dimensional climate models, which need very large computing resources and involve a large number of complicated interactions often difficult to interpret, analytical

  8. Equivalent Dynamic Models.

    Science.gov (United States)

    Molenaar, Peter C M

    2017-01-01

    Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restriction of attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovating type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.

  9. Applied stochastic modelling

    CERN Document Server

    Morgan, Byron JT; Tanner, Martin Abba; Carlin, Bradley P

    2008-01-01

    Contents: Introduction and Examples (Introduction; Examples of data sets). Basic Model Fitting (Introduction; Maximum-likelihood estimation for a geometric model; Maximum-likelihood for the beta-geometric model; Modelling polyspermy; Which model?; What is a model for?; Mechanistic models). Function Optimisation (Introduction; MATLAB: graphs and finite differences; Deterministic search methods; Stochastic search methods; Accuracy and a hybrid approach). Basic Likelihood Tools (Introduction; Estimating standard errors and correlations; Looking at surfaces: profile log-likelihoods; Confidence regions from profiles; Hypothesis testing in model selection; Score and Wald tests; Classical goodness of fit; Model selection bias). General Principles (Introduction; Parameterisation; Parameter redundancy; Boundary estimates; Regression and influence; The EM algorithm; Alternative methods of model fitting; Non-regular problems). Simulation Techniques (Introduction; Simulating random variables; Integral estimation; Verification; Monte Carlo inference; Estimating sampling distributi...)

  10. From spiking neuron models to linear-nonlinear models.

    Science.gov (United States)

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-20

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
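
    A minimal LIF simulation in the spirit of the paper: a noisy input with mean mu and noise amplitude sigma is integrated to threshold, and the measured firing rate is the kind of input-output point an LN cascade would aim to reproduce. Parameters are generic textbook values, not the paper's.

    ```python
    import numpy as np

    def lif_rate(mu, sigma, tau=0.02, v_th=1.0, v_reset=0.0,
                 dt=1e-4, t_max=20.0, seed=0):
        """Firing rate of a leaky integrate-and-fire neuron (Euler-Maruyama)."""
        rng = np.random.default_rng(seed)
        v, spikes = 0.0, 0
        for _ in range(int(t_max / dt)):
            noise = sigma * np.sqrt(dt / tau) * rng.standard_normal()
            v += dt * (mu - v) / tau + noise
            if v >= v_th:                 # threshold crossing: spike and reset
                v, spikes = v_reset, spikes + 1
        return spikes / t_max

    for mu in (0.8, 1.0, 1.2):            # mean drive below/at/above threshold
        print(f"mu={mu:.1f}: rate {lif_rate(mu, 0.2):.1f} Hz")
    ```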

  11. Business Model Innovation

    OpenAIRE

    Dodgson, Mark; Gann, David; Phillips, Nelson; Massa, Lorenzo; Tucci, Christopher

    2014-01-01

    The chapter offers a broad review of the literature at the nexus between Business Models and innovation studies, and examines the notion of Business Model Innovation in three different situations: Business Model Design in newly formed organizations, Business Model Reconfiguration in incumbent firms, and Business Model Innovation in the broad context of sustainability. Tools and perspectives to make sense of Business Models and support managers and entrepreneurs in dealing with Business Model ...

  12. IHY Modeling Support at the Community Coordinated Modeling Center

    Science.gov (United States)

    Chulaki, A.; Hesse, Michael; Kuznetsova, Masha; MacNeice, P.; Rastaetter, L.

    2005-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides researchers with the use of space science models, even if they are not model owners themselves. In particular, the CCMC provides to the research community the execution of "runs-on-request" for specific events of interest to space science researchers. Through this activity and the concurrent development of advanced visualization tools, the CCMC provides, to the general science community, unprecedented access to a large number of state-of-the-art research models. The CCMC houses models that cover the entire domain from the Sun to the Earth. In this presentation, we will provide an overview of CCMC modeling services that are available to support activities during the International Heliophysical Year. In order to tailor CCMC activities to IHY needs, we will also invite community input into our IHY planning activities.

  13. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media
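
    A toy version of the structure-learning step, assuming Gaussian reduced coordinates so that conditional independence can be screened with partial-correlation tests (the paper's actual tests and data are more general):

      import numpy as np
      from scipy import stats

      # Toy conditional-independence screen via partial correlation (Fisher z).
      rng = np.random.default_rng(1)
      n = 500
      z = rng.normal(size=n)                 # shared latent driver
      x = z + 0.1 * rng.normal(size=n)       # depends on z
      y = z + 0.1 * rng.normal(size=n)       # depends on z; indep. of x given z

      def partial_corr_pvalue(x, y, given):
          """p-value for corr(x, y | given) = 0 under a Gaussian assumption."""
          rx = x - np.polyval(np.polyfit(given, x, 1), given)  # residualize on z
          ry = y - np.polyval(np.polyfit(given, y, 1), given)
          r = np.corrcoef(rx, ry)[0, 1]
          zstat = np.sqrt(len(x) - 4) * np.arctanh(r)          # df = n - k - 3, k = 1
          return 2 * stats.norm.sf(abs(zstat))

      print("marginal dependence p-value:   %.3g" % stats.pearsonr(x, y)[1])
      print("conditional (given z) p-value: %.3g" % partial_corr_pvalue(x, y, z))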

  14. Petroleum Market Model of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-01-01

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level. This report is organized as follows: Chapter 2, Model Purpose; Chapter 3, Model Overview and Rationale; Chapter 4, Model Structure; Appendix A, Inventory of Input Data, Parameter Estimates, and Model Outputs; Appendix B, Detailed Mathematical Description of the Model; Appendix C, Bibliography; Appendix D, Model Abstract; Appendix E, Data Quality; Appendix F, Estimation methodologies; Appendix G, Matrix Generator documentation; Appendix H, Historical Data Processing; and Appendix I, Biofuels Supply Submodule.

  15. Petroleum Market Model of the National Energy Modeling System

    International Nuclear Information System (INIS)

    1997-01-01

    The purpose of this report is to define the objectives of the Petroleum Market Model (PMM), describe its basic approach, and provide detail on how it works. This report is intended as a reference document for model analysts, users, and the public. The PMM models petroleum refining activities, the marketing of petroleum products to consumption regions, the production of natural gas liquids in gas processing plants, and domestic methanol production. The PMM projects petroleum product prices and sources of supply for meeting petroleum product demand. The sources of supply include crude oil, both domestic and imported; other inputs including alcohols and ethers; natural gas plant liquids production; petroleum product imports; and refinery processing gain. In addition, the PMM estimates domestic refinery capacity expansion and fuel consumption. Product prices are estimated at the Census division level and much of the refining activity information is at the Petroleum Administration for Defense (PAD) District level. This report is organized as follows: Chapter 2, Model Purpose; Chapter 3, Model Overview and Rationale; Chapter 4, Model Structure; Appendix A, Inventory of Input Data, Parameter Estimates, and Model Outputs; Appendix B, Detailed Mathematical Description of the Model; Appendix C, Bibliography; Appendix D, Model Abstract; Appendix E, Data Quality; Appendix F, Estimation methodologies; Appendix G, Matrix Generator documentation; Appendix H, Historical Data Processing; and Appendix I, Biofuels Supply Submodule.

  16. Modeling influenza-like illnesses through composite compartmental models

    Science.gov (United States)

    Levy, Nir; Iv, Michael; Yom-Tov, Elad

    2018-03-01

    Epidemiological models for the spread of pathogens in a population are usually only able to describe a single pathogen. This makes their application unrealistic in cases where multiple pathogens with similar symptoms are spreading concurrently within the same population. Here we describe a method which makes possible the application of multiple single-strain models under minimal conditions. As such, our method provides a bridge between theoretical models of epidemiology and data-driven approaches for modeling of influenza and other similar viruses. Our model extends the Susceptible-Infected-Recovered model to higher dimensions, allowing the modeling of a population infected by multiple viruses. We further provide a method, based on an overcomplete dictionary of feasible realizations of SIR solutions, to blindly partition the time series representing the number of infected people in a population into individual components, each representing the effect of a single pathogen. We demonstrate the applicability of our proposed method on five years of seasonal influenza-like illness (ILI) rates, estimated from Twitter data. We demonstrate that our method describes, on average, 44% of the variance in the ILI time series. The individual infectious components derived from our model are matched to known viral profiles in the populations, and we demonstrate that these match independently collected epidemiological data. We further show that the basic reproductive numbers (R0) of the matched components are in the range known for these pathogens. Our results suggest that the proposed method can be applied to other pathogens and geographies, providing a simple method for estimating the parameters of epidemics in a population.
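
    The composite idea can be sketched by summing independent SIR components; a minimal version with two illustrative strains follows (parameters are invented, not fitted to the Twitter-derived ILI series):

      import numpy as np
      from scipy.integrate import odeint

      # Two independent SIR components whose summed I(t) mimics a composite
      # ILI signal; R0 values and seed sizes are invented, not fitted.
      def sir(state, t, beta, gamma):
          s, i, r = state
          return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

      t = np.linspace(0.0, 180.0, 181)               # days
      gamma = 1.0 / 4.0                              # ~4-day infectious period
      components = []
      for r0, i0 in [(1.4, 1e-4), (1.8, 1e-5)]:      # two co-circulating strains
          sol = odeint(sir, [1.0 - i0, i0, 0.0], t, args=(r0 * gamma, gamma))
          components.append(sol[:, 1])               # infected fraction I(t)

      ili = np.sum(components, axis=0)               # composite "ILI" curve
      print("peak composite ILI fraction: %.4f" % ili.max())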

  17. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ahlers, C.; Liu, H.

    2000-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the ''AMR Development Plan for U0035 Calibrated Properties Model REV00''. These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as for use by Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  18. Modeling the Frequency of Cyclists’ Red-Light Running Behavior Using Bayesian PG Model and PLN Model

    Directory of Open Access Journals (Sweden)

    Yao Wu

    2016-01-01

    Full Text Available Red-light running behaviors of bicycles at signalized intersections lead to a large number of traffic conflicts and high collision potential. The primary objective of this study is to model cyclists' red-light running frequency within the framework of Bayesian statistics. Data were collected at twenty-five approaches at seventeen signalized intersections. Poisson-gamma (PG) and Poisson-lognormal (PLN) models were developed and compared. The models were validated using Bayesian p values based on posterior predictive checking indicators. It was found that both models fit the observed cyclists' red-light running frequency well. Furthermore, the PLN model outperformed the PG model. The model estimation results showed that the amount of cyclists' red-light running is significantly influenced by bicycle flow, conflicting traffic flow, pedestrian signal type, vehicle speed, and e-bike rate. The validation result demonstrated the reliability of the PLN model. The research results can help transportation professionals predict the expected amount of cyclists' red-light running and develop effective guidelines or policies to reduce the red-light running frequency of bicycles at signalized intersections.
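
    A Poisson-gamma count model is equivalent to a negative binomial; a minimal frequentist sketch of fitting one by maximum likelihood on invented data is given below (the paper itself works in a Bayesian framework, and the covariate here is hypothetical):

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import gammaln

      # Maximum-likelihood negative binomial (Poisson-gamma) regression on
      # simulated counts; a frequentist stand-in for the Bayesian PG fit.
      rng = np.random.default_rng(2)
      bike_flow = rng.uniform(50.0, 500.0, size=200)       # invented covariate
      mu_true = np.exp(0.5 + 0.004 * bike_flow)
      y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu_true))

      def negloglik(params):
          b0, b1, log_alpha = params
          mu = np.exp(b0 + b1 * bike_flow)
          r = np.exp(-log_alpha)                           # NB size parameter
          ll = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
                + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))
          return -ll.sum()

      fit = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead",
                     options={"maxiter": 5000})
      print("estimated intercept and flow effect:", np.round(fit.x[:2], 3))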

  19. An improved interfacial bonding model for material interface modeling

    Science.gov (United States)

    Lin, Liqiang; Wang, Xiaodu; Zeng, Xiaowei

    2016-01-01

    An improved interfacial bonding model was proposed from a potential-function point of view to investigate interfacial interactions in polycrystalline materials. It characterizes both attractive and repulsive interfacial interactions and can be applied to model different material interfaces. The path-dependence study of the work of separation indicates that the transformation of separation work is smooth in the normal and tangential directions and that the proposed model guarantees the consistency of the cohesive constitutive model. The improved interfacial bonding model was verified through a simple compression test on a standard hexagonal structure. The error between analytical solutions and numerical results from the proposed model is reasonable in the linear elastic region. Ultimately, we investigated the mechanical behavior of the extrafibrillar matrix in bone, and the simulation results agreed well with experimental observations of bone fracture. PMID:28584343

  20. Modelling freight transport

    NARCIS (Netherlands)

    Tavasszy, L.A.; Jong, G. de

    2014-01-01

    Freight Transport Modelling is a unique new reference book that provides insight into the state-of-the-art of freight modelling. Focusing on models used to support public transport policy analysis, Freight Transport Modelling systematically introduces the latest freight transport modelling

  1. Adhesive contact: from atomistic model to continuum model

    International Nuclear Information System (INIS)

    Fan Kang-Qi; Jia Jian-Yuan; Zhu Ying-Min; Zhang Xiu-Yan

    2011-01-01

    Two types of Lennard-Jones potential are widely used in modeling adhesive contacts. However, the relationships between the parameters of the two types of Lennard-Jones potential are not well defined. This paper employs a self-consistent method to derive the Lennard-Jones surface force law from the interatomic Lennard-Jones potential with emphasis on the relationships between the parameters. The effect of using correct parameters in the adhesion models is demonstrated in single sphere-flat contact via continuum models and an atomistic model. Furthermore, the adhesion hysteresis behaviour is investigated, and the S-shaped force-distance relation is revealed by the atomistic model. It shows that the adhesion hysteresis loop is generated by the jump-to-contact and jump-off-contact, which are illustrated by the S-shaped force-distance curve. (atomic and molecular physics)
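
    As a minimal numeric illustration of how Lennard-Jones parameters must relate to each other, the sketch below uses the classic 3-9 surface traction law between half-spaces, parameterized by an assumed work of adhesion and equilibrium gap, and checks that integrating the traction recovers the work of adhesion. This illustrates the general idea, not the paper's specific derivation:

      import numpy as np
      from scipy.integrate import quad

      # Classic 3-9 Lennard-Jones surface traction between two half-spaces,
      # parameterized by an assumed work of adhesion and equilibrium gap.
      dgamma = 0.05    # work of adhesion (J/m^2), illustrative
      h0 = 0.4e-9      # equilibrium separation (m), illustrative

      def traction(h):
          """Net traction at gap h (Pa): adhesive 3-term minus repulsive 9-term."""
          return (8.0 * dgamma / (3.0 * h0)) * ((h0 / h) ** 3 - (h0 / h) ** 9)

      # Parameter-consistency check: the traction integrated from the
      # equilibrium gap outward must recover the work of adhesion.
      work, _ = quad(traction, h0, np.inf)
      print("recovered work of adhesion: %.4f J/m^2 (target %.4f)" % (work, dgamma))
      print("peak adhesive stress: %.0f MPa" % (traction(3 ** (1 / 6) * h0) / 1e6))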

  2. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The target in this work is to validate the component functions of model output between physical observation and computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of the model output, and these conditional expectations reflect partial information about the model output. Therefore, the model validation of conditional expectations quantifies the discrepancy between the partial information of the computational model output and that of the observations. A calibration of the conditional expectations is then carried out to reduce the value of the model validation metric. After that, the model validation metric of the model output is recalculated with the calibrated model parameters, and the result shows that reducing the discrepancy in the conditional expectations can help decrease the difference in model output. At last, several examples are employed to demonstrate the rationality and necessity of the methodology in the case of both a single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration processes are applied at single and multiple sites. • The validation and calibration process shows superiority over existing methods.
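
    A toy version of the two ingredients, assuming invented observation and model functions: first-order component functions E[Y | X_i] estimated by binning, compared with an area metric computed as the area between empirical CDFs.

      import numpy as np

      # Component functions E[Y | X_i] by binning, compared with an area metric
      # (area between empirical CDFs); both "models" below are hypothetical.
      rng = np.random.default_rng(3)
      x = rng.uniform(-1.0, 1.0, size=(2000, 2))
      y_obs = np.sin(np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2   # "observations"
      y_mod = np.sin(np.pi * x[:, 0]) + 0.7 * x[:, 1] ** 2   # biased "model"

      def conditional_expectation(xi, y, bins=20):
          """Binned estimate of the component function E[y | x_i]."""
          edges = np.linspace(xi.min(), xi.max(), bins + 1)
          idx = np.clip(np.digitize(xi, edges) - 1, 0, bins - 1)
          return np.array([y[idx == b].mean() for b in range(bins)])

      def area_metric(a, b):
          """Area between the empirical CDFs of the samples a and b."""
          grid = np.sort(np.concatenate([a, b]))
          cdf = lambda s: np.searchsorted(np.sort(s), grid, side="right") / len(s)
          return np.trapz(np.abs(cdf(a) - cdf(b)), grid)

      ce_obs = conditional_expectation(x[:, 1], y_obs)
      ce_mod = conditional_expectation(x[:, 1], y_mod)
      print("area metric on E[Y | X_2]: %.4f" % area_metric(ce_obs, ce_mod))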

  3. Finsler Geometry Modeling of an Orientation-Asymmetric Surface Model for Membranes

    Science.gov (United States)

    Proutorov, Evgenii; Koibuchi, Hiroshi

    2017-12-01

    In this paper, a triangulated surface model is studied in the context of Finsler geometry (FG) modeling. This FG model is an extended version of a recently reported model for two-component membranes, and it is asymmetric under surface inversion. We show that the definition of the model is independent of how the Finsler length of a bond is defined. This leads us to understand that the canonical (or Euclidean) surface model is obtained from the FG model such that it is uniquely determined as a trivial model from the viewpoint of well-definedness.

  4. Biosphere Model Report

    Energy Technology Data Exchange (ETDEWEB)

    D.W. Wu; A.J. Smith

    2004-11-08

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  5. Biosphere Model Report

    International Nuclear Information System (INIS)

    D.W. Wu; A.J. Smith

    2004-01-01

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7)

  6. Model documentation: Natural Gas Transmission and Distribution Model of the National Energy Modeling System; Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-02-24

    The Natural Gas Transmission and Distribution Model (NGTDM) is a component of the National Energy Modeling System (NEMS) used to represent the domestic natural gas transmission and distribution system. NEMS is the third in a series of computer-based, midterm energy modeling systems used since 1974 by the Energy Information Administration (EIA) and its predecessor, the Federal Energy Administration, to analyze domestic energy-economy markets and develop projections. This report documents the archived version of NGTDM that was used to produce the natural gas forecasts used in support of the Annual Energy Outlook 1994, DOE/EIA-0383(94). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic design, provides detail on the methodology employed, and describes the model inputs, outputs, and key assumptions. It is intended to fulfill the legal obligation of the EIA to provide adequate documentation in support of its models (Public Law 94-385, Section 57.b.2). This report represents Volume 1 of a two-volume set. (Volume 2 will report on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.) Subsequent chapters of this report provide: (1) an overview of the NGTDM (Chapter 2); (2) a description of the interface between the National Energy Modeling System (NEMS) and the NGTDM (Chapter 3); (3) an overview of the solution methodology of the NGTDM (Chapter 4); (4) the solution methodology for the Annual Flow Module (Chapter 5); (5) the solution methodology for the Distributor Tariff Module (Chapter 6); (6) the solution methodology for the Capacity Expansion Module (Chapter 7); (7) the solution methodology for the Pipeline Tariff Module (Chapter 8); and (8) a description of model assumptions, inputs, and outputs (Chapter 9).

  7. SU-E-T-115: Dose Perturbation Study of Self-Expandable Metal and Polyester Esophageal Stents in Proton Therapy Beams

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Li, Z [University of Florida Proton Therapy Institute, Jacksonville, FL (United States); Jalaj, S; McGaw, C; John, B K; Scolapio, J S; Munoz, J C [Division of Gastroenterology, Department of Medicine, University of Florida, Jacksonville, FL (United States)

    2014-06-01

    Purpose: This work investigates dose perturbations due to self-expandable metal and polyester esophageal stents in proton radiotherapy for esophageal cancer. Methods: Five commercially available esophageal stents made of nitinol (Evolution, Wallflex and Ultraflex), stainless steel (Z-Stent) and polyester (Polyflex) were tested. Radiochromic film (GafChromic EBT3 film, Ashland, Covington, KY) wrapped around a stent and a 12 cc syringe was irradiated with 2 CGE (Cobalt Gray Equivalent) of a proton beam in a custom-fabricated acrylic phantom. An air-hollow syringe simulated the esophagus. Results: The Z-stent created the largest dose perturbations, ranging from −14.5% to 6.1%, due to its steel composition. The WallFlex, Evolution and Ultraflex stents produced dose perturbation ranges of (−9.2%∼8.6%), (−6.8%∼5.7%) and (−6.2%∼6.2%), respectively. The PolyFlex stent contains radiopaque tungsten markers at its top, middle and bottom portions. When the focal cold spots induced by the markers were excluded from the analysis, the dose perturbation range changed from (−11.6%∼6.4%) to (−0.6%∼5.0%). Conclusion: The magnitude of dose perturbation is related to the material of a metallic stent. A non-metallic stent such as the PolyFlex shows relatively lower dose perturbation than metallic stents, except in the radiopaque marker regions. Overall, the Evolution and Ultraflex stents appear to produce the smallest dose perturbations. The largest dose perturbations (cold spots) were located at both edges of the stents in the distal area for this single-proton-beam irradiation study. Analysis of plans with two or more proton beams, a more typical clinical beam arrangement, would be necessary to minimize the dose perturbation effect in proton radiotherapy.

  8. Northern Cheyenne Reservation Coal Bed Natural Resource Assessment and Analysis of Produced Water Disposal Options

    Energy Technology Data Exchange (ETDEWEB)

    Shaochang Wo; David A. Lopez; Jason Whiteman Sr.; Bruce A. Reynolds

    2004-07-01

    Coalbed methane (CBM) development in the Powder River Basin (PRB) is currently one of the most active gas plays in the United States. Monthly production in 2002 reached about 26 BCF in the Wyoming portion of the basin. Coalbed methane reserves for the Wyoming portion of the basin are approximately 25 trillion cubic feet (TCF). Although coal beds in the Powder River Basin extend well into Montana, including the area of the Northern Cheyenne Indian Reservation, the only CBM development in Montana is the CX Field, operated by Fidelity Exploration, near the Wyoming border. The Northern Cheyenne Reservation is located on the northwest flank of the PRB in Montana, with a total land area of 445,000 acres. The Reservation consists of five districts (Lame Deer, Busby, Ashland, Birney, and Muddy Cluster) and has a population of 4,470 according to the 2000 Census. The CBM resource represents a significant potential asset to the Northern Cheyenne Indian Tribe. Methane gas in coal beds is trapped by hydrodynamic pressure. Because the production of CBM involves dewatering the coalbed to allow the release of methane gas from the coal matrix, the relatively large volume of co-produced water and its potential environmental impacts are the primary concerns for the Tribe. Presented in this report is a study conducted by the Idaho National Engineering and Environmental Laboratory (INEEL) and the Montana Bureau of Mines and Geology (MBMG), in partnership with the Northern Cheyenne Tribe, to assess the Tribe's CBM resources and evaluate applicable water handling options. The project was supported by the U.S. Department of Energy (DOE) through the Native American Initiative of the National Petroleum Technology Office, under contract DE-AC07-99ID13727. Matching funds were granted by the MBMG in support of the geologic study and mapping conducted at MBMG.

  9. SU-F-T-490: Separating Effects Influencing Detector Response in Small MV Photon Fields

    Energy Technology Data Exchange (ETDEWEB)

    Wegener, S; Sauer, O [University of Wuerzburg, Wuerzburg (Germany)

    2016-06-15

    Purpose: Different detector properties influence detector response, especially in field sizes below the lateral electron range. Due to the finite active volume, the detector density, and electron perturbation by other structural parts, the response factor is in general field-size dependent. We aimed to visualize and separate the main effects contributing to detector behavior for a variety of detector types. This was achieved in an experimental setup shielding the field center, so that effects caused by scattered radiation could be examined separately. Methods: Signal ratios for field sizes down to 8 mm (SSD 90 cm, water depth 10 cm) of a 6 MV beam from a Siemens Primus LINAC were recorded with several detectors: PTW microDiamond and PinPoint ionization chamber, shielded diodes (PTW P-60008, IBA PFD and SNC Edge) and unshielded diodes (PTW E-60012 and IBA SFD). Measurements were carried out in open fields and with an aluminum pole of 4 mm diameter as a central block. The geometric volume effect was calculated from profiles obtained with Gafchromic EBT3 film, evaluated using FilmQA Pro software (Ashland, USA). Results: Volume corrections were 1.7% at maximum. After correction, in small open fields, unshielded diodes showed a lower response than the diamond, i.e. the diamond detector over-response seems to be higher than that of unshielded diodes. Beneath the block, this behavior was amplified by a factor of 2. For the shielded diodes, the over-response in small open fields could be confirmed. However, their lateral response behavior was strongly type dependent; e.g. the signal ratio dropped from 1.02 to 0.98 for the P-60008 diode. Conclusion: The lateral detector response was experimentally examined. Detector volume and density alone do not fully account for the field-size dependence of detector response. Detector construction details play a major role, especially for shielded diodes.

  10. A multi-model assessment of terrestrial biosphere model data needs

    Science.gov (United States)

    Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.

    2017-12-01

    Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight on data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED) model outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure to overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's Hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial

  11. International Nuclear Model personal computer (PCINM): Model documentation

    International Nuclear Information System (INIS)

    1992-08-01

    The International Nuclear Model (INM) was developed to assist the Energy Information Administration (EIA), U.S. Department of Energy (DOE) in producing worldwide projections of electricity generation, fuel cycle requirements, capacities, and spent fuel discharges from commercial nuclear reactors. The original INM was developed, maintained, and operated on a mainframe computer system. In spring 1992, a streamlined version of INM was created for use on a microcomputer utilizing CLIPPER and PCSAS software. This new version is known as PCINM. This documentation is based on the new PCINM version. This document is designed to satisfy the requirements of several categories of users of the PCINM system including technical analysts, theoretical modelers, and industry observers. This document assumes the reader is familiar with the nuclear fuel cycle and each of its components. This model documentation contains four chapters and seven appendices. Chapter Two presents the model overview containing the PCINM structure and process flow, the areas for which projections are made, and input data and output reports. Chapter Three presents the model technical specifications showing all model equations, algorithms, and units of measure. Chapter Four presents an overview of all parameters, variables, and assumptions used in PCINM. The appendices present the following detailed information: variable and parameter listings, variable and equation cross reference tables, source code listings, file layouts, sample report outputs, and model run procedures. 2 figs

  12. A comprehensive model for piezoceramic actuators: modelling, validation and application

    International Nuclear Information System (INIS)

    Quant, Mario; Elizalde, Hugo; Flores, Abiud; Ramírez, Ricardo; Orta, Pedro; Song, Gangbing

    2009-01-01

    This paper presents a comprehensive model for piezoceramic actuators (PAs), which accounts for hysteresis, non-linear electric field and dynamic effects. The hysteresis model is based on the widely used general Maxwell slip model, while an enhanced electro-mechanical non-linear model replaces the linear constitutive equations commonly used. Further on, a linear second-order model compensates for the frequency response of the actuator. Each individual model is fully characterized from experimental data yielded by a specific PA, then incorporated into a comprehensive 'direct' model able to determine the output strain based on the applied input voltage, fully compensating the aforementioned effects, where the term 'direct' represents an electrical-to-mechanical operating path. The 'direct' model was implemented in a Matlab/Simulink environment and successfully validated via experimental results, exhibiting higher accuracy and simplicity than many published models. This simplicity would allow a straightforward inclusion of other behaviour such as creep, ageing, material non-linearity, etc, if such parameters are important for a particular application. Based on the same formulation, two other models are also presented: the first is an 'alternate' model intended to operate within a force-controlled scheme (instead of a displacement/position control), thus able to capture the complex mechanical interactions occurring between a PA and its host structure. The second development is an 'inverse' model, able to operate within an open-loop control scheme, that is, yielding a 'linearized' PA behaviour. The performance of the developed models is demonstrated via a numerical sample case simulated in Matlab/Simulink, consisting of a PA coupled to a simple mechanical system, aimed at shifting the natural frequency of the latter.
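
    The hysteresis stage can be sketched with a small generalized Maxwell-slip operator of the kind the paper builds on; the element stiffnesses and breakaway deflections below are invented, not identified from a real actuator:

      import numpy as np

      # Generalized Maxwell-slip hysteresis: parallel stick-slip elements with
      # stiffness k and breakaway deflection w (values invented).
      k = np.array([2.0, 1.0, 0.5])
      w = np.array([0.2, 0.4, 0.6])

      def maxwell_slip(displacement):
          """Hysteretic force for a displacement sequence (starts at rest)."""
          z = np.zeros_like(k)                       # elastic deflection per element
          force = []
          x_prev = 0.0
          for x in displacement:
              z = np.clip(z + (x - x_prev), -w, w)   # stick, saturating at slip
              force.append(float(k @ z))
              x_prev = x
          return np.array(force)

      t = np.linspace(0.0, 4.0 * np.pi, 800)
      x = np.sin(t) * np.linspace(0.2, 1.0, t.size)  # growing cyclic input
      f = maxwell_slip(x)
      print("force span over the final cycle: %.3f" % np.ptp(f[-200:]))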

  13. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Directory of Open Access Journals (Sweden)

    Nikola Simidjievski

    Full Text Available Ensembles are a well-established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting) significantly improve predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase in the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles by sampling domain-specific knowledge instead of sampling data. We apply the proposed method to, and evaluate its performance on, a set of problems of automated predictive modeling in three lake ecosystems using a library of process-based knowledge for modeling population dynamics. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to that of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.

  14. Competency Modeling in Extension Education: Integrating an Academic Extension Education Model with an Extension Human Resource Management Model

    Science.gov (United States)

    Scheer, Scott D.; Cochran, Graham R.; Harder, Amy; Place, Nick T.

    2011-01-01

    The purpose of this study was to compare and contrast an academic extension education model with an Extension human resource management model. The academic model of 19 competencies was similar across the 22 competencies of the Extension human resource management model. There were seven unique competencies for the human resource management model.…

  15. BioModels Database: a repository of mathematical models of biological processes.

    Science.gov (United States)

    Chelliah, Vijayalakshmi; Laibe, Camille; Le Novère, Nicolas

    2013-01-01

    BioModels Database is a public online resource that allows storing and sharing of published, peer-reviewed, quantitative, dynamic models of biological processes. The model components and behaviour are thoroughly checked to correspond to the original publication and manually curated to ensure reliability. Furthermore, the model elements are annotated with terms from controlled vocabularies as well as linked to relevant external data resources. This greatly helps in model interpretation and reuse. Models are accepted in SBML and CellML formats, stored in SBML format, and are available for download in various other common formats such as BioPAX, Octave, SciLab, VCML, XPP and PDF, in addition to SBML. The reaction network diagram of the models is also available in several formats. BioModels Database features a search engine, which provides simple and more advanced searches. Features such as online simulation and creation of smaller models (submodels) from the selected model elements of a larger one are provided. BioModels Database can be accessed both via a web interface and programmatically via web services. New models are made available in BioModels Database at regular releases, about every 4 months.

  16. Modeling Distillation Column Using ARX Model Structure and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Reza Pirmoradi

    2012-04-01

    Full Text Available Distillation is a complex and highly nonlinear industrial process. In general it is not always possible to obtain accurate first-principles models for high-purity distillation columns. On the other hand, the development of first-principles models is usually time consuming and expensive. To overcome these problems, empirical models such as neural networks can be used. One major drawback of empirical models is that the prediction is valid only inside the data domain that is sufficiently covered by measurement data. Modeling distillation columns by means of neural networks has been reported in the literature using recursive networks. Recursive networks are suitable for modeling purposes, but such models suffer from high complexity and high computational cost. The objective of this paper is to propose a simple and reliable model for a distillation column. The proposed model uses feedforward neural networks, which results in a simple model with fewer parameters and faster training time. Simulation results demonstrate that the predictions of the proposed model in all regions are close to the outputs of the dynamic model and the error is negligible. This implies that the model is reliable in all regions.
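
    The ARX half of the title can be sketched directly: a single-input ARX(2,2) model fitted by ordinary least squares to data from a synthetic "plant" (the paper's column model and neural network are not reproduced here; all coefficients are invented):

      import numpy as np

      # Single-input ARX(2,2) model fitted by ordinary least squares to a
      # synthetic "plant"; coefficients below are invented for illustration.
      rng = np.random.default_rng(4)
      n = 400
      u = rng.normal(size=n)                  # input (e.g., reflux ratio)
      y = np.zeros(n)
      for i in range(2, n):                   # synthetic true dynamics + noise
          y[i] = (1.2 * y[i - 1] - 0.4 * y[i - 2]
                  + 0.5 * u[i - 1] + 0.1 * u[i - 2] + 0.01 * rng.normal())

      # Regression form y[i] = phi[i] @ theta with lagged outputs and inputs.
      phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
      theta, *_ = np.linalg.lstsq(phi, y[2:], rcond=None)
      print("estimated [a1, a2, b1, b2]:", np.round(theta, 3))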

  17. Constitutive Models

    DEFF Research Database (Denmark)

    Sales-Cruz, Mauricio; Piccolo, Chiara; Heitzig, Martina

    2011-01-01

    covered, illustrating several models such as the Wilson equation and NRTL equation, along with their solution strategies. A section shows how to use experimental data to regress the property model parameters using a least squares approach. A full model analysis is applied in each example that discusses ... the degrees of freedom, dependent and independent variables and solution strategy. Vapour-liquid and solid-liquid equilibrium is covered, and applications to droplet evaporation and kinetic models are given ...
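
    The least-squares regression step can be illustrated with a simpler property model than the Wilson or NRTL equations: below, an Antoine vapour-pressure correlation is fitted to synthetic measurements (the parameter values are invented, roughly water-like):

      import numpy as np
      from scipy.optimize import curve_fit

      # Least-squares regression of property-model parameters, illustrated
      # with an Antoine correlation; "data" are synthetic.
      def antoine(T, A, B, C):
          """log10(Psat/mmHg) = A - B / (T + C), with T in kelvin."""
          return A - B / (T + C)

      T = np.linspace(300.0, 360.0, 13)
      true_params = (8.07, 1730.6, -39.7)
      noise = np.random.default_rng(5).normal(0.0, 0.005, T.size)
      log10_p = antoine(T, *true_params) + noise

      params, cov = curve_fit(antoine, T, log10_p, p0=(8.0, 1700.0, -40.0))
      print("fitted (A, B, C):", np.round(params, 2))
      print("standard errors:", np.round(np.sqrt(np.diag(cov)), 3))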

  18. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ahlers, C.F.; Liu, H.H.

    2001-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the AMR Development Plan for U0035 Calibrated Properties Model REV00 (CRWMS M and O 1999c). These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions

  19. Algebraic formulation of collective models. I. The mass quadrupole collective model

    International Nuclear Information System (INIS)

    Rosensteel, G.; Rowe, D.J.

    1979-01-01

    This paper is the first in a series of three which together present a microscopic formulation of the Bohr-Mottelson (BM) collective model of the nucleus. In this article the mass quadrupole collective (MQC) model is defined and shown to be a generalization of the BM model. The MQC model eliminates the small-oscillation assumption of BM and also yields the rotational and CM(3) submodels by holonomic constraints on the MQC configuration space. In addition, the MQC model is demonstrated to be an algebraic model, so that the state space of the MQC model carries an irrep of a Lie algebra of microscopic observables, the MQC algebra. An infinite class of new collective models is then given by the various inequivalent irreps of this algebra. A microscopic embedding of the BM model is achieved by decomposing the representation of the MQC algebra on many-particle state space into its irreducible components. In the second paper this decomposition is studied in detail. The third paper presents the symplectic model, which provides the realization of the collective model in the harmonic oscillator shell model

  20. Modeling of immission from power plants using stream-diffusion model

    International Nuclear Information System (INIS)

    Kanevce, Lj.; Kanevce, G.; Markoski, A.

    1996-01-01

    An analysis of simple empirical and integral immission models, compared with complex three-dimensional differential models, is given. Complex differential models need huge computer power, so they are not useful for practical engineering calculations. In this paper, immission modeling using a stream-diffusion approach is presented. The process of dispersion is divided into two parts. The first part, called the stream part, lies near the source of the pollutants and is represented by a deflected turbulent jet in a wind field. This part ends when the velocity of the stream (jet) becomes equal to the wind speed. The boundary conditions at the end of the first part are the initial conditions for the second part, called the diffusion part, which is modeled with a three-dimensional diffusion equation. The temperature gradient, wind speed profile and diffusion coefficient in this model need not be constant; they can change with height. The presented model is much simpler than the complete meteorological differential models, which calculate whole fields of meteorological parameters. It is also more complex, and gives more valuable results for the dispersion of pollutants, than the widely used integral and empirical models.
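
    For the diffusion part, under the stronger simplifying assumptions of constant wind speed and diffusivities, the advection-diffusion equation has the familiar Gaussian-plume solution. The sketch below uses illustrative dispersion coefficients and an assumed effective source height standing in for the jet stage; the paper itself allows height-varying profiles, so this is only a limiting case:

      import numpy as np

      # Steady Gaussian-plume solution of the advection-diffusion equation for
      # a point source in uniform wind, with ground reflection. The effective
      # height H stands in for the jet stage; all values are illustrative.
      Q, u, H = 100.0, 5.0, 120.0        # source (g/s), wind (m/s), height (m)
      sigma_y = lambda x: 0.08 * x / np.sqrt(1.0 + 1e-4 * x)    # assumed spreads
      sigma_z = lambda x: 0.06 * x / np.sqrt(1.0 + 1.5e-3 * x)

      def concentration(x, y, z):
          sy, sz = sigma_y(x), sigma_z(x)
          return (Q / (2.0 * np.pi * u * sy * sz)
                  * np.exp(-y ** 2 / (2.0 * sy ** 2))
                  * (np.exp(-(z - H) ** 2 / (2.0 * sz ** 2))
                     + np.exp(-(z + H) ** 2 / (2.0 * sz ** 2))))

      x = np.linspace(200.0, 5000.0, 200)
      ground = concentration(x, 0.0, 0.0) * 1e6          # centerline, ug/m^3
      print("max ground-level concentration: %.1f ug/m^3 at x = %.0f m"
            % (ground.max(), x[ground.argmax()]))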

  1. Simulation modelling of fynbos ecosystems: Systems analysis and conceptual models

    CSIR Research Space (South Africa)

    Kruger, FJ

    1985-03-01

    Full Text Available -animal interactions. An additional two models, which expand aspects of the FYNBOS model, are described: a model for simulating canopy processes; and a Fire Recovery Simulator. The canopy process model will simulate ecophysiological processes in more detail than FYNBOS...

  2. Mathematical models for sleep-wake dynamics: comparison of the two-process model and a mutual inhibition neuronal model.

    Directory of Open Access Journals (Sweden)

    Anne C Skeldon

    Full Text Available Sleep is essential for the maintenance of the brain and the body, yet many features of sleep are poorly understood and mathematical models are an important tool for probing proposed biological mechanisms. The most well-known mathematical model of sleep regulation, the two-process model, models the sleep-wake cycle by two oscillators: a circadian oscillator and a homeostatic oscillator. An alternative, more recent, model considers the mutual inhibition of sleep promoting neurons and the ascending arousal system regulated by homeostatic and circadian processes. Here we show there are fundamental similarities between these two models. The implications are illustrated with two important sleep-wake phenomena. Firstly, we show that in the two-process model, transitions between different numbers of daily sleep episodes can be classified as grazing bifurcations. This provides the theoretical underpinning for numerical results showing that the sleep patterns of many mammals can be explained by the mutual inhibition model. Secondly, we show that when sleep deprivation disrupts the sleep-wake cycle, ostensibly different measures of sleepiness in the two models are closely related. The demonstration of the mathematical similarities of the two models is valuable because not only does it allow some features of the two-process model to be interpreted physiologically but it also means that knowledge gained from study of the two-process model can be used to inform understanding of the behaviour of the mutual inhibition model. This is important because the mutual inhibition model and its extensions are increasingly being used as a tool to understand a diverse range of sleep-wake phenomena such as the design of optimal shift-patterns, yet the values it uses for parameters associated with the circadian and homeostatic processes are very different from those that have been experimentally measured in the context of the two-process model.
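
    A minimal simulation of the two-process model: homeostatic pressure S builds while awake, decays while asleep, and switches state at circadian-modulated thresholds. The time constants and thresholds below are illustrative only, not the experimentally measured values discussed in the paper:

      import numpy as np

      # Two-process model: homeostatic pressure S relaxes between wake and
      # sleep dynamics, switching at circadian-modulated thresholds.
      dt = 0.01                              # hours
      t = np.arange(0.0, 96.0, dt)           # four simulated days
      chi_wake, chi_sleep = 18.2, 4.2        # build-up / decay constants (h)

      S = np.empty_like(t)
      S[0], awake, awake_log = 0.5, True, []
      for i in range(1, t.size):
          c = 0.12 * np.sin(2.0 * np.pi * t[i] / 24.0)   # circadian modulation
          upper, lower = 0.60 + c, 0.17 + c              # switching thresholds
          if awake:
              S[i] = 1.0 - (1.0 - S[i - 1]) * np.exp(-dt / chi_wake)
              awake = S[i] < upper           # fall asleep at the upper bound
          else:
              S[i] = S[i - 1] * np.exp(-dt / chi_sleep)
              awake = S[i] < lower           # wake once pressure is low enough
          awake_log.append(awake)

      print("fraction of time awake: %.2f" % np.mean(awake_log))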

  3. Crop rotation modelling-A European model intercomparison

    Czech Academy of Sciences Publication Activity Database

    Kollas, C.; Kersebaum, K. C.; Nendel, C.; Manevski, K.; Müller, C.; Palosuo, T.; Armas-Herrera, C.; Beaudoin, N.; Bindi, M.; Charefeddine, M.; Conradt, T.; Constantin, J.; Eitzinger, J.; Ewert, F.; Ferrise, R.; Gaiser, T.; de Cortazar-Atauri, I. G.; Giglio, L.; Hlavinka, Petr; Hoffman, H.; Hofmann, M.; Launay, M.; Manderscheid, R.; Mary, B.; Mirschel, W.; Moriondo, M.; Olesen, J. E.; Öztürk, I.; Pacholski, A.; Ripoche-Wachter, D.; Roggero, P. P.; Roncossek, S.; Rötter, R. P.; Ruget, F.; Sharif, B.; Trnka, Miroslav; Ventrella, D.; Waha, K.; Wegehenkel, M.; Weigel, H-J.; Wu, L.

    2015-01-01

    Vol. 70, Oct (2015), pp. 98-111. ISSN 1161-0301. Institutional support: RVO:67179843. Keywords: model ensemble * crop simulation models * catch crop * intermediate crop * treatment * multi-year. Subject RIV: GC - Agronomy. Impact factor: 3.186, year: 2015

  4. Calibrated Properties Model

    International Nuclear Information System (INIS)

    Ghezzehej, T.

    2004-01-01

    The purpose of this model report is to document the calibrated properties model that provides calibrated property sets for unsaturated zone (UZ) flow and transport process models (UZ models). The calibration of the property sets is performed through inverse modeling. This work followed, and was planned in, ''Technical Work Plan (TWP) for: Unsaturated Zone Flow Analysis and Model Report Integration'' (BSC 2004 [DIRS 169654], Sections 1.2.6 and 2.1.1.6). Direct inputs to this model report were derived from the following upstream analysis and model reports: ''Analysis of Hydrologic Properties Data'' (BSC 2004 [DIRS 170038]); ''Development of Numerical Grids for UZ Flow and Transport Modeling'' (BSC 2004 [DIRS 169855]); ''Simulation of Net Infiltration for Present-Day and Potential Future Climates'' (BSC 2004 [DIRS 170007]); ''Geologic Framework Model'' (GFM2000) (BSC 2004 [DIRS 170029]). Additionally, this model report incorporates errata of the previous version and closure of the Key Technical Issue agreement TSPAI 3.26 (Section 6.2.2 and Appendix B), and it is revised for improved transparency

  5. Dynamic Metabolic Model Building Based on the Ensemble Modeling Approach

    Energy Technology Data Exchange (ETDEWEB)

    Liao, James C. [Univ. of California, Los Angeles, CA (United States)

    2016-10-01

    Ensemble modeling of kinetic systems addresses the challenges of kinetic model construction, with respect to parameter value selection, and still allows for the rich insights possible from kinetic models. This project aimed to show that constructing, implementing, and analyzing such models is a useful tool for the metabolic engineering toolkit, and that they can result in actionable insights from models. Key concepts are developed and deliverable publications and results are presented.

  6. Modeling patterns in data using linear and related models

    International Nuclear Information System (INIS)

    Engelhardt, M.E.

    1996-06-01

    This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models
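
    The one-covariate case with a time trend reduces to a test of zero slope; a minimal sketch on invented yearly event counts follows (the report's own worked examples use SAS, per its Appendix A):

      import numpy as np
      from scipy import stats

      # One-covariate linear model for a time trend, with the usual t-test of
      # zero slope; the yearly event counts below are invented.
      year = np.arange(1990, 2005)
      events = np.array([12, 11, 13, 9, 10, 8, 9, 7, 8, 6, 7, 5, 6, 5, 4])

      res = stats.linregress(year, events)
      print("slope: %.3f events/year, p-value: %.2g" % (res.slope, res.pvalue))
      # A significantly negative slope indicates a decreasing occurrence rate,
      # subject to the model's independent normal-errors assumptions.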

  7. Transport properties site descriptive model. Guidelines for evaluation and modelling

    International Nuclear Information System (INIS)

    Berglund, Sten; Selroos, Jan-Olof

    2004-04-01

    This report describes a strategy for the development of Transport Properties Site Descriptive Models within the SKB Site Investigation programme. Similar reports have been produced for the other disciplines in the site descriptive modelling (Geology, Hydrogeology, Hydrogeochemistry, Rock mechanics, Thermal properties, and Surface ecosystems). These reports are intended to guide the site descriptive modelling, but also to provide the authorities with an overview of modelling work that will be performed. The site descriptive modelling of transport properties is presented in this report and in the associated 'Strategy for the use of laboratory methods in the site investigations programme for the transport properties of the rock', which describes laboratory measurements and data evaluations. Specifically, the objectives of the present report are to: Present a description that gives an overview of the strategy for developing Site Descriptive Models, and which sets the transport modelling into this general context. Provide a structure for developing Transport Properties Site Descriptive Models that facilitates efficient modelling and comparisons between different sites. Provide guidelines on specific modelling issues where methodological consistency is judged to be of special importance, or where there is no general consensus on the modelling approach. The objectives of the site descriptive modelling process and the resulting Transport Properties Site Descriptive Models are to: Provide transport parameters for Safety Assessment. Describe the geoscientific basis for the transport model, including the qualitative and quantitative data that are of importance for the assessment of uncertainties and confidence in the transport description, and for the understanding of the processes at the sites. Provide transport parameters for use within other discipline-specific programmes. Contribute to the integrated evaluation of the investigated sites. The site descriptive modelling of

  8. Volcanic ash modeling with the NMMB-MONARCH-ASH model: quantification of offline modeling errors

    Science.gov (United States)

    Marti, Alejandro; Folch, Arnau

    2018-03-01

    Volcanic ash modeling systems are used to simulate the atmospheric dispersion of volcanic ash and to generate forecasts that quantify the impacts from volcanic eruptions on infrastructures, air quality, aviation, and climate. The efficiency of response and mitigation actions is directly associated with the accuracy of the volcanic ash cloud detection and modeling systems. Operational forecasts build on offline coupled modeling systems in which meteorological variables are updated at the specified coupling intervals. Despite the concerns from other communities regarding the accuracy of this strategy, the quantification of the systematic errors and shortcomings associated with the offline modeling systems has received no attention. This paper employs the NMMB-MONARCH-ASH model to quantify these errors by employing different quantitative and categorical evaluation scores. The skills of the offline coupling strategy are compared against those from an online forecast considered to be the best estimate of the true outcome. Case studies are considered for a synthetic eruption with constant eruption source parameters and for two historical events, which suitably illustrate the severe aviation disruptive effects of European (2010 Eyjafjallajökull) and South American (2011 Cordón Caulle) volcanic eruptions. Evaluation scores indicate that systematic errors due to the offline modeling are of the same order of magnitude as those associated with the source term uncertainties. In particular, traditional offline forecasts employed in operational model setups can result in significant uncertainties, failing to reproduce, in the worst cases, up to 45-70 % of the ash cloud of an online forecast. These inconsistencies are anticipated to be even more relevant in scenarios in which the meteorological conditions change rapidly in time. The outcome of this paper encourages operational groups responsible for real-time advisories for aviation to consider employing computationally

  9. Empirical Model Building Data, Models, and Reality

    CERN Document Server

    Thompson, James R

    2011-01-01

    Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m

  10. Modeling, robust and distributed model predictive control for freeway networks

    NARCIS (Netherlands)

    Liu, S.

    2016-01-01

    In Model Predictive Control (MPC) for traffic networks, traffic models are crucial since they are used as prediction models for determining the optimal control actions. In order to reduce the computational complexity of MPC for traffic networks, macroscopic traffic models are often used instead of

  11. PEMODELAN DAERAH TANGKAPAN AIR WADUK KELILING DENGAN MODEL SWAT (Keliling Reservoir Catchment Area Modeling Using SWAT Model)

    Directory of Open Access Journals (Sweden)

    Teuku Ferijal

    2015-05-01

    Full Text Available This study aimed to model the watershed area of Keliling Reservoir using the SWAT model. The reservoir is located in Aceh Besar District, Province of Aceh. The model was set up using a 90 m x 90 m digital elevation model, land use data extracted from remote sensing data, and soil characteristics obtained from laboratory analysis of soil samples. The model was calibrated using observed daily reservoir volume, and model performance was analyzed using the RMSE-observations standard deviation ratio (RSR), Nash-Sutcliffe efficiency (NSE), and percent bias (PBIAS). The model delineated the study area into a 3,448 ha watershed comprising 13 subwatersheds and 76 land units (HRUs). The watershed is mostly covered by forest (53%) and grassland (31%). The sensitivity analysis identified the most sensitive parameters, including GW_DELAY, CN2, REVAPMN, ALPHA_BF, SOL_AWC, GW_REVAP, GWQMN, CH_K2, and ESCO. Model performance was rated very good for monthly reservoir volume (NSE 0.95, RSR 0.23, PBIAS 2.97) and decreased when the model was used to analyze daily reservoir inflow (NSE 0.55, RSR 0.67, PBIAS 3.46). Keywords: Keliling Reservoir, SWAT, Watershed
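    The three performance statistics used in the study are standard in hydrological model evaluation and straightforward to compute. A minimal sketch follows; the reservoir volumes below are invented for illustration, not the study's data.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 matches the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rsr(obs, sim):
    """RMSE-observations standard deviation ratio (lower is better)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.sum((obs - sim) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2))

def pbias(obs, sim):
    """Percent bias; positive values indicate underestimation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# Invented monthly reservoir volumes (10^6 m^3): observed vs. simulated
obs = np.array([4.1, 4.8, 5.6, 6.0, 5.2, 4.4, 3.9, 3.5, 3.8, 4.5, 5.1, 5.7])
sim = np.array([4.0, 4.9, 5.4, 6.2, 5.0, 4.5, 4.0, 3.4, 3.9, 4.4, 5.3, 5.5])
print(f"NSE={nse(obs, sim):.2f}  RSR={rsr(obs, sim):.2f}  PBIAS={pbias(obs, sim):.2f}%")
```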

  12. CP^(N-1) model: a toy model for QCD

    International Nuclear Information System (INIS)

    Cant, R.J.; Davis, A.C.

    1979-01-01

    The authors examine the CP^(N-1) models and discuss their relevance as toy models for QCD4. Specifically, they study the role of instantons, theta vacua, and confinement in the 1/N expansion. The results, and comparisons with other two-dimensional models, suggest that most of the interesting features of these models are peculiarities of two-dimensional space-time and cannot be expected to reappear in QCD4. (Auth.)

  13. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    Science.gov (United States)

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. We propose an extension (double BOOT) to a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality, and we compare double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
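    The single-bootstrap idea behind BOOT can be sketched as follows: on each bootstrap resample of the data, select the best candidate model by AIC and record the PM coefficient, then average across resamples. This is an illustrative reading with simulated data, not the authors' procedure in full; in particular, double BOOT adds a second resampling layer that is not shown here.

```python
import numpy as np

def fit_ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return beta, rss

def aic(rss, n, k):
    return n * np.log(rss / n) + 2 * k  # Gaussian OLS form of AIC

rng = np.random.default_rng(1)
n = 500
pm = rng.gamma(3.0, 5.0, n)                            # daily PM level
temp = rng.normal(15.0, 8.0, n)                        # potential confounder
y = 50 + 0.05 * pm + 0.3 * temp + rng.normal(0, 5, n)  # mortality proxy

# Candidate design matrices: PM alone, and PM plus the confounder
candidates = [np.column_stack([np.ones(n), pm]),
              np.column_stack([np.ones(n), pm, temp])]

pm_effects = []
for _ in range(200):                                   # bootstrap replicates
    idx = rng.integers(0, n, n)
    fits = [fit_ols(X[idx], y[idx]) for X in candidates]
    scores = [aic(rss, n, X.shape[1]) for (_, rss), X in zip(fits, candidates)]
    best = int(np.argmin(scores))                      # AIC-selected model
    pm_effects.append(fits[best][0][1])                # its PM coefficient
print(f"averaged PM effect: {np.mean(pm_effects):.4f} "
      f"(bootstrap SD {np.std(pm_effects):.4f})")
```

    A real time-series application would typically resample blocks of days rather than individual days, to respect autocorrelation.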

  14. A Generic Modeling Process to Support Functional Fault Model Development

    Science.gov (United States)

    Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.

    2016-01-01

    Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off-nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.
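    The propagation idea is easy to illustrate: treat the FFM as a directed graph and compute, for each failure mode, the set of observation points it can reach; failure modes with identical signatures form an ambiguity group. The component and sensor names below are invented, and this is a sketch of the concept rather than the project's modeling tool.

```python
from collections import deque

# Invented toy FFM: edges are failure-effect propagation paths
edges = {
    "valve_stuck":    ["flow_low"],
    "pump_degraded":  ["flow_low", "vibration_high"],
    "flow_low":       ["pressure_sensor_low"],
    "vibration_high": ["accel_sensor_high"],
}

def reachable_observations(graph, failure_mode, observations):
    """Breadth-first propagation from a failure mode to observation points."""
    seen, queue, hits = {failure_mode}, deque([failure_mode]), set()
    while queue:
        node = queue.popleft()
        if node in observations:
            hits.add(node)
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return hits

obs_points = {"pressure_sensor_low", "accel_sensor_high"}
for fm in ("valve_stuck", "pump_degraded"):
    print(fm, "->", sorted(reachable_observations(edges, fm, obs_points)))
# Here the vibration observation separates the two failure modes;
# without it they would form a single ambiguity group.
```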

  15. Global Analysis, Interpretation and Modelling: An Earth Systems Modelling Program

    Science.gov (United States)

    Moore, Berrien, III; Sahagian, Dork

    1997-01-01

    The Goal of the GAIM is: To advance the study of the coupled dynamics of the Earth system using as tools both data and models; to develop a strategy for the rapid development, evaluation, and application of comprehensive prognostic models of the Global Biogeochemical Subsystem which could eventually be linked with models of the Physical-Climate Subsystem; to propose, promote, and facilitate experiments with existing models or by linking subcomponent models, especially those associated with IGBP Core Projects and with WCRP efforts. Such experiments would be focused upon resolving interface issues and questions associated with developing an understanding of the prognostic behavior of key processes; to clarify key scientific issues facing the development of Global Biogeochemical Models and the coupling of these models to General Circulation Models; to assist the Intergovernmental Panel on Climate Change (IPCC) process by conducting timely studies that focus upon elucidating important unresolved scientific issues associated with the changing biogeochemical cycles of the planet and upon the role of the biosphere in the physical-climate subsystem, particularly its role in the global hydrological cycle; and to advise the SC-IGBP on progress in developing comprehensive Global Biogeochemical Models and to maintain scientific liaison with the WCRP Steering Group on Global Climate Modelling.

  16. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in the sciences. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation, and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. Thereby, a new EM-based approach for estimating model errors is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In effect, it combines statistical and dynamical methods to a certain extent.
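    The experimental setup can be reproduced in outline: integrate the classic Lorenz (1963) equations as the imperfect prediction model alongside a "true" system that carries an extra periodic term, and treat the difference as the model error to be estimated. The periodic forcing below (0.5 sin 0.1t on the y-equation) is an invented stand-in for the paper's evolutionary function, and the EM-based estimation itself is not reproduced.

```python
import numpy as np

def lorenz63(t, s, model_error=False):
    """Classic Lorenz (1963) system; the 'true' system adds a small
    periodic term standing in for unknown model-structure error."""
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    x, y, z = s
    err = 0.5 * np.sin(0.1 * t) if model_error else 0.0
    return np.array([sigma * (y - x),
                     x * (rho - z) - y + err,
                     x * y - beta * z])

def rk4(f, s, t, dt, **kw):
    k1 = f(t, s, **kw)
    k2 = f(t + dt / 2, s + dt / 2 * k1, **kw)
    k3 = f(t + dt / 2, s + dt / 2 * k2, **kw)
    k4 = f(t + dt, s + dt * k3, **kw)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.01, 1000
truth = np.array([1.0, 1.0, 1.0])   # generates the "observations"
model = truth.copy()                # imperfect prediction model
for i in range(steps):
    truth = rk4(lorenz63, truth, i * dt, dt, model_error=True)
    model = rk4(lorenz63, model, i * dt, dt, model_error=False)
print("state error after 10 time units:", np.round(truth - model, 3))
```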

  17. SEP modeling based on global heliospheric models at the CCMC

    Science.gov (United States)

    Mays, M. L.; Luhmann, J. G.; Odstrcil, D.; Bain, H. M.; Schwadron, N.; Gorby, M.; Li, Y.; Lee, K.; Zeitlin, C.; Jian, L. K.; Lee, C. O.; Mewaldt, R. A.; Galvin, A. B.

    2017-12-01

    Heliospheric models provide contextual information of conditions in the heliosphere, including the background solar wind conditions and shock structures, and are used as input to SEP models, providing an essential tool for understanding SEP properties. The global 3D MHD WSA-ENLIL+Cone model provides a time-dependent background heliospheric description, into which a spherical shaped hydrodynamic CME can be inserted. ENLIL simulates solar wind parameters and additionally one can extract the magnetic topologies of observer-connected magnetic field lines and all plasma and shock properties along those field lines. An accurate representation of the background solar wind is necessary for simulating transients. ENLIL simulations also drive SEP models such as the Solar Energetic Particle Model (SEPMOD) (Luhmann et al. 2007, 2010) and the Energetic Particle Radiation Environment Module (EPREM) (Schwadron et al. 2010). The Community Coordinated Modeling Center (CCMC) is in the process of making these SEP models available to the community and offering a system to run SEP models driven by a variety of heliospheric models available at CCMC. SEPMOD injects protons onto a sequence of observer field lines at intensities dependent on the connected shock source strength, which are then integrated at the observer to approximate the proton flux. EPREM couples with MHD models such as ENLIL and computes energetic particle distributions based on the focused transport equation along a Lagrangian grid of nodes that propagate out with the solar wind. The coupled ENLIL and SEP models allow us to derive the longitudinal distribution of SEP profiles of different types of events throughout the heliosphere. In this presentation we demonstrate several case studies of SEP event modeling at different observers based on WSA-ENLIL+Cone simulations.

  18. Map algebra and model algebra for integrated model building

    NARCIS (Netherlands)

    Schmitz, O.; Karssenberg, D.J.; Jong, K. de; Kok, J.-L. de; Jong, S.M. de

    2013-01-01

    Computer models are important tools for the assessment of environmental systems. A seamless workflow of construction and coupling of model components is essential for environmental scientists. However, currently available software packages are often tailored either to the construction of model

  19. CCF model comparison

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    2004-04-01

    The report describes a simple comparison of two CCF models, the ECLM and the Beta model. The objective of the comparison is to identify differences in the results of the models by applying them to some simple test data cases. The comparison focuses mainly on theoretical aspects of the above-mentioned CCF models. The properties of the model parameter estimates in the data cases are also discussed. The practical aspects of using and estimating CCF models in a real PSA context (e.g. data interpretation, properties of computer tools, model documentation) are not discussed in the report. Similarly, the qualitative CCF analyses needed in using the models are not discussed. (au)
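    Of the two models compared, the Beta model is simple enough to sketch: a fraction beta of a component's failure probability is attributed to a common cause that fails all redundant trains at once, with the remainder treated as independent. The numbers below are invented; the ECLM is considerably more involved and is not sketched here.

```python
def beta_model_system_failure(q_total, beta, n):
    """P(all n redundant trains fail) under the Beta-factor model:
    Q_sys ~= beta*Q + ((1 - beta)*Q)**n."""
    q_ind = (1.0 - beta) * q_total   # independent part
    q_ccf = beta * q_total           # common-cause part, fails all trains
    return q_ccf + q_ind ** n

for beta in (0.0, 0.05, 0.10):
    p = beta_model_system_failure(q_total=1e-3, beta=beta, n=3)
    print(f"beta={beta:.2f}  P(all 3 fail) ~ {p:.2e}")
```

    Even a small beta dominates the redundant-system failure probability, which is why the choice of CCF model matters for PSA results.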

  20. Wake modelling combining mesoscale and microscale models

    DEFF Research Database (Denmark)

    Badger, Jake; Volker, Patrick; Prospathospoulos, J.

    2013-01-01

    In this paper the basis for introducing thrust information from microscale wake models into mesoscale model wake parameterizations will be described. A classification system for the different types of mesoscale wake parameterizations is suggested and outlined. Four different mesoscale wake paramet...

  1. Integrated Exoplanet Modeling with the GSFC Exoplanet Modeling & Analysis Center (EMAC)

    Science.gov (United States)

    Mandell, Avi M.; Hostetter, Carl; Pulkkinen, Antti; Domagal-Goldman, Shawn David

    2018-01-01

    Our ability to characterize the atmospheres of extrasolar planets will be revolutionized by JWST, WFIRST and future ground- and space-based telescopes. In preparation, the exoplanet community must develop an integrated suite of tools with which we can comprehensively predict and analyze observations of exoplanets, in order to characterize the planetary environments and ultimately search them for signs of habitability and life. The GSFC Exoplanet Modeling and Analysis Center (EMAC) will be a web-accessible high-performance computing platform with science support for modelers and software developers to host and integrate their scientific software tools, with the goal of leveraging the scientific contributions from the entire exoplanet community to improve our interpretations of future exoplanet discoveries. Our suite of models will include stellar models, models for star-planet interactions, atmospheric models, planet system science models, telescope models, instrument models, and finally models for retrieving signals from observational data. By integrating this suite of models, the community will be able to self-consistently calculate the emergent spectra from the planet whether from emission, scattering, or in transmission, and use these simulations to model the performance of current and new telescopes and their instrumentation. The EMAC infrastructure will not only provide a repository for planetary and exoplanetary community models, modeling tools and intermodel comparisons, but it will include a "run-on-demand" portal with each software tool hosted on a separate virtual machine. The EMAC system will eventually include a means of running or "checking in" new model simulations that are in accordance with the community-derived standards. Additionally, the results of intermodel comparisons will be used to produce open source publications that quantify the model comparisons and provide an overview of community consensus on model uncertainties on the climates of

  2. Modeling Global Biogenic Emission of Isoprene: Exploration of Model Drivers

    Science.gov (United States)

    Alexander, Susan E.; Potter, Christopher S.; Coughlan, Joseph C.; Klooster, Steven A.; Lerdau, Manuel T.; Chatfield, Robert B.; Peterson, David L. (Technical Monitor)

    1996-01-01

    Vegetation provides the major source of isoprene emission to the atmosphere. We present a modeling approach to estimate global biogenic isoprene emission. The isoprene flux model is linked to a process-based computer simulation model of biogenic trace-gas fluxes that operates on scales that link regional and global data sets and ecosystem nutrient transformations. Isoprene emission estimates are determined from estimates of ecosystem-specific biomass, emission factors, and algorithms based on light and temperature. Our approach differs from an existing modeling framework by including the process-based global model for terrestrial ecosystem production, satellite-derived ecosystem classification, and isoprene emission measurements from a tropical deciduous forest. We explore the sensitivity of model estimates to input parameters. The resulting emission products from the global 1 degree x 1 degree coverage provided by the satellite datasets and the process model allow flux estimations across large spatial scales and enable direct linkage to atmospheric models of trace-gas transport and transformation.

  3. STAMINA - Model description. Standard Model Instrumentation for Noise Assessments

    NARCIS (Netherlands)

    Schreurs EM; Jabben J; Verheijen ENG; CMM; mev

    2010-01-01

    This report describes the STAMINA model, which stands for Standard Model Instrumentation for Noise Assessments and was developed by RIVM. The institute uses this standard model to map environmental noise in the Netherlands. The model is based on the Standaard Karteringsmethode

  4. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers

    DEFF Research Database (Denmark)

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel

    2016-01-01

    The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. The ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  5. The Trimeric Model: A New Model of Periodontal Treatment Planning

    Science.gov (United States)

    Tarakji, Bassel

    2014-01-01

    Treatment of periodontal disease is a complex and multidisciplinary procedure, requiring periodontal, surgical, restorative, and orthodontic treatment modalities. Several authors have attempted to formulate models for periodontal treatment that order the treatment steps in a logical and easy-to-remember manner. In this article, we discuss two models of periodontal treatment planning from two of the most well-known textbooks in the specialty of periodontics internationally, then modify them to arrive at a new model of periodontal treatment planning, the Trimeric Model. Adding restorative and orthodontic interrelationships with periodontal treatment allows us to expand this model into the Extended Trimeric Model of periodontal treatment planning. These models will provide a logical framework and a clear order for the treatment of periodontal disease for general practitioners and periodontists alike. PMID:25177662

  6. Model building

    International Nuclear Information System (INIS)

    Frampton, Paul H.

    1998-01-01

    In this talk I begin with some general discussion of model building in particle theory, emphasizing the need for motivation and testability. Three illustrative examples are then described. The first is the Left-Right model which provides an explanation for the chirality of quarks and leptons. The second is the 331-model which offers a first step to understanding the three generations of quarks and leptons. Third and last is the SU(15) model which can accommodate the light leptoquarks possibly seen at HERA

  7. Composite hadron models

    International Nuclear Information System (INIS)

    Ogava, S.; Savada, S.; Nakagava, M.

    1983-01-01

    Composite models of hadrons are considered, with the main attention paid to the Sakata model. In the framework of this model the proton, neutron, and Λ particle are presupposed to be the fundamental particles. Theoretical studies of unknown fundamental constituents of matter have led to the creation of the quark model. In the framework of the quark model, using the theory of SU(6) symmetry, the classification of mesons and baryons is considered. Using the quark model, relations between hadron masses, their spins, and their electromagnetic properties are explained. The three-colour model with many flavours is briefly presented.

  8. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement.

    Science.gov (United States)

    Wu, Alex; Song, Youhong; van Oosterom, Erik J; Hammer, Graeme L

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation.
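    The light-response modelling mentioned above is typically an empirical non-rectangular hyperbola for leaf net assimilation; the parameter values in this sketch are illustrative placeholders, not values from the review.

```python
import numpy as np

def leaf_net_assimilation(I, alpha=0.05, a_max=25.0, theta=0.7, rd=1.0):
    """Non-rectangular hyperbola light response.

    I     : incident PAR (umol photons m^-2 s^-1)
    alpha : initial quantum yield (umol CO2 per umol photons)
    a_max : light-saturated gross assimilation (umol CO2 m^-2 s^-1)
    theta : curvature (0 = rectangular hyperbola, 1 = Blackman response)
    rd    : dark respiration (umol CO2 m^-2 s^-1)
    """
    b = alpha * I + a_max
    gross = (b - np.sqrt(b ** 2 - 4.0 * theta * alpha * I * a_max)) / (2.0 * theta)
    return gross - rd

par = np.array([0.0, 100.0, 300.0, 600.0, 1200.0, 2000.0])
print(leaf_net_assimilation(par).round(2))  # saturating response to light
```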

  9. Connecting Biochemical Photosynthesis Models with Crop Models to Support Crop Improvement

    Science.gov (United States)

    Wu, Alex; Song, Youhong; van Oosterom, Erik J.; Hammer, Graeme L.

    2016-01-01

    The next advance in field crop productivity will likely need to come from improving crop use efficiency of resources (e.g., light, water, and nitrogen), aspects of which are closely linked with overall crop photosynthetic efficiency. Progress in genetic manipulation of photosynthesis is confounded by uncertainties of consequences at crop level because of difficulties connecting across scales. Crop growth and development simulation models that integrate across biological levels of organization and use a gene-to-phenotype modeling approach may present a way forward. There has been a long history of development of crop models capable of simulating dynamics of crop physiological attributes. Many crop models incorporate canopy photosynthesis (source) as a key driver for crop growth, while others derive crop growth from the balance between source- and sink-limitations. Modeling leaf photosynthesis has progressed from empirical modeling via light response curves to a more mechanistic basis, having clearer links to the underlying biochemical processes of photosynthesis. Cross-scale modeling that connects models at the biochemical and crop levels and utilizes developments in upscaling leaf-level models to canopy models has the potential to bridge the gap between photosynthetic manipulation at the biochemical level and its consequences on crop productivity. Here we review approaches to this emerging cross-scale modeling framework and reinforce the need for connections across levels of modeling. Further, we propose strategies for connecting biochemical models of photosynthesis into the cross-scale modeling framework to support crop improvement through photosynthetic manipulation. PMID:27790232

  10. Model Sistem Informasi Manajemen Sekolah Berbasiskan Notasi Unified Modeling Language (A School Management Information System Model Based on Unified Modeling Language Notation)

    Directory of Open Access Journals (Sweden)

    Yohannes Kurniawan

    2013-12-01

    Full Text Available Basically, the use of integrated information systems can be applied not only to companies but also to the education industry, particularly schools. To support business processes at the school, this research describes a conceptual model of information systems using the Unified Modeling Language (UML) notation with the "4+1 View" architectural model. This model is expected to assist in analyzing and designing the whole business process at the school. A conceptual model of the information system can help application developers to easily and clearly understand the school system. By adopting this information system model, schools are able to gain an effective understanding of management information systems.

  11. An Agent Model Integrating an Adaptive Model for Environmental Dynamics

    NARCIS (Netherlands)

    Treur, J.; Umair, M.

    2011-01-01

    The environments in which agents are used often may be described by dynamical models, e.g., in the form of a set of differential equations. In this paper, an agent model is proposed that can perform model-based reasoning about the environment, based on a numerical (dynamical system) model of the

  12. Model unspecific search in CMS. Model unspecific limits

    Energy Technology Data Exchange (ETDEWEB)

    Knutzen, Simon; Albert, Andreas; Duchardt, Deborah; Hebbeker, Thomas; Lieb, Jonas; Meyer, Arnd; Pook, Tobias; Roemer, Jonas [III. Physikalisches Institut A, RWTH Aachen University (Germany)

    2016-07-01

    The standard model of particle physics is increasingly challenged by recent discoveries and also by long-known phenomena, representing a strong motivation to develop extensions of the standard model. The number of theories describing possible extensions is large and steadily growing. In this presentation a new approach is introduced, verifying whether a given theory beyond the standard model is consistent with data collected by the CMS detector without the need to perform a dedicated search. To achieve this, model-unspecific limits on the number of additional events above the standard model expectation are calculated in every event class produced by the MUSiC algorithm. Furthermore, a tool is provided to translate these results into limits on the signal cross section of any theory. In addition to the general procedure, first results and examples are shown using the proton-proton collision data taken at a centre-of-mass energy of 8 TeV.

  13. Model-Independent Diffs

    DEFF Research Database (Denmark)

    Könemann, Patrick

    Text files just contain a list of strings, one for each line, whereas the structure of models is defined by their meta models. There are tools available which are able to compute the diff between two models, e.g. RSA or EMF Compare. However, their diff is not model-independent, i.e. it refers to the models...

  14. The CAFE model: A net production model for global ocean phytoplankton

    Science.gov (United States)

    Silsbe, Greg M.; Behrenfeld, Michael J.; Halsey, Kimberly H.; Milligan, Allen J.; Westberry, Toby K.

    2016-12-01

    The Carbon, Absorption, and Fluorescence Euphotic-resolving (CAFE) net primary production model is an adaptable framework for advancing global ocean productivity assessments by exploiting state-of-the-art satellite ocean color analyses and addressing key physiological and ecological attributes of phytoplankton. Here we present the first implementation of the CAFE model that incorporates inherent optical properties derived from ocean color measurements into a mechanistic and accurate model of phytoplankton growth rates (μ) and net phytoplankton production (NPP). The CAFE model calculates NPP as the product of energy absorption (QPAR), and the efficiency (ϕμ) by which absorbed energy is converted into carbon biomass (CPhyto), while μ is calculated as NPP normalized to CPhyto. The CAFE model performance is evaluated alongside 21 other NPP models against a spatially robust and globally representative set of direct NPP measurements. This analysis demonstrates that the CAFE model explains the greatest amount of variance and has the lowest model bias relative to other NPP models analyzed with this data set. Global oceanic NPP from the CAFE model (52 Pg C yr-1) and mean division rates (0.34 day-1) are derived from climatological satellite data (2002-2014). This manuscript discusses and validates individual CAFE model parameters (e.g., QPAR and ϕμ), provides detailed sensitivity analyses, and compares the CAFE model results and parameterization to other widely cited models.
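    The core CAFE relationships quoted above reduce to two lines: NPP is the product of absorbed energy and the conversion efficiency, and the division rate is NPP normalized to phytoplankton carbon. The input magnitudes in this sketch are placeholders, not the model's satellite-derived fields.

```python
def cafe_core(q_par, phi_mu, c_phyto):
    """NPP = QPAR * phi_mu; mu = NPP / C_phyto (per the CAFE definitions)."""
    npp = q_par * phi_mu   # e.g. mg C m^-2 d^-1
    mu = npp / c_phyto     # d^-1
    return npp, mu

npp, mu = cafe_core(q_par=8000.0,    # absorbed energy proxy (placeholder)
                    phi_mu=0.05,     # energy-to-carbon efficiency (placeholder)
                    c_phyto=1200.0)  # phytoplankton carbon, mg C m^-2 (placeholder)
print(f"NPP = {npp:.0f} mg C m^-2 d^-1, division rate = {mu:.2f} d^-1")
```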

  15. The impact of working memory and the "process of process modelling" on model quality: Investigating experienced versus inexperienced modellers.

    Science.gov (United States)

    Martini, Markus; Pinggera, Jakob; Neurauter, Manuel; Sachse, Pierre; Furtner, Marco R; Weber, Barbara

    2016-05-09

    A process model (PM) represents the graphical depiction of a business process, for instance, the entire process from online ordering a book until the parcel is delivered to the customer. Knowledge about relevant factors for creating PMs of high quality is lacking. The present study investigated the role of cognitive processes as well as modelling processes in creating a PM in experienced and inexperienced modellers. Specifically, two working memory (WM) functions (holding and processing of information and relational integration) and three process of process modelling phases (comprehension, modelling, and reconciliation) were related to PM quality. Our results show that the WM function of relational integration was positively related to PM quality in both modelling groups. The ratio of comprehension phases was negatively related to PM quality in inexperienced modellers and the ratio of reconciliation phases was positively related to PM quality in experienced modellers. Our research reveals central cognitive mechanisms in process modelling and has potential practical implications for the development of modelling software and teaching the craft of process modelling.

  16. Modeling of Communication in a Computational Situation Assessment Model

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Seong, Poong Hyun

    2009-01-01

    Operators in nuclear power plants have to acquire information from human system interfaces (HSIs) and the environment in order to create, update, and confirm their understanding of a plant state, or situation awareness, because failures of situation assessment may result in wrong decisions for process control and finally errors of commission in nuclear power plants. Quantitative or prescriptive models that predict an operator's situation assessment in a given situation (the result being situation awareness) provide many benefits such as HSI design solutions, human performance data, and human reliability estimates. Unfortunately, few computational situation assessment models for NPP operators have been proposed, and those insufficiently embed human cognitive characteristics. Thus we propose a new computational situation assessment model of nuclear power plant operators. The proposed model, incorporating significant cognitive factors, uses a Bayesian belief network (BBN) as its model architecture. It is believed that communication between nuclear power plant operators affects their situation assessment and its result, situation awareness. We tried to verify that the proposed model represents the effects of communication on situation assessment. As the results show, the proposed model succeeded in representing the operators' behavior; this paper shows the details.

  17. Model building

    International Nuclear Information System (INIS)

    Frampton, P.H.

    1998-01-01

    In this talk I begin with some general discussion of model building in particle theory, emphasizing the need for motivation and testability. Three illustrative examples are then described. The first is the Left-Right model which provides an explanation for the chirality of quarks and leptons. The second is the 331-model which offers a first step to understanding the three generations of quarks and leptons. Third and last is the SU(15) model which can accommodate the light leptoquarks possibly seen at HERA. copyright 1998 American Institute of Physics

  18. Forest-fire models

    Science.gov (United States)

    Haiganoush Preisler; Alan Ager

    2013-01-01

    For applied mathematicians forest fire models refer mainly to a non-linear dynamic system often used to simulate spread of fire. For forest managers forest fire models may pertain to any of the three phases of fire management: prefire planning (fire risk models), fire suppression (fire behavior models), and postfire evaluation (fire effects and economic models). In...

  19. The Danish national passenger model - model specification and results

    DEFF Research Database (Denmark)

    Rich, Jeppe; Hansen, Christian Overgaard

    2016-01-01

    The paper describes the structure of the new Danish National Passenger model and provides on this basis a general discussion of large-scale model design, cost-damping and model validation. The paper aims at providing three main contributions to the existing literature. Firstly, at the general level, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular choice of functional form and cost-damping. Specifically we suggest a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by evaluating the distance distribution and elasticities. In the paper we present results where the spline-function is compared with more traditional...

  20. Measurement Model Specification Error in LISREL Structural Equation Models.

    Science.gov (United States)

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  1. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol

  2. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design-Part I. Model development

    Energy Technology Data Exchange (ETDEWEB)

    He, L., E-mail: li.he@ryerson.ca [Department of Civil Engineering, Faculty of Engineering, Architecture and Science, Ryerson University, 350 Victoria Street, Toronto, Ontario, M5B 2K3 (Canada); Huang, G.H. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada); College of Urban Environmental Sciences, Peking University, Beijing 100871 (China); Lu, H.W. [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada)

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the 'true' ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt, distinct from previous modeling efforts: the previous ones focused on addressing uncertainty in physical parameters (e.g. soil porosity), while this one aims to deal with uncertainty in the mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering a confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes.

  3. OFFl Models: Novel Schema for Dynamical Modeling of Biological Systems.

    Directory of Open Access Journals (Sweden)

    C Brandon Ogbunugafor

    Full Text Available Flow diagrams are a common tool used to help build and interpret models of dynamical systems, often in biological contexts such as consumer-resource models and similar compartmental models. Typically, their usage is intuitive and informal. Here, we present a formalized version of flow diagrams as a kind of weighted directed graph which follow a strict grammar, which translate into a system of ordinary differential equations (ODEs) by a single unambiguous rule, and which have an equivalent representation as a relational database. (We abbreviate this schema of "ODEs and formalized flow diagrams" as OFFL.) Drawing a diagram within this strict grammar encourages a mental discipline on the part of the modeler in which all dynamical processes of a system are thought of as interactions between dynamical species that draw parcels from one or more source species and deposit them into target species according to a set of transformation rules. From these rules, the net rate of change for each species can be derived. The modeling schema can therefore be understood as both an epistemic and practical heuristic for modeling, serving both as an organizational framework for the model building process and as a mechanism for deriving ODEs. All steps of the schema beyond the initial scientific (intuitive, creative) abstraction of natural observations into model variables are algorithmic and easily carried out by a computer, thus enabling the future development of a dedicated software implementation. Such tools would empower the modeler to consider significantly more complex models than practical limitations might have otherwise proscribed, since the modeling framework itself manages that complexity on the modeler's behalf. In this report, we describe the chief motivations for OFFL, carefully outline its implementation, and utilize a range of classic examples from ecology and epidemiology to showcase its features.

  4. OFFl Models: Novel Schema for Dynamical Modeling of Biological Systems.

    Science.gov (United States)

    Ogbunugafor, C Brandon; Robinson, Sean P

    2016-01-01

    Flow diagrams are a common tool used to help build and interpret models of dynamical systems, often in biological contexts such as consumer-resource models and similar compartmental models. Typically, their usage is intuitive and informal. Here, we present a formalized version of flow diagrams as a kind of weighted directed graph which follow a strict grammar, which translate into a system of ordinary differential equations (ODEs) by a single unambiguous rule, and which have an equivalent representation as a relational database. (We abbreviate this schema of "ODEs and formalized flow diagrams" as OFFL.) Drawing a diagram within this strict grammar encourages a mental discipline on the part of the modeler in which all dynamical processes of a system are thought of as interactions between dynamical species that draw parcels from one or more source species and deposit them into target species according to a set of transformation rules. From these rules, the net rate of change for each species can be derived. The modeling schema can therefore be understood as both an epistemic and practical heuristic for modeling, serving both as an organizational framework for the model building process and as a mechanism for deriving ODEs. All steps of the schema beyond the initial scientific (intuitive, creative) abstraction of natural observations into model variables are algorithmic and easily carried out by a computer, thus enabling the future development of a dedicated software implementation. Such tools would empower the modeler to consider significantly more complex models than practical limitations might have otherwise proscribed, since the modeling framework itself manages that complexity on the modeler's behalf. In this report, we describe the chief motivations for OFFL, carefully outline its implementation, and utilize a range of classic examples from ecology and epidemiology to showcase its features.
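    The "single unambiguous rule" for translating a diagram into ODEs can be sketched directly: every flow draws from a source species and deposits into a target species, and each species' net rate of change is its inflows minus its outflows. The sketch below applies this reading to a classic SIR epidemic; it illustrates the idea only and is not the dedicated OFFL software, which the authors note is left to future development.

```python
import numpy as np

def make_rhs(flows, species):
    """Build an ODE right-hand side from OFFL-style flows.
    Each flow is (source, target, rate_fn); None marks an external pool."""
    idx = {s: i for i, s in enumerate(species)}
    def rhs(t, state):
        d = np.zeros(len(species))
        x = dict(zip(species, state))
        for source, target, rate in flows:
            r = rate(x)
            if source is not None:
                d[idx[source]] -= r   # parcel drawn from the source
            if target is not None:
                d[idx[target]] += r   # parcel deposited into the target
        return d
    return rhs

beta, gamma = 0.3, 0.1
flows = [("S", "I", lambda x: beta * x["S"] * x["I"] / (x["S"] + x["I"] + x["R"])),
         ("I", "R", lambda x: gamma * x["I"])]
rhs = make_rhs(flows, ["S", "I", "R"])

state, dt = np.array([990.0, 10.0, 0.0]), 0.1
for step in range(1000):          # forward Euler, kept short for clarity
    state = state + dt * rhs(step * dt, state)
print("S, I, R after 100 days:", state.round(1))
```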

  5. Modelling Constructs

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2009-01-01

    There are many different notations and formalisms for modelling business processes and workflows. These notations and formalisms have been introduced with different purposes and objectives. Later, influenced by other notations, comparisons with other tools, or by standardization efforts, these notations have been extended in order to increase expressiveness and to be more competitive. This resulted in an increasing number of notations and formalisms for modelling business processes and in an increase of the different modelling constructs provided by modelling notations, which makes it difficult to compare modelling notations and to make transformations between them. One of the reasons is that, in each notation, the new concepts are introduced in a different way by extending the already existing constructs. In this chapter, we go the opposite direction: We show that it is possible to add most...

  6. Statistical modelling of railway track geometry degradation using Hierarchical Bayesian models

    International Nuclear Information System (INIS)

    Andrade, A.R.; Teixeira, P.F.

    2015-01-01

    Railway maintenance planners require a predictive model that can assess the railway track geometry degradation. The present paper uses a Hierarchical Bayesian model as a tool to model the main two quality indicators related to railway track geometry degradation: the standard deviation of longitudinal level defects and the standard deviation of horizontal alignment defects. Hierarchical Bayesian Models (HBM) are flexible statistical models that allow specifying different spatially correlated components between consecutive track sections, namely for the deterioration rates and the initial qualities parameters. HBM are developed for both quality indicators, conducting an extensive comparison between candidate models and a sensitivity analysis on prior distributions. HBM is applied to provide an overall assessment of the degradation of railway track geometry, for the main Portuguese railway line Lisbon–Oporto. - Highlights: • Rail track geometry degradation is analysed using Hierarchical Bayesian models. • A Gibbs sampling strategy is put forward to estimate the HBM. • Model comparison and sensitivity analysis find the most suitable model. • We applied the most suitable model to all the segments of the main Portuguese line. • Tackling spatial correlations using CAR structures lead to a better model fit

  7. Foraminifera Models to Interrogate Ostensible Proxy-Model Discrepancies During Late Pliocene

    Science.gov (United States)

    Jacobs, P.; Dowsett, H. J.; de Mutsert, K.

    2017-12-01

    Planktic foraminifera faunal assemblages have been used in the reconstruction of past oceanic states (e.g. the Last Glacial Maximum, the mid-Piacenzian Warm Period). However these reconstruction efforts have typically relied on inverse modeling using transfer functions or the modern analog technique, which by design seek to translate foraminifera into one or two target oceanic variables, primarily sea surface temperature (SST). These reconstructed SST data have then been used to test the performance of climate models, and discrepancies have been attributed to shortcomings in climate model processes and/or boundary conditions. More recently forward proxy models or proxy system models have been used to leverage the multivariate nature of proxy relationships to their environment, and to "bring models into proxy space". Here we construct ecological models of key planktic foraminifera taxa, calibrated and validated with World Ocean Atlas (WO13) oceanographic data. Multiple modeling methods (e.g. multilayer perceptron neural networks, Mahalanobis distance, logistic regression, and maximum entropy) are investigated to ensure robust results. The resulting models are then driven by a Late Pliocene climate model simulation with biogeochemical as well as temperature variables. Similarities and differences with previous model-proxy comparisons (e.g. PlioMIP) are discussed.

  8. On the equivalence between sine-Gordon model and Thirring model in the chirally broken phase of the Thirring model

    International Nuclear Information System (INIS)

    Faber, M.; Ivanov, A.N.

    2001-01-01

    We investigate the equivalence between Thirring model and sine-Gordon model in the chirally broken phase of the Thirring model. This is unlike all other available approaches where the fermion fields of the Thirring model were quantized in the chiral symmetric phase. In the path integral approach we show that the bosonized version of the massless Thirring model is described by a quantum field theory of a massless scalar field and exactly solvable, and the massive Thirring model bosonizes to the sine-Gordon model with a new relation between the coupling constants. We show that the non-perturbative vacuum of the chirally broken phase in the massless Thirring model can be described in complete analogy with the BCS ground state of superconductivity. The Mermin-Wagner theorem and Coleman's statement concerning the absence of Goldstone bosons in the 1+1-dimensional quantum field theories are discussed. We investigate the current algebra in the massless Thirring model and give a new value of the Schwinger term. We show that the topological current in the sine-Gordon model coincides with the Noether current responsible for the conservation of the fermion number in the Thirring model. This allows one to identify the topological charge in the sine-Gordon model with the fermion number. (orig.)

  9. Mathematical models for atmospheric pollutants. Appendix D. Available air quality models. Final report

    International Nuclear Information System (INIS)

    Drake, R.L.; McNaughton, D.J.; Huang, C.

    1979-08-01

    Models that are available for the analysis of airborne pollutants are summarized. In addition, recommendations are given concerning the use of particular models to aid in particular air quality decision making processes. The air quality models are characterized in terms of time and space scales, steady state or time dependent processes, reference frames, reaction mechanisms, treatment of turbulence and topography, and model uncertainty. Using these characteristics, the models are classified in the following manner: simple deterministic models, such as air pollution indices, simple area source models and rollback models; statistical models, such as averaging time models, time series analysis and multivariate analysis; local plume and puff models; box and multibox models; finite difference or grid models; particle models; physical models, such as wind tunnels and liquid flumes; regional models; and global models
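    As an illustration of the "local plume" model class listed above, the steady-state Gaussian plume formula is the textbook example; the dispersion coefficients in this sketch are placeholders that would normally come from stability-class correlations at the downwind distance of interest.

```python
import numpy as np

def gaussian_plume(y, z, q, u, h, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (g/m^3) for a point
    source of strength q (g/s) in a wind u (m/s), effective release
    height h (m), with ground reflection; sigma_y and sigma_z (m) are
    the dispersion coefficients at the chosen downwind distance."""
    lateral = np.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (np.exp(-(z - h) ** 2 / (2.0 * sigma_z ** 2)) +
                np.exp(-(z + h) ** 2 / (2.0 * sigma_z ** 2)))
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centreline value with placeholder dispersion parameters
c = gaussian_plume(y=0.0, z=0.0, q=10.0, u=4.0, h=50.0,
                   sigma_y=80.0, sigma_z=40.0)
print(f"concentration ~ {c * 1e6:.0f} ug/m^3")
```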

  10. Fatigue modelling according to the JCSS Probabilistic model code

    NARCIS (Netherlands)

    Vrouwenvelder, A.C.W.M.

    2007-01-01

    The Joint Committee on Structural Safety is working on a Model Code for full probabilistic design. The code consists of three major parts: Basis of design, Load Models, and Models for Material and Structural Properties. The code is intended as the operational counterpart of codes like ISO,

  11. The Sensitivity of Evapotranspiration Models to Errors in Model ...

    African Journals Online (AJOL)

    Five evapotranspiration (Et) models (the Penman, Blaney-Criddle, Thornthwaite, Blaney-Morin-Nigeria, and Jensen-Haise models) were analyzed for parameter sensitivity under Nigerian climatic conditions. The sensitivity of each model to errors in any of its measured parameters (variables) was based on the ...

  12. On coupling global biome models with climate models

    OpenAIRE

    Claussen, M.

    1994-01-01

    The BIOME model of Prentice et al. (1992; J. Biogeogr. 19: 117-134), which predicts global vegetation patterns in equilibrium with climate, was coupled with the ECHAM climate model of the Max-Planck-Institut für Meteorologie, Hamburg, Germany. It was found that incorporation of the BIOME model into ECHAM, regardless of the coupling frequency, does not enhance the simulated climate variability, expressed in terms of differences between global vegetation patterns. Strongest changes are seen only betw...

  13. Latent classification models

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2005-01-01

    parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the naive Bayes model with a mixture of factor analyzers, thereby relaxing the assumptions ... classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers.

  14. Template for Conceptual Model Construction: Model Review and Corps Applications

    National Research Council Canada - National Science Library

    Henderson, Jim E; O'Neil, L. J

    2007-01-01

    .... The template will expedite conceptual model construction by providing users with model parameters and potential model components, building on a study team's knowledge and experience, and promoting...

  15. Nonlinear Modeling by Assembling Piecewise Linear Models

    Science.gov (United States)

    Yao, Weigang; Liou, Meng-Sing

    2013-01-01

    To preserve the nonlinearity of a full-order system over a parameter range of interest, we propose a simple modeling approach by assembling a set of piecewise local solutions, including the first-order Taylor series terms expanded about some sampling states. The work by Rewienski and White inspired our use of piecewise linear local solutions. The assembly of these local approximations is accomplished by assigning nonlinear weights, through radial basis functions in this study. The efficacy of the proposed procedure is validated for a two-dimensional airfoil moving at different Mach numbers and pitching motions, under which the flow exhibits prominent nonlinear behaviors. All results confirm that our nonlinear model is accurate and stable for predicting not only aerodynamic forces but also detailed flowfields. Moreover, the model is robust, remaining accurate for inputs considerably different from the base trajectory in form and magnitude. This modeling preserves the nonlinearity of the problems considered in a rather simple and accurate manner.
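    The assembly described above (local first-order Taylor expansions blended by normalized radial basis function weights) can be sketched in a scalar setting. The test function, sampling states, and RBF width below are invented for illustration; the paper's airfoil application is not reproduced here.

```python
import numpy as np

f, df = np.sin, np.cos                        # toy "full-order" system
centers = np.linspace(0.0, 2.0 * np.pi, 8)    # sampling states
width = 0.6                                   # RBF width (a tuning choice)

def assembled_model(x):
    """Blend local linearizations about each center with RBF weights."""
    x = np.atleast_1d(x)[:, None]
    local = f(centers) + df(centers) * (x - centers)  # Taylor pieces
    w = np.exp(-((x - centers) / width) ** 2)         # Gaussian RBF weights
    w /= w.sum(axis=1, keepdims=True)                 # normalize to sum to 1
    return (w * local).sum(axis=1)

xs = np.linspace(0.0, 2.0 * np.pi, 5)
print("surrogate:", assembled_model(xs).round(3))
print("true:     ", f(xs).round(3))
```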

  16. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    International Nuclear Information System (INIS)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R; Dixit, P; Benson, D J

    2008-01-01

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets

  17. Hierarchical material models for fragmentation modeling in NIF-ALE-AMR

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, A C; Masters, N D; Koniges, A E; Anderson, R W; Gunney, B T N; Wang, P; Becker, R [Lawrence Livermore National Laboratory, PO Box 808, Livermore, CA 94551 (United States); Dixit, P; Benson, D J [University of California San Diego, 9500 Gilman Dr., La Jolla. CA 92093 (United States)], E-mail: fisher47@llnl.gov

    2008-05-15

    Fragmentation is a fundamental process that naturally spans micro to macroscopic scales. Recent advances in algorithms, computer simulations, and hardware enable us to connect the continuum to microstructural regimes in a real simulation through a heterogeneous multiscale mathematical model. We apply this model to the problem of predicting how targets in the NIF chamber dismantle, so that optics and diagnostics can be protected from damage. The mechanics of the initial material fracture depend on the microscopic grain structure. In order to effectively simulate the fragmentation, this process must be modeled at the subgrain level with computationally expensive crystal plasticity models. However, there are not enough computational resources to model the entire NIF target at this microscopic scale. In order to accomplish these calculations, a hierarchical material model (HMM) is being developed. The HMM will allow fine-scale modeling of the initial fragmentation using computationally expensive crystal plasticity, while the elements at the mesoscale can use polycrystal models, and the macroscopic elements use analytical flow stress models. The HMM framework is built upon an adaptive mesh refinement (AMR) capability. We present progress in implementing the HMM in the NIF-ALE-AMR code. Additionally, we present test simulations relevant to NIF targets.

  18. Heterogeneous traffic flow modelling using second-order macroscopic continuum model

    Science.gov (United States)

    Mohan, Ranju; Ramadurai, Gitakrishnan

    2017-01-01

    Modelling heterogeneous traffic flow lacking in lane discipline is one of the emerging research areas of the past few years. The two main challenges in modelling are capturing the effect of the varying size of vehicles and the lack of lane discipline, which together lead to the 'gap filling' behaviour of vehicles. The same section length of the road can be occupied by different types of vehicles at the same time, and the conventional measure of traffic concentration, density (vehicles per lane per unit length), is not a good measure for heterogeneous traffic modelling. The first aim of this paper is to develop a parsimonious model of heterogeneous traffic that can capture the unique phenomenon of gap filling. The second aim is to emphasize the suitability of higher-order models for modelling heterogeneous traffic. Third, the paper aims to suggest area occupancy as the concentration measure of heterogeneous traffic lacking in lane discipline. The two main challenges mentioned above are addressed by extending an existing second-order continuum model of traffic flow, using area occupancy for traffic concentration instead of density. The extended model is calibrated and validated with field data from an arterial road in Chennai city, and the results are compared with those from a few existing generalized multi-class models.

  19. Respectful Modeling: Addressing Uncertainty in Dynamic System Models for Molecular Biology.

    Science.gov (United States)

    Tsigkinopoulou, Areti; Baker, Syed Murtuza; Breitling, Rainer

    2017-06-01

    Although there is still some skepticism in the biological community regarding the value and significance of quantitative computational modeling, important steps are continually being taken to enhance its accessibility and predictive power. We view these developments as essential components of an emerging 'respectful modeling' framework which has two key aims: (i) respecting the models themselves and facilitating the reproduction and update of modeling results by other scientists, and (ii) respecting the predictions of the models and rigorously quantifying the confidence associated with the modeling results. This respectful attitude will guide the design of higher-quality models and facilitate the use of models in modern applications such as engineering and manipulating microbial metabolism by synthetic biology. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. PD/PID controller tuning based on model approximations: Model reduction of some unstable and higher order nonlinear models

    Directory of Open Access Journals (Sweden)

    Christer Dalen

    2017-10-01

    Full Text Available A model reduction technique based on optimization theory is presented, where a possible higher order system/model is approximated with an unstable DIPTD model by using only step response data. The DIPTD model is used to tune PD/PID controllers for the underlying possible higher order system. Numerous examples are used to illustrate the theory, i.e. both linear and nonlinear models. The Pareto Optimal controller is used as a reference controller.
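
    As a rough illustration of the approach, the sketch below fits a DIPTD (double integrating plus time delay) step response, y(t) = ½K(t − θ)² for t ≥ θ, to step-response data; the synthetic "true" process and noise level are assumptions for the example, not one of the paper's cases:

```python
# Minimal sketch: fit a DIPTD model K*exp(-theta*s)/s^2 from step-response data.
import numpy as np
from scipy.optimize import curve_fit

def diptd_step(t, K, theta):
    # Unit-step response of K*exp(-theta*s)/s^2: zero until the delay
    # elapses, then a parabola with curvature K.
    tau = np.maximum(t - theta, 0.0)
    return 0.5 * K * tau ** 2

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
# Synthetic step-response data standing in for an unknown higher-order process.
y_data = diptd_step(t, 0.8, 1.2) + rng.normal(0.0, 0.05, t.size)

(K_hat, theta_hat), _ = curve_fit(diptd_step, t, y_data, p0=[1.0, 0.5])
print(f"fitted DIPTD: K = {K_hat:.3f}, theta = {theta_hat:.3f}")
# K_hat and theta_hat would then feed PD/PID tuning rules for DIPTD models.
```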

  1. Coupling population dynamics with earth system models: the POPEM model.

    Science.gov (United States)

    Navarro, Andrés; Moreno, Raúl; Jiménez-Alcázar, Alfonso; Tapiador, Francisco J

    2017-09-16

    Precise modeling of CO2 emissions is important for environmental research. This paper presents a new model of human population dynamics that can be embedded into ESMs (Earth System Models) to improve climate modeling. Through a system dynamics approach, we develop a cohort-component model that successfully simulates historical population dynamics with fine spatial resolution (about 1°×1°). The population projections are used to improve the estimates of CO2 emissions, thus transcending the bulk approach of existing models and allowing more realistic non-linear effects to feature in the simulations. The module, dubbed POPEM (from Population Parameterization for Earth Models), is compared with current emission inventories and validated against UN aggregated data. Finally, it is shown that the module can be used to advance toward fully coupling the social and natural components of the Earth system, an emerging research path for environmental science and pollution research.
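
    The demographic core of such a module is the cohort-component update. A minimal sketch of one projection step follows; the cohort sizes and rates are illustrative assumptions, not POPEM parameters:

```python
# Minimal sketch of one cohort-component projection step.
import numpy as np

pop = np.array([90.0, 80.0, 70.0, 50.0])       # population by age cohort (thousands)
survival = np.array([0.995, 0.99, 0.97, 0.90]) # per-step survival per cohort
fertility = np.array([0.0, 0.08, 0.04, 0.0])   # births per person per step
net_migration = np.array([1.0, 0.5, 0.2, 0.0]) # net migrants per cohort

def project(pop):
    births = np.sum(fertility * pop)
    aged = survival * pop + net_migration
    new_pop = np.empty_like(pop)
    new_pop[0] = births            # newborn cohort
    new_pop[1:] = aged[:-1]        # everyone else ages one step
    new_pop[-1] += aged[-1]        # open-ended oldest cohort
    return new_pop

pop = project(pop)
print(pop)
```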

  2. Applications of the k – ω Model in Stellar Evolutionary Models

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yan, E-mail: ly@ynao.ac.cn [Yunnan Observatories, Chinese Academy of Sciences, Kunming 650216 (China)

    2017-05-20

    The k – ω model for turbulence was first proposed by Kolmogorov. A new k – ω model for stellar convection was developed by Li, which could reasonably describe turbulent convection not only in the convectively unstable zone, but also in the overshooting regions. We revised the k – ω model by improving several model assumptions (including the macro-length of turbulence, convective heat flux, and turbulent mixing diffusivity, etc.), making it applicable not only for convective envelopes, but also for convective cores. Eight parameters are introduced in the revised k – ω model. It should be noted that the Reynolds stress (turbulent pressure) is neglected in the equation of hydrostatic support. We applied it into solar models and 5 M {sub ⊙} stellar models to calibrate the eight model parameters, as well as to investigate the effects of the convective overshooting on the Sun and intermediate mass stellar models.

  3. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
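
    A minimal sketch of the mechanics follows: model weights are propagated with a forgetting factor and then pruned to a dynamic Occam's window around the best model. The likelihoods here are stubbed with random numbers; in a nowcasting application they would be each model's predictive density for the new observation:

```python
# Minimal sketch of dynamic model averaging with a dynamic Occam's window.
import numpy as np

rng = np.random.default_rng(0)
n_models, alpha, cutoff = 50, 0.99, 0.001

weights = np.full(n_models, 1.0 / n_models)
active = np.arange(n_models)                 # current model subset

for t in range(10):
    # Forgetting-factor prediction step: w_{t|t-1,k} proportional to w_{t-1,k}^alpha.
    w_pred = weights ** alpha
    w_pred /= w_pred.sum()
    # Update with each active model's predictive likelihood (stubbed here).
    lik = rng.uniform(0.1, 1.0, size=active.size)
    weights = w_pred * lik
    weights /= weights.sum()
    # Dynamic Occam's window: drop models far below the current best.
    keep = weights >= cutoff * weights.max()
    active, weights = active[keep], weights[keep]
    weights /= weights.sum()

print(f"models remaining: {active.size}")
```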

  4. Model-observer similarity, error modeling and social learning in rhesus macaques.

    Directory of Open Access Journals (Sweden)

    Elisabetta Monfardini

    Full Text Available Monkeys readily learn to discriminate between rewarded and unrewarded items or actions by observing their conspecifics. However, they do not systematically learn from humans. Understanding what makes human-to-monkey transmission of knowledge work or fail could help identify mediators and moderators of social learning that operate regardless of language or culture, and transcend inter-species differences. Do monkeys fail to learn when human models show a behavior too dissimilar from the animals' own, or when they show a faultless performance devoid of error? To address this question, six rhesus macaques trained to find which object within a pair concealed a food reward were successively tested with three models: a familiar conspecific, a 'stimulus-enhancing' human actively drawing the animal's attention to one object of the pair without actually performing the task, and a 'monkey-like' human performing the task in the same way as the monkey model did. Reward was manipulated to ensure that all models showed equal proportions of errors and successes. The 'monkey-like' human model improved the animals' subsequent object discrimination learning as much as a conspecific did, whereas the 'stimulus-enhancing' human model tended on the contrary to retard learning. Modeling errors rather than successes optimized learning from the monkey and 'monkey-like' models, while exacerbating the adverse effect of the 'stimulus-enhancing' model. These findings identify error modeling as a moderator of social learning in monkeys that amplifies the models' influence, whether beneficial or detrimental. By contrast, model-observer similarity in behavior emerged as a mediator of social learning, that is, a prerequisite for a model to work in the first place. The latter finding suggests that, as preverbal infants, macaques need to perceive the model as 'like-me' and that, once this condition is fulfilled, any agent can become an effective model.

  5. The reservoir model: a differential equation model of psychological regulation.

    Science.gov (United States)

    Deboeck, Pascal R; Bergeman, C S

    2013-06-01

    Differential equation models can be used to describe the relationships between the current state of a system of constructs (e.g., stress) and how those constructs are changing (e.g., based on variable-like experiences). The following article describes a differential equation model based on the concept of a reservoir. With a physical reservoir, such as one for water, the level of the liquid in the reservoir at any time depends on the contributions to the reservoir (inputs) and the amount of liquid removed from the reservoir (outputs). This reservoir model might be useful for constructs such as stress, where events might "add up" over time (e.g., life stressors, inputs), but individuals simultaneously take action to "blow off steam" (e.g., engage coping resources, outputs). The reservoir model can provide descriptive statistics of the inputs that contribute to the "height" (level) of a construct and a parameter that describes a person's ability to dissipate the construct. After discussing the model, we describe a method of fitting the model as a structural equation model using latent differential equation modeling and latent distribution modeling. A simulation study is presented to examine recovery of the input distribution and output parameter. The model is then applied to the daily self-reports of negative affect and stress from a sample of older adults from the Notre Dame Longitudinal Study on Aging. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
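
    The core of the model is a one-dimensional differential equation, dx/dt = input(t) − γx, where γ is the person-specific dissipation rate. A minimal Euler-integration sketch, with assumed input and rate values, follows:

```python
# Minimal sketch of the reservoir idea: level x accumulates inputs (e.g.,
# daily stressors) and dissipates at a person-specific rate gamma.
import numpy as np

rng = np.random.default_rng(1)
dt, days, gamma = 1.0, 60, 0.3
inputs = rng.poisson(1.5, size=days)   # stressor "inflow" per day (assumed)

x = np.zeros(days)
for t in range(1, days):
    x[t] = x[t-1] + dt * (inputs[t] - gamma * x[t-1])  # Euler step

print(f"mean level = {x.mean():.2f}, final level = {x[-1]:.2f}")
# A small gamma means stress dissipates slowly; estimating gamma per person
# is what the latent differential equation approach is after.
```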

  6. Development of bubble-induced turbulence model for advanced two-fluid model

    International Nuclear Information System (INIS)

    Hosoi, Hideaki; Yoshida, Hiroyuki

    2011-01-01

    A two-fluid model can simulate two-phase flow at a computational cost lower than that of detailed two-phase flow simulation methods such as the interface tracking method. The two-fluid model is therefore useful for thermal hydraulic analysis in large-scale domains such as rod bundles. However, since the two-fluid model includes many constitutive equations verified by use of experimental results, it has the problem that the results of analyses depend on the accuracy of the constitutive equations. To solve this problem, an advanced two-fluid model has been developed by the Japan Atomic Energy Agency. In this model, the interface tracking method is combined with the two-fluid model to accurately predict large interface structure behavior. Liquid clusters and bubbles larger than a computational cell are calculated using the interface tracking method, and those smaller than the cell are simulated by the two-fluid model. The constitutive equations to evaluate the effects of small bubbles or droplets on two-phase flow are also required in the advanced two-fluid model, just as with the conventional two-fluid model. However, the dependency of small bubbles and droplets on two-phase flow characteristics is relatively small, and fewer experimental results are required to verify the characteristics of large interface structures. The turbulent dispersion force model is one of the most important constitutive equations for the advanced two-fluid model. The turbulent dispersion force model has been developed by many researchers for the conventional two-fluid model. However, existing models implicitly include the effects of large bubbles and the deformation of bubbles, and are unfortunately not applicable to the advanced two-fluid model. In a previous study, the authors suggested a turbulent dispersion force model based on the analogy of Brownian motion. The authors then improved the turbulent dispersion force model in consideration of bubble-induced turbulence to improve the analysis results for small

  7. Modeling environmental policy

    International Nuclear Information System (INIS)

    Martin, W.E.; McDonald, L.A.

    1997-01-01

    The eight book chapters demonstrate the link between the physical models of the environment and the policy analysis in support of policy making. Each chapter addresses an environmental policy issue using a quantitative modeling approach. The volume addresses three general areas of environmental policy - non-point source pollution in the agricultural sector, pollution generated in the extractive industries, and transboundary pollutants from burning fossil fuels. The book concludes by discussing the modeling efforts and the use of mathematical models in general. Chapters are entitled: modeling environmental policy: an introduction; modeling nonpoint source pollution in an integrated system (agri-ecological); modeling environmental and trade policy linkages: the case of EU and US agriculture; modeling ecosystem constraints in the Clean Water Act: a case study in Clearwater National Forest (subject to discharge from metal mining waste); costs and benefits of coke oven emission controls; modeling equilibria and risk under global environmental constraints (discussing energy and environmental interrelations); relative contribution of the enhanced greenhouse effect on the coastal changes in Louisiana; and the use of mathematical models in policy evaluations: comments. The paper on coke area emission controls has been abstracted separately for the IEA Coal Research CD-ROM

  8. Systemic resilience model

    International Nuclear Information System (INIS)

    Lundberg, Jonas; Johansson, Björn JE

    2015-01-01

    It has been realized that resilience as a concept involves several contradictory definitions, for instance resilience as agile adjustment versus robust resistance to situations. Our analysis of resilience concepts and models suggests that beyond simplistic definitions, it is possible to draw up a systemic resilience model (SyRes) that maintains these opposing characteristics without contradiction. We outline six functions in a systemic model, drawing primarily on resilience engineering and disaster response: anticipation, monitoring, response, recovery, learning, and self-monitoring. The model consists of four areas: Event-based constraints, Functional Dependencies, Adaptive Capacity and Strategy. The paper describes dependencies between constraints, functions and strategies. We argue that models such as SyRes should be useful both for envisioning new resilience methods and metrics, as well as for engineering and evaluating resilient systems. - Highlights: • The SyRes model resolves contradictions between previous resilience definitions. • SyRes is a core model for envisioning and evaluating resilience metrics and models. • SyRes describes six functions in a systemic model. • They are anticipation, monitoring, response, recovery, learning, self-monitoring. • The model describes dependencies between constraints, functions and strategies

  9. Differential Topic Models.

    Science.gov (United States)

    Chen, Changyou; Buntine, Wray; Ding, Nan; Xie, Lexing; Du, Lan

    2015-02-01

    In applications we may want to compare different document collections: they could have shared content but also different and unique aspects in particular collections. This task has been called comparative text mining or cross-collection modeling. We present a differential topic model for this application that models both topic differences and similarities. For this we use hierarchical Bayesian nonparametric models. Moreover, we found it was important to properly model power-law phenomena in topic-word distributions, and thus we used the full Pitman-Yor process rather than just a Dirichlet process. Furthermore, we propose the transformed Pitman-Yor process (TPYP) to incorporate prior knowledge, such as vocabulary variations in different collections, into the model. To deal with the non-conjugacy between model prior and likelihood in the TPYP, we propose an efficient sampling algorithm using a data augmentation technique based on the multinomial theorem. Experimental results show the model discovers interesting aspects of different collections. We also show the proposed MCMC-based algorithm achieves a dramatically reduced test perplexity compared to some existing topic models. Finally, we show our model outperforms the state-of-the-art for document classification/ideology prediction on a number of text collections.
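
    To see why the Pitman-Yor process suits power-law data, the sketch below samples from its Chinese-restaurant representation; the discount d and concentration a are illustrative, and this is the generative process only, not the authors' sampler:

```python
# Minimal sketch: Chinese-restaurant sampling from a Pitman-Yor process.
import numpy as np

rng = np.random.default_rng(2)
d, a, n = 0.5, 1.0, 5000
counts = []                      # customers per table

for i in range(n):
    total = i
    p_new = (a + d * len(counts)) / (total + a)
    probs = [(c - d) / (total + a) for c in counts] + [p_new]
    k = rng.choice(len(probs), p=np.array(probs))
    if k == len(counts):
        counts.append(1)         # open a new table
    else:
        counts[k] += 1           # join an existing table

counts = sorted(counts, reverse=True)
print(f"{len(counts)} tables; largest five: {counts[:5]}")
# With d > 0 the table sizes follow a power law, matching Zipf-like
# topic-word distributions better than a Dirichlet process (d = 0).
```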

  10. Making sense to modelers: Presenting UML class model differences in prose

    DEFF Research Database (Denmark)

    Störrle, Harald

    2013-01-01

    Understanding the difference between two models, such as different versions of a design, can be difficult. It is a commonly held belief in the model differencing community that the best way of presenting a model difference is by using graph or tree-based visualizations. We disagree and present an alternative approach where sets of low-level model differences are abstracted into high-level model differences that lend themselves to being presented textually. This format is informed by an explorative survey to elicit the change descriptions modelers use themselves. Our approach is validated by a controlled experiment that tests three alternatives to presenting model differences. Our findings support our claim that the approach presented here is superior to EMF Compare.

  11. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    Science.gov (United States)

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can usually yield optimal solutions differing from the "true" ones of the problem. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM) and the associated solution method for simultaneously addressing modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new attempt different from the previous modeling efforts. The previous ones focused on addressing uncertainty in physical parameters (i.e. soil porosity) while this one aims to deal with uncertainty in mathematical simulator (arising from model residuals). Compared to the existing modeling approaches (i.e. only parameter uncertainty is considered), the model has the advantages of providing mean-variance analysis for contaminant concentrations, mitigating the effects of modeling uncertainties on optimal remediation strategies, offering confidence level of optimal remediation strategies to system designers, and reducing computational cost in optimization processes. 2009 Elsevier B.V. All rights reserved.

  12. Modeling Ability Differentiation in the Second-Order Factor Model

    Science.gov (United States)

    Molenaar, Dylan; Dolan, Conor V.; van der Maas, Han L. J.

    2011-01-01

    In this article we present factor models to test for ability differentiation. Ability differentiation predicts that the size of IQ subtest correlations decreases as a function of the general intelligence factor. In the Schmid-Leiman decomposition of the second-order factor model, we model differentiation by introducing heteroscedastic residuals,…

  13. Data-Model and Inter-Model Comparisons of the GEM Outflow Events Using the Space Weather Modeling Framework

    Science.gov (United States)

    Welling, D. T.; Eccles, J. V.; Barakat, A. R.; Kistler, L. M.; Haaland, S.; Schunk, R. W.; Chappell, C. R.

    2015-12-01

    Two storm periods were selected by the Geospace Environment Modeling Ionospheric Outflow focus group for community collaborative study because of their high magnetospheric activity and extensive data coverage: the September 27 - October 4, 2002 corotating interaction region event and the October 22 - 29 coronal mass ejection event. During both events, the FAST, Polar, Cluster, and other missions made key observations, creating prime periods for data-model comparison. The GEM community has come together to simulate this period using many different methods in order to evaluate models, compare results, and expand our knowledge of ionospheric outflow and its effects on global dynamics. This paper presents Space Weather Modeling Framework (SWMF) simulations of these important periods compared against observations from the Polar TIDE, Cluster CODIF and EFW instruments. Emphasis will be given to the second event. Density and velocity of oxygen and hydrogen throughout the lobes, plasma sheet, and inner magnetosphere will be the focus of these comparisons. For these simulations, the SWMF couples the multifluid version of BATS-R-US MHD to a variety of ionospheric outflow models of varying complexity. The simplest is outflow arising from constant MHD inner boundary conditions. Two first-principles-based models are also leveraged: the Polar Wind Outflow Model (PWOM), a fluid treatment of outflow dynamics, and the Generalized Polar Wind (GPW) model, which combines fluid and particle-in-cell approaches. Each model is capable of capturing a different set of energization mechanisms, yielding different outflow results. The data-model comparisons will illustrate how well each approach captures reality and which energization mechanisms are most important. Inter-model comparisons will illustrate how the different outflow specifications affect the magnetosphere. Specifically, it is found that the GPW provides increased heavy ion outflow over a broader spatial range than the alternative

  14. Interface models

    DEFF Research Database (Denmark)

    Ravn, Anders P.; Staunstrup, Jørgen

    1994-01-01

    This paper proposes a model for specifying interfaces between concurrently executing modules of a computing system. The model does not prescribe a particular type of communication protocol and is aimed at describing interfaces between both software and hardware modules or a combination of the two....... The model describes both functional and timing properties of an interface...

  15. Wastewater treatment models

    DEFF Research Database (Denmark)

    Gernaey, Krist; Sin, Gürkan

    2011-01-01

    The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including description of biological phosphorus removal, physical-chemical processes, hydraulics and settling tanks. For attached growth systems, biofilm models have progressed from analytical steady-state models to more complex 2D/3D dynamic numerical models. Plant-wide modeling is set to advance further the practice…

  16. Wastewater Treatment Models

    DEFF Research Database (Denmark)

    Gernaey, Krist; Sin, Gürkan

    2008-01-01

    The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including description of biological phosphorus removal, physical–chemical processes, hydraulics, and settling tanks. For attached growth systems, biofilm models have progressed from analytical steady-state models to more complex 2-D/3-D dynamic numerical models. Plant-wide modeling is set to advance further…

  17. Usability Prediction & Ranking of SDLC Models Using Fuzzy Hierarchical Usability Model

    Science.gov (United States)

    Gupta, Deepak; Ahlawat, Anil K.; Sagar, Kalpna

    2017-06-01

    Evaluation of software quality is an important aspect of controlling and managing software. Through such evaluation, improvements in the software process can be made. Software quality is significantly dependent on software usability. Many researchers have proposed a number of usability models. Each model considers a set of usability factors but does not cover all usability aspects. Practical implementation of these models is still missing, as there is a lack of a precise definition of usability. Also, it is very difficult to integrate these models into current software engineering practices. In order to overcome these challenges, this paper aims to define the term `usability' using the proposed hierarchical usability model with its detailed taxonomy. The taxonomy considers generic evaluation criteria for identifying the quality components, which brings together factors, attributes and characteristics defined in various HCI and software models. For the first time, the usability model is also implemented to predict more accurate usability values. The proposed system is named the fuzzy hierarchical usability model and can be easily integrated into current software engineering practices. In order to validate the work, a dataset of six software development life cycle models is created and employed. These models are ranked according to their predicted usability values. This research also focuses on a detailed comparison of the proposed model with existing usability models.

  18. The lagRST Model: A Turbulence Model for Non-Equilibrium Flows

    Science.gov (United States)

    Lillard, Randolph P.; Oliver, A. Brandon; Olsen, Michael E.; Blaisdell, Gregory A.; Lyrintzis, Anastasios S.

    2011-01-01

    This study presents a new class of turbulence model designed for wall-bounded, high Reynolds number flows with separation. The model addresses deficiencies seen in the modeling of nonequilibrium turbulent flows. These flows generally have variable adverse pressure gradients which cause the turbulent quantities to react at a finite rate to changes in the mean flow quantities. This "lag" in the response of the turbulent quantities cannot be modeled by most standard turbulence models, which are designed to model equilibrium turbulent boundary layers. The model presented uses a standard 2-equation model as the baseline for turbulent equilibrium calculations, but adds transport equations to account directly for non-equilibrium effects in the Reynolds Stress Tensor (RST) that are seen in large pressure gradients involving shock waves and separation. Comparisons are made to several standard turbulence modeling validation cases, including an incompressible boundary layer (both neutral and adverse pressure gradients), an incompressible mixing layer and a transonic bump flow. In addition, a hypersonic Shock Wave Turbulent Boundary Layer Interaction with separation is assessed along with a transonic capsule flow. Results show a substantial improvement over the baseline models for transonic separated flows. The results are mixed for the SWTBLI flows assessed. Separation predictions are not as good as those of the baseline models, but the overprediction of the peak heat flux downstream of the reattachment shock that plagues many models is reduced.
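
    The essence of the lag idea can be written as a relaxation equation, dR/dt = a(R_eq − R): the modeled turbulence quantity R chases its equilibrium value at a finite rate rather than instantaneously. A minimal sketch with assumed values follows:

```python
# Minimal sketch of a "lagged" turbulence quantity relaxing toward equilibrium.
import numpy as np

dt, nsteps, a = 0.01, 1000, 2.0
R = 0.0
history = []
for i in range(nsteps):
    R_eq = 1.0 if i * dt > 1.0 else 0.0   # mean flow changes abruptly at t = 1
    R += dt * a * (R_eq - R)              # lagged response of the turbulence
    history.append(R)

print(f"R at t=2: {history[int(2.0/dt) - 1]:.3f} (equilibrium would be 1.0)")
```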

  19. Validation of community models: 3. Tracing field lines in heliospheric models

    Science.gov (United States)

    MacNeice, Peter; Elliott, Brian; Acebal, Ariel

    2011-10-01

    Forecasting hazardous gradual solar energetic particle (SEP) bursts at Earth requires accurately modeling field line connections between Earth and the locations of coronal or interplanetary shocks that accelerate the particles. We test the accuracy of field lines reconstructed using four different models of the ambient coronal and inner heliospheric magnetic field, through which these shocks must propagate, including the coupled Wang-Sheeley-Arge (WSA)/ENLIL model. Evaluating the WSA/ENLIL model performance is important since it is the most sophisticated model currently available to space weather forecasters which can model interplanetary coronal mass ejections and, when coupled with particle acceleration and transport models, will provide a complete model for gradual SEP bursts. Previous studies using a simpler Archimedean spiral approach above 2.5 solar radii have reported poor performance. We test the accuracy of the model field lines connecting Earth to the Sun at the onset times of 15 impulsive SEP bursts, comparing the foot points of these field lines with the locations of surface events believed to be responsible for the SEP bursts. We find the WSA/ENLIL model performance is no better than the simplest spiral model, and the principal source of error is the model's inability to reproduce sufficient low-latitude open flux. This may be due to the model's use of static synoptic magnetograms, which fail to account for transient activity in the low corona, during which reconnection events believed to initiate the SEP acceleration may contribute short-lived open flux at low latitudes. Time-dependent coronal models incorporating these transient events may be needed to significantly improve Earth/Sun field line forecasting.
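
    For reference, the simple spiral baseline mentioned above can be computed in a few lines. The sketch below maps Earth back along an Archimedean (Parker) spiral for an assumed constant solar wind speed:

```python
# Minimal sketch of the Archimedean (Parker) spiral footpoint mapping.
import numpy as np

omega_sun = 2.0 * np.pi / (25.4 * 86400.0)  # solar rotation rate, rad/s
au = 1.496e11                               # Sun-Earth distance, m
v_sw = 450e3                                # solar wind speed, m/s (assumed)

delta_phi = omega_sun * au / v_sw           # longitude swept along the spiral
print(f"footpoint shift: {np.degrees(delta_phi):.1f} deg west of Earth's longitude")
# ~55 deg for 450 km/s wind; models like WSA/ENLIL replace this single
# constant-speed spiral with a simulated, structured solar wind.
```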

  20. Modeling promoter grammars with evolving hidden Markov models

    DEFF Research Database (Denmark)

    Won, Kyoung-Jae; Sandelin, Albin; Marstrand, Troels Torben

    2008-01-01

    MOTIVATION: Describing and modeling biological features of eukaryotic promoters remains an important and challenging problem within computational biology. The promoters of higher eukaryotes in particular display a wide variation in regulatory features, which are difficult to model. Often several factors are involved in the regulation of a set of co-regulated genes. If so, promoters can be modeled with connected regulatory features, where the network of connections is characteristic for a particular mode of regulation. RESULTS: With the goal of automatically deciphering such regulatory structures, we present a method that iteratively evolves an ensemble of regulatory grammars using a hidden Markov Model (HMM) architecture composed of interconnected blocks representing transcription factor binding sites (TFBSs) and background regions of promoter sequences. The ensemble approach reduces the risk…

  1. Bio-Inspired Neural Model for Learning Dynamic Models

    Science.gov (United States)

    Duong, Tuan; Duong, Vu; Suri, Ronald

    2009-01-01

    A neural-network mathematical model that, relative to prior such models, places greater emphasis on some of the temporal aspects of real neural physical processes, has been proposed as a basis for massively parallel, distributed algorithms that learn dynamic models of possibly complex external processes by means of learning rules that are local in space and time. The algorithms could be made to perform such functions as recognition and prediction of words in speech and of objects depicted in video images. The approach embodied in this model is said to be "hardware-friendly" in the following sense: The algorithms would be amenable to execution by special-purpose computers implemented as very-large-scale integrated (VLSI) circuits that would operate at relatively high speeds and low power demands.

  2. Dynamics models and modeling of tree stand development

    Directory of Open Access Journals (Sweden)

    M. V. Rogozin

    2015-04-01

    Full Text Available A brief analysis of scientific works in Russia and the CIS over the past 100 years is given. Logical and mathematical models are considered conceptually, together with some results of their verification. It was found that the models include different laws and parameters whose combination allows them to be divided into four categories: models of static states, development models, models of tending the natural forest, and models of cultivation. Each category has fulfilled, and continues to fulfil, its tasks in economic management. Thus, static-state models (yield tables) played a prominent role in establishing what the most productive (fully stocked) stands in different regions of the country may be. However, they do not answer the question of which initial states lead to the production of fully stocked stands. Studies of stand growth have used systems analysis, yet works studying static states, taken out of biological time, dominate, and the real dynamics of stand growth has therefore remained almost unexplored. It is no accident that 'chrono-forestry', 'plantation forestry' and even 'non-traditional forestry' have appeared, each with a number of new concepts of stand development. This is quite in keeping with Kuhn (Kuhn, 2009): as the crisis in forestry began, alternative theories appeared and conflicting scientific schools now coexist. To develop models of stand development, it is proposed to use the well-known method of repeated observations over 10–20 years, combined with reconstruction of the history of the initial density, established by studying the dynamics of stand indicators: trunk volume, crown overlap coefficient, the sum of the volumes of all crowns, and the relative length of the crown. Using these indicators, the researcher selects natural series of stand development with the same initial density. As a theoretical basis for the models it is possible to postulate the general properties of

  3. PORTER'S FIVE FORCES MODEL, SCOTT MORTON'S FIVE FORCES MODEL, BAKOS & TREACY MODEL ANALYZES STRATEGIC INFORMATION SYSTEMS MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Indra Gamayanto

    2004-01-01

    Full Text Available Wollongong City Council (WCC) is one of the most progressive and innovative local government organizations in Australia. Wollongong City Council uses Information Technology to gain competitive advantage and to face a global economy in the future. Porter's Five Forces model is one of the models that can be used at Wollongong City Council, because Porter's Five Forces model has strength in the relationship between buyers and suppliers (bargaining power of suppliers and bargaining power of buyers). Another model, Scott Morton's Five Forces model, has strength in analyzing the social impact factor, so to gain competitive advantage in the future and have good IT/IS strategic planning, this model can be used as well. The Bakos & Treacy model is almost the same as Porter's model, but the Bakos & Treacy model can also be applied at Wollongong City Council to improve capability in transforming the organization, efficiency, and effectiveness.

  4. Modeling dynamic functional connectivity using a wishart mixture model

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    framework provides model selection by quantifying models' generalization to new data. We use this to quantify the number of states within a prespecified window length. We further propose a heuristic procedure for choosing the window length based on contrasting, for each window length, the predictive...... together, whereas short windows are more unstable and influenced by noise, and we find that our heuristic correctly identifies an adequate level of complexity. On single-subject resting-state fMRI data we find that dynamic models generally outperform static models, and using the proposed heuristic points...

  5. Integration of Simulink Models with Component-based Software Models

    Directory of Open Access Journals (Sweden)

    MARIAN, N.

    2008-06-01

    Full Text Available Model-based development aims to facilitate the development of embedded control systems by emphasizing the separation of the design level from the implementation level. Model-based design involves the use of multiple models that represent different views of a system, having different semantics of abstract system descriptions. Usually, in mechatronics systems, design proceeds by iterating model construction, model analysis, and model transformation. When constructing a MATLAB/Simulink model, plant and controller behavior is simulated using graphical blocks to represent mathematical and logical constructs and process flow; software code is then generated. A Simulink model is a representation of the design or implementation of a physical system that satisfies a set of requirements. A software component-based system aims to organize system architecture and behavior as a means of computation, communication and constraints, using computational blocks and aggregates for both discrete and continuous behavior, different interconnection and execution disciplines for event-based and time-based controllers, and so on, to meet the demands for more functionality, at even lower prices, and under opposing constraints. COMDES (Component-based Design of Software for Distributed Embedded Systems) is such a component-based system framework developed by the software engineering group of the Mads Clausen Institute for Product Innovation (MCI), University of Southern Denmark. Once specified, the software model has to be analyzed. One way of doing that is to wrap the model back into Simulink S-functions and use Simulink's extensive simulation features, thus allowing an early exploration of the possible design choices over multiple disciplines. The paper describes a safe translation of a restricted set of MATLAB/Simulink blocks to COMDES software components, both for continuous and discrete behavior, and the transformation of the software system into the S

  6. Can the Stephani model be an alternative to FRW accelerating models?

    International Nuclear Information System (INIS)

    Godlowski, Wlodzimierz; Stelmach, Jerzy; Szydlowski, Marek

    2004-01-01

    A class of Stephani cosmological models as a prototype of a non-homogeneous universe is considered. The non-homogeneity can lead to accelerated evolution, which is now observed from the SNe Ia data. Three samples of type Ia supernovae obtained by Perlmutter et al, Tonry et al and Knop et al are taken into account. Different statistical methods (best fits as well as the maximum likelihood method) are used to estimate the model parameters. The Stephani model is considered as an alternative to the ΛCDM model in explaining the present acceleration of the universe. The model explains the acceleration of the universe at the same level of accuracy as the ΛCDM model (the χ² statistics are comparable). From the best-fit analysis it follows that the Stephani model is characterized by a higher value of the density parameter Ω_m0 than the ΛCDM model. It is also shown that the model is consistent with the location of the CMB peaks.

  7. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher-fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes perfect sensor and estimator models. In this simulation study, the spacecraft dynamics results from the ADAMS software are used, as the CAST software is unavailable. The main source of spacecraft dynamics error in the higher-fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error on the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. The signal generation model has characteristics (mean, variance and power spectral density) similar to the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher-fidelity spacecraft dynamics modeling of the CAST software.

  8. Optimizing a gap conductance model applicable to VVER-1000 thermal–hydraulic model

    International Nuclear Information System (INIS)

    Rahgoshay, M.; Hashemi-Tilehnoee, M.

    2012-01-01

    Highlights: ► Two known conductance models for application in a VVER-1000 thermal–hydraulic code are examined. ► An optimized gap conductance model is developed which can predict the gap conductance in good agreement with FSAR data. ► The licensed thermal–hydraulic code is coupled with the gap conductance model predictor externally. -- Abstract: The modeling of gap conductance for application in VVER-1000 thermal–hydraulic codes is addressed. Two known models, namely the CALZA-BINI and RELAP5 gap conductance models, are examined. By externally linking the gap conductance models with the COBRA-EN thermal–hydraulic code, the acceptable range of each model is specified. The result of each gap conductance model versus linear heat rate has been compared with FSAR data. A linear heat rate of about 9 kW/m is the boundary for the optimization process. Since each gap conductance model has its advantages and limitations, the optimized gap conductance model can predict the gap conductance better than either of the two models individually.
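
    As a simplified illustration (not the CALZA-BINI or RELAP5 correlations themselves), conduction through the fill gas alone can be estimated as h_gap ≈ k_gas/(t_gap + g_jump), with all values below assumed:

```python
# Minimal sketch of a gas-conduction gap conductance estimate, ignoring
# radiation and pellet-clad contact terms. All values are illustrative.
k_helium = 0.3        # W/(m*K), fill-gas conductivity at operating temperature
t_gap = 80e-6         # m, residual pellet-clad gap
g_jump = 10e-6        # m, combined temperature-jump distances

h_gap = k_helium / (t_gap + g_jump)
print(f"h_gap = {h_gap:.0f} W/(m^2*K)")
# In a coupled code the gap width (and hence h_gap) would be updated each
# iteration from fuel/clad thermal expansion before re-running the
# thermal-hydraulic solution.
```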

  9. Towards a standard model for research in agent-based modeling and simulation

    Directory of Open Access Journals (Sweden)

    Nuno Fachada

    2015-11-01

    Full Text Available Agent-based modeling (ABM) is a bottom-up modeling approach, where each entity of the system being modeled is uniquely represented as an independent decision-making agent. ABMs are very sensitive to implementation details. Thus, it is very easy to inadvertently introduce changes which modify model dynamics. Such problems usually arise due to the lack of transparency in model descriptions, which constrains how models are assessed, implemented and replicated. In this paper, we present PPHPC, a model which aims to serve as a standard in agent-based modeling research, namely, but not limited to, conceptual model specification, statistical analysis of simulation output, model comparison and parallelization studies. This paper focuses on the first two aspects (conceptual model specification and statistical analysis of simulation output), also providing a canonical implementation of PPHPC. The paper serves as a complete reference to the presented model, and can be used as a tutorial for simulation practitioners who wish to improve the way they communicate their ABMs.

  10. Automated parameter estimation for biological models using Bayesian statistical model checking.

    Science.gov (United States)

    Hussain, Faraz; Langmead, Christopher J; Mi, Qi; Dutta-Moscato, Joyeeta; Vodovotz, Yoram; Jha, Sumit K

    2015-01-01

    Probabilistic models have gained widespread acceptance in the systems biology community as a useful way to represent complex biological systems. Such models are developed using existing knowledge of the structure and dynamics of the system, experimental observations, and inferences drawn from statistical analysis of empirical data. A key bottleneck in building such models is that some system variables cannot be measured experimentally. These variables are incorporated into the model as numerical parameters. Determining values of these parameters that justify existing experiments and provide reliable predictions when model simulations are performed is a key research problem. Using an agent-based model of the dynamics of acute inflammation, we demonstrate a novel parameter estimation algorithm by discovering the amount and schedule of doses of bacterial lipopolysaccharide that guarantee a set of observed clinical outcomes with high probability. We synthesized values of twenty-eight unknown parameters such that the parameterized model instantiated with these parameter values satisfies four specifications describing the dynamic behavior of the model. We have developed a new algorithmic technique for discovering parameters in complex stochastic models of biological systems given behavioral specifications written in a formal mathematical logic. Our algorithm uses Bayesian model checking, sequential hypothesis testing, and stochastic optimization to automatically synthesize parameters of probabilistic biological models.
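
    The overall loop can be sketched as follows: propose parameter values, estimate by repeated simulation the probability that the behavioral specification holds, and accept parameters that meet a probability threshold. The one-parameter stub model and specification below are stand-ins for the agent-based inflammation model and its formal-logic specifications:

```python
# Minimal sketch of Monte Carlo statistical model checking for parameter search.
import numpy as np

rng = np.random.default_rng(3)

def simulate(dose):
    # Stub stochastic model: response peaks near dose = 2.
    return 1.0 - (dose - 2.0) ** 2 / 4.0 + rng.normal(0, 0.1)

def spec_satisfied(outcome):
    return 0.6 <= outcome <= 1.2      # "observed clinical outcome" band

def estimate_prob(dose, n=200):
    return np.mean([spec_satisfied(simulate(dose)) for _ in range(n)])

candidates = rng.uniform(0.0, 4.0, size=50)
accepted = [d for d in candidates if estimate_prob(d) >= 0.9]
print(f"{len(accepted)} of {len(candidates)} candidate doses satisfy the spec")
```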

  11. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
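
    A minimal sketch of sparse estimation in the p >> n regime, using scikit-learn's LASSO implementation on synthetic data:

```python
# Minimal sketch: LASSO variable selection with many more variables than samples.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
n, p = 50, 1000                          # p >> n
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]   # only 5 truly active variables
y = X @ beta + rng.normal(0, 0.5, n)

model = Lasso(alpha=0.2).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(f"selected {selected.size} variables: {selected[:10]}")
```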

  12. STEREOMETRIC MODELLING

    Directory of Open Access Journals (Sweden)

    P. Grimaldi

    2012-07-01

    Full Text Available Stereometric modelling means modelling achieved with: – the use of a pair of virtual cameras, with parallel axes, positioned at a mutual distance averaging 1/10 of the camera-object distance (in practice, the realization and use of a stereometric camera in the modelling program); – the visualization of the shot in two distinct windows; – the stereoscopic viewing of the shot while modelling. Since the definition of "3D vision" is inaccurately used for the simple perspective view of an object, the word stereo must be added, so that "3D stereo vision" stands for "three-dimensional view" and, therefore, measures the width, height and depth of the surveyed image. This is achieved through the development of a stereometric model, either real or virtual, via the "materialization", either real or virtual, of the optical-stereometric model made visible with a stereoscope. A continuous on-line updating of the cultural heritage record is feasible with the help of photogrammetry and stereometric modelling. The catalogue of the Architectonic Photogrammetry Laboratory of Politecnico di Bari is available on line at: http://rappresentazione.stereofot.it:591/StereoFot/FMPro?-db=StereoFot.fp5&-lay=Scheda&-format=cerca.htm&-view

  13. A study on the intrusion model by physical modeling

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jung Yul; Kim, Yoo Sung; Hyun, Hye Ja [Korea Inst. of Geology Mining and Materials, Taejon (Korea, Republic of)

    1995-12-01

    In physical modeling, the actual phenomena of seismic wave propagation are directly measured, as in a field survey, and furthermore the structure and physical properties of the subsurface are known. The measured datasets from physical modeling are therefore very desirable as input data for testing the efficiency of various inversion algorithms. An underground structure formed by intrusion, which can often be seen in seismic sections for oil exploration, is investigated by physical modeling. The model is characterized by various types of layer boundaries with steep dip angles. These physical modeling data are thus valuable not only for interpreting seismic sections for oil exploration as a case history, but also for developing data processing techniques and estimating the capability of software such as migration and full waveform inversion. (author). 5 refs., 18 figs.

  14. Modeling conflict : research methods, quantitative modeling, and lessons learned.

    Energy Technology Data Exchange (ETDEWEB)

    Rexroth, Paul E.; Malczynski, Leonard A.; Hendrickson, Gerald A.; Kobos, Peter Holmes; McNamara, Laura A.

    2004-09-01

    This study investigates the factors that lead countries into conflict. Specifically, political, social and economic factors may offer insight as to how prone a country (or set of countries) may be for inter-country or intra-country conflict. Largely methodological in scope, this study examines the literature for quantitative models that address or attempt to model conflict both in the past, and for future insight. The analysis concentrates specifically on the system dynamics paradigm, not the political science mainstream approaches of econometrics and game theory. The application of this paradigm builds upon the most sophisticated attempt at modeling conflict as a result of system level interactions. This study presents the modeling efforts built on limited data and working literature paradigms, and recommendations for future attempts at modeling conflict.

  15. A model for photothermal responses of flowering in rice. II. Model evaluation.

    NARCIS (Netherlands)

    Yin, X.; Kropff, M.J.; Nakagawa, H.; Horie, T.; Goudriaan, J.

    1997-01-01

    A detailed nonlinear model, the 3s-Beta model, for photothermal responses of flowering in rice (Oryza sativa L.) was evaluated for predicting rice flowering date under field conditions. This model was compared with three other models: a three-plane linear model and two nonlinear models, viz. the

  16. Modeling North Atlantic Nor'easters With Modern Wave Forecast Models

    Science.gov (United States)

    Perrie, Will; Toulany, Bechara; Roland, Aron; Dutour-Sikiric, Mathieu; Chen, Changsheng; Beardsley, Robert C.; Qi, Jianhua; Hu, Yongcun; Casey, Michael P.; Shen, Hui

    2018-01-01

    Three state-of-the-art operational wave forecast model systems are implemented on fine-resolution grids for the Northwest Atlantic. These models are: (1) a composite model system consisting of SWAN implemented within WAVEWATCHIII® (the latter is hereafter, WW3) on a nested system of traditional structured grids, (2) an unstructured grid finite-volume wave model denoted "SWAVE," using SWAN physics, and (3) an unstructured grid finite element wind wave model denoted as "WWM" (for "wind wave model") which uses WW3 physics. Models are implemented on grid systems that include relatively large domains to capture the wave energy generated by the storms, as well as including fine-resolution nearshore regions of the southern Gulf of Maine with resolution on the scale of 25 m to simulate areas where inundation and coastal damage have occurred, due to the storms. Storm cases include three intense midlatitude cases: a spring Nor'easter storm in May 2005, the Patriot's Day storm in 2007, and the Boxing Day storm in 2010. Although these wave model systems have comparable overall properties in terms of their performance and skill, it is found that there are differences. Models that use more advanced physics, as presented in recent versions of WW3, tuned to regional characteristics, as in the Gulf of Maine and the Northwest Atlantic, can give enhanced results.

  17. Limits with modeling data and modeling data with limits

    Directory of Open Access Journals (Sweden)

    Lionello Pogliani

    2006-01-01

    Full Text Available Modeling of the solubility of amino acids and purine and pyrimidine bases with a set of sixteen molecular descriptors has been thoroughly analyzed to detect and understand the reasons for anomalies in the description of this property for these two classes of compounds. Unsatisfactory modeling can be ascribed to incomplete collateral data, i.e., to the fact that insufficient data are known about the behavior of these compounds in solution, usually because intermolecular forces cannot be modeled. The anomalous modeling can be detected from the rather large values of the standard deviation of the estimates for the whole set of compounds, and from the unsatisfactory modeling of some subsets of these compounds. The detected abnormalities can thus be used (i) to get an idea about weak intermolecular interactions such as hydration, self-association and hydrogen-bond phenomena in solution, and (ii) to reshape the molecular descriptors with the introduction of parameters that allow better modeling. This last procedure should be used with care, bearing in mind that the solubility phenomenon is rather complex.

  18. Transparent Model Transformation: Turning Your Favourite Model Editor into a Transformation Tool

    DEFF Research Database (Denmark)

    Acretoaie, Vlad; Störrle, Harald; Strüber, Daniel

    2015-01-01

    Current model transformation languages are supported by dedicated editors, often closely coupled to a single execution engine. We introduce Transparent Model Transformation, a paradigm enabling modelers to specify transformations using a familiar tool: their model editor. We also present VMTL, th...... model transformation tool sharing the model editor’s benefits, transparently....

  19. Model Fusion Tool - the Open Environmental Modelling Platform Concept

    Science.gov (United States)

    Kessler, H.; Giles, J. R.

    2010-12-01

    The vision of an Open Environmental Modelling Platform: seamlessly linking geoscience data, concepts and models to aid decision making in times of environmental change. Governments and their executive agencies across the world are facing increasing pressure to make decisions about the management of resources in light of population growth and environmental change. In the UK, for example, groundwater is becoming a scarce resource for large parts of its most densely populated areas. At the same time, river and groundwater flooding resulting from high rainfall events is increasing in scale and frequency, and sea level rise is threatening the defences of coastal cities. There is also a need for affordable housing, improved transport infrastructure and waste disposal, as well as sources of renewable energy and sustainable food production. These challenges can only be resolved if solutions are based on sound scientific evidence. Although we have knowledge and understanding of many individual processes in the natural sciences, it is clear that a single science discipline is unable to answer these questions and their inter-relationships. Modern science increasingly employs computer models to simulate natural, economic and human systems. Management and planning require scenario modelling, forecasts and 'predictions'. Although the outputs are often impressive in terms of apparent accuracy and visualisation, they are inherently not suited to simulating the response to feedbacks from other models of the earth system, such as the impact of human actions. Geological Survey Organisations (GSOs) are increasingly employing advances in Information Technology to visualise and improve their understanding of geological systems. Instead of 2-dimensional paper maps and reports, many GSOs now produce 3-dimensional geological framework models and groundwater flow models as their standard output. Additionally, the British Geological Survey has developed standard routines to link geological

  20. Building generic anatomical models using virtual model cutting and iterative registration

    Directory of Open Access Journals (Sweden)

    Hallgrímsson Benedikt

    2010-02-01

    Full Text Available Abstract Background Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms. Methods The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting sub-volumes by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models. Results After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step. Conclusions Our method is very flexible and easy to use, such that anyone can use image stacks to create models and

  1. Relative efficiency of joint-model and full-conditional-specification multiple imputation when conditional models are compatible: The general location model.

    Science.gov (United States)

    Seaman, Shaun R; Hughes, Rachael A

    2018-06-01

    Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to misspecification of that model when there is substantial missingness in the outcome variable.
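
    To make the contrast concrete, here is a toy sketch (ours; continuous variables only, and without the proper posterior parameter draws of full multiple imputation) of the two strategies: a joint multivariate-normal imputation versus an FCS-style chained-equations imputer such as scikit-learn's IterativeImputer:

        # Joint-model vs. FCS-style imputation, heavily simplified.
        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        def joint_normal_impute(X):
            """Impute under a multivariate-normal joint model fitted to complete
            cases; assumes each row has at least one observed value."""
            complete = X[~np.isnan(X).any(axis=1)]
            mu, cov = complete.mean(axis=0), np.cov(complete, rowvar=False)
            X_imp = X.copy()
            for row in X_imp:
                miss = np.isnan(row)
                if miss.any():
                    # Conditional mean: mu_m + C_mo C_oo^{-1} (x_o - mu_o)
                    C_oo = cov[np.ix_(~miss, ~miss)]
                    C_mo = cov[np.ix_(miss, ~miss)]
                    row[miss] = mu[miss] + C_mo @ np.linalg.solve(C_oo, row[~miss] - mu[~miss])
            return X_imp

        # FCS analogue: cycle through one conditional regression per variable.
        fcs = IterativeImputer(sample_posterior=True, random_state=0)
        # fcs.fit_transform(X) imputes by iterating the chained regressions.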

  2. Hydrological Modeling Reproducibility Through Data Management and Adaptors for Model Interoperability

    Science.gov (United States)

    Turner, M. A.

    2015-12-01

    Because of a lack of centralized planning and no widely-adopted standards among hydrological modeling research groups, research communities, and the data management teams meant to support research, there is chaos when it comes to data formats, spatio-temporal resolutions, ontologies, and data availability. All this makes true scientific reproducibility and collaborative integrated modeling impossible without some glue to piece it all together. Our Virtual Watershed Integrated Modeling System provides the tools and modeling framework hydrologists need to accelerate and fortify new scientific investigations by tracking provenance and providing adaptors for integrated, collaborative hydrologic modeling and data management. With global warming placing water resources under increasing stress, reproducible hydrological modeling will be increasingly important to improve transparency and understanding of the scientific facts revealed through modeling. The Virtual Watershed Data Engine is capable of ingesting a wide variety of heterogeneous model inputs, outputs, model configurations, and metadata. We will demonstrate one example, starting from real-time raw weather station data packaged with station metadata. Our integrated modeling system will then create gridded input data via geostatistical methods along with error and uncertainty estimates. These gridded data are then used as input to hydrological models, all of which are available as web services wherever feasible. Models may be integrated in a data-centric way where the outputs too are tracked and used as inputs to "downstream" models. This work is part of an ongoing collaborative Tri-state (New Mexico, Nevada, Idaho) NSF EPSCoR Project, WC-WAVE, comprised of researchers from multiple universities in each of the three states. The tools produced and presented here have been developed collaboratively alongside watershed scientists to address specific modeling problems with an eye on the bigger picture of

  3. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    Science.gov (United States)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first order autoregressive model was constructed for the forecast errors. The parameters were conditioned on climatic conditions. In the second model the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first order autoregressive model was constructed for the forecast errors. For the last model, positive and negative errors were modeled separately. The errors were first NQT-transformed before a model was constructed in which the mean values were conditioned on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted a) the median values to be close to the observed values; b) the forecast intervals to be narrow; c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and their main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated, and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
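
    A minimal sketch of the first error model, assuming strictly positive inflows (required by Box-Cox) and omitting the conditioning on climatic states described above; scipy and statsmodels stand in for the authors' implementation:

        # Box-Cox transform the flows, then fit AR(1) to the forecast errors.
        import numpy as np
        from scipy.stats import boxcox
        from statsmodels.tsa.arima.model import ARIMA

        def fit_error_model(observed, forecasted):
            lam = boxcox(observed)[1]                   # transform parameter from observations
            def bc(x):
                return (x**lam - 1) / lam if lam != 0 else np.log(x)
            errors = bc(observed) - bc(forecasted)      # errors in transformed space
            ar1 = ARIMA(errors, order=(1, 0, 0)).fit()  # first order autoregressive model
            return lam, ar1

        # One-step-ahead error forecast, used to correct tomorrow's inflow forecast:
        # lam, ar1 = fit_error_model(obs, fcst); next_err = ar1.forecast(1)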

  4. Modelling open pit shovel-truck systems using the Machine Repair Model

    Energy Technology Data Exchange (ETDEWEB)

    Krause, A.; Musingwini, C. [CBH Resources Ltd., Sydney, NSW (Australia). Endeaver Mine

    2007-08-15

    Shovel-truck systems for loading and hauling material in open pit mines are now routinely analysed using simulation models or off-the-shelf simulation software packages, which can be very expensive for once-off or occasional use. The simulation models invariably produce different estimates of fleet size due to their differing estimations of cycle time. No single model or package can accurately estimate the required fleet size because the fleet operating parameters are characteristically random and dynamic. In order to improve confidence in sizing the fleet for a mining project, at least two estimation models should be used. This paper demonstrates that the Machine Repair Model can be modified and used as a model for estimating truck fleet size in an open pit shovel-truck system. The modified Machine Repair Model is first applied to a virtual open pit mine case study. The results compare favourably to output from other estimation models using the same input parameters for the virtual mine. The modified Machine Repair Model is then applied to an existing open pit coal operation, the Kwagga Section of Optimum Colliery, as a case study. Again the results confirm those obtained from the virtual mine case study. It is concluded that the Machine Repair Model can be an affordable alternative to off-the-shelf generic software because it is easily implemented in Microsoft Excel, a software platform that most mines already use.
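
    For illustration, a sketch of the classical machine-repair (M/M/c//N) steady-state computation applied to fleet sizing: N trucks cycle between hauling and c shovels. The rates and the production target below are assumptions for the example, not figures from the paper:

        # Finite-source queue: N trucks, c shovels; lam = rate of returning to a
        # shovel (1/haul-cycle time away), mu = loading rate per shovel.
        from math import comb, factorial

        def machine_repair_probs(N, c, lam, mu):
            """Steady-state probabilities of n trucks at the shovels."""
            r = lam / mu
            weights = []
            for n in range(N + 1):
                if n <= c:
                    w = comb(N, n) * r**n
                else:
                    w = factorial(N) / (factorial(N - n) * factorial(c) * c**(n - c)) * r**n
                weights.append(w)
            total = sum(weights)
            return [w / total for w in weights]

        def throughput(N, c, lam, mu):
            """Loads per unit time = mu * E[number of busy shovels]."""
            p = machine_repair_probs(N, c, lam, mu)
            return mu * sum(min(n, c) * p_n for n, p_n in enumerate(p))

        # Fleet sizing: smallest N whose throughput meets the production target.
        # Assumed rates: 20 min haul cycle, 4 min loading, 2 shovels, 25 loads/h.
        N = 1
        while throughput(N, c=2, lam=1/20, mu=1/4) * 60 < 25:
            N += 1
        print("required trucks:", N)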

  5. CFD Wake Modelling with a BEM Wind Turbine Sub-Model

    Directory of Open Access Journals (Sweden)

    Anders Hallanger

    2013-01-01

    Full Text Available Modelling of wind farms using computational fluid dynamics (CFD) resolving the flow field around each wind turbine's blades on a moving computational grid is still too costly and time consuming in terms of computational capacity and effort. One strategy is to use sub-models for the wind turbines, and sub-grid models for turbulence production and dissipation, to model the turbulent viscosity accurately enough to handle the interaction of wakes in wind farms. A wind turbine sub-model, based on Blade Element Momentum (BEM) theory, see Hansen (2008), has been implemented in an in-house CFD code, see Hallanger et al. (2002). The tangential and normal reaction forces from the wind turbine blades are distributed on the control volumes (CVs) at the wind turbine rotor location as sources in the conservation equations of momentum. The classical k-epsilon turbulence model of Launder and Spalding (1972) is implemented with a sub-grid turbulence (SGT) model, see Sha and Launder (1979) and Sand and Salvesen (1994). Steady state CFD simulations were compared with flow and turbulence measurements in the wake of a model scale wind turbine, see Krogstad and Eriksen (2011). The simulated results compared best with experiments when stalling (boundary layer separation) on the wind turbine blades did not occur. The SGT model did improve the turbulence level in the wake but seems to smear the wake flow structure. It should be noted that the simulations are carried out steady state, not including flow oscillations caused by vortex shedding from tower and blades as they were in the experiments. Further improvement of the simulated velocity defect and turbulence level seems to rely on better parameter estimation for the SGT model, improvements to the SGT model, and possibly transient instead of steady-state simulations.
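
    The core of a BEM sub-model is the fixed-point iteration for the induction factors at each blade element; the converged element forces are what get distributed as momentum sources on the rotor CVs. Below is a bare-bones version (our illustration, assuming a thin-airfoil lift slope, constant drag, and no tip-loss or stall corrections, all of which a production sub-model would include):

        # Fixed-point BEM iteration for a single blade element.
        import math

        def bem_element(r, V0, omega, chord, twist, B=3, Cd=0.01, tol=1e-8):
            """Return the axial (a) and tangential (a_prime) induction factors."""
            sigma = B * chord / (2 * math.pi * r)   # local solidity
            a, a_prime = 0.0, 0.0
            for _ in range(200):
                # Inflow angle from axial and tangential velocity components.
                phi = math.atan2((1 - a) * V0, (1 + a_prime) * omega * r)
                alpha = phi - twist
                Cl = 2 * math.pi * alpha            # thin-airfoil approximation
                Cn = Cl * math.cos(phi) + Cd * math.sin(phi)
                Ct = Cl * math.sin(phi) - Cd * math.cos(phi)
                a_new = 1.0 / (4 * math.sin(phi) ** 2 / (sigma * Cn) + 1)
                ap_new = 1.0 / (4 * math.sin(phi) * math.cos(phi) / (sigma * Ct) - 1)
                if abs(a_new - a) < tol and abs(ap_new - a_prime) < tol:
                    break
                a, a_prime = a_new, ap_new
            return a, a_prime

        # Example with made-up element data:
        # a, ap = bem_element(r=10.0, V0=8.0, omega=2.0, chord=1.0, twist=0.1)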

  6. Aggregated wind power plant models consisting of IEC wind turbine models

    DEFF Research Database (Denmark)

    Altin, Müfit; Göksu, Ömer; Hansen, Anca Daniela

    2015-01-01

    The common practice regarding the modelling of large generation components has been to make use of models representing the performance of the individual components with a required level of accuracy and detail. Owing to the rapid increase of wind power plants comprising a large number of wind turbines, the parameters and models needed to represent each individual wind turbine in detail make it necessary to develop aggregated wind power plant models, considering the simulation time for power system stability studies. In this paper, aggregated wind power plant models consisting of the IEC 61400-27 variable speed wind turbine models (type 3 and type 4) with a power plant controller are presented. The performance of the detailed benchmark wind power plant model and the aggregated model are compared by means of simulations for the specified test cases. Consequently, the results are summarized and discussed.

  7. Modeling Historical Land Cover and Land Use: A Review from Contemporary Modeling

    Directory of Open Access Journals (Sweden)

    Laura Alfonsina Chang-Martínez

    2015-09-01

    Full Text Available Spatially-explicit land cover land use change (LCLUC) models are becoming increasingly useful tools for historians and archaeologists. Such models have been developed and used by geographers, ecologists and land managers over the last few decades to carry out prospective scenarios. In this paper, we review historical models to compare them with prospective models, on the assumption that the ample experience gained in the development of prospective simulation models can benefit the development of models whose objective is the simulation of changes that happened in the past. The review is divided into three sections: in the first section, we explain the functioning of contemporary LCLUC models; in the second section, we analyze historical LCLUC models; in the third section, we compare the two types of models, and finally, we discuss what contemporary LCLUC models can contribute to historical LCLUC models.

  8. Individual model evaluation and probabilistic weighting of models

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-01-01

    This note stresses the importance of trying to assess the accuracy of each model individually. Putting a Bayesian probability distribution on a population of models faces conceptual and practical complications, and apparently can come only after the work of evaluating the individual models. Moreover, the primary issue is "How good is this model?" Therefore, the individual evaluations are first in both chronology and importance. They are not easy, but some ideas are given here on how to perform them.

  9. Beginning SQL Server Modeling: Model-driven Application Development in SQL Server

    CERN Document Server

    Weller, Bart

    2010-01-01

    Get ready for model-driven application development with SQL Server Modeling! This book covers Microsoft's SQL Server Modeling (formerly known under the code name "Oslo") in detail and contains the information you need to be successful with designing and implementing workflow modeling. Beginning SQL Server Modeling will help you gain a comprehensive understanding of how to apply DSLs and other modeling components in the development of SQL Server implementations. Most importantly, after reading the book and working through the examples, you will have considerable experience using SQL M

  10. Research on Multi-Person Parallel Modeling Method Based on Integrated Model Persistent Storage

    Science.gov (United States)

    Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying

    2018-03-01

    This paper mainly studies the multi-person parallel modeling method based on integrated model persistent storage. The integrated model refers to a set of MDDT modeling graphics systems, which can carry out multi-angle, multi-level and multi-stage description of aerospace general embedded software. Persistent storage refers to converting the data model in memory into a storage model and converting the storage model back into a data model in memory, where the data model refers to the object model and the storage model is a binary stream. Multi-person parallel modeling refers to the need for multi-person collaboration, the separation of roles, and even real-time remote synchronized modeling.
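
    The persistence round-trip can be illustrated in a few lines (ours; the MDDT system's actual binary format is not described in the abstract, so Python's pickle stands in for the storage model):

        # Object model (in memory) <-> binary stream (storage model).
        import pickle
        from dataclasses import dataclass, field

        @dataclass
        class ModelElement:
            name: str
            children: list = field(default_factory=list)

        def to_storage_model(root: ModelElement) -> bytes:
            """Serialize the in-memory object model to a binary stream."""
            return pickle.dumps(root)

        def to_data_model(stream: bytes) -> ModelElement:
            """Restore the in-memory object model from the binary stream."""
            return pickle.loads(stream)

        root = ModelElement("system", [ModelElement("subsystem")])
        assert to_data_model(to_storage_model(root)) == root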

  11. Business Model Visualization

    OpenAIRE

    Zagorsek, Branislav

    2013-01-01

    A business model describes the company's most important activities, its proposed value, and the compensation for that value. Business model visualization makes it possible to capture and describe the most important components of the business model simply and systematically, while standardization of the concept allows comparison between companies. There are several ways to visualize the model. The aim of this paper is to describe the options for business model visualization and business mod...

  12. Evaluation of statistical models for forecast errors from the HBV model

    Science.gov (United States)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) median values of the forecast distribution and the observations. For the first model observed and forecasted inflows were transformed by the Box-Cox transformation before a first order auto-regressive model was constructed for the forecast errors. The parameters were conditioned on weather classes. In the second model the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first order auto-regressive model was constructed for the forecast errors. For the third model positive and negative errors were modeled separately. The errors were first NQT-transformed before conditioning the mean error values on climate, forecasted inflow and yesterday's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe R_eff increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals. Their main drawback was that the distributions are less reliable than for Model 3. For Model 3 the median values did not fit well since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the two other models. At the same time Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.

  13. On coupling global biome models with climate models

    International Nuclear Information System (INIS)

    Claussen, M.

    1994-01-01

    The BIOME model of Prentice et al. (1992), which predicts global vegetation patterns in equilibrium with climate, is coupled with the ECHAM climate model of the Max-Planck-Institut fuer Meteorologie, Hamburg. It is found that incorporation of the BIOME model into ECHAM, regardless of the coupling frequency, does not enhance the simulated climate variability, expressed in terms of differences between global vegetation patterns. The strongest changes are seen only between the initial biome distribution and the biome distribution computed after the first simulation period, provided that the climate-biome model is started from a biome distribution that resembles the present-day distribution. After the first simulation period, there is no significant shrinking, expanding, or shifting of biomes. Likewise, no trend is seen in global averages of land-surface parameters and climate variables. (orig.)

  14. Models in Science Education: Applications of Models in Learning and Teaching Science

    Science.gov (United States)

    Ornek, Funda

    2008-01-01

    In this paper, I discuss different types of models in science education and applications of them in learning and teaching science, in particular physics. Based on the literature, I categorize models as conceptual and mental models according to their characteristics. In addition to these models, there is another model called "physics model" by the…

  15. Weighted-indexed semi-Markov models for modeling financial returns

    International Nuclear Information System (INIS)

    D’Amico, Guglielmo; Petroni, Filippo

    2012-01-01

    In this paper we propose a new stochastic model based on a generalization of semi-Markov chains for studying the high frequency price dynamics of traded stocks. We assume that the financial returns are described by a weighted-indexed semi-Markov chain model. We show, through Monte Carlo simulations, that the model is able to reproduce important stylized facts of financial time series such as the first-passage-time distributions and the persistence of volatility. The model is applied to data from the Italian and German stock markets from 1 January 2007 until the end of December 2010. (paper)
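
    A simplified simulation sketch (ours) of the underlying semi-Markov mechanism: discretized return states with state-dependent sojourn times. The paper's index/weighting of volatility memory is omitted, and all parameter values below are made up:

        # Semi-Markov chain for tick-by-tick returns (simplified).
        import numpy as np

        rng = np.random.default_rng(1)
        states = np.array([-0.002, 0.0, 0.002])    # discretized tick returns
        P = np.array([[0.0, 0.6, 0.4],             # embedded jump-chain transitions
                      [0.5, 0.0, 0.5],
                      [0.4, 0.6, 0.0]])
        mean_sojourn = np.array([2.0, 5.0, 2.0])   # mean ticks spent in each state

        def simulate(n_steps, s=1):
            path = []
            while len(path) < n_steps:
                stay = rng.geometric(1.0 / mean_sojourn[s])  # geometric sojourn time
                path.extend([states[s]] * stay)
                s = rng.choice(3, p=P[s])                    # jump to a new state
            return np.array(path[:n_steps])

        returns = simulate(10_000)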

  16. Metal Mixture Modeling Evaluation project: 2. Comparison of four modeling approaches

    Science.gov (United States)

    Farley, Kevin J.; Meyer, Joe; Balistrieri, Laurie S.; DeSchamphelaere, Karl; Iwasaki, Yuichi; Janssen, Colin; Kamo, Masashi; Lofts, Steve; Mebane, Christopher A.; Naito, Wataru; Ryan, Adam C.; Santore, Robert C.; Tipping, Edward

    2015-01-01

    As part of the Metal Mixture Modeling Evaluation (MMME) project, models were developed by the National Institute of Advanced Industrial Science and Technology (Japan), the U.S. Geological Survey (USA), HDR|HydroQual, Inc. (USA), and the Centre for Ecology and Hydrology (UK) to address the effects of metal mixtures on biological responses of aquatic organisms. A comparison of the 4 models, as they were presented at the MMME Workshop in Brussels, Belgium (May 2012), is provided herein. Overall, the models were found to be similar in structure (free ion activities computed by WHAM; specific or non-specific binding of metals/cations in or on the organism; specification of metal potency factors and/or toxicity response functions to relate metal accumulation to biological response). Major differences in modeling approaches are attributed to various modeling assumptions (e.g., single versus multiple types of binding site on the organism) and specific calibration strategies that affected the selection of model parameters. The models provided a reasonable description of additive (or nearly additive) toxicity for a number of individual toxicity test results. Less-than-additive toxicity was more difficult to describe with the available models. Because of limitations in the available datasets and the strong inter-relationships among the model parameters (log K_M values, potency factors, toxicity response parameters), further evaluation of specific model assumptions and calibration strategies is needed.

  17. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    Science.gov (United States)

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3- compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3- and NH4+. Post-processing analyses provided insights into parameter-observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.
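
    Schematically, the inverse-modeling loop minimizes a weighted sum of squared residuals over the parameter vector. PEST does this for DayCent with regularization and Jacobian-based updates; the skeleton looks like the sketch below, where a toy two-parameter model stands in for DayCent's 67-parameter run:

        # Weighted least-squares calibration against observations.
        import numpy as np
        from scipy.optimize import least_squares

        def run_model(params, x=np.linspace(0, 1, 50)):
            """Toy stand-in: in the real workflow this would execute the
            simulation and return predictions aligned with each observation."""
            return params[0] * np.exp(-params[1] * x)

        truth = np.array([2.0, 3.0])
        observations = run_model(truth) + 0.05 * np.random.default_rng(0).normal(size=50)
        weights = np.full(50, 1.0 / 0.05)          # 1/sigma per observation

        def weighted_residuals(params):
            return weights * (run_model(params) - observations)

        result = least_squares(weighted_residuals, x0=np.array([1.0, 1.0]), method="trf")
        phi = np.sum(result.fun ** 2)              # total sum of weighted squared residuals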

  18. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

    Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.

    2013-01-01

    The full text of publication follows. Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the calculation model needs to be established, described using either the Geometry Description Markup Language (GDML) or C++. However, it is time-consuming and error-prone to describe the models manually in GDML. Automatic modeling methods have been developed recently, but problems exist in most present modeling programs; in particular, some are not accurate or not adapted to specific CAD formats. To convert CAD models into GDML geometry models accurately, a Geant4 Computer Aided Design (CAD) based modeling method was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of this method is dealing with CAD models represented by boundary representation (B-REP) and GDML models represented by constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells. Corresponding GDML convex basic solids are then generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is assembled through a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM), and tested with several models including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling. (authors)

  19. Groundwater Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions requires developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence-building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). A hierarchical approach to making this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study, assuming field data either consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating they are appropriate measures for evaluating model realizations. The use of validation

  20. Mixed models, linear dependency, and identification in age-period-cohort models.

    Science.gov (United States)

    O'Brien, Robert M

    2017-07-20

    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts, or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just-identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how it comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
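
    The linear dependency at the heart of the identification problem is easy to exhibit: with cohort = period - age, a fixed-effects design matrix loses one rank. A small demonstration (ours):

        # The APC design matrix is rank-deficient by construction.
        import numpy as np

        age = np.array([20, 20, 30, 30, 40, 40])
        period = np.array([1990, 2000, 1990, 2000, 1990, 2000])
        cohort = period - age                      # exact linear dependency

        X = np.column_stack([np.ones_like(age), age, period, cohort])
        print(np.linalg.matrix_rank(X), "of", X.shape[1], "columns")   # 3 of 4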

  1. Assessment of structural model and parameter uncertainty with a multi-model system for soil water balance models

    Science.gov (United States)

    Michalik, Thomas; Multsch, Sebastian; Frede, Hans-Georg; Breuer, Lutz

    2016-04-01

    Water for agriculture is strongly limited in arid and semi-arid regions and often of low quality in terms of salinity. The application of saline waters for irrigation increases the salt load in the rooting zone and has to be managed by leaching to maintain a healthy soil, i.e. to wash out salts by additional irrigation. Dynamic simulation models are helpful tools to calculate the root zone water fluxes and soil salinity content in order to investigate best management practices. However, there is little information on structural and parameter uncertainty for simulations regarding the water and salt balance of saline irrigation. Hence, we established a multi-model system with four different models (AquaCrop, RZWQM, SWAP, Hydrus1D/UNSATCHEM) to analyze the structural and parameter uncertainty by using the Generalized Likelihood Uncertainty Estimation (GLUE) method. Hydrus1D/UNSATCHEM and SWAP were set up with multiple sets of different implemented functions (e.g. matric and osmotic stress for root water uptake), which results in a broad range of different model structures. The simulations were evaluated against soil water and salinity content observations. The posterior distribution of the GLUE analysis yields behavioural parameter sets and reveals uncertainty intervals for parameter uncertainty. Throughout all of the model sets, most parameters accounting for the soil water balance show low uncertainty; only one or two out of five to six parameters in each model set display high uncertainty (e.g. the pore-size distribution index in SWAP and Hydrus1D/UNSATCHEM). The differences between the models and model setups reveal the structural uncertainty. The highest structural uncertainty is observed for deep percolation fluxes between the model sets of Hydrus1D/UNSATCHEM (~200 mm) and RZWQM (~500 mm), which are more than twice as high for the latter. The model sets show a high variation in uncertainty intervals for deep percolation as well, with an interquartile range (IQR) of
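
    For readers unfamiliar with GLUE, the core loop is short. The sketch below (ours, with a hypothetical run_model wrapper and a Nash-Sutcliffe informal likelihood, which the abstract does not specify) shows Monte Carlo sampling, a behavioural threshold, and the percentile summaries from which uncertainty intervals are derived:

        # Minimal GLUE: sample, score, threshold, summarize behavioural sets.
        import numpy as np

        rng = np.random.default_rng(42)

        def glue(run_model, obs, bounds, n_samples=10_000, keep_frac=0.1):
            lows, highs = np.array(bounds).T
            samples = rng.uniform(lows, highs, size=(n_samples, len(bounds)))
            # Informal likelihood: Nash-Sutcliffe efficiency per parameter set.
            nse = np.array([
                1 - np.sum((run_model(p) - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
                for p in samples
            ])
            threshold = np.quantile(nse, 1 - keep_frac)
            behavioural = samples[nse >= threshold]
            return np.percentile(behavioural, [25, 50, 75], axis=0)  # IQR per parameter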

  2. Sunspot Modeling: From Simplified Models to Radiative MHD Simulations

    Directory of Open Access Journals (Sweden)

    Rolf Schlichenmaier

    2011-09-01

    Full Text Available We review our current understanding of sunspots from the scales of their fine structure to their large-scale (global) structure, including the processes of their formation and decay. Recently, sunspot models have undergone a dramatic change. In the past, several aspects of sunspot structure were addressed by static MHD models with parametrized energy transport. Models of sunspot fine structure have relied heavily on strong assumptions about flow and field geometry (e.g., flux tubes, "gaps", convective rolls), which were motivated in part by the observed filamentary structure of penumbrae or by the necessity of explaining the substantial energy transport required to maintain the penumbral brightness. However, none of these models could self-consistently explain all aspects of penumbral structure (energy transport, filamentation, Evershed flow). In recent years, 3D radiative MHD simulations have advanced dramatically to the point at which models of complete sunspots with sufficient resolution to capture sunspot fine structure are feasible. Here overturning convection is the central element responsible for energy transport, filamentation leading to fine structure, and the driving of strong outflows. On the larger scale these models are also in the process of addressing the subsurface structure of sunspots as well as sunspot formation. With this shift in modeling capabilities and the recent advances in high resolution observations, future research will be guided by comparing observation and theory.

  3. Model documentation renewable fuels module of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-04-01

    This report documents the objectives, analytical approach, and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1997 Annual Energy Outlook forecasts. The report catalogues and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described. This documentation report serves three purposes. First, it is a reference document for model analysts, model users, and the public interested in the construction and application of the RFM. Second, it meets the legal requirement of the Energy Information Administration (EIA) to provide adequate documentation in support of its models. Finally, such documentation facilitates continuity in EIA model development by providing information sufficient to perform model enhancements and data updates as part of EIA's ongoing mission to provide analytical and forecasting information systems.

  4. Model documentation renewable fuels module of the National Energy Modeling System

    International Nuclear Information System (INIS)

    1997-04-01

    This report documents the objectives, analytical approach, and design of the National Energy Modeling System (NEMS) Renewable Fuels Module (RFM) as it relates to the production of the 1997 Annual Energy Outlook forecasts. The report catalogues and describes modeling assumptions, computational methodologies, data inputs, and parameter estimation techniques. A number of offline analyses used in lieu of RFM modeling components are also described. This documentation report serves three purposes. First, it is a reference document for model analysts, model users, and the public interested in the construction and application of the RFM. Second, it meets the legal requirement of the Energy Information Administration (EIA) to provide adequate documentation in support of its models. Finally, such documentation facilitates continuity in EIA model development by providing information sufficient to perform model enhancements and data updates as part of EIA's ongoing mission to provide analytical and forecasting information systems

  5. Predictive Modelling and Time: An Experiment in Temporal Archaeological Predictive Models

    OpenAIRE

    David Ebert

    2006-01-01

    One of the most common criticisms of archaeological predictive modelling is that it fails to account for temporal or functional differences in sites. However, a practical solution to temporal or functional predictive modelling has proven to be elusive. This article discusses temporal predictive modelling, focusing on the difficulties of employing temporal variables, then introduces and tests a simple methodology for the implementation of temporal modelling. The temporal models thus created ar...

  6. OPEC model: adjustment or new model

    International Nuclear Information System (INIS)

    Ayoub, A.

    1994-01-01

    Since the early eighties, the international oil industry has gone through major changes: new financial markets, reintegration, opening of the upstream, liberalization of investments, privatization. This article provides answers to two major questions: what are the reasons for these changes? And do these changes announce the replacement of the OPEC model by a new model in which state intervention is weaker and national companies are more autonomous? This would imply a profound change in the political and institutional systems of oil producing countries. (Author)

  7. Model based design introduction: modeling game controllers to microprocessor architectures

    Science.gov (United States)

    Jungwirth, Patrick; Badawy, Abdel-Hameed

    2017-04-01

    We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. Model based design is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is to solve a problem one step at a time. The approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can be simulated with the real world sensor data. The output from the simulated digital control system can then be compared to the old analog based control system. Model based design can be compared to Agile software development. The Agile software development goal is to develop working software in incremental steps. Progress is measured in completed and tested code units. Progress in model based design is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.

  8. Metamodeling for Business Model Design: Facilitating development and communication of Business Model Canvas (BMC) models with an OMG standards-based metamodel.

    OpenAIRE

    Hauksson, Hilmar

    2013-01-01

    Interest in business models and business modeling has increased rapidly since the mid-1990s, and there are numerous approaches used to create business models. The business model concept has many definitions, which can lead to confusion and slower progress in the research and development of business models. A business model ontology (BMO) was created in 2004, in which the business model concept was conceptualized based on an analysis of existing literature. A few years later the Business Model Can...

  9. Thermal Models of the Niger Delta: Implications for Charge Modelling

    International Nuclear Information System (INIS)

    Ejedawe, J.

    2002-01-01

    There are generally three main sources of temperature data: BHT data from log headers, production temperature data, and continuous temperature logs. From analysis of continuous temperature profiles of over 100 wells in the Niger Delta, two main thermal models (single-leg and dogleg) are defined, with occasional occurrence of a modified dogleg model. The dogleg model is characterised by a shallow interval of low geothermal gradient overlying a deeper interval of higher gradient (> 3.0 °C/100 m). This is characteristically developed onshore. While modelling in the offshore area is simple, requiring only consideration of heat transients, modelling in the onshore requires modelling programmes with built-in modules to handle convective heat-flow dissipation in the shallow layer. Current workaround methods involve tweaking of thermal conductivity values to mimic the effects of the underlying heat flow process, or heat flow mapping above and below the depth of gradient change. These methods allow for more realistic thermal modelling and hydrocarbon type prediction, and also more accurate prediction of temperature prior to drilling and of reservoir rock properties. The regional distribution of the models also has an impact on the regional hydrocarbon distribution pattern in the Niger Delta.

  10. Alternative methods of modeling wind generation using production costing models

    International Nuclear Information System (INIS)

    Milligan, M.R.; Pang, C.K.

    1996-08-01

    This paper examines the methods of incorporating wind generation in two production costing models: one is a load duration curve (LDC) based model and the other is a chronological-based model. These two models were used to evaluate the impacts of wind generation on two utility systems using actual collected wind data at two locations with high potential for wind generation. The results are sensitive to the selected wind data and the level of benefits of wind generation is sensitive to the load forecast. The total production cost over a year obtained by the chronological approach does not differ significantly from that of the LDC approach, though the chronological commitment of units is more realistic and more accurate. Chronological models provide the capability of answering important questions about wind resources which are difficult or impossible to address with LDC models

  11. Using the object modeling system for hydrological model development and application

    Directory of Open Access Journals (Sweden)

    S. Kralisch

    2005-01-01

    Full Text Available State-of-the-art challenges in the sustainable management of water resources have created demand for integrated, flexible and easy to use hydrological models which are able to simulate the quantitative and qualitative aspects of the hydrological cycle with a sufficient degree of certainty. Existing models which have been developed to fit these needs are often constrained to specific scales or purposes and thus cannot be easily adapted to meet different challenges. As a solution for flexible and modularised model development and application, the Object Modeling System (OMS) has been developed in a joint approach by the USDA-ARS, GPSRU (Fort Collins, CO, USA), USGS (Denver, CO, USA), and the FSU (Jena, Germany). The OMS provides a modern modelling framework which allows single process components to be implemented, compiled and applied as custom-tailored model assemblies. This paper describes the basic principles of the OMS and its main components, and explains in more detail how the problems arising during the coupling of models or model components are solved inside the system. It highlights the integration of different spatial and temporal scales by their representation as spatial modelling entities embedded into time compound components. As an example, the implementation of the hydrological model J2000 is discussed.

  12. A Method of Upgrading a Hydrostatic Model to a Nonhydrostatic Model

    Directory of Open Access Journals (Sweden)

    Chi-Sann Liou

    2009-01-01

    Full Text Available As the sigma-p coordinate under the hydrostatic approximation can be interpreted as the mass coordinate without the hydrostatic approximation, we propose a method that upgrades a hydrostatic model to a nonhydrostatic model with relatively little effort. The method adds to the primitive equations the extra terms omitted by the hydrostatic approximation and two prognostic equations for the vertical speed w and the nonhydrostatic part of pressure p'. With properly formulated governing equations, at each time step the dynamic part of the model is first integrated as for the original hydrostatic model, and then nonhydrostatic contributions are added as corrections to the hydrostatic solutions. In applying physical parameterizations after the dynamic part integration, all physics packages of the original hydrostatic model can be directly used in the nonhydrostatic model, since the upgraded nonhydrostatic model shares the same vertical coordinates with the original hydrostatic model. In this way, the majority of the code of the nonhydrostatic model comes from the original hydrostatic model. Extra code is only needed for the calculations additional to the primitive equations. In order to handle sound waves, we use smaller time steps in the nonhydrostatic part of the dynamic time integration, with a split-explicit scheme for horizontal momentum and temperature and a semi-implicit scheme for w and p'. Simulations of 2-dimensional mountain waves and density flows associated with a cold bubble have been used to test the method. The idealized case tests demonstrate that the proposed method realistically simulates the nonhydrostatic effects on different atmospheric circulations that are revealed in theoretical solutions and simulations from other nonhydrostatic models. This method can be used to upgrade any global or mesoscale model from hydrostatic to nonhydrostatic.

  13. Bayesian model selection of template forward models for EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. VENTILATION MODEL

    International Nuclear Information System (INIS)

    V. Chipman

    2002-01-01

    The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air relative to the heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, i.e. the remaining fraction of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the "Multiscale Thermohydrologic Model" (BSC 2001), use the wall heat fractions output by the Ventilation Model to initialize their postclosure analyses.
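
    In symbols (our notation, not the report's), the two reported quantities are:

        \eta(t, x) = \frac{\dot{Q}_{\mathrm{vent}}(t, x)}{\dot{Q}_{\mathrm{decay}}(t)},
        \qquad
        f_{\mathrm{wall}}(t, x) = 1 - \eta(t, x)

    where \eta is the fraction of decay heat carried away by the ventilation air and f_wall is the fraction conducted into the surrounding rock mass.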

  15. Tracer disposition kinetics in the determination of local cerebral blood flow by a venous equilibrium model, tube model, and distributed model

    International Nuclear Information System (INIS)

    Sawada, Y.; Sugiyama, Y.; Iga, T.; Hanano, M.

    1987-01-01

    Tracer distribution kinetics in the determination of local cerebral blood flow (LCBF) were examined using three models: the venous equilibrium, tube, and distributed models. The technique most commonly used for measuring LCBF is the tissue uptake method, which was first developed and applied by Kety. LCBF measured with the 14C-iodoantipyrine (IAP) method is calculated using an equation derived by Kety based on Fick's principle and a two-compartment model of blood-tissue exchange, together with the tissue concentration at a single data point. The procedure, in which the tissue is assumed to be in equilibrium with venous blood, is referred to as the tissue equilibration model. In this article, the effects of the concentration gradient of tracer along the length of the capillary (tube model) and of transverse heterogeneity in the capillary transit time (distributed model) on the determination of LCBF are theoretically analyzed for the tissue sampling method. Similarities and differences among these models are explored. The rank order of the LCBF calculated using arterial blood concentration time courses and the tissue concentration of tracer was tube model (model II) < distributed model (model III) < venous equilibrium model (model I). Data on 14C-IAP kinetics reported by Ohno et al. were employed. The LCBFs calculated based on model I were 45-260% larger than those based on models II or III. To discriminate among the three models, we propose to examine the effect of altering the venous infusion time of tracer on the apparent tissue-to-blood concentration ratio (lambda_app). The ratio of the predicted lambda_app in models II or III to that in model I ranged from 0.6 to 1.3.

  16. Functionalized anatomical models for EM-neuron Interaction modeling

    Science.gov (United States)

    Neufeld, Esra; Cassará, Antonino Mario; Montanaro, Hazael; Kuster, Niels; Kainz, Wolfgang

    2016-06-01

    The understanding of interactions between electromagnetic (EM) fields and nerves is crucial in contexts ranging from therapeutic neurostimulation to low frequency EM exposure safety. To properly consider the impact of in vivo induced field inhomogeneity on non-linear neuronal dynamics, coupled EM-neuronal dynamics modeling is required. For that purpose, novel functionalized computable human phantoms have been developed. Their implementation and the systematic verification of the integrated anisotropic quasi-static EM solver and neuronal dynamics modeling functionality, based on the method of manufactured solutions and numerical reference data, is described. Electric and magnetic stimulation of the ulnar and sciatic nerve were modeled to help understand a range of controversial issues related to the magnitude and optimal determination of strength-duration (SD) time constants. The results indicate the importance of considering the stimulation-specific inhomogeneous field distributions (especially at tissue interfaces), realistic models of non-linear neuronal dynamics, very short pulses, and suitable SD extrapolation models. These results and the functionalized computable phantom will influence and support the development of safe and effective neuroprosthetic devices and novel electroceuticals. Furthermore, they will assist the evaluation of existing low frequency exposure standards for the entire population under all exposure conditions.

  17. Hidden Markov models: the best models for forager movements?

    Science.gov (United States)

    Joo, Rocio; Bertrand, Sophie; Tam, Jorge; Fablet, Ronan

    2013-01-01

    One major challenge in the emerging field of movement ecology is the inference of behavioural modes from movement patterns. This has been mainly addressed through hidden Markov models (HMMs). We propose here to evaluate two sets of alternative and state-of-the-art modelling approaches. First, we consider hidden semi-Markov models (HSMMs). They may better represent the behavioural dynamics of foragers since they explicitly model the duration of the behavioural modes. Second, we consider discriminative models, which state the inference of behavioural modes as a classification issue and may take better advantage of multivariate and non-linear combinations of movement pattern descriptors. For this work, we use a dataset of >200 trips from human foragers, Peruvian fishermen targeting anchovy. Their movements were recorded through a Vessel Monitoring System (∼1 record per hour), while their behavioural modes (fishing, searching and cruising) were reported by on-board observers. We compare the efficiency of hidden Markov, hidden semi-Markov, and three discriminative models (random forests, artificial neural networks and support vector machines) for inferring the fishermen behavioural modes, using a cross-validation procedure. HSMMs show the highest accuracy (80%), significantly outperforming HMMs and discriminative models. Simulations show that with data of higher temporal resolution, HSMMs reach nearly 100% accuracy. Our results demonstrate to what extent the sequential nature of movement is critical for accurately inferring behavioural modes from a trajectory, and we strongly recommend the use of HSMMs for such purposes. In addition, this work opens perspectives on the use of hybrid HSMM-discriminative models, where a discriminative setting for the observation process of HSMMs could greatly improve inference performance.
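
    As a concrete baseline, the HMM step can be reproduced with the hmmlearn package. This is a sketch under assumed preprocessing: the feature files and the mapping of states to modes are hypothetical, and hmmlearn offers no HSMM, which the study found superior:

        # Three hidden behavioural modes inferred from VMS movement descriptors.
        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        # X: one row per hourly VMS record, e.g. [speed, turning_angle];
        # `lengths` gives the number of records in each fishing trip.
        X = np.load("vms_features.npy")
        lengths = np.load("trip_lengths.npy").tolist()

        hmm = GaussianHMM(n_components=3, covariance_type="full", n_iter=100,
                          random_state=0)
        hmm.fit(X, lengths)
        modes = hmm.predict(X, lengths)   # 0/1/2 ~ fishing / searching / cruising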

  18. Hidden Markov models: the best models for forager movements?

    Directory of Open Access Journals (Sweden)

    Rocio Joo

    Full Text Available One major challenge in the emerging field of movement ecology is the inference of behavioural modes from movement patterns. This has been mainly addressed through hidden Markov models (HMMs). We propose here to evaluate two sets of alternative and state-of-the-art modelling approaches. First, we consider hidden semi-Markov models (HSMMs). They may better represent the behavioural dynamics of foragers since they explicitly model the duration of the behavioural modes. Second, we consider discriminative models, which state the inference of behavioural modes as a classification issue and may take better advantage of multivariate and non-linear combinations of movement pattern descriptors. For this work, we use a dataset of >200 trips from human foragers, Peruvian fishermen targeting anchovy. Their movements were recorded through a Vessel Monitoring System (∼1 record per hour), while their behavioural modes (fishing, searching and cruising) were reported by on-board observers. We compare the efficiency of hidden Markov, hidden semi-Markov, and three discriminative models (random forests, artificial neural networks and support vector machines) for inferring the fishermen behavioural modes, using a cross-validation procedure. HSMMs show the highest accuracy (80%), significantly outperforming HMMs and discriminative models. Simulations show that with data of higher temporal resolution, HSMMs reach nearly 100% accuracy. Our results demonstrate to what extent the sequential nature of movement is critical for accurately inferring behavioural modes from a trajectory, and we strongly recommend the use of HSMMs for such purposes. In addition, this work opens perspectives on the use of hybrid HSMM-discriminative models, where a discriminative setting for the observation process of HSMMs could greatly improve inference performance.

  19. How can model comparison help improving species distribution models?

    Directory of Open Access Journals (Sweden)

    Emmanuel Stephan Gritti

    Full Text Available Today, more than ever, robust projections of potential species range shifts are needed to anticipate and mitigate the impacts of climate change on biodiversity and ecosystem services. Such projections are so far provided almost exclusively by correlative species distribution models (correlative SDMs). However, concerns regarding the reliability of their predictive power are growing, and several authors call for the development of process-based SDMs. Still, each of these methods presents strengths and weaknesses which have to be estimated if they are to be reliably used by decision makers. In this study we compare projections of three different SDMs (STASH, LPJ and PHENOFIT) that lie on the continuum between correlative models and process-based models for the current distribution of three major European tree species, Fagus sylvatica L., Quercus robur L. and Pinus sylvestris L. We compare the consistency of the model simulations using an innovative comparison map profile method, integrating local and multi-scale comparisons. The three models simulate the current distribution of the three species relatively accurately. The process-based model performs almost as well as the correlative model, although the parameters of the former are not fitted to the observed species distributions. According to our simulations, species range limits are triggered, at the European scale, by establishment and survival through processes primarily related to phenology and resistance to abiotic stress rather than to growth efficiency. The accuracy of projections of the hybrid and process-based models could however be improved by integrating a more realistic representation of species resistance to water stress, for instance, advocating for continued efforts to understand and formulate explicitly the impact of climatic conditions and variations on these processes.

  20. Modeling Renewable Penetration Using a Network Economic Model

    Science.gov (United States)

    Lamont, A.

    2001-03-01

    This paper evaluates the accuracy of a network economic modeling approach in designing energy systems having renewable and conventional generators. The network approach models the system as a network of processes such as demands, generators, markets, and resources. The model reaches a solution by exchanging price and quantity information between the nodes of the system. This formulation is very flexible and takes very little time to build and modify models. This paper reports an experiment designing a system with photovoltaic, base-load fossil, and peaking fossil generators. The level of PV penetration as a function of its price, and the capacities of the fossil generators, were determined using the network approach and using an exact, analytic approach. It is found that the two methods agree very closely in terms of the optimal capacities and are nearly identical in terms of annual system costs.
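    A toy illustration of the price/quantity exchange idea described above; the demand curve, generator capacities and offer prices are invented, and the adjustment rule is a simple tâtonnement-style iteration rather than the model used in the paper:

```python
"""Market node balancing a demand node against base and peak generator nodes."""

def demand(price):  # demand node: quantity requested at a given price [MW]
    return max(0.0, 100.0 - 2.0 * price)

def supply(price):  # generator nodes: quantity offered at a given price [MW]
    base = 40.0 if price >= 20.0 else 0.0   # base unit: 40 MW offered at >= $20/MWh
    peak = 30.0 if price >= 35.0 else 0.0   # peak unit: 30 MW offered at >= $35/MWh
    return base + peak

price = 10.0
for _ in range(2000):  # market node: raise price on shortage, lower it on surplus
    gap = demand(price) - supply(price)
    if abs(gap) < 1e-6:
        break
    price += 0.01 * gap
print(f"clearing price ~ ${price:.2f}/MWh, quantity ~ {supply(price):.1f} MW")
```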

  1. Finite element modeling of a 3D coupled foot-boot model.

    Science.gov (United States)

    Qiu, Tian-Xia; Teo, Ee-Chon; Yan, Ya-Bo; Lei, Wei

    2011-12-01

    Increasingly, musculoskeletal models of the human body are used as powerful tools to study biological structures. The lower limb, and in particular the foot, is of interest because it is the primary physical interaction between the body and the environment during locomotion. The goal of this paper is to adopt finite element (FE) modeling and analysis approaches to create a state-of-the-art 3D coupled foot-boot model for future studies on the biomechanical investigation of stress injury mechanisms, footwear design and parachute landing fall simulation. In the modeling process, the foot-ankle model with the lower leg was developed based on Computed Tomography (CT) images using ScanIP, Surfacer and ANSYS. Then, the boot was represented by assembling the FE models of the upper, insole, midsole and outsole, built based on the FE model of the foot-ankle, and finally the coupled foot-boot model was generated by putting together the models of the lower limb and boot. In this study, the FE model of the foot and ankle was validated during balanced standing. There was good agreement in the overall patterns of predicted and measured plantar pressure distribution published in the literature. The coupled foot-boot model will be fully validated in subsequent works under both static and dynamic loading conditions for further studies on injury investigation in military and sports settings, footwear design and the characteristics of parachute landing impact in the military. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.

  2. Evaluation of gas radiation models in CFD modeling of oxy-combustion

    International Nuclear Information System (INIS)

    Rajhi, M.A.; Ben-Mansour, R.; Habib, M.A.; Nemitallah, M.A.; Andersson, K.

    2014-01-01

    Highlights: • CFD modeling of a typical industrial water tube boiler is conducted. • Different combustion processes were considered, including air and oxy-fuel combustion. • SGG, EWBM, Leckner, Perry and WSGG radiation models were considered in the study. • EWBM is the most accurate model and it is considered to be the benchmark model. • Characteristics of oxy-fuel combustion are compared to those of air–fuel combustion. - Abstract: Proper determination of the radiation energy is very important for proper predictions of the combustion characteristics inside combustion devices using CFD modeling. For this purpose, different gas radiation models were developed and applied in the present work. These radiation models vary in their accuracy and complexity according to the application. In this work, a CFD model for a typical industrial water tube boiler was developed, considering three different combustion environments. The combustion environments are air–fuel combustion (21% O2 and 79% N2), oxy-fuel combustion (21% O2 and 79% CO2) and oxy-fuel combustion (27% O2 and 73% CO2). The simple grey gas (SGG), exponential wide band (EWBM), Leckner, Perry and weighted sum of grey gases (WSGG) radiation models were examined and their influences on the combustion characteristics were evaluated. Among these radiation models, the EWBM was found to provide results close to the experimental data for the present boiler combustion application. The oxy-fuel combustion characteristics were analyzed and compared with those of air–fuel combustion

  3. EpiModel: An R Package for Mathematical Modeling of Infectious Disease over Networks.

    Science.gov (United States)

    Jenness, Samuel M; Goodreau, Steven M; Morris, Martina

    2018-04-01

    Package EpiModel provides tools for building, simulating, and analyzing mathematical models for the population dynamics of infectious disease transmission in R. Several classes of models are included, but the unique contribution of this software package is a general stochastic framework for modeling the spread of epidemics on networks. EpiModel integrates recent advances in statistical methods for network analysis (temporal exponential random graph models) that allow the epidemic modeling to be grounded in empirical data on contacts that can spread infection. This article provides an overview of both the modeling tools built into EpiModel, designed to facilitate learning for students new to modeling, and the application programming interface for extending package EpiModel, designed to facilitate the exploration of novel research questions for advanced modelers.

  4. Microstructural modelling of nuclear graphite using multi-phase models

    International Nuclear Information System (INIS)

    Berre, C.; Fok, S.L.; Marsden, B.J.; Mummery, P.M.; Marrow, T.J.; Neighbour, G.B.

    2008-01-01

    This paper presents a new modelling technique using three-dimensional multi-phase finite element models in which meshes representing the microstructure of thermally oxidised nuclear graphite were generated from X-ray micro-tomography images. The density of the material was related to the image greyscale using Beer-Lambert's law, and multiple phases could thus be defined. The local elastic and non-linear properties of each phase were defined as a function of density and changes in Young's modulus, tensile and compressive strength with thermal oxidation were calculated. Numerical predictions compared well with experimental data and with other numerical results obtained using two-phase models. These models were found to be more representative of the actual microstructure of the scanned material than two-phase models and, possibly because of pore closure occurring during compression, compressive tests were also predicted to be less sensitive to the microstructure geometry than tensile tests

  5. AREST model description

    International Nuclear Information System (INIS)

    Engel, D.W.; McGrail, B.P.

    1993-11-01

    The Office of Civilian Radioactive Waste Management and the Power Reactor and Nuclear Fuel Development Corporation of Japan (PNC) have supported the development of the Analytical Repository Source-Term (AREST) code at Pacific Northwest Laboratory. AREST is a computer model developed to evaluate radionuclide release from an underground geologic repository. The AREST code can be used to calculate/estimate the amount and rate of each radionuclide that is released from the engineered barrier system (EBS) of the repository. The EBS is the man-made or disrupted area of the repository. AREST was designed as a system-level model to simulate the behavior of the total repository by combining process-level models for the release from an individual waste package or container. AREST contains primarily analytical models for calculating the release/transport of radionuclides to the host rock that surrounds each waste package. Analytical models were used because of their small computational overhead, which allows all the input parameters to be derived from statistical distributions. Recently, a one-dimensional numerical model was also incorporated into AREST, to allow for more detailed modeling of the transport process with arbitrary-length decay chains. The next step in modeling the EBS is to develop a model that couples the probabilistic capabilities of AREST with a more detailed process model. This model will need to consider the reactive coupling of the processes that are involved in the release process. Such coupling would include: (1) the dissolution of the waste form, (2) the geochemical modeling of the groundwater, (3) the corrosion of the container overpacking, and (4) the backfill material, just to name a few. Several of these coupled processes are already incorporated in the current version of AREST

  6. Models in architectural design

    OpenAIRE

    Pauwels, Pieter

    2017-01-01

    Whereas architects and construction specialists used to rely mainly on sketches and physical models as representations of their own cognitive design models, they rely now more and more on computer models. Parametric models, generative models, as-built models, building information models (BIM), and so forth, they are used daily by any practitioner in architectural design and construction. Although processes of abstraction and the actual architectural model-based reasoning itself of course rema...

  7. Kantowski--Sachs cosmological models as big-bang models

    International Nuclear Information System (INIS)

    Weber, E.

    1985-01-01

    In the presence of a nonzero cosmological constant Λ, we classify the anisotropic cosmological models of the Kantowski--Sachs type by means of the quantities ε₀², q₀, Σ₀, corresponding, respectively, to the relative root-mean-square deviation from isotropy, the deceleration parameter, and the density parameter of the perfect fluid at a given time t = t₀. We obtain for Λ > 0 a set of big-bang models of zero measure as well as a set of cosmological models of nonzero measure evolving toward the de Sitter solution

  8. Multi-model analysis of terrestrial carbon cycles in Japan: reducing uncertainties in model outputs among different terrestrial biosphere models using flux observations

    Science.gov (United States)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2009-08-01

    Terrestrial biosphere models show large uncertainties when simulating carbon and water cycles, and reducing these uncertainties is a priority for developing more accurate estimates of both terrestrial ecosystem statuses and future climate changes. To reduce uncertainties and improve the understanding of these carbon budgets, we investigated the ability of flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and an improved model (based on calibration using flux observations). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using flux observations (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs, and model calibration using flux observations significantly improved the model outputs. These results show that to reduce uncertainties among terrestrial biosphere models, we need to conduct careful validation and calibration with available flux observations. Flux observation data significantly improved terrestrial biosphere models, not only on a point scale but also on spatial scales.

  9. Using Model Replication to Improve the Reliability of Agent-Based Models

    Science.gov (United States)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the community of artificial society and simulation due to the challenges of model verification and validation. Illustrating the replication in NetLogo, by a different author, of an ABM representing fraudulent behavior in a public service delivery system originally developed in the Java-based MASON toolkit, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  10. Generalised linear models for correlated pseudo-observations, with applications to multi-state models

    DEFF Research Database (Denmark)

    Andersen, Per Kragh; Klein, John P.; Rosthøj, Susanne

    2003-01-01

    Generalised estimating equation; Generalised linear model; Jackknife pseudo-value; Logistic regression; Markov Model; Multi-state model.

  11. Dynamic modeling method for infrared smoke based on enhanced discrete phase model

    Science.gov (United States)

    Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo

    2018-03-01

    The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems, and its accuracy directly influences the system veracity. However, current IR smoke models cannot provide high veracity, because certain physical characteristics are frequently ignored in fluid simulation: the discrete phase is simplified as a continuous phase, and the spinning of the IR decoy missile body is ignored. To address this defect, this paper proposes a dynamic modeling method for IR smoke based on an enhanced discrete phase model (DPM). A mathematical simulation model based on an enhanced DPM is built and a dynamic computational fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes a dynamic method for modeling IR smoke with higher veracity.

  12. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
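    The "explode then fit Poisson" step is easy to sketch on simulated data. The following is not the %PCFrailty macro; it shows only the fixed-effects part of the equivalence (the log-normal frailty would enter as a cluster-level random intercept, which plain GLM fitting omits):

```python
"""Piecewise-exponential data expansion plus a Poisson GLM with log-exposure offset."""
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = rng.binomial(1, 0.5, n)                    # one binary covariate
t = rng.exponential(1.0 / np.exp(0.5 * x))     # survival times, true log-HR = 0.5
event = (t < 2.0).astype(int)                  # administrative censoring at t = 2
t = np.minimum(t, 2.0)

cuts = np.array([0.0, 0.5, 1.0, 2.0])          # three pieces for the baseline hazard
rows = []
for ti, ei, xi in zip(t, event, x):            # "explode" each subject by piece
    for j in range(len(cuts) - 1):
        if ti <= cuts[j]:
            break
        exposure = min(ti, cuts[j + 1]) - cuts[j]   # time at risk within piece j
        died = int(bool(ei) and ti <= cuts[j + 1])  # event falls in piece j
        rows.append({"piece": j, "x": xi, "exposure": exposure, "d": died})
df = pd.DataFrame(rows)

X = pd.get_dummies(df["piece"], prefix="piece").assign(x=df["x"]).astype(float)
fit = sm.GLM(df["d"], X, family=sm.families.Poisson(),
             offset=np.log(df["exposure"])).fit()
print(fit.params)   # piece_* terms: log baseline hazards; x: log hazard ratio
```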

  13. Process modeling style

    CERN Document Server

    Long, John

    2014-01-01

    Process Modeling Style focuses on aspects of process modeling beyond notation that are very important to practitioners. Many people who model processes focus on the specific notation used to create their drawings. While that is important, there are many other aspects to modeling, such as naming, creating identifiers, descriptions, interfaces, patterns, and creating useful process documentation. Experienced author John Long focuses on those non-notational aspects of modeling, which practitioners will find invaluable. Gives solid advice for creating roles, work produ

  14. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    ""Engaging, elegantly written."" - Applied Mathematical ModellingMathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models.The author begins with a discussion of the term ""model,"" followed by clearly presented examples of the different types of mode

  15. AIDS Epidemiological models

    Science.gov (United States)

    Rahmani, Fouad Lazhar

    2010-11-01

    The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on the models suggested in the field of AIDS mathematical modelling, as reported by Isham [6].

  16. Lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    A lumped-parameter model represents the frequency-dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. The following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)

  17. SR 97. Alternative models project. Stochastic continuum modelling of Aberg

    International Nuclear Information System (INIS)

    Widen, H.; Walker, D.

    1999-08-01

    As part of studies into the siting of a deep repository for nuclear waste, Swedish Nuclear Fuel and Waste Management Company (SKB) has commissioned the Alternative Models Project (AMP). The AMP is a comparison of three alternative modelling approaches to bedrock performance assessment for a single hypothetical repository, arbitrarily named Aberg. The Aberg repository will adopt input parameters from the Aespoe Hard Rock Laboratory in southern Sweden. The models are restricted to an explicit domain, boundary conditions and canister location to facilitate the comparison. The boundary conditions are based on the regional groundwater model provided in digital format. This study is the application of HYDRASTAR, a stochastic continuum groundwater flow and transport-modelling program. The study uses 34 realisations of 945 canister locations in the hypothetical repository to evaluate the uncertainty of the advective travel time, canister flux (Darcy velocity at a canister) and F-ratio. Several comparisons of variability are constructed between individual canister locations and individual realisations. For the ensemble of all realisations with all canister locations, the study found a median travel time of 27 years, a median canister flux of 7.1 × 10⁻⁴ m/yr and a median F-ratio of 3.3 × 10⁵ yr/m. The overall pattern of regional flow is preserved in the site-scale model, as is reflected in flow paths and exit locations. The site-scale model slightly over-predicts the boundary fluxes from the single realisation of the regional model. The explicitly prescribed domain was seen to be slightly restrictive, with 6% of the stream tubes failing to exit the upper surface of the model. Sensitivity analysis and calibration are suggested as possible extensions of the modelling study

  18. Equivalent model and power flow model for electric railway traction network

    Science.gov (United States)

    Wang, Feng

    2018-05-01

    An equivalent model of the Cable Traction Network (CTN) considering the distributed capacitance effect of the cable system is proposed. The model comprises two parts: the 110 kV side and the 27.5 kV side. The 110 kV side equivalent model can be used to calculate the power supply capacity of the CTN. The 27.5 kV side equivalent model can be used to solve for the voltage of the catenary. Based on the equivalent simplified model of the CTN, the power flow model of the CTN, which involves the reactive power compensation coefficient and the interaction of voltage and current, is derived.

  19. Linking Simple Economic Theory Models and the Cointegrated Vector AutoRegressive Model

    DEFF Research Database (Denmark)

    Møller, Niels Framroze

    This paper attempts to clarify the connection between simple economic theory models and the approach of the Cointegrated Vector-Auto-Regressive model (CVAR). By considering (stylized) examples of simple static equilibrium models, it is illustrated in detail how the theoretical model and its structure can be translated into the CVAR framework. In addition, it is demonstrated how other controversial hypotheses, such as Rational Expectations, can be formulated directly as restrictions on the CVAR parameters. A simple example of a "Neoclassical synthetic" AS-AD model is also formulated. Finally, the partial-general equilibrium distinction is related to the CVAR as well. Further fundamental extensions and advances to more sophisticated theory models, such as those related to dynamics and expectations (in the structural relations), are left for future papers.

  20. A composite computational model of liver glucose homeostasis. I. Building the composite model.

    Science.gov (United States)

    Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A

    2012-04-07

    A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.

  1. Animation Augmented Reality Book Model (AAR Book Model) to Enhance Teamwork

    Science.gov (United States)

    Chujitarom, Wannaporn; Piriyasurawong, Pallop

    2017-01-01

    This study aims to synthesize an Animation Augmented Reality Book Model (AAR Book Model) to enhance teamwork and to assess the AAR Book Model to enhance teamwork. Samples are five specialists that consist of one animation specialist, two communication and information technology specialists, and two teaching model design specialists, selected by…

  2. Object Oriented Modeling : A method for combining model and software development

    NARCIS (Netherlands)

    Van Lelyveld, W.

    2010-01-01

    When requirements for a new model cannot be met by available modeling software, new software can be developed for a specific model. Methods for the development of both model and software exist, but a method for combined development has not been found. A compatible way of thinking is required to

  3. Identification of walking human model using agent-based modelling

    Science.gov (United States)

    Shahabpoor, Erfan; Pavic, Aleksandar; Racic, Vitomir

    2018-03-01

    The interaction of walking people with large vibrating structures, such as footbridges and floors, in the vertical direction is an important yet challenging phenomenon to describe mathematically. Several different models have been proposed in the literature to simulate the interaction of stationary people with vibrating structures. However, research on moving (walking) human models, explicitly identified for vibration serviceability assessment of civil structures, is still sparse. In this study, the results of a comprehensive set of FRF-based modal tests were used, in which over a hundred test subjects walked in different group sizes and walking patterns on a test structure. An agent-based model was used to simulate discrete traffic-structure interactions. The modal parameters of the occupied structure found in the tests were used to identify the parameters of the walking individual's single-degree-of-freedom (SDOF) mass-spring-damper model using a 'reverse engineering' methodology. The analysis of the results suggested that a normal distribution with an average of μ = 2.85 Hz and a standard deviation of σ = 0.34 Hz can describe the human SDOF model natural frequency. Similarly, a normal distribution with μ = 0.295 and σ = 0.047 can describe the human model damping ratio. Compared to previous studies, the agent-based modelling methodology proposed in this paper offers significant flexibility in simulating multi-pedestrian walking traffic and external forces, and in simulating different mechanisms of human-structure and human-environment interaction at the same time.
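    The reported distributions make the identified model easy to instantiate. Below is a short sketch sampling SDOF parameters from the abstract's distributions; the modal mass is an assumed placeholder, since no mass value is quoted here:

```python
"""Sample SDOF walking-human parameters and convert them to stiffness/damping."""
import numpy as np

rng = np.random.default_rng(42)
n = 1000
f_n = rng.normal(2.85, 0.34, n)     # natural frequency [Hz], distribution from the paper
zeta = rng.normal(0.295, 0.047, n)  # damping ratio [-], distribution from the paper
m = 70.0                            # assumed modal mass per pedestrian [kg]

k = m * (2.0 * np.pi * f_n) ** 2    # stiffness: k = m * omega_n^2
c = 2.0 * zeta * np.sqrt(k * m)     # damping:   c = 2 * zeta * sqrt(k * m)
print(f"stiffness: {k.mean():.0f} +/- {k.std():.0f} N/m")
print(f"damping:   {c.mean():.0f} +/- {c.std():.0f} N*s/m")
```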

  4. Predictive Capability Maturity Model for computational modeling and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Oberkampf, William Louis; Trucano, Timothy Guy; Pilch, Martin M.

    2007-10-01

    The Predictive Capability Maturity Model (PCMM) is a new model that can be used to assess the level of maturity of computational modeling and simulation (M&S) efforts. The development of the model is based on both the authors' experience and their analysis of similar investigations in the past. The perspective taken in this report is one of judging the usefulness of a predictive capability that relies on the numerical solution to partial differential equations to better inform and improve decision making. The review of past investigations, such as the Software Engineering Institute's Capability Maturity Model Integration and the National Aeronautics and Space Administration and Department of Defense Technology Readiness Levels, indicates that a more restricted, more interpretable method is needed to assess the maturity of an M&S effort. The PCMM addresses six contributing elements to M&S: (1) representation and geometric fidelity, (2) physics and material model fidelity, (3) code verification, (4) solution verification, (5) model validation, and (6) uncertainty quantification and sensitivity analysis. For each of these elements, attributes are identified that characterize four increasing levels of maturity. Importantly, the PCMM is a structured method for assessing the maturity of an M&S effort that is directed toward an engineering application of interest. The PCMM does not assess whether the M&S effort, the accuracy of the predictions, or the performance of the engineering system satisfies or does not satisfy specified application requirements.

  5. Flash flood modeling with the MARINE hydrological distributed model

    Science.gov (United States)

    Estupina-Borrell, V.; Dartus, D.; Ababou, R.

    2006-11-01

    Flash floods are characterized by their violence and the rapidity of their occurrence. Because these events are rare and unpredictable, but also fast and intense, their anticipation with sufficient lead time for warning and broadcasting is a primary subject of research. Because of the heterogeneities of the rain and of the behavior of the surface, spatially distributed hydrological models can lead to a better understanding of the processes and can thus contribute to better forecasting of flash floods. Our main goal here is to develop an operational and robust methodology for flash flood forecasting. This methodology should provide relevant information about flood evolution on short time scales, and should be applicable even in locations where direct observations are sparse (e.g., absence of historical and modern rainfall and streamflow records in small mountainous watersheds). The flash flood forecast is obtained by the physically based, space-time distributed hydrological model "MARINE" (Model of Anticipation of Runoff and INondations for Extreme events). This model is presented and tested in this paper for a real flash flood event. The model consists of two components: the first is a "basin" flood module which generates flood runoff in the upstream part of the watershed, and the second is the "stream network" module, which propagates the flood in the main river and its subsidiaries. The basin flash flood generation model is a rainfall-runoff model that can integrate remotely sensed data. Surface hydraulics equations are solved with enough simplifying hypotheses to allow real-time exploitation. The minimum data required by the model are: (i) the Digital Elevation Model, used to calculate the slopes that generate runoff, which can be obtained from satellite imagery (SPOT) or from the French Geographical Institute (IGN); (ii) the rainfall data from meteorological radar, observed or anticipated by the French Meteorological Service (M

  6. Model : making

    OpenAIRE

    Bottle, Neil

    2013-01-01

    The Model : making exhibition was curated by Brian Kennedy in collaboration with Allies & Morrison in September 2013. For the London Design Festival, the Model : making exhibition looked at the increased use of new technologies by both craft-makers and architectural model makers. In both practices traditional ways of making by hand are increasingly being combined with the latest technologies of digital imaging, laser cutting, CNC machining and 3D printing. This exhibition focussed on ...

  7. Electrical circuit models for performance modeling of Lithium-Sulfur batteries

    DEFF Research Database (Denmark)

    Knap, Vaclav; Stroe, Daniel Ioan; Teodorescu, Remus

    2015-01-01

    As Lithium-Sulfur (Li-S) batteries are an emerging technology for various applications, there is a need for a Li-S battery performance model; however, developing such models represents a challenging task due to the batteries' complex ongoing chemical reactions. Therefore, a literature review was performed to summarize the electrical circuit models (ECMs) used for modeling the performance behavior of Li-S batteries. The studied Li-S pouch cell was tested in the laboratory in order to parametrize four basic ECM topologies. These topologies were compared by analyzing their voltage estimation accuracy, obtained for different battery current profiles. Based on these results, the 3 R-C ECM was chosen and the Li-S battery cell discharging performance model with current-dependent parameters was derived and validated.
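    The selected topology is straightforward to simulate. A discrete-time sketch of a 3 R-C ECM under constant-current discharge follows; the OCV curve and all parameter values are invented placeholders, not the identified (current-dependent) Li-S parameters:

```python
"""Terminal voltage of a 3 R-C equivalent circuit model during discharge."""
import numpy as np

dt, cap_as = 1.0, 3.4 * 3600.0      # time step [s], cell capacity [A*s]
r0 = 0.05                           # ohmic resistance [ohm]
rc = [(0.02, 100.0), (0.03, 1000.0), (0.04, 8000.0)]   # (R [ohm], tau [s]) per branch

def ocv(soc):                       # placeholder open-circuit-voltage curve
    return 2.1 + 0.3 * soc

soc, v_rc, i = 1.0, np.zeros(3), 1.7    # state of charge, branch voltages, current [A]
v = ocv(soc) - r0 * i
while v > 1.9 and soc > 0.0:            # run until cut-off voltage or empty cell
    for j, (r, tau) in enumerate(rc):   # first-order update of each R-C branch
        v_rc[j] += dt / tau * (r * i - v_rc[j])
    soc -= i * dt / cap_as
    v = ocv(soc) - r0 * i - v_rc.sum()  # terminal voltage
print(f"discharge ended at SoC = {soc:.2f}, V = {v:.2f} V")
```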

  8. Modelling and Internal Fuzzy Model Power Control of a Francis Water Turbine

    Directory of Open Access Journals (Sweden)

    Klemen Nagode

    2014-02-01

    Full Text Available This paper presents dynamic modelling of a Francis turbine with a surge tank and the control of a hydro power plant (HPP). Non-linear and linear models include technical parameters and show high similarity to measurement data. Turbine power control with an internal model control (IMC) is proposed, based on a turbine fuzzy model. Considering appropriate control responses over the entire range of turbine power, the model parameters of the process are determined from a fuzzy model and are further included in the internal model controller. The results are compared to a proportional-integral (PI) controller tuned with an integral absolute error (IAE) objective function, and show an improved response of the internal model control.

  9. Modelling of cardiovascular system: development of a hybrid (numerical-physical) model.

    Science.gov (United States)

    Ferrari, G; Kozarski, M; De Lazzari, C; Górczyńska, K; Mimmo, R; Guaragno, M; Tosti, G; Darowski, M

    2003-12-01

    Physical models of the circulation are used for research, training and for testing of implantable active and passive circulatory prosthetic and assistance devices. However, in comparison with numerical models, they are rigid and expensive. To overcome these limitations, we have developed a model of the circulation based on the merging of a lumped-parameter physical model into a numerical one (producing therefore a hybrid). The physical model is limited to the barest essentials and, in this application, developed to test the principle, it is a windkessel representing the systemic arterial tree. The lumped-parameter numerical model was developed in the LabVIEW environment and represents the pulmonary and systemic circulation (except the systemic arterial tree). Based on the equivalence between hydraulic and electrical circuits, this prototype was developed by connecting the numerical model to an electrical circuit (the physical model). This specific solution is valuable mainly for educational purposes, but it permits the development of software and the verification of preliminary results without using cumbersome hydraulic circuits. The interfaces between the numerical and electrical circuits are set up by a voltage-controlled current generator and a voltage-controlled voltage generator. The behavior of the model is analyzed based on the ventricular pressure-volume loops and on the time course of arterial and ventricular pressures and flow in different circulatory conditions. The model can represent hemodynamic relationships in different ventricular and circulatory conditions.

  10. On the Use of Structural Equation Models in Marketing Modeling

    NARCIS (Netherlands)

    Steenkamp, J.E.B.M.; Baumgartner, H.

    2000-01-01

    We reflect on the role of structural equation modeling (SEM) in marketing modeling and managerial decision making. We discuss some benefits provided by SEM and alert marketing modelers to several recent developments in SEM in three areas: measurement analysis, analysis of cross-sectional data, and

  11. Styles in business process modeling: an exploration and a model

    NARCIS (Netherlands)

    Pinggera, J.; Soffer, P.; Fahland, D.; Weidlich, M.; Zugal, S.; Weber, B.; Reijers, H.A.; Mendling, J.

    2015-01-01

    Business process models are an important means to design, analyze, implement, and control business processes. As with every type of conceptual model, a business process model has to meet certain syntactic, semantic, and pragmatic quality requirements to be of value. For many years, such quality

  12. Model Validation Status Review

    International Nuclear Information System (INIS)

    E.L. Hardin

    2001-01-01

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M and O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  13. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  14. Modeling for Battery Prognostics

    Science.gov (United States)

    Kulkarni, Chetan S.; Goebel, Kai; Khasin, Michael; Hogge, Edward; Quach, Patrick

    2017-01-01

    For any battery-powered vehicle (be it an unmanned aerial vehicle, a small passenger aircraft, or an asset in exoplanetary operations) to operate at maximum efficiency and reliability, it is critical to monitor battery health as well as performance and to predict end of discharge (EOD) and end of useful life (EOL). To fulfil these needs, it is important to capture the battery's inherent characteristics as well as operational knowledge in the form of models that can be used by monitoring, diagnostic, and prognostic algorithms. Several battery modeling methodologies have been developed in the last few years as the understanding of the underlying electrochemical mechanics has been advancing. The models can generally be classified as empirical models, electrochemical engineering models, multi-physics models, and molecular/atomistic models. Empirical models are based on fitting certain functions to past experimental data, without making use of any physicochemical principles; electrical circuit equivalent models are an example of such empirical models. Electrochemical engineering models are typically continuum models that include electrochemical kinetics and transport phenomena. Each model type has its advantages and disadvantages. The former has the advantage of being computationally efficient, but has limited accuracy and robustness due to the approximations used in developing the model and, as a result of such approximations, cannot represent aging well. The latter has the advantage of being very accurate, but is often computationally inefficient, having to solve complex sets of partial differential equations, and is thus not well suited for online prognostic applications. In addition, both multi-physics and atomistic models are computationally expensive and hence even less suited to online application. An electrochemistry-based model of Li-ion batteries has been developed that captures crucial electrochemical processes, captures the effects of aging, and is computationally efficient

  15. Synergy of modeling processes in the area of soft and hard modeling

    Directory of Open Access Journals (Sweden)

    Sika Robert

    2017-01-01

    Full Text Available The high complexity of production processes results in more frequent use of computer systems for their modeling and simulation. Process modeling helps to find the optimal solution, verify assumptions before implementation, and eliminate errors. In practice, modeling of production processes concerns two areas: hard modeling (based on the differential equations of mathematical physics) and soft modeling (based on existing data). In this paper the possibility of a synergistic connection of these two approaches is indicated: supporting hard modeling with the tools used in soft modeling. The aim is to significantly reduce the time needed to obtain final results with hard modeling. Tests were carried out in the Calibrate module of the NovaFlow&Solid (NF&S) simulation system in the frame of thermal analysis (ATAS-cup). The authors tested the forecasting of output values in the NF&S system (solidification time) on the basis of variable parameters of the thermal model (heat conduction, specific heat, density). The collected data were used as input to prepare a soft model using an MLP (Multi-Layer Perceptron) neural network regression model. The approach described above enables a reduction of the time needed for production process modeling with hard modeling and should encourage production companies to use it.
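    A sketch of the soft-model step: a scikit-learn MLP regression fitted on (thermal parameters -> solidification time) pairs. The analytic "simulator" below is a made-up stand-in for NovaFlow&Solid runs, since the ATAS-cup data are not reproduced here:

```python
"""MLP regression surrogate of a hard casting simulation (synthetic data)."""
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 400
# Inputs varied in the study: heat conduction, specific heat, density (assumed ranges).
X = rng.uniform([20.0, 400.0, 6800.0], [50.0, 700.0, 7400.0], size=(n, 3))
# Stand-in for the simulated solidification time [s] (assumed relationship + noise).
y = 1e5 / (X[:, 0] * X[:, 1]) * X[:, 2] / 7000.0 + rng.normal(0.0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
surrogate = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 32),
                                       max_iter=5000, random_state=0))
surrogate.fit(X_tr, y_tr)   # train the soft model on hard-model runs
print(f"R^2 on held-out runs: {surrogate.score(X_te, y_te):.3f}")
```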

  16. Building crop models within different crop modelling frameworks

    NARCIS (Netherlands)

    Adam, M.Y.O.; Corbeels, M.; Leffelaar, P.A.; Keulen, van H.; Wery, J.; Ewert, F.

    2012-01-01

    Modular frameworks for crop modelling have evolved through simultaneous progress in crop science and software development but differences among these frameworks exist which are not well understood, resulting in potential misuse for crop modelling. In this paper we review differences and similarities

  17. Model identification using stochastic differential equation grey-box models in diabetes.

    Science.gov (United States)

    Duun-Henriksen, Anne Katrine; Schmidt, Signe; Røge, Rikke Meldgaard; Møller, Jonas Bech; Nørgaard, Kirsten; Jørgensen, John Bagterp; Madsen, Henrik

    2013-03-01

    The acceptance of virtual preclinical testing of control algorithms is growing and thus also the need for robust and reliable models. Models based on ordinary differential equations (ODEs) can rarely be validated with standard statistical tools. Stochastic differential equations (SDEs) offer the possibility of building models that can be validated statistically and that are capable of predicting not only a realistic trajectory, but also the uncertainty of the prediction. In an SDE, the prediction error is split into two noise terms. This separation ensures that the errors are uncorrelated and provides the possibility to pinpoint model deficiencies. An identifiable model of the glucoregulatory system in a type 1 diabetes mellitus (T1DM) patient is used as the basis for development of a stochastic-differential-equation-based grey-box model (SDE-GB). The parameters are estimated on clinical data from four T1DM patients. The optimal SDE-GB is determined from likelihood-ratio tests. Finally, parameter tracking is used to track the variation in the "time to peak of meal response" parameter. We found that the transformation of the ODE model into an SDE-GB resulted in a significant improvement in the prediction and uncorrelated errors. Tracking of the "peak time of meal absorption" parameter showed that the absorption rate varied according to meal type. This study shows the potential of using SDE-GBs in diabetes modeling. Improved model predictions were obtained due to the separation of the prediction error. SDE-GBs offer a solid framework for using statistical tools for model validation and model development. © 2013 Diabetes Technology Society.
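    For reference, a generic SDE grey-box model can be written in the following standard form (generic symbols, not the specific glucoregulatory equations of the paper); the split between the diffusion term and the measurement noise is what separates the prediction error into the two terms mentioned above:

```latex
% System equation: the diffusion term absorbs model deficiencies.
dx_t = f(x_t, u_t, t, \theta)\,dt + \sigma(u_t, t, \theta)\,d\omega_t
% Observation equation: e_k carries the measurement noise.
y_k = h(x_k, u_k, t_k, \theta) + e_k, \qquad e_k \sim N\big(0,\, S(u_k, t_k, \theta)\big)
```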

  18. Modeling pedestrian shopping behavior using principles of bounded rationality: model comparison and validation

    Science.gov (United States)

    Zhu, Wei; Timmermans, Harry

    2011-06-01

    Models of geographical choice behavior have been dominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments, in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (the conjunctive, disjunctive and lexicographic rules) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, in the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models perform best for all the decisions that are modeled. Validation tests are carried out through multi-agent simulation by comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of the heuristic models are slightly better than those of the multinomial logit models.
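    A compact sketch of the three classical heuristic rules named above, before the paper's threshold-heterogeneity extension; attribute names, thresholds and scores are invented for illustration:

```python
"""Conjunctive, disjunctive and lexicographic screening of shopping options."""
ATTRS = ("proximity", "assortment", "price_level")

def conjunctive(option, thresholds):
    # Accept only if EVERY attribute clears its threshold.
    return all(option[a] >= thresholds[a] for a in ATTRS)

def disjunctive(option, thresholds):
    # Accept if ANY attribute clears its threshold.
    return any(option[a] >= thresholds[a] for a in ATTRS)

def lexicographic(options, order=("assortment", "price_level", "proximity")):
    # Compare on the most important attribute first; ties fall through to the next.
    return max(options, key=lambda o: tuple(o[a] for a in order))

shop_a = {"proximity": 0.9, "assortment": 0.4, "price_level": 0.7}
shop_b = {"proximity": 0.5, "assortment": 0.8, "price_level": 0.6}
th = {"proximity": 0.6, "assortment": 0.5, "price_level": 0.5}
print(conjunctive(shop_a, th), disjunctive(shop_a, th))   # False True
print(lexicographic([shop_a, shop_b]) is shop_b)          # True
```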

  19. Modelling and evaluation of surgical performance using hidden Markov models.

    Science.gov (United States)

    Megali, Giuseppe; Sinigaglia, Stefano; Tonet, Oliver; Dario, Paolo

    2006-10-01

    Minimally invasive surgery has become very widespread in the last ten years. Since surgeons experience difficulties in learning and mastering minimally invasive techniques, the development of training methods is of great importance. While the introduction of virtual reality-based simulators has introduced a new paradigm in surgical training, skill evaluation methods are far from being objective. This paper proposes a method for defining a model of surgical expertise and an objective metric to evaluate performance in laparoscopic surgery. Our approach is based on the processing of kinematic data describing movements of surgical instruments. We use hidden Markov model theory to define an expert model that describes expert surgical gesture. The model is trained on kinematic data related to exercises performed on a surgical simulator by experienced surgeons. Subsequently, we use this expert model as a reference model in the definition of an objective metric to evaluate performance of surgeons with different abilities. Preliminary results show that, using different topologies for the expert model, the method can be efficiently used both for the discrimination between experienced and novice surgeons, and for the quantitative assessment of surgical ability.
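    The evaluation idea (likelihood under an expert reference model) can be sketched with hmmlearn; the kinematic features and all numbers below are synthetic placeholders, not the simulator data of the paper:

```python
"""Score gesture sequences by their log-likelihood under an expert-trained HMM."""
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(3)
expert = rng.normal(0.0, 1.0, (600, 2))   # placeholder instrument kinematics, 2 features
lengths = [200, 200, 200]                 # three expert exercise recordings

ref = GaussianHMM(n_components=4, covariance_type="diag", n_iter=100, random_state=0)
ref.fit(expert, lengths)                  # the 'expert model' of surgical gesture

novice = rng.normal(0.0, 3.0, (200, 2))   # noisier, 'less expert' movements
for name, seq in [("expert-like", expert[:200]), ("novice-like", novice)]:
    # Per-sample log-likelihood under the expert model: higher = closer to expert.
    print(name, ref.score(seq) / len(seq))
```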

  20. Combination of Bayesian Network and Overlay Model in User Modeling

    Directory of Open Access Journals (Sweden)

    Loc Nguyen

    2009-12-01

    Full Text Available The core of an adaptive system is the user model, containing personal information such as knowledge, learning styles, goals…, which is requisite for the personalized learning process. There are many modeling approaches, for example stereotype, overlay, plan recognition…, but they do not provide a solid method for reasoning from the user model. This paper introduces a statistical method that combines Bayesian networks and overlay modeling so that it is able to infer the user's knowledge from evidence collected during the user's learning process.
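    A minimal numeric sketch of the combination: an overlay node stores P(user knows concept), and quiz answers update it by Bayes' rule through assumed slip/guess probabilities (all numbers are illustrative, not from the paper):

```python
"""Bayesian update of one overlay-model knowledge node from quiz evidence."""

def update_knowledge(p_know, correct, p_slip=0.1, p_guess=0.25):
    # Likelihood of the observed answer under 'knows' vs 'does not know'.
    if correct:
        like_know, like_not = 1.0 - p_slip, p_guess
    else:
        like_know, like_not = p_slip, 1.0 - p_guess
    num = like_know * p_know
    return num / (num + like_not * (1.0 - p_know))   # posterior P(knows | answer)

p = 0.5                                   # overlay prior: knowledge state unknown
for answer in (True, True, False, True):  # observed quiz evidence
    p = update_knowledge(p, answer)
    print(f"P(knows concept) = {p:.3f}")
```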

  1. Improved ensemble-mean forecast skills of ENSO events by a zero-mean stochastic model-error model of an intermediate coupled model

    Science.gov (United States)

    Zheng, F.; Zhu, J.

    2015-12-01

    To perform an ensemble-based ENSO probabilistic forecast, the crucial issue is to design a reliable ensemble prediction strategy that includes the major uncertainties of the forecast system. In this study, we developed a new general ensemble perturbation technique to improve the ensemble-mean predictive skill of forecasting ENSO using an intermediate coupled model (ICM). The model uncertainties are first estimated and analyzed from EnKF analysis results obtained by assimilating observed SST. Then, based on the pre-analyzed properties of the model errors, a zero-mean stochastic model-error model is developed, mainly to represent the model uncertainties induced by some important physical processes missing in the coupled model (i.e., stochastic atmospheric forcing/MJO, extra-tropical cooling and warming, the Indian Ocean Dipole mode, etc.). Each member of an ensemble forecast is perturbed by the stochastic model-error model at each step during the 12-month forecast process, and the stochastic perturbations are added to the modeled physical fields to mimic the presence of these high-frequency stochastic noises and model biases and their effect on the predictability of the coupled system. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr retrospective forecast experiments. The two forecast schemes are differentiated by whether they consider the model stochastic perturbations, with both initialized by the ensemble-mean analysis states from EnKF. The comparison results suggest that the stochastic model-error perturbations have significant and positive impacts on improving the ensemble-mean prediction skills during the entire 12-month forecast process. Because the nonlinear feature of the coupled model can induce the nonlinear growth of the added stochastic model errors with model integration, especially through the nonlinear heating mechanism with the vertical advection term of the model, the
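    The perturbation scheme itself is simple to illustrate. A toy sketch follows in which a placeholder scalar "model" stands in for the ICM and each ensemble member receives an additive zero-mean stochastic increment at every step; the magnitudes and the model are invented:

```python
"""Zero-mean stochastic model-error perturbations applied to an ensemble forecast."""
import numpy as np

rng = np.random.default_rng(0)
n_members, n_steps, dt = 20, 120, 0.05

def step(x):                   # placeholder deterministic model tendency
    return x + dt * (x - x ** 3)

x = np.full(n_members, 0.1) + rng.normal(0.0, 0.01, n_members)  # initial spread
for _ in range(n_steps):
    x = step(x)                                  # deterministic forecast step
    x += rng.normal(0.0, 0.02, n_members)        # zero-mean stochastic model error
print(f"ensemble mean = {x.mean():+.3f}, spread = {x.std():.3f}")
```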

  2. TENCompetence Domain Model

    NARCIS (Netherlands)

    2006-01-01

    This is version 1.1 of the TENCompetence Domain Model (version 1.0 released on 19-6-2006; version 1.1 on 9-11-2008). It contains several files: a) a pdf with the model description, b) three jpg files with class models (also in the pdf), c) a MagicDraw zip file with the model itself, d) a release

  3. Statistical geological discrete fracture network model. Forsmark modelling stage 2.2

    Energy Technology Data Exchange (ETDEWEB)

    Fox, Aaron; La Pointe, Paul [Golder Associates Inc (United States); Simeonov, Assen [Swedish Nuclear Fuel and Waste Management Co., Stockholm (Sweden); Hermanson, Jan; Oehman, Johan [Golder Associates AB, Stockholm (Sweden)

    2007-11-15

    The Swedish Nuclear Fuel and Waste Management Company (SKB) is performing site characterization at two different locations, Forsmark and Laxemar, in order to locate a site for a final geologic repository for spent nuclear fuel. The program is built upon the development of Site Descriptive Models (SDMs) at specific timed data freezes. Each SDM is formed from discipline-specific reports from across the scientific spectrum. This report describes the methods, analyses, and conclusions of the geological modeling team with respect to a geological and statistical model of fractures and minor deformation zones (henceforth referred to as the geological DFN), version 2.2, at the Forsmark site. The geological DFN builds upon the work of other geological modelers, including the deformation zone (DZ), rock domain (RD), and fracture domain (FD) models. The geological DFN is a statistical model for stochastically simulating rock fractures and minor deformation zones at a scale of less than 1,000 m (the lower cut-off of the DZ models). The geological DFN is valid within four specific fracture domains inside the local model region, encompassing the candidate volume at Forsmark: FFM01, FFM02, FFM03, and FFM06. The models are built using data from detailed surface outcrop maps and the cored borehole record at Forsmark. The conceptual model for the Forsmark 2.2 geological DFN revolves around the concept of orientation sets; for each fracture domain, other model parameters such as size and intensity are tied to the orientation sets. Two classes of orientation sets were described: Global sets, which are encountered everywhere in the model region, and Local sets, which represent highly localized stress environments. Orientation sets were described in terms of their general cardinal direction (NE, NW, etc.). Two alternatives are presented for fracture size modeling: - the tectonic continuum approach (TCM, TCMF) described by coupled size-intensity scaling following power law distributions

  4. Statistical geological discrete fracture network model. Forsmark modelling stage 2.2

    International Nuclear Information System (INIS)

    Fox, Aaron; La Pointe, Paul; Simeonov, Assen; Hermanson, Jan; Oehman, Johan

    2007-11-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) is performing site characterization at two different locations, Forsmark and Laxemar, in order to locate a site for a final geologic repository for spent nuclear fuel. The program is built upon the development of Site Descriptive Models (SDMs) at specific timed data freezes. Each SDM is formed from discipline-specific reports from across the scientific spectrum. This report describes the methods, analyses, and conclusions of the geological modeling team with respect to a geological and statistical model of fractures and minor deformation zones (henceforth referred to as the geological DFN), version 2.2, at the Forsmark site. The geological DFN builds upon the work of other geological modelers, including the deformation zone (DZ), rock domain (RD), and fracture domain (FD) models. The geological DFN is a statistical model for stochastically simulating rock fractures and minor deformation zones at a scale of less than 1,000 m (the lower cut-off of the DZ models). The geological DFN is valid within four specific fracture domains inside the local model region, encompassing the candidate volume at Forsmark: FFM01, FFM02, FFM03, and FFM06. The models are built using data from detailed surface outcrop maps and the cored borehole record at Forsmark. The conceptual model for the Forsmark 2.2 geological DFN revolves around the concept of orientation sets; for each fracture domain, other model parameters such as size and intensity are tied to the orientation sets. Two classes of orientation sets were described: Global sets, which are encountered everywhere in the model region, and Local sets, which represent highly localized stress environments. Orientation sets were described in terms of their general cardinal direction (NE, NW, etc.). Two alternatives are presented for fracture size modeling: - the tectonic continuum approach (TCM, TCMF) described by coupled size-intensity scaling following power law distributions

  5. Distinguishing Antimicrobial Models with Different Resistance Mechanisms via Population Pharmacodynamic Modeling.

    Directory of Open Access Journals (Sweden)

    Matthieu Jacobs

    2016-03-01

    Semi-mechanistic pharmacokinetic-pharmacodynamic (PK-PD) modeling is increasingly used for antimicrobial drug development and optimization of dosage regimens, but systematic simulation-estimation studies to distinguish between competing PD models are lacking. This study compared the ability of static and dynamic in vitro infection models to distinguish between models with different resistance mechanisms and to support accurate and precise parameter estimation. Monte Carlo simulations (MCS) were performed for models with one susceptible bacterial population without (M1) or with (M2) a resting stage, a one-population model with adaptive resistance (M5), models with pre-existing susceptible and resistant populations without (M3) or with (M4) inter-conversion, and a model with two pre-existing populations with adaptive resistance (M6). For each model, 200 datasets of the total bacterial population were simulated over 24 h using static antibiotic concentrations (256-fold concentration range) or over 48 h under dynamic conditions (dosing every 12 h; elimination half-life: 1 h). Twelve hundred random datasets (each containing 20 curves for static or four curves for dynamic conditions) were generated by bootstrapping. Each dataset was estimated by all six models via population PD modeling to compare bias and precision. For M1 and M3, most parameter estimates were unbiased (<10%) and showed good precision (imprecision <30%). However, parameters for adaptive resistance and inter-conversion for M2, M4, M5 and M6 showed large bias and imprecision under both static and dynamic conditions. For datasets that only contained viable counts of the total population, common statistical criteria and diagnostic plots did not support sound identification of the true resistance mechanism. Therefore, it seems advisable to quantify resistant bacteria and characterize their MICs and resistance mechanisms to support extended simulations and translate from in vitro experiments to animal infection models and
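
    For orientation, the simplest of the candidate models (M1: one susceptible population, no resistance mechanism) can be written as logistic growth minus an Emax-type kill term. The sketch below simulates it under static concentrations; the growth, capacity and potency parameters are hypothetical, not the study's estimates.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Sketch of a one-population PK-PD model (cf. M1): logistic bacterial
      # growth minus an Emax-type drug kill. All values are hypothetical.
      k_growth, n_max = 1.0, 1e9     # growth rate [1/h], capacity [CFU/mL]
      k_max, ec50 = 3.0, 1.0         # max kill rate [1/h], potency [mg/L]

      def dn_dt(t, n, conc):
          growth = k_growth * n[0] * (1.0 - n[0] / n_max)
          kill = k_max * conc / (conc + ec50) * n[0]
          return [growth - kill]

      # Static in vitro arm: constant concentrations spanning part of a
      # 256-fold range, each simulated over 24 h from a 1e6 CFU/mL inoculum
      for conc in [0.25, 1.0, 4.0, 16.0]:
          sol = solve_ivp(dn_dt, (0.0, 24.0), [1e6], args=(conc,))
          cfu = max(sol.y[0, -1], 1.0)
          print(f"C = {conc:5.2f} mg/L -> log10 CFU/mL at 24 h: {np.log10(cfu):.2f}")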

  6. Modeling crop water productivity using a coupled SWAT-MODSIM model

    Science.gov (United States)

    This study examines the water productivity of irrigated wheat and maize yields in Karkheh River Basin (KRB) in the semi-arid region of Iran using a coupled modeling approach consisting of the hydrological model (SWAT) and the river basin water allocation model (MODSIM). Dynamic irrigation requireme...

  7. Modeling rainfall-runoff relationship using multivariate GARCH model

    Science.gov (United States)

    Modarres, R.; Ouarda, T. B. M. J.

    2013-08-01

    The traditional hydrologic time series approaches are used for modeling, simulating and forecasting the conditional mean of hydrologic variables, but neglect their time-varying variance, or second-order moment. This paper introduces the multivariate Generalized Autoregressive Conditional Heteroscedasticity (MGARCH) modeling approach to show how the variance-covariance relationship between hydrologic variables varies in time. These approaches are also useful for estimating the dynamic conditional correlation between hydrologic variables. To illustrate the novelty and usefulness of MGARCH models in hydrology, two major types of MGARCH models, the bivariate diagonal VECH and the constant conditional correlation (CCC) models, are applied to show the variance-covariance structure and dynamic correlation in a rainfall-runoff process. The bivariate diagonal VECH-GARCH(1,1) and CCC-GARCH(1,1) models indicated both short-run and long-run persistency in the conditional variance-covariance matrix of the rainfall-runoff process. The conditional variance of rainfall appears to have a stronger persistency, especially long-run persistency, than the conditional variance of streamflow, which shows a short-lived, drastically increasing pattern and a stronger short-run persistency. The conditional covariance and conditional correlation coefficients have different features for each bivariate rainfall-runoff process, with different degrees of stationarity and dynamic nonlinearity. The spatial and temporal pattern of variance-covariance features may reflect the signature of different physical and hydrological variables, such as drainage area, topography, soil moisture and groundwater fluctuations, on the strength, stationarity and nonlinearity of the conditional variance-covariance for a rainfall-runoff process.
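
    For reference, the CCC specification named in the record above follows the standard CCC-GARCH(1,1) form (Bollerslev, 1990); a minimal statement in LaTeX, with rainfall and streamflow as the two components:

      % CCC-GARCH(1,1) for the bivariate rainfall-runoff residual vector
      \varepsilon_t = H_t^{1/2} z_t, \qquad z_t \sim \mathrm{iid}(0, I_2),
      h_{ii,t} = \omega_i + \alpha_i \varepsilon_{i,t-1}^2 + \beta_i h_{ii,t-1},
          \qquad i \in \{\text{rain}, \text{flow}\},
      H_t = D_t R D_t, \qquad
      D_t = \operatorname{diag}\!\big(\sqrt{h_{11,t}},\, \sqrt{h_{22,t}}\big),
      % with R the constant conditional correlation matrix.

    Short-run persistency is carried by the alpha_i terms and long-run persistency by the sums alpha_i + beta_i, which are the usual measures behind persistency statements of this kind.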

  8. Electricity demand loads modeling using AutoRegressive Moving Average (ARMA) models

    Energy Technology Data Exchange (ETDEWEB)

    Pappas, S.S. [Department of Information and Communication Systems Engineering, University of the Aegean, Karlovassi, 83 200 Samos (Greece); Ekonomou, L.; Chatzarakis, G.E. [Department of Electrical Engineering Educators, ASPETE - School of Pedagogical and Technological Education, N. Heraklion, 141 21 Athens (Greece); Karamousantas, D.C. [Technological Educational Institute of Kalamata, Antikalamos, 24100 Kalamata (Greece); Katsikas, S.K. [Department of Technology Education and Digital Systems, University of Piraeus, 150 Androutsou Srt., 18 532 Piraeus (Greece); Liatsis, P. [Division of Electrical Electronic and Information Engineering, School of Engineering and Mathematical Sciences, Information and Biomedical Engineering Centre, City University, Northampton Square, London EC1V 0HB (United Kingdom)

    2008-09-15

    This study addresses the problem of modeling the electricity demand loads in Greece. The actual load data provided are deseasonalized, and an AutoRegressive Moving Average (ARMA) model is fitted to the data off-line using the Akaike Corrected Information Criterion (AICC). The developed model fits the data in a successful manner. Difficulties occur when the provided data include noise or errors, and also when on-line/adaptive modeling is required. In both cases, and under the assumption that the provided data can be represented by an ARMA model, simultaneous order and parameter estimation of ARMA models under the presence of noise is performed. The produced results indicate that the proposed method, which is based on multi-model partitioning theory, successfully tackles the studied problem. For validation purposes the produced results are compared with three other established order selection criteria, namely AICC, Akaike's Information Criterion (AIC) and Schwarz's Bayesian Information Criterion (BIC). The developed model could be useful in studies that concern electricity consumption and electricity price forecasts. (author)
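
    A minimal sketch of off-line ARMA order selection by AICC, here in Python with statsmodels on simulated data (an ARMA(1,1) series stands in for the deseasonalized Greek loads; the order grid and the AICC correction are standard, but the setup is illustrative only):

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA
      from statsmodels.tsa.arima_process import arma_generate_sample

      # Simulated ARMA(1,1) series standing in for the deseasonalized loads
      np.random.seed(0)
      load = arma_generate_sample(ar=[1, -0.6], ma=[1, 0.4], nsample=500)

      best = None
      for p in range(3):
          for q in range(3):
              if p == q == 0:
                  continue
              res = ARIMA(load, order=(p, 0, q)).fit()
              n, k = len(load), p + q + 2   # AR+MA terms, constant, variance
              # Small-sample correction of AIC (AICC)
              aicc = res.aic + 2 * k * (k + 1) / (n - k - 1)
              if best is None or aicc < best[0]:
                  best = (aicc, (p, q))
      print("AICC-selected ARMA order (p, q):", best[1])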

  9. Self-organising mixture autoregressive model for non-stationary time series modelling.

    Science.gov (United States)

    Ni, He; Yin, Hujun

    2008-12-01

    Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks a time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a global non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used for a more flexible structure of the mixture model. Furthermore, a new similarity measure has been introduced in the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial, benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. numbers of sunspots and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.
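
    The following sketch is not the SOMAR network itself, but a simplified illustration of its segment-then-fit idea under assumed choices: fixed-length windows, k-means in place of the self-organising/neural-gas layer, and ordinary AR fits per cluster.

      import numpy as np
      from sklearn.cluster import KMeans
      from statsmodels.tsa.ar_model import AutoReg

      # Toy non-stationary series: noisy sinusoid
      rng = np.random.default_rng(1)
      x = np.sin(np.linspace(0, 40, 2000)) + 0.3 * rng.normal(size=2000)

      # Slice into fixed windows and cluster them (a stand-in for the
      # self-organising layer of the actual SOMAR network)
      win = 50
      segments = np.array([x[i:i + win] for i in range(0, len(x) - win, win)])
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(segments)

      # Fit one local AR(2) model per cluster of segments
      local_models = {
          c: AutoReg(np.concatenate(segments[labels == c]), lags=2).fit()
          for c in np.unique(labels)
      }
      for c, model in local_models.items():
          print(f"cluster {c}: AR(2) params = {np.round(model.params, 3)}")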

  10. Brookhaven Regional Energy Facility Siting Model (REFS): model development and application

    Energy Technology Data Exchange (ETDEWEB)

    Meier, P.; Hobbs, B.; Ketcham, G.; McCoy, M.; Stern, R.

    1979-06-01

    A siting methodology developed specifically to bridge the gap between regional energy-system scenarios and environmental transport models is documented. Development of the model is described in Chapter 1. Chapter 2 describes the basic structure of such a model. Additional chapters on model development cover: generation, transmission, demand disaggregation, the interface to other models, computational aspects, the coal sector, water resource considerations, and air quality considerations. These subjects comprise Part I. Part II, Model Applications, covers: analysis of water resource constraints, water resource issues in the New York Power Pool, water resource issues in the New England Power Pool, water resource issues in the Pennsylvania-Jersey-Maryland Power Pool, and a summary of the water resource constraint analysis. (MCW)

  11. Modeling abundance using N-mixture models: the importance of considering ecological mechanisms.

    Science.gov (United States)

    Joseph, Liana N; Elkin, Ché; Martin, Tara G; Possingham, Hugh P

    2009-04-01

    Predicting abundance across a species' distribution is useful for studies of ecology and biodiversity management. Modeling of survey data in relation to environmental variables can be a powerful method for extrapolating abundances across a species' distribution and, consequently, for calculating total abundances and ultimately trends. Research in this area has demonstrated that models of abundance are often unstable and produce spurious estimates, and until recently our ability to remove detection error limited the development of accurate models. The N-mixture model accounts for detection and abundance simultaneously and has been a significant advance in abundance modeling. Case studies that have tested these new models have demonstrated success for some species, but doubt remains over the appropriateness of standard N-mixture models for many species. Here we develop the N-mixture model to accommodate zero-inflated data, a common occurrence in ecology, by employing zero-inflated count models. To our knowledge, this is the first application of this method to modeling count data. We use four variants of the N-mixture model (Poisson, zero-inflated Poisson, negative binomial, and zero-inflated negative binomial) to model the abundance, occupancy (zero-inflated models only) and detection probability of six birds in South Australia. We assess models by their statistical fit and the ecological realism of the parameter estimates. Specifically, we assess statistical fit with AIC and assess ecological realism by comparing the parameter estimates with expected values derived from the literature, ecological theory, and expert opinion. We demonstrate that, despite being frequently ranked the "best model" according to AIC, the negative binomial variants of the N-mixture model often produce ecologically unrealistic parameter estimates. The zero-inflated Poisson variant is preferable to the negative binomial variants of the N-mixture model, as it models an ecological mechanism rather than a
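
    A sketch of the core computation in a zero-inflated Poisson N-mixture model may help: the site likelihood marginalizes the latent abundance N over a truncated support, with a ZIP prior on N and binomial detection. Parameter values and the truncation bound K below are hypothetical.

      import numpy as np
      from scipy.stats import binom, poisson

      # Marginal log-likelihood of one site in a ZIP N-mixture model:
      # counts y[j] over repeat visits are Binomial(N, p); latent abundance
      # N is zero-inflated Poisson(psi, lam). K truncates the sum over N.
      def zip_nmixture_site_loglik(y, lam, p, psi, K=200):
          y = np.asarray(y)
          n_vals = np.arange(y.max(), K + 1)
          # ZIP prior: structural zero with prob psi, else Poisson(lam)
          prior = (1.0 - psi) * poisson.pmf(n_vals, lam)
          prior[n_vals == 0] += psi
          # Detection: independent binomial counts given latent N
          detect = np.prod(binom.pmf(y[:, None], n_vals[None, :], p), axis=0)
          return np.log(np.sum(prior * detect))

      # Hypothetical site with three visits
      print(zip_nmixture_site_loglik(y=[3, 5, 2], lam=8.0, p=0.4, psi=0.3))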

  12. Efficient polarimetric BRDF model.

    Science.gov (United States)

    Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D

    2015-11-30

    The present manuscript describes a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is also presented. The pBRDF functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This considerably simplifies the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; as in, e.g., the facet model, depolarization is not included. The model is very general and can inherently represent extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, its predictive power is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and to metallic bead-blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.
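
    For reference, the generalized Gaussian family invoked by the record above has the standard density below (scale alpha, shape beta); how the model maps this family onto surface structure is specified in the cited manuscript, not here:

      % Generalized Gaussian density; \beta = 2 recovers the Gaussian,
      % \beta = 1 the Laplacian.
      f(x \mid \alpha, \beta) = \frac{\beta}{2\alpha\,\Gamma(1/\beta)}
          \exp\!\left[ -\left( \frac{|x|}{\alpha} \right)^{\beta} \right]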

  13. Multilevel modeling using R

    CERN Document Server

    Finch, W Holmes; Kelley, Ken

    2014-01-01

    A powerful tool for analyzing nested designs in a variety of fields, multilevel/hierarchical modeling allows researchers to account for data collected at multiple levels. Multilevel Modeling Using R provides you with a helpful guide to conducting multilevel data modeling using the R software environment.After reviewing standard linear models, the authors present the basics of multilevel models and explain how to fit these models using R. They then show how to employ multilevel modeling with longitudinal data and demonstrate the valuable graphical options in R. The book also describes models fo
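
    A minimal random-intercept example of the kind of model the book covers, shown here in Python with statsmodels rather than the book's R, on simulated data (the group count, effect sizes and variances are invented):

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Simulate 20 clusters of 30 observations with a random intercept
      rng = np.random.default_rng(2)
      groups = np.repeat(np.arange(20), 30)
      u = rng.normal(scale=1.5, size=20)[groups]   # cluster-level intercepts
      x = rng.normal(size=groups.size)
      y = 1.0 + 0.8 * x + u + rng.normal(size=groups.size)
      df = pd.DataFrame({"y": y, "x": x, "g": groups})

      # Random-intercept (multilevel) model: y ~ x with grouping by g
      fit = smf.mixedlm("y ~ x", data=df, groups=df["g"]).fit()
      print(fit.summary())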

  14. Dealing with dissatisfaction in mathematical modelling to integrate QFD and Kano’s model

    Science.gov (United States)

    Retno Sari Dewi, Dian; Debora, Joana; Edy Sianto, Martinus

    2017-12-01

    The purpose of the study is to implement the integration of Quality Function Deployment (QFD) and Kano’s model into a mathematical model. Voice-of-customer data for the QFD were collected using a questionnaire, and the questionnaire was developed based on Kano’s model. An operations research methodology was then applied to build the objective function and constraints of the mathematical model. The relationship between the voice of customer and the engineering characteristics was modelled using a linear regression model. The output of the mathematical model is the detailed engineering characteristics. The objective function of this model is to maximize satisfaction and minimize dissatisfaction as well. The result of this model is 62%. The major contribution of this research is to implement the existing mathematical model integrating QFD and Kano’s model in the case study of a shoe cabinet.
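
    The following is a hypothetical sketch of the optimization idea described above, not the paper's actual model: satisfaction and dissatisfaction are taken as linear in the engineering-characteristic levels (the regression coefficients are invented), and the net objective is maximized under an assumed cost budget.

      import numpy as np
      from scipy.optimize import linprog

      # Invented regression coefficients linking engineering-characteristic
      # levels to satisfaction (beta_s) and dissatisfaction (beta_d)
      beta_s = np.array([0.5, 0.3, 0.2])
      beta_d = np.array([0.1, 0.4, 0.05])
      cost = np.array([2.0, 1.0, 3.0])      # cost per unit of each level

      # Maximize S - D = (beta_s - beta_d) @ ec under a cost budget;
      # linprog minimizes, so the objective is negated.
      res = linprog(
          c=-(beta_s - beta_d),
          A_ub=[cost], b_ub=[10.0],         # assumed total cost budget
          bounds=[(0, 5)] * 3,              # assumed feasible level range
      )
      print("optimal EC levels:", res.x, "net satisfaction:", -res.fun)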

  15. Models of Reality.

    Energy Technology Data Exchange (ETDEWEB)

    Brown-VanHoozer, S. A.

    1999-06-02

    Conscious awareness of our environment is based on a feedback loop comprised of sensory input transmitted to the central nervous system, leading to construction of our ''model of the world'' (Lewis et al., 1982). We then assimilate the neurological model at the unconscious level into information we can later consciously consider useful in identifying belief systems and behaviors for designing diverse systems. Thus, we can avoid potential problems based on our open-to-error perceived reality of the world. By understanding how our model of reality is organized, we allow ourselves to transcend content and develop insight into how effective choices and belief systems are generated through sensory-derived processes. These are the processes which give the designer the ability to meta-model (build a model of a model) the user; consequently, matching the mental model of the user with that of the designer and, in the process, forming rapport between the two participants. The information shared between the participants is neither assumed nor generalized; it is closer to unequivocal, thus minimizing error through a sharing of each other's model of reality. How to identify individual mental mechanisms or processes, how to organize the individual strategies of these mechanisms into useful patterns, and how to formulate these into models for success and knowledge-based outcomes is the subject of the discussion that follows.

  16. Thermocouple modeling

    International Nuclear Information System (INIS)

    Fryer, M.O.

    1984-01-01

    The temperature measurements provided by thermocouples (TCs) are important for the operation of pressurized water reactors. During severe inadequate-core-cooling incidents, extreme temperatures may cause the type K thermocouples used for core exit temperature monitoring to perform poorly. A model of TC electrical behavior has been developed to determine how TCs react under extreme temperatures. The model predicts the voltage output of the TC and its impedance. A series of experiments was conducted on a length of type K thermocouple to validate the model. Impedance was measured at several temperatures between 22°C and 1100°C and at frequencies between dc and 10 MHz. The model was able to accurately predict impedance over this wide range of conditions. The average percentage difference between the experimental data and the model was less than 6.5%. Experimental accuracy was ±2.5%. There is a striking difference between impedance-versus-frequency plots at 300°C and at higher temperatures. This may be useful in validating TC data during accident conditions.

  17. Hierarchical Bass model

    International Nuclear Information System (INIS)

    Tashiro, Tohru

    2014-01-01

    We propose a new model of the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters mean people (not) possessing the product. This effect is lacking in the Bass model. As an application, we utilize the model to fit the iPod sales data, and a better agreement is obtained than with the Bass model

  18. Hierarchical Bass model

    Science.gov (United States)

    Tashiro, Tohru

    2014-03-01

    We propose a new model of the diffusion of a product which includes a memory of how many adopters or advertisements a non-adopter has met, where (non-)adopters mean people (not) possessing the product. This effect is lacking in the Bass model. As an application, we utilize the model to fit the iPod sales data, and a better agreement is obtained than with the Bass model.
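
    For context, the baseline that both records extend is the standard Bass model, in which the adoption hazard grows with the installed base but carries no memory of individual encounters:

      % Standard Bass model: adoption hazard p + q F(t), with closed form
      \frac{f(t)}{1 - F(t)} = p + q\,F(t),
      \qquad
      F(t) = \frac{1 - e^{-(p+q)t}}{1 + (q/p)\,e^{-(p+q)t}}.

    Here F(t) is the cumulative fraction of adopters, p the innovation coefficient and q the imitation coefficient; the hierarchical model adds the memory of how many adopters or advertisements each non-adopter has met.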

  19. Impact of model structure and parameterization on Penman-Monteith type evaporation models

    KAUST Repository

    Ershadi, A.

    2015-04-12

    The impact of model structure and parameterization on the estimation of evaporation is investigated across a range of Penman-Monteith type models. To examine the role of model structure on flux retrievals, three different retrieval schemes are compared. The schemes include a traditional single-source Penman-Monteith model (Monteith, 1965), a two-layer model based on Shuttleworth and Wallace (1985) and a three-source model based on Mu et al. (2011). To assess the impact of parameterization choice on model performance, a number of commonly used formulations for aerodynamic and surface resistances were substituted into the different formulations. Model response to these changes was evaluated against data from twenty globally distributed FLUXNET towers, representing a cross-section of biomes that include grassland, cropland, shrubland, evergreen needleleaf forest and deciduous broadleaf forest. Scenarios based on 14 different combinations of model structure and parameterization were ranked based on their mean value of Nash-Sutcliffe Efficiency. Results illustrated considerable variability in model performance both within and between biome types. Indeed, no single model consistently outperformed any other when considered across all biomes. For instance, in grassland and shrubland sites, the single-source Penman-Monteith model performed the best. In croplands it was the three-source Mu model, while for evergreen needleleaf and deciduous broadleaf forests, the Shuttleworth-Wallace model rated highest. Interestingly, these top ranked scenarios all shared the simple lookup-table based surface resistance parameterization of Mu et al. (2011), while a more complex Jarvis multiplicative method for surface resistance produced lower ranked simulations. The highly ranked scenarios mostly employed a version of the Thom (1975) formulation for aerodynamic resistance that incorporated dynamic values of roughness parameters. This was true for all cases except over deciduous broadleaf
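
    For reference, the single-source combination equation underlying the compared schemes is the Penman-Monteith equation (Monteith, 1965):

      % Single-source Penman-Monteith combination equation
      \lambda E = \frac{\Delta (R_n - G) + \rho_a c_p (e_s - e_a)/r_a}
                       {\Delta + \gamma \left( 1 + r_s / r_a \right)}

    Here λE is the latent heat flux, Δ the slope of the saturation vapour pressure curve, R_n net radiation, G the ground heat flux, ρ_a and c_p the air density and specific heat, e_s − e_a the vapour pressure deficit, and γ the psychrometric constant; the parameterizations compared in the record differ mainly in how the surface and aerodynamic resistances r_s and r_a are specified.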

  20. Models and correlations of the DEBRIS Late-Phase Melt Progression Model

    International Nuclear Information System (INIS)

    Schmidt, R.C.; Gasser, R.D.

    1997-09-01

    The DEBRIS Late-Phase Melt Progression Model is an assembly of models, embodied in a computer code, which is designed to treat late-phase melt progression in the dry rubble (or debris) regions that can form as a consequence of a severe core uncovery accident in a commercial light water nuclear reactor. The approach is fully two-dimensional and incorporates a porous-medium modeling framework together with conservation and constitutive relationships to simulate the time-dependent evolution of such regions as various physical processes act upon the materials. The objective of the code is to model these processes accurately so that the late-phase melt progression that would occur in different hypothetical severe nuclear reactor accidents can be better understood and characterized. In this report the models and correlations incorporated and used within the current version of DEBRIS are described. These include the global conservation equations solved, heat transfer and fission heating models, melting and refreezing models (including material interactions), liquid and solid relocation models, gas flow and pressure field models, and the temperature- and composition-dependent material properties employed. The specific models described here have been used in the experiment design analysis of the Phebus FPT-4 debris-bed fission-product release experiment. An earlier version of the DEBRIS code was used to analyze the MP-1 and MP-2 late-phase melt progression experiments conducted at Sandia National Laboratories for the US Nuclear Regulatory Commission.