WorldWideScience

Sample records for feature scale modeling

  1. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang; Shen, ChaoHui

    2012-01-01

    We present a new method for extracting multi-scale salient features on meshes. It is based on robust estimation of curvature at multiple scales. The correspondence between salient features and the scale of interest can be established straightforwardly: detailed features appear at small scales, while features carrying more global shape information show up at large scales. We demonstrate that this multi-scale description of features accords with human perception and can further be used for applications such as feature classification and viewpoint selection. Experiments show that our method is a very helpful multi-scale analysis tool for studying 3D shapes. © 2012 Springer-Verlag.
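
    A minimal sketch of the multi-scale curvature sweep described above, assuming the trimesh library; its integral mean-curvature measure stands in for the paper's robust curvature estimator, and the mesh path, radii, and saliency threshold are illustrative assumptions rather than the authors' choices:

    ```python
    # Hypothetical sketch: per-vertex curvature at several neighborhood radii,
    # flagging vertices whose curvature deviates strongly at each scale.
    import numpy as np
    import trimesh
    from trimesh.curvature import discrete_mean_curvature_measure

    mesh = trimesh.load("model.ply")   # assumed input mesh
    scales = [0.01, 0.05, 0.2]         # neighborhood radii in model units (assumed)

    for r in scales:
        curv = discrete_mean_curvature_measure(mesh, mesh.vertices, radius=r)
        # Robust z-score: deviation from the median in units of the MAD.
        med = np.median(curv)
        mad = np.median(np.abs(curv - med)) + 1e-12
        salient = np.flatnonzero(np.abs(curv - med) / mad > 3.0)
        print(f"radius={r}: {salient.size} salient vertices")
    ```

    Small radii pick out fine detail, while large radii respond to globally significant shape features, mirroring the scale/saliency correspondence the abstract describes.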

  2. A national-scale model of linear features improves predictions of farmland biodiversity.

    Science.gov (United States)

    Sullivan, Martin J P; Pearce-Higgins, James W; Newson, Stuart E; Scholefield, Paul; Brereton, Tom; Oliver, Tom H

    2017-12-01

    Modelling species distribution and abundance is important for many conservation applications, but it is typically performed using relatively coarse-scale environmental variables such as the area of broad land-cover types. Fine-scale environmental data capturing the most biologically relevant variables have the potential to improve these models. For example, field studies have demonstrated the importance of linear features, such as hedgerows, for multiple taxa, but the absence of large-scale datasets of their extent prevents their inclusion in large-scale modelling studies. We assessed whether a novel spatial dataset mapping linear and woody-linear features across the UK improves the performance of abundance models of 18 bird and 24 butterfly species across 3723 and 1547 UK monitoring sites, respectively. Although improvements in explanatory power were small, the inclusion of linear features data significantly improved model predictive performance for many species. For some species, the importance of linear features depended on landscape context, with greater importance in agricultural areas. Synthesis and applications. This study demonstrates that a national-scale model of the extent and distribution of linear features improves predictions of farmland biodiversity. The ability to model spatial variability in the role of linear features such as hedgerows will be important in targeting agri-environment schemes to maximally deliver biodiversity benefits. Although this study focuses on farmland, data on the extent of different linear features are likely to improve species distribution and abundance models in a wide range of systems and can also potentially be used to assess habitat connectivity.

  3. Efficient and robust model-to-image alignment using 3D scale-invariant features.

    Science.gov (United States)

    Toews, Matthew; Wells, William M

    2013-04-01

    This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Model-independent phenotyping of C. elegans locomotion using scale-invariant feature transform.

    Directory of Open Access Journals (Sweden)

    Yelena Koren

    To uncover the genetic basis of behavioral traits in the model organism C. elegans, a common strategy is to study locomotion defects in mutants. Despite efforts to introduce (semi-)automated phenotyping strategies, current methods overwhelmingly depend on worm-specific features that must be hand-crafted and as such are not generalizable for phenotyping motility in other animal models. Hence, there is an ongoing need for robust algorithms that can automatically analyze and classify motility phenotypes quantitatively. To this end, we have developed a fully-automated approach to characterize C. elegans phenotypes that does not require the definition of nematode-specific features. Rather, we make use of the popular computer vision Scale-Invariant Feature Transform (SIFT), from which we construct histograms of commonly-observed SIFT features to represent nematode motility. We first evaluated our method on a synthetic dataset simulating a range of nematode crawling gaits. Next, we evaluated our algorithm on two distinct datasets of crawling C. elegans with mutants affecting neuromuscular structure and function. Not only is our algorithm able to detect differences between strains, but the results also capture similarities in locomotory phenotypes that lead to clustering consistent with expectations based on genetic relationships. Our proposed approach generalizes directly and should be applicable to other animal models. Such applicability holds promise for computational ethology as more groups collect high-resolution image data of animal behavior.
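
    A hedged sketch of the bag-of-SIFT-features idea, assuming OpenCV and scikit-learn; the codebook size and the frame-loading placeholder train_clips are assumptions, not details from the paper:

    ```python
    # Represent each motility clip as a histogram over a codebook of SIFT
    # descriptors, giving a fixed-length, worm-agnostic feature vector.
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def sift_descriptors(frames):
        """Stack SIFT descriptors from a list of grayscale frames."""
        sift = cv2.SIFT_create()
        descs = []
        for f in frames:
            _, d = sift.detectAndCompute(f, None)
            if d is not None:
                descs.append(d)
        return np.vstack(descs)

    def motility_histogram(frames, codebook):
        """Normalized histogram of codebook assignments for one clip."""
        words = codebook.predict(sift_descriptors(frames).astype(np.float32))
        hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
        return hist / hist.sum()    # clips of any length become comparable

    # train_clips: placeholder list of clips, each a list of grayscale frames.
    all_descs = np.vstack([sift_descriptors(c) for c in train_clips])
    codebook = KMeans(n_clusters=200, n_init=4).fit(all_descs)
    features = np.array([motility_histogram(c, codebook) for c in train_clips])
    ```

    Pairwise distances between such histograms can then drive the strain clustering the abstract reports.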

  5. Scaling up spike-and-slab models for unsupervised feature learning.

    Science.gov (United States)

    Goodfellow, Ian J; Courville, Aaron; Bengio, Yoshua

    2013-08-01

    We describe the use of two spike-and-slab models for modeling real-valued data, with an emphasis on their applications to object recognition. The first model, which we call spike-and-slab sparse coding (S3C), is a preexisting model for which we introduce a faster approximate inference algorithm. We introduce a deep variant of S3C, which we call the partially directed deep Boltzmann machine (PD-DBM), and extend our S3C inference algorithm for use on this model. We describe learning procedures for each. We demonstrate that our inference procedure for S3C enables scaling the model to unprecedentedly large problem sizes, and that using S3C as a feature extractor results in very good object recognition performance, particularly when the number of labeled examples is low. We show that the PD-DBM generates better samples than its shallow counterpart, and that unlike DBMs or DBNs, the PD-DBM may be trained successfully without greedy layerwise training.
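
    A minimal numpy sketch of the spike-and-slab latent structure that S3C builds on: a binary "spike" gates a real-valued "slab", yielding sparse real-valued codes. All shapes and hyper-parameters here are assumptions for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_vis, n_hid = 64, 256
    W = rng.normal(scale=0.1, size=(n_vis, n_hid))   # dictionary (learned in S3C)

    def sample_visible(p_spike=0.05, slab_std=1.0, noise_std=0.1):
        h = rng.random(n_hid) < p_spike                # spike: which units fire
        s = rng.normal(scale=slab_std, size=n_hid)     # slab: their real values
        code = h * s                                   # sparse real-valued code
        v = W @ code + rng.normal(scale=noise_std, size=n_vis)
        return v, code

    v, code = sample_visible()
    print("active units:", int((code != 0).sum()), "of", n_hid)
    ```

    S3C's contribution is fast approximate inference over such codes; the PD-DBM described above is its deep variant.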

  6. A hybrid model for dissolved oxygen prediction in aquaculture based on multi-scale features

    Directory of Open Access Journals (Sweden)

    Chen Li

    2018-03-01

    To increase the prediction accuracy of dissolved oxygen (DO) in aquaculture, a hybrid model based on multi-scale features using ensemble empirical mode decomposition (EEMD) is proposed. Firstly, the original DO datasets are decomposed by EEMD into several components. Secondly, these components are used to reconstruct four terms: a high frequency term, an intermediate frequency term, a low frequency term and a trend term. Thirdly, because the high and intermediate frequency terms fluctuate violently, the least squares support vector machine (LSSVR) is used to predict these two terms. The fluctuation of the low frequency term is gentle and periodic, so it is modeled by a BP neural network with optimal mind evolutionary computation (MEC-BP). The trend term is predicted using a grey model (GM) because it is nearly linear. Finally, the prediction for the DO dataset is calculated as the sum of the forecasts of all terms. The experimental results demonstrate that our hybrid model outperforms the EEMD-ELM (extreme learning machine based on EEMD), EEMD-BP and MEC-BP models in terms of mean absolute error (MAE), mean absolute percentage error (MAPE), mean square error (MSE) and root mean square error (RMSE). Our hybrid model is thus shown to be an effective approach to predicting aquaculture DO.
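
    A hedged sketch of the decompose-predict-sum scheme, assuming the PyEMD package (pip install EMD-signal) for EEMD; scikit-learn's SVR, MLPRegressor and LinearRegression stand in for the paper's LSSVR, MEC-BP and grey model, and the IMF grouping rule is an assumption:

    ```python
    import numpy as np
    from PyEMD import EEMD
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR

    def lagged(x, p=6):
        """Autoregressive design matrix: predict x[t] from x[t-p:t]."""
        X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
        return X, x[p:]

    do = np.loadtxt("dissolved_oxygen.txt")     # placeholder DO time series
    imfs = EEMD().eemd(do)                      # rows: IMFs; last ~ residue/trend
    terms = {
        "high":  (imfs[:2].sum(axis=0), SVR()),            # violent fluctuations
        "mid":   (imfs[2:4].sum(axis=0), SVR()),
        "low":   (imfs[4:-1].sum(axis=0), MLPRegressor(max_iter=2000)),
        "trend": (imfs[-1], LinearRegression()),           # nearly linear
    }

    forecast = 0.0
    for name, (series, model) in terms.items():
        X, y = lagged(series)
        model.fit(X[:-1], y[:-1])               # hold out the final step
        forecast += model.predict(X[-1:])[0]    # one-step-ahead forecast
    print("one-step DO forecast:", forecast)
    ```

    Each term gets a model suited to its behavior, and the final prediction is simply the sum of the per-term forecasts, as in the abstract.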

  7. Discriminative phenomenological features of scale invariant models for electroweak symmetry breaking

    Directory of Open Access Journals (Sweden)

    Katsuya Hashino

    2016-01-01

    Classical scale invariance (CSI) may be one of the solutions to the hierarchy problem. Realistic models for electroweak symmetry breaking based on CSI require extended scalar sectors without mass terms, and the electroweak symmetry is broken dynamically at the quantum level by the Coleman–Weinberg mechanism. We discuss discriminative features of these models. First, using the experimental value of the mass of the discovered Higgs boson h(125), we obtain an upper bound on the mass of the lightest additional scalar boson (≃543 GeV), which does not depend on its isospin and hypercharge. Second, a discriminative prediction for the Higgs–photon–photon coupling is given as a function of the number of charged scalar bosons, by which we can narrow down possible models using current and future data for the di-photon decay of h(125). Finally, for the triple Higgs boson coupling a large deviation (∼+70%) from the SM prediction is universally predicted, independent of the masses, quantum numbers and even the number of additional scalars. These models based on CSI can be well tested at LHC Run II and at future lepton colliders.

  8. SITE-94. Discrete-feature modelling of the Aespoe site: 2. Development of the integrated site-scale model

    Energy Technology Data Exchange (ETDEWEB)

    Geier, J.E. [Golder Associates AB, Uppsala (Sweden)

    1996-12-01

    A 3-dimensional, discrete-feature hydrological model is developed. The model integrates structural and hydrologic data for the Aespoe site, on scales ranging from semi-regional fracture zones to individual fractures in the vicinity of the nuclear waste canisters. Hydrologic properties of the large-scale structures are initially estimated from cross-hole hydrologic test data, and automatically calibrated by numerical simulation of network flow and comparison with undisturbed heads and observed drawdown in selected cross-hole tests. The calibrated model is combined with a separately derived fracture network model to yield the integrated model. This model is partly validated by simulation of transient responses to a long-term pumping test and a convergent tracer test, based on the LPT2 experiment at Aespoe. The integrated model predicts that discharge from the SITE-94 repository is predominantly via fracture zones along the eastern shore of Aespoe. Similar discharge loci are produced by numerous model variants that explore uncertainty with regard to effective semi-regional boundary conditions, hydrologic properties of the site-scale structures, and alternative structural/hydrological interpretations. 32 refs.

  9. SITE-94. Discrete-feature modelling of the Aespoe site: 2. Development of the integrated site-scale model

    International Nuclear Information System (INIS)

    Geier, J.E.

    1996-12-01

    A 3-dimensional, discrete-feature hydrological model is developed. The model integrates structural and hydrologic data for the Aespoe site, on scales ranging from semi-regional fracture zones to individual fractures in the vicinity of the nuclear waste canisters. Hydrologic properties of the large-scale structures are initially estimated from cross-hole hydrologic test data, and automatically calibrated by numerical simulation of network flow and comparison with undisturbed heads and observed drawdown in selected cross-hole tests. The calibrated model is combined with a separately derived fracture network model to yield the integrated model. This model is partly validated by simulation of transient responses to a long-term pumping test and a convergent tracer test, based on the LPT2 experiment at Aespoe. The integrated model predicts that discharge from the SITE-94 repository is predominantly via fracture zones along the eastern shore of Aespoe. Similar discharge loci are produced by numerous model variants that explore uncertainty with regard to effective semi-regional boundary conditions, hydrologic properties of the site-scale structures, and alternative structural/hydrological interpretations. 32 refs

  10. Feature scale modeling for etching and deposition processes in semiconductor manufacturing

    International Nuclear Information System (INIS)

    Pyka, W.

    2000-04-01

    Concerning the modeling of ballistic-transport-determined low-pressure processes, the equations for the calculation of local etching and deposition rates have been revised. New extensions, such as the full relation between angular and radial target emission characteristics and the particle distributions resulting at different positions on the wafer, have been added, and results from reactor-scale simulations have been linked to the feature-scale profile evolution. Moreover, a fitting model has been implemented, which reduces the number of parameters for particle distributions, scattering mechanisms, and angle-dependent surface interactions. Concerning diffusion-determined high-pressure CVD processes, a continuum transport and reaction model has for the first time been implemented in three dimensions. It comprises a flexible interface for the formulation of the involved process chemistry and derives the local deposition rate from a finite element diffusion calculation carried out on the three-dimensional mesh of the gas domain above the feature. For each time-step of the deposition simulation the mesh is automatically generated as a counterpart to the surface of the three-dimensional structure evolving with time. The CVD model has also been coupled with equipment simulations. (author)
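
    A toy two-dimensional level-set sketch of the surface-evolution step at the core of such feature-scale simulators: the profile is the zero level set of phi and recedes with a local etch rate R. Real simulators obtain R from ballistic-transport or diffusion calculations; here R is an assumed binary mask under a mask opening, and the first-order Godunov scheme is a simplification:

    ```python
    import numpy as np

    nx, ny, h, dt, steps = 121, 121, 1.0, 0.4, 150
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    phi = yy - 90.0                                  # phi<0: solid, phi>0: gas
    R = np.where(np.abs(xx - 60) < 12, 1.0, 0.0)     # etch rate under the opening

    def dm(a, ax):
        """Backward difference, zero at the low boundary."""
        d = np.zeros_like(a)
        if ax == 0:
            d[1:, :] = (a[1:, :] - a[:-1, :]) / h
        else:
            d[:, 1:] = (a[:, 1:] - a[:, :-1]) / h
        return d

    def dp(a, ax):
        """Forward difference, zero at the high boundary."""
        d = np.zeros_like(a)
        if ax == 0:
            d[:-1, :] = (a[1:, :] - a[:-1, :]) / h
        else:
            d[:, :-1] = (a[:, 1:] - a[:, :-1]) / h
        return d

    for _ in range(steps):
        # Godunov upwind gradient for a front moving into the solid (speed -R):
        g2 = (np.minimum(dm(phi, 1), 0) ** 2 + np.maximum(dp(phi, 1), 0) ** 2 +
              np.minimum(dm(phi, 0), 0) ** 2 + np.maximum(dp(phi, 0), 0) ** 2)
        phi += dt * R * np.sqrt(g2)       # phi grows => solid surface recedes
    # The zero level set of phi now traces the etched trench profile.
    ```

    Deposition is the same advection with the sign of the speed flipped; the models described above additionally derive R from angular particle distributions or finite-element diffusion solutions rather than a fixed mask.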

  11. Finite element modeling of small-scale tapered wood-laminated composite poles with biomimicry features

    Science.gov (United States)

    Cheng Piao; Todd F. Shupe; R.C. Tang; Chung Y. Hse

    2008-01-01

    Tapered composite poles with biomimicry features as in bamboo are a new generation of wood laminated composite poles that may some day be considered as an alternative to solid wood poles that are widely used in the transmission and telecommunication fields. Five finite element models were developed with ANSYS to predict and assess the performance of five types of...

  12. Viscous flow features in scaled-up physical models of normal and pathological vocal phonation

    Energy Technology Data Exchange (ETDEWEB)

    Erath, Byron D., E-mail: berath@purdue.ed [School of Mechanical Engineering, Purdue University, 585 Purdue Mall, West Lafayette, IN 47907 (United States); Plesniak, Michael W., E-mail: plesniak@gwu.ed [Department of Mechanical and Aerospace Engineering, George Washington University, 801 22nd Street NW, Suite 739, Washington, DC 20052 (United States)

    2010-06-15

    Unilateral vocal fold paralysis results when the recurrent laryngeal nerve, which innervates the muscles of the vocal folds, becomes damaged. The loss of muscle and tension control renders the damaged vocal fold ineffectual: the mucosal wave disappears during phonation, and the vocal fold becomes largely immobile. The influence of unilateral vocal fold paralysis on viscous flow development within the glottis during phonation, which impacts speech quality, was investigated. Driven, scaled-up vocal fold models were employed to replicate both normal and pathological patterns of vocal fold motion. Spatial and temporal velocity fields were captured using particle image velocimetry and laser Doppler velocimetry. Flow parameters were scaled to match the physiological values associated with human speech. Loss of motion in one vocal fold resulted in a suppression of typical glottal flow fields, including decreased spatial variability in the location of the flow separation point throughout the phonatory cycle, as well as a decrease in the vorticity magnitude.

  13. Viscous flow features in scaled-up physical models of normal and pathological vocal phonation

    International Nuclear Information System (INIS)

    Erath, Byron D.; Plesniak, Michael W.

    2010-01-01

    Unilateral vocal fold paralysis results when the recurrent laryngeal nerve, which innervates the muscles of the vocal folds, becomes damaged. The loss of muscle and tension control renders the damaged vocal fold ineffectual: the mucosal wave disappears during phonation, and the vocal fold becomes largely immobile. The influence of unilateral vocal fold paralysis on viscous flow development within the glottis during phonation, which impacts speech quality, was investigated. Driven, scaled-up vocal fold models were employed to replicate both normal and pathological patterns of vocal fold motion. Spatial and temporal velocity fields were captured using particle image velocimetry and laser Doppler velocimetry. Flow parameters were scaled to match the physiological values associated with human speech. Loss of motion in one vocal fold resulted in a suppression of typical glottal flow fields, including decreased spatial variability in the location of the flow separation point throughout the phonatory cycle, as well as a decrease in the vorticity magnitude.

  14. Soluble Model Fluids with Complete Scaling and Yang-Yang Features

    Science.gov (United States)

    Cerdeiriña, Claudio A.; Orkoulas, Gerassimos; Fisher, Michael E.

    2016-01-01

    Yang-Yang (YY) and singular diameter critical anomalies arise in exactly soluble compressible cell gas (CCG) models that obey complete scaling with pressure mixing. Thus, on the critical isochore ρ = ρc, C̃μ ≡ −T d²μ/dT² diverges as |t|^(−α) when t ∝ T − Tc → 0⁻, while ρd − ρc ∼ |t|^(2β), where ρd(T) = ½[ρliq + ρgas]. When the discrete local CCG cell volumes fluctuate freely, the YY ratio Rμ = C̃μ/CV may take any value −∞ < Rμ < 1. More general decorated CCGs, including "hydrogen bonding" water models, illuminate energy-volume coupling as relevant to Rμ.

  15. Large-Scale Features of Pliocene Climate: Results from the Pliocene Model Intercomparison Project

    Science.gov (United States)

    Haywood, A. M.; Hill, D.J.; Dolan, A. M.; Otto-Bliesner, B. L.; Bragg, F.; Chan, W.-L.; Chandler, M. A.; Contoux, C.; Dowsett, H. J.; Jost, A.; hide

    2013-01-01

    Climate and environments of the mid-Pliocene warm period (3.264 to 3.025 Ma) have been extensively studied. Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a coordinated multi-model and multi-model-data intercomparison. Whilst commonalities in model outputs for the Pliocene are clearly evident, we show substantial variation in the sensitivity of models to the implementation of Pliocene boundary conditions. Models appear able to reproduce many regional changes in temperature reconstructed from geological proxies. However, data-model comparison highlights that models potentially underestimate polar amplification. To assert this conclusion with greater confidence, limitations in the time-averaged proxy data currently available must be addressed. Furthermore, sensitivity tests exploring the known unknowns in modelling Pliocene climate specifically relevant to the high latitudes are essential (e.g. palaeogeography, gateways, orbital forcing and trace gases). Estimates of longer-term sensitivity to CO2 (also known as Earth System Sensitivity, ESS) support previous work suggesting that ESS is greater than Climate Sensitivity (CS), and suggest that the ratio of ESS to CS is between 1 and 2, with a "best" estimate of 1.5.

  16. Automatic Craniomaxillofacial Landmark Digitization via Segmentation-guided Partially-joint Regression Forest Model and Multi-scale Statistical Features

    Science.gov (United States)

    Zhang, Jun; Gao, Yaozong; Wang, Li; Tang, Zhen; Xia, James J.; Shen, Dinggang

    2016-01-01

    Objective: The goal of this paper is to automatically digitize craniomaxillofacial (CMF) landmarks efficiently and accurately from cone-beam computed tomography (CBCT) images, by addressing the challenge caused by large morphological variations across patients and image artifacts of CBCT images. Methods: We propose a Segmentation-guided Partially-joint Regression Forest (S-PRF) model to automatically digitize CMF landmarks. In this model, a regression voting strategy is first adopted to localize each landmark by aggregating evidence from context locations, thus potentially relieving the problem caused by image artifacts near the landmark. Second, CBCT image segmentation is utilized to remove uninformative voxels caused by morphological variations across patients. Third, a partially-joint model is further proposed to separately localize landmarks based on the coherence of landmark positions, to improve the digitization reliability. In addition, we propose a fast vector quantization (VQ) method to extract high-level multi-scale statistical features to describe a voxel's appearance; these features have low dimensionality and high efficiency, and are also invariant to the local inhomogeneity caused by artifacts. Results: Mean digitization errors for 15 landmarks, in comparison to the ground truth, are all less than 2 mm. Conclusion: Our model has addressed challenges of both inter-patient morphological variations and imaging artifacts. Experiments on a CBCT dataset show that our approach achieves clinically acceptable accuracy for landmark digitization. Significance: Our automatic landmark digitization method can be used clinically to reduce labor cost and improve digitization consistency. PMID:26625402

  17. Simulated pre-industrial climate in Bergen Climate Model (version 2): model description and large-scale circulation features

    Directory of Open Access Journals (Sweden)

    O. H. Otterå

    2009-11-01

    The Bergen Climate Model (BCM) is a fully-coupled atmosphere-ocean-sea-ice model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate. Here, a pre-industrial multi-century simulation with an updated version of BCM is described and compared to observational data. The model is run without any form of flux adjustments and is stable for several centuries. The simulated climate reproduces the general large-scale circulation in the atmosphere reasonably well, except for a positive bias in the high-latitude sea level pressure distribution. Also, by introducing an updated turbulence scheme in the atmosphere model, a persistent cold bias has been eliminated. For the ocean part, the model drifts in sea surface temperatures and salinities are considerably reduced compared to earlier versions of BCM. Improved conservation properties in the ocean model have contributed to this. Furthermore, by choosing a reference pressure at 2000 m and including thermobaric effects in the ocean model, a more realistic meridional overturning circulation is simulated in the Atlantic Ocean. The simulated sea-ice extent in the Northern Hemisphere is in general agreement with observational data, except for summer, where the extent is somewhat underestimated. In the Southern Hemisphere, large negative biases are found in the simulated sea-ice extent. This is partly related to problems with the mixed layer parametrization, causing the mixed layer in the Southern Ocean to be too deep, which in turn makes it hard to maintain a realistic sea-ice cover here. However, despite some problematic issues, the pre-industrial control simulation presented here should still be appropriate for climate change studies requiring multi-century simulations.

  18. Multi-scale textural feature extraction and particle swarm optimization based model selection for false positive reduction in mammography.

    Science.gov (United States)

    Zyout, Imad; Czajkowska, Joanna; Grzegorzek, Marcin

    2015-12-01

    The high number of false positives and the resulting number of avoidable breast biopsies are the major problems faced by current mammography Computer Aided Detection (CAD) systems. False positive reduction is a requirement not only for mass CAD systems but also for the calcification CAD systems currently deployed for clinical use. This paper tackles two problems related to reducing the number of false positives in the detection of all lesions and of masses, respectively. First, textural patterns of breast tissue are analyzed using several multi-scale textural descriptors based on wavelets and the gray level co-occurrence matrix. The second problem addressed is parameter selection and performance optimization. For this, we adopt a model selection procedure based on Particle Swarm Optimization (PSO) for selecting the most discriminative textural features and for strengthening the generalization capacity of the supervised learning stage, based on a Support Vector Machine (SVM) classifier. For evaluating the proposed methods, two sets of suspicious mammogram regions have been used. The first, obtained from the Digital Database for Screening Mammography (DDSM), contains 1494 regions (1000 normal and 494 abnormal samples). The second set of suspicious regions was obtained from the database of the Mammographic Image Analysis Society (mini-MIAS) and contains 315 (207 normal and 108 abnormal) samples. Results from both datasets demonstrate the efficiency of using PSO-based model selection for optimizing both classifier hyper-parameters and parameters. Furthermore, the obtained results indicate the promising performance of the proposed textural features, particularly those based on the co-occurrence matrix of the wavelet image representation. Copyright © 2015 Elsevier Ltd. All rights reserved.
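
    A hedged sketch of PSO-driven model selection, assuming scikit-learn: particles fly over (log10 C, log10 gamma) for an SVM and the fitness is cross-validated accuracy. The paper also selects among textural features; this sketch shows only the hyper-parameter side, with standard (assumed) PSO constants:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    X, y = make_classification(n_samples=300, n_features=20, random_state=1)

    def fitness(p):                      # p = (log10 C, log10 gamma)
        clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
        return cross_val_score(clf, X, y, cv=3).mean()

    n, iters, w, c1, c2 = 12, 15, 0.7, 1.4, 1.4
    lo, hi = np.array([-2, -4]), np.array([3, 1])    # search box, assumed
    pos = rng.uniform(lo, hi, size=(n, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmax()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((n, 1)), rng.random((n, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        f = np.array([fitness(p) for p in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()

    print("best (log10 C, log10 gamma):", gbest, "CV accuracy:", pbest_f.max())
    ```

    The same loop performs feature selection if each particle additionally carries a binary feature mask evaluated inside the fitness function.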

  19. Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Le Guilly, Thibaut; Olsen, Petur; Pedersen, Thomas

    2015-01-01

    This paper presents an offline approach to analyzing feature interactions in embedded systems. The approach consists of a systematic process to gather the necessary information about system components and their models. The model is first specified in terms of predicates, before being refined to timed automata. The consistency of the model is verified at different development stages, and the correct linkage between the predicates and their semantic model is checked. The approach is illustrated on a use case from home automation.

  20. Discrete-feature modelling of the Aespoe Site: 1. Discrete-fracture network models for the repository scale

    International Nuclear Information System (INIS)

    Geier, J.E.; Thomas, A.L.

    1996-08-01

    This report describes the statistical derivation and partial validation of discrete-fracture network (DFN) models for the rock beneath the island of Aespoe in southeastern Sweden. The purpose was to develop DFN representations of the rock mass within a hypothetical, spent-fuel repository, located under Aespoe. Analyses are presented for four major lithologic types, with separate analyses of the rock within fracture zones, the rock excluding fracture zones, and all rock. Complete DFN models are proposed as descriptions of the rock mass in the near field. The procedure for validation, by comparison between actual and simulated packer tests, was found to be useful for discriminating among candidate DFN models. In particular, the validation approach was shown to be sensitive to a change in the fracture location (clustering) model, and to a change in the variance of single-fracture transmissivity. The proposed models are defined in terms of stochastic processes and statistical distributions, and thus are descriptive of the variability of the fracture system. This report includes discussion of the numerous sources of uncertainty in the models, including uncertainty that results from the variability of the natural system. 62 refs

  1. Slim Battery Modelling Features

    Science.gov (United States)

    Borthomieu, Y.; Prevot, D.

    2011-10-01

    Saft has developed a life prediction model for VES and MPS cells and batteries. The Saft Li-ion Model (SLIM) is a macroscopic electrochemical model based on energy (global at cell level). Its main purpose is to predict battery performance over the life of GEO, MEO and LEO missions. The model is based on electrochemical characteristics such as energy, capacity, EMF, internal resistance and end-of-charge voltage. It applies fading and calendar-law effects on energy and internal impedance versus time, temperature and end-of-charge voltage. Based on the mission profile and satellite power system characteristics, the model proposes various battery configurations. For each configuration, the model gives the battery performance using mission figures and profiles: power, duration, DOD, end-of-charge voltages, temperatures during eclipses and solstices, thermal dissipation and cell failures. For GEO/MEO missions, eclipse and solstice periods can include specific profiles such as plasma propulsion firings and specific balancing operations. For LEO missions, the model is able to simulate high-power peaks such as radar pulses. Saft's main customers have been using the SLIM model, available in house, for two years. The aim is to let the satellite builders' power system engineers perform their own battery simulations during pre-dimensioning activities. The simulations can be shared with Saft engineers to refine the power system designs. The model has been correlated with existing life and calendar tests performed on all the VES and MPS cells. In comparison with life tests lasting more than 10 years, the accuracy of the model from a voltage point of view is better than 10 mV at end of life. In addition, a comparison with in-orbit data has also been carried out. This paper presents the main features of the SLIM software and compares its outputs with real-life tests.

  2. Feature Scaling via Second-Order Cone Programming

    Directory of Open Access Journals (Sweden)

    Zhizheng Liang

    2016-01-01

    Feature scaling has attracted considerable attention during the past several decades because of its important role in feature selection. In this paper, a novel algorithm for learning the scaling factors of features is proposed. It first assigns a nonnegative scaling factor to each feature of the data and then adopts a generalized performance measure to learn the optimal scaling factors. It is of interest to note that the proposed model can be transformed into a convex optimization problem, namely second-order cone programming (SOCP). Thus the scaling factors of features in our method are globally optimal in some sense. Several experiments on simulated data, UCI data sets, and a gene data set demonstrate that the proposed method is more effective than previous methods.
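
    A toy convex sketch of the idea, assuming cvxpy; the paper's generalized performance measure is not reproduced here. With squared weighted distances, the objective below is linear in the nonnegative scaling factors w, a special case of the cone programs that SOCP solvers handle:

    ```python
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 5))
    y = rng.integers(0, 2, size=40)

    # Mean squared per-feature differences over same- and different-class pairs.
    pairs = [(i, j) for i in range(len(X)) for j in range(i + 1, len(X))]
    same = np.mean([(X[i] - X[j]) ** 2 for i, j in pairs if y[i] == y[j]], axis=0)
    diff = np.mean([(X[i] - X[j]) ** 2 for i, j in pairs if y[i] != y[j]], axis=0)

    w = cp.Variable(X.shape[1], nonneg=True)     # one scaling factor per feature
    prob = cp.Problem(cp.Minimize(same @ w),     # pull same-class pairs together
                      [diff @ w >= 1])           # keep the classes separated
    prob.solve()
    print("learned scaling factors:", np.round(w.value, 3))
    ```

    Because the problem is convex, the learned factors are globally optimal for this objective, which is the property the abstract emphasizes.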

  3. A Novel Ship Detection Method for Large-Scale Optical Satellite Images Based on Visual LBP Features and a Visual Attention Model

    Science.gov (United States)

    Haigang, Sui; Zhina, Song

    2016-06-01

    Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, the problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically-inspired visual features. The model first selects salient candidate regions across large-scale images using a mechanism based on biologically-inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Unlike traditional approaches, the proposed algorithm is fast and helps focus on the suspected ship areas, avoiding a separate land-sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using the visual attention model and detail signatures using LBP features, in keeping with the sparse distribution of ships in the images. These features are then used to classify each chip as containing ship targets or not, using a support vector machine (SVM). After the suspicious areas are obtained, some false alarms such as small waves and ribbon clouds remain, so simple shape and texture analyses are adopted to distinguish between ships and non-ships in the suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination and ship size.
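
    A hedged sketch of the chip-classification stage, assuming scikit-image and scikit-learn; a plain uniform-LBP histogram stands in for the paper's CVLBP variant, and chips, labels and new_chips are placeholder data loaders:

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    P, R = 8, 1                              # LBP neighbors and radius (assumed)

    def lbp_histogram(chip):
        """2-D grayscale chip -> normalized uniform-LBP histogram."""
        codes = local_binary_pattern(chip, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
        return hist / hist.sum()

    # chips: list of 2-D arrays cut from the large image; labels: 1 = ship.
    feats = np.array([lbp_histogram(c) for c in chips])
    clf = SVC().fit(feats, labels)
    flags = [clf.predict(lbp_histogram(c)[None])[0] for c in new_chips]
    ```

    In the full pipeline, only chips that the saliency stage marks as candidates reach this classifier, and the flagged chips then pass to the shape and texture checks.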

  4. Sugar beet and volunteer potato classification using Bag-of-Visual-Words model, Scale-Invariant Feature Transform, or Speeded Up Robust Feature descriptors and crop row information

    NARCIS (Netherlands)

    Suh, Hyun K.; Hofstee, Jan Willem; IJsselmuiden, Joris; Henten, van Eldert J.

    2018-01-01

    One of the most important steps in vision-based weed detection systems is the classification of weeds growing amongst crops. In the EU SmartBot project it was required to effectively control more than 95% of volunteer potatoes and ensure less than 5% of damage of sugar beet. Classification features

  5. Analysing Feature Model Changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large-scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system.

  6. Impacts of Changing Climatic Drivers and Land use features on Future Stormwater Runoff in the Northwest Florida Basin: A Large-Scale Hydrologic Modeling Assessment

    Science.gov (United States)

    Khan, M.; Abdul-Aziz, O. I.

    2017-12-01

    Potential changes in climatic drivers and land cover features can significantly influence the stormwater budget in the Northwest Florida Basin. We investigated the hydro-climatic and land use sensitivities of stormwater runoff by developing a large-scale, process-based rainfall-runoff model for the basin using the EPA Storm Water Management Model (SWMM 5.1). Climatic and hydrologic variables, as well as land use/cover features, were incorporated into the model to account for the key processes of coastal hydrology and its dynamic interactions with groundwater and sea levels. We calibrated and validated the model against historical daily streamflow observations during 2009-2012 at four major rivers in the basin. Downscaled climatic drivers (precipitation, temperature, solar radiation) projected by twenty GCMs-RCMs under CMIP5, along with projected future land use/cover features, were also incorporated into the model. The basin storm runoff was then simulated for a historical period (2000s = 1976-2005) and two future periods (2050s = 2030-2059, and 2080s = 2070-2099). Comparative evaluation of the historical and future scenarios leads to important guidelines for stormwater management in Northwest Florida and similar regions under a changing climate and environment.

  7. Genomic Feature Models

    DEFF Research Database (Denmark)

    Sørensen, Peter; Edwards, Stefan McKinnon; Rohde, Palle Duun

    Whole-genome sequences and multiple trait phenotypes from large numbers of individuals will soon be available in many populations. Well-established statistical modeling approaches enable the genetic analyses of complex trait phenotypes while accounting for a variety of additive and non-additive genetic mechanisms. These modeling approaches have proven to be highly useful to determine population genetic parameters as well as prediction of genetic risk or value. We present a series of statistical modelling approaches that use prior biological information on genomic features (e.g. genomic regions and gene ontologies) for evaluating the collective action of multiple genetic variants, providing better model fit and increasing the predictive ability of the statistical model for the trait under study.

  8. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of large scale evolving feature model and yet details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically

  9. Robust object tracking combining color and scale invariant features

    Science.gov (United States)

    Zhang, Shengping; Yao, Hongxun; Gao, Peipei

    2010-07-01

    Object tracking plays a very important role in many computer vision applications, but its performance deteriorates significantly under challenges in complex scenes, such as pose and illumination changes and cluttered backgrounds. In this paper, we propose a robust object tracking algorithm which exploits both global color and local scale-invariant (SIFT) features in a particle filter framework. Due to the expensive computational cost of SIFT features, the proposed tracker adopts a sped-up variant of SIFT, SURF, to extract local features. Specifically, the proposed method first finds matching points between the target model and the target candidate; the weight of the corresponding particle based on scale-invariant features is then computed as the proportion of that particle's matching points to the matching points of all particles; finally, the weight of the particle is obtained by combining the color and SURF weights in a probabilistic way. Experimental results on a variety of challenging videos verify that the proposed method is robust to pose and illumination changes and is significantly superior to the standard particle filter tracker and the mean shift tracker.
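
    A minimal sketch of the weight fusion just described, with assumptions throughout: a Bhattacharyya coefficient scores color histograms, match_counts would come from a keypoint matcher (ORB is a practical stand-in for SURF, which is patent-encumbered in stock OpenCV builds), and the mixing weight alpha is a free choice:

    ```python
    import numpy as np

    def fused_weights(model_hist, particle_hists, match_counts, alpha=0.5):
        """Combine color and keypoint evidence into one weight per particle.

        model_hist: (bins,) normalized color histogram of the target model.
        particle_hists: (n, bins) histograms of the candidate regions.
        match_counts: (n,) keypoint matches between model and each candidate.
        """
        # Color weight: Bhattacharyya coefficient with the model histogram.
        wc = np.array([np.sum(np.sqrt(model_hist * h)) for h in particle_hists])
        # Feature weight: this particle's share of all keypoint matches.
        wf = np.asarray(match_counts, dtype=float)
        wf = wf / wf.sum() if wf.sum() > 0 else np.full(len(wf), 1.0 / len(wf))
        w = alpha * wc / wc.sum() + (1.0 - alpha) * wf
        return w / w.sum()                     # normalized particle weights

    # Example: three particles, 8-bin histograms, keypoint match counts.
    rng = np.random.default_rng(0)
    model = rng.random(8); model /= model.sum()
    cands = rng.random((3, 8)); cands /= cands.sum(axis=1, keepdims=True)
    print(fused_weights(model, cands, match_counts=[14, 3, 0]))
    ```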

  10. DETERMINATION OF RELEVANT FEATURES OF A SCALE MODEL FOR A 55 000 DWT BULK CARRIER NECESSARY TO STUDY THE SHIP MANEUVERABILITY

    Directory of Open Access Journals (Sweden)

    ALECU TOMA

    2016-06-01

    The study of ship behavior based on practical tests on scale models is widely used by leading scientists, engineers, architects and researchers in the naval field. In this paper we determine the ship-handling parameters relevant to studying a 55,000 dwt bulk carrier using a scale model. The scientific background for this experimental technique, necessary to build a scale model ship, consists in applying the principles of similarity, or "similitude". The scale model achieved by applying the laws of similarity must allow, through approximations valid in certain circumstances, finding the relevant parameters needed to simplify and solve the Navier-Stokes equations. These parameters are necessary for modeling the interaction between the hull of the real ship and the fluid motion.
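
    A worked example of the similarity principle most often applied in such model tests (standard Froude scaling, stated as general background rather than the paper's specific derivation): equal Froude numbers V/sqrt(g*L) for ship and model give V_model = V_ship / sqrt(lambda) at geometric scale lambda. The scale and speed below are assumed values:

    ```python
    import math

    lam = 50.0                      # assumed geometric scale, 1:50
    v_ship_kn = 14.0                # assumed full-scale service speed, knots
    v_ship = v_ship_kn * 0.5144     # knots -> m/s
    v_model = v_ship / math.sqrt(lam)
    print(f"model test speed: {v_model:.2f} m/s")   # ~1.02 m/s
    ```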

  11. Multi-scale Analysis of High Resolution Topography: Feature Extraction and Identification of Landscape Characteristic Scales

    Science.gov (United States)

    Passalacqua, P.; Sangireddy, H.; Stark, C. P.

    2015-12-01

    With the advent of digital terrain data, detailed information on terrain characteristics and on the scale and location of geomorphic features is available over extended areas. Our ability to observe landscapes and quantify topographic patterns has greatly improved, including the estimation of fluxes of mass and energy across landscapes. Challenges still remain in the analysis of high resolution topography data: the presence of features such as roads, for example, challenges classic methods for feature extraction, and large data volumes require computationally efficient extraction and analysis methods. Moreover, opportunities exist to define new robust metrics of landscape characterization for landscape comparison and model validation. In this presentation we cover recent research in multi-scale and objective analysis of high resolution topography data. We show how the probability density functions of topographic attributes such as slope, curvature, and topographic index contain useful information for feature localization and extraction. The analysis of how these distributions change across scales, quantified by the behavior of modal values and interquartile range, allows the identification of landscape characteristic scales, such as terrain roughness. The methods are introduced on synthetic signals in one and two dimensions and then applied to a variety of landscapes with different characteristics. Validation of the methods includes the analysis of modeled landscapes where the noise distribution is known and features of interest are easily measured.
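
    A hedged sketch of the scale sweep, assuming numpy/scipy: smooth the DEM over increasing Gaussian scales, compute a terrain attribute (slope here), and track the mode and interquartile range of its distribution; breaks in those curves against scale suggest characteristic scales. The grid spacing, scale list, and file name are assumptions:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    dem = np.loadtxt("dem.txt")          # placeholder high-resolution DEM (m)
    dx = 1.0                             # grid spacing (m), assumed

    for sigma in [1, 2, 4, 8, 16, 32]:   # smoothing scales in pixels
        z = gaussian_filter(dem, sigma)
        gy, gx = np.gradient(z, dx)
        slope = np.degrees(np.arctan(np.hypot(gx, gy)))
        hist, edges = np.histogram(slope, bins=90, range=(0, 90))
        mode = edges[hist.argmax()]
        q1, q3 = np.percentile(slope, [25, 75])
        print(f"sigma={sigma:2d}px  mode={mode:5.2f} deg  IQR={q3 - q1:5.2f} deg")
    ```

    The same loop applies to curvature or topographic index; plotting the modal value and IQR against sigma gives the scale signature described in the abstract.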

  12. Improving scale invariant feature transform-based descriptors with shape-color alliance robust feature

    Science.gov (United States)

    Wang, Rui; Zhu, Zhengdan; Zhang, Liang

    2015-05-01

    Constructing appropriate descriptors for interest points in image matching is a critical task in computer vision and pattern recognition. A method called the shape-color alliance robust feature (SCARF) descriptor, an extension of the scale invariant feature transform (SIFT) descriptor, is presented. To address the problems that SIFT was designed mainly for gray images and that feature points lack global information, the proposed approach improves the SIFT descriptor by means of a concentric-rings model, integrating the color invariant space and shape context with SIFT to construct the SCARF descriptor. The SCARF method is more robust than conventional SIFT with respect not only to color and photometric variations but also to measuring similarity as a global variation between two shapes. A comparative evaluation of different descriptors shows that the SCARF approach provides better results than four other state-of-the-art methods.

  13. Component Composition Using Feature Models

    DEFF Research Database (Denmark)

    Eichberg, Michael; Klose, Karl; Mitschke, Ralf

    2010-01-01

    interface description languages. If this variability is relevant when selecting a matching component then human interaction is required to decide which components can be bound. We propose to use feature models for making this variability explicit and (re-)enabling automatic component binding. In our...... approach, feature models are one part of service specifications. This enables to declaratively specify which service variant is provided by a component. By referring to a service's variation points, a component that requires a specific service can list the requirements on the desired variant. Using...... these specifications, a component environment can then determine if a binding of the components exists that satisfies all requirements. The prototypical environment Columbus demonstrates the feasibility of the approach....

  14. Drift Scale THM Model

    International Nuclear Information System (INIS)

    Rutqvist, J.

    2004-01-01

    This model report documents the drift scale coupled thermal-hydrological-mechanical (THM) processes model development and presents simulations of the THM behavior in fractured rock close to emplacement drifts. The modeling and analyses are used to evaluate the impact of THM processes on permeability and flow in the near-field of the emplacement drifts. The results from this report are used to assess the importance of THM processes on seepage and support in the model reports ''Seepage Model for PA Including Drift Collapse'' and ''Abstraction of Drift Seepage'', and to support arguments for exclusion of features, events, and processes (FEPs) in the analysis reports ''Features, Events, and Processes in Unsaturated Zone Flow and Transport'' and ''Features, Events, and Processes: Disruptive Events''. The total system performance assessment (TSPA) calculations do not use any output from this report. Specifically, the coupled THM process model is applied to simulate the impact of THM processes on hydrologic properties (permeability and capillary strength) and flow in the near-field rock around a heat-releasing emplacement drift. The heat generated by the decay of radioactive waste results in elevated rock temperatures for thousands of years after waste emplacement. Depending on the thermal load, these temperatures are high enough to cause boiling conditions in the rock, resulting in water redistribution and altered flow paths. These temperatures will also cause thermal expansion of the rock, with the potential of opening or closing fractures and thus changing fracture permeability in the near-field. Understanding the coupled THM processes is important for the performance of the repository because the thermally induced permeability changes potentially affect the magnitude and spatial distribution of percolation flux in the vicinity of the drift, and hence the seepage of water into the drift. This is important because a sufficient amount of water must be available within a

  15. Discrete Feature Model (DFM) User Documentation

    International Nuclear Information System (INIS)

    Geier, Joel

    2008-06-01

    This manual describes the Discrete-Feature Model (DFM) software package for modelling groundwater flow and solute transport in networks of discrete features. A discrete-feature conceptual model represents fractures and other water-conducting features around a repository as discrete conductors surrounded by a rock matrix which is usually treated as impermeable. This approximation may be valid for crystalline rocks such as granite or basalt, which have very low permeability if macroscopic fractures are excluded. A discrete feature is any entity that can conduct water and permit solute transport through bedrock, and can be reasonably represented as a piecewise-planar conductor. Examples of such entities may include individual natural fractures (joints or faults), fracture zones, and disturbed-zone features around tunnels (e.g. blasting-induced fractures or stress-concentration induced 'onion skin' fractures around underground openings). In a more abstract sense, the effectively discontinuous nature of pathways through fractured crystalline bedrock may be idealized as discrete, equivalent transmissive features that reproduce large-scale observations, even if the details of connective paths (and unconnected domains) are not precisely known. A discrete-feature model explicitly represents the fundamentally discontinuous and irregularly connected nature of such systems, by constraining flow and transport to occur only within such features and their intersections. Pathways for flow and solute transport in this conceptualization are a consequence not just of the boundary conditions and hydrologic properties (as with continuum models), but also the irregularity of connections between conductive/transmissive features. The DFM software package described here is an extensible code for investigating problems of flow and transport in geological (natural or human-altered) systems that can be characterized effectively in terms of discrete features. With this software, the

  16. Discrete Feature Model (DFM) User Documentation

    Energy Technology Data Exchange (ETDEWEB)

    Geier, Joel (Clearwater Hardrock Consulting, Corvallis, OR (United States))

    2008-06-15

    This manual describes the Discrete-Feature Model (DFM) software package for modelling groundwater flow and solute transport in networks of discrete features. A discrete-feature conceptual model represents fractures and other water-conducting features around a repository as discrete conductors surrounded by a rock matrix which is usually treated as impermeable. This approximation may be valid for crystalline rocks such as granite or basalt, which have very low permeability if macroscopic fractures are excluded. A discrete feature is any entity that can conduct water and permit solute transport through bedrock, and can be reasonably represented as a piecewise-planar conductor. Examples of such entities may include individual natural fractures (joints or faults), fracture zones, and disturbed-zone features around tunnels (e.g. blasting-induced fractures or stress-concentration induced 'onion skin' fractures around underground openings). In a more abstract sense, the effectively discontinuous nature of pathways through fractured crystalline bedrock may be idealized as discrete, equivalent transmissive features that reproduce large-scale observations, even if the details of connective paths (and unconnected domains) are not precisely known. A discrete-feature model explicitly represents the fundamentally discontinuous and irregularly connected nature of such systems, by constraining flow and transport to occur only within such features and their intersections. Pathways for flow and solute transport in this conceptualization are a consequence not just of the boundary conditions and hydrologic properties (as with continuum models), but also the irregularity of connections between conductive/transmissive features. The DFM software package described here is an extensible code for investigating problems of flow and transport in geological (natural or human-altered) systems that can be characterized effectively in terms of discrete features. With this

  17. Rotation invariant fast features for large-scale recognition

    Science.gov (United States)

    Takacs, Gabriel; Chandrasekhar, Vijay; Tsai, Sam; Chen, David; Grzeszczuk, Radek; Girod, Bernd

    2012-10-01

    We present an end-to-end feature description pipeline which uses a novel interest point detector and Rotation-Invariant Fast Feature (RIFF) descriptors. The proposed RIFF algorithm is 15× faster than SURF [1] while producing large-scale retrieval results that are comparable to SIFT [2]. Such high-speed features benefit a range of applications from Mobile Augmented Reality (MAR) to web-scale image retrieval and analysis.

  18. Comparison of laminite fracture features at different scales

    OpenAIRE

    Zihms, Stephanie; Miranda, Tiago; Lewis, Helen; Hall, Stephen

    2017-01-01

    Laminites (NE Brazil) are well-laminated carbonates that provide insight into the geomechanical behaviour of layered systems, especially when comparing deformation characteristics observed in the laboratory with outcrop/field-scale deformations. This is useful in order to a) validate where laboratory experiments can reproduce field-scale deformation types, and b) understand which feature characteristics can or cannot be scaled.

  19. Rotation, scale, and translation invariant pattern recognition using feature extraction

    Science.gov (United States)

    Prevost, Donald; Doucet, Michel; Bergeron, Alain; Veilleux, Luc; Chevrette, Paul C.; Gingras, Denis J.

    1997-03-01

    A rotation, scale and translation invariant pattern recognition technique is proposed. It is based on Fourier-Mellin Descriptors (FMDs). Each FMD is taken as an independent feature of the object, and a set of those features forms a signature. FMDs are naturally rotation invariant. Translation invariance is achieved through pre-processing, and a proper normalization of the FMDs gives the scale invariance property. This approach offers the double advantage of providing invariant signatures of the objects and a dramatic reduction of the amount of data to process. The compressed invariant feature signature is next presented to a multi-layered perceptron neural network. This final step provides some robustness to the classification of the signatures, enabling good recognition behavior under anamorphically scaled distortion. We also present an original feature extraction technique, adapted to optical calculation of the FMDs. A prototype optical set-up was built, and experimental results are presented.
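
    A sketch of the classic Fourier-Mellin construction behind such descriptors, assuming OpenCV: an FFT magnitude removes translation, a log-polar remap turns rotation and scale into shifts, and a second FFT magnitude removes those shifts. Normalization details and the optical implementation differ from the paper's exact FMDs:

    ```python
    import cv2
    import numpy as np

    def rst_signature(img, out=32):
        """Rotation/scale/translation-insensitive signature of a grayscale image."""
        f = np.abs(np.fft.fftshift(np.fft.fft2(img)))      # translation invariant
        c = (img.shape[1] / 2, img.shape[0] / 2)
        lp = cv2.warpPolar(f.astype(np.float32), (out, out), c, min(c),
                           cv2.INTER_LINEAR + cv2.WARP_POLAR_LOG)
        sig = np.abs(np.fft.fft2(lp))                      # shift invariant
        return (sig / np.linalg.norm(sig)).ravel()         # compact feature vector

    img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # placeholder input
    print(rst_signature(img).shape)                        # (1024,)
    ```

    The resulting compact vector could then feed a classifier such as the multi-layered perceptron stage described above.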

  20. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; van Deursen, A.; Pinzger, M.

    Evolving a large-scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system.

  1. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large-scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system.

  2. Delving Deep into Multiscale Pedestrian Detection via Single Scale Feature Maps

    Directory of Open Access Journals (Sweden)

    Xinchuan Fu

    2018-04-01

    The standard pipeline in pedestrian detection slides a pedestrian model over an image feature pyramid to detect pedestrians of different scales. In this pipeline, feature pyramid construction is time consuming and becomes the bottleneck for fast detection. Recently, a method called multiresolution filtered channels (MRFC) was proposed which uses only single-scale feature maps to achieve fast detection. However, two shortcomings in MRFC limit its accuracy: the receptive field correspondence across scales is weak, and the features used are not scale invariant. In this paper, two solutions are proposed to tackle these shortcomings. Specifically, scale-aware pooling is proposed to improve the receptive field correspondence, and a soft decision tree is proposed to relieve the scale variance problem. When coupled with an efficient sliding window classification strategy, our detector achieves fast detection speed together with state-of-the-art accuracy.

  3. Object feature extraction and recognition model

    International Nuclear Information System (INIS)

    Wan Min; Xiang Rujian; Wan Yongxing

    2001-01-01

    The characteristics of objects, especially flying objects, are analyzed, including spectral, image and motion characteristics, and feature extraction is described. To improve the speed of object recognition, a feature database is used to simplify the data in the source database. The feature-to-object relationship maps are stored in the feature database. An object recognition model based on the feature database is presented, and the way to achieve object recognition is explained.

  4. Object detection based on improved color and scale invariant features

    Science.gov (United States)

    Chen, Mengyang; Men, Aidong; Fan, Peng; Yang, Bo

    2009-10-01

    A novel object detection method which combines color and scale invariant features is presented in this paper. The detection system adopts the widely used framework of SIFT (Scale Invariant Feature Transform), which consists of both a keypoint detector and a descriptor. Although SIFT has some impressive advantages, it is not only computationally expensive but also ill-suited to color images. To overcome these drawbacks, we employ local color kernel histograms and Haar wavelet responses to enhance the descriptor's distinctiveness and computational efficiency. Extensive experimental evaluations show that the method has better robustness and lower computational cost.

  5. New SCALE-4 features related to cross-section processing

    International Nuclear Information System (INIS)

    Petrie, L.M.; Landers, N.F.; Greene, N.M.; Parks, C.V.

    1991-01-01

    The SCALE code system has a standardized scheme for processing problem-dependent cross sections from problem-independent master libraries. Several improvements and new capabilities in the processing scheme have been incorporated into the new Version 4 release of the SCALE system. The new features include the capability to treat annular cylindrical and spherical unit cells, an improved Dancoff factor formulation, and changes to the NITAWL-II module to perform resonance self-shielding with reference to infinite-dilution values. A review of these major changes in the cross-section processing scheme for SCALE-4 is presented in this paper.

  6. Exploring quantum control landscapes: Topology, features, and optimization scaling

    International Nuclear Information System (INIS)

    Moore, Katharine W.; Rabitz, Herschel

    2011-01-01

    Quantum optimal control experiments and simulations have successfully manipulated the dynamics of systems ranging from atoms to biomolecules. Surprisingly, these collective works indicate that the effort (i.e., the number of algorithmic iterations) required to find an optimal control field appears to be essentially invariant to the complexity of the system. The present work explores this matter in a series of systematic optimizations of the state-to-state transition probability on model quantum systems with the number of states N ranging from 5 through 100. The optimizations occur over a landscape defined by the transition probability as a function of the control field. Previous theoretical studies on the topology of quantum control landscapes established that they should be free of suboptimal traps under reasonable physical conditions. The simulations in this work include nearly 5000 individual optimization test cases, all of which confirm this prediction by fully achieving optimal population transfer of at least 99.9%, given careful attention to numerical procedures to ensure that the controls are free of constraints. Collectively, the simulation results additionally show invariance of the required search effort to the system dimension N. This behavior is rationalized in terms of the structural features of the underlying control landscape. The very attractive observed scaling with system complexity may be understood by considering the distance traveled on the control landscape during a search and the magnitude of the control landscape slope. Exceptions to this favorable scaling behavior can arise when the initial control field fluence is too large or when the target final state recedes from the initial state as N increases.

  7. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built....... The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method...

  8. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using the large ZZ 8000 testing machine (maximum load 80 MN) at the SKODA WORKS. Results are described from tests of the material resistance to non-ductile fracture, covering both base materials and welded joints. The rated specimen thickness was 150 mm, with defects of a depth between 15 and 100 mm. Results are also presented for nozzles of 850 mm inner diameter at a scale of 1:3; static, cyclic, and dynamic tests were performed with and without surface defects (15, 30 and 45 mm deep). During the cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  9. Small scale models equal large scale savings

    International Nuclear Information System (INIS)

    Lee, R.; Segroves, R.

    1994-01-01

    A physical scale model of a reactor is a tool which can be used to reduce the time spent by workers in the containment during an outage and thus to reduce the radiation dose and save money. The model can be used for worker orientation, and for planning maintenance, modifications, manpower deployment and outage activities. Examples of the use of models are presented. These were for the La Salle 2 and Dresden 1 and 2 BWRs. In each case cost-effectiveness and exposure reduction due to the use of a scale model is demonstrated. (UK)

  10. Deep Feature Learning and Cascaded Classifier for Large Scale Data

    DEFF Research Database (Denmark)

    Prasoon, Adhish

    This thesis focuses on voxel/pixel classification based approaches for image segmentation. The main application is segmentation of articular cartilage in knee MRIs. The first major contribution of the thesis deals with large scale machine learning problems. Many medical imaging problems need huge... amounts of training data to cover sufficient biological variability. Learning methods that scale badly with the number of training data points cannot be used in such scenarios. This may restrict the usage of many powerful classifiers having excellent generalization ability. We propose a cascaded classifier which... Features are learned from data rather than having a predefined feature set: we explore the deep learning approach of convolutional neural networks (CNN) for segmenting three dimensional medical images. We propose a novel system integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of the 3D...

  11. Local-Scale Simulations of Nucleate Boiling on Micrometer Featured Surfaces: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Sitaraman, Hariswaran [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Moreno, Gilberto [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Narumanchi, Sreekant V [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dede, Ercan M. [Toyota Research Institute of North America; Joshi, Shailesh N. [Toyota Research Institute of North America; Zhou, Feng [Toyota Research Institute of North America

    2017-08-03

    A high-fidelity computational fluid dynamics (CFD)-based model for bubble nucleation of the refrigerant HFE7100 on micrometer-featured surfaces is presented in this work. The single-fluid incompressible Navier-Stokes equations, along with energy transport and natural convection effects, are solved on a grid that resolves the surface features. An a priori cavity detection method is employed to convert raw profilometer data of a surface into well-defined cavities. The cavity information and surface morphology are represented in the CFD model by geometric mesh deformations. Surface morphology is observed to initiate buoyancy-driven convection in the liquid phase, which in turn results in faster nucleation of cavities. Simulations pertaining to a generic rough surface show a trend where smaller cavities nucleate at higher wall superheat. This local-scale model will serve as a self-consistent connection to larger device-scale continuum models where local feature representation is not possible.

  12. Local-Scale Simulations of Nucleate Boiling on Micrometer-Featured Surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Sitaraman, Hariswaran [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Moreno, Gilberto [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Narumanchi, Sreekant V [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dede, Ercan M. [Toyota Research Institute of North America; Joshi, Shailesh N. [Toyota Research Institute of North America; Zhou, Feng [Toyota Research Institute of North America

    2017-07-12

    A high-fidelity computational fluid dynamics (CFD)-based model for bubble nucleation of the refrigerant HFE7100 on micrometer-featured surfaces is presented in this work. The single-fluid incompressible Navier-Stokes equations, along with energy transport and natural convection effects, are solved on a grid that resolves the surface features. An a priori cavity detection method is employed to convert raw profilometer data of a surface into well-defined cavities. The cavity information and surface morphology are represented in the CFD model by geometric mesh deformations. Surface morphology is observed to initiate buoyancy-driven convection in the liquid phase, which in turn results in faster nucleation of cavities. Simulations pertaining to a generic rough surface show a trend where smaller cavities nucleate at higher wall superheat. This local-scale model will serve as a self-consistent connection to larger device-scale continuum models where local feature representation is not possible.

  13. Augmented distinctive features with color and scale invariance

    Science.gov (United States)

    Liu, Yan; Lu, Xiaoqing; Qin, Yeyang; Tang, Zhi; Xu, Jianbo

    2013-03-01

    For objects with the same texture but different colors, it is difficult to discriminate between them with the traditional scale invariant feature transform (SIFT) descriptor, because it is designed for grayscale images only. It is therefore important to maintain a high probability that the key points used form correctly matched pairs. In addition, evenly distributed key points are far preferable to overly dense, clustered key points for image matching and other applications. In this paper, we analyze these two problems. First, we propose a color and scale invariant method to extract more evenly distributed key points, relying on invariance to illumination intensity while remaining sensitive to object reflectance. Second, we reduce the accumulated error in each key point's canonical direction by dispersing each pixel's gradient direction onto a relative direction around the current key point. Finally, we build the descriptors on a Gaussian pyramid and match the key points with our enhanced two-way matching rules. Experiments are performed on the Amsterdam Library of Object Images dataset and on some manually synthesized images. The results show that the extracted key points are more evenly distributed and more numerous than those of SIFT, and that the feature descriptors can discriminate well between images with different colors but the same content and texture.

  14. Cloud Detection by Fusing Multi-Scale Convolutional Features

    Science.gov (United States)

    Li, Zhiwei; Shen, Huanfeng; Wei, Yancong; Cheng, Qing; Yuan, Qiangqiang

    2018-04-01

    Cloud detection is an important pre-processing step for the accurate application of optical satellite imagery. Recent studies indicate that deep learning achieves the best performance in image segmentation tasks. Aiming at boosting the accuracy of cloud detection for multispectral imagery, especially imagery that contains only visible and near-infrared bands, we propose in this paper a deep learning based cloud detection method termed MSCN (multi-scale cloud net), which segments clouds by fusing multi-scale convolutional features. MSCN was trained on a global cloud cover validation collection and tested on more than ten types of optical images with different resolutions. Experimental results show that MSCN has clear advantages in accuracy over the traditional multi-feature combined cloud detection method, especially in snow and other areas covered by bright non-cloud objects. Moreover, MSCN produced more detailed cloud masks than the compared deep cloud detection convolutional network. The effectiveness of MSCN makes it promising for practical application to many kinds of optical imagery.
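
    The fusion step can be pictured with a minimal sketch (the layer sizes, module name, and upsample-and-concatenate strategy are illustrative assumptions, not the published MSCN architecture): feature maps from several network depths are resized to a common resolution and concatenated before a 1x1 convolution produces the cloud mask.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class MultiScaleFusion(nn.Module):
            # Fuse convolutional features computed at several scales (illustrative).
            def __init__(self, channels=(16, 32, 64)):
                super().__init__()
                self.convs = nn.ModuleList([
                    nn.Conv2d(c_in, c_out, 3, stride=2, padding=1)
                    for c_in, c_out in zip((3,) + channels[:-1], channels)])
                self.head = nn.Conv2d(sum(channels), 1, kernel_size=1)

            def forward(self, x):
                feats, h = [], x
                for conv in self.convs:
                    h = F.relu(conv(h))
                    feats.append(h)          # progressively coarser maps
                size = feats[0].shape[-2:]   # common (finest) resolution
                up = [F.interpolate(f, size=size, mode='bilinear',
                                    align_corners=False) for f in feats]
                return torch.sigmoid(self.head(torch.cat(up, dim=1)))

        mask = MultiScaleFusion()(torch.randn(1, 3, 128, 128))  # -> (1, 1, 64, 64)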

  15. A Feature Selection Method for Large-Scale Network Traffic Classification Based on Spark

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2016-02-01

    Full Text Available Currently, with the rapid growth of data scales in network traffic classification, how to select traffic features efficiently has become a big challenge. Although a number of traditional feature selection methods using the Hadoop-MapReduce framework have been proposed, their execution time remains unsatisfactory owing to the numerous iterative computations during processing. To address this issue, an efficient feature selection method for network traffic based on a new parallel computing framework called Spark is proposed in this paper. In our approach, the complete feature set is first preprocessed based on the Fisher score, and a sequential forward search strategy is employed to build candidate subsets. The optimal feature subset is then selected through the iterative computations of the Spark framework. The implementation demonstrates that, while preserving classification accuracy, our method reduces the time cost of modeling and classification and significantly improves the execution efficiency of feature selection.
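
    For intuition, the Fisher score used in the preprocessing step can be computed as below; this is a single-machine numpy sketch of the standard Fisher score (between-class scatter over within-class scatter), whereas the paper distributes the computation over the Spark framework.

        import numpy as np

        def fisher_scores(X, y):
            # F_j = sum_k n_k (mu_kj - mu_j)^2 / sum_k n_k var_kj, per feature j.
            mu = X.mean(axis=0)
            num = np.zeros(X.shape[1])
            den = np.zeros(X.shape[1])
            for c in np.unique(y):
                Xc = X[y == c]
                num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
                den += len(Xc) * Xc.var(axis=0)
            return num / (den + 1e-12)

        # Rank features; a sequential forward search then grows subsets from the top.
        X, y = np.random.rand(200, 30), np.random.randint(0, 2, 200)
        ranking = np.argsort(fisher_scores(X, y))[::-1]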

  16. BioModels: Content, Features, Functionality, and Use

    Science.gov (United States)

    Juty, N; Ali, R; Glont, M; Keating, S; Rodriguez, N; Swat, MJ; Wimalaratne, SM; Hermjakob, H; Le Novère, N; Laibe, C; Chelliah, V

    2015-01-01

    BioModels is a reference repository hosting mathematical models that describe the dynamic interactions of biological components at various scales. The resource provides access to over 1,200 models described in literature and over 140,000 models automatically generated from pathway resources. Most model components are cross-linked with external resources to facilitate interoperability. A large proportion of models are manually curated to ensure reproducibility of simulation results. This tutorial presents BioModels' content, features, functionality, and usage. PMID:26225232

  17. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam Univesity; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report; first, however, some more general comments are made regarding the concept of model validation.

  18. The future of primordial features with large-scale structure surveys

    International Nuclear Information System (INIS)

    Chen, Xingang; Namjoo, Mohammad Hossein; Dvorkin, Cora; Huang, Zhiqi; Verde, Licia

    2016-01-01

    Primordial features are one of the most important extensions of the Standard Model of cosmology, providing a wealth of information on the primordial Universe, ranging from discrimination between inflation and alternative scenarios, to new particle detection, to fine structures in the inflationary potential. We study the prospects of future large-scale structure (LSS) surveys for the detection and constraint of these features. We classify primordial feature models into several classes, and for each class we present a simple power spectrum template that encodes the essential physics. We study how well the most ambitious LSS surveys proposed to date, including both spectroscopic and photometric surveys, will be able to improve the constraints with respect to the current Planck data. We find that these LSS surveys will significantly improve the experimental sensitivity to feature signals that are oscillatory in scale, owing to the 3D information. For a broad range of models, these surveys will be able to reduce the errors on the amplitudes of the features by a factor of 5 or more, including several interesting candidates identified in the recent Planck data. Therefore, LSS surveys offer an impressive opportunity for primordial feature discovery in the next decade or two. We also compare the advantages of the two types of surveys.

  19. The future of primordial features with large-scale structure surveys

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xingang; Namjoo, Mohammad Hossein [Institute for Theory and Computation, Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Dvorkin, Cora [Department of Physics, Harvard University, Cambridge, MA 02138 (United States); Huang, Zhiqi [School of Physics and Astronomy, Sun Yat-Sen University, 135 Xingang Xi Road, Guangzhou, 510275 (China); Verde, Licia, E-mail: xingang.chen@cfa.harvard.edu, E-mail: dvorkin@physics.harvard.edu, E-mail: huangzhq25@sysu.edu.cn, E-mail: mohammad.namjoo@cfa.harvard.edu, E-mail: liciaverde@icc.ub.edu [ICREA and ICC-UB, University of Barcelona (IEEC-UB), Marti i Franques, 1, Barcelona 08028 (Spain)

    2016-11-01

    Primordial features are one of the most important extensions of the Standard Model of cosmology, providing a wealth of information on the primordial Universe, ranging from discrimination between inflation and alternative scenarios, to new particle detection, to fine structures in the inflationary potential. We study the prospects of future large-scale structure (LSS) surveys for the detection and constraint of these features. We classify primordial feature models into several classes, and for each class we present a simple power spectrum template that encodes the essential physics. We study how well the most ambitious LSS surveys proposed to date, including both spectroscopic and photometric surveys, will be able to improve the constraints with respect to the current Planck data. We find that these LSS surveys will significantly improve the experimental sensitivity to feature signals that are oscillatory in scale, owing to the 3D information. For a broad range of models, these surveys will be able to reduce the errors on the amplitudes of the features by a factor of 5 or more, including several interesting candidates identified in the recent Planck data. Therefore, LSS surveys offer an impressive opportunity for primordial feature discovery in the next decade or two. We also compare the advantages of the two types of surveys.

  20. Simulation of Synoptic Scale Circulation Features over Southern Africa Using GCMS

    International Nuclear Information System (INIS)

    Browne, Nana Ama Kum; Abiodun, Babatunde Joseph; Tadross, Mark; Hewitson, Bruce

    2009-11-01

    Two global models (HadAM3, the Hadley Centre Atmospheric Model version 3, and CAM3, the Community Atmosphere Model version 3) have been studied regarding their capability to reproduce small-scale features over southern Africa, with the NCEP reanalysis as reference. In this study, geopotential heights at the 500 hPa and 850 hPa pressure levels are used to investigate the variability of small-scale circulation features over southern Africa. The investigation took into consideration the magnitude of the models' standard deviations. Most of the results were linked with rainfall and temperature over the region. It was found that the standardized anomalies of geopotential height at the 500 hPa pressure level are in phase with those of rainfall. In contrast, the standardized anomalies of 850 hPa geopotential height are out of phase with the standardized anomalies of rainfall and temperature. In addition, the models capture the variation in the mean cut-off lows, the number of days with deep tropical lows, and the number of days with Tropical Temperate Troughs (TTTs) quite well. However, the models could not capture the number of days with temperate lows very well. Generally, the models are able to reproduce the synoptic-scale circulation features which are crucial for reliable seasonal forecasts over southern Africa. (author)

  1. A Method for Model Checking Feature Interactions

    DEFF Research Database (Denmark)

    Pedersen, Thomas; Le Guilly, Thibaut; Ravn, Anders Peter

    2015-01-01

    This paper presents a method to check for feature interactions in a system assembled from independently developed concurrent processes, as found in many reactive systems. The method combines and refines existing definitions and adds a set of activities. The activities describe how to populate the definitions with models to ensure that all interactions are captured. The method is illustrated on a home automation example with model checking as the analysis tool. In particular, the modelling formalism is timed automata and the analysis uses UPPAAL to find interactions.

  2. International Symposia on Scale Modeling

    CERN Document Server

    Ito, Akihiko; Nakamura, Yuji; Kuwana, Kazunori

    2015-01-01

    This volume thoroughly covers scale modeling and serves as the definitive source of information on scale modeling as a powerful simplifying and clarifying tool used by scientists and engineers across many disciplines. The book elucidates techniques used when it would be too expensive, or too difficult, to test a system of interest in the field. Topics addressed in the current edition include scale modeling to study weather systems, diffusion of pollution in air or water, chemical process in 3-D turbulent flow, multiphase combustion, flame propagation, biological systems, behavior of materials at nano- and micro-scales, and many more. This is an ideal book for students, both graduate and undergraduate, as well as engineers and scientists interested in the latest developments in scale modeling. This book also: Enables readers to evaluate essential and salient aspects of profoundly complex systems, mechanisms, and phenomena at scale Offers engineers and designers a new point of view, liberating creative and inno...

  3. Enhanced HMAX model with feedforward feature learning for multiclass categorization

    Directory of Open Access Journals (Sweden)

    Yinlin eLi

    2015-10-01

    Full Text Available In recent years, the interdisciplinary research between neuroscience and computer vision has promoted the development in both fields. Many biologically inspired visual models are proposed, and among them, the Hierarchical Max-pooling model (HMAX) is a feedforward model mimicking the structures and functions of V1 to posterior inferotemporal (PIT) layer of the primate visual cortex, which could generate a series of position- and scale-invariant features. However, it could be improved with attention modulation and memory processing, which are two important properties of the primate visual cortex. Thus, in this paper, based on recent biological research on the primate visual cortex, we still mimic the first 100-150 milliseconds of visual cognition to enhance the HMAX model, which mainly focuses on the unsupervised feedforward feature learning process. The main modifications are as follows: (1) To mimic the attention modulation mechanism of V1 layer, a bottom-up saliency map is computed in the S1 layer of the HMAX model, which can support the initial feature extraction for memory processing; (2) To mimic the learning, clustering and short-term memory to long-term memory conversion abilities of V2 and IT, an unsupervised iterative clustering method is used to learn clusters with multiscale middle level patches, which are taken as long-term memory; (3) Inspired by the multiple feature encoding mode of the primate visual cortex, information including color, orientation, and spatial position are encoded in different layers of the HMAX model progressively. By adding a softmax layer at the top of the model, multiclass categorization experiments can be conducted, and the results on Caltech101 show that the enhanced model with a smaller memory size exhibits higher accuracy than the original HMAX model, and could also achieve better accuracy than other unsupervised feature learning methods in multiclass categorization task.

  4. Enhanced HMAX model with feedforward feature learning for multiclass categorization.

    Science.gov (United States)

    Li, Yinlin; Wu, Wei; Zhang, Bo; Li, Fengfu

    2015-01-01

    In recent years, the interdisciplinary research between neuroscience and computer vision has promoted the development in both fields. Many biologically inspired visual models are proposed, and among them, the Hierarchical Max-pooling model (HMAX) is a feedforward model mimicking the structures and functions of V1 to posterior inferotemporal (PIT) layer of the primate visual cortex, which could generate a series of position- and scale- invariant features. However, it could be improved with attention modulation and memory processing, which are two important properties of the primate visual cortex. Thus, in this paper, based on recent biological research on the primate visual cortex, we still mimic the first 100-150 ms of visual cognition to enhance the HMAX model, which mainly focuses on the unsupervised feedforward feature learning process. The main modifications are as follows: (1) To mimic the attention modulation mechanism of V1 layer, a bottom-up saliency map is computed in the S1 layer of the HMAX model, which can support the initial feature extraction for memory processing; (2) To mimic the learning, clustering and short-term memory to long-term memory conversion abilities of V2 and IT, an unsupervised iterative clustering method is used to learn clusters with multiscale middle level patches, which are taken as long-term memory; (3) Inspired by the multiple feature encoding mode of the primate visual cortex, information including color, orientation, and spatial position are encoded in different layers of the HMAX model progressively. By adding a softmax layer at the top of the model, multiclass categorization experiments can be conducted, and the results on Caltech101 show that the enhanced model with a smaller memory size exhibits higher accuracy than the original HMAX model, and could also achieve better accuracy than other unsupervised feature learning methods in multiclass categorization task.
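
    The position and scale tolerance in HMAX comes from max pooling over local neighborhoods and over adjacent scales; a compact numpy sketch of that C1-style operation follows (pooling sizes and names are illustrative, and the S1 maps here are random stand-ins for Gabor filter responses).

        import numpy as np
        from scipy.ndimage import maximum_filter

        def c1_pool(s1_maps, pool=8):
            # Max over a local spatial neighborhood, subsample, then max over
            # pairs of adjacent scales: position- and scale-tolerant responses.
            pooled = [maximum_filter(m, size=pool)[::pool // 2, ::pool // 2]
                      for m in s1_maps]
            return [np.maximum(a, b) for a, b in zip(pooled[:-1], pooled[1:])]

        s1_maps = [np.random.rand(64, 64) for _ in range(4)]  # one orientation
        c1 = c1_pool(s1_maps)   # 3 maps of 16x16 each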

  5. Features of the method of large-scale paleolandscape reconstructions

    Science.gov (United States)

    Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina

    2017-04-01

    The method of paleolandscape reconstruction was tested in a key area of the basin of the Central Dubna, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created which shows paleoreconstructions of the original (indigenous) living environment of the initial settlers during the main periods of the Holocene, and the features of human interaction with landscapes at the early stages of economic development of the territory (in the early and middle Holocene). The sequence of these works is as follows. 1. Comprehensive analysis of topographic maps of different scales and of aerial and satellite images, stock materials of geological and hydrological surveys and prospecting of peat deposits, archaeological evidence on ancient settlements, palynological and osteological analyses, and complex landscape and archaeological studies. 2. Mapping of the factual material and analysis of the spatial distribution of archaeological sites. 3. Large-scale field landscape mapping (in sample areas) and compilation of maps of the modern landscape structure. On this basis, the edaphic properties of the main types of natural boundaries were analyzed and their resource base was determined. 4. Reconstruction of the lake-river system during the main periods of the Holocene. The boundaries of the restored paleolakes were determined from the thickness and spatial extent of decay ooze deposits. 5. On the basis of the landscape-edaphic method, paleolandscape reconstructions for the main periods of the Holocene were performed. In reconstructing the original, indigenous flora we relied on data from palynological studies conducted in the study area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development, and the formation of the first anthropogenically transformed landscape complexes. The reconstruction of the dynamics of the

  6. Large-scale lithography for sub-500nm features

    International Nuclear Information System (INIS)

    Pelzer, R L; Steininger, T; Belier, Benoit; Julie, Gwenaelle

    2006-01-01

    The interest in micro- and nanotechnologies has grown rapidly in recent years. The applications are versatile, and the various techniques have found their way into several research domains such as optics, electronics, magnetism, fluidics, etc. In all of these fields, the integration of more and more functions on steadily decreasing device dimensions leads to an increase in structural density and a decrease in feature size. Expensive and slow processes utilizing projection steppers or e-beam direct-write equipment are used to fabricate nm-scale features today. A high-throughput and cost-effective method adapted to a standard mask aligner will be demonstrated, making features below 300 nm available at wafer level. We demonstrate results for 4 different resists exposed on a DUV proximity aligner and plasma etched for optical and biological applications in the sub-300 nm range.

  7. Large-scale lithography for sub-500nm features

    Energy Technology Data Exchange (ETDEWEB)

    Pelzer, R L [Technology group, EV Group, DI Erich Thallner Str. 1, A-4780 Schaerding (Austria); Steininger, T [Technology group, EV Group, DI Erich Thallner Str. 1, A-4780 Schaerding (Austria); Belier, Benoit [CNRS, Institut d' Electronique Fondamentale, Universite Paris-Sud Bat 220, F- 91405 Orsay Cedex (France); Julie, Gwenaelle [CNRS, Institut d' Electronique Fondamentale, Universite Paris-Sud Bat 220, F- 91405 Orsay Cedex (France)

    2006-04-01

    The interest in micro- and nanotechnologies has grown rapidly in recent years. The applications are versatile, and the various techniques have found their way into several research domains such as optics, electronics, magnetism, fluidics, etc. In all of these fields, the integration of more and more functions on steadily decreasing device dimensions leads to an increase in structural density and a decrease in feature size. Expensive and slow processes utilizing projection steppers or e-beam direct-write equipment are used to fabricate nm-scale features today. A high-throughput and cost-effective method adapted to a standard mask aligner will be demonstrated, making features below 300 nm available at wafer level. We demonstrate results for 4 different resists exposed on a DUV proximity aligner and plasma etched for optical and biological applications in the sub-300 nm range.

  8. Microarray-based large scale detection of single feature ...

    Indian Academy of Sciences (India)

    2015-12-08

    Dec 8, 2015 ... ...mental stages was used to identify single feature polymorphisms (SFPs) ... on a high-density oligonucleotide expression array ... The sign (+/−) with SFPs indicates the direction of polymorphism ...

  9. A scale-entropy diffusion equation to describe the multi-scale features of turbulent flames near a wall

    Science.gov (United States)

    Queiros-Conde, D.; Foucher, F.; Mounaïm-Rousselle, C.; Kassem, H.; Feidt, M.

    2008-12-01

    Multi-scale features of turbulent flames near a wall display two kinds of scale-dependent fractal features. In scale-space, a unique fractal dimension cannot be defined, and the fractal dimension of the front is scale-dependent. Moreover, when the front approaches the wall, this dependency changes: the fractal dimension also depends on the wall-distance. Our aim here is to propose a general geometrical framework that makes it possible to integrate these two cases, in order to describe the multi-scale structure of turbulent flames interacting with a wall. Based on the scale-entropy quantity, which is simply linked to the roughness of the front, we introduce a general scale-entropy diffusion equation. We define the notion of “scale-evolutivity”, which characterises the deviation of a multi-scale system from pure fractal behaviour. The specific case of a constant scale-evolutivity over the scale-range is studied. In this case, called “parabolic scaling”, the fractal dimension is a linear function of the logarithm of scale. A constant scale-evolutivity in wall-distance space implies that the fractal dimension depends linearly on the logarithm of the wall-distance. We then verified experimentally that parabolic scaling represents a good approximation of the real multi-scale features of turbulent flames near a wall.

  10. Innovations in individual feature history management - The significance of feature-based temporal model

    Science.gov (United States)

    Choi, J.; Seong, J.C.; Kim, B.; Usery, E.L.

    2008-01-01

    A feature relies on three dimensions (space, theme, and time) for its representation. Even though spatiotemporal models have been proposed, they have principally focused on the spatial changes of a feature. In this paper, a feature-based temporal model is proposed to represent the changes of both space and theme independently. The proposed model modifies the ISO's temporal schema and adds a new explicit temporal relationship structure that stores the temporal topological relationships along with the ISO's temporal primitives of a feature, in order to keep track of feature history. The explicit temporal relationships can enhance query performance on feature history by removing topological comparisons during query processing. Further, a prototype system has been developed to test the proposed feature-based temporal model by querying land parcel history in Athens, Georgia. The results of temporal queries on individual feature history show the efficiency of the explicit temporal relationship structure. © Springer Science+Business Media, LLC 2007.

  11. Scale modelling in LMFBR safety

    International Nuclear Information System (INIS)

    Cagliostro, D.J.; Florence, A.L.; Abrahamson, G.R.

    1979-01-01

    This paper reviews scale modelling techniques used in studying the structural response of LMFBR vessels to HCDA loads. The geometric, material, and dynamic similarity parameters are presented and identified using the methods of dimensional analysis. Complete similarity of the structural response requires that each similarity parameter be the same in the model as in the prototype. The paper then focuses on the methods, limitations, and problems of duplicating these parameters in scale models and mentions an experimental technique for verifying the scaling. Geometric similarity requires that all linear dimensions of the prototype be reduced in proportion to the ratio of a characteristic dimension of the model to that of the prototype. The overall size of the model depends on the structural detail required, the size of instrumentation, and the costs of machining and assembling the model. Material similarity requires that the ratios of the density, bulk modulus, and constitutive relations for the structure and fluid be the same in the model as in the prototype. A practical choice of material for the model is one with the same density and stress-strain relationship as the prototype material at operating temperature. Ni-200 and water are good simulant materials for the 304 SS vessel and the liquid sodium coolant, respectively. Scaling of the strain-rate sensitivity and fracture toughness of materials is very difficult, but may not be required if these effects do not influence the structural response of the reactor components. Dynamic similarity requires that the characteristic pressure of a simulant source equal that of the prototype HCDA for geometrically similar volume changes. The energy source is calibrated in the geometry and environment in which it will be used, to assure that heat transfer between high-temperature loading sources and the coolant simulant, and non-equilibrium effects in two-phase sources, are accounted for. For the geometry and flow conditions of interest, the

  12. Scaling Features of Multimode Motions in Coupled Chaotic Oscillators

    DEFF Research Database (Denmark)

    Pavlov, A.N.; Sosnovtseva, Olga; Mosekilde, Erik

    2003-01-01

    Two different methods (the WTMM and DFA approaches) are applied to investigate the scaling properties of the return-time sequences generated by a system of two coupled chaotic oscillators. Transitions from two-mode asynchronous dynamics (torus or torus-chaos) to different states of chaotic phase ...

  13. A Novel DBN Feature Fusion Model for Cross-Corpus Speech Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Zou Cairong

    2016-01-01

    Full Text Available Feature fusion from separate sources is a current technical difficulty in cross-corpus speech emotion recognition. The purpose of this paper is, based on Deep Belief Nets (DBN) in deep learning, to use the emotional information hidden in the speech spectrum diagram (spectrogram) as image features and then implement feature fusion with traditional emotion features. First, based on spectrogram analysis with the STB/Itti model, new spectrogram features are extracted from the color, the brightness, and the orientation, respectively; then two alternative DBN models are used to fuse the traditional and the spectrogram features, which increases the size of the feature subset and its ability to characterize emotion. In experiments on the ABC database and Chinese corpora, the new feature subset, compared with traditional speech emotion features, distinctly improves the cross-corpus recognition result by 8.8%. The proposed method provides a new idea for feature fusion in emotion recognition.

  14. Global scale groundwater flow model

    Science.gov (United States)

    Sutanudjaja, Edwin; de Graaf, Inge; van Beek, Ludovicus; Bierkens, Marc

    2013-04-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater sustains water flows in streams, rivers, lakes and wetlands, and thus supports ecosystem habitat and biodiversity, while its large natural storage provides a buffer against water shortages. Yet, the current generation of global-scale hydrological models does not include a groundwater flow component, which is a crucial part of the hydrological cycle and allows the simulation of groundwater head dynamics. In this study we present a steady-state MODFLOW (McDonald and Harbaugh, 1988) groundwater model on the global scale at 5 arc-minute resolution. The aquifer schematization and properties of this groundwater model were developed from available global lithological models (e.g. Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, in press). We force the groundwater model with the output of the large-scale hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the long-term net groundwater recharge and average surface water levels derived from routed channel discharge. We validated the calculated groundwater heads and depths against available head observations from different regions, including North and South America and Western Europe. Our results show that it is feasible to build a relatively simple global-scale groundwater model using existing information and to estimate water table depths within acceptable accuracy in many parts of the world.

  15. A modular CUDA-based framework for scale-space feature detection in video streams

    International Nuclear Information System (INIS)

    Kinsner, M; Capson, D; Spence, A

    2010-01-01

    Multi-scale image processing techniques enable extraction of features where the size of a feature is either unknown or changing, but the requirement to process image data at multiple scale levels imposes a substantial computational load. This paper describes the architecture and emerging results from the implementation of a GPGPU-accelerated scale-space feature detection framework for video processing. A discrete scale-space representation is generated for image frames within a video stream, and multi-scale feature detection metrics are applied to detect ridges and Gaussian blobs at video frame rates. A modular structure is adopted, in which common feature extraction tasks such as non-maximum suppression and local extrema search may be reused across a variety of feature detectors. Extraction of ridge and blob features is achieved at faster than 15 frames per second on video sequences from a machine vision system, utilizing an NVIDIA GTX 480 graphics card. By design, the framework is easily extended to additional feature classes through the inclusion of feature metrics to be applied to the scale-space representation, and using common post-processing modules to reduce the required CPU workload. The framework is scalable across multiple and more capable GPUs, and enables previously intractable image processing at video frame rates using commodity computational hardware.
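
    A serial CPU analogue clarifies what the GPU pipeline computes per frame (a sketch only: scale-normalized Laplacian-of-Gaussian responses followed by a local-extrema search; the parameters are illustrative and the CUDA parallelization that is the paper's contribution is omitted).

        import numpy as np
        from scipy.ndimage import gaussian_laplace, maximum_filter

        def detect_blobs(img, sigmas=(2, 4, 8, 16), thresh=0.05):
            # Scale-normalized LoG responses stacked into a scale-space volume,
            # then a 3D local-maxima search over (scale, y, x).
            stack = np.stack([(s ** 2) * -gaussian_laplace(img, s) for s in sigmas])
            local_max = maximum_filter(stack, size=3) == stack
            z, y, x = np.nonzero(local_max & (stack > thresh))
            return [(yy, xx, sigmas[zz]) for zz, yy, xx in zip(z, y, x)]

        img = np.zeros((64, 64)); img[28:36, 28:36] = 1.0   # one bright blob
        print(detect_blobs(img))    # should report maxima near (32, 32)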

  16. Molecular scale modeling of polymer imprint nanolithography.

    Science.gov (United States)

    Chandross, Michael; Grest, Gary S

    2012-01-10

    We present the results of large-scale molecular dynamics simulations of two different nanolithographic processes, step-flash imprint lithography (SFIL), and hot embossing. We insert rigid stamps into an entangled bead-spring polymer melt above the glass transition temperature. After equilibration, the polymer is then hardened in one of two ways, depending on the specific process to be modeled. For SFIL, we cross-link the polymer chains by introducing bonds between neighboring beads. To model hot embossing, we instead cool the melt to below the glass transition temperature. We then study the ability of these methods to retain features by removing the stamps, both with a zero-stress removal process in which stamp atoms are instantaneously deleted from the system as well as a more physical process in which the stamp is pulled from the hardened polymer at fixed velocity. We find that it is necessary to coat the stamp with an antifriction coating to achieve clean removal of the stamp. We further find that a high density of cross-links is necessary for good feature retention in the SFIL process. The hot embossing process results in good feature retention at all length scales studied as long as coated, low surface energy stamps are used.

  17. Fishermen Follow Fine-Scale Physical Ocean Features for Finance

    Directory of Open Access Journals (Sweden)

    James R. Watson

    2018-02-01

    Full Text Available The seascapes on which many millions of people make their living and secure food have complex and dynamic spatial features—the figurative hills and valleys—that influence where and how people work at sea. Here, we quantify the physical mosaic of the surface ocean by identifying Lagrangian Coherent Structures for a whole seascape—the U.S. California Current Large Marine Ecosystem—and assess their impact on the spatial distribution of fishing. We observe that there is a mixed response: some fisheries track these physical features, and others avoid them. These spatial behaviors map to economic impacts, in particular we find that tuna fishermen can expect to make three times more revenue per trip if fishing occurs on strong Lagrangian Coherent Structures. However, we find no relationship for salmon and pink shrimp fishing trips. These results highlight a connection between the biophysical state of the oceans, the spatial patterns of human activity, and ultimately the economic welfare of coastal communities.

  18. Holographic models with anisotropic scaling

    Science.gov (United States)

    Brynjolfsson, E. J.; Danielsson, U. H.; Thorlacius, L.; Zingg, T.

    2013-12-01

    We consider gravity duals to d+1 dimensional quantum critical points with anisotropic scaling. The primary motivation comes from strongly correlated electron systems in condensed matter theory but the main focus of the present paper is on the gravity models in their own right. Physics at finite temperature and fixed charge density is described in terms of charged black branes. Some exact solutions are known and can be used to obtain a maximally extended spacetime geometry, which has a null curvature singularity inside a single non-degenerate horizon, but generic black brane solutions in the model can only be obtained numerically. Charged matter gives rise to black branes with hair that are dual to the superconducting phase of a holographic superconductor. Our numerical results indicate that holographic superconductors with anisotropic scaling have vanishing zero temperature entropy when the back reaction of the hair on the brane geometry is taken into account.

  19. Visual Thing Recognition with Binary Scale-Invariant Feature Transform and Support Vector Machine Classifiers Using Color Information

    OpenAIRE

    Wei-Jong Yang; Wei-Hau Du; Pau-Choo Chang; Jar-Ferr Yang; Pi-Hsia Hung

    2017-01-01

    The demand for smart visual thing recognition on various devices has increased rapidly in recent years for daily smart production, living, and learning systems. This paper proposes a visual thing recognition system which combines the binary scale-invariant feature transform (SIFT), a bag-of-words model (BoW), and support vector machine (SVM) classifiers using color information. Since traditional SIFT features and SVM classifiers use only gray information, color information is still an importan...

  20. The feature-weighted receptive field: an interpretable encoding model for complex feature spaces.

    Science.gov (United States)

    St-Yves, Ghislain; Naselaris, Thomas

    2017-06-20

    We introduce the feature-weighted receptive field (fwRF), an encoding model designed to balance expressiveness, interpretability and scalability. The fwRF is organized around the notion of a feature map: a transformation of visual stimuli into visual features that preserves the topology of visual space (but not necessarily the native resolution of the stimulus). The key assumption of the fwRF model is that activity in each voxel encodes variation in a spatially localized region across multiple feature maps. This region is fixed for all feature maps; however, the contribution of each feature map to voxel activity is weighted. Thus, the model has two separable sets of parameters: "where" parameters that characterize the location and extent of pooling over visual features, and "what" parameters that characterize tuning to visual features. The "where" parameters are analogous to classical receptive fields, while "what" parameters are analogous to classical tuning functions. By treating these as separable parameters, the fwRF model complexity is independent of the resolution of the underlying feature maps. This makes it possible to estimate models with thousands of high-resolution feature maps from relatively small amounts of data. Once a fwRF model has been estimated from data, spatial pooling and feature tuning can be read off directly with no (or very little) additional post-processing or in-silico experimentation. We describe an optimization algorithm for estimating fwRF models from data acquired during standard visual neuroimaging experiments. We then demonstrate the model's application to two distinct sets of features: Gabor wavelets and features supplied by a deep convolutional neural network. We show that when Gabor feature maps are used, the fwRF model recovers receptive fields and spatial frequency tuning functions consistent with known organizational principles of the visual cortex. We also show that a fwRF model can be used to regress entire deep
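
    The separable "where"/"what" structure can be written down directly; below is a numpy sketch of the forward prediction (variable names and the Gaussian form of the pooling field are assumptions consistent with the abstract, not the authors' code).

        import numpy as np

        def gaussian_pool_field(size, cx, cy, sigma):
            # "Where" parameters: one spatial pooling field shared by all maps.
            y, x = np.mgrid[0:size, 0:size]
            g = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
            return g / g.sum()

        def fwrf_predict(feature_maps, cx, cy, sigma, w):
            # Pool each map over the shared field, then apply "what" weights w.
            g = gaussian_pool_field(feature_maps.shape[-1], cx, cy, sigma)
            pooled = (feature_maps * g).sum(axis=(1, 2))  # one value per map
            return w @ pooled

        maps = np.random.rand(100, 32, 32)   # 100 feature maps of one stimulus
        w = np.random.randn(100)             # per-map feature tuning
        r = fwrf_predict(maps, cx=16, cy=16, sigma=4.0, w=w)  # predicted activity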

  1. Individual discriminative face recognition models based on subsets of features

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2007-01-01

    The accuracy of data classification methods depends considerably on the data representation and on the selected features. In this work, the elastic net model selection is used to identify meaningful and important features in face recognition. Modelling the characteristics which distinguish one...... person from another using only subsets of features will both decrease the computational cost and increase the generalization capacity of the face recognition algorithm. Moreover, identifying which are the features that better discriminate between persons will also provide a deeper understanding...... of the face recognition problem. The elastic net model is able to select a subset of features with low computational effort compared to other state-of-the-art feature selection methods. Furthermore, the fact that the number of features usually is larger than the number of images in the data base makes feature...
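
    In current scikit-learn terms, the core idea reduces to fitting an L1/L2-penalized (elastic net) model and keeping the features with nonzero coefficients; this is a sketch with assumed toy data, not the authors' original formulation or implementation.

        import numpy as np
        from sklearn.linear_model import ElasticNet

        # Toy stand-in for face data: rows are images, columns are features,
        # with far more features than samples, as in the abstract.
        X = np.random.randn(60, 500)
        y = (np.arange(60) % 2).astype(float)   # identity to discriminate

        model = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(X, y)
        selected = np.flatnonzero(model.coef_)  # sparse per-person feature subset
        print(f"{selected.size} of {X.shape[1]} features retained")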

  2. A scale space approach for unsupervised feature selection in mass spectra classification for ovarian cancer detection.

    Science.gov (United States)

    Ceccarelli, Michele; d'Acierno, Antonio; Facchiano, Angelo

    2009-10-15

    Mass spectrometry spectra, widely used in proteomics studies as a screening tool for protein profiling and for detecting discriminatory signals, are high-dimensional data. A large number of local maxima (a.k.a. peaks) have to be analyzed as part of computational pipelines aimed at the realization of efficient predictive and screening protocols. With data of this dimensionality and sample size, the risk of over-fitting and selection bias is pervasive. Therefore the development of bio-informatics methods based on unsupervised feature extraction can lead to general tools which can be applied to several fields of predictive proteomics. We propose a method for feature selection and extraction grounded in the theory of multi-scale spaces, for high-resolution spectra derived from the analysis of serum; we then use support vector machines for classification. In particular we use a database containing 216 sample spectra divided into 115 cancer and 91 control samples. The overall accuracy averaged over a large cross-validation study is 98.18%. The area under the ROC curve of the best selected model is 0.9962. We improve on previously known results on the problem with the same data, with the advantage that the proposed method has an unsupervised feature selection phase. All the developed code, as MATLAB scripts, can be downloaded from http://medeaserver.isa.cnr.it/dacierno/spectracode.htm
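
    The unsupervised step can be pictured as keeping only the peaks that survive Gaussian smoothing at coarser scales; this 1-D sketch conveys the scale-space idea, but the persistence criterion and parameters here are illustrative, not the paper's exact formulation.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d
        from scipy.signal import argrelmax

        def persistent_peaks(spectrum, sigmas=(1, 2, 4, 8), tol=2):
            # Keep raw-spectrum maxima that remain maxima (within +/- tol bins)
            # after every level of smoothing: these are scale-stable features.
            peaks = set(argrelmax(spectrum)[0])
            for s in sigmas:
                smooth = argrelmax(gaussian_filter1d(spectrum, s))[0]
                peaks = {p for p in peaks if np.any(np.abs(smooth - p) <= tol)}
            return sorted(peaks)

        spec = np.random.rand(2000)        # stand-in for a serum MS spectrum
        features = persistent_peaks(spec)  # candidate scale-stable peak positions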

  3. Noncircular features in Saturn's rings IV: Absolute radius scale and Saturn's pole direction

    Science.gov (United States)

    French, Richard G.; McGhee-French, Colleen A.; Lonergan, Katherine; Sepersky, Talia; Jacobson, Robert A.; Nicholson, Philip D.; Hedman, Mathew M.; Marouf, Essam A.; Colwell, Joshua E.

    2017-07-01

    We present a comprehensive solution for the geometry of Saturn's ring system, based on orbital fits to an extensive set of occultation observations of 122 individual ring edges and gaps. We begin with a restricted set of very high quality Cassini VIMS, UVIS, and RSS measurements for quasi-circular features in the C and B rings and the Cassini Division, and then successively add suitably weighted additional Cassini and historical occultation measurements (from Voyager, HST and the widely-observed 28 Sgr occultation of 3 Jul 1989) for additional non-circular features, to derive an absolute radius scale applicable across the entire classical ring system. As part of our adopted solution, we determine first-order corrections to the spacecraft trajectories used to determine the geometry of individual occultation chords. We adopt a simple linear model for Saturn's precession, and our favored solution yields a precession rate on the sky ṅ_P = 0.207 ± 0.006″ yr⁻¹, equivalent to an angular rate of polar motion Ω_P = 0.451 ± 0.014″ yr⁻¹. The 3% formal uncertainty in the fitted precession rate is approaching the point where it can provide a useful constraint on models of Saturn's interior, although realistic errors are likely to be larger, given the linear approximation of the precession model and possible unmodeled systematic errors in the spacecraft ephemerides. Our results are largely consistent with independent estimates of the precession rate based on historical RPX times (Nicholson et al., 1999, AAS/Division for Planetary Sciences Meeting Abstracts #31, 44.01) and with theoretical expectations that account for Titan's 700-yr precession period (Vienne and Duriez 1992, Astronomy and Astrophysics 257, 331-352). The fitted precession rate based on Cassini data only is somewhat lower, which may be an indication of unmodeled shorter-term contributions to Saturn's polar motion from other satellites, or perhaps the result of inconsistencies in the assumed

  4. On the Use of Memory Models in Audio Features

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2011-01-01

    Audio feature estimation is potentially improved by including higher-level models. One such model is the Short Term Memory (STM) model. A new paradigm of audio feature estimation is obtained by adding the influence of notes in the STM. These notes are identified when the perceptual spectral flux...

  5. A multi scale model for small scale plasticity

    International Nuclear Information System (INIS)

    Zbib, Hussein M.

    2002-01-01

    Full text: A framework for investigating size-dependent small-scale plasticity phenomena and related material instabilities at various length scales, ranging from the nano-microscale to the mesoscale, is presented. The model is based on fundamental physical laws that govern dislocation motion and their interaction with various defects and interfaces. In particular, a multi-scale model is developed merging two scales: the nano-microscale, where plasticity is determined by explicit three-dimensional dislocation dynamics analysis providing the material length scale, and the continuum scale, where energy transport is based on basic continuum mechanics laws. The result is a hybrid simulation model coupling discrete dislocation dynamics with finite element analyses. With this hybrid approach, one can address complex size-dependent problems, including dislocation boundaries, dislocations in heterogeneous structures, dislocation interaction with interfaces and associated shape changes and lattice rotations, as well as deformation in nano-structured materials, localized deformation and shear band

  6. A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.

    Science.gov (United States)

    Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi

    2015-12-01

    Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the bricks used to build deep networks, and sparse feature learning models are popular models that can learn useful representations. But most of those models need a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learned by simultaneously optimizing two objectives, reconstruction error and the sparsity of the hidden units, to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and that the proposed multiobjective model can learn useful sparse features.
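
    For reference, the two objectives traded off by the evolutionary search are the usual autoencoder quantities computed below (a PyTorch sketch; layer sizes and names are illustrative, and the paper optimizes the two values with a multiobjective evolutionary algorithm rather than a fixed weighted sum).

        import torch
        import torch.nn as nn

        class SparseAE(nn.Module):
            def __init__(self, d_in=784, d_hid=128):
                super().__init__()
                self.enc = nn.Sequential(nn.Linear(d_in, d_hid), nn.Sigmoid())
                self.dec = nn.Linear(d_hid, d_in)

            def objectives(self, x):
                # Objective 1: reconstruction error; objective 2: sparsity of
                # the hidden code. A multiobjective search trades these off.
                h = self.enc(x)
                recon_err = ((self.dec(h) - x) ** 2).mean()
                sparsity = h.abs().mean()    # lower means sparser code
                return recon_err, sparsity

        f1, f2 = SparseAE().objectives(torch.rand(32, 784))  # one point in objective space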

  7. Extraction of multi-scale landslide morphological features based on local Gi* using airborne LiDAR-derived DEM

    Science.gov (United States)

    Shi, Wenzhong; Deng, Susu; Xu, Wenbing

    2018-02-01

    For automatic landslide detection, landslide morphological features should be quantitatively expressed and extracted. High-resolution Digital Elevation Models (DEMs) derived from airborne Light Detection and Ranging (LiDAR) data allow fine-scale morphological features to be extracted, but noise in DEMs influences morphological feature extraction, and the multi-scale nature of landslide features should be considered. This paper proposes a method to extract landslide morphological features characterized by homogeneous spatial patterns. Both profile and tangential curvature are utilized to quantify land surface morphology, and a local Gi* statistic is calculated for each cell to identify significant patterns of clustering of similar morphometric values. The method was tested on both synthetic surfaces simulating natural terrain and airborne LiDAR data acquired over an area dominated by shallow debris slides and flows. The test results on the synthetic data indicate that the concave and convex morphologies of the simulated terrain features at different scales and degrees of distinctness could be recognized using the proposed method, even when random noise was added to the synthetic data. In the test area, cells with large local Gi* values were extracted at a specified significance level from the profile and tangential curvature images generated from the LiDAR-derived 1-m DEM. The morphologies of landslide main scarps, source areas and trails were clearly indicated, and the morphological features were represented by clusters of extracted cells. A comparison with a morphological feature extraction method based on curvature thresholds proved the proposed method's robustness to DEM noise. When verified against a landslide inventory, the morphological features of almost all recent and historical (> 10 years) landslides were extracted. This finding indicates that the proposed method can facilitate landslide detection, although the cell clusters extracted from curvature images should
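
    The local Gi* (Getis-Ord) statistic on a curvature raster can be sketched in a few lines (the binary square window, its size, and the significance threshold below are illustrative choices, not the paper's exact weighting scheme):

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_gi_star(raster, win=5):
            # Standard Gi* with binary weights over a win x win neighborhood:
            # large |Gi*| marks cells whose neighborhood sum is unusually
            # high or low relative to the global mean and variance.
            n = raster.size
            k = win * win                                 # neighbors per cell
            wsum = uniform_filter(raster, size=win) * k   # local sums
            num = wsum - raster.mean() * k
            den = raster.std() * np.sqrt((n * k - k ** 2) / (n - 1))
            return num / den

        curvature = np.random.randn(200, 200)   # stand-in for a curvature grid
        gi = local_gi_star(curvature)
        clusters = np.abs(gi) > 1.96            # ~5% significance (illustrative)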

  8. Feature Analysis for Modeling Game Content Quality

    DEFF Research Database (Denmark)

    Shaker, Noor; Yannakakis, Georgios N.; Togelius, Julian

    2011-01-01

    …preferences, and by defining the smallest game session size for which the model can still predict reported emotion with acceptable accuracy. Neuroevolutionary preference learning is used to approximate the function from game content to reported emotional preferences. The experiments are based on a modified…

  9. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    Science.gov (United States)

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction in order to bring storage and CPU costs down to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise; throwing them away using feature selection is better than compressing them together with useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection has achieved both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
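    A toy sketch of the selection-plus-binarization idea, assuming absolute correlation with class labels as the per-dimension importance score (the paper's own sorting criterion differs) and keeping 1024 of 8192 dimensions:

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(1000, 8192))     # FV/VLAD vectors, one per image
        y = rng.integers(0, 20, size=1000)    # class labels (supervised case)

        # Importance score per dimension: absolute correlation with the labels.
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        scores = np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

        # Keep the top-k dimensions, then quantize each kept value to one bit.
        keep = np.argsort(scores)[::-1][:1024]
        X_compact = (X[:, keep] > 0).astype(np.uint8)   # 1024 bits per image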

  10. Scaling laws for modeling nuclear reactor systems

    International Nuclear Information System (INIS)

    Nahavandi, A.N.; Castellana, F.S.; Moradkhanian, E.N.

    1979-01-01

    Scale models are used to predict the behavior of nuclear reactor systems during normal and abnormal operation as well as under accident conditions. Three types of scaling procedures are considered: time-reducing, time-preserving volumetric, and time-preserving idealized model/prototype. The necessary relations between the model and the full-scale unit are developed for each scaling type. Based on these relationships, it is shown that scaling procedures can lead to distortion in certain areas that are discussed. It is advised that, depending on the specific unit to be scaled, a suitable procedure be chosen to minimize model-prototype distortion.

  11. Derivative-based scale invariant image feature detector with error resilience.

    Science.gov (United States)

    Mainali, Pradip; Lafruit, Gauthier; Tack, Klaas; Van Gool, Luc; Lauwereins, Rudy

    2014-05-01

    We present a novel scale-invariant image feature detection algorithm (D-SIFER) using a newly proposed scale-space optimal 10th-order Gaussian derivative (GDO-10) filter, which reaches the jointly optimal Heisenberg's uncertainty of its impulse response in scale and space simultaneously (i.e., we minimize the maximum of the two moments). The D-SIFER algorithm using this filter leads to an outstanding quality of image feature detection, with a factor of three quality improvement over state-of-the-art scale-invariant feature transform (SIFT) and speeded up robust features (SURF) methods that use the second-order Gaussian derivative filters. To reach low computational complexity, we also present a technique approximating the GDO-10 filters with a fixed-length implementation, which is independent of the scale. The final approximation error remains far below the noise margin, providing constant time, low cost, but nevertheless high-quality feature detection and registration capabilities. D-SIFER is validated on a real-life hyperspectral image registration application, precisely aligning up to hundreds of successive narrowband color images, despite their strong artifacts (blurring, low-light noise) typically occurring in such delicate optical system setups.
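    The 10th-order Gaussian derivative response is easy to sketch with separable filtering; the scale normalization by σ^10 and the magnitude combination below are assumptions, and the paper's fixed-length, scale-independent filter approximation and the extrema localization step are not reproduced:

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def gdo10_response(image, sigma):
            # 10th-order derivative along one axis, plain smoothing along the other.
            rx = gaussian_filter1d(gaussian_filter1d(image, sigma, axis=0),
                                   sigma, axis=1, order=10)
            ry = gaussian_filter1d(gaussian_filter1d(image, sigma, axis=1),
                                   sigma, axis=0, order=10)
            # Scale-normalized magnitude; feature candidates are its extrema.
            return sigma ** 10 * np.hypot(rx, ry)

        img = np.random.default_rng(0).random((128, 128))
        resp = gdo10_response(img, sigma=2.0)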

  12. An alternative to scale-space representation for extracting local features in image recognition

    DEFF Research Database (Denmark)

    Andersen, Hans Jørgen; Nguyen, Phuong Giang

    2012-01-01

    In image recognition, the common approach for extracting local features using a scale-space representation usually has three main steps: first, interest points are extracted at different scales; next, from a patch around each interest point, the rotation is calculated with corresponding orientation and compensation; and finally, a descriptor is computed for the derived patch (i.e. the feature of the patch). To avoid the memory- and computation-intensive process of constructing the scale-space, we use a method where no scale-space is required. This is done by dividing the given image into a number of triangles with sizes dependent on the content of the image, at the location of each triangle. In this paper, we demonstrate that by rotation of the interest regions at the triangles it is possible in grey-scale images to achieve a recognition precision comparable with that of MOPS. The test of the proposed method…

  13. Annotation-based feature extraction from sets of SBML models.

    Science.gov (United States)

    Alm, Rebekka; Waltemath, Dagmar; Wolfien, Markus; Wolkenhauer, Olaf; Henkel, Ron

    2015-01-01

    Model repositories such as BioModels Database provide computational models of biological systems for the scientific community. These models contain rich semantic annotations that link model entities to concepts in well-established bio-ontologies such as Gene Ontology. Consequently, thematically similar models are likely to share similar annotations. Based on this assumption, we argue that semantic annotations are a suitable tool to characterize sets of models. These characteristics improve model classification, allow additional features to be identified for model retrieval tasks, and enable the comparison of sets of models. In this paper we discuss four methods for annotation-based feature extraction from model sets. We tested all methods on sets of models in SBML format that were composed from BioModels Database. To characterize each of these sets, we analyzed and extracted concepts from three frequently used ontologies, namely Gene Ontology, ChEBI and SBO. We find that three of the four methods are suitable for determining characteristic features for arbitrary sets of models: the selected features vary depending on the underlying model set, and they are also specific to the chosen model set. We show that the identified features map onto concepts that are higher up in the hierarchy of the ontologies than the concepts used for model annotations. Our analysis also reveals that the information content of concepts in ontologies and their usage for model annotation do not correlate. Annotation-based feature extraction enables the comparison of model sets, as opposed to existing methods for model-to-keyword comparison or model-to-model comparison.

  14. Downscaling modelling system for multi-scale air quality forecasting

    Science.gov (United States)

    Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.

    2010-09-01

    …a kind of Dirichlet condition is chosen to provide the values based on interpolation from the coarse to the fine grid. When the roughness approach is changed to the obstacle-resolved one in the nested model, the interpolation procedure will increase the computational time (due to additional iterations) for meteorological/chemical fields inside the urban sub-layer. In such situations, as a possible alternative, the perturbation approach can be applied. Here, the effects of the main meteorological variables and chemical species are considered as a sum of two components: background (large-scale) values, described by the coarse-resolution model, and perturbation (micro-scale) features, obtained from the nested fine-resolution model.
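    A schematic of the perturbation approach in a few lines, with made-up fields and an assumed smoothing scale standing in for the model's own scale separation:

        import numpy as np
        from scipy.ndimage import zoom, gaussian_filter

        # Hypothetical fields: coarse-model output and nested fine-model output.
        coarse = np.random.rand(20, 20)   # e.g. temperature on the coarse grid
        fine = np.random.rand(80, 80)     # same variable from the nested model

        # Background component: coarse field interpolated to the fine grid.
        background = zoom(coarse, 4, order=1)

        # Perturbation component: micro-scale departures of the fine field
        # from its own large-scale (smoothed) part.
        perturbation = fine - gaussian_filter(fine, sigma=8)

        # Large scales from the driving model, small scales from the nest.
        combined = background + perturbation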

  15. Music Genre Classification using the multivariate AR feature integration model

    DEFF Research Database (Denmark)

    Ahrendt, Peter; Meng, Anders

    2005-01-01

    …informative decisions about musical genre. For the MIREX music genre contest, several authors derive long-time features based either on statistical moments and/or temporal structure in the short-time features. In our contribution, we model a segment (1.2 s) of short-time features (texture) using a multivariate autoregressive model. Other authors have applied simpler statistical models such as the mean-variance model, which has also been included in several of this year's MIREX submissions; see e.g. Tzanetakis (2005), Burred (2005), Bergstra et al. (2005), and Lidy and Rauber (2005).
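    A minimal sketch of the multivariate AR (texture) feature, assuming an ordinary least-squares fit and an illustrative model order:

        import numpy as np

        def mvar_features(X, order=3):
            # Fit x_t ~ c + A_1 x_{t-1} + ... + A_p x_{t-p} by least squares
            # and return the stacked coefficients as one long-time feature.
            T, d = X.shape
            Y = X[order:]                                    # (T-p, d) targets
            Z = np.hstack([X[order - k - 1:T - k - 1] for k in range(order)])
            Z = np.hstack([np.ones((T - order, 1)), Z])      # intercept column
            coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)     # (1 + p*d, d)
            return coef.ravel()

        # E.g. a 1.2 s texture window of 13-dimensional short-time features.
        feats = np.random.default_rng(0).random((120, 13))
        print(mvar_features(feats, order=3).shape)           # (520,)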

  16. Doubly sparse factor models for unifying feature transformation and feature selection

    International Nuclear Information System (INIS)

    Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato; Matsumoto, Narihisa; Sugase-Miyamoto, Yasuko

    2010-01-01

    A number of unsupervised learning methods for high-dimensional data are largely divided into two groups based on their procedures, i.e., (1) feature selection, which discards irrelevant dimensions of the data, and (2) feature transformation, which constructs new variables by transforming and mixing over all dimensions. We propose a method that both selects and transforms features in a common Bayesian inference procedure. Our method imposes a doubly automatic relevance determination (ARD) prior on the factor loading matrix. We propose a variational Bayesian inference for our model and demonstrate the performance of our method on both synthetic and real data.

  17. Doubly sparse factor models for unifying feature transformation and feature selection

    Energy Technology Data Exchange (ETDEWEB)

    Katahira, Kentaro; Okanoya, Kazuo; Okada, Masato [ERATO, Okanoya Emotional Information Project, Japan Science Technology Agency, Saitama (Japan); Matsumoto, Narihisa; Sugase-Miyamoto, Yasuko, E-mail: okada@k.u-tokyo.ac.j [Human Technology Research Institute, National Institute of Advanced Industrial Science and Technology, Ibaraki (Japan)

    2010-06-01

    A number of unsupervised learning methods for high-dimensional data are largely divided into two groups based on their procedures, i.e., (1) feature selection, which discards irrelevant dimensions of the data, and (2) feature transformation, which constructs new variables by transforming and mixing over all dimensions. We propose a method that both selects and transforms features in a common Bayesian inference procedure. Our method imposes a doubly automatic relevance determination (ARD) prior on the factor loading matrix. We propose a variational Bayesian inference for our model and demonstrate the performance of our method on both synthetic and real data.

  18. On Feature Extraction from Large Scale Linear LiDAR Data

    Science.gov (United States)

    Acharjee, Partha Pratim

    Airborne light detection and ranging (LiDAR) can generate a co-registered elevation and intensity map over large terrain. The co-registered 3D map and intensity information can be used efficiently for different feature extraction applications. In this dissertation, we developed two algorithms for feature extraction and the use of those features in practical applications. One of the developed algorithms can map still and flowing waterbody features, and the other can extract building features and estimate solar potential on rooftops and facades. Remote sensing capabilities, distinguishing characteristics of laser returns from water surfaces, and specific data collection procedures give LiDAR data an edge in this application domain. Furthermore, water surface mapping solutions must work on extremely large datasets, from a thousand square miles to hundreds of thousands of square miles. National and state-wide map generation/upgrading and hydro-flattening of LiDAR data for many other applications are two leading needs of water surface mapping. These call for as much automation as possible. Researchers have developed many semi-automated algorithms using multiple semi-automated tools and human interventions. This work describes a consolidated algorithm and toolbox developed for large-scale, automated water surface mapping. Geometric features such as the flatness of the water surface and large elevation changes at the water-land interface, as well as optical properties such as dropouts caused by specular reflection and bimodal intensity distributions, were some of the linear LiDAR features exploited for water surface mapping. Large-scale data handling capabilities are incorporated by automated and intelligent windowing, by resolving boundary issues, and by integrating all results into a single output. The whole algorithm is developed as an ArcGIS toolbox using Python libraries. Testing and validation are performed on large datasets to determine the effectiveness of the toolbox, and results are…

  19. Learning scale-variant and scale-invariant features for deep image classification

    NARCIS (Netherlands)

    van Noord, Nanne; Postma, Eric

    Convolutional Neural Networks (CNNs) require large image corpora to be trained on classification tasks. The variation in image resolutions, sizes of objects and patterns depicted, and image scales hampers CNN training and performance, because the task-relevant information varies over spatial…

  20. Feature-based component model for design of embedded systems

    Science.gov (United States)

    Zha, Xuan Fang; Sriram, Ram D.

    2004-11-01

    An embedded system is a hybrid of hardware and software, which combines software's flexibility with hardware's real-time performance. Embedded systems can be considered as assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., an Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype through assembling virtual components. The framework not only provides a formal precise model of the embedded system prototype but also offers the possibility of designing variations of prototypes whose members are derived by changing certain virtual components with different features. A case study example is discussed to illustrate the embedded system model.

  1. Physical model for the 2175 A interstellar extinction feature

    International Nuclear Information System (INIS)

    Hecht, J.H.

    1986-01-01

    Recent IUE observations have shown that the 2175 A interstellar extinction feature is constant in wavelength but varies in width. A model has been constructed to explain these results. It is proposed that the 2175 A feature will only be seen when there is extinction due to carbon grains which have lost their hydrogen. In particular, the feature is caused by a separate population of small (less than 50 A radius), hydrogen-free carbon grains. The variations in width would be due to differences in either their temperature, size distribution, or impurity content. All other carbon grains retain hydrogen, which causes the feature to be suppressed. If this model is correct, then it implies that the grains responsible for the unidentified IR emission features would not generally cause the 2175 A feature. 53 references

  2. Histogram-based adaptive gray level scaling for texture feature classification of colorectal polyps

    Science.gov (United States)

    Pomeroy, Marc; Lu, Hongbing; Pickhardt, Perry J.; Liang, Zhengrong

    2018-02-01

    Texture features have played an ever-increasing role in computer-aided detection (CADe) and diagnosis (CADx) methods since their inception. Texture features are often used as a method of false positive reduction for CADe packages, especially for detecting colorectal polyps and distinguishing them from falsely tagged residual stool and healthy colon wall folds. While texture features have shown great success there, the performance of texture features for CADx has lagged behind, primarily because of the more similar features among different polyp types. In this paper, we present an adaptive gray level scaling and compare it to the conventional equal spacing of gray level bins. We use a dataset taken from computed tomography colonography patients, with 392 polyp regions of interest (ROIs) identified and with a confirmed diagnosis through pathology. Using the histogram information from the entire ROI dataset, we generate the gray level bins such that each bin contains roughly the same number of voxels. Each image ROI is then scaled down to two different numbers of gray levels, using both an equal spacing of Hounsfield units for each bin and our adaptive method. We compute a set of texture features from the scaled images, including 30 gray level co-occurrence matrix (GLCM) features and 11 gray level run length matrix (GLRLM) features. Using a random forest classifier to distinguish between hyperplastic polyps and all others (adenomas and adenocarcinomas), we find that the adaptive gray level scaling can improve performance, based on the area under the receiver operating characteristic curve, by up to 4.6%.
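    The adaptive binning step reduces to equal-frequency (quantile) bin edges computed from the pooled histogram; a sketch with synthetic Hounsfield values (the level count and data are placeholders):

        import numpy as np

        def adaptive_edges(values, n_levels=32):
            # Equal-frequency (quantile) bin edges: one bin per gray level.
            return np.quantile(values, np.linspace(0.0, 1.0, n_levels + 1))

        rng = np.random.default_rng(0)
        all_hu = rng.normal(40.0, 30.0, size=100_000)   # pooled ROI voxel values
        edges = adaptive_edges(all_hu, n_levels=32)

        roi = rng.normal(45.0, 25.0, size=(16, 16, 8))  # one ROI to rescale
        adaptive = np.digitize(roi, edges[1:-1])        # adaptive levels, 0..31
        width = np.ptp(all_hu) / 32                     # conventional equal spacing
        equal = np.clip(((roi - all_hu.min()) // width).astype(int), 0, 31)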

  3. A priori study of subgrid-scale features in turbulent Rayleigh-Bénard convection

    Science.gov (United States)

    Dabbagh, F.; Trias, F. X.; Gorobets, A.; Oliva, A.

    2017-10-01

    At the crossroads between flow topology analysis and turbulence modeling, a priori studies are a reliable tool to understand the underlying physics of the subgrid-scale (SGS) motions in turbulent flows. In this paper, properties of the SGS features in the framework of a large-eddy simulation are studied for a turbulent Rayleigh-Bénard convection (RBC). To do so, data from direct numerical simulation (DNS) of a turbulent air-filled RBC in a rectangular cavity of aspect ratio unity and π spanwise open-ended distance are used at two Rayleigh numbers Ra ∈ {10^8, 10^10} [Dabbagh et al., "On the evolution of flow topology in turbulent Rayleigh-Bénard convection," Phys. Fluids 28, 115105 (2016)]. First, DNS at Ra = 10^8 is used to assess the performance of eddy-viscosity models such as QR, Wall-Adapting Local Eddy-viscosity (WALE), and the recent S3PQR models proposed by Trias et al. ["Building proper invariants for eddy-viscosity subgrid-scale models," Phys. Fluids 27, 065103 (2015)]. The outcomes imply that the eddy-viscosity modeling smoothes the coarse-grained viscous straining and retrieves fairly well the effect of the kinetic unfiltered scales in order to reproduce the coherent large scales. However, these models fail to approach the exact evolution of the SGS heat flux and are incapable of reproducing well the further dominant rotational enstrophy pertaining to the buoyant production. Afterwards, the key ingredients of eddy-viscosity, ν_t, and eddy-diffusivity, κ_t, are calculated a priori and reveal positive prevalent values to maintain a turbulent wind essentially driven by the mean buoyant force at the sidewalls. The topological analysis suggests that the effective turbulent diffusion paradigm and the hypothesis of a constant turbulent Prandtl number are only applicable in the large-scale strain-dominated areas in the bulk. It is shown that the bulk-dominated rotational structures of vortex-stretching (and its synchronous viscous dissipative structures) hold…

  4. Planimetric Features Generalization for the Production of Small-Scale Map by Using Base Maps and the Existing Algorithms

    Directory of Open Access Journals (Sweden)

    M. Modiri

    2014-10-01

    Full Text Available Cartographic maps are representations of the Earth upon a flat surface at a smaller scale than reality. Large-scale maps cover relatively small regions in great detail, and small-scale maps cover large regions such as nations, continents and the whole globe. A logical connection between the features and the map scale must be maintained when changing the scale, and it is important to recognize that even the most accurate maps sacrifice a certain amount of accuracy in scale to deliver a greater visual usefulness to the user. Cartographic generalization, or map generalization, is the method whereby information is selected and represented on a map in a way that adapts to the scale of the display medium of the map, not necessarily preserving all intricate geographical or other cartographic details. Due to the problems facing the small-scale map production process and the time and money required for surveying, generalization is used today as a practical approach. The software proposed in this paper converts various data and information into a defined data model, and can produce a generalized map from base maps using existing algorithms. Planimetric generalization algorithms and rules are described in this article. Finally, small-scale maps at the 1:100,000, 1:250,000 and 1:500,000 scales are produced automatically and shown at the end.

  5. Retinal Identification Based on an Improved Circular Gabor Filter and Scale Invariant Feature Transform

    Directory of Open Access Journals (Sweden)

    Xiaoming Xi

    2013-07-01

    Full Text Available Retinal identification based on retinal vasculature provides the most secure and accurate means of authentication among biometrics and has primarily been used in combination with access control systems at high-security facilities. Recently, there has been much interest in retinal identification. As digital retina images always suffer from deformations, the Scale Invariant Feature Transform (SIFT), which is known for its distinctiveness and invariance to scale and rotation, has been introduced to retina-based identification. However, some shortcomings, like the difficulty of feature extraction and mismatching, exist in SIFT-based identification. To solve these problems, a novel preprocessing method based on the Improved Circular Gabor Transform (ICGF) is proposed. After further processing by the iterated spatial anisotropic smoothing method, the number of uninformative SIFT keypoints is decreased dramatically. Tested on the VARIA and eight simulated retina databases combining rotation and scaling, the developed method presents promising results and shows robustness to rotations and scale changes.

  6. Spatial scale separation in regional climate modelling

    Energy Technology Data Exchange (ETDEWEB)

    Feser, F.

    2005-07-01

    In this thesis the concept of scale separation is introduced as a tool, first, for improving regional climate model simulations and, second, for explicitly detecting and describing the added value obtained by regional modelling. The basic idea behind this is that global and regional climate models have their best performance at different spatial scales. Therefore the regional model should not alter the global model's results at large scales. The concept of nudging of large scales, designed for this purpose, controls the large scales within the regional model domain and keeps them close to the global forcing model, whereby the regional scales are left unchanged. For ensemble simulations, nudging of large scales strongly reduces the divergence of the different simulations compared to the standard-approach ensemble, which occasionally shows large differences for the individual realisations. For climate hindcasts this method leads to results which are on average closer to observed states than the standard approach. The analysis of the regional climate model simulation can also be improved by separating the results into different spatial domains. This was done by developing and applying digital filters that perform the scale separation effectively without great computational effort. The separation of the results into different spatial scales simplifies model validation and process studies. The search for 'added value' can be conducted on the spatial scales the regional climate model was designed for, giving clearer results than analysing unfiltered meteorological fields. To examine the skill of the different simulations, pattern correlation coefficients were calculated between the global reanalyses, the regional climate model simulation and, as a reference, an operational regional weather analysis. The regional climate model simulation driven with large-scale constraints achieved a high increase in similarity to the operational analyses for medium-scale 2 meter…

  7. Structural conceptual models of water-conducting features at Aespoe

    International Nuclear Information System (INIS)

    Bossart, P.; Mazurek, M.; Hermansson, Jan

    1998-01-01

    Within the framework of the Fracture Classification and Characterization Project (FCC), water-conducting features (WCF) in the Aespoe tunnel system and on the surface of Aespoe Island are being characterized over a range of scales. The larger-scale hierarchies of WCF are mostly constituted of fault arrays, i.e. brittle structures that accommodated episodes of shear strain. The smaller-scale WCF (contained within blocks […] 1 m. Structural evidence indicates that the fractures within the TRUE-1 block constitute an interconnected system with a pronounced anisotropy…

  8. Modeling and simulation with operator scaling

    OpenAIRE

    Cohen, Serge; Meerschaert, Mark M.; Rosiński, Jan

    2010-01-01

    Self-similar processes are useful in modeling diverse phenomena that exhibit scaling properties. Operator scaling allows a different scale factor in each coordinate. This paper develops practical methods for modeling and simulating stochastic processes with operator scaling. A simulation method for operator stable Levy processes is developed, based on a series representation, along with a Gaussian approximation of the small jumps. Several examples are given to illustrate practical application...

  9. A keyword spotting model using perceptually significant energy features

    Science.gov (United States)

    Umakanthan, Padmalochini

    The task of a keyword recognition system is to detect the presence of certain words in a conversation based on the linguistic information present in human speech. Such keyword spotting systems have applications in homeland security, telephone surveillance and human-computer interfacing. The general procedure of a keyword spotting system involves feature generation and matching. In this work, a new set of features based on the psycho-acoustic masking nature of human speech is proposed. After developing these features, a time-aligned pattern matching process was implemented to locate the keywords in a set of unknown words. A word boundary detection technique based on frame classification using the nonlinear characteristics of speech is also addressed in this work. Validation of this keyword spotting model was done using the widely acclaimed cepstral features. The experimental results indicate the viability of using these perceptually significant features as an augmented feature set in keyword spotting.

  10. Taxometric Analysis of the Antisocial Features Scale of the Personality Assessment Inventory in Federal Prison Inmates

    Science.gov (United States)

    Walters, Glenn D.; Diamond, Pamela M.; Magaletta, Philip R.; Geyer, Matthew D.; Duncan, Scott A.

    2007-01-01

    The Antisocial Features (ANT) scale of the Personality Assessment Inventory (PAI) was subjected to taxometric analysis in a group of 2,135 federal prison inmates. Scores on the three ANT subscales--Antisocial Behaviors (ANT-A), Egocentricity (ANT-E), and Stimulus Seeking (ANT-S)--served as indicators in this study and were evaluated using the…

  11. Discrete-Feature Model Implementation of SDM-Site Forsmark

    International Nuclear Information System (INIS)

    Geier, Joel

    2010-03-01

    A discrete-feature model (DFM) was implemented for the Forsmark repository site based on the final site descriptive model from surface based investigations. The discrete-feature conceptual model represents deformation zones, individual fractures, and other water-conducting features around a repository as discrete conductors surrounded by a rock matrix which, in the present study, is treated as impermeable. This approximation is reasonable for sites in crystalline rock which has very low permeability, apart from that which results from macroscopic fracturing. Models are constructed based on the geological and hydrogeological description of the sites and engineering designs. Hydraulic heads and flows through the network of water-conducting features are calculated by the finite-element method, and are used in turn to simulate migration of non-reacting solute by a particle-tracking method, in order to estimate the properties of pathways by which radionuclides could be released to the biosphere. Stochastic simulation is used to evaluate portions of the model that can only be characterized in statistical terms, since many water-conducting features within the model volume cannot be characterized deterministically. Chapter 2 describes the methodology by which discrete features are derived to represent water-conducting features around the hypothetical repository at Forsmark (including both natural features and features that result from the disturbance of excavation), and then assembled to produce a discrete-feature network model for numerical simulation of flow and transport. Chapter 3 describes how site-specific data and repository design are adapted to produce the discrete-feature model. Chapter 4 presents results of the calculations. These include utilization factors for deposition tunnels based on the emplacement criteria that have been set forth by the implementers, flow distributions to the deposition holes, and calculated properties of discharge paths as well as

  12. Discrete-Feature Model Implementation of SDM-Site Forsmark

    Energy Technology Data Exchange (ETDEWEB)

    Geier, Joel (Clearwater Hardrock Consulting, Corvallis, OR (United States))

    2010-03-15

    A discrete-feature model (DFM) was implemented for the Forsmark repository site based on the final site descriptive model from surface based investigations. The discrete-feature conceptual model represents deformation zones, individual fractures, and other water-conducting features around a repository as discrete conductors surrounded by a rock matrix which, in the present study, is treated as impermeable. This approximation is reasonable for sites in crystalline rock which has very low permeability, apart from that which results from macroscopic fracturing. Models are constructed based on the geological and hydrogeological description of the sites and engineering designs. Hydraulic heads and flows through the network of water-conducting features are calculated by the finite-element method, and are used in turn to simulate migration of non-reacting solute by a particle-tracking method, in order to estimate the properties of pathways by which radionuclides could be released to the biosphere. Stochastic simulation is used to evaluate portions of the model that can only be characterized in statistical terms, since many water-conducting features within the model volume cannot be characterized deterministically. Chapter 2 describes the methodology by which discrete features are derived to represent water-conducting features around the hypothetical repository at Forsmark (including both natural features and features that result from the disturbance of excavation), and then assembled to produce a discrete-feature network model for numerical simulation of flow and transport. Chapter 3 describes how site-specific data and repository design are adapted to produce the discrete-feature model. Chapter 4 presents results of the calculations. These include utilization factors for deposition tunnels based on the emplacement criteria that have been set forth by the implementers, flow distributions to the deposition holes, and calculated properties of discharge paths as well as

  13. On the scaling features of high-latitude geomagnetic field fluctuations during a large geomagnetic storm

    Science.gov (United States)

    De Michelis, Paola; Federica Marcucci, Maria; Consolini, Giuseppe

    2015-04-01

    Recently we have investigated the spatial distribution of the scaling features of short-timescale magnetic field fluctuations using measurements from several ground-based geomagnetic observatories distributed in the northern hemisphere. We have found that the scaling features of fluctuations of the horizontal magnetic field component at time scales below 100 minutes are correlated with the geomagnetic activity level and with changes in the currents flowing in the ionosphere. Here, we present a detailed analysis of the dynamical changes of the magnetic field scaling features as a function of the geomagnetic activity level during the well-known large geomagnetic storm that occurred on July 15, 2000 (the Bastille event). The observed dynamical changes are discussed in relation to the changes of the overall ionospheric polar convection and potential structure as reconstructed using SuperDARN data. This work is supported by the Italian National Program for Antarctic Research (PNRA) - Research Project 2013/AC3.08 and by the European Community's Seventh Framework Programme ([FP7/2007-2013]) under Grant no. 313038/STORM and…

  14. Modeling crash injury severity by road feature to improve safety.

    Science.gov (United States)

    Penmetsa, Praveena; Pulugurtha, Srinivas S

    2018-01-02

    The objective of this research is twofold: to (a) model and identify critical road features (or locations) based on crash injury severity and compare them with crash frequency and (b) model and identify drivers who are more likely to contribute to crashes by road feature. Crash data from 2011 to 2013 were obtained from the Highway Safety Information System (HSIS) for the state of North Carolina. Twenty-three different road features were considered, analyzed, and compared with each other as well as with no road feature. A multinomial logit (MNL) model was developed and odds ratios were estimated to investigate the effect of road features on crash injury severity. Among the many road features, underpasses, the end or beginning of a divided highway, and on-ramp terminals on crossroads are the top three critical road features. Intersection crashes are frequent but are not as likely to result in severe injuries as crashes at the critical road features. Roundabouts are least likely to result in both severe and moderate injuries. Female drivers are more likely than male drivers to be involved in crashes at intersections (4-way and T). Adult drivers are more likely to be involved in crashes at underpasses. Older drivers are 1.6 times more likely to be involved in a crash at the end or beginning of a divided highway. The findings from this research help to identify critical road features that need to be given priority. As an example, providing additional advance warning signs, or enlarged and highly retroreflective signs that grab the attention of older drivers, may help make locations such as the end or beginning of a divided highway much safer. Educating drivers about the necessary skill sets required at critical road features, in addition to engineering solutions, may further help them adopt safe driving behaviors on the road.
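    A sketch of the modelling step with statsmodels, using made-up crash records and a few illustrative indicator variables; the odds ratios are the exponentiated MNL coefficients:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "severity":        rng.integers(0, 3, 500),  # 0 none, 1 moderate, 2 severe
            "underpass":       rng.integers(0, 2, 500),
            "divided_hwy_end": rng.integers(0, 2, 500),
            "older_driver":    rng.integers(0, 2, 500),
        })

        X = sm.add_constant(df[["underpass", "divided_hwy_end", "older_driver"]])
        fit = sm.MNLogit(df["severity"], X).fit(disp=False)
        print(np.exp(fit.params))   # odds ratios vs. the base (no-injury) outcome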

  15. Towards the maturity model for feature oriented domain analysis

    Directory of Open Access Journals (Sweden)

    Muhammad Javed

    2014-09-01

    Full Text Available Assessing the quality of a model has always been a challenge for researchers in academia and industry. The quality of a feature model is a prime factor because it is used in the development of products: a degraded feature model leads to the development of low-quality products. Few efforts have been made toward improving the quality of feature models. This paper presents our ongoing work, i.e. the development of a FODA (Feature Oriented Domain Analysis) maturity model that will help evaluate the quality of a given feature model. In this paper, we provide the quality levels along with their descriptions. The proposed model consists of four levels, from level 0 to level 3. The design of each level is based on the severity of errors, where the severity of errors decreases from level 0 to level 3. We elaborate each level with the help of examples. All examples are borrowed from material published by the Software Product Lines (SPL) research community for the application of our framework.

  16. Spatial Uncertainty Model for Visual Features Using a Kinect™ Sensor

    Directory of Open Access Journals (Sweden)

    Jae-Han Park

    2012-06-01

    Full Text Available This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.

  17. Spatial uncertainty model for visual features using a Kinect™ sensor.

    Science.gov (United States)

    Park, Jae-Han; Shin, Yong-Deuk; Bae, Ji-Hun; Baeg, Moon-Hong

    2012-01-01

    This study proposes a mathematical uncertainty model for the spatial measurement of visual features using Kinect™ sensors. This model can provide qualitative and quantitative analysis for the utilization of Kinect™ sensors as 3D perception sensors. In order to achieve this objective, we derived the propagation relationship of the uncertainties between the disparity image space and the real Cartesian space with the mapping function between the two spaces. Using this propagation relationship, we obtained the mathematical model for the covariance matrix of the measurement error, which represents the uncertainty for spatial position of visual features from Kinect™ sensors. In order to derive the quantitative model of spatial uncertainty for visual features, we estimated the covariance matrix in the disparity image space using collected visual feature data. Further, we computed the spatial uncertainty information by applying the covariance matrix in the disparity image space and the calibrated sensor parameters to the proposed mathematical model. This spatial uncertainty model was verified by comparing the uncertainty ellipsoids for spatial covariance matrices and the distribution of scattered matching visual features. We expect that this spatial uncertainty model and its analyses will be useful in various Kinect™ sensor applications.
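    The propagation step reduces to Sigma_xyz = J Sigma_uvd J^T for the disparity-to-Cartesian mapping; a sketch with an assumed pinhole/disparity model, illustrative calibration values, and a numerical Jacobian:

        import numpy as np

        f, b = 580.0, 0.075          # focal length [px], baseline [m] (assumed)
        cx, cy = 320.0, 240.0        # principal point [px] (assumed)

        def to_xyz(u, v, d):
            # Map disparity-image coordinates (u, v, d) to Cartesian (x, y, z).
            z = f * b / d
            return np.array([(u - cx) * z / f, (v - cy) * z / f, z])

        def xyz_covariance(u, v, d, cov_uvd):
            # First-order propagation with a finite-difference Jacobian.
            eps = 1e-4
            p0 = to_xyz(u, v, d)
            J = np.column_stack([
                (to_xyz(u + eps, v, d) - p0) / eps,
                (to_xyz(u, v + eps, d) - p0) / eps,
                (to_xyz(u, v, d + eps) - p0) / eps,
            ])
            return J @ cov_uvd @ J.T

        cov_uvd = np.diag([0.5**2, 0.5**2, 0.3**2])  # assumed image-space covariance
        print(xyz_covariance(300.0, 200.0, 40.0, cov_uvd))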

  18. Hole Feature on Conical Face Recognition for Turning Part Model

    Science.gov (United States)

    Zubair, A. F.; Abu Mansor, M. S.

    2018-03-01

    Computer Aided Process Planning (CAPP) is the bridge between CAD and CAM, and pre-processing of the CAD data in the CAPP system is essential. For a CNC turning part, the conical faces of the part model inevitably need to be recognised besides cylindrical and planar faces. As the sine-cosine structure of the cone radius differs between models, face identification in automatic feature recognition of the part model needs special attention. This paper focuses on hole features on conical faces that can be detected by the CAD solid modeller ACIS via the .SAT file. Detection algorithms for face topology were generated and compared. The study shows different face setups for similar conical part models with different hole-type features. Three types of holes were compared, and the differences between merged and unmerged faces were studied.

  19. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path for implementing network contention and bandwidth capacity modeling, using a less synchronous but sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overheads.

  20. Modelling of rate effects at multiple scales

    DEFF Research Database (Denmark)

    Pedersen, R.R.; Simone, A.; Sluys, L. J.

    2008-01-01

    …the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at the macro-scale and meso-scale, including information from the micro-scale. … At the macro- and meso-scales a rate-dependent constitutive model is used in which visco-elasticity is coupled to visco-plasticity and damage. A viscous length-scale effect is introduced to control the size of the fracture process zone. By comparison of the widths of the fracture process zone…

  1. Dynamically Scaled Model Experiment of a Mooring Cable

    Directory of Open Access Journals (Sweden)

    Lars Bergdahl

    2016-01-01

    Full Text Available The dynamic response of mooring cables for marine structures is scale-dependent, and perfect dynamic similitude between full-scale prototypes and small-scale physical model tests is difficult to achieve. The best possible scaling is here sought by means of a specific set of dimensionless parameters, and the model accuracy is also evaluated by two alternative sets of dimensionless parameters. A special feature of the presented experiment is that a chain was scaled to have the correct propagation celerity for longitudinal elastic waves, thus providing perfect geometrical and dynamic scaling in vacuum, which is unique. The scaling error due to the incorrect Reynolds number seemed to be of minor importance. The 33 m experimental chain could then be considered a scaled 76 mm stud chain with a length of 1240 m, i.e. at a length scale of 1:37.6. Due to the correct elastic scale, the physical model was able to reproduce the effect of snatch loads giving rise to tensional shock waves propagating along the cable. The results from the experiment were used to validate the newly developed cable-dynamics code, MooDy, which utilises a discontinuous Galerkin FEM formulation. The validation of MooDy proved successful for the presented experiments. The experimental data is made available here for validation of other numerical codes, by publishing digitised time series of two of the experiments.

  2. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  3. One-scale supersymmetric inflationary models

    International Nuclear Information System (INIS)

    Bertolami, O.; Ross, G.G.

    1986-01-01

    The reheating phase is studied in a class of supergravity inflationary models involving a two-component hidden sector in which the scale of supersymmetry breaking and the scale generating inflation are related. It is shown that these models have an ''entropy crisis'' in which there is a large entropy release after nucleosynthesis, leading to unacceptably low nuclear abundances. (orig.)

  4. Vascularity and grey-scale sonographic features of normal cervical lymph nodes: variations with nodal size

    International Nuclear Information System (INIS)

    Ying, Michael; Ahuja, Anil; Brook, Fiona; Metreweli, Constantine

    2001-01-01

    AIM: This study was undertaken to investigate variations in the vascularity and grey-scale sonographic features of cervical lymph nodes with their size. MATERIALS AND METHODS: High-resolution grey-scale sonography and power Doppler sonography were performed on 1133 cervical nodes in 109 volunteers who had a sonographic examination of the neck. Standardized parameters were used in power Doppler sonography. RESULTS: About 90% of lymph nodes with a maximum transverse diameter greater than 5 mm showed vascularity and an echogenic hilus. Smaller nodes were less likely to show vascularity and an echogenic hilus. As the size of the lymph nodes increased, the intranodal blood flow velocity increased significantly (P < 0.05). CONCLUSIONS: The findings provide a baseline for grey-scale and power Doppler sonography of normal cervical lymph nodes. Sonologists will find varying vascularity and grey-scale appearances when encountering nodes of different sizes.

  5. A Registration Scheme for Multispectral Systems Using Phase Correlation and Scale Invariant Feature Matching

    Directory of Open Access Journals (Sweden)

    Hanlun Li

    2016-01-01

    Full Text Available In the past few years, many multispectral systems consisting of several identical monochrome cameras equipped with different bandpass filters have been developed. However, due to the significant difference in intensity between different band images, image registration becomes very difficult. Considering the common structural characteristics of multispectral systems, this paper proposes an effective method for registering different band images. First, we use the phase correlation method to calculate the parameters of a coarse-offset relationship between the different band images. Then we use the scale invariant feature transform (SIFT) to detect the feature points. For every feature point in a reference image, we can use the coarse-offset parameters to predict the location of its matching point. We only need to compare the feature point in the reference image with the few nearby feature points around the predicted location, instead of the feature points all over the input image. Our experiments show that this method not only avoids false matches and increases correct matches, but also solves the matching problem between an infrared band image and a visible band image in cases lacking man-made objects.
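    The coarse-offset step can be sketched directly with FFTs: the peak of the normalized cross-power spectrum gives the integer-pixel translation (subpixel refinement and the subsequent SIFT matching are omitted):

        import numpy as np

        def phase_correlation(ref, tgt):
            # Return (dy, dx) such that tgt is approximately ref shifted by (dy, dx).
            cross = np.fft.fft2(tgt) * np.conj(np.fft.fft2(ref))
            corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # Map wrap-around peaks to negative shifts.
            if dy > ref.shape[0] // 2:
                dy -= ref.shape[0]
            if dx > ref.shape[1] // 2:
                dx -= ref.shape[1]
            return dy, dx

        rng = np.random.default_rng(0)
        a = rng.random((256, 256))
        b = np.roll(a, (7, -12), axis=(0, 1))   # simulated band-image offset
        print(phase_correlation(a, b))          # approximately (7, -12)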

  6. A Feature Fusion Based Forecasting Model for Financial Time Series

    Science.gov (United States)

    Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie

    2014-01-01

    Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and the 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model performs better in prediction than two other similar models. PMID:24971455
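    A compact sketch of the described pipeline with scikit-learn, using random placeholder data; the component counts and SVR kernel are assumptions:

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.cross_decomposition import CCA
        from sklearn.svm import SVR

        rng = np.random.default_rng(1)
        prices = rng.random((500, 10))       # features from historical closing prices
        technical = rng.random((500, 39))    # 39 technical variables
        y = rng.random(500)                  # next-day closing price (target)

        # ICA features from the technical variables, fused with price features
        # via CCA, then fed to a support vector regressor.
        ica_feats = FastICA(n_components=8, random_state=0).fit_transform(technical)
        cca = CCA(n_components=4).fit(prices, ica_feats)
        fused = np.hstack(cca.transform(prices, ica_feats))

        model = SVR(kernel="rbf").fit(fused, y)
        next_close = model.predict(fused[-1:])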

  7. Features of Functioning the Integrated Building Thermal Model

    Directory of Open Access Journals (Sweden)

    Morozov Maxim N.

    2017-01-01

    Full Text Available A model of a building heating system, consisting of an energy source, a distributed automatic control system, the elements of an individual heating unit, and the heating system itself, is designed. The Simulink application of the Matlab mathematical package is selected as the platform for the model. The specialized Simscape libraries, together with a wide range of Matlab mathematical tools, allow the "acausal" modeling concept to be applied. Implementing the "physical" representation of the object model improved the accuracy of the models. The principle of operation and the functional features of the thermal model are described. Investigations of the building's cooling dynamics were carried out.

  8. The effective field theory of inflation models with sharp features

    International Nuclear Information System (INIS)

    Bartolo, Nicola; Cannone, Dario; Matarrese, Sabino

    2013-01-01

    We describe models of single-field inflation with small and sharp step features in the potential (and sound speed) of the inflaton field, in the context of the Effective Field Theory of Inflation. This approach allows us to study the effects of features in the power-spectrum and in the bispectrum of curvature perturbations, from a model-independent point of view, by parametrizing the features directly with modified ''slow-roll'' parameters. We can obtain a self-consistent power-spectrum, together with enhanced non-Gaussianity, which grows with a quantity β that parametrizes the sharpness of the step. With this treatment it is straightforward to generalize and include features in other coefficients of the effective action of the inflaton field fluctuations. Our conclusion in this case is that, excluding extrinsic curvature terms, the only interesting effects at the level of the bispectrum could arise from features in the first slow-roll parameter ε or in the speed of sound c_s. Finally, we derive an upper bound on the parameter β from the consistency of the perturbative expansion of the action for inflaton perturbations. This constraint can be used for an estimation of the signal-to-noise ratio, to show that the observable which is most sensitive to features is the power-spectrum. This conclusion would change if we consider the contemporary presence of a feature and a speed of sound c_s < 1, as, in such a case, contributions from an oscillating folded configuration can potentially make the bispectrum the leading observable for feature models.

  9. Features and New Physical Scales in Primordial Observables: Theory and Observation

    CERN Document Server

    Chluba, Jens; Patil, Subodh P.

    2015-01-01

    All cosmological observations to date are consistent with adiabatic, Gaussian and nearly scale invariant initial conditions. These findings provide strong evidence for a particular symmetry breaking pattern in the very early universe (with a close to vanishing order parameter, $\\epsilon$), widely accepted as conforming to the predictions of the simplest realizations of the inflationary paradigm. However, given that our observations are only privy to perturbations, in inferring something about the background that gave rise to them, it should be clear that many different underlying constructions project onto the same set of cosmological observables. Features in the primordial correlation functions, if present, would offer a unique and discriminating window onto the parent theory in which the mechanism that generated the initial conditions is embedded. In certain contexts, simple linear response theory allows us to infer new characteristic scales from the presence of features that can break the aforementioned de...

  10. Fast Localization in Large-Scale Environments Using Supervised Indexing of Binary Features.

    Science.gov (United States)

    Youji Feng; Lixin Fan; Yihong Wu

    2016-01-01

    The essence of image-based localization lies in matching 2D key points in the query image and 3D points in the database. State-of-the-art methods mostly employ sophisticated key point detectors and feature descriptors, e.g., Difference of Gaussian (DoG) and Scale Invariant Feature Transform (SIFT), to ensure robust matching. While a high registration rate is attained, the registration speed is impeded by the expensive key point detection and descriptor extraction. In this paper, we propose to use efficient key point detectors along with binary feature descriptors, since the extraction of such binary features is extremely fast. The naive usage of binary features, however, does not lend itself to significant speedup of localization, since existing indexing approaches, such as hierarchical clustering trees and locality-sensitive hashing, are not efficient enough at indexing binary features, and matching binary features turns out to be much slower than matching SIFT features. To overcome this, we propose a much more efficient indexing approach for approximate nearest neighbor search of binary features. This approach resorts to randomized trees that are constructed in a supervised training process by exploiting the label information derived from the fact that multiple features correspond to a common 3D point. In the tree construction process, node tests are selected in a way such that the trees have uniform leaf sizes and low error rates, which are two desired properties for efficient approximate nearest neighbor search. To further improve the search efficiency, a probabilistic priority search strategy is adopted. Apart from the label information, this strategy also uses the non-binary pixel intensity differences available during descriptor extraction. By using the proposed indexing approach, matching binary features is no longer much slower but slightly faster than matching SIFT features. Consequently, the overall localization speed is significantly improved due to the much faster key…
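    A toy version of one such tree, with random bit tests standing in for the paper's supervised node-test selection, and a final Hamming-distance re-ranking of the leaf candidates:

        import numpy as np

        rng = np.random.default_rng(0)

        def build(indices, descs, depth=0, max_leaf=16):
            # Recursively split descriptor indices on a random bit test.
            if len(indices) <= max_leaf or depth > 20:
                return ("leaf", indices)
            bit = int(rng.integers(descs.shape[1]))
            mask = descs[indices, bit] == 1
            left, right = indices[~mask], indices[mask]
            if len(left) == 0 or len(right) == 0:
                return ("leaf", indices)
            return ("node", bit,
                    build(left, descs, depth + 1, max_leaf),
                    build(right, descs, depth + 1, max_leaf))

        def query(tree, q):
            # Descend to one leaf and return its candidate indices.
            while tree[0] == "node":
                tree = tree[2] if q[tree[1]] == 0 else tree[3]
            return tree[1]

        descs = rng.integers(0, 2, size=(1000, 256), dtype=np.uint8)
        tree = build(np.arange(1000), descs)
        cands = query(tree, descs[42])
        # Re-rank leaf candidates by Hamming distance to the query descriptor.
        best = cands[np.argmin((descs[cands] ^ descs[42]).sum(axis=1))]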

  11. Research on Optimal Observation Scale for Damaged Buildings after Earthquake Based on Optimal Feature Space

    Science.gov (United States)

    Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.

    2018-04-01

    A new information extraction method for damaged buildings, rooted in an optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimate of scale parameter) tool is used to optimize the segmentation of the image. Then the distance matrix and minimum separation distance of all kinds of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract the image of buildings damaged by the earthquake. The overall extraction accuracy reaches 83.1 %, with a kappa coefficient of 0.813. The new information extraction method greatly improves extraction accuracy and efficiency compared with the traditional object-oriented method, and has good application value for the information extraction of damaged buildings. In addition, the new method can be used for the information extraction of different-resolution images of damaged buildings after an earthquake, and then to seek the optimal observation scale of damaged buildings through accuracy evaluation. It is suggested that the optimal observation scale of damaged buildings is between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.

  12. Synoptic evaluation of scale-dependent metrics for hydrographic line feature geometry

    Science.gov (United States)

    Stanislawski, Larry V.; Buttenfield, Barbara P.; Raposo, Paulo; Cameron, Madeline; Falgout, Jeff T.

    2015-01-01

    Methods of acquisition and feature simplification for vector feature data impact cartographic representations and scientific investigations of these data, and are therefore important considerations for geographic information science (Haunert and Sester 2008). After initial collection, linear features may be simplified to reduce excessive detail or to furnish a reduced-scale version of the features through cartographic generalization (Regnauld and McMaster 2008, Stanislawski et al. 2014). A variety of algorithms exist to simplify linear cartographic features, and all of the methods affect the positional accuracy of the features (Shahriari and Tao 2002, Regnauld and McMaster 2008, Stanislawski et al. 2012). In general, simplification operations are controlled by one or more tolerance parameters that limit the amount of positional change the operation can make to features. Using a single tolerance value can produce varying levels of positional change, depending on the local shape, texture, or geometric characteristics of the original features (McMaster and Shea 1992, Shahriari and Tao 2002, Buttenfield et al. 2010). Consequently, numerous researchers have advocated calibration of simplification parameters to control quantifiable properties of the resulting changes to the features (Li and Openshaw 1990, Raposo 2013, Tobler 1988, Veregin 2000, Buttenfield 1986, 1989). This research identifies relations between local topographic conditions and geometric characteristics of linear features that are available in the National Hydrography Dataset (NHD). The NHD is a comprehensive vector dataset of surface water features within the United States that is maintained by the U.S. Geological Survey (USGS). In this paper, geometric characteristics of cartographic representations for natural stream and river features are summarized for subbasin watersheds within entire regions of the …
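
    As a concrete illustration of tolerance calibration, the sketch below simplifies a line with Douglas-Peucker (via the shapely library) and accepts the largest tolerance whose Hausdorff distance from the original stays within a positional-change budget. The algorithm, metric, and budget are illustrative assumptions, not the specific methods evaluated in the paper.

```python
import numpy as np
from shapely.geometry import LineString

def calibrate_tolerance(coords, max_displacement, tolerances):
    line = LineString(coords)
    chosen = None
    for tol in sorted(tolerances):
        simplified = line.simplify(tol, preserve_topology=False)
        # Hausdorff distance as a conservative measure of positional change
        if line.hausdorff_distance(simplified) <= max_displacement:
            chosen = (tol, simplified)
        else:
            break
    return chosen

# Example: a sinuous synthetic stream reach with a 1 m displacement budget
xs = np.linspace(0, 100, 200)
reach = list(zip(xs, 5 * np.sin(xs / 5)))
print(calibrate_tolerance(reach, 1.0, [0.1, 0.5, 1.0, 2.0, 5.0]))
```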

  13. A Model Study of Small-Scale World Map Generalization

    Science.gov (United States)

    Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.

    2018-04-01

    With globalization and rapid development, every field is taking an increasing interest in physical geography and human economics, and there is a surging demand worldwide for small-scale world maps in large formats. Further study of automated mapping technology, especially the production of small-scale maps of global extent, is a key problem that the cartographic field needs to solve. In light of this, this paper adopts an improved model for map generalization in which geographic data are separated from cartographic data, built mainly on a cross-platform symbol library and an automatic map-making knowledge engine. In the cross-platform symbol library, symbols and the physical features they represent in the geographic information are configured at all scale levels. The automatic map-making knowledge engine consists of 97 types, 1,086 subtypes, 21,845 basic algorithms and over 2,500 related functional modules. In order to evaluate the accuracy and visual effect of our model for topographic and thematic maps, we take small-scale world map generalization as an example. After the generalization process, combining and simplifying the scattered islands makes the map more explicit at the 1:2.1 billion scale, and the map features become more complete and accurate. The model not only enhances map generalization at various scales significantly, but also achieves integration among map products of various scales, suggesting that it provides a useful reference for cartographic generalization across scales.

  14. Nanometer-scale features in dolomite from Pennsylvanian rocks, Paradox Basin, Utah

    Science.gov (United States)

    Gournay, Jonas P.; Kirkland, Brenda L.; Folk, Robert L.; Lynch, F. Leo

    1999-07-01

    Scanning electron microscopy reveals an association between early dolomite in the Pennsylvanian Desert Creek (Paradox Fm.) and small (approximately 0.1 μm) nanometer-scale textures, termed 'nannobacteria'. Three diagenetically distinct dolomites are present: early dolomite, limpid dolomite, and baroque dolomite. In this study, only the early dolomite contained nanometer-scale features. These textures occur as discrete balls and rods, clumps of balls, and chains of balls. Precipitation experiments demonstrate that these textures may be the result of precipitation in an organic-rich micro-environment. The presence of these nanometer-scale textures in Pennsylvanian rocks suggests that these early dolomites precipitated in organic-rich, bacterial environments.

  15. Riparian erosion vulnerability model based on environmental features.

    Science.gov (United States)

    Botero-Acosta, Alejandra; Chu, Maria L; Guzman, Jorge A; Starks, Patrick J; Moriasi, Daniel N

    2017-12-01

    Riparian erosion is one of the major causes of sediment and contaminant load to streams, degradation of riparian wildlife habitats, and land loss hazards. Land and soil management practices are implemented as conservation and restoration measures to mitigate the environmental problems brought about by riparian erosion. This, however, requires the identification of areas vulnerable to soil erosion. Because of the complex interactions between the different mechanisms that govern soil erosion and the inherent uncertainties involved in quantifying these processes, assessing erosion vulnerability at the watershed scale is challenging. The main objective of this study was to develop a methodology to identify areas along the riparian zone that are susceptible to erosion. The methodology was developed by integrating the physically-based watershed model MIKE-SHE, to simulate water movement, and a habitat suitability model, MaxEnt, to quantify the probability of presence of elevation changes (i.e., erosion) across the watershed. The presence of elevation changes was estimated based on two LiDAR-based elevation datasets taken in 2009 and 2012. The changes in elevation were grouped into four categories: low (0.5-0.7 m), medium (0.7-1.0 m), high (1.0-1.7 m) and very high (1.7-5.9 m), with each category treated as a studied "species". The categories' locations were then used as the "species location" map in MaxEnt. The environmental features used as constraints to the presence of erosion were land cover, soil, stream power index, overland flow, lateral inflow, and discharge. The modeling framework was evaluated in the Fort Cobb Reservoir Experimental watershed in south-central Oklahoma. Results showed that the most vulnerable areas for erosion were located at the upper riparian zones of the Cobb and Lake sub-watersheds. The main waterways of these sub-watersheds were also found to be prone to streambank erosion. Approximately 80% of the riparian zone (streambank …

  16. Multi-scale modeling of composites

    DEFF Research Database (Denmark)

    Azizi, Reza

    A general method to obtain the homogenized response of metal-matrix composites is developed. It is assumed that the microscopic scale is sufficiently small compared to the macroscopic scale such that the macro response does not affect the micromechanical model. Therefore, the microscopic scale … Hill-Mandel's energy principle is used to find macroscopic operators based on micro-mechanical analyses using the finite element method under generalized plane strain conditions. A phenomenological macroscopic model for metal matrix composites is developed based on constitutive operators describing the elastic … to plastic deformation. The macroscopic operators found can be used to model metal matrix composites on the macroscopic scale using a hierarchical multi-scale approach. Finally, decohesion under tension and shear loading is studied using a cohesive law for the interface between matrix and fiber.
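
    For reference, the Hill-Mandel macro-homogeneity condition invoked in this kind of homogenization states that the volume average of the microscopic stress power must equal the macroscopic stress power; in the notation below, overbars denote macroscopic (volume-averaged) quantities:

```latex
% Hill-Mandel macro-homogeneity: the volume average of the microscopic
% stress power equals the macroscopic stress power.
\frac{1}{V}\int_{V} \boldsymbol{\sigma} : \dot{\boldsymbol{\varepsilon}}\, \mathrm{d}V
  \;=\; \bar{\boldsymbol{\sigma}} : \dot{\bar{\boldsymbol{\varepsilon}}}
```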

  17. Advanced social features in a recommendation system for process modeling

    NARCIS (Netherlands)

    Koschmider, A.; Song, M.S.; Reijers, H.A.; Abramowicz, W.

    2009-01-01

    Social software is known to stimulate the exchange and sharing of information among peers. This paper describes how an existing system that supports process builders in completing a business process can be enhanced with various social features. In that way, it is easier for process modelers to become …

  18. [Modeling continuous scaling of NDVI based on fractal theory].

    Science.gov (United States)

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    Scale effect is one of the most important scientific problems in remote sensing. The scale effect in quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe the scale-changing features of retrievals over an entire series of scales; meanwhile, they face serious parameter-correction issues (e.g., geometric and spectral correction) because imaging parameters vary between sensors. Using a single-sensor image, a fractal methodology was applied to solve these problems. Taking NDVI (computed from land surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model the continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists, and it can be described by a fractal model of continuous scaling; (2) the fractal method is suitable for the validation of NDVI. All of this showed that fractal analysis is an effective methodology for studying the scaling behaviour of quantitative remote sensing.
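
    A toy version of such a continuous-scaling analysis is sketched below: a fine-resolution NDVI field is box-aggregated to a series of coarser scales, and a power-law (log-log linear) model is fitted across the whole series. The use of the variance as the scale-dependent statistic and the synthetic field are assumptions for illustration only.

```python
import numpy as np

def aggregate(field, s):
    # box-average the field to a grid that is s times coarser
    h, w = (field.shape[0] // s) * s, (field.shape[1] // s) * s
    return field[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def scaling_exponent(ndvi, scales=(1, 2, 4, 8, 16)):
    vals = [aggregate(ndvi, s).var() for s in scales]   # scale-effect proxy
    slope, intercept = np.polyfit(np.log(scales), np.log(vals), 1)
    return slope, intercept   # slope acts as a fractal (scaling) exponent

rng = np.random.default_rng(1)
ndvi = np.clip(0.5 + 0.2 * rng.standard_normal((256, 256)), -1, 1)
print(scaling_exponent(ndvi))
```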

  19. Modeling Lactococcus lactis using a genome-scale flux model

    Directory of Open Access Journals (Sweden)

    Nielsen Jens

    2005-06-01

    Full Text Available Abstract Background Genome-scale flux models are useful tools to represent and analyze microbial metabolism. In this work we reconstructed the metabolic network of the lactic acid bacteria Lactococcus lactis and developed a genome-scale flux model able to simulate and analyze network capabilities and whole-cell function under aerobic and anaerobic continuous cultures. Flux balance analysis (FBA and minimization of metabolic adjustment (MOMA were used as modeling frameworks. Results The metabolic network was reconstructed using the annotated genome sequence from L. lactis ssp. lactis IL1403 together with physiological and biochemical information. The established network comprised a total of 621 reactions and 509 metabolites, representing the overall metabolism of L. lactis. Experimental data reported in the literature was used to fit the model to phenotypic observations. Regulatory constraints had to be included to simulate certain metabolic features, such as the shift from homo to heterolactic fermentation. A minimal medium for in silico growth was identified, indicating the requirement of four amino acids in addition to a sugar. Remarkably, de novo biosynthesis of four other amino acids was observed even when all amino acids were supplied, which is in good agreement with experimental observations. Additionally, enhanced metabolic engineering strategies for improved diacetyl producing strains were designed. Conclusion The L. lactis metabolic network can now be used for a better understanding of lactococcal metabolic capabilities and potential, for the design of enhanced metabolic engineering strategies and for integration with other types of 'omic' data, to assist in finding new information on cellular organization and function.
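
    To make the flux balance analysis (FBA) framework concrete, here is a minimal sketch on a made-up three-reaction toy network: maximize a "biomass" flux subject to steady-state mass balance S·v = 0 and flux bounds. This is not the L. lactis reconstruction, just the optimization pattern that FBA uses.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix: rows are metabolites, columns are reactions.
S = np.array([[ 1, -1,  0],    # metabolite A: produced by r0, consumed by r1
              [ 0,  1, -1]])   # metabolite B: produced by r1, consumed by r2
bounds = [(0, 10), (0, 10), (0, 10)]   # flux bounds for r0, r1, r2
c = np.zeros(3)
c[2] = -1.0                    # maximize v2 ("biomass") == minimize -v2

res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
print("optimal fluxes:", res.x)
```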

  20. A biologically inspired scale-space for illumination invariant feature detection

    International Nuclear Information System (INIS)

    Vonikakis, Vasillios; Chrysostomou, Dimitrios; Kouskouridas, Rigas; Gasteratos, Antonios

    2013-01-01

    This paper presents a new illumination invariant operator, combining the nonlinear characteristics of biological center-surround cells with the classic difference of Gaussians operator. It specifically targets the underexposed image regions, exhibiting increased sensitivity to low contrast, while not affecting performance in the correctly exposed ones. The proposed operator can be used to create a scale-space, which in turn can be a part of a SIFT-based detector module. The main advantage of this illumination invariant scale-space is that, using just one global threshold, keypoints can be detected in both dark and bright image regions. In order to evaluate the degree of illumination invariance that the proposed, as well as other, existing, operators exhibit, a new benchmark dataset is introduced. It features a greater variety of imaging conditions, compared to existing databases, containing real scenes under various degrees and combinations of uniform and non-uniform illumination. Experimental results show that the proposed detector extracts a greater number of features, with a high level of repeatability, compared to other approaches, for both uniform and non-uniform illumination. This, along with its simple implementation, renders the proposed feature detector particularly appropriate for outdoor vision systems, working in environments under uncontrolled illumination conditions. (paper)
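
    A rough sketch of this kind of operator, assuming a Naka-Rushton-style center-surround normalization applied to a standard DoG response (the exact nonlinearity in the paper differs): dividing by a local surround estimate increases sensitivity in underexposed regions while leaving well-exposed ones largely unchanged.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def illumination_invariant_dog(img, sigma=1.6, k=1.26, surround_sigma=8.0):
    img = img.astype(float)
    # classic difference-of-Gaussians band-pass response
    dog = gaussian_filter(img, sigma) - gaussian_filter(img, k * sigma)
    # wide-surround estimate of the local illumination level
    surround = gaussian_filter(img, surround_sigma)
    # center-surround normalization: boosts responses in dark regions
    return dog / (surround + surround.mean() + 1e-6)
```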

  1. On scaling of human body models

    Directory of Open Access Journals (Sweden)

    Hynčík L.

    2007-10-01

    Full Text Available The human body is not unique: from the point of view of anthropometry and mechanical characteristics everyone is different, which means that dividing the human body population into categories like the 5th, 50th and 95th percentile is, from the application point of view, not enough. On the other hand, developing a particular human body model for each of us is not possible. That is why scaling and morphing algorithms have started to be developed. The current work describes the development of a tool for scaling human models. The idea is to have one standard model (or a couple of standard models) as a base and to create other models from these basic models. One has to choose adequate anthropometric and biomechanical parameters that describe the given group of humans across which the models are to be scaled and morphed.

  2. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM)MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Y.S. Wu

    2005-08-24

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on

  3. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM) MODELS

    International Nuclear Information System (INIS)

    Y.S. Wu

    2005-01-01

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in ''Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration'' (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on water and gas

  4. Statistical evolution of quiet-Sun small-scale magnetic features using Sunrise observations

    Science.gov (United States)

    Anusha, L. S.; Solanki, S. K.; Hirzberger, J.; Feller, A.

    2017-02-01

    The evolution of small magnetic features in quiet regions of the Sun provides a unique window for probing solar magneto-convection. Here we analyze small-scale magnetic features in the quiet Sun, using the high resolution, seeing-free observations from the Sunrise balloon-borne solar observatory. Our aim is to understand the contribution of different physical processes, such as splitting, merging, emergence and cancellation of magnetic fields, to the rearrangement, addition and removal of magnetic flux in the photosphere. We have employed a statistical approach for the analysis, and the evolution studies are carried out using a feature-tracking technique. In this paper we provide a detailed description of the newly developed feature-tracking algorithm and we present the results of a statistical study of several physical quantities. The results on the fractions of flux in emergence, appearance, splitting, merging, disappearance and cancellation qualitatively agree with other recent studies. To summarize, the total flux gained in unipolar appearance is an order of magnitude larger than the total flux gained in emergence. On the other hand, bipolar cancellation contributes nearly as much to the loss of magnetic flux as unipolar disappearance. The total flux lost in cancellation is nearly six to eight times larger than the total flux gained in emergence. One big difference between our study and previous similar studies is that, thanks to the higher spatial resolution of Sunrise, we can track features with fluxes as low as 9 × 10^14 Mx. This flux is nearly an order of magnitude lower than the smallest fluxes of the features tracked in the highest-resolution previous studies based on Hinode data. The area and flux of the magnetic features follow power-law-type distributions, while the lifetimes show either power-law or exponential-type distributions depending on the exact definitions used for the various birth and death events. We have …

  5. MULTI-SCALE SEGMENTATION OF HIGH RESOLUTION REMOTE SENSING IMAGES BY INTEGRATING MULTIPLE FEATURES

    Directory of Open Access Journals (Sweden)

    Y. Di

    2017-05-01

    Full Text Available Most multi-scale segmentation algorithms are not aimed at high resolution remote sensing images and have difficulty communicating and using information across layers. In view of this, we propose a method for multi-scale segmentation of high resolution remote sensing images that integrates multiple features. First, the Canny operator is used to extract edge information, and a band-weighted distance function is built to obtain the edge weights. According to this criterion, the initial segmentation objects of colour images are obtained with the Kruskal minimum spanning tree algorithm. Finally, the segmented images are produced by an adaptive Mumford-Shah region-merging rule combined with spectral and texture information. The proposed method is evaluated on simulated images and ZY-3 satellite images through quantitative and qualitative analysis. The experimental results show that the proposed multi-scale segmentation method outperforms the fractal net evolution approach (FNEA) of the eCognition software on accuracy, while being slightly inferior to FNEA on efficiency.
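
    The sketch below shows the Kruskal-style merging step in isolation: 4-neighbour edges of a colour image are sorted by weight and components are merged when the connecting edge is cheap, which yields the initial segmentation objects. The fixed merge threshold is an assumption; the paper's band-weighted edge function and the Mumford-Shah region merging are omitted.

```python
import numpy as np

def segment(img, thresh=10.0):
    h, w, _ = img.shape
    parent = np.arange(h * w)          # union-find forest over pixels

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    flat = img.reshape(-1, img.shape[2]).astype(float)
    edges = []
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w:
                edges.append((np.linalg.norm(flat[i] - flat[i + 1]), i, i + 1))
            if y + 1 < h:
                edges.append((np.linalg.norm(flat[i] - flat[i + w]), i, i + w))
    for wgt, a, b in sorted(edges):    # Kruskal: ascending edge weight
        ra, rb = find(a), find(b)
        if ra != rb and wgt < thresh:
            parent[ra] = rb            # merge the two components
    return np.array([find(i) for i in range(h * w)]).reshape(h, w)
```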

  6. Phenomenological features of dreams: Results from dream log studies using the Subjective Experiences Rating Scale (SERS).

    Science.gov (United States)

    Kahan, Tracey L; Claudatos, Stephanie

    2016-04-01

    Self-ratings of dream experiences were obtained from 144 college women for 788 dreams, using the Subjective Experiences Rating Scale (SERS). Consistent with past studies, dreams were characterized by a greater prevalence of vision, audition, and movement than smell, touch, or taste, by both positive and negative emotion, and by a range of cognitive processes. A Principal Components Analysis of SERS ratings revealed ten subscales: four sensory, three affective, one cognitive, and two structural (events/actions, locations). Correlations (Pearson r) among subscale means showed a stronger relationship among the process-oriented features (sensory, cognitive, affective) than between the process-oriented and content-centered (structural) features, a pattern predicted from past research (e.g., Bulkeley & Kahan, 2008). Notably, cognition and positive emotion were associated with a greater number of other phenomenal features than was negative emotion; these findings are consistent with studies of the qualitative features of waking autobiographical memory (e.g., Fredrickson, 2001). Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Gravity Model for Topological Features on a Cylindrical Manifold

    Directory of Open Access Journals (Sweden)

    Bayak I.

    2008-04-01

    Full Text Available A model aimed at understanding quantum gravity in terms of Birkhoff's approach is discussed. The geometry of this model is constructed by using a winding map of Minkowski space into an R^3 × S^1 cylinder. The basic field of this model is a field of unit vectors defined through the velocity field of a flow wrapping the cylinder. The degeneration of some parts of the flow into circles (topological features) results in inhomogeneities and gives rise to a scalar field, analogous to the gravitational field. The geometry and dynamics of this field are briefly discussed. We treat the intersections between the topological features and the observer's 3-space as matter particles and argue that these entities are likely to possess some quantum properties.

  8. Bilateral symmetry detection on the basis of Scale Invariant Feature Transform.

    Directory of Open Access Journals (Sweden)

    Habib Akbar

    Full Text Available The automatic detection of bilateral symmetry is a challenging task in computer vision and pattern recognition. This paper presents an approach for the detection of bilateral symmetry in digital single-object images. Our method relies on the extraction of Scale Invariant Feature Transform (SIFT) based feature points, which serve as the basis for ascertaining the centroid of the object; the centroid is then taken as the origin for converting from the Cartesian to the polar coordinate system, which facilitates the selection of symmetric coordinate pairs. This is followed by comparing the gradient magnitude and orientation of the corresponding points to evaluate the amount of symmetry exhibited by each pair of points. The experimental results show that our approach draws the symmetry line accurately, provided that the observed centroid point is true.
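
    A minimal sketch of the geometric part of this pipeline using OpenCV: detect SIFT keypoints, take their centroid as the origin, convert to polar coordinates, and collect candidate pairs whose angles mirror each other about the horizontal axis through the centroid. The file name, tolerances, and axis choice are illustrative, and the gradient-based pair scoring is omitted.

```python
import cv2
import numpy as np

img = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)   # path is illustrative
sift = cv2.SIFT_create()
kps = sift.detect(img, None)
pts = np.array([kp.pt for kp in kps])

centroid = pts.mean(axis=0)
dx, dy = (pts - centroid).T
r, theta = np.hypot(dx, dy), np.arctan2(dy, dx)

# candidate mirror pairs: similar radius, angles summing to ~0
pairs = [(i, j) for i in range(len(pts)) for j in range(i + 1, len(pts))
         if abs(r[i] - r[j]) < 2.0 and abs(theta[i] + theta[j]) < 0.05]
```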

  9. Real-time object tracking based on scale-invariant features employing bio-inspired hardware.

    Science.gov (United States)

    Yasukawa, Shinsuke; Okuno, Hirotsugu; Ishii, Kazuo; Yagi, Tetsuya

    2016-09-01

    We developed a vision sensor system that performs a scale-invariant feature transform (SIFT) in real time. To apply the SIFT algorithm efficiently, we focus on a two-fold process performed by the visual system: whole-image parallel filtering and frequency-band parallel processing. The vision sensor system comprises an active pixel sensor, a metal-oxide semiconductor (MOS)-based resistive network, a field-programmable gate array (FPGA), and a digital computer. We employed the MOS-based resistive network for instantaneous spatial filtering and a configurable filter size. The FPGA is used to pipeline process the frequency-band signals. The proposed system was evaluated by tracking the feature points detected on an object in a video. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Formal modelling and verification of interlocking systems featuring sequential release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2017-01-01

    In this article, we present a method and an associated toolchain for the formal verification of the new Danish railway interlocking systems that are compatible with the European Train Control System (ETCS) Level 2. We have made a generic and reconfigurable model of the system behaviour and generic … safety properties. This model accommodates sequential release - a feature in the new Danish interlocking systems. To verify the safety of an interlocking system, first a domain-specific description of interlocking configuration data is constructed and validated. Then the generic model and safety …

  11. Scale-invariant feature extraction of neural network and renormalization group flow

    Science.gov (United States)

    Iso, Satoshi; Shiba, Shotaro; Yokoo, Sumito

    2018-05-01

    Theoretical understanding of how a deep neural network (DNN) extracts features from input images is still unclear, but it is widely believed that the extraction is performed hierarchically through a process of coarse graining. It reminds us of the basic renormalization group (RG) concept in statistical physics. In order to explore possible relations between DNN and RG, we use the restricted Boltzmann machine (RBM) applied to an Ising model and construct a flow of model parameters (in particular, temperature) generated by the RBM. We show that the unsupervised RBM trained by spin configurations at various temperatures from T = 0 to T = 6 generates a flow along which the temperature approaches the critical value Tc = 2.27. This behavior is the opposite of the typical RG flow of the Ising model. By analyzing various properties of the weight matrices of the trained RBM, we discuss why it flows towards Tc and how the RBM learns to extract features of spin configurations.

  12. Multi-scale Modeling of Arctic Clouds

    Science.gov (United States)

    Hillman, B. R.; Roesler, E. L.; Dexheimer, D.

    2017-12-01

    The presence and properties of clouds are critically important to the radiative budget in the Arctic, but clouds are notoriously difficult to represent in global climate models (GCMs). The challenge stems partly from a disconnect between the scales at which these models are formulated and the scale of the physical processes important to the formation of clouds (e.g., convection and turbulence). Because of this, these processes are parameterized in large-scale models. Over the past decades, new approaches have been explored in which a cloud system resolving model (CSRM), or in the extreme a large eddy simulation (LES), is embedded into each gridcell of a traditional GCM to replace the cloud and convective parameterizations to explicitly simulate more of these important processes. This approach is attractive in that it allows for more explicit simulation of small-scale processes while also allowing for interaction between the small and large-scale processes. The goal of this study is to quantify the performance of this framework in simulating Arctic clouds relative to a traditional global model, and to explore the limitations of such a framework using coordinated high-resolution (eddy-resolving) simulations. Simulations from the global model are compared with satellite retrievals of cloud fraction partitioned by cloud phase from CALIPSO, and limited-area LES simulations are compared with ground-based and tethered-balloon measurements from the ARM Barrow and Oliktok Point measurement facilities.

  13. Color-based scale-invariant feature detection applied in robot vision

    Science.gov (United States)

    Gao, Jian; Huang, Xinhan; Peng, Gang; Wang, Min; Li, Xinde

    2007-11-01

    Scale-invariant feature detecting methods always require a lot of computation, yet sometimes still fail to meet real-time demands in robot vision. To solve this problem, a quick method for detecting interest points is presented. To decrease computation time, the detector selects as interest points those whose scale-normalized Laplacian values are local extrema in the nonholonomic pyramid scale space. The descriptor is built from several subregions, whose width is proportional to the scale factor, and the coordinates of the descriptor are rotated in relation to the interest point orientation, just like the SIFT descriptor. The eigenvector is computed in the original color image, and the mean values of the normalized colors g and b in each subregion are chosen as the components of the eigenvector. Compared with the SIFT descriptor, this descriptor's dimension is markedly reduced, which simplifies the point matching process. The performance of the method is analyzed in theory in this paper, and the experimental results confirm its validity.
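
    The colour part of such a descriptor can be sketched as follows: a patch around the interest point is divided into subregions, and the mean normalized chromaticities g and b of each subregion become the components of the descriptor vector. The grid size and patch handling are assumptions for illustration.

```python
import numpy as np

def color_descriptor(patch_rgb, grid=4):
    p = patch_rgb.astype(float) + 1e-6
    s = p.sum(axis=2)
    g, b = p[..., 1] / s, p[..., 2] / s        # normalized chromaticities
    h, w = g.shape
    sh, sw = h // grid, w // grid
    vec = []
    for i in range(grid):
        for j in range(grid):
            win = np.s_[i * sh:(i + 1) * sh, j * sw:(j + 1) * sw]
            vec += [g[win].mean(), b[win].mean()]
    return np.array(vec)                        # 2 * grid**2 dimensions
```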

  14. Site-Scale Saturated Zone Flow Model

    International Nuclear Information System (INIS)

    G. Zyvoloski

    2003-01-01

    The purpose of this model report is to document the components of the site-scale saturated-zone flow model at Yucca Mountain, Nevada, in accordance with administrative procedure AP-SIII.10Q, "Models". This report provides validation and confidence in the flow model that was developed for site recommendation (SR) and will be used to provide flow fields in support of the Total Systems Performance Assessment (TSPA) for the License Application. The output from this report provides the flow model used in "Site-Scale Saturated Zone Transport", MDL-NBS-HS-000010 Rev 01 (BSC 2003 [162419]). The Site-Scale Saturated Zone Transport model then provides output to the SZ Transport Abstraction Model (BSC 2003 [164870]). In particular, the output from the SZ site-scale flow model is used to simulate the groundwater flow pathways and radionuclide transport to the accessible environment for use in the TSPA calculations. Since the development and calibration of the saturated-zone flow model, more data have been gathered for use in model validation and confidence building, including new water-level data from Nye County wells, single- and multiple-well hydraulic testing data, and new hydrochemistry data. In addition, a new hydrogeologic framework model (HFM), which incorporates Nye County well lithology, also provides geologic data for corroboration and confidence in the flow model. The intended use of this work is to provide a flow model that generates flow fields to simulate radionuclide transport in saturated porous rock and alluvium under natural or forced gradient flow conditions. The flow model simulations are completed using the three-dimensional (3-D), finite-element, flow, heat, and transport computer code, FEHM Version (V) 2.20 (software tracking number (STN): 10086-2.20-00; LANL 2003 [161725]). Concurrently, a process-level transport model and methodology for calculating radionuclide transport in the saturated zone at Yucca Mountain using FEHM V 2.20 are being …

  15. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    Science.gov (United States)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
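
    In software terms, the block-based L1 normalization that makes such a descriptor robust to gain and illumination changes can be sketched as below; the cell and block sizes are illustrative, and the hardware pipeline obviously differs from this reference implementation.

```python
import numpy as np

def l1_block_normalize(cell_hists, block=2):
    """cell_hists: (rows, cols, bins) array of per-cell HOG histograms."""
    rows, cols, bins = cell_hists.shape
    out = []
    for y in range(rows - block + 1):
        for x in range(cols - block + 1):
            v = cell_hists[y:y + block, x:x + block].ravel()
            out.append(v / (np.abs(v).sum() + 1e-6))  # L1-norm normalization
    return np.concatenate(out)
```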

  16. The use of scale-invariance feature transform approach to recognize and retrieve incomplete shoeprints.

    Science.gov (United States)

    Wei, Chia-Hung; Li, Yue; Gwo, Chih-Ying

    2013-05-01

    Shoeprints left at the crime scene provide valuable information in criminal investigation due to the distinctive patterns in the sole. Those shoeprints are often incomplete and noisy. In this study, scale-invariance feature transform is proposed and evaluated for recognition and retrieval of partial and noisy shoeprint images. The proposed method first constructs different scale spaces to detect local extrema in the underlying shoeprint images. Those local extrema are considered as useful key points in the image. Next, the features of those key points are extracted to represent their local patterns around key points. Then, the system computes the cross-correlation between the query image and each shoeprint image in the database. Experimental results show that full-size prints and prints from the toe area perform best among all shoeprints. Furthermore, this system also demonstrates its robustness against noise because there is a very slight difference in comparison between original shoeprints and noisy shoeprints. © 2013 American Academy of Forensic Sciences.

  17. Using the Personality Assessment Inventory Antisocial and Borderline Features Scales to Predict Behavior Change.

    Science.gov (United States)

    Penson, Brittany N; Ruchensky, Jared R; Morey, Leslie C; Edens, John F

    2016-11-01

    A substantial amount of research has examined the developmental trajectory of antisocial behavior and, in particular, the relationship between antisocial behavior and maladaptive personality traits. However, research typically has not controlled for previous behavior (e.g., past violence) when examining the utility of personality measures, such as self-report scales of antisocial and borderline traits, in predicting future behavior (e.g., subsequent violence). Examination of the potential interactive effects of measures of both antisocial and borderline traits also is relatively rare in longitudinal research predicting adverse outcomes. The current study utilizes a large sample of youthful offenders (N = 1,354) from the Pathways to Desistance project to examine the separate effects of the Personality Assessment Inventory Antisocial Features (ANT) and Borderline Features (BOR) scales in predicting future offending behavior as well as trends in other negative outcomes (e.g., substance abuse, violence, employment difficulties) over a 1-year follow-up period. In addition, an ANT × BOR interaction term was created to explore the predictive effects of secondary psychopathy. ANT and BOR both explained unique variance in the prediction of various negative outcomes even after controlling for past indicators of those same behaviors during the preceding year.
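
    A sketch of the kind of model implied by the ANT × BOR interaction analysis, using a logistic regression with the prior-year behaviour as a control. The file and column names are hypothetical stand-ins, not the Pathways to Desistance variable names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical file with scored ANT/BOR and outcome indicators
df = pd.read_csv("pathways.csv")

# ANT * BOR expands to ANT + BOR + ANT:BOR; the interaction row in the
# summary tests the secondary-psychopathy effect described above.
model = smf.logit("violence_next_year ~ ANT * BOR + violence_past_year",
                  data=df).fit()
print(model.summary())
```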

  18. Evidence on Features of a DSGE Business Cycle Model from Bayesian Model Averaging

    NARCIS (Netherlands)

    R.W. Strachan (Rodney); H.K. van Dijk (Herman)

    2012-01-01

    The empirical support for features of a Dynamic Stochastic General Equilibrium model with two technology shocks is evaluated using Bayesian model averaging over vector autoregressions. The model features include equilibria, restrictions on long-run responses, a structural break of unknown …

  19. Improving scale invariant feature transform with local color contrastive descriptor for image classification

    Science.gov (United States)

    Guo, Sheng; Huang, Weilin; Qiao, Yu

    2017-01-01

    Image representation and classification are two fundamental tasks toward vision understanding. Shape and texture provide two key features for visual representation and have been widely exploited in a number of successful local descriptors, e.g., the scale invariant feature transform (SIFT), the local binary pattern descriptor, and the histogram of oriented gradients. Unlike these gradient-based descriptors, this paper presents a simple yet efficient local descriptor, named local color contrastive descriptor (LCCD), which captures the contrastive aspects among local regions or color channels for image representation. LCCD is partly inspired by findings in neuroscience that color contrast plays important roles in visual perception and that there exist strong linkages between color and shape. We leverage f-divergence as a robust measure to estimate the contrastive features between different spatial locations and multiple channels. Our descriptor enriches local image representation with both color and contrast information. Because LCCD does not exploit any gradient information, it does not yield strong performance on its own, but we verified experimentally that LCCD strongly complements SIFT. Extensive experimental results on image classification show that our descriptor substantially improves the performance of SIFT when the two are combined, on three challenging benchmarks: the MIT Indoor-67 database, SUN397, and PASCAL VOC 2007.

  20. Design of scaled down structural models

    Science.gov (United States)

    Simitses, George J.

    1994-07-01

    In the aircraft industry, full scale and large component testing is a very necessary, time consuming, and expensive process. It is essential to find ways by which this process can be minimized without loss of reliability. One possible alternative is the use of scaled down models in testing and use of the model test results in order to predict the behavior of the larger system, referred to herein as prototype. This viewgraph presentation provides justifications and motivation for the research study, and it describes the necessary conditions (similarity conditions) for two structural systems to be structurally similar with similar behavioral response. Similarity conditions provide the relationship between a scaled down model and its prototype. Thus, scaled down models can be used to predict the behavior of the prototype by extrapolating their experimental data. Since satisfying all similarity conditions simultaneously is in most cases impractical, distorted models with partial similarity can be employed. Establishment of similarity conditions, based on the direct use of the governing equations, is discussed and their use in the design of models is presented. Examples include the use of models for the analysis of cylindrical bending of orthotropic laminated beam plates, of buckling of symmetric laminated rectangular plates subjected to uniform uniaxial compression and shear, applied individually, and of vibrational response of the same rectangular plates. Extensions and future tasks are also described.

  1. A New Feature Extraction Method Based on EEMD and Multi-Scale Fuzzy Entropy for Motor Bearing

    Directory of Open Access Journals (Sweden)

    Huimin Zhao

    2016-12-01

    Full Text Available Feature extraction is one of the most important, pivotal, and difficult problems in mechanical fault diagnosis, relating directly to the accuracy of fault diagnosis and the reliability of early fault prediction. Therefore, a new fault feature extraction method, called EDOMFE, based on integrating ensemble empirical mode decomposition (EEMD), mode selection, and multi-scale fuzzy entropy, is proposed in this paper for accurate fault diagnosis. The EEMD method is used to decompose the vibration signal into a series of intrinsic mode functions (IMFs) with different physical significance. A correlation coefficient analysis is used to select the three IMFs that are closest to the original signal. Multi-scale fuzzy entropy, with its ability to effectively distinguish the complexity of different signals, is used to calculate the entropy values of the three selected IMFs, forming a complexity-measure feature vector that is used as the input of a support vector machine (SVM) model for training and constructing an SVM classifier (EOMSMFD, based on EDOMFE and SVM) for fault pattern recognition. Finally, the effectiveness of the proposed method is validated with real bearing vibration signals of a motor under different loads and fault severities. The experimental results show that the proposed EDOMFE method can effectively extract fault features from the vibration signal, and that the proposed EOMSMFD method can accurately diagnose the fault types and fault severities for the inner race fault, the outer race fault, and the rolling element fault of the motor bearing. The proposed method therefore provides a new fault diagnosis technology for rotating machinery.
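
    The multi-scale fuzzy entropy stage can be sketched as follows: coarse-grain the signal at each scale and compute fuzzy entropy with an exponential membership function. The parameter choices (m = 2, r = 0.15·std, n = 2) follow common practice in the fuzzy-entropy literature and are assumptions, not necessarily the paper's settings.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=None, n=2):
    x = np.asarray(x, float)
    r = 0.15 * x.std() if r is None else r
    def phi(m):
        # baseline-removed templates of length m
        templ = np.array([x[i:i + m] - x[i:i + m].mean()
                          for i in range(len(x) - m)])
        # Chebyshev distances between all template pairs
        d = np.abs(templ[:, None, :] - templ[None, :, :]).max(axis=2)
        sim = np.exp(-(d ** n) / r)          # fuzzy (exponential) membership
        iu = np.triu_indices(len(templ), k=1)
        return sim[iu].mean()
    return np.log(phi(m)) - np.log(phi(m + 1))

def multiscale_fuzzy_entropy(x, scales=range(1, 6)):
    out = []
    for s in scales:
        # coarse-grain: average non-overlapping windows of length s
        cg = np.asarray(x[:len(x) // s * s]).reshape(-1, s).mean(axis=1)
        out.append(fuzzy_entropy(cg))
    return np.array(out)
```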

  2. Accounting for small scale heterogeneity in ecohydrologic watershed models

    Science.gov (United States)

    Burke, W.; Tague, C.

    2017-12-01

    Spatially distributed ecohydrologic models are inherently constrained by the spatial resolution of their smallest units, below which land and processes are assumed to be homogeneous. At coarse scales, heterogeneity is often accounted for by computing stores and fluxes of interest over a distribution of land cover types (or other sources of heterogeneity) within spatially explicit modeling units. However, this approach ignores spatial organization and the lateral transfer of water and materials downslope. The challenge is to account for both the role of flow network topology and fine-scale heterogeneity. We present a new approach that defines two levels of spatial aggregation and integrates a spatially explicit network approach with a flexible representation of finer-scale, aspatial heterogeneity. Critically, this solution does not simply increase the resolution of the smallest spatial unit, and so, by comparison, results in improved computational efficiency. The approach is demonstrated by adapting the Regional Hydro-Ecologic Simulation System (RHESSys), an ecohydrologic model widely used to simulate climate, land use, and land management impacts. We illustrate the utility of our approach by showing how the model can be used to better characterize forest thinning impacts on ecohydrology. Forest thinning is typically done at the scale of individual trees, and yet management responses of interest include impacts on watershed-scale hydrology and on downslope riparian vegetation. Our approach allows us to characterize the variability in tree size/carbon reduction and water transfers between neighboring trees while still capturing hillslope- to watershed-scale effects. Our illustrative example demonstrates that accounting for these fine-scale effects can substantially alter model estimates, in some cases shifting the impacts of thinning on downslope water availability from increases to decreases. We conclude by describing other use cases that may benefit from this approach.

  3. A model for AGN variability on multiple time-scales

    Science.gov (United States)

    Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.

    2018-05-01

    We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/L_Edd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/L_Edd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
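
    For reference, the first-order structure function used to combine variability measurements across time-scales can be computed as in the sketch below (regularly sampled toy light curve; real AGN light curves need pair binning over irregular sampling).

```python
import numpy as np

def structure_function(mags, lags):
    # SF(tau) = <|m(t + tau) - m(t)|> over all epoch pairs at lag tau
    mags = np.asarray(mags, float)
    return np.array([np.abs(mags[lag:] - mags[:-lag]).mean() for lag in lags])

rng = np.random.default_rng(2)
lc = np.cumsum(rng.standard_normal(10_000)) * 0.01  # toy random-walk light curve
print(structure_function(lc, lags=[1, 10, 100, 1000]))
```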

  4. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    Long-term energy market models can be used to examine investments in production technologies; however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection … can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate … the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach, we examined how uncertainty in demand and variable costs affects the optimal choice …

  5. A 3D Printing Model Watermarking Algorithm Based on 3D Slicing and Feature Points

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    Full Text Available With the increase of three-dimensional (3D) printing applications in many areas of life, a large amount of 3D printing data is copied, shared, and used several times without any permission from the original providers. Therefore, copyright protection and ownership identification for 3D printing data in communications or commercial transactions are practical issues. This paper presents a novel watermarking algorithm for 3D printing models based on embedding watermark data into the feature points of a 3D printing model. Feature points are determined and computed by the 3D slicing process along the Z axis of the model. A watermark bit is embedded into a feature point by changing the vector length of the feature point in the OXY plane relative to a reference length. The x and y coordinates of the feature point are then changed according to the embedded vector length. Experimental results verified that the proposed algorithm is invisible and robust to geometric attacks such as rotation, scaling, and translation. The proposed algorithm performs better than conventional approaches, with markedly higher accuracy than previous methods.
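
    One plausible reading of the embedding rule is a quantization-style modulation of the feature point's in-plane vector length against the reference length, sketched below. The exact rule in the paper may differ, so treat this as an assumption-labelled illustration rather than the authors' algorithm.

```python
import numpy as np

def embed_bit(x, y, bit, ref_len=1.0):
    # quantize the (x, y) vector length; cell parity encodes one bit
    L = np.hypot(x, y)
    cell = np.floor(L / ref_len)
    if int(cell) % 2 != bit:                 # move into a cell of right parity
        cell += 1
    new_L = (cell + 0.5) * ref_len           # centre of the chosen cell
    scale = new_L / (L + 1e-12)
    return x * scale, y * scale              # z of the slice stays unchanged

def extract_bit(x, y, ref_len=1.0):
    return int(np.floor(np.hypot(x, y) / ref_len)) % 2
```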

  6. Music genre classification via likelihood fusion from multiple feature models

    Science.gov (United States)

    Shiu, Yu; Kuo, C.-C. J.

    2005-01-01

    Music genre provides an efficient way to index songs in a music database and can be used as an effective means to retrieve music of a similar type, i.e., content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine a couple of different features, construct their corresponding parametric models (e.g., GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated to produce a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
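
    A compact sketch of that two-stage pattern with GMMs (via scikit-learn): per-genre, per-feature models produce log-likelihoods (the soft stage), which are then fused by a weighted sum into a hard decision. The feature arrays, component counts, and equal default weights are assumptions; feature extraction itself is omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train(models_features):     # {genre: {feat_name: (n, d) array}}
    return {g: {f: GaussianMixture(n_components=8).fit(X)
                for f, X in feats.items()}
            for g, feats in models_features.items()}

def classify(models, query_feats, weights=None):
    scores = {}
    for g, per_feat in models.items():
        ll = [per_feat[f].score_samples(X).sum()   # soft, per-feature stage
              for f, X in query_feats.items()]
        w = weights or [1.0] * len(ll)
        scores[g] = float(np.dot(w, ll))           # likelihood fusion stage
    return max(scores, key=scores.get)             # hard decision
```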

  7. New Analysis Method Application in Metallographic Images through the Construction of Mosaics Via Speeded Up Robust Features and Scale Invariant Feature Transform

    Directory of Open Access Journals (Sweden)

    Pedro Pedrosa Rebouças Filho

    2015-06-01

    Full Text Available In many applications in metallography and analysis, many regions need to be considered, not only the current region. In analyses with multiple images, the specialist must also evaluate neighboring areas. For example, in metallurgy, welding technology draws on conventional testing and metallographic analysis. In welding, these tests reveal the features of the metal, especially in the Heat-Affected Zone (HAZ), the region where metallurgical problems are most likely to occur. The extent of the Heat-Affected Zone exceeds the area observable through a microscope, and typically multiple images must be mounted onto a larger picture surface to allow study of the entire zone. This image stitching process is performed manually and is subject to all the flaws inherent in human fatigue and distraction. The analysis of grain growth likewise requires the examination of multiple, though not necessarily neighboring, regions, and automated assistance here would be a useful tool for the specialist. In areas such as microscopic metallography, which studies metallurgical products with the aid of a microscope, the assembly of mosaics is done manually, which consumes a lot of time and is also subject to failures due to human limitations. The mosaic technique constructs a larger scene from several small images with corresponding characteristics between them. This article proposes the use of digital image processing to automate the construction of these mosaics from metallographic images. The proposed method is meant to significantly reduce the time required to build the mosaic and to reduce the possibility of failures in assembling the final image, therefore increasing efficiency in obtaining …

  8. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the number of factors required and improves the knowledge of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used indicate which variables appear less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its …
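
    Two of the three rankings discussed (IG, here approximated by mutual information, and RF importances) can be compared in a few lines with scikit-learn; the data below are synthetic stand-ins for the landcover/climate/DEM predictors.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 20))        # e.g. DEM-derived inputs (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.standard_normal(500) > 0).astype(int)

ig = mutual_info_classif(X, y, random_state=0)            # IG-style filter
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

print("IG ranking:", np.argsort(ig)[::-1][:5])
print("RF ranking:", np.argsort(rf.feature_importances_)[::-1][:5])
```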

  9. Comments on intermediate-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, J.; Enqvist, K.; Nanopoulos, D.V.; Olive, K.

    1987-04-23

    Some superstring-inspired models employ intermediate scales m_I of gauge symmetry breaking. Such scales should exceed 10^16 GeV in order to avoid prima facie problems with baryon decay through heavy particles and non-perturbative behaviour of the gauge couplings above m_I. However, the intermediate-scale phase transition does not occur until the temperature of the Universe falls below O(m_W), after which an enormous excess of entropy is generated. Moreover, gauge symmetry breaking by renormalization group-improved radiative corrections is inapplicable because the symmetry-breaking field has no renormalizable interactions at scales below m_I. We also comment on the danger of baryon and lepton number violation in the effective low-energy theory.

  10. Comments on intermediate-scale models

    International Nuclear Information System (INIS)

    Ellis, J.; Enqvist, K.; Nanopoulos, D.V.; Olive, K.

    1987-01-01

    Some superstring-inspired models employ intermediate scales m_I of gauge symmetry breaking. Such scales should exceed 10^16 GeV in order to avoid prima facie problems with baryon decay through heavy particles and non-perturbative behaviour of the gauge couplings above m_I. However, the intermediate-scale phase transition does not occur until the temperature of the Universe falls below O(m_W), after which an enormous excess of entropy is generated. Moreover, gauge symmetry breaking by renormalization group-improved radiative corrections is inapplicable because the symmetry-breaking field has no renormalizable interactions at scales below m_I. We also comment on the danger of baryon and lepton number violation in the effective low-energy theory. (orig.)

  11. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large scale model and data base system is presented. Based on experience in operating and developing a large scale computerized system, the only reasonable way to gain strong management control of such a system is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified; then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application for large scale models and data bases.

  12. Scaled Experimental Modeling of VHTR Plenum Flows

    Energy Technology Data Exchange (ETDEWEB)

    ICONE 15

    2007-04-01

    The Very High Temperature Reactor (VHTR) is the leading candidate for the Next Generation Nuclear Plant (NGNP) Project in the U.S., which has the goal of demonstrating the production of emissions-free electricity and hydrogen by 2015. Various scaled heated-gas and water flow facilities were investigated for modeling VHTR upper and lower plenum flows during the decay heat portion of a pressurized conduction-cooldown scenario and for modeling thermal mixing and stratification (“thermal striping”) in the lower plenum during normal operation. It was concluded, based on phenomena scaling, instrumentation, and other practical considerations, that a heated water flow scale model facility is preferable to a heated gas flow facility and to unheated facilities which use fluids with ranges of density to simulate the density effect of heating. For a heated water flow lower plenum model, both the Richardson numbers and Reynolds numbers may be approximately matched for conduction-cooldown natural circulation conditions. Thermal mixing during normal operation may be simulated, but at lower, though still fully turbulent, Reynolds numbers than in the prototype. Natural circulation flows in the upper plenum may also be simulated in a separate heated water flow facility that uses the same plumbing as the lower plenum model. However, Reynolds number scaling distortions will occur at matching Richardson numbers, due primarily to the necessity of using fewer channels connected to the plenum than in the prototype (which has approximately 11,000 core channels connected to the upper plenum) in an otherwise geometrically scaled model. Experiments conducted in either or both facilities will meet the objectives of providing benchmark data for the validation of codes proposed for NGNP designs and safety studies, as well as providing a better understanding of the complex flow phenomena in the plenums.
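
    To make the similarity argument concrete, the sketch below matches the Richardson number Ri = g·β·ΔT·L/U² between a prototype and a reduced-scale water model and then reports the resulting Reynolds numbers Re = U·L/ν; all property values are illustrative, not the facility's actual numbers:

        # Match Ri between prototype and model, then inspect the Re distortion.
        g = 9.81

        def ri(beta, dT, L, U):
            return g * beta * dT * L / U**2

        def re(U, L, nu):
            return U * L / nu

        # Illustrative prototype (gas) and 1/5-scale model (water) values.
        proto = dict(beta=2.0e-3, dT=200.0, L=10.0, U=1.0, nu=1.0e-4)
        model = dict(beta=2.1e-4, dT=40.0, L=2.0, nu=1.0e-6)

        # Choose the model velocity so the Richardson numbers match.
        U_model = (g * model["beta"] * model["dT"] * model["L"]
                   / ri(proto["beta"], proto["dT"], proto["L"], proto["U"])) ** 0.5
        print("model velocity for Ri match:", U_model)
        print("Re prototype:", re(proto["U"], proto["L"], proto["nu"]))
        print("Re model:   ", re(U_model, model["L"], model["nu"]))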

  13. Small-scale structure and the Lyman-α forest baryon acoustic oscillation feature

    Science.gov (United States)

    Hirata, Christopher M.

    2018-02-01

    The baryon-acoustic oscillation (BAO) feature in the Lyman-α forest is a key probe of the cosmic expansion rate at redshifts z ~ 2.5, well before dark energy is believed to have become significant. A key advantage of the BAO as a standard ruler is that it is a sharp feature and hence is more robust against broad-band systematic effects than other cosmological probes. However, if the Lyman-α forest transmission is sensitive to the initial streaming velocity of the baryons relative to the dark matter, then the BAO peak position can be shifted. Here we investigate this sensitivity using a suite of hydrodynamic simulations of small regions of the intergalactic medium with a range of box sizes and physics assumptions; each simulation starts from initial conditions at the kinematic decoupling era (z ~ 1059), undergoes a discrete change from neutral gas to ionized gas thermal evolution at reionization (z ~ 8), and is finally processed into a Lyman-α forest transmitted flux cube. Streaming velocities suppress small-scale structure, leading to less violent relaxation after reionization. The changes in the gas distribution and temperature-density relation at low redshift are more subtle, due to the convergent temperature evolution in the ionized phase. The change in the BAO scale is estimated to be of the order of 0.12 per cent at z = 2.5; some of the major uncertainties and avenues for future improvement are discussed. The predicted streaming velocity shift would be a subdominant but not negligible effect (of order 0.26σ) for the upcoming DESI Lyman-α forest survey, and exceeds the cosmic variance floor.

  14. Fine-scale features on bioreplicated decoys of the emerald ash borer provide necessary visual verisimilitude

    Science.gov (United States)

    Domingue, Michael J.; Pulsifer, Drew P.; Narkhede, Mahesh S.; Engel, Leland G.; Martín-Palma, Raúl J.; Kumar, Jayant; Baker, Thomas C.; Lakhtakia, Akhlesh

    2014-03-01

    The emerald ash borer (EAB), Agrilus planipennis, is an invasive tree-killing pest in North America. Like other buprestid beetles, it has an iridescent coloring, produced by a periodically layered cuticle whose reflectance peaks at a 540 nm wavelength. The males perform a visually mediated ritualistic mating flight directly onto females poised on sunlit leaves. We attempted to evoke this behavior using artificial visual decoys of three types. To fabricate decoys of the first type, a polymer sheet coated with a Bragg-stack reflector was loosely stamped by a bioreplicating die. For decoys of the second type, a polymer sheet coated with a Bragg-stack reflector was heavily stamped by the same die and then painted green. Every decoy of these two types had an underlying black absorber layer. Decoys of the third type were produced by a rapid prototyping machine and painted green. Fine-scale features were absent on the third type. Experiments were performed in an American ash forest infested with EAB, and in a European oak forest home to a similar pest, the two-spotted oak borer (TSOB), Agrilus biguttatus. When pinned to leaves, dead EAB females, dead TSOB females, and bioreplicated decoys of both types often evoked the complete ritualized flight behavior. Males also initiated approaches to the rapidly prototyped decoy, but would divert elsewhere without making contact. The attraction of the bioreplicated decoys was also demonstrated by applying a high dc voltage across the decoys, which stunned and killed approaching beetles. Thus, true bioreplication with fine-scale features is necessary to fully evoke ritualized visual responses in insects, and provides an opportunity for developing insect-trapping technologies.

  15. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Yuchou Chang

    2008-02-01

    Full Text Available Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.
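
    The two-stage idea, quantize color first and then run SIFT on the index image, can be sketched as follows; k-means stands in for the Fibonacci lattice-quantization used by the authors, and the frame file name is hypothetical:

        import cv2
        import numpy as np

        frame = cv2.imread("frame.png")
        pixels = frame.reshape(-1, 3).astype(np.float32)

        # Stage 1: quantize the color image to a small palette of color indices
        # (k-means here, in place of Fibonacci lattice-quantization).
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
        _, labels, _ = cv2.kmeans(pixels, 16, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
        index_image = labels.reshape(frame.shape[:2]).astype(np.uint8)

        # Stage 2: extract SIFT features from the color-index image, so the
        # descriptors reflect color structure rather than grayscale alone.
        sift = cv2.SIFT_create()
        keypoints, descriptors = sift.detectAndCompute(index_image * 16, None)
        print(len(keypoints), "keypoints")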

  16. Unsupervised Video Shot Detection Using Clustering Ensemble with a Color Global Scale-Invariant Feature Transform Descriptor

    Directory of Open Access Journals (Sweden)

    Hong Yi

    2008-01-01

    Full Text Available Scale-invariant feature transform (SIFT) transforms a grayscale image into scale-invariant coordinates of local features that are invariant to image scale, rotation, and changing viewpoints. Because of its scale-invariant properties, SIFT has been successfully used for object recognition and content-based image retrieval. The biggest drawback of SIFT is that it uses only grayscale information and misses important visual information regarding color. In this paper, we present the development of a novel color feature extraction algorithm that addresses this problem, and we also propose a new clustering strategy using clustering ensembles for video shot detection. Based on Fibonacci lattice-quantization, we develop a novel color global scale-invariant feature transform (CGSIFT) for better description of color contents in video frames for video shot detection. CGSIFT first quantizes a color image, representing it with a small number of color indices, and then uses SIFT to extract features from the quantized color index image. We also develop a new space description method using small image regions to represent global color features as the second step of CGSIFT. Clustering ensembles focusing on knowledge reuse are then applied to obtain better clustering results than using single clustering methods for video shot detection. Evaluation of the proposed feature extraction algorithm and the new clustering strategy using clustering ensembles reveals very promising results for video shot detection.

  17. Adaptive Correlation Model for Visual Tracking Using Keypoints Matching and Deep Convolutional Feature

    Directory of Open Access Journals (Sweden)

    Yuankun Li

    2018-02-01

    Full Text Available Although correlation filter (CF)-based visual tracking algorithms have achieved appealing results, some problems remain. When the target object undergoes long-term occlusion or scale variation, the correlation model used in existing CF-based algorithms will inevitably learn some non-target or partial-target information. In order to avoid model contamination and enhance the adaptability of model updating, we introduce a keypoints matching strategy and adjust the model learning rate dynamically according to the matching score. Moreover, the proposed approach extracts convolutional features from a deep convolutional neural network (DCNN) to accurately estimate the position and scale of the target. Experimental results demonstrate that the proposed tracker achieves satisfactory performance in a wide range of challenging tracking scenarios.
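
    A hedged sketch of the adaptive-update idea: score how well keypoints of the stored target template match the current image patch, and scale the correlation-model learning rate by that score, so occlusion or drift slows or freezes model updating (ORB matching and the threshold are illustrative stand-ins for the paper's exact strategy):

        import cv2

        orb = cv2.ORB_create()
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

        def match_score(template, patch):
            """Fraction of template keypoints that find a good match in the patch."""
            kp_t, des_t = orb.detectAndCompute(template, None)
            kp_p, des_p = orb.detectAndCompute(patch, None)
            if des_t is None or des_p is None or len(kp_t) == 0:
                return 0.0
            matches = matcher.match(des_t, des_p)
            good = [m for m in matches if m.distance < 40]  # illustrative threshold
            return len(good) / len(kp_t)

        def adaptive_learning_rate(template, patch, base_lr=0.02):
            """Low match score (occlusion, contamination) -> small update rate."""
            return base_lr * match_score(template, patch)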

  18. Large scale features of the hot component of the interstellar medium

    International Nuclear Information System (INIS)

    Garmire, G.P.

    1983-01-01

    The interstellar medium contains identifiable hot plasma clouds occupying up to about 35% of the volume of the local galactic disc. The temperature of these clouds is not uniform but ranges from 10^5 up to 4 × 10^6 K. Besides the high temperature, which places the emission spectrum in the soft X-ray band, the implied pressure of the hot plasma compared to the cooler gas reveals the importance of this component in determining the motions and evolution of the cooler gas in the disc, as well as providing a source of hot gas which may extend above the galactic disc to form a corona. The author presents data from the A-2 soft X-ray experiment on the HEAO-1 spacecraft concerning the large scale features of this gas. These features are interpreted in terms of the late phases of supernova expansion, multiple supernovae and the possible creation of a hot halo surrounding the region of the galactic nucleus. (Auth.)

  19. A large-scale dataset of solar event reports from automated feature recognition modules

    Science.gov (United States)

    Schuh, Michael A.; Angryk, Rafal A.; Martens, Petrus C.

    2016-05-01

    The massive repository of images of the Sun captured by the Solar Dynamics Observatory (SDO) mission has ushered in the era of Big Data for Solar Physics. In this work, we investigate the entire public collection of events reported to the Heliophysics Event Knowledgebase (HEK) from automated solar feature recognition modules operated by the SDO Feature Finding Team (FFT). With the SDO mission recently surpassing five years of operations, and over 280,000 event reports for seven types of solar phenomena, we present the broadest and most comprehensive large-scale dataset of the SDO FFT modules to date. We also present numerous statistics on these modules, providing valuable contextual information for better understanding and validation of the individual event reports and the entire dataset as a whole. After extensive data cleaning through exploratory data analysis, we highlight several opportunities for knowledge discovery from data (KDD). Through the important prerequisite analyses presented here, the results of KDD from Solar Big Data will be overall more reliable and better understood. As the SDO mission remains operational over the coming years, these datasets will continue to grow in size and value. Future versions of this dataset will be analyzed in the general framework established in this work and maintained publicly online for easy access by the community.

  20. Traffic sign recognition based on a context-aware scale-invariant feature transform approach

    Science.gov (United States)

    Yuan, Xue; Hao, Xiaoli; Chen, Houjin; Wei, Xueye

    2013-10-01

    A new context-aware scale-invariant feature transform (CASIFT) approach is proposed, which is designed for use in traffic sign recognition (TSR) systems. The following issues remain in previous works in which SIFT is used for matching or recognition: (1) SIFT is unable to provide color information; (2) SIFT only focuses on local features while ignoring the distribution of global shapes; (3) selecting the template with the maximum number of matching points as the final result is unstable, especially for images with simple patterns; and (4) SIFT is liable to produce errors when different images share the same local features. In order to resolve these problems, a new CASIFT approach is proposed. The contributions of the work are as follows: (1) color angular patterns are used to provide color-distinguishing information; (2) a CASIFT which effectively combines local and global information is proposed; and (3) a method for computing the similarity between two images is proposed, which focuses on the distribution of the matching points rather than using the traditional SIFT approach of selecting the template with the maximum number of matching points as the final result. The proposed approach is particularly effective in dealing with traffic signs which have rich colors and varied global shape distributions. Experiments are performed to validate the effectiveness of the proposed approach in TSR systems, and the experimental results are satisfying even for images containing traffic signs that have been rotated, damaged, altered in color, have undergone affine transformations, or were photographed under different weather or illumination conditions.

  1. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Science.gov (United States)

    Eyben, Florian; Weninger, Felix; Lehment, Nicolas; Schuller, Björn; Rigoll, Gerhard

    2013-01-01

    Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role in tomorrow's intelligent systems, enabling search for movies with a particular mood, computer-aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing focuses exclusively on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lead to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to 0.398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.
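
    The segmental pattern itself is simple to picture: cut the stream into fixed-length segments, pool frame-level low-level descriptors into one vector per segment, and hand those vectors to a classifier. The sketch below uses plain intensity histograms with mean/std pooling as a schematic stand-in for the authors' audio-visual feature set:

        import numpy as np

        def frame_histogram(frame, bins=16):
            """Low-level descriptor of one grayscale frame: intensity histogram."""
            hist, _ = np.histogram(frame, bins=bins, range=(0, 255), density=True)
            return hist

        def segment_features(frames, seg_len=25):
            """One pooled feature vector per fixed-length segment of frames."""
            feats = []
            for start in range(0, len(frames) - seg_len + 1, seg_len):
                seg = np.stack([frame_histogram(f) for f in frames[start:start + seg_len]])
                feats.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))
            return np.array(feats)

        # The segment vectors would then feed a standard classifier trained on
        # "violent" vs. "non-violent" segment labels.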

  2. Affective video retrieval: violence detection in Hollywood movies by large-scale segmental feature extraction.

    Directory of Open Access Journals (Sweden)

    Florian Eyben

    Full Text Available Without doubt general video and sound, as found in large multimedia archives, carry emotional information. Thus, audio and video retrieval by certain emotional categories or dimensions could play a central role in tomorrow's intelligent systems, enabling search for movies with a particular mood, computer-aided scene and sound design in order to elicit certain emotions in the audience, etc. Yet, the lion's share of research in affective computing focuses exclusively on signals conveyed by humans, such as affective speech. Uniting the fields of multimedia retrieval and affective computing is believed to lead to a multiplicity of interesting retrieval applications, and at the same time to benefit affective computing research by moving its methodology "out of the lab" to real-world, diverse data. In this contribution, we address the problem of finding "disturbing" scenes in movies, a scenario that is highly relevant for computer-aided parental guidance. We apply large-scale segmental feature extraction combined with audio-visual classification to the particular task of detecting violence. Our system performs fully data-driven analysis including automatic segmentation. We evaluate the system in terms of mean average precision (MAP) on the official data set of the MediaEval 2012 evaluation campaign's Affect Task, which consists of 18 original Hollywood movies, achieving up to 0.398 MAP on unseen test data in full realism. An in-depth analysis of the worth of individual features with respect to the target class and the system errors is carried out and reveals the importance of peak-related audio feature extraction and low-level histogram-based video analysis.

  3. Fast method for reactor and feature scale coupling in ALD and CVD

    Science.gov (United States)

    Yanguas-Gil, Angel; Elam, Jeffrey W.

    2017-08-08

    Transport and surface chemistry of certain deposition techniques are modeled. The methods model the transport inside nanostructures as a single-particle discrete Markov chain process. This approach decouples the complexity of the surface chemistry from the transport model, thus allowing its application under general surface chemistry conditions, including atomic layer deposition (ALD) and chemical vapor deposition (CVD). The methods provide for the determination of statistical information about the trajectory of individual molecules, such as the average interaction time or the number of wall collisions for molecules entering the nanostructures, as well as for tracking the relative contributions to thin-film growth of different independent reaction pathways at each point of the feature.
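
    The decoupling described here can be illustrated with a toy single-particle model: a molecule walks along the depth of a high-aspect-ratio feature and, at each wall collision, reacts with a fixed sticking probability or eventually escapes through the opening. The sticking probability is the only chemistry input; all numbers are illustrative:

        import random

        def simulate_molecule(n_sites=50, p_stick=0.01, rng=random.Random(0)):
            """Return (reacted, wall_collisions, final_depth) for one molecule."""
            pos, collisions = 0, 0
            while True:
                collisions += 1
                if rng.random() < p_stick:       # surface reaction at this collision
                    return True, collisions, pos
                pos += rng.choice((-1, 1))       # diffuse one site up or down
                pos = min(pos, n_sites - 1)      # reflect at the closed bottom
                if pos < 0:                      # escaped through the opening
                    return False, collisions, pos

        results = [simulate_molecule() for _ in range(10000)]
        reacted = [r for r in results if r[0]]
        print("reaction probability:", len(reacted) / len(results))
        print("mean collisions before reacting:",
              sum(r[1] for r in reacted) / max(len(reacted), 1))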

  4. Mental Imagery Scale: a new measurement tool to assess structural features of mental representations

    International Nuclear Information System (INIS)

    D'Ercole, Martina; Giannini, Anna Maria; Castelli, Paolo; Sbrilli, Antonella

    2010-01-01

    Mental imagery is a quasi-perceptual experience which resembles perceptual experience, but occurs without (appropriate) external stimuli. It is a form of mental representation and is often considered centrally involved in visuo-spatial reasoning and inventive and creative thought. Although imagery ability is assumed to be functionally independent of verbal systems, it is still considered to interact with verbal representations, enabling objects to be named and names to evoke images. In the literature, most measurement tools for evaluating imagery capacity are self-report instruments focusing on differences between individuals. In the present work, we applied a Mental Imagery Scale (MIS) to mental images derived from verbal descriptions in order to assess the structural features of such mental representations. This is a key theme for those disciplines which need to turn objects and representations into words and vice versa, such as art or architectural didactics. To this aim, an MIS questionnaire was administered to 262 participants. The questionnaire, originally consisting of a 33-item 5-step Likert scale, was reduced to 28 items covering six areas: (1) Image Formation Speed, (2) Permanence/Stability, (3) Dimensions, (4) Level of Detail/Grain, (5) Distance and (6) Depth of Field or Perspective. Factor analysis confirmed our six-factor hypothesis underlying the 28 items.

  5. Biointerface dynamics--Multi scale modeling considerations.

    Science.gov (United States)

    Pajic-Lijakovic, Ivana; Levic, Steva; Nedovic, Viktor; Bugarski, Branko

    2015-08-01

    The irreversible nature of matrix structural changes around immobilized cell aggregates caused by cell expansion is considered within Ca-alginate microbeads. It is related to various effects: (1) cell-bulk surface effects (cell-polymer mechanical interactions) and cell surface-polymer surface effects (cell-polymer electrostatic interactions) at the bio-interface, (2) polymer-bulk volume effects (polymer-polymer mechanical and electrostatic interactions) within the perturbed boundary layers around the cell aggregates, (3) cumulative surface and volume effects within parts of the microbead, and (4) macroscopic effects within the microbead as a whole, based on multi-scale modeling approaches. All modeling levels are discussed at two time scales, i.e., the long time scale (cell growth time) and the short time scale (cell rearrangement time). Matrix structural changes result in resistance stress generation, which has a feedback impact on: (1) single and collective cell migrations, (2) cell deformation and orientation, (3) the decrease of cell-to-cell separation distances, and (4) cell growth. Herein, an attempt is made to discuss and connect the various multi-scale modeling approaches, over a range of time and space scales, that have been proposed in the literature, in order to shed further light on this complex cause-consequence phenomenon, which induces the anomalous nature of energy dissipation during the structural changes of cell aggregates and matrix, quantified by the damping coefficients (the orders of the fractional derivatives). Deeper insight into the partial disintegration of the matrix within the boundary layers is useful for understanding and minimizing polymer matrix resistance stress generation within the interface, and on that basis, optimizing cell growth. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Elysium region, mars: Tests of lithospheric loading models for the formation of tectonic features

    International Nuclear Information System (INIS)

    Hall, J.L.; Solomon, S.C.; Head, J.W.

    1986-01-01

    The second largest volcanic province on Mars lies in the Elysium region. Like the larger Tharsis province, Elysium is marked by a topographic rise and a broad free-air gravity anomaly and also exhibits a complex assortment of tectonic and volcanic features. We test the hypothesis that the tectonic features in the Elysium region are the product of stresses produced by loading of the Martian lithosphere. We consider loading at three different scales: local loading by individual volcanoes, regional loading of the lithosphere from above or below, and quasi-global loading by Tharsis. A comparison of flexural stresses with lithospheric strength and with the inferred maximum depth of faulting confirms that concentric graben around Elysium Mons can be explained as resulting from local flexure of an elastic lithosphere about 50 km thick in response to the volcano load. Volcanic loading on a regional scale, however, leads to predicted stresses inconsistent with all observed tectonic features, suggesting that loading by widespread emplacement of thick plains deposits was not an important factor in the tectonic evolution of the Elysium region. A number of linear extensional features oriented generally NW-SE may have been the result of flexural uplift of the lithosphere on the scale of the Elysium rise. The global stress field associated with the support of the Tharsis rise appears to have influenced the development of many of the tectonic features in the Elysium region, including Cerberus Rupes and the systems of ridges in eastern and western Elysium. The comparisons of stress models for Elysium with the preserved tectonic features support a succession of stress fields operating at different times in the region.

  7. Complex scaling in the cluster model

    International Nuclear Information System (INIS)

    Kruppa, A.T.; Lovas, R.G.; Gyarmati, B.

    1987-01-01

    To find the positions and widths of resonances, a complex scaling of the intercluster relative coordinate is introduced into the resonating-group model. In the generator-coordinate technique used to solve the resonating-group equation, the complex scaling requires minor changes in the formulae and code. Finding the resonances does not require any preliminary guess or explicit reference to any asymptotic prescription. The procedure is applied to the resonances in the relative motion of two ground-state α clusters in 8Be, but is appropriate for any system consisting of two clusters. (author) 23 refs.; 5 figs

  8. Modeling HAZ hardness and weld features with BPN technology

    International Nuclear Information System (INIS)

    Morinishi, S.; Bibby, M.J.; Chan, B.

    2000-01-01

    A BPN (back propagation network) system for predicting HAZ (heat-affected zone) hardnesses and GMAW (gas metal arc) weld features (size and shape) is described in this presentation. Among other things, issues of network structure, training and testing data selection, software efficiency and user interface are discussed. The system is evaluated by comparing network output with experimentally measured test data in the first instance, and with regression methods available for this purpose thereafter. The potential of the web for exchanging weld process data and for accessing models generated with this system is addressed. In this regard the software has been made available on the Cambridge University 'steel' and 'neural' websites. In addition, Java-coded software has recently been generated to provide web flexibility and accessibility. Over and above this, the possibility of offering an on-line 'server' training service, arranged to capture user data (user identification, measured welding parameters and features) and trained models for the use of the entire welding community, is described. While the possibility of such an exchange is attractive, there are several difficulties in designing such a system. Server software design, computing resources, database and communications considerations are some of the issues that must be addressed with regard to a server-centered training and database system before it becomes reality. (author)
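
    A minimal modern equivalent of such a back-propagation model can be sketched with scikit-learn on synthetic welding-parameter data; the network size, parameter ranges, and hardness relation below are invented for illustration and do not reproduce the described system:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        # Columns: current (A), voltage (V), travel speed (mm/s).
        X = rng.uniform([150, 20, 3], [350, 35, 12], size=(400, 3))
        heat_input = X[:, 0] * X[:, 1] / X[:, 2]                      # crude heat-input proxy
        y = 350 - 0.05 * heat_input + rng.normal(scale=5, size=400)   # synthetic HAZ hardness

        scaler = StandardScaler().fit(X)
        net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
        net.fit(scaler.transform(X), y)
        print("predicted hardness:", net.predict(scaler.transform([[250.0, 28.0, 6.0]]))[0])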

  9. Advancing Affect Modeling via Preference Learning and Unsupervised Feature Extraction

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez

    … difficulties, ordinal reports such as rankings and ratings can yield more reliable affect annotations than alternative tools. This thesis explores preference learning methods to automatically learn computational models from ordinal annotations of affect. In particular, an extensive collection of training strategies (error functions and training algorithms) for artificial neural networks is examined across synthetic and psycho-physiological datasets, and compared against support vector machines and Cohen’s method. Results reveal the best training strategies for neural networks and suggest their superiority over the other examined methods. The second challenge addressed in this thesis refers to the extraction of relevant information from physiological modalities. Deep learning is proposed as an automatic approach to extract input features for models of affect from physiological signals. Experiments…

  10. Geometrical scaling vs factorizable eikonal models

    CERN Document Server

    Kiang, D

    1975-01-01

    Among the various theoretical explanations or interpretations of the experimental data on the differential cross-sections of elastic proton-proton scattering at the CERN ISR, the following two seem most remarkable: A) the excellent agreement of the Chou-Yang model prediction of dσ/dt with data at √s = 53 GeV, B) the general manifestation of geometrical scaling (GS). The paper confronts GS with eikonal models with factorizable opaqueness, with special emphasis on the Chou-Yang model. (12 refs).

  11. Determination of the interaction parameter and topological scaling features of symmetric star polymers in dilute solution

    KAUST Repository

    Rai, Durgesh K.; Beaucage, Gregory; Ratkanthwar, Kedar; Beaucage, Peter; Ramachandran, Ramnath; Hadjichristidis, Nikolaos

    2015-01-01

    Star polymers provide model architectures for understanding the dynamic and rheological effects of chain confinement for a range of complex topological structures like branched polymers, colloids, and micelles. It is important to describe the structure of such macromolecular topologies using small-angle neutron and x-ray scattering to facilitate understanding of their structure-property relationships. Modeling of scattering from linear, Gaussian polymers, such as in the melt, has applied the random phase approximation using the Debye polymer scattering function. The Flory-Huggins interaction parameter can be obtained by this method using neutron scattering. Gaussian scaling no longer applies for more complicated chain topologies or when chains are in good solvents. For symmetric star polymers, chain scaling can differ from ν = 0.5 (d_f = 2) due to excluded volume, steric interaction between arms, and enhanced density due to branching. Further, correlation between arms in a symmetric star leads to an interference term in the scattering function, first described by Benoit for Gaussian chains. In this work, a scattering function is derived which accounts for interarm correlations in symmetric star polymers as well as the polymer-solvent interaction parameter for chains of arbitrary scaling dimension, using a hybrid Unified scattering function. The approach is demonstrated for linear, four-arm and eight-arm polyisoprene stars in deuterated p-xylene.
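
    For reference, the interarm-correlation (interference) effect enters through Benoit's form factor for a Gaussian star, which can be evaluated directly; this Gaussian special case is only a point of comparison for the paper's hybrid Unified function with arbitrary scaling dimension, and the numbers are illustrative:

        import numpy as np

        def benoit_star(q, rg_arm, f):
            """Benoit form factor of a Gaussian star with f arms.

            v = (q * Rg_arm)**2 with Rg_arm the radius of gyration of one arm;
            the (f - 1)/2 term carries the interarm interference contribution.
            """
            v = (q * rg_arm) ** 2
            return (2.0 / (f * v**2)) * (v - 1.0 + np.exp(-v)
                                         + 0.5 * (f - 1.0) * (1.0 - np.exp(-v)) ** 2)

        q = np.logspace(-2, 0, 5)                 # scattering vector (illustrative units)
        print(benoit_star(q, rg_arm=30.0, f=4))   # tends to 1 as q -> 0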

  12. Determination of the interaction parameter and topological scaling features of symmetric star polymers in dilute solution

    KAUST Repository

    Rai, Durgesh K.

    2015-07-15

    Star polymers provide model architectures for understanding the dynamic and rheological effects of chain confinement for a range of complex topological structures like branched polymers, colloids, and micelles. It is important to describe the structure of such macromolecular topologies using small-angle neutron and x-ray scattering to facilitate understanding of their structure-property relationships. Modeling of scattering from linear, Gaussian polymers, such as in the melt, has applied the random phase approximation using the Debye polymer scattering function. The Flory-Huggins interaction parameter can be obtained by this method using neutron scattering. Gaussian scaling no longer applies for more complicated chain topologies or when chains are in good solvents. For symmetric star polymers, chain scaling can differ from ν = 0.5 (d_f = 2) due to excluded volume, steric interaction between arms, and enhanced density due to branching. Further, correlation between arms in a symmetric star leads to an interference term in the scattering function, first described by Benoit for Gaussian chains. In this work, a scattering function is derived which accounts for interarm correlations in symmetric star polymers as well as the polymer-solvent interaction parameter for chains of arbitrary scaling dimension, using a hybrid Unified scattering function. The approach is demonstrated for linear, four-arm and eight-arm polyisoprene stars in deuterated p-xylene.

  13. Probabilistic, meso-scale flood loss modelling

    Science.gov (United States)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2016-04-01

    Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments, and even less so for flood loss modelling. The state of the art in flood loss modelling is still the use of simple, deterministic approaches such as stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of the approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany (Botto et al. submitted). The application of bagging decision tree based loss models provides a probability distribution of estimated loss per municipality. Validation is undertaken on the one hand via a comparison with eight deterministic loss models, including stage-damage functions as well as multi-variate models. On the other hand, the results are compared with official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto, A.; Kreibich, H.; Merz, B.; Schröter, K. (submitted): Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
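
    The probabilistic character of the approach can be sketched as follows: a bagged ensemble of regression trees is queried estimator by estimator, so each land-use unit receives a loss distribution rather than a point value. Data and predictors here are placeholders, not the BT-FLEMO model itself:

        import numpy as np
        from sklearn.ensemble import BaggingRegressor

        rng = np.random.default_rng(0)
        # Placeholder predictors: e.g. water depth, duration, asset value, precaution.
        X = rng.uniform(size=(300, 4))
        y = 50 * X[:, 0] + 10 * X[:, 1] * X[:, 2] + rng.normal(scale=2, size=300)

        # Bagged decision trees (the default base estimator is a regression tree).
        bt = BaggingRegressor(n_estimators=200, random_state=0).fit(X, y)

        # Per-estimator predictions give an empirical loss distribution for a unit.
        x_new = np.array([[0.6, 0.3, 0.8, 0.1]])
        draws = np.array([est.predict(x_new)[0] for est in bt.estimators_])
        print("median loss:", np.median(draws),
              "5-95% range:", np.percentile(draws, [5, 95]))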

  14. Spatiotemporal exploratory models for broad-scale survey data.

    Science.gov (United States)

    Fink, Daniel; Hochachka, Wesley M; Zuckerberg, Benjamin; Winkler, David W; Shaby, Ben; Munson, M Arthur; Hooker, Giles; Riedewald, Mirek; Sheldon, Daniel; Kelling, Steve

    2010-12-01

    The distributions of animal populations change and evolve through time. Migratory species exploit different habitats at different times of the year. Biotic and abiotic features that determine where a species lives vary due to natural and anthropogenic factors. This spatiotemporal variation needs to be accounted for in any modeling of species' distributions. In this paper we introduce a semiparametric model that provides a flexible framework for analyzing dynamic patterns of species occurrence and abundance from broad-scale survey data. The spatiotemporal exploratory model (STEM) adds essential spatiotemporal structure to existing techniques for developing species distribution models through a simple parametric structure without requiring a detailed understanding of the underlying dynamic processes. STEMs use a multi-scale strategy to differentiate between local and global-scale spatiotemporal structure. A user-specified species distribution model accounts for spatial and temporal patterning at the local level. These local patterns are then allowed to "scale up" via ensemble averaging to larger scales. This makes STEMs especially well suited for exploring distributional dynamics arising from a variety of processes. Using data from eBird, an online citizen science bird-monitoring project, we demonstrate that monthly changes in distribution of a migratory species, the Tree Swallow (Tachycineta bicolor), can be more accurately described with a STEM than with a conventional bagged decision tree model in which spatiotemporal structure has not been imposed. We also demonstrate that there is no loss of model predictive power when a STEM is used to describe a spatiotemporal distribution with very little spatiotemporal variation: the distribution of a nonmigratory species, the Northern Cardinal (Cardinalis cardinalis).
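
    The "scale up via ensemble averaging" idea can be pictured generically: fit many base models, each on a restricted spatial block, then average every model whose block covers a prediction point. The block scheme and data below are schematic, not eBird data or the STEM implementation:

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        coords = rng.uniform(0, 100, size=(2000, 2))          # observation locations
        occupancy = (np.sin(coords[:, 0] / 10) + rng.normal(0, 0.3, 2000) > 0).astype(float)

        models = []
        for _ in range(100):
            # One random square block defines the support of one local model.
            x0, y0, size = rng.uniform(0, 70), rng.uniform(0, 70), 30.0
            inside = ((coords[:, 0] >= x0) & (coords[:, 0] < x0 + size) &
                      (coords[:, 1] >= y0) & (coords[:, 1] < y0 + size))
            if inside.sum() > 50:
                m = DecisionTreeRegressor(max_depth=4).fit(coords[inside], occupancy[inside])
                models.append((m, (x0, y0, size)))

        def ensemble_predict(pt):
            """Average all local models whose block contains the point."""
            preds = [m.predict([pt])[0] for m, (x0, y0, s) in models
                     if x0 <= pt[0] < x0 + s and y0 <= pt[1] < y0 + s]
            return np.mean(preds) if preds else np.nan

        print(ensemble_predict([50.0, 50.0]))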

  15. Drift-Scale THC Seepage Model

    Energy Technology Data Exchange (ETDEWEB)

    C.R. Bryan

    2005-02-17

    The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with "Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration" (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, "Model Validation for the DS THC Seepage Model," of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for "Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms" (NRC 2003 [DIRS 163274]) as being applicable to this report; however, at variance with the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPs not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report and have been added to the list of excluded FEPs in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, "Models". This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral

  16. Drift-Scale THC Seepage Model

    International Nuclear Information System (INIS)

    C.R. Bryan

    2005-01-01

    The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with "Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration" (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, "Model Validation for the DS THC Seepage Model," of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for "Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms" (NRC 2003 [DIRS 163274]) as being applicable to this report; however, at variance with the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPs not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report and have been added to the list of excluded FEPs in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, "Models". This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC submodel uses a drift-scale

  17. Scale Model Thruster Acoustic Measurement Results

    Science.gov (United States)

    Vargas, Magda; Kenny, R. Jeremy

    2013-01-01

    The Space Launch System (SLS) Scale Model Acoustic Test (SMAT) is a 5% scale representation of the SLS vehicle, mobile launcher, tower, and launch pad trench. The SLS launch propulsion system will comprise Rocket Assisted Take-Off (RATO) motors representing the solid boosters and 4 gaseous hydrogen (GH2) thrusters representing the core engines. The GH2 thrusters were tested in a horizontal configuration in order to characterize their performance. In Phase 1, a single thruster was fired to determine the engine performance parameters necessary for scaling a single engine. A cluster configuration, consisting of the 4 thrusters, was tested in Phase 2 to integrate the system and determine the combined performance. Acoustic and overpressure data were collected during both test phases in order to characterize the system's acoustic performance. The results from the single-thruster and 4-thruster systems are discussed and compared.

  18. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to the understanding of network modeling, investigating its structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as the power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that were tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomizing and satisfying some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
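
    Of the generators mentioned, Kronecker product modeling is the easiest to sketch: a small initiator matrix of edge probabilities is Kronecker-powered into a large probability matrix and then sampled. This toy version ignores the efficiency tricks of production generators such as R-MAT:

        import numpy as np

        rng = np.random.default_rng(0)

        # 2x2 initiator of edge probabilities; its Kronecker powers define the model.
        initiator = np.array([[0.9, 0.5],
                              [0.5, 0.1]])

        def kronecker_graph(initiator, k):
            """Sample an adjacency matrix from the k-th Kronecker power of the initiator."""
            probs = initiator.copy()
            for _ in range(k - 1):
                probs = np.kron(probs, initiator)   # node count doubles each step
            return (rng.random(probs.shape) < probs).astype(int)

        adj = kronecker_graph(initiator, 8)          # 256-node sample
        print("nodes:", adj.shape[0], "edges:", int(adj.sum()))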

  19. Features that contribute to the usefulness of low-fidelity models for surgical skills training

    DEFF Research Database (Denmark)

    Langebæk, Rikke; Berendt, Mette; Pedersen, Lene Tanggaard

    2012-01-01

    of models were developed to be used in a basic surgical skills course for veterinary students. The models were low fidelity, having limited resemblance to real animals. The aim of the present study was to describe the students' learning experience with the models and to report their perception of the usefulness of the models in applying the trained skills to live animal surgery. One hundred and forty-six veterinary fourth-year students evaluated the models on a four-point Likert scale. Of these, 26 additionally participated in individual semistructured interviews. The survey results showed that 75 per cent … educational tools in preparation for live animal surgery. However, there are specific features to take into account when developing models in order for students to perceive them as useful…

  20. Dataset of coded handwriting features for use in statistical modelling

    Directory of Open Access Journals (Sweden)

    Anna Agius

    2018-02-01

    Full Text Available The data presented here is related to the article titled “Using handwriting to infer a writer's country of origin for forensic intelligence purposes” (Agius et al., 2017 [1]). This article reports original writer, spatial and construction characteristic data for thirty-seven English Australian writers (in this study, English writers were Australians who had learnt to write in New South Wales (NSW)) and thirty-seven Vietnamese writers. All of these characteristics were coded and recorded in Microsoft Excel 2013 (version 15.31). The construction characteristics coded were extracted from only seven characters: ‘g’, ‘h’, ‘th’, ‘M’, ‘0’, ‘7’ and ‘9’. The coded format of the writer, spatial and construction characteristics is made available in this Data in Brief in order to allow others to perform statistical analyses and modelling to investigate whether there is a relationship between the handwriting features and the nationality of the writer, whether the two nationalities can be differentiated, and to employ mathematical techniques capable of characterising the extracted features from each participant.

  1. Bayesian latent feature modeling for modeling bipartite networks with overlapping groups

    DEFF Research Database (Denmark)

    Jørgensen, Philip H.; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2016-01-01

    Bi-partite networks are commonly modelled using latent class or latent feature models. Whereas the existing latent class models admit marginalization of parameters specifying the strength of interaction between groups, existing latent feature models do not admit analytical marginalization...... by the notion of community structure such that the edge density within groups is higher than between groups. Our model further assumes that entities can have different propensities of generating links in one of the modes. The proposed framework is contrasted on both synthetic and real bi-partite networks...... feature representations in bipartite networks provides a new framework for accounting for structure in bi-partite networks using binary latent feature representations providing interpretable representations that well characterize structure as quantified by link prediction....

  2. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    Energy Technology Data Exchange (ETDEWEB)

    Paganelli, Chiara [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133 (Italy); Peroni, Marta [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Paul Scherrer Institut, Zentrum für Protonentherapie, WMSA/C15, CH-5232 Villigen PSI (Italy); Baroni, Guido; Riboldi, Marco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, strada Campeggi 53, Pavia 27100 (Italy)

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter- and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained with adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs. exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method's invariance and robustness properties with respect to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak-to-peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT
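
    The residual-distance validation reduces to a small computation: given corresponding landmark coordinates from two sources (automatic vs. expert, or warped vs. reference), report the 3D Euclidean residuals in millimeters. The coordinates and voxel spacing below are placeholders:

        import numpy as np

        def residual_distances(points_a, points_b, spacing=(1.0, 1.0, 1.0)):
            """3D distances (mm) between corresponding (N, 3) voxel coordinates."""
            delta = (np.asarray(points_a) - np.asarray(points_b)) * np.asarray(spacing)
            return np.linalg.norm(delta, axis=1)

        auto = np.array([[120, 88, 30], [64, 140, 22]])      # e.g. adaptive-SIFT matches
        expert = np.array([[121, 87, 30], [65, 141, 23]])    # expert ground truth
        d = residual_distances(auto, expert, spacing=(0.97, 0.97, 2.5))
        print("mean residual (mm):", d.mean(), "max:", d.max())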

  3. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds, which aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the River Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
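
    Both quantities discussed, Boolean visibility and the angle difference above the local horizon, fall out of a single line-of-sight sweep over the surface model; the sketch below samples the profile between observer and target on a regular-grid DSM, with all parameters illustrative:

        import numpy as np

        def line_of_sight(dsm, obs, tgt, cell=1.0, obs_h=1.6):
            """Return (visible, angle_diff) from observer to target over a gridded DSM.

            angle_diff = target elevation angle minus the highest intervening
            horizon angle; positive values mean a visible margin above the horizon.
            """
            (r0, c0), (r1, c1) = obs, tgt
            n = int(max(abs(r1 - r0), abs(c1 - c0))) + 1
            rows = np.linspace(r0, r1, n).round().astype(int)
            cols = np.linspace(c0, c1, n).round().astype(int)
            dist = np.hypot(rows - r0, cols - c0) * cell
            z0 = dsm[r0, c0] + obs_h
            angles = np.arctan2(dsm[rows, cols] - z0, np.maximum(dist, 1e-9))
            horizon = angles[1:-1].max() if n > 2 else -np.inf
            return angles[-1] > horizon, angles[-1] - horizon

        dsm = np.random.default_rng(0).uniform(200, 210, size=(100, 100))  # placeholder surface
        print(line_of_sight(dsm, (10, 10), (80, 90)))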

  4. 1/3-scale model testing program

    International Nuclear Information System (INIS)

    Yoshimura, H.R.; Attaway, S.W.; Bronowski, D.R.; Uncapher, W.L.; Huerta, M.; Abbott, D.G.

    1989-01-01

    This paper describes the drop testing of a one-third scale model transport cask system. Two casks were supplied by Transnuclear, Inc. (TN) to demonstrate dual-purpose shipping/storage casks. These casks will be used to ship spent fuel from DOE's West Valley Demonstration Project in New York to the Idaho National Engineering Laboratory (INEL) for a long-term spent fuel dry storage demonstration. As part of the certification process, one-third scale model tests were performed to obtain experimental data. Two 9-m (30-ft) drop tests were conducted on a mass model of the cask body and scaled balsa- and redwood-filled impact limiters. In the first test, the cask system was tested in an end-on configuration. In the second test, the system was tested in a slap-down configuration where the axis of the cask was oriented at a 10 degree angle with the horizontal. Slap-down occurs for shallow-angle drops where the primary impact at one end of the cask is followed by a secondary impact at the other end. The objectives of the testing program were to (1) obtain deceleration and displacement information for the cask and impact limiter system, (2) obtain dynamic force-displacement data for the impact limiters, (3) verify the integrity of the impact limiter retention system, and (4) examine the crush behavior of the limiters. This paper describes both test results in terms of measured deceleration, post-test deformation measurements, and the general structural response of the system.

  5. Genome scale metabolic modeling of cancer

    DEFF Research Database (Denmark)

    Nilsson, Avlant; Nielsen, Jens

    2017-01-01

    Cancer cells reprogram metabolism to support rapid proliferation and survival. Energy metabolism is particularly important for growth, and genes encoding enzymes involved in energy metabolism are frequently altered in cancer cells. A genome scale metabolic model (GEM) is a mathematical formalization of metabolism which allows simulation and hypothesis testing of metabolic strategies. It has successfully been applied to many microorganisms and is now used to study cancer metabolism. Generic models of human metabolism have been reconstructed based on the existence of metabolic genes in the human genome…
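
    As a cartoon of how a GEM is simulated, flux balance analysis solves a linear program: maximize a biomass objective subject to steady-state mass balance S·v = 0 and flux bounds. The three-reaction toy network below is illustrative only:

        import numpy as np
        from scipy.optimize import linprog

        # Toy network: v1: uptake -> A, v2: A -> B, v3: B -> biomass.
        # Rows are metabolites A and B; columns are the three fluxes.
        S = np.array([[1, -1,  0],     # A: made by uptake, used by conversion
                      [0,  1, -1]])    # B: made by conversion, used by biomass
        bounds = [(0, 10), (0, 8), (0, None)]   # uptake capped at 10, conversion at 8

        # Maximize biomass flux v3 (linprog minimizes, hence the negated objective).
        res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
        print("optimal fluxes:", res.x)          # expected: [8, 8, 8]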

  6. Functional validation of candidate genes detected by genomic feature models

    DEFF Research Database (Denmark)

    Rohde, Palle Duun; Østergaard, Solveig; Kristensen, Torsten Nygaard

    2018-01-01

    Understanding the genetic underpinnings of complex traits requires knowledge of the genetic variants that contribute to phenotypic variability. Reliable statistical approaches are needed to obtain such knowledge. In genome-wide association studies, variants are tested for association with trait... ...to investigate locomotor activity, and applied genomic feature prediction models to identify gene ontology (GO) categories predictive of this phenotype. Next, we applied the covariance association test to partition the genomic variance of the predictive GO terms to the genes within these terms. We then functionally assessed whether the identified candidate genes affected locomotor activity by reducing gene expression using RNA interference. In five of the seven candidate genes tested, reduced gene expression altered the phenotype. The ranking of genes within the predictive GO term was highly correlated...

  7. Spatial modeling of agricultural land use change at global scale

    Science.gov (United States)

    Meiyappan, P.; Dalton, M.; O'Neill, B. C.; Jain, A. K.

    2014-11-01

    Long-term modeling of agricultural land use is central to global scale assessments of climate change, food security, biodiversity, and climate adaptation and mitigation policies. We present a global-scale dynamic land use allocation model and show that it can reproduce the broad spatial features of the past 100 years of evolution of cropland and pastureland patterns. The modeling approach integrates economic theory, observed land use history, and data on both socioeconomic and biophysical determinants of land use change, and estimates relationships using long-term historical data, thereby making it suitable for long-term projections. The underlying economic motivation is the maximization of expected profits by hypothesized landowners within each grid cell. The model predicts fractional land use for cropland and pastureland within each grid cell based on socioeconomic and biophysical driving factors that change with time. The model explicitly incorporates the following key features: (1) land use competition, (2) spatial heterogeneity in the nature of driving factors across geographic regions, (3) spatial heterogeneity in the relative importance of driving factors and previous land use patterns in determining land use allocation, and (4) spatial and temporal autocorrelation in land use patterns. We show that land use allocation approaches based solely on previous land use history (but disregarding the impact of driving factors), or those accounting for both land use history and driving factors by mechanistically fitting models for the spatial processes of land use change, do not reproduce long-term historical land use patterns well. With an example application to the terrestrial carbon cycle, we show that such inaccuracies in land use allocation can translate into significant implications for global environmental assessments. The modeling approach and its evaluation provide an example that can be useful to the land use, Integrated Assessment, and Earth system modeling...

  8. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminant impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.

  9. Aerosol numerical modelling at local scale

    International Nuclear Information System (INIS)

    Albriet, Bastien

    2007-01-01

    At the local scale and in urban areas, an important part of particulate pollution is due to traffic, which contributes largely to the high number concentrations observed. Two aerosol sources are mainly linked to traffic: primary emission of soot particles and secondary nanoparticle formation by nucleation. The emissions and mechanisms leading to the formation of such a bimodal distribution are still poorly understood. In this thesis, we try to address this problem by numerical modelling. The Modal Aerosol Model MAM is used, coupled with two 3D codes: a CFD code (Mercure Saturne) and a CTM (Polair3D). A sensitivity analysis is performed, at the border of a road but also in the first metres of an exhaust plume, to identify the role of each process involved and the sensitivity of the different parameters used in the modelling. (author) [fr]

  10. Feature network models for proximity data : statistical inference, model selection, network representations and links with related models

    NARCIS (Netherlands)

    Frank, Laurence Emmanuelle

    2006-01-01

    Feature Network Models (FNM) are graphical structures that represent proximity data in a discrete space with the use of features. A statistical inference theory is introduced, based on the additivity properties of networks and the linear regression framework. Considering features as predictor

  11. The breaking of Bjorken scaling in the covariant parton model

    International Nuclear Information System (INIS)

    Polkinghorne, J.C.

    1976-01-01

    Scale breaking is investigated in terms of a covariant parton model formulation of deep inelastic processes. It is shown that a consistent theory requires that the convergence properties of parton-hadron amplitudes should be modified as well as the parton being given form factors. Purely logarithmic violation is possible, and the resulting model has many features in common with asymptotically free gauge theories. Behaviour at large and small ω and fixed q² is investigated. γW₂ should increase with q² at large ω and decrease with q² at small ω. Heuristic arguments are also given which suggest that the model would only lead to logarithmic modifications of dimensional counting results in purely hadronic deep scattering. (Auth.)

  12. Small scale karst features (tube karren) as evidence of a latest Quaternary fossil landslide

    Science.gov (United States)

    Stöger, Tobias; Plan, Lukas; Draganits, Erich

    2017-04-01

    At least since 1933, numerous small dissolutional holes in the ceilings of overhangs and small caves have been known from a restricted area in the Northern Calcareous Alps in Lower Austria, but they had not been investigated until now. These tube-shaped structures are a few centimetres in diameter, more or less vertical, taper upwards, are closed at the top, and penetrate some tens of centimetres into the Middle Triassic limestone. Very similar features were described by Simms (2002) from the shores of three lakes in western Ireland and termed Röhrenkarren or tube karren. According to his model, they formed by condensation corrosion within air pockets trapped by seasonal floods. The features investigated in the present study occur on both sides of a valley in the north-eastern part of the Northern Calcareous Alps, south of the city of Sankt Pölten. Presently there is no lake, and so far no palaeo-lake has been known from this area. Based on airborne laser scanning data and field observations in a narrow section of the valley downstream of the tube karren sites, a previously unknown potential fossil landslide was discovered. The clayey-silty sediments upstream of the landslide are interpreted as palaeo-lake sediments. This interpretation is supported by the existence of abundant dragonfly eggs within these deposits. The same fine-grained sediments are partly also found inside the tube karren. These observations are interpreted to indicate that a landslide-dammed palaeo-lake formed when the mass movement blocked the river, and that the tube karren were formed by seasonal fluctuations of the lake level. Geochronological dating of the calcite crusts covering the karren and of the organic material of the dragonfly eggs is under way. As the karren features look quite fresh and unweathered, and given the diffuse shape of the landslide, a late Quaternary age is estimated. References: Simms, M.J. 2002. The origin of enigmatic, tubular, lake-shore karren: a mechanism for rapid dissolution of limestone in carbonate...

  13. Large scale features and energetics of the hybrid subtropical low `Duck' over the Tasman Sea

    Science.gov (United States)

    Pezza, Alexandre Bernardes; Garde, Luke Andrew; Veiga, José Augusto Paixão; Simmonds, Ian

    2014-01-01

    New aspects of the genesis and partial tropical transition of a rare hybrid subtropical cyclone on the eastern Australian coast are presented. The `Duck' (March 2001) has attracted renewed attention because its underlying genesis mechanisms were remarkably similar to those of the first South Atlantic hurricane (March 2004). Here we put this cyclone in climate perspective, showing that it belongs to a class within the 1 % lowest frequency percentile in the Southern Hemisphere as a function of its thermal evolution. A large-scale analysis reveals a combined influence from an existing tropical cyclone and a persistent mid-latitude block. A Lagrangian tracer showed that the upper-level air parcels arriving at the cyclone's center had been modified by the blocking. Lorenz energetics is used to identify connections with both tropical and extratropical processes, and to reveal how these create the large-scale environment conducive to the development of the vortex. The results reveal that the blocking exerted the most important influence, with a strong peak in barotropic generation of kinetic energy over a large area traversed by the air parcels just before genesis. A secondary peak also coincided with the first time the cyclone developed an upper-level warm core, but with insufficient amplitude to allow for a full tropical transition. The applications of this technique are numerous and promising, particularly for the use of global climate models to infer changes in environmental parameters associated with severe storms.

  14. Multi-scale Modelling of Segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2016-01-01

    While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects... ...pieces. In a second experiment on non-real-time segmentation, musicians indicated boundaries and their strength for six examples. Kernel density estimation was used to develop multi-scale segmentation models. Contrary to previous research, no relationship was found between boundary strength and boundary...
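
    The kernel-density step can be illustrated as follows: pool the boundary indications from (here invented) listeners and smooth them at several bandwidths, so that each bandwidth yields a segmentation model at a different time scale, with density peaks acting as candidate section boundaries. The data, bandwidths, and peak-picking rule below are illustrative assumptions, not the study's materials.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.signal import find_peaks

# Boundary indications (seconds) pooled across hypothetical listeners.
times = np.array([12.1, 12.4, 12.6, 30.2, 30.5, 58.9, 59.3, 59.4])

grid = np.linspace(0.0, 70.0, 701)
for bw in (0.5, 2.0, 5.0):  # bandwidth in seconds: small = fine scale
    # gaussian_kde scales its bandwidth by the sample std, so divide it out.
    kde = gaussian_kde(times, bw_method=bw / times.std(ddof=1))
    density = kde(grid)
    peaks, _ = find_peaks(density)
    print(f"bandwidth {bw:>4.1f}s -> boundaries near {np.round(grid[peaks], 1)}")
```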

  15. Analyzing surface features on icy satellites using a new two-layer analogue model

    Science.gov (United States)

    Morales, K. M.; Leonard, E. J.; Pappalardo, R. T.; Yin, A.

    2017-12-01

    The appearance of similar surface morphologies across many icy satellites suggests potentially unified formation mechanisms. Constraining the processes that shape the surfaces of these icy worlds is fundamental to understanding their rheology and thermal evolution, factors that have implications for potential habitability. Analogue models have proven useful for investigating and quantifying surface structure formation on Earth, but have only been sparsely applied to icy bodies. In this study, we employ an innovative two-layer analogue model that simulates a warm, ductile ice layer overlain by brittle surface ice on satellites such as Europa and Enceladus. The top, brittle layer is composed of fine-grained sand, while the ductile, lower-viscosity layer is made of putty. These materials were chosen because they scale up reasonably to the conditions on Europa and Enceladus. Using this analogue model, we investigate the role of the ductile layer in forming contractional structures (e.g. folds) that would compensate for the over-abundance of extensional features observed on icy satellites. We do this by simulating different compressional scenarios in the analogue model and analyzing whether the resulting features resemble those on icy bodies. If the resulting structures are similar, then the model can be used to quantify the deformation by calculating strain. These values can then be scaled up to Europa or Enceladus and used to quantify the observed surface morphologies and the amount of extensional strain accommodated by certain features. This presentation will focus on the resulting surface morphologies and the calculated strain values from several analogue experiments. The methods and findings from this work can then be expanded and used to study other icy bodies, such as Triton, Miranda, Ariel, and Pluto.

  16. Psychometric Features of a Scale for Characterizing Motivation for Academic Reading

    Directory of Open Access Journals (Sweden)

    Carla Muñoz Valenzuela

    2012-11-01

    The competencies associated with academic reading, especially its motivational aspects, are essential to undergraduate students' academic success. Motivation is an emerging issue that has given rise to many studies, yet motivation for academic reading remains a subject rarely addressed or studied. To effectively support the learning process, a diagnostic capable of providing precise, valid and reliable information on the motivational aspects of reading in an academic context is necessary. This article presents the results of the process of construction and validation of the Motivation Scale for Academic Reading (EMLA, its acronym in Spanish), which was based on the Expectancy & Value model of Jacqueline Eccles and Allan Wigfield (2002, hereinafter EyV). This instrument provides clues for motivational interventions to incentivize reading in an academic context. Likewise, we also report on the structure of the instrument, its theoretical foundations, its factor structure and reliability: psychometric characteristics that make the EMLA a solid, valid and reliable instrument.

  17. Atom probe characterization of nano-scaled features in irradiated Eurofer and ODS Eurofer steel

    International Nuclear Information System (INIS)

    Rogozkin, S.; Aleev, A.; Nikitin, A.; Zaluzhnyi, A.; Vladimirov, P.; Moeslang, A.; Lindau, R.

    2009-01-01

    The outstanding performance of oxide dispersion strengthened (ODS) steels at high temperatures and up to high doses has allowed them to be considered as potential candidates for fusion and fission power plants. At the same time, their mechanical parameters strongly correlate with the number density of oxide particles and their size. It is believed that fine particles are formed at the last stage of sophisticated production procedures and play a crucial role in the higher heat and radiation resistance in comparison with conventional materials. However, due to their small size of only a few nanometres, characterization of such objects requires considerable effort. A recent study of ODS steel by tomographic atom probe, the most appropriate technique in this case, showed considerable stability of these particles under high temperatures and ion irradiation. However, these results were obtained for 12/14% Cr steel with an addition of 0.3% Y₂O₃ and titanium, which is inappropriate in the case of ODS Eurofer 97, and the possibility of substituting neutron by ion irradiation is still under consideration. In this work, the effect of neutron irradiation on the nanostructure behaviour of ODS Eurofer is investigated. Irradiation was performed in the research reactor BOR-60 at SSC RF RIAR (Dimitrovgrad, Russia) up to 30 dpa at 280 °C and 580 °C. A recent investigation of the unirradiated state revealed a high number density of nano-scaled features (nano-clusters) even without the addition of Ti to the steel. It was shown that vanadium played a significant role in the nucleation process and that the core of the nano-clusters was considerably enriched with it. In the irradiated samples, dissolution of vanadium in the matrix was observed, while the size of the particles stayed practically unchanged. Also, no nitrogen was detected in these particles, in contrast with the unirradiated state, where the bond energy of N with V was considered to be high, as VN²⁺ ions were detected in the mass spectra. (author)

  18. Self-Organized Criticality in a Simple Neuron Model Based on Scale-Free Networks

    International Nuclear Information System (INIS)

    Lin Min; Wang Gang; Chen Tianlun

    2006-01-01

    A simple model for a set of interacting idealized neurons in scale-free networks is introduced. The basic elements of the model are endowed with the main features of a neuron function. We find that our model displays power-law behavior of avalanche sizes and generates long-range temporal correlation. More importantly, we find different dynamical behavior for nodes with different connectivity in the scale-free networks.
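
    A generic flavor of such dynamics can be sketched with a sandpile-style integrate-and-fire rule on a Barabási-Albert scale-free graph: each node fires when its load reaches a threshold set to its degree, shedding one unit to each neighbour, and a small dissipation probability keeps the driven system stationary. This is a standard self-organized-criticality toy, not the paper's specific neuron model; the collected avalanche sizes are what one would test for power-law behavior.

```python
import random
import networkx as nx

def avalanche(g, nodes, load, threshold, loss=0.01):
    """Drive one random node and relax the network; return avalanche size."""
    site = random.choice(nodes)
    load[site] += 1.0
    unstable, size = [site], 0
    while unstable:
        v = unstable.pop()
        if load[v] < threshold[v]:
            continue
        size += 1
        load[v] -= threshold[v]          # fire: shed one unit per neighbour
        if load[v] >= threshold[v]:      # may still be above threshold
            unstable.append(v)
        for w in g[v]:
            if random.random() > loss:   # small leak keeps the drive stationary
                load[w] += 1.0
                if load[w] >= threshold[w]:
                    unstable.append(w)
    return size

random.seed(0)
g = nx.barabasi_albert_graph(2000, 2, seed=0)   # scale-free substrate
nodes = list(g)
threshold = {v: float(g.degree(v)) for v in g}
load = {v: 0.0 for v in g}
sizes = [avalanche(g, nodes, load, threshold) for _ in range(50000)]
active = [s for s in sizes if s > 0]
print("%d avalanches, mean size %.1f, largest %d"
      % (len(active), sum(active) / len(active), max(active)))
```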

  19. Network features of sector indexes spillover effects in China: A multi-scale view

    Science.gov (United States)

    Feng, Sida; Huang, Shupei; Qi, Yabin; Liu, Xueyong; Sun, Qingru; Wen, Shaobo

    2018-04-01

    The spillover effects among sectors are of concern for distinct market participants, who have different investment horizons and are concerned with information at different time scales. In order to uncover the hidden spillover information at multiple time scales in the rapidly changing stock market, and thereby offer system-level guidance to investors with distinct time horizons, this paper constructed directional spillover effect networks for the economic sectors at distinct time scales. The results are as follows: (1) The "2-4 days" scale is the most risky scale, and the "8-16 days" scale is the least risky one. (2) The most influential and sensitive sectors are distinct at different time scales. (3) Although two sectors in the same community may not have direct spillover relations, the volatility of one sector can have a relatively strong influence on the other through indirect relations.

  20. Latent Feature Models for Uncovering Human Mobility Patterns from Anonymized User Location Traces with Metadata

    KAUST Repository

    Alharbi, Basma Mohammed

    2017-04-10

    In the mobile era, data capturing individuals' locations have become unprecedentedly available. Data from Location-Based Social Networks is one example of large-scale user-location data. Such data provide a valuable source for understanding patterns governing human mobility, and thus enable a wide range of research. However, mining and utilizing raw user-location data is a challenging task. This is mainly due to the sparsity of data (at the user level), the imbalance of data, with power-law degree distributions of user and location check-ins (at the global level), and, more importantly, the lack of a uniform low-dimensional feature space describing users. Three latent feature models are proposed in this dissertation. Each proposed model takes as input a collection of user-location check-ins, and outputs a new representation space for users and locations respectively. To avoid invading users' privacy, the proposed models are designed to learn from anonymized location data where only IDs - not geophysical positioning or category - of locations are utilized. To enrich the inferred mobility patterns, the proposed models incorporate metadata, often associated with user-location data, into the inference process. In this dissertation, two types of metadata are utilized to enrich the inferred patterns: timestamps and social ties. Time adds context to the inferred patterns, while social ties amplify incomplete user-location check-ins. The first proposed model incorporates timestamps by learning from collections of users' locations sharing the same discretized time. The second proposed model also incorporates time into the learning model, yet takes a further step by considering time at different scales (hour of a day, day of a week, month, and so on). This change in modeling time allows for capturing meaningful patterns over different time scales. The last proposed model incorporates social ties into the learning process to compensate for inactive users who contribute a large volume...

  1. A Novel Approach in Quantifying the Effect of Urban Design Features on Local-Scale Air Pollution in Central Urban Areas.

    Science.gov (United States)

    Miskell, Georgia; Salmond, Jennifer; Longley, Ian; Dirks, Kim N

    2015-08-04

    Differences in urban design features may affect emission and dispersion patterns of air pollution at local scales within cities. However, the complexity of urban forms, the interdependence of variables, and the temporal and spatial variability of processes make it difficult to quantify the determinants of local-scale air pollution. This paper uses a combination of dense measurements and a novel approach to land-use regression (LUR) modeling to identify key controls on concentrations of ambient nitrogen dioxide (NO2) at a local scale within a central business district (CBD). Sixty-two locations were measured over 44 days in Auckland, New Zealand at high density (study area 0.15 km²). A local-scale LUR model was developed, with seven variables identified as determinants based on standard model criteria. A novel method for improving the standard LUR design was developed using two independent data sets (at local and "city" scales) to generate improved accuracy in predictions and greater confidence in results. This revised multiscale LUR model identified three urban design variables (intersections, proximity to a bus stop, and street width) as having the most significant influence on local-scale air quality, and had improved adaptability between data sets.
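
    At its core, a land-use regression is an ordinary linear model linking site-level urban form predictors to measured concentrations, checked against an independent data set. The sketch below simulates that workflow end to end; the predictor set, coefficients, and both "campaigns" are invented for illustration and do not reproduce the paper's fitted model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 62  # one row per passive NO2 sampler site

def campaign():
    """Simulate one measurement campaign: predictors and NO2 (ug/m3)."""
    X = np.column_stack([
        rng.integers(0, 2, n),     # intersection within 50 m (0/1)
        rng.uniform(5, 200, n),    # distance to nearest bus stop (m)
        rng.uniform(8, 30, n),     # street width (m)
    ])
    no2 = 18 + 6 * X[:, 0] - 0.02 * X[:, 1] - 0.3 * X[:, 2] \
        + rng.normal(0, 2, n)
    return X, no2

X, no2 = campaign()
lur = LinearRegression().fit(X, no2)
print("coefficients:", lur.coef_.round(3))

# Evaluating on an independent data set mirrors the paper's two-scale
# design for gaining confidence in the fitted determinants.
X2, no2_2 = campaign()
print("independent R^2:", round(r2_score(no2_2, lur.predict(X2)), 2))
```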

  2. On the random cascading model study of anomalous scaling in multiparticle production with continuously diminishing scale

    International Nuclear Information System (INIS)

    Liu Lianshou; Zhang Yang; Wu Yuanfang

    1996-01-01

    The anomalous scaling of factorial moments with continuously diminishing scale is studied using a random cascading model. It is shown that the models currently used have the property of anomalous scaling only for discrete values of the elementary cell size. A revised model is proposed which also gives good scaling properties for a continuously varying scale. It turns out that the strip integral has good scaling properties provided the integration regions are chosen correctly, and that this property is insensitive to the concrete way of self-similar subdivision of phase space in the models. (orig.)
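
    For readers unfamiliar with the quantity being scaled, the sketch below computes horizontally averaged factorial moments F_q(M) for toy events over a range of bin numbers M. Anomalous scaling (intermittency) would appear as power-law growth of F_q with M, whereas the flat random events used here give F_2 close to 1. The definition follows a standard form; the event generator is a placeholder, not the paper's random cascading model.

```python
import numpy as np

def factorial_moment(events, M, q):
    """Horizontally averaged factorial moment F_q for M bins on [0, 1).

    events: list of 1D arrays of particle positions in phase space,
    rescaled to the unit interval.
    """
    num, den = 0.0, 0.0
    for x in events:
        counts, _ = np.histogram(x, bins=M, range=(0.0, 1.0))
        # falling factorial n(n-1)...(n-q+1), averaged over bins
        ff = np.ones_like(counts, dtype=float)
        for k in range(q):
            ff *= np.clip(counts - k, 0, None)
        num += ff.mean()
        den += counts.mean() ** q
    return num / den

rng = np.random.default_rng(1)
events = [rng.random(50) for _ in range(2000)]  # flat, non-intermittent toy events
for M in (2, 4, 8, 16, 32):
    print(M, round(factorial_moment(events, M, q=2), 3))
# For purely random events F_2 stays near 1; intermittency would show up
# as a power-law rise of F_q with the number of bins M.
```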

  3. Scale modeling flow-induced vibrations of reactor components

    International Nuclear Information System (INIS)

    Mulcahy, T.M.

    1982-06-01

    Similitude relationships currently employed in the design of flow-induced vibration scale-model tests of nuclear reactor components are reviewed. Emphasis is given to understanding the origins of the similitude parameters as a basis for discussion of the inevitable distortions which occur in design verification testing of entire reactor systems and in feature testing of individual component designs for the existence of detrimental flow-induced vibration mechanisms. Distortions of similitude parameters made in current test practice are enumerated and selected example tests are described. Also, limitations in the use of specific distortions in model designs are evaluated based on the current understanding of flow-induced vibration mechanisms and structural response.

  4. Data-Science Analysis of the Macro-scale Features Governing the Corrosion to Crack Transition in AA7050-T7451

    Science.gov (United States)

    Co, Noelle Easter C.; Brown, Donald E.; Burns, James T.

    2018-05-01

    This study applies data science approaches (random forest and logistic regression) to determine the extent to which macro-scale corrosion damage features govern the crack formation behavior in AA7050-T7451. Each corrosion morphology has a set of corresponding predictor variables (pit depth, volume, area, diameter, pit density, total fissure length, surface roughness metrics, etc.) describing the shape of the corrosion damage. The values of the predictor variables are obtained from white light interferometry, x-ray tomography, and scanning electron microscope imaging of the corrosion damage. A permutation test is employed to assess the significance of the logistic and random forest model predictions. Results indicate minimal relationship between the macro-scale corrosion feature predictor variables and fatigue crack initiation. These findings suggest that the macro-scale corrosion features and their interactions do not solely govern the crack formation behavior. While these results do not imply that the macro-features have no impact, they do suggest that additional parameters must be considered to rigorously inform the crack formation location.
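
    The workflow described (fit a classifier on corrosion-morphology predictors, then run a permutation test to ask whether its skill beats chance) takes only a few lines with scikit-learn. The features and labels below are random placeholders, so the observed score should sit inside the null distribution, mirroring the paper's "minimal relationship" finding in spirit only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import permutation_test_score

rng = np.random.default_rng(0)
n = 300
# Hypothetical macro-scale corrosion descriptors per fatigue specimen:
# pit depth, pit volume, pit area, pit density, fissure length, roughness.
X = rng.normal(size=(n, 6))
y = rng.integers(0, 2, n)  # toy label: crack formed at this feature or not

model = RandomForestClassifier(n_estimators=100, random_state=0)
score, perm_scores, p_value = permutation_test_score(
    model, X, y, cv=5, n_permutations=50, random_state=0)
# Re-fitting on label-shuffled data builds the null distribution of
# accuracies one would obtain with no real signal in the predictors.
print("accuracy %.3f vs null %.3f (p = %.3f)"
      % (score, perm_scores.mean(), p_value))
```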

  5. Replication of surface features from a master model to an amorphous metallic article

    Science.gov (United States)

    Johnson, William L.; Bakke, Eric; Peker, Atakan

    1999-01-01

    The surface features of an article are replicated by preparing a master model having a preselected surface feature thereon which is to be replicated, and replicating the preselected surface feature of the master model. The replication is accomplished by providing a piece of a bulk-solidifying amorphous metallic alloy, contacting the piece of the bulk-solidifying amorphous metallic alloy to the surface of the master model at an elevated replication temperature to transfer a negative copy of the preselected surface feature of the master model to the piece, and separating the piece having the negative copy of the preselected surface feature from the master model.

  6. A high resolution global scale groundwater model

    Science.gov (United States)

    de Graaf, Inge; Sutanudjaja, Edwin; van Beek, Rens; Bierkens, Marc

    2014-05-01

    As the world's largest accessible source of freshwater, groundwater plays a vital role in satisfying the basic needs of human society. It serves as a primary source of drinking water and supplies water for agricultural and industrial activities. During times of drought, groundwater storage provides a large natural buffer against water shortage and sustains flows to rivers and wetlands, supporting ecosystem habitats and biodiversity. Yet, the current generation of global scale hydrological models (GHMs) do not include a groundwater flow component, although it is a crucial part of the hydrological cycle. Thus, a realistic physical representation of the groundwater system that allows for the simulation of groundwater head dynamics and lateral flows is essential for GHMs that increasingly run at finer resolution. In this study we present a global groundwater model with a resolution of 5 arc-minutes (approximately 10 km at the equator) using MODFLOW (McDonald and Harbaugh, 1988). With this global groundwater model we eventually intend to simulate the changes in the groundwater system over time that result from variations in recharge and abstraction. Aquifer schematization and properties of this groundwater model were developed from available global lithological maps and datasets (Dürr et al., 2005; Gleeson et al., 2010; Hartmann and Moosdorf, 2013), combined with our estimate of aquifer thickness for sedimentary basins. We forced the groundwater model with the output from the global hydrological model PCR-GLOBWB (van Beek et al., 2011), specifically the net groundwater recharge and average surface water levels derived from routed channel discharge. For the parameterization, we relied entirely on available global datasets and did not calibrate the model so that it can equally be expanded to data poor environments. Based on our sensitivity analysis, in which we run the model with various hydrogeological parameter settings, we observed that most variance in groundwater
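
    Underlying such a model is the groundwater-flow equation that MODFLOW discretizes. As a minimal stand-alone illustration, the sketch below solves a steady-state head field by finite differences with uniform transmissivity, net recharge, and fixed-head (river) boundaries; all parameter values are toy assumptions, not the global model's inputs.

```python
import numpy as np

# Steady-state 2D groundwater head, finite differences:
#   T * laplacian(h) + R = 0, with h = 0 on fixed-head (river) boundaries.
# A real model such as MODFLOW adds heterogeneity, layers, storage, wells
# and river/drain packages on top of this same core computation.
T = 1000.0    # transmissivity, m^2/day
R = 0.0005    # net recharge, m/day
dx = 500.0    # cell size, m
n = 60        # grid is n x n cells

h = np.zeros((n, n))
for _ in range(20000):                    # Jacobi iteration to steady state
    new = h.copy()
    new[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                              h[1:-1, :-2] + h[1:-1, 2:] + R * dx * dx / T)
    h = new                               # boundary rows/cols stay at h = 0
print("max head above river level: %.1f m" % h.max())
```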

  7. Integrated multi-scale modelling and simulation of nuclear fuels

    International Nuclear Information System (INIS)

    Valot, C.; Bertolus, M.; Masson, R.; Malerba, L.; Rachid, J.; Besmann, T.; Phillpot, S.; Stan, M.

    2015-01-01

    This chapter aims to discuss the objectives, implementation and integration of multi-scale modelling approaches applied to nuclear fuel materials. We will first show why the multi-scale modelling approach is required by the nature of the materials and by the phenomena involved under irradiation. We will then present the multiple facets of the multi-scale modelling approach, while giving some recommendations with regard to its application. We will also show that multi-scale modelling must be coupled with appropriate multi-scale experiments and characterisation. Finally, we will demonstrate how multi-scale modelling can contribute to solving technology issues. (authors)

  8. Thermodynamic model of social influence on two-dimensional square lattice: Case for two features

    Science.gov (United States)

    Genzor, Jozef; Bužek, Vladimír; Gendiar, Andrej

    2015-02-01

    We propose a thermodynamic multi-state spin model in order to describe the equilibrium behavior of a society. Our model is inspired by the Axelrod model used in social network studies. In the framework of the statistical mechanics language, we analyze phase transitions of our model, in which the spin interaction J is interpreted as a mutual communication among individuals forming a society. The thermal fluctuations introduce a noise T into the communication, which suppresses long-range correlations. Below a certain phase transition point Tt, large-scale clusters of the individuals, who share a specific dominant property, are formed. The measure of the cluster sizes is an order parameter after spontaneous symmetry breaking. By means of the Corner transfer matrix renormalization group algorithm, we treat our model in the thermodynamic limit and classify the phase transitions with respect to inherent degrees of freedom. Each individual is chosen to possess two independent features f = 2, and each feature can assume one of q traits (e.g. interests). Hence, each individual is described by q² degrees of freedom. A single first-order phase transition is detected in our model if q > 2, whereas two distinct continuous phase transitions are found only if q = 2. Evaluating the free energy, order parameters, specific heat, and the entanglement von Neumann entropy, we classify the phase transitions Tt(q) in detail. The permanent existence of the ordered phase (the large-scale cluster formation with a non-zero order parameter) is conjectured below a non-zero transition point Tt(q) ≈ 0.5 in the asymptotic regime q → ∞.

  9. Wavelet-based Characterization of Small-scale Solar Emission Features at Low Radio Frequencies

    Energy Technology Data Exchange (ETDEWEB)

    Suresh, A. [Indian Institute of Science Education and Research, Pune-411008 (India); Sharma, R.; Oberoi, D. [National Centre for Radio Astrophysics, Tata Institute for Fundamental Research, Pune 411007 (India); Das, S. B. [Indian Institute of Science Education and Research, Kolkata-741249 (India); Pankratius, V.; Lonsdale, C. J.; Cappallo, R. J.; Corey, B. E.; Kratzenberg, E. [MIT Haystack Observatory, Westford, MA 01886 (United States); Timar, B. [California Institute of Technology, Pasadena, CA 91125 (United States); Bowman, J. D. [School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287 (United States); Briggs, F. [Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2611 (Australia); Deshpande, A. A. [Raman Research Institute, Bangalore 560080 (India); Emrich, D. [International Centre for Radio Astronomy Research, Curtin University, Bentley, WA 6102 (Australia); Goeke, R. [Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Greenhill, L. J. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Hazelton, B. J. [Department of Physics, University of Washington, Seattle, WA 98195 (United States); Johnston-Hollitt, M. [School of Chemical and Physical Sciences, Victoria University of Wellington, P.O. Box 600, Wellington 6140 (New Zealand); Kaplan, D. L. [Department of Physics, University of Wisconsin–Milwaukee, Milwaukee, WI 53201 (United States); Kasper, J. C., E-mail: akshay@students.iiserpune.ac.in [Department of Atmospheric, Oceanic and Space Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); and others

    2017-07-01

    Low radio frequency solar observations using the Murchison Widefield Array have recently revealed the presence of numerous weak short-lived narrowband emission features, even during moderately quiet solar conditions. These nonthermal features occur at rates of many thousands per hour in the 30.72 MHz observing bandwidth, and hence necessarily require an automated approach for their detection and characterization. Here, we employ continuous wavelet transform using a mother Ricker wavelet for feature detection from the dynamic spectrum. We establish the efficacy of this approach and present the first statistically robust characterization of the properties of these features. In particular, we examine distributions of their peak flux densities, spectral spans, temporal spans, and peak frequencies. We can reliably detect features weaker than 1 SFU, making them, to the best of our knowledge, the weakest bursts reported in literature. The distribution of their peak flux densities follows a power law with an index of −2.23 in the 12–155 SFU range, implying that they can provide an energetically significant contribution to coronal and chromospheric heating. These features typically last for 1–2 s and possess bandwidths of about 4–5 MHz. Their occurrence rate remains fairly flat in the 140–210 MHz frequency range. At the time resolution of the data, they appear as stationary bursts, exhibiting no perceptible frequency drift. These features also appear to ride on a broadband background continuum, hinting at the likelihood of them being weak type-I bursts.
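
    The detection step, correlating a spectrum with Ricker (Mexican-hat) kernels of several widths and looking for ridges of large coefficients, can be sketched in plain NumPy, as below. The synthetic spectrum, burst parameters, and width grid are invented for illustration; the actual pipeline operates on MWA dynamic spectra in both time and frequency.

```python
import numpy as np

def ricker(length, a):
    """Mother Ricker (Mexican-hat) wavelet with width parameter a."""
    t = np.arange(length) - (length - 1) / 2.0
    return (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

# Synthetic spectrum slice: sloped continuum + two narrowband bursts + noise.
rng = np.random.default_rng(2)
freq = np.linspace(140.0, 210.0, 700)           # MHz, 0.1 MHz channels
spec = 5.0 + 0.01 * (freq - 140.0) + rng.normal(0.0, 0.3, freq.size)
for f0, amp, sigma in [(155.0, 3.0, 2.0), (188.0, 1.5, 4.0)]:
    spec += amp * np.exp(-0.5 * ((freq - f0) / sigma) ** 2)

# Continuous wavelet transform: the zero-mean kernel suppresses the smooth
# continuum, and a burst shows up as a ridge of large coefficients across
# neighbouring widths.
widths = [5, 10, 20, 40]                        # channels, i.e. 0.5-4 MHz
cwt = np.vstack([np.convolve(spec, ricker(10 * a, a), mode="same")
                 for a in widths])
row, col = np.unravel_index(np.argmax(cwt), cwt.shape)
print("strongest feature near %.1f MHz (kernel width %.1f MHz)"
      % (freq[col], widths[row] * 0.1))
```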

  10. Scaling and constitutive relationships in downcomer modeling

    International Nuclear Information System (INIS)

    Daly, B.J.; Harlow, F.H.

    1978-12-01

    Constitutive relationships to describe mass and momentum exchange in multiphase flow in a pressurized water reactor downcomer are presented. Momentum exchange between the phases is described by the product of the flux of momentum available for exchange and the effective area for interaction. The exchange of mass through condensation is assumed to occur along a distinct condensation boundary separating steam at saturation temperature from water in which the temperature falls off roughly linearly with distance from the boundary. Because of the abundance of nucleation sites in a typical churning flow in a downcomer, we propose an equilibrium evaporation process that produces sufficient steam per unit time to keep the water perpetually cooled to the saturation temperature. The transport equations, constitutive models, and boundary conditions used in the K-TIF numerical method are nondimensionalized to obtain scaling relationships for two-phase flow in the downcomer. The results indicate that, subject to idealized thermodynamic and hydraulic constraints, exact mathematical scaling can be achieved. Experiments are proposed to isolate the effects of parameters that contribute to mass, momentum, and energy exchange between the phases.

  11. Sea-land segmentation for infrared remote sensing images based on superpixels and multi-scale features

    Science.gov (United States)

    Lei, Sen; Zou, Zhengxia; Liu, Dunge; Xia, Zhenghuan; Shi, Zhenwei

    2018-06-01

    Sea-land segmentation is a key step for the information processing of ocean remote sensing images. Traditional sea-land segmentation algorithms ignore the local similarity prior of sea and land, and thus fail in complex scenarios. In this paper, we propose a new sea-land segmentation method for infrared remote sensing images to tackle the problem based on superpixels and multi-scale features. Considering the connectivity and local similarity of sea or land, we interpret the sea-land segmentation task in view of superpixels rather than pixels, where similar pixels are clustered and the local similarity are explored. Moreover, the multi-scale features are elaborately designed, comprising of gray histogram and multi-scale total variation. Experimental results on infrared bands of Landsat-8 satellite images demonstrate that the proposed method can obtain more accurate and more robust sea-land segmentation results than the traditional algorithms.
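
    A minimal version of such a pipeline (oversegment the image into superpixels so that decisions respect local similarity, describe each superpixel by simple statistics, then threshold) might look as follows with scikit-image. The synthetic scene and the single mean-intensity feature are crude stand-ins for the paper's infrared imagery and its gray-histogram and multi-scale total-variation features.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.filters import threshold_otsu

# Synthetic "infrared" scene: dark, fairly uniform sea on the left,
# brighter and more textured land on the right.
rng = np.random.default_rng(3)
img = np.empty((200, 200))
img[:, :100] = 0.2 + rng.normal(0, 0.02, (200, 100))   # sea
img[:, 100:] = 0.6 + rng.normal(0, 0.10, (200, 100))   # land

# Cluster pixels into superpixels so that similar neighbours stay together.
labels = slic(img, n_segments=300, compactness=0.1, channel_axis=None)

# Classify each superpixel from its mean intensity and threshold the means.
ids = np.unique(labels)
means = np.array([img[labels == l].mean() for l in ids])
cut = threshold_otsu(means)
land_mask = np.isin(labels, ids[means > cut])
print("land fraction: %.2f" % land_mask.mean())  # ~0.50 for this scene
```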

  12. Cavitation erosion - scale effect and model investigations

    Science.gov (United States)

    Geiger, F.; Rutschmann, P.

    2015-12-01

    The experimental work presented here contributes to the clarification of the erosive effects of hydrodynamic cavitation. Comprehensive cavitation erosion test series were conducted for transient cloud cavitation in the shear layer of prismatic bodies. The erosion patterns and erosion rates were determined with a mineral-based volume loss technique and with a metal-based pit count system for comparison. The results clarified the underlying scale effects and revealed a strong non-linear material dependency, which indicated significantly different damage processes for the two material types. Furthermore, the size and dynamics of the cavitation clouds were assessed by optical detection. The fluctuations of the cloud sizes showed a maximum value for those cavitation numbers related to maximum erosive aggressiveness. This finding suggests the suitability of a model approach which relates the erosion process to cavitation cloud dynamics. An enhanced experimental setup is projected to further clarify these issues.

  13. Using different classification models in wheat grading utilizing visual features

    Science.gov (United States)

    Basati, Zahra; Rasekh, Mansour; Abbaspour-Gilandeh, Yousef

    2018-04-01

    Wheat is one of the most important strategic crops in Iran and in the world. The major component that distinguishes wheat from other grains is the gluten fraction. In Iran, the sunn pest is one of the most important factors influencing the characteristics of wheat gluten and removing it from a balanced state. The existence of bug-damaged grains in wheat reduces the quality and price of the product. In addition, damaged grains reduce the enrichment of wheat and the quality of bread products. In this study, after preprocessing and segmentation of images, 25 features, including 9 colour features, 10 morphological features, and 6 textural statistical features, were extracted so as to classify healthy and bug-damaged wheat grains of the Azar cultivar at four levels of moisture content (9, 11.5, 14 and 16.5% w.b.) and two lighting colours (yellow light, and a combination of yellow and white lights). Using feature selection methods in the WEKA software and the CfsSubsetEval evaluator, 11 features were chosen as inputs to artificial neural network, decision tree and discriminant analysis classifiers. The results showed that the decision tree with the J.48 algorithm had the highest classification accuracy, at 90.20%. This was followed by the artificial neural network classifier with the topology 11-19-2 and the discriminant analysis classifier, at 87.46% and 81.81%, respectively.
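
    The classification stage can be outlined with scikit-learn as sketched below; note that WEKA's J.48 implements C4.5, while scikit-learn's tree is CART, a closely related algorithm used here as a stand-in. The feature matrix is synthetic, standing in for the 11 selected colour, morphological, and textural features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
n = 400
# Toy stand-ins for the 11 selected colour/morphology/texture features;
# bug-damaged grains (class 1) are shifted in a few of them.
X = rng.normal(size=(n, 11))
y = rng.integers(0, 2, n)
X[y == 1, :3] += 1.2  # damaged grains differ in, e.g., colour features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
print("healthy/damaged classification accuracy: %.2f" % tree.score(X_te, y_te))
```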

  14. Stable micron-scale holes are a general feature of canonical holins.

    Science.gov (United States)

    Savva, Christos G; Dewey, Jill S; Moussa, Samir H; To, Kam H; Holzenburg, Andreas; Young, Ry

    2014-01-01

    At a programmed time in phage infection cycles, canonical holins suddenly trigger to cause lethal damage to the cytoplasmic membrane, resulting in the cessation of respiration and the non-specific release of pre-folded, fully active endolysins to the periplasm. For the paradigm holin S105 of lambda, triggering is correlated with the formation of micron-scale membrane holes, visible as interruptions in the bilayer in cryo-electron microscopic images and tomographic reconstructions. Here we report that the size distribution of the holes is stable for long periods after triggering. Moreover, early triggering caused by an early lysis allele of S105 formed approximately the same number of holes, but the lesions were significantly smaller. In contrast, early triggering prematurely induced by energy poisons resulted in many fewer visible holes, consistent with previous sizing studies. Importantly, the unrelated canonical holins P2 Y and T4 T were found to cause the formation of holes of approximately the same size and number as for lambda. In contrast, no such lesions were visible after triggering of the pinholin S²¹68. These results generalize the hole formation phenomenon for canonical holins. A model is presented suggesting the unprecedentedly large size of these holes is related to the timing mechanism. © 2013 John Wiley & Sons Ltd.

  15. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulics & Coastal Engineering Laboratory, Aalborg University, Denmark, and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from small and large scale model tests shows no clear evidence of scale effects for overtopping above a threshold value. In the large scale model, no overtopping was measured for wave heights below Hs = 0.5 m, as the water sank into the voids between the stones on the crest. For low overtopping, scale effects...

  16. Multi-Level and Multi-Scale Feature Aggregation Using Pretrained Convolutional Neural Networks for Music Auto-Tagging

    Science.gov (United States)

    Lee, Jongpil; Nam, Juhan

    2017-08-01

    Music auto-tagging is often handled in a similar manner to image classification, by regarding the 2D audio spectrogram as image data. However, music auto-tagging is distinguished from image classification in that the tags are highly diverse and have different levels of abstraction. Considering this issue, we propose a convolutional neural network (CNN)-based architecture that embraces multi-level and multi-scale features. The architecture is trained in three steps. First, we conduct supervised feature learning to capture local audio features using a set of CNNs with different input sizes. Second, we extract audio features from each layer of the pre-trained convolutional networks separately and aggregate them altogether given a long audio clip. Finally, we put them into fully-connected networks and make final predictions of the tags. Our experiments show that using the combination of multi-level and multi-scale features is highly effective in music auto-tagging, and the proposed method outperforms the previous state of the art on the MagnaTagATune dataset and the Million Song Dataset. We further show that the proposed architecture is useful in transfer learning.
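
    The multi-level aggregation idea (take the activations of every convolutional stage, pool each globally, and concatenate them before the classifier) is straightforward to express in PyTorch. The sketch below is a drastically reduced, untrained stand-in for the paper's pretrained multi-scale CNNs; the layer sizes and tag count are arbitrary.

```python
import torch
import torch.nn as nn

class MultiLevelTagger(nn.Module):
    """Sketch of multi-level aggregation: global-pool the activations of
    every conv block and concatenate them before the classifier."""
    def __init__(self, n_tags=50):
        super().__init__()
        chans = [1, 16, 32, 64]
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                          nn.ReLU(), nn.MaxPool2d(2))
            for c_in, c_out in zip(chans, chans[1:])])
        self.head = nn.Linear(sum(chans[1:]), n_tags)

    def forward(self, x):                    # x: (batch, 1, n_mels, frames)
        pooled = []
        for block in self.blocks:
            x = block(x)
            pooled.append(x.mean(dim=(2, 3)))  # global average pool per level
        return self.head(torch.cat(pooled, dim=1))  # tag logits

model = MultiLevelTagger()
clip = torch.randn(8, 1, 96, 256)  # batch of log-mel spectrogram excerpts
print(model(clip).shape)           # torch.Size([8, 50])
```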

  17. Simultaneous nested modeling from the synoptic scale to the LES scale for wind energy applications

    DEFF Research Database (Denmark)

    Liu, Yubao; Warner, Tom; Liu, Yuewei

    2011-01-01

    This paper describes an advanced multi-scale weather modeling system, WRF–RTFDDA–LES, designed to simulate synoptic scale (~2000 km) to small- and micro-scale (~100 m) circulations of real weather in wind farms on simultaneous nested grids. This modeling system is built upon the National Center f...

  18. The Personality Assessment Inventory as a proxy for the Psychopathy Checklist Revised: testing the incremental validity and cross-sample robustness of the Antisocial Features Scale.

    Science.gov (United States)

    Douglas, Kevin S; Guy, Laura S; Edens, John F; Boer, Douglas P; Hamilton, Jennine

    2007-09-01

    The Personality Assessment Inventory's (PAI's) ability to predict psychopathic personality features, as assessed by the Psychopathy Checklist-Revised (PCL-R), was examined. To investigate whether the PAI Antisocial Features (ANT) Scale and subscales possessed incremental validity beyond other theoretically relevant PAI scales, optimized regression equations were derived in a sample of 281 Canadian federal offenders. ANT, or ANT-Antisocial Behavior (ANT-A), demonstrated unique variance in regression analyses predicting PCL-R total and Factor 2 (Lifestyle Impulsivity and Social Deviance) scores, but only the Dominance (DOM) Scale was retained in models predicting Factor 1 (Interpersonal and Affective Deficits). Attempts to cross-validate the regression equations derived from the first sample on a sample of 85 U.S. sex offenders resulted in considerable validity shrinkage, with the ANT Scale in isolation performing comparably to or better than the statistical models for PCL-R total and Factor 2 scores. Results offer limited evidence of convergent validity between the PAI and the PCL-R.

  19. Grotoco@SLAM: Second Language Acquisition Modeling with Simple Features, Learners and Task-wise Models

    DEFF Research Database (Denmark)

    Klerke, Sigrid; Martínez Alonso, Héctor; Plank, Barbara

    2018-01-01

    We present our submission to the 2018 Duolingo Shared Task on Second Language Acquisition Modeling (SLAM). We focus on evaluating a range of features for the task, including user-derived measures, while examining how far we can get with a simple linear classifier. Our analysis reveals that errors...

  20. Models of Small-Scale Patchiness

    Science.gov (United States)

    McGillicuddy, D. J.

    2001-01-01

    Patchiness is perhaps the most salient characteristic of plankton populations in the ocean. The scale of this heterogeneity spans many orders of magnitude in its spatial extent, ranging from planetary down to microscale. It has been argued that patchiness plays a fundamental role in the functioning of marine ecosystems, insofar as the mean conditions may not reflect the environment to which organisms are adapted. Understanding the nature of this patchiness is thus one of the major challenges of oceanographic ecology. The patchiness problem is fundamentally one of physical-biological-chemical interactions. This interconnection arises from three basic sources: (1) ocean currents continually redistribute dissolved and suspended constituents by advection; (2) space-time fluctuations in the flows themselves impact biological and chemical processes, and (3) organisms are capable of directed motion through the water. This tripartite linkage poses a difficult challenge to understanding oceanic ecosystems: differentiation between the three sources of variability requires accurate assessment of property distributions in space and time, in addition to detailed knowledge of organismal repertoires and the processes by which ambient conditions control the rates of biological and chemical reactions. Various methods of observing the ocean tend to lie parallel to the axes of the space/time domain in which these physical-biological-chemical interactions take place. Given that a purely observational approach to the patchiness problem is not tractable with finite resources, the coupling of models with observations offers an alternative which provides a context for synthesis of sparse data with articulations of fundamental principles assumed to govern functionality of the system. In a sense, models can be used to fill the gaps in the space/time domain, yielding a framework for exploring the controls on spatially and temporally intermittent processes. The following discussion highlights

  1. ADAPTIVE TEXTURE SYNTHESIS FOR LARGE SCALE CITY MODELING

    Directory of Open Access Journals (Sweden)

    G. Despine

    2015-02-01

    Large scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high resolution terrestrial photos, but this requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue, which allows physical information and semantic attributes to be attached and selection requests to be executed. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow the most appropriate patterns to be selected from the texture catalogue. We experimented with this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent in the projection of aerial images onto the façades.

  3. Efficient Feature-Driven Visualization of Large-Scale Scientific Data

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Aidong

    2012-12-12

    Very large, complex scientific datasets acquired in many research areas create critical challenges for scientists trying to understand, analyze, and organize their data. The objective of this project is to expand feature extraction and analysis capabilities in order to develop powerful and accurate visualization tools that can assist domain scientists with their requirements in multiple phases of scientific discovery. We have recently developed several feature-driven visualization methods for extracting different data characteristics of volumetric datasets. Our results verify the hypothesis in the proposal and will be used to develop additional prototype systems.

  4. Large scale injection test (LASGIT) modelling

    International Nuclear Information System (INIS)

    Arnedo, D.; Olivella, S.; Alonso, E.E.

    2010-01-01

    Document available in extended abstract form only. With the objective of understanding gas flow processes through clay barriers in schemes of radioactive waste disposal, the Lasgit in situ experiment was planned and is currently in progress. The modelling of the experiment will permit a better understanding of the responses, confirm hypotheses about mechanisms and processes, and provide lessons for the design of future experiments. The experiment and modelling activities are included in the project FORGE (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed along the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock is included in order to prevent vertical movements of the whole system during gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings, as gas penetrates once the gas entry pressure is reached and may produce deformations which in turn lead to permeability increases. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug...

  5. Multiscale Feature Model for Terrain Data Based on Adaptive Spatial Neighborhood

    Directory of Open Access Journals (Sweden)

    Huijie Zhang

    2013-01-01

    Multiresolution hierarchy based on features (FMRH) has been applied in the field of terrain modeling and has obtained significant results in real engineering. However, it is difficult to schedule multiresolution data in FMRH from external memory. This paper proposes a new multiscale feature model and related strategies to cluster spatial data blocks and solve the scheduling problems of FMRH using spatial neighborhoods. In the model, nodes with similar error in the different layers should be in one cluster. On this basis, a spatial index algorithm for each cluster, guided by a Hilbert curve, is proposed. It ensures that multi-resolution terrain data can be loaded without traversing the whole FMRH; therefore, the efficiency of data scheduling is improved. Moreover, a spatial closeness theorem for clusters is put forward and proved. It guarantees that the union of the data blocks composes a whole terrain without any data loss. Finally, experiments have been carried out on many different large scale data sets, and the results demonstrate that the scheduling time is shortened and the efficiency of I/O operations is apparently improved, which is important in real engineering.
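
    The core of a Hilbert-curve-guided index is the mapping from 2D block coordinates to a 1D position on the curve: sorting blocks by that key places spatially adjacent blocks near each other on disk, so a cluster can be fetched with largely sequential reads. Below is the classic bitwise conversion in Python; the grid size and usage are illustrative, not the paper's FMRH data structures.

```python
def hilbert_index(order, x, y):
    """Map 2D block coordinates on a 2**order grid to a Hilbert-curve index.

    Blocks that are close on the curve are close in space, which is why
    the curve is a popular guide for laying multiresolution tiles out on
    disk: one sequential read covers a spatially coherent cluster.
    """
    n = 1 << order
    d = 0
    s = n >> 1
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/flip the quadrant so the curve keeps its orientation
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s >>= 1
    return d

# Order the blocks of an 8 x 8 tile grid for storage:
blocks = sorted(((bx, by) for bx in range(8) for by in range(8)),
                key=lambda b: hilbert_index(3, *b))
print(blocks[:8])  # the first eight tiles trace one spatial cluster
```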

  6. Islands Climatology at Local Scale. Downscaling with CIELO model

    Science.gov (United States)

    Azevedo, Eduardo; Reis, Francisco; Tomé, Ricardo; Rodrigues, Conceição

    2016-04-01

    Islands with horizontal scales of the order of tens of km, as is the case of the Atlantic islands of Macaronesia, are subscale orographic features for Global Climate Models (GCMs), since the horizontal scales of these models are too coarse to give a detailed representation of the islands' topography. Even Regional Climate Models (RCMs) reveal limitations when they are forced to reproduce the climate of small islands, mainly because of the way they flatten and lower the elevation of the islands, reducing the capacity of the model to reproduce important local mechanisms that lead to very deep local climate differentiation. Important local thermodynamic mechanisms, like the Foehn effect or the influence of topography on the radiation balance, have a prominent role in spatial climatic differentiation. Advective transport of air, and the consequent orographically induced adiabatic cooling, transforms the state parameters of the air and shapes the spatial configuration of the pressure, temperature and humidity fields. The same mechanism is at the origin of the orographic cloud cover that, besides its direct role as a water source through the reinforcement of precipitation, acts as a filter to direct solar radiation and as a source of long-wave radiation affecting the local energy balance. Moreover, the saturation (or near-saturation) conditions it provides constitute a barrier to water vapour diffusion in the mechanisms of evapotranspiration. Topographic factors like slope, aspect and orographic masking also have significant importance in the local energy balance. Therefore, the simulation of the local scale climate (past, present and future) in these archipelagos requires the use of downscaling techniques to adjust locally the outputs obtained at coarser scales. This presentation will discuss and analyse the evolution of the CIELO model (acronym for Clima Insular à Escala LOcal), a statistical/dynamical technique developed at the University of the Azores

  7. Representation of Block-Based Image Features in a Multi-Scale Framework for Built-Up Area Detection

    Directory of Open Access Journals (Sweden)

    Zhongwen Hu

    2016-02-01

    Full Text Available The accurate extraction and mapping of built-up areas play an important role in many social, economic, and environmental studies. In this paper, we propose a novel approach for built-up area detection from high spatial resolution remote sensing images, using a block-based multi-scale feature representation framework. First, an image is divided into small blocks, in which the spectral, textural, and structural features are extracted and represented using a multi-scale framework; a set of refined Harris corner points is then used to select blocks as training samples; finally, a built-up index image is obtained by minimizing the normalized spectral, textural, and structural distances to the training samples, and a built-up area map is obtained by thresholding the index image. Experiments confirm that the proposed approach is effective for high-resolution optical and synthetic aperture radar images, with different scenes and different spatial resolutions.
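
    As a rough illustration of the last two steps, the sketch below (Python, all inputs hypothetical) scores each block by its normalized distance to the nearest Harris-selected training block and thresholds the resulting index; the paper's multi-scale spectral, textural and structural extraction is assumed to have already produced the feature matrices:

        import numpy as np

        def builtup_index(blocks, train, eps=1e-9):
            """blocks: (N, F) block features; train: (M, F) features of the
            Harris-selected training blocks. High index = close to the
            training samples, i.e. likely built-up."""
            mu, sd = train.mean(0), train.std(0) + eps
            b = (blocks - mu) / sd              # normalise feature scales
            t = (train - mu) / sd
            # distance of every block to its nearest training sample
            d = np.linalg.norm(b[:, None, :] - t[None, :, :], axis=2).min(axis=1)
            return 1.0 / (1.0 + d)              # small distance -> high index

        rng = np.random.default_rng(0)
        feats = rng.normal(size=(100, 6))       # hypothetical block features
        train = feats[:10] + rng.normal(scale=0.1, size=(10, 6))
        index = builtup_index(feats, train)
        builtup_map = index > 0.5               # simple global threshold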

  8. A food recognition system for diabetic patients based on an optimized bag-of-features model.

    Science.gov (United States)

    Anthimopoulos, Marios M; Gianola, Lauro; Scarnato, Luca; Diem, Peter; Mougiakakou, Stavroula G

    2014-07-01

    Computer vision-based food recognition could be used to estimate a meal's carbohydrate content for diabetic patients. This study proposes a methodology for automatic food recognition, based on the bag-of-features (BoF) model. An extensive technical investigation was conducted for the identification and optimization of the best performing components involved in the BoF architecture, as well as the estimation of the corresponding parameters. For the design and evaluation of the prototype system, a visual dataset with nearly 5000 food images was created and organized into 11 classes. The optimized system computes dense local features, using the scale-invariant feature transform on the HSV color space, builds a visual dictionary of 10000 visual words by using the hierarchical k-means clustering and finally classifies the food images with a linear support vector machine classifier. The system achieved classification accuracy of the order of 78%, thus proving the feasibility of the proposed approach in a very challenging image dataset.
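
    The optimized pipeline lends itself to a compact sketch with OpenCV and scikit-learn. In the sketch below a flat MiniBatchKMeans with 100 words stands in for the paper's hierarchical 10000-word dictionary, and random images replace the 5000-image food dataset; everything else follows the dense-SIFT-on-HSV, dictionary, histogram, linear-SVM sequence described above:

        import cv2
        import numpy as np
        from sklearn.cluster import MiniBatchKMeans
        from sklearn.svm import LinearSVC

        sift = cv2.SIFT_create()

        def dense_descriptors(bgr, step=8):
            """Dense SIFT on each HSV channel, stacked column-wise."""
            hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
            h, w = hsv.shape[:2]
            kps = [cv2.KeyPoint(float(x), float(y), float(step))
                   for y in range(step // 2, h, step)
                   for x in range(step // 2, w, step)]
            return np.hstack([sift.compute(ch, kps)[1] for ch in cv2.split(hsv)])

        def bof_histogram(desc, km):
            hist = np.bincount(km.predict(desc), minlength=km.n_clusters)
            return hist / hist.sum()

        rng = np.random.default_rng(0)
        imgs = [rng.integers(0, 255, (64, 64, 3), dtype=np.uint8) for _ in range(40)]
        labels = rng.integers(0, 11, size=40)      # 11 food classes

        kmeans = MiniBatchKMeans(n_clusters=100, random_state=0).fit(
            np.vstack([dense_descriptors(im) for im in imgs]))
        X = np.array([bof_histogram(dense_descriptors(im), kmeans) for im in imgs])
        clf = LinearSVC().fit(X, labels)           # linear SVM classifier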

  9. Modeling of micro-scale thermoacoustics

    Energy Technology Data Exchange (ETDEWEB)

    Offner, Avshalom [The Nancy and Stephen Grand Technion Energy Program, Technion-Israel Institute of Technology, Haifa 32000 (Israel); Department of Civil and Environmental Engineering, Technion-Israel Institute of Technology, Haifa 32000 (Israel); Ramon, Guy Z., E-mail: ramong@technion.ac.il [Department of Civil and Environmental Engineering, Technion-Israel Institute of Technology, Haifa 32000 (Israel)

    2016-05-02

    Thermoacoustic phenomena, that is, the onset of self-sustained oscillations or time-averaged fluxes in a sound wave, may be harnessed in efficient and robust heat transfer devices. Specifically, miniaturization of such devices holds great promise for cooling of electronics. At the required small dimensions, it is expected that non-negligible slip effects exist at the solid surface of the “stack”, a porous matrix which is used for maintaining the correct temporal phasing of the heat transfer between the solid and the oscillating gas. Here, we develop theoretical models for thermoacoustic engines and heat pumps that account for slip, within the standing-wave approximation. Stability curves for engines with both no-slip and slip boundary conditions were calculated; the slip boundary condition curve exhibits a lower temperature difference compared with the no-slip curve for resonance frequencies that characterize micro-scale devices. Maximum achievable temperature differences across the stack of a heat pump were also calculated. For this case, slip conditions are detrimental, and such a heat pump would maintain a lower temperature difference compared to larger devices, where slip effects are negligible.

  10. Modeling of micro-scale thermoacoustics

    International Nuclear Information System (INIS)

    Offner, Avshalom; Ramon, Guy Z.

    2016-01-01

    Thermoacoustic phenomena, that is, the onset of self-sustained oscillations or time-averaged fluxes in a sound wave, may be harnessed in efficient and robust heat transfer devices. Specifically, miniaturization of such devices holds great promise for cooling of electronics. At the required small dimensions, it is expected that non-negligible slip effects exist at the solid surface of the “stack”, a porous matrix which is used for maintaining the correct temporal phasing of the heat transfer between the solid and the oscillating gas. Here, we develop theoretical models for thermoacoustic engines and heat pumps that account for slip, within the standing-wave approximation. Stability curves for engines with both no-slip and slip boundary conditions were calculated; the slip boundary condition curve exhibits a lower temperature difference compared with the no-slip curve for resonance frequencies that characterize micro-scale devices. Maximum achievable temperature differences across the stack of a heat pump were also calculated. For this case, slip conditions are detrimental, and such a heat pump would maintain a lower temperature difference compared to larger devices, where slip effects are negligible.

  11. Multi-Scale Models for the Scale Interaction of Organized Tropical Convection

    Science.gov (United States)

    Yang, Qiu

    Assessing the upscale impact of organized tropical convection from small spatial and temporal scales is a research imperative, not only for having a better understanding of the multi-scale structures of dynamical and convective fields in the tropics, but also for eventually helping in the design of new parameterization strategies to improve the next-generation global climate models. Here self-consistent multi-scale models are derived systematically by following the multi-scale asymptotic methods and used to describe the hierarchical structures of tropical atmospheric flows. The advantages of using these multi-scale models lie in isolating the essential components of multi-scale interaction and providing assessment of the upscale impact of the small-scale fluctuations onto the large-scale mean flow through eddy flux divergences of momentum and temperature in a transparent fashion. Specifically, this thesis includes three research projects about multi-scale interaction of organized tropical convection, involving tropical flows at different scaling regimes and utilizing different multi-scale models correspondingly. Inspired by the observed variability of tropical convection on multiple temporal scales, including daily and intraseasonal time scales, the goal of the first project is to assess the intraseasonal impact of the diurnal cycle on the planetary-scale circulation such as the Hadley cell. As an extension of the first project, the goal of the second project is to assess the intraseasonal impact of the diurnal cycle over the Maritime Continent on the Madden-Julian Oscillation. In the third project, the goals are to simulate the baroclinic aspects of the ITCZ breakdown and assess its upscale impact on the planetary-scale circulation over the eastern Pacific. These simple multi-scale models should be useful to understand the scale interaction of organized tropical convection and help improve the parameterization of unresolved processes in global climate models.

  12. Site-scale groundwater flow modelling of Ceberg

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D. [Duke Engineering and Services (United States); Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)

    1999-06-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Ceberg, which adopts input parameters from the SKB study site near Gideaa, in northern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the model of conductive fracture zones. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The volumetric flow balance between the regional and site-scale models suggests that the nested modelling and associated upscaling of hydraulic conductivities preserve mass balance only in a general sense. In contrast, a comparison of the base and deterministic (Variant 4) cases indicates that the upscaling is self-consistent with respect to median travel time and median canister flux. These suggest that the upscaling of hydraulic conductivity is approximately self-consistent but the nested modelling could be improved. The Base Case yields the following results for a flow porosity of ε_f = 10⁻⁴ and a flow-wetted surface area of a_r = 0.1 m²/(m³ rock): The median travel time is 1720 years. The median canister flux is 3.27×10⁻⁵ m/year. The median F-ratio is 1.72×10⁶ years/m. The base case and the deterministic variant suggest that the variability of the travel times within
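
    The Monte Carlo logic, stripped of HYDRASTAR's spatial correlation structure and geometry, can be illustrated in a few lines of Python; every number below (field statistics, gradient, path length) is a hypothetical stand-in, not the study's parameterisation:

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical stand-in for the stochastic continuum runs: sample a
        # lognormal hydraulic conductivity field per realisation and accumulate
        # advective travel time along a fixed 500 m path of 50 cells.
        n_real, n_cells = 200, 50
        porosity, gradient, cell_len = 1e-4, 1e-3, 10.0  # flow porosity, i, metres

        K = 10.0 ** rng.normal(-8.0, 1.0, size=(n_real, n_cells))  # m/s
        darcy_v = K * gradient                   # Darcy flux per cell
        advective_v = darcy_v / porosity         # transport velocity
        travel_years = (cell_len / advective_v).sum(axis=1) / 3.15e7

        print("median travel time: %.3g years" % np.median(travel_years))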

  13. Site-scale groundwater flow modelling of Ceberg

    International Nuclear Information System (INIS)

    Walker, D.; Gylling, B.

    1999-06-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Ceberg, which adopts input parameters from the SKB study site near Gideaa, in northern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the model of conductive fracture zones. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The volumetric flow balance between the regional and site-scale models suggests that the nested modelling and associated upscaling of hydraulic conductivities preserve mass balance only in a general sense. In contrast, a comparison of the base and deterministic (Variant 4) cases indicates that the upscaling is self-consistent with respect to median travel time and median canister flux. These suggest that the upscaling of hydraulic conductivity is approximately self-consistent but the nested modelling could be improved. The Base Case yields the following results for a flow porosity of ε_f = 10⁻⁴ and a flow-wetted surface area of a_r = 0.1 m²/(m³ rock): The median travel time is 1720 years. The median canister flux is 3.27×10⁻⁵ m/year. The median F-ratio is 1.72×10⁶ years/m. The base case and the deterministic variant suggest that the variability of the travel times within individual realisations is due to the

  14. The relationship between social, policy and physical venue features and social cohesion on condom use for pregnancy prevention among sex workers: a safer indoor work environment scale.

    Science.gov (United States)

    Duff, Putu; Shoveller, Jean; Dobrer, Sabina; Ogilvie, Gina; Montaner, Julio; Chettiar, Jill; Shannon, Kate

    2015-07-01

    This study aims to report on a newly developed Safer Indoor Work Environment Scale that characterises the social, policy and physical features of indoor venues and social cohesion, and, using this scale, to longitudinally evaluate the association between these features and sex workers' (SWs') condom use for pregnancy prevention. Drawing on a prospective open cohort of female SWs working in indoor venues, the newly developed Safer Indoor Work Environment Scale was used to build six multivariable models with generalised estimating equations (GEE), to determine the independent effects of social, policy and physical venue-based features and social cohesion on condom use. Of 588 indoor SWs, 63.6% used condoms for pregnancy prevention in the last month. In multivariable GEE analysis, the following venue-based features were significantly correlated with barrier contraceptive use for pregnancy prevention: managerial practices and venue safety policies (adjusted OR (AOR)=1.09; 95% CI 1.01 to 1.17), access to sexual and reproductive health services/supplies (AOR=1.10; 95% CI 1.00 to 1.20), access to drug harm reduction (AOR=1.13; 95% CI 1.01 to 1.28) and social cohesion among workers (AOR=1.05; 95% CI 1.03 to 1.07). Access to security features was marginally associated with condom use (AOR=1.13; 95% CI 0.99 to 1.29). The findings of the current study highlight how the work environment and social cohesion among SWs are related to improved condom use. Given global calls for the decriminalisation of sex work, and potential legislative reforms in Canada, this study points to the critical need for new institutional arrangements (e.g., legal and regulatory frameworks; labour standards) to support safer sex workplaces.
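
    GEE models of this kind map directly onto statsmodels; the sketch below fits one logistic GEE with an exchangeable working correlation on hypothetical repeated-measures data (variable names and effect sizes are illustrative, not the study's):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)

        # One row per follow-up interview; sw_id marks repeated observations of
        # the same sex worker, venue_score is one sub-scale of the environment
        # scale.
        n = 400
        sw_id = rng.integers(0, 100, size=n)
        venue_score = rng.normal(size=n)
        condom_use = rng.binomial(1, 1 / (1 + np.exp(-0.1 * venue_score)))

        model = sm.GEE(condom_use, sm.add_constant(venue_score), groups=sw_id,
                       family=sm.families.Binomial(),
                       cov_struct=sm.cov_struct.Exchangeable())
        res = model.fit()
        print(np.exp(res.params[1]))   # adjusted odds ratio for the venue feature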

  15. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness, operate at a single scale, and depend on human experience. A multiple-scale validation method based on the SDG (Signed Directed Graph) and qualitative trends is therefore proposed. First, the SDG model is built and qualitative trends are added to the model. Then, complete testing scenarios are produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness is demonstrated by carrying out validation for a reactor model.
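
    A minimal sketch of the positive-inference step in Python, assuming a toy SDG with hypothetical process variables: each edge carries a sign, and forward propagation derives the qualitative trend of every reachable variable, forming one testing scenario.

        from collections import deque

        # Hypothetical SDG of a toy reactor model: each edge carries +1 (same
        # direction of influence) or -1 (opposite direction).
        edges = {
            "feed_rate":   [("level", +1)],
            "level":       [("pressure", +1)],
            "cooling":     [("temperature", -1)],
            "temperature": [("pressure", +1)],
        }

        def propagate(source, sign):
            """Positive (forward) inference: derive the qualitative trend
            (+1/-1) of every variable reachable from a disturbed source node."""
            trend = {source: sign}
            queue = deque([source])
            while queue:
                node = queue.popleft()
                for succ, edge_sign in edges.get(node, []):
                    if succ not in trend:      # keep the first derived trend
                        trend[succ] = trend[node] * edge_sign
                        queue.append(succ)
            return trend

        # One testing scenario: feed rate steps up; the derived trends are then
        # compared with the simulation model's outputs at different scales.
        print(propagate("feed_rate", +1))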

  16. Design features of a full-scale high-level waste vitrification system

    International Nuclear Information System (INIS)

    Siemens, D.H.; Bonner, W.F.

    1976-08-01

    A system has been designed and is currently under construction for vitrification of commercial high-level waste. The process consists of a spray calciner coupled to an in-can melter. Due to the high radiation levels expected, this equipment is designed for totally remote operation and maintenance. The in-cell arrangement of this equipment has been developed cooperatively with a nuclear fuel reprocessor. The system will be demonstrated both full scale with nonradioactive simulated waste and pilot scale with actual high-level waste

  17. A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection

    Science.gov (United States)

    Taşkin Kaya, Gülşen

    2013-10-01

    Recently, earthquake damage assessment using satellite images has been a very popular ongoing research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult, especially when only spectral information is used during classification. In order to overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images depending on the algorithm used. However, the extraction and evaluation of textural information is generally a time-consuming process, especially for large areas affected by an earthquake, due to the size of the VHR image. Therefore, in order to provide a quick damage map, the most useful features describing damage patterns need to be known in advance, as well as the redundant features. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. In addition to spectral information, textural information was also used during the classification. For textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), giving the sensitivity of each feature during classification. The method called HDMR was recently proposed as an efficient tool to capture the input
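
    The texture-extraction step can be sketched with scikit-image (recent releases; older ones spell the functions greycomatrix/greycoprops). Window size, gray-level count and offsets below are illustrative, and entropy is computed directly since graycoprops does not provide it:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        rng = np.random.default_rng(0)
        patch = rng.integers(0, 64, size=(64, 64), dtype=np.uint8)  # stand-in window

        # Co-occurrence matrices for two offsets and four directions.
        glcm = graycomatrix(patch, distances=[1, 2],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=64, symmetric=True, normed=True)

        props = ("contrast", "dissimilarity", "homogeneity",
                 "energy", "correlation", "ASM")
        features = [graycoprops(glcm, p).ravel() for p in props]
        # graycoprops has no entropy, so compute it from the matrices directly.
        entropy = -np.sum(glcm * np.log2(glcm + 1e-12), axis=(0, 1)).ravel()
        vector = np.concatenate(features + [entropy])  # candidates for HDMR selection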

  18. Feature and Meta-Models in Clafer: Mixed, Specialized, and Coupled

    DEFF Research Database (Denmark)

    Bąk, Kacper; Czarnecki, Krzysztof; Wasowski, Andrzej

    2011-01-01

    constraints (such as mapping feature configurations to component configurations or model templates). Clafer also allows arranging models into multiple specialization and extension layers via constraints and inheritance. We identify four key mechanisms allowing a meta-modeling language to express feature...

  19. Global fits of GUT-scale SUSY models with GAMBIT

    Science.gov (United States)

    Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin

    2017-12-01

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.

  20. Global fits of GUT-scale SUSY models with GAMBIT

    Energy Technology Data Exchange (ETDEWEB)

    Athron, Peter [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Balazs, Csaba [Monash University, School of Physics and Astronomy, Melbourne, VIC (Australia); Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); Bringmann, Torsten; Dal, Lars A.; Krislock, Abram; Raklev, Are [University of Oslo, Department of Physics, Oslo (Norway); Buckley, Andy [University of Glasgow, SUPA, School of Physics and Astronomy, Glasgow (United Kingdom); Chrzaszcz, Marcin [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); H. Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Krakow (Poland); Conrad, Jan; Edsjoe, Joakim; Farmer, Ben [AlbaNova University Centre, Oskar Klein Centre for Cosmoparticle Physics, Stockholm (Sweden); Stockholm University, Department of Physics, Stockholm (Sweden); Cornell, Jonathan M. [McGill University, Department of Physics, Montreal, QC (Canada); Jackson, Paul; White, Martin [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); University of Adelaide, Department of Physics, Adelaide, SA (Australia); Kvellestad, Anders; Savage, Christopher [NORDITA, Stockholm (Sweden); Mahmoudi, Farvah [Univ Lyon, Univ Lyon 1, CNRS, ENS de Lyon, Centre de Recherche Astrophysique de Lyon UMR5574, Saint-Genis-Laval (France); Theoretical Physics Department, CERN, Geneva (Switzerland); Martinez, Gregory D. [University of California, Physics and Astronomy Department, Los Angeles, CA (United States); Putze, Antje [LAPTh, Universite de Savoie, CNRS, Annecy-le-Vieux (France); Rogan, Christopher [Harvard University, Department of Physics, Cambridge, MA (United States); Ruiz de Austri, Roberto [IFIC-UV/CSIC, Instituto de Fisica Corpuscular, Valencia (Spain); Saavedra, Aldo [Australian Research Council Centre of Excellence for Particle Physics at the Tera-scale (Australia); The University of Sydney, Faculty of Engineering and Information Technologies, Centre for Translational Data Science, School of Physics, Camperdown, NSW (Australia); Scott, Pat [Imperial College London, Department of Physics, Blackett Laboratory, London (United Kingdom); Serra, Nicola [Universitaet Zuerich, Physik-Institut, Zurich (Switzerland); Weniger, Christoph [University of Amsterdam, GRAPPA, Institute of Physics, Amsterdam (Netherlands); Collaboration: The GAMBIT Collaboration

    2017-12-15

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos. (orig.)

  1. Differential Effect of Features of Autism on IQs Reported Using Wechsler Scales

    Science.gov (United States)

    Carothers, Douglas E.; Taylor, Ronald L.

    2013-01-01

    Many children with autistic disorder, or autism, are described as having low intelligence quotients. These descriptions are partially based on use of various editions of the "Wechsler Intelligence Scale for Children" (WISC), the most widely used intelligence test for children with autism. An important question is whether task demands of…

  2. Homogeneity analysis with k sets of variables: An alternating least squares method with optimal scaling features

    NARCIS (Netherlands)

    van der Burg, Eeke; de Leeuw, Jan; Verdegaal, Renée

    1988-01-01

    Homogeneity analysis, or multiple correspondence analysis, is usually applied to k separate variables. In this paper we apply it to sets of variables by using sums within sets. The resulting technique is called OVERALS. It uses the notion of optimal scaling, with transformations that can be multiple

  3. Verification of Simulation Results Using Scale Model Flight Test Trajectories

    National Research Council Canada - National Science Library

    Obermark, Jeff

    2004-01-01

    .... A second compromise scaling law was investigated as a possible improvement. For ejector-driven events at minimum sideslip, the most important variables for scale model construction are the mass moment of inertia and ejector...

  4. Common scale features of the recent Greek and Serbian church chant traditions

    Directory of Open Access Journals (Sweden)

    Peno Vesna

    2008-01-01

    Full Text Available This paper is an attempt to show the similarity between the Serbian and Greek post-Byzantine chanting traditions, especially as regards the scale organization of modes. Three teachers and reformers from Constantinople, Chrisantos, Gregorios and Chourmousios, established a fairly firm theoretical system for the first time during the long history of church chant. One of the main results of their reform, beside changes relating to neums, was the assignment of strict sizes to the intervals in the natural tonal system. There are three kinds of natural scales: diatonic, chromatic and enharmonic. They all have their place in the Greek Anastasimatarion chant book, whose first edition was prepared by Petar Peloponesios and later edited by Ionnes Protopsaltes. The first, first plagal and fourth plagal modes are diatonic in each of their melos, with very few exceptions; the second and second plagal are soft and hard chromatic, while the third and varis are enharmonic. It is important to note that the Greek chanter is very conscious of the scale foundation of the melody, so he first chants the apechima, the intonation formula that provides all the details needed to enter the appropriate mode, i.e. melos. One mode may use one sort of scale for all groups of melodies - melos. However, in some modes there are different melos whose scale organisation differs considerably. That means that it is not proper to equate mode with scale, but rather to look for the specific scale's shape through the melodies that belong to the melos. The absence of a formal Serbian church music theory and, especially, the very conservative way in which church melodies are learnt by ear and by heart, has caused significant gaps, which preclude an adequate approach to the essential principles of Serbian chant. Over the years many Serbian chanters and musicians have noted down church melodies, especially those from the Octoechos, in F or in G, with the key

  5. The Catchment Feature Model: A Device for Multimodal Fusion and a Bridge between Signal and Sense

    Science.gov (United States)

    Quek, Francis

    2004-12-01

    The catchment feature model addresses two questions in the field of multimodal interaction: how we bridge video and audio processing with the realities of human multimodal communication, and how information from the different modes may be fused. We argue from a detailed literature review that gestural research has clustered around manipulative and semaphoric use of the hands, motivate the catchment feature model with psycholinguistic research, and present the model. In contrast to "whole gesture" recognition, the catchment feature model applies a feature decomposition approach that facilitates cross-modal fusion at the level of discourse planning and conceptualization. We present our experimental framework for catchment feature-based research, cite three concrete examples of catchment features, and propose new directions of multimodal research based on the model.

  6. The Catchment Feature Model: A Device for Multimodal Fusion and a Bridge between Signal and Sense

    Directory of Open Access Journals (Sweden)

    Francis Quek

    2004-09-01

    Full Text Available The catchment feature model addresses two questions in the field of multimodal interaction: how we bridge video and audio processing with the realities of human multimodal communication, and how information from the different modes may be fused. We argue from a detailed literature review that gestural research has clustered around manipulative and semaphoric use of the hands, motivate the catchment feature model with psycholinguistic research, and present the model. In contrast to “whole gesture” recognition, the catchment feature model applies a feature decomposition approach that facilitates cross-modal fusion at the level of discourse planning and conceptualization. We present our experimental framework for catchment feature-based research, cite three concrete examples of catchment features, and propose new directions of multimodal research based on the model.

  7. Classification of Urban Feature from Unmanned Aerial Vehicle Images Using Gasvm Integration and Multi-Scale Segmentation

    Science.gov (United States)

    Modiri, M.; Salehabadi, A.; Mohebbi, M.; Hashemi, A. M.; Masumi, M.

    2015-12-01

    The use of UAVs in photogrammetry to obtain cover images and achieve the main objectives of photogrammetric mapping has boomed in recent years. Images of the REGGIOLO region in the province of Reggio Emilia, Italy, taken by a UAV with a non-metric Canon Ixus camera at an average height of 139.42 m, were used to classify urban features. Using the SURE software and the cover images of the study area, a dense point cloud, a DSM and an orthophoto with a spatial resolution of 10 cm were produced. The DTM of the area was developed using an adaptive TIN filtering algorithm. The NDSM of the area was prepared as the difference between the DSM and the DTM and added as a separate feature to the image stack. For feature extraction, the co-occurrence matrix features mean, variance, homogeneity, contrast, dissimilarity, entropy, second moment and correlation were computed for each of the RGB bands of the orthophoto. The classes used for the urban classification problem include buildings, trees and tall vegetation, grass and short vegetation, paved road and impervious surfaces. The impervious-surfaces class covers conditions such as pavement, cement, cars and roofs. Pixel-based classification and the selection of optimal classification features were carried out with GASVM on a per-pixel basis. In order to achieve classification results with higher accuracy, the spectral, textural and conceptual shape information of the orthophoto was combined, and a multi-scale segmentation method was used for segmentation. The results of the proposed classification of urban features suggest the suitability of this method for classifying urban scenes from UAV images. The overall accuracy and kappa coefficient of the proposed method were 93.47% and 91.84%, respectively.

  8. Computer-aided mass detection in mammography: False positive reduction via gray-scale invariant ranklet texture features

    International Nuclear Information System (INIS)

    Masotti, Matteo; Lanconelli, Nico; Campanini, Renato

    2009-01-01

    In this work, gray-scale invariant ranklet texture features are proposed for false positive reduction (FPR) in computer-aided detection (CAD) of breast masses. Two main considerations are at the basis of this proposal. First, false positive (FP) marks surviving our previous CAD system seem to be characterized by specific texture properties that can be used to discriminate them from masses. Second, our previous CAD system achieves invariance to linear/nonlinear monotonic gray-scale transformations by encoding regions of interest into ranklet images through the ranklet transform, an image transformation similar to the wavelet transform, yet dealing with pixels' ranks rather than with their gray-scale values. Therefore, the new FPR approach proposed herein defines a set of texture features which are calculated directly from the ranklet images corresponding to the regions of interest surviving our previous CAD system, hence, ranklet texture features; then, a support vector machine (SVM) classifier is used for discrimination. As a result of this approach, texture-based information is used to discriminate FP marks surviving our previous CAD system; at the same time, invariance to linear/nonlinear monotonic gray-scale transformations of the new CAD system is guaranteed, as ranklet texture features are calculated from ranklet images that have this property themselves by construction. To emphasize the gray-scale invariance of both the previous and new CAD systems, training and testing are carried out without any in-between parameters' adjustment on mammograms having different gray-scale dynamics; in particular, training is carried out on analog digitized mammograms taken from a publicly available digital database, whereas testing is performed on full-field digital mammograms taken from an in-house database. Free-response receiver operating characteristic (FROC) curve analysis of the two CAD systems demonstrates that the new approach achieves a higher reduction of FP marks
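
    The full ranklet transform combines rank statistics with Haar-like supports, but its key property, invariance to monotonic gray-scale transformations, can be demonstrated with a plain rank encoding (a minimal sketch, not the CAD system's implementation):

        import numpy as np
        from scipy.stats import rankdata

        def rank_encode(roi):
            """Encode a region of interest by pixel ranks instead of grey
            values; any monotonic grey-scale transform leaves it unchanged."""
            ranks = rankdata(roi.ravel()).reshape(roi.shape)
            return ranks / ranks.size            # normalise to (0, 1]

        roi = np.array([[10.0, 200.0], [30.0, 90.0]])
        gamma = roi ** 0.5                       # a nonlinear monotonic transform
        assert np.allclose(rank_encode(roi), rank_encode(gamma))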

  9. Features of Balance Model Development of Exclave Region

    Directory of Open Access Journals (Sweden)

    Timur Rustamovich Gareev

    2015-06-01

    Full Text Available In the article, the authors build a balance model for an exclave region. The aim of the work is to explore the unique properties of exclaves in order to evaluate the possibility of developing a more complex model of a regional economy. Exclaves are strange phenomena in both theoretical and practical regional economics. There is a lack of comparative models, so it is typically quite challenging to study exclaves. At the same time, exclaves produce better statistics, which allows more careful consideration of cross-regional economic flows. The authors discuss methodologies of model-based regional development forecasting. They analyze the balance approach both on the more general level of regional governance and individually, on the example of specific territories. Thus, they identify and explain the need to develop balance-approach models fitted to the special needs of certain territories. By combining regional modeling for an exclave with traditional balance and simulation-based methods and an event-based approach, they arrive at a more detailed model of a regional economy. Taking one Russian exclave as an example, the authors have developed an event-based long-term sustainability simulation model. In the article, they provide the general characteristics of the model, describe its components, and outline the simulation algorithm. The approach introduced in this article combines traditional balance models with the peculiarities of an exclave region to develop a holistic regional economy model (with the Kaliningrad region serving as an example). It is important to underline that the resulting model helps to evaluate the degree of influence of preferential economic regimes (such as a Free Customs Zone) on the economy of a region.

  10. Formal Modeling and Verification of Interlocking Systems Featuring Sequential Release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2015-01-01

    In this paper, we present a method and an associated tool suite for formal verification of the new ETCS level 2 based Danish railway interlocking systems. We have made a generic and reconfigurable model of the system behavior and generic high-level safety properties. This model accommodates seque...... SMT based bounded model checking (BMC) and inductive reasoning, we are able to verify the properties for model instances corresponding to railway networks of industrial size. Experiments also show that BMC is efficient for finding bugs in the railway interlocking designs....

  11. Formal Modeling and Verification of Interlocking Systems Featuring Sequential Release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2014-01-01

    In this paper, we present a method and an associated tool suite for formal verification of the new ETCS level 2 based Danish railway interlocking systems. We have made a generic and reconfigurable model of the system behavior and generic high-level safety properties. This model accommodates seque...... SMT based bounded model checking (BMC) and inductive reasoning, we are able to verify the properties for model instances corresponding to railway networks of industrial size. Experiments also show that BMC is efficient for finding bugs in the railway interlocking designs....

  12. Results of PMIP2 coupled simulations of the Mid-Holocene and Last Glacial Maximum – Part 1: experiments and large-scale features

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2007-06-01

    Full Text Available A set of coupled ocean-atmosphere simulations using state of the art climate models is now available for the Last Glacial Maximum and the Mid-Holocene through the second phase of the Paleoclimate Modeling Intercomparison Project (PMIP2. This study presents the large-scale features of the simulated climates and compares the new model results to those of the atmospheric models from the first phase of the PMIP, for which sea surface temperature was prescribed or computed using simple slab ocean formulations. We consider the large-scale features of the climate change, pointing out some of the major differences between the different sets of experiments. We show in particular that systematic differences between PMIP1 and PMIP2 simulations are due to the interactive ocean, such as the amplification of the African monsoon at the Mid-Holocene or the change in precipitation in mid-latitudes at the LGM. Also the PMIP2 simulations are in general in better agreement with data than PMIP1 simulations.

  13. Small-scale features in the Earth's magnetic field observed by Magsat.

    Science.gov (United States)

    Cain, J.C.; Schmitz, D.R.; Muth, L.

    1984-01-01

    A spherical harmonic expansion to degree and order 29 is derived using a selected magnetically quiet sample of Magsat data. Global maps representing the contribution due to terms of the expansion above n = 13 at 400 km altitude are compared with previously published residual anomaly maps and shown to be similar, even in polar regions. An expansion with such a high degree and order displays all but the sharpest features seen by the satellite and gives a more consistent picture of the high-order field structure at a constant altitude than do component maps derived independently. -Authors

  14. Feature extraction for face recognition via Active Shape Model (ASM) and Active Appearance Model (AAM)

    Science.gov (United States)

    Iqtait, M.; Mohamad, F. S.; Mamat, M.

    2018-03-01

    Biometrics is a pattern recognition approach used for the automatic recognition of persons based on the characteristics and features of an individual. Face recognition with a high recognition rate is still a challenging task, usually accomplished in three phases: face detection, feature extraction, and expression classification. Precise and robust localisation of facial feature points is a complicated and difficult issue in face recognition. Cootes proposed the multi-resolution Active Shape Model (ASM) algorithm, which can extract a specified shape accurately and efficiently. As an improvement on ASM, the Active Appearance Model (AAM) algorithm extracts both the shape and texture of a specified object simultaneously. In this paper we give more details about the two algorithms and report the results of experiments testing their performance on one dataset of faces. We found that ASM is faster and achieves more accurate feature point localisation than AAM, but AAM achieves a better match to the texture.

  15. Kilometer-Scale Topographic Roughness of Mercury: Correlation with Geologic Features and Units

    Science.gov (United States)

    Kreslavsky, Mikhail A.; Head, James W.; Neumann, Gregory A.; Zuber, Maria T.; Smith, David E.

    2014-01-01

    We present maps of the topographic roughness of the northern circumpolar area of Mercury at kilometer scales. The maps are derived from range profiles obtained by the Mercury Laser Altimeter (MLA) instrument onboard the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) mission. As measures of roughness, we used the interquartile range of profile curvature at three baselines: 0.7 kilometers, 2.8 kilometers, and 11 kilometers. The maps provide a synoptic overview of variations of typical topographic textures. They show a dichotomy between the smooth northern plains and rougher, more heavily cratered terrains. Analysis of the scale dependence of roughness indicates that the regolith on Mercury is thicker than on the Moon by approximately a factor of three. Roughness contrasts within northern volcanic plains of Mercury indicate a younger unit inside Goethe basin and inside another unnamed stealth basin. These new data permit interplanetary comparisons of topographic roughness.

  16. Formal modelling and verification of interlocking systems featuring sequential release

    DEFF Research Database (Denmark)

    Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan

    2017-01-01

    checking (BMC) and inductive reasoning, it is verified that the generated model instance satisfies the generated safety properties. Using this method, we are able to verify the safety properties for model instances corresponding to railway networks of industrial size. Experiments show that BMC is also...

  17. Modelling across bioreactor scales: methods, challenges and limitations

    DEFF Research Database (Denmark)

    Gernaey, Krist

    Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what...

  18. A feature-based approach to modeling protein-protein interaction hot spots.

    Science.gov (United States)

    Cho, Kyu-il; Kim, Dongsup; Lee, Doheon

    2009-05-01

    Identifying features that effectively represent the energetic contribution of an individual interface residue to the interactions between proteins remains problematic. Here, we present several new features and show that they are more effective than conventional features. By combining the proposed features with conventional features, we develop a predictive model for interaction hot spots. Initially, 54 multifaceted features, composed of different levels of information including structure, sequence and molecular interaction information, are quantified. Then, to identify the best subset of features for predicting hot spots, feature selection is performed using a decision tree. Based on the selected features, a predictive model for hot spots is created using support vector machine (SVM) and tested on an independent test set. Our model shows better overall predictive accuracy than previous methods such as the alanine scanning methods Robetta and FOLDEF, and the knowledge-based method KFC. Subsequent analysis yields several findings about hot spots. As expected, hot spots have a larger relative surface area burial and are more hydrophobic than other residues. Unexpectedly, however, residue conservation displays a rather complicated tendency depending on the types of protein complexes, indicating that this feature is not good for identifying hot spots. Of the selected features, the weighted atomic packing density, relative surface area burial and weighted hydrophobicity are the top 3, with the weighted atomic packing density proving to be the most effective feature for predicting hot spots. Notably, we find that hot spots are closely related to π-related interactions, especially π···π interactions.
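
    The two-stage design, tree-based feature selection followed by an SVM, can be sketched with scikit-learn; the synthetic data below stands in for the paper's 54 interface-residue features:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in for the 54 interface-residue features.
        X, y = make_classification(n_samples=300, n_features=54,
                                   n_informative=8, random_state=0)

        # Step 1: rank features with a decision tree.
        tree = DecisionTreeClassifier(random_state=0).fit(X, y)
        top = np.argsort(tree.feature_importances_)[::-1][:10]

        # Step 2: train the SVM hot-spot predictor on the selected subset.
        print(cross_val_score(SVC(), X[:, top], y, cv=5).mean())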

  19. Entropy Error Model of Planar Geometry Features in GIS

    Institute of Scientific and Technical Information of China (English)

    LI Dajun; GUAN Yunlan; GONG Jianya; DU Daosheng

    2003-01-01

    Positional error of line segments is usually described using the "g-band"; however, its band width depends on the choice of confidence level. In fact, given different confidence levels, a series of concentric bands can be obtained. To overcome the effect of the confidence level on the error indicator, by introducing the union entropy theory, we propose an entropy error ellipse index of a point, then extend it to line segments and polygons, and establish an entropy error band of a line segment and an entropy error donut of a polygon. The research shows that the entropy error index can be determined uniquely and is not influenced by the confidence level, and that it is suitable for the positional uncertainty of planar geometry features.

  20. Turbulence modeling for flows around convex features giving rapid eddy distortion

    International Nuclear Information System (INIS)

    Tucker, P.G.; Liu, Y.

    2007-01-01

    Reynolds averaged Navier-Stokes model performance in the stagnation and wake regions is explored for turbulent flows with relatively large Lagrangian length scales (generally larger than the scale of geometrical features) approaching small cylinders (both square and circular). The effective cylinder (or wire) diameter based Reynolds number, Re_W ≤ 2.5×10³. The following turbulence models are considered: a mixing-length model; standard Spalart and Allmaras (SA) and streamline curvature (and rotation) corrected SA (SARC); Secundov's ν_t-92; Secundov et al.'s two-equation ν_t-L; Wolfshtein's k-l model; the Explicit Algebraic Stress Model (EASM) of Abid et al.; the cubic model of Craft et al.; various linear k-ε models including those with wall distance based damping functions; Menter SST, k-ω and Spalding's LVEL model. The use of differential equation distance functions (Poisson and Hamilton-Jacobi equation based) for palliative turbulence modeling purposes is explored. The performance of SA with these distance functions is also considered in the sharp convex geometry region of an airfoil trailing edge. For the cylinder, with Re_W ∼ 2.5×10³ the mixing length and k-l models give strong turbulence production in the wake region. However, in agreement with eddy viscosity estimates, the LVEL and Secundov ν_t-92 models show relatively little cylinder influence on turbulence. On the other hand, two-equation models (as does the one-equation SA) suggest the cylinder gives a strong turbulence deficit in the wake region. Also, for SA, an order of magnitude cylinder diameter decrease from Re_W = 2500 to 250 surprisingly strengthens the cylinder's disruptive influence. Importantly, results for Re_W = 250 show that no matter how small the cylinder/wire, its influence does not, as it should, vanish. Similar tests for the Launder-Sharma k-ε, Menter SST and k-ω show, in accordance with physical reality, the cylinder's influence diminishing, albeit slowly, with size. Results

  1. Model of cosmology and particle physics at an intermediate scale

    International Nuclear Information System (INIS)

    Bastero-Gil, M.; Di Clemente, V.; King, S. F.

    2005-01-01

    We propose a model of cosmology and particle physics in which all relevant scales arise in a natural way from an intermediate string scale. We are led to assign the string scale to the intermediate scale M_* ∼ 10¹³ GeV by four independent pieces of physics: electroweak symmetry breaking; the μ parameter; the axion scale; and the neutrino mass scale. The model involves hybrid inflation with the waterfall field N being responsible for generating the μ term, the right-handed neutrino mass scale, and the Peccei-Quinn symmetry breaking scale. The large scale structure of the Universe is generated by the lightest right-handed sneutrino playing the role of a coupled curvaton. We show that the correct curvature perturbations may be successfully generated providing the lightest right-handed neutrino is weakly coupled in the seesaw mechanism, consistent with sequential dominance

  2. Main features of the proposed NCRP respiratory tract model

    International Nuclear Information System (INIS)

    Phalen, R.F.; Fisher, G.L.; Moss, O.R.; Schlesinger, R.B.; Swift, D.L.

    1991-01-01

    The proposed NCRP respiratory tract dosimetry model regions include the naso-oro-pharyngo-laryngeal (NOPL), the tracheobronchial (TB), the pulmonary (P), and the lymph nodes (LN). Input aerosol concentrations are derived from a consideration of particle-size-dependent inspirability. Particle deposition in the respiratory tract is modelled using the mechanisms of inertial impaction, sedimentation and diffusion. The rates of absorption of particles, and transport to the blood, have been derived from clearance data from people and laboratory animals. The effect of body growth on particle deposition is considered. Particle clearance rates are assumed to be independent of age. The proposed respiratory tract model differs significantly from the 1966 Task Group Model in that (1) inspirability is considered; (2) new sub-regions of the respiratory tract are considered; (3) absorption of materials by the blood is treated in a more sophisticated fashion; and (4) body size (and thus age) is taken into account. (author)

  3. Modelling the cognitive and neuropathological features of schizophrenia with phencyclidine.

    Science.gov (United States)

    Reynolds, Gavin P; Neill, Joanna C

    2016-11-01

    Here, Reynolds and Neill describe the studies that preceded and followed publication of this paper, which reported a deficit in parvalbumin (PV), a calcium-binding protein found in GABA interneurons known to be reduced in schizophrenia patients, in conjunction with a deficit in reversal learning in an animal model for schizophrenia. This publication resulted from common research interests: Reynolds in the neurotransmitter pathology of schizophrenia, and Neill in developing animal models for schizophrenia symptomatology. The animal model, using a sub-chronic dosing regimen (sc) with the non-competitive NMDA receptor antagonist PCP (phencyclidine), evolved from previous work in rats (for PCP) and primates (for cognition). The hypothesis of a PV deficit came from emerging evidence for a GABAergic dysfunction in schizophrenia, in particular a deficit in PV-containing GABA interneurons. Since this original publication, a PV deficit has been identified in other animal models for schizophrenia, and the PV field has expanded considerably. This includes mechanistic work attempting to identify the link between oxidative stress and GABAergic dysfunction using this scPCP model, and assessment of the potential of the PV neuron as a target for new antipsychotic drugs. The latter has included development of a molecule targeting KV3.1 channels located on PV-containing GABA interneurons which can restore both PV expression and cognitive deficits in the scPCP model. © The Author(s) 2016.

  4. The influence of fine-scale habitat features on regional variation in population performance of alpine White-tailed Ptarmigan

    Science.gov (United States)

    Fedy, B.; Martin, K.

    2011-01-01

    It is often assumed (explicitly or implicitly) that animals select habitat features to maximize fitness. However, there is often a mismatch between preferred habitats and indices of individual and population measures of performance. We examined the influence of fine-scale habitat selection on the overall population performance of the White-tailed Ptarmigan (Lagopus leucura), an alpine specialist, in two subdivided populations whose habitat patches are configured differently. The central region of Vancouver Island, Canada, has more continuous and larger habitat patches than the southern region. In 2003 and 2004, using paired logistic regression between used (n = 176) and available (n = 324) sites, we identified food availability, distance to standing water, and predator cover as preferred habitat components. We then quantified variation in population performance in the two regions in terms of sex ratio, age structure (n = 182 adults and yearlings), and reproductive success (n = 98 females) on the basis of 8 years of data (1995-1999, 2002-2004). Region strongly influenced females' breeding success, which, unsuccessful hens included, was consistently higher in the central region (n = 77 females) of the island than in the south (n = 21 females, P = 0.01). The central region also had a much higher proportion of successful hens (87%) than did the south (55%, P < 0.001). In light of our findings, we suggest that population performance is influenced by a combination of fine-scale habitat features and coarse-scale habitat configuration. © The Cooper Ornithological Society 2011.

  5. Automated prostate cancer detection via comprehensive multi-parametric magnetic resonance imaging texture feature models

    International Nuclear Information System (INIS)

    Khalvati, Farzad; Wong, Alexander; Haider, Masoom A.

    2015-01-01

    Prostate cancer is the most common form of cancer and the second leading cause of cancer death in North America. Auto-detection of prostate cancer can play a major role in early detection of prostate cancer, which has a significant impact on patient survival rates. While multi-parametric magnetic resonance imaging (MP-MRI) has shown promise in diagnosis of prostate cancer, the existing auto-detection algorithms do not take advantage of the abundance of data available in MP-MRI to improve detection accuracy. The goal of this research was to design a radiomics-based auto-detection method for prostate cancer via utilizing MP-MRI data. In this work, we present new MP-MRI texture feature models for radiomics-driven detection of prostate cancer. In addition to commonly used non-invasive imaging sequences in conventional MP-MRI, namely T2-weighted MRI (T2w) and diffusion-weighted imaging (DWI), our proposed MP-MRI texture feature models incorporate computed high-b DWI (CHB-DWI) and a new diffusion imaging modality called correlated diffusion imaging (CDI). Moreover, the proposed texture feature models incorporate features from individual b-value images. A comprehensive set of texture features was calculated for both the conventional MP-MRI and new MP-MRI texture feature models. We performed feature selection analysis for each individual modality and then combined the best features from each modality to construct the optimized texture feature models. The performance of the proposed MP-MRI texture feature models was evaluated via leave-one-patient-out cross-validation using a support vector machine (SVM) classifier trained on 40,975 cancerous and healthy tissue samples obtained from real clinical MP-MRI datasets. The proposed MP-MRI texture feature models outperformed the conventional model (i.e., T2w+DWI) with regard to cancer detection accuracy. Comprehensive texture feature models were developed for improved radiomics-driven detection of prostate cancer using MP-MRI.
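
    As a rough illustration of the evaluation protocol described above, the sketch below runs leave-one-patient-out cross-validation with an SVM classifier, grouping tissue samples by patient so that no patient contributes to both training and testing. The feature matrix, labels and patient grouping are synthetic placeholders, not the paper's data.

        import numpy as np
        from sklearn.model_selection import LeaveOneGroupOut
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Placeholder data: rows are tissue samples, columns are texture features.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 30))
        y = rng.integers(0, 2, size=200)            # 1 = cancerous, 0 = healthy
        patient_ids = np.repeat(np.arange(20), 10)  # 10 samples per patient

        # Each fold holds out every sample from one patient.
        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        accs = [
            model.fit(X[tr], y[tr]).score(X[te], y[te])
            for tr, te in LeaveOneGroupOut().split(X, y, groups=patient_ids)
        ]
        print(f"leave-one-patient-out accuracy: {np.mean(accs):.3f}")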

  6. Grade 12 Students' Conceptual Understanding and Mental Models of Galvanic Cells before and after Learning by Using Small-Scale Experiments in Conjunction with a Model Kit

    Science.gov (United States)

    Supasorn, Saksri

    2015-01-01

    This study aimed to develop the small-scale experiments involving electrochemistry and the galvanic cell model kit featuring the sub-microscopic level. The small-scale experiments in conjunction with the model kit were implemented based on the 5E inquiry learning approach to enhance students' conceptual understanding of electrochemistry. The…

  7. Correlation between clinical and histological features in a pig model of choroidal neovascularization

    DEFF Research Database (Denmark)

    Lassota, Nathan; Kiilgaard, Jens Folke; Prause, Jan Ulrik

    2006-01-01

    To analyse the histological changes in the retina and the choroid in a pig model of choroidal neovascularization (CNV) and to correlate these findings with fundus photographic and fluorescein angiographic features.

  8. Improvements and new features in the IRI-2000 model

    International Nuclear Information System (INIS)

    Bilitza, D.

    2002-01-01

    This paper describes the changes that were implemented in the new version of the COSPAR/URSI International Reference Ionosphere (IRI-2000). These changes are: (1) two new options for the electron density in the D-region, (2) a better functional description of the electron density in the E-F merging region, (3) inclusion of the F1 layer occurrence probability as a new parameter, (4) a new model for the bottomside parameters B0 and B1 that greatly improves the representation at low and equatorial latitudes during high solar activities, (5) inclusion of a model for foF2 storm-time updating, (6) a new option for the electron temperature in the topside ionosphere, and (7) inclusion of a model for the equatorial F region ion drift. The main purpose of this paper is to provide the IRI users with examples of the effects of these changes. (author)

  9. Structural and Molecular Modeling Features of P2X Receptors

    Directory of Open Access Journals (Sweden)

    Luiz Anastacio Alves

    2014-03-01

    Currently, adenosine 5'-triphosphate (ATP) is recognized as the extracellular messenger that acts through P2 receptors. P2 receptors are divided into two subtypes: P2Y metabotropic receptors and P2X ionotropic receptors, both of which are found in virtually all mammalian cell types studied. Due to the difficulty in studying membrane protein structures by X-ray crystallography or NMR techniques, there is little information about these structures available in the literature. Two structures of the P2X4 receptor in truncated form have been solved by crystallography. Molecular modeling has proven to be an excellent tool for studying ionotropic receptors. Recently, modeling studies carried out on P2X receptors have advanced our knowledge of the P2X receptor structure-function relationships. This review presents a brief history of ion channel structural studies and shows how modeling approaches can be used to address relevant questions about P2X receptors.

  10. The features of modelling semiconductor lasers with a wide contact

    Directory of Open Access Journals (Sweden)

    Rzhanov Alexey

    2017-01-01

    The aspects of calculating the dynamics and statics of high-power semiconductor laser diode radiation are investigated, taking into account the main physical mechanisms influencing the power, spectral composition, and far and near field of the laser radiation. The paper outlines a dynamic distributed model of a semiconductor laser with a wide contact and possible algorithms for its implementation.

  11. Features of optical modeling in educational and scientific activity ...

    African Journals Online (AJOL)

    The article discusses the functionality of existing software for the modeling, analysis and optimization of lighting systems and optical elements, through which the stage of their design can be automated completely. The use of these programs is shown using the example of scientific work and the educational activity of ...

  12. Development of the method for realization of the spectral irradiance scale featuring a system of spectral comparisons

    International Nuclear Information System (INIS)

    Skerovic, V; Zarubica, V; Aleksic, M; Zekovic, L; Belca, I

    2010-01-01

    Realization of the scale of spectral responsivity of the detectors in the Directorate of Measures and Precious Metals (DMDM) is based on silicon detectors traceable to LNE-INM. In order to realize the unit of spectral irradiance in the laboratory for photometry and radiometry of the Bureau of Measures and Precious Metals, a new method based on the calibration of the spectroradiometer by comparison with a standard detector has been established. The development of the method included realization of the System of Spectral Comparisons (SSC), together with the detector spectral responsivity calibrations by means of a primary spectrophotometric system. Linearity testing and stray light analysis were performed to characterize the spectroradiometer. Measurement of the aperture diameter and calibration of the transimpedance amplifier were part of the overall experiment. In this paper, the developed method is presented and measurement results with the associated measurement uncertainty budget are shown.

  13. Development of the method for realization of the spectral irradiance scale featuring a system of spectral comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Skerovic, V; Zarubica, V; Aleksic, M [Directorate of measures and precious metals, Optical radiation Metrology department, Mike Alasa 14, 11000 Belgrade (Serbia); Zekovic, L; Belca, I, E-mail: vladanskerovic@dmdm.r [Faculty of Physics, Department for Applied physics and metrology, Studentski trg 12-16, 11000 Belgrade (Serbia)

    2010-10-15

    Realization of the scale of spectral responsivity of the detectors in the Directorate of Measures and Precious Metals (DMDM) is based on silicon detectors traceable to LNE-INM. In order to realize the unit of spectral irradiance in the laboratory for photometry and radiometry of the Bureau of Measures and Precious Metals, a new method based on the calibration of the spectroradiometer by comparison with a standard detector has been established. The development of the method included realization of the System of Spectral Comparisons (SSC), together with the detector spectral responsivity calibrations by means of a primary spectrophotometric system. Linearity testing and stray light analysis were performed to characterize the spectroradiometer. Measurement of the aperture diameter and calibration of the transimpedance amplifier were part of the overall experiment. In this paper, the developed method is presented and measurement results with the associated measurement uncertainty budget are shown.

  14. Magnetic Barkhausen noise: modeling and scaling

    International Nuclear Information System (INIS)

    Rodríguez-Pérez, Jorge L.; Pérez Benítez, José A.

    2008-01-01

    Magnetic Barkhausen noise arises from lattice defects and is reflected in the abrupt changes that take place in the magnetization of the material under study. This presupposes a complex system, given the various factors that influence its occurrence and the internal changes it entails. Studies of the noise commonly use three fundamental quantities: the duration of the signal, the area under the curve, and the energy of the signal; from these, other frequently used quantities are defined: the root mean square (RMS) voltage and the amplitude of the signal (maximum peak voltage). Investigating the phenomenon in this way entails a statistical analysis of the behaviour of the signal as the result of a set of changes occurring in the material, revealing the complexity of the system and the importance of scaling laws. This paper investigates the relationship between magnetic Barkhausen noise, scaling laws and complexity using samples of ASTM A36 structural steel that were subjected to mechanical deformation by traction and compression. A statistical analysis was performed to determine the complexity, and the values of the fundamental quantities and scaling laws are reported for different deformations, showing the connection between the RMS voltage values, the depth of the sample, the characteristics of the scaling laws, and the complexity of this pseudo-random system.

  15. TU-CD-BRB-01: Normal Lung CT Texture Features Improve Predictive Models for Radiation Pneumonitis

    International Nuclear Information System (INIS)

    Krafft, S; Briere, T; Court, L; Martel, M

    2015-01-01

    Purpose: Existing normal tissue complication probability (NTCP) models for radiation pneumonitis (RP) traditionally rely on dosimetric and clinical data but are limited in terms of performance and generalizability. Extraction of pre-treatment image features provides a potential new category of data that can improve NTCP models for RP. We consider quantitative measures of total lung CT intensity and texture in a framework for prediction of RP. Methods: Available clinical and dosimetric data was collected for 198 NSCLC patients treated with definitive radiotherapy. Intensity- and texture-based image features were extracted from the T50 phase of the 4D-CT acquired for treatment planning. A total of 3888 features (15 clinical, 175 dosimetric, and 3698 image features) were gathered and considered candidate predictors for modeling of RP grade ≥3. A baseline logistic regression model with mean lung dose (MLD) was first considered. Additionally, a least absolute shrinkage and selection operator (LASSO) logistic regression was applied to the set of clinical and dosimetric features, and subsequently to the full set of clinical, dosimetric, and image features. Model performance was assessed by comparing area under the curve (AUC). Results: A simple logistic fit of MLD was an inadequate model of the data (AUC∼0.5). Including clinical and dosimetric parameters within the framework of the LASSO resulted in improved performance (AUC=0.648). Analysis of the full cohort of clinical, dosimetric, and image features provided further and significant improvement in model performance (AUC=0.727). Conclusions: To achieve significant gains in predictive modeling of RP, new categories of data should be considered in addition to clinical and dosimetric features. We have successfully incorporated CT image features into a framework for modeling RP and have demonstrated improved predictive performance. Validation and further investigation of CT image features in the context of RP NTCP modeling are warranted.
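
    A minimal sketch of the LASSO step, assuming an L1-penalized logistic regression as the selection-and-fitting engine and synthetic stand-ins for the clinical, dosimetric and image features; the penalty strength C is an illustrative choice, not the study's tuned value.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import cross_val_predict
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Placeholder cohort: 198 patients, 500 candidate predictors.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(198, 500))
        y = (rng.random(198) < 0.15).astype(int)   # RP grade >= 3 indicator

        # The L1 penalty shrinks most coefficients to exactly zero,
        # performing feature selection and model fitting in one step.
        lasso = make_pipeline(
            StandardScaler(),
            LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
        )
        prob = cross_val_predict(lasso, X, y, cv=5, method="predict_proba")[:, 1]
        print(f"cross-validated AUC: {roc_auc_score(y, prob):.3f}")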

  16. Body Dysmorphic Disorder: Neurobiological Features and an Updated Model

    Science.gov (United States)

    Li, Wei; Arienzo, Donatello; Feusner, Jamie D.

    2013-01-01

    Body Dysmorphic Disorder (BDD) affects approximately 2% of the population and involves misperceived defects of appearance along with obsessive preoccupation and compulsive behaviors. There is evidence of neurobiological abnormalities associated with symptoms in BDD, although research to date is still limited. This review covers the latest neuropsychological, genetic, neurochemical, psychophysical, and neuroimaging studies and synthesizes these findings into an updated (yet still preliminary) neurobiological model of the pathophysiology of BDD. We propose a model in which visual perceptual abnormalities, along with frontostriatal and limbic system dysfunction, may combine to contribute to the symptoms of impaired insight and obsessive thoughts and compulsive behaviors expressed in BDD. Further research is necessary to gain a greater understanding of the etiological formation of BDD symptoms and their evolution over time. PMID:25419211

  17. Main features of nucleation in model solutions of oral cavity

    Science.gov (United States)

    Golovanova, O. A.; Chikanova, E. S.; Punin, Yu. O.

    2015-05-01

    The regularities of nucleation in model solutions of the oral cavity have been investigated, and the induction order and constants have been determined for two systems: saliva and dental plaque fluid (DPF). It is shown that an increase in the initial supersaturation leads to a transition from the heterogeneous nucleation of crystallites to a homogeneous one. Some additives are found to enhance nucleation: HCO3- > C6H12O6 > F-, while others hinder this process: protein (casein) > Mg2+. It is established that crystallization in DPF occurs more rapidly and that the DPF composition is favorable for the growth of small (52.6-26.1 μm) crystallites. On the contrary, the conditions implemented in the model saliva solution facilitate the formation of larger (198.4-41.8 μm) crystals.

  18. Boosting the discriminative power of color models for feature detection

    Science.gov (United States)

    Stokman, Harro M. G.; Gevers, Theo

    2005-01-01

    We consider the well-known problem of segmenting a color image into foreground-background pixels. Such a result can be obtained by segmenting the red, green and blue channels directly. Alternatively, the result may be obtained through the transformation of the color image into other color spaces, such as HSV or normalized colors. The problem then is how to select the color space or color channel that produces the best segmentation result. Furthermore, if more than one channel is an equally good candidate, the next problem is how to combine the results. In this article, we investigate whether the principles of the formal model for diversification of Markowitz (1952) can be applied to solve the problem. We verify, in theory and in practice, that the proposed diversification model can be applied effectively to determine the most appropriate combination of color spaces for the application at hand.

  19. Semantic Road Segmentation Via Multi-Scale Ensembles of Learned Features

    NARCIS (Netherlands)

    Alvarez, J.M.; LeCun, Y.; Gevers, T.; Lopez, A.M.

    2012-01-01

    Semantic segmentation refers to the process of assigning an object label (e.g., building, road, sidewalk, car, pedestrian) to every pixel in an image. Common approaches formulate the task as a random field labeling problem, modeling the interactions between labels by combining local and contextual information.

  20. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao

    2009-08-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.

    This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  1. Microphysics in Multi-scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land-surface interactive processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance for the multi-scale modeling system will be presented.

  2. Feature inference with uncertain categorization: Re-assessing Anderson's rational model.

    Science.gov (United States)

    Konovalova, Elizaveta; Le Mens, Gaël

    2017-09-18

    A key function of categories is to help predictions about unobserved features of objects. At the same time, humans are often in situations where the categories of the objects they perceive are uncertain. In an influential paper, Anderson (Psychological Review, 98(3), 409-429, 1991) proposed a rational model for feature inferences with uncertain categorization. A crucial feature of this model is the conditional independence assumption: it assumes that the within-category feature correlation is zero. In prior research, this model has been found to provide a poor fit to participants' inferences. This evidence is restricted to task environments inconsistent with the conditional independence assumption. Currently available evidence thus provides little information about how this model would fit participants' inferences in a setting with conditional independence. In four experiments based on a novel paradigm and one experiment based on an existing paradigm, we assess the performance of Anderson's model under conditional independence. We find that this model predicts participants' inferences better than competing models. One model assumes that inferences are based on just the most likely category. The second model is insensitive to categories but sensitive to overall feature correlation. The performance of Anderson's model is evidence that inferences were influenced not only by the more likely category but also by the other candidate category. Our findings suggest that a version of Anderson's model which relaxes the conditional independence assumption will likely perform well in environments characterized by within-category feature correlation.
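
    The contrast the experiments test can be stated in a few lines: Anderson's rational model averages the feature prediction over all candidate categories, weighted by their posterior probabilities, whereas the single-category model conditions only on the most likely category. The numbers below are invented for illustration.

        import numpy as np

        # Posterior over two candidate categories given the observed object.
        p_cat = np.array([0.7, 0.3])
        # P(feature present | category) for each candidate category.
        p_feat_given_cat = np.array([0.9, 0.2])

        # Anderson's rational model: marginalize over the categories.
        p_rational = p_cat @ p_feat_given_cat          # 0.7*0.9 + 0.3*0.2 = 0.69

        # Single-category model: condition on the most likely category only.
        p_single = p_feat_given_cat[np.argmax(p_cat)]  # 0.90

        print(p_rational, p_single)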

  3. Feature Fusion Based Audio-Visual Speaker Identification Using Hidden Markov Model under Different Lighting Variations

    Directory of Open Access Journals (Sweden)

    Md. Rabiul Islam

    2014-01-01

    The aim of the paper is to propose a feature fusion based Audio-Visual Speaker Identification (AVSI) system with varied conditions of illumination environments. Among the different fusion strategies, feature level fusion has been used for the proposed AVSI system, where a Hidden Markov Model (HMM) is used for learning and classification. Since the feature set contains richer information about the raw biometric data than any other level, integration at the feature level is expected to provide better authentication results. In this paper, both Mel Frequency Cepstral Coefficients (MFCCs) and Linear Prediction Cepstral Coefficients (LPCCs) are combined to form the audio feature vectors, and Active Shape Model (ASM) based appearance and shape facial features are concatenated to form the visual feature vectors. These combined audio and visual features are used for the feature fusion. To reduce the dimension of the audio and visual feature vectors, the Principal Component Analysis (PCA) method is used. The VALID audio-visual database is used to measure the performance of the proposed system, where four different illumination levels of lighting conditions are considered. Experimental results focus on the significance of the proposed audio-visual speaker identification system with various combinations of audio and visual features.
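
    A schematic reading of the pipeline, assuming the hmmlearn package for the HMM and random arrays in place of real MFCC/LPCC and ASM frame sequences: concatenate audio and visual frames (feature-level fusion), reduce with PCA, and train one Gaussian HMM per enrolled speaker.

        import numpy as np
        from sklearn.decomposition import PCA
        from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

        # Placeholder per-frame features for one speaker's training utterances.
        rng = np.random.default_rng(2)
        audio = rng.normal(size=(300, 25))    # stand-in for MFCC + LPCC frames
        visual = rng.normal(size=(300, 40))   # stand-in for ASM shape/appearance

        # Feature-level fusion: concatenate, then reduce dimension with PCA.
        fused = PCA(n_components=15).fit_transform(np.hstack([audio, visual]))

        # One HMM per speaker; at test time the claimed identity is the model
        # giving the highest log-likelihood for the observation sequence.
        speaker_model = GaussianHMM(n_components=5, covariance_type="diag",
                                    n_iter=20, random_state=0)
        speaker_model.fit(fused)
        print("log-likelihood:", speaker_model.score(fused))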

  4. Applications of random forest feature selection for fine-scale genetic population assignment.

    Science.gov (United States)

    Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G

    2018-02-01

    Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest and guided regularized random forest) compared with FST ranking for selection of single nucleotide polymorphisms (SNP) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon (Salmo salar) and a published SNP data set for Alaskan Chinook salmon (Oncorhynchus tshawytscha). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90%, using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than FST-selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for each data set, respectively, a level of accuracy never reached for these species using FST-selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for management and conservation of exploited populations.
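
    The core of the random-forest ranking step can be sketched as follows, with simulated genotypes standing in for the salmon SNP data; importance-ranked SNPs form the panel, which is then scored here by cross-validated classification (the published work uses dedicated assignment tests rather than this shortcut).

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        # Simulated data: individuals x SNPs coded 0/1/2; pops = origin labels.
        rng = np.random.default_rng(3)
        genotypes = rng.integers(0, 3, size=(500, 2000))
        pops = rng.integers(0, 4, size=500)

        # Rank SNPs by random forest importance; keep the top 384 as a panel.
        rf = RandomForestClassifier(n_estimators=500, random_state=0, n_jobs=-1)
        rf.fit(genotypes, pops)
        panel = np.argsort(rf.feature_importances_)[::-1][:384]

        acc = cross_val_score(
            RandomForestClassifier(n_estimators=500, random_state=0, n_jobs=-1),
            genotypes[:, panel], pops, cv=5,
        ).mean()
        print(f"cross-validated assignment accuracy: {acc:.3f}")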

  5. Object-Based Change Detection in Urban Areas: The Effects of Segmentation Strategy, Scale, and Feature Space on Unsupervised Methods

    Directory of Open Access Journals (Sweden)

    Lei Ma

    2016-09-01

    Object-based change detection (OBCD) has recently been receiving increasing attention as a result of rapid improvements in the resolution of remote sensing data. However, some OBCD issues relating to the segmentation of high-resolution images remain to be explored. For example, segmentation units derived using different segmentation strategies, segmentation scales, feature spaces, and change detection methods have rarely been assessed. In this study, we have tested four common unsupervised change detection methods using different segmentation strategies and a series of segmentation scale parameters on two WorldView-2 images of urban areas. We have also evaluated the effect of adding extra textural and Normalized Difference Vegetation Index (NDVI) information instead of using only spectral information. Our results indicated that change detection methods performed better at a medium scale than at a fine scale close to the pixel size. Multivariate Alteration Detection (MAD) always outperformed the other methods tested, at the same confidence level. The overall accuracy appeared to benefit from using a two-date segmentation strategy rather than single-date segmentation. Adding textural and NDVI information appeared to reduce detection accuracy, but the magnitude of this reduction was not consistent across the different unsupervised methods and segmentation strategies. We conclude that a two-date segmentation strategy is useful for change detection in high-resolution imagery, but that the optimization of thresholds is critical for unsupervised change detection methods. Advanced methods need to be explored that can take advantage of additional textural or other parameters.

  6. 3D Core Model for simulation of nuclear power plants: Simulation requirements, model features, and validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1999-01-01

    In 1994-1996, Thomson Training and Simulation (TT and S) carried out the D50 Project, which involved the design and construction of optimized replica simulators for one Dutch and three German Nuclear Power Plants. It was recognized early on that the faithful reproduction of the Siemens reactor control and protection systems would impose extremely stringent demands on the simulation models, particularly the Core physics and the RCS thermohydraulics. The quality of the models, and their thorough validation, were thus essential. The present paper describes the main features of the fully 3D Core model implemented by TT and S, and its extensive validation campaign, which was defined in extremely positive collaboration with the Customer and the Core Data suppliers. (author)

  7. Feature extraction through least squares fit to a simple model

    International Nuclear Information System (INIS)

    Demuth, H.B.

    1976-01-01

    The Oak Ridge National Laboratory (ORNL) presented the Los Alamos Scientific Laboratory (LASL) with 18 radiographs of fuel rod test bundles. The problem is to estimate the thickness of the gap between some cylindrical rods and a flat wall surface. The edges of the gaps are poorly defined due to finite source size, x-ray scatter, parallax, film grain noise, and other degrading effects. The radiographs were scanned and the scan-line data were averaged to reduce noise and to convert the problem to one dimension. A model of the ideal gap, convolved with an appropriate point-spread function, was fit to the averaged data with a least squares program; and the gap width was determined from the final fitted-model parameters. The least squares routine did converge and the gaps obtained are of reasonable size. The method is remarkably insensitive to noise. This report describes the problem, the techniques used to solve it, and the results and conclusions. Suggestions for future work are also given
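
    A small sketch of the fitting idea, assuming a rectangular intensity dip convolved with a Gaussian point-spread function as the gap model and synthetic data in place of the averaged scan lines; the gap width is read off the fitted parameters.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d
        from scipy.optimize import least_squares

        x = np.arange(200, dtype=float)

        def gap_profile(params, x):
            # Ideal gap: rectangular step of given width and depth on a flat
            # baseline, blurred by a Gaussian PSF of width sigma.
            center, width, depth, baseline, sigma = params
            ideal = np.where(np.abs(x - center) < width / 2,
                             baseline + depth, baseline)
            return gaussian_filter1d(ideal, sigma)

        # Synthetic noisy profile standing in for the averaged radiograph data.
        rng = np.random.default_rng(4)
        data = gap_profile((100.0, 18.0, 40.0, 120.0, 3.0), x)
        data += rng.normal(0.0, 2.0, x.size)

        fit = least_squares(lambda p: gap_profile(p, x) - data,
                            x0=(90.0, 10.0, 30.0, 110.0, 2.0),
                            bounds=([0, 1, 0, 0, 0.5],
                                    [200, 100, 200, 255, 10]))
        print(f"estimated gap width: {fit.x[1]:.2f} pixels")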

  8. Scaling considerations for modeling the in situ vitrification process

    International Nuclear Information System (INIS)

    Langerman, M.A.; MacKinnon, R.J.

    1990-09-01

    Scaling relationships for modeling the in situ vitrification waste remediation process are documented based upon similarity considerations derived from fundamental principles. Requirements for maintaining temperature and electric potential field similarity between the model and the prototype are determined as well as requirements for maintaining similarity in off-gas generation rates. A scaling rationale for designing reduced-scale experiments is presented and the results are assessed numerically. 9 refs., 6 figs

  9. Using LISREL to Evaluate Measurement Models and Scale Reliability.

    Science.gov (United States)

    Fleishman, John; Benson, Jeri

    1987-01-01

    The LISREL program was used to examine measurement model assumptions and to assess the reliability of the Coopersmith Self-Esteem Inventory for Children, Form B. Data on 722 third- through sixth-graders from over 70 schools in a large urban school district were used. The LISREL program assessed (1) the nature of the basic measurement model for the scale, and (2) scale invariance across…

  10. Coulomb-gas scaling, superfluid films, and the XY model

    International Nuclear Information System (INIS)

    Minnhagen, P.; Nylen, M.

    1985-01-01

    Coulomb-gas-scaling ideas are invoked as a link between the superfluid density of two-dimensional 4He films and the XY model; the Coulomb-gas-scaling function epsilon(X) is extracted from experiments and is compared with Monte Carlo simulations of the XY model. The agreement is found to be excellent.

  11. An Investigation of Feature Models for Music Genre Classification using the Support Vector Classifier

    DEFF Research Database (Denmark)

    Meng, Anders; Shawe-Taylor, John

    2005-01-01

    In music genre classification the decision time is typically of the order of several seconds, however most automatic music genre classification systems focus on short-time features derived from 10-50 ms. This work investigates two models, the multivariate Gaussian model and the multivariate autoregressive model, for modelling short-time features. Furthermore, it was investigated how these models can be integrated over a segment of short-time features into a kernel such that a support vector machine can be applied. Two kernels with this property were considered, the convolution kernel and the product probability kernel. In order to examine the different methods, an 11-genre music setup was utilized. In this setup, the Mel Frequency Cepstral Coefficients (MFCC) were used as short-time features. The accuracy of the best performing model on this data set was 44%, as compared to a human performance of 52%.
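
    For the Gaussian segment model, the product probability kernel (with power 1, the expected likelihood kernel) has a closed form between two fitted Gaussians: K(p, q) = N(mu_p; mu_q, Sigma_p + Sigma_q). A hedged sketch with random frames in place of real MFCCs:

        import numpy as np
        from scipy.stats import multivariate_normal

        def gaussian_stats(frames):
            """Fit a multivariate Gaussian to a segment of short-time features."""
            return frames.mean(axis=0), np.cov(frames, rowvar=False)

        def product_probability_kernel(seg_a, seg_b):
            # Closed form for two Gaussians: K = N(mu_a; mu_b, cov_a + cov_b).
            mu_a, cov_a = gaussian_stats(seg_a)
            mu_b, cov_b = gaussian_stats(seg_b)
            return multivariate_normal.pdf(mu_a, mean=mu_b, cov=cov_a + cov_b)

        rng = np.random.default_rng(5)
        seg1 = rng.normal(size=(400, 6))   # stand-in MFCC frames, segment 1
        seg2 = rng.normal(size=(400, 6))   # stand-in MFCC frames, segment 2
        print(product_probability_kernel(seg1, seg2))

    A Gram matrix of such kernel values over all segment pairs can then be passed to an SVM with a precomputed kernel.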

  12. Observable Emission Features of Black Hole GRMHD Jets on Event Horizon Scales

    Energy Technology Data Exchange (ETDEWEB)

    Pu, Hung-Yi [Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON, N2L 2Y5 (Canada); Wu, Kinwah [Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Dorking, Surrey, RH5 6NT (United Kingdom); Younsi, Ziri; Mizuno, Yosuke [Institut für Theoretische Physik, Max-von-Laue-Straße 1, D-60438 Frankfurt am Main (Germany); Asada, Keiichi; Nakamura, Masanori, E-mail: hpu@perimeterinstitute.ca, E-mail: asada@asiaa.sinica.edu.tw, E-mail: nakamura@asiaa.sinica.edu.tw, E-mail: kinwah.wu@ucl.ac.uk, E-mail: younsi@th.physik.uni-frankfurt.de, E-mail: mizuno@th.physik.uni-frankfurt.de [Institute of Astronomy and Astrophysics, Academia Sinica, 11F of Astronomy-Mathematics Building, AS/NTU No. 1, Taipei 10617, Taiwan (China)

    2017-08-20

    The general-relativistic magnetohydrodynamical (GRMHD) formulation for black hole-powered jets naturally gives rise to a stagnation surface, where inflows and outflows along magnetic field lines that thread the black hole event horizon originate. We derive a conservative formulation for the transport of energetic electrons, which are initially injected at the stagnation surface and subsequently transported along flow streamlines. With this formulation the energy spectra evolution of the electrons along the flow in the presence of radiative and adiabatic cooling is determined. For flows regulated by synchrotron radiative losses and adiabatic cooling, the effective radio emission region is found to be finite, and geometrically it is more extended along the jet central axis. Moreover, the emission from regions adjacent to the stagnation surface is expected to be the most luminous as this is where the freshly injected energetic electrons are concentrated. An observable stagnation surface is thus a strong prediction of the GRMHD jet model with the prescribed non-thermal electron injection. Future millimeter/submillimeter (mm/sub-mm) very-long-baseline interferometric observations of supermassive black hole candidates, such as the one at the center of M87, can verify this GRMHD jet model and its associated non-thermal electron injection mechanism.

  13. Gravitational wave background from Standard Model physics: qualitative features

    International Nuclear Information System (INIS)

    Ghiglieri, J.; Laine, M.

    2015-01-01

    Because of physical processes ranging from microscopic particle collisions to macroscopic hydrodynamic fluctuations, any plasma in thermal equilibrium emits gravitational waves. For the largest wavelengths the emission rate is proportional to the shear viscosity of the plasma. In the Standard Model at T > 160 GeV, the shear viscosity is dominated by the most weakly interacting particles, right-handed leptons, and is relatively large. We estimate the order of magnitude of the corresponding spectrum of gravitational waves. Even though at small frequencies (corresponding to the sub-Hz range relevant for planned observatories such as eLISA) this background is tiny compared with that from non-equilibrium sources, the total energy carried by the high-frequency part of the spectrum is non-negligible if the production continues for a long time. We suggest that this may constrain (weakly) the highest temperature of the radiation epoch. Observing the high-frequency part directly sets a very ambitious goal for future generations of GHz-range detectors.

  14. Development and evaluation of a watershed-scale hybrid hydrologic model

    OpenAIRE

    Cho, Younghyun

    2016-01-01

    A watershed-scale hybrid hydrologic model (Distributed-Clark), which is a lumped conceptual and distributed feature model, was developed to predict spatially distributed short- and long-term rainfall runoff generation and routing using relatively simple methodologies and state-of-the-art spatial data in a GIS environment. In Distributed-Clark, spatially distributed excess rainfall estimated with the SCS curve number method and a GIS-based set of separated unit hydrographs (spatially distribut...
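
    The SCS curve number step mentioned above reduces to a short function; this sketch uses the common initial-abstraction ratio of 0.2, which may differ from the model's actual configuration.

        def scs_excess_rainfall(p_mm: float, cn: float) -> float:
            """SCS curve number excess rainfall (depths in mm)."""
            s = 25400.0 / cn - 254.0   # potential maximum retention
            ia = 0.2 * s               # initial abstraction (common assumption)
            if p_mm <= ia:
                return 0.0
            return (p_mm - ia) ** 2 / (p_mm - ia + s)

        # Example: a 60 mm storm on a grid cell with CN = 80.
        print(f"excess rainfall: {scs_excess_rainfall(60.0, 80.0):.1f} mm")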

  15. Products recognition on shop-racks from local scale-invariant features

    Science.gov (United States)

    Zawistowski, Jacek; Kurzejamski, Grzegorz; Garbat, Piotr; Naruniec, Jacek

    2016-04-01

    This paper presents a system designed for multi-object detection purposes and adjusted for the application of product search on market shelves. The system uses well-known binary keypoint detection algorithms for finding characteristic points in the image. One of the main ideas is object recognition based on the Implicit Shape Model method. The authors of the article proposed many improvements to the algorithm. Originally, fiducial points are matched with a very simple function, which limits the number of object parts that can be successfully separated, while various methods of classification may be validated in order to achieve higher performance. Such an extension implies research on a training procedure able to deal with many object categories. The proposed solution opens new possibilities for many algorithms demanding fast and robust multi-object recognition.
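
    A minimal OpenCV sketch of the binary-keypoint front end, using ORB as a representative detector/descriptor and a ratio test to prune matches before any ISM-style voting; the file names are placeholders.

        import cv2  # pip install opencv-python

        # Placeholder images: a product template and a shelf photograph.
        template = cv2.imread("product_template.png", cv2.IMREAD_GRAYSCALE)
        shelf = cv2.imread("shelf.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=1000)
        kp_t, des_t = orb.detectAndCompute(template, None)
        kp_s, des_s = orb.detectAndCompute(shelf, None)

        # Hamming distance suits binary descriptors; Lowe's ratio test keeps
        # only matches clearly better than their runner-up.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        matches = matcher.knnMatch(des_t, des_s, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        print(f"{len(good)} candidate correspondences")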

  16. Toward a model for lexical access based on acoustic landmarks and distinctive features

    Science.gov (United States)

    Stevens, Kenneth N.

    2002-04-01

    This article describes a model in which the acoustic speech signal is processed to yield a discrete representation of the speech stream in terms of a sequence of segments, each of which is described by a set (or bundle) of binary distinctive features. These distinctive features specify the phonemic contrasts that are used in the language, such that a change in the value of a feature can potentially generate a new word. This model is a part of a more general model that derives a word sequence from this feature representation, the words being represented in a lexicon by sequences of feature bundles. The processing of the signal proceeds in three steps: (1) Detection of peaks, valleys, and discontinuities in particular frequency ranges of the signal leads to identification of acoustic landmarks. The type of landmark provides evidence for a subset of distinctive features called articulator-free features (e.g., [vowel], [consonant], [continuant]). (2) Acoustic parameters are derived from the signal near the landmarks to provide evidence for the actions of particular articulators, and acoustic cues are extracted by sampling selected attributes of these parameters in these regions. The selection of cues that are extracted depends on the type of landmark and on the environment in which it occurs. (3) The cues obtained in step (2) are combined, taking context into account, to provide estimates of "articulator-bound" features associated with each landmark (e.g., [lips], [high], [nasal]). These articulator-bound features, combined with the articulator-free features in (1), constitute the sequence of feature bundles that forms the output of the model. Examples of cues that are used, and justification for this selection, are given, as well as examples of the process of inferring the underlying features for a segment when there is variability in the signal due to enhancement gestures (recruited by a speaker to make a contrast more salient) or due to overlap of gestures from adjacent segments.

  17. A two-scale roughness model for the gloss of coated paper

    Science.gov (United States)

    Elton, N. J.

    2008-08-01

    A model for gloss is developed for surfaces with two-scale random roughness where one scale lies in the wavelength region (microroughness) and the other in the geometrical optics limit (macroroughness). A number of important industrial materials such as coated and printed paper and some paints exhibit such two-scale rough surfaces. Scalar Kirchhoff theory is used to describe scattering in the wavelength region and a facet model used for roughness features much greater than the wavelength. Simple analytical expressions are presented for the gloss of surfaces with Gaussian, modified and intermediate Lorentzian distributions of surface slopes, valid for gloss at high angle of incidence. In the model, gloss depends only on refractive index, rms microroughness amplitude and the FWHM of the surface slope distribution, all of which may be obtained experimentally. Model predictions are compared with experimental results for a range of coated papers and gloss standards, and found to be in fair agreement within model limitations.
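
    The two-scale structure can be made concrete with a back-of-the-envelope estimate: Fresnel reflectance for the refractive index, a Bennett-Porteus exponential attenuation for the wavelength-scale rms roughness, and the fraction of Gaussian-distributed facet slopes falling inside the detector acceptance for the macroroughness. This is a sketch of the ingredients, not the paper's fitted formula; the acceptance half-angle is an assumed instrument parameter.

        import numpy as np
        from scipy.special import erf

        def gloss_two_scale(n, sigma_nm, fwhm_deg, theta_deg=75.0,
                            lam_nm=550.0, accept_deg=0.9):
            th = np.radians(theta_deg)
            # Fresnel reflectance of the smooth dielectric (unpolarized).
            root = np.sqrt(n**2 - np.sin(th)**2)
            rs = ((np.cos(th) - root) / (np.cos(th) + root)) ** 2
            rp = ((n**2 * np.cos(th) - root) / (n**2 * np.cos(th) + root)) ** 2
            fresnel = 0.5 * (rs + rp)
            # Microroughness: specular attenuation by wavelength-scale rms
            # roughness (Bennett-Porteus factor).
            atten = np.exp(-(4 * np.pi * sigma_nm * np.cos(th) / lam_nm) ** 2)
            # Macroroughness: fraction of Gaussian facet slopes reflecting
            # into the detector acceptance half-angle.
            s = fwhm_deg / 2.355                  # FWHM -> standard deviation
            facet = erf(accept_deg / (np.sqrt(2) * s))
            return fresnel * atten * facet

        print(gloss_two_scale(n=1.5, sigma_nm=80.0, fwhm_deg=3.0))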

  18. Measurement and Modelling of Scaling Minerals

    DEFF Research Database (Denmark)

    Villafafila Garcia, Ada

    2005-01-01

    The solid-liquid equilibrium of sulphate scaling minerals (SrSO4, BaSO4, CaSO4 and CaSO4·2H2O) at temperatures up to 300°C and pressures up to 1000 bar is described in chapter 4. Results for the binary systems (M2+, SO42-)-H2O; the ternary systems (Na+, M2+, SO42-)-H2O and (Na+, M2+, Cl-)-H2O; and the quaternary systems (Na+, M2+)(Cl-, SO42-)-H2O are presented, where M2+ stands for Ba2+, Ca2+, or Sr2+. Chapter 5 is devoted to the correlation and prediction of vapour-liquid-solid equilibria for different carbonate systems causing scale problems (CaCO3, BaCO3, SrCO3, and MgCO3), covering the temperature range from 0 to 250°C and pressures up to …. Results for the MCO3-NaCl-Na2SO4-H2O systems are given, where M2+ stands for Ca2+, Mg2+, Ba2+, and Sr2+. This chapter also includes an analysis of the CaCO3-MgCO3-CO2-H2O system. Chapter 6 deals with the system NaCl-H2O. Available data for that system at high temperatures and/or pressures are addressed, and sodium chloride solubility…

  19. Impact of SLA assimilation in the Sicily Channel Regional Model: model skills and mesoscale features

    Directory of Open Access Journals (Sweden)

    A. Olita

    2012-07-01

    The impact of the assimilation of MyOcean sea level anomaly (SLA) along-track data on the analyses of the Sicily Channel Regional Model was studied. The numerical model has a resolution of 1/32° and is capable of reproducing mesoscale and sub-mesoscale features. The impact of the SLA assimilation is studied by comparing a simulation (SIM), which does not assimilate data, with an analysis (AN) assimilating SLA along-track multi-mission data produced in the framework of the MyOcean project. The quality of the analysis was evaluated by computing the RMSE of the misfits between analysis background and observations (sea level before assimilation). A qualitative evaluation of the ability of the analyses to reproduce mesoscale structures is accomplished by comparing model results with ocean colour and SST satellite data, able to detect such features on the ocean surface. CTD profiles allowed us to evaluate the impact of the SLA assimilation along the water column. We found a significant improvement for the AN solution in terms of SLA RMSE with respect to SIM (the averaged RMSE of AN SLA misfits over 2 years is about 0.5 cm smaller than SIM). Comparison with CTD data shows a questionable improvement produced by the assimilation process in terms of vertical features: AN is better in temperature, while for salinity it gets worse than SIM at the surface. This suggests that a better a-priori description of the vertical error covariances would be desirable. The qualitative comparison of the simulation and analyses with synoptic, independent satellite data proves that SLA assimilation allows some dynamical features (above all the circulation in the Ionian portion of the domain) and mesoscale structures otherwise misplaced or neglected by SIM to be correctly reproduced. Such mesoscale changes also imply that the eddy momentum fluxes (i.e. Reynolds stresses) show major changes in the Ionian area. Changes in Reynolds stresses reflect a different pumping of eastward momentum from the eddies to the mean flow.

  20. Macro scale models for freight railroad terminals.

    Science.gov (United States)

    2016-03-02

    The project has developed a yard capacity model for macro-level analysis. The study considers the detailed sequencing and scheduling in classification yards and their impacts on yard capacity, simulates typical freight railroad terminals, and statistic…

  1. Scaling of Precipitation Extremes Modelled by Generalized Pareto Distribution

    Science.gov (United States)

    Rajulapati, C. R.; Mujumdar, P. P.

    2017-12-01

    Precipitation extremes are often modelled with data from annual maximum series or peaks over threshold series. The Generalized Pareto Distribution (GPD) is commonly used to fit the peaks over threshold series. Scaling of precipitation extremes from larger time scales to smaller time scales when the extremes are modelled with the GPD is burdened with difficulties arising from varying thresholds for different durations. In this study, the scale invariance theory is used to develop a disaggregation model for precipitation extremes exceeding specified thresholds. A scaling relationship is developed for a range of thresholds obtained from a set of quantiles of non-zero precipitation of different durations. The GPD parameters and exceedance rate parameters are modelled by the Bayesian approach and the uncertainty in scaling exponent is quantified. A quantile based modification in the scaling relationship is proposed for obtaining the varying thresholds and exceedance rate parameters for shorter durations. The disaggregation model is applied to precipitation datasets of Berlin City, Germany and Bangalore City, India. From both the applications, it is observed that the uncertainty in the scaling exponent has a considerable effect on uncertainty in scaled parameters and return levels of shorter durations.
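
    A compact sketch of the peaks-over-threshold step, assuming scipy's genpareto with the location fixed at zero and synthetic gamma-distributed rainfall in place of the station records; the return level formula holds for shape xi != 0.

        import numpy as np
        from scipy.stats import genpareto

        rng = np.random.default_rng(6)
        precip = rng.gamma(shape=0.4, scale=8.0, size=20000)  # stand-in data

        # Peaks over threshold: exceedances above a high quantile of wet values.
        wet = precip[precip > 0]
        u = np.quantile(wet, 0.95)
        exceed = wet[wet > u] - u

        # Fit the GPD to the exceedances (location fixed at zero).
        xi, _, beta = genpareto.fit(exceed, floc=0)

        # Return level exceeded on average once every m exceedances (xi != 0).
        m = 100
        level = u + (beta / xi) * (m**xi - 1)
        print(f"u={u:.1f} shape={xi:.3f} scale={beta:.2f} level={level:.1f}")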

  2. Scale gauge symmetry and the standard model

    International Nuclear Information System (INIS)

    Sola, J.

    1990-01-01

    This paper speculates on a version of the standard model of the electroweak and strong interactions coupled to gravity and equipped with a spontaneously broken, anomalous, conformal gauge symmetry. The scalar sector is virtually absent in the minimal model, but in the general case it shows up in the form of a nonlinear harmonic map Lagrangian. A Euclidean approach to the cosmological constant problem is also addressed in this framework.

  3. Large-scale modelling of neuronal systems

    International Nuclear Information System (INIS)

    Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.

    2009-01-01

    The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is stated by a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links from each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can be a support for network generation.
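
    For readers unfamiliar with the BCM rule, a single-neuron sketch is shown below: the weight update dw/dt = x*y*(y - theta), with a sliding threshold theta tracking the recent average of y^2, provides the bistability and selectivity mentioned above. Parameter values are arbitrary.

        import numpy as np

        rng = np.random.default_rng(7)
        n_in, eta, tau_theta, dt = 10, 1e-3, 50.0, 1.0

        w = rng.random(n_in) * 0.1   # synaptic weights
        theta = 1.0                  # sliding modification threshold
        for _ in range(10000):
            x = rng.random(n_in)              # presynaptic activity pattern
            y = float(w @ x)                  # postsynaptic response
            w += dt * eta * x * y * (y - theta)
            w = np.clip(w, 0.0, None)         # keep weights non-negative
            theta += dt * (y**2 - theta) / tau_theta
        print("final weights:", np.round(w, 3), " theta:", round(theta, 3))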

  4. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    Science.gov (United States)

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key ingredient in conveying an individual's innate emotion in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established under cloud generators. With the forward cloud generator, facial expression images can be re-generated as many times as we like for visually representing the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, conclusions and remarks are given.
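
    The forward normal cloud generator at the heart of the method is short enough to sketch: from the digital characteristics (Ex, En, He) it draws cloud drops x_i with certainty degrees mu_i. The numeric characteristics below are illustrative, not values from the paper.

        import numpy as np

        def forward_cloud(ex, en, he, n_drops=1000, rng=None):
            """Forward normal cloud generator: drops (x_i, mu_i) from (Ex, En, He)."""
            rng = rng or np.random.default_rng()
            en_prime = rng.normal(en, he, n_drops)   # second-order randomness
            x = rng.normal(ex, np.abs(en_prime))     # cloud drops
            mu = np.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))  # certainty degrees
            return x, mu

        # Example: a feature with Ex = 0.6, En = 0.1, He = 0.01.
        x, mu = forward_cloud(0.6, 0.1, 0.01)
        print(x[:5], mu[:5])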

  5. Multi-scale modeling for sustainable chemical production.

    Science.gov (United States)

    Zhuang, Kai; Bakshi, Bhavik R; Herrgård, Markus J

    2013-09-01

    With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process, chemical industry, economy, and ecosystem. In addition, we propose a multi-scale approach for integrating the existing models into a cohesive framework. The major benefit of this proposed framework is that the design and decision-making at each scale can be informed, guided, and constrained by simulations and predictions at every other scale. In addition, the development of this multi-scale framework would promote cohesive collaborations across multiple traditionally disconnected modeling disciplines to achieve sustainable chemical production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. The use of scale models in impact testing

    International Nuclear Information System (INIS)

    Donelan, P.J.; Dowling, A.R.

    1985-01-01

    Theoretical analysis, component testing and model flask testing are employed to investigate the validity of scale models for demonstrating the behaviour of Magnox flasks under impact conditions. Model testing is shown to be a powerful and convenient tool provided adequate care is taken with detail design and manufacture of models and with experimental control. (author)

  7. Scale model helps Duke untie construction snags

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

    A nuclear power plant model, only 60 percent complete, has helped Duke Power identify over 150 major design interferences, which, when resolved, will help cut capital expense and eliminate scheduling problems that normally crop up as revisions are made during actual plant construction. The model has been used by construction, steam production, and design personnel to recommend changes that should improve material handling, operations, and maintenance procedures as well as simplifying piping and cabling. The company has already saved many man-hours in material take-off, material management, and detailed drafting and expects to save even more with greater use of, and improvement in, its modeling program. Duke's modeling program was authorized and became operational in November 1974, with the first model to be the Catawba Nuclear Station. This plant is a two-unit station using Westinghouse nuclear steam supply systems in tandem with General Electric turbine-generators, horizontal feedwater heaters, and Foster Wheeler triple pressure condensers. Each unit is rated 1142 MWe

  8. Planck-scale corrections to axion models

    International Nuclear Information System (INIS)

    Barr, S.M.; Seckel, D.

    1992-01-01

    It has been argued that quantum gravitational effects will violate all nonlocal symmetries. Peccei-Quinn symmetries must therefore be an "accidental" or automatic consequence of local gauge symmetry. Moreover, higher-dimensional operators suppressed by powers of MPl are expected to explicitly violate the Peccei-Quinn symmetry. Unless these operators are of dimension d≥10, axion models do not solve the strong CP problem in a natural fashion. A small gravitationally induced contribution to the axion mass has little if any effect on the density of relic axions. If d=10, 11, or 12 these operators can solve the axion domain-wall problem, and we describe a simple class of Kim-Shifman-Vainshtein-Zakharov axion models where this occurs. We also study the astrophysics and cosmology of "heavy axions" in models where 5≤d≤10.

  9. Scaling limit for the Dereziński-Gérard model

    OpenAIRE

    OHKUBO, Atsushi

    2010-01-01

    We consider a scaling limit for the Dereziński-Gérard model. We derive an effective potential by taking a scaling limit for the total Hamiltonian of the Dereziński-Gérard model. Our method to derive an effective potential is independent of whether or not the quantum field has a nonnegative mass. As an application of our theory developed in the present paper, we derive an effective potential of the Nelson model.

  10. BLEVE overpressure: multi-scale comparison of blast wave modeling

    International Nuclear Information System (INIS)

    Laboureur, D.; Buchlin, J.M.; Rambaud, P.; Heymes, F.; Lapebie, E.

    2014-01-01

    BLEVE overpressure modeling has already been widely studied, but only a few validations including the scale effect have been made. After a short overview of the main models available in the literature, a comparison is made with measurements at different scales, taken from previous studies or coming from experiments performed in the frame of this research project. A discussion of the best model to use in different cases is finally proposed. (authors)

  11. A feature-based approach to modeling protein–protein interaction hot spots

    Science.gov (United States)

    Cho, Kyu-il; Kim, Dongsup; Lee, Doheon

    2009-01-01

    Identifying features that effectively represent the energetic contribution of an individual interface residue to the interactions between proteins remains problematic. Here, we present several new features and show that they are more effective than conventional features. By combining the proposed features with conventional features, we develop a predictive model for interaction hot spots. Initially, 54 multifaceted features, composed of different levels of information including structure, sequence and molecular interaction information, are quantified. Then, to identify the best subset of features for predicting hot spots, feature selection is performed using a decision tree. Based on the selected features, a predictive model for hot spots is created using a support vector machine (SVM) and tested on an independent test set. Our model shows better overall predictive accuracy than previous methods such as the alanine scanning methods Robetta and FOLDEF, and the knowledge-based method KFC. Subsequent analysis yields several findings about hot spots. As expected, hot spots have a larger relative surface area burial and are more hydrophobic than other residues. Unexpectedly, however, residue conservation displays a rather complicated tendency depending on the types of protein complexes, indicating that this feature is not good for identifying hot spots. Of the selected features, the weighted atomic packing density, relative surface area burial and weighted hydrophobicity are the top 3, with the weighted atomic packing density proving to be the most effective feature for predicting hot spots. Notably, we find that hot spots are closely related to π-related interactions, especially π···π interactions. PMID:19273533

  12. Analysis of chromosome aberration data by hybrid-scale models

    International Nuclear Information System (INIS)

    Indrawati, Iwiq; Kumazawa, Shigeru

    2000-02-01

    This paper presents a new methodology for analyzing data of chromosome aberrations, which is useful for understanding the characteristics of dose-response relationships and for constructing calibration curves for biological dosimetry. The hybrid scale of linear and logarithmic scales gives rise to a particular plotting paper, where the normal section paper, two types of semi-log papers and the log-log paper are continuously connected. The hybrid-hybrid plotting paper may contain nine kinds of linear relationships, and these are conveniently called hybrid scale models. One can systematically select the best-fit model among the nine models by examining the conditions for a straight line of data points. A biological interpretation is possible with some hybrid-scale models. In this report, the hybrid scale models were applied to separately reported data on chromosome aberrations in human lymphocytes as well as on chromosome breaks in Tradescantia. The results proved that the proposed models fit the data better than the linear-quadratic model, despite the demerit of the increased number of model parameters. We showed that the hybrid-hybrid model (both variables of dose and response using the hybrid scale) provides the best-fit straight lines to be used as reliable and readable calibration curves of chromosome aberrations. (author)

  13. Flavor gauge models below the Fermi scale

    Science.gov (United States)

    Babu, K. S.; Friedland, A.; Machado, P. A. N.; Mocioiu, I.

    2017-12-01

    The mass and weak interaction eigenstates for the quarks of the third generation are very well aligned, an empirical fact for which the Standard Model offers no explanation. We explore the possibility that this alignment is due to an additional gauge symmetry in the third generation. Specifically, we construct and analyze an explicit, renormalizable model with a gauge boson, X, corresponding to the B − L symmetry of the third family. Having a relatively light (in the MeV to multi-GeV range), flavor-nonuniversal gauge boson results in a variety of constraints from different sources. By systematically analyzing 20 different constraints, we identify the most sensitive probes: kaon, B⁺, D⁺ and Υ decays, D⁰–D̄⁰ mixing, atomic parity violation, and neutrino scattering and oscillations. For the new gauge coupling g_X in the range 10⁻⁴–10⁻² the model is shown to be consistent with the data. Possible ways of testing the model in b physics, top and Z decays, direct collider production and neutrino oscillation experiments, where one can observe nonstandard matter effects, are outlined. The choice of leptons to carry the new force is ambiguous, resulting in additional phenomenological implications, such as non-universality in semileptonic bottom decays. The proposed framework provides interesting connections between neutrino oscillations, flavor and collider physics.

  14. Truncation of power law behavior in 'scale-free' network models due to information filtering

    International Nuclear Information System (INIS)

    Mossa, Stefano; Barthelemy, Marc; Eugene Stanley, H.; Nunes Amaral, Luis A.

    2002-01-01

    We formulate a general model for the growth of scale-free networks under conditions of filtered information, that is, when the nodes can process information about only a subset of the existing nodes in the network. We find that the distribution of the number of incoming links to a node follows a universal scaling form, i.e., that it decays as a power law with an exponential truncation controlled not only by the system size but also by a feature not previously considered, the subset of the network 'accessible' to the node. We test our model with empirical data for the World Wide Web and find agreement.
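
    The filtering mechanism can be made concrete with a toy growth simulation: each new node attaches preferentially, but only within a randomly sampled "visible" subset of existing nodes. The subset size m_info and the other parameters below are illustrative assumptions, not the paper's exact rule.

```python
# Toy growth with information filtering: preferential attachment restricted
# to a random subset of size m_info of the already-existing nodes.
import random
from collections import Counter

def grow_network(n_nodes=10000, n_links=2, m_info=50, seed=1):
    random.seed(seed)
    in_degree = Counter({0: 1})            # node 1 already points at node 0
    edges = [(1, 0)]
    for new in range(2, n_nodes):
        visible = random.sample(range(new), min(m_info, new))  # filtered view
        # Preferential attachment, but only within the visible subset.
        weights = [in_degree[v] + 1 for v in visible]
        targets = random.choices(visible, weights=weights, k=n_links)
        for t in set(targets):
            edges.append((new, t))
            in_degree[t] += 1
    return in_degree

degrees = grow_network()
# The in-degree distribution should show a power law with an exponential
# cutoff controlled by m_info as well as by the system size.
hist = Counter(degrees.values())
print(sorted(hist.items())[:10])
```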

  15. Detection of baryon acoustic oscillation features in the large-scale three-point correlation function of SDSS BOSS DR12 CMASS galaxies

    Science.gov (United States)

    Slepian, Zachary; Eisenstein, Daniel J.; Brownstein, Joel R.; Chuang, Chia-Hsun; Gil-Marín, Héctor; Ho, Shirley; Kitaura, Francisco-Shu; Percival, Will J.; Ross, Ashley J.; Rossi, Graziano; Seo, Hee-Jong; Slosar, Anže; Vargas-Magaña, Mariana

    2017-08-01

    We present the large-scale three-point correlation function (3PCF) of the Sloan Digital Sky Survey DR12 Constant stellar Mass (CMASS) sample of 777 202 Luminous Red Galaxies, the largest-ever sample used for a 3PCF or bispectrum measurement. We make the first high-significance (4.5σ) detection of baryon acoustic oscillations (BAO) in the 3PCF. Using these acoustic features in the 3PCF as a standard ruler, we measure the distance to z = 0.57 to 1.7 per cent precision (statistical plus systematic). We find DV = 2024 ± 29 Mpc (stat) ± 20 Mpc (sys) for our fiducial cosmology (consistent with Planck 2015) and bias model. This measurement extends the use of the BAO technique from the two-point correlation function (2PCF) and power spectrum to the 3PCF and opens an avenue for deriving additional cosmological distance information from future large-scale structure redshift surveys such as DESI. Our measured distance scale from the 3PCF is fairly independent from that derived from the pre-reconstruction 2PCF and is equivalent to increasing the length of BOSS by roughly 10 per cent; reconstruction appears to lower the independence of the distance measurements. Fitting a model including tidal tensor bias yields a moderate-significance (2.6σ) detection of this bias with a value in agreement with the prediction from local Lagrangian biasing.

  16. THE RELATIONSHIP BETWEEN SOCIAL, POLICY AND PHYSICAL VENUE FEATURES AND SOCIAL COHESION ON CONDOM USE FOR PREGNANCY PREVENTION AMONG SEX WORKERS: A SAFER INDOOR WORK ENVIRONMENT SCALE

    Science.gov (United States)

    Duff, Putu; Shoveller, Jean; Dobrer, Sabina; Ogilvie, Gina; Montaner, Julio; Chettiar, Jill; Shannon, Kate

    2015-01-01

    Background: This study aims to report on a newly developed 'Safer Indoor Work Environment Scale' that characterizes the social, policy and physical features of indoor venues and social cohesion, and, using this scale, to longitudinally evaluate the association between these features and sex workers' (SWs') condom use for pregnancy prevention. Methods: Drawing on a prospective open cohort of female SWs working in indoor venues, the newly developed 'Safer Indoor Work Environment Scale' was used to build six multivariable models with generalized estimating equations (GEE) to determine the independent effects of social, policy and venue-based features and social cohesion on condom use. Results: Of 588 indoor SWs, 63.6% used condoms for pregnancy prevention in the last month. In multivariable GEE analysis, the following venue-based features were significantly correlated with barrier contraceptive use for pregnancy prevention: managerial practices and venue safety policies (Adjusted Odds Ratio (AOR) = 1.09; 95% Confidence Interval (95%CI) 1.01–1.17), access to sexual and reproductive health services/supplies (AOR = 1.10; 95%CI 1.00–1.20), access to drug harm reduction (AOR = 1.13; 95%CI 1.01–1.28), and social cohesion among workers (AOR = 1.05; 95%CI 1.03–1.07). Access to security features was marginally associated with condom use (AOR = 1.13; 95%CI 0.99–1.29). Conclusion: The findings of the current study highlight how the work environment and social cohesion among SWs are related to improved condom use. Given global calls for the decriminalization of sex work, and potential legislative reforms in Canada, this study points to the critical need for new institutional arrangements (e.g., legal and regulatory frameworks; labour standards) to support safer sex workplaces. PMID:25678713

  17. A Hierarchical Feature Extraction Model for Multi-Label Mechanical Patent Classification

    Directory of Open Access Journals (Sweden)

    Jie Hu

    2018-01-01

    Various studies have focused on feature extraction methods for automatic patent classification in recent years. However, most of these approaches rely on knowledge from experts in related domains. Here we propose a hierarchical feature extraction model (HFEM) for multi-label mechanical patent classification, which is able to capture both local features of phrases and global and temporal semantics. First, an n-gram feature extractor based on convolutional neural networks (CNNs) is designed to extract salient local lexical-level features. Next, a long-dependency feature extraction model based on the bidirectional long short-term memory (BiLSTM) neural network is proposed to capture sequential correlations from higher-level sequence representations. The HFEM algorithm and its hierarchical feature extraction architecture are then detailed. We establish training, validation and test datasets, containing 72,532, 18,133, and 2,679 mechanical patent documents, respectively, and evaluate the performance of the HFEM. Finally, we compare the results of the proposed HFEM with three single neural network models, namely CNN, long short-term memory (LSTM), and BiLSTM. The experimental results indicate that our proposed HFEM outperforms the compared models in both precision and recall.
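
    A condensed sketch of the described architecture, assuming a PyTorch implementation: a convolutional n-gram extractor feeding a bidirectional LSTM, with sigmoid outputs for multi-label classification. All layer sizes, the vocabulary size and the label count are invented for illustration.

```python
# Illustrative CNN + BiLSTM hierarchical feature extractor (HFEM-style).
import torch
import torch.nn as nn

class HFEM(nn.Module):
    def __init__(self, vocab=30000, emb=128, n_filters=64, hidden=128, n_labels=96):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        # CNN captures salient local (phrase-level) n-gram features.
        self.conv = nn.Conv1d(emb, n_filters, kernel_size=3, padding=1)
        # BiLSTM captures long-range sequential dependencies over them.
        self.bilstm = nn.LSTM(n_filters, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_labels)

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)       # (batch, emb, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2) # (batch, seq_len, n_filters)
        seq, _ = self.bilstm(x)
        # Multi-label output: one sigmoid probability per patent class.
        return torch.sigmoid(self.out(seq[:, -1]))

model = HFEM()
probs = model(torch.randint(0, 30000, (4, 200)))     # 4 documents, 200 tokens each
print(probs.shape)                                   # torch.Size([4, 96])
```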

  18. [Unfolding item response model using best-worst scaling].

    Science.gov (United States)

    Ikehara, Kazuya

    2015-02-01

    In attitude measurement and sensory tests, the unfolding model is typically used. In this model, response probability is formulated in terms of the distance between the person and the stimulus. In this study, we proposed an unfolding item response model using best-worst scaling (BWU model), in which a person chooses the best and worst stimuli among repeatedly presented subsets of stimuli. We also formulated an unfolding model using best scaling (BU model), and compared the accuracy of estimates between the BU and BWU models. A simulation experiment showed that the BWU model performed much better than the BU model in terms of bias and root mean square error of the estimates. With reference to Usami (2011), the proposed models were applied to actual data to measure attitudes toward tardiness. Results indicated high similarity between stimulus estimates generated with the proposed models and those of Usami (2011).
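
    The unfolding idea behind the BWU model can be illustrated with a small simulation in which utility peaks where a stimulus lies closest to the person's latent position, and the best and worst choices are the utility extremes. The squared-distance parameterization and Gumbel noise below are illustrative assumptions, not the paper's exact specification.

```python
# Toy best-worst choice under an unfolding (ideal-point) model.
import numpy as np

rng = np.random.default_rng(0)
theta = 0.3                                       # person location on the latent scale
stimuli = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # stimulus locations

def best_worst_choice(theta, subset):
    # Utility peaks where the stimulus is closest to the person (unfolding).
    utility = -(theta - subset) ** 2 + rng.gumbel(size=subset.size)
    return subset[np.argmax(utility)], subset[np.argmin(utility)]

best, worst = best_worst_choice(theta, stimuli)
print(f"best={best}, worst={worst}")
```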

  19. Sizing and scaling requirements of a large-scale physical model for code validation

    International Nuclear Information System (INIS)

    Khaleel, R.; Legore, T.

    1990-01-01

    Model validation is an important consideration in the application of a code for performance assessment, and therefore in assessing the long-term behavior of the engineered and natural barriers of a geologic repository. Scaling considerations relevant to porous media flow are reviewed. An analysis approach is presented for determining the sizing requirements of a large-scale physical hydrology model. The physical model will be used to validate performance assessment codes that evaluate the long-term behavior of the repository isolation system. Numerical simulation results for sizing requirements are presented for a porous medium model in which the media properties are spatially uncorrelated

  20. Pelamis wave energy converter. Verification of full-scale control using a 7th scale model

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    The Pelamis Wave Energy Converter is a new concept for converting wave energy for several applications including generation of electric power. The machine is flexibly moored and swings to meet the water waves head-on. The system is semi-submerged and consists of cylindrical sections linked by hinges. The mechanical operation is described in outline. A one-seventh scale model was built and tested and the outcome was sufficiently successful to warrant the building of a full-scale prototype. In addition, a one-twentieth scale model was built and has contributed much to the research programme. The work is supported financially by the DTI.

  1. A model of biological neuron with terminal chaos and quantum-like features

    International Nuclear Information System (INIS)

    Conte, Elio; Pierri, GianPaolo; Federici, Antonio; Mendolicchio, Leonardo; Zbilut, Joseph P.

    2006-01-01

    A model of a biological neuron is proposed, combining terminal dynamics with quantum-like mechanical features and assuming the spin to be an important entity in neurodynamics, in particular in synaptic transmission.

  2. Atomic-scale modeling of cellulose nanocrystals

    Science.gov (United States)

    Wu, Xiawa

    Cellulose nanocrystals (CNCs), the most abundant nanomaterials in nature, are recognized as one of the most promising candidates to meet the growing demand for green, biodegradable and sustainable nanomaterials for future applications. CNCs draw significant interest due to their high axial elasticity and low density-to-elasticity ratio, both of which have been extensively researched over the years. In spite of the great potential of CNCs as functional nanoparticles for nanocomposite materials, a fundamental understanding of CNC properties and their role in composite property enhancement is not available. In this work, CNCs are studied using the molecular dynamics simulation method to predict their material behavior at the nanoscale. (a) Mechanical properties include tensile deformation in the elastic and plastic regions, using molecular mechanics, molecular dynamics and nanoindentation methods. This allows comparisons between the methods and closer connectivity to experimental measurement techniques. The elastic moduli in the axial and transverse directions are obtained, and the results are found to be in good agreement with previous research. The ultimate properties in plastic deformation are reported for the first time, and failure mechanisms are analyzed in detail. (b) The thermal expansion of CNC crystals and films is studied. It is proposed that CNC film thermal expansion is due primarily to single-crystal expansion and CNC-CNC interfacial motion. The relative contributions of inter- and intra-crystal responses to heating are explored. (c) Friction at cellulose-CNC and diamond-CNC interfaces is studied. The effects of sliding velocity, normal load, and relative angle between sliding surfaces are predicted. The cellulose-CNC model is analyzed in terms of the hydrogen bonding effect, and the diamond-CNC model complements some of the discussion of the previous model. In summary, both the material properties and molecular models of CNCs are studied in this research, contributing to

  3. Sensitivities in global scale modeling of isoprene

    Directory of Open Access Journals (Sweden)

    R. von Kuhlmann

    2004-01-01

    A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios, which can be grouped into four thematic categories, were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased relative to the background methane chemistry by 26±9 Tg(O3), from 273 to an average across the sensitivity runs of 299 Tg(O3). Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty and the much larger local deviations found in the test runs suggest that the treatment of isoprene in global models can only be seen as a first-order estimate at present, and point towards specific processes in need of focused future work.

  4. Scaling of musculoskeletal models from static and dynamic trials

    DEFF Research Database (Denmark)

    Lund, Morten Enemark; Andersen, Michael Skipper; de Zee, Mark

    2015-01-01

    Subject-specific scaling of cadaver-based musculoskeletal models is important for accurate musculoskeletal analysis within multiple areas such as ergonomics, orthopaedics and occupational health. We present two procedures to scale 'generic' musculoskeletal models to match segment lengths and joint … We applied three scaling methods to an inverse dynamics-based musculoskeletal model and compared predicted knee joint contact forces to those measured with an instrumented prosthesis during gait. Additionally, a Monte Carlo study was used to investigate the sensitivity of the knee joint contact force to random…

  5. Orbital and millennial-scale features of atmospheric CH4 over the past 800,000 years.

    Science.gov (United States)

    Loulergue, Laetitia; Schilt, Adrian; Spahni, Renato; Masson-Delmotte, Valérie; Blunier, Thomas; Lemieux, Bénédicte; Barnola, Jean-Marc; Raynaud, Dominique; Stocker, Thomas F; Chappellaz, Jérôme

    2008-05-15

    Atmospheric methane is an important greenhouse gas and a sensitive indicator of climate change and millennial-scale temperature variability. Its concentrations over the past 650,000 years have varied between approximately 350 and approximately 800 parts per 10⁹ by volume (p.p.b.v.) during glacial and interglacial periods, respectively. In comparison, present-day methane levels of approximately 1,770 p.p.b.v. have been reported. Insights into the external forcing factors and internal feedbacks controlling atmospheric methane are essential for predicting the methane budget in a warmer world. Here we present a detailed atmospheric methane record from the EPICA Dome C ice core that extends the history of this greenhouse gas to 800,000 yr before present. The average time resolution of the new data is approximately 380 yr and permits the identification of orbital and millennial-scale features. Spectral analyses indicate that the long-term variability in atmospheric methane levels is dominated by approximately 100,000 yr glacial-interglacial cycles up to approximately 400,000 yr ago with an increasing contribution of the precessional component during the four more recent climatic cycles. We suggest that changes in the strength of tropical methane sources and sinks (wetlands, atmospheric oxidation), possibly influenced by changes in monsoon systems and the position of the intertropical convergence zone, controlled the atmospheric methane budget, with an additional source input during major terminations as the retreat of the northern ice sheet allowed higher methane emissions from extending periglacial wetlands. Millennial-scale changes in methane levels identified in our record as being associated with Antarctic isotope maxima events are indicative of ubiquitous millennial-scale temperature variability during the past eight glacial cycles.

  6. Orbital and millennial-scale features of atmospheric CH₄ over the past 800,000 years

    Energy Technology Data Exchange (ETDEWEB)

    Loulergue, L; Lemieux, B; Barnola, J M; Raynaud, D; Chappellaz, J [Univ. Grenoble 1, CNRS, lab. glaciol. geophys. environm., F-38402 Saint Martin d'Hères (France)]; Schilt, A; Spahni, R; Blunier, T; Stocker, T F [Climate and Environm. Physics, Physics Inst., Univ. Bern, CH-3012 Bern (Switzerland)]; Schilt, A; Spahni, R; Blunier, T; Stocker, T F [Oeschger Centre for Climate Change Research, Univ. Bern, CH-3012 Bern (Switzerland)]; Masson-Delmotte, V [Inst. Pierre Simon Laplace, LSCE, CEA-CNRS-Universite Versailles Saint Quentin, CEA Saclay, F-91191 Gif sur Yvette (France)]

    2008-07-01

    Atmospheric methane is an important greenhouse gas and a sensitive indicator of climate change and millennial-scale temperature variability. Its concentrations over the past 650,000 years have varied between ∼350 and ∼800 parts per 10⁹ by volume (p.p.b.v.) during glacial and interglacial periods, respectively. In comparison, present-day methane levels of ∼1,770 p.p.b.v. have been reported. Insights into the external forcing factors and internal feedbacks controlling atmospheric methane are essential for predicting the methane budget in a warmer world. Here we present a detailed atmospheric methane record from the EPICA Dome C ice core that extends the history of this greenhouse gas to 800,000 yr before present. The average time resolution of the new data is ∼380 yr and permits the identification of orbital and millennial-scale features. Spectral analyses indicate that the long-term variability in atmospheric methane levels is dominated by ∼100,000 yr glacial-interglacial cycles up to ∼400,000 yr ago with an increasing contribution of the precessional component during the four more recent climatic cycles. We suggest that changes in the strength of tropical methane sources and sinks (wetlands, atmospheric oxidation), possibly influenced by changes in monsoon systems and the position of the intertropical convergence zone, controlled the atmospheric methane budget, with an additional source input during major terminations as the retreat of the northern ice sheet allowed higher methane emissions from extending periglacial wetlands. Millennial-scale changes in methane levels identified in our record as being associated with Antarctic isotope maxima events are indicative of ubiquitous millennial-scale temperature variability during the past eight glacial cycles. (authors)

  7. Orbital and millennial-scale features of atmospheric CH4 over the past 800,000 years

    International Nuclear Information System (INIS)

    Loulergue, L.; Lemieux, B.; Barnola, J.M.; Raynaud, D.; Chappellaz, J.; Schilt, A.; Spahni, R.; Blunier, T.; Stocker, T.F.; Schilt, A.; Spahni, R.; Blunier, T.; Stocker, T.F.; Masson-Delmotte, V.

    2008-01-01

    Atmospheric methane is an important greenhouse gas and a sensitive indicator of climate change and millennial-scale temperature variability. Its concentrations over the past 650,000 years have varied between ∼350 and ∼800 parts per 10⁹ by volume (p.p.b.v.) during glacial and interglacial periods, respectively. In comparison, present-day methane levels of ∼1,770 p.p.b.v. have been reported. Insights into the external forcing factors and internal feedbacks controlling atmospheric methane are essential for predicting the methane budget in a warmer world. Here we present a detailed atmospheric methane record from the EPICA Dome C ice core that extends the history of this greenhouse gas to 800,000 yr before present. The average time resolution of the new data is ∼380 yr and permits the identification of orbital and millennial-scale features. Spectral analyses indicate that the long-term variability in atmospheric methane levels is dominated by ∼100,000 yr glacial-interglacial cycles up to ∼400,000 yr ago with an increasing contribution of the precessional component during the four more recent climatic cycles. We suggest that changes in the strength of tropical methane sources and sinks (wetlands, atmospheric oxidation), possibly influenced by changes in monsoon systems and the position of the intertropical convergence zone, controlled the atmospheric methane budget, with an additional source input during major terminations as the retreat of the northern ice sheet allowed higher methane emissions from extending periglacial wetlands. Millennial-scale changes in methane levels identified in our record as being associated with Antarctic isotope maxima events are indicative of ubiquitous millennial-scale temperature variability during the past eight glacial cycles. (authors)

  8. The Assessment of Patient Clinical Outcome: Advantages, Models, Features of an Ideal Model

    Directory of Open Access Journals (Sweden)

    Mou’ath Hourani

    2016-06-01

    Background: The assessment of patient clinical outcome focuses on measuring various aspects of the health status of a patient who is under healthcare intervention. Patient clinical outcome assessment is a very significant process in the clinical field, as it allows health care professionals to better understand the effectiveness of their health care programs and thus to enhance health care quality in general. It is thus vital that a high-quality, informative review of current issues regarding the assessment of patient clinical outcome be conducted. Aims & Objectives: (1) summarize the advantages of the assessment of patient clinical outcome; (2) review some of the existing patient clinical outcome assessment models, namely simulation, Markov, Bayesian belief networks, Bayesian statistics and conventional statistics, and Kaplan-Meier analysis models; and (3) demonstrate the desired features that should be fulfilled by a well-established, ideal patient clinical outcome assessment model. Material & Methods: An integrative review of the literature was performed using Google Scholar to explore the field of patient clinical outcome assessment. Conclusion: This paper will directly support researchers, clinicians and health care professionals in their understanding of developments in the domain of the assessment of patient clinical outcome, thus enabling them to propose ideal assessment models.

  10. Anomalous scaling in an age-dependent branching model

    OpenAIRE

    Keller-Schmidt, Stephanie; Tugrul, Murat; Eguiluz, Victor M.; Hernandez-Garcia, Emilio; Klemm, Konstantin

    2010-01-01

    We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age τ as τ^(−α). Depending on the exponent α, the scaling of tree depth with tree size n displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition (α = 1) tree depth grows as (log n)². This anomalous scaling is in good agreement with the trend observed in evolution of biological species, thus p...
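
    A toy simulation of this family of models, assuming that at each step a leaf branch is chosen to split with probability proportional to τ^(−α) and that all branches age by one unit per step; the paper's exact update rule may differ.

```python
# Age-dependent branching: older branches split less often for alpha > 0.
import random

def grow_tree(n_leaves=2000, alpha=1.0, seed=0):
    random.seed(seed)
    # Each leaf is (depth, age); start from a single root branch.
    leaves = [(0, 1)]
    while len(leaves) < n_leaves:
        weights = [age ** (-alpha) for _, age in leaves]
        i = random.choices(range(len(leaves)), weights=weights)[0]
        depth, _ = leaves.pop(i)
        leaves += [(depth + 1, 1), (depth + 1, 1)]     # two fresh branches
        leaves = [(d, a + 1) for d, a in leaves]       # everyone ages by one step
    return sum(d for d, _ in leaves) / len(leaves)     # mean depth

for alpha in (0.0, 1.0, 2.0):
    print(alpha, grow_tree(alpha=alpha))
```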

  11. Analysis on imaging features of mammography in computer radiography and investigation on gray scale transform and energy subtraction

    International Nuclear Information System (INIS)

    Feng Shuli

    2003-01-01

    In this dissertation, a novel transform method for gray-scale mammographic imaging in computer radiography (CR), based on human visual response features, is presented. The parameters governing CR imaging quality for mammography were investigated experimentally. In addition, methods for image energy subtraction and a novel method of image registration for CR mammography are presented. Because images are viewed and investigated by humans, displaying differences in gray-scale images is more convenient if the differences are presented in a manner commensurate with human visual response principles. Through transformation of the image gray scale with this method, the contrast of the image is enhanced and the ability of humans to extract useful information from the image is increased. After the transformation, tumors and microcalcifications are displayed in a form that is simpler for humans to view. The method is investigated theoretically and experimentally. Through measurement of geometric image blur, MTF, DQE, and ROC performance for CR imaging, and comparison with the imaging quality of screen-film systems, the results indicate that CR imaging is better than screen-film systems in terms of DQE and ROC. In geometric blur and MTF, the differences in image quality between CR and the screen-film system are very small. The results suggest that the CR system can replace the screen-film system for mammography imaging. In addition, the results show that the optimal imaging energy for CR mammography is about 24 kV. This indicates that the imaging energy of the CR system is lower than that of the screen-film system and, therefore, the x-ray dose to the patient for mammography with the CR system is lower than with the screen-film system. Based on the difference in penetrability of x rays of different wavelengths, and the fact that part of the x-ray beam will pass
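
    The flavor of such a perception-oriented transform can be sketched with a simple logarithmic (Weber-Fechner-like) mapping, under the assumption that equal output steps should approximate equal perceived-brightness steps; the dissertation's actual transform is not reproduced here.

```python
# Illustrative perceptual gray-scale transform for raw detector values.
import numpy as np

def perceptual_transform(img, out_max=255.0):
    img = img.astype(np.float64)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)   # normalize to [0, 1]
    # Logarithmic compression: small differences in dark regions are expanded.
    return out_max * np.log1p(img * 255.0) / np.log(256.0)

raw = np.random.default_rng(0).integers(0, 4096, size=(64, 64))
print(perceptual_transform(raw).round(1)[:2, :4])
```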

  12. Multilevel binomial logistic prediction model for malignant pulmonary nodules based on texture features of CT image

    International Nuclear Information System (INIS)

    Wang Huan; Guo Xiuhua; Jia Zhongwei; Li Hongkai; Liang Zhigang; Li Kuncheng; He Qian

    2010-01-01

    Purpose: To introduce a multilevel binomial logistic prediction model-based computer-aided diagnostic (CAD) method for small solitary pulmonary nodules (SPNs), combining patient characteristics with textural features of the CT image. Materials and methods: Fourteen gray level co-occurrence matrix textural features were obtained from 2,171 benign and malignant small solitary pulmonary nodules belonging to 185 patients. A multilevel binomial logistic model was applied to gain initial insights. Results: Five texture features (Inertia, Entropy, Correlation, Difference-mean and Sum-Entropy) and patient age show an aggregating character at the patient level and are statistically different (P < 0.05) between benign and malignant small solitary pulmonary nodules. Conclusion: Some gray level co-occurrence matrix textural features are efficient descriptive features of the CT image of small solitary pulmonary nodules, and can benefit the diagnosis of early-stage lung cancer when combined with patient-level characteristics.
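
    For readers who want to reproduce features of this kind, the sketch below computes GLCM statistics with scikit-image; 'contrast' corresponds to the Inertia feature named above, and entropy is computed directly from the matrix since graycoprops does not provide it. The random patch stands in for a real CT nodule region of interest.

```python
# GLCM texture features with scikit-image (synthetic patch as a stand-in).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = np.random.default_rng(0).integers(0, 64, size=(32, 32), dtype=np.uint8)

glcm = graycomatrix(patch, distances=[1], angles=[0], levels=64,
                    symmetric=True, normed=True)
features = {
    "inertia (contrast)": graycoprops(glcm, "contrast")[0, 0],
    "correlation": graycoprops(glcm, "correlation")[0, 0],
    # Entropy computed directly from the normalized co-occurrence matrix.
    "entropy": -np.sum(glcm * np.log2(glcm + 1e-12)),
}
print(features)
```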

  13. Logarithmic corrections to scaling in the XY2-model

    International Nuclear Information System (INIS)

    Kenna, R.; Irving, A.C.

    1995-01-01

    We study the distribution of partition function zeroes for the XY-model in two dimensions. In particular we find the scaling behaviour of the end of the distribution of zeroes in the complex external magnetic field plane in the thermodynamic limit (the Yang-Lee edge) and the form for the density of these zeroes. Assuming that finite-size scaling holds, we show that there have to exist logarithmic corrections to the leading scaling behaviour of thermodynamic quantities in this model. These logarithmic corrections are also manifest in the finite-size scaling formulae and we identify them numerically. The method presented here can be used to check the compatibility of scaling behaviour of odd and even thermodynamic functions in other models too. ((orig.))

  14. Reference Priors for the General Location-Scale Model

    NARCIS (Netherlands)

    Fernández, C.; Steel, M.F.J.

    1997-01-01

    The reference prior algorithm (Berger and Bernardo 1992) is applied to multivariate location-scale models with any regular sampling density, where we establish the irrelevance of the usual assumption of Normal sampling if our interest is in either the location or the scale. This result immediately

  15. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  16. SITE-94. Discrete-feature modelling of the Aespoe site: 4. Source data and detailed analysis procedures

    Energy Technology Data Exchange (ETDEWEB)

    Geier, J E [Golder Associates AB, Uppsala (Sweden)

    1996-12-01

    Specific procedures and source data are described for the construction and application of discrete-feature hydrological models for the vicinity of Aespoe. Documentation is given for all major phases of the work, including: Statistical analyses to develop and validate discrete-fracture network models, Preliminary evaluation, construction, and calibration of the site-scale model based on the SITE-94 structural model of Aespoe, Simulation of multiple realizations of the integrated model, and variations, to predict groundwater flow, and Evaluation of near-field and far-field parameters for performance assessment calculations. Procedures are documented in terms of the computer batch files and executable scripts that were used to perform the main steps in these analyses, to provide for traceability of results that are used in the SITE-94 performance assessment calculations. 43 refs.

  17. SITE-94. Discrete-feature modelling of the Aespoe site: 4. Source data and detailed analysis procedures

    International Nuclear Information System (INIS)

    Geier, J.E.

    1996-12-01

    Specific procedures and source data are described for the construction and application of discrete-feature hydrological models for the vicinity of Aespoe. Documentation is given for all major phases of the work, including: Statistical analyses to develop and validate discrete-fracture network models, Preliminary evaluation, construction, and calibration of the site-scale model based on the SITE-94 structural model of Aespoe, Simulation of multiple realizations of the integrated model, and variations, to predict groundwater flow, and Evaluation of near-field and far-field parameters for performance assessment calculations. Procedures are documented in terms of the computer batch files and executable scripts that were used to perform the main steps in these analyses, to provide for traceability of results that are used in the SITE-94 performance assessment calculations. 43 refs

  18. Atomic scale simulations for improved CRUD and fuel performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic-scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling the deposition of corrosion products on fuel rods (CRUD). Presented here are results from publications in 2016, carried out using the CASL allocation at LANL.

  19. Genome-scale modeling for metabolic engineering.

    Science.gov (United States)

    Simeonidis, Evangelos; Price, Nathan D

    2015-03-01

    We focus on the application of constraint-based methodologies and, more specifically, flux balance analysis in the field of metabolic engineering, and enumerate recent developments and successes of the field. We also review computational frameworks that have been developed with the express purpose of automatically selecting optimal gene deletions for achieving improved production of a chemical of interest. The application of flux balance analysis methods in rational metabolic engineering requires a metabolic network reconstruction and a corresponding in silico metabolic model for the microorganism in question. For this reason, we additionally present a brief overview of automated reconstruction techniques. Finally, we emphasize the importance of integrating metabolic networks with regulatory information, an area which we expect will become increasingly important for metabolic engineering, and present recent developments in the field of metabolic and regulatory integration.
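
    As a minimal illustration of the flux balance analysis machinery discussed here, the following solves a three-reaction toy network as a linear program with SciPy; the stoichiometry and bounds are invented for illustration.

```python
# Toy flux balance analysis: maximize biomass flux subject to S·v = 0.
import numpy as np
from scipy.optimize import linprog

# Columns: v1 (uptake of A), v2 (A -> B), v3 (biomass from B)
S = np.array([[ 1, -1,  0],    # metabolite A balance
              [ 0,  1, -1]])   # metabolite B balance
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 units
c = np.array([0, 0, -1])       # linprog minimizes, so negate biomass flux

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)   # expected: [10, 10, 10]
```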

  20. Genome-scale biological models for industrial microbial systems.

    Science.gov (United States)

    Xu, Nan; Ye, Chao; Liu, Liming

    2018-04-01

    The primary aims and challenges associated with microbial fermentation include achieving faster cell growth, higher productivity, and more robust production processes. Genome-scale biological models, predicting the formation of an interaction among genetic materials, enzymes, and metabolites, constitute a systematic and comprehensive platform to analyze and optimize the microbial growth and production of biological products. Genome-scale biological models can help optimize microbial growth-associated traits by simulating biomass formation, predicting growth rates, and identifying the requirements for cell growth. With regard to microbial product biosynthesis, genome-scale biological models can be used to design product biosynthetic pathways, accelerate production efficiency, and reduce metabolic side effects, leading to improved production performance. The present review discusses the development of microbial genome-scale biological models since their emergence and emphasizes their pertinent application in improving industrial microbial fermentation of biological products.

  1. Particles and scaling for lattice fields and Ising models

    International Nuclear Information System (INIS)

    Glimm, J.; Jaffe, A.

    1976-01-01

    The conjectured inequality Γ₆ … φ⁴-fields and the scaling limit for d-dimensional Ising models. Assuming Γ₆ …, these φ⁴ fields are free fields unless the field strength renormalization Z⁻¹ diverges. (orig./BJ)

  2. Multi-scale modeling strategies in materials science—The ...

    Indian Academy of Sciences (India)

    Unknown

    Keywords: multi-scale models; quasicontinuum method; finite elements.

  3. Preparatory hydrogeological calculations for site scale models of Aberg, Beberg and Ceberg

    International Nuclear Information System (INIS)

    Gylling, B.; Lindgren, M.; Widen, H.

    1999-03-01

    The purpose of the study is to evaluate the basis for site scale models of the three sites Aberg, Beberg and Ceberg in terms of: the extent and position of the site scale model domains; the numerical implementation of the geologic structural model; and a systematic review of structural data and control of compatibility among data sets. Some of the hydrogeological features of each site are briefly described, and a summary of the results from the regional modelling exercises for Aberg, Beberg and Ceberg is given. The results from the regional models may be used as a basis for determining the location and size of the site scale models and may provide such models with boundary conditions; they may also indicate suitable locations for repositories. The resulting locations and sizes for the site scale models are presented in figures, which also show that the structural models interpreted by HYDRASTAR do not conflict with the repository tunnels. In addition, this has been verified with TRAZON, a modified version of HYDRASTAR for checking starting positions, which reveals conflicts between starting positions and fracture zones, if present.

  4. Nonpointlike-parton model with asymptotic scaling and with scaling violationat moderate Q2 values

    International Nuclear Information System (INIS)

    Chen, C.K.

    1981-01-01

    A nonpointlike-parton model is formulated on the basis of the assumption of energy-independent total cross sections of partons and the current-algebra sum rules. No specific strong-interaction Lagrangian density is introduced in this approach. This model predicts asymptotic scaling for the inelastic structure functions of nucleons on the one hand and scaling violation at moderate Q² values on the other. The predicted scaling-violation patterns at moderate Q² values are consistent with the observed scaling-violation patterns. A numerical fit of the F₂ functions is performed in order to demonstrate that the predicted scaling-violation patterns of this model at moderate Q² values fit the data, and to see how the predicted asymptotic scaling behavior sets in at various x values. Explicit analytic forms of the F₂ functions are obtained from this numerical fit and are compared in detail with the analytic forms of the F₂ functions obtained from the numerical fit of the quantum-chromodynamics (QCD) parton model. This comparison shows that the nonpointlike-parton model fits the data better than the QCD parton model, especially at large and small x values. Nachtmann moments are computed from the F₂ functions of this model and are shown to agree well with the data. It is also shown that a two-dimensional plot of the logarithm of one nonsinglet moment versus the logarithm of another is not a good way to distinguish this nonpointlike-parton model from the QCD parton model.

  5. Multi-scale modeling for sustainable chemical production

    DEFF Research Database (Denmark)

    Zhuang, Kai; Bakshi, Bhavik R.; Herrgard, Markus

    2013-01-01

    With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process...

  6. Calibration of the Site-Scale Saturated Zone Flow Model

    International Nuclear Information System (INIS)

    Zyvoloski, G. A.

    2001-01-01

    The purpose of the flow calibration analysis work is to provide Performance Assessment (PA) with the calibrated site-scale saturated zone (SZ) flow model that will be used to make radionuclide transport calculations. As such, it is one of the most important models developed in the Yucca Mountain project. This model will be a culmination of much of our knowledge of the SZ flow system. The objective of this study is to provide a defensible site-scale SZ flow and transport model that can be used for assessing total system performance. A defensible model would include geologic and hydrologic data that are used to form the hydrogeologic framework model; also, it would include hydrochemical information to infer transport pathways, in-situ permeability measurements, and water level and head measurements. In addition, the model should include information on major model sensitivities. Especially important are those that affect calibration, the direction of transport pathways, and travel times. Finally, if warranted, alternative calibrations representing different conceptual models should be included. To obtain a defensible model, all available data should be used (or at least considered) to obtain a calibrated model. The site-scale SZ model was calibrated using measured and model-generated water levels and hydraulic head data, specific discharge calculations, and flux comparisons along several of the boundaries. Model validity was established by comparing model-generated permeabilities with the permeability data from field and laboratory tests; by comparing fluid pathlines obtained from the SZ flow model with those inferred from hydrochemical data; and by comparing the upward gradient generated with the model with that observed in the field. This analysis is governed by the Office of Civilian Radioactive Waste Management (OCRWM) Analysis and Modeling Report (AMR) Development Plan 'Calibration of the Site-Scale Saturated Zone Flow Model' (CRWMS M and O 1999a).

  7. Ship Detection Based on Multiple Features in Random Forest Model for Hyperspectral Images

    Science.gov (United States)

    Li, N.; Ding, L.; Zhao, H.; Shi, J.; Wang, D.; Gong, X.

    2018-04-01

    A novel method for detecting ships, aiming to make full use of both the spatial and spectral information of hyperspectral images, is proposed. Firstly, a band with a high signal-to-noise ratio in the near-infrared or short-wave infrared range is used to segment land and sea with the Otsu threshold segmentation method. Secondly, multiple features, including spectral and texture features, are extracted from the hyperspectral images: principal components analysis (PCA) is used to extract spectral features, and the grey level co-occurrence matrix (GLCM) is used to extract texture features. Finally, a Random Forest (RF) model is introduced to detect ships based on the extracted features. To illustrate the effectiveness of the method, we carry out experiments on EO-1 data, comparing a single feature with different combinations of multiple features. Compared with the traditional single-feature method and a Support Vector Machine (SVM) model, the proposed method can reliably detect ships against complex backgrounds and can effectively improve detection accuracy.
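
    A schematic rendering of the proposed chain with scikit-learn, using synthetic stand-ins for the hyperspectral pixels and their GLCM statistics; feature extraction from a real EO-1 scene is not reproduced here.

```python
# PCA spectral features + texture features feeding a Random Forest.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
spectra = rng.normal(size=(1000, 220))            # 220 synthetic spectral bands
texture = rng.normal(size=(1000, 6))              # 6 synthetic GLCM statistics
labels = (spectra[:, 0] + texture[:, 0] > 0).astype(int)  # 1 = ship, 0 = sea

spectral_feats = PCA(n_components=10).fit_transform(spectra)
X = np.hstack([spectral_feats, texture])          # combine both feature groups

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", rf.score(X_te, y_te))
```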

  8. Model-Based Learning of Local Image Features for Unsupervised Texture Segmentation

    Science.gov (United States)

    Kiechle, Martin; Storath, Martin; Weinmann, Andreas; Kleinsteuber, Martin

    2018-04-01

    Features that capture well the textural patterns of a certain class of images are crucial for the performance of texture segmentation methods. The manual selection of features or designing new ones can be a tedious task. Therefore, it is desirable to automatically adapt the features to a certain image or class of images. Typically, this requires a large set of training images with similar textures and ground truth segmentation. In this work, we propose a framework to learn features for texture segmentation when no such training data is available. The cost function for our learning process is constructed to match a commonly used segmentation model, the piecewise constant Mumford-Shah model. This means that the features are learned such that they provide an approximately piecewise constant feature image with a small jump set. Based on this idea, we develop a two-stage algorithm which first learns suitable convolutional features and then performs a segmentation. We note that the features can be learned from a small set of images, from a single image, or even from image patches. The proposed method achieves a competitive rank in the Prague texture segmentation benchmark, and it is effective for segmenting histological images.

  9. 3-3-1 models at electroweak scale

    International Nuclear Information System (INIS)

    Dias, Alex G.; Montero, J.C.; Pleitez, V.

    2006-01-01

    We show that in 3-3-1 models there exists a natural relation among the SU(3)_L coupling constant g, the electroweak mixing angle θ_W, the mass of the W, and one of the vacuum expectation values, which implies that those models can be realized at low energy scales and, in particular, even at the electroweak scale. Thus, if those symmetries are realized in Nature, new physics may really be just around the corner.

  10. Language Recognition Using Latent Dynamic Conditional Random Field Model with Phonological Features

    Directory of Open Access Journals (Sweden)

    Sirinoot Boonsuk

    2014-01-01

    Spoken language recognition (SLR) has been of increasing interest in multilingual speech recognition for identifying the languages of speech utterances. Most existing SLR approaches apply statistical modeling techniques with acoustic and phonotactic features. Among the popular approaches, the acoustic approach has attracted greater interest than others because it does not require any prior language-specific knowledge. Previous research on the acoustic approach has shown less interest in applying linguistic knowledge, which was only used as supplementary features, while the current state-of-the-art systems assume independence among features. This paper proposes an SLR system based on the latent-dynamic conditional random field (LDCRF) model using phonological features (PFs). We use PFs to represent acoustic characteristics and linguistic knowledge. The LDCRF model is employed to capture the dynamics of the PF sequences for language classification. Baseline systems were evaluated to assess the features and methods, including Gaussian mixture model (GMM) based systems using PFs, GMM systems using cepstral features, and a CRF model using PFs. Evaluated on the NIST LRE 2007 corpus, the proposed method showed an improvement over the baseline systems. Additionally, it showed results comparable with an acoustic system based on i-vectors. This research demonstrates that utilizing PFs can enhance the performance.

  11. Mechanistically-Based Field-Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    International Nuclear Information System (INIS)

    Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour

    2007-01-01

    Effective environmental management of DOE sites requires reliable prediction of reactive transport phenomena. A central issue in prediction of subsurface reactive transport is the impact of multiscale physical, chemical, and biological heterogeneity. Heterogeneity manifests itself through incomplete mixing of reactants at scales below those at which concentrations are explicitly defined (i.e., the numerical grid scale). This results in a mismatch between simulated reaction processes (formulated in terms of average concentrations) and actual processes (controlled by local concentrations). At the field scale, this results in apparent scale-dependence of model parameters and inability to utilize laboratory parameters in field models. Accordingly, most field modeling efforts are restricted to empirical estimation of model parameters by fitting to field observations, which renders extrapolation of model predictions beyond fitted conditions unreliable. The objective of this project is to develop a theoretical and computational framework for (1) connecting models of coupled reactive transport from pore-scale processes to field-scale bioremediation through a hierarchy of models that maintain crucial information from the smaller scales at the larger scales; and (2) quantifying the uncertainty that is introduced by both the upscaling process and uncertainty in physical parameters. One of the challenges of addressing scale-dependent effects of coupled processes in heterogeneous porous media is the problem-specificity of solutions. Much effort has been aimed at developing generalized scaling laws or theories, but these require restrictive assumptions that render them ineffective in many real problems. We propose instead an approach that applies physical and numerical experiments at small scales (specifically the pore scale) to a selected model system in order to identify the scaling approach appropriate to that type of problem. Although the results of such studies will

  12. Mechanistically-Based Field-Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    Energy Technology Data Exchange (ETDEWEB)

    Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour

    2007-04-19

    Effective environmental management of DOE sites requires reliable prediction of reactive transport phenomena. A central issue in prediction of subsurface reactive transport is the impact of multiscale physical, chemical, and biological heterogeneity. Heterogeneity manifests itself through incomplete mixing of reactants at scales below those at which concentrations are explicitly defined (i.e., the numerical grid scale). This results in a mismatch between simulated reaction processes (formulated in terms of average concentrations) and actual processes (controlled by local concentrations). At the field scale, this results in apparent scale-dependence of model parameters and inability to utilize laboratory parameters in field models. Accordingly, most field modeling efforts are restricted to empirical estimation of model parameters by fitting to field observations, which renders extrapolation of model predictions beyond fitted conditions unreliable. The objective of this project is to develop a theoretical and computational framework for (1) connecting models of coupled reactive transport from pore-scale processes to field-scale bioremediation through a hierarchy of models that maintain crucial information from the smaller scales at the larger scales; and (2) quantifying the uncertainty that is introduced by both the upscaling process and uncertainty in physical parameters. One of the challenges of addressing scale-dependent effects of coupled processes in heterogeneous porous media is the problem-specificity of solutions. Much effort has been aimed at developing generalized scaling laws or theories, but these require restrictive assumptions that render them ineffective in many real problems. We propose instead an approach that applies physical and numerical experiments at small scales (specifically the pore scale) to a selected model system in order to identify the scaling approach appropriate to that type of problem. Although the results of such studies will

  13. Feature selection model based on clustering and ranking in pipeline for microarray data

    Directory of Open Access Journals (Sweden)

    Barnali Sahu

    2017-01-01

    Most of the available feature selection techniques in the literature are classifier-bound, meaning that a group of features is tied to the performance of a specific classifier, as in wrapper and hybrid approaches. Our objective in this study is to select a set of generic features not tied to any classifier, based on the proposed framework. This framework uses attribute clustering and feature ranking techniques in a pipeline in order to remove redundant features. On each uncovered cluster, signal-to-noise ratio, t-statistics and significance analysis of microarray are independently applied to select the top-ranked features. Both filter and evolutionary wrapper approaches have been considered for feature selection, and the dataset with the selected features is given to an ensemble of predefined, statistically different classifiers. The class labels of the test data are determined using a majority voting technique. Moreover, with the aforesaid objectives, this paper focuses on obtaining a stable result out of various classification models. Further, a comparative analysis has been performed to study the classification accuracy and computational time of the current approach and evolutionary wrapper techniques. The approach gives better insight into the features and further enhances classification accuracy with less computational time.
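
    A simplified rendering of the cluster-then-rank idea: group correlated genes by clustering the transposed expression matrix, then keep the top feature per cluster by a signal-to-noise criterion. The k-means clustering, the SNR definition and the cluster count are illustrative choices, not necessarily those of the paper.

```python
# Cluster genes, then keep the highest-SNR representative of each cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))                 # 60 samples x 500 genes
y = rng.integers(0, 2, size=60)                # two classes

def snr(feature, y):
    a, b = feature[y == 0], feature[y == 1]
    return abs(a.mean() - b.mean()) / (a.std() + b.std() + 1e-12)

# Cluster the genes (columns) by their expression profiles across samples.
clusters = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(X.T)
selected = [
    max(np.flatnonzero(clusters == c), key=lambda j: snr(X[:, j], y))
    for c in range(20)
]
print("selected gene indices:", selected)
```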

  14. Green sturgeon distribution in the Pacific Ocean estimated from modeled oceanographic features and migration behavior.

    Science.gov (United States)

    Huff, David D; Lindley, Steven T; Wells, Brian K; Chai, Fei

    2012-01-01

    The green sturgeon (Acipenser medirostris), which is found in the eastern Pacific Ocean from Baja California to the Bering Sea, tends to be highly migratory, moving long distances among estuaries, spawning rivers, and distant coastal regions. Factors that determine the oceanic distribution of green sturgeon are unclear, but broad-scale physical conditions interacting with migration behavior may play an important role. We estimated the distribution of green sturgeon by modeling species-environment relationships using oceanographic and migration behavior covariates with maximum entropy modeling (MaxEnt) of species geographic distributions. The primary concentration of green sturgeon was estimated from approximately 41-51.5° N latitude in the coastal waters of Washington, Oregon, and Vancouver Island and in the vicinity of San Francisco and Monterey Bays from 36-37° N latitude. Unsuitably cold water temperatures in the far north and energetic efficiencies associated with prevailing water currents may provide the best explanation for the range-wide marine distribution of green sturgeon. Independent trawl records, fisheries observer records, and tagging studies corroborated our findings. However, our model also delineated patchily distributed habitat south of Monterey Bay, though there are few records of green sturgeon from this region. Green sturgeon are likely influenced by countervailing pressures governing their dispersal. They are behaviorally directed to revisit natal freshwater spawning rivers and persistent overwintering grounds in coastal marine habitats, yet they are likely physiologically bounded by abiotic and biotic environmental features. Impacts of human activities on green sturgeon or their habitat in coastal waters, such as bottom-disturbing trawl fisheries, may be minimized through marine spatial planning that makes use of high-quality species distribution information.
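
    MaxEnt presence-only modeling is closely related to penalized logistic regression of presence records against background points; the sketch below uses that correspondence rather than the MaxEnt software itself, and the covariates are invented placeholders.

```python
# Presence-vs-background habitat model as penalized logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Columns: sea surface temperature, current speed, distance to natal river
background = rng.normal(size=(5000, 3))                 # random ocean locations
presence = rng.normal(loc=[0.8, -0.5, -1.0], size=(300, 3))  # sighting records

X = np.vstack([presence, background])
y = np.concatenate([np.ones(len(presence)), np.zeros(len(background))])

model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)
# Relative habitat suitability for a grid of candidate locations:
grid = rng.normal(size=(10, 3))
print(model.predict_proba(grid)[:, 1])
```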

  15. Research on Degeneration Model of Neural Network for Deep Groove Ball Bearing Based on Feature Fusion

    Directory of Open Access Journals (Sweden)

    Lijun Zhang

    2018-02-01

    Full Text Available Aiming at the pitting fault of deep groove ball bearings in service, this paper uses vibration signals from five different states of a deep groove ball bearing, extracts the relevant features, and then uses a neural network to model the degradation in order to identify and classify the fault type. By comparing the effects of training samples of different sizes through performance indexes such as accuracy and convergence speed, it is shown that an increase in sample size can improve the performance of the model. Based on the polynomial fitting principle and the Pearson correlation coefficient, fusion features based on the skewness index are proposed, and the performance improvement of the model after incorporating the fusion features is validated. A comparison of the performance of the support vector machine (SVM) model and the neural network model on this dataset is given. The research shows that neural networks have more potential for complex and high-volume datasets.
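
    An illustrative sketch, not the paper's code, of the fusion-feature idea described above: compute a skewness index over windowed vibration signals, fit a polynomial trend, and use the Pearson correlation to decide whether to fuse the feature; the toy signal and window sizes are assumptions.

```python
import numpy as np
from scipy.stats import skew, pearsonr

rng = np.random.default_rng(1)
signal = rng.normal(size=100_000) * np.linspace(1.0, 3.0, 100_000)  # toy wear
windows = signal.reshape(100, 1000)              # 100 windows of 1000 points

skew_feat = skew(windows, axis=1)                # skewness index per window
trend = np.polyval(np.polyfit(np.arange(100), skew_feat, deg=3),
                   np.arange(100))               # polynomial fitting principle

r, p = pearsonr(skew_feat, trend)
print(f"Pearson r = {r:.3f} (p = {p:.2g})")      # fuse the feature if r is high
```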

  16. Feature Set Evaluation for Offline Handwriting Recognition Systems: Application to the Recurrent Neural Network Model.

    Science.gov (United States)

    Chherawala, Youssouf; Roy, Partha Pratim; Cheriet, Mohamed

    2016-12-01

    The performance of handwriting recognition systems is dependent on the features extracted from the word image. A large body of features exists in the literature, but no method has yet been proposed to identify the most promising of these, other than a straightforward comparison based on the recognition rate. In this paper, we propose a framework for feature set evaluation based on a collaborative setting. We use a weighted vote combination of recurrent neural network (RNN) classifiers, each trained with a particular feature set. This combination is modeled in a probabilistic framework as a mixture model and two methods for weight estimation are described. The main contribution of this paper is to quantify the importance of feature sets through the combination weights, which reflect their strength and complementarity. We chose the RNN classifier because of its state-of-the-art performance. Also, we provide the first feature set benchmark for this classifier. We evaluated several feature sets on the IFN/ENIT and RIMES databases of Arabic and Latin script, respectively. The resulting combination model is competitive with state-of-the-art systems.
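
    A hypothetical illustration of the weighted vote combination described above; the weight-estimation scheme here (mean validation likelihood per feature set) is a simplification of the paper's probabilistic mixture formulation, and the posteriors are toy numbers.

```python
import numpy as np

def weighted_vote(posteriors, weights):
    """Combine (n_samples, n_classes) posteriors, one array per feature set."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, posteriors))

# Toy posteriors from three RNNs trained on different feature sets.
p1 = np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8], [0.55, 0.45]])
p2 = np.array([[0.7, 0.3], [0.4, 0.6], [0.3, 0.7], [0.45, 0.55]])
p3 = np.array([[0.8, 0.2], [0.7, 0.3], [0.1, 0.9], [0.60, 0.40]])

y_val = np.array([0, 0, 1, 0])   # validation labels
# Weight each feature set by its mean validation likelihood of the true label;
# the weights then quantify the strength of each feature set.
weights = [p[np.arange(len(y_val)), y_val].mean() for p in (p1, p2, p3)]
fused = weighted_vote([p1, p2, p3], weights)
print(weights, fused.argmax(axis=1))
```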

  17. Main modelling features of the ASTEC V2.1 major version

    International Nuclear Information System (INIS)

    Chatelard, P.; Belon, S.; Bosland, L.; Carénini, L.; Coindreau, O.; Cousin, F.; Marchetto, C.; Nowack, H.; Piar, L.; Chailan, L.

    2016-01-01

    Highlights: • Recent modelling improvements of the ASTEC European severe accident code are outlined. • Key new physical models now available in the ASTEC V2.1 major version are described. • ASTEC progress towards a multi-design reactor code is illustrated for BWR and PHWR. • ASTEC's strong link with the on-going EC CESAM FP7 project is emphasized. • Main remaining modelling issues (on which IRSN efforts are now directed) are given. - Abstract: A new major version of the European severe accident integral code ASTEC, developed by IRSN with some GRS support, was delivered in November 2015 to the ASTEC worldwide community. The main modelling features of this V2.1 version are summarised in this paper. In particular, the in-vessel coupling technique between the reactor coolant system thermal-hydraulics module and the core degradation module has been strongly re-engineered to remove some well-known weaknesses of the former V2.0 series. The V2.1 version also includes new core degradation models specifically addressing BWR and PHWR reactor types, as well as several other physical modelling improvements, notably on reflooding of severely damaged cores, Zircaloy oxidation under air atmosphere, corium coolability during corium-concrete interaction, and source term evaluation. Moreover, this V2.1 version constitutes the backbone of the CESAM FP7 project, whose final objective is to further improve ASTEC for use in severe accident management analysis of the Gen.II–III nuclear power plants presently under operation or foreseen in the near future in Europe. As part of this European project, IRSN efforts to continuously improve code numerical robustness and computing performance at plant scale as well as users' tools are being intensified. Besides, ASTEC will continue capitalising the whole body of knowledge on severe accident phenomenology by progressively keeping physical models at the state of the art through regular feedback from the interpretation of the current and

  18. SCALING ANALYSIS OF REPOSITORY HEAT LOAD FOR REDUCED DIMENSIONALITY MODELS

    International Nuclear Information System (INIS)

    MICHAEL T. ITAMURA AND CLIFFORD K. HO

    1998-01-01

    The thermal energy released from the waste packages emplaced in the potential Yucca Mountain repository is expected to result in changes in the repository temperature, relative humidity, air mass fraction, gas flow rates, and other parameters that are important input into the models used to calculate the performance of the engineered system components. In particular, the waste package degradation models require input from thermal-hydrologic models that have higher resolution than those currently used to simulate the T/H responses at the mountain-scale. Therefore, a combination of mountain- and drift-scale T/H models is being used to generate the drift thermal-hydrologic environment

  19. Scaling, soil moisture and evapotranspiration in runoff models

    Science.gov (United States)

    Wood, Eric F.

    1993-01-01

    The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many of the climatology research experiments. The acquisition of high-resolution land surface data through remote sensing and intensive land-climatology field experiments (like HAPEX and FIFE) has provided data to investigate the interactions between microscale land-atmosphere interactions and macroscale models. One essential research question is how to account for the small-scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, the probability distribution for evaporation is derived, which illustrates the conditions under which scaling should work. A correction algorithm that may be appropriate for the land parameterization of a GCM is derived using a second-order linearization scheme. The performance of the algorithm is evaluated.

  20. Tissue Feature-Based and Segmented Deformable Image Registration for Improved Modeling of Shear Movement of Lungs

    International Nuclear Information System (INIS)

    Xie Yaoqin; Chao Ming; Xing Lei

    2009-01-01

    Purpose: To report a tissue feature-based image registration strategy with explicit inclusion of the differential motions of thoracic structures. Methods and Materials: The proposed technique started with auto-identification of a number of corresponding points with distinct tissue features. The tissue feature points were found by using the scale-invariant feature transform method. The control point pairs were then sorted into different 'colors' according to the organs in which they resided and used to model the involved organs individually. A thin-plate spline method was used to register a structure characterized by the control points with a given 'color.' The proposed technique was applied to study a digital phantom case and 3 lung and 3 liver cancer patients. Results: For the phantom case, a comparison with the conventional thin-plate spline method showed that the registration accuracy was markedly improved when the differential motions of the lung and chest wall were taken into account. On average, the registration error and standard deviation of the 15 points against the known ground truth were reduced from 3.0 to 0.5 mm and from 1.5 to 0.2 mm, respectively, when the new method was used. A similar level of improvement was achieved for the clinical cases. Conclusion: The results of our study have shown that the segmented deformable approach provides a natural and logical solution to model the discontinuous organ motions and greatly improves the accuracy and robustness of deformable registration.
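
    A hedged sketch of the per-organ ('color') thin-plate spline step: fit a TPS displacement field to one organ's control-point pairs only, then warp points of that organ; the control points are synthetic, and scipy's RBFInterpolator is used as a stand-in for the paper's implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
src = rng.uniform(0, 100, size=(30, 3))       # control points in image A (mm)
dst = src + np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.3, src.shape)

# One TPS per organ 'color': fit a displacement field from that organ's
# control-point pairs only, so neighboring structures can move differently.
tps = RBFInterpolator(src, dst - src, kernel="thin_plate_spline")

query = rng.uniform(0, 100, size=(5, 3))      # voxels inside the same organ
warped = query + tps(query)                   # registered coordinates
print(np.round(warped, 2))
```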

  1. Lower Length Scale Model Development for Embrittlement of Reactor Pressure Vessel Steel

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yongfeng [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schwen, Daniel [Idaho National Lab. (INL), Idaho Falls, ID (United States); Chakraborty, Pritam [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bai, Xianming [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    This report summarizes the lower-length-scale effort during FY 2016 in developing mesoscale capabilities for microstructure evolution, plasticity and fracture in reactor pressure vessel steels. During operation, reactor pressure vessels are subject to hardening and embrittlement caused by irradiation-induced defect accumulation and irradiation-enhanced solute precipitation. Both defect production and solute precipitation start at the atomic scale, and manifest their eventual effects as degradation in engineering-scale properties. To predict the property degradation, multiscale modeling and simulation are needed to deal with the microstructure evolution and to link the microstructure features to material properties. In this report, the development of mesoscale capabilities for defect accumulation and solute precipitation is summarized. A crystal plasticity model to capture defect-dislocation interaction and a damage model for cleavage micro-crack propagation are also provided.

  2. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    Full Text Available Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological drought. To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought).

    Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an

  3. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-01-01

    In this paper, we investigate bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of a local descriptor as contribution functions, and then propose a new multiple assignment strategy. By assuming that a local feature can be reconstructed by its neighboring visual words in the vocabulary, we solve for the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called QP assignment. We carry out our experiments on the ImageCLEFmed datasets. Experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.
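
    The reconstruction-weight step described above admits a standard closed form: under a sum-to-one constraint, the least-squares QP reduces to solving the local Gram system and normalizing, as in locally linear embedding. The sketch below assumes this form and synthetic SIFT-like descriptors; it is not the authors' solver.

```python
import numpy as np

def qp_assignment(descriptor, vocabulary, k=5, reg=1e-3):
    """Reconstruction weights of a descriptor from its k nearest visual words."""
    d = np.linalg.norm(vocabulary - descriptor, axis=1)
    nn = np.argsort(d)[:k]                     # k nearest visual words
    Z = vocabulary[nn] - descriptor            # shifted neighbors
    G = Z @ Z.T
    G += reg * np.trace(G) * np.eye(k)         # regularize for stability
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()                               # enforce sum(w) == 1
    weights = np.zeros(len(vocabulary))
    weights[nn] = w                            # the contribution function
    return weights

rng = np.random.default_rng(3)
vocab = rng.normal(size=(1000, 128))           # visual words (SIFT-like)
desc = rng.normal(size=128)
print(qp_assignment(desc, vocab).nonzero()[0]) # words this descriptor feeds
```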

  4. Optimization of an individual re-identification modeling process using biometric features

    Energy Technology Data Exchange (ETDEWEB)

    Heredia-Langner, Alejandro; Amidan, Brett G.; Matzner, Shari; Jarman, Kristin H.

    2014-09-24

    We present results from the optimization of a re-identification process using two sets of biometric data obtained from the Civilian American and European Surface Anthropometry Resource Project (CAESAR) database. The datasets contain real measurements of features for 2378 individuals in a standing (43 features) and seated (16 features) position. A genetic algorithm (GA) was used to search a large combinatorial space where different features are available between the probe (seated) and gallery (standing) datasets. Results show that optimized model predictions obtained using less than half of the 43 gallery features and data from roughly 16% of the individuals available produce better re-identification rates than two other approaches that use all the information available.
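
    A small, generic genetic-algorithm sketch for searching feature subsets in the spirit of this abstract; the fitness function is a deterministic placeholder (a pretend set of informative features) where the actual study would compute a re-identification rate between probe and gallery data.

```python
import numpy as np

rng = np.random.default_rng(4)
N_FEATURES, POP, GENS = 43, 40, 50            # 43 standing (gallery) features
INFORMATIVE = {1, 5, 8, 13, 21, 34}           # pretend-ground-truth features

def fitness(mask):
    # Placeholder for a re-identification rate: reward covering the
    # informative features while penalizing large subsets.
    hits = len(INFORMATIVE & set(np.flatnonzero(mask)))
    return hits - 0.05 * mask.sum()

pop = rng.random((POP, N_FEATURES)) < 0.5     # population of feature masks
for _ in range(GENS):
    scores = np.array([fitness(m) for m in pop])
    elite = pop[np.argsort(scores)[::-1][: POP // 2]]      # selection
    cuts = rng.integers(1, N_FEATURES, size=POP // 2)      # one-point crossover
    kids = np.array([np.concatenate([a[:c], b[c:]])
                     for a, b, c in zip(elite, np.roll(elite, 1, axis=0), cuts)])
    kids ^= rng.random(kids.shape) < 0.02                  # bit-flip mutation
    pop = np.vstack([elite, kids])

best = max(pop, key=fitness)
print("selected features:", np.flatnonzero(best))
```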

  5. Modeling multiple visual words assignment for bag-of-features based medical image retrieval

    KAUST Repository

    Wang, Jim Jing-Yan; Almasri, Islam

    2012-01-01

    In this paper, we investigate bag-of-features based medical image retrieval methods, which represent an image as a collection of local features, such as image patches and key points with SIFT descriptors. To improve the bag-of-features method, we first model the assignment of a local descriptor as contribution functions, and then propose a new multiple assignment strategy. By assuming that a local feature can be reconstructed by its neighboring visual words in the vocabulary, we solve for the reconstruction weights as a QP problem and then use the solved weights as contribution functions, which results in a new assignment method called QP assignment. We carry out our experiments on the ImageCLEFmed datasets. Experimental results show that our proposed method exceeds the performance of traditional solutions and works well for bag-of-features based medical image retrieval tasks.

  6. Properties of Brownian Image Models in Scale-Space

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup

    2003-01-01

    In this paper it is argued that the Brownian image model is the least committed, scale invariant, statistical image model which describes the second order statistics of natural images. Various properties of three different types of Gaussian image models (white noise, Brownian and fractional Brownian images) will be discussed in relation to linear scale-space theory, and it will be shown empirically that the second order statistics of natural images mapped into jet space may, within some scale interval, be modeled by the Brownian image model. This is consistent with the 1/f² power spectrum law that apparently governs natural images. Furthermore, the distribution of Brownian images mapped into jet space is Gaussian and an analytical expression can be derived for the covariance matrix of Brownian images in jet space. This matrix is also a good approximation of the covariance matrix…

  7. Constructing and validating readability models: the method of integrating multilevel linguistic features with machine learning.

    Science.gov (United States)

    Sung, Yao-Ting; Chen, Ju-Ling; Cha, Ji-Her; Tseng, Hou-Chiang; Chang, Tao-Hsing; Chang, Kuo-En

    2015-06-01

    Multilevel linguistic features have been proposed for discourse analysis, but there have been few applications of multilevel linguistic features to readability models and also few validations of such models. Most traditional readability formulae are based on generalized linear models (GLMs; e.g., discriminant analysis and multiple regression), but these models have to comply with certain statistical assumptions about data properties and include all of the data in formulae construction without pruning the outliers in advance. The use of such readability formulae tends to produce a low text classification accuracy, while using a support vector machine (SVM) in machine learning can enhance the classification outcome. The present study constructed readability models by integrating multilevel linguistic features with SVM, which is more appropriate for text classification. Taking the Chinese language as an example, this study developed 31 linguistic features as the predicting variables at the word, semantic, syntax, and cohesion levels, with grade levels of texts as the criterion variable. The study compared four types of readability models by integrating unilevel and multilevel linguistic features with GLMs and an SVM. The results indicate that adopting a multilevel approach in readability analysis provides a better representation of the complexities of both texts and the reading comprehension process.
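
    A minimal sketch, not the authors' pipeline, of the modeling choice described above: an SVM classifier over a matrix of multilevel linguistic features with grade level as the criterion variable; the 31 feature values and grade labels below are synthetic.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n_texts, n_feats = 300, 31                   # 31 features, as in the study
grade = rng.integers(1, 7, size=n_texts)     # criterion variable: grade level
# Synthetic word/semantic/syntax/cohesion features that drift with grade.
X = rng.normal(size=(n_texts, n_feats)) + 0.4 * grade[:, None]

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(model, X, grade, cv=5).mean())
```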

  8. Scaling and application of commercial, feature-rich, modular mixed-signal technology platforms for large format ROICs

    Science.gov (United States)

    Kar-Roy, Arjun; Racanelli, Marco; Howard, David; Miyagi, Glenn; Bowler, Mark; Jordan, Scott; Zhang, Tao; Krieger, William

    2010-04-01

    Today's modular, mixed-signal CMOS process platforms are excellent choices for the manufacturing of highly integrated, large-format read-out integrated circuits (ROICs). Platform features, which can be used for both cooled and uncooled ROIC applications, include (1) quality passives such as 4 fF/μm² stacked MIM capacitors for linearity and higher density capacitance per pixel, 1 kΩ high-value polysilicon resistors, and 2.8 μm-thick metals for efficient power distribution and reduced IR drop; (2) analog active devices such as low-noise single-gate 3.3 V, and 1.8 V/3.3 V or 1.8 V/5 V dual-gate configurations, 40 V LDMOS FETs, and NPN and PNP devices, with deep n-well for substrate isolation of analog blocks and digital logic; (3) tools to assist the circuit designer, such as models for cryogenic temperatures, CAD assistance for metal density uniformity determination, and statistical, X-sigma and PCM-based models for corner validation and design sensitivity simulation; and (4) sub-field stitching for large die. The TowerJazz platform of technologies for the 0.50 μm, 0.25 μm and 0.18 μm CMOS nodes, with the features described above, is described in detail in this paper.

  9. Nucleon electric dipole moments in high-scale supersymmetric models

    International Nuclear Information System (INIS)

    Hisano, Junji; Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi

    2015-01-01

    The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of anomaly and gauge mediation, the gluino makes an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluonic Weinberg operator induced by the gluino chromoelectric dipole moment in high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in these scenarios. We found that in generic high-scale SUSY models, the nucleon EDMs may receive a sizable contribution from the Weinberg operator. Thus, it is important to compare the nucleon EDMs with the electron EDM in order to discriminate among high-scale SUSY models.

  10. Nucleon electric dipole moments in high-scale supersymmetric models

    Energy Technology Data Exchange (ETDEWEB)

    Hisano, Junji [Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI),Nagoya University,Nagoya 464-8602 (Japan); Department of Physics, Nagoya University,Nagoya 464-8602 (Japan); Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8584 (Japan); Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi [Department of Physics, Nagoya University,Nagoya 464-8602 (Japan)

    2015-11-12

    The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of anomaly and gauge mediation, the gluino makes an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluonic Weinberg operator induced by the gluino chromoelectric dipole moment in high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in these scenarios. We found that in generic high-scale SUSY models, the nucleon EDMs may receive a sizable contribution from the Weinberg operator. Thus, it is important to compare the nucleon EDMs with the electron EDM in order to discriminate among high-scale SUSY models.

  11. New phenomena in the standard no-scale supergravity model

    CERN Document Server

    Kelley, S; Nanopoulos, Dimitri V; Zichichi, Antonino; Kelley, S; Lopez, J L; Nanopoulos, D V; Zichichi, A

    1994-01-01

    We revisit the no-scale mechanism in the context of the simplest no-scale supergravity extension of the Standard Model. This model has the usual five-dimensional parameter space plus an additional parameter $\xi_{3/2} \equiv m_{3/2}/m_{1/2}$. We show how predictions of the model may be extracted over the whole parameter space. A necessary condition for the potential to be stable is $\mathrm{Str}\,\mathcal{M}^4 > 0$, which is satisfied if $m_{3/2} \lesssim 2 m_{\tilde{q}}$. Order-of-magnitude calculations reveal a no-lose theorem guaranteeing interesting and potentially observable new phenomena in the neutral scalar sector of the theory which would constitute a "smoking gun" of the no-scale mechanism. This new phenomenology is model-independent and divides into three scenarios, depending on the ratio of the weak scale to the vev at the minimum of the no-scale direction. We also calculate the residual vacuum energy at the unification scale ($C_0\, m^4_{3/2}$), and find that in typical models one must require $C_0 > 10$. Such constrai…

  12. Toward micro-scale spatial modeling of gentrification

    Science.gov (United States)

    O'Sullivan, David

    A simple preliminary model of gentrification is presented. The model is based on an irregular cellular automaton architecture drawing on the concept of proximal space, which is well suited to the spatial externalities present in housing markets at the local scale. The rent gap hypothesis on which the model's cell transition rules are based is discussed. The model's transition rules are described in detail. Practical difficulties in configuring and initializing the model are described and its typical behavior reported. Prospects for further development of the model are discussed. The current model structure, while inadequate, is well suited to further elaboration and the incorporation of other interesting and relevant effects.

  13. Heuristic algorithms for feature selection under Bayesian models with block-diagonal covariance structure.

    Science.gov (United States)

    Foroughi Pour, Ali; Dalton, Lori A

    2018-03-21

    Many bioinformatics studies aim to identify markers, or features, that can be used to discriminate between distinct groups. In problems where strong individual markers are not available, or where interactions between gene products are of primary interest, it may be necessary to consider combinations of features as a marker family. To this end, recent work proposes a hierarchical Bayesian framework for feature selection that places a prior on the set of features we wish to select and on the label-conditioned feature distribution. While an analytical posterior under Gaussian models with block covariance structures is available, the optimal feature selection algorithm for this model remains intractable, since it requires evaluating the posterior over the space of all possible covariance block structures and feature-block assignments. To address this computational barrier, in prior work we proposed a simple suboptimal algorithm, 2MNC-Robust, with robust performance across the space of block structures. Here, we present three new heuristic feature selection algorithms. The proposed algorithms outperform 2MNC-Robust and many other popular feature selection algorithms on synthetic data. In addition, enrichment analysis on real breast cancer, colon cancer, and leukemia data indicates they also output many of the genes and pathways linked to the cancers under study. Bayesian feature selection is a promising framework for small-sample high-dimensional data, in particular biomarker discovery applications. When applied to cancer data, these algorithms output many genes already shown to be involved in cancer as well as potentially new biomarkers. Furthermore, one of the proposed algorithms, SPM, outputs blocks of heavily correlated genes, which are particularly useful for studying gene interactions and gene networks.

  14. Construction Method of the Topographical Features Model for Underwater Terrain Navigation

    Directory of Open Access Journals (Sweden)

    Wang Lihui

    2015-09-01

    Full Text Available A terrain database is the reference basis for an autonomous underwater vehicle (AUV) to implement underwater terrain navigation (UTN) functions, and it is an important part of building a topographical features model for UTN. To investigate the feasibility and correlation of a variety of terrain parameters as terrain navigation information metrics, this paper describes and analyzes the underwater terrain features and the calculation of topography parameters. We propose a comprehensive evaluation method for terrain navigation information and construct an underwater navigation information analysis model associated with topographic features. Simulation results show that underwater terrain features are associated with UTN information directly or indirectly, and directly affect the terrain matching capture probability and the positioning accuracy.

  15. Swallowing sound detection using hidden markov modeling of recurrence plot features

    International Nuclear Information System (INIS)

    Aboofazeli, Mohammad; Moussavi, Zahra

    2009-01-01

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.
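
    An illustrative sketch of the two signal-processing steps named above, a Takens delay embedding and simple recurrence-plot statistics; the embedding parameters, threshold, and the two features shown (recurrence rate and a crude determinism proxy) are assumptions, not the paper's exact three features.

```python
import numpy as np

def delay_embed(x, dim=3, tau=5):
    """Takens delay embedding of a 1-D signal."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def recurrence_features(traj, eps=0.2):
    d = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
    R = d < eps                                    # the recurrence plot
    rec_rate = R.mean()                            # fraction of recurrent points
    determinism = np.diagonal(R, offset=1).mean()  # crude diagonal-line proxy
    return rec_rate, determinism

rng = np.random.default_rng(6)
x = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * rng.normal(size=1000)
print(recurrence_features(delay_embed(x)))
# Sequences of such feature vectors, computed over sliding windows, would be
# modeled with HMMs and decoded with the Viterbi algorithm to flag swallows.
```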

  16. Swallowing sound detection using hidden markov modeling of recurrence plot features

    Energy Technology Data Exchange (ETDEWEB)

    Aboofazeli, Mohammad [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: umaboofa@cc.umanitoba.ca; Moussavi, Zahra [Faculty of Engineering, Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6 (Canada)], E-mail: mousavi@ee.umanitoba.ca

    2009-01-30

    Automated detection of swallowing sounds in swallowing and breath sound recordings is of importance for monitoring purposes in which the recording durations are long. This paper presents a novel method for swallowing sound detection using hidden Markov modeling of recurrence plot features. Tracheal sound recordings of 15 healthy and nine dysphagic subjects were studied. The multidimensional state space trajectory of each signal was reconstructed using the Takens method of delays. The sequences of three recurrence plot features of the reconstructed trajectories (which have shown discriminating capability between swallowing and breath sounds) were modeled by three hidden Markov models. The Viterbi algorithm was used for swallowing sound detection. The results were validated manually by inspection of the simultaneously recorded airflow signal and spectrogram of the sounds, and also by auditory means. The experimental results suggested that the performance of the proposed method using hidden Markov modeling of recurrence plot features was superior to the previous swallowing sound detection methods.

  17. Featuring Multiple Local Optima to Assist the User in the Interpretation of Induced Bayesian Network Models

    DEFF Research Database (Denmark)

    Dalgaard, Jens; Pena, Jose; Kocka, Tomas

    2004-01-01

    We propose a method to assist the user in the interpretation of the best Bayesian network model induced from data. The method consists in extracting relevant features from the model (e.g. edges, directed paths and Markov blankets) and, then, assessing the confidence in them by studying multiple…

  18. A product feature-based user-centric product search model

    OpenAIRE

    Ben Jabeur, Lamjed; Soulier, Laure; Tamine, Lynda; Mousset, Paul

    2016-01-01

    During the online shopping process, users would search for interesting products and quickly access those that fit with their needs among a long tail of similar or closely related products. Our contribution addresses head queries that are frequently submitted on e-commerce Web sites. Head queries usually target featured products with several variations, accessories, and complementary products. We present in this paper a product feature-based user-centric model for product search involving in a...

  19. The ScaLIng Macroweather Model (SLIMM): using scaling to forecast global-scale macroweather from months to decades

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.; Hébert, R.

    2015-09-01

    On scales of ≈ 10 days (the lifetime of planetary-scale structures), there is a drastic transition from high-frequency weather to low-frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models; thus, in GCM (general circulation model) macroweather forecasts, the weather is a high-frequency noise. However, neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic so that even a two-parameter model can perform as well as GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the large stochastic memories that we quantify. Since macroweather temporal (but not spatial) intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the ScaLIng Macroweather Model (SLIMM). SLIMM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as the linear inverse modelling - LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes that there is no low-frequency memory, SLIMM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful stochastic forecasts of natural macroweather variability is to first remove the low-frequency anthropogenic component. A previous attempt to use fGn for forecasts had disappointing results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent
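
    A hedged sketch of the core idea: fGn has long memory, so an optimal linear predictor of the next anomaly can be built from its autocovariance by solving a Toeplitz system; the exponent, memory length, and detrended series below are illustrative stand-ins, not the paper's calibration.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def fgn_acov(k, H):
    """Autocovariance of unit-variance fractional Gaussian noise."""
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

H, m = 0.9, 120                        # exponent and memory (months), illustrative
col = fgn_acov(np.arange(m), H)        # first column of the Toeplitz system
rhs = fgn_acov(np.arange(1, m + 1), H)
w = solve_toeplitz(col, rhs)           # predictor weights, most recent lag first

rng = np.random.default_rng(7)
past = rng.normal(size=m)              # detrended temperature anomalies
forecast = w @ past[::-1]              # optimal linear one-step forecast
print(forecast)
```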

  20. The Scaling LInear Macroweather model (SLIM): using scaling to forecast global scale macroweather from months to decades

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.; Hébert, R.

    2015-03-01

    At scales of ≈ 10 days (the lifetime of planetary-scale structures), there is a drastic transition from high-frequency weather to low-frequency macroweather. This scale is close to the predictability limits of deterministic atmospheric models, so that in GCM macroweather forecasts the weather is a high-frequency noise. But neither the GCM noise nor the GCM climate is fully realistic. In this paper we show how simple stochastic models can be developed that use empirical data to force the statistics and climate to be realistic, so that even a two-parameter model can outperform GCMs for annual global temperature forecasts. The key is to exploit the scaling of the dynamics and the enormous stochastic memories that it implies. Since macroweather intermittency is low, we propose using the simplest model based on fractional Gaussian noise (fGn): the Scaling LInear Macroweather model (SLIM). SLIM is based on a stochastic ordinary differential equation, differing from usual linear stochastic models (such as Linear Inverse Modelling, LIM) in that it is of fractional rather than integer order. Whereas LIM implicitly assumes there is no low-frequency memory, SLIM has a huge memory that can be exploited. Although the basic mathematical forecast problem for fGn has been solved, we approach the problem in an original manner, notably using the method of innovations to obtain simpler results on forecast skill and on the size of the effective system memory. A key to successful forecasts of natural macroweather variability is to first remove the low-frequency anthropogenic component. A previous attempt to use fGn for forecasts had poor results because this was not done. We validate our theory using hindcasts of global and Northern Hemisphere temperatures at monthly and annual resolutions. Several nondimensional measures of forecast skill - with no adjustable parameters - show excellent agreement with hindcasts, and these show some skill even at decadal scales. We also compare

  1. Description of Muzzle Blast by Modified Ideal Scaling Models

    Directory of Open Access Journals (Sweden)

    Kevin S. Fansler

    1998-01-01

    Full Text Available Gun blast data from a large variety of weapons are scaled and presented for both the instantaneous energy release and the constant energy deposition rate models. For both ideal explosion models, similar amounts of data scatter occur for the peak overpressure but the instantaneous energy release model correlated the impulse data significantly better, particularly for the region in front of the gun. Two parameters that characterize gun blast are used in conjunction with the ideal scaling models to improve the data correlation. The gun-emptying parameter works particularly well with the instantaneous energy release model to improve data correlation. In particular, the impulse, especially in the forward direction of the gun, is correlated significantly better using the instantaneous energy release model coupled with the use of the gun-emptying parameter. The use of the Mach disc location parameter improves the correlation only marginally. A predictive model is obtained from the modified instantaneous energy release correlation.

  2. A feature-based approach to modeling protein-DNA interactions.

    Directory of Open Access Journals (Sweden)

    Eilon Sharon

    Full Text Available Transcription factor (TF) binding to its DNA target site is a fundamental regulatory interaction. The most common model used to represent TF binding specificities is a position-specific scoring matrix (PSSM), which assumes independence between binding positions. However, in many cases, this simplifying assumption does not hold. Here, we present feature motif models (FMMs), a novel probabilistic method for modeling TF-DNA interactions, based on log-linear models. Our approach uses sequence features to represent TF binding specificities, where each feature may span multiple positions. We develop the mathematical formulation of our model and devise an algorithm for learning its structural features from binding site data. We also developed a discriminative motif finder, which discovers de novo FMMs that are enriched in target sets of sequences compared to background sets. We evaluate our approach on synthetic data and on the widely used TF chromatin immunoprecipitation (ChIP) dataset of Harbison et al. We then apply our algorithm to high-throughput TF ChIP data from mouse and human, reveal sequence features that are present in the binding specificities of mouse and human TFs, and show that FMMs explain TF binding significantly better than PSSMs. Our FMM learning and motif finder software are available at http://genie.weizmann.ac.il/.
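
    A toy illustration, assumed rather than taken from the FMM software, of the contrast stated above: a PSSM scores positions independently, while a log-linear feature model can additionally weight a feature spanning several positions.

```python
import numpy as np

BASES = {b: i for i, b in enumerate("ACGT")}

# Toy 4-position PSSM (log-probabilities); positions scored independently.
pssm = np.log(np.array([[.70, .10, .10, .10],
                        [.10, .70, .10, .10],
                        [.10, .10, .70, .10],
                        [.25, .25, .25, .25]]))

def pssm_score(site):
    return sum(pssm[i, BASES[b]] for i, b in enumerate(site))

# Log-linear feature model: the PSSM terms plus one feature spanning
# positions 3-4, whose weight captures a dependency a PSSM cannot.
PAIR_WEIGHT = {("G", "T"): 1.5}

def fmm_score(site):
    return pssm_score(site) + PAIR_WEIGHT.get((site[2], site[3]), 0.0)

print(pssm_score("ACGT"), fmm_score("ACGT"))
```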

  3. Pattern Classification Using an Olfactory Model with PCA Feature Selection in Electronic Noses: Study and Application

    Directory of Open Access Journals (Sweden)

    Junbao Zheng

    2012-03-01

    Full Text Available Biologically-inspired models and algorithms are considered promising sensor-array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model with increasing dimensions of the input feature vector (outer factor) as well as of its parallel channels (inner factor). The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, of three classes of wine derived from different cultivars and of five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case the results showed that the average correct classification rate increased as more principal components were added to the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6-8 channels of the model, with principal component feature vectors retaining at least 90% cumulative variance, are adequate for a classification task of 3-5 pattern classes, considering the trade-off between time consumption and classification rate.
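
    A minimal sketch of the feature-selection step described above: project sensor-array responses onto the principal components retaining at least 90% cumulative variance; the synthetic data and array size are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
X = rng.normal(size=(60, 16))          # 60 samples from a 16-sensor array

pca = PCA(n_components=0.90)           # keep >= 90% cumulative variance
Z = pca.fit_transform(X)
print(Z.shape[1], "components,",
      round(pca.explained_variance_ratio_.sum(), 3), "variance retained")
# Z would then feed the parallel channels of the olfactory model.
```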

  4. A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines

    Science.gov (United States)

    Wang, Bin; Zhao, Haocen; Ye, Zhifeng

    2017-08-01

    Data-fused and user-friendly design of aero-engine accessories is required because of their structural complexity and stringent reliability requirements. This paper gives an overview of a typical aero-engine control system and the development process of the key mechatronic devices used. Several essential aspects of modeling and simulation in the process are investigated. Considering the limitations of a single theoretical model, a feature-based co-modeling methodology is suggested to satisfy the design requirements and compensate for the diversity of component sub-models for these devices. As an example, a stepper-motor-controlled Fuel Metering Unit (FMU) is modeled in view of the component physical features using two different software tools. An interface is suggested to integrate the single-discipline models into a synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the co-model for the FMU has high accuracy and a clear superiority over a single model. Together with its compatible interface with the engine mathematical model, the feature-based co-modeling methodology is proven to be an effective technical measure in the development process of the device.

  5. Robustness of digitally modulated signal features against variation in HF noise model

    Directory of Open Access Journals (Sweden)

    Shoaib Mobien

    2011-01-01

    Full Text Available The high frequency (HF) band has both military and civilian uses. It can be used either as a primary or backup communication link. Automatic modulation classification (AMC) is of utmost importance in this band for the purpose of communications monitoring, e.g., signal intelligence and spectrum management. A widely used method for AMC is based on pattern recognition (PR). Such a method has two main steps: feature extraction and classification. The first step is generally performed in the presence of channel noise. Recent studies show that HF noise can be modeled by Gaussian or bi-kappa distributions, depending on the time of day. Therefore, it is anticipated that a change in the noise model will have an impact on the feature extraction stage. In this article, we investigate the robustness of well-known digitally modulated signal features against variation in HF noise. Specifically, we consider temporal time-domain (TTD) features, higher-order cumulants (HOC), and wavelet-based features. In addition, we propose new features extracted from the constellation diagram and evaluate their robustness against the change in noise model. This study targets 2PSK, 4PSK, 8PSK, 16QAM, 32QAM, and 64QAM modulations, as they are commonly used in HF communications.
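
    An illustrative sketch of two higher-order-cumulant (HOC) features of the kind considered above, computed for a complex baseband signal; normalizations and the full feature list vary across papers, and the toy QPSK signal is an assumption.

```python
import numpy as np

def hoc_features(x):
    """Two normalized higher-order cumulants of a complex baseband signal."""
    x = x - x.mean()
    m20 = np.mean(x ** 2)
    m21 = np.mean(np.abs(x) ** 2)
    m40 = np.mean(x ** 4)
    m42 = np.mean(np.abs(x) ** 4)
    c40 = m40 - 3 * m20 ** 2
    c42 = m42 - np.abs(m20) ** 2 - 2 * m21 ** 2
    return c40 / m21 ** 2, c42 / m21 ** 2        # scale-invariant versions

rng = np.random.default_rng(11)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 10_000)))
noisy = symbols + 0.1 * (rng.normal(size=10_000) + 1j * rng.normal(size=10_000))
print(hoc_features(noisy))  # normalized C40 is about -1 for QPSK, -0.68 for 16QAM
```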

  6. Modelling of evapotranspiration at field and landscape scales. Abstract

    DEFF Research Database (Denmark)

    Overgaard, Jesper; Butts, M.B.; Rosbjerg, Dan

    2002-01-01

    observations from a nearby weather station. Detailed land-use and soil maps were used to set up the model. Leaf area index was derived from NDVI (Normalized Difference Vegetation Index) images. To validate the model at field scale the simulated evapotranspiration rates were compared to eddy...

  7. Role of scaling in the statistical modelling of finance

    Indian Academy of Sciences (India)

    Modelling the evolution of a financial index as a stochastic process is a problem awaiting a full, satisfactory solution since it was first formulated by Bachelier in 1900. Here it is shown that the scaling with time of the return probability density function sampled from the historical series suggests a successful model.

  8. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

    with a complex conversion route. Computational fluid dynamics is used to model transport phenomena in large reactors capturing tank profiles, and delays due to plug flows. This work publishes for the first time demonstration scale real data for validation showing that the model library is suitable...

  9. Appropriate spatial scales to achieve model output uncertainty goals

    NARCIS (Netherlands)

    Booij, Martijn J.; Melching, Charles S.; Chen, Xiaohong; Chen, Yongqin; Xia, Jun; Zhang, Hailun

    2008-01-01

    Appropriate spatial scales of hydrological variables were determined using an existing methodology based on a balance in uncertainties from model inputs and parameters extended with a criterion based on a maximum model output uncertainty. The original methodology uses different relationships between

  10. Development of the Artistic Supervision Model Scale (ASMS)

    Science.gov (United States)

    Kapusuzoglu, Saduman; Dilekci, Umit

    2017-01-01

    The purpose of the study is to develop the Artistic Supervision Model Scale in accordance with the perceptions of inspectors and of elementary and secondary school teachers on artistic supervision. The lack of a measuring instrument related to the artistic supervision model in the literature reveals the necessity of such a study. 290…

  11. Transdisciplinary application of the cross-scale resilience model

    Science.gov (United States)

    Sundstrom, Shana M.; Angeler, David G.; Garmestani, Ahjond S.; Garcia, Jorge H.; Allen, Craig R.

    2014-01-01

    The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems.

  12. Scale-free, axisymmetric galaxy models with little angular momentum

    International Nuclear Information System (INIS)

    Richstone, D.O.

    1980-01-01

    Two scale-free models of elliptical galaxies are constructed using a self-consistent field approach developed by Schwarzschild. Both models have concentric, oblate spheroidal, equipotential surfaces, with a logarithmic potential dependence on central distance. The axial ratio of the equipotential surfaces is 4:3, and the extent ratio of density level surfaces is 2.5:1 (corresponding to an E6 galaxy). Each model satisfies the Poisson and steady-state Boltzmann equations for time scales of order 100 galactic years

  13. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    International Nuclear Information System (INIS)

    Sonnenthale, E.

    2001-01-01

    The purpose of this Analysis/Model Report (AMR) is to document the Near-Field Environment (NFE) and Unsaturated Zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrologic-chemical (THC) processes on unsaturated zone flow and transport. This is in accordance with the "Technical Work Plan (TWP) for Unsaturated Zone Flow and Transport Process Model Report", Addendum D, Attachment D-4 (Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M and O) 2000 [1534471]) and "Technical Work Plan for Nearfield Environment Thermal Analyses and Testing" (CRWMS M and O 2000 [153309]). These models include the Drift Scale Test (DST) THC Model and several THC seepage models. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal loading conditions, and predict the chemistry of waters and gases entering potential waste-emplacement drifts. The intended use of this AMR is to provide input for the following: Performance Assessment (PA); Near-Field Environment (NFE) PMR; Abstraction of Drift-Scale Coupled Processes AMR (ANL-NBS-HS-000029); and UZ Flow and Transport Process Model Report (PMR). The work scope for this activity is presented in the TWPs cited above, and summarized as follows: Continue development of the repository drift-scale THC seepage model used in support of the TSPA in-drift geochemical model; incorporate heterogeneous fracture property realizations; study sensitivity of results to changes in input data and mineral assemblage; validate the DST model by comparison with field data; perform simulations to predict mineral dissolution and precipitation and their effects on fracture properties and chemistry of water (but not flow rates) that may seep into drifts; submit modeling results to the TDMS and document the models. The model development, input data, sensitivity and validation studies described in this AMR are

  14. Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales

    Directory of Open Access Journals (Sweden)

    Yonghe Zhang

    2010-11-01

    Full Text Available Ionocovalency (IC), a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strengths, charge density and ionic potential. Based on the atomic electron configuration and the various quantum-mechanically built-up dual parameters, the model forms a Dual Method of multiple-functional prediction, which has far more versatile applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with bond-property data and satisfactorily explains chemical observations of elements throughout the Periodic Table.

  15. The influence of coarse-scale environmental features on current and predicted future distributions of narrow-range endemic crayfish populations

    Science.gov (United States)

    Dyer, Joseph J.; Brewer, Shannon K.; Worthington, Thomas A.; Bergey, Elizabeth A.

    2013-01-01

    …whereas two of four species would be severely restricted in range under moderate-high emissions. Discrepancies in the two emission scenarios probably relate to the exclusion of behavioural adaptations from species-distribution models. 6. These model predictions illustrate possible impacts of climate change on narrow-range endemic crayfish populations. The predictions do not account for biotic interactions, migration, local habitat conditions or species adaptation. However, we identified the constraining landscape features acting on these populations, which provide a framework for addressing habitat needs at a fine scale and developing targeted and systematic monitoring programmes.

  16. A Labeling Model Based on the Region of Movability for Point-Feature Label Placement

    Directory of Open Access Journals (Sweden)

    Lin Li

    2016-09-01

    Full Text Available Automatic point-feature label placement (PFLP) is a fundamental task for map visualization. As the dominant solutions to the PFLP problem, fixed-position and slider models have been widely studied in previous research. However, the candidate labels generated with these models are set to certain fixed positions or to a specified track line for sliding. Thus, the whole space surrounding a point feature is not fully used for labeling. Hence, this paper proposes a novel label model based on the region of movability, which comes from plane collision detection theory. The model defines a complete conflict-free search space for label placement. On the premise of no conflict with point, line, and area features, the proposed model utilizes the surrounding zone of the point feature to generate candidate label positions. Combined with a heuristic search method, the model achieves high-quality label placement. In addition, the flexibility of the proposed model enables placing arbitrarily shaped labels.

  17. Hybrid image representation learning model with invariant features for basal cell carcinoma detection

    Science.gov (United States)

    Arevalo, John; Cruz-Roa, Angel; González, Fabio A.

    2013-11-01

    This paper presents a novel method for basal-cell carcinoma detection, which combines state-of-the-art methods for unsupervised feature learning (UFL) and bag of features (BOF) representation. BOF, which is a form of representation learning, has shown good performance in automatic histopathology image classification. In BOF, patches are usually represented using descriptors such as SIFT and DCT. We propose to use UFL to learn the patch representation itself. This is accomplished by applying a topographic UFL method (T-RICA), which automatically learns visual invariance properties of color, scale and rotation from an image collection. These learned features also reveal the visual properties associated with cancerous and healthy tissues, improving carcinoma detection results by 7% with respect to traditional autoencoders and 6% with respect to standard DCT representations, obtaining on average 92% F-score and 93% balanced accuracy.

  18. The application of feature selection to the development of Gaussian process models for percutaneous absorption.

    Science.gov (United States)

    Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P

    2010-06-01

    The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and to employ these methods, particularly feature selection, to determine the key physicochemical descriptors that exert the most significant influence on percutaneous absorption, and to compare such models with established existing models. Gaussian processes, including automatic relevance determination (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors of percutaneous absorption. Using MATLAB software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures to determine model quality were used. The inherently nonlinear nature of the skin dataset was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (where the rank order was: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters. It suggested that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity, compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information. However, it was also shown that significant synergy existed between certain parameters, and as such it
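
    A hedged sketch of GPR with automatic relevance determination using an anisotropic RBF kernel: after fitting, a short learned length scale marks a descriptor as influential. The descriptor names echo the abstract, but the data are synthetic stand-ins for the skin permeability set, and sklearn is used rather than the study's MATLAB code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(9)
names = ["logP", "melting_point", "H_bond_donors", "mol_weight"]
X = rng.normal(size=(80, 4))
y = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=80)

# One length scale per descriptor: ARD via an anisotropic RBF kernel.
kernel = RBF(length_scale=np.ones(4)) + WhiteKernel(1e-2)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

ls = gpr.kernel_.k1.length_scale           # fitted per-descriptor scales
for n, l in sorted(zip(names, ls), key=lambda t: t[1]):
    print(f"{n:14s} length scale = {l:6.2f}")   # shorter => more relevant
```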

  19. Feature selection, statistical modeling and its applications to universal JPEG steganalyzer

    Energy Technology Data Exchange (ETDEWEB)

    Jalan, Jaikishan [Iowa State Univ., Ames, IA (United States)

    2009-01-01

    Steganalysis deals with identifying the instances of a medium which carry a message for communication by concealing its existence. This research focuses on steganalysis of JPEG images, because of their ubiquitous nature and low bandwidth requirement for storage and transmission. JPEG image steganalysis is generally addressed by representing an image with lower-dimensional features such as statistical properties, and then training a classifier on the feature set to differentiate between an innocent and a stego image. Our approach is twofold: first, we propose a new feature reduction technique by applying the Mahalanobis distance to rank the features for steganalysis. Many successful steganalysis algorithms use a large number of features relative to the size of the training set and suffer from a "curse of dimensionality": a large number of feature values relative to the training data size. We apply this technique to the state-of-the-art steganalyzer proposed by Tomás Pevný (54) to understand the feature space complexity and the effectiveness of features for steganalysis. We show that using our approach, reduced-feature steganalyzers can be obtained that perform as well as the original steganalyzer. Based on our experimental observations, we then propose a new modeling technique for steganalysis by developing a Partially Ordered Markov Model (POMM) (23) for JPEG images and using its properties to train a Support Vector Machine. POMM generalizes the concept of local neighborhood directionality by using a partial order underlying the pixel locations. We show that the proposed steganalyzer outperforms a state-of-the-art steganalyzer by testing our approach with many different image databases, with a total of 20000 images. Finally, we provide a software package with a Graphical User Interface that has been developed to make this research accessible to local state forensic departments.
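
    An illustrative sketch of ranking features by their contribution to the Mahalanobis distance between cover and stego classes, in the spirit of the feature reduction step above; the per-feature decomposition and the synthetic data are assumptions, not the thesis code.

```python
import numpy as np

def mahalanobis_rank(X_cover, X_stego):
    """Rank features by their share of the class-separation quadratic form."""
    diff = X_cover.mean(0) - X_stego.mean(0)
    pooled = 0.5 * (np.cov(X_cover.T) + np.cov(X_stego.T))
    sol = np.linalg.solve(pooled + 1e-6 * np.eye(len(diff)), diff)
    # diff * sol are the per-feature terms of diff' P^-1 diff.
    return np.argsort(np.abs(diff * sol))[::-1]

rng = np.random.default_rng(10)
cover = rng.normal(0.0, 1.0, size=(500, 274))   # e.g. 274 merged features
stego = rng.normal(0.1, 1.0, size=(500, 274))
ranking = mahalanobis_rank(cover, stego)
print("top 10 features:", ranking[:10])          # train the reduced classifier on these
```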

  20. Down-scaling wind energy resource from mesoscale to local scale by nesting and data assimilation with a CFD model

    International Nuclear Information System (INIS)

    Duraisamy Jothiprakasam, Venkatesh

    2014-01-01

    The development of wind energy generation requires precise and well-established methods for wind resource assessment, which is the initial step in every wind farm project. During the last two decades linear flow models were widely used in the wind industry for wind resource assessment and micro-siting, but their inaccuracies in predicting wind speeds in very complex terrain are well known and have led to the use of CFD, which can model the complex flow in detail around specific geographic features. Mesoscale numerical weather prediction (NWP) models can predict the wind regime at resolutions of several kilometers, but are not well suited to resolve the wind speed and turbulence induced by topographic features on the scale of a few hundred meters. CFD has proven successful in capturing flow details at smaller scales, but needs an accurate specification of the inlet conditions. Coupling NWP and CFD models is therefore a better modeling approach for wind energy applications. A one-year field measurement campaign carried out in complex terrain in southern France during 2007-2008 provides a well-documented data set for both input and validation. The proposed new methodology aims to address two problems: the high spatial variation of the topography on the domain lateral boundaries, and the prediction errors of the mesoscale model. It is applied in this work using the open-source CFD code Code_Saturne, coupled with the mesoscale forecast model of Meteo-France (ALADIN). The improvement is obtained by combining the mesoscale data as inlet conditions with the assimilation of field measurement data into the CFD model. The Newtonian relaxation (nudging) data assimilation technique is used to incorporate the measurement data into the CFD simulations. The methodology to reconstruct long-term averages uses a clustering process to group similar meteorological conditions and to reduce the number of CFD simulations needed to reproduce one year of atmospheric flow over the site. The assimilation
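    The nudging term can be illustrated in a few lines: the model state is relaxed toward an observation with time constant tau. The 1-D grid, weighting, and parameter values below are illustrative assumptions, not the thesis's Code_Saturne implementation.

```python
# Sketch: one explicit step of Newtonian relaxation ("nudging") of a
# modeled wind field toward a point observation.
import numpy as np

def nudge(u_model: np.ndarray, u_obs: float, obs_index: int,
          tau: float, dt: float, radius: int = 3) -> np.ndarray:
    """Apply u -= dt * w(x) * (u - u_obs) / tau near one observation
    point on a 1-D grid, with a simple hat-shaped weight w."""
    u = u_model.copy()
    for i in range(len(u)):
        dist = abs(i - obs_index)
        if dist <= radius:
            w = 1.0 - dist / (radius + 1)
            u[i] -= dt * w * (u[i] - u_obs) / tau
    return u

u = np.full(20, 8.0)                           # model wind speed (m/s)
u = nudge(u, u_obs=10.0, obs_index=10, tau=60.0, dt=1.0)
print(u.round(3))
```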

  1. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    International Nuclear Information System (INIS)

    Dixon, P.

    2004-01-01

    The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the ''Technical Work Plan for: Performance Assessment Unsaturated Zone'' (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, ''Coupled Effects on Flow and Seepage''. The plan for validation of the models documented in this Model Report is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, ''Models''. This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. The DST THC Model is used solely for the validation of the THC

  2. Model Scaling of Hydrokinetic Ocean Renewable Energy Systems

    Science.gov (United States)

    von Ellenrieder, Karl; Valentine, William

    2013-11-01

    Numerical simulations are performed to validate a non-dimensional dynamic scaling procedure that can be applied to subsurface and deeply moored systems, such as hydrokinetic ocean renewable energy devices. The prototype systems are moored in water 400 m deep and include: subsurface spherical buoys moored in a shear current and excited by waves; an ocean current turbine excited by waves; and a deeply submerged spherical buoy in a shear current excited by strong current fluctuations. The corresponding model systems, which are scaled based on relative water depths of 10 m and 40 m, are also studied. For each case examined, the response of the model system closely matches the scaled response of the corresponding full-sized prototype system. The results suggest that laboratory-scale testing of complete ocean current renewable energy systems moored in a current is possible. This work was supported by the U.S. Southeast National Marine Renewable Energy Center (SNMREC).
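    For orientation, here is a sketch of the scale factors implied by Froude similitude, the usual basis for model tests of wave-excited moored systems; the paper's own non-dimensional dynamic scaling procedure may differ in detail.

```python
# Sketch: Froude-similitude scale factors for a moored-system model test,
# assuming geometric scale lambda = depth_model / depth_prototype.
def froude_factors(depth_model: float, depth_proto: float) -> dict:
    lam = depth_model / depth_proto
    return {
        "length": lam,
        "velocity": lam ** 0.5,    # U_model / U_prototype
        "time": lam ** 0.5,        # e.g. wave period ratio
        "force": lam ** 3,         # with equal fluid density
    }

print(froude_factors(10.0, 400.0))   # model in 10 m depth, prototype in 400 m
```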

  3. Scale modeling of reinforced concrete structures subjected to seismic loading

    International Nuclear Information System (INIS)

    Dove, R.C.

    1983-01-01

    Reinforced concrete Category I structures are so large that the possibility of seismically testing the prototype structures under controlled conditions is essentially nonexistent. However, experimental data, from which important structural properties can be determined and against which existing and new methods of seismic analysis can be benchmarked, are badly needed. As a result, seismic experiments on scaled models are of considerable interest. In this paper, the scaling laws are developed in some detail so that assumptions and choices based on judgement can be clearly recognized and their effects discussed. The scaling laws developed are then used to design a reinforced concrete model of a Category I structure. Finally, how scaling is affected by various types of damping (viscous, structural, and Coulomb) is discussed

  4. Empirical spatial econometric modelling of small scale neighbourhood

    Science.gov (United States)

    Gerkman, Linda

    2012-07-01

    The aim of the paper is to model small-scale neighbourhood effects in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables; variables capturing small-scale neighbourhood conditions are especially hard to find. If important explanatory variables are missing from the model, and the omitted variables are both spatially autocorrelated and correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application on new house price data from Helsinki, Finland, we find the motivation for a spatial Durbin model, estimate the model, and interpret the estimates of the summary measures of impacts. The analysis shows that the model structure makes it possible to model and find small-scale neighbourhood effects when we know that they exist but lack proper variables to measure them.
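    For reference, the spatial Durbin model adds spatially lagged regressors to the spatial-lag specification; in generic notation (assumed here, not quoted from the paper):

```latex
y = \rho W y + X\beta + W X \theta + \varepsilon,
\qquad \varepsilon \sim N(0, \sigma^2 I_n)
```

    where W is the spatial weight matrix encoding the neighbourhood structure, ρ is the spatial autoregressive parameter, and θ collects the coefficients of the spatially lagged explanatory variables.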

  5. Overall feature of EAST operation space by using simple Core-SOL-Divertor model

    International Nuclear Information System (INIS)

    Hiwatari, R.; Hatayama, A.; Zhu, S.; Takizuka, T.; Tomita, Y.

    2005-01-01

    We have developed a simple Core-SOL-Divertor (C-S-D) model to investigate qualitatively the overall features of the operational space for the integrated core and edge plasma. To construct the simple C-S-D model, a simple core plasma model based on the ITER physics guidelines and a two-point SOL-divertor model are used. The simple C-S-D model is applied to the study of the EAST operational space with lower hybrid current drive experiments under various trade-offs among the basic plasma parameters. Effective methods for extending the operational space are also presented. As this study of the EAST operation space shows, the C-S-D model is a useful tool for understanding qualitatively the overall features of the plasma operation space. (author)

  6. Quantum critical scaling of fidelity in BCS-like model

    International Nuclear Information System (INIS)

    Adamski, Mariusz; Jedrzejewski, Janusz; Krokhmalskii, Taras

    2013-01-01

    We study scaling of the ground-state fidelity in neighborhoods of quantum critical points in a model of interacting spinful fermions—a BCS-like model. Due to the exact diagonalizability of the model, in one and higher dimensions, scaling of the ground-state fidelity can be analyzed numerically with great accuracy, not only for small systems but also for macroscopic ones, together with the crossover region between them. Additionally, in the one-dimensional case we have been able to derive a number of analytical formulas for fidelity and show that they accurately fit our numerical results; these results are reported in the paper. Besides regular critical points and their neighborhoods, where well-known scaling laws are obeyed, there is the multicritical point and critical points in its proximity where anomalous scaling behavior is found. We also consider scaling of fidelity in neighborhoods of critical points where fidelity oscillates strongly as the system size or the chemical potential is varied. Our results for a one-dimensional version of a BCS-like model are compared with those obtained recently by Rams and Damski in similar studies of a quantum spin chain—an anisotropic XY model in a transverse magnetic field. (paper)
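    For context, the ground-state fidelity in such studies is the modulus of the overlap of ground states at neighbouring values of the driving parameter, with critical scaling usually probed through the fidelity susceptibility; in standard (assumed) notation:

```latex
F(\lambda, \lambda+\delta) = \bigl|\langle \psi_0(\lambda) \mid \psi_0(\lambda+\delta) \rangle\bigr|,
\qquad
\chi_F = \lim_{\delta \to 0} \frac{-2 \ln F}{\delta^{2}}
```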

  7. Small-Scale Morphological Features on a Solid Surface Processed by High-Pressure Abrasive Water Jet

    Directory of Open Access Journals (Sweden)

    Can Kang

    2013-08-01

    Full Text Available Being subjected to a high-pressure abrasive water jet, solid samples will experience an essential variation of both internal stress and physical characteristics, which is closely associated with the kinetic energy attached to the abrasive particles involved in the jet stream. Here, experiments were performed, with particular emphasis being placed on the kinetic energy attenuation and turbulent features in the jet stream. At a jet pressure of 260 MPa, mean velocity and root-mean-square (RMS) velocity on two jet-stream sections were acquired by utilizing the phase Doppler anemometry (PDA) technique. A jet-cutting experiment was then carried out with Al-Mg alloy samples being cut by an abrasive water jet. Morphological features and roughness on the cut surface were quantitatively examined through scanning electron microscopy (SEM) and optical profiling techniques. The results indicate that the high-pressure water jet is characterized by remarkably high mean flow velocities and distinct velocity fluctuations. Those irregular pits and grooves on the cut surfaces indicate both the energy attenuation and the development of radial velocity components in the jet stream. When the sample is positioned at different distances from the nozzle outlet, the obtained quantitative surface roughness varies accordingly. A descriptive model highlighting the behaviors of abrasive particles in the jet-cutting process is established in light of the experimental results and correlation analysis.

  8. Small-Scale Morphological Features on a Solid Surface Processed by High-Pressure Abrasive Water Jet.

    Science.gov (United States)

    Kang, Can; Liu, Haixia

    2013-08-14

    Being subjected to a high-pressure abrasive water jet, solid samples will experience an essential variation of both internal stress and physical characteristics, which is closely associated with the kinetic energy attached to the abrasive particles involved in the jet stream. Here, experiments were performed, with particular emphasis being placed on the kinetic energy attenuation and turbulent features in the jet stream. At a jet pressure of 260 MPa, mean velocity and root-mean-square (RMS) velocity on two jet-stream sections were acquired by utilizing the phase Doppler anemometry (PDA) technique. A jet-cutting experiment was then carried out with Al-Mg alloy samples being cut by an abrasive water jet. Morphological features and roughness on the cut surface were quantitatively examined through scanning electron microscopy (SEM) and optical profiling techniques. The results indicate that the high-pressure water jet is characterized by remarkably high mean flow velocities and distinct velocity fluctuations. Those irregular pits and grooves on the cut surfaces indicate both the energy attenuation and the development of radial velocity components in the jet stream. When the sample is positioned at different distances from the nozzle outlet, the obtained quantitative surface roughness varies accordingly. A descriptive model highlighting the behaviors of abrasive particles in the jet-cutting process is established in light of the experimental results and correlation analysis.

  9. Association of high proliferation marker Ki-67 expression with DCEMR imaging features of breast: a large scale evaluation

    Science.gov (United States)

    Saha, Ashirbani; Harowicz, Michael R.; Grimm, Lars J.; Kim, Connie E.; Ghate, Sujata V.; Walsh, Ruth; Mazurowski, Maciej A.

    2018-02-01

    One of the methods widely used to measure the proliferative activity of cells in breast cancer patients is the immunohistochemical (IHC) measurement of the percentage of cells stained for nuclear antigen Ki-67. Use of Ki-67 expression as a prognostic marker is still under investigation; however, numerous clinical studies have reported an association between high Ki-67 and overall survival (OS) and disease-free survival (DFS). On the other hand, to offer a non-invasive alternative for determining Ki-67 expression, researchers have made recent attempts to study the association of Ki-67 expression with magnetic resonance (MR) imaging features of breast cancer in small cohorts. Here, this association was evaluated at a larger scale, with performance measured by the area under the receiver operating characteristic curve (AUC) of the predicted values. Our model was able to predict high versus low Ki-67 in the test set with an AUC of 0.67 (95% CI: 0.58-0.75, p<1.1e-04). Thus, a moderate strength of association between Ki-67 values and MR-extracted imaging features was demonstrated in our experiments.

  10. Scale genesis and gravitational wave in a classically scale invariant extension of the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Kubo, Jisuke [Institute for Theoretical Physics, Kanazawa University,Kanazawa 920-1192 (Japan); Yamada, Masatoshi [Department of Physics, Kyoto University,Kyoto 606-8502 (Japan); Institut für Theoretische Physik, Universität Heidelberg,Philosophenweg 16, 69120 Heidelberg (Germany)

    2016-12-01

    We assume that the origin of the electroweak (EW) scale is a gauge-invariant scalar-bilinear condensation in a strongly interacting non-abelian gauge sector, which is connected to the standard model via a Higgs portal coupling. The dynamical scale genesis appears as a phase transition at finite temperature, and it can produce a gravitational wave (GW) background in the early Universe. We find that the critical temperature of the scale phase transition lies above that of the EW phase transition but below a few hundred GeV, and that the transition is strongly first order. We calculate the spectrum of the GW background and find that the scale phase transition is strong enough for the GW background to be observable by DECIGO.

  11. Prediction models for solitary pulmonary nodules based on curvelet textural features and clinical parameters.

    Science.gov (United States)

    Wang, Jing-Jing; Wu, Hai-Feng; Sun, Tao; Li, Xia; Wang, Wei; Tao, Li-Xin; Huo, Da; Lv, Ping-Xin; He, Wen; Guo, Xiu-Hua

    2013-01-01

    Lung cancer, one of the leading causes of cancer-related deaths, usually appears as solitary pulmonary nodules (SPNs), which are hard to diagnose with the naked eye. In this paper, curvelet-based textural features and clinical parameters are used with three prediction models [a multilevel model, a least absolute shrinkage and selection operator (LASSO) regression method, and a support vector machine (SVM)] to improve the diagnosis of benign and malignant SPNs. Dimensionality reduction of the original curvelet-based textural features was achieved using principal component analysis. In addition, non-conditional logistic regression was used to find clinical predictors among demographic parameters and morphological features. The results showed that, combined with 11 clinical predictors, the accuracy rates using 12 principal components were higher than those using the original curvelet-based textural features. To evaluate the models, 10-fold cross validation and back substitution were applied; the results obtained, respectively, were 0.8549 and 0.9221 for the LASSO method, 0.9443 and 0.9831 for SVM, and 0.8722 and 0.9722 for the multilevel model. Overall, using curvelet-based textural features after dimensionality reduction together with clinical predictors, the highest accuracy rate was achieved with SVM. The method may be used as an auxiliary tool to differentiate between benign and malignant SPNs in CT images.
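    A hedged sketch of this kind of model comparison, using an L1-penalized logistic regression as a LASSO-style stand-in and an RBF SVM, with PCA to 12 components and 10-fold cross-validation as described; the data are synthetic placeholders for the curvelet features.

```python
# Sketch: comparing classifiers for benign-vs-malignant nodule features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=60, n_informative=15,
                           random_state=0)   # stand-in for curvelet features

models = {
    "LASSO-style (L1 logistic)": LogisticRegression(penalty="l1",
                                                    solver="liblinear", C=0.5),
    "SVM (RBF)": SVC(kernel="rbf", C=1.0),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), PCA(n_components=12), clf)
    scores = cross_val_score(pipe, X, y, cv=10)   # 10-fold CV, as in the paper
    print(f"{name}: accuracy = {scores.mean():.3f}")
```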

  12. A new synoptic scale resolving global climate simulation using the Community Earth System Model

    Science.gov (United States)

    Small, R. Justin; Bacmeister, Julio; Bailey, David; Baker, Allison; Bishop, Stuart; Bryan, Frank; Caron, Julie; Dennis, John; Gent, Peter; Hsu, Hsiao-ming; Jochum, Markus; Lawrence, David; Muñoz, Ernesto; diNezio, Pedro; Scheitlin, Tim; Tomas, Robert; Tribbia, Joseph; Tseng, Yu-heng; Vertenstein, Mariana

    2014-12-01

    High-resolution global climate modeling holds the promise of capturing planetary-scale climate modes and small-scale (regional and sometimes extreme) features simultaneously, including their mutual interaction. This paper discusses a new state-of-the-art high-resolution Community Earth System Model (CESM) simulation that was performed with these goals in mind. The atmospheric component was at 0.25° grid spacing, and the ocean component at 0.1°. One hundred years of "present-day" simulation were completed. Major results were that annual mean sea surface temperature (SST) in the equatorial Pacific and El Niño-Southern Oscillation variability were well simulated compared to standard-resolution models. Tropical and southern Atlantic SST also had much reduced bias compared to previous versions of the model. In addition, the high resolution of the model enabled small-scale features of the climate system to be represented, such as air-sea interaction over ocean frontal zones, mesoscale systems generated by the Rockies, and tropical cyclones. Associated single-component runs and standard-resolution coupled runs are used to help attribute the strengths and weaknesses of the fully coupled run. The high-resolution run employed 23,404 cores, cost 250 thousand processor-hours per simulated year, and achieved about two simulated years per day on the NCAR-Wyoming supercomputer "Yellowstone."

  13. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. Traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and omics data analysis.
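    The general idea of regularizing a high-dimensional covariance estimate can be sketched with a simple shrinkage stand-in (not the paper's hierarchical model): combine the sample covariance with a structured target so that the estimate stays well-conditioned when features outnumber samples.

```python
# Sketch: shrinkage-style covariance estimation for n << p data.
import numpy as np

def shrink_covariance(X: np.ndarray, lam: float = 0.2) -> np.ndarray:
    """Convex combination of the sample covariance S and a diagonal target T."""
    S = np.cov(X, rowvar=False)
    T = np.diag(np.diag(S))          # keep variances, shrink covariances to 0
    return (1.0 - lam) * S + lam * T

X = np.random.default_rng(2).normal(size=(30, 200))  # n << p, as in omics data
Sigma = shrink_covariance(X)
print(Sigma.shape, np.linalg.eigvalsh(Sigma).min() > 0)  # positive definite
```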

  14. SITE-94. Discrete-feature modelling of the Aespoe Site: 3. Predictions of hydrogeological parameters for performance assessment

    International Nuclear Information System (INIS)

    Geier, J.E.

    1996-12-01

    A 3-dimensional, discrete-feature hydrological model is developed. The model integrates structural and hydrologic data for the Aespoe site, on scales ranging from semi-regional fracture zones to individual fractures in the vicinity of the nuclear waste canisters. Predicted parameters for the near field include fracture spacing, fracture aperture, and Darcy velocity at each of forty canister deposition holes. Parameters for the far field include discharge location, Darcy velocity, effective longitudinal dispersion coefficient and head gradient, flow porosity, and flow wetted surface, for each canister source that discharges to the biosphere. Results are presented in the form of statistical summaries for a total of 42 calculation cases, which treat a set of 25 model variants in various combinations. The variants for the SITE-94 Reference Case model address conceptual and parametric uncertainty related to the site-scale hydrogeologic model and its properties, the fracture network within the repository, effective semi-regional boundary conditions for the model, and the disturbed-rock zone around the repository tunnels and shafts. Two calculation cases simulate hydrologic conditions that are predicted to occur during future glacial episodes. 30 refs

  15. Business models of sharing economy companies : exploring features responsible for sharing economy companies’ internationalization

    OpenAIRE

    Kosintceva, Aleksandra

    2016-01-01

    This paper is dedicated to sharing economy business models and the features responsible for their internationalization. The study proposes derived definitions for the concepts of "sharing economy" and "business model", and a first generic typology of sharing economy business models. The typology was created through qualitative analysis of secondary data on twenty sharing economy companies from nine different industries. The outlined categories of sharing economy business models a...

  16. Plasma and process characterization of high power magnetron physical vapor deposition with integrated plasma equipment--feature profile model

    International Nuclear Information System (INIS)

    Zhang Da; Stout, Phillip J.; Ventzek, Peter L.G.

    2003-01-01

    High power magnetron physical vapor deposition (HPM-PVD) has recently emerged for metal deposition into deep submicron features in state-of-the-art integrated circuit fabrication. However, the plasma characteristics and process mechanism are not well known. An integrated plasma equipment-feature profile modeling infrastructure has therefore been developed for HPM-PVD deposition, and it has been applied to simulating copper seed deposition with an Ar background gas for damascene metallization. The equipment-scale model is based on the hybrid plasma equipment model [M. Grapperhaus et al., J. Appl. Phys. 83, 35 (1998); J. Lu and M. J. Kushner, ibid., 89, 878 (2001)], which couples a three-dimensional Monte Carlo sputtering module within a two-dimensional fluid model. The plasma kinetics of thermalized, athermal, and ionized metals and the contributions of these species to feature deposition are resolved. A Monte Carlo technique is used to derive the angular distribution of athermal metals. Simulations show that in typical HPM-PVD processing, Ar+ is the dominant ionized species driving sputtering. Athermal metal neutrals are the dominant deposition precursors due to the operation at high target power and low pressure. The angular distribution of athermals is off-axis and more focused than that of thermal neutrals. The athermal characteristics favor sufficient and uniform deposition on the sidewall of the feature, which is the critical area in small-feature filling. In addition, athermals lead to a thick bottom coverage. An appreciable fraction (∼10%) of the metals incident on the wafer are ionized. The ionized metals also contribute to bottom deposition in the absence of sputtering. We have studied the impact of process and equipment parameters on HPM-PVD. Simulations show that target power impacts both plasma ionization and target sputtering. The Ar+ ion density increases nearly linearly with target power, different from the behavior of typical ionized PVD processing. The

  17. Anomalous scaling in an age-dependent branching model.

    Science.gov (United States)

    Keller-Schmidt, Stephanie; Tuğrul, Murat; Eguíluz, Víctor M; Hernández-García, Emilio; Klemm, Konstantin

    2015-02-01

    We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age τ as τ^(-α). Depending on the exponent α, the scaling of tree depth with tree size n displays a transition between the logarithmic scaling of random trees and algebraic growth. At the transition (α = 1) tree depth grows as (log n)^2. This anomalous scaling is in good agreement with the trend observed in the evolution of biological species, thus providing theoretical support for age-dependent speciation and associating it with the occurrence of a critical point.
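    A simplified discrete-time variant of such a model can be simulated directly; the aging and selection rules below are an illustrative approximation of the τ^(-α) branching kernel, not the paper's exact dynamics.

```python
# Sketch: grow a tree where leaves split with weight age**(-alpha),
# then inspect how mean depth grows with tree size n.
import random

def grow_tree(n_leaves: int, alpha: float, seed: int = 0) -> float:
    """Grow a binary tree to n_leaves; return the mean leaf depth."""
    random.seed(seed)
    leaves = [(0, 1)]                 # (depth, age) of each leaf
    while len(leaves) < n_leaves:
        weights = [age ** -alpha for _, age in leaves]
        i = random.choices(range(len(leaves)), weights=weights)[0]
        depth, _ = leaves.pop(i)
        leaves = [(d, a + 1) for d, a in leaves]    # all other branches age
        leaves += [(depth + 1, 1), (depth + 1, 1)]  # two fresh branches
    return sum(d for d, _ in leaves) / len(leaves)

for n in (64, 256, 1024):
    print(n, round(grow_tree(n, alpha=1.0), 2))  # slow, super-logarithmic growth
```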

  18. Modification of the large-scale features of high Reynolds number wall turbulence by passive surface obtrusions

    Energy Technology Data Exchange (ETDEWEB)

    Monty, J.P.; Lien, K.; Chong, M.S. [University of Melbourne, Department of Mechanical Engineering, Parkville, VIC (Australia); Allen, J.J. [New Mexico State University, Department of Mechanical Engineering, Las Cruces, NM (United States)

    2011-12-15

    A high Reynolds number boundary-layer wind-tunnel facility at New Mexico State University was fitted with a regularly distributed braille surface. The surface was such that braille dots were closely packed in the streamwise direction and sparsely spaced in the spanwise direction. This novel surface had an unexpected influence on the flow: the energy of the very large-scale features of wall turbulence (approximately six times the boundary-layer thickness in length) became significantly attenuated, even into the logarithmic region. To the authors' knowledge, this is the first experimental study to report a modification of 'superstructures' in a rough-wall turbulent boundary layer. The result raises the possibility that flow control through very small, passive surface roughness may be possible at high Reynolds numbers, without the prohibitive drag penalty anticipated heretofore. Evidence was also found for the uninhibited existence of the near-wall cycle, well known to smooth-wall-turbulence researchers, in the spanwise space between roughness elements. (orig.)

  19. On the scale similarity in large eddy simulation. A proposal of a new model

    International Nuclear Information System (INIS)

    Pasero, E.; Cannata, G.; Gallerano, F.

    2004-01-01

    Among the most common LES models in the literature are the eddy-viscosity-type models, in which the subgrid scale (SGS) stress tensor is related to the resolved strain rate tensor through a scalar eddy viscosity coefficient. These models are affected by three fundamental drawbacks: they are purely dissipative, i.e., they cannot account for backscatter; they assume that the principal axes of the resolved strain rate tensor and the SGS stress tensor are aligned; and they assume that a local balance exists between SGS turbulent kinetic energy production and its dissipation. Scale similarity models (SSM) were created to overcome the drawbacks of eddy-viscosity-type models. SSM models, such as those of Bardina et al. and Liu et al., assume that scales adjacent in wave-number space present similar hydrodynamic features. This similarity makes it possible to effectively relate the unresolved scales, represented by the modified Cross tensor and the modified Reynolds tensor, to the smallest resolved scales, represented by the modified Leonard tensor or by a term obtained through multiple filtering operations at different scales. The models of Bardina et al. and Liu et al. are affected, however, by a fundamental drawback: they are not dissipative enough, i.e., they cannot ensure a sufficient energy drain from the resolved scales of motion to the unresolved ones. In this paper it is shown that this drawback is due to the fact that such models do not take into account the smallest unresolved scales, where most of the dissipation of turbulent SGS energy takes place. A new scale similarity LES model that grants an adequate drain of energy from the resolved scales to the unresolved ones is presented. The SGS stress tensor is aligned with the modified Leonard tensor, and the coefficient of proportionality is expressed in terms of the trace of the modified Leonard tensor and of the SGS kinetic energy (computed by solving its balance equation). The

  20. Universal Scaling and Critical Exponents of the Anisotropic Quantum Rabi Model

    Science.gov (United States)

    Liu, Maoxin; Chesi, Stefano; Ying, Zu-Jian; Chen, Xiaosong; Luo, Hong-Gang; Lin, Hai-Qing

    2017-12-01

    We investigate the quantum phase transition of the anisotropic quantum Rabi model, in which the rotating and counterrotating terms are allowed to have different coupling strengths. The model interpolates between two known limits with distinct universal properties. Through a combination of analytic and numerical approaches, we extract the phase diagram, scaling functions, and critical exponents, which determine the universality class at finite anisotropy (identical to the isotropic limit). We also reveal other interesting features, including a superradiance-induced freezing of the effective mass and discontinuous scaling functions in the Jaynes-Cummings limit. Our findings are extended to the few-body quantum phase transitions with N > 1 spins, where we expose the same effective parameters, scaling properties, and phase diagram. Thus, a stronger form of universality is established, valid from N = 1 up to the thermodynamic limit.

  1. Macro-scale turbulence modelling for flows in porous media

    International Nuclear Information System (INIS)

    Pinson, F.

    2006-03-01

    This work deals with the macroscopic modeling of turbulence in porous media; it concerns heat exchangers and nuclear reactors as well as urban flows, etc. The objective of this study is to describe, in a homogenized way by means of a spatial average operator, turbulent flows in a solid matrix. In addition to this first operator, a statistical average operator permits handling the pseudo-random character of turbulence. The successive application of both operators allows us to derive the balance equations for the kind of flows under study. Two major issues are then highlighted: the modeling of dispersion induced by the solid matrix, and turbulence modeling at the macroscopic scale (Reynolds tensor and turbulent dispersion). To this aim, we lean on the local modeling of turbulence and more precisely on k - ε RANS models. The methodology of dispersion study, derived from volume averaging theory, is extended to turbulent flows. Its application includes the simulation, at the microscopic scale, of turbulent flows within a representative elementary volume of the porous medium. Applied to channel flows, this analysis shows that even in the turbulent regime, dispersion remains one of the dominating phenomena within the macro-scale modeling framework. A two-scale analysis of the flow allows us to understand the dominating role of the drag force in the kinetic energy transfers between scales. Transfers between the mean part and the turbulent part of the flow are formally derived. This description significantly improves our understanding of the issue of macroscopic modeling of turbulence and leads us to define the sub-filter production and the wake dissipation. A k - ε - ε_w type model is derived, based on three balance equations: for the turbulent kinetic energy, the viscous dissipation and the wake dissipation. Furthermore, a dynamical predictor for the friction coefficient is proposed. This model is then successfully applied to the study of

  2. Multi Scale Models for Flexure Deformation in Sheet Metal Forming

    Directory of Open Access Journals (Sweden)

    Di Pasquale Edmondo

    2016-01-01

    Full Text Available This paper presents the application of multi-scale techniques to the simulation of sheet metal forming using the one-step method. When a blank flows over the die radius, it undergoes a complex cycle of bending and unbending. First, we describe an original model for the prediction of residual plastic deformation and stresses in the blank section. This model, working on a scale about one hundred times smaller than the element size, has been implemented in SIMEX, a one-step sheet metal forming simulation code. The use of this multi-scale modeling technique greatly improves the accuracy of the solution. Finally, we discuss the implications of this analysis for the prediction of springback in metal forming.

  3. Scaling of Core Material in Rubble Mound Breakwater Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Liu, Z.; Troch, P.

    1999-01-01

    The permeability of the core material influences armour stability, wave run-up and wave overtopping. The main problem related to the scaling of core materials in models is that the hydraulic gradient and the pore velocity vary in space and time, which makes it impossible to arrive at a fully correct scaling. The paper presents an empirical formula for the estimation of the wave-induced pressure gradient in the core, based on measurements in models and a prototype. The formula, together with the Forchheimer equation, can be used for the estimation of pore velocities in cores. The paper proposes that the diameter of the core material in models be chosen in such a way that the Froude scale law holds for a characteristic pore velocity. The characteristic pore velocity is chosen as the average velocity in the area of the core most critical with respect to porous flow. Finally, the method is demonstrated...
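    The Forchheimer relation referred to above links the hydraulic gradient I to the pore (filter) velocity u; in the standard form assumed here:

```latex
I = a\,u + b\,u\,\lvert u \rvert
```

    with coefficients a and b depending on grain size, porosity, and fluid viscosity. Given the estimated pressure gradient, solving this quadratic for u yields the characteristic pore velocity to which the Froude scale law is then applied.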

  4. Materials and nanosystems : interdisciplinary computational modeling at multiple scales

    International Nuclear Information System (INIS)

    Huber, S.E.

    2014-01-01

    Over the last five decades, computer simulation and numerical modeling have become valuable tools complementing the traditional pillars of science, experiment and theory. In this thesis, several applications of computer-based simulation and modeling shall be explored in order to address problems and open issues in chemical and molecular physics. Attention shall be paid especially to the different degrees of interrelatedness and multiscale-flavor, which may - at least to some extent - be regarded as inherent properties of computational chemistry. In order to do so, a variety of computational methods are used to study features of molecular systems which are of relevance in various branches of science and which correspond to different spatial and/or temporal scales. Proceeding from small to large measures, first, an application in astrochemistry, the investigation of spectroscopic and energetic aspects of carbonic acid isomers shall be discussed. In this respect, very accurate and hence at the same time computationally very demanding electronic structure methods like the coupled-cluster approach are employed. These studies are followed by the discussion of an application in the scope of plasma-wall interaction which is related to nuclear fusion research. There, the interactions of atoms and molecules with graphite surfaces are explored using density functional theory methods. The latter are computationally cheaper than coupled-cluster methods and thus allow the treatment of larger molecular systems, but yield less accuracy and especially reduced error control at the same time. The subsequently presented exploration of surface defects at low-index polar zinc oxide surfaces, which are of interest in materials science and surface science, is another surface science application. The necessity to treat even larger systems of several hundreds of atoms requires the use of approximate density functional theory methods. Thin gold nanowires consisting of several thousands of

  5. Main features and possibilities of the new SCALE 6.2 module for calculation of sensitivity and uncertainty by sampling: SAMPLER; Principales características y posibilidades del nuevo módulo de SCALE 6.2 para cálculo de sensibilidad e incertidumbre por muestreo: SAMPLER

    Energy Technology Data Exchange (ETDEWEB)

    Mesado, C.; Miro, R.; Barrachina, T.; Verdu, G.

    2014-07-01

    Given the importance of sensitivity and uncertainty calculations in engineering, and especially in the nuclear field, this paper presents the main features of the new module, called SAMPLER, included in the new version of SCALE 6.2 (currently in beta 3). This module allows the calculation of uncertainties in a wide range of cross sections, neutronic parameters, compositions and physical parameters. However, the calculation of sensitivity is not available in the beta 3 release. Even so, this module can be helpful for participants in the benchmark proposed by the Expert Group on Uncertainty Analysis in Modelling (UAM-LWR), as well as for analysts in general. (Author)

  6. Validity of the Neuromuscular Recovery Scale: a measurement model approach.

    Science.gov (United States)

    Velozo, Craig; Moorhouse, Michael; Ardolino, Elizabeth; Lorenz, Doug; Suter, Sarah; Basso, D Michele; Behrman, Andrea L

    2015-08-01

    Objective: To determine how well the Neuromuscular Recovery Scale (NRS) items fit the Rasch, 1-parameter, partial-credit measurement model. Design: Confirmatory factor analysis (CFA) and principal components analysis (PCA) of residuals were used to determine dimensionality. The Rasch, 1-parameter, partial-credit rating scale model was used to determine rating scale structure, person/item fit, point-measure item correlations, item discrimination, and measurement precision. Setting: Seven NeuroRecovery Network clinical sites. Participants: Outpatients (N=188) with spinal cord injury. Interventions: Not applicable. Main Outcome Measure: NRS. Results: While the NRS met 1 of 3 CFA criteria, the PCA revealed that the Rasch measurement dimension explained 76.9% of the variance. Ten of 11 items and 91% of the patients fit the Rasch model, with 9 of 11 items showing high discrimination. Sixty-nine percent of the ratings met criteria. The items showed a logical item-difficulty order, with Stand retraining as the easiest item and Walking as the most challenging item. The NRS showed no ceiling or floor effects and separated the sample into almost 5 statistically distinct strata; individuals with an American Spinal Injury Association Impairment Scale (AIS) D classification showed the most ability, and those with an AIS A classification showed the least ability. Items not meeting the rating scale criteria appear to be related to the low frequency counts. Conclusions: The NRS met many of the Rasch model criteria for construct validity. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  7. A genome-scale RNA-interference screen identifies RRAS signaling as a pathologic feature of Huntington's disease.

    Directory of Open Access Journals (Sweden)

    John P Miller

    Full Text Available A genome-scale RNAi screen was performed in a mammalian cell-based assay to identify modifiers of mutant huntingtin toxicity. Ontology analysis of suppressor data identified processes previously implicated in Huntington's disease, including proteolysis, glutamate excitotoxicity, and mitochondrial dysfunction. In addition to established mechanisms, the screen identified multiple components of the RRAS signaling pathway as loss-of-function suppressors of mutant huntingtin toxicity in human and mouse cell models. Loss-of-function in orthologous RRAS pathway members also suppressed motor dysfunction in a Drosophila model of Huntington's disease. Abnormal activation of RRAS and a downstream effector, RAF1, was observed in cellular models and a mouse model of Huntington's disease. We also observe co-localization of RRAS and mutant huntingtin in cells and in mouse striatum, suggesting that activation of R-Ras may occur through protein interaction. These data indicate that mutant huntingtin exerts a pathogenic effect on this pathway that can be corrected at multiple intervention points including RRAS, FNTA/B, PIN1, and PLK1. Consistent with these results, chemical inhibition of farnesyltransferase can also suppress mutant huntingtin toxicity. These data suggest that pharmacological inhibition of RRAS signaling may confer therapeutic benefit in Huntington's disease.

  8. Data Field Modeling and Spectral-Spatial Feature Fusion for Hyperspectral Data Classification.

    Science.gov (United States)

    Liu, Da; Li, Jianxun

    2016-12-16

    Classification is a significant subject in hyperspectral remote sensing image processing. This study proposes a spectral-spatial feature fusion algorithm for the classification of hyperspectral images (HSI). Unlike existing spectral-spatial classification methods, the proposed method takes into consideration the influences and interactions of the surroundings on each measured pixel. Data field theory was employed as the mathematical realization of the field theory concept in physics, and both the spectral and spatial domains of HSI were considered as data fields; the inherent dependency of interacting pixels was thereby modeled. Using data field modeling, spatial and spectral features were transformed into a unified radiation form and further fused into a new feature by using a linear model. In contrast to current spectral-spatial classification methods, which usually simply stack spectral and spatial features together, the proposed method builds the inner connection between the spectral and spatial features, and explores the hidden information that contributes to classification. Therefore, new information is included for classification. The final classification result was obtained using a random forest (RF) classifier. The proposed method was tested with the University of Pavia and Indian Pines datasets, two well-known standard hyperspectral datasets. The experimental results demonstrate that the proposed method achieves higher classification accuracies than the traditional approaches.

  9. Use of genome-scale microbial models for metabolic engineering

    DEFF Research Database (Denmark)

    Patil, Kiran Raosaheb; Åkesson, M.; Nielsen, Jens

    2004-01-01

    Metabolic engineering serves as an integrated approach to design new cell factories by providing rational design procedures and valuable mathematical and experimental tools. Mathematical models have an important role in phenotypic analysis, but can also be used for the design of optimal metabolic network structures. The major challenge for metabolic engineering in the post-genomic era is to broaden its design methodologies to incorporate genome-scale biological data. Genome-scale stoichiometric models of microorganisms represent a first step in this direction.
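    The standard computation performed with genome-scale stoichiometric models is flux-balance analysis: maximize a target flux subject to steady-state mass balance. A toy example follows; the three-reaction network is invented for illustration.

```python
# Sketch: flux-balance analysis (FBA) on a toy stoichiometric model.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (metabolites x reactions): A_in, A -> B, B_out
S = np.array([[ 1, -1,  0],    # metabolite A
              [ 0,  1, -1]])   # metabolite B
bounds = [(0, 10), (0, None), (0, None)]   # uptake flux capped at 10 units

# Maximize flux through the output reaction v3 (linprog minimizes, so -1)
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)   # steady state S v = 0 forces v1 = v2 = v3 = 10
```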

  10. Wind Farm Wake Models From Full Scale Data

    DEFF Research Database (Denmark)

    Knudsen, Torben; Bak, Thomas

    2012-01-01

    This investigation is part of the EU FP7 project "Distributed Control of Large-Scale Offshore Wind Farms". The overall goal of this project is to develop wind farm controllers giving power set points to individual turbines in the farm in order to minimise mechanical loads and optimise power. One ... on real full-scale data. The modelling is based on the so-called effective wind speed. It is shown that there is a wake for a wind direction range of up to 20 degrees. Further, when accounting for the wind direction, it is shown that the two model structures considered can both fit the experimental data...
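    As a point of reference for what a fitted wake model looks like, here is the classic Jensen (park) wake-deficit model; it is a generic textbook model, not one of the paper's two structures.

```python
# Sketch: Jensen wake-deficit model for the wind speed behind a turbine.
def jensen_deficit(u_inf: float, ct: float, rotor_d: float,
                   x_down: float, k: float = 0.05) -> float:
    """Velocity deficit a distance x_down behind the rotor
    (k ~ 0.05 is a typical offshore wake-decay constant)."""
    a = 0.5 * (1.0 - (1.0 - ct) ** 0.5)   # axial induction from thrust coefficient
    r0 = rotor_d / 2.0
    return u_inf * 2.0 * a * (r0 / (r0 + k * x_down)) ** 2

u_wake = 8.0 - jensen_deficit(u_inf=8.0, ct=0.8, rotor_d=80.0, x_down=400.0)
print(f"waked wind speed: {u_wake:.2f} m/s")
```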

  11. Ground-water solute transport modeling using a three-dimensional scaled model

    International Nuclear Information System (INIS)

    Crider, S.S.

    1987-01-01

    Scaled models are used extensively in current hydraulic research on sediment transport and solute dispersion in free surface flows (rivers, estuaries), but are neglected in current ground-water model research. Thus, an investigation was conducted to test the efficacy of a three-dimensional scaled model of solute transport in ground water. No previous results from such a model have been reported. Experiments performed on uniform scaled models indicated that some historical problems (e.g., construction and scaling difficulties; disproportionate capillary rise in model) were partly overcome by using simple model materials (sand, cement and water), by restricting model application to selective classes of problems, and by physically controlling the effect of the model capillary zone. Results from these tests were compared with mathematical models. Model scaling laws were derived for ground-water solute transport and used to build a three-dimensional scaled model of a ground-water tritium plume in a prototype aquifer on the Savannah River Plant near Aiken, South Carolina. Model results compared favorably with field data and with a numerical model. Scaled models are recommended as a useful additional tool for prediction of ground-water solute transport

  12. Atomic scale modelling of materials of the nuclear fuel cycle

    International Nuclear Information System (INIS)

    Bertolus, M.

    2011-10-01

    This document, written to obtain the French accreditation to supervise research, presents the research I have conducted at CEA Cadarache since 1999 on the atomic-scale modelling of non-metallic materials involved in the nuclear fuel cycle: host materials for radionuclides from nuclear waste (apatites), fuel (in particular uranium dioxide) and ceramic cladding materials (silicon carbide). These are complex materials at the frontier of modelling capabilities, since they contain heavy elements (rare earths or actinides), exhibit complex structures or chemical compositions, and/or are subjected to irradiation effects: creation of point defects and fission products, amorphization. The objective of my studies is to bring further insight into the physics and chemistry of the elementary processes involved, using atomic-scale modelling and its coupling with higher-scale models and experimental studies. This work is organised in two parts: on the one hand, the development, adaptation and implementation of atomic-scale modelling methods and validation of the approximations used; on the other hand, the application of these methods to the investigation of nuclear materials under irradiation. This document contains a synthesis of the studies performed, orientations for future research, a detailed resume and a list of publications and communications. (author)

  13. Scaling and percolation in the small-world network model

    Energy Technology Data Exchange (ETDEWEB)

    Newman, M. E. J. [Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501 (United States); Watts, D. J. [Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501 (United States)

    1999-12-01

    In this paper we study the small-world network model of Watts and Strogatz, which mimics some aspects of the structure of networks of social interactions. We argue that there is one nontrivial length-scale in the model, analogous to the correlation length in other systems, which is well-defined in the limit of infinite system size and which diverges continuously as the randomness in the network tends to zero, giving a normal critical point in this limit. This length-scale governs the crossover from large- to small-world behavior in the model, as well as the number of vertices in a neighborhood of given radius on the network. We derive the value of the single critical exponent controlling behavior in the critical region and the finite size scaling form for the average vertex-vertex distance on the network, and, using series expansion and Pade approximants, find an approximate analytic form for the scaling function. We calculate the effective dimension of small-world graphs and show that this dimension varies as a function of the length-scale on which it is measured, in a manner reminiscent of multifractals. We also study the problem of site percolation on small-world networks as a simple model of disease propagation, and derive an approximate expression for the percolation probability at which a giant component of connected vertices first forms (in epidemiological terms, the point at which an epidemic occurs). The typical cluster radius satisfies the expected finite size scaling form with a cluster size exponent close to that for a random graph. All our analytic results are confirmed by extensive numerical simulations of the model. (c) 1999 The American Physical Society.

  14. Scaling and percolation in the small-world network model

    International Nuclear Information System (INIS)

    Newman, M. E. J.; Watts, D. J.

    1999-01-01

    In this paper we study the small-world network model of Watts and Strogatz, which mimics some aspects of the structure of networks of social interactions. We argue that there is one nontrivial length-scale in the model, analogous to the correlation length in other systems, which is well-defined in the limit of infinite system size and which diverges continuously as the randomness in the network tends to zero, giving a normal critical point in this limit. This length-scale governs the crossover from large- to small-world behavior in the model, as well as the number of vertices in a neighborhood of given radius on the network. We derive the value of the single critical exponent controlling behavior in the critical region and the finite size scaling form for the average vertex-vertex distance on the network, and, using series expansion and Pade approximants, find an approximate analytic form for the scaling function. We calculate the effective dimension of small-world graphs and show that this dimension varies as a function of the length-scale on which it is measured, in a manner reminiscent of multifractals. We also study the problem of site percolation on small-world networks as a simple model of disease propagation, and derive an approximate expression for the percolation probability at which a giant component of connected vertices first forms (in epidemiological terms, the point at which an epidemic occurs). The typical cluster radius satisfies the expected finite size scaling form with a cluster size exponent close to that for a random graph. All our analytic results are confirmed by extensive numerical simulations of the model. (c) 1999 The American Physical Society
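    The crossover described above is easy to reproduce numerically; a sketch using networkx's Watts-Strogatz generator follows (system sizes and rewiring probabilities are illustrative).

```python
# Sketch: average vertex-vertex distance in the Watts-Strogatz model,
# showing the drop from large-world to small-world behavior as the
# rewiring probability p (the "randomness") increases.
import networkx as nx

n, k = 1000, 4                        # vertices; each joined to k ring neighbors
for p in (0.0001, 0.001, 0.01, 0.1):  # rewiring probability
    G = nx.watts_strogatz_graph(n, k, p, seed=42)
    if nx.is_connected(G):            # rewiring can occasionally disconnect G
        print(f"p = {p}: <d> = {nx.average_shortest_path_length(G):.1f}")
```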

  15. Short-Term Solar Irradiance Forecasting Model Based on Artificial Neural Network Using Statistical Feature Parameters

    Directory of Open Access Journals (Sweden)

    Hongshan Zhao

    2012-05-01

    Full Text Available Short-term solar irradiance forecasting (STSIF) is of great significance for the optimal operation and power prediction of grid-connected photovoltaic (PV) plants. However, STSIF is very complex to handle due to the random and nonlinear characteristics of solar irradiance under changeable weather conditions. Artificial Neural Networks (ANN) are suitable for STSIF modeling and many research works on this topic have been presented, but the conciseness and robustness of the existing models still need to be improved. After discussing the relation between weather variations and irradiance, the characteristics of the statistical feature parameters of irradiance under different weather conditions are figured out. A novel ANN model using statistical feature parameters (ANN-SFP) for STSIF is proposed in this paper. The input vector is reconstructed with several statistical feature parameters of irradiance and ambient temperature. Thus sufficient information can be effectively extracted from relatively few inputs and the model complexity is reduced. The model structure is determined by cross-validation (CV), and the Levenberg-Marquardt algorithm (LMA) is used for network training. Simulations are carried out to validate and compare the proposed model with the conventional ANN model using historical data series (ANN-HDS), and the results indicate that the forecast accuracy is obviously improved under variable weather conditions.
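    The core idea, replacing a raw historical series with a handful of summary statistics as network inputs, can be sketched as follows; the feature set, toy data, and network size are assumptions, not the paper's configuration.

```python
# Sketch: an ANN on statistical feature parameters (ANN-SFP style).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
days = rng.gamma(2.0, 150.0, size=(500, 24))   # synthetic hourly irradiance (W/m^2)

def sfp(window: np.ndarray) -> np.ndarray:
    """Statistical feature parameters of the first 23 hours of a day."""
    return np.array([window.mean(), window.std(), window.max(),
                     np.abs(np.diff(window)).mean()])   # mean ramp rate

X = np.array([sfp(day[:23]) for day in days])
y = days[:, 23]                                 # forecast the final hour (toy target)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                     random_state=0).fit(X, y)
print("training R^2:", round(model.score(X, y), 3))
```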

  16. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large-scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small-scale (10-100 m) habitat variability on large-scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
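    The distinction drawn above can be made explicit. Ecological diffusion places the habitat-dependent motility coefficient inside both derivatives, unlike the Fickian form (notation assumed):

```latex
\frac{\partial u}{\partial t} = \nabla^{2}\bigl[\mu(x)\,u\bigr]
\quad \text{(ecological diffusion)}
\qquad \text{vs.} \qquad
\frac{\partial u}{\partial t} = \nabla \cdot \bigl[\mu(x)\,\nabla u\bigr]
\quad \text{(Fickian diffusion)}
```

    Because μ(x) sits inside the Laplacian, populations accumulate where motility is low, which is what lets the model represent habitat-dependent residence times.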

  17. Pharmacokinetic-Pharmacodynamic Modeling in Pediatric Drug Development, and the Importance of Standardized Scaling of Clearance.

    Science.gov (United States)

    Germovsek, Eva; Barker, Charlotte I S; Sharland, Mike; Standing, Joseph F

    2018-04-19

    Pharmacokinetic/pharmacodynamic (PKPD) modeling is important in the design and conduct of clinical pharmacology research in children. During drug development, PKPD modeling and simulation should underpin rational trial design and facilitate extrapolation to investigate efficacy and safety. The application of PKPD modeling to optimize dosing recommendations and therapeutic drug monitoring is also increasing, and PKPD model-based dose individualization will become a core feature of personalized medicine. Following extensive progress on pediatric PK modeling, a greater emphasis now needs to be placed on PD modeling to understand age-related changes in drug effects. This paper discusses the principles of PKPD modeling in the context of pediatric drug development, summarizing how important PK parameters, such as clearance (CL), are scaled with size and age, and highlights a standardized method for CL scaling in children. One standard scaling method would facilitate comparison of PK parameters across multiple studies, thus increasing the utility of existing PK models and facilitating optimal design of new studies.
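    The standardized clearance scaling referred to is commonly theory-based allometry with a maturation term; a widely used form (given here as background, not as a quotation from the paper) is:

```latex
CL_i = CL_{\mathrm{std}} \times \left( \frac{WT_i}{70\ \mathrm{kg}} \right)^{0.75} \times \frac{\mathrm{PMA}_i^{\,h}}{\mathrm{PMA}_i^{\,h} + \mathrm{TM}_{50}^{\,h}}
```

    where WT is body weight, PMA is postmenstrual age, TM50 is the age at half-mature clearance, and h is a Hill coefficient.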

  18. Use of a handheld low-cost sensor to explore the effect of urban design features on local-scale spatial and temporal air quality variability.

    Science.gov (United States)

    Miskell, Georgia; Salmond, Jennifer A; Williams, David E

    2018-04-01

    Portable low-cost instruments have been validated and used to measure ambient nitrogen dioxide (NO2) at multiple sites over a small urban area with 20-min time resolution. We use these results combined with land use regression (LUR) and rank correlation methods to explore the effects of traffic, urban design features, and local meteorology and atmospheric chemistry on small-scale spatio-temporal variations. We measured NO2 at 45 sites around the downtown area of Vancouver, BC, in spring 2016, and constructed four different models: i) a model based on averaging concentrations observed at each site over the whole measurement period, and separate temporal models for ii) morning, iii) midday, and iv) afternoon. Redesign of the temporal models using the average model predictors as constants gave three 'hybrid' models that used both spatial and temporal variables. These accounted for approximately 50% of the total variation with mean absolute error ± 5 ppb. Ranking sites by concentration and by change in concentration across the day showed a shift of high NO2 concentrations across the central city from morning to afternoon. Locations could be identified in which NO2 concentration was determined by the geography of the site, and others in which the concentration changed markedly from morning to afternoon, indicating the importance of temporal controls. Rank correlation results complemented LUR in identifying significant urban design variables that impacted NO2 concentration. High variability across a relatively small space was partially described by predictor variables related to traffic (bus stop density, speed limits, traffic counts, distance to traffic lights), atmospheric chemistry (ozone, dew point), and environment (land use, trees). A high-density network recording continuously would be needed to fully capture local variations. Copyright © 2017 Elsevier B.V. All rights reserved.
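    A minimal land-use-regression sketch of the kind described, with NO2 regressed on urban design predictors across monitoring sites; the predictors and data are synthetic placeholders.

```python
# Sketch: land use regression (LUR) of NO2 on urban design variables.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 45                                     # one row per monitoring site
X = np.column_stack([
    rng.poisson(8, n),                     # bus stop density
    rng.normal(40, 8, n),                  # speed limit (km/h)
    rng.normal(2000, 500, n),              # traffic count
    rng.normal(30, 10, n),                 # tree cover (%)
])
no2 = 5 + 0.9 * X[:, 0] + 0.004 * X[:, 2] - 0.1 * X[:, 3] + rng.normal(0, 3, n)

lur = LinearRegression().fit(X, no2)
print("R^2:", round(lur.score(X, no2), 2), "coefs:", lur.coef_.round(3))
```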

  19. Comparison of the Features of EPUB E-Book and SCORM E-Learning Content Model

    Science.gov (United States)

    Chang, Hsuan-Pu; Hung, Jason C.

    2018-01-01

    E-books have greatly evolved in their presentation and functions; however, their features for education still need to be investigated, because people who are accustomed to printed books may consider and approach e-books in the same way as they do printed ones. Therefore, the authors compared the EPUB e-book content model with the SCORM…

  20. Independent screening for single-index hazard rate models with ultrahigh dimensional features

    DEFF Research Database (Denmark)

    Gorst-Rasmussen, Anders; Scheike, Thomas

    2013-01-01

    can be viewed as the natural survival equivalent of correlation screening. We state conditions under which the method admits the sure screening property within a class of single-index hazard rate models with ultrahigh dimensional features and describe the generally detrimental effect of censoring...
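
    The screening idea summarized above ranks features by a marginal association with the censored outcome before any joint modelling; a generic illustration (ranking by the univariate Cox score at β = 0, a simplification rather than the authors' exact statistic):

        import numpy as np

        def cox_score_at_zero(x, time, event):
            """Univariate Cox partial-likelihood score at beta = 0:
            sum over events of (x_i - mean of x over the risk set)."""
            order = np.argsort(time)
            x, event = x[order], event[order]
            n = len(x)
            # Risk set at each event time is a suffix of the sorted sample
            risk_mean = np.cumsum(x[::-1])[::-1] / np.arange(n, 0, -1)
            return np.sum((x - risk_mean)[event == 1])

        def screen(X, time, event, keep):
            """Keep the `keep` features with the largest |marginal score|;
            assumes the columns of X are standardized."""
            scores = [abs(cox_score_at_zero(X[:, j], time, event))
                      for j in range(X.shape[1])]
            return np.argsort(scores)[::-1][:keep]

        # Synthetic ultrahigh-dimensional survival data: feature 0 is active
        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 5000))
        time = rng.exponential(np.exp(-0.8 * X[:, 0]))
        event = (rng.random(200) < 0.7).astype(int)  # crude ~30% censoring
        print(screen(X, time, event, keep=10))  # feature 0 should usually appear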

  1. Observations and models of star formation in the tidal features of interacting galaxies

    International Nuclear Information System (INIS)

    Wallin, J.F.; Schombert, J.M.; Struck-Marcell, C.

    1990-01-01

    Multi-color surface photometry (BVri) is presented for the tidal features in a sample of interacting galaxies. Large color variations are found between the morphological components and within the individual components. The blue colors in the primary and the tidal features are most dramatic in B-V, and not in V-i, indicating that star formation, rather than metallicity or age, dominates the colors. Color variations between components are largest in systems shortly after the interaction begins and diminish to a very low level in systems that have merged. Photometric models for interacting systems are presented which suggest that a weak burst of star formation in the tidal features could cause the observed color distributions. Dynamical models indicate that compression occurs during the development of tidal features, increasing the local density by a factor of between 1.5 and 5. Assuming this density increase can be related to the star formation rate by a Schmidt law, the density increases observed in the dynamical models may be responsible for the variations in color seen in some of the interacting systems. Limitations of the dynamical models are also discussed
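
    To make the final step concrete: under a Schmidt law the star formation rate density scales as a power of the gas density, so the quoted compression factors translate directly into star formation enhancements. Taking the classic exponent n ≈ 2 (the paper may adopt a different value), a worked example:

        $ \rho_{SFR} \propto \rho_{gas}^{\,n} \quad\Rightarrow\quad \frac{SFR_{tidal}}{SFR_{0}} = f^{\,n} $

    so compression factors f = 1.5-5 with n = 2 boost the star formation rate by a factor of roughly 2-25, consistent with a weak burst that blues B-V without requiring changes in metallicity or age.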

  2. The consensus in the two-feature two-state one-dimensional Axelrod model revisited

    Science.gov (United States)

    Biral, Elias J. P.; Tilles, Paulo F. C.; Fontanari, José F.

    2015-04-01

    The Axelrod model for the dissemination of culture exhibits a rich spatial distribution of cultural domains, which depends on the values of the two model parameters: F, the number of cultural features, and q, the common number of states each feature can assume. In the one-dimensional model with F = q = 2, which is closely related to the constrained voter model, Monte Carlo simulations indicate the existence of multicultural absorbing configurations in which at least one macroscopic domain coexists with a multitude of microscopic ones in the thermodynamic limit. However, rigorous analytical results for the infinite system starting from the configuration where all cultures are equally likely show convergence to only monocultural or consensus configurations. Here we show that this disagreement is due simply to the order in which the time-asymptotic limit and the thermodynamic limit are taken in the simulations. In addition, we show how the consensus-only result can be derived using Monte Carlo simulations of finite chains.
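
    A minimal Monte Carlo sketch of the F = q = 2 one-dimensional Axelrod dynamics discussed here (finite chain, random sequential updates; the parameter choices are illustrative, and a fixed update budget is used instead of detecting the absorbing state, for brevity):

        import numpy as np

        def axelrod_1d(n_sites=50, F=2, q=2, steps=200_000, seed=0):
            """Random sequential dynamics of the 1D Axelrod model."""
            rng = np.random.default_rng(seed)
            culture = rng.integers(q, size=(n_sites, F))  # uniform initial state
            for _ in range(steps):
                i = int(rng.integers(n_sites))
                j = i + int(rng.choice([-1, 1]))  # random nearest neighbour
                if j < 0 or j >= n_sites:
                    continue
                shared = culture[i] == culture[j]
                overlap = shared.mean()
                # Interact with probability equal to the cultural overlap,
                # but only if the pair is neither identical nor fully distinct
                if 0 < overlap < 1 and rng.random() < overlap:
                    k = int(rng.choice(np.flatnonzero(~shared)))
                    culture[i, k] = culture[j, k]  # copy one differing trait
            # Domains = maximal runs of identical neighbouring culture vectors
            boundaries = np.any(culture[1:] != culture[:-1], axis=1).sum()
            return boundaries + 1

        print("cultural domains:", axelrod_1d())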

  3. Comparison of void strengthening in fcc and bcc metals: Large-scale atomic-level modelling

    International Nuclear Information System (INIS)

    Osetsky, Yu.N.; Bacon, D.J.

    2005-01-01

    Strengthening due to voids can be a significant radiation effect in metals. Treating this with the elasticity theory of dislocations is difficult when the atomic structure of the obstacle and the dislocation is influential. In this paper, we report results of large-scale atomic-level modelling of edge dislocation-void interaction in fcc (copper) and bcc (iron) metals. Voids of up to 5 nm diameter were studied over the temperature range from 0 to 600 K. We demonstrate that atomistic modelling is able to reveal important effects that are beyond the continuum approach. Some arise from features of the dislocation core and crystal structure; others involve dislocation climb and temperature effects

  4. Genome-scale modeling of the protein secretory machinery in yeast

    DEFF Research Database (Denmark)

    Feizi, Amir; Österlund, Tobias; Petranovic, Dina

    2013-01-01

    The protein secretory machinery in Eukarya is involved in post-translational modifications (PTMs) and sorting of the secretory and many transmembrane proteins. While the secretory machinery has been well studied using classic reductionist approaches, a holistic view of its complex nature is lacking. ... Here, we present the first genome-scale model for the yeast secretory machinery which captures the knowledge generated through more than 50 years of research. The model is based on the concept of a Protein Specific Information Matrix (PSIM: characterized by seven PTM features). An algorithm...

  5. Moving contact lines: linking molecular dynamics and continuum-scale modelling.

    Science.gov (United States)

    Smith, Edward R; Theodorakis, Panagiotis E; Craster, Richard V; Matar, Omar K

    2018-05-04

    Despite decades of research, the modelling of moving contact lines has remained a formidable challenge in fluid dynamics whose resolution will impact numerous industrial, biological, and daily-life applications. On the one hand, molecular dynamics (MD) simulation has the ability to provide unique insight into the microscopic details that determine the dynamic behavior of the contact line, which is not possible with either continuum-scale simulations or experiments. On the other hand, continuum-based models provide the link to the macroscopic description of the system. In this Feature Article, we explore the complex range of physical factors, including the presence of surfactants, which govern the contact line motion through MD simulations. We also discuss links between continuum- and molecular-scale modelling, and highlight the opportunities for future developments in this area.

  6. Scaling model for high-aspect-ratio microballoon direct-drive implosions at short laser wavelengths

    International Nuclear Information System (INIS)

    Schirmann, D.; Juraszek, D.; Lane, S.M.; Campbell, E.M.

    1992-01-01

    A scaling model for hot spherical ablative implosions in direct-drive mode is presented. The model results have been compared with experiments from LLE, ILE, and LLNL. Reduction of the neutron yield due to illumination nonuniformities is taken into account by assuming that the neutron emission is cut off when the gas shock wave reflected off the center meets the incoming pusher, i.e., at a time when the probability of shell breakup is greatly enhanced. The main advantage of this semiempirical scaling model is that it elucidates the principal features of these simple implosions and permits one to quickly estimate the performance of a high-aspect-ratio direct-drive target illuminated by short-wavelength laser light. (Author)

  7. Two-dimensional divertor modeling and scaling laws

    International Nuclear Information System (INIS)

    Catto, P.J.; Connor, J.W.; Knoll, D.A.

    1996-01-01

    Two-dimensional numerical models of divertors contain large numbers of dimensionless parameters that must be varied to investigate all operating regimes of interest. To simplify the task and gain insight into divertor operation, we employ similarity techniques to investigate whether model systems of equations plus boundary conditions in the steady state admit scaling transformations that lead to useful divertor similarity scaling laws. A short mean free path neutral-plasma model of the divertor region below the x-point is adopted in which all perpendicular transport is due to the neutrals. We illustrate how the results can be used to benchmark large computer simulations by employing a modified version of UEDGE which contains a neutral fluid model. (orig.)

  8. Active Learning of Classification Models with Likert-Scale Feedback.

    Science.gov (United States)

    Xue, Yanbing; Hauskrecht, Milos

    2017-01-01

    Annotation of classification data by humans can be a time-consuming and tedious process. Finding ways to reduce the annotation effort is critical for building classification models in practice and for applying them to a variety of classification tasks. In this paper, we develop a new active learning framework that combines two strategies to reduce the annotation effort. First, it relies on label uncertainty information obtained from the human in the form of Likert-scale feedback. Second, it uses active learning to annotate examples with the greatest expected change. We propose a Bayesian approach to calculate the expectation and an incremental SVM solver to reduce the solver's time complexity. We show that the combination of our active learning strategy and Likert-scale feedback can learn classification models more rapidly and with a smaller number of labeled instances than methods that rely on either Likert-scale labels or active learning alone.
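
    A compressed sketch of the two combined strategies; note that plain min-margin uncertainty sampling stands in here for the authors' expected-change criterion, a batch SVM refit stands in for their incremental solver, and the Likert-to-weight mapping is hypothetical:

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(2)
        X = rng.normal(size=(300, 5))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic ground truth

        likert_to_weight = {1: 0.2, 2: 0.4, 3: 0.6, 4: 0.8, 5: 1.0}

        def oracle(i):
            """Simulated annotator: label plus Likert confidence (1-5),
            more confident far from the true boundary."""
            margin = abs(X[i, 0] + 0.5 * X[i, 1])
            return y[i], int(np.clip(1 + 2 * margin, 1, 5))

        labeled = list(rng.choice(300, size=10, replace=False))
        weights = [likert_to_weight[oracle(i)[1]] for i in labeled]

        for _ in range(20):  # active learning loop
            clf = SVC(kernel="linear").fit(X[labeled], y[labeled],
                                           sample_weight=weights)
            pool = np.setdiff1d(np.arange(300), labeled)
            # Uncertainty sampling: query the pool point nearest the boundary
            query = int(pool[np.argmin(np.abs(clf.decision_function(X[pool])))])
            label, conf = oracle(query)
            labeled.append(query)
            weights.append(likert_to_weight[conf])

        print("labeled examples:", len(labeled), "accuracy:", clf.score(X, y))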

  9. Brain Transcriptome Profiles in Mouse Model Simulating Features of Post-traumatic Stress Disorder

    Science.gov (United States)

    2015-02-28

    analyses of DEGs suggested possible roles in anxiety-related behavioral responses, synaptic plasticity, neurogenesis, inflammation, obesity... Behavioral evaluation of mouse model: We established [29] a rodent model manifesting PTSD-like behavioral features. We believe that, because the stressor... hippocampus (HC) and medial prefrontal cortex (MPFC) play primary roles in fear learning and memory, and thus, may contribute to the behavioral

  10. One-fiftieth scale model studies of 40-by 80-foot and 80-by 120-foot wind tunnel complex at NASA Ames Research Center

    Science.gov (United States)

    Schmidt, Gene I.; Rossow, Vernon J.; Vanaken, Johannes M.; Parrish, Cynthia L.

    1987-01-01

    The features of a 1/50-scale model of the National Full-Scale Aerodynamics Complex are first described. An overview is then given of some results from the various tests conducted with the model to aid in the design of the full-scale facility. It was found that the model tunnel accurately simulated many of the operational characteristics of the full-scale circuits. Some characteristics predicted by the model were, however, noted to differ from previous full-scale results by about 10%.

  11. Multi-scale Modeling of Plasticity in Tantalum.

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Hojun [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Battaile, Corbett Chandler. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carroll, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weinberger, Christopher [Drexel Univ., Philadelphia, PA (United States)

    2015-12-01

    In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, together with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single and polycrystalline tantalum, and compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behaviors are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model with experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications. Furthermore, direct
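
    The kink-pair constitutive equations referred to above are typically thermally activated flow rules; a common Kocks-type parameterization of this family (illustrative of the form, not necessarily the report's exact fit) is:

        $ \dot{\gamma} = \dot{\gamma}_0 \exp\!\left[ -\frac{\Delta H_0}{k_B T}\left( 1 - \left( \frac{\tau^*}{\tau_0} \right)^{p} \right)^{q} \right], \qquad 0 < p \le 1 \le q \le 2 $

    where τ* is the thermal (effective) part of the resolved shear stress, ΔH₀ the kink-pair formation enthalpy, and τ₀ the Peierls stress at 0 K. This form reproduces the strong rise of bcc yield stress at low temperature and high strain rate that the CP-FEM model inherits.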

  12. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10^6 cores and sustained performance over ∼2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  13. Systematic construction of kinetic models from genome-scale metabolic networks.

    Directory of Open Access Journals (Sweden)

    Natalie J Stanford

    Full Text Available The quantitative effects of environmental and genetic perturbations on metabolism can be studied in silico using kinetic models. We present a strategy for large-scale model construction based on a logical layering of data such as reaction fluxes, metabolite concentrations, and kinetic constants. The resulting models contain realistic standard rate laws and plausible parameters, adhere to the laws of thermodynamics, and reproduce a predefined steady state. These features have not been simultaneously achieved by previous workflows. We demonstrate the advantages and limitations of the workflow by translating the yeast consensus metabolic network into a kinetic model. Despite crudely selected data, the model shows realistic control behaviour, a stable dynamic, and realistic response to perturbations in extracellular glucose concentrations. The paper concludes by outlining how new data can continuously be fed into the workflow and how iterative model building can assist in directing experiments.

  14. Systematic Construction of Kinetic Models from Genome-Scale Metabolic Networks

    Science.gov (United States)

    Smallbone, Kieran; Klipp, Edda; Mendes, Pedro; Liebermeister, Wolfram

    2013-01-01

    The quantitative effects of environmental and genetic perturbations on metabolism can be studied in silico using kinetic models. We present a strategy for large-scale model construction based on a logical layering of data such as reaction fluxes, metabolite concentrations, and kinetic constants. The resulting models contain realistic standard rate laws and plausible parameters, adhere to the laws of thermodynamics, and reproduce a predefined steady state. These features have not been simultaneously achieved by previous workflows. We demonstrate the advantages and limitations of the workflow by translating the yeast consensus metabolic network into a kinetic model. Despite crudely selected data, the model shows realistic control behaviour, a stable dynamic, and realistic response to perturbations in extracellular glucose concentrations. The paper concludes by outlining how new data can continuously be fed into the workflow and how iterative model building can assist in directing experiments. PMID:24324546
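
    The "realistic standard rate laws" used in workflows of this kind are typically convenience (modular) rate laws; for a single-substrate, single-product reaction S ⇌ P the reversible form reduces to (shown as an illustration of the family, not necessarily the exact law chosen here):

        $ v = E \cdot \frac{k_{+}\,(s/K_S) - k_{-}\,(p/K_P)}{1 + s/K_S + p/K_P} $

    with enzyme level E, concentrations s and p, and Michaelis constants K_S and K_P. Thermodynamic consistency, as required by the workflow, is imposed by constraining k₊/k₋ through the reaction's equilibrium constant (the Haldane relationship).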

  15. Optogenetic stimulation of a meso-scale human cortical model

    Science.gov (United States)

    Selvaraj, Prashanth; Szeri, Andrew; Sleigh, Jamie; Kirsch, Heidi

    2015-03-01

    Neurological phenomena like sleep and seizures depend not only on the activity of individual neurons, but on the dynamics of neuron populations as well. Meso-scale models of cortical activity provide a means to study neural dynamics at the level of neuron populations. Additionally, they offer a safe and economical way to test the effects and efficacy of stimulation techniques on the dynamics of the cortex. Here, we use a physiologically relevant meso-scale model of the cortex to study the hypersynchronous activity of neuron populations during epileptic seizures. The model consists of a set of stochastic, highly non-linear partial differential equations. Next, we use optogenetic stimulation to control seizures in a hyperexcited cortex, and to induce seizures in a normally functioning cortex. The high spatial and temporal resolution this method offers makes a strong case for the use of optogenetics in treating meso-scale cortical disorders such as epileptic seizures. We use bifurcation analysis to investigate the effect of optogenetic stimulation in the meso-scale model, and its efficacy in suppressing the non-linear dynamics of seizures.

  16. Small-Scale Helicopter Automatic Autorotation : Modeling, Guidance, and Control

    NARCIS (Netherlands)

    Taamallah, S.

    2015-01-01

    Our research objective consists in developing a model-based, automatic safety recovery system for a small-scale helicopter Unmanned Aerial Vehicle (UAV) in autorotation, i.e. an engine-OFF flight condition, that safely flies and lands the helicopter at a pre-specified ground location. In pursuit

  17. Phenomenological aspects of no-scale inflation models

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics, King’s College London, WC2R 2LS London (United Kingdom); Theory Division, CERN, CH-1211 Geneva 23 (Switzerland); Garcia, Marcos A.G. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V. [George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Texas A&M University, College Station, 77843 Texas (United States); Astroparticle Physics Group, Houston Advanced Research Center (HARC), Mitchell Campus, Woodlands, 77381 Texas (United States); Academy of Athens, Division of Natural Sciences, 28 Panepistimiou Avenue, 10679 Athens (Greece); Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States)

    2015-10-01

    We discuss phenomenological aspects of inflationary models with a no-scale supergravity Kähler potential motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index n_s and the tensor-to-scalar ratio r that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type m_0 = B_0 = A_0 = 0, of the CMSSM type with universal A_0 and m_0 ≠ 0 at a high scale, and of the mSUGRA type with A_0 = B_0 + m_0 boundary conditions at the high input scale. These may be combined with a non-trivial gauge kinetic function that generates gaugino masses m_{1/2} ≠ 0, or one may have a pure gravity mediation scenario where trilinear terms and gaugino masses are generated through anomalies. We also discuss inflaton decays and reheating, showing possible decay channels for the inflaton when it is either an untwisted matter field or a Kähler modulus. Reheating is very efficient if a matter field inflaton is directly coupled to MSSM fields, and both candidates lead to sufficient reheating in the presence of a non-trivial gauge kinetic function.
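
    For reference, the Starobinsky-like predictions that these no-scale models approach are, at leading order in the number of e-folds N*:

        $ n_s \simeq 1 - \frac{2}{N_*}, \qquad r \simeq \frac{12}{N_*^{2}} $

    so for N* ≈ 55 one finds n_s ≈ 0.964 and r ≈ 0.004, well within current CMB constraints.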

  18. Modeling and simulation in tribology across scales: An overview

    DEFF Research Database (Denmark)

    Vakis, A.I.; Yastrebov, V.A.; Scheibert, J.

    2018-01-01

    theories at the nano- and micro-scales, as well as multiscale and multiphysics aspects for analytical and computational models relevant to applications spanning a variety of sectors, from automotive to biotribology and nanotechnology. Significant effort is still required to account for complementary...

  19. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) CAD drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  20. Vegetable parenting practices scale: Item response modeling analyses

    Science.gov (United States)

    Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...
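
    A representative polytomous item response formulation for such scales is the partial credit model; the abstract does not state which member of the multidimensional family was used, so take this as illustrative:

        $ P(X_i = k \mid \theta) = \frac{\exp \sum_{j=0}^{k} (\theta - \delta_{ij})}{\sum_{m=0}^{M_i} \exp \sum_{j=0}^{m} (\theta - \delta_{ij})}, \qquad k = 0, \dots, M_i $

    with the j = 0 term defined as zero, θ the parent's latent vegetable-parenting propensity, and δ_ij the step difficulties of item i; item fit is then judged by comparing observed and model-implied category frequencies.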

  1. Scale-invariant inclusive spectra in a dual model

    International Nuclear Information System (INIS)

    Chikovani, Z.E.; Jenkovsky, L.L.; Martynov, E.S.

    1979-01-01

    One-particle inclusive distributions at large transverse momentum p_tr are shown to scale as E dσ/d³p ≈ p_tr^(-N) (1 - x_tr)^(1+N/2) ln p_tr in a dual model with Mandelstam analyticity, if the Regge trajectories are asymptotically logarithmic

  2. Learning in an estimated medium-scale DSGE model

    Czech Academy of Sciences Publication Activity Database

    Slobodyan, Sergey; Wouters, R.

    2012-01-01

    Roč. 36, č. 1 (2012), s. 26-46 ISSN 0165-1889 R&D Projects: GA ČR(CZ) GCP402/11/J018 Institutional support: PRVOUK-P23 Keywords: constant-gain adaptive learning * medium-scale DSGE model * DSGE-VAR Subject RIV: AH - Economics Impact factor: 0.807, year: 2012

  3. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Science.gov (United States)

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes it difficult to assess the accuracy and uncertainty of measurement results. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body, which requires a detailed mapping of the CMM’s behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length errors by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052

  4. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes it difficult to assess the accuracy and uncertainty of measurement results. Therefore, error compensation is not standardized, unlike for other, simpler instruments. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body, which requires a detailed mapping of the CMM’s behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length errors by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.
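
    A minimal numeric illustration of the vectorial composition idea (per-axis length errors projected onto the measured length); the linear error coefficients and the RSS combination below are hypothetical, not the paper's calibrated model:

        import numpy as np

        # Hypothetical per-axis relative length errors of a CMM (dimensionless),
        # e.g. from axis calibration: error = coeff * distance travelled on axis
        axis_coeff = np.array([8e-6, 5e-6, 12e-6])  # x, y, z

        def length_error(p_start, p_end):
            """Compose per-axis length errors along a measured length."""
            d = np.asarray(p_end, float) - np.asarray(p_start, float)
            L = np.linalg.norm(d)
            per_axis = axis_coeff * np.abs(d)     # error on each axis (mm)
            projected = per_axis @ np.abs(d) / L  # projection onto the length
            rss = np.linalg.norm(per_axis)        # uncorrelated combination
            return L, projected, rss

        L, proj, rss = length_error([0, 0, 0], [300.0, 200.0, 100.0])
        print(f"L = {L:.3f} mm, projected error = {proj*1e3:.2f} um, "
              f"RSS = {rss*1e3:.2f} um")

    The residual spread between such a composed estimate and observed errors is what the paper folds into the uncertainty budget.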

  5. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    Science.gov (United States)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales, and allows different processes to be simulated at different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models, and from parent models to child models, in a computationally efficient manner. The feedback mechanism is simple and flexible, and it ensures that the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale without requiring live coupling of the models. The method allows multiple groundwater flow and transport processes to be modelled using separate groundwater models built for the appropriate spatial and temporal scales, within a stochastic framework, while removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.

  6. Hum-mPLoc 3.0: prediction enhancement of human protein subcellular localization through modeling the hidden correlations of gene ontology and functional domain features.

    Science.gov (United States)

    Zhou, Hang; Yang, Yang; Shen, Hong-Bin

    2017-03-15

    Protein subcellular localization prediction has been an important research topic in computational biology over the last decade. Various automatic methods have been proposed to predict locations for large scale protein datasets, where statistical machine learning algorithms are widely used for model construction. A key step in these predictors is encoding the amino acid sequences into feature vectors. Many studies have shown that features extracted from biological domains, such as gene ontology and functional domains, can be very useful for improving the prediction accuracy. However, domain knowledge usually results in redundant features and high-dimensional feature spaces, which may degrade the performance of machine learning models. In this paper, we propose a new amino acid sequence-based human protein subcellular location prediction approach, Hum-mPLoc 3.0, which covers 12 human subcellular localizations. The sequences are represented by multi-view complementary features, i.e. context vocabulary annotation-based gene ontology (GO) terms, peptide-based functional domains, and residue-based statistical features. To systematically reflect the structural hierarchy of the domain knowledge bases, we propose a novel feature representation protocol denoted HCM (Hidden Correlation Modeling), which creates more compact and discriminative feature vectors by modeling the hidden correlations between annotation terms. Experimental results on four benchmark datasets show that HCM improves prediction accuracy by 5-11% and F1 by 8-19% compared with conventional GO-based methods. A large-scale application of Hum-mPLoc 3.0 on the whole human proteome reveals protein co-localization preferences in the cell. Availability: www.csbio.sjtu.edu.cn/bioinf/Hum-mPLoc3/. Contact: hbshen@sjtu.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  7. Disappearing scales in carps: re-visiting Kirpichnikov's model on the genetics of scale pattern formation.

    Directory of Open Access Journals (Sweden)

    Laura Casas

    Full Text Available The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids was predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene called 'N' has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype, due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin, or a hybrid with Asian parent(s), showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale pattern) to deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect), probably due to a concerted action of multiple pathways involved in scale formation.

  8. Disappearing scales in carps: Re-visiting Kirpichnikov's model on the genetics of scale pattern formation

    KAUST Repository

    Casas, Laura; Szűcs, Réka; Vij, Shubha; Goh, Chin Heng; Kathiresan, Purushothaman; Németh, Sándor; Jeney, Zsigmond; Bercsényi, Miklós; Orbán, László

    2013-01-01

    The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids has been predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene called 'N' has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude x nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genoty