Bayesian structure learning for Markov Random Fields with a spike and slab prior
Chen, Y.; Welling, M.; de Freitas, N.; Murphy, K.
2012-01-01
In recent years a number of methods have been developed for automatically learning the (sparse) connectivity structure of Markov Random Fields. These methods are mostly based on L1-regularized optimization, which has a number of disadvantages such as the inability to assess model uncertainty and
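The spike-and-slab construction named in the title can be illustrated with a small numerical sketch. This is not the paper's MRF formulation; it uses a common continuous relaxation in which the spike is a narrow Gaussian at zero (edge absent) and the slab a wide Gaussian (edge present), and the mixture weight and scales below are assumed values chosen for illustration.

```python
import numpy as np

def spike_slab_logpdf(w, pi=0.2, sigma_slab=2.0, sigma_spike=1e-3):
    # Continuous spike-and-slab relaxation: mixture of a narrow Gaussian (spike)
    # and a wide Gaussian (slab), with slab (inclusion) weight pi.
    def norm_logpdf(x, s):
        return -0.5 * (x / s) ** 2 - np.log(s * np.sqrt(2.0 * np.pi))
    return np.logaddexp(np.log(pi) + norm_logpdf(w, sigma_slab),
                        np.log(1.0 - pi) + norm_logpdf(w, sigma_spike))

def inclusion_prob(w, pi=0.2, sigma_slab=2.0, sigma_spike=1e-3):
    # Posterior responsibility of the slab component, i.e. the probability
    # that a weight of size w corresponds to a real edge.
    def norm_logpdf(x, s):
        return -0.5 * (x / s) ** 2 - np.log(s * np.sqrt(2.0 * np.pi))
    a = np.log(pi) + norm_logpdf(w, sigma_slab)
    b = np.log(1.0 - pi) + norm_logpdf(w, sigma_spike)
    return np.exp(a - np.logaddexp(a, b))
```

Near zero the spike dominates (the edge is pruned), while a weight of order one is attributed almost entirely to the slab; this is the mechanism that yields sparsity with an explicit posterior over structure, unlike an L1 penalty.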
Iterated random walks with shape prior
DEFF Research Database (Denmark)
Pujadas, Esmeralda Ruiz; Kjer, Hans Martin; Piella, Gemma
2016-01-01
We propose a new framework for image segmentation using random walks where a distance shape prior is combined with a region term. The shape prior is weighted by a confidence map to reduce the influence of the prior in high-gradient areas, and the region term is computed with k-means to estimate the parametric probability density function. Then, random walks is performed iteratively, aligning the prior with the current segmentation in every iteration. We tested the proposed approach with natural and medical images and compared it with the latest techniques with random walks and shape priors. The experiments suggest that this method gives promising results for medical and natural images.
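The iterate-and-align loop described above can be sketched in 1D. Everything here (the k-means-style region term, the distance-based prior shape, the confidence map) is a simplified stand-in for the paper's components, with illustrative parameter values.

```python
import numpy as np

def region_term(img):
    # Tiny two-means clustering of intensities: probability of the bright cluster
    c0, c1 = img.min(), img.max()
    for _ in range(10):
        assign = np.abs(img - c1) < np.abs(img - c0)
        c0 = img[~assign].mean() if (~assign).any() else c0
        c1 = img[assign].mean() if assign.any() else c1
    return assign.astype(float)

def shape_prior(n, center, radius):
    # Distance-based prior: 1 inside the template, decaying linearly outside
    d = np.abs(np.arange(n) - center) - radius
    return np.clip(1.0 - np.maximum(d, 0.0) / radius, 0.0, 1.0)

def iterated_segmentation(img, radius, iters=5):
    seg = region_term(img) > 0.5                 # initial segmentation
    grad = np.abs(np.gradient(img))
    conf = 1.0 / (1.0 + grad)                    # confidence low near strong edges
    for _ in range(iters):
        # Re-align the prior with the current segmentation's centroid
        center = np.average(np.arange(img.size), weights=seg + 1e-9)
        prior = shape_prior(img.size, center, radius)
        score = conf * prior + (1.0 - conf) * region_term(img)
        seg = score > 0.5
    return seg

img = np.zeros(100)
img[40:60] = 1.0
img += 0.05 * np.random.default_rng(0).standard_normal(100)
seg = iterated_segmentation(img, radius=10)
```

The confidence weighting implements the idea quoted above: where the gradient is strong, the data (region) term dominates; in flat regions the aligned shape prior fills in.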
Random template placement and prior information
International Nuclear Information System (INIS)
Roever, Christian
2010-01-01
In signal detection problems, one is usually faced with the task of searching a parameter space for peaks in the likelihood function which indicate the presence of a signal. Random searches have proven to be very efficient as well as easy to implement, compared, e.g., to searches along regular grids in parameter space. Knowledge of the parameterised shape of the signal searched for adds structure to the parameter space, i.e., there are usually regions that require a dense search while in other regions a coarser search is sufficient. On the other hand, prior information identifies the regions in which a search will actually be promising or is likely to be in vain. Defining specific figures of merit allows one to combine both template metric and prior distribution and devise optimal sampling schemes over the parameter space. We show an example related to the gravitational wave signal from a binary inspiral event. Here the template metric and prior information are particularly contradictory, since signals from low-mass systems tolerate the least mismatch in parameter space while high-mass systems are far more likely, as they imply a greater signal-to-noise ratio (SNR) and hence are detectable to greater distances. The derived sampling strategy is implemented in a Markov chain Monte Carlo (MCMC) algorithm, where it improves convergence.
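A minimal sketch of sampling from a combined figure of merit with Metropolis-Hastings over a single mass-like parameter. The power-law forms for the template-density and prior terms are assumptions chosen to mimic the described tension (dense templates needed at low mass, higher prior weight at high mass), not the paper's actual metric.

```python
import math
import random

def log_metric_density(m):
    # Template density: low-mass signals tolerate less mismatch, so more
    # templates are needed there (assumed power law, not the real metric)
    return -2.5 * math.log(m)

def log_prior(m):
    # Detectability prior: high-mass systems give larger SNR and are
    # detectable to greater distances (assumed power law)
    return 1.5 * math.log(m)

def log_target(m):
    if not (1.0 <= m <= 100.0):
        return float("-inf")
    return log_metric_density(m) + log_prior(m)   # combined figure of merit

def metropolis(n, x0=10.0, step=5.0, seed=1):
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        prop = x + rng.uniform(-step, step)
        if math.log(rng.random() + 1e-300) < log_target(prop) - log_target(x):
            x = prop
        out.append(x)
    return out

samples = metropolis(20000)   # target ~ 1/m on [1, 100]; median near 10
```

With these exponents the two terms partially cancel, leaving a broad target that a random-walk sampler covers easily; this is the kind of compromise density the figures of merit are meant to produce.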
Prior information in structure estimation
Czech Academy of Sciences Publication Activity Database
Kárný, Miroslav; Nedoma, Petr; Khailova, Natalia; Pavelková, Lenka
2003-01-01
Roč. 150, č. 6 (2003), s. 643-653 ISSN 1350-2379 R&D Projects: GA AV ČR IBS1075102; GA AV ČR IBS1075351; GA ČR GA102/03/0049 Institutional research plan: CEZ:AV0Z1075907 Keywords : prior knowledge * structure estimation * autoregressive models Subject RIV: BC - Control Systems Theory Impact factor: 0.745, year: 2003 http://library.utia.cas.cz/separaty/historie/karny-0411258.pdf
Neutrino mass priors for cosmology from random matrices
Long, Andrew J.; Raveri, Marco; Hu, Wayne; Dodelson, Scott
2018-02-01
Cosmological measurements of structure are placing increasingly strong constraints on the sum of the neutrino masses, Σ mν, through Bayesian inference. Because these constraints depend on the choice for the prior probability π (Σ mν), we argue that this prior should be motivated by fundamental physical principles rather than the ad hoc choices that are common in the literature. The first step in this direction is to specify the prior directly at the level of the neutrino mass matrix Mν, since this is the parameter appearing in the Lagrangian of the particle physics theory. Thus by specifying a probability distribution over Mν, and by including the known squared mass splittings, we predict a theoretical probability distribution over Σ mν that we interpret as a Bayesian prior probability π (Σ mν). Assuming a basis-invariant probability distribution on Mν, also known as the anarchy hypothesis, we find that π (Σ mν) peaks close to the smallest Σ mν allowed by the measured mass splittings, roughly 0.06 eV (0.1 eV) for normal (inverted) ordering, due to the phenomenon of eigenvalue repulsion in random matrices. We consider three models for neutrino mass generation: Dirac, Majorana, and Majorana via the seesaw mechanism; differences in the predicted priors π (Σ mν) allow for the possibility of having indications about the physical origin of neutrino masses once sufficient experimental sensitivity is achieved. We present fitting functions for π (Σ mν), which provide a simple means for applying these priors to cosmological constraints on the neutrino masses or marginalizing over their impact on other cosmological parameters.
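The eigenvalue repulsion invoked above can be checked by Monte Carlo. A Gaussian orthogonal ensemble stands in here for the basis-invariant ("anarchy") mass-matrix distribution; the matrix size, trial count, and the 0.05 near-degeneracy threshold are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
trials, n = 20000, 3

# GOE-like proxy for a basis-invariant mass matrix: symmetrized Gaussian entries
a = rng.standard_normal((trials, n, n))
m = (a + a.transpose(0, 2, 1)) / 2.0
eigs = np.sort(np.linalg.eigvalsh(m), axis=1)
gaps = np.diff(eigs, axis=1)                     # nearest-neighbour spacings

# Repulsion: near-degenerate eigenvalue pairs are much rarer than for
# independently drawn values with comparable spread
indep = np.sort(rng.standard_normal((trials, n)), axis=1)
frac_goe = float((gaps < 0.05).mean())
frac_ind = float((np.diff(indep, axis=1) < 0.05).mean())
```

The spacing density of the symmetric ensemble vanishes linearly at zero, while independent draws have a finite density there; it is this suppression of degenerate eigenvalues that pushes the implied prior on the mass sum toward the smallest value the measured splittings allow.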
Prior Sensitivity Analysis in Default Bayesian Structural Equation Modeling.
van Erp, Sara; Mulder, Joris; Oberski, Daniel L
2017-11-27
Bayesian structural equation modeling (BSEM) has recently gained popularity because it enables researchers to fit complex models and solve some of the issues often encountered in classical maximum likelihood estimation, such as nonconvergence and inadmissible solutions. An important component of any Bayesian analysis is the prior distribution of the unknown model parameters. Often, researchers rely on default priors, which are constructed in an automatic fashion without requiring substantive prior information. However, the prior can have a serious influence on the estimation of the model parameters, which affects the mean squared error, bias, coverage rates, and quantiles of the estimates. In this article, we investigate the performance of three different default priors: noninformative improper priors, vague proper priors, and empirical Bayes priors, with the latter being novel in the BSEM literature. Based on a simulation study, we find that these three default BSEM methods may perform very differently, especially with small samples. A careful prior sensitivity analysis is therefore needed when performing a default BSEM analysis. For this purpose, we provide a practical step-by-step guide for practitioners for conducting a prior sensitivity analysis in default BSEM. Our recommendations are illustrated using a well-known case study from the structural equation modeling literature, and all code for conducting the prior sensitivity analysis is available in the online supplemental materials. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
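The small-sample prior sensitivity the article warns about can be seen even in a toy conjugate normal-mean model (not an SEM; the data and prior variances below are made up): three "default" priors of differing vagueness give clearly different posterior means.

```python
import numpy as np

def posterior_mean_var(y, prior_var):
    # Conjugate normal model: y_i ~ N(theta, 1), prior theta ~ N(0, prior_var)
    n = len(y)
    post_var = 1.0 / (n + 1.0 / prior_var)
    return post_var * n * np.mean(y), post_var

y = np.array([0.4, 1.1, 0.7, 0.9, 0.5])      # small sample: the prior matters
m_tight, _ = posterior_mean_var(y, 0.01)     # sharply informative default
m_unit, _ = posterior_mean_var(y, 1.0)       # vague proper default
m_flat, _ = posterior_mean_var(y, 1e6)       # near-improper default
```

With five observations the posterior mean moves from essentially zero (tight prior) to the sample mean (near-flat prior); a sensitivity analysis simply repeats the fit over such a grid of defaults and reports the spread.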
Bayesian Inference for Structured Spike and Slab Priors
DEFF Research Database (Denmark)
Andersen, Michael Riis; Winther, Ole; Hansen, Lars Kai
2014-01-01
Sparse signal recovery addresses the problem of solving underdetermined linear inverse problems subject to a sparsity constraint. We propose a novel prior formulation, the structured spike and slab prior, which allows the incorporation of a priori knowledge of the sparsity pattern by imposing a spatial...
Identification of subsurface structures using electromagnetic data and shape priors
Energy Technology Data Exchange (ETDEWEB)
Tveit, Svenn, E-mail: svenn.tveit@uni.no [Uni CIPR, Uni Research, Bergen 5020 (Norway); Department of Mathematics, University of Bergen, Bergen 5020 (Norway); Bakr, Shaaban A., E-mail: shaaban.bakr1@gmail.com [Department of Mathematics, Faculty of Science, Assiut University, Assiut 71516 (Egypt); Uni CIPR, Uni Research, Bergen 5020 (Norway); Lien, Martha, E-mail: martha.lien@octio.com [Uni CIPR, Uni Research, Bergen 5020 (Norway); Octio AS, Bøhmergaten 44, Bergen 5057 (Norway); Mannseth, Trond, E-mail: trond.mannseth@uni.no [Uni CIPR, Uni Research, Bergen 5020 (Norway); Department of Mathematics, University of Bergen, Bergen 5020 (Norway)
2015-03-01
We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a Gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of a kernel function, which is application dependent. We argue for using the conditionally positive definite kernel, which is shown to have computational advantages over the commonly applied Gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.
Energy Technology Data Exchange (ETDEWEB)
Liu, Jiamin; Hoffman, Joanne; Zhao, Jocelyn; Yao, Jianhua; Lu, Le; Kim, Lauren; Turkbey, Evrim B.; Summers, Ronald M., E-mail: rms@nih.gov [Imaging Biomarkers and Computer-aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center Building, 10 Room 1C224 MSC 1182, Bethesda, Maryland 20892-1182 (United States)
2016-07-15
Purpose: To develop an automated system for mediastinal lymph node detection and station mapping for chest CT. Methods: The contextual organs, trachea, lungs, and spine are first automatically identified to locate the region of interest (ROI) (mediastinum). The authors employ shape features derived from Hessian analysis, local object scale, and circular transformation that are computed per voxel in the ROI. Eight more anatomical structures are simultaneously segmented by multiatlas label fusion. Spatial priors are defined as the relative multidimensional distance vectors corresponding to each structure. Intensity, shape, and spatial prior features are integrated and parsed by a random forest classifier for lymph node detection. The detected candidates are then segmented by the following curve evolution process. Texture features are computed on the segmented lymph nodes and a support vector machine committee is used for final classification. For lymph node station labeling, based on the segmentation results of the above anatomical structures, the textual definitions of mediastinal lymph node map according to the International Association for the Study of Lung Cancer are converted into patient-specific color-coded CT image, where the lymph node station can be automatically assigned for each detected node. Results: The chest CT volumes from 70 patients with 316 enlarged mediastinal lymph nodes are used for validation. For lymph node detection, their system achieves 88% sensitivity at eight false positives per patient. For lymph node station labeling, 84.5% of lymph nodes are correctly assigned to their stations. Conclusions: Multiple-channel shape, intensity, and spatial prior features aggregated by a random forest classifier improve mediastinal lymph node detection on chest CT. Using the location information of segmented anatomic structures from the multiatlas formulation enables accurate identification of lymph node stations.
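The spatial-prior features described above (relative multidimensional distance vectors to each segmented structure) reduce to a simple computation. The structure centroids and voxel coordinates below are hypothetical; in the actual pipeline these vectors would be concatenated with the intensity and shape channels before being handed to the random forest.

```python
import numpy as np

# Hypothetical structure centroids (voxel coordinates) from a multiatlas step
centroids = {"trachea": (40, 50, 30), "spine": (40, 80, 30), "aorta": (35, 55, 28)}

def spatial_prior_features(voxels, centroids):
    """Relative distance vectors from each voxel to each segmented structure."""
    v = np.asarray(voxels, dtype=float)                       # (N, 3)
    feats = [v - np.asarray(c, dtype=float) for c in centroids.values()]
    return np.concatenate(feats, axis=1)                      # (N, 3 * n_structures)

voxels = [(42, 52, 31), (10, 10, 10)]
F = spatial_prior_features(voxels, centroids)
```

Encoding position relative to anatomy rather than absolute coordinates is what lets a single classifier generalize across patients with different fields of view.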
Structure of NCI Cooperative Groups Program Prior to NCTN
Learn how the National Cancer Institute’s Cooperative Groups Program was structured prior to its being replaced by NCI’s National Clinical Trials Network (NCTN). The NCTN gives funds and other support to cancer research organizations to conduct cancer clinical trials.
18 CFR 415.51 - Prior non-conforming structures.
2010-04-01
... 18 Conservation of Power and Water Resources 2 2010-04-01 2010-04-01 false Prior non-conforming structures. 415.51 Section 415.51 Conservation of Power and Water Resources DELAWARE RIVER BASIN COMMISSION... damaged by any means, including a flood, to the extent of 50 percent or more of its market value at that...
A scale-free structure prior for graphical models with applications in functional genomics.
Directory of Open Access Journals (Sweden)
Paul Sheridan
The problem of reconstructing large-scale gene regulatory networks from gene expression data has garnered considerable attention in bioinformatics over the past decade, with the graphical modeling paradigm having emerged as a popular framework for inference. Analysis in a full Bayesian setting is contingent upon the assignment of a so-called structure prior, a probability distribution on networks, encoding a priori biological knowledge either in the form of supplemental data or high-level topological features. A key topological consideration is that a wide range of cellular networks are approximately scale-free, meaning that the fraction, p(k), of nodes in a network with degree k is roughly described by a power-law with exponent between 2 and 3. The standard practice, however, is to utilize a random structure prior, which favors networks with binomially distributed degree distributions. In this paper, we introduce a scale-free structure prior for graphical models based on the formula for the probability of a network under a simple scale-free network model. Unlike the random structure prior, its scale-free counterpart requires a node labeling as a parameter. In order to use this prior for large-scale network inference, we design a novel Metropolis-Hastings sampler for graphical models that includes a node labeling as a state space variable. In a simulation study, we demonstrate that the scale-free structure prior outperforms the random structure prior at recovering scale-free networks while retaining the ability to recover random networks. We then estimate a gene association network from gene expression data taken from a breast cancer tumor study, showing that the scale-free structure prior recovers hubs, including the previously unknown hub SLC39A6, which is a zinc transporter that has been implicated in the spread of breast cancer to the lymph nodes. Our analysis of the breast cancer expression data underscores the value of the scale
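The contrast between a scale-free structure prior and the binomial-degree random prior can be sketched with simplified stand-ins. The (d_i + 1)^(-gamma) form below is an illustrative surrogate for the paper's prior (which additionally involves a node labeling), not its exact formula.

```python
import math

def degrees(n, edges):
    d = [0] * n
    for i, j in edges:
        d[i] += 1
        d[j] += 1
    return d

def log_prior_scale_free(n, edges, gamma=2.5):
    # Illustrative surrogate: P(G) proportional to prod_i (d_i + 1)^(-gamma),
    # which rewards heavy-tailed (hub-containing) degree sequences.
    return -gamma * sum(math.log(di + 1) for di in degrees(n, edges))

def log_prior_random(n, edges, p=0.1):
    # Erdos-Renyi prior: each of the n*(n-1)/2 possible edges is present
    # independently with probability p, so only the edge count matters.
    m, total = len(edges), n * (n - 1) // 2
    return m * math.log(p) + (total - m) * math.log(1.0 - p)

# Hub graph (star) vs. an even chain with the same number of edges
n = 8
star = [(0, i) for i in range(1, 8)]
chain = [(i, i + 1) for i in range(7)]
```

The star and the chain have identical probability under the random prior, but the scale-free surrogate scores the hub graph higher, which is exactly the bias toward hubs such as SLC39A6 that the abstract reports.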
Random walks with shape prior for cochlea segmentation in ex vivo μCT
DEFF Research Database (Denmark)
Ruiz Pujadas, Esmeralda; Kjer, Hans Martin; Piella, Gemma
2016-01-01
Purpose: Cochlear implantation is a safe and effective surgical procedure to restore hearing in deaf patients. However, the level of restoration achieved may vary due to differences in anatomy, implant type and surgical access. In order to reduce the variability of the surgical outcomes, we propose a new framework for cochlea segmentation in ex vivo μCT images using random walks where a distance-based shape prior is combined with a region term estimated by a Gaussian mixture model. The prior is also weighted by a confidence map to adjust its influence according to the strength of the image contour. Random walks is performed iteratively, and the prior mask is aligned in every iteration. Results: We tested the proposed approach in ten μCT data sets and compared it with other random walks-based segmentation techniques such as guided random walks (Eslami et al. in Med Image Anal 17...
Phase II prospective randomized trial of weight loss prior to radical prostatectomy.
Henning, Susanne M; Galet, Colette; Gollapudi, Kiran; Byrd, Joshua B; Liang, Pei; Li, Zhaoping; Grogan, Tristan; Elashoff, David; Magyar, Clara E; Said, Jonathan; Cohen, Pinchas; Aronson, William J
2017-12-04
Obesity is associated with poorly differentiated and advanced prostate cancer and increased mortality. In preclinical models, caloric restriction delays prostate cancer progression and prolongs survival. We sought to determine if weight loss (WL) in men with prostate cancer prior to radical prostatectomy affects tumor apoptosis and proliferation, and whether WL affects other metabolic biomarkers. In this Phase II prospective trial, overweight and obese men scheduled for radical prostatectomy were randomized to a 5-8 week WL program consisting of standard structured energy-restricted meal plans (1200-1500 kcal/day) and physical activity or to a control group. The primary endpoint was apoptotic index in the radical prostatectomy malignant epithelium. Secondary endpoints were proliferation (Ki67) in the radical prostatectomy tissue, body weight, body mass index (BMI), waist to hip ratio, body composition, and serum PSA, insulin, triglyceride, cholesterol, testosterone, estradiol, leptin, adiponectin, interleukin 6, interleukin 8, insulin-like growth factor 1, and IGF binding protein 1. In total, 23 patients were randomized to the WL intervention and 21 patients to the control group. Subjects in the intervention group had significantly more weight loss (WL: -3.7 ± 0.5 kg; Control: -1.6 ± 0.5 kg; p = 0.007) than the control group and total fat mass was significantly reduced (WL: -2.1 ± 0.4; Control: 0.1 ± 0.3; p = 0.015). There was no significant difference in apoptotic or proliferation index between the groups. Among the other biomarkers, triglyceride and insulin levels were significantly decreased in the WL group compared with the control group. In summary, this short-term WL program prior to radical prostatectomy resulted in significantly more WL in the intervention vs. the control group and was accompanied by significant reductions in body fat mass, circulating triglycerides, and insulin. However, no significant changes were observed in malignant
Newman, William G; Payne, Katherine; Tricker, Karen; Roberts, Stephen A; Fargher, Emily; Pushpakom, Sudeep; Alder, Jane E; Sidgwick, Gary P; Payne, Debbie; Elliott, Rachel A; Heise, Marco; Elles, Robert; Ramsden, Simon C; Andrews, Julie; Houston, J Brian; Qasim, Faeiza; Shaffer, Jon; Griffiths, Christopher E M; Ray, David W; Bruce, Ian; Ollier, William E R
2011-06-01
To conduct a pragmatic, randomized controlled trial to assess whether thiopurine methyltransferase (TPMT) genotyping prior to azathioprine reduces adverse drug reactions (ADRs). A total of 333 participants were randomized 1:1 to undergo TPMT genotyping prior to azathioprine or to commence treatment without genotyping. There was no difference in the primary outcome of stopping azathioprine due to an adverse reaction (ADR, p = 0.59) between the two study arms. ADRs were more common in older patients (p = 0.01). There was no increase in stopping azathioprine due to ADRs in TPMT heterozygotes compared with wild-type individuals. The single individual with TPMT variant homozygosity experienced severe neutropenia. Our work supports the strong evidence that individuals with TPMT variant homozygosity are at high risk of severe neutropenia, whereas TPMT heterozygotes are not at increased risk of ADRs at standard doses of azathioprine.
Short communication: Alteration of priors for random effects in Gaussian linear mixed model
DEFF Research Database (Denmark)
Vandenplas, Jérémie; Christensen, Ole Fredslund; Gengler, Nicholas
2014-01-01
…multiple-trait predictions of lactation yields, and Bayesian approaches integrating external information into genetic evaluations need to alter both the mean and (co)variance of the prior distributions and, to our knowledge, most software packages available in the animal breeding community do not permit such alterations. Therefore, the aim of this study was to propose a method to alter both the mean and (co)variance of the prior multivariate normal distributions of random effects of linear mixed models while using currently available software packages. The proposed method was tested on simulated examples with 3 different software packages available in animal breeding. The examples showed the possibility of the proposed method to alter both the mean and (co)variance of the prior distributions with currently available software packages through the use of an extended data file and a user-supplied (co)variance matrix.
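The mean-absorption trick that makes such alterations possible with zero-mean software can be verified numerically: solving with a prior N(mu, G) is equivalent to running a zero-mean solver on offset data and adding mu back, which is what an extended data file achieves in practice. Dimensions and values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
n, q = 12, 3
Z = rng.standard_normal((n, q))
G = np.diag([2.0, 1.0, 0.5])        # user-supplied prior (co)variance
mu = np.array([1.0, -0.5, 2.0])     # non-zero prior mean to be absorbed
R = np.eye(n) * 0.3
y = Z @ mu + rng.standard_normal(n) * 0.5

def posterior_mean(y, Z, G, R, mu):
    # Posterior mean of u for y = Z u + e, u ~ N(mu, G), e ~ N(0, R):
    # solve (Z' R^-1 Z + G^-1) u = Z' R^-1 y + G^-1 mu
    Ri, Gi = np.linalg.inv(R), np.linalg.inv(G)
    return np.linalg.solve(Z.T @ Ri @ Z + Gi, Z.T @ Ri @ y + Gi @ mu)

# Same answer from a zero-mean solver applied to the offset data y - Z mu
u_direct = posterior_mean(y, Z, G, R, mu)
u_zero = posterior_mean(y - Z @ mu, Z, G, R, np.zeros(q)) + mu
```

The algebra behind the equivalence: substituting u = mu + u* with u* ~ N(0, G) turns the general-mean problem into a zero-mean one on residualized data, so packages that only accept zero-mean random effects can still be used.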
Automated segmentation of dental CBCT image with prior-guided sequential random forests
Energy Technology Data Exchange (ETDEWEB)
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-7513 (United States); Chen, Ken-Chung; Tang, Zhen [Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, Texas 77030 (United States); Xia, James J., E-mail: dgshen@med.unc.edu, E-mail: JXia@HoustonMethodist.org [Surgical Planning Laboratory, Department of Oral and Maxillofacial Surgery, Houston Methodist Research Institute, Houston, Texas 77030 (United States); Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065 (United States); Department of Oral and Craniomaxillofacial Surgery, Shanghai Jiao Tong University School of Medicine, Shanghai Ninth People’s Hospital, Shanghai 200011 (China); Shen, Dinggang, E-mail: dgshen@med.unc.edu, E-mail: JXia@HoustonMethodist.org [Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599-7513 and Department of Brain and Cognitive Engineering, Korea University, Seoul 02841 (Korea, Republic of)
2016-01-15
Purpose: Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of the patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT image is an essential step to generate 3D models for the diagnosis and treatment planning of the patients with CMF deformities. However, due to the image artifacts caused by beam hardening, imaging noise, inhomogeneity, truncation, and maximal intercuspation, it is difficult to segment the CBCT. Methods: In this paper, the authors present a new automatic segmentation method to address these problems. Specifically, the authors first employ a majority voting method to estimate the initial segmentation probability maps of both mandible and maxilla based on multiple aligned expert-segmented CBCT images. These probability maps provide an important prior guidance for CBCT segmentation. The authors then extract both the appearance features from CBCTs and the context features from the initial probability maps to train the first-layer of random forest classifier that can select discriminative features for segmentation. Based on the first-layer of trained classifier, the probability maps are updated, which will be employed to further train the next layer of random forest classifier. By iteratively training the subsequent random forest classifier using both the original CBCT features and the updated segmentation probability maps, a sequence of classifiers can be derived for accurate segmentation of CBCT images. Results: Segmentation results on CBCTs of 30 subjects were both quantitatively and qualitatively validated based on manually labeled ground truth. The average Dice ratios of mandible and maxilla by the authors’ method were 0.94 and 0.91, respectively, which are significantly better than the state-of-the-art method based on sparse representation (p-value < 0.001). Conclusions: The authors have developed and validated a novel fully automated method
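The layer-by-layer scheme (train a classifier, regenerate the probability map, feed it back as a context feature for the next layer) can be sketched in 1D. A logistic regression stands in for each random forest layer and the data are synthetic; the point is only that accuracy improves once probability-map context joins the appearance feature.

```python
import numpy as np

rng = np.random.default_rng(7)

def train_logreg(X, y, iters=300, lr=0.5):
    # Stand-in for the per-layer random forest: logistic regression by gradient descent
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def smooth(p):
    # Context feature: neighbourhood average of the current probability map (1D)
    return np.convolve(p, np.ones(5) / 5.0, mode="same")

# Synthetic 1D 'image': appearance alone is noisy, spatial context resolves it
truth = np.zeros(300)
truth[100:200] = 1.0
appearance = truth + rng.standard_normal(300) * 0.8

prob = np.full(300, 0.5)               # initial probability map
for layer in range(3):                 # sequence of classifiers
    X = np.stack([appearance, smooth(prob), np.ones(300)], axis=1)
    w = train_logreg(X, truth)
    prob = 1.0 / (1.0 + np.exp(-X @ w))

acc = float(((prob > 0.5) == truth).mean())
```

Each layer sees a progressively cleaner probability map, so errors that appearance alone cannot resolve are corrected by spatial context, the same mechanism the prior-guided sequential forests exploit.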
Dieguez, Sebastian; Wagner-Egger, Pascal; Gauvrit, Nicolas
2015-11-01
Belief in conspiracy theories has often been associated with a biased perception of randomness, akin to a nothing-happens-by-accident heuristic. Indeed, a low prior for randomness (i.e., believing that randomness is a priori unlikely) could plausibly explain the tendency to believe that a planned deception lies behind many events, as well as the tendency to perceive meaningful information in scattered and irrelevant details; both of these tendencies are traits diagnostic of conspiracist ideation. In three studies, we investigated this hypothesis and failed to find the predicted association between low prior for randomness and conspiracist ideation, even when randomness was explicitly opposed to malevolent human intervention. Conspiracy believers' and nonbelievers' perceptions of randomness were not only indistinguishable from each other but also accurate compared with the normative view arising from the algorithmic information framework. Thus, the motto "nothing happens by accident," taken at face value, does not explain belief in conspiracy theories. © The Author(s) 2015.
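The "normative view arising from the algorithmic information framework" relies on complexity estimates of sequences. A crude but runnable proxy is compressed length; this is not the estimator used in the study, only an illustration of why a patterned sequence counts as non-random.

```python
import random
import zlib

def complexity(bits):
    """Crude algorithmic-complexity proxy: compressed length of a bit string."""
    return len(zlib.compress(bits.encode(), 9))

rng = random.Random(0)
random_seq = "".join(rng.choice("01") for _ in range(2000))
patterned = "01" * 1000                      # same length, obvious structure

c_rand, c_pat = complexity(random_seq), complexity(patterned)
```

A sequence is judged (approximately) random when no much shorter description exists; the patterned string compresses drastically while the coin-flip string barely does, which is the intuition behind the normative comparison reported above.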
Weight reduction intervention for obese infertile women prior to IVF: a randomized controlled trial.
Einarsson, Snorri; Bergh, Christina; Friberg, Britt; Pinborg, Anja; Klajnbard, Anna; Karlström, Per-Olof; Kluge, Linda; Larsson, Ingrid; Loft, Anne; Mikkelsen-Englund, Anne-Lis; Stenlöf, Kaj; Wistrand, Anna; Thurin-Kjellberg, Ann
2017-08-01
Does an intensive weight reduction programme prior to IVF increase live birth rates for infertile obese women? An intensive weight reduction programme resulted in a large weight loss but did not substantially affect live birth rates in obese women scheduled for IVF. Among obese women, fertility and obstetric outcomes are influenced negatively, with increased risk of miscarriage and a higher risk of maternal and neonatal complications. A recent large randomized controlled trial found no effect of lifestyle intervention on live birth in infertile obese women. A prospective, multicentre, randomized controlled trial was performed between 2010 and 2016 in the Nordic countries. In total, 962 women were assessed for eligibility and 317 women were randomized. Computerized randomization with concealed allocation was performed in the proportions 1:1 to one of two groups: weight reduction intervention followed by IVF-treatment or IVF-treatment only. One cycle per patient was included. Nine infertility clinics in Sweden, Denmark and Iceland participated. Women under 38 years of age planning IVF, and having a BMI ≥30 and … Conflict-of-interest disclosures: non-financial support from Impolin AB, during the conduct of the study, and personal fees from Merck outside the submitted work; Dr Friberg reports personal fees from Ferring, Merck, MSD, Finox and personal fees from Studentlitteratur, outside the submitted work; Dr Englund reports personal fees from Ferring, and non-financial support from Merck, outside the submitted work; Dr Bergh reports and has been reimbursed for: writing a newsletter twice a year (Ferring), lectures (Ferring, MSD, Merck), and Nordic working group meetings (Finox); Dr Karlström reports lectures (Ferring, Finox, Merck, MSD) and Nordic working group meetings (Ferring); Ms Kluge, Dr Einarsson, Dr Pinborg, Dr Klajnbard, Dr Stenlöf, Dr Larsson, Dr Loft and Dr Wistrand have nothing to disclose. ClinicalTrials.gov number, NCT01566929. 23-03-2012. 05-10-2010.
Cost-effectiveness analysis of salpingectomy prior to IVF, based on a randomized controlled trial.
Strandell, Annika; Lindhard, Anette; Eckerlund, Ingemar
2005-12-01
In patients with ultrasound-visible hydrosalpinges, salpingectomy prior to IVF increases the chance of a live birth. This study compared the cost-effectiveness of this strategy (intervention) with that of optional salpingectomy after a failed cycle (control). Data from a Scandinavian randomized controlled trial were used to calculate the individual number of treatments and their outcomes. Only patients with ultrasound-visible hydrosalpinges were considered in the main analysis, and a maximum of three fresh cycles were included. The costs for surgical procedures, IVF treatment, medication, complications, management of pregnancy and delivery as well as of early pregnancy losses were calculated from standardized hospital charges. Among the 51 patients in the intervention group, the live birth rate was 60.8% compared with 40.9% in 44 controls. The average cost per patient was 13,943 euro and 12,091 euro, respectively. Thus, the average cost per live birth was 22,823 euro in the intervention group and 29,517 euro in the control group. The incremental cost-effectiveness ratio for adopting the intervention strategy was estimated at 9306 euro. The incremental cost to achieve the higher birth rate of the intervention strategy seems reasonable.
International Nuclear Information System (INIS)
Chen, Hua-Jun; Shi, Hai-Bin; Jiang, Long-Feng; Li, Lan; Chen, Rong
2018-01-01
To investigate structural brain connectome alterations in cirrhotic patients with prior overt hepatic encephalopathy (OHE). Seventeen cirrhotic patients with prior OHE (prior-OHE), 18 cirrhotic patients without prior OHE (non-prior-OHE) and 18 healthy controls (HC) underwent diffusion tensor imaging. Neurocognitive functioning was assessed with the Psychometric Hepatic Encephalopathy Score (PHES). Using a probabilistic fibre tracking approach, we depicted the whole-brain structural network as a connectivity matrix of 90 regions (derived from the Automated Anatomical Labeling atlas). Graph theory-based analyses were performed to analyse topological properties of the brain network. The analysis of variance showed significant group effects on several topological properties, including network strength, global efficiency and local efficiency. A progressively decreasing trend in these metrics was found from non-prior-OHE to prior-OHE, compared with HC. Among the three groups, the regions with altered nodal efficiency were mainly distributed in the frontal and occipital cortices, the paralimbic system and subcortical regions. The topological metrics, such as network strength and global efficiency, were correlated with PHES among cirrhotic patients. Cirrhotic patients develop structural brain connectome alterations, which are aggravated by a prior OHE episode. Disrupted topological organization of the brain structural network may account for cognitive impairments related to prior OHE. (orig.)
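The abstract above reports graph metrics (network strength, global efficiency, local efficiency) computed from a region-by-region connectivity matrix. As a hedged illustration only, the sketch below computes analogous quantities on an invented matrix using the networkx library; the region count and weights are made up, and networkx's efficiency functions are unweighted, unlike typical connectome pipelines.

```python
# Sketch: graph metrics of the kind reported above, on a toy "connectivity
# matrix". All data are invented; networkx's efficiency metrics ignore weights.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_regions = 10
W = rng.random((n_regions, n_regions))
W = (W + W.T) / 2.0            # symmetric, like a structural connectome
np.fill_diagonal(W, 0.0)

G = nx.from_numpy_array(W)

# network strength: mean weighted degree over all nodes
strength = float(np.mean([d for _, d in G.degree(weight="weight")]))

glob_eff = nx.global_efficiency(G)   # unweighted global efficiency
loc_eff = nx.local_efficiency(G)     # unweighted local efficiency
print(strength, glob_eff, loc_eff)
```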
Energy Technology Data Exchange (ETDEWEB)
Chen, Hua-Jun [Fujian Medical University Union Hospital, Department of Radiology, Fuzhou (China); The First Affiliated Hospital of Nanjing Medical University, Department of Radiology, Nanjing (China); Shi, Hai-Bin [The First Affiliated Hospital of Nanjing Medical University, Department of Radiology, Nanjing (China); Jiang, Long-Feng [The First Affiliated Hospital of Nanjing Medical University, Department of Infectious Diseases, Nanjing (China); Li, Lan [Fujian Medical University Union Hospital, Department of Radiology, Fuzhou (China); Chen, Rong [University of Maryland School of Medicine, Department of Diagnostic Radiology and Nuclear Medicine, Baltimore, MD (United States); Beijing Institute of Technology, Advanced Innovation Center for Intelligent Robots and Systems, Beijing (China)
2018-01-15
Danis, Rachel B; Pereira, Nigel; Elias, Rony T
2017-11-10
Women of reproductive age diagnosed with cancer are often interested in preserving gametes or reproductive tissue that would allow for future genetic parenthood. Preservation of fertility is often accomplished in young cancer patients via ovarian stimulation followed by oocyte or embryo cryopreservation. Conventional stimulation protocols, however, require 2-4 weeks to complete ovarian stimulation, oocyte retrieval and possible fertilization. Such a strategy may not be feasible in patients requiring urgent cancer treatment. Recent studies have highlighted that random-start ovarian stimulation can be initiated irrespective of the phase of the menstrual cycle and is an attractive alternative to conventional ovarian stimulation. The primary aim of the current review is to discuss the feasibility and success of random-start ovarian stimulation for oocyte or embryo cryopreservation in women desiring fertility preservation prior to gonadotoxic cancer therapy. We performed a systematic review of medical literature published between January 2000 and June 2017 reporting the utility of random-start ovarian stimulation for fertility preservation. Search terms included "fertility preservation," "cancer," "ovarian stimulation," "random-start ovarian stimulation," "embryo cryopreservation," and "oocyte cryopreservation." Publications were included in this review only if patients underwent random-start ovarian stimulation prior to cancer therapy. Nineteen publications were identified and perused by the authors. Most publications described the utility of random-start ovarian stimulation in the setting of breast cancer. Random-start stimulation was associated with a reduced time interval between ovarian stimulation initiation and oocyte or embryo cryopreservation. The yield of mature oocytes and their developmental potential into embryos was comparable between conventional and random-start protocols, albeit with higher gonadotropin doses in the latter. The current review suggests
Fatigue in Steel Structures under Random Loading
DEFF Research Database (Denmark)
Agerskov, Henning
1999-01-01
Fatigue damage accumulation in steel structures under random loading is studied. The fatigue life of welded joints has been determined both experimentally and from a fracture mechanics analysis. In the experimental part of the investigation, fatigue test series have been carried through on various types of welded plate test specimens and full-scale offshore tubular joints. The materials that have been used are either conventional structural steel with a yield stress of ~360-410 MPa or high-strength steel with a yield stress of ~810-1010 MPa. The fatigue tests and the fracture mechanics analyses have been carried out using load histories which are realistic in relation to the types of structures studied, i.e. primarily bridges, offshore structures and chimneys. In general, the test series carried through show a significant difference between constant amplitude and variable amplitude fatigue.
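For orientation only: the standard baseline that variable-amplitude fatigue results such as these are compared against is the linear Palmgren-Miner damage rule. The sketch below uses an invented generic S-N curve (the constants C and m are assumptions), not the study's test data.

```python
# Palmgren-Miner linear damage accumulation -- a generic sketch, not the
# study's analysis. S-N constants C and m are invented.
def cycles_to_failure(stress_range, C=1e12, m=3.0):
    """S-N curve N = C * S**(-m), a common form for welded joints."""
    return C * stress_range ** (-m)

# a variable-amplitude load spectrum: (stress range in MPa, applied cycles)
blocks = [(100.0, 2e5), (150.0, 5e4), (200.0, 1e4)]

# Miner criterion: failure is predicted when the damage sum reaches ~1.0
damage = sum(n / cycles_to_failure(s) for s, n in blocks)
print(f"Miner sum = {damage:.3f}")
```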
van Eck, Carola F; Toor, Aneet; Banffy, Michael B; Gambardella, Ralph A
2018-01-01
A good patient-surgeon relationship relies on adequate preoperative education and counseling. Several multimedia resources, such as web-based education tools, have become available to enhance aspects of perioperative care. The purpose of this study was to evaluate the effect of an interactive web-based education tool on perioperative patient satisfaction scores after outpatient orthopaedic surgery. It was hypothesized that web-based education prior to outpatient orthopaedic surgery enhances patient satisfaction scores. Randomized controlled trial; Level of evidence, 1. All patients undergoing knee arthroscopy with meniscectomy, chondroplasty, or anterior cruciate ligament reconstruction or shoulder arthroscopy with rotator cuff repair were eligible for inclusion and were randomized to the study or control group. The control group received routine education by the surgeon, whereas the study group received additional web-based education. At the first postoperative visit, all patients completed the OAS CAHPS (Outpatient and Ambulatory Surgery Consumer Assessment of Healthcare Providers and Systems) survey. Differences in patient satisfaction scores between the study and control groups were determined with an independent t test. A total of 177 patients were included (104 [59%] males; mean age, 42 ± 14 years); 87 (49%) patients were randomized to receive additional web-based education. Total patient satisfaction score was significantly higher in the study group (97 ± 5) as compared with the control group (94 ± 8; P = .019), specifically for the OAS CAHPS core measure "recovery" (92 ± 13 vs 82 ± 23; P = .001). Age, sex, race, workers' compensation status, education level, overall health, emotional health, procedure type and complexity, and addition of a video did not influence patient satisfaction scores. Supplemental web-based patient education prior to outpatient orthopaedic surgery enhances patient satisfaction scores.
Le Maitre, Olivier
2015-01-07
We address model dimensionality reduction in the Bayesian inference of Gaussian fields, considering a prior covariance function with unknown hyper-parameters. The Karhunen-Loeve (KL) expansion of a prior Gaussian process is traditionally derived assuming a fixed covariance function with pre-assigned hyper-parameter values. Thus, the mode strengths of the KL expansion inferred using available observations, as well as the resulting inferred process, depend on the pre-assigned values of the covariance hyper-parameters. Here, we seek to infer the process and its covariance hyper-parameters in a single Bayesian inference. To this end, the uncertainty in the hyper-parameters is treated by means of a coordinate transformation, leading to a KL-type expansion on a fixed reference basis of spatial modes, but with random coordinates conditioned on the hyper-parameters. A Polynomial Chaos (PC) expansion of the model prediction is also introduced to accelerate the Bayesian inference and the sampling of the posterior distribution with an MCMC method. The PC expansion of the model prediction also relies on a coordinate transformation, enabling us to avoid expanding the dependence of the prediction on the covariance hyper-parameters. We demonstrate the efficiency of the proposed method on a transient diffusion equation by inferring spatially varying log-diffusivity fields from noisy data.
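As a numerical illustration of the fixed-hyper-parameter baseline the abstract starts from, the sketch below builds a discrete KL expansion of a 1-D Gaussian prior. The squared-exponential covariance, grid, and length-scale are invented; this does not reproduce the paper's coordinate-transformation approach.

```python
# Discrete KL expansion of a 1-D Gaussian prior with *fixed* hyper-parameters.
# Covariance form, grid and hyper-parameter values are invented.
import numpy as np

x = np.linspace(0.0, 1.0, 200)
ell, sigma2 = 0.1, 1.0                                   # assumed hyper-parameters
C = sigma2 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / ell ** 2)

lam, phi = np.linalg.eigh(C)                             # KL modes = eigenpairs
lam, phi = lam[::-1], phi[:, ::-1]                       # sort descending

# truncate at 99% of the prior variance
k = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99)) + 1

# one prior sample from k standard-normal KL coordinates
rng = np.random.default_rng(1)
sample = phi[:, :k] @ (np.sqrt(lam[:k]) * rng.standard_normal(k))
print(f"kept {k} of {len(x)} modes")
```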
Hsieh, Ching-Shui; Cheng, Hsiu-Chi; Lin, Jen-Shiou; Kuo, Shou-Jen; Chen, Yao-Li
2014-01-01
This trial was designed to compare the efficacy of 4% chlorhexidine gluconate (CHG) with normal saline (NS) as a predisinfection skin-scrub solution prior to standard presurgical skin preparation. Data was collected at a single transplantation center where patients electing resection of hepatic tumors were recruited between October 2011 and September 2012. In total, 100 patients were consecutively enrolled for random assignment to either 4% CHG or NS as a predisinfection skin-scrub solution prior to surgery. Our aim was to assess the comparative antiseptic efficacy of CHG in this setting, focusing on cutaneous microbial colonization (at baseline, preoperatively, and postoperatively) and postsurgical site infections as primary outcome measures. Positivity rates of baseline, preoperative, and postoperative cultures were similar for both groups, showing significant declines (relative to baseline) after skin preparation and no significant postsurgical rebound. Rates of surgical site infection were also similar in both groups (CHG, 6.0%; NS, 4.1%; P = 1.0). For patients with hepatic tumors undergoing hepatectomy, the effect of 4% CHG as a predisinfection scrub solution was similar to that of NS in terms of skin decontamination and surgical site infections.
Semi-automated measurement of anatomical structures using statistical and morphological priors
Ashton, Edward A.; Du, Tong
2004-05-01
Rapid, accurate and reproducible delineation and measurement of arbitrary anatomical structures in medical images is a widely held goal, with important applications in both clinical diagnostics and, perhaps more significantly, pharmaceutical trial evaluation. This process requires the ability first to localize a structure within the body, and then to find a best approximation of the structure's boundaries within a given scan. Structures that are tortuous and small in cross section, such as the hippocampus in the brain or the abdominal aorta, present a particular challenge. Their apparent shape and position can change significantly from slice to slice, and accurate prior shape models for such structures are often difficult to form. In this work, we have developed a system that makes use of both a user-defined shape model and a statistical maximum likelihood classifier to identify and measure structures of this sort in MRI and CT images. Experiments show that this system can reduce analysis time by 75% or more with respect to manual tracing with no loss of precision or accuracy.
Detection and correction of underassigned rotational symmetry prior to structure deposition
International Nuclear Information System (INIS)
Poon, Billy K.; Grosse-Kunstleve, Ralf W.; Zwart, Peter H.; Sauter, Nicholas K.
2010-01-01
An X-ray structural model can be reassigned to a higher symmetry space group using the presented framework if its noncrystallographic symmetry operators are close to being exact crystallographic relationships. About 2% of structures in the Protein Data Bank can be reclassified in this way. Up to 2% of X-ray structures in the Protein Data Bank (PDB) potentially fit into a higher symmetry space group. Redundant protein chains in these structures can be made compatible with exact crystallographic symmetry with minimal atomic movements that are smaller than the expected range of coordinate uncertainty. The incidence of problem cases is somewhat difficult to define precisely, as there is no clear line between underassigned symmetry, in which the subunit differences are unsupported by the data, and pseudosymmetry, in which the subunit differences rest on small but significant intensity differences in the diffraction pattern. To help catch symmetry-assignment problems in the future, it is useful to add a validation step that operates on the refined coordinates just prior to structure deposition. If redundant symmetry-related chains can be removed at this stage, the resulting model (in a higher symmetry space group) can readily serve as an isomorphous replacement starting point for re-refinement using re-indexed and re-integrated raw data. These ideas are implemented in new software tools available at http://cci.lbl.gov/labelit
Solution Methods for Structures with Random Properties Subject to Random Excitation
DEFF Research Database (Denmark)
Köylüoglu, H. U.; Nielsen, Søren R. K.; Cakmak, A. S.
This paper deals with the lower order statistical moments of the response of structures with random stiffness and random damping properties subject to random excitation. The arising stochastic differential equations (SDE) with random coefficients are solved by two methods: a second-order […] method, and a Markovian approach transforming the SDE with random coefficients and deterministic initial conditions into an equivalent nonlinear SDE with deterministic coefficients and random initial conditions. In both methods, the statistical moment equations are used. The hierarchy of statistical moments in the Markovian approach is closed by the cumulant neglect closure method applied at the fourth-order level.
Nonlinear deterministic structures and the randomness of protein sequences
Huang Yan Zhao
2003-01-01
To clarify the randomness of protein sequences, we make a detailed analysis of a set of typical protein sequences, each representing a structural class, by using a nonlinear prediction method. No deterministic structures are found in these protein sequences, which implies that they behave as random sequences. We also give an explanation for the controversial results obtained in previous investigations.
International Nuclear Information System (INIS)
Kazantsev, Daniil; Dobson, Katherine J; Withers, Philip J; Lee, Peter D; Ourselin, Sébastien; Arridge, Simon R; Hutton, Brian F; Kaestner, Anders P; Lionheart, William R B
2014-01-01
There has been a rapid expansion of multi-modal imaging techniques in tomography. In biomedical imaging, patients are now regularly imaged using both single photon emission computed tomography (SPECT) and x-ray computed tomography (CT), or using both positron emission tomography and magnetic resonance imaging (MRI). In non-destructive testing of materials, both neutron CT (NCT) and x-ray CT are widely applied to investigate the inner structure of materials or to track the dynamics of physical processes. The potential benefits from combining modalities have led to increased interest in iterative reconstruction algorithms that can utilize the data from more than one imaging mode simultaneously. We present a new regularization term in iterative reconstruction that enables information from one imaging modality to be used as a structural prior to improve the resolution of the second modality. The regularization term is based on a modified anisotropic tensor diffusion filter that has shape-adapted smoothing properties. By considering the underlying orientations of normal and tangential vector fields for two co-registered images, the diffusion flux is rotated and scaled adaptively to image features. The images can have different greyscale values and different spatial resolutions. The proposed approach is particularly good at isolating oriented features in images, which is important for medical and materials science applications. By enhancing the edges it enables both easy identification and volume fraction measurements, aiding the segmentation algorithms used for quantification. The approach is tested on a standard denoising and deblurring image recovery problem, and then applied to 2D and 3D reconstruction problems, thereby highlighting the capabilities of the algorithm. Using synthetic data from SPECT co-registered with MRI, and real NCT data co-registered with x-ray CT, we show how the method can be used across a range of imaging modalities. (paper)
Automatic structure classification of small proteins using random forest
Directory of Open Access Journals (Sweden)
Hirst Jonathan D
2010-07-01
Background: Random forest, an ensemble-based supervised machine learning algorithm, is used to predict the SCOP structural classification of a target structure, based on the similarity of its structural descriptors to those of a template structure with an equal number of secondary structure elements (SSEs). An initial assessment of random forest is carried out for domains consisting of three SSEs. The usability of random forest in classifying larger domains is demonstrated by applying it to domains consisting of four, five and six SSEs. Results: Random forest, trained on SCOP version 1.69, achieves a predictive accuracy of up to 94% on an independent and non-overlapping test set derived from SCOP version 1.73. For classification to the SCOP Class, Fold, Super-family or Family levels, the predictive quality of the model in terms of the Matthews correlation coefficient (MCC) ranged from 0.61 to 0.83. As the number of constituent SSEs increases, the MCC for classification to different structural levels decreases. Conclusions: The utility of random forest in classifying domains from the place-holder classes of SCOP to the true Class, Fold, Super-family or Family levels is demonstrated. Issues such as the introduction of a new structural level in SCOP and the merger of singleton levels can also be addressed using random forest. A real-world scenario is mimicked by predicting the classification for those protein structures from the PDB which are yet to be assigned to the SCOP classification hierarchy.
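A hedged sketch of the kind of setup described: a random forest mapping numeric structural descriptors to a class label, scored with the Matthews correlation coefficient. The data below are synthetic stand-ins, not SCOP descriptors, and the scikit-learn API is assumed.

```python
# Toy stand-in for the classification task above: random forest + MCC.
# Features and labels are synthetic, not SCOP structural descriptors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((300, 8))                      # 8 invented descriptors
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)     # stand-in class label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
mcc = matthews_corrcoef(y_te, clf.predict(X_te))
print(f"MCC = {mcc:.2f}")
```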
A simplified method for random vibration analysis of structures with random parameters
International Nuclear Information System (INIS)
Ghienne, Martin; Blanzé, Claude
2016-01-01
Piezoelectric patches with adapted electrical circuits or viscoelastic dissipative materials are two solutions particularly adapted to reduce vibration of light structures. To accurately design these solutions, it is necessary to describe precisely the dynamical behaviour of the structure. It may quickly become computationally intensive to describe this behaviour robustly for a structure with nonlinear phenomena, such as contact or friction in bolted structures, and uncertain variations of its parameters. The aim of this work is to propose a non-intrusive reduced stochastic method to characterize robustly the vibrational response of a structure with random parameters. Our goal is to characterize the eigenspace of linear systems with dynamic properties considered as random variables. This method is based on a separation of random aspects from deterministic aspects and allows us to estimate the first central moments of each random eigenfrequency with a single deterministic finite element computation. The method is applied to a frame with several Young's moduli modeled as random variables. This example could be expanded to a bolted structure including piezoelectric devices. The method needs to be enhanced when random eigenvalues are closely spaced. An indicator with no additional computational cost is proposed to characterize the "proximity" of two random eigenvalues. (paper)
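A brute-force Monte Carlo counterpart to the quantity the abstract targets (the first central moments of random eigenfrequencies) can be sketched on a toy two-degree-of-freedom system. All parameters are invented, and this is the expensive reference that a single-computation method of the kind described would avoid.

```python
# Monte Carlo moments of random eigenfrequencies for a toy 2-DOF spring-mass
# chain with random stiffnesses (unit masses). All parameters invented.
import numpy as np

rng = np.random.default_rng(0)
freqs = []
for _ in range(2000):
    k1, k2 = rng.normal(100.0, 10.0, 2)          # random stiffnesses
    K = np.array([[k1 + k2, -k2], [-k2, k2]])    # stiffness matrix, M = I
    lam = np.linalg.eigvalsh(K)                  # ascending eigenvalues
    freqs.append(np.sqrt(lam) / (2.0 * np.pi))   # eigenfrequencies in Hz
freqs = np.array(freqs)

print("means:", freqs.mean(axis=0))
print("stds: ", freqs.std(axis=0))
```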
A Structural Modeling Approach to a Multilevel Random Coefficients Model.
Rovine, Michael J.; Molenaar, Peter C. M.
2000-01-01
Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)
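As a hedged sketch of what a random-coefficients (multilevel) model looks like in practice, the example below fits a random intercept and slope to simulated grouped data, assuming the statsmodels mixed-effects API. It is not the covariance-structure formulation of the paper, and the data are simulated, not the Belsky and Rovine marriage data.

```python
# Random-coefficients model on simulated grouped data (statsmodels MixedLM).
# This illustrates the model class only, not the paper's SEM-based estimator.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_groups, n_per = 30, 10
g = np.repeat(np.arange(n_groups), n_per)
slope = 1.0 + 0.3 * rng.standard_normal(n_groups)   # group-varying slope
x = rng.random(n_groups * n_per)
y = 2.0 + slope[g] * x + 0.1 * rng.standard_normal(n_groups * n_per)
df = pd.DataFrame({"y": y, "x": x, "g": g})

# fixed effect for x plus a random intercept and random slope per group
fit = smf.mixedlm("y ~ x", df, groups=df["g"], re_formula="~x").fit()
print(fit.params["x"])   # fixed-effect slope, close to the true mean of 1.0
```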
Geometry, structure and randomness in combinatorics
Nešetřil, Jaroslav; Pellegrini, Marco
2014-01-01
This book collects surveys on current trends in discrete mathematics and discrete geometry. The areas covered include: graph representations, structural graph theory, extremal graph theory, Ramsey theory and constraint satisfaction problems.
Color effects from scattering on random surface structures in dielectrics
DEFF Research Database (Denmark)
Clausen, Jeppe; Christiansen, Alexander B; Garnæs, Jørgen
2012-01-01
We show that cheap large area color filters, based on surface scattering, can be fabricated in dielectric materials by replication of random structures in silicon. The specular transmittance of three different types of structures, corresponding to three different colors, has been characterized …
Random generation of RNA secondary structures according to native distributions
Directory of Open Access Journals (Sweden)
Nebel Markus E
2011-10-01
Background: Random biological sequences are a topic of great interest in genome analysis since, according to a powerful paradigm, they represent the background noise from which the actual biological information must differentiate. Accordingly, the generation of random sequences has been investigated for a long time. Similarly, random objects of a more complicated structure, like RNA molecules or proteins, are of interest. Results: In this article, we present a new general framework for deriving algorithms for the non-uniform random generation of combinatorial objects according to the encoding and probability distribution implied by a stochastic context-free grammar. Briefly, the framework extends the well-known recursive method for (uniform) random generation and uses the popular framework of admissible specifications of combinatorial classes, introducing weighted combinatorial classes to allow for non-uniform generation by means of unranking. This framework is used to derive an algorithm for the generation of RNA secondary structures of a given fixed size. We address the random generation of these structures according to a realistic distribution obtained from real-life data by using a very detailed context-free grammar that models the class of RNA secondary structures by distinguishing between all known motifs in RNA structure. Compared to well-known sampling approaches used in several structure prediction tools (such as SFold), ours has two major advantages. Firstly, after a preprocessing step in time O(n²) for the computation of all weighted class sizes needed, our approach can compute a set of m random secondary structures of a given structure size n in worst-case time complexity O(m·n·log(n)), while other algorithms typically have a runtime in O(m·n²). Secondly, our approach works with integer arithmetic only, which is faster and saves us from all the discomforting details of using floating point arithmetic.
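A drastically simplified illustration of non-uniform generation from a stochastic context-free grammar (nothing like the detailed grammar or the unranking machinery of the paper): dot-bracket strings sampled from a three-rule grammar with invented probabilities.

```python
# Toy stochastic CFG for dot-bracket "secondary structures". Rules and
# probabilities are invented:
#   S -> "(" S ")" S   with prob 0.25   (base pair)
#   S -> "." S         with prob 0.35   (unpaired base)
#   S -> ""            with prob 0.40
import random

def sample_structure(rng, depth=0):
    """Sample one balanced dot-bracket string from the grammar."""
    if depth > 50:                       # guard against rare deep recursions
        return ""
    r = rng.random()
    if r < 0.25:
        return ("(" + sample_structure(rng, depth + 1) + ")"
                + sample_structure(rng, depth + 1))
    if r < 0.60:
        return "." + sample_structure(rng, depth + 1)
    return ""

rng = random.Random(0)
samples = [sample_structure(rng) for _ in range(10)]
print(samples)
```

By construction every sampled string has balanced brackets, since the pairing rule always emits "(" and ")" together.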
EVOLUTION OF FAST MAGNETOACOUSTIC PULSES IN RANDOMLY STRUCTURED CORONAL PLASMAS
International Nuclear Information System (INIS)
Yuan, D.; Li, B.; Pascoe, D. J.; Nakariakov, V. M.; Keppens, R.
2015-01-01
We investigate the evolution of fast magnetoacoustic pulses in randomly structured plasmas, in the context of large-scale propagating waves in the solar atmosphere. We perform one-dimensional numerical simulations of fast wave pulses propagating perpendicular to a constant magnetic field in a low-β plasma with a random density profile across the field. Both linear and nonlinear regimes are considered. We study how the evolution of the pulse amplitude and width depends on their initial values and the parameters of the random structuring. Acting as a dispersive medium, a randomly structured plasma causes amplitude attenuation and width broadening of the fast wave pulses. After the passage of the main pulse, secondary propagating and standing fast waves appear. Width evolution of both linear and nonlinear pulses can be well approximated by linear functions; however, narrow pulses may have zero or negative broadening. This arises because narrow pulses are prone to splitting, while broad pulses usually deviate less from their initial Gaussian shape and form ripple structures on top of the main pulse. Linear pulses decay at an almost constant rate, while nonlinear pulses decay exponentially. A pulse interacts most efficiently with a random medium with a correlation length of about half of the initial pulse width. This detailed model of fast wave pulses propagating in highly structured media substantiates the interpretation of EIT waves as fast magnetoacoustic waves. Evolution of a fast pulse provides us with a novel method to diagnose the sub-resolution filamentation of the solar atmosphere.
Probabilistic SSME blades structural response under random pulse loading
Shiao, Michael; Rubinstein, Robert; Nagpal, Vinod K.
1987-01-01
The purpose is to develop models of random impacts on a Space Shuttle Main Engine (SSME) turbopump blade and to predict the probabilistic structural response of the blade to these impacts. The random loading is caused by the impact of debris. The probabilistic structural response is characterized by distribution functions for stress and displacements as functions of the loading parameters which determine the random pulse model. These parameters include pulse arrival, amplitude, and location. The analysis can be extended to predict level crossing rates. This requires knowledge of the joint distribution of the response and its derivative. The model of random impacts chosen allows the pulse arrivals, pulse amplitudes, and pulse locations to be random. Specifically, the pulse arrivals are assumed to be governed by a Poisson process, which is characterized by a mean arrival rate. The pulse intensity is modelled as a normally distributed random variable with a zero mean chosen independently at each arrival. The standard deviation of the distribution is a measure of pulse intensity. Several different models were used for the pulse locations. For example, three points near the blade tip were chosen at which pulses were allowed to arrive with equal probability. Again, the locations were chosen independently at each arrival. The structural response was analyzed both by direct Monte Carlo simulation and by a semi-analytical method.
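The pulse model spelled out above (Poisson arrivals, zero-mean normal amplitudes, a location chosen from three candidate points) maps directly to a few lines of simulation. The sketch below is a minimal guess with invented numeric parameters (rate, amplitude standard deviation, horizon), not the authors' code.

```python
# One realization of the random impact model described above: Poisson
# arrivals, zero-mean normal amplitudes, location uniform over three points.
# All numeric parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
rate, sigma, T = 5.0, 2.0, 10.0        # mean arrival rate, amplitude std, horizon

n = rng.poisson(rate * T)              # number of impacts in [0, T]
t = np.sort(rng.uniform(0.0, T, n))    # arrival times, uniform given n
amp = rng.normal(0.0, sigma, n)        # pulse amplitudes
loc = rng.integers(0, 3, n)            # one of three blade-tip locations

print(n, amp.std())
```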
Sfoungaristos, Stavros; Polimeros, Nikolaos; Kavouras, Adamantios; Perimenis, Petros
2012-06-01
To determine the need for pre-treatment stenting in patients undergoing extracorporeal shockwave lithotripsy (ESWL) for ureteral stones sized 4-10 mm, a prospective randomized study was conducted between September 2009 and March 2011. It included 156 patients, randomized into stented and non-stented groups, who underwent a maximum of 3 ESWL sessions. Radiographic follow-up was used to assess stone fragmentation and clearance. Results were compared in terms of stone-free rates, post-treatment morbidity and complications. Overall efficacy was 76.9%. Stone-free rates were statistically significantly lower (P = 0.026) in the stented group (68.6%) compared with the non-stented group (83.7%). Furthermore, stenting was significantly correlated with post-treatment lower urinary tract symptoms (P ≤ 0.001), the need for more ESWL sessions (P = 0.019) and the possibility of operation due to ESWL failure (P = 0.026). A multivariate analysis was conducted to identify the parameters that may predict complete stone removal after ESWL; stone size (P = 0.026), stone location (P = 0.011) and stenting (P = 0.007) were the most significant factors. ESWL is an efficient and safe treatment for 4- to 10-mm ureteral stones. Pre-treatment stenting limits stone-free rates and significantly influences post-ESWL morbidity and quality of life in a negative manner, while contributing minimally to the prophylaxis of complications.
Rocha, Leonardo Lima; Pessoa, Camila Menezes Souza; Neto, Ary Serpa; do Prado, Rogerio Ruscitto; Silva, Eliezer; de Almeida, Marcio Dias; Correa, Thiago Domingos
2017-02-27
Liver failure patients have traditionally been empirically transfused prior to invasive procedures. Blood transfusion is associated with immunologic and nonimmunologic reactions, an increased risk of adverse outcomes and high costs. Scientific evidence supporting empirical transfusion is lacking, and the best approach to blood transfusion prior to invasive procedures in cirrhotic patients has not been established so far. The aim of this study is to compare three transfusion strategies (routine coagulation test-guided, ordinary or restrictive, or thromboelastometry-guided) prior to central venous catheterization in critically ill patients with cirrhosis. Design and setting: a double-blinded, parallel-group, single-center, randomized controlled clinical trial in a tertiary private hospital in São Paulo, Brazil. Adults (aged 18 years or older) admitted to the intensive care unit with cirrhosis and an indication for central venous line insertion. Patients will be randomly assigned to three groups for blood transfusion strategy prior to central venous catheterization: standard coagulation tests-based, thromboelastometry-based, or restrictive. The primary efficacy endpoint will be the proportion of patients transfused with any blood product prior to central venous catheterization. The primary safety endpoint will be the incidence of major bleeding. Secondary endpoints will be the proportion of transfusion of fresh frozen plasma, platelets and cryoprecipitate; the infused volume of blood products; hemoglobin and hematocrit before and after the procedure; intensive care unit and hospital length of stay; 28-day and hospital mortality; incidence of minor bleeding; transfusion-related adverse reactions; and cost analysis. This study will evaluate three strategies to guide blood transfusion prior to central venous line placement in severely ill patients with cirrhosis. We hypothesized that thromboelastometry-based and/or restrictive protocols are safe and would significantly
Fones, Helen N; Eyles, Chris J; Kay, William; Cowper, Josh; Gurr, Sarah J
2017-09-01
Zymoseptoria tritici causes Septoria leaf blotch of wheat. The prevailing paradigm of the Z. tritici-wheat interaction assumes fungal ingress through stomata within 24-48 h, followed by days of symptomless infection; this is extrapolated from studies testing the mode of fungal ingress under optimal infection conditions. Here, we explicitly assess the timing of entry, using GFP-tagged Z. tritici. We show that early entry is comparatively rare, and that extended epiphytic growth is possible. We test the hypotheses that our data diverge from earlier studies due to: (i) random ingress of Z. tritici into the leaf, with some early entry events; (ii) previous reliance upon fungal stains, combined with poor attachment of Z. tritici to the leaf, leading to an increased likelihood of observing internal versus external growth compared with GFP; and (iii) the use of exceptionally high humidity to promote entry in previous studies. We combine computer simulation of leaf-surface growth with thousands of in planta observations to demonstrate that, while spores germinate rapidly on the leaf, over 95% of fungi remain epiphytic, growing randomly over the leaf for ten days or more. We show that epiphytic fungi are easily detached from leaves by rinsing and that humidity promotes epiphytic growth, increasing infection rates. Together, these results explain why epiphytic growth has been dismissed and early ingress assumed. The prolonged epiphytic phase should inform studies of pathogenicity and virulence mutants, disease control strategies, and the interpretation of the observed low in planta growth, metabolic quiescence and evasion of plant defences by Zymoseptoria during symptomless infection. Copyright © 2017. Published by Elsevier Inc.
Graphene materials having randomly distributed two-dimensional structural defects
Kung, Harold H; Zhao, Xin; Hayner, Cary M; Kung, Mayfair C
2013-10-08
Graphene-based storage materials for high-power battery applications are provided. The storage materials are composed of vertical stacks of graphene sheets and have reduced resistance for Li ion transport. This reduced resistance is achieved by incorporating a random distribution of structural defects into the stacked graphene sheets, whereby the structural defects facilitate the diffusion of Li ions into the interior of the storage materials.
Random fractal structures in North American energy markets
Energy Technology Data Exchange (ETDEWEB)
Serletis, Apostolos [Calgary Univ., Dept. of Economics, Calgary, AB (Canada); Andreadis, Ioannis [European Univ. of the Hague, Center of Management Studies, The Hague (Netherlands)
2004-05-01
This paper uses daily observations on West Texas Intermediate (WTI) crude oil prices at Chicago and Henry Hub (Louisiana) natural gas prices (over the deregulated period of the 1990s), together with various tests from statistics and dynamical systems theory, to support a random fractal structure for North American energy markets. In particular, this evidence is supported by the multifractal structure test of Vassilicos et al. (1993) and the turbulent-behavior test of Ghashghaie et al. [Nature 381 (1996) 767]. (Author)
Genetic structure of the Common Eider in the western Aleutian Islands prior to fox eradication
Sonsthagen, Sarah A.; Talbot, Sandra L.; Wilson, Robert E.; Petersen, Margaret R.; Williams, Jeffrey C.; Byrd, G. Vernon; McCracken, Kevin G.
2013-01-01
Since the late 18th century, bird populations residing in the Aleutian Archipelago have been greatly reduced by introduced arctic foxes (Alopex lagopus). We analyzed data from microsatellite, nuclear intron, and mitochondrial DNA (mtDNA) loci to examine the spatial genetic structure, demography, and gene flow among four Aleutian Island populations of the Common Eider (Somateria mollissima) much reduced by introduced foxes. In mtDNA, we found high levels of genetic structure within and between island groups (ΦST = 0.643), but we found no population subdivision in microsatellites or nuclear introns. Differences in genetic structure between the mitochondrial and nuclear genomes are consistent with the Common Eider's breeding and winter biology, as females are highly philopatric and males disperse. Nevertheless, significant differences between islands in the mtDNA of males, and marginal significance (P = 0.07) in the Z-linked locus Smo 1, suggest that males may also have some level of fidelity to island groups. Severe reduction of populations by the fox, coupled with females' high philopatry, may have left the genetic signature of a bottleneck effect, resulting in the high levels of genetic differentiation observed in mtDNA (ΦST = 0.460–0.807) between islands only 440 km apart. Reestablishment of the Common Eider following the fox's eradication was likely through recruitment from within the islands, bolstered by dispersal from neighboring islands, as suggested by the lack of genetic structure and the asymmetry in gene flow between Attu and the other Near Islands.
Lutenbacher, Melanie; Gabbe, Patricia Temple; Karp, Sharon M; Dietrich, Mary S; Narrigan, Deborah; Carpenter, Lavenia; Walsh, William
2014-07-01
Women with a history of a prior preterm birth (PTB) have a high probability of a recurrent preterm birth. Some risk factors and health behaviors that contribute to PTB may be amenable to intervention, and home visitation is a promising method for delivering evidence-based interventions. We evaluated a system of care designed to reduce preterm births and hospital length of stay in a sample of pregnant women with a history of a PTB. Single-site randomized clinical trial. Eligibility: age >18 years with a prior live birth at ≥20 weeks' gestation. The intervention consisted of home visits by certified nurse-midwives guided by protocols for specific risk factors (e.g., depressive symptoms, abuse, smoking). Data were collected via multiple methods and sources, including intervention fidelity assessments. Average age was 27.8 years; mean gestational age at enrollment was 15 weeks. Racial breakdown mirrored local demographics. Most had a partner and a high school education, and 62% had Medicaid. No statistically significant group differences were found in gestational age at birth. Intervention participants had a shorter intrapartum length of stay. Enhanced prenatal care through nurse-midwife home visits may limit some risk factors and shorten intrapartum length of stay for women with a prior PTB. This study contributes to knowledge about evidence-based home visit interventions directed at risk factors associated with PTB.
Expedite random structure searching using objects from Wyckoff positions
Wang, Shu-Wei; Hsing, Cheng-Rong; Wei, Ching-Ming
2018-02-01
Random structure searching has been proved to be a powerful approach for finding the global minimum and metastable structures. In principle a truly random sampling is needed, yet it would be highly time-consuming and/or practically impossible to find the global minimum for complicated systems in their high-dimensional configuration space. Thus the implementation of reasonable constraints, such as adopting system symmetries to reduce the independent dimension of the structural space and/or imposing chemical information to reach and relax into low-energy regions, is the most essential issue in the approach. In this paper, we propose the concept of an "object", which is either a single atom or a set of atoms (such as a molecule or a carbonate group) carrying a symmetry defined by one of the Wyckoff positions of the space group. This allows the search for the global minimum of a complicated system to be confined to a greatly reduced structural space, becoming accessible in practice. We examined several representative materials, including the Cd3As2 crystal, solid methanol, high-pressure carbonates (FeCO3), and the Si(111)-7 × 7 reconstructed surface, to demonstrate the power and advantages of using the "object" concept in random structure searching.
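The dimension reduction behind the "object" concept can be illustrated with a deliberately small toy: sample only a representative position and let a symmetry operation generate the equivalent copies. This is a hedged sketch under simplified assumptions (a single 4-fold rotation in 2D stands in for a full 3D space group, and an "object" is a single atom), not the authors' implementation.

```python
import random

# Toy sketch of symmetry-constrained random sampling (illustrative only;
# the paper's "objects" and space-group machinery are far richer).
# Only the free coordinates of a representative are sampled; its
# symmetry-equivalent copies are generated, which is the dimension
# reduction that the object/Wyckoff-position concept provides.

def symmetry_copies(x, y):
    """Orbit of (x, y) under 4-fold rotation about (0.5, 0.5), fractional coords."""
    pts = []
    u, v = x - 0.5, y - 0.5
    for _ in range(4):
        pts.append(((u + 0.5) % 1.0, (v + 0.5) % 1.0))
        u, v = -v, u  # rotate by 90 degrees
    return pts

def random_structure(n_objects, rng):
    atoms = []
    for _ in range(n_objects):
        rep = (rng.random(), rng.random())   # sample only the representative
        atoms.extend(symmetry_copies(*rep))  # symmetry fills in the rest
    return atoms

rng = random.Random(0)
structure = random_structure(n_objects=2, rng=rng)
# 2 sampled representatives expand to 8 atoms:
# 4 free parameters instead of 16.
print(len(structure))
```

In a real search each such trial structure would then be relaxed with a first-principles or force-field code, and the constrained sampling repeated many times.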
Structure of a randomly grown 2-d network
DEFF Research Database (Denmark)
Ajazi, Fioralba; Napolitano, George M.; Turova, Tatyana
2015-01-01
We introduce a growing random network on a plane as a model of a growing neuronal network. The properties of the structure of the induced graph are derived, and we compare our results with available data. In particular, it is shown that, depending on the parameters of the model, the system undergoes in time different phases of structure. We conclude with a possible explanation of some empirical data on the connections between neurons.
Probabilistic analysis of structures involving random stress-strain behavior
Millwater, H. R.; Thacker, B. H.; Harren, S. V.
1991-01-01
The present methodology for the analysis of structures with random stress-strain behavior characterizes the uniaxial stress-strain curve in terms of (1) the elastic modulus, (2) the engineering stress at initial yield, (3) the initial plastic-hardening slope, (4) the engineering stress at the point of ultimate load, and (5) the engineering strain at the point of ultimate load. The methodology is incorporated into the Numerical Evaluation of Stochastic Structures Under Stress code for probabilistic structural analysis. The illustrative problem of a thick cylinder under internal pressure, where both the internal pressure and the stress-strain curve are random, is addressed by means of the code. The response value is the cumulative distribution function of the equivalent plastic strain at the inner radius.
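The kind of result described above - a cumulative distribution function of plastic strain when both load and material curve are random - can be sketched with plain Monte Carlo sampling. The bilinear material, the Lamé hoop-stress formula, and all distribution parameters below are illustrative assumptions, not the code or data of the abstract.

```python
import bisect
import random

# Hedged toy: Monte Carlo estimate of the CDF of a plastic-strain response
# when both the internal pressure and the stress-strain curve are random.
# Bilinear hardening and the elastic Lame hoop stress are stand-ins for the
# probabilistic analysis described in the abstract.

def hoop_stress(p, ri=0.05, ro=0.10):
    """Lame elastic hoop stress at the inner radius of a thick cylinder."""
    return p * (ro**2 + ri**2) / (ro**2 - ri**2)

def plastic_strain(sigma, sigma_y, H):
    """Bilinear hardening: no plastic strain below yield."""
    return max(0.0, (sigma - sigma_y) / H)

rng = random.Random(42)
samples = []
for _ in range(20000):
    p = rng.gauss(120e6, 15e6)        # internal pressure [Pa] (assumed)
    sigma_y = rng.gauss(250e6, 20e6)  # yield stress [Pa] (assumed)
    H = rng.gauss(2.0e9, 0.2e9)       # plastic-hardening slope [Pa] (assumed)
    samples.append(plastic_strain(hoop_stress(p), sigma_y, H))

samples.sort()

def cdf(x):
    """Empirical CDF of the equivalent plastic strain at the inner radius."""
    return bisect.bisect_right(samples, x) / len(samples)

print(round(cdf(0.0), 3), round(cdf(0.01), 3))
```

Production codes replace this brute-force loop with fast probability integration or importance sampling, but the response quantity is the same empirical distribution function.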
Response of Launch Pad Structures to Random Acoustic Excitation
Directory of Open Access Journals (Sweden)
Ravi N. Margasahayam
1994-01-01
The design of launch pad structures, particularly those having a large area-to-mass ratio, is governed by launch-induced acoustics, a relatively short transient with random pressure amplitudes having a non-Gaussian distribution. The factors influencing the acoustic excitation and the resulting structural responses are numerous and cannot be predicted precisely. Two solutions (probabilistic and deterministic) for the random vibration problem are presented in this article from the standpoint of their applicability to predicting the response of ground structures exposed to rocket noise. Deficiencies of the probabilistic method, especially in predicting the response in the low-frequency range of launch transients (below 20 Hz), prompted the development of the deterministic analysis. The relationship between the two solutions is clarified for future implementation in a finite element method (FEM) code.
Calcium Isotopic Evidence for Vulnerable Marine Ecosystem Structure Prior to the K/Pg Extinction.
Martin, Jeremy E; Vincent, Peggy; Tacail, Théo; Khaldoune, Fatima; Jourani, Essaid; Bardet, Nathalie; Balter, Vincent
2017-06-05
The collapse of marine ecosystems during the end-Cretaceous mass extinction involved the base of the food chain [1] up to ubiquitous vertebrate apex predators [2-5]. Large marine reptiles became suddenly extinct at the Cretaceous-Paleogene (K/Pg) boundary, whereas other contemporaneous groups such as bothremydid turtles or dyrosaurid crocodylomorphs, although affected at the familial, genus, or species level, survived into post-crisis environments of the Paleocene [5-9] and could have found refuge in freshwater habitats [10-12]. A recent hypothesis proposes that the extinction of plesiosaurians and mosasaurids could have been caused by an important drop in sea level [13]. Mosasaurids are unusually diverse and locally abundant in the Maastrichtian phosphatic deposits of Morocco and, together with large sharks and the single species of elasmosaurid plesiosaurian recognized so far, contribute to an overabundance of apex predators [3, 7, 14, 15]. For this reason, the high local diversity of marine reptiles exhibiting different body masses and a wealth of tooth morphologies hints at complex trophic interactions within this latest Cretaceous marine ecosystem. Using calcium isotopes, we investigated the trophic structure of this extinct assemblage. Our results are consistent with the calcium isotope pattern observed in modern marine ecosystems and show that plesiosaurians and mosasaurids indiscriminately fall in the tertiary piscivore group. This suggests that marine reptile apex predators relied on a single dietary calcium source, compatible with the vulnerable wasp-waist food webs of the modern world [16]. This inferred peculiar ecosystem structure may help explain plesiosaurian and mosasaurid extinction following the end-Cretaceous biological crisis. Copyright © 2017 Elsevier Ltd. All rights reserved.
POROSIMETRY BY RANDOM NODE STRUCTURING IN VIRTUAL CONCRETE
Directory of Open Access Journals (Sweden)
Piet Stroeven
2012-05-01
Two different porosimetry methods are presented in two successive papers; inspiration for their development came from the rapidly-exploring random tree (RRT) approach used in robotics. The novel methods are applied to virtual cementitious materials produced by HADES, a modern concurrent algorithm-based discrete element modeling system. This makes it possible to realistically simulate all aspects of particulate matter that influence structure-sensitive features of the pore network in maturing concrete, namely the size, shape and dispersion of the aggregate and cement particles. Pore space is a complex, tortuous entity, and the practical methods conventionally applied for the assessment of pore size distribution may fail or present biased information. Among them, mercury intrusion porosimetry and 2D quantitative image analysis are popular. The mathematical morphology operator "opening" can be applied to sections and can even provide 3D information on pore size distribution, provided isotropy is guaranteed; however, aggregate grain surfaces lead to anisotropy in porosity. The presented methods allow exploration of pore space in the virtual material, after which the pore size distribution is derived from star volume measurements. In addition to the size of pores, their continuity is of crucial importance for durability estimation. Double-random multiple tree structuring (DRaMuTS), introduced earlier in IA&S (Stroeven et al., 2011b), and random node structuring (RaNoS) provide such information.
Bargaje, Rhishikesh; Trachana, Kalliopi; Shelton, Martin N; McGinnis, Christopher S; Zhou, Joseph X; Chadick, Cora; Cook, Savannah; Cavanaugh, Christopher; Huang, Sui; Hood, Leroy
2017-02-28
Steering the differentiation of induced pluripotent stem cells (iPSCs) toward specific cell types is crucial for patient-specific disease modeling and drug testing. This effort requires the capacity to predict and control when and how multipotent progenitor cells commit to the desired cell fate. Cell fate commitment represents a critical state transition or "tipping point" at which complex systems undergo a sudden qualitative shift. To characterize such transitions during iPSC-to-cardiomyocyte differentiation, we analyzed the gene expression patterns of 96 developmental genes at single-cell resolution. We identified a bifurcation event early in the trajectory when a primitive streak-like cell population segregated into the mesodermal and endodermal lineages. Before this branching point, we could detect the signature of an imminent critical transition: an increase in cell heterogeneity and coordination of gene expression. Correlation analysis of gene expression profiles at the tipping point indicates the transcription factors that drive the state transition toward each alternative cell fate and their relationships with specific phenotypic readouts. The latter facilitates small-molecule screening for differentiation efficiency. To this end, we set up an analysis of cell population structure at the tipping point after systematic variation of the protocol to bias the differentiation toward the mesodermal or endodermal cell lineage. We were able to predict the proportion of cardiomyocytes many days before cells manifested the differentiated phenotype. The analysis of cell populations undergoing a critical state transition thus affords a tool to forecast cell fate outcomes and can be used to optimize differentiation protocols to obtain desired cell populations.
Tuned mass absorbers on damped structures under random load
DEFF Research Database (Denmark)
Krenk, Steen; Høgsberg, Jan Becker
2008-01-01
A substantial literature exists on the optimal choice of parameters of a tuned mass absorber on a structure excited by a force or by ground acceleration with random characteristics in the form of white noise. In the absence of structural damping the optimal frequency tuning is determined from the mass ratio alone, and the damping can be determined subsequently. Only approximate results are available for the influence of damping in the original structure, typically in the form of series expansions. In the present paper it is demonstrated that, for typical mass ratios in the order of a few percent, an explicit approximation can be obtained for the response variance of a structure with initial damping in terms of the mass ratio and both damping ratios. Within this format the optimal tuning of the absorber turns out to be independent of the structural damping, and a simple explicit expression is obtained for the equivalent total damping.
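The classical undamped-structure baseline that the abstract refers to can be written down in a few lines: the frequency tuning depends on the mass ratio alone, and the absorber damping then follows from it. The Den Hartog-style formulas below are standard textbook results for broadband excitation, shown here only as background; the paper's extension to damped structures is not reproduced.

```python
import math

# Classical tuned-mass-absorber tuning for an undamped primary structure:
# the frequency ratio depends only on the mass ratio mu = m_absorber/m_structure,
# and the absorber damping ratio follows from mu (Den Hartog-style result).

def tmd_parameters(mu):
    """Optimal absorber tuning for mass ratio mu (undamped structure)."""
    alpha_opt = 1.0 / (1.0 + mu)  # absorber/structure frequency ratio
    zeta_opt = math.sqrt(3.0 * mu / (8.0 * (1.0 + mu) ** 3))  # absorber damping
    return alpha_opt, zeta_opt

for mu in (0.01, 0.02, 0.05):
    a, z = tmd_parameters(mu)
    print(f"mu={mu:.2f}: frequency ratio {a:.4f}, absorber damping {z:.4f}")
```

The paper's finding that this tuning remains near-optimal when the structure itself carries damping means the table produced above stays useful even outside the undamped idealization.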
Random matrix theory in nuclear structure: past, present and future
International Nuclear Information System (INIS)
Kota, V.K.B.
2012-01-01
Random matrix theory (RMT), introduced by Wigner in the 1950s to describe the statistical properties of slow-neutron resonances in heavy nuclei such as 232Th, was developed further in the 1960s by Dyson, Mehta, Porter and others, and in the 1970s by French, Pandey, Bohigas and others. Two later developments defined new directions for RMT applications in nuclear physics: the demonstration in 1984 by Bohigas and others that level fluctuations of quantum analogues of classically chaotic few-degrees-of-freedom systems follow random matrix theory (integrable systems follow Poisson statistics, as shown by Berry), and the recognition from 1995 onwards that two-body random matrix ensembles derived from the shell model have wide-ranging applications. Growth points of RMT in nuclear physics are: (i) analysis of nuclear data looking for order-chaos transitions and symmetry (time-reversal, parity, isospin) breaking; (ii) analysis of shell-model-driven embedded (or two-body) random matrix ensembles giving statistical properties generated by random interactions in the presence of a mean field; (iii) statistical nuclear spectroscopy generated by embedded ensembles for level densities, occupancies, GT strengths, transition strength sums and so on; (iv) the new paradigm of regular structures generated by random interactions, as brought out by studies using various nuclear models; (v) random matrix theory for nuclear reactions, with particular reference to open quantum systems; (vi) RMT results extending from nuclear physics to atomic physics, mesoscopic physics and quantum information science. Topics (i)-(vi), emphasizing recent results, are discussed. (author)
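The order-chaos diagnostics mentioned in (i) are usually phrased in terms of nearest-neighbour level spacings: chaotic (GOE) spectra follow the Wigner surmise, integrable spectra follow Poisson statistics. As a small self-contained check (not from the abstract), the two densities can be verified numerically to be normalized with unit mean spacing, and to differ qualitatively at zero spacing (level repulsion):

```python
import math

# Wigner surmise vs. Poisson spacing density, checked by simple
# trapezoid quadrature: both are normalized with unit mean spacing,
# but only the GOE density vanishes at s = 0 (level repulsion).

def wigner(s):
    """Wigner surmise for GOE nearest-neighbour spacings."""
    return (math.pi / 2.0) * s * math.exp(-math.pi * s * s / 4.0)

def poisson(s):
    """Spacing density for uncorrelated (Poisson) levels."""
    return math.exp(-s)

def integrate(f, a=0.0, b=20.0, n=50000):
    """Composite trapezoid rule on [a, b]."""
    h = (b - a) / n
    return h * (sum(f(a + i * h) for i in range(1, n)) + 0.5 * (f(a) + f(b)))

norm_w = integrate(wigner)
mean_w = integrate(lambda s: s * wigner(s))
print(round(norm_w, 4), round(mean_w, 4))  # both close to 1

print(wigner(0.0), poisson(0.0))  # 0.0 vs 1.0: level repulsion only for GOE
```

In practice one histograms unfolded level spacings from data or a shell-model calculation and compares against these two limiting curves.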
Exploring biological network structure with clustered random networks
Directory of Open Access Journals (Sweden)
Bansal Shweta
2009-12-01
Background: Complex biological systems are often modeled as networks of interacting units. Networks of biochemical interactions among proteins, epidemiological contacts among hosts, and trophic interactions in ecosystems, to name a few, have provided useful insights into the dynamical processes that shape and traverse these systems. The degrees of nodes (numbers of interactions) and the extent of clustering (the tendency for a set of three nodes to be interconnected) are two of many well-studied network properties that can fundamentally shape a system. Disentangling the interdependent effects of the various network properties, however, can be difficult. Simple network models can help us quantify the structure of empirical networked systems and understand the impact of various topological properties on dynamics. Results: Here we develop and implement a new Markov chain simulation algorithm to generate simple, connected random graphs that have a specified degree sequence and level of clustering, but are random in all other respects. The implementation of the algorithm (ClustRNet: Clustered Random Networks) provides the generation of random graphs optimized according to a local or global, and relative or absolute, measure of clustering. We compare our algorithm to other similar methods and show that ours more successfully produces desired network characteristics. Finding appropriate null models is crucial in bioinformatics research, and is often difficult, particularly for biological networks. As we demonstrate, the networks generated by ClustRNet can serve as random controls when investigating the impacts of complex network features beyond the byproduct of degree and clustering in empirical networks. Conclusion: ClustRNet generates ensembles of graphs of specified edge structure and clustering. These graphs allow for systematic study of the impacts of connectivity and redundancies on network function and dynamics. This process is a key step in
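The core move in degree-preserving Markov chain graph samplers of this kind is the double-edge swap: rewiring two edges so every node keeps its degree, while an acceptance rule steers a graph-level property such as clustering. The sketch below illustrates that idea with a greedy accept-if-not-worse rule; it is a hedged stand-in for, not a reproduction of, the published ClustRNet algorithm.

```python
import random
from itertools import combinations

# Sketch of a degree-preserving double-edge-swap chain biased toward
# higher global clustering (illustrative of the idea only; ClustRNet's
# actual moves, connectivity checks and acceptance rule differ).

def clustering(adj):
    """Global clustering coefficient: closed triples / connected triples."""
    closed = triples = 0
    for v, nbrs in adj.items():
        triples += len(nbrs) * (len(nbrs) - 1) // 2
        closed += sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return closed / triples if triples else 0.0

def double_edge_swap(adj, edges, rng):
    """Rewire (a,b),(c,d) -> (a,d),(c,b); all four degrees are unchanged."""
    (a, b), (c, d) = rng.sample(edges, 2)
    if len({a, b, c, d}) < 4 or d in adj[a] or b in adj[c]:
        return False  # would create a self-loop or multi-edge
    adj[a].remove(b); adj[b].remove(a); adj[c].remove(d); adj[d].remove(c)
    adj[a].add(d); adj[d].add(a); adj[c].add(b); adj[b].add(c)
    edges.remove((a, b)); edges.remove((c, d))
    edges.append((a, d)); edges.append((c, b))
    return True

# Start from a ring of 30 nodes plus chords (every node has degree 4).
rng = random.Random(1)
n = 30
edges = [(i, (i + 1) % n) for i in range(n)] + [(i, (i + 7) % n) for i in range(n)]
adj = {i: set() for i in range(n)}
for a, b in edges:
    adj[a].add(b); adj[b].add(a)

degrees = sorted(len(adj[v]) for v in adj)
for _ in range(2000):
    before = clustering(adj)
    snapshot = ({v: set(s) for v, s in adj.items()}, list(edges))
    if double_edge_swap(adj, edges, rng) and clustering(adj) < before:
        adj, edges = snapshot  # reject: clustering decreased, roll back

print(degrees == sorted(len(adj[v]) for v in adj))  # degree sequence preserved
```

A full sampler would also verify connectedness after each swap and use a proper Metropolis-style acceptance probability so the chain targets a specified clustering level rather than greedily maximizing it.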
Nomura, Ken-Ichi; Kalia, Rajiv K; Nakano, Aiichiro; Vashishta, Priya; van Duin, Adri C T; Goddard, William A
2007-10-05
Mechanical stimuli in energetic materials initiate chemical reactions at shock fronts prior to detonation. Shock sensitivity measurements provide widely varying results, and quantum-mechanical calculations are unable to handle systems large enough to describe shock structure. Recent developments in reactive force-field molecular dynamics (ReaxFF-MD), combined with advances in parallel computing, have paved the way to accurately simulate reaction pathways along with the structure of shock fronts. Our multimillion-atom ReaxFF-MD simulations of 1,3,5-trinitro-1,3,5-triazine (RDX) reveal that detonation is preceded by a transition from a diffuse shock front with well-ordered molecular dipoles behind it to a disordered dipole distribution behind a sharp front.
Directory of Open Access Journals (Sweden)
Stephen S Whitehead
2017-05-01
Infection caused by the four serotypes of dengue virus (DENV-1-4) is a leading cause of mosquito-borne disease. Clinically severe dengue disease is more common when secondary dengue infection occurs following prior infection with a heterologous dengue serotype. Other flaviviruses, such as yellow fever virus, Japanese encephalitis virus, and Zika virus, can also elicit antibodies which are cross-reactive to DENV. As candidate dengue vaccines become available in endemic settings and for individuals who have received other flavivirus vaccines, it is important to examine vaccine safety and immunogenicity in these flavivirus-experienced populations. We performed a randomized, controlled trial of the National Institutes of Health live attenuated tetravalent dengue vaccine candidate (TV003) in fifty-eight individuals with prior exposure to flavivirus infection or vaccination. As in prior studies of this vaccine in flavivirus-naive volunteers, flavivirus-experienced subjects received two doses of vaccine six months apart and were followed closely for clinical events, laboratory changes, viremia, and neutralizing antibody titers. TV003 was well tolerated, with few adverse events other than rash, which was predominantly mild. Following one dose, 87% of vaccinees had an antibody response to all four serotypes (tetravalent response), suggesting a robust immune response. In addition, 76% of vaccinees were viremic; mean peak titers ranged from 0.68–1.1 log10 PFU/mL and did not differ by serotype. The second dose of TV003 was not associated with viremia, rash, or a sustained boost in antibody titers, indicating that a single dose of the vaccine is likely sufficient to prevent viral replication and thus protect against disease. In comparison to the viremia and neutralizing antibody response elicited by TV003 in flavivirus-naïve subjects from prior studies, we found that subjects who were flavivirus-exposed prior to vaccination exhibited slightly higher DENV-3 viremia.
Time-variant random interval natural frequency analysis of structures
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method, namely the unified interval Chebyshev-based random perturbation method, to tackle the hybrid random-interval structural natural frequency problem. In the proposed approach, a random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation), and a Chebyshev surrogate model strategy is incorporated to formulate the statistical information of the natural frequency with regard to the interval inputs. The comprehensive analysis framework combines the superiority of both methods in a way that dramatically reduces the computational cost. The presented method is thus capable of investigating the day-to-day time-variant natural frequency of structures accurately and efficiently under the intrinsic creep effect of concrete with both probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of the natural frequency are captured through the optimization strategy embedded within the analysis procedure. Three numerical examples, with a progressive relationship in terms of both structure type and uncertainty variables, are presented to demonstrate the applicability, accuracy and efficiency of the proposed method.
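The hybrid random-interval idea can be illustrated on a single-degree-of-freedom oscillator: one parameter is probabilistic (random mass), one is an interval (stiffness), a Chebyshev surrogate is built for the statistical quantity over the interval, and the surrogate is scanned for its extreme bounds. Everything below (the SDOF model, the distributions, the interval limits) is an assumed toy, a much-simplified stand-in for the paper's method.

```python
import math
import random

# Toy hybrid random-interval analysis of a SDOF natural frequency
# omega = sqrt(k/m): random mass m (probabilistic), interval stiffness k.
# A Chebyshev surrogate of the Monte Carlo mean frequency is built over
# the stiffness interval, then scanned for its extreme bounds.

MEAN_M, STD_M = 2.0, 0.1    # random mass [kg] (assumed Gaussian)
K_LO, K_HI = 800.0, 1200.0  # interval stiffness [N/m] (assumed bounds)

def mean_freq(k, n_mc=20000):
    """Monte Carlo mean of sqrt(k/m) over the random mass."""
    rng = random.Random(0)  # fixed seed: same mass samples at every node
    total = 0.0
    for _ in range(n_mc):
        m = max(1e-6, rng.gauss(MEAN_M, STD_M))
        total += math.sqrt(k / m)
    return total / n_mc

# Chebyshev collocation nodes mapped onto [K_LO, K_HI]
N = 5
nodes = [0.5 * (K_LO + K_HI)
         + 0.5 * (K_HI - K_LO) * math.cos(math.pi * (2 * i + 1) / (2 * N))
         for i in range(N)]
values = [mean_freq(k) for k in nodes]

def surrogate(k):
    """Barycentric Lagrange interpolation through the Chebyshev nodes."""
    num = den = 0.0
    for xi, fi in zip(nodes, values):
        if abs(k - xi) < 1e-12:
            return fi
        w = 1.0
        for xj in nodes:
            if xj != xi:
                w /= (xi - xj)
        num += w / (k - xi) * fi
        den += w / (k - xi)
    return num / den

# Bound the mean frequency over the stiffness interval by scanning the surrogate.
grid = [K_LO + (K_HI - K_LO) * t / 200 for t in range(201)]
lo = min(surrogate(k) for k in grid)
hi = max(surrogate(k) for k in grid)
print(round(lo, 2), round(hi, 2))
```

The cost saving comes from calling the expensive statistical analysis only at the few collocation nodes; the cheap surrogate then feeds the bound-seeking optimization, as in the paper's framework.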
Moisture diffusivity in structure of random fractal fiber bed
Energy Technology Data Exchange (ETDEWEB)
Zhu, Fanglong, E-mail: zhufanglong_168@163.com [College of Textile, Zhongyuan University of Technology, Zhengzhou City (China); The Chinese People's Armed Police Forces Academy, Langfang City (China); Zhou, Yu; Feng, Qianqian [College of Textile, Zhongyuan University of Technology, Zhengzhou City (China); Xia, Dehong [School of Mechanical Engineering, University of Science and Technology, Beijing (China)
2013-11-08
A theoretical expression for the effective moisture diffusivity of a random fiber bed is derived using fractal theory, considering channels both parallel and perpendicular to the diffusion flow direction. In this Letter, the macroporous structure of a hydrophobic nonwoven material is investigated, and Knudsen diffusion and surface diffusion are neglected. The effective moisture diffusivity predicted by the present fractal model is compared with water vapor transfer rate (WVTR) experimental data and with values calculated from other theoretical models. This verifies the validity of the present fractal model for the diffusivity of fibrous structural beds.
The Fatigue Behavior of Steel Structures under Random Loading
DEFF Research Database (Denmark)
Agerskov, Henning
2008-01-01
Fatigue damage accumulation in steel structures under random loading has been studied in a number of investigations at the Technical University of Denmark. The fatigue life of welded joints has been determined both experimentally and from a fracture mechanics analysis. In the experimental part of the investigation, the test series carried through show a significant difference between constant amplitude and variable amplitude fatigue test results. Both the fracture mechanics analysis and the fatigue test results indicate that Miner's rule, which is normally used in the design against fatigue in steel structures, may give results which are unconservative, and that the validity of the results obtained from Miner's rule is questionable.
Glavind, J; Henriksen, T B; Kindberg, S F; Uldbjerg, N
2014-11-01
To evaluate women's preferences for the timing of elective cesarean section (ECS) scheduled prior to versus after 39 completed weeks of gestation. Secondary analyses from a randomized controlled open-label trial conducted at seven Danish tertiary hospitals from March 2009 to June 2011, with inclusion of singleton pregnant women with a healthy fetus. The women were allocated by a computerized telephone system to ECS scheduled at 38(+3) weeks or 39(+3) weeks of gestation. Dissatisfaction with the timing of ECS and the preferred timing of the procedure in a proposed future ECS delivery were evaluated. Data analyses were done by intention-to-treat, using logistic regression. A total of 1196 women (94%) completed an online questionnaire at follow-up eight weeks postpartum. In the 38 weeks group, 61 (10%) of 601 women were dissatisfied with the timing of their ECS, whereas in the 39 weeks group 157 (26%) of 595 were dissatisfied (adjOR 3.18, 95% CI 2.30; 4.40). The proportion of women who preferred the same timing in a future ECS was 272 (45%) in the 38 weeks group compared with 232 (39%) in the 39(+3) weeks group (adjOR 0.75, 95% CI 0.60; 0.95). The women in this trial preferred ECS scheduled prior to 39 weeks of gestation.
The Fatigue Behavior of Steel Structures under Random Loading
DEFF Research Database (Denmark)
Agerskov, Henning
2009-01-01
Fatigue damage accumulation in steel structures under random loading has been studied in a number of investigations at the Technical University of Denmark. The fatigue life of welded joints has been determined both experimentally and from a fracture mechanics analysis. In the experimental part of the investigation, fatigue test series with a total of 540 fatigue tests have been carried through on various types of welded plate test specimens and full-scale offshore tubular joints. The materials that have been used are either conventional structural steel or high-strength steel. The fatigue tests and the fracture mechanics analyses have been carried out using load histories which are realistic in relation to the types of structures studied, i.e. primarily bridges, offshore structures and chimneys. In general, the test series carried through show a significant difference between constant amplitude and variable amplitude fatigue test results.
Martínez, Carlos Alberto; Khare, Kshitij; Banerjee, Arunava; Elzo, Mauricio A
2017-03-21
It is important to consider heterogeneity of marker effects and allelic frequencies in across-population genome-wide prediction studies. Moreover, all regression models used in genome-wide prediction overlook the randomness of genotypes. In this study, a family of hierarchical Bayesian models for across-population genome-wide prediction was developed, modeling genotypes as random variables and allowing population-specific effects for each marker. The models shared a common structure and differed in the priors used and in the assumption about residual variances (homogeneous or heterogeneous). Randomness of genotypes was accounted for by deriving the joint probability mass function of marker genotypes conditional on allelic frequencies and pedigree information. As a consequence, these models incorporated kinship and genotypic information that not only accounted for heterogeneity of allelic frequencies, but also allowed inclusion of individuals with missing genotypes at some or all loci without the need for previous imputation. This was possible because the non-observed fraction of the design matrix was treated as an unknown model parameter. Implementation of these models and computation of some criteria for model comparison were illustrated using two simulated datasets. For each model, a simpler version ignoring population structure, but still accounting for randomness of genotypes, was proposed. Theoretical and computational issues, along with possible applications, extensions and refinements, are discussed. Some features of the models developed in this study make them promising for genome-wide prediction; the use of the information contained in the probability distribution of genotypes is perhaps the most appealing. Further studies are needed to assess the performance of the models proposed here and to compare them with conventional models used in genome-wide prediction. Copyright © 2017 Elsevier Ltd. All rights reserved.
Freedland, Stephen J.; Carducci, Michael; Kroeger, Nils; Partin, Alan; Rao, Jian-yu; Jin, Yusheng; Kerkoutian, Susan; Wu, Hong; Li, Yunfeng; Creel, Patricia; Mundy, Kelly; Gurganus, Robin; Fedor, Helen; King, Serina A.; Zhang, Yanjun; Heber, David; Pantuck, Allan J.
2013-01-01
Pomegranates slow prostate cancer xenograft growth and prolong PSA doubling times in single-arm human studies. Pomegranates’ effects on human prostate tissue are understudied. We hypothesized that orally administered pomegranate extract (POMx; POM Wonderful, Los Angeles, CA) would lower tissue 8-hydroxy-2-deoxyguanosine (8-OHdG), an oxidative stress biomarker. Seventy men were randomized to two tablets of POMx or placebo daily for up to 4 weeks prior to radical prostatectomy. Tissue was analyzed for intra-prostatic Urolithin A, a pomegranate metabolite, benign and malignant 8-OHdG, and cancer pS6 kinase, NFκB, and Ki67. The primary end-point was the difference in 8-OHdG, powered to detect a 30% reduction. POMx was associated with 16% lower benign tissue 8-OHdG (p=0.095), which was not statistically significant. POMx was well-tolerated with no treatment-related withdrawals. There were no differences in baseline clinicopathological features between arms. Urolithin A was detected in 21/33 patients in the POMx group vs. 12/35 in the placebo group (p=0.031). Cancer pS6 kinase, NFκB, Ki67, and serum PSA changes were similar between arms. POMx prior to surgery results in pomegranate metabolite accumulation in prostate tissues. Our primary end-point in this modest-sized short-term trial was negative. Future larger, longer studies are needed to more definitively test whether POMx reduces prostate oxidative stress, as well as further animal testing to better understand the multiple mechanisms through which POMx may alter prostate cancer biology. PMID:23985577
International Nuclear Information System (INIS)
Altomare, A.; Carrozzini, B.; Giacovazzo, C.; Guagliardi, A.; Moliterni, A.G.G.; Rizzi, R.
1996-01-01
For pt.II see ibid., p.674-81, 1996. The principal limitation of the diffraction methods for crystal structure analysis from powder data originates from the collapse of the three-dimensional reciprocal space into the one dimension of the powder diffraction pattern. The degradation of the information can make even the solution of small crystal structures difficult and can generate inefficiencies in the least-squares methods devoted to crystal structure refinement. In this paper, the current two-stage procedures, the first stage dedicated to powder-pattern decomposition and the second to direct phasing of powder data, are analysed. It is shown that in the first stage such procedures disregard a large amount of information that can become available during the process of crystal structure solution and analysis. The use of such information is essential for making direct-methods procedures more robust and for improving the accuracy of the least-squares techniques. The performances of EXTRA [Altomare, Burla, Cascarano, Giacovazzo, Guagliardi, Moliterni and Polidori (1995). J. Appl. Cryst. 28, 842-846], a program for full-pattern decomposition based on the Le Bail algorithm, and of SIRPOW.92 [Altomare, Burla, Cascarano, Giacovazzo, Guagliardi, Polidori and Camalli (1994). J. Appl. Cryst. 27, 435-436], a direct-methods program optimized for powder data, are discussed in order to offer the reader a logical pathway for the analysis of the traditional techniques and for the proposition of a new approach. It is shown that pattern-decomposition programs based on the Le Bail algorithm are able to exploit the prior information in a more effective way than Pawley-method-based decomposition programs. (orig.)
Exponential random graph models for networks with community structure.
Fronczak, Piotr; Fronczak, Agata; Bujok, Maksymilian
2013-09-01
Although the community structure organization is an important characteristic of real-world networks, most of the traditional network models fail to reproduce this feature. Therefore, the models are useless as benchmark graphs for testing community detection algorithms. They are also inadequate to predict various properties of real networks. With this paper we intend to fill this gap. We develop an exponential random graph approach to networks with community structure. To this end we mainly built upon the idea of blockmodels. We consider both the classical blockmodel and its degree-corrected counterpart and study many of their properties analytically. We show that in the degree-corrected blockmodel, node degrees display an interesting scaling property, which is reminiscent of what is observed in real-world fractal networks. A short description of Monte Carlo simulations of the models is also given in the hope of being useful to others working in the field.
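The blockmodel idea the abstract builds on can be illustrated with a minimal sketch: edges appear with high probability inside a community and low probability between communities. This is a toy of the classical (non-degree-corrected) case only, not the authors' code; all names and parameter values are invented.

```python
import random
from itertools import combinations

def sample_blockmodel(block_sizes, p_in, p_out, seed=0):
    """Sample a classical stochastic blockmodel: an edge appears with
    probability p_in inside a block and p_out between blocks -- the
    standard benchmark graph for community detection."""
    rng = random.Random(seed)
    # assign each node its block label
    labels = [b for b, size in enumerate(block_sizes) for _ in range(size)]
    n = len(labels)
    edges = set()
    for u, v in combinations(range(n), 2):
        p = p_in if labels[u] == labels[v] else p_out
        if rng.random() < p:
            edges.add((u, v))
    return labels, edges

def density(edges, labels, same_block):
    """Fraction of realized node pairs, within or between blocks."""
    n = len(labels)
    pairs = [(u, v) for u, v in combinations(range(n), 2)
             if (labels[u] == labels[v]) == same_block]
    return sum((u, v) in edges for u, v in pairs) / len(pairs)

# two communities of 30 nodes; dense inside, sparse between
labels, edges = sample_blockmodel([30, 30], p_in=0.5, p_out=0.05)
```

Sampled densities should sit near the generating probabilities, with the within-block density clearly exceeding the between-block one.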
Energy Technology Data Exchange (ETDEWEB)
Moraczewski, Krzysztof, E-mail: kmm@ukw.edu.pl [Kazimierz Wielki University, Chodkiewicza 30, 85-064 Bydgoszcz (Poland); Rytlewski, Piotr [Kazimierz Wielki University, Chodkiewicza 30, 85-064 Bydgoszcz (Poland); Malinowski, Rafał [Institute for Engineering of Polymer Materials and Dyes, Marii Skłodowskiej-Curie 55, 87-100 Toruń (Poland); Tracz, Adam [Centre for Molecular and Macromolecular Studies of the Polish Academy of Sciences, Sienkiewicza 112, 90-363 Łódź (Poland); Żenkiewicz, Marian [Institute for Engineering of Polymer Materials and Dyes, Marii Skłodowskiej-Curie 55, 87-100 Toruń (Poland)
2015-03-01
The paper presents the results of studies to determine the applicability of plasma modification in the process of polylactide (PLA) surface preparation prior to autocatalytic metallization. The polylactide plasma modification was carried out in an oxygen or nitrogen atmosphere. The samples were tested with the following methods: scanning electron microscopy (SEM), atomic force microscopy (AFM), goniometry and X-ray photoelectron spectroscopy (XPS). Scanning electron microscopy and atomic force microscopy images are presented. The results of surface free energy calculations, performed on the basis of the contact angle measurements, are also presented, as are the results of the qualitative (degree of oxidation or nitridation) and quantitative analysis of the chemical composition of the polylactide surface layer. The results of the studies show that DC plasma modification performed in the proposed conditions is suitable as a method of surface preparation for polylactide metallization. - Highlights: • We modified the polylactide surface layer with plasma generated in oxygen or nitrogen. • We tested selected properties and the surface structure of modified samples. • DC plasma modification can be used to prepare the PLA surface for metallization. • For better results metallization should be preceded by a sonication process.
International Nuclear Information System (INIS)
Gouda, Y.S.; Eldeeb, N.A.; Omar, A.M.; Kohail, H.M.; El-Geneidy, M.M.; Elkerm, Y.M.
2006-01-01
Background: Multiple concepts of combined modality therapy for locally advanced inoperable non-small cell lung cancer have been investigated. These include induction chemotherapy, concomitant chemo-radiotherapy, and radiation only. To date, combined modality therapy, especially the use of concomitant chemo-radiotherapy, has led to promising results and was shown to be superior to radiotherapy alone in phase II studies. However, the optimum chemotherapeutic regimen to be used as well as the benefit of induction chemotherapy before concomitant chemo-radiotherapy are yet to be determined. Based on these observations, we investigated the use of paclitaxel and carboplatin concomitantly with radiotherapy and the benefit of two prior cycles of induction chemotherapy. Materials and Methods: In this trial 60 patients with locally advanced inoperable non-small cell lung cancer, good performance status and minimal weight loss were randomized into 3 groups each of 20 patients. Group A received 2 induction cycles of paclitaxel (175 mg/m²) and carboplatin (AUC 6) on days 1 and 28, followed by concomitant paclitaxel (45 mg/m²) and carboplatin (AUC 2) weekly with radiotherapy. Group B received concomitant carboplatin, paclitaxel (same doses as in group A) and radiotherapy with no prior induction chemotherapy. Group C received only radiotherapy to a total dose of 60 Gy in conventional fractionation. Results: A total of 60 patients were enrolled in this study between 1998 and 2000. Pretreatment characteristics, including age, gender, performance status, histological features and stage, were comparable in each group. The incidence of oesophagitis was significantly higher in groups A and B than in group C (p=0.023). Hematological toxicities were also significantly higher in groups A and B than in group C (p=0.003). The response rate was significantly higher in groups A and B than in group C (75%, 79%, and 40%, respectively) (p=0.020). The time to in-field progression was significantly
Conditional Random Fields for Pattern Recognition Applied to Structured Data
Directory of Open Access Journals (Sweden)
Tom Burr
2015-07-01
Full Text Available Pattern recognition uses measurements from an input domain, X, to predict their labels from an output domain, Y. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; if the label for a nearby pixel patch is then more likely to be “manmade”, there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Therefore, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. This paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.
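The conditional distribution P(Y|X = x) at the heart of this abstract can be made concrete with a toy linear-chain CRF over binary patch labels, computed by brute-force enumeration (feasible only for tiny label sequences). The potentials and weights below are invented purely for illustration.

```python
from itertools import product
from math import exp

# Toy linear-chain CRF over binary labels ("natural"=0, "manmade"=1).
# score(y, x) sums a unary potential (label agrees with local evidence)
# and a pairwise potential (neighbouring labels agree) -- the kind of
# output-domain structure the paper describes for pixel patches.
UNARY = 1.2      # weight on a label matching its local evidence
PAIRWISE = 0.8   # weight rewarding equal neighbouring labels

def score(y, x):
    s = sum(UNARY * (yi == xi) for yi, xi in zip(y, x))
    s += sum(PAIRWISE * (a == b) for a, b in zip(y, y[1:]))
    return s

def crf_distribution(x):
    """P(Y | X = x) by enumerating every label sequence."""
    labelings = list(product([0, 1], repeat=len(x)))
    weights = [exp(score(y, x)) for y in labelings]
    Z = sum(weights)  # partition function, conditional on x
    return {y: w / Z for y, w in zip(labelings, weights)}

# noisy local evidence: middle patch looks "natural", neighbours "manmade"
dist = crf_distribution((1, 0, 1))
best = max(dist, key=dist.get)
```

With these weights the smoothing term wins: the most probable labeling is (1, 1, 1) rather than the noisy evidence (1, 0, 1), illustrating how dependence in the output domain improves the prediction.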
Directory of Open Access Journals (Sweden)
Suyeon Kim
2018-05-01
Full Text Available The objective of this study was to determine the effect of prior knowledge and visual evaluation on support for rain garden installations. To achieve this objective, a survey of 100 visitors at three rain garden sites was conducted to obtain their prior knowledge of rain gardens, their support ratings for rain garden implementation, and their visual evaluation of rain gardens. Results of the analysis revealed that users’ visual evaluation of rain gardens played a moderating role in the relationship between prior knowledge and support for rain garden installations. In other words, education and publicity of rain gardens alone cannot increase support for rain gardens. However, if rain gardens are visually evaluated positively, the effects of education and publicity of rain gardens can be expected. Therefore, to successfully apply a rain garden policy in the future, basic consideration should be given to aesthetics in order to meet visitors’ visual expectations prior to education and publicity of rain gardens.
de Castro, Therese C; Carr, Debra J; Taylor, Michael C; Kieser, Jules A; Duncan, Warwick
2016-09-01
The interaction of blood and fabrics is currently a 'hot topic', since the understanding and interpretation of these stains is still in its infancy. A recent simplified perpendicular-impact experimental programme considering bloodstains generated on fabrics laid the foundations for understanding more complex scenarios. Blood rarely impacts apparel fabrics perpendicularly; therefore a systematic study was conducted to characterise the appearance of drip stains on inclined fabrics. The final drip stain appearance for 45° and 15° impact angles on torso apparel fabrics (100% cotton plain woven, 100% polyester plain woven, a blend of polyester and cotton plain woven, and 100% cotton single jersey knit) that had been laundered for six, 26 and 52 cycles prior to testing was investigated. The relationship between drop parameters (height and volume), angle and the stain characteristics (parent stain area, axes 1 and 2, and number of satellite stains) for each fabric was examined using analysis of variance. The appearance of the drip stains on these fabrics was distorted in comparison to drip stains on a hard, smooth surface. Examining the parent stain allowed for classification of stains occurring at an angle; however, the same could not be said for the satellite stains produced. All of the dried stains visible on the surface of the fabric were larger than just after the impacting event, indicating within-fabric spreading of blood due to capillary force (wicking). The cotton-containing fabrics spread the blood within the fabrics in all directions along the stain's circumference, while spreading within the polyester plain woven fabric occurred in only the weft (width of the fabric) and warp (length) directions. Laundering affected the formation of bloodstains on the blend plain woven fabric at both impact angles, although not all characteristics were significantly affected for the three impact conditions considered. The bloodstain characteristics varied due to the fibre content
International Nuclear Information System (INIS)
Viquez, Olga M.; Lai, Barry; Ahn, Jae Hee; Does, Mark D.; Valentine, Holly L.; Valentine, William M.
2009-01-01
-mediated inhibition of proteasome function and inhibition of cuproenzyme activity to neurotoxicity, and also to assess the potential of dithiocarbamates to promote oxidative stress and injury within the central nervous system. These evaluations were performed using an established model for dithiocarbamate-mediated demyelination in the rat, utilizing sciatic nerve, spinal cord and brain samples obtained from rats exposed to N,N-diethyldithiocarbamate (DEDC) by intra-abdominal pumps for periods of 2, 4, and 8 weeks and from nonexposed controls. The data supported the ability of DEDC to increase copper within myelin and to enhance oxidative stress prior to structural changes detectable by MET 2 . Evidence was also obtained that the excess copper produced by DEDC in the central nervous system is redox active and promotes oxidative injury.
International Nuclear Information System (INIS)
E Nielsen, Viveque; Bonnema, Steen; Hegedues, Laszlo; Grupe, Peter; Boel-Joergensen, Henrik
2005-01-01
Full text: Background: rhTSH increases the thyroid ¹³¹I uptake (RAIU) and may have a role in the context of ¹³¹I therapy of goiter. No placebo-controlled trial has yet been performed. Methods: In a double-blinded trial, 57 patients with nodular nontoxic goiter (51 F, 6 M) were randomized to receive either 0.3 mg rhTSH (n=28) or placebo (n=29) 24 h before ¹³¹I. The thyroid dose was calculated based on thyroid size (measured by ultrasound) and RAIU at 24 h and 96 h. Thyroid size and function and patient satisfaction were monitored for 12 months. Results: At baseline the median goiter volume was 51 ml (range: 20-99 ml) in the placebo group and 59 ml (25-92 ml) in the rhTSH group (p=0.75). Three months after ¹³¹I the goiter size was reduced to 38 ml (15-78 ml) and 43 ml (20-75 ml) in the two groups, respectively (p=0.001 within groups, p=0.96 between groups). At 12 months, the corresponding figures were 27 ml (15-82 ml) and 20 ml (6-59 ml); p=0.001 within groups compared with baseline, p=0.12 between groups. The relative goiter reduction at this time was 46 ± 22% in the placebo group, and 61 ± 15% in the rhTSH group (p=0.004). In addition to the influence of rhTSH, the magnitude of the goiter reduction correlated inversely with the initial goiter volume (p=0.019), whereas no significant correlation was found with the RAIU during therapy or with the absorbed thyroid dose. Discomfort during ¹³¹I was reported by 10 patients in the placebo group and by 15 patients in the rhTSH group (p=0.12). Permanent hypothyroidism developed in 12% in the placebo group and in 52% in the rhTSH group (p=0.005). Patient satisfaction was generally very high without any major within-group difference. Conclusion: In the first placebo-controlled double-blinded trial, we found that rhTSH prior to ¹³¹I therapy significantly improves thyroid size reduction by 33%, with a four-fold higher rate of hypothyroidism. These effects are, at least partially, mediated through other
Random matrices and chaos in nuclear physics: Nuclear structure
International Nuclear Information System (INIS)
Weidenmueller, H. A.; Mitchell, G. E.
2009-01-01
Evidence for the applicability of random-matrix theory to nuclear spectra is reviewed. In analogy to systems with few degrees of freedom, one speaks of chaos (more accurately, quantum chaos) in nuclei whenever random-matrix predictions are fulfilled. An introduction to the basic concepts of random-matrix theory is followed by a survey of the extant experimental information on spectral fluctuations, including a discussion of the violation of a symmetry or invariance property. Chaos in nuclear models is discussed for the spherical shell model, for the deformed shell model, and for the interacting boson model. Evidence for chaos also comes from random-matrix ensembles patterned after the shell model such as the embedded two-body ensemble, the two-body random ensemble, and the constrained ensembles. All this evidence points to the fact that chaos is a generic property of nuclear spectra, except for the ground-state regions of strongly deformed nuclei.
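The level repulsion that random-matrix theory predicts can be demonstrated with the smallest possible case, the 2×2 GOE matrix, whose eigenvalue spacing has a closed form. This is a standard textbook illustration (the Wigner surmise), not material from the review itself.

```python
import random
from math import sqrt, pi

def goe_spacing(rng):
    """Level spacing of a random 2x2 GOE matrix [[a, b], [b, c]].
    Eigenvalues are (a+c)/2 +- sqrt(((a-c)/2)**2 + b**2), so the
    spacing is sqrt((a-c)**2 + 4*b**2).  GOE convention: diagonal
    entries ~ N(0,1), off-diagonal ~ N(0, 1/2)."""
    a, c = rng.gauss(0, 1), rng.gauss(0, 1)
    b = rng.gauss(0, sqrt(0.5))
    return sqrt((a - c) ** 2 + 4 * b ** 2)

rng = random.Random(42)
spacings = [goe_spacing(rng) for _ in range(20000)]

# The spacing is Rayleigh-distributed (the Wigner surmise up to
# rescaling); its exact mean here is sqrt(pi) ~ 1.772.
mean_s = sum(spacings) / len(spacings)

# Level repulsion: near-degenerate levels are rare, in contrast to a
# Poissonian (integrable) spectrum where small spacings dominate.
frac_small = sum(s < 0.1 for s in spacings) / len(spacings)
```

The vanishing probability of very small spacings is the signature of quantum chaos discussed in the review; a Poisson spectrum would put roughly 4% of spacings below 0.1 here instead of a fraction of a percent.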
Merlin, R.; Bajema, K.; Nagle, J.; Ploog, K.
1987-01-01
We report structural studies of incommensurate and random GaAs-AlAs superlattices using Raman scattering by acoustic phonons. Properties of the structure factor of Fibonacci and Thue-Morse superlattices are discussed in some detail.
DEFF Research Database (Denmark)
Köyluoglu, H.U.; Nielsen, Søren R.K.; Cakmak, A.S.
1994-01-01
The paper deals with the first and second order statistical moments of the response of linear systems with random parameters subject to random excitation modelled as white-noise multiplied by an envelope function with random parameters. The method of analysis is basically a second order perturbation method using stochastic differential equations. The joint statistical moments entering the perturbation solution are determined by considering an augmented dynamic system with state variables made up of the displacement and velocity vector and their first and second derivatives with respect to the random parameters of the problem. Equations for partial derivatives are obtained from the partial differentiation of the equations of motion. The zero time-lag joint statistical moment equations for the augmented state vector are derived from the Itô differential formula. General formulation is given…
Bergman, J. J.; van Berkel, A. M.; Bruno, M. J.; Fockens, P.; Rauws, E. A.; Tijssen, J. G.; Tytgat, G. N.; Huibregtse, K.
2001-01-01
BACKGROUND: A prior Billroth II gastrectomy renders endoscopic sphincterotomy (EST) more difficult in patients with bile duct stones. Endoscopic balloon dilation (EBD) is a relatively easy procedure that potentially reduces the risk of bleeding and perforation. METHODS: Thirty-four patients with
Atomic structure calculations using the relativistic random phase approximation
International Nuclear Information System (INIS)
Cheng, K.T.; Johnson, W.R.
1981-01-01
A brief review is given for the relativistic random phase approximation (RRPA) applied to atomic transition problems. Selected examples of RRPA calculations on discrete excitations and photoionization are given to illustrate the need of relativistic many-body theories in dealing with atomic processes where both relativity and correlation are important
Slepian Simulations of Plastic Displacements of Randomly Excited Hysteretic Structures
DEFF Research Database (Denmark)
Lazarov, Boyan Stefanov
2003-01-01
The object of the study is a fast simulation method for generation and analysis of the plastic response of a randomly excited MDOF oscillator with several potential elements with elasto-plastic constitutive behavior. The oscillator is statically determinate with linear damping. The external … noise excited linear oscillator obtained from the elasto-plastic oscillator by totally removing the plastic domain. Thus the key to the applicability of the method is that the oscillator has a linear domain within which the response stays for a sufficiently long time to make the random response behave approximately as a stationary Gaussian process. This requires that the standard deviation of the stationary response is not too large as compared to the plastic yield limits. The Slepian model process for the behavior of the linear response is then simply the conditional mean (linear regression) of the process…
On a structure intermediate between quasiperiodic and random
International Nuclear Information System (INIS)
Hof, A.
1996-01-01
This paper proves rigorously that the structure factor of the “structure intermediate between quasiperiodic and random” introduced by Aubry, Godrèche, and Luck is purely singular continuous, apart from a delta function at zero, for “most” parameters; the proof draws on the Wonderland theorem.
Localized surface plasmon enhanced cellular imaging using random metallic structures
Son, Taehwang; Lee, Wonju; Kim, Donghyun
2017-02-01
We have studied fluorescence cellular imaging with randomly distributed localized near-fields induced by silver nano-islands. For the fabrication of nano-islands, a 10-nm silver thin film was evaporated on a BK7 glass substrate with an adhesion layer of 2-nm thick chromium. A micrometer-sized silver square pattern was defined using e-beam lithography and the film was then annealed at 200°C. Raw images were restored using the electric field distribution produced on the surface of random nano-islands. Nano-islands were modeled from SEM images. A 488-nm p-polarized light source was set to be incident at 60°. Simulation results show that localized electric fields were created among nano-islands and that their average size was 135 nm. The feasibility was tested using conventional total internal reflection fluorescence microscopy while the angle of incidence was adjusted to maximize field enhancement. Mouse macrophage cells were cultured on nano-islands, and actin filaments were selectively stained with FITC-conjugated phalloidin. Acquired images were deconvolved based on linear imaging theory, in which the molecular distribution was sampled by randomly distributed localized near-fields and blurred by the point spread function of far-field optics. The optimum fluorophore distribution was probabilistically estimated by repetitively matching a raw image. The deconvolved images are estimated to have a resolution in the range of 100-150 nm, largely determined by the size of the localized near-fields. We also discuss and compare the results with images acquired with periodic nano-aperture arrays in various optical configurations to excite localized plasmonic fields and to produce super-resolved molecular images.
Directory of Open Access Journals (Sweden)
Rachel C. Veasey
2015-07-01
Full Text Available Exercise undertaken in a fasted state can lead to higher post-exercise mental fatigue. The administration of a vitamin and mineral complex with guaraná (MVM + G) has been shown to attenuate mental fatigue and improve performance during cognitively demanding tasks. This placebo-controlled, double-blind, randomized, balanced cross-over study examined the effect of MVM + G consumed prior to morning exercise on cognitive performance, affect, exertion, and substrate metabolism. Forty active males (age 21.4 ± 3.0 years; body mass index (BMI) 24.0 ± 2.4 kg/m²; maximal oxygen consumption (V̇O2max) 57.6 ± 7.3 mL/min/kg) completed two main trials, consuming either MVM + G or placebo prior to a 30-min run at 60% V̇O2max. Supplementation prior to exercise led to a small but significant reduction in Rating of Perceived Exertion (RPE) during exercise compared to the placebo. The MVM + G combination also led to significantly increased accuracy of numeric working memory and increased speed of picture recognition, compared to the placebo. There were no significant effects of supplementation on any other cognitive or mood measures or on substrate metabolism during exercise. These findings demonstrate that consuming a vitamin and mineral complex containing guaraná, prior to exercise, can positively impact subsequent memory performance and reduce perceived exertion during a moderate-intensity run in active males.
ATLAS, an integrated structural analysis and design system. Volume 4: Random access file catalog
Gray, F. P., Jr. (Editor)
1979-01-01
A complete catalog is presented for the random access files used by the ATLAS integrated structural analysis and design system. ATLAS consists of several technical computation modules which output data matrices to corresponding random access file. A description of the matrices written on these files is contained herein.
Cheung, Mike W.-L.; Cheung, Shu Fai
2016-01-01
Meta-analytic structural equation modeling (MASEM) combines the techniques of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Both fixed-effects and random-effects models can be defined in MASEM.…
Phonon structures of GaN-based random semiconductor alloys
Zhou, Mei; Chen, Xiaobin; Li, Gang; Zheng, Fawei; Zhang, Ping
2017-12-01
Accurate modeling of thermal properties is strikingly important for developing next-generation electronics with high performance. Many thermal properties are closely related to phonon dispersions, such as sound velocity. However, randomly substituted semiconductor alloys AxB1-x usually lack translational symmetry, and simulation with periodic boundary conditions often requires large supercells, which makes the phonon dispersion highly folded and hardly comparable with experimental results. Here, we adopt a large supercell with randomly distributed A and B atoms to investigate the substitution effect on the phonon dispersions of semiconductor alloys systematically by using the phonon unfolding method [F. Zheng, P. Zhang, Comput. Mater. Sci. 125, 218 (2016)]. The results reveal the extent to which phonon band characteristics in (In,Ga)N and Ga(N,P) are preserved or lost at different compositions and q points. Generally, most characteristics of phonon dispersions can be preserved with indium substitution of gallium in GaN, while substitution of nitrogen with phosphorus strongly perturbs the phonon dispersion of GaN, showing a rapid disintegration of the Bloch characteristics of optical modes and introducing localized impurity modes. In addition, the sound velocities of both (In,Ga)N and Ga(N,P) display a nearly linear behavior as a function of substitution composition. Supplementary material in the form of one pdf file is available from the Journal web page at https://doi.org/10.1140/epjb/e2017-80481-0.
Analytical and Experimental Random Vibration of Nonlinear Aeroelastic Structures.
1987-01-28
In civil engineering the mechanical and strength properties of the material vary; parameter variations also exist in disk eccentricity and disk properties. These parameters define the loading and resistance strength of the structure. Figure 10 shows the comparison between theoretical and experimental natural …
Special quasirandom structures for binary/ternary group IV random alloys
Chroneos, Alexander I.; Jiang, Chao; Grimes, Robin W.; Schwingenschlögl, Udo
2010-01-01
Simulation of defect interactions in binary/ternary group IV semiconductor alloys at the density functional theory level is difficult due to the random distribution of the constituent atoms. The special quasirandom structures approach is a
RANDOM FUNCTIONS AND INTERVAL METHOD FOR PREDICTING THE RESIDUAL RESOURCE OF BUILDING STRUCTURES
Directory of Open Access Journals (Sweden)
Shmelev Gennadiy Dmitrievich
2017-11-01
Full Text Available Subject: the possibility of using random functions and the interval prediction method for estimating the residual life of building structures in currently used buildings. Research objectives: coordination of the ranges of values used to develop predictions with the random functions that characterize the processes being predicted. Materials and methods: in performing this research, the method of random functions and the method of interval prediction were used. Results: in the course of this work, the basic properties of random functions, including the properties of families of random functions, are studied. The coordination of time-varying impacts and loads on building structures is considered from the viewpoint of their influence on structures and the representation of the structures’ behavior in the form of random functions. Several models of random functions are proposed for predicting individual parameters of structures. For each of the proposed models, its scope of application is defined. The article notes that the considered forecasting approach has been used many times at various sites. In addition, the available results allowed the authors to develop a methodology for assessing the technical condition and residual life of building structures for currently used facilities. Conclusions: we studied the possibility of using random functions and processes for the purposes of forecasting the residual service lives of structures in buildings and engineering constructions. We considered the possibility of using an interval forecasting approach to estimate changes in the defining parameters of building structures and their technical condition. A comprehensive technique for forecasting the residual life of building structures using the interval approach is proposed.
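The interval idea described above can be sketched in a minimal form: fit a trend to inspection data for a degrading parameter, widen it into an interval, and read off the residual life where the pessimistic bound crosses the limiting value. This is an invented illustration, not the authors' method; the trend model, band width, and all numbers are assumptions.

```python
def fit_line(ts, ys):
    """Ordinary least squares for y = a + b*t."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    b = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / \
        sum((t - mt) ** 2 for t in ts)
    a = my - b * mt
    return a, b

def residual_life(ts, ys, limit, band):
    """Years from the last inspection until the pessimistic bound
    (trend + band, for a damage parameter growing toward its limit)
    reaches `limit`.  Returns 0 if the limit is already exceeded."""
    a, b = fit_line(ts, ys)
    assert b > 0, "sketch assumes a growing damage parameter"
    t_limit = (limit - band - a) / b
    return max(0.0, t_limit - ts[-1])

# hypothetical crack width (mm) at 5-yearly inspections; limit 3.0 mm
years = [0, 5, 10, 15, 20]
width = [0.5, 0.9, 1.4, 1.8, 2.2]
life = residual_life(years, width, limit=3.0, band=0.2)
```

The interval enters through the band: the optimistic and pessimistic bounds bracket the crossing time, and the conservative (earlier) crossing is reported as the residual life.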
Directory of Open Access Journals (Sweden)
Nimrod Noruwana
2012-06-01
Full Text Available There appears to be a lack of knowledge on the phases South African (SA) organisations go through while adopting agile methods. As a means to address this gap, this study uncovered empirical evidence on the phases SA organisations go through whilst adopting agile methods, as well as the disparities between agile prescriptions and the way SA organisations actually implement agile methods. The data collected using a case study approach were analysed through the lens of Actor-Network Theory (ANT). The results reveal that there is no structured process for adopting agile methods and organisations go through various phases in their attempts to adopt them. During these phases, organisations face challenges which are culture- as well as people-related. Through this study, South African practitioners can now be aware that before adopting an agile methodology, there has to be a common understanding of the problems at hand and the envisioned solution. The findings also inform aspiring adopters in South Africa that adoption of the methods does not have to be as prescribed; they are free to adopt only those aspects the organisations need most.
Berwanger, Otavio; Santucci, Eliana Vieira; de Barros E Silva, Pedro Gabriel Melo; Jesuíno, Isabella de Andrade; Damiani, Lucas Petri; Barbosa, Lilian Mazza; Santos, Renato Hideo Nakagawa; Laranjeira, Ligia Nasi; Egydio, Flávia de Mattos; Borges de Oliveira, Juliana Aparecida; Dall Orto, Frederico Toledo Campo; Beraldo de Andrade, Pedro; Bienert, Igor Ribeiro de Castro; Bosso, Carlos Eduardo; Mangione, José Armando; Polanczyk, Carisi Anne; Sousa, Amanda Guerra de Moraes Rego; Kalil, Renato Abdala Karam; Santos, Luciano de Moura; Sposito, Andrei Carvalho; Rech, Rafael Luiz; Sousa, Antônio Carlos Sobral; Baldissera, Felipe; Nascimento, Bruno Ramos; Giraldez, Roberto Rocha Corrêa Veiga; Cavalcanti, Alexandre Biasi; Pereira, Sabrina Bernardez; Mattos, Luiz Alberto; Armaganijan, Luciana Vidal; Guimarães, Hélio Penna; Sousa, José Eduardo Moraes Rego; Alexander, John Hunter; Granger, Christopher Bull; Lopes, Renato Delascio
2018-04-03
The effects of loading doses of statins on clinical outcomes in patients with acute coronary syndrome (ACS) and planned invasive management remain uncertain. To determine if periprocedural loading doses of atorvastatin decrease 30-day major adverse cardiovascular events (MACE) in patients with ACS and planned invasive management. Multicenter, double-blind, placebo-controlled, randomized clinical trial conducted at 53 sites in Brazil among 4191 patients with ACS evaluated with coronary angiography to proceed with a percutaneous coronary intervention (PCI) if anatomically feasible. Enrollment occurred between April 18, 2012, and October 6, 2017. Final follow-up for 30-day outcomes was on November 6, 2017. Patients were randomized to receive 2 loading doses of 80 mg of atorvastatin (n = 2087) or matching placebo (n = 2104) before and 24 hours after a planned PCI. All patients received 40 mg of atorvastatin for 30 days starting 24 hours after the second dose of study medication. The primary outcome was MACE, defined as a composite of all-cause mortality, myocardial infarction, stroke, and unplanned coronary revascularization through 30 days. Among the 4191 patients (mean age, 61.8 [SD, 11.5] years; 1085 women [25.9%]) enrolled, 4163 (99.3%) completed 30-day follow-up. A total of 2710 (64.7%) underwent PCI, 333 (8%) underwent coronary artery bypass graft surgery, and 1144 (27.3%) had exclusively medical management. At 30 days, 130 patients in the atorvastatin group (6.2%) and 149 in the placebo group (7.1%) had a MACE (absolute difference, 0.85% [95% CI, -0.70% to 2.41%]; hazard ratio, 0.88; 95% CI, 0.69-1.11; P = .27). No cases of hepatic failure were reported; 3 cases of rhabdomyolysis were reported in the placebo group (0.1%) and 0 in the atorvastatin group. Among patients with ACS and planned invasive management with PCI, periprocedural loading doses of atorvastatin did not reduce the rate of MACE at 30 days. These findings do not support the routine use
Escudier, Bernard; Sharma, Padmanee; McDermott, David F; George, Saby; Hammers, Hans J; Srinivas, Sandhya; Tykodi, Scott S; Sosman, Jeffrey A; Procopio, Giuseppe; Plimack, Elizabeth R; Castellano, Daniel; Gurney, Howard; Donskov, Frede; Peltola, Katriina; Wagstaff, John; Gauler, Thomas C; Ueda, Takeshi; Zhao, Huanyu; Waxman, Ian M; Motzer, Robert J
2017-12-01
The randomized, phase 3 CheckMate 025 study of nivolumab (n=410) versus everolimus (n=411) in previously treated adults (75% male; 88% white) with advanced renal cell carcinoma (aRCC) demonstrated significantly improved overall survival (OS) and objective response rate (ORR). To investigate which baseline factors were associated with OS and ORR benefit with nivolumab versus everolimus. Subgroup OS analyses were performed using Kaplan-Meier methodology. Hazard ratios were estimated using the Cox proportional hazards model. Nivolumab 3mg/kg every 2 wk or everolimus 10mg once daily. The minimum follow-up was 14 mo. Baseline subgroup distributions were balanced between nivolumab and everolimus arms. Nivolumab demonstrated an OS improvement versus everolimus across subgroups, including Memorial Sloan Kettering Cancer Center (MSKCC) and International Metastatic Renal Cell Carcinoma Database Consortium risk groups; age guide treatment decisions, and further supports nivolumab as the standard of care in previously treated patients with aRCC. We investigated the impact of demographic and pretreatment features on survival benefit and tumor response with nivolumab versus everolimus in advanced renal cell carcinoma (aRCC). Survival benefit and response were observed for multiple subgroups, supporting the use of nivolumab as a new standard of care across a broad range of patients with previously treated aRCC. The trial is registered on ClinicalTrials.gov as NCT01668784. Copyright © 2017 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Poynard, Thierry; Bruix, Jordi; Schiff, Eugene R; Diago, Moises; Berg, Thomas; Moreno-Otero, Ricardo; Lyra, Andre C; Carrilho, Flair; Griffel, Louis H; Boparai, Navdeep; Jiang, Ruiyun; Burroughs, Margaret; Brass, Clifford A; Albrecht, Janice K
2013-03-01
Therapeutic options for patients failing hepatitis C retreatment are limited. EPIC(3) included a prospective trial assessing long-term peginterferon alfa-2b (PegIFNα-2b) maintenance therapy in patients with METAVIR fibrosis scores (MFS) of F2 or F3 who previously failed hepatitis C retreatment. Patients with F2/F3 MFS who failed retreatment were randomized to PegIFNα-2b (0.5 μg/kg/week, n=270) or observation (n=270) for 36 months. Blinded liver biopsies obtained before retreatment and after maintenance therapy were evaluated using MFS and activity scores, and confirmatory testing was performed using FibroTest and ActiTest. In total, 348 patients had paired biopsies: 192 patients had missing post-treatment biopsies and were considered as having no change in fibrosis/activity scores. In total, 16% of patients receiving PegIFNα-2b and 11% of observation patients had improvement in MFS (p=0.32). More PegIFNα-2b than observation patients had improvement in activity score (20% vs. 9%; p 2.5 years, improvement in MFS or activity score was more common with PegIFNα-2b than observation (21% vs. 14%, p=0.08 and 26% vs. 10%, p 2.5 years. Both FibroTest and ActiTest were significantly improved during maintenance therapy. Copyright © 2012 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
Completely random measures for modelling block-structured sparse networks
DEFF Research Database (Denmark)
Herlau, Tue; Schmidt, Mikkel Nørgaard; Mørup, Morten
2016-01-01
Many statistical methods for network data parameterize the edge-probability by attributing latent traits to the vertices such as block structure and assume exchangeability in the sense of the Aldous-Hoover representation theorem. Empirical studies of networks indicate that many real-world networks...... have a power-law degree distribution, which in turn implies that the number of edges scales slower than quadratically in the number of vertices. These assumptions are fundamentally irreconcilable, as the Aldous-Hoover theorem implies quadratic scaling of the number of edges. Recently Caron and Fox...
Optical performance of random anti-reflection structured surfaces (rARSS) on spherical lenses
Taylor, Courtney D.
Random anti-reflection structured surfaces (rARSS) have been reported to improve transmittance of optical-grade fused silica planar substrates to values greater than 99%. These textures are fabricated directly on the substrates using reactive-ion/inductively-coupled plasma etching (RIE/ICP) techniques, and often result in transmitted spectra with no measurable interference effects (fringes) for a wide range of wavelengths. The RIE/ICP processes used to etch the rARSS are anisotropic and thus well suited for planar components. The improvement in spectral transmission has been found to be independent of optical incidence angles for values from 0° to +/-30°. Qualifying and quantifying the rARSS performance on curved substrates, such as convex lenses, is required to optimize the fabrication of the desired AR effect on optical-power elements. In this work, rARSS was fabricated on fused silica plano-convex (PCX) and plano-concave (PCV) lenses using a planar-substrate optimized RIE process to maximize optical transmission in the range from 500 to 1100 nm. An additional set of lenses was etched in a non-optimized ICP process to provide additional comparisons. Results are presented from optical transmission and beam propagation tests (optimized lenses only) of rARSS lenses for both TE and TM incident polarizations at a wavelength of 633 nm and over a 70° full field of view in both singlet and doublet configurations. These results suggest optimization of the fabrication process is not required, mainly due to the wide angle-of-incidence AR tolerance performance of the rARSS lenses. Non-optimized recipe lenses showed low transmission enhancement, and confirmed the need to optimize etch recipes prior to process transfer to PCX/PCV lenses. Beam propagation tests indicated no major beam degradation through the optimized lens elements. Scanning electron microscopy (SEM) images confirmed different structures between optimized and non-optimized samples.
Dual resonant structure for energy harvesting from random vibration sources at low frequency
Directory of Open Access Journals (Sweden)
Shanshan Li
2016-01-01
We introduce a design with a dual resonant structure which can harvest energy from random vibration sources in the low frequency range. The dual resonant structure consists of two spring-mass subsystems with different frequency responses, which exhibit strong coupling and broad bandwidth when the two masses collide with each other. Experiments with piezoelectric elements show that the energy harvesting device with dual resonant structure can generate higher power output than the sum of the two separate devices from random vibration sources.
Random Walk Model for Cell-To-Cell Misalignments in Accelerator Structures
International Nuclear Information System (INIS)
Stupakov, Gennady
2000-01-01
Due to manufacturing and construction errors, cells in accelerator structures can be misaligned relative to each other. As a consequence, the beam generates a transverse wakefield even when it passes through the structure on axis. The most important effect is the long-range transverse wakefield that deflects the bunches and causes growth of the bunch train projected emittance. In this paper, the effect of the cell-to-cell misalignments is evaluated using a random walk model that assumes that each cell is shifted by a random step relative to the previous one. The model is compared with measurements of a few accelerator structures
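The random walk model described above is simple to reproduce numerically. The sketch below is a hypothetical illustration (the cell count, step size and trial count are invented, not values from the paper): each cell is offset from the previous one by a Gaussian step, and a Monte Carlo check confirms that the RMS offset of the final cell grows like the square root of the number of cells.

```python
import math
import random

def cell_offsets(n_cells, step_rms, rng):
    """Absolute transverse offsets of each cell when every cell is shifted
    by a random step relative to the previous one (random walk model)."""
    offsets = []
    x = 0.0
    for _ in range(n_cells):
        x += rng.gauss(0.0, step_rms)
        offsets.append(x)
    return offsets

# Monte Carlo estimate of the RMS offset of the last cell.
rng = random.Random(0)
n_cells, step_rms, trials = 100, 5e-6, 2000   # e.g. a 5 um RMS step
last = [cell_offsets(n_cells, step_rms, rng)[-1] for _ in range(trials)]
rms_last = math.sqrt(sum(v * v for v in last) / trials)

# Random-walk prediction: RMS offset grows like sqrt(n_cells) * step_rms.
predicted = math.sqrt(n_cells) * step_rms
```

The sqrt(n) growth is what distinguishes the random walk model from independent (uncorrelated) cell offsets, whose RMS would stay constant along the structure.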
San Miguel, Jesus F.; Weisel, Katja C.; Song, Kevin W.; Delforge, Michel; Karlin, Lionel; Goldschmidt, Hartmut; Moreau, Philippe; Banos, Anne; Oriol, Albert; Garderet, Laurent; Cavo, Michele; Ivanova, Valentina; Alegre, Adrian; Martinez-Lopez, Joaquin; Chen, Christine; Renner, Christoph; Bahlis, Nizar Jacques; Yu, Xin; Teasdale, Terri; Sternas, Lars; Jacques, Christian; Zaki, Mohamed H.; Dimopoulos, Meletios A.
2015-01-01
Pomalidomide is a distinct oral IMiD® immunomodulatory agent with direct antimyeloma, stromal-support inhibitory, and immunomodulatory effects. The pivotal, multicenter, open-label, randomized phase 3 trial MM-003 compared pomalidomide + low-dose dexamethasone vs high-dose dexamethasone in 455 patients with refractory or relapsed and refractory multiple myeloma after failure of bortezomib and lenalidomide treatment. Initial results demonstrated significantly longer progression-free survival and overall survival with an acceptable tolerability profile for pomalidomide + low-dose dexamethasone vs high-dose dexamethasone. This secondary analysis describes patient outcomes by treatment history and depth of response. Pomalidomide + low-dose dexamethasone significantly prolonged progression-free survival and favored overall survival vs high-dose dexamethasone for all subgroups analyzed, regardless of prior treatments or refractory status. Both univariate and multivariate analyses showed that no variable relating to either the number (≤ or > 3) or type of prior treatment was a significant predictor of progression-free survival or overall survival. No cross-resistance with prior lenalidomide or thalidomide treatment was observed. Patients achieving a minimal response or better to pomalidomide + low-dose dexamethasone treatment experienced a survival benefit, which was even higher in those achieving at least a partial response (17.2 and 19.9 months, respectively, as compared with 7.5 months for patients with less than minimal response). These data suggest that pomalidomide + low-dose dexamethasone should be considered a standard of care in patients with refractory or relapsed and refractory multiple myeloma regardless of prior treatment. ClinicalTrials.gov: NCT01311687; EudraCT: 2010-019820-30. PMID:26160879
GenRGenS: Software for Generating Random Genomic Sequences and Structures
Ponty, Yann; Termier, Michel; Denise, Alain
2006-01-01
International audience; GenRGenS is a software tool dedicated to randomly generating genomic sequences and structures. It handles several classes of models useful for sequence analysis, such as Markov chains, hidden Markov models, weighted context-free grammars, regular expressions and PROSITE expressions. GenRGenS is the only program that can handle weighted context-free grammars, thus allowing the user to model and to generate structured objects (such as RNA secondary structures) of any giv...
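Of the model classes listed, a first-order Markov chain is the simplest to sketch. The following is a minimal stand-alone illustration of Markov-chain sequence generation in the spirit of GenRGenS, not its actual implementation; the transition probabilities are invented for the example.

```python
import random

def markov_sequence(length, transition, start, rng):
    """Generate a random sequence from a first-order Markov chain.
    `transition[s]` maps a state to a list of (next_state, prob) pairs."""
    seq = [start]
    for _ in range(length - 1):
        states, probs = zip(*transition[seq[-1]])
        seq.append(rng.choices(states, weights=probs, k=1)[0])
    return "".join(seq)

# Hypothetical GC-rich nucleotide model (probabilities are made up).
transition = {
    "A": [("A", 0.1), ("C", 0.4), ("G", 0.4), ("T", 0.1)],
    "C": [("A", 0.1), ("C", 0.4), ("G", 0.4), ("T", 0.1)],
    "G": [("A", 0.1), ("C", 0.4), ("G", 0.4), ("T", 0.1)],
    "T": [("A", 0.1), ("C", 0.4), ("G", 0.4), ("T", 0.1)],
}
rng = random.Random(42)
seq = markov_sequence(60, transition, "A", rng)
```

Higher-order chains and weighted grammars follow the same pattern, with the state enlarged to the last k symbols or to a grammar derivation, respectively.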
Statistics of the Von Mises Stress Response For Structures Subjected To Random Excitations
Directory of Open Access Journals (Sweden)
Mu-Tsang Chen
1998-01-01
Finite element-based random vibration analysis is increasingly used in computer-aided engineering software for computing statistics (e.g., root-mean-square values) of structural responses such as displacements, stresses and strains. However, these statistics can often be computed only for Cartesian responses. For the design of metal structures, a failure criterion based on an equivalent stress response, commonly known as the von Mises stress, is more appropriate and often used. This paper presents an approach for computing the statistics of the von Mises stress response for structures subjected to random excitations. Random vibration analysis is first performed to compute covariance matrices of Cartesian stress responses. Monte Carlo simulation is then used to perform scatter and failure analyses using the von Mises stress response.
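A minimal sketch of the two-stage approach described in the abstract, under assumed plane-stress conditions and an illustrative covariance matrix (not data from the paper): zero-mean Gaussian Cartesian stresses are sampled from the covariance produced by the random vibration step, and the von Mises statistics are then estimated by Monte Carlo.

```python
import numpy as np

def von_mises_plane_stress(s):
    """Von Mises stress for plane-stress components [s_xx, s_yy, s_xy]."""
    sxx, syy, sxy = s[..., 0], s[..., 1], s[..., 2]
    return np.sqrt(sxx**2 - sxx * syy + syy**2 + 3.0 * sxy**2)

# Covariance of the zero-mean Cartesian stress responses, as would be
# produced by a random vibration analysis (values are illustrative, MPa^2).
cov = np.array([[100.0, 30.0, 10.0],
                [ 30.0, 80.0,  5.0],
                [ 10.0,  5.0, 40.0]])

rng = np.random.default_rng(1)
samples = rng.multivariate_normal(np.zeros(3), cov, size=200_000)
vm = von_mises_plane_stress(samples)

rms_vm = np.sqrt(np.mean(vm**2))       # RMS of the equivalent stress
p_fail = np.mean(vm > 30.0)            # exceedance of a 30 MPa criterion
```

Note that E[vm^2] follows directly from the covariance entries (here 100 - 30 + 80 + 3*40 = 270 MPa^2), but the full distribution of the von Mises stress, and hence the failure probability, generally requires the sampling step.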
Application of the random vibration approach in the seismic analysis of LMFBR structures
International Nuclear Information System (INIS)
Preumont, A.
1988-01-01
The first part discusses the general topic of the spectral analysis of linear multi-degree-of-freedom structures subjected to a stationary random field. Particular attention is given to structures with non-classical damping and hereditary characteristics. The method is implemented in the computer programme RANDOM. Next, the same concepts are applied to multi-supported structures subjected to a stationary seismic excitation. The method is implemented in the computer programme SEISME. Two related problems are dealt with in the next two chapters: (i) the relation between the input of the random vibration analysis and the traditional ground motion specification for seismic analysis (the Design Response Spectra) and (ii) the application of random vibration techniques to the direct generation of floor response spectra. Finally, the problem of extracting information from costly time history analyses is addressed. This study has mainly been concerned with the methodology and the development of appropriate software. Some qualitative conclusions have been drawn regarding the expected benefit of the approach. They have been judged promising enough to motivate a benchmark exercise. Specifically, the random vibration approach will be compared to the current approximate methods (response spectrum) and time-history analyses (considered as representative of the true response) for a set of typical structures. The hope is that some of the flaws of the current approximate methods can be removed.
Special quasirandom structures for binary/ternary group IV random alloys
Chroneos, Alexander I.
2010-06-01
Simulation of defect interactions in binary/ternary group IV semiconductor alloys at the density functional theory level is difficult due to the random distribution of the constituent atoms. The special quasirandom structures approach is a computationally efficient way to describe the random nature. We systematically study the efficacy of the methodology and generate a number of special quasirandom cells for future use. In order to demonstrate the applicability of the technique, the electronic structures of E centers in Si1-xGex and Si1-x-yGexSny alloys are discussed for a range of nearest neighbor environments. © 2010 Elsevier B.V. All rights reserved.
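The idea behind special quasirandom structures can be sketched in one dimension: among candidate configurations at a fixed composition, keep the one whose pair correlation functions best match the ideal random-alloy values <ss> = (2x - 1)^2. The toy search below uses invented parameters; real SQS generation works on 3D supercells with many correlation shells and cluster figures.

```python
import random

def pair_correlations(spins, shells):
    """Average spin-spin correlation at each neighbour distance (periodic chain)."""
    n = len(spins)
    return [sum(spins[i] * spins[(i + r) % n] for i in range(n)) / n
            for r in shells]

def best_sqs(n_sites, x, shells, n_candidates, rng):
    """Search random configurations of a 1D binary chain A(1-x)B(x) for the
    one whose pair correlations best match the ideal random-alloy values."""
    n_b = round(x * n_sites)
    target = (2.0 * x - 1.0) ** 2      # <s_i s_j> of the ideal random alloy
    best, best_err = None, float("inf")
    for _ in range(n_candidates):
        spins = [1] * n_b + [-1] * (n_sites - n_b)
        rng.shuffle(spins)
        err = sum((c - target) ** 2 for c in pair_correlations(spins, shells))
        if err < best_err:
            best, best_err = spins, err
    return best, best_err

rng = random.Random(7)
sqs, err = best_sqs(n_sites=32, x=0.5, shells=[1, 2, 3],
                    n_candidates=3000, rng=rng)
```

The selected cell mimics the correlation functions of the infinite random alloy in a small periodic cell, which is what makes it usable in a density functional theory calculation.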
Directory of Open Access Journals (Sweden)
Kleinjan Jan H
2009-07-01
Abstract Background Stress urinary incontinence (SUI) is a common problem. In the Netherlands, 64,000 new patients each year, of whom 96% are women, consult their general practitioner because of urinary incontinence. Approximately 7500 urodynamic evaluations and approximately 5000 operations for SUI are performed every year. All major national and international guidelines from both gynaecological and urological scientific societies advise performing urodynamics prior to invasive treatment for SUI, but neither its effectiveness nor its cost-effectiveness has been assessed in a randomized setting. The Value of Urodynamics prior to Stress Incontinence Surgery (VUSIS) study evaluates the positive and negative effects with regard to outcome, as well as the costs of urodynamics, in women with symptoms of SUI in whom surgical treatment is considered. Methods/design A multicentre diagnostic cohort study will be performed with an embedded randomized controlled trial among women presenting with symptoms of (predominant) SUI. Urinary incontinence has to be demonstrated on clinical examination and/or voiding diary. Physiotherapy must have failed and surgical treatment needs to be under consideration. Patients will be excluded in case of previous incontinence surgery, pelvic organ prolapse more than 1 centimeter beyond the hymen and/or a residual bladder volume of more than 150 milliliter on ultrasound or catheterisation. Patients with discordant findings between the diagnosis based on urodynamic investigation and the diagnosis based on their history, clinical examination and/or micturition diary will be randomized to operative therapy or individually tailored therapy based on all available information. Patients will be followed for two years after treatment by their attending urologist or gynaecologist, in combination with the completion of questionnaires. Six hundred female patients will be recruited for registration from
Study of RNA structures with a connection to random matrix theory
International Nuclear Information System (INIS)
Bhadola, Pradeep; Deo, Nivedita
2015-01-01
This manuscript investigates the level of complexity and the thermodynamic properties of real RNA structures and compares these properties with those of random RNA sequences. A discussion of the similarities between the thermodynamic properties of the real structures and the nonlinear random matrix model of RNA folding is presented. The structural information contained in the PDB file is exploited to get the base pairing information. The complexity of an RNA structure is defined by a topological quantity called the genus, which is calculated from the base pairing information. Thermodynamic analysis of the real structures is done numerically. The real structures have a minimum free energy which is very small compared to randomly generated sequences of the same length. This analysis suggests that there are specific patterns in the structures which are preserved during the evolution of the sequences and that certain sequences are discarded by the evolutionary process. Further analysis of sequences of a fixed length reveals that the RNA structures exist in ensembles, i.e., although all the sequences in the ensemble have different series of nucleotides (sequences), they fold into structures that have the same pairs of hydrogen bonding as well as the same minimum free energy. The specific heat of the RNA molecule is numerically estimated at different lengths. The specific heat curve with temperature shows a bump, and for some RNAs a double peak behavior is observed. The same behavior is seen in the study of the random matrix model with nonlinear interaction of RNA folding. The bump in the nonlinear matrix model can be controlled by the change in the interaction strength.
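The genus of a structure can be computed directly from the base-pairing information by thickening the chord diagram of the pairs into a fatgraph and applying Euler's formula. A small self-contained sketch follows; the pair lists are invented examples, not structures from the PDB.

```python
def rna_genus(pairs):
    """Topological genus of an RNA structure, pseudoknots included.

    `pairs` is a list of (i, j) base pairs (0-based positions, i < j).
    Thickening the chord diagram into a fatgraph gives, via Euler's
    formula, g = (n_pairs - loops + 1) / 2, where `loops` is the number
    of boundary components of the fatgraph."""
    ends = sorted(p for pair in pairs for p in pair)
    slot = {p: idx for idx, p in enumerate(ends)}   # position -> circle slot
    n2 = 2 * len(pairs)
    mate = [0] * n2
    for i, j in pairs:
        mate[slot[i]], mate[slot[j]] = slot[j], slot[i]
    # Boundary loops = cycles of the permutation s -> mate[(s + 1) mod 2n].
    seen, loops = [False] * n2, 0
    for start in range(n2):
        if not seen[start]:
            loops += 1
            s = start
            while not seen[s]:
                seen[s] = True
                s = mate[(s + 1) % n2]
    return (len(pairs) - loops + 1) // 2

# A nested hairpin helix is planar (genus 0); the simplest pseudoknot,
# with two crossing pairs, has genus 1.
hairpin = [(0, 11), (1, 10), (2, 9)]
pseudoknot = [(0, 6), (3, 9)]
```

Nested and parallel chords leave the genus at zero, so this quantity counts only the irreducible crossings (pseudoknot complexity) of the structure.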
Correa, Loreto A; Zapata, Beatriz; Samaniego, Horacio; Soto-Gamboa, Mauricio
2013-09-01
Social life involves costs and benefits mostly associated with how individuals interact with each other. The formation of hierarchies inside social groups has evolved as a common strategy to avoid the high costs stemming from social interactions. Hierarchical relationships seem to be associated with different features such as body size, body condition and/or age, which determine dominance ability (the 'prior attributes' hypothesis). In contrast, the 'social dynamic' hypothesis suggests that the initial social context is more of a determinant in the formation of the hierarchy than specific individual attributes. Hierarchical rank places individuals in higher positions, which presumably increases resource accessibility to their benefit, including opportunities for reproduction. We evaluate the maintenance of hierarchy in a family group of guanacos (Lama guanicoe) and evaluate the possible mechanisms involved in the stability of these interactions and their consequences. We estimate the linearity of the social hierarchy and its dynamics. We find evidence of the formation of a highly linear hierarchy among females, with males positioned at the bottom of the hierarchy. This hierarchy is not affected by physical characteristics or age, suggesting that it is established only through intra-group interactions. Rank is not related to calves' weight gain either; however, subordinate females, with lower rank, exhibit higher rates of allosuckling. We found no evidence of hierarchical structure in calves, suggesting that hierarchical relationships in guanacos could be established during the formation of the family group. Hence, our results suggest that hierarchical dynamics could be related more to social dynamics than to prior attributes. We finally discuss the importance of hierarchies established by dominance and their role in minimizing the social costs of interactions. Copyright © 2013 Elsevier B.V. All rights reserved.
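Linearity of a dominance hierarchy, as estimated in studies of this kind, is commonly quantified with Landau's index h; whether the authors used this exact index is not stated, so the following is a generic sketch with invented dominance matrices.

```python
def landau_h(dominance):
    """Landau's linearity index h for a dominance matrix.

    dominance[i][j] = 1 if individual i dominates j, else 0.
    h = 1 for a perfectly linear hierarchy and drops towards 0 as
    circular (intransitive) relationships appear."""
    n = len(dominance)
    v = [sum(row) for row in dominance]   # how many others each one dominates
    mean = (n - 1) / 2.0
    return 12.0 / (n**3 - n) * sum((vi - mean) ** 2 for vi in v)

# Perfectly linear 4-animal hierarchy: 0 beats 1,2,3; 1 beats 2,3; etc.
linear = [[0, 1, 1, 1],
          [0, 0, 1, 1],
          [0, 0, 0, 1],
          [0, 0, 0, 0]]

# Same group, but with a cyclic (intransitive) triad among the top three.
cyclic = [[0, 1, 0, 1],
          [0, 0, 1, 1],
          [1, 0, 0, 1],
          [0, 0, 0, 0]]
```

With the cyclic triad, the three top animals each dominate two others, which flattens the spread of dominance scores and lowers h below 1.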
Harris, Wendy; Zhang, You; Yin, Fang-Fang; Ren, Lei
2017-03-01
To investigate the feasibility of using structural-based principal component analysis (PCA) motion-modeling and weighted free-form deformation to estimate on-board 4D-CBCT using prior information and extremely limited angle projections for potential 4D target verification of lung radiotherapy. A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In the previous method, each phase of the 4D-CBCT was generated by deforming a prior CT volume. The DFM was solved by a motion model extracted by a global PCA and free-form deformation (GMM-FD) technique, using a data fidelity constraint and deformation energy minimization. In this study, a new structural PCA method was developed to build a structural motion model (SMM) by accounting for potential relative motion pattern changes between different anatomical structures from simulation to treatment. The motion model extracted from planning 4DCT was divided into two structures: tumor and body excluding tumor, and the parameters of both structures were optimized together. Weighted free-form deformation (WFD) was employed afterwards to introduce flexibility in adjusting the weightings of different structures in the data fidelity constraint based on clinical interests. XCAT (computerized patient model) simulation with a 30 mm diameter lesion was simulated with various anatomical and respiratory changes from planning 4D-CT to on-board volume to evaluate the method. The estimation accuracy was evaluated by the volume percent difference (VPD)/center-of-mass-shift (COMS) between lesions in the estimated and "ground-truth" on-board 4D-CBCT. Different on-board projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy. The method was also evaluated against three lung patients. The SMM-WFD method achieved substantially better accuracy than the GMM-FD method for CBCT estimation using extremely
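The PCA motion-model step can be sketched with a toy data set: flattened deformation field maps (DFMs) from the planning 4D-CT phases are decomposed into a mean plus a few principal motion modes, and a new on-board DFM is then represented by a handful of mode coefficients. All sizes and values below are illustrative, not from the study, and real DFMs are 3D vector fields rather than flat vectors.

```python
import numpy as np

# Toy motion model: each row is one respiratory phase's DFM, flattened.
rng = np.random.default_rng(3)
n_phases, n_voxels = 10, 500
breathing = np.sin(np.linspace(0, 2 * np.pi, n_phases))[:, None]
basis = rng.standard_normal((1, n_voxels))        # dominant motion pattern
dfms = breathing * basis + 0.01 * rng.standard_normal((n_phases, n_voxels))

# PCA motion model: mean DFM plus a few principal components.
mean_dfm = dfms.mean(axis=0)
u, s, vt = np.linalg.svd(dfms - mean_dfm, full_matrices=False)
n_modes = 2
components = vt[:n_modes]                          # motion modes

# A new on-board DFM is reduced to its mode coefficients, so on-board
# estimation only has to optimize n_modes scalars (per structure, in the
# structural variant) instead of a full voxel-wise field.
new_dfm = 0.7 * basis[0] + 0.01 * rng.standard_normal(n_voxels)
coeffs = components @ (new_dfm - mean_dfm)
reconstructed = mean_dfm + coeffs @ components
rel_err = np.linalg.norm(reconstructed - new_dfm) / np.linalg.norm(new_dfm)
```

The structural variant described in the abstract applies this decomposition separately to the tumor and to the rest of the body, then weights the structures differently in the data fidelity term.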
Entropy of level-cut random Gaussian structures at different volume fractions.
Marčelja, Stjepan
2017-10-01
Cutting random Gaussian fields at a given level can create a variety of morphologically different two- or several-phase structures that have often been used to describe physical systems. The entropy of such structures depends on the covariance function of the generating Gaussian random field, which in turn depends on its spectral density. But the entropy of level-cut structures also depends on the volume fractions of different phases, which is determined by the selection of the cutting level. This dependence has been neglected in earlier work. We evaluate the entropy of several lattice models to show that, even in the cases of strongly coupled systems, the dependence of the entropy of level-cut structures on molar fractions of the constituents scales with the simple ideal noninteracting system formula. In the last section, we discuss the application of the results to binary or ternary fluids and microemulsions.
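The dependence highlighted in the abstract can be stated concretely: for a unit-variance Gaussian field cut at level a, the volume fraction of the upper phase is phi = 1 - Phi(a), and the claimed scaling follows the ideal mixing entropy -[phi ln phi + (1 - phi) ln(1 - phi)]. A quick numerical check follows (illustrative, using independent Gaussian samples rather than a spatially correlated field):

```python
import math
import random

def volume_fraction(level):
    """Volume fraction of the phase {y > level} for a unit-variance
    Gaussian field: phi = 1 - Phi(level)."""
    return 0.5 * math.erfc(level / math.sqrt(2.0))

def ideal_mixing_entropy(phi):
    """Ideal (noninteracting) mixing entropy per site, in units of k_B."""
    return -(phi * math.log(phi) + (1.0 - phi) * math.log(1.0 - phi))

# Monte Carlo check: threshold independent Gaussian samples and compare
# the empirical volume fraction with the analytic value.
rng = random.Random(0)
level, n = 0.5, 200_000
empirical = sum(1 for _ in range(n) if rng.gauss(0.0, 1.0) > level) / n

phi = volume_fraction(level)
s = ideal_mixing_entropy(phi)
```

The entropy is maximal at the symmetric cut (phi = 1/2, s = ln 2) and falls off as the cutting level moves away from zero, which is the volume fraction dependence the paper shows survives even for strongly coupled systems.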
Directory of Open Access Journals (Sweden)
Driss Sarsri
2014-05-01
In this paper, we propose a method to calculate the first two moments (mean and variance) of the structural dynamic response of a structure with uncertain variables subjected to random excitation. For this, the Newmark method is used to transform the equation of motion of the structure into a quasi-static equilibrium equation in the time domain. The Neumann expansion method is coupled with Monte Carlo simulations to calculate the statistical values of the random response. The use of modal synthesis methods can reduce the dimensions of the model before integration of the equation of motion. Numerical applications have been developed to highlight the effectiveness of the method developed to analyze the stochastic response of large structures.
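A minimal sketch of the Monte Carlo part of such an analysis (without the Neumann expansion or modal reduction): Newmark average-acceleration integration of a single oscillator with an uncertain stiffness under discrete white-noise excitation, with the response mean and variance estimated over repeated samples. All parameter values are invented.

```python
import random

def newmark_sdof(m, c, k, force, dt, beta=0.25, gamma=0.5):
    """Newmark-beta integration of m*u'' + c*u' + k*u = f(t), from rest.
    Returns the displacement history (average-acceleration scheme)."""
    u, v, a = 0.0, 0.0, force[0] / m
    keff = k + gamma / (beta * dt) * c + m / (beta * dt**2)
    us = [u]
    for f in force[1:]:
        rhs = (f
               + m * (u / (beta * dt**2) + v / (beta * dt)
                      + (1.0 / (2 * beta) - 1.0) * a)
               + c * (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                      + dt * (gamma / (2 * beta) - 1.0) * a))
        u_new = rhs / keff
        a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
                 - (1.0 / (2 * beta) - 1.0) * a)
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        us.append(u)
    return us

# Monte Carlo over uncertain stiffness and random excitation.
rng = random.Random(1)
dt, n_steps = 0.01, 500
m, c, k_mean, k_cov = 1.0, 0.6, 40.0, 0.1
responses = []
for _ in range(200):
    k = rng.gauss(k_mean, k_cov * k_mean)                   # uncertain variable
    force = [rng.gauss(0.0, 1.0) for _ in range(n_steps)]   # random excitation
    responses.append(newmark_sdof(m, c, k, force, dt)[-1])

mean_u = sum(responses) / len(responses)
var_u = sum((r - mean_u) ** 2 for r in responses) / (len(responses) - 1)
```

The Neumann expansion enters when the stiffness matrix is large: instead of refactorizing for every stiffness sample, the perturbed solution is expanded about the mean-stiffness factorization, which is what makes the Monte Carlo loop affordable for large structures.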
International Nuclear Information System (INIS)
Senatore, G.; Tosi, M.P.; Trieste Univ.
1981-08-01
The purpose of this letter is to indicate the way towards an unconventional optimized-random-phase-approximation (ORPA) approach to the structure of liquid metals and, in fact, to already provide a good first-order solution for such an approach.
High Efficiency Computation of the Variances of Structural Evolutionary Random Responses
Directory of Open Access Journals (Sweden)
J.H. Lin
2000-01-01
For structures subjected to stationary or evolutionary white/colored random noise, the various response variances satisfy algebraic or differential Lyapunov equations. The solution of these Lyapunov equations has traditionally been very difficult. A precise integration method is proposed in the present paper, which solves such Lyapunov equations accurately and very efficiently.
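For a small stationary problem, the algebraic Lyapunov equation A X + X A^T + Q = 0 can be solved directly via the Kronecker-product identity; this is a generic sketch, not the precise integration method the paper proposes, and the oscillator parameters are invented.

```python
import numpy as np

def solve_lyapunov(a, q):
    """Solve A X + X A^T + Q = 0 for X using the identity
    vec(A X + X A^T) = (I kron A + A kron I) vec(X)  (column-major vec)."""
    n = a.shape[0]
    eye = np.eye(n)
    lhs = np.kron(eye, a) + np.kron(a, eye)
    x = np.linalg.solve(lhs, -q.flatten(order="F"))
    return x.reshape((n, n), order="F")

# Stationary displacement/velocity covariance of an oscillator
# x'' + 2*zeta*w0*x' + w0^2*x = w(t), white noise with <w(t)w(t')> = q*delta.
w0, zeta, q = 4.0, 0.05, 2.0
a = np.array([[0.0, 1.0], [-w0**2, -2.0 * zeta * w0]])
qmat = np.array([[0.0, 0.0], [0.0, q]])

x = solve_lyapunov(a, qmat)
var_disp = x[0, 0]   # analytic value: q / (4*zeta*w0^3)
var_vel = x[1, 1]    # analytic value: q / (4*zeta*w0)
```

The Kronecker approach scales as O(n^6) in the state dimension, which is exactly why a large structural model needs a more efficient scheme such as the precise integration method advocated in the paper.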
Kasner, Scott E; Thomassen, Lars; Søndergaard, Lars; Rhodes, John F; Larsen, Coby C; Jacobson, Joth
2017-12-01
Rationale The utility of patent foramen ovale (PFO) closure for secondary prevention in patients with prior cryptogenic stroke is uncertain despite multiple randomized trials completed to date. Aims The Gore REDUCE Clinical Study (REDUCE) aims to establish superiority of patent foramen ovale closure in conjunction with antiplatelet therapy over antiplatelet therapy alone in reducing the risk of recurrent clinical ischemic stroke or new silent brain infarct in patients who have had a cryptogenic stroke. Methods and design This controlled, open-label trial randomized 664 subjects with cryptogenic stroke at 63 multinational sites in a 2:1 ratio to either antiplatelet therapy plus patent foramen ovale closure (with GORE® HELEX® Septal Occluder or GORE® CARDIOFORM Septal Occluder) or antiplatelet therapy alone. Subjects will be prospectively followed for up to five years. Neuroimaging is required for all subjects at baseline and at two years or study exit. Study outcomes The two co-primary endpoints for the study are freedom from recurrent clinical ischemic stroke through at least 24 months post-randomization and incidence of new brain infarct (defined as clinical ischemic stroke or silent brain infarct) through 24 months. The primary analyses are an unadjusted log-rank test and a binomial test of subject-based proportions, respectively, both on the intent-to-treat population, with adjustment for testing multiplicity. Discussion The REDUCE trial aims to target a patient population with truly cryptogenic strokes. Medical therapy is limited to antiplatelet agents in both arms thereby reducing confounding. The trial should determine whether patent foramen ovale closure with the Gore septal occluders is safe and more effective than medical therapy alone for the prevention of recurrent clinical ischemic stroke or new silent brain infarct; the neuroimaging data will provide an opportunity to further support the proof of concept. The main results are anticipated in 2017
de Castro, Therese C; Taylor, Michael C; Kieser, Jules A; Carr, Debra J; Duncan, W
2015-05-01
Bloodstain pattern analysis is the investigation of blood deposited at crime scenes and the interpretation of that pattern. The surface that the blood is deposited onto can distort the appearance of the bloodstain. The study of the interaction of blood and apparel fabrics is in its infancy, but the interaction of liquids and apparel fabrics has been well documented and investigated in the field of textile science (e.g. the processes of wetting and wicking of fluids on fibres, yarns and fabrics). A systematic study of the final appearance of drip stains on torso apparel fabrics (100% cotton plain woven, 100% polyester plain woven, a blend of polyester and cotton plain woven, and 100% cotton single jersey knit) that had been laundered for six, 26 and 52 cycles prior to testing is presented in this paper. The relationship between drop velocity (1.66±0.50m/s, 4.07±0.03m/s, 5.34±0.18m/s) and the stain characteristics (parent stain area, axes 1 and 2 and number of satellite stains) for each fabric was examined using analysis of variance. The experimental design and the effect of storing blood were investigated on a reference sample, which indicated that the day (up to five days) on which the drops were generated did not affect the bloodstain. The effect of prior laundering (six, 26 and 52 laundering cycles), fibre content (cotton vs. polyester vs. blend) and fabric structure (plain woven vs. single jersey knit) on the final appearance of the bloodstain was investigated. Distortion in the bloodstains produced on non-laundered fabrics indicated the importance of laundering fabrics to remove finishing treatments before conducting bloodstain experiments. For laundered fabrics, both the cotton fabrics and the blend had a circular to oval stain appearance, while the polyester fabric had a circular appearance with evidence of spread along the warp and weft yarns, which resulted in square-like stains at the lowest drop velocity. A significant (pfibre content (pfibres/yarns, while for the
A pilot cluster randomized controlled trial of structured goal-setting following stroke.
Taylor, William J; Brown, Melanie; Levack, William; McPherson, Kathryn M; Reed, Kirk; Dean, Sarah G; Weatherall, Mark
2012-04-01
To determine the feasibility, the cluster design effect, and the variance and minimal clinically important difference in the primary outcome in a pilot study of a structured approach to goal-setting. A cluster randomized controlled trial. Inpatient rehabilitation facilities. People who were admitted to inpatient rehabilitation following stroke who had sufficient cognition to engage in structured goal-setting and complete the primary outcome measure. Structured goal elicitation using the Canadian Occupational Performance Measure. Quality of life at 12 weeks using the Schedule for Individualised Quality of Life (SEIQOL-DW), Functional Independence Measure, Short Form 36 and Patient Perception of Rehabilitation (measuring satisfaction with rehabilitation). Assessors were blinded to the intervention. Four rehabilitation services and 41 patients were randomized. We found high values of the intraclass correlation for the outcome measures (ranging from 0.03 to 0.40) and high variance of the SEIQOL-DW (SD 19.6) in relation to the minimal clinically important difference of 2.1, leading to impractically large sample size requirements for a cluster randomized design. A cluster randomized design is not a practical means of avoiding contamination effects in studies of inpatient rehabilitation goal-setting. Other techniques for coping with contamination effects are necessary.
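The impractically large sample sizes follow directly from the design effect of cluster randomization, DEFF = 1 + (m − 1) × ICC. A minimal sketch with the pilot's ICC range (cluster size and function names are illustrative, not from the paper):

```python
def design_effect(cluster_size, icc):
    """Variance inflation factor for cluster randomization: DEFF = 1 + (m - 1) * ICC."""
    return 1.0 + (cluster_size - 1) * icc

def clustered_sample_size(n_individual, cluster_size, icc):
    """Sample size needed under cluster randomization, given the requirement
    n_individual for an individually randomized design."""
    return n_individual * design_effect(cluster_size, icc)

# At the upper ICC observed in the pilot (0.40), ten patients per cluster
# inflates the required sample size more than four-fold.
print(f"{design_effect(10, 0.40):.2f}")  # 4.60
print(f"{design_effect(10, 0.03):.2f}")  # 1.27
```

With the SEIQOL-DW's SD of 19.6 against a minimal clinically important difference of 2.1, even the uninflated sample size is large, and the design effect then multiplies it.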
The Hidden Flow Structure and Metric Space of Network Embedding Algorithms Based on Random Walks.
Gu, Weiwei; Gong, Li; Lou, Xiaodan; Zhang, Jiang
2017-10-13
Network embedding, which encodes all vertices in a network as a set of numerical vectors in accordance with its local and global structures, has drawn widespread attention. Network embedding not only learns significant features of a network for tasks such as clustering and link prediction, but also learns latent vector representations of the nodes, which provide theoretical support for a variety of applications, such as visualization, link prediction, node classification, and recommendation. As the latest progress of this research, several algorithms based on random walks have been devised. Although these algorithms have drawn much attention for their high learning efficiency and accuracy, there is still a lack of theoretical explanation, and the transparency of these algorithms has been doubted. Here, we propose an approach based on the open-flow network model to reveal the underlying flow structure and its hidden metric space for different random walk strategies on networks. We show that the essence of embedding based on random walks is the latent metric structure defined on the open-flow network. This not only deepens our understanding of random-walk-based embedding algorithms but also helps in finding new potential applications of network embedding.
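The walk corpus that these embedding algorithms train on is easy to generate; the sketch below (toy graph and function names are ours, not from the paper) produces the truncated random walks whose node-visitation flows define the kind of open-flow network the authors analyze:

```python
import random

def random_walks(adj, walk_len=5, walks_per_node=2, seed=42):
    """Generate truncated unbiased random walks -- the 'corpus' that
    DeepWalk-style embedding algorithms feed to a skip-gram model."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            for _ in range(walk_len - 1):
                nbrs = adj[walk[-1]]
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

# Toy graph: a triangle (0,1,2) with a pendant node 3.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walks = random_walks(adj)
print(len(walks))  # 2 walks per node over 4 nodes -> 8
```

Biased strategies (e.g. node2vec's return and in-out parameters) change only the neighbour-sampling step, which is exactly why the induced flow structure, rather than the walk mechanics, is the natural object of comparison.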
Zick, Stephanie E.
, fragmentation, and dispersiveness. In 2004-2012 TCs, increasing (decreasing) compactness is observed in the eastern and central (western) Gulf of Mexico. Dispersiveness increases prior to landfall in most cases; however, asymmetry and fragmentation increase more commonly in western (versus eastern) Gulf landfalls. These results indicate that structural changes occur in advance of landfall, while the TC inner core is positioned over warm Gulf of Mexico waters, particularly in storms that make landfall in the northern and western Gulf States.
Directory of Open Access Journals (Sweden)
You-Wei Zhang
A general symplectic method for the random response analysis of infinitely periodic structures subjected to stationary/non-stationary random excitations is developed using symplectic mathematics in conjunction with variable separation and the pseudo-excitation method (PEM). Starting from the equation of motion for a single loaded substructure, symplectic analysis is first used to eliminate the dependent degrees of freedom through condensation. A Fourier expansion of the condensed equation of motion is then applied to separate the variables of time and wave number, thus enabling the necessary recurrence scheme to be developed. The random response is finally determined by implementing PEM. The proposed method is justified by comparison with results available in the literature and is then applied to a more complicated time-dependent coupled system.
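The core idea of the pseudo-excitation method can be shown on the simplest possible case, a single-degree-of-freedom oscillator: replace the stationary random input of spectral density S_ff(ω) by the deterministic harmonic pseudo-excitation √S_ff · e^{iωt}, and the response PSD is the squared magnitude of the harmonic response. A minimal sketch (parameter values are illustrative only):

```python
import numpy as np

# SDOF oscillator: m*x'' + c*x' + k*x = f(t), with f a stationary random
# process of PSD S_ff(w). PEM solves the deterministic harmonic problem
# for the pseudo-excitation sqrt(S_ff)*exp(i*w*t) and squares the result.
m, c, k = 1.0, 0.4, 100.0
w = np.linspace(0.1, 30.0, 300)
S_ff = np.ones_like(w)                  # white-noise input PSD

H = 1.0 / (-m * w**2 + 1j * c * w + k)  # frequency response function
y_pseudo = H * np.sqrt(S_ff)            # response to the pseudo-excitation
S_yy = np.abs(y_pseudo) ** 2            # recovered response PSD

# PEM reproduces the standard random-vibration result S_yy = |H|^2 * S_ff.
print(np.allclose(S_yy, np.abs(H) ** 2 * S_ff))  # True
```

The method's payoff comes in the multi-degree-of-freedom, multi-support case, where one deterministic harmonic sweep replaces the full cross-spectral bookkeeping.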
International Nuclear Information System (INIS)
Preumont, A.; Shilab, S.; Cornaggia, L.; Reale, M.; Labbe, P.; Noe, H.
1992-01-01
This benchmark exercise is the continuation of the state-of-the-art review (EUR 11369 EN), which concluded that the random vibration approach could be an effective tool in seismic analysis of nuclear power plants, with potential advantages over time history and response spectrum techniques. As compared to the latter, the random vibration method provides an accurate treatment of multisupport excitations and non-classical damping, as well as the combination of high-frequency modal components. With respect to the former, the random vibration method offers direct information on statistical variability (probability distribution) and cheaper computations. The disadvantages of the random vibration method are that it is based on stationary results and requires a power spectral density input instead of a response spectrum. A benchmark exercise to compare the three methods from the various aspects mentioned above, on one or several simple structures, has been made. The following aspects have been covered with the simplest possible models: (i) statistical variability, (ii) multisupport excitation, (iii) non-classical damping. The random vibration method is therefore concluded to be a reliable method of analysis. Its use is recommended, particularly for preliminary design, owing to its computational advantage over multiple time history analysis.
Directory of Open Access Journals (Sweden)
Keqin Yan
2017-01-01
This chapter presents a reliability study for an offshore jacket structure with emphasis on the features of nonconventional modeling. Firstly, a random set model is formulated for modeling the random waves at an ocean site. Then, a jacket structure is investigated in a pushover analysis to identify the critical wave direction and key structural elements, based on the ultimate base shear strength. The selected probabilistic models are adopted for the important structural members, and the wave direction is specified in the weakest direction of the structure for a conservative safety analysis. The wave height model is processed in a P-box format when it is used in the numerical analysis. The models are applied to find the bounds of the failure probabilities for the jacket structure. The propagation of this wave model to the uncertainty in results is investigated in both an interval analysis and Monte Carlo simulation. The results are compared in the context of information content and numerical accuracy. Further, the failure probability bounds are compared with the conventional probabilistic approach.
Vickers, Andrew J.; Wolters, Tineke; Savage, Caroline J.; Cronin, Angel M.; O’Brien, M. Frank; Roobol, Monique J.; Aus, Gunnar; Scardino, Peter T.; Hugosson, Jonas; Schröder, Fritz H.; Lilja, Hans
2012-01-01
Purpose: Prostate specific antigen (PSA) velocity has been proposed as a marker to aid detection of prostate cancer. We sought to determine whether PSA velocity could predict the results of repeat biopsy in men with persistently elevated PSA after initial negative biopsy. Materials and Methods: We identified 1,837 men who participated in the Göteborg or Rotterdam section of the European Randomized Study of Screening for Prostate Cancer (ERSPC) and who had one or more subsequent prostate biopsies after an initial negative finding. We evaluated whether PSA velocity improved predictive accuracy beyond that of PSA alone. Results: There were a total of 2,579 repeat biopsies, of which 363 (14%) were positive for prostate cancer and 44 (1.7%) were high grade (Gleason score ≥7). Although PSA velocity was statistically associated with cancer risk (p<0.001), it had very low predictive accuracy (area under the curve [AUC] of 0.55). There was some evidence that PSA velocity improved AUC compared to PSA for high grade cancer. However, the small increase in risk associated with high PSA velocity - from 1.7% to 2.8% as velocity increased from 0 to 1 ng/ml/year - is of questionable clinical relevance. Conclusions: Men with a prior negative biopsy have a lower risk of prostate cancer at subsequent biopsies, with high grade disease particularly rare. We found little evidence to support the use of PSA velocity to aid decisions about repeat biopsy for prostate cancer. PMID:20643434
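An AUC of 0.55 means the marker barely outperforms a coin flip at ranking cancer cases above benign ones: AUC is exactly the probability that a randomly chosen positive scores higher than a randomly chosen negative. That probabilistic reading can be computed directly (toy scores below, not ERSPC data):

```python
def auc(scores_pos, scores_neg):
    """AUC = P(random positive scores above random negative), computed by
    pairwise comparison; ties count as 1/2 (the Mann-Whitney statistic)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# A weak marker barely separates the groups: AUC near 0.5,
# analogous to the 0.55 reported for PSA velocity.
cancer = [1.2, 0.4, 0.9, 0.1]
benign = [0.8, 0.3, 1.0, 0.2]
print(auc(cancer, benign))  # 9 of 16 pairs ordered correctly -> 0.5625
```

This is why a statistically significant association (p<0.001 in a large sample) can coexist with near-useless discrimination for individual decisions.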
Embedded random matrix ensembles from nuclear structure and their recent applications
Kota, V. K. B.; Chavda, N. D.
Embedded random matrix ensembles generated by random interactions (of low body rank, usually two-body) in the presence of a one-body mean field, introduced in nuclear structure physics, are now established to be indispensable in describing statistical properties of a large number of isolated finite quantum many-particle systems. Lie algebra symmetries of the interactions, as identified from the nuclear shell model and the interacting boson model, led to the introduction of a variety of embedded ensembles (EEs). These ensembles, with a mean field and a chaos-generating two-body interaction, generate, in three different stages, delocalization of wave functions in the Fock space of the mean-field basis states. The last stage corresponds to what one may call thermalization, and complex nuclei, as seen from many shell model calculations, lie in this region. Besides briefly describing these ensembles, their recent applications to nuclear structure are presented: (i) nuclear level densities with interactions; (ii) orbit occupancies; (iii) neutrinoless double beta decay nuclear transition matrix elements as transition strengths. In addition, applications that go beyond nuclear structure are presented briefly: (i) fidelity, decoherence, entanglement and thermalization in isolated finite quantum systems with interactions; (ii) quantum transport in disordered networks connected by many-body interactions with centrosymmetry; (iii) the semicircle to Gaussian transition in eigenvalue densities with k-body random interactions and its relation to the Sachdev-Ye-Kitaev (SYK) model for Majorana fermions.
Directory of Open Access Journals (Sweden)
Andreas Martin Lisewski
2008-09-01
The transmission of genomic information from coding sequence to protein structure during protein synthesis is subject to stochastic errors. To analyze transmission limits in the presence of spurious errors, Shannon's noisy channel theorem is applied to a communication channel between amino acid sequences and their structures established from a large-scale statistical analysis of protein atomic coordinates. While Shannon's theorem confirms that in close to native conformations information is transmitted with limited error probability, additional random errors in sequence (amino acid substitutions and in structure (structural defects trigger a decrease in communication capacity toward a Shannon limit at 0.010 bits per amino acid symbol at which communication breaks down. In several controls, simulated error rates above a critical threshold and models of unfolded structures always produce capacities below this limiting value. Thus an essential biological system can be realistically modeled as a digital communication channel that is (a sensitive to random errors and (b restricted by a Shannon error limit. This forms a novel basis for predictions consistent with observed rates of defective ribosomal products during protein synthesis, and with the estimated excess of mutual information in protein contact potentials.
Random vibration sensitivity studies of modeling uncertainties in the NIF structures
International Nuclear Information System (INIS)
Swensen, E.A.; Farrar, C.R.; Barron, A.A.; Cornwell, P.
1996-01-01
The National Ignition Facility is a laser fusion project that will provide an above-ground experimental capability for nuclear weapons effects simulation. This facility will achieve fusion ignition utilizing solid-state lasers as the energy driver. The facility will cover an estimated 33,400 m² at an average height of 5-6 stories. Within this complex, a number of beam transport structures will be housed that will deliver the laser beams to the target area within a 50 μm RMS radius of the target center. The beam transport structures are approximately 23 m long and reach heights of approximately 2-3 stories. Low-level ambient random vibrations are one of the primary concerns currently controlling the design of these structures. Low-level ambient vibrations, 10⁻¹⁰ g²/Hz over a frequency range of 1 to 200 Hz, are assumed to be present during all facility operations. Each structure described in this paper will be required to achieve and maintain 0.6 μrad RMS laser beam pointing stability for a minimum of 2 hours under these vibration levels. To date, finite element (FE) analysis has been performed on a number of the beam transport structures. Certain assumptions have to be made regarding structural uncertainties in the FE models. These uncertainties consist of damping values for concrete and steel, compliance within bolted and welded joints, and assumptions regarding the phase coherence of ground motion components. In this paper, the influence of these structural uncertainties on the predicted pointing stability of the beam line transport structures as determined by random vibration analysis will be discussed.
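As a sanity check on the stated environment, the overall acceleration level implied by a power spectral density is the square root of the PSD integrated over the band. A minimal sketch, assuming an ideally flat spectrum (real ambient spectra are not flat):

```python
import math

# RMS acceleration implied by a flat PSD of 1e-10 g^2/Hz over 1-200 Hz.
S = 1e-10            # PSD, g^2/Hz
f1, f2 = 1.0, 200.0  # band edges, Hz
rms_g = math.sqrt(S * (f2 - f1))
print(f"{rms_g:.2e} g")  # 1.41e-04 g
```

So the structures must hold microradian-class pointing under roughly a hundred micro-g of broadband excitation, which is why joint compliance and damping assumptions in the FE models matter so much.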
Constrained noninformative priors
International Nuclear Information System (INIS)
Atwood, C.L.
1994-10-01
The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean but with diffusion reflecting great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum entropy distribution in the transformed model. Examples are given
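The constrained maximum-entropy construction can be illustrated numerically on a discrete grid: subject to a fixed mean, the maximum-entropy distribution is an exponential tilting of the uniform, with the tilt parameter solved by bisection. (This sketch works on the untransformed scale for simplicity; the construction above applies the constraint in the transformed, approximately-location parameterization. Function names and grid are illustrative.)

```python
import math

def maxent_with_mean(xs, target_mean, lo=-50.0, hi=50.0):
    """Discrete maximum-entropy distribution on points xs subject to a fixed
    mean: the solution is the exponential tilting p_i ~ exp(theta * x_i),
    with theta found by bisection (the mean is monotone in theta)."""
    def mean(theta):
        w = [math.exp(theta * x) for x in xs]
        z = sum(w)
        return sum(x * wi for x, wi in zip(xs, w)) / z
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    w = [math.exp(theta * x) for x in xs]
    z = sum(w)
    return [wi / z for wi in w]

xs = [i / 10 for i in range(11)]           # grid on [0, 1]
p = maxent_with_mean(xs, target_mean=0.3)  # uniform would give mean 0.5
print(round(sum(x * pi for x, pi in zip(xs, p)), 6))  # 0.3
```

The tilted distribution concentrates mass toward small values just enough to hit the specified mean while otherwise staying as diffuse as possible, which is the spirit of the constrained noninformative prior.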
Damage Detection in Bridge Structure Using Vibration Data under Random Travelling Vehicle Loads
International Nuclear Information System (INIS)
Loh, C H; Hung, T Y; Chen, S F; Hsu, W T
2015-01-01
Due to the random nature of road excitation and the inherent uncertainties in the bridge-vehicle system, damage identification of a bridge structure through continuous monitoring under operating conditions becomes a challenging problem. Methods for system identification and damage detection of a continuous two-span concrete bridge structure in the time domain are presented, using interaction forces from random moving vehicles as excitation. The signals recorded at different locations of the instrumented bridge are mixed with signals from different internal and external (road roughness) vibration sources. The damaged structure is modelled as a stiffness reduction in one of the beam elements. For the purpose of system identification and damage detection, three different output-only modal analysis techniques are proposed: covariance-driven stochastic subspace identification (SSI-COV), a blind source separation algorithm (Second-Order Blind Identification) and the multivariate AR model. The advantages and disadvantages of the three algorithms are discussed. Finally, the null-space damage index, subspace damage indices and mode shape slope change are used to detect and locate the damage. The proposed approaches have been tested in simulation and proved to be effective for structural health monitoring. (paper)
Mean first passage time for random walk on dual structure of dendrimer
Li, Ling; Guan, Jihong; Zhou, Shuigeng
2014-12-01
The random walk approach has recently been widely employed to study the relations between the underlying structure and dynamics of complex systems. The mean first-passage time (MFPT) for random walks is a key index to evaluate the transport efficiency in a given system. In this paper we study analytically the MFPT in a dual structure of the dendrimer network, the Husimi cactus, which has a different application background and a different structure (it contains loops) from the dendrimer. By making use of the iterative construction, we explicitly determine both the partial mean first-passage time (PMFPT, the average of MFPTs to a given target) and the global mean first-passage time (GMFPT, the average of MFPTs over all couples of nodes) on the Husimi cactus. The obtained closed-form results show that PMFPT and GMFPT follow different scalings with the network order, suggesting that the target location has an essential influence on the transport efficiency. Finally, the impact that the loop structure could bring is analyzed and discussed.
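For any finite graph, the MFPTs to a fixed target satisfy a linear system: with the target absorbing, T_i = 1 + (1/deg(i)) Σ_{j∼i} T_j and T_target = 0. A minimal numerical sketch on a toy path graph (not the Husimi cactus itself, where the paper's iterative construction gives closed forms):

```python
import numpy as np

def mfpt_to_target(adj, target):
    """Mean first-passage times to `target` for an unbiased random walk:
    solve T_i = 1 + (1/deg(i)) * sum of T over neighbours, T_target = 0."""
    nodes = [v for v in adj if v != target]
    idx = {v: k for k, v in enumerate(nodes)}
    A = np.eye(len(nodes))
    b = np.ones(len(nodes))
    for v in nodes:
        for w in adj[v]:
            if w != target:
                A[idx[v], idx[w]] -= 1.0 / len(adj[v])
    T = np.linalg.solve(A, b)
    return {v: float(T[idx[v]]) for v in nodes}

# Path graph 0-1-2 with target 2: from the far end the walk needs 4 steps
# on average, from the middle 3.
adj = {0: [1], 1: [0, 2], 2: [1]}
print(mfpt_to_target(adj, 2))  # {0: 4.0, 1: 3.0}
```

Averaging such T values over targets (PMFPT) or over all node pairs (GMFPT) is exactly what the closed-form results on the Husimi cactus compute analytically.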
A special covariance structure for random coefficient models with both between and within covariates
International Nuclear Information System (INIS)
Riedel, K.S.
1990-07-01
We review random coefficient (RC) models in linear regression and propose a bias correction to the maximum likelihood (ML) estimator. Asymptotic expansions of the ML equations are given when the between-individual variance is much larger or smaller than the variance from within-individual fluctuations. The standard model assumes that all but one covariate varies within each individual (we denote the within covariates by the vector χ_1). We consider random coefficient models where some of the covariates do not vary within any single individual (we denote the between covariates by the vector χ_0). The regression coefficients β_k can only be estimated in the subspace X_k of X. Thus the number of individuals necessary to estimate β and the covariance matrix Δ of β increases significantly in the presence of more than one between covariate. When the number of individuals is sufficient to estimate β but not the entire matrix Δ, additional assumptions must be imposed on the structure of Δ. A simple reduced model is that the between component of β is fixed and only the within component varies randomly. This model fails because it is not invariant under linear coordinate transformations and it can significantly overestimate the variance of new observations. We propose a covariance structure for Δ without these difficulties by first projecting the within covariates onto the space perpendicular to the between covariates. (orig.)
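The projection step proposed at the end can be written down directly: form the orthogonal projector onto the span of the between covariates and apply its complement to the within covariates. A sketch with synthetic data (dimensions, variable names and the random data are illustrative):

```python
import numpy as np

# Project the within covariates (X1) onto the orthogonal complement of the
# between covariates (X0), so the covariance structure posited for the random
# coefficients does not mix the two blocks under linear reparameterization.
rng = np.random.default_rng(0)
X0 = rng.standard_normal((50, 2))   # between covariates (constant within an individual)
X1 = rng.standard_normal((50, 3))   # within covariates

P0 = X0 @ np.linalg.pinv(X0)        # orthogonal projector onto span(X0)
X1_perp = X1 - P0 @ X1              # within covariates, orthogonalized to X0

# The projected block is (numerically) orthogonal to the between covariates.
print(np.allclose(X0.T @ X1_perp, 0.0, atol=1e-10))  # True
```

Working with X1_perp instead of X1 makes the "only the within component varies randomly" restriction coordinate-free, which is the invariance the plain reduced model lacks.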
Multilevel covariance regression with correlated random effects in the mean and variance structure.
Quintero, Adrian; Lesaffre, Emmanuel
2017-09-01
Multivariate regression methods generally assume a constant covariance matrix for the observations. In case a heteroscedastic model is needed, the parametric and nonparametric covariance regression approaches can be restrictive in the literature. We propose a multilevel regression model for the mean and covariance structure, including random intercepts in both components and allowing for correlation between them. The implied conditional covariance function can be different across clusters as a result of the random effect in the variance structure. In addition, allowing for correlation between the random intercepts in the mean and covariance makes the model convenient for skewedly distributed responses. Furthermore, it permits us to analyse directly the relation between the mean response level and the variability in each cluster. Parameter estimation is carried out via Gibbs sampling. We compare the performance of our model to other covariance modelling approaches in a simulation study. Finally, the proposed model is applied to the RN4CAST dataset to identify the variables that impact burnout of nurses in Belgium. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Jeong, Chan-Seok; Kim, Dongsup
2016-02-24
Elucidating the cooperative mechanism of interconnected residues is an important component toward understanding the biological function of a protein. Coevolution analysis has been developed to model the coevolutionary information reflecting structural and functional constraints. Recently, several methods have been developed based on a probabilistic graphical model called the Markov random field (MRF), which have led to significant improvements for coevolution analysis; however, thus far, the performance of these models has mainly been assessed by focusing on the aspect of protein structure. In this study, we built an MRF model whose graphical topology is determined by the residue proximity in the protein structure, and derived a novel positional coevolution estimate utilizing the node weight of the MRF model. This structure-based MRF method was evaluated for three data sets, each of which annotates catalytic site, allosteric site, and comprehensively determined functional site information. We demonstrate that the structure-based MRF architecture can encode the evolutionary information associated with biological function. Furthermore, we show that the node weight can more accurately represent positional coevolution information compared to the edge weight. Lastly, we demonstrate that the structure-based MRF model can be reliably built with only a few aligned sequences in linear time. The results show that adoption of a structure-based architecture could be an acceptable approximation for coevolution modeling with efficient computation complexity.
Godeaux, Olivier; Kovac, Martina; Shu, Daniel; Grupping, Katrijn; Campora, Laura; Douha, Martine; Heineman, Thomas C; Lal, Himal
2017-05-04
This phase III, non-randomized, open-label, multi-center study (NCT01827839) evaluated the immunogenicity and safety of an adjuvanted recombinant subunit herpes zoster (HZ) vaccine (HZ/su) in adults aged ≥ 50 y with prior physician-documented history of HZ. Participants (stratified by age: 50-59, 60-69 and ≥ 70 y) received 2 doses of HZ/su 2 months apart and were followed-up for another 12 months. Anti-glycoprotein E (gE) antibodies were measured by enzyme-linked immunosorbent assay before vaccination and 1 month after the second dose (Month 3). Solicited local and general adverse events (AEs) were recorded for 7 d and unsolicited AEs for 30 d after each vaccination. Serious AEs were recorded until study end. The primary immunogenicity objective was met if the lower limit of the 95% confidence interval (CI) of the vaccine response rate (VRR), defined as a 4-fold increase in anti-gE over baseline, at Month 3 was ≥ 60%. 96 participants (32/age group) were enrolled. The primary immunogenicity objective was met, as the VRR at Month 3 was 90.2% (95% CI: 81.7-95.7). Geometric mean anti-gE antibody concentrations at Month 3 were similar across age groups. 77.9% and 71.6% of participants reported local and general solicited AEs, respectively. The most frequent solicited AEs were pain at injection site, fatigue, headache, myalgia and shivering. The HZ/su vaccine was immunogenic in adults aged ≥ 50 y with a physician-documented history of HZ, and no safety concerns were identified.
Li, Xiayue; Curtis, Farren S.; Rose, Timothy; Schober, Christoph; Vazquez-Mayagoitia, Alvaro; Reuter, Karsten; Oberhofer, Harald; Marom, Noa
2018-06-01
We present Genarris, a Python package that performs configuration space screening for molecular crystals of rigid molecules by random sampling with physical constraints. For fast energy evaluations, Genarris employs a Harris approximation, whereby the total density of a molecular crystal is constructed via superposition of single molecule densities. Dispersion-inclusive density functional theory is then used for the Harris density without performing a self-consistency cycle. Genarris uses machine learning for clustering, based on a relative coordinate descriptor developed specifically for molecular crystals, which is shown to be robust in identifying packing motif similarity. In addition to random structure generation, Genarris offers three workflows based on different sequences of successive clustering and selection steps: the "Rigorous" workflow is an exhaustive exploration of the potential energy landscape, the "Energy" workflow produces a set of low energy structures, and the "Diverse" workflow produces a maximally diverse set of structures. The latter is recommended for generating initial populations for genetic algorithms. Here, the implementation of Genarris is reported and its application is demonstrated for three test cases.
Zhang, Yulong; Wang, Tianyang; Zhang, Ai; Peng, Zhuoteng; Luo, Dan; Chen, Rui; Wang, Fei
2016-12-01
In this paper, we present the design and testing of a broadband electrostatic energy harvester with a dual resonant structure, which consists of two cantilever-mass subsystems, each with a mass attached at the free edge of a cantilever. Compared to traditional devices with a single resonant frequency, the proposed device with a dual resonant structure can resonate at two frequencies. Furthermore, when one of the cantilever-masses is oscillating at resonance, the vibration amplitude is large enough to make it collide with the other mass, which provides strong mechanical coupling between the two subsystems. Therefore, this device can harvest a decent power output from vibration sources over a broad frequency range. During the measurement, continuous power output of 6.2-9.8 μW was achieved under an external vibration amplitude of 9.3 m/s² at frequencies from 36.3 Hz to 48.3 Hz, which means the bandwidth of the device is about 30% of the central frequency. The broad bandwidth of the device provides a promising application for energy harvesting from scenarios with random vibration sources. The experimental results indicate that with the dual resonant structure, the vibration-to-electricity energy conversion efficiency can be improved by 97% when an external random vibration with a low frequency filter is applied.
International Nuclear Information System (INIS)
Deo, Omkar; Neithalath, Narayanan
2010-01-01
Research highlights: (1) Identified the relevant pore structure features of pervious concretes, provided methodologies to extract them, and quantified the influence of these features on compressive response. (2) Developed a model for the stress-strain relationship of pervious concretes, and the relationship between model parameters and parameters of the stress-strain response. (3) Developed a statistical model for compressive strength as a function of pore structure features, and a stochastic model for the sensitivity of pore structure features in strength prediction. Abstract: Properties of a random porous material such as pervious concrete are strongly dependent on its pore structure features, porosity being an important one among them. This study deals with developing an understanding of the material structure-compressive response relationships in pervious concretes. Several pervious concrete mixtures with different pore structure features are proportioned and subjected to static compression tests. The pore structure features such as pore area fractions, pore sizes, mean free spacing of the pores, specific surface area, and the three-dimensional pore distribution density are extracted using image analysis methods. The compressive stress-strain response of pervious concretes, a model to predict the stress-strain response, and its relationship to several of the pore structure features are outlined. Larger aggregate sizes and increases in paste volume fraction are observed to result in increased compressive strengths. The compressive response is found to be influenced by the pore sizes, their distributions and spacing. A statistical model is used to relate the compressive strength to the relevant pore structure features, which is then used as a base model in a Monte Carlo simulation to evaluate the sensitivity of the predicted compressive strength to the model terms.
International Nuclear Information System (INIS)
Derrida, B.; Flyvbjerg, H.
1987-02-01
The statistical properties of the multivalley structure of disordered systems and of randomly broken objects have many features in common. For all these problems, if W_s denotes the weight of the s-th piece, we show that the probability distributions P_1(W_1) of the largest piece W_1, P_2(W_2) of the second largest piece W_2, and Π(Y) of Y = Σ_s W_s² always have singularities at W_1 = 1/n, W_2 = 1/n and Y = 1/n, for n = 1, 2, 3, ... (orig.)
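The boundaries behind those singularities are easy to verify by simulation: for an object broken into n pieces, W_1 ≥ 1/n always (the largest piece is at least the average) and Y = Σ_s W_s² ≥ 1/n (Cauchy-Schwarz). A Monte Carlo sketch using uniform random break points, one simple breaking model among the many the analysis covers:

```python
import random

def break_stick(n_pieces, rng):
    """Break [0, 1] at n_pieces - 1 uniform random points; return the piece
    weights sorted from largest to smallest."""
    cuts = sorted(rng.random() for _ in range(n_pieces - 1))
    edges = [0.0] + cuts + [1.0]
    return sorted((b - a for a, b in zip(edges, edges[1:])), reverse=True)

rng = random.Random(1)
samples = [break_stick(4, rng) for _ in range(20000)]
w1 = [s[0] for s in samples]                 # largest piece W_1
y = [sum(x * x for x in s) for s in samples] # Y = sum of squared weights

# With n = 4 pieces, both W_1 and Y are bounded below by 1/n = 0.25 --
# the points at which P_1 and Pi develop their singularities.
print(min(w1) >= 0.25, min(y) >= 0.25)  # True True
```

Histogramming w1 or y near these bounds makes the kinks in the densities visible, the feature the paper establishes analytically for n = 1, 2, 3, ...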
DEFF Research Database (Denmark)
Nielsen, Søren R.K.; Sørensen, John Dalsgaard; Thoft-Christensen, Palle
1983-01-01
A method is presented for life-time reliability estimates of randomly excited yielding systems, assuming the structure to be safe when the plastic deformations are confined below certain limits. The accumulated plastic deformations during any single significant loading history are considered; the plastic deformation during several loadings can be modelled as a filtered Poisson process. Using the Markov property of this quantity, the considered first-passage problem as well as the related extreme distribution problems are then solved numerically, and the results are compared to simulation studies.
Directory of Open Access Journals (Sweden)
K. Rahmani
2018-05-01
In this paper we present a pipeline for high quality semantic segmentation of building facades using a Structured Random Forest (SRF), a Region Proposal Network (RPN) based on a Convolutional Neural Network (CNN), as well as rectangular fitting optimization. Our main contribution is that we employ features created by the RPN as channels in the SRF. We empirically show that this is very effective, especially for doors and windows. Our pipeline is evaluated on two datasets where we outperform current state-of-the-art methods. Additionally, we quantify the contribution of the RPN and the rectangular fitting optimization to the accuracy of the result.
Lauterbach, S.; Fina, M.; Wagner, W.
2018-04-01
Since structural engineering requires highly developed and optimized structures, the thickness dependency is one of the most controversially debated topics. This paper deals with stability analysis of lightweight thin structures combined with arbitrary geometrical imperfections. Generally known design guidelines only consider imperfections for simple shapes and loading, whereas for complex structures the lower-bound design philosophy still holds. Herein, uncertainties are considered with an empirical knockdown factor representing a lower bound of existing measurements. To fully understand and predict expected bearable loads, numerical investigations are essential, including geometrical imperfections. These are implemented into a stand-alone program code with a stochastic approach to compute random fields as geometric imperfections that are applied to nodes of the finite element mesh of selected structural examples. The stochastic approach uses the Karhunen-Loève expansion for the random field discretization. For this approach, the so-called correlation length l_c controls the random field in a powerful way. This parameter has a major influence on the buckling shape, and also on the stability load. First, the impact of the correlation length is studied for simple structures. Second, since most structures for engineering devices are more complex and combined structures, these are intensively discussed with the focus on constrained random fields for e.g. flange-web-intersections. Specific constraints for those random fields are pointed out with regard to the finite element model. Further, geometrical imperfections vanish where the structure is supported.
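A one-dimensional sketch of the discrete Karhunen-Loève expansion used for such imperfection fields: assuming an exponential correlation C(x, x') = σ² exp(−|x − x'|/l_c), the discrete KL modes are eigenvectors of the covariance matrix evaluated on the mesh nodes, and the correlation length l_c controls how many modes carry significant variance. (Grid, parameters and function names are illustrative, not from the paper.)

```python
import numpy as np

def kl_field_samples(x, l_c, sigma, n_modes, n_samples, seed=0):
    """Sample a 1-D Gaussian random field via a truncated discrete
    Karhunen-Loeve expansion with exponential correlation."""
    C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / l_c)
    lam, phi = np.linalg.eigh(C)                  # ascending eigenvalues
    lam = lam[::-1][:n_modes]                     # keep the n_modes largest
    phi = phi[:, ::-1][:, :n_modes]
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal((n_samples, n_modes))  # independent N(0,1) coefficients
    return xi @ (phi * np.sqrt(np.clip(lam, 0.0, None))).T

x = np.linspace(0.0, 1.0, 101)
fields = kl_field_samples(x, l_c=0.3, sigma=1.0, n_modes=10, n_samples=500)
print(fields.shape)  # (500, 101)
```

Each sampled field, applied as nodal perturbations of a finite element mesh, plays the role of one geometric imperfection realization; smaller l_c needs more modes for the same captured variance, which is why the correlation length dominates both buckling shape and stability load.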
Uniform Recovery Bounds for Structured Random Matrices in Corrupted Compressed Sensing
Zhang, Peng; Gan, Lu; Ling, Cong; Sun, Sumei
2018-04-01
We study the problem of recovering an $s$-sparse signal $\mathbf{x}^{\star}\in\mathbb{C}^n$ from corrupted measurements $\mathbf{y} = \mathbf{A}\mathbf{x}^{\star}+\mathbf{z}^{\star}+\mathbf{w}$, where $\mathbf{z}^{\star}\in\mathbb{C}^m$ is a $k$-sparse corruption vector whose nonzero entries may be arbitrarily large and $\mathbf{w}\in\mathbb{C}^m$ is a dense noise with bounded energy. The aim is to exactly and stably recover the sparse signal with tractable optimization programs. In this paper, we prove the uniform recovery guarantee of this problem for two classes of structured sensing matrices. The first class can be expressed as the product of a unit-norm tight frame (UTF), a random diagonal matrix and a bounded columnwise orthonormal matrix (e.g., partial random circulant matrix). When the UTF is bounded (i.e. $\mu(\mathbf{U})\sim1/\sqrt{m}$), we prove that with high probability, one can recover an $s$-sparse signal exactly and stably by $l_1$ minimization programs even if the measurements are corrupted by a sparse vector, provided $m = \mathcal{O}(s \log^2 s \log^2 n)$ and the sparsity level $k$ of the corruption is a constant fraction of the total number of measurements. The second class considers randomly sub-sampled orthogonal matrix (e.g., random Fourier matrix). We prove the uniform recovery guarantee provided that the corruption is sparse on certain sparsifying domain. Numerous simulation results are also presented to verify and complement the theoretical results.
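The l1 recovery program for this corrupted model can be illustrated with a simple proximal-gradient (ISTA) solver on the extended system [A I], jointly estimating the signal and the corruption. The matrix sizes, sparsity levels, and regularization weight below are hypothetical, and the paper's actual programs and structured matrices may differ:

```python
import numpy as np

def soft(u, t):
    """Soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def recover_corrupted(A, y, lam=0.05, n_iter=800):
    """ISTA on min_{x,z} 0.5*||A x + z - y||^2 + lam*(||x||_1 + ||z||_1),
    i.e. l1 minimization over the extended system B = [A  I]."""
    m, n = A.shape
    B = np.hstack([A, np.eye(m)])
    step = 1.0 / np.linalg.norm(B, 2) ** 2        # 1 / Lipschitz constant
    u = np.zeros(n + m)
    for _ in range(n_iter):
        u = soft(u - step * (B.T @ (B @ u - y)), step * lam)
    return u[:n], u[n:]                           # signal and corruption estimates

rng = np.random.default_rng(0)
m, n = 60, 120
A = rng.standard_normal((m, n)) / np.sqrt(m)      # Gaussian stand-in for a structured matrix
x_true = np.zeros(n); x_true[:4] = [2.0, -1.5, 3.0, 1.0]
z_true = np.zeros(m); z_true[:3] = [5.0, -4.0, 6.0]   # large sparse corruption
y = A @ x_true + z_true
x_hat, z_hat = recover_corrupted(A, y, lam=0.02)
```

The fallback identity block plays the role of the corruption's sparsifying basis; replacing A with a partial circulant or sub-sampled Fourier matrix gives the structured settings the paper analyzes.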
Document page structure learning for fixed-layout e-books using conditional random fields
Tao, Xin; Tang, Zhi; Xu, Canhui
2013-12-01
In this paper, a model is proposed to learn logical structure of fixed-layout document pages by combining support vector machine (SVM) and conditional random fields (CRF). Features related to each logical label and their dependencies are extracted from various original Portable Document Format (PDF) attributes. Both local evidence and contextual dependencies are integrated in the proposed model so as to achieve better logical labeling performance. With the merits of SVM as local discriminative classifier and CRF modeling contextual correlations of adjacent fragments, it is capable of resolving the ambiguities of semantic labels. The experimental results show that CRF based models with both tree and chain graph structures outperform the SVM model with an increase of macro-averaged F1 by about 10%.
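A minimal sketch of the chain-structured decoding step in such a model: given per-fragment label scores (e.g. from an SVM) and label-transition scores, Viterbi dynamic programming returns the jointly best label sequence. The scores here are placeholders, not the PDF-derived features of the paper:

```python
import numpy as np

def viterbi(local, trans):
    """Best label sequence in a linear-chain CRF.

    local: (T, K) per-fragment label scores (e.g. SVM log-potentials),
    trans: (K, K) score for moving from label i to label j."""
    T, K = local.shape
    score = local[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + trans + local[t][None, :]
        back[t] = np.argmax(cand, axis=0)         # best predecessor per label
        score = np.max(cand, axis=0)
    path = [int(np.argmax(score))]                # backtrack from the best end label
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

With zero transition scores this reduces to per-fragment argmax; nonzero transitions let context override ambiguous local evidence, which is the gain over the plain SVM that the abstract reports.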
Thermodynamics and structure of liquid metals from a consistent optimized random phase approximation
International Nuclear Information System (INIS)
Akinlade, O.; Badirkhan, Z.; Pastore, G.
2000-05-01
We study the thermodynamic and structural properties of several liquid metals to assess the validity of the generalized non-local model potential (GNMP) of Li et al. [J. Phys. F 16, 309 (1986)]. By using a new thermodynamically consistent version of the optimized random phase approximation (ORPA), especially adapted to continuous reference potentials, we improve our previous results obtained within the variational approach based on the Gibbs-Bogoliubov inequality. Hinging on the unified and very accurate evaluation of structure factors and thermodynamic quantities provided by the ORPA, we find that the GNMP yields satisfactory results for the alkali metals; however, those for the polyvalent metals point to a substantial inadequacy of the GNMP for high-valence systems. (author)
Livan, Giacomo; Alfarano, Simone; Scalas, Enrico
2011-07-01
We study some properties of eigenvalue spectra of financial correlation matrices. In particular, we investigate the nature of the large eigenvalue bulks which are observed empirically, and which have often been regarded as a consequence of the supposedly large amount of noise contained in financial data. We challenge this common knowledge by acting on the empirical correlation matrices of two data sets with a filtering procedure which highlights some of the cluster structure they contain, and we analyze the consequences of such filtering on eigenvalue spectra. We show that empirically observed eigenvalue bulks emerge as superpositions of smaller structures, which in turn emerge as a consequence of cross correlations between stocks. We interpret and corroborate these findings in terms of factor models, and we compare empirical spectra to those predicted by random matrix theory for such models.
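The random-matrix baseline the authors compare against can be illustrated by checking the eigenvalues of a pure-noise correlation matrix against the Marchenko-Pastur support; the sample sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 2000, 100                      # time points and "stocks" (illustrative sizes)
q = n / T
returns = rng.standard_normal((T, n))             # pure-noise returns
corr = np.corrcoef(returns, rowvar=False)         # n x n correlation matrix
eigs = np.linalg.eigvalsh(corr)

# Marchenko-Pastur support for the eigenvalue bulk of an iid correlation matrix
lam_minus = (1.0 - np.sqrt(q)) ** 2
lam_plus = (1.0 + np.sqrt(q)) ** 2
frac_inside = np.mean((eigs >= lam_minus) & (eigs <= lam_plus))
```

For real returns, eigenvalues escaping [lam_minus, lam_plus] are the candidates for genuine factor or cluster structure, which is the kind of deviation from the noise bulk the abstract investigates.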
DEFF Research Database (Denmark)
Holst, René; Jørgensen, Bent
2015-01-01
The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters, combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish.
Improving the chances of successful protein structure determination with a random forest classifier
Energy Technology Data Exchange (ETDEWEB)
Jahandideh, Samad [Sanford-Burnham Medical Research Institute, 10901 North Torrey Pines Road, La Jolla, CA 92307 (United States); Joint Center for Structural Genomics, (United States); Jaroszewski, Lukasz; Godzik, Adam, E-mail: adam@burnham.org [Sanford-Burnham Medical Research Institute, 10901 North Torrey Pines Road, La Jolla, CA 92307 (United States); Joint Center for Structural Genomics, (United States); University of California, San Diego, La Jolla, California (United States)
2014-03-01
Using an extended set of protein features calculated separately for the protein surface and interior, a new version of XtalPred based on a random forest classifier achieves a significant improvement in predicting the success of structure determination from the primary amino-acid sequence. Obtaining diffraction-quality crystals remains one of the major bottlenecks in structural biology. The ability to predict the chances of crystallization from the amino-acid sequence of the protein can, at least partly, address this problem by allowing a crystallographer to select homologs that are more likely to succeed and/or to modify the sequence of the target to avoid features that are detrimental to successful crystallization. In 2007, the now widely used XtalPred algorithm [Slabinski et al. (2007), Protein Sci. 16, 2472–2482] was developed. XtalPred classifies proteins into five ‘crystallization classes’ based on a simple statistical analysis of the physicochemical features of a protein. Here, towards the same goal, advanced machine-learning methods are applied and, in addition, the predictive potential of additional protein features such as predicted surface ruggedness, hydrophobicity, side-chain entropy of surface residues and amino-acid composition of the predicted protein surface is tested. The new XtalPred-RF (random forest) achieves a significant improvement in the prediction of crystallization success over the original XtalPred. To illustrate this, XtalPred-RF was tested by revisiting target selection from 271 Pfam families targeted by the Joint Center for Structural Genomics (JCSG) in PSI-2, and it was estimated that the number of targets entered into the protein-production and crystallization pipeline could have been reduced by 30% without lowering the number of families for which the first structures were solved. The prediction improvement depends on the subset of targets used as a testing set and reaches 100% (i.e. twofold) for the top class of predicted
First-principles study of ternary bcc alloys using special quasi-random structures
International Nuclear Information System (INIS)
Jiang Chao
2009-01-01
Using a combination of exhaustive enumeration and Monte Carlo simulated annealing, we have developed special quasi-random structures (SQSs) for ternary body-centered cubic (bcc) alloys with compositions of A1B1C1, A2B1C1, A6B1C1 and A2B3C3, respectively. The structures possess local pair and multisite correlation functions that closely mimic those of the random bcc alloy. We employed the SQSs to predict the mixing enthalpies, nearest-neighbor bond length distributions and electronic density of states of bcc Mo-Nb-Ta and Mo-Nb-V solid solutions. Our convergence tests indicate that even small-sized SQSs can give reliable results. Based on the SQS energetics, the predictive powers of the existing empirical ternary extrapolation models were assessed. The present results suggest that it is important to take into account the ternary interaction parameter in order to accurately describe the thermodynamic behavior of ternary alloys. The proposed SQSs are quite general and can be applied to other ternary bcc alloys.
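The idea of matching correlation functions to the random-alloy limit can be sketched on a toy periodic 1-D lattice with nearest-neighbour pairs only (the real SQSs are 3-D bcc supercells with multisite correlations); everything below is a simplified illustration:

```python
import numpy as np

def pair_mismatch(config, n_species=3):
    """Deviation of nearest-neighbour pair frequencies on a periodic 1-D
    ring from the ideal random-alloy values c_a * c_b."""
    N = len(config)
    conc = np.bincount(config, minlength=n_species) / N
    freq = np.zeros((n_species, n_species))
    for i in range(N):
        a, b = config[i], config[(i + 1) % N]
        freq[a, b] += 0.5 / N                     # symmetrised pair count
        freq[b, a] += 0.5 / N
    return np.abs(freq - np.outer(conc, conc)).sum()

def sqs_search(cells=8, n_trials=2000, seed=0):
    """Keep the random shuffle of an A8B8C8 ring whose pair correlations
    best mimic the ideal random ternary alloy."""
    rng = np.random.default_rng(seed)
    base = np.repeat(np.arange(3), cells)
    best, best_cost = base, np.inf
    for _ in range(n_trials):
        cand = rng.permutation(base)
        cost = pair_mismatch(cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

best, best_cost = sqs_search()
```

Exhaustive enumeration or simulated annealing, as used in the paper, replaces the plain random sampling here when the configuration space grows.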
Ab initio random structure search for 13-atom clusters of fcc elements
International Nuclear Information System (INIS)
Chou, J P; Hsing, C R; Wei, C M; Cheng, C; Chang, C M
2013-01-01
The 13-atom metal clusters of fcc elements (Al, Rh, Ir, Ni, Pd, Pt, Cu, Ag, Au) were studied by density functional theory calculations. The global minima were searched for by the ab initio random structure searching method. In addition to some new lowest-energy structures for Pd13 and Au13, we found that the effective coordination numbers of the lowest-energy clusters would increase with the ratio of the dimer-to-bulk bond length. This correlation, together with the electronic structures of the lowest-energy clusters, divides the 13-atom clusters of these fcc elements into two groups (except for Au13, which prefers a two-dimensional structure due to the relativistic effect). Compact-like clusters that are composed exclusively of triangular motifs are preferred for elements without d-electrons (Al) or with (nearly) filled d-band electrons (Ni, Pd, Cu, Ag). Non-compact clusters composed mainly of square motifs connected by some triangular motifs (Rh, Ir, Pt) are favored for elements with unfilled d-band electrons. (paper)
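The random-structure-searching loop itself can be sketched with a classical Lennard-Jones potential standing in for the DFT calculations used in the paper; the potential, box size, and trial count are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def lj_energy(flat_pos):
    """Total Lennard-Jones energy (epsilon = sigma = 1) of a cluster."""
    pos = flat_pos.reshape(-1, 3)
    iu = np.triu_indices(len(pos), k=1)           # each pair once
    r = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)[iu]
    return np.sum(4.0 * (r ** -12 - r ** -6))

def random_structure_search(n_atoms=13, n_trials=10, seed=0):
    """AIRSS-style loop: relax many random starting geometries with a
    local optimizer and keep the lowest minimum found."""
    rng = np.random.default_rng(seed)
    best_e, best_x = np.inf, None
    for _ in range(n_trials):
        x0 = rng.uniform(-1.5, 1.5, size=3 * n_atoms)   # random start in a box
        res = minimize(lj_energy, x0, method="L-BFGS-B")
        if res.fun < best_e:
            best_e, best_x = res.fun, res.x
    return best_e, best_x.reshape(n_atoms, 3)

best_e, cluster = random_structure_search()
```

For the LJ13 cluster the known global minimum is an icosahedron at about -44.33 in reduced units; a handful of random restarts typically lands in one of the low-lying minima, and more restarts (or DFT energies, as in the paper) sharpen the search.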
DEFF Research Database (Denmark)
Engerer, Volkmar Paul; Roued-Cunliffe, Henriette; Albretsen, Jørgen
In this paper, we present a DH research infrastructure which relies heavily on a combination of domain knowledge with information technology. The general goal is to develop tools to aid scholars in their interpretations and understanding of temporal logic. This in turn is based on an extensive digitisation of Arthur Prior’s Nachlass kept in the Bodleian Library, Oxford. The DH infrastructure in question is the Prior Virtual Lab (PVL). PVL was established in 2011 in order to provide researchers in the field of temporal logic easy access to the papers of Arthur Norman Prior (1914-1969), and officially...
Liu, Zhangjun; Liu, Zenghui
2018-06-01
This paper develops a hybrid approach of spectral representation and random function for simulating stationary stochastic vector processes. In the proposed approach, the high-dimensional random variables included in the original spectral representation (OSR) formula can be effectively reduced to only two elementary random variables by introducing random functions that serve as random constraints. Based on this, a satisfactory simulation accuracy can be guaranteed by selecting a small representative point set of the elementary random variables. The probability information of the stochastic excitations can be fully captured through just several hundred sample functions generated by the proposed approach. Therefore, combined with the probability density evolution method (PDEM), the approach makes it possible to carry out dynamic response analysis and reliability assessment of engineering structures. For illustrative purposes, a stochastic turbulence wind velocity field acting on a frame-shear-wall structure is simulated by constructing three types of random functions to demonstrate the accuracy and efficiency of the proposed approach. Careful and in-depth studies concerning the probability density evolution analysis of the wind-excited structure have been conducted so as to better illustrate the application prospects of the proposed approach. Numerical examples also show that the proposed approach possesses good robustness.
Effects of hormone therapy on brain structure: A randomized controlled trial.
Kantarci, Kejal; Tosakulwong, Nirubol; Lesnick, Timothy G; Zuk, Samantha M; Gunter, Jeffrey L; Gleason, Carey E; Wharton, Whitney; Dowling, N Maritza; Vemuri, Prashanthi; Senjem, Matthew L; Shuster, Lynne T; Bailey, Kent R; Rocca, Walter A; Jack, Clifford R; Asthana, Sanjay; Miller, Virginia M
2016-08-30
To investigate the effects of hormone therapy on brain structure in a randomized, double-blinded, placebo-controlled trial in recently postmenopausal women. Participants (aged 42-56 years, within 5-36 months past menopause) in the Kronos Early Estrogen Prevention Study were randomized to (1) 0.45 mg/d oral conjugated equine estrogens (CEE), (2) 50 μg/d transdermal 17β-estradiol, or (3) placebo pills and patch for 48 months. Oral progesterone (200 mg/d) was given to active treatment groups for 12 days each month. MRI and cognitive testing were performed in a subset of participants at baseline, and at 18, 36, and 48 months of randomization (n = 95). Changes in whole brain, ventricular, and white matter hyperintensity volumes, and in global cognitive function, were measured. Higher rates of ventricular expansion were observed in both the CEE and the 17β-estradiol groups compared to placebo; however, the difference was significant only in the CEE group (p = 0.01). Rates of ventricular expansion correlated with rates of decrease in brain volume (r = -0.58; p ≤ 0.001) and with rates of increase in white matter hyperintensity volume (r = 0.27; p = 0.01) after adjusting for age. The changes were not different between the CEE and 17β-estradiol groups for any of the MRI measures. The change in global cognitive function was not different across the groups. Ventricular volumes increased to a greater extent in recently menopausal women who received CEE compared to placebo but without changes in cognitive performance. Because the sample size was small and the follow-up limited to 4 years, the findings should be interpreted with caution and need confirmation. This study provides Class I evidence that brain ventricular volume increased to a greater extent in recently menopausal women who received oral CEE compared to placebo. © 2016 American Academy of Neurology.
Energy Technology Data Exchange (ETDEWEB)
Chandonia, John-Marc; Brenner, Steven E.
2004-07-14
The structural genomics project is an international effort to determine the three-dimensional shapes of all important biological macromolecules, with a primary focus on proteins. Target proteins should be selected according to a strategy which is medically and biologically relevant, of good value, and tractable. As an option to consider, we present the Pfam5000 strategy, which involves selecting the 5000 most important families from the Pfam database as sources for targets. We compare the Pfam5000 strategy to several other proposed strategies that would require similar numbers of targets. These include complete solution of several small to moderately sized bacterial proteomes, partial coverage of the human proteome, and random selection of approximately 5000 targets from sequenced genomes. We measure the impact that successful implementation of these strategies would have upon structural interpretation of the proteins in Swiss-Prot, TrEMBL, and 131 complete proteomes (including 10 of eukaryotes) from the Proteome Analysis database at EBI. Solving the structures of proteins from the 5000 largest Pfam families would allow accurate fold assignment for approximately 68 percent of all prokaryotic proteins (covering 59 percent of residues) and 61 percent of eukaryotic proteins (40 percent of residues). More fine-grained coverage which would allow accurate modeling of these proteins would require an order of magnitude more targets. The Pfam5000 strategy may be modified in several ways, for example to focus on larger families, bacterial sequences, or eukaryotic sequences; as long as secondary consideration is given to large families within Pfam, coverage results vary only slightly. In contrast, focusing structural genomics on a single tractable genome would have only a limited impact in structural knowledge of other proteomes: a significant fraction (about 30-40 percent of the proteins, and 40-60 percent of the residues) of each proteome is classified in small
The thermodynamic and structural properties of metallocenes-type random ethylene copolymers
International Nuclear Information System (INIS)
Simanke, Adriane G.; Mauler, Raquel S.; Galland, Griselda B.; Alamo, Rufina G.
2001-01-01
The properties of a series of random ethylene copolymers prepared with the metallocene catalytic system rac-Et[Ind]2ZrCl2/MAO were studied for a large variety of comonomer types. These include the classical 1-alkene type with length up to 10 carbons and those of the cyclic type such as cyclopentadiene and dicyclopentadiene. Under rapid crystallization, the melting temperatures of the newly synthesized copolymers followed the relation of model random copolymers, indicating a behavior that conforms to that predicted by Flory's phase equilibrium theory. The molar entropy of fusion is not significantly altered by the comonomer type, including the dicyclopentadiene type. All types of comonomers studied showed, for a fixed comonomer content, the same change in properties during annealing, except the ethylene/1-butene copolymers. These latter copolymers and the hydrogenated polybutadiene showed a faster rate of change in thermal properties. This is consistent with a higher molecular diffusion for the butene comonomer than for the rest of the comonomers analyzed. The properties of the interlamellar region were also studied as a function of comonomer type and content, following the variation of the amorphous halo extracted from the WAXS diffractograms. The observed systematic decrease in the peak scattering angle with increasing comonomer content indicates a variation of the intermolecular liquid structure. (author)
Ramazani, Saba; Jackson, Delvin L.; Selmic, Rastko R.
2013-05-01
In search and surveillance operations, deploying a team of mobile agents provides a robust solution that has multiple advantages over using a single agent in efficiency and minimizing exploration time. This paper addresses the challenge of identifying a target in a given environment when using a team of mobile agents by proposing a novel method of mapping and movement of agent teams in a cooperative manner. The approach consists of two parts. First, the region is partitioned into a hexagonal beehive structure in order to provide equidistant movements in every direction and to allow for more natural and flexible environment mapping. Additionally, in search environments that are partitioned into hexagons, mobile agents have an efficient travel path while performing searches due to this partitioning approach. Second, we use a team of mobile agents that move in a cooperative manner and utilize the Tabu Random algorithm to search for the target. Due to the ever-increasing use of robotics and Unmanned Aerial Vehicle (UAV) platforms, the field of cooperative multi-agent search has developed many applications recently that would benefit from the use of the approach presented in this work, including: search and rescue operations, surveillance, data collection, and border patrol. In this paper, the increased efficiency of the Tabu Random Search algorithm method in combination with hexagonal partitioning is simulated, analyzed, and advantages of this approach are presented and discussed.
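A toy version of the two ingredients, hexagonal partitioning (axial coordinates) and a tabu-avoiding random search, might look like the following; the grid radius, tabu rule, and fallback are assumptions for illustration, not the authors' algorithm:

```python
import random

HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]  # axial coords

def neighbors(cell):
    """The six equidistant neighbours of a hexagonal cell."""
    q, r = cell
    return [(q + dq, r + dr) for dq, dr in HEX_DIRS]

def tabu_random_search(start, target, radius=6, max_steps=5000, seed=0):
    """Random walk on a hexagonal grid of the given radius that avoids
    already-visited (tabu) cells when possible; returns the number of
    steps needed to reach the target, or None on failure."""
    rng = random.Random(seed)

    def in_grid(c):
        return abs(c[0]) <= radius and abs(c[1]) <= radius and abs(c[0] + c[1]) <= radius

    pos, tabu = start, {start}
    for step in range(1, max_steps + 1):
        cands = [c for c in neighbors(pos) if in_grid(c)]
        fresh = [c for c in cands if c not in tabu]
        pos = rng.choice(fresh if fresh else cands)   # fall back when boxed in
        tabu.add(pos)
        if pos == target:
            return step
    return None

steps = tabu_random_search((0, 0), (3, -2))
```

The tabu set discourages revisiting cells, which is what makes the hexagonal sweep more efficient than a memoryless random walk; a multi-agent version would simply share the tabu set across agents.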
Mean-field Theory for Some Bus Transport Networks with Random Overlapping Clique Structure
International Nuclear Information System (INIS)
Yang Xuhua; Sun Bao; Wang Bo; Sun Youxian
2010-01-01
Transport networks, such as railway networks and airport networks, are a kind of random network with complex topology. Recently, more and more scholars have paid attention to various kinds of transport networks and tried to explore their inherent characteristics. Here we study the exponential properties of a recently introduced Bus Transport Networks (BTNs) evolution model with random overlapping clique structure, which gives a possible explanation for the observed exponential distribution of the connectivities of some BTNs of three major cities in China. Applying mean-field theory, we analyze the BTNs model and prove that this model has the character of exponential distribution of the connectivities, and we develop a method to predict the growth dynamics of the individual vertices and use this to calculate analytically the connectivity distribution and the exponents. By comparing the mean-field based theoretical results with the statistical data of real BTNs, we observe that, as a whole, both sets of data show a similar character of exponential distribution of the connectivities, and their exponents have the same order of magnitude, which shows the validity of the analytical results of this paper. (general)
Prediction of protein-protein interaction sites in sequences and 3D structures by random forests.
Directory of Open Access Journals (Sweden)
Mile Sikić
2009-01-01
Identifying interaction sites in proteins provides important clues to the function of a protein and is becoming increasingly relevant in topics such as systems biology and drug discovery. Although there are numerous papers on the prediction of interaction sites using information derived from structure, there are only a few case reports on the prediction of interaction residues based solely on protein sequence. Here, a sliding window approach is combined with the Random Forests method to predict protein interaction sites using (i) a combination of sequence- and structure-derived parameters and (ii) sequence information alone. For sequence-based prediction we achieved a precision of 84% with a 26% recall and an F-measure of 40%. When combined with structural information, the prediction performance increases to a precision of 76% and a recall of 38% with an F-measure of 51%. We also present an attempt to rationalize the sliding window size and demonstrate that a nine-residue window is the most suitable for predictor construction. Finally, we demonstrate the applicability of our prediction methods by modeling the Ras-Raf complex using predicted interaction sites as target binding interfaces. Our results suggest that it is possible to predict protein interaction sites with quite a high accuracy using only sequence information.
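The sliding-window construction can be sketched as a one-hot featurizer producing one row per residue for a downstream classifier such as a random forest. The nine-residue window matches the paper's finding; the bare one-hot encoding is a simplification of their feature set:

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"
IDX = {a: i for i, a in enumerate(AMINO)}

def window_features(seq, w=9):
    """One-hot sliding-window encoding: one feature row per residue,
    covering the w residues centred on it; window positions that fall
    off either end of the sequence stay all-zero."""
    half = w // 2
    X = np.zeros((len(seq), w * len(AMINO)))
    for i in range(len(seq)):
        for k, j in enumerate(range(i - half, i + half + 1)):
            if 0 <= j < len(seq):
                X[i, k * len(AMINO) + IDX[seq[j]]] = 1.0
    return X

X = window_features("ACDEFGHIK")     # toy 9-residue sequence
```

Each row is then paired with a binary interaction/non-interaction label for training; structure-derived parameters, when available, are appended as extra columns.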
Analysis of tree stand horizontal structure using random point field methods
Directory of Open Access Journals (Sweden)
O. P. Sekretenko
2015-06-01
This paper uses the model approach to analyze the horizontal structure of forest stands. The main types of models of random point fields and statistical procedures that can be used to analyze spatial patterns of trees of uneven and even-aged stands are described. We show how modern methods of spatial statistics can be used to address one of the objectives of forestry – to clarify the laws of natural thinning of forest stand and the corresponding changes in its spatial structure over time. Studying natural forest thinning, we describe the consecutive stages of modeling: selection of the appropriate parametric model, parameter estimation and generation of point patterns in accordance with the selected model, the selection of statistical functions to describe the horizontal structure of forest stands and testing of statistical hypotheses. We show the possibilities of a specialized software package, spatstat, which is designed to meet the challenges of spatial statistics and provides software support for modern methods of analysis of spatial data. We show that a model of stand thinning that does not consider inter-tree interaction can project the size distribution of the trees properly, but the spatial pattern of the modeled stand is not quite consistent with observed data. Using data of three even-aged pine forest stands of 25, 55, and 90-years old, we demonstrate that the spatial point process models are useful for combining measurements in the forest stands of different ages to study the forest stand natural thinning.
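The kind of summary statistic used to test point patterns against complete spatial randomness can be sketched in a few lines (the paper itself uses the R package spatstat); this naive Ripley's K estimator deliberately omits the edge corrections a real analysis would apply:

```python
import numpy as np

def ripley_k(points, r, area=1.0):
    """Naive Ripley's K estimate (no edge correction) for a point
    pattern observed in a window of the given area."""
    n = len(points)
    lam = n / area                                # estimated intensity
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    pairs = ((d > 0.0) & (d < r)).sum()           # ordered pairs closer than r
    return pairs / (lam * n)

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, size=(500, 2))        # CSR-like pattern in the unit square
r = 0.05
k_hat = ripley_k(pts, r)
# Under complete spatial randomness, K(r) = pi * r^2.
```

Values of K(r) below pi*r^2 indicate regularity (inhibition between trees), values above it indicate clustering, which is how natural thinning shows up in stand data.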
Stochastic generation of explicit pore structures by thresholding Gaussian random fields
Energy Technology Data Exchange (ETDEWEB)
Hyman, Jeffrey D., E-mail: jhyman@lanl.gov [Program in Applied Mathematics, University of Arizona, Tucson, AZ 85721-0089 (United States); Computational Earth Science, Earth and Environmental Sciences (EES-16), and Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87544 (United States); Winter, C. Larrabee, E-mail: winter@email.arizona.edu [Program in Applied Mathematics, University of Arizona, Tucson, AZ 85721-0089 (United States); Department of Hydrology and Water Resources, University of Arizona, Tucson, AZ 85721-0011 (United States)
2014-11-15
We provide a description and computational investigation of an efficient method to stochastically generate realistic pore structures. Smolarkiewicz and Winter introduced this specific method in pores-resolving simulation of Darcy flows (Smolarkiewicz and Winter, 2010 [1]) without giving a complete formal description or analysis of the method, or indicating how to control the parameterization of the ensemble. We address both issues in this paper. The method consists of two steps. First, a realization of a correlated Gaussian field, or topography, is produced by convolving a prescribed kernel with an initial field of independent, identically distributed random variables. The intrinsic length scales of the kernel determine the correlation structure of the topography. Next, a sample pore space is generated by applying a level threshold to the Gaussian field realization: points are assigned to the void phase or the solid phase depending on whether the topography over them is above or below the threshold. Hence, the topology and geometry of the pore space depend on the form of the kernel and the level threshold. Manipulating these two user-prescribed quantities allows good control of pore space observables, in particular the Minkowski functionals. Extensions of the method to generate media with multiple pore structures and preferential flow directions are also discussed. To demonstrate its usefulness, the method is used to generate a pore space with physical and hydrological properties similar to a sample of Berea sandstone. Highlights: • An efficient method to stochastically generate realistic pore structures is provided. • Samples are generated by applying a level threshold to a Gaussian field realization. • Two user-prescribed quantities determine the topology and geometry of the pore space. • Multiple pore structures and preferential flow directions can be produced. • A pore space based on Berea sandstone is generated.
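The two-step method is short enough to sketch directly: smooth an iid Gaussian field with a kernel, then threshold at the level that yields a target void fraction. The Gaussian kernel and all parameter values are illustrative assumptions, not the paper's calibration:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pore_space(shape=(128, 128), corr_len=4.0, porosity=0.3, seed=0):
    """Two-step generator: smooth an iid Gaussian field with a Gaussian
    kernel (width = correlation length), then threshold at the level
    that yields the requested void fraction."""
    rng = np.random.default_rng(seed)
    topo = gaussian_filter(rng.standard_normal(shape), sigma=corr_len)
    level = np.quantile(topo, porosity)           # threshold fixing the porosity
    return topo < level                           # True = void (pore) phase

pores = pore_space()
```

The kernel width controls the correlation structure (and hence pore size), while the threshold level controls the porosity: these are exactly the two user-prescribed quantities the abstract highlights.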
Directory of Open Access Journals (Sweden)
Christoph Nick
2014-09-01
The growth of cortical neurons on three-dimensional structures of spatially defined (structured) randomly oriented, as well as on vertically aligned, carbon nanotubes (CNTs) is studied. Cortical neurons are attracted towards both types of CNT nano-architectures. For both, neurons form clusters in close vicinity to the CNT structures, whereby the randomly oriented CNTs are more densely colonised than the CNT pillars. Neurons develop communication paths via neurites on both nano-architectures. These neurons attach preferentially to the CNT sidewalls of the vertically aligned CNT architecture rather than to the tips of the individual CNT pillars.
DEFF Research Database (Denmark)
Blackburn, Patrick Rowan; Jørgensen, Klaus Frovin
2016-01-01
’s search led him through the work of Castañeda, and back to his own work on hybrid logic: the first made temporal reference philosophically respectable, the second made it technically feasible in a modal framework. With the aid of hybrid logic, Prior built a bridge from a two-dimensional UT calculus...
Prior Knowledge Assessment Guide
2014-12-01
assessment in a reasonable amount of time. Hands-on assessments can be extremely diverse in makeup and administration depending on the subject matter...
International Nuclear Information System (INIS)
Lindgren, Georg
2012-01-01
The statistical properties near phase singularities in a complex wavefield are here studied by means of the conditional distributions of the real and imaginary Gaussian components, given a common zero crossing point. The exact distribution is expressed as a Slepian model, where a regression term provides the main structure, with parameters given by the gradients of the Gaussian components at the singularity, and Gaussian non-stationary residuals that provide local variability. This technique differs from the linearization (Taylor expansion) technique commonly used. The empirically and theoretically verified elliptic eccentricity of the intensity contours in the vortex core is a property of the regression term, but with different normalization compared to the classical theory. The residual term models the statistical variability around these ellipses. The radii of the circular contours of the current magnitude are similarly modified by the new regression expansion and also here the random deviations are modeled by the residual field. (paper)
Computational prediction of muon stopping sites using ab initio random structure searching (AIRSS)
Liborio, Leandro; Sturniolo, Simone; Jochym, Dominik
2018-04-01
The stopping site of the muon in a muon-spin relaxation experiment is in general unknown. There are some techniques that can be used to guess the muon stopping site, but they often rely on approximations and are not generally applicable to all cases. In this work, we propose a purely theoretical method to predict muon stopping sites in crystalline materials from first principles. The method is based on a combination of ab initio calculations, random structure searching, and machine learning, and it has successfully predicted the MuT and MuBC stopping sites of muonium in Si, diamond, and Ge, as well as the muonium stopping site in LiF, without any recourse to experimental results. The method makes use of Soprano, a Python library developed to aid ab initio computational crystallography, that was publicly released and contains all the software tools necessary to reproduce our analysis.
Woolley, Thomas E.; Gaffney, Eamonn A.; Goriely, Alain
2017-07-01
If the plasma membrane of a cell is able to delaminate locally from its actin cortex, a cellular bleb can be produced. Blebs are pressure-driven protrusions, which are noteworthy for their ability to produce cellular motion. Starting from a general continuum mechanics description, we restrict ourselves to considering cell and bleb shapes that maintain approximately spherical forms. From this assumption, we obtain a tractable algebraic system for bleb formation. By including cell-substrate adhesions, we can model blebbing cell motility. Further, by considering mechanically isolated blebbing events, which are randomly distributed over the cell, we can derive equations linking the macroscopic migration characteristics to the microscopic structural parameters of the cell. This multiscale modeling framework is then used to provide parameter estimates, which are in agreement with current experimental data. In summary, the construction of the mathematical model provides testable relationships between the bleb size and cell motility.
Directory of Open Access Journals (Sweden)
Kenaszchuk Chris
2007-09-01
Background Despite a burgeoning interest in using interprofessional approaches to promote effective collaboration in health care, systematic reviews find scant evidence of benefit. This protocol describes the first cluster randomized controlled trial (RCT) to design and evaluate an intervention intended to improve interprofessional collaborative communication and patient-centred care. Objectives The objective is to evaluate the effects of a four-component, hospital-based staff communication protocol designed to promote collaborative communication between healthcare professionals and enhance patient-centred care. Methods The study is a multi-centre mixed-methods cluster randomized controlled trial involving twenty clinical teaching teams (CTTs) in general internal medicine (GIM) divisions of five Toronto tertiary-care hospitals. CTTs will be randomly assigned either to receive an intervention designed to improve interprofessional collaborative communication, or to continue usual communication practices. Non-participant naturalistic observation, shadowing, and semi-structured, qualitative interviews were conducted to explore existing patterns of interprofessional collaboration in the CTTs, and to support intervention development. Interviews and shadowing will continue during intervention delivery in order to document interactions between the intervention settings and adopters, and changes in interprofessional communication. The primary outcome is the rate of unplanned hospital readmission. Secondary outcomes are length of stay (LOS); adherence to evidence-based prescription drug therapy; patients' satisfaction with care; self-report surveys of CTT staff perceptions of interprofessional collaboration; and frequency of calls to paging devices. Outcomes will be compared on an intention-to-treat basis using adjustment methods appropriate for data from a cluster randomized design. Discussion Pre-intervention qualitative analysis revealed that a
The Structure of a Thermophilic Kinase Shapes Fitness upon Random Circular Permutation.
Jones, Alicia M; Mehta, Manan M; Thomas, Emily E; Atkinson, Joshua T; Segall-Shapiro, Thomas H; Liu, Shirley; Silberg, Jonathan J
2016-05-20
Proteins can be engineered for synthetic biology through circular permutation, a sequence rearrangement in which native protein termini become linked and new termini are created elsewhere through backbone fission. However, it remains challenging to anticipate a protein's functional tolerance to circular permutation. Here, we describe new transposons for creating libraries of randomly circularly permuted proteins that minimize peptide additions at their termini, and we use transposase mutagenesis to study the tolerance of a thermophilic adenylate kinase (AK) to circular permutation. We find that libraries expressing permuted AKs with either short or long peptides amended to their N-terminus yield distinct sets of active variants and present evidence that this trend arises because permuted protein expression varies across libraries. Mapping all sites that tolerate backbone cleavage onto AK structure reveals that the largest contiguous regions of sequence that lack cleavage sites are proximal to the phosphotransfer site. A comparison of our results with a range of structure-derived parameters further showed that retention of function correlates to the strongest extent with the distance to the phosphotransfer site, amino acid variability in an AK family sequence alignment, and residue-level deviations in superimposed AK structures. Our work illustrates how permuted protein libraries can be created with minimal peptide additions using transposase mutagenesis, and it reveals a challenge of maintaining consistent expression across permuted variants in a library that minimizes peptide additions. Furthermore, these findings provide a basis for interpreting responses of thermophilic phosphotransferases to circular permutation by calibrating how different structure-derived parameters relate to retention of function in a cellular selection.
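As a toy illustration of the sequence rearrangement described above (not the transposase protocol itself), circular permutants can be enumerated by joining the native termini, optionally through a linker peptide, and cutting at each backbone position; the function and sequences below are hypothetical:

```python
def circular_permutations(seq, linker=""):
    """Enumerate circular permutants of a protein sequence.

    The native termini are joined (optionally through a linker peptide)
    and new termini are created at every other backbone position.
    """
    joined = seq + linker
    n = len(seq)
    # Start positions 1..n-1 create new termini; position 0 is the native protein.
    return [joined[i:] + joined[:i] for i in range(1, n)]

variants = circular_permutations("MKVLAT")   # toy 6-residue sequence
print(len(variants))
print(variants[0])
```

Real libraries would additionally filter permutants by expression and retained activity, as the study does via cellular selection.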
International Nuclear Information System (INIS)
Yanchev, I.
2003-01-01
A new expression for the Fourier transform of the binary correlation function of the random potential near the semiconductor-insulator interface is derived. The screening from the metal electrode in the MIS structure is taken into account by introducing an effective insulator thickness. An essential advantage of this correlation function is the finite dispersion of the random potential to which it leads, in contrast to previously known correlation functions, which lead to a divergent dispersion. The dispersion, an important characteristic of the random potential distribution that determines the amplitude of the potential fluctuations, is calculated.
International Nuclear Information System (INIS)
Yanchev, I; Slavcheva, G.
1993-01-01
A new expression for the Fourier transform of the binary correlation function of the random potential near the semiconductor-insulator interface is derived. The screening from the metal electrode in the MIS structure is taken into account by introducing an effective insulator thickness. An essential advantage of this correlation function is the finite dispersion of the random potential Γ² to which it leads, in contrast to previously known correlation functions, which lead to a divergent dispersion. The important characteristic of the random potential distribution, Γ², which determines the amplitude of the potential fluctuations, is calculated. 7 refs. (orig.)
International Nuclear Information System (INIS)
Slavcheva, G.; Yanchev, I.
1991-01-01
A new expression for the Fourier transform of the binary correlation function of the random potential near the semiconductor-insulator interface is derived. The screening due to the image charge with respect to the metal electrode in the MIS structure is taken into account, introducing an effective insulator thickness. An essential advantage of this correlation function is the finite dispersion of the random potential Γ² to which it leads, in contrast to previously known correlation functions, which lead to a divergent dispersion. The important characteristic of the random potential distribution, Γ², which determines the amplitude of the potential fluctuations, is calculated. (author). 7 refs, 1 fig
Energy Technology Data Exchange (ETDEWEB)
Yanchev, I
2003-07-01
A new expression for the Fourier transform of the binary correlation function of the random potential near the semiconductor-insulator interface is derived. The screening from the metal electrode in the MIS structure is taken into account by introducing an effective insulator thickness. An essential advantage of this correlation function is the finite dispersion of the random potential to which it leads, in contrast to previously known correlation functions, which lead to a divergent dispersion. The dispersion, an important characteristic of the random potential distribution that determines the amplitude of the potential fluctuations, is calculated.
Finite element random vibration method for soil-structure interaction analysis
International Nuclear Information System (INIS)
Romo-Organista, M.P.; Lysmer, J.; Seed, H.B.
1977-01-01
The authors present a method in which the seismic environment is defined directly in terms of the given design response spectrum. Response spectra cannot be used directly for random analysis; thus, using extreme value theory, a new procedure has been developed for converting the design response spectrum into a design power spectrum. This procedure is reversible and can also be used to compute response spectra whose distribution can be expressed in terms of confidence limits. Knowing the design power spectrum, the resulting output power spectra and their statistical distribution can be computed by a response analysis of the soil-structure system in the frequency domain. Due to the complexity of soil-structure systems, this is most conveniently done by the finite element method. Having obtained the power spectra for all motions in the system, these spectra can be used to determine other statistical information about the response, such as maximum accelerations, stresses, bending moments, etc., all with appropriate confidence limits. This type of information is actually more useful for design than corresponding deterministic values. The authors have developed a computer program, PLUSH, which can perform the above procedures. Results obtained by the new method are in excellent agreement with the results of corresponding deterministic analyses. Furthermore, the probabilistic results can be obtained at a fraction of the cost of deterministic results.
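The abstract does not spell out the spectrum-to-PSD conversion, but the extreme-value idea can be sketched: predict each oscillator's peak response as a peak factor times its standard deviation, then rescale the PSD until the predicted peaks match the target spectrum. Everything below (the Davenport-type peak factor, the flat starting guess, the units) is an illustrative assumption, not the PLUSH algorithm:

```python
import numpy as np

def spectrum_to_psd(freqs, target_sa, zeta=0.05, duration=20.0, n_iter=10):
    """Convert a design (pseudo-acceleration) response spectrum into an
    input power spectral density by iterative rescaling (sketch only)."""
    w = 2.0 * np.pi * freqs
    psd = np.ones_like(w)                       # flat starting guess
    for _ in range(n_iter):
        pred = np.empty_like(w)
        for i, wn in enumerate(w):
            # pseudo-acceleration transfer function of a damped SDOF oscillator
            h2 = wn**4 / ((wn**2 - w**2) ** 2 + (2.0 * zeta * wn * w) ** 2)
            sigma = np.sqrt(np.trapz(h2 * psd, w))
            b = np.sqrt(2.0 * np.log(max(freqs[i] * duration, 1.1)))
            g = b + 0.5772 / b                  # Davenport-type peak factor
            pred[i] = g * sigma                 # predicted spectral peak
        psd *= (target_sa / pred) ** 2          # push peaks toward the target
    return psd

freqs = np.linspace(0.5, 10.0, 30)
psd = spectrum_to_psd(freqs, np.ones_like(freqs))
```

With the input PSD in hand, output spectra follow by frequency-domain transfer through the soil-structure model, as the abstract describes.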
Lievens, Klaus; Van Nimmen, Katrien; Lombaert, Geert; De Roeck, Guido; Van den Broeck, Peter
2016-09-01
In civil engineering and architecture, the availability of high-strength materials and advanced calculation techniques enables the construction of slender footbridges, which are generally highly sensitive to human-induced excitation. Due to the inherently random character of the human-induced walking load, variability in the pedestrian characteristics must be considered in the response simulation. To assess the vibration serviceability of the footbridge, the statistics of the stochastic dynamic response are evaluated by considering the instantaneous peak responses in a time range; a large number of time windows is therefore needed to calculate the mean value and standard deviation of the instantaneous peak values. An alternative method evaluates the statistics from the standard deviation of the response and a characteristic frequency, as proposed in wind engineering applications. In this paper, the accuracy of this method is evaluated for human-induced vibrations. The methods are first compared for a group of pedestrians crossing a lightly damped footbridge, where only small differences in the instantaneous peak value were found for the method using second-order statistics. Afterwards, a TMD tuned to reduce the peak acceleration to a comfort value was added to the structure, and the comparison between both methods was repeated to verify the accuracy. It is found that the TMD parameters are tuned sufficiently, and good agreement between the two methods is found for the estimation of the instantaneous peak response of a strongly damped structure.
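The "standard deviation plus characteristic frequency" shortcut from wind engineering mentioned above can be sketched and checked against a Monte Carlo mean of instantaneous peaks; the broadband signal model and all parameter values below are illustrative assumptions, not the footbridge data:

```python
import numpy as np

rng = np.random.default_rng(0)

def peak_factor(nu, T):
    """Davenport-type expected peak factor E[max x]/sigma for a Gaussian
    process with characteristic frequency nu over a window of length T."""
    b = np.sqrt(2.0 * np.log(nu * T))
    return b + 0.5772 / b

sigma, T = 1.0, 20.0
t = np.arange(0.0, T, 0.01)

# Monte Carlo reference: mean instantaneous peak over many realizations
n_sim, n_comp = 200, 32
peaks, nus = [], []
for _ in range(n_sim):
    f = rng.uniform(0.5, 3.5, n_comp)            # component frequencies (Hz)
    phi = rng.uniform(0.0, 2.0 * np.pi, n_comp)  # random phases
    amp = sigma * np.sqrt(2.0 / n_comp)          # unit-variance process
    x = (amp * np.cos(2.0 * np.pi * np.outer(t, f) + phi)).sum(axis=1)
    peaks.append(x.max())
    nus.append(np.sqrt(np.mean(f**2)))           # zero-crossing rate estimate

mc = float(np.mean(peaks))                       # windowed-peak statistic
g = peak_factor(float(np.mean(nus)), T)          # second-order statistic
print(mc, g * sigma)
```

The two numbers printed should be of comparable size, mirroring the paper's comparison between the windowed-peak and second-order-statistics estimates.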
Random demodulation for structural health monitoring excited by the five-cycle sine burst
Directory of Open Access Journals (Sweden)
Li Xing
2017-01-01
Structural Health Monitoring (SHM) has been receiving more and more attention. The five-cycle sine burst is widely used as the excitation signal in SHM, and the sensors' response signals are analyzed to detect damage. In a sensor network there are many sensors, which means many response signals must be sampled, stored, and sometimes transferred. Under the traditional Nyquist sampling theorem, the sampling rate must be more than twice the highest frequency of the original signal, so the amount of data becomes huge; as a result, the cost is high and the equipment may be large and heavy, which is especially unacceptable in aircraft. It is therefore necessary to compress the signal. Compressed Sensing (CS) theory provides new methods for compressing signals, and Random Demodulation (RD) is a specific method that accomplishes the physical implementation of CS theory. In this paper, following the structure of RD, we selected chips to build an RD system and performed experiments to verify the method on the system. We chose Orthogonal Matching Pursuit (OMP) as the reconstruction algorithm to recover the signal.
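The OMP reconstruction step mentioned above can be sketched in a few lines; the toy Gaussian measurement matrix below stands in for the actual random-demodulation hardware operator, which the abstract does not specify:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with y ≈ A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        # least-squares fit on the current support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# toy demo: a 3-sparse signal recovered from 50 random measurements
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 128)) / np.sqrt(50)
x_true = np.zeros(128)
x_true[[5, 37, 90]] = [1.0, -0.7, 0.4]
x_hat = omp(A, A @ x_true, k=3)
print(np.flatnonzero(np.round(x_hat, 6)))
```

In a real RD system, A would combine the pseudo-random chipping sequence, the low-rate integrator, and the sparsifying basis of the five-cycle burst response.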
A Deep-Structured Conditional Random Field Model for Object Silhouette Tracking.
Directory of Open Access Journals (Sweden)
Mohammad Javad Shafiee
In this work, we introduce a deep-structured conditional random field (DS-CRF) model for the purpose of state-based object silhouette tracking. The proposed DS-CRF model consists of a series of state layers, where each state layer spatially characterizes the object silhouette at a particular point in time. The interactions between adjacent state layers are established by inter-layer connectivity dynamically determined based on inter-frame optical flow. By incorporating both spatial and temporal context in a dynamic fashion within such a deep-structured probabilistic graphical model, the proposed DS-CRF model allows us to develop a framework that can accurately and efficiently track object silhouettes that can change greatly over time, as well as under different situations such as occlusion and multiple targets within the scene. Experimental results using video surveillance datasets containing different scenarios such as occlusion and multiple targets showed that the proposed DS-CRF approach provides strong object silhouette tracking performance when compared to baseline methods such as mean-shift tracking, as well as state-of-the-art methods such as context tracking and boosted particle filtering.
International Nuclear Information System (INIS)
Olson, Gordon L.
2008-01-01
In binary stochastic media in two- and three-dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution
Energy Technology Data Exchange (ETDEWEB)
Olson, Gordon L. [Computer and Computational Sciences Division (CCS-2), Los Alamos National Laboratory, 5 Foxglove Circle, Madison, WI 53717 (United States)], E-mail: olson99@tds.net
2008-11-15
In binary stochastic media in two- and three-dimensions consisting of randomly placed impenetrable disks or spheres, the chord lengths in the background material between disks and spheres closely follow exponential distributions if the disks and spheres occupy less than 10% of the medium. This work demonstrates that for regular spatial structures of disks and spheres, the tails of the chord length distributions (CLDs) follow power laws rather than exponentials. In dilute media, when the disks and spheres are widely spaced, the slope of the power law seems to be independent of the details of the structure. When approaching a close-packed arrangement, the exact placement of the spheres can make a significant difference. When regular structures are perturbed by small random displacements, the CLDs become power laws with steeper slopes. An example CLD from a quasi-random distribution of spheres in clusters shows a modified exponential distribution.
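The chord-length statistics described above can be explored with a small Monte Carlo sketch; for simplicity the disks here are independently placed (overlaps allowed), unlike the impenetrable disks of the study, so the numbers are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

def background_chords(centers, r, n_rays=400):
    """Background chord lengths along horizontal rays through the unit
    square [0,1]^2 containing disks of radius r centred at `centers`."""
    chords = []
    for y in rng.uniform(0.0, 1.0, n_rays):
        d = np.abs(centers[:, 1] - y)
        hit = d < r                      # disks intersected by this ray
        if not hit.any():
            continue
        x0 = centers[hit, 0]
        half = np.sqrt(r**2 - d[hit] ** 2)
        order = np.argsort(x0 - half)    # sort intersection intervals
        lo, hi = (x0 - half)[order], (x0 + half)[order]
        cursor = 0.0
        for a, b in zip(lo, hi):
            if a > cursor:
                chords.append(a - cursor)  # gap between successive disks
            cursor = max(cursor, b)
    return np.array(chords)

centers = rng.uniform(0.0, 1.0, (60, 2))   # ~7.5% area fraction at r = 0.02
c = background_chords(centers, 0.02)
print(c.size, c.mean())
```

In the dilute regime the gap lengths collected this way are approximately exponentially distributed; regular lattices of centers would instead show the power-law tails the paper reports.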
Directory of Open Access Journals (Sweden)
O. W. Roberts
2014-12-01
Recent observations of astrophysical magnetic fields have shown the presence of fluctuations that are wave-like (propagating in the plasma frame) and fluctuations described as structure-like (advected by the plasma bulk velocity). Typically, with single-spacecraft missions it is impossible to differentiate between these two kinds of fluctuation, due to the inherent spatio-temporal ambiguity associated with a single-point measurement. However, missions such as Cluster, which comprise multiple spacecraft, have allowed temporal and spatial changes to be resolved using techniques such as k-filtering. While this technique does not assume Taylor's hypothesis, it requires both weak stationarity of the time series and that the fluctuations can be described by a superposition of plane waves with random phases. In this paper we test whether the method can cope with a synthetic signal composed of a combination of non-random-phase coherent structures with a mean radius d and a mean separation λ, as well as plane waves with random phase.
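A 1D analogue of the synthetic test signal described above, plane waves with random phases plus coherent structures of mean radius d and mean separation λ, can be generated as follows; the Gaussian-blob shape for the structures is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

def synthetic_field(x, n_waves=20, d=0.5, lam=5.0):
    """Superpose random-phase plane waves and non-random-phase coherent
    structures (Gaussian blobs of radius ~d and mean separation ~lam)."""
    field = np.zeros_like(x)
    for _ in range(n_waves):
        k = rng.uniform(0.5, 3.0)                    # wavenumber
        field += rng.normal() * np.cos(k * x + rng.uniform(0.0, 2.0 * np.pi))
    centers = np.arange(0.0, x.max(), lam)
    centers = centers + rng.normal(0.0, 0.1 * lam, centers.size)  # jitter
    for c in centers:
        field += 2.0 * np.exp(-(((x - c) / d) ** 2))  # coherent structure
    return field

x = np.linspace(0.0, 50.0, 2048)
sig = synthetic_field(x)
```

Feeding such a mixture to a plane-wave-based estimator probes exactly the assumption the paper tests: the coherent structures violate the random-phase requirement.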
Sets of priors reflecting prior-data conflict and agreement
Walter, G.M.; Coolen, F.P.A.; Carvalho, J.P.; Lesot, M.-J.; Kaymak, U.; Vieira, S.; Bouchon-Meunier, B.; Yager, R.R.
2016-01-01
Bayesian inference enables combination of observations with prior knowledge in the reasoning process. The choice of a particular prior distribution to represent the available prior knowledge is, however, often debatable, especially when prior knowledge is limited or data are scarce, as then
Prior indigenous technological species
Wright, Jason T.
2018-01-01
One of the primary open questions of astrobiology is whether there is extant or extinct life elsewhere in the solar system. Implicit in much of this work is that we are looking for microbial or, at best, unintelligent life, even though technological artefacts might be much easier to find. Search for Extraterrestrial Intelligence (SETI) work on searches for alien artefacts in the solar system typically presumes that such artefacts would be of extrasolar origin, even though life is known to have existed in the solar system, on Earth, for eons. But if a prior technological, perhaps spacefaring, species ever arose in the solar system, it might have produced artefacts or other technosignatures that have survived to the present day, meaning solar system artefact SETI provides a potential path to resolving astrobiology's question. Here, I discuss the origins and possible locations for technosignatures of such a prior indigenous technological species, which might have arisen on ancient Earth or another body, such as a pre-greenhouse Venus or a wet Mars. In the case of Venus, the arrival of its global greenhouse and potential resurfacing might have erased all evidence of its existence on the Venusian surface. In the case of Earth, erosion and, ultimately, plate tectonics may have erased most such evidence if the species lived Gyr ago. Remaining indigenous technosignatures might be expected to be extremely old, limiting the places they might still be found to beneath the surfaces of Mars and the Moon, or in the outer solar system.
Stone, Ian S; Barnes, Neil C; James, Wai-Yee; Midwinter, Dawn; Boubertakh, Redha; Follows, Richard; John, Leonette; Petersen, Steffen E
2016-04-01
Patients with chronic obstructive pulmonary disease develop increased cardiovascular morbidity with structural alterations. To investigate through a double-blind, placebo-controlled, crossover study the effect of lung deflation on cardiovascular structure and function using cardiac magnetic resonance. Forty-five hyperinflated patients with chronic obstructive pulmonary disease were randomized (1:1) to 7 (maximum 14) days inhaled corticosteroid/long-acting β2-agonist fluticasone furoate/vilanterol 100/25 μg or placebo (7-day minimum washout). Primary outcome was change from baseline in right ventricular end-diastolic volume index versus placebo. There was a 5.8 ml/m(2) (95% confidence interval, 2.74-8.91; P volume index and a 429 ml (P volume with fluticasone furoate/vilanterol versus placebo. Left ventricular end-diastolic and left atrial end-systolic volumes increased by 3.63 ml/m(2) (P = 0.002) and 2.33 ml/m(2) (P = 0.002). In post hoc analysis, right ventricular stroke volume increased by 4.87 ml/m(2) (P = 0.003); right ventricular ejection fraction was unchanged. Left ventricular adaptation was similar; left atrial ejection fraction improved by +3.17% (P Pulmonary artery pulsatility increased in two of three locations (main +2.9%, P = 0.001; left +2.67%, P = 0.030). Fluticasone furoate/vilanterol safety profile was similar to placebo. Pharmacologic treatment of chronic obstructive pulmonary disease has consistent beneficial and plausible effects on cardiac function and pulmonary vasculature that may contribute to favorable effects of inhaled therapies. Future studies should investigate the effect of prolonged lung deflation on intrinsic myocardial function. Clinical trial registered with www.clinicaltrials.gov (NCT 01691885).
Kapellas, Kostas; Maple-Brown, Louise J; Jamieson, Lisa M; Do, Loc G; O'Dea, Kerin; Brown, Alex; Cai, Tommy Y; Anstey, Nicholas M; Sullivan, David R; Wang, Hao; Celermajer, David S; Slade, Gary D; Skilton, Michael R
2014-10-01
Observational studies and nonrandomized trials support an association between periodontal disease and atherosclerotic vascular disease. Both diseases occur frequently in Aboriginal Australians. We hypothesized that nonsurgical periodontal therapy would improve measures of arterial function and structure that are subclinical indicators of atherosclerotic vascular disease. This parallel-group, randomized, open label clinical trial enrolled 273 Aboriginal Australians aged ≥18 years with periodontitis. Intervention participants received full-mouth periodontal scaling during a single visit, whereas controls received no treatment. Prespecified primary end points measured 12-month change in carotid intima-media thickness, an indicator of arterial structure, and 3- and 12-month change in pulse wave velocity, an indicator of arterial function. ANCOVA used complete case data to evaluate treatment group differences. End points could be calculated for 169 participants with follow-up data at 3 months and 168 participants at 12 months. Intima-media thickness decreased significantly after 12 months in the intervention group (mean reduction=-0.023 [95% confidence interval {CI}, -0.038 to -0.008] mm) but not in the control group (mean increase=0.002 [95% CI, -0.017 to 0.022] mm). The difference in intima-media thickness change between treatment groups was statistically significant (-0.026 [95% CI, -0.048 to -0.003] mm; P=0.03). In contrast, there were no significant differences between treatment groups in pulse wave velocity at 3 months (mean difference, 0.06 [95% CI, -0.17 to 0.29] m/s; P=0.594) or 12 months (mean difference, 0.21 [95% CI, -0.01 to 0.43] m/s; P=0.062). Periodontal therapy reduced subclinical arterial thickness but not function in Aboriginal Australians with periodontal disease, suggesting periodontal disease and atherosclerosis are significantly associated. © 2014 American Heart Association, Inc.
Structured triglyceride for parenteral nutrition: meta-analysis of randomized controlled trials.
Zhou, Yong; Wu, Xiao-Ting; Li, Ni; Zhuang, Wen; Liu, Guanjian; Wu, Taixiang; Wei, Mao-Ling
2006-01-01
This study assessed the safety and efficacy of structured triglyceride (ST) for parenteral nutrition. A meta-analysis of all relevant randomized controlled trials (RCTs) was performed. Clinical trials were identified from the following electronic databases: MEDLINE, EMBASE, the Cochrane Controlled Trials Register, and the Chinese Bio-medicine Database. The search was undertaken in March 2005; language was restricted to Chinese and English. Literature references were checked at the same time. Only RCTs were extracted and evaluated by two reviewers, independently of each other. The statistical analysis was performed with RevMan 4.2 software, provided by the Cochrane Collaboration. A P value of triglyceride (LCT), and the combined results showed that the ST had a significant effect on resting energy expenditure (weighted mean difference [WMD] = 1.54, 95% CI [1.26, 1.82], ptriglycerides (WMD = -0.10, 95% CI [-0.30, 0.10], P = 0.32). Only two RCTs compared ST with the physical mixture of medium- and long-chain triglycerides (MCT/LCT); data from these trials were not combined due to clinical differences between the trials, and conclusions cannot be drawn from the present data. ST appeared to be safe and well tolerated. Further trials are required, especially in comparison with MCT/LCT, with sufficient size and rigorous design.
Chiral Molecule-Enhanced Extinction Ratios of Quantum Dots Coupled to Random Plasmonic Structures.
Bezen, Lior; Yochelis, Shira; Jayarathna, Dilhara; Bhunia, Dinesh; Achim, Catalina; Paltiel, Yossi
2018-03-06
Devices based on self-assembled hybrid colloidal quantum dots (CQDs) coupled with specific organic linker molecules are a promising way to simply realize room-temperature, spectrally tunable light detectors. Nevertheless, this type of devices usually has low quantum efficiency. Plasmonics has been shown as an efficient tool in guiding and confining light at nanoscale dimensions. As plasmonic modes exhibit highly confined fields, they locally increase light-matter interactions and consequently enhance the performance of CQD-based photodetectors. Recent publications presented experimental results of large extinction enhancement from a monolayer of CQDs coupled to random gold nanoislands using a monolayer of organic alkyl linkers. We report here that a twofold larger extinction enhancement in the visible spectrum is observed when a monolayer of helical chiral molecules connects the CQDs to the gold structure instead of a monolayer of achiral linkers. We also show that this effect provides insight into the chirality of the molecules within the monolayer. In future work, we plan to evaluate the potential of these results to be used in the construction of a more efficient and sensitive photon detector based on surface QDs, as well as to supply a simple way to map the chirality of a single chiral monolayer.
Structure of Sn1−xGex random alloys as obtained from the coherent potential approximation
Pulikkotil, J. J.; Chroneos, A.; Schwingenschlögl, Udo
2011-01-01
The structure of the Sn1−xGex random alloys is studied using density functional theory and the coherent potential approximation. We report on the deviation of the Sn1−xGex alloys from Vegard’s law, addressing their full compositional range.
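Vegard's law, the baseline against which the paper measures deviations, is simply linear interpolation of the end-member lattice parameters; the bowing value below is illustrative, not the paper's result, and the lattice constants are approximate literature values:

```python
def vegard(a_A, a_B, x, bowing=0.0):
    """Lattice parameter of an A(1-x)B(x) alloy: linear Vegard interpolation
    minus an optional quadratic bowing term modelling the deviation."""
    return (1.0 - x) * a_A + x * a_B - bowing * x * (1.0 - x)

# approximate diamond-structure lattice constants (Å)
a_alpha_Sn, a_Ge = 6.489, 5.658
print(vegard(a_alpha_Sn, a_Ge, 0.5))              # ideal Vegard value
print(vegard(a_alpha_Sn, a_Ge, 0.5, bowing=0.1))  # with an assumed deviation
```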
International Nuclear Information System (INIS)
Laperashvili, L.V.
1994-01-01
The first part of the present paper contains a review of papers by Nielsen, Bennett, Brene and Picek which underlie the model called random dynamics. The second part of the paper is devoted to calculating the fine structure constant by means of path integration in the U(1) lattice gauge theory.
Carbajo, M A; Castro, Maria J; Kleinfinger, S; Gómez-Arenas, S; Ortiz-Solórzano, J; Wellman, R; García-Ianza, C; Luque, E
2010-01-01
Bariatric surgery is considered the only therapeutic alternative for morbid obesity and its comorbidities, but high risk factors are usually linked with this kind of surgery. To reduce them, we consider that losing at least 10% of excess weight in morbidly obese (MO) patients and a minimum of 20% in super-obese (SO) patients before surgery may reduce the morbidity of the procedure. The aim of our study is to demonstrate the effectiveness and tolerance of a balanced-energy formula diet at the preoperative stage, compared against a low-calorie regular diet. We studied 120 patients divided into two groups of 60 each: group A was treated for 20 days prior to bariatric surgery with a balanced-energy formula diet based on 200 kcal every 6 hours for 12 days, and group B was treated with a low-calorie regular diet with no carbohydrates or fat. For the last eight days prior to surgery, both groups took only clear liquids. We studied the evolution of weight loss and BMI, as well as the behavior of comorbidities (systolic blood pressure, diastolic blood pressure, and glucose control) and tolerance of the protocol. The study shows that patients on the balanced-energy formula diet had statistically significant improvements in weight and BMI loss, blood pressure, and glucose compared with the group treated before surgery with a low-calorie regular diet; nevertheless, both groups improved weight loss and comorbidities, with better surgical results. Correct preparation of morbidly obese patients prior to surgery can reduce operative risks and improve results. Our study shows that preoperative treatment with a balanced-energy formula diet, as included in our protocol for patients undergoing bariatric surgery, statistically improves their overall condition and lowers cardiovascular and metabolic risk compared with a regular diet alone.
Zhang, Quan-Chao; Li, Hong-Jie; Cui, Ying-Qiu; Xu, Zhi; Jin, Li; Zhou, Hui; Zhu, Hong
2015-01-01
The Han Chinese are the largest ethnic group in the world, and their origins, development, and expansion are complex. Many genetic studies have shown that Han Chinese can be divided into two distinct groups: northern Han Chinese and southern Han Chinese. The genetic history of the southern Han Chinese has been well studied. However, the genetic history of the northern Han Chinese is still obscure. In order to gain insight into the genetic history of the northern Han Chinese, 89 human remains were sampled from the Hengbei site which is located in the Central Plain and dates back to a key transitional period during the rise of the Han Chinese (approximately 3,000 years ago). We used 64 authentic mtDNA data obtained in this study, 27 Y chromosome SNP data profiles from previously studied Hengbei samples, and genetic datasets of the current Chinese populations and two ancient northern Chinese populations to analyze the relationship between the ancient people of Hengbei and present-day northern Han Chinese. We used a wide range of population genetic analyses, including principal component analyses, shared mtDNA haplotype analyses, and geographic mapping of maternal genetic distances. The results show that the ancient people of Hengbei bore a strong genetic resemblance to present-day northern Han Chinese and were genetically distinct from other present-day Chinese populations and two ancient populations. These findings suggest that the genetic structure of northern Han Chinese was already shaped 3,000 years ago in the Central Plain area. PMID:25938511
DEFF Research Database (Denmark)
Ruban, Andrei; Abrikosov, I. A.; Kats, D. Ya.
1994-01-01
We have calculated the electronic structure and segregation profiles of the (001) surface of random Cu-Ni alloys with varying bulk concentrations by means of the coherent potential approximation and the linear muffin-tin-orbitals method. Exchange and correlation were included within the local-density approximation. Temperature effects were accounted for by means of the cluster-variation method and, for comparison, by mean-field theory. The necessary interaction parameters were calculated by the Connolly-Williams method generalized to the case of a surface of a random alloy. We find the segregation profiles...
International Nuclear Information System (INIS)
Laperashvili, L.V.
1994-01-01
An overview of papers by Nielsen, Bennett, Brene, and Picek, forming the basis of the model called random dynamics, is given in the first part of this work. The fine structure constant is calculated in the second part by using the technique of path integration in U(1) lattice gauge theory. It is shown that α_U(1),crit^(-1) ∼ 19.8. This value is in agreement with the prediction of random dynamics. The obtained results are compared with the results of Monte Carlo simulations. 20 refs., 3 figs., 1 tab.
DEFF Research Database (Denmark)
Liang, Shanshan; Crovetto, Andrea; Peng, Zhuoteng
2016-01-01
This paper reports on a bi-resonant structure of piezoelectric PVDF film energy harvester (PPEH), which consists of two cantilevers with resonant frequencies of 15 Hz and 22 Hz. With increased acceleration, the vibration amplitudes of the two cantilever-mass structures are increased and collision... Experiments with piezoelectric elements show that the energy harvesting device with the bi-resonant structure can generate higher power output than the sum of the two separate devices from random vibration sources at low frequency, and hence significantly improves the vibration-to-electricity...
International Nuclear Information System (INIS)
Iwatsubo, Takuzo; Kawamura, Shozo; Mori, Hiroyuki.
1995-01-01
In this paper, a method to obtain the random response of a structure with uncertain parameters is proposed. The proposed method is a combination of the substructure synthesis method and the hierarchy method: the hierarchy equation of each substructure is obtained using the hierarchy method, and the hierarchy equation of the overall structure is obtained using the substructure synthesis method. Using the proposed method, the reduced-order hierarchy equation can be obtained without analyzing the original whole structure. After the mean square value of the response is calculated, reliability analysis can be carried out based on the first-passage problem and Poisson's excursion rate. As a numerical example, a simple piping system is considered, with the damping constant of the support taken as the uncertain parameter, and the random response is calculated using the proposed method. The results show that the proposed method is useful for analyzing the random response in terms of accuracy, computer storage and calculation time. (author)
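The final reliability step, a first-passage probability built from a Poisson excursion-rate assumption, can be sketched independently of the substructure machinery. A minimal illustration, assuming a stationary Gaussian response whose displacement and velocity standard deviations are already known (e.g. from the hierarchy equations); the level, durations and standard deviations below are invented, not from the paper:

```python
import math

def upcrossing_rate(a, sigma_x, sigma_v):
    """Rice's rate of up-crossings of level a for a stationary zero-mean Gaussian
    response with displacement std sigma_x and velocity std sigma_v."""
    return (sigma_v / (2.0 * math.pi * sigma_x)) * math.exp(-a**2 / (2.0 * sigma_x**2))

def first_passage_reliability(a, sigma_x, sigma_v, duration):
    """Poisson-excursion approximation: crossings are treated as rare and
    independent, so P(no crossing in [0, T]) ~ exp(-nu(a) * T)."""
    return math.exp(-upcrossing_rate(a, sigma_x, sigma_v) * duration)

# Illustrative numbers: unit-std response, barrier at 3 sigma, 60 s of excitation
r = first_passage_reliability(a=3.0, sigma_x=1.0, sigma_v=1.0, duration=60.0)
```

The approximation is standard in random-vibration reliability; the paper's contribution is obtaining the mean-square values feeding it without analyzing the full structure.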
International Nuclear Information System (INIS)
Harris, W; Yin, F; Zhang, Y; Ren, L
2016-01-01
Purpose: To investigate the feasibility of using structure-based principal component analysis (PCA) motion-modeling and weighted free-form deformation to estimate on-board 4D-CBCT using prior information and extremely limited angle projections for potential 4D target verification of lung radiotherapy. Methods: A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In the previous method, each phase of the 4D-CBCT was generated by deforming a prior CT volume. The DFM was solved by a motion-model extracted by global PCA and a free-form deformation (GMM-FD) technique, using data fidelity constraint and the deformation energy minimization. In this study, a new structural-PCA method was developed to build a structural motion-model (SMM) by accounting for potential relative motion pattern changes between different anatomical structures from simulation to treatment. The motion model extracted from planning 4DCT was divided into two structures: tumor and body excluding tumor, and the parameters of both structures were optimized together. Weighted free-form deformation (WFD) was employed afterwards to introduce flexibility in adjusting the weightings of different structures in the data fidelity constraint based on clinical interests. XCAT (computerized patient model) simulation with a 30 mm diameter lesion was simulated with various anatomical and respirational changes from planning 4D-CT to onboard volume. The estimation accuracy was evaluated by the Volume-Percent-Difference (VPD)/Center-of-Mass-Shift (COMS) between lesions in the estimated and “ground-truth” on board 4D-CBCT. Results: Among 6 different XCAT scenarios corresponding to respirational and anatomical changes from planning CT to on-board using single 30° on-board projections, the VPD/COMS for SMM-WFD was reduced to 10.64±3.04%/1.20±0.45mm from 21.72±9.24%/1.80±0.53mm for GMM-FD. Using 15° orthogonal projections, the VPD/COMS was
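The two evaluation metrics quoted in the Results, VPD and COMS, are easy to compute from binary lesion masks. A sketch assuming VPD is the non-overlapping volume expressed as a percentage of the reference lesion volume (a common reading of "volume percent difference"; the paper's exact definition may differ):

```python
import numpy as np

def vpd(est_mask, ref_mask):
    """Volume-percent-difference: non-overlapping volume relative to the reference."""
    est, ref = est_mask.astype(bool), ref_mask.astype(bool)
    return 100.0 * np.logical_xor(est, ref).sum() / ref.sum()

def coms(est_mask, ref_mask, voxel_size=1.0):
    """Center-of-mass shift between two binary masks, in physical units."""
    c_est = np.array(np.nonzero(est_mask)).mean(axis=1)
    c_ref = np.array(np.nonzero(ref_mask)).mean(axis=1)
    return voxel_size * np.linalg.norm(c_est - c_ref)

# Toy example: a 3x3x3 lesion shifted by one voxel along the first axis
ref = np.zeros((10, 10, 10)); ref[3:6, 3:6, 3:6] = 1
est = np.zeros((10, 10, 10)); est[4:7, 3:6, 3:6] = 1
v, c = vpd(est, ref), coms(est, ref)
```

For this toy shift the COMS is exactly one voxel and the VPD is 2/3 of the lesion volume.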
Energy Technology Data Exchange (ETDEWEB)
Harris, W; Yin, F; Zhang, Y; Ren, L [Duke University Medical Center, Durham, NC (United States)
2016-06-15
Purpose: To investigate the feasibility of using structure-based principal component analysis (PCA) motion-modeling and weighted free-form deformation to estimate on-board 4D-CBCT using prior information and extremely limited angle projections for potential 4D target verification of lung radiotherapy. Methods: A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In the previous method, each phase of the 4D-CBCT was generated by deforming a prior CT volume. The DFM was solved by a motion-model extracted by global PCA and a free-form deformation (GMM-FD) technique, using data fidelity constraint and the deformation energy minimization. In this study, a new structural-PCA method was developed to build a structural motion-model (SMM) by accounting for potential relative motion pattern changes between different anatomical structures from simulation to treatment. The motion model extracted from planning 4DCT was divided into two structures: tumor and body excluding tumor, and the parameters of both structures were optimized together. Weighted free-form deformation (WFD) was employed afterwards to introduce flexibility in adjusting the weightings of different structures in the data fidelity constraint based on clinical interests. XCAT (computerized patient model) simulation with a 30 mm diameter lesion was simulated with various anatomical and respirational changes from planning 4D-CT to onboard volume. The estimation accuracy was evaluated by the Volume-Percent-Difference (VPD)/Center-of-Mass-Shift (COMS) between lesions in the estimated and “ground-truth” on board 4D-CBCT. Results: Among 6 different XCAT scenarios corresponding to respirational and anatomical changes from planning CT to on-board using single 30° on-board projections, the VPD/COMS for SMM-WFD was reduced to 10.64±3.04%/1.20±0.45mm from 21.72±9.24%/1.80±0.53mm for GMM-FD. Using 15° orthogonal projections, the VPD/COMS was
Prior Elicitation, Assessment and Inference with a Dirichlet Prior
Directory of Open Access Journals (Sweden)
Michael Evans
2017-10-01
Methods are developed for eliciting a Dirichlet prior based upon stating bounds on the individual probabilities that hold with high prior probability. This approach to selecting a prior is applied to a contingency table problem where it is demonstrated how to assess the prior with respect to the bias it induces as well as how to check for prior-data conflict. It is shown that the assessment of a hypothesis via relative belief can easily take into account what it means for the falsity of the hypothesis to correspond to a difference of practical importance and provide evidence in favor of a hypothesis.
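The elicitation step, checking how much prior probability a candidate Dirichlet assigns to the stated bounds, can be approximated by simulation. A hypothetical sketch (the concentration parameters and bounds are invented, not taken from the paper, which works with the bounds analytically):

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_prob_bounds(alpha, lows, highs, n_draws=100_000):
    """Monte Carlo estimate of the prior probability that every cell probability
    of a Dirichlet(alpha) draw lies inside its elicited bounds."""
    draws = rng.dirichlet(alpha, size=n_draws)
    inside = np.all((draws >= lows) & (draws <= highs), axis=1)
    return inside.mean()

# Hypothetical elicitation: three cells, each believed to lie in [0.1, 0.6]
alpha = np.array([5.0, 5.0, 5.0])
coverage = prior_prob_bounds(alpha, lows=0.1, highs=0.6)
```

If `coverage` falls short of the desired "high prior probability", the concentration parameters would be adjusted and the check repeated.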
Energy Technology Data Exchange (ETDEWEB)
Capiez-Lernout, E.; Soize, Ch. [Universite de Marne la Vallee, Lab. de Mecanique, 77 (France)
2003-10-01
The mistuning of blades is frequently the cause of spatial localization of the dynamic forced response in the turbomachinery industry. The random character of mistuning requires the construction of probabilistic models of random uncertainties. A usual parametric probabilistic description considers the mistuning through the Young's modulus of each blade. This model consists of mistuning the blade eigenfrequencies while assuming the blade modal shapes unchanged. Recently a new approach, known as a non-parametric model of random uncertainties, has been introduced for modelling random uncertainties in elastodynamics. This paper proposes the construction of a non-parametric model which is coherent with all the uncertainties that characterize mistuning. As mistuning is a phenomenon which is independent from one blade to another, the structure is considered as an assemblage of substructures. The mean reduced matrix model required by the non-parametric approach is thus constructed by dynamic substructuring. A comparative study is also carried out to assess the influence of the non-parametric approach relative to a usual parametric model adapted to mistuning. A numerical example is presented. (authors)
International Nuclear Information System (INIS)
Vani, V C; Chatterjee, S
2010-01-01
The matched filter method for detecting a periodic structure on a surface hidden behind randomness is known to detect up to (r0/Λ) ≥ 0.11, where r0 is the coherence length of light on scattering from the rough part and Λ is the wavelength of the periodic part of the surface, the above limit being much lower than what is allowed by conventional detection methods. The primary goal of this technique is the detection and characterization of the periodic structure hidden behind randomness without the use of any complicated experimental or computational procedures. This paper examines this detection procedure for various values of the amplitude a of the periodic part, beginning from a = 0 up to small finite values of a. We thus address the importance of the following quantities in determining the detectability of the intensity peaks: (a/λ), which scales the amplitude of the periodic part with the wavelength of light, and (r0/Λ).
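The core of the detection idea, pulling a periodic component of wavelength Λ out of dominant roughness, can be sketched with a 1-D matched filter in the Fourier domain: correlating against sinusoids of every wavelength is equivalent to inspecting the power spectrum, where the grating appears as a sharp line. The amplitude, wavelength and profile length below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D surface: a weak grating of wavelength Lam buried in unit-variance roughness
n, Lam, a = 4096, 64, 0.2
x = np.arange(n)
surface = a * np.sin(2 * np.pi * x / Lam) + rng.normal(0.0, 1.0, n)

# Matched filtering against all sinusoids = the power spectrum; the hidden
# grating shows up as a line at frequency 1/Lam, far above the noise floor
power = np.abs(np.fft.rfft(surface)) ** 2
k = n // Lam                      # FFT bin of the grating frequency
detected = power[k] > 10 * np.median(power)
```

Here the grating amplitude is only a fifth of the roughness standard deviation, yet the spectral line stands well above the median noise bin.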
Phase structure of the O(n) model on a random lattice for n > 2
DEFF Research Database (Denmark)
Durhuus, B.; Kristjansen, C.
1997-01-01
We show that coarse graining arguments invented for the analysis of multi-spin systems on a randomly triangulated surface apply also to the O(n) model on a random lattice. These arguments imply that if the model has a critical point with diverging string susceptibility, then either γ = +1/2 or there exists a dual critical point with negative string susceptibility exponent, γ̃, related to γ by γ = γ̃/(γ̃ - 1). Exploiting the exact solution of the O(n) model on a random lattice we show that both situations are realized for n > 2 and that the possible dual pairs of string susceptibility exponents are given by (γ̃, γ) = (-1/m, 1/(m+1)), m = 2, 3, . . . We also show that at the critical points with positive string susceptibility exponent the average number of loops on the surface diverges while the average length of a single loop stays finite.
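The quoted duality γ = γ̃/(γ̃ - 1) and the claimed pairs (γ̃, γ) = (-1/m, 1/(m+1)) can be checked mechanically with exact rational arithmetic:

```python
from fractions import Fraction

def dual(gamma_tilde):
    """String susceptibility duality: gamma = gamma_tilde / (gamma_tilde - 1)."""
    return gamma_tilde / (gamma_tilde - 1)

# The dual pairs (gamma_tilde, gamma) = (-1/m, 1/(m+1)) for m = 2, 3, ...
pairs = [(Fraction(-1, m), dual(Fraction(-1, m))) for m in range(2, 11)]
```

The map is also an involution, so applying it to γ recovers γ̃, consistent with the two exponents forming a dual pair.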
Estimating security betas using prior information based on firm fundamentals
Cosemans, M.; Frehen, R.; Schotman, P.C.; Bauer, R.
2010-01-01
This paper proposes a novel approach for estimating time-varying betas of individual stocks that incorporates prior information based on fundamentals. We shrink the rolling window estimate of beta towards a firm-specific prior that is motivated by asset pricing theory. The prior captures structural
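The shrinkage idea can be sketched in a few lines: a rolling-window OLS beta is pulled toward a firm-specific prior. The fixed weight and simulated returns below are illustrative assumptions; the paper motivates the prior from fundamentals and asset pricing theory rather than fixing an arbitrary weight:

```python
import numpy as np

def shrunk_beta(returns_i, returns_m, prior_beta, weight):
    """Combine a rolling-window OLS beta with a fundamentals-based prior.
    weight in [0, 1] is the shrinkage intensity toward the prior."""
    cov = np.cov(returns_i, returns_m)
    beta_ols = cov[0, 1] / cov[1, 1]
    return weight * prior_beta + (1.0 - weight) * beta_ols

rng = np.random.default_rng(2)
m = rng.normal(0, 0.01, 250)             # one year of daily market returns
i = 1.5 * m + rng.normal(0, 0.01, 250)   # stock with true beta 1.5
b = shrunk_beta(i, m, prior_beta=1.0, weight=0.5)
```

With equal weighting, the estimate lands between the noisy rolling beta (near 1.5) and the prior of 1.0.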
Yamamoto, Yutaka; Ishikawa, Takashi; Hozumi, Yasuo; Ikeda, Masahiko; Iwata, Hiroji; Yamashita, Hiroko; Toyama, Tatsuya; Chishima, Takashi; Saji, Shigehira; Yamamoto-Ibusuki, Mutsuko; Iwase, Hirotaka
2013-05-16
After the failure of a non-steroidal aromatase inhibitor (nsAI) for postmenopausal patients with metastatic breast cancer (mBC), it is unclear which of various kinds of endocrine therapy is the most appropriate. A randomized controlled trial was performed to compare the efficacy and safety of daily toremifene 120 mg (TOR120), a selective estrogen receptor modulator, and exemestane 25 mg (EXE), a steroidal aromatase inhibitor. The primary end point was the clinical benefit rate (CBR). The secondary end points were objective response rate (ORR), progression-free survival (PFS), overall survival (OS) and toxicity. Initially, a total of 91 women were registered in the study and randomly assigned to either TOR120 (n = 46) or EXE (n = 45) from October 2008 to November 2011. Three of the 46 patients in the TOR120 arm did not receive treatment: two withdrew from the trial by their own preference, and one was dropped because another SERM had been administered. When analyzed after a median observation period of 16.9 months, the intention-to-treat analysis showed no statistically significant difference between TOR120 (n = 46) and EXE (n = 45) in terms of CBR (41.3% vs. 26.7%; P = 0.14), ORR (10.8% vs. 2.2%; P = 0.083), or OS (hazard ratio, 0.60; P = 0.22). The PFS of TOR120 was longer than that of EXE, the difference being statistically significant (hazard ratio, 0.61; P = 0.045). The results in the treatment-received cohort (N = 88) were similar to those in the ITT cohort. Both treatments were well tolerated, with no severe adverse events, although treatment was stopped after a few days in 3 of the 43 women administered TOR120 because of nausea, general fatigue, hot flushes and night sweating. TOR120, as a subsequent endocrine therapy for mBC patients who failed non-steroidal AI treatment, could potentially be more beneficial than EXE. UMIN000001841.
International Nuclear Information System (INIS)
Yamamoto, Yutaka; Yamamoto-Ibusuki, Mutsuko; Iwase, Hirotaka; Ishikawa, Takashi; Hozumi, Yasuo; Ikeda, Masahiko; Iwata, Hiroji; Yamashita, Hiroko; Toyama, Tatsuya; Chishima, Takashi; Saji, Shigehira
2013-01-01
After the failure of a non-steroidal aromatase inhibitor (nsAI) for postmenopausal patients with metastatic breast cancer (mBC), it is unclear which of various kinds of endocrine therapy is the most appropriate. A randomized controlled trial was performed to compare the efficacy and safety of daily toremifene 120 mg (TOR120), a selective estrogen receptor modulator, and exemestane 25 mg (EXE), a steroidal aromatase inhibitor. The primary end point was the clinical benefit rate (CBR). The secondary end points were objective response rate (ORR), progression-free survival (PFS), overall survival (OS) and toxicity. Initially, a total of 91 women were registered in the study and randomly assigned to either TOR120 (n = 46) or EXE (n = 45) from October 2008 to November 2011. Three of the 46 patients in the TOR120 arm did not receive treatment: two withdrew from the trial by their own preference, and one was dropped because another SERM had been administered. When analyzed after a median observation period of 16.9 months, the intention-to-treat analysis showed no statistically significant difference between TOR120 (n = 46) and EXE (n = 45) in terms of CBR (41.3% vs. 26.7%; P = 0.14), ORR (10.8% vs. 2.2%; P = 0.083), or OS (hazard ratio, 0.60; P = 0.22). The PFS of TOR120 was longer than that of EXE, the difference being statistically significant (hazard ratio, 0.61; P = 0.045). The results in the treatment-received cohort (N = 88) were similar to those in the ITT cohort. Both treatments were well tolerated, with no severe adverse events, although treatment was stopped after a few days in 3 of the 43 women administered TOR120 because of nausea, general fatigue, hot flushes and night sweating. TOR120, as a subsequent endocrine therapy for mBC patients who failed non-steroidal AI treatment, could potentially be more beneficial than EXE. https://upload.umin.ac.jp/cgi-open-bin/ctr/ctr.cgi?function
Rapid sampling of molecular motions with prior information constraints.
Raveh, Barak; Enosh, Angela; Schueler-Furman, Ora; Halperin, Dan
2009-02-01
Proteins are active, flexible machines that perform a range of different functions. Innovative experimental approaches may now provide limited partial information about conformational changes along motion pathways of proteins. There is therefore a need for computational approaches that can efficiently incorporate prior information into motion prediction schemes. In this paper, we present PathRover, a general setup designed for the integration of prior information into the motion planning algorithm of rapidly exploring random trees (RRT). Each suggested motion pathway comprises a sequence of low-energy clash-free conformations that satisfy an arbitrary number of prior information constraints. These constraints can be derived from experimental data or from expert intuition about the motion. The incorporation of prior information is very straightforward and significantly narrows down the vast search in the typically high-dimensional conformational space, leading to dramatic reduction in running time. To allow the use of state-of-the-art energy functions and conformational sampling, we have integrated this framework into Rosetta, an accurate protocol for diverse types of structural modeling. The suggested framework can serve as an effective complementary tool for molecular dynamics, Normal Mode Analysis, and other prevalent techniques for predicting motion in proteins. We applied our framework to three different model systems. We show that a limited set of experimentally motivated constraints may effectively bias the simulations toward diverse predicates in an outright fashion, from distance constraints to enforcement of loop closure. In particular, our analysis sheds light on mechanisms of protein domain swapping and on the role of different residues in the motion.
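The PathRover idea, growing a rapidly exploring random tree but accepting only nodes that pass every prior-information predicate, can be reduced to a toy 2-D sketch. The real system plans in high-dimensional conformation space with Rosetta energies; everything below (the unit square, the corridor predicate, the step size) is invented for illustration:

```python
import math, random

random.seed(0)

def constrained_rrt(start, goal, predicates, n_iter=5000, step=0.05):
    """RRT in the unit square; a new node is kept only if it satisfies every
    prior-information predicate (a toy version of the PathRover scheme)."""
    tree = [start]
    for _ in range(n_iter):
        q = (random.random(), random.random())            # random sample
        near = min(tree, key=lambda p: (p[0]-q[0])**2 + (p[1]-q[1])**2)
        d = math.hypot(q[0]-near[0], q[1]-near[1]) or 1e-12
        new = (near[0] + step*(q[0]-near[0])/d, near[1] + step*(q[1]-near[1])/d)
        if all(pred(new) for pred in predicates):          # prior constraints
            tree.append(new)
            if math.hypot(new[0]-goal[0], new[1]-goal[1]) < step:
                return tree, True
    return tree, False

# Prior constraint: stay in a diagonal corridor (a stand-in for, e.g., a distance restraint)
corridor = lambda p: abs(p[0] - p[1]) < 0.2
tree, reached = constrained_rrt((0.1, 0.1), (0.9, 0.9), [corridor])
```

The constraint prunes the search space exactly as the paper describes: every accepted node, and hence every pathway through the tree, satisfies the prior information by construction.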
Rapid sampling of molecular motions with prior information constraints.
Directory of Open Access Journals (Sweden)
Barak Raveh
2009-02-01
Proteins are active, flexible machines that perform a range of different functions. Innovative experimental approaches may now provide limited partial information about conformational changes along motion pathways of proteins. There is therefore a need for computational approaches that can efficiently incorporate prior information into motion prediction schemes. In this paper, we present PathRover, a general setup designed for the integration of prior information into the motion planning algorithm of rapidly exploring random trees (RRT). Each suggested motion pathway comprises a sequence of low-energy clash-free conformations that satisfy an arbitrary number of prior information constraints. These constraints can be derived from experimental data or from expert intuition about the motion. The incorporation of prior information is very straightforward and significantly narrows down the vast search in the typically high-dimensional conformational space, leading to dramatic reduction in running time. To allow the use of state-of-the-art energy functions and conformational sampling, we have integrated this framework into Rosetta, an accurate protocol for diverse types of structural modeling. The suggested framework can serve as an effective complementary tool for molecular dynamics, Normal Mode Analysis, and other prevalent techniques for predicting motion in proteins. We applied our framework to three different model systems. We show that a limited set of experimentally motivated constraints may effectively bias the simulations toward diverse predicates in an outright fashion, from distance constraints to enforcement of loop closure. In particular, our analysis sheds light on mechanisms of protein domain swapping and on the role of different residues in the motion.
Accommodating Uncertainty in Prior Distributions
Energy Technology Data Exchange (ETDEWEB)
Picard, Richard Roy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vander Wiel, Scott Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-01-19
A fundamental premise of Bayesian methodology is that a priori information is accurately summarized by a single, precisely defined prior distribution. In many cases, especially involving informative priors, this premise is false, and the (mis)application of Bayes methods produces posterior quantities whose apparent precisions are highly misleading. We examine the implications of uncertainty in prior distributions, and present graphical methods for dealing with them.
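A minimal illustration of the paper's point in a conjugate setting: three priors that could all plausibly summarize the same vague a priori information lead to noticeably different posteriors. The data and prior parameters are invented; the paper itself addresses the issue with graphical methods rather than this toy example:

```python
def posterior_mean(a, b, successes, failures):
    """Posterior mean of a success probability under a Beta(a, b) prior
    and Binomial data (standard conjugate update)."""
    return (a + successes) / (a + b + successes + failures)

successes, failures = 3, 17          # hypothetical data: 3 successes in 20 trials
priors = [(1, 1), (10, 10), (2, 18)] # three candidate 'informative' priors
means = [posterior_mean(a, b, successes, failures) for a, b in priors]
```

The spread across `means` is precisely the uncertainty that committing to a single precisely defined prior conceals.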
Zwarenstein, Merrick; Reeves, Scott; Russell, Ann; Kenaszchuk, Chris; Conn, Lesley Gotlib; Miller, Karen-Lee; Lingard, Lorelei; Thorpe, Kevin E
2007-01-01
Background: Despite a burgeoning interest in using interprofessional approaches to promote effective collaboration in health care, systematic reviews find scant evidence of benefit. This protocol describes the first cluster randomized controlled trial (RCT) to design and evaluate an intervention intended to improve interprofessional collaborative communication and patient-centred care. Objectives: The objective is to evaluate the effects of a four-component, hospital-based staff commun...
Structure and Randomness of Continuous-Time, Discrete-Event Processes
Marzen, Sarah E.; Crutchfield, James P.
2017-10-01
Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models (memoryful, state-dependent versions of renewal processes). Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
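For the plain Markov-chain special case, the entropy rate has the closed form h = -Σ_i π_i Σ_j T_ij log2 T_ij over the stationary distribution π; the semi-Markov machinery in the paper generalizes this to continuous-time, memoryful processes. A sketch of the special case only:

```python
import numpy as np

def entropy_rate(T):
    """Shannon entropy rate (bits/symbol) of a Markov chain with transition
    matrix T, weighting row entropies by the stationary distribution pi."""
    evals, evecs = np.linalg.eig(T.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    pi = pi / pi.sum()
    logs = np.where(T > 0, np.log2(np.where(T > 0, T, 1.0)), 0.0)
    return float(-(pi[:, None] * T * logs).sum())

# A fair coin (i.i.d. uniform over two symbols) has entropy rate 1 bit/symbol
T = np.array([[0.5, 0.5], [0.5, 0.5]])
h = entropy_rate(T)
```

A deterministic two-state cycle, by contrast, has entropy rate zero: perfectly predictable despite never repeating a symbol twice in a row.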
Statistics of equally weighted random paths on a class of self-similar structures
International Nuclear Information System (INIS)
Knezevic, Milan; Knezevic, Dragica; Spasojevic, Djordje
2004-01-01
We study the statistics of equally weighted random walk paths on a family of Sierpinski gasket lattices whose members are labelled by an integer b (2 ≤ b < ∞). For b > 2, the mean path end-to-end distance grows more slowly than any power of its length N. We provide arguments for the emergence of the usual power-law critical behaviour in the limit b → ∞, when fractal lattices become almost compact.
International Nuclear Information System (INIS)
Granger, S.; Perotin, L.
1997-01-01
Maintaining PWR components under reliable operating conditions requires a complex design to prevent various damage processes, including fatigue and wear problems due to flow-induced vibration. In many practical situations, it is difficult, if not impossible, to perform direct measurements or calculations of the external forces acting on vibrating structures. Instead, vibrational responses can often be conveniently measured. This paper presents an inverse method for estimating a distributed random excitation from the measurement of the structural response at a number of discrete points, and is devoted to the presentation of the theoretical development. The force identification method is based on a modal model for the structure and a spatial orthonormal decomposition of the excitation field. The estimation of the Fourier coefficients of this orthonormal expansion is presented. As this problem turns out to be ill-posed, a regularization process is introduced. The minimization problem associated with this process is then formulated and its solution is developed. (author)
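The regularization step can be sketched with a generic linear inverse problem: measured responses y = Hf + noise, where H plays the role of the modal transfer matrix and f the Fourier coefficients of the excitation field. Everything below (sizes, noise level, the Tikhonov penalty λ) is invented for illustration; the paper's regularization scheme may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(3)

# Forward model: responses at 12 sensors from 8 excitation coefficients
n_sensors, n_modes = 12, 8
H = rng.normal(size=(n_sensors, n_modes))
f_true = rng.normal(size=n_modes)
y = H @ f_true + rng.normal(scale=0.01, size=n_sensors)

def tikhonov(H, y, lam):
    """Regularized least squares: minimize ||H f - y||^2 + lam * ||f||^2,
    solved via the normal equations (H'H + lam I) f = H'y."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

f_hat = tikhonov(H, y, lam=1e-3)
```

The penalty term keeps the solution stable when H is poorly conditioned, at the cost of a small bias; choosing λ is the usual trade-off in such ill-posed identification problems.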
Energy Technology Data Exchange (ETDEWEB)
Bishop, Joseph E.
2008-09-01
Under extreme loading conditions most often the extent of material and structural fracture is pervasive in the sense that a multitude of cracks are nucleating, propagating in arbitrary directions, coalescing, and branching. Pervasive fracture is a highly nonlinear process involving complex material constitutive behavior, material softening, localization, surface generation, and ubiquitous contact. Two primary applications in which pervasive fracture is encountered are (1) weapons effects on structures and (2) geomechanics of highly jointed and faulted reservoirs. A pure Lagrangian computational method based on randomly close-packed Voronoi tessellations is proposed as a rational approach for simulating the pervasive fracture of materials and structures. Each Voronoi cell is formulated as a finite element using the reproducing kernel method. Fracture surfaces are allowed to nucleate only at the intercell faces. The randomly seeded Voronoi cells provide an unbiased network for representing cracks. In this initial study two approaches for allowing the new surfaces to initiate are studied: (1) dynamic mesh connectivity and the instantaneous insertion of a cohesive traction when localization is detected, and (2) a discontinuous Galerkin approach in which the interelement tractions are an integral part of the variational formulation, but only become active once localization is detected. Pervasive fracture problems are extremely sensitive to initial conditions and system parameters. Dynamic problems exhibit a form of transient chaos. The primary numerical challenge for this class of problems is the demonstration of model objectivity and, in particular, the identification and demonstration of a measure of convergence for engineering quantities of interest.
The Prior Internet Resources 2017
DEFF Research Database (Denmark)
Engerer, Volkmar Paul; Albretsen, Jørgen
2017-01-01
The Prior Internet Resources (PIR) are presented. Prior's unpublished scientific manuscripts and his vast letter correspondence with fellow researchers of his time, his Nachlass, are now subject to transcription by Prior researchers worldwide, and form an integral part of PIR. It is demonstrated...
The Importance of Prior Knowledge.
Cleary, Linda Miller
1989-01-01
Recounts a college English teacher's experience of reading and rereading Noam Chomsky, building up a greater store of prior knowledge. Argues that Frank Smith provides a theory for the importance of prior knowledge and Chomsky's work provided a personal example with which to interpret and integrate that theory. (RS)
Structural analysis of anodic porous alumina used for resistive random access memory
International Nuclear Information System (INIS)
Lee, Jeungwoo; Nigo, Seisuke; Kato, Seiichi; Kitazawa, Hideaki; Kido, Giyuu; Nakano, Yoshihiro
2010-01-01
Anodic porous alumina with duplex layers exhibits a voltage-induced switching effect and is a promising candidate for resistive random access memory. The nanostructural analysis of porous alumina is important for understanding the switching effect. We investigated the difference between the two layers of an anodic porous alumina film using transmission electron microscopy and electron energy-loss spectroscopy. Diffraction patterns showed that both layers are amorphous, and the electron energy-loss spectroscopy indicated that the inner layer contains less oxygen than the outer layer. We speculate that the conduction paths are mostly located in the oxygen-depleted area.
Gilthorpe, M S; Dahly, D L; Tu, Y K; Kubzansky, L D; Goodman, E
2014-06-01
Lifecourse trajectories of clinical or anthropological attributes are useful for identifying how our early-life experiences influence later-life morbidity and mortality. Researchers often use growth mixture models (GMMs) to estimate such phenomena. It is common to place constraints on the random part of the GMM to improve parsimony or to aid convergence, but this can lead to an autoregressive structure that distorts the nature of the mixtures and subsequent model interpretation. This is especially true if changes in the outcome within individuals are gradual compared with the magnitude of differences between individuals. This is not widely appreciated, nor is its impact well understood. Using repeat measures of body mass index (BMI) for 1528 US adolescents, we estimated GMMs that required variance-covariance constraints to attain convergence. We contrasted constrained models with and without an autocorrelation structure to assess the impact this had on the ideal number of latent classes, their size and composition. We also contrasted model options using simulations. When the GMM variance-covariance structure was constrained, a within-class autocorrelation structure emerged. When not modelled explicitly, this led to poorer model fit and models that differed substantially in the ideal number of latent classes, as well as class size and composition. Failure to carefully consider the random structure of data within a GMM framework may lead to erroneous model inferences, especially for outcomes with greater within-person than between-person homogeneity, such as BMI. It is crucial to reflect on the underlying data generation processes when building such models.
Research on Some Bus Transport Networks with Random Overlapping Clique Structure
International Nuclear Information System (INIS)
Yang Xuhua; Sun Youxian; Wang Bo; Wang Wanliang
2008-01-01
On the basis of statistical data from the bus transport networks of three big cities in China, we propose that each bus route is a clique (maximal complete subgraph) and that a bus transport network (BTN) consists of many cliques, which intensively connect and overlap with each other. We study the network properties, which include the degree distribution, the multiple edges' overlapping time distribution, the distribution of the overlap size between any two overlapping cliques, and the distribution of the number of cliques that a node belongs to. Naturally, the cliques also constitute a network, with the overlapping nodes being their multiple links. We also investigate its network properties such as degree distribution, clustering, average path length, and so on. We propose that a BTN has the properties of random clique increment and random clique overlap; at the same time, a BTN is a small-world network that is highly clique-clustered and highly clique-overlapped. Finally, we introduce a BTN evolution model whose simulation results agree well with the statistical laws that emerge in real BTNs.
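The clique-plus-overlap construction is easy to make concrete: each route is a clique over its stops, and routes that share stops become linked nodes in the clique-level network. A toy sketch with invented routes (the paper works with measured data from three Chinese cities):

```python
from itertools import combinations

# Each bus route is a clique over its stops
routes = {
    "R1": {"a", "b", "c", "d"},
    "R2": {"c", "d", "e"},
    "R3": {"e", "f"},
    "R4": {"g", "h"},
}

# Stop-level BTN: a stop is adjacent to every other stop sharing some route
adj = {}
for stops in routes.values():
    for u, v in combinations(stops, 2):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

# Clique-level network: routes become nodes, shared stops become (multi)links,
# with the overlap size playing the role of link multiplicity
overlap = {
    (r1, r2): len(routes[r1] & routes[r2])
    for r1, r2 in combinations(routes, 2)
    if routes[r1] & routes[r2]
}
```

Here R1 and R2 overlap in two stops, R2 and R3 in one, and R4 is an isolated clique; the distributions the abstract lists are statistics of exactly these overlap counts and memberships at scale.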
Heuristics as Bayesian inference under extreme priors.
Parpart, Paula; Jones, Matt; Love, Bradley C
2018-05-01
Simple heuristics are often regarded as tractable decision strategies because they ignore a great deal of information in the input data. One puzzle is why heuristics can outperform full-information models, such as linear regression, which make full use of the available information. These "less-is-more" effects, in which a relatively simpler model outperforms a more complex model, are prevalent throughout cognitive science, and are frequently argued to demonstrate an inherent advantage of simplifying computation or ignoring information. In contrast, we show at the computational level (where algorithmic restrictions are set aside) that it is never optimal to discard information. Through a formal Bayesian analysis, we prove that popular heuristics, such as tallying and take-the-best, are formally equivalent to Bayesian inference under the limit of infinitely strong priors. Varying the strength of the prior yields a continuum of Bayesian models with the heuristics at one end and ordinary regression at the other. Critically, intermediate models perform better across all our simulations, suggesting that down-weighting information with the appropriate prior is preferable to entirely ignoring it. Rather than because of their simplicity, our analyses suggest heuristics perform well because they implement strong priors that approximate the actual structure of the environment. We end by considering how new heuristics could be derived by infinitely strengthening the priors of other Bayesian models. These formal results have implications for work in psychology, machine learning and economics. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
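The continuum described here, ordinary regression at one end and a heuristic at the infinitely-strong-prior limit at the other, corresponds to penalized regression with a growing penalty. A sketch using ridge regression on simulated data (the paper's formal analysis covers tallying and take-the-best specifically; the data and penalty values below are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

def ridge(X, y, lam):
    """Ridge regression: lam -> 0 recovers OLS; as lam grows, coefficients
    shrink toward zero while their ratios approach those of the univariate
    cue-outcome covariances -- the strong-prior, heuristic-like limit."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)

# Three cues with decreasing validity
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, 1.0, 0.5]) + rng.normal(scale=0.5, size=200)

w_ols = ridge(X, y, 1e-8)      # full-information end of the continuum
w_strong = ridge(X, y, 1e6)    # near the infinitely-strong-prior end
```

Intermediate values of the penalty interpolate between the two endpoints, which is where the paper finds the best predictive performance.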
Kirousis, Lefteris; Ortiz-Gracia, Luis; Serna, Maria
2017-01-01
This book is divided into two parts, the first of which seeks to connect the phase transitions of various disciplines, including game theory, and to explore the synergies between statistical physics and combinatorics. Phase transitions have been an active multidisciplinary field of research, bringing together physicists, computer scientists and mathematicians. The main research theme explores how atomic agents that act locally and microscopically lead to discontinuous macroscopic changes. Adopting this perspective has proven to be especially useful in studying the evolution of random and usually complex or large combinatorial objects (like networks or logic formulas) with respect to discontinuous changes in global parameters like connectivity, satisfiability etc. There is, of course, an obvious strategic element in the formation of a transition: the atomic agents “selfishly” seek to optimize a local parameter. However, up to now this game-theoretic aspect of abrupt, locally triggered changes had not been e...
Structural and magnetic properties of Co films on highly textured and randomly oriented C60 layers
International Nuclear Information System (INIS)
Kim, Dong-Ok; Choi, Jun Woo; Lee, Dong Ryeol
2016-01-01
The structural and magnetic properties of Co/C60/pentacene and Co/C60 thin film structures were investigated. Atomic force microscopy and x-ray reflectivity analysis show that the presence or absence of a pentacene buffer layer leads to a highly textured or randomly oriented C60 layer, respectively. A Co film deposited on a randomly oriented C60 layer penetrates into the C60 layer when it is deposited at a slow deposition rate. The Co penetration can be minimized, regardless of the Co deposition rate, by growth on a highly textured and nanostructured C60/pentacene layer. Vibrating sample magnetometry measurements show that the saturation magnetization of Co/C60/pentacene is significantly reduced compared to that of Co/C60. On the other hand, the Co penetration does not seem to have an effect on the magnetic properties, suggesting that the structural properties of the Co and C60 layers, rather than the Co penetration into the organic C60 layer, are critical to the magnetic properties of Co/C60. - Highlights: • Structural and magnetic properties of the metal (Co)-organic (C60) interface are studied. • A highly textured C60 layer was grown on a pentacene buffer layer (C60/pentacene). • Co penetration into the C60 is significantly suppressed in Co/C60/pentacene. • The Co magnetization in Co/C60/pentacene is reduced compared to that in Co/C60.
Avendaño-Valencia, Luis David; Fassois, Spilios D.
2017-12-01
The problem of vibration-based damage diagnosis in structures characterized by time-dependent dynamics under significant environmental and/or operational uncertainty is considered. A stochastic framework consisting of a Gaussian Mixture Random Coefficient model of the uncertain time-dependent dynamics under each structural health state, proper estimation methods, and Bayesian or minimum distance type decision making, is postulated. The Random Coefficient (RC) time-dependent stochastic model with coefficients following a multivariate Gaussian Mixture Model (GMM) allows for significant flexibility in uncertainty representation. Certain of the model parameters are estimated via a simple procedure which is founded on the related Multiple Model (MM) concept, while the GMM weights are explicitly estimated for optimizing damage diagnostic performance. The postulated framework is demonstrated via damage detection in a simple simulated model of a quarter-car active suspension with time-dependent dynamics and considerable uncertainty on the payload. Comparisons with a simpler Gaussian RC model based method are also presented, with the postulated framework shown to be capable of offering considerable improvement in diagnostic performance.
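A toy sketch of the Random Coefficient idea with a Gaussian mixture follows. This is not the authors' estimator: the AR(2) model order, all parameter values, and the use of scikit-learn's GaussianMixture are illustrative assumptions. Coefficients of a time-series model are estimated for many simulated realizations whose dynamics vary across two operating regimes, and the scatter of the coefficient vectors is then modeled with a two-component GMM.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def ar2_coeffs(x):
    """Least-squares AR(2) fit: x[t] ~ a1*x[t-1] + a2*x[t-2]."""
    X = np.column_stack([x[1:-1], x[:-2]])
    return np.linalg.lstsq(X, x[2:], rcond=None)[0]

rng = np.random.default_rng(1)
coeffs = []
for _ in range(200):
    # Two operating regimes (e.g. payload variations) perturb the dynamics
    a1 = rng.choice([1.2, 1.5]) + rng.normal(scale=0.02)
    x = np.zeros(500)
    for t in range(2, 500):
        x[t] = a1 * x[t - 1] - 0.7 * x[t - 2] + rng.normal()
    coeffs.append(ar2_coeffs(x))

# The mixture captures the multimodal scatter of the random coefficients;
# each structural health state would get its own fitted mixture, and
# diagnosis would compare likelihoods under the fitted mixtures.
gmm = GaussianMixture(n_components=2, random_state=0).fit(np.array(coeffs))
print(np.sort(gmm.means_[:, 0]))
```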
Sherr, Michael E.; Crow, Janet; Stamey, James; Jones, Johnny; Dyer, Preston
2012-01-01
This study examined the influence of family structure on the outcomes of a sex education program in Miami, Florida. Using an experimental design, data collection occurred at pretest, 3-month, and 6-month follow-up with a sample of teenagers from high schools with a large majority of minority youth, assigned into treatment (n = 549) and control (n…
Recruiting for Prior Service Market
2008-06-01
…perceptions, expectations and issues for re-enlistment • Develop potential marketing and advertising tactics and strategies targeted to the defined… (U.S. Army Recruiting Command; MAJ Eric Givens / MAJ Brian…)
Random lattice structures. Modelling, manufacture and FEA of their mechanical response
Maliaris, G.; Sarafis, I. T.; Lazaridis, T.; Varoutoglou, A.; Tsakataras, G.
2016-11-01
The implementation of lightweight structures in various applications, especially in the aerospace/automotive industries and orthopaedics, has become a necessity due to their exceptional mechanical properties with respect to reduced weight. In this work we present a Voronoi tessellation based algorithm, which has been developed for modelling stochastic lattice structures. With the proposed algorithm, it is possible to generate CAD geometry with controllable structural parameters, such as porosity, cell number and strut thickness. The digital structures were transformed into physical objects through the combination of 3D printing techniques and investment casting. This process was applied to check the mechanical behaviour of the generated digital models. Until now, the only way to materialize such structures into physical objects was through 3D printing methods such as Selective Laser Sintering/Melting (SLS/SLM). Investment casting possesses numerous advantages over SLS/SLM, the major one being the material variety. On the other hand, several trials are required to calibrate the process parameters for successful castings, which is the major drawback of investment casting. The manufactured specimens were subjected to compression tests, where their mechanical response was registered in the form of compressive load-displacement curves. Also, a finite element model was developed using the specimens' CAD data and compression test parameters. The FE-assisted calculation of specimen plastic deformation is identical to that of the physical object, which validates the conclusions drawn from the simulation results. As was observed, strut contact is initiated when specimen deformation is approximately 5 mm. The FE-calculated compressive force follows the same trend for the first 3 mm of compression but then diverges because of the elasto-plastic FE model type definition and the remeshing steps that occur.
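A minimal sketch of a Voronoi-based stochastic lattice generator follows. The paper does not give its algorithm's details, so the point count, box size, and the use of SciPy's Voronoi are assumptions. Here the finite Voronoi ridge edges serve as lattice struts; porosity and cell size would be controlled through the seed-point density, and strut thickness applied later when sweeping the edges into solid CAD geometry.

```python
import numpy as np
from scipy.spatial import Voronoi

def lattice_struts(n_cells, box=1.0, seed=0):
    """Generate strut segments of a stochastic lattice as the finite
    edges of a Voronoi tessellation of random seed points in a cube."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0.0, box, size=(n_cells, 3))
    vor = Voronoi(pts)
    struts = []
    for ridge in vor.ridge_vertices:
        # walk the ridge polygon's edges, including the closing edge
        for a, b in zip(ridge, ridge[1:] + ridge[:1]):
            if a != -1 and b != -1:  # skip edges extending to infinity
                struts.append((vor.vertices[a], vor.vertices[b]))
    return struts

struts = lattice_struts(50)
print(len(struts))
```

Denser seed points give smaller cells and lower porosity once struts are thickened, which is the controllability the abstract describes.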
Structured patient handoff on an internal medicine ward: A cluster randomized control trial.
Tam, Penny; Nijjar, Aman P; Fok, Mark; Little, Chris; Shingina, Alexandra; Bittman, Jesse; Raghavan, Rashmi; Khan, Nadia A
2018-01-01
The effect of a multi-faceted handoff strategy in a high volume internal medicine inpatient setting on process and patient outcomes has not been clearly established. We set out to determine if a multi-faceted handoff intervention consisting of education and standardized handoff procedures, including a fixed time and location for face-to-face handoff, would result in improved rates of handoff compared with usual practice. We also evaluated resident satisfaction, health resource utilization and clinical outcomes. This was a cluster randomized controlled trial in a large academic tertiary care center with 18 inpatient internal medicine ward teams from January-April 2013. We randomized nine inpatient teams to an intervention where they received an education session standardizing who and how to hand off patients, with practice and feedback from facilitators. The control group of nine teams continued usual non-standardized handoffs. The primary process outcome was the rate of patients handed over per 1000 patient-nights. Other process outcomes included perceptions of inadequate handoff by overnight physicians, resource utilization overnight and hospital length of stay. Clinical outcomes included medical errors, frequency of patients requiring a higher level of care overnight, and in-hospital mortality. The intervention group demonstrated a significant increase in the rate of patients handed over to the overnight physician (62.90/1000 person-nights vs. 46.86/1000 person-nights, p = 0.002). There was no significant difference in other process outcomes, except that resource utilization was increased in the intervention group (26.35/1000 person-days vs. 17.57/1000 person-days, p-value = 0.01). There was no significant difference between groups in medical errors (4.8% vs. 4.1%), need for a higher level of care or in-hospital mortality. Limitations include a dependence on accurate record keeping by the overnight physician, the possibility of cross-contamination in the handoff process, analysis at
Directory of Open Access Journals (Sweden)
L. Genovese
2015-11-01
A series of novel random copolymers of poly(propylene 1,4-cyclohexanedicarboxylate) (PPCE) containing a neopentyl glycol sub-unit (P(PCExNCEy)) were synthesized and characterized in terms of molecular and solid-state properties. In addition, biodegradability studies in compost have been conducted. The copolymers displayed a high and similar thermal stability with respect to PPCE. At room temperature, all the copolymers appeared as semicrystalline materials: the main effect of copolymerization was a lowering of the crystallinity degree (χc) and a decrease of the melting temperature compared to the parent homopolymer. In particular, Wide Angle X-Ray Diffraction (WAXD) measurements indicated that the P(PCExNCEy) copolymers are characterized by cocrystallization, with PNCE co-units cocrystallizing in the PPCE crystalline phase. Final properties and biodegradation rate of the materials under study were strictly dependent on copolymer composition and χc. As a matter of fact, the elastic modulus and the elongation at break decreased and increased, respectively, as the neopentyl glycol cyclohexanedicarboxylate (NCE) unit content was increased. The presence of a rigid-amorphous phase was evidenced by means of Dynamic Mechanical Thermal Analysis (DMTA) in all the samples under investigation. Lastly, the biodegradation rate of the P(PCExNCEy) copolymers was found to slightly increase with increasing NCE molar content.
Structure-Function Analysis of Chloroplast Proteins via Random Mutagenesis Using Error-Prone PCR.
Dumas, Louis; Zito, Francesca; Auroy, Pascaline; Johnson, Xenie; Peltier, Gilles; Alric, Jean
2018-06-01
Site-directed mutagenesis of chloroplast genes was developed three decades ago and has greatly advanced the field of photosynthesis research. Here, we describe a new approach for generating random chloroplast gene mutants that combines error-prone polymerase chain reaction of a gene of interest with chloroplast complementation of the knockout Chlamydomonas reinhardtii mutant. As a proof of concept, we targeted a 300-bp sequence of the petD gene that encodes subunit IV of the thylakoid membrane-bound cytochrome b6f complex. By sequencing chloroplast transformants, we revealed 149 mutations in the 300-bp target petD sequence that resulted in 92 amino acid substitutions in the 100-residue target subunit IV sequence. Our results show that this method is suited to the study of highly hydrophobic, multisubunit, and chloroplast-encoded proteins containing cofactors such as hemes, iron-sulfur clusters, and chlorophyll pigments. Moreover, we show that mutant screening and sequencing can be used to study photosynthetic mechanisms or to probe the mutational robustness of chloroplast-encoded proteins, and we propose that this method is a valuable tool for the directed evolution of enzymes in the chloroplast. © 2018 American Society of Plant Biologists. All rights reserved.
Vyas, Manan; Kota, V K B; Chavda, N D
2010-03-01
Finite interacting Fermi systems with a mean-field and a chaos generating two-body interaction are modeled by one plus two-body embedded Gaussian orthogonal ensembles of random matrices with spin degree of freedom [called EGOE(1+2)-s]. Numerical calculations are used to demonstrate that, as λ, the strength of the interaction (measured in units of the average spacing of the single-particle levels defining the mean-field), increases, generically there is a Poisson to GOE transition in level fluctuations, a Breit-Wigner to Gaussian transition in strength functions (also called local density of states), and also a duality region where the information entropy will be the same in both the mean-field and interaction defined bases. The spin dependence of the transition points λ_c, λ_F, and λ_d, respectively, is described using the propagator for the spectral variances, and the formula for the propagator is derived. We further establish that the duality region corresponds to a region of thermalization. For this purpose we compared the single-particle entropy defined by the occupancies of the single-particle orbitals with the thermodynamic entropy and the information entropy for various λ values, and they are very close to each other at λ = λ_d.
A saturation property of structures obtained by forcing with a compact family of random variables
Czech Academy of Sciences Publication Activity Database
Krajíček, Jan
2013-01-01
Vol. 52, No. 1-2 (2013), pp. 19-28. ISSN 1432-0665. R&D Projects: GA AV ČR IAA100190902. Keywords: Boolean-valued structures; saturation property; non-standard model of arithmetic. Subject RIV: BA - General Mathematics. Impact factor: 0.324 (2013). http://link.springer.com/article/10.1007%2Fs00153-012-0304-9
Bayesian exponential random graph modeling of whole-brain structural networks across lifespan
Sinke, Michel R T; Dijkhuizen, Rick M; Caimo, Alberto; Stam, Cornelis J; Otte, Wim
2016-01-01
Descriptive neural network analyses have provided important insights into the organization of structural and functional networks in the human brain. However, these analyses have limitations for inter-subject or between-group comparisons in which network sizes and edge densities may differ, such as in studies on neurodevelopment or brain diseases. Furthermore, descriptive neural network analyses lack an appropriate generic null model and a unifying framework. These issues may be solved with an...
Cochlea Segmentation using Iterated Random Walks with Shape Prior
DEFF Research Database (Denmark)
Ruiz Pujadas, Esmeralda; Kjer, Hans Martin; Vera, Sergio
2016-01-01
Cochlear implants can restore hearing to deaf or partially deaf patients. In order to plan the intervention, a model from high resolution μCT images is to be built from accurate cochlea segmentations and then, adapted to a patient-specific model. Thus, a precise segmentation is required to build...
Recchia, Stephen
Kevlar is the most common high-end plastic filament yarn used in body armor, tire reinforcement, and wear resistant applications. Kevlar is a trade name for an aramid fiber. These are fibers in which the chain molecules are highly oriented along the fiber axis, so the strength of the chemical bond can be exploited. The bulk material is extruded into filaments that are bound together into yarn, which may be corded with other materials as in car tires, woven into a fabric, or layered in an epoxy to make composite panels. The high tensile strength to low weight ratio makes this material ideal for designs that decrease weight and inertia, such as automobile tires, body panels, and body armor. For designs that use Kevlar, increasing the strength, or tenacity, to weight ratio would improve performance or reduce the cost of all products that are based on this material. This thesis computationally and experimentally investigates the tenacity and stiffness of Kevlar yarns with varying twist ratios. The test boundary conditions were replicated with a geometrically accurate finite element model, and a customized code that can reproduce the tortuous filaments in a yarn was developed. The solid model geometry capturing filament tortuosity was implemented through a random walk method of axial geometry creation. A finite element analysis successfully recreated the yarn strength and stiffness dependency observed during the tests. The physics applied in the finite element model was reproduced in an analytical equation that was able to predict the dependency of failure strength and strain on twist ratio. The analytical solution can be employed to optimize yarn design for high strength applications.
Random lock-in intervals for tubular structural elements subject to simulated natural wind
DEFF Research Database (Denmark)
Christensen, Claus F.; Ditlevsen, Ove Dalager
1999-01-01
The paper reports on wind tunnel experiments with an elastically suspended circular cylinder vibrating under the excitation of natural wind of high turbulence degree. The natural wind turbulence was simulated by superposing the low frequency part of the natural wind turbulence on the background high...... structural elements subject to the natural wind. The engineering relevance of the investigation is supported by comparison with the unrealistically conservative rules for wind induced fatigue commonly given in codes of practice. The stochastic lock-in model as well as the related fatigue calculation procedure...
Randomized random walk on a random walk
International Nuclear Information System (INIS)
Lee, P.A.
1983-06-01
This paper discusses generalizations of the model introduced by Kehr and Kunter of the random walk of a particle on a one-dimensional chain which in turn has been constructed by a random walk procedure. The superimposed random walk is randomised in time according to the occurrences of a stochastic point process. The probability of finding the particle in a particular position at a certain instant is obtained explicitly in the transform domain. It is found that the asymptotic behaviour for large time of the mean-square displacement of the particle depends critically on the assumed structure of the basic random walk, giving a diffusion-like term for an asymmetric walk or a square root law if the walk is symmetric. Many results are obtained in closed form for the Poisson process case, and these agree with those given previously by Kehr and Kunter. (author)
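The contrast between the two asymptotic regimes can be checked by direct simulation. This sketch builds a chain by a random walk and lets a particle walk symmetrically on it, comparing a symmetric base chain against a drifting, asymmetric one; the parameter values are illustrative, and the Poisson-process time randomization is omitted since it only rescales time.

```python
import numpy as np

def msd_on_chain(p_step, n_steps=400, n_trials=2000, seed=0):
    """Mean-square displacement of a walker performing a symmetric
    random walk on the sites of a chain that was itself built by a
    random walk with step +1 (probability p_step) or -1."""
    rng = np.random.default_rng(seed)
    half = 4 * n_steps  # chain long enough that the walker never falls off
    disps = []
    for _ in range(n_trials):
        steps = rng.choice([1, -1], size=2 * half, p=[p_step, 1 - p_step])
        chain = np.concatenate([[0], np.cumsum(steps)])  # chain site heights
        idx = half + np.cumsum(rng.choice([1, -1], size=n_steps))
        disps.append(chain[idx[-1]] - chain[half])
    return float(np.mean(np.square(disps)))

msd_sym = msd_on_chain(0.5)   # symmetric chain: square-root law
msd_asym = msd_on_chain(0.8)  # asymmetric chain: diffusion-like term
print(msd_sym, msd_asym)
```

The asymmetric base walk yields a much larger, diffusion-like mean-square displacement than the symmetric one, matching the critical dependence on the base-walk structure noted above.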
Bellantone, R; Bossola, M; Carriero, C; Malerba, M; Nucera, P; Ratto, C; Crucitti, P; Pacelli, F; Doglietto, G B; Crucitti, F
1999-01-01
After trauma or surgery, researchers have suggested that medium-chain triglycerides have metabolic advantages, although they are toxic in large doses. To try to reduce this potential toxicity, structured lipids, which provide a higher oxidation rate, faster clearance from blood, improved nitrogen balance, and less accumulation in the reticuloendothelial system, could be used. Therefore, we evaluated, through a blind randomized study, the safety, tolerance, and efficacy of structured triglycerides, compared with long-chain triglycerides (LCT), in patients undergoing colorectal surgery. Nineteen patients were randomized to receive long-chain or structured triglycerides as a lipid source. They received the same amount of calories (27.2/kg/d), glucose (4 g/kg/d), protein (0.2 g/kg/d), and lipids (11.2 kcal/kg/d). Patients were evaluated during and after the treatment for clinical and laboratory variables, daily and cumulative nitrogen balance, urinary excretion of 3-methyl-histidine, and urinary 3-methylhistidine/creatinine ratio. No adverse effect that required the interruption of the treatment was observed. Triglyceride levels and clinical and laboratory variables were similar in the two groups. A predominantly positive nitrogen balance was observed from day 2 until day 5 in the LCT group and from day 1 until day 4 in the structured triglycerides group. The cumulative nitrogen balance (in grams) for days 1 to 3 was 9.7±5.2 in the experimental group and 4.4±11.8 in the control group (p = .2). For days 1 to 5 it was 10.7±10.5 and 6.5±17.9 (p = .05), respectively. The excretion of 3-methylhistidine was higher in the control group but decreased in the following days and was similar to the experimental group on day 5. This study represents the first report in which structured triglycerides are administered in postoperative patients to evaluate safety, tolerance, and efficacy. It suggests that Fe73403 is safe, well tolerated, and efficacious in terms of nitrogen
Zhang, Li-Zhi; Yuan, Wu-Zhi
2018-04-01
The motion of coalescence-induced condensate droplets on superhydrophobic surfaces (SHS) has attracted increasing attention in energy-related applications. Previous research focused on regularly rough surfaces. Here a new approach, a mesoscale lattice Boltzmann method (LBM), is proposed and used to model the dynamic behavior of coalescence-induced droplet jumping on SHS with randomly distributed rough structures. A Fast Fourier Transformation (FFT) method is used to generate non-Gaussian randomly distributed rough surfaces with the skewness (Sk), kurtosis (K) and root mean square roughness (Rq) obtained from real surfaces. Three typical spreading states of coalesced droplets are observed through LBM modeling on various rough surfaces, and these are found to significantly influence the jumping ability of the coalesced droplet. Coalesced droplets spreading in the Cassie state or in a composite state will jump off the rough surfaces, while those spreading in the Wenzel state eventually remain on the rough surfaces. It is demonstrated that rough surfaces with smaller Sk, larger Rq and K near 3.0 are beneficial to coalescence-induced droplet jumping. The new approach gives more detailed insights into the design of SHS.
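A minimal sketch of FFT-based rough-surface synthesis follows. It assumes a Gaussian height distribution and an isotropic Gaussian autocorrelation filter; the paper's non-Gaussian surfaces with prescribed Sk and K would require an additional translation step (e.g. a Johnson-system transform) that is not shown here.

```python
import numpy as np

def rough_surface(n=128, clx=8.0, rq=1.0, seed=0):
    """Spectral (FFT) synthesis of a Gaussian random rough surface:
    filter white noise in the frequency domain with a Gaussian kernel
    of correlation length clx, then rescale to RMS roughness rq."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=(n, n))
    fx = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(fx, fx, indexing="ij")
    filt = np.exp(-(kx**2 + ky**2) * (np.pi * clx) ** 2)  # low-pass filter
    z = np.real(np.fft.ifft2(np.fft.fft2(noise) * filt))
    z -= z.mean()               # zero mean plane
    return z * (rq / z.std())   # exact target RMS roughness

z = rough_surface()
print(z.shape, z.std())
```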
Energy Technology Data Exchange (ETDEWEB)
Ren, Cheng; Yang, Xingtuan; Liu, Zhiyong; Sun, Yanfei; Jiang, Shengyao [Tsinghua Univ., Beijing (China). Key Laboratory of Advanced Reactor Engineering and Safety]; Li, Congxin [Ministry of Environmental Protection of the People's Republic of China, Beijing (China). Nuclear and Radiation Safety Center]
2015-02-15
A three-dimensional pebble bed corresponding to the randomly packed bed in the heat transfer test facility built for the High Temperature Reactor Pebble-bed Module (HTR-PM) in Shandong Shidaowan is simulated via the discrete element method. Based on the simulation, we make a detailed analysis of the packing structure of the pebble bed from several aspects, such as transverse section images, longitudinal section images, radial and axial porosity distributions, the two-dimensional porosity distribution and the coordination number distribution. The calculation results show that the radial distribution of porosity is uniform in the center and oscillates near the wall; the axial distribution of porosity oscillates near the bottom and varies linearly with height due to the effect of gravity; and the average coordination number is about seven, coinciding with the most frequent coordination number. The fully established three-dimensional packing structure analysis of the pebble bed in this work is of fundamental significance for understanding the flow and heat transfer characteristics throughout pebble-bed type structures.
Random dynamics and relations between the number of fermion generations and the fine structure
International Nuclear Information System (INIS)
Nielsen, H.B.
1989-01-01
By looking at the structure and crude features of the parameters of the Standard Model we argue for some properties of physics at a more fundamental level, presumably the Planck energy scale. These properties suggest a picture of 'anti-grand-unification' in the sense that, contrary to usual grand unification, we do not expect a simple gauge group at the high energy level. Rather, we expect to see a gauge algebra which is a cross product of several simple or abelian factors. A symmetry breaking mechanism called confusion may then break each set of isomorphic factors down to the diagonal subgroup, thereby explaining the fact that none of the direct product factors in the Standard Model are repeated. (orig.)
Kim, Hyungjin; Kim, Sihyun; Kim, Hyun-Min; Lee, Kitae; Kim, Sangwan; Pak, Byung-Gook
2018-09-01
In this study, we investigate a one-transistor (1T) dynamic random access memory (DRAM) cell based on a gated-thyristor device utilizing voltage-driven bistability to enable high-speed operations. The structural feature of the surrounding gate using a sidewall provides high scalability with regard to constructing an array architecture of the proposed devices. In addition, the operation mechanism, I-V characteristics, DRAM operations, and bias dependence are analyzed using a commercial device simulator. Unlike conventional 1T DRAM cells utilizing the floating body effect, excess carriers which are required to be stored to make two different states are not generated but injected from the n+ cathode region, giving the device high-speed operation capabilities. The findings here indicate that the proposed DRAM cell offers distinct advantages in terms of scalability and high-speed operations.
Structure of Sn1−xGex random alloys as obtained from the coherent potential approximation
Pulikkotil, J. J.
2011-08-09
The structure of the Sn1−xGex random alloys is studied using density functional theory and the coherent potential approximation. We report on the deviation of the Sn1−xGex alloys from Vegard’s law, addressing their full compositional range. The findings are compared to the related Si1−xGex alloys and to experimental results. Interestingly, the deviation from Vegard’s law is quantitatively and qualitatively different between the Sn1−xGex and Si1−xGex alloys. An almost linear dependence of the bulk modulus as a function of composition is found for Si1−xGex, whereas for Sn1−xGex the dependence is strongly nonlinear.
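For reference, Vegard's law as invoked above is the linear interpolation of the alloy lattice parameter between the end members, and the deviation from it is commonly summarized by a bowing term; the bowing parameterization below is a generic convention, not a fitted result from this study.

```latex
% Vegard's law: linear interpolation between the end members
a_{\mathrm{Vegard}}(x) = x\, a_{\mathrm{Ge}} + (1 - x)\, a_{\mathrm{Sn}}
% Deviation from Vegard's law, often written with a bowing parameter b:
\Delta a(x) = a_{\mathrm{alloy}}(x) - a_{\mathrm{Vegard}}(x) \approx -\, b\, x (1 - x)
```

The abstract's observation that the deviation differs qualitatively between Sn1−xGex and Si1−xGex corresponds to b (or its compositional dependence) differing between the two alloy systems.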
A novel multiplexer-based structure for random access memory cell in quantum-dot cellular automata
Naji Asfestani, Mazaher; Rasouli Heikalabad, Saeed
2017-09-01
Quantum-dot cellular automata (QCA) is a new nanoscale technology and a promising replacement for CMOS circuits in the future. Memory is one of the basic components of any digital system, so designing a high-speed, optimized random access memory (RAM) in QCA is important. In this paper, by employing the structure of a multiplexer, a novel RAM cell architecture is proposed. The proposed architecture is implemented without the coplanar crossover approach. It is simulated using QCADesigner version 2.0.3 and QCAPro. The simulation results demonstrate that the proposed QCA RAM architecture has the best performance in terms of delay, circuit complexity, area, cell count and energy consumption in comparison with other QCA RAM architectures.
Carneiro, Lara S F; Fonseca, António Manuel; Vieira-Coelho, Maria Augusta; Mota, Maria Paula; Vasconcelos-Raposo, José
2015-12-01
Physical exercise has been consistently documented as a complementary therapy in the treatment of depressive disorders. However, despite a higher prevalence among women compared to men, trials conducted in women are scarce. In addition, the optimal dosage of exercise capable of producing benefits that reduce depressive symptoms remains unclear. This clinical trial was designed to measure the effect of a structured physical exercise program as a complement to antidepressant medication in the treatment of women with depression. From July 2013 to May 2014, we implemented a randomized controlled trial (HAPPY BRAIN study). A total of 26 women (aged 50.16 ± 12.08) diagnosed with clinical depression were randomized either to a supervised aerobic exercise group (45-50 min/week three times a week for four months) plus pharmacotherapy (intervention group), or to antidepressant medication only (control group). The exercise group presented a decrease in the BDI-II and DASS-21 total score scales. With respect to the DASS-21, it showed a significant decrease in anxiety and stress. Compared to the control group, the exercise group showed improvement in physical functioning parameters between baseline and post-intervention. Moreover, anthropometric parameters presented significant differences between groups only in fat mass percentage. Nonetheless, no differences were found between groups in weight, body mass index, waist circumference, and self-esteem. Our results showed that supervised structured aerobic exercise training could be an effective adjuvant therapy for treating women with depression, reducing depressive symptomatology and improving physical fitness. A key factor in this improvement was strict control of exercise workload parameters and adjustment to each subject's capacity. In our study, due to the sample size, there is an increased probability of type II errors. Copyright © 2015 Elsevier Ltd. All rights reserved.
Low-Quality Structural and Interaction Data Improves Binding Affinity Prediction via Random Forest.
Li, Hongjian; Leung, Kwong-Sak; Wong, Man-Hon; Ballester, Pedro J
2015-06-12
Docking scoring functions can be used to predict the strength of protein-ligand binding. It is widely believed that training a scoring function with low-quality data is detrimental for its predictive performance. Nevertheless, there is a surprising lack of systematic validation experiments in support of this hypothesis. In this study, we investigated to which extent training a scoring function with data containing low-quality structural and binding data is detrimental for predictive performance. We actually found that low-quality data is not only non-detrimental, but beneficial for the predictive performance of machine-learning scoring functions, though the improvement is less important than that coming from high-quality data. Furthermore, we observed that classical scoring functions are not able to effectively exploit data beyond an early threshold, regardless of its quality. This demonstrates that exploiting a larger data volume is more important for the performance of machine-learning scoring functions than restricting to a smaller set of higher data quality.
Acquisition of multiple prior distributions in tactile temporal order judgment
Directory of Open Access Journals (Sweden)
Yasuhito Nagai
2012-08-01
Full Text Available The Bayesian estimation theory proposes that the brain acquires the prior distribution of a task and integrates it with sensory signals to minimize the effect of sensory noise. Psychophysical studies have demonstrated that our brain actually implements Bayesian estimation in a variety of sensory-motor tasks. However, these studies only imposed one prior distribution on participants within a task period. In this study, we investigated the conditions that enable the acquisition of multiple prior distributions in temporal order judgment (TOJ) of two tactile stimuli across the hands. In Experiment 1, stimulation intervals were randomly selected from one of two prior distributions (biased to right hand earlier and biased to left hand earlier) in association with color cues (green and red, respectively). Although the acquisition of the two priors was not enabled by the color cues alone, it was significant when participants shifted their gaze (above or below) in response to the color cues. However, the acquisition of multiple priors was not significant when participants moved their mouths (opened or closed). In Experiment 2, the spatial cues (above and below) were used to identify which eye position or retinal cue position was crucial for the eye-movement-dependent acquisition of multiple priors in Experiment 1. The acquisition of the two priors was significant when participants moved their gaze to the cues (i.e., the cue positions on the retina were constant across the priors), as well as when participants did not shift their gazes (i.e., the cue positions on the retina changed according to the priors). Thus, both eye and retinal cue positions were effective in acquiring multiple priors. Based on previous neurophysiological reports, we discuss possible neural correlates that contribute to the acquisition of multiple priors.
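The precision-weighted integration described above can be sketched with a minimal Gaussian example (all numbers are hypothetical illustrations, not values from the study): under a Gaussian prior and Gaussian sensory noise, the posterior mean is a weighted average of the noisy observation and the prior mean, so two different acquired priors bias the same measurement in opposite directions.

```python
def bayes_estimate(x, sigma_s, mu_p, sigma_p):
    """Posterior mean and variance when a Gaussian prior N(mu_p, sigma_p^2)
    is combined with one noisy Gaussian observation x (noise sd sigma_s)."""
    w = sigma_p**2 / (sigma_p**2 + sigma_s**2)   # weight on the observation
    mu_post = w * x + (1 - w) * mu_p
    var_post = (sigma_p**2 * sigma_s**2) / (sigma_p**2 + sigma_s**2)
    return mu_post, var_post

# Two priors (e.g. "right hand earlier" vs "left hand earlier") pull the
# same 0-ms interval observation toward opposite biases:
print(bayes_estimate(0.0, 30.0, +40.0, 30.0))  # -> (20.0, 450.0)
print(bayes_estimate(0.0, 30.0, -40.0, 30.0))  # -> (-20.0, 450.0)
```

With equal prior and sensory variances the weight is 0.5, so a prior centered at +40 ms pulls the estimate to +20 ms and a prior at -40 ms pulls it to -20 ms.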
International Nuclear Information System (INIS)
Kim, Jiwoong
2015-01-01
In theoretical calculations, expressing the random distribution of atoms in a certain crystal structure is still challenging. The special quasi-random structure (SQS) model is effective for depicting such random distributions. The SQS model had not previously been applied to semi-empirical thermodynamic calculations; here, Debye–Grüneisen theory (DGT), a semi-empirical method, was used for that purpose. Model reliability was assessed by comparing supercell models of various sizes. The results for chemical bonds, pair correlations, and elastic properties demonstrated the reliability of the SQS models. Thermodynamic calculations using density functional perturbation theory (DFPT) and DGT assessed the applicability of the SQS models. DGT and DFPT led to similar variations of the mixing and formation energies. This study provides guidelines for theoretical assessments to obtain reliable SQS models and to calculate the thermodynamic properties of numerous materials with a random atomic distribution. - Highlights: • Various material properties are used to examine the reliability of special quasi-random structures. • SQS models are applied to thermodynamic calculations by semi-empirical methods. • Basic calculation guidelines for materials with random atomic distribution are given.
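The SQS idea can be illustrated with a toy 1-D analogue (hypothetical, much simpler than the supercell models used in the study): among random equal-composition configurations, pick the one whose first few pair correlations best match the ideal random-alloy value, which is zero for a 50/50 alloy.

```python
import random

def pair_corr(config, shell):
    """Average spin-spin product at separation `shell` on a periodic 1-D
    chain of +/-1 'atoms'; a 50/50 random alloy has target correlation 0."""
    n = len(config)
    return sum(config[i] * config[(i + shell) % n] for i in range(n)) / n

def sqs_search(n=12, shells=(1, 2, 3), tries=2000, seed=3):
    """Toy SQS search: shuffle an equal-composition chain many times and
    keep the configuration whose shell correlations are closest to 0."""
    rng = random.Random(seed)
    base = [1] * (n // 2) + [-1] * (n // 2)
    best, best_err = None, float("inf")
    for _ in range(tries):
        rng.shuffle(base)
        err = sum(pair_corr(base, s) ** 2 for s in shells)
        if err < best_err:
            best, best_err = list(base), err
    return best, best_err
```

Real SQS construction matches many-body correlation functions on 3-D lattices; this sketch only shows the objective being minimized.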
International Nuclear Information System (INIS)
Kurosu, Shunji; Fukuda, Takahiro; Maekawa, Toru
2013-01-01
Assemblies composed of nanostructures such as nanofibres have been intensively studied in recent years. This has particularly been the case in the field of biomedicine, where the aim is to develop efficient methodologies for capturing and separating target biomolecules and cells and/or encouraging bio-chemical reactions, utilizing the extremely high surface-area-to-volume ratio of such assemblies. There is an urgent need for a quick synthesis method for forming nanofibrous structures on the surface of biomedical microchips and devices, for the investigation of the interactions between biomolecules/cells and the nanostructures. Here, we produce nanofibrous structures composed of C60 molecules, aligned in one direction or randomly oriented, by dissolving C60 molecules and sulphur in benzene and evaporating a droplet of the solution on a glass substrate under appropriate conditions. The synthesis time is as short as 30 s. Sulphur is extracted and the nanofibres are crystallized by leaving them in supercritical carbon dioxide. (paper)
Quantum steganography using prior entanglement
International Nuclear Information System (INIS)
Mihara, Takashi
2015-01-01
Steganography is the hiding of secret information within innocent-looking information (e.g., text, audio, image, video, etc.). A quantum version of steganography is a method based on quantum physics. In this paper, we propose quantum steganography by combining quantum error-correcting codes with prior entanglement. In many steganographic techniques, embedding secret messages in error-correcting codes may cause damage to them if the embedded part is corrupted. However, our proposed steganography can separately create secret messages and the content of cover messages. The intrinsic form of the cover message does not have to be modified for embedding secret messages. - Highlights: • Our steganography combines quantum error-correcting codes with prior entanglement. • Our steganography can separately create secret messages and the content of cover messages. • Errors in cover messages do not affect the recovery of secret messages. • We embed a secret message in the Steane code as an example of our steganography.
Wang, Lai-Guo; Cao, Zheng-Yi; Qian, Xu; Zhu, Lin; Cui, Da-Peng; Li, Ai-Dong; Wu, Di
2017-02-22
Al2O3- or HfO2-based nanocomposite structures with embedded CoPtx nanocrystals (NCs) on TiN-coated Si substrates have been prepared by a combination of thermal atomic layer deposition (TALD) and plasma-enhanced ALD (PEALD) for resistive random access memory (RRAM) applications. The impact of CoPtx NCs and their average size/density on the resistive switching properties has been explored. Compared to the control sample without CoPtx NCs, ALD-derived Pt/oxide/100-cycle CoPtx NCs/TiN/SiO2/Si exhibits typical bipolar, reliable, and reproducible resistive switching behavior, including a sharp distribution of RRAM parameters, smaller set/reset voltages, a stable resistance ratio (≥10^2) of OFF/ON states, better switching endurance up to 10^4 cycles, and longer data retention over 10^5 s. A possible resistive switching mechanism based on nanocomposite structures of oxide/CoPtx NCs has been proposed. The dominant conduction mechanisms in the low- and high-resistance states of oxide-based device units with embedded CoPtx NCs are Ohmic behavior and space-charge-limited current, respectively. The insertion of CoPtx NCs can effectively improve the formation of conducting filaments due to the CoPtx NC-enhanced electric field intensity. Besides excellent resistive switching performance, the nanocomposite structures simultaneously present ferromagnetic properties. This work provides a flexible pathway, combining PEALD and TALD compatible with state-of-the-art Si-based technology, toward multifunctional electronic device applications containing RRAM.
Prior-to-Exam: What Activities Enhance Performance?
Rhoads, C. J.; Healy, Therese
2013-01-01
Can instructors improve their students' performance by recommending an activity just prior to taking an exam? In this study, college students were randomly assigned to one of three treatment groups (study, exercise, or meditation) or a control group. Each group was given two different types of tests; a traditional concept exam, and a non-traditional…
Random matrices and random difference equations
International Nuclear Information System (INIS)
Uppuluri, V.R.R.
1975-01-01
Mathematical models leading to products of random matrices and random difference equations are discussed. A one-compartment model with random behavior is introduced, and it is shown how the average concentration in the discrete-time model converges to the exponential function. This is of relevance to understanding how radioactivity gets trapped in bone structure in blood-bone systems. The ideas are then generalized to two-compartment models and mammillary systems, where products of random matrices appear in a natural way. The appearance of products of random matrices in applications in demography and control theory is considered. Then random sequences motivated by the following problems are studied: constant pulsing and random decay models, random pulsing and constant decay models, and random pulsing and random decay models.
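The convergence of the average concentration to an exponential can be sketched with a minimal Monte Carlo model (illustrative parameters, not from the report): for a one-compartment model x_{n+1} = a_n x_n with independent random decay factors a_n, the mean satisfies E[x_n] = (E[a])^n x_0, i.e. exponential decay in the number of steps.

```python
import random

def mean_concentration(a_low, a_high, steps, trials=20000, x0=1.0, seed=1):
    """Monte Carlo average of x_steps for the one-compartment model
    x_{n+1} = a_n * x_n with a_n drawn uniformly from [a_low, a_high]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = x0
        for _ in range(steps):
            x *= rng.uniform(a_low, a_high)
        total += x
    return total / trials

# Independence gives E[x_n] = (E[a])**n: here E[a] = 0.8 for a ~ U(0.6, 1.0)
expected = 0.8 ** 5
simulated = mean_concentration(0.6, 1.0, 5)
```

The simulated mean tracks the exponential (E[a])^n closely even though individual trajectories are products of random factors.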
Sparse Multivariate Modeling: Priors and Applications
DEFF Research Database (Denmark)
Henao, Ricardo
This thesis presents a collection of statistical models that attempt to take advantage of every piece of prior knowledge available to provide the models with as much structure as possible. The main motivation for introducing these models is interpretability since in practice we want to be able...... a general yet self-contained description of every model in terms of generative assumptions, interpretability goals, probabilistic formulation and target applications. Case studies, benchmark results and practical details are also provided as appendices published elsewhere, containing reprints of peer...
Peng, Cheng-Jien
The purpose of this study is to assess the feasibility of applying barium strontium titanate (BST) thin films to ultra-large-scale integration (ULSI) dynamic random access memory (DRAM) capacitors through an understanding of the relationships among processing, structure, and electrical properties. Thin films of BST were deposited by the multi-ion-beam reactive sputtering (MIBERS) technique and the metallo-organic decomposition (MOD) method. Processing parameters such as Ba/Sr ratio, substrate temperature, annealing temperature and time, film thickness, and doping concentration were correlated with the structure and electrical properties of the films. Some effects of secondary low-energy oxygen ion bombardment were also examined. Microstructures of BST thin films could be classified into two types: (a) Type I structures, with multiple grains through the film thickness, for amorphous as-grown films after high-temperature annealing, and (b) columnar structures (Type II), which remained even after high-temperature annealing, for well-crystallized films deposited at high substrate temperatures. Type I films showed Curie-von Schweidler response, while Type II films showed Debye-type behavior. Type I behavior may be attributed to the presence of a high density of disordered grain boundaries. Two types of current-voltage characteristics could be seen in non-bombarded films depending on the chemistry of the films (doped or undoped) and the substrate temperature during deposition. Only the MIBERS films doped with a high donor concentration and deposited at high substrate temperature showed space-charge-limited conduction (SCLC) with discrete shallow traps embedded in a trap-distributed background at high electric field. All other non-bombarded films, including MOD films, showed trap-distributed SCLC behavior with a slope of ~7.5-10 due to the presence of grain boundaries through the film thickness or traps induced by unavoidable acceptor impurities in the films. Donor-doping could
Directory of Open Access Journals (Sweden)
Deyo Richard A
2009-10-01
Full Text Available Abstract Background Chronic back pain is a major public health problem and the primary reason patients seek massage treatment. Despite the growing use of massage for chronic low back pain, there have been few studies of its effectiveness. This trial will be the first evaluation of the effectiveness of relaxation massage for chronic back pain and the first large trial of a focused structural form of massage for this condition. Methods and Design A total of 399 participants (133 in each of three arms) between 20 and 65 years of age who have low back pain lasting at least 3 months will be recruited from an integrated health care delivery system. They will be randomized to one of two types of massage ("focused structural massage" or "relaxation massage"), or continued usual medical care. Ten massage treatments will be provided over 10 weeks. The primary outcomes, standard measures of dysfunction and bothersomeness of low back pain, will be assessed at baseline and after 10, 26, and 52 weeks by telephone interviewers masked to treatment assignment. General health status, satisfaction with back care, days of back-related disability, perceived stress, and use and costs of healthcare services for back pain will also be measured. Outcomes across assigned treatment groups will be compared using generalized estimating equations, accounting for participant correlation and adjusted for baseline value, age, and sex. For both primary outcome measures, this trial will have at least 85% power to detect the presence of a minimal clinically significant difference among the three treatment groups and 91% power for pairwise comparisons. Secondary analyses will compare the proportions of participants in each group that improve by a clinically meaningful amount. Conclusion Results of this trial will help clarify the value of two types of massage therapy for chronic low back pain. Trial registration ClinicalTrials.gov NCT00371384.
Bhavnani, Sanjeev P; Sola, Srikanth; Adams, David; Venkateshvaran, Ashwin; Dash, P K; Sengupta, Partho P
2018-04-01
This study sought to determine whether mobile health (mHealth) device assessments used as clinical decision support tools at the point-of-care can reduce the time to treatment and improve long-term outcomes among patients with rheumatic and structural heart diseases (SHD). Newly developed smartphone-connected mHealth devices represent promising methods to diagnose common diseases in resource-limited areas; however, the impact of technology-based care on long-term outcomes has not been rigorously evaluated. A total of 253 patients with SHD were randomized to an initial diagnostic assessment with wireless devices in mHealth clinics (n = 139) or to standard care (n = 114) in India. mHealth clinics were equipped with point-of-care devices including pocket echocardiography, smartphone-connected electrocardiogram, blood pressure and oxygen measurements, activity monitoring, and portable brain natriuretic peptide laboratory testing. All individuals underwent comprehensive transthoracic echocardiography to assess the severity of SHD. The primary endpoint was the time to referral for therapy with percutaneous valvuloplasty or surgical valve replacement. Secondary endpoints included the probability of a cardiovascular hospitalization and/or death over 1 year. An initial mHealth assessment was associated with a shorter time to referral for valvuloplasty and/or valve replacement (83 ± 79 days vs. 180 ± 101 days). (Mobile Health Device Assessments in Modern Structural Heart Disease Clinics; NCT02881398.) Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
Tan, Shurun
The objective of my research is two-fold: to study wave scattering phenomena in dense volumetric random media and in periodic wave functional materials. For the first part, the goal is to use the microwave remote sensing technique to monitor water resources and global climate change. Towards this goal, I study the microwave scattering behavior of snow and ice sheet. For snowpack scattering, I have extended the traditional dense media radiative transfer (DMRT) approach to include cyclical corrections that give rise to backscattering enhancements, enabling the theory to model combined active and passive observations of snowpack using the same set of physical parameters. Besides DMRT, a fully coherent approach is also developed by solving Maxwell's equations directly over the entire snowpack including a bottom half space. This revolutionary new approach produces consistent scattering and emission results, and demonstrates backscattering enhancements and coherent layer effects. The birefringence in anisotropic snow layers is also analyzed by numerically solving Maxwell's equations directly. The effects of rapid density fluctuations on polar ice sheet emission in the 0.5-2.0 GHz spectrum are examined using both fully coherent and partially coherent layered media emission theories, which agree with each other and are distinct from incoherent approaches. For the second part, the goal is to develop integral equation based methods to solve wave scattering in periodic structures such as photonic crystals and metamaterials that can be used for broadband simulations. Built on the concept of modal expansion of the periodic Green's function, we have developed the method of broadband Green's function with low wavenumber extraction (BBGFL), where a low wavenumber component is extracted, resulting in a non-singular and fast-converging remaining part with simple wavenumber dependence. We have applied the technique to simulate band diagrams and modal solutions of periodic structures, and to
Cherkin, Daniel C; Sherman, Karen J; Kahn, Janet; Erro, Janet H; Deyo, Richard A; Haneuse, Sebastien J; Cook, Andrea J
2009-10-20
Chronic back pain is a major public health problem and the primary reason patients seek massage treatment. Despite the growing use of massage for chronic low back pain, there have been few studies of its effectiveness. This trial will be the first evaluation of the effectiveness of relaxation massage for chronic back pain and the first large trial of a focused structural form of massage for this condition. A total of 399 participants (133 in each of three arms) between the ages of 20 and 65 years of age who have low back pain lasting at least 3 months will be recruited from an integrated health care delivery system. They will be randomized to one of two types of massage ("focused structural massage" or "relaxation massage"), or continued usual medical care. Ten massage treatments will be provided over 10 weeks. The primary outcomes, standard measures of dysfunction and bothersomeness of low back pain, will be assessed at baseline and after 10, 26, and 52 weeks by telephone interviewers masked to treatment assignment. General health status, satisfaction with back care, days of back-related disability, perceived stress, and use and costs of healthcare services for back pain will also be measured. Outcomes across assigned treatment groups will be compared using generalized estimating equations, accounting for participant correlation and adjusted for baseline value, age, and sex. For both primary outcome measures, this trial will have at least 85% power to detect the presence of a minimal clinically significant difference among the three treatment groups and 91% power for pairwise comparisons. Secondary analyses will compare the proportions of participants in each group that improve by a clinically meaningful amount. Results of this trial will help clarify the value of two types of massage therapy for chronic low back pain.
DEFF Research Database (Denmark)
Asmussen, J.C.; Ibrahim, S.R.; Brincker, Rune
Abstract This paper demonstrates how to use the Random Decrement (RD) technique for identification of linear structures subjected to ambient excitation. The theory behind the technique will be presented and guidelines how to choose the different variables will be given. This is done by introducing
DEFF Research Database (Denmark)
Asmussen, J. C.; Ibrahim, R.; Brincker, Rune
1998-01-01
This paper demonstrates how to use the Random Decrement (RD) technique for identification of linear structures subjected to ambient excitation. The theory behind the technique will be presented and guidelines how to choose the different variables will be given. This is done by introducing a new...
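The core of the RD technique can be sketched as follows (a minimal level-crossing variant with synthetic data; the papers treat the choice of triggering condition in more depth): segments starting wherever the response is at or above a trigger level are averaged, so the deterministic free-decay part survives while the random part averages out.

```python
def rd_signature(x, level, length):
    """Random Decrement signature: average all `length`-sample segments of
    the response x that start where x is at or above the trigger `level`."""
    segs = [x[i:i + length] for i in range(len(x) - length + 1) if x[i] >= level]
    n = len(segs)
    return [sum(col) / n for col in zip(*segs)]

# Synthetic response: two segments start at the trigger level 2
x = [0, 2, 0, -2, 0, 2, 0, -2, 0]
print(rd_signature(x, 2, 3))   # -> [2.0, 0.0, -2.0]
```

On real ambient-vibration records, the averaged signature approximates the structure's free decay, from which modal parameters can then be identified.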
Nepal, Niraj K.; Ruzsinszky, Adrienn; Bates, Jefferson E.
2018-03-01
The ground state structural and energetic properties for rocksalt and cesium chloride phases of the cesium halides were explored using the random phase approximation (RPA) and beyond-RPA methods to benchmark the nonempirical SCAN meta-GGA and its empirical dispersion corrections. The importance of nonadditivity and higher-order multipole moments of dispersion in these systems is discussed. RPA generally predicts the equilibrium volume for these halides within 2.4% of the experimental value, while beyond-RPA methods utilizing the renormalized adiabatic LDA (rALDA) exchange-correlation kernel are typically within 1.8%. The zero-point vibrational energy is small and shows that the stability of these halides is purely due to electronic correlation effects. The rAPBE kernel as a correction to RPA overestimates the equilibrium volume and could not predict the correct phase ordering in the case of cesium chloride, while the rALDA kernel consistently predicted results in agreement with the experiment for all of the halides. However, due to its reasonable accuracy with lower computational cost, SCAN+rVV10 proved to be a good alternative to the RPA-like methods for describing the properties of these ionic solids.
High speed true random number generator with a new structure of coarse-tuning PDL in FPGA
Fang, Hongzhen; Wang, Pengjun; Cheng, Xu; Zhou, Keji
2018-03-01
A metastability-based TRNG (true random number generator) is presented in this paper and implemented in FPGA. The metastable state of a D flip-flop is tunable through a two-stage PDL (programmable delay line). With the proposed coarse-tuning PDL structure, the TRNG core does not require extra placement and routing to ensure its entropy. Furthermore, the core needs fewer stages of coarse-tuning PDL at higher operating frequency, and thus saves more resources in FPGA. The designed TRNG achieves 25 Mbps @ 100 MHz throughput after proper post-processing, which is several times higher than previous FPGA-based TRNGs. Moreover, the robustness of the system is enhanced with the adoption of a feedback system. The quality of the designed TRNG is verified by the NIST (National Institute of Standards and Technology) tests and also passes class P1 of the AIS-20/31 test suite. Project supported by the S&T Plan of Zhejiang Provincial Science and Technology Department (No. 2016C31078), the National Natural Science Foundation of China (Nos. 61574041, 61474068, 61234002), and the K.C. Wong Magna Fund in Ningbo University, China.
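As a generic example of the kind of post-processing such designs rely on (the classic von Neumann corrector, not necessarily the scheme used in this particular TRNG): raw bit pairs 01 and 10 map to 0 and 1 respectively, while 00 and 11 are discarded, which removes bias from independent raw bits at the cost of throughput.

```python
def von_neumann(bits):
    """Von Neumann corrector: map bit pairs 01->0, 10->1, discard 00/11.
    Produces unbiased output from independent (possibly biased) raw bits."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:       # keep only the unequal pairs
            out.append(a)
    return out

raw = [1, 1, 0, 1, 0, 0, 1, 0]      # biased raw stream (illustrative)
print(von_neumann(raw))             # -> [0, 1]
```

The throughput cost is visible here: eight raw bits yield only two corrected bits, which is why hardware designs often prefer lighter XOR-based or feedback post-processing.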
International Nuclear Information System (INIS)
Ootori, Yasuki; Ishikawa, Hiroyuki; Takeda, Tomoyoshi
2004-01-01
In the JEAG4601-1987 (Japan Electric Association Guide for earthquake-resistant design), either the conventional deterministic method or a probabilistic method is used for evaluating the stability of ground foundations and surrounding slopes in nuclear power plants. The deterministic method, in which soil properties of 'mean ± coefficient x standard deviation' are adopted for the calculations, has generally been used in the design stage to date. On the other hand, the probabilistic method, in which the soil properties are assumed to have probabilistic distributions, is described as a future method. The deterministic method facilitates the evaluation; however, it is necessary to clarify the relationship between the deterministic and probabilistic methods. In order to investigate this relationship, a simple model that can take into account the dynamic effect of structures, and a simplified method for taking spatial randomness into account, are proposed in this study. As a result, it is found that the shear strength of soil is the most important factor for the stability of grounds and slopes, and that the probability of falling below the safety factor evaluated with soil properties of 'mean - 1.0 x standard deviation' by the deterministic method is much lower. (author)
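The relationship between the two methods can be illustrated with a minimal Monte Carlo sketch (hypothetical numbers, not values from the study): if shear strength is normally distributed, the probability that a realization falls below the deterministic design value 'mean - 1.0 x standard deviation' is about 16%, the one-sided one-sigma tail.

```python
import random

def prob_below(mu, sigma, level, trials=200000, seed=7):
    """Monte Carlo estimate of P(X < level) for X ~ N(mu, sigma)."""
    rng = random.Random(seed)
    return sum(rng.gauss(mu, sigma) < level for _ in range(trials)) / trials

# Probability that a random shear strength falls below the deterministic
# 'mean - 1.0 * standard deviation' design value (illustrative units):
p = prob_below(100.0, 10.0, 100.0 - 1.0 * 10.0)
```

This is the sense in which a design that passes the deterministic check still carries a quantifiable, and typically small, probability of the true strength being lower.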
Anomalous Anticipatory Responses in Networked Random Data
International Nuclear Information System (INIS)
Nelson, Roger D.; Bancel, Peter A.
2006-01-01
We examine an 8-year archive of synchronized, parallel time series of random data from a world spanning network of physical random event generators (REGs). The archive is a publicly accessible matrix of normally distributed 200-bit sums recorded at 1 Hz which extends from August 1998 to the present. The primary question is whether these data show non-random structure associated with major events such as natural or man-made disasters, terrible accidents, or grand celebrations. Secondarily, we examine the time course of apparently correlated responses. Statistical analyses of the data reveal consistent evidence that events which strongly affect people engender small but significant effects. These include suggestions of anticipatory responses in some cases, leading to a series of specialized analyses to assess possible non-random structure preceding precisely timed events. A focused examination of data collected around the time of earthquakes with Richter magnitude 6 and greater reveals non-random structure with a number of intriguing, potentially important features. Anomalous effects in the REG data are seen only when the corresponding earthquakes occur in populated areas. No structure is found if they occur in the oceans. We infer that an important contributor to the effect is the relevance of the earthquake to humans. Epoch averaging reveals evidence for changes in the data some hours prior to the main temblor, suggestive of reverse causation
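Epoch averaging of the kind mentioned above can be sketched as follows (a minimal illustration with made-up data, not the archive's actual analysis pipeline): fixed-length windows aligned on event times are averaged sample-by-sample, so structure that is time-locked to the events survives while unrelated fluctuations cancel.

```python
def epoch_average(series, event_times, pre, post):
    """Average windows of `series` aligned on `event_times` (indices),
    spanning `pre` samples before to `post` samples after each event.
    Windows that would run off either end of the series are skipped."""
    epochs = []
    for t in event_times:
        if t - pre >= 0 and t + post <= len(series):
            epochs.append(series[t - pre:t + post])
    n = len(epochs)
    return [sum(col) / n for col in zip(*epochs)]

series = [0, 0, 1, 0, 0, 3, 0, 0]
print(epoch_average(series, [2, 5], pre=1, post=2))  # -> [0.0, 2.0, 0.0]
```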
Palter, Vanessa N; Orzech, Neil; Reznick, Richard K; Grantcharov, Teodor P
2013-02-01
To develop and validate an ex vivo comprehensive curriculum for a basic laparoscopic procedure. Although simulators have been well validated as tools to teach technical skills, their integration into comprehensive curricula is lacking. Moreover, neither the effect of ex vivo training on learning curves in the operating room (OR), nor the effect on nontechnical proficiency, has been investigated. This randomized single-blinded prospective trial allocated 20 surgical trainees to a structured training and assessment curriculum (STAC) group or conventional residency training. The STAC consisted of case-based learning, proficiency-based virtual reality training, laparoscopic box training, and OR participation. After completion of the intervention, all participants performed 5 sequential laparoscopic cholecystectomies in the OR. The primary outcome measure was the difference in technical performance between the 2 groups during the first laparoscopic cholecystectomy. Secondary outcome measures included differences with respect to learning curves in the OR, technical proficiency of each sequential laparoscopic cholecystectomy, and nontechnical skills. Residents in the STAC group outperformed residents in the conventional group in the first (P = 0.004), second (P = 0.036), third (P = 0.021), and fourth (P = 0.023) laparoscopic cholecystectomies. The conventional group demonstrated a significant learning curve in the OR (P = 0.015) in contrast to the STAC group (P = 0.032). Residents in the STAC group also had significantly higher nontechnical skills (P = 0.027). Participating in the STAC shifted the learning curve for a basic laparoscopic procedure from the operating room into the simulation laboratory. STAC-trained residents had superior technical proficiency in the OR and nontechnical skills compared with conventionally trained residents. (Study registration ID: NCT01560494.)
Bušs, Ginters
2009-01-01
Bayesian inference requires an analyst to set priors. Setting the right prior is crucial for precise forecasts. This paper analyzes how the optimal prior changes when an economy is hit by a recession. For this task, an autoregressive distributed lag (ADL) model is chosen. The results show that a sharp economic slowdown changes the optimal prior in two directions. First, it changes the structure of the optimal weight prior, setting a smaller weight on the lagged dependent variable compared to varia...
International Nuclear Information System (INIS)
Mamuris, Z.; Dumont, J.; Dutrillaux, B.; Aurias, A.
1989-01-01
A cytogenetic study of 14 patients with secondary acute nonlymphocytic leukemia (S-ANLL) with prior treatment for breast cancer is reported. The chromosomes recurrently involved in numerical or structural anomalies are chromosomes 7, 5, 17, and 11, in decreasing order of frequency. The distribution of the anomalies detected in this sample of patients is similar to that observed in published cases with prior breast or other solid tumors, though anomalies of chromosome 11 were not pointed out, but it significantly differs from that of the S-ANLL with prior hematologic malignancies. This difference is principally due to a higher involvement of chromosome 7 in patients with prior hematologic malignancies and of chromosomes 11 and 17 in patients with prior solid tumors. A genetic determinism involving abnormal recessive alleles located on chromosomes 5, 7, 11, and 17 uncovered by deletions of the normal homologs may be a cause of S-ANLL. The difference between patients with prior hematologic malignancies or solid tumors may be explained by different constitutional mutations of recessive genes in the two groups of patients
Whitmire, Jeannette M; Merrell, D Scott
2017-01-01
Mutagenesis is a valuable tool to examine the structure-function relationships of bacterial proteins. As such, a wide variety of mutagenesis techniques and strategies have been developed. This chapter details a selection of random mutagenesis methods and site-directed mutagenesis procedures that can be applied to an array of bacterial species. Additionally, the direct application of the techniques to study the Helicobacter pylori Ferric Uptake Regulator (Fur) protein is described. The varied approaches illustrated herein allow the robust investigation of the structural-functional relationships within a protein of interest.
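The logic of random substitution mutagenesis can be sketched in silico (a toy illustration of the concept, not one of the laboratory protocols detailed in the chapter): a fixed number of distinct positions are chosen at random and each is substituted with a different base.

```python
import random

def random_mutagenesis(seq, n_mut, seed=0):
    """Introduce exactly n_mut random base substitutions at distinct
    positions of a DNA sequence (in-silico toy, not a wet-lab method)."""
    rng = random.Random(seed)
    s = list(seq)
    for pos in rng.sample(range(len(s)), n_mut):
        # always substitute with a *different* base at this position
        s[pos] = rng.choice([b for b in "ACGT" if b != s[pos]])
    return "".join(s)

mutant = random_mutagenesis("ACGTACGTAC", 3)
```

Because positions are sampled without replacement and each substitution differs from the original base, the mutant differs from the parent at exactly n_mut positions.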
Divergent Priors and well Behaved Bayes Factors
R.W. Strachan (Rodney); H.K. van Dijk (Herman)
2011-01-01
Divergent priors are improper when defined on unbounded supports. Bartlett's paradox has been taken to imply that using improper priors results in ill-defined Bayes factors, preventing model comparison by posterior probabilities. However many improper priors have attractive properties
International Nuclear Information System (INIS)
Perotin, L.; Granger, S.
1997-01-01
In order to improve the prediction of wear problems due to flow-induced vibration in PWR components, an inverse method for identifying a distributed random excitation acting on a dynamical system has been developed at EDF. The method, whose applications extend far beyond the flow-induced vibration field, has been implemented in the MEIDEE software and is presented here. (author)
Weight reduction intervention for obese infertile women prior to IVF
DEFF Research Database (Denmark)
Einarsson, Snorri; Bergh, Christina; Friberg, Britt
2017-01-01
STUDY QUESTION: Does an intensive weight reduction programme prior to IVF increase live birth rates for infertile obese women? SUMMARY ANSWER: An intensive weight reduction programme resulted in a large weight loss but did not substantially affect live birth rates in obese women scheduled for IVF… in infertile obese women. STUDY DESIGN, SIZE, DURATION: A prospective, multicentre, randomized controlled trial was performed between 2010 and 2016 in the Nordic countries. In total, 962 women were assessed for eligibility and 317 women were randomized. Computerized randomization with concealed allocation… in the weight reduction group reaching BMI ≤ 25 kg/m2 or reaching a weight loss of at least five BMI units to the IVF only group. No statistical differences in live birth rates between the groups in either subgroup analysis were found. LIMITATIONS, REASON FOR CAUTION: The study was not powered to detect a small…
Advanced prior modeling for 3D bright field electron tomography
Sreehari, Suhas; Venkatakrishnan, S. V.; Drummy, Lawrence F.; Simmons, Jeffrey P.; Bouman, Charles A.
2015-03-01
Many important imaging problems in material science involve reconstruction of images containing repetitive non-local structures. Model-based iterative reconstruction (MBIR) could in principle exploit such redundancies through the selection of a log prior probability term. However, in practice, determining such a log prior term that accounts for the similarity between distant structures in the image is quite challenging. Much progress has been made in the development of denoising algorithms like non-local means and BM3D, and these are known to successfully capture non-local redundancies in images. But the fact that these denoising operations are not explicitly formulated as cost functions makes it unclear as to how to incorporate them in the MBIR framework. In this paper, we formulate a solution to bright field electron tomography by augmenting the existing bright field MBIR method to incorporate any non-local denoising operator as a prior model. We accomplish this using a framework we call plug-and-play priors that decouples the log likelihood and the log prior probability terms in the MBIR cost function. We specifically use 3D non-local means (NLM) as the prior model in the plug-and-play framework, and showcase high quality tomographic reconstructions of a simulated aluminum spheres dataset, and two real datasets of aluminum spheres and ferritin structures. We observe that streak and smear artifacts are visibly suppressed, and that edges are preserved. Also, we report lower RMSE values compared to the conventional MBIR reconstruction using qGGMRF as the prior model.
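The plug-and-play decoupling described in this abstract can be sketched in a few lines of ADMM: the data-fit (log likelihood) step is a quadratic solve, and the prior step is simply a call to an off-the-shelf denoiser. The sketch below is illustrative only: a toy 1-D deblurring problem stands in for bright field tomography, and a moving-average smoother stands in for 3D non-local means.

```python
import numpy as np

# Illustrative plug-and-play ADMM: toy 1-D deblurring, with a moving-average
# smoother standing in for the NLM denoiser used in the paper.
rng = np.random.default_rng(0)
n = 64
x_true = np.zeros(n)
x_true[20:40] = 1.0                                   # piecewise-constant signal
A = np.eye(n) + 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)  # blur operator
y = A @ x_true + 0.05 * rng.standard_normal(n)        # noisy measurement

def denoise(v, width=3):
    """Stand-in prior step: any off-the-shelf denoiser can be plugged in here."""
    return np.convolve(v, np.ones(width) / width, mode="same")

rho = 1.0
x = np.zeros(n); v = np.zeros(n); u = np.zeros(n)
AtA, Aty = A.T @ A, A.T @ y
for _ in range(50):
    # Likelihood step: quadratic data-fit solve, decoupled from the prior
    x = np.linalg.solve(AtA + rho * np.eye(n), Aty + rho * (v - u))
    v = denoise(x + u)                                # prior step = denoiser call
    u = u + x - v                                     # dual (consensus) update

reconstruction_error = np.linalg.norm(x - x_true)
```

The key design point is visible in the loop: the denoiser never has to be expressed as a cost-function term, which is exactly what makes non-explicit priors such as NLM or BM3D usable inside MBIR.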
International Nuclear Information System (INIS)
Maestrini, A.P.
1979-04-01
Several problems related to the application of the theory of random processes by means of state variables are studied. The well-known equations that define the propagation of the mean and the variance for linear and non-linear systems are first presented. The Monte Carlo method is then used to determine the applicability of the hypothesis of a normally distributed output in the case of linear systems subjected to non-Gaussian excitations. Finally, attention is focused on the properties of linear filters and modulation functions proposed to simulate seismic excitations as non-stationary random processes. Acceleration spectra obtained by multiplying rms spectra by a constant factor are compared with design spectra suggested by several authors for various soil conditions. In every case, filter properties are given. (Author)
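The moment-propagation equations mentioned in this abstract are easy to illustrate on the simplest case, a scalar linear system driven by Gaussian noise: the exact recursions m ← a·m and p ← a²·p + q can be checked against a Monte Carlo simulation. All parameter values below are illustrative.

```python
import numpy as np

# Exact mean/variance propagation for x[k+1] = a*x[k] + w[k], w ~ N(0, q),
# checked against Monte Carlo (the technique discussed in the abstract).
a, q = 0.8, 0.5                 # state coefficient, excitation variance
m, p = 1.0, 0.0                 # initial mean and variance
for _ in range(20):             # exact moment propagation
    m = a * m
    p = a * a * p + q

rng = np.random.default_rng(1)
x = np.full(200_000, 1.0)
for _ in range(20):             # Monte Carlo with Gaussian excitation
    x = a * x + rng.normal(0.0, np.sqrt(q), x.size)
```

For a linear system the two agree regardless of sample size; the abstract's Monte Carlo study concerns the harder question of whether the output stays near-Gaussian under non-Gaussian excitation, which this sketch does not address.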
Energy Technology Data Exchange (ETDEWEB)
Carnaby-Mann, Giselle, E-mail: gmann@phhp.ufl.edu [Department of Behavioral Science and Community Health, University of Florida, Gainesville, FL (United States); Crary, Michael A. [Department of Speech Language and Hearing Sciences, University of Florida, Gainesville, FL (United States); Schmalfuss, Ilona [Department of Radiology, North Florida/South Georgia Veterans Health System, Gainesville, FL (United States); Amdur, Robert [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States)
2012-05-01
Purpose: Dysphagia after chemoradiotherapy is common. The present randomized clinical trial studied the effectiveness of preventative behavioral intervention for dysphagia compared with the 'usual care.' Methods and Materials: A total of 58 head-and-neck cancer patients treated with chemoradiotherapy were randomly assigned to usual care, sham swallowing intervention, or active swallowing exercises (pharyngocise). The intervention arms were treated daily during chemoradiotherapy. The primary outcome measure was muscle size and composition (determined by T2-weighted magnetic resonance imaging). The secondary outcomes included functional swallowing ability, dietary intake, chemosensory function, salivation, nutritional status, and the occurrence of dysphagia-related complications. Results: The swallowing musculature (genioglossus, hyoglossus, and mylohyoid) demonstrated less structural deterioration in the active treatment arm. The functional swallowing, mouth opening, chemosensory acuity, and salivation rate deteriorated less in the pharyngocise group. Conclusion: Patients completing a program of swallowing exercises during cancer treatment demonstrated superior muscle maintenance and functional swallowing ability.
Fractional Gaussian noise: Prior specification and model comparison
Sørbye, Sigrunn Holbek; Rue, Haavard
2017-01-01
Fractional Gaussian noise (fGn) is a stationary stochastic process used to model antipersistent or persistent dependency structures in observed time series. Properties of the autocovariance function of fGn are characterised by the Hurst exponent (H), which, in Bayesian contexts, typically has been assigned a uniform prior on the unit interval. This paper argues why a uniform prior is unreasonable and introduces the use of a penalised complexity (PC) prior for H. The PC prior is computed to penalise divergence from the special case of white noise and is invariant to reparameterisations. An immediate advantage is that the exact same prior can be used for the autocorrelation coefficient ϕ of a first-order autoregressive process AR(1), as this model also reflects a flexible version of white noise. Within the general setting of latent Gaussian models, this allows us to compare an fGn model component with AR(1) using Bayes factors, avoiding the confounding effects of prior choices for the two hyperparameters H and ϕ. Among others, this is useful in climate regression models where inference for underlying linear or smooth trends depends heavily on the assumed noise model.
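The role of H can be made concrete through the autocovariance function of unit-variance fGn, γ(k) = ½(|k+1|^{2H} − 2|k|^{2H} + |k−1|^{2H}): H = 0.5 recovers white noise (the base model the PC prior penalises divergence from), H > 0.5 gives persistence and H < 0.5 antipersistence. A minimal numerical check:

```python
import numpy as np

# Autocovariance of unit-variance fractional Gaussian noise at lag k for
# Hurst exponent H; H = 0.5 reduces exactly to white noise.
def fgn_acf(k, H):
    k = np.abs(k)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + np.abs(k - 1) ** (2 * H))

lags = np.arange(0, 5)
print(fgn_acf(lags, 0.5))   # white noise: variance 1, zero at all positive lags
print(fgn_acf(1, 0.9))      # persistent: positive lag-1 autocovariance
print(fgn_acf(1, 0.2))      # antipersistent: negative lag-1 autocovariance
```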
Penalised Complexity Priors for Stationary Autoregressive Processes
Sørbye, Sigrunn Holbek; Rue, Haavard
2017-01-01
The autoregressive (AR) process of order p, AR(p), is a central model in time series analysis. A Bayesian approach requires the user to define a prior distribution for the coefficients of the AR(p) model. Although it is easy to write down some prior, it is not at all obvious how to understand and interpret the prior distribution, to ensure that it behaves according to the users' prior knowledge. In this article, we approach this problem using the recently developed ideas of penalised complexity (PC) priors. These priors have important properties like robustness and invariance to reparameterisations, as well as a clear interpretation. A PC prior is computed based on specific principles, where model component complexity is penalised in terms of deviation from simple base model formulations. In the AR(1) case, we discuss two natural base model choices, corresponding to either independence in time or no change in time. The latter case is illustrated in a survival model with possible time-dependent frailty. For higher-order processes, we propose a sequential approach, where the base model for AR(p) is the corresponding AR(p-1) model expressed using the partial autocorrelations. The properties of the new prior distribution are compared with the reference prior in a simulation study.
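The distance on which a PC prior is built can be computed directly for the AR(1) case with the independence base model: d(φ) = √(2·KLD) between the stationary AR(1) correlation structure of length n and white noise. The sketch below evaluates this numerically (series length and φ values are illustrative; marginal variances are fixed to 1 so only the correlation structure differs).

```python
import numpy as np

# PC-prior distance from the white-noise base model for a stationary AR(1):
# d(phi) = sqrt(2 * KLD(N(0, R_phi) || N(0, I))), R_phi the AR(1) correlations.
def pc_distance(phi, n=50):
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    cov = phi ** lags                         # unit-variance AR(1) correlation matrix
    _, logdet = np.linalg.slogdet(cov)
    kld = 0.5 * (np.trace(cov) - n - logdet)  # Gaussian KL divergence to identity
    return np.sqrt(2.0 * kld)

print(pc_distance(0.0))                       # base model: distance is zero
print(pc_distance(0.3), pc_distance(0.6), pc_distance(0.9))  # monotone in |phi|
```

The PC prior then places an exponential density on this distance scale and transforms it back to φ via the change-of-variables Jacobian, so that larger deviations from the base model are penalised at a constant rate.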
Buerkle, Bernd; Pueth, Julia; Hefler, Lukas A; Tempfer-Bentz, Eva-Katrin; Tempfer, Clemens B
2012-10-01
To compare the skills of performing a shoulder dystocia management algorithm after hands-on training compared with demonstration. We randomized medical students to a 30-minute hands-on (group 1) and a 30-minute demonstration (group 2) training session teaching a standardized shoulder dystocia management scheme on a pelvic training model. Participants were tested with a 22-item Objective Structured Assessment of Technical Skills scoring system after training and 72 hours thereafter. Objective Structured Assessment of Technical Skills scores were the primary outcome. Performance time, self-assessment, confidence, and global rating scale were the secondary outcomes. Statistics were performed using Mann-Whitney U test, χ² test, and multiple linear regression analysis. Two hundred three participants were randomized. Objective Structured Assessment of Technical Skills scores were significantly higher in group 1 (n=103) compared with group 2 (n=100) (17.95±3.14 compared with 15.67±3.18, respectively). At retesting 72 hours later, Objective Structured Assessment of Technical Skills scores were still significantly higher in group 1 (n=67) compared with group 2 (n=60) (18.17±2.76 compared with 14.98±3.03, respectively). Hands-on training was an independent predictor of higher Objective Structured Assessment of Technical Skills scores. Hands-on training helps to achieve a significant improvement of shoulder dystocia management on a pelvic training model. www.ClinicalTrials.gov, NCT01618565. I.
Edgington, Eugene
2007-01-01
Statistical Tests That Do Not Require Random Sampling; Randomization Tests; Numerical Examples; Randomization Tests and Nonrandom Samples; The Prevalence of Nonrandom Samples in Experiments; The Irrelevance of Random Samples for the Typical Experiment; Generalizing from Nonrandom Samples; Intelligibility; Respect for the Validity of Randomization Tests; Versatility; Practicality; Precursors of Randomization Tests; Other Applications of Permutation Tests; Questions and Exercises; Notes; References; Randomized Experiments; Unique Benefits of Experiments; Experimentation without Mani
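The central idea of a randomization test, which requires random assignment but no random sampling from a population, can be sketched in a few lines: under the null hypothesis the group labels are exchangeable, so the observed statistic is referred to its permutation distribution. The data values below are invented purely for illustration.

```python
import numpy as np

# Two-sample randomization test on the difference of means, using random
# permutations of the group labels (a Monte Carlo approximation to the
# exact permutation distribution). Data are illustrative.
rng = np.random.default_rng(42)
treat = np.array([7.1, 8.3, 6.9, 9.0, 7.7])
ctrl = np.array([5.8, 6.2, 7.0, 5.5, 6.4])
observed = treat.mean() - ctrl.mean()

pooled = np.concatenate([treat, ctrl])
n_perm, count = 10_000, 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    if perm[:5].mean() - perm[5:].mean() >= observed:
        count += 1
# Add-one correction counts the observed labeling itself as one arrangement
p_value = (count + 1) / (n_perm + 1)
```

Because the reference distribution is generated by re-randomizing the labels actually assigned, the resulting p-value is valid for the experiment at hand even when the participants are a nonrandom sample.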
International Nuclear Information System (INIS)
Boulatov, D.V.; Kazakov, V.A.
1987-01-01
We investigate the critical properties of a recently proposed exactly soluble Ising model on a planar random dynamical lattice representing a regularization of the zero-dimensional string with internal fermions. The sum over all lattices gives rise to a new quantum degree of freedom - fluctuation of the metric. The whole system of critical exponents is found: α = -1, β = 1/2, γ = 2, δ = 5, νD = 3. To test universality we have used planar graphs with coordination number equal to 4 (Φ⁴-theory graphs) as well as equal to 3 (Φ³-theory graphs, or triangulations). The critical exponents coincide in both cases. (orig.)
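As a quick consistency check, the exponent set quoted in the abstract satisfies the standard scaling relations linking critical exponents:

```python
# The reported exponents obey the Rushbrooke, Widom, and Josephson
# (hyperscaling) relations, reading the quoted product as nu * D = 3.
alpha, beta, gamma, delta = -1.0, 0.5, 2.0, 5.0
nu_D = 3.0

assert alpha + 2 * beta + gamma == 2.0      # Rushbrooke: alpha + 2*beta + gamma = 2
assert gamma == beta * (delta - 1.0)        # Widom: gamma = beta * (delta - 1)
assert nu_D == 2.0 - alpha                  # Josephson hyperscaling: nu*D = 2 - alpha
```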
Murakami, Minoru; Fukuma, Shingo; Ikezoe, Masaya; Iizuka, Chizuko; Izawa, Satoshi; Yamamoto, Yosuke; Yamazaki, Shin; Fukuhara, Shunichi
2016-11-01
Little is known about the effect of education programs on changing attitudes and behaviors of participants and their families toward deceased organ donation. The subjects of this randomized trial were Japanese nursing students who were not previously designated organ donors. They were randomly assigned to either the education program or information booklet group. The program comprised a lecture followed by group discussion and information booklet. The primary outcome was self-reported organ donor designation. Outcomes were assessed by questionnaire. Data of 203 (99.0%) students were analyzed. At study end, seven of 102 students (6.9%) of the program group and one of 101 students (1.0%) of the booklet group consented to donate organs (proportion ratio 6.93 [95% CI 0.87-55.32]). There were significant between-group differences in willingness to consent for donation (54.9% vs 39.6%; proportion ratio 1.39 [95% CI 1.03-1.87]), family discussion (31.4% vs 15.9%; 1.98 [1.16-3.38]), and organ donor designation of family members (11.8% vs 2.0%; 5.94 [1.36-25.88]). No group differences were found in willingness for organ donation by students and family members. Although there were no significant between-group differences in organ donor designation, the program seems to indirectly promote consent to organ donation by their families. © 2016 The Authors. Clinical Transplantation Published by John Wiley & Sons Ltd.
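The headline effect estimate in this abstract can be reproduced from the raw counts (7/102 designated donors in the program group vs 1/101 in the booklet group), using the usual log-scale normal approximation for a ratio of proportions; this is a standard calculation, not necessarily the exact method the authors used.

```python
import math

# Proportion (risk) ratio and Wald 95% CI on the log scale, from raw counts.
a, n1 = 7, 102            # program group: donor designations / total
b, n2 = 1, 101            # booklet group: donor designations / total

pr = (a / n1) / (b / n2)                              # proportion ratio
se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)       # SE of log ratio
ci_low = math.exp(math.log(pr) - 1.96 * se)
ci_high = math.exp(math.log(pr) + 1.96 * se)
# pr ≈ 6.93 with CI ≈ (0.87, 55.3), matching the reported 6.93 [0.87-55.32];
# the CI spans 1, which is why the primary outcome is not significant.
```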
Directory of Open Access Journals (Sweden)
Becker, Christiane
2010-01-01
Background: People with dementia are often inapproachable due to symptoms of their illness. Therefore nurses should establish relationships with dementia patients via their remaining resources and facilitate communication. In order to achieve this, different targeted non-pharmacological interventions are recommended and practiced. However, there is no sufficient evidence about the efficacy of most of these interventions. A number of publications highlight the urgent need for methodologically sound studies so that more robust conclusions may be drawn. Methods/Design: The trial is designed as a cluster randomized controlled trial with 20 nursing homes in Saxony and Saxony-Anhalt (Germany) as the units of randomization. Nursing homes will be randomly allocated into 4 study groups consisting of 5 clusters and 90 residents: snoezelen, structured reminiscence therapy, 10-minutes activation or unstructured verbal communication (control group). The purpose is to determine whether the interventions are effective to reduce apathy in long-term care residents with dementia (N = 360) as the main outcome measure. Assessments will be done at baseline, 3, 6 and 12 months after beginning of the interventions. Discussion: This trial will particularly contribute to the evidence on efficacy of non-pharmacological interventions in dementia care. Trial Registration: ClinicalTrials.gov NCT00653731
Reproducing kernel Hilbert spaces of Gaussian priors
Vaart, van der A.W.; Zanten, van J.H.; Clarke, B.; Ghosal, S.
2008-01-01
We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described
Improving Open Access through Prior Learning Assessment
Yin, Shuangxu; Kawachi, Paul
2013-01-01
This paper explores and presents new data on how to improve open access in distance education through using prior learning assessments. Broadly there are three types of prior learning assessment (PLAR): Type-1 for prospective students to be allowed to register for a course; Type-2 for current students to avoid duplicating work-load to gain…
Quantitative Evidence Synthesis with Power Priors
Rietbergen, C.
2016-01-01
The aim of this thesis is to provide the applied researcher with a practical approach for quantitative evidence synthesis using the conditional power prior that allows for subjective input and thereby provides an alternative to deal with the difficulties associated with the joint power prior
Haslbeck, Friederike Barbara; Bucher, Hans-Ulrich; Bassler, Dirk; Hagmann, Cornelia
2017-01-01
Preterm birth is associated with increased risk of neurological impairment and deficits in cognition, motor function, and behavioral problems. Limited studies indicate that multi-sensory experiences support brain development in preterm infants. Music appears to promote neurobiological processes and neuronal learning in the human brain. Creative music therapy (CMT) is an individualized, interactive therapeutic approach based on the theory and methods of Nordoff and Robbins. CMT may promote brain development in preterm infants via concurrent interaction and meaningful auditory stimulation. We hypothesize that preterm infants who receive creative music therapy during neonatal intensive care admission will have developmental benefits in short- and long-term brain function. A prospective, randomized controlled single-center pilot trial involving 60 clinically stable preterm infants under 32 weeks of gestational age is conducted in preparation for a multi-center trial. Thirty infants each are randomized to either standard neonatal intensive care or standard care with CMT. Music therapy intervention is approximately 20 min in duration three times per week. A trained music therapist sings for the infants in lullaby style, individually entrained and adjusted to the infant's rhythm and affect. Primary objectives of this study are feasibility of protocol implementation and investigating the potential mechanism of efficacy for this new intervention. To examine the effect of this new intervention, non-invasive, quantitative magnetic resonance imaging (MRI) methods at corrected age and standardized neurodevelopmental assessments using the Bayley Scales of Infant and Toddler Development third edition at a corrected age of 24 months and Kaufman Assessment Battery for Children at 5 years will be performed. All assessments will be performed and analyzed by blinded experts. To our knowledge, this is the first randomized controlled clinical trial to systematically examine possible
Siri, Benoît; Berry, Hugues; Cessac, Bruno; Delord, Bruno; Quoy, Mathias
2008-12-01
We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule, including passive forgetting and different timescales, for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on neural network evolution. Furthermore, we show that sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
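The regime this abstract analyzes can be sketched numerically: estimate the largest Lyapunov exponent of a random recurrent tanh network from renormalised products of Jacobian matrices, then apply a generic Hebbian rule with passive forgetting and re-estimate it. Network size, gain, and learning rates below are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(7)
n, gain = 50, 3.0
W = gain * rng.standard_normal((n, n)) / np.sqrt(n)    # random recurrent weights

def largest_lyapunov(W, steps=500):
    """Top Lyapunov exponent of x -> tanh(W x), via Jacobian products."""
    x = 0.1 * rng.standard_normal(n)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    acc = 0.0
    for _ in range(steps):
        u = W @ x
        J = (1.0 - np.tanh(u) ** 2)[:, None] * W       # Jacobian of tanh(W x)
        x = np.tanh(u)
        v = J @ v
        nv = np.linalg.norm(v)
        acc += np.log(nv)
        v /= nv                                        # renormalise tangent vector
    return acc / steps

le_before = largest_lyapunov(W)

# Generic Hebbian rule with passive forgetting: W <- (1 - f) W + eps * x x^T
x = 0.1 * rng.standard_normal(n)
for _ in range(300):
    x = np.tanh(W @ x)
    W = 0.98 * W + 0.001 * np.outer(x, x)

le_after = largest_lyapunov(W)    # forgetting damps the dynamics toward order
```

With these parameters the exponent decreases after learning, consistent with the reported drive from chaos toward a steady state; the paper's finding that sensitivity to a learned pattern peaks when the exponent is near 0 sits precisely at that transition.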
Plaza-Manzano, Gustavo; Vergara-Vila, Marta; Val-Otero, Sandra; Rivera-Prieto, Cristina; Pecos-Martin, Daniel; Gallego-Izquierdo, Tomás; Ferragut-Garcías, Alejandro; Romero-Franco, Natalia
2016-12-01
Recurrent ankle sprains often involve residual symptoms for which subjects often perform proprioceptive or/and strengthening exercises. However, the effectiveness of mobilization to influence important nerve structures due to its anatomical distribution, like the tibial and peroneal nerves, is unclear. To analyze the effects of proprioceptive/strengthening exercises versus the same exercises and manual therapy including mobilizations to influence joint and nerve structures in the management of recurrent ankle sprains. A randomized single-blind controlled clinical trial. Fifty-six patients with recurrent ankle sprains and regular sports practice were randomly assigned to experimental or control group. The control group performed 4 weeks of proprioceptive/strengthening exercises; the experimental group performed 4 weeks of the same exercises combined with manual therapy (mobilizations to influence joint and nerve structures). Pain, self-reported functional ankle instability, pressure pain threshold (PPT), ankle muscle strength, and active range of motion (ROM) were evaluated in the ankle joint before, just after and one month after the interventions. The within-group differences revealed improvements in all of the variables in both groups over time. Between-group differences revealed that the experimental group exhibited lower pain levels and self-reported functional ankle instability and higher PPT, ankle muscle strength and ROM values compared to the control group immediately after the interventions and one month later. A protocol involving proprioceptive and strengthening exercises and manual therapy (mobilizations to influence joint and nerve structures) resulted in greater improvements in pain, self-reported functional joint stability, strength and ROM compared to exercises alone. Copyright © 2016 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Mehonic, Adnan; Buckwell, Mark; Montesi, Luca; Garnett, Leon; Hudziak, Stephen; Kenyon, Anthony J.; Fearn, Sarah; Chater, Richard; McPhail, David
2015-01-01
We present an investigation of structural changes in silicon-rich silicon oxide metal-insulator-metal resistive RAM devices. The observed unipolar switching, which is intrinsic to the bulk oxide material and does not involve movement of metal ions, correlates with changes in the structure of the oxide. We use atomic force microscopy, conductive atomic force microscopy, x-ray photoelectron spectroscopy, and secondary ion mass spectroscopy to examine the structural changes occurring as a result of switching. We confirm that protrusions formed at the surface of samples during switching are bubbles, which are likely to be related to the outdiffusion of oxygen. This supports existing models for valence-change based resistive switching in oxides. In addition, we describe parallel linear and nonlinear conduction pathways and suggest that the conductance quantum, G0, is a natural boundary between the high and low resistance states of our devices.
Sharma, Vivek Kumar; Subramanian, Senthil Kumar; Radhakrishnan, Krishnakumar; Rajendran, Rajathi; Ravindran, Balasubramanian Sulur; Arunachalam, Vinayathan
2017-05-01
Physical inactivity contributes to many health issues. The WHO-recommended physical activity for adolescents encompasses aerobic, resistance, and bone strengthening exercises aimed at achieving health-related physical fitness. Heart rate variability (HRV) and maximal aerobic capacity (VO2max) are considered as noninvasive measures of cardiovascular health. The objective of this study is to compare the effect of structured and unstructured physical training on maximal aerobic capacity and HRV among adolescents. We designed a single blinded, parallel, randomized active-controlled trial (Registration No. CTRI/2013/08/003897) to compare the physiological effects of 6 months of globally recommended structured physical activity (SPA), with that of unstructured physical activity (USPA) in healthy school-going adolescents. We recruited 439 healthy student volunteers (boys: 250, girls: 189) in the age group of 12-17 years. Randomization across the groups was done using age and gender stratified randomization method, and the participants were divided into two groups: SPA (n=219, boys: 117, girls: 102) and USPA (n=220, boys: 119, girls: 101). Depending on their training status and gender the participants in both SPA and USPA groups were further subdivided into the following four sub-groups: SPA athlete boys (n=22) and girls (n=17), SPA nonathlete boys (n=95) and girls (n=85), USPA athlete boys (n=23) and girls (n=17), and USPA nonathlete boys (n=96) and girls (n=84). We recorded HRV, body fat%, and VO2 max using Rockport Walk Fitness test before and after the intervention. Maximum aerobic capacity and heart rate variability increased significantly while heart rate, systolic blood pressure, diastolic blood pressure, and body fat percentage decreased significantly after both SPA and USPA intervention. However, the improvement was more in SPA as compared to USPA. SPA is more beneficial for improving cardiorespiratory fitness, HRV, and reducing body fat percentage in terms of
van Emmerik, A.A.P.; Kamphuis, J.H.; Emmelkamp, P.M.G.
2008-01-01
Background: Writing assignments have shown promising results in treating traumatic symptomatology. Yet no studies have compared their efficacy to the current treatment of choice, cognitive behavior therapy (CBT). The present study evaluated the efficacy of structured writing therapy (SWT) and CBT as
Dozois, David J. A.; Bieling, Peter J.; Patelis-Siotis, Irene; Hoar, Lori; Chudzik, Susan; McCabe, Katie; Westra, Henny A.
2009-01-01
Negative cognitive structure (particularly for interpersonal content) has been shown in some research to persist past a current episode of depression and potentially to be a stable marker of vulnerability for depression (D. J. A. Dozois, 2007; D. J. A. Dozois & K. S. Dobson, 2001a). Given that cognitive therapy (CT) is highly effective for…
Terminology for pregnancy loss prior to viability
DEFF Research Database (Denmark)
Kolte, A M; Bernardi, L A; Christiansen, O B
2015-01-01
Pregnancy loss prior to viability is common and research in the field is extensive. Unfortunately, terminology in the literature is inconsistent. The lack of consensus regarding nomenclature and classification of pregnancy loss prior to viability makes it difficult to compare study results from different centres. In our opinion, terminology and definitions should be based on clinical findings, and when possible, transvaginal ultrasound. With this Early Pregnancy Consensus Statement, it is our goal to provide clear and consistent terminology for pregnancy loss prior to viability.
Directory of Open Access Journals (Sweden)
Horbach Annegret
2009-09-01
Background: ICU stay is often associated with negative experiences for the individual patient. Many patients are disabled and their communication is restricted during the ICU stay. Specific information on procedures, sensations, and coping behavior is thought to reduce anxiety in the ICU. Until now, information programs to reduce anxiety were mainly delivered preoperatively, completely neglecting the informational needs of non-elective ICU patients. Methods: The trial is designed as a prospective multicenter randomized controlled trial in the cities of Marburg, Halle and Stuttgart. Elective and non-elective ICU patients will be included. The trial includes an intervention and a control group on the ICU. The control group receives a trivial conversation without any ICU-specific information. The intervention group receives an information program with specific procedural, sensory and coping information about their ICU stay. Both conversations take place in the ICU and are planned to take about 10 minutes. Discussion: In contrast to former trials of information programs on ICU stays, our intervention will take place in the ICU itself. This approach compensates for memory effects due to anesthesia or preoperative stress. Further, the results will be applicable to non-elective ICU patients. Trial Registration: ClinicalTrials NCT00764933
A Simulation of Pell Grant Awards and Costs Using Prior-Prior Year Financial Data
Kelchen, Robert; Jones, Gigi
2015-01-01
We examine the likely implications of switching from a prior year (PY) financial aid system, the current practice in which students file the Free Application for Federal Student Aid (FAFSA) using income data from the previous tax year, to prior-prior year (PPY), in which data from two years before enrollment is used. While PPY allows students to…
Resnick, Heidi; Zuromski, Kelly L; Galea, Sandro; Price, Matthew; Gilmore, Amanda K; Kilpatrick, Dean G; Ruggiero, Kenneth
2017-07-01
The purpose of the current report was to examine prior history of exposure to interpersonal violence (IPV), as compared with prior accident or prior disaster exposure, experiences during and after a disaster, and demographic variables as predictors of past-month posttraumatic stress disorder (PTSD) and depression severity among adolescents exposed to the tornadoes in Alabama and Missouri. IPV exposure has been consistently identified as a unique category of potentially traumatic events (PTE) that significantly increases risk for development of PTSD and other difficulties relative to other event types among adolescents. A population-based sample of adolescents and caregivers (N = 2,000) was recruited randomly from tornado-affected communities in Alabama and Joplin, Missouri. Participants completed structured telephone interviews an average of 8.8 months posttornado. Prior history of IPV was prevalent (36.5%), as was reported history of accidents (25.9%) and prior disaster exposure (26.9%). Negative binomial regression analyses with past-month PTSD and depression symptom counts as outcome variables indicated that history of predisaster IPV was most robustly related to PTSD and depression symptoms, such that those with a history of IPV endorsed over 3 times as many symptoms as those without an IPV history. Final model statistics indicated that female gender, physical injury to caregiver, concern about others' safety, prior disaster, prior accident, and prior IPV exposure were also related to PTSD. Predictors of depression symptoms were similar, with the exceptions that concern about others' safety was not a predictor and age was a predictor in the final model. It is important to evaluate potential additive effects of IPV history in addition to recent disaster exposure variables and to consider such history when developing interventions aimed at reducing or preventing symptoms of PTSD and depression among adolescents recently exposed to disaster.
Prior Authorization of PMDs Demonstration - Status Update
U.S. Department of Health & Human Services — CMS implemented a Prior Authorization process for scooters and power wheelchairs for people with Fee-For-Service Medicare who reside in seven states with high...
Short Report Biochemical derangements prior to emergency ...
African Journals Online (AJOL)
MMJ VOL 29 (1): March 2017. Biochemical derangements prior to emergency laparotomy at QECH. ... Venepuncture was performed preoperatively for urgent cases, defined as those requiring.
Chan, Juliana C; So, Wing-Yee; Yeung, Chun-Yip; Ko, Gary T; Lau, Ip-Tim; Tsang, Man-Wo; Lau, Kam-Piu; Siu, Sing-Chung; Li, June K; Yeung, Vincent T; Leung, Wilson Y; Tong, Peter C
2009-06-01
Multifaceted care has been shown to reduce mortality and complications in type 2 diabetes. We hypothesized that structured care would reduce renal complications in type 2 diabetes. A total of 205 Chinese type 2 diabetic patients from nine public hospitals who had plasma creatinine levels of 150-350 micromol/l were randomly assigned to receive structured care (n = 104) or usual care (n = 101) for 2 years. The structured care group was managed according to a prespecified protocol with the following treatment goals: blood pressure triglyceride 500 micromol/l or dialysis). Of these 205 patients (mean ± SD age 65 ± 7.2 years; disease duration 14 ± 7.9 years), the structured care group achieved better control than the usual care group (diastolic blood pressure 68 ± 12 vs. 71 ± 12 mmHg, respectively, P = 0.02; A1C 7.3 ± 1.3 vs. 8.0 ± 1.6%, P structured care (23.1%, n = 24) and usual care (23.8%, n = 24; NS) groups had similar end points, but more patients in the structured care group attained ≥3 treatment goals (61%, n = 63, vs. 28%, n = 28; P ≥3 treatment targets (n = 91) had reduced risk of the primary end point (14 vs. 34; relative risk 0.43 [95% CI 0.21-0.86]) compared with that of those who attained
International Nuclear Information System (INIS)
Ambjoern, J.
1987-08-01
The theory of strings is the theory of random surfaces. I review the present attempts to regularize the world sheet of the string by triangulation. The corresponding statistical theory of triangulated random surfaces has a surprisingly rich structure, but the connection to conventional string theory seems non-trivial. (orig.)
Attentional and Contextual Priors in Sound Perception.
Wolmetz, Michael; Elhilali, Mounya
2016-01-01
Behavioral and neural studies of selective attention have consistently demonstrated that explicit attentional cues to particular perceptual features profoundly alter perception and performance. The statistics of the sensory environment can also provide cues about what perceptual features to expect, but the extent to which these more implicit contextual cues impact perception and performance, as well as their relationship to explicit attentional cues, is not well understood. In this study, the explicit cues, or attentional prior probabilities, and the implicit cues, or contextual prior probabilities, associated with different acoustic frequencies in a detection task were simultaneously manipulated. Both attentional and contextual priors had similarly large but independent impacts on sound detectability, with evidence that listeners tracked and used contextual priors for a variety of sound classes (pure tones, harmonic complexes, and vowels). Further analyses showed that listeners updated their contextual priors rapidly and optimally, given the changing acoustic frequency statistics inherent in the paradigm. A Bayesian Observer model accounted for both attentional and contextual adaptations found with listeners. These results bolster the interpretation of perception as Bayesian inference, and suggest that some effects attributed to selective attention may be a special case of contextual prior integration along a feature axis.
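The Bayesian Observer account described above can be illustrated with a toy model in which a listener tracks contextual priors over acoustic frequencies as pseudo-counts and combines them with sensory likelihoods via Bayes' rule. The class, parameter names, and the pseudo-count update rule below are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

class ContextualPriorObserver:
    def __init__(self, frequencies, pseudo=1.0):
        # Dirichlet-style pseudo-counts give a uniform initial prior.
        self.counts = Counter({f: pseudo for f in frequencies})

    def prior(self):
        total = sum(self.counts.values())
        return {f: c / total for f, c in self.counts.items()}

    def update(self, observed_frequency):
        # Rapid prior updating: each trial's target reinforces the context.
        self.counts[observed_frequency] += 1

    def posterior(self, likelihood):
        # Bayes' rule: posterior ∝ prior × likelihood, then normalize.
        prior = self.prior()
        unnorm = {f: prior[f] * likelihood.get(f, 0.0) for f in prior}
        z = sum(unnorm.values())
        return {f: p / z for f, p in unnorm.items()}

obs = ContextualPriorObserver([500, 1000, 2000])
for _ in range(8):            # 1000 Hz targets dominate the recent context
    obs.update(1000)
post = obs.posterior({500: 0.5, 1000: 0.5, 2000: 0.5})  # ambiguous evidence
assert post[1000] > post[500] == post[2000]
```

With equally ambiguous evidence at every frequency, the posterior simply follows the learned contextual prior, which is the signature behavior the study reports at detection threshold.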
Varying prior information in Bayesian inversion
International Nuclear Information System (INIS)
Walker, Matthew; Curtis, Andrew
2014-01-01
Bayes' rule is used to combine likelihood and prior probability distributions. The former represents knowledge derived from new data, the latter represents pre-existing knowledge; the Bayesian combination is the so-called posterior distribution, representing the resultant new state of knowledge. While varying the likelihood due to differing data observations is common, there are also situations where the prior distribution must be changed or replaced repeatedly. For example, in mixture density neural network (MDN) inversion, using current methods the neural network employed for inversion needs to be retrained every time prior information changes. We develop a method of prior replacement to vary the prior without re-training the network. Thus the efficiency of MDN inversions can be increased, typically by orders of magnitude when applied to geophysical problems. We demonstrate this for the inversion of seismic attributes in a synthetic subsurface geological reservoir model. We also present results which suggest that prior replacement can be used to control the statistical properties (such as variance) of the final estimate of the posterior in more general (e.g., Monte Carlo based) inverse problem solutions. (paper)
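The core of prior replacement can be shown on a discrete model grid: a posterior computed under an old prior is converted to the posterior under a new prior by reweighting with the ratio p_new(m)/p_old(m), so the likelihood (and any network trained to represent it) never needs recomputing. The grid and distributions below are made-up illustrations, not the paper's geophysical example.

```python
def normalize(w):
    z = sum(w)
    return [x / z for x in w]

models     = [0, 1, 2, 3]                 # discrete model space
likelihood = [0.1, 0.4, 0.4, 0.1]         # p(d | m), fixed by the data
prior_old  = [0.25, 0.25, 0.25, 0.25]     # prior used in the original inversion
prior_new  = [0.4, 0.4, 0.1, 0.1]         # replacement prior

posterior_old = normalize([l * p for l, p in zip(likelihood, prior_old)])

# Prior replacement: reweight the old posterior instead of re-inverting.
posterior_new = normalize([po * pn / pold for po, pn, pold
                           in zip(posterior_old, prior_new, prior_old)])

# Sanity check: identical to direct Bayes' rule with the new prior.
direct = normalize([l * p for l, p in zip(likelihood, prior_new)])
assert all(abs(a - b) < 1e-12 for a, b in zip(posterior_new, direct))
```

The same ratio trick underlies reweighting Monte Carlo samples drawn under one prior to represent the posterior under another, which is how the abstract's remark about controlling posterior statistics without re-training can be read.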
Weakly supervised semantic segmentation using fore-background priors
Han, Zheng; Xiao, Zhitao; Yu, Mingjun
2017-07-01
Weakly-supervised semantic segmentation is a challenge in the field of computer vision. Most previous works utilize the labels of the whole training set and thereby need the construction of a relationship graph over image labels, resulting in expensive computation. In this study, we tackle this problem from a different perspective. We propose a novel semantic segmentation algorithm based on background priors, which avoids the construction of a huge graph over the whole training dataset. Specifically, a random forest classifier is obtained using weakly supervised training data. Then a semantic texton forest (STF) feature is extracted from image superpixels. Finally, a CRF-based optimization algorithm is proposed. The unary potential of the CRF is derived from the output probability of the random forest classifier and a robust saliency map as the background prior. Experiments on the MSRC21 dataset show that the new algorithm outperforms some previous influential weakly-supervised segmentation algorithms. Furthermore, the use of an efficient decision forest classifier and parallel computing of the saliency map significantly accelerates the implementation.
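The abstract above combines classifier outputs with a saliency-based background prior into CRF unary potentials. The sketch below illustrates one plausible way to do this; the function name, the mixing weight, and the linear combination are assumptions for illustration, not the paper's exact formulation.

```python
import math

def unary_potentials(class_probs, saliency, bg_weight=0.5, eps=1e-9):
    # class_probs: {label: probability} from a (random-forest) classifier
    # saliency in [0, 1]: low saliency pulls the superpixel toward "background"
    unaries = {}
    for label, p in class_probs.items():
        prior = (1.0 - saliency) if label == "background" else saliency
        # Mix classifier evidence with the background prior, then take -log
        # to obtain a CRF unary cost (lower cost = more likely label).
        mixed = (1 - bg_weight) * p + bg_weight * prior
        unaries[label] = -math.log(max(mixed, eps))
    return unaries

u = unary_potentials({"background": 0.3, "grass": 0.7}, saliency=0.9)
assert u["grass"] < u["background"]  # salient superpixel: foreground is cheaper
```

A pairwise smoothness term between neighboring superpixels would then complete the CRF energy before optimization.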
Cajanding, Ruff Joseph
Cardiovascular diseases remain the leading cause of morbidity and mortality among Filipinos and are responsible for a very large number of hospital readmissions. Comprehensive discharge planning programs have demonstrated positive benefits among various populations of patients with cardiovascular disease, but the clinical and psychosocial effects of such an intervention among Filipino patients with acute myocardial infarction (AMI) have not been studied. In this study we aimed to determine the effectiveness of a nurse-led structured discharge planning program on perceived functional status, cardiac self-efficacy, patient satisfaction, and unexpected hospital revisits among Filipino patients with AMI. A true experimental (randomized controlled) 2-group design with repeated measures, with data collected before and after intervention and at 1-month follow-up, was used in this study. Participants were assigned to either the control (n = 68) or the intervention group (n = 75). Intervention participants underwent a 3-day structured discharge planning program implemented by a cardiovascular nurse practitioner, comprising a series of individualized lecture-discussions, provision of feedback, integrative problem solving, goal setting, and action planning. Control participants received standard routine care. Functional status, cardiac self-efficacy, and patient satisfaction were measured at baseline; cardiac self-efficacy and patient satisfaction scores were measured prior to discharge, and perceived functional status and number of revisits were measured 1 month after discharge. Participants in the intervention group had significant improvement in functional status, cardiac self-efficacy, and patient satisfaction scores at baseline and at follow-up compared with the control participants. Furthermore, participants in the intervention group had significantly fewer hospital revisits compared with those who received only standard care. The results demonstrate that a
Generalized species sampling priors with latent Beta reinforcements
Airoldi, Edoardo M.; Costa, Thiago; Bassetti, Federico; Leisen, Fabrizio; Guindani, Michele
2014-01-01
Many popular Bayesian nonparametric priors can be characterized in terms of exchangeable species sampling sequences. However, in some applications, exchangeability may not be appropriate. We introduce a novel and probabilistically coherent family of non-exchangeable species sampling sequences characterized by a tractable predictive probability function with weights driven by a sequence of independent Beta random variables. We compare their theoretical clustering properties with those of the Dirichlet process and the two-parameter Poisson-Dirichlet process. The proposed construction provides a complete characterization of the joint process, differently from existing work. We then propose the use of such a process as the prior distribution in a hierarchical Bayes modeling framework, and we describe a Markov chain Monte Carlo sampler for posterior inference. We evaluate the performance of the prior and the robustness of the resulting inference in a simulation study, providing a comparison with popular Dirichlet process mixtures and hidden Markov models. Finally, we develop an application to the detection of chromosomal aberrations in breast cancer by leveraging array CGH data. PMID:25870462
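For readers unfamiliar with species sampling sequences, the exchangeable baseline the paper generalizes can be sketched as a Chinese-restaurant-process predictive rule (the Dirichlet process case); the paper's contribution replaces these fixed weights with Beta-driven, non-exchangeable ones. The sampler below is an illustrative sketch of the baseline only, with an assumed concentration parameter `theta`.

```python
import random

def sample_species_sequence(n, theta=1.0, seed=0):
    """Draw n observations from a CRP-style species sampling sequence."""
    rng = random.Random(seed)
    counts = []   # counts[k] = number of times species k has been observed
    seq = []
    for _ in range(n):
        total = sum(counts)
        # Predictive rule: a new species with probability theta/(theta+total),
        # otherwise an existing species with probability proportional to count.
        if rng.random() < theta / (theta + total):
            counts.append(1)
            seq.append(len(counts) - 1)
        else:
            r = rng.random() * total
            acc = 0
            for k, c in enumerate(counts):
                acc += c
                if r < acc:
                    counts[k] += 1
                    seq.append(k)
                    break
    return seq, counts

seq, counts = sample_species_sequence(200, theta=2.0)
assert sum(counts) == 200
assert seq[0] == 0   # the first draw always founds the first species
```

In the non-exchangeable construction of the paper, the probability of opening a new species would instead be governed by a sequence of independent Beta random variables rather than the fixed theta/(theta+n) weights used here.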
Buerkle, Bernd; Rueter, Katharina; Hefler, Lukas A; Tempfer-Bentz, Eva-Katrin; Tempfer, Clemens B
2013-12-01
To compare the skills of performing a vaginal breech (VB) delivery after hands-on training versus demonstration. We randomized medical students to a 30-min demonstration (group 1) or a 30-min hands-on (group 2) training session using a standardized VB management algorithm on a pelvic training model. Subjects were tested with a 25-item Objective Structured Assessment of Technical Skills (OSATS) scoring system immediately after training and 72 h thereafter. OSATS scores were the primary outcome. Performance time (PT), self-assessment (SA), confidence (CON), and global rating scale (GRS) were the secondary outcomes. Statistics were performed using the Mann-Whitney U-test, chi-square test, and multiple linear regression analysis. 172 subjects were randomized. OSATS scores (primary outcome) were significantly higher in group 2 (n=88) compared to group 1 (n=84) (21.18±2.29 vs. 20.19±2.37, respectively; p=0.006). The secondary outcomes GRS (10.31±2.28 vs. 9.17±2.21; p=0.001), PT (214.60±57.97 s vs. 246.98±59.34 s; p < …) … Hands-on training leads to a significant improvement of VB management in a pelvic training model, but this effect was only seen in the short term. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Hilal, Ziad; Kumpernatz, Anne K; Rezniczek, Günther A; Cetin, Cem; Tempfer-Bentz, Eva-Katrin; Tempfer, Clemens B
2017-03-01
To compare medical students' skills for vaginal operative delivery by vacuum extraction (VE) after hands-on training versus video demonstration. We randomized medical students to an expert demonstration (group 1) or a hands-on (group 2) training using a standardized VE algorithm on a pelvic training model. Students were tested with a 40-item Objective Structured Assessment of Technical Skills (OSATS) scoring system after training and 4 days later. OSATS scores were the primary outcome. Performance time, self-assessment, confidence, and global rating scale were secondary outcomes. We assessed the constructive validity of OSATS in this VE model by comparing metric scores of experts and students. In all, 137 students were randomized. OSATS scores were higher in group 2 (n = 63) compared with group 1 (n = 74) (32.89 ± 6.39 vs 27.51 ± 10.27, respectively; P < …) … Hands-on training is superior to video demonstration for teaching VE on a pelvic model.
Ye, Terrance Z; Yang, Rong-Cai; Yeh, Francis C
2002-06-01
We studied the population structure of a lodgepole (Pinus contorta Dougl.) and jack pine (Pinus banksiana Lamb.) complex in west central Alberta and neighboring areas by assessing random amplified polymorphic DNA (RAPD) variability in 23 lodgepole pine, 9 jack pine, and 8 putative hybrid populations. Of 200 random primers screened, 10 that amplified 39 sharp and reproducible RAPDs were chosen for the study. None of the 39 RAPDs were unique to the parental species. RAPD diversity ranged from 0.085 to 0.190 among populations and averaged 0.143 for lodgepole pine, 0.156 for jack pine, 0.152 for hybrids, and 0.148 for all 40 populations. The estimated population differentiation based on G(ST) was 0.168 for hybrids, 0.162 for lodgepole pine, 0.155 for jack pine, and 0.247 across all 40 populations. Cluster analysis of genetic distances generally separated jack pine from lodgepole pine and hybrids, but no division could be identified that further separated lodgepole pine from hybrids. The observed weak to mild trend of "introgression by distance" in the complex and neighbouring areas was consistent with the view that introgressive hybridization between lodgepole and jack pines within and outside the hybrid zone may have been through secondary contact and primary intergradation, respectively.
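The diversity and differentiation statistics reported above (RAPD diversity per population, G_ST across populations) follow standard formulas: Nei's gene diversity h = 1 - Σ p_i² per locus, and G_ST = (H_T - H_S)/H_T, where H_S is the mean within-population diversity and H_T the diversity of the pooled allele frequencies. The sketch below illustrates the computation on made-up two-population band frequencies; it is not the authors' code.

```python
def nei_diversity(freqs):
    """Nei's gene diversity for one locus: h = 1 - sum(p_i^2)."""
    return 1.0 - sum(p * p for p in freqs)

def g_st(pop_freqs):
    """G_ST = (H_T - H_S) / H_T for one locus across populations.

    pop_freqs: list of per-population allele-frequency vectors.
    """
    h_s = sum(nei_diversity(f) for f in pop_freqs) / len(pop_freqs)
    pooled = [sum(f[i] for f in pop_freqs) / len(pop_freqs)
              for i in range(len(pop_freqs[0]))]
    h_t = nei_diversity(pooled)
    return (h_t - h_s) / h_t

# Hypothetical band present/absent frequencies in two demes
pops = [[0.9, 0.1], [0.5, 0.5]]
assert abs(nei_diversity([0.5, 0.5]) - 0.5) < 1e-12
gst = g_st(pops)
assert 0.0 < gst < 1.0   # moderate differentiation between the two demes
```

Averaging these per-locus values over all scored RAPD bands yields the population-level summaries quoted in the abstract.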
International Nuclear Information System (INIS)
Chernykh, A.; Shur, V.; Nikolaeva, E.; Shishkin, E.; Shur, A.; Terabe, K.; Kurimura, S.; Kitamura, K.; Gallo, K.
2005-01-01
The variety of the shapes of isolated domains, revealed in congruent and stoichiometric LiTaO3 and LiNbO3 by chemical etching and visualized by optical and scanning probe microscopy, was obtained by computer simulation. The kinetic nature of the domain shape was clearly demonstrated. The kinetics of domain structure with the dominance of the growth of the steps formed at the domain walls as a result of domain merging was investigated experimentally in a slightly distorted artificial regular two-dimensional (2D) hexagonal domain structure and a random natural one. The artificial structure was realized in congruent LiNbO3 by a 2D electrode pattern produced by photolithography. The polarization reversal in congruent LiTaO3 was investigated as an example of natural domain growth limited by merging. The switching process defined by domain merging was studied by computer simulation. The crucial dependence of the switching kinetics on the nuclei concentration has been revealed
Putungan, Darwin Barayang; Lin, Shi-Hsin
2018-01-01
In this work, we looked into the lowest-energy structures of small lithium clusters (Li_n, n = 5, 6, 7, 8) utilizing the conventional PBE exchange-correlation functional, PBE with D2 dispersion correction, and PBE with the Tkatchenko-Scheffler (TS) dispersion correction, with structures searched using ab initio random structure searching. Results show that, in general, dispersion-corrected PBE obtained similar lowest-minima structures to those obtained via conventional PBE regardless of the type of implementation, although both D2 and TS found several high-energy isomers that conventional PBE did not arrive at, with TS in general giving more structures per energy range, which could be attributed to its environment-dependent implementation. Moreover, the D2 and TS dispersion corrections found a lowest-energy geometry for the Li8 cluster that is in agreement with the structure obtained via the typical benchmarking method, diffusion Monte Carlo, in a recent work. It is thus suggested that for much larger lithium clusters, utilization of dispersion corrections could help in searching for lowest-energy minima that are in close agreement with diffusion Monte Carlo results, while being computationally inexpensive.
DEFF Research Database (Denmark)
Frier, Christian; Sørensen, John Dalsgaard
2005-01-01
For many reinforced concrete structures corrosion of the reinforcement is an important problem since it can result in expensive maintenance and repair actions. Further, a significant reduction of the load-bearing capacity can occur. One mode of corrosion initiation occurs when the chloride content … is modeled by a 2-dimensional diffusion process by FEM (Finite Element Method) and the diffusion coefficient, surface chloride concentration and reinforcement cover depth are modeled by multidimensional stochastic fields, which are discretized using the EOLE (Expansion Optimum Linear Estimation) approach. … As an example a bridge pier in a marine environment is considered and the results are given in terms of the distribution of the time for initialization of corrosion…
Hilário, M.; Hollander, den W.Th.F.; Sidoravicius, V.; Soares dos Santos, R.; Teixeira, A.
2014-01-01
In this paper we study a random walk in a one-dimensional dynamic random environment consisting of a collection of independent particles performing simple symmetric random walks in a Poisson equilibrium with density ρ ∈ (0,∞). At each step the random walk performs a nearest-neighbour jump, moving to
Richardson, Caroline R; Mehari, Kathleen S; McIntyre, Laura G; Janney, Adrienne W; Fortlage, Laurie A; Sen, Ananda; Strecher, Victor J; Piette, John D
2007-11-16
The majority of individuals with type 2 diabetes do not exercise regularly. Pedometer-based walking interventions can help; however, pedometer-based interventions targeting only total daily accumulated steps might not yield the same health benefits as physical activity programs specifying a minimum duration and intensity of physical activity bouts. This pilot randomized trial compared two goal-setting strategies: 1) lifestyle goals targeting total daily accumulated step counts and 2) structured goals targeting bout steps, defined as walking that lasts for 10 minutes or longer at a pace of at least 60 steps per minute. We sought to determine which goal-setting strategy was more effective at increasing bout steps. Participants were sedentary adults with type 2 diabetes. All participants wore enhanced pedometers with embedded USB ports; uploaded detailed, time-stamped step-count data to a website called Stepping Up to Health; and received automated step-count feedback, automatically calculated goals, and tailored motivational messages throughout the six-week intervention. Only the automated goal calculations and step-count feedback differed between the two groups. The primary outcome of interest was the increase in steps taken during the previously defined bouts of walking (lasting at least 10 minutes at a pace of at least 60 steps per minute) between baseline and the end of the intervention. Thirty-five participants were randomized and 30 (86%) completed the pilot study. Both groups significantly increased bout steps, but there was no statistically significant difference between groups. Among study completers, bout steps increased by 1921 ± 2729 steps a day. Those who received lifestyle goals were more satisfied with the intervention (p = 0.006) and wore the pedometer more often (p < …) … day of additional moderate intensity bout activity. Pedometer-based walking programs that emphasize total accumulated step counts are more acceptable to participants and are as
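The bout definition used above (runs of at least 10 consecutive minutes at 60 or more steps per minute) can be computed directly from minute-level pedometer data. The sketch below is an illustrative implementation of that definition; function and variable names are assumptions, not the trial's actual analysis code.

```python
def bout_steps(steps_per_minute, min_rate=60, min_duration=10):
    """Total steps occurring inside qualifying walking bouts.

    A bout is a run of >= min_duration consecutive minutes,
    each with >= min_rate steps.
    """
    total = 0
    run = []
    for s in steps_per_minute + [0]:   # trailing sentinel flushes the last run
        if s >= min_rate:
            run.append(s)
        else:
            if len(run) >= min_duration:
                total += sum(run)
            run = []
    return total

# Hypothetical day: one 12-minute bout at 80 steps/min and one 5-minute burst
day = [0] * 5 + [80] * 12 + [0] * 3 + [70] * 5
assert bout_steps(day) == 80 * 12   # the 5-minute burst is too short to count
```

Running this over each participant-day and differencing baseline against end-of-intervention values yields the primary outcome described in the abstract.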
External Prior Guided Internal Prior Learning for Real-World Noisy Image Denoising
Xu, Jun; Zhang, Lei; Zhang, David
2018-06-01
Most existing image denoising methods learn image priors from either external data or the noisy image itself to remove noise. However, priors learned from external data may not be adaptive to the image to be denoised, while priors learned from the given noisy image may not be accurate due to the interference of corrupting noise. Meanwhile, the noise in real-world noisy images is very complex and hard to describe by simple distributions such as the Gaussian distribution, making real noisy image denoising a very challenging problem. We propose to exploit the information in both external data and the given noisy image, and develop an external prior guided internal prior learning method for real noisy image denoising. We first learn external priors from an independent set of clean natural images. With the aid of the learned external priors, we then learn internal priors from the given noisy image to refine the prior model. The external and internal priors are formulated as a set of orthogonal dictionaries to efficiently reconstruct the desired image. Extensive experiments are performed on several real noisy image datasets. The proposed method demonstrates highly competitive denoising performance, outperforming state-of-the-art denoising methods including those designed for real noisy images.
Directory of Open Access Journals (Sweden)
Alexandra Schättin
2016-11-01
A common problem in the older population is the risk of falling, which might lead to injury, immobility, and reduced survival. Age-related neuronal changes, e.g. decline in grey and white matter, affect neuronal, cognitive, and motor functioning. Improving these factors might decrease fall events in the elderly. Studies showed that the sole administration of video game-based physical exercise, a so-called exergame, or omega-3 fatty acid (FA) may improve motor and/or cognitive functioning through neuronal changes in the brain of older adults. The aim of this study is to assess the effects of a combination of exergame training with omega-3 FA supplementation on the elderly brain. We hypothesize that an intervention using a combination approach affects the neuronal structure and function of the elderly brain differently than the sole administration of exergame training. The study is a parallel, double-blinded, randomized controlled trial lasting 26 weeks. Sixty autonomously living, non-smoking, and right-handed healthy older (>65 years) adults who live independently or in a senior residency are included, randomized, and allocated to one of two study groups. The experimental group receives a daily amount of 13.5 ml fish oil (including 2.9 g of omega-3 FA), whereas the control group receives a daily amount of 13.5 ml olive oil for 26 weeks. After 16 weeks, both groups start with an exergame training program three times per week. Measurements are performed at three time points by treatment-blinded investigators: pre-intervention measurements, a blood sample after 16 weeks, and post-intervention measurements. The main outcomes are motor evoked potentials of the right M. tibialis anterior (transcranial magnetic stimulation) and response-related potentials (electroencephalography) during a cognitive test. For secondary outcomes, reaction times during cognitive tests and spatio-temporal parameters during gait performance are measured. Statistics
Prospective regularization design in prior-image-based reconstruction
International Nuclear Information System (INIS)
Dang, Hao; Siewerdsen, Jeffrey H; Stayman, J Webster
2015-01-01
Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in
Directory of Open Access Journals (Sweden)
Evânia Galvão Mendonça
2014-01-01
The objective of this study was to assess the genetic variability in two natural populations of Calophyllum brasiliense located along two different rivers in the state of Minas Gerais, Brazil, using RAPD molecular markers. Eighty-two polymorphic fragments were amplified using 27 primers. The values obtained for the Shannon index (I) were 0.513 and 0.530 for the populations located on the margins of the Rio Grande and Rio das Mortes, respectively, demonstrating the high genetic diversity in the studied populations. Nei's genetic diversity (He) was 0.341 for the Rio Grande population and 0.357 for the Rio das Mortes population. These results were not significantly different between populations and suggest a large proportion of heterozygous individuals within both populations. AMOVA showed that 70.42% of the genetic variability is found within populations and 29.58% among populations (ΦST = 0.2958). The analysis of kinship coefficients detected the existence of family structures in both populations. Average kinship coefficients between neighboring individuals were 0.053 (P<0.001) in Rio das Mortes and 0.040 (P<0.001) in Rio Grande. This could be due to restricted pollen and seed dispersal and the history of anthropogenic disturbance in the area. These factors are likely to contribute to the relatedness observed among these genotypes.
de Aquino, Samuel; Fuess, Lucas Tadeu; Pires, Eduardo Cleto
2017-07-01
This study reports on the application of an innovative structured-bed reactor (FVR) as an alternative to conventional packed-bed reactors (PBRs) to treat high-strength solid-rich wastewaters. Using the FVR prevents solids from accumulating within the fixed-bed, while maintaining the advantages of the biomass immobilization. The long-term operation (330days) of a FVR and a PBR applied to sugarcane vinasse under increasing organic loads (2.4-18.0kgCODm -3 day -1 ) was assessed, focusing on the impacts of the different media arrangements over the production and retention of biomass. Much higher organic matter degradation rates, as well as long-term operational stability and high conversion efficiencies (>80%) confirmed that the FVR performed better than the PBR. Despite the equivalent operating conditions, the biomass growth yield was different in both reactors, i.e., 0.095gVSSg -1 COD (FVR) and 0.066gVSSg -1 COD (PBR), indicating a clear control of the media arrangement over the biomass production in fixed-bed reactors. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hysteresis as an Implicit Prior in Tactile Spatial Decision Making
Thiel, Sabrina D.; Bitzer, Sebastian; Nierhaus, Till; Kalberlah, Christian; Preusser, Sven; Neumann, Jane; Nikulin, Vadim V.; van der Meer, Elke; Villringer, Arno; Pleger, Burkhard
2014-01-01
Perceptual decisions not only depend on the incoming information from sensory systems but constitute a combination of current sensory evidence and internally accumulated information from past encounters. Although recent evidence emphasizes the fundamental role of prior knowledge for perceptual decision making, only a few studies have quantified the relevance of such priors on perceptual decisions and examined their interplay with other decision-relevant factors, such as the stimulus properties. In the present study we asked whether hysteresis, describing the stability of a percept despite a change in stimulus property and known to occur at perceptual thresholds, also acts as a form of an implicit prior in tactile spatial decision making, supporting the stability of a decision across successively presented random stimuli (i.e., decision hysteresis). We applied a variant of the classical 2-point discrimination task and found that hysteresis influenced perceptual decision making: Participants were more likely to decide ‘same’ rather than ‘different’ on successively presented pin distances. In a direct comparison between the influence of applied pin distances (explicit stimulus property) and hysteresis, we found that on average, stimulus property explained significantly more variance of participants’ decisions than hysteresis. However, when focusing on pin distances at threshold, we found a trend for hysteresis to explain more variance. Furthermore, the less variance was explained by the pin distance on a given decision, the more variance was explained by hysteresis, and vice versa. Our findings suggest that hysteresis acts as an implicit prior in tactile spatial decision making that becomes increasingly important when explicit stimulus properties provide decreasing evidence. PMID:24587045
Offending prior to first psychiatric contact
DEFF Research Database (Denmark)
Stevens, H; Agerbo, E; Dean, K
2012-01-01
There is a well-established association between psychotic disorders and subsequent offending, but the extent to which those who develop psychosis might have a prior history of offending is less clear. Little is known about whether the association between illness and offending exists in non-psychotic disorders. The aim of this study was to determine whether the association between mental disorder and offending is present prior to illness onset in psychotic and non-psychotic disorders.
GENERAL ASPECTS REGARDING THE PRIOR DISCIPLINARY RESEARCH
Directory of Open Access Journals (Sweden)
ANDRA PURAN (DASCĂLU)
2012-05-01
Full Text Available Disciplinary research is the first phase of the disciplinary action. According to art. 251 paragraph 1 of the Labour Code, no disciplinary sanction may be ordered before performing the prior disciplinary research. These regulations provide one exception: the sanction of written warning. The current regulations in question, kept from the old regulation, provide protection for employees against abuses by employers, since sanctions affect the salary or the position held, or even the continuation of the individual employment contract. Thus, prior research of the act that constitutes misconduct, before a disciplinary sanction is applied, is an essential condition for the validity of the measure ordered. Through this study we try to highlight some general issues concerning the characteristics, processes and effects of prior disciplinary research.
Bayesian Prior Probability Distributions for Internal Dosimetry
Energy Technology Data Exchange (ETDEWEB)
Miller, G.; Inkret, W.C.; Little, T.T.; Martz, H.F.; Schillaci, M.E.
2001-07-01
The problem of choosing a prior distribution for the Bayesian interpretation of measurements (specifically internal dosimetry measurements) is considered using a theoretical analysis and by examining historical tritium and plutonium urine bioassay data from Los Alamos. Two models for the prior probability distribution are proposed: (1) the log-normal distribution, when there is some additional information to determine the scale of the true result, and (2) the 'alpha' distribution (a simplified variant of the gamma distribution) when there is not. These models have been incorporated into version 3 of the Bayesian internal dosimetric code in use at Los Alamos (downloadable from our web site). Plutonium internal dosimetry at Los Alamos is now being done using prior probability distribution parameters determined self-consistently from population averages of Los Alamos data. (author)
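As a sketch of how such a prior enters the interpretation of a single bioassay measurement, the posterior over the true result can be computed on a grid. The log-normal here stands in for model (1); every parameter value and name below is illustrative, not taken from the Los Alamos code:

```python
import numpy as np

def posterior_true_result(y, sigma_meas, mu_ln, sigma_ln, n=4000):
    """Grid posterior over the true result x given one measurement y.

    Prior: log-normal(mu_ln, sigma_ln) on x, applicable when additional
    information fixes the scale of the true result. Likelihood: Gaussian
    measurement noise with standard deviation sigma_meas.
    """
    x = np.linspace(1e-6, y + 8.0 * sigma_meas, n)
    # work in log space to avoid underflow before the final exponential
    log_prior = -(np.log(x) - mu_ln) ** 2 / (2 * sigma_ln ** 2) - np.log(x)
    log_like = -(y - x) ** 2 / (2 * sigma_meas ** 2)
    post = np.exp(log_prior + log_like)
    post /= post.sum() * (x[1] - x[0])   # normalize on the uniform grid
    return x, post

x, post = posterior_true_result(y=2.0, sigma_meas=1.0, mu_ln=0.0, sigma_ln=1.0)
```

The posterior mean then serves as the Bayesian point estimate of the true result, pulled between the measurement and the prior's scale.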
Incorporating priors for EEG source imaging and connectivity analysis
Directory of Open Access Journals (Sweden)
Xu eLei
2015-08-01
Full Text Available Electroencephalography source imaging (ESI) is a useful technique to localize the generators from a given scalp electric measurement and to investigate the temporal dynamics of large-scale neural circuits. By introducing reasonable priors from other modalities, ESI reveals the most probable sources and communication structures at every moment in time. Here, we review the available priors from such techniques as magnetic resonance imaging (MRI), functional MRI (fMRI), and positron emission tomography (PET). The modality's specific contribution is analyzed from the perspective of source reconstruction. For spatial priors, such as EEG-correlated fMRI, temporally coherent networks and resting-state fMRI are systematically introduced in the ESI. Moreover, fiber tracking (diffusion tensor imaging, DTI) and neuro-stimulation techniques (transcranial magnetic stimulation, TMS) are also introduced as potential priors, which can help to draw inferences about the neuroelectric connectivity in the source space. We conclude that combining EEG source imaging with other complementary modalities is a promising approach towards the study of brain networks in cognitive and clinical neurosciences.
Preparing learners with partly incorrect intuitive prior knowledge for learning
Directory of Open Access Journals (Sweden)
Andrea eOhst
2014-07-01
Full Text Available Learners sometimes have incoherent and fragmented intuitive prior knowledge that is (partly) ‘incompatible’ with the to-be-learned contents. Such knowledge in pieces can cause conceptual disorientation and cognitive overload while learning. We hypothesized that a pre-training intervention providing a generalized schema as a structuring framework for such knowledge in pieces would support (re)organizing processes of prior knowledge and thus reduce unnecessary cognitive load during subsequent learning. Fifty-six student teachers participated in the experiment. A framework group underwent a pre-training intervention providing a generalized, categorical schema for categorizing primary learning strategies and related but different strategies as a cognitive framework for (re-)organizing their prior knowledge. Our control group received comparable factual information but no framework. Afterwards, all participants learned about primary learning strategies. The framework group claimed to possess higher levels of interest and self-efficacy, achieved higher learning outcomes, and learned more efficiently. Hence, providing a categorical framework can help overcome the barrier of incorrect prior knowledge in pieces.
Preparing learners with partly incorrect intuitive prior knowledge for learning
Ohst, Andrea; Fondu, Béatrice M. E.; Glogger, Inga; Nückles, Matthias; Renkl, Alexander
2014-01-01
Learners sometimes have incoherent and fragmented intuitive prior knowledge that is (partly) “incompatible” with the to-be-learned contents. Such knowledge in pieces can cause conceptual disorientation and cognitive overload while learning. We hypothesized that a pre-training intervention providing a generalized schema as a structuring framework for such knowledge in pieces would support (re)organizing-processes of prior knowledge and thus reduce unnecessary cognitive load during subsequent learning. Fifty-six student teachers participated in the experiment. A framework group underwent a pre-training intervention providing a generalized, categorical schema for categorizing primary learning strategies and related but different strategies as a cognitive framework for (re-)organizing their prior knowledge. Our control group received comparable factual information but no framework. Afterwards, all participants learned about primary learning strategies. The framework group claimed to possess higher levels of interest and self-efficacy, achieved higher learning outcomes, and learned more efficiently. Hence, providing a categorical framework can help overcome the barrier of incorrect prior knowledge in pieces. PMID:25071638
Prior Exposure and Educational Environment towards Entrepreneurial Intention
Directory of Open Access Journals (Sweden)
Karla Soria-Barreto
2017-07-01
Full Text Available This research is based on the responses to a questionnaire applied to 351 students of business management in Chile and Colombia. Through the analysis of structural equations on Ajzen’s model, we found that entrepreneurial education, the University environment, and prior entrepreneurial exposure are mediated by the factors of Ajzen’s model to generate entrepreneurial intention in higher education students. The results show that entrepreneurial education strengthens the perceived control of behavior and, with it, albeit in a differentiated way, the entrepreneurial intention of men and women. University environment affects entrepreneurial intention through attitude towards entrepreneurship; and finally, work experience, used as one of the variables that measure prior entrepreneurial exposure, explains entrepreneurial intention inversely through the subjective norms. We found that gender has a moderating effect on perceived control of behavior and entrepreneurial education. The scarce studies on the impact of the University environment and the mixed results of entrepreneurial education and prior entrepreneurial exposure toward entrepreneurial intention show the necessity for further research. A second contribution is the opportunity to present new evidence about the relationship between University environment, entrepreneurial education and prior exposure in developing countries of South America, including the gender effect (moderator) for entrepreneurial intention. It is important to note that most of the research in this area applies to developed countries, and some scholars suggest that extrapolating the results is not convenient.
Can natural selection encode Bayesian priors?
Ramírez, Juan Camilo; Marshall, James A R
2017-08-07
The evolutionary success of many organisms depends on their ability to make decisions based on estimates of the state of their environment (e.g., predation risk) from uncertain information. These decision problems have optimal solutions and individuals in nature are expected to evolve the behavioural mechanisms to make decisions as if using the optimal solutions. Bayesian inference is the optimal method to produce estimates from uncertain data, thus natural selection is expected to favour individuals with the behavioural mechanisms to make decisions as if they were computing Bayesian estimates in typically-experienced environments, although this does not necessarily imply that favoured decision-makers do perform Bayesian computations exactly. Each individual should evolve to behave as if updating a prior estimate of the unknown environment variable to a posterior estimate as it collects evidence. The prior estimate represents the decision-maker's default belief regarding the environment variable, i.e., the individual's default 'worldview' of the environment. This default belief has been hypothesised to be shaped by natural selection and represent the environment experienced by the individual's ancestors. We present an evolutionary model to explore how accurately Bayesian prior estimates can be encoded genetically and shaped by natural selection when decision-makers learn from uncertain information. The model simulates the evolution of a population of individuals that are required to estimate the probability of an event. Every individual has a prior estimate of this probability and collects noisy cues from the environment in order to update its prior belief to a Bayesian posterior estimate with the evidence gained. The prior is inherited and passed on to offspring. Fitness increases with the accuracy of the posterior estimates produced. Simulations show that prior estimates become accurate over evolutionary time. In addition to these 'Bayesian' individuals, we also
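A toy version of such a model can be simulated directly: each individual carries a heritable Beta prior over the event probability, updates it with Bernoulli cues, and selection keeps the individuals whose posterior estimates are most accurate. The fitness rule, parameter values and names below are illustrative assumptions, not the paper's actual model:

```python
import random

def posterior_mean(a, b, cues):
    """Posterior mean after updating a Beta(a, b) prior with Bernoulli cues."""
    return (a + sum(cues)) / (a + b + len(cues))

def evolve(p_true=0.2, pop=200, gens=300, n_cues=5, sigma=0.1):
    """Evolve a population of heritable Beta priors by truncation selection."""
    population = [(random.uniform(0.5, 5.0), random.uniform(0.5, 5.0))
                  for _ in range(pop)]
    for _ in range(gens):
        scored = []
        for a, b in population:
            # noisy evidence about the unknown event probability
            cues = [random.random() < p_true for _ in range(n_cues)]
            err = abs(posterior_mean(a, b, cues) - p_true)
            scored.append((err, a, b))
        scored.sort()                       # lower error = higher fitness
        # best half reproduces; offspring inherit the prior with mutation
        population = [(max(0.1, a + random.gauss(0.0, sigma)),
                       max(0.1, b + random.gauss(0.0, sigma)))
                      for _, a, b in scored[:pop // 2] for _ in range(2)]
    return population
```

Over generations the surviving priors concentrate so that their means track the environmental probability, mirroring the paper's finding that prior estimates become accurate over evolutionary time.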
Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.
2018-05-01
The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions: the prior and the posterior. The posterior distribution is influenced by the choice of prior distribution. Jeffreys’ prior is a kind of non-informative prior distribution, used when no information about the parameter is available. The non-informative Jeffreys’ prior is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with a non-informative Jeffreys’ prior. Based on the results and discussion, parameter estimates of β and Σ were obtained from the expected values of the marginal posterior distributions, which are multivariate normal for β and inverse Wishart for Σ. However, calculating these expected values involves integrals whose values are difficult to determine. Therefore, an approximation is needed, generating random samples according to the posterior distribution of each parameter using a Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
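Because the conditional posteriors are standard (matrix normal for β given Σ, inverse Wishart for Σ given β), the Gibbs sampler only needs draws from those two distributions. A minimal NumPy sketch; the degrees-of-freedom and scale conventions below are one common parameterization and should be checked against the derivation used:

```python
import numpy as np

def gibbs_mvreg(Y, X, n_iter=1000, seed=0):
    """Gibbs sampler for the multivariate regression Y = X B + E under
    Jeffreys' non-informative prior p(B, Sigma) proportional to
    |Sigma|^(-(m+1)/2)."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    m = Y.shape[1]
    XtX_inv = np.linalg.inv(X.T @ X)
    B_hat = XtX_inv @ X.T @ Y       # least-squares center of B's conditional
    B = B_hat.copy()
    draws = []
    for _ in range(n_iter):
        # Sigma | B, Y ~ inverse-Wishart(n, E'E): draw a Wishart with the
        # inverted scale matrix, then invert the result.
        E = Y - X @ B
        Z = rng.multivariate_normal(np.zeros(m), np.linalg.inv(E.T @ E), size=n)
        Sigma = np.linalg.inv(Z.T @ Z)
        # B | Sigma, Y: vec(B) ~ N(vec(B_hat), Sigma kron (X'X)^-1),
        # with vec stacking columns (Fortran order).
        cov = np.kron(Sigma, XtX_inv)
        vec_B = rng.multivariate_normal(B_hat.flatten(order="F"), cov)
        B = vec_B.reshape((k, m), order="F")
        draws.append((B, Sigma))
    return draws
```

Averaging the retained draws (after discarding burn-in) approximates the posterior expected values of β and Σ that the analytic integrals would give.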
Algorithms and tools for system identification using prior knowledge
International Nuclear Information System (INIS)
Lindskog, P.
1994-01-01
One of the hardest problems in system identification is that of model structure selection. In this thesis two different kinds of a priori process knowledge are used to address this fundamental problem. Concentrating on linear model structures, the first prior taken advantage of is knowledge about the system's dominating time constants and resonance frequencies. The idea is to generalize FIR modelling by replacing the usual delay operator with discrete so-called Laguerre or Kautz filters. The generalization is such that the stability, the linear regression structure and the approximation ability of the FIR model structure are retained, whereas the prior is used to reduce the number of parameters needed to arrive at a reasonable model. Tailored and efficient system identification algorithms for these model structures are detailed in this work. The usefulness of the proposed methods is demonstrated through concrete simulation and application studies. The other approach is referred to as semi-physical modelling. The main idea is to use simple physical insight into the application, often in terms of a set of unstructured equations, in order to come up with suitable nonlinear transformations of the raw measurements, so as to allow for a good model structure. Semi-physical modelling is less "ambitious" than physical modelling in that no complete physical structure is sought, just combinations of inputs and outputs that can be subjected to more or less standard model structures, such as linear regressions. The suggested modelling procedure starts with a step in which symbolic computations are employed to determine a suitable model structure - a set of regressors. We show how constructive methods from commutative and differential algebra can be applied for this. Subsequently, different numerical schemes for finding a subset of "good" regressors and for estimating the corresponding linear-in-the-parameters model are discussed. 107 refs, figs, tabs
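The Laguerre generalization keeps the linear regression structure: the regressors are the input passed through one low-pass section followed by a chain of identical all-pass sections, all sharing a pole a chosen from the prior knowledge of the dominating time constant. A sketch of the filter bank (the discretization below is one common convention, stated here as an assumption):

```python
import numpy as np

def laguerre_regressors(u, a, n_filters):
    """Regressor matrix from a Laguerre filter bank with pole a (|a| < 1).

    Stage 0 is the low-pass  L1(z) = sqrt(1 - a^2) z^-1 / (1 - a z^-1);
    each later stage applies the all-pass  (z^-1 - a) / (1 - a z^-1).
    With a = 0 this reduces to an ordinary FIR delay chain.
    """
    T = len(u)
    X = np.zeros((T, n_filters))
    gain = np.sqrt(1.0 - a * a)
    prev = np.zeros(T)
    for t in range(1, T):
        prev[t] = a * prev[t - 1] + gain * u[t - 1]
    X[:, 0] = prev
    for k in range(1, n_filters):
        cur = np.zeros(T)
        for t in range(1, T):
            # all-pass recursion: y[t] = a*y[t-1] + x[t-1] - a*x[t]
            cur[t] = a * cur[t - 1] + prev[t - 1] - a * prev[t]
        X[:, k] = cur
        prev = cur
    return X
```

A model is then fit by ordinary least squares of the output on these few regressors, instead of on a long FIR tail, which is how the prior reduces the parameter count.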
On a randomly imperfect spherical cap pressurized by a random ...
African Journals Online (AJOL)
In this paper, we investigate a dynamical system in a random setting of dual randomness in space and time variables, in which both the imperfection of the structure and the load function are considered random, each with a statistical zero mean. The auto-covariance of the load is correlated as an exponentially decaying ...
Directory of Open Access Journals (Sweden)
Fortlage Laurie A
2007-11-01
Full Text Available Abstract Background The majority of individuals with type 2 diabetes do not exercise regularly. Pedometer-based walking interventions can help; however, pedometer-based interventions targeting only total daily accumulated steps might not yield the same health benefits as physical activity programs specifying a minimum duration and intensity of physical activity bouts. Methods This pilot randomized trial compared two goal-setting strategies: (1) lifestyle goals targeting total daily accumulated step counts and (2) structured goals targeting bout steps, defined as walking that lasts for 10 minutes or longer at a pace of at least 60 steps per minute. We sought to determine which goal-setting strategy was more effective at increasing bout steps. Participants were sedentary adults with type 2 diabetes. All participants wore enhanced pedometers with embedded USB ports, uploaded detailed, time-stamped step-count data to a website called Stepping Up to Health, and received automated step-count feedback, automatically calculated goals, and tailored motivational messages throughout the six-week intervention. Only the automated goal calculations and step-count feedback differed between the two groups. The primary outcome of interest was the increase in steps taken during the previously defined bouts of walking between baseline and the end of the intervention. Results Thirty-five participants were randomized and 30 (86%) completed the pilot study. Both groups significantly increased bout steps, but there was no statistically significant difference between groups. Among study completers, bout steps increased by 1921 ± 2729 steps a day. Those who received lifestyle goals were more satisfied with the intervention (p = 0.006) and wore the pedometer more often (p ... Conclusion In this six-week intervention, Lifestyle Goals group participants achieved increases in bout steps comparable to the ...
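The bout definition used in the structured-goal arm is straightforward to operationalize on minute-level step counts; a sketch (function and variable names are illustrative, not from the study's software):

```python
def bout_steps(steps_per_minute, min_minutes=10, min_rate=60):
    """Total steps accumulated in walking bouts.

    A bout is a run of at least `min_minutes` consecutive minutes with at
    least `min_rate` steps in each minute, per the study's definition.
    """
    total = 0
    run = []
    for count in list(steps_per_minute) + [0]:   # sentinel flushes last run
        if count >= min_rate:
            run.append(count)
        else:
            if len(run) >= min_minutes:
                total += sum(run)
            run = []
    return total
```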
International Nuclear Information System (INIS)
Tahir-Kheli, R.A.
1975-01-01
A few simple problems relating to random magnetic systems are presented. Translational symmetry, only on the macroscopic scale, is assumed for these systems. A random set of parameters, on the microscopic scale, for the various regions of these systems is also assumed. A probability distribution for randomness is obeyed. Knowledge of the form of these probability distributions is assumed in all cases.
Recognition of Prior Learning: The Participants' Perspective
Miguel, Marta C.; Ornelas, José H.; Maroco, João P.
2016-01-01
The current narrative on lifelong learning goes beyond formal education and training, including learning at work, in the family and in the community. Recognition of prior learning is a process of evaluation of those skills and knowledge acquired through life experience, allowing them to be formally recognized by the qualification systems. It is a…
Validity in assessment of prior learning
DEFF Research Database (Denmark)
Wahlgren, Bjarne; Aarkrog, Vibe
2015-01-01
, the article discusses the need for specific criteria for assessment. The reliability and validity of the assessment procedures depend on whether the competences are well-defined, and whether the teachers are adequately trained for the assessment procedures. Keywords: assessment, prior learning, adult...... education, vocational training, lifelong learning, validity...
PET reconstruction via nonlocal means induced prior.
Hou, Qingfeng; Huang, Jing; Bian, Zhaoying; Chen, Wufan; Ma, Jianhua
2015-01-01
The traditional Bayesian priors for maximum a posteriori (MAP) reconstruction methods usually incorporate local neighborhood interactions that penalize large deviations in parameter estimates for adjacent pixels; therefore, only local pixel differences are utilized, which limits their ability to penalize image roughness. To achieve high-quality PET image reconstruction, this study investigates a MAP reconstruction strategy incorporating a nonlocal means induced (NLMi) prior (NLMi-MAP), which makes use of the global similarity information of the image. The present NLMi prior approximates the derivative of the Gibbs energy function by an NLM filtering process. Specifically, the NLMi prior is obtained by subtracting the current image estimate from its NLM-filtered version and feeding the residual error back to the reconstruction filter to yield the new image estimate. We tested the present NLMi-MAP method with simulated and real PET datasets. Comparison studies with conventional filtered backprojection (FBP) and several iterative reconstruction methods clearly demonstrate that the NLMi-MAP method performs better in lowering noise, preserving image edges, and achieving a higher signal-to-noise ratio (SNR). Extensive experimental results show that the NLMi-MAP method outperforms the existing methods in terms of cross profile, noise reduction, SNR, root mean square error (RMSE) and correlation coefficient (CORR).
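The key step, subtracting the NLM-filtered image from the current estimate, can be sketched directly. The brute-force NLM below is illustrative only (practical implementations restrict and vectorize the patch search), and the parameter values are assumptions:

```python
import numpy as np

def nlm_filter(img, search=5, patch=3, h=0.1):
    """Plain nonlocal-means filter for small 2-D images (illustrative)."""
    pad = patch // 2
    r = search // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            p = padded[i:i + patch, j:j + patch]       # reference patch
            weights = acc = 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < rows and 0 <= jj < cols:
                        q = padded[ii:ii + patch, jj:jj + patch]
                        # weight decays with patch dissimilarity
                        w = np.exp(-np.sum((p - q) ** 2) / (h * h))
                        weights += w
                        acc += w * img[ii, jj]
            out[i, j] = acc / weights
    return out

def nlmi_prior_gradient(x):
    """NLMi prior: the Gibbs-energy derivative is approximated by the
    residual between the current estimate and its NLM-filtered version."""
    return x - nlm_filter(x)
```

Inside a MAP iteration this residual is fed back alongside the data-fidelity update, steering the estimate toward images that are self-similar globally rather than merely smooth locally.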
Prior learning assessment and quality assurance practice ...
African Journals Online (AJOL)
The use of RPL (Recognition of Prior Learning) in higher education to assess RPL candidates for admission into programmes of study met with a lot of criticism from faculty academics. Lecturers viewed the possibility of admitting large numbers of under-qualified adult learners, as a threat to the institution's reputation, or an ...
Action priors for learning domain invariances
CSIR Research Space (South Africa)
Rosman, Benjamin S
2015-04-01
Full Text Available behavioural invariances in the domain, by identifying actions to be prioritised in local contexts, invariant to task details. This information has the effect of greatly increasing the speed of solving new problems. We formalise this notion as action priors...
Evaluation of the macula prior to cataract surgery.
McKeague, Marta; Sharma, Priya; Ho, Allen C
2018-01-01
To describe recent evidence regarding methods of evaluation of retinal structure and function prior to cataract surgery. Studies in patients with cataract but no clinically detectable retinal disease have shown that routine use of optical coherence tomography (OCT) prior to cataract surgery can detect subtle macular disease, which may alter the course of treatment or lead to modification of consent. The routine use of OCT has been especially useful in patients being considered for advanced-technology intraocular lenses (IOLs) as subtle macular disease can be a contraindication to the use of these lenses. The cost-effectiveness of routine use of OCT prior to cataract surgery has not been studied. Other technologies that assess retinal function rather than structure, such as microperimetry and electroretinogram (ERG) need further study to determine whether they can predict retinal potential in cataract patients. There is growing evidence for the importance of more detailed retinal evaluation of cataract patients even with clinically normal exam. OCT has been the most established and studied method for retinal evaluation in cataract patients, but other technologies such as microperimetry and ERG are beginning to be studied.
Dror, Adi; Shemesh, Einav; Dayan, Natali
2014-01-01
The abilities of enzymes to catalyze reactions in nonnatural environments of organic solvents have opened new opportunities for enzyme-based industrial processes. However, the main drawback of such processes is that most enzymes have a limited stability in polar organic solvents. In this study, we employed protein engineering methods to generate a lipase for enhanced stability in methanol, which is important for biodiesel production. Two protein engineering approaches, random mutagenesis (error-prone PCR) and structure-guided consensus, were applied in parallel on an unexplored lipase gene from Geobacillus stearothermophilus T6. A high-throughput colorimetric screening assay was used to evaluate lipase activity after an incubation period in high methanol concentrations. Both protein engineering approaches were successful in producing variants with elevated half-life values in 70% methanol. The best variant of the random mutagenesis library, Q185L, exhibited 23-fold-improved stability, yet its methanolysis activity was decreased by one-half compared to the wild type. The best variant from the consensus library, H86Y/A269T, exhibited 66-fold-improved stability in methanol along with elevated thermostability (+4.3°C) and a 2-fold-higher fatty acid methyl ester yield from soybean oil. Based on in silico modeling, we suggest that the Q185L substitution facilitates a closed lid conformation that limits access for both the methanol and substrate excess into the active site. The enhanced stability of H86Y/A269T was a result of formation of new hydrogen bonds. These improved characteristics make this variant a potential biocatalyst for biodiesel production. PMID:24362426
Kim, Ji-Hoon; Kim, Young-Min; Park, Seong Heui; Ju, Eun A; Choi, Se Min; Hong, Tai Yong
2017-06-01
The aim of the study was to compare the educational impact of two postsimulation debriefing methods, focused and corrective feedback (FCF) versus Structured and Supported Debriefing (SSD), on team dynamics in simulation-based cardiac arrest team training. This was a pilot randomized controlled study conducted at a simulation center. Fourth-year medical students were randomly assigned to the FCF or SSD group, with each team composed of six students and a confederate. Each team participated in two simulations and the assigned debriefing (FCF or SSD) sessions and then underwent a test simulation. Two trained raters blindly assessed all of the recorded simulations using checklists. The primary outcome was the improvement in team dynamics scores between baseline and the test simulation. The secondary outcomes were improvements before and after training in team clinical performance scores and in self-assessed comprehension of and confidence in cardiac arrest management and team dynamics, as well as evaluations of the postsimulation debriefing intervention. In total, 95 students participated [FCF (8 teams, n = 47) and SSD (8 teams, n = 48)]. The SSD team dynamics score during the test simulation was higher than at baseline [baseline: 74.5 (65.9-80.9), test: 85.0 (71.9-87.6), P = 0.035]. However, there were no differences between the two groups in the improvement of team dynamics or team clinical performance scores (P = 0.328). In conclusion, there was no significant difference in the improvement in team dynamics scores during the test simulation compared with baseline between the SSD and FCF groups in simulation-based cardiac arrest team training of fourth-year Korean medical students.
Perichart-Perera, Otilia; Balas-Nakash, Margie; Muñoz-Manrique, Cinthya; Legorreta-Legorreta, Jennifer; Rodríguez-Cano, Ameyalli; Mier-Cabrera, Jennifer; Aguilera-Pérez, Jesús Rafael
2014-07-01
This study aims to compare the effects of a lifestyle intervention using a behavioral therapy (BT) approach with the effects of a cardioprotective structured hypocaloric diet on metabolic syndrome in Mexican postmenopausal women. This study is a randomized clinical trial (2006-2009) of Mexican postmenopausal women with metabolic syndrome (Adult Treatment Panel III criteria) who were recruited from the Postmenopause Clinic of the National Institute of Perinatology in Mexico City. Women were assigned to one of two groups: group 1 (structured hypocaloric diet; n = 63), energy restriction (-300 to -500 kcal/d) emphasizing cardioprotective dietary changes; and group 2 (BT; n = 55), goal setting, problem-solving, and stimulus control to achieve cardioprotective dietary and lifestyle recommendations. Metabolic syndrome prevalence, as well as weight, waist circumference, fat mass, and fasting biochemical markers (glucose and lipid profile), were measured at baseline and at 2, 4, and 6 months after the intervention. Metabolic syndrome risk (relative risk and absolute risk reduction), mean differences between groups, and logistic regression were evaluated using Statistical Package for the Social Sciences software, version 17.0. A total of 118 women were studied (mean [SD] age, 53.81 [6.43] y). No baseline differences were observed between groups. At the end of the study, a higher reduction in metabolic syndrome prevalence was observed in group 1 (-38.1%) compared with group 2 (-12.7%; relative risk, 0.237; 95% CI, 0.092-0.608; P = 0.003). The effect was maintained even when adjusted for age, hormone therapy and antihypertensive drug use. A cardioprotective structured hypocaloric diet is more effective than the BT approach in reducing metabolic syndrome after 6 months of intervention. Both strategies have positive effects on different individual cardiovascular risk factors.
2014-01-01
Background Laparoscopy training courses have been established in many centers worldwide to ensure adequate skill learning before performing operations on patients. Different training modalities and their combinations have been compared regarding training effects. Multimodality training combines different approaches for optimal training outcome. However, no standards currently exist for the number of trainees assigned per workplace. Methods This is a monocentric, open, three-arm randomized controlled trial. The participants are laparoscopically-naive medical students from Heidelberg University. After a standardized introduction to laparoscopic cholecystectomy (LC) with online learning modules, the participants perform a baseline test for basic skills and LC performance on a virtual reality (VR) trainer. A total of 100 students will be randomized into three study arms, in a 2:2:1 ratio. The intervention groups participate individually (Group 1) or in pairs (Group 2) in a standardized and structured multimodality training curriculum. Basic skills are trained on the box and VR trainers. Procedural skills and LC modules are trained on the VR trainer. The control group (Group C) does not receive training between tests. A post-test is performed to reassess basic skills and LC performance on the VR trainer. The performance of a cadaveric porcine LC is then measured as the primary outcome using standardized and validated ratings by blinded experts with the Objective Structured Assessment of Technical Skills. The Global Operative Assessment of Laparoscopic Surgical skills score and the time taken for completion are used as secondary outcome measures as well as the improvement of skills and VR LC performance between baseline and post-test. Cognitive tests and questionnaires are used to identify individual factors that might exert influence on training outcome. Discussion This study aims to assess whether workplaces in laparoscopy training courses for beginners should be used
Nickel, Felix; Jede, Felix; Minassian, Andreas; Gondan, Matthias; Hendrie, Jonathan D; Gehrig, Tobias; Linke, Georg R; Kadmon, Martina; Fischer, Lars; Müller-Stich, Beat P
2014-04-23
Laparoscopy training courses have been established in many centers worldwide to ensure adequate skill learning before performing operations on patients. Different training modalities and their combinations have been compared regarding training effects. Multimodality training combines different approaches for optimal training outcome. However, no standards currently exist for the number of trainees assigned per workplace. This is a monocentric, open, three-arm randomized controlled trial. The participants are laparoscopically-naive medical students from Heidelberg University. After a standardized introduction to laparoscopic cholecystectomy (LC) with online learning modules, the participants perform a baseline test for basic skills and LC performance on a virtual reality (VR) trainer. A total of 100 students will be randomized into three study arms, in a 2:2:1 ratio. The intervention groups participate individually (Group 1) or in pairs (Group 2) in a standardized and structured multimodality training curriculum. Basic skills are trained on the box and VR trainers. Procedural skills and LC modules are trained on the VR trainer. The control group (Group C) does not receive training between tests. A post-test is performed to reassess basic skills and LC performance on the VR trainer. The performance of a cadaveric porcine LC is then measured as the primary outcome using standardized and validated ratings by blinded experts with the Objective Structured Assessment of Technical Skills. The Global Operative Assessment of Laparoscopic Surgical skills score and the time taken for completion are used as secondary outcome measures as well as the improvement of skills and VR LC performance between baseline and post-test. Cognitive tests and questionnaires are used to identify individual factors that might exert influence on training outcome. This study aims to assess whether workplaces in laparoscopy training courses for beginners should be used by one trainee or two trainees
Feedback Both Helps and Hinders Learning: The Causal Role of Prior Knowledge
Fyfe, Emily R.; Rittle-Johnson, Bethany
2016-01-01
Feedback can be a powerful learning tool, but its effects vary widely. Research has suggested that learners' prior knowledge may moderate the effects of feedback; however, no causal link has been established. In Experiment 1, we randomly assigned elementary school children (N = 108) to a condition based on a crossing of 2 factors: induced strategy…
Comparing nonparametric Bayesian tree priors for clonal reconstruction of tumors.
Deshwar, Amit G; Vembu, Shankar; Morris, Quaid
2015-01-01
Statistical machine learning methods, especially nonparametric Bayesian methods, have become increasingly popular to infer clonal population structure of tumors. Here we describe the treeCRP, an extension of the Chinese restaurant process (CRP), a popular construction used in nonparametric mixture models, to infer the phylogeny and genotype of major subclonal lineages represented in the population of cancer cells. We also propose new split-merge updates tailored to the subclonal reconstruction problem that improve the mixing time of Markov chains. In comparisons with the tree-structured stick breaking (TSSB) prior used in PhyloSub, we demonstrate superior mixing and running time using the treeCRP with our new split-merge procedures. We also show that given the same number of samples, TSSB and treeCRP have similar ability to recover the subclonal structure of a tumor…
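The seating rule underlying the CRP (which the treeCRP extends) is simple enough to sketch: customer i joins an existing table with probability proportional to its current occupancy, or opens a new table with probability proportional to a concentration parameter alpha. A minimal illustration of the generic CRP only, not the treeCRP itself; the function name is hypothetical:

```python
import random

def sample_crp(n, alpha, rng):
    """Draw table assignments for n customers from a Chinese restaurant process."""
    counts = []        # counts[k] = number of customers already seated at table k
    assignments = []
    for i in range(n):
        # customer i opens a new table with probability alpha / (i + alpha),
        # or joins existing table k with probability counts[k] / (i + alpha)
        r = rng.uniform(0, i + alpha)
        if r < alpha or not counts:
            assignments.append(len(counts))   # open a new table
            counts.append(1)
        else:
            r -= alpha
            k = 0
            while k < len(counts) - 1 and r >= counts[k]:
                r -= counts[k]
                k += 1
            assignments.append(k)
            counts[k] += 1
    return assignments
```

The "rich get richer" dynamic of this rule is what makes the number of clusters (here, subclonal lineages) grow only logarithmically with the number of observations.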
Prior storm experience moderates water surge perception and risk.
Directory of Open Access Journals (Sweden)
Gregory D Webster
BACKGROUND: How accurately do people perceive extreme water speeds and how does their perception affect perceived risk? Prior research has focused on the characteristics of moving water that can reduce human stability or balance. The current research presents the first experiment on people's perceptions of risk and moving water at different speeds and depths. METHODS: Using a randomized within-person 2 (water depth: 0.45, 0.90 m) × 3 (water speed: 0.4, 0.8, 1.2 m/s) experiment, we immersed 76 people in moving water and asked them to estimate water speed and the risk they felt. RESULTS: Multilevel modeling showed that people increasingly overestimated water speeds as actual water speeds increased or as water depth increased. Water speed perceptions mediated the direct positive relationship between actual water speeds and perceptions of risk; the faster the moving water, the greater the perceived risk. Participants' prior experience with rip currents and tropical cyclones moderated the strength of the actual-perceived water speed relationship; consequently, mediation was stronger for people who had experienced no rip currents or fewer storms. CONCLUSIONS: These findings provide a clearer understanding of water speed and risk perception, which may help communicate the risks associated with anticipated floods and tropical cyclones.
Random vibrations theory and practice
Wirsching, Paul H; Ortiz, Keith
1995-01-01
Random Vibrations: Theory and Practice covers the theory and analysis of mechanical and structural systems undergoing random oscillations due to any number of phenomena— from engine noise, turbulent flow, and acoustic noise to wind, ocean waves, earthquakes, and rough pavement. For systems operating in such environments, a random vibration analysis is essential to the safety and reliability of the system. By far the most comprehensive text available on random vibrations, Random Vibrations: Theory and Practice is designed for readers who are new to the subject as well as those who are familiar with the fundamentals and wish to study a particular topic or use the text as an authoritative reference. It is divided into three major sections: fundamental background, random vibration development and applications to design, and random signal analysis. Introductory chapters cover topics in probability, statistics, and random processes that prepare the reader for the development of the theory of random vibrations a...
Zhan, Zongqian; Wang, Chendong; Wang, Xin; Liu, Yi
2018-01-01
On the basis of today's popular virtual reality and scientific visualization, three-dimensional (3-D) reconstruction is widely used in disaster relief, virtual shopping, reconstruction of cultural relics, etc. In the traditional incremental structure from motion (incremental SFM) method, the time cost of the matching is one of the main factors restricting the popularization of this method. To make the whole matching process more efficient, we propose a preprocessing method before the matching process: (1) we first construct a random k-d forest with the large-scale scale-invariant feature transform features in the images and combine this with the pHash method to obtain a value of relatedness, (2) we then construct a connected weighted graph based on the relatedness value, and (3) we finally obtain a planned sequence of adding images according to the principle of the minimum spanning tree. On this basis, we attempt to thin the minimum spanning tree to reduce the number of matchings and ensure that the images are well distributed. The experimental results show a great reduction in the number of matchings with enough object points, with only a small influence on the inner stability, which proves that this method can quickly and reliably improve the efficiency of the SFM method with unordered multiview images in complex scenes.
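The final step of the preprocessing described above reduces to a standard minimum spanning tree computation over the connected weighted graph of image relatedness. A minimal sketch using Kruskal's algorithm with union-find; the relatedness weights would in practice come from the k-d forest/pHash stage, so the edge list here is purely illustrative (lower weight = more related):

```python
def kruskal_mst(n, edges):
    """Minimum spanning tree over n images.
    edges: list of (weight, i, j); the MST keeps the most related image pairs,
    which then defines the planned sequence of matchings."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, i, j in sorted(edges):          # consider strongest links first
        ri, rj = find(i), find(j)
        if ri != rj:                       # skip edges that would form a cycle
            parent[ri] = rj
            mst.append((i, j))
    return mst
```

Thinning this tree further, as the authors propose, trades a few object points for an even smaller number of pairwise matchings.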
Maximum entropy reconstruction of spin densities involving non uniform prior
International Nuclear Information System (INIS)
Schweizer, J.; Ressouche, E.; Papoular, R.J.; Zheludev, A.I.
1997-01-01
Diffraction experiments give microscopic information on structures in crystals. A method which uses the concept of maximum entropy (MaxEnt) appears to be a formidable improvement in the treatment of diffraction data. This method is based on a Bayesian approach: among all the maps compatible with the experimental data, it selects the one which has the highest prior (intrinsic) probability. Considering that all the points of the map are equally probable, this probability (flat prior) is expressed via the Boltzmann entropy of the distribution. This method has been used for the reconstruction of charge densities from X-ray data, for maps of nuclear densities from unpolarized neutron data, as well as for distributions of spin density. The density maps obtained by this method, as compared to those resulting from the usual inverse Fourier transformation, are tremendously improved. In particular, any substantial deviation from the background is really contained in the data, as it costs entropy compared to a map that would ignore such features. However, in most cases, before the measurements are performed, some knowledge exists about the distribution which is investigated. It can range from simple information on the type of scattering electrons to an elaborate theoretical model. In these cases, the uniform prior, which considers all the different pixels as equally likely, is too weak a requirement and has to be replaced. In a rigorous Bayesian analysis, Skilling has shown that prior knowledge can be encoded into the maximum entropy formalism through a model m(r), via a new definition for the entropy given in this paper. In the absence of any data, the maximum of the entropy functional is reached for ρ(r) = m(r). Any substantial departure from the model, observed in the final map, is really contained in the data as, with the new definition, it costs entropy. This paper presents illustrations of model testing.
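Skilling's generalised entropy relative to a model m, as referred to above, is commonly written in pixel form (notation assumed here, not quoted from the paper):

```latex
S(\rho; m) = \sum_i \left[\, \rho_i - m_i - \rho_i \ln\frac{\rho_i}{m_i} \,\right]
```

Since \(\partial S / \partial \rho_i = -\ln(\rho_i / m_i)\), the functional is maximised (with \(S = 0\)) exactly at \(\rho_i = m_i\), so in the absence of data the reconstruction reproduces the model, and any departure from it costs entropy, as described.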
Tso, P; Lee, T; DeMichele, S J
2001-08-01
Previously we demonstrated that the digestion, absorption and lymphatic transport of lipid and key essential fatty acids (EFA) from randomly interesterified fish oil/medium-chain structured triglycerides (STG) were significantly higher than an equivalent physical mixture (PM) in a normal lymph fistula rat model and in a rat model of lipid malabsorption caused by ischemia/reperfusion (I/R) injury. The goals of this study were to further explore the potential absorptive benefits of STG by comparing the intestinal absorption and lymphatic transport of tocopherol and retinol when delivered gastrically with either STG or PM under normal conditions and after I/R injury to the small bowel. Food-deprived male Sprague-Dawley rats were randomly assigned to two treatments (sham controls or I/R). Under halothane anesthesia, the superior mesenteric artery (SMA) was occluded for 20 min and then reperfused in I/R rats. The SMA was isolated but not occluded in control rats. In both groups, the mesenteric lymph duct was cannulated and a gastric tube was inserted. Each treatment group received 1 mL of the fish oil/MCT STG or PM (7 rats/group) along with (14)C-alpha-tocopherol and (3)H-retinol through the gastric tube followed by an infusion of PBS at 3 mL/h for 8 h. Lymph was collected hourly for 8 h. Under steady-state conditions, the amount of (14)C-alpha-tocopherol and (3)H-retinol transported into lymph was significantly higher in the STG-fed rats compared with those fed PM in both control and I/R groups. In addition, control and I/R rats given STG had earlier steady-state outputs of (14)C-alpha-tocopherol and (3)H-retinol and maintained approximately 30% higher outputs in lymph throughout the 8-h lymph collection period compared with rats given the PM. We conclude that STG provides the opportunity to potentiate improved absorption of fat-soluble vitamins under normal and malabsorptive states.
Deift, Percy
2009-01-01
This book features a unified derivation of the mathematical theory of the three classical types of invariant random matrix ensembles-orthogonal, unitary, and symplectic. The authors follow the approach of Tracy and Widom, but the exposition here contains a substantial amount of additional material, in particular, facts from functional analysis and the theory of Pfaffians. The main result in the book is a proof of universality for orthogonal and symplectic ensembles corresponding to generalized Gaussian type weights following the authors' prior work. New, quantitative error estimates are derived.
Vanmarcke, Erik
1983-03-01
Random variation over space and time is one of the few attributes that might safely be predicted as characterizing almost any given complex system. Random fields or "distributed disorder systems" confront astronomers, physicists, geologists, meteorologists, biologists, and other natural scientists. They appear in the artifacts developed by electrical, mechanical, civil, and other engineers. They even underlie the processes of social and economic change. The purpose of this book is to bring together existing and new methodologies of random field theory and indicate how they can be applied to these diverse areas where a "deterministic treatment is inefficient and conventional statistics insufficient." Many new results and methods are included. After outlining the extent and characteristics of the random field approach, the book reviews the classical theory of multidimensional random processes and introduces basic probability concepts and methods in the random field context. It next gives a concise account of the second-order analysis of homogeneous random fields, in both the space-time domain and the wave number-frequency domain. This is followed by a chapter on spectral moments and related measures of disorder and on level excursions and extremes of Gaussian and related random fields. After developing a new framework of analysis based on local averages of one-, two-, and n-dimensional processes, the book concludes with a chapter discussing ramifications in the important areas of estimation, prediction, and control. The mathematical prerequisite has been held to basic college-level calculus.
Genome position specific priors for genomic prediction
DEFF Research Database (Denmark)
Brøndum, Rasmus Froberg; Su, Guosheng; Lund, Mogens Sandø
2012-01-01
causal mutation is different between the populations but affects the same gene. Proportions of a four-distribution mixture for SNP effects in segments of fixed size along the genome are derived from one population and set as location-specific prior proportions of distributions of SNP effects for the target population. The model was tested using dairy cattle populations of different breeds: 540 Australian Jersey bulls, 2297 Australian Holstein bulls and 5214 Nordic Holstein bulls. The traits studied were protein, fat and milk yield. Genotypic data were Illumina 777K SNPs, real or imputed. Results...
Models for Validation of Prior Learning (VPL)
DEFF Research Database (Denmark)
Ehlers, Søren
The national policies for the education/training of adults are in the 21st century highly influenced by proposals which are formulated and promoted by the European Union (EU) as well as other transnational players, and this shift in policy making has consequences. One is that ideas which in the past would have been categorized as utopian can become realpolitik. In Europe, Validation of Prior Learning (VPL) was mainly regarded as utopian, while universities in the United States of America (USA) were developing ways to grant credit to students coming with experience from working life...
Random walks, random fields, and disordered systems
Černý, Jiří; Kotecký, Roman
2015-01-01
Focusing on the mathematics that lies at the intersection of probability theory, statistical physics, combinatorics and computer science, this volume collects together lecture notes on recent developments in the area. The common ground of these subjects is perhaps best described by the three terms in the title: Random Walks, Random Fields and Disordered Systems. The specific topics covered include a study of Branching Brownian Motion from the perspective of disordered (spin-glass) systems, a detailed analysis of weakly self-avoiding random walks in four spatial dimensions via methods of field theory and the renormalization group, a study of phase transitions in disordered discrete structures using a rigorous version of the cavity method, a survey of recent work on interacting polymers in the ballisticity regime and, finally, a treatise on two-dimensional loop-soup models and their connection to conformally invariant systems and the Gaussian Free Field. The notes are aimed at early graduate students with a mod...
Prior exercise and antioxidant supplementation: effect on oxidative stress and muscle injury
Directory of Open Access Journals (Sweden)
Schilling Brian K
2007-10-01
Background Both acute bouts of prior exercise (preconditioning) and antioxidant nutrients have been used in an attempt to attenuate muscle injury or oxidative stress in response to resistance exercise. However, most studies have focused on untrained participants rather than on athletes. The purpose of this work was to determine the independent and combined effects of antioxidant supplementation (vitamin C + mixed tocopherols/tocotrienols) and prior eccentric exercise in attenuating markers of skeletal muscle injury and oxidative stress in resistance-trained men. Methods Thirty-six men were randomly assigned to: no prior exercise + placebo; no prior exercise + antioxidant; prior exercise + placebo; prior exercise + antioxidant. Markers of muscle/cell injury (muscle performance, muscle soreness, C-reactive protein, and creatine kinase activity), as well as oxidative stress (blood protein carbonyls and peroxides), were measured before and through 48 hours of exercise recovery. Results No group by time interactions were noted for any variable (P > 0.05). Time main effects were noted for creatine kinase activity, muscle soreness, maximal isometric force and peak velocity (P < 0.05). Conclusion There appears to be no independent or combined effect of a prior bout of eccentric exercise or antioxidant supplementation as used here on markers of muscle injury in resistance-trained men. Moreover, eccentric exercise as used in the present study results in minimal blood oxidative stress in resistance-trained men. Hence, antioxidant supplementation for the purpose of minimizing blood oxidative stress in relation to eccentric exercise appears unnecessary in this population.
Depth image enhancement using perceptual texture priors
Bang, Duhyeon; Shim, Hyunjung
2015-03-01
A depth camera is widely used in various applications because it provides a depth image of the scene in real time. However, due to its limited power consumption, the depth camera suffers from severe noise and cannot provide high-quality 3D data. Although a smoothness prior is often employed to suppress the depth noise, it discards geometric details, degrading the distance resolution and hindering realism in 3D content. In this paper, we propose a perceptual depth image enhancement technique that automatically recovers the depth details of various textures, using a statistical framework inspired by the human mechanism of perceiving surface details through texture priors. We construct a database of high-quality normals. Based on recent studies in human visual perception (HVP), we select pattern density as the primary feature to classify textures. Based on the classification results, we match and substitute the noisy input normals with high-quality normals from the database. As a result, our method provides a high-quality depth image that preserves surface details. We expect our work to be effective for enhancing the details of depth images from 3D sensors and for providing a high-fidelity virtual reality experience.
Autonomous Byte Stream Randomizer
Paloulian, George K.; Woo, Simon S.; Chow, Edward T.
2013-01-01
Net-centric networking environments are often faced with limited resources and must utilize bandwidth as efficiently as possible. In networking environments that span wide areas, the data transmission has to be efficient without any redundant or exuberant metadata. The Autonomous Byte Stream Randomizer software provides an extra level of security on top of existing data encryption methods. Randomizing the data's byte stream adds an extra layer to existing data protection methods, thus making it harder for an attacker to decrypt protected data. Based on a generated cryptographically secure random seed, a random sequence of numbers is used to intelligently and efficiently swap the organization of bytes in data using the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Swapping bytes and reorganizing the crucial structure of the byte data renders the data file unreadable and leaves the data in a deconstructed state. This deconstruction adds an extra level of security, requiring the byte stream to be reconstructed with the random seed in order to be readable. Once the data byte stream has been randomized, the software enables the data to be distributed to N nodes in an environment. Each piece of the data in randomized and distributed form is a separate entity, unreadable in its own right, but when combined with all N pieces it can be reconstructed back into one. Reconstruction requires possession of the key used for randomizing the bytes, leading to the generation of the same cryptographically secure random sequence of numbers used to randomize the data. This software is a cornerstone capability possessing the ability to generate the same cryptographically secure sequence on different machines and time intervals, thus allowing this software to be used more heavily in net-centric environments where data transfer bandwidth is limited.
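The seeded, in-place Fisher-Yates shuffle and its seed-driven inversion described above can be sketched as follows. This is an illustration only: the function names are hypothetical, and random.Random stands in for the cryptographically secure seeded generator the actual software would require:

```python
import random

def randomize_bytes(data: bytes, seed: int) -> bytes:
    """Shuffle the byte stream in place with a seeded Fisher-Yates shuffle."""
    buf = bytearray(data)
    rng = random.Random(seed)          # stand-in for a CSPRNG seeded by the key
    for i in range(len(buf) - 1, 0, -1):
        j = rng.randint(0, i)          # unbiased index in [0, i]
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)

def restore_bytes(data: bytes, seed: int) -> bytes:
    """Invert the shuffle: regenerate the swap sequence, then undo it in reverse."""
    rng = random.Random(seed)
    swaps = [(i, rng.randint(0, i)) for i in range(len(data) - 1, 0, -1)]
    buf = bytearray(data)
    for i, j in reversed(swaps):       # each swap is its own inverse
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)
```

Because every swap is self-inverse, replaying the same seed-derived sequence in reverse order restores the original stream exactly, which is why possession of the seed (key) is both necessary and sufficient for reconstruction.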
Kauf, Teresa L; McKinnon, Peggy; Corey, G Ralph; Bedolla, John; Riska, Paul F; Sims, Matthew; Jauregui-Peredo, Luis; Friedman, Bruce; Hoehns, James D; Mercier, Renée-Claude; Garcia-Diaz, Julia; Brenneman, Susan K; Ng, David; Lodise, Thomas
2015-11-07
Treatment of complicated skin and skin structure infection (cSSSI) places a tremendous burden on the health care system. Understanding relative resource utilization associated with different antimicrobials is important for decision making by patients, health care providers, and payers. The authors conducted an open-label, pragmatic, randomized (1:1) clinical study (N = 250) to compare the effectiveness of daptomycin with that of vancomycin for treatment of patients hospitalized with cSSSI caused by suspected or documented methicillin-resistant Staphylococcus aureus infection. The primary study end point was infection-related length of stay (IRLOS). Secondary end points included health care resource utilization, cost, clinical response, and patient-reported outcomes. Patient assessments were performed daily until the end of antibiotic therapy or until hospital discharge, and at 14 days and 30 days after discharge. No difference was found for IRLOS, total LOS, and total inpatient cost between cohorts. Hospital LOS contributed 85.9% to the total hospitalization cost, compared with 6.4% for drug costs. Daptomycin showed a nonsignificant trend toward a higher clinical success rate, compared with vancomycin, at treatment days 2 and 3. In the multivariate analyses, vancomycin was associated with a lower likelihood of day 2 clinical success (odds ratio [OR] = 0.498, 95% confidence interval [CI], 0.249-0.997; P < 0.05). This study did not provide conclusive evidence of the superiority of one treatment over the other in terms of clinical, economic, or patient outcomes. The data suggest that physician and patient preference, rather than drug acquisition cost, should be the primary driver of initial antibiotic selection for hospitalized patients with cSSSI. ClinicalTrials.gov: NCT01419184 (Date: August 16, 2011).
2012-01-01
Background Complicated skin and skin structure infections (cSSSIs) frequently result in hospitalization with significant morbidity and mortality. Methods In this phase 3b/4 parallel, randomized, open-label, comparative study, 531 subjects with cSSSI received tigecycline (100 mg initial dose, then 50 mg intravenously every 12 hrs) or ampicillin-sulbactam 1.5-3 g IV every 6 hrs or amoxicillin-clavulanate 1.2 g IV every 6-8 hrs. Vancomycin could be added at the discretion of the investigator to the comparator arm if methicillin-resistant Staphylococcus aureus (MRSA) was confirmed or suspected within 72 hrs of enrollment. The primary endpoint was clinical response in the clinically evaluable (CE) population at the test-of-cure (TOC) visit. Microbiologic response and safety were also assessed. The modified intent-to-treat (mITT) population comprised 531 subjects (tigecycline, n = 268; comparator, n = 263) and 405 were clinically evaluable (tigecycline, n = 209; comparator, n = 196). Results In the CE population, 162/209 (77.5%) tigecycline-treated subjects and 152/196 (77.6%) comparator-treated subjects were clinically cured (difference 0.0; 95% confidence interval [CI]: -8.7, 8.6). The eradication rates at the subject level for the microbiologically evaluable (ME) population were 79.2% in the tigecycline treatment group and 76.8% in the comparator treatment group (difference 2.4; 95% CI: -9.6, 14.4) at the TOC assessment. Nausea, vomiting, and diarrhea rates were higher in the tigecycline group. Conclusions Tigecycline was generally safe and effective in the treatment of cSSSIs. Trial registration ClinicalTrials.gov NCT00368537 PMID:23145952
Telles, Shirley; Bhardwaj, Abhishek K.; Gupta, Ram K.; Sharma, Sachin K.; Monro, Robin; Balkrishna, Acharya
2016-01-01
Background The present study aimed at determining whether 12 weeks of yoga practice in patients with chronic LBP and MRI-based degenerative changes would result in differences in: (i) self-reported pain, anxiety, and spinal flexibility; and (ii) the structure of the discs or vertebrae. Material/Methods Sixty-two persons with MRI-proven degenerative intervertebral discs (group mean ± S.D., 36.2 ± 6.4 years; 30 females) were randomly assigned to yoga and control groups. However, testing was conducted on only 40 subjects, so only their data are included in this study. The assessments were: self-reported pain, state anxiety, spinal flexibility, and MRI of the lumbosacral spine, performed using a 1.5 Tesla system with a spinal surface coil. The yoga group was taught light exercises, physical postures, breathing techniques, and yoga relaxation techniques for 1 hour daily for 3 months. No intervention was given to the control group except for routine medical care. A repeated-measures analysis of variance (ANOVA) with post hoc analyses (Bonferroni-adjusted) was used. The Ethics Committee of Patanjali Research Foundation had approved the study, which had been registered in the Clinical Trials Registry of India (CTRI/2012/11/003094). Results The yoga group showed a significant reduction in self-reported pain and state anxiety in a before/after comparison at 12 weeks. A few patients in both groups showed changes in the discs and vertebrae at post-intervention assessment. Conclusions Within 12 weeks, yoga practice reduced pain and state anxiety but did not alter MRI-proven changes in the intervertebral discs and in the vertebrae.
Piaget, Jean
Provided is an overview of the analytical method known as structuralism. The first chapter discusses the three key components of the concept of a structure: the view of a system as a whole instead of so many parts; the study of the transformations in the system; and the fact that these transformations never lead beyond the system but always…
Directory of Open Access Journals (Sweden)
Robert Šket
We explored the assembly of intestinal microbiota in healthy male participants during the randomized crossover design of run-in (5-day) and experimental phases (21-day normoxic bed rest (NBR), hypoxic bed rest (HBR) and hypoxic ambulation (HAmb)) in a strictly controlled laboratory environment, with balanced fluid and dietary intakes, controlled circadian rhythm, ambient microbial burden and 24/7 medical surveillance. The fraction of inspired O2 (FiO2) and partial pressure of inspired O2 (PiO2) were 0.209 and 133.1 ± 0.3 mmHg for NBR and 0.141 ± 0.004 and 90.0 ± 0.4 mmHg for both hypoxic variants (HBR and HAmb; ~4000 m simulated altitude), respectively. A number of parameters linked to the intestinal environment, such as defecation frequency, intestinal electrical conductivity (IEC), sterol and polyphenol content and diversity, indole, and the aromaticity and spectral characteristics of dissolved organic matter (DOM), were measured (64 variables). The structure and diversity of the bacterial microbial community were assessed using 16S rRNA amplicon sequencing. Inactivity negatively affected frequency of defecation and in combination with hypoxia increased IEC (p < 0.05). In contrast, sterol and polyphenol diversity and content, various characteristics of DOM and aromatic compounds, and the structure and diversity of the bacterial microbial community were not significantly affected over time. A new in-house PlanHab database was established to integrate all measured variables on host physiology, diet, experiment, and immune and metabolic markers (n = 231). The observed progressive decrease in defecation frequency and concomitant increase in IEC suggested that the transition from a healthy physiological state towards the developed symptoms of low-magnitude obesity-related syndromes was dose-dependent on the extent of time spent in inactivity and preceded or took place in the absence of significant rearrangements in the bacterial microbial community. Species B. thetaiotaomicron, B. fragilis, B
International Nuclear Information System (INIS)
Lumay, G; Vandewalle, N
2007-01-01
We present an experimental protocol that allows one to tune the packing fraction η of a random pile of ferromagnetic spheres from a value close to the lower limit of random loose packing, η_RLP ≅ 0.56, to the upper limit of random close packing, η_RCP ≅ 0.64. This broad range of packing fraction values is obtained under normal gravity in air, by adjusting a magnetic cohesion between the grains during the formation of the pile. Attractive and repulsive magnetic interactions are found to strongly affect the internal structure and the stability of sphere packing. After the formation of the pile, the induced cohesion is decreased continuously along a linear decreasing ramp. The controlled collapse of the pile is found to generate various and reproducible values of the random packing fraction η.
Extended Linear Models with Gaussian Priors
DEFF Research Database (Denmark)
Quinonero, Joaquin
2002-01-01
In extended linear models the input space is projected onto a feature space by means of an arbitrary non-linear transformation. A linear model is then applied to the feature space to construct the model output. The dimension of the feature space can be very large, or even infinite, giving the model great flexibility. Support Vector Machines (SVMs) and Gaussian processes are two examples of such models. In this technical report I present a model in which the dimension of the feature space remains finite, and where a Bayesian approach is used to train the model with Gaussian priors on the parameters. The Relevance Vector Machine, introduced by Tipping, is a particular case of such a model. I give the detailed derivations of the expectation-maximisation (EM) algorithm used in the training. These derivations are not found in the literature, and might be helpful for newcomers.
Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors
Simpson, Daniel
2017-04-06
In this paper, we introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys' priors, are designed to support Occam's razor and seem to have excellent robustness properties, all of which are highly desirable and allow us to use this approach to define default prior distributions. Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations.
Penalising Model Component Complexity: A Principled, Practical Approach to Constructing Priors
Simpson, Daniel; Rue, Haavard; Riebler, Andrea; Martins, Thiago G.; Sø rbye, Sigrunn H.
2017-01-01
In this paper, we introduce a new concept for constructing prior distributions. We exploit the natural nested structure inherent to many model components, which defines the model component to be a flexible extension of a base model. Proper priors are defined to penalise the complexity induced by deviating from the simpler base model and are formulated after the input of a user-defined scaling parameter for that model component, both in the univariate and the multivariate case. These priors are invariant to reparameterisations, have a natural connection to Jeffreys' priors, are designed to support Occam's razor and seem to have excellent robustness properties, all of which are highly desirable and allow us to use this approach to define default prior distributions. Through examples and theoretical results, we demonstrate the appropriateness of this approach and how it can be applied in various situations.
Leow, Li-Ann; de Rugy, Aymar; Marinovic, Welber; Riek, Stephan; Carroll, Timothy J
2016-10-01
When we move, perturbations to our body or the environment can elicit discrepancies between predicted and actual outcomes. We readily adapt movements to compensate for such discrepancies, and the retention of this learning is evident as savings, or faster readaptation to a previously encountered perturbation. The mechanistic processes contributing to savings, or even the necessary conditions for savings, are not fully understood. One theory suggests that savings requires increased sensitivity to previously experienced errors: when perturbations evoke a sequence of correlated errors, we increase our sensitivity to the errors experienced, which subsequently improves error correction (Herzfeld et al. 2014). An alternative theory suggests that a memory of actions is necessary for savings: when an action becomes associated with successful target acquisition through repetition, that action is more rapidly retrieved at subsequent learning (Huang et al. 2011). In the present study, to better understand the necessary conditions for savings, we tested how savings is affected by prior experience of similar errors and prior repetition of the action required to eliminate errors using a factorial design. Prior experience of errors induced by a visuomotor rotation in the savings block was either prevented at initial learning by gradually removing an oppositely signed perturbation or enforced by abruptly removing the perturbation. Prior repetition of the action required to eliminate errors in the savings block was either deprived or enforced by manipulating target location in preceding trials. The data suggest that prior experience of errors is both necessary and sufficient for savings, whereas prior repetition of a successful action is neither necessary nor sufficient for savings. Copyright © 2016 the American Physiological Society.
International Nuclear Information System (INIS)
Yeong, C.L.; Torquato, S.
1998-01-01
We formulate a procedure to reconstruct the structure of general random heterogeneous media from limited morphological information by extending the methodology of Rintoul and Torquato [J. Colloid Interface Sci. 186, 467 (1997)] developed for dispersions. The procedure has the advantages that it is simple to implement and generally applicable to multidimensional, multiphase, and anisotropic structures. Furthermore, an extremely useful feature is that it can incorporate any type and number of correlation functions in order to provide as much morphological information as is necessary for accurate reconstruction. We consider a variety of one- and two-dimensional reconstructions, including periodic and random arrays of rods, various distributions of disks, Debye random media, and a Fontainebleau sandstone sample. We also use our algorithm to construct heterogeneous media from specified hypothetical correlation functions, including an exponentially damped, oscillating function as well as physically unrealizable ones. copyright 1998 The American Physical Society
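The reconstruction procedure described above can be illustrated in one dimension: anneal pixel swaps until the two-point correlation function of a trial image matches a target. This is only a toy version of the methodology; the target pattern, temperature schedule, and step count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_point(img):
    # S2(r): probability that two points a distance r apart both lie in phase 1
    return np.array([np.mean(img * np.roll(img, r)) for r in range(img.size // 2)])

# Target morphology: 1-D stripes of width 8 (volume fraction 0.5)
target = (np.arange(64) // 8 % 2).astype(float)
S2_target = two_point(target)

def energy(img):
    # Mismatch between the trial image's correlation function and the target's
    return np.sum((two_point(img) - S2_target) ** 2)

# Start from a random image with the same volume fraction, then anneal:
# swap a 1-pixel with a 0-pixel and accept or reject by the energy change
img = rng.permutation(target)
E = E0 = energy(img)
T = 1e-3
for _ in range(3000):
    i = rng.choice(np.flatnonzero(img == 1))
    j = rng.choice(np.flatnonzero(img == 0))
    img[i], img[j] = 0.0, 1.0
    E_new = energy(img)
    if E_new < E or rng.random() < np.exp((E - E_new) / T):
        E = E_new                     # accept the swap
    else:
        img[i], img[j] = 1.0, 0.0     # reject: undo the swap
    T *= 0.999

print(E0, E)  # the correlation mismatch should drop substantially
```

Because swaps preserve the number of phase-1 pixels, the volume fraction is matched exactly throughout; only higher-order structure is optimised.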
Self-prior strategy for organ reconstruction in fluorescence molecular tomography.
Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen
2017-10-01
The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and to overcome the high cost and ionizing radiation caused by the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum with the assumption that the regions with higher fluorescence concentration have larger energy intensity, then the cost function of the inverse problem is modified by the self-prior information, and lastly an iterative Laplacian regularization algorithm is conducted to solve the updated inverse problem and obtains the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic study can be aided by this strategy.
Directory of Open Access Journals (Sweden)
L. I. Alekseeva
2014-01-01
Full Text Available Objective. To evaluate the symptom- and structure-modifying effect of Alflutop compared to placebo (PL) in patients with knee osteoarthritis (OA). Material and methods. The study included 90 patients with knee OA (according to the criteria of the Russian Association of Rheumatologists) at stage 2–3 (according to the Kellgren-Lawrence scale); pain score when walking ≥ 40 mm (assessed using the visual analog scale). All the patients provided an informed consent. The patients were randomly divided into two groups: group 1 (n=45) received an intramuscular injection of 1 mL Alflutop for 20 days with 6-month intervals for 2 years (a total of 4 courses for 2 years); group 2 (n=45) received an injection of PL (isotonic sodium chloride solution) in the same way. Ibuprofen at a dose of 600–1200 mg/day was administered as concomitant therapy. To evaluate the structure-modifying effect of Alflutop, X-ray of the knee joint was performed at the beginning and end of the study; the level of biochemical markers (CTX-II and COMP) was determined at the beginning, after 3 months, and at the end of the study. A statistical analysis was performed using the Statistica 10 software package. Results. After the 2-year follow-up, a statistically significant negative trend was detected less frequently in the group of patients treated with Alflutop compared to the PL group (6.1 and 38.4%, respectively). A statistically significant delay in joint space narrowing was observed in patients who received Alflutop in contrast to patients who received PL (the numerical score of the joint space, the Wilcoxon test; p=0.0003). An increase in osteophyte size was observed in 72% of the patients receiving PL, and only in 27% of the patients receiving Alflutop (medial and lateral osteophytes of the femoral bone, the Wilcoxon test; p=0.0078; medial and lateral osteophytes of the shin bone, the Wilcoxon test; p=0.0001 and p=0.0039, respectively). Augmentation of subchondral
Making Connections in Math: Activating a Prior Knowledge Analogue Matters for Learning
Sidney, Pooja G.; Alibali, Martha W.
2015-01-01
This study investigated analogical transfer of conceptual structure from a prior-knowledge domain to support learning in a new domain of mathematics: division by fractions. Before a procedural lesson on division by fractions, fifth and sixth graders practiced with a surface analogue (other operations on fractions) or a structural analogue (whole…
Putting Priors in Mixture Density Mercer Kernels
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
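A hedged sketch of the kernel construction above: if a mixture density with posterior responsibilities P(k|x) has been learned, a symmetric positive semi-definite kernel can be formed as K(x, y) = Σ_k P(k|x)P(k|y). The hand-set two-component mixture below stands in for the EM-trained density used in the paper; all data and parameters are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two clusters standing in for two object classes
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
               rng.normal(4.0, 1.0, (50, 2))])

# Hand-set two-component mixture (the paper learns the density with EM);
# equal weights, unit-variance isotropic Gaussian components
means = np.array([[0.0, 0.0], [4.0, 4.0]])
d2 = ((X[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
p = np.exp(-0.5 * d2)
R = p / p.sum(axis=1, keepdims=True)   # responsibilities P(k | x)

# Mercer kernel from the mixture density: K(x, y) = sum_k P(k|x) P(k|y);
# K = R R^T is symmetric positive semi-definite by construction
K = R @ R.T
print(K.shape)
```

Points assigned to the same component get kernel values near 1, points in different components near 0, so the learned density directly shapes the similarity measure.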
Prior expectations facilitate metacognition for perceptual decision.
Sherman, M T; Seth, A K; Barrett, A B; Kanai, R
2015-09-01
The influential framework of 'predictive processing' suggests that prior probabilistic expectations influence, or even constitute, perceptual contents. This notion is evidenced by the facilitation of low-level perceptual processing by expectations. However, whether expectations can facilitate high-level components of perception remains unclear. We addressed this question by considering the influence of expectations on perceptual metacognition. To isolate the effects of expectation from those of attention we used a novel factorial design: expectation was manipulated by changing the probability that a Gabor target would be presented; attention was manipulated by instructing participants to perform or ignore a concurrent visual search task. We found that, independently of attention, metacognition improved when yes/no responses were congruent with expectations of target presence/absence. Results were modeled under a novel Bayesian signal detection theoretic framework which integrates bottom-up signal propagation with top-down influences, to provide a unified description of the mechanisms underlying perceptual decision and metacognition. Copyright © 2015 Elsevier Inc. All rights reserved.
Washing of waste prior to landfilling.
Cossu, Raffaello; Lai, Tiziana
2012-05-01
The main impact produced by landfills is represented by the release of leachate emissions. Waste washing treatment has been investigated to evaluate its efficiency in reducing the waste leaching fraction prior to landfilling. The results of laboratory-scale washing tests applied to several significant residues from integrated management of solid waste are presented in this study, specifically: non-recyclable plastics from source separation, mechanical-biological treated municipal solid waste and a special waste, automotive shredded residues. Results obtained demonstrate that washing treatment contributes towards combating the environmental impacts of raw wastes. Accordingly, a leachate production model was applied, leading to the consideration that the concentrations of chemical oxygen demand (COD) and total Kjeldahl nitrogen (TKN), parameters of fundamental importance in the characterization of landfill leachate, from a landfill containing washed wastes, are comparable to those that would only be reached between 90 and 220years later in the presence of raw wastes. The findings obtained demonstrated that washing of waste may represent an effective means of reducing the leachable fraction resulting in a consequent decrease in landfill emissions. Further studies on pilot scale are needed to assess the potential for full-scale application of this treatment. Copyright © 2012 Elsevier Ltd. All rights reserved.
Pitch perception prior to cortical maturation
Lau, Bonnie K.
Pitch perception plays an important role in many complex auditory tasks including speech perception, music perception, and sound source segregation. Because of the protracted and extensive development of the human auditory cortex, pitch perception might be expected to mature, at least over the first few months of life. This dissertation investigates complex pitch perception in 3-month-olds, 7-month-olds and adults -- time points when the organization of the auditory pathway is distinctly different. Using an observer-based psychophysical procedure, a series of four studies were conducted to determine whether infants (1) discriminate the pitch of harmonic complex tones, (2) discriminate the pitch of unresolved harmonics, (3) discriminate the pitch of missing fundamental melodies, and (4) have comparable sensitivity to pitch and spectral changes as adult listeners. The stimuli used in these studies were harmonic complex tones, with energy missing at the fundamental frequency. Infants at both three and seven months of age discriminated the pitch of missing fundamental complexes composed of resolved and unresolved harmonics as well as missing fundamental melodies, demonstrating perception of complex pitch by three months of age. More surprisingly, infants in both age groups had lower pitch and spectral discrimination thresholds than adult listeners. Furthermore, no differences in performance on any of the tasks presented were observed between infants at three and seven months of age. These results suggest that subcortical processing is not only sufficient to support pitch perception prior to cortical maturation, but provides adult-like sensitivity to pitch by three months.
Febrile seizures prior to sudden cardiac death
DEFF Research Database (Denmark)
Stampe, Niels Kjær; Glinge, Charlotte; Jabbari, Reza
2018-01-01
Aims: Febrile seizure (FS) is a common disorder affecting 2-5% of children up to 5 years of age. The aim of this study was to determine whether FS in early childhood is over-represented in young adults dying from sudden cardiac death (SCD). Methods and results: We included all deaths (n = 4595) nationwide and through review of all death certificates, we identified 245 SCD in Danes aged 1-30 years in 2000-09. Through the usage of nationwide registries, we identified all persons admitted with first FS among SCD cases (14/245; 5.7%) and in the corresponding living Danish population (71 027/2 369 785)...... with FS was sudden arrhythmic death syndrome (5/8; 62.5%). Conclusion: In conclusion, this study demonstrates a significant two-fold increase in the frequency of FS prior to death in young SCD cases compared with the two control groups, suggesting that FS could potentially contribute in a risk......
International Nuclear Information System (INIS)
Tsallis, C.
1980-03-01
The 'ingredients' which control a phase transition in well-defined systems as well as in random ones (e.g. random magnetic systems) are listed and discussed within a somehow unifying perspective. Among these 'ingredients' we find the couplings and elements responsible for the cooperative phenomenon, the topological connectivity as well as possible topological incompatibilities, the influence of new degrees of freedom, the order parameter dimensionality, the ground state degeneracy and finally the 'quanticity' of the system. The general trends, though illustrated in magnetic systems, essentially hold for all phase transitions, and give a basis for connection of this area with Field theory, Theory of dynamical systems, etc. (Author) [pt
International Nuclear Information System (INIS)
Tsallis, C.
1981-01-01
The 'ingredients' which control a phase transition in well-defined systems as well as in random ones (e.g. random magnetic systems) are listed and discussed within a somehow unifying perspective. Among these 'ingredients' the couplings and elements responsible for the cooperative phenomenon, the topological connectivity as well as possible topological incompatibilities, the influence of new degrees of freedom, the order parameter dimensionality, the ground state degeneracy and finally the 'quanticity' of the system are found. The general trends, though illustrated in magnetic systems, essentially hold for all phase transitions, and give a basis for connection of this area with Field theory, Theory of dynamical systems, etc. (Author) [pt
Leveraging Prior Calculus Study with Embedded Review
Nikolov, Margaret C.; Withers, Wm. Douglas
2016-01-01
We propose a new course structure to address the needs of college students with previous calculus study but no course validations as an alternative to repeating the first year of calculus. Students are introduced directly to topics from Calculus III unpreceded by a formal review of topics from Calculus I or II, but with additional syllabus time…
Hierarchical pre-segmentation without prior knowledge
Kuijper, A.; Florack, L.M.J.
2001-01-01
A new method to pre-segment images by means of a hierarchical description is proposed. This description is obtained from an investigation of the deep structure of a scale space image – the input image and the Gaussian filtered ones simultaneously. We concentrate on scale space critical points –
Directory of Open Access Journals (Sweden)
Gao Shouguo
2011-08-01
Full Text Available Abstract Background Bayesian Network (BN) is a powerful approach to reconstructing genetic regulatory networks from gene expression data. However, expression data by itself suffers from high noise and lack of power. Incorporating prior biological knowledge can improve the performance. As each type of prior knowledge on its own may be incomplete or limited by quality issues, integrating multiple sources of prior knowledge to utilize their consensus is desirable. Results We introduce a new method to incorporate the quantitative information from multiple sources of prior knowledge. It first uses the Naïve Bayesian classifier to assess the likelihood of functional linkage between gene pairs based on prior knowledge. In this study we included cocitation in PubMed and semantic similarity in Gene Ontology annotation. A candidate network edge reservoir is then created in which the copy number of each edge is proportional to the estimated likelihood of linkage between the two corresponding genes. In network simulation the Markov chain Monte Carlo sampling algorithm is adopted, and it samples from this reservoir at each iteration to generate new candidate networks. We evaluated the new algorithm using both simulated and real gene expression data, including that from a yeast cell cycle and a mouse pancreas development/growth study. Incorporating prior knowledge led to a ~2-fold increase in the number of known transcription regulations recovered, without significant change in false positive rate. In contrast, without the prior knowledge BN modeling is not always better than a random selection, demonstrating the necessity in network modeling to supplement the gene expression data with additional information. Conclusion Our new development provides a statistical means to utilize the quantitative information in prior biological knowledge in the BN modeling of gene expression data, which significantly improves the performance.
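The edge-reservoir idea above can be sketched as follows. The gene names and prior scores are hypothetical, and a real implementation would embed the draw inside an MCMC loop over network structures.

```python
import random

random.seed(0)

# Hypothetical prior-knowledge scores: estimated likelihood of a regulatory
# link between gene pairs (e.g. combined from co-citation and GO similarity)
prior_score = {("geneA", "geneB"): 0.9,
               ("geneA", "geneC"): 0.3,
               ("geneB", "geneC"): 0.6}

# Candidate-edge reservoir: the copy number of each edge is proportional
# to its estimated linkage likelihood
reservoir = []
for edge, score in prior_score.items():
    reservoir.extend([edge] * max(1, round(10 * score)))

# Each sampler iteration would draw candidate edges from the reservoir,
# so well-supported edges are proposed more often than poorly supported ones
proposal = random.sample(reservoir, 2)
print(proposal)
```

The multiplier 10 controls how strongly the prior biases the proposals; with it, ("geneA", "geneB") appears three times as often in the reservoir as ("geneA", "geneC").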
International Nuclear Information System (INIS)
Bennett, D.L.; Brene, N.; Nielsen, H.B.
1986-06-01
The goal of random dynamics is the derivation of the laws of Nature as we know them (standard model) from inessential assumptions. The inessential assumptions made here are expressed as sets of general models at extremely high energies: gauge glass and spacetime foam. Both sets of models lead tentatively to the standard model. (orig.)
International Nuclear Information System (INIS)
Bennett, D.L.
1987-01-01
The goal of random dynamics is the derivation of the laws of Nature as we know them (standard model) from inessential assumptions. The inessential assumptions made here are expressed as sets of general models at extremely high energies: Gauge glass and spacetime foam. Both sets of models lead tentatively to the standard model. (orig.)
Bennett, D. L.; Brene, N.; Nielsen, H. B.
1987-01-01
The goal of random dynamics is the derivation of the laws of Nature as we know them (standard model) from inessential assumptions. The inessential assumptions made here are expressed as sets of general models at extremely high energies: gauge glass and spacetime foam. Both sets of models lead tentatively to the standard model.
Pelsser, L.M.; Steijn, van D.J.; Frankena, K.; Toorman, J.; Buitelaar, J.K.; Rommelse, N.N.
2013-01-01
Behavioural improvements of children with attention-deficit hyperactivity disorder (ADHD) and oppositional defiant disorder (ODD) following a restricted elimination diet (RED), may be due to concurrent changes in family environment. Methods: Twenty-four children with ADHD, were randomized to either
Prior Mental Fatigue Impairs Marksmanship Decision Performance
Directory of Open Access Journals (Sweden)
James Head
2017-09-01
Full Text Available Purpose: Mental fatigue has been shown to impair subsequent physical performance in continuous and discontinuous exercise. However, its influence on subsequent fine-motor performance in an applied setting (e.g., marksmanship) for trained soldiers is relatively unknown. The purpose of this study was to investigate whether prior mental fatigue influences subsequent marksmanship performance as measured by shooting accuracy and judgment of soldiers in a live-fire scenario. Methods: Twenty trained infantry soldiers engaged targets after completing either a mental fatigue or control intervention in a repeated measures design. Heart rate variability and the NASA-TLX were used to gauge physiological and subjective effects of the interventions. Target hit proportion, projectile group accuracy, and precision were used to measure marksmanship accuracy. Marksmanship accuracy was assessed by measuring bullet group accuracy (i.e., how close a group of shots is relative to the center of mass) and bullet group precision (i.e., how close each individual shot is to the others). Additionally, marksmanship decision accuracy (correctly shooting vs. correctly withholding a shot when engaging targets) was used to examine marksmanship performance. Results: Soldiers rated the mentally fatiguing task (59.88 ± 23.7) as having greater mental workload relative to the control intervention [31.29 ± 12.3, t(19) = 1.72, p < 0.001]. Additionally, soldiers completing the mental fatigue intervention (96.04 ± 37.1) also had lower time-domain (standard deviation of normal-to-normal R-R intervals) heart rate variability relative to the control [134.39 ± 47.4, t(18) = 3.59, p < 0.001]. Projectile group accuracy and group precision failed to show differences between interventions [t(19) = 0.98, p = 0.34; t(19) = 0.18, p = 0.87, respectively]. Marksmanship decision errors significantly increased after soldiers completed the mental fatigue intervention (48% ± 22.4) relative to the control
Digital communication constraints in prior space missions
Yassine, Nathan K.
2004-01-01
Digital communication is crucial for space endeavors. It transmits scientific and command data between earth stations and the spacecraft crew. It facilitates communications between astronauts, and provides live coverage during all phases of the mission. Digital communications provide ground stations and spacecraft crew precise data on the spacecraft position throughout the entire mission. Lessons learned from prior space missions are valuable for our new lunar and Mars missions set by our president's speech. These data will save our agency time and money, and set the course for our currently developing technologies. Limitations on digital communications equipment in past space missions pertaining to mass, volume, data rate, frequency, antenna type and size, modulation, format, and power are of particular interest. This activity is in support of ongoing communication architectural studies pertaining to robotic and human lunar exploration. The design capabilities and functionalities will depend on the space and power allocated for digital communication equipment. My contribution will be gathering these data, writing a report, and presenting it to the Communications Technology Division staff. Antenna design is very carefully studied for each mission scenario. Currently, phased array antennas are being developed for the lunar mission. Phased array antennas use little power, and electronically steer a beam instead of using DC motors. There are 615 patches in the phased array antenna. These patches have to be modified to have high yield. 50 patches were created for testing. My part is to assist in the characterization of these patch antennas, and determine whether or not certain modifications to quartz micro-strip patch radiators result in a significant yield to warrant proceeding with repairs to the prototype 19 GHz ferroelectric reflect-array antenna. This work requires learning how to calibrate an automatic network, and mounting and testing antennas in coaxial fixtures. The purpose of this
Walter, G.
2015-01-01
In the Bayesian approach to statistical inference, possibly subjective knowledge on model parameters can be expressed by so-called prior distributions. A prior distribution is updated, via Bayes’ Rule, to the so-called posterior distribution, which combines prior information and information from
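As a minimal example of the prior-to-posterior update via Bayes' Rule described above, consider a conjugate Beta prior on a success probability updated with binomial data; the numbers are purely illustrative.

```python
# Conjugate Beta-Binomial example: a Beta(a, b) prior on a success
# probability is updated by Bayes' Rule to Beta(a + s, b + f)
a, b = 2.0, 2.0                  # mildly informative prior centred at 0.5
successes, failures = 7, 3       # observed data (illustrative)

a_post, b_post = a + successes, b + failures
posterior_mean = a_post / (a_post + b_post)
print(posterior_mean)  # 0.642..., between the data rate 0.7 and the prior mean 0.5
```

The posterior mean is a weighted compromise between prior and data; as more data arrive, the prior's pseudocounts a and b matter less.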
Wetzels, Sandra A. J.; Kester, Liesbeth; van Merrienboer, Jeroen J. G.; Broers, Nick J.
2011-01-01
Background: Prior knowledge activation facilitates learning. Note taking during prior knowledge activation (i.e., note taking directed at retrieving information from memory) might facilitate the activation process by enabling learners to build an external representation of their prior knowledge. However, taking notes might be less effective in…
Directory of Open Access Journals (Sweden)
Yvonne Mühlig
2017-08-01
Full Text Available Background: To compare efficacy and safety of a manual-based low-level psychological intervention with treatment as usual (weight loss treatment). Methods: A two-armed randomized controlled trial without blinding and computer-based stratified block randomization included adolescents and young adults (14.0-24.9 years) with a BMI ≥ 30 kg/m2 at five German university hospitals. Primary outcomes were adherence (participation rate ≥ 5/6 sessions) and quality of life (DISABKIDS-37) 6 months after randomization. Secondary outcomes included depression, self-esteem, and perceived stress scores. Results: Of 397 screened adolescents, 119 (mean BMI 40.4 ± 7.0 kg/m2, 49.6% female) were randomized to the manual-based low-level intervention (n = 59) or treatment as usual (n = 60). We observed no group difference for adherence (absolute risk reduction 0.4%, 95% CI -14.7% to 15.5%; p = 1.0) or health-related quality of life (score difference 8.1, 95% CI -2.1 to 18.3; p = 0.11). Among all secondary outcomes, we detected explorative evidence for an effect on the DISABKIDS-37 'social exclusion' subscale (score difference 15.5; 95% CI 1.6-29.4; p = 0.03). 18/19 adverse events occurred in 26 participants; none were classified as serious. Conclusion: Adherence to a coping-oriented intervention was comparable to weight loss treatment, although it was weak in both interventions. Psychological interventions may help to overcome social isolation; further confirmation is required.
Richardson, Caroline R; Mehari, Kathleen S; McIntyre, Laura G; Janney, Adrienne W; Fortlage, Laurie A; Sen, Ananda; Strecher, Victor J; Piette, John D
2007-01-01
Abstract Background The majority of individuals with type 2 diabetes do not exercise regularly. Pedometer-based walking interventions can help; however, pedometer-based interventions targeting only total daily accumulated steps might not yield the same health benefits as physical activity programs specifying a minimum duration and intensity of physical activity bouts. Methods This pilot randomized trial compared two goal-setting strategies: 1) lifestyle goals targeting total daily accumulated...
Simultaneous tensor decomposition and completion using factor priors.
Chen, Yi-Lei; Hsu, Chiou-Ting; Liao, Hong-Yuan Mark
2014-03-01
The success of research on matrix completion is evident in a variety of real-world applications. Tensor completion, which is a high-order extension of matrix completion, has also generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called simultaneous tensor decomposition and completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. By exploiting this auxiliary information, our method leverages two classic schemes and accurately estimates the model factors and missing entries. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.
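The low-rank completion idea underlying the method above can be sketched on a matrix (the order-2 case): alternate between a rank projection and re-imposing the observed entries. This is a generic hard-impute sketch under toy assumptions, not the paper's STDC algorithm, which additionally couples Tucker factors with factor priors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-2 ground truth with roughly 40% of entries missing
U, V = rng.standard_normal((20, 2)), rng.standard_normal((2, 20))
M = U @ V
mask = rng.random(M.shape) > 0.4   # True = observed

# Hard-impute iteration: project onto rank 2, then restore observed entries
X = np.where(mask, M, 0.0)
for _ in range(100):
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    X = (u[:, :2] * s[:2]) @ vt[:2]   # truncated-SVD rank-2 projection
    X[mask] = M[mask]                 # completion constraint on observed data

err = np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask])
print(err)  # relative error on the missing entries
```

Note the failure mode the abstract mentions: if the projection rank is set wrong, this scheme overfits or underfits, which is what motivates combining rank minimization with model decomposition.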
Efficient Training Methods for Conditional Random Fields
National Research Council Canada - National Science Library
Sutton, Charles A
2008-01-01
.... In this thesis, I investigate efficient training methods for conditional random fields with complex graphical structure, focusing on local methods which avoid propagating information globally along the graph...
Apples and oranges: avoiding different priors in Bayesian DNA sequence analysis
Directory of Open Access Journals (Sweden)
Posch Stefan
2010-03-01
Full Text Available Abstract Background One of the challenges of bioinformatics remains the recognition of short signal sequences in genomic DNA such as donor or acceptor splice sites, splicing enhancers or silencers, translation initiation sites, transcription start sites, transcription factor binding sites, nucleosome binding sites, miRNA binding sites, or insulator binding sites. During the last decade, a wealth of algorithms for the recognition of such DNA sequences has been developed and compared with the goal of improving their performance and deepening our understanding of the underlying cellular processes. Most of these algorithms are based on statistical models belonging to the family of Markov random fields such as position weight matrix models, weight array matrix models, Markov models of higher order, or moral Bayesian networks. While in many comparative studies different learning principles or different statistical models have been compared, the influence of choosing different prior distributions for the model parameters when using different learning principles has been overlooked, possibly leading to questionable conclusions. Results With the goal of allowing direct comparisons of different learning principles for models from the family of Markov random fields based on the same a-priori information, we derive a generalization of the commonly-used product-Dirichlet prior. We find that the derived prior behaves like a Gaussian prior close to the maximum and like a Laplace prior in the far tails. In two case studies, we illustrate the utility of the derived prior for a direct comparison of different learning principles with different models for the recognition of binding sites of the transcription factor Sp1 and human donor splice sites. Conclusions We find that comparisons of different learning principles using the same a-priori information can lead to conclusions different from those of previous studies in which the effect resulting from different
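For concreteness, a symmetric product-Dirichlet prior over the columns of a position weight matrix reduces, at the posterior-mean estimate, to adding pseudocounts before normalising. The alignment and hyperparameter below are toy assumptions, not data from the paper.

```python
import numpy as np

# Toy binding-site alignment (hypothetical sequences)
sites = ["ACGT", "ACGA", "ACGT", "TCGT"]
alphabet = "ACGT"
pseudo = 0.5   # symmetric Dirichlet hyperparameter

# Posterior-mean PWM: observed counts plus Dirichlet pseudocounts, normalised
counts = np.full((4, 4), pseudo)              # (position, symbol)
for site in sites:
    for i, ch in enumerate(site):
        counts[i, alphabet.index(ch)] += 1

pwm = counts / counts.sum(axis=1, keepdims=True)
print(pwm.round(3))
```

The pseudocounts keep every probability strictly positive, so no sequence is assigned zero likelihood merely because a symbol was unseen at some position in the training alignment.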
Adjusting Beliefs via Transformed Fuzzy Priors
Rattanadamrongaksorn, T.; Sirikanchanarak, D.; Sirisrisakulchai, J.; Sriboonchitta, S.
2018-02-01
Instead of leaving a decision to a purely data-driven system, human intervention and collaboration are often preferred to fill gaps where the machine does not perform well. In financial applications, for instance, inference and prediction during structural changes driven by critical factors, such as market conditions, administrative styles and political policies, have significant influence on investment strategies. When conditions differ from the past, we believe that the decision should be based not only on historical data but also on human estimation. In this study, an updating process based on data fusion between expert opinions and statistical observations is therefore proposed. The expert's linguistic terms can be translated into mathematical expressions by predefined fuzzy numbers and utilized as initial knowledge for the Bayesian statistical framework via the possibility-to-probability transformation. Artificial samples for five scenarios were tested on a univariate problem to demonstrate the methodology. The results showed shifts and variations in the parameters of the distributions, which, as a consequence, adjusted the degrees of belief accordingly.
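As a toy illustration of the possibility-to-probability step, one simple transform among the several proposed in the literature just renormalizes a triangular membership function into a density; the numbers and the choice of transform below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def triangular_membership(x, a, m, b):
    """Membership function of the triangular fuzzy number (a, m, b)."""
    return np.where(x <= m,
                    np.clip((x - a) / (m - a), 0.0, 1.0),
                    np.clip((b - x) / (b - m), 0.0, 1.0))

def possibility_to_probability(x, mu):
    """Crude possibility-to-probability transform: renormalize the
    membership profile so it integrates (trapezoidally) to one."""
    area = np.sum(0.5 * (mu[1:] + mu[:-1]) * np.diff(x))
    return mu / area

# an expert saying "around 5, surely between 2 and 8"
x = np.linspace(0.0, 10.0, 1001)
mu = triangular_membership(x, 2.0, 5.0, 8.0)
density = possibility_to_probability(x, mu)
```

The resulting `density` can then serve as a discretized prior for a Bayesian update against observed data, which is the fusion step the abstract describes.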
Turner syndrome: counseling prior to oocyte donation
Directory of Open Access Journals (Sweden)
Ester Silveira Ramos
2007-03-01
Ovarian failure is a typical feature of Turner syndrome (TS). Patients are followed clinically with hormone replacement therapy (HRT) and inclusion in the oocyte donation program, if necessary. For patients with spontaneous puberty, genetic counseling regarding preimplantation genetic diagnosis and prenatal diagnosis is indicated. Patients with dysgenetic gonads and a Y chromosome are at increased risk of developing gonadoblastoma. Even though this is not an invasive tumor, its frequent association with other malignant forms justifies prophylactic gonadectomy. It is important to perform gonadectomy before HRT and pregnancy with oocyte donation. Among patients with TS stigmata and female genitalia, many have the Y chromosome in one of the cell lines. For this reason, all patients should undergo cytogenetic analysis. Nevertheless, in cases of structural chromosomal alterations or hidden mosaicism, the conventional cytogenetic techniques may be ineffective and molecular investigation is indicated. The author proposes a practical approach for investigating women with TS stigmata in whom identification of the X or Y chromosome is important for clinical management and follow-up.
Random Tensors
Gurau, Razvan
2017-01-01
Written by the creator of the modern theory of random tensors, this book is the first self-contained introductory text to this rapidly developing theory. Starting from notions familiar to the average researcher or PhD student in mathematical or theoretical physics, the book presents the theory and its applications to physics in detail. The recent detections of the Higgs boson at the LHC and of gravitational waves at LIGO mark new milestones in physics, confirming long-standing predictions of Quantum Field Theory and General Relativity. These two experimental results only reinforce the need to find an underlying framework common to the two: the elusive theory of Quantum Gravity. Over the past thirty years, several alternatives have been proposed as theories of Quantum Gravity, chief among them String Theory. While these theories are yet to be tested experimentally, key lessons have already been learned. Whatever the theory of Quantum Gravity may be, it must incorporate random geometry in one form or another....
Variational Infinite Hidden Conditional Random Fields
Bousmalis, Konstantinos; Zafeiriou, Stefanos; Morency, Louis-Philippe; Pantic, Maja; Ghahramani, Zoubin
2015-01-01
Hidden conditional random fields (HCRFs) are discriminative latent variable models which have been shown to successfully learn the hidden structure of a given classification problem. An infinite hidden conditional random field is a hidden conditional random field with a countably infinite number of
Prior Knowledge Improves Decoding of Finger Flexion from Electrocorticographic (ECoG) Signals
Directory of Open Access Journals (Sweden)
Zuoguan eWang
2011-11-01
Brain-computer interfaces (BCIs) use brain signals to convey a user's intent. Some BCI approaches begin by decoding kinematic parameters of movements from brain signals, and then proceed to using these signals, in the absence of movements, to allow a user to control an output. Recent results have shown that electrocorticographic (ECoG) recordings from the surface of the brain in humans can give information about kinematic parameters (e.g. hand velocity or finger flexion). The decoding approaches in these studies usually employed classical classification/regression algorithms that derive a linear mapping between brain signals and outputs, but they typically incorporate little prior information about the target movement parameter. In this paper, we incorporate prior knowledge using a Bayesian decoding method and use it to decode finger flexion from ECoG signals. Specifically, we exploit the anatomic constraints and dynamic constraints that govern finger flexion and incorporate these constraints in the construction, structure, and probabilistic functions of the prior model of a switched non-parametric dynamic system (SNDS). Given a measurement model resulting from a traditional linear regression method, we decoded finger flexion using posterior estimation that combined the prior and measurement models. Our results show that the application of the Bayesian decoding model, which incorporates prior knowledge, improves decoding performance compared to the application of a linear regression model, which does not incorporate prior knowledge. Thus, the results presented in this paper may ultimately lead to neurally controlled hand prostheses with full fine-grained finger articulation.
Compositional-prior-guided image reconstruction algorithm for multi-modality imaging
Fang, Qianqian; Moore, Richard H.; Kopans, Daniel B.; Boas, David A.
2010-01-01
The development of effective multi-modality imaging methods typically requires an efficient information fusion model, particularly when combining structural images with a complementary imaging modality that provides functional information. We propose a composition-based image segmentation method for X-ray digital breast tomosynthesis (DBT) and a structural-prior-guided image reconstruction for a combined DBT and diffuse optical tomography (DOT) breast imaging system. Using the 3D DBT images from 31 clinically measured healthy breasts, we create an empirical relationship between the X-ray intensities for adipose and fibroglandular tissue. We then use this relationship to segment another 58 healthy breast DBT images from 29 subjects into compositional maps of different tissue types. For each breast, we build a weighted graph in the compositional space and construct a regularization matrix to incorporate the structural priors into a finite-element-based DOT image reconstruction. Use of the compositional priors enables us to fuse tissue anatomy into optical images with less restriction than when using a binary segmentation. This allows us to recover the image contrast captured by DOT but not by DBT. We show that it is possible to fine-tune the strength of the structural priors by changing a single regularization parameter. By estimating the optical properties for adipose and fibroglandular tissue using the proposed algorithm, we found results comparable or superior to those estimated with expert segmentations, without the time-consuming manual selection of regions-of-interest. PMID:21258460
46 CFR 190.07-90 - Vessels contracted for prior to March 1, 1968.
2010-10-01
... considered satisfactory so long as they are maintained in good condition to the satisfaction of the Officer in Charge, Marine Inspection. Minor repairs and alterations may be made to the same standards as the... VESSELS CONSTRUCTION AND ARRANGEMENT Structural Fire Protection § 190.07-90 Vessels contracted for prior...
The Role of Prior Knowledge in International Franchise Partner Recruitment
Wang, Catherine; Altinay, Levent
2006-01-01
Purpose To investigate the role of prior knowledge in the international franchise partner recruitment process and to evaluate how cultural distance influences the role of prior knowledge in this process. Design/Methodology/Approach A single embedded case study of an international hotel firm was the focus of the enquiry. Interviews, observations and document analysis were used as the data collection techniques. Findings Findings reveal that prior knowledge of the franchisor enab...
Spectrally Consistent Satellite Image Fusion with Improved Image Priors
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Aanæs, Henrik; Jensen, Thomas B.S.
2006-01-01
Here an improvement to our previous framework for satellite image fusion is presented: a framework purely based on the sensor physics and on prior assumptions on the fused image. The contributions of this paper are twofold. Firstly, a method for ensuring 100% spectral consistency is proposed......, even when more sophisticated image priors are applied. Secondly, a better image prior is introduced, via data-dependent image smoothing....
Jia, Bin; Wang, Xiaodong
2013-12-17
The extended Kalman filter (EKF) has been applied to inferring gene regulatory networks. However, it is well known that the EKF becomes less accurate when the system exhibits high nonlinearity. In addition, certain prior information about the gene regulatory network exists in practice, and no systematic approach has been developed to incorporate such prior information into the Kalman-type filter for inferring the structure of the gene regulatory network. In this paper, an inference framework based on point-based Gaussian approximation filters that can exploit the prior information is developed to solve the gene regulatory network inference problem. Different point-based Gaussian approximation filters, including the unscented Kalman filter (UKF), the third-degree cubature Kalman filter (CKF3), and the fifth-degree cubature Kalman filter (CKF5) are employed. Several types of network prior information, including the existing network structure information, sparsity assumption, and the range constraint of parameters, are considered, and the corresponding filters incorporating the prior information are developed. Experiments on a synthetic network of eight genes and the yeast protein synthesis network of five genes are carried out to demonstrate the performance of the proposed framework. The results show that the proposed methods provide more accurate inference results than existing methods, such as the EKF and the traditional UKF.
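At the heart of the UKF-style filters mentioned here is the unscented transform, which propagates a Gaussian through a nonlinearity via deterministically chosen sigma points rather than linearization. A minimal sketch, using the basic κ-parameterized point set (one common variant; the filter-specific prediction/update machinery and the prior-information constraints are omitted):

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Estimate the mean and covariance of f(x) for x ~ N(mean, cov)
    using 2n+1 sigma points (basic unscented transform)."""
    n = mean.size
    root = np.linalg.cholesky((n + kappa) * cov)  # columns are scaled directions
    sigma_points = [mean] \
        + [mean + root[:, i] for i in range(n)] \
        + [mean - root[:, i] for i in range(n)]
    weights = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    weights[0] = kappa / (n + kappa)
    ys = np.array([f(p) for p in sigma_points])
    y_mean = weights @ ys
    diffs = ys - y_mean
    y_cov = (weights[:, None] * diffs).T @ diffs  # sum_i w_i d_i d_i^T
    return y_mean, y_cov
```

For a linear map the transform is exact (the sigma points reproduce the input mean and covariance by construction), which gives an easy sanity check and explains why these filters degrade more gracefully than the EKF as nonlinearity grows.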
Training shortest-path tractography: Automatic learning of spatial priors
DEFF Research Database (Denmark)
Kasenburg, Niklas; Liptrot, Matthew George; Reislev, Nina Linde
2016-01-01
Tractography is the standard tool for automatic delineation of white matter tracts from diffusion weighted images. However, the output of tractography often requires post-processing to remove false positives and ensure a robust delineation of the studied tract, and this demands expert prior...... knowledge. Here we demonstrate how such prior knowledge, or indeed any prior spatial information, can be automatically incorporated into a shortest-path tractography approach to produce more robust results. We describe how such a prior can be automatically generated (learned) from a population, and we...
Crowdsourcing prior information to improve study design and data analysis.
Directory of Open Access Journals (Sweden)
Jeffrey S Chrabaszcz
Though Bayesian methods are being used more frequently, many still struggle with the best method for setting priors with novel measures or task environments. We propose a method for setting priors by eliciting continuous probability distributions from naive participants. This allows us to include any relevant information participants have for a given effect. Even when prior means are near-zero, this method provides a principled way to estimate dispersion and produce shrinkage, reducing the occurrence of overestimated effect sizes. We demonstrate this method with a number of published studies and compare the effect of different prior estimation and aggregation methods.
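One simple way to aggregate elicited distributions is the equal-weight linear opinion pool, which averages participants' densities and reads off the mixture's moments; the abstract compares several aggregation methods, so the choice below is only one illustrative possibility, with made-up elicitations:

```python
def pool_elicited_normals(elicitations):
    """Equal-weight linear opinion pool of normal priors elicited from
    participants, given as (mean, sd) pairs. Returns the pooled
    mixture's mean and standard deviation."""
    k = len(elicitations)
    pooled_mean = sum(m for m, _ in elicitations) / k
    # mixture second moment minus squared pooled mean
    second = sum(s ** 2 + m ** 2 for m, s in elicitations) / k
    return pooled_mean, (second - pooled_mean ** 2) ** 0.5

# three hypothetical participants' elicited (mean, sd) pairs
mean, sd = pool_elicited_normals([(0.0, 1.0), (2.0, 1.0), (1.0, 2.0)])
```

Note that even when the pooled mean is near zero, disagreement among participants inflates the pooled dispersion, which is exactly the shrinkage-inducing behaviour the abstract highlights.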
Prior knowledge in recalling arguments in bioethical dilemmas
Directory of Open Access Journals (Sweden)
Hiemke Katharina Schmidt
2015-09-01
Prior knowledge is known to facilitate learning new information. Normally in studies confirming this outcome the relationship between prior knowledge and the topic to be learned is obvious: the information to be acquired is part of the domain or topic to which the prior knowledge belongs. This raises the question as to whether prior knowledge of various domains facilitates recalling information. In this study 79 eleventh-grade students completed a questionnaire on their prior knowledge of seven different domains related to the bioethical dilemma of prenatal diagnostics. The students read a text containing arguments for and against prenatal diagnostics. After one week and again 12 weeks later they were asked to write down all the arguments they remembered. Prior knowledge helped them recall the arguments one week (r = .350) and 12 weeks (r = .316) later. Prior knowledge of three of the seven domains significantly helped them recall the arguments one week later (correlations between r = .194 and r = .394). Partial correlations with interest as a control item revealed that interest did not explain the relationship between prior knowledge and recall. Prior knowledge of different domains jointly supports the recall of arguments related to bioethical topics.
Response to baricitinib based on prior biologic use in patients with refractory rheumatoid arthritis
Genovese, Mark C; Kremer, Joel M; Kartman, Cynthia E; Schlichting, Douglas E; Xie, Li; Carmack, Tara; Pantojas, Carlos; Sanchez Burson, Juan; Tony, Hans-Peter; Macias, William L; Rooney, Terence P; Smolen, Josef S
2018-01-01
Abstract Objective RA patients who have failed biologic DMARDs (bDMARDs) represent an unmet medical need. We evaluated the effects of baseline characteristics, including prior bDMARD exposure, on baricitinib efficacy and safety. Methods RA-BEACON patients (previously reported) had moderate to severe RA with insufficient response to one or more TNF inhibitor and were randomized 1:1:1 to once-daily placebo or 2 or 4 mg baricitinib. Prior bDMARD use was allowed. The primary endpoint was a 20% improvement in ACR criteria (ACR20) at week 12 for 4 mg vs placebo. An exploratory, primarily post hoc, subgroup analysis evaluated efficacy at weeks 12 and 24 by ACR20 and Clinical Disease Activity Index (CDAI) ⩽10. An interaction P-value ⩽0.10 was considered significant, with significance at both weeks 12 and 24 given more weight. Results The odds ratios predominantly favored baricitinib over placebo and were generally similar to those in the overall study (3.4, 2.4 for ACR20 weeks 12 and 24, respectively). Significant quantitative interactions were observed for baricitinib 4 mg vs placebo at weeks 12 and 24: ACR20 by region (larger effect Europe) and CDAI ⩽10 by disease duration (larger effect ⩾10 years). No significant interactions were consistently observed for ACR20 by age; weight; disease duration; seropositivity; corticosteroid use; number of prior bDMARDs, TNF inhibitors or non-TNF inhibitors; or a specific prior TNF inhibitor. Treatment-emergent adverse event rates, including infections, appeared somewhat higher across groups with greater prior bDMARD use. Conclusion Baricitinib demonstrated a consistent, beneficial treatment effect in bDMARD-refractory patients across subgroups based on baseline characteristics and prior bDMARD use. Trial registration ClinicalTrials.gov (https://clinicaltrials.gov/), NCT01721044 PMID:29415145
Markov Random Fields on Triangle Meshes
DEFF Research Database (Denmark)
Andersen, Vedrana; Aanæs, Henrik; Bærentzen, Jakob Andreas
2010-01-01
In this paper we propose a novel anisotropic smoothing scheme based on Markov Random Fields (MRF). Our scheme is formulated as two coupled processes. A vertex process is used to smooth the mesh by displacing the vertices according to a MRF smoothness prior, while an independent edge process label...
Testability evaluation using prior information of multiple sources
Directory of Open Access Journals (Sweden)
Wang Chao
2014-08-01
Testability plays an important role in improving the readiness and decreasing the life-cycle cost of equipment. Testability demonstration and evaluation is of significance in measuring such testability indexes as fault detection rate (FDR) and fault isolation rate (FIR), which is useful to the producer in mastering the testability level and improving the testability design, and helpful to the consumer in making purchase decisions. Aiming at the problems with a small sample of testability demonstration test data (TDTD) such as low evaluation confidence and inaccurate result, a testability evaluation method is proposed based on the prior information of multiple sources and Bayes theory. Firstly, the types of prior information are analyzed. The maximum entropy method is applied to the prior information with the mean and interval estimate forms on the testability index to obtain the parameters of prior probability density function (PDF), and the empirical Bayesian method is used to get the parameters for the prior information with a success-fail form. Then, a parametrical data consistency check method is used to check the compatibility between all the sources of prior information and TDTD. For the prior information to pass the check, the prior credibility is calculated. A mixed prior distribution is formed based on the prior PDFs and the corresponding credibility. The Bayesian posterior distribution model is acquired with the mixed prior distribution and TDTD, based on which the point and interval estimates are calculated. Finally, examples of a flying control system are used to verify the proposed method. The results show that the proposed method is feasible and effective.
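For the success-fail form of prior information, the credibility-weighted mixture prior described above can be sketched with conjugate Beta components: each source contributes a Beta(a, b) prior with a credibility weight, the weights are updated by each component's marginal likelihood for the demonstration data, and the point estimate is the posterior-mixture mean. All numbers below are illustrative, not from the paper:

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    """log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def mixture_posterior_mean(components, successes, n):
    """components: list of (credibility, a, b) Beta prior components.
    Returns the posterior mean of the success probability after
    observing `successes` in `n` demonstration trials."""
    # beta-binomial marginal likelihood of the data under each component
    marginals = [comb(n, successes)
                 * exp(log_beta(a + successes, b + n - successes) - log_beta(a, b))
                 for _, a, b in components]
    weights = [c * m for (c, _, _), m in zip(components, marginals)]
    total = sum(weights)
    # credibility-and-evidence weighted average of conjugate posterior means
    return sum(w / total * (a + successes) / (a + b + n)
               for w, (_, a, b) in zip(weights, components))
```

With a single uniform Beta(1, 1) component this reduces to the usual Laplace-smoothed estimate; with several components, sources whose priors fit the demonstration data poorly are automatically down-weighted.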
Testability evaluation using prior information of multiple sources
Institute of Scientific and Technical Information of China (English)
Wang Chao; Qiu Jing; Liu Guanjun; Zhang Yong
2014-01-01
Testability plays an important role in improving the readiness and decreasing the life-cycle cost of equipment. Testability demonstration and evaluation is of significance in measuring such testability indexes as fault detection rate (FDR) and fault isolation rate (FIR), which is useful to the producer in mastering the testability level and improving the testability design, and helpful to the consumer in making purchase decisions. Aiming at the problems with a small sample of testability demonstration test data (TDTD) such as low evaluation confidence and inaccurate result, a testability evaluation method is proposed based on the prior information of multiple sources and Bayes theory. Firstly, the types of prior information are analyzed. The maximum entropy method is applied to the prior information with the mean and interval estimate forms on the testability index to obtain the parameters of prior probability density function (PDF), and the empirical Bayesian method is used to get the parameters for the prior information with a success-fail form. Then, a parametrical data consistency check method is used to check the compatibility between all the sources of prior information and TDTD. For the prior information to pass the check, the prior credibility is calculated. A mixed prior distribution is formed based on the prior PDFs and the corresponding credibility. The Bayesian posterior distribution model is acquired with the mixed prior distribution and TDTD, based on which the point and interval estimates are calculated. Finally, examples of a flying control system are used to verify the proposed method. The results show that the proposed method is feasible and effective.
Lessons learned: the effect of prior technology use on Web-based interventions.
Carey, Joanne C; Wade, Shari L; Wolfe, Christopher R
2008-04-01
This study examined the role of regular prior technology use in treatment response to an online family problem-solving (OFPS) intervention and an Internet resource intervention (IRI) for pediatric traumatic brain injury (TBI). Participants were 150 individuals in 40 families of children with TBI randomly assigned to OFPS intervention or an IRI. All families received free computers and Internet access to TBI resources. OFPS families received Web-based sessions and therapist-guided synchronous videoconferences focusing on problem solving, communication skills, and behavior management. All participants completed measures of depression, anxiety, and computer usage. OFPS participants rated treatment satisfaction, therapeutic alliance, and Web site and technology comfort. With the OFPS intervention, depression and anxiety improved significantly more among technology using parents (n = 14) than nontechnology users (n = 6). Technology users reported increasing comfort with technology over time, and this change was predictive of depression at followup. Satisfaction and ease-of-use ratings did not differ by technology usage. Lack of regular prior home computer usage and nonadherence were predictive of anxiety at followup. The IRI was not globally effective. However, controlling for prior depression, age, and technology at work, there was a significant effect of technology at home for depression. Families with technology experience at home (n = 11) reported significantly greater improvements in depression than families without prior technology experience at home (n = 8). Although Web-based OFPS was effective in improving caregiver functioning, individuals with limited computer experience may benefit less from an online intervention due to increased nonadherence.
Use of non-conjugate prior distributions in compound failure models. Final technical report
International Nuclear Information System (INIS)
Shultis, J.K.; Johnson, D.E.; Milliken, G.A.; Eckhoff, N.D.
1981-12-01
Several theoretical and computational techniques are presented for compound failure models in which the failure rate or failure probability for a class of components is considered to be a random variable. Both the failure-on-demand and failure-rate situations are considered. Ten different prior families are presented for describing the variation or uncertainty of the failure parameter. Methods considered for estimating values for the prior parameters from a given set of failure data are (1) matching data moments to those of the prior distribution, (2) matching data moments to those of the compound marginal distribution, and (3) the marginal maximum likelihood method. Numerical methods for computing the parameter estimators for all ten prior families are presented, as well as methods for obtaining estimates of the variances and covariances of the parameter estimators. It is shown that various confidence, probability, and tolerance intervals can be evaluated. Finally, to test the resulting failure models against the given failure data, generalized chi-square and Kolmogorov-Smirnov goodness-of-fit tests are proposed together with a test to eliminate outliers from the failure data. Computer codes based on the results presented here have been prepared and are presented in a companion report.
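The first estimation route above, matching data moments to those of the prior, has a closed form when a Beta prior is placed on a failure-on-demand probability. A sketch under the standard method-of-moments identities (the sample values are illustrative):

```python
def beta_moment_match(samples):
    """Match the sample mean and variance of observed failure
    probabilities to a Beta(alpha, beta) prior using
        mean = alpha / (alpha + beta)
        var  = mean * (1 - mean) / (alpha + beta + 1).
    """
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    common = mean * (1.0 - mean) / var - 1.0  # this is alpha + beta
    return mean * common, (1.0 - mean) * common

# hypothetical per-component failure-on-demand estimates
alpha, beta = beta_moment_match([0.02, 0.05, 0.03, 0.04, 0.06])
```

By construction the matched prior reproduces the sample mean exactly; the method only yields a valid (positive-parameter) Beta when the sample variance is below mean·(1 − mean), which is worth checking on real failure data.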
Construction and test of the PRIOR proton microscope; Aufbau und Test des Protonenmikroskops PRIOR
Energy Technology Data Exchange (ETDEWEB)
Lang, Philipp-Michael
2015-01-15
The study of High Energy Density Matter (HEDM) in the laboratory makes great demands on diagnostics, because these states can usually only be created for a short time and common diagnostic techniques using visible light or X-rays reach their limits at high densities. The high-energy proton radiography technique developed in the 1990s at the Los Alamos National Laboratory is a very promising way to overcome those limits, so that one can measure the density of HEDM with high spatial and temporal resolution. For this purpose the proton microscope PRIOR (Proton Radiography for FAIR) was set up at GSI, which not only reproduces the image but also magnifies it by a factor of 4.2, and thereby penetrates matter with a density of up to 20 g/cm². From the outset a spatial resolution of less than 30 μm and a time resolution on the nanosecond scale were achieved. This work describes details of the principle, design and construction of the proton microscope as well as first measurements and simulations of essential components such as magnetic lenses, a collimator and a scintillator screen. For the latter it was possible to show that plastic scintillators can be used as converters as an alternative to the slower but more radiation-resistant crystals, so that a time resolution of 10 ns can be reached. Moreover, the characteristics of the system were investigated at its commissioning in April 2014, and the changes in the magnetic field due to radiation damage were studied. Besides that, an overview of future applications is given. First experiments with Warm Dense Matter created using a pulsed-power setup have already been performed. Furthermore, the promising concept of combining proton radiography with particle therapy has been investigated in the context of the PaNTERA project. An outlook on the possibilities of future experiments at the FAIR accelerator facility is given as well. Because of higher beam intensity and energy one can expect even
Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics
Abe, Sumiyoshi
2014-11-01
The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
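As a reminder of the Jeffreys rule that C-MaxEnt is said to recover, the Bernoulli case works out as follows (a standard textbook derivation, not specific to this paper):

```latex
% Bernoulli likelihood and its Fisher information
p(x \mid \theta) = \theta^{x}(1-\theta)^{1-x}, \qquad
I(\theta) = \mathbb{E}\!\left[\left(\partial_{\theta}\log p(x\mid\theta)\right)^{2}\right]
          = \frac{1}{\theta(1-\theta)} .

% Jeffreys prior: proportional to the square root of the Fisher information
\pi(\theta) \propto \sqrt{I(\theta)} = \theta^{-1/2}(1-\theta)^{-1/2},
\quad\text{i.e.}\quad \theta \sim \mathrm{Beta}\!\left(\tfrac12,\tfrac12\right).
```

The point of the C-MaxEnt construction is that such priors fall out of a likelihood-only entropy argument rather than being postulated.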
Directory of Open Access Journals (Sweden)
C. Fabré Sentile
2006-05-01
This paper presents an interesting statistical compilation of the random characteristics of the wind with a view to considering its load effects on slender structures. The results found in several bibliographic sources are shown, also taking into account what is set out by various design standards in this respect. Statistical parameters, such as autocorrelation, autocovariance, the spectral density function, etc., are established as a function of the wind action. Key words: slender structures, random effect, statistical analysis of loads, statistics.
Adaptive nonparametric Bayesian inference using location-scale mixture priors
Jonge, de R.; Zanten, van J.H.
2010-01-01
We study location-scale mixture priors for nonparametric statistical problems, including multivariate regression, density estimation and classification. We show that a rate-adaptive procedure can be obtained if the prior is properly constructed. In particular, we show that adaptation is achieved if
Nudging toward Inquiry: Awakening and Building upon Prior Knowledge
Fontichiaro, Kristin, Comp.
2010-01-01
"Prior knowledge" (sometimes called schema or background knowledge) is information one already knows that helps him/her make sense of new information. New learning builds on existing prior knowledge. In traditional reporting-style research projects, students bypass this crucial step and plow right into answer-finding. It's no wonder that many…
Drunkorexia: Calorie Restriction Prior to Alcohol Consumption among College Freshman
Burke, Sloane C.; Cremeens, Jennifer; Vail-Smith, Karen; Woolsey, Conrad
2010-01-01
Using a sample of 692 freshmen at a southeastern university, this study examined caloric restriction among students prior to planned alcohol consumption. Participants were surveyed for self-reported alcohol consumption, binge drinking, and caloric intake habits prior to drinking episodes. Results indicated that 99 of 695 (14%) of first year…
Personality, depressive symptoms and prior trauma exposure of new ...
African Journals Online (AJOL)
Background. Police officers are predisposed to trauma exposure. The development of depression and post-traumatic stress disorder (PTSD) may be influenced by personality style, prior exposure to traumatic events and prior depression. Objectives. To describe the personality profiles of new Metropolitan Police Service ...
34 CFR 303.403 - Prior notice; native language.
2010-07-01
... 34 Education 2 2010-07-01 2010-07-01 false Prior notice; native language. 303.403 Section 303.403... TODDLERS WITH DISABILITIES Procedural Safeguards General § 303.403 Prior notice; native language. (a... file a complaint and the timelines under those procedures. (c) Native language. (1) The notice must be...
On the use of a pruning prior for neural networks
DEFF Research Database (Denmark)
Goutte, Cyril
1996-01-01
We address the problem of using a regularization prior that prunes unnecessary weights in a neural network architecture. This prior provides a convenient alternative to traditional weight-decay. Two examples are studied to support this method and illustrate its use. First we use the sunspots...
5 CFR 6201.103 - Prior approval for outside employment.
2010-01-01
... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Prior approval for outside employment. 6201.103 Section 6201.103 Administrative Personnel EXPORT-IMPORT BANK OF THE UNITED STATES SUPPLEMENTAL STANDARDS OF ETHICAL CONDUCT FOR EMPLOYEES OF THE EXPORT-IMPORT BANK OF THE UNITED STATES § 6201.103 Prior...
Prior authorisation schemes: trade barriers in need of scientific justification
Meulen, van der B.M.J.
2010-01-01
Case C-333/08 Commission v. French Republic ‘processing aids’ [2010] ECR-0000 French prior authorisation scheme for processing aids in food production infringes upon Article 34 TFEU** 1. A prior authorisation scheme not complying with the principle of proportionality, infringes upon Article 34 TFEU.
Nonstationary interference and scattering from random media
International Nuclear Information System (INIS)
Nazikian, R.
1991-12-01
For the small angle scattering of coherent plane waves from inhomogeneous random media, the three dimensional mean square distribution of random fluctuations may be recovered from the interferometric detection of the nonstationary modulational structure of the scattered field. Modulational properties of coherent waves scattered from random media are related to nonlocal correlations in the double sideband structure of the Fourier transform of the scattering potential. Such correlations may be expressed in terms of a suitably generalized spectral coherence function for analytic fields
Walters, Elizabeth R; Lesk, Valerie E
2016-01-01
The aim of this study was to investigate whether the prior consumption of 200 mg of pure caffeine affected neuropsychological test scores in a group of elderly participants aged over 60 years. Using a double-blind placebo versus caffeine design, participants were randomly assigned to receive 200 mg of caffeine or placebo. A neuropsychological assessment testing the domains of general cognitive function, processing speed, semantic memory, episodic memory, executive function, working memory and short-term memory was carried out. Significant interaction effects between age, caffeine and scores of executive function and processing speed were found; participants who had received caffeine showed a decline in performance with increasing age. This effect was not seen for participants who received placebo. The results highlight the need to consider and control prior caffeine consumption when scoring neuropsychological assessments in the elderly, which is important for accuracy of diagnosis and corresponding normative data. © 2016 S. Karger AG, Basel.
Iterative CT shading correction with no prior information
Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye
2015-11-01
Shading artifacts in CT images are caused by scatter contamination, the beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge of the relatively uniform CT number distribution within one tissue component. The CT image is first segmented to construct a template image in which each structure is filled with the CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line-integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is assisted only by general anatomical information, without relying on prior knowledge. The proposed method is thus practical
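The segment-template-residual-filter loop described above can be sketched in simplified form. The following toy version works entirely in the image domain: nearest-tissue-value snapping stands in for segmentation, and a box low-pass filter stands in for the paper's projection-domain filtering plus FDK reconstruction. All function names, the phantom, and the parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def box_filter(a, size):
    """Separable moving-average (box) low-pass filter, zero-padded at the edges."""
    k = np.ones(size) / size
    smooth = lambda v: np.convolve(v, k, mode='same')
    return np.apply_along_axis(smooth, 1, np.apply_along_axis(smooth, 0, a))

def shading_correct(img, tissue_values, n_iter=5, size=15):
    """Iteratively estimate and subtract low-frequency shading (simplified sketch)."""
    corrected = img.copy()
    for _ in range(n_iter):
        # Template: snap each pixel to the nearest nominal tissue CT number.
        idx = np.argmin(np.abs(corrected[..., None] - tissue_values), axis=-1)
        template = tissue_values[idx]
        # Low-pass filter the residual to isolate the smooth shading component
        # (image-domain stand-in for projection-domain filtering + FDK).
        shading = box_filter(corrected - template, size)
        corrected = corrected - shading
    return corrected

# Toy phantom: two tissue classes plus a smooth low-frequency shading field.
true_img = np.zeros((64, 64))
true_img[20:44, 20:44] = 1000.0                              # insert, in HU
u = np.linspace(0.0, 1.0, 64)
observed = true_img + 200.0 * np.sin(np.pi * u)[None, :]     # shading artifact
corrected = shading_correct(observed, np.array([0.0, 1000.0]))
err_before = np.sqrt(np.mean((observed - true_img) ** 2))
err_after = np.sqrt(np.mean((corrected - true_img) ** 2))
```

As in the paper, the loop converges because each pass removes most of the smooth residual while leaving the anatomy (the template structure) untouched.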
Variational segmentation problems using prior knowledge in imaging and vision
DEFF Research Database (Denmark)
Fundana, Ketut
This dissertation addresses variational formulation of segmentation problems using prior knowledge. Variational models are among the most successful approaches for solving many Computer Vision and Image Processing problems. The models aim at finding the solution to a given energy functional defined......, prior knowledge is needed to obtain the desired solution. The introduction of shape priors in particular, has proven to be an effective way to segment objects of interests. Firstly, we propose a prior-based variational segmentation model to segment objects of interest in image sequences, that can deal....... Many objects have high variability in shape and orientation. This often leads to unsatisfactory results, when using a segmentation model with single shape template. One way to solve this is by using more sophisticated shape models. We propose to incorporate shape priors from a shape sub...
Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.
Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan
2016-04-28
This paper presents a novel inverse synthetic aperture radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to improve performance on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better-focused radar image. In the proposed method, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters and avoid a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computation. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.
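The paper's logarithmic Laplacian prior has no simple closed-form MAP rule, but the Laplacian baseline it is compared against does: under Gaussian noise, MAP estimation with an ordinary Laplacian sparsity prior reduces to soft-thresholding. A minimal sketch of that standard baseline (not the proposed algorithm; names and numbers are illustrative):

```python
import numpy as np

def map_laplacian(y, noise_var, b):
    """MAP estimate of x from y = x + Gaussian noise under a Laplacian
    prior p(x) ~ exp(-|x|/b): soft-thresholding at noise_var / b."""
    t = noise_var / b
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

# strong coefficients survive (shrunk toward zero), weak ones are zeroed out
x_hat = map_laplacian(np.array([3.0, -0.5, 0.2]), noise_var=1.0, b=1.0)
```

A heavier-tailed prior such as the logarithmic Laplacian changes the shrinkage rule so that large coefficients are penalized less, which is the source of the claimed resolution improvement.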
Total Variability Modeling using Source-specific Priors
DEFF Research Database (Denmark)
Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou
2016-01-01
sequence of an utterance. In both cases the prior for the latent variable is assumed to be non-informative, since for homogeneous datasets there is no gain in generality in using an informative prior. This work shows in the heterogeneous case, that using informative priors for computing the posterior......, can lead to favorable results. We focus on modeling the priors using minimum divergence criterion or factor analysis techniques. Tests on the NIST 2008 and 2010 Speaker Recognition Evaluation (SRE) dataset show that our proposed method beats four baselines: For i-vector extraction using an already...... trained matrix, for the short2-short3 task in SRE’08, five out of eight female and four out of eight male common conditions, were improved. For the core-extended task in SRE’10, four out of nine female and six out of nine male common conditions were improved. When incorporating prior information...
Example-driven manifold priors for image deconvolution.
Ni, Jie; Turaga, Pavan; Patel, Vishal M; Chellappa, Rama
2011-11-01
Image restoration methods that exploit prior information about images to be estimated have been extensively studied, typically using the Bayesian framework. In this paper, we consider the role of prior knowledge of the object class in the form of a patch manifold to address the deconvolution problem. Specifically, we incorporate unlabeled image data of the object class, say natural images, in the form of a patch-manifold prior for the object class. The manifold prior is implicitly estimated from the given unlabeled data. We show how the patch-manifold prior effectively exploits the available sample class data for regularizing the deblurring problem. Furthermore, we derive a generalized cross-validation (GCV) function to automatically determine the regularization parameter at each iteration without explicitly knowing the noise variance. Extensive experiments show that this method performs better than many competitive image deconvolution methods.
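The GCV idea mentioned above, choosing the regularization parameter without knowing the noise variance, can be written down compactly for the Tikhonov/ridge case via the SVD. The sketch below is the generic textbook GCV score, not the paper's patch-manifold formulation; the toy deblurring-style problem and all names are illustrative.

```python
import numpy as np

def gcv_scores(A, y, lams):
    """GCV(lam) = n * ||(I - H)y||^2 / tr(I - H)^2, where
    H = A (A^T A + lam I)^{-1} A^T is the ridge hat matrix."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    uy = U.T @ y
    n = len(y)
    out = []
    for lam in lams:
        f = s**2 / (s**2 + lam)                          # filter factors
        resid2 = np.sum(((1 - f) * uy) ** 2) + (y @ y - uy @ uy)
        out.append(n * resid2 / (n - np.sum(f)) ** 2)
    return np.array(out)

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))
x_true = rng.normal(size=20)
y = A @ x_true + 0.1 * rng.normal(size=40)               # noise variance unknown to GCV
lams = np.logspace(-4, 4, 50)
best = lams[np.argmin(gcv_scores(A, y, lams))]
# solve the regularized problem at the GCV-selected parameter
x_hat = np.linalg.solve(A.T @ A + best * np.eye(20), A.T @ y)
```

The minimizer of the GCV curve plays the role of the per-iteration regularization parameter in the paper's scheme.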
International Nuclear Information System (INIS)
Guo Ya'nan; Jin Dapeng; Zhao Dixin; Liu Zhen'an; Qiao Qiao; Chinese Academy of Sciences, Beijing
2007-01-01
Due to the randomness of radioactive decay and nuclear reactions, the signals from detectors are random in time, whereas a normal pulse generator generates periodic pulses. To measure the performance of nuclear electronic devices under random inputs, a random pulse generator is necessary. Types of random pulse generator are reviewed, and two digital random pulse generators are introduced. (authors)
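Radioactive decay is a Poisson process, so a software stand-in for such a generator draws exponentially distributed inter-pulse intervals. A minimal sketch of the timing model (illustrative, not one of the reviewed hardware designs):

```python
import random

def poisson_pulse_times(rate_hz, duration_s, seed=1):
    """Pulse arrival times over [0, duration_s) with exponentially distributed
    intervals, mimicking the timing statistics of radioactive decay."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_hz)     # mean interval = 1 / rate_hz
        if t >= duration_s:
            return times
        times.append(t)

pulses = poisson_pulse_times(1000.0, 10.0)   # roughly 10000 pulses at 1 kHz
```

Driving a device-under-test with such a train, rather than a periodic one, exposes pile-up and dead-time behavior that a periodic generator cannot.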
Background: The total time a patient is disabled likely has a greater influence on his or her quality of life than the initial occurrence of disability alone. Objective: To compare the effect of a long-term, structured physical activity program with that of a health education intervention on the pro...
Objectives: The interactions between nutritional supplementation and physical activity on changes in physical function among older adults remain unclear. The primary objective of this study was to examine the impact of nutritional supplementation plus structured physical activity on 400M walk capaci...
Stamovlasis, Dimitrios; Tsitsipis, Georgios; Papageorgiou, George
2010-01-01
This work uses the concepts and tools of complexity theory to examine the effect of logical thinking and two cognitive styles, such as, the degree of field dependence/independence and the convergent/divergent thinking on students' understanding of the structure of matter. Students were categorized according to the model they adopted for the…
International Nuclear Information System (INIS)
MacLeod, C.; O'Donnell, A.; Tattersall, M.H.N.; Dalrymple, C.; Firth, I.
2001-01-01
Primary or neoadjuvant chemotherapy prior to definitive local therapy has potential advantages for locally advanced cervix cancer. It can downstage a cancer and allow definitive local therapy to be technically possible (surgery) or potentially more effective (radiotherapy). It can also eradicate subclinical systemic metastases. This report reviews a single institution's experience of neoadjuvant chemotherapy prior to definitive local therapy for cervix cancer over a 13-year period. One hundred and six patients were treated with this intent. The patients were analysed for their response to chemotherapy, treatment received, survival, relapse and toxicity. The chemotherapy was feasible and the majority of patients had a complete or partial response (58.5%). Eight patients did not proceed to local treatment. Forty-six patients had definitive surgery and 52 had definitive radiotherapy. The 5-year overall survival was 27%, and the majority of patients died with disease. The first site of relapse was usually the pelvis (46.2%). Late complications that required ongoing medical therapy (n = 6) or surgical intervention (n = 2) were recorded in eight patients (7.5%). On univariate analysis, stage (P = 0.04), tumour size (P = 0.01), lymph node status (P = 0.003), response to chemotherapy (P = 0.045) and treatment (P = 0.003) were all significant predictors of survival. On multivariate analysis, tumour size (P < 0.0001) and nodal status (P = 0.02) were significant predictors of survival. Despite the impressive responses of advanced cervix cancer to chemotherapy, there is evidence from randomized trials that it neither improves nor compromises survival prior to radiotherapy. As its role prior to surgery remains unclear, it should not be used in this setting outside a prospective randomized trial. Copyright (2001) Blackwell Science Pty Ltd
Dunbar, Megan S; Kang Dufour, Mi-Suk; Lambdin, Barrot; Mudekunye-Mahaka, Imelda; Nhamo, Definate; Padian, Nancy S
2014-01-01
Adolescent females in Zimbabwe are at high risk for HIV acquisition. Shaping the Health of Adolescents in Zimbabwe (SHAZ!) was a randomized controlled trial of a combined intervention package including life-skills and health education, vocational training, micro-grants and social supports, compared to life-skills and health education alone. SHAZ! was originally envisioned as a larger effectiveness trial; however, the intervention was scaled back due to contextual and economic conditions in the country at the time. SHAZ! enrolled 315 participants randomly assigned to study arm within blocks of 50 participants (158 intervention and 157 control). The intervention arm participants showed statistically significant differences from the control arm participants for several outcomes during the two years of follow-up, including reduced food insecurity [IOR = 0.83 vs. COR = 0.68, p = 0.02] and having their own income [IOR = 2.05 vs. COR = 1.67, p = 0.02]. Additionally, within the intervention arm there was a lower risk of transactional sex [IOR = 0.64, 95% CI (0.50, 0.83)] and a higher likelihood of using a condom with their current partner [IOR = 1.79, 95% CI (1.23, 2.62)] over time compared to baseline. There was also evidence of fewer unintended pregnancies among intervention participants [HR = 0.61, 95% CI (0.37, 1.01)], although this relationship achieved only marginal statistical significance. Several important challenges in this study included the coordination with vocational training programs, the political and economic instability of the area at the time of the study, and the difficulty in creating a true standard-of-care control arm. Overall the results of the SHAZ! study suggest important potential for HIV prevention intervention packages that include vocational training and micro-grants, and lessons for further economic livelihoods interventions with adolescent females. Further work is needed to refine the intervention model, and
Wetzels, Sandra; Kester, Liesbeth; Van Merriënboer, Jeroen; Broers, Nick
2010-01-01
Wetzels, S. A. J., Kester, L., Van Merriënboer, J. J. G., & Broers, N. J. (2011). The influence of prior knowledge on the retrieval-directed function of note taking in prior knowledge activation. British Journal of Educational Psychology, 81(2), 274-291. doi: 10.1348/000709910X517425
Energy Technology Data Exchange (ETDEWEB)
Yashchuk, V. V., E-mail: VVYashchuk@lbl.gov; Chan, E. R.; Lacey, I. [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Fischer, P. J. [Center for X-Ray Optics, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Physics Department, University of California Santa Cruz, Santa Cruz, California 94056 (United States); Conley, R. [Advance Photon Source, Argonne National Laboratory, Argonne, Illinois 60439 (United States); National Synchrotron Light Source II, Brookhaven National Laboratory, Upton, New York 11973 (United States); McKinney, W. R. [Diablo Valley College, 321 Golf Club Road, Pleasant Hill, California 94523 (United States); Artemiev, N. A. [KLA-Tencor Corp., 1 Technology Drive, Milpitas, California 95035 (United States); Bouet, N. [National Synchrotron Light Source II, Brookhaven National Laboratory, Upton, New York 11973 (United States); Cabrini, S. [Molecular Foundry, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Calafiore, G.; Peroz, C.; Babin, S. [aBeam Technologies, Inc., Hayward, California 94541 (United States)
2015-12-15
We present a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) one-dimensional sequences and two-dimensional arrays as an effective method for spectral characterization in the spatial frequency domain of a broad variety of metrology instrumentation, including interferometric microscopes, scatterometers, phase shifting Fizeau interferometers, scanning and transmission electron microscopes, and at this time, x-ray microscopes. The inherent power spectral density of BPR gratings and arrays, which has a deterministic white-noise-like character, allows a direct determination of the MTF with a uniform sensitivity over the entire spatial frequency range and field of view of an instrument. We demonstrate the MTF calibration and resolution characterization over the full field of a transmission soft x-ray microscope using a BPR multilayer (ML) test sample with 2.8 nm fundamental layer thickness. We show that beyond providing a direct measurement of the microscope’s MTF, tests with the BPRML sample can be used to fine tune the instrument’s focal distance. Our results confirm the universality of the method that makes it applicable to a large variety of metrology instrumentation with spatial wavelength bandwidths from a few nanometers to hundreds of millimeters.
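The "deterministic white-noise-like" power spectral density claimed for binary pseudo-random sequences can be checked directly: one period of a maximal-length (m-)sequence from a linear-feedback shift register has an exactly flat spectrum away from DC. A small sketch (the register length, taps and bit mapping are illustrative, not the BPR patterns fabricated for the paper):

```python
import numpy as np

def lfsr_msequence(taps, nbits, seed=1):
    """One period of a maximal-length binary sequence from a Fibonacci LFSR.
    taps are 1-indexed feedback positions of a primitive polynomial."""
    state, out = seed, []
    for _ in range(2**nbits - 1):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> (nbits - t)) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return np.array(out)

seq = lfsr_msequence([4, 3], 4)          # primitive taps, period 2^4 - 1 = 15
bipolar = 2.0 * seq - 1.0                # map {0, 1} -> {-1, +1}
psd = np.abs(np.fft.fft(bipolar)) ** 2
# every nonzero-frequency bin equals N + 1 = 16: flat, "white-noise-like"
```

It is this uniform power at every spatial frequency that lets a BPR test pattern probe the MTF with uniform sensitivity over the whole passband.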
Combining symbolic cues with sensory input and prior experience in an iterative Bayesian framework
Directory of Open Access Journals (Sweden)
Frederike Hermi Petzschner
2012-08-01
Full Text Available Perception and action are the result of an integration of various sources of information, such as current sensory input, prior experience, or the context in which a stimulus occurs. Often, the interpretation is not trivial and hence needs to be learned from the co-occurrence of stimuli. Yet, how do we combine such diverse information to guide our action? Here we use a distance production-reproduction task to investigate the influence of auxiliary, symbolic cues, sensory input, and prior experience on human performance under three different conditions that vary in the information provided. Our results indicate that subjects can (1) learn the mapping of a verbal, symbolic cue onto the stimulus dimension and (2) integrate symbolic information and prior experience into their estimate of displacements. The behavioral results are explained by two distinct generative models that represent different structural approaches of how a Bayesian observer would combine prior experience, sensory input, and symbolic cue information into a single estimate of displacement. The first model interprets the symbolic cue in the context of categorization, assuming that it reflects information about a distinct underlying stimulus range (categorical model). The second model applies a multi-modal integration approach and treats the symbolic cue as additional sensory input to the system, which is combined with the current sensory measurement and the subjects' prior experience (cue-combination model). Notably, both models account equally well for the observed behavior despite their different structural assumptions. The present work thus provides evidence that humans can interpret abstract symbolic information and combine it with other types of information such as sensory input and prior experience. The similar explanatory power of the two models further suggests that issues such as categorization and cue-combination could be explained by alternative probabilistic approaches.
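The core computation behind a cue-combination model of this kind is standard: independent Gaussian information sources are fused by precision weighting. A minimal sketch of that textbook rule (the numbers are made up for illustration, not data from the study):

```python
import numpy as np

def fuse_gaussian(means, variances):
    """Precision-weighted fusion of independent Gaussian information sources."""
    w = 1.0 / np.asarray(variances, dtype=float)
    mean = float(np.sum(w * np.asarray(means)) / np.sum(w))
    var = float(1.0 / np.sum(w))
    return mean, var

# prior experience, sensory measurement, symbolic cue (illustrative values)
mean, var = fuse_gaussian([10.0, 12.0, 14.0], [4.0, 1.0, 4.0])
# the most reliable source dominates: fused mean = 12.0, fused variance = 2/3
```

The categorical model differs only in how the symbolic cue enters: it selects which prior distribution applies rather than contributing a Gaussian likelihood term.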
Yao, W.; Polewski, P.; Krzystek, P.
2017-09-01
In this paper, a labelling method for the semantic analysis of ultra-high point density MLS data (up to 4000 points/m2) in urban road corridors is developed based on combining a conditional random field (CRF) for the context-based classification of 3D point clouds with shape priors. The CRF uses a Random Forest (RF) to generate the unary potentials of nodes and a variant of the contrast-sensitive Potts model for the pair-wise potentials of node edges. The classification is founded on various geometric features derived by means of covariance matrices and a local accumulation map of spatial coordinates based on local neighbourhoods. Meanwhile, in order to cope with the ultra-high point density, a plane-based region-growing method combined with a rule-based classifier is applied to first fix semantic labels for man-made objects. Once such points, which usually account for the majority of the data, are pre-labeled, the CRF classifier can be solved by optimizing the discriminative probability for nodes within a subgraph structure that excludes the pre-labeled nodes. The process can be viewed as an evidence-fusion step inferring a degree of belief for point labelling from different sources. The MLS data used for this study were acquired by vehicle-borne Z+F phase-based laser scanner measurement, which permits the generation of a point cloud with an ultra-high sampling rate and accuracy. The test sites are parts of Munich City, which is assumed to consist of seven object classes including impervious surfaces, tree, building roof/facade, low vegetation, vehicle and pole. The competitive classification performance can be explained by diverse factors: e.g. the above-ground height highlights the vertical dimension of houses, trees and even cars; it is also attributed to the decision-level fusion of the graph-based contextual classification approach with shape priors. The use of context-based classification methods mainly contributed to smoothing of labelling by removing
Learning priors for Bayesian computations in the nervous system.
Directory of Open Access Journals (Sweden)
Max Berniker
Full Text Available Our nervous system continuously combines new information from our senses with information it has acquired throughout life. Numerous studies have found that human subjects manage this by integrating their observations with their previous experience (priors) in a way that is close to the statistical optimum. However, little is known about the way the nervous system acquires or learns priors. Here we present results from experiments where the underlying distribution of target locations in an estimation task was switched, manipulating the prior subjects should use. Our experimental design allowed us to measure a subject's evolving prior while they learned. We confirm that through extensive practice subjects learn the correct prior for the task. We found that subjects can rapidly learn the mean of a new prior, while the variance is learned more slowly and with a variable learning rate. In addition, we found that a Bayesian inference model could predict the time course of the observed learning while offering an intuitive explanation for the findings. The evidence suggests the nervous system continuously updates its priors to enable efficient behavior.
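Sequential acquisition of a prior, as studied above, can be illustrated with conjugate Normal-Normal updating, where each observation shifts the learned prior mean by a Kalman-style gain while the prior's uncertainty shrinks. A toy sketch under those assumptions, not the paper's specific model:

```python
import numpy as np

def update_prior(mu, var, obs, obs_var):
    """One conjugate Normal-Normal update of a Gaussian prior on the target."""
    k = var / (var + obs_var)               # Kalman-style gain
    return mu + k * (obs - mu), (1.0 - k) * var

rng = np.random.default_rng(0)
true_mean, obs_var = 5.0, 1.0
mu, var = 0.0, 10.0                          # broad, initially wrong prior
for obs in rng.normal(true_mean, np.sqrt(obs_var), size=200):
    mu, var = update_prior(mu, var, obs, obs_var)
# the learned prior mean converges to 5 and its uncertainty collapses
```

In this idealized learner the mean and variance are learned at the same rate; the paper's finding that human subjects learn the mean faster than the variance is a deviation from this simple conjugate picture.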
Implicit Priors in Galaxy Cluster Mass and Scaling Relation Determinations
Mantz, A.; Allen, S. W.
2011-01-01
Deriving the total masses of galaxy clusters from observations of the intracluster medium (ICM) generally requires some prior information, in addition to the assumptions of hydrostatic equilibrium and spherical symmetry. Often, this information takes the form of particular parametrized functions used to describe the cluster gas density and temperature profiles. In this paper, we investigate the implicit priors on hydrostatic masses that result from this fully parametric approach, and the implications of such priors for scaling relations formed from those masses. We show that the application of such fully parametric models of the ICM naturally imposes a prior on the slopes of the derived scaling relations, favoring the self-similar model, and argue that this prior may be influential in practice. In contrast, this bias does not exist for techniques which adopt an explicit prior on the form of the mass profile but describe the ICM non-parametrically. Constraints on the slope of the cluster mass-temperature relation in the literature show a separation based on the approach employed, with the results from fully parametric ICM modeling clustering nearer the self-similar value. Given that a primary goal of scaling relation analyses is to test the self-similar model, the application of methods subject to strong, implicit priors should be avoided. Alternative methods and best practices are discussed.
The structure of leached sodium borosilicate glass
International Nuclear Information System (INIS)
Bunker, B.C.; Tallant, D.R.; Headley, T.J.; Turner, G.L.; Kirkpatrick, R.J.
1988-01-01
Raman spectroscopy, solid-state ²⁹Si, ¹¹B, ¹⁷O, and ²³Na nuclear magnetic resonance spectroscopy, and transmission electron microscopy have been used to investigate how the structures of sodium borosilicate glasses change during leaching in water at pH 1, 9, and 12. Results show that the random network structure present prior to leaching is transformed into a network of small condensed ring structures and/or colloidal silica particles. The restructuring of leached glass can be rationalised on the basis of simple hydrolysis (depolymerisation) and condensation (repolymerisation) reactions involving Si-O-Si and Si-O-B bonds. The structural changes that occur during leaching influence the properties of the leached layer, including leaching kinetics, crazing and spalling, and slow crack growth. (author)
Sarkar, Aritra; Vijayanand, V. D.; Parameswaran, P.; Shankar, Vani; Sandhya, R.; Laha, K.; Mathew, M. D.; Jayakumar, T.; Rajendra Kumar, E.
2014-06-01
Creep tests were carried out at 823 K (550 °C) and 210 MPa on Reduced Activation Ferritic-Martensitic (RAFM) steel which was subjected to different extents of prior fatigue exposure at 823 K at a strain amplitude of ±0.6 pct to assess the effect of prior fatigue exposure on creep behavior. Extensive cyclic softening that characterized the fatigue damage was found to be immensely deleterious for creep strength of the tempered martensitic steel. Creep rupture life was reduced to 60 pct of that of the virgin steel when the steel was exposed to as low as 1 pct of fatigue life. However, creep life saturated after fatigue exposure of 40 pct. Increase in minimum creep rate and decrease in creep rupture ductility with a saturating trend were observed with prior fatigue exposures. To substantiate these findings, detailed transmission electron microscopy studies were carried out on the steel. With fatigue exposures, extensive recovery of martensitic-lath structure was distinctly observed which supported the cyclic softening behavior that was introduced due to prior fatigue. Consequently, prior fatigue exposures were considered responsible for decrease in creep ductility and associated reduction in the creep rupture strength.
Systematic versus random sampling in stereological studies.
West, Mark J
2012-12-01
The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways by which one can make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling because the sampling of any one card is made without reference to the position of the other cards. The other approach to obtaining a random sample would be to pick a card within a set number of cards and others at equal intervals within the deck. Systematic sampling along one axis of many biological structures is more efficient than random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
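The card-deck analogy above maps directly onto code. A minimal sketch (Python, with illustrative deck size and sample size) contrasting the two schemes — independent random sampling picks any items at all, while systematic sampling picks one random start and then items at equal intervals:

```python
import random

def independent_random_sample(n_items, k, rng):
    # Pick k items without reference to each other's positions
    # (the "any card at all from the deck" scheme).
    return sorted(rng.sample(range(n_items), k))

def systematic_random_sample(n_items, k, rng):
    # Pick one random start within the first interval, then take
    # items at equal intervals through the whole "deck".
    interval = n_items // k
    start = rng.randrange(interval)
    return [start + i * interval for i in range(k)]

rng = random.Random(0)
print(independent_random_sample(52, 4, rng))
print(systematic_random_sample(52, 4, rng))
```

Only the first item of the systematic sample is random; the rest are determined by it, which is what makes the scheme efficient for non-randomly organized structures.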
Flindall, Ian; Leff, Daniel Richard; Goodship, Jonathan; Sugden, Colin; Darzi, Ara
2016-04-01
To evaluate the impact of modafinil on "free" and "cued" recall of clinical information in fatigued but nonsleep-deprived clinicians. Despite attempts to minimize sleep deprivation through redesign of the roster of residents and staff surgeons, evidence suggests that fatigue remains prevalent. The wake-promoting agent modafinil improves cognition in the sleep-deprived fatigued state and may improve information recall in fatigued nonsleep-deprived clinicians. Twenty-four medical undergraduates participated in a double-blind, parallel, randomized controlled trial (modafinil-200 mg:placebo). Medication was allocated 2 hours before a 90-minute fatigue-inducing, continuous performance task (dual 2-back task). A case history memorization task was then performed. Clinical information recall was assessed as "free" (no cognitive aids) and "cued" (using aide-memoires). Open and closed cues represent information of increasing specificity to aid the recall of clinical information. Fatigue was measured objectively using the psychomotor vigilance task at induction, before and after the dual 2-back task. Modafinil decreased false starts and lapses (modafinil = 0.50, placebo = 9.83, P recall (modafinil = 137.8, placebo = 106.0, P recalled with open (modafinil = 62.3, placebo = 52.8, P = .1) and closed cues (modafinil = 80.1, placebo = 75.9, P = .3). Modafinil attenuated fatigue and improved free recall of clinical information without improving cue-based recall under the design of our experimental conditions. Memory cues to aid retrieval of clinical information are convenient interventions that could decrease fatigue-related error without adverse effects of the neuropharmacology. Copyright © 2016 Elsevier Inc. All rights reserved.
A Bayesian Justification for Random Sampling in Sample Survey
Directory of Open Access Journals (Sweden)
Glen Meeden
2012-07-01
Full Text Available In the usual Bayesian approach to survey sampling the sampling design plays a minimal role, at best. Although a close relationship between exchangeable prior distributions and simple random sampling has been noted, how to formally integrate simple random sampling into the Bayesian paradigm is not clear. Recently it has been argued that the sampling design can be thought of as part of a Bayesian's prior distribution. We show here that under this scenario simple random sampling can be given a Bayesian justification in survey sampling.
Topics in random walks in random environment
International Nuclear Information System (INIS)
Sznitman, A.-S.
2004-01-01
Over the last twenty-five years random motions in random media have been intensively investigated and some new general methods and paradigms have by now emerged. Random walks in random environment constitute one of the canonical models of the field. However in dimension bigger than one they are still poorly understood and many of the basic issues remain to this day unresolved. The present series of lectures attempt to give an account of the progresses which have been made over the last few years, especially in the study of multi-dimensional random walks in random environment with ballistic behavior. (author)
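The ballistic behavior mentioned above can be illustrated with a toy one-dimensional simulation. This is only a hedged sketch, not a construction from the lectures: each site of the environment is assigned a right-step probability, drawn once and then held fixed (quenched), and drawing those probabilities uniformly above 1/2 on average produces a rightward drift:

```python
import random

def rwre_position(omega, n_steps, rng):
    """Walk n_steps in a fixed environment omega, where omega[x] is the
    probability of stepping right at site x; return the final position."""
    x = 0
    for _ in range(n_steps):
        p_right = omega[x % len(omega)]  # periodic environment, for simplicity
        x += 1 if rng.random() < p_right else -1
    return x

rng = random.Random(1)
# Environment drawn i.i.d. uniform on (0.4, 0.9): a net rightward bias,
# so the walk behaves ballistically (illustrative parameters).
omega = [rng.uniform(0.4, 0.9) for _ in range(1000)]
finals = [rwre_position(omega, 2000, rng) for _ in range(50)]
print(sum(finals) / len(finals))  # mean displacement; positive under this bias
```

Averaging over many walks in the same environment (a "quenched" average) is the setting in which most of the open questions of the field are posed.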
Jansson, Miia M; Syrjälä, Hannu P; Ohtonen, Pasi P; Meriläinen, Merja H; Kyngäs, Helvi A; Ala-Kokko, Tero I
2017-01-01
We evaluated the longitudinal effects of single-dose simulation education with structured debriefing and verbal feedback on critical care nurses' endotracheal suctioning knowledge and skills. To do this we used an experimental design without other competing intervention. Twenty-four months after simulation education, no significant time and group differences or time × group interactions were identified between the study groups. The need for regularly repeated educational interventions with audiovisual or individualized performance feedback and repeated bedside demonstrations is evident. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Estimating Functions with Prior Knowledge, (EFPK) for diffusions
DEFF Research Database (Denmark)
Nolsøe, Kim; Kessler, Mathieu; Madsen, Henrik
2003-01-01
In this paper a method is formulated in an estimating function setting for parameter estimation, which allows the use of prior information. The main idea is to use prior knowledge of the parameters, either specified as moment restrictions or as a distribution, and use it in the construction of an estimating function. It may be useful when the full Bayesian analysis is difficult to carry out for computational reasons. This is almost always the case for diffusions, which is the focus of this paper, though the method applies in other settings.
29 CFR 452.40 - Prior office holding.
2010-07-01
... DISCLOSURE ACT OF 1959 Candidacy for Office; Reasonable Qualifications § 452.40 Prior office holding. ... Wirtz v. Hotel, Motel and Club Employees Union, Local 6, 391 U.S. 492 at 504. The Court stated...
Form of prior for constrained thermodynamic processes with uncertainty
Aneja, Preety; Johal, Ramandeep S.
2015-05-01
We consider the quasi-static thermodynamic processes with constraints, but with additional uncertainty about the control parameters. Motivated by inductive reasoning, we assign a prior distribution that provides a rational guess about likely values of the uncertain parameters. The priors are derived explicitly for both the entropy-conserving and the energy-conserving processes. The proposed form is useful when the constraint equation cannot be treated analytically. The inference is performed using spin-1/2 systems as models for heat reservoirs. Analytical results are derived in the high-temperature limit. An agreement beyond linear response is found between the estimates of thermal quantities and their optimal values obtained from extremum principles. We also seek an intuitive interpretation for the prior and the estimated value of temperature obtained therefrom. We find that the prior over temperature becomes uniform over the quantity kept conserved in the process.
On the prior probabilities for two-stage Bayesian estimates
International Nuclear Information System (INIS)
Kohut, P.
1992-01-01
The method of Bayesian inference is reexamined for its applicability and for the required underlying assumptions in obtaining and using prior probability estimates. Two different approaches are suggested to determine the first-stage priors in the two-stage Bayesian analysis which avoid certain assumptions required for other techniques. In the first scheme, the prior is obtained through a true frequency-based distribution generated at selected intervals utilizing actual sampling of the failure rate distributions. The population variability distribution is generated as the weighted average of the frequency distributions. The second method is based on a non-parametric Bayesian approach using the Maximum Entropy Principle. Specific features such as integral properties or selected parameters of prior distributions may be obtained with minimal assumptions. It is indicated how various quantiles may also be generated with a least-squares technique.
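The first scheme described above — a population variability distribution formed as the weighted average of per-source frequency distributions — can be sketched numerically. The grid, source weights, and lognormal samples below are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.logspace(-4, -1, 60)  # candidate failure rates (illustrative grid)

def frequency_distribution(samples, grid):
    # Empirical frequency of sampled failure rates in each grid bin,
    # normalized so the in-grid mass sums to one.
    hist, _ = np.histogram(samples, bins=grid)
    return hist / hist.sum()

# Three hypothetical data sources with increasing spread around 3e-3.
sources = [rng.lognormal(mean=np.log(3e-3), sigma=s, size=500)
           for s in (0.3, 0.6, 1.0)]
weights = np.array([0.5, 0.3, 0.2])  # assumed relevance of each source
dists = np.stack([frequency_distribution(s, grid) for s in sources])

# The population variability distribution: weighted average of the
# per-source frequency distributions.
population_prior = weights @ dists
print(population_prior.sum())
```

Because the weights sum to one and each per-source distribution is normalized, the resulting first-stage prior is itself a proper (discretized) distribution.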
Prior Expectations Bias Sensory Representations in Visual Cortex
Kok, P.; Brouwer, G.J.; Gerven, M.A.J. van; Lange, F.P. de
2013-01-01
Perception is strongly influenced by expectations. Accordingly, perception has sometimes been cast as a process of inference, whereby sensory inputs are combined with prior knowledge. However, despite a wealth of behavioral literature supporting an account of perception as probabilistic inference,
Bayesian optimal experimental design for priors of compact support
Long, Quan
2016-01-01
to account for the bounded domain of the uniform prior pdf of the parameters. The underlying Gaussian distribution is obtained in the spirit of the Laplace method, more precisely, the mode is chosen as the maximum a posteriori (MAP) estimate
What good are actions? Accelerating learning using learned action priors
CSIR Research Space (South Africa)
Rosman, Benjamin S
2012-11-01
Full Text Available The computational complexity of learning in sequential decision problems grows exponentially with the number of actions available to the agent at each state. We present a method for accelerating this process by learning action priors that express...
Assessment of prior learning in vocational education and training
DEFF Research Database (Denmark)
Wahlgren, Bjarne; Aarkrog, Vibe
The article presents the results of a study of the assessment of prior learning among adult workers who want to obtain formal qualifications as skilled workers. The study contributes to developing methods for assessing prior learning, including both the teachers’ ways of eliciting the students’ knowledge, skills and competences during the students’ performances and the methods that the teachers apply in order to assess the students’ prior learning in relation to the regulations of the current VET program. In particular the study focuses on how to assess not only the students’ explicated knowledge and skills but also their competences, i.e. the way the students use their skills and knowledge to perform in practice. Based on a description of the assessment procedures the article discusses central issues in relation to the assessment of prior learning. The empirical data have been obtained in the VET...
Directory of Open Access Journals (Sweden)
Chen Lei
2011-12-01
Full Text Available Background To fully assess the various dimensions affected by schizophrenia, clinical trials often include multiple scales measuring various symptom profiles, cognition, quality of life, subjective well-being, and functional impairment. In this exploratory study, we characterized the relationships among six clinical, functional, cognitive, and quality-of-life measures, identifying a parsimonious set of measurements. Methods We used baseline data from a randomized, multicenter study of patients diagnosed with schizophrenia, schizoaffective disorder, or schizophreniform disorder who were experiencing an acute symptom exacerbation (n = 628) to examine the relationship among several outcome measures. These measures included the Positive and Negative Syndrome Scale (PANSS), Montgomery-Asberg Depression Rating Scale (MADRS), Brief Assessment of Cognition in Schizophrenia Symbol Coding Test, Subjective Well-being Under Neuroleptics Scale Short Form (SWN-K), Schizophrenia Objective Functioning Instrument (SOFI), and Quality of Life Scale (QLS). Three analytic approaches were used: (1) path analysis; (2) factor analysis; and (3) categorical latent variable analysis. In the optimal path model, the SWN-K was selected as the final outcome, while the SOFI mediated the effect of the exogenous variables (PANSS, MADRS) on the QLS. Results The overall model explained 47% of variance in QLS and 17% of the variance in SOFI, but only 15% in SWN-K. Factor analysis suggested four factors: "Functioning," "Daily Living," "Depression," and "Psychopathology." A strong positive correlation was observed between the SOFI and QLS (r = 0.669), and both the QLS and SOFI loaded on the "Functioning" factor, suggesting redundancy between these scales. The measurement profiles from the categorical latent variable analysis showed significant variation in functioning and quality of life despite similar levels of psychopathology. Conclusions Researchers should consider collecting PANSS, SOFI, and
Valid MR imaging predictors of prior knee arthroscopy
International Nuclear Information System (INIS)
Discepola, Federico; Le, Huy B.Q.; Park, John S.; Clopton, Paul; Knoll, Andrew N.; Austin, Matthew J.; Resnick, Donald L.
2012-01-01
To determine whether fibrosis of the medial patellar reticulum (MPR), lateral patellar reticulum (LPR), deep medial aspect of Hoffa's fat pad (MDH), or deep lateral aspect of Hoffa's fat pad (LDH) is a valid predictor of prior knee arthroscopy. Institutional review board approval and waiver of informed consent were obtained for this HIPAA-compliant study. Initially, fibrosis of the MPR, LPR, MDH, or LDH in MR imaging studies of 50 patients with prior knee arthroscopy and 100 patients without was recorded. Subsequently, two additional radiologists, blinded to clinical data, retrospectively and independently recorded the presence of fibrosis of the MPR in 50 patients with prior knee arthroscopy and 50 without. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy for detecting the presence of fibrosis in the MPR were calculated. κ statistics were used to analyze inter-observer agreement. Fibrosis of each of the regions examined during the first portion of the study showed a significant association with prior knee arthroscopy (p < 0.005 for each). A patient with fibrosis of the MPR, LDH, or LPR was 45.5, 9, or 3.7 times more likely, respectively, to have had a prior knee arthroscopy. Logistic regression analysis indicated that fibrosis of the MPR supplanted the diagnostic utility of identifying fibrosis of the LPR, LDH, or MDH, or combinations of these (p ≥ 0.09 for all combinations). In the second portion of the study, fibrosis of the MPR demonstrated a mean sensitivity of 82%, specificity of 72%, PPV of 75%, NPV of 81%, and accuracy of 77% for predicting prior knee arthroscopy. Analysis of MR images can be used to determine if a patient has had prior knee arthroscopy by identifying fibrosis of the MPR, LPR, MDH, or LDH. Fibrosis of the MPR was the strongest predictor of prior knee arthroscopy. (orig.)
Valid MR imaging predictors of prior knee arthroscopy
Energy Technology Data Exchange (ETDEWEB)
Discepola, Federico; Le, Huy B.Q. [McGill University Health Center, Jewish General Hospital, Division of Musculoskeletal Radiology, Montreal, Quebec (Canada); Park, John S. [Annapolis Radiology Associates, Division of Musculoskeletal Radiology, Annapolis, MD (United States); Clopton, Paul; Knoll, Andrew N.; Austin, Matthew J.; Resnick, Donald L. [University of California San Diego (UCSD), Division of Musculoskeletal Radiology, San Diego, CA (United States)
2012-01-15
To determine whether fibrosis of the medial patellar reticulum (MPR), lateral patellar reticulum (LPR), deep medial aspect of Hoffa's fat pad (MDH), or deep lateral aspect of Hoffa's fat pad (LDH) is a valid predictor of prior knee arthroscopy. Institutional review board approval and waiver of informed consent were obtained for this HIPAA-compliant study. Initially, fibrosis of the MPR, LPR, MDH, or LDH in MR imaging studies of 50 patients with prior knee arthroscopy and 100 patients without was recorded. Subsequently, two additional radiologists, blinded to clinical data, retrospectively and independently recorded the presence of fibrosis of the MPR in 50 patients with prior knee arthroscopy and 50 without. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy for detecting the presence of fibrosis in the MPR were calculated. κ statistics were used to analyze inter-observer agreement. Fibrosis of each of the regions examined during the first portion of the study showed a significant association with prior knee arthroscopy (p < 0.005 for each). A patient with fibrosis of the MPR, LDH, or LPR was 45.5, 9, or 3.7 times more likely, respectively, to have had a prior knee arthroscopy. Logistic regression analysis indicated that fibrosis of the MPR supplanted the diagnostic utility of identifying fibrosis of the LPR, LDH, or MDH, or combinations of these (p ≥ 0.09 for all combinations). In the second portion of the study, fibrosis of the MPR demonstrated a mean sensitivity of 82%, specificity of 72%, PPV of 75%, NPV of 81%, and accuracy of 77% for predicting prior knee arthroscopy. Analysis of MR images can be used to determine if a patient has had prior knee arthroscopy by identifying fibrosis of the MPR, LPR, MDH, or LDH. Fibrosis of the MPR was the strongest predictor of prior knee arthroscopy. (orig.)
Roemer, Frank W; Aydemir, Aida; Lohmander, Stefan; Crema, Michel D; Marra, Monica Dias; Muurahainen, Norma; Felson, David T; Eckstein, Felix; Guermazi, Ali
2016-07-09
A recent publication on efficacy of Sprifermin for knee osteoarthritis (OA) using quantitatively MRI-defined central medial tibio-femoral compartment cartilage thickness as the structural primary endpoint reported no statistically significant dose response. However, Sprifermin was associated with statistically significant, dose-dependent reductions in loss of total and lateral tibio-femoral cartilage thickness. Based on these preliminary promising data a post-hoc analysis of secondary assessments and endpoints was performed to evaluate potential effects of Sprifermin on semi-quantitatively evaluated structural MRI parameters. The aim of the present analysis was to determine the effects of Sprifermin on several knee joint tissues over a 12 month period. 1.5 T or 3 T MRIs were acquired at baseline and 12 months follow-up using a standard protocol. MRIs were read according to the Whole-Organ Magnetic Resonance Imaging Score (WORMS) scoring system (in 14 articular subregions) by four musculoskeletal radiologists independently. Analyses focused on semiquantitative changes in the 100 μg subgroup and matching placebo of multiple MRI-defined structural alterations. Analyses included a delta-subregional and delta-sum approach for the whole knee and the medial and lateral tibio-femoral (MTFJ, LTFJ), and patello-femoral (PFJ) compartments, taking into account the number of subregions showing no change, improvement or worsening and changes in the sum of subregional scores. Mann-Whitney-Wilcoxon tests assessed differences between groups. Fifty-seven and 18 patients were included in the treatment and matched placebo subgroups. Less worsening of cartilage damage was observed from baseline to 12 months in the PFJ (0.02, 95 % confidence interval (CI) (-0.04, 0.08) vs. placebo 0.22, 95 % CI (-0.05, 0.49), p = 0.046). For bone marrow lesions (BMLs), more improvement was observed from 6 to 12 months for whole knee analyses (-0.14, 95 % CI (-0.48, 0.19) vs. placebo 0.44, 95
Yoon, Young Jun; Seo, Jae Hwa; Kang, In Man
2018-04-01
In this work, we present a capacitorless one-transistor dynamic random-access memory (1T-DRAM) based on an asymmetric double-gate Ge/GaAs-heterojunction tunneling field-effect transistor (TFET) for DRAM applications. The n-doped boosting layer and gate2 drain-underlap structure are employed in the device to obtain an excellent 1T-DRAM performance. The n-doped layer inserted between the source and channel regions improves the sensing margin because of a high rate of increase in the band-to-band tunneling (BTBT) probability. Furthermore, because the gate2 drain-underlap structure reduces the recombination rate that occurs between the gate2 and drain regions, a device with a gate2 drain-underlap length (L_G2_D-underlap) of 10 nm exhibited a longer retention performance. As a result, by applying the n-doped layer and gate2 drain-underlap structure, the proposed device exhibited not only a high sensing margin of 1.11 µA/µm but also a long retention time of greater than 100 ms at a temperature of 358 K (85 °C).
Generalized multiple kernel learning with data-dependent priors.
Mao, Qi; Tsang, Ivor W; Gao, Shenghua; Wang, Li
2015-06-01
Multiple kernel learning (MKL) and classifier ensemble are two mainstream methods for solving learning problems in which some sets of features/views are more informative than others, or the features/views within a given set are inconsistent. In this paper, we first present a novel probabilistic interpretation of MKL such that maximum entropy discrimination with a noninformative prior over multiple views is equivalent to the formulation of MKL. Instead of using the noninformative prior, we introduce a novel data-dependent prior based on an ensemble of kernel predictors, which enhances the prediction performance of MKL by leveraging the merits of the classifier ensemble. With the proposed probabilistic framework of MKL, we propose a hierarchical Bayesian model to learn the proposed data-dependent prior and classification model simultaneously. The resultant problem is convex and other information (e.g., instances with either missing views or missing labels) can be seamlessly incorporated into the data-dependent priors. Furthermore, a variety of existing MKL models can be recovered under the proposed MKL framework and can be readily extended to incorporate these priors. Extensive experiments demonstrate the benefits of our proposed framework in supervised and semisupervised settings, as well as in tasks with partial correspondence among multiple views.
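The core MKL construct — a convex combination of base kernels, with the weights playing the role of the prior over views — is short enough to sketch. The RBF bandwidths and the "data-dependent" weights below are illustrative assumptions, not the paper's learned values:

```python
import numpy as np

def rbf_kernel(X, gamma):
    # Gaussian (RBF) kernel matrix over the rows of X.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def combined_kernel(X, gammas, weights):
    # Convex combination of base kernels: the essence of MKL.
    Ks = [rbf_kernel(X, g) for g in gammas]
    return sum(w * K for w, K in zip(weights, Ks))

X = np.random.default_rng(0).normal(size=(20, 3))
uniform = np.ones(3) / 3               # noninformative prior: equal weights
data_dep = np.array([0.6, 0.3, 0.1])   # e.g. derived from an ensemble of
                                       # per-kernel predictors (hypothetical)
K = combined_kernel(X, gammas=(0.1, 1.0, 10.0), weights=data_dep)
print(K.shape)
```

A nonnegative combination of positive semidefinite kernels is itself a valid kernel, so `K` can be handed directly to any kernel classifier; learning the weights jointly with the classifier is what the paper's hierarchical Bayesian model does.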
SUSPENSION OF THE PRIOR DISCIPLINARY INVESTIGATION ACCORDING TO LABOR LAW
Directory of Open Access Journals (Sweden)
Nicolae, GRADINARU
2014-11-01
Full Text Available In order to conduct the prior disciplinary investigation, the employee shall be convoked in writing by the person authorized by the employer to carry out the investigation, specifying the subject, date, time and place of the meeting. For this purpose the employer shall appoint a committee charged with conducting the prior disciplinary investigation. The prior disciplinary investigation cannot be conducted without giving the accused person the possibility to defend himself; it would be an abuse for the employer to violate these provisions. Since the employee is entitled to formulate and sustain a defence to prove innocence or a lesser degree of guilt than imputed, a reasonable term must elapse between the moment the charges are disclosed to the employee and the moment the prior disciplinary investigation is performed, so that the employee is able to prepare a defence. The employee's failure to appear at the convocation without an objective reason entitles the employer to impose the sanction without conducting the prior disciplinary investigation. The objective reason that renders the employee who is subject to the prior disciplinary investigation unable to attend it must exist at the time of the investigation in question.
Energy Technology Data Exchange (ETDEWEB)
Cavinato, M.; Marangoni, M.; Saruis, A.M.
1990-10-01
This report describes the COINCIDENCE code written for the IBM 3090/300E computer in Fortran 77. The output data of this code are the (e, e'x) threefold differential cross-sections, the nuclear structure functions, the polarization asymmetry and the angular correlation coefficients. In the real photon limit, the output data are the angular distributions for plane-polarized incident photons. The code reads from tape the transition matrix elements previously calculated by continuum self-consistent RPA (random phase approximation) theory with Skyrme interactions. This code has been used to perform a numerical analysis of coincidence (e, e'x) reactions with polarized electrons on the ¹⁶O nucleus.
Segmentation of kidney using C-V model and anatomy priors
Lu, Jinghua; Chen, Jie; Zhang, Juan; Yang, Wenjia
2007-12-01
This paper presents an approach for kidney segmentation on abdominal CT images as the first step of a virtual reality surgery system. Segmentation of medical images is often challenging because of the objects' complicated anatomical structures, various gray levels, and unclear edges. A coarse-to-fine approach has been applied to the kidney segmentation using the Chan-Vese model (C-V model) and anatomical prior knowledge. In the pre-processing stage, the candidate kidney regions are located. Then the C-V model, formulated by the level set method, is applied to these smaller ROIs, which reduces the calculation complexity to a certain extent. Finally, after some mathematical morphology procedures, the specified kidney structures are extracted interactively with prior knowledge. The satisfying results on abdominal CT series show that the proposed approach keeps all the advantages of the C-V model and overcomes its disadvantages.
Tong, Xiaolin; Xu, Jia; Lian, Fengmei; Yu, Xiaotong; Zhao, Yufeng; Xu, Lipeng; Zhang, Menghui; Zhao, Xiyan; Shen, Jian; Wu, Shengping; Pang, Xiaoyan; Tian, Jiaxing; Zhang, Chenhong; Zhou, Qiang; Wang, Linhua; Pang, Bing; Chen, Feng; Peng, Zhiping; Wang, Jing; Zhen, Zhong; Fang, Chao; Li, Min; Chen, Limei; Zhao, Liping
2018-05-22
Accumulating evidence implicates gut microbiota as promising targets for the treatment of type 2 diabetes mellitus (T2DM). With a randomized clinical trial, we tested the hypothesis that alteration of gut microbiota may be involved in the alleviation of T2DM with hyperlipidemia by metformin and a specifically designed herbal formula (AMC). Four hundred fifty patients with T2DM and hyperlipidemia were randomly assigned to either the metformin- or AMC-treated group. After 12 weeks of treatment, 100 patients were randomly selected from each group and assessed for clinical improvement. The effects of the two drugs on the intestinal microbiota were evaluated by analyzing the V3 and V4 regions of the 16S rRNA gene by Illumina sequencing and multivariate statistical methods. Both metformin and AMC significantly alleviated hyperglycemia and hyperlipidemia and shifted gut microbiota structure in diabetic patients. They significantly increased a coabundant group represented by Blautia spp., which significantly correlated with the improvements in glucose and lipid homeostasis. However, AMC showed better efficacies in improving homeostasis model assessment of insulin resistance (HOMA-IR) and plasma triglyceride and also exerted a larger effect on gut microbiota. Furthermore, only AMC increased the coabundant group represented by Faecalibacterium spp., which was previously reported to be associated with the alleviation of T2DM in a randomized clinical trial. Metformin and the Chinese herbal formula may ameliorate type 2 diabetes with hyperlipidemia via enriching beneficial bacteria, such as Blautia and Faecalibacterium spp. IMPORTANCE Metabolic diseases such as T2DM and obesity have become a worldwide public health threat. Accumulating evidence indicates that gut microbiota can causatively arouse metabolic diseases, and thus the gut microbiota serves as a promising target for disease control. In this study, we evaluated the role of gut microbiota during improvements in
Lindgren, B F; Ruokonen, E; Magnusson-Borg, K; Takala, J
2001-02-01
Patients with sepsis and trauma are characterised by hypermetabolism, insulin resistance and protein catabolism. Fat emulsions containing medium chain triglycerides have been suggested to be beneficial for these patients since medium chain fatty acids are a more readily available source of energy when compared to long chain fatty acids. The aim of this study was to compare a medium and long chain triglyceride emulsion consisting of structured triglycerides (ST) with a long chain triglyceride (LCT) emulsion in terms of effects on nitrogen balance, energy metabolism and safety. 30 ICU patients with sepsis or multiple injury received a fat emulsion with ST or 20% LCT (1.5 g triglycerides/kg body weight/day) as a component of total parenteral nutrition (TPN), for 5 days in a double blind randomised parallel group design. The main analysis was made on the 3 day per protocol population due to lack of patients at day 5. There were no differences in baseline characteristics of the two groups receiving either the LCT or the ST emulsion. The efficacy analysis was performed on the per protocol population (n=9 ST, n=11 LCT). There was a significant difference between the two treatments regarding daily nitrogen balances when the first 3 days were analysed (P=0.0038). This resulted in an amelioration of the nitrogen balance on day 3 in the group on ST as compared to those on LCT (0.1+/-2.4 g vs -9.9+/-2.1 g, P=0.01). The 3 day cumulative nitrogen balance was significantly better in the group receiving ST compared to those on LCT (-0.7+/-6.0 vs -16.7+/-3.9, P=0.03). This better cumulative nitrogen balance on day 3 was also preserved as a tendency (P=0.061) in the analysis of the intention to treat population, but on day 5 there was no significant difference (P=0.08). The ST emulsion was well tolerated and no difference was found compared to the LCT emulsion regarding respiratory quotient, energy expenditure, glucose or triglyceride levels during infusion. Administration of a
Generating random networks and graphs
Coolen, Ton; Roberts, Ekaterina
2017-01-01
This book supports researchers who need to generate random networks, or who are interested in the theoretical study of random graphs. The coverage includes exponential random graphs (where the targeted probability of each network appearing in the ensemble is specified), growth algorithms (i.e. preferential attachment and the stub-joining configuration model), special constructions (e.g. geometric graphs and Watts-Strogatz models) and graphs on structured spaces (e.g. multiplex networks). The presentation aims to be a complete starting point, including details of both theory and implementation, as well as discussions of the main strengths and weaknesses of each approach. It includes extensive references for readers wishing to go further. The material is carefully structured to be accessible to researchers from all disciplines while also containing rigorous mathematical analysis (largely based on the techniques of statistical mechanics) to support those wishing to further develop or implement the theory of rand...
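One of the growth algorithms mentioned, preferential attachment, is compact enough to sketch. This is a simplified illustration (duplicate target draws within a step are collapsed), not the book's exact implementation:

```python
import random

def preferential_attachment(n_nodes, m, rng):
    """Grow a graph where each new node attaches to up to m existing nodes,
    chosen with probability proportional to current degree."""
    targets = list(range(m))  # start by wiring the first new node to m seeds
    repeated = []             # node list repeated in proportion to degree
    edges = []
    for new in range(m, n_nodes):
        for t in set(targets):          # collapse duplicate draws
            edges.append((new, t))
        repeated.extend(targets)        # targets gained degree
        repeated.extend([new] * m)      # so did the new node
        # Sampling uniformly from `repeated` is sampling by degree.
        targets = [rng.choice(repeated) for _ in range(m)]
    return edges

edges = preferential_attachment(100, 2, random.Random(0))
print(len(edges))
```

The degree-proportional sampling is what produces the heavy-tailed degree distributions characteristic of preferential-attachment ensembles; the stub-joining configuration model, by contrast, fixes the whole degree sequence up front.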
International Nuclear Information System (INIS)
Chan, M.T.; Herman, G.T.; Levitan, E.
1996-01-01
We demonstrate that (i) classical methods of image reconstruction from projections can be improved upon by considering the output of such a method as a distorted version of the original image and applying a Bayesian approach to estimate from it the original image (based on a model of distortion and on a Gibbs distribution as the prior) and (ii) by selecting an "image-modeling" prior distribution (i.e., one which is such that a random sample from it is likely to share important characteristics of the images of the application area) one can improve over another Gibbs prior formulated using only pairwise interactions. We illustrate our approach using simulated Positron Emission Tomography (PET) data from realistic brain phantoms. Since algorithm performance ultimately depends on the diagnostic task being performed, we examine a number of different medically relevant figures of merit to give a fair comparison. Based on a training-and-testing evaluation strategy, we demonstrate that statistically significant improvements can be obtained using the proposed approach.
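The pairwise-interaction Gibbs prior that the authors improve upon can be sketched in a few lines. The quadratic neighbor potential below is a common textbook choice, assumed here purely for illustration:

```python
import numpy as np

def pairwise_gibbs_energy(img, beta=1.0):
    """Energy of a simple pairwise-interaction Gibbs prior.

    U(x) = beta * sum of squared differences over vertically and horizontally
    adjacent pixel pairs; the prior density is p(x) proportional to exp(-U(x)),
    so smoother images receive higher prior probability.
    """
    d_rows = np.diff(img, axis=0)  # differences between vertical neighbors
    d_cols = np.diff(img, axis=1)  # differences between horizontal neighbors
    return beta * (np.sum(d_rows**2) + np.sum(d_cols**2))

smooth = np.ones((4, 4))
noisy = smooth.copy()
noisy[1, 1] = 5.0  # one outlier pixel raises the energy
```

An "image-modeling" prior in the sense of the abstract would replace this generic smoothness energy with one whose random samples resemble real images of the application area.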
Prior Exposure to Zika Virus Significantly Enhances Peak Dengue-2 Viremia in Rhesus Macaques
George, Jeffy; Valiant, William G.; Mattapallil, Mary J.; Walker, Michelle; Huang, Yan-Jang S.; Vanlandingham, Dana L.; Misamore, John; Greenhouse, Jack; Weiss, Deborah E.; Verthelyi, Daniela; Higgs, Stephen; Andersen, Hanne; Lewis, Mark G.; Mattapallil, Joseph J.
2017-01-01
Structural and functional homologies between the Zika and Dengue viruses' envelope proteins raise the possibility that cross-reactive antibodies induced following Zika virus infection might enhance subsequent Dengue infection. Using the rhesus macaque model we show that prior infection with Zika virus leads to a significant enhancement of Dengue-2 viremia that is accompanied by neutropenia, lymphocytosis, hyperglycemia, and higher reticulocyte counts, along with the activation of pro-inflammat...
Neutrino masses and their ordering: global data, priors and models
Gariazzo, S.; Archidiacono, M.; de Salas, P. F.; Mena, O.; Ternes, C. A.; Tórtola, M.
2018-03-01
We present a full Bayesian analysis of the combination of current neutrino oscillation, neutrinoless double beta decay and Cosmic Microwave Background observations. Our major goal is to carefully investigate the possibility to single out one neutrino mass ordering, namely Normal Ordering (NO) or Inverted Ordering (IO), with current data. Two possible parametrizations (three neutrino masses versus the lightest neutrino mass plus the two oscillation mass splittings) and priors (linear versus logarithmic) are exhaustively examined. We find that the preference for NO is only driven by neutrino oscillation data. Moreover, the values of the Bayes factor indicate that the evidence for NO is strong only when the scan is performed over the three neutrino masses with logarithmic priors; for every other combination of parametrization and prior, the preference for NO is only weak. As a by-product of our Bayesian analyses, we are able to (a) compare the Bayesian bounds on the neutrino mixing parameters to those obtained by means of frequentist approaches, finding a very good agreement; (b) determine that the parametrization in terms of the lightest neutrino mass plus the two mass splittings, motivated by the physical observables, is strongly preferred over a scan of the three neutrino mass eigenstates; and (c) find that logarithmic priors guarantee a weakly-to-moderately more efficient sampling of the parameter space. These results establish the optimal strategy to successfully explore the neutrino parameter space, based on the use of the oscillation mass splittings and a logarithmic prior on the lightest neutrino mass, when combining neutrino oscillation data with cosmology and neutrinoless double beta decay. We also show that the limits on the total neutrino mass ∑ mν can change dramatically when moving from one prior to the other. These results have profound implications for future studies on the neutrino mass ordering, as they crucially state the need for self-consistent analyses which explore the
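The prior dependence of bounds on ∑mν described above is easy to reproduce in miniature. The sketch below samples the lightest mass under a linear and a logarithmic prior for Normal Ordering; the splitting values are approximate and the prior range is an illustrative choice of ours:

```python
import numpy as np

DM21_SQ, DM31_SQ = 7.5e-5, 2.5e-3  # approximate oscillation mass splittings (eV^2)

def total_mass_normal_ordering(m_lightest):
    """Sum of the three neutrino masses for Normal Ordering, given the
    lightest mass (eV) and the two mass splittings."""
    m1 = np.asarray(m_lightest, dtype=float)
    m2 = np.sqrt(m1**2 + DM21_SQ)
    m3 = np.sqrt(m1**2 + DM31_SQ)
    return m1 + m2 + m3

rng = np.random.default_rng(0)
lo, hi = 1e-4, 0.5  # illustrative prior range for the lightest mass (eV)
linear_samples = rng.uniform(lo, hi, 100_000)
log_samples = np.exp(rng.uniform(np.log(lo), np.log(hi), 100_000))
# the induced distribution of the total mass depends strongly on the prior:
# a log-uniform prior concentrates mass near the lower end of the range
```

The logarithmic prior pushes the bulk of the samples toward small ∑mν, which is the mechanism behind the dramatic prior dependence of the cosmological mass limits noted in the abstract.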
Finding A Minimally Informative Dirichlet Prior Using Least Squares
International Nuclear Information System (INIS)
Kelly, Dana
2011-01-01
In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
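In the same least-squares spirit (though not the paper's exact objective), one can fix the Dirichlet mean at the target alpha-factors and choose the concentration by minimizing a squared-error criterion on the marginal spread. The target values below are invented for illustration:

```python
import numpy as np

def fit_dirichlet(means, target_sd, grid=np.linspace(1.01, 100, 5000)):
    """Toy least-squares choice of Dirichlet parameters.

    Fix alpha_i = a0 * mean_i (so the marginal means match the targets) and
    pick the concentration a0 that minimizes the sum of squared differences
    between the marginal standard deviations, sqrt(mu_i(1-mu_i)/(a0+1)),
    and a desired spread target_sd.
    """
    means = np.asarray(means, dtype=float)
    best_a0, best_loss = None, np.inf
    for a0 in grid:
        sd = np.sqrt(means * (1 - means) / (a0 + 1))
        loss = np.sum((sd - target_sd) ** 2)
        if loss < best_loss:
            best_a0, best_loss = a0, loss
    return best_a0 * means

alpha = fit_dirichlet([0.7, 0.2, 0.1], target_sd=0.15)
```

A small concentration keeps the prior diffuse, so updates with sparse common-cause failure data can still move the posterior appreciably, which is exactly the "responsive to updates" property the abstract asks for.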
Prior-based artifact correction (PBAC) in computed tomography
International Nuclear Information System (INIS)
Heußer, Thorsten; Brehm, Marcus; Ritschl, Ludwig; Sawall, Stefan; Kachelrieß, Marc
2014-01-01
Purpose: Image quality in computed tomography (CT) often suffers from artifacts which may reduce the diagnostic value of the image. In many cases, these artifacts result from missing or corrupt regions in the projection data, e.g., in the case of metal, truncation, and limited angle artifacts. The authors propose a generalized correction method for different kinds of artifacts resulting from missing or corrupt data by making use of available prior knowledge to perform data completion. Methods: The proposed prior-based artifact correction (PBAC) method requires prior knowledge in the form of a planning CT of the same patient or in the form of a CT scan of a different patient showing the same body region. In both cases, the prior image is registered to the patient image using a deformable transformation. The registered prior is forward projected and data completion of the patient projections is performed using smooth sinogram inpainting. The obtained projection data are used to reconstruct the corrected image. Results: The authors investigate metal and truncation artifacts in patient data sets acquired with a clinical CT and limited angle artifacts in an anthropomorphic head phantom data set acquired with a gantry-based flat detector CT device. In all cases, the corrected images obtained by PBAC are nearly artifact-free. Compared to conventional correction methods, PBAC achieves better artifact suppression while preserving the patient-specific anatomy at the same time. Further, the authors show that prominent anatomical details in the prior image seem to have only minor impact on the correction result. Conclusions: The results show that PBAC has the potential to effectively correct for metal, truncation, and limited angle artifacts if adequate prior data are available. Since the proposed method makes use of a generalized algorithm, PBAC may also be applicable to other artifacts resulting from missing or corrupt data.
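The data-completion step at the heart of PBAC can be caricatured with plain arrays standing in for sinograms. Real PBAC uses deformable registration, a forward projector, and smooth inpainting; the substitution below is only a toy stand-in for that pipeline, with all values fabricated:

```python
import numpy as np

def complete_projections(patient_sino, prior_sino, corrupt_cols):
    """Toy 'data completion' step of a PBAC-style pipeline.

    Corrupt projection columns (e.g. a metal trace, marked here with NaN)
    are filled from the forward-projected, registered prior image. The real
    method blends the two smoothly via sinogram inpainting.
    """
    out = patient_sino.astype(float).copy()
    out[:, corrupt_cols] = prior_sino[:, corrupt_cols]
    return out

# fabricated sinograms; in practice these come from a forward projector
patient = np.ones((8, 8))
patient[:, 3:5] = np.nan            # corrupt region in the projection data
prior = np.full((8, 8), 0.9)        # registered prior, slightly off in value
fixed = complete_projections(patient, prior, corrupt_cols=slice(3, 5))
```

The corrected sinogram keeps the measured patient data wherever it is valid and borrows the prior only in the corrupt region, mirroring the abstract's claim that patient-specific anatomy is preserved.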
Finding a minimally informative Dirichlet prior distribution using least squares
International Nuclear Information System (INIS)
Kelly, Dana; Atwood, Corwin
2011-01-01
In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
Finding a Minimally Informative Dirichlet Prior Distribution Using Least Squares
International Nuclear Information System (INIS)
Kelly, Dana; Atwood, Corwin
2011-01-01
In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in closed form, and so an approximate beta distribution is used in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial aleatory model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
Thrombolysis in patients with prior stroke within the last 3 months.
Heldner, M R; Mattle, H P; Jung, S; Fischer, U; Gralla, J; Zubler, C; El-Koussy, M; Schroth, G; Arnold, M; Mono, M-L
2014-12-01
Patients with prior stroke within 3 months have been mostly excluded from randomized thrombolysis trials, mainly because of the fear of an increased rate of symptomatic intracerebral hemorrhage (sICH). The aim of this study was to compare baseline characteristics and clinical outcome of thrombolyzed patients who had a previous stroke within the last 3 months with those not fulfilling this criterion (comparison group). In all, 1217 patients were included in our analysis (42.2% women, mean age 68.8 ± 14.4 years). Patients with previous stroke within the last 3 months (n=17; 1.4%) more often had a basilar artery occlusion (41.2% vs. 10.8%), less frequently had a modified Rankin scale (mRS) score 0-1 prior to index stroke (88.2% vs. 97.3%) and had a longer mean time from symptom onset to thrombolysis (321 min vs. 262 min) than those in the comparison group. Stroke severity was not different between the two groups. Rates of sICH were 11.8% vs. 6%. None of the sICHs and only one asymptomatic intracerebral hemorrhage occurred in the region of the former infarct. At 3 months, the rate of favorable outcome (mRS ≤ 2) in patients with previous stroke within 3 months was 29.4% (vs. 48.9%) and mortality 41.2% (vs. 22.7%). The high mortality was influenced by four patients who died before discharge due to the acute major index stroke. It is reasonable to include these patients in randomized clinical trials and registries to further assess their thrombolysis benefit-risk ratio. © 2014 The Author(s) European Journal of Neurology © 2014 EAN.
Fatigue Reliability under Random Loads
DEFF Research Database (Denmark)
Talreja, R.
1979-01-01
We consider the problem of estimating the probability of survival (non-failure) and the probability of safe operation (strength greater than a limiting value) of structures subjected to random loads. These probabilities are formulated in terms of the probability distributions of the loads...... propagation stage. The consequences of this behaviour on the fatigue reliability are discussed....
Assessment of Prior Learning in Adult Vocational Education and Training
Directory of Open Access Journals (Sweden)
Vibe Aarkrog
2015-04-01
The article presents the results of a study of school-based Assessment of Prior Learning for adults who have enrolled as students in a VET college in order to qualify for occupations as skilled workers. Based on examples of VET teachers' methods for assessing the students' prior learning in the programs for gastronomes and child care assistants, respectively, the article discusses two issues in relation to Assessment of Prior Learning: the encounter between practical experience and school-based knowledge, and the validity and reliability of the assessment procedures. Through its focus on the students' knowing that and knowing why, the assessment is based on a scholastic perception of the students' needs for training, reflecting one of the most important challenges in Assessment of Prior Learning: how can practical experience be transformed into credits for the knowledge parts of the programs? The study shows that by combining several Assessment of Prior Learning methods and comparing the teachers' assessments, the teachers respond to the issues of validity and reliability. However, validity and reliability might be strengthened even further if the competencies are well defined, if the education system is aware of securing a reasonable balance between knowing how, knowing that, and knowing why, and if the teachers are adequately trained for the assessment procedures.
Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging
Directory of Open Access Journals (Sweden)
Shuanghui Zhang
2016-04-01
This paper presents a novel Inverse Synthetic Aperture Radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve performance improvement on sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed method of ISAR imaging, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. Maximum a posteriori (MAP) estimation and maximum likelihood estimation (MLE) are utilized to estimate the model parameters, avoiding a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to minimize the required computational cost. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.
Source Localization by Entropic Inference and Backward Renormalization Group Priors
Directory of Open Access Journals (Sweden)
Nestor Caticha
2015-04-01
A systematic method of transferring information from coarser to finer resolution based on renormalization group (RG) transformations is introduced. It permits building informative priors in finer scales from posteriors in coarser scales since, under some conditions, RG transformations in the space of hyperparameters can be inverted. These priors are updated using renormalized data into posteriors by Maximum Entropy. The resulting inference method, backward RG (BRG) priors, is tested by simulating a functional magnetic resonance imaging (fMRI) experiment. Its results are compared with a Bayesian approach working at the finest available resolution. Using BRG priors, sources can be partially identified even when signal-to-noise ratio levels are as low as ~ -25 dB, improving vastly on the single-step Bayesian approach. For low levels of noise the BRG prior is not an improvement over the single-scale Bayesian method. Analysis of the histograms of hyperparameters can show how to distinguish whether the method is failing due to very high levels of noise, or whether identification of the sources is at least partially possible.
A Noninformative Prior on a Space of Distribution Functions
Directory of Open Access Journals (Sweden)
Alexander Terenin
2017-07-01
In a given problem, the Bayesian statistical paradigm requires the specification of a prior distribution that quantifies relevant information about the unknowns of main interest external to the data. In cases where little such information is available, the problem under study may possess an invariance under a transformation group that encodes a lack of information, leading to a unique prior; this idea was explored at length by E. T. Jaynes. Previous successful examples have included location-scale invariance under linear transformation, multiplicative invariance of the rate at which events in a counting process are observed, and the derivation of the Haldane prior for a Bernoulli success probability. In this paper we show that this method can be extended, by generalizing Jaynes, in two ways: (1) to yield families of approximately invariant priors; and (2) to the infinite-dimensional setting, yielding families of priors on spaces of distribution functions. Our results can be used to describe conditions under which a particular Dirichlet Process posterior arises from an optimal Bayesian analysis, in the sense that invariances in the prior and likelihood lead to one and only one posterior distribution.
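The Haldane prior mentioned above gives the simplest worked example of an invariance-derived prior in action: under the improper Beta(0, 0) prior, the posterior after s successes and f failures is Beta(s, f), so the posterior mean equals the observed fraction. A minimal sketch (the function name is ours):

```python
def haldane_posterior(successes, failures):
    """Posterior Beta(a, b) parameters for a Bernoulli probability under the
    improper Haldane prior Beta(0, 0).

    The posterior is proper only once both outcomes have been observed, so
    the mean is reported as None until then.
    """
    a, b = successes, failures
    mean = a / (a + b) if successes > 0 and failures > 0 else None
    return a, b, mean

a, b, mean = haldane_posterior(7, 3)
```

The fact that the posterior mean reduces to the sample fraction, with no contribution from the prior, is what makes the Haldane prior a natural "lack of information" benchmark in the group-invariance program the paper generalizes.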
On Bayesian reliability analysis with informative priors and censoring
International Nuclear Information System (INIS)
Coolen, F.P.A.
1996-01-01
In the statistical literature many methods have been presented to deal with censored observations, both within the Bayesian and non-Bayesian frameworks, and such methods have been successfully applied to, e.g., reliability problems. Also, in reliability theory it is often emphasized that, through shortage of statistical data and limited possibilities for experiments, one often needs to rely heavily on the judgements of engineers, or other experts, which makes Bayesian methods attractive. It is therefore important that such judgements can be elicited easily to provide informative prior distributions that reflect the knowledge of the engineers well. In this paper we focus on this aspect, especially on the situation where the judgements of the consulted engineers are based on experience in environments where censoring has also been present previously. We suggest the use of the attractive interpretation of the hyperparameters of conjugate prior distributions when these are available for assumed parametric models for lifetimes, and we show how one may go beyond the standard conjugate priors, using similar interpretations of hyperparameters, to enable easier elicitation when censoring has been present in the past. This may even lead to more flexibility for modelling prior knowledge than when using standard conjugate priors, whereas the disadvantage of the more complicated calculations that may be needed to determine posterior distributions plays a minor role thanks to the advanced mathematical and statistical software that is widely available these days.
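For exponential lifetimes the hyperparameter interpretation exploited here is concrete: with a Gamma(a, b) prior on the failure rate, a reads as pseudo-failures and b as pseudo-exposure, and right-censored units simply add their survival time to the exposure. A minimal sketch with illustrative numbers (not taken from the paper):

```python
def update_gamma_exponential(a, b, failure_times, censoring_times):
    """Conjugate update for an exponential failure rate lambda ~ Gamma(a, b).

    The shape gains one per observed failure; the rate gains all exposure
    time, from failed and right-censored units alike. The hyperparameters
    thus read as 'a pseudo-failures over b units of pseudo-exposure'.
    """
    k = len(failure_times)
    total_time = sum(failure_times) + sum(censoring_times)
    return a + k, b + total_time

# two failures at 120 h and 80 h, two units censored at 200 h each
post_a, post_b = update_gamma_exponential(0.5, 100.0, [120.0, 80.0], [200.0, 200.0])
posterior_mean_rate = post_a / post_b  # mean of a Gamma(post_a, post_b)
```

Because censored units enter only through the exposure term, an engineer whose experience comes from censored environments can state a prior directly in these pseudo-data units, which is the elicitation convenience the abstract emphasizes.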
International Nuclear Information System (INIS)
Vaegler, Sven; Sauer, Otto; Stsepankou, Dzmitry; Hesser, Juergen
2015-01-01
The reduction of dose in cone beam computed tomography (CBCT) arises from decreasing the tube current for each projection as well as from reducing the number of projections. In order to maintain good image quality, sophisticated image reconstruction techniques are required. Prior Image Constrained Compressed Sensing (PICCS) incorporates prior images into the reconstruction algorithm and outperforms the widely used Feldkamp-Davis-Kress algorithm (FDK) when the number of projections is reduced. However, prior images that contain major variations are not appropriately considered so far in PICCS. We therefore propose the partial-PICCS (pPICCS) algorithm. This framework is a problem-specific extension of PICCS and additionally enables the incorporation of the reliability of the prior images. We assumed that the prior images are composed of areas with large and small deviations. Accordingly, a weighting matrix accounted for the assigned areas in the objective function. We applied our algorithm to the problem of image reconstruction from few views, using simulations with a computer phantom as well as clinical CBCT projections from a head-and-neck case. All prior images contained large local variations. The reconstructed images were compared to the reconstruction results of the FDK algorithm, Compressed Sensing (CS) and PICCS. To show the gain in image quality we compared image details with the reference image and used quantitative metrics (root-mean-square error (RMSE), contrast-to-noise ratio (CNR)). The pPICCS reconstruction framework yielded images of substantially improved quality even when the number of projections was very small. The images contained less streaking, blurring and fewer inaccurately reconstructed structures compared to the images reconstructed by FDK, CS and conventional PICCS. The increased image quality is also reflected in large RMSE differences. We proposed a modification of the original PICCS algorithm. The pPICCS algorithm
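The role of such a reliability weighting can be illustrated on a toy 1D signal. The finite-difference sparsifying transform and l1 penalties below are generic compressed-sensing choices assumed for illustration, not the paper's exact objective:

```python
import numpy as np

def weighted_prior_objective(x, x_prior, weights, alpha=0.5):
    """Toy pPICCS-style objective on a 1D signal.

    An l1 prior-similarity term, weighted elementwise by a reliability map
    (1 = trust the prior, 0 = ignore it), is combined with an l1 sparsity
    term on the image itself; finite differences play the role of the
    sparsifying transform.
    """
    prior_term = np.sum(np.abs(weights[:-1] * np.diff(x - x_prior)))
    self_term = np.sum(np.abs(np.diff(x)))
    return alpha * prior_term + (1 - alpha) * self_term

x = np.array([0.0, 0.0, 1.0, 1.0, 0.0])
good_prior = x.copy()
bad_prior = np.array([0.0, 1.0, 0.0, 1.0, 1.0])
w = np.ones_like(x)  # fully trusted prior everywhere
```

A reliable prior lowers the objective while an unreliable one raises it; zeroing the weights over the unreliable region removes its influence entirely, which is the behavior the weighting matrix is designed to provide.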
Foulsham, W S; Fu, L; Tatham, A J
2015-10-01
Trend-based analyses examining rates of visual field (VF) loss in glaucoma are useful for predicting the risk of vision-related morbidity. Although patients with faster losses are likely to require treatment escalation, little is known about the rates that might trigger a decision to intervene surgically. The aims of this study were to investigate prior rates of VF loss in patients attending for trabeculectomy and to estimate, in the absence of surgical intervention, the lifetime risk of visual impairment and blindness. A retrospective analysis of 117 eyes of 86 consecutive patients with glaucoma attending for trabeculectomy, including 53 patients referred from general ophthalmology clinics and 33 patients from specialist glaucoma clinics. Rates of change in standard automated perimetry mean deviation were examined using linear regression and random coefficient models. The risk of lifetime visual impairment and blindness was calculated using life expectancy data. Mean age at surgery was 71.0±9.7 years. Patients were followed for 10.7±7.5 years prior to surgery with an average of seven usable fields per eye. On average, patients referred from general clinics lost 1.04 dB/year compared with 0.77 dB/year in those referred from glaucoma clinics (P=0.070). Patients referred from general clinics had more medication changes prior to surgery (3.4 and 2.6 changes, respectively; P=0.004). Given Scottish life expectancy data, without treatment 61 eyes (52%) would have passed the threshold for visual impairment, whereas 40 (34%) would have passed the threshold demarcating blindness. Patients attending for trabeculectomy had faster average rates of field loss prior to surgery than published values for the general glaucoma population, with over one-third of the eyes studied predicted to have become blind without intervention. Those managed by glaucoma specialists had fewer changes in medication and tended to have slower rates of VF loss, although the latter did not reach statistical significance.
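The dB/year rates quoted above come from exactly this kind of trend fit; a minimal sketch with fabricated measurements:

```python
import numpy as np

def md_rate_db_per_year(years, mean_deviation_db):
    """Rate of visual-field loss (dB/year) as the slope of an ordinary
    least-squares fit of mean deviation against time since baseline."""
    slope, _intercept = np.polyfit(years, mean_deviation_db, 1)
    return slope

# illustrative series: roughly -1 dB/year with measurement noise
years = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
md = np.array([-3.0, -4.2, -4.9, -6.1, -6.8, -8.1])
rate = md_rate_db_per_year(years, md)
```

Per-eye slopes like this feed the study's random coefficient models and, combined with life expectancy, give the predicted lifetime risk of crossing the impairment and blindness thresholds.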
Hepatosplenic Candidiasis Without Prior Documented Candidemia: An Underrecognized Diagnosis?
van Prehn, Joffrey; Menke-van der Houven van Oordt, C Willemien; de Rooij, Madelon L; Meijer, Ellen; Bomers, Marije K; van Dijk, Karin
2017-08-01
Patients with a history of chemotherapy or stem cell transplantation (SCT) and prolonged neutropenia are at risk for hepatic and/or splenic seeding of Candida. In our experience, hepatosplenic candidiasis (HSC) without documented candidemia often remains unrecognized. We describe three cases of HSC without documented candidemia and the challenges in establishing the diagnosis and adequately treating this condition. The first patient had a history of SCT for treatment of breast cancer and was scheduled for hemihepatectomy for suspected liver metastasis. A second opinion at our institute resulted in the diagnosis of hepatic candidiasis without prior documented candidemia, for which she was treated successfully with fluconazole. The second case demonstrates the limitations of (blood and tissue) cultures and the value of molecular methods to confirm the diagnosis. Case 3 illustrates treatment challenges, with ongoing dissemination and insufficient source control despite months of antifungal therapy, eventually resulting in a splenectomy. A structured literature search was performed for articles describing any patient with HSC and documented blood culture results. Thirty articles were available for extraction of data on candidemia and HSC. Seventy percent (131/187) of patients with HSC did not have documented candidemia. The majority of HSC events were described in hematologic patients, although some cases were described in patients with solid tumors treated with SCT (n=1) or chemotherapy and a history of leukopenia (n=2). Current guidelines and practices for diagnosis and treatment are described. Clinicians should be aware that HSC most often occurs without documented candidemia. In case of persistent or unexplained fever or lesions in the liver and/or spleen, a history of neutropenia should place disseminated candidiasis in the differential diagnosis. HSC is not limited to hematological patients and may occur in patients with solid tumors treated with
Superposing pure quantum states with partial prior information
Dogra, Shruti; Thomas, George; Ghosh, Sibasish; Suter, Dieter
2018-05-01
The principle of superposition is an intriguing feature of quantum mechanics, which is regularly exploited in many different circumstances. A recent work [M. Oszmaniec et al., Phys. Rev. Lett. 116, 110403 (2016), 10.1103/PhysRevLett.116.110403] shows that the fundamentals of quantum mechanics restrict the process of superimposing two unknown pure states, even though it is possible to superimpose two quantum states with partial prior knowledge. The prior knowledge imposes geometrical constraints on the choice of input states. We discuss an experimentally feasible protocol to superimpose multiple pure states of a d-dimensional quantum system and carry out an explicit experimental realization for two single-qubit pure states with partial prior information on a two-qubit NMR quantum information processor.
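Numerically, superposing two known state vectors is just a normalized linear combination. The paper's protocol works on unknown states with partial prior information, which is the hard part; this sketch only shows the target operation:

```python
import numpy as np

def superpose(psi, phi, alpha, beta):
    """Normalized superposition alpha|psi> + beta|phi> of two pure state
    vectors; a direct linear-algebra sketch, not the NMR protocol itself."""
    out = alpha * np.asarray(psi, dtype=complex) + beta * np.asarray(phi, dtype=complex)
    return out / np.linalg.norm(out)

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = superpose(ket0, ket1, 1, 1)  # equal superposition of |0> and |1>
```

The no-go result cited in the abstract says precisely that no quantum device can implement this map for arbitrary unknown inputs; the geometrical constraints from partial prior knowledge are what make a physical protocol possible.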
Schneider, Michael; Rittle-Johnson, Bethany; Star, Jon R
2011-11-01
Competence in many domains rests on children developing conceptual and procedural knowledge, as well as procedural flexibility. However, research on the developmental relations between these different types of knowledge has yielded unclear results, in part because little attention has been paid to the validity of the measures or to the effects of prior knowledge on the relations. To overcome these problems, we modeled the three constructs in the domain of equation solving as latent factors and tested (a) whether the predictive relations between conceptual and procedural knowledge were bidirectional, (b) whether these interrelations were moderated by prior knowledge, and (c) how both constructs contributed to procedural flexibility. We analyzed data from 2 measurement points each from two samples (Ns = 228 and 304) of middle school students who differed in prior knowledge. Conceptual and procedural knowledge had stable bidirectional relations that were not moderated by prior knowledge. Both kinds of knowledge contributed independently to procedural flexibility. The results demonstrate how changes in complex knowledge structures contribute to competence development.
Understanding sleep disturbance in athletes prior to important competitions.
Juliff, Laura E; Halson, Shona L; Peiffer, Jeremiah J
2015-01-01
Anecdotally, many athletes report worse sleep in the nights prior to important competitions. Despite sleep being acknowledged as an important factor for optimal athletic performance and overall health, little is understood about athlete sleep around competition. The aims of this study were to identify sleep complaints of athletes prior to competitions and to determine whether complaints were confined to competition periods. Cross-sectional study. A sample of 283 elite Australian athletes (129 male, 157 female, age 24±5 y) completed two questionnaires: the Competitive Sport and Sleep questionnaire and the Pittsburgh Sleep Quality Index. In total, 64.0% of athletes indicated worse sleep on at least one occasion in the nights prior to an important competition over the past 12 months. The main sleep problem specified by athletes was difficulty falling asleep (82.1%), with the main reasons given for poor sleep being thoughts about the competition (83.5%) and nervousness (43.8%). Overall, 59.1% of team sport athletes reported having no strategy to overcome poor sleep, compared with individual sport athletes (32.7%, p=0.002), who utilised relaxation and reading as strategies. Individual sport athletes had an increased likelihood of poor sleep as they aged. The poor sleep reported by athletes prior to competition was situational rather than a global sleep problem. Poor sleep is common prior to major competitions in Australian athletes, yet most athletes are unaware of strategies to overcome the poor sleep experienced. It is essential that coaches and scientists monitor and educate both individual and team sport athletes to facilitate sleep prior to important competitions. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.