Energy Technology Data Exchange (ETDEWEB)
Lamart, St.
2008-10-15
In vivo measurement is an efficient method for estimating the retention of activity in cases of internal contamination. However, it is currently limited by the use of physical phantoms for calibration, which can reproduce neither the morphology of the measured person nor the actual distribution of the contamination. The current calibration method therefore leads to significant systematic uncertainties in the quantification of the contamination. To improve in vivo measurement, the Laboratory of Internal Dose Assessment (LEDI, IRSN) has developed an original numerical calibration method based on the OEDIPE software. It relies on voxel phantoms created from the medical images of individuals, coupled with the MCNPX Monte Carlo particle transport code. The first version of this software could model simple homogeneous sources and better estimate the systematic uncertainties in the lung counting of actinides due to the detector position and to the heterogeneous distribution of activity inside the lungs. However, it could not take into account the dynamic, and often heterogeneous, distribution of activity among body organs and tissues, even though the efficiency of the detection system depends on the distribution of the source of activity. The main purpose of this thesis work is to answer the question: what is the influence of the biokinetics of radionuclides on in vivo counting? To answer it, OEDIPE had to be deeply modified. This new development made it possible to model the source of activity more realistically, from the reference biokinetic models defined by the ICRP. The first part of the work consisted in developing the numerical tools needed to integrate biokinetics into OEDIPE. A methodology was then developed to quantify its influence on in vivo counting from the simulation results. This method was carried out and validated on the model of the LEDI in vivo counting system.
Finally, the procedure was applied to the in vivo counting system of the medical laboratory of AREVA NC La Hague and to a real contamination case. This work made it possible to study and quantify the incomplete knowledge of the body distribution of activity as another systematic source of uncertainty. Discrepancies of the order of 50% were found in the estimation of the retention of activity from lung measurement of the 59.54 keV ray of Am-241 in the first days following the contamination. The developed method will be used in the AREVA NC La Hague laboratory and can be applied in any laboratory dedicated to the in vivo counting of nuclear workers, to correct the efficiency calibration depending on the biokinetics. By mitigating the associated source of uncertainty, this work will therefore contribute to optimizing the estimation of internal dose. (author)
Energy Technology Data Exchange (ETDEWEB)
Noelle, P
2006-12-15
In vivo lung counting, one of the preferred methods for monitoring people exposed to the risk of actinide inhalation, is nevertheless limited by the use of physical calibration phantoms which, for technical reasons, can only provide a rough representation of human tissue. A new approach to in vivo measurements has been developed to take advantage of advances in medical imaging and computing; it consists of numerical phantoms based on computed tomography (CT) or magnetic resonance (MRI) images combined with Monte Carlo computing techniques. With the laboratory implementation of this innovative method in specific software called O.E.D.I.P.E., the main thrust of this thesis was to provide answers to the following question: what do numerical phantoms and new techniques like O.E.D.I.P.E. contribute to improving the calibration of low-energy in vivo counting systems? After some developments of the O.E.D.I.P.E. interface, the numerical method was validated for systems composed of four germanium detectors, the most widespread configuration in radiobioassay laboratories (a good match was found, with less than 10% variation). This study represents the first step towards a person-specific numerical calibration of counting systems, which will improve assessment of the retained activity. A second stage, focusing on an exhaustive evaluation of the uncertainties encountered in in vivo lung counting, was made possible by the previously validated O.E.D.I.P.E. software. It was shown that the uncertainties suggested by experiments in a previous study were underestimated, notably those arising from morphological differences between the physical phantom and the measured person.
Some improvements in the measurement procedure were then proposed, in particular new biometric equations specific to French measurement configurations that allow a more sensible choice of calibration phantom by directly assessing the thickness of the torso plate to be added to the Livermore phantom from the weight and height of the measured person. Lastly, the study underlined the value of numerical phantoms and Monte Carlo simulation through actual contamination cases of the lungs or of wounds, which are impossible to study using traditional methods. In the case of contaminated wounds, this method was used to adjust the level of retained activity in an actual hand injury and should improve the determination of the source geometry, thereby refining the dose calculation. Personalized calibration of counting systems (for morphology or for the distribution of radionuclides in the body) appears possible thanks to this innovative method and represents an important step towards the implementation of personalized dosimetry. (author)
Update of the FANTOM web resource
DEFF Research Database (Denmark)
Lizio, Marina; Harshbarger, Jayson; Abugessaisa, Imad
2017-01-01
Upon the first publication of the fifth iteration of the Functional Annotation of Mammalian Genomes collaborative project, FANTOM5, we gathered a series of primary data and database systems into the FANTOM web resource (http://fantom.gsc.riken.jp) to facilitate researchers to explore transcription...
FANTOM: Functional and taxonomic analysis of metagenomes
Directory of Open Access Journals (Sweden)
Sanli Kemal
2013-02-01
Full Text Available Abstract Background Interpretation of quantitative metagenomics data is important for our understanding of ecosystem functioning and assessing differences between various environmental samples. There is a need for an easy to use tool to explore the often complex metagenomics data in taxonomic and functional context. Results Here we introduce FANTOM, a tool that allows for exploratory and comparative analysis of metagenomics abundance data integrated with metadata information and biological databases. Importantly, FANTOM can make use of any hierarchical database and it comes supplied with NCBI taxonomic hierarchies as well as KEGG Orthology, COG, PFAM and TIGRFAM databases. Conclusions The software is implemented in Python, is platform independent, and is available at http://www.sysbio.se/Fantom.
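The hierarchical roll-up that such a tool performs can be illustrated with a minimal sketch (hypothetical data and function names, not FANTOM's actual code): leaf-level abundances are propagated up a parent map so that every internal node of, say, a taxonomic hierarchy reports the total abundance of its subtree.

```python
# Roll leaf-level abundance counts up a hierarchy (e.g. a taxonomy or
# KEGG Orthology tree) so each internal node sums its subtree.
# Hypothetical toy example, not FANTOM's implementation.

def rollup(parent, leaf_counts):
    """parent: child -> parent mapping; leaf_counts: leaf -> abundance."""
    totals = {}
    for leaf, count in leaf_counts.items():
        node = leaf
        while node is not None:
            totals[node] = totals.get(node, 0) + count
            node = parent.get(node)   # None once we pass the root
    return totals

parent = {"E.coli": "Gammaproteobacteria", "Gammaproteobacteria": "Bacteria",
          "B.subtilis": "Firmicutes", "Firmicutes": "Bacteria"}
counts = {"E.coli": 120, "B.subtilis": 80}
print(rollup(parent, counts))  # "Bacteria" totals 200
```

Any hierarchical database can be plugged in this way by supplying a different parent map.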
Update History of This Database - FANTOM5 | LSDB Archive [Life Science Database Archive metadata
Lifescience Database Archive (English)
Full Text Available FANTOM5 Update History of This Database. Date / update contents: 2017/03/14, FANTOM5 English archive ... (Update History of This Database - FANTOM5 | LSDB Archive)
License - FANTOM5 | LSDB Archive [Life Science Database Archive metadata
Lifescience Database Archive (English)
Full Text Available FANTOM5 License: License to Use This Database. Last updated: 2017/03/14. You may use this database in compliance with the terms and conditions of the license described below. The license specifies the license terms regarding the use of this database and the requirements you must follow in using this database. The license is Creative Commons Attribution 4.0 International. If you use data from this database, please be sure to attribute this database as follows: FANTOM5 © RIKEN, licensed under Creative Commons Attribution 4.0 International.
Gateways to the FANTOM5 promoter level mammalian expression atlas
DEFF Research Database (Denmark)
Lizio, Marina; Harshbarger, Jayson; Shimoji, Hisashi
2015-01-01
The FANTOM5 project investigates transcription initiation activities in more than 1,000 human and mouse primary cells, cell lines and tissues using CAGE. Based on manual curation of sample information and development of an ontology for sample classification, we assemble the resulting data into a ...
Index of /data/fantom5/20161221 [Life Science Database Archive metadata
Lifescience Database Archive (English)
Full Text Available Index of /data/fantom5/20161221. Name / Last modified / Size / Description: Parent Directory, -; README.html, 22-Dec-2016 10:09, 28K; README_e.html, 14-Mar-2017 11:16, 25K; fantom5_new_experime..>, 12-Dec-2016 16:12, 273K; fantom5_rp_exp_detai..>, 13-Dec-2016 09:59, 236K; sym_link/, 05-Jan-2017 13:49, -. (Index of /data/fantom5/20161221)
Lifescience Database Archive (English)
Full Text Available FANTOM5 Sample ontology, GOstat and ontology term enrichment. Data detail. Data name: Sample ontology, GOstat and ontology term enrichment. DOI: 10.18908/lsdba.nbdc01389-006.V002. Version: V2. ... (Sample ontology, GOstat and ontology term enrichment - FANTOM5 | LSDB Archive)
Statistical Cosmological Fermion Systems With Interparticle Fantom Scalar Interaction
Ignat'ev, Yurii; Ignatyev, Dmitry
2016-01-01
The article presents a study of the cosmological evolution of fermion statistical systems with fantom scalar interaction, in which the "kinetic" term's contribution to the total energy of the scalar field is negative. Analytical and numerical simulation of such systems revealed the existence of four possible scenarios, depending on the parameters of the system and the initial conditions. Among these are scenarios with early, intermediate and late non-relativistic stages of the cosmological evolution, all of which also include the necessary inflation stage.
CAGE_peaks_annotation - FANTOM5 | LSDB Archive [Life Science Database Archive metadata
Lifescience Database Archive (English)
Full Text Available FANTOM5 CAGE_peaks_annotation. Data detail. Data name: CAGE_peaks_annotation. DOI: 10.18908/lsdba.nbdc01389-010.V002. Version: V2 (10.18908/lsdba.nbdc01389-010.V002); Update History: V1. Description of data contents: Annotation ... Data file: File name: CAGE_peaks_annotation. File URL: ftp://ftp.biosciencedbc.jp/archive/fantom5/datafiles/LATEST/extra/CAGE_peaks_annotation. File size: 195 MB. Simple search URL: -. Data acquisition ... (CAGE_peaks_annotation - FANTOM5 | LSDB Archive)
Pancakes of second generation and formation of fantomes of clusters of galaxies
Energy Technology Data Exchange (ETDEWEB)
Doroshkevich, A.G.
1983-09-01
The problem of the formation of second-generation hot neutrino-gaseous "pancakes" is analyzed. These "pancakes" can later evolve into gravitationally bound hot neutrino-gaseous clouds, the "fantomes" of clusters. The temperature, mass and other parameters of "fantomes" are close to those of clusters of galaxies, but "fantomes" do not (or almost do not) contain ordinary galaxies. It is suggested that "fantomes" could be found in X-ray surveys or through temperature fluctuations of the relic radiation.
FANTOM5 CAGE profiles of human and mouse samples
Noguchi, Shuhei
2017-08-29
In the FANTOM5 project, transcription initiation events across the human and mouse genomes were mapped at single base-pair resolution and their frequencies were monitored by CAGE (Cap Analysis of Gene Expression) coupled with single-molecule sequencing. Approximately three thousand samples, consisting of a variety of primary cells, tissues, cell lines, and time-series samples taken during cell activation and development, were subjected to a uniform pipeline of CAGE data production. The analysis pipeline started by measuring RNA extracts to assess their quality, and continued with CAGE library production using a robotic or a manual workflow, single-molecule sequencing, and computational processing to generate frequencies of transcription initiation. The resulting data represent the consequences of transcriptional regulation in each analyzed state of mammalian cells. Non-overlapping peaks over the CAGE profiles, approximately 200,000 for the human genome and 150,000 for the mouse genome, were identified and annotated to provide the precise locations of known promoters as well as novel ones, and to quantify their activities.
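The kind of clustering involved in peak identification can be illustrated with a deliberately naive sketch (not the actual FANTOM5 peak-calling pipeline): tag positions closer than a gap threshold are merged into candidate peaks, which are then filtered by a minimum tag count.

```python
# Toy sketch of calling non-overlapping "peaks" from per-base CAGE tag
# counts: merge positions closer than `gap` into clusters and keep
# clusters whose total tag count reaches `min_tags`. Illustrative only;
# the real FANTOM5 pipeline is far more sophisticated.

def call_peaks(positions_counts, gap=20, min_tags=5):
    clusters, current = [], None
    for pos, n in sorted(positions_counts.items()):
        if current and pos - current[1] <= gap:
            current[1], current[2] = pos, current[2] + n   # extend cluster
        else:
            if current:
                clusters.append(tuple(current))
            current = [pos, pos, n]                        # [start, end, tags]
    if current:
        clusters.append(tuple(current))
    return [c for c in clusters if c[2] >= min_tags]

tags = {100: 3, 105: 4, 180: 1, 500: 6}
print(call_peaks(tags))  # [(100, 105, 7), (500, 500, 6)]
```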
Critical evaluation of the FANTOM3 non-coding RNA transcripts
DEFF Research Database (Denmark)
Nordström, Karl J V; Mirza, Majd A I; Almén, Markus Sällman
2009-01-01
We studied the genomic positions of 38,129 putative ncRNAs from the RIKEN dataset in relation to protein-coding genes. We found that the dataset has 41% sense, 6% antisense, 24% intronic and 29% intergenic transcripts. Interestingly, 17,678 (47%) of the FANTOM3 transcripts were found to potentially ...-coding genes, did not contain ORFs longer than 100 residues and were not internally primed. This dataset contains 53% of the FANTOM3 transcripts associated to known ncRNA in RNAdb and expands previous similar efforts with 6523 novel transcripts. This bioinformatic filtering of the FANTOM3 non-coding dataset has generated a lead dataset of transcripts without signs of being artefacts, providing a suitable dataset for investigation with hybridization-based techniques.
Transcript annotation in FANTOM3: mouse gene catalog based on physical cDNAs.
Directory of Open Access Journals (Sweden)
Norihiro Maeda
2006-04-01
Full Text Available The international FANTOM consortium aims to produce a comprehensive picture of the mammalian transcriptome, based upon an extensive cDNA collection and functional annotation of full-length enriched cDNAs. The previous dataset, FANTOM2, comprised 60,770 full-length enriched cDNAs. Functional annotation revealed that this cDNA dataset contained only about half of the estimated number of mouse protein-coding genes, indicating that a number of cDNAs still remained to be collected and identified. To pursue a complete gene catalog covering all predicted mouse genes, cloning and sequencing of full-length enriched cDNAs has continued since FANTOM2. In FANTOM3, 42,031 newly isolated cDNAs were subjected to functional annotation, and the annotation of 4,347 FANTOM2 cDNAs was updated. To accomplish accurate functional annotation, we improved our automated annotation pipeline by introducing new coding sequence prediction programs and developed a Web-based annotation interface that simplifies the annotation procedures to reduce manual annotation errors. Automated coding sequence and function prediction was followed by manual curation and review by expert curators. A total of 102,801 full-length enriched mouse cDNAs were annotated. Of the 102,801 transcripts, 56,722 were functionally annotated as protein coding (including partial or truncated transcripts), providing, to our knowledge, the greatest current coverage of the mouse proteome by full-length cDNAs. The total number of distinct non-protein-coding transcripts increased to 34,030. The FANTOM3 annotation system, consisting of automated computational prediction, manual curation, and final expert curation, facilitated the comprehensive characterization of the mouse transcriptome, and could be applied to the transcriptomes of other species.
(reprocessed)CAGE_peaks_annotation - FANTOM5 | LSDB Archive [Life Science Database Archive metadata
Lifescience Database Archive (English)
Full Text Available FANTOM5 (reprocessed)CAGE_peaks_annotation. Data detail. Data name: (reprocessed)CAGE_peaks_annotation. ... Description of data contents: Annotation of human and mouse CAGE peaks and RNA transcriptional initiation ... reference sequences (hg38/mm10). Data file: File name: (reprocessed)CAGE_peaks_annotation (Homo sapiens). File URL: ftp://ftp.biosciencedbc.jp/archive/fantom5/datafiles/reprocessed/hg38_latest/extra/CAGE_peaks_annotation/. File size: 16 MB. File name: (reprocessed)CAGE_peaks_annotation (Mus musculus). File ...
Elbast, M; Saudo, A; Franck, D; Petitot, F; Desbrée, A
2012-07-01
Microdosimetry using Monte Carlo simulation is a suitable technique to describe the stochastic nature of energy deposition by alpha particles at the cellular level. Because of its short range, the energy imparted by this particle to the targets is highly non-uniform, so to achieve accurate dosimetric results the modelling of the geometry should be as realistic as possible. The objectives of the present study were to validate the use of the MCNPX and Geant4 Monte Carlo codes for microdosimetric studies using simple and three-dimensional voxelised geometries, and to study their limits of validity in the latter case. To that aim, the specific energy (z) deposited in the cell nucleus, the single-hit density of specific energy f(1)(z) and the mean specific energy were calculated. Results show good agreement with the literature for simple geometry. The maximum percentage difference found is ...; however, the calculation time is 10 times higher with Geant4 than with the MCNPX code under the same conditions.
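The single-hit quantity discussed here can be sketched with a toy Monte Carlo: assuming a spherical nucleus, unit-density tissue and a constant LET (strong simplifications compared with full MCNPX or Geant4 transport), the specific energy z = E/m for one alpha traversal follows from a randomly sampled chord length. All parameter values below are illustrative assumptions, not the study's.

```python
# Toy Monte Carlo sketch of the single-hit specific energy z = E/m for
# alpha-particle traversals of a spherical cell nucleus. Assumptions:
# parallel beam (impact parameter uniform in area), R = 4 um nucleus,
# unit density, constant LET = 100 keV/um.
import math
import random

def single_hit_z(n_tracks, radius_um=4.0, let_kev_per_um=100.0, seed=1):
    rng = random.Random(seed)
    rho = 1000.0                                        # kg/m^3, unit density
    mass = rho * 4 / 3 * math.pi * (radius_um * 1e-6) ** 3  # nucleus mass, kg
    zs = []
    for _ in range(n_tracks):
        b2 = rng.random() * radius_um ** 2              # impact parameter^2
        chord = 2.0 * math.sqrt(radius_um ** 2 - b2)    # chord length, um
        e_joule = let_kev_per_um * chord * 1.602e-16    # keV -> J
        zs.append(e_joule / mass)                       # specific energy, Gy
    return zs

zs = single_hit_z(10000)
print(sum(zs) / len(zs))  # mean single-hit specific energy, Gy
```

With these assumptions the mean chord is 4R/3, so the mean single-hit z lands around 0.32 Gy; a histogram of `zs` approximates f(1)(z).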
Lifescience Database Archive (English)
Full Text Available ... File URL: ftp://ftp.biosciencedbc.jp/archive/fantom5/datafiles/phase1.3/extra/TSS_classifier/. File size: 32 MB. Simple search URL: -. Data acquisition method: -. Data analysis method: TSS Classifier (http://...
Numerical radiative transfer for an SPH code (Transfert radiatif numerique pour un code SPH)
Viau, Joseph Edmour Serge
2001-03-01
The need to reproduce star formation by numerical simulation has grown steadily over the past 30 years. Since Larson (1968), simulation codes have continually improved. In 1977, Lucy introduced another method of computation to compete with grid-based methods. This new way of computing uses particles instead of grids, which is much better suited to the computation of a gravitational collapse. There remained, however, the problem of adding radiative transfer to such a code. Despite the proposal of Brookshaw (1984), who gives a formula for adding radiative transfer in SPH form while avoiding the troublesome double summation it implies, no SPH code to date contains a satisfactory radiative transfer. This thesis presents for the first time an SPH code equipped with an adequate radiative transfer. All the difficulties were overcome to finally obtain the "true" radiative transfer that occurs in the collapse of a molecular cloud. To verify the integrity of our results, a comparison with the non-isothermal test case of Boss & Myhill (1993) shows a very satisfactory result. Besides faithfully following the curve of the central temperature as a function of the central density, our code is free of all the anomalies encountered by grid codes. The thermal conduction test case also served to verify the reliability of our code; there too, the results are very satisfactory. Following these results, the code was used in two real research situations, which allowed us to demonstrate the many possibilities offered by our new code. First, we studied the behaviour of the temperature in an accretion disc during its evolution. Then we partly repeated an experiment of Bonnell
Higley, K; Ruedig, E; Gomez-Fernandez, M; Caffrey, E; Jia, J; Comolli, M; Hess, C
2015-06-01
Over the past decade, the International Commission on Radiological Protection (ICRP) has developed a comprehensive approach to environmental protection that includes the use of Reference Animals and Plants (RAPs) to assess radiological impacts on the environment. For the purposes of calculating radiation dose, the RAPs are approximated as simple shapes that contain homogeneous distributions of radionuclides. As uncertainties in environmental dose effects are larger than uncertainties in radiation dose calculation, some have argued against more realistic dose calculation methodologies. However, due to the complexity of organism morphology, internal structure, and density, dose rates calculated via a homogeneous model may be too simplistic. The purpose of this study is to examine the benefits of a voxelised phantom compared with simple shapes for organism modelling. Both methods typically use Monte Carlo methods to calculate absorbed dose, but voxelised modelling uses an exact three-dimensional replica of an organism with accurate tissue composition and radionuclide source distribution. It is a multi-stage procedure that couples imaging modalities and processing software with Monte Carlo N-Particle. These features increase dosimetric accuracy, and may reduce uncertainty in non-human biota dose-effect studies by providing mechanistic answers regarding where and how population-level dose effects arise.
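The principle of a voxelised tally, as opposed to a homogeneous simple-shape model, can be sketched as follows (a hypothetical toy, not the imaging/MCNP workflow used in the study): each energy deposition is binned into a voxel and divided by that voxel's own mass, so density heterogeneity directly shapes the dose map.

```python
# Sketch of a voxelised absorbed-dose tally: Monte Carlo energy
# depositions (voxel index, energy in MeV) are binned per voxel and
# divided by that voxel's mass, so tissue-density heterogeneity is
# respected. Hypothetical toy illustrating the principle only.

MEV_TO_J = 1.602e-13

def voxel_dose(depositions, density_g_cm3, voxel_cm3=1e-3):
    """depositions: list of (voxel_index, energy_MeV); density per voxel in g/cm^3."""
    dose = {}
    for idx, e_mev in depositions:
        mass_kg = density_g_cm3[idx] * voxel_cm3 / 1000.0   # g -> kg
        dose[idx] = dose.get(idx, 0.0) + e_mev * MEV_TO_J / mass_kg
    return dose  # Gy per voxel

# The same 1 MeV deposition gives a larger dose in low-density lung tissue:
density = {"soft": 1.0, "lung": 0.26}
print(voxel_dose([("soft", 1.0), ("lung", 1.0)], density))
```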
CAGE peaks - FANTOM5 | LSDB Archive [Life Science Database Archive metadata
Lifescience Database Archive (English)
Full Text Available FANTOM5 CAGE peaks. Version: V2 (10.18908/lsdba.nbdc01389-002.V002); Update History: V1 (10.18908/lsdba.nbdc01389-002.V001). Description ... (CAGE peaks - FANTOM5 | LSDB Archive)
Database Description - FANTOM5 | LSDB Archive [Life Science Database Archive metadata
Lifescience Database Archive (English)
Full Text Available ... Innovation Program / Omics Science Center. Journal Search: Creator Name: Hideya Kawaji. Creator Affiliation: RIKEN ... / Center for Life Science Technologies / Omics Science Center. Creator Name: Takeya Kasukawa. Creator Affiliation: RIKEN Center for Life Science Technologies. Journal Search: ... VJ, Sandelin A, Hume DA, Carninci P, Hayashizaki Y. Journal: Nature, 2014 Mar 27. ... k P, Hume DA, Jensen TH, Suzuki H, Hayashizaki Y, Müller F; FANTOM Consortium, Forrest AR, Carninci P, Rehli M, Sandelin A. Journal: ...
Lifescience Database Archive (English)
Full Text Available FANTOM5 Pathway enrichment and co-expression cluster analysis. Data detail. Data name: Pathway enrichment ... DOI: 10.18908/lsdba.nbdc01389-003.V002; No Update; V1: 10.18908/lsdba.nbdc01389-003.V001. Description of data contents: Pathway enrichment ... File size: 86 MB. Simple search URL: -. Data acquisition method: -. Data analysis method: Co-expression cluster analysis, GOstat enrichment analysis, Pathway enrichment analysis. Number of data entries: ... (Pathway enrichment and co-expression cluster analysis - FANTOM5 | LSDB Archive)
Lifescience Database Archive (English)
Full Text Available FANTOM5 Results of de-novo and Motif activity analyses. Data detail. Data name: Results of de-novo ... TSS motif near TSS, de-novo motif analysis with HOMER, etc. Significance of the corre... /extra/Motifs/. File size: 6.2 GB. Simple search URL: -. Data acquisition method: -. Data analysis method: JASPAR motif search, HOMER motif analysis. Number of data entries: 400 files. (Results of de-novo and Motif activity analyses - FANTOM5 | LSDB Archive)
The FANTOM web resource: from mammalian transcriptional landscape to its dynamic regulation.
Kawaji, Hideya; Severin, Jessica; Lizio, Marina; Waterhouse, Andrew; Katayama, Shintaro; Irvine, Katharine M; Hume, David A; Forrest, Alistair R R; Suzuki, Harukazu; Carninci, Piero; Hayashizaki, Yoshihide; Daub, Carsten O
2009-01-01
In FANTOM4, an international collaborative research project, we collected a wide range of genome-scale data, including 24 million mRNA 5'-reads (CAGE tags) and microarray expression profiles along a differentiation time course of the human THP-1 cell line and under 52 systematic siRNA perturbations. In addition, chromatin-status data derived from ChIP-chip are included to elucidate the transcriptional regulatory interactions. Here we present these data to the research community as an integrated web resource.
DEFF Research Database (Denmark)
Nordström, Karl J V; Mirza, Majd A I; Larsson, Thomas P
2006-01-01
Our understanding of the functional genetic elements in genomes is continuously growing, and new entries are added to various databases on a regular basis. We have here merged the genetic elements in RefSeq, Ensembl, FANTOM3, HINV, and NCBI's ESTdb using the genome assemblies in order to achieve...
2011-05-17
... COMMISSION Data Fortress Systems Group Ltd., Digital Youth Network Corp., Fantom Technologies, Inc., and KIK Technology International, Inc., Order of Suspension of Trading May 12, 2011. It appears to the Securities and... Data Fortress Systems Group Ltd. because it has not filed any periodic reports since the period...
Lifescience Database Archive (English)
Full Text Available ... fantom5_rp_exp_details#en. Data acquisition method: HeliScopeCAGE (http://fantom.gsc.riken.jp/protocol...). Extraction protocol: RNA extraction protocol. Material type: RNA. RNA tube: RNA tube ID. 260/280: ratio (260nm/280nm) of the sample. Concentration: RNA concentration. lsid. RNA Integrity number. Library protocol. Library ID. Sequence protocol. Machine name. Run name. Flowcell channel. Alignment protocol.
Lifescience Database Archive (English)
Full Text Available ... Data acquisition method: -. Data analysis method: HeliScopeCAGE (http://fantom.gsc.riken.jp/protocols/heliscope.html), Delve (Ali...). Collaboration. Provider: cell provider. Extraction protocol: RNA extraction protocol. RNA Integrity number. lsid. Sample group ID. Library protocol. Library ID. Sequence protocol. Machine name. Run name. Flowcell channel. Alignment protocol. BAM file: read mapping
Andersson, P.; Valldor-Blücher, J.; Andersson Sundén, E.; Sjöstrand, H.; Jacobsson-Svärd, S.
2014-08-01
The FANTOM system is a tabletop sized fast-neutron radiography and tomography system newly developed at the Applied Nuclear Physics Division of Uppsala University. The main purpose of the system is to provide time-averaged steam-and-water distribution measurement capability inside the metallic structures of two-phase test loops for light water reactor thermal-hydraulic studies using a portable fusion neutron generator. The FANTOM system provides a set of 1D neutron transmission data, which may be inserted into tomographic reconstruction algorithms to achieve a 2D mapping of the steam-and-water distribution. In this paper, the selected design of FANTOM is described and motivated. The detector concept is based on plastic scintillator elements, separated for spatial resolution. Analysis of pulse heights on an event-to-event basis is used for energy discrimination. Although the concept allows for close stacking of a large number of detector elements, this demonstrator is equipped with only three elements in the detector and one additional element for monitoring the yield from the neutron generator. The first measured projections on test objects of known configurations are presented. These were collected using a Sodern Genie 16 neutron generator with an isotropic yield of about 1E8 neutrons per second, and allowed for characterization of the instrument's capabilities. At an energy threshold of 10 MeV, the detector offered a count rate of about 500 cps per detector element. The performance in terms of spatial resolution was validated by fitting a Gaussian Line Spread Function to the experimental data, a procedure that revealed a spatial unsharpness in good agreement with the predicted FWHM of 0.5 mm.
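The resolution-validation step described above, fitting a Gaussian line spread function and reporting its FWHM, can be sketched on synthetic data. The moment-based estimate below is an illustrative shortcut under the Gaussian assumption, not the paper's exact fitting procedure.

```python
# Sketch of extracting spatial resolution from a measured line spread
# function (LSF): estimate the Gaussian sigma from the profile's weighted
# second moment, then FWHM = 2*sqrt(2*ln 2)*sigma. Synthetic toy data,
# not the FANTOM measurements.
import math

def fwhm_from_lsf(xs, ys):
    total = sum(ys)
    mean = sum(x * y for x, y in zip(xs, ys)) / total
    var = sum(y * (x - mean) ** 2 for x, y in zip(xs, ys)) / total
    return 2.0 * math.sqrt(2.0 * math.log(2.0) * var)

# Synthetic Gaussian LSF with sigma = 0.21 mm, i.e. FWHM close to 0.5 mm:
sigma = 0.21
xs = [0.05 * i - 2.0 for i in range(81)]
ys = [math.exp(-x * x / (2.0 * sigma * sigma)) for x in xs]
print(round(fwhm_from_lsf(xs, ys), 2))
```

A least-squares Gaussian fit would also yield the peak amplitude and centroid, but for a clean profile the moment estimate recovers the same FWHM.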
Energy Technology Data Exchange (ETDEWEB)
Henriet, J.; Bopp, M.; Makovicka, L. [Universite de Franche-Comte, IRMA/ENISYS/FEMTO-ST, UMR 6174 CNRS, 25 - Montbeliard (France); Farah, J.; Broggio, D. [IRSN, LEDI/SDI/DPRH, 92 - Fontenay-aux-Roses (France); Chebel-Morello, B. [Universite de Franche-Comte, COSMI/AS2M/FEMTO-ST, UMR 6174 CNRS, 25 - Besancon (France)
2010-01-15
In case of accidental exposure to radiation, a dosimetry report must be established as soon as possible for each victim. In most cases, this report is based on medical images of the victim, enabling the construction of a personalized realistic numerical model, also called a voxel phantom. Unfortunately, it is not always possible to perform medical imaging of the victim, because the technology may be unavailable or to avoid additional exposure to radiation. In such cases, the commonly used method is to represent the victim with a numerical model such as the 'Reference Man', a voxelized phantom representative of the average male individual. The treatment accuracy depends on the precision of the diagnosis and, consequently, on the similarity between the phantom and the victim. A precise dosimetric evaluation requires a personalized, realistic phantom whose biometric characteristics match the victim; such a model is often unavailable. Case-Based Reasoning (C.B.R.) is a problem-solving method for the design of intelligent systems; it imitates the analysis, understanding and reconstruction processes of human intelligence. The ReEPh project (Research of Equivalent Phantom) proposes to use C.B.R. principles to retrieve, from a set of phantoms, the one most adapted to the irradiated victim. In this study, the ReEPh platform retrieves, stores and compares existing phantoms against a victim. A graphic interface enables the user to compare the victim's characteristics to those of the most similar phantoms available in the database. This defines a similarity index quantifying the equivalence between the victim and the suggested phantom. Moreover, a confidence index is also assessed to quantify the uncertainty implied by the case-based reasoning (RaPC) choice procedure. (authors)
Directory of Open Access Journals (Sweden)
Risalatul Latifah
2015-03-01
Full Text Available Nowadays, the use of linear accelerators (linacs) for cancer therapy has become widespread. The main advantages of a linac over teletherapy units are that it no longer uses a radioactive source and that its energy can be varied to match clinical needs. When a linac is operated above 10 MV, photoneutron reactions (γ,n) occur as high-energy X-rays strike the materials of the linac components, such as the target, collimators and filters. These photoneutron reactions produce neutrons. Measuring the neutron flux is important for safety in radiotherapy, since this neutron emission is secondary radiation that raises the risk of secondary cancer in the patient through the additional radiation dose received. This study evaluated the neutron flux produced by a 15 MV linac using the foil activation technique. Forty-five foils were inserted into a solid phantom irradiated by the linac to determine the neutron flux as a function of depth. The values obtained are intended to estimate the additional dose to patients undergoing treatment with a linac operated above 10 MV. Using gamma-spectrometry analysis of the activated indium foils, the flux was found to increase with depth down to 7 cm below the surface, reaching 2.6 x 10^6 n cm^-2 s^-1, and then to decrease with further distance. This pattern arises from neutron thermalisation. Using the thermal-neutron dose conversion factor method, the additional dose from the maximum neutron flux received by the patient was found to be 0.86 mSv/min. This dose contribution is relatively small, about 0.1% of the therapeutic dose. Keywords: thermal neutron flux, LINAC, indium, phantom, foil activation.
Calculation of reactivity by digital processing; Calcul de la reactivite par traitement numerique
Energy Technology Data Exchange (ETDEWEB)
Hedde, J. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1968-12-01
With a view to exploring the new possibilities offered by digital techniques, the optimum theoretical conditions for computing real-time reactivity from counting samples (obtained from a nuclear reactor) are described. The degree to which these optimum conditions can be approached depends on the complexity of the processing that can be accepted, so a compromise has to be made between the accuracy required and the simplicity of the equipment carrying out the processing. An example using a relatively simple structure gives an idea of the accuracy of the results obtained over a wide range of reactor power. (author)
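The real-time reactivity computation described above can be illustrated, under strong simplifications (one delayed-neutron group, noise-free counting, illustrative parameter values), with an inverse point-kinetics sketch: simulate a count-rate signal for a known reactivity step, then recover the reactivity from the signal alone.

```python
# Minimal one-delayed-group inverse point kinetics: recover reactivity
# from a (noise-free) count-rate signal. All parameter values are illustrative.
BETA, LAMBDA_GEN, LAM = 0.0065, 1e-4, 0.08  # beta, generation time (s), precursor decay (1/s)
DT, RHO_TRUE = 1e-3, 0.001                  # time step (s), inserted step reactivity

# Forward simulation: point kinetics with one precursor group.
n, c = 1.0, BETA / (LAMBDA_GEN * LAM)       # equilibrium precursor level for n = 1
ns = [n]
for _ in range(5000):
    dn = ((RHO_TRUE - BETA) / LAMBDA_GEN) * n + LAM * c
    dc = (BETA / LAMBDA_GEN) * n - LAM * c
    n += DT * dn
    c += DT * dc
    ns.append(n)

# Inverse kinetics: rebuild the precursor population from n(t) alone,
# then solve the kinetics equation for rho at each sample.
c_est = BETA / (LAMBDA_GEN * LAM) * ns[0]
for k in range(1, len(ns)):
    c_est += DT * ((BETA / LAMBDA_GEN) * ns[k - 1] - LAM * c_est)
    dn_dt = (ns[k] - ns[k - 1]) / DT
    rho = (LAMBDA_GEN * (dn_dt - LAM * c_est) + BETA * ns[k]) / ns[k]
print(f"recovered rho = {rho:.5f}")  # close to RHO_TRUE = 0.001
```

In a real system the count samples are noisy, and the compromise discussed in the abstract (accuracy versus processing complexity) governs how much filtering such an estimator needs.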
Harvey, Derek
De-icing by means of piezoelectric actuators is considered a promising avenue for the development of low-power ice-protection systems applicable to light helicopters. This type of system excites resonance frequencies of a structure to produce deformations large enough to break the adhesion of the ice. However, the design of such systems remains generally poorly understood. This master's project studies the use of numerical methods to assist the design of piezoelectric-based ice-protection systems. The methodology adopted was to model various simple structures and to simulate the harmonic excitation of their resonance frequencies by piezoelectric actuators. The computation of the resonance frequencies and the simulation of their excitation were then validated against experimental setups. The procedure was carried out for a cantilever beam and for a flat plate with the finite-element software Abaqus. In addition, the flat-plate model was used for a parametric study of actuator placement and of the effects of plate stiffness and thickness. Finally, the flat plate was de-iced in a climatic chamber, and de-icing cases were simulated numerically to investigate whether a strain-based criterion can predict the success of the system. The experimental validation confirmed the software's ability to compute accurately both the resonance frequencies and mode shapes of a structure and to simulate their excitation by piezoelectric actuators. The study reveals that the definition of damping in the numerical model is essential for obtaining accurate results. The results of the parametric study demonstrated the importance of minimizing thickness and stiffness in order to reduce the frequency values
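The resonance frequencies such a system must target can be estimated analytically before any finite-element model is built. The sketch below uses the classical Euler-Bernoulli formula for a cantilever beam; the material and the strip dimensions are illustrative assumptions, not data from the study:

```python
import math

# First bending natural frequencies of a cantilever beam (Euler-Bernoulli):
# f_n = (beta_n)^2 / (2*pi) * sqrt(E*I / (rho*A)) / L^2.
# Geometry and material below are assumed (thin aluminium strip).
E, RHO = 70e9, 2700.0          # aluminium: Young's modulus (Pa), density (kg/m^3)
L, B, H = 0.30, 0.03, 0.002    # length, width, thickness (m) -- assumed values
A, I = B * H, B * H**3 / 12.0  # cross-section area, second moment of area

freqs = []
for k, bl in enumerate([1.8751, 4.6941, 7.8548], start=1):  # roots of cos(x)cosh(x) = -1
    f = bl**2 / (2 * math.pi) * math.sqrt(E * I / (RHO * A)) / L**2
    freqs.append(f)
    print(f"mode {k}: {f:8.1f} Hz")
```

The fixed ratio between successive modes (f2/f1 ≈ 6.27 for any uniform cantilever) is a quick sanity check on a finite-element model of the same beam.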
Energy Technology Data Exchange (ETDEWEB)
Desbree, A
2005-09-15
For the last 15 years, animal models that mimic human disorders have become ubiquitous tools for understanding biological mechanisms and human disorders and for evaluating new therapeutic approaches. The need to study these models over time has stimulated the development of instruments dedicated to in vivo small-animal studies. To further understand physiopathological processes, the current challenge is to couple several of these methods simultaneously. In this context, the combination of magnetic and radioactive techniques remains an exciting challenge, since it is still limited by strict technical constraints. We therefore propose to couple the magnetic techniques with the radiosensitive Beta-Microprobe, developed in the IPB group, which was shown to be an elegant alternative to PET measurements. The thesis was dedicated to the study of the coupling feasibility from a physical point of view, through simulation and experimental characterizations. A biological protocol was then determined on the basis of pharmacokinetic studies. The experiments have shown that the probe can be used for radioactive measurements under an intense magnetic field simultaneously with anatomical image acquisition. In parallel, we sought to improve the quantification of the radioactive signal using a voxelized phantom of a rat brain. Finally, the emergence of transgenic models led us to reproduce the pharmacokinetic studies in the mouse and to develop voxelized mouse phantoms. (author)
Directory of Open Access Journals (Sweden)
Marić Sanja S.
2017-01-01
Full Text Available Background Phantom limb pain is a common problem after limb amputation (41-85%). It is described as an extremely painful sensation in the missing part of the body that can last for hours, days or even years. It is considered to arise from cortical reorganization, although many factors can increase the risk of phantom limb pain: pain before surgery, age and sex of the patient, the time elapsed since surgery, stump pain, an inadequate prosthesis. Phantom limb pain therapy is very complicated. Case report We report the case of an 80-year-old patient suffering from phantom limb pain and phantom sensation 25 years after the amputation of his left leg due to injury. The patient had pain at the site of amputation, the sensation that the leg was still present and occupying an unusual position, and almost daily exhausting phantom limb pain (6-9 on the visual analogue scale - VAS) with disturbed sleep and mood. We managed to reduce the pain below 4 on the VAS and decrease the patient's suffering by combining drugs from the group of co-analgesics (antidepressants, antiepileptics), non-pharmacological methods (transcutaneous electrical nerve stimulation - TENS, mirror therapy) and femoral nerve block at the site of disarticulation of the left thigh. Conclusion Phantom limb pain therapy is multimodal, exhausting for both the patient and the physician, and often unsuccessful. The combination of different pharmacological and non-pharmacological modalities can give a satisfactory therapeutic response.
Energy Technology Data Exchange (ETDEWEB)
Joumard, R. [Commissariat a l' Energie Atomique, Grenoble (France). Centre d' Etudes Nucleaires
1969-07-01
An experimental direct digital control system was built at the C.E.N.G. to verify that a computer makes the operation of experiments performed in nuclear reactors easier, and to identify the theoretical and technical difficulties. The regulation is applied to thermal processes. Sampled-data systems theory made it possible to choose an efficient and simple type of digital compensator and to establish a chart giving the values of the correcting parameters (obtained by minimizing the difference between the output and the setpoint when perturbations occur). The program performed supervision and regulation simultaneously, and detailed print-out of measurements and alarms on a typewriter was possible. The computer worked out an incremental correction that drove stepper motors, which in turn actuated the heating elements. The theoretical values and responses were confirmed. Accuracy was limited essentially by the input quantization (1/1000). The convenience of such a system was notable. (author)
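The incremental-correction scheme described above can be illustrated with a velocity-form PI regulator whose output is quantized into stepper-motor steps acting on a first-order thermal plant. The gains, step size and plant model are all illustrative assumptions, not values from the installation:

```python
# Velocity-form (incremental) PI regulator: the controller outputs a
# *correction increment*, rounded to whole steps of a stepper motor that
# drives the heating power. All numerical values are illustrative.
KP, KI, DT = 4.0, 0.8, 1.0   # proportional gain, integral gain, sample period (s)
STEP = 0.01                  # heating-power change per motor step (arbitrary unit)

def run(setpoint=1.0, n=200):
    y, power, e_prev = 0.0, 0.0, 0.0
    for _ in range(n):
        e = setpoint - y
        du = KP * (e - e_prev) + KI * DT * e  # incremental PI law
        steps = round(du / STEP)              # quantize to whole motor steps
        power += steps * STEP
        e_prev = e
        y += DT / 5.0 * (power - y)           # first-order thermal plant with lag
    return y

print(run())  # settles near the setpoint 1.0
```

The velocity form is natural here because the actuator itself integrates: the motor position, not the controller, holds the accumulated correction, so a controller restart does not bump the output.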
Mejdi, Abderrazak
Aircraft fuselages are generally made of aluminium or of composites reinforced by longitudinal stiffeners (stringers) and transverse stiffeners (frames), which may themselves be metallic or composite. During the various phases of flight, aircraft structures are subjected to airborne excitations (turbulent boundary layer, TBL; diffuse acoustic field, DAF) on the outer skin, whose acoustic energy is transmitted into the cabin. The engines, mounted on the structure, produce a significant structure-borne excitation. The objective of this project is to develop and implement strategies for modeling aircraft fuselages subjected to airborne and structure-borne excitations. First, the second chapter updates the existing TBL models in order to classify them better, and analyzes the vibro-acoustic response of finite and infinite flat structures. In the third chapter, the assumptions underlying existing models of orthogonally stiffened metallic structures under mechanical, DAF and TBL excitations are first re-examined; a detailed and reliable model of these structures is then developed. The model is validated numerically against the finite-element (FEM) and boundary-element (BEM) methods, and experimental validation tests are performed on aircraft panels supplied by aeronautical companies. In the fourth chapter, the model is extended to composite structures reinforced by composite stiffeners of complex shape; a simple analytical model is also implemented and validated numerically. In the fifth chapter, the modeling of periodic stiffened composite structures is further refined by accounting for the coupling between in-plane and transverse displacements. The size effect of finite periodic structures is also taken into account.
Energy Technology Data Exchange (ETDEWEB)
Badel, D.; Cocchi, G.; Oules, H. [Centre d' Etudes Scientifiques et Techniques d' Aquitaine (France). Centre d' Etudes Nucleaires
1969-07-01
The S.I.D.E.X. is a digital-computer-assisted facility for data acquisition and data processing. It is designed for the calibration, acquisition and processing of sine-wave or random environment tests, mechanical or acoustical vibrations, and shock tests. The mathematical principles and the system configuration have been described in CEA report CEA-R-3666. The present report describes the numerical methods in use and the programs available to date. Some examples of the results obtained are shown at the end. (authors)
Energy Technology Data Exchange (ETDEWEB)
Le Tilly, Y. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1966-12-01
The instantaneous value of the counting rate of the pulses delivered by a fission detector placed in a reactor follows the neutron flux, but it is shown that the detector adds a white noise to the measured signal. This report examines some possibilities of on-line digital processing afforded by this kind of signal. The influence of a divide-by-N digital divider is considered first, and it is shown that, acting like a quantizer, it adds to the signal a white noise of power N^2/12. The principle of a digital filter intended for Fourier analysis of the signal is then studied, and the realization of this device, usable for transfer-function measurements at frequencies below 125 kHz, is described. Some examples of experiments performed with this apparatus are presented. Finally, the design, on the same principle, of a power-spectral-density analyser for random signals of the same kind in the frequency range 0.01 - 10 000 Hz is discussed. (author)
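The N^2/12 result for the divide-by-N divider can be checked numerically: a counter that only passes multiples of N quantizes the count with step N, and for counts uniformly distributed modulo N the quantization-error variance is close to N^2/12 (the exact discrete value is (N^2-1)/12):

```python
import random

# A divide-by-N pulse divider only reports multiples of N, i.e. it
# quantizes the count with step N. For counts uniform modulo N, the
# quantization error has variance ~ N**2/12.
random.seed(0)
N = 16
errors = []
for _ in range(200_000):
    true_count = random.randrange(10**6)  # arbitrary large count
    reported = (true_count // N) * N      # what the divider lets through
    errors.append(true_count - reported)  # error lies in {0, ..., N-1}

mean = sum(errors) / len(errors)
var = sum((e - mean) ** 2 for e in errors) / len(errors)
print(var, N**2 / 12)  # the two values nearly coincide (16**2/12 ~ 21.3)
```

This is the same uniform-quantization-noise argument used for analog-to-digital converters, with the divider ratio N playing the role of the quantization step.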
Energy Technology Data Exchange (ETDEWEB)
Moriceau, Y. [Commissariat a l' Energie Atomique, Centre d' Etudes de Limeil, 94 - Villeneuve-Saint-Georges (France)
1968-03-01
It is well known, though not well explained, that photonuclear cross-section curves depend on the numerical resolution adopted; like many other physical solutions extracted from integral equations of the first kind, they are arbitrary and oscillating. The first part of this report shows, on a typical example, how the oscillations build up. The second part presents an original method yielding an oscillation-free solution: from experimental data given on equidistant intervals, functions expanded in Chebyshev polynomials are constructed, and the solution is of this type. The third part shows that semi-analytical resolutions of this problem are illusory. (author)
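The smoothing idea of the second part, expanding data given on equidistant points in low-degree Chebyshev polynomials, can be sketched with NumPy. The signal, noise level and degree below are assumptions chosen for illustration:

```python
import numpy as np

# Smooth equidistant data by a low-degree Chebyshev expansion -- the idea
# used above to suppress the oscillations of first-kind integral-equation
# solutions. The data here are synthetic: a smooth signal plus noise.
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 41)                        # equidistant sample points
y = np.exp(-4 * x**2) + rng.normal(0, 0.02, x.size)   # smooth signal + noise

coeffs = np.polynomial.chebyshev.chebfit(x, y, deg=8) # low-degree expansion
smooth = np.polynomial.chebyshev.chebval(x, coeffs)

resid = np.max(np.abs(smooth - np.exp(-4 * x**2)))
print(resid)  # maximum deviation from the noise-free signal
```

Truncating the expansion at low degree is what suppresses the oscillations: high-order components, where the instability of a first-kind problem lives, are simply not represented.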
Download - FANTOM5 | LSDB Archive [Life Science Database Archive metadata]
Lifescience Database Archive (English)
Full Text Available (reprocessed)pooled_ctss (Homo sapiens) (6.5 GB); (reprocessed)pooled_ctss (Mus musculus) (4.5 GB); 10 Pathway enrichment results (160 MB); 12 Results of de-novo and motif activity analyses: Motifs (6.2 GB); 13 Sample ontology, GOstat and ontology term enrichment: Ontology (1.8 MB); 14 CAGE peaks identified as tru...
Energy Technology Data Exchange (ETDEWEB)
Champmartin, A.
2011-02-28
... to allow a relative velocity between the two phases and to take two velocities into account) and are assumed to be in temperature and pressure equilibrium. This part of the manuscript comprises the derivation of the equations, the construction of a numerical scheme associated with this set of equations, an order study of this scheme, and simulations. A mathematical study of this model (hyperbolicity in a simplified setting, stability of the linear system around a constant state) was carried out in a framework where the gas is assumed barotropic. The second part of the manuscript is devoted to modeling the effect of inelastic collisions on the droplets at much shorter simulation times, for which the droplets can no longer be regarded as a fluid. To model these collisions, a simplified (computationally cheaper) BGK-type model is built, able to reproduce the time behaviour of certain moments of the droplets; these moments are chosen to be representative of the effect of the collisions, namely thermalization in velocity and energy. This model is discretized with a particle method, and numerical results are compared with those obtained by directly solving the homogeneous Boltzmann equation. (author)
Modelisation numerique et algebrique des joints labyrinthe des turbines francis
Bouderlique, Remi
There are various types of hydraulic turbines, differing in operating conditions, geometries and technologies. Hydraulic seals are used only in Francis turbines, which are widespread. Their role is not to be watertight; their main aim is to prevent contact between the rotating and stationary parts of the turbine. Although necessary, hydraulic seals create energy losses: some fluid bypasses the runner (leakage loss) and exerts a torque on the rotor (friction loss). In a context of constant progression toward ever more efficient turbines, each part of the turbine must be optimized, and our study is part of this effort to reduce turbine losses as much as possible. To understand the problem fully, and to establish whether an optimal seal exists, an analytical study was conducted first; it establishes the analytical expressions of the velocities, pressure, losses and optimal seal length for laminar flows in straight seals. Various tests were then run with the ANSYS CFX solver to highlight aspects requiring particular attention, such as boundary conditions and dimensionless simulations. A CFD model was then validated against the results of experiments conducted in the sixties by Dominion Engineering Works, which later became Andritz Hydro Limited. Although not all the tests were usable, some were reproduced numerically. The CFD model features SST turbulence modeling, 2D axisymmetric geometries, parabolic mesh distributions, smooth walls, and an outlet head loss based on the normal velocity. For the tests considered, the average discrepancy between numerical and experimental results is 6.5%, and more than 60% of the discrepancies are below the experimental uncertainty.
That is why this model can be used for numerical experiments: as long as these experiments lie within the design space, their results will be consistent with reality. An appropriate space-filling design of experiments was created with the JMP software from SAS Institute. The numerical results of 34 tests were then modeled statistically with a quadratic polynomial accounting for interactions between factors. The response surface obtained in this way was compared with experimental results; the average discrepancy was around 7.5%, precise enough to give an accurate estimate of the experimental results. As no other experimental data are available, nothing proves that the numerical model, and the statistical model derived from it, remain valid outside the design space considered.
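The statistical model described above, a quadratic polynomial with pairwise interactions fitted to 34 runs, can be sketched in a few lines. The factors and response below are synthetic stand-ins, not the seal data of the study:

```python
import numpy as np

# Fit a full quadratic response surface (linear terms, squares and
# pairwise interactions) to the results of a 34-run design of experiments.
# The data are synthetic: a known quadratic plus noise.
rng = np.random.default_rng(7)
X = rng.uniform(-1, 1, size=(34, 3))  # 34 runs, 3 factors
y_true = 2 + X[:, 0] - 0.5 * X[:, 1] + 0.8 * X[:, 0] * X[:, 2] + 0.3 * X[:, 1] ** 2
y = y_true + rng.normal(0, 0.01, 34)

def quad_features(X):
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(3)]                                 # linear terms
    cols += [X[:, i] * X[:, j] for i in range(3) for j in range(i, 3)]  # squares + interactions
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
pred = quad_features(X) @ beta
resid = np.max(np.abs(pred - y))
print(resid)  # small residual: the surface reproduces the runs
```

With 3 factors the model has 10 coefficients, so 34 runs leave a comfortable number of degrees of freedom; a space-filling design keeps those runs spread over the whole design space.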
(reprocessed)pooled_ctss - FANTOM5 | LSDB Archive [Life Science Database Archive metadata]
Lifescience Database Archive (English)
Full Text Available DOI 10.18908/lsdba.nbdc01389-013.V002 (Version V2). Number of data entries: 4 files.
(reprocessed)CAGE_peaks_expression - FANTOM5 | LSDB Archive [Life Science Database Archive metadata]
Lifescience Database Archive (English)
Full Text Available DOI 10.18908/lsdba.nbdc01389-012.V002 (Version V2).
(reprocessed)CAGE peaks - FANTOM5 | LSDB Archive [Life Science Database Archive metadata]
Lifescience Database Archive (English)
Full Text Available DOI 10.18908/lsdba.nbdc01389-009.V002 (Version V2).
Fantom ciała jako cielesna samoświadomość
Directory of Open Access Journals (Sweden)
Przemysław Nowakowski
2010-11-01
Full Text Available According to Peter Halligan, "[...] it is important to consider that the experience of our body is largely the product of a continuously updated 'phantom' generated by the brain" (Halligan 2002, 266). Next, he adds: "I will argue (notwithstanding pathology to the physical body) that the prevalent common sense assumption of phantom experience as pathological is wrongheaded and largely based on a long-standing and pernicious folk assumption that the physical body is necessary for experience of a body" (Halligan 2002, 252). These two remarks can serve as a backdrop for the discussion of the problem of bodily self-consciousness presented in this article. If experiencing a phantom of an amputated limb is indeed not pathological, and if normal bodily experience is de facto based on the body phantom constructed by the brain, then our conception of this very phantom should prove relevant when trying to explain bodily self-consciousness.
Kvalitetssikring af Ul prober på baggrund af protokol til vurdering af tekniske fantom billeder
DEFF Research Database (Denmark)
Vernegreen, Susanne
2013-01-01
... could advantageously be dropped in a revision of the protocol. I chose to include them because they were part of the original CIRS protocol, and in a start-up phase it can be an advantage to have more measurements to rely on. My assessment, however, is that this advantage does not outweigh the increased time consumption. There are, however, some... ...it is not possible to avoid subjectivity in the measurement and assessment of the technical phantom images, since these are performed manually. It is therefore important to take several measurements, and perhaps also to measure more often than if the measurements were objective. More and more frequent measurements take time, and may come at the expense of...
Energy Technology Data Exchange (ETDEWEB)
Tochon, P.
1997-10-17
This work deals with electrostatic precipitators (ESP) used for gas-solid particle separation. By means of a dust-controlled test loop designed and built on the GRETh platform (Research Group on Heat Exchangers) and a numerical model developed during this work from the TRIO software, the performance of different ESP geometries has been studied. The electrical, hydraulic and particulate parameters governing the collection of solid particles under an ionized electric field have been identified, measured and modelled. The numerical model, validated against experimental data obtained in this study and from the literature, describes the local and global phenomena occurring in arbitrary geometries. Parametric studies have also been carried out in order to propose optimized geometries that increase collection efficiency. Finally, on-site measurements with CETIAT (Centre Technique des Industries Aerauliques et Thermiques) made it possible to identify the dust particles likely to be released to the atmosphere, as well as the problems peculiar to large-scale industrial plants; the numerical model has also been tested on these data. At the end of this study, an efficient dust-controlled experimental tool, the PACIFIC loop, and a numerical simulation suitable for ESP sizing are available. (author)
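The classical global model behind ESP sizing of this kind is the Deutsch-Anderson relation, eta = 1 - exp(-w*A/Q), which links collection efficiency to migration velocity, plate area and gas flow. The values below are illustrative, not data from the study:

```python
import math

# Deutsch-Anderson estimate of ESP collection efficiency:
# eta = 1 - exp(-w*A/Q). All numbers are illustrative assumptions.
w = 0.08   # effective migration velocity of the particles (m/s), assumed
A = 500.0  # total collecting-plate area (m^2), assumed
Q = 10.0   # gas volume flow rate (m^3/s), assumed

eta = 1.0 - math.exp(-w * A / Q)
print(f"collection efficiency: {eta:.1%}")
```

The exponential form makes the design trade-off explicit: each doubling of plate area (or halving of flow rate) multiplies the *penetration* 1 - eta by a constant factor, which is why the last percentage points of efficiency are the most expensive.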
PREDICTION NUMERIQUE DES PERTES SECONDAIRES EN BOUT D'AUBE D'UN COMPRESSEUR AXIAL
Directory of Open Access Journals (Sweden)
A MAOUGAL
1999-12-01
Full Text Available Machine performance is expressed in terms of pressure-loss coefficients and efficiency for each stage. This work falls within the framework of turbomachinery (pre-)design. This preliminary phase has become indispensable in engineering: it gives an approximation of the machine before it is dimensioned, yields very significant economic savings, and provides an exact idea of what one wants to build.
ESSAIS DE COMPRESSION SUR BARRES D'HOPKINSON : SIMULATION NUMERIQUE ET ETUDE EXPERIMENTALE
1988-01-01
The operating principle of the dynamic compression test using Hopkinson bars has been known for many years, but uncertainties remain in the interpretation of the results obtained with this technique. The aim of this study is to show the limits of the method as well as some rules for choosing the specimen geometry. Numerical simulations performed with the Lagrangian hydrodynamic code HEMP made it possible to quantify the error...
L'archivage a long terme de la maquette numerique trois-dimensionnelle annotee
Kheddouci, Fawzi
The use of engineering drawings in the development of mechanical products, both for exchanging engineering data and for archiving, is common industry practice. Traditionally, paper was the medium serving these needs. However, these practices have evolved in favour of computerized tools and methods for creating, diffusing and preserving the data involved in developing aeronautical products, whose life cycles can exceed 70 years. It is therefore necessary to redefine how this data is maintained in a context where engineering drawings are being replaced by the 3D annotated digital mock-up. This thesis addresses the long-term archiving of 3D annotated digital mock-ups, including geometric and dimensional tolerances as well as other notes and specifications, in compliance with the requirements formulated by the aviation industry, including regulatory and legal requirements. First, we review those requirements in the context of long-term archiving of 3D annotated digital mock-ups, and we then consider alternative solutions. We begin by identifying the theoretical approach behind the choice of a conceptual model for long-term digital archiving. We then evaluate, among the proposed alternatives, an archiving format that guarantees the preservation of the integrity of the 3D annotated model (geometry, tolerances and other metadata) and its sustainability. The evaluation of 3D PDF PRC as a potential archiving format is carried out on a sample of 185 CATIA V5 3D models (parts and assemblies) provided by industrial partners, guided by a set of criteria including the transfer of geometry, 3D annotations, views, captures and part positioning in assemblies. The results indicate that the exact geometry is maintained successfully when transferring CATIA V5 models to 3D PDF PRC.
Concerning the transfer of 3D annotations, we observed degradation in their display on the 3D model. This problem can, however, be solved by first converting the native model to STEP and then to 3D PDF PRC. Given current tools, 3D PDF PRC is considered a potential solution for the long-term archiving of 3D annotated models of individual parts. However, this solution is currently not deemed adequate for archiving assemblies, for which the practice of 2D drawing will thus remain in the short term.
Turbomachinery Design Using CFD (La Conception des Turbomachines par l’Aerodynamique Numerique).
1994-05-01
SIMULATION NUMERIQUE DES COUCHES CISAILLEES PLANES A GRAND RAPPORT INITIAL DE MASSE VOLUMIQUE
Silvani, Xavier
2001-01-01
There is a close analogy between the dynamics of the primary atomization process in a two-phase shear layer and the processes initiating the mixing transition in a single-phase layer presenting the same shear: the primary instability and the elongation of dense fluid "fingers" into the rapid flow are analogous if the inlet velocity and the initial density ratio are conserved between both cases. Therefore, primary atomization coincides with turbulent mixing if the Reynolds number and the We...
Simulation numerique de l'accretion de glace sur une pale d'eolienne
Fernando, Villalpando
The wind energy industry is growing steadily, and an excellent place for the construction of wind farms is northern Quebec. This region has huge wind energy production potential, as the cold temperatures increase air density and with it the available wind energy. However, some issues associated with arctic climates cause production losses on wind farms. Icing conditions occur frequently, as high air humidity and freezing temperatures cause ice to build up on the blades, resulting in wind turbines operating suboptimally. One of the negative consequences of ice accretion is degradation of the blade's aerodynamics, in the form of a decrease in lift and an increase in drag. Also, the ice grows unevenly, which unbalances the blades and induces vibration. This reduces the expected life of some of the turbine components. If the ice accretion continues, the ice can reach a mass that endangers the wind turbine structure, and operation must be suspended in order to prevent mechanical failure. To evaluate the impact of ice on the profits of wind farms, it is important to understand how ice builds up and how much it can affect blade aerodynamics. In response, researchers in the wind energy field have attempted to simulate ice accretion on airfoils in refrigerated wind tunnels. Unfortunately, this is an expensive endeavor, and researchers' budgets are limited. However, ice accretion can be simulated more cost-effectively and with fewer limitations on airfoil size and air speed using numerical methods. Numerical simulation is an approach that can help researchers acquire knowledge in the field of wind energy more quickly. For years, the aviation industry has invested time and money developing computer codes to simulate ice accretion on aircraft wings. Nearly all these codes are restricted to use by aircraft developers, and so they are not accessible to researchers in the wind engineering field. 
Moreover, these codes have been developed to meet aeronautical industry specifications, which are different from those that must be met in the wind energy industry. Among these differences are the following: wind turbines operate at subsonic speeds; the cords and angles of attack of wind turbine blades are smaller than those of aircraft wings; and a wind turbine can operate with a larger ice mass on its blades than an aircraft can. So, it is important to provide wind energy researchers with tools specifically validated with the operations parameters of a wind turbine. The main goal of this work is to develop a methodology to simulate ice accretion in 2D using Fluent and Matlab, commercial software programs that are available at nearly all research institutions. In this study, we used Gambit, previously the companion tool of Fluent, for mesh generation, and which has now been replaced by ICEM. We decided to stay with Gambit, because we were already deeply involved with the meshing procedure for our simulation of ice accretion at the time Gambit was removed from the market. We validate the methodology with experimental data consisting of iced airfoil contours obtained in a refrigerated wind tunnel using the parameters of actual ice conditions recorded in northern Quebec. This methodology consists of four steps: airfoil meshing, droplet trajectory calculation, thermodynamic model application, and airfoil contour updating. The total simulation time is divided into several time steps, for each of which the four steps are performed until the total time has elapsed. The time step length depends on the icing conditions. (Abstract shortened by UMI.).
Contribution a la modelisation et simulation numerique de l ecoulement du sang dans l artere
Alla, H; Bensaid, M H
2009-01-01
Numerous are the questionings raised by medicine interventionnelle, concerning the hold in charge of the pathologies of the arterial partition (aneurysm, dissection, coarctation, atherosclerosis).for it we made the modeling and the numeric simulation of the blood flow in the renal artery taken by the Medical imagery. Geometry has been rebuilt from the medical pictures of angiography, angioscanner and IRM. While considering that blood like a fluid Newtonian and stationary flow. The results gotten in terms of the physical parameters as the velocity, the dynamic pressure is shown that the simplest case was enough to collect relevant data for the development of stenos or thrombosis in the arteries.
Energy Technology Data Exchange (ETDEWEB)
Bouillard, N
2006-12-15
When a radioactive waste is stored in deep geological disposals, it is expected that the waste package will be damaged under water action (concrete leaching, iron corrosion). Then, to understand these damaging processes, chemical reactions and solutes transport are modelled. Numerical simulations of reactive transport can be done sequentially by the coupling of several codes. This is the case of the software platform ALLIANCES which is developed jointly with CEA, ANDRA and EDF. Stiff reactions like precipitation-dissolution are crucial for the radioactive waste storage applications, but standard sequential iterative approaches like Picard's fail in solving rapidly reactive transport simulations with such stiff reactions. In the first part of this work, we focus on a simplified precipitation and dissolution process: a system made up with one solid species and two aqueous species moving by diffusion is studied mathematically. It is assumed that a precipitation dissolution reaction occurs in between them, and it is modelled by a discontinuous kinetics law of unknown sign. By using monotonicity properties, the convergence of a finite volume scheme on admissible mesh is proved. Existence of a weak solution is obtained as a by-product of the convergence of the scheme. The second part is dedicated to coupling algorithms which improve Picard's method and can be easily used in an existing coupling code. By extending previous works, we propose a general and adaptable framework to solve nonlinear systems. Indeed by selecting special options, we can either recover well known methods, like nonlinear conjugate gradient methods, or design specific method. This algorithm has two main steps, a preconditioning one and an acceleration one. This algorithm is tested on several examples, some of them being rather academical and others being more realistic. We test it on the 'three species model'' example. Other reactive transport simulations use an external chemical code CHESS. 
For a realistic case of Uraninite leaching, accelerated Picard methods divide the CPU cost of standard Picard's by three and the number of iteration by five. (author)
ETUDE BIBLIOGRAPHIQUE ET NUMERIQUE DES PHENOMENES DE TRANSPORT DANS LE BETON
DJELIL, Mohammed
2012-01-01
Une faible durabilité des structures en béton peut provoquer une ruine complète ou partielle des ouvrages. En fonction des conditions d’exposition, les différents mécanismes à la source des dégradations font très souvent intervenir un ou plusieurs phénomènes de transport. Une analyse bibliographique détaillée sur les propriétés physico-chimiques et microstructurales des matériaux cimentaires, puis, sur les différents phénomènes de transport, constitue le premier objectif de ce ...
Modelisation Numerique De L'Interaction Sol-Structure Lors Du Phenomene De Fontis
Caudron, Matthieu; Heib, Marwan Al
2008-01-01
This article focuses on the simulation of soil-structure interaction during a sinkhole development by the use of a coupling numerical modelling approach. The 2D model uses a Finite Difference computer code associated with a Distinct Elements code to optimize the performances of both softwares. This allows an important decrease of computation time and the results computed are close of the experimental observations made before.
Calculs Numeriques des Interactions Entre les Champs Electromagnetiques et les Tissus Biologiques
Mokhtech, Kamel-Eddine
The understanding of electromagnetic field effects on health is a very acute problem nowadays. Indeed, the daily landscape is invaded by waves generated by affordable state-of-the-art equipment. It becomes urgent to improve the knowledge in order to ensure the safety of the users and the general public. At high frequencies two areas of concern can be distinguished: cellular telephony and wireless indoor communications. This dissertation is devoted to two aspects of the research in this field i.e. the numerical dosimetry and the biological experimentation. A new formulation for an expanding grid algorithm based upon the FDTD that can be applied, in particular, at high frequencies is presented. The energy deposition and the associated temperature rise in a model of the human eye due to a resonant dipole at frequencies of 840, 915, 1500 and 1800 MHz are then calculated. The duty cycle of each system is then introduced and its impact on temperature elevation is examined. Finally, a methodological approach to standard setting for portable transceivers is proposed. On the experimental aspect, a protocol has been elaborated and an experimentation to study athermal effects of electromagnetic fields at 9.3 GHz has been carried out. In particular, the importance of polarization has been examined.
Approches experimentale et numerique de l'usinage a sec des composites carbone/epoxy
Iliescu, Daniel
2008-01-01
The research topic is a preliminary study for maximizing the dry machining of carbon/epoxy. The proposed study deals with understanding the mechanisms of damage tools. It aims to determine the parameters of the tribological interface tool-workpiece (forces, temperature, friction, roughness) and confront them with the tools wear. Cutting operations generate heat and strains, and cut surfaces are often affected by damages. A study based on experimental observation of the formation of the chip (...
Long, Jean-Alexandre; Moreau-Gaudry, Alexandre; Troccaz, Jocelyne; Rambeaud, Jean-Jacques; Descotes, Jean-Luc
2008-01-01
This paper analyzes the impact of using 2D or 3D ultrasound on the efficiency of prostate biopsies. The evaluation is performed on home-made phantoms. The study shows that the accuracy is significantly improved.
Fantom Meme Sendromunun Görülme Sıklığı Ve Klinik Özellikleri: Literatür Taraması
Dilek Aygin; Sevim Þen
2017-01-01
Background:Phantom Breast Syndrome (PBS) is one of the complications of mastectomy and it is type of condition in which patients have a sensation of residual breast tissue and can include both non-painful sensations as well as phantom breast pain. The sensation in PBS is different than a pain before mastectomy or a sensation related with scar tissue; the sensation in the removed breast is a neuropathic pain. PBS was divided into a sub-stypes with painful phantom and non-painful phantom sensa...
Energy Technology Data Exchange (ETDEWEB)
Verdiere, S.
1997-04-29
Petroleum reservoirs are made of highly heterogeneous rocks. Simulations of these reservoirs result in geo-scientific works to integrate the data and knowledge about the field. Generally, the reservoir is represented on a very high resolution (HR) grid which can be composed of millions of cells. In order to run fluid flow simulations, it is necessary to reduce the number of cells. Thus, conventional method is to coarsen the grid and to obtain a lower resolution (LR) grid by doing up-scaling before the fluid flow simulation is done. The alternative we propose to classical method is to consider, for a coupled system pressure-saturation a specific discretization in time and space for each unknown. So, for a two phase problem, the principle is to solve the pressure equation over a LR grid and the saturation equation over a HR grid. In addition to the usual steps used in an IMPES scheme, it is necessary to allow the transfer of the results of the implicit resolution of the pressure equation form the LR o the HR Grid and calculate the averaged parameters necessary to the resolution of the pressure equation during the next time step by taking into account the evolution of the saturation. The validation of the Dual Mesh Method has been done for a two-phase problem both theoretically and practically. (author) 73 refs.
Energy Technology Data Exchange (ETDEWEB)
Grandotto Biettoli, M
2006-04-15
The report presents globally the works done by the author in the thermohydraulic applied to nuclear reactors flows. It presents the studies done to the numerical simulation of the two phase flows in the steam generators and a finite element method to compute these flows. (author)
Fracture num\\'erique chez les seniors du 4eme age. Observation d'une acculturation technique
Michel, Christine; Tarpin-Bernard, Franck
2009-01-01
Very old people accumulate the "handicaps": social, physical, psychological or cognitive. Various research thus developed to determine there waiting and needs and also to see the benefit possibly produced by technologies (called ?gerontechnology?) on their living conditions. The object of this article is to present the numerical service offer to very old perople and to see how it takes part in a social justice according to the definition of Rawls (principle of equal freedom, principle of equal opportunity in the access). The adoption, the use and the benefit of technology are analyzed in a theoretical way through a state of the art and in an experimental way through a qualitative and quantitative investigation carried out with a population of very old people. We propose to identify dynamic technological acceptance of old people according to the TAM'S (Technology Acceptance Model) of Davis adapted by (Hamner and Qazi, 2008).
1993-11-01
Italy Escuela Tecnica Superior Prof. Dr Ir J.L. van Ingen de Ingenieros Aeronauticos Dept. of Aerospace Engineering Departamento de Mecanica de...integration along the line of sight. Its use of molecular markers allows high spatial resolution measurements 2. INTRODUCTION with minimal alteration of the... molecular diffusion coefficient bivalves (animals with incurrent and excurrent hs siphon height above the boundary Presented at an AGARD Meeting on
Energy Technology Data Exchange (ETDEWEB)
Maugis, P
2006-06-15
The feasibility and safety of nuclear waste storage containers is studied. The thermodynamics of water/air flow is described and applied, via a simplified numerical model, to a simple experimental apparatus yielding heat pipe effect. The 2D influence of deterministic boundary conditions is important on kinematics and transport. Dispersivity depends on the nonuniform flow type and integrates the often marginal Gaussian part of plume spreading. A new algorithm, based on jump locality and recalibration, avoids the small bias induced by inter-cell diffusive jumps. Several algorithms modeling transport of decaying, soluble, sorbing, or precipitating species are compared. Stability and precision criteria are analyzed. Up-stream over-precipitation and negative down-stream concentrations are observed for high solubility contrasts. (author)
Energy Technology Data Exchange (ETDEWEB)
Bouvier, A. [Electricite de France, 77 - Moret sur Loing (France). Direction des Etudes et Recherches; Trenty, L.; Guillot, J.B. [Ecole Centrale de Paris, Laboratoire EM2C. CNRS, 92 - Chatenay-Malabry (France); Delalondre, C. [Electricite de France (EDF), 78 - Chatou (France). Direction des Etudes et Recherches
1997-12-31
This paper presents the modeling of a transferred electric arc inside a bath of melted metal. After a recall of the context of the study, the problem of the modeling, which involves magnetohydrodynamic coupling inside the arc and the bath, is described. The equations that govern the phenomena inside the arc and the bath are recalled and the approach used for the modeling of the anode region of the arc is explained using a 1-D sub-model. The conditions of connection between arc and bath calculations are explained and calculation results obtained with a 200 kW laboratory furnace geometry are presented. (J.S.) 8 refs.
2007-11-02
faciliter leur traitement et mise en œuvre, les données sont fournies sous une forme exploitable par une machine sur le CD-ROM qui accompagne ce rapport. Le...and Control 201 Surface Oscillations and Flutter by R.M. Bennett, R.C. Scott and C.D. Wieseman 8C. Benchmark Active Controls Technology (BACT) Wing CFD...Results 225 by D.M. Schuster and R.E. Bartels 9E. Test Cases for a Clipped Delta Wing with Pitching and Trailing-Edge Control 239 Surface Oscillations
Fliess, Michel
2008-01-01
The signal to noise ratio, which plays such an important r\\^ole in information theory, is shown to become pointless for digital communications where the demodulation is achieved via new fast estimation techniques. Operational calculus, differential algebra, noncommutative algebra and nonstandard analysis are the main mathematical tools.
Energy Technology Data Exchange (ETDEWEB)
Bur, R.; Benay, R.; Chanetz, B.; Galli, A.; Pot, T. [Office National d' Etudes et de Recherches Aerospatiales (ONERA), Dept. Fundamental and Experimental Aerodynamics, 92 - Chatillon (France); Hollis, B.; Moss, J. [Aerothermodynamics Branch, NASA Langley Research Center Hampton, Virginia (United States)
2002-07-01
An experimental and numerical study on the Mars Pathfinder aero-shell vehicle has been carried out in the framework of an agreement between ONERA and NASA. The experimental work was performed in the ONERA R5Ch hypersonic wind tunnel. Flow-field visualizations and heat-flux measurements along the model have been obtained. Numerical simulations have been performed at ONERA with the RANS solver NASCA and at NASA with a DSMC code. The flow-field structure is correctly reproduced by both computations. The location of the bow shock is well predicted, as well as the expansion waves emanating from the end of the fore-body cone. Both computations also predict the same extension of the separation bubble in the base flow region of the model. Measured and calculated heat-flux distributions along the model have been compared. Both computations give similar results, excepted on the prediction of the heat-flux level on the after-body cone. But computations over-predict the measured heat-flux values on the fore-body and the sting of the model: the value of the stagnation point is overestimated of 28% and the averaged sting level of 35%. (authors)
Chastenay, Pierre
Since the Quebec Education Program came into effect in 2001, Quebec classrooms have again been teaching astronomy. Unfortunately, schools are ill-equipped to teach complex astronomical concepts, most of which occur outside school hours and over long periods of time. Furthermore, many astronomical phenomena involve celestial objects travelling through three-dimensional space, which we cannot access from our geocentric point of view. The lunar phases, a concept prescribed in secondary cycle one, fall into that category. Fortunately, schools can count on support from the planetarium, a science museum dedicated to presenting ultra-realistic simulations of astronomical phenomena in fast time and at any hour of the day. But what type of planetarium will support schools? Recently, planetariums also underwent their own revolution: they switched from analogue to digital, replacing geocentric opto-mechanical projectors with video projectors that offer the possibility of travelling virtually through a completely immersive simulation of the three-dimensional Universe. Although research into planetarium education has focused little on this new paradigm, certain of its conclusions, based on the study of analogue planetariums, can help us develop a rewarding teaching intervention in these new digital simulators. But other sources of inspiration will be cited, primarily the teaching of science, which views learning no longer as the transfer of knowledge, but rather as the construction of knowledge by the learners themselves, with and against their initial conceptions. The conception and use of constructivist learning environments, of which the digital planetarium is a fine example, and the use of simulations in astronomy will complete our theoretical framework and lead to the conception of a teaching intervention focusing on the lunar phases in a digital planetarium and targeting students aged 12 to 14. 
This teaching intervention was initially tested as part of development research (didactic engineering) aimed at improving it, both theoretically and practically, through multiple iterations in its "natural" environment, in this case an inflatable digital planetarium six metres in diameter. We are presenting the results of our first iteration, completed with help from six children aged 12 to 14 (four boys and two girls) whose conceptions about the lunar phases were noted before, during and after the intervention through group interviews, questionnaires, group exercises and recordings of the interventions throughout the activity. The evaluation was essentially qualitative, based on the traces obtained throughout the session, in particular within the planetarium itself. This material was then analyzed to validate the theoretical concepts that led to the conception of the teaching intervention and also to reveal possible ways to improve the intervention. We noted that the intervention indeed changed most participants' conceptions about the lunar phases, but also identified ways to boost its effectiveness in the future.
Energy Technology Data Exchange (ETDEWEB)
Mulatier, D.; Gens, S. [Schneider Electric S.A., 92 - Boulogne-Billancourt (France)
2000-02-01
Developments in technology are enabling the installation of new types of digital and low level analogue sensors in metal-clad sub-stations. The parameters measured by these sensors include voltage, current, SF{sub 6} density and switch positions. These new sensors offer several advantages for the operator, including increased reliability of the data acquisition chain and simple customization to take account of changing requirements. They also lead to a reduction in costs, space requirements and cabling. (authors)
Energy Technology Data Exchange (ETDEWEB)
Granet, S.
2000-01-28
Oil recovery from fractured reservoirs plays a very important role in the petroleum industry. Some of the world most productive oil fields are located in naturally fractured reservoirs. Modelling flow in such a fracture network is a very complex problem. This is conventionally done using a specific idealized model. This model is based on the Warren and Root representation and on a dual porosity, dual permeability approach. A simplified formulation of matrix-fracture fluid transfers uses a pseudo-steady-state transfer equation involving a constant exchange coefficient. Such a choice is one of the main difficulties of this approach. To get a better understanding of the simplifications involved in the dual porosity approach a reference model must be available. To obtain such a fine description, we have developed a new methodology. This technique called 'the fissure element methodology' is based on a specific gridding of the fractured medium. The fissure network is gridded with linear elements coupled with an unstructured triangular grid of matrix. An appropriate finite volume scheme has been developed to provide a good description of the flow. The numerical development of is precisely described. A simulator has been developed using this method. Several simulations have been realised. Comparisons have been done with different dual-porosity dual-permeability models. A reflexion concerning the choice of the exchange coefficient used in the dual porosity model is then proposed. This new tool has permit to have a better understanding of the production mechanisms of a complex fractured reservoir. (author)
Energy Technology Data Exchange (ETDEWEB)
Michel, F.
2003-10-01
This work concerns plate fins compact heat exchangers. These compact devices (C > 700 m2/m3) reduce bulk and weight due to large surfaces for heat transfer. These exchangers, widely used in automotive systems, cryogenics and aeronautics, are currently studied with empirical correlations. So, this limits the evolution of fins in compact heat exchangers. We propose a numerical methodology for designing and enhancing Offset Strip Fin (OSF) geometries. Numerical models and methods have been validated to correctly predict thermohydraulics in Offset Strip Fin heat exchangers. We have validated simulations with data from the literature but also with two experimental devices made for this thesis. Local and global temperature and velocity measurements have been realised in geometries near Offset Strip Fins. Hot wire and cold wire anemometry and Laser Doppler Anemometry (LDA) have been used to obtained validation data. Finally, the validated numerical simulations have been used to enhance geometries of fins and to give innovating geometries. (author)
Energy Technology Data Exchange (ETDEWEB)
Badel, P.B
2001-07-15
In order to be able to carry out simulations of reinforced concrete structures, it is necessary to know two aspects: the behaviour laws have to reflect the complex behaviour of concrete and a numerical environment has to be developed in order to avoid to the user difficulties due to the softening nature of the behaviour. This work deals with these two subjects. After an accurate estimation of two behaviour models (micro-plan and mesoscopic models), two damage models (the first one using a scalar variable, the other one a tensorial damage of the 2 order) are proposed. These two models belong to the framework of generalized standard materials, which renders their numerical integration easy and efficient. A method of load control is developed in order to make easier the convergence of the calculations. At last, simulations of industrial structures illustrate the efficiency of the method. (O.M.)
Energy Technology Data Exchange (ETDEWEB)
Komiwes, V.
1999-09-01
Numerical models applied to simulation of granular flow with fluid are developed. The physical model selected to describe particles flow is a discrete approach. Particle trajectories are calculated by the Newton law and collision is describe by a soft-sphere approach. The fluid flow is modelled by Navier-Stokes equations. The modelling of the momentum transfer depends on the resolution scale: for a scale of the order of the particle diameter, it is modelled by a drag-law and for a scale smaller than the particle diameter, it is directly calculated by stress tensor computation around particles. The direct model is used to find representative elementary volume and prove the local character of the Ergun's law. This application shows the numerical (mesh size), physical (Reynolds number) and computational (CPU time and memory consumptions) limitations. The drag law model and the direct model are validated with analytical and empirical solutions and compared. For the two models, the CPU time and the memory consumptions are discussed. The drag law model is applied to the simulation of gas-solid dense fluidized-beds. In the case of uniform gas distribution, the fluidized-bed simulation heights are compared to experimental data for particle of group A and B of the Geldart classification. (author)
Numerical modelling of steel arc welding; Modelisation numerique du soudage a l'arc des aciers
Energy Technology Data Exchange (ETDEWEB)
Hamide, M
2008-07-15
Welding is a highly used assembly technique. Welding simulation software would give access to residual stresses and information about the weld's microstructure, in order to evaluate the mechanical resistance of a weld. It would also permit to evaluate the process feasibility when complex geometrical components are to be made, and to optimize the welding sequences in order to minimize defects. This work deals with the numerical modelling of arc welding process of steels. After describing the industrial context and the state of art, the models implemented in TransWeld (software developed at CEMEF) are presented. The set of macroscopic equations is followed by a discussion on their numerical implementation. Then, the theory of re-meshing and our adaptive anisotropic re-meshing strategy are explained. Two welding metal addition techniques are investigated and are compared in terms of the joint size and transient temperature and stresses. The accuracy of the finite element model is evaluated based on experimental results and the results of the analytical solution. Comparative analysis between experimental and numerical results allows the assessment of the ability of the numerical code to predict the thermomechanical and metallurgical response of the welded structure. The models limitations and the phenomena identified during this study are finally discussed and permit to define interesting orientations for future developments. (author)
Energy Technology Data Exchange (ETDEWEB)
Maire, P.H.
2011-02-15
This work was realized by writing the CHIC code, which is a software for designing and restoring experience in the field of inertial confinement fusion. The theoretical model describing the implosion of a laser target is a system of partial differential equations in the center of which is the Euler equations written in Lagrangian formalism, coupled with diffusion equations modeling the nonlinear transport of energy by electrons and photons. After a brief review of the physical context, we describe two novel methods which constitute the backbone of the CHIC code. These are 2 high-order finite volume schemes respectively dedicated to solving the equations of Lagrangian hydrodynamics and the anisotropic diffusion equations on bi-dimensional unstructured grids. The first scheme, called EUCCLHYD (Explicit Unstructured Lagrangian Hydrodynamics), solves the equations of gas dynamics on a moving mesh that moves at the speed of light. It is obtained from a general formalism based on the concept of sub-cell forces. In this context, the numerical fluxes are expressed in terms of the sub-cell force and the nodal velocity. Their determination is based on 3 basic principles: geometric compatibility between the movement of nodes and the volume change of mesh (geometric conservation law), compatibility with the second law of thermodynamics and conservation of total energy and momentum. The high-order extension is performed using a method based on solving a generalized Riemann problem in the acoustic approximation. The second scheme, called CCLAD (Cell-Centered Lagrangian Diffusion), solves the anisotropic heat equation. The corresponding discretization relies on a discrete variational formulation based on the sub-cell that allows to build a multipoint approximation of heat flux. This high-order discretization makes possible the resolution of the equations of anisotropic diffusion with satisfactory accuracy on highly distorted Lagrangian meshes. (author)
1994-01-01
dquations d’Euler) Edited by * S J.W. SLOOFF National Aerospace Laboratory NLR Anthony Fokkerweg 2 1059 CM Amsterdam Netherlands Dr. W. SCHMIDT Air Vehicle...CHAKRAVARTHY. S. R., RIBA , W.* * T.. BYERLY. J. and DRESSER. H. S.. "Multi-Zone Euler 41. DESLANDES R.M.. "Theoretisebe bestimmung der Marching Technique
Boyer, Sylvain
On estime que sur les 3,7 millions des travailleurs au Quebec, plus de 500 000 sont exposes quotidiennement a des niveaux de bruits pouvant causer des lesions de l'appareil auditif. Lorsqu'il n'est pas possible de diminuer le niveau de bruit environnant, en modifiant les sources de bruits, ou en limitant la propagation du son, le port de protecteurs auditifs individualises, telles que les coquilles, demeure l'ultime solution. Bien que vue comme une solution a court terme, elle est communement employee, du fait de son caractere peu dispendieux, de sa facilite d'implantation et de son adaptabilite a la plupart des operations en environnement bruyant. Cependant les protecteurs auditifs peuvent etre a la fois inadaptes aux travailleurs et a leur environnement et inconfortables ce qui limite leur temps de port, reduisant leur protection effective. Afin de palier a ces difficultes, un projet de recherche sur la protection auditive intitule : " Developpement d'outils et de methodes pour ameliorer et mieux evaluer la protection auditive individuelle des travailleur ", a ete mis sur pied en 2010, associant l'Ecole de technologie superieure (ETS) et l'Institut de recherche Robert-Sauve en sante et en securite du travail (IRSST). S'inscrivant dans ce programme de recherche, le present travail de doctorat s'interesse specifiquement a la protection auditive au moyen de protecteurs auditifs " passifs " de type coquille, dont l'usage presente trois problematiques specifiques presentees dans les paragraphes suivants. La premiere problematique specifique concerne l'inconfort cause par exemple par la pression statique induite par la force de serrage de l'arceau, qui peut reduire le temps de port recommande pour limiter l'exposition au bruit. Il convient alors de pouvoir donner a l'utilisateur un protecteur confortable, adapte a son environnement de travail et a son activite. La seconde problematique specifique est l'evaluation de la protection reelle apportee par le protecteur. 
La methode des seuils auditifs REAT (Real Ear Attenuation Threshold) aussi vu comme un "golden standard" est utilise pour quantifier la reduction du bruit mais surestime generalement la performance des protecteurs. Les techniques de mesure terrains, telles que la F-MIRE (Field Measurement in Real Ear) peuvent etre a l'avenir de meilleurs outils pour evaluer l'attenuation individuelle. Si ces techniques existent pour des bouchons d'oreilles, elles doivent etre adaptees et ameliorees pour le cas des coquilles, en determinant l'emplacement optimal des capteurs acoustiques et les facteurs de compensation individuels qui lient la mesure microphonique a la mesure qui aurait ete prise au tympan. La troisieme problematique specifique est l'optimisation de l'attenuation des coquilles pour les adapter a l'individu et a son environnement de travail. En effet, le design des coquilles est generalement base sur des concepts empiriques et des methodes essais/erreurs sur des prototypes. La piste des outils predictifs a ete tres peu etudiee jusqu'a present et meriterait d'etre approfondie. L'utilisation du prototypage virtuel, permettrait a la fois d'optimiser le design avant production, d'accelerer la phase de developpement produit et d'en reduire les couts. L'objectif general de cette these est de repondre a ces differentes problematiques par le developpement d'un modele de l'attenuation sonore d'un protecteur auditif de type coquille. A cause de la complexite de la geometrie de ces protecteurs, la methode principale de modelisation retenue a priori est la methode des elements finis (FEM). Pour atteindre cet objectif general, trois objectifs specifiques ont ete etablis et sont presentes dans les trois paragraphes suivants. (Abstract shortened by ProQuest.).
Lallier-Daniels, Dominic
Fan design is often based on a trial-and-error methodology of improving existing geometries, together with the design experience and experimental results accumulated by companies. However, this methodology can prove costly in case of failure; even when successful, significant performance improvements are often difficult, if not impossible, to obtain. The present project proposes the development and validation of a design methodology based on the meridional (throughflow) calculation for the preliminary design of mixed-flow (helico-centrifugal) turbomachines and on computational fluid dynamics (CFD) for the detailed design. The meridional calculation method underlying the proposed design process is presented first, starting with its theoretical framework. Since the meridional calculation remains fundamentally iterative, the computational process is also presented, including the numerical methods used to solve the fundamental equations. The meridional code written during this master's project is then validated against a meridional algorithm developed by the author of the method and against numerical simulation results from a commercial code. The turbomachinery design methodology developed in this study is next presented as a case study of a mixed-flow fan based on specifications supplied by the industrial partner Venmar. The methodology is divided into three steps: the meridional calculation is used for preliminary sizing, followed by 2D cascade simulations for the detailed blade design, and finally by a 3D numerical analysis for validation and fine optimization of the geometry.
The meridional calculation results are also compared with the simulation results for the 3D geometry in order to validate the use of the meridional calculation as a preliminary sizing tool. Keywords: mixed-flow turbomachines, meridional calculation, numerical fluid flow simulation, numerical turbomachinery design.
Energy Technology Data Exchange (ETDEWEB)
Le Grognec, P.; Hariri, S. [Ecole des Mines de Douai, 59 (France); Afzali, M.; Jaffal, H. [Centre Technique des Industries Mecaniques, 60 - Senlis (France)
2008-11-15
The aim of this work is to determine how the harmfulness of a defect in a pressure vessel evolves as the defect propagates. Estimating this harmfulness involves computing stress intensity factors at each advance of the crack front. The cracks considered are semi-elliptical. The geometries and loads can be complex, in order to cover the main industrial cases. Finite element modelling is based on the creation of a "crack block", an optimized mesh near the discontinuity. The Paris law describes the fatigue behaviour under cyclic loading. A dedicated Python program, combining the advantages of the Castem and Abaqus calculation codes, computes the propagation and simplifies the estimation of the residual lifetime of a cracked pressure structure. (O.M.)
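The Paris law mentioned above relates the fatigue crack growth rate to the stress intensity factor range. A minimal sketch of such a propagation loop, assuming illustrative Paris constants C and m and the simple expression Delta K = Y * Delta sigma * sqrt(pi * a) for a shallow surface crack (the actual work computes K along the semi-elliptic front by finite elements):

```python
import math

def delta_K(a, delta_sigma, Y=1.12):
    """Stress intensity factor range for a shallow surface crack:
    Delta K = Y * Delta sigma * sqrt(pi * a), in MPa*sqrt(m)."""
    return Y * delta_sigma * math.sqrt(math.pi * a)

def propagate(a0, a_crit, delta_sigma, C=1e-11, m=3.0, da=1e-5):
    """Integrate the Paris law da/dN = C * (Delta K)**m by stepping
    the crack length a and accumulating cycles; returns total cycles."""
    a, cycles = a0, 0.0
    while a < a_crit:
        dK = delta_K(a, delta_sigma)
        cycles += da / (C * dK**m)   # cycles needed to advance by da
        a += da
    return cycles

# Example: a 1 mm defect grown to 10 mm under a 100 MPa stress range
life = propagate(a0=1e-3, a_crit=1e-2, delta_sigma=100.0)
```

The loop makes visible why the residual lifetime is dominated by the early, small-crack stage: as a grows, Delta K rises and each increment of crack advance costs fewer cycles.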
A numerical simulation method of arc welding; Une methode de simulation numerique du soudage a l arc
Energy Technology Data Exchange (ETDEWEB)
Chau, T.T. [AREVA TA (Technicatome), Centre Jean-Louis Andrieu, BP34000, 13791 Aix-en-Provence Commission Simulation Numerique du Soudage (AFM / SNS), Paris La Defense (France)
2006-07-01
Nowadays, metal industries increasingly use thin steel sheets to reduce mass and optimize the strength of structures assembled by electric arc welding, which remains the most widely used and most economical technique. The welded assemblies, however, also acquire deformations and residual stresses of greater or lesser magnitude. The methodology presented here can help the design and manufacturing engineer estimate the levels of these effects and optimize the design and manufacturing parameters to reach the desired performance, with little computing time, on large 3D numerical models. (O.M.)
Energy Technology Data Exchange (ETDEWEB)
Anon.
2006-01-15
To lay a pipeline offshore in South-East Asia, the Korean company Hyundai has selected an orbital welding system running several pairs of torches perfectly synchronized by their digital control. (O.M.)
Processing optimization with Canon's software; Processeringsoptimering med Canons software
DEFF Research Database (Denmark)
Precht, Helle
2009-01-01
... Possibilities of software optimization were studied in relation to optimal image quality and control exposures, to investigate whether diagnostic image quality could be accepted, thereby taking ALARA as the starting point. Method and materials: a quantitative experimental study based on experiments with a technical and ... human phantom. The CD Rad phantom was used as the technical phantom; its images were analysed with the CD Rad software, the result being an objective IQF value. The human phantom was a lamb pelvis with femur which, according to the NRPB, is comparable in absorption to a five-year-old child. The human test images were ...
Energy Technology Data Exchange (ETDEWEB)
Lemarchand, G
2003-04-15
This work deals with the design of light rotary anodes used for X-ray generation in medical scanners. Such anodes are made of graphite coated with tungsten by low-pressure plasma sputtering. The mechanical behaviour of these materials under intense thermo-mechanical loading has been studied. First, the in-service loading conditions are defined in terms of excitation frequency, temperature, strain and strain rate. The analysis of used anodes made it possible to identify the main modes of in-service damage. Tests were performed on small samples over the whole temperature range from ambient up to 1800 deg. C. Carbon showed a brittle elastic behaviour, while tungsten showed a more complex one: elastic-brittle up to 400 deg. C, then plastic, and becoming creep-sensitive above 1200 deg. C. Original loading paths revealed the existence of an internal back-stress and a coupling between plastic and viscous deformations. Describing the mechanical behaviour of tungsten required an original phenomenological constitutive law with two inelastic (plastic and visco-plastic) strains and an interaction term between the two flow mechanisms. The hardening generated by each flow is represented by kinematic variables. The parameters were identified numerically with an optimizer coupled to a finite element code simulating the bending test. The law was validated against experimental observations of paths under complex loads. This constitutive law was finally used to simulate the operating conditions of a real anode. An axisymmetric 2-D mesh was used to calculate the stresses generated by post-annealing cooling, by one and by several series of radiographic exposures, and finally by complete cooling after use. Repeated exposures rapidly lead to stabilized cycles.
The calculated stress levels are realistic and remain below the rupture strength of the materials. The simulation can already be used industrially to evaluate the influence of a change in the anode geometry or in the in-service loading conditions. (J.S.)
Energy Technology Data Exchange (ETDEWEB)
Ravet, F. [Rouen Univ., 76 - Mont-Saint-Aignan (France)]|[SNECMA, 77 - Moissy-Cramayel (France); Baudoin, Ch.; Schultz, J.L. [SNECMA, 77 - Moissy-Cramayel (France)
1996-12-31
Simplifying hypotheses are required when combustion and aerodynamic phenomena are considered simultaneously. In this paper, a turbulent combustion model is proposed in which the combustion chemistry is reduced to a single reaction. In this way, only two variables are needed to describe the problem, and combustion can be characterized by the consumption of one of the two reactive species. In a first step, the instantaneous consumption rate is obtained using the Lagrangian form of the mass fraction equation of the species under consideration, and by considering the equilibrium state only. This state is determined so as to preserve consistency with the results that would be obtained using a complete kinetic scheme. In a second step, the average rate is determined using the instantaneous consumption term and a probability density function. This model was tested on various configurations, in particular on an experimental main chamber and on a reheat chamber. Results indicate that this model could be used to predict temperature levels inside these combustion chambers. Other applications, like the prediction of pollutant species emissions, can be considered. (J.S.) 12 refs.
Energy Technology Data Exchange (ETDEWEB)
Patino-Palacios, G
2007-11-15
The simulation of multiphase flows is currently a major scientific, industrial and economic challenge. The objective of this work is to improve the understanding of poly-dispersed flows through simulation and to contribute to modelling and characterizing their hydrodynamics. The study of gas-solid systems involves models that take into account the influence of the particles and the effects of collisions on momentum transfer; such a study is the framework of this thesis. Simulations performed with the Saturne-polyphasique-Tlse code, developed by Electricite de France in collaboration with the Institut de Mecanique des Fluides de Toulouse, confirmed the feasibility of the CFD approach for the hydrodynamic study of injectors and dense fluidized beds. The validation stages concern, on the one hand, assessing the simulation tool in its current state through validation and sensitivity studies of the models and comparison of the numerical results with experimental data. On the other hand, the development of new physical models and their implementation in the Saturne code will allow the optimization of the industrial process. To carry out this validation satisfactorily, key simulations were performed, in particular a monodisperse injection and the radial force of injection in the case of a poly-disperse flow, as well as the fluidization of a column of solid particles. In this last case, three configurations of dense fluidized beds were considered in order to study the influence of the mesh on the simulations; the operation of a dense fluidized bed was then simulated, in which the segregation between two different species of particles is characterized.
The study of the injection of poly-disperse flows presents two configurations: a co-current gas-particle flow in gas (Hishida case) and a multiphase flow in a confined-jet configuration with recirculation and stagnation zones (Hercules case). Numerical calculations were compared with the available experimental data and showed that the hydrodynamics of the multiphase flows are satisfactorily predicted. (author)
Cherif, El-Amine; Ouahsine, Abdellatif; Sergent, Philippe
2011-01-01
We propose a new mixing length profile, based on an extension of von Kármán's similarity hypothesis, together with the associated mixing velocity profile. This profile was compared with other profiles and tested on three academic and experimental cases. The model was validated on a set of reference examples concerning the erosion of a sand bed in a uniform channel flow and the filling of an extraction pit, based on the tests presented in the European project SANDPIT.
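As a reminder of the classical closure that such profiles extend: with Prandtl's mixing length l_m = kappa*z, the mixing-length model du/dz = u_*/l_m integrates to the logarithmic law of the wall. A minimal numerical sketch of that baseline (the extended l_m(z) profile proposed in the paper is not reproduced here; the kappa, u_* and z0 values are illustrative):

```python
import math

KAPPA = 0.41   # von Karman constant

def mixing_length(z):
    """Classical Prandtl mixing length l_m = kappa * z."""
    return KAPPA * z

def velocity_profile(u_star, z0, z_max, n=10000):
    """Integrate du/dz = u_* / l_m(z) upward from the roughness height z0
    (forward Euler on a log-spaced grid) and return (z, u) lists."""
    zs = [z0 * (z_max / z0) ** (i / n) for i in range(n + 1)]
    u = [0.0]
    for i in range(n):
        dz = zs[i + 1] - zs[i]
        u.append(u[-1] + u_star / mixing_length(zs[i]) * dz)
    return zs, u

# The result should track the log law u = (u_*/kappa) * ln(z/z0)
zs, u = velocity_profile(u_star=0.05, z0=1e-4, z_max=0.1)
```

Checking the numerical profile against the analytic log law is the same kind of validation step the abstract describes for the extended profile, only on the textbook case.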
Energy Technology Data Exchange (ETDEWEB)
Wilbois, B.
2003-07-01
In this work, a new model is built that takes into account all the mass transfer phenomena (in particular convection) taking place in a mixture of n{sub c} constituents in a porous medium. This model should make it possible to predict the quantitative composition of fluids in oil fields and to improve the understanding of how the different species flow within mixtures. The overall physical phenomena at play in oil fields are explained in the first chapter. Chapter 2 recalls some notions of equilibrium and non-equilibrium thermodynamics. These notions, necessary to understand the forecasting methods used by petroleum geologists, are described in chapter 3, which also includes a bibliographic study of methods for simulating mass and heat transfer in porous media. In chapter 4, using the thermodynamics of irreversible processes described in chapter 2, a new type of macroscopic model describing all the phenomena analyzed is developed, and the numerical method used to solve the resulting system of equations is specified. Finally, chapter 5 proposes a set of validation cases for the uncoupled phenomena and some qualitative examples of the modelling of coupled phenomena. (J.S.)
Energy Technology Data Exchange (ETDEWEB)
Torrent, M
1996-07-01
This work contributes to the theoretical study of extended defects in covalent materials, and is especially devoted to tilt grain boundaries in silicon as a model material. The theoretical model is based on the self-consistent tight-binding approximation and is implemented in two numerical techniques: the fast "order N" density-matrix method and the diagonalization technique, which allows the sampling of reciprocal space. The total energy parameters of the model were fitted to reproduce the silicon band structure (with a correct gap value) and the transferability of the crystalline and mechanical properties of this material. A new type of boundary conditions is proposed and tested. These conditions, named "ante-periodic" or "Moebius", allow only one grain boundary per box instead of two and halve the CPU time. The model is then applied to the study of the {sigma}=25 [001] (710) grain boundary. The results show the possible presence in this boundary of low-energy non-reconstructed atomic structures which are electrically active, confirming what had been suggested by some experimental observations. The same study is also performed for the {sigma}=13 [001] (510) grain boundary. In order to compare the intrinsic electrical activity of the previous grain boundaries with that induced by impurities, a total energy parametrization of the silicon-nickel bond is derived and used in preliminary calculations. Finally, the two variants of the {sigma}=11 [011] (2-33) interface are studied, especially their respective interfacial energies. The result disagrees with previous calculations using phenomenological potentials. (author)
Energy Technology Data Exchange (ETDEWEB)
Peyroux, J
2005-11-15
This project aims to make the resolution of Vlasov codes even more powerful through various parallelization tools (MPI, OpenMP...). A simplified test case served as a base for constructing the parallel codes, yielding a computational skeleton which can thereafter be re-used for increasingly complex models (more than four phase-space variables). This will make it possible to treat more realistic situations linked, for example, to the injection of ultra-short and ultra-intense pulses in inertial fusion plasmas, or to the study of the trapped-ion instability, now regarded as responsible for the generation of turbulence in tokamak plasmas. (author)
Energy Technology Data Exchange (ETDEWEB)
Marchand, E
2007-12-15
The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)
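The deterministic approach described above ranks input directions by how strongly they perturb the outputs, via the singular value decomposition of the model derivative. A minimal sketch on a toy model (a finite-difference Jacobian and power iteration stand in for the full adjoint differentiation and SVD; the model and its parameters are illustrative, not the ANDRA data):

```python
import math

def model(p):
    """Toy two-output model standing in for the flow/transport code:
    p = (permeability-like, diffusion-like, source-like) inputs."""
    k, d, s = p
    return [s * math.exp(-k) + d, s * d / (1.0 + k)]

def jacobian(f, p, h=1e-6):
    """Forward finite-difference Jacobian J[i][j] = d f_i / d p_j."""
    f0 = f(p)
    J = [[0.0] * len(p) for _ in f0]
    for j in range(len(p)):
        q = list(p); q[j] += h
        fj = f(q)
        for i in range(len(f0)):
            J[i][j] = (fj[i] - f0[i]) / h
    return J

def largest_singular_value(J, iters=200):
    """Power iteration on J^T J: the square root of its dominant
    eigenvalue is the largest singular value, i.e. the worst-case
    local amplification of input uncertainty by the model."""
    n = len(J[0])
    v = [1.0] * n
    for _ in range(iters):
        Jv = [sum(row[j] * v[j] for j in range(n)) for row in J]
        w = [sum(J[i][j] * Jv[i] for i in range(len(J))) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    Jv = [sum(row[j] * v[j] for j in range(n)) for row in J]
    return math.sqrt(sum(x * x for x in Jv))

sigma_max = largest_singular_value(jacobian(model, [1.0, 0.5, 2.0]))
```

The corresponding singular vector (the converged v) identifies the combination of inputs to which the safety indicator is locally most sensitive, which is exactly the kind of local information the abstract contrasts with the global Monte Carlo approach.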
Energy Technology Data Exchange (ETDEWEB)
Doumic, M
2005-05-15
To simulate the propagation of a monochromatic laser beam in a medium, we use the paraxial approximation of the Klein-Gordon equation (in the time-dependent problem) and of the Maxwell equations (in the time-independent case). In the first part, we carry out an asymptotic analysis of the Klein-Gordon equation. We obtain approximate problems, either of Schroedinger or of transport-Schroedinger type. We prove existence and uniqueness of a solution for these problems, and estimate the difference between this solution and the exact solution of the Klein-Gordon equation. In the second part, we study the boundary problem for the advection-Schroedinger equation, and show what the boundary condition must be for the problem on our domain to be the restriction of the problem in the whole space: such a condition is called a transparent or absorbing boundary condition. In the third part, we use the preceding results to build a numerical resolution method, for which we prove stability and show some simulations. (author)
Energy Technology Data Exchange (ETDEWEB)
Masella, J.M.
1997-05-29
This thesis is devoted to the numerical simulation of two-fluid models describing gas-liquid two-phase flow in pipes. The numerical models developed here can be used more generally to model a wide class of physical models that can be put in hyperbolic form. We first introduce two isothermal two-fluid models, composed of a mass balance equation and a momentum equation written for each phase, describing respectively a stratified two-phase flow and a dispersed two-phase flow. These models are hyperbolic under some physical assumptions and can be written as a nonconservative vectorial system. We define and analyse a new finite volume scheme (v{integral}Roe) founded on a linearized Riemann solver. This scheme needs no analytical calculation and gives good results in shock tracking. We compare this new scheme with the classical Roe scheme. We then propose and study some numerical models, with and without flux splitting, adapted to the discretization of the two-fluid models. These numerical models are obtained by a finite volume integration of the equations and rely on the v{integral} scheme. In order to reduce the CPU time imposed by the low Mach number of two-phase flows, acoustic waves are treated implicitly. We then propose a discretization of the boundary conditions which allows the generation of transient flows in pipes. Several academic and more physical numerical tests show the good behaviour of the numerical methods. (author) 77 refs.
Energy Technology Data Exchange (ETDEWEB)
Ivanov, A.A
2001-06-01
Instabilities of the Rayleigh-Taylor type are considered in this thesis. The topic was inspired by recent advances in the physics of plasma compression, especially with systems like the Z-pinch. The Rayleigh-Taylor instability (RTI) plays an important role in the evolution of magnetized plasmas in these experiments, as well as in stellar plasmas and classical fluids. In nuclear fusion, the RTI is very often the factor limiting the achievable compression. In the present work we examine in detail the characteristic features of instabilities of this type in order to eliminate their detrimental influence. We study both the general case of the "classic" Rayleigh-Taylor instability (in incompressible fluids) and more specific cases of instabilities of Rayleigh-Taylor type in magnetized plasmas, in liners, in wire-array implosions, etc. We have studied the influence of the Hall diffusion of the magnetic field on the growth rate of the instability, and obtained a self-similar solution for the widening of the initial magnetic field profile and for the wave of magnetic field penetration. The subsequent evolution of the magnetic field in plasma opening switches (POS) has then been examined. We have shown the possibility of a strong rarefaction wave in both collisional and collisionless cases; this wave can explain the opening of a POS. The suppression of the Rayleigh-Taylor instability by forced oscillations of the boundary between two fluids allows us to propose some ideas for inertial fusion experiments. We have considered the general case of the instability, i.e. two incompressible viscous superposed fluids in a gravitational field, obtained an exact analytical expression for the growth rate, and analyzed the influence of the parameters of the external "pumping" on the instability.
These results apply to a wide range of systems, from classical hydrodynamics to astrophysical plasmas. The wire-array scheme has recently become a very popular way to obtain high-power X-radiation or a high-quality implosion in Z-pinches. Experimental studies have demonstrated that implosion results are much better with multiple thin wires arranged cylindrically than with a usual liner scheme. We have examined a problem modelling the stabilization of the Rayleigh-Taylor instability for a wire-array system. The reason for the instability suppression is the regular spatial modulation of the plasma-magnetic field surface (in the vacuum), created by the explosions of the solid wires and by the subsequent plasma evolution. We have also examined the coupling of the instability modes that takes place in the presence of the magnetic field; this study shows that the spatial surface modulation can effectively diminish the growth rate of the considered instability. (author)
Energy Technology Data Exchange (ETDEWEB)
Lahaye, T.; Chau, Q. [Institut de Radioprotection et de Surete Nucleaire (IRSN/DPHD/SDOS), Service Dosimetrie, 92 - Fontenay-aux-Roses (France); Ferragut, A.; Gillot, J.Y. [SAPHYMO, 91 - Massy (France)
2003-07-01
The use of calculation codes makes it possible to reduce costs and delays. These codes give operators elements to reinforce their predictive dosimetry. In cases of accidental overexposure, numerical dosimetry complements clinical and biological investigations to give as precise an estimate as possible of the received dose. For particular situations where no suitable instrumentation exists, numerical dosimetry can substitute for the conventional techniques used in regulatory dosimetry (a project for aviation personnel). (N.C.)
Energy Technology Data Exchange (ETDEWEB)
Caro, F
2004-11-15
This work deals with the modelling and numerical simulation of liquid-vapor phase transition phenomena. The study is divided into two parts: first we investigate phase transition phenomena with a Van der Waals equation of state (a non-monotonic equation of state), then we adopt an alternative approach with two equations of state. In the first part, we study the classical viscous criteria for selecting weak solutions of the system when the equation of state is non-monotonic. Since those criteria do not select physical solutions, we focus on a more recent criterion: the visco-capillary criterion. We use this criterion to solve the Riemann problem exactly (which requires solving a scalar nonlinear algebraic equation). Unfortunately, this step is quite costly in CPU time, which prevents using this method as a basis for building Godunov solvers. That is why we propose an alternative approach with two equations of state. Using the least action principle, we propose a phase-changing two-phase flow model consistent with the second principle of thermodynamics. We then describe two equilibrium sub-models obtained from the relaxation processes when instantaneous equilibrium is assumed. Despite the weak hyperbolicity of the last sub-model, we propose stable numerical schemes based on a two-step strategy involving a convective step followed by a relaxation step. We show the ability of the system to simulate vapor bubble nucleation. (author)
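The two-step strategy mentioned above, a transport step followed by a relaxation step toward equilibrium, can be sketched on a toy scalar model: a quantity q is advected at constant speed, then relaxed toward an equilibrium value q_eq with a time scale tau. All names and values here are illustrative stand-ins for the two-phase relaxation system:

```python
import math

def convective_step(q, c, dt, dx):
    """First-order upwind advection at speed c > 0, periodic cells."""
    n = len(q)
    return [q[i] - c * dt / dx * (q[i] - q[i - 1]) for i in range(n)]

def relaxation_step(q, q_eq, tau, dt):
    """Exact integration of dq/dt = (q_eq - q)/tau over one time step,
    so the relaxation stage is unconditionally stable even for small tau."""
    f = math.exp(-dt / tau)
    return [q_eq + (qi - q_eq) * f for qi in q]

def run(q, q_eq=1.0, c=1.0, tau=0.05, dt=0.005, dx=0.01, steps=200):
    for _ in range(steps):
        q = convective_step(q, c, dt, dx)       # transport stage
        q = relaxation_step(q, q_eq, tau, dt)   # drive toward equilibrium
    return q

q = run([0.0] * 100)
```

Treating the stiff relaxation exactly (or implicitly) while keeping the convective stage explicit is the usual motivation for this kind of splitting.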
Energy Technology Data Exchange (ETDEWEB)
Couillaud, Ch.; Haouat, G
1999-07-01
Optical transition radiation (OTR) has been extensively used for many years as a beam visualisation tool on electron accelerators, serving to monitor the beam while its transport is adjusted. Its spatial and temporal characteristics make it very attractive as a diagnostic tool and allow measurements of the beam energy and of the transverse and longitudinal emittances. We present a numerical study of the transition radiation process in the optical region of the radiated spectrum (OTR) and in its higher-energy part (XTR). Spatial and spectral properties are described and used to interpret experimental observations performed on the ELSA electron-beam facility. An analytical description of the angular distributions of the visible radiation emitted by birefringent targets used as OTR sources is also proposed. We also analyze interference phenomena between two OTR sources and show the advantage of using such an interferometer as a diagnostic tool for electron accelerators in the tens-of-MeV range. Finally, we present an analytical model for designing a soft X-ray source to be installed on the ELSA facility, using either a multi-foil stack or a multilayer of two materials of different permittivities. (authors)
Energy Technology Data Exchange (ETDEWEB)
Diaz Moreno, J.M.; Ortegon Gallego, F. [Universidad de Cadiz, Dept. de Matematicas, CASEM (Spain); Lazaar, S. [Universite AbdelMalek Essaadi, Ecole Nationale des Sciences Appliquees, Dept. de Mathematiques et Informatique, Tanger Principale (Morocco)
2007-10-15
We study a fast algorithm to generate random fields which represent the uncertain parameters in a transport model of radionuclides in the geosphere. This algorithm has been introduced by Mikhailov and is based on a procedure known as Palm process. It can then be applied in a Monte Carlo method in the probabilistic risk assessment of high-level radioactive waste disposal in deep formation. We use this algorithm in order to compute the retardation factor appearing in the radionuclide migration model and we compare the CPU time corresponding to this procedure versus a classical spectral method. (authors)
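For context on the quantity the random sampling feeds: for linear sorption, the retardation factor is R = 1 + (rho_b/theta) * Kd, with rho_b the bulk density, theta the water content and Kd the distribution coefficient. A minimal Monte Carlo sketch, assuming an illustrative lognormal distribution for Kd in place of the Palm-process generator studied in the paper (all parameter values are invented for the example):

```python
import math
import random

def retardation(kd, rho_b=1.6, theta=0.3):
    """Retardation factor for linear sorption:
    R = 1 + (bulk density / water content) * Kd."""
    return 1.0 + rho_b / theta * kd

def monte_carlo_R(n=20000, mu=-2.0, sigma=0.5, seed=1):
    """Sample Kd from a lognormal law and average the resulting R,
    as a plain Monte Carlo stand-in for the risk-assessment loop."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        kd = rng.lognormvariate(mu, sigma)
        total += retardation(kd)
    return total / n

R_mean = monte_carlo_R()
```

In the paper the expensive part is generating spatially correlated random fields of such parameters, which is where the Palm-process algorithm is compared against a spectral method; the averaging loop itself stays the same.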
Energy Technology Data Exchange (ETDEWEB)
Rascle, P.; El Amine, K. [Electricite de France (EDF), Direction des Etudes et Recherches, 92 - Clamart (France)
1997-12-31
We are interested in the numerical approximation of two-fluid models of non-equilibrium two-phase flows described by six balance equations. We introduce an original splitting technique for the system of equations. This technique is derived so that single-phase Riemann solvers may be used; moreover, it allows a straightforward extension to various and detailed exchange source terms. The properties of the fluids are first approximated by ideal-gas-type equations of state and then extended to real fluids. For the construction of the numerical schemes, the hyperbolicity of the full system is not necessary. When based on suitable kinetic upwind schemes, the algorithm can compute flow regimes evolving from mixtures to single-phase flows and vice versa. The whole scheme preserves the physical features of all the variables, which remain in the set of physical states. Several stiff numerical tests, such as phase separation and phase transition, are displayed in order to highlight the efficiency of the proposed method. The document is a PhD thesis divided into 6 chapters and two annexes, entitled: 1. Introduction (in French), 2. Two-phase flow, modelling and hyperbolicity (in French), 3. A numerical method using upwind schemes for the resolution of two-phase flows without exchange terms (in English), 4. A numerical scheme for one-phase flow of real fluids (in English), 5. An upwind numerical scheme for non-equilibrium two-phase flows (in English), 6. The treatment of boundary conditions (in English), A.1. The Perthame scheme (in English) and A.2. The Roe scheme (in English). 136 refs. This document is a PhD thesis in Applied Mathematics presented by Khalid El Amine at the Universite Paris 6.
Energy Technology Data Exchange (ETDEWEB)
Roser, R.
1999-11-26
This work concerns the thermal design of kettle reboilers. Current methods are highly inaccurate, with regard both to the correlations for the external heat transfer coefficient at the single-tube scale and to two-phase flow modelling at the boiler scale. The aim of this work is to improve these thermal design methods. It contains an experimental investigation under typical operating conditions of such equipment: a hydrocarbon (n-pentane) at low mass flux. This investigation has led to a characterization of the local flow pattern through void fraction measurements and, from this, to correlations for void fraction, pressure drop and heat transfer coefficient. The approach is original, since the developed correlations are based on the liquid velocity at the minimum cross-sectional area between tubes as the variable characterizing the hydrodynamic effects on pressure drop and heat transfer coefficient. These correlations are shown to give much better results than those suggested up to now in the literature, which are empirical transpositions of methods developed for in-tube flows. Furthermore, the numerical code MC3D has been applied using the correlations developed in this work, leading to a modelling of the two-phase flow in the boiler, which is a significant advance over current simplified methods. (author)
Energy Technology Data Exchange (ETDEWEB)
Nadim El Wakil; Jacques Padet [Laboratoire de Thermomecanique UTAP, Universite de Reims Champagne Ardenne, Faculte des Sciences, B.P. 1039, 51687 Reims (France)]; Nelu-Cristian Chereches; Nicolae Taranu [Faculte de Genie Civil, Universite Technique Gh. Asachi de Iasi, 38 Lascar Catargi, 700107 Iasi (Romania)]
2005-07-01
In this article the fluid flow and the heat transfer by mixed convection are analyzed inside a three-phase power transformer. Avoiding hot spots in a power transformer is a determining factor for its preservation and its good functioning. In order to ensure efficient cooling, a directed flow inside a multichannel system is a good solution. The modelling was carried out on the middle column of the power transformer, where two windings are wound around the core in an axisymmetric geometry. The inlet and the outlet of the fluid are located at the bottom and at the top of the core axis, respectively. Different geometric configurations were conceived and studied in order to improve the heat transfer and the cooling of the power transformer. (authors)
Energy Technology Data Exchange (ETDEWEB)
Faivre, V.
2003-12-15
Combustion instabilities occur when the flame heat release couples with the acoustic waves propagating in the combustion chamber. This phenomenon can lead to strong vibrations and noise and sometimes even to complete failure of the combustion device. That is why so many studies focus on the control of these instabilities. The method chosen in this study consists of an active control device (a set of actuators) having a strong effect on the mixing of the burner exhaust flow with the ambient fluid. The model configuration studied consists of a non-reactive air jet controlled by four small tangential secondary jets. Experiments have been carried out to optimize the control device geometry. The configuration identified as the most efficient in terms of mixing enhancement has been simulated through Large Eddy Simulation (LES). The objective of the numerical part of the present work is twofold. First, the numerical simulations provide a better understanding of the phenomena occurring when the control is on. Second, it is shown that LES can be considered a tool for predicting the effects of a control device on a flow. (author)
Energy Technology Data Exchange (ETDEWEB)
Crestaux, Th. [CEA Saclay, Dept. Modelisation de Systemes et Structures (DEN/DANS/DM2S/SFME), 91 - Gif sur Yvette (France)
2008-07-01
The context of this thesis is the development of numerical simulation in industrial processes. It aims to study and develop methods for reducing the numerical cost of computing polynomial chaos expansions. The implementation concerns problems of high stochastic dimension, and more particularly the model of radionuclide transport from radioactive waste disposals. (A.L.B.)
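As a minimal illustration of what a polynomial chaos expansion is (generic textbook material, not the thesis's cost-reduction methods), the sketch below projects the toy model y = exp(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials. Its exact coefficients are exp(1/2)/k!, which makes the sketch easy to check.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as H

# Hermite chaos of y = exp(xi), xi ~ N(0,1); exact coeffs are exp(1/2)/k!.
order = 6
nodes, weights = H.hermegauss(30)      # quadrature for weight exp(-x^2/2)
norm = np.sqrt(2.0 * np.pi)            # so E[g(xi)] = sum(w * g(x)) / norm

coeffs = []
for k in range(order + 1):
    basis = np.zeros(k + 1)
    basis[k] = 1.0                     # coefficient vector of He_k
    # Galerkin projection: c_k = E[y * He_k] / E[He_k^2], with E[He_k^2] = k!
    num = np.sum(weights * np.exp(nodes) * H.hermeval(nodes, basis)) / norm
    coeffs.append(num / math.factorial(k))

# cheap surrogate evaluation: y(xi) ~ sum_k c_k He_k(xi)
surrogate = H.hermeval(1.0, coeffs)
```

Once the coefficients are computed, evaluating the surrogate costs only a polynomial evaluation, which is the usual motivation for chaos expansions in uncertainty propagation.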
2001-12-01
...and their safety. Translator's note: in the second paragraph, the author insists heavily on the preparation of the tests and the analysis of the...feedback loop is readily apparent. As an example, a roll-rate feedback to the aileron is a good indication that the roll mode required augmentation. The impacts of roll-rate feedback to the aileron on this mode are well understood from the classical design theory. Newer or so-called modern design
Quentin, Emmanuelle
1999-11-01
Modelling the surface runoff of water over terrain under the action of gravity is very useful in the environmental sciences. Using a raster geographic information system (GIS), it is now possible to generate the gridded waterway network of a watershed. The most frequently used drainage algorithm identifies the flow direction (aspect) from each cell to one of the eight adjacent cells (8D) according to the steepest elevation gradient. This drainage pattern allows the generation of the hydrographic network and subwatersheds. The basic input data is a digital elevation model (DEM), extracted from elevation contours and rasterised according to a grid cell size. The vector stream network constitutes the only reference data needed. To control the density of the hydrographic tree, a minimal drainage area generating perennial streams has to be adjusted; but the error associated with the cartographic results cannot be explicitly calculated, owing to the hierarchical structure of the drainage algorithm. Classical Euclidean distance operations available in GIS are not suited to quantifying the deviation between the reference and synthetic networks. This particular application requires the use of a drainage distance based on the flow path. Such a concept allows the establishment of a unique error estimate that quantifies the spatial agreement between the generated and reference networks, reflecting the accuracy of the drainage scheme from which all other hydrologic features are extracted. This optimisation technique has been tested on the Boyer River watershed, located 25 km south-east of Quebec City and covering an area of approximately 220 km{sup 2}. The 8D drainage algorithm used, which has been adapted and completed, allows for the consideration of low-relief characteristics, where elevations range from 275 m upstream to 10 m at the outlet into the Saint Lawrence River.
The reference DEM has been generated by interpolation of the elevation isolines available on the 1:20 000 digital topographic maps. The optimal grid cell size for the drainage algorithm has been found to be 125 m. Monte Carlo simulations have been performed to assess the accuracy changes resulting from a DEM rotation of 22.5° and from the introduction of random errors in the initial DEM. The present work constitutes a targeted advance in the field of environmental geomatics. It replaces the multiple geometric indexes generally used in hydrology to compare two networks by a unique georeferenced measure well adapted to the gravitational nature of runoff. Moreover, it proposes an objective and universal criterion for reaching optimality, replacing traditional rules of thumb. Finally, it opens perspectives for the construction of watershed models relating land-use exports to surface water quality.
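The 8D (often written D8) flow-direction rule described above can be sketched in a few lines: each cell drains to the neighbour with the steepest elevation drop, with diagonal drops divided by sqrt(2). The tiny tilted-plane DEM below is a synthetic check, not data from the study.

```python
import numpy as np

# 8 neighbour offsets (row, col) and their distances (diagonals = sqrt(2))
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
DIST = [np.sqrt(2), 1, np.sqrt(2), 1, 1, np.sqrt(2), 1, np.sqrt(2)]

def d8_directions(dem):
    """For each interior cell, index (0-7) of the steepest downslope
    neighbour, or -1 for a pit (no lower neighbour)."""
    rows, cols = dem.shape
    out = -np.ones((rows, cols), dtype=int)
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            best, best_slope = -1, 0.0
            for k, (di, dj) in enumerate(OFFSETS):
                slope = (dem[i, j] - dem[i + di, j + dj]) / DIST[k]
                if slope > best_slope:
                    best, best_slope = k, slope
            out[i, j] = best
    return out

# synthetic DEM: a plane tilted toward the east (elevations drop rightward)
dem = np.tile(np.arange(5.0, 0.0, -1.0), (5, 1))
flow = d8_directions(dem)
```

On the tilted plane, every interior cell drains due east (offset index 4), since the eastward drop of 1 per cell beats the diagonal drop of 1/sqrt(2).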
Energy Technology Data Exchange (ETDEWEB)
Ali Akbar ABBASIAN ARANI; Didier LASSEUX; Azita AHMADI [TREFLE-ENSAM, UMR CNRS 8508, Esplanade des Arts et Metiers 33405 Talence Cedex, (France)
2005-07-01
In this work, we present the development of a 3D numerical tool for the simulation of non-Darcy two-phase flow in heterogeneous porous media. The physical model selected is the generalized Darcy-Forchheimer model. A validation is performed first by comparing numerical results with a semi-analytical solution of the Buckley-Leverett type. Secondly, numerical results obtained on 1D and 2D heterogeneous configurations are presented, and we highlight the importance of the inertial terms as a function of the Reynolds number of the flow. (authors)
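The generalized Darcy-Forchheimer law adds a quadratic inertial term to Darcy's law; in 1-D, -dp/dx = (mu/K)*v + beta*rho*v^2. The sketch below evaluates the two terms and their ratio, a Forchheimer number that plays the role of the Reynolds number mentioned above. All material values are illustrative, not from the thesis.

```python
# Darcy-Forchheimer law in 1-D: -dp/dx = (mu/K)*v + beta*rho*v**2
mu = 1.0e-3      # fluid viscosity [Pa.s]
rho = 1.0e3      # fluid density [kg/m^3]
K = 1.0e-10      # permeability [m^2]
beta = 1.0e5     # Forchheimer (inertial) coefficient [1/m]

def pressure_gradient(v):
    """Return (-dp/dx, inertial/viscous ratio) for Darcy velocity v [m/s]."""
    darcy = mu / K * v              # viscous (Darcy) term
    inertia = beta * rho * v * v    # inertial (Forchheimer) term
    return darcy + inertia, inertia / darcy

grad_slow, ratio_slow = pressure_gradient(1.0e-5)  # creeping flow
grad_fast, ratio_fast = pressure_gradient(1.0e-1)  # inertia comparable
```

At the low velocity the inertial term is negligible and the model reduces to Darcy's law; at the high velocity the two terms are of the same order, which is the regime where the non-Darcy correction matters.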
Energy Technology Data Exchange (ETDEWEB)
Lardjane, N.
2002-05-15
The subject of this work is the application of large-eddy simulation to the mixing of two fluids with different thermodynamic properties. Numerical errors in the discretization of the Navier-Stokes equations, and their interaction with sub-grid models, are investigated on a freely decaying isotropic homogeneous turbulence. A high-resolution numerical code is then developed for the simulation of binary mixing layers. The amplitude of early acoustic waves is reduced by the use of a temporally self-similar initial condition. The relative magnitude of the sub-grid terms arising from the filtered equations is investigated by explicit filtering of direct numerical simulation results for temporal N{sub 2}/O{sub 2} and H{sub 2}/O{sub 2} mixing layers. Implicit closure (MILES) is then considered on the basis of WENO schemes. (author)
Energy Technology Data Exchange (ETDEWEB)
Meplan, O.
1996-01-01
This thesis is devoted to a numerical study of the quantum dynamics of the Fermi accelerator, which is classically chaotic: a particle in a one-dimensional box with an oscillating wall. First, we study the classical dynamics: we show that the time of impact of the particle with the moving wall and its energy in the wall frame are conjugate variables, and that Poincare surfaces of section in these variables are more understandable than the usual stroboscopic sections. Then, the quantum dynamics of this system is studied by means of two numerical methods. The first is a generalization of the KKR method to space-time; it suffices to solve an integral equation on the boundary of a space-time billiard. The second method is faster and is based on successive free propagations and kicks of potential. This allows us to obtain Floquet states which we can, on the one hand, compare to the classical dynamics with the help of Husimi distributions and, on the other hand, study as a function of the parameters of the system. This study leads to nice illustrations of phenomena such as the spatial localization of a wave packet in a vibrating well, or tunnel effects. In the adiabatic situation, we give a formula for the quasi-energies which exhibits a phase term independent of the states. In this regime, there exist particular situations where the quasi-energy spectrum presents a total quasi-degeneracy. The wave packet energy can then increase significantly. This phenomenon is quite surprising for a smooth motion of the wall. The third part deals with the evolution of a classical wave in the Fermi accelerator. Using the generalized KKR method, we show a surprising phenomenon: in most situations (as long as the wall motion is periodic), a wave is localized exponentially in the well and its energy increases geometrically. (author). 107 refs., 66 figs., 5 tabs. 2 appends.
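The classical dynamics described above is often studied through the simplified Fermi-Ulam map (static-wall approximation), in which the wall transfers momentum without actually moving. The sketch below iterates that standard map; it is a generic illustration with arbitrary parameter values, not the exact billiard of the thesis.

```python
import numpy as np

# Simplified Fermi-Ulam map: u = particle speed in units of the wall-velocity
# amplitude, phi = wall oscillation phase at impact, M = box length parameter
# controlling the flight time between collisions.
M = 10.0
rng = np.random.default_rng(1)

def iterate(u0, phi0, n_steps):
    u, phi = u0, phi0
    traj = np.empty(n_steps)
    for n in range(n_steps):
        u = abs(u + np.sin(phi))                       # kick by the wall
        phi = (phi + 2.0 * np.pi * M / max(u, 1e-12)) % (2.0 * np.pi)
        traj[n] = u                                    # record the speed
    return traj

traj = iterate(u0=5.0, phi0=rng.uniform(0, 2 * np.pi), n_steps=2000)
```

Each collision changes the speed by at most one wall-velocity unit, so the energy growth per step is bounded; plotting (phi, u) pairs reproduces the familiar mixed phase space of the map.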
Energy Technology Data Exchange (ETDEWEB)
Gorincour, G.; Paris, M.; Aschero, A.; Bourliere, B.; Devred, P.; Petit, P. [Hopital Timone-Enfants, Service de Radiologie Pediatrique, 13 - Marseille (France); Barrau, K.; Auquier, P. [Faculte de Medecine de Marseille, Service de Sante Publique, 13 - Marseille (France); Waultier, S.; Bourrelly, M.; Mundler, O. [Hopital Timone-Enfants, Service de Medecine Nucleaire, 13 - Marseille (France); Viehweger, E.; Jouve, J.L.; Bollini, G. [Hopital Timone-Enfants, Service de Chirurgie Orthopedique, 13 - Marseille (France)
2007-03-15
Objective. To compare the radiation dose delivered by conventional radiography and by digital radiography with an image intensifier during a scoliosis workup. Patients and Methods. Our prospective randomized study included 105 patients, all of whom were characterized by socio-demographic parameters as well as by criteria evaluating the quality of the full frontal spinal x-ray in PA incidence. The entry dose at the scapula and the exit doses in the inter-orbital, thyroid, mammary, and hypo-gastric projections were measured by thermoluminescent dosimeters. Results. The results of 71 girls and 28 boys, with a mean age of 13.8 years and a mean weight of 47 kg, were analyzed. At equal image quality, the entry dose was not significantly different between the two techniques; the mean exit dose reduction was 64% with digital acquisition. This reduction involved the inter-orbital (162%), mammary (43%), and thyroid (309%) regions. However, this system delivers a higher dose in the hypo-gastric region (34%). Conclusion. The dosimetric evaluation of the different imaging techniques used to explore the entire spine should be part of the quality standard radiologists use to document their work and their choices. (authors)
Egreteau, Thomas
This document presents the objectives, the approach followed, and the results obtained in a project aiming to identify the wavenumber on different types of plates representative of those used in aeronautics. The first part defines the stakes of the subject, related to the aeronautical field, and the context in which the project takes place. A state-of-the-art review of existing wavenumber measurement methods is then presented. The project itself is then defined, with the main objective of measuring the wavenumber of a plate from vibration measurements. Two methods are used for this: the phase-difference method, which allows the measurement on simple or composite plates, and the wavenumber-domain method, which allows the same measurement on more complex plates (thick or stiffened plates, for example). For each method, an equivalent approach is followed. First, the chosen method is developed and implemented in a basic form; then a parametric study is carried out to determine the optimal operating conditions of the two methods; finally, an experimental validation is performed. The two methods are also compared in order to determine under which conditions each of them should be used. The phase-difference method gives good results for simple and composite plates, provided the whole frequency range can be sufficiently excited. The wavenumber-domain method, for its part, can measure the wavenumber on all types of plates and in all directions, provided the measurement area is large enough. Keywords: wavenumber, plates, phase difference, wavenumber domain, spatial Fourier transform, research project definition.
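The phase-difference method mentioned above can be illustrated on synthetic data: two sensor responses at known positions, with the wavenumber recovered from the wrapped phase difference. The sensor positions and the value of k_true below are arbitrary assumptions for the illustration.

```python
import numpy as np

# Synthetic travelling wave of known wavenumber k_true, "measured" at two
# points; the phase-difference method should recover k_true exactly.
k_true = 12.0                      # rad/m, assumed value
x1, x2 = 0.10, 0.15                # sensor positions [m]; spacing must stay
                                   # below half a wavelength to avoid aliasing
H1 = np.exp(-1j * k_true * x1)     # complex responses at the two sensors
H2 = np.exp(-1j * k_true * x2)

# phase difference between the two measurement points, wrapped to (-pi, pi]
dphi = np.angle(H2 * np.conj(H1))  # equals -k_true * (x2 - x1) here
k_est = -dphi / (x2 - x1)
```

The half-wavelength constraint on the sensor spacing is the practical limit of the method and one reason the wavenumber-domain approach is preferred for complex plates.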
Energy Technology Data Exchange (ETDEWEB)
Petelet, M
2008-07-01
The current approach of most welding modellers is to content themselves with the available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range. This way of proceeding neglects the influence of the uncertainty of the input data on the result given by the computer code. In this case, how can the credibility of the prediction be assessed? This thesis is a step towards implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most influential in a numerical welding simulation, and over which temperature range. Using this methodology required some developments to sample and explore the input space covering the welding of different steels. Finally, the input data have been divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation and has led to reducing the input space to only the important variables. The sensitivity analysis has provided answers to what is probably one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)
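Global sensitivity analysis of the kind described above usually means estimating Sobol indices. The sketch below computes first-order indices with a standard pick-freeze Monte Carlo estimator on a toy two-input model whose exact indices are known; it is a generic illustration, not the thesis's welding-specific implementation.

```python
import numpy as np

# Pick-freeze estimate of first-order Sobol indices for the toy model
# y = 4*x1 + x2 with x_i ~ U(0,1); exact values are S1 = 16/17, S2 = 1/17.
rng = np.random.default_rng(0)

def model(x1, x2):
    return 4.0 * x1 + x2

n = 200_000
A = rng.uniform(size=(n, 2))
B = rng.uniform(size=(n, 2))
yA = model(A[:, 0], A[:, 1])
var = yA.var()

S = []
for i in range(2):
    C = B.copy()
    C[:, i] = A[:, i]                 # freeze variable i from sample A
    yC = model(C[:, 0], C[:, 1])
    cov = np.mean(yA * yC) - yA.mean() * yC.mean()   # Cov(yA, yC)
    S.append(cov / var)               # first-order index S_i
```

Inputs with an index near zero are exactly those that, in the thesis's terms, "can simply be extrapolated or taken from a similar material".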
Energy Technology Data Exchange (ETDEWEB)
Kluth, G
2008-12-15
The goal is to model, mathematically and numerically, dynamic phenomena in solids undergoing finite plasticity. We propose a model, which we call hyper-elasto-plastic, based on hyper-elastic systems of conservation laws and on an equation of state constructed so as to satisfy the Von Mises plastic yield criterion. This model gives exact (analytic) solutions, with shock splitting, for flyer-plate experiments. The mathematical analysis of this model is carried out (hyperbolicity, characteristic fields, involutions and entropy). In the numerical part, we give 1D and 2D Lagrangian schemes which satisfy an entropy criterion. Moreover, thanks to a special discretization of the equations for the deformation gradient, we satisfy some discrete involutions. In this work, the degeneracy of the solid model into hydrodynamic models is studied at the continuous level, and achieved at the numerical one. On different problems, we show the validity of our model and our numerical schemes. (author)
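The Von Mises yield criterion invoked above is easy to state in code: compare an equivalent stress, built from the deviatoric part of the stress tensor, against a yield strength. This is the textbook criterion, not the thesis's equation of state; the yield strength value is illustrative.

```python
import numpy as np

sigma_y = 250.0e6   # illustrative yield strength [Pa]

def von_mises(sig):
    """Von Mises equivalent stress sqrt(3/2 * dev:dev) of a 3x3 stress tensor."""
    dev = sig - np.trace(sig) / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.sum(dev * dev))

# uniaxial tension at 200 MPa: the equivalent stress equals the axial stress
sig = np.diag([200.0e6, 0.0, 0.0])
eq = von_mises(sig)
plastic = eq >= sigma_y   # False here: the state is still elastic
```

In uniaxial tension the criterion reduces to |sigma| >= sigma_y, which is a convenient sanity check for any implementation.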
Energy Technology Data Exchange (ETDEWEB)
Genette, P.; Martelet, B. [Electricite de France (EDF), 69 - Villeurbanne (France). Div. Mecanique des Structures du Septen; Debost-Eymart, I. [Electricite de France (EDF), 92 - Clamart (France). Dept. Mecanique et Modeles Numeriques
1998-10-01
The ASCOUF software has been developed by EDF to facilitate the quick analysis of defects contained in pipes or elbows of the primary loop. This preprocessing tool for Code Aster, EDF's structural analysis finite element code, has been used to carry out, with increased productivity, a series of numerical studies proving the mechanical strength of these components. Its validation, taking into account the feedback from previous studies, gives confidence in the results. ASCOUF has since been extended to address the problem of wall thinning in pipes of the secondary loop. (authors)
Energy Technology Data Exchange (ETDEWEB)
Petelet, M
2007-10-15
The current approach of most welding modellers is to content themselves with the available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range! This way of proceeding neglects the influence of the uncertainty of the input data on the result given by the computer code. In this case, how can the credibility of the prediction be assessed? This thesis is a step towards implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most influential in a numerical welding simulation, and over which temperature range. Using this methodology required some developments to sample and explore the input space covering the welding of different steels. Finally, the input data have been divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation and has led to reducing the input space to only the important variables. The sensitivity analysis has provided answers to what is probably one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)
Energy Technology Data Exchange (ETDEWEB)
Commowick, O
2007-02-15
The main objective of this thesis is to provide radio-oncology specialists with automatic tools for delineating the organs at risk of a patient undergoing radiotherapy treatment of cerebral or head and neck tumors. To achieve this goal, we use an anatomical atlas, i.e. a representative anatomy associated with a clinical image representing it. Registration of this atlas allows us to segment the patient's structures automatically and to accelerate this process. Our contributions to this method are presented along three axes. First, we want a registration method that is as independent as possible of the setting of its parameters. This setting, done by the clinician, must indeed be minimal while guaranteeing a robust result. We therefore propose registration methods allowing better control of the obtained transformation, using techniques for rejecting inadequate matches or locally affine transformations. The second axis is dedicated to taking into account structures associated with the presence of the tumor. These structures, not present in the atlas, indeed lead to local errors in the atlas-based segmentation. We therefore propose methods to delineate these structures and take them into account in the registration. Finally, we present the construction of an anatomical atlas of the head and neck region and its evaluation on a database of patients. We show in this part the feasibility of using an atlas for this region, as well as a simple method to evaluate the registration methods used to build an atlas. All this research work has been implemented in a commercial software package (Imago from DOSIsoft), allowing us to validate our results in clinical conditions. (author)
Energy Technology Data Exchange (ETDEWEB)
Combes, J.F. [Electricite de France (EDF), 92 - Clamart (France). Direction des Etudes et Recherches; Bidot, T. [Simulog, (France)
1997-01-01
In the Ariane rocket propulsion system, ball bearings operate under very severe conditions; in order to evaluate the heat transfers enabling their cooling, the flow inside the bearings themselves has to be determined. A numerical study has been carried out by the Simulog company using the Turbomachinery release of the N3S code developed by Electricite de France. After a brief presentation of the N3S fluid dynamics code and its turbomachinery version, its application to the calculation of the flow within a ball bearing is presented: as shown in a preliminary study, the 3D flow can be split into a succession of 2D flows on parallel slices; examples of laminar and turbulent flow calculations on a cross-section are therefore given. The comparison of the calculated flow structures with experimental and analytical results is discussed.
Energy Technology Data Exchange (ETDEWEB)
Duclous, R
2009-11-15
This research thesis, at the interface between numerical analysis, plasma physics and applied mathematics, deals with the kinetic modelling and numerical simulation of electron energy transport and deposition in laser-produced plasmas, with a view to the processes assembling the fuel to the temperature and density conditions necessary to ignite fusion reactions. After a brief review of the processes at play in the collisional kinetic theory of plasmas, with a focus on basic models and on methods to implement, couple and validate them, the author focuses on the collective aspects related to the free-streaming electron transport equation, in the non-relativistic limit as well as in the relativistic regime. He discusses the numerical development and analysis of the scheme for the Vlasov-Maxwell system, and the selection of a validation procedure and numerical tests. He then investigates more specific aspects of collective transport: multi-species transport subject to phase-space discontinuities. Addressing the multi-scale physics of electron transport with collision source terms, he validates the accuracy of a fast Monte Carlo multi-grid solver for the Fokker-Planck-Landau electron-electron collision operator. He reports realistic simulations of kinetic electron transport in the frame of the shock-ignition scheme, and the development and validation of a reduced angular model of electron transport. He finally explores the relative importance of the processes involving electron-electron collisions at high energy by means of a multi-scale reduced model with relativistic Boltzmann terms.
Energy Technology Data Exchange (ETDEWEB)
Sboui, A
2007-01-15
The aim of this thesis is to model and develop numerical tools suited to the study of underground water flow and the propagation of pollutants in a porous medium. The main motivation of this work is a benchmark from GDR Momas and ANDRA to simulate the 3-D propagation of radionuclides around a deep disposal of nuclear waste. Firstly, we construct a new mixed finite element method suitable for general hexahedral meshes. Convergence of the method is proved and demonstrated in numerical experiments. Secondly, we present a time discretization method for the advection equation which allows the use of different time steps in different sub-domains, in order to take strong heterogeneities into account. Finally, a numerical method for the calculation of contaminant transport is proposed. The techniques above were implemented in a 3-D code, and simulation results are shown for the 3-D far-field benchmark from GDR Momas and ANDRA. (author)
Energy Technology Data Exchange (ETDEWEB)
Urbin, Gerald [Institut National Polytechnique, 38 - Grenoble (France)
1998-02-02
This study highlights the potential of large-eddy simulation for describing and understanding turbulent flows in complex geometries. In particular, it focuses on the flows of a free jet, of confined jets, and of the multiple jets of a high-solidity grid. Spatial simulations of the near region of a circular free jet at high Reynolds number were performed. In spite of an evident sensitivity to upstream conditions, good agreement between our statistical predictions and different experimental measurements was obtained. The multiple coherent vortical structures involved in the transition to turbulence of the jet were found. At the same time, helical or annular axisymmetric vortices were observed. An original vortical arrangement was also evidenced, resulting from the alternating inclination and local pairing of these rings. It can be forced through an ad hoc excitation, which then drastically modifies the development of the jet. When an axisymmetric excitation is imposed after the formation of annular structures, pairs of counter-rotating longitudinal vortices appear and generate lateral jets. Their nature and their presence in the case of a helical excitation are discussed. An efficient method for controlling their number is developed. Then the very-low-frequency periodic phenomenon of backward transition to turbulence, which develops in the confined jet and in grid multiple jets (a phenomenon generic to numerous flows), is studied. It was found to depend not only on the characteristics of the recirculation (pre-transition) zones but also on the upstream flow (post-transition stagnation zone, pressure effect). Large-scale transverse motions of the fluid were found, beginning at the grid. An interpretation of this phenomenon is suggested. 193 refs., 109 figs.
Energy Technology Data Exchange (ETDEWEB)
Sentis, R. [CEA Bruyeres-le-Chatel, Dept. de Conception et Simulation des Armes, 91 (France); Golse, F. [CEA Saclay, Dept. de Modelisation des Systemes et Structures, 91 - Gif-sur-Yvette (France); Lafitte, O. [Paris-7 Univ., 75 (France)]|[Ecole Normale Superieure, 75 - Paris (France)
2001-07-01
For the simulation of laser absorption in a plasma hydrodynamics code, one generally uses a ray-tracing method. We show here where the main difficulties lie in the numerical solution of the eikonal equation by an alternative, so-called Eulerian, method. We also indicate which approaches are considered to overcome these difficulties. One of the main assets of the Eulerian method is that it gives a more regular estimate of the energy absorbed in each elementary volume than the ray-tracing method does.
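An Eulerian treatment of the eikonal equation can be sketched with a fast-sweeping Godunov solver. The version below solves |grad(phi)| = 1 (constant refractive index, point source), whereas the laser-absorption problem involves a variable index; the grid size and source location are arbitrary choices for the illustration.

```python
import numpy as np

# Fast sweeping for |grad(phi)| = 1 on a uniform grid with a point source:
# a minimal Eulerian alternative to ray tracing.
n, h = 101, 0.01
INF = 1e10
phi = np.full((n, n), INF)
phi[50, 50] = 0.0                      # point source at the grid centre

def update(i, j):
    """Godunov upwind update of one cell from its smaller neighbours."""
    a = min(phi[i - 1, j] if i > 0 else INF, phi[i + 1, j] if i < n - 1 else INF)
    b = min(phi[i, j - 1] if j > 0 else INF, phi[i, j + 1] if j < n - 1 else INF)
    if abs(a - b) >= h:                # one-sided update
        cand = min(a, b) + h
    else:                              # two-sided (quadratic) update
        cand = 0.5 * (a + b + np.sqrt(2.0 * h * h - (a - b) ** 2))
    phi[i, j] = min(phi[i, j], cand)

for _ in range(3):                     # a few passes over the 4 sweep orderings
    for rows in (range(n), range(n - 1, -1, -1)):
        for cols in (range(n), range(n - 1, -1, -1)):
            for i in rows:
                for j in cols:
                    update(i, j)
```

The converged phi approximates the distance to the source: it is exact along the grid axes and carries a small first-order error along diagonals, the kind of regular volumetric field that makes energy deposition per cell easier to estimate than with discrete rays.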
Energy Technology Data Exchange (ETDEWEB)
Dimitrova, M.; Ibrahim, H. [TechnoCentre eolien Gaspesie-les Iles, Gaspe, PQ (Canada); Fortin, G.; Perron, J. [Quebec Univ., Chicoutimi, PQ (Canada); Ilinca, A. [Quebec Univ., Rimouski, PQ (Canada)
2010-07-01
This poster reported on a study that reproduced the frost conditions measured on wind turbines in Murdochville, Quebec. Frost accumulation was measured on the NACA 63-415 blade profile of a Vestas V80 1.8 MW wind turbine. The loss of mass was measured, and the form of the deposited frost was examined along with lift and drag. Several tests were conducted with various frost precipitation rates. Meteorological data such as wind velocity, wind direction, air temperature, relative humidity, barometric pressure and solar radiation were recorded, along with icing events and their duration. The model was used to determine at which point the drag would cause the turbine to stop turning. refs., tabs., figs.
Directory of Open Access Journals (Sweden)
A BENDIB
2001-12-01
Full Text Available The Fokker-Planck equation, which describes the electrons of a fully ionized, unmagnetized plasma, has been solved numerically. Electron-ion and electron-electron collisions have been taken into account. The electron distribution function, expanded on the basis of Legendre polynomials, has been computed up to the second anisotropy. The first anisotropy was computed by reducing the problem to a fourth-order differential equation that can be solved with standard numerical methods. The transport coefficients induced by this first anisotropy have been deduced; they correspond exactly to those established in the literature by different, markedly more complex, numerical methods. The second anisotropy was also computed, by reducing the problem to a second-order differential equation solved with an iterative method. Very accurate results are obtained from the fifth iteration onward. The electron viscosity has been deduced, and a very accurate numerical fit of this transport coefficient as a function of the atomic number has also been proposed.
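The Legendre expansion used above, f(mu) = sum_l f_l P_l(mu) with f_l = (2l+1)/2 * integral of f*P_l over [-1, 1], can be checked on a toy angular distribution whose coefficients are known. The distribution below, with an isotropic part plus first and second anisotropies, is an arbitrary example, not the plasma distribution of the paper.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Toy angular distribution: f = 1*P0 + 0.5*P1 + 0.2*P2, written explicitly
def f(mu):
    return 1.0 + 0.5 * mu + 0.2 * (1.5 * mu**2 - 0.5)

nodes, weights = L.leggauss(16)        # Gauss-Legendre quadrature on [-1, 1]
coeffs = []
for l in range(4):
    basis = np.zeros(l + 1)
    basis[l] = 1.0                     # coefficient vector of P_l
    integral = np.sum(weights * f(nodes) * L.legval(nodes, basis))
    coeffs.append((2 * l + 1) / 2.0 * integral)   # f_l
```

The recovered coefficients are [1.0, 0.5, 0.2, 0.0]: the zeroth moment is the isotropic part, and truncating after the second anisotropy, as in the paper, is exact for this toy case.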
Energy Technology Data Exchange (ETDEWEB)
Anon.
2006-03-15
Small in size, with only five employees, the IS Services agency of Wittenheim (Alsace) is nevertheless highly specialized in nondestructive testing, and particularly in the new technology of digital radiography. (O.M.)
Gemme, Frederic
The aim of the present research project is to increase fundamental knowledge of the process by gaining a better understanding of the physical phenomena involved in friction stir welding (FSW). Such knowledge is required to improve the process in the context of industrial applications. To this end, the first part of the project is dedicated to a theoretical study of the process, while the microstructure and the mechanical properties of welded joints obtained under different welding conditions are measured and analyzed in the second part. The combination of the rotating and translating movements of the tool induces plastic deformation and heat generation in the welded material. The thermomechanical history of the material is responsible for the metallurgical phenomena occurring during FSW, such as recrystallization and precipitate dissolution and coarsening. Process modelling is used to reproduce this thermomechanical history in order to predict the influence of welding on the material microstructure. It is helpful for studying heat generation and heat conduction mechanisms and for understanding how joint properties are related to them. In the current work, a finite element numerical model based on solid mechanics has been developed to compute the thermomechanical history of the welded material. The computation results were compared to reference experimental data in order to validate the model and to calibrate unknown physical parameters. The model was used to study the effect of the friction coefficient on the thermomechanical history. Results showed that the contact conditions at the workpiece/tool interface have a strong effect on the relative amounts of heat generated by friction and by plastic deformation. The comparison with the experimental torque applied by the tool at different rotational speeds showed that the friction coefficient decreases when the rotational speed increases.
Consequently, heat generation is far greater near the material/tool interface and the material deformation is shallower, increasing the probability of lack of penetration. The variation of thermomechanical conditions with the rotational speed is responsible for the variation of the nugget shape, as recrystallization conditions are not reached in the same volume of material. The second part of the research project was dedicated to a characterization of the microstructure and mechanical properties of the welded joints. Sound joints were obtained by using a manufacturing procedure involving process parameter optimization and quality control of the joint integrity. Five different combinations of rotational and advancing speeds were studied. Microstructure observations have shown that the rotational speed has an effect on recrystallization conditions because of the variation of the contact conditions at the material/tool interface. On the other hand, the advancing speed has a strong effect on the precipitation state in the heat affected zone (HAZ). The heat input increases when the advancing speed decreases; the material softening in the HAZ is then more pronounced. Mechanical testing of the welded joints showed that the fatigue resistance increases when the rotational speed increases and the advancing speed decreases. The fatigue resistance of FSW joints mainly depends on the ratio of the advancing speed to the rotational speed, called the welding pitch k. When the welding pitch is high (k ≥ 0.66 mm/rev), the fatigue resistance depends on crack initiation at the root of circular grooves left by the tool on the weld surface. The size of these grooves is directly related to the welding pitch. When the welding pitch is low (k ≤ 0.2 mm/rev), the heat input is high and the fatigue resistance is limited by the HAZ softening. The fatigue resistance is optimized when k stands in the 0.25-0.30 mm/rev range. 
Outside that range, the presence of small lateral lips is critical. The results of the characterization part of the project showed that the effects of the applied vertical force on the formation of lateral lips warrant further investigation. Eliminating the lateral lip, which could be achieved with a more precise adjustment of the vertical force, could improve fatigue resistance. Eliminating both the lateral lips and the circular grooves left by the tool, for instance by developing an appropriate surfacing technique, could improve fatigue resistance without reducing the advancing speed. (Abstract shortened by UMI.)
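As a sketch (not part of the thesis itself), the welding pitch k discussed above is simply the tool advance per revolution; the thresholds in the comments restate the regimes reported in the abstract:

```python
def welding_pitch(advance_mm_per_min: float, rpm: float) -> float:
    """Welding pitch k in mm/rev: tool advance per revolution."""
    return advance_mm_per_min / rpm

# Fatigue regimes reported above (thresholds in mm/rev):
#   k >= 0.66        limited by crack initiation at surface grooves
#   k <= 0.20        limited by HAZ softening (high heat input)
#   0.25 <= k <= 0.30  reported optimum
k = welding_pitch(advance_mm_per_min=300.0, rpm=1000.0)  # k = 0.3 mm/rev
```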
Energy Technology Data Exchange (ETDEWEB)
Pacull, J.
2011-02-15
In a pressurized water reactor (PWR), the fuel rod is made of uranium dioxide (UO{sub 2}) pellets stacked in a metallic cladding. A multi-scale, multi-physics approach is needed for the simulation of fuel behavior under irradiation. The main phenomena to take into account are the thermomechanical behavior of the fuel rod and the physico-chemical behavior of the fission products. In recent years, one of the scientific issues in improving the simulation has been accounting for the multi-physics coupling problem at the microscopic scale. The objective of this PhD study is to contribute to this multi-scale approach. The present work concerns the micro-mechanical behavior of a polycrystalline aggregate of UO{sub 2}. Mean-field and full-field approaches are considered, using respectively a self-consistent homogenization technique and a periodic finite element model based on a 3D Voronoi pattern. Fuel visco-plasticity is introduced in the model at the scale of a single grain by taking into account the specific dislocation slip systems of UO{sub 2}. A cohesive zone model has also been developed and implemented to simulate grain boundary sliding and intergranular crack opening. The effective homogeneous behaviour of a Representative Volume Element (RVE) is fitted to experimental data from mechanical tests on a single pellet. The local behavior is also analyzed in order to evaluate the capacity of the model to assess the micro-mechanical state; in particular, intra- and inter-granular stress gradients are discussed. A first validation of the local behavior assessment is proposed through the simulation of the intergranular crack opening measured in a compressive creep test of a single fuel pellet. Concerning the impact of the microstructure on fuel behavior under irradiation, an RVE simulation with a transient loading representative of a fuel rod during a power ramp test is carried out. 
The impact of local stress and strain heterogeneities on the multi-physics simulation is discussed. (author)
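A full-field polycrystal model of the kind described above starts from a periodic Voronoi tessellation of the grain structure. A minimal sketch (assuming SciPy, and not the author's actual meshing tool) generates periodic seeds and their tessellation:

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(seed=0)
n_grains = 50
seeds = rng.random((n_grains, 3))  # grain centers in the unit cell

# Periodicity: replicate the seeds into the 26 neighbouring image cells,
# so the cells of the central copy tile the unit cube periodically.
shifts = np.array([(i, j, k) for i in (-1, 0, 1)
                   for j in (-1, 0, 1) for k in (-1, 0, 1)])
all_seeds = (seeds[None, :, :] + shifts[:, None, :]).reshape(-1, 3)

vor = Voronoi(all_seeds)  # polyhedral grain aggregate
```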
Energy Technology Data Exchange (ETDEWEB)
Dellacherie, St
2004-07-01
This work deals with the derivation of a diphasic low Mach number model obtained through a Mach number asymptotic expansion applied to the compressible diphasic Navier-Stokes system, an expansion which filters out the acoustic waves. This approach is inspired by the work of Andrew Majda giving the equations of low Mach number combustion for thin flames and perfect gases. When the equations of state verify certain thermodynamic hypotheses, we show that the low Mach number diphasic system accurately predicts the dilatation or compression of a bubble and has equilibrium convergence properties. Then, we propose an entropic and convergent Lagrangian scheme in one-dimensional geometry when the fluids are perfect gases, and we propose a first approach in Eulerian variables where the interface between the two fluids is captured with a level set technique. (author)
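Schematically, a Majda-type low Mach expansion splits the pressure into a spatially uniform thermodynamic part and an O(M²) dynamic part; a sketch of the leading-order structure (notation ours, not the author's):

```latex
% Low Mach number expansion of the pressure (M = Mach number):
p(x,t) = P(t) + M^2\,\pi(x,t) + o(M^2).
% At leading order, only the dynamic part \pi enters the momentum
% equation, while the thermodynamic part P(t) enters the energy
% equation; since P carries no spatial dependence, acoustic waves
% are filtered out of the limit system.
```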
Energy Technology Data Exchange (ETDEWEB)
Lorrette, Ch
2007-04-15
This work is an original contribution to the study of the thermal behaviour of thermo-structural composite materials. It aims to develop a methodology, with a new experimental device for thermal characterization adapted to this type of material, and to model conductive heat transfer within these heterogeneous media. The first part deals with the prediction of the effective thermal conductivity of stratified composite materials in the three space directions. For that purpose, a multi-scale model using a rigorous morphological analysis of the structure and the elementary properties is proposed and implemented. The second part deals with thermal characterization at high temperature. It shows how to estimate the thermal effusivity and the thermal conductivity simultaneously. The present method is based on observing the heating of a plane sample subjected to a continuous excitation generated by the Joule effect. Heat transfer is modelled with the quadrupole formalism; the temperature is measured on both sides of the sample. The development of both resistive probes for excitation and linear probes for temperature measurement enables the thermal properties to be measured up to 1000 C. Finally, some experimental and numerical application examples review the obtained results. (author)
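The quadrupole formalism mentioned above represents a homogeneous slab, in the Laplace domain, by a 2x2 transfer matrix linking (temperature, flux) on its two faces. A minimal numeric sketch (illustrative values, not the thesis data):

```python
import numpy as np

def slab_quadrupole(p, e, lam, a, S=1.0):
    """Laplace-domain quadrupole (transfer matrix) of a homogeneous slab.
    p: Laplace variable [1/s], e: thickness [m], lam: conductivity [W/m/K],
    a: diffusivity [m^2/s], S: cross-section area [m^2]."""
    q = np.sqrt(p / a)
    A = D = np.cosh(q * e)
    B = np.sinh(q * e) / (lam * q * S)
    C = lam * q * S * np.sinh(q * e)
    return np.array([[A, B], [C, D]])

M = slab_quadrupole(p=1.0, e=0.01, lam=10.0, a=1e-5)
det = float(np.linalg.det(M))  # reciprocity: det = cosh^2 - sinh^2 = 1
```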
1990-02-02
[Garbled scan fragment of a table of contents, "Simulation numerique en physique et chimie / Computational Physics and Chemistry", including an entry on localized basis functions and other computational improvements; the surrounding text mentions the numerical simulation of complex problems in hypersonics, microelectronics, quantum chemistry and combustion, and cites the Courant Institute, New York University, 1986.]
Energy Technology Data Exchange (ETDEWEB)
Ahmad, M
2007-09-15
Maldistribution of liquid-vapour two-phase flows causes a significant decrease in the thermal and hydraulic performance of evaporators in thermodynamic vapour compression cycles. A first experimental installation was used to visualize the two-phase flow evolution between the expansion valve and the evaporator inlet. A second experimental set-up simulating a compact heat exchanger was designed to identify the functional and geometrical parameters creating the best distribution of the two phases in the different channels. The relation between the geometrical and functional parameters, the flow pattern inside the header, and the two-phase distribution has been analysed and established. Numerical simulations of a stratified flow and a stratified jet flow have been carried out using two CFD codes, FLUENT and NEPTUNE. In the case of a fragmented jet configuration, a global definition of the interfacial area concentration for separated-phase and dispersed-phase flows has been established, and a model calculating the fragmented mass fraction has been developed. (author)
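For the dispersed part of such a flow, a common textbook definition of the interfacial area concentration (a generic relation, not necessarily the global definition established in the thesis) assumes spherical inclusions:

```python
def interfacial_area_concentration(alpha: float, d32: float) -> float:
    """a_i = 6*alpha/d32 [1/m]: interfacial area per unit mixture volume
    for a dispersed phase of volume fraction alpha and Sauter mean
    diameter d32 [m], assuming spherical inclusions."""
    return 6.0 * alpha / d32

a_i = interfacial_area_concentration(alpha=0.1, d32=1e-3)  # 600 m^2/m^3
```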
Energy Technology Data Exchange (ETDEWEB)
NONE
2001-10-01
The SFEN (French Society on Nuclear Energy) organized, on 18 October 2001 in Paris, a technical day on numerical and experimental simulation applied to reactor physics. Nine aspects were discussed, giving a state of the art in the domain: the French nuclear fleet; future technology; controlled thermonuclear fusion; the new organizations and their implications for research and development programs; Framatome-ANP markets and industrial code packages; reactor core simulation at high temperature; software architecture; SALOME; DESCARTES. (A.L.B.)
1992-07-01
[Fragment of a scanned record: point of contact Elisa Sugranez, Instituto Nacional de Tecnica Aeroespacial (INTA), Madrid, Spain; the data set includes atmospheric molecular extinction coefficients, total extinction ratios, ozone and NO2 mixing ratios, and corresponding error arrays as a function of altitude.]
Energy Technology Data Exchange (ETDEWEB)
Enaux, C
2007-11-15
The simulation of indirect laser implosion requires an accurate knowledge of the inter-penetration of the laser target materials once turned into plasma. This work is devoted to the study of a multi-velocity multi-fluid model recently proposed by Scannapieco and Cheng (SC) to describe the inter-penetration of miscible fluids. In this document, we begin by presenting the SC model in the context of miscible fluid flow modelling. Afterwards, the mathematical analysis of the model is carried out (study of the hyperbolicity, existence of a strictly convex mathematical entropy, asymptotic analysis and diffusion limit); the conclusion is that the problem is well-posed. Then, we focus on the numerical resolution of systems of conservation laws with a relaxation source term, since the SC model belongs to this class. The main difficulty of this task is to capture, on a coarse grid, the asymptotic behaviour of the system when the source term is stiff. The main contribution of this work is a new technique for constructing a Lagrangian numerical flux that takes the presence of the source term into account. This technique is applied first to the model problem of a one-dimensional Euler system with friction, and then to the multi-fluid SC model. In both cases, we prove that the new scheme is asymptotic-preserving and entropic under a CFL-like condition. The two-dimensional extension of the scheme is done using a standard alternating-directions method. Some numerical results highlight the contribution of the new flux, compared with a standard Lagrange-plus-remap scheme where the source term is processed using operator splitting. (author)
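The difficulty with a stiff relaxation source term can be seen on the scalar model u' = -u/eps: an explicit step is unstable on a coarse grid (dt >> eps), while an implicit step captures the relaxed limit. A toy sketch (ours, far simpler than the Lagrangian flux construction of the thesis):

```python
def explicit_step(u, dt, eps):
    """Forward Euler for u' = -u/eps: unstable when dt > 2*eps."""
    return u + dt * (-u / eps)

def implicit_step(u, dt, eps):
    """Backward Euler: unconditionally stable, and reaches the relaxed
    limit u -> 0 even when dt >> eps (asymptotic-preserving)."""
    return u / (1.0 + dt / eps)

eps, dt, u0 = 1e-4, 0.1, 1.0
u_exp = explicit_step(u0, dt, eps)   # amplification 1 - dt/eps = -999
u_imp = implicit_step(u0, dt, eps)   # ~ 1e-3, close to the stiff limit
```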
Energy Technology Data Exchange (ETDEWEB)
Vermorel, O.
2003-11-15
This work is devoted to the numerical and theoretical study of turbulence modulation by particles, using direct numerical simulation for the continuous phase coupled with a Lagrangian prediction of the trajectories of discrete particles. The configuration corresponds to a slab of particles injected at high velocity into an isotropic decaying turbulence. The motion of a particle is supposed to be governed only by the drag force. The particle mass loading is large, so that momentum exchange between particles and fluid results in a significant modulation of the turbulence; collisions are neglected. The momentum transfer between particles and gas causes a strong acceleration of the gas in the slab. In the periphery of the slab, the turbulence is enhanced due to production by the mean gas velocity gradients. The analysis of the interphase transfer terms in the gas turbulent kinetic energy equation shows that the direct effect of the particles is to damp the turbulence in the core of the slab but to enhance it in the periphery. This latter effect is due to a strong correlation between the particle distribution and the instantaneous gas velocity. Another issue concerns the k-{epsilon} model and the validity of its closure assumptions in two-phase flows. A new eddy viscosity expression, a function of particle parameters, is used to model the Reynolds stress tensor. The modelling of the gas turbulent dissipation rate is questioned. A two-phase Langevin equation is also tested to model the drift velocity and fluid-particle velocity covariance equations. (author)
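With drag as the only force, each particle obeys dv/dt = (u_f - v)/tau_p; over a time step with the fluid velocity frozen, this equation has an exact exponential update. A minimal sketch (ours, not the solver used in the thesis):

```python
import math

def advance_particle_velocity(v, u_fluid, tau_p, dt):
    """Exact update of dv/dt = (u_fluid - v)/tau_p, with u_fluid held
    constant over the step; stable for any dt."""
    return u_fluid + (v - u_fluid) * math.exp(-dt / tau_p)

# A particle injected at high velocity into slower gas relaxes toward
# the gas velocity over a few response times tau_p:
v = advance_particle_velocity(v=10.0, u_fluid=1.0, tau_p=1e-3, dt=5e-3)
```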
Energy Technology Data Exchange (ETDEWEB)
Bastin, G.
2004-09-15
This study concerns the use of numerical methods for solving the Reynolds-averaged Navier-Stokes equations, adapted to the simulation of the cooling of the trailing edge of a stator in a high-pressure turbine. These methods, based on the elsA solver developed at ONERA, use a four-step Runge-Kutta time discretization scheme and a Jameson centered space discretization scheme. The scheme is applied through a finite volume approach on control volumes centered on the cells of a multi-block structured mesh. Turbulence is simulated either through the algebraic Michel model, the one-transport-equation Spalart-Allmaras model, the two-transport-equation k-l, k-{omega} and k-{epsilon} models, or through an ASM model. A simulation of the flow in a two-dimensional stator, without cooling, is first carried out. The cooling, which is realized with trailing edge slots, is then simulated on a two-dimensional stator. Because the slot is represented by meshes overlapping the mesh of the smooth blade, the Chimera method, which makes computations with overlapping meshes possible, is chosen. The comparison with experimental data on these first two computations validated this strategy for representing such slots. The three-dimensional simulation of a single stator, taking the cooling into account, is then realized; it showed the complex three-dimensional features of the main flow, with focus on the influence of the cooling system. Finally, two steady computations, without and with cooling, and an unsteady computation without cooling, are carried out on a high-pressure turbine stage and compared with the experimental data obtained in the framework of the European Brite-Euram program. These results make it possible to determine the effect of the cooling on the flow in a turbine stage. (authors)
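The four-step Runge-Kutta scheme used with Jameson-type solvers is often written with stage coefficients 1/4, 1/3, 1/2, 1, each stage restarting from the initial state. A sketch on the linear model problem du/dt = -u (the coefficients are the classical ones, assumed rather than taken from the thesis):

```python
ALPHAS = (0.25, 1.0 / 3.0, 0.5, 1.0)  # classical four-stage coefficients

def rk4_stages(u0, h, residual):
    """u_k = u_0 + alpha_k * h * R(u_{k-1}); low-storage, and 4th-order
    accurate on linear problems."""
    u = u0
    for alpha in ALPHAS:
        u = u0 + alpha * h * residual(u)
    return u

u1 = rk4_stages(1.0, 0.1, lambda u: -u)  # ~ exp(-0.1) to 4th order
```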
Energy Technology Data Exchange (ETDEWEB)
Boughanem, H.
1998-03-24
The assumption of gradient transport for the mean reaction progress variable has a limited domain of validity in premixed turbulent combustion. The existence of two turbulent transport regimes, gradient and counter-gradient, is demonstrated in the present work using Direct Numerical Simulations (DNS) of plane flame configurations. The DNS database describes the influence of the heat release factor, of the turbulence-to-flame velocity ratio, and of an external pressure gradient. The simulations reveal a strong correlation between the regime of turbulent transport and the turbulent flame speed and turbulent flame thickness. These effects are not well described by current turbulent combustion models. A conditional 'fresh gases / burnt gases' approach is proposed to overcome these difficulties. Furthermore, the development of flame instabilities in turbulent configurations is also observed in the simulations. A criterion is derived that determines the domain of occurrence of these instabilities (Darrieus-Landau instabilities, Rayleigh-Taylor instabilities, thermo-diffusive instabilities). This criterion suggests that the domain of occurrence of flame instabilities is not limited to small Reynolds numbers. (author) 98 refs.
Energy Technology Data Exchange (ETDEWEB)
Moussiere, S
2006-12-15
Supercritical water oxidation is an innovative process for treating organic liquid waste which uses supercritical water properties to mix the oxidant and the organic compounds efficiently. The reactor is a stirred double-shell reactor. In the step of adaptation to nuclear constraints, computational fluid dynamics modeling is a good tool for determining the temperature field required in the reactor for safety analysis. Firstly, the CFD modeling of the tubular reactor confirms the hypothesis of an incompressible fluid and the use of the k-{omega} turbulence model to represent the hydrodynamics. Moreover, the EDC model is as efficient as the kinetic model at computing the reaction rate in this reactor. Secondly, the study of the turbulent flow in the double-shell reactor confirms the use of a 2D axisymmetric geometry instead of a 3D geometry to compute heat transfer. Moreover, this study shows that the water-air mixture is not single-phase. The reactive turbulent flow is well represented by the EDC model after adaptation of the initial conditions. The reaction rate in the supercritical water oxidation reactor is mainly controlled by the mixing. (author)
Energy Technology Data Exchange (ETDEWEB)
Drecourt, S.; Laborde, J.C.; Lacan, J.; Witschger, O. [CEA/Fontenay-aux-Roses, Inst. de Protection et de Surete Nucleaire (IPSN), 92 (France)
1998-07-01
In the nuclear field, numerous studies based on the transfer of airborne contamination are carried out, concerning both worker protection and the safety of installations. In particular, ventilation, through its functions of dynamic containment and of monitoring during clean-up, is brought into operation in order to protect the operators and the outside environment from a contaminant emission. Ventilation applies both in normal operating conditions and in accidental situations; in both cases it is necessary to limit the propagation of contaminants and to ensure that a dispersion of contaminant is detected as quickly as possible. (N.C.)
Energy Technology Data Exchange (ETDEWEB)
Salmon, St
2008-07-01
This work is made of two distinct parts. The first part is dedicated to two formulations of the Stokes problem: a classical presentation in the (stream function, vorticity) variables and a new one in the (vorticity, velocity, pressure) variables, with an extension to three-dimensional calculations. The second part is dedicated to different methods for solving the Vlasov equations coupled with the Maxwell equations: the semi-Lagrangian method and the finite element method. This system of equations governs the equilibrium and stability of magnetically confined fusion plasmas.
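For reference, in two dimensions the classical (stream function, vorticity) formulation of the Stokes problem reads (standard notation, not the author's):

```latex
% velocity from the stream function, vorticity as (minus) its Laplacian:
u = (\partial_y \psi,\; -\partial_x \psi),
\qquad \omega = \partial_x u_y - \partial_y u_x = -\Delta\psi,
% so taking the curl of the Stokes system
% \ -\nu\Delta u + \nabla p = f, \ \nabla\cdot u = 0 \
% eliminates the pressure and yields the biharmonic problem
\nu\,\Delta^{2}\psi = \operatorname{curl} f .
```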
Energy Technology Data Exchange (ETDEWEB)
Zalewski, L.
1996-11-27
The objective of this work is the analysis of a passive solar component: the composite solar wall, a building component which includes an insulating panel located behind the massive wall. This panel has two vents, at the top and at the bottom, which allow the air to circulate from the room to the layer in contact with the back of the massive wall, where it is heated, and then back to the room. The solar energy is transferred to the building by conduction through the massive wall, and then by convection through a thermosyphon phenomenon. The monitoring of two solar houses in Verdun-Thierville (Meuse, France) clearly revealed control issues of the air layer: the wall must operate as autonomously as possible, so as not to be a constraint for the occupants and to optimize the energy gains. To solve these problems, a composite solar wall prototype was erected in a test cell at Cadarache and tested in real operating conditions. This allowed the use of more complete instrumentation, easier access to the sensors, and the study of various configurations. The first experiments revealed an inverse thermosyphon phenomenon. To avoid this effect, two systems were designed, tested at Cadarache and then implemented in the walls at Verdun. (author) 77 refs.
Energy Technology Data Exchange (ETDEWEB)
Dillard, Th.
2004-03-15
The deformation behaviour and failure of nickel foams were studied under load using X-ray microtomography. Strut alignment and stretching are observed in tension, whereas strut bending followed by strut buckling is observed in compression. Strain localisation, which occurs during compression tests, depends on the nickel weight distribution in the foam. Fracture in tension first takes place at cell nodes, and the crack propagates cell by cell. The damaged area in front of a crack is about five cells wide. A detailed description of the three-dimensional morphology is also presented. One third of the cells are dodecahedral and 57% of the faces are pentagonal. The most frequent cell is composed of two quadrilaterals, two hexagons and eight pentagons. The dimensions of the equivalent ellipsoid of each cell are identified and cell orientations are determined. The geometrical aspect ratio is linked to the mechanical anisotropy of the foam. In tension, a uniaxial analytical model, based on elastoplastic strut bending, is developed. The whole stress-strain curve of the foam is predicted according to its specific weight and its anisotropy. It is found that the non-linear regime of the macroscopic curve of the foam is not only due to the elastoplastic bending of the struts. The model is also extended to two-phase foams and the influence of hollow struts is analysed. The two-phase foam model is finally applied to oxidized nickel foams and compared with experimental data. The strong increase in the rigidity of nickel foams with an increasing degree of oxidation is well described by the model. However, a fracture criterion must also be introduced to take into account the cracking of the oxide layer. A phenomenological compressible continuum plasticity model is also proposed and identified in tension. The identification of the model is carried out using experimental strain maps obtained by a photo-mechanical technique. 
A validation of the model is provided by investigating the strain field around a hole in a foam. The multiaxial model is extended to a micro-morphic one to incorporate non local features accounting for the size effects observed for small holes. The prediction of the model is evaluated in the case of subsequent fracture of the specimen through crack propagation. (author)
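A well-known order-of-magnitude check for strut-bending-dominated open-cell foams is the Gibson-Ashby scaling (a generic relation, not the uniaxial model developed in the thesis):

```python
def foam_young_modulus(E_solid, rel_density, C=1.0):
    """Gibson-Ashby scaling for open-cell foams dominated by strut
    bending: E_foam / E_solid ~ C * (rho_foam / rho_solid)^2,
    with C a geometry constant of order 1."""
    return E_solid * C * rel_density ** 2

# e.g. nickel (E ~ 200 GPa) at 3% relative density:
E_foam = foam_young_modulus(E_solid=200e9, rel_density=0.03)  # ~0.18 GPa
```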
Energy Technology Data Exchange (ETDEWEB)
Poquillon, D
1997-10-01
Usually, for the integrity assessment of defective components, well-established rules are used: the global approach to fracture. A more fundamental way to deal with these problems is based on the local approach to fracture. In this study, we choose this way and perform numerical simulations of intergranular crack initiation and propagation. This type of damage can be found in components of fast breeder reactors made of 316L austenitic stainless steel that operate at high temperatures. This study deals with methods partly coupling behaviour and damage for crack growth in specimens submitted to various thermomechanical loadings. A new numerical method, based on finite element computations and a damage model relying on quantitative observations of grain boundary damage, is proposed. Numerical results of crack initiation and growth are compared with a number of experimental data obtained in previous studies. Creep and creep-fatigue crack growth are studied. Various specimen geometries are considered: compact tension specimens and axisymmetric notched bars tested under isothermal (600 deg C) conditions, and tubular structures containing a circumferential notch tested under thermal shock. Adaptive re-meshing and/or node release techniques are used and compared. In order to broaden our knowledge of stress triaxiality effects on creep intergranular damage, new experiments are defined and conducted on sharply notched tubular specimens in torsion. These isothermal (600 deg C) mode II creep tests reveal severe intergranular damage and creep crack initiation. Calculated damage fields at the crack tip are compared with the experimental observations. The good agreement between calculations and experimental data shows that the damage criterion used can improve the accuracy of life prediction of components submitted to intergranular creep damage. (author) 200 refs.
Energy Technology Data Exchange (ETDEWEB)
Braffort, P.; Iung, J. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1956-07-01
The research activities of the Atomic Energy Commission cover a large variety of subjects, from theoretical physics and nuclear physics to biology, medicine and geology. Thus, about 350 scientific reviews are received and presented in the library, and all those documents need to be classified to facilitate information retrieval for researchers. This report describes the classification and codification of such a large quantity of documents. The classification uses a two-dimensional system: five columns (inter-scale phenomena, corpuscular scale, nuclear scale, atomic and molecular scale, macroscopic scale) give the subject, and five rows (theoretical problems, production, measurement, description, utilisation) give the topic. Some of the rules are given and examples are presented. (M.P.)
Energy Technology Data Exchange (ETDEWEB)
Brun, Ch
1998-04-02
In the context of the thermal-hydraulics of nuclear reactors, strong interaction between wakes is encountered in the bottom of reactor vessels, where control and measurement rods of variable size and disposition interact with the overall wakes generated in these flow zones. This study deals with the strong interaction between two wakes developed downstream of two parallel cylinders with a small spacing. The analysis focuses on the effect of the Reynolds regime, which controls the equilibrium between the inertia and viscosity forces of the fluid and influences the large-scale behaviour of the flow through the development of hydrodynamic instabilities and turbulence. The document is organized as follows: the characteristic phenomena of wake formation downstream of cylindrical obstacles are recalled in the first chapter (single cylinder, interaction between two tubes, case of a bundle of tubes perpendicular to the flow). The experimental setup (hydraulic loop, velocity and pressure measurement instrumentation) and the statistical procedures applied to the measured signals are detailed in chapters 2 and 3. Chapter 4 is devoted to the experimental study of the strong interaction between two tubes; laser Doppler velocity measurements in the wakes close to the cylinders and pressure measurements performed on the tube walls are reported in this chapter. In chapter 5, a 2-D numerical simulation of two typical cases of interaction (Re = 1000 and Re = 5000) is performed. In the last chapter, a more complex case of strong interactions inside and downstream of a bundle of staggered tubes is analyzed experimentally for equivalent Reynolds regimes. (J.S.)
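For the single-cylinder reference case recalled above, the shedding frequency of the wake follows the Strouhal relation (textbook values, not measurements from this study):

```python
def shedding_frequency(strouhal, velocity, diameter):
    """Vortex shedding frequency f = St * U / D behind a circular
    cylinder; St ~ 0.2 over much of the subcritical Reynolds range."""
    return strouhal * velocity / diameter

# e.g. U = 1 m/s past a 1 cm cylinder:
f = shedding_frequency(strouhal=0.2, velocity=1.0, diameter=0.01)  # 20 Hz
```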
Energy Technology Data Exchange (ETDEWEB)
Boudesocque-Dubois, C.; Clarisse, J.M
2007-07-01
In the context of linear perturbation computations of planar or spherically symmetric flows, we propose numerical methods, in Lagrangian coordinates, for integrating the one-dimensional gas dynamics equations with nonlinear heat conduction and their linear perturbations. Numerical results are presented for different configurations, with or without flow motion. (authors)
Energy Technology Data Exchange (ETDEWEB)
Dufour, F
2007-12-15
The industrial context of this research work is the study of the durability of the internal containment barriers of nuclear power plants. The work is divided into two parts: the first addresses the crack-damage state, and the second the consequences of creep on the rupture properties of concrete. In the first part, the analysis of experimental results (obtained on a compression cylinder on which the radial permeability was measured) shows that the permeability first decreases, up to a strain of about half the strain at the force peak, through re-closure of pre-existing microcracks in the material; it then increases strongly until after the force peak, through initiation, connection and opening of cracks; finally, it increases less rapidly up to rupture, because only the opening of the macro-cracks increases. In order to simulate these phenomena, two original post-processing methods are presented for estimating leakage rates from a finite element mechanical computation. The first method estimates the permeability from the damage field, through a relation between permeability and damage that connects the Poiseuille law to an empirical law established for weak damage. The second method is based on the strain field, from which the position and opening of the crack are calculated; the Poiseuille relation is then applied along the crack to estimate the leakage rates. The relation between concrete creep and its mechanical characteristics is analyzed in the second part; in particular, the consequences of creep on the long-term mechanical properties are studied. After presenting the experimental results, which essentially show an embrittlement of the material after creep, a qualitative analysis through a bifurcation study is proposed, followed by a discrete numerical method that recovers the experimentally observed influence of visco-elasticity on embrittlement at rupture. 
Finally, the first results of a quantitative analysis by a finite element method using an original coupled model are presented. (O.M.)
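The Poiseuille relation invoked in both leakage-estimation methods is the cubic law for laminar flow between the crack faces; a minimal sketch (notation ours):

```python
def crack_flow_per_unit_breadth(aperture, viscosity, pressure_gradient):
    """Cubic (Poiseuille) law for laminar flow in a plane crack:
    q = w^3 / (12 mu) * dp/dx, in m^2/s per unit crack breadth."""
    return aperture ** 3 / (12.0 * viscosity) * pressure_gradient

# water (mu ~ 1e-3 Pa.s), 0.1 mm aperture, 1 bar/m pressure gradient:
q = crack_flow_per_unit_breadth(1e-4, 1e-3, 1e5)
```

The cubic dependence on the aperture w is what makes the leakage rate so sensitive to macro-crack opening, consistent with the three permeability regimes described above.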
Energy Technology Data Exchange (ETDEWEB)
Zokimila, P
2005-10-15
Deep geological disposal is one of the privileged options for the storage of high-level radioactive waste. A good knowledge of the behavior and properties of the potential geological formations, as well as their evolution in time under the stress changes induced by a possible installation of the repository, is required. The host geological formation will be subjected to mechanical and thermal loadings due, respectively, to the excavation of the disposal tunnels and to the heat released by the canisters of radioactive waste. These thermomechanical loadings will generate stress relief in the host layer and deformations of the disposal tunnels; together with the extension of the damaged zones (EDZ), these could cause local and global instabilities. This work aims to develop calculation methods to optimize the numerical modeling of the thermoelastic behavior of the repository at large scale, and to evaluate the thermomechanical disturbance induced by the storage on the host geological formation. Accordingly, after a presentation of the state of knowledge on the thermomechanical aspects of rocks related to deep storage, 2D and 3D numerical modeling of the thermoelastic behavior of an individual disposal tunnel and of a network of tunnels was carried out using a discrete approach. However, this classical approach is prohibitive for studying the global behavior of the repository. To mitigate this, a numerical modeling approach based on the homogenization of periodic structures was proposed. Formulations and numerical procedures were worked out to calculate the effective thermoelastic behavior of an equivalent heterogeneous structure. The model obtained by this method was validated against existing homogenization methods, such as the self-consistent model, as well as the Hashin-Shtrikman bounds. The comparison between the effective thermoelastic behavior and the reference thermoelastic behavior showed good agreement of the results. 
As an application to deep geological storage, the effective thermoelastic properties of a network of circular tunnels were computed in 2D for various values of the spacing between galleries. (author)
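The Hashin-Shtrikman bounds used above for validation can be written, for the bulk modulus of a two-phase isotropic composite, as follows (standard form, with phase 1 the softer phase; illustrative moduli, not the thesis data):

```python
def hs_bulk_bounds(K1, G1, f1, K2, G2, f2):
    """Hashin-Shtrikman lower/upper bounds on the effective bulk
    modulus; phase 1 is the softer phase, f1 + f2 = 1."""
    lower = K1 + f2 / (1.0 / (K2 - K1) + 3.0 * f1 / (3.0 * K1 + 4.0 * G1))
    upper = K2 + f1 / (1.0 / (K1 - K2) + 3.0 * f2 / (3.0 * K2 + 4.0 * G2))
    return lower, upper

lo, up = hs_bulk_bounds(K1=10.0, G1=5.0, f1=0.5, K2=100.0, G2=50.0, f2=0.5)
reuss = 1.0 / (0.5 / 10.0 + 0.5 / 100.0)  # harmonic mean, ~18.2
voigt = 0.5 * 10.0 + 0.5 * 100.0          # arithmetic mean, 55.0
# the HS bounds are tighter than Reuss/Voigt: reuss <= lo <= up <= voigt
```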
Energy Technology Data Exchange (ETDEWEB)
Vermorel, O.
2003-11-15
This work is devoted to the numerical and theoretical study of turbulence modulation by particles, using direct numerical simulation for the continuous phase coupled with a Lagrangian prediction of the trajectories of discrete particles. The configuration corresponds to a slab of particles injected at high velocity into an isotropic decaying turbulence. The motion of a particle is supposed to be governed only by the drag force. The particle mass loading is large, so that the momentum exchange between particles and fluid results in a significant modulation of the turbulence; collisions are neglected. The momentum transfer between particles and gas causes a strong acceleration of the gas in the slab. In the periphery of the slab, the turbulence is enhanced by the production due to the mean gas velocity gradients. The analysis of the interphase transfer terms in the gas turbulent kinetic energy equation shows that the direct effect of the particles is to damp the turbulence in the core of the slab but to enhance it in the periphery. The latter effect is due to a strong correlation between the particle distribution and the instantaneous gas velocity. Another issue concerns the k-{epsilon} model and the validity of its closure assumptions in two-phase flows. A new eddy viscosity expression, a function of the particle parameters, is used to model the Reynolds stress tensor. The modelling of the gas turbulent dissipation rate is questioned. A two-phase Langevin equation is also tested to model the drift velocity and fluid-particle velocity covariance equations. (author)
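When drag is the only force on a particle, each trajectory obeys dv_p/dt = (u_f - v_p)/tau_p, with tau_p the particle relaxation time. A minimal sketch of this Lagrangian velocity update (illustrative only, assuming the fluid velocity seen by the particle is frozen over the time step):

```python
import math

def advance_particle(v_p, u_f, tau_p, dt):
    """One step of the drag-only particle momentum equation
    dv_p/dt = (u_f - v_p) / tau_p, using the exact exponential
    solution for a fluid velocity u_f held constant over dt."""
    return u_f + (v_p - u_f) * math.exp(-dt / tau_p)
```

The exponential update is unconditionally stable, which matters when tau_p is small compared to the flow time step.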
Energy Technology Data Exchange (ETDEWEB)
Bellivier, A.
2004-05-15
For the 3D modelling of thermo-aeraulics in buildings using field codes, it is necessary to reduce the computing time in order to model increasingly large volumes. The solution suggested in this study is to couple two modelling approaches: a zonal approach and a CFD approach. The first part of the work carried out is the set-up of a simplified CFD model: we propose rules for the use of coarse grids, a constant effective-viscosity law and adapted heat-exchange coefficients in the framework of building thermo-aeraulics. The second part of this work concerns the creation of fluid Macro-Elements and their coupling with a finite-volume CFD calculation. Depending on the boundary conditions of the problem, a local description of the driving flow is proposed through the selection and use of semi-empirical evolution laws. The Macro-Element is then inserted in the CFD computation: the velocity values calculated by the evolution laws are imposed on the CFD cells corresponding to the Macro-Element. We apply these two approaches to five cases representative of building thermo-aeraulics. The results are compared with experimental data and with traditional RANS simulations. We highlight the significant gain in computing time that our approach allows while preserving good quality of the numerical results. (author)
Energy Technology Data Exchange (ETDEWEB)
Labit, B
2002-10-01
In a fusion machine, understanding plasma turbulence, which causes a degradation of the measured energy confinement time, would constitute major progress in this field. In tokamaks, the measured ion and electron thermal conductivities are of comparable magnitude. The possible sources of turbulence are the temperature and density gradients occurring in a fusion plasma. Whereas the heat losses in the ion channel are reasonably well understood, the origin of the electron losses is more uncertain. In addition to the radial velocity associated with the fluctuations of the electric field, electrons are more affected than ions by magnetic field fluctuations. In experiments, the confinement time can conveniently be expressed in terms of dimensionless parameters. Although still somewhat imprecise, these scaling laws exhibit strong dependencies on the normalized pressure {beta} or the normalized Larmor radius {rho}{sub *}. The present thesis assesses whether a three-dimensional, electromagnetic, nonlinear fluid model of plasma turbulence driven by a specific instability can reproduce the dependence of the experimental electron heat losses on the dimensionless parameters {beta} and {rho}{sub *}. The investigated interchange instability is the Electron Temperature Gradient driven one (ETG). The model is built using the set of Braginskii equations. The simulation code developed is global in the sense that a fixed heat flux is imposed at the inner boundary, leaving the gradients free to evolve. The nonlinear simulations have brought to light three characteristics of ETG turbulence: the turbulent transport is essentially electrostatic; the potential and pressure fluctuations form radially elongated cells called streamers; and the transport level is very low compared to the experimental values. The study of the thermal transport dependence has shown a very small role of the normalized pressure, in contradiction with Ohkawa's formula.

On the other hand, the crucial role of the normalized electron Larmor radius has been emphasized: the confinement time is inversely proportional to this parameter. Finally, a weak dependence of the turbulent transport on the magnetic shear and the inverse aspect ratio is also reported. Although the transport level observed in the simulations is low compared to the experiments, we have attempted a direct confrontation with Tore Supra results. This tokamak is well suited to the study of electron heat transport. Keeping most of the parameters of a well-documented Tore Supra shot, the nonlinear simulation gives a threshold quite close to the experimental one, but the observed turbulent conductivity is a factor of fifty lower than the experimental value. One important parameter could not be matched: the normalized Larmor radius {rho}{sub *}. This limitation has to be overcome in order to confirm these results. Finally, a rigorous confrontation between this result and gyrokinetic simulations would be needed to conclude that the ETG instability cannot describe electron heat losses in tokamaks. (author)
Energy Technology Data Exchange (ETDEWEB)
Wolff, Marc
2011-10-14
This work is devoted to the construction of numerical methods for the accurate simulation of inertial confinement fusion (ICF) implosion processes, taking self-generated magnetic field terms into account. We first derive a two-temperature resistive magnetohydrodynamics model and describe the closure relations considered. The resulting system of equations is then split into several subsystems according to the nature of the underlying mathematical operator, and adequate numerical methods are proposed for each of them. Particular attention is paid to the development of finite volume schemes for the hyperbolic operator, which is the hydrodynamics or ideal magnetohydrodynamics system depending on whether magnetic fields are considered or not. More precisely, a new class of high-order accurate, dimensionally split schemes for structured meshes is proposed using the Lagrange-remap formalism. One of the most innovative features of these schemes is that they have been designed to take advantage of modern massively parallel computer architectures; this is illustrated by the dimensionally split approach and the use of artificial viscosity techniques, and demonstrated in practice by sequential performance and parallel efficiency figures. The hyperbolic schemes are then combined with finite volume methods for the thermal and resistive conduction operators, taking magnetic field generation into account. In order to study the characteristics and effects of the self-generated magnetic field terms, simulation results are finally presented with the complete two-temperature resistive magnetohydrodynamics model on a test problem representing the state of an ICF capsule at the beginning of the deceleration phase. (author)
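A dimensionally split scheme advances a multidimensional hyperbolic system through successive one-dimensional sweeps, which is also what makes it easy to parallelize. The following toy sketch uses first-order upwind advection rather than the high-order Lagrange-remap schemes of the thesis, purely to show the splitting structure:

```python
import numpy as np

def upwind_sweep(q, c, dt, dx, axis):
    """One 1D first-order upwind finite-volume sweep along `axis`
    for linear advection with positive speed c (periodic boundaries)."""
    flux = c * q                                   # upwind flux for c > 0
    return q - dt / dx * (flux - np.roll(flux, 1, axis=axis))

def advect_2d_split(q, cx, cy, dt, dx):
    """Dimensionally split update: an x sweep followed by a y sweep."""
    q = upwind_sweep(q, cx, dt, dx, axis=0)
    q = upwind_sweep(q, cy, dt, dx, axis=1)
    return q
```

Each sweep only touches data along one direction, so rows (or columns) can be processed independently, which is the property exploited on massively parallel architectures.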
Energy Technology Data Exchange (ETDEWEB)
Gres, B.; Foulquier, J.N.; Orthuon, A.; Huguet, F.; Keraudy, K.; Touboul, E. [Hopital Tenon, Service de Radiotherapie, 75 - Paris (France)
2009-06-15
Purpose: In external breast radiotherapy, the usual treatment consists of two tangential beams homogeneously attenuated by a dynamic or physical wedge, in order to obtain the most homogeneous dose distribution possible. Depending on the shape and size of the breast volume, this technique may produce dose heterogeneities exceeding 20% relative to the recommendation of the International Commission on Radiation Units and Measurements (95-107%). We propose to study a breast treatment planning technique that compensates for tissue thickness, in order to decrease the dose heterogeneity observed in the dose distributions of conventional treatments. Materials and methods: We segmented the initial tangential beams used for this kind of treatment into several smaller beams. Their shape was adapted to the distribution of the grey levels on the DRR image. We thereby compensated the thickness gradient and delivered the right dose to each thickness group. Results: The dose distribution obtained with this method shows an improvement of the dose homogeneity in the three spatial dimensions and a decrease of 5 to 10% of the maximal dose above the ICRU recommendation. Conclusion: This technique allows us to perform breast irradiation on a single-photon-energy linac even if the treated volume presents a large thickness gradient. However, in the case of a large breast, this method is not able to reduce the overdose at the entrance of the volume, which is due to a photon energy inappropriate for the breast thickness. (authors)
Energy Technology Data Exchange (ETDEWEB)
Toulhoat, H.
2002-03-01
I present in this dissertation a synthesis of my contributions to the field of heterogeneous catalysis over two decades of research undertaken as a scientist at Institut Francais du Petrole. I started my itinerary on the 'floor', with the task of developing industrial hydro-treating catalysts; I then had the fine opportunity to lead advanced research on various subjects. For the past ten years, however, I have been devoting myself to the encounter between catalysis and theoretical chemistry. The presentation of my work therefore follows a guideline starting with the preparation and ending at the modelling of the catalytic solid, after having gone through its characterization and the assessment of its activity. Modelling is thus founded on a consistent set of experimental information. This guideline is applied to the four main themes to which this work is confined: hydro-treating catalysts, hydro-demetallation catalysts, thio-resistance of noble metals, and solid acids. In summary, I believe I have contributed significantly, on the one hand, to strong conceptual and technical advances in the area of ab initio simulation of elementary phenomena in heterogeneous catalysis, with the elaboration of original knowledge on catalysis by sulfides, metals and acids, as well as on the genesis of alumina carriers; and, on the other hand, to a new approach to periodic trends in catalysis, which can be considered a revisiting of Sabatier's principle, leading to a predictive tool for the catalytic activity of solids. In the near future it will be possible to say whether practical results validate this conceptual tool, and justify or not the ambitious title I gave to my work. (author)
Energy Technology Data Exchange (ETDEWEB)
Mahdi, T.F.
2004-07-01
The risk zone associated with the surge wave following dam failure is defined in this study using a newly developed methodology that incorporates the conventionally used maximum water levels, the sediment movement in the river bed and the possibility of bank failure. Although the risk zone is typically defined as the inundated area, extensive lateral erosion that causes landslides could also accompany the inundation. This study demonstrates that the stability of the riverbank (in terms of geotechnical considerations) can influence the delineation of the risk area. The combined disciplines of geotechnics and hydraulics can be used to follow the evolution of the riverbed and riverbanks during a flood event. This study also presents a structured methodology for the St. Venant shallow water wave equations which were used to determine the area likely to be flooded without taking into account sediment transport. It also discusses aspects of the sediment transport theory. Fluvial erosion and lateral bank failure are the basic physical processes responsible for bank retreat. The minimum energy dissipation rate theory is applied for fluvial erosion, while Bishop's modified method is used to analyze slope stability when evaluating lateral bank failures. A numerical modeling of flows over movable beds is presented along with a review of some of the available numerical models. A diagnostic phase that provides information needed to qualify the extent of the damage after a flood is also presented. Some of the numerical models to evaluate risk area were validated on a portion of the HaHa River which was affected in the 1996 Saguenay flood. The new methodology, applied to the Outaouais River at Notre Dame du Nord in Quebec, produces a risk area much greater than that obtained when only the inundated area is considered.
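Bishop's modified (simplified) method, mentioned above for the lateral bank failure analysis, evaluates the factor of safety of a circular slip surface by a fixed-point iteration, since the safety factor appears on both sides of the moment equilibrium equation. A minimal sketch of that iteration (illustrative only, neglecting pore pressure; slice data are hypothetical inputs):

```python
import math

def bishop_fos(slices, c, phi_deg, tol=1e-6, max_iter=100):
    """Factor of safety by Bishop's simplified method (no pore pressure).
    `slices` is a list of (W, alpha, b): slice weight, base inclination
    in radians, and slice width. `c` is cohesion, `phi_deg` the friction
    angle in degrees."""
    tan_phi = math.tan(math.radians(phi_deg))
    driving = sum(W * math.sin(a) for W, a, b in slices)
    F = 1.0                                  # initial guess
    for _ in range(max_iter):
        resisting = 0.0
        for W, a, b in slices:
            # m_alpha couples the resisting moment to the unknown F
            m_a = math.cos(a) * (1.0 + math.tan(a) * tan_phi / F)
            resisting += (c * b + W * tan_phi) / m_a
        F_new = resisting / driving
        if abs(F_new - F) < tol:
            return F_new
        F = F_new
    return F
```

A factor of safety below 1 flags a bank section as unstable, which is how geotechnical stability feeds into the delineation of the risk area.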
Energy Technology Data Exchange (ETDEWEB)
Devanz, Guillaume [Paris-6 Univ., 95 Paris (France)
1999-03-04
Laser-triggered radiofrequency guns are the most luminous electron sources, able to reach the performance requested by highly demanding applications like e{sup +}/e{sup -} linear colliders and short-wavelength free electron lasers. CANDELA is an S-band photo-injector triggered by a sub-picosecond laser. It reaches peak currents of hundreds of amperes at average energies higher than 2 MeV. The original concept of two accelerating cavities aims at minimizing the transverse and longitudinal emittances following Gao's principles. For practical reasons the operating parameters, particularly the laser pulse duration, do not correspond to those considered in the design. Hence, numerical simulations were performed to evaluate the gun's performance in the experimental environment. The study of stable injector operation led to upgrades of the phase control systems involving the laser and the RF source. The transverse and longitudinal beam characteristics have been measured as a function of the main parameters, i.e. the beam charge and the phase shift between the laser and the RF wave. Measurements of the transverse emittance, energy dispersion and wave packet duration are presented for several injector configurations. The existing beam measurement systems have been studied to determine their resolution and the experimental conditions to fulfil, in order to suggest improvements for the CANDELA beam. The experiments with the beam have been compared with numerical simulations; agreement was obtained within wide ranges of parameters for most of the characteristic beam quantities.
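The transverse emittance measured here is commonly defined statistically from the second moments of the particle distribution in trace space (x, x'). A sketch of that standard computation (a generic definition, not the CANDELA diagnostic code itself):

```python
import numpy as np

def rms_emittance(x, xp):
    """Statistical (rms) transverse emittance of a particle distribution:
    eps = sqrt(<x^2><x'^2> - <x x'>^2), using centred second moments.
    x: positions, xp: divergences x' = dx/ds."""
    x = np.asarray(x) - np.mean(x)
    xp = np.asarray(xp) - np.mean(xp)
    return np.sqrt(np.mean(x**2) * np.mean(xp**2) - np.mean(x * xp)**2)
```

A perfectly correlated (laminar) beam has zero rms emittance; any uncorrelated spread in divergence increases it.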
Energy Technology Data Exchange (ETDEWEB)
Legou, Th
2002-02-01
This report deals with a fast digital electronic system developed for ion identification and spectroscopy. The system, called IRIS, was conceived for the super-heavy element research program FUSION. In order to observe a super-heavy element, the energy of the compound nucleus implanted in a silicon detector must be measured, and its alpha decay also registered. The associated electronics must therefore handle a very wide range of energies and exhibit a short recovery time after the implantation of the compound nucleus. IRIS is connected to the output of a charge preamplifier. It digitizes the signal and then executes two digital signal processes: the first to detect the particle, and the second to determine the energy deposited in the silicon detector. The use of programmed processing allows the adjustment of the digital processing parameters, as well as a choice of other digital signal processing procedures, depending on the application. After explaining why a conventional electronic system cannot be used for the detection of super-heavy ions, IRIS' structure is detailed and a number of digital signal processing procedures are studied and tested. (author)
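A common digital procedure for extracting the deposited energy from a digitized preamplifier signal is a trapezoidal shaper; the report does not state that IRIS uses exactly this filter, so the following is a generic sketch built from the difference of two moving averages:

```python
import numpy as np

def trapezoidal_filter(x, rise, gap):
    """Trapezoidal pulse shaper: the difference between a causal moving
    average of length `rise` and the same average delayed by
    `rise + gap` samples. For an ideal step of amplitude A, the flat
    top of the output equals A (the pulse height)."""
    kernel = np.ones(rise) / rise
    ma = np.convolve(x, kernel, mode="full")[:len(x)]          # causal MA
    delayed = np.concatenate([np.zeros(rise + gap), ma])[:len(x)]
    return ma - delayed
```

The `rise` length trades resolution against throughput, and the `gap` accommodates the finite rise time of the detector signal; real shapers add a pole-zero correction for the exponential preamplifier decay, omitted here.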
Energy Technology Data Exchange (ETDEWEB)
Schneider, D
2001-07-01
The nodal method Minos was developed to offer a powerful method for the calculation of nuclear reactor cores in rectangular geometry. This method solves the mixed dual form of the diffusion equation, as well as of the simplified P{sub N} approximation. The discretization is based on Raviart-Thomas mixed dual finite elements and the iterative algorithm is an alternating-direction method which uses the current as unknown. The subject of this work is to adapt this method to hexagonal geometry. The guiding idea is to construct and test different methods based on the division of a hexagon into trapezoids or rhombi, with an appropriate mapping of these quadrilaterals onto squares, in order to take advantage of what is already available in the Minos solver. The document begins with a review of the neutron diffusion equation. Then we discuss its mixed dual variational formulation from a functional as well as from a numerical point of view. We study conformal and bilinear mappings for the two possible meshings of the hexagon. Four different methods are thus proposed and completely described in this work. Because of theoretical and numerical difficulties, a particular treatment was necessary for the methods based on the conformal mapping. Finally, numerical results are presented for a hexagonal benchmark to validate and compare the four methods with respect to pre-defined criteria. (authors)
Energy Technology Data Exchange (ETDEWEB)
Sebelin, E
1997-12-15
Full-wave calculations based on trial functions are carried out for solving the lower hybrid current drive problem in tokamaks. A variational method is developed and provides an efficient system to describe in a global manner both the propagation and the absorption of the electromagnetic waves in plasmas. The calculation is fully carried out in the case of circular and concentric flux surfaces. The existence and uniqueness of the solution of the wave propagation equation is mathematically proved. The first realistic simulations are performed for the high aspect ratio tokamak TRIAM-1M. It is checked that the main features of the lower-hybrid wave dynamics are well described numerically. (A.C.) 81 refs.
Energy Technology Data Exchange (ETDEWEB)
Begis, J.; Balzer, G.
1997-02-01
The numerical modelling of the internal flows of CFB boilers faces complex phenomena due to the unsteadiness of the flows, the heterogeneity of the particle size distribution, and the interactions of the two phases with each other and with the walls. Our study consisted in applying numerical models to the experimental configuration of the cold circulating fluidized bed studied at Cerchar. Special attention was given to the analysis of particle-wall interaction models stemming from the theories of Jenkins (1992) and Louge (1994), as well as to the influence of the particles on the fluid turbulence. To perform the numerical simulations, we used the Eulerian two-phase flow codes developed at NHL: Melodif (2D) and ESTET-ASTRID (3D). From different tests we deduced that the most appropriate model for the prediction of CFBs is the one taking into account the influence of the particles on the fluid turbulence. Then, to evaluate the validity limits of this model, we built the regime diagram and compared it with the experimental diagram. We concluded that the simulation allows the different CFB operating regimes, and especially the transitions, to be described. We also noticed the importance of the choice of the mean diameter of the simulated particles: by correcting the diameter of the simulated particles to the Sauter mean particle diameter, we obtained numerical results in good agreement with experimental data. (authors) 13 refs.
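The Sauter mean diameter used above is the diameter of a sphere having the same volume-to-surface ratio as the whole polydisperse population, which is why it is the natural single diameter for drag and interphase exchange. A minimal sketch of the standard definition (illustrative, not from the report):

```python
def sauter_mean_diameter(counts, diameters):
    """Sauter mean diameter d32 = sum(n_i * d_i^3) / sum(n_i * d_i^2)
    for a discrete size distribution given as particle counts n_i
    in classes of diameter d_i."""
    num = sum(n * d**3 for n, d in zip(counts, diameters))
    den = sum(n * d**2 for n, d in zip(counts, diameters))
    return num / den
```

Because d32 weights large particles by volume and small ones by surface, it generally differs from the arithmetic mean diameter, which is the correction discussed in the abstract.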
Energy Technology Data Exchange (ETDEWEB)
Jacques, R.; Le Quere, P.; Daube, O. [Centre National de la Recherche Scientifique (CNRS), 91 - Orsay (France)
1997-12-31
Turbulent flows between a fixed disc and a rotating disc are encountered in various applications like turbomachinery or the torque converters of automatic gearboxes. These flows are characterised by particular physical phenomena, mainly due to the effects of rotation (Coriolis and inertia forces), and thus classical k-{epsilon}-type modelling gives only approximate results. The aim of this work is to study these flows using direct numerical simulation in order to provide precise information about the statistical turbulent quantities and to improve the k-{epsilon} modelling in the industrial MATHILDA code of ONERA, used by the SNECMA company (aerospace industry). The results presented are restricted to the comparison between results obtained with direct simulation and results obtained with the MATHILDA code in the same configuration. (J.S.) 8 refs.
Energy Technology Data Exchange (ETDEWEB)
Sarh, B.; Gokalp, I.; Sanders, H. [Centre National de la Recherche Scientifique (CNRS), 45 - Orleans-la-Source (France)
1997-12-31
In the framework of the studies carried out by the LCSR on variable-density flows and turbulent diffusion flames, this paper deals with the influence of density variation on the characteristics of a heated rectangular turbulent jet discharging into a stagnant surrounding atmosphere, and more particularly with the determination of the turbulent viscosity. The dynamical field is measured using laser-Doppler anemometry while the thermal field is measured using cold-wire anemometry. A numerical prediction of the characteristics of this jet, based on a k-{epsilon} model, is carried out. (J.S.) 6 refs.
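In the k-{epsilon} framework referred to here, the turbulent viscosity is obtained algebraically from the turbulent kinetic energy and its dissipation rate. A one-line sketch of that standard closure (the textbook form, not the specific variable-density variant of the paper):

```python
def turbulent_viscosity(k, eps, c_mu=0.09):
    """Eddy (turbulent) kinematic viscosity of the standard k-epsilon
    model: nu_t = C_mu * k^2 / eps, with the usual constant C_mu = 0.09."""
    return c_mu * k * k / eps
```

Density variation modifies how this closure performs, which is precisely what the measured turbulent viscosity is compared against.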
Energy Technology Data Exchange (ETDEWEB)
Jacoutot, L
2006-11-15
This study reports on a new vitrification process developed by the French Atomic Energy Commission (CEA, Marcoule). This process is used for the treatment of high activity nuclear waste. It is characterized by the cooling of all the metal walls and by currents directly induced inside the molten glass. In addition, a mechanical stirring device is used to homogenize the molten glass. The goal of this study is to develop numerical tools to understand phenomena which take place within the bath and which involve thermal, hydrodynamic and electromagnetic aspects. The numerical studies are validated using experimental results obtained from pilot vitrification facilities. (author)
Energy Technology Data Exchange (ETDEWEB)
Menouillard, T
2007-09-15
Computerized simulation is nowadays an integral part of the design and validation processes of mechanical structures. Simulation tools are increasingly powerful, allowing a very accurate description of the phenomena. Moreover, these tools are not limited to linear mechanics but are being developed to describe more complex behaviours such as structural damage, which is of interest to the safety domain. A dynamic or static load can thus lead to damage, then a crack and finally the rupture of the structure. Fast-dynamics codes make it possible to simulate 'fast' phenomena such as explosions, shocks and impacts on structures. The application domains are varied; one example is the study of the lifetime and accident scenarios of a nuclear reactor vessel. It is therefore very interesting for fast-dynamics codes to be able to anticipate such phenomena in a robust and stable way: the assessment of damage in the structure and the simulation of crack propagation are an essential stake. The extended finite element method has the advantage of avoiding remeshing and field projection during crack propagation: the crack is described kinematically by an appropriate enrichment strategy with supplementary degrees of freedom. Difficulties in connecting the spatial discretization of this method with the temporal discretization of an explicit calculation scheme were then revealed; these difficulties concern the diagonalization of the mass matrix and the associated stable time step. Two methods of mass matrix diagonalization based on kinetic energy conservation are presented here, together with studies of the critical time step for various enriched finite elements. The interesting result is that the time step is no more penalizing than that of the standard finite element problem. Comparisons with numerical simulations performed with another code validate the theoretical work.

A crack propagation test in mixed mode was also exploited in order to verify the predictions of the simulation. (O.M.)
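Row-sum lumping is one standard way to diagonalize a consistent mass matrix (the thesis develops kinetic-energy-conserving variants for enriched elements), and the critical time step of the explicit central-difference scheme then follows from the highest eigenfrequency of the discrete system. A generic sketch of both ideas:

```python
import numpy as np

def lump_mass(M):
    """Row-sum lumping: a diagonal mass matrix preserving total mass."""
    return np.diag(M.sum(axis=1))

def critical_time_step(K, M):
    """Stability limit of the explicit central-difference scheme:
    dt_c = 2 / omega_max, where omega_max^2 is the largest eigenvalue
    of the generalized problem K v = omega^2 M v."""
    omega2 = np.linalg.eigvals(np.linalg.solve(M, K))
    return 2.0 / np.sqrt(np.max(omega2.real))
```

For a two-node bar element, lumping lowers the highest eigenfrequency and therefore enlarges the stable time step relative to the consistent mass matrix, which is the behaviour the thesis seeks to retain for enriched elements.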
Energy Technology Data Exchange (ETDEWEB)
Rousseau, J.
2009-07-15
That study focuses on concrete structures submitted to impact loading and is aimed at predicting local damage in the vicinity of an impact zone as well as the global response of the structure. The Discrete Element Method (DEM) seems particularly well suited in this context for modeling fractures. An identification process of DEM material parameters from macroscopic data (Young's modulus, compressive and tensile strength, fracture energy, etc.) will first be presented for the purpose of enhancing reproducibility and reliability of the simulation results with DE samples of various sizes. Then, a particular interaction, between concrete and steel elements, was developed for the simulation of reinforced concrete. The discrete elements method was validated on quasi-static and dynamic tests carried out on small samples of concrete and reinforced concrete. Finally, discrete elements were used to simulate impacts on reinforced concrete slabs in order to confront the results with experimental tests. The modeling of a large structure by means of DEM may lead to prohibitive computation times. A refined discretization becomes required in the vicinity of the impact, while the structure may be modeled using a coarse FE mesh further from the impact area, where the material behaves elastically. A coupled discrete-finite element approach is thus proposed: the impact zone is modeled by means of DE and elastic FE are used on the rest of the structure. An existing method for 3D finite elements was extended to shells. This new method was then validated on many quasi-static and dynamic tests. The proposed approach is then applied to an impact on a concrete structure in order to validate the coupled method and compare computation times. (author)
Energy Technology Data Exchange (ETDEWEB)
Sauvage, E
2009-11-15
Within the context of the search for a new vitrification process for nuclear waste, in which the currently used metallic pot would be replaced by an inductive cold crucible, this research thesis deals with the numerical modelling of this technology. After recalling the interest of nuclear waste vitrification, the report presents the new process based on the use of a cold crucible, describing the principles and objectives of this method and the characteristic physical phenomena associated with the flow and the thermal behaviour of the glass melt in such a crucible. It also recalls and comments on the existing modelling work. The main objective of this research is then to demonstrate the feasibility of 3D coupled thermal-hydraulic and induction simulations. The thesis describes and analyses the physical properties of the glass (electrical properties, viscosity, thermal properties) and the electromagnetic, hydrodynamic and thermal phenomena. It presents in detail the modelling of bubbling-induced mixing, reports 3D calculations coupling induction and fluid mechanics, and describes specific thermal investigations (radiative transfer, thermal boundary conditions).
Energy Technology Data Exchange (ETDEWEB)
Clerc, S
1998-07-01
In this work, the numerical simulation of the equations of fluid dynamics is addressed, using implicit upwind schemes of finite volume type. The first part of the dissertation deals with improving the computational accuracy in unfavourable situations. A non-conservative treatment of some source terms is studied in order to correct shortcomings of the usual operator-splitting method. In addition, finite volume schemes based on Godunov's approach are ill-suited to computing low-Mach-number flows; a modification of the upwinding by preconditioning is introduced to correct this defect. The second part deals with the solution of the steady-state problems arising from an implicit discretization of the equations. A well-posed linearized boundary value problem is formulated, and we prove the convergence of a domain decomposition algorithm of Schwarz type for this problem. This algorithm is implemented either directly or in a Schur complement framework. Finally, another approach is proposed, which consists in decomposing the nonlinear steady-state problem itself. (author)
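A Schwarz-type domain decomposition solves the global problem by repeated local solves on overlapping subdomains that exchange interface values. A toy 1D Poisson sketch of the alternating (multiplicative) variant, purely to show the structure (not the implicit compressible solver of the thesis):

```python
import numpy as np

def solve_poisson(f, ua, ub, n, h):
    """Solve -u'' = f on n interior points with Dirichlet values ua, ub."""
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f.copy()
    rhs[0] += ua / h**2
    rhs[-1] += ub / h**2
    return np.linalg.solve(A, rhs)

def schwarz_1d(f, ua, ub, n, overlap, iters):
    """Alternating Schwarz for -u'' = f on (0,1): two overlapping
    subdomains exchange Dirichlet data at their interfaces."""
    h = 1.0 / (n + 1)
    u = np.zeros(n)
    mid = n // 2
    lo, hi = mid - overlap, mid + overlap          # interface indices
    for _ in range(iters):
        # left solve uses the current value at the right interface
        u[:hi] = solve_poisson(f[:hi], ua, u[hi], hi, h)
        # right solve uses the freshly updated left interface value
        u[lo + 1:] = solve_poisson(f[lo + 1:], u[lo], ub, n - lo - 1, h)
    return u
```

The convergence rate improves with the overlap width; for the linearized boundary value problems of the thesis, convergence of the analogous algorithm is proved rather than merely observed.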
Energy Technology Data Exchange (ETDEWEB)
Dahhou, B.; Pourciel, J.B.; Trave Massuyes, L.
1995-12-31
This bioreactor control system is based on an expert system achieving a prediction of substrate consumption. It uses a biological model of fermentation built for that purpose. It is to be implemented on a pilot plant at Aramon. (D.L.) 12 refs.
Energy Technology Data Exchange (ETDEWEB)
Chateau, J.P
1999-01-05
We discuss the respective roles played by anodic dissolution and hydrogen in the SCC mechanisms of f.c.c. materials, by studying the fracture of copper in nitrite solution and comparing the results with those previously obtained on 316L steel in hot chloride. It is surprising to note that even though the crystallographies at the micron scale are different, the macroscopic inclinations of the fracture surfaces are the same. In the case of 316L steel, the formation of strong pile-ups in the presence of hydrogen leads, in the most general case, to a zigzag fracture along alternating slip planes. In the absence of hydrogen, as in copper, this mechanism effectively disappears. Furthermore, numerical simulations of crack shielding by dislocations emitted on one plane predict the macroscopic inclination, showing that it is due to dissolution alone, which confines slip activity to the very crack tip in f.c.c. materials. In order to quantify the mechanism involved in 316L steel, we developed simulations which numerically solve the coupled diffusion and elasticity equations for hydrogen in the presence of a crack and shielding dislocations. They reproduce the mechanisms of hydrogen segregation on edge dislocations and of a localised softening effect through decreased pair interactions. These mechanisms lead to (i) a localisation of hydrogen embrittlement along the activated slip planes, (ii) an increase of the dislocation density in pile-ups, and (iii) a decrease of the cross-slip probability. These three factors enhance micro-fracture at the head of a pile-up, which is responsible for the zigzag fracture. By introducing the free-surface effects for hydrogen, we point out a new mechanism, the inhibition of dislocation sources at the crack tip, which is consistent with the brittle fracture surfaces observed in some cases on 316L steel. The quantification of these different mechanisms allows us to relate the local fracture probability to the macroscopic parameters.

A general softening law is proposed, and we show that micro-fracture occurs for realistic values of three key parameters: the hydrogen concentration, the decrease of k{sub Ic} and the obstacle resistance. (author)
Energy Technology Data Exchange (ETDEWEB)
Blain, M.A.; Bonnaud, G.; Chiron, A.; Riazuelo, G.
1996-02-01
This report addresses the propagation of an intense laser beam in an unmagnetized plasma, which is relevant to both inertial confinement fusion (ICF) and ultra-high intensity (UHI) pulses. The duration and irradiance of the laser pulses are, respectively, (0.1-10) nanoseconds and (10{sup 13}-10{sup 16}) W/cm{sup 2} in the ICF context, and (0.1-1) picosecond and in excess of 10{sup 18} W/cm{sup 2} in the UHI context. The nonlinear mechanisms of beam self-focusing and filamentation, induced both by the ponderomotive expulsion of charged particles and by the relativistic increase of the electron mass, are specifically studied. Part I deals with the theoretical aspects and part II with the results of two-dimensional simulations. The results have been obtained within the framework of the paraxial approximation and a stationary plasma response. The large set of scenarios that characterize the behavior of a Gaussian beam and of a modulated beam is presented; a synthetic overview of previous theoretical works is also provided. The interplay of two crossing beams is discussed. This report will help to improve the uniformity of laser irradiation in the ICF context and to channel a very intense laser beam over large distances in the UHI context. (authors). 17 refs., 53 figs., 14 tabs.
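As a rough numerical companion to the relativistic self-focusing mechanism mentioned above, the commonly quoted estimate P{sub c} ~ 17 (omega{sub 0}/omega{sub p}){sup 2} GW can be evaluated for typical parameters; the coefficient and the example wavelength and density below are illustrative assumptions, not values taken from this report.

```python
import math

# SI constants
E_CHARGE = 1.602176634e-19   # electron charge (C)
M_E = 9.1093837015e-31       # electron mass (kg)
EPS0 = 8.8541878128e-12      # vacuum permittivity (F/m)
C = 2.99792458e8             # speed of light (m/s)

def plasma_frequency(n_e):
    """Electron plasma frequency (rad/s) for an electron density n_e in m^-3."""
    return math.sqrt(n_e * E_CHARGE**2 / (EPS0 * M_E))

def critical_power_gw(wavelength, n_e):
    """Commonly quoted estimate of the critical power for relativistic
    self-focusing, P_c ~ 17 (w0/wp)^2 GW (the coefficient is approximate)."""
    w0 = 2.0 * math.pi * C / wavelength
    return 17.0 * (w0 / plasma_frequency(n_e))**2

# Illustrative case: 1 um laser, n_e = 1e25 m^-3 (1e19 cm^-3)
print(critical_power_gw(1.0e-6, 1.0e25))  # on the order of a few TW (result in GW)
```

For such parameters the critical power sits in the terawatt range, which is why self-focusing matters precisely for the UHI pulses considered here.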
Energy Technology Data Exchange (ETDEWEB)
Boudjemadi, R.
1996-03-01
The main objectives of this thesis are the direct numerical simulation of natural convection in a vertical differentially heated slot and the improvement of second-order turbulence modelling. A three-dimensional direct numerical simulation code has been developed in order to gain a better understanding of turbulence properties in natural convection flows. This code has been validated in several physical configurations: non-stratified natural convection flows (conduction solution), stratified natural convection flows (double boundary layer solution), and transitional and turbulent Poiseuille flows. For the conduction solution, the turbulent regime was reached at Rayleigh numbers of 1*10{sup 5} and 5.4*10{sup 5}. A detailed analysis of these results has revealed the principal qualities of the available models but has also pointed out their shortcomings. This database has been used to improve the transport models for the triple correlations and to select the turbulent time scales suitable for such flows. (author). 122 refs., figs., tabs., 4 appends.
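For orientation, the control parameter for these natural convection regimes is the Rayleigh number Ra = g*beta*dT*L{sup 3}/(nu*alpha); a minimal sketch with assumed air-like properties (the values below are illustrative, not taken from the thesis):

```python
def rayleigh_number(g, beta, delta_t, length, nu, alpha):
    """Ra = g * beta * dT * L^3 / (nu * alpha): buoyancy forcing versus viscous
    and thermal diffusion, with L the slot width for a differentially heated slot."""
    return g * beta * delta_t * length**3 / (nu * alpha)

# Assumed air-like properties in a 5 cm slot with a 10 K temperature difference
ra = rayleigh_number(g=9.81, beta=3.4e-3, delta_t=10.0,
                     length=0.05, nu=1.5e-5, alpha=2.1e-5)
print(f"Ra = {ra:.2e}")  # around 1e5, i.e. the order of the regimes studied here
```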
Energy Technology Data Exchange (ETDEWEB)
Bruneaux, G.
1996-05-20
Premixed turbulent flame-wall interaction is studied using theoretical and numerical analysis. Laminar interactions are first investigated through a literature review, which characterizes the different interaction configurations and justifies the use of simplified kinetic schemes to study the interaction. Calculations are then performed using Direct Numerical Simulation with a one-step chemistry model and compare well with asymptotic analysis. The flame-wall distances and wall heat fluxes obtained agree with those of the literature. Heat losses decrease the consumption rate, leading to extinction at the maximum of wall heat flux. This is followed by a flame retreat, as the fuel diffuses into the reaction zone, resulting in low unburnt hydrocarbon levels. The turbulent regime is then investigated using two types of Direct Numerical Simulation: 2D variable density and 3D constant density. Similar results are obtained: the local turbulent flame behavior is identical to a laminar interaction, and tongues of fresh gases are expelled from the wall region near zones of quenching. In the 2D simulations, the minimal flame-wall distances and maximum wall heat fluxes are similar to the laminar values. However, the structure of the turbulence in the 3D calculations induces smaller flame-wall distances and higher wall heat fluxes. Finally, a flame-wall interaction model is built and validated. It uses the flamelet approach, in which the flame is described in terms of consumption speed and flame surface density. This model is simplified to produce a law of the wall, which is then included in an averaged CFD code (Kiva2-MB) and validated in an engine calculation. (author) 36 refs.
Energy Technology Data Exchange (ETDEWEB)
Travassos, L
2007-06-15
Concrete is the most common building material and accounts for a large part of the infrastructure a country needs to operate smoothly, including buildings, roads, and bridges. Nondestructive testing is one of the techniques that can be used to assess structural condition; it provides information that conventional evaluation techniques cannot. The main objective of this work is the numerical simulation of a particular nondestructive testing technique: radar. Numerical modeling of the radar assessment of concrete structures makes it possible to predict the behavior of the system and its capacity to detect defects in various configurations. To achieve this objective, electromagnetic wave propagation models in concrete structures were implemented, using various numerical techniques to examine different aspects of radar inspection. First of all, we implemented the finite-difference time-domain method in 3D, which takes into account concrete characteristics such as porosity, salt content and the degree of saturation of the mixture by using Debye models. In addition, a procedure to improve the radiation pattern of bow-tie antennas is presented; this approach combines the Method of Moments with a multi-objective genetic algorithm. Finally, we implemented imaging algorithms which can perform fast and precise characterization of buried targets in inhomogeneous media, using three different methods. The performance of the proposed algorithms is confirmed by numerical simulations. (author)
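The single-pole Debye model used in such FDTD codes to represent dispersive media like concrete can be written eps{sub r}(omega) = eps{sub inf} + (eps{sub s} - eps{sub inf})/(1 + j*omega*tau) - j*sigma/(omega*eps{sub 0}); a small sketch (the material parameters below are assumed for illustration, not fitted concrete data):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)

def debye_permittivity(freq_hz, eps_s, eps_inf, tau, sigma=0.0):
    """Complex relative permittivity of a single-pole Debye medium with an
    optional static-conductivity loss term (engineering exp(+jwt) convention)."""
    omega = 2.0 * math.pi * freq_hz
    eps = eps_inf + (eps_s - eps_inf) / (1.0 + 1j * omega * tau)
    if sigma > 0.0:
        eps -= 1j * sigma / (omega * EPS0)
    return eps

# Assumed parameters for a damp concrete-like medium at 1 GHz
print(debye_permittivity(1.0e9, eps_s=12.0, eps_inf=6.0, tau=1.0e-10, sigma=0.01))
```

The imaginary part grows with moisture and salt content, which is how saturation and salinity end up controlling radar attenuation in the simulations.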
Energy Technology Data Exchange (ETDEWEB)
Petelet, Matthieu; Asserin, Olivier [CEA, DRT / LITEN / DTH / LTA, Bat 611, 91191 Gif sur Yvette Cedex (France); Iooss, Bertrand [CEA, DEN / CAD / DER / SESI / LCFR, Bat 212, 13108 St-Paul-lez-Durance Cedex (France); Petelet, Matthieu; Loredo, Alexandre [ISAT / LRMA, 49 rue Melle Bourgeois, BP 31, 58027 Nevers Cedex (France)
2006-07-01
In this work, sensitivity analysis is used to identify the input data most influential on the variability of the responses (residual stresses and distortions). Classically, sensitivity analysis is carried out locally, which limits its validity domain to a given material. A global sensitivity analysis method is proposed; it covers a material domain as wide as that of the steel series. A probabilistic model describing the variability of the material parameters across the steel series is proposed. The original aspect of this work is the use of Latin hypercube sampling (LHS) of the material parameters, which form the (temperature-dependent) input data of the numerical simulations. Thus, a statistical approach has been applied to welding numerical simulation: LHS sampling of the material properties, then global sensitivity analysis, which has allowed the material parameterization to be reduced. (O.M.)
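The Latin hypercube sampling step can be sketched in a few lines of pure Python: each parameter range is split into n equal strata, each stratum is sampled exactly once, and the strata are randomly paired across parameters (the ranges below are hypothetical, not the thesis's material data):

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube design: each (lo, hi) range in `bounds` is split into
    n_samples equal strata, each stratum is sampled exactly once, and the
    stratum order is shuffled independently for each parameter."""
    rng = random.Random(seed)
    dim = len(bounds)
    samples = [[0.0] * dim for _ in range(n_samples)]
    for j, (lo, hi) in enumerate(bounds):
        # one point per stratum in [0, 1), then shuffle the stratum order
        points = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(points)
        for i in range(n_samples):
            samples[i][j] = lo + points[i] * (hi - lo)
    return samples

# Hypothetical ranges: yield stress (MPa) and thermal conductivity (W/m/K)
design = latin_hypercube(10, [(200.0, 600.0), (20.0, 50.0)])
print(len(design))  # 10 parameter sets, one welding simulation each
```

Compared with plain Monte Carlo sampling, this stratification guarantees coverage of each parameter's full range even with few samples, which is what makes a global sensitivity analysis affordable when every sample is an expensive welding simulation.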
Energy Technology Data Exchange (ETDEWEB)
El-Ahmar, Walid; Jullien, Jean-Francois [INSA-Lyon, LaMCoS, CNRS UMR 551, 20 Avenue Albert Einstein, 69621 Villeurbanne, (France); Gilles, Philippe [AREVA NP, 92084 Paris La Defense, (France); Taheri, Said [EDF, 92141 Clamart, (France); Boitout, Frederic [ESI-GROUP, 69458 Lyon, (France)
2006-07-01
Welding numerical simulation is considered one of the most nonlinear problems in mechanics on account of the great number of parameters required. The analysis of the robustness of welding numerical simulation is a current question, the expectation being to specify simulation procedures that guarantee the reliability of the numerical result. In this work, the 'uncertainty-sensitivity' aspect attributable to the different parameters that occur in the simulation of stainless steel 316L structures welded by the TIG process has been quantified: namely the mechanical and thermophysical parameters, the types of modeling, the adopted behaviour laws, and the modeling of the heat input. (O.M.)
Energy Technology Data Exchange (ETDEWEB)
El-Ahmar, W
2007-04-15
Numerical welding simulation is considered to be one of those mechanical problems that have a high level of nonlinearity and require good knowledge in various scientific fields. 'Robustness analysis' is a suitable tool to control the quality and guarantee the reliability of numerical welding results. The robustness of a numerical simulation of welding is related to the sensitivity of the results to the modelling assumptions and input parameters. A simulation is said to be robust if the result it produces is not very sensitive to uncertainties in the input data. The term 'robust' was coined in statistics by G.E.P. Box in 1953. Various definitions of greater or lesser mathematical rigor are possible for the term, but in general, referring to a statistical estimator, it means 'insensitive to small deviations from the idealized assumptions for which the estimator is optimized'. In order to evaluate the robustness of numerical welding simulation, sensitivity analyses on thermomechanical models and parameters have been conducted. In the first step, we search for a reference solution which gives the best agreement with the thermal and mechanical experimental results. The second step consists in determining, through numerical simulations, which parameters have the largest influence on the residual stresses induced by the welding process. The residual stresses were predicted using the finite element method with Code-Aster of EDF and SYSWELD of ESI-GROUP. A robustness analysis can prove heavy and expensive, making it an unjustifiable route. However, only with the development of such analysis tools can predictive methods become useful to industry. (author)
Energy Technology Data Exchange (ETDEWEB)
Rambaud, P.
2001-11-01
This numerical thesis addresses the behavior of solid particles embedded in a non-homogeneous, non-isotropic turbulent gas flow such as the one taking place in a plane channel. The turbulence is generated through the direct numerical simulation of the Navier-Stokes equations, discretized by finite-difference operators that are formally second order in time and space. This Eulerian description of the incompressible gas flow is completed by a Lagrangian formulation allowing solid particles to be followed. In this formulation, the forces considered are the nonlinear drag and the Saffman lift, both corrected for wall effects. Furthermore, depending on the test cases studied, particle bouncing on the wall, gravity or electrostatic forces are taken into account. A three-dimensional Hermitian interpolation reflects the special care spent on the determination of the fluid velocity at the solid particle location. The first application of the code is dedicated to solid particle dispersion inside a horizontal channel, or in a channel operated in weightlessness. The huge amount of data from the Lagrangian tracking is reduced to the integral times of the turbulence seen by the solid particles along their trajectories. These times are crucial to Lagrangian methods whose numerical cost is low compared with the one used in the present study. Among those methods, the ones based on Langevin-type equations have the best potential to handle industrial problems. Nevertheless, such methods need to rebuild the fluid velocity fluctuations seen by the solid particles along their trajectories. This technique can reproduce the crossing-trajectory effect, the inertia effect and the continuity effect only if the integral times of the turbulence seen are known. Until now, those times were estimated from a semi-empirical correlation derived from direct numerical simulations of homogeneous and isotropic turbulence (Wang and Stock 1993).
However, despite these restrictive conditions, this correlation has also been applied directly to non-homogeneous and non-isotropic turbulence. In our study, we check the relevance of such a direct application. For this purpose, over the half-channel, all the moving Eulerian integral times and the fluid Lagrangian integral times are computed in order to estimate the integral times of the turbulence seen. These estimates are compared with direct computations of the integral times of the turbulence seen, obtained from the Lagrangian tracking of a huge number of particles. Besides the dispersion study, this code is also presented in a configuration dealing with dispersed flows in which the particles are no longer passive but act on the turbulence. The invariant of the simulation being the Reynolds number based on the bulk velocity, a forcing scheme keeping the overall flow rate constant is used. In spite of the validation of this scheme in single-phase turbulence, it is not yet able to work efficiently in vertical turbulent downward or upward coupled two-phase flows. This problem is not met in weightlessness, for which the macroscopic effects of the solid particles on the turbulence are presented. (authors)
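The integral times discussed above are obtained by integrating the autocorrelation of the velocity seen along particle trajectories; a minimal sketch of that reduction, with a synthetic exponentially correlated signal standing in for DNS data:

```python
import math
import random

def integral_time_scale(u, dt):
    """Integral time scale: integrate the normalized autocorrelation of the
    fluctuating signal u, sampled at step dt, up to its first zero crossing."""
    n = len(u)
    mean = sum(u) / n
    f = [v - mean for v in u]
    var = sum(x * x for x in f) / n
    t_int = 0.0
    for lag in range(n):
        r = sum(f[i] * f[i + lag] for i in range(n - lag)) / ((n - lag) * var)
        if r <= 0.0:
            break  # truncate at the first zero crossing of the correlation
        t_int += r * dt
    return t_int

# Synthetic stand-in signal: discrete Ornstein-Uhlenbeck process, time scale tau
rng = random.Random(1)
dt, tau = 0.01, 0.2
a = math.exp(-dt / tau)
u, x = [], 0.0
for _ in range(20000):
    x = a * x + math.sqrt(1.0 - a * a) * rng.gauss(0.0, 1.0)
    u.append(x)
print(integral_time_scale(u, dt))  # statistically close to tau = 0.2
```

In the thesis the signal would be the fluid velocity sampled along each particle trajectory, averaged over many particles per wall-distance bin.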
Energy Technology Data Exchange (ETDEWEB)
Hamdi, A
2004-07-01
The new generation of light sources based on SASE free-electron lasers driven by linacs operates with electron beams of high beam current and duty cycle. This is especially true for superconducting machines like TTF 2 and the X-ray FEL, under construction or planned at DESY. Elaborate fast protection systems are required not only to protect the machine from electron beams hitting and destroying the vacuum chamber, but also to prevent the machine from running at high loss levels, dangerous for components like the FEL undulator. This document presents the different protection systems currently under construction for TTF 2. The very fast systems, based on transmission measurements and distributed loss detection monitors, are described in detail. This description includes the fast electronics used to collect and transmit the different interlock and status signals: analog-to-digital converters, DSPs and FPGAs, interfaces, and the toroid protection system (TPS) card. The implementation and validation (simulation and tests) of the TPS card at DESY is presented.
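The transmission-based protection principle reduces to comparing the charge measured by toroids upstream and downstream of a section and tripping when the loss exceeds a budget; a deliberately simplified sketch (the threshold value and return convention are assumptions for illustration, not the TPS specification):

```python
def beam_loss_interlock(q_in, q_out, max_rel_loss=0.01):
    """Transmission-based interlock: return True (trip) when the relative
    charge loss between two toroid readings exceeds the allowed budget."""
    if q_in <= 0.0:
        return True  # no valid reference charge: inhibit the beam
    return (q_in - q_out) / q_in > max_rel_loss

print(beam_loss_interlock(1.00, 0.95))  # True: a 5% loss exceeds the 1% budget
```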
Energy Technology Data Exchange (ETDEWEB)
Massol, A.
2004-02-15
The application of statistically averaged two-fluid models to the simulation of complex industrial two-phase flows requires the development of adequate models for the drag force exerted on the inclusions and for the interfacial heat exchange. This task becomes problematic at high volume fractions of the dispersed phase. The quality of the simulation strongly depends upon the interfacial exchange terms, starting with the steady drag force. For example, an accurate modelling of the drag force is crucial to simulate the expansion of dense fluidized beds. Most models used to study the exchange terms between particles and fluids are based on the interaction between an isolated particle and the surrounding gas. Those models are clearly not adequate when the volume fraction of particles increases and particle-particle interactions become important. Studying such cases is a complex task because of the multiple possible configurations. While the interaction between an isolated sphere and a gas depends only on the particle size and the slip velocity between gas and particle, the interaction between a cloud of particles and a gas depends on many more parameters: the size and velocity distributions of the particles and their relative positions. Even if the particles keep fixed relative positions, there is an infinite number of combinations to construct such an array. The objective of the present work is to perform steady and unsteady simulations of the flow in regular arrays of fixed particles in order to analyze the influence of the size and distribution of the spheres on drag force and heat transfer (the array of spheres can be either monodisperse or bidisperse). Several authors have studied the drag exerted on the spheres, but only for low Reynolds numbers and/or solid volume fractions close to the packed limit; moreover, some discrepancies are observed between the different studies.
On top of that, all existing studies are limited to steady flows, and do not deal with heat transfer or polydispersion. First, the steady viscous drag exerted on the spheres of face-centered cubic, simple cubic and tetragonal arrays is evaluated. This allows the influence of the sphere distribution and solid volume fraction on the drag coefficient to be analyzed. Next, the influence of Reynolds number and solid volume fraction on heat transfer from the spheres to the surrounding fluid in face-centered cubic arrays is studied. Finally, the history effects on the total force exerted on the inclusions and on the heat transfer between the inclusions and the surrounding fluid are studied. (author)
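The isolated-sphere baseline that such array simulations extend is typically the classical Schiller-Naumann correlation from the literature (not a result of this thesis); a short sketch:

```python
import math

def drag_coefficient(re):
    """Schiller-Naumann drag coefficient for an isolated sphere:
    Cd = 24/Re * (1 + 0.15 Re^0.687) for Re < 1000, else ~0.44."""
    if re <= 0.0:
        raise ValueError("Reynolds number must be positive")
    if re < 1000.0:
        return 24.0 / re * (1.0 + 0.15 * re**0.687)
    return 0.44

def drag_force(rho_f, mu, d, u_slip):
    """Steady drag force (N) on a single sphere of diameter d (m) in a uniform
    flow with slip velocity u_slip (m/s)."""
    re = rho_f * abs(u_slip) * d / mu
    area = math.pi * d**2 / 4.0
    return 0.5 * rho_f * drag_coefficient(re) * area * u_slip * abs(u_slip)

print(drag_coefficient(1.0))  # about 27.6: Stokes value 24 plus the inertial correction
```

At significant solid volume fractions, the array simulations of the thesis quantify how far the actual force departs from this single-sphere law.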
Energy Technology Data Exchange (ETDEWEB)
Guillemaud, V
2007-03-15
This thesis is devoted to the modelling and numerical simulation of liquid-vapor flows. In order to describe these phase-transition flows, a two-fluid two-pressure approach is considered. This description of the liquid-vapor mixture is associated with the seven-equation model introduced by Baer and Nunziato. This work investigates the properties of this model in order to simulate the phase-transition flows occurring in nuclear engineering. First, a theoretical thermodynamic framework is constructed to describe the liquid-vapor mixture. Within this framework, various modelling choices are suggested for the interaction terms between the phases. These closure laws comply with an entropy inequality. The mathematical properties of the model are thereafter examined. The convective part is associated with a nonconservative hyperbolic system. We first focus on the definition of its weak solutions. Several flow regimes for the two-phase mixture derive from this analysis; such regimes are analogous to the torrential and fluvial regimes of the shallow-water equations. Furthermore, we establish the linear and nonlinear stability of the liquid-vapor equilibrium. Finally, the implementation of a turbulence model and the introduction of a reconstruction process for the interfacial area are investigated in order to refine the description of the interfacial transfers. Using a fractional-step approach, a Finite Volume method is then constructed to simulate this model. First, various nonconservative adaptations of standard Riemann solvers are developed to approximate the convective part. Unlike in the classic nonconservative framework, these schemes converge towards the same solution. Furthermore, a new relaxation scheme is proposed to approximate the interfacial transfers. Equipped with these schemes, the whole numerical method preserves the liquid-vapor equilibria.
Using this numerical method, a careful comparison between the one- and two-pressure two-fluid models is presented. The numerical simulation of strongly unbalanced liquid-vapor flows is finally applied to the safety analysis of pressurized water nuclear reactors. (author)
A voxelization approach to navigate through nested geometries
Harrison, Brent Andrew
2016-01-01
High energy physics experiment software typically implements a detailed description of the geometry of the relevant detector. As modern detectors increase in complexity, modelling them becomes more challenging. Typically such models are built as a nested hierarchy of O(10000) volumes reaching a depth of 10-20. It is desirable to develop data structures and algorithms which allow fast and efficient navigation through a given detector geometry model. We investigate the feasibility of voxelisation techniques to this end.
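The idea being investigated can be illustrated with a toy uniform voxel grid: each volume's axis-aligned bounding box is rasterized into the voxels it overlaps, so locating a point only requires testing the short candidate list stored in one voxel instead of walking the full hierarchy (all names and geometry below are hypothetical):

```python
class VoxelGrid:
    """Uniform voxel grid over an axis-aligned bounding box; each voxel stores
    the ids of the volumes whose bounding boxes overlap it."""
    def __init__(self, lo, hi, res):
        self.lo, self.hi, self.res = lo, hi, res
        self.cells = {}  # (i, j, k) -> list of volume ids

    def _cell(self, p):
        # clamp to the grid so boundary points map to a valid voxel
        return tuple(
            min(self.res - 1,
                max(0, int((p[k] - self.lo[k]) / (self.hi[k] - self.lo[k]) * self.res)))
            for k in range(3))

    def insert(self, vol_id, box_lo, box_hi):
        i0, i1 = self._cell(box_lo), self._cell(box_hi)
        for i in range(i0[0], i1[0] + 1):
            for j in range(i0[1], i1[1] + 1):
                for k in range(i0[2], i1[2] + 1):
                    self.cells.setdefault((i, j, k), []).append(vol_id)

    def candidates(self, p):
        """Short list of volumes that could contain point p."""
        return self.cells.get(self._cell(p), [])

grid = VoxelGrid((0, 0, 0), (10, 10, 10), res=8)
grid.insert("tracker", (1, 1, 1), (3, 3, 3))
grid.insert("calorimeter", (5, 5, 5), (9, 9, 9))
print(grid.candidates((2, 2, 2)))  # ['tracker']
```

A real implementation would refine this with per-node voxelisation and exact containment tests on the candidates, but the lookup cost already becomes independent of the hierarchy depth.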
Energy Technology Data Exchange (ETDEWEB)
Thiam, Ch. O
2003-06-01
In radiotherapy, it is essential to have a precise knowledge of the dose delivered to the target volume and the neighbouring critical organs. To be usable clinically, the calculation models must take into account the exact characteristics of the beams used and the densities of the tissues. Today, sophisticated irradiation techniques allow a more precise assessment of the dose, with a better knowledge of its distribution. This report therefore details a simulation of the irradiation head of the SL-ELEKTA-20 accelerator in electron mode and a dosimetric study of a water phantom. This study is carried out with the GATE Monte Carlo simulation code, adapted to medical physics applications; the results are compared with the data obtained by the 'Jean Perrin' anticancer center on a similar accelerator. (author)
Energy Technology Data Exchange (ETDEWEB)
Furstoss, Ch
2006-11-15
This PhD study aims to determine the feasibility of designing and developing, for photon fields, an anthropomorphic phantom equipped with detectors in order to evaluate the effective dose E at workplaces. First of all, the energy deposited within the organs is calculated using the MCNPX Monte Carlo code, in order to determine the detection positions within the different organs. Then, to decrease the number of detection positions, the contribution of each organ to the effective dose is studied. Finally, the characteristics of the detectors to insert and of the phantom to use are deduced. The results show that 24 or 23 detection positions, according to the wT values (publication 60 or the new recommendations of the ICRP), give an estimation of E with an uncertainty of {+-}15% from 50 keV to 4 MeV. Moreover, the interest of such an instrument is underlined by comparing the estimation of E by the personal dose equivalent Hp with that by the instrumented phantom when the phantom is irradiated by point sources (a worker in front of a glove box, for example). Last, after determining the detector and phantom characteristics, two types of detectors and one type of phantom are selected; for the detectors mainly, further developments are necessary. Following this study, the characterization and adaptation of the detectors to the project would be of interest. Furthermore, extending the study to mixed photon-neutron fields would be required to meet the needs of the radiological protection community. (author)
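The quantity being estimated is the ICRP effective dose E = sum over tissues T of wT * HT, a tissue-weighted sum of equivalent doses; a minimal sketch (the wT subset below is illustrative only; consult the ICRP publications for the authoritative values):

```python
# Illustrative tissue weighting factors (assumed subset for this sketch;
# the ICRP publications define the authoritative, complete w_T set)
W_T = {"lung": 0.12, "stomach": 0.12, "colon": 0.12,
       "gonads": 0.08, "thyroid": 0.04}

def effective_dose(equivalent_doses):
    """E = sum_T w_T * H_T, with H_T the equivalent dose (Sv) in tissue T."""
    return sum(W_T[t] * h for t, h in equivalent_doses.items())

# Example: 1 mSv to the lungs and 2 mSv to the thyroid
print(effective_dose({"lung": 1.0e-3, "thyroid": 2.0e-3}))  # 2.0e-4 Sv
```

The study's reduction to 23 or 24 detection positions amounts to choosing where the HT terms of this sum can be measured with acceptable accuracy.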
2010-06-01
... precomputed turbulence fields in a built-up area, obtained with a computer model based on computational fluid dynamics (CFD) at high resolution and high ...
Vectored Thrust Digital Flight Control for Crew Escape. Volume 2.
1985-12-01
no. 24. Lecrique, J., A. Rault, M. Tessier and J.L. Testud (1978), "Multivariable Regulation of a Thermal Power Plant Steam Generator," presented... and Extended Kalman Observers," presented at the Conf. Decision and Control, San Diego, CA. Testud, J.L. (1977), Commande Numerique Multivariable du
An atlas of active enhancers across human cell types and tissues
Andersson, Robin; Gebhard, Claudia; Miguel-Escalada, Irene; Hoof, Ilka; Bornholdt, Jette; Boyd, Mette; Chen, Yun; Zhao, Xiaobei; Schmidl, Christian; Suzuki, Takahiro; Ntini, Evgenia; Arner, Erik; Valen, Eivind; Li, Kang; Schwarzfischer, Lucia; Glatz, Dagmar; Raithel, Johanna; Lilje, Berit; Rapin, Nicolas; Bagger, Frederik Otzen; Jørgensen, Mette; Andersen, Peter Refsing; Bertin, Nicolas; Rackham, Owen; Burroughs, A Maxwell; Baillie, J Kenneth; Ishizu, Yuri; Shimizu, Yuri; Furuhata, Erina; Maeda, Shiori; Negishi, Yutaka; Mungall, Christopher J; Meehan, Terrence F; Lassmann, Timo; Itoh, Masayoshi; Kawaji, Hideya; Kondo, Naoto; Kawai, Jun; Lennartsson, Andreas; Daub, Carsten O; Heutink, Peter; Hume, David A; Jensen, Torben Heick; Suzuki, Harukazu; Hayashizaki, Yoshihide; Müller, Ferenc; Forrest, Alistair R R; Carninci, Piero; Rehli, Michael; Sandelin, Albin; Clevers, Hans
2014-01-01
Enhancers control the correct temporal and cell-type-specific activation of gene expression in multicellular eukaryotes. Knowing their properties, regulatory activity and targets is crucial to understand the regulation of differentiation and homeostasis. Here we use the FANTOM5 panel of samples, cov...
DEFF Research Database (Denmark)
Suzuki, Harukazu; Forrest, Alistair R R; van Nimwegen, Erik
2009-01-01
Using deep sequencing (deepCAGE), the FANTOM4 study measured the genome-wide dynamics of transcription-start-site usage in the human monocytic cell line THP-1 throughout a time course of growth arrest and differentiation. Modeling the expression dynamics in terms of predicted cis-regulatory sites...
An integrated expression atlas of miRNAs and their promoters in human and mouse
DEFF Research Database (Denmark)
de Rie, Derek; Abugessaisa, Imad; Alam, Tanvir
2017-01-01
MicroRNAs (miRNAs) are short non-coding RNAs with key roles in cellular regulation. As part of the fifth edition of the Functional Annotation of Mammalian Genome (FANTOM5) project, we created an integrated expression atlas of miRNAs and their promoters by deep-sequencing 492 short RNA (sRNA) libr...
Energy Technology Data Exchange (ETDEWEB)
Seignole, V
2005-07-01
This report presents the thesis work carried out under the direction of Jean-Michel Ghidaglia (thesis director, ENS-Cachan) and Anela Kumbaro (tutor, CEA) within the framework of the modeling of two-phase flows with the OAP code. The report consists of two parts of unequal size: the first part concentrates on aspects related exclusively to two-phase flows, while the second is devoted to the study of a numerical problem inherent to the resolution of two-phase flow systems, but whose scope is broader. (author)
Energy Technology Data Exchange (ETDEWEB)
Omnes, P
1999-01-25
This work is dedicated to the study of the behaviour of a magnetically confined plasma excited by a purely sinusoidal electric current delivered by an antenna. The response of the electrons to the electromagnetic field is considered linear, whereas the ions of the plasma are represented by a non-relativistic Vlasov equation. In order to avoid transients, the coupled Maxwell-Vlasov equations are solved in periodic mode and in a bounded domain. An equivalent electric conductivity tensor has been defined; this tensor is a linear operator that links the electric current generated by the movement of the particles to the electromagnetic field. Theoretical considerations ensure the existence and uniqueness of a periodic solution to the Vlasov equations and of a solution to the Maxwell equations in harmonic mode. The periodic system of equations has been solved using an iterative method. The application of this method to the simulation of an isotopic separation device based on ion cyclotron resonance has shown that convergence is reached in a few iterations and that the solution is valid. Furthermore, a method based on a finite-volume formulation of the Maxwell equations in the time domain is presented. Two new variables are defined in order to better take into account Gauss' law and the conservation of magnetic flux; the new system is still hyperbolic. The parallelization of the process has been successfully carried out. (A.C.)
Energy Technology Data Exchange (ETDEWEB)
Colette, A
2005-12-15
Closing the tropospheric ozone budget requires a better understanding of the role of transport processes from the major reservoirs: the planetary boundary layer and the stratosphere. Case studies lead to the identification of the mechanisms involved as well as their efficiency. However, their global impact on the budget must be addressed on a climatological basis. This manuscript is thus divided into two parts. First, we present case studies based on ozone LIDAR measurements performed during the ESCOMPTE campaign. This work consists in a data analysis investigation by means of a hybrid Lagrangian study involving global meteorological analyses, Lagrangian particle dispersion computation, and mesoscale, chemistry-transport, and Lagrangian photochemistry modeling. Our aim is to document the amount of observed ozone variability related to transport processes and, when appropriate, to infer the role of tropospheric photochemical production. Second, we propose a climatological analysis of the respective impacts of transport from the boundary layer and from the tropopause region on the tropospheric ozone budget. A multivariate analysis is presented and compared to a trajectography approach. Once validated, this algorithm is applied to the whole database of ozone profiles collected above Europe during the past 30 years in order to discuss the seasonal, geographical and temporal variability of transport processes as well as their impact on the tropospheric ozone budget. The variability of turbulent mixing and its impact on the persistence of tropospheric layers are also discussed. (author)
Energy Technology Data Exchange (ETDEWEB)
Sigrist, J.F
2004-11-15
The present work deals with the numerical simulation of a coupled fluid/structure problem with fluid free surface. A generic coupled fluid/structure system is defined, on which a linear problem (modal analysis) and a non-linear problem (temporal analysis) are stated. In the linear case, a strong coupled method is used. It is based on a finite element approach of the structure problem and a finite or a boundary element approach of the fluid problem. The coupled problem is formulated in terms of pressure and displacement, leading to a non-symmetric problem which is solved with an appropriate algorithm. In the non-linear case, the structure problem is described with non-linear equations of motion, whereas the fluid problem is modeled with the Stokes equations. The numerical resolution of the coupled problem is based on a weak coupling procedure. The fluid problem is solved with a finite volume technique, using a moving mesh technique to adjust the structure motion, a VOF method for the description of the free surface and the PISO algorithm for the time integration. The structure problem is solved with a finite element technique, using an explicit/implicit time integration algorithm. A procedure is developed in order to handle the coupling in space (fluid forces and structure displacement exchanges between fluid and structure mesh, fluid re-meshing) and in time (staggered explicit algorithm, dynamic filtering of numerical oscillations). The non linear coupled problem is solved using a CFD code, whose use for FSI problem is validated with a benchmark presented in this work. A comparison is proposed between numerical results and analytical solution for two elementary fluid problems. The validation process can be applied for any CFD numerical code. A numerical study is then proposed on the generic coupled case in order to describe the fluid/structure interaction phenomenon (added mass, displaced mass, mode coupling, influence of structural non-linearity). 
An industrial application of the finite element coupling techniques is presented. A modal analysis is performed on a simplified model of a nuclear reactor; this example highlights the importance of fluid/structure effects in the industrial case. (author)
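The staggered weak-coupling procedure described above can be illustrated on a single-degree-of-freedom toy problem. The sketch below is not from the thesis: it replaces the fluid solver by a pure added-mass force, lagged by one time step as in an explicit partitioned scheme, and recovers the added-mass frequency shift omega = sqrt(k/(m + m_a)).

```python
import math

def staggered_fsi(m=1.0, k=(2.0 * math.pi) ** 2, m_add=0.2, dt=1e-3, t_end=10.0):
    """Staggered (weakly coupled) scheme for a 1-DOF structure immersed in a
    fluid whose only effect here is an added-mass force. The 'fluid solver'
    evaluates its force with the structure acceleration lagged by one step,
    as in an explicit partitioned FSI procedure. Returns the measured period."""
    x, v, a = 1.0, 0.0, 0.0
    t, crossings = 0.0, []
    while t < t_end:
        f_fluid = -m_add * a              # fluid step: lagged added-mass force
        a = (f_fluid - k * x) / m         # structure step (symplectic Euler)
        v += a * dt
        x_old = x
        x += v * dt
        t += dt
        if x_old > 0.0 >= x:              # downward zero crossing of x(t)
            crossings.append(t)
    periods = [b - c for c, b in zip(crossings, crossings[1:])]
    return sum(periods) / len(periods)
```

With m_add = 0.2*m the measured period matches the coupled value 2*pi*sqrt((m + m_a)/k) ≈ 1.095 rather than the in-vacuum period of 1, which is the added-mass effect the abstract refers to.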
Energy Technology Data Exchange (ETDEWEB)
Nourtier-Mazauric, E.
2003-03-15
This thesis presents a thermodynamic and kinetic model of interactions between a fluid and ideal solid solutions represented by several end-members. The reaction between a solid solution and the aqueous solution results from the competition between the stoichiometric dissolution of the initial solid solution and the co-precipitation of the least soluble solid solution in the fluid at the considered time. This model was implemented in ARCHIMEDE, a computer code for reactive transport in porous media, and then applied to various examples. In the case of binary solid solutions, a graphical method based on the end-member chemical potentials allows the compositions of the precipitating solid solutions to be determined. The resulting program can notably be used to model the diagenesis of clayey or carbonate oil reservoirs, or the dispersion of pollutants in the ground. (author)
Energy Technology Data Exchange (ETDEWEB)
Kourdey, A.
2002-09-15
The determination of the sliding surface of a slope (dam, natural slope, ...) is one of the important and complicated problems in geotechnics. Stability analysis by limit equilibrium methods, such as the method of slices, is the most widely used approach. These methods determine a safety factor for a geometrically defined failure surface. Well suited to homogeneous media and extensively developed, they nevertheless do not incorporate the basic stress-strain relations of mechanics. Numerical methods are better adapted to media of greater complexity (effect of water, seismicity, fracturing, ...), but they are seldom used to determine a sliding surface and a safety factor. Each family offers appreciable advantages for slope stability analysis. For this reason, we have developed a method combining the advantages of numerical methods with those of limit equilibrium, yielding a slip surface determined from the computed stresses. This slip surface may be imposed or, better, optimized, thus providing a minimal safety factor. Operations research methods are used to obtain this surface: level-by-level search, dynamic search, or both at the same time. We integrated these developments in an existing computer code based on the finite difference method, known as FLAC. The stresses are determined for linear and non-linear behaviour. Interfaces and graphic tools were also produced to facilitate the stability analysis. The validity of this approach was assessed on a standard slope case, for which we analyzed and compared the results with limit equilibrium methods. A parametric study shows that this approach accounts for the different parameters influencing stability. Particular attention was also given to applications on real cases presenting slopes of different natures (dams, mining slopes, ...). (author)
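For reference, the limit equilibrium baseline discussed above fits in a few lines. This is the ordinary (Fellenius) method of slices in its simplest form, not the FLAC-coupled procedure of the thesis; the slice weights, base angles and base lengths are assumed to be given for a circular slip surface.

```python
import math

def fellenius_fs(slices, c, phi_deg):
    """Ordinary (Fellenius) method of slices.
    slices: iterable of (W, alpha_deg, l) = slice weight, base inclination
    and base length; c: cohesion; phi_deg: friction angle.
    FS = sum(c*l + W*cos(alpha)*tan(phi)) / sum(W*sin(alpha))."""
    tan_phi = math.tan(math.radians(phi_deg))
    resisting = driving = 0.0
    for W, alpha_deg, l in slices:
        a = math.radians(alpha_deg)
        resisting += c * l + W * math.cos(a) * tan_phi  # shear resistance on base
        driving += W * math.sin(a)                      # driving component of weight
    return resisting / driving
```

For a purely cohesive (phi = 0) single slice of weight 10 at 45 degrees with c = 10 and unit base length, FS = 10/(10*sin 45) = sqrt(2), a convenient sanity check.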
Energy Technology Data Exchange (ETDEWEB)
Choi, Y.J.
2005-12-15
In the case of a PWR severe accident (Loss of Coolant Accident, LOCA), the inner containment ambient properties, such as temperature, pressure and gas species concentrations due to the released steam condensation, are the main factors that determine the risk. For this reason, their distributions should be known accurately, but the complexity of the geometry and the computational costs are strong limitations on conducting full three-dimensional numerical simulations. An alternative approach is presented in this thesis, namely the coupling between a lumped-parameter model and a CFD model. The coupling is based on the introduction of a 'heat transfer function' between both models, and large decreases in CPU cost are expected. First of all, wall condensation models, such as the Uchida or the Chilton-Colburn models which are implemented in the code CAST3M/TONUS, are investigated. They are examined through steady-state calculations using the code TONUS-0D, based on lumped parameter models. The temperature and the pressure within the inner containment are compared with those reported in the archival literature. In order to build the 'heat transfer function', natural convection heat transfer is then studied using the code CAST3M for a partitioned cavity which represents a simplified geometry of the reactor containment. As a first step, only two-dimensional natural convection heat transfer without condensation is investigated. Both the incompressible Boussinesq fluid flow model and the asymptotic low-Mach model are considered for solving the time-dependent conservation equations. The SUPG finite element method and an implicit scheme are applied for the numerical discretization. The computed results are qualified by the second-order Richardson extrapolation method, which provides the so-called 'exact', i.e. grid-size-independent, values. The computations are also validated through non-partitioned cavity case studies. 
The discussion is focused on heat transfer characteristics such as the variations of the average Nusselt number (Nu-bar) versus the dimensionless thickness of the partition (0.01 {<=} {gamma} {<=} 0.2) and the conductivity ratio of the partition wall to the fluid (1 {<=} {sigma} {<=} 10{sup 5}). Finally, a 'heat transfer function' is suggested, based upon the thermal resistance of the partition wall. The validity of the model is assessed through comparisons with 'half-cavity' simulations. (author)
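The second-order Richardson extrapolation used above to obtain grid-size-independent values is a one-line formula. A minimal sketch (with an observed-order check from three nested grids, illustrative numbers only):

```python
import math

def richardson(f_h, f_h2, p=2):
    """Extrapolate toward the grid-independent value from solutions computed
    on grids of size h and h/2, for a method of order p:
        f* ~= f_h2 + (f_h2 - f_h) / (2**p - 1)."""
    return f_h2 + (f_h2 - f_h) / (2.0 ** p - 1.0)

def observed_order(f_h, f_h2, f_h4):
    """Observed convergence order from three nested grids (h, h/2, h/4)."""
    return math.log2((f_h - f_h2) / (f_h2 - f_h4))
```

For a quantity behaving as f(h) = f* + C*h**2, e.g. f* = 4.5 and C = 3 sampled at h = 0.1 and 0.05, the extrapolation returns f* exactly and the observed order is 2.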
Energy Technology Data Exchange (ETDEWEB)
Libert, M
2007-09-15
It is indispensable to guarantee the integrity of PWR reactor vessels under accident conditions: in this context, the understanding and modelling of the mechanisms of brittle fracture of steels are decisive elements in the difficult estimation of the service life of reactor vessels. Local approach models of cleavage fracture are among the main tools for predicting the fracture strength of low alloy steels. In this work, the effect of stress heterogeneities is taken into account in a local criterion for cleavage initiation. The results of microstructure calculations are used to propose a statistical description of the evolution of the local stress distribution. This statistical approach leads to a local approach model of fracture depending both on the mechanical heterogeneities and on the size distribution of the defects. The behaviour of the material and its evolution are characterized at the microscopic and macroscopic scales in the temperature range [25 C, -196 C]. Simple tensile tests, strain-rate and temperature jump tests, and toughness tests were carried out. A micro-mechanical behaviour model describing the plastic behaviour below the transition temperature T{sub a} is proposed. The behaviour law is based on the deformation mechanisms described in the literature and is identified by an inverse method from the mechanical tests. TEM observations and the characterization of the thermally activated behaviour allow several parameters of the model to be determined. Simulations are carried out in order to model the principal stress distributions {sigma}{sub 1} in two bainite microstructures corresponding to the elementary volume of the local approach to fracture. The effects of temperature and triaxiality on the evolution of the heterogeneities are characterized. 
A distribution function describing the local values of {sigma}{sub 1} in terms of the mean principal and equivalent stresses {sigma}{sub 1} and {sigma}{sub m} in the microstructure is proposed. This function is used to formulate a local approach model of fracture integrating both the distribution of critical defect sizes and the distributions of {sigma}{sub 1}. It is shown that in some cases the dispersion of the local stresses is sufficient to explain the dispersion of the fracture stresses at the scale of the elementary volume. The dispersions of the fracture stresses are in agreement with those given by the Beremin model. Taking the mechanical heterogeneities into account introduces a dependency of the fracture probability on temperature, strain and triaxiality. (O.M.)
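The Beremin model cited above is a weakest-link description of cleavage. A minimal sketch of its two ingredients, the Weibull stress and the failure probability, with illustrative numbers rather than the thesis' identified parameters:

```python
import math

def weibull_stress(sigma_1, volumes, v0, m):
    """Weibull stress over the plastic zone:
        sigma_w = (sum_i sigma1_i**m * V_i / V0) ** (1/m)
    sigma_1: maximum principal stress per cell, volumes: cell volumes,
    v0: reference volume, m: Weibull shape parameter."""
    return (sum(s ** m * v for s, v in zip(sigma_1, volumes)) / v0) ** (1.0 / m)

def failure_probability(sigma_w, sigma_u, m):
    """Beremin weakest-link cleavage probability:
        Pf = 1 - exp(-(sigma_w / sigma_u) ** m)."""
    return 1.0 - math.exp(-((sigma_w / sigma_u) ** m))
```

Two checks: a uniform stress over exactly the reference volume gives sigma_w equal to that stress, and Pf reaches 1 - 1/e (about 63%) when sigma_w equals the scale parameter sigma_u.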
Energy Technology Data Exchange (ETDEWEB)
Grismayer, T
2006-12-15
This work is a theoretical and numerical study of high energy ion acceleration in laser-created plasma expansion. The ion beams produced on the rear side of an irradiated foil have characteristics (low divergence, wide spectra) which distinguish them from those coming from the front side. The discovery of these beams has renewed interest in applications such as proton therapy or proton radiography. The ion acceleration is driven by a self-consistent electrostatic field due to the charge separation between ions and hot electrons. In the first part of this dissertation, we present the fluid theoretical model and the hybrid code which simulates the plasma expansion. The numerical simulation of a recent experiment probing the dynamics of the electric field by proton radiography validates the theoretical model. The second part deals with the influence of an initial ion density gradient on the acceleration efficiency. We establish a model which relates the acceleration to the plasma dynamics, and more precisely to the wave breaking of the ion flow. The numerical results, which predict a strong decrease of the maximum ion energy for large gradient lengths, are in agreement with the experimental data. The Boltzmann equilibrium for the electrons assumed in the first part is called into question in the third part, where we adopt a kinetic description of the electrons. The new version of the code can measure the deviation from the Boltzmann law, which does not strongly modify the maximum energy that the ions can reach. (author)
Energy Technology Data Exchange (ETDEWEB)
Vautrin-Ul, Ch.; Chausse, A. [Evry Univ., Laboratoire Analyses et Environnement, UMR 8587 CEA-CNRS, 91 (France)]; Stafiej, J. [Institute of Physical Chemistry, Polish Academy of Sciences, Warsaw (Poland)]; Badiali, J.P. [Universite Pierre et Marie Curie, LECA-ERI, UMR 7575 ENSCP, 75 - Paris (France)]
2005-07-01
The safety of radioactive waste disposal requires detailed knowledge of the ageing of waste packages in a corrosive environment. Corrosion is a complex phenomenon involving many processes related to the physics and chemistry of the system. The approach proposed here builds numerical simulations of these complex phenomena from a small number of simple processes. The presented model is two-dimensional, at a mesoscopic scale, and based on cellular automata. It allows the simulation of the evolution of a metal protected by a polymer layer and in contact with a corrosive medium at a single point, at a defect in the layer. (A.L.B.)
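The record does not give the automaton's actual rules, so the following is only a minimal deterministic sketch of the idea: a metal slab covered by an intact film is attacked only through a single defect, and the corrosion front then grows cell by cell from that point.

```python
# Cell states for a toy 2D corrosion cellular automaton (illustrative only).
METAL, FILM, SOLUTION = 0, 1, 2

def make_grid(width=21, height=12, defect_col=10):
    """Metal block protected by a one-cell film with a single defect,
    topped by the corrosive medium."""
    grid = [[SOLUTION] * width]                        # corrosive medium
    film = [FILM] * width
    film[defect_col] = SOLUTION                        # defect in the layer
    grid.append(film)
    grid += [[METAL] * width for _ in range(height - 2)]
    return grid

def step(grid):
    """One synchronous update: every metal cell touching the solution
    (von Neumann neighbourhood) corrodes. Returns cells corroded this step."""
    h, w = len(grid), len(grid[0])
    to_corrode = []
    for r in range(h):
        for c in range(w):
            if grid[r][c] == METAL:
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < h and 0 <= cc < w and grid[rr][cc] == SOLUTION:
                        to_corrode.append((r, c))
                        break
    for r, c in to_corrode:
        grid[r][c] = SOLUTION
    return len(to_corrode)
```

Starting from the single defect, the corroded region grows as a half-diamond into the metal (1, 3, 5, 7, ... cells per step), while the intact film stays passive: the localized-attack geometry the abstract describes.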
Energy Technology Data Exchange (ETDEWEB)
Canneviere, K.
2003-12-15
This work is devoted to the study of the propagation and structure of two-phase turbulent flames. To this end, Direct Numerical Simulations (DNS) are used. First, the numerical framework for two-phase flow simulation is presented along with a specific chemical model. Then, a study of laminar spray flames is carried out. An analytical study of the evaporation dynamics of droplets is first proposed, detailing how the ratio between the droplet heating time and the evaporation time influences the equivalence ratio. The simulation of a flame propagating through a cloud of droplets is carried out and a pulsating behavior is highlighted. A study of these flames according to the topology of the liquid fuel enabled us to characterize a double flame structure composed of a premixed flame and a diffusion flame. Our last study is devoted to turbulent spray flames. Two-phase combustion of turbulent jets has been simulated. By varying the spray injection parameters (density, equivalence ratio), a database has been generated which allowed us to describe the local and global flame regimes appearing in spray combustion. They have been categorized in four main structures: open and closed external regimes, group combustion and mixed combustion. Finally, a combustion diagram has been developed, involving the spray vaporization time, the mean spacing between droplets or groups of droplets, and the injected equivalence ratio. (author)
Energy Technology Data Exchange (ETDEWEB)
Aid, R.
1998-01-07
This work originates in an industrial problem of validating numerical solutions of ordinary differential equations modeling power systems. This problem is solved using asymptotic estimators of the global error. Four techniques are studied: the Richardson estimator (RS), Zadunaisky's technique (ZD), integration of the variational equation (EV), and solving for the correction (SC). We give some precisions on the order of SC relative to the order of the numerical method. A new variant of ZD is proposed that uses the modified equation. In the variable step-size case, it is shown that, under suitable restrictions on the step-size selection, ZD and SC remain valid. Moreover, some Runge-Kutta methods are shown to need fewer hypotheses on the step sizes to exhibit a valid order of convergence for ZD and SC. Numerical tests conclude this analysis, and industrial cases are given. Finally, an algorithm is proposed that avoids the a priori specification of the integration path for complex-time differential equations. (author)
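The Richardson (RS) estimator mentioned first is the simplest of the four techniques: integrate once with step h and once with h/2, and scale the difference by the method order. A sketch with explicit Euler (order p = 1) on y' = y, which is not the thesis' industrial test case:

```python
def euler(f, y0, t_end, h):
    """Explicit Euler integration of y' = f(t, y) from 0 to t_end."""
    y, t = y0, 0.0
    for _ in range(round(t_end / h)):
        y += h * f(t, y)
        t += h
    return y

def global_error_richardson(f, y0, t_end, h, p=1):
    """Richardson (step-halving) estimate of the global error of the
    coarse solution:  err ~= (y_h - y_{h/2}) / (1 - 2**-p).
    Returns (coarse solution, estimated global error)."""
    y_h = euler(f, y0, t_end, h)
    y_h2 = euler(f, y0, t_end, h / 2.0)
    return y_h, (y_h - y_h2) / (1.0 - 2.0 ** (-p))
```

For y' = y, y(0) = 1 on [0, 1], the exact value is e, so the estimated global error can be checked directly against the true one; at h = 1e-3 they agree to well under one percent.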
Energy Technology Data Exchange (ETDEWEB)
Gazave, J
2007-12-15
When fusion ignition is attained inside the target chambers of high energy laser facilities (LMJ, France and NIF, USA), a harsh environment composed of nuclear particles and an electromagnetic pulse (EMP) will be induced. All electronic devices located in the vicinity will be sensitive to this environment. In the first part of this work, a simulation method has been developed to evaluate the transient currents that will be induced in coaxial cables. The relevance of this model is then discussed through comparisons with experimental results. In a second part, the possibility of simulating the propagation of the EMP, inside and outside a structure as large as a target chamber, using the finite-difference time-domain (FDTD) method is evaluated. A classic FDTD method cannot be used for this kind of simulation because of the huge computer resources needed. For this reason, a 3-dimensional space-time sub-gridding method for FDTD has been developed, and some massively parallel FDTD calculations have also been performed. (author)
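The sub-gridding scheme itself is beyond a short example, but the FDTD update at the heart of such solvers fits in a few lines. Below, a normalized 1D Yee leapfrog scheme at the "magic" Courant number S = 1, where the discrete scheme is exact and a pulse advances one cell per step without numerical dispersion (an illustrative reduction, not the thesis' 3D solver):

```python
import math

def fdtd_1d(steps, size=200, j0=40):
    """Normalized 1D FDTD (Yee leapfrog, Courant number 1).
    Fields are initialized as a purely right-going Gaussian pulse
    (hy staggered half a cell and half a step behind ez)."""
    g = lambda j: math.exp(-((j - j0) / 8.0) ** 2)
    ez = [g(j) for j in range(size)]
    hy = [-g(j + 1) for j in range(size)]   # one-way (rightward) initialization
    for _ in range(steps):
        for j in range(size - 1):           # H update from the curl of E
            hy[j] += ez[j + 1] - ez[j]
        for j in range(1, size):            # E update from the curl of H
            ez[j] += hy[j] - hy[j - 1]
    return ez
```

After 100 steps the unit-amplitude peak sits exactly 100 cells to the right of its starting point, which is the standard sanity check for a leapfrog FDTD kernel before adding refinement interfaces.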
Energy Technology Data Exchange (ETDEWEB)
Fouquet, T
2007-01-15
In this work we present two important results. First, for relatively moderate laser intensities (I*{lambda}{sup 2} {approx_equal} 10{sup 14} W{mu}m{sup 2}/cm{sup 2}), cavitation appears in the Langmuir decay instability (LDI) whenever the plasma wavelength is above a certain limit. Secondly, in the case of an inhomogeneous plasma there is an increase of the Raman reflectivity in the presence of LDI for a plasma density profile that was initially smooth. This work is divided into 5 chapters. The first chapter is dedicated to parametric instabilities, especially the Raman instability and the Langmuir decay instability. The equations that govern these instabilities, as well as their numerical solutions, are presented in the second chapter. The third chapter deals with the case of a one-dimensional plasma with homogeneous density. The saturation of the Raman instability in a one-dimensional plasma with inhomogeneous density is studied in the fourth chapter. The last chapter is dedicated to two-dimensional simulations for various types of laser beams.
Energy Technology Data Exchange (ETDEWEB)
Gillard, Ph. [Centre National de la Recherche Scientifique (CNRS), 86 - Poitiers (France)
1998-04-01
Self-ignition of energetic materials was investigated in order to optimize safety in the field of pyrotechnic applications. Two approaches were used; the first relies on Frank-Kamenetskii's stationary thermal explosion theory. The second consists of a choice of numerical solutions of the heat conduction equation in a non-stationary state. These results were compared in order to find the numerical scheme most compatible with Frank-Kamenetskii's stationary thermal explosion theory. Numerical data were used for three explosive substances, one of which was studied by the author. In all cases, the numerical stationary state agrees, more or less accurately, with Frank-Kamenetskii's stationary thermal explosion theory. From this comparison, it may be concluded that it is preferable, for this kind of problem, to use an implicit scheme with linearization of the heat source term. Explicit numerical methods, with or without addition of the heat source term via the Zinn and Mader scheme, prove less accurate and require greater optimization of the spatial and temporal meshing. (author) 7 refs.
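The recommended treatment, an implicit step with the heat source linearized about the current state, can be sketched on the 0D ignition equation in Frank-Kamenetskii scaling, d(theta)/dt = delta*exp(theta). This reduction is only illustrative (the work solves the full conduction problem), but it has the exact solution theta(t) = -ln(1 - delta*t) to check against:

```python
import math

def ignition_implicit(delta=1.0, dt=1e-3, t_end=0.5, theta0=0.0):
    """Implicit Euler for d(theta)/dt = delta*exp(theta), with the Arrhenius-type
    source linearized about the current state (one Newton step per time step):
        exp(theta + d) ~= exp(theta)*(1 + d)
        =>  d = dt*s / (1 - dt*s),  s = delta*exp(theta).
    Exact solution for comparison: theta(t) = -ln(1 - delta*t)."""
    theta = theta0
    for _ in range(round(t_end / dt)):
        s = delta * math.exp(theta)
        theta += dt * s / (1.0 - dt * s)   # linearized implicit update
    return theta
```

The linearization keeps each step a cheap algebraic update while retaining the implicit scheme's robustness as the source stiffens near runaway (the update diverges, as it should, when dt*s approaches 1).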
Energy Technology Data Exchange (ETDEWEB)
Ramiere, I
2006-09-15
This work is dedicated to the introduction of two original fictitious domain methods for the resolution of elliptic problems (mainly convection-diffusion problems) with general, possibly mixed, boundary conditions: Dirichlet, Robin or Neumann. The originality lies in the approximation of the immersed boundary by an approximate interface derived from the Cartesian fictitious-domain mesh, which is generally not boundary-fitted to the physical domain. The same generic numerical scheme is used to impose the embedded boundary conditions. Hence, these methods require neither a surface mesh of the immersed boundary nor a local modification of the numerical scheme. We study two models of the immersed boundary. In the first one, called spread interface, the approximate immersed boundary is the union of the cells crossed by the physical immersed boundary. In the second one, called thin interface, the approximate immersed boundary lies on sides of mesh cells. Additional algebraic transmission conditions linking both the flux and solution jumps across the thin approximate interface are introduced. The fictitious problem to solve, as well as the treatment of the embedded boundary conditions, is detailed for both methods. A Q1 finite element scheme is implemented for the numerical validation of the spread interface approach, while a new cell-centered finite volume scheme is derived for the thin interface approach with immersed jumps. Each method is then combined with multilevel local mesh refinement algorithms (based on the solution or flux residual) to increase the precision of the solution in the vicinity of the immersed interface. A convergence analysis of a Q1 finite element method with non-boundary-fitted meshes is also presented, establishing the convergence rates of the present methods. 
Among various industrial applications, the simulation of a model heat exchanger from French nuclear power plants illustrates the performance of the fictitious domain methods introduced here. (author)
Energy Technology Data Exchange (ETDEWEB)
Robert, Y
2007-09-15
This work is part of a study whose goal is to develop a computer model of the thermomechanical phenomena occurring during YAG pulsed laser welding of a titanium alloy (TA6V). The fillet weld exhibits various microstructural and mechanical heterogeneities. Indeed, the temperature causes microstructural changes (phase transformations, precipitations) and modifies the mechanical properties. A thermomechanical model therefore has to be established for the welding of TA6V. (author)
Energy Technology Data Exchange (ETDEWEB)
Roussel, T
2007-05-15
Hydrogen storage is the key issue if this gas is to be used, for instance, as an energy vector in the field of transportation. Porous carbons are considered as possible candidate materials. We have studied well-controlled microporous carbon nano-structures: carbonaceous replicas of meso-porous ordered silica materials and zeolites. We generated numerically (using Grand Canonical Monte Carlo simulations, GCMC) the atomic nano-structures of the carbon replicas of four zeolites: AlPO{sub 4}-5, silicalite-1, and faujasite (FAU and EMT). The faujasite replicas allow nano-casting of a new form of crystalline carbon solid made of tetrahedrally or hexagonally interconnected single wall nano-tubes. The pore networks are nano-metric, giving these materials optimized hydrogen molecular storage capacities (for pure carbon phases). However, we demonstrate that these new carbon forms are not competitive for efficient room-temperature storage compared to the void space of a classical gas cylinder. We showed that, by doping with an alkali element such as lithium, one could store at 350 bar the same quantities as in a classical tank at 700 bar. This result is a possible route towards interesting performances for on-board storage systems, for instance. (author)
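A GCMC sketch in the spirit of the simulations above, but stripped of any carbon-H2 interaction potential so that the exact answer is known: for a non-interacting gas at activity z = exp(beta*mu)/Lambda^3, the particle number in volume V is Poisson-distributed with mean zV.

```python
import random

def gcmc_ideal(zV=50.0, n_moves=200_000, burn_in=20_000, seed=1):
    """Grand canonical Monte Carlo for a non-interacting gas.
    Insertion accepted with min(1, zV/(N+1)), deletion with min(1, N/zV);
    these are the standard GCMC acceptance rules with the Boltzmann factor
    set to 1 (no interactions). Returns the average particle number."""
    rng = random.Random(seed)
    n, total, count = 0, 0, 0
    for move in range(n_moves):
        if rng.random() < 0.5:                               # attempt insertion
            if rng.random() < min(1.0, zV / (n + 1)):
                n += 1
        elif n > 0 and rng.random() < min(1.0, n / zV):      # attempt deletion
            n -= 1
        if move >= burn_in:
            total += n
            count += 1
    return total / count
```

In a production run the acceptance rules carry an extra factor exp(-beta*dU) from the adsorbate-framework and adsorbate-adsorbate energies; this interaction-free limit is the usual first validation of a GCMC code, since the sampled mean must converge to zV.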
Energy Technology Data Exchange (ETDEWEB)
Boudousq, V. [Centre Hospitalier Universitaire de Nimes, 30 (France); Bordy, T.; Gonon, G.; Dinten, J.M. [CEA Grenoble (DTBS/STD), Lab. d' Electronique et de Technologie de l' Informatique, LETI, 38 (France)
2004-07-01
LEXXOS (DMS, Montpellier, France) is the first axial and total body cone beam bone densitometer using a 2D digital radiographic detector. In previous papers, its technical principles and performance for patients' Bone Mineral Density (BMD) measurement were presented. Bone densitometers are also used on small animals for drug development. In this presentation, we show how LEXXOS can be adapted for small animal examinations and evaluate its performance. First, in order to take advantage of the whole area of the 20 x 20 cm{sup 2} digital radiographic detector, X-ray magnification was exploited by adapting the geometrical configuration. Secondly, as small animals present low BMD, a specific dual energy calibration has been defined. This adapted system was then evaluated on two sets of mice: six reference mice and six ovariectomized mice. Each month, these two populations were examined and the averaged total body BMD was measured. This evaluation shows that the right order of BMD magnitude is obtained and that, as expected, BMD increases in both sets until a period around puberty, after which the ovariectomized set presents a significant decrease. Moreover, the bone image obtained by dual energy processing on LEXXOS presents a radiographic image quality providing useful complementary information on bone morphometry and architecture. This study shows that the LEXXOS cone beam bone densitometer simultaneously provides useful quantitative and qualitative information for the analysis of bone evolution in small animals. In the future, the same system architecture and processing methodology can be used with higher resolution detectors in order to refine information on bone architecture. (authors)
Energy Technology Data Exchange (ETDEWEB)
Jamet, D. [CEA Grenoble, 38 (France). Dept. de Thermohydraulique et de Physique]|[Ecole Centrale de Paris, 75 (France)
1998-12-31
One of the main difficulties encountered in the direct numerical simulation of two-phase flows in general, and of liquid-vapor flows with phase change in particular, is interface tracking. The idea developed in this work consists in modeling a liquid-vapor interface as a volumetric zone across which physical properties vary continuously, instead of as a discontinuous surface. The second gradient theory allows the evolution equations of the fluid to be established in the whole system: bulk phases and interfaces. This means that the resolution of a single system of partial differential equations suffices to determine the whole two-phase flow, the interfaces and their evolution in time being part of the solution of this unique system. We show in this work that it is possible to artificially enlarge an interface without changing its surface tension or the latent heat of vaporization. This means that it is possible to track all the interfaces of a liquid-vapor two-phase flow with phase change on a mesh whose size is imposed, for example, by the smallest Kolmogorov scale of the bulk phases. The artificial enlargement of an interfacial zone is obtained by modifying the thermodynamic behavior of the fluid within the binodal. We show that this modification does not change the dynamics of an interface. However, although the thickness of an interface and its surface tension vary with the mass and heat fluxes that go through it, the thermodynamic modification necessary for the artificial enlargement of an interface drastically increases these variations. Consequently, the artificial enlargement of an interface must be made carefully, to avoid an excessive variation of its surface tension in dynamic situations. (author) 60 refs.
Energy Technology Data Exchange (ETDEWEB)
Hollmuller, P.
2002-07-01
In this thesis, the physical properties and practical implementation of air/ground heat exchangers were studied. These exchangers consist of ducts placed in the upper ground layer (up to a depth of several meters). Air is circulated through the ducts, with heat transfer from and to the surrounding earth/sand/gravel material and heat diffusion (conductive and capacitive effects) through this material. Air/ground heat exchangers are used to preheat or cool the air needed by the ventilation system of a building (open loop systems), or to heat up or cool the air in a greenhouse (closed loop systems). The reported study consisted in: (i) case studies of built examples, by detailed measuring, monitoring and data analysis; (ii) modeling the basic system; (iii) solving the basic equations both numerically (by computerized simulation) and analytically; (iv) identifying the basic features of these systems; (v) establishing recommendations for practical implementation, especially as regards sizing. It turned out that daily and seasonal heat storage/delivery by means of an air/ground heat exchanger have to be considered separately, each with its own rules of thumb. Depending on parameter values, a phase shift by as much as half the period may even be observed, with very little damping of the temperature oscillation. In Switzerland, the main relevance of these systems is for improving thermal comfort in buildings in summer, when the outdoor temperature is higher than 26 {sup o}C, and for damping the amplitude of day/night temperature variations in horticultural greenhouses. The work carried out can be considered of basic relevance for all applications of the systems studied.
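The half-period phase shift noted above follows from the classical solution for a periodic surface temperature wave diffusing into a homogeneous ground: the phase lag grows linearly with depth while the amplitude decays as exp(-z/d). A sketch, assuming a typical soil diffusivity of 0.7e-6 m^2/s (a placeholder value, not one of the thesis' measured sites):

```python
import math

def damping_depth(alpha, period_s):
    """Penetration depth d = sqrt(2*alpha/omega) of a periodic surface
    temperature wave diffusing into the ground (alpha in m^2/s)."""
    omega = 2.0 * math.pi / period_s
    return math.sqrt(2.0 * alpha / omega)

def ground_temperature(z, t, t_mean, amp, alpha, period_s):
    """Classical damped-wave solution of the 1D heat equation:
        T(z, t) = Tm + A * exp(-z/d) * cos(omega*t - z/d)."""
    d = damping_depth(alpha, period_s)
    omega = 2.0 * math.pi / period_s
    return t_mean + amp * math.exp(-z / d) * math.cos(omega * t - z / d)
```

For the annual cycle this gives d of about 2.65 m: at a depth of pi*d (roughly 8 m) the wave is shifted by exactly half the period, though its amplitude is then damped by exp(-pi), about 4%; the near-undamped half-period shift reported in the thesis involves the duct heat exchange, not pure diffusion.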
Energy Technology Data Exchange (ETDEWEB)
Roche, Vincent
2011-10-28
The following work has been carried out in the framework of the studies conducted by IRSN in support of its safety evaluation of the geological disposal programme for high and intermediate level, long-lived radioactive waste. Such a disposal facility is planned to be hosted by the Callovian-Oxfordian indurated clay formation, between two limestone formations, in the eastern Paris basin, France. Hypothetical faults may cross-cut this layered section, decreasing the containment ability of the clay by creating preferential pathways for radioactive solutes towards the limestones. This study aims at characterising the fault architecture and normal fault growth in clay/limestone layered sections. Structural analyses and displacement profiles have been carried out on normal faults crossing several decimetre- to metre-thick sedimentary alternations in the South-Eastern Basin (France), and petrophysical properties have been determined for each layer. The studied faults are simple fault planes or complex fault zones whose architecture is significantly controlled by the layering. The analysis of the fault characteristics and the results obtained on numerical models highlight several processes such as fault nucleation, fault restriction, and fault growth through a layered section. Some studied faults nucleated in the limestone layers, without using pre-existing fractures such as joints; according to our numerical analysis, a strong stiffness contrast, a low strength contrast between the limestone and the clay layer, and/or a greater thickness of the clay layer are conditions which favour nucleation of faults in limestone. The range of mechanical properties leading to fault nucleation in one layer type or another was investigated using a 3D modelling approach. After its nucleation, the fault propagates within a homogeneous medium with a constant displacement gradient until its vertical propagation is stopped by a restrictor. 
The evidenced restrictors are limestone-clay interfaces or faults in clays, sub-parallel to the layering and formed during the same extension that produced the normal faults. Restriction causes a perturbation of the displacement gradient distribution as well as a modification of the displacement (Dmax) vs. length (R) relation. During slip accumulation along the fault, the displacement gradients stay constant and low in the centre of the fault, while the near-tip value gradually increases up to a threshold leading to fault propagation across the restrictor. Fault restriction may be related to the contrasts of stiffness and strength between the layers. A modification of the fault surface shape enables the fault to propagate across the restrictor. The displacement gradients characterising the through-going faults are specific to each lithology, with larger values in the clay layers than in the surrounding limestones, which indicates that clays discourage the vertical propagation of the faults. The displacement gradients in a clayey layer decrease with the Young's modulus. Analytical solutions were developed to estimate the role of the gradient variations in the Dmax-R relation. The vertical fault propagation is consistent with 'continuous' models without incidental linkage between independent fractures. The dips of the faults showing relatively low displacement change with the lithology and are compatible with frictional, hybrid or Mode I failure, depending on the contrast of the mechanical properties and the fault nucleation depth. During fault growth, the fault architecture can become complex, exhibiting fault connections in the clayey layers and spreading in the limestones, depending on the layer thickness and on possible fault restrictions during growth. After an analysis of scale effects, an application to the Callovian-Oxfordian of eastern France is finally presented. (author)
Energy Technology Data Exchange (ETDEWEB)
Charles, F.
2009-11-15
The thesis deals with kinetic models describing a rarefied spray. These models rely on the coupling of two partial differential equations which describe the spatio-temporal evolution of the distributions of molecules and dust particles. The model presented in the first part consists of two Boltzmann-type equations in which collisions between molecules and particles are modeled by two collision operators. We suggest two forms of these collision operators. In the first one, collisions between dust particles and molecules are supposed to be elastic. In the second one, we assume those collisions are inelastic and given by a diffuse reflexion mechanism on the surface of the dust specks. This leads to non-classical collision operators. We prove that, in the case of elastic collisions, the spatially homogeneous system has weak solutions which preserve mass and energy and satisfy an entropy inequality. We then describe the numerical simulation of the inelastic model, based on a Direct Simulation method. This brings to light that the numerical simulation of the system becomes too expensive because the typical size of a dust particle is too large. We therefore introduce in the second part of this work a model constituted of a coupling (by a drag force term) between a Boltzmann equation and a Vlasov equation. To this end, we perform a scaling of the Boltzmann/Boltzmann system and an asymptotic expansion of one of the dimensionless collision operators with respect to the mass ratio between a molecule of gas and a particle. A rigorous proof of the passage to the limit is given in the spatially homogeneous setting for the elastic collision operators. It includes a new variant of Povzner's inequality in which the vanishing mass ratio is taken into account. Moreover, we numerically compare the Boltzmann/Boltzmann and Vlasov/Boltzmann systems with the inelastic collision operators. 
The simulation of the Vlasov equation is performed with a particle-in-cell method. Starting from these models, we perform numerical simulations of a loss-of-vacuum event in the framework of ITER safety studies. (author)
Energy Technology Data Exchange (ETDEWEB)
Le Pecheur, A.; Clavel, M.; Rey, C.; Bompard, P. [Laboratoire MSSMat, UMR 8579 CNRS, Ecole Centrale Paris (France); Le Pecheur, A.; Curtit, F.; Stephan, J.M. [Departement MMC, EDF RD, Site des Renardieres (France)
2010-11-15
A thermal fatigue test (INTHERPOL) was developed by EDF in order to study crack initiation. The tests are carried out on tubular specimens under various thermal loadings and surface-finish qualities in order to assess the effect of these parameters on crack initiation. The main aim of this study is to test the sensitivity of different fatigue criteria to surface conditions using a micro/macro modelling approach. A 304L polycrystalline aggregate, used for cyclic-plasticity-based FE modelling, has therefore been considered as a representative volume element located at the surface and subsurface of the test tube. This aggregate was cyclically strained according to the results of the FE simulation of the INTHERPOL thermal fatigue experiment. Different surface parameters were numerically simulated: effects of the local microstructure and of grain orientations, and effects of machining (metallurgical pre-hardening, residual stress gradient and surface roughness). Three fatigue criteria (Manson-Coffin, Fatemi-Socie and dissipated-energy types), previously fitted at the macro-scale for thermal fatigue of 304L, were computed at the meso-scale in order to characterise the surface 'hot spot' features and to test the sensitivity of these three criteria to different surface conditions. Results show that grain orientation and neighbourhood play an important role in the location of hot spots, and also that the positive effect of pre-straining and the negative effect of roughness on fatigue life are not predicted in the same way by these different fatigue criteria. (authors)
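Of the three criteria mentioned, the Manson-Coffin(-Basquin) type can be sketched as an invertible strain-life relation; all material constants below are illustrative placeholders, not the fitted 304L values from the study.

```python
from scipy.optimize import brentq

# Manson-Coffin-Basquin: total strain amplitude vs reversals to failure 2N.
# All constants are illustrative placeholders, not the fitted 304L values.
E = 193e3          # Young's modulus [MPa]
sigma_f = 1000.0   # fatigue strength coefficient [MPa]
b = -0.1           # fatigue strength exponent
eps_f = 0.3        # fatigue ductility coefficient
c = -0.5           # fatigue ductility exponent

def strain_amplitude(two_N):
    # elastic (Basquin) term + plastic (Manson-Coffin) term
    return (sigma_f / E) * two_N ** b + eps_f * two_N ** c

def cycles_to_failure(eps_a):
    """Invert the monotonic strain-life relation for N by bracketed root search."""
    return 0.5 * brentq(lambda x: strain_amplitude(x) - eps_a, 1.0, 1e12)

N = cycles_to_failure(0.004)   # life at 0.4% strain amplitude
```

In a meso-scale application, the same inversion would be evaluated at each integration point of the aggregate with the locally computed strain amplitude.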
Energy Technology Data Exchange (ETDEWEB)
Maleki, K
2004-03-15
The relation between damage and permeability in rocks is a very important subject in industrial applications. This is, for example, the case for cracks around radioactive waste storage galleries (the excavation damaged zone, EDZ), which can considerably increase the permeability and thus pose a serious problem for the sealing and safety of these structures. The same phenomena can occur in the concrete containment walls of nuclear power stations and in the cracking of oil-bearing reservoir rocks. Experimental research on this subject, especially in the G.3S-LMS laboratory at Ecole Polytechnique, made it possible to determine the order of magnitude of the effect of damage on the permeability change. However, a numerical modelling of these phenomena, leading to a constitutive behaviour law, remained to be done; this is the principal objective of this thesis. In this work, the notion of crack damage is first defined. This type of damage is modelled by a distribution of disc-shaped cracks in 3D space. The discs' geometrical characteristics (radius, orientation and opening) obey statistical distribution laws that depend on the type of loading (compression or extension). The upper and lower limits of these characteristics are fixed according to actual observations carried out on claystone (the host rock selected for an underground research laboratory at Bure). In order to model the damage in the porous medium, the double-porosity concept is adopted. A homogenization method is used to simulate the flow through the network of cracks and pores, which makes it possible to derive the equivalent permeability of the fractured porous medium. The study of the correlations between permeability and damage obtained by this method, for various values of the fracture density, yielded a relation between permeability and crack damage for classical loadings such as simple extension and simple compression.
To generalize this relation to arbitrary triaxial loadings, the crack damage is related to the notion of mechanical damage, resulting from a mechanical model connecting the damage to stress and strain. Finally, the poro-mechanical behaviour law with damage is implemented in the finite element program ESAR-LCPC. The modelling of an underground gallery showed that damage calculation by this method is feasible and allows the evolution of the permeability, and the variation of the flow entering the gallery, to be computed as a function of this damage. (author)
Energy Technology Data Exchange (ETDEWEB)
Blaise, Philippe [Universite Joseph Fourier, Grenoble 1, 74 Annecy (France)
1998-09-29
The aim of this thesis is to study metallic sodium clusters by numerical simulation. We have developed two ab initio molecular dynamics programs within the formalism of density functional theory. The first is based on the semi-classical extended Thomas-Fermi approach. We use a real-space grid and a Car-Parrinello-like scheme. The computational cost is O(N), and we have built a pseudopotential that speeds up the calculations. By neglecting quantum shell effects, we are able to study a very large set of clusters. We show that sodium cluster energies are well fitted by a liquid-drop formula with a few adjusted parameters. We have investigated breathing modes, surface oscillations and the net charge density. We have shown that the surface energy varies strongly with temperature, and that clusters have a lower melting point than the bulk material. We have calculated fission barriers by a constraint method. The second program is based on the quantum Kohn-Sham approach. We use a real-space grid, and combine a generalized Broyden scheme for ensuring self-consistency with an iterative Davidson-Lanczos algorithm for solving the eigenvalue problem. The cost of this method is much higher. First, we calculated stable structures for small clusters and their energetics, obtaining very good agreement with previous work. Then we investigated highly charged cluster dynamics and identified a chaotic fission process. For high-fissility systems, we observe multi-fragmentation dynamics and find preferential emission of monomers on a characteristic time scale of less than a picosecond. This has been simulated for the first time, with the help of our adaptive grid method, which follows the fragments as they move apart during the fragmentation. (author) 87 refs., 57 figs., 4 tabs.
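The liquid-drop fit described above is linear in its coefficients, so it reduces to a least-squares problem. The sketch below uses synthetic data and placeholder coefficients, not the thesis's fitted sodium values.

```python
import numpy as np

# Liquid-drop expansion of the cluster energy:
#   E(N) = a_v*N + a_s*N**(2/3) + a_c*N**(1/3)
# (volume, surface and curvature terms). Linear in (a_v, a_s, a_c),
# so the coefficients are recovered by linear least squares.
N = np.arange(8, 1000, 8, dtype=float)
a_true = np.array([-1.0, 0.7, 0.2])               # placeholders, arbitrary units
A = np.column_stack([N, N ** (2 / 3), N ** (1 / 3)])
E = A @ a_true                                    # synthetic "simulated" energies
a_fit, *_ = np.linalg.lstsq(A, E, rcond=None)
```

With real simulation data the residuals of this fit would expose the quantum shell effects that the Thomas-Fermi approach deliberately neglects.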
Energy Technology Data Exchange (ETDEWEB)
Chateil, J.F.; Rouby, C.; Brun, M.; Labessan, C.; Diard, F. [Hopital Pellegrin, Unite de Radiopediatrie A., 33 - Bordeaux (France)
2004-05-01
Purpose. Control of the radiation dose in pediatric radiology requires knowledge of the reference levels for all examinations. These data are useful for daily quality assessment but are not well known for some radiographic examinations. The purpose of our study was to evaluate the dose related to voiding cysto-urethrograms (VCUG), upper GI series (UGI) and intravenous urography (IVU). Neonatal chest radiographs in the intensive care unit were also evaluated. Material and methods. For examinations with contrast material (478 VCUG, 220 UGI, 80 IVU), the children were divided into groups based on their weight, from 5 to 30 kg. Measurements were performed using an ionization chamber and expressed as the dose-area product (DAP). For chest radiographs, a direct measurement of the entrance skin dose was performed, with secondary calculation of the DAP. Results. For VCUGs, the DAP ranged between 42.89 and 125.41 cGy.cm{sup 2}. The range was between 76.43 and 150.62 cGy.cm{sup 2} for UGIs and between 49.06 and 83.33 cGy.cm{sup 2} for IVUs. For neonatal chest radiographs, the calculated DAP was between 0.29 and 0.99 cGy.cm{sup 2}. Conclusion. These values represent our reference doses. They allow continuous monitoring of our radiographic technical parameters and equipment, and help to correct and improve them if necessary. (author)
Energy Technology Data Exchange (ETDEWEB)
Heraud, St
2000-07-01
Knowledge of the local mechanical fields over several adjacent grains is needed for a better understanding of damage initiation and intergranular failure in metallic polycrystals. This thesis aimed at deriving such fields through a 'numerical meso-scope': this simulation tool relies on the finite element analysis of a multi-crystalline pattern embedded in a large matrix whose mechanical behaviour is derived experimentally from classical tests performed on the studied metal. First, we derived macroscopic elastic-viscoplastic constitutive equations from tensile and creep tests on an AISI 316 stainless steel and inferred from them the general form of similar, but crystallographic, equations for the single crystals; the corresponding parameters were determined by fitting the computed overall response of an aggregate made of 1000 grains to the macroscopic experimental one. We then investigated a creep-damaged area of the same steel and simulated the same grain ensemble in the 'numerical meso-scope' so as to compare the computed normal stress on all grain boundaries with the observed de-bonded boundaries: this showed that the most damaged boundaries sustain the largest normal stress. Another application concerned the origin of intergranular damage in aged AISI 321 stainless steel. A similar approach using the meso-scope showed that the observations could not be explained by intragranular hardening alone, as is currently proposed in the literature. The pertinence of the 'numerical meso-scope' concept is thus demonstrated, which opens a number of interesting new perspectives. (author)
Energy Technology Data Exchange (ETDEWEB)
Benas, J.C.; Lefevre, F.; Gaillard, P.; Georgel, B.
1995-12-31
This paper presents an original numeric/symbolic method for solving an inverse problem in the field of non-destructive testing. The purpose of the method is to characterize the transitions of a signal even when they are superimposed. Its principle is to solve as many direct problems as necessary to obtain the solution, and to use hypotheses to manage the reasoning of the process. Each direct-problem calculation yields a 'model signal', and the solution is reached when the model signal is close to the measured one. The method determines the directions of minimization through symbolic reasoning based on the peaks of the residual signal. The results of the method are good and seem very promising. (authors). 13 refs., 13 figs., 5 tabs.
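A crude numerical illustration of such a residual-driven loop is sketched below; the Gaussian transition shape, the fixed width and the greedy update are assumptions of this sketch, not the paper's symbolic reasoning.

```python
import numpy as np

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def greedy_peak_fit(x, measured, sigma=1.0, n_iter=5, tol=1e-3):
    """Repeatedly locate the largest peak of the residual and solve a 'direct
    problem' there (add a model component), until the model signal is close
    to the measured one."""
    model = np.zeros_like(measured)
    components = []
    for _ in range(n_iter):
        residual = measured - model
        if np.max(np.abs(residual)) < tol:
            break
        i = int(np.argmax(np.abs(residual)))   # residual peak guides the next step
        comp = (residual[i], x[i], sigma)
        components.append(comp)
        model = model + gaussian(x, *comp)
    return model, components

x = np.linspace(0.0, 20.0, 401)
# two superimposed transitions, the hard case described in the abstract
measured = gaussian(x, 1.0, 7.0, 1.0) + gaussian(x, 0.6, 9.0, 1.0)
model, comps = greedy_peak_fit(x, measured)
```

Even this naive variant separates strongly overlapping components after a few iterations, which is the behaviour the symbolic layer of the method is meant to steer.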
Energy Technology Data Exchange (ETDEWEB)
Eripret, C.
1994-01-01
Modelling the fracture behaviour of pressure vessel steels is of major importance for related structural integrity assessments. It is essential to understand how the micromechanisms control the transition between ductile and brittle fracture in order to predict geometry effects on the transition temperature. To meet this goal, a model has been developed at EDF R&D in the framework of the local approach to fracture. Its experimental validation was achieved by analysing toughness tests performed by AEA Technology on a pressure vessel steel in the transition regime. This large database evidenced the specimen-thickness effects on the toughness properties of the material, as well as the influence of prior ductile crack growth. Predictions of the model have been compared with experiments, showing that the transition curve K{sub 1C} = f(T) can be drawn from model predictions and compared with the RCC-M or ASME design curve. Substantial safety margins have been exhibited; they are greater for thin specimens (10 mm) than for thicker specimens (230 mm). However, the transition curve in the upper transition region is still underestimated by the model (for temperatures higher than RT{sub NDT} + 50 deg C). Improvements should be made to account for large plasticity development and significant crack growth. (author). 30 figs., 10 tabs., 12 refs.
Energy Technology Data Exchange (ETDEWEB)
Jacquin, T.
1997-10-10
The general problem of single-phase fluid flow through heterogeneous porous media is studied, focusing on well-test data interpretation in the context of reservoir characterization; a 3D finite volume code with local refinement capability is developed to simulate well tests. After a review of the traditional techniques used to interpret well-test data, and of their extension to heterogeneous media using a weighting function that depends upon the flow geometry, an analysis is carried out for 2D correlated lognormal permeability distributions: it compares well with numerical well tests performed on low-variance permeability distributions but needs further investigation for high variance. For 3D heterogeneous permeability fields, well-bore pressure cannot be estimated by analytical means; a more empirical approach is therefore used to study the permeability field of a reservoir used by Gaz de France as an underground gas storage. Simulated well tests are performed on a reservoir model based upon core measurements and log analysis. The numerical investigation reveals inconsistencies in the treatment of the available data, which can be corrected so that the geology is better taken into account.
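For reference, the homogeneous baseline that such heterogeneous analyses generalise is the classical line-source (Theis) drawdown solution; the sketch below evaluates it with scipy's exponential integral, using illustrative parameter values rather than any from the thesis.

```python
import numpy as np
from scipy.special import exp1

def drawdown(r, t, q=1e-3, k=1e-13, h=10.0, mu=1e-3, phi=0.2, ct=1e-9):
    """Pressure drop [Pa] at radius r [m] and time t [s] around a well produced
    at constant rate q [m^3/s] in a homogeneous reservoir: permeability k [m^2],
    thickness h [m], viscosity mu [Pa.s], porosity phi, compressibility ct [1/Pa]."""
    eta = k / (phi * mu * ct)                 # hydraulic diffusivity [m^2/s]
    u = r ** 2 / (4.0 * eta * t)              # dimensionless time group
    return q * mu / (4.0 * np.pi * k * h) * exp1(u)

dp = drawdown(10.0, 3600.0)                   # drawdown 10 m away after 1 hour
```

Well-test interpretation rests on the late-time logarithmic behaviour of this curve; heterogeneous extensions replace the single k by an effective, geometry-weighted permeability.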
Bel Hadj Kacem, Mohamed Salah
All hydrological processes are affected by the spatial variability of the physical parameters of the watershed, and also by human intervention on the landscape. The water outflow from a watershed strictly depends on the spatial and temporal variabilities of these physical parameters. It is now apparent that the integration of mathematical models into GISs can benefit both GIS and three-dimensional environmental models: a true modeling capability can help the modeling community bridge the gap between planners, scientists, decision-makers and end-users. The main goal of this research is to design a practical tool to simulate surface run-off using Geographic Information Systems and the simulation of the hydrological behavior by the Finite Element Method.
Energy Technology Data Exchange (ETDEWEB)
Braffort, P. [Commissariat a l' Energie Atomique, Saclay(France). Centre d' Etudes Nucleaires
1953-07-01
We give the principles of a square-based subject classification suited to the needs of the Service de Documentation of the C.E.A. We then present the detail of the categories in the order of the 'columns', that is, of the major scientific subdivisions of the C.E.A. (authors) [French] On donne les principes d'une classification matieres a base carree, convenant aux besoins du Service de Documentation du C.E.A. On presente ensuite le detail des rubriques dans l'ordre des ''colonnes'', c'est-a-dire, des grandes subdivisions scientifiques du C.E.A. (auteurs)
Energy Technology Data Exchange (ETDEWEB)
Mallet, V.
2005-12-15
The aim of this work is the evaluation of the quality of a chemistry-transport model, not by a classical comparison with observations, but by the estimation of its uncertainties due to the input data, the model formulation and the numerical approximations. These three sources of uncertainty are studied with Monte Carlo simulations, multi-model simulations, and comparisons between numerical schemes, respectively. A high uncertainty is shown for ozone concentrations. To overcome the uncertainty-related limitations, one strategy is ensemble forecasting: by combining several models (up to 48) on the basis of past observations, forecasts can be significantly improved. This work was also the occasion to develop an innovative modeling system, named Polyphemus. (J.S.)
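The combination step can be illustrated as a least-squares "superensemble": weights are fitted against past observations and then applied to the member forecasts. The data below are synthetic stand-ins; the actual system combined up to 48 chemistry-transport model runs.

```python
import numpy as np

# Least-squares combination of M member models against past observations.
rng = np.random.default_rng(1)
obs = rng.normal(40.0, 10.0, size=200)                       # past obs (synthetic)
members = np.stack([obs + rng.normal(bias, 5.0, size=200)    # biased, noisy members
                    for bias in (3.0, -6.0, 1.0)])
w, *_ = np.linalg.lstsq(members.T, obs, rcond=None)          # fitted weights
combined = w @ members                                       # weighted forecast

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

In sample, the fitted combination can never score worse than the best single member, since any single member is itself one admissible weight vector; the practical question, addressed in the thesis, is how well the weights generalise to future dates.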
Energy Technology Data Exchange (ETDEWEB)
Tauveron, N
2006-02-15
The aim of the present work was to develop models able to simulate the occurrence and development of axial instabilities in multistage turbomachines. The construction of a 1D unsteady axisymmetric model of the internal flow in a turbomachine (at the scale of the blade row) followed several steps: generation of steady correlations adapted to different regimes (off-design conditions, low mass flow rate, negative mass flow rate); building of a model able to describe transient behaviour; use of implicit time schemes adapted to long transients; and validation of the model against experimental investigations, measurements and numerical results from the literature. This model is integrated in a numerical tool able to describe the gas dynamics in a complete circuit containing different elements (ducts, valves, plenums). The complete model can thus represent the coupling between local and global phenomena, a very important mechanism in the occurrence and development of axial instability. An elementary theory has also been developed, based on a generalisation of Greitzer's model. These models, validated on various configurations, provided complementary elements for the validation of the complete model. They also allowed a more comprehensive description of the physical phenomena at stake in instability occurrence and development, by quantifying various effects (inertia, compressibility, performance levels) and highlighting the main phenomena (in particular the collapse and recovery kinetics of the plenum), which were the only ones retained in the final elementary theory. The models were first applied to academic configurations (compression systems), and then to an innovative industrial project: a helium-cooled fast nuclear reactor with a Brayton cycle. The use of the models has shed light on surge occurrence caused by a pipe-break event.
It has been shown that surge occurrence is highly dependent on the break location and that surge development is very limited (no more than a few seconds). It is also shown that, in the case of a break event, the turbomachine can make a significant contribution to decay heat removal from the nuclear core. Finally, such a device is autonomous for a limited time only, and this time is sensitive to parameters such as the break location and the back-pressure value. (author)
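Greitzer's lumped-parameter model, which the elementary theory above generalises, reduces to two coupled ODEs for the flow coefficient phi and the plenum pressure rise psi. The cubic compressor characteristic, the throttle law and all constants below are illustrative placeholders, not the thesis's model.

```python
import numpy as np
from scipy.integrate import solve_ivp

B = 1.8  # Greitzer stability parameter (illustrative)

def psi_c(phi):
    """Cubic compressor characteristic (illustrative shape)."""
    return 0.6 + 1.5 * phi - 0.5 * phi ** 3

def throttle(psi, kt=0.7):
    """Throttle mass flow for a given plenum pressure rise."""
    return kt * np.sqrt(max(psi, 0.0))

def rhs(t, y):
    phi, psi = y
    return [B * (psi_c(phi) - psi),       # duct momentum balance
            (phi - throttle(psi)) / B]    # plenum mass balance

sol = solve_ivp(rhs, (0.0, 100.0), [0.5, 0.5], max_step=0.05)
```

Depending on B and on where the throttle line intersects the characteristic, the trajectory either settles to a stable operating point or enters a surge limit cycle, which is the coupling between local and global phenomena the abstract refers to.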
Energy Technology Data Exchange (ETDEWEB)
Dexet, M
2006-10-15
This thesis presents a methodology for multi-scale coupling between the morphology and texture of a microstructure, as characterised experimentally, and the results of mechanical strain field analysis. This methodology is based on a coupling between experimental characterisation of the microstructure, ex-situ mechanical tests, local strain field measurements performed at the grain scale, and finite element simulations. A cost function is then defined in order to optimise the parameters of the crystallographic constitutive law. The method is applied to zirconium alloys in order to improve the understanding of their mechanical behaviour in relation to their microstructures, a key requirement for their use in the nuclear industry. This work was funded by the joint research program SMIRN between EDF, CEA and CNRS. (author)
Energy Technology Data Exchange (ETDEWEB)
Andzi Barhe, T.
2004-10-15
This thesis was performed within a collaboration between the Laboratoire de Combustion et de Detonique (LCD, University of Poitiers) and the Laboratoire de Physique et de Chimie d'Environnement (LPCE) of the University of Ouagadougou, and was financed by the French Agency for Environment and Energy Management (ADEME). The principal objective of this study is the optimisation of the combustion process during waste incineration, aimed at reducing the polluting emissions, principally CO and NO, during the incineration of cellulosic and plastic materials. It involves analysing the influence of the operating parameters on the polluting emissions, and identifying the reaction mechanisms of formation and reduction of these pollutants during the combustion process. The study was therefore performed in two parts: an experimental part and a numerical part. The experimental part used a fixed-bed counterflow reactor, a setup that simulates the combustion within an industrial waste incinerator. The reactor allows the combustion of a vertical layer of a waste mixture (wood, cardboard, PET, polyamide) to be followed. Three model mixtures representative of the makeup of household waste were studied in order to determine the influence of the composition of the waste on the emission of pollutants (CO and NO). The results show that this parameter has a practically negligible influence within the tested range; the formation of pollutants consequently depends on the operating parameters, namely the equivalence ratio and the temperature. A numerical study of the influence of these parameters was then carried out to show their impact on the mechanisms of pollutant formation and to determine the chemical mechanisms involved in the formation of nitrogen oxides. The numerical study was performed with software developed at the LCD. This programme is based on a detailed chemical model coupled to a simple physical model.
It uses the CHEMKIN calculation code and permits the simulation of the combustion process within the gaseous zone of the fixed-bed reactor. The programme is initialized with the results of pyrolysis analyses previously performed at the LCD on the materials making up the model waste mixture. The pyrolysis products identified during this study are HCN, NH{sub 3}, NO, NO{sub 2}, CO and light hydrocarbons. The results show that the simulation enables the determination of the reaction mechanisms of formation and reduction of nitrogen oxides in the three combustion regimes identified during the experimental study. For each of the three regimes, the impact of the combustion parameters on the NO yield was studied. The response to these parameters is itself found to depend heavily on the combustion regime. (author)
Aerodynamic study of a turbulent jet impinging on a concave wall
LeBlanc, Benoit
Given the growing demand for high temperatures in the combustion chambers of aerospace propulsion systems (turboshaft engines, jet engines, etc.), interest in impinging-jet cooling has grown. Cooling the turbine blades allows an increase in combustion temperature, which translates into higher combustion efficiency and therefore better fuel economy. Heat transfer in the blading is influenced by the aerodynamic aspects of jet cooling, particularly in the case of turbulent flows. A lack of understanding of the aerodynamics inside these confined spaces can lead to unexpected changes in heat transfer, which increases the risk of creep. It is therefore of interest to the aerospace industry and to academia to pursue research into the aerodynamics of turbulent jets impinging on curved walls. Jets impinging on curved surfaces have already been the subject of numerous studies. However, oscillatory conditions observed in the laboratory have proved difficult to reproduce numerically, since the flow structures of jets impinging on concave walls depend strongly on turbulence and on unsteady effects. An experimental study was carried out at the PPRIME Institute of the Universite de Poitiers to observe the oscillation phenomenon in the jet. A series of tests verified the laminar and turbulent flow conditions, but the cost of the experimental campaign only allowed a glimpse of the global phenomenon. A second series of tests was carried out numerically at the Universite de Moncton with the OpenFOAM tool, for laminar, two-dimensional flow conditions. The aim of this study is therefore to continue the investigation of the oscillatory aerodynamics of jets impinging on curved walls, but for a transitional, turbulent flow regime
Energy Technology Data Exchange (ETDEWEB)
Ilyina, T.P. [Hawaii Univ., Manoa, Honolulu, HI (United States). Dept. of Oceanography
2007-07-01
Persistent organic pollutants (POPs) are harmful to human health and to the environment, and their fate in the marine environment is not yet fully understood. An ocean model (FANTOM) has been developed to investigate the fate of selected POPs in the North Sea, with a main focus on quantifying the distribution of POPs and their aquatic pathways. This is the first time that a spatially resolved, measurement-based ocean transport model has been used to study POP-like substances, at least on the regional scale. The model was applied to the southern North Sea and tested by studying the behaviour of γ-HCH, α-HCH and PCB 153 in sea water. This model study proves that transport models such as FANTOM are capable of reproducing realistic multi-year temporal and spatial trends of selected POPs and can be used to address further scientific questions. (orig.)
Ahmed, Asm Sabbir; Hauck, Barry; Kramer, Gary H
2012-08-01
This study describes the performance of an array of high-purity germanium detectors designed with two different end-cap materials: steel and carbon fibre. The advantages and disadvantages of using this detector type in the estimation of the minimum detectable activity (MDA) for different energy peaks of the isotope {sup 152}Eu are illustrated. A Monte Carlo model was developed to study the detection efficiency of the detector array. A voxelised Lawrence Livermore torso phantom, equipped with lungs, chest plates and overlay plates, was used to mimic a typical lung-counting protocol with the array of detectors. The lungs of the phantom simulated the volumetric source organ. A significantly low MDA was estimated for the 40 keV energy peak at a chest wall thickness of 6.64 cm.
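MDA estimates of this kind typically follow Currie's formula; the sketch below uses placeholder counting parameters, not those of the detector array in the study.

```python
import math

def mda_bq(background_counts, efficiency, count_time_s, gamma_yield):
    """Currie-type MDA [Bq]: (2.71 + 4.65*sqrt(B)) / (eps * t * p_gamma),
    with B the background counts in the peak region, eps the counting
    efficiency, t the count time and p_gamma the emission probability."""
    ld = 2.71 + 4.65 * math.sqrt(background_counts)   # detection limit [counts]
    return ld / (efficiency * count_time_s * gamma_yield)

# Placeholder values for a lung count, illustrative only.
mda = mda_bq(background_counts=400.0, efficiency=0.02,
             count_time_s=1800.0, gamma_yield=0.2)
```

The Monte Carlo model's role in such a workflow is to supply the efficiency term for each peak energy and chest wall thickness.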
Energy Technology Data Exchange (ETDEWEB)
Bertrand, N
2006-10-15
In the framework of research on the long-term behaviour of radioactive waste containers, this work consists, on the one hand, of a study of the low-temperature oxidation of iron and, on the other hand, of the development of a numerical model of oxide-scale growth. Isothermal oxidation experiments were performed on pure iron at 300 and 400 C in dry and humid air at atmospheric pressure, and the oxide scales formed in these conditions were characterized. They are composed of a duplex magnetite scale under a thin hematite scale. The inner layer of the duplex scale is thinner than the outer one; both are composed of columnar grains, smaller in the inner part. The outer hematite layer is made of very small equiaxed grains. Marker and tracer experiments show that part of the scale grows at the metal/oxide interface thanks to short-circuit diffusion of oxygen. A model for iron-oxide scale growth at low temperature is then deduced. Besides this experimental study, the numerical model EKINOX (Estimation Kinetics Oxidation) was developed. It simulates the growth of an oxide scale controlled by mixed mechanisms, such as the diffusion of anionic and cationic vacancies through the scale, as well as metal transfer at the metal/oxide interface. It is based on the calculation of the concentration profiles of the chemical species and of the point defects in the oxide scale and in the substrate. This numerical model does not use the classical quasi-steady-state approximation and computes the fate of the cationic vacancies at the metal/oxide interface: these point defects can either be eliminated by interface motion or injected into the substrate, where they can be annihilated at sinks such as climbing dislocations. Hence, the influence of substrate cold-work can be investigated. The EKINOX model is validated under the conditions of Wagner's theory and is confronted with experimental results through its application to the high-temperature oxidation of nickel. (author)
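The Wagner limit used to validate EKINOX corresponds to parabolic scale growth, x^2 = 2*kp*t. A minimal check that the rate constant can be recovered from thickness-time data is sketched below, with an illustrative kp rather than a measured one.

```python
import numpy as np

kp = 1e-17                          # parabolic rate constant [m^2/s], illustrative
t = np.linspace(0.0, 1e6, 200)      # time [s]
x = np.sqrt(2.0 * kp * t)           # scale thickness [m] in the Wagner limit

# Recover kp from "data": the slope of x^2 vs t, halved.
kp_fit = np.polyfit(t, x ** 2, 1)[0] / 2.0
```

A diffusion model that resolves the vacancy profiles, as EKINOX does, should reproduce this parabolic law when the Wagner assumptions hold, and deviate from it when interface reactions or defect injection become limiting.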
Boothroyd, R.; Hardy, R. J.; Warburton, J.; Marjoribanks, T.
2015-12-01
Aquatic vegetation has a significant influence on the hydraulic functioning of river systems. Plant morphology has previously been shown to alter the mean and turbulent properties of flow, influenced by the spatial distribution of branches and foliage, and these effects can be further investigated through numerical models. We report on a novel method for the measurement and incorporation of complex plant morphologies into a computational fluid dynamics (CFD) model. The morphological complexity of Prunus laurocerasus is captured in foliated and defoliated states through terrestrial laser scanning (TLS). Point clouds are characterised by a voxelised representation and incorporated into a CFD scheme using a mass flux scaling algorithm, allowing the numerical prediction of flow around individual plants. Here we examine the sensitivity to plant aspect, i.e. the positioning of the plant relative to the primary flow direction, by rotating the voxelised plant representation through 15° increments (24 rotations) about the vertical axis. This enables the impact of plant aspect on the velocity and pressure fields to be quantified, and in particular how it affects species-specific drag forces and drag coefficients. Plant aspect is shown to considerably influence the flow field response, producing spatially heterogeneous downstream velocity fields with both symmetric and asymmetric wake shapes, and reattachment points that extend up to seven plant lengths downstream. For the same plant, changes in aspect are shown to account for a maximum variation in drag force of 168%, which equates to a 65% difference in the drag coefficient. An explicit consideration of plant aspect is therefore important in studies concerning flow-vegetation interactions, especially when reducing the uncertainty in parameterising the effect of vegetation in numerical models.
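A species-specific drag coefficient is conventionally recovered from a predicted drag force via Cd = 2F / (rho * A * U^2); the density, frontal area, velocity and force below are placeholders, not values from the study.

```python
def drag_coefficient(force_n, rho=1000.0, area_m2=0.05, u_m_s=0.5):
    """Cd from drag force [N], water density [kg/m^3], frontal area [m^2]
    and bulk velocity [m/s]. All defaults are illustrative placeholders."""
    return 2.0 * force_n / (rho * area_m2 * u_m_s ** 2)

cd = drag_coefficient(12.0)   # hypothetical CFD-predicted force of 12 N
```

Note that when aspect changes, the reference frontal area A changes too, which is why the reported variations in force (168%) and in Cd (65%) differ.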
Finite-temperature quantum cluster methods applied to the Hubbard model
Plouffe, Dany
Since their discovery in the 1980s, high-critical-temperature superconductors have attracted great interest in solid-state physics. Understanding the origin of the phases observed in these materials, such as superconductivity, has been one of the great challenges of theoretical solid-state physics over the past 25 years. One of the mechanisms proposed to explain these phenomena is the strong electron-electron interaction. The Hubbard model is one of the simplest models that accounts for these interactions. Despite its apparent simplicity, some of its features, including its phase diagram, are still not well established, in spite of several theoretical advances in recent years. This study is devoted to the analysis of numerical methods for computing various properties of the Hubbard model as a function of temperature. We describe methods (the VCA and the CPT) that approximate the finite-temperature Green function of an infinite system from the Green function computed on a cluster of finite size. To compute these Green functions, we use techniques that considerably reduce the numerical effort required to evaluate the thermodynamic averages, by drastically reducing the space of states to be considered in these averages. Although this study aims first at developing cluster methods for solving the Hubbard model at finite temperature in general, and at studying the basic properties of this model, we apply it under conditions approaching those of high-critical-temperature superconductors. The methods presented in this study make it possible to draw a phase diagram for antiferromagnetism and superconductivity that shows many similarities with that of the high-temperature superconductors. Keywords: Hubbard model, thermodynamics
Boiteau-Auvray, Sophie
1997-01-01
SiC filaments are obtained by chemical vapour deposition on a W substrate heated by the Joule effect. These filaments can be used as reinforcements in titanium-alloy matrices. Interposing a synthetic TiC phase between a W filament substrate and the SiC deposit yielded increased thermochemical stability of the diffusion couple. The mechanisms leading to this chemical protection have been identified. A numerical simulation of the deposition of...
Propagation Aspects of Frequency Sharing, Interference and System Diversity
1983-03-01
...multiple has been carried far enough. For mathematical reasons whose discussion is beyond the scope of this article, there is no ... to avoid purely numerical errors, the phase of the scattered fields was expanded in terms of the distance of the scatterers from the centre of the rectangle. ... denotes the truncation index. These curves show that it seems reasonable to adopt a truncation index equal to 30 for a drop radius equal to ...
Some aspects of symbolic data analysis
Diday, E.
1993-01-01
CLOREC project; Knowing how to represent our knowledge by expressions that are both symbolic and numerical, and how to manipulate and use these expressions in order to help make decisions and to better analyse, synthesise and organise our experience and observations: such is the objective of symbolic data analysis. We first present "symbolic objects" (kinds of knowledge atoms) and what distinguishes them from the classical objects of standard data analysis. This...
Functional annotation of a full-length mouse cDNA collection
Energy Technology Data Exchange (ETDEWEB)
Kawai, J.; Shinagawa, A.; Shibata, K.; Yoshino, M.; Itoh, M.; Ishii, Y.; Arakawa, T.; Hara, A.; Fukunishi, Y.; Konno, H.; Adachi, J.; Fukuda, S.; Aizawa, K.; Izawa, M.; Nishi, K.; Kiyosawa, H.; Kondo, S.; Yamanaka, I.; Saito, T.; Okazaki, Y.; Gojobori, T.; Bono, H.; Kasukawa, T.; Saito, R.; Kadota, K.; Matsuda, H.; Ashburner, M.; Batalov, S.; Casavant, T.; Fleischmann, W.; Gaasterland, T.; Gissi, C.; King, B.; Kochiwa, H.; Kuehl, P.; Lewis, S.; Matsuo, Y.; Nikaido, I.; Pesole, G.; Quackenbush, J.; Schriml, L.M.; Staubli, F.; Suzuki, R.; Tomita, M.; Wagner, L.; Washio, T.; Sakai, K.; Okido, T.; Furuno, M.; Aono, H.; Baldarelli, R.; Barsh, G.; Blake, J.; Boffelli, D.; Bojunga, N.; Carninci, P.; de Bonaldo, M.F.; Brownstein, M.J.; Bult, C.; Fletcher, C.; Fujita, M.; Gariboldi, M.; Gustincich, S.; Hill, D.; Hofmann, M.; Hume, D.A.; Kamiya, M.; Lee, N.H.; Lyons, P.; Marchionni, L.; Mashima, J.; Mazzarelli, J.; Mombaerts, P.; Nordone, P.; Ring, B.; Ringwald, M.; Rodriguez, I.; Sakamoto, N.; Sasaki, H.; Sato, K.; Schonbach, C.; Seya, T.; Shibata, Y.; Storch, K.-F.; Suzuki, H.; Toyo-oka, K.; Wang, K.H.; Weitz, C.; Whittaker, C.; Wilming, L.; Wynshaw-Boris, A.; Yoshida, K.; Hasegawa, Y.; Kawaji, H.; Kohtsuki, S.; Hayashizaki, Y.; RIKEN Genome Exploration Research Group Phase II T; FANTOM Consortium
2001-01-01
The RIKEN Mouse Gene Encyclopedia Project, a systematic approach to determining the full coding potential of the mouse genome, involves collection and sequencing of full-length complementary DNAs and physical mapping of the corresponding genes to the mouse genome. We organized an international functional annotation meeting (FANTOM) to annotate the first 21,076 cDNAs to be analyzed in this project. Here we describe the first RIKEN clone collection, which is one of the largest described for any organism. Analysis of these cDNAs extends known gene families and identifies new ones.
Alanine-EPR dosimetry for measurements of ionizing radiation absorbed doses in the range 0.5-10 kGy
Peimel-Stuglik, Z
2001-01-01
The usefulness of two easily accessible alanine dosimeters (ALANPOL from IChTJ and the foil dosimeter from Gamma Service, Radeberg, Germany) for radiation dose measurement in the range of 0.5-10 kGy was investigated. In both cases the result of the test was positive. The foil dosimeter from Gamma Service is recommended for dose distribution measurements in phantoms or products, ALANPOL for routine measurements. The EPR-alanine method based on the described dosimeters can be successfully used, among other applications, in the technology of radiation treatment of food.
DEFF Research Database (Denmark)
Ienasescu, Hans; Li, Kang; Andersson, Robin;
2016-01-01
Genomics consortia have produced large datasets profiling the expression of genes, micro-RNAs, enhancers and more across human tissues or cells. There is a need for intuitive tools to select the subsets of such data that are most relevant for specific studies. To this end, we present Slide...... for individual cell types/tissues, producing sets of genes, enhancers etc. which satisfy these constraints. Changes in slider settings result in simultaneous changes in the selected sets, updated in real time. SlideBase is linked to major databases from genomics consortia, including FANTOM, GTEx, The Human...
Modelling and analysis of the reflectance of coniferous forest canopies
Fournier, Richard A.
An analysis of the reflectance of coniferous forest canopies forms the core of this research project. The main objective of the chosen scientific approach is to explain how forest architecture, and the interaction processes between incident solar illumination and the forest canopy, shape the reflectance patterns seen by a digital sensor. The research strategy proceeds in four steps: (1) the study of the radiometric patterns of tree crowns, (2) the understanding of the light regime within the canopy, (3) the measurement and integration of canopy architecture from the regional scale down to the detailed description of the individual tree, and (4) the simulation and validation of digital images at fine spatial resolution (around 50 cm). This work identified the governing parameters that explain the reflectance of coniferous forest canopies, thereby clarifying the problem as a whole.
Inverse prediction of a solidification front in a high-temperature transformation furnace
Marois, Marc-Andre
This research project concerns a numerical method for predicting the evolution of the 2D profile of the solid layer that covers the inside walls of several high-temperature transformation furnaces. A mathematical model based on the weak formulation of the energy equation is first developed and validated. An inverse heat transfer method relying on this model is then developed in order to obtain a fast, continuous measurement of the evolution of the profile of this solid layer. Given the large thermal inertia of the system under study, different strategies are proposed to ease the implementation of this numerical method. Finally, this inverse approach is confronted with experimental results obtained with a metallurgical reactor. A preliminary study shows that transformation furnaces exhibit a very large thermal inertia that greatly limits the use of inverse methods. Indeed, the sensitivity of this numerical method rests essentially on the time delay observed between a variation of the ledge profile and the temperature fluctuation at the outer surface of the furnace wall. The results obtained show that part of this delay is proportional to the latent heat of fusion when the phase-change material is a non-eutectic mixture. To limit the impact of this time delay, two numerical devices are proposed: reusing the temperature measurements more than once, and modifying the thermal problem in the mushy and liquid regions. On the one hand, the proposed overlapping concept reduces the data-acquisition time between successive predictions. On the other hand, the virtual approach developed reduces the thermal inertia of the system and, thereby, the time delay associated with heat diffusion. These two strategies made it possible to efficiently predict the 1D evolution of the thickness of the ledge solidifying on...
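The weak (enthalpy) formulation of the energy equation referred to above can be sketched in one dimension as follows. All properties are dimensionless placeholders (hypothetical latent heat, unit conductivity and heat capacity), not furnace data: temperature is recovered from enthalpy, and the mushy zone is where the enthalpy lies between 0 and the latent heat.

```python
# 1D enthalpy-method sketch of a solidification front (explicit scheme).
N = 50                       # grid nodes
DX = 1.0 / N                 # grid spacing
DT = 1e-4                    # time step (DT/DX**2 = 0.25, stable)
LATENT = 1.0                 # hypothetical latent heat
T_COLD, T_HOT = -1.0, 1.0    # chilled wall and initial liquid temperature

def temperature(h):
    """Invert the enthalpy-temperature relation (melting point at T = 0)."""
    if h < 0.0:
        return h             # solid branch
    if h > LATENT:
        return h - LATENT    # liquid branch
    return 0.0               # mushy zone: temperature pinned at melting point

def solve(steps):
    """March the enthalpy field in time; the solid layer grows from x = 0."""
    h = [T_HOT + LATENT] * N  # start fully liquid
    for _ in range(steps):
        t = [temperature(hi) for hi in h]
        t[0] = T_COLD         # imposed cold-wall temperature
        for i in range(1, N - 1):
            h[i] += DT / DX**2 * (t[i - 1] - 2.0 * t[i] + t[i + 1])
    return h

ledge = solve(2000)
front_nodes = sum(1 for hi in ledge if hi < LATENT)  # no longer fully liquid
```

Tracking how slowly `front_nodes` responds to a change in the wall temperature is precisely the time-delay problem that the inverse method must contend with.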
Goyette, Stephane
1995-11-01
This thesis concerns the numerical modelling of regional climate. The main objective of the exercise is to develop a regional climate model capable of simulating phenomena at the spatial mesoscale. Our study area is the North American West Coast, chosen because of the complexity of its relief and the control the relief exerts on climate. The motivations for this study are several: on the one hand, we cannot, in practice, increase the coarse spatial resolution of general circulation models (GCMs) of the atmosphere without inordinately increasing integration costs; on the other hand, environmental management increasingly demands regional climate data determined at better spatial resolution. Until now, GCMs have been the models most valued for their ability to simulate the climate as well as global climate change. However, fine-scale climate phenomena still elude GCMs because of their coarse spatial resolution. Moreover, the socio-economic repercussions of possible climate modifications are closely tied to phenomena imperceptible to current GCMs. To circumvent some of the problems inherent in resolution, a practical approach is to take a limited spatial domain of a GCM and to nest within it another numerical model with a high-spatial-resolution mesh. This nesting process then implies a new numerical simulation. This "retro-simulation" is guided within the restricted domain by pieces of information supplied by the GCM and forced by mechanisms handled solely by the nested model. Thus, in order to refine the spatial precision of large-scale climate predictions, we develop here a numerical model called FIZR, which provides regional climate information valid at the fine spatial scale.
Pseudo-messenger RNA: phantoms of the transcriptome.
Directory of Open Access Journals (Sweden)
Martin C Frith
2006-04-01
The mammalian transcriptome harbours shadowy entities that resist classification and analysis. In analogy with pseudogenes, we define pseudo-messenger RNA to be RNA molecules that resemble protein-coding mRNA, but cannot encode full-length proteins owing to disruptions of the reading frame. Using a rigorous computational pipeline, which rules out sequencing errors, we identify 10,679 pseudo-messenger RNAs (approximately half of which are transposon-associated) among the 102,801 FANTOM3 mouse cDNAs: just over 10% of the FANTOM3 transcriptome. These comprise not only transcribed pseudogenes, but also disrupted splice variants of otherwise protein-coding genes. Some may encode truncated proteins, only a minority of which appear subject to nonsense-mediated decay. The presence of an excess of transcripts whose only disruptions are opal stop codons suggests that there are more selenoproteins than currently estimated. We also describe compensatory frameshifts, where a segment of the gene has changed frame but remains translatable. In summary, we survey a large class of non-standard but potentially functional transcripts that are likely to encode genetic information and effect biological processes in novel ways. Many of these transcripts do not correspond cleanly to any identifiable object in the genome, implying fundamental limits to the goal of annotating all functional elements at the genome sequence level.
An atlas of active enhancers across human cell types and tissues
Andersson, Robin; Gebhard, Claudia; Miguel-Escalada, Irene; Hoof, Ilka; Bornholdt, Jette; Boyd, Mette; Chen, Yun; Zhao, Xiaobei; Schmidl, Christian; Suzuki, Takahiro; Ntini, Evgenia; Arner, Erik; Valen, Eivind; Li, Kang; Schwarzfischer, Lucia; Glatz, Dagmar; Raithel, Johanna; Lilje, Berit; Rapin, Nicolas; Bagger, Frederik Otzen; Jørgensen, Mette; Andersen, Peter Refsing; Bertin, Nicolas; Rackham, Owen; Burroughs, A. Maxwell; Baillie, J. Kenneth; Ishizu, Yuri; Shimizu, Yuri; Furuhata, Erina; Maeda, Shiori; Negishi, Yutaka; Mungall, Christopher J.; Meehan, Terrence F.; Lassmann, Timo; Itoh, Masayoshi; Kawaji, Hideya; Kondo, Naoto; Kawai, Jun; Lennartsson, Andreas; Daub, Carsten O.; Heutink, Peter; Hume, David A.; Jensen, Torben Heick; Suzuki, Harukazu; Hayashizaki, Yoshihide; Müller, Ferenc; Consortium, The Fantom; Forrest, Alistair R. R.; Carninci, Piero; Rehli, Michael; Sandelin, Albin
2014-03-01
Enhancers control the correct temporal and cell-type-specific activation of gene expression in multicellular eukaryotes. Knowing their properties, regulatory activity and targets is crucial to understand the regulation of differentiation and homeostasis. Here we use the FANTOM5 panel of samples, covering the majority of human tissues and cell types, to produce an atlas of active, in vivo-transcribed enhancers. We show that enhancers share properties with CpG-poor messenger RNA promoters but produce bidirectional, exosome-sensitive, relatively short unspliced RNAs, the generation of which is strongly related to enhancer activity. The atlas is used to compare regulatory programs between different cells at unprecedented depth, to identify disease-associated regulatory single nucleotide polymorphisms, and to classify cell-type-specific and ubiquitous enhancers. We further explore the utility of enhancer redundancy, which explains gene expression strength rather than expression patterns. The online FANTOM5 enhancer atlas represents a unique resource for studies on cell-type-specific enhancers and gene regulation.
C++ programming design of the LEGO MINDSTORMS NXT robot system
Institute of Scientific and Technical Information of China (English)
薛清平; 李卫红
2012-01-01
As the LEGO MINDSTORMS NXT robot system spreads through primary and middle schools in China, the issues surrounding it, in particular its secondary development, deserve further in-depth study. Using the FantomSDK files of the LEGO MINDSTORMS NXT robot system, C++ programming, and the LEGO MINDSTORMS NXT Bluetooth Developer Kit files in place of the NXT-G and RoboLab software, full control of NXT devices is achieved, providing help and support to young people and science teachers pursuing technological innovation with the LEGO NXT robot system.
A nonlinear computational aeroelasticity model for aircraft wings
Feng, Zhengkun
This thesis presents the development of a nonlinear aeroelasticity code based on a robust CFD solver, applied to flexible wings in transonic flow. The full mathematical model is based on the structural equations of motion and the Euler equations for inviscid transonic flows. The strategy of treating such a complex system by staggered coupling offers advantages for developing a modular code that is easy to evolve. The mismatch between the two computational grids at the fluid-structure interface, due to the different sizes and types of elements used by the flow and structural solvers, is resolved by adding a dedicated module. The transfer of information between these two grids satisfies the law of energy conservation. The nonlinear fluid-dynamics model, based on an Euler-Lagrange description, is discretised on the moving mesh. The structural model is assumed linear, and modal superposition is applied to reduce computation time and memory requirements. Another structural model, based directly on the finite element method, is also developed; it is likewise coupled into the code to demonstrate its future extension to more general applications. Nonlinearity is another source of system complexity, although here it is confined to the aerodynamic model. The nonlinear GMRES algorithm with an ILUT preconditioner is implemented in the CFD solver, where a shock sensor for transonic flows and the SUPG numerical stabilisation technique for convection-dominated flows are applied. A second-order scheme is used for the temporal discretisation. The components of this code are validated by numerical tests. The full model is applied to and validated on the AGARD 445.6 aeroelastic wing in the...
Dong, Liang; Li, Taosheng; Liu, Chunyu
2015-04-01
A set of fluence-to-effective dose conversion coefficients for external exposure to muons was investigated for Chinese hybrid reference phantoms, both male and female. Both polygon meshes and Non-Uniform Rational B-Spline (NURBS) surfaces were used to describe the boundaries of the organs and tissues in these phantoms. The 3D-DOCTOR and Rhinoceros software packages were used to polygonise the colour slice images and to generate the NURBS surfaces, respectively. The voxelisation was completed using the BINVOX software, and the assembly was finished using MATLAB codes. The voxel resolutions were 0.22 × 0.22 × 0.22 cm(3) and 0.2 × 0.2 × 0.2 cm(3) for the male and female phantoms, respectively. All parts of the final phantoms were matched to their reference organ masses within a tolerance of ±5%. The conversion coefficients for negative and positive muons were calculated with the FLUKA transport code, for 21 external monoenergetic beams ranging from 0.01 GeV to 100 TeV in 5 different geometrical conditions of irradiation.
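The ±5% organ-mass matching step can be sketched as follows. The voxel count, density and reference mass below are hypothetical illustrative values, not the paper's data; a real workflow would count labelled voxels in the segmented arrays produced by BINVOX.

```python
def organ_mass_g(n_voxels, voxel_cm, density_g_cm3):
    """Organ mass from voxel count, cubic voxel edge (cm) and density."""
    return n_voxels * voxel_cm**3 * density_g_cm3

def within_tolerance(mass_g, reference_g, tol=0.05):
    """True if the voxelised mass matches the reference within +/- tol."""
    return abs(mass_g - reference_g) / reference_g <= tol

# Hypothetical liver: 310,000 voxels of 0.2 cm edge at 1.05 g/cm^3.
mass = organ_mass_g(310_000, 0.2, 1.05)
ok = within_tolerance(mass, 2600.0)
```

When the check fails, the segmentation is typically dilated or eroded (or the density adjusted within ICRP limits) until the voxelised mass falls inside the tolerance band.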
Use of the GATE Monte Carlo package for dosimetry applications
Energy Technology Data Exchange (ETDEWEB)
Visvikis, D. [INSERM U650, LaTIM, University Hospital Medical School, F 29609 Brest (France)]. E-mail: Visvikis.Dimitris@univ-brest.fr; Bardies, M. [INSERM U601, CHU Nantes, F 44093 Nantes (France); Chiavassa, S. [INSERM U601, CHU Nantes, F 44093 Nantes (France); Danford, C. [Department of Medical Physics, MSKCC, New York (United States); Kirov, A. [Department of Medical Physics, MSKCC, New York (United States); Lamare, F. [INSERM U650, LaTIM, University Hospital Medical School, F 29609 Brest (France); Maigne, L. [Departement de Curietherapie-Radiotherapie, Centre Jean Perrin, F 63000 Clemont-Ferrand (France); Staelens, S. [UGent-ELIS, St-Pietersnieuwstraat, 41, B 9000 Gent (Belgium); Taschereau, R. [CRUMP Institute for Molecular Imaging, UCLA, Los Angeles (United States)
2006-12-20
One of the roles for Monte Carlo (MC) simulation studies is in the area of dosimetry. A number of different codes dedicated to dosimetry applications are available and widely used today, such as MCNP, EGSnrc and PTRAN. However, such codes do not easily facilitate the description of complicated 3D sources or emission tomography systems and associated data flow, which may be useful in different dosimetry application domains. Such problems can be overcome by the use of specific MC codes such as GATE (GEANT4 Application to Tomographic Emission), which is based on Geant4 libraries, providing a scripting interface with a number of advantages for the simulation of SPECT and PET systems. Despite this potential, its major disadvantage is in terms of efficiency involving long execution times for applications such as dosimetry. The strong points and disadvantages of GATE in comparison to other dosimetry specific codes are discussed and illustrated in terms of accuracy, efficiency and flexibility. A number of features, such as the use of voxelised and moving sources, as well as developments such as advanced visualization tools and the development of dose estimation maps allowing GATE to be used for dosimetry applications are presented. In addition, different examples from dosimetry applications with GATE are given. Finally, future directions with respect to the use of GATE for dosimetry applications are outlined.
Hickson, Kevin J; O'Keefe, Graeme J
2014-09-01
The scalable XCAT voxelised phantom was used with the GATE Monte Carlo toolkit to investigate the effect of voxel size on dosimetry estimates of internally distributed radionuclides calculated using direct Monte Carlo simulation. A uniformly distributed Fluorine-18 source was simulated in the kidneys of the XCAT phantom, with the organ self dose (kidney ← kidney) and organ cross dose (liver ← kidney) being calculated for a number of organ and voxel sizes. Patient-specific dose factors (DF) from a clinically acquired FDG PET/CT study were also calculated for the kidney self dose and the liver ← kidney cross dose. Using the XCAT phantom, it was found that very small voxel sizes are required to achieve accurate calculation of organ self dose, and that a voxel size of 2 mm or less is suitable for accurate calculation of organ cross dose. To compensate for insufficient voxel sampling, a correction factor is proposed; it is applied to the patient-specific dose factors calculated with the native voxel size of the PET/CT study.
Energy Technology Data Exchange (ETDEWEB)
Mouriquand, C.; Patet, J.; Gilly, C.; Wolff, C
1966-07-01
Radioinduced chromosomal aberrations were studied in vitro on leukocytes of human peripheral blood, sampled from four subjects, after X irradiation at 25, 50, 100, 200 and 300 R. The numerical and structural anomalies were examined on 600 karyotypes. The relationship between these disorders and the dose delivered to the blood is discussed, and an explanation of their mechanism of formation is tentatively given. (authors)
Sustaining Tunisian SMEs' Competitiveness in the Knowledge Society
Del Vecchio, Pasquale; Elia, Gianluca; Secundo, Giustina
The paper aims to contribute to the debate about the knowledge and digital divide affecting countries' competitiveness in the knowledge society. A survey based on qualitative and quantitative data collection was performed to analyse the level of ICT and e-Business adoption among Tunisian SMEs. The results show that increasing SME competitiveness requires investment in all the components of intellectual capital: human capital (the knowledge, skills and abilities of people using ICTs), structural capital (supportive infrastructure such as buildings, software, processes, patents, trademarks and proprietary databases) and social capital (relations and collaboration inside and outside the company). To this end, the LINCET "Laboratoire d'Innovation Numerique pour la Competitivité de l'Entreprise Tunisienne" project is finally proposed as a coherent proposition to foster the growth of all the components of intellectual capital for the benefit of the competitiveness of Tunisian SMEs.
Bergeron, Alain
This research aims at the optical implementation of neural networks. Two different architectures are proposed. The first is an associative memory that associates an arbitrary output with any object while preserving information about its position. The second architecture, a neural classifier for robotic control, identifies an input and sorts it into different categories; its output is compatible with standard digital systems. To realise these architectures, a modular approach is favoured, with the correlator as the basic building block of the implementations. Additional modules are introduced to carry out the neural operations properly. The first of these is an optoelectronic threshold, implementing a nonlinear function, an essential element of neural networks. The second is an opto-digital encoder, useful for classifying objects. The problem of recording the memory is addressed by means of global iterative coding.
Hilton, James L; Arlot, Jean-Eudes; Bell, Steven A; Capitaine, Nicole; Fienga, Agnes; Folkner, William M; Gastineau, Mickael; Pavlov, Dmitry; Pitjeva, Elena V; Skripnichenko, Vladimir I; Wallace, Patrick
2015-01-01
The IAU Commission 4 Working Group on Standardizing Access to Ephemerides recommends the use of the Spacecraft and Planet Kernel (SPK) format as a standard format for the position ephemerides of planets and other natural solar system bodies, and the use of the Planetary Constants Kernel (PCK) format for the orientation of these bodies. It further recommends that other supporting data be stored in a text PCK. These formats were developed for use by the SPICE Toolkit by the Navigation and Ancillary Information Facility of NASA's Jet Propulsion Laboratory (JPL). The CALCEPH library developed by the Institut de mecanique celeste de calcul des ephemerides (IMCCE) is also able to make use of these files. High accuracy ephemerides available in files conforming to the SPK and PCK formats include: the Development Ephemerides (DE) from JPL, Integrateur Numerique Planetaire de l'Observatoire de Paris (INPOP) from IMCCE, and the Ephemerides Planets and the Moon (EPM), developed by the Institute for Applied Astronomy (IAA...
Wavelets and multiscale signal processing
Cohen, Albert
1995-01-01
Since their appearance in the mid-1980s, wavelets and, more generally, multiscale methods have become powerful tools in mathematical analysis and in applications to numerical analysis and signal processing. This book is based on "Ondelettes et Traitement Numerique du Signal" by Albert Cohen. It has been translated from French by Robert D. Ryan and extensively updated by both Cohen and Ryan. It studies the existing relations between filter banks and wavelet decompositions and shows how these relations can be exploited in the context of digital signal processing. Throughout, the book concentrates on the fundamentals. It begins with a chapter on the concept of multiresolution analysis, which contains complete proofs of the basic results. The description of filter banks that are related to wavelet bases is elaborated in both the orthogonal case (Chapter 2), and in the biorthogonal case (Chapter 4). The regularity of wavelets, how this is related to the properties of the filters and the importance of regularity for t...
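The relation between filter banks and wavelet decompositions that the book develops can be illustrated with the simplest orthogonal case, the Haar pair: the analysis bank splits a signal into downsampled averages and details, and the synthesis bank reconstructs it exactly. This is a generic sketch, not code from the book.

```python
import math

S = math.sqrt(2.0)

def haar_analysis(x):
    """One level of the Haar filter bank: lowpass (averages) and
    highpass (details), each downsampled by two."""
    low = [(x[2 * i] + x[2 * i + 1]) / S for i in range(len(x) // 2)]
    high = [(x[2 * i] - x[2 * i + 1]) / S for i in range(len(x) // 2)]
    return low, high

def haar_synthesis(low, high):
    """Invert the analysis bank: upsample both branches and recombine."""
    x = []
    for a, d in zip(low, high):
        x += [(a + d) / S, (a - d) / S]
    return x

signal = [4.0, 2.0, 5.0, 5.0, 1.0, 0.0, 3.0, 7.0]
low, high = haar_analysis(signal)
rebuilt = haar_synthesis(low, high)   # perfect reconstruction
```

Iterating the analysis step on the lowpass branch yields the multiresolution decomposition discussed in the book's first chapter.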
Energy Technology Data Exchange (ETDEWEB)
Vallee, R.L. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1968-07-01
The mathematical study of binary sets constitutes the subject matter of binary analysis, whose purpose is to develop methods that are at once simple, rigorous and practical, intended for technicians, engineers and all those directly concerned with digital information processing, a fast-expanding discipline that already tends to play an essential, if not decisive, role in nuclear electronics as well as in several other research areas. (authors)
Simulation de la formabilite des alliages d'aluminium AA5754 et AA6063
Eljaafari, Samira
Weight-reduction requirements have translated concretely into the introduction of new, lighter grades into automotive structures. Aluminium alloys have thus begun to be integrated into the structural parts of several vehicles. The low density of aluminium alloys (2.7 g/cm3) lightens the vehicle, which reduces fuel consumption and hence greenhouse gas emissions. Necking and fracture are the main failure modes that lead to the systematic scrapping of parts; improving the prediction of the appearance of these defects in simulation therefore contributes to better control of the process. In this doctoral work, two models are developed to simulate the large-deformation behaviour of aluminium alloys: a Taylor-type polycrystalline model and a model with one or more finite elements per grain. Forming limit diagrams (FLDs) for the aluminium alloys AA5754 and AA6063 were simulated numerically using a finite element formulation for polycrystals based on the Taylor hypothesis. Both conventional and hydroforming FLDs were traced. The effect of strain paths on the formability of aluminium alloys was also studied. Finally, numerical simulations using electron backscatter diffraction (EBSD) data for alloy AA5754 were carried out with the one-or-more-elements-per-grain model, run with different hardening laws (Asaro, Bassani and power-law). Keywords: formability; aluminium alloy; hydroforming; crystallographic slip; hardening; parallel computing; forming limit diagram (FLD); electron diffraction.
Decoupling Linear and Nonlinear Associations of Gene Expression
Itakura, Alan
2013-05-01
The FANTOM consortium has generated a large gene expression dataset covering different cell lines and tissue cultures using the single-molecule sequencing technology of HeliscopeCAGE. This provides a unique opportunity to investigate novel associations between gene expression over time and different cell types. Here, we create a MATLAB wrapper for a powerful and computationally intensive statistic known as the Maximal Information Coefficient, and calculate it for a large, comprehensive dataset containing gene expression of a variety of differentiating tissues. We then distinguish between linear and nonlinear associations and create gene association networks. Following this analysis, we are able to identify clusters of linear gene associations that in turn associate nonlinearly with other clusters of linearity, providing insight into much more complex connections between gene expression patterns than previously anticipated.
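The linear/nonlinear distinction can be sketched with a simplified stand-in for the Maximal Information Coefficient: compare a coarse histogram estimate of mutual information with the Pearson correlation. A clearly positive mutual information together with a near-zero |r| flags a nonlinear association. This is a pure-Python illustration on synthetic data; a real analysis would use a proper MIC implementation.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient (detects linear association only)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def mutual_info(xs, ys, bins=4):
    """Histogram estimate of mutual information, in bits."""
    def bin_of(v, vs):
        lo, hi = min(vs), max(vs)
        return min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1)
    n = len(xs)
    pxy, px, py = {}, {}, {}
    for x, y in zip(xs, ys):
        bx, by = bin_of(x, xs), bin_of(y, ys)
        pxy[bx, by] = pxy.get((bx, by), 0.0) + 1.0 / n
        px[bx] = px.get(bx, 0.0) + 1.0 / n
        py[by] = py.get(by, 0.0) + 1.0 / n
    return sum(p * math.log2(p / (px[bx] * py[by]))
               for (bx, by), p in pxy.items())

xs = [i / 50 for i in range(-50, 51)]
parabola = [x * x for x in xs]     # purely nonlinear association
r = pearson_r(xs, parabola)        # near zero: invisible to Pearson
mi = mutual_info(xs, parabola)     # clearly positive: dependence detected
```

Thresholding such association scores across all gene pairs, then separating pairs with high |r| from those without, yields the linear and nonlinear association networks described above.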
Distinguishing protein-coding from non-coding RNAs through support vector machines.
Directory of Open Access Journals (Sweden)
Jinfeng Liu
2006-04-01
RIKEN's FANTOM project has revealed many previously unknown coding sequences, as well as an unexpected degree of variation in transcripts resulting from alternative promoter usage and splicing. Ever more transcripts that do not code for proteins have been identified by transcriptome studies in general. Increasing evidence points to the important cellular roles of such non-coding RNAs (ncRNAs). The distinction of protein-coding RNA transcripts from ncRNA transcripts is therefore an important problem in understanding the transcriptome and carrying out its annotation. Very few in silico methods have specifically addressed this problem. Here, we introduce CONC (for "coding or non-coding"), a novel method based on support vector machines that classifies transcripts according to features they would have if they were coding for proteins. These features include peptide length, amino acid composition, predicted secondary structure content, predicted percentage of exposed residues, compositional entropy, number of homologs from database searches, and alignment entropy. Nucleotide frequencies are also incorporated into the method. Confirmed coding cDNAs for eukaryotic proteins from the Swiss-Prot database constituted the set of true positives, ncRNAs from RNAdb and NONCODE the true negatives. Ten-fold cross-validation suggested that CONC distinguished coding RNAs from ncRNAs at about 97% specificity and 98% sensitivity. Applied to 102,801 mouse cDNAs from the FANTOM3 dataset, our method reliably identified over 14,000 ncRNAs and estimated the total number of ncRNAs to be about 28,000.
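The support-vector step can be sketched with a minimal linear SVM trained by subgradient descent on the regularised hinge loss. The two features and all data points below are hypothetical stand-ins for CONC's real feature set (peptide length, amino acid composition, etc.), and this sketch is not the CONC implementation; labels are +1 for coding and -1 for non-coding.

```python
def train_linear_svm(data, labels, eta=0.1, lam=0.01, epochs=500):
    """Linear SVM: subgradient descent on hinge loss with L2 regularisation."""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
            if margin < 1.0:      # inside the margin: move the boundary
                w = [wj + eta * (y * xj - lam * wj) for wj, xj in zip(w, x)]
                b += eta * y
            else:                 # correct side: only weight decay
                w = [wj - eta * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0.0 else -1

# Hypothetical features: (fraction of transcript in longest ORF,
# relative stop-codon density) -- illustrative values only.
coding = [(0.90, 0.10), (0.80, 0.05), (0.85, 0.20), (0.95, 0.15)]
noncoding = [(0.20, 0.80), (0.10, 0.90), (0.30, 0.70), (0.15, 0.85)]
data = coding + noncoding
labels = [1] * len(coding) + [-1] * len(noncoding)
w, b = train_linear_svm(data, labels)
accuracy = sum(predict(w, b, x) == y
               for x, y in zip(data, labels)) / len(data)
```

Cross-validation over held-out transcripts, as done for CONC, would replace the in-sample accuracy computed here.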
Dose reduction in CT using bismuth shielding: measurements and Monte Carlo simulations.
Chang, Kyung-Hwan; Lee, Wonho; Choo, Dong-Myung; Lee, Choon-Sik; Kim, Youhyun
2010-03-01
In this research, the potential dose reduction achieved by bismuth shielding in computed tomography was evaluated using direct measurements and Monte Carlo calculations. The patient dose was measured using an ionisation chamber in a polymethylmethacrylate (PMMA) phantom that had five measurement points, at the centre and periphery. Simulations were performed using the MCNPX code. For both the bare and the bismuth-shielded phantom, the differences in dose values between experiment and simulation were within 9%. The dose reductions due to the bismuth shielding were 1.2-55%, depending on the measurement point, X-ray tube voltage and type of shielding. The dose reduction was significant for the positions covered by the bismuth shielding (34-46% for the head phantom and 41-55% for the body phantom, on average) and negligible for the other peripheral positions. Artefacts in the reconstructed images were minimal when the distance between the shielding and the organs was >1 cm, and hence the shielding should be selectively located to protect critical organs such as the eye lens, thyroid and breast. The simulation results using the PMMA phantom were compared with those using a realistic voxelised phantom (KTMAN-2). For the eyes and breast, the simulation results using the PMMA and KTMAN-2 phantoms were similar, while for the thyroid they differed owing to discrepancies in the locations and sizes of the phantoms. The dose reductions achieved by bismuth and lead shielding were also compared, and the difference between the two materials was less than 2-3%.
Invariant recognition of 3-D objects and SONG correlation
Roy, Sebastien
This thesis proposes solutions to two problems in automatic pattern recognition: the invariant recognition of three-dimensional objects from intensity images, and recognition robust to the presence of disjoint noise. A system combining angular scanning of the images with a feature-space trajectory classifier achieves invariant recognition of three-dimensional objects. Robustness to disjoint noise is obtained by means of the SONG correlation. We achieved recognition invariant to translations, rotations and scale changes of three-dimensional objects from segmented intensity images, using angular scanning and a feature-space trajectory classifier. To obtain translation invariance, the centre of the angular scan coincides with the geometric centre of the image. The angular scan produces a feature vector that is invariant to scale changes of the image, and it converts rotations about an axis parallel to the line of sight into translations of the signal. The feature-space trajectory classifier represents a rotation about an axis perpendicular to the line of sight as a curve in feature space. Classification is performed by measuring the distance from the feature vector of the image to be recognized to the trajectories stored in that space. Our numerical results show a classification rate reaching 98% on an image bank of 5 military vehicles. The sliced orthogonal nonlinear generalized (SONG) correlation treats the gray levels present in an image independently. It sums the linear correlations of the binary images sharing the same gray level. This correlation is equivalent to counting the number of pixels located at the same relative positions and having the same intensities in two images. We present
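The SONG correlation described in this entry sums, over gray levels, the linear correlations of the binary slices of the two images; at zero shift this reduces to counting pixels that hold the same intensity at the same position. A minimal sketch of that zero-shift value, assuming small integer-valued gray-level images (a full SONG correlator would evaluate every relative shift):

```python
def song_zero_shift(img_a, img_b, levels=256):
    """Zero-shift SONG value for two equal-sized gray-level images."""
    total = 0
    for level in range(levels):
        for row_a, row_b in zip(img_a, img_b):
            for pa, pb in zip(row_a, row_b):
                # binary-slice product: 1 only when both pixels sit at `level`
                total += (pa == level) * (pb == level)
    return total

a = [[0, 1], [2, 3]]
b = [[0, 1], [2, 0]]
matches = song_zero_shift(a, b, levels=4)   # 3 pixels agree in position and intensity
```

Because each gray level is treated independently, a noise pixel only perturbs the slice it falls into, which is what gives the method its robustness to disjoint noise.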
Modelling of the reactive synthesis of ultrafine powders in a thermal plasma reactor
Desilets, Martin
This thesis belongs to the field of mathematical modelling of inert and reactive thermal plasma flows. More precisely, it aims to fill the gaps in existing models by paying particular attention to multicomponent transport phenomena and to the prediction of chemical transformations. To meet these needs and thus advance the field, a global model has been developed. It combines the resolution of conservation equations for mass, energy and momentum. The generation of an inductive high-frequency (h.f.) plasma is handled through equations representing the electromagnetic fields. The nucleation and growth of ultrafine powders are included in the model via the analysis of the principal moments of the particle size distribution. Finally, all physico-chemical phenomena of importance in a medium such as thermal plasmas, as well as their interactions, are considered. The model is applied here to the analysis of three different and complementary problems. The first concerns the study of the gaseous mixing of a cold jet (He, N2 or O2) injected into the core of an argon/hydrogen or argon/oxygen discharge. Comparison of the model predictions with experimental measurements obtained with an enthalpy probe allows a partial validation of the model. The second problem deals with the numerical study of methane pyrolysis in an h.f. plasma reactor. It highlights the convergence difficulties of the numerical method when applied to the resolution of high-temperature reactive flows. Finally, the last subject addressed in this thesis, the systematic analysis of the main operating conditions of an h.f. reactor used for the reactive synthesis of ultrafine silicon powders, engages all the theoretical elements of the model. It indeed involves the thermal decomposition of a gaseous precursor, silicon tetrachloride, the
Conductivity in the two-dimensional Hubbard model at weak coupling
Bergeron, Dominic
The two-dimensional (2D) Hubbard model is often considered the minimal model for the copper-oxide-based high-critical-temperature superconductors (cuprates). On a square lattice, this model exhibits the phases common to all cuprates: the antiferromagnetic phase, the superconducting phase, and the so-called pseudogap phase. It has no exact solution; however, several approximate methods allow its properties to be studied numerically. Optical and transport properties are well characterized in the cuprates and are therefore good candidates for validating a theoretical model and for helping to better understand the physics of these materials. This thesis deals with the calculation of these properties for the 2D Hubbard model at weak to intermediate coupling. The calculation method used is the two-particle self-consistent (TPSC) approach, which is non-perturbative and includes the effect of spin and charge fluctuations at all wavelengths. The complete derivation of the conductivity expression in the TPSC approach is presented. This expression contains what are called vertex corrections, which account for correlations between quasiparticles. To make the numerical computation of these corrections feasible, algorithms using, among other things, fast Fourier transforms and cubic splines were developed. The calculations are done for the square lattice with nearest-neighbour hopping, around the antiferromagnetic critical point. At dopings below the critical point, the optical conductivity shows a mid-infrared bump at low temperature, as observed in several cuprates. In the resistivity as a function of temperature, insulating behaviour is found in the pseudogap when vertex corrections are neglected, and metallic behaviour when they are taken into account. Near the critical point, the resistivity is linear in T at low temperature and becomes
Energy Technology Data Exchange (ETDEWEB)
Jan, S.; Laedermann, J.P.; Bochud, F.; Ferragut, A.; Bordy, J.M.; Parisi, L.L.; Abou-Khalil, R.; Longeot, M.; Kitsos, S.; Groetz, J.E.; Villagrasa, C.; Daures, J.; Martin, E.; Henriet, J.; Tsilanizara, A.; Farah, J.; Uyttenhove, W.; Perrot, Y.; De Carlan, L.; Vivier, A.; Kodeli, I.; Sayah, R.; Hadid, L.; Courageot, E.; Fritsch, P.; Davesne, E.; Michel, X.
2010-07-01
This document gathers the slides of the available presentations given during these conference days. Twenty-seven presentations are assembled in the document and deal with: 1 - GATE: calculation code for medical imaging, radiotherapy and dosimetry (S. Jan); 2 - estimation of conversion factors for the measurement of the ambient dose equivalent rate by in-situ spectroscopy (J.P. Laedermann); 3 - geometry-specific calibration factors for nuclear medicine activity meters (F. Bochud); 4 - Monte Carlo simulation of a rare gases measurement system - calculation and validation, ASGA/VGM system (A. Ferragut); 5 - design of a realistic radiation field for the calibration of the dosemeters used in interventional radiology/cardiology (medical personnel dosimetry) (J.M. Bordy); 6 - determination of the position and height of the KALINA facility chimney at CEA Cadarache (L.L. Parisi); 7 - MERCURAD{sup TM} - 3D simulation software for dose rate calculation (R. Abou-Khalil); 8 - PANTHERE - 3D software for gamma dose rate simulation of complex nuclear facilities (M. Longeot); 9 - radioprotection, from the design to the exploitation of radioactive materials transportation containers (S. Kitsos); 10 - post-simulation processing of MCNPX responses in neutron spectroscopy (J.E. Groetz); 11 - latest developments of the Geant4 Monte Carlo code for track simulation in liquid water at the molecular scale (C. Villagrasa); 12 - calculation of H{sub p}(3)/K{sub air} conversion coefficients using the PENELOPE Monte Carlo code and comparison with MCNP calculation results (J. Daures); 13 - artificial neural networks, a new alternative to Monte Carlo calculations for radiotherapy (E. Martin); 14 - use of case-based reasoning for the reconstruction and handling of voxelized phantoms (J. Henriet); 15 - resolution of the radioactive decay inverse problem for dose calculation in radioprotection (A. Tsilanizara); 16 - use of NURBS-type phantoms for the study of the morphological factors influencing
Image Processing for Pattern Recognition in the Presence of Signal-Dependent Noise
Terrillon, Jean-Christophe
In image processing, very little research to date has considered the problem of pattern recognition in the presence of signal-dependent noise. The originality of this work lies, on the one hand, in the study of correlation-based pattern recognition, invariant under translation and simultaneously invariant under rotation and translation, in the presence of signal-dependent noise and, on the other hand, in the development of new image-processing methods that preserve recognition in the presence of noise where existing methods have failed. We mainly consider speckle, which can arise in optical correlators operating under coherent illumination. The new methods we propose consist of a pre-processing of the noisy images based on estimation theory. By means of numerical simulations and a statistical analysis, we show the advantages of the pre-processing, in particular for recognition with correlation filters invariant under rotation and translation.
Energy Technology Data Exchange (ETDEWEB)
Dalfes, A.; Beliard, L.; Cazemajou, J.; Froelicher, B. [Commissariat a l'Energie Atomique, Saclay (France). Centre d'Etudes Nucleaires
1967-07-01
Auto and cross-correlation functions of signals given by neutron detectors situated in a subcritical nuclear reactor are determined by a numerical method. Values of the prompt neutron decay constant obtained by means of the autocorrelation function of each detector and the cross-correlation function of the two detectors are compared to the reference value given by a classical pulsed neutron measurement. Agreement between results seems to be satisfactory. (authors)
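The numerical step this entry describes, extracting a decay constant from a detector signal's autocorrelation function, can be sketched as follows. The synthetic signal and the lag-ratio estimator below are illustrative assumptions, not the authors' exact procedure; for a signal with an exponential autocorrelation, the ratio of successive lags of R[k] recovers the decay constant.

```python
import math

def autocorrelation(x, max_lag):
    """Raw autocorrelation R[k] = sum_n x[n] * x[n+k], for k = 0..max_lag."""
    return [sum(x[n] * x[n + k] for n in range(len(x) - k))
            for k in range(max_lag + 1)]

# Synthetic exponentially decaying signal with decay constant alpha = 0.5
alpha = 0.5
signal = [math.exp(-alpha * n) for n in range(200)]

r = autocorrelation(signal, max_lag=5)
# For such a signal, R[k+1]/R[k] ~ exp(-alpha), so the constant is recovered by:
alpha_est = -math.log(r[1] / r[0])
```

With real detector counts one would fit the decaying part of R[k] over many lags rather than a single ratio, but the principle (decay constant from the autocorrelation's slope on a log scale) is the same.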
Development of non-intrusive diagnostic techniques by optical tomography
Dubot, Fabien
for the damped Gauss-Newton algorithm, and regularizations based on the mesh and on the use of Sobolev gradients, uniform or spatially dependent, when extracting the gradient of the cost function, for the BFGS method. The numerical results indicate that the BFGS algorithm outperforms the damped Gauss-Newton algorithm with respect to the quality of the reconstructions obtained, the computation time, and the ease of selecting the regularization parameter. Second, a study of the quasi-independence of the optimal Tikhonov penalization parameter with respect to the dimension of the control space in inverse problems of estimating spatially dependent functions is carried out. This study follows an observation made in the first part of this work, where the Tikhonov parameter, determined by the L-curve method, turns out to be independent of the dimension of the control space in the under-determined case. This hypothesis is proved theoretically and then verified numerically, first on a linear inverse heat conduction problem and then on the nonlinear inverse problem of diffuse optical tomography (TOD). The numerical verification relies on determining an optimal Tikhonov parameter, defined as the one that minimizes the discrepancies between the targets and the reconstructions. The theoretical proof relies on Morozov's discrepancy principle in the linear case, while in the nonlinear case it relies essentially on the assumption that the radiative functions to be reconstructed are normally distributed random variables. In conclusion, the thesis shows that the Tikhonov parameter can be determined using a parametrization of the control variables associated with a coarse mesh in order to reduce computation times. Third, a wavelet-based multiscale inverse method combined with the BFGS algorithm is developed. This method, which relies on a
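Tikhonov penalization, central to the entry above, can be illustrated on a toy problem. This is a minimal sketch, not the thesis's reconstruction code: it minimizes ||Ax - b||^2 + lam^2 ||x||^2 by solving the normal equations (A^T A + lam^2 I) x = A^T b with plain Gaussian elimination on a tiny two-variable system.

```python
def tikhonov_solve(A, b, lam):
    """Solve the Tikhonov-regularized least-squares problem for small systems."""
    n = len(A[0])
    # Normal-equation matrix M = A^T A + lam^2 I and right-hand side A^T b
    M = [[sum(A[k][i] * A[k][j] for k in range(len(A)))
          + (lam * lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    rhs = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(n)]
    # Gaussian elimination (no pivoting: fine for this SPD toy matrix)
    for col in range(n):
        piv = M[col][col]
        for row in range(col + 1, n):
            f = M[row][col] / piv
            M[row] = [m - f * p for m, p in zip(M[row], M[col])]
            rhs[row] -= f * rhs[col]
    # Back-substitution
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (rhs[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
x0 = tikhonov_solve(A, b, lam=0.0)     # unregularized: exact least-squares solution
x_reg = tikhonov_solve(A, b, lam=10.0) # heavy penalty shrinks the solution toward zero
```

Sweeping `lam` and plotting the residual norm against the solution norm traces out the L-curve used in the thesis to select the parameter.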
EMBO Course “Formal Analysis of Genetic Regulation”
1979-01-01
The EMBO course on "Formal Analysis of Genetic Regulation". A course entitled "Formal Analysis of Genetic Regulation" was held at the University of Brussels from 6 to 16 September 1977 under the auspices of EMBO (European Molecular Biology Organization). As indicated by the title of the book (but not explicitly enough by the title of the course), the main emphasis was put on the dynamic analysis of systems using logical methods, that is, methods in which functions and variables take only a limited number of values, typically two. In this respect, this course was complementary to an EMBO course using continuous methods which was held some months later in Israel by Prof. Segel. People from four very different laboratories took an active part in teaching our course in Brussels: Drs Anne LEUSSLER and Philippe VAN HAM, from the laboratory of Prof. Jean FLORINE (Laboratoire des Systemes logiques et numeriques, Faculte des Sciences appliquees, Universite Libre de Bruxelles); Dr Stuart KAUFFMAN (Dept. of Biochemist...
Invariant Pattern Recognition with the Fourier-Mellin Filter and a Neural Network
Lejeune, Claude
The Fourier-Mellin filter is applied to a set of objects to generate vectors that are invariant under translation, rotation and scale change. It is the first method to obtain these three invariances simultaneously. The invariant vectors are computed both numerically and optically. The vectors thus obtained are used as inputs to a backpropagation neural network that classifies the prototypes presented to it. The dimensions of the invariant vectors are very small compared with the input objects, which makes it possible to use a network with a limited number of connections. The network can therefore be trained in relatively short times on a PC-type computer. Once the network is trained, we present it with invariant vectors derived from objects belonging to the training set but having undergone rotations and scale changes. This new group forms the recall set. The performance of the method is very good, with success rates above 85%.
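The Fourier-Mellin chain starts from the fact that the magnitude of a Fourier transform is invariant under (cyclic) translation of the input. A minimal 1-D demonstration with a hand-rolled DFT, as an illustration only: the full descriptor also needs a log-polar (Mellin) stage to convert rotation and scale into further translations.

```python
import cmath

def dft_magnitudes(x):
    """Magnitudes |X[k]| of the discrete Fourier transform of a real sequence."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

signal = [0.0, 1.0, 2.0, 1.0]
shifted = signal[1:] + signal[:1]          # cyclic translation of the signal
m1, m2 = dft_magnitudes(signal), dft_magnitudes(shifted)
# Translation only changes the DFT's phase, so the magnitudes coincide:
same = all(abs(a - b) < 1e-9 for a, b in zip(m1, m2))
```

In 2-D, resampling this magnitude spectrum on a log-polar grid turns rotations and scale changes into shifts, so taking a second Fourier magnitude yields the fully invariant vector fed to the classifier.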
Energy Technology Data Exchange (ETDEWEB)
Faure, J.; Gouttefangeas, M.; Levy-Mandel, R.; Vienet, R.; Lago, B.; Loeb, J. [Commissariat a l'Energie Atomique, Saclay (France). Centre d'Etudes Nucleaires
1963-07-01
This is a study of the repulsive electrostatic forces existing inside a proton beam focused by the magnetic field of a circular accelerator. The general equation governing the variation of the beam density with time is recovered by fairly simple reasoning. A numerical method for solving this equation is then developed. The next step is to find an optimum beam; a Gaussian density distribution is proposed, which allows an analytical solution to the problem. (authors)
Prediction of wall heat transfer for a gas turbine combustion chamber
Gosselin, Pierre
Previous work demonstrated that the wall temperature could be predicted accurately (+/-6 to 10%) for a full-scale combustion chamber. For the results obtained from another, otherwise identical chamber of smaller diameter, the wall temperature prediction did not offer the same accuracy. This study is limited to aeronautical gas turbines. The objective was to globally re-evaluate the prediction of the wall temperature of a GHOST-type combustion chamber in the light of the results obtained from the experimental programs, while taking into consideration the influence of different fuel types. Through analysis of the large body of experimental data, modifications were made to the prediction method used for full-scale chambers in order to reduce the prediction error for the 1:2 (low-pressure), 1:3 (low-pressure) and 1:3 (high-pressure) scales. The combustion chamber was also modelled numerically. The numerical analysis showed that the FLUENT/UNS code predicted the cold flow inside the full-scale (1:1) GHOST combustion chamber very well. The prediction of the temperature profiles inside the chamber was acceptable. However, a shortcoming was observed in the code's evaporation model.
Institute of Scientific and Technical Information of China (English)
贺琼琼
2012-01-01
The development of the Internet and digital technologies has posed an enormous shock and challenge to the copyright system, and copyright protection has become a major problem faced by countries worldwide. France, the birthplace of the Declaration of the Rights of Man, is also the country where copyright is most rigorously protected. In the online environment, how should freedom of information be reconciled with copyright protection? The latest developments in French legislation against online piracy deserve attention and offer useful lessons.
Respiratory motion correction of PET/CT imaging based on B-spline
Institute of Scientific and Technical Information of China (English)
潘李鹏; 贺建峰; 封硕; 崔锐; 马磊; 相艳; 易三莉; 张俊
2015-01-01
The degradation of PET/CT image quality caused by respiratory motion can affect the physician's diagnosis. The commonly used respiratory motion correction technology is gating, but it still has limitations. This paper proposes a new method that uses CT images to extract respiratory motion features based on B-splines in order to correct for respiration. First, the sequence of CT images corresponding to the PET images within the same respiratory cycle is obtained, and the motion features of the CT sequence within the respiratory cycle are extracted by B-spline registration. The feature parameters of the CT sequence are then transformed to the corresponding PET image sequence to perform the motion correction. Tests on a geometrically deformed phantom and a voxelised phantom show that the proposed method clearly improves the quality of PET/CT images degraded by respiratory motion, and that it merits further study.
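The correction in this entry is built on B-splines; below is a hedged sketch of the Cox-de Boor recursion that evaluates B-spline basis functions, the building block of any B-spline deformation model. This is generic spline machinery, not the authors' registration code, and the uniform knot vector is an illustrative choice.

```python
def bspline_basis(i, degree, knots, t):
    """Value of the i-th B-spline basis function of the given degree at t."""
    if degree == 0:
        # Half-open indicator of the i-th knot span
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    # Cox-de Boor recursion, with 0/0 conventionally treated as 0
    left_den = knots[i + degree] - knots[i]
    right_den = knots[i + degree + 1] - knots[i + 1]
    left = 0.0 if left_den == 0 else \
        (t - knots[i]) / left_den * bspline_basis(i, degree - 1, knots, t)
    right = 0.0 if right_den == 0 else \
        (knots[i + degree + 1] - t) / right_den * bspline_basis(i + 1, degree - 1, knots, t)
    return left + right

knots = list(range(10))   # uniform knot vector 0..9
t = 4.5
# Cubic (degree-3) basis functions sum to 1 inside the valid parameter span,
# which is what lets a weighted sum of control points act as a smooth deformation:
total = sum(bspline_basis(i, 3, knots, t) for i in range(len(knots) - 4))
```

In registration, a grid of control-point displacements weighted by these basis functions defines a smooth deformation field whose parameters are optimized against an image-similarity measure.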
Isoforms of the Erythropoietin receptor in dopaminergic neurons of the Substantia Nigra.
Marcuzzi, Federica; Zucchelli, Silvia; Bertuzzi, Maria; Santoro, Claudio; Tell, Gianluca; Carninci, Piero; Gustincich, Stefano
2016-11-01
The erythropoietin receptor (EpoR) regulates erythrocyte differentiation in blood. In the brain, EpoR has been shown to protect several neuronal cell types from cell death, including the A9 dopaminergic (DA) neurons of the Substantia Nigra (SN). These cells form the nigrostriatal pathway and are devoted to the control of postural reflexes and voluntary movements. Selective degeneration of A9 DA neurons leads to Parkinson's disease. Using nanoCAGE, a technology that allows the identification of Transcription Start Sites (TSSs) at a genome-wide level, we previously described the promoter-level expression atlas of mouse A9 DA neurons purified with Laser Capture Microdissection (LCM). Here, we identify mRNA variants of the erythropoietin receptor (DA-EpoR) transcribed from alternative TSSs. Experimental validation and full-length cDNA cloning are integrated with gene expression analysis in the FANTOM5 database. In DA neurons, the EpoR gene encodes an N-terminally truncated receptor. Based on STAT5 phosphorylation assays, we show that this new N-terminally truncated EpoR variant acts as a decoy when co-expressed with the full-length form. A similar isoform is also found in human. This work highlights new complexities in the regulation of erythropoietin (EPO) signaling in the brain.
Horie, Masafumi; Yamaguchi, Yoko; Saito, Akira; Nagase, Takahide; Lizio, Marina; Itoh, Masayoshi; Kawaji, Hideya; Lassmann, Timo; Carninci, Piero; Forrest, Alistair R. R.; Hayashizaki, Yoshihide; Suzutani, Tatsuo; Kappert, Kai; Micke, Patrick; Ohshima, Mitsuhiro
2016-01-01
Periodontitis affects over half of the adult population and represents a major public health problem. Previously, we isolated a subset of gingival fibroblasts (GFs) from periodontitis patients, designated periodontitis-associated fibroblasts (PAFs), which were highly capable of collagen degradation. To elucidate their molecular profiles, GFs isolated from healthy and periodontitis-affected gingival tissues were analyzed by CAGE-seq and integrated with the FANTOM5 atlas. GFs from healthy gingival tissues displayed distinctive CAGE profiles compared with fibroblasts from other organ sites, characterized by specific expression of developmentally important transcription factors such as BARX1, PAX9, LHX8, and DLX5. In addition, a novel long non-coding RNA associated with LHX8 was described. Furthermore, we identified DLX5 as regulating expression of the long variant of the RUNX2 transcript, which was specifically active in GFs but not in their periodontitis-affected counterparts. Knockdown of these factors in GFs resulted in altered expression of extracellular matrix (ECM) components. These results indicate that activation of DLX5 and of RUNX2 via its distal promoter represents a unique feature of GFs and is important for ECM regulation. Down-regulation of these transcription factors in PAFs could be associated with their capacity to degrade collagen, which may impact the process of periodontitis. PMID:27645561
The RIKEN integrated database of mammals.
Masuya, Hiroshi; Makita, Yuko; Kobayashi, Norio; Nishikata, Koro; Yoshida, Yuko; Mochizuki, Yoshiki; Doi, Koji; Takatsuki, Terue; Waki, Kazunori; Tanaka, Nobuhiko; Ishii, Manabu; Matsushima, Akihiro; Takahashi, Satoshi; Hijikata, Atsushi; Kozaki, Kouji; Furuichi, Teiichi; Kawaji, Hideya; Wakana, Shigeharu; Nakamura, Yukio; Yoshiki, Atsushi; Murata, Takehide; Fukami-Kobayashi, Kaoru; Mohan, Sujatha; Ohara, Osamu; Hayashizaki, Yoshihide; Mizoguchi, Riichiro; Obata, Yuichi; Toyoda, Tetsuro
2011-01-01
The RIKEN integrated database of mammals (http://scinets.org/db/mammal) is the official undertaking to integrate its mammalian databases produced from multiple large-scale programs that have been promoted by the institute. The database integrates not only RIKEN's original databases, such as FANTOM, the ENU mutagenesis program, the RIKEN Cerebellar Development Transcriptome Database and the Bioresource Database, but also imported data from public databases, such as Ensembl, MGI and biomedical ontologies. Our integrated database has been implemented on the infrastructure of publication medium for databases, termed SciNetS/SciNeS, or the Scientists' Networking System, where the data and metadata are structured as a semantic web and are downloadable in various standardized formats. The top-level ontology-based implementation of mammal-related data directly integrates the representative knowledge and individual data records in existing databases to ensure advanced cross-database searches and reduced unevenness of the data management operations. Through the development of this database, we propose a novel methodology for the development of standardized comprehensive management of heterogeneous data sets in multiple databases to improve the sustainability, accessibility, utility and publicity of the data of biomedical information.
The evolution of human cells in terms of protein innovation.
Sardar, Adam J; Oates, Matt E; Fang, Hai; Forrest, Alistair R R; Kawaji, Hideya; Gough, Julian; Rackham, Owen J L
2014-06-01
Humans are composed of hundreds of cell types. As the genomic DNA of each somatic cell is identical, cell type is determined by what is expressed and when. Until recently, little has been reported about the determinants of human cell identity, particularly from the joint perspective of gene evolution and expression. Here, we chart the evolutionary past of all documented human cell types via the collective histories of proteins, the principal product of gene expression. FANTOM5 data provide cell-type-specific digital expression of human protein-coding genes, and the SUPERFAMILY resource is used to provide protein domain annotation. The evolutionary epoch in which each protein was created is inferred by comparison with the domain annotation of all other completely sequenced genomes. Studying the distribution across epochs of genes expressed in each cell type reveals insights into human cellular evolution in terms of protein innovation. For each cell type, its history of protein innovation is charted based on the genes it expresses. Combining the histories of all cell types enables us to create a timeline of cell evolution. This timeline suggests that our common ancestor Coelomata (cavity-forming animals) provided the innovation required for the innate immune system, whereas the cells that now form the human brain have followed a trajectory of continually accumulating novel proteins since Opisthokonta (the boundary of animals and fungi). We conclude that exaptation of existing domain architectures into new contexts is the dominant source of cell-type-specific domain architectures.
An atlas of human long non-coding RNAs with accurate 5′ ends
Hon, Chung-Chau
2017-02-28
Long non-coding RNAs (lncRNAs) are largely heterogeneous and functionally uncharacterized. Here, using FANTOM5 cap analysis of gene expression (CAGE) data, we integrate multiple transcript collections to generate a comprehensive atlas of 27,919 human lncRNA genes with high-confidence 5′ ends and expression profiles across 1,829 samples from the major human primary cell types and tissues. Genomic and epigenomic classification of these lncRNAs reveals that most intergenic lncRNAs originate from enhancers rather than from promoters. Incorporating genetic and expression data, we show that lncRNAs overlapping trait-associated single nucleotide polymorphisms are specifically expressed in cell types relevant to the traits, implicating these lncRNAs in multiple diseases. We further demonstrate that lncRNAs overlapping expression quantitative trait loci (eQTL)-associated single nucleotide polymorphisms of messenger RNAs are co-expressed with the corresponding messenger RNAs, suggesting their potential roles in transcriptional regulation. Combining these findings with conservation data, we identify 19,175 potentially functional lncRNAs in the human genome.
Motif signatures of transcribed enhancers
Kleftogiannis, Dimitrios
2017-09-14
In mammalian cells, transcribed enhancers (TrEn) play important roles in the initiation of gene expression and the maintenance of gene expression levels in a spatiotemporal manner. One of the most challenging questions in biology today is how the genomic characteristics of enhancers relate to enhancer activities. This is particularly critical, as several recent studies have linked enhancer sequence motifs to specific functional roles. To date, only a limited number of enhancer sequence characteristics have been investigated, leaving space for exploring the enhancers' genomic code in a more systematic way. To address this problem, we developed a novel computational method, TELS, aimed at identifying predictive cell-type/tissue-specific motif signatures. We used TELS to compile a comprehensive catalog of motif signatures for all known TrEn identified by the FANTOM5 consortium across 112 human primary cells and tissues. Our results confirm that distinct cell-type/tissue-specific motif signatures characterize TrEn. These signatures successfully discriminate (a) TrEn from random controls, a proxy for non-enhancer activity, and (b) cell-type/tissue-specific TrEn from enhancers expressed and transcribed in different cell types/tissues. TELS codes and datasets are publicly available at http://www.cbrc.kaust.edu.sa/TELS.
Functional annotation of the vlinc class of non-coding RNAs using systems biology approach.
St Laurent, Georges; Vyatkin, Yuri; Antonets, Denis; Ri, Maxim; Qi, Yao; Saik, Olga; Shtokalo, Dmitry; de Hoon, Michiel J L; Kawaji, Hideya; Itoh, Masayoshi; Lassmann, Timo; Arner, Erik; Forrest, Alistair R R; Nicolas, Estelle; McCaffrey, Timothy A; Carninci, Piero; Hayashizaki, Yoshihide; Wahlestedt, Claes; Kapranov, Philipp
2016-04-20
Establishing the functionality of the non-coding transcripts encoded by the human genome is a coveted goal of modern genomics research. While this has commonly relied on the classical methods of forward genetics, integrating different genomics datasets in a global systems biology fashion presents a more productive avenue for achieving this very complex aim. Here we report the application of a systems biology-based approach to dissect the functionality of a newly identified vast class of very long intergenic non-coding (vlinc) RNAs. Using the highly quantitative FANTOM5 CAGE dataset, we show that these RNAs can be grouped into 1542 novel human genes based on an analysis of insulators, which we show here indeed function as genomic barrier elements. We show that vlinc RNA genes likely function in cis to activate nearby genes. This effect, while most pronounced in closely spaced vlinc RNA-gene pairs, can be detected over relatively large genomic distances. Furthermore, we identified 101 vlinc RNA genes likely involved in early embryogenesis based on the patterns of their expression and regulation. We also found another 109 such genes potentially involved in cellular functions that also occur at early stages of development, such as proliferation, migration and apoptosis. Overall, we show that systems biology-based methods hold great promise for the functional annotation of non-coding RNAs.
Improved definition of the mouse transcriptome via targeted RNA sequencing.
Bussotti, Giovanni; Leonardi, Tommaso; Clark, Michael B; Mercer, Tim R; Crawford, Joanna; Malquori, Lorenzo; Notredame, Cedric; Dinger, Marcel E; Mattick, John S; Enright, Anton J
2016-05-01
Targeted RNA sequencing (CaptureSeq) uses oligonucleotide probes to capture RNAs for sequencing, providing enriched read coverage, accurate measurement of gene expression, and quantitative expression data. We applied CaptureSeq to refine transcript annotations in the current murine GRCm38 assembly. More than 23,000 regions corresponding to putative or annotated long noncoding RNAs (lncRNAs) and 154,281 known splicing junction sites were selected for targeted sequencing across five mouse tissues and three brain subregions. The results illustrate that the mouse transcriptome is considerably more complex than previously thought. We assemble more complete transcript isoforms than GENCODE, expand transcript boundaries, and connect interspersed islands of mapped reads. We describe a novel filtering pipeline that identifies previously unannotated but high-quality transcript isoforms. In this set, 911 GENCODE neighboring genes are condensed into 400 expanded gene models. Additionally, 594 GENCODE lncRNAs acquire an open reading frame (ORF) when their structure is extended with CaptureSeq. Finally, we validate our observations using current FANTOM and Mouse ENCODE resources.
Scenarios of temporal and spatial evolution of hexabromocyclododecane in the North Sea.
Ilyina, Tatiana; Hunziker, René W
2010-06-15
Spatial and temporal distribution of the flame retardant hexabromocyclododecane (HBCD) in the North Sea was examined for the period from 1995 to 2005 using the pollutant transport model FANTOM. Model calculations allow conclusions on relevant sinks, on fluxes into and out of the North Sea, and on the time needed to establish a steady state. Calculations were performed for two additional scenarios with different rates of primary degradation, ranging from fast degradation to complete persistence. Concentrations calculated in the scenarios with degradation are in line with the monitoring data available for HBCD. Concentrations calculated in the "persistent" scenario disagree with the measured data. According to our model calculations, steady state is established within months for the water and the top-layer sediment, with no evidence for a temporal trend, except in the "persistent" scenario, in which concentrations increase continuously in the southeastern part of the North Sea, where hydrographic and circulation characteristics produce areas of converging currents. Our model study enables a better understanding of the fate of HBCD in the North Sea, its potential for transport and its overall elimination. We discuss these findings in the light of the different concerns for PBT substances.
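The FANTOM fate model resolves the North Sea in space, but a minimal one-box mass balance (all rate constants and emission values below are invented for illustration) already shows why a degrading compound reaches steady state within months while a persistent one equilibrates far more slowly and at a much higher level.

```python
def box_model(emission, k_deg, k_flush, dt=1.0, days=3650):
    """Euler-integrate dm/dt = E - (k_deg + k_flush)*m for a one-box sea."""
    m, series = 0.0, []
    k = k_deg + k_flush
    for _ in range(days):
        m += dt * (emission - k * m)
        series.append(m)
    return series

E = 10.0            # hypothetical emission, kg/day
k_flush = 1 / 500   # hypothetical water-exchange (flushing) rate, 1/day
fast = box_model(E, k_deg=1 / 30, k_flush=k_flush)        # degrading scenario
persistent = box_model(E, k_deg=0.0, k_flush=k_flush)     # persistent scenario

# Analytical steady states m* = E / (k_deg + k_flush)
ss_fast = E / (1 / 30 + k_flush)
ss_persistent = E / k_flush
print(round(fast[-1], 1), round(persistent[-1], 1))
```

After ten simulated years the degrading scenario sits exactly at its (low) steady state, while the persistent scenario is still creeping toward a steady-state burden more than an order of magnitude higher, driven only by flushing.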
EFFECT OF SALT WEDGE INTRUSION IN KUSHIRO WETLAND CONSIDERING SEA LEVEL RISE
Nakamoto, Atsushi; Shintani, Tetsuya; Nakayama, Keisuke; Maruya, Yasuyuki; Ishida, Tetsuya; Houmura, Kenichi
This paper describes the effect of sea-level rise (SLR) on salt wedge intrusion with respect to the ecological system of the Kushiro wetland. The Kushiro wetland is registered under the Ramsar Convention and is the largest wetland in Japan. A previous study suggested that salt wedge intrusion might not affect the ecological system of the Kushiro wetland, such as by loss of freshwater plants along the Kushiro River. However, SLR projected for the end of the 21st century would increase the distance of salt wedge intrusion along the Kushiro River and hence the loss of endangered species of the Kushiro wetland. This study therefore aims to investigate the influence of salt wedge intrusion on freshwater plants along the Kushiro River, and to clarify the extent of salt wedge intrusion when SLR occurs due to climate change. We investigated the influence of SLR on endangered species along the Kyu-Kushiro River, in which seawater is likely to intrude up to about 8 km from the river mouth. The field observations suggest that salinity may decrease freshwater plants along the Kushiro River, and simulations with the 3D hydrodynamic model Fantom3D clarify the possibility that the salt wedge intrudes further into the Kushiro River under SLR.
RIKEN mouse genome encyclopedia.
Hayashizaki, Yoshihide
2003-01-01
We have been working to establish a comprehensive mouse full-length cDNA collection and sequence database covering as many genes as possible, named the RIKEN mouse genome encyclopedia. More recently we have been constructing higher-level annotation (Functional ANnoTation Of Mouse cDNA; FANTOM) based not only on homology-search-based annotation but also on expression profiles, mapping information and protein-protein interaction databases. More than 1,000,000 clones prepared from 163 tissues were end-sequenced and classified into 159,789 clusters, and 60,770 representative clones were fully sequenced. The 60,770 sequences contained 33,409 unique sequences. The next generation of life science is clearly based on genome-wide information and resources. Based on our cDNA clones we developed additional systems to explore gene function: a cDNA microarray system printing all of these cDNA clones, a protein-protein interaction screening system, a protein-DNA interaction screening system and so on. The integrated database of all this information is very useful not only for the analysis of gene transcriptional networks but also for connecting genes to phenotypes to facilitate the positional candidate approach. In this talk, the prospects for the application of these genome resources are discussed. More information is available at: http://genome.gsc.riken.go.jp/.
Hurst, Laurence D; Sachenkova, Oxana; Daub, Carsten; Forrest, Alistair R R; Huminiecki, Lukasz
2014-07-31
Conventional wisdom holds that, owing to the dominance of features such as chromatin-level control, the expression of a gene cannot be readily predicted from knowledge of promoter architecture. This is reflected, for example, in a weak or absent correlation between promoter divergence and expression divergence between paralogs. However, an inability to predict may reflect an inability to measure accurately, or the employment of the wrong parameters. Here we address this issue through the integration of two exceptional resources: ENCODE data on transcription factor binding and the FANTOM5 high-resolution expression atlas. Consistent with the notion that in eukaryotes most transcription factors are activating, the number of transcription factors binding a promoter is a strong predictor of expression breadth. In addition, evolutionarily young duplicates have fewer transcription factor binders and narrower expression. Nonetheless, we find several binders and cooperative sets that are disproportionately associated with broad expression, indicating that models more complex than simple correlations should hold more predictive power. Indeed, a machine learning approach improves the fit to the data compared with a simple correlation, although it could at best moderately predict the tissue of expression of tissue-specific genes. We find robust evidence that some expression parameters and paralog expression divergence are strongly predictable with knowledge of the transcription factor binding repertoire. While some cooperative complexes can be identified, a simple predictor, the number of binding transcription factors found on a promoter, remains a robust predictor of expression breadth.
DEFF Research Database (Denmark)
Selberg, Hanne
2009-01-01
"Simulation, Learning and Practice". Professionshøjskolen Metropol, Nursing Education, Clinical Assistant Professor Hanne Selberg. Introduction: The project was carried out at Glostrup Hospital in collaboration with Professionshøjskolen Metropol. The focal point of the project was the establishment of interdisciplinary... simulation-based learning spaces in which staff and students could practise and train competencies based on situations integrated in, or close to, real practice. Method: The project was carried out in two tracks. Track 1 was aimed at physicians, nurses and social and healthcare assistants, with an overall focus on... patient safety. The simulation training was integrated into the clinical context in authentic patient rooms. In this track, full-scale simulation was tested, in which the participants practised realistic scenarios on phantoms across disciplines. The learning objectives were to develop competencies in the treatment of a critically ill...
Ienasescu, Hans; Li, Kang; Andersson, Robin; Vitezic, Morana; Rennie, Sarah; Chen, Yun; Vitting-Seerup, Kristoffer; Lagoni, Emil; Boyd, Mette; Bornholdt, Jette; de Hoon, Michiel J L; Kawaji, Hideya; Lassmann, Timo; Hayashizaki, Yoshihide; Forrest, Alistair R R; Carninci, Piero; Sandelin, Albin
2016-01-01
Genomics consortia have produced large datasets profiling the expression of genes, micro-RNAs, enhancers and more across human tissues or cells. There is a need for intuitive tools to select the subsets of such data that are most relevant for specific studies. To this end, we present SlideBase, a web tool which offers a new way of selecting genes, promoters, enhancers and microRNAs that are preferentially expressed/used in a specified set of cells/tissues, based on the use of interactive sliders. With the help of sliders, SlideBase enables users to define custom expression thresholds for individual cell types/tissues, producing sets of genes, enhancers etc. which satisfy these constraints. Changes in slider settings result in simultaneous changes in the selected sets, updated in real time. SlideBase is linked to major databases from genomics consortia, including FANTOM, GTEx, The Human Protein Atlas and BioGPS. Database URL: http://slidebase.binf.ku.dk.
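The slider-based selection can be thought of as per-tissue threshold constraints applied jointly; moving a slider changes one constraint and the selected set is recomputed. A minimal sketch of that idea (hypothetical gene names and expression values, not SlideBase's actual data model):

```python
# Hypothetical expression table: gene -> {tissue: expression value}
expression = {
    "GENE_A": {"liver": 120.0, "brain": 2.0, "lung": 5.0},
    "GENE_B": {"liver": 3.0, "brain": 95.0, "lung": 1.0},
    "GENE_C": {"liver": 80.0, "brain": 70.0, "lung": 60.0},
}

def select(expression, min_sliders, max_sliders=None):
    """Return genes meeting every per-tissue slider constraint."""
    max_sliders = max_sliders or {}
    hits = []
    for gene, profile in expression.items():
        ok = all(profile.get(t, 0.0) >= v for t, v in min_sliders.items())
        ok = ok and all(profile.get(t, 0.0) <= v for t, v in max_sliders.items())
        if ok:
            hits.append(gene)
    return sorted(hits)

# Liver-specific selection: high in liver, low everywhere else.
liver_specific = select(expression,
                        min_sliders={"liver": 50.0},
                        max_sliders={"brain": 10.0, "lung": 10.0})
print(liver_specific)  # → ['GENE_A']
```

Dropping the maximum sliders relaxes the specificity requirement, so the broadly expressed GENE_C re-enters the selection, which is exactly the interactive behaviour the sliders expose.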
A promoter-level mammalian expression atlas.
Forrest, Alistair R R; Kawaji, Hideya; Rehli, Michael; Baillie, J Kenneth; de Hoon, Michiel J L; Haberle, Vanja; Lassmann, Timo; Kulakovskiy, Ivan V; Lizio, Marina; Itoh, Masayoshi; Andersson, Robin; Mungall, Christopher J; Meehan, Terrence F; Schmeier, Sebastian; Bertin, Nicolas; Jørgensen, Mette; Dimont, Emmanuel; Arner, Erik; Schmidl, Christian; Schaefer, Ulf; Medvedeva, Yulia A; Plessy, Charles; Vitezic, Morana; Severin, Jessica; Semple, Colin A; Ishizu, Yuri; Young, Robert S; Francescatto, Margherita; Alam, Intikhab; Albanese, Davide; Altschuler, Gabriel M; Arakawa, Takahiro; Archer, John A C; Arner, Peter; Babina, Magda; Rennie, Sarah; Balwierz, Piotr J; Beckhouse, Anthony G; Pradhan-Bhatt, Swati; Blake, Judith A; Blumenthal, Antje; Bodega, Beatrice; Bonetti, Alessandro; Briggs, James; Brombacher, Frank; Burroughs, A Maxwell; Califano, Andrea; Cannistraci, Carlo V; Carbajo, Daniel; Chen, Yun; Chierici, Marco; Ciani, Yari; Clevers, Hans C; Dalla, Emiliano; Davis, Carrie A; Detmar, Michael; Diehl, Alexander D; Dohi, Taeko; Drabløs, Finn; Edge, Albert S B; Edinger, Matthias; Ekwall, Karl; Endoh, Mitsuhiro; Enomoto, Hideki; Fagiolini, Michela; Fairbairn, Lynsey; Fang, Hai; Farach-Carson, Mary C; Faulkner, Geoffrey J; Favorov, Alexander V; Fisher, Malcolm E; Frith, Martin C; Fujita, Rie; Fukuda, Shiro; Furlanello, Cesare; Furino, Masaaki; Furusawa, Jun-ichi; Geijtenbeek, Teunis B; Gibson, Andrew P; Gingeras, Thomas; Goldowitz, Daniel; Gough, Julian; Guhl, Sven; Guler, Reto; Gustincich, Stefano; Ha, Thomas J; Hamaguchi, Masahide; Hara, Mitsuko; Harbers, Matthias; Harshbarger, Jayson; Hasegawa, Akira; Hasegawa, Yuki; Hashimoto, Takehiro; Herlyn, Meenhard; Hitchens, Kelly J; Ho Sui, Shannan J; Hofmann, Oliver M; Hoof, Ilka; Hori, Furni; Huminiecki, Lukasz; Iida, Kei; Ikawa, Tomokatsu; Jankovic, Boris R; Jia, Hui; Joshi, Anagha; Jurman, Giuseppe; Kaczkowski, Bogumil; Kai, Chieko; Kaida, Kaoru; Kaiho, Ai; Kajiyama, Kazuhiro; Kanamori-Katayama, Mutsumi; Kasianov, Artem 
S; Kasukawa, Takeya; Katayama, Shintaro; Kato, Sachi; Kawaguchi, Shuji; Kawamoto, Hiroshi; Kawamura, Yuki I; Kawashima, Tsugumi; Kempfle, Judith S; Kenna, Tony J; Kere, Juha; Khachigian, Levon M; Kitamura, Toshio; Klinken, S Peter; Knox, Alan J; Kojima, Miki; Kojima, Soichi; Kondo, Naoto; Koseki, Haruhiko; Koyasu, Shigeo; Krampitz, Sarah; Kubosaki, Atsutaka; Kwon, Andrew T; Laros, Jeroen F J; Lee, Weonju; Lennartsson, Andreas; Li, Kang; Lilje, Berit; Lipovich, Leonard; Mackay-Sim, Alan; Manabe, Ri-ichiroh; Mar, Jessica C; Marchand, Benoit; Mathelier, Anthony; Mejhert, Niklas; Meynert, Alison; Mizuno, Yosuke; de Lima Morais, David A; Morikawa, Hiromasa; Morimoto, Mitsuru; Moro, Kazuyo; Motakis, Efthymios; Motohashi, Hozumi; Mummery, Christine L; Murata, Mitsuyoshi; Nagao-Sato, Sayaka; Nakachi, Yutaka; Nakahara, Fumio; Nakamura, Toshiyuki; Nakamura, Yukio; Nakazato, Kenichi; van Nimwegen, Erik; Ninomiya, Noriko; Nishiyori, Hiromi; Noma, Shohei; Noma, Shohei; Noazaki, Tadasuke; Ogishima, Soichi; Ohkura, Naganari; Ohimiya, Hiroko; Ohno, Hiroshi; Ohshima, Mitsuhiro; Okada-Hatakeyama, Mariko; Okazaki, Yasushi; Orlando, Valerio; Ovchinnikov, Dmitry A; Pain, Arnab; Passier, Robert; Patrikakis, Margaret; Persson, Helena; Piazza, Silvano; Prendergast, James G D; Rackham, Owen J L; Ramilowski, Jordan A; Rashid, Mamoon; Ravasi, Timothy; Rizzu, Patrizia; Roncador, Marco; Roy, Sugata; Rye, Morten B; Saijyo, Eri; Sajantila, Antti; Saka, Akiko; Sakaguchi, Shimon; Sakai, Mizuho; Sato, Hiroki; Savvi, Suzana; Saxena, Alka; Schneider, Claudio; Schultes, Erik A; Schulze-Tanzil, Gundula G; Schwegmann, Anita; Sengstag, Thierry; Sheng, Guojun; Shimoji, Hisashi; Shimoni, Yishai; Shin, Jay W; Simon, Christophe; Sugiyama, Daisuke; Sugiyama, Takaai; Suzuki, Masanori; Suzuki, Naoko; Swoboda, Rolf K; 't Hoen, Peter A C; Tagami, Michihira; Takahashi, Naoko; Takai, Jun; Tanaka, Hiroshi; Tatsukawa, Hideki; Tatum, Zuotian; Thompson, Mark; Toyodo, Hiroo; Toyoda, Tetsuro; Valen, Elvind; van de 
Wetering, Marc; van den Berg, Linda M; Verado, Roberto; Vijayan, Dipti; Vorontsov, Ilya E; Wasserman, Wyeth W; Watanabe, Shoko; Wells, Christine A; Winteringham, Louise N; Wolvetang, Ernst; Wood, Emily J; Yamaguchi, Yoko; Yamamoto, Masayuki; Yoneda, Misako; Yonekura, Yohei; Yoshida, Shigehiro; Zabierowski, Susan E; Zhang, Peter G; Zhao, Xiaobei; Zucchelli, Silvia; Summers, Kim M; Suzuki, Harukazu; Daub, Carsten O; Kawai, Jun; Heutink, Peter; Hide, Winston; Freeman, Tom C; Lenhard, Boris; Bajic, Vladimir B; Taylor, Martin S; Makeev, Vsevolod J; Sandelin, Albin; Hume, David A; Carninci, Piero; Hayashizaki, Yoshihide
2014-03-27
Regulated transcription controls the diversity, developmental pathways and spatial organization of the hundreds of cell types that make up a mammal. Using single-molecule cDNA sequencing, we mapped transcription start sites (TSSs) and their usage in human and mouse primary cells, cell lines and tissues to produce a comprehensive overview of mammalian gene expression across the human body. We find that few genes are truly 'housekeeping', whereas many mammalian promoters are composite entities composed of several closely separated TSSs, with independent cell-type-specific expression profiles. TSSs specific to different cell types evolve at different rates, whereas promoters of broadly expressed genes are the most conserved. Promoter-based expression analysis reveals key transcription factors defining cell states and links them to binding-site motifs. The functions of identified novel transcripts can be predicted by coexpression and sample ontology enrichment analyses. The functional annotation of the mammalian genome 5 (FANTOM5) project provides comprehensive expression profiles and functional annotation of mammalian cell-type-specific transcriptomes with wide applications in biomedical research.
A promoter-level mammalian expression atlas
Forest, Alistair R R
2014-03-26
Regulated transcription controls the diversity, developmental pathways and spatial organization of the hundreds of cell types that make up a mammal. Using single-molecule cDNA sequencing, we mapped transcription start sites (TSSs) and their usage in human and mouse primary cells, cell lines and tissues to produce a comprehensive overview of mammalian gene expression across the human body. We find that few genes are truly ‘housekeeping’, whereas many mammalian promoters are composite entities composed of several closely separated TSSs, with independent cell-type-specific expression profiles. TSSs specific to different cell types evolve at different rates, whereas promoters of broadly expressed genes are the most conserved. Promoter-based expression analysis reveals key transcription factors defining cell states and links them to binding-site motifs. The functions of identified novel transcripts can be predicted by coexpression and sample ontology enrichment analyses. The functional annotation of the mammalian genome 5 (FANTOM5) project provides comprehensive expression profiles and functional annotation of mammalian cell-type-specific transcriptomes with wide applications in biomedical research.
Boissonneault, Maxime
Circuit quantum electrodynamics is a promising architecture for quantum computation as well as for studying quantum optics. In this architecture, one or more superconducting qubits playing the role of atoms are coupled to one or more resonators playing the role of optical cavities. In this thesis, I study the interaction between a single superconducting qubit and a single resonator, while allowing the qubit to have more than two levels and the resonator to have a Kerr nonlinearity. I am particularly interested in the readout of the qubit state and its improvement, in the back-action of the measurement process on the qubit, and in the study of the quantum properties of the resonator by means of the qubit. For this purpose, I use a reduced analytical model that I develop from the complete description of the system, mainly using unitary transformations and an adiabatic elimination. I also use an in-house numerical library allowing efficient simulation of the evolution of the complete system. I compare the predictions of the reduced analytical model and the results of numerical simulations with experimental results obtained by the quantronics team at CEA Saclay. These results are those of a spectroscopy of a superconducting qubit coupled to a driven nonlinear resonator. In a regime of weak spectroscopy power, the reduced model correctly predicts the position and width of the line. The line position undergoes the Lamb and Stark shifts, and its width is dominated by dephasing induced by the measurement process. I show that, for typical circuit quantum electrodynamics parameters, quantitative agreement requires a model of the nonlinear response of the intra-resonator field, such as the one developed here. In a regime of strong spectroscopy power, sidebands appear, caused by the quantum fluctuations of the electromagnetic field.
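The dispersive physics behind the Lamb and Stark shifts mentioned in this abstract can be summarized, in the standard two-level approximation (a textbook simplification, not the thesis's multilevel, Kerr-nonlinear reduced model), as:

```latex
% Two-level dispersive approximation of the Jaynes-Cummings Hamiltonian,
% valid for g/|\Delta| << 1, with detuning \Delta = \omega_a - \omega_r.
\begin{equation}
  H_{\mathrm{disp}}/\hbar
    = \omega_r\, a^\dagger a
    + \frac{1}{2}\Bigl(\omega_a + \frac{g^2}{\Delta}\Bigr)\sigma_z
    + \chi\, a^\dagger a\, \sigma_z ,
  \qquad \chi = \frac{g^2}{\Delta} .
\end{equation}
% The g^2/\Delta correction to \omega_a is the Lamb shift; for a mean
% photon number \bar{n} the qubit line is ac-Stark shifted by 2\chi\bar{n},
% and photon-number fluctuations dephase the qubit during measurement.
```

The thesis's contribution is precisely to go beyond this linear-response picture when the resonator is nonlinear and strongly driven.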
Bouguerra, Kheireddine
...the bottom mat, (3) the compressive strength of the concrete, and (4) the reinforcement ratio in the other directions (transverse and longitudinal reinforcement of the top mat and longitudinal reinforcement of the bottom mat). During the loading tests, the slabs were supported on two steel girders spaced at 2000 mm centre to centre and subjected to a concentrated static load over a contact area of 600 mm x 250 mm to simulate a truck wheel load (87.5 kN, CL-625), in accordance with the Canadian Highway Bridge Design Code [CAN/CSA-S6-06]. A numerical analysis of the behaviour of the tested slabs under load was also carried out using the finite element software ADINA, version 8.2. The tests showed that all the tested slabs failed by punching shear, whatever the parameter studied. A slab thickness of 175 mm also meets the requirements of the Canadian Highway Bridge Design Code [CAN/CSA-S6-06]. Moreover, the results showed that the compressive strength of the concrete is a parameter that influences deflection, strains in the bars and crack opening. Finally, the results of the numerical analyses agree with those obtained experimentally. Keywords: concrete bridge deck slabs, FRP reinforcement, static loads, bending, strains, punching shear, finite elements.
On the importance of periodic orbits: detection and applications
Doyon, Bernard
The set of Unstable Periodic Orbits (UPOs) of a chaotic system is intimately related to its dynamical properties. From the (in principle infinite) set of UPOs hidden in phase space, one can obtain important dynamical quantities such as the Lyapunov exponents, the invariant measure, the topological entropy and the fractal dimension. In quantum chaos (i.e. the study of quantum systems that have a chaotic counterpart in the classical limit), these same UPOs bridge the classical and quantum behaviour of non-integrable systems. Locating these fundamental cycles is a complex problem. This thesis first addresses the problem of detecting UPOs in chaotic systems. A comparative study of two recent algorithms is presented. We examine these two methods in depth in order to apply them to various systems, including dissipative and conservative continuous flows. An analysis of the convergence rate of the algorithms is also carried out in order to identify the strengths and limits of these numerical schemes. The detection methods we use rely on a particular transformation of the initial dynamics. This trick inspired an alternative method for targeting and stabilizing an arbitrary periodic orbit in a chaotic system. Targeting is generally combined with control methods to rapidly stabilize a given cycle, and in general the position and stability of the cycle in question must be known. The new targeting method we present does not require a priori knowledge of the position and stability of the periodic orbits. It could be a complementary tool to current targeting and control methods.
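As a generic illustration of periodic-orbit detection (a textbook Newton iteration on a one-dimensional map, not the two algorithms studied in the thesis), one can locate a period-2 orbit of the logistic map by solving F(x) = f^2(x) - x = 0, with the derivative of the iterated map obtained by the chain rule:

```python
def logistic(x, r=3.8):
    return r * x * (1.0 - x)

def dlogistic(x, r=3.8):
    return r * (1.0 - 2.0 * x)

def newton_periodic(x0, period, r=3.8, tol=1e-12, maxit=100):
    """Newton's method on F(x) = f^period(x) - x to locate a periodic point."""
    x = x0
    for _ in range(maxit):
        y, dy = x, 1.0
        for _ in range(period):
            dy *= dlogistic(y, r)   # chain rule for (f^n)'(x)
            y = logistic(y, r)
        F, dF = y - x, dy - 1.0
        step = F / dF
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence")

x_star = newton_periodic(0.9, period=2)
orbit = [x_star, logistic(x_star)]
print(orbit)  # ≈ [0.88942, 0.37374]
```

Note that Newton's method finds roots of f^2(x) = x, which include the fixed points of f itself; in practice one checks, as the test below does, that the orbit found is genuinely of period 2 and not a period-1 point revisited.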
Aaron, Gilles; Bonnard, Rene
1984-03-01
In the hospital, the need for an electronic communication network keeps growing as imaging becomes digital. The purpose of this local area network is to link a few image sources, such as digital radiology, computed tomography, nuclear magnetic resonance, ultrasound echography, etc., to an archiving system. Interactive display consoles can be used in examination rooms, physicians' offices and care units. In such a system, three main characteristics must be taken into account: throughput, cable length and the number of connections. - Throughput is very important: a maximum response time of a few seconds must be guaranteed for images of several million binary elements. - The distance between connections can be a few km in some large hospitals. - The number of connections to the network never exceeds a few dozen, since the image sources and processing units are substantial pieces of equipment; moreover, simple display consoles can be grouped in clusters. All these conditions are met by transmission over optical fibres. Depending on the topology and the access method, two solutions can be considered: - active ring; - active or passive star. Finally, Thomson-CSF's developments in optical transmission components for large cable-television distribution networks provide technological support and mass production that will lower equipment costs.
Energy Technology Data Exchange (ETDEWEB)
Bianchi, G.; Corge, C.R. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1963-07-01
This report deals with the numerical analysis, on an I.B.M. 7090 computer, of transmission resonances induced by 's'-wave neutrons in time-of-flight experiments; the analysis method used is the partial-area method. In this second part the interference term is taken into account. Modifications have been made to the programs and subroutines described in the first part, to determine the resonant transmissions from raw experimental data, and the related partial areas. The programs and subroutines which estimate the resonance parameters are also thoroughly described. The scope of the partial-area method has been extended to cover the case where several resonances have to be treated simultaneously, provided they do not interfere. (authors)
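The quantity such an analysis works with can be sketched numerically (a hedged illustration, not the CEA programs themselves: a single-level Breit-Wigner resonance with interference neglected, and all resonance parameters invented): the area of the transmission dip, A = ∫(1 − T(E)) dE, which the partial-area method relates to the resonance parameters.

```python
from math import exp

def sigma(E, E0=10.0, gamma=0.1, sigma0=1000.0):
    """Single-level Breit-Wigner cross-section (barns), interference neglected."""
    x = 2.0 * (E - E0) / gamma
    return sigma0 / (1.0 + x * x)

def transmission(E, n=1e-3):
    """Sample transmission for areal density n (atoms/barn)."""
    return exp(-n * sigma(E))

def dip_area(E_lo, E_hi, n=1e-3, steps=20000):
    """Trapezoidal integral of the transmission dip, A = ∫ (1 - T(E)) dE."""
    h = (E_hi - E_lo) / steps
    total = 0.0
    for i in range(steps + 1):
        E = E_lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * (1.0 - transmission(E, n))
    return total * h

area = dip_area(8.0, 12.0)
print(area)
```

In the thin-sample limit the dip area tends to n·σ0·πΓ/2, so fitted areas at several sample thicknesses constrain σ0 and Γ, which is the essence of the area methods the report implements.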
Energy Technology Data Exchange (ETDEWEB)
Coulon, R.
2010-11-10
Sodium-cooled fast reactors (SFRs) are under development for the fourth generation of nuclear reactors. Breeder reactors could provide solutions to the demand for energy while preserving uranium resources. Further goals are to reduce radioactive waste production through transmutation of minor actinides and to support non-proliferation through a closed fuel cycle. This thesis shows the safety and economic advantages that could be obtained with a new generation of gamma spectrometry systems for SFRs. The high count rates now achievable make it possible to study new methods for accurately monitoring the neutron power and for earlier detection of fuel clad failures. Numerical simulations were carried out and an experimental campaign was performed at the French Phenix SFR at CEA Marcoule, showing promising results for both of these measurements. (author)
Ait Hammou, Zouhair
This study deals with the design of a hybrid heat-exchanger storage unit (AECH) for the simultaneous management of solar and electrical energy. A mathematical model based on the energy conservation equations is presented. It is developed to test different storage materials, including phase change materials (solid/liquid) and sensible heat storage materials. A computer code was implemented and then validated against analytical and numerical results from the literature. In parallel, a reduced-scale experimental prototype was built in the laboratory to validate the code. Simulations were performed to study the effects of the design parameters and storage materials on the thermal behaviour of the AECH and on electrical energy consumption. The simulation results over four winter months show that n-octadecane paraffin and capric acid are two desirable candidates for energy storage intended for space heating. Using these two materials in the AECH reduces electrical energy consumption by 32% and eases the peak-load problem, since 90% of the electrical energy is consumed during off-peak hours. Moreover, with a preferential tariff, the calculation of the costs associated with electrical energy consumption shows that a consumer adopting this system benefits from a 50% reduction in the electricity bill.
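A minimal lumped sketch of latent-heat storage (the effective-heat-capacity trick; the material and design values below are invented, loosely inspired by the paraffin mentioned, and the thesis model solves the full conservation equations instead) shows the temperature plateau that makes phase change materials attractive for peak shaving:

```python
def heat_pcm(mass, cp, latent, t_melt, dT_melt, power, t0, dt, steps):
    """Lumped effective-heat-capacity model of a phase change material:
    the latent heat is spread over the melting range [t_melt, t_melt + dT_melt]."""
    T, history = t0, []
    for _ in range(steps):
        c_eff = cp
        if t_melt <= T <= t_melt + dT_melt:
            c_eff = cp + latent / dT_melt   # extra apparent capacity while melting
        T += power * dt / (mass * c_eff)
        history.append(T)
    return history

# Hypothetical values: 50 kg of PCM, 2 kW electric heater, 1-minute steps.
hist = heat_pcm(mass=50.0, cp=2000.0, latent=240e3, t_melt=28.0,
                dT_melt=2.0, power=2000.0, t0=20.0, dt=60.0, steps=100)
```

The temperature history climbs quickly to the melting range, then stalls there for most of the run while energy is absorbed as latent heat, which is why such a unit can store cheap off-peak electricity and release it near a constant temperature.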
Adiabatic Properties of Pulsating ZZ Ceti White Dwarfs
Brassard, Pierre
1992-01-01
The purpose of this thesis is to study the properties of the non-radial oscillations of ZZ Ceti stars, also called variable DA stars, within the framework of the adiabatic theory of small oscillations. For this type of star, these oscillations are observable as periodic variations in luminosity. From an analysis of stellar models, which mainly consists of computing and interpreting the oscillation periods of the models, we aim to better understand the fundamental physical properties of ZZ Ceti stars. We first develop various tools to undertake this study. After presenting the basic mathematical formalism describing the non-radial oscillations of a star, we discuss the difficulties that may be encountered in computing the Brunt-Väisälä frequency, a fundamental quantity for the computation of oscillation periods. We then develop a simple theoretical model for analysing and interpreting the structure of the computed (or observed) periods in terms of the structural properties of the star. We also present the entirely original numerical tools used to compute our periods from stellar models. Finally, we present the overall results of the analysis of our models, and discuss the interpretation of the observed periods and of their rates of change in terms of the structure of the star and the composition of the stellar core, respectively. These results represent the most complete study to date of the seismology of white dwarfs.
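The kind of simple interpretive model for period structure referred to here is, in the standard asymptotic (high radial order) limit, of the following textbook form (a generic sketch, not the thesis's own derivation):

```latex
% Asymptotic period of a g mode of high radial order k and degree l,
% with N the Brunt-Vaisala frequency and \epsilon a phase constant.
\begin{equation}
  P_{k,\ell} \simeq
    \frac{2\pi^{2}\,(k + \epsilon)}
         {\sqrt{\ell(\ell+1)}\,\displaystyle\int \frac{N}{r}\, dr } ,
\end{equation}
% so consecutive periods of the same degree are nearly uniformly spaced
% in k; deviations from uniform spacing probe the layered composition
% structure of the white dwarf envelope.
```

This is why measured period spacings, and their slow rates of change, constrain the stellar structure and core composition as described in the abstract.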
Energy Technology Data Exchange (ETDEWEB)
Czubek, J.A. [Institut de Recherches Nucleaires, Dept. 6, Cracovie (Poland); Guitton, J. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1966-07-01
The work described follows on from the research published in report CEA-R--2720 in March 1965. It includes: - experimental results obtained with a model composed of a constant-density material (graphite); - the drawing-up of calibration curves using the similitude principle; - determination of the characteristics of a gamma-gamma probe, together with a discussion. The influence of a certain number of parameters on the shape of the energy spectra of the scattered radiation and of the calibration curves is studied: nature of the radioactive source, borehole diameter, source-detector distance, geometrical shape of the shielding between the source and the detector. An attempt is made to find a mathematical model for the calibration curve under given conditions. Numerical applications make it possible to establish the optimum technical characteristics for a probe measuring density with the smallest statistical error. (authors)
Energy Technology Data Exchange (ETDEWEB)
Cresta, M.; Lacourly, G. [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires
1966-07-01
The present report gives the results obtained from food surveys carried out during the period 1963-1965, covering 9000 families living in eleven regions spread over the six European Community countries. A partial analysis of the results covers a reduced sample of 3725 families; it makes it possible to establish the composition of the mean individual monthly and annual food consumption for each of the eleven regions. Details of the organisation of the surveys, of the data-processing methods and of the presentation of the results are given in the first part of the report. The second part presents, in numerical table form, the consumption of the various foodstuffs and nutrients for each region covered by the survey. Summary tables also make it possible to compare the mean individual consumptions of the various regions studied. (author)
Directory of Open Access Journals (Sweden)
Socha Luis A
2004-04-01
Background: A major goal in the post-genomic era is to identify and characterise disease susceptibility genes and to apply this knowledge to disease prevention and treatment. Rodents and humans have remarkably similar genomes and share closely related biochemical, physiological and pathological pathways. In this work we utilised the latest information on the mouse transcriptome, as revealed by the RIKEN FANTOM2 project, to identify novel human disease-related candidate genes. We define a new term, "patholog", to mean a homolog of a human disease-related gene encoding a product (transcript, antisense RNA, or protein) potentially relevant to disease. Rather than focus only on Mendelian inheritance, we applied the analysis to all potential pathologs regardless of their inheritance pattern. Results: Bioinformatic analysis and human curation of 60,770 RIKEN full-length mouse cDNA clones produced 2,578 sequences that showed similarity (70–85% identity) to known human-disease genes. Using a newly developed biological information extraction and annotation tool (FACTS) in parallel with human expert analysis of 17,051 MEDLINE scientific abstracts, we identified 182 novel potential pathologs. Of these, 36 were identified by computational tools only, 49 by human expert analysis only, and 97 by both methods. These pathologs were related to neoplastic (53%), hereditary (24%), immunological (5%), cardiovascular (4%), or other (14%) disorders. Conclusions: Large-scale genome projects continue to produce a vast amount of data with potential application to the study of human disease. For this potential to be realised we need intelligent strategies for data categorisation and the ability to link sequence data with relevant literature. This paper demonstrates the power of combining human expert annotation with FACTS, a newly developed bioinformatics tool, to identify novel pathologs from within large-scale mouse transcript datasets.
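The 70–85% identity band used above to flag candidate pathologs can be sketched as a simple filter. This is an illustrative simplification (the study used real alignments, not ungapped equal-length comparison); the function names are hypothetical:

```python
def percent_identity(a, b):
    # Percent identity over an ungapped, equal-length comparison.
    # A simplification: the FANTOM2 screen used full alignments.
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / max(len(a), len(b))

def candidate_pathologs(pairs, low=70.0, high=85.0):
    # Keep mouse/human sequence pairs whose identity falls inside
    # the 70-85% band used to flag candidate pathologs.
    return [name for name, (mouse, human) in pairs.items()
            if low <= percent_identity(mouse, human) <= high]
```

A pair with 100% identity is excluded on purpose: near-identical sequences are assumed to be known orthologs rather than novel candidates.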
DEEP: a general computational framework for predicting enhancers
Kleftogiannis, Dimitrios A.
2014-11-05
Transcription regulation in multicellular eukaryotes is orchestrated by a number of DNA functional elements located in gene regulatory regions. Some regulatory regions (e.g. enhancers) are located far away from the gene they affect. Identification of distal regulatory elements is a challenge for bioinformatics research. Although existing methodologies have increased the number of computationally predicted enhancers, performance inconsistency of computational models across different cell lines, class imbalance within the learning sets, and ad hoc rules for selecting enhancer candidates for supervised learning are key questions that require further examination. In this study we developed DEEP, a novel ensemble prediction framework. DEEP integrates three components with diverse characteristics that streamline the analysis of enhancers' properties in a great variety of cellular conditions. In our method we train many individual classification models that we combine to classify DNA regions as enhancers or non-enhancers. DEEP uses features derived from histone modification marks or attributes coming from sequence characteristics. Experimental results indicate that DEEP performs better than four state-of-the-art methods on the ENCODE data. We report the first computational enhancer prediction results on FANTOM5 data, where DEEP achieves 90.2% accuracy and 90% geometric mean (GM) of specificity and sensitivity across 36 different tissues. We further present results derived using in vivo-derived enhancer data from the VISTA database. DEEP-VISTA, when tested on an independent test set, achieved a GM of 80.1% and an accuracy of 89.64%. The DEEP framework is publicly available at http://cbrc.kaust.edu.sa/deep/.
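The two summary scores quoted above, and the ensemble idea itself, can be sketched in a few lines. This is a generic illustration, not DEEP's actual combination rule (which trains many classifiers and merges them in its own way):

```python
from math import sqrt

def confusion_metrics(tp, tn, fp, fn):
    # Accuracy plus the geometric mean (GM) of sensitivity and
    # specificity, the two summary scores reported for DEEP.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return accuracy, sqrt(sensitivity * specificity)

def majority_vote(model_predictions):
    # One simple way an ensemble can merge its individual binary
    # classifiers: predict 1 when more than half the models agree.
    return [1 if 2 * sum(votes) > len(votes) else 0
            for votes in zip(*model_predictions)]
```

The GM is preferred over plain accuracy when the classes are imbalanced, exactly the situation the abstract flags for enhancer learning sets.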
Noncoding Elements: Evolution and Epigenetic Regulation
Seridi, Loqmane
2016-03-09
When the human genome project was completed, it revealed a surprising result: 98% of the genome did not code for protein, of which more than 50% was repeats, later known as "junk DNA". However, comparative genomics unveiled that many noncoding elements are evolutionarily constrained, and thus likely to have a role in genome stability and regulation. Nevertheless, their exact functions remained largely unknown. Several large international consortia, such as the Functional Annotation of Mammalian Genomes (FANTOM) and the Encyclopedia of DNA Elements (ENCODE), were set up to understand the structure and regulation of the genome. Specifically, these endeavors aim to measure and reveal the transcribed components and functional elements of the genome. One of the most striking findings of these efforts is that most of the genome is transcribed, including non-conserved noncoding elements and repeat elements. We investigated the evolution and epigenetic properties of noncoding elements. 1. We compared genomes of evolutionarily distant species and showed the ubiquity of constrained noncoding elements in metazoa. 2. By integrating multi-omic data (such as transcriptome, nucleosome profiling, and histone modifications), I conducted a comprehensive analysis of the epigenetic properties (chromatin states) of conserved noncoding elements in insects. We showed that those elements have distinct and protective sequence features, undergo dynamic epigenetic regulation, and appear to be associated with structural components of the chromatin, replication origins, and the nuclear matrix. 3. I focused on the relationship between enhancers and repetitive elements. Using Cap Analysis of Gene Expression (CAGE) and RNA-Seq, I compiled a full catalog of active enhancers (a class of noncoding elements) during myogenesis of human primary cells from healthy donors and donors affected by Duchenne muscular dystrophy (DMD). Comparing the two time-courses, a significant change in the epigenetic
Baillie, J Kenneth; Arner, Erik; Daub, Carsten; De Hoon, Michiel; Itoh, Masayoshi; Kawaji, Hideya; Lassmann, Timo; Carninci, Piero; Forrest, Alistair R R; Hayashizaki, Yoshihide; Faulkner, Geoffrey J; Wells, Christine A; Rehli, Michael; Pavli, Paul; Summers, Kim M; Hume, David A
2017-03-01
The FANTOM5 consortium utilised cap analysis of gene expression (CAGE) to provide an unprecedented insight into transcriptional regulation in human cells and tissues. In the current study, we have used CAGE-based transcriptional profiling on an extended dense time course of the response of human monocyte-derived macrophages grown in macrophage colony-stimulating factor (CSF1) to bacterial lipopolysaccharide (LPS). We propose that this system provides a model for the differentiation and adaptation of monocytes entering the intestinal lamina propria. The response to LPS is shown to be a cascade of successive waves of transient gene expression extending over at least 48 hours, with hundreds of positive and negative regulatory loops. Promoter analysis using motif activity response analysis (MARA) identified some of the transcription factors likely to be responsible for the temporal profile of transcriptional activation. Each LPS-inducible locus was associated with multiple inducible enhancers, and in each case, transient eRNA transcription at multiple sites detected by CAGE preceded the appearance of promoter-associated transcripts. LPS-inducible long non-coding RNAs were commonly associated with clusters of inducible enhancers. We used these data to re-examine the hundreds of loci associated with susceptibility to inflammatory bowel disease (IBD) in genome-wide association studies. Loci associated with IBD were strongly and specifically (relative to rheumatoid arthritis and unrelated traits) enriched for promoters that were regulated in monocyte differentiation or activation. Amongst previously-identified IBD susceptibility loci, the vast majority contained at least one promoter that was regulated in CSF1-dependent monocyte-macrophage transitions and/or in response to LPS. On this basis, we concluded that IBD loci are strongly-enriched for monocyte-specific genes, and identified at least 134 additional candidate genes associated with IBD susceptibility from reanalysis
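Schematically, the motif activity response analysis (MARA) used above fits promoter expression profiles as a linear combination of motif activities; a simplified form of the published model, with notation assumed here, is:

```latex
s_p(t) \approx c_p + \sum_{m} N_{pm} \, A_m(t)
```

where $s_p(t)$ is the (log) expression of promoter $p$ at time $t$, $N_{pm}$ the number of predicted binding sites for motif $m$ in promoter $p$, and $A_m(t)$ the inferred activity of motif $m$; the fitted activities $A_m(t)$ identify the transcription factors driving the temporal response.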
Tissue specific roles for the ribosome biogenesis factor Wdr43 in zebrafish development.
Directory of Open Access Journals (Sweden)
Chengtian Zhao
2014-01-01
During vertebrate craniofacial development, neural crest cells (NCCs) contribute to most of the craniofacial pharyngeal skeleton. Defects in NCC specification, migration and differentiation resulting in malformations of the craniofacial complex are associated with human craniofacial disorders, including Treacher Collins Syndrome, caused by mutations in TCOF1. It has been hypothesized that perturbed ribosome biogenesis and the resulting p53-mediated neuroepithelial apoptosis cause NCC hypoplasia in mouse Tcof1 mutants. However, the underlying mechanisms linking ribosome biogenesis and NCC development remain poorly understood. Here we report a new zebrafish mutant, fantome (fan), which harbors a point mutation and predicted premature stop codon in zebrafish wdr43, the ortholog of yeast UTP5. Although wdr43 mRNA is widely expressed during early zebrafish development, and its deficiency triggers early neural, eye, heart and pharyngeal arch defects, later defects appear fairly restricted to NCC-derived craniofacial cartilages. Here we show that the C-terminus of Wdr43, which is absent in the fan mutant protein, is both necessary and sufficient to mediate its nucleolar localization and protein interactions in metazoans. We demonstrate that Wdr43 functions in ribosome biogenesis, and that the defects observed in fan mutants are mediated by a p53-dependent pathway. Finally, we show that proper localization of a variety of nucleolar proteins, including TCOF1, is dependent on that of WDR43. Together, our findings provide new insight into the roles of Wdr43 in development, ribosome biogenesis, and ribosomopathy-induced craniofacial phenotypes, including Treacher Collins Syndrome.
Targeting a complex transcriptome: the construction of the mouse full-length cDNA encyclopedia.
Carninci, Piero; Waki, Kazunori; Shiraki, Toshiyuki; Konno, Hideaki; Shibata, Kazuhiro; Itoh, Masayoshi; Aizawa, Katsunori; Arakawa, Takahiro; Ishii, Yoshiyuki; Sasaki, Daisuke; Bono, Hidemasa; Kondo, Shinji; Sugahara, Yuichi; Saito, Rintaro; Osato, Naoki; Fukuda, Shiro; Sato, Kenjiro; Watahiki, Akira; Hirozane-Kishikawa, Tomoko; Nakamura, Mari; Shibata, Yuko; Yasunishi, Ayako; Kikuchi, Noriko; Yoshiki, Atsushi; Kusakabe, Moriaki; Gustincich, Stefano; Beisel, Kirk; Pavan, William; Aidinis, Vassilis; Nakagawara, Akira; Held, William A; Iwata, Hiroo; Kono, Tomohiro; Nakauchi, Hiromitsu; Lyons, Paul; Wells, Christine; Hume, David A; Fagiolini, Michela; Hensch, Takao K; Brinkmeier, Michelle; Camper, Sally; Hirota, Junji; Mombaerts, Peter; Muramatsu, Masami; Okazaki, Yasushi; Kawai, Jun; Hayashizaki, Yoshihide
2003-06-01
We report the construction of the mouse full-length cDNA encyclopedia, the most extensive view of a complex transcriptome, on the basis of preparing and sequencing 246 libraries. Before cloning, cDNAs were enriched in full-length by Cap-Trapper, and in most cases, aggressively subtracted/normalized. We have produced 1,442,236 successful 3'-end sequences clustered into 171,144 groups, from which 60,770 clones were fully sequenced cDNAs annotated in the FANTOM-2 annotation. We have also produced 547,149 5'-end reads, which clustered into 124,258 groups. Altogether, these cDNAs were further grouped in 70,000 transcriptional units (TU), which represent the best coverage of a transcriptome so far. By monitoring the extent of normalization/subtraction, we define the tentative equivalent coverage (TEC), which was estimated to be equivalent to >12,000,000 ESTs derived from standard libraries. High coverage explains discrepancies between the very large numbers of clusters (and TUs) of this project, which also include non-protein-coding RNAs, and the lower gene number estimation of genome annotations. Altogether, 5'-end clusters identify regions that are potential promoters for 8,637 known genes, and 5'-end clusters suggest the presence of almost 63,000 transcriptional starting points. An estimate of the frequency of polyadenylation signals suggests that at least half of the singletons in the EST set represent real mRNAs. Clones accounting for about half of the predicted TUs await further sequencing. The continued high discovery rate suggests that the task of transcriptome discovery is not yet complete.
Clusters of internally primed transcripts reveal novel long noncoding RNAs.
Directory of Open Access Journals (Sweden)
2006-04-01
Non-protein-coding RNAs (ncRNAs) are increasingly being recognized as having important regulatory roles. Although much recent attention has focused on tiny 22- to 25-nucleotide microRNAs, several functional ncRNAs are orders of magnitude larger. Examples of such macro ncRNAs include Xist and Air, which in mouse are 18 and 108 kilobases (kb), respectively. We surveyed the 102,801 FANTOM3 mouse cDNA clones and found that Air and Xist were present not as single, full-length transcripts but as clusters of multiple, shorter cDNAs, which were unspliced, had little coding potential, and were most likely primed from internal adenine-rich regions within longer parental transcripts. We therefore conducted a genome-wide search for regional clusters of such cDNAs to find novel macro ncRNA candidates. Sixty-six regions were identified, each of which mapped outside known protein-coding loci and which had a mean length of 92 kb. We detected several known long ncRNAs within these regions, supporting the basic rationale of our approach. In silico analysis showed that many regions had evidence of imprinting and/or antisense transcription. These regions were significantly associated with microRNAs and transcripts from the central nervous system. We selected eight novel regions for experimental validation by northern blot and RT-PCR and found that the majority represent previously unrecognized noncoding transcripts that are at least 10 kb in size and predominantly localized in the nucleus. Taken together, the data not only identify multiple new ncRNAs but also suggest the existence of many more macro ncRNAs like Xist and Air.
Discovery of molecular markers to discriminate corneal endothelial cells in the human body.
Directory of Open Access Journals (Sweden)
Masahito Yoshihara
The corneal endothelium is a monolayer of hexagonal corneal endothelial cells (CECs) on the inner surface of the cornea. CECs are critical in maintaining corneal transparency through their barrier and pump functions. CECs in vivo have a limited proliferative capacity, and loss of a significant number of CECs results in corneal edema, called bullous keratopathy, which can lead to severe visual loss. Corneal transplantation is the most effective method to treat corneal endothelial dysfunction, but it suffers from donor shortage. Therefore, regeneration of CECs from other cell types has attracted increasing interest, and specific markers of CECs are crucial to identify actual CECs. However, the currently used markers are far from satisfactory because of their non-specific expression in other cell types. Here, we explored molecular markers to discriminate CECs from other cell types in the human body by integrating the published RNA-seq data of CECs with the FANTOM5 atlas, which represents a diverse range of cell types, based on expression patterns. We identified five genes, CLRN1, MRGPRX3, HTR1D, GRIP1 and ZP4, as novel markers of CECs, and the specificities of these genes were successfully confirmed by independent experiments at both the RNA and protein levels. Notably, none of them has previously been documented in the context of CEC function. These markers could be useful for the purification of actual CECs, and also for the evaluation of products derived from other cell types. Our results demonstrate an effective approach to identifying molecular markers for CECs and open the door to the regeneration of CECs in vitro.
Discovery of molecular markers to discriminate corneal endothelial cells in the human body.
Yoshihara, Masahito; Ohmiya, Hiroko; Hara, Susumu; Kawasaki, Satoshi; Hayashizaki, Yoshihide; Itoh, Masayoshi; Kawaji, Hideya; Tsujikawa, Motokazu; Nishida, Kohji
2015-01-01
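The marker screen above ranks genes by how exclusively they are expressed in CECs relative to the rest of the FANTOM5 atlas. A minimal sketch of one such ranking, using a hypothetical specificity score (not the authors' exact statistic), is:

```python
def rank_by_specificity(expression, target):
    # Hypothetical specificity score: expression in the target cell
    # type divided by the highest expression seen in any other
    # sample, with a pseudocount of 1 to damp low-expression noise.
    def score(by_sample):
        others = [v for s, v in by_sample.items() if s != target]
        background = max(others) if others else 0.0
        return by_sample.get(target, 0.0) / (background + 1.0)
    # Genes expressed strongly in the target but nowhere else float
    # to the top of the ranking.
    return sorted(expression, key=lambda g: score(expression[g]),
                  reverse=True)
```

A CEC-restricted gene such as CLRN1 would outrank a housekeeping gene expressed everywhere, which is the behaviour a marker screen needs.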
Contributions to the Study of Integrated Optics Devices
Touam, Tahar
This thesis contains contributions to the study of two fields within the vast domain of integrated optics. To this end, the work is divided into two main parts. In the first part, we address the fabrication of a new class of planar waveguides usable in the mid-infrared (thermal infrared) wavelength range, a domain where the anticipated arrival of optical fibres with extremely low losses would make such planar waveguides highly attractive. We first present an original analytical study of a planar structure with a graded index profile, followed by an analysis of a channel waveguide based on this structure. We then describe the sputtering process used to fabricate a planar waveguide made of gallium arsenide (GaAs) on silicon dioxide (SiO2), a material combination compatible with the mid-infrared. Finally, we present a design study of a surface grating intended to couple light into such a guide, the other traditional coupling methods appearing poorly suited around lambda = 10 um. In the second part, we address the problem of the Y-junction in integrated optics, a junction that suffers from very high losses as soon as the branching angle becomes attractive to the designer of integrated optical circuits. The analysis is based on the numerical Beam Propagation Method (BPM), which is briefly reviewed. We then study and optimise a new Y-junction whose essence is the use of diffraction through three phase slits. We thus obtain a very good junction, cleanly splitting the beam at a 10-degree opening angle. Finally, we recall an index profile said to be "ideal" for curved guides and we propose the use of such guides
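The surface-grating coupler studied in the first part obeys the usual phase-matching condition (a standard result, not specific to this thesis):

```latex
n_c \sin \theta_m = n_{\mathrm{eff}} - m \, \frac{\lambda}{\Lambda},
\qquad m = 1, 2, \dots
```

where $\theta_m$ is the incidence angle in the cover medium of index $n_c$, $n_{\mathrm{eff}}$ the effective index of the guided mode, and $\Lambda$ the grating period; at $\lambda \approx 10\,\mu\mathrm{m}$ this fixes the period needed for first-order coupling.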
Analysis of the energy interactions between an arena and its refrigeration system
Seghouani, Lotfi
This thesis is part of a strategic project on arenas funded by NSERC (Natural Sciences and Engineering Research Council of Canada), whose main goal is the development of a numerical tool able to estimate and optimise energy consumption in arenas and curling rinks. Our work follows on from work already carried out by DAOUD et al. (2006, 2007), who developed a transient 3D model (AIM) of the Camilien Houde arena in Montreal that computes the heat fluxes through the building envelope as well as the temperature and humidity distributions over a typical meteorological year. In particular, it computes the heat fluxes through the ice sheet due to convection, radiation and condensation. We first developed a model of the structure under the ice (BIM) that accounts for its 3D geometry, the different layers, transient effects, the heat gains from the ground beneath and around the studied arena, and the brine inlet temperature in the concrete slab. The BIM was then coupled to the AIM. In the second stage, we developed a quasi-steady-state model of the refrigeration system (REFSYS) for the studied arena, based on a combination of thermodynamic relations, heat transfer correlations and relations derived from data available in the manufacturer's catalogue. Finally, the coupling between AIM+BIM and REFSYS was carried out within the TRNSYS software interface. Several parametric studies were undertaken to evaluate the effects of the climate, the brine temperature, the ice thickness, etc. on the energy consumption of the arena. Some strategies to reduce this consumption were also studied. The considerable heat recovery potential at the condensers, which can reduce the energy required by
Daoud, Ahmed
This thesis presents the results of a study of air movement and heat and mass transfer in arenas, in transient regime and in 3D. For the airflow part, a model based on the zonal method was developed; it computes the air flows (due to ventilation and to temperature gradients) and the humidity flows between the different zones of the building, and determines the age of the air in each zone. For the thermal part, a model computing the radiation exchange between the interior surfaces of the building was coupled to TRNSYS in order to compute the heating and refrigeration loads on an annual basis; the latter account for radiative and convective transfer, for the latent heat due to condensation of humidity on the ice, and for resurfacing. The document consists of 7 chapters, which can be summarised as follows. Chapters 1, 2 and 3 are devoted respectively to the general introduction, the literature review and the description of the modelled building. Chapter 4 describes the approach developed and the important contribution it makes. It presents the use of the zonal method as a practical alternative to CFD methods, since it allows dynamic simulations over a full year with very short simulation times and acceptable accuracy. It is an intermediate approach between CFD models and single-air-node models (which assume a uniform temperature within a room). Chapter 5 is devoted to the numerical solution method. The simulation tool was developed using the TRNSYS software interface: Type 56 of this software was adopted as the energy model, while the other models were developed and programmed using MATLAB. Chapter 6 presents the simulation results for an arena without and with a false ceiling, and the measurement results
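In zonal models of the kind described, the air flow between adjacent zones is typically driven by the inter-zone pressure difference through a power law (a common formulation in the zonal-method literature, assumed here rather than quoted from the thesis):

```latex
\dot{m}_{ij} = C_{ij} \, \rho \, \left| P_i - P_j \right|^{\,n}
               \operatorname{sgn}\!\left( P_i - P_j \right),
\qquad n \approx 0.5
```

where $\dot{m}_{ij}$ is the mass flow from zone $i$ to zone $j$ and $C_{ij}$ a flow coefficient for the interface; the zone pressures $P_i$ are obtained by enforcing mass conservation in every zone, which is what makes the method so much cheaper than CFD.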
Energy Technology Data Exchange (ETDEWEB)
Millot, J.P. [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires
1962-07-01
We propose a method for calculating the biological efficiency of fast neutrons emitted by in-pile fission sources. This method justifies the empirical theory of Albert and Welton. Making simple assumptions concerning the cross-sections, we have supposed that the propagation can be reduced to a mono-kinetic problem. A system of orthonormal functions is then set up, making it possible to calculate the flux leaving a planar source. This method generalises the results obtained by Platzek to the case where the elastic cross-sections are not isotropic, and in particular makes it possible to define a displacement cross-section, an extension of the diffusion coefficient. The method can be generalised to time-dependent neutron diffusion and to the study of slowing-down. Numerical results are given in an appendix for H2O, D2O, Fe, Be, Pb, CH and CH2. These cross-sections have been verified experimentally in water and in graphite for neutrons of 2.5 and 14 MeV, using a SAMES accelerator and a 2 MeV Van de Graaff. (author)
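The empirical Albert-Welton picture that the method justifies represents fast-neutron dose attenuation in a shield by a single removal cross-section; in schematic form:

```latex
D(t) \approx D_0 \, e^{-\Sigma_R \, t}
```

where $D$ is the biological dose behind a shield of thickness $t$ and $\Sigma_R$ the removal cross-section of the shield material; the displacement cross-section defined in the report plays an analogous role as a single effective parameter of the transport problem.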
Modelling and simulation of the behaviour of magnesium alloys during hot deformation
Levesque, Julie
Magnesium alloys are increasingly used in the automotive industry. Their low density makes vehicles lighter, thus reducing fuel consumption and greenhouse gas emissions. The ductility of magnesium at room temperature is low, but raising the temperature activates additional slip systems and improves formability. Warm hydroforming could therefore make it possible to manufacture magnesium alloy parts for the automotive industry. The objective of this work was to develop a numerical model able to simulate the behaviour of magnesium alloys deformed at moderate temperature (200°C). The main difficulties lie in the fact that magnesium deforms not only by slip but also by twinning. Besides reorienting the crystal lattice, twinning also leads to a complex hardening scheme. The model used is a crystal plasticity model, which accounts for the evolution of texture during deformation. The initial model was adapted to magnesium by including twinning. It accounts for the reorientation of the crystal lattice as well as for the hardening caused by twin boundaries. The model was first calibrated against uniaxial tension and compression curves, and then validated by simulating the circumferential expansion test. The observed texture evolution also served to validate the model. Equations allowing the model parameters to be adjusted as a function of strain rate were also developed. Forming limit diagrams in the conventional and hydroforming directions were plotted. Twinning appears to contribute slightly to the formability of magnesium alloys at the temperature studied. An increase in the strain-rate sensitivity index also has a positive effect on formability. The results
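In crystal plasticity models of this kind, each slip or twin system $\alpha$ is driven by its resolved shear stress through the standard Schmid law (a textbook relation, not this thesis's specific hardening equations):

```latex
\tau^{\alpha} = \boldsymbol{\sigma} :
\left( \mathbf{m}^{\alpha} \otimes \mathbf{n}^{\alpha} \right)
```

where $\mathbf{m}^{\alpha}$ and $\mathbf{n}^{\alpha}$ are the shear direction and plane normal of the system; a system is active when $\tau^{\alpha}$ reaches its critical resolved shear stress, and twin systems, being unidirectional, shear only for $\tau^{\alpha} > 0$ while reorienting the lattice in the twinned volume fraction.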
Faribault, Alexandre
A two-dimensional electron gas can be created using the confinement potential of a thin layer of one semiconductor embedded in a substrate made of another semiconductor with a larger gap. Adding a magnetic field perpendicular to the confinement plane drastically modifies the properties of the electron gas. For suitably chosen densities and magnetic field values, one obtains a charge-density-wave ground state. In a system composed of two such two-dimensional gases brought sufficiently close to each other, theory predicts a ground state composed of a charge density wave in each well together with a series of linear regions in which the electrons are coherently delocalized between the two wells. In this thesis, we study the zero-temperature behaviour of this coherent-stripe ground state. A numerical study of the collective modes of these phases suggests that an unlocking of the coherent channels is conceivable in this system. To study this possibility, we first construct an effective model of coupled quasi-one-dimensional channels that correctly reproduces the low-energy collective excitations of the coherent-stripe phase of the double quantum well. In a suitably chosen coordinate system, these excitations can be described by pseudospin waves. The parameters of this simple effective model can be extracted from response-function calculations performed in the time-dependent Hartree-Fock approximation (also called the Generalized Random Phase Approximation). The model proves effective at describing the low-energy dynamics of the system over a certain range of inter-well distances. Removing from this model the Hamiltonian contributions arising from Josephson-type couplings between the channels yields a system in which the channels are unlocked. A treatment in
The Cell Ontology 2016: enhanced content, modularization, and ontology interoperability.
Diehl, Alexander D; Meehan, Terrence F; Bradford, Yvonne M; Brush, Matthew H; Dahdul, Wasila M; Dougall, David S; He, Yongqun; Osumi-Sutherland, David; Ruttenberg, Alan; Sarntivijai, Sirarat; Van Slyke, Ceri E; Vasilevsky, Nicole A; Haendel, Melissa A; Blake, Judith A; Mungall, Christopher J
2016-07-04
The Cell Ontology (CL) is an OBO Foundry candidate ontology covering the domain of canonical, natural biological cell types. Since its inception in 2005, the CL has undergone multiple rounds of revision and expansion, most notably in its representation of hematopoietic cells. For in vivo cells, the CL focuses on vertebrates but provides general classes that can be used for other metazoans, which can be subtyped in species-specific ontologies. Recent work on the CL has focused on extending the representation of various cell types, and developing new modules in the CL itself, and in related ontologies in coordination with the CL. For example, the Kidney and Urinary Pathway Ontology was used as a template to populate the CL with additional cell types. In addition, subtypes of the class 'cell in vitro' have received improved definitions and labels to provide for modularity with the representation of cells in the Cell Line Ontology and Reagent Ontology. Recent changes in the ontology development methodology for CL include a switch from OBO to OWL for the primary encoding of the ontology, and an increasing reliance on logical definitions for improved reasoning. The CL is now mandated as a metadata standard for large functional genomics and transcriptomics projects, and is used extensively for annotation, querying, and analyses of cell type specific data in sequencing consortia such as FANTOM5 and ENCODE, as well as for the NIAID ImmPort database and the Cell Image Library. The CL is also a vital component used in the modular construction of other biomedical ontologies-for example, the Gene Ontology and the cross-species anatomy ontology, Uberon, use CL to support the consistent representation of cell types across different levels of anatomical granularity, such as tissues and organs. The ongoing improvements to the CL make it a valuable resource to both the OBO Foundry community and the wider scientific community, and we continue to experience increased interest in the
Andersson, P.; Andersson-Sunden, E.; Sjöstrand, H.; Jacobsson-Svärd, S.
2014-08-01
In nuclear boiling water reactor cores, the distribution of water and steam (void) is essential for both safety and efficiency reasons. In order to enhance predictive capabilities, void distribution assessment is performed in two-phase test loops under reactor-relevant conditions. This article proposes the novel technique of fast-neutron tomography using a portable deuterium-tritium neutron generator to determine the time-averaged void distribution in these loops. Fast neutrons have the advantage of high transmission through the metallic structures and pipes typically concealing a thermal-hydraulic test loop, while still being fairly sensitive to the water/void content. However, commercially available fast-neutron generators have the disadvantage of a relatively low yield, and fast-neutron detection also suffers from relatively low detection efficiency. Fortunately, some loops are axially symmetric, a property which can be exploited to reduce the amount of data needed for a tomographic measurement, thus limiting the interrogation time needed. In this article, three axially symmetric test objects depicting a thermal-hydraulic test loop have been examined: steel pipes with an outer diameter of 24 mm and a wall thickness of 1.5 mm, with three different distributions of the plastic material POM inside the pipes. Data recorded with the FANTOM fast-neutron tomography instrument have been used to perform tomographic reconstructions of their radial material distribution. A dedicated tomographic algorithm that exploits the symmetry of these objects has been applied, which is described in the paper. Results are presented as 20-rixel (radial pixel) reconstructions of the interior constitution, and a 2D visualization of the pipe interior is demonstrated. The local POM attenuation coefficients in the rixels were measured with errors (RMS) of 0.025, 0.020, and 0.022 cm⁻¹, to be compared with the attenuation coefficient of solid POM. The accuracy and precision are high enough to provide a useful indication
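The symmetry-exploiting reconstruction described in this abstract can be sketched in a few lines: for an axially symmetric object, parallel-ray transmission data reduce to a small linear system relating each ray's path length through concentric annuli ("rixels") to the annular attenuation coefficients. The geometry, ray count, and attenuation values below are purely illustrative, not those of the FANTOM instrument.

```python
import numpy as np

def chord_matrix(radii, ray_offsets):
    """Path length of each parallel ray (impact parameter b) through each
    concentric annulus (rixel) of an axially symmetric object."""
    # chord of a disc of radius R at impact parameter b: 2*sqrt(R^2 - b^2)
    chord = lambda R, b: 2.0 * np.sqrt(max(R**2 - b**2, 0.0))
    A = np.zeros((len(ray_offsets), len(radii)))
    for i, b in enumerate(ray_offsets):
        for j, R_out in enumerate(radii):
            R_in = radii[j - 1] if j > 0 else 0.0
            A[i, j] = chord(R_out, b) - chord(R_in, b)  # annulus = outer disc minus inner disc
    return A

# toy object: 3 rixels with known attenuation coefficients (1/cm)
radii = np.array([0.4, 0.8, 1.2])          # outer radii of the annuli, cm
mu_true = np.array([0.025, 0.020, 0.022])  # attenuation coefficient per rixel
offsets = np.linspace(0.0, 1.1, 12)        # impact parameters of the ray bundle

A = chord_matrix(radii, offsets)
line_integrals = A @ mu_true               # -ln(I/I0) for each ray (Beer-Lambert)

# reconstruction: least-squares inversion of the small, well-conditioned system
mu_rec, *_ = np.linalg.lstsq(A, line_integrals, rcond=None)
print(np.round(mu_rec, 4))
```

Because the symmetry collapses the 2D image into a handful of radial unknowns, a dozen ray measurements suffice where a full tomogram would need hundreds of projections, which is the point made in the abstract about limiting interrogation time.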
Fujii, Asami; Inoue, Naoya; Watanabe, Mikio; Kawakami, Chisa; Hidaka, Yoh; Hayashizaki, Yoshihide; Iwatani, Yoshinori
2017-01-01
Graves' disease (GD) and Hashimoto's disease (HD) are autoimmune thyroid disorders distinguished by the presence or absence of antithyrotropin receptor (TSHR) antibodies (TRAb). TSHR gene polymorphisms determine the amount of TSHR expressed, which may in turn influence TRAb production. The FANTOM5 project identified six GD-associated single nucleotide polymorphisms (SNPs) within the enhancer regions of the TSHR and unknown genes. This study examined the association of 11 TSHR and unknown gene polymorphisms, five of which are located in TSHR enhancer regions, with the development and prognosis of GD and HD. SNPs of the TSHR and unknown genes were genotyped in 180 GD patients, including 62 patients with intractable GD and 48 patients with GD in remission; 151 HD patients, including 65 patients with severe HD and 40 patients with mild HD; and 111 healthy controls. The rs4411444 GG genotype and G allele, the rs2300519 AA genotype, and the rs179247 AA genotype and A allele were more frequent in GD patients than they were in controls. These same genotypes and alleles, in addition to the rs2300519 A allele and rs4903961 GG genotype and G allele, were more frequent in patients with intractable GD than they were in controls and patients with GD in remission. Interestingly, the rs2300519 TT genotype and T allele, rs4903961 CC genotype and C allele, and rs179247 GG genotype, all of which are minor genotypes and alleles among the evaluated SNPs, were more frequent in HD patients than they were in controls, but there were no differences in the frequencies of these genotypes and alleles between patients with severe HD and mild HD. Among the evaluated SNPs, the rs4411444 GG genotype and the rs4903961 C allele in the enhancer regions of the TSHR gene were most strongly associated with the development of GD, especially intractable disease, and that of HD, respectively. Among the evaluated TSHR gene SNPs, the rs4411444 GG genotype and the rs4903961 C allele in the enhancer regions
Mining mammalian transcript data for functional long non-coding RNAs.
Directory of Open Access Journals (Sweden)
Amit N Khachane
Full Text Available BACKGROUND: The role of long non-coding RNAs (lncRNAs) in controlling gene expression has garnered increased interest in recent years. Sequencing projects, such as Fantom3 for mouse and H-InvDB for human, have generated abundant data on the transcribed components of mammalian cells, the majority of which appear not to be protein-coding. However, much of the non-protein-coding transcriptome could merely be a consequence of 'transcription noise'. It is therefore essential to use bioinformatic approaches to identify the likely functional candidates in a high-throughput manner. PRINCIPAL FINDINGS: We derived a scheme for classifying and annotating likely functional lncRNAs in mammals. Using the available experimental full-length cDNA data sets for human and mouse, we identified 78 lncRNAs that are either syntenically conserved between human and mouse, or that originate from the same protein-coding genes. Of these, 11 have significant sequence homology. We found that these lncRNAs exhibit: (i) patterns of codon substitution typical of non-coding transcripts; (ii) preservation of sequences in distant mammals such as dog and cow; (iii) significant sequence conservation relative to their corresponding flanking regions (in 50% of cases, the flanking regions have no homology at all; in the remaining cases, their degree of conservation is significantly less); (iv) existence mostly as single-exon forms (8/11); and (v) presence of conserved and stable secondary structure motifs within them. We further identified orthologous protein-coding genes that are contributing to the pool of lncRNAs, among which genes implicated in carcinogenesis are significantly over-represented. CONCLUSION: Our comparative mammalian genomics approach coupled with evolutionary analysis identified a small population of conserved long non-protein-coding RNAs (lncRNAs) that are potentially functional across Mammalia. Additionally, our analysis indicates that amongst the orthologous protein-coding genes that
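Two of the filtering criteria the abstract lists, absence of a substantial open reading frame and conservation exceeding that of the flanking regions, can be mimicked with a toy filter. The function names, thresholds, and identity scores below are hypothetical illustrations, not the paper's actual pipeline.

```python
# Toy filter mirroring two criteria from the abstract: no long ORF, and
# sequence conservation higher than that of the flanking regions.
# All thresholds and helper names are illustrative assumptions.

def longest_orf(seq):
    """Length (nt) of the longest ATG..stop open reading frame in any of the 3 frames."""
    stops = {"TAA", "TAG", "TGA"}
    best = 0
    for frame in range(3):
        codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
        start = None
        for i, c in enumerate(codons):
            if c == "ATG" and start is None:
                start = i                      # open a candidate ORF
            elif c in stops and start is not None:
                best = max(best, (i - start) * 3)
                start = None                   # close it at the stop codon
    return best

def looks_noncoding(seq, identity, flank_identity,
                    max_orf=120, min_excess=0.15):
    """True if the transcript has no substantial ORF and is conserved
    noticeably above its flanks (illustrative thresholds)."""
    return (longest_orf(seq) < max_orf
            and identity > flank_identity + min_excess)

print(looks_noncoding("ATGAAATAG" * 4, identity=0.82, flank_identity=0.55))  # True
```

A real pipeline would of course compute the identity values from whole-genome alignments and add the codon-substitution and secondary-structure tests the abstract mentions; this sketch only shows the shape of the decision rule.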
Identification of Enhancers In Human: Advances In Computational Studies
Kleftogiannis, Dimitrios A.
2016-03-24
framework for identifying enhancers. The proposed system, called Dragon Ensemble Enhancer Predictor (DEEP), is based on a novel two-layer deep-learning ensemble algorithm capable of identifying enhancers characterized by different cellular conditions. Experimental results using data from ENCODE and FANTOM5 demonstrate that DEEP surpasses the major enhancer-prediction systems in recognition performance and shows very good generalization to unknown cell lines and tissues. Finally, we take a step further by developing a novel feature selection method suitable for defining a computational framework capable of analyzing the genomic content of enhancers and reporting cell-line-specific predictive signatures.
El enigma de los dos Hipólitos
Directory of Open Access Journals (Sweden)
Claudio Pierantoni
2006-01-01
Full Text Available This article shows, on the one hand, that the points of difference between the two groups of works within the Corpus Hippolytianum noted by Nautin, and later developed above all by Simonetti, are substantial and must be accepted as demonstrating the existence of two different authors, abandoning the still widespread "suspension of judgement" on the question, even though several secondary points remain uncertain. On the other hand, it argues that the splitting of the Roman author into two different persons, inaugurated by Nautin himself (with his phantom-like "Josipo") and accepted afterwards, albeit more prudently, by Simonetti, Guarducci and others from 1989 onwards, has no firm basis. On the contrary, strong indications point to the identification of the two, and therefore, in the author's judgement, one must accept only the existence of: a Hippolytus, Roman writer and schismatic presbyter (later reconciled through his martyrdom and venerated in the Roman tradition); and a Hippolytus, Asian writer and bishop of an unknown see.
From 2D to 3D modelling in long term tectonics: Modelling challenges and HPC solutions (Invited)
Le Pourhiet, L.; May, D.
2013-12-01
Over the last decades, 3D thermo-mechanical codes have been made available to the long-term tectonics community, either as open source (Underworld, Gale) or with more limited access (Fantom, Elvis3D, Douar, LaMem, etc.). However, to date, few published results using these methods have included the coupling between crustal and lithospheric dynamics at large strain. The fact that these computations are computationally expensive is not the primary reason for the relatively slow development of 3D modelling in the long-term tectonics community, as compared with the rapid development observed within the mantle dynamics community or in the short-term tectonics field. Long-term tectonics problems have specific issues not found in either of these two fields, including large strain (not an issue for short-term tectonics), the inclusion of a free surface, and the occurrence of large viscosity contrasts. The first issue is typically eliminated by using a combined marker-ALE method instead of a fully Lagrangian method; however, the marker-ALE approach can pose some algorithmic challenges in a massively parallel environment. The two last issues are more problematic because they affect the convergence of the linear/non-linear solver and the memory cost. Two options have been tested so far: using low-order elements and solving with a sparse direct solver, or using higher-order stable elements together with a multigrid solver. The first option is simpler to code and to use but reaches its limit at around 80^3 low-order elements. The second option requires more operations but allows the use of iterative solvers on extremely large computers. In this presentation, I will describe the design philosophy and highlight results obtained using a code of the second class. The presentation will be oriented from an end-user point of view, using an application from 3D continental break-up to illustrate key concepts. The description will proceed point by point, from implementing physics into the code to dealing with
Directory of Open Access Journals (Sweden)
Laurence D Hurst
2015-12-01
Full Text Available X chromosomes are unusual in many regards, not least of which is their nonrandom gene content. The causes of this bias are commonly discussed in the context of sexual antagonism and the avoidance of activity in the male germline. Here, we examine the notion that, at least in some taxa, functionally biased gene content may more profoundly be shaped by limits imposed on gene expression owing to haploid expression of the X chromosome. Notably, if the X, as in primates, is transcribed at rates comparable to the ancestral rate (per promoter) prior to the X chromosome formation, then the X is not a tolerable environment for genes with very high maximal net levels of expression, owing to transcriptional traffic jams. We test this hypothesis using The Encyclopedia of DNA Elements (ENCODE) and data from the Functional Annotation of the Mammalian Genome (FANTOM5) project. As predicted, the maximal expression of human X-linked genes is much lower than that of genes on autosomes: on average, maximal expression is three times lower on the X chromosome than on autosomes. Similarly, autosome-to-X retroposition events are associated with lower maximal expression of retrogenes on the X than seen for X-to-autosome retrogenes on autosomes. Also as expected, X-linked genes have a lesser degree of increase in gene expression than autosomal ones (compared to the human/Chimpanzee common ancestor) if highly expressed, but not if lowly expressed. The traffic jam model also explains the known lower breadth of expression for genes on the X (and the Z of birds), as genes with broad expression are, on average, those with high maximal expression. As then further predicted, highly expressed tissue-specific genes are also rare on the X and broadly expressed genes on the X tend to be lowly expressed, both indicating that the trend is shaped by the maximal expression level not the breadth of expression per se. Importantly, a limit to the maximal expression level explains biased
SPA: a probabilistic algorithm for spliced alignment.
Directory of Open Access Journals (Sweden)
Erik van Nimwegen
2006-04-01
Full Text Available Recent large-scale cDNA sequencing efforts show that elaborate patterns of splice variation are responsible for much of the proteome diversity in higher eukaryotes. To obtain an accurate account of the repertoire of splice variants, and to gain insight into the mechanisms of alternative splicing, it is essential that cDNAs are very accurately mapped to their respective genomes. Currently available algorithms for cDNA-to-genome alignment do not reach the necessary level of accuracy because they use ad hoc scoring models that cannot correctly trade off the likelihoods of various sequencing errors against the probabilities of different gene structures. Here we develop a Bayesian probabilistic approach to cDNA-to-genome alignment. Gene structures are assigned prior probabilities based on the lengths of their introns and exons, and based on the sequences at their splice boundaries. A likelihood model for sequencing errors takes into account the rates at which misincorporation, as well as insertions and deletions of different lengths, occurs during sequencing. The parameters of both the prior and likelihood model can be automatically estimated from a set of cDNAs, thus enabling our method to adapt itself to different organisms and experimental procedures. We implemented our method in a fast cDNA-to-genome alignment program, SPA, and applied it to the FANTOM3 dataset of over 100,000 full-length mouse cDNAs and a dataset of over 20,000 full-length human cDNAs. Comparison with the results of four other mapping programs shows that SPA produces alignments of significantly higher quality. In particular, the quality of the SPA alignments near splice boundaries and SPA's mapping of the 5' and 3' ends of the cDNAs are highly improved, allowing for more accurate identification of transcript starts and ends, and accurate identification of subtle splice variations. Finally, our splice boundary analysis on the human dataset suggests the existence of a novel non
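The core idea of this abstract, trading off sequencing-error likelihoods against gene-structure priors in a single posterior score, can be sketched as a toy log-posterior over candidate spliced alignments. All probabilities below are assumed placeholder values, not SPA's trained parameters.

```python
import math

# Illustrative parameters (assumed values, not SPA's estimated ones)
LOG_P = {"match": math.log(0.99), "mismatch": math.log(0.003),  # misincorporation
         "ins": math.log(0.002), "del": math.log(0.002)}        # indels per base

def log_splice_prior(donor, acceptor):
    """Prior credit for canonical GT..AG splice boundaries (toy scoring)."""
    return math.log(0.95) if (donor, acceptor) == ("GT", "AG") else math.log(0.05)

def log_intron_length_prior(length, mean=1500.0):
    """Exponential-like prior on intron length (a stand-in for a learned model)."""
    return -length / mean - math.log(mean)

def score_alignment(ops, introns):
    """Log posterior (up to a constant) of one candidate spliced alignment.
    ops: edit operations of the exonic alignment; introns: (length, donor, acceptor)."""
    ll = sum(LOG_P[op] for op in ops)                          # sequencing-error likelihood
    lp = sum(log_intron_length_prior(n) + log_splice_prior(d, a)
             for n, d, a in introns)                           # gene-structure prior
    return ll + lp

good = score_alignment(["match"] * 300, introns=[(1200, "GT", "AG")])
bad  = score_alignment(["match"] * 295 + ["mismatch"] * 5, introns=[(1200, "CT", "AC")])
print(good > bad)  # True: the canonical, error-free structure wins
```

The point of such a probabilistic score, as the abstract argues, is that sequencing-error rates and gene-structure preferences are weighed in the same currency (log probability), rather than through ad hoc penalties, and both components can be re-estimated from the cDNA data themselves.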
Directory of Open Access Journals (Sweden)
Marina Lizio
2015-11-01
Full Text Available Mammals are composed of hundreds of different cell types with specialized functions. Each of these cellular phenotypes is controlled by different combinations of transcription factors. Using a human non-islet cell insulinoma cell line (TC-YIK), which expresses insulin and the majority of known pancreatic beta cell specific genes, as an example, we describe a general approach to identify key cell-type-specific transcription factors (TFs) and their direct and indirect targets. By ranking all human TFs by their level of enriched expression in TC-YIK relative to a broad collection of samples (FANTOM5), we confirmed known key regulators of pancreatic function and development. Systematic siRNA-mediated perturbation of these TFs followed by qRT-PCR revealed their interconnections, with NEUROD1 at the top of the regulation hierarchy and its depletion drastically reducing insulin levels. For 15 of the TF knock-downs (KD), we then used Cap Analysis of Gene Expression (CAGE) to identify thousands of their targets genome-wide (KD-CAGE). The data confirm NEUROD1 as a key positive regulator in the transcriptional regulatory network (TRN), and ISL1 and PROX1 as antagonists. As a complementary approach we used ChIP-seq on four of these factors to identify NEUROD1, LMX1A, PAX6 and RFX6 binding sites in the human genome. Examining the overlap between genes perturbed in the KD-CAGE experiments and genes with a ChIP-seq peak within 1kb of their promoter, we identified direct transcriptional targets of these TFs. Integration of KD-CAGE and ChIP-seq data shows that both NEUROD1 and LMX1A work as the main transcriptional activators. In the core TRN (i.e., TF-TF only), NEUROD1 directly transcriptionally activates the pancreatic TFs HSF4, INSM1, MLXIPL, MYT1, NKX6-3, ONECUT2, PAX4, PROX1, RFX6, ST18, DACH1 and SHOX2, while LMX1A directly transcriptionally activates DACH1, SHOX2, PAX6 and PDX1. Analysis of these complementary datasets suggests the need for caution in interpreting
Macrophages.com: an on-line community resource for innate immunity research.
Robert, Christelle; Lu, Xiang; Law, Andrew; Freeman, Tom C; Hume, David A
2011-11-01
Macrophages play a major role in tissue remodelling during development, wound healing and tissue homeostasis, and are central to innate immunity and to the pathology of tissue injury and inflammation. Given this fundamental role in many aspects of biological function, an enormous wealth of information has accumulated on these fascinating cells in the literature and other public repositories. With the escalation of genome-scale data derived from macrophages and related haematopoietic cell types, there is a growing need for an integrated resource that seeks to compile, organise and analyse our collective knowledge of macrophage biology. Here we describe a community-driven web-based resource, macrophages.com, that aims to provide a portal onto various types of omics data to facilitate comparative genomic studies, promoter and transcriptional network analyses, and models of macrophage pathways, together with other information on these cells. To this end, the website combines public and in-house analyses of expression data with pre-analysed views of co-expressed genes as supported by the network analysis tool BioLayout Express(3D), as well as providing access to maps of pathways active in macrophages. Macrophages.com also provides access to an extensive image library of macrophages in adult/embryonic tissue sections prepared from normal and transgenic mice. In addition, the site links to the Human Protein Atlas database so as to provide direct access to protein expression patterns in human macrophages. Finally, an integrated gene-centric portal provides the tools for rapid promoter analysis studies based on a comprehensive set of CAGE-derived transcription start site (TSS) sequences in human and mouse genomes as generated by the Functional Annotation of Mammalian genomes (FANTOM) projects initiated by the RIKEN Omics Science Center. Our aim is to continue to grow the macrophages.com resource using publicly available data, as well as in-house generated knowledge. In so doing
O'Driscoll, K.; Mayer, B.; Su, J.; Mathis, M.
2014-05-01
The fate and cycling of two selected legacy persistent organic pollutants (POPs), PCB 153 and γ-HCH, in the North Sea in the 21st century have been modelled with combined hydrodynamic and fate and transport ocean models (HAMSOM and FANTOM, respectively). To investigate the impact of climate variability on POPs in the North Sea in the 21st century, future scenario model runs for three 10-year periods to the year 2100 using plausible levels of both in situ concentrations and atmospheric, river and open boundary inputs are performed. This slice mode under a moderate scenario (A1B) is sufficient to provide a basis for further analysis. For the HAMSOM and atmospheric forcing, results of the IPCC A1B (SRES) 21st century scenario are utilized, where surface forcing is provided by the REMO downscaling of the ECHAM5 global atmospheric model, and open boundary conditions are provided by the MPIOM global ocean model. Dry gas deposition and volatilization of γ-HCH increase in the future relative to the present by up to 20% (in the spring and summer months for deposition and in summer for volatilization). In the water column, total mass of γ-HCH and PCB 153 remain fairly steady in all three runs. In sediment, γ-HCH increases in the future runs, relative to the present, while PCB 153 in sediment decreases exponentially in all three runs, but even faster in the future, due to the increased number of storms, increased duration of gale wind conditions and increased water and air temperatures, all of which are the result of climate change. Annual net sinks exceed sources at the ends of all periods. Overall, the model results indicate that the climate change scenarios considered here generally have a negligible influence on the simulated fate and transport of the two POPs in the North Sea, although the increased number and magnitude of storms in the 21st century will result in POP resuspension and ensuing revolatilization events. Trends in emissions from primary and secondary
Hurst, Laurence D.; Ghanbarian, Avazeh T.; Forrest, Alistair R. R.; Huminiecki, Lukasz
2015-01-01
X chromosomes are unusual in many regards, not least of which is their nonrandom gene content. The causes of this bias are commonly discussed in the context of sexual antagonism and the avoidance of activity in the male germline. Here, we examine the notion that, at least in some taxa, functionally biased gene content may more profoundly be shaped by limits imposed on gene expression owing to haploid expression of the X chromosome. Notably, if the X, as in primates, is transcribed at rates comparable to the ancestral rate (per promoter) prior to the X chromosome formation, then the X is not a tolerable environment for genes with very high maximal net levels of expression, owing to transcriptional traffic jams. We test this hypothesis using The Encyclopedia of DNA Elements (ENCODE) and data from the Functional Annotation of the Mammalian Genome (FANTOM5) project. As predicted, the maximal expression of human X-linked genes is much lower than that of genes on autosomes: on average, maximal expression is three times lower on the X chromosome than on autosomes. Similarly, autosome-to-X retroposition events are associated with lower maximal expression of retrogenes on the X than seen for X-to-autosome retrogenes on autosomes. Also as expected, X-linked genes have a lesser degree of increase in gene expression than autosomal ones (compared to the human/Chimpanzee common ancestor) if highly expressed, but not if lowly expressed. The traffic jam model also explains the known lower breadth of expression for genes on the X (and the Z of birds), as genes with broad expression are, on average, those with high maximal expression. As then further predicted, highly expressed tissue-specific genes are also rare on the X and broadly expressed genes on the X tend to be lowly expressed, both indicating that the trend is shaped by the maximal expression level not the breadth of expression per se. Importantly, a limit to the maximal expression level explains biased tissue of expression
Directory of Open Access Journals (Sweden)
Mladen R. Tišma
2014-10-01
Full Text Available New radar technology for the enhanced Patriot PAC-3 MSE air defence missile system; The precision-guided air bomb AASM debuted on a non-French aircraft; A project for a new Latin American training aircraft launched; Airbus plans to give the Eurofighter a maritime strike capability; China plans to build a domestic aircraft carrier by 2017; Boeing revealed details of the Phantom Swift aircraft for the VTOL X-Plane programme; IOMAX offers the Archangel light-aircraft concept; A "simplified" version of the NH90 transport helicopter is planned; The S-97 Raider helicopter prototype moves closer to its first flight; The Russian T-50 PAK-FA enters the phase of weapons flight tests; The lightweight man-portable Verba air defence missile system entered service with Russian paratroopers; Iran launched a new version of the Nazeat rocket; Renault presented a 6x6 demonstrator vehicle under the designation BMX-01; Israel's IWI offers a new sniper rifle, the DAN .338; Colt showed the CK901 assault rifle; The aircraft carrier "Vikramaditya" was commissioned into the Indian Navy; The Montenegrin Navy patrol boat "Kotor" sails again.
Lizio, Marina; Ishizu, Yuri; Itoh, Masayoshi; Lassmann, Timo; Hasegawa, Akira; Kubosaki, Atsutaka; Severin, Jessica; Kawaji, Hideya; Nakamura, Yukio; Suzuki, Harukazu; Hayashizaki, Yoshihide; Carninci, Piero; Forrest, Alistair R R
2015-01-01
Mammals are composed of hundreds of different cell types with specialized functions. Each of these cellular phenotypes is controlled by different combinations of transcription factors. Using a human non islet cell insulinoma cell line (TC-YIK) which expresses insulin and the majority of known pancreatic beta cell specific genes as an example, we describe a general approach to identify key cell-type-specific transcription factors (TFs) and their direct and indirect targets. By ranking all human TFs by their level of enriched expression in TC-YIK relative to a broad collection of samples (FANTOM5), we confirmed known key regulators of pancreatic function and development. Systematic siRNA mediated perturbation of these TFs followed by qRT-PCR revealed their interconnections, with NEUROD1 at the top of the regulation hierarchy and its depletion drastically reducing insulin levels. For 15 of the TF knock-downs (KD), we then used Cap Analysis of Gene Expression (CAGE) to identify thousands of their targets genome-wide (KD-CAGE). The data confirm NEUROD1 as a key positive regulator in the transcriptional regulatory network (TRN), and ISL1 and PROX1 as antagonists. As a complementary approach we used ChIP-seq on four of these factors to identify NEUROD1, LMX1A, PAX6, and RFX6 binding sites in the human genome. Examining the overlap between genes perturbed in the KD-CAGE experiments and genes with a ChIP-seq peak within 50 kb of their promoter, we identified direct transcriptional targets of these TFs. Integration of KD-CAGE and ChIP-seq data shows that both NEUROD1 and LMX1A work as the main transcriptional activators. In the core TRN (i.e., TF-TF only), NEUROD1 directly transcriptionally activates the pancreatic TFs HSF4, INSM1, MLXIPL, MYT1, NKX6-3, ONECUT2, PAX4, PROX1, RFX6, ST18, DACH1, and SHOX2, while LMX1A directly transcriptionally activates DACH1, SHOX2, PAX6, and PDX1. Analysis of these complementary datasets suggests the need for caution in interpreting Ch
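The first step described in this abstract, ranking all TFs by their enriched expression in the target cell line relative to a broad sample collection, can be illustrated with synthetic data. The gene names below come from the abstract, but the expression matrix and enrichment factors are fabricated for illustration; the ranking statistic (log fold change over the background median) is one plausible choice, not necessarily the study's exact metric.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy expression matrix: rows = transcription factors, columns = background samples
tfs = ["NEUROD1", "LMX1A", "PAX6", "HOXA1", "GATA1"]
background = rng.lognormal(mean=1.0, sigma=0.5, size=(5, 200))  # broad FANTOM5-like panel
target = background.mean(axis=1).copy()                          # the cell line of interest
target[0] *= 40.0   # NEUROD1 strongly enriched in the target line (assumed)
target[1] *= 15.0   # LMX1A moderately enriched (assumed)

def enrichment_rank(target_expr, background_expr, names):
    """Rank TFs by log2 fold enrichment of the target sample over the background median."""
    med = np.median(background_expr, axis=1)
    lfc = np.log2((target_expr + 1e-9) / (med + 1e-9))  # pseudocount guards against zeros
    order = np.argsort(lfc)[::-1]                        # most enriched first
    return [(names[i], float(lfc[i])) for i in order]

ranking = enrichment_rank(target, background, tfs)
print([name for name, _ in ranking[:2]])  # → ['NEUROD1', 'LMX1A']
```

The top-ranked TFs would then be the candidates carried forward to the siRNA knock-down and ChIP-seq steps the abstract describes.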
Directory of Open Access Journals (Sweden)
J Kenneth Baillie
2017-03-01
Full Text Available The FANTOM5 consortium utilised cap analysis of gene expression (CAGE) to provide an unprecedented insight into transcriptional regulation in human cells and tissues. In the current study, we have used CAGE-based transcriptional profiling on an extended dense time course of the response of human monocyte-derived macrophages grown in macrophage colony-stimulating factor (CSF1) to bacterial lipopolysaccharide (LPS). We propose that this system provides a model for the differentiation and adaptation of monocytes entering the intestinal lamina propria. The response to LPS is shown to be a cascade of successive waves of transient gene expression extending over at least 48 hours, with hundreds of positive and negative regulatory loops. Promoter analysis using motif activity response analysis (MARA) identified some of the transcription factors likely to be responsible for the temporal profile of transcriptional activation. Each LPS-inducible locus was associated with multiple inducible enhancers, and in each case, transient eRNA transcription at multiple sites detected by CAGE preceded the appearance of promoter-associated transcripts. LPS-inducible long non-coding RNAs were commonly associated with clusters of inducible enhancers. We used these data to re-examine the hundreds of loci associated with susceptibility to inflammatory bowel disease (IBD) in genome-wide association studies. Loci associated with IBD were strongly and specifically (relative to rheumatoid arthritis and unrelated traits) enriched for promoters that were regulated in monocyte differentiation or activation. Amongst previously-identified IBD susceptibility loci, the vast majority contained at least one promoter that was regulated in CSF1-dependent monocyte-macrophage transitions and/or in response to LPS. On this basis, we concluded that IBD loci are strongly-enriched for monocyte-specific genes, and identified at least 134 additional candidate genes associated with IBD susceptibility
Arner, Erik; De Hoon, Michiel; Carninci, Piero; Hayashizaki, Yoshihide; Pavli, Paul; Summers, Kim M.; Hume, David A.
2017-01-01
The FANTOM5 consortium utilised cap analysis of gene expression (CAGE) to provide an unprecedented insight into transcriptional regulation in human cells and tissues. In the current study, we have used CAGE-based transcriptional profiling on an extended dense time course of the response of human monocyte-derived macrophages grown in macrophage colony-stimulating factor (CSF1) to bacterial lipopolysaccharide (LPS). We propose that this system provides a model for the differentiation and adaptation of monocytes entering the intestinal lamina propria. The response to LPS is shown to be a cascade of successive waves of transient gene expression extending over at least 48 hours, with hundreds of positive and negative regulatory loops. Promoter analysis using motif activity response analysis (MARA) identified some of the transcription factors likely to be responsible for the temporal profile of transcriptional activation. Each LPS-inducible locus was associated with multiple inducible enhancers, and in each case, transient eRNA transcription at multiple sites detected by CAGE preceded the appearance of promoter-associated transcripts. LPS-inducible long non-coding RNAs were commonly associated with clusters of inducible enhancers. We used these data to re-examine the hundreds of loci associated with susceptibility to inflammatory bowel disease (IBD) in genome-wide association studies. Loci associated with IBD were strongly and specifically (relative to rheumatoid arthritis and unrelated traits) enriched for promoters that were regulated in monocyte differentiation or activation. Amongst previously-identified IBD susceptibility loci, the vast majority contained at least one promoter that was regulated in CSF1-dependent monocyte-macrophage transitions and/or in response to LPS. On this basis, we concluded that IBD loci are strongly-enriched for monocyte-specific genes, and identified at least 134 additional candidate genes associated with IBD susceptibility from reanalysis
Hurst, Laurence D; Ghanbarian, Avazeh T; Forrest, Alistair R R; Huminiecki, Lukasz
2015-12-01
X chromosomes are unusual in many regards, not least of which is their nonrandom gene content. The causes of this bias are commonly discussed in the context of sexual antagonism and the avoidance of activity in the male germline. Here, we examine the notion that, at least in some taxa, functionally biased gene content may more profoundly be shaped by limits imposed on gene expression owing to haploid expression of the X chromosome. Notably, if the X, as in primates, is transcribed at rates comparable to the ancestral rate (per promoter) prior to the X chromosome formation, then the X is not a tolerable environment for genes with very high maximal net levels of expression, owing to transcriptional traffic jams. We test this hypothesis using The Encyclopedia of DNA Elements (ENCODE) and data from the Functional Annotation of the Mammalian Genome (FANTOM5) project. As predicted, the maximal expression of human X-linked genes is much lower than that of genes on autosomes: on average, maximal expression is three times lower on the X chromosome than on autosomes. Similarly, autosome-to-X retroposition events are associated with lower maximal expression of retrogenes on the X than seen for X-to-autosome retrogenes on autosomes. Also as expected, X-linked genes have a lesser degree of increase in gene expression than autosomal ones (compared to the human/Chimpanzee common ancestor) if highly expressed, but not if lowly expressed. The traffic jam model also explains the known lower breadth of expression for genes on the X (and the Z of birds), as genes with broad expression are, on average, those with high maximal expression. As then further predicted, highly expressed tissue-specific genes are also rare on the X and broadly expressed genes on the X tend to be lowly expressed, both indicating that the trend is shaped by the maximal expression level not the breadth of expression per se. Importantly, a limit to the maximal expression level explains biased tissue of expression
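The central comparison in the abstract above, maximal (peak) expression of X-linked versus autosomal genes, can be illustrated with a minimal sketch. The function name, the toy expression matrix, and the use of a simple mean-of-maxima ratio are assumptions for illustration, not the paper's exact statistic:

```python
import numpy as np

def maximal_expression_ratio(expr, chrom):
    """Compare per-gene maximal expression (max over tissues) between
    autosomal and X-linked genes, in the spirit of the 'traffic jam' test.

    expr  : (genes x tissues) expression matrix
    chrom : chromosome label per gene ('X' or an autosome name)
    Returns (mean of per-gene maxima on autosomes) / (same on X).
    """
    expr = np.asarray(expr, dtype=float)
    gene_max = expr.max(axis=1)                 # maximal expression per gene
    on_x = np.array([c == "X" for c in chrom])
    return gene_max[~on_x].mean() / gene_max[on_x].mean()

# Toy data: autosomal genes peak three times higher than X-linked ones,
# mirroring the roughly three-fold difference reported in the abstract.
expr = np.array([[30.0, 1.0],   # autosomal
                 [60.0, 2.0],   # autosomal
                 [10.0, 1.0],   # X-linked
                 [20.0, 2.0]])  # X-linked
ratio = maximal_expression_ratio(expr, ["1", "2", "X", "X"])
print(ratio)  # 3.0
```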
Hurst, Laurence D.
2015-12-18
Practical characterization of quantum systems and 2D self-correcting quantum memories
Landon-Cardinal, Olivier
This thesis addresses two major problems of quantum information: - How can a quantum system be characterized efficiently? - How can quantum information be stored? It is accordingly divided into two distinct parts linked by common technical elements; each is nevertheless of independent interest and self-contained. Practical characterization of quantum systems. Quantum computing demands very tight control over quantum systems composed of several particles, for example atoms confined in an electromagnetic trap or electrons in a semiconductor device. Characterizing such a quantum system consists in obtaining information about its state through experimental measurements. However, each measurement on the quantum system perturbs it, and must therefore be performed after re-preparing the system identically. The sought information is then reconstructed numerically from the full set of experimental data. Experiments carried out to date have aimed at reconstructing the complete quantum state of the system, in particular to demonstrate the ability to prepare entangled states, in which the particles exhibit non-local correlations. Yet the tomography procedure currently in use is only feasible for systems composed of a small number of particles, so it is urgent to find characterization methods for large systems. In this thesis, we propose two more targeted theoretical approaches to characterize a quantum system using only a reasonable experimental and numerical effort. - The first consists in estimating the distance between the state realized in the laboratory and the target state the experimenter intended to prepare. We present a so-called certification protocol that requires fewer resources than tomography and is very efficient for several classes of states important to quantum information processing. - The second
Energy Technology Data Exchange (ETDEWEB)
Lim, T.
2011-04-28
To numerically simulate eddy-current non-destructive testing (CND-CF), the probe response can be modeled by a semi-analytical volume-integral approach. Although faster than the finite element method, this approach is limited to the study of planar or cylindrical parts (without accounting for edge effects) because of the complexity of the expression of the Green dyad for more general configurations. There is, however, strong industrial demand to extend eddy-current modeling capabilities to complex configurations (deformed plates, part edges, ...). We were therefore led to reformulate the electromagnetic problem, with the goal of retaining a semi-analytical approach. The surface integral equation (SIE) formulation expresses the volume problem as an equivalent transmission problem at the (2D) interfaces between homogeneous subdomains. This problem reduces to solving a linear system (by the method of moments) whose number of unknowns is reduced thanks to the surface character of the mesh. This system can then be solved with a direct solver for small configurations, which allowed us to treat several right-hand sides (i.e., different probe positions) for a single inversion of the impedance matrix. The numerical results obtained with this formulation concern plates with edge effects, such as edges and corners, taken into account; they agree with results obtained by the finite element method. For large configurations, we carried out a preliminary study toward adapting a method that accelerates the matrix-vector product arising in an iterative solver (the fast multipole method, FMM), in order to determine the conditions under which the FMM computation behaves correctly (accuracy, convergence, ...) in the NDT context
Weather and seasonal climate prediction for South America using a multi-model superensemble
Chaves, Rosane R.; Ross, Robert S.; Krishnamurti, T. N.
2005-11-01
January, February, and December of 2000. The six global models are from the following forecast centers: FSU, Bureau of Meteorology Research Center (BMRC), Japan Meteorological Agency (JMA), National Centers for Environmental Prediction (NCEP), Naval Research Laboratory (NRL), and Recherche en Prevision Numerique (RPN). Predictions of precipitation are made for the period January, February, and December of 2001 with a multi-analysis-multi-model superensemble where, in addition to the six forecast models just mentioned, five additional versions of the FSU model are used in the ensemble, each with a different initialization (analysis) based on different physical initialization procedures. On the basis of observations, the results show that the FSUSE provides the best forecasts of the mass and motion field variables to forecast day 5, when compared to both the models comprising the ensemble and the multi-model ensemble mean during the wet season of December-February over South America. Individual case studies show that the FSUSE provides excellent predictions of rainfall for particular synoptic events to forecast day 3. Copyright
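The superensemble referred to above is, in the Krishnamurti formulation, a multiple linear regression of observed anomalies on the member models' forecast anomalies over a training period; the fitted weights are then applied to new forecasts. A minimal sketch, with the function name and toy data as assumptions (the operational scheme works pointwise on gridded fields):

```python
import numpy as np

def superensemble(train_fcsts, train_obs, new_fcsts):
    """Multi-model superensemble sketch: regress observed anomalies on
    forecast anomalies during training, then combine new forecasts.

    train_fcsts : (times x models) forecasts in the training period
    train_obs   : (times,) verifying observations
    new_fcsts   : (models,) forecasts to combine
    """
    F = np.asarray(train_fcsts, float)
    o = np.asarray(train_obs, float)
    f_mean, o_mean = F.mean(axis=0), o.mean()
    # Least-squares weights for each member model's anomalies.
    w, *_ = np.linalg.lstsq(F - f_mean, o - o_mean, rcond=None)
    return o_mean + (np.asarray(new_fcsts, float) - f_mean) @ w

# Toy case: the observations equal model 0's forecasts exactly, so the
# regression should weight model 0 fully and ignore model 1.
rng = np.random.default_rng(0)
m0 = rng.normal(10.0, 2.0, 50)
m1 = rng.normal(10.0, 2.0, 50)
pred = superensemble(np.column_stack([m0, m1]), m0, [12.0, 7.0])
print(round(float(pred), 6))  # close to 12.0
```

The same regression, repeated at every grid point and forecast lead, is what gives the superensemble its advantage over a simple ensemble mean.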
Navigation of an autonomous vehicle around an asteroid
Dionne, Karine
Simulation results show that adding one range measurement per update cycle yields a significant improvement in navigation performance. This scheme reduces the estimation error and the periods of non-observability, in addition to countering the dilution of precision of the measurements. Sensitivity analyses confirm the contribution of the range measurements to the overall reduction of the estimation error over a wide range of design parameters. They also indicate that the mapping error is a critical parameter for the performance of the navigation system developed. Keywords: state estimation, adaptive Kalman filter, optical navigation, lidar, asteroid, numerical simulations
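The abstract describes fusing range measurements in an adaptive Kalman filter. As a hedged illustration of why an extra measurement shrinks the estimation error covariance, here is a standard linear Kalman measurement update, not the thesis's adaptive filter; the function name and the numbers are illustrative:

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard linear Kalman measurement update: fuse measurement z
    (with noise covariance R, observation matrix H) into prior (x, P)."""
    H = np.atleast_2d(H)
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)                 # state correction
    P = (np.eye(len(x)) - K @ H) @ P        # covariance shrinks
    return x, P

# Prior on a scalar position; fusing one range-like measurement reduces
# the error covariance, as the abstract reports for each update cycle.
x0, P0 = np.array([0.0]), np.array([[4.0]])
x1, P1 = kf_update(x0, P0, z=np.array([1.0]),
                   H=np.array([[1.0]]), R=np.array([[1.0]]))
print(float(P1[0, 0]))  # posterior variance, down from the prior 4.0
```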
Energy Technology Data Exchange (ETDEWEB)
Naudet, R. [Commissariat a l'Energie Atomique, Saclay (France). Centre d'Etudes Nucleaires]
1964-07-01
this problem. In the first, the neutron is followed through its successive collisions, with particular attention to how its capture probability varies; in the second, the spectrum is considered globally, writing at each velocity the balance of neutrons arriving and disappearing. It is shown, moreover, that these two methods ultimately rest on the same formalism. The first gives rise to interesting interpretations that allow a very good understanding of the physical phenomena; the second leads to particularly convenient calculations, thanks to the thermalization operators developed at Saclay. The various parameters governing this problem are studied in turn (thermalization model - capture law - variation law of the diffusion coefficient - geometrical parameters of the cell - relative importance of leakage); for each of them, numerical applications are given. (author)
Energy Technology Data Exchange (ETDEWEB)
Moller, J.Y.
2012-01-10
To model nuclear reactors, the stationary linear Boltzmann equation is solved. After discretizing the energy and angular variables, the hyperbolic equation is numerically solved with the discontinuous finite element method. The MINARET code uses this method on a triangular unstructured mesh in order to deal with complex geometries (for example, those containing arcs of circle). However, meshes with straight edges only approximate such geometries. With curved edges, the mesh fits the geometry exactly, and in some cases the number of triangles decreases. The main task of this work is the study of finite elements on curved triangles with one or several curved edges. The choice of the basis functions is one of the main points for this kind of finite element. We obtained a convergence result under the assumption that the curved triangles are not too deformed in comparison with the associated straight triangles. Furthermore, a code has been written to treat triangles with one, two or three curved edges. Another part of this work deals with the acceleration of transport calculations. Indeed, the problem is solved iteratively and, in some cases, can converge very slowly. A DSA (Diffusion Synthetic Acceleration) method has been implemented using a technique from interior penalty methods. A Fourier analysis in 1D and 2D makes it possible to estimate the acceleration for infinite periodic media, and to check the stability of the numerical scheme when strong heterogeneities exist. (author)
Energy Technology Data Exchange (ETDEWEB)
Gruel, A.
2011-10-24
in order to obtain relevant information on basic nuclear data. The aim of this thesis is the development of a reference calculation scheme, whose uncertainties are clearly identified and quantified, for the interpretation of oscillation measurements. In this document, several methods for calculating these small reactivity effects are presented, based on deterministic and/or stochastic neutronics codes. These methods are compared on a numerical benchmark, allowing their validation against a reference calculation. Three applications are presented here in detail: a purely deterministic method using exact perturbation theory for the qualification of the cross sections of the main fission products in PWRs, within the framework of studies on the estimation of the fuel reactivity loss over the cycle; a hybrid method, based on a stochastic calculation and exact perturbation theory, which provides precise feedback on the basic nuclear data of isotopes, in our case {sup 241}Am; and finally a third method, relying on a perturbative Monte Carlo calculation, used for a design study
Energy Technology Data Exchange (ETDEWEB)
Guillet, T.
2010-09-23
distribution of matter in the Universe. Using numerical simulations, I measured the effect of baryons predicted by these simulations on the power spectrum, the variance and the third moment of the matter distribution. I showed that a halo model, including baryons in the form of a central concentration of matter in the halos, accurately reproduces the variance and the third moment of the density field. Because of the known problems affecting baryons in current cosmological simulations, I developed the model to include ingredients known from observations. I applied this model to the determination of dark-energy parameters from the Euclid experiment, which will come to fruition in the near future. During this work, I also contributed to the development and extension of the RAMSES code, in particular by developing a parallel self-gravity solver, which brings significant performance gains, especially for the simulation of certain astrophysical configurations such as isolated galaxies and clusters
Metiche, Slimane
The growing demand for poles for power and telecommunications networks has made it necessary to use innovative materials that preserve the environment. Most utility poles in Canada, as throughout the world, are made from traditional materials such as wood, concrete or steel. Manufacturers and researchers have various motivations for considering other solutions, among them the length limitation of wooden poles and the vulnerability of concrete and steel poles to climatic attack. The new composite-material poles are good candidates in this respect; however, their structural behaviour is not known, and in-depth theoretical and experimental studies are needed before their large-scale deployment. An intensive research program comprising several experimental, analytical and numerical projects is under way at the Universite de Sherbrooke to evaluate the short- and long-term behaviour of these new fibre-reinforced polymer (FRP) poles. The present thesis falls within this context: our research aims to evaluate the flexural behaviour of new tapered tubular poles made of composite materials by filament winding, through a theoretical study as well as a series of full-scale bending tests, in order to understand the structural behaviour of these poles, optimize their design and propose a design procedure for users. The FRP poles studied in this thesis are made with an epoxy resin reinforced with E-glass fibres. Each pole type consists mainly of three zones in which the geometric properties (thickness, diameter) and the mechanical properties differ from one zone to another. The difference
Lavoie, Andre
Our main objective is to develop a simulation model of water-atmosphere radiative transfer adapted to observations made by the Landsat Thematic Mapper (TM) sensor. The information we seek relates to the coastal marine environment and concerns the elements in suspension in the water. Our work primarily targets the multiband TM images in the visible part of the spectrum. As the basis of the simulation, we use an atmospheric simulation program, the 6S code, onto which we graft a model simulating radiative transfer in the water body. The latter estimates the signal as a function of four components: water, chlorophyll pigments (chlorophyll and phaeopigments), mineral matter and dissolved organic substances. The concentration of the various components serves as an input parameter to define the optical behaviour of the water body. The model also makes it possible to simulate a stratified water body if the component concentrations in the different layers are known. It likewise includes the contribution of the bottom, according to its nature and composition, as well as that of sun and sky glint at the water surface. Data from a water sampling campaign synchronized with the satellite pass over the Baie des Chaleurs, a map of the algal cover and a bathymetric model were used to set the model's simulation parameters. The comparison shows that the model behaves relatively well, especially in band TM2. A systematic error of 2 digital values on average remains in the three spectral bands. The results show that the visibility of the bottom at shallow depths is a very important element to consider. Moreover, the sensitivity analysis shows that the TM images are more sensitive to mineral-matter concentrations than to chlorophyll pigments and to substances
Energy Technology Data Exchange (ETDEWEB)
Rouquerol, F. [Commissariat a l'Energie Atomique, Saclay (France). Centre d'Etudes Nucleaires]
1965-12-01
particularly in the case of microporous solids) for determining the volume of gas adsorbed in a monomolecular layer, and that nitrogen is sensitive to chemical or electrical interactions with the adsorbent: it should be replaced by argon. The methods for calculating the pore size distribution are analysed and discussed. Our experimental results showed that the thickness of the multimolecular layer must be calculated from the number of layers given by Shull and a thickness of 3.6 angstrom per layer. Finally, we propose a new method for analysing the desorption branch. We show that certain non-porous lamellar systems, such as Be(OH){sub 2}, give a hysteresis on a type I or II adsorption isotherm. Based on the numerical results provided by our method, as well as on observations obtained by electron microscopy, we conclude that this hysteresis is due to the lack of rigidity of the solid. Conversely, we have characterized porous solids that give rise to no hysteresis phenomenon; this is the case for glucina samples of microporous texture (r < 20 A). The preceding conclusions allow us to describe the evolution of texture undergone by two series of samples (glucina and alumina) during their progressive dehydration (heat treatment from 150 to 1100 C). (author)
Thieulot, Cedric
2016-04-01
tectonic models. Geophysical Journal International, 120(1), 1-23. Kronbichler, M., Heister, T., & Bangerth, W. (2012). High accuracy mantle convection simulation through modern numerical methods. Geophysical Journal International, 191(1), 12-29. Le Pourhiet, L., Huet, B., May, D. A., Labrousse, L., & Jolivet, L. (2012). Kinematic interpretation of the 3D shapes of metamorphic core complexes. Geochemistry, Geophysics, Geosystems, 13(9). May, D. A., Brown, J., & Le Pourhiet, L. (2015). A scalable, matrix-free multigrid preconditioner for finite element discretizations of heterogeneous Stokes flow. Computer Methods in Applied Mechanics and Engineering, 290, 496-523. Thieulot, C. (2011). FANTOM: Two- and three-dimensional numerical modelling of creeping flows for the solution of geological problems. Physics of the Earth and Planetary Interiors, 188(1), 47-68. Zhong, S., Zuber, M. T., Moresi, L., & Gurnis, M. (2000). Role of temperature-dependent viscosity and surface plates in spherical shell models of mantle convection. Journal of Geophysical Research: Solid Earth (1978-2012), 105(B5), 11063-11082.
Bendaoud, Adlane Larbi
In the theoretical part, a model treating the thermal, hydrodynamic and mass-transfer aspects was developed. On the basis of this model, a computer program was written in FORTRAN 6.6. It is based on discretizing the coil into control volumes, is fully automated, and can handle heat exchangers with complex refrigerant circuits that may have multiple inlets and outlets as well as branchings. The simultaneous presence of the three thermodynamic phases of the refrigerant (subcooled liquid, saturated fluid, superheated vapour) in the coil is also handled. The model was validated for operation with and without frost formation using experimental data available in the literature and data obtained on the CanmetENERGY test bench. The latter was updated for the needs of the present research; to that end, a system for superheating and injecting water vapour into a very-low-temperature chamber was sized, built and installed. A device for visualizing frost formation, as well as equipment for measuring the temperature, pressure and relative humidity of the air, were also added. Once the model was validated, numerical simulations of the coil with and without frost formation were performed. A first base case served as a reference for other cases, for which a parametric study of the geometry and operation was conducted. Relative to the base case, it was shown that: 1. reducing the fin density on specific rows of the coil gives a larger minimum free-flow area (Amin), thus delaying complete blockage of the coil by frost, which allows a longer operating time and a lower defrosting frequency; 2. a good refrigerant circuit configuration increases the coil operating time by 200% and delivers a power
Energy Technology Data Exchange (ETDEWEB)
Wang, H.
2011-10-24
In the X-ray computed tomography (CT) system, one seeks to reconstruct a high-quality image from a small number of projections. Classical algorithms are not suited to this situation: the reconstruction is perturbed by artifacts and hence unstable. A new approach based on the recent theory of Compressed Sensing (CS) assumes that the unknown image is 'sparse' or 'compressible', and formulates the reconstruction as an optimization problem (minimization of the TV/l1 norm) in order to promote sparsity. Applying CS to CT with the pixel (or the voxel in 3D) as the representation basis requires a sparsifying transform, which must furthermore be combined with the 'X-ray projector' acting on a pixelized image. In this thesis, we adapted a radial basis of the Gaussian family, named 'blob', to CS-based CT reconstruction. The blob has better space-frequency localization than the pixel, and operations such as the X-ray transform can be evaluated analytically and are easily parallelizable (on the GPU platform). Compared with the classical Kaiser-Bessel blob, the new basis has a multiscale structure: an image is the sum of translations and dilations of a radial Mexican hat. Typical medical images are compressible in this basis, so the sparse-representation system involved in ordinary CS algorithms is no longer necessary. 2D numerical simulations showed that, compared with the equivalent approach based on the pixel or wavelet basis, the existing TV and l1 algorithms are more efficient and the reconstructions have better visual quality. This new approach was also validated on two-dimensional experimental data, where we observed that the number of projections can be reduced
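The sparsity-promoting reconstruction described above can be illustrated with the simplest l1 solver, iterative soft-thresholding (ISTA), on a toy compressed-sensing problem. This is a stand-in sketch only: the thesis uses TV/l1 solvers with a blob basis and an X-ray projector, none of which appear here, and the function name and toy data are assumptions:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=1000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative
    soft-thresholding (ISTA), a basic sparsity-promoting solver."""
    A = np.asarray(A, float)
    y = np.asarray(y, float)
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - y))                        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
    return x

# Toy problem: recover a 2-sparse vector in dimension 20 from 14
# noise-free random projections (far fewer "projections" than unknowns).
rng = np.random.default_rng(1)
A = rng.normal(size=(14, 20)) / np.sqrt(14)
x_true = np.zeros(20)
x_true[3], x_true[11] = 2.0, -1.5
x_hat = ista(A, A @ x_true, lam=0.005, n_iter=3000)
print(int(np.argmax(np.abs(x_hat))))  # index of the dominant recovered entry
```

The few-view CT problem has the same structure, with the X-ray projector in place of the random matrix and TV or a blob-domain l1 norm in place of the plain l1 penalty.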
Energy Technology Data Exchange (ETDEWEB)
Carlier, A. [Commissariat a l'Energie Atomique, Fontenay-aux-Roses (France). Centre d'Etudes Nucleaires]
1964-02-15
be taken in some cases. A detailed alphabetical index is intended to help find in this publication the definitions, formulae or theories that most interest the reader. (author) After recalling what a deposit of economic value is, on the basis of marginalism, the author distinguishes several categories of reserves according to the degree of knowledge of the deposit and the mining stage at which the ore is considered. He rejects the old categories 'in sight', 'probable' and 'possible', replacing them with better-suited categories. Reserves sensu stricto are those for which the random estimation error can be calculated. A notion introduced in this connection is the natural contrast of grades in a deposit (absolute dispersion coefficient {alpha}). The author distinguishes three forms of deposit exploration: the bad, the good and the ideal. The first is the haphazard exploration too often encountered; the second is logical exploration based on a systematic layout of galleries, boreholes, etc.; the third, difficult to achieve, is the one that minimizes exploration expenditure for a precision fixed in advance. Part of the work deals with sampling errors, such as those resulting from quartering a lot (Pierre Gy's theory) or those arising from the use of radioactivity to estimate grades. Another part deals with extension errors (assimilating the deposit to its samples) and gives the essential formulae for calculating these random errors (Matheron's geostatistics). Concerning the estimator itself, the disharmony between a sample and its zone of influence is noted, and the means of remedying this discrepancy by 'kriging' is provided in the work. The thesis gives numerous examples of the various numerical parameters
Energy Technology Data Exchange (ETDEWEB)
Fournier, D.
2011-10-10
comparison of these two estimators is carried out on benchmarks whose exact solution is known thanks to the method of manufactured solutions. The behaviour of the estimators can thus be analysed with respect to the regularity of the solution. Based on this study, an hp-refinement strategy using both estimators is proposed and compared with other methods found in the literature. All comparisons are performed both on simplified cases where the exact solution is known and on realistic cases taken from reactor physics. These adaptive methods considerably reduce the memory footprint and the computation time. To try to improve these two aspects further, we propose to use a different mesh for each energy group. Indeed, since the spatial shape of the flux depends strongly on the energy domain, there is a priori no reason to use the same spatial decomposition. Such an approach requires the initial estimators to be modified so as to account for the coupling between the different energies. This coupling is studied theoretically, and numerical solutions are proposed and then tested
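The estimator-driven refinement loop described above can be caricatured in one dimension: a manufactured solution with a steep front plays the role of the known exact solution, a per-element error indicator is computed against it, and the worst elements are bisected until a tolerance is met. This is only a schematic analogue, not the hp-refinement of the transport solver in the thesis; the test function, tolerance and refinement fraction below are arbitrary choices.

```python
import numpy as np

def u(x):
    """Manufactured 'exact' solution: a steep front that demands local refinement."""
    return np.tanh(20.0 * (x - 0.5))

def element_errors(nodes):
    """Midpoint-rule indicator of the linear-interpolation error on each element."""
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    interp = 0.5 * (u(nodes[:-1]) + u(nodes[1:]))  # linear interpolant at midpoints
    h = np.diff(nodes)
    return np.abs(u(mids) - interp) * np.sqrt(h)

def adapt(nodes, tol=1e-3, max_iter=30):
    """Bisect the worst elements until the estimated global error drops below tol."""
    for _ in range(max_iter):
        err = element_errors(nodes)
        if np.sqrt(np.sum(err ** 2)) < tol:
            break
        # refine the elements carrying the largest share of the indicator
        worst = err >= 0.7 * err.max()
        new_nodes = 0.5 * (nodes[:-1] + nodes[1:])[worst]
        nodes = np.sort(np.concatenate([nodes, new_nodes]))
    return nodes

mesh = adapt(np.linspace(0.0, 1.0, 5))
```

The resulting mesh clusters nodes around the front at x = 0.5, which is the same qualitative behaviour that motivates using a different spatial mesh per energy group: each flux shape gets refinement only where it needs it.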
Directory of Open Access Journals (Sweden)
Burak Yılmaz
2011-05-01
Full Text Available
Aim: Advances in technology and materials are making shade selection, one of the most important procedures of aesthetic dentistry, easier. The VITA Toothguide 3D-Master scale, with its wide shade range, has begun to replace the Vitapan Classical scale. The aim of this study was to evaluate the shade-matching success of clinicians and dental technicians with the 3D-Master scale, which has not yet come into widespread use in our country.
Materials and Methods: Ten shades randomly selected from the Toothguide 3D-Master scale were adapted to models and mounted in phantom heads. Observer groups consisting of 7 dentists who had completed (PhD) or were continuing (PG) doctoral training in prosthetic or restorative dentistry and of 5 dental technicians (TK) performed shade matching with a second 3D-Master scale. The CIE L*a*b* colour difference (ΔE) between the shades selected from the main scale and the shades matched by the observers from the second scale was calculated from spectrophotometer measurements. The ΔE values were classified according to clinical tolerance thresholds. Shade-matching rates were calculated as percentages, and the difference between groups was evaluated statistically with the chi-square test.
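The CIE L*a*b* colour difference used here is the standard CIE76 formula, ΔE*ab = sqrt(ΔL*² + Δa*² + Δb*²). A minimal sketch of the computation and threshold classification follows; the clinical threshold values and the shade-tab readings are illustrative assumptions, not figures from the study.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

def classify(de, perceptible=1.0, acceptable=3.7):
    """Classify a ΔE value against clinical tolerance thresholds.
    The threshold values here are common illustrative choices, not the study's."""
    if de <= perceptible:
        return "imperceptible"
    if de <= acceptable:
        return "clinically acceptable"
    return "clinically unacceptable"

# Hypothetical spectrophotometer readings for a master tab and a matched tab
master = (72.4, 1.8, 18.5)
matched = (71.0, 2.1, 20.0)
de = delta_e_ab(master, matched)
```

Counting, per observer group, the proportion of pairs falling in each class yields exactly the kind of percentage match rates that the study compares with the chi-square test.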
Directory of Open Access Journals (Sweden)
Bourgeois B.
2010-07-01
electrical/EM methods to monitor the injection of supercritical CO2 at 1700 m depth into a saline aquifer of the Paris Basin (the Dogger carbonates). We first demonstrate the theoretical interest of resistivity methods for such monitoring, using the fundamental petrophysical laws of porous sedimentary rocks and assuming that supercritical CO2 is a perfect insulator. Various combinations of sources and sensors are discussed, and it is concluded that the best-performing array consists of a galvanic source (current injected into the ground through a pair of electrodes A and B) and a grid of electric (and possibly magnetic) sensors at the ground surface. Given the depth and the thinness of the reservoir layers, injecting the current at depth is considered in order to increase the current density flowing in the reservoir layer. Since point injection at reservoir depth, in a 'Mise A la Masse' (MAM) configuration, is generally impossible because of the steel casings present in the boreholes, we have studied the possibility of using these very casings as long electrodes distributing the current along the whole borehole. This type of source is called 'LEMAM' (Long Electrode Mise A la Masse) to distinguish it from conventional MAM. Numerical simulations are presented both for the LEMAM array and for the 'rectangle' (RECT) array, which uses point current injection at the ground surface. The geoelectrical model used is based on an area close to the Saint-Martin-de-Bossenay (SMB) oil field, in the south-east of the Paris Basin. The reservoir layer considered in this study is the 'Oolithe Blanche' formation (Dogger), which is 75 m thick and lies 1700 m below the ground surface. In the models presented, the CO2 plume is simplified as
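The petrophysical argument rests on the fact that replacing conductive brine with (nearly) insulating supercritical CO2 raises the formation resistivity. One of the fundamental laws this kind of reasoning usually invokes is Archie's law; the sketch below shows the resistivity contrast it predicts. The porosity, brine resistivity and exponents are illustrative values, not those of the Dogger model in the study.

```python
def archie_resistivity(phi, sw, rw, a=1.0, m=2.0, n=2.0):
    """Formation resistivity from Archie's law: Rt = a * Rw / (phi**m * sw**n).
    a, m, n here are illustrative defaults for a clean, water-wet rock."""
    return a * rw / (phi ** m * sw ** n)

# Hypothetical deep-carbonate parameters: 15 % porosity, 0.05 ohm.m brine
phi, rw = 0.15, 0.05
r_brine = archie_resistivity(phi, sw=1.0, rw=rw)  # fully brine-saturated
r_co2 = archie_resistivity(phi, sw=0.6, rw=rw)    # 40 % CO2 saturation
contrast = r_co2 / r_brine                        # = (1 / 0.6)**n, i.e. about 2.8
```

With n = 2, a 40 % CO2 saturation nearly triples the layer resistivity, which is the contrast that the LEMAM and RECT surveys described above are designed to detect from the surface.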