Full Record
Library(ies):
Embrapa Agroindústria Tropical.
Record entered:
02/03/2012
Last updated:
16/11/2021
Type of scientific production:
Research and Development Bulletin (Boletim de Pesquisa e Desenvolvimento)
Authors:
LEITAO, R. C.; CLAUDINO, R. L.; BRITO, C. R. F. de; ALEXANDRE, L. C.; CASSALES, A. R.; PINTO, G. A. S.; SANTAELLA, S. T.
Affiliation:
RENATO CARRHA LEITAO, CNPAT; Rayanne Leitão Claudino, UFC; Cristiano Régis Freitas de Brito, UFC; LILIAN CHAYN ALEXANDRE, CNPAT; ANA RIBEIRO CASSALES, CNPAT; GUSTAVO ADOLFO SAAVEDRA PINTO, CNPAT; Sandra Tédde Santaella, UFC.
Title:
Produção do biogás a partir do bagaço de caju [Biogas production from cashew apple bagasse]
Year of publication:
2011
Source/Imprint:
Fortaleza: Embrapa Agroindústria Tropical, 2011.
Pages:
43 p.
Series:
(Boletim de pesquisa e desenvolvimento / Embrapa Agroindústria Tropical, 51).
ISSN:
1679-6543
Language:
Portuguese
Contents:
Problem characterization; Characterization of cashew apple bagasse; Solid-waste biodigesters; Anaerobic degradation of solid fruit and vegetable waste; Substrate; Anaerobic biodegradability test; Anaerobic toxicity test; Methane production potential; Substrate pre-treatments; Laboratory-scale anaerobic reactor; Physico-chemical analyses; Characterization of cashew apple bagasse and inoculum; Specific Methanogenic Activity and Toxicity test; Anaerobic biodegradability test with in natura bagasse; Pre-treatments of cashew apple bagasse; Methane production potential; Laboratory-scale anaerobic reactor.
Keywords:
Anaerobic digestion; Renewable energy; Agro-industrial waste.
Subject category:
--
URL:
https://ainfo.cnptia.embrapa.br/digital/bitstream/item/54985/1/BPD11012.pdf
|
Marc: |
LEADER 01456nam a2200253 a 4500 001 1917336 005 2021-11-16 008 2011 bl uuuu u0uu1 u #d 022 $a1679-6543 100 1 $aLEITAO, R. C. 245 $aProdução do biogás a partir do bagaço de caju$h[electronic resource] 260 $aFortaleza: Embrapa Agroindústria Tropical$c2011 300 $a43 p. 490 $a(Boletim de pesquisa e desenvolvimento / Embrapa Agroindústria Tropical, 51). 520 $aCaracterização do problema; Caracterização do bagaço do caju; Biodigestores de resíduos sólidos; Degradação anaeróbia de resíduos sólidos de frutas e verduras; Substrato; Teste de biodegradabilidade anaeróbia; Teste de toxicidade anaeróbia; Potencial de produção de metano; Pré-tratamentos do substrato; Reator anaeróbio em escala de laboratório; Análises físico-químicas; Caracterização do bagaço do caju e inóculo; Teste de atividade Metanogênica Específica e Toxicidade; Teste de biodegradabilidade anaeróbia com bagaço in natura; Pré-tratamentos no bagaço de caju; Potencial de produção de metano; Reator anaeróbio em escala de laboratório. 653 $aDigestão anaeróbica 653 $aEnergia renovável 653 $aResíduo agroindustrial 700 1 $aCLAUDINO, R. L. 700 1 $aBRITO, C. R. F. de 700 1 $aALEXANDRE, L. C. 700 1 $aCASSALES, A. R. 700 1 $aPINTO, G. A. S. 700 1 $aSANTAELLA, S. T.
Original record:
Embrapa Agroindústria Tropical (CNPAT)
Full Record
Library(ies):
Embrapa Agricultura Digital.
Record entered:
06/12/2011
Last updated:
24/01/2020
Type of scientific production:
Abstract in Conference Proceedings (Resumo em Anais de Congresso)
Authors:
CINTRA, L. C.
Affiliation:
LEANDRO CARRIJO CINTRA, CNPTIA.
Title:
Computational investigations in eukaryotes genome de novo assembly using short reads.
Year of publication:
2011
Source/Imprint:
In: INTERNATIONAL CONFERENCE OF THE BRAZILIAN ASSOCIATION FOR BIOINFORMATICS AND COMPUTATIONAL BIOLOGY, 7.; INTERNATIONAL CONFERENCE OF THE IBEROAMERICAN SOCIETY FOR BIOINFORMATICS, 3., 2011, Florianópolis. Proceedings... Florianópolis: Associação Brasileira de Bioinformática e Biologia Computacional, 2011.
Pages:
Unpaginated.
Language:
English
Notes:
X-MEETING 2011.
Contents:
New technologies in molecular biology have recently improved sequencing throughput enormously, making it possible to generate billions of short reads, totaling gigabases of data per experiment. Sequencing prices are falling rapidly, and experiments that were impossible in the past because of cost are now being executed. Computational methodologies that successfully solved the genome assembly problem for data obtained with the shotgun strategy are now inefficient, and efforts are under way to develop new programs. At present, one established requirement for producing quality assemblies is the use of paired-end reads to virtually increase read length, but many other points remain controversial. The works described in the literature basically follow two strategies: one based on high coverage [1], the other on incremental assembly, using the mate pairs with shorter inserts first [2]. Whichever strategy is used, the computational resources demanded are very high. The present computational solutions for de novo genome assembly all involve building a graph of some kind [3]; because those graphs use whole reads or k-mers as nodes, and the number of reads is very large, the main memory of the computational system becomes a critical resource. Works in the literature corroborate this idea, showing that multiprocessor systems with at least 512 GB of main memory were used in de novo projects on eukaryotes [1,2,3].

As an example and benchmark it is possible to use the Panda project, executed by a research consortium in China, which generated a de novo genome of the giant panda (Ailuropoda melanoleuca). The project initially produced 231 Gb of raw data, reduced to 176 Gb after removal of low-quality and duplicated reads; only 134 Gb were used in the de novo assembly. Those bases were distributed across approximately 3 billion short reads. The assembly generated 200,604 contigs, and 5,701 multi-contig scaffolds were built from 124,336 of them. The N50 was 36,728 bp for contigs and 1.22 Mb for scaffolds.

The present work investigated the computational demands of de novo assembly of eukaryote genomes by reproducing the results of the Panda project. The strategy used was incremental, as implemented in the SOAPdenovo software, which divides the assembly process into four steps: pregraph, to construct the k-mer graph; contig, to eliminate errors and output contigs; map, to map reads onto the contigs; and scaff, to scaffold the contigs. A NUMA (non-uniform memory access) system with eight six-core processors with Hyper-Threading technology and 512 GB of RAM was used, and the consumption of resources such as memory and processor time was recorded for every step of the process. The incremental strategy seems practical and can produce effective results. Work is now in progress on a new methodology that groups short reads together using the concept of entropy; it may yield assemblies of better quality, because it starts from the more informative reads.

References:
[1] Gnerre et al. High-quality draft assemblies of mammalian genomes from massively parallel sequence data. Proceedings of the National Academy of Sciences USA, v. 108, n. 4, p. 1513-1518, 2010.
[2] Li et al. The sequence and de novo assembly of the giant panda genome. Nature, v. 463, p. 311-317, 2010.
[3] Schatz et al. Assembly of large genomes using second-generation sequencing. Genome Research, v. 20, p. 1165-1173, 2010.
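The abstract explains why graph-based assemblers are memory-hungry (every distinct read or k-mer becomes a graph node) and reports assembly quality through the N50 statistic. As an illustrative sketch only, not code from this work and not SOAPdenovo's implementation, the following Python toy builds a minimal de Bruijn-style k-mer graph and computes N50; the reads and names are made up for the example.

```python
from collections import defaultdict

def kmer_graph(reads, k):
    """De Bruijn-style graph sketch: nodes are (k-1)-mers, and each k-mer
    contributes an edge from its prefix to its suffix. On real data every
    distinct k-mer across billions of reads is stored, which is why the
    abstract reports machines with 512 GB of main memory."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])  # prefix (k-1)-mer -> suffix (k-1)-mer
    return graph

def n50(lengths):
    """N50: the length L such that contigs of length >= L together cover
    at least half of the total assembled bases."""
    total = sum(lengths)
    covered = 0
    for length in sorted(lengths, reverse=True):
        covered += length
        if covered * 2 >= total:
            return length
    return 0

# Tiny made-up example: three overlapping 7 bp "reads", k = 4.
reads = ["ATGGCGT", "GGCGTGC", "GTGCAAT"]
graph = kmer_graph(reads, k=4)
print(sorted(graph["TGC"]))               # -> ['GCA']
print(n50([100, 200, 300, 400, 500]))     # -> 400
```

For scale, the Panda project figures quoted in the abstract (N50 of 36,728 bp for contigs and 1.22 Mb for scaffolds, from roughly 3 billion reads) would feed the same `n50` computation with a list of 200,604 contig lengths.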
Keywords:
Bioinformatics; Eukaryote genomes.
Thesagro:
Biologia Molecular; Genoma.
Thesaurus NAL:
Bioinformatics; Eukaryotic cells; Genome; Molecular biology.
Subject category:
--
URL:
https://ainfo.cnptia.embrapa.br/digital/bitstream/item/49471/1/eukariotes.pdf
|
Marc: |
LEADER 04672nam a2200229 a 4500 001 1908741 005 2020-01-24 008 2011 bl uuuu u00u1 u #d 100 1 $aCINTRA, L. C. 245 $aComputational investigations in eukaryotes genome de novo assembly using short reads.$h[electronic resource] 260 $aIn: INTERNATIONAL CONFERENCE OF THE BRAZILIAN ASSOCIATION FOR BIOINFORMATICS AND COMPUTATIONAL BIOLOGY, 7.; INTERNATIONAL CONFERENCE OF THE IBEROAMERICAN SOCIETY FOR BIOINFORMATICS, 3., 2011, Florianópolis. Proceedings... Florianópolis: Associação Brasileira de Bioinformática e Biologia Computacional$c2011 300 $aNão paginado. 500 $aX-MEETING 2011. 520 $aRecently news technologies in molecular biology enormously improved the sequencing data production, making it possible to generate billions of short reads totalizing gibabases of data per experiment. Prices for sequencing are decreasing rapidly and experiments that were impossible in the past because of costs are now being executed. Computational methodologies that were successfully used to solve the genome assembler problem with data obtained by the shotgun strategy, are now inefficient. Efforts are under way to develop new programs. At this moment, a stabilized condition for producing quality assembles is to use paired-end reads to virtually increase the length of reads, but there is a lot of controversy in other points. The works described in literature basically use two strategies: one is based in a high coverage[1] and the other is based in an incremental assembly, using the made pairs with shorter inserts first[2]. Independently of the strategy used the computational resources demanded are actually very high. Basically the present computational solution for the de novo genome assembly involves the generation of a graph of some kind [3], and one because those graphs use as node whole reads or k-mers, and considering that the amount of reads is very expressive; it is possible to infer that the memory resource of the computational system will be very important. 
Works in literature corroborate this idea showing that multiprocessors computational systems with at least 512 Gb of principal memory were used in de novo projects of eukaryotes [1,2,3]. As an example and benchmark source it is possible use the Panda project, which was executed by a research group consortium at China and generated de novo genome of the giant Panda (Ailuropoda melanoleura) . The project initially produced 231 Gb of raw data, which was reduced to 176 Gb after removing low-quality and duplicated reads. In the de novo assembly process just 134 Gb were used. Those bases were distributed in approximately 3 billions short reads. After the assembly, 200604 contigs were generated and 5701 multicontig scaffolds were obtained using 124336 contigs. The N50 was respectively . 36728 bp and 1.22 Mb for contigs and scaffolds. The present work investigated the computational demands of de novo assembly of eukaryotes genomes, reproducing the results of the Panda project. The strategy used was incremental as implemented in the SOAPdenovo software, which basically divides the assembly process in four steps: pre-graph to construction of kmer-graph; contig to eliminate errors and output contigs, map to map reads in the contigs and scaff to scaffold contigs. It used a NUMA (non-uniform memory access) computational system with 8 six-core processors with hyperthread tecnology and 512 Gb of RAM (random access memory), and the consumption of resources as memory and processor time were pointed for every steps in the process. The incremental strategy to solve the problem seems practical and can produce effective results. At this moment a work is in progress which is investigating a new methodology to group the short reads together using the entropy concept. It is possible that assemblies with better quality will be generated, because this methodology initially uses more informative reads. References [1] Gnerre et. 
al.; High-quality draft assemblies of mammalian genomes from massively parallel sequence data, Proceedings of the National Academy of Sciences USA, v. 108, n. 4, p. 1513-1518, 2010 [2] Li et. al.; The sequence and de novo assembly of the giant panda genome, Nature, v. 463, p. 311-317, 2010 [3] Schatz et. al.; Assembly of large genomes using second-generation sequencing, Genome Research, v. 20, p. 1165-1173, 2010 650 $aBioinformatics 650 $aEukaryotic cells 650 $aGenome 650 $aMolecular biology 650 $aBiologia Molecular 650 $aGenoma 653 $aBioinformática 653 $aGenomas de eucariotos
Original record:
Embrapa Agricultura Digital (CNPTIA)