Matches in DBpedia 2016-04 for { <http://dbpedia.org/resource/CUDA_Pinned_memory> ?p ?o }
Showing triples 1 to 31 of 31, with 100 triples per page.
- CUDA_Pinned_memory abstract "In the framework of accelerating computational codes by parallel computing on graphics processing units (GPUs), the data to be processed must be transferred from system memory to the graphics card's memory, and the results retrieved from the graphics memory into system memory. In a computational code accelerated by general-purpose GPUs (GPGPUs), such transactions can occur many times and may affect the overall performance, so the problem arises of carrying out those transfers in the fastest way. To allow programmers to use a larger virtual address space than is actually available in RAM, CPUs (or hosts, in the language of GPGPU) implement a virtual memory system (non-locked, or pageable, memory) in which a physical memory page can be swapped out to disk. When the host needs that page, it loads it back in from the disk. The drawback of using non-locked memory for CPU⟷GPU transfers is that the memory transactions are slower, i.e., the bandwidth of the PCI-E bus connecting CPU and GPU is not fully exploited. Non-locked memory is not guaranteed to reside in RAM (e.g. it can be in swap), so the driver needs to access every single page of the non-locked memory, copy it into a pinned buffer and pass it to the Direct Memory Access (DMA) engine (a synchronous, page-by-page copy). Indeed, PCI-E transfers occur only via DMA. Accordingly, when a “normal” transfer is issued, the driver must allocate a block of page-locked memory, copy the data from regular memory into it, perform the transfer, wait for the transfer to complete, and free the page-locked memory. This consumes precious host time, which is avoided when directly using page-locked memory. However, with today's memory sizes, virtual memory is no longer necessary for many applications that fit within the host memory space.
In all those cases, it is more convenient to use page-locked (pinned) memory, which enables a DMA engine on the GPU to request transfers to and from the host memory without the involvement of the CPU. In other words, locked memory is stored in physical memory (RAM), so the GPU (or device, in the language of GPGPU) can fetch it without the help of the host (synchronous copy). GPU memory is automatically allocated as page-locked, since GPU memory does not support swapping to disk. To allocate page-locked memory on the host in CUDA, one can use cudaHostAlloc.".
- CUDA_Pinned_memory wikiPageExternalLink pinned_tradeoff.html.
- CUDA_Pinned_memory wikiPageExternalLink ?p=443.
- CUDA_Pinned_memory wikiPageID "39518376".
- CUDA_Pinned_memory wikiPageLength "3723".
- CUDA_Pinned_memory wikiPageOutDegree "11".
- CUDA_Pinned_memory wikiPageRevisionID "698793649".
- CUDA_Pinned_memory wikiPageWikiLink CUDA.
- CUDA_Pinned_memory wikiPageWikiLink Category:GPGPU.
- CUDA_Pinned_memory wikiPageWikiLink Category:Graphics_hardware.
- CUDA_Pinned_memory wikiPageWikiLink Category:Nvidia.
- CUDA_Pinned_memory wikiPageWikiLink Direct_memory_access.
- CUDA_Pinned_memory wikiPageWikiLink General-purpose_computing_on_graphics_processing_units.
- CUDA_Pinned_memory wikiPageWikiLink Graphics_processing_unit.
- CUDA_Pinned_memory wikiPageWikiLink PCI_Express.
- CUDA_Pinned_memory wikiPageWikiLink Parallel_computing.
- CUDA_Pinned_memory wikiPageWikiLink Random-access_memory.
- CUDA_Pinned_memory wikiPageWikiLink Virtual_memory.
- CUDA_Pinned_memory wikiPageWikiLinkText "CUDA Pinned memory".
- CUDA_Pinned_memory wikiPageUsesTemplate Template:Lead_missing.
- CUDA_Pinned_memory wikiPageUsesTemplate Template:Reflist.
- CUDA_Pinned_memory subject Category:GPGPU.
- CUDA_Pinned_memory subject Category:Graphics_hardware.
- CUDA_Pinned_memory subject Category:Nvidia.
- CUDA_Pinned_memory comment "In the framework of accelerating computational codes by parallel computing on graphics processing units (GPUs), the data to be processed must be transferred from system memory to the graphics card's memory, and the results retrieved from the graphics memory into system memory.".
- CUDA_Pinned_memory label "CUDA Pinned memory".
- CUDA_Pinned_memory sameAs Q16927914.
- CUDA_Pinned_memory sameAs m.0vxcjyd.
- CUDA_Pinned_memory wasDerivedFrom CUDA_Pinned_memory?oldid=698793649.
- CUDA_Pinned_memory isPrimaryTopicOf CUDA_Pinned_memory.
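The allocation-and-transfer pattern described in the abstract can be sketched with the CUDA runtime API as follows (a minimal sketch; error checking is omitted for brevity, and running it requires an NVIDIA GPU and the CUDA toolkit):

```cuda
#include <cuda_runtime.h>

int main() {
    const size_t n = 1 << 20;
    float *h_pinned = nullptr;   // page-locked (pinned) host buffer
    float *d_buf = nullptr;      // device buffer

    // Allocate page-locked host memory: the driver can hand this region
    // straight to the DMA engine, avoiding the intermediate staging copy
    // that pageable ("normal") memory would require.
    cudaHostAlloc((void **)&h_pinned, n * sizeof(float), cudaHostAllocDefault);
    cudaMalloc((void **)&d_buf, n * sizeof(float));

    for (size_t i = 0; i < n; ++i) h_pinned[i] = 1.0f;

    // Because the source is pinned, this transfer could also be issued with
    // cudaMemcpyAsync and overlapped with kernel execution on a stream.
    cudaMemcpy(d_buf, h_pinned, n * sizeof(float), cudaMemcpyHostToDevice);

    cudaFree(d_buf);
    cudaFreeHost(h_pinned);      // pinned memory must be freed with cudaFreeHost
    return 0;
}
```

cudaMallocHost is an equivalent allocation call; cudaHostAlloc additionally accepts flags (e.g. cudaHostAllocMapped) that change how the pinned region is exposed to the device.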