
Assume a 1 TiB NVMe SSD. I am wondering whether it's possible to map its entire capacity (1 TiB) into a PCIe BAR for memory-mapped I/O (MMIO).

My understanding is that typically only the device registers and doorbell registers of an NVMe SSD are mapped into PCIe BAR space, allowing MMIO access. Once a doorbell is rung, data transfers occur via DMA between system memory and the NVMe SSD. This makes me wonder whether it is possible to open up the device's limited register/memory window for large-range MMIO. Also, for this post, the NVMe SSD's CMB (Controller Memory Buffer) is excluded.

Given the disparity between the small size of the NVMe SSD's PCIe BAR space and its overall storage capacity, I'm unsure whether the entire SSD can be exposed through a PCIe BAR into the physical address space.

I'm seeking guidance or clarification on my understanding of PCIe, BAR, and NVMe.


Here is an example of a 1 TiB Samsung 980 PRO SSD with only 16 KiB in its PCIe BAR:

# lspci -s 3b:00.0 -v
3b:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO (prog-if 02 [NVM Express])
        Subsystem: Samsung Electronics Co Ltd NVMe SSD Controller PM9A1/PM9A3/980PRO
        Flags: bus master, fast devsel, latency 0, IRQ 116, NUMA node 0, IOMMU group 11
        Memory at b8600000 (64-bit, non-prefetchable) [size=16K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/32 Maskable- 64bit+
        Capabilities: [70] Express Endpoint, MSI 00
        Capabilities: [b0] MSI-X: Enable+ Count=130 Masked-
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [168] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [178] Secondary PCI Express
        Capabilities: [198] Physical Layer 16.0 GT/s <?>
        Capabilities: [1bc] Lane Margining at the Receiver <?>
        Capabilities: [214] Latency Tolerance Reporting
        Capabilities: [21c] L1 PM Substates
        Capabilities: [3a0] Data Link Feature <?>
        Kernel driver in use: nvme
        Kernel modules: nvme
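For reference, here is a minimal user-space sketch of what that 16 KiB BAR gives you. It assumes Linux's sysfs PCI interface, root privileges, and the device address 0000:3b:00.0 from the lspci output above; the register offsets (CAP at 0x00, VS at 0x08, doorbells from 0x1000) are from the NVMe specification. Mapping beyond 16 KiB fails, because the BAR is only 16 KiB:

/* Sketch: map the NVMe controller's 16 KiB BAR0 from userspace and read two
 * registers. Assumes root and the device from the lspci output above; poking
 * a controller that the nvme driver is actively using is risky. */
#include <fcntl.h>
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define BAR_SIZE (16 * 1024)   /* matches [size=16K] in the lspci output */

int main(void)
{
    /* sysfs exposes each BAR as resourceN; this is BAR0 of the SSD above */
    int fd = open("/sys/bus/pci/devices/0000:3b:00.0/resource0",
                  O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    volatile uint8_t *bar = mmap(NULL, BAR_SIZE, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);
    if (bar == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    /* CAP (Controller Capabilities, 64-bit) and VS (Version, 32-bit) */
    uint64_t cap = *(volatile uint64_t *)(bar + 0x00);
    uint32_t vs  = *(volatile uint32_t *)(bar + 0x08);
    printf("CAP=0x%016" PRIx64 " VS=%u.%u\n", cap, vs >> 16, (vs >> 8) & 0xff);

    /* Registers and doorbells (from 0x1000) are all this BAR contains;
     * none of the 1 TiB of flash is addressable here. */
    munmap((void *)bar, BAR_SIZE);
    close(fd);
    return EXIT_SUCCESS;
}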

1 Answer


The 1 TB of NVMe storage space is accessed by block addresses. Directly memory-mapping it into the 64-bit address space, so that it could be reached with MMIO accesses such as memcpy, is not possible.
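To make that concrete, here is a sketch of how the storage side actually appears to software: through the kernel's nvme block driver you transfer whole blocks with pread/pwrite at offsets of LBA × block size; there is no pointer into the flash that memcpy could use. The device path /dev/nvme0n1 and the 512-byte LBA size are assumptions for illustration:

/* Sketch: block-addressed access via the kernel's nvme driver. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    enum { LBA_SIZE = 512 };          /* assumed formatted LBA size */
    unsigned long long lba = 2048;    /* arbitrary example block */
    char buf[LBA_SIZE];

    int fd = open("/dev/nvme0n1", O_RDONLY);   /* assumed device path */
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    /* One block is DMA-transferred into 'buf'; the flash itself is never
     * directly addressable from the CPU. */
    if (pread(fd, buf, LBA_SIZE, (off_t)(lba * LBA_SIZE)) != LBA_SIZE) {
        perror("pread");
        return EXIT_FAILURE;
    }
    printf("first byte of LBA %llu: 0x%02x\n", lba, (unsigned char)buf[0]);
    close(fd);
    return EXIT_SUCCESS;
}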

NVMe is a command-based protocol carried over the PCIe serial link, and it does not provide for MMIO access to the storage. The SSD would need to implement a new interface to allow MMIO access to the blocks. You could, for example, write a driver that masquerades as providing contiguous MMIO access but in the background sets up SGL (scatter-gather list) descriptors and DMA through the NVMe command/completion queues, using polling or interrupts to handle each transfer.

This would be similar to a single-disk RAID 0 on the SSD, except that this RAID 0 would provide pseudo memory access. While the entire 1 TB of the SSD would not be in memory at once, sections could be paged in and out, similar to how the OS uses a paging file. It obviously gets very complicated and is prone to data corruption if not done correctly.
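For illustration only, a rough user-space approximation of this masquerading/demand-paging idea (not a real driver, and skipping eviction, write-back, and signal-safety concerns) can be built with a PROT_NONE mapping and a SIGSEGV handler that pages a block in from the device on first touch. The device path /dev/nvme0n1 and the 2 MiB window size are assumptions:

/* Sketch: pseudo "memory" window demand-paged from the SSD. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define WINDOW_SIZE (2UL * 1024 * 1024)   /* demo-sized pseudo memory */

static uint8_t *window;
static int      disk_fd;
static long     page_size;

static void fault_handler(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    uint8_t *page = (uint8_t *)((uintptr_t)si->si_addr &
                                ~((uintptr_t)page_size - 1));
    if (page < window || page >= window + WINDOW_SIZE)
        _exit(EXIT_FAILURE);              /* a genuine crash, not our window */

    off_t off = page - window;            /* byte offset within the window */

    /* Make the page accessible, then fill it from the matching disk offset;
     * the restarted instruction then sees the data "in memory". */
    if (mprotect(page, page_size, PROT_READ | PROT_WRITE) != 0 ||
        pread(disk_fd, page, page_size, off) != page_size)
        _exit(EXIT_FAILURE);
}

int main(void)
{
    page_size = sysconf(_SC_PAGESIZE);
    disk_fd = open("/dev/nvme0n1", O_RDONLY);   /* assumed device path */
    if (disk_fd < 0) { perror("open"); return EXIT_FAILURE; }

    /* PROT_NONE: every first touch of a page raises SIGSEGV */
    window = mmap(NULL, WINDOW_SIZE, PROT_NONE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (window == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    struct sigaction sa;
    sa.sa_sigaction = fault_handler;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    /* Looks like a plain memory access, but actually triggers a block read */
    printf("byte at offset 4096: 0x%02x\n", window[4096]);
    return EXIT_SUCCESS;
}

The first touch of window[4096] faults, the handler fills that page from the corresponding disk offset, and the retried load sees the data as if it had been ordinary memory; this is the "paged in and out" behavior described above, minus all the hard parts.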

Or just format the SSD and create a very large blank file on it, then open this file and access each byte through an interface that translates the accesses to fread and fwrite, giving the appearance of sequential memory access to the SSD.
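A minimal sketch of that file-based variant, assuming a hypothetical mount point /mnt/ssd and a 1 GiB demo size: mmap(2) of the large file yields byte-granular loads and stores while the kernel's page cache performs the fread/fwrite-style translation in the background:

/* Sketch: a large file on the mounted SSD, accessed like memory via mmap. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define FILE_SIZE (1UL << 30)   /* 1 GiB demo; a larger file works the same */

int main(void)
{
    int fd = open("/mnt/ssd/bigfile", O_RDWR | O_CREAT, 0644);  /* assumed path */
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    /* ftruncate creates a sparse, zero-filled file of the requested size */
    if (ftruncate(fd, FILE_SIZE) != 0) { perror("ftruncate"); return EXIT_FAILURE; }

    char *mem = mmap(NULL, FILE_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

    /* Byte-granular "memory" access; pages are faulted in from and written
     * back to the SSD by the kernel, not by MMIO to the device itself. */
    memcpy(mem + (100UL << 20), "hello", 5);
    printf("%.5s\n", mem + (100UL << 20));

    msync(mem, FILE_SIZE, MS_SYNC);   /* flush dirty pages back to the file */
    munmap(mem, FILE_SIZE);
    close(fd);
    return EXIT_SUCCESS;
}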

But why do all of that?

  • Thanks for your idea of "masquerading". I just want to understand the concept, not actually implement it. Commented Nov 1, 2024 at 15:32
