
I note that SMBIOS Type 20 would help here, but it is optional as of version 2.5 (2006-09-05, pp. 25, L796, and pp. 131), whereas Types 16, 17 and 19 are mandatory but don't quite help.

Physical Memory Array (Type 16)

There is one of these structures for the entire system, describing what is possible on this board.

Handle 0x1000, DMI type 16, 23 bytes
Physical Memory Array
        Location: System Board Or Motherboard
        Use: System Memory
        Error Correction Type: Multi-bit ECC
        Maximum Capacity: 768 GB
        Error Information Handle: Not Provided
        Number Of Devices: 24

Memory Device (Type 17)

There is one record per DIMM, which tells you the physical DIMMs installed on the board.

Handle 0x1100, DMI type 17, 34 bytes
Memory Device
        Array Handle: 0x1000
        Error Information Handle: Not Provided
        Total Width: 72 bits
        Data Width: 64 bits
        Size: 2048 MB
        Form Factor: DIMM
        Set: 1
        Locator: DIMM_A1
        Bank Locator: Not Specified
        Type: DDR3
        Type Detail: Synchronous Registered (Buffered)
        Speed: 1600 MHz
        Manufacturer: XXXX
        Serial Number: XXXX
        Asset Tag: XXXX
        Part Number: XXXX
        Rank: 1
        Configured Clock Speed: 1333 MHz

Memory Array Mapped Address (Type 19)

There can be multiple of these records, and each record lists a range of physical addresses.

Here is the output with two 2GB sticks:

Handle 0x1300, DMI type 19, 31 bytes
Memory Array Mapped Address
        Starting Address: 0x00000000000
        Ending Address: 0x000CFFFFFFF
        Range Size: 3328 MB
        Physical Array Handle: 0x1000
        Partition Width: 2

Handle 0x1301, DMI type 19, 31 bytes
Memory Array Mapped Address
        Starting Address: 0x00100000000
        Ending Address: 0x0012FFFFFFF
        Range Size: 768 MB
        Physical Array Handle: 0x1000
        Partition Width: 2

And here is the output with four sticks (2×2 GB and 2×4 GB):

Handle 0x1300, DMI type 19, 31 bytes
Memory Array Mapped Address
        Starting Address: 0x00000000000
        Ending Address: 0x000CFFFFFFF
        Range Size: 3328 MB
        Physical Array Handle: 0x1000
        Partition Width: 2

Handle 0x1301, DMI type 19, 31 bytes
Memory Array Mapped Address
        Starting Address: 0x00100000000
        Ending Address: 0x0032FFFFFFF
        Range Size: 8960 MB
        Physical Array Handle: 0x1000
        Partition Width: 2

Note that in the first sample output above there were two 2 GB DIMMs, yet two ranges of 3.3 GB and 0.7 GB. With four DIMMs the system again coalesces the memory array mapped addresses into two chunks: these records simply mirror the e820 map, i.e. the valid physical address ranges of memory.
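
As a quick sanity check of that claim, the Type 19 "Range Size" fields should add back up to the installed capacity (2 × 2048 MB in the first sample). A small sketch, with a here-doc standing in for live `sudo dmidecode -t 19` output:

```shell
#!/bin/sh
# Sum the "Range Size" fields of the Type 19 records; with two 2 GB
# DIMMs the 3328 MB + 768 MB ranges add back up to 4096 MB.
awk -F': ' '/Range Size/ { sub(/ MB$/, "", $2); total += $2 }
            END { print total " MB" }' <<'EOF'
Memory Array Mapped Address
        Starting Address: 0x00000000000
        Ending Address: 0x000CFFFFFFF
        Range Size: 3328 MB
Memory Array Mapped Address
        Starting Address: 0x00100000000
        Ending Address: 0x0012FFFFFFF
        Range Size: 768 MB
EOF
```

The same check on the four-stick sample gives 3328 + 8960 = 12288 MB, matching 2×2 GB + 2×4 GB.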

One or more Type 20 records are tied to exactly one Type 17 memory device, meaning that the device's entire physical range can be known:

Example

$ sudo dmidecode -t 20
# dmidecode 2.12
SMBIOS 2.6 present.

Handle 0x002F, DMI type 20, 19 bytes
Memory Device Mapped Address
        Starting Address: 0x00000000000
        Ending Address: 0x000FFFFFFFF
        Range Size: 4 GB
        Physical Device Handle: 0x002B
        Memory Array Mapped Address Handle: 0x002E
        Partition Row Position: 1

Handle 0x0030, DMI type 20, 19 bytes
Memory Device Mapped Address
        Starting Address: 0x00100000000
        Ending Address: 0x001FFFFFFFF
        Range Size: 4 GB
        Physical Device Handle: 0x002C
        Memory Array Mapped Address Handle: 0x002E
        Partition Row Position: 1

It seems possible to go from an address to a DIMM for EDAC (Error Detection and Correction) purposes, but not from a DIMM to its entire range.
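
The address-to-DIMM direction can be sketched with a little awk over the Type 20 records. Since dmidecode prints fixed-width hex, equal-length address strings compare in the right order as plain strings, so no hex arithmetic is needed. The here-doc stands in for live `sudo dmidecode -t 20` output, and `ADDR` is an arbitrary example address, not taken from any real machine:

```shell
#!/bin/sh
# Find which Type 20 record (and hence which Type 17 device handle)
# covers a given physical address.  Fixed-width uppercase hex strings
# of equal length compare correctly as strings in awk.
ADDR=0x00123456789
awk -v addr="$ADDR" '
    /Starting Address/       { start = $3 }
    /Ending Address/         { end = $3 }
    /Physical Device Handle/ && addr >= start && addr <= end {
        print addr " is on memory device handle " $4
    }' <<'EOF'
Handle 0x002F, DMI type 20, 19 bytes
Memory Device Mapped Address
        Starting Address: 0x00000000000
        Ending Address: 0x000FFFFFFFF
        Physical Device Handle: 0x002B
Handle 0x0030, DMI type 20, 19 bytes
Memory Device Mapped Address
        Starting Address: 0x00100000000
        Ending Address: 0x001FFFFFFFF
        Physical Device Handle: 0x002C
EOF
```

A second `dmidecode -t 17` lookup on the printed handle would then give the Locator (e.g. DIMM_A1).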

Looking at the source code of mcelog, it also uses Type 20 for its decoding.

  • Can you explain your Q further? I don't really follow what you're asking. More details or examples would be a huge plus. 2 tools that I would start w/ are dmidecode and lshw, but I think you're looking for more than what these provide? Commented Jan 6, 2014 at 7:43
  • @slm: lshw uses dmidecode as its code base, and dmidecode -t 20 gives the wanted information. But, as noted, as of version 2.5 of SMBIOS the structure holding this information ("Memory Device Mapped Address", a.k.a. Type 20, or bank location) is optional – thus the Q is whether there is another way to retrieve the same information, i.e. a link between Type 17's Locator value and a physical address range (as optionally provided by Type 20). Commented Jan 6, 2014 at 8:03
  • @Sukminder - thanks. This info should probably just be incorporated into the Q. Since you have a handle on it would you mind? Commented Jan 6, 2014 at 8:18
  • @Sukminder - I added some sample dmidecode -t 20 output, can you explain the type 17's locator value vs. physical addr., type 20? Commented Jan 6, 2014 at 8:26
  • I will assume that you don't work for a 3-letter government agency or have their level of funding. And, if you are there, then you aren't asking on here. For a modern PC/Server/MAC, physical memory ranges are often then mapped to Virtual Ranges, then might get re-mapped by the OS, you might not be able to figure it out. Even then, it might map it into the 640k +Extended Memory of the DOS days. Using a 32-bit OS will likely give you a different answer than a 64-bit OS. What is your end goal? Commented Aug 3, 2016 at 20:17

3 Answers


Which OS are you running? If Linux, how about this command?


grep -i 'System RAM' /proc/iomem 

The first column is the physical address range.

References:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/4/html/Reference_Guide/s2-proc-iomem.html
https://superuser.com/questions/480451/what-kind-of-memory-addresses-are-the-ones-shown-by-proc-ioports-and-proc-iomem


When you have multiple DIMMs, the BIOS may configure them into some interleave. So you might have one 2 GB DIMM covering physical 0G->4G as bytes 0-7 of each 16, skipping 8-15 (i.e. the low 64 bits), and the other 2 GB DIMM covering the same 0G->4G as bytes 8-15, skipping 0-7 (the high 64 bits). Note that I think the interleave granule is actually bigger than that, because with QDR memory the system can issue one address and get 8x 64-bit data cycles, so interleaving in units of 64 bytes would be better.
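
The interleave idea can be illustrated with toy arithmetic: under an assumed 2-way interleave with a 64-byte granule, even granules land on one DIMM and odd granules on the other. The addresses and granule size here are assumptions for illustration, not values read from any firmware:

```shell
#!/bin/sh
# Toy model of a 2-way, 64-byte-granule interleave: the granule index
# (address / 64) alternates between DIMM 0 and DIMM 1.
for a in 0x1000 0x1040; do
    dimm=$(( (a / 64) % 2 ))
    printf '%s -> DIMM %d\n' "$a" "$dimm"
done
```

This is exactly why Type 19/20 ranges alone cannot pin a byte to one DIMM once interleaving is on: both DIMMs own slices of the same range.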

The 3.3G and 0.7G physical arrangement that you see has to do with needing to keep some of the lower 4G open for PCI devices, VGA buffers, classic <1M 8086 legacy regions, etc. This is done by the north bridge. So you get a map like: 0->640K, 1M->3.3G for RAM, then 0.7G for BIOS, PCI, etc. up to 4G, and then 4G->4.7G for the remaining RAM.


The brute-force solution seems to be:

  1. log the memory range of the current configuration
  2. power down, remove the DIMM in question and all DIMMs above it
  3. reboot, review the new configuration.
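
The comparison in step 3 can be sketched as a diff of two RAM-map snapshots. A real run would save /proc/iomem (or the dmidecode -t 19 output) across the reboot; the canned snapshots below are invented examples that just show the shape of the result:

```shell
#!/bin/sh
# Snapshot the RAM map before and after pulling the DIMM, then diff.
# The range that disappears belonged to the removed DIMM(s).
before=/tmp/iomem.before
after=/tmp/iomem.after
cat > "$before" <<'EOF'
00100000-bffdffff : System RAM
100000000-1bfffffff : System RAM
EOF
cat > "$after" <<'EOF'
00100000-bffdffff : System RAM
100000000-13fffffff : System RAM
EOF
diff "$before" "$after" || true   # diff exits 1 when the maps differ
```

As the comment below notes, with interleaving the top range may simply shrink rather than vanish, so this localizes capacity, not necessarily slots.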
  • Not sure that helps... i.e. if you had 6 2GB DIMMs, and remove a pair, your top range is likely just to shrink by 4GB, but that doesn't tell you where they were in the previous case, but I will test this and update. Commented Jan 7, 2014 at 1:49
  • ".. and all DIMMs above it", e.g., if the DIMM in question is in slot 2, also remove the DIMM in slots 3...n Commented Jan 7, 2014 at 16:32
