Timeline for NVMe disk shows 80% io utilization, partitions show 0% io utilization
Current License: CC BY-SA 4.0
16 events
| when | what | by | license | comment |
|---|---|---|---|---|
| May 30, 2019 at 11:22 | vote accept | mike | | |
| May 30, 2019 at 10:38 | answer added | Thomas | | timeline score: 3 |
| May 29, 2019 at 9:02 | history bounty ended | CommunityBot | | |
| May 29, 2019 at 9:02 | history notice removed | CommunityBot | | |
| May 27, 2019 at 16:45 | answer added | John Doe | | timeline score: 0 |
| May 25, 2019 at 17:31 | comment added | mike | | @Thomas the CentOS bug seems to be this issue. A newer kernel will fix it. Feel free to submit an answer; I'll accept it. Thank you for helping out! |
| May 25, 2019 at 13:44 | comment added | Thomas | | This one also looks like your issue. Maybe you can double-check by setting a different scheduler on the NVMe drives? |
| May 25, 2019 at 13:40 | comment added | Thomas | | Sorry, I just saw that you need a login for that link. Basically it says that iostat shows the wrong utilization for NVMe drives configured with the none scheduler, which is the default for NVMe drives. It seems to be a bug that has already been identified. |
| May 25, 2019 at 12:15 | comment added | mike | | @Thomas yes, it's set to none |
| May 25, 2019 at 10:07 | comment added | Thomas | | Do you have the scheduler of the NVMe drives set to none? access.redhat.com/solutions/3901291 |
| May 23, 2019 at 9:24 | comment added | mike | | Just to add: this happens on multiple servers, though all hosted with the same provider. |
| May 21, 2019 at 7:37 | history bounty started | mike | | |
| May 21, 2019 at 7:37 | history notice added | mike | | Draw attention |
| May 8, 2019 at 8:01 | comment added | mike | | @telcoM - seems OK. md2 : active raid1 nvme0n1p2[0] nvme1n1p2[1] and 20478912 blocks [2/2] [UU] |
| May 8, 2019 at 6:30 | comment added | telcoM | | The nvme1n1p2 and nvme0n1p2 are showing activity, and they are the components of your md2 RAID1 device. Perhaps the RAID set is syncing or scrubbing in the background? Look into /proc/mdstat to see the RAID device status. The monitoring results might be a quirk resulting from how the background syncing/scrubbing is implemented in the kernel. If so, it should be basically "soft" workload that is automatically restricted when there are actual user-space disk I/O operations to do. |
| May 7, 2019 at 22:11 | history asked | mike | CC BY-SA 4.0 | |
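
The symptom in the title (whole-disk %util high while the partitions sit at 0%) can be observed with `iostat` extended statistics. A minimal sketch, assuming the `sysstat` package is installed and the disk is named `nvme0n1` (adjust to match `lsblk`):

```bash
# Extended statistics (-x), including %util, for the whole disk and all of
# its partitions (-p), sampled once per second, three samples.
iostat -x -p nvme0n1 1 3
```

With the behaviour discussed in the comments, the `nvme0n1` row reports a high %util while the `nvme0n1p*` rows stay near 0%, which is exactly the mismatch described in the question.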
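
telcoM's May 8 suggestion is to rule out background RAID resync/scrub activity on the mirror. A minimal sketch of that check, assuming the array name `/dev/md2` quoted in the comments and that `mdadm` is installed:

```bash
# Summary of all software-RAID arrays: members, [UU] health flags, and any
# resync/check progress line.
cat /proc/mdstat

# Detailed state of the specific array (State, rebuild status, etc.).
sudo mdadm --detail /dev/md2

# Current sync action for the array: idle, check, resync, recover, ...
cat /sys/block/md2/md/sync_action
```

The `[2/2] [UU]` output quoted in the comments indicates both mirror members are present and healthy with no resync in progress, which is why the investigation moved on to the I/O scheduler.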
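
Thomas's May 25 check, and the linked Red Hat article about `iostat` mis-reporting utilization for NVMe devices using the default `none` scheduler, comes down to reading (and optionally changing) the scheduler through sysfs. A minimal sketch, assuming the device is `nvme0n1` and that the running kernel offers `mq-deadline`; the change below does not persist across reboots:

```bash
# The active scheduler is shown in brackets, e.g. "[none] mq-deadline kyber bfq".
cat /sys/block/nvme0n1/queue/scheduler

# Temporarily switch away from "none" to see whether the reported %util changes.
echo mq-deadline | sudo tee /sys/block/nvme0n1/queue/scheduler

# Re-check utilization afterwards.
iostat -x nvme0n1 1 3
```

Per the comment thread, the inflated %util under the `none` scheduler turned out to be an already-identified kernel accounting bug rather than real load, fixed in newer kernels, and that is what the accepted answer points to.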