
I am issuing the following commands:

coredumpctl list
Mon 2019-11-18 23:58:19 GMT 19043 1000 1000 31 missing /opt/google/chrome/chrome
Mon 2019-11-18 23:58:19 GMT 19062 1000 1000 31 missing /opt/google/chrome/chrome
Tue 2019-11-19 15:52:55 GMT 22332 1000 1000  6 missing /usr/bin/texstudio

Followed by:

coredumpctl gdb 22332
       Storage: /var/lib/systemd/coredump/core.texstudio.1000.bb1cfb6b67f2423fac681d721ee1ba02.22332.1574178774000000.lz4 (inaccessible)
File "/var/lib/systemd/coredump/core.texstudio.1000.bb1cfb6b67f2423fac681d721ee1ba02.22332.1574178774000000.lz4" is not readable: No such file or directory

This dumps the stack trace, but also gives the two messages above about the storage being inaccessible and the file not being readable or found.

Am I doing something wrong?

  • df -h /var/lib/systemd/coredump/ gives the following:
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/nvme0n1p2  195G  169G   16G  92% /
    Commented Nov 27, 2019 at 15:52

2 Answers


There are two common things that could be going wrong here.

Often a core dump is "inaccessible" because the program ran under a different user ID from yours, so you do not have permission to read it. The quick solution is to run coredumpctl as root, e.g. using sudo coredumpctl.
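For example, to retrieve the texstudio dump from your listing with root privileges (this just combines sudo with the command you already ran, using the same PID from your output):

sudo coredumpctl gdb 22332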

I suspect that is not your problem, though. These core dumps are from user ID 1000, and I would guess your user has ID 1000, since it is the first (and probably only) non-root login created on your system.

Secondly, systemd-coredump has settings in coredump.conf that limit how much disk space it is allowed to use. By default, if you have less than 15% of your disk space free, core dumps will not be saved at all (unless you change this setting).

You can check your available disk space using the command df -h or df -h /var/lib/systemd/coredump/.

(And to see the total size used by stored core dumps, you can run du -sh /var/lib/systemd/coredump/.)

From coredump.conf(5):

MaxUse=, KeepFree=

Enforce limits on the disk space taken up by externally stored core dumps. MaxUse= makes sure that old core dumps are removed as soon as the total disk space taken up by core dumps grows beyond this limit (defaults to 10% of the total disk size). KeepFree= controls how much disk space to keep free at least (defaults to 15% of the total disk size). Note that the disk space used by core dumps might temporarily exceed these limits while core dumps are processed. Note that old core dumps are also removed based on time via systemd-tmpfiles(8). Set either value to 0 to turn off size-based clean-up.
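If you want core dumps to be kept even when your disk is this full, one option is to override those limits in /etc/systemd/coredump.conf. A minimal sketch, with illustrative values (not the defaults) that you would want to adjust for your own disk:

[Coredump]
# Cap the total space used by stored core dumps at 2G
MaxUse=2G
# Require only 1G free instead of 15% of the whole disk
KeepFree=1G

New crashes caught after the change will follow these limits; the entries already marked missing will not come back.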

  • Thanks for the reply. Everything is commented out by default in coredump.conf. You might be right that core dumps are not being generated, because at the moment I have 8% of free disk space. Commented Nov 27, 2019 at 16:41
  • @KirkWalla The original commented-out lines tell you what the default settings are. Of course, if you modified something and later commented it out again, it could get confusing :-) Commented Nov 27, 2019 at 16:43
  • BTW my coredump is only 8.0M on disk. Commented Nov 27, 2019 at 18:00

Note that the coredumpctl list output says missing for each of the dumps, and the final error message was No such file or directory.

I'm guessing your distribution has a regular cron job/systemd timer that cleans up old core dumps (weekly, maybe?). It has removed the actual core dump files, and only a small journal entry telling coredumpctl that such a dump used to exist was left over.
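If you want to check this (a sketch, assuming the clean-up is done by the stock systemd-tmpfiles mechanism rather than a distro-specific cron job), you can look for a tmpfiles.d rule covering the coredump directory and see when the clean-up timer last ran:

grep -r coredump /usr/lib/tmpfiles.d/ /etc/tmpfiles.d/ 2>/dev/null
systemctl list-timers systemd-tmpfiles-clean.timer

If the matching rule has an age field (often something like 3d), dumps older than that are deleted, which would explain list entries whose files have gone missing.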
