Manuel Reimer | 17 Feb 21:27 2014

Problems with systemd-coredump

Hello,

If a larger application crashes and dumps core, systemd-coredump seems
to have a few problems handling it.

First, there is the 767 MB limit, which simply "drops" all larger
core dumps.

But even well below this limit it seems to be impossible to store core
dumps. After a few tries I found that, with the default configuration,
the effective limit seems to be about 130 MB. Larger core dumps are
silently dropped, and I cannot find any error logged anywhere.
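
In case it helps, this is roughly how I searched for the dumps and for
error messages; nothing shows up for the large crashes (commands as on
my machine):

  # list core dumps known to the journal
  systemd-coredumpctl list

  # look for errors from the coredump helper and from journald
  journalctl -b _COMM=systemd-coredump
  journalctl -b -u systemd-journald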

It seems possible to work around this by increasing SystemMaxFileSize
to 1000M. With this change, larger core dumps can be stored, but that
causes another problem.
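
For reference, the whole change is one line in /etc/systemd/journald.conf
(a sketch of my setup; everything else is left at the defaults):

  # /etc/systemd/journald.conf
  [Journal]
  SystemMaxFileSize=1000M

followed by "systemctl restart systemd-journald" to apply it.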

As soon as a larger core dump (about 500 MB) is to be stored, the whole
system slows down significantly. Storing that much data seems to take
quite long and is very CPU-hungry...

Can someone please shed some light on this? Maybe it is a bad idea to
store such large amounts of data in the journal? If so, what is the
solution? Will journald get improvements in this area?

Thank you very much in advance.

Greetings,

Manuel

Jan Alexander Steffens | 17 Feb 21:43 2014

Re: Problems with systemd-coredump

On Mon, Feb 17, 2014 at 9:27 PM, Manuel Reimer
<Manuel.Spam <at> nurfuerspam.de> wrote:
> Hello,
>
> If a larger application crashes and dumps core, systemd-coredump seems
> to have a few problems handling it.
>
> First, there is the 767 MB limit, which simply "drops" all larger
> core dumps.
>
> But even well below this limit it seems to be impossible to store core
> dumps. After a few tries I found that, with the default configuration,
> the effective limit seems to be about 130 MB. Larger core dumps are
> silently dropped, and I cannot find any error logged anywhere.
>
> It seems possible to work around this by increasing SystemMaxFileSize
> to 1000M. With this change, larger core dumps can be stored, but that
> causes another problem.
>
> As soon as a larger core dump (about 500 MB) is to be stored, the whole
> system slows down significantly. Storing that much data seems to take
> quite long and is very CPU-hungry...
>
> Can someone please shed some light on this? Maybe it is a bad idea to
> store such large amounts of data in the journal? If so, what is the
> solution? Will journald get improvements in this area?
>
> Thank you very much in advance.
>
> Greetings,

Kay Sievers | 17 Feb 21:46 2014

Re: Problems with systemd-coredump

On Mon, Feb 17, 2014 at 9:43 PM, Jan Alexander Steffens
<jan.steffens <at> gmail.com> wrote:
> On Mon, Feb 17, 2014 at 9:27 PM, Manuel Reimer
> <Manuel.Spam <at> nurfuerspam.de> wrote:
>> Hello,
>>
>> If a larger application crashes and dumps core, systemd-coredump seems
>> to have a few problems handling it.
>>
>> First, there is the 767 MB limit, which simply "drops" all larger
>> core dumps.
>>
>> But even well below this limit it seems to be impossible to store core
>> dumps. After a few tries I found that, with the default configuration,
>> the effective limit seems to be about 130 MB. Larger core dumps are
>> silently dropped, and I cannot find any error logged anywhere.
>>
>> It seems possible to work around this by increasing SystemMaxFileSize
>> to 1000M. With this change, larger core dumps can be stored, but that
>> causes another problem.
>>
>> As soon as a larger core dump (about 500 MB) is to be stored, the whole
>> system slows down significantly. Storing that much data seems to take
>> quite long and is very CPU-hungry...
>>
>> Can someone please shed some light on this? Maybe it is a bad idea to
>> store such large amounts of data in the journal? If so, what is the
>> solution? Will journald get improvements in this area?

> I wish there was a good way to install a system debugger which could
[...]

Thomas Bächler | 18 Feb 11:05 2014

Re: Problems with systemd-coredump

On 17.02.2014 21:27, Manuel Reimer wrote:
> As soon as a larger core dump (about 500 MB) is to be stored, the whole
> system slows down significantly. Storing that much data seems to take
> quite long and is very CPU-hungry...

I completely agree. Since the kernel ignores the maximum core dump size
(RLIMIT_CORE) when core_pattern is set to a pipe, a significant amount
of time passes whenever a larger process crashes, with no benefit (since
the dump never gets saved anywhere).
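
You can see this directly (output from one of my machines; the exact
argument list may differ between systemd versions):

  $ cat /proc/sys/kernel/core_pattern
  |/usr/lib/systemd/systemd-coredump %p %u %g %s %t %e

The leading '|' makes the kernel pipe the dump to the helper, and in
that mode the core size limit is not enforced (only a limit of 0 still
suppresses the dump entirely).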

This is extremely annoying when processes tens or hundreds of gigabytes
in size crash, which sadly has happened to me quite a few times
recently.

Kay Sievers | 18 Feb 12:06 2014

Re: Problems with systemd-coredump

On Tue, Feb 18, 2014 at 11:05 AM, Thomas Bächler <thomas <at> archlinux.org> wrote:
> On 17.02.2014 21:27, Manuel Reimer wrote:
>> As soon as a larger core dump (about 500 MB) is to be stored, the whole
>> system slows down significantly. Storing that much data seems to take
>> quite long and is very CPU-hungry...
>
> I completely agree. Since the kernel ignores the maximum core dump size
> (RLIMIT_CORE) when core_pattern is set to a pipe, a significant amount
> of time passes whenever a larger process crashes, with no benefit (since
> the dump never gets saved anywhere).
>
> This is extremely annoying when processes tens or hundreds of gigabytes
> in size crash, which sadly has happened to me quite a few times
> recently.

It's an incomplete and rather fragile solution the way it works today.
We cannot really *malloc()* the memory for a core dump; it is *piped*
from the kernel for a reason. A dump can be as large as the available
RAM, which is why it is limited to the current maximum size, and
therefore also limited in its usefulness.

It really always needs to be reduced to a minidump before being stored
away. There are no other sensible options when things should end up in
the journal.
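
To illustrate (just a sketch, not what systemd-coredump actually does):
a core_pattern pipe handler only ever sees the dump as a stream on its
stdin, so any size cap has to be enforced while reading, e.g.:

  # hypothetical handler registered as:
  #   kernel.core_pattern = |/usr/local/bin/capped-core %e %p

  #!/bin/sh
  # capped-core: keep at most 128 MiB of the piped dump, drop the rest
  head -c $((128*1024*1024)) > "/var/tmp/core-$1-$2"

The kernel expands %e/%p into arguments and writes the dump to the
handler's stdin; the final size is not known up front.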

Kay

Manuel Reimer | 1 Mar 09:42 2014

Re: Problems with systemd-coredump

On 02/18/2014 11:05 AM, Thomas Bächler wrote:
> On 17.02.2014 21:27, Manuel Reimer wrote:
>> As soon as a larger core dump (about 500 MB) is to be stored, the whole
>> system slows down significantly. Storing that much data seems to take
>> quite long and is very CPU-hungry...
>
> I completely agree. Since the kernel ignores the maximum core dump size
> (RLIMIT_CORE) when core_pattern is set to a pipe, a significant amount
> of time passes whenever a larger process crashes, with no benefit (since
> the dump never gets saved anywhere).
>
> This is extremely annoying when processes tens or hundreds of gigabytes
> in size crash, which sadly has happened to me quite a few times
> recently.

If this feature is broken by design, why is it still enabled by default
on Arch Linux? systemd-coredump makes it nearly impossible to debug
larger processes, and it took me quite some time to figure out how to
get core dumps placed in /var/tmp so I can use them to find out why my
process crashed.
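
For anyone else who needs this, roughly what I ended up with (file name
and pattern are just examples):

  # /etc/sysctl.d/50-coredump.conf -- shadows systemd's default so the
  # kernel writes plain core files and honors "ulimit -c" again
  kernel.core_pattern = /var/tmp/core-%e-%p-%t

  # apply without rebooting, and allow unlimited core size in this shell
  sysctl --system
  ulimit -c unlimited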

Yours

Manuel
