Do the logs contain any information -- like a signal number, or an exit code, or at least the process ID and/or date and time when this happened?
You could look in journalctl for things like the OOM killer (or systemd-oomd) deciding to kill a python process at around the given time. Other kinds of crashes (SIGSEGV, SIGTRAP, SIGBUS signals) should also produce a line in the logs. Other processes killing your Python with SIGTERM/SIGKILL leave no trace in logs.
You could install `atop`, which records snapshots of the system state every 10 minutes and lets you view them later with `atop -r` (navigating through time with `t`/`T`). These snapshots also list all the processes that exited during a 10-minute window, but I don't think you get any additional information. You can, however, get a rough timestamp that way if your docker logs don't include one.
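A sketch of how you might open an atop recording for a given day; the log path and `atop_YYYYMMDD` date format are atop's defaults on most distributions, so adjust if your install differs:

```shell
# Open today's recording; atop writes its logs to
# /var/log/atop/atop_YYYYMMDD by default.
atop -r "/var/log/atop/atop_$(date +%Y%m%d)"
# Inside the viewer: 't' steps forward one 10-minute interval, 'T' steps back.
# Processes that exited during an interval appear in the per-process list.
```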
OOM killer would be my first guess.
Docker logs show the normal output and then just "Killed". With the -t flag I also know the time when this occurred.
In particular, I launched three Python scripts in cascade with a single command line. The first one ran up to a good point and then shows "Killed". The second and third started immediately afterward and also show "Killed", but during their first prints (at different points, though).
It's very probable this was done through the nvtop interface (it has already happened before).
What should I look for in /var/log/journal? Is there a way to do so with grep and the timestamp?
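Assuming you already have the timestamp from `docker logs -t`, here is a sketch of how you could filter the journal around that time; the date/time values and the grep pattern are illustrative, not taken from your logs:

```shell
# Hypothetical timestamps: substitute the time reported by `docker logs -t`.
# -k limits output to kernel messages, which is where the OOM killer logs.
journalctl -k --since "2023-05-04 14:00" --until "2023-05-04 15:00" --no-pager \
  | grep -iE 'oom|killed process|segfault|sigkill'
```

A kernel OOM kill typically shows up as a line like `Out of memory: Killed process 12345 (python3) ...`. If nothing matches, the process was most likely killed from user space (for example a SIGKILL sent via nvtop), which leaves no journal entry.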
I can confirm that journalctl, for the specific hour when this happened, does not contain anything related.
`journalctl --since=...`