Exit code 137 is triggered when a container in your Kubernetes environment exceeds the amount of memory it has been assigned. Typically, this exit code is accompanied by, or simply known as, OOMKilled. "Killed" here describes the result of the error: the container's process is terminated, and the pod reports the failure. The number itself is not arbitrary: 137 is 128 plus 9, where 9 is the signal number of SIGKILL, the signal the kernel sends when it forcibly kills the process.
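You can see the 128 + 9 convention for yourself without a cluster. This small sketch kills a child shell with SIGKILL, the same signal the OOM killer uses, and reads back the exit status:

```shell
# Kill a child shell with SIGKILL (signal 9); the parent then reads
# the child's exit status, which is 128 + signal number.
sh -c 'kill -KILL $$'
echo "exit status: $?"   # 128 + 9 = 137
```

Any process killed by SIGKILL reports this same status, which is why an OOM-killed container always surfaces as 137.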
The acronym OOM stands for out of memory, and the error is caused by a pod pushing past the memory limit set for it. If you're unsure why your pod terminated, one of the easiest ways to find out is to run `kubectl get pods`, which summarizes the status of your pods. If you see OOMKilled in the status, you know that exit code 137 has been triggered.
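In practice, the check looks like this (assuming `kubectl` is configured against your cluster; `my-app` is a placeholder pod name):

```shell
# List pods with their status; an OOM-killed pod shows OOMKilled
# as its status or last restart reason.
kubectl get pods

# Inspect a specific pod for the exact exit code and reason.
# Substitute your pod's name for the placeholder "my-app".
kubectl describe pod my-app
# Look for a section like:
#   Last State:  Terminated
#     Reason:    OOMKilled
#     Exit Code: 137
```

`kubectl describe` is the more detailed of the two: it shows the exit code, the termination reason, and the container's configured memory limits in one place.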
While OOMKilled is a response you're likely to see within your Kubernetes environment, it isn't actually native to Kubernetes. It comes from the Linux kernel, where the mechanism is known as the OOM killer, and it offers the same warning and response there as it does within Kubernetes.
Typically, when a system is running critically low on memory, the Linux kernel moves through the running processes and decides which of them to kill, scoring each one with an oom_score that reflects roughly how much memory it is using. The processes with the highest scores are the most likely to be terminated, and a container killed this way exits with code 137.
If your Kubernetes ecosystem is returning 'exited with code 137', then you're likely facing a memory problem within the system. While this can be frustrating, it's not the end of the world: it is usually a fairly easy issue to remedy.
As this is a memory-based error, its causes all come down to poor management or use of memory within your Kubernetes ecosystem. Typically, there are a few core causes of OOMKilled in a Kubernetes environment:

- The container's memory limit is set too low for the work it actually does.
- The application has a memory leak, so its usage grows until it hits the limit.
- The node itself runs out of memory because its workloads are overcommitted.
As suggested earlier, exit code 137 is one of the easiest errors to fix, as it all boils down to either reducing the memory your workloads use or increasing the amount of memory they are assigned.
If you're trying to fix exit code 137, try the following three things:

1. Raise the memory limit in the pod's spec so it matches what the workload actually needs.
2. Profile the application and fix memory leaks or other excessive memory use.
3. Add memory to your nodes, or scale out the cluster, so pods aren't competing for too little.
By moving through these three steps, you will most likely have increased the amount of memory available to your system, as well as optimized individual pods to ensure they have enough memory to complete their work without terminating unexpectedly.
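The first step, raising the memory limit, is set in the pod spec. The sketch below is illustrative: the names, image, and values are placeholders to adjust for your workload.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # placeholder name
spec:
  containers:
  - name: my-app
    image: my-app:latest  # placeholder image
    resources:
      requests:
        memory: "256Mi"   # what the scheduler reserves on a node
      limits:
        memory: "512Mi"   # crossing this triggers OOMKilled / exit 137
```

The request influences which node the pod is scheduled on; the limit is the hard ceiling whose breach produces exit code 137, so it is the value to raise when a healthy workload is being killed.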
If you're experiencing exit code 137 within Kubernetes, then there is likely an issue with how your Kubernetes environment is managing memory across its pods, nodes, and containers. As a baseline, Kubernetes examples suggest giving workloads on the order of a few hundred MiB of memory (around 300 MiB), which should be enough for them to function properly.
However, depending on the complexity of your Kubernetes ecosystem, it's always better to have memory headroom. If your system has the capacity, assign a higher amount of memory so that every node can run its workloads without exit code 137 appearing.
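To see how much memory your nodes actually have available to hand out, you can query the cluster directly (again assuming a configured `kubectl`):

```shell
# Show each node's allocatable resources, including memory.
kubectl describe nodes | grep -A 5 "Allocatable"

# With the metrics-server add-on installed, compare against live usage.
kubectl top nodes
```

If allocatable memory is close to the sum of your pods' limits, the node is overcommitted and the OOM killer becomes far more likely to fire.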