So you've got a slick DevOps environment that enables you to deploy your Dockerized Spring Boot applications to a Kubernetes cluster?
That's awesome. But now you might be wondering how to check the logs.
Cuz you ain't seein' nothin' on the console with everything happening in a Kubernetes pod.
Fortunately, it's fairly easy to get the logs. You just need a little know-how.
I'll provide that know-how here.
And again: this is for those of you who put your Spring Boot applications in a Docker container and then deploy the container to a Kubernetes cluster in one or more pods.
Get the Normal Stuff Right
First, let's take Kubernetes out of the equation for a moment. Make sure you've got logging working just like you would if you were manually deploying the Spring Boot JAR file to a server.
In other words, get the Old School Logging working first.
For my part, I stick with SLF4J and put together a logback.xml file that looks something like this:
<configuration>

  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <property name="LOG_FILE" value="/etc/careydevelopment/logs/crm" />

  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${LOG_FILE}.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <fileNamePattern>${LOG_FILE}.%d{yyyy-MM-dd}.txt</fileNamePattern>
      <maxHistory>30</maxHistory>
      <totalSizeCap>3GB</totalSizeCap>
    </rollingPolicy>
    <encoder>
      <pattern>%d{HH:mm} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>

  <logger name="org.springframework" level="INFO"/>
  <logger name="org.mongodb.driver" level="INFO"/>
  <logger name="io.netty.util" level="ERROR"/>

  <root level="DEBUG">
    <appender-ref ref="STDOUT" />
    <appender-ref ref="FILE" />
  </root>

</configuration>
Now, it's beyond the scope of this guide to go into all the gory details of logging configuration, but the net effect of the configuration above is that it logs to the console and to a rolling file.
So I've got a couple of ways to check the logs: by looking at the console output for recent stuff or by looking at one of the log files for events that happened in the distant past.
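A quick way to sanity-check all that before Docker or Kubernetes ever enter the picture (the JAR name here is hypothetical; substitute your own build artifact):

```shell
# Run the Spring Boot app directly from the command line
java -jar target/crm-service.jar

# In another terminal, confirm the rolling file appender is writing
tail -f /etc/careydevelopment/logs/crm.log
```

If you see log lines in both places, your logging configuration is solid.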
But how do I do either of those things when my application is running inside a Kubernetes pod?
There Are a Couple of Ways
First, open a shell to wherever you've got Kubernetes deployed. You should be able to use the kubectl command at the command line.
Then, get a list of all your pods:
kubectl get pods
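If you're running a lot of pods, you can narrow the list down with a label selector. (The app=crm-service label below is an assumption; run the --show-labels variant first to see what labels your pods actually carry.)

```shell
# Show the labels attached to each pod
kubectl get pods --show-labels

# Then filter on one of them (assumes an "app" label exists)
kubectl get pods -l app=crm-service
```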
That gives me the following output:
NAME                                         READY   STATUS    RESTARTS   AGE
crm-service-648d7f4f95-mvm2p                 1/1     Running   0          138m
ecosystem-customer-service-c988f87b5-x66gb   1/1     Running   0          9d
ecosystem-email-service-549b5657db-ns7c2     1/1     Running   0          9d
ecosystem-geo-service-867ff96498-9swpj       1/1     Running   0          9d
ecosystem-product-service-f6696b4f5-5fcdq    1/1     Running   0          9d
ecosystem-user-service-7f6d6fdcc4-642dc      1/1     Running   0          146m
Now let's say I want to view the log of that first pod: crm-service-648d7f4f95-mvm2p.
(Sidebar: your pod might not live in the default namespace. If it doesn't, add -n <namespace> to the kubectl commands in this guide.)
I can just type the following at the command line:
kubectl logs crm-service-648d7f4f95-mvm2p
That gives me console logging output.
Of course, you'll need to update the command above with the name of the pod that you want to check.
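A few variations on kubectl logs are worth knowing about, too. These are standard kubectl flags; swap in your own pod name:

```shell
# Stream new log lines as they arrive (like tail -f)
kubectl logs -f crm-service-648d7f4f95-mvm2p

# Just the last 100 lines
kubectl logs --tail=100 crm-service-648d7f4f95-mvm2p

# Everything from the last hour
kubectl logs --since=1h crm-service-648d7f4f95-mvm2p

# The previous container's logs -- handy after a crash or restart
kubectl logs --previous crm-service-648d7f4f95-mvm2p
```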
But as I said, that's console logging. How can you get to the log files?
Get the Shell in There
If you want to view files inside a running Docker container within a Kubernetes pod, you'll have to get a shell going.
You'll do that with kubectl exec but you'll need to provide a little more info.
Here's the command I'll use:
kubectl exec -it crm-service-648d7f4f95-mvm2p -- sh
The -it option is really shorthand for two options: -i (--stdin) and -t (--tty). The first tells Kubernetes to pass your standard input to the container, and the second tells Kubernetes that stdin is a TTY, or TeleTYpe terminal.
Leave out those two options and you won't have much in the way of interactivity.
After the -it, you'll see the name of the pod again. That's pretty easy to understand.
But then you have to enter a command. That's the thing followed by two dashes.
In my case, I'm going with sh. You might prefer /bin/bash for the Bash shell, if it's installed in your container. (Slim base images like Alpine often ship sh but not bash.)
Once I hit Enter on the command above, I see a new prompt:
/ #
That tells me I'm in the container. Now I can navigate about just as if it were any other operating system.
If you remember from my log configuration file above, my logs are in /etc/careydevelopment/logs. So I can switch there with:
cd /etc/careydevelopment/logs
And now if I do ls -la, I see the following:
drwxrwxrwx    2 root    root        0 May 14 08:52 .
drwxrwxrwx    2 root    root        0 May 13 22:43 ..
-rwxrwxrwx    1 root    root    49842 Jun  1 17:32 crm.log
Ah ha! There's my log file.
At this point I can use vi or whatever editor is installed in the container to view the log file.
vi crm.log
Or I can use cat so I don't have to deal with the awkward vi interface.
cat crm.log
And that will give me the output of the whole log.
By the way, if I want to get out of the shell, I can just type exit at the command line.
And so can you.
Without Going In
But you don't have to go in there and look around if you'd rather stay outside. You can execute a command from the host command line.
Something like this:
kubectl exec -it crm-service-648d7f4f95-mvm2p -- cat /etc/careydevelopment/logs/crm.log
Yep. That'll do it as well.
But that will print out the whole log. That may be more than you're looking for.
If you're a UNIX power user, you can use grep to zero in on exactly what you're looking for.
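For example, something like this (ERROR is just an illustrative search term). Note that you don't need -it for one-shot, non-interactive commands:

```shell
# Print only the lines containing ERROR, without opening a shell
kubectl exec crm-service-648d7f4f95-mvm2p -- grep ERROR /etc/careydevelopment/logs/crm.log

# Or grab just the last 50 lines of the file
kubectl exec crm-service-648d7f4f95-mvm2p -- tail -n 50 /etc/careydevelopment/logs/crm.log
```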
But what if you're not a UNIX power user? In that case, you can copy the log file locally and view it however you see fit.
Here's how I do that:
kubectl cp crm-service-648d7f4f95-mvm2p:/etc/careydevelopment/logs/crm.log ./crm.log
That kubectl cp command does a copy. The first parameter is the file inside the container. It's prefaced with the pod name (possibly needing a namespace as well). The part after the colon is the full path from the root to the file I want to copy.
The next parameter is the directory and name of the file I want locally.
Wrapping It Up
Now you know a few ways to examine Spring Boot logs within a Kubernetes cluster.
Feel free to pick the option that works best for you and run with it. I'll probably offer some more "heavy duty" solutions at some point in the future.
Have fun!
Photo by Khari Hayden from Pexels