Quick FileMaker Server Stats.log Summaries

When making changes to a FileMaker file or a FileMaker Server’s configuration, it is often useful to know if the changes have helped, or made things worse, or perhaps had no effect at all. FileMaker’s Stats.log generates some data needed to summarize this, with some help from monitoring and analysis tools, or in this case, a Python script.

Optimizing Server Performance

In this most recent case, I was looking at a server with a large, complex file whose cache hit percentage occasionally dropped as low as 97%. Normally, that wouldn’t be low enough to be a concern, but I also noticed disk activity levels that were higher than I had expected.

The server originally had its cache size set to 768 MB, which is already more than the default of 512 MB. This server had plenty of RAM, so I decided to raise the cache size until the cache hit percentage rarely if ever went below 99%. I eventually got that result when I upped it to 1152 MB.

fmsadmin set serverconfig cachesize=1152

This brought me to the next issue:

This server does not report metrics to our Zabbix server, so I had nothing handy that could help me compare stats from before and after making the change. It also wasn’t clear which time periods I should be checking, so there was a lot of manual scrolling through pages of output from the Stats.log file. As I was scrolling through this yet again, it became apparent that I should be able to easily grep out time slices and then sum up the columns in a Python script.

You could argue that I could import this into a FileMaker file. Certainly, if I parsed the timestamps in some way, I could have made something just as good, or better. But I also wanted to be able to check how it was doing on a regular basis, which would’ve meant doing multiple imports into the FileMaker file. The grep and Python approach seemed just a bit better for ad-hoc checks like this.

The result is the Python script below. But first, let’s look at a few examples of how I used it.

Summarizing Stats.log Using Python

Initially, I was just looking at entire days, which is a pretty straightforward thing to select.

cd /opt/FileMaker/FileMaker\ Server/Logs
grep "2025-06-04" Stats.log | ~/fm_sum_stats.py

Later, I wanted to get summaries for smaller blocks of time, and I didn’t want anything before 12 noon to be included. Since we don’t have anything here with an understanding of the timestamps in the log, I had to get a little creative with the grep expression. I took advantage of the fact that all the hours I wanted started with ‘1’ and had a second digit of 2 through 9. For each run, I had to edit the day of month, e.g. changing 03 to 04, 04 to 05, etc. This change also prompted me to use egrep instead of grep for a slightly simpler filter expression.

egrep "2025-06-03 1[2-9]" Stats.log | ~/fm_sum_stats.py

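Editing the day of month by hand for each run gets old quickly. A small shell loop can cover several days in one pass. This is just a sketch, assuming the same log and script locations as above; the file-existence check keeps it from erroring on a machine that doesn’t have the log.

```shell
# Sketch: summarize noon-onward stats for several days in one pass.
# Assumes the log and script locations used above.
logdir="/opt/FileMaker/FileMaker Server/Logs"
for day in 03 04 05 06; do
    echo "=== 2025-06-$day (noon onward) ==="
    if [ -f "$logdir/Stats.log" ]; then
        egrep "2025-06-$day 1[2-9]" "$logdir/Stats.log" | ~/fm_sum_stats.py
    fi
done
```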

So, here is the output for 4 days, with the last day being after I started adjusting the cache size at noon. Note that I didn’t want to display all the columns here in the blog, so I’ve removed a number of them. The Python script uses some abbreviated column names for its output, and I’m not showing the 4 times I executed the script.

Net KB/s In  Net KB Out  Disk KB Read  Disk KB Write  Pro Clients  Remote Calls  Calls In Prog  Elapsed Time  Wait Time  I/O Time
16390        6052498     5690662       2925637        2919         36125         198            14764612      1053809    9997046
26232        5190176     6241914       1912916        2569         130383        209            54305129      2586750    46350114
30494        12818669    14583984      6989155        3634         72413         321            485758007     1737754    67652965
23049        11537641    4570233       3809231        2200         65799         813            616803781     100325922  36369929

From this, just by “eyeballing it”, it does seem likely that, at the very least, the cache changes reduced the disk read activity, and the I/O time also went down, which could be related. But then why are the Disk KB Write and Wait Time values so much higher?

Sometimes more data leads to good questions…which can lead to good answers, with a little bit of work.

Python Script: FileMaker Server Stats.log Summaries

#!/usr/bin/python3

# fm_sum_stats.py
#
# 2025-06-06 simon_b: created

# Usage Example:
#       Summarize all messages from 11am through the 7pm hour on June 6:
#       grep "2025-06-06 1[1-9]" '/opt/FileMaker/FileMaker Server/Logs/Stats.log' | ~/fm_sum_stats.py

import csv, sys

def sum_columns_csv(stats: list) -> list:
    """Calculates the sum of each column in a tab-delimited input.

    Parameters:
        stats: The lines from FMS's Stats.log

    Returns:
        A list containing the sum of each column
    """

    sums = []
    reader = csv.reader(stats, delimiter='\t')
    next(reader, None)  # skip the header 

    for row in reader:
        # Column 0 is the timestamp, so only sum columns 1 and up.
        for i, value in enumerate(row):
            if i > 0:
                try:
                    if len(sums) <= i-1:
                        sums.append(0)
                    sums[i-1] += float(value)
                except ValueError:
                    # Skip non-numeric values
                    pass
    return sums


fm_stats = sys.stdin.readlines()
column_sums = sum_columns_csv(fm_stats)

print ('Net KB/s In	Net KB Out	Disk KB Read	Disk KB Write	Cache Hit %	Cache Unsaved %	Pro Clients	Open Files	xDBC Clients	WebD Clients	CWP Clients	Remote Calls	Calls In Prog	Elapsed Time/call	Wait Time/call	I/O Time/call	Go Clients')
print('\t'.join(map("{:<11.0f} ".format, column_sums)))
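If you want to sanity-check the summing logic away from a live server, a quick run against made-up rows works. This sketch copies the function from the script above and feeds it two fake tab-delimited data lines:

```python
import csv

def sum_columns_csv(stats):
    """Sum each numeric column (after the timestamp) of tab-delimited lines."""
    sums = []
    reader = csv.reader(stats, delimiter='\t')
    next(reader, None)  # skip the header row
    for row in reader:
        for i, value in enumerate(row):
            if i > 0:
                try:
                    if len(sums) <= i - 1:
                        sums.append(0)
                    sums[i - 1] += float(value)
                except ValueError:
                    pass  # skip non-numeric values
    return sums

# Two made-up Stats.log-style rows: timestamp, then two numeric columns.
sample = [
    "Timestamp\tNet KB/s In\tNet KB Out",
    "2025-06-06 12:00:00\t10\t100",
    "2025-06-06 12:00:30\t5\t200",
]
print(sum_columns_csv(sample))  # [15.0, 300.0]
```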

Epilogue

Although these results were not 100% conclusive (yet), I think this script proved its usefulness. It would be really nice if I could have it summarize multiple slices at once, e.g., “summarize the next 5 hours starting at 10am”. Moving some of what grep is doing here into Python would probably help with that, since I anticipate this will become more complex fairly quickly.
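As a sketch of that direction, parsing the leading timestamp in Python would remove the need for hand-built grep patterns entirely. The helper name and arguments below are my own invention for illustration, not part of the script above:

```python
from datetime import datetime

def rows_in_window(lines, start, end, fmt="%Y-%m-%d %H:%M:%S"):
    """Yield tab-delimited log lines whose leading timestamp is in [start, end)."""
    for line in lines:
        stamp = line.split("\t", 1)[0]
        try:
            when = datetime.strptime(stamp, fmt)
        except ValueError:
            continue  # header row or malformed line
        if start <= when < end:
            yield line

# Made-up Stats.log-style lines for illustration.
lines = [
    "Timestamp\tNet KB/s In",
    "2025-06-06 09:59:30\t7",
    "2025-06-06 10:15:00\t9",
    "2025-06-06 14:59:30\t4",
    "2025-06-06 15:00:00\t2",
]
# Select 10am up to (but not including) 3pm.
window = list(rows_in_window(lines,
                             datetime(2025, 6, 6, 10, 0),
                             datetime(2025, 6, 6, 15, 0)))
print(len(window))  # 2 (the 10:15 and 14:59 rows)
```

The selected lines could then be piped straight into sum_columns_csv, replacing the grep stage entirely.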

– Simon

Reference