PS. Using ms there is really the only thing that makes sense. Computers do not operate at the scale of seconds. A second is often an eternity for application code, and it is always an eternity for modern CPUs.
Using milliseconds is the only way to reveal latencies that sit well below a second but that, iterated over and over, add up to a substantial performance hit: a 5 ms call repeated 200 times per request is a full second of latency.
You could convert to human-readable times plus milliseconds, but that just adds overhead to an inherently nerdy feature, and it makes individual lines harder to scan (as it is, each line has a "number" that is almost always distinct except for very low-latency events).
What generally matters is not the specific time at which individual events occurred, but the difference between adjacent lines. In yours, something is obviously badly wrong if those two lines are side by side; I don't need to do any ms conversion math to figure that out.
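
To make that concrete, here is a minimal sketch (Python is an assumption on my part; the trace() helper and its output format are hypothetical, not anything from the original tool) of printing a millisecond offset on every line so the gap between adjacent lines is readable at a glance:

    import time

    _START = time.monotonic()

    def trace(msg):
        # Print a millisecond offset from program start before each message;
        # what you actually read is the gap between adjacent lines.
        print(f"{(time.monotonic() - _START) * 1000:10.3f} ms  {msg}")

    trace("open config")
    time.sleep(0.003)    # a 3 ms step: invisible at second resolution
    time.sleep(0.000)
    trace("parse config")
    time.sleep(0.250)    # a 250 ms step: jumps out when scanning the deltas
    trace("connect to db")

Two adjacent lines 250 ms apart stand out immediately, while a seconds-resolution timestamp would print the same value on both and hide the problem.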