At the very lowest levels, there can be performance improvements for longer-running tasks. This is because metrics are collected as the program runs and then used to optimize execution. For example, if the same path is taken through the potential forks in the task logic often enough, assumptions are eventually made and that path becomes faster. Such a "hot" path can get compiled to machine code (instructions for your specific computer), optimized based on those metrics. Later, if things change (e.g. other paths start being taken and the optimization no longer pays off), it can be undone and the code optimized differently. It's insanely complex, and a lot of very smart people have been working on this system for decades.
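Here's a minimal sketch of what that warmup looks like, assuming a JIT runtime like V8 (Node.js); the function and batch sizes are made up purely for illustration:

```ts
// Timing the same function early vs. late in a run. On a JIT runtime,
// later batches are usually faster because the hot function has been
// compiled and optimized by then.

function sumOfSquares(n: number): number {
  let total = 0;
  for (let i = 0; i < n; i++) total += i * i;
  return total;
}

function timeBatch(label: string, iterations: number): void {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) sumOfSquares(10_000);
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${elapsedMs.toFixed(2)} ms`);
}

timeBatch("cold (interpreted / collecting metrics)", 1_000);
timeBatch("warm (likely JIT-compiled)", 1_000);
```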
At a higher level, a buffer is sometimes grown when its capacity is reached. If it starts small, it may need to grow many times, and each time a new, larger buffer is allocated and the contents are copied from the old buffer to the new one. When you perform a task again, if the buffer from the previous task is reused, it is already large enough and all that allocation and copying doesn't need to happen.
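A rough sketch of that reuse pattern, assuming a task that fills a byte buffer (the names here are invented for illustration):

```ts
// Growing from a small buffer forces an allocate-and-copy step;
// reusing one large buffer across runs skips all of that.

function runTask(buf: Buffer, payload: Buffer): Buffer {
  // Grow (allocate + copy) only if the reused buffer is too small.
  if (buf.length < payload.length) {
    const grown = Buffer.alloc(Math.max(buf.length * 2, payload.length));
    buf.copy(grown);
    buf = grown;
  }
  payload.copy(buf);
  return buf; // hand the (possibly grown) buffer back for the next run
}

const payload = Buffer.alloc(1 << 20); // 1 MiB of work per task
let reused = Buffer.alloc(16);         // starts tiny, grows on first use

reused = runTask(reused, payload);     // first run: allocation + copy
reused = runTask(reused, payload);     // later runs: already big enough
```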
There are a number of other reasons too, but generally it's because some previous effort or previously obtained resource is reused, whether explicitly or automatically by the application runtime/OS/etc. That said, 1-2 minutes is surprising. It will be interesting to see how it goes on v4 (it will be available in a few days!).
In general, JSON export is not fast, efficient, or small. Unless you need to inspect the values in the data, there is no good reason to use JSON. If you do use JSON, be sure to uncheck pretty print, as it is extra slow.
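To see why pretty printing costs extra, compare the compact and indented forms of the same data (a made-up dataset, just for illustration):

```ts
// The indented form is both larger and slower to produce than the
// compact form, since every nesting level adds whitespace.

const rows = Array.from({ length: 10_000 }, (_, i) => ({
  id: i,
  name: `row-${i}`,
  value: Math.random(),
}));

const compact = JSON.stringify(rows);
const pretty = JSON.stringify(rows, null, 2); // pretty print: 2-space indent

console.log(`compact: ${compact.length} bytes`);
console.log(`pretty:  ${pretty.length} bytes`); // noticeably larger
```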