I just tested this on my latest animation, and here's the CPU load while rendering the first frame. Those drops happen every few seconds, and the load falls a lot further than before. I have 24 threads, so it would make sense if something single-core is happening:
![6372dd3db0a6f.png](serve/attachment&path=6372dd3db0a6f.png)
So it's not the AA then, which raises the question: what is going on?
I'd be surprised to learn that UF does the motion blur in some kind of post-render step, because the information for it has to come from rendering pixels that are offset in time rather than in space. How does UF sample in time, if at all? Lycium taught me some anti-aliasing techniques this year, and one of the things he mentioned is that motion blur is essentially just AA along the time coordinate. I'm not sure how that would drop rendering to a single core, though.
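I have no idea how UF implements it internally, but the "AA along the time coordinate" idea can be sketched like this. `render_pixel` is a made-up stand-in for the real per-pixel escape-time computation; the point is only that each blur sample re-renders the pixel at a jittered time instead of a jittered position:

```python
import random

def render_pixel(x, y, t):
    # Stand-in for the real fractal iteration; in a real renderer this
    # would be the per-pixel computation evaluated at animation time t.
    return (x + y) * (1.0 + 0.1 * t)

def motion_blurred_pixel(x, y, frame_time, shutter, samples=8):
    """Average samples jittered along the time axis -- the same idea
    as spatial anti-aliasing, applied to the time coordinate."""
    total = 0.0
    for _ in range(samples):
        t = frame_time + (random.random() - 0.5) * shutter
        total += render_pixel(x, y, t)
    return total / samples
```

If it works anything like this, motion blur multiplies the per-pixel work by the sample count but stays embarrassingly parallel, which is why I wouldn't expect it to serialize anything.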
Here's the view per logical CPU core; it's roughly a 4s drop every 12s:
![6372df88a5203.png](serve/attachment&path=6372df88a5203.png)
Here's what it looks like with no motion blur; I still get a 1s drop on all cores every 10s:
![6372dee44d9e3.png](serve/attachment&path=6372dee44d9e3.png)
Just to dig a bit more, I tried the following things:
- Make a non-animated still of the same fractal -> lots of similar drops in CPU load
- Increase iterations from 54 to 5444 -> the drops get significantly smaller
- **Zoom into this last one to eliminate any cheap outside pixels -> the drops disappear**
**This last result holds even if I go back to my animation with motion blur, but zoom in so the view is 100% inside and then increase the iterations.** So this seems to be related to overhead while calculating pixels that don't require many iterations; maybe it's when UF redistributes the pixels among threads as one of them finishes, or something of that sort. I can only guess here.
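To illustrate the guess: here's a toy model of the imbalance I'm imagining. Everything in it is invented (the cost numbers, the chunk size, the greedy scheduler), and I don't know how UF actually splits work across threads. It just shows that a static up-front split leaves threads idle when most pixels are cheap and one region is expensive, while handing out small chunks evens the load out:

```python
# Toy model: cheap "outside" pixels finish fast, so with a static split
# some threads go idle while one thread grinds through the slow region.
# All costs are made-up numbers for illustration.

def makespan_static(costs, n_threads):
    """Each thread gets one contiguous strip of pixels up front."""
    strip = len(costs) // n_threads
    strips = [costs[i * strip:(i + 1) * strip] for i in range(n_threads)]
    return max(sum(s) for s in strips)  # finish time = slowest strip

def makespan_dynamic(costs, n_threads, chunk=4):
    """Threads pull small chunks from a shared queue (greedy model)."""
    chunks = [sum(costs[i:i + chunk]) for i in range(0, len(costs), chunk)]
    loads = [0.0] * n_threads
    for c in chunks:  # each chunk goes to the currently least-loaded thread
        loads[loads.index(min(loads))] += c
    return max(loads)

# Mostly cheap pixels plus one expensive band, echoing the 54 vs 5444
# iteration counts from the tests above:
costs = [54.0] * 96 + [5444.0] * 32
print(makespan_static(costs, 4))   # one thread ends up with all the slow pixels
print(makespan_dynamic(costs, 4))  # small chunks spread the slow band around
```

In this toy, the static split finishes roughly four times later than the dynamic one, and during most of that time only one "thread" has work, which would look exactly like the single-core drops in the graphs.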