1. It runs on both the client and the server. Frederick pointed out a bug to me in a different thread, though: on the server you only run the single server instance (you can't run more than one), but if you just add one connection to it in the server manager on the client, like you'd expect, it'll only use one core for computation on the server. You need to add a connection to the server's IP on the client for each physical and HT core you want it using. I tend toward HT (logical) core count minus one for both the client machine and the server, since I'm stuck on onboard interrupt-driven ethernet for now and it needs a good amount of CPU time (see the sketch after this list for that arithmetic). He's working on getting that fixed, but for now you only have to set up those X connections to your other machine once. If you run the server on more machines, you just add them in the Network window too, one copy of the IP per core for now.
2. Zooming in the window seems to do some remote calculation too; I see the pixels/s figure change in the Network window. Animations are farmed out as well, I think in the same way as single images. You won't be able to put the client to sleep while a render is happening without stopping it, since the client issues work to the render servers continuously, and I'm not sure what the servers do if they suddenly stop receiving data. Actually, I'd recommend you disable sleep entirely. It's buggy with modern GPUs, buggy with lots of processors (it can actually kill some Epyc 7001s, permanently), and if you have any physical HDD in the system, the constant spin-down / spin-up (I'd disable power-saving features on those too) will kill it faster than any amount of data you'll be transferring back and forth to it, short of a heavy-use RAID setup. Even then, I don't let my hardware RAID put the drives to sleep. Components are much better at clocking themselves down to near oblivion these days anyway: a full-tilt TR Pro draws 280 W or a little more, but it idles at 4-6 W; solid-state drives use practically nothing when they're not actively reading or writing; and HDDs are practically all sealed in helium, spinning on as close to a frictionless spindle as you'll find in anything a normal person can buy, so once the motor gets them up to speed it takes almost nothing to keep them there.
3. It's all or nothing, although you can configure the core count down to one on the "client" (maybe zero, I haven't tried it). Changing the server's core count means disconnecting entries in the Network window for now, so I'd plan on using the server at 100%, or however many cores you decide on. I suppose you could set the process affinity to use fewer cores with a remote tool, but that's _way_ more fussing around than I'd bother with; there's a rough sketch of it below anyway. If that machine has something else it needs to do all the time, I'd just keep the server's priority at low and let Windows handle task management.
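To make the connection-count arithmetic from point 1 concrete, here's a trivial sketch; the IP is a placeholder and the entries themselves still get added by hand in the Network window:

```python
# How many connections to add for a render server: one per logical (physical + HT)
# core, minus one left free for the NIC / OS. Run it on the server, or plug in that
# machine's core count. Purely illustrative - the entries are added by hand in the
# Network window, this just prints the list.
import os

server_ip = "192.168.1.50"                # placeholder for the render server's IP
logical_cores = os.cpu_count() or 2       # physical + HT cores
connections = max(1, logical_cores - 1)

print(f"Add {connections} copies of {server_ip} in the Network window:")
for i in range(connections):
    print(f"  connection {i + 1}: {server_ip}")
```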
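And if you really did want to throttle the server without touching the Network window, the affinity / priority fiddling from point 3 would look roughly like this with psutil on the server machine (the executable name is a placeholder for whatever the server process is actually called; the priority constant is Windows-only):

```python
# Rough sketch of throttling the render server by affinity / priority instead of
# reconfiguring connections. Run this on the server machine; the process name is
# a placeholder.
import psutil

SERVER_EXE = "renderserver.exe"   # placeholder name

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and proc.info["name"].lower() == SERVER_EXE:
        proc.cpu_affinity(list(range(8)))        # pin it to the first 8 logical cores
        proc.nice(psutil.IDLE_PRIORITY_CLASS)    # the "low" priority class on Windows
        print(f"Throttled PID {proc.pid}")
```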
I connect a 32-core / 64-thread Zen 3 Threadripper Pro to the 10-core / 20-thread i7-6950X with 20 network connections, and it gives around a 20% performance increase. That lines up pretty closely with the old 10-core chip rendering at about 1/3.25x the TR's speed when it renders on its own. IPC is actually a good amount higher on Zen 3, but the compiler would need to target it to really take advantage. That might give you some idea of the numbers when comparing with more recent processors; if your core counts aren't as far apart, you'll see more relative benefit.
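For the back-of-envelope version of that (my own arithmetic, not something the program reports): if the i7 alone runs at roughly 1/3.25x the TR's speed, the ideal gain from adding it lands in the 20-30% range depending on whether you count extra throughput or time saved, so ~20% measured is in the right neighborhood once network overhead is in the mix.

```python
# Back-of-envelope check (my arithmetic, not measured by the program).
# Assumption: the i7-6950X alone renders at ~1/3.25x the speed of the TR Pro alone.
tr_rate = 1.0               # TR Pro throughput, normalized
i7_rate = 1.0 / 3.25        # ~0.31x of that

combined = tr_rate + i7_rate                   # ideal combined throughput, no overhead
throughput_gain = combined / tr_rate - 1.0     # ~31% more pixels/s in the ideal case
time_saved = 1.0 - tr_rate / combined          # ~24% shorter render time

print(f"ideal throughput gain:       {throughput_gain:.0%}")
print(f"ideal render-time reduction: {time_saved:.0%}")
```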
Keep in mind, if you live somewhere power is expensive, that even though per-core performance is fairly close in this program, the TR shows 210 W of draw out of its 280 W TDP running 63 render threads during a test render. The i7 is clocked quite a bit above its "Turbo Boost 2" clocks, which would normally have limited it to 185 W for short periods; instead it's configured with no time limit on max power draw and the current limit set to "infinite", i.e. let the processor draw whatever it wants within thermal and clock limits, so it's probably pulling quite a bit more than the Threadripper. I'm talking about the i7 6000 / 5000 series (Broadwell-E / Haswell-E era) and below, by the way. After that, TDP and power use stopped having anything to do with one another on Intel, and boost would push power insanely high even without touching clocks, so if you mean a more recent i7 I can't say what it'll do for power. Also, core count is going to help more than pure clock speed on some of the desktop models (after Intel stopped using i7 as branding for up-clocked lower-core-count Xeons), which may only have had a single pipeline that could handle vector FMA or float division, or had other features disabled that would have helped scheduling.
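If you want to put a number on the electricity side, the math is simple enough. The rate, render length, and i7 wattage below are made-up placeholders; only the 210 W figure is what I actually measured on the TR:

```python
# Rough energy-cost estimate for a long render. The $/kWh rate, render length, and
# i7 wattage are placeholders; the 210 W is the TR Pro draw mentioned above.
def render_cost(watts, hours, dollars_per_kwh):
    return watts / 1000.0 * hours * dollars_per_kwh

hours = 10.0      # hypothetical render length
rate = 0.30       # hypothetical electricity price per kWh

print(f"TR Pro @ 210 W:      ${render_cost(210, hours, rate):.2f}")
print(f"i7 @ 250 W (guess):  ${render_cost(250, hours, rate):.2f}")
```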
If you can take the machine off a live internet connection entirely, by blocking its access at your router or whatever, or you just don't care: disabling all of the Spectre / Meltdown mitigations in Windows (MS has instructions) and turning off hardware virtualization in the BIOS gained me roughly 30% performance back doing H.265 encoding on Broadwell-E and Haswell-E, and I'd strongly recommend it.
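For reference, my recollection is that the MS instructions boil down to two registry values; double-check against the current Microsoft article before relying on this, run it as admin, and reboot afterwards:

```python
# The registry override Microsoft documents for the Spectre / Meltdown mitigations,
# from memory - verify against the current MS guidance before using it.
# Run as Administrator; takes effect after a reboot. 3/3 disables, 0/3 re-enables.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "FeatureSettingsOverride", 0, winreg.REG_DWORD, 3)
    winreg.SetValueEx(key, "FeatureSettingsOverrideMask", 0, winreg.REG_DWORD, 3)

print("Override written - reboot for it to take effect.")
```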