But if the calculation involves many turns of the handle, you can get full throughput at the expense of latency (not such a problem if you're waiting for hours anyway). Some processors work on the chunks from step 2 while others are still convolving the results from step 1, and so on: maybe you don't get the result from step 1 until you've started on step 500, but as long as all the processors stay busy the whole time, the spice will then flow as fast as it would from a single super-processor.
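A minimal sketch of that pipelining idea in Python, with queues standing in for the links between processors. The stage functions and names here are illustrative, not anything from the original discussion: each thread plays one "processor," and once the pipeline fills, every stage is busy on a different chunk at the same time.

```python
import queue
import threading

# Hypothetical 3-stage pipeline; each stage is a toy transform on a chunk.
def stage_a(x): return x + 1
def stage_b(x): return x * 2
def stage_c(x): return x - 3

def run_stage(fn, inbox, outbox):
    # Pull chunks, transform them, pass them downstream.
    while True:
        item = inbox.get()
        if item is None:          # sentinel: shut this stage down
            outbox.put(None)
            return
        outbox.put(fn(item))

q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
stages = [(stage_a, q0, q1), (stage_b, q1, q2), (stage_c, q2, q3)]
threads = [threading.Thread(target=run_stage, args=s) for s in stages]
for t in threads:
    t.start()

# Feed 500 chunks; early results come out while later chunks are still queued,
# so throughput stays high even though each chunk's latency is 3 stages deep.
for i in range(500):
    q0.put(i)
q0.put(None)

results = []
while (r := q3.get()) is not None:
    results.append(r)

for t in threads:
    t.join()

print(results[:3])  # ((i + 1) * 2) - 3 for the first few chunks
```

The per-chunk latency is the full depth of the pipeline, but once it's primed, a finished result pops out for every chunk fed in, which is the throughput-for-latency trade described above.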
The bubbles in VR, cryptocurrency, and machine learning are all part of the parallel computing bubble.