
In a mock exam for my CS course in computer architecture there is the question: "explain why the multicycle datapath is faster than the single cycle datapath."

The TA wrote the answer:

"Because the single-cycle datapath requires a fixed time of 1 clock cycle (still slower than the multi-cycle datapath) whose duration is equal to the execution time of the instruction that takes the longest (lw), making the execution of instructions that require less time inefficient.

It is also less efficient because in the multi-cycle datapath, instructions are instead “split” into several clock cycles (still taking less time than the single cycle). Furthermore, the multi-cycle datapath allows for the reuse of certain components for future operations and is equipped with partial registers to store data that can be reused in the following cycle.

Regarding the reuse of some elements, it should also be noted that in the single-cycle datapath, in order to reuse an element, it must be duplicated, which increases costs."

I think the answer is poorly written, but the main concept seems to be something along the lines of: "the multicycle datapath (MD) is faster than the single-cycle datapath (SD) because the SD has a constant execution time for all instructions (equal to the execution time of the slowest instruction), while the MD breaks execution into multiple steps, thus allowing different execution times for different instructions."
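
To make that concrete for myself, I tried some illustrative numbers (my own, in the spirit of the classic MIPS example in Patterson & Hennessy, not from the exam): suppose lw's critical path in the SD is 800 ps (say 200 ps instruction fetch + 100 ps register read + 200 ps ALU + 200 ps data memory + 100 ps register write), so the SD clock is 800 ps and every instruction takes 800 ps. If the MD clock is set by the slowest step, 200 ps, then lw takes 5 cycles = 1000 ps, an R-type takes 4 cycles = 800 ps, and beq takes 3 cycles = 600 ps, so most instructions finish sooner even though lw itself becomes slightly slower.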

What I don't understand is why this answer is sufficient. It seems to me that it relies on a hidden assumption: that the slowest instruction takes the same time (or at least very close to it) in the SD and in the MD.

To help my understanding, I imagined an edge case: let t1 be the execution time of the slowest instruction in the single-cycle datapath, and let t2 be the execution time of the slowest instruction in the multicycle datapath. Suppose t2 = 10*t1 (we can suppose this precisely because we are not making the hidden assumption).

It is easy to verify that, given these numbers, there is no instruction that would run faster in the MD (we can discuss this in the comments if there is any disagreement).
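
Here is a minimal sketch of that check (Python, my own illustration; the cycle counts are the usual ones for the MIPS multi-cycle design — beq 3, R-type/sw 4, lw 5 — and are an assumption here, not something given in the exam):

    # Edge case: t2 (slowest instruction in MD) = 10 * t1 (slowest instruction in SD).
    t1 = 1.0                      # SD clock period = time of the slowest instruction (lw)
    t2 = 10 * t1                  # assumed MD time of the slowest instruction (lw)

    lw_cycles = 5                 # lw takes 5 cycles in the classic MIPS multi-cycle datapath
    md_cycle = t2 / lw_cycles     # MD clock period implied by the edge case (= 2 * t1)

    for name, cycles in [("beq", 3), ("R-type", 4), ("sw", 4), ("lw", 5)]:
        t_md = cycles * md_cycle
        print(f"{name}: MD = {t_md:.0f}*t1 vs SD = 1*t1 -> faster in MD? {t_md < t1}")

    # Even the shortest instruction needs 3 * 2*t1 = 6*t1 > t1, so nothing runs faster in the MD.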

Therefore, is the "hidden assumption" that the slowest instruction time is the same (or at least very close) in SD and MD always true? If yes, why?

N.B. I don't know whether it is relevant to the answer, but the reference processor for the course is MIPS32.

Please, if possible, reference your answer with authoritative sources.

I am sorry for the verbosity and thank you very much for your help!
