Kevin Simonson
Guest
Most sorting algorithms I've noticed seem to have an interface somewhat like this:
void someAlgorithm ( elemType[] elements);
So to implement this algorithm an application needs to fill the {elements} array, call {someAlgorithm()}, and then read out the (sorted) elements of {elements}. For an {elements} array that contains n {elemType}s, how long does it take to fill that array? Is the most efficient way to fill it a loop like this:
elemType[] elems = new elemType[ n];
int ix;
for (ix = 0; ix < n; ix++)
{ elems[ ix] = produceElem();
}
someAlgorithm( elems);
and then is the most efficient way to read the results out of the array something like this:
for (ix = 0; ix < n; ix++)
{ consumeElem( elems[ ix]);
}
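To make the pattern concrete, here's a self-contained sketch of the same fill / sort / read sequence, with {int} standing in for {elemType}, a random-number generator standing in for {produceElem()}, {Arrays.sort()} standing in for {someAlgorithm()}, and a running sum standing in for {consumeElem()}:

import java.util.Arrays;
import java.util.Random;

public class FillSortRead
{
  public static void main ( String[] args)
  {
    int    n     = 1_000_000;
    int[]  elems = new int[ n];     // {int} stands in for {elemType}
    Random rand  = new Random();
    int    ix;
    // Fill the array, one element per iteration.
    for (ix = 0; ix < n; ix++)
    { elems[ ix] = rand.nextInt();  // stands in for produceElem()
    }
    // Sort it; Arrays.sort() stands in for someAlgorithm().
    Arrays.sort( elems);
    // Read the (sorted) elements back out.
    long total = 0;
    for (ix = 0; ix < n; ix++)
    { total += elems[ ix];          // stands in for consumeElem()
    }
    System.out.println( "sum of sorted elements: " + total);
  }
}

None of that changes the cost question; it just pins down what I mean by filling and emptying the array.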
Filling the array this way amounts to, for each element, incrementing {ix}, comparing it to {n}, and then storing the produced {elemType} at the right position in the array; emptying it amounts to, for each element, incrementing {ix}, comparing it to {n}, and then reading the {elemType} at the corresponding position. So that's n+n+n = 3n instructions for each stage, and each instruction requires at least one clock cycle to fetch, at least one clock cycle to interpret, and at least one clock cycle to execute. That's at least 9n clock cycles to fill the array and at least 9n more to empty it after calling {someAlgorithm()}.
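In case it makes the question more concrete, here's the kind of rough timing harness one could run over those two loops (again with {int} standing in for {elemType}; the cycles-per-element conversion assumes a 3 GHz clock, which is just a placeholder for the actual clock rate of whatever machine runs it):

public class LoopTiming
{
  // Assumed clock rate, used only to convert nanoseconds to cycles;
  // replace with your own machine's.
  static final double CLOCK_HZ = 3.0e9;

  public static void main ( String[] args)
  {
    int   n     = 10_000_000;
    int[] elems = new int[ n];

    long t0 = System.nanoTime();
    for (int ix = 0; ix < n; ix++)   // the fill loop
    { elems[ ix] = ix;               // trivial stand-in for produceElem()
    }
    long t1 = System.nanoTime();

    long total = 0;
    for (int ix = 0; ix < n; ix++)   // the read ("empty") loop
    { total += elems[ ix];           // trivial stand-in for consumeElem()
    }
    long t2 = System.nanoTime();

    double fillCycles = (t1 - t0) * 1.0e-9 * CLOCK_HZ / n;
    double readCycles = (t2 - t1) * 1.0e-9 * CLOCK_HZ / n;
    System.out.println( "fill: ~" + fillCycles + " cycles per element");
    System.out.println( "read: ~" + readCycles + " cycles per element (sum = " + total + ")");
  }
}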
I've heard that some scientists have found ways to quickly move large blocks of data from one place on disk to another. Does that speed this process up and make it take less than the 9n clock cycles I've postulated above?
I've got my eye on the sorting of large amounts of data that's actually done in industry. Do organizations that sort large amounts of data on a regular basis typically take 9n clock cycles to fill up the array, or is there a quicker way to do it that's in general use?