One of the memories I have from my early computing days is of a graduate student bitching and moaning that it was taking the computer, an IBM mainframe at the University of Wyoming, 3 whole hours to do some mathematical problem. A mathematics professor, a lady, turned on this clown and chewed him out, pointing out that the problem he was complaining about would have taken several months to solve with slide rules.
I was taking a numerical analysis course senior year of college (that was fun - not), and what we were doing was showing just that: how some computations of classic theories take a long time to do (this was 1987, mind you).
I forget the problem du jour, but we had access to the NSF's Cray X-MP/48 in Pittsburgh. We could use any language we desired, and had to solve the problem on AT&T 6030 PCs, which were 8 MHz 8086 machines with a math coprocessor, on a VAX 8600-series machine, and on the Cray.
So OK, the 6030s ran all night; you had to kick the job off, let it chew, and hope it didn't crash overnight.
The VAX took about an hour. (Both of these were in Pascal: Turbo Pascal 3.0 on the 6030 under DOS, and whatever was in vogue on the VAX.)
Then there was the Cray. It was a batch machine, so you built the program, debugged it, and sent the executable, a data file, and an output file to write data into, all as a package, to the Cray. When the VAX sent it over, you got three messages indicating transmission: vax to cray.... vax to cray.... vax to cray, like that for each file.
Upon completion, you got cray to vax (the program back, with notes on where it had been optimized), cray to vax (the data file back, untouched except for an indication of how far it got through it), and lastly cray to vax, the output file with either errors or the answer.
So I kicked it off, and on my terminal I saw this:
vax to cray....
vax to cray....
vax to cray....
cray to vax....
cray to vax....
cray to vax....
All within 20 seconds.
It impressed me at the time....