Have you ever wondered, when considering a new computer: Exactly how much faster is this machine than what I’m using now?
I studied Computer Architecture — and analysis thereof — with Montek Singh, at UNC Chapel Hill in 2003. If Montek taught me any one thing, it was that you have to take measurements to determine system performance. And if he taught me two things, the second was that you can only evaluate a system’s performance under your own workloads.
So, of course, I need to determine whether my new machine is genuinely faster than my existing machine doing the sort of work that I, Mark Lindsey, typically perform.
My NEW laptop is the April 14, 2010 model. It’s a MacBook Pro, 15″ display, Core i7 2.66 GHz CPU, with 500 GB hard drive and 8 GB RAM. I’m thankful to be part of a firm that appreciates the value of quality tools.
My OLD laptop is the Early 2006 model. It was a MacBook Pro, 17″ display, Core Duo 2.1 GHz, 320 GB 5400 RPM drive and 2 GB of RAM.
I compared these two laptops.
Wireshark Packet Capture Processing Performance
I often have to wait on Wireshark to process packets.
Load 249 MB Packet Capture — 0.8x faster
Unfortunately, this particular result is an estimate. The old Core Duo machine couldn’t handle the memory required for this file, and Wireshark died. But if it had loaded at the predicted speed, the old Core Duo would have taken 72 seconds, while the Core i7 completed the load in 39 seconds.
Load 87 MB Packet Capture — 1.58x faster
Both machines could load the 87 MB packet capture. Core Duo: 31 seconds. Core i7: 12 seconds.
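For anyone who would rather not use a stopwatch, Wireshark ships with a command-line companion, tshark, which could time a comparable read of the same file (the capture filename here is just illustrative):

time tshark -r capture-87MB.pcap > /dev/null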
Display all VoIP Calls in 87 MB Packet Capture — 0.8x faster
Core i7: 11 seconds. Core Duo: 20 seconds.
Calculate Statistics > Conversations in 87 MB Packet Capture — 12x faster
Core i7: 5 seconds. Core Duo: 65 seconds. (In fairness, the Core i7 is running a slightly newer version of Wireshark, so they probably made some efficiency gains.)
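The same statistic is available from the command line, if you wanted to time it without the GUI (filename again illustrative); tshark’s -q suppresses the per-packet output and -z conv,ip prints the conversation table:

time tshark -r capture-87MB.pcap -q -z conv,ip > /dev/null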
Shell Scripting
I use shell scripting a lot. I’ve noticed Mac OS X is pretty slow at it; I believe there’s some security work during process creation that makes spawning each new process expensive. Therefore, shell scripts that run millions of individual processes to accomplish a task are kinda slow.
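If you want to see that overhead for yourself, here’s a minimal sketch (the loop count and /usr/bin/true are just illustrative): time an empty loop that forks an external do-nothing program, then the same loop with the “:” shell builtin, which creates no new process. The difference between the two timings approximates the per-process cost.

# 1,000 fork+exec cycles of a do-nothing external program
x=1000
time while [ $x -gt 0 ]; do /usr/bin/true; let x=$x-1; done
# the same loop with the ":" builtin, which spawns nothing
x=1000
time while [ $x -gt 0 ]; do : ; let x=$x-1; done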
10,000 instances of random file and text processing: 0.8x faster
My shell script was:
date
x=10000
while [ $x -gt 0 ]
do
    # list /etc, cut away the permission columns, break into one
    # word per line, reflow to 80 columns, sort, and count duplicates
    ls -al "/etc" |
        cut -c12- |
        fmt -1 |
        fmt -80 |
        sort |
        uniq -c > /tmp/x
    let x=$x-1
done
date
This shell script does typical text and file processing stuff. It executes a total of 60,000 processes (six per pipeline, times 10,000 iterations), opens an output file 10,000 times, and does some simple in-the-shell processing.
grep through 300 MB of log files: 50% faster
time egrep '^INVITE' XSLog* | wc -l
This searched through 10 log files, each of which was 30 MB; they happened to be BroadSoft BroadWorks log files, totaling 7,680,738 lines of logs.
Generate 32 MB of random data to a disk file: 1.33x faster
time dd if=/dev/urandom bs=16384 count=2000 of=/tmp/random_bytes
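Note that this test mixes CPU time (generating the random bytes) with disk time (writing them). If you wanted to isolate the write path, swapping in /dev/zero is the obvious variant (output filename illustrative):

time dd if=/dev/zero bs=16384 count=2000 of=/tmp/zero_bytes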
Calculate pi to 2000 digits: 0.9x faster
Ever since Dr. Boyd told me how to do it in the hallway of Nevins Hall at Valdosta State University (“Well, I guess you could take the arctangent of one, and multiply that by four”), I’ve always liked using this to measure a system’s performance.
“But Mark, I thought you said you were doing realistic workloads!” Who says Taylor-series polynomial expansion isn’t realistic? What ELSE are you going to do with 731 million transistors?
echo "scale=2000 ; a(1)*4" | time bc -l
Microsoft Word 2008
I have to use Microsoft Word frequently because YOU USE MICROSOFT WORD. I have to write documents that other people edit; and sometimes I have to edit documents that other people wrote. And, unfortunately, not enough of YOU use LaTeX.
The Microsoft Word implementation for Mac OS X is pretty terrible. It seems much slower than the MS Windows version. Nevertheless, I end up using it.
Open a 1.5 MB docx in Microsoft Word: 0.6x faster
This is a 68 page document with several embedded graphics. The file size is 1.5 MB.
Render Preview PDF View from MS Word: 0.8x faster
From within Word, I opened the 1.5 MB, 68-page document mentioned above. Then I chose File > Print and pressed the Preview button to open a PDF copy in the Apple Preview application.
Open Word to create a new document: 69% faster
Search with Spotlight
I tested Spotlight searching at three separate scopes: (a) my “BW” directory (7,824 files, 143 MB); (b) my “ECGDC” directory (8,031 files, 591 MB); (c) “My Mac”. At this point, the contents of both machines are practically identical.
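I ran these through the Spotlight UI, but for anyone who wants to script the comparison, macOS’s mdfind can issue the same queries from the shell (the path shown is just illustrative of my “BW” directory):

time mdfind -onlyin ~/BW "INVITE" | wc -l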
Search 7824-file / 143 MB directory for the word “INVITE”: 1.2x faster
Search 8031-file / 591 MB directory for the word “date”: 3.2x faster
Search 8031-file / 591 MB directory for the word “quality”: 6.5x faster
Search entire machine for “Authentication”: 1.2x faster
Conclusions
Excluding a few outliers, I should expect my new machine (Core i7) to be 80% (0.8x) to 120% (1.2x) faster than my old machine; by this I mean that a 30 second task on my old machine should take roughly 14 to 17 seconds on the new machine (30 / 2.2 ≈ 13.6 and 30 / 1.8 ≈ 16.7).
Methods
For many of the tests, I timed with a stopwatch. For others, such as Wireshark or the shell scripts, the system calculated the duration for me.
On opening MS Word files, I always opened the file once, then closed Word, then opened it again and timed the second or third opening. I ran some tests multiple times, and reported on the lowest of the times for each of the machines.
I know the right way to do this is to measure elapsed duration, but I wanted to use the term “faster”. I wasn’t sure whether everybody uses that term consistently when comparing equivalent workloads. To calculate “fasterness”, I divided the longer duration by the shorter duration and subtracted one. E.g., if a process took 9.8 seconds on the Core Duo, and 4.4 seconds on the Core i7, I’d calculate
( 9.8 / 4.4 ) - 1 = 1.227
i.e., the Core i7 is 1.2x, or 123% faster.
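Since bc has already made an appearance in this post, here’s a throwaway shell helper for the same arithmetic (the function name is mine):

# fasterness OLD_SECONDS NEW_SECONDS: how many times faster the new machine is
fasterness() { echo "scale=3; ($1 / $2) - 1" | bc; }
fasterness 9.8 4.4   # prints 1.227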