xTRICKYxx - Wednesday, September 5, 2012 - link
May I ask why you guys need such high requirements? And why 12 VMs? I just think this is overkill. But it doesn't matter anyway... If I had a budget like this, I would totally build an awesome NAS like you guys have and follow this guide. Great job!
xTRICKYxx - Wednesday, September 5, 2012 - link
I should clarify I am looking at this NAS as a household commodity, not something where 10+ computers will be heavily accessing it.
mfed3 - Wednesday, September 5, 2012 - link
still didn't read... this is hopeless..
extide - Thursday, September 6, 2012 - link
Dude, they are NOT BUILDING A NAS!!! They are building a system to TEST other NASes.
thomas-hrb - Thursday, September 6, 2012 - link
It would also be nice to test against some of the other features, for example iSCSI. Also, since the Thecus N4800 supports iSCSI, I would like to see that test redone with a slightly different build/deployment.
Create a single LUN over iSCSI, then mount that LUN in a hypervisor like ESXi, create some VMs (20GB per server should be enough for Server 2K8R2) and test it that way.
I don't know who would use NAS over SAN in an enterprise shop, but some of the small guys who can't afford an enterprise storage solution (less than 25 clients) might want to know how effectively a small NAS can handle VMs with advanced features like vMotion and fault tolerance. In fact, if you try some of those HP ML110 G7s (3 of them with a VMware Essentials Plus kit), you can get 12 CPU cores with 48GB RAM, with licensing, for about 10K. This setup will give you a decent amount of reliability, and if the NAS can support data replication, you could get a small setup with enterprise features (even if not enterprise performance) for less than the cost of 1 tray of FC-SAN storage.
Wixman666 - Wednesday, September 5, 2012 - link
It's because they want to be able to really hammer the storage system.
The0ne - Wednesday, September 5, 2012 - link
"The guest OS on each of the VMs is Windows 7 Ultimate x64. The intention of the build is to determine how the performance of the NAS under test degrades when multiple clients begin to access it. This degradation might be in terms of increased response time or decrease in available bandwidth."
12 is a good size, if not too small, for a medium-sized company.
MGSsancho - Wednesday, September 5, 2012 - link
12 is also a good size for a large workgroup. Alternatively, this is a good benchmark for students in dorms. Sure, there might be 4-5 people, but when you factor in computers using torrents and game consoles streaming Netflix along with TVs, it could be interesting. Granted, all of this is streaming except for the torrents and their random I/O. However, most torrent clients cache as much of the writes as they can. With the current AnandTech bench setup with VMs this can be replicated.
DanNeely - Wednesday, September 5, 2012 - link
The same reason they need 8-threaded benchmark apps to fully test a quad-core HT CPU. They're testing NASes designed to have more than 2 or 3 clients attached at once; simulating a dozen of them pushes the load on the NASes up, although judging by the results shown by the Thecus N4800, they probably fell short of maxing it out.
theprodigalrebel - Wednesday, September 5, 2012 - link
Well, this IS AnandTech and the article is filed under IT Computing... ;)
mfed3 - Wednesday, September 5, 2012 - link
Someone didn't read the title of the article or the article itself. The purpose is to set up a testbed, not to build a system with this software target in mind.
Zink - Wednesday, September 5, 2012 - link
At the same time, this system seems extremely over the top for the uses mentioned. It seems likely that the same tests could be run with much less hardware. I know the testbed as specced can be used for much more than testing NAS performance, but the only use discussed is simulating the network utilization of an SMB environment.
The SSDs are justified because a single HDD was "not suitable" for 12 VMs, but it seems there are intermediate solutions, such as RAIDing two 512GB SSDs, that would provide buckets of performance and a cleaner solution than 14 individual disks. I also do not understand how having a physical CPU core per VM is needed to "ensure smooth operation" if network benchmarking software is I/O bound and runs fine on a Pentium 4. Assuming you really do need 64GB of RAM for shared files and Windows VMs, then it seems a 1P 2011 board would be more than up to running these benchmarks. Switch to Linux VMs for Dynamo and you could try running the benches from an even lighter system such as an i7-3770.
On the network side, would it not also be possible to virtualize the physical LAN? The clients could connect together over the internal network, and the host OS on the testbed could perform the switch's role and stress the NAS over a single aggregated link. For testing NAS performance specifically, what would the effect be of removing the VMs entirely and just running multiple Iometer sessions over a single aggregated link, or letting Iometer use the multiple NICs from the host OS?
NAS benchmarking would be an interesting application to try to optimize a system for. A simpler system would help you out with reducing power consumption, increasing reliability and reducing cost. You could run some experiments by changing the system configuration and benching again to see if the same NAS performance can be generated. Figuring out what other kinds of systems generate the same results would also make it possible for other editors to bench NAS units without having to purchase 14 SSDs.
Sorry for complaining about the system configuration, I know you built it to test other hardware and not as a project in itself but I find the testbed more interesting than the NAS performance.
ganeshts - Thursday, September 6, 2012 - link
Zink, Thanks for your comment. Let me try to address your concerns one by one, starting with the premise that the current set of tests is not the only one we propose to run on the testbed. That premise accounts for devoting a single physical core to each VM.
As for the single disk for each VM vs. RAIDed SSDs, that was one of the ideas we considered. However, we decided to isolate the VMs from each other as much as possible. In fact, if you re-check the build, the DRAM is the only 'hardware component' that is shared.
We didn't go with 'virtualizing the physical LAN' because that puts an upper limit on the number of clients which can be set up for benchmarking purposes (dependent on the host resources). In the current case, using an external switch and one physical LAN port for each VM more accurately represents real-world usage. Also, in case we want to increase the number of clients, it is a simple matter of connecting more physical machines to the switch.
Multiple IOMeter sessions: As far as we could test out / understand, IOMeter doesn't allow multiple simultaneous sessions on a given machine. One can create multiple workers, but synchronizing across them is a much more difficult job than synchronizing the dynamo processes across multiple machines. I am also not sure whether the workers on one machine can operate through different network interfaces.
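Roughly, the launch step is the sketch below (a simplified illustration only - the user name, IP addresses and VM names are placeholders, and dynamo's command-line flags can vary between Iometer builds, so treat it as a starting point rather than the exact scripts we use):

    # Simplified illustration: start a dynamo worker on each client VM over SSH
    # and point it back at the Iometer controller running on the host.
    # The user name, IP addresses and manager names below are placeholders,
    # and dynamo's flags can differ between Iometer builds.
    import subprocess

    IOMETER_HOST = "192.168.1.10"                               # Iometer GUI/controller
    CLIENT_VMS = ["192.168.1.%d" % i for i in range(101, 113)]  # the 12 client VMs

    for idx, vm in enumerate(CLIENT_VMS, start=1):
        dynamo_cmd = "dynamo -i %s -m %s -n VM%02d" % (IOMETER_HOST, vm, idx)
        subprocess.Popen(["ssh", "bench@%s" % vm, dynamo_cmd])

    # Once every manager shows up in the Iometer GUI, a single ICF run drives
    # all twelve workers in lockstep.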
As noted by another reader, 12 VMs haven't been able to max out the N4800 from Thecus. The next time around, we will probably go with the RAIDed 512 GB SSD option for storage of the VM disks. Physical NICs are probably going to remain (along with one physical CPU core, or probably thread, for each VM).
bobbozzo - Thursday, September 6, 2012 - link
Hi Ganesh,
Could you post power consumption for the server with the CPUs loaded (with Prime95 or whatever)?
I'm thinking of building something like this for a webserver.
Thanks!
ganeshts - Friday, September 7, 2012 - link
Power consumption with Prime95 set for maximum power consumption was 202 W with all CPU cores 100% loaded. Note that the BIOS has a TDP limit of 70W before throttling the cores down.
However, I noticed that RAM usage in that particular scenario was only 4 GB in total out of the 64 GB available. It is possible that higher DRAM activity might result in more power usage.
Stahn Aileron - Wednesday, September 5, 2012 - link
Just out of curiosity, when you run with multiple clients accessing the NAS, are they all running the (exact?) same type of workload? Or is each VM/client set to use a slightly (if not entirely) different workload?
I'm curious since, from a home network PoV, I can see multiple access coming from, say:
-One (or more) client(s) streaming a movie (or maybe music)
-Another (or several) doing copy (reads) from the NAS
-Others doing writes to the NAS
-Maybe even one client (I can't really imagine more) doing a torrent (I don't like the idea of a client using a mounted shared network device as the primary drive for torrenting, but you never know. Also, some NASes offer built-in torrent functionality as a feature.)
I'm just wondering how much the workload from each client differs from one another, if at all, when conducting your tests/benchmarks.
Also, for the NASes that do RAID, will you be testing how array degradation and/or rebuilding impacts client usage benchmarks?
ganeshts - Wednesday, September 5, 2012 - link
Stahn,
Thanks for your feedback. This is exactly what I am looking for from our readers.
As for your primary question: in our benchmark case, all the VMs are running the same type of workload at a given time. The type of workload is given in the title of each graph.
It should be possible to set up an IOMeter benchmark ICF file with the type of multiple workloads that you are mentioning. I will try to frame one and get it processed for the next NAS review.
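As a rough starting point, the mix could be something like the sketch below - the numbers are purely illustrative placeholders, not parameters from any published run:

    # Purely illustrative per-client workload mix for a 'dissimilar multi-client'
    # run; the access-pattern numbers are placeholders, not published test settings.
    WORKLOADS = {
        "VM01": {"desc": "video streaming",  "read_pct": 100, "random_pct": 0,  "block_kb": 512},
        "VM02": {"desc": "music streaming",  "read_pct": 100, "random_pct": 0,  "block_kb": 128},
        "VM03": {"desc": "copy to NAS",      "read_pct": 0,   "random_pct": 0,  "block_kb": 256},
        "VM04": {"desc": "copy from NAS",    "read_pct": 100, "random_pct": 0,  "block_kb": 256},
        "VM05": {"desc": "torrent-like I/O", "read_pct": 50,  "random_pct": 80, "block_kb": 16},
        "VM06": {"desc": "light DB access",  "read_pct": 70,  "random_pct": 90, "block_kb": 8},
    }

    for vm, spec in sorted(WORKLOADS.items()):
        print("%s: %s - %d%% read, %d%% random, %dKB blocks"
              % (vm, spec["desc"], spec["read_pct"], spec["random_pct"], spec["block_kb"]))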
Ref. array degradation / rebuild process: right now, we present results indicating the time taken to rebuild the array when there is no access to the NAS. I will set up a NASPT run while a rebuild is in progress to get a feel for how the rebuild process affects NAS performance.
Stahn Aileron - Thursday, September 6, 2012 - link
Glad to be of some help. To be honest, benchmarking and running tests (troubleshooting) is something I used to do in the Navy as an Avionics Technician. I actually do kind of miss it (especially being a tech geek). Reminiscing aside...
Back on-topic: what I described in my previous post was more of a home user scenario. Is there anything else you would also need/want to consider in a more work-oriented "dissimilar multi-client workload" benchmark/test? If this were a SOHO environment, I would add the following to my previous post:
-DB access (not sure how you want to distribute the read/write workload, though I suppose leaning heavier to reads).
I mention this now because my previous post for read/writes was more along the lines of sequential instead of random. I would guess DB access would be more random-ish in nature.
For other work-oriented scenarios in a "dissimilar multi-client workload" benchmark, I'm not sure what else could be added. I'm mainly just a power user. I dunno if people would really use a NAS for, say, an Exchange Server's storage or maybe a locally-hosted website. (Some NASes come with web service functions and features, no?)
I'm just throwing out ideas for consideration. I don't expect you to implement everything and anything, since you don't have the time to do that. Time is your most precious resource during testing and benchmarking, after all.
Thank you all for running a wonderful website and to Ganesh for a quick reply.
Oh, one last thing: does disk fragmentation matter in regards to NASes? Would it affect NAS performance? Do any NASes defrag themselves?
This is more of a long-term issue, so you can't really test it readily, I'm guessing. (Unless you happen to have a fragmented dataset you could clone to the NAS somehow...) I haven't heard much about disk fragmentation since the advent of SSDs in the consumer space. That, and higher-performance HDDs. This is mainly just a curiosity for me. (I do have a more personal reason for my interest, but it's a long story...)
insz - Wednesday, September 5, 2012 - link
Interesting article. Would it be possible to add some pics of the final setup? It'd be interesting to see what the testbed would look like assembled and wired up.
ganeshts - Friday, September 7, 2012 - link
I didn't add the pics to the article because the setup wasn't 'photogenic' after final assembly and placement in my work area :) (as the album below shows). Doesn't matter, I will just link it in this comments section.
2012 AnandTech SMB / SOHO NAS Testbed : http://imgur.com/a/h4bQR
Individual images:
http://i.imgur.com/hjD9qh.jpg
http://i.imgur.com/PJ91Vh.jpg
http://i.imgur.com/2BcEfh.jpg
http://i.imgur.com/dvmbrh.jpg
dertechie - Wednesday, September 5, 2012 - link
That is a helluva test bench.
I'd love to see what an HP N40L MicroServer does with 4 disks in it if you throw that at it (use the on-motherboard USB port for the OS). It's certainly not a plug-and-play solution like most NAS boxes, but assuming the performance is there, it should be a far more flexible one for the money if you throw a *nix-based OS on it.
bsd228 - Wednesday, September 5, 2012 - link
I've taken advantage of the 5th internal port of the N36L to add an SSD that is used by ZFS for both read and write caching. Strictly speaking, mirrored write caches are advised, but it's connected to a UPS to eliminate much of that risk.
I think HP has given us the perfect platform for low power, high performance with flexibility.
extide - Thursday, September 6, 2012 - link
Cache? Or L2ARC?
Mirrored L2ARC (cache) devices are NOT needed for ZFS, but mirrored log (SLOG) devices are suggested.
coder543 - Wednesday, September 5, 2012 - link
running Windows Server.........
ganeshts - Wednesday, September 5, 2012 - link
What alternatives do you have in mind?
We needed a platform which was well supported by the motherboard. To tell the truth, I found Hyper-V and the virtualization infrastructure to be really good and easy to use compared to VMware's offerings.
ender8282 - Wednesday, September 5, 2012 - link
I assume coder543 was going for a Linux based host, and possibly Linux based clients as well. If you had gone with Linux you wouldn't have needed extra software for SSH or the ram disk. It even looks like IOMeter is supported for Linux. Had you gone that route you likely could have automated the whole task so that it was just a matter of typing go on the host and coming back hours later to collect the results. OTOH most of your audience is probably more likely to be using Windows clients so it probably makes more sense to provide information clearly relevant to the average reader.I found the article interesting. The one thing that I'd be curious about is whether or not there were any major performance differences using Samba/CIFS type shares vs NFS, or a mixture of the two.
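For the automation side, something like this rough sketch is all it would take (the host names, user, result paths and the run_workload.sh script are all made up for illustration, and it assumes key-based SSH logins):

    # Very rough sketch of a 'type go and walk away' run: kick off a workload
    # script on every client over SSH, wait for completion, then pull the
    # result files back. Host names, user, paths and run_workload.sh are
    # hypothetical.
    import subprocess
    import time

    CLIENTS = ["vm%02d" % i for i in range(1, 13)]
    RESULTS_DIR = "/srv/bench/results"

    procs = [subprocess.Popen(["ssh", "bench@%s" % c, "./run_workload.sh"])
             for c in CLIENTS]
    for p in procs:
        p.wait()                      # block until every client finishes
    time.sleep(5)                     # give the last result files time to flush

    for c in CLIENTS:
        subprocess.call(["scp", "bench@%s:results.csv" % c,
                         "%s/%s.csv" % (RESULTS_DIR, c)])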
I'd love to see more Linux coverage in general, but I respect that you know your audience and write the articles that they generally want to read.
Great job, keep it up!
Ratman6161 - Thursday, September 6, 2012 - link
It should run on that platform just great. On the other hand, when all is said and done, as nice as this setup is, to me it is basically a full-blown server/virtualization platform, not really a "NAS" at all. I would typically think of a NAS as being a dedicated storage device - possibly used as an iSCSI target with the brains of the operation living elsewhere.
ganeshts - Thursday, September 6, 2012 - link
This is a testbed for evaluating NAS units, not a NAS. Not sure why readers are getting an impression that this is a NAS by itself.bsd228 - Wednesday, September 5, 2012 - link
Ganesh - I think this test bed sets up very well for testing the $500-1000 4-bay type NAS devices we've been seeing of late that could actually serve a small office. However, I'm less sure that it delivers meaningful data to the home crowd. Like with your SSD tests, I see a place for a "light" load versus the heavy. I think testing against 4 VMs with, for the sake of example, the following load types would work:
1- 2 VMs streaming video - 1 DVD, 1 H.264 HDTV - are there any interruptions?
2- 1 VM streaming audio off mt-daapd (or actual iTunes, since you're using Windows as the server) - again, are there any dropoffs?
3- the same VM as #2 is also doing content creation - like importing 1000 RAW images into Lightroom using this storage space
4- the last VM is copying large files (or small) to the storage server.
The Thecus 4800 should handle this with ease, but there are many cheaper solutions out there that may or may not meet this level of need. I got so tired of poorly performing consumer units that 4 years ago I switched to an AMD X2 4800 running Solaris, and more recently to the HP N36L and N40L. At $300 plus $60 for 8 gigs of ECC, I think this is a better value than the Thecus for those who can run Solaris or even Windows Home Server. You're not reliant on the release of modules to support a particular service.
Also, it seems that all of these benchmarks are based on SMB transfers. It's worth checking to see if NFS and iSCSI performance (when made available by the NAS) shows different numbers. In the past, it certainly did, especially on the consumer devices where NFS smoked SMB1. But perhaps this is a moot point with SMB2/Windows 7, where it seems like the NIC or the hard drives are the limiting factors, not the transfer protocol.
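Even a crude single-client check shows the gap - something like the sketch below, assuming the same share is mounted once over SMB and once over NFS (the mount points and file name are made-up examples):

    # Crude single-client comparison: time a large sequential read of the same
    # file through an SMB mount and an NFS mount of the same share.
    # Mount points and file name are made-up examples; use a file larger than
    # client RAM (or drop caches between runs) so local caching doesn't skew it.
    import time

    MOUNTS = {"smb": "/mnt/nas_smb/testfile.bin",
              "nfs": "/mnt/nas_nfs/testfile.bin"}
    CHUNK = 1024 * 1024  # 1 MB reads

    for proto, path in MOUNTS.items():
        total = 0
        start = time.time()
        with open(path, "rb") as f:
            while True:
                data = f.read(CHUNK)
                if not data:
                    break
                total += len(data)
        elapsed = time.time() - start
        print("%s: %.1f MB/s" % (proto, total / elapsed / 1e6))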
Rick83 - Thursday, September 6, 2012 - link
I agree, test the different protocols provided by the devices.
iSCSI, SMB, NFS, as well as the media streaming protocols, FTP and whatever else it offers.
If encrypted transfers are offered, test those as well (eg. sshfs / scp).
Additionally, have a look at one of the cluster-ssh solutions that allow simultaneous connections/commands to all machines.
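Even a dozen lines of script gets you most of the way there (a sketch - the host names and login are placeholders, and it assumes key-based SSH authentication):

    # Minimal stand-in for cluster-ssh: run the same command on all client VMs
    # at once. Host names and the 'bench' login are placeholders.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    HOSTS = ["vm%02d" % i for i in range(1, 13)]

    def run(host, command="uname -a"):
        result = subprocess.run(["ssh", "bench@%s" % host, command],
                                capture_output=True, text=True)
        return host, result.stdout.strip()

    with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
        for host, output in pool.map(run, HOSTS):
            print(host, output)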
waldo - Wednesday, September 5, 2012 - link
Some of the biggest problems I have found in running my small business related to NASes is file integrity under load. Is there a way to see if they have file integrity issues under load? Not just I/O or response times.
Also, it would be interesting to see how their "feature set" holds up under load, as all of the NASes purport to offer a variety of additional services other than purely file access/storage. Or is that only applicable to your lengthier reviews?
Lastly, most of these NASes don't have version tracking or something similar, so in a media setup, it would be interesting to see how they handle accessing the same file at the same time... can they serve it multiple times to multiple clients?
waldo - Wednesday, September 5, 2012 - link
One last thought... it would be interesting to see FreeNAS or some other DIY build as an alternative.
Peanutsrevenge - Wednesday, September 5, 2012 - link
Top marks!
Bet it was satisfying when the SSH script was complete, just press this button and .....
tygrus - Thursday, September 6, 2012 - link
How big a NAS can you test? Does the server slow it down when testing a 7+ virtual client load against the NAS? What is the client/host CPU usage, host system CPU %, VM overhead?
Please test the server by running simultaneous tests against multiple NAS units. Compare 6 clients alone against 1 NAS at a time with 2 sets of 6 clients against 2 NAS units (or 3x4, 4x3). Is there any difference? Is the test affected by the testbed CPU speed (try using a faster CPU, e.g. an E5-2670)? Can you test 16 or 24 clients (1 server) with 2 VMs per SSD? Might need more RAM? Now we are getting less SMB / SOHO and more enterprise :)
jwcalla - Thursday, September 6, 2012 - link
I got a bit of a chuckle out of G.Skill sending you non-ECC RAM.
ganeshts - Thursday, September 6, 2012 - link
That is OK for our application :) We aren't running this workstation in a 'production' environment.
bobbozzo - Thursday, September 6, 2012 - link
Curious, would ECC RAM use noticeably more power?
extide - Thursday, September 6, 2012 - link
Why would they bother with ECC RAM? Totally un-needed for this application..
bsd228 - Monday, September 10, 2012 - link
ECC is absolutely needed for this application - data integrity matters, not just data throughput.
bobbozzo - Thursday, September 6, 2012 - link
Page 2, in the sentence:
"Out of the three processors, we decided to go ahead with the hexa-core Xeon E5-2630L"
The URL in the HREF has a space in it, and therefore doesn't work.
Thanks for the article!
ganeshts - Thursday, September 6, 2012 - link
Thanks for unearthing that one.. Fixed now.
ypsylon - Thursday, September 6, 2012 - link
14 SSDs. I know it is only to simulate separate clients, but to be honest this whole test is ultimately meaningless. No reasonable business (not talking about a 'man with a laptop' kind of company) will entrust crucial data to SSDs (in particular, non-enterprise-class SSDs). Those disks are far too unreliable, and HDDs trounce them in that category every time. Whether you like it or not, HDDs are still here, and I'm absolutely certain that they will outlive SSDs by a fair margin. Running a business myself, and thank you very much, HDDs are the only choice: RAID 10, 6 or 60 depending on the job. Bloody SSDs, hate those to the core (tested). Good for laptops or for geeks benching their systems 24/7, not for a serious job.
ypsylon - Thursday, September 6, 2012 - link
Dang, 12 not 14, ha, ha.
mtoma - Thursday, September 6, 2012 - link
If you love the reliability of HDDs so much, I must ask you: which SSD brand has failed you? Intel? Samsung? You know, there are statistics that show Intel and Samsung SSDs are much more reliable 24/7 than many enterprise HDDs. I mean, on paper, the enterprise HDDs look great, but in reality they fail more than they should (in a large RAID array, vibration is a main concern). After all, the same basic technology applies to regular HDDs. On top of that, some (if not all) server manufacturers put refurbished HDDs in new servers (I have seen IBM doing that and I was terrified). Perhaps this is not a widespread practice, but it is truly terrifying.
So, pardon me if I say: to hell with regular HDDs. Buy enterprise-grade SSDs; you get the same 5-year warranty.
extide - Thursday, September 6, 2012 - link
Dude, you missed the point ENTIRELY. The machine they built is to TEST NASes. They DID NOT BUILD A NAS.
Wardrop - Saturday, September 8, 2012 - link
I can't work out whether this guy is trolling or not. A very provocative post without really any detail.
AmdInside - Thursday, September 6, 2012 - link
Isn't Win7 x64 Ultimate a little too much for a VM? Would be nice to see videos.
ganeshts - Thursday, September 6, 2012 - link
We wanted an OS which would support both IOMeter and Intel NASPT. Yes, we could have gone with Windows XP, but the Win 7 installer USB drives were on the top of the heap :)
AmdInside - Thursday, September 6, 2012 - link
Thanks
zzing123 - Thursday, September 6, 2012 - link
Hi Ganesh - Thanks for taking my post from a few articles back to heart regarding NAS performance when fully loaded, as it begins to provide some really meaningful results.
I have to agree with some of the other posters' comments about the workload though. Playing a movie on one, copying on another, running a VM from a third and working on docs through an SMB share on a fourth would probably be a more meaningful workload in a prosumer's home.
In light of this, might it be an idea to add a new benchmark to AnandTech's Storage Bench that measures all these factors?
In terms of your setup, there's a balance to be struck. I really like the concept of using 12 VMs to replicate a realistic environment in the way you can. However, when an office has 12 clients, they're probably using a proper file server or multiple NASes. 3-4 clients is probably the most typical setup in a SOHO/home environment.
10GbE testing is missing, and a lot of NASes are beginning to ship with 10GbE. With switches like the Cisco SG500X-24 also supporting 10GbE and slowly becoming more affordable, 10GbE is slowly but surely becoming more relevant. 1 SSD and 1 GbE connection isn't going to saturate it - 10 will, and that is certainly meaningful in a multi-user context, but this is AnandTech. What about absolute performance?
How about adding a 13th VM that lashes together all 12 SSDs and aggregates all 12 I340 links to provide a beast of RAIDed SSDs and 12Gb of aggregate connectivity (the 2 extra connections should smoke out network adapters that aren't performing to spec as well)?
Tor-ErikL - Thursday, September 6, 2012 - link
As always, a great article and a sensible testbench which can be scaled to test everything from small setups to larger ones. Good choice!
However, I would also like some type of test that is less geared towards technical performance and more towards real-world scenarios.
So to help out, I give you my real-world scenario:
Family of two adults and two teenagers...
Equipment in my house is:
4 laptops running on the wifi network
1 workstation for work
1 media center running XBMC
1 Synology NAS
The laptops stream music/movies from my NAS - usually, I guess, no more than two of these run at the same time.
The media center also streams music/movies from the same NAS at the same time.
In addition, some of the laptops browse all the family pictures which are stored on the NAS and do light file copies to and from the NAS.
The NAS itself downloads movies/music/TV shows and does unpacking and internal file transfers.
My guess is that for a typical home use scenario there is not that much intensive file copying going on, usually only light transfers, mainly through either wifi or 100Mb links.
I think the key factor is that there are usually multiple clients connecting and streaming different stuff; that is the most relevant factor. At tops 4-5 clients.
Also, as mentioned, differences between the sharing protocols like SMB/CIFS would be interesting to see more details about.
Looking forward to the next chapters in your testbench :)
Jeff7181 - Thursday, September 6, 2012 - link
I'd be very curious to see tests involving deduplication. I know deduplication is found more on enterprise-class storage systems, but WHS used SIS, and FreeNAS uses ZFS, which supports deduplication.
_Ryan_ - Thursday, September 6, 2012 - link
It would be great if you guys could post results for the Drobo FS.
Pixelpusher6 - Thursday, September 6, 2012 - link
Quick correction - on the last page, under the specs for the memory, do you mean 10-10-10-30 instead of 19-10-10-30?
I was wondering about the setup with the CPUs for this machine. If each of the 12 VMs uses 1 dedicated real CPU core, then what is the host OS running on? With 2 Xeon E5-2630Ls, that would be 12 real CPU cores.
I'm also curious about how hyper-threading works in a situation like this. Does each VM have 1 physical thread and 1 HT thread for a total of 2 threads per VM? Is it possible to run a VM on a single HT core without any performance degradation? If the answer is yes then I'm assuming it would be possible to scale this system up to run 24 VMs at once.
ganeshts - Thursday, September 6, 2012 - link
Thanks for the note about the typo in the CAS timings. Fixed it now.
We took a punt on the fact that I/O generation doesn't take up much CPU. So, the host OS definitely shares CPU resources with the VMs, but the host OS handles that transparently. When I mentioned that one CPU core is dedicated to each VM, I meant that the Hyper-V settings for the VM indicated 1 vCPU instead of the allowed 2, 3 or 4 vCPUs.
Each VM runs only 1 thread. I am still trying to figure out how to increase the VM density in the current setup. But, yes, it looks like we might be able to hit 24 VMs because the CPU requirements from the IOMeter workloads are not extreme.
dtgoodwin - Thursday, September 6, 2012 - link
Kudos on an excellent choice of hardware for power efficiency. 2 CPUs, 14 network ports, 8 sticks of RAM, and a total of 14 SSDs idling at just over 100 watts is very impressive.
casteve - Thursday, September 6, 2012 - link
Thanks for the build walkthrough, Ganesh. I was wondering why you used an 850W PSU when worst-case DC power use is in the 220W range? Instead of the $180 Silverstone Gold-rated unit, you could have gone with a lower-power 80+ Gold or Platinum PSU for less money and better efficiency at your given loads.
ganeshts - Thursday, September 6, 2012 - link
Just a hedge against future workloads :)
haxter - Thursday, September 6, 2012 - link
Guys, yank those NICs and get a dual 10GbE card in place. SOHO is 10GbE these days. What gives? How are you supposed to test a SOHO NAS with each VM so crippled?
extide - Thursday, September 6, 2012 - link
10GbE is certainly not SOHO.
Zarquan - Thursday, September 6, 2012 - link
I might be missing something really obvious here... but if the highest power consumption was 146.7 W (IOMeter 100% Seq 100% Reads [12 VMs]), then why did you need an 850W power supply?
Either the system is using a lot more than the 146.7 W you quoted in your power consumption figures, or the power supply is way over-specified.
http://www.anandtech.com/show/6241/building-the-20...
ganeshts - Thursday, September 6, 2012 - link
This is not the only workload we plan to run on the machine.
We were ready to put up with some inefficiency just to make sure we didn't have to open up the machine and put in a more powerful PSU down the road. The 850W PSU should serve the testbed well for future workloads which might be more stressful.
ydafff - Thursday, September 6, 2012 - link
I'm a VCP 5/4, MCSE and MCITP: VA/EA.
This setup is way overkill for 12 VMs.
Best for this testbed would be VMware vSphere Hypervisor (free ESXi) - much better memory, vCPU and storage management - or the free MS Hyper-V Server 2008 R2; the free Hyper-V 2008 server needs much less HD space and fewer compute resources.
Regarding VM density, you could easily run all 12 VMs (1-2 GB memory each) from a single Sandy Bridge-E CPU or an 1155 Xeon (i7) CPU with really good performance. For storage, 2x Intel 320 Series 600GB SSDs in RAID 1 (you will need redundancy) with thin provisioning will do the trick.
ganeshts - Thursday, September 6, 2012 - link
ydafff, Thanks for the inputs.
We are working towards increasing the VM density in the current testbed itself. As another reader pointed out, 12 VMs were not enough to stress the Thecus N4800.
I decided not to go with the free Hyper-V Server 2008 R2 because I needed to run some programs / scripts in the host OS, and the Z9PE-D8 WS had drivers specifically for Windows Server 2008 R2.
eanazag - Thursday, September 6, 2012 - link
Seems like a lot of people are talking about it being over the top. I agree with the route AnandTech took - they could have even gone further. My question is: how far can these NASes be pushed? I want to see when they start smoking NASes. The article and concept are great. I like to know how the site sets up its test scenarios and equipment. It lets me know if my use case is higher or lower and what the device being reviewed can do. I look at your testing methods to decide if your data is worth considering. I continue to be an avid reader here because of the effort put in. If you had one PC with one NIC, anyone in their house could test it like that. Why even write reviews about NASes if that is how far you are going to test? Great job, AnandTech.
I have some applications at work I would like to create repeatable tests for. An article about how to automate applications for testing would be helpful. I saw that we got a little of that in this article. I would also like to see more enterprise equipment being tested if you can swing it.
KingHerod - Friday, September 7, 2012 - link
NAS devices are convenient and generally low-power, but it would be nice to see a comparison to some real metal with a real server OS like Server 2K8R2. Maybe a repurposed older computer with a couple of drives mirrored, and an actual low-end server with some SAS drives.
dbarth1409 - Friday, September 7, 2012 - link
Ganesh,
Good work. I'm looking forward to seeing some future test results.
dijuremo - Monday, September 10, 2012 - link
This ASUS motherboard is not truly ACPI compliant; ASUS knows it and they do not want to fix it. Their tech support has given stupid excuses in response to posts from users trying to run Windows 8 and Server 2012 on it.
If you boot either Windows 8 or Server 2012 RTM on it, it blue screens with the error:
0xA5: ACPI_BIOS_ERROR
You just need to check the reviews at the egg to confirm.
http://www.newegg.com/Product/Product.aspx?Item=N8...
ganeshts - Monday, September 10, 2012 - link
Looks like ASUS has updated the support files for Windows 8.
VTArbyP - Monday, September 10, 2012 - link
I wonder what would happen if you did use Linux for the host and VM OSes? I suppose that would become a test of Linux vs Windows! Heh.
More seriously, why not add at least one VM of "the current popular distro" of Linux and a Mac OS X machine? Use them with NTFS drivers and/or reformat a NAS partition to native ext# and another to HFS+. The point being: how does the NAS react to mixed client loads, and not all SMB, as someone commented above. The other test this beast seems ideal for is a comparison of several non-local storage solutions - someone mentioned iSCSI, and I can imagine trying some types of SANs (might add an InfiniBand adapter) being of interest. The point of that would simply be to see which form of non-local storage is fastest, best value, easiest to maintain, etc., for us mortals who want to connect 6-12 machines - we being the folks who DON'T run LANs for a living and are not up to speed on what IT people already know.
webmastir - Tuesday, September 18, 2012 - link
How much did this build cost you guys to test?
garuda1 - Tuesday, March 26, 2013 - link
Ganesh, Thank you for this article. You mentioned that ASUS recommended the Dynatron R-17 for the Z9PE-D8 WS. I have this board and its manual, but found no such recommendation. My question is: where did you find this recommendation by ASUS?
garuda1 - Saturday, March 30, 2013 - link
ganeshts,
Jeff at Dynatron recommends mounting my two R-17s on my ASUS Z9PE-D8 WS board with the airflow blowing toward the rear of the chassis case – which is 90 degrees clockwise from your orientation. However, it appears from your photo that maybe the R-17 will only fit using your orientation, which allows the indentation notch in the heatsink fins to straddle and clear the mobo's chipset heatsink. Is your orientation the ONLY way you could get it to fit between the memory sticks and both heatsinks? Thanks.