Singapore Expats

What is your setup like and why.

Discuss computers & the Internet, including mobile phones, home appliances & other gadgets. Read about Windows security risks or virus updates.
x9200
Moderator
Posts: 10075
Joined: Mon, 07 Sep 2009 4:06 pm
Location: Singapore

iSCSI SAN

Post by x9200 » Wed, 27 Mar 2013 4:26 pm

I am trying to understand the concept behind an iSCSI SAN and what real benefits I could get from it on a local home network.

This is what I found here: http://www.tomshardware.com/reviews/ada ... 802-2.html
I already outlined the basics of iSCSI: An iSCSI initiator connects to an iSCSI target, which eventually results in a new drive becoming available on the initiator’s host machine. The beauty of this is the way the new storage partition can be accessed: it appears as if it were locally installed in the host machine, although the iSCSI target can be located anywhere within your network. The only real limitation is network performance, which means that you shouldn’t use anything slower than Gigabit Ethernet - though a simple wireless network is technically capable of hosting an iSCSI deployment.
What is the difference between this (in bold) and a network-attached FS via NFS or Samba shares? Leaving aside some half-competent descriptions, is it about multi-node (server) parallel access to the data distributed over multiple storage nodes?
I currently have a single, continuously used storage server giving me actual transfer rates of 40-80 MB(ytes)/s, so what would be the benefit if I set up an iSCSI SAN target?

RimBlock
Regular
Posts: 126
Joined: Fri, 06 Oct 2006 12:38 am

Re: iSCSI SAN

Post by RimBlock » Wed, 27 Mar 2013 5:36 pm

x9200 wrote: What is the difference between this (in bold) and a network-attached FS via NFS or Samba shares? Leaving aside some half-competent descriptions, is it about multi-node (server) parallel access to the data distributed over multiple storage nodes?
I currently have a single, continuously used storage server giving me actual transfer rates of 40-80 MB(ytes)/s, so what would be the benefit if I set up an iSCSI SAN target?
The biggest difference is that iSCSI is block-level, while NFS (CIFS etc.) is file-level.

The offshoot of this is that an iSCSI LUN presented to another server looks like a raw, unformatted disk. That server can then use whatever filesystem it supports, along with any tools for that FS, to manage the storage. Servers can also use iSCSI for a PXE network boot, enabling them to be diskless.

NFS shares etc. are formatted and managed on the server that shares out the actual storage.
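To make the block-level vs file-level distinction concrete, here is a minimal Python sketch (purely illustrative; /dev/sdb and /mnt/nfs/share are placeholder paths, not anyone's actual setup):

import os

# Block level (iSCSI): the initiator sees a raw device and reads it by byte
# offset. Whatever filesystem lives on it is entirely the client's business.
fd = os.open("/dev/sdb", os.O_RDONLY)
first_sector = os.pread(fd, 512, 0)   # read the first 512 bytes of the "disk"
os.close(fd)

# File level (NFS/CIFS): the client only ever sees files and directories; the
# filesystem itself was created and is managed on the storage server.
with open("/mnt/nfs/share/report.txt", "rb") as f:
    data = f.read()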

You can also have a hybrid like I was running until recently. My storage server shared the space via iSCSI to my Windows server, which formatted it and shared it out via CIFS / Samba to my desktop machines. The Windows server managed backups etc. The reason for doing this, rather than just having the disks in the Windows server, was that another set of disks in the storage server was shared to my ESXi server via iSCSI, and the ESXi server then formatted them to VMFS for its own use. From my workstation (1GbE) through my Windows server to the storage SAN I was getting around 60-80 MB/s on average. The actual disks were in a RAID 5 array (hardware RAID controller).

Now, interestingly, VMFS is a multi-access file system, meaning it can cope with multiple clients (iSCSI initiators) accessing it at the same time. This aids moving a virtual machine from one ESXi server to another (shut it down on one and start it up on the other, with no relocation of the VM files in between).

Speed-wise, iSCSI and NFS are now generally thought to be more or less comparable. NFS 4 should also bring in some more features on supported hardware/software.

I am currently setting up an InfiniBand network at home. DDR, so 20Gb/s (yes, around 2.5GB/s with the right equipment at each end). Using second-user parts it works out cheaper than 10GbE. Not so easy to get running though :D.

RB
Without dialogues, if you tell them you want something real bad, you will get it real bad.

x9200
Moderator
Posts: 10075
Joined: Mon, 07 Sep 2009 4:06 pm
Location: Singapore

Post by x9200 » Wed, 27 Mar 2013 10:44 pm

Thanks RB. I was hoping for a response like this. What is the benefit for the OS of seeing the resource as a physical disk rather than a file system? If this is for a shared space (it seems so to me), then the ability of the OS to handle a specific FS will still determine the availability of the "device" over the network.

A bit of a different subject: being a minimalist, I think I have fallen in love with the Raspberry Pi. Have you had the opportunity to play with it? :)

For those potentially interested and not familiar, the Raspberry Pi is a roughly credit-card-sized "evaluation platform", or rather a small desktop PC. You can buy it locally for ~S$50.

[images of the Raspberry Pi board]

RimBlock
Regular
Posts: 126
Joined: Fri, 06 Oct 2006 12:38 am

Post by RimBlock » Mon, 01 Apr 2013 9:41 am

x9200 wrote: Thanks RB. I was hoping for a response like this. What is the benefit for the OS of seeing the resource as a physical disk rather than a file system? If this is for a shared space (it seems so to me), then the ability of the OS to handle a specific FS will still determine the availability of the "device" over the network.
The difference is having the computer believe it has a directly connected disk of a size determined by the storage server from a pool of storage. It can do everything it could with a physical disk connected to the machine, but it is using the fabric (GbE / IB / FC) instead of SATA to communicate.

iSCSI:
Storage server: 5TB RAID 5 (RAIDZ etc.) storage.
- 2TB to Windows Server (formatted to NTFS).
- 2TB to Linux server (formatted to ext3).
- 500GB to Windows desktop (formatted to NTFS).
- 500GB to Media PC (formatted to ext3).

None of those machines would necessarily need a boot disk, as they can boot from the iSCSI share using PXE (if they have a compatible network card). The space is assigned by the storage server and is a hard limit, but it can be increased if needed and available storage exists. Shrinking is more tricky.

NFS:
Storage server: 5TB RAID 5 (RAIDZ etc.) storage.
- 5TB to Windows Server.
- 5TB to Linux server.
- 5TB to Windows desktop.
- 5TB to Media PC.

No remote boot, so each machine would still need a boot drive. Storage is shared, so one machine could take up all the storage and leave the others with none. Reclamation of freed-up space is instant, without any backend manipulation.

As you quite rightly mention, remote PXE boot is great, but it moves the failure point from the SATA interface / cabling (usually pretty stable) to the network, which may be a little less stable and more prone to contention. I ran my iSCSI traffic on a VLAN and on a dedicated network port for each machine using it, to help avoid contention. LACP and a multi-port network card can help on the storage server, as it will load-balance connections over the separate ports (it does not combine the ports into one 'fat pipe'). This means that four consumers would each get a 1Gbit connection if the server had a quad NIC with LACP enabled on it and on the switch it connects to. One machine connecting would still only get a single 1Gb connection, with the other 3x 1Gb connections not being used.
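If it helps, a rough sketch of why that is (purely illustrative; a real switch hashes MAC/IP/L4 fields with its own algorithm, the CRC here just stands in for that):

import zlib

PORTS = 4  # e.g. a quad-port NIC on the storage server with LACP enabled

def member_port(src_mac, dst_mac, src_ip, dst_ip):
    # Each flow is hashed to exactly one member link, so a single client
    # can never use more than one link's worth of bandwidth.
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}".encode()
    return zlib.crc32(key) % PORTS

# Four clients spread over (up to) four 1Gb links; one client stays on one link.
for client in ("192.168.1.10", "192.168.1.11", "192.168.1.12", "192.168.1.13"):
    print(client, "->", member_port("aa:bb", "cc:dd", client, "192.168.1.2"))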
x9200 wrote: A bit of a different subject: being a minimalist, I think I have fallen in love with the Raspberry Pi. Have you had the opportunity to play with it? :)
Nope, not yet. Availability was limited for quite a while and I am not sure what I would use it for. It would be great as a smart internet browser hooked up to the living-room TV, or as a little standalone computer for light home use / customer-interactive displays.

Of course, the more adventurous could go for the Pi Lego supercomputer :D

RB
Without dialogues, if you tell them you want something real bad, you will get it real bad.

RimBlock
Regular
Posts: 126
Joined: Fri, 06 Oct 2006 12:38 am

Post by RimBlock » Mon, 01 Jul 2013 2:19 pm

OK, it has been a bit slow here, so here are a few pictures.

My current setup

I managed to get a great deal on a 17" 1U KVM drawer.

The current build I am doing for the Singapore Hadoop user group.

Any questions then please just ask away.

RB
Without dialogues, if you tell them you want something real bad, you will get it real bad.

x9200
Moderator
Posts: 10075
Joined: Mon, 07 Sep 2009 4:06 pm
Location: Singapore

Re: What is your setup like and why.

Post by x9200 » Fri, 19 Sep 2014 10:39 am

RimBlock, are you still lurking around a bit?

[quote="RimBlock"]Servers (housed in a 42U cabinet).
  • - Business server (Windows SBS 2011) – HP M110 G7 (E3-1220, 20GB ram, 500GB (raid 1, WD RE4s)
    - Virtualisation Server (VMware vSphere 5.1) – Whitebox (E3-1230, 16GB ECC ram, various SSD / 2.5”

RimBlock
Regular
Posts: 126
Joined: Fri, 06 Oct 2006 12:38 am

Post by RimBlock » Fri, 19 Sep 2014 11:33 am

Not really lurking but I do still get email notifications on topic updates :)

Well a lot of things have changed in a year and a bit.

I now have a Dell C6100 4 node 2U cloud server (2x E5520s and 24GB per node). Works great as an ESXi cluster in a 2U box. I also modded the fans to make them much quieter.

I also now have an HP DL380 G6 (dual X5560s with 24GB RAM). This I am currently using for an ARMA II server (test and prod) and for ARMA III mod development.

I have moved my storage onto a Silverstone DS380 (8x external and 4x internal drive mITX case) with an S1200KP motherboard, 12GB ECC RAM and an LSI 9211-8i SAS controller. This runs Solaris 11.2 and shares the disks via a dedicated iSCSI network to the ESXi node in the C6100.

The KVM has survived. The switches etc. are also still there, although I have added a Cisco V320 dual-WAN router.

I have also been using InfiniBand for a storage network, which was giving up to 2GB/s data transfer between the SAN and ESXi, but having moved the SAN to mITX I only get one PCIe slot, which I have to use for the storage controller, unless I splash out on an Asrock E3C224D4I-14S, which would just about fit the Silverstone case with a little modification.

I had a play with an HP DL1000 disk shelf with 9x 146GB 15k SAS drives, and although that was very fast (the drives and the shelf were dual-ported), they were power hungry and loud.

I had the idea to set up watercooling for the entire rack (quick-release connections for the servers and a big loop on the rack, with a large pump at the bottom and a large rad at the top). I have most of the parts, but other priorities have taken precedence.

I also got myself a tabletop CNC machine with the intention of milling waterblocks and other things like custom PC cases or show pieces. I built a protective case out of acrylic, which is great for keeping the noise down and keeping little fingers safe, and it works great. Just a bit of a learning curve :D.

To get back to our actual question :wink: , you have to get the right servers and / or mod them for home use.

The C6100, for example, is a fantastic unit with 4 separate servers sharing just cooling and power, but it sounded like a jet engine when running at load. I was able to find a replacement fan which had pretty close throughput at a significantly lower dBA. I have supplied modded C6100s to quite a few people in the Singapore VM user group ('cluster in a box') for their home labs, but TBH the fan mods are a lot of manual work, as the fan plugs are non-standard, so I have to chop the original fan cables off and resolder them to the new fans. My soldering has vastly improved :D.

I would not go near 1U units, as they are so slim they can only have very small, noisy fans. 2U cases are better, and my unmodified DL380 G6 (another fantastic bang-for-buck machine if you hunt around) is fairly quiet, although louder than the modified C6100.

Of course, neither is around $60 but they are also newer gen tech (DDR3, 55/5600 series Xeons etc). For $60 a play around server is great and you still have enough left over for earplugs which are readily available for the F1 race :) .
Without dialogues, if you tell them you want something real bad, you will get it real bad.

x9200
Moderator
Posts: 10075
Joined: Mon, 07 Sep 2009 4:06 pm
Location: Singapore

Post by x9200 » Fri, 19 Sep 2014 1:02 pm

You clearly don't waste your time. With all these upgrades at home within a year, are you still a married guy?

The watercooling sounds like a very good idea, also for this 1950 I've got, but first I have to know what generates the heat. Something does, and I am not sure whether it is the CPU or something else too - after a few minutes with no load and no HDDs inside, the air going out is clearly warm. Can a single CPU of this type generate that much?
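A rough back-of-the-envelope suggests it can (the power and airflow figures below are my guesses, not measurements on the 1950):

# How much a single CPU package (plus chipset) can warm the exhaust air.
power_w = 60.0            # assumed idle-ish package + chipset power, W
cfm = 20.0                # assumed airflow through the chassis, cubic feet/min
m3_per_s = cfm * 0.000471947
rho_air = 1.2             # air density, kg/m^3
cp_air = 1005.0           # specific heat of air, J/(kg*K)

delta_t = power_w / (m3_per_s * rho_air * cp_air)
print(f"exhaust roughly {delta_t:.1f} C above intake")   # ~5 C with these numbers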

Besides that, I have nothing remotely as impressive to report. Your Intel desktop board is doing a nice job, running very stably since the time I bought it from you. The only small "project" I've done in the meantime was a RAID array based on an MSI C847MS-E33. A rarity: a fanless mATX MB. I am not sure if there is any other currently in production based on a low-power CPU and offering more than one PCIe slot.

RimBlock
Regular
Posts: 126
Joined: Fri, 06 Oct 2006 12:38 am

Post by RimBlock » Fri, 19 Sep 2014 1:48 pm

Yeah, was thinking the same thing. A lot has changed. Importing, refurbishing and modding the C6100 units gave me a little pocket money to play with pet projects :) .

Good to hear that board is still running well for you. The new Atom boards with loads of SATA ports also seem to be very popular and low-powered for entry-level NAS boxes. The C2750D4I and C2550D4I models look pretty perfect. Supermicro also do some, but they tend to be quite a bit more expensive.

Watercooling a 1U is difficult, as there really are no 1U waterblocks that are not specialist prices (hence me getting the CNC). I have no real experience of the 51xx series, but with the 55/56xx-series Xeons the chipset gets very hot. I was playing around with building a watercooling plate for the C6100 (each node is 1U half-width) using the copper plates from broken all-in-one watercooling solutions from eBay. Piping is quite tricky though, unless you use slim copper and braze or solder it like they do with the MUC supercomputer.
Without dialogues, if you tell them you want something real bad, you will get it real bad.

x9200
Moderator
Posts: 10075
Joined: Mon, 07 Sep 2009 4:06 pm
Location: Singapore

Post by x9200 » Fri, 19 Sep 2014 2:33 pm

RimBlock wrote:Yeah, was thinking the same thing. A lot has changed. Importing, refurbishing and modding the C6100 units gave me a little pocket money to play with pet projects :) .

Good to hear that board is still running well for you. The new Atom boards with loads of SATA ports also seem to be very popular and low-powered for entry-level NAS boxes. The C2750D4I and C2550D4I models look pretty perfect. Supermicro also do some, but they tend to be quite a bit more expensive.
Expensive. My total cost, including an LSI 8-port SATA card, was less than half of the C2750D4I, and I am getting close to 500MB/s (hdparm) for RAID 5.
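As a rough sanity check on that figure (the disk count and per-disk rate below are assumptions on my side, not measured):

# RAID 5 sequential reads stripe across roughly (N - 1) data disks.
n_disks = 5               # assumed array size on the 8-port card
per_disk_mb_s = 130.0     # assumed sequential read of a single drive
print((n_disks - 1) * per_disk_mb_s, "MB/s")   # ~520 MB/s, in the right ballpark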
RimBlock wrote: Watercooling a 1U is difficult, as there really are no 1U waterblocks that are not specialist prices (hence me getting the CNC). I have no real experience of the 51xx series, but with the 55/56xx-series Xeons the chipset gets very hot. I was playing around with building a watercooling plate for the C6100 (each node is 1U half-width) using the copper plates from broken all-in-one watercooling solutions from eBay. Piping is quite tricky though, unless you use slim copper and braze or solder it like they do with the MUC supercomputer.
How about something like this:
http://www.ebay.com.sg/itm/Water-Coolin ... 58a37bf962

It should fit nicely in a 1U.


RimBlock
Regular
Posts: 126
Joined: Fri, 06 Oct 2006 12:38 am

Post by RimBlock » Fri, 19 Sep 2014 3:29 pm

It depends on the orientation of the in and out ports, clearance around the CPU socket, and a reasonable flow path to a second CPU and back again in the case of dual-processor systems (which the C6100 is).

When the second CPU is up and 45deg to the right, a custom solution is needed. The all-in-one units would be fine if they didn't have a pump in the waterblock assembly. With the pump they are a little bit too tall.

There are actually only around two main manufacturers of all-in-one watercoolers which are rebranded and slightly modified for other companies.

One is ASETek. I forget the name of the other off the top of my head.

ASETek do All-in-one loops for use with their rack watercooling frames here. I would hate to imagine how expensive they would be though.
Without dialogues, if you tell them you want something real bad, you will get it real bad.

x9200
Moderator
Posts: 10075
Joined: Mon, 07 Sep 2009 4:06 pm
Location: Singapore

Post by x9200 » Fri, 19 Sep 2014 6:30 pm

If anything emits a lot of heat, surely the connection should be parallel (not sequential). Even if below 45deg, it will contribute to uneven thermal aging over time.

I would rather do the piping and pumping and cooling by my own design. I don't think it is that challenging. The most tricky part is the CPU/chipset block.
For your big setup I would consider running a small fishtank compressor chiller like this one, or smaller:

[image: small aquarium compressor chiller]

For this you would need a thermally insulated container of probably 10-20 l, depending on the overall piping capacity, and a reliable but fairly standard low-flow-rate aquarium pump (or two in series), all running in suction mode.
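As a rough sizing check for the pump (the heat load and allowed temperature rise below are assumptions):

# Coolant flow needed to carry a given heat load with a modest water temperature rise.
heat_w = 500.0            # assumed total heat dumped into the loop, W
delta_t = 5.0             # allowed water temperature rise, K
cp_water = 4186.0         # specific heat of water, J/(kg*K)

kg_per_s = heat_w / (cp_water * delta_t)
print(f"{kg_per_s * 60:.1f} L/min")   # ~1.4 L/min (1 kg of water ~ 1 L), well within a small aquarium pump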

RimBlock
Regular
Posts: 126
Joined: Fri, 06 Oct 2006 12:38 am

Post by RimBlock » Wed, 24 Sep 2014 10:04 am

My own design was to use a fish tank pump.

The output goes from the pump to the top of the rack, where it is capped (possibly with an air-bleed system), with feeds taken off for the individual servers (quick-release connectors).

On the other side there is a main trunk pipe which takes the output from the individual servers and feeds it up to a large rad at the top of the cabinet, with top-of-rack fans pulling the air up through it.

The output from the rad would then feed back in to the pump.

The pump would need to be good for enough head pressure to manage the circuit, and, depending on how much you want, redundancy can also be enhanced by separate pumps for each feed.

I have all the parts for this apart from the waterblocks and the large rad (some people are using car rads, although finding one with a copper core rather than alu can be tricky). Importing one from the US would be very expensive due to weight.

Koolance do a custom watercooling rackmount unit which has the res, 3 loops (IIRC) and a large rad all in a 4U chassis. I expect it is built to order and quite costly though.

Asetek use a heat exchanger in their rack solutions and a cool water feed rather than a rad. I am not so sure my wife would be happy with the extra plumbing :D .
Without dialogues, if you tell them you want something real bad, you will get it real bad.

x9200
Moderator
Posts: 10075
Joined: Mon, 07 Sep 2009 4:06 pm
Location: Singapore

Post by x9200 » Wed, 24 Sep 2014 11:10 am

There are some sites offering delivery from China and the prices are generally good. Not sure what sizes you are looking for but if anything bigger than the one below I would really use the aquarium chiller. It will be smaller, probably cheaper and I believe much more effective too.

http://www.aliexpress.com/item/Ke-Ruiwo ... 0.html?s=p

Google images with the keyword: Ke Ruiwo Katyusha copper radiator.

I've been trying at this point to buy some stuff from this one:
http://en.sgbuy4u.com/taobao/view/id/2643770990

It offers payment via PayPal, so I can claim it back if anything goes wrong. The question mark is how much weight they will eventually assign to it. The 240 x 12 cm rad above is ca. 640 g, but you never know what to expect if the volumetric factor kicks in.
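For reference, this is roughly how the volumetric weight works out (the divisor of 5000 is the one couriers commonly use for cm -> kg, and the box size is just a guess):

# Billed weight is usually the greater of actual and volumetric weight.
l_cm, w_cm, h_cm = 30, 16, 6        # assumed shipping box for the 240 mm rad
volumetric_kg = (l_cm * w_cm * h_cm) / 5000.0
actual_kg = 0.64
print(max(volumetric_kg, actual_kg), "kg billed")   # 0.64 actual vs ~0.58 volumetric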

RimBlock
Regular
Posts: 126
Joined: Fri, 06 Oct 2006 12:38 am

Post by RimBlock » Wed, 24 Sep 2014 12:49 pm

I have got a few items from Taobao through one of my customers who orders on there every now and then. The items were branded (XSPC etc) and seemed to be genuine.

Koolance do a 4-fan copper-core rad (30 fpi) for US$75.99.

The only issue I would have with chillers would be having to watch the dew point to make sure you don't get condensation, especially in humid Singapore, and doubly so if the equipment is not in an air-conditioned environment (which mine is not).

Oh, and I also have a spare fish tank pump lying around unused (a Fluval 306), which should have enough head pressure to push to the top of the rack and has a fairly large res. It also has a shutoff valve so it can be decoupled from the loop for maintenance.

TBH I have two servers and so my cooling requirements are going to be fairly low compared to a full rack. If I get around to getting a rad then I may give it a go.
Without dialogues, if you tell them you want something real bad, you will get it real bad.
