[QUOTE=Hebsacker;82646]sounds interesting - would you mind to share your further experiences?[/QUOTE]
Hi,
Of course. Right now I'm only running Apache benchmarks and seeing roughly 5% lower load times.
Sounds “little”, but the server was freakin' fast before, too.
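If you want to reproduce that kind of measurement, ApacheBench works; a minimal sketch, assuming your site answers on localhost (the request counts are just placeholders):

[CODE]
# 1000 requests, 10 at a time, against the local site
ab -n 1000 -c 10 http://localhost/
# compare the "Time per request" line before and after
# moving /tmp (or your cache dir) to a ramdisc
[/CODE]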
I need to wait for a cluster to finish its work, so I've got time to write down some basic thoughts explaining why I'm giving ramfs a try:
If you're on a common shared server or virtual root server without RAID (or with RAID 1, software RAID and so on), you'll see read performance of about 35-70 MB/s. With some serious hardware RAID and SAS drives or even SSDs, you'll get some 190-260 MB/s in the small server systems we're mostly using. That is close to nothing compared to a ramdisc, which can, depending on the hardware used, perform (in my case) at ~6,500 MB/s.
All MB/s figures are averages over different access types (there are very, very big differences between e.g. sequential read/write and random read/write, and of course it depends on how much and what kind of data you're reading/writing).
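If you want to check the sequential read numbers on your own box, here's a rough sketch; the device and mount paths are assumptions, adapt them to your system (and the cache drop needs root):

[CODE]
# drop the page cache first so you measure the disk, not RAM (Linux only, as root)
sync && echo 3 > /proc/sys/vm/drop_caches
# read 1 GB sequentially from the drive; /dev/sda is just an example
dd if=/dev/sda of=/dev/null bs=1M count=1024
# same read against a file on the ramdisc for comparison
dd if=/mnt/ramdisk/testfile of=/dev/null bs=1M count=1024
[/CODE]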
Next thing is the access time, and this is a very important point: there's no need to pull 4 GB out of your tmp folder, so in the end we don't need 6 GB/s, but it won't hurt.
Most “spinning drives” have an average access time of about 9-15 ms. With RAID, the access time goes up, non-linearly with the number of drives used.
A good SSD drive (child-friendly version: no spinning parts, no needle that has to wait until the needed sector comes by…) has about 0.1 ms access time, and that is already a very mighty improvement.
But here we go: the memory we're using today has access times of 8 to 30 ns (no typo, that is nanoseconds, not milliseconds).
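To see those access-time differences on your own machine, a tool like ioping can help; a minimal sketch, and the mount points here are just examples:

[CODE]
# ten random-access latency probes per target; paths are examples
ioping -c 10 /mnt/hdd       # spinning disk: expect somewhere around 10 ms
ioping -c 10 /mnt/ssd       # SSD: roughly 0.1 ms
ioping -c 10 /mnt/ramdisk   # RAM-backed: far below that
[/CODE]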
To round it up:
- All values are rough ballpark figures. The memory my servers use does 12 GB/s in theory; you will never hit that, losing a lot to overhead, checksumming, hardware/software and driver issues.
- When using ramdiscs, be sure to use a Unix system, no matter if *BSD (including Apple), Linux, Solaris, HP-UX, whatever. In short: don't use Windows.
- Decide what kind of implementation you want to use: ramfs, tmpfs, or filesystem-based. I prefer ramfs, but be sure to have enough RAM free, because ramfs has no size limit! (See the mount sketch after this list.)
- Never, ever save mission-critical data on a ramdisc! Even with a UPS (USV in German) it's just a bad idea.
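For the ramfs/tmpfs point, a minimal mount sketch; /mnt/ramdisk and the size are assumptions (and as said, ramfs itself won't enforce any size, so keep an eye on free RAM):

[CODE]
# ramfs: grows without limit, a size= option would simply be ignored
mount -t ramfs ramfs /mnt/ramdisk
# tmpfs: enforces a size cap and may swap out, usually the safer choice
mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
[/CODE]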
And here we finally go: very small differences in performance help a LOT when the system is under heavy load. Take care of all the small things; they add up and boost your overall performance.
After the mentioned relaunch, I'll post real-life benchmarks comparing a 6-HDD RAID 10, an SSD, and ramfs.
Greets, Nik