Has anyone given tmp on a ramdisk a chance?

Hi,

I'm just tuning the page load times of my OXID shop - and XCache does not see any var data, and very few PHP hits at all. It seems normal tuning is just a waste of time, since the tmp folder "takes it all". After all, delivering cached content even from a normal, I/O-stressed system is way faster than having no cache at all. But I'm really keen to know if anyone has tested a ramdisk yet.
Anyone?

greets, Nik

… could not wait for your replies :wink:

Working flawlessly - the shop feels very, very fast and smooth. I can't wait to relaunch my store to see the difference under heavy load.

greets, Nik

sounds interesting - would you mind sharing your further experiences?

[QUOTE=Hebsacker;82646]sounds interesting - would you mind sharing your further experiences?[/QUOTE]

Hi,

Of course. Currently I'm just running Apache benchmarks, gaining ~5% lower load times.
Sounds “little”, but the server was freaking fast before, too :wink:
I need to wait for a cluster to finish its work, so I've got time to write down some basic thoughts explaining why I gave ramfs a try:

If you're on a common shared server or virtual root server without RAID (or with plain RAID 1, software RAID, and so on), you'll see read performance of about 35-70 MB/s. With some serious hardware RAID and SAS drives, or even SSDs, you'll get some 190-260 MB/s in the small server systems we're mostly using. That is still close to nothing compared to a ramdisk, which - depending on the hardware - can perform (in my case) at ~6500 MB/s.
All MB/s figures are averages over different access types (there are very big differences between e.g. sequential and random read/write, and of course in how much and what kind of data you're reading or writing).

The next thing is access time, and this is a very important point: there is no need to pull 4 GB out of your tmp folder, so we don't really need 6 GB/s - but it won't hurt.
Most “spinning drives” have an average access time of about 9-15 ms. With RAID, the access time goes up, non-linearly with the number of drives used.
A good SSD (child-friendly version: no spinning parts, no head that has to wait until the needed sector comes by…) has about 0.1 ms access time, and that is already a mighty improvement.
But here we go: the memory we're using today has access times of 8 to 30 ns (no typo - that is nano, not milli).
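The throughput gap is easy to sanity-check yourself. A rough sketch, assuming a Linux box where /dev/shm is a tmpfs mount (true on most distributions) - the reported rates are ballpark only, since the page cache and scheduler will skew them:

```shell
#!/bin/sh
# Rough write-throughput comparison: RAM-backed tmpfs vs. disk-backed /tmp.
# Assumption: /dev/shm is a tmpfs mount; adjust paths for your system.

# 64 MB to tmpfs (RAM) - dd prints the achieved rate on its last line
dd if=/dev/zero of=/dev/shm/ddtest bs=1M count=64 2>&1 | tail -n 1

# 64 MB to the disk-backed filesystem, forcing data to actually hit the disk
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1

# Clean up the test files
rm -f /dev/shm/ddtest /tmp/ddtest
```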

To sum it up:

  • All values are rough ballpark figures. The memory my servers use does 12 GB/s - in theory. You will never hit that, losing a lot to overhead, checksums, and hardware, software, and driver issues.
  • When using ramdisks, be sure to use a Unix system - no matter whether *BSD (including Apple), Linux, Solaris, HP-UX, whatever. In short: don't use Windows :wink:
  • Decide which implementation you want to use: ramfs, tmpfs, or filesystem-based. (I prefer ramfs, but be sure to have enough free RAM - ramfs has no size limit!)
  • Never, ever store “mission critical” data on a ramdisk! Even with a UPS (German: USV), it's just a bad idea.
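For reference, a minimal sketch of how the shop's tmp folder could be put on a RAM-backed filesystem. The path and size are assumptions to adjust for your installation; tmpfs is shown here rather than ramfs because it lets you cap the size:

```
# /etc/fstab entry: mount a 512 MB tmpfs over the OXID tmp folder
# (path, size, and owner are examples - adjust to your shop setup)
tmpfs  /var/www/oxid/tmp  tmpfs  size=512m,mode=0770,uid=www-data,gid=www-data  0  0
```

Apply with `mount /var/www/oxid/tmp` (as root). Keep in mind the folder is empty after every reboot, so the shop must be able to rebuild its cache from scratch.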

And here is the point: very small performance differences help a LOT when the system is under heavy load. Take care of all the small things - they add up and boost your overall performance.

After the mentioned relaunch, I'll post real-life benchmarks comparing a 6-HDD RAID 10, SSD, and ramfs.

greets, Nik

Hi Nik

Do you know about this whitepaper with some basic “tuning” information?

http://docu.oxid-esales.com/devdocuments/whitepaper-performance-optimierung.pdf

No, but I just checked it. To be honest: in a tuning manual I would expect a mention that one should avoid using .htaccess whenever possible - just place those directives directly in the vhost/host config!
The bad thing about .htaccess is that on EVERY request, Apache checks the whole directory tree, from the document root down to the final directory, for .htaccess files.
So everything should be moved into the configuration - and to make it perfect, add “AllowOverride None”. This directive stops Apache from searching for .htaccess files at all.
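A sketch of what that could look like in an Apache 2.2-era vhost. The hostname and paths are examples, and the rewrite rules stand in for whatever your .htaccess currently contains:

```apache
<VirtualHost *:80>
    ServerName shop.example.com
    DocumentRoot /var/www/oxid

    <Directory /var/www/oxid>
        # Stop Apache from looking for .htaccess files on every request
        AllowOverride None
        Order allow,deny
        Allow from all

        # Former .htaccess content moves in here, e.g. the shop's rewrite rules
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule .* index.php [L]
    </Directory>
</VirtualHost>
```

(On Apache 2.4 and later, `Order`/`Allow` would be replaced by `Require all granted`.)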

greets, Nik

Uhm, forgot to mention: maybe the whitepaper should also note that there are far faster HTTP servers than Apache 2, especially when it comes to serving static files. A cookieless domain served by lighttpd for images, JS, and so on offers a big boost for big shops.
Replacing Apache 2 with nginx is the next step.

Well, there are tons of possibilities to speed things up; I guess covering them all would just ruin the whitepaper.
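For the cookieless static domain idea, a minimal nginx sketch - the hostname and root path are made-up examples, and the extension list would need to match your shop's assets:

```nginx
# Separate, cookieless host for static assets.
# Browsers and proxies can cache these aggressively because no cookies
# are ever set on this hostname.
server {
    listen 80;
    server_name static.example.com;
    root /var/www/oxid/out;

    location ~* \.(jpg|jpeg|png|gif|css|js|ico)$ {
        expires 30d;
        access_log off;
    }
}
```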

greets, Nik

I fully agree with what you say.

The performance tuning whitepaper mentioned above did not speed up anything that I could measure on my root server (without a ramdisk).

nginx makes sense - I'm using it for pictures and media files, and that works fine. Load times are around 1.6 s on average, which is what Google Webmaster Tools displays as well.

I found an interesting article about ramdisk caching (in German): http://www.continum.net/Referenzen-Performance,171.html
There you can find some OXID shop links in the text (not clearly marked), so you can check the performance in real production mode. They call it “Premium Cache” and claim a 50-70% performance boost. I've actually seen 0.3-0.8 s load times in Chrome.

Did you try an SSD yourself in the meantime?

POST EX:
Related to performance, as of 2012-05-21: an Intel 520 SSD in a MySQL sysbench OLTP benchmark