FileSystem Memory Cache

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0
Topic 203899

So I have a question for anyone. Does the size of the filesystem memory cache play a major role in the processing of work units?

The reason I ask is that I notice, on 2 of my machines, the amount of memory being used is almost at the max value set for that computer.

I was wondering whether increasing the file system memory cache would have a positive effect on crunching the data.

ML1
Joined: 20 Feb 05
Posts: 347
Credit: 86562721
RAC: 1690

Very good question.

 

Important details are whether you are running Windows, Linux or Mac...

And whether this is a dedicated cruncher or whether you do other things on that system also...

 

In short and in general: the most important thing is to have enough available system memory for the active tasks. The file system cache in main memory is a good use of any SPARE, otherwise unutilised memory to buffer and cache the much slower SSDs/HDDs. Note the emphasis on "SPARE and otherwise unutilised"... Unless you are going to do some very fine tuning, it is best to stay with the defaults.
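
If you want to see how that split actually looks on a given box, here is a minimal sketch (assuming Linux, where the kernel reports these figures in /proc/meminfo; not part of BOINC itself):

# Minimal sketch (Linux-only assumption): report how much RAM the kernel is
# currently using for the file system (page) cache versus what is still
# available to applications such as BOINC tasks.

def meminfo_kib():
    """Parse /proc/meminfo into a dict of {field: value in KiB}."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # values are reported in kB
    return info

if __name__ == "__main__":
    m = meminfo_kib()
    gib = 1024 * 1024  # KiB per GiB
    print(f"Total RAM:  {m['MemTotal'] / gib:.1f} GiB")
    print(f"Available:  {m['MemAvailable'] / gib:.1f} GiB")
    print(f"FS cache:   {(m['Cached'] + m['Buffers']) / gib:.1f} GiB (Cached + Buffers)")

MemAvailable already accounts for cache the kernel can drop on demand, which is why a "nearly full" memory readout is usually nothing to worry about.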

 

An aside for one example: I run Linux systems whereby Boinc is copied into a "tmpfs" (in "Windows-speak" otherwise known as a 'RAMDISK') with an automatic copy back to an SSD every 12 hours. The memory utilisation balances out nicely. The one criterion is that you have enough RAM for whatever you have active...
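
Purely as an illustration of that copy-back idea (a sketch only, not the actual setup; the paths below are placeholders):

# Sketch of the periodic copy-back idea: the BOINC data directory lives on a
# tmpfs RAM disk and is mirrored to the SSD every 12 hours, so a reboot or
# power loss only costs the most recent interval of work.
# The paths below are placeholders, not real defaults.

import subprocess
import time

TMPFS_DIR = "/mnt/boinc-ramdisk/"          # hypothetical tmpfs mount point
SSD_BACKUP_DIR = "/var/lib/boinc-backup/"  # hypothetical SSD copy
INTERVAL_SECONDS = 12 * 60 * 60            # every 12 hours

def sync_to_ssd():
    # rsync copies only changed files; --delete keeps the backup an exact mirror
    subprocess.run(
        ["rsync", "-a", "--delete", TMPFS_DIR, SSD_BACKUP_DIR],
        check=True,
    )

if __name__ == "__main__":
    while True:
        sync_to_ssd()
        time.sleep(INTERVAL_SECONDS)

In practice you would run something like this from a cron job or a systemd timer, with a matching restore step at boot before BOINC starts.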

 

Happy fast crunchin',

Martin

 

See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)

Zalster
Joined: 26 Nov 13
Posts: 3117
Credit: 4050672230
RAC: 0

Thanks Martin, 

Yes, it's a dedicated cruncher. OK, I'll stay with the defaults.

Zalster 

Mad_Max
Joined: 2 Jan 10
Posts: 154
Credit: 2209761456
RAC: 413602

From my observations, Einstein@Home does NOT put any significant load on the disk system, so the filesystem cache does not affect performance here.

But it may be different for other BOINC projects. For example, Rosetta@Home is known for heavy use of the filesystem: it creates and uses more than 4000 small files per running workunit, which has a very high impact on HDD performance.
And The Clean Energy Project (from WCG) is known as an "SSD killer": it can write up to 100-200 GB of data per day to disk on an I7 xxxx/FX 8xxx system running up to 8 WUs in parallel.

In such cases a large filesystem cache, or moving the BOINC data folder to a RAM disk (tmpfs), can be useful to improve performance or to reduce wear on a fast SSD.
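
If you want to check how hard a project actually hits the disk before going to that trouble, a rough sketch (assuming Linux, with a placeholder device name) that extrapolates the current write rate from /proc/diskstats:

# Rough sketch (Linux-only assumption) for checking whether a project really
# is an "SSD killer": sample total sectors written from /proc/diskstats twice
# and extrapolate to GB written per day. The device name is a placeholder.

import time

DEVICE = "sda"           # placeholder: the disk holding the BOINC data directory
SAMPLE_SECONDS = 600     # measure over 10 minutes
SECTOR_BYTES = 512       # /proc/diskstats counts 512-byte sectors

def sectors_written(device):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                return int(fields[9])   # field 10: total sectors written
    raise ValueError(f"device {device!r} not found in /proc/diskstats")

if __name__ == "__main__":
    before = sectors_written(DEVICE)
    time.sleep(SAMPLE_SECONDS)
    after = sectors_written(DEVICE)
    bytes_per_day = (after - before) * SECTOR_BYTES * (86400 / SAMPLE_SECONDS)
    print(f"~{bytes_per_day / 1e9:.1f} GB written per day at the current rate")

Note this counts all writes to that disk, not just BOINC's, so it is only a rough upper bound on what the project itself is doing.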
