GW follow-up runs (S6BucketFU*) and pre-7.0 clients

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5845
Credit: 109967082469
RAC: 30568415

No problem. I was just lucky to have remembered that Bernd had posted his initial announcement (with a more detailed follow-up) about why locality scheduling wasn't working as it should and that upgrading to a V7 client would help.

These snippets of information tend to stick in my memory because of how many machines I'm running and all the 20 GB disks that don't seem to want to die even after 12 years of use :-). For fear of running out of space and racking up a lot of downloads, I chose to stay with FGRP4 rather than have to monitor the GW follow-up searches more closely.

I have a nice data file caching system working for the LATeahxxxxE.dat data files for FGRP4, so each one only gets downloaded once. Because the GW follow-up searches use many different frequency data files, each supplying a potentially small number of randomly available tasks, I figured it might be best to stay with just FGRP4 for the moment. When advanced LIGO data eventually becomes available, all that will change and I'll certainly want to jump right into it. If it's anything like the standard GW searches of years ago, I should be able to adapt my caching arrangements so that different hosts in the fleet don't all download the same data files. With (hopefully) thousands of tasks for each set of large data files, and a steady, regular progression of steps in frequency, I should be able to keep both download bandwidth and local storage requirements under control.
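
For what it's worth, the basic idea is simple enough to sketch. This is just an illustration of the technique, not my actual scripts: the paths, the shared-cache location and the helper names are all made up.

```python
#!/usr/bin/env python3
# Illustrative sketch only: paths, patterns and function names are
# invented, not my actual setup. The idea is to keep one shared cache
# of the large data files so each file crosses the internet link once,
# then seed it into each host's project directory before the client
# asks for it.
import shutil
from pathlib import Path

CACHE = Path("/srv/einstein_cache")    # assumed fleet-wide share (e.g. NFS)
PROJECT = Path("/var/lib/boinc/projects/einstein.phys.uwm.edu")

def stash_new_files():
    """Copy any data files this host has downloaded into the shared cache."""
    for f in PROJECT.glob("LATeah*E.dat"):
        if not (CACHE / f.name).exists():
            shutil.copy2(f, CACHE / f.name)

def seed_from_cache():
    """Pre-place cached files so the client finds them already on disk."""
    for f in CACHE.glob("LATeah*E.dat"):
        if not (PROJECT / f.name).exists():
            shutil.copy2(f, PROJECT / f.name)

if __name__ == "__main__":
    stash_new_files()
    seed_from_cache()
```

Run from cron on each host (ideally while the client isn't mid-download), the net effect is that a given LATeahxxxxE.dat file only ever gets fetched from the project servers once for the whole fleet.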

Cheers,
Gary.

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4273
Credit: 245212288
RAC: 12870

I modified the scheduler to be able to explicitly send out "delete requests" for files even if they are not reported by the clients. The problem is that the old clients in particular have small buffers for XML that I don't dare to overrun, i.e. we can't send much more than ~80 such delete requests at a time.
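
In outline, the capping works something like this (a simplified sketch, not the actual scheduler code; the function and variable names are invented, though <delete_file_info> is, as far as I recall, the element the client parser expects for such requests):

```python
# Simplified sketch of capping delete requests per scheduler reply.
# Pre-7.0 clients parse the reply into small fixed-size XML buffers,
# hence the conservative limit. Names here are invented for illustration.

MAX_DELETES_PER_REPLY = 80

def delete_request_xml(pending_deletes, already_sent):
    """Emit <delete_file_info> elements for at most MAX_DELETES_PER_REPLY
    files this client hasn't yet been told to delete."""
    batch = [n for n in pending_deletes if n not in already_sent]
    batch = batch[:MAX_DELETES_PER_REPLY]
    already_sent.update(batch)
    return "".join(
        f"    <delete_file_info>{name}</delete_file_info>\n" for name in batch
    )
```

Each reply carries at most one such batch; whatever is left over goes out on the client's subsequent requests.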

I'm updating the list of files to delete from the old clients about every two weeks. The files from 50-56 Hz (h1_0050.00_S6GC1 through l1_0055.95_S6GC1) should already be deleted; currently we are deleting 56-58 Hz.
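
Since the file names follow a regular pattern (detector prefix, frequency, run tag), a band's worth of names can be generated mechanically. A hypothetical helper, assuming a 0.05 Hz spacing between files, which is what the quoted endpoints suggest but which may not match the real run:

```python
# Hypothetical helper to enumerate one band's data files for the delete
# list. The 0.05 Hz spacing is inferred from the quoted names
# (h1_0050.00_S6GC1 ... l1_0055.95_S6GC1), not taken from the run setup.

def band_files(lo_hz, hi_hz, step_hz=0.05, detectors=("h1", "l1"), tag="S6GC1"):
    """Yield names like 'h1_0050.00_S6GC1' for lo_hz <= f < hi_hz."""
    n = round((hi_hz - lo_hz) / step_hz)
    for det in detectors:
        for i in range(n):
            yield f"{det}_{lo_hz + i * step_hz:07.2f}_{tag}"

# e.g. the 56-58 Hz band currently being deleted:
current_batch = list(band_files(56.0, 58.0))
```

If that spacing assumption is right, a 2 Hz band comes to about 80 names across both detectors, i.e. roughly one reply's worth per client.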

It will take some time until we've scratched all the old files (up to 300 Hz) from the old clients this way, but it should eventually get done.

BM
