Uploads disabled

Ulrich Metzner
Joined: 22 Jan 05
Posts: 113
Credit: 963370
RAC: 0

I constantly get "HTTP 504:

I constantly get "HTTP 504: Gateway Timeout" messages for the upload tries.
Maybe this message helps? I have a WU to upload which is scheduled for January 11. :/

[edit]
Message log from Proxomitron:

Quote:


+++GET 3298+++
POST /EinsteinAtHome/cgi-bin/file_upload_handler HTTP/1.1
User-Agent: BOINC client (windows_intelx86 7.4.36)
Host: einstein4.aei.uni-hannover.de
Accept: */*
Accept-Encoding: deflate, gzip
Content-Type: application/x-www-form-urlencoded
Accept-Language: en_GB
Content-Length: 4699
Expect: 100-continue
Connection: keep-alive
Posting 4699 bytes...
Continue ignored...

+++RESP 3298+++
HTTP/1.1 504 Gateway Time-out
Server: nginx/1.2.1
Date: Fri, 09 Jan 2015 12:40:40 GMT
Content-Type: text/html
Content-Length: 182
Connection: keep-alive
+++CLOSE 3298+++
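
For what it's worth, the request can be reproduced outside of BOINC. A minimal sketch in Python (the body here is a dummy payload of the same length, not a real upload):

import urllib.request
import urllib.error

# Endpoint taken from the log above; the payload is a placeholder.
URL = ("http://einstein4.aei.uni-hannover.de"
       "/EinsteinAtHome/cgi-bin/file_upload_handler")

req = urllib.request.Request(
    URL,
    data=b"x" * 4699,  # same Content-Length as the failing upload
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    method="POST",
)

try:
    with urllib.request.urlopen(req, timeout=60) as resp:
        print(resp.status, resp.reason)
except urllib.error.HTTPError as e:
    # nginx answers for the backend here: 502/504 means the upload
    # handler behind the proxy is slow or unreachable, not a
    # client-side problem.
    print(e.code, e.reason)
except urllib.error.URLError as e:
    print("connection failed:", e.reason)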

Aloha, Uli

Richard Haselgrove
Joined: 10 Dec 05
Posts: 2143
Credit: 2954779937
RAC: 712858

I was getting a lot of HTTP/1.1 502 Bad Gateway earlier this morning, but it changed to HTTP/1.1 504 Gateway Time-out around 11:30 UTC.

MAGIC Quantum Mechanic
Joined: 18 Jan 05
Posts: 1886
Credit: 1403694658
RAC: 1077024

I still have no problem downloading, but once again I can rarely get a finished task to upload.

So I have plenty to do, and it just means there will be thousands of tasks to send in very soon.

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4312
Credit: 250374301
RAC: 35408

For the techs: it looks like the root cause for our current performance problem is described here:

"After a large delete on a reasonably full filesystem the filesystem will have free inodes sparsely distributed through the inode btree. When a new file is created under these circumstances the inode btree will be walked (i.e.; searched) to find free inodes.

This is in contrast to a filesystem with a normal usage pattern, where an inode is allocated directly from the “free inode cursor”, and so searching to find a free inode to allocate is not necessary."

(Re)building the filesystem, as suggested in the article, is of course not an option in our case.
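
To illustrate the effect with a toy model (plain Python, not XFS code; the table size and free fraction are made up): with a free inode cursor an allocation is a single constant-time step, while with free inodes scattered sparsely through a nearly full table each allocation has to search for the next free slot.

import random
import time

N = 2_000_000
used = [True] * N                 # a nearly full "inode table"
for i in random.sample(range(N), N // 100):
    used[i] = False               # a large delete frees 1% sparsely

# After the delete: scan for a free slot on every allocation.
t0 = time.perf_counter()
allocated, i = 0, 0
while allocated < 5_000:
    while used[i]:                # walk until the next free "inode"
        i += 1
    used[i] = True
    allocated += 1
print(f"searching sparse free inodes: {time.perf_counter() - t0:.4f}s")

# Normal usage pattern: allocate at the cursor, no searching.
cursor = N
t0 = time.perf_counter()
for _ in range(5_000):
    cursor += 1                   # "allocate" directly at the cursor
print(f"free inode cursor:            {time.perf_counter() - t0:.4f}s")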

BM

Anonymous

I assume that the XFS filesystem must be something new to E@H, since this issue would have surfaced before now.

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4312
Credit: 250374301
RAC: 35408

Quote:
I assume that the XFS filesystem must be something new to E@H, since this issue would have surfaced before now.

XFS has been in use at the AEI since before I started working there in 2002, long before Einstein@Home.

I can't remember this particular issue ever coming up, though. I doubt that we ever before had such a large single FS that required this kind of performance run full.

BM

Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4312
Credit: 250374301
RAC: 35408

We'll shut down einstein4 for a filesystem check. This may take a while.

BM

Donald A. Tevault
Joined: 17 Feb 06
Posts: 439
Credit: 73516529
RAC: 0

Quote:

For the techs: it looks like the root cause for our current performance problem is described here:

"After a large delete on a reasonably full filesystem the filesystem will have free inodes sparsely distributed through the inode btree. When a new file is created under these circumstances the inode btree will be walked (i.e.; searched) to find free inodes.

This is in contrast to a filesystem with a normal usage pattern, where an inode is allocated directly from the “free inode cursor”, and so searching to find a free inode to allocate is not necessary."

(Re)building the filesystem, as suggested in the article, is of course not an option in our case.

BM

XFS is one of the few *nix filesystems that is subject to fragmentation. So, I have to wonder if running a defrag might help.

XFS Defrag Article
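
For reference, a sketch of that check driven from Python (the device and mount point are placeholders, both commands need root, and the actual defrag run is left commented out):

import subprocess

DEVICE = "/dev/sdX1"    # placeholder: the einstein4 data volume
MOUNTPOINT = "/data"    # placeholder: where that volume is mounted

# xfs_db -r opens the device read-only; "frag" prints a line like
# "actual 12345, ideal 678, fragmentation factor 94.51%".
report = subprocess.run(
    ["xfs_db", "-r", "-c", "frag", DEVICE],
    capture_output=True, text=True, check=True,
)
print(report.stdout.strip())

# xfs_fsr reorganises file extents in place. Note that it does not
# touch the inode btree, which is why it may not help in this case.
# subprocess.run(["xfs_fsr", "-v", MOUNTPOINT], check=True)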

DJStarfox
Joined: 25 Mar 07
Posts: 10
Credit: 2484242
RAC: 0

I'm afraid the long-term solution may be to back up, reformat as ext4, and restore the data. Any idea how many terabytes we're talking about here?
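
For scale, a back-of-the-envelope estimate (the 12 TB figure comes from Oliver's reply below; the 200 MB/s sustained rate is an assumption):

TB = 10**12
data_bytes = 12 * TB          # figure from the reply below
throughput = 200 * 10**6      # bytes/s, assumed sustained rate

one_way_h = data_bytes / throughput / 3600
print(f"one way:    {one_way_h:.1f} h")      # roughly 16.7 h
print(f"round trip: {2 * one_way_h:.1f} h, plus mkfs and verification")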

Oliver Behnke
Moderator
Administrator
Joined: 4 Sep 07
Posts: 984
Credit: 25171376
RAC: 43

Just two quick replies:

- Defragmentation might not help in this case, as we suspect the inode structure itself, not the data; in such a case xfs_fsr won't help. We started the fragmentation check but had to stop it in favour of a repair we're currently running (we saw an FS error on Wednesday as well). We'll check the fragmentation level as soon as the repair is done.
- We're currently talking about 12 TB. We could in principle delete a significant part of that, but we're not able to do so at the current FS performance…
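
A hypothetical sketch of what a throttled cleanup could look like (path, batch size, and pause are made-up values): deleting in small batches with pauses in between, so a struggling filesystem isn't hammered further.

import os
import time

ROOT = "/data/old_results"    # placeholder: directory to clear out
BATCH = 100                   # files deleted between pauses
PAUSE = 1.0                   # seconds to sleep after each batch

deleted = 0
with os.scandir(ROOT) as entries:
    for entry in entries:
        if entry.is_file(follow_symlinks=False):
            os.unlink(entry.path)
            deleted += 1
            if deleted % BATCH == 0:
                time.sleep(PAUSE)   # give other FS users a chance
print(f"deleted {deleted} files")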

Stay tuned,
Oliver

Einstein@Home Project
