All things AMD GPU

Tom M
Joined: 2 Feb 06
Posts: 6431
Credit: 9560345961
RAC: 10439482
Topic 229299

I am curious. What would be a discrete AMD GPU comparable in performance to, say, a Ryzen 5700G iGPU?

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3944
Credit: 46580182642
RAC: 64173531

There is nothing really directly comparable in a discrete card.

The 5700G APU has a Vega GPU with 8 Compute Units. There was no Vega discrete card with so few CUs.

Performance is probably similar to the RX 550, which has 8 CUs of the older Polaris generation and a very similar FLOPS rating.


VulcanCat
Joined: 22 Feb 22
Posts: 2
Credit: 494901159
RAC: 2069

Tom, for what it's worth, here are my results for FGRPB1G running 2 tasks at the same time:
Ryzen 5700G -- 66 minutes each WU
Ryzen 4800U -- 76 minutes each WU
Radeon 6400 -- 27.4 minutes each WU

And to blatantly compare apples and oranges, on Milkyway (FP64) tasks, 2 at a time:
Ryzen 5700G -- 8 minutes each WU
Ryzen 4800U -- 10 minutes each WU
Radeon 6400 -- 5.5 minutes each WU

So I use them on Milkyway WUs -- just because there's something deeply satisfying about shorter runtimes.

 

Tom M
Joined: 2 Feb 06
Posts: 6431
Credit: 9560345961
RAC: 10439482

VulcanCat wrote:

So I use them on Milkyway WUs -- just because there's something deeply satisfying about shorter runtimes.

I can understand that. Thank you for the additional information on what I was wondering about.

===edit===

About 26% of the top-performing systems on E@H are using Radeon GPUs to do it.

The top-performing Radeon GPU system has dual Radeon VIIs, a 3.2+M RAC, and is currently in 14th place.

7 Radeon VII systems (1 or 2 GPUs)

2 Radeon 6900 XT (3 GPUs listed)

1 Radeon 560 (9 GPUs listed)

1 Radeon 6600 XT (4 GPUs listed)

1 Radeon 6800 XT (2 GPUs listed)

1 Radeon RX Vega (2 GPUs listed)

 

 

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

VulcanCat
Joined: 22 Feb 22
Posts: 2
Credit: 494901159
RAC: 2069

Well, I only gave Tom a half-answer. So to finish:

With FGRPB1G, a 5700G is only 41.5% as powerful as a Radeon 6400. The 6400 has an FP32 rating of 3.565 TFLOPS, so to match a 5700G we'd need something running at about 1.48 TFLOPS.

Ian&Steve suggested an RX 550, which runs FP32 at 1.211 TFLOPS.
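
For anyone who wants to redo the arithmetic, here's a quick sketch (the TFLOPS figures are TechPowerUp's; the ratio comes from the runtimes I posted above):

```
# Rough scaling estimate: what FP32 rating would match the 5700G on FGRPB1G?
rx6400_minutes = 27.4        # per WU at 2x, from the runtimes above
r5700g_minutes = 66.0

ratio = rx6400_minutes / r5700g_minutes      # ~0.415 -> the 5700G is ~41.5% of a 6400
rx6400_fp32_tflops = 3.565                   # TechPowerUp FP32 rating for the RX 6400
target_tflops = rx6400_fp32_tflops * ratio   # ~1.48 TFLOPS

print(f"A ~{target_tflops:.2f} TFLOPS card should roughly match the 5700G iGPU")
```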

So according to TechPowerUp Ian&Steve is about as right as you can get.  That's why I try to keep my posts to a minimum -- better to just listen and learn.

Skip Da Shu
Joined: 18 Jan 05
Posts: 151
Credit: 1039863000
RAC: 763043

Is there anything in the WU's stderr output that will tell me how many tasks I was running concurrently at that time?

I'm not spotting anything while trying to see how run times compare when running 2 through 5 GPU tasks at a time on a single RX 580, tracked in an .ods spreadsheet.

Thanx, Skip

Tom M
Joined: 2 Feb 06
Posts: 6431
Credit: 9560345961
RAC: 10439482

Skip da shu,

I can't answer your direct question because I don't know.

I can say that very few GPUs gain much from running 7 tasks at once. Many GPU tasks can run 2-4 at the same time and gain total production. BRP7 was a wash for me between 1 and 2 tasks at a time. GRP#1 has been a production gain at 2 GPU tasks at a time.

My testing usually starts at one task per GPU. Then I bump it up by one and ask whether the time to process is less than the baseline (multiply the baseline by the number of tasks you are running, or divide the current processing time by the number of tasks per GPU you are running).

As long as the processing "time" is less than the baseline "time", you have a net gain.
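
In code form, with made-up numbers just to show the comparison, the check looks something like this:

```
# Net-gain check described above: running N tasks at once is a win if the
# elapsed time per task, divided by N, is still below the 1x baseline.
def net_gain(baseline_minutes: float, elapsed_minutes: float, tasks_per_gpu: int) -> bool:
    effective = elapsed_minutes / tasks_per_gpu   # effective time per WU
    return effective < baseline_minutes

# Hypothetical example: 30 min per WU at 1x, 50 min per WU when running 2x
print(net_gain(30.0, 50.0, 2))   # True -- 25 min effective per WU, a net gain
```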

I once had an RX 580. I don't think I ran more than 2-3 GPU tasks at a time. I remember a Radeon VII not being stable at 7 tasks at a time.

Hth,

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

Tom M
Joined: 2 Feb 06
Posts: 6431
Credit: 9560345961
RAC: 10439482

Thank you, VulcanCat.

Please do not be so hard on yourself about offering information.

The reason I was curious is that I have a couple of Ryzen 3700X CPUs, and I was wondering what I would need to pair with them to match the 5700G in GPU production.

You and Ian&Steve C. were very helpful in satisfying my curiosity.

It's got me wondering why an equivalent count of later-generation high-end AMD GPUs is not topping the Radeon VIIs in our top 50 list.

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!

Skip Da Shu
Joined: 18 Jan 05
Posts: 151
Credit: 1039863000
RAC: 763043

Tom M wrote:

Skip da shu,

I can't answer your direct question because I don't know.

I can say that very few GPUs gain much from running 7 tasks at once. Many GPU tasks can run 2-4 at the same time and gain total production. BRP7 was a wash for me between 1 and 2 tasks at a time. GRP#1 has been a production gain at 2 GPU tasks at a time.

My testing usually starts at one task per GPU. Then I bump it up by one and ask whether the time to process is less than the baseline (multiply the baseline by the number of tasks you are running, or divide the current processing time by the number of tasks per GPU you are running).

As long as the processing "time" is less than the baseline "time", you have a net gain.

I once had an RX 580. I don't think I ran more than 2-3 GPU tasks at a time. I remember a Radeon VII not being stable at 7 tasks at a time.

Hth,

Tom M

Tom,

The O3MDF times are bearing this out, as best as I can capture the prior data... sometimes I can't tell if I was running 2 or 3 tasks at the time. Date/time the WU was sent is my primary sort key, followed by the date/time it was returned. If the stars all align, a 'same set' shares a sent time and all of its tasks come back at about the same time.
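
For what it's worth, the grouping I'm doing by hand could be scripted. Here's a rough sketch, assuming the sheet were exported to a CSV with hypothetical 'sent' and 'returned' columns:

```
import csv
from collections import defaultdict

# Group tasks by identical "sent" timestamp; the size of each group is a rough
# guess at how many tasks were running concurrently (the same inference as above).
groups = defaultdict(list)
with open("rx580_tasks.csv", newline="") as f:   # hypothetical export of the .ods sheet
    for row in csv.DictReader(f):
        groups[row["sent"]].append(row["returned"])

for sent, returned in sorted(groups.items()):
    print(f"sent {sent}: {len(returned)} task(s), returned {sorted(returned)}")
```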

The recent 4x and 5x runs I can spot better, and I can pretty confidently say 0.2 (5x) is slower than 0.25 (4x). Probably no big surprise to anyone. Not by much, though.

What does suggest something funky in my data capture to the sheet is the 0.33 (3x) setting, as it shows quite a bit slower than 5x or 4x... BUT it also has the fewest samples I was able to identify. As in 1 set of 3. I just set it back to 3x (0.33) to try to get a couple of better samples.

Probably another 'no surprise' is that 2x (0.5) seems to be producing the lowest averages. This is the oldest data and probably deserves another run just to confirm the times after I get some good 3x (0.33) times.

Skip

PS:  So much for tuning by GPU temp.

 

GWGeorge007
Joined: 8 Jan 18
Posts: 3060
Credit: 4961134353
RAC: 1389402

Skip,

Let me explain a little better (sorry Tom) how you should go about 'testing' your GPUs for Einstein FGRPB1G tasks, assuming this is what you are trying to do.

Run your GPUs at 1x task per GPU and 1x CPU. Monitor the times for more than 10 tasks; maybe 25 would be better.

Find the average per task, and this is your baseline 'time'.

Then switch your GPUs to 2x tasks per GPU (setting for 0.5/GPU) and 1x CPU, and run it for the same amount of time as your 'baseline' was run.

If your average task time at 2x is less than your baseline time x 2, then your GPU is running faster on 2x than on 1x.

Follow suit with 3x, 4x, and 5x on the GPUs, and compare them to the original baseline time, multiplying your baseline by 3, 4, or 5.

If your GPUs' times at 2x, 3x, 4x, or 5x end up being less than the corresponding multiple of the baseline, then keep going. But if your GPUs are getting a higher value than the appropriate baseline, you can stop there and revert to the multiplier that has a lower value than baseline. This is your best bet for the lowest time per task on your GPUs.

Remember, always take your measured times and divide by the number of GPU tasks you are running per GPU to get your actual 'time' for a single task.
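
As a sketch with made-up numbers, the whole comparison boils down to something like this:

```
# Hypothetical average task times (minutes) measured at each multiplicity.
measured_avg_minutes = {1: 30.0, 2: 50.0, 3: 72.0, 4: 100.0, 5: 135.0}

baseline = measured_avg_minutes[1]
best = (1, baseline)
for n, avg in sorted(measured_avg_minutes.items()):
    effective = avg / n                          # time per WU when running n at once
    beats = "beats" if effective < baseline else "does not beat"
    print(f"{n}x: {avg:5.1f} min/task -> {effective:5.1f} min per WU ({beats} the 1x baseline)")
    if effective < best[1]:
        best = (n, effective)

print(f"Best setting: {best[0]}x at {best[1]:.1f} min effective per WU")
```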

Yes, I know this explanation was longer than Tom's, but after re-reading his post and then reading your response, I felt that I should intercede.

HTH!

George

Proud member of the Old Farts Association

Tom M
Joined: 2 Feb 06
Posts: 6431
Credit: 9560345961
RAC: 10439482

George,

No apology should be needed for a clearer description (your description) of what should be a simple testing process.

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)  I want some more patience. RIGHT NOW!
