Great run times! Thanks for sharing. I think I may go with AMD for crunching from here on out.
Could you please provide some details about the motherboard, CPU and PCIe mode?
Asus Z77 chipset motherboard, i5-3570K CPU at stock speed, PCIe 3.0 mode.
awesome!
That is about 8 ms per "template": a template matches a set of orbital parameters of a binary pulsar system against the de-dispersed radio data from the Arecibo telescope. So your card is checking templates faster than most pulsars spin, so to speak ;-)
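If you want to check the arithmetic yourself, here is a back-of-the-envelope sketch in Python; the run time below is an assumed example, not the exact figure quoted above, and the template count per workunit is explained a bit further down in this thread:

# Rough check of the "~8 ms per template" figure.
# The run time is a hypothetical stand-in, not the exact quoted value.
TEMPLATES_PER_TASK = 8 * 6662          # 8 sub-workunits x 6662 templates = 53,296
run_time_s = 430.0                     # assumed single-task run time, in seconds
print(1000.0 * run_time_s / TEMPLATES_PER_TASK)   # -> ~8.07 ms per template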
Cheers
HB
what is "template"?
)
what is "template"?
I don't want to hijack this thread, but since you asked, here's a short explanation. (For a deeper discussion we should move to a separate thread.)
If you look into the projects/einstein.phys.uwm.edu subdirectory of the BOINC working directory (where BOINC downloads all the files necessary to crunch the jobs), you will notice a file "stochastic_full.bank", which is about 6660 lines long.
This file is a list of "orbital parameters" of a binary pulsar system (a system of two gravitationally bound objects, of which at least one is a pulsar).
Think of it like this: the file contains 6660 or so different "types" of binary system, because such systems can differ a lot, e.g. in
* the orbital period (the time it takes the pulsar to complete one orbit, NOT its spin period),
* the diameter of the orbit.
The way the BRP4 app works is this: it tries each of the 6660 or so templates one after another; for each one it tries to detect the signal that would be generated by a binary system with exactly those template parameters. If there is a real pulsar with parameters close to one of the templates, we will find it. The templates were chosen so that, together, they cover a large share of the parameter space you would expect for pulsars out there in space.
Each workunit is made up of 8 independent sub-workunits, each of which tests the 6662 templates, so each workunit tries 8 × 6662 = 53,296 templates.
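If it helps, here is a minimal Python sketch of such a template loop (just an illustration of the idea, not the actual BRP4 code; the bank-file format and the detection_statistic helper are assumptions):

def load_bank(path="stochastic_full.bank"):
    """Read one template per line: whitespace-separated orbital parameters (assumed format)."""
    templates = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields:
                templates.append([float(x) for x in fields])
    return templates

def detection_statistic(data, params):
    """Hypothetical stand-in for the real matched-filter statistic."""
    return 0.0

def search(data, templates):
    """Try each template one after another; keep the best-matching one."""
    best_params, best_stat = None, float("-inf")
    for params in templates:
        stat = detection_statistic(data, params)
        if stat > best_stat:
            best_params, best_stat = params, stat
    return best_params, best_stat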
Cheers
HB
GTX 650: 1x 2630 sec, 2x 4340 sec
GTX 560 Ti: 1x (oops, forgot to run it), 2x 2330 sec
Computers they reside in:
GTX 650: A8-3850, Gigabyte GA-A75M-D2H, 1x 4 GB DDR3-1333, PCIe 2.0
Running 1 task, the GPU is showing 47 °C; it's sitting in a closet.
GTX 560 Ti: Phenom II X4 955, Gigabyte GA-880GM-UD2H, 2x 2 GB DDR3-1333, PCIe 2.0
Running 2 tasks, the GPU is showing 65 °C; it's sitting in a server room.
2330 sec for 2 WUs on a 560 Ti is not the best for that card; my stock 560 Ti runs 2 in 2040 sec. You may need to optimize something in your system.
Had my first run of BRP 1.31 tasks and don't see any difference in run time from 1.28. I use an ATI 7950 and run 3x at ~1860 seconds, on a motherboard with PCIe 2.1 slots only.
This is the long-unmaintained table from user Petrion, plus some added (fast) values from the last posts. I deleted some old entries (very outdated, slow values for cards that already have multiple much faster entries from a previous version) and re-sorted it a bit. Run times are in seconds:
AMD/ATI: (colored entries are new 1.28 app values, as defined by Petrion)
HD 7970 ------> 1x ~650, 2x ~950, 4x ~1,800, 5x ~2,200
HD 7950 ------> 3x ~1,860
HD 7950 ------> 1x 1,145
HD 7950 ------> 2x 3,400, 3x 4,500
HD 7870
HD 7850
HD 7770 ------> 2x ~8,500
HD 7750 ------> 2x ~11,000
HD 5870 ------> 2x ~3,105
HD 5850 ------> 1x 1,800, 2x 6,085
HD 5830 ------> 1x 2,916
HD 6970
HD 6950 (1536) -> 2x 6,700
HD 6950 ------> 2x 3,500
HD 6950
HD 6990 (1 GPU)
HD 6870
HD 5970 (1 GPU)
HD 6850 ------> 1x ~2,300
HD 6790
HD 5770 ------> 1x 7,750+
HD 6770
HD 5670 ------> 1x 11,100
HD 5670 ------> 1x 11,480 (Win XP32)
HD 5570 ------> 1x ~15,000
HD 5450 ------> 1x ~36,500!
AMD A8 3870 --> 1x 6,489
FirePro V4800 -> 1x 10,620
NVIDIA: (colored entries are new 1.28 app values, as defined by Petrion)
GTX 690
GTX 590
GTX 680 ------> 1x ~750
GTX 680 ------> 3x 3,100 (Win7)
GTX 680 ------> 2x 1,945 (Linux)
GTX 580 ------> 1x 834, 3x ~2,500
GTX 580 ------> 3x 3,350 (Windows)
GTX 580 ------> 3x 3,050 (Linux)
GTX 670 ------> 3x ~4,300 (Vista)
GTX 660 Ti ---> 1x ~1,180, 2x ~2,170
GTX 660 Ti ---> 1x ~1,700, 2x ~2,900, 3x ~4,500, 4x ~6,030, 5x ~8,660, 6x ~12,760
GTX 650 ------> 1x 2,630, 2x 4,340
GTX 570
GTX 670
GTX 480 ------> 2x ~2,200
GTX 470 ------> 2x ~3,000, 3x 3,800
GTX 560 [448] -> 1x 1,550, 2x 2,500
GTX 560 Ti ---> 2x 2,030
GTX 560 Ti ---> 1x ~1,100, 2x 2,654, 6x 6,400
GTX 560 Ti ---> 1x ~1,100, 2x 2,000, 4x 4,100, 5x 5,200
GTX 560 Ti ---> 1x 1,583 (OC'd)
GTX 560 ------> 2x 2,300
GTX 560 ------> 1x 3,300, 2x 4,800
GTX 460 ------> 1x 3,000, 2x 4,800
GTX 465
GTX 460 SE
GTX 550 Ti ---> 1x 1,793, 2x 2,961
GTX 550 Ti ---> 1x 3,065, 2x 5,600
GT 640 -------> 1x ~5,700
GT 440
GTS 450 ------> 1x ~2,200, 2x 4,200
GF 610M ------> 1x ~7,800
GT 430 -------> 2x 9,100
GT 430 -------> 1x 4,860
GT 520 -------> 1x ~9,600 (Linux)
Older cards (not OpenCL 1.1 capable), but still an interesting comparison:
GTX 295 ------> 1x 2,000 (Linux)
GTX 285 ------> 2x 3,000
GTX 260 ------> 1x 2,200
8800 GT (G92) -> 1x 2,940 (Linux)
8800 GT (G92) -> 1x 3,600 (Linux)
8800 GTS (G80) -> 1x 4,020 (Linux)
GTS 250 ------> 2x ~5,484
GT 240 -------> 1x 4,035 (OC'd)
GT 240 -------> 1x ~4,500
GT 240 -------> 1x ~5,400, 2x 10,500
GT 220 -------> 2x 19,400
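To read the table: "Nx T" means N tasks running concurrently on one card, with each task finishing in about T seconds, so the effective time per task is T/N. A tiny Python helper showing my reading of the notation (for illustration only):

# "3x ~1,860" on an HD 7950: three concurrent tasks, ~1,860 s each.
def effective_seconds_per_task(n_concurrent, run_time_s):
    return run_time_s / n_concurrent

print(effective_seconds_per_task(3, 1860))   # 620.0 -> effectively 620 s per task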
DSKAG Austria Research Team: http://www.research.dskag.at