RE: How do you get BOINC to ...
I think depending on the details of configuration, BOINC version and such, sometimes you don't need to do anything, and sometimes you need to activate an option in cc_config.xml, with a line in the options section of the file reading:
[pre] 1[/pre]
Where the "1" means "do it".
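For what it's worth, a cc_config.xml with such an option enabled has the overall shape below. This is only a sketch: the forum software swallowed the actual tag name, and <use_all_gpus> is my guess at what it was, given the GPU-count discussion that follows.

```xml
<cc_config>
  <options>
    <!-- "1" means "do it": let the client use every GPU it detects.
         <use_all_gpus> is an assumed option name - check the BOINC
         client configuration documentation for your version. -->
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>
```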
Quote:
Other interesting thought, you have a two 750 and a two 750ti setup and they are showing about the same RAC per day.
I don't, actually; see my previous post regarding my actual GPU population vs. what shows on the web page.
Quote:
750 has 1 gig of ram and 512 cores the 750ti has 2 gigs of ram and 640 cores??? What gives with that? The ti should be significantly faster?
Nevertheless, your question has merit. I have long believed that BOINC users' nearly uniform, strong preference for the 750Ti over the base-model 750 is ill-founded, at least for people with a narrow interest in the Einstein Parkes PMPS and a relatively short probable use time-frame.
The maximum reachable overclock seems only weakly correlated with the as-sold nominal clock frequency; by both theory and observation, the card gets zero performance advantage on Einstein Parkes PMPS from RAM beyond 1 GB, and not much from the extra "cores".
So if you can get a significantly lower price on a 750 1GB, I'd take it. Getting the Ti 2GB won't hurt you much in any way except purchase cost (and somewhat more power consumption under equal conditions); it gives you protection against a possibly higher RAM need in some other application of interest to you, and possibly more performance on some current or future application that is more successful at exploiting the extra floating-point hardware.
I myself saw package-damaged or similar returns at Amazon Warehouse Deals a few weeks before Christmas on EVGA SC 750Ti cards for about $95. Those went away; right now they have returned, but at $129 (I think they were $122 this morning, so prices change). As you have found a better current price, you clearly have the idea of shopping around. I have not recently seen a fire-sale price on a bare-bones 750, so were I buying in the next couple of weeks, I'd probably buy a Ti "just in case". But if Mr. Market offered me a 750 at a substantial saving, I'd not hesitate to take it.
By the way, overclocking a 750 or 750Ti is simpler than overclocking a 970, which is a bit difficult and has a thread dedicated to the topic here. Just moving the sliders in, for example, MSI Afterburner works fine. Some gamers post that the memory clock is not worth overclocking. That is bad advice here: the Parkes PMPS speedup I observed from memory overclocking exceeded that from core overclocking on the three 750Ti and one 750 cards I've tested. The specific limit clocks were different on each card, even though three of them were the exact same model number.
To repeat: experimentation is key. Results from others can be a good guide to what things might be fruitful.
Almost 44k RAC yesterday, Woo Hoo!
I too saw the $95 750Ti sale during Black Friday week. I saw hardware prices I could not believe. You had to look, though. They all went back up the next week.
Have A Blessed Day
Mike Brown
There are some very low-priced Skylake CPUs sold under the Pentium brand these days. I recently reviewed a list of the capabilities the manufacturer disables on these chips compared to the i3/i5/i7 chips sold from the same die, and reached the tentative conclusion that for my purposes they would probably be entirely adequate. "Pentium", "Core", i3, i5, i7, and even Xeon are just brands, and as with, say, Ford/Mercury/Lincoln, the names are not a durably reliable indicator of the detailed characteristics of the products sold under them in any given year. Anyone telling you what a Skylake Pentium is like based on their memory of ancient products sold under that name is talking through their hat.
That said, something posted at SETI seems like a significant warning against jumping straight into the Skylake pool right now:
http://hexus.net/tech/news/cpu/89636-intel-skylake-bug-seizes-pcs-running-complex-workloads/
RE: significant warning ...
Indeed Richard, I just read a thread on that subject in the last 20 minutes myself.
Intel representatives posting on the thread that I read appeared to acknowledge that an issue and a fix had been identified and that the fix was in the propagation pipeline by means of a BIOS update.
For some years now Intel microprocessors have carried a provision that allows microcode patches to be uploaded to the CPU on every boot of the machine. This is why it is absolutely crucial to make sure that the BIOS version you run on a given motherboard is specifically stated to support the specific CPU revision you plug into it.
I think people get careless on this point because so many of the bugs are rather uncommon in their manifestation, so plenty of people "get away" with using BIOS versions not claimed to support their specific CPU stepping.
This episode suggests why that is a really bad idea.
It also suggests that Intel PR has not lost the bad old habit of trying to minimize the importance of problems by asserting that the conditions leading to the problem are somehow rare and only of theoretical importance. I have very, very bad memories of that behavior during the FDIV problem.
Calling Prime95 a "complex workload" beggars belief.
Good Morning Gary,
Looked at my system returns but I don't really know what I am looking at. System did almost 44k RAC yesterday, highest yet.
Do you have any recommendations, or should I just let it run and see how it does? Again, I know the numbers will be greatly affected by how many hours a day my system is crunching.
Thanks again for all your help
Mike Brown
Mike,
Regarding monitoring of progress, you may find that you like to look at the point of view provided by one or another of the statistics sites.
Personally, I like to look at BOINCStats.
For your userID used here, a link is:
stats for bzbro_000
While it is normal to get two lines on that page, you have four which, unless someone else is also using bzbro_000, suggests you have not unified your account reporting across different projects. If you'd like to fix that, I suggest you review this thread on that problem.
The summary page for your Einstein account as shown on BOINCStats gives some useful summary statistics, and also is a point of departure to view graphs or text entries giving, for example, the new credit logged in each of recent days. In your case, clicking on the "charts" tab gives access to several graphs showing clearly how much your Einstein output has increased since the turn of the year.
RE: Looked at my system ...
Quote:
Looked at my system returns but I don't really know what I am looking at.
I presume you can find the 'view computers' link on your account page and then click on the 'tasks' link for your computer to see all the tasks. You can fine-tune the list by clicking on the Parkes PMPS XT link to see just those. You made the last change at the start of Jan 07 UTC, so you have now accumulated quite a list of completed tasks under the latest settings. You should see some variation - the shortest elapsed time I saw was about 14K secs and the longest was close to 18K secs. Pick the most recent 40-50 tasks (returned since the middle of Jan 07 UTC) and work out the average elapsed time - add up all the values and divide by the number of results you used.
(EDIT: Use as many results as you can and don't 'cherry pick' any 'good' ones :-). )
Once you have looked at the BRP6 tasks, you can do the same for the Gamma ray pulsar tasks (FGRP4) and work out an average for them as well, over the same period. When you have done that, report back with the two values you get. We will compare the latest two averages with what is in the table I posted earlier. We want to see if there is any advantage for the current settings over those in use prior to Jan 07.
These averages won't be affected by how many hours per day the machine is crunching but will be more accurate the larger the sample size is. This is why we have been accumulating the data for quite a few days - simply to get a large enough sample size.
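Just to make the averaging step concrete, it is a plain arithmetic mean; the elapsed times below are made-up illustrations, not anyone's actual results:

```python
# Average elapsed time over a batch of completed tasks.
# These sample values (seconds) are illustrative only.
elapsed_secs = [14100, 15300, 16250, 17900, 15800]

average = sum(elapsed_secs) / len(elapsed_secs)
print(round(average))  # arithmetic mean of the sample -> 15870
```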
Cheers,
Gary.
Hey Gary,
OK here is your statistic I think: CPU Tasks 22715/24
Here is what I came up with since 7 Jan CPU Tasks 20316/33
GPU Tasks 15743/53
X1 times X2 times X3 times
======== ======== ========
CPU Tasks 22715/24 23135/ 4 23772/20
GPU Tasks 7550/30 5394/ 8 5299/18
Is there an easy way to export the stats table into excel? I had to cut and paste blocks of it, a pain!
Thanks Again
Mike Brown
RE: Is there an easy way to ...
Quote:
Is there an easy way to export the stats table into excel? I had to cut and paste blocks of it, a pain!
If you choose to run the third-party software BOINCTasks continuously (which I do, primarily as a means to observe and control several hosts from one place), then the content of its history tab is a convenient source for performance-calculation work.
How much history is available is governed by a setting found at Extra|BoincTasks settings|History|Remove history after ...
I leave mine set at 14 days currently.
The columns displayed in that history are configurable by ticking boxes on that same settings page.
You can simply copy all to Excel, or sort by some column(s), select a subset, and copy that to Excel.
You may not want to bother installing BoincTasks for just this purpose, but I like using it as a monitoring application, and always do my performance adjustment calculations starting by exporting the BT history to Excel.
RE: OK here is your ...
For ease of reading, I've used [ pre ] tags (click on the 'Use BBCode tags ...' link in the message composition window to see examples of all available tags) to line up the above data in a new table which is reproduced below. I've also added your new values. In doing so, I've converted the GPU crunch time (15743 for 3 tasks) to a 'per task' value of 5248 seconds. We were trying out running 1 less CPU task to see the effect on both GPU and CPU elapsed times - hence the extra title of X3 + 3 CPU on a new column to show those results.
The previous results had a CPU task on all 4 cores. I've also taken the opportunity to add a "RAC" column after each "Times" column. This is just a theoretical calculation based on the listed averages. In turn, these averages are dependent on the sample size, most of which are really too small. However, they are probably good enough for you to decide how best to set up your machine. The formula I used for calculating theoretical RAC is
[pre]
GPU tasks RAC   = 86400 / GPU av time X 4400
CPU tasks RAC   = 86400 / CPU av time X 693 X # of 'loaded' CPU cores
Theoretical RAC = GPU tasks RAC + CPU tasks RAC
[/pre]
Because you run BOINC about 75% of the time you could expect to achieve about 75% of these values.
[pre]
X1 times Th. RAC X2 times Th. RAC X3 times Th. RAC X3 + 3 CPU Th. RAC
======== ======= ======== ======= ======== ======= ========== =======
CPU Tasks 22715/24 10544 23135/ 4 10352 23772/20 10075 20316/33 8842
GPU Tasks 7550/30 50352 5394/ 8 70478 5299/18 71742 5248/53 72439
_____ _____ _____ _____
Daily credit 60896 80830 81817 81281
===== ===== ===== =====
[/pre]
For the latest results (x3 + 3), you can see quite a drop in the CPU crunch time because CPU tasks aren't being delayed by losing the cycles needed for GPU support. The 'free core' we have created is being used for a lot of that support now. There is also a small improvement in the GPU crunch time - the GPU is getting the support in a slightly more timely fashion. The trends are expected but I was hoping the improvements might be better.
When you look at what the table is telling you, the main message is to run concurrent GPU tasks rather than singly, but it really makes little difference which of the three tested options you choose (X2 or X3 or X3 with a 'free core'). For simplicity reasons, your best option would be NOT to use app_config.xml because you are not really gaining anything over just using the GPU utilization factor to crunch two GPU tasks and 4 CPU tasks simultaneously.
It's highly possible that the X2 times are not accurate because not enough data was sampled. It might be that the true averages are longer than the ones in the table. If you really wanted to be sure, you could remove app_config.xml and go back to using a GPU utilization factor of 0.5 and let it run for a week to get enough data to find out. Whether or not you want to go to this trouble is entirely up to you :-).
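For anyone who wants to check the arithmetic behind the RAC columns, here is a small sketch of the calculation. The per-task credit values (4400 per BRP6/Parkes GPU task, 693 per FGRP4 CPU task) are the ones used for the table; the "av time" arguments are the per-task average elapsed times in seconds.

```python
# Theoretical RAC from average elapsed times, as used for the table above.
# Assumes 4400 credits per BRP6 (Parkes PMPS) GPU task and 693 credits
# per FGRP4 CPU task.
SECONDS_PER_DAY = 86400

def gpu_rac(gpu_av_time):
    # tasks per day times credit per task
    return SECONDS_PER_DAY / gpu_av_time * 4400

def cpu_rac(cpu_av_time, loaded_cores):
    # per-core daily throughput times credit, times number of loaded cores
    return SECONDS_PER_DAY / cpu_av_time * 693 * loaded_cores

# X1 column: CPU average 22715 s on 4 loaded cores, GPU average 7550 s
print(round(cpu_rac(22715, 4)))                  # -> 10544
print(round(gpu_rac(7550)))                      # -> 50352
print(round(cpu_rac(22715, 4) + gpu_rac(7550)))  # daily credit -> 60896
```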
Quote:
Is there an easy way to export the stats table into excel? I had to cut and paste blocks of it, a pain!
If you take a look at this particular message (ID=139733) and the two that immediately follow it, you can learn about using what BOINC automatically keeps for you in the way of results processed by your host. The data on the website is transient; the record your client keeps is not.
BOINC appends data on every single result you process to a text file in the BOINC Data directory called job_log_einstein.phys.uwm.edu.txt. The message I linked to discusses this file and shows examples of the content (in the latter half of the message). You can import the job_log file (as space-separated values) into a spreadsheet of your choice and process it however you want. I use LibreOffice and have never used Excel, but I'm sure either will be easier than trying to hand-process results showing on the website. You could select just the columns you want for just the rows that belong to the particular task type you are interested in, and find the correct starting result by using a Unix-time-to-human-time converter, perhaps something like this one.
Cheers,
Gary.
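As a footnote to the job_log suggestion above: if you would rather script the averaging than use a spreadsheet, something like the sketch below works. It assumes the usual BOINC job log line layout (unix_time, then ue/ct/fe/nm/et/es tag-value pairs, where "et" is elapsed seconds and "nm" the task name); the file path and the "PM0064" task-name prefix in the usage comment are examples you would adjust for your own host.

```python
# Average the elapsed ("et") times recorded in a BOINC job log for one
# task type, optionally restricted to results after a given Unix time.
# Assumed line layout:
#   <unix_time> ue <est> ct <cpu> fe <flops> nm <name> et <elapsed> es <status>
def average_elapsed(path, name_prefix, since_unix_time=0):
    times = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            rec = dict(zip(fields[1::2], fields[2::2]))  # pair tag/value tokens
            if int(fields[0]) >= since_unix_time and rec.get("nm", "").startswith(name_prefix):
                times.append(float(rec["et"]))
    return sum(times) / len(times) if times else None

# e.g. average_elapsed("job_log_einstein.phys.uwm.edu.txt", "PM0064")
```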