[S5R3/R4] How to check Performance when Testing new Apps

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 315860645
RAC: 333964

Thanks for your kind words

Message 77754 in response to message 77753

Thanks for your kind words and neat suggestions Gary .... :-)
I'll suck up all of your points into a plan for V4!

I figured some utility like this would be of notable use for those of us who, like yourself, quite enjoy data drilling and prognostication in this project. I can also imagine it enhancing the accuracy of user feedback to the developers regarding performance changes with app version updates - which we know will happen. It is certainly a tool which can be Swiss Army Knife'd! :-)

I am dead serious about feedback on any aspect whatsoever, including GUI, look & feel, colors, layout etc. Please ( anyone ) don't be too shy to critique, as what I see as I develop at home may/can/will render quite differently on other browser/hardware combos worldwide. For those who are very shy, PM me.

It's a pleasant learning curve for me too! For instance: last night I discovered that IE versions disagree amongst themselves ( and with the W3C ) on basic stuff like how to calculate HTML intra-table widths. I also discovered that MS's implementation of Javascript's weak variable typing let a for loop wind up with a float as its iteration limit - i.e. when is a '2.0' not a '2'? Thank heavens I wasn't doing this during the 90's browser wars! :-)
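For the curious, a contrived sketch of the kind of trap I mean ( not the actual RR code ):

// Form fields hand you strings, and JS only coerces them to
// numbers in some contexts - hence '2.0' is not always '2'.
var limit = "2.0";                  // e.g. plucked from a text input
for (var i = 0; i < limit; i++) {
    // runs for i = 0, 1 : the '<' operator coerces "2.0" to the number 2
}
if (limit == "2") {
    // never true : "2.0" == "2" compares string to string!
}
var n = parseFloat(limit);          // the cure : force a number,
                                    // so (n == 2) is now true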

I want to make a gadget that is conveniently and frequently used. I am going to keep it, at core, a self-contained, purely client-side tool - (X)HTML/CSS/Javascript - so that download/setup is trivial and one can hit the ground running.

Mind you: just this morning I was musing whether Javascript could interrogate a user's account at E@H, request and parse data ( via HTTP requests ) and thus automate in 'realtime' a lot of the sorts of tasks you are alluding to. As I'm on holidays this week I just might kick back with the JS manual and investigate. :-)
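If I were to sketch it ( untested, and the browser's same-origin policy may well veto cross-site requests from a local page, so no promises ):

// Sketch only: fetch a host's results page and hand the raw HTML
// to a parser. Function name is mine, purely illustrative.
function fetchResults(hostid, callback) {
    var req = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject("Microsoft.XMLHTTP");
    req.onreadystatechange = function () {
        if (req.readyState == 4 && req.status == 200) {
            callback(req.responseText);   // raw HTML, ripe for parsing
        }
    };
    req.open("GET", "http://einsteinathome.org/host/" + hostid, true);
    req.send(null);
}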

Oh, and please do send me your data - PM if you like - as I'd like to watch how the algorithm behaves on varied real data sets. Specifically, the value of a cutoff parameter I chose for whittling down estimates by throwing out bad cases was a somewhat arbitrary guess that I reckon needs more work.

Cheers, Mike.

( edit ) I've just re-read the '37119/34235' actual vs. '37107/34366' modelled part. Wow! I'd certainly be suspicious if they were an exact match, but I originally only hoped for about 2 - 3% 'bulls-eye'. Also nice to see it's resistant to at least a little bit of 'frequency sliding'. :-)

( edit ) Re: PRINT. JS has plenty of capability to create, manipulate and write to new instances of browser windows ( aka 'pop-ups' ), so that will be a profitable method of emitting all manner of reports etc. One can invoke a window.print() method in JS that is identical to whatever one normally does to encourage the browser to print - and this can be 'buttonised'. So what's printed will reflect how the page was made via JS. In other words, do-able! :-)
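Something like this, roughly ( function name is mine, purely illustrative ):

// Sketch of the 'buttonised' print idea: build a report in a pop-up,
// then let window.print() do whatever the browser normally does.
function printReport(reportHtml) {
    var w = window.open("", "rr_report", "width=600,height=400");
    w.document.open();
    w.document.write("<html><head><title>RR Report</title></head><body>");
    w.document.write(reportHtml);   // report assembled elsewhere by the JS
    w.document.write("</body></html>");
    w.document.close();
    w.focus();
    w.print();   // same as the user's usual File > Print
}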

Re: FILE. As for local file creation and access, this is a generic 'no' for JS. There are ways of subverting that - say ActiveX on IE - but then we start to Balkanise the code, slide to the dark side, and/or limit distribution prospects. Similarly for programmatically invoking a 'Save Page As' dialog entry.

Alas, a lot of the 'we can do this' and 'we can do that' talk with JS really is 'we can do X&foffle with JS version ZZ.plural_alpha and IE version Q.barf' .... :-)

( edit ) re: FILE. A possible JS route using cookies. Not local hard drive access per se, but it can certainly persist state between browser sessions. This requires no add-ons at all, as it is simply a conversation between JS and the browser. A cookie value is effectively a plain text string, and thus is straightforward to read, parse, edit in place and write back. One can have up to 4KB per cookie, so the load is low, and multiple cookies per 'site'. In this case one might write a 'fake' path value into each cookie's URL field and, by 'incrementing' the path, use it to index/tag many cookies? Up to 20 per site and up to 300 per computer. ( These are quoted minimum specs that all browsers ought to allow, thus setting a maximum to be assumed on the script side ). Possibilities, hmmm. Thinking out loud again .... but I always did that here at E@H from the get-go, eh Gary? :-)
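A bare-bones sketch of those cookie helpers ( names are mine and untested, so treat it as a thought experiment ):

// Sketch only: one plain-text string per cookie, read back, parsed,
// edited and re-written. Helper names are mine, purely illustrative.
function writeState(name, value, days) {
    var d = new Date();
    d.setTime(d.getTime() + days * 24 * 60 * 60 * 1000);
    document.cookie = name + "=" + escape(value) +
                      "; expires=" + d.toGMTString() + "; path=/";
}
function readState(name) {
    var parts = document.cookie.split("; ");
    for (var i = 0; i < parts.length; i++) {
        var eq = parts[i].indexOf("=");
        if (parts[i].substring(0, eq) == name) {
            return unescape(parts[i].substring(eq + 1));
        }
    }
    return null;   // no cookie of that name
}
// e.g. writeState("rr_state", "freq=716.85;seq=56", 365);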

I have made this letter longer than usual because I lack the time to make it shorter ... ( Blaise Pascal )

... and my other CPU is a Ryzen 5950X :-)

Klimax
Joined: 27 Apr 07
Posts: 87
Credit: 1370205
RAC: 0

RE: RE: Thanks a lot.

Message 77755 in response to message 77752

Quote:
Quote:
Thanks a lot. Works pretty well now. Good job!

Terrific! :-)
I've also noted the layout looks a bit crappy in places on IE too ....
Next version:
- I'll work around the fact that MS doesn't agree with the rest of the world on what the definition of the width of a box should be ..... sigh

Cheers, Mike.


About nonstandard: this should be true only for IE6 and lower!
IE7 in standards mode ( doctype present ) is very good with CSS 2.1, with many quirks fixed.
BTW, that is why some sites have problems: a doctype has been inserted by the authoring apps while the webmasters relied on quirks, so those sites are then rendered incorrectly in IE7.

And your site looks good in IE7 from the start, so only IE6 needs extra attention. ( Not for long - autoupdate is supposedly on the way )...

Mike Hewson
Moderator
Joined: 1 Dec 05
Posts: 6588
Credit: 315860645
RAC: 333964

RE: About nonstandard: This

Message 77756 in response to message 77755

Quote:

About nonstandard: this should be true only for IE6 and lower!
IE7 in standards mode ( doctype present ) is very good with CSS 2.1, with many quirks fixed.
BTW, that is why some sites have problems: a doctype has been inserted by the authoring apps while the webmasters relied on quirks, so those sites are then rendered incorrectly in IE7.

And your site looks good in IE7 from the start, so only IE6 needs extra attention. ( Not for long - autoupdate is supposedly on the way )...


Thanks! ... :-)
CSS 2+ brings layout control into the style sheet, so it's welcome news that the implementation support is falling into place. I could toss tables for layout purposes completely - CSS is much easier, more flexible and more intuitive in my opinion.

Cheers, Mike.

I have made this letter longer than usual because I lack the time to make it shorter ... ( Blaise Pascal )

... and my other CPU is a Ryzen 5950X :-)

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117524916886
RAC: 35381441

RE: Oh, and please do send

Message 77757 in response to message 77754

Quote:

Oh, and please do send me your data - PM if you like - as I'd like to watch how the algorithm behaves on varied real data sets. Specifically, the value of a cutoff parameter I chose for whittling down estimates by throwing out bad cases was a somewhat arbitrary guess that I reckon needs more work.

I've included the data as a further EDIT in the original message. I'll include a similar data block in all subsequent reports of Ready Reckoner use.

Quote:

( edit ) re: FILE. Possible JS route using cookies....

Sounds good - certainly better than nothing :).

Quote:

Possibilities, hmmm. Thinking out loud again .... but I always did that here at E@H from the get-go, eh Gary? :-)

That sounds about right :). It's obviously a good way to go in terms of fast problem resolution. With it all hanging out there, the threat of embarrassment tends to sharpen the focus of attention on the nitty-gritty details that might otherwise be overlooked :).

Cheers,
Gary.

archae86
Joined: 6 Dec 05
Posts: 3157
Credit: 7220384931
RAC: 974916

Here is another use that is

Here is another use that is probably obvious to others, but just occurred to me: estimating relative Einstein compute speed of different hosts (we've mostly been talking about using it to assess relative performance of alternate apps).

With just two tasks, if they are decently well-spaced in a common frequency cycle, Mike's reckoner gives you an expected average compute time. While it is labelled as being applicable to that specific frequency, the evidence I've seen is that any frequency dependence of the a and b parameters of a host is pretty gentle, so comparing hosts with these won't generate much error, especially if the frequencies are moderately close.

So, let's see, akosf has a host 1051461 running, we all believe, home-brewed upgraded code. While it started life as a Q6600, I don't know what frequency he is running currently. But let's see what overall performance advantage he has over my Q6600, mildly overclocked at 3.006 GHz.

akosf host 1051461

freq 716.85
__CPU___ Seq
7,985.41 56
9,936.44 94

With just those five input numbers, readyreckoner3 spits out an estimated average runtime of:

9062 CPU seconds

archae86 host 982234

freq 794.45
__CPU____ Seq
30,442.11 386
23,984.87 331

For my otherwise comparable host, the rr3 average runtime estimate given these entries is:

26541 CPU seconds, or 2.93 times slower.

The accuracy of these estimates will depend on the model accuracy, on the amount of random CPU time variation of the particular host, and on good choice of samples. But in this specific case, where I found high-time (but not right on the peak--to be avoided) and pretty low-time samples from the same frequency for each host, and the host estimate frequencies are not terribly different, I suspect these are very near the truth.

For my own host, this 5-input estimate gives values of a and b of 31150 and .232. My own labors with far more points, graphical review, and assistance from summary statistics for model mean and stdev of error most recently gave 30500 and .21. Boiling these down to the predicted trough time, rr3 says 23923 and my choices say 24095, a difference of 0.7%. That is tighter agreement than my own estimation even aimed for, and it means the estimated average CPU time across a cycle would also be within 1%, I think.
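( Reading the predicted trough as a*(1-b) - my inference from the numbers above, not a statement of rr3's actual internals - the arithmetic checks out:

31150 * (1 - 0.232) = 23923.2 ...... rr3's a and b
30500 * (1 - 0.210) = 24095.0 ...... my hand-fitted a and b
24095 / 23923 = 1.0072 ............. about 0.7% apart )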

Another point of possible interest is that the b estimate for the akosf host is .273, while for my Q6600 it is .232. As the CPU architectures are identical, and the memory characteristics are probably not different enough to matter very much, this says that the proportional variation in execution time is greater with the revised code than with the 4.26 release code--more improvement has been made in the stuff that executes uniformly for all tasks than in the stuff that executes a varying number of times based on grids and skypoints and such.

Bikeman (Heinz-Bernd Eggenstein)
Moderator
Joined: 28 Aug 06
Posts: 3522
Credit: 723836608
RAC: 1174677

Hi all, To help with

Hi all,

To help with gathering data for this kind of analysis under UNIX-like OSes, I dusted off my awk skills a bit and wrote this little script:

# parser.awk - pull [Result_id] [Frequency] [seq.No] [cpu_time] out of
# the HTML of an E@H results page. It relies on the current BOINC page
# layout: a result.php?resultid link starts a record ( and carries the
# WU name ), and the CPU time turns up on the 8th line of the record.
BEGIN { start = 0; delim = " " }

# A link to result.php?resultid=... marks the start of a result row.
/result\.php\?resultid/ {
	ind = 0; start = 1
	match($0, ">[0-9]+")
	if (RLENGTH > 1) {
		wuid = substr($0, RSTART+1, RLENGTH-5)
	} else {
		wuid = "??"
	}
	line = wuid delim
}

# Every line: once a record has started, count lines and grab fields.
// {
	if (start != 0) {
		ind = ind + 1
		if (ind == 1) {
			# WU name, e.g. h1_0716.85_S5R2__56_S5R3a_1 :
			# piece 2 is the frequency, piece 5 the seq. number
			match($0, "h1_[0-9]+\.[0-9]+_S5R2__[0-9]+_S5R3[a-z]_[0-9]+")
			wuname = substr($0, RSTART, RLENGTH)
			split(wuname, ar, "_")
			line = line ar[2] delim ar[5] delim
		}
		if (ind == 8) {
			# CPU time, stripped of thousands separators
			if (match($0, "[0-9,\.]+")) {
				tm = substr($0, RSTART, RLENGTH)
				gsub("\,", "", tm)
				line = line tm
				printf(line "\n")
			}
		}
	}
}

END {}

Copy and paste this into a file, say parser.awk.

To parse the results from a particular results page, e.g. http://einsteinathome.org/host/1025096

simply use this command line

curl -s "http://einsteinathome.org/host/1025096" | awk -f parser.awk > results.txt

or

wget -q -O - "http://einsteinathome.org/host/1081716/tasks&offset=40" | awk -f parser.awk > results.txt

and the contents of the results web page will be saved to the file results.txt in a space-delimited 4 column table:

[Result_id] [Frequency] [seq.No] [cpu_time]
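A line of output might look like this ( the result ID is made up; the frequency and sequence number come straight from a WU name such as h1_0716.85_S5R2__56_S5R3a_1 ):

123456789 0716.85 56 7985.41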

Only completed results will be exported ( though errored-out results currently slip through as well - I'll fix that later ).

The result file can then easily be imported into a spreadsheet or database, or, using "sort" and "uniq", merged with previous results obtained in the same fashion.

Now, this is just a quick-and-dirty hack which relies on the HTML formatting generated by the BOINC web software (which may change in the future), but I guess for a start it's still useful.

Doing this under Windows is, as always, left as an exercise. (Hint: use cygwin :-) ).

P.S.: When importing into a spreadsheet, don't forget to select US-locale formatting for the input data ( decimal period, instead of the comma used in Germany ).

CU
Bikeman

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117524916886
RAC: 35381441

RE: ... While it is

Message 77760 in response to message 77758

Quote:
... While it is labelled as being applicable to that specific frequency, the evidence I've seen is that any frequency dependence of the a and b parameters of a host is pretty gentle, so comparing hosts with these won't generate much error, especially if the frequencies are moderately close.

Funny you should mention that :).

I've started a mini project to collect data from a number of "hardware identical" machines, some of which are running Windows, some Linux, all with different frequency data and all with an app transition, either 4.14 -> 4.27 for Linux or 4.15 -> 4.26 for Windows. The only unfortunate thing at the moment is that the frequencies are now changing around rapidly on each host whereas they seemed a lot more stable at the time I started gathering data. This current dregs cleanup is spoiling the party :).

When I had collected sufficient data I was expecting to be able to show that frequency was largely irrelevant and that instead of plotting crunch time against sequence number we could be looking at it against phase angle. That way we could combine the data from hardware identical machines without having to worry about what particular frequency they were crunching at any point.

Also I've just seen Bikeman's awk script so data gathering should be a lot simpler from now on - thanks Bikeman :). 25 years ago when my brain worked a bit better than now, I started writing shell scripts which used unix tools like sed, awk, grep, etc, but that was a long time ago, about the time the IBM PC XT first came out :). In recent years Windows has made me lazy so I'm quite enjoying the stimulation of transitioning a lot of my fleet over to Linux.

Quote:

So, let's see, akosf has a host 1051461 running, we all believe, home-brewed upgraded code....

for my otherwise comparable host, the rr3 average runtime estimate given these entries is:

26541 CPU seconds, or 2.93 times slower.

WOW!!!

Surely Akos needs a beta tester or two :).

Cheers,
Gary.

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117524916886
RAC: 35381441

RE: To help with gathering

Message 77761 in response to message 77759

Quote:

To help with gathering data for this kind of analysis under UNIX-like OSes, I dusted off my awk skills a bit and wrote this little script....

Thanks very much for this. The script runs fine, but my version of awk gives a couple of warnings, namely that it doesn't like the escape sequences \. and \, inside strings and would internally treat them as the plain unescaped characters. They were only warnings and the output was fine, so I just removed the backslashes, ran the script again and got the correct output with no warnings this time. Too easy!

Since I have a lot of hosts, I decided to write a small shell script to cycle through and retrieve the data for those hostIDs I'm interested in. I also built in the ability to merge new results with old results to maintain a single file with a complete ongoing list of all results and no duplicates. Please note that the script has no error checking or recovery. It's up to the user to make sure the hostID is correct before running it. Also, everything ( results and the awk script ) is assumed to reside in a single working directory. Here is the script:

#!/bin/sh
#
# hostresults.sh
#
# Script to retrieve all online database results for one or many hostIDs and to
# parse the output to give an updateable set of info for each host of interest.
#
while true
do
	echo -n "What hostID do you wish to use (press Enter only to exit)? : "
	read hid
	if [ "X$hid" = "X" ]
	then
		exit 0
	else
		curl -s "http://einstein.phys.uwm.edu/results.php?hostid=$hid" | awk -f parser.awk > results.tmp
	fi
	if [ -f results.$hid ]
	then
		# merge new results with the saved set, dropping duplicates
		mv results.$hid results.sav
		cat results.sav results.tmp | sort | uniq > results.$hid
		rm -f results.sav results.tmp
	else
		mv results.tmp results.$hid
	fi
done

Note that if you have saved data for a host nnnnn where some or all of that data has already expired from the database, just edit that data into the correct format and save it under the filename results.nnnnn. Then, when you run the above script, your previously saved data plus any new records will all end up in a single results file which will continue to grow each time you harvest new results.

Also please note the small typo in Bikeman's curl command - show_host_detail.php should be results.php

Quote:

Only completed results will be exported (also errored out results, will fix that later).

Yes, very necessary to chop out any error results.

Once again, many thanks for the very useful script.

Cheers,
Gary.

Klimax
Joined: 27 Apr 07
Posts: 87
Credit: 1370205
RAC: 0

Those scripts are nice. I am

Those scripts are nice. I am thinking about writing a simple app, not only for automated retrieval of results and WUs, but to compute everything possible from the "runtime function". So I could try to convert the scripts to C++ and build the app with them inside.
Bikeman and Gary Roberts, may I port and use your scripts?
:-)

Not sure how long it will take ( university student, and still two tests to go :-( )

P.S.: The latter one is a nice template for this part of the future app - just to clarify why I found the later script useful. :-)

Gary Roberts
Moderator
Joined: 9 Feb 05
Posts: 5872
Credit: 117524916886
RAC: 35381441

RE: Bikeman and Gary

Message 77763 in response to message 77762

Quote:
Bikeman and Gary Roberts, may I port and use your scripts?
:-)

Of course you can. You're quite free to do whatever you like with them.

It indeed would be nice to have a single app that could harvest the data and interface directly with Mike Hewson's RR_V3 to allow seamless analysis of results and the ability to predict future runtimes. Please be aware that Bernd has said a couple of times that they know the causes of the cyclic variation and that they intend (at some stage) to have the WU generator take into account the variation when calculating the appropriate credit for each task. It's also possible that the cyclic variation might be modified or even eliminated at some point. In other words you may need to throw away much of your investment if things change.

Cheers,
Gary.
