Parallella, Raspberry Pi, FPGA & All That Stuff

Der Mann mit der Ledertasche
Joined: 12 Dec 05
Posts: 151
Credit: 302594178
RAC: 0

...as I said, sometimes you can't see the forest for the trees! ;-)
I use BoincTasks too for all my computers, and it can do what's wanted!
But anyway, with the --passwd parameter it works well. See you in the team once the last 26 WUs are done. ;-)
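For anyone else setting this up, a hedged sketch of what remote control with the --passwd parameter can look like, assuming BOINC's standard command-line tool boinccmd; the host name "raspi1" and password "mysecret" are placeholders, not values from this thread:

```shell
# Show the full state of a remote client (its GUI RPC must be enabled
# and the calling machine allowed to connect):
boinccmd --host raspi1 --passwd mysecret --get_state

# List the tasks currently queued or running on that host:
boinccmd --host raspi1 --passwd mysecret --get_tasks

# Suspend all work for one project on that host:
boinccmd --host raspi1 --passwd mysecret --project https://einsteinathome.org/ suspend
```

These need a live BOINC client on the target machine to answer, so treat them as an invocation sketch rather than something to paste blindly.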

BR

DMmtL

Greetings from the North

Der Mann mit der Ledertasche
Joined: 12 Dec 05
Posts: 151
Credit: 302594178
RAC: 0

Thanks, it seems to be a nice tool.
I installed it and edited the host list, but I'm still getting no information (hosts show as offline).
Do I have to install or edit something else (e.g. the port number)?

Sorry for so many questions, but I'm just getting started in the Unix/Linux environment! :-O

BR

DMmtL

Greetings from the North

Der Mann mit der Ledertasche
Joined: 12 Dec 05
Posts: 151
Credit: 302594178
RAC: 0

...Solved, I'm a "Dummy"!

I had to edit remote_hosts.cfg, and then everything was fine!
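In case it saves the next person some digging: as far as I know, remote_hosts.cfg is a plain text file in the BOINC data directory listing the hosts allowed to connect to the client's GUI RPC port (default 31416), one host name or IP address per line, and the client needs a restart after editing it. A minimal sketch, with placeholder addresses:

```
192.168.1.10
monitoring-pc.local
```

Without an entry for the machine running the monitoring tool, the client silently refuses the connection, which shows up as "offline".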

Sorry...

BR

DMmtL

Greetings from the North

KF7IJZ
Joined: 27 Feb 15
Posts: 110
Credit: 6108311
RAC: 0

Quote:

Anyway, glad to see the team growing. If we were a country, we would be in the TOP 100 for Einstein@Home. Yes, the (currently) ca. 6600 credits/day is more than that of, say, Egypt, according to BoincStats. Hmmm... hard to believe... kind of depressing if true.

This really makes me want to get off my duff and get my other 7 Pi 3s going.

Also, for those who want to use Pi Zeros (even though it makes NO economic sense to do so), check out this new HAT! https://clusterhat.com/

My YouTube Channel: https://www.youtube.com/user/KF7IJZ
Follow me on Twitter: https://twitter.com/KF7IJZ

poppageek
Joined: 13 Aug 10
Posts: 259
Credit: 2473733872
RAC: 0

The real reason I wanted everyone to use the "suspend" command. I WAS #2. LOL

Ordered another Pi3. Won't be enough but I am not going down easy!!

Oh the pressure...

poppageek
Joined: 13 Aug 10
Posts: 259
Credit: 2473733872
RAC: 0

Quote:

...Solved, I'm a "Dummy"!

I had to edit remote_hosts.cfg, and then everything was fine!

Sorry...

BR

DMmtL

Second line contradicts the first line. :-)

poppageek
Joined: 13 Aug 10
Posts: 259
Credit: 2473733872
RAC: 0

Welcome to the team olebole!

Der Mann mit der Ledertasche
Joined: 12 Dec 05
Posts: 151
Credit: 302594178
RAC: 0

Good Morning Team,

because at the moment I have only 2 error WUs in total ;-) I have to wait until the last 36 WUs on the Raspberry Pis are finished. Tuesday/Wednesday next week is probably the earliest I can switch the Pis over to the team.
BTW, how should I picture an x-node Pi cluster working with the right software?
Would one WU be spread across all nodes, or would each core calculate its own WU?
Waiting for next week to switch!

BR

DMmtL

Greetings from the North

PorkyPies
Joined: 27 Apr 16
Posts: 199
Credit: 33740629
RAC: 945

Quote:

The real reason I wanted everyone to use the "suspend" command. I WAS #2. LOL

Ordered another Pi3. Won't be enough but I am not going down easy!!

Oh the pressure...


Ordered another 3 Pi 3s. One to replace a Pi 2, and the other two to replace a couple of Parallellas. The 3 being replaced aren't currently part of the team. That will bring the PorkyPies cluster up to 7.

PorkyPies
Joined: 27 Apr 16
Posts: 199
Credit: 33740629
RAC: 945

Quote:

BTW, how should I picture an x-node Pi cluster working with the right software?
Would one WU be spread across all nodes, or would each core calculate its own WU?
Waiting for next week to switch!

BR

DMmtL


Normally we run them as what's called a CoW (Cluster of Workstations). That is, each one has its own copy of the BOINC client running on it, doing its own thing, typically running four BRP4 tasks at a time, one per core.

If it were a Beowulf cluster, then you'd have a head (master) node scheduling tasks to run on the compute nodes. To do that, the BRP4 app would need code changes to run using MPI (Message Passing Interface), and the BOINC client would need to use MPI to initiate and monitor the tasks. I've been working on a Pi Beowulf cluster for the last few weeks and finally got it working tonight. I'll post about it when I get some free time. Unfortunately, it doesn't help with running BOINC.
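To make the head-node idea concrete, here is a toy sketch, purely illustrative: a few lines of shell standing in for the scheduler, handing made-up work-unit numbers out round-robin to hypothetical compute nodes. A real Beowulf cluster would do this through MPI message passing; nothing here is actual BOINC or MPI code.

```shell
# Pretend head node: dispatch WUs 1..8 round-robin to four compute nodes.
# Node names are invented; a real cluster would ssh/MPI to them instead
# of just printing the assignment.
nodes="node1 node2 node3 node4"
count=$(echo $nodes | wc -w)
wu=1
while [ $wu -le 8 ]; do
    i=$(( (wu - 1) % count + 1 ))          # pick the next node in rotation
    node=$(echo $nodes | cut -d' ' -f$i)
    echo "WU $wu -> $node"
    wu=$((wu + 1))
done
```

In a real Beowulf setup, each compute node would also report its result back to the head node, which is exactly the part MPI's message passing would handle.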
