...as I said, sometimes the forest is too near! ;-)
I use BoincTasks too, for all my computers, and it would be able to do what I wanted!
But anyway, with the --passwd parameter it is working well. You'll see us in the team once the last 26 WUs are done. ;-)
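(For anyone else doing this from the command line: a remote boinccmd call with the client's GUI RPC password looks roughly like the lines below. The host name, the password, and the choice of suspend/resume are just placeholders for my setup.)

    boinccmd --host raspi1 --passwd MySecret --project http://einstein.phys.uwm.edu/ suspend
    boinccmd --host raspi1 --passwd MySecret --project http://einstein.phys.uwm.edu/ resume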
BR
DMmtL
Greetings from the North
Thx, seems to be a nice tool.
I did install it and edited the host list, but I'm still getting no information (offline).
Do I have to install or edit something else (e.g. the port number)?
Sorry for so many questions, but I'm just getting started in the Unix/Linux environment! :-O
BR
DMmtL
Greetings from the North
...Solved, I'm a "Dummy"!
I had to edit remote_hosts.cfg and then everything was fine!
Sorry...
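(For anyone who hits the same wall: the BOINC client only accepts remote GUI RPC connections from machines listed in remote_hosts.cfg in its data directory, the connecting machine has to supply the password from gui_rpc_auth.cfg, and the client listens on TCP port 31416 by default. The file is just one hostname or IP address per line; the entries below are made-up examples. Restart the client after editing it.)

    192.168.1.50
    my-desktop.local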
BR
DMmtL
Greetings from the North
RE: Anyway, glad to see the team growing
"Anyway, glad to see the team growing. If we were a country, we would be in the TOP 100 for Einstein@Home. Yes, our (currently) ca. 6600 credits/day is more than that of, say, Egypt, according to BoincStats. Hmmm... hard to believe... kind of depressing if true."
This really makes me want to get off my duff and get my other 7 Pi 3s going.
Also, for those who want to use Pi Zeros (even though it makes NO economic sense to do so), check out this new HAT! https://clusterhat.com/
My YouTube Channel: https://www.youtube.com/user/KF7IJZ
Follow me on Twitter: https://twitter.com/KF7IJZ
The real reason I wanted everyone to use the "suspend" command: I WAS #2. LOL
Ordered another Pi3. Won't be enough but I am not going down easy!!
Oh the pressure...
RE: ...Solved, I'm a "Dummy"!
Second line contradicts the first line. :-)
Welcome to the team olebole!
Good Morning Team,
since I've only had 2 error WUs so far ;-) I have to wait until the last 36 WUs on the Raspis are finished. Tuesday/Wednesday next week is probably the earliest I can switch the Pis over to the team.
BTW, how should I imagine an x-node Pi cluster working with the right software?
Would one WU be spread over all nodes, or would each core calculate one WU?
Waiting for next week to switch!
BR
DMmtL
Greetings from the North
RE: The real reason I wanted everyone to use the "suspend" command
Ordered another 3 Pi3s. One to replace a Pi2 and the other two to replace a couple of Parallellas. The 3 being replaced aren't currently part of the team. That will bring the PorkyPies cluster up to 7.
MarksRpiCluster
RE: BTW, how do I imagine an x-node Pi cluster working with the right software?
Normally we run them as what's called a CoW (Cluster of Workstations). That is, each one has its own copy of the BOINC client running on it, doing its own thing, typically running four BRP4 tasks at a time, one per core.
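(In other words, there is no central scheduler; you just talk to each client individually. With made-up host names and password, checking on the nodes is simply a matter of asking each one in turn:)

    boinccmd --host pi1 --passwd MySecret --get_tasks
    boinccmd --host pi2 --passwd MySecret --get_tasks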
If it were a Beowulf cluster, then you'd have a head (master) node scheduling tasks to run on the compute nodes. To do that, the BRP4 app would need code changes to run using MPI (Message Passing Interface), and the BOINC client would need to use MPI to initiate and monitor the tasks. I've been working on a Pi Beowulf cluster for the last few weeks and finally got it working tonight. I'll post about it when I get some free time. Unfortunately it doesn't help with running BOINC.
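To give a feel for that head/compute-node pattern, here is a minimal C sketch of what MPI scheduling looks like. To be clear, this is not BOINC or BRP4 code (neither supports MPI, as said above); the work-unit IDs and the "computation" are invented placeholders.

    /* beowulf_demo.c -- toy head/compute-node pattern, not real BOINC code */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this node's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total node count */

        if (rank == 0) {
            /* head node: hand one fake work-unit id to each compute node */
            for (int node = 1; node < size; node++) {
                int wu_id = 1000 + node;
                MPI_Send(&wu_id, 1, MPI_INT, node, 0, MPI_COMM_WORLD);
            }
            /* then collect the results as they come back */
            for (int node = 1; node < size; node++) {
                double result;
                MPI_Recv(&result, 1, MPI_DOUBLE, node, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("node %d returned %.1f\n", node, result);
            }
        } else {
            /* compute node: receive a work unit, "crunch" it, report back */
            int wu_id;
            MPI_Recv(&wu_id, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            double result = wu_id * 2.0;  /* stand-in for real number crunching */
            MPI_Send(&result, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

Build with mpicc beowulf_demo.c -o beowulf_demo and run with mpirun -np 4 ./beowulf_demo (or across the Pis with a host file). The point is just that the head node, not each client, decides who computes what.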
MarksRpiCluster