Well I was just about to say that I reckon Adapteva had fallen over, only we didn't know it yet, when this was published. Ooooh, some unkind cuts there. :-(
Cheers, Mike.
I have made this letter longer than usual because I lack the time to make it shorter ...
... and my other CPU is a Ryzen 5950X :-) Blaise Pascal
Not a lot happening, and not a lot being said by Adapteva either. In theory my board should be shipped in the next 2 weeks. Some backers are getting quite restless though. Annoyance at delay is very natural and I really don't blame them, even though my tolerance/patience level is set somewhat higher.
Meanwhile I have found some really good older texts on my personal library shelves that I had forgotten I had, relating to parallel programming both directly and in asides.
There's one describing the 'perfect shuffle' algorithm, which exactly describes the permutation of vector elements in the FFT's DIT & DIF processing, though such a shuffle is used elsewhere too. The gag here is that if you keep iterating the shuffle you return to the original ordering in some finite number of steps. So you could start with a standard pack of 52 playing cards, say well ordered by suit and rank, perfect-shuffle repeatedly through quite random-looking states, and eventually wind up with the original state suddenly 'appearing out of nowhere'. In a sense the 'information' in the original ordering never left; the entropy, if you like, was kept constant/recoverable by the specific shuffle mechanism used.
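For the curious, that cycle is easy to watch in a few lines of Python. This is just an illustrative sketch (the `perfect_shuffle` name is mine, and I've picked the 'out-shuffle' variant, which keeps the top card on top):

```python
def perfect_shuffle(deck):
    # Out-shuffle: cut the deck exactly in half and interleave the
    # halves, keeping the original top card on top.
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    return [card for pair in zip(top, bottom) for card in pair]

deck = list(range(52))
shuffled = perfect_shuffle(deck)
count = 1
while shuffled != deck:
    shuffled = perfect_shuffle(shuffled)
    count += 1
print(count)  # -> 8: eight out-shuffles restore a 52-card deck
```

Eight iterations and the original order pops back out, exactly as described above.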
[ There's a deeper lesson here for thermodynamics and I am reminded of Boltzmann's comment that entropy almost always increases - but he had the assumption of molecular chaos to shuffle the objects, not an algorithmic/perfect shuffle on hand. ]
Another talks of FFTs as a quick route to the coefficients of ( large ) polynomial multiplications. I would never have picked that one.
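As a sketch of that trick: FFT the ( zero-padded ) coefficient vectors of the two polynomials, multiply pointwise, and transform back. A toy radix-2 version in Python follows; the `fft` and `poly_mul` names are mine, and this is for illustration only, not a tuned kernel:

```python
import cmath

def fft(a, invert=False):
    # Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    n = len(a)
    if n == 1:
        return a[:]
    sign = 1 if invert else -1
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)  # twiddle factor
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def poly_mul(p, q):
    # Multiply polynomials (integer coefficient lists, lowest degree
    # first): pad to a power of two, FFT, multiply pointwise, invert.
    n = 1
    while n < len(p) + len(q) - 1:
        n *= 2
    fp = fft(list(map(complex, p)) + [0] * (n - len(p)))
    fq = fft(list(map(complex, q)) + [0] * (n - len(q)))
    prod = fft([x * y for x, y in zip(fp, fq)], invert=True)
    return [round((c / n).real) for c in prod[:len(p) + len(q) - 1]]

print(poly_mul([1, 2], [3, 4]))  # (1 + 2x)(3 + 4x) -> [3, 10, 8]
```

For large polynomials this turns an O(n^2) convolution into O(n log n), which is the whole point of the observation.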
Fourier Transforms - one of my favorite math topics, if you haven't already guessed - are such an incredible intersection of so many areas. Theoretical and practical. A good bone to chew on.
Cheers, Mike.
Well, another (partial) delivery milestone (Nov. 15th) for the Parallella boards has come and passed without any public update by the company behind the project, Adapteva. I've seen no announcements, no Kickstarter updates; even the Twitter messages from the Adapteva CEO have fallen silent. He is supposed to give a talk at a technology conference on Dec 4th; we will see whether this sheds some light on the current state of the project.
I'm not holding my breath for getting my backer reward board, and will have to look for some other toy for the X-mas period to play with, I guess.
Yup. I think they've gone belly up, or at least the Parallella component of their business has. Seems like the sort of industry where, if you want a friend, you get a dog. Their forum is getting a tad feral, the more so the longer they make no comments/announcements. Probably they've lost too much cash and/or investment, what with their engineer leaving and getting DK'ed by a key supplier, so my venture capital is gone, I reckon. But I knew the risk ..... :-(
However my backup plan is to do FFT's on an FPGA .... seriously ! :-)
Cheers, Mike.
( edit ) However, by a curious piece of pure luck the protected ( low ) power supply that I have designed and built for the Parallella will work fine with the Nexys4 .... howzat for a fluke !! :-)
( edit ) Unlike some other backers, I don't think they've been fraudulent : just better engineers than businessmen. I mean that in a kindly sense.
( edit ) If you're wondering ..... one can have free access to a 32-bit processor design that can be implemented on Xilinx FPGAs ( like the Artix-7 within the Nexys4 ) called MicroBlaze MCS ( a quite useful cut-down of their more generally configurable MicroBlaze IP ), available with their ISE WebPACK download. Of course you can roll your own 32-bit processor design if you want : like golf, there is no absolute obstacle between you and the hole, and it's just a case of hitting the ball correctly, right ?? :-)
This satisfies all the single precision floating point needs for FFT, and it's then a case of wrapping other features of choice/construction around that floating point engine to give the final bitstream to program the FPGA with.
However my backup plan is to do FFT's on an FPGA .... seriously ! :-)
Very interesting indeed. In general, the idea of combining FPGAs and BOINC has been floating around for some time now, especially after the Bitcoin craze (and attacks on some encryption schemes) demonstrated the power of FPGAs.
I wonder whether there are open-source FPGA implementations of FFT already?
FFTs in hardware are pretty well covered, as Google knows. A quick search for 'FFT' on opencores.org threw up plenty of hits.
What performance gain would the e@h apps see if (as a thought experiment) the time taken for an FFT went to zero?
Presumably the improvement would vary according to the application and the algorithms - if the time is essentially all FFT's then the improvement could be large, but if FFT's are only a few % anyway it wouldn't be worth it. And isn't e@h using a lot of bus bandwidth as it is? (an FPGA isn't going to help that).
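That intuition is just Amdahl's law in disguise. A quick sketch ( the `speedup` name and the example fractions are illustrative, not measured e@h numbers ):

```python
def speedup(fft_fraction):
    # Amdahl's law: if the FFT portion (a fraction f of total runtime)
    # became free, the overall speedup would be 1 / (1 - f).
    return 1.0 / (1.0 - fft_fraction)

print(speedup(0.5))   # -> 2.0: if half the time is FFTs, runtime halves
print(speedup(0.05))  # ~1.05: if FFTs are only 5%, hardly worth it
```

So the payoff hinges entirely on how FFT-dominated the application actually is.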
I wonder whether there are open-source FPGA implementations of FFT already?
No shortage in fact. However none that I've found copes with the vector lengths that E@H uses ie. ~ 2^[20+]; for that length you have to take on IP costs eg. Sundance. Strictly speaking they are hybrids requiring associated RAM. My interest is not only in producing such a solution, but also in studying/managing how it scales ..... fortunately the basic recursive elements of the problem give me great expectations. Tilting at windmills anyone ??? :-) :-)
Cheers, Mike.
( edit ) On size grounds at least : one could **just** fit a 2^22 transform ( real to complex, single precision IEEE-754 floats ) on the Nexys4, without resorting to an in-place algorithm.
( edit ) Fortunately there is also no shortage of open source floating point cores eg. FloPoCo ( a C++ based generator of VHDL, under Linux ). Some involve **pipelining** for instance.
( edit ) A really neat application of the concurrency that FPGA's give would be the generation of the FFT 'twiddle factors', to wit : powers of the Nth ( complex ) root of unity, where N is the transform size. Any of these roots may be generated on the fly at no latency cost ie. sine and cosine components, rather like 'just in time' component delivery to a large assembly line.
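In software the same idea amounts to precomputing the table below; on the FPGA each entry would instead come from a small sine/cosine generator at the moment the butterfly needs it. A sketch, with an illustrative function name:

```python
import cmath

def twiddle_factors(n):
    # Powers of the principal Nth root of unity, W_N^k = exp(-2*pi*i*k/N).
    # A radix-2 butterfly stage only ever needs the first n//2 of them,
    # since W_N^(k + n/2) = -W_N^k.
    return [cmath.exp(-2j * cmath.pi * k / n) for k in range(n // 2)]

w = twiddle_factors(8)
print(w[2])  # W_8^2 = -i (up to floating point noise)
```

On a large transform this table is itself big ( 2^21 complex values for a 2^22 transform ), which is exactly why generating the values on the fly, 'just in time', is attractive.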
( edit ) Slow day, huh ?? ---> another cool thing with FPGA-implemented custom float cores is to have fused complex/multiple/combined operations viz. perform a ( possibly quite long ) series of operand manipulations in some greater number of bits than the ( IEEE-754 ) standard operand length, but round back to said length for storage only at the very end of the operation sequence. This may include full function 'calls' eg. trigs, roots and logs. This may dramatically tame a lot of rounding error accumulation. For this you also need converters to/from IEEE-754 to some chosen internal format. Also open source! :-)
Hi!
I'd be happy to get a board for x-mas, I guess. Well, better late than never.
Cheers
HB
Ah, the famous XKCD productivity loss, upon discovering the comic for the first time! :)
Do not, on any account, investigate the 'Time' series or the 'massive' page.
Judging by your avatar you need a good bone to chew on, occasionally ;)
Anyway, I wish you and Parallela all the best!
MrS
Scanning for our furry friends since Jan 2002