Thanks Petr,
I think it will take about four years before most desktops have the hardware to do generic parallel computing on the GPU. Double precision maths is necessary for this to come about, and only the latest generation of video cards has that ability.
It would be great if this parallelism could be tightly integrated into BASIC. It could be done almost transparently if the interpreter/compiler could identify parallel situations - iteratively processing large arrays, for instance. A rough sketch of what I mean follows below.
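Just to make the idea concrete, here is the sort of thing the compiler might emit behind the scenes for a simple array loop like FOR i = 0 TO n-1 : c(i) = a(i) * b(i) + 1.0 : NEXT. This is only an illustrative sketch in CUDA (the loop, names and launch parameters are made up, not anything an existing BASIC does), showing one thread per array element and using doubles, which needs one of those newer cards:

// Hypothetical kernel a BASIC compiler might generate for an
// element-wise loop over large double-precision arrays.
#include <cuda_runtime.h>

__global__ void elementwise(const double *a, const double *b, double *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        c[i] = a[i] * b[i] + 1.0;
}

void run_on_gpu(const double *a, const double *b, double *c, int n)
{
    double *da, *db, *dc;
    size_t bytes = n * sizeof(double);
    cudaMalloc(&da, bytes);  cudaMalloc(&db, bytes);  cudaMalloc(&dc, bytes);
    cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);

    int threads = 256;                          // threads per block
    int blocks = (n + threads - 1) / threads;   // enough blocks to cover n
    elementwise<<<blocks, threads>>>(da, db, dc, n);

    cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);
    cudaFree(da);  cudaFree(db);  cudaFree(dc);
}

The point is that the programmer would still just write the FOR loop; recognising that the iterations are independent and doing the copy/launch/copy-back plumbing is the part the interpreter/compiler would have to handle transparently.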
Charles