
GPU programming in pure Java from AMD

This could be really rather important.

When Java first started to be looked at as a mainstream language, the promise was that compilation would be decoupled from execution: by making decisions based on the environment the executable found itself in, the runtime could make better choices about how to execute the code.

Up to now that has been a largely empty promise, giving only the ability to write code that runs on more than one OS/processor combination without taking much advantage of local conditions, and in nearly all cases being inferior to native apps, even ones that were not very well written.

But CUDA is a relatively difficult language to use well, and Java, for all its faults, is much easier.
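For what it's worth, I assume the AMD project in question is Aparapi, which translates the bytecode of a kernel method into OpenCL at runtime. If so, a pure-Java kernel looks roughly like the sketch below (my own illustration, not taken from AMD's documentation; class names and sizes are placeholders):

```java
// Assumption: the AMD project is Aparapi, which converts the bytecode of
// Kernel.run() into OpenCL at runtime. Hypothetical vector-add sketch.
import com.amd.aparapi.Kernel;
import com.amd.aparapi.Range;

public class VectorAddSketch {
    public static void main(String[] args) {
        final int n = 1000000;
        final float[] a = new float[n];
        final float[] b = new float[n];
        final float[] sum = new float[n];
        for (int i = 0; i < n; i++) { a[i] = i; b[i] = 2f * i; }

        // The body of run() is the kernel: one invocation per global id.
        Kernel kernel = new Kernel() {
            @Override
            public void run() {
                int i = getGlobalId();
                sum[i] = a[i] + b[i];
            }
        };

        kernel.execute(Range.create(n)); // dispatch n work-items
        kernel.dispose();                // release OpenCL resources

        System.out.println("sum[42] = " + sum[42]);
    }
}
```

As I understand it, if the bytecode can't be translated the kernel simply falls back to running on a Java thread pool, which is part of why the barrier to entry is so much lower than hand-written CUDA.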
 
I don't consider this particularly important. I just don't expect GPU programming to become the kind of skill required of an average programmer, such that the barrier to entry needs lowering in every way possible, including letting them write kernels in whatever language is presumably most common among that type of programmer. So far, I haven't met that many Java programmers among GPU programmers, and I think efforts like those by Portland Group and AccelerEyes, which make it possible to write kernels in Fortran and to use GPUs from Matlab code respectively, are much more relevant for this community.

Furthermore, AMD is far from having enough momentum to significantly influence the GPGPU domain. Their record so far, like insisting for so long on the doomed Brook/CAL approach, or their still half-baked OpenCL implementation even now that they claim to be 100% dedicated to OpenCL, pretty much shows they are disconnected from reality. So I would rather look to other players in the field, for example CAPS, or even Intel, for innovation.
 
I wonder how much overhead having to go through a JVM adds... it would be interesting to see some benchmarks (a rough CPU-baseline sketch is below).

Also, what about the hardware? I know Nvidia has been modifying their hardware to make it more usable for CUDA...
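On the benchmark point: a rough starting point would be to time a warmed-up pure-Java loop with System.nanoTime(), then time the GPU dispatch the same way and compare. Something like this sketch (all sizes and repetition counts are arbitrary placeholders of mine):

```java
// Rough CPU baseline for the "how much overhead" question: warm up the JIT,
// then time a plain vector add. The GPU path could be timed the same way by
// wrapping the kernel dispatch in System.nanoTime() calls.
public class CpuBaselineSketch {
    static void add(float[] a, float[] b, float[] sum) {
        for (int i = 0; i < a.length; i++) {
            sum[i] = a[i] + b[i];
        }
    }

    public static void main(String[] args) {
        final int n = 1 << 22; // ~4M floats, arbitrary
        float[] a = new float[n], b = new float[n], sum = new float[n];
        for (int i = 0; i < n; i++) { a[i] = i; b[i] = 2f * i; }

        for (int w = 0; w < 10; w++) add(a, b, sum); // JIT warm-up

        final int reps = 100;
        long t0 = System.nanoTime();
        for (int r = 0; r < reps; r++) add(a, b, sum);
        long t1 = System.nanoTime();
        System.out.printf("CPU vector add: %.3f ms per pass%n",
                (t1 - t0) / (double) reps / 1e6);
    }
}
```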
 