ClojureCUDA – CUDA programming in Clojure (uncomplicate.org)
164 points by prometheus666 on Sept 7, 2017 | 13 comments



It might also be a good idea to check out Neanderthal (http://neanderthal.uncomplicate.org), a linear algebra library that uses ClojureCUDA alongside Intel MKL and ClojureCL.
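For a taste of what that looks like, here is a minimal sketch using Neanderthal's documented native (MKL-backed) API; treat it as illustrative and check the docs for the exact namespaces:

    ;; assumes the MKL libraries are already on the path, per the install docs
    (require '[uncomplicate.neanderthal.core :refer [mm dot]]
             '[uncomplicate.neanderthal.native :refer [dge dv]])

    (def a (dge 2 2 [1 2 3 4]))   ;; 2x2 double matrix, column-major data
    (def b (dge 2 2 [5 6 7 8]))

    (mm a b)                      ;; matrix multiply, dispatched to MKL BLAS
    (dot (dv 1 2 3) (dv 4 5 6))   ;; => 32.0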


The uncomplicate family of libraries looks really great. The fellow behind them clearly knows what he is doing.

I tried to get uncomplicate.neanderthal working because I was impressed by it, but eventually was scared off by the Intel MKL dependency.

I'll get it all working properly someday after sinking enough hours, but I really wish there were a pure-Java CPU implementation so it would be a bit easier to ease into the functional mindset, prototype algorithms, learn the concepts, and clear up my misunderstandings about how the logic fits together. I lost an afternoon to finding downloaders, reading license conditions, running shell scripts, and debugging path issues when what I wanted to be doing was learning a new API.

Basically, I think uncomplicate is great, but the library dependencies are a bit unfriendly for a novice or low-skill data scientist who is new to Clojure. EDIT: The competition was "import numpy as np" to start learning how to do linear algebra work in Python.


You do not need to read any of those Intel MKL guides - they are for the developers who compile the dependencies (me in this case).

As a user, you only need to have Intel's .so files on your PATH (.dylib on Mac, .dll on Windows), and that's all.

As this is a discussion about ClojureCUDA, I'll just note that to use this you'd need to install CUDA. IMHO, Intel MKL is easier to install than CUDA, and no one complains about CUDA's installation complexity.

I understand your concerns, but I'd like to encourage you - it is really not difficult once you follow the installation instructions to the letter.


Disclaimer: I run a competing library to the uncomplicate series of libraries.

If you want that set of guarantees with just as much ease of use, you can look into ours: http://nd4j.org/

We have various usage examples here: https://github.com/deeplearning4j/dl4j-examples/tree/master/...

and the user guide: http://nd4j.org/userguide

We have a lot going on under the hood, but we precompile everything out of the box.

We maintain a whole software stack for JNI called javacpp: https://github.com/bytedeco/javacpp

That stack includes something called a "preset", which bundles OpenBLAS as a Maven dependency (with the ability to link against MKL if you want).

We support CUDA as well.
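To give a rough idea (a sketch only; the artifact coordinates and version here are illustrative, see the examples repo for the current ones), from Clojure it's plain JVM interop once nd4j-native-platform (or the CUDA backend) is on the classpath:

    ;; e.g. in project.clj: [org.nd4j/nd4j-native-platform "0.9.1"]  (version illustrative)
    (import '[org.nd4j.linalg.factory Nd4j])

    (def a (Nd4j/create (double-array [1 2 3 4]) (int-array [2 2])))
    (def b (Nd4j/create (double-array [5 6 7 8]) (int-array [2 2])))

    (.mmul a b)   ;; matrix multiply, backed by OpenBLAS or MKL through JavaCPP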

If you have any questions, stop by our live chat: https://gitter.im/deeplearning4j/deeplearning4j


Hi Adam,

Perhaps you'd be interested in writing a joint article with me where we explore both Neanderthal and ND4J side by side?

Maybe you could also benefit from ClojureCUDA's ease of use vs. using low-level Java bindings? This could be a stretch, I know :)

BTW, congratulations on your Deep Learning book!


Sure! Would love to. I don't think we can use your Java bindings, though. We have our own off-heap memory management and a lot of other stuff going on under the hood. Email in profile.


Wow! Just yesterday I was researching what options I have for programming with CUDA; having a Clojure library for it sounds lovely!

How is the performance, though? Is there an impact compared to, say, the C bindings? I want to write a cryptocurrency miner for Nvidia, and performance is key there. Any input?


No impact whatsoever! You get the full speed.


This is the best news I could wish for. Cheers!


Everything is compiled to PTX via NVCC (which is LLVM-based), so as long as the libraries are equivalent there should be no impact.

Many of these libraries are little more than an interface over the existing first-party libraries, e.g. cuBLAS, so in effect you aren't going to be running unoptimized code.
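To make that concrete, here is a rough sketch along the lines of ClojureCUDA's getting-started example (names like current-context!, grid-1d, and parameters are from memory of that guide and may differ between versions). The kernel itself is ordinary CUDA C, compiled at runtime to the same PTX you'd get from a C host program:

    (require '[uncomplicate.clojurecuda.core :refer :all])

    ;; ordinary CUDA C kernel source, compiled at runtime by NVRTC
    (def kernel-source
      "extern \"C\" __global__ void increment (int n, float *a) {
         int i = blockIdx.x * blockDim.x + threadIdx.x;
         if (i < n) a[i] = a[i] + 1.0f;
       }")

    (init)
    (current-context! (context (device 0)))

    (def gpu-array (mem-alloc 1024))                    ;; room for 256 floats
    (memcpy-host! (float-array (range 256)) gpu-array)  ;; host -> device

    (def increment (function (module (compile! (program kernel-source)))
                             "increment"))
    (launch! increment (grid-1d 256) (parameters 256 gpu-array))

    (def result (memcpy-host! gpu-array (float-array 256)))  ;; device -> host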


It is very easy to run unoptimized code despite sending everything to cuBLAS.

How easy it is to take a wrong turn - that depends on the library.


Indeed, but that isn't language-dependent. My intention was to convey that, for the most part, C/C++, Fortran, Clojure, or anything else will run the same for the same code. Some language specifics might cause edge cases, but for the most part, as long as the compiler produces identical PTX, there shouldn't be any difference.


Sure. My intention was not to correct your statement, but to clarify it a bit.



