Rolling your own dynamic linker
Simon Marlow marlowsd at gmail.com
Thu Nov 12 06:15:55 EST 2009
On 30/10/2009 12:59, Duncan Coutts wrote:
> On Fri, 2009-10-30 at 19:31 +1100, Manuel M T Chakravarty wrote:
>> This made Roman and me wonder whether it wouldn't be possible to use
>> dynamic libraries and dlopen() instead of the current setup. AFAIK
>> GHC can now generate dynamic libraries on Linux and somebody was
>> working on fixing them for MacOS, too. If GHC can create dynamic
>> libraries, what else would prevent us from using them for dynamically
>> loaded code (in ghci, TH, etc)? I can immediately see a number of
>> benefits of this approach:
> Using dynamic libs for packages for ghci is certainly a good idea.
> However, to use the system linker everywhere you must abandon having
> ghci load .o files, since the dlopen() call does not understand them.
> That means you cannot use the current system where we run ghc --make
> and then use ghci to load the compiled .o files.
> We'd have to investigate alternative options. Perhaps we could have
> ghc/ghci build each .o into a separate .so/.dynlib, or link them all
> into one .so/.dynlib before loading (which means we must always
> compile with -fPIC, and that has a non-zero cost). The same goes for
> .o files from helper/wrapper C code. It doesn't seem especially
> pretty.
> One could also imagine a system where we can load compiled code from
> ghc but not arbitrary .o files (imagine something like the final cmm
> stage being serialised/deserialised, followed by an in-memory JIT).
> That would not help your example of special dtrace symbols, of
> course, since those are in external .o files from the C compiler.
> You'd have to compile those into a .so/.dynlib first.
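For concreteness, the build-then-load scheme described above would
amount to something like this per module (Foo.hs is a hypothetical
example, and this assumes a platform where GHC can emit
position-independent code):

  ghc -fPIC -dynamic -c Foo.hs    # compile the module as PIC
  gcc -shared Foo.o -o Foo.so     # wrap the object in a shared lib

and then a dlopen() of the resulting Foo.so.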
Someone should investigate the feasibility of dropping the RTS linker.
Manuel is pretty keen to do it (with good reasons), and I have some
sympathy with his position. I don't understand all the ramifications
though, and it's not yet clear whether it's completely feasible.
We should create a wiki page in due course, but let's continue the
discussion for now.
Here's a brain dump of the questions I have:
* How do we load compiled Haskell modules in GHCi?
  - run "gcc -shared" on each one, then dlopen()? (a sketch of
    the loading side follows this list)
  - how much slower is that? Would users be willing to
    accept the slowdown?
  - does it mean we have to compile with -fPIC/-dynamic by
    default?
  - if so, what are the performance implications of that?
  - should we switch to -fPIC/-dynamic being the default?
    (we probably want to move toward -dynamic being the default
    anyway, but not necessarily -fPIC).
  - or can we require that people compile modules with
    -fPIC/-dynamic if they want to load them into GHCi?
  - if they forget, can we give a reasonable error message?
  - -fPIC by default would be good for x86-64, due to the
    small memory model: non-PIC code has to be mapped into the
    low 2GB of the address space, which the RTS linker currently
    has to arrange by hand.
* Can we continue to load ordinary .o files (e.g. compiled C code)
  in GHCi? Do they need to be compiled with -fPIC? (a second
  sketch below shows one way this could look)
  - are people willing to accept any regressions in this area?
* What about Windows? (DLL support is in progress)
* What about other platforms - are we adding a dependency on
  PIC/dynamic support in the code generator in order to use GHCi?
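To make the dlopen() route concrete, here's a rough sketch of what
the loading side might look like from Haskell, using the existing
bindings in System.Posix.DynamicLinker from the unix package. The
library name and the symbol name are invented for illustration:

  import System.Posix.DynamicLinker (dlopen, dlsym, RTLDFlags(..))
  import Foreign.Ptr (FunPtr)

  -- Load a shared library built from a compiled module, and look up
  -- one of the symbols it exports.  "Foo.so" and the symbol name
  -- below are hypothetical.
  loadFoo :: IO (FunPtr ())
  loadFoo = do
      dl <- dlopen "./Foo.so" [RTLD_NOW, RTLD_GLOBAL]
      dlsym dl "Foo_someBinding_closure"

RTLD_GLOBAL is significant here: each library loaded later has to be
able to see the symbols of everything loaded before it, which is what
the RTS linker's global symbol table arranges by hand today.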
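Plain C objects could be handled the same way at the cost of an extra
link step: wrap the .o in a shared library first (compiled with
-fPIC, then something like "gcc -shared hello.o -o libhello.so"), and
call into it through a dynamic foreign import. Another sketch, with
invented file and function names:

  {-# LANGUAGE ForeignFunctionInterface #-}
  import System.Posix.DynamicLinker (dlopen, dlsym, RTLDFlags(..))
  import Foreign.Ptr (FunPtr)

  -- Turn a FunPtr obtained from dlsym into a callable Haskell action.
  foreign import ccall "dynamic"
    mkHello :: FunPtr (IO ()) -> IO ()

  main :: IO ()
  main = do
      dl  <- dlopen "./libhello.so" [RTLD_NOW]
      fun <- dlsym dl "hello"   -- "hello" is a hypothetical C function
      mkHello fun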