<div>It is not obvious that semantics are preserved by optimisations that remove non-constant subexpressions, e.g.</div><div><br></div><div>bar a b = a + b - a - b -- the RHS should be optimised away to 0<br></div><div><br></div><div>
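A tiny demonstration of the concern (a sketch; barOpt is a hypothetical hand-written stand-in for what an over-eager optimiser would produce):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- bar forces both arguments, so applying it to bottom throws;
-- barOpt, the "optimised" version, ignores them entirely.
bar :: Int -> Int -> Int
bar a b = a + b - a - b

barOpt :: Int -> Int -> Int
barOpt _ _ = 0

main :: IO ()
main = do
  r <- try (evaluate (bar undefined undefined)) :: IO (Either SomeException Int)
  case r of
    Left _  -> putStrLn "bar: throws"   -- this branch is taken
    Right v -> putStrLn ("bar: " ++ show v)
  print (barOpt undefined undefined)    -- prints 0
```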
Calling bar undefined undefined throws an error, but the optimised bar would return 0.</div><br><div class="gmail_quote">On Sat, Jun 1, 2013 at 8:10 PM, Patrick Palka <span dir="ltr"><<a href="mailto:patrick@parcs.ath.cx" target="_blank">patrick@parcs.ath.cx</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Yeah, I am going to use the MVar approach. Alternative implementations will be investigated if this approach happens to not scale well.</div>
<div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, May 31, 2013 at 9:10 AM, Thomas Schilling <span dir="ltr"><<a href="mailto:nominolo@googlemail.com" target="_blank">nominolo@googlemail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">[I'll be the mentor for this GSoC project.]<br>
<br>
I used the MVar approach a while ago and so did Simon Marlow's<br>
original solution. Using MVars and Threads for this should scale well<br>
enough (1000s of modules) and be relatively straightforward.<br>
Error/exception handling could be a bit tricky, but you could use (or<br>
copy ideas from) the 'async' package to deal with that.<br>
<span><font color="#888888"><br>
/ Thomas<br>
</font></span><div><div><br>
On 30 May 2013 18:51, Ryan Newton <<a href="mailto:rrnewton@gmail.com" target="_blank">rrnewton@gmail.com</a>> wrote:<br>
> What's the plan for what control / synchronization structures you'll use in<br>
> part 2 of the plan to implement a parallel driver?<br>
><br>
> Is the idea just to use an IO thread for each compile and block them on<br>
> MVars when they encounter dependencies? Or you can use a pool of worker<br>
> threads and a work queue, and only add modules to the work queue when all<br>
> their dependencies are met (limits memory use)... many options for executing<br>
> a task DAG. Fortunately the granularity is coarse.<br>
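The thread-per-module option could be sketched roughly as below (compileGraph is a hypothetical stand-in, not GHC's actual driver; real code would also need cycle detection and exception propagation, e.g. via the async package):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Monad (forM_)
import qualified Data.Map as M

-- One IO thread per module; each blocks on the MVars of its
-- dependencies, "compiles" (here: records its name), then signals
-- its own MVar so dependents can proceed.
compileGraph :: [(String, [String])] -> IO [String]
compileGraph graph = do
  done  <- M.fromList <$> mapM (\(m, _) -> do v <- newEmptyMVar; pure (m, v)) graph
  order <- newMVar []
  forM_ graph $ \(m, deps) -> forkIO $ do
    mapM_ (\d -> readMVar (done M.! d)) deps  -- block until all deps finish
    modifyMVar_ order (pure . (m :))          -- stand-in for compiling m
    putMVar (done M.! m) ()                   -- signal completion
  mapM_ (\(m, _) -> readMVar (done M.! m)) graph  -- wait for everything
  reverse <$> readMVar order
```

For a chain such as A ← B ← C the completion order is forced to be ["A","B","C"]; for wider DAGs the blocking alone bounds the order, and a worker pool would additionally bound memory use.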
><br>
> -Ryan<br>
><br>
><br>
><br>
> On Sun, Apr 21, 2013 at 10:34 PM, Patrick Palka <<a href="mailto:patrick@parcs.ath.cx" target="_blank">patrick@parcs.ath.cx</a>><br>
> wrote:<br>
>><br>
>> Good points. I did not take into account whether proposal #2 may be worth<br>
>> it in light of -fllvm. I suppose that even if the LLVM codegen is able to<br>
>> perform similar optimizations, it would still be beneficial to implement<br>
>> proposal #2 as a core-to-core pass because the transformations it performs<br>
>> would expose new information to subsequent core-to-core passes. Also,<br>
>> Haskell has different overflow rules than C does (whose rules I assume<br>
>> LLVM's optimizations are modeled on): in Haskell, integer overflow is<br>
>> undefined for all integral types, whereas in C it's only undefined for<br>
>> signed integral types. This limits the number of optimizations a C-based<br>
>> optimizer can perform on unsigned arithmetic.<br>
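For what it's worth, in GHC as implemented (the Report leaves this open, which is the latitude being described) the fixed-width integral types wrap on overflow, signed and unsigned alike, in contrast to C where signed overflow is undefined but unsigned wrapping is mandated:

```haskell
import Data.Int (Int8)
import Data.Word (Word8)

main :: IO ()
main = do
  print (maxBound + 1 == (minBound :: Int8))   -- True: 127 + 1 wraps to -128
  print (maxBound + 1 == (minBound :: Word8))  -- True: 255 + 1 wraps to 0
```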
>><br>
>> I'm not sure how I would break up the parallel compilation proposal into<br>
>> multiple self-contained units of work. I can only think of two units: making<br>
>> GHC thread safe, and writing the new parallel compilation driver. Other<br>
>> incidental units may come up during development (e.g. parallel compilation<br>
>> might exacerbate #4012), but I still feel that three months of full time<br>
>> work is ample time to complete the project, especially with existing<br>
>> familiarity with the code base.<br>
>><br>
>> Thanks for the feedback.<br>
>><br>
>><br>
>> On Sun, Apr 21, 2013 at 5:55 PM, Carter Schonwald<br>
>> <<a href="mailto:carter.schonwald@gmail.com" target="_blank">carter.schonwald@gmail.com</a>> wrote:<br>
>>><br>
>>> Hey Patrick,<br>
>>> both are excellent ideas for work that would be really valuable for the<br>
>>> community!<br>
>>> (independent of whether or not they can be made into GSoC-sized chunks)<br>
>>><br>
>>> -------<br>
>>> I'm actually hoping to invest some time this summer investigating<br>
>>> improving the numerics optimization story in ghc. This is in large part<br>
>>> because I'm writing LOTs of numeric codes in haskell presently (hopefully on<br>
>>> track to make some available to the community ).<br>
>>><br>
>>> That said, it's not entirely obvious (at least to me) what a tractable,<br>
>>> focused, GSoC-sized subset of the numerics optimization improvement would be,<br>
>>> and it would also have to be a subset that has real performance impact and<br>
>>> doesn't benefit from e.g. using -fllvm rather than -fasm.<br>
>>> ---------<br>
>>><br>
>>> For helping pave the way to better parallel builds, what are some self<br>
>>> contained units of work on ghc (or related work on cabal) that might be<br>
>>> tractable over a summer? If you can put together a clear roadmap of "work<br>
>>> chunks" that are tractable over the course of the summer, I'd favor choosing<br>
>>> that work, especially if you can give a clear outline of the plan per chunk<br>
>>> and how to evaluate the success of each unit of work.<br>
>>><br>
>>> basically: while both are high value projects, helping improve the<br>
>>> parallel build tooling (whether in performance or correctness or both!) has<br>
>>> a more obvious scope of community impact, and if you can layout a clear plan<br>
>>> of work that GHC folks agree with and seems doable, i'd favor that project<br>
>>> :)<br>
>>><br>
>>> hope this feedback helps you sort out project ideas<br>
>>><br>
>>> cheers<br>
>>> -Carter<br>
>>><br>
>>><br>
>>><br>
>>><br>
>>> On Sun, Apr 21, 2013 at 12:20 PM, Patrick Palka <<a href="mailto:patrick@parcs.ath.cx" target="_blank">patrick@parcs.ath.cx</a>><br>
>>> wrote:<br>
>>>><br>
>>>> Hi,<br>
>>>><br>
>>>> I'm interested in participating in the GSoC by improving GHC with one of<br>
>>>> these two features:<br>
>>>><br>
>>>> 1) Implement native support for compiling modules in parallel (see<br>
>>>> #910). This will involve making the compilation pipeline thread-safe,<br>
>>>> implementing the logic for building modules in parallel (with an emphasis on<br>
>>>> keeping compiler output deterministic), and lots of testing and<br>
>>>> benchmarking. Being able to seamlessly build modules in parallel will<br>
>>>> shorten the time it takes to recompile a project and will therefore improve<br>
>>>> the life of every GHC user.<br>
>>>><br>
>>>> 2) Improve existing constant folding, strength reduction and peephole<br>
>>>> optimizations on arithmetic and logical expressions, and optionally<br>
>>>> implement a core-to-core pass for optimizing nested comparisons (relevant<br>
>>>> tickets include #2132, #5615, #4101). GHC currently performs some of these<br>
>>>> simplifications (via its BuiltinRule framework), but there is a lot of room<br>
>>>> for improvement. For instance, the core for this snippet is essentially<br>
>>>> identical to the Haskell source:<br>
>>>><br>
>>>> foo :: Int -> Int -> Int -> Int<br>
>>>> foo a b c = 10*((b+7+a+12+b+9)+4) + 5*(a+7+b+12+a+9) + 7 + b + c<br>
>>>><br>
>>>> Yet the RHS is actually equivalent to<br>
>>>><br>
>>>> 20*a + 26*b + c + 467<br>
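A sampling check (not a proof) that the two forms agree on Int over a small grid of points; fooSimplified is a name introduced here for the claimed simplified form:

```haskell
foo :: Int -> Int -> Int -> Int
foo a b c = 10*((b+7+a+12+b+9)+4) + 5*(a+7+b+12+a+9) + 7 + b + c

-- The identity holds exactly over Int, including under wrapping,
-- since it is a linear rearrangement.
fooSimplified :: Int -> Int -> Int -> Int
fooSimplified a b c = 20*a + 26*b + c + 467

main :: IO ()
main = print (and [ foo a b c == fooSimplified a b c
                  | a <- [-5..5], b <- [-5..5], c <- [-5..5] ])
-- prints True
```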
>>>><br>
>>>> And:<br>
>>>><br>
>>>> bar :: Int -> Int -> Int<br>
>>>> bar a b = a + b - a - b -- the RHS should be optimized away to 0<br>
>>>><br>
>>>> Other optimizations include: multiplication and division by powers of<br>
>>>> two should be converted to shifts; multiple plusAddr calls with constant<br>
>>>> operands should be coalesced into a single plusAddr call; floating point<br>
>>>> functions should be constant folded, etc.<br>
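A quick check of the power-of-two strength reductions mentioned, written out by hand with Data.Bits:

```haskell
import Data.Bits (shiftL, shiftR)

main :: IO ()
main = do
  -- x * 2^k as a left shift; holds for negative x too in two's complement
  print (all (\x -> x * 8 == x `shiftL` 3) [-1000 .. 1000 :: Int])
  -- x `div` 2^k as an arithmetic right shift; note that `div` (floor
  -- division) matches shiftR even for negative x, whereas `quot`
  -- (truncating division) would not, so the reduction depends on
  -- which division primitive is in play
  print (all (\x -> x `div` 4 == x `shiftR` 2) [-1000 .. 1000 :: Int])
```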
>>>><br>
>>>> GHC should be able to perform all these algebraic simplifications. Of<br>
>>>> course, emphasis should be placed on the correctness of such<br>
>>>> transformations. A flag for performing unsafe optimizations like assuming<br>
>>>> floating point arithmetic is associative and distributive should be added.<br>
>>>> This proposal will benefit anybody writing or using numerically intensive<br>
>>>> code.<br>
>>>><br>
>>>><br>
>>>> I'm wondering what the community thinks of these projects. Which project<br>
>>>> is a better fit for GSoC, or are both a good fit? Is a mentor willing to<br>
>>>> supervise one of these projects?<br>
>>>><br>
>>>> Thanks for your time.<br>
>>>> Patrick<br>
>>>><br>
>>>> (A little about myself: I'm a Mathematics student in the US, and I've<br>
>>>> been programming in Haskell for about 3.5 years. Having a keen interest in<br>
>>>> Haskell and compilers, I began studying the GHC source about 1 year ago and<br>
>>>> I've since gotten a good understanding of its internals, contributing a few<br>
>>>> patches along the way.)<br>
>>>><br>
>>>><br>
>>>> _______________________________________________<br>
>>>> Haskell-Cafe mailing list<br>
>>>> <a href="mailto:Haskell-Cafe@haskell.org" target="_blank">Haskell-Cafe@haskell.org</a><br>
>>>> <a href="http://www.haskell.org/mailman/listinfo/haskell-cafe" target="_blank">http://www.haskell.org/mailman/listinfo/haskell-cafe</a><br>
>>>><br>
>>><br>
>><br>
>><br>
>><br>
><br>
</div></div></blockquote></div><br></div>
</div></div><br>
<br></blockquote></div><br><br clear="all"><br>-- <br>Regards,<br>Boris