<div dir="ltr">Hi,<div><br></div><div>I 've got a lot of files which I need to proces in order to make them indexable by sphinx.</div><div>The files contain the data of a website with a custom perl based cms. Unfortunatly they sometimes contain xml/html tags like <i></div>
And since most of the texts are in Dutch and some are in French, they also contain a lot of special characters like ë, é, ...

I'm trying to replace the custom Perl-based CMS with a Haskell one, and I would like to add search capability. Since someone wrote Sphinx
bindings a few weeks ago, I thought I'd try that.

But transforming the files into something that Sphinx accepts seems a challenge. Most special character problems seem to go away when I use encodeString (Codec.Binary.UTF8.String) on the indexable data.
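Roughly, that step looks like this (writeUtf8 and the latin1-handle trick are just my sketch of it, not necessarily the right way):

    import System.IO
    import Codec.Binary.UTF8.String (encodeString)

    -- encodeString turns a Unicode String into a String of
    -- UTF-8 bytes (every Char < 256); writing it through a
    -- latin1 handle passes those bytes through unchanged.
    writeUtf8 :: FilePath -> String -> IO ()
    writeUtf8 path s = withFile path WriteMode $ \h -> do
      hSetEncoding h latin1
      hPutStr h (encodeString s)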
But the Sphinx indexer complains that the XML isn't valid. When I look at the errors, this seems to be due to some documents containing not-well-formed HTML. I would like to use a programmatic solution to this problem.
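One approach I'm considering (just a sketch on my part, assuming the tagsoup package, which is happy to parse malformed markup) is to drop the tags altogether before handing the text to the indexer:

    import Text.HTML.TagSoup (parseTags, innerText)

    -- tagsoup parses even broken HTML without failing;
    -- innerText throws the tags away and keeps the text,
    -- which is all the indexer really needs.
    stripTags :: String -> String
    stripTags = innerText . parseTags

Would that be a reasonable way to go, or is there something better?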
And is there some Haskell function which converts special tokens like & -> &amp; and é -> &eacute;?
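If nothing like that exists, I suppose I could hand-roll it; since XML only predefines the five entities &amp;, &lt;, &gt;, &quot; and &apos;, numeric references like &#233; are probably safer than &eacute; anyway. A rough sketch:

    import Data.Char (ord)

    -- Escape the XML special characters and turn everything
    -- non-ASCII into a numeric character reference,
    -- e.g. 'é' becomes "&#233;".
    escapeXml :: String -> String
    escapeXml = concatMap esc
      where
        esc '&' = "&amp;"
        esc '<' = "&lt;"
        esc '>' = "&gt;"
        esc '"' = "&quot;"
        esc c
          | ord c > 127 = "&#" ++ show (ord c) ++ ";"
          | otherwise   = [c]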
Thanks in advance,

Pieter

-- 
Pieter Laeremans <pieter@laeremans.org>

"The future is here. It's just not evenly distributed yet." W. Gibson