I believe there is a bit of a misconception about what does or does not require a new backend. GHC's compilation pipeline goes through a number of different intermediate representations, and a backend can take off from any of them; STG and Cmm are the most popular. All our Native Code Generators and the LLVM codegen take off from Cmm. Whether or not that is the correct input representation for your target largely depends on the target and the design of the code generator. GHCJS takes off from STG, and so does Csaba's GRIN work via the external STG, I believe. IIRC Asterius takes off from Cmm. I don't remember the details about Eta.
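If you want to see the IRs a backend would consume, GHC can dump them for you. A minimal example (the exact dump-flag names vary a bit between GHC versions):

```haskell
-- A toy module for inspecting GHC's intermediate representations.
-- Compile with, e.g.:
--
--   ghc -O -ddump-stg-final -ddump-cmm -ddump-to-file Fib.hs
--
-- and look at the generated .dump-stg-final / .dump-cmm files to see
-- the STG and Cmm a backend would take off from.
module Fib where

fib :: Int -> Int
fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)
```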
Why fork? Do you want to deal with GHC and GHC's development? If not, fork. Do you want to have to keep up with GHC's development yourself? Then maybe don't fork. Do you think your compiler can stand on its own and doesn't need to follow GHC much, apart from being a Haskell compiler? By all means fork.
Eta is a bit special here: it forked off and basically started customising its Haskell compiler specifically for the JVM. This also allowed them to make radical changes that would not have been permissible in mainline GHC. (Mainline GHC tries to support multiple platforms and architectures at all times; breaking any of them isn't an option that can be taken lightly.) Eta also started having Etlas, a custom Cabal, ... I'd still like to see a lot from Eta and its ecosystem be re-integrated into GHC. There have to be good ideas there that can be brought back. It just needs someone to go look and do the work.
GHCJS is being aligned more closely with GHC right now, precisely so that it can eventually be re-integrated into GHC.
Asterius went down the same path, likely inspired by GHCJS, but I think I was able to convince the author that eventual upstreaming should be the goal, and that the project should therefore try to stay as close as possible to GHC.
Now, if you are considering adding a codegen backend, this can be done, but again it depends on your exact target. I'd love to see a CLR target, yet I don't know enough about the CLR to give informed suggestions here.
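To make that concrete, here is a very rough sketch of the shape of a Cmm-consuming backend, assuming the GHC 9.x module layout and building against the ghc package; the existing backends to crib from live in GHC.CmmToAsm (the NCGs) and GHC.CmmToLlvm:

```haskell
module BackendSketch where

import GHC.Cmm (CmmGroup, GenCmmDecl (..))

-- Stand-in for whatever your target's code representation is.
type TargetCode = String

-- At its core, a codegen backend is a traversal over Cmm declarations:
-- procedures (with their control-flow graphs) and static data.
emitGroup :: CmmGroup -> [TargetCode]
emitGroup = map emitDecl
  where
    emitDecl (CmmProc _info _lbl _regs _graph) =
      "<translate one procedure's CFG to your target here>"
    emitDecl (CmmData _section _statics) =
      "<emit one block of static data here>"
```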
If you have a toolchain that functions sufficiently like a stock C toolchain (or you can easily make your toolchain look sufficiently like one), most of it will just work. If you can separate your build into compilation of source to some form of object code, aggregation of object code into archives, and some form of linking (objects and archives into shared objects or executables), you can likely plug your toolchain into GHC (and Cabal) and have it work, once you have taught GHC how to produce your target language's object code.
If your toolchain does things differently, a bit more work is involved in teaching GHC (and Cabal) about that.
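Concretely, GHC learns which external tools to run from its settings file (at `$(ghc --print-libdir)/settings`), and individual invocations can override them with flags like -pgmc and -pgml. The settings file is just a Haskell-readable association list; key names vary between GHC versions, and the mytarget-* tools below are made up for illustration:

```haskell
[ ("C compiler command", "mytarget-cc")
, ("ar command",         "mytarget-ar")
, ("ld command",         "mytarget-ld")
]
```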
This all only gives you *Haskell*, though. You still need the Runtime System. If you have a C -> target compiler, you can try to re-use GHC's RTS. This is what the WebGHC project did: they re-used GHC's RTS and implemented a shim for Linux syscalls, emulating just enough for the RTS to think it's running on some musl-like Linux. You most likely want something proper here eventually, but this can be a first stab at getting something working.
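To see why a syscall shim gets you quite far: base itself reaches the OS through plain C imports, so providing C-level implementations of write, read, clock functions, and so on is enough to keep the RTS and base happy. The following mirrors an import from System.Posix.Internals:

```haskell
module ShimDemo where

import Data.Word       (Word8)
import Foreign.C.Types (CInt, CSize, CSsize)
import Foreign.Ptr     (Ptr)

-- If your target provides a C-level `write` with POSIX semantics,
-- real or shimmed, bindings like this keep working unmodified.
foreign import ccall unsafe "write"
  c_write :: CInt -> Ptr Word8 -> CSize -> IO CSsize
```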
Next you'll have to deal with c-bits: Haskell packages that link against C parts. This is going to be challenging (not impossible, but challenging), as much of the Haskell ecosystem expects the ability to compile C files and use those for low-level system interaction.
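The typical c-bits pattern looks like this: a package ships a C file under cbits/, lists it in the c-sources: field of its .cabal file, and binds it via the FFI. All names below are hypothetical; the point is that your toolchain must be able to compile and link that C:

```haskell
module CBitsDemo where

import Data.Word       (Word8, Word64)
import Foreign.C.Types (CSize)
import Foreign.Ptr     (Ptr)

-- Binding to a hypothetical helper defined in the package's
-- cbits/hashing.c.
foreign import ccall unsafe "my_hash_bytes"
  c_hashBytes :: Ptr Word8 -> CSize -> IO Word64
```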
Once you have your codegen working, you can use Hackage overlays to build a set of patched packages. At that point you can start patching ecosystem packages to work on your target until your changes are upstreamed, and provide your users with a Hackage overlay (essentially Hackage plus patches for specific packages).
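For a real-world model of such an overlay, look at how head.hackage serves patched packages for GHC HEAD: users add an extra package repository to their cabal.project. A sketch, with a made-up name and URL:

```
repository my-target-overlay
  url: https://example.org/my-target-overlay
  secure: True
  -- (a secure repository also needs root-keys: and key-threshold:)
```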
Hope this helps.