Well, after I reorganised my mdadm.conf file to use UUIDs throughout, I was at least able to boot from the "fedup" GRUB option. However, while the RAID 1 /boot and system-related partitions were recognised, the system booted the fedup kernel image (3.11.something.fc20) but then carried on to load FC19 instead of proceeding with the upgrade. It also still didn't recognise the RAID 6; instead it (somehow) started each member partition as an individual array, the first one being /dev/md2 and the rest being /dev/md127 down to /dev/md123. I was able to stop these and then reassemble the "real" /dev/md2, though. For reference, my mdadm.conf now looks like this:
# mdadm.conf written out by anaconda
AUTO +imsm +1.x -all
ARRAY /dev/md/localhost.localdomain:0 level=raid1 num-devices=2 UUID=004405f6:41912a19:4858597a:0aad2e6e
ARRAY /dev/md/localhost.localdomain:1 level=raid1 num-devices=2 UUID=1d2049af:7876336f:a6373c2d:76575396
ARRAY /dev/md/localhost.localdomain:2 level=raid6 num-devices=6 UUID=86adbf1c:c318751b:8dad1243:6e89715a
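Stopping the stray per-member arrays and reassembling the real RAID 6 went roughly like this (a sketch from memory; the exact stray device names will vary from boot to boot, and these commands need root and the actual member disks present):

```shell
# Stop the bogus single-member arrays that were auto-started
# (here the first appeared as /dev/md2, the rest as /dev/md127 down to /dev/md123)
mdadm --stop /dev/md2 /dev/md127 /dev/md126 /dev/md125 /dev/md124 /dev/md123

# Reassemble the real RAID 6 by the array UUID from mdadm.conf,
# so mdadm finds all six members regardless of device naming
mdadm --assemble /dev/md2 --uuid=86adbf1c:c318751b:8dad1243:6e89715a

# Confirm all six members are active
cat /proc/mdstat
```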
While fedup does build mdadm.conf into the initramfs, it uses the copy that is current at the time you run the pre-install fedup command, so it doesn't include any changes made between then and booting into fedup. Memo to self to remember that in future.
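In other words, had I realised in time, simply re-running the pre-install step after editing mdadm.conf would have rebuilt the upgrade initramfs with the corrected config. Something like (adjust the target release and source option to match your setup):

```shell
# Re-run the pre-install step so the current mdadm.conf is
# baked into the freshly built upgrade initramfs
fedup --network 20
```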
The failure to start the upgrade seems to have been due to the boot arguments missing "systemd.unit=system-upgrade.target". This Bugzilla case, https://bugzilla.redhat.com/show_bug.cgi?id=1038863, describes my symptoms exactly, even though I was on fedup 0.8.0. Once I added that argument, the upgrade to FC20 proceeded as expected, and the RAID 6 is now recognised as normal.
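For anyone hitting the same thing, one way to add the argument persistently is with grubby against the fedup boot entry (the kernel path here is illustrative; check your own /boot and GRUB menu entries first). Alternatively, press 'e' at the GRUB menu and append the argument to the kernel line for a one-off boot.

```shell
# Add the missing upgrade target to the fedup boot entry
# (kernel path is an assumption -- verify it on your system)
grubby --update-kernel=/boot/vmlinuz-fedup \
       --args="systemd.unit=system-upgrade.target"
```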
I don't know if the issue with the boot arguments is related to RAID, but fedup still seems to be very sensitive to existing RAID configuration. I'll update the Bugzilla report to cover this.
Hope this may be of help to someone,