[PATCH 0/3] RISC-V: Add -menable-experimental-extensions and support bitmanip instructions
Palmer Dabbelt
palmer@dabbelt.com
Fri Dec 18 20:21:16 GMT 2020
On Tue, 15 Dec 2020 18:05:16 PST (-0800), Jim Wilson wrote:
> On Tue, Dec 15, 2020 at 10:23 AM Palmer Dabbelt <palmer@dabbelt.com> wrote:
>
>> IIUC the plan is to start supporting draft extensions in binutils/GCC. I
>> don't
>> do a whole lot of binutils/GCC work any more so I'm not sure my opinion
>> should
>> carry any weight, but I just wanted to point out that I think this is a bad
>> policy to adopt.
>>
>
> There are multiple practical problems here.
>
> LLVM is already accepting draft extensions. If we don't, then we risk
> losing the user community.
I'm not sure I agree there. A big part of the reason I'm arguing for keeping
the extensions on a branch is that I think users would be better served by the
branch. Users (assuming we're talking distro toolchains here) are at least
months out of date WRT trunk, which means they're always going to have out of
date drafts and we're just going to tell them to go use trunk.
> There are development issues here. We have at least 3 different git repos
> where B extension work is happening. It is a mess. The only place where I
> can get everyone to agree to do work is the FSF repo. The FSF repo is also
> the only place where everyone that should have write access does have write
> access.
>
> The github riscv repos are causing problems. The longer we use them, the
> more problems that they cause. I need to deprecate them. But I can't
> deprecate them until I get everything upstream, and that includes the draft
> extension work that we have been doing there.
I agree. I'm certainly not suggesting using those, FSF is the place to do it
(both for access reasons, and so we can deal with the copyright issues). My
suggestion is simply an FSF branch (or FSF repo? do we have multiple now that
we're on git? I don't think it matters here).
>> The issue I have here is bringing code into binutils/GCC that we're
>> intending
>> on quickly deprecating -- for example, it appears this one implements a
>> specification that hasn't even been versioned yet. That is just not a way
>> to
>> build out a useable ecosystem. Imagine the spot we'll be in a year or two
>> from
>> now trying to cobble together trees of the various systems projects that
>> all
>> speak the same flavor of an extension (doubly so if we can't rely on
>> version
>> numbers).
>>
>
> The code we are adding is expected to be unchanged in the official version
> of the zbb and zba extensions. This is of course no guarantee, but we have
> multiple parties that have agreed on this. So this is not code that we are
> expecting to deprecate. The other ones are still in flux and may need to
> be deprecated. The only real problem is that the current updated draft
> hasn't been given a version number yet. I've asked for one to be assigned
> to it so that the patches can use that version number.
If that's really the case then I don't see any reason to avoid putting those on
trunk. I guess maybe I'm just a bit jaded here, but I feel like we get
promised that many things will get finished soon around every one of these
RISC-V conferences, only to have that swing the other way after a few weeks.
That kind of stuff is really why I'm inclined to just decouple ourselves
entirely from the foundation's processes and go the "support extant hardware"
route.
I would argue, though, that if we really do expect these to be unchanged in the
official version then why aren't we just supporting them in the same fashion we
support everything else? Sure, if it turns out that things drift around a bit
or ratification fails we can deprecate it later (once we've convinced ourselves
nobody is implementing the old version), but that's a very different thing than
saying up front that we're going to deprecate stuff. If people are starting to say they're
stable then people are going to start implementing them, so even if they do
change we'll probably want to support these either way.
I definitely agree we need a version number (and an actual stable one, not this
changing draft that happens to carry a version).
>> My personal bar for merging support for an extension would be the existence
>> of
>> hardware that implements that extension.
>>
>
> RVI has already decided that there must be toolchain support before we can
> have an official hardware extension. If we can't have official toolchain
> support until the hardware extension is official, then we have a circular
> dependency and are screwed. Something needs to go first here, and it makes
> sense that it is the toolchain support. So we add toolchain support first
> conditional on -menable-experimental-extensions, and then the hardware
> extension becomes official, and then the toolchain support is no longer
> conditional on -menable-experimental-extensions.
Hasn't that always been the case? We at least used to say people needed a
software stack before things were ratified, that's why things like the KVM port
showed up so early (the guys doing it thought it would be important to have a
hypervisor extension ratified early on). Most of that was pretty light-weight
on the toolchain side so I guess we just let it fly under the radar over here?
In practice all the software dependencies have shown up way before that
hardware implementation requirement, and my guess is that will remain the case.
IIUC "patches approved by the maintainers to be merged" has been sufficient to
freeze extensions for the ratification vote, at which point we can merge the
code. That's generally been much faster than even getting the 45-day period
formally announced, much less the ratification vote.
The H extension and KVM port were this way last time I looked: the software was
done (or at least, targeted the draft that was supposed to be stable, things
drifted around a bit) a year or two ago, we've just been waiting for the
hardware to show up so the extension can be properly frozen. This is certainly
an example of the headaches involved in waiting for extensions to finish, but
also a good example of why we wait: the first time the KVM port was suggested
for merging was when we were in this "we're targeting this draft, with a few
spec differences, but that's going to be compatible with the final version"
stage. It turns out we were a long way from a final spec, with at least one
more round of "it's actually done now" in the middle.
Maybe people are more likely to upgrade GCC versions these days than they were
before, as things don't break as much as they used to, but I'm still hesitant
to ask people to stick to the latest version.
>> That said, that's all my personal bar. If you want to support draft
>> extensions
>> earlier that's fine with me, it's your time and I'm fine with you spending
>> it
>> however you want. I just don't want those extensions disappearing out from
>> under me before we even know that they're going to remain unimplemented. I
>> know it's more work to support multiple versions of extensions, but we're
>> going
>> to have to do that at some point soon so we might as well just start taking
>> that seriously now.
>>
>
> The whole point of the -menable-experimental-extensions option is that the
> support may change in an incompatible way tomorrow. If you don't like
> that, then don't use the option.
>
> Yes, we need to support multiple versions of extensions, and we are adding
> support for that, but this is impractical to do for draft extensions, as
> they can change in difficult to support ways, including backward
> compatibility breaks. Once an extension becomes official, then there
> should be no backward incompatible changes, and it gets easier to support
> multiple versions.
I agree there should be no backwards-incompatible changes, but in practice
there are. We've constructed a complicated rationale for the redefinition of
"I" not being a backwards-incompatible change (though TBH I don't really buy
it), but I really don't see an argument for the PMP thing: it is impossible to
build hardware that is compliant with both the pre-PMP spec and the post-PMP
spec (I guess that non-existent CSR clause could be used to justify the trap,
but I probably wouldn't buy that one either), and those differences broke the boot
on actual hardware. That was obviously a mistake and we can produce software
that is compatible with both specifications so it's not that bad, but these
sorts of mistakes will continue to happen until we have users.
You guys do this way more than I do, so if you think this is the right way to
go then that's fine, let's do it. I'm just not convinced. I know
compatibility is more work, and I know I don't have a lot of ground to stand on
here when I'm throwing away an 8-month old QEMU, but I think this is going to
end up making more work for us in the long term than it saves in the short
term. Happy to be wrong, though ;)