[PATCH v2 0/3] RISC-V: ifunced memcpy using new kernel hwprobe interface
Evan Green
evan@rivosinc.com
Thu Mar 30 18:31:56 GMT 2023
Hi Adhemerval,
On Wed, Mar 29, 2023 at 1:13 PM Adhemerval Zanella Netto
<adhemerval.zanella@linaro.org> wrote:
>
>
>
> On 29/03/23 16:45, Palmer Dabbelt wrote:
> > On Wed, 29 Mar 2023 12:16:39 PDT (-0700), adhemerval.zanella@linaro.org wrote:
> >>
> >>
> >> On 28/03/23 21:01, Palmer Dabbelt wrote:
> >>> On Tue, 28 Mar 2023 16:41:10 PDT (-0700), adhemerval.zanella@linaro.org wrote:
> >>>>
> >>>>
> >>>> On 28/03/23 19:54, Palmer Dabbelt wrote:
> >>>>> On Tue, 21 Feb 2023 11:15:34 PST (-0800), Evan Green wrote:
> >>>>>>
> >>>>>> This series illustrates the use of a proposed Linux syscall that
> >>>>>> enumerates architectural information about the RISC-V cores the system
> >>>>>> is running on. In this series we expose a small wrapper function around
> >>>>>> the syscall. An ifunc selector for memcpy queries it to see if unaligned
> >>>>>> access is "fast" on this hardware. If it is, it selects a newly provided
> >>>>>> implementation of memcpy that doesn't work hard at aligning the source and
> >>>>>> destination buffers.
> >>>>>>
> >>>>>> This is somewhat of a proof of concept for the syscall itself, but I do
> >>>>>> find that in my goofy memcpy test [1], the unaligned memcpy performed at
> >>>>>> least as well as the generic C version. This is however on QEMU on an M1
> >>>>>> Mac, so not a test of any real hardware (more a smoke test that the
> >>>>>> implementation isn't silly).
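For reference, the selector in patch 3 boils down to roughly the sketch
below. It is simplified, and the key/value names follow the proposed
<asm/hwprobe.h> header, so they may still shift along with the kernel v3;
__riscv_hwprobe is the small wrapper from patch 1.

    /* Simplified sketch of the memcpy ifunc selector, not verbatim
       from the patch.  */
    #include <asm/hwprobe.h>
    #include <stddef.h>

    typedef void *(*memcpy_t) (void *, const void *, size_t);

    extern void *__memcpy_generic (void *, const void *, size_t);
    extern void *__memcpy_noalignment (void *, const void *, size_t);

    static memcpy_t
    select_memcpy (void)
    {
      struct riscv_hwprobe pair = { RISCV_HWPROBE_KEY_CPUPERF_0, 0 };

      /* NULL cpuset: ask about all CPUs.  Fall back to the generic
         routine if the syscall fails or misaligned access is not
         reported as fast on every hart.  */
      if (__riscv_hwprobe (&pair, 1, 0, NULL, 0) == 0
          && (pair.value & RISCV_HWPROBE_MISALIGNED_MASK)
             == RISCV_HWPROBE_MISALIGNED_FAST)
        return __memcpy_noalignment;
      return __memcpy_generic;
    }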
> >>>>>
> >>>>> QEMU isn't a good enough benchmark to justify a new memcpy routine in glibc. Evan has a D1, which does support misaligned access and runs some simple benchmarks faster. There's also been some minor changes to the Linux side of things that warrant a v3 anyway, so he'll just post some benchmarks on HW along with that.
> >>>>>
> >>>>> Aside from those comments,
> >>>>>
> >>>>> Reviewed-by: Palmer Dabbelt <palmer@rivosinc.com>
> >>>>>
> >>>>> There's a lot more stuff to probe for, but I think we've got enough of a proof of concept for the hwprobe stuff that we can move forward with the core interface bits in Linux/glibc and then unleash the chaos...
> >>>>>
> >>>>> Unless anyone else has comments?
> >>>>
> >>>> Until riscv_hwprobe lands in Linus' tree as an official Linux ABI, this patchset
> >>>> cannot be merged. We have failed to enforce this on some occasions (like Intel
> >>>> CET) and it turned into a complete mess after a few years...
> >>>
> >>> Sorry if that wasn't clear, I was asking if there were any more comments from the glibc side of things before merging the Linux code.
> >>
> >> Right, so is this already settled as the de facto ABI for querying system
> >> information on RISC-V? Or is it still being discussed? Is it in a -next branch
> >> already, and/or has it been tested with a patched glibc?
> >
> > It's not in for-next yet, but various patch sets / proposals have been on the lists for a few months and it seems like discussion on the kernel side has pretty much died down. That's why I was pinging the glibc side of things: if anyone here has comments on the interface, now is the time to chime in. If there are no comments then we're likely to end up with this in the next release (so queued into for-next soon, Linus' master in a month or so).
> >
> > IIUC Evan's been testing the kernel+glibc stuff on QEMU, but he should be able to ack that explicitly (it's a little vague in the cover letter). There's also a glibc-independent kselftest as part of the kernel patch set: https://lore.kernel.org/all/20230327163203.2918455-6-evan@rivosinc.com/ .
>
> I am not sure if this is the latest thread, but from the cover letter link it
> seems that Arnd has raised some concerns about the interface [1] that have not
> been fully addressed.
I've replied to that thread.
>
> From the libc perspective, the need to specify the query keys to riscv_hwprobe
> should not be a problem (libc must know what to handle; unknown tags are of no
> use), and it simplifies buffer management (there is no need to query an unknown
> set of keys or to allocate a large buffer to handle multiple non-required pairs).
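Right, and since the caller names exactly the keys it cares about, glibc can
get away with a small on-stack array, roughly like this (a sketch, with
names per the current proposal):

    /* glibc asks only for the keys it understands; a fixed array is
       enough, with no dynamic buffer management needed.  */
    struct riscv_hwprobe pairs[] = {
      { RISCV_HWPROBE_KEY_MVENDORID, 0 },
      { RISCV_HWPROBE_KEY_CPUPERF_0, 0 },
    };

    if (__riscv_hwprobe (pairs, sizeof pairs / sizeof pairs[0],
                         0, NULL, 0) == 0)
      {
        /* pairs[i].value now holds the answer for pairs[i].key; keys
           the kernel does not recognize come back with key == -1.  */
      }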
>
> However, I agree with Arnd that there should be no need to optimize for hardware
> that has an asymmetric set of features, and, at least for glibc usage and most
> runtime feature selection, it does not make sense to query per-cpu information
> (unless you are doing something very specific, like pinning the process to
> specific cores and enabling core-specific code).
I pushed back on that in my reply upstream; feel free to jump in
there. I think you're right that glibc probably wouldn't ever use the
cpuset aspect of the interface, but the gist of my reply upstream is
that more specialized apps may.
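For example, a specialized app might pin itself to one core and then ask
about just that core, something like this sketch (using the proposed cpuset
parameter; pin_and_probe is hypothetical and the exact wrapper signature may
still change):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <asm/hwprobe.h>

    /* Hypothetical app: pin to CPU 2, then query CPU 2 only, so it can
       select core-specific code paths afterwards.  */
    static void
    pin_and_probe (void)
    {
      cpu_set_t cpus;
      CPU_ZERO (&cpus);
      CPU_SET (2, &cpus);
      sched_setaffinity (0, sizeof cpus, &cpus);

      struct riscv_hwprobe pair = { RISCV_HWPROBE_KEY_CPUPERF_0, 0 };
      __riscv_hwprobe (&pair, 1, sizeof cpus, &cpus, 0);
    }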
>
> I also wonder how hotplug or cpusets would play with the vDSO support, and how
> the kernel would synchronize any updates to the private vDSO data.
The good news is that the cached data in the vDSO is not ABI, it's
hidden behind the vDSO function. So as things like hotplug start
evolving and interacting with the vDSO cache data, we can update what
data we cache and when we fall back to the syscall.
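In other words, the vDSO entry point is free to be shaped roughly like this
(a sketch, not the actual patch; all_keys_cached, fill_from_cache, and
riscv_hwprobe_syscall are hypothetical helpers):

    /* Serve from the all-CPUs cache when possible, otherwise fall back
       to the real syscall.  As hotplug support evolves, only this
       function and its cache have to change, not the ABI.  */
    static long
    __vdso_riscv_hwprobe (struct riscv_hwprobe *pairs, size_t pair_count,
                          size_t cpusetsize, unsigned long *cpus,
                          unsigned int flags)
    {
      if (flags == 0 && cpus == NULL
          && all_keys_cached (pairs, pair_count))
        return fill_from_cache (pairs, pair_count);
      return riscv_hwprobe_syscall (pairs, pair_count, cpusetsize,
                                    cpus, flags);
    }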
-Evan
>
> [1] https://lore.kernel.org/lkml/20230221190858.3159617-1-evan@rivosinc.com/T/#m452cffd9f60684e9d6d6dccf595f33ecfbc99be2