AES leaks, cgd ciphers, and vector units in the kernel

[bcc tech-crypto, tech-security; followups to tech-kern]

It's been well-known since 2005[1] that naive AES software, like we
use today in the NetBSD kernel, is vulnerable to cache-timing attacks
(CVE-2005-1797). These attacks have gotten progressively better over
time, and over a decade ago were even applied to Linux dm-crypt disk
encryption[2].

Timing side channel attacks are not theoretical: shared virtual hosts
and JavaScript engines in web browsers provide adversaries with
abundant attack surfaces to trigger disk I/O, prime/probe/flush/reload
caches, and measure high-resolution timings.

We already replaced NIST CTR_DRBG-AES by NIST Hash_DRBG-SHA256 for
/dev/u?random in part because of AES timing side channel attacks.
It's long overdue for us to address them in cgd(4) and in anything
else in the kernel that uses AES.

The attached patch set provides a three-pronged approach to addressing
the problem:
1. Replace the variable-time AES reference implementation we've been
   using by constant-time AES software from Thomas Pornin's
   high-quality BearSSL library.

   Security impact: This essentially plugs the leak on all NetBSD
   platforms for all existing disk setups (and anything else in the
   kernel like IPsec) that already use AES, as long as they run an
   updated kernel.

   (In principle a C compiler could compile the BearSSL logic gates
   into secret-dependent branches and memory references, and in
   principle a machine could implement logic gates in variable time,
   but realistically this goes a long way to plugging the leak.)

   Performance impact: The cost is that constant-time AES software is
   much slower -- cgd AES-CBC encryption throughput is reduced to
   about 1/3, and decryption to about 1/2 (very roughly). This is
   bad, obviously, but it is mostly addressed by the next two parts.
2. Add support for CPU AES instructions on Intel, AMD, VIA, and
   aarch64 CPUs to implement the kernel's synchronous AES API,
   including machinery to allow the kernel to use the CPU's vector
   unit.

   Security impact: This generally plugs the leak (except perhaps in
   software CPU emulators like qemu) on all relevant hardware just
   by updating the kernel.

   Performance impact: This significantly improves performance over
   what it was before with variable-time AES software, on CPUs that
   have AES instructions we can use -- cgd AES-CBC throughput very
   roughly doubles on a VIA laptop I tried, for instance.

   So on ~all amd64 and aarch64 CPUs of the last decade (and VIA
   CPUs), this patch set improves security _and_ performance.
3. Add an alternative cgd cipher Adiantum[3], which is built out of
   AES (used only once per disk sector), Poly1305, NH, and XChaCha12,
   and has been deployed by Google for disk encryption on lower-end
   ARM systems.

   Security impact: Adiantum generally provides better disk
   encryption security than AES-CBC or AES-XTS because it encrypts
   an entire disk sector at a time, rather than individual cipher
   blocks independently like AES-XTS does, or suffixes in units of
   cipher blocks like AES-CBC does, so two snapshots of a disk
   reveal less information with Adiantum than with AES-CBC or
   AES-XTS. Of course, Adiantum is a different cipher, so you have
   to create new cgd volumes if you want to use it.

   (The Adiantum implementation uses the same AES logic as the rest
   of the kernel for the one invocation per disk sector it needs, so
   it will take advantage of constant-time software or hardware
   support.)

   Performance impact: Adiantum provides much better software
   performance than AES-CBC, AES-XTS, or generally anything that
   feeds all the data through AES. (The one AES invocation per disk
   sector accounts for only a small fraction of Adiantum's time,
   <10%.) This should generally provide performance that is at
   least as good as the leaky AES software was on machines that
   don't have CPU support for AES.
The net effect is:

(a) there is no more variable-time AES software in the kernel at all,
(b) on most machines of the past decade, AES is (a lot) faster, and
(c) there's an alternative to AES-CBC/AES-XTS in cgd for machines
    where fixing the security vulnerability made it slower.
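To make the shape of the kernel's synchronous AES API concrete, here
is a sketch of how a consumer drives it, using the declarations from
sys/crypto/aes/aes.h in the attached patches (key material handling
and error checking elided; the buffer must be a multiple of 16 bytes):

    struct aesenc enc;
    uint8_t key[32], iv[16], buf[8192];
    uint32_t nrounds;

    /* Expand the 256-bit key into round keys. */
    nrounds = aes_setenckey256(&enc, key);

    /* Encrypt buf in place in CBC mode; iv is updated along the way. */
    aes_cbc_enc(&enc, buf, buf, sizeof buf, iv, nrounds);

    /* Don't leave expanded round keys lying around. */
    explicit_memset(&enc, 0, sizeof enc);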
Some additional notes:

* Vector unit in the kernel.

  All the CPU AES instructions I dealt with (AES-NI, VIA Padlock,
  ARMv8.0-AES) require using the CPU's vector unit. The mechanism is
  that we disable interrupts and save any user lwp vector unit state
  before computing AES, and then zero the vector registers afterward
  to prevent any Spectre-class attacks:

  - If the kernel is using the vector unit while in a user lwp, we
    have to disable preemption because there's nowhere to save the
    kernel's vector registers alongside the user's vector registers.

  - If we ever want to compute AES in interrupt context we also need
    to disable interrupts, but if we decide never to do AES in
    interrupt context (which would be reasonable, just not a
    proposition I'm committing to at the moment) then disabling
    preemption instead of disabling interrupts would be sufficient.
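  Concretely, an MD AES routine wraps its vector-unit code like the
  following sketch (aesni_enc1 here is a hypothetical stand-in for
  the actual AES-NI assembly routine, not a name from the patches):

      static void
      aesni_enc(const struct aesenc *enc, const uint8_t in[static 16],
          uint8_t out[static 16], uint32_t nrounds)
      {

              fpu_kern_enter();   /* block interrupts/preemption and
                                     save any user lwp FPU state */
              aesni_enc1(enc, in, out, nrounds); /* uses XMM registers */
              fpu_kern_leave();   /* zero the vector registers */
      }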
  As future work, in kthreads, we don't need to disable preemption at
  all since there's no user lwp state, so we can save the kernel's
  vector registers in the lwp pcb. Also, in kthreads, we can avoid
  zeroing the vector registers after every AES subroutine, since user
  code can't even run until after switching to another lwp anyway.

  I experimented with doing this in cgd -- adding fpu_kthread_enter
  and fpu_kthread_leave around cgd_cipher to set a bit MDL_SYSTEM_FPU
  in the lwp, and teaching fpu save/restore to allow saving and
  restoring to kthreads with MDL_SYSTEM_FPU set -- and cgd throughput
  on my VIA laptop improved by about 1.2x on top of the already huge
  throughput increase from using the CPU instructions in the first
  place.

  I'm not settled on how this should manifest in an MI API yet,
  though, so the experiment is not included in the patch set other
  than to define fpu_kthread_enter/leave for experimentation.
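  For the record, the experiment was shaped like this around the cgd
  worker thread's call to cgd_cipher (schematic; the argument list is
  abbreviated, and this is not part of the attached patches):

      int s;

      s = fpu_kthread_enter();      /* mark this kthread as FPU-using */
      cgd_cipher(sc, dst, src, len, blkno, secsize, dir);
      fpu_kthread_leave(s);         /* restore the previous state */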
* Other CPUs' AES instructions.

  With a little more effort we could:

  - adapt the x86 AES-NI logic to 32-bit mode
  - add support for Cavium MIPS CPUs
  - adopt vectorized MD constant-time software for CPUs with vector
    units like Altivec, NEON, VFP, &c., even if they don't have AES
    instructions per se

  I didn't do any of that because I was going for low-hanging fruit,
  but I would be happy to help if you want to adopt other
  implementations.
  We could also use a similar mechanism for, e.g., synchronous SHA-256
  instructions, to make /dev/urandom (which uses NIST Hash_DRBG with
  SHA-256) faster on, e.g., aarch64 and Cavium MIPS CPUs. Also not a
  high priority for me because SHA-256 does not invite side channel
  threats like AES does, but happy to help if you want to work on it.
* Adiantum components.

  Adiantum is built out of components that are useful in their own
  right for other applications like WireGuard, notably Poly1305 and
  ChaCha, so we could fruitfully factor them out into their own
  modules and provide vectorized MD implementations of them from (say)
  SUPERCOP to further improve performance.

  This makes Adiantum more attractive than, e.g., Threefish as I
  suggested some years ago, which is a primitive that almost nobody
  uses in the real world.
* cgd disk sector sizes and Adiantum.

  cgd currently uses the underlying disk's sector size as advertised
  by getdisksize (i.e., DIOCGWEDGEINFO or DIOCGPARTINFO). On almost
  all disks today that's 512 bytes, even if the disk actually uses
  4096-byte sectors and requires r/m/w cycles to do 512-byte writes.

  The sector size should really be a parameter to cgd like the name of
  the cipher, because it qualitatively changes the cipher that cgd
  computes -- and if some chain of adapters causes a disk with
  4096-byte sectors to be presented with 512 bytes or vice versa,
  you'll see garbage on your disk.

  Unlike AES-CBC or AES-XTS (which don't really care what the sector
  size is), Adiantum also takes better advantage of larger sectors --
  cursory measurements suggest about 1.5x the throughput for
  4096-byte sectors over 512-byte sectors.

  I did not add any mechanism for configuring the sector size, but it
  would be good if we taught cgd to do that (and provided an upgrade
  path for storing it in the parameters file); see the sketch below.
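  Hypothetically, the parameters file might grow an entry alongside
  the existing ones -- to be clear, no such `sectorsize' keyword
  exists today; this is only to illustrate the proposal:

      algorithm adiantum;
      keylength 256;
      sectorsize 4096;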
* Other existing ciphers.

  Our 3DES, Blowfish, CAST128, Camellia, and Skipjack software in the
  kernel also obviously relies on secret-dependent array indices.
  These are not as high a priority because frankly I don't think
  anyone should be using these, and I'd rather get rid of them -- or
  maybe reduce 3DES and Blowfish to decryption only, to read old cgd
  disks -- than spend any other effort on them.
* Performance measurement.

  Most of the performance measurement I did -- which was very rough,
  enough to convince me that hardware AES as implemented here clearly
  wins in practice over even variable-time software AES, and that my
  totally untuned first draft of Adiantum is not worse than
  variable-time software AES -- was with:

      dd if=/dev/zero of=/tmp/disk bs=1m count=512 progress=$((512/80))
      vnconfig -cv vnd0 /tmp/disk
      cgdconfig -s cgd0 /dev/vnd0 aes-cbc 256 < /dev/zero

      # measure decryption throughput
      dd if=/dev/rcgd0d of=/dev/null bs=64k progress=$((512*1024/64/80))

      # measure encryption throughput
      dd if=/dev/zero of=/dev/rcgd0d bs=64k progress=$((512*1024/64/80))

  (Substitute `aes-xts 512' or `adiantum 256' in the cgdconfig
  incantation for a fair comparison.)
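  (For a real volume, rather than this throwaway benchmark setup, the
  usual cgdconfig flow would apply once the patches are in -- device
  names here are illustrative, and Adiantum takes a 256-bit key:

      cgdconfig -g -o /etc/cgd/wd0e adiantum 256
      cgdconfig cgd0 /dev/wd0e

  with the generated parameters file stored at /etc/cgd/wd0e.)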
Thoughts? Comments? Objections? Musical numbers by Groucho Marx on
the nature of consensus?

[1] Daniel J. Bernstein, `Cache-timing attacks on AES', 2005-04-14.
    https://cr.yp.to/papers.html#cachetiming

[2] Eran Tromer, Dag Arne Osvik, and Adi Shamir, `Efficient cache
    attacks on AES, and countermeasures', Journal of Cryptology 23(1),
    pp. 37--71, Springer, 2010. DOI: 10.1007/s00145-009-9049-y
    http://www.cs.tau.ac.il/~tromer/papers/cache-joc-official.pdf

[3] Paul Crowley and Eric Biggers, `Adiantum: length-preserving
    encryption for entry-level processors', IACR Transactions on
    Symmetric Cryptology 2018(4), pp. 39--61.
    https://doi.org/10.13154/tosc.v2018.i4.39-61
# HG changeset patch
# User Taylor R Campbell <riastradh@NetBSD.org>
# Date 1592424014 0
# Wed Jun 17 20:00:14 2020 +0000
# Branch trunk
# Node ID 4a0394d9dc15ee6e51a1f1d6ec158d6f172bb9e0
# Parent 9d717769d8e9978731b1dc571cacd36aa44c7d3d
# EXP-Topic riastradh-kernelcrypto
Spell `blowfish-cbc' as such, not like `bf-cbc'.
Gotta match the name we actually use for this to work!
diff -r 9d717769d8e9 -r 4a0394d9dc15 sys/dev/cgd.c
--- a/sys/dev/cgd.c Mon Jun 15 01:24:20 2020 +0000
+++ b/sys/dev/cgd.c Wed Jun 17 20:00:14 2020 +0000
@@ -1298,7 +1298,7 @@ cgd_ioctl_set(struct cgd_softc *sc, void
if (encblkno[i].v != CGD_CIPHER_CBC_ENCBLKNO1) {
if (strcmp(sc->sc_cfuncs->cf_name, "aes-cbc") &&
strcmp(sc->sc_cfuncs->cf_name, "3des-cbc") &&
- strcmp(sc->sc_cfuncs->cf_name, "bf-cbc")) {
+ strcmp(sc->sc_cfuncs->cf_name, "blowfish-cbc")) {
log(LOG_WARNING, "cgd: %s only makes sense for cbc,"
" not for %s; ignoring\n",
encblkno[i].n, sc->sc_cfuncs->cf_name);
# HG changeset patch
# User Taylor R Campbell <riastradh@NetBSD.org>
# Date 1591241685 0
# Thu Jun 04 03:34:45 2020 +0000
# Branch trunk
# Node ID 08a86cf7e9ffdc8949751596b2f93934c8f3b692
# Parent 4a0394d9dc15ee6e51a1f1d6ec158d6f172bb9e0
# EXP-Topic riastradh-kernelcrypto
Draft fpu_kthread_enter/leave on x86.
Only fit for kthreads, not user lwps. Preemptible, nestable.
diff -r 4a0394d9dc15 -r 08a86cf7e9ff sys/arch/amd64/include/proc.h
--- a/sys/arch/amd64/include/proc.h Wed Jun 17 20:00:14 2020 +0000
+++ b/sys/arch/amd64/include/proc.h Thu Jun 04 03:34:45 2020 +0000
@@ -55,6 +55,7 @@ struct mdlwp {
#define MDL_COMPAT32 0x0008 /* i386, always return via iret */
#define MDL_IRET 0x0010 /* force return via iret, not sysret */
#define MDL_FPU_IN_CPU 0x0020 /* the FPU state is in the CPU */
+#define MDL_SYSTEM_FPU 0x0040 /* system thread is allowed FPU use */
struct mdproc {
int md_flags;
diff -r 4a0394d9dc15 -r 08a86cf7e9ff sys/arch/i386/include/proc.h
--- a/sys/arch/i386/include/proc.h Wed Jun 17 20:00:14 2020 +0000
+++ b/sys/arch/i386/include/proc.h Thu Jun 04 03:34:45 2020 +0000
@@ -44,6 +44,7 @@ struct pmap;
struct vm_page;
#define MDL_FPU_IN_CPU 0x0020 /* the FPU state is in the CPU */
+#define MDL_SYSTEM_FPU 0x0040 /* system thread is allowed FPU use */
struct mdlwp {
volatile uint64_t md_tsc; /* last TSC reading */
diff -r 4a0394d9dc15 -r 08a86cf7e9ff sys/arch/x86/include/fpu.h
--- a/sys/arch/x86/include/fpu.h Wed Jun 17 20:00:14 2020 +0000
+++ b/sys/arch/x86/include/fpu.h Thu Jun 04 03:34:45 2020 +0000
@@ -33,6 +33,9 @@ void fpu_lwp_abandon(struct lwp *l);
void fpu_kern_enter(void);
void fpu_kern_leave(void);
+int fpu_kthread_enter(void);
+void fpu_kthread_leave(int);
+
void process_write_fpregs_xmm(struct lwp *, const struct fxsave *);
void process_write_fpregs_s87(struct lwp *, const struct save87 *);
diff -r 4a0394d9dc15 -r 08a86cf7e9ff sys/arch/x86/x86/fpu.c
--- a/sys/arch/x86/x86/fpu.c Wed Jun 17 20:00:14 2020 +0000
+++ b/sys/arch/x86/x86/fpu.c Thu Jun 04 03:34:45 2020 +0000
@@ -137,7 +137,8 @@ fpu_lwp_area(struct lwp *l)
struct pcb *pcb = lwp_getpcb(l);
union savefpu *area = &pcb->pcb_savefpu;
- KASSERT((l->l_flag & LW_SYSTEM) == 0);
+ KASSERT((l->l_flag & LW_SYSTEM) == 0 ||
+ (l->l_md.md_flags & MDL_SYSTEM_FPU));
if (l == curlwp) {
fpu_save();
}
@@ -154,7 +155,8 @@ fpu_save_lwp(struct lwp *l)
kpreempt_disable();
if (l->l_md.md_flags & MDL_FPU_IN_CPU) {
- KASSERT((l->l_flag & LW_SYSTEM) == 0);
+ KASSERT((l->l_flag & LW_SYSTEM) == 0 ||
+ (l->l_md.md_flags & MDL_SYSTEM_FPU));
fpu_area_save(area, x86_xsave_features);
l->l_md.md_flags &= ~MDL_FPU_IN_CPU;
}
@@ -343,6 +345,75 @@ fpu_lwp_abandon(struct lwp *l)
/* -------------------------------------------------------------------------- */
+static const union savefpu zero_fpu __aligned(64);
+
+/*
+ * s = fpu_kthread_enter()
+ *
+ * Allow the current kthread to use the FPU without disabling
+ * preemption as fpu_kern_enter/leave do. Must not be used in a
+ * user lwp. When done, call fpu_kthread_leave(s). May be
+ * recursively nested.
+ *
+ * Must not be invoked while in a fpu_kern_enter/leave block.
+ */
+int
+fpu_kthread_enter(void)
+{
+ struct lwp *l = curlwp;
+ int system_fpu = l->l_md.md_flags & MDL_SYSTEM_FPU;
+
+ KASSERTMSG(l->l_flag & LW_SYSTEM,
+ "fpu_kthread_enter is allowed only in kthreads");
+ KASSERTMSG(curcpu()->ci_kfpu_spl == -1,
+ "fpu_kthread_enter is not allowed between fpu_kern_enter/leave");
+
+ if (!system_fpu) {
+ /*
+ * Notify the FPU fault handler to save the FPU state
+ * for us.
+ */
+ l->l_md.md_flags |= MDL_SYSTEM_FPU;
+
+ /* Clear CR0_TS to enable the FPU. */
+ clts();
+ }
+
+ return system_fpu;
+}
+
+/*
+ * fpu_kthread_leave(s)
+ *
+ * Return to the previous state of whether the current kthread can
+ * use the FPU without disabling preemption.
+ */
+void
+fpu_kthread_leave(int system_fpu)
+{
+ struct lwp *l = curlwp;
+
+ KASSERTMSG(l->l_flag & LW_SYSTEM,
+ "fpu_kthread_leave is allowed only in kthreads");
+ KASSERTMSG(l->l_md.md_flags & MDL_SYSTEM_FPU,
+ "fpu_kthread_leave without fpu_kthread_enter");
+
+ if (!system_fpu) {
+ /*
+ * Zero the fpu registers; otherwise we might leak
+ * secrets through Spectre-class attacks to userland,
+ * even if there are no bugs in fpu state management.
+ */
+ fpu_area_restore(&zero_fpu, x86_xsave_features);
+
+ /* Set CR0_TS to disable use of the FPU. */
+ stts();
+
+ /* Stop asking to save our FPU state. */
+ l->l_md.md_flags &= ~MDL_SYSTEM_FPU;
+ }
+}
+
/*
* fpu_kern_enter()
*
@@ -359,6 +430,10 @@ fpu_kern_enter(void)
struct cpu_info *ci;
int s;
+ /* Nothing needed if we're in a kthread with FPU enabled. */
+ if (l->l_md.md_flags & MDL_SYSTEM_FPU)
+ return;
+
s = splhigh();
ci = curcpu();
@@ -392,10 +467,14 @@ fpu_kern_enter(void)
void
fpu_kern_leave(void)
{
- static const union savefpu zero_fpu __aligned(64);
+ struct lwp *l = curlwp;
struct cpu_info *ci = curcpu();
int s;
+ /* Nothing needed if we're in a kthread with FPU enabled. */
+ if (l->l_md.md_flags & MDL_SYSTEM_FPU)
+ return;
+
KASSERT(ci->ci_ilevel == IPL_HIGH);
KASSERT(ci->ci_kfpu_spl != -1);
# HG changeset patch
# User Taylor R Campbell <riastradh@NetBSD.org>
# Date 1591240980 0
# Thu Jun 04 03:23:00 2020 +0000
# Branch trunk
# Node ID e7941432a3cd362134c7e5195b5c9725e332de7f
# Parent 08a86cf7e9ffdc8949751596b2f93934c8f3b692
# EXP-Topic riastradh-kernelcrypto
Draft fpu_kern_enter/leave on aarch64.
diff -r 08a86cf7e9ff -r e7941432a3cd sys/arch/aarch64/aarch64/cpu.c
--- a/sys/arch/aarch64/aarch64/cpu.c Thu Jun 04 03:34:45 2020 +0000
+++ b/sys/arch/aarch64/aarch64/cpu.c Thu Jun 04 03:23:00 2020 +0000
@@ -133,6 +133,8 @@ cpu_attach(device_t dv, cpuid_t id)
ci->ci_dev = dv;
dv->dv_private = ci;
+ ci->ci_kfpu_spl = -1;
+
arm_cpu_do_topology(ci);
cpu_identify(ci->ci_dev, ci);
diff -r 08a86cf7e9ff -r e7941432a3cd sys/arch/aarch64/aarch64/fpu.c
--- a/sys/arch/aarch64/aarch64/fpu.c Thu Jun 04 03:34:45 2020 +0000
+++ b/sys/arch/aarch64/aarch64/fpu.c Thu Jun 04 03:23:00 2020 +0000
@@ -38,6 +38,8 @@
#include <sys/lwp.h>
#include <sys/evcnt.h>
+#include <aarch64/fpu.h>
+#include <aarch64/locore.h>
#include <aarch64/reg.h>
#include <aarch64/pcb.h>
#include <aarch64/armreg.h>
@@ -172,3 +174,68 @@ fpu_state_release(lwp_t *l)
reg_cpacr_el1_write(CPACR_FPEN_NONE);
__asm __volatile ("isb");
}
+
+void
+fpu_kern_enter(void)
+{
+ struct lwp *l = curlwp;
+ struct cpu_info *ci;
+ int s;
+
+ /*
+ * Block all interrupts. We must block preemption since -- if
+ * this is a user thread -- there is nowhere to save the kernel
+ * fpu state, and if we want this to be usable in interrupts,
+ * we can't let interrupts interfere with the fpu state in use
+ * since there's nowhere for them to save it.
+ */
+ s = splhigh();
+ ci = curcpu();
+ KASSERT(ci->ci_kfpu_spl == -1);
+ ci->ci_kfpu_spl = s;
+
+ /*
+ * If we are in a softint and have a pinned lwp, the fpu state
+ * is that of the pinned lwp, so save it there.
+ */
+ if ((l->l_pflag & LP_INTR) && (l->l_switchto != NULL))
+ l = l->l_switchto;
+ if (fpu_used_p(l))
+ fpu_save(l);
+
+ /*
+ * Enable the fpu, and wait until it is enabled before
+ * executing any further instructions.
+ */
+ reg_cpacr_el1_write(CPACR_FPEN_ALL);
+ arm_isb();
+}
+
+void
+fpu_kern_leave(void)
+{
+ static const struct fpreg zero_fpreg;
+ struct cpu_info *ci = curcpu();
+ int s;
+
+ KASSERT(ci->ci_cpl == IPL_HIGH);
+ KASSERT(ci->ci_kfpu_spl != -1);
+
+ /*
+ * Zero the fpu registers; otherwise we might leak secrets
+ * through Spectre-class attacks to userland, even if there are
+ * no bugs in fpu state management.
+ */
+ load_fpregs(&zero_fpreg);
+
+ /*
+ * Disable the fpu so that the kernel can't accidentally use
+ * it again.
+ */
+ reg_cpacr_el1_write(CPACR_FPEN_NONE);
+ arm_isb();
+
+ s = ci->ci_kfpu_spl;
+ ci->ci_kfpu_spl = -1;
+ splx(s);
+}
diff -r 08a86cf7e9ff -r e7941432a3cd sys/arch/aarch64/include/cpu.h
--- a/sys/arch/aarch64/include/cpu.h Thu Jun 04 03:34:45 2020 +0000
+++ b/sys/arch/aarch64/include/cpu.h Thu Jun 04 03:23:00 2020 +0000
@@ -89,6 +89,8 @@ struct cpu_info {
volatile u_int ci_astpending;
volatile u_int ci_intr_depth;
+ int ci_kfpu_spl;
+
/* event counters */
struct evcnt ci_vfp_use;
struct evcnt ci_vfp_reuse;
diff -r 08a86cf7e9ff -r e7941432a3cd sys/arch/aarch64/include/fpu.h
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/arch/aarch64/include/fpu.h Thu Jun 04 03:23:00 2020 +0000
@@ -0,0 +1,35 @@
+/* $NetBSD$ */
+
+/*
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _AARCH64_FPU_H_
+#define _AARCH64_FPU_H_
+
+void fpu_kern_enter(void);
+void fpu_kern_leave(void);
+
+#endif /* _AARCH64_FPU_H_ */
diff -r 08a86cf7e9ff -r e7941432a3cd sys/arch/aarch64/include/machdep.h
--- a/sys/arch/aarch64/include/machdep.h Thu Jun 04 03:34:45 2020 +0000
+++ b/sys/arch/aarch64/include/machdep.h Thu Jun 04 03:23:00 2020 +0000
@@ -142,8 +142,11 @@ void aarch64_setregs_ptrauth(struct lwp
/* fpu.c */
void fpu_attach(struct cpu_info *);
struct fpreg;
-void load_fpregs(struct fpreg *);
+void load_fpregs(const struct fpreg *);
void save_fpregs(struct fpreg *);
+void fpu_kern_enter(void);
+void fpu_kern_leave(void);
+
#ifdef TRAP_SIGDEBUG
#define do_trapsignal(l, signo, code, addr, trap) \
# HG changeset patch
# User Taylor R Campbell <riastradh@NetBSD.org>
# Date 1592418582 0
# Wed Jun 17 18:29:42 2020 +0000
# Branch trunk
# Node ID 5f0c9efc2bac72063928aca09b8529df4e63e77a
# Parent e7941432a3cd362134c7e5195b5c9725e332de7f
# EXP-Topic riastradh-kernelcrypto
Draft fpu_kthread_enter/leave for aarch64.
diff -r e7941432a3cd -r 5f0c9efc2bac sys/arch/aarch64/aarch64/fpu.c
--- a/sys/arch/aarch64/aarch64/fpu.c Thu Jun 04 03:23:00 2020 +0000
+++ b/sys/arch/aarch64/aarch64/fpu.c Wed Jun 17 18:29:42 2020 +0000
@@ -175,6 +175,59 @@ fpu_state_release(lwp_t *l)
__asm __volatile ("isb");
}
+static const struct fpreg zero_fpreg;
+
+int
+fpu_kthread_enter(void)
+{
+ struct lwp *l = curlwp;
+ int system_fpu = l->l_md.md_flags & MDL_SYSTEM_FPU;
+
+ KASSERTMSG(l->l_flag & LW_SYSTEM,
+ "fpu_kthread_enter is allowed only in kthreads");
+ KASSERTMSG(curcpu()->ci_kfpu_spl == -1,
+ "fpu_kthread_enter is not allowed between fpu_kern_enter/leave");
+
+ if (!system_fpu) {
+ /*
+ * Notify the FPU fault handler to save the FPU state
+ * for us.
+ */
+ l->l_md.md_flags |= MDL_SYSTEM_FPU;
+
+ /* Enable the FPU. */
+ fpu_state_load(l, 0);
+ }
+
+ return system_fpu;
+}
+
+void
+fpu_kthread_leave(int system_fpu)
+{
+ struct lwp *l = curlwp;
+
+ KASSERTMSG(l->l_flag & LW_SYSTEM,
+ "fpu_kthread_leave is allowed only in kthreads");
+ KASSERTMSG(l->l_md.md_flags & MDL_SYSTEM_FPU,
+ "fpu_kthread_leave without fpu_kthread_enter");
+
+ if (!system_fpu) {
+ /*
+ * Zero the fpu registers; otherwise we might leak
+ * secrets through Spectre-class attacks to userland,
+ * even if there are no bugs in fpu state management.
+ */
+ load_fpregs(&zero_fpreg);
+
+ /* Disable the FPU. */
+ fpu_state_release(l);
+
+ /* Stop asking to save our FPU state. */
+ l->l_md.md_flags &= ~MDL_SYSTEM_FPU;
+ }
+}
+
void
fpu_kern_enter(void)
{
@@ -182,6 +235,10 @@ fpu_kern_enter(void)
struct cpu_info *ci;
int s;
+ /* Nothing needed if we're in a kthread with FPU enabled. */
+ if (l->l_md.md_flags & MDL_SYSTEM_FPU)
+ return;
+
/*
* Block all interrupts. We must block preemption since -- if
* this is a user thread -- there is nowhere to save the kernel
@@ -214,10 +271,14 @@ fpu_kern_enter(void)
void
fpu_kern_leave(void)
{
- static const struct fpreg zero_fpreg;
+ struct lwp *l = curlwp;
struct cpu_info *ci = curcpu();
int s;
+ /* Nothing needed if we're in a kthread with FPU enabled. */
+ if (l->l_md.md_flags & MDL_SYSTEM_FPU)
+ return;
+
KASSERT(ci->ci_cpl == IPL_HIGH);
KASSERT(ci->ci_kfpu_spl != -1);
diff -r e7941432a3cd -r 5f0c9efc2bac sys/arch/aarch64/include/fpu.h
--- a/sys/arch/aarch64/include/fpu.h Thu Jun 04 03:23:00 2020 +0000
+++ b/sys/arch/aarch64/include/fpu.h Wed Jun 17 18:29:42 2020 +0000
@@ -29,6 +29,9 @@
#ifndef _AARCH64_FPU_H_
#define _AARCH64_FPU_H_
+int fpu_kthread_enter(void);
+void fpu_kthread_leave(int);
+
void fpu_kern_enter(void);
void fpu_kern_leave(void);
diff -r e7941432a3cd -r 5f0c9efc2bac sys/arch/aarch64/include/proc.h
--- a/sys/arch/aarch64/include/proc.h Thu Jun 04 03:23:00 2020 +0000
+++ b/sys/arch/aarch64/include/proc.h Wed Jun 17 18:29:42 2020 +0000
@@ -43,6 +43,7 @@ struct mdlwp {
struct trapframe *md_utf;
uint64_t md_cpacr;
uint32_t md_flags;
+#define MDL_SYSTEM_FPU __BIT(0)
uint64_t md_ia_kern[2]; /* APIAKey{Lo,Hi}_EL1 used in the kernel */
uint64_t md_ia_user[2]; /* APIAKey{Lo,Hi}_EL1 used in user-process */
# HG changeset patch
# User Taylor R Campbell <riastradh@NetBSD.org>
# Date 1592150319 0
# Sun Jun 14 15:58:39 2020 +0000
# Branch trunk
# Node ID 81a487955535865a6bb603c585be109c3dd1adf5
# Parent 5f0c9efc2bac72063928aca09b8529df4e63e77a
# EXP-Topic riastradh-kernelcrypto
Draft aarch64 zero_fpregs.
Just a series of sad donkeys, with no memory references.
diff -r 5f0c9efc2bac -r 81a487955535 sys/arch/aarch64/aarch64/cpuswitch.S
--- a/sys/arch/aarch64/aarch64/cpuswitch.S Wed Jun 17 18:29:42 2020 +0000
+++ b/sys/arch/aarch64/aarch64/cpuswitch.S Sun Jun 14 15:58:39 2020 +0000
@@ -538,3 +538,43 @@ ENTRY_NP(save_fpregs)
str w9, [x0, #FPREG_FPSR]
ret
END(save_fpregs)
+
+ENTRY_NP(zero_fpregs)
+ eor v0.16b, v0.16b, v0.16b
+ eor v1.16b, v1.16b, v1.16b
+ eor v2.16b, v2.16b, v2.16b
+ eor v3.16b, v3.16b, v3.16b
+ eor v4.16b, v4.16b, v4.16b
+ eor v5.16b, v5.16b, v5.16b
+ eor v6.16b, v6.16b, v6.16b
+ eor v7.16b, v7.16b, v7.16b
+ eor v8.16b, v8.16b, v8.16b
+ eor v9.16b, v9.16b, v9.16b
+ eor v10.16b, v10.16b, v10.16b
+ eor v11.16b, v11.16b, v11.16b
+ eor v12.16b, v12.16b, v12.16b
+ eor v13.16b, v13.16b, v13.16b
+ eor v14.16b, v14.16b, v14.16b
+ eor v15.16b, v15.16b, v15.16b
+ eor v16.16b, v16.16b, v16.16b
+ eor v17.16b, v17.16b, v17.16b
+ eor v18.16b, v18.16b, v18.16b
+ eor v19.16b, v19.16b, v19.16b
+ eor v20.16b, v20.16b, v20.16b
+ eor v21.16b, v21.16b, v21.16b
+ eor v22.16b, v22.16b, v22.16b
+ eor v23.16b, v23.16b, v23.16b
+ eor v24.16b, v24.16b, v24.16b
+ eor v25.16b, v25.16b, v25.16b
+ eor v26.16b, v26.16b, v26.16b
+ eor v27.16b, v27.16b, v27.16b
+ eor v28.16b, v28.16b, v28.16b
+ eor v29.16b, v29.16b, v29.16b
+ eor v30.16b, v30.16b, v30.16b
+ eor v31.16b, v31.16b, v31.16b
+ eor x8, x8, x8
+ eor x9, x9, x9
+ msr fpcr, x8
+ msr fpsr, x9
+ ret
+END(zero_fpregs)
diff -r 5f0c9efc2bac -r 81a487955535 sys/arch/aarch64/include/machdep.h
--- a/sys/arch/aarch64/include/machdep.h Wed Jun 17 18:29:42 2020 +0000
+++ b/sys/arch/aarch64/include/machdep.h Sun Jun 14 15:58:39 2020 +0000
@@ -144,6 +144,7 @@ void fpu_attach(struct cpu_info *);
struct fpreg;
void load_fpregs(const struct fpreg *);
void save_fpregs(struct fpreg *);
+void zero_fpregs(void);
void fpu_kern_enter(void);
void fpu_kern_leave(void);
# HG changeset patch
# User Taylor R Campbell <riastradh@NetBSD.org>
# Date 1591939006 0
# Fri Jun 12 05:16:46 2020 +0000
# Branch trunk
# Node ID 9d6b84c40f6517bb55848159faa9478ef1a23d02
# Parent 81a487955535865a6bb603c585be109c3dd1adf5
# EXP-Topic riastradh-kernelcrypto
Rework AES in kernel to finally address CVE-2005-1797.
1. Rip out old variable-time reference implementation.
2. Replace it by BearSSL's constant-time 32-bit logic.
=> Obtained from commit dda1f8a0c46e15b4a235163470ff700b2f13dcc5.
=> We could conditionally adopt the 64-bit logic too, which would
likely give a modest performance boost on 64-bit platforms
without AES-NI, but that's a bit more trouble.
3. Select the AES implementation at boot-time; allow an MD override.
=> Use self-tests to verify basic correctness at boot.
=> The implementation selection policy is rather rudimentary at
the moment but it is isolated to one place so it's easy to
change later on.
This (a) plugs a host of timing attacks on, e.g., cgd, and (b) paves
the way to take advantage of CPU support for AES -- both things we
should've done a decade ago. Downside: Computing AES takes 2-3x the
CPU time. But that's what hardware support will be coming for.
Rudimentary measurement of performance impact done by:
mount -t tmpfs tmpfs /tmp
dd if=/dev/zero of=/tmp/disk bs=1m count=512
vnconfig -cv vnd0 /tmp/disk
cgdconfig -s cgd0 /dev/vnd0 aes-cbc 256 < /dev/zero
dd if=/dev/rcgd0d of=/dev/null bs=64k
dd if=/dev/zero of=/dev/rcgd0d bs=64k
The AES-CBC encryption performance impact is closer to 3x because it
is inherently sequential; the AES-CBC decryption impact is closer to
2x because the bitsliced AES logic can process two blocks at once.
diff -r 81a487955535 -r 9d6b84c40f65 sys/conf/files
--- a/sys/conf/files Sun Jun 14 15:58:39 2020 +0000
+++ b/sys/conf/files Fri Jun 12 05:16:46 2020 +0000
@@ -200,10 +200,10 @@ defflag opt_machdep.h MACHDEP
# use it.
# Individual crypto transforms
+include "crypto/aes/files.aes"
include "crypto/des/files.des"
include "crypto/blowfish/files.blowfish"
include "crypto/cast128/files.cast128"
-include "crypto/rijndael/files.rijndael"
include "crypto/skipjack/files.skipjack"
include "crypto/camellia/files.camellia"
# General-purpose crypto processing framework.
diff -r 81a487955535 -r 9d6b84c40f65 sys/crypto/aes/aes.h
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/aes.h Fri Jun 12 05:16:46 2020 +0000
@@ -0,0 +1,101 @@
+/* $NetBSD$ */
+
+/*-
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _CRYPTO_AES_AES_H
+#define _CRYPTO_AES_AES_H
+
+#include <sys/types.h>
+#include <sys/cdefs.h>
+
+/*
+ * struct aes
+ *
+ * Expanded round keys.
+ */
+struct aes {
+ uint32_t aes_rk[60];
+} __aligned(16);
+
+#define AES_128_NROUNDS 10
+#define AES_192_NROUNDS 12
+#define AES_256_NROUNDS 14
+
+struct aesenc {
+ struct aes aese_aes;
+};
+
+struct aesdec {
+ struct aes aesd_aes;
+};
+
+struct aes_impl {
+ const char *ai_name;
+ int (*ai_probe)(void);
+ void (*ai_setenckey)(struct aesenc *, const uint8_t *, uint32_t);
+ void (*ai_setdeckey)(struct aesdec *, const uint8_t *, uint32_t);
+ void (*ai_enc)(const struct aesenc *, const uint8_t[static 16],
+ uint8_t[static 16], uint32_t);
+ void (*ai_dec)(const struct aesdec *, const uint8_t[static 16],
+ uint8_t[static 16], uint32_t);
+ void (*ai_cbc_enc)(const struct aesenc *, const uint8_t[static 16],
+ uint8_t[static 16], size_t, uint8_t[static 16], uint32_t);
+ void (*ai_cbc_dec)(const struct aesdec *, const uint8_t[static 16],
+ uint8_t[static 16], size_t, uint8_t[static 16], uint32_t);
+ void (*ai_xts_enc)(const struct aesenc *, const uint8_t[static 16],
+ uint8_t[static 16], size_t, uint8_t[static 16], uint32_t);
+ void (*ai_xts_dec)(const struct aesdec *, const uint8_t[static 16],
+ uint8_t[static 16], size_t, uint8_t[static 16], uint32_t);
+};
+
+int aes_selftest(const struct aes_impl *);
+
+uint32_t aes_setenckey128(struct aesenc *, const uint8_t[static 16]);
+uint32_t aes_setenckey192(struct aesenc *, const uint8_t[static 24]);
+uint32_t aes_setenckey256(struct aesenc *, const uint8_t[static 32]);
+uint32_t aes_setdeckey128(struct aesdec *, const uint8_t[static 16]);
+uint32_t aes_setdeckey192(struct aesdec *, const uint8_t[static 24]);
+uint32_t aes_setdeckey256(struct aesdec *, const uint8_t[static 32]);
+
+void aes_enc(const struct aesenc *, const uint8_t[static 16],
+ uint8_t[static 16], uint32_t);
+void aes_dec(const struct aesdec *, const uint8_t[static 16],
+ uint8_t[static 16], uint32_t);
+
+void aes_cbc_enc(struct aesenc *, const uint8_t[static 16],
+ uint8_t[static 16], size_t, uint8_t[static 16], uint32_t);
+void aes_cbc_dec(struct aesdec *, const uint8_t[static 16],
+ uint8_t[static 16], size_t, uint8_t[static 16], uint32_t);
+
+void aes_xts_enc(struct aesenc *, const uint8_t[static 16],
+ uint8_t[static 16], size_t, uint8_t[static 16], uint32_t);
+void aes_xts_dec(struct aesdec *, const uint8_t[static 16],
+ uint8_t[static 16], size_t, uint8_t[static 16], uint32_t);
+
+void aes_md_init(const struct aes_impl *);
+
+#endif /* _CRYPTO_AES_AES_H */
diff -r 81a487955535 -r 9d6b84c40f65 sys/crypto/aes/aes_bear.c
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/aes_bear.c Fri Jun 12 05:16:46 2020 +0000
@@ -0,0 +1,617 @@
+/* $NetBSD$ */
+
+/*-
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__KERNEL_RCSID(1, "$NetBSD$");
+
+#include <sys/types.h>
+#include <sys/endian.h>
+#include <sys/systm.h>
+
+#include <crypto/aes/aes.h>
+#include <crypto/aes/aes_bear.h>
+
+static void
+aesbear_setkey(uint32_t rk[static 60], const void *key, uint32_t nrounds)
+{
+ size_t key_len;
+
+ switch (nrounds) {
+ case 10:
+ key_len = 16;
+ break;
+ case 12:
+ key_len = 24;
+ break;
+ case 14:
+ key_len = 32;
+ break;
+ default:
+ panic("invalid AES nrounds: %u", nrounds);
+ }
+
+ br_aes_ct_keysched(rk, key, key_len);
+}
+
+static void
+aesbear_setenckey(struct aesenc *enc, const uint8_t *key, uint32_t nrounds)
+{
+
+ aesbear_setkey(enc->aese_aes.aes_rk, key, nrounds);
+}
+
+static void
+aesbear_setdeckey(struct aesdec *dec, const uint8_t *key, uint32_t nrounds)
+{
+
+ /*
+ * BearSSL computes InvMixColumns on the fly -- no need for
+ * distinct decryption round keys.
+ */
+ aesbear_setkey(dec->aesd_aes.aes_rk, key, nrounds);
+}
+
+static void
+aesbear_enc(const struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], uint32_t nrounds)
+{
+ uint32_t sk_exp[120];
+ uint32_t q[8];
+
+ /* Expand round keys for bitslicing. */
+ br_aes_ct_skey_expand(sk_exp, nrounds, enc->aese_aes.aes_rk);
+
+ /* Load input block interleaved with garbage block. */
+ q[2*0] = le32dec(in + 4*0);
+ q[2*1] = le32dec(in + 4*1);
+ q[2*2] = le32dec(in + 4*2);
+ q[2*3] = le32dec(in + 4*3);
+ q[1] = q[3] = q[5] = q[7] = 0;
+
+	/* Transform to bitslice, encrypt, transform from bitslice. */
+ br_aes_ct_ortho(q);
+ br_aes_ct_bitslice_encrypt(nrounds, sk_exp, q);
+ br_aes_ct_ortho(q);
+
+ /* Store output block. */
+ le32enc(out + 4*0, q[2*0]);
+ le32enc(out + 4*1, q[2*1]);
+ le32enc(out + 4*2, q[2*2]);
+ le32enc(out + 4*3, q[2*3]);
+
+ /* Paranoia: Zero temporary buffers. */
+ explicit_memset(sk_exp, 0, sizeof sk_exp);
+ explicit_memset(q, 0, sizeof q);
+}
+
+static void
+aesbear_dec(const struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], uint32_t nrounds)
+{
+ uint32_t sk_exp[120];
+ uint32_t q[8];
+
+ /* Expand round keys for bitslicing. */
+ br_aes_ct_skey_expand(sk_exp, nrounds, dec->aesd_aes.aes_rk);
+
+ /* Load input block interleaved with garbage. */
+ q[2*0] = le32dec(in + 4*0);
+ q[2*1] = le32dec(in + 4*1);
+ q[2*2] = le32dec(in + 4*2);
+ q[2*3] = le32dec(in + 4*3);
+ q[1] = q[3] = q[5] = q[7] = 0;
+
+ /* Transform to bitslice, decrypt, transform from bitslice. */
+ br_aes_ct_ortho(q);
+ br_aes_ct_bitslice_decrypt(nrounds, sk_exp, q);
+ br_aes_ct_ortho(q);
+
+ /* Store output block. */
+ le32enc(out + 4*0, q[2*0]);
+ le32enc(out + 4*1, q[2*1]);
+ le32enc(out + 4*2, q[2*2]);
+ le32enc(out + 4*3, q[2*3]);
+
+ /* Paranoia: Zero temporary buffers. */
+ explicit_memset(sk_exp, 0, sizeof sk_exp);
+ explicit_memset(q, 0, sizeof q);
+}
+
+static void
+aesbear_cbc_enc(const struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t iv[static 16],
+ uint32_t nrounds)
+{
+ uint32_t sk_exp[120];
+ uint32_t q[8];
+ uint32_t cv0, cv1, cv2, cv3;
+
+ KASSERT(nbytes % 16 == 0);
+
+ /* Skip if there's nothing to do. */
+ if (nbytes == 0)
+ return;
+
+ /* Expand round keys for bitslicing. */
+ br_aes_ct_skey_expand(sk_exp, nrounds, enc->aese_aes.aes_rk);
+
+ /* Initialize garbage block. */
+ q[1] = q[3] = q[5] = q[7] = 0;
+
+ /* Load IV. */
+ cv0 = le32dec(iv + 4*0);
+ cv1 = le32dec(iv + 4*1);
+ cv2 = le32dec(iv + 4*2);
+ cv3 = le32dec(iv + 4*3);
+
+ for (; nbytes; nbytes -= 16, in += 16, out += 16) {
+ /* Load input block and apply CV. */
+ q[2*0] = cv0 ^ le32dec(in + 4*0);
+ q[2*1] = cv1 ^ le32dec(in + 4*1);
+ q[2*2] = cv2 ^ le32dec(in + 4*2);
+ q[2*3] = cv3 ^ le32dec(in + 4*3);
+
+ /* Transform to bitslice, encrypt, transform from bitslice. */
+ br_aes_ct_ortho(q);
+ br_aes_ct_bitslice_encrypt(nrounds, sk_exp, q);
+ br_aes_ct_ortho(q);
+
+ /* Remember ciphertext as CV and store output block. */
+ cv0 = q[2*0];
+ cv1 = q[2*1];
+ cv2 = q[2*2];
+ cv3 = q[2*3];
+ le32enc(out + 4*0, cv0);
+ le32enc(out + 4*1, cv1);
+ le32enc(out + 4*2, cv2);
+ le32enc(out + 4*3, cv3);
+ }
+
+ /* Store updated IV. */
+ le32enc(iv + 4*0, cv0);
+ le32enc(iv + 4*1, cv1);
+ le32enc(iv + 4*2, cv2);
+ le32enc(iv + 4*3, cv3);
+
+ /* Paranoia: Zero temporary buffers. */
+ explicit_memset(sk_exp, 0, sizeof sk_exp);
+ explicit_memset(q, 0, sizeof q);
+}
+
+static void
+aesbear_cbc_dec(const struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t iv[static 16],
+ uint32_t nrounds)
+{
+ uint32_t sk_exp[120];
+ uint32_t q[8];
+ uint32_t cv0, cv1, cv2, cv3, iv0, iv1, iv2, iv3;
+
+ KASSERT(nbytes % 16 == 0);
+
+ /* Skip if there's nothing to do. */
+ if (nbytes == 0)
+ return;
+
+ /* Expand round keys for bitslicing. */
+ br_aes_ct_skey_expand(sk_exp, nrounds, dec->aesd_aes.aes_rk);
+
+ /* Load the IV. */
+ iv0 = le32dec(iv + 4*0);
+ iv1 = le32dec(iv + 4*1);
+ iv2 = le32dec(iv + 4*2);
+ iv3 = le32dec(iv + 4*3);
+
+ /* Load the last cipher block. */
+ cv0 = le32dec(in + nbytes - 16 + 4*0);
+ cv1 = le32dec(in + nbytes - 16 + 4*1);
+ cv2 = le32dec(in + nbytes - 16 + 4*2);
+ cv3 = le32dec(in + nbytes - 16 + 4*3);
+
+ /* Store the updated IV. */
+ le32enc(iv + 4*0, cv0);
+ le32enc(iv + 4*1, cv1);
+ le32enc(iv + 4*2, cv2);
+ le32enc(iv + 4*3, cv3);
+
+ /* Handle the last cipher block separately if odd number. */
+ if (nbytes % 32) {
+ KASSERT(nbytes % 32 == 16);
+
+ /* Set up the last cipher block and a garbage block. */
+ q[2*0] = cv0;
+ q[2*1] = cv1;
+ q[2*2] = cv2;
+ q[2*3] = cv3;
+ q[1] = q[3] = q[5] = q[7] = 0;
+
+		/* Decrypt. */
+ br_aes_ct_ortho(q);
+ br_aes_ct_bitslice_decrypt(nrounds, sk_exp, q);
+ br_aes_ct_ortho(q);
+
+ /* If this was the only cipher block, we're done. */
+ nbytes -= 16;
+ if (nbytes == 0)
+ goto out;
+
+ /*
+ * Otherwise, load up the penultimate cipher block, and
+ * store the output block.
+ */
+ cv0 = le32dec(in + nbytes - 16 + 4*0);
+ cv1 = le32dec(in + nbytes - 16 + 4*1);
+ cv2 = le32dec(in + nbytes - 16 + 4*2);
+ cv3 = le32dec(in + nbytes - 16 + 4*3);
+ le32enc(out + nbytes + 4*0, cv0 ^ q[2*0]);
+ le32enc(out + nbytes + 4*1, cv1 ^ q[2*1]);
+ le32enc(out + nbytes + 4*2, cv2 ^ q[2*2]);
+ le32enc(out + nbytes + 4*3, cv3 ^ q[2*3]);
+ }
+
+ for (;;) {
+ KASSERT(nbytes >= 32);
+
+ /*
+ * 1. Set up upper cipher block from cvN.
+ * 2. Load lower cipher block into cvN and set it up.
+ * 3. Decrypt.
+ */
+ q[2*0 + 1] = cv0;
+ q[2*1 + 1] = cv1;
+ q[2*2 + 1] = cv2;
+ q[2*3 + 1] = cv3;
+ cv0 = q[2*0] = le32dec(in + nbytes - 32 + 4*0);
+ cv1 = q[2*1] = le32dec(in + nbytes - 32 + 4*1);
+ cv2 = q[2*2] = le32dec(in + nbytes - 32 + 4*2);
+ cv3 = q[2*3] = le32dec(in + nbytes - 32 + 4*3);
+
+ br_aes_ct_ortho(q);
+ br_aes_ct_bitslice_decrypt(nrounds, sk_exp, q);
+ br_aes_ct_ortho(q);
+
+ /* Store the upper output block. */
+ le32enc(out + nbytes - 16 + 4*0, q[2*0 + 1] ^ cv0);
+ le32enc(out + nbytes - 16 + 4*1, q[2*1 + 1] ^ cv1);
+ le32enc(out + nbytes - 16 + 4*2, q[2*2 + 1] ^ cv2);
+ le32enc(out + nbytes - 16 + 4*3, q[2*3 + 1] ^ cv3);
+
+ /* Stop if we've reached the first output block. */
+ nbytes -= 32;
+ if (nbytes == 0)
+ goto out;
+
+ /*
+ * Load the preceding cipher block, and apply it as the
+ * chaining value to this one.
+ */
+ cv0 = le32dec(in + nbytes - 16 + 4*0);
+ cv1 = le32dec(in + nbytes - 16 + 4*1);
+ cv2 = le32dec(in + nbytes - 16 + 4*2);
+ cv3 = le32dec(in + nbytes - 16 + 4*3);
+ le32enc(out + nbytes + 4*0, q[2*0] ^ cv0);
+ le32enc(out + nbytes + 4*1, q[2*1] ^ cv1);
+ le32enc(out + nbytes + 4*2, q[2*2] ^ cv2);
+ le32enc(out + nbytes + 4*3, q[2*3] ^ cv3);
+ }
+
+out: /* Store the first output block. */
+ le32enc(out + 4*0, q[2*0] ^ iv0);
+ le32enc(out + 4*1, q[2*1] ^ iv1);
+ le32enc(out + 4*2, q[2*2] ^ iv2);
+ le32enc(out + 4*3, q[2*3] ^ iv3);
+
+ /* Paranoia: Zero temporary buffers. */
+ explicit_memset(sk_exp, 0, sizeof sk_exp);
+ explicit_memset(q, 0, sizeof q);
+}
+
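+/*
+ * aesbear_xts_update(t0, t1, t2, t3)
+ *
+ *	Multiply the 128-bit XTS tweak (t3 t2 t1 t0, in 32-bit
+ *	little-endian limbs) by x in GF(2^128) modulo the XTS
+ *	polynomial x^128 + x^7 + x^2 + x + 1: shift left one bit,
+ *	carrying each limb's top bit into the next limb, and fold the
+ *	bit shifted out of t3 back in as 0x87 in the low limb.
+ */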
+static inline void
+aesbear_xts_update(uint32_t *t0, uint32_t *t1, uint32_t *t2, uint32_t *t3)
+{
+ uint32_t s0, s1, s2, s3;
+
+ s0 = *t0 >> 31;
+ s1 = *t1 >> 31;
+ s2 = *t2 >> 31;
+ s3 = *t3 >> 31;
+ *t0 = (*t0 << 1) ^ (-s3 & 0x87);
+ *t1 = (*t1 << 1) ^ s0;
+ *t2 = (*t2 << 1) ^ s1;
+ *t3 = (*t3 << 1) ^ s2;
+}
+
+static int
+aesbear_xts_update_selftest(void)
+{
+ static const struct {
+ uint32_t in[4], out[4];
+ } cases[] = {
+ { {1}, {2} },
+ { {0x80000000U,0,0,0}, {0,1,0,0} },
+ { {0,0x80000000U,0,0}, {0,0,1,0} },
+ { {0,0,0x80000000U,0}, {0,0,0,1} },
+ { {0,0,0,0x80000000U}, {0x87,0,0,0} },
+ { {0,0x80000000U,0,0x80000000U}, {0x87,0,1,0} },
+ };
+ unsigned i;
+ uint32_t t0, t1, t2, t3;
+
+ for (i = 0; i < sizeof(cases)/sizeof(cases[0]); i++) {
+ t0 = cases[i].in[0];
+ t1 = cases[i].in[1];
+ t2 = cases[i].in[2];
+ t3 = cases[i].in[3];
+ aesbear_xts_update(&t0, &t1, &t2, &t3);
+ if (t0 != cases[i].out[0] ||
+ t1 != cases[i].out[1] ||
+ t2 != cases[i].out[2] ||
+ t3 != cases[i].out[3])
+ return -1;
+ }
+
+ /* Success! */
+ return 0;
+}
+
+static void
+aesbear_xts_enc(const struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t tweak[static 16],
+ uint32_t nrounds)
+{
+ uint32_t sk_exp[120];
+ uint32_t q[8];
+ uint32_t t0, t1, t2, t3, u0, u1, u2, u3;
+
+ KASSERT(nbytes % 16 == 0);
+
+ /* Skip if there's nothing to do. */
+ if (nbytes == 0)
+ return;
+
+ /* Expand round keys for bitslicing. */
+ br_aes_ct_skey_expand(sk_exp, nrounds, enc->aese_aes.aes_rk);
+
+ /* Load tweak. */
+ t0 = le32dec(tweak + 4*0);
+ t1 = le32dec(tweak + 4*1);
+ t2 = le32dec(tweak + 4*2);
+ t3 = le32dec(tweak + 4*3);
+
+ /* Handle the first block separately if odd number. */
+ if (nbytes % 32) {
+ KASSERT(nbytes % 32 == 16);
+
+ /* Load up the first block and a garbage block. */
+ q[2*0] = le32dec(in + 4*0) ^ t0;
+ q[2*1] = le32dec(in + 4*1) ^ t1;
+ q[2*2] = le32dec(in + 4*2) ^ t2;
+ q[2*3] = le32dec(in + 4*3) ^ t3;
+ q[1] = q[3] = q[5] = q[7] = 0;
+
+ /* Encrypt two blocks. */
+ br_aes_ct_ortho(q);
+ br_aes_ct_bitslice_encrypt(nrounds, sk_exp, q);
+ br_aes_ct_ortho(q);
+
+ /* Store the first cipher block. */
+ le32enc(out + 4*0, q[2*0] ^ t0);
+ le32enc(out + 4*1, q[2*1] ^ t1);
+ le32enc(out + 4*2, q[2*2] ^ t2);
+ le32enc(out + 4*3, q[2*3] ^ t3);
+
+ /* Advance to the next block. */
+ aesbear_xts_update(&t0, &t1, &t2, &t3);
+ if ((nbytes -= 16) == 0)
+ goto out;
+ in += 16;
+ out += 16;
+ }
+
+ do {
+ KASSERT(nbytes >= 32);
+
+ /* Compute the upper tweak. */
+ u0 = t0; u1 = t1; u2 = t2; u3 = t3;
+ aesbear_xts_update(&u0, &u1, &u2, &u3);
+
+ /* Load lower and upper blocks. */
+ q[2*0] = le32dec(in + 4*0) ^ t0;
+ q[2*1] = le32dec(in + 4*1) ^ t1;
+ q[2*2] = le32dec(in + 4*2) ^ t2;
+ q[2*3] = le32dec(in + 4*3) ^ t3;
+ q[2*0 + 1] = le32dec(in + 16 + 4*0) ^ u0;
+ q[2*1 + 1] = le32dec(in + 16 + 4*1) ^ u1;
+ q[2*2 + 1] = le32dec(in + 16 + 4*2) ^ u2;
+ q[2*3 + 1] = le32dec(in + 16 + 4*3) ^ u3;
+
+ /* Encrypt two blocks. */
+ br_aes_ct_ortho(q);
+ br_aes_ct_bitslice_encrypt(nrounds, sk_exp, q);
+ br_aes_ct_ortho(q);
+
+ /* Store lower and upper blocks. */
+ le32enc(out + 4*0, q[2*0] ^ t0);
+ le32enc(out + 4*1, q[2*1] ^ t1);
+ le32enc(out + 4*2, q[2*2] ^ t2);
+ le32enc(out + 4*3, q[2*3] ^ t3);
+ le32enc(out + 16 + 4*0, q[2*0 + 1] ^ u0);
+ le32enc(out + 16 + 4*1, q[2*1 + 1] ^ u1);
+ le32enc(out + 16 + 4*2, q[2*2 + 1] ^ u2);
+ le32enc(out + 16 + 4*3, q[2*3 + 1] ^ u3);
+
+ /* Advance to the next pair of blocks. */
+ t0 = u0; t1 = u1; t2 = u2; t3 = u3;
+ aesbear_xts_update(&t0, &t1, &t2, &t3);
+ in += 32;
+ out += 32;
+ } while (nbytes -= 32, nbytes);
+
+out: /* Store the updated tweak. */
+ le32enc(tweak + 4*0, t0);
+ le32enc(tweak + 4*1, t1);
+ le32enc(tweak + 4*2, t2);
+ le32enc(tweak + 4*3, t3);
+
+ /* Paranoia: Zero temporary buffers. */
+ explicit_memset(sk_exp, 0, sizeof sk_exp);
+ explicit_memset(q, 0, sizeof q);
+}
+
+static void
+aesbear_xts_dec(const struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t tweak[static 16],
+ uint32_t nrounds)
+{
+ uint32_t sk_exp[120];
+ uint32_t q[8];
+ uint32_t t0, t1, t2, t3, u0, u1, u2, u3;
+
+ KASSERT(nbytes % 16 == 0);
+
+ /* Skip if there's nothing to do. */
+ if (nbytes == 0)
+ return;
+
+ /* Expand round keys for bitslicing. */
+ br_aes_ct_skey_expand(sk_exp, nrounds, dec->aesd_aes.aes_rk);
+
+ /* Load tweak. */
+ t0 = le32dec(tweak + 4*0);
+ t1 = le32dec(tweak + 4*1);
+ t2 = le32dec(tweak + 4*2);
+ t3 = le32dec(tweak + 4*3);
+
+ /* Handle the first block separately if odd number. */
+ if (nbytes % 32) {
+ KASSERT(nbytes % 32 == 16);
+
+ /* Load up the first block and a garbage block. */
+ q[2*0] = le32dec(in + 4*0) ^ t0;
+ q[2*1] = le32dec(in + 4*1) ^ t1;
+ q[2*2] = le32dec(in + 4*2) ^ t2;
+ q[2*3] = le32dec(in + 4*3) ^ t3;
+ q[1] = q[3] = q[5] = q[7] = 0;
+
+ /* Decrypt two blocks. */
+ br_aes_ct_ortho(q);
+ br_aes_ct_bitslice_decrypt(nrounds, sk_exp, q);
+ br_aes_ct_ortho(q);
+
+		/* Store the first output block. */
+ le32enc(out + 4*0, q[2*0] ^ t0);
+ le32enc(out + 4*1, q[2*1] ^ t1);
+ le32enc(out + 4*2, q[2*2] ^ t2);
+ le32enc(out + 4*3, q[2*3] ^ t3);
+
+ /* Advance to the next block. */
+ aesbear_xts_update(&t0, &t1, &t2, &t3);
+ if ((nbytes -= 16) == 0)
+ goto out;
+ in += 16;
+ out += 16;
+ }
+
+ do {
+ KASSERT(nbytes >= 32);
+
+ /* Compute the upper tweak. */
+ u0 = t0; u1 = t1; u2 = t2; u3 = t3;
+ aesbear_xts_update(&u0, &u1, &u2, &u3);
+
+ /* Load lower and upper blocks. */
+ q[2*0] = le32dec(in + 4*0) ^ t0;
+ q[2*1] = le32dec(in + 4*1) ^ t1;
+ q[2*2] = le32dec(in + 4*2) ^ t2;
+ q[2*3] = le32dec(in + 4*3) ^ t3;
+ q[2*0 + 1] = le32dec(in + 16 + 4*0) ^ u0;
+ q[2*1 + 1] = le32dec(in + 16 + 4*1) ^ u1;
+ q[2*2 + 1] = le32dec(in + 16 + 4*2) ^ u2;
+ q[2*3 + 1] = le32dec(in + 16 + 4*3) ^ u3;
+
+		/* Decrypt two blocks. */
+ br_aes_ct_ortho(q);
+ br_aes_ct_bitslice_decrypt(nrounds, sk_exp, q);
+ br_aes_ct_ortho(q);
+
+ /* Store lower and upper blocks. */
+ le32enc(out + 4*0, q[2*0] ^ t0);
+ le32enc(out + 4*1, q[2*1] ^ t1);
+ le32enc(out + 4*2, q[2*2] ^ t2);
+ le32enc(out + 4*3, q[2*3] ^ t3);
+ le32enc(out + 16 + 4*0, q[2*0 + 1] ^ u0);
+ le32enc(out + 16 + 4*1, q[2*1 + 1] ^ u1);
+ le32enc(out + 16 + 4*2, q[2*2 + 1] ^ u2);
+ le32enc(out + 16 + 4*3, q[2*3 + 1] ^ u3);
+
+ /* Advance to the next pair of blocks. */
+ t0 = u0; t1 = u1; t2 = u2; t3 = u3;
+ aesbear_xts_update(&t0, &t1, &t2, &t3);
+ in += 32;
+ out += 32;
+ } while (nbytes -= 32, nbytes);
+
+out: /* Store the updated tweak. */
+ le32enc(tweak + 4*0, t0);
+ le32enc(tweak + 4*1, t1);
+ le32enc(tweak + 4*2, t2);
+ le32enc(tweak + 4*3, t3);
+
+ /* Paranoia: Zero temporary buffers. */
+ explicit_memset(sk_exp, 0, sizeof sk_exp);
+ explicit_memset(q, 0, sizeof q);
+}
+
+static int
+aesbear_probe(void)
+{
+
+ if (aesbear_xts_update_selftest())
+ return -1;
+
+ /* XXX test br_aes_ct_bitslice_decrypt */
+ /* XXX test br_aes_ct_bitslice_encrypt */
+ /* XXX test br_aes_ct_keysched */
+ /* XXX test br_aes_ct_ortho */
+ /* XXX test br_aes_ct_skey_expand */
+
+ return 0;
+}
+
+struct aes_impl aes_bear_impl = {
+ .ai_name = "BearSSL aes_ct",
+ .ai_probe = aesbear_probe,
+ .ai_setenckey = aesbear_setenckey,
+ .ai_setdeckey = aesbear_setdeckey,
+ .ai_enc = aesbear_enc,
+ .ai_dec = aesbear_dec,
+ .ai_cbc_enc = aesbear_cbc_enc,
+ .ai_cbc_dec = aesbear_cbc_dec,
+ .ai_xts_enc = aesbear_xts_enc,
+ .ai_xts_dec = aesbear_xts_dec,
+};
diff -r 81a487955535 -r 9d6b84c40f65 sys/crypto/aes/aes_bear.h
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/aes_bear.h Fri Jun 12 05:16:46 2020 +0000
@@ -0,0 +1,50 @@
+/* $NetBSD$ */
+
+/*-
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _CRYPTO_AES_AES_BEAR_H
+#define _CRYPTO_AES_AES_BEAR_H
+
+#include <sys/types.h>
+#include <sys/endian.h>
+
+#include <crypto/aes/aes.h>
+
+#define br_dec32le le32dec
+#define br_enc32le le32enc
+
+void br_aes_ct_bitslice_Sbox(uint32_t *);
+void br_aes_ct_bitslice_invSbox(uint32_t *);
+void br_aes_ct_ortho(uint32_t *);
+u_int br_aes_ct_keysched(uint32_t *, const void *, size_t);
+void br_aes_ct_skey_expand(uint32_t *, unsigned, const uint32_t *);
+void br_aes_ct_bitslice_encrypt(unsigned, const uint32_t *, uint32_t *);
+void br_aes_ct_bitslice_decrypt(unsigned, const uint32_t *, uint32_t *);
+
+extern struct aes_impl aes_bear_impl;
+
+#endif /* _CRYPTO_AES_AES_BEAR_H */
diff -r 81a487955535 -r 9d6b84c40f65 sys/crypto/aes/aes_ct.c
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/aes_ct.c Fri Jun 12 05:16:46 2020 +0000
@@ -0,0 +1,335 @@
+/* $NetBSD$ */
+
+/*
+ * Copyright (c) 2016 Thomas Pornin <pornin@bolet.org>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining
+ * a copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <sys/cdefs.h>
+__KERNEL_RCSID(1, "$NetBSD$");
+
+#include <sys/types.h>
+
+#include <crypto/aes/aes_bear.h>
+
+/* see inner.h */
+void
+br_aes_ct_bitslice_Sbox(uint32_t *q)
+{
+ /*
+ * This S-box implementation is a straightforward translation of
+ * the circuit described by Boyar and Peralta in "A new
+ * combinational logic minimization technique with applications
+ * to cryptology" (https://eprint.iacr.org/2009/191.pdf).
+ *
+ * Note that variables x* (input) and s* (output) are numbered
+ * in "reverse" order (x0 is the high bit, x7 is the low bit).
+ */
+
+ uint32_t x0, x1, x2, x3, x4, x5, x6, x7;
+ uint32_t y1, y2, y3, y4, y5, y6, y7, y8, y9;
+ uint32_t y10, y11, y12, y13, y14, y15, y16, y17, y18, y19;
+ uint32_t y20, y21;
+ uint32_t z0, z1, z2, z3, z4, z5, z6, z7, z8, z9;
+ uint32_t z10, z11, z12, z13, z14, z15, z16, z17;
+ uint32_t t0, t1, t2, t3, t4, t5, t6, t7, t8, t9;
+ uint32_t t10, t11, t12, t13, t14, t15, t16, t17, t18, t19;
+ uint32_t t20, t21, t22, t23, t24, t25, t26, t27, t28, t29;
+ uint32_t t30, t31, t32, t33, t34, t35, t36, t37, t38, t39;
+ uint32_t t40, t41, t42, t43, t44, t45, t46, t47, t48, t49;
+ uint32_t t50, t51, t52, t53, t54, t55, t56, t57, t58, t59;
+ uint32_t t60, t61, t62, t63, t64, t65, t66, t67;
+ uint32_t s0, s1, s2, s3, s4, s5, s6, s7;
+
+ x0 = q[7];
+ x1 = q[6];
+ x2 = q[5];
+ x3 = q[4];
+ x4 = q[3];
+ x5 = q[2];
+ x6 = q[1];
+ x7 = q[0];
+
+ /*
+ * Top linear transformation.
+ */
+ y14 = x3 ^ x5;
+ y13 = x0 ^ x6;
+ y9 = x0 ^ x3;
+ y8 = x0 ^ x5;
+ t0 = x1 ^ x2;
+ y1 = t0 ^ x7;
+ y4 = y1 ^ x3;
+ y12 = y13 ^ y14;
+ y2 = y1 ^ x0;
+ y5 = y1 ^ x6;
+ y3 = y5 ^ y8;
+ t1 = x4 ^ y12;
+ y15 = t1 ^ x5;
+ y20 = t1 ^ x1;
+ y6 = y15 ^ x7;
+ y10 = y15 ^ t0;
+ y11 = y20 ^ y9;
+ y7 = x7 ^ y11;
+ y17 = y10 ^ y11;
+ y19 = y10 ^ y8;
+ y16 = t0 ^ y11;
+ y21 = y13 ^ y16;
+ y18 = x0 ^ y16;
+
+ /*
+ * Non-linear section.
+ */
+ t2 = y12 & y15;
+ t3 = y3 & y6;
+ t4 = t3 ^ t2;
+ t5 = y4 & x7;
+ t6 = t5 ^ t2;
+ t7 = y13 & y16;
+ t8 = y5 & y1;
+ t9 = t8 ^ t7;
+ t10 = y2 & y7;
+ t11 = t10 ^ t7;
+ t12 = y9 & y11;
+ t13 = y14 & y17;
+ t14 = t13 ^ t12;
+ t15 = y8 & y10;
+ t16 = t15 ^ t12;
+ t17 = t4 ^ t14;
+ t18 = t6 ^ t16;
+ t19 = t9 ^ t14;
+ t20 = t11 ^ t16;
+ t21 = t17 ^ y20;
+ t22 = t18 ^ y19;
+ t23 = t19 ^ y21;
+ t24 = t20 ^ y18;
+
+ t25 = t21 ^ t22;
+ t26 = t21 & t23;
+ t27 = t24 ^ t26;
+ t28 = t25 & t27;
+ t29 = t28 ^ t22;
+ t30 = t23 ^ t24;
+ t31 = t22 ^ t26;
+ t32 = t31 & t30;
+ t33 = t32 ^ t24;
+ t34 = t23 ^ t33;
+ t35 = t27 ^ t33;
+ t36 = t24 & t35;
+ t37 = t36 ^ t34;
+ t38 = t27 ^ t36;
+ t39 = t29 & t38;
+ t40 = t25 ^ t39;
+
+ t41 = t40 ^ t37;
+ t42 = t29 ^ t33;
+ t43 = t29 ^ t40;
+ t44 = t33 ^ t37;
+ t45 = t42 ^ t41;
+ z0 = t44 & y15;
+ z1 = t37 & y6;
+ z2 = t33 & x7;
+ z3 = t43 & y16;
+ z4 = t40 & y1;
+ z5 = t29 & y7;
+ z6 = t42 & y11;
+ z7 = t45 & y17;
+ z8 = t41 & y10;
+ z9 = t44 & y12;
+ z10 = t37 & y3;
+ z11 = t33 & y4;
+ z12 = t43 & y13;
+ z13 = t40 & y5;
+ z14 = t29 & y2;
+ z15 = t42 & y9;
+ z16 = t45 & y14;
+ z17 = t41 & y8;
+
+ /*
+ * Bottom linear transformation.
+ */
+ t46 = z15 ^ z16;
+ t47 = z10 ^ z11;
+ t48 = z5 ^ z13;
+ t49 = z9 ^ z10;
+ t50 = z2 ^ z12;
+ t51 = z2 ^ z5;
+ t52 = z7 ^ z8;
+ t53 = z0 ^ z3;
+ t54 = z6 ^ z7;
+ t55 = z16 ^ z17;
+ t56 = z12 ^ t48;
+ t57 = t50 ^ t53;
+ t58 = z4 ^ t46;
+ t59 = z3 ^ t54;
+ t60 = t46 ^ t57;
+ t61 = z14 ^ t57;
+ t62 = t52 ^ t58;
+ t63 = t49 ^ t58;
+ t64 = z4 ^ t59;
+ t65 = t61 ^ t62;
+ t66 = z1 ^ t63;
+ s0 = t59 ^ t63;
+ s6 = t56 ^ ~t62;
+ s7 = t48 ^ ~t60;
+ t67 = t64 ^ t65;
+ s3 = t53 ^ t66;
+ s4 = t51 ^ t66;
+ s5 = t47 ^ t65;
+ s1 = t64 ^ ~s3;
+ s2 = t55 ^ ~t67;
+
+ q[7] = s0;
+ q[6] = s1;
+ q[5] = s2;
+ q[4] = s3;
+ q[3] = s4;
+ q[2] = s5;
+ q[1] = s6;
+ q[0] = s7;
+}
+
+/* see inner.h */
+void
+br_aes_ct_ortho(uint32_t *q)
+{
+#define SWAPN(cl, ch, s, x, y) do { \
+ uint32_t a, b; \
+ a = (x); \
+ b = (y); \
+ (x) = (a & (uint32_t)cl) | ((b & (uint32_t)cl) << (s)); \
+ (y) = ((a & (uint32_t)ch) >> (s)) | (b & (uint32_t)ch); \
+ } while (0)
+
+#define SWAP2(x, y) SWAPN(0x55555555, 0xAAAAAAAA, 1, x, y)
+#define SWAP4(x, y) SWAPN(0x33333333, 0xCCCCCCCC, 2, x, y)
+#define SWAP8(x, y) SWAPN(0x0F0F0F0F, 0xF0F0F0F0, 4, x, y)
+
+ SWAP2(q[0], q[1]);
+ SWAP2(q[2], q[3]);
+ SWAP2(q[4], q[5]);
+ SWAP2(q[6], q[7]);
+
+ SWAP4(q[0], q[2]);
+ SWAP4(q[1], q[3]);
+ SWAP4(q[4], q[6]);
+ SWAP4(q[5], q[7]);
+
+ SWAP8(q[0], q[4]);
+ SWAP8(q[1], q[5]);
+ SWAP8(q[2], q[6]);
+ SWAP8(q[3], q[7]);
+}
+
+static const unsigned char Rcon[] = {
+ 0x01, 0x02, 0x04, 0x08, 0x10, 0x20, 0x40, 0x80, 0x1B, 0x36
+};
+
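+/*
+ * Apply the AES S-box to each byte of a 32-bit word, for the key
+ * schedule: replicate x into all eight bitslice slots, orthogonalize,
+ * run the bitsliced S-box, and orthogonalize back (br_aes_ct_ortho is
+ * an involution); slot 0 then holds SubWord(x).
+ */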
+static uint32_t
+sub_word(uint32_t x)
+{
+ uint32_t q[8];
+ int i;
+
+ for (i = 0; i < 8; i ++) {
+ q[i] = x;
+ }
+ br_aes_ct_ortho(q);
+ br_aes_ct_bitslice_Sbox(q);
+ br_aes_ct_ortho(q);
+ return q[0];
+}
+
+/* see inner.h */
+unsigned
+br_aes_ct_keysched(uint32_t *comp_skey, const void *key, size_t key_len)
+{
+ unsigned num_rounds;
+ int i, j, k, nk, nkf;
+ uint32_t tmp;
+ uint32_t skey[120];
+
+ switch (key_len) {
+ case 16:
+ num_rounds = 10;
+ break;
+ case 24:
+ num_rounds = 12;
+ break;
+ case 32:
+ num_rounds = 14;
+ break;
+ default:
+ /* abort(); */
+ return 0;
+ }
+ nk = (int)(key_len >> 2);
+ nkf = (int)((num_rounds + 1) << 2);
+ tmp = 0;
+ for (i = 0; i < nk; i ++) {
+ tmp = br_dec32le((const unsigned char *)key + (i << 2));
+ skey[(i << 1) + 0] = tmp;
+ skey[(i << 1) + 1] = tmp;
+ }
+ for (i = nk, j = 0, k = 0; i < nkf; i ++) {
+ if (j == 0) {
+ tmp = (tmp << 24) | (tmp >> 8);
+ tmp = sub_word(tmp) ^ Rcon[k];
+ } else if (nk > 6 && j == 4) {
+ tmp = sub_word(tmp);
+ }
+ tmp ^= skey[(i - nk) << 1];
+ skey[(i << 1) + 0] = tmp;
+ skey[(i << 1) + 1] = tmp;
+ if (++ j == nk) {
+ j = 0;
+ k ++;
+ }
+ }
+ for (i = 0; i < nkf; i += 4) {
+ br_aes_ct_ortho(skey + (i << 1));
+ }
+ for (i = 0, j = 0; i < nkf; i ++, j += 2) {
+ comp_skey[i] = (skey[j + 0] & 0x55555555)
+ | (skey[j + 1] & 0xAAAAAAAA);
+ }
+ return num_rounds;
+}
+
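+/*
+ * br_aes_ct_keysched stores each bitsliced subkey word twice, then
+ * packs each pair into a single word (even bits from one copy, odd
+ * bits from the other).  br_aes_ct_skey_expand undoes the packing by
+ * duplicating every kept bit into its neighbour.
+ */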
+/* see inner.h */
+void
+br_aes_ct_skey_expand(uint32_t *skey,
+ unsigned num_rounds, const uint32_t *comp_skey)
+{
+ unsigned u, v, n;
+
+ n = (num_rounds + 1) << 2;
+ for (u = 0, v = 0; u < n; u ++, v += 2) {
+ uint32_t x, y;
+
+ x = y = comp_skey[u];
+ x &= 0x55555555;
+ skey[v + 0] = x | (x << 1);
+ y &= 0xAAAAAAAA;
+ skey[v + 1] = y | (y >> 1);
+ }
+}
diff -r 81a487955535 -r 9d6b84c40f65 sys/crypto/aes/aes_ct_dec.c
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/aes_ct_dec.c Fri Jun 12 05:16:46 2020 +0000
@@ -0,0 +1,177 @@
+/* $NetBSD$ */
+
+/*
+ * Copyright (c) 2016 Thomas Pornin <pornin%bolet.org@localhost>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining
+ * a copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <sys/cdefs.h>
+__KERNEL_RCSID(1, "$NetBSD$");
+
+#include <sys/types.h>
+
+#include <crypto/aes/aes_bear.h>
+
+/* see inner.h */
+void
+br_aes_ct_bitslice_invSbox(uint32_t *q)
+{
+ /*
+ * AES S-box is:
+ * S(x) = A(I(x)) ^ 0x63
+ * where I() is inversion in GF(256), and A() is a linear
+ * transform (0 is formally defined to be its own inverse).
+ * Since inversion is an involution, the inverse S-box can be
+ * computed from the S-box as:
+ * iS(x) = B(S(B(x ^ 0x63)) ^ 0x63)
+ * where B() is the inverse of A(). Indeed, for any y in GF(256):
+ * iS(S(y)) = B(A(I(B(A(I(y)) ^ 0x63 ^ 0x63))) ^ 0x63 ^ 0x63) = y
+ *
+ * Note: we reuse the implementation of the forward S-box,
+ * instead of duplicating it here, so that total code size is
+	 * lower. By merging the B() transforms into the S-box circuit
+	 * we could make CBC decryption faster, but CBC decryption is
+	 * already considerably faster than CBC encryption because we
+	 * can process two blocks in parallel.
+ */
+ uint32_t q0, q1, q2, q3, q4, q5, q6, q7;
+
+ q0 = ~q[0];
+ q1 = ~q[1];
+ q2 = q[2];
+ q3 = q[3];
+ q4 = q[4];
+ q5 = ~q[5];
+ q6 = ~q[6];
+ q7 = q[7];
+ q[7] = q1 ^ q4 ^ q6;
+ q[6] = q0 ^ q3 ^ q5;
+ q[5] = q7 ^ q2 ^ q4;
+ q[4] = q6 ^ q1 ^ q3;
+ q[3] = q5 ^ q0 ^ q2;
+ q[2] = q4 ^ q7 ^ q1;
+ q[1] = q3 ^ q6 ^ q0;
+ q[0] = q2 ^ q5 ^ q7;
+
+ br_aes_ct_bitslice_Sbox(q);
+
+ q0 = ~q[0];
+ q1 = ~q[1];
+ q2 = q[2];
+ q3 = q[3];
+ q4 = q[4];
+ q5 = ~q[5];
+ q6 = ~q[6];
+ q7 = q[7];
+ q[7] = q1 ^ q4 ^ q6;
+ q[6] = q0 ^ q3 ^ q5;
+ q[5] = q7 ^ q2 ^ q4;
+ q[4] = q6 ^ q1 ^ q3;
+ q[3] = q5 ^ q0 ^ q2;
+ q[2] = q4 ^ q7 ^ q1;
+ q[1] = q3 ^ q6 ^ q0;
+ q[0] = q2 ^ q5 ^ q7;
+}
+
+static void
+add_round_key(uint32_t *q, const uint32_t *sk)
+{
+ int i;
+
+ for (i = 0; i < 8; i ++) {
+ q[i] ^= sk[i];
+ }
+}
+
+static void
+inv_shift_rows(uint32_t *q)
+{
+ int i;
+
+ for (i = 0; i < 8; i ++) {
+ uint32_t x;
+
+ x = q[i];
+ q[i] = (x & 0x000000FF)
+ | ((x & 0x00003F00) << 2) | ((x & 0x0000C000) >> 6)
+ | ((x & 0x000F0000) << 4) | ((x & 0x00F00000) >> 4)
+ | ((x & 0x03000000) << 6) | ((x & 0xFC000000) >> 2);
+ }
+}
+
+static inline uint32_t
+rotr16(uint32_t x)
+{
+ return (x << 16) | (x >> 16);
+}
+
+static void
+inv_mix_columns(uint32_t *q)
+{
+ uint32_t q0, q1, q2, q3, q4, q5, q6, q7;
+ uint32_t r0, r1, r2, r3, r4, r5, r6, r7;
+
+ q0 = q[0];
+ q1 = q[1];
+ q2 = q[2];
+ q3 = q[3];
+ q4 = q[4];
+ q5 = q[5];
+ q6 = q[6];
+ q7 = q[7];
+ r0 = (q0 >> 8) | (q0 << 24);
+ r1 = (q1 >> 8) | (q1 << 24);
+ r2 = (q2 >> 8) | (q2 << 24);
+ r3 = (q3 >> 8) | (q3 << 24);
+ r4 = (q4 >> 8) | (q4 << 24);
+ r5 = (q5 >> 8) | (q5 << 24);
+ r6 = (q6 >> 8) | (q6 << 24);
+ r7 = (q7 >> 8) | (q7 << 24);
+
+ q[0] = q5 ^ q6 ^ q7 ^ r0 ^ r5 ^ r7 ^ rotr16(q0 ^ q5 ^ q6 ^ r0 ^ r5);
+ q[1] = q0 ^ q5 ^ r0 ^ r1 ^ r5 ^ r6 ^ r7 ^ rotr16(q1 ^ q5 ^ q7 ^ r1 ^ r5 ^ r6);
+ q[2] = q0 ^ q1 ^ q6 ^ r1 ^ r2 ^ r6 ^ r7 ^ rotr16(q0 ^ q2 ^ q6 ^ r2 ^ r6 ^ r7);
+ q[3] = q0 ^ q1 ^ q2 ^ q5 ^ q6 ^ r0 ^ r2 ^ r3 ^ r5 ^ rotr16(q0 ^ q1 ^ q3 ^ q5 ^ q6 ^ q7 ^ r0 ^ r3 ^ r5 ^ r7);
+ q[4] = q1 ^ q2 ^ q3 ^ q5 ^ r1 ^ r3 ^ r4 ^ r5 ^ r6 ^ r7 ^ rotr16(q1 ^ q2 ^ q4 ^ q5 ^ q7 ^ r1 ^ r4 ^ r5 ^ r6);
+ q[5] = q2 ^ q3 ^ q4 ^ q6 ^ r2 ^ r4 ^ r5 ^ r6 ^ r7 ^ rotr16(q2 ^ q3 ^ q5 ^ q6 ^ r2 ^ r5 ^ r6 ^ r7);
+ q[6] = q3 ^ q4 ^ q5 ^ q7 ^ r3 ^ r5 ^ r6 ^ r7 ^ rotr16(q3 ^ q4 ^ q6 ^ q7 ^ r3 ^ r6 ^ r7);
+ q[7] = q4 ^ q5 ^ q6 ^ r4 ^ r6 ^ r7 ^ rotr16(q4 ^ q5 ^ q7 ^ r4 ^ r7);
+}
+
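+/*
+ * Straightforward inverse cipher on the bitsliced state: peel off the
+ * last round key, then for each round undo ShiftRows and SubBytes,
+ * add the round key, and undo MixColumns; the initial (whitening) key
+ * is applied last.
+ */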
+/* see inner.h */
+void
+br_aes_ct_bitslice_decrypt(unsigned num_rounds,
+ const uint32_t *skey, uint32_t *q)
+{
+ unsigned u;
+
+ add_round_key(q, skey + (num_rounds << 3));
+ for (u = num_rounds - 1; u > 0; u --) {
+ inv_shift_rows(q);
+ br_aes_ct_bitslice_invSbox(q);
+ add_round_key(q, skey + (u << 3));
+ inv_mix_columns(q);
+ }
+ inv_shift_rows(q);
+ br_aes_ct_bitslice_invSbox(q);
+ add_round_key(q, skey);
+}
diff -r 81a487955535 -r 9d6b84c40f65 sys/crypto/aes/aes_ct_enc.c
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/aes_ct_enc.c Fri Jun 12 05:16:46 2020 +0000
@@ -0,0 +1,119 @@
+/* $NetBSD$ */
+
+/*
+ * Copyright (c) 2016 Thomas Pornin <pornin%bolet.org@localhost>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining
+ * a copy of this software and associated documentation files (the
+ * "Software"), to deal in the Software without restriction, including
+ * without limitation the rights to use, copy, modify, merge, publish,
+ * distribute, sublicense, and/or sell copies of the Software, and to
+ * permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <sys/cdefs.h>
+__KERNEL_RCSID(1, "$NetBSD$");
+
+#include <sys/types.h>
+
+#include <crypto/aes/aes_bear.h>
+
+static inline void
+add_round_key(uint32_t *q, const uint32_t *sk)
+{
+ q[0] ^= sk[0];
+ q[1] ^= sk[1];
+ q[2] ^= sk[2];
+ q[3] ^= sk[3];
+ q[4] ^= sk[4];
+ q[5] ^= sk[5];
+ q[6] ^= sk[6];
+ q[7] ^= sk[7];
+}
+
+static inline void
+shift_rows(uint32_t *q)
+{
+ int i;
+
+ for (i = 0; i < 8; i ++) {
+ uint32_t x;
+
+ x = q[i];
+ q[i] = (x & 0x000000FF)
+ | ((x & 0x0000FC00) >> 2) | ((x & 0x00000300) << 6)
+ | ((x & 0x00F00000) >> 4) | ((x & 0x000F0000) << 4)
+ | ((x & 0xC0000000) >> 6) | ((x & 0x3F000000) << 2);
+ }
+}
+
+static inline uint32_t
+rotr16(uint32_t x)
+{
+ return (x << 16) | (x >> 16);
+}
+
+static inline void
+mix_columns(uint32_t *q)
+{
+ uint32_t q0, q1, q2, q3, q4, q5, q6, q7;
+ uint32_t r0, r1, r2, r3, r4, r5, r6, r7;
+
+ q0 = q[0];
+ q1 = q[1];
+ q2 = q[2];
+ q3 = q[3];
+ q4 = q[4];
+ q5 = q[5];
+ q6 = q[6];
+ q7 = q[7];
+ r0 = (q0 >> 8) | (q0 << 24);
+ r1 = (q1 >> 8) | (q1 << 24);
+ r2 = (q2 >> 8) | (q2 << 24);
+ r3 = (q3 >> 8) | (q3 << 24);
+ r4 = (q4 >> 8) | (q4 << 24);
+ r5 = (q5 >> 8) | (q5 << 24);
+ r6 = (q6 >> 8) | (q6 << 24);
+ r7 = (q7 >> 8) | (q7 << 24);
+
+ q[0] = q7 ^ r7 ^ r0 ^ rotr16(q0 ^ r0);
+ q[1] = q0 ^ r0 ^ q7 ^ r7 ^ r1 ^ rotr16(q1 ^ r1);
+ q[2] = q1 ^ r1 ^ r2 ^ rotr16(q2 ^ r2);
+ q[3] = q2 ^ r2 ^ q7 ^ r7 ^ r3 ^ rotr16(q3 ^ r3);
+ q[4] = q3 ^ r3 ^ q7 ^ r7 ^ r4 ^ rotr16(q4 ^ r4);
+ q[5] = q4 ^ r4 ^ r5 ^ rotr16(q5 ^ r5);
+ q[6] = q5 ^ r5 ^ r6 ^ rotr16(q6 ^ r6);
+ q[7] = q6 ^ r6 ^ r7 ^ rotr16(q7 ^ r7);
+}
+
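+/*
+ * Standard AES round structure on the bitsliced state: an initial
+ * AddRoundKey, then num_rounds - 1 full rounds of SubBytes/ShiftRows/
+ * MixColumns/AddRoundKey, and a final round that omits MixColumns.
+ */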
+/* see inner.h */
+void
+br_aes_ct_bitslice_encrypt(unsigned num_rounds,
+ const uint32_t *skey, uint32_t *q)
+{
+ unsigned u;
+
+ add_round_key(q, skey);
+ for (u = 1; u < num_rounds; u ++) {
+ br_aes_ct_bitslice_Sbox(q);
+ shift_rows(q);
+ mix_columns(q);
+ add_round_key(q, skey + (u << 3));
+ }
+ br_aes_ct_bitslice_Sbox(q);
+ shift_rows(q);
+ add_round_key(q, skey + (num_rounds << 3));
+}
diff -r 81a487955535 -r 9d6b84c40f65 sys/crypto/aes/aes_impl.c
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/aes_impl.c Fri Jun 12 05:16:46 2020 +0000
@@ -0,0 +1,256 @@
+/* $NetBSD$ */
+
+/*-
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__KERNEL_RCSID(1, "$NetBSD$");
+
+#include <sys/types.h>
+#include <sys/kernel.h>
+#include <sys/module.h>
+#include <sys/once.h>
+#include <sys/systm.h>
+
+#include <crypto/aes/aes.h>
+#include <crypto/aes/aes_bear.h> /* default implementation */
+
+static const struct aes_impl *aes_md_impl __read_mostly;
+static const struct aes_impl *aes_impl __read_mostly;
+
+/*
+ * The timing of AES implementation selection is finicky:
+ *
+ * 1. It has to be done _after_ cpu_attach for implementations,
+ * such as AES-NI, that rely on fpu initialization done by
+ * fpu_attach.
+ *
+ * 2. It has to be done _before_ the cgd self-tests or anything
+ * else that might call AES.
+ *
+ * For the moment, doing it in module init works. However, if a
+ * driver-class module depended on the aes module, that would break.
+ */
+
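+/*
+ * Intended order of events at boot, roughly:
+ *
+ *	cpu_attach/fpu_attach
+ *	aes_md_init(impl)	(machine-dependent implementation, if any)
+ *	aes_modcmd(MODULE_CMD_INIT) -> aes_select()
+ *	first AES user (cgd self-tests, IPsec, ...)
+ */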
+static int
+aes_select(void)
+{
+
+ KASSERT(aes_impl == NULL);
+
+ if (aes_md_impl) {
+ if (aes_selftest(aes_md_impl))
+ aprint_error("aes: self-test failed: %s\n",
+ aes_md_impl->ai_name);
+ else
+ aes_impl = aes_md_impl;
+ }
+ if (aes_impl == NULL) {
+ if (aes_selftest(&aes_bear_impl))
+ aprint_error("aes: self-test failed: %s\n",
+ aes_bear_impl.ai_name);
+ else
+ aes_impl = &aes_bear_impl;
+ }
+ if (aes_impl == NULL)
+ panic("AES self-tests failed");
+
+ aprint_verbose("aes: %s\n", aes_impl->ai_name);
+ return 0;
+}
+
+MODULE(MODULE_CLASS_MISC, aes, NULL);
+
+static int
+aes_modcmd(modcmd_t cmd, void *opaque)
+{
+
+ switch (cmd) {
+ case MODULE_CMD_INIT:
+ return aes_select();
+ case MODULE_CMD_FINI:
+ return 0;
+ default:
+ return ENOTTY;
+ }
+}
+
+static void
+aes_guarantee_selected(void)
+{
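+	/*
+	 * Nothing to do for now: selection already happened at module
+	 * init time, per the comment above.  The RUN_ONCE fallback is
+	 * kept, disabled, in case that ever changes.
+	 */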
+#if 0
+ static once_t once;
+ int error;
+
+ error = RUN_ONCE(&once, aes_select);
+ KASSERT(error == 0);
+#endif
+}
+
+void
+aes_md_init(const struct aes_impl *impl)
+{
+
+ KASSERT(cold);
+ KASSERTMSG(aes_impl == NULL,
+ "AES implementation `%s' already chosen, can't offer `%s'",
+ aes_impl->ai_name, impl->ai_name);
+ KASSERTMSG(aes_md_impl == NULL,
+ "AES implementation `%s' already offered, can't offer `%s'",
+ aes_md_impl->ai_name, impl->ai_name);
+
+ aes_md_impl = impl;
+}
+
+static void
+aes_setenckey(struct aesenc *enc, const uint8_t key[static 16],
+ uint32_t nrounds)
+{
+
+ aes_guarantee_selected();
+ aes_impl->ai_setenckey(enc, key, nrounds);
+}
+
+uint32_t
+aes_setenckey128(struct aesenc *enc, const uint8_t key[static 16])
+{
+ uint32_t nrounds = AES_128_NROUNDS;
+
+ aes_setenckey(enc, key, nrounds);
+ return nrounds;
+}
+
+uint32_t
+aes_setenckey192(struct aesenc *enc, const uint8_t key[static 24])
+{
+ uint32_t nrounds = AES_192_NROUNDS;
+
+ aes_setenckey(enc, key, nrounds);
+ return nrounds;
+}
+
+uint32_t
+aes_setenckey256(struct aesenc *enc, const uint8_t key[static 32])
+{
+ uint32_t nrounds = AES_256_NROUNDS;
+
+ aes_setenckey(enc, key, nrounds);
+ return nrounds;
+}
+
+static void
+aes_setdeckey(struct aesdec *dec, const uint8_t key[static 16],
+ uint32_t nrounds)
+{
+
+ aes_guarantee_selected();
+ aes_impl->ai_setdeckey(dec, key, nrounds);
+}
+
+uint32_t
+aes_setdeckey128(struct aesdec *dec, const uint8_t key[static 16])
+{
+ uint32_t nrounds = AES_128_NROUNDS;
+
+ aes_setdeckey(dec, key, nrounds);
+ return nrounds;
+}
+
+uint32_t
+aes_setdeckey192(struct aesdec *dec, const uint8_t key[static 24])
+{
+ uint32_t nrounds = AES_192_NROUNDS;
+
+ aes_setdeckey(dec, key, nrounds);
+ return nrounds;
+}
+
+uint32_t
+aes_setdeckey256(struct aesdec *dec, const uint8_t key[static 32])
+{
+ uint32_t nrounds = AES_256_NROUNDS;
+
+ aes_setdeckey(dec, key, nrounds);
+ return nrounds;
+}
+
+void
+aes_enc(const struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], uint32_t nrounds)
+{
+
+ aes_guarantee_selected();
+ aes_impl->ai_enc(enc, in, out, nrounds);
+}
+
+void
+aes_dec(const struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], uint32_t nrounds)
+{
+
+ aes_guarantee_selected();
+ aes_impl->ai_dec(dec, in, out, nrounds);
+}
+
+void
+aes_cbc_enc(struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t iv[static 16],
+ uint32_t nrounds)
+{
+
+ aes_guarantee_selected();
+ aes_impl->ai_cbc_enc(enc, in, out, nbytes, iv, nrounds);
+}
+
+void
+aes_cbc_dec(struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t iv[static 16],
+ uint32_t nrounds)
+{
+
+ aes_guarantee_selected();
+ aes_impl->ai_cbc_dec(dec, in, out, nbytes, iv, nrounds);
+}
+
+void
+aes_xts_enc(struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t tweak[static 16],
+ uint32_t nrounds)
+{
+
+ aes_guarantee_selected();
+ aes_impl->ai_xts_enc(enc, in, out, nbytes, tweak, nrounds);
+}
+
+void
+aes_xts_dec(struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t tweak[static 16],
+ uint32_t nrounds)
+{
+
+ aes_guarantee_selected();
+ aes_impl->ai_xts_dec(dec, in, out, nbytes, tweak, nrounds);
+}
diff -r 81a487955535 -r 9d6b84c40f65 sys/crypto/aes/aes_rijndael.c
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/aes_rijndael.c Fri Jun 12 05:16:46 2020 +0000
@@ -0,0 +1,306 @@
+/* $NetBSD$ */
+
+/*-
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/*
+ * Legacy `Rijndael' API
+ *
+ * rijndael_set_key
+ * rijndael_encrypt
+ * rijndael_decrypt
+ *
+ * rijndaelKeySetupEnc
+ * rijndaelKeySetupDec
+ * rijndaelEncrypt
+ * rijndaelDecrypt
+ * rijndael_makeKey
+ * rijndael_cipherInit
+ * rijndael_blockEncrypt
+ * rijndael_blockDecrypt
+ */
+
+#include <sys/cdefs.h>
+__KERNEL_RCSID(1, "$NetBSD$");
+
+#include <sys/types.h>
+#include <sys/systm.h>
+
+#include <crypto/aes/aes.h>
+#include <crypto/rijndael/rijndael.h>
+#include <crypto/rijndael/rijndael-alg-fst.h>
+#include <crypto/rijndael/rijndael-api-fst.h>
+
+void
+rijndael_set_key(rijndael_ctx *ctx, const uint8_t *key, int keybits)
+{
+
+ ctx->Nr = rijndaelKeySetupEnc(ctx->ek, key, keybits);
+ rijndaelKeySetupDec(ctx->dk, key, keybits);
+}
+
+void
+rijndael_encrypt(const rijndael_ctx *ctx, const uint8_t *in, uint8_t *out)
+{
+
+ rijndaelEncrypt(ctx->ek, ctx->Nr, in, out);
+}
+
+void
+rijndael_decrypt(const rijndael_ctx *ctx, const u_char *in, uint8_t *out)
+{
+
+ rijndaelDecrypt(ctx->dk, ctx->Nr, in, out);
+}
+
+int
+rijndaelKeySetupEnc(uint32_t *rk, const uint8_t *key, int keybits)
+{
+ struct aesenc enc;
+ unsigned nrounds;
+
+ switch (keybits) {
+ case 128:
+ nrounds = aes_setenckey128(&enc, key);
+ break;
+ case 192:
+ nrounds = aes_setenckey192(&enc, key);
+ break;
+ case 256:
+ nrounds = aes_setenckey256(&enc, key);
+ break;
+ default:
+ panic("invalid AES key bits: %d", keybits);
+ }
+
+ memcpy(rk, enc.aese_aes.aes_rk, 4*(nrounds + 1)*sizeof(rk[0]));
+ explicit_memset(&enc, 0, sizeof enc);
+
+ return nrounds;
+}
+
+int
+rijndaelKeySetupDec(uint32_t *rk, const uint8_t *key, int keybits)
+{
+ struct aesdec dec;
+ unsigned nrounds;
+
+ switch (keybits) {
+ case 128:
+ nrounds = aes_setdeckey128(&dec, key);
+ break;
+ case 192:
+ nrounds = aes_setdeckey192(&dec, key);
+ break;
+ case 256:
+ nrounds = aes_setdeckey256(&dec, key);
+ break;
+ default:
+ panic("invalid AES key bits: %d", keybits);
+ }
+
+ memcpy(rk, dec.aesd_aes.aes_rk, 4*(nrounds + 1)*sizeof(rk[0]));
+ explicit_memset(&dec, 0, sizeof dec);
+
+ return nrounds;
+}
+
+void
+rijndaelEncrypt(const uint32_t *rk, int nrounds, const uint8_t in[16],
+ uint8_t out[16])
+{
+ struct aesenc enc;
+
+ memcpy(enc.aese_aes.aes_rk, rk, 4*(nrounds + 1)*sizeof(rk[0]));
+ aes_enc(&enc, in, out, nrounds);
+ explicit_memset(&enc, 0, sizeof enc);
+}
+
+void
+rijndaelDecrypt(const uint32_t *rk, int nrounds, const uint8_t in[16],
+ uint8_t out[16])
+{
+ struct aesdec dec;
+
+ memcpy(dec.aesd_aes.aes_rk, rk, 4*(nrounds + 1)*sizeof(rk[0]));
+ aes_dec(&dec, in, out, nrounds);
+ explicit_memset(&dec, 0, sizeof dec);
+}
+
+int
+rijndael_makeKey(keyInstance *key, BYTE direction, int keybits,
+ const char *keyp)
+{
+
+ if (key == NULL)
+ return BAD_KEY_INSTANCE;
+
+ memset(key, 0x1a, sizeof(*key));
+
+ switch (direction) {
+ case DIR_ENCRYPT:
+ case DIR_DECRYPT:
+ key->direction = direction;
+ break;
+ default:
+ return BAD_KEY_DIR;
+ }
+
+ switch (keybits) {
+ case 128:
+ case 192:
+ case 256:
+ key->keyLen = keybits;
+ break;
+ default:
+ return BAD_KEY_MAT;
+ }
+
+ if (keyp)
+ memcpy(key->keyMaterial, keyp, keybits/8);
+
+ switch (direction) {
+ case DIR_ENCRYPT:
+ key->Nr = rijndaelKeySetupEnc(key->rk,
+ (const uint8_t *)key->keyMaterial, keybits);
+ break;
+ case DIR_DECRYPT:
+ key->Nr = rijndaelKeySetupDec(key->rk,
+ (const uint8_t *)key->keyMaterial, keybits);
+ break;
+ default:
+ panic("unknown encryption direction %d", direction);
+ }
+ rijndaelKeySetupEnc(key->ek, (const uint8_t *)key->keyMaterial,
+ keybits);
+
+ return 1;
+}
+
+int
+rijndael_cipherInit(cipherInstance *cipher, BYTE mode, const char *iv)
+{
+
+ switch (mode) {
+ case MODE_ECB: /* used only for encrypting one block */
+ case MODE_CBC:
+ case MODE_XTS:
+ cipher->mode = mode;
+ break;
+ case MODE_CFB1: /* unused */
+ default:
+ return BAD_CIPHER_MODE;
+ }
+
+ if (iv)
+ memcpy(cipher->IV, iv, RIJNDAEL_MAX_IV_SIZE);
+ else
+ memset(cipher->IV, 0, RIJNDAEL_MAX_IV_SIZE);
+
+ return 1;
+}
+
+int
+rijndael_blockEncrypt(cipherInstance *cipher, keyInstance *key,
+ const BYTE *in, int nbits, BYTE *out)
+{
+ struct aesenc enc;
+
+ if (cipher == NULL)
+ return BAD_CIPHER_STATE;
+ if (key == NULL)
+ return BAD_CIPHER_STATE;
+ if (key->direction != DIR_ENCRYPT)
+ return BAD_CIPHER_STATE;
+
+ if (in == NULL || nbits <= 0)
+ return 0;
+
+ memcpy(enc.aese_aes.aes_rk, key->rk,
+ 4*(key->Nr + 1)*sizeof(key->rk[0]));
+ switch (cipher->mode) {
+ case MODE_ECB:
+ KASSERT(nbits == 128);
+ aes_enc(&enc, in, out, key->Nr);
+ break;
+ case MODE_CBC:
+ KASSERT(nbits % 128 == 0);
+ aes_cbc_enc(&enc, in, out, nbits/8, (uint8_t *)cipher->IV,
+ key->Nr);
+ break;
+ case MODE_XTS:
+ KASSERT(nbits % 128 == 0);
+ aes_xts_enc(&enc, in, out, nbits/8, (uint8_t *)cipher->IV,
+ key->Nr);
+ break;
+ default:
+ panic("invalid AES mode: %d", cipher->mode);
+ }
+ explicit_memset(&enc, 0, sizeof enc);
+
+ return nbits;
+}
+
+int
+rijndael_blockDecrypt(cipherInstance *cipher, keyInstance *key,
+ const BYTE *in, int nbits, BYTE *out)
+{
+ struct aesdec dec;
+
+ if (cipher == NULL)
+ return BAD_CIPHER_STATE;
+ if (key == NULL)
+ return BAD_CIPHER_STATE;
+ if (key->direction != DIR_DECRYPT)
+ return BAD_CIPHER_STATE;
+
+ if (in == NULL || nbits <= 0)
+ return 0;
+
+ memcpy(dec.aesd_aes.aes_rk, key->rk,
+ 4*(key->Nr + 1)*sizeof(key->rk[0]));
+ switch (cipher->mode) {
+ case MODE_ECB:
+ KASSERT(nbits == 128);
+ aes_dec(&dec, in, out, key->Nr);
+ break;
+ case MODE_CBC:
+ KASSERT(nbits % 128 == 0);
+ aes_cbc_dec(&dec, in, out, nbits/8, (uint8_t *)cipher->IV,
+ key->Nr);
+ break;
+ case MODE_XTS:
+ KASSERT(nbits % 128 == 0);
+ aes_xts_dec(&dec, in, out, nbits/8, (uint8_t *)cipher->IV,
+ key->Nr);
+ break;
+ default:
+ panic("invalid AES mode: %d", cipher->mode);
+ }
+ explicit_memset(&dec, 0, sizeof dec);
+
+ return nbits;
+}
diff -r 81a487955535 -r 9d6b84c40f65 sys/crypto/aes/aes_selftest.c
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/aes_selftest.c Fri Jun 12 05:16:46 2020 +0000
@@ -0,0 +1,387 @@
+/* $NetBSD$ */
+
+/*-
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__KERNEL_RCSID(1, "$NetBSD$");
+
+#include <sys/types.h>
+#include <sys/systm.h>
+
+#include <lib/libkern/libkern.h>
+
+#include <crypto/aes/aes.h>
+
+static const unsigned aes_keybytes[] __unused = { 16, 24, 32 };
+static const unsigned aes_keybits[] __unused = { 128, 192, 256 };
+static const unsigned aes_nrounds[] = { 10, 12, 14 };
+
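+/*
+ * Report a self-test failure and evaluate to -1, so that callers can
+ * write `return aes_selftest_fail(...);'.  (GCC statement expression.)
+ */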
+#define aes_selftest_fail(impl, actual, expected, nbytes, fmt, args...) \
+({ \
+ printf("%s "fmt": self-test failed\n", (impl)->ai_name, ##args); \
+ hexdump(printf, "was", (actual), (nbytes)); \
+ hexdump(printf, "expected", (expected), (nbytes)); \
+ -1; \
+})
+
+static int
+aes_selftest_encdec(const struct aes_impl *impl)
+{
+ /*
+ * head -c 16 < /dev/zero | openssl enc -aes-{128,192,256}-ecb
+ * -nopad -K 000102030405060708090a0b0c0d... | hexdump -C
+ */
+ static const uint8_t expected[3][16] = {
+ [0] = {
+ 0xc6,0xa1,0x3b,0x37,0x87,0x8f,0x5b,0x82,
+ 0x6f,0x4f,0x81,0x62,0xa1,0xc8,0xd8,0x79,
+ },
+ [1] = {
+ 0x91,0x62,0x51,0x82,0x1c,0x73,0xa5,0x22,
+ 0xc3,0x96,0xd6,0x27,0x38,0x01,0x96,0x07,
+ },
+ [2] = {
+ 0xf2,0x90,0x00,0xb6,0x2a,0x49,0x9f,0xd0,
+ 0xa9,0xf3,0x9a,0x6a,0xdd,0x2e,0x77,0x80,
+ },
+ };
+ struct aesenc enc;
+ struct aesdec dec;
+ uint8_t key[32];
+ uint8_t in[16];
+ uint8_t outbuf[18] = { [0] = 0x1a, [17] = 0x1a }, *out = outbuf + 1;
+ unsigned i;
+
+ for (i = 0; i < 32; i++)
+ key[i] = i;
+ for (i = 0; i < 16; i++)
+ in[i] = 0;
+
+ for (i = 0; i < 3; i++) {
+ impl->ai_setenckey(&enc, key, aes_nrounds[i]);
+ impl->ai_setdeckey(&dec, key, aes_nrounds[i]);
+ impl->ai_enc(&enc, in, out, aes_nrounds[i]);
+ if (memcmp(out, expected[i], 16))
+ return aes_selftest_fail(impl, out, expected[i], 16,
+ "AES-%u enc", aes_keybits[i]);
+ impl->ai_dec(&dec, out, out, aes_nrounds[i]);
+ if (memcmp(out, in, 16))
+ return aes_selftest_fail(impl, out, in, 16,
+ "AES-%u dec", aes_keybits[i]);
+ }
+
+ if (outbuf[0] != 0x1a)
+ return aes_selftest_fail(impl, outbuf,
+ (const uint8_t[1]){0x1a}, 1,
+ "AES overrun preceding");
+ if (outbuf[17] != 0x1a)
+ return aes_selftest_fail(impl, outbuf + 17,
+ (const uint8_t[1]){0x1a}, 1,
+ "AES overrun folllowing");
+
+ /* Success! */
+ return 0;
+}
+
+static int
+aes_selftest_encdec_cbc(const struct aes_impl *impl)
+{
+ static const uint8_t expected[3][144] = {
+ [0] = {
+ 0xfe,0xf1,0xa8,0xb6,0x25,0xf0,0xc4,0x3a,
+ 0x71,0x08,0xb6,0x23,0xa6,0xfb,0x90,0xca,
+ 0x9e,0x64,0x6d,0x95,0xb5,0xf5,0x41,0x24,
+ 0xd2,0xe6,0x60,0xda,0x6c,0x69,0xc4,0xa0,
+ 0x4d,0xaa,0x94,0xf6,0x66,0x1e,0xaa,0x85,
+ 0x68,0xc5,0x6b,0x2e,0x77,0x7a,0x68,0xff,
+ 0x45,0x15,0x45,0xc5,0x9c,0xbb,0x3a,0x23,
+ 0x08,0x3a,0x06,0xdd,0xc0,0x52,0xd2,0xb7,
+ 0x47,0xaa,0x1c,0xc7,0xb5,0xa9,0x7d,0x04,
+ 0x60,0x67,0x78,0xf6,0xb9,0xba,0x26,0x84,
+ 0x45,0x72,0x44,0xed,0xa3,0xd3,0xa0,0x3f,
+ 0x19,0xee,0x3f,0x94,0x59,0x52,0x4b,0x13,
+ 0xfd,0x81,0xcc,0xf9,0xf2,0x29,0xd7,0xec,
+ 0xde,0x03,0x56,0x01,0x4a,0x19,0x86,0xc0,
+ 0x87,0xce,0xe1,0xcc,0x13,0xf1,0x2e,0xda,
+ 0x3f,0xfe,0xa4,0x64,0xe7,0x48,0xb4,0x7b,
+ 0x73,0x62,0x5a,0x80,0x5e,0x01,0x20,0xa5,
+ 0x0a,0xd7,0x98,0xa7,0xd9,0x8b,0xff,0xc2,
+ },
+ [1] = {
+ 0xa6,0x87,0xf0,0x92,0x68,0xc8,0xd6,0x42,
+ 0xa8,0x83,0x1c,0x92,0x65,0x8c,0xd9,0xfe,
+ 0x0b,0x1a,0xc6,0x96,0x27,0x44,0xd4,0x14,
+ 0xfc,0xe7,0x85,0xb2,0x71,0xc7,0x11,0x39,
+ 0xed,0x36,0xd3,0x5c,0xa7,0xf7,0x3d,0xc9,
+ 0xa2,0x54,0x8b,0xb4,0xfa,0xe8,0x21,0xf9,
+ 0xfd,0x6a,0x42,0x85,0xde,0x66,0xd4,0xc0,
+ 0xa7,0xd3,0x5b,0xe1,0xe6,0xac,0xea,0xf9,
+ 0xa3,0x15,0x68,0xf4,0x66,0x4c,0x23,0x75,
+ 0x58,0xba,0x7f,0xca,0xbf,0x40,0x56,0x79,
+ 0x2f,0xbf,0xdf,0x5f,0x56,0xcb,0xa0,0xe4,
+ 0x22,0x65,0x6a,0x8f,0x4f,0xff,0x11,0x6b,
+ 0x57,0xeb,0x45,0xeb,0x9d,0x7f,0xfe,0x9c,
+ 0x8b,0x30,0xa8,0xb0,0x7e,0x27,0xf8,0xbc,
+ 0x1f,0xf8,0x15,0x34,0x36,0x4f,0x46,0x73,
+ 0x81,0x90,0x4b,0x4b,0x46,0x4d,0x01,0x45,
+ 0xa1,0xc3,0x0b,0xa8,0x5a,0xab,0xc1,0x88,
+ 0x66,0xc8,0x1a,0x94,0x17,0x64,0x6f,0xf4,
+ },
+ [2] = {
+ 0x22,0x4c,0x27,0xf4,0xba,0x37,0x8b,0x27,
+ 0xd3,0xd6,0x88,0x8a,0xdc,0xed,0x64,0x42,
+ 0x19,0x60,0x31,0x09,0xf3,0x72,0xd2,0xc2,
+ 0xd3,0xe3,0xff,0xce,0xc5,0x03,0x9f,0xce,
+ 0x99,0x49,0x8a,0xf2,0xe1,0xba,0xe2,0xa8,
+ 0xd7,0x32,0x07,0x2d,0xb0,0xb3,0xbc,0x67,
+ 0x32,0x9a,0x3e,0x7d,0x16,0x23,0xe7,0x24,
+ 0x84,0xe1,0x15,0x03,0x9c,0xa2,0x7a,0x95,
+ 0x34,0xa8,0x04,0x4e,0x79,0x31,0x50,0x26,
+ 0x76,0xd1,0x10,0xce,0xec,0x13,0xf7,0xfb,
+ 0x94,0x6b,0x76,0x50,0x5f,0xb2,0x3e,0x7c,
+ 0xbe,0x97,0xe7,0x13,0x06,0x9e,0x2d,0xc4,
+ 0x46,0x65,0xa7,0x69,0x37,0x07,0x25,0x37,
+ 0xe5,0x48,0x51,0xa8,0x58,0xe8,0x4d,0x7c,
+ 0xb5,0xbe,0x25,0x13,0xbc,0x11,0xc2,0xde,
+ 0xdb,0x00,0xef,0x1c,0x1d,0xeb,0xe3,0x49,
+ 0x1c,0xc0,0x78,0x29,0x76,0xc0,0xde,0x3a,
+ 0x0e,0x96,0x8f,0xea,0xd7,0x42,0x4e,0xb4,
+ },
+ };
+ struct aesenc enc;
+ struct aesdec dec;
+ uint8_t key[32];
+ uint8_t in[144];
+ uint8_t outbuf[146] = { [0] = 0x1a, [145] = 0x1a }, *out = outbuf + 1;
+ uint8_t iv0[16], iv[16];
+ unsigned i;
+
+ for (i = 0; i < 32; i++)
+ key[i] = i;
+ for (i = 0; i < 16; i++)
+ iv0[i] = 0x20 ^ i;
+ for (i = 0; i < 144; i++)
+ in[i] = 0x80 ^ i;
+
+ for (i = 0; i < 3; i++) {
+ impl->ai_setenckey(&enc, key, aes_nrounds[i]);
+ impl->ai_setdeckey(&dec, key, aes_nrounds[i]);
+
+ /* Try one swell foop. */
+ memcpy(iv, iv0, 16);
+ impl->ai_cbc_enc(&enc, in, out, 144, iv, aes_nrounds[i]);
+ if (memcmp(out, expected[i], 144))
+ return aes_selftest_fail(impl, out, expected[i], 144,
+ "AES-%u-CBC enc", aes_keybits[i]);
+
+ memcpy(iv, iv0, 16);
+ impl->ai_cbc_dec(&dec, out, out, 144, iv, aes_nrounds[i]);
+ if (memcmp(out, in, 144))
+ return aes_selftest_fail(impl, out, in, 144,
+ "AES-%u-CBC dec", aes_keybits[i]);
+
+ /* Try incrementally, with IV update. */
+ memcpy(iv, iv0, 16);
+ impl->ai_cbc_enc(&enc, in, out, 16, iv, aes_nrounds[i]);
+ impl->ai_cbc_enc(&enc, in + 16, out + 16, 128, iv,
+ aes_nrounds[i]);
+ if (memcmp(out, expected[i], 144))
+ return aes_selftest_fail(impl, out, expected[i], 144,
+ "AES-%u-CBC enc incremental", aes_keybits[i]);
+
+ memcpy(iv, iv0, 16);
+ impl->ai_cbc_dec(&dec, out, out, 128, iv, aes_nrounds[i]);
+ impl->ai_cbc_dec(&dec, out + 128, out + 128, 16, iv,
+ aes_nrounds[i]);
+ if (memcmp(out, in, 144))
+ return aes_selftest_fail(impl, out, in, 144,
+ "AES-%u-CBC dec incremental", aes_keybits[i]);
+ }
+
+ if (outbuf[0] != 0x1a)
+ return aes_selftest_fail(impl, outbuf,
+ (const uint8_t[1]){0x1a}, 1,
+ "AES-CBC overrun preceding");
+ if (outbuf[145] != 0x1a)
+ return aes_selftest_fail(impl, outbuf + 145,
+ (const uint8_t[1]){0x1a}, 1,
+ "AES-CBC overrun following");
+
+ /* Success! */
+ return 0;
+}
+
+static int
+aes_selftest_encdec_xts(const struct aes_impl *impl)
+{
+ uint64_t blkno[3] = { 0, 1, 0xff };
+ static const uint8_t expected[3][144] = {
+ [0] = {
+ /* IEEE P1619-D16, XTS-AES-128, Vector 4, truncated */
+ 0x27,0xa7,0x47,0x9b,0xef,0xa1,0xd4,0x76,
+ 0x48,0x9f,0x30,0x8c,0xd4,0xcf,0xa6,0xe2,
+ 0xa9,0x6e,0x4b,0xbe,0x32,0x08,0xff,0x25,
+ 0x28,0x7d,0xd3,0x81,0x96,0x16,0xe8,0x9c,
+ 0xc7,0x8c,0xf7,0xf5,0xe5,0x43,0x44,0x5f,
+ 0x83,0x33,0xd8,0xfa,0x7f,0x56,0x00,0x00,
+ 0x05,0x27,0x9f,0xa5,0xd8,0xb5,0xe4,0xad,
+ 0x40,0xe7,0x36,0xdd,0xb4,0xd3,0x54,0x12,
+ 0x32,0x80,0x63,0xfd,0x2a,0xab,0x53,0xe5,
+ 0xea,0x1e,0x0a,0x9f,0x33,0x25,0x00,0xa5,
+ 0xdf,0x94,0x87,0xd0,0x7a,0x5c,0x92,0xcc,
+ 0x51,0x2c,0x88,0x66,0xc7,0xe8,0x60,0xce,
+ 0x93,0xfd,0xf1,0x66,0xa2,0x49,0x12,0xb4,
+ 0x22,0x97,0x61,0x46,0xae,0x20,0xce,0x84,
+ 0x6b,0xb7,0xdc,0x9b,0xa9,0x4a,0x76,0x7a,
+ 0xae,0xf2,0x0c,0x0d,0x61,0xad,0x02,0x65,
+ 0x5e,0xa9,0x2d,0xc4,0xc4,0xe4,0x1a,0x89,
+ 0x52,0xc6,0x51,0xd3,0x31,0x74,0xbe,0x51,
+ },
+ [1] = {
+ },
+ [2] = {
+ /* IEEE P1619-D16, XTS-AES-256, Vector 10, truncated */
+ 0x1c,0x3b,0x3a,0x10,0x2f,0x77,0x03,0x86,
+ 0xe4,0x83,0x6c,0x99,0xe3,0x70,0xcf,0x9b,
+ 0xea,0x00,0x80,0x3f,0x5e,0x48,0x23,0x57,
+ 0xa4,0xae,0x12,0xd4,0x14,0xa3,0xe6,0x3b,
+ 0x5d,0x31,0xe2,0x76,0xf8,0xfe,0x4a,0x8d,
+ 0x66,0xb3,0x17,0xf9,0xac,0x68,0x3f,0x44,
+ 0x68,0x0a,0x86,0xac,0x35,0xad,0xfc,0x33,
+ 0x45,0xbe,0xfe,0xcb,0x4b,0xb1,0x88,0xfd,
+ 0x57,0x76,0x92,0x6c,0x49,0xa3,0x09,0x5e,
+ 0xb1,0x08,0xfd,0x10,0x98,0xba,0xec,0x70,
+ 0xaa,0xa6,0x69,0x99,0xa7,0x2a,0x82,0xf2,
+ 0x7d,0x84,0x8b,0x21,0xd4,0xa7,0x41,0xb0,
+ 0xc5,0xcd,0x4d,0x5f,0xff,0x9d,0xac,0x89,
+ 0xae,0xba,0x12,0x29,0x61,0xd0,0x3a,0x75,
+ 0x71,0x23,0xe9,0x87,0x0f,0x8a,0xcf,0x10,
+ 0x00,0x02,0x08,0x87,0x89,0x14,0x29,0xca,
+ 0x2a,0x3e,0x7a,0x7d,0x7d,0xf7,0xb1,0x03,
+ 0x55,0x16,0x5c,0x8b,0x9a,0x6d,0x0a,0x7d,
+ },
+ };
+ static const uint8_t key1[32] = {
+ 0x27,0x18,0x28,0x18,0x28,0x45,0x90,0x45,
+ 0x23,0x53,0x60,0x28,0x74,0x71,0x35,0x26,
+ 0x62,0x49,0x77,0x57,0x24,0x70,0x93,0x69,
+ 0x99,0x59,0x57,0x49,0x66,0x96,0x76,0x27,
+ };
+ static const uint8_t key2[32] = {
+ 0x31,0x41,0x59,0x26,0x53,0x58,0x97,0x93,
+ 0x23,0x84,0x62,0x64,0x33,0x83,0x27,0x95,
+ 0x02,0x88,0x41,0x97,0x16,0x93,0x99,0x37,
+ 0x51,0x05,0x82,0x09,0x74,0x94,0x45,0x92,
+ };
+ struct aesenc enc;
+ struct aesdec dec;
+ uint8_t in[144];
+ uint8_t outbuf[146] = { [0] = 0x1a, [145] = 0x1a }, *out = outbuf + 1;
+ uint8_t blkno_buf[16];
+ uint8_t iv0[16], iv[16];
+ unsigned i;
+
+ for (i = 0; i < 144; i++)
+ in[i] = i;
+
+ for (i = 0; i < 3; i++) {
+ if (i == 1) /* XXX missing AES-192 test vector */
+ continue;
+
+ /* Format the data unit sequence number. */
+ memset(blkno_buf, 0, sizeof blkno_buf);
+ le64enc(blkno_buf, blkno[i]);
+
+ /* Generate the tweak. */
+ impl->ai_setenckey(&enc, key2, aes_nrounds[i]);
+ impl->ai_enc(&enc, blkno_buf, iv0, aes_nrounds[i]);
+
+ /* Load the data encryption key. */
+ impl->ai_setenckey(&enc, key1, aes_nrounds[i]);
+ impl->ai_setdeckey(&dec, key1, aes_nrounds[i]);
+
+ /* Try one swell foop. */
+ memcpy(iv, iv0, 16);
+ impl->ai_xts_enc(&enc, in, out, 144, iv, aes_nrounds[i]);
+ if (memcmp(out, expected[i], 144))
+ return aes_selftest_fail(impl, out, expected[i], 144,
+ "AES-%u-XTS enc", aes_keybits[i]);
+
+ memcpy(iv, iv0, 16);
+ impl->ai_xts_dec(&dec, out, out, 144, iv, aes_nrounds[i]);
+ if (memcmp(out, in, 144))
+ return aes_selftest_fail(impl, out, in, 144,
+ "AES-%u-XTS dec", aes_keybits[i]);
+
+ /* Try incrementally, with IV update. */
+ memcpy(iv, iv0, 16);
+ impl->ai_xts_enc(&enc, in, out, 16, iv, aes_nrounds[i]);
+ impl->ai_xts_enc(&enc, in + 16, out + 16, 128, iv,
+ aes_nrounds[i]);
+ if (memcmp(out, expected[i], 144))
+ return aes_selftest_fail(impl, out, expected[i], 144,
+ "AES-%u-XTS enc incremental", aes_keybits[i]);
+
+ memcpy(iv, iv0, 16);
+ impl->ai_xts_dec(&dec, out, out, 128, iv, aes_nrounds[i]);
+ impl->ai_xts_dec(&dec, out + 128, out + 128, 16, iv,
+ aes_nrounds[i]);
+ if (memcmp(out, in, 144))
+ return aes_selftest_fail(impl, out, in, 144,
+ "AES-%u-XTS dec incremental", aes_keybits[i]);
+ }
+
+ if (outbuf[0] != 0x1a)
+ return aes_selftest_fail(impl, outbuf,
+ (const uint8_t[1]){0x1a}, 1,
+ "AES-XTS overrun preceding");
+ if (outbuf[145] != 0x1a)
+ return aes_selftest_fail(impl, outbuf + 145,
+ (const uint8_t[1]){0x1a}, 1,
+ "AES-XTS overrun following");
+
+ /* Success! */
+ return 0;
+}
+
+int
+aes_selftest(const struct aes_impl *impl)
+{
+ int result = 0;
+
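+	/*
+	 * If the implementation doesn't probe (e.g., the CPU lacks the
+	 * needed instructions), reject it.
+	 */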
+ if (impl->ai_probe())
+ return -1;
+
+ if (aes_selftest_encdec(impl))
+ result = -1;
+ if (aes_selftest_encdec_cbc(impl))
+ result = -1;
+ if (aes_selftest_encdec_xts(impl))
+ result = -1;
+
+ return result;
+}
diff -r 81a487955535 -r 9d6b84c40f65 sys/crypto/aes/files.aes
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/files.aes Fri Jun 12 05:16:46 2020 +0000
@@ -0,0 +1,12 @@
+# $NetBSD$
+
+define aes
+define rijndael: aes # legacy Rijndael API
+
+file crypto/aes/aes_bear.c aes
+file crypto/aes/aes_ct.c aes
+file crypto/aes/aes_ct_dec.c aes
+file crypto/aes/aes_ct_enc.c aes
+file crypto/aes/aes_impl.c aes
+file crypto/aes/aes_rijndael.c rijndael
+file crypto/aes/aes_selftest.c aes
diff -r 81a487955535 -r 9d6b84c40f65 sys/crypto/rijndael/files.rijndael
--- a/sys/crypto/rijndael/files.rijndael Sun Jun 14 15:58:39 2020 +0000
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,7 +0,0 @@
-# $NetBSD: files.rijndael,v 1.7 2020/04/22 09:15:40 rin Exp $
-
-define rijndael
-
-file crypto/rijndael/rijndael-alg-fst.c rijndael
-file crypto/rijndael/rijndael-api-fst.c rijndael
-file crypto/rijndael/rijndael.c rijndael
diff -r 81a487955535 -r 9d6b84c40f65 sys/crypto/rijndael/rijndael-alg-fst.c
--- a/sys/crypto/rijndael/rijndael-alg-fst.c Sun Jun 14 15:58:39 2020 +0000
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,1225 +0,0 @@
-/* $NetBSD: rijndael-alg-fst.c,v 1.7 2005/12/11 12:20:52 christos Exp $ */
-/* $KAME: rijndael-alg-fst.c,v 1.10 2003/07/15 10:47:16 itojun Exp $ */
-/**
- * rijndael-alg-fst.c
- *
- * @version 3.0 (December 2000)
- *
- * Optimised ANSI C code for the Rijndael cipher (now AES)
- *
- * @author Vincent Rijmen <vincent.rijmen%esat.kuleuven.ac.be@localhost>
- * @author Antoon Bosselaers <antoon.bosselaers%esat.kuleuven.ac.be@localhost>
- * @author Paulo Barreto <paulo.barreto%terra.com.br@localhost>
- *
- * This code is hereby placed in the public domain.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHORS ''AS IS'' AND ANY EXPRESS
- * OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE
- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
- * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
- * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
- * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
- * EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: rijndael-alg-fst.c,v 1.7 2005/12/11 12:20:52 christos Exp $");
-
-#include <sys/types.h>
-#ifdef _KERNEL
-#include <sys/systm.h>
-#else
-#include <string.h>
-#endif
-
-#include <crypto/rijndael/rijndael-alg-fst.h>
-#include <crypto/rijndael/rijndael_local.h>
-
-/*
-Te0[x] = S [x].[02, 01, 01, 03];
-Te1[x] = S [x].[03, 02, 01, 01];
-Te2[x] = S [x].[01, 03, 02, 01];
-Te3[x] = S [x].[01, 01, 03, 02];
-Te4[x] = S [x].[01, 01, 01, 01];
-
-Td0[x] = Si[x].[0e, 09, 0d, 0b];
-Td1[x] = Si[x].[0b, 0e, 09, 0d];
-Td2[x] = Si[x].[0d, 0b, 0e, 09];
-Td3[x] = Si[x].[09, 0d, 0b, 0e];
-Td4[x] = Si[x].[01, 01, 01, 01];
-*/
-
-static const u32 Te0[256] = {
- 0xc66363a5U, 0xf87c7c84U, 0xee777799U, 0xf67b7b8dU,
- 0xfff2f20dU, 0xd66b6bbdU, 0xde6f6fb1U, 0x91c5c554U,
- 0x60303050U, 0x02010103U, 0xce6767a9U, 0x562b2b7dU,
- 0xe7fefe19U, 0xb5d7d762U, 0x4dababe6U, 0xec76769aU,
- 0x8fcaca45U, 0x1f82829dU, 0x89c9c940U, 0xfa7d7d87U,
- 0xeffafa15U, 0xb25959ebU, 0x8e4747c9U, 0xfbf0f00bU,
- 0x41adadecU, 0xb3d4d467U, 0x5fa2a2fdU, 0x45afafeaU,
- 0x239c9cbfU, 0x53a4a4f7U, 0xe4727296U, 0x9bc0c05bU,
- 0x75b7b7c2U, 0xe1fdfd1cU, 0x3d9393aeU, 0x4c26266aU,
- 0x6c36365aU, 0x7e3f3f41U, 0xf5f7f702U, 0x83cccc4fU,
- 0x6834345cU, 0x51a5a5f4U, 0xd1e5e534U, 0xf9f1f108U,
- 0xe2717193U, 0xabd8d873U, 0x62313153U, 0x2a15153fU,
- 0x0804040cU, 0x95c7c752U, 0x46232365U, 0x9dc3c35eU,
- 0x30181828U, 0x379696a1U, 0x0a05050fU, 0x2f9a9ab5U,
- 0x0e070709U, 0x24121236U, 0x1b80809bU, 0xdfe2e23dU,
- 0xcdebeb26U, 0x4e272769U, 0x7fb2b2cdU, 0xea75759fU,
- 0x1209091bU, 0x1d83839eU, 0x582c2c74U, 0x341a1a2eU,
- 0x361b1b2dU, 0xdc6e6eb2U, 0xb45a5aeeU, 0x5ba0a0fbU,
- 0xa45252f6U, 0x763b3b4dU, 0xb7d6d661U, 0x7db3b3ceU,
- 0x5229297bU, 0xdde3e33eU, 0x5e2f2f71U, 0x13848497U,
- 0xa65353f5U, 0xb9d1d168U, 0x00000000U, 0xc1eded2cU,
- 0x40202060U, 0xe3fcfc1fU, 0x79b1b1c8U, 0xb65b5bedU,
- 0xd46a6abeU, 0x8dcbcb46U, 0x67bebed9U, 0x7239394bU,
- 0x944a4adeU, 0x984c4cd4U, 0xb05858e8U, 0x85cfcf4aU,
- 0xbbd0d06bU, 0xc5efef2aU, 0x4faaaae5U, 0xedfbfb16U,
- 0x864343c5U, 0x9a4d4dd7U, 0x66333355U, 0x11858594U,
- 0x8a4545cfU, 0xe9f9f910U, 0x04020206U, 0xfe7f7f81U,
- 0xa05050f0U, 0x783c3c44U, 0x259f9fbaU, 0x4ba8a8e3U,
- 0xa25151f3U, 0x5da3a3feU, 0x804040c0U, 0x058f8f8aU,
- 0x3f9292adU, 0x219d9dbcU, 0x70383848U, 0xf1f5f504U,
- 0x63bcbcdfU, 0x77b6b6c1U, 0xafdada75U, 0x42212163U,
- 0x20101030U, 0xe5ffff1aU, 0xfdf3f30eU, 0xbfd2d26dU,
- 0x81cdcd4cU, 0x180c0c14U, 0x26131335U, 0xc3ecec2fU,
- 0xbe5f5fe1U, 0x359797a2U, 0x884444ccU, 0x2e171739U,
- 0x93c4c457U, 0x55a7a7f2U, 0xfc7e7e82U, 0x7a3d3d47U,
- 0xc86464acU, 0xba5d5de7U, 0x3219192bU, 0xe6737395U,
- 0xc06060a0U, 0x19818198U, 0x9e4f4fd1U, 0xa3dcdc7fU,
- 0x44222266U, 0x542a2a7eU, 0x3b9090abU, 0x0b888883U,
- 0x8c4646caU, 0xc7eeee29U, 0x6bb8b8d3U, 0x2814143cU,
- 0xa7dede79U, 0xbc5e5ee2U, 0x160b0b1dU, 0xaddbdb76U,
- 0xdbe0e03bU, 0x64323256U, 0x743a3a4eU, 0x140a0a1eU,
- 0x924949dbU, 0x0c06060aU, 0x4824246cU, 0xb85c5ce4U,
- 0x9fc2c25dU, 0xbdd3d36eU, 0x43acacefU, 0xc46262a6U,
- 0x399191a8U, 0x319595a4U, 0xd3e4e437U, 0xf279798bU,
- 0xd5e7e732U, 0x8bc8c843U, 0x6e373759U, 0xda6d6db7U,
- 0x018d8d8cU, 0xb1d5d564U, 0x9c4e4ed2U, 0x49a9a9e0U,
- 0xd86c6cb4U, 0xac5656faU, 0xf3f4f407U, 0xcfeaea25U,
- 0xca6565afU, 0xf47a7a8eU, 0x47aeaee9U, 0x10080818U,
- 0x6fbabad5U, 0xf0787888U, 0x4a25256fU, 0x5c2e2e72U,
- 0x381c1c24U, 0x57a6a6f1U, 0x73b4b4c7U, 0x97c6c651U,
- 0xcbe8e823U, 0xa1dddd7cU, 0xe874749cU, 0x3e1f1f21U,
- 0x964b4bddU, 0x61bdbddcU, 0x0d8b8b86U, 0x0f8a8a85U,
- 0xe0707090U, 0x7c3e3e42U, 0x71b5b5c4U, 0xcc6666aaU,
- 0x904848d8U, 0x06030305U, 0xf7f6f601U, 0x1c0e0e12U,
- 0xc26161a3U, 0x6a35355fU, 0xae5757f9U, 0x69b9b9d0U,
- 0x17868691U, 0x99c1c158U, 0x3a1d1d27U, 0x279e9eb9U,
- 0xd9e1e138U, 0xebf8f813U, 0x2b9898b3U, 0x22111133U,
- 0xd26969bbU, 0xa9d9d970U, 0x078e8e89U, 0x339494a7U,
- 0x2d9b9bb6U, 0x3c1e1e22U, 0x15878792U, 0xc9e9e920U,
- 0x87cece49U, 0xaa5555ffU, 0x50282878U, 0xa5dfdf7aU,
- 0x038c8c8fU, 0x59a1a1f8U, 0x09898980U, 0x1a0d0d17U,
- 0x65bfbfdaU, 0xd7e6e631U, 0x844242c6U, 0xd06868b8U,
- 0x824141c3U, 0x299999b0U, 0x5a2d2d77U, 0x1e0f0f11U,
- 0x7bb0b0cbU, 0xa85454fcU, 0x6dbbbbd6U, 0x2c16163aU,
-};
-static const u32 Te1[256] = {
- 0xa5c66363U, 0x84f87c7cU, 0x99ee7777U, 0x8df67b7bU,
- 0x0dfff2f2U, 0xbdd66b6bU, 0xb1de6f6fU, 0x5491c5c5U,
- 0x50603030U, 0x03020101U, 0xa9ce6767U, 0x7d562b2bU,
- 0x19e7fefeU, 0x62b5d7d7U, 0xe64dababU, 0x9aec7676U,
- 0x458fcacaU, 0x9d1f8282U, 0x4089c9c9U, 0x87fa7d7dU,
- 0x15effafaU, 0xebb25959U, 0xc98e4747U, 0x0bfbf0f0U,
- 0xec41adadU, 0x67b3d4d4U, 0xfd5fa2a2U, 0xea45afafU,
- 0xbf239c9cU, 0xf753a4a4U, 0x96e47272U, 0x5b9bc0c0U,
- 0xc275b7b7U, 0x1ce1fdfdU, 0xae3d9393U, 0x6a4c2626U,
- 0x5a6c3636U, 0x417e3f3fU, 0x02f5f7f7U, 0x4f83ccccU,
- 0x5c683434U, 0xf451a5a5U, 0x34d1e5e5U, 0x08f9f1f1U,
- 0x93e27171U, 0x73abd8d8U, 0x53623131U, 0x3f2a1515U,
- 0x0c080404U, 0x5295c7c7U, 0x65462323U, 0x5e9dc3c3U,
- 0x28301818U, 0xa1379696U, 0x0f0a0505U, 0xb52f9a9aU,
- 0x090e0707U, 0x36241212U, 0x9b1b8080U, 0x3ddfe2e2U,
- 0x26cdebebU, 0x694e2727U, 0xcd7fb2b2U, 0x9fea7575U,
- 0x1b120909U, 0x9e1d8383U, 0x74582c2cU, 0x2e341a1aU,
- 0x2d361b1bU, 0xb2dc6e6eU, 0xeeb45a5aU, 0xfb5ba0a0U,
- 0xf6a45252U, 0x4d763b3bU, 0x61b7d6d6U, 0xce7db3b3U,
- 0x7b522929U, 0x3edde3e3U, 0x715e2f2fU, 0x97138484U,
- 0xf5a65353U, 0x68b9d1d1U, 0x00000000U, 0x2cc1ededU,
- 0x60402020U, 0x1fe3fcfcU, 0xc879b1b1U, 0xedb65b5bU,
- 0xbed46a6aU, 0x468dcbcbU, 0xd967bebeU, 0x4b723939U,
- 0xde944a4aU, 0xd4984c4cU, 0xe8b05858U, 0x4a85cfcfU,
- 0x6bbbd0d0U, 0x2ac5efefU, 0xe54faaaaU, 0x16edfbfbU,
- 0xc5864343U, 0xd79a4d4dU, 0x55663333U, 0x94118585U,
- 0xcf8a4545U, 0x10e9f9f9U, 0x06040202U, 0x81fe7f7fU,
- 0xf0a05050U, 0x44783c3cU, 0xba259f9fU, 0xe34ba8a8U,
- 0xf3a25151U, 0xfe5da3a3U, 0xc0804040U, 0x8a058f8fU,
- 0xad3f9292U, 0xbc219d9dU, 0x48703838U, 0x04f1f5f5U,
- 0xdf63bcbcU, 0xc177b6b6U, 0x75afdadaU, 0x63422121U,
- 0x30201010U, 0x1ae5ffffU, 0x0efdf3f3U, 0x6dbfd2d2U,
- 0x4c81cdcdU, 0x14180c0cU, 0x35261313U, 0x2fc3ececU,
- 0xe1be5f5fU, 0xa2359797U, 0xcc884444U, 0x392e1717U,
- 0x5793c4c4U, 0xf255a7a7U, 0x82fc7e7eU, 0x477a3d3dU,
- 0xacc86464U, 0xe7ba5d5dU, 0x2b321919U, 0x95e67373U,
- 0xa0c06060U, 0x98198181U, 0xd19e4f4fU, 0x7fa3dcdcU,
- 0x66442222U, 0x7e542a2aU, 0xab3b9090U, 0x830b8888U,
- 0xca8c4646U, 0x29c7eeeeU, 0xd36bb8b8U, 0x3c281414U,
- 0x79a7dedeU, 0xe2bc5e5eU, 0x1d160b0bU, 0x76addbdbU,
- 0x3bdbe0e0U, 0x56643232U, 0x4e743a3aU, 0x1e140a0aU,
- 0xdb924949U, 0x0a0c0606U, 0x6c482424U, 0xe4b85c5cU,
- 0x5d9fc2c2U, 0x6ebdd3d3U, 0xef43acacU, 0xa6c46262U,
- 0xa8399191U, 0xa4319595U, 0x37d3e4e4U, 0x8bf27979U,
- 0x32d5e7e7U, 0x438bc8c8U, 0x596e3737U, 0xb7da6d6dU,
- 0x8c018d8dU, 0x64b1d5d5U, 0xd29c4e4eU, 0xe049a9a9U,
- 0xb4d86c6cU, 0xfaac5656U, 0x07f3f4f4U, 0x25cfeaeaU,
- 0xafca6565U, 0x8ef47a7aU, 0xe947aeaeU, 0x18100808U,
- 0xd56fbabaU, 0x88f07878U, 0x6f4a2525U, 0x725c2e2eU,
- 0x24381c1cU, 0xf157a6a6U, 0xc773b4b4U, 0x5197c6c6U,
- 0x23cbe8e8U, 0x7ca1ddddU, 0x9ce87474U, 0x213e1f1fU,
- 0xdd964b4bU, 0xdc61bdbdU, 0x860d8b8bU, 0x850f8a8aU,
- 0x90e07070U, 0x427c3e3eU, 0xc471b5b5U, 0xaacc6666U,
- 0xd8904848U, 0x05060303U, 0x01f7f6f6U, 0x121c0e0eU,
- 0xa3c26161U, 0x5f6a3535U, 0xf9ae5757U, 0xd069b9b9U,
- 0x91178686U, 0x5899c1c1U, 0x273a1d1dU, 0xb9279e9eU,
- 0x38d9e1e1U, 0x13ebf8f8U, 0xb32b9898U, 0x33221111U,
- 0xbbd26969U, 0x70a9d9d9U, 0x89078e8eU, 0xa7339494U,
- 0xb62d9b9bU, 0x223c1e1eU, 0x92158787U, 0x20c9e9e9U,
- 0x4987ceceU, 0xffaa5555U, 0x78502828U, 0x7aa5dfdfU,
- 0x8f038c8cU, 0xf859a1a1U, 0x80098989U, 0x171a0d0dU,
- 0xda65bfbfU, 0x31d7e6e6U, 0xc6844242U, 0xb8d06868U,
- 0xc3824141U, 0xb0299999U, 0x775a2d2dU, 0x111e0f0fU,
- 0xcb7bb0b0U, 0xfca85454U, 0xd66dbbbbU, 0x3a2c1616U,
-};
-static const u32 Te2[256] = {
- 0x63a5c663U, 0x7c84f87cU, 0x7799ee77U, 0x7b8df67bU,
- 0xf20dfff2U, 0x6bbdd66bU, 0x6fb1de6fU, 0xc55491c5U,
- 0x30506030U, 0x01030201U, 0x67a9ce67U, 0x2b7d562bU,
- 0xfe19e7feU, 0xd762b5d7U, 0xabe64dabU, 0x769aec76U,
- 0xca458fcaU, 0x829d1f82U, 0xc94089c9U, 0x7d87fa7dU,
- 0xfa15effaU, 0x59ebb259U, 0x47c98e47U, 0xf00bfbf0U,
- 0xadec41adU, 0xd467b3d4U, 0xa2fd5fa2U, 0xafea45afU,
- 0x9cbf239cU, 0xa4f753a4U, 0x7296e472U, 0xc05b9bc0U,
- 0xb7c275b7U, 0xfd1ce1fdU, 0x93ae3d93U, 0x266a4c26U,
- 0x365a6c36U, 0x3f417e3fU, 0xf702f5f7U, 0xcc4f83ccU,
- 0x345c6834U, 0xa5f451a5U, 0xe534d1e5U, 0xf108f9f1U,
- 0x7193e271U, 0xd873abd8U, 0x31536231U, 0x153f2a15U,
- 0x040c0804U, 0xc75295c7U, 0x23654623U, 0xc35e9dc3U,
- 0x18283018U, 0x96a13796U, 0x050f0a05U, 0x9ab52f9aU,
- 0x07090e07U, 0x12362412U, 0x809b1b80U, 0xe23ddfe2U,
- 0xeb26cdebU, 0x27694e27U, 0xb2cd7fb2U, 0x759fea75U,
- 0x091b1209U, 0x839e1d83U, 0x2c74582cU, 0x1a2e341aU,
- 0x1b2d361bU, 0x6eb2dc6eU, 0x5aeeb45aU, 0xa0fb5ba0U,
- 0x52f6a452U, 0x3b4d763bU, 0xd661b7d6U, 0xb3ce7db3U,
- 0x297b5229U, 0xe33edde3U, 0x2f715e2fU, 0x84971384U,
- 0x53f5a653U, 0xd168b9d1U, 0x00000000U, 0xed2cc1edU,
- 0x20604020U, 0xfc1fe3fcU, 0xb1c879b1U, 0x5bedb65bU,
- 0x6abed46aU, 0xcb468dcbU, 0xbed967beU, 0x394b7239U,
- 0x4ade944aU, 0x4cd4984cU, 0x58e8b058U, 0xcf4a85cfU,
- 0xd06bbbd0U, 0xef2ac5efU, 0xaae54faaU, 0xfb16edfbU,
- 0x43c58643U, 0x4dd79a4dU, 0x33556633U, 0x85941185U,
- 0x45cf8a45U, 0xf910e9f9U, 0x02060402U, 0x7f81fe7fU,
- 0x50f0a050U, 0x3c44783cU, 0x9fba259fU, 0xa8e34ba8U,
- 0x51f3a251U, 0xa3fe5da3U, 0x40c08040U, 0x8f8a058fU,
- 0x92ad3f92U, 0x9dbc219dU, 0x38487038U, 0xf504f1f5U,
- 0xbcdf63bcU, 0xb6c177b6U, 0xda75afdaU, 0x21634221U,
- 0x10302010U, 0xff1ae5ffU, 0xf30efdf3U, 0xd26dbfd2U,
- 0xcd4c81cdU, 0x0c14180cU, 0x13352613U, 0xec2fc3ecU,
- 0x5fe1be5fU, 0x97a23597U, 0x44cc8844U, 0x17392e17U,
- 0xc45793c4U, 0xa7f255a7U, 0x7e82fc7eU, 0x3d477a3dU,
- 0x64acc864U, 0x5de7ba5dU, 0x192b3219U, 0x7395e673U,
- 0x60a0c060U, 0x81981981U, 0x4fd19e4fU, 0xdc7fa3dcU,
- 0x22664422U, 0x2a7e542aU, 0x90ab3b90U, 0x88830b88U,
- 0x46ca8c46U, 0xee29c7eeU, 0xb8d36bb8U, 0x143c2814U,
- 0xde79a7deU, 0x5ee2bc5eU, 0x0b1d160bU, 0xdb76addbU,
- 0xe03bdbe0U, 0x32566432U, 0x3a4e743aU, 0x0a1e140aU,
- 0x49db9249U, 0x060a0c06U, 0x246c4824U, 0x5ce4b85cU,
- 0xc25d9fc2U, 0xd36ebdd3U, 0xacef43acU, 0x62a6c462U,
- 0x91a83991U, 0x95a43195U, 0xe437d3e4U, 0x798bf279U,
- 0xe732d5e7U, 0xc8438bc8U, 0x37596e37U, 0x6db7da6dU,
- 0x8d8c018dU, 0xd564b1d5U, 0x4ed29c4eU, 0xa9e049a9U,
- 0x6cb4d86cU, 0x56faac56U, 0xf407f3f4U, 0xea25cfeaU,
- 0x65afca65U, 0x7a8ef47aU, 0xaee947aeU, 0x08181008U,
- 0xbad56fbaU, 0x7888f078U, 0x256f4a25U, 0x2e725c2eU,
- 0x1c24381cU, 0xa6f157a6U, 0xb4c773b4U, 0xc65197c6U,
- 0xe823cbe8U, 0xdd7ca1ddU, 0x749ce874U, 0x1f213e1fU,
- 0x4bdd964bU, 0xbddc61bdU, 0x8b860d8bU, 0x8a850f8aU,
- 0x7090e070U, 0x3e427c3eU, 0xb5c471b5U, 0x66aacc66U,
- 0x48d89048U, 0x03050603U, 0xf601f7f6U, 0x0e121c0eU,
- 0x61a3c261U, 0x355f6a35U, 0x57f9ae57U, 0xb9d069b9U,
- 0x86911786U, 0xc15899c1U, 0x1d273a1dU, 0x9eb9279eU,
- 0xe138d9e1U, 0xf813ebf8U, 0x98b32b98U, 0x11332211U,
- 0x69bbd269U, 0xd970a9d9U, 0x8e89078eU, 0x94a73394U,
- 0x9bb62d9bU, 0x1e223c1eU, 0x87921587U, 0xe920c9e9U,
- 0xce4987ceU, 0x55ffaa55U, 0x28785028U, 0xdf7aa5dfU,
- 0x8c8f038cU, 0xa1f859a1U, 0x89800989U, 0x0d171a0dU,
- 0xbfda65bfU, 0xe631d7e6U, 0x42c68442U, 0x68b8d068U,
- 0x41c38241U, 0x99b02999U, 0x2d775a2dU, 0x0f111e0fU,
- 0xb0cb7bb0U, 0x54fca854U, 0xbbd66dbbU, 0x163a2c16U,
-};
-static const u32 Te3[256] = {
-
- 0x6363a5c6U, 0x7c7c84f8U, 0x777799eeU, 0x7b7b8df6U,
- 0xf2f20dffU, 0x6b6bbdd6U, 0x6f6fb1deU, 0xc5c55491U,
- 0x30305060U, 0x01010302U, 0x6767a9ceU, 0x2b2b7d56U,
- 0xfefe19e7U, 0xd7d762b5U, 0xababe64dU, 0x76769aecU,
- 0xcaca458fU, 0x82829d1fU, 0xc9c94089U, 0x7d7d87faU,
- 0xfafa15efU, 0x5959ebb2U, 0x4747c98eU, 0xf0f00bfbU,
- 0xadadec41U, 0xd4d467b3U, 0xa2a2fd5fU, 0xafafea45U,
- 0x9c9cbf23U, 0xa4a4f753U, 0x727296e4U, 0xc0c05b9bU,
- 0xb7b7c275U, 0xfdfd1ce1U, 0x9393ae3dU, 0x26266a4cU,
- 0x36365a6cU, 0x3f3f417eU, 0xf7f702f5U, 0xcccc4f83U,
- 0x34345c68U, 0xa5a5f451U, 0xe5e534d1U, 0xf1f108f9U,
- 0x717193e2U, 0xd8d873abU, 0x31315362U, 0x15153f2aU,
- 0x04040c08U, 0xc7c75295U, 0x23236546U, 0xc3c35e9dU,
- 0x18182830U, 0x9696a137U, 0x05050f0aU, 0x9a9ab52fU,
- 0x0707090eU, 0x12123624U, 0x80809b1bU, 0xe2e23ddfU,
- 0xebeb26cdU, 0x2727694eU, 0xb2b2cd7fU, 0x75759feaU,
- 0x09091b12U, 0x83839e1dU, 0x2c2c7458U, 0x1a1a2e34U,
- 0x1b1b2d36U, 0x6e6eb2dcU, 0x5a5aeeb4U, 0xa0a0fb5bU,
- 0x5252f6a4U, 0x3b3b4d76U, 0xd6d661b7U, 0xb3b3ce7dU,
- 0x29297b52U, 0xe3e33eddU, 0x2f2f715eU, 0x84849713U,
- 0x5353f5a6U, 0xd1d168b9U, 0x00000000U, 0xeded2cc1U,
- 0x20206040U, 0xfcfc1fe3U, 0xb1b1c879U, 0x5b5bedb6U,
- 0x6a6abed4U, 0xcbcb468dU, 0xbebed967U, 0x39394b72U,
- 0x4a4ade94U, 0x4c4cd498U, 0x5858e8b0U, 0xcfcf4a85U,
- 0xd0d06bbbU, 0xefef2ac5U, 0xaaaae54fU, 0xfbfb16edU,
- 0x4343c586U, 0x4d4dd79aU, 0x33335566U, 0x85859411U,
- 0x4545cf8aU, 0xf9f910e9U, 0x02020604U, 0x7f7f81feU,
- 0x5050f0a0U, 0x3c3c4478U, 0x9f9fba25U, 0xa8a8e34bU,
- 0x5151f3a2U, 0xa3a3fe5dU, 0x4040c080U, 0x8f8f8a05U,
- 0x9292ad3fU, 0x9d9dbc21U, 0x38384870U, 0xf5f504f1U,
- 0xbcbcdf63U, 0xb6b6c177U, 0xdada75afU, 0x21216342U,
- 0x10103020U, 0xffff1ae5U, 0xf3f30efdU, 0xd2d26dbfU,
- 0xcdcd4c81U, 0x0c0c1418U, 0x13133526U, 0xecec2fc3U,
- 0x5f5fe1beU, 0x9797a235U, 0x4444cc88U, 0x1717392eU,
- 0xc4c45793U, 0xa7a7f255U, 0x7e7e82fcU, 0x3d3d477aU,
- 0x6464acc8U, 0x5d5de7baU, 0x19192b32U, 0x737395e6U,
- 0x6060a0c0U, 0x81819819U, 0x4f4fd19eU, 0xdcdc7fa3U,
- 0x22226644U, 0x2a2a7e54U, 0x9090ab3bU, 0x8888830bU,
- 0x4646ca8cU, 0xeeee29c7U, 0xb8b8d36bU, 0x14143c28U,
- 0xdede79a7U, 0x5e5ee2bcU, 0x0b0b1d16U, 0xdbdb76adU,
- 0xe0e03bdbU, 0x32325664U, 0x3a3a4e74U, 0x0a0a1e14U,
- 0x4949db92U, 0x06060a0cU, 0x24246c48U, 0x5c5ce4b8U,
- 0xc2c25d9fU, 0xd3d36ebdU, 0xacacef43U, 0x6262a6c4U,
- 0x9191a839U, 0x9595a431U, 0xe4e437d3U, 0x79798bf2U,
- 0xe7e732d5U, 0xc8c8438bU, 0x3737596eU, 0x6d6db7daU,
- 0x8d8d8c01U, 0xd5d564b1U, 0x4e4ed29cU, 0xa9a9e049U,
- 0x6c6cb4d8U, 0x5656faacU, 0xf4f407f3U, 0xeaea25cfU,
- 0x6565afcaU, 0x7a7a8ef4U, 0xaeaee947U, 0x08081810U,
- 0xbabad56fU, 0x787888f0U, 0x25256f4aU, 0x2e2e725cU,
- 0x1c1c2438U, 0xa6a6f157U, 0xb4b4c773U, 0xc6c65197U,
- 0xe8e823cbU, 0xdddd7ca1U, 0x74749ce8U, 0x1f1f213eU,
- 0x4b4bdd96U, 0xbdbddc61U, 0x8b8b860dU, 0x8a8a850fU,
- 0x707090e0U, 0x3e3e427cU, 0xb5b5c471U, 0x6666aaccU,
- 0x4848d890U, 0x03030506U, 0xf6f601f7U, 0x0e0e121cU,
- 0x6161a3c2U, 0x35355f6aU, 0x5757f9aeU, 0xb9b9d069U,
- 0x86869117U, 0xc1c15899U, 0x1d1d273aU, 0x9e9eb927U,
- 0xe1e138d9U, 0xf8f813ebU, 0x9898b32bU, 0x11113322U,
- 0x6969bbd2U, 0xd9d970a9U, 0x8e8e8907U, 0x9494a733U,
- 0x9b9bb62dU, 0x1e1e223cU, 0x87879215U, 0xe9e920c9U,
- 0xcece4987U, 0x5555ffaaU, 0x28287850U, 0xdfdf7aa5U,
- 0x8c8c8f03U, 0xa1a1f859U, 0x89898009U, 0x0d0d171aU,
- 0xbfbfda65U, 0xe6e631d7U, 0x4242c684U, 0x6868b8d0U,
- 0x4141c382U, 0x9999b029U, 0x2d2d775aU, 0x0f0f111eU,
- 0xb0b0cb7bU, 0x5454fca8U, 0xbbbbd66dU, 0x16163a2cU,
-};
-static const u32 Te4[256] = {
- 0x63636363U, 0x7c7c7c7cU, 0x77777777U, 0x7b7b7b7bU,
- 0xf2f2f2f2U, 0x6b6b6b6bU, 0x6f6f6f6fU, 0xc5c5c5c5U,
- 0x30303030U, 0x01010101U, 0x67676767U, 0x2b2b2b2bU,
- 0xfefefefeU, 0xd7d7d7d7U, 0xababababU, 0x76767676U,
- 0xcacacacaU, 0x82828282U, 0xc9c9c9c9U, 0x7d7d7d7dU,
- 0xfafafafaU, 0x59595959U, 0x47474747U, 0xf0f0f0f0U,
- 0xadadadadU, 0xd4d4d4d4U, 0xa2a2a2a2U, 0xafafafafU,
- 0x9c9c9c9cU, 0xa4a4a4a4U, 0x72727272U, 0xc0c0c0c0U,
- 0xb7b7b7b7U, 0xfdfdfdfdU, 0x93939393U, 0x26262626U,
- 0x36363636U, 0x3f3f3f3fU, 0xf7f7f7f7U, 0xccccccccU,
- 0x34343434U, 0xa5a5a5a5U, 0xe5e5e5e5U, 0xf1f1f1f1U,
- 0x71717171U, 0xd8d8d8d8U, 0x31313131U, 0x15151515U,
- 0x04040404U, 0xc7c7c7c7U, 0x23232323U, 0xc3c3c3c3U,
- 0x18181818U, 0x96969696U, 0x05050505U, 0x9a9a9a9aU,
- 0x07070707U, 0x12121212U, 0x80808080U, 0xe2e2e2e2U,
- 0xebebebebU, 0x27272727U, 0xb2b2b2b2U, 0x75757575U,
- 0x09090909U, 0x83838383U, 0x2c2c2c2cU, 0x1a1a1a1aU,
- 0x1b1b1b1bU, 0x6e6e6e6eU, 0x5a5a5a5aU, 0xa0a0a0a0U,
- 0x52525252U, 0x3b3b3b3bU, 0xd6d6d6d6U, 0xb3b3b3b3U,
- 0x29292929U, 0xe3e3e3e3U, 0x2f2f2f2fU, 0x84848484U,
- 0x53535353U, 0xd1d1d1d1U, 0x00000000U, 0xededededU,
- 0x20202020U, 0xfcfcfcfcU, 0xb1b1b1b1U, 0x5b5b5b5bU,
- 0x6a6a6a6aU, 0xcbcbcbcbU, 0xbebebebeU, 0x39393939U,
- 0x4a4a4a4aU, 0x4c4c4c4cU, 0x58585858U, 0xcfcfcfcfU,
- 0xd0d0d0d0U, 0xefefefefU, 0xaaaaaaaaU, 0xfbfbfbfbU,
- 0x43434343U, 0x4d4d4d4dU, 0x33333333U, 0x85858585U,
- 0x45454545U, 0xf9f9f9f9U, 0x02020202U, 0x7f7f7f7fU,
- 0x50505050U, 0x3c3c3c3cU, 0x9f9f9f9fU, 0xa8a8a8a8U,
- 0x51515151U, 0xa3a3a3a3U, 0x40404040U, 0x8f8f8f8fU,
- 0x92929292U, 0x9d9d9d9dU, 0x38383838U, 0xf5f5f5f5U,
- 0xbcbcbcbcU, 0xb6b6b6b6U, 0xdadadadaU, 0x21212121U,
- 0x10101010U, 0xffffffffU, 0xf3f3f3f3U, 0xd2d2d2d2U,
- 0xcdcdcdcdU, 0x0c0c0c0cU, 0x13131313U, 0xececececU,
- 0x5f5f5f5fU, 0x97979797U, 0x44444444U, 0x17171717U,
- 0xc4c4c4c4U, 0xa7a7a7a7U, 0x7e7e7e7eU, 0x3d3d3d3dU,
- 0x64646464U, 0x5d5d5d5dU, 0x19191919U, 0x73737373U,
- 0x60606060U, 0x81818181U, 0x4f4f4f4fU, 0xdcdcdcdcU,
- 0x22222222U, 0x2a2a2a2aU, 0x90909090U, 0x88888888U,
- 0x46464646U, 0xeeeeeeeeU, 0xb8b8b8b8U, 0x14141414U,
- 0xdedededeU, 0x5e5e5e5eU, 0x0b0b0b0bU, 0xdbdbdbdbU,
- 0xe0e0e0e0U, 0x32323232U, 0x3a3a3a3aU, 0x0a0a0a0aU,
- 0x49494949U, 0x06060606U, 0x24242424U, 0x5c5c5c5cU,
- 0xc2c2c2c2U, 0xd3d3d3d3U, 0xacacacacU, 0x62626262U,
- 0x91919191U, 0x95959595U, 0xe4e4e4e4U, 0x79797979U,
- 0xe7e7e7e7U, 0xc8c8c8c8U, 0x37373737U, 0x6d6d6d6dU,
- 0x8d8d8d8dU, 0xd5d5d5d5U, 0x4e4e4e4eU, 0xa9a9a9a9U,
- 0x6c6c6c6cU, 0x56565656U, 0xf4f4f4f4U, 0xeaeaeaeaU,
- 0x65656565U, 0x7a7a7a7aU, 0xaeaeaeaeU, 0x08080808U,
- 0xbabababaU, 0x78787878U, 0x25252525U, 0x2e2e2e2eU,
- 0x1c1c1c1cU, 0xa6a6a6a6U, 0xb4b4b4b4U, 0xc6c6c6c6U,
- 0xe8e8e8e8U, 0xddddddddU, 0x74747474U, 0x1f1f1f1fU,
- 0x4b4b4b4bU, 0xbdbdbdbdU, 0x8b8b8b8bU, 0x8a8a8a8aU,
- 0x70707070U, 0x3e3e3e3eU, 0xb5b5b5b5U, 0x66666666U,
- 0x48484848U, 0x03030303U, 0xf6f6f6f6U, 0x0e0e0e0eU,
- 0x61616161U, 0x35353535U, 0x57575757U, 0xb9b9b9b9U,
- 0x86868686U, 0xc1c1c1c1U, 0x1d1d1d1dU, 0x9e9e9e9eU,
- 0xe1e1e1e1U, 0xf8f8f8f8U, 0x98989898U, 0x11111111U,
- 0x69696969U, 0xd9d9d9d9U, 0x8e8e8e8eU, 0x94949494U,
- 0x9b9b9b9bU, 0x1e1e1e1eU, 0x87878787U, 0xe9e9e9e9U,
- 0xcecececeU, 0x55555555U, 0x28282828U, 0xdfdfdfdfU,
- 0x8c8c8c8cU, 0xa1a1a1a1U, 0x89898989U, 0x0d0d0d0dU,
- 0xbfbfbfbfU, 0xe6e6e6e6U, 0x42424242U, 0x68686868U,
- 0x41414141U, 0x99999999U, 0x2d2d2d2dU, 0x0f0f0f0fU,
- 0xb0b0b0b0U, 0x54545454U, 0xbbbbbbbbU, 0x16161616U,
-};
-static const u32 Td0[256] = {
- 0x51f4a750U, 0x7e416553U, 0x1a17a4c3U, 0x3a275e96U,
- 0x3bab6bcbU, 0x1f9d45f1U, 0xacfa58abU, 0x4be30393U,
- 0x2030fa55U, 0xad766df6U, 0x88cc7691U, 0xf5024c25U,
- 0x4fe5d7fcU, 0xc52acbd7U, 0x26354480U, 0xb562a38fU,
- 0xdeb15a49U, 0x25ba1b67U, 0x45ea0e98U, 0x5dfec0e1U,
- 0xc32f7502U, 0x814cf012U, 0x8d4697a3U, 0x6bd3f9c6U,
- 0x038f5fe7U, 0x15929c95U, 0xbf6d7aebU, 0x955259daU,
- 0xd4be832dU, 0x587421d3U, 0x49e06929U, 0x8ec9c844U,
- 0x75c2896aU, 0xf48e7978U, 0x99583e6bU, 0x27b971ddU,
- 0xbee14fb6U, 0xf088ad17U, 0xc920ac66U, 0x7dce3ab4U,
- 0x63df4a18U, 0xe51a3182U, 0x97513360U, 0x62537f45U,
- 0xb16477e0U, 0xbb6bae84U, 0xfe81a01cU, 0xf9082b94U,
- 0x70486858U, 0x8f45fd19U, 0x94de6c87U, 0x527bf8b7U,
- 0xab73d323U, 0x724b02e2U, 0xe31f8f57U, 0x6655ab2aU,
- 0xb2eb2807U, 0x2fb5c203U, 0x86c57b9aU, 0xd33708a5U,
- 0x302887f2U, 0x23bfa5b2U, 0x02036abaU, 0xed16825cU,
- 0x8acf1c2bU, 0xa779b492U, 0xf307f2f0U, 0x4e69e2a1U,
- 0x65daf4cdU, 0x0605bed5U, 0xd134621fU, 0xc4a6fe8aU,
- 0x342e539dU, 0xa2f355a0U, 0x058ae132U, 0xa4f6eb75U,
- 0x0b83ec39U, 0x4060efaaU, 0x5e719f06U, 0xbd6e1051U,
- 0x3e218af9U, 0x96dd063dU, 0xdd3e05aeU, 0x4de6bd46U,
- 0x91548db5U, 0x71c45d05U, 0x0406d46fU, 0x605015ffU,
- 0x1998fb24U, 0xd6bde997U, 0x894043ccU, 0x67d99e77U,
- 0xb0e842bdU, 0x07898b88U, 0xe7195b38U, 0x79c8eedbU,
- 0xa17c0a47U, 0x7c420fe9U, 0xf8841ec9U, 0x00000000U,
- 0x09808683U, 0x322bed48U, 0x1e1170acU, 0x6c5a724eU,
- 0xfd0efffbU, 0x0f853856U, 0x3daed51eU, 0x362d3927U,
- 0x0a0fd964U, 0x685ca621U, 0x9b5b54d1U, 0x24362e3aU,
- 0x0c0a67b1U, 0x9357e70fU, 0xb4ee96d2U, 0x1b9b919eU,
- 0x80c0c54fU, 0x61dc20a2U, 0x5a774b69U, 0x1c121a16U,
- 0xe293ba0aU, 0xc0a02ae5U, 0x3c22e043U, 0x121b171dU,
- 0x0e090d0bU, 0xf28bc7adU, 0x2db6a8b9U, 0x141ea9c8U,
- 0x57f11985U, 0xaf75074cU, 0xee99ddbbU, 0xa37f60fdU,
- 0xf701269fU, 0x5c72f5bcU, 0x44663bc5U, 0x5bfb7e34U,
- 0x8b432976U, 0xcb23c6dcU, 0xb6edfc68U, 0xb8e4f163U,
- 0xd731dccaU, 0x42638510U, 0x13972240U, 0x84c61120U,
- 0x854a247dU, 0xd2bb3df8U, 0xaef93211U, 0xc729a16dU,
- 0x1d9e2f4bU, 0xdcb230f3U, 0x0d8652ecU, 0x77c1e3d0U,
- 0x2bb3166cU, 0xa970b999U, 0x119448faU, 0x47e96422U,
- 0xa8fc8cc4U, 0xa0f03f1aU, 0x567d2cd8U, 0x223390efU,
- 0x87494ec7U, 0xd938d1c1U, 0x8ccaa2feU, 0x98d40b36U,
- 0xa6f581cfU, 0xa57ade28U, 0xdab78e26U, 0x3fadbfa4U,
- 0x2c3a9de4U, 0x5078920dU, 0x6a5fcc9bU, 0x547e4662U,
- 0xf68d13c2U, 0x90d8b8e8U, 0x2e39f75eU, 0x82c3aff5U,
- 0x9f5d80beU, 0x69d0937cU, 0x6fd52da9U, 0xcf2512b3U,
- 0xc8ac993bU, 0x10187da7U, 0xe89c636eU, 0xdb3bbb7bU,
- 0xcd267809U, 0x6e5918f4U, 0xec9ab701U, 0x834f9aa8U,
- 0xe6956e65U, 0xaaffe67eU, 0x21bccf08U, 0xef15e8e6U,
- 0xbae79bd9U, 0x4a6f36ceU, 0xea9f09d4U, 0x29b07cd6U,
- 0x31a4b2afU, 0x2a3f2331U, 0xc6a59430U, 0x35a266c0U,
- 0x744ebc37U, 0xfc82caa6U, 0xe090d0b0U, 0x33a7d815U,
- 0xf104984aU, 0x41ecdaf7U, 0x7fcd500eU, 0x1791f62fU,
- 0x764dd68dU, 0x43efb04dU, 0xccaa4d54U, 0xe49604dfU,
- 0x9ed1b5e3U, 0x4c6a881bU, 0xc12c1fb8U, 0x4665517fU,
- 0x9d5eea04U, 0x018c355dU, 0xfa877473U, 0xfb0b412eU,
- 0xb3671d5aU, 0x92dbd252U, 0xe9105633U, 0x6dd64713U,
- 0x9ad7618cU, 0x37a10c7aU, 0x59f8148eU, 0xeb133c89U,
- 0xcea927eeU, 0xb761c935U, 0xe11ce5edU, 0x7a47b13cU,
- 0x9cd2df59U, 0x55f2733fU, 0x1814ce79U, 0x73c737bfU,
- 0x53f7cdeaU, 0x5ffdaa5bU, 0xdf3d6f14U, 0x7844db86U,
- 0xcaaff381U, 0xb968c43eU, 0x3824342cU, 0xc2a3405fU,
- 0x161dc372U, 0xbce2250cU, 0x283c498bU, 0xff0d9541U,
- 0x39a80171U, 0x080cb3deU, 0xd8b4e49cU, 0x6456c190U,
- 0x7bcb8461U, 0xd532b670U, 0x486c5c74U, 0xd0b85742U,
-};
-static const u32 Td1[256] = {
- 0x5051f4a7U, 0x537e4165U, 0xc31a17a4U, 0x963a275eU,
- 0xcb3bab6bU, 0xf11f9d45U, 0xabacfa58U, 0x934be303U,
- 0x552030faU, 0xf6ad766dU, 0x9188cc76U, 0x25f5024cU,
- 0xfc4fe5d7U, 0xd7c52acbU, 0x80263544U, 0x8fb562a3U,
- 0x49deb15aU, 0x6725ba1bU, 0x9845ea0eU, 0xe15dfec0U,
- 0x02c32f75U, 0x12814cf0U, 0xa38d4697U, 0xc66bd3f9U,
- 0xe7038f5fU, 0x9515929cU, 0xebbf6d7aU, 0xda955259U,
- 0x2dd4be83U, 0xd3587421U, 0x2949e069U, 0x448ec9c8U,
- 0x6a75c289U, 0x78f48e79U, 0x6b99583eU, 0xdd27b971U,
- 0xb6bee14fU, 0x17f088adU, 0x66c920acU, 0xb47dce3aU,
- 0x1863df4aU, 0x82e51a31U, 0x60975133U, 0x4562537fU,
- 0xe0b16477U, 0x84bb6baeU, 0x1cfe81a0U, 0x94f9082bU,
- 0x58704868U, 0x198f45fdU, 0x8794de6cU, 0xb7527bf8U,
- 0x23ab73d3U, 0xe2724b02U, 0x57e31f8fU, 0x2a6655abU,
- 0x07b2eb28U, 0x032fb5c2U, 0x9a86c57bU, 0xa5d33708U,
- 0xf2302887U, 0xb223bfa5U, 0xba02036aU, 0x5ced1682U,
- 0x2b8acf1cU, 0x92a779b4U, 0xf0f307f2U, 0xa14e69e2U,
- 0xcd65daf4U, 0xd50605beU, 0x1fd13462U, 0x8ac4a6feU,
- 0x9d342e53U, 0xa0a2f355U, 0x32058ae1U, 0x75a4f6ebU,
- 0x390b83ecU, 0xaa4060efU, 0x065e719fU, 0x51bd6e10U,
- 0xf93e218aU, 0x3d96dd06U, 0xaedd3e05U, 0x464de6bdU,
- 0xb591548dU, 0x0571c45dU, 0x6f0406d4U, 0xff605015U,
- 0x241998fbU, 0x97d6bde9U, 0xcc894043U, 0x7767d99eU,
- 0xbdb0e842U, 0x8807898bU, 0x38e7195bU, 0xdb79c8eeU,
- 0x47a17c0aU, 0xe97c420fU, 0xc9f8841eU, 0x00000000U,
- 0x83098086U, 0x48322bedU, 0xac1e1170U, 0x4e6c5a72U,
- 0xfbfd0effU, 0x560f8538U, 0x1e3daed5U, 0x27362d39U,
- 0x640a0fd9U, 0x21685ca6U, 0xd19b5b54U, 0x3a24362eU,
- 0xb10c0a67U, 0x0f9357e7U, 0xd2b4ee96U, 0x9e1b9b91U,
- 0x4f80c0c5U, 0xa261dc20U, 0x695a774bU, 0x161c121aU,
- 0x0ae293baU, 0xe5c0a02aU, 0x433c22e0U, 0x1d121b17U,
- 0x0b0e090dU, 0xadf28bc7U, 0xb92db6a8U, 0xc8141ea9U,
- 0x8557f119U, 0x4caf7507U, 0xbbee99ddU, 0xfda37f60U,
- 0x9ff70126U, 0xbc5c72f5U, 0xc544663bU, 0x345bfb7eU,
- 0x768b4329U, 0xdccb23c6U, 0x68b6edfcU, 0x63b8e4f1U,
- 0xcad731dcU, 0x10426385U, 0x40139722U, 0x2084c611U,
- 0x7d854a24U, 0xf8d2bb3dU, 0x11aef932U, 0x6dc729a1U,
- 0x4b1d9e2fU, 0xf3dcb230U, 0xec0d8652U, 0xd077c1e3U,
- 0x6c2bb316U, 0x99a970b9U, 0xfa119448U, 0x2247e964U,
- 0xc4a8fc8cU, 0x1aa0f03fU, 0xd8567d2cU, 0xef223390U,
- 0xc787494eU, 0xc1d938d1U, 0xfe8ccaa2U, 0x3698d40bU,
- 0xcfa6f581U, 0x28a57adeU, 0x26dab78eU, 0xa43fadbfU,
- 0xe42c3a9dU, 0x0d507892U, 0x9b6a5fccU, 0x62547e46U,
- 0xc2f68d13U, 0xe890d8b8U, 0x5e2e39f7U, 0xf582c3afU,
- 0xbe9f5d80U, 0x7c69d093U, 0xa96fd52dU, 0xb3cf2512U,
- 0x3bc8ac99U, 0xa710187dU, 0x6ee89c63U, 0x7bdb3bbbU,
- 0x09cd2678U, 0xf46e5918U, 0x01ec9ab7U, 0xa8834f9aU,
- 0x65e6956eU, 0x7eaaffe6U, 0x0821bccfU, 0xe6ef15e8U,
- 0xd9bae79bU, 0xce4a6f36U, 0xd4ea9f09U, 0xd629b07cU,
- 0xaf31a4b2U, 0x312a3f23U, 0x30c6a594U, 0xc035a266U,
- 0x37744ebcU, 0xa6fc82caU, 0xb0e090d0U, 0x1533a7d8U,
- 0x4af10498U, 0xf741ecdaU, 0x0e7fcd50U, 0x2f1791f6U,
- 0x8d764dd6U, 0x4d43efb0U, 0x54ccaa4dU, 0xdfe49604U,
- 0xe39ed1b5U, 0x1b4c6a88U, 0xb8c12c1fU, 0x7f466551U,
- 0x049d5eeaU, 0x5d018c35U, 0x73fa8774U, 0x2efb0b41U,
- 0x5ab3671dU, 0x5292dbd2U, 0x33e91056U, 0x136dd647U,
- 0x8c9ad761U, 0x7a37a10cU, 0x8e59f814U, 0x89eb133cU,
- 0xeecea927U, 0x35b761c9U, 0xede11ce5U, 0x3c7a47b1U,
- 0x599cd2dfU, 0x3f55f273U, 0x791814ceU, 0xbf73c737U,
- 0xea53f7cdU, 0x5b5ffdaaU, 0x14df3d6fU, 0x867844dbU,
- 0x81caaff3U, 0x3eb968c4U, 0x2c382434U, 0x5fc2a340U,
- 0x72161dc3U, 0x0cbce225U, 0x8b283c49U, 0x41ff0d95U,
- 0x7139a801U, 0xde080cb3U, 0x9cd8b4e4U, 0x906456c1U,
- 0x617bcb84U, 0x70d532b6U, 0x74486c5cU, 0x42d0b857U,
-};
-static const u32 Td2[256] = {
- 0xa75051f4U, 0x65537e41U, 0xa4c31a17U, 0x5e963a27U,
- 0x6bcb3babU, 0x45f11f9dU, 0x58abacfaU, 0x03934be3U,
- 0xfa552030U, 0x6df6ad76U, 0x769188ccU, 0x4c25f502U,
- 0xd7fc4fe5U, 0xcbd7c52aU, 0x44802635U, 0xa38fb562U,
- 0x5a49deb1U, 0x1b6725baU, 0x0e9845eaU, 0xc0e15dfeU,
- 0x7502c32fU, 0xf012814cU, 0x97a38d46U, 0xf9c66bd3U,
- 0x5fe7038fU, 0x9c951592U, 0x7aebbf6dU, 0x59da9552U,
- 0x832dd4beU, 0x21d35874U, 0x692949e0U, 0xc8448ec9U,
- 0x896a75c2U, 0x7978f48eU, 0x3e6b9958U, 0x71dd27b9U,
- 0x4fb6bee1U, 0xad17f088U, 0xac66c920U, 0x3ab47dceU,
- 0x4a1863dfU, 0x3182e51aU, 0x33609751U, 0x7f456253U,
- 0x77e0b164U, 0xae84bb6bU, 0xa01cfe81U, 0x2b94f908U,
- 0x68587048U, 0xfd198f45U, 0x6c8794deU, 0xf8b7527bU,
- 0xd323ab73U, 0x02e2724bU, 0x8f57e31fU, 0xab2a6655U,
- 0x2807b2ebU, 0xc2032fb5U, 0x7b9a86c5U, 0x08a5d337U,
- 0x87f23028U, 0xa5b223bfU, 0x6aba0203U, 0x825ced16U,
- 0x1c2b8acfU, 0xb492a779U, 0xf2f0f307U, 0xe2a14e69U,
- 0xf4cd65daU, 0xbed50605U, 0x621fd134U, 0xfe8ac4a6U,
- 0x539d342eU, 0x55a0a2f3U, 0xe132058aU, 0xeb75a4f6U,
- 0xec390b83U, 0xefaa4060U, 0x9f065e71U, 0x1051bd6eU,
- 0x8af93e21U, 0x063d96ddU, 0x05aedd3eU, 0xbd464de6U,
- 0x8db59154U, 0x5d0571c4U, 0xd46f0406U, 0x15ff6050U,
- 0xfb241998U, 0xe997d6bdU, 0x43cc8940U, 0x9e7767d9U,
- 0x42bdb0e8U, 0x8b880789U, 0x5b38e719U, 0xeedb79c8U,
- 0x0a47a17cU, 0x0fe97c42U, 0x1ec9f884U, 0x00000000U,
- 0x86830980U, 0xed48322bU, 0x70ac1e11U, 0x724e6c5aU,
- 0xfffbfd0eU, 0x38560f85U, 0xd51e3daeU, 0x3927362dU,
- 0xd9640a0fU, 0xa621685cU, 0x54d19b5bU, 0x2e3a2436U,
- 0x67b10c0aU, 0xe70f9357U, 0x96d2b4eeU, 0x919e1b9bU,
- 0xc54f80c0U, 0x20a261dcU, 0x4b695a77U, 0x1a161c12U,
- 0xba0ae293U, 0x2ae5c0a0U, 0xe0433c22U, 0x171d121bU,
- 0x0d0b0e09U, 0xc7adf28bU, 0xa8b92db6U, 0xa9c8141eU,
- 0x198557f1U, 0x074caf75U, 0xddbbee99U, 0x60fda37fU,
- 0x269ff701U, 0xf5bc5c72U, 0x3bc54466U, 0x7e345bfbU,
- 0x29768b43U, 0xc6dccb23U, 0xfc68b6edU, 0xf163b8e4U,
- 0xdccad731U, 0x85104263U, 0x22401397U, 0x112084c6U,
- 0x247d854aU, 0x3df8d2bbU, 0x3211aef9U, 0xa16dc729U,
- 0x2f4b1d9eU, 0x30f3dcb2U, 0x52ec0d86U, 0xe3d077c1U,
- 0x166c2bb3U, 0xb999a970U, 0x48fa1194U, 0x642247e9U,
- 0x8cc4a8fcU, 0x3f1aa0f0U, 0x2cd8567dU, 0x90ef2233U,
- 0x4ec78749U, 0xd1c1d938U, 0xa2fe8ccaU, 0x0b3698d4U,
- 0x81cfa6f5U, 0xde28a57aU, 0x8e26dab7U, 0xbfa43fadU,
- 0x9de42c3aU, 0x920d5078U, 0xcc9b6a5fU, 0x4662547eU,
- 0x13c2f68dU, 0xb8e890d8U, 0xf75e2e39U, 0xaff582c3U,
- 0x80be9f5dU, 0x937c69d0U, 0x2da96fd5U, 0x12b3cf25U,
- 0x993bc8acU, 0x7da71018U, 0x636ee89cU, 0xbb7bdb3bU,
- 0x7809cd26U, 0x18f46e59U, 0xb701ec9aU, 0x9aa8834fU,
- 0x6e65e695U, 0xe67eaaffU, 0xcf0821bcU, 0xe8e6ef15U,
- 0x9bd9bae7U, 0x36ce4a6fU, 0x09d4ea9fU, 0x7cd629b0U,
- 0xb2af31a4U, 0x23312a3fU, 0x9430c6a5U, 0x66c035a2U,
- 0xbc37744eU, 0xcaa6fc82U, 0xd0b0e090U, 0xd81533a7U,
- 0x984af104U, 0xdaf741ecU, 0x500e7fcdU, 0xf62f1791U,
- 0xd68d764dU, 0xb04d43efU, 0x4d54ccaaU, 0x04dfe496U,
- 0xb5e39ed1U, 0x881b4c6aU, 0x1fb8c12cU, 0x517f4665U,
- 0xea049d5eU, 0x355d018cU, 0x7473fa87U, 0x412efb0bU,
- 0x1d5ab367U, 0xd25292dbU, 0x5633e910U, 0x47136dd6U,
- 0x618c9ad7U, 0x0c7a37a1U, 0x148e59f8U, 0x3c89eb13U,
- 0x27eecea9U, 0xc935b761U, 0xe5ede11cU, 0xb13c7a47U,
- 0xdf599cd2U, 0x733f55f2U, 0xce791814U, 0x37bf73c7U,
- 0xcdea53f7U, 0xaa5b5ffdU, 0x6f14df3dU, 0xdb867844U,
- 0xf381caafU, 0xc43eb968U, 0x342c3824U, 0x405fc2a3U,
- 0xc372161dU, 0x250cbce2U, 0x498b283cU, 0x9541ff0dU,
- 0x017139a8U, 0xb3de080cU, 0xe49cd8b4U, 0xc1906456U,
- 0x84617bcbU, 0xb670d532U, 0x5c74486cU, 0x5742d0b8U,
-};
-static const u32 Td3[256] = {
- 0xf4a75051U, 0x4165537eU, 0x17a4c31aU, 0x275e963aU,
- 0xab6bcb3bU, 0x9d45f11fU, 0xfa58abacU, 0xe303934bU,
- 0x30fa5520U, 0x766df6adU, 0xcc769188U, 0x024c25f5U,
- 0xe5d7fc4fU, 0x2acbd7c5U, 0x35448026U, 0x62a38fb5U,
- 0xb15a49deU, 0xba1b6725U, 0xea0e9845U, 0xfec0e15dU,
- 0x2f7502c3U, 0x4cf01281U, 0x4697a38dU, 0xd3f9c66bU,
- 0x8f5fe703U, 0x929c9515U, 0x6d7aebbfU, 0x5259da95U,
- 0xbe832dd4U, 0x7421d358U, 0xe0692949U, 0xc9c8448eU,
- 0xc2896a75U, 0x8e7978f4U, 0x583e6b99U, 0xb971dd27U,
- 0xe14fb6beU, 0x88ad17f0U, 0x20ac66c9U, 0xce3ab47dU,
- 0xdf4a1863U, 0x1a3182e5U, 0x51336097U, 0x537f4562U,
- 0x6477e0b1U, 0x6bae84bbU, 0x81a01cfeU, 0x082b94f9U,
- 0x48685870U, 0x45fd198fU, 0xde6c8794U, 0x7bf8b752U,
- 0x73d323abU, 0x4b02e272U, 0x1f8f57e3U, 0x55ab2a66U,
- 0xeb2807b2U, 0xb5c2032fU, 0xc57b9a86U, 0x3708a5d3U,
- 0x2887f230U, 0xbfa5b223U, 0x036aba02U, 0x16825cedU,
- 0xcf1c2b8aU, 0x79b492a7U, 0x07f2f0f3U, 0x69e2a14eU,
- 0xdaf4cd65U, 0x05bed506U, 0x34621fd1U, 0xa6fe8ac4U,
- 0x2e539d34U, 0xf355a0a2U, 0x8ae13205U, 0xf6eb75a4U,
- 0x83ec390bU, 0x60efaa40U, 0x719f065eU, 0x6e1051bdU,
- 0x218af93eU, 0xdd063d96U, 0x3e05aeddU, 0xe6bd464dU,
- 0x548db591U, 0xc45d0571U, 0x06d46f04U, 0x5015ff60U,
- 0x98fb2419U, 0xbde997d6U, 0x4043cc89U, 0xd99e7767U,
- 0xe842bdb0U, 0x898b8807U, 0x195b38e7U, 0xc8eedb79U,
- 0x7c0a47a1U, 0x420fe97cU, 0x841ec9f8U, 0x00000000U,
- 0x80868309U, 0x2bed4832U, 0x1170ac1eU, 0x5a724e6cU,
- 0x0efffbfdU, 0x8538560fU, 0xaed51e3dU, 0x2d392736U,
- 0x0fd9640aU, 0x5ca62168U, 0x5b54d19bU, 0x362e3a24U,
- 0x0a67b10cU, 0x57e70f93U, 0xee96d2b4U, 0x9b919e1bU,
- 0xc0c54f80U, 0xdc20a261U, 0x774b695aU, 0x121a161cU,
- 0x93ba0ae2U, 0xa02ae5c0U, 0x22e0433cU, 0x1b171d12U,
- 0x090d0b0eU, 0x8bc7adf2U, 0xb6a8b92dU, 0x1ea9c814U,
- 0xf1198557U, 0x75074cafU, 0x99ddbbeeU, 0x7f60fda3U,
- 0x01269ff7U, 0x72f5bc5cU, 0x663bc544U, 0xfb7e345bU,
- 0x4329768bU, 0x23c6dccbU, 0xedfc68b6U, 0xe4f163b8U,
- 0x31dccad7U, 0x63851042U, 0x97224013U, 0xc6112084U,
- 0x4a247d85U, 0xbb3df8d2U, 0xf93211aeU, 0x29a16dc7U,
- 0x9e2f4b1dU, 0xb230f3dcU, 0x8652ec0dU, 0xc1e3d077U,
- 0xb3166c2bU, 0x70b999a9U, 0x9448fa11U, 0xe9642247U,
- 0xfc8cc4a8U, 0xf03f1aa0U, 0x7d2cd856U, 0x3390ef22U,
- 0x494ec787U, 0x38d1c1d9U, 0xcaa2fe8cU, 0xd40b3698U,
- 0xf581cfa6U, 0x7ade28a5U, 0xb78e26daU, 0xadbfa43fU,
- 0x3a9de42cU, 0x78920d50U, 0x5fcc9b6aU, 0x7e466254U,
- 0x8d13c2f6U, 0xd8b8e890U, 0x39f75e2eU, 0xc3aff582U,
- 0x5d80be9fU, 0xd0937c69U, 0xd52da96fU, 0x2512b3cfU,
- 0xac993bc8U, 0x187da710U, 0x9c636ee8U, 0x3bbb7bdbU,
- 0x267809cdU, 0x5918f46eU, 0x9ab701ecU, 0x4f9aa883U,
- 0x956e65e6U, 0xffe67eaaU, 0xbccf0821U, 0x15e8e6efU,
- 0xe79bd9baU, 0x6f36ce4aU, 0x9f09d4eaU, 0xb07cd629U,
- 0xa4b2af31U, 0x3f23312aU, 0xa59430c6U, 0xa266c035U,
- 0x4ebc3774U, 0x82caa6fcU, 0x90d0b0e0U, 0xa7d81533U,
- 0x04984af1U, 0xecdaf741U, 0xcd500e7fU, 0x91f62f17U,
- 0x4dd68d76U, 0xefb04d43U, 0xaa4d54ccU, 0x9604dfe4U,
- 0xd1b5e39eU, 0x6a881b4cU, 0x2c1fb8c1U, 0x65517f46U,
- 0x5eea049dU, 0x8c355d01U, 0x877473faU, 0x0b412efbU,
- 0x671d5ab3U, 0xdbd25292U, 0x105633e9U, 0xd647136dU,
- 0xd7618c9aU, 0xa10c7a37U, 0xf8148e59U, 0x133c89ebU,
- 0xa927eeceU, 0x61c935b7U, 0x1ce5ede1U, 0x47b13c7aU,
- 0xd2df599cU, 0xf2733f55U, 0x14ce7918U, 0xc737bf73U,
- 0xf7cdea53U, 0xfdaa5b5fU, 0x3d6f14dfU, 0x44db8678U,
- 0xaff381caU, 0x68c43eb9U, 0x24342c38U, 0xa3405fc2U,
- 0x1dc37216U, 0xe2250cbcU, 0x3c498b28U, 0x0d9541ffU,
- 0xa8017139U, 0x0cb3de08U, 0xb4e49cd8U, 0x56c19064U,
- 0xcb84617bU, 0x32b670d5U, 0x6c5c7448U, 0xb85742d0U,
-};
-static const u32 Td4[256] = {
- 0x52525252U, 0x09090909U, 0x6a6a6a6aU, 0xd5d5d5d5U,
- 0x30303030U, 0x36363636U, 0xa5a5a5a5U, 0x38383838U,
- 0xbfbfbfbfU, 0x40404040U, 0xa3a3a3a3U, 0x9e9e9e9eU,
- 0x81818181U, 0xf3f3f3f3U, 0xd7d7d7d7U, 0xfbfbfbfbU,
- 0x7c7c7c7cU, 0xe3e3e3e3U, 0x39393939U, 0x82828282U,
- 0x9b9b9b9bU, 0x2f2f2f2fU, 0xffffffffU, 0x87878787U,
- 0x34343434U, 0x8e8e8e8eU, 0x43434343U, 0x44444444U,
- 0xc4c4c4c4U, 0xdedededeU, 0xe9e9e9e9U, 0xcbcbcbcbU,
- 0x54545454U, 0x7b7b7b7bU, 0x94949494U, 0x32323232U,
- 0xa6a6a6a6U, 0xc2c2c2c2U, 0x23232323U, 0x3d3d3d3dU,
- 0xeeeeeeeeU, 0x4c4c4c4cU, 0x95959595U, 0x0b0b0b0bU,
- 0x42424242U, 0xfafafafaU, 0xc3c3c3c3U, 0x4e4e4e4eU,
- 0x08080808U, 0x2e2e2e2eU, 0xa1a1a1a1U, 0x66666666U,
- 0x28282828U, 0xd9d9d9d9U, 0x24242424U, 0xb2b2b2b2U,
- 0x76767676U, 0x5b5b5b5bU, 0xa2a2a2a2U, 0x49494949U,
- 0x6d6d6d6dU, 0x8b8b8b8bU, 0xd1d1d1d1U, 0x25252525U,
- 0x72727272U, 0xf8f8f8f8U, 0xf6f6f6f6U, 0x64646464U,
- 0x86868686U, 0x68686868U, 0x98989898U, 0x16161616U,
- 0xd4d4d4d4U, 0xa4a4a4a4U, 0x5c5c5c5cU, 0xccccccccU,
- 0x5d5d5d5dU, 0x65656565U, 0xb6b6b6b6U, 0x92929292U,
- 0x6c6c6c6cU, 0x70707070U, 0x48484848U, 0x50505050U,
- 0xfdfdfdfdU, 0xededededU, 0xb9b9b9b9U, 0xdadadadaU,
- 0x5e5e5e5eU, 0x15151515U, 0x46464646U, 0x57575757U,
- 0xa7a7a7a7U, 0x8d8d8d8dU, 0x9d9d9d9dU, 0x84848484U,
- 0x90909090U, 0xd8d8d8d8U, 0xababababU, 0x00000000U,
- 0x8c8c8c8cU, 0xbcbcbcbcU, 0xd3d3d3d3U, 0x0a0a0a0aU,
- 0xf7f7f7f7U, 0xe4e4e4e4U, 0x58585858U, 0x05050505U,
- 0xb8b8b8b8U, 0xb3b3b3b3U, 0x45454545U, 0x06060606U,
- 0xd0d0d0d0U, 0x2c2c2c2cU, 0x1e1e1e1eU, 0x8f8f8f8fU,
- 0xcacacacaU, 0x3f3f3f3fU, 0x0f0f0f0fU, 0x02020202U,
- 0xc1c1c1c1U, 0xafafafafU, 0xbdbdbdbdU, 0x03030303U,
- 0x01010101U, 0x13131313U, 0x8a8a8a8aU, 0x6b6b6b6bU,
- 0x3a3a3a3aU, 0x91919191U, 0x11111111U, 0x41414141U,
- 0x4f4f4f4fU, 0x67676767U, 0xdcdcdcdcU, 0xeaeaeaeaU,
- 0x97979797U, 0xf2f2f2f2U, 0xcfcfcfcfU, 0xcecececeU,
- 0xf0f0f0f0U, 0xb4b4b4b4U, 0xe6e6e6e6U, 0x73737373U,
- 0x96969696U, 0xacacacacU, 0x74747474U, 0x22222222U,
- 0xe7e7e7e7U, 0xadadadadU, 0x35353535U, 0x85858585U,
- 0xe2e2e2e2U, 0xf9f9f9f9U, 0x37373737U, 0xe8e8e8e8U,
- 0x1c1c1c1cU, 0x75757575U, 0xdfdfdfdfU, 0x6e6e6e6eU,
- 0x47474747U, 0xf1f1f1f1U, 0x1a1a1a1aU, 0x71717171U,
- 0x1d1d1d1dU, 0x29292929U, 0xc5c5c5c5U, 0x89898989U,
- 0x6f6f6f6fU, 0xb7b7b7b7U, 0x62626262U, 0x0e0e0e0eU,
- 0xaaaaaaaaU, 0x18181818U, 0xbebebebeU, 0x1b1b1b1bU,
- 0xfcfcfcfcU, 0x56565656U, 0x3e3e3e3eU, 0x4b4b4b4bU,
- 0xc6c6c6c6U, 0xd2d2d2d2U, 0x79797979U, 0x20202020U,
- 0x9a9a9a9aU, 0xdbdbdbdbU, 0xc0c0c0c0U, 0xfefefefeU,
- 0x78787878U, 0xcdcdcdcdU, 0x5a5a5a5aU, 0xf4f4f4f4U,
- 0x1f1f1f1fU, 0xddddddddU, 0xa8a8a8a8U, 0x33333333U,
- 0x88888888U, 0x07070707U, 0xc7c7c7c7U, 0x31313131U,
- 0xb1b1b1b1U, 0x12121212U, 0x10101010U, 0x59595959U,
- 0x27272727U, 0x80808080U, 0xececececU, 0x5f5f5f5fU,
- 0x60606060U, 0x51515151U, 0x7f7f7f7fU, 0xa9a9a9a9U,
- 0x19191919U, 0xb5b5b5b5U, 0x4a4a4a4aU, 0x0d0d0d0dU,
- 0x2d2d2d2dU, 0xe5e5e5e5U, 0x7a7a7a7aU, 0x9f9f9f9fU,
- 0x93939393U, 0xc9c9c9c9U, 0x9c9c9c9cU, 0xefefefefU,
- 0xa0a0a0a0U, 0xe0e0e0e0U, 0x3b3b3b3bU, 0x4d4d4d4dU,
- 0xaeaeaeaeU, 0x2a2a2a2aU, 0xf5f5f5f5U, 0xb0b0b0b0U,
- 0xc8c8c8c8U, 0xebebebebU, 0xbbbbbbbbU, 0x3c3c3c3cU,
- 0x83838383U, 0x53535353U, 0x99999999U, 0x61616161U,
- 0x17171717U, 0x2b2b2b2bU, 0x04040404U, 0x7e7e7e7eU,
- 0xbabababaU, 0x77777777U, 0xd6d6d6d6U, 0x26262626U,
- 0xe1e1e1e1U, 0x69696969U, 0x14141414U, 0x63636363U,
- 0x55555555U, 0x21212121U, 0x0c0c0c0cU, 0x7d7d7d7dU,
-};
-static const u32 rcon[] = {
- 0x01000000, 0x02000000, 0x04000000, 0x08000000,
- 0x10000000, 0x20000000, 0x40000000, 0x80000000,
- 0x1B000000, 0x36000000, /* for 128-bit blocks, Rijndael never uses more than 10 rcon values */
-};
-
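For reference, the rcon table above is nothing magic: the entries are
successive powers of x in GF(2^8) modulo the AES polynomial
x^8 + x^4 + x^3 + x + 1, placed in the high byte of each word.  A
minimal standalone sketch (not part of the patch) that reproduces it:

    #include <stdio.h>

    int
    main(void)
    {
        unsigned r = 1;

        for (int i = 0; i < 10; i++) {
            printf("0x%08x\n", r << 24);  /* matches rcon[i] */
            /* double in GF(2^8), reducing by the AES polynomial */
            r = (r << 1) ^ ((r & 0x80) ? 0x11b : 0);
        }
        return 0;
    }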
-#define SWAP(x) (_lrotl(x, 8) & 0x00ff00ff | _lrotr(x, 8) & 0xff00ff00)
-
-#ifdef _MSC_VER
-#define GETU32(p) SWAP(*((u32 *)(p)))
-#define PUTU32(ct, st) { *((u32 *)(ct)) = SWAP((st)); }
-#else
-#define GETU32(pt) (((u32)(pt)[0] << 24) ^ ((u32)(pt)[1] << 16) ^ ((u32)(pt)[2] << 8) ^ ((u32)(pt)[3]))
-#define PUTU32(ct, st) { (ct)[0] = (u8)((st) >> 24); (ct)[1] = (u8)((st) >> 16); (ct)[2] = (u8)((st) >> 8); (ct)[3] = (u8)(st); }
-#endif
-
-/**
- * Expand the cipher key into the encryption key schedule.
- *
- * @return the number of rounds for the given cipher key size.
- */
-int rijndaelKeySetupEnc(u32 rk[/*4*(Nr + 1)*/], const u8 cipherKey[], int keyBits) {
- int i = 0;
- u32 temp;
-
- rk[0] = GETU32(cipherKey );
- rk[1] = GETU32(cipherKey + 4);
- rk[2] = GETU32(cipherKey + 8);
- rk[3] = GETU32(cipherKey + 12);
- if (keyBits == 128) {
- for (;;) {
- temp = rk[3];
- rk[4] = rk[0] ^
- (Te4[(temp >> 16) & 0xff] & 0xff000000) ^
- (Te4[(temp >> 8) & 0xff] & 0x00ff0000) ^
- (Te4[(temp ) & 0xff] & 0x0000ff00) ^
- (Te4[(temp >> 24) ] & 0x000000ff) ^
- rcon[i];
- rk[5] = rk[1] ^ rk[4];
- rk[6] = rk[2] ^ rk[5];
- rk[7] = rk[3] ^ rk[6];
- if (++i == 10) {
- return 10;
- }
- rk += 4;
- }
- }
- rk[4] = GETU32(cipherKey + 16);
- rk[5] = GETU32(cipherKey + 20);
- if (keyBits == 192) {
- for (;;) {
- temp = rk[ 5];
- rk[ 6] = rk[ 0] ^
- (Te4[(temp >> 16) & 0xff] & 0xff000000) ^
- (Te4[(temp >> 8) & 0xff] & 0x00ff0000) ^
- (Te4[(temp ) & 0xff] & 0x0000ff00) ^
- (Te4[(temp >> 24) ] & 0x000000ff) ^
- rcon[i];
- rk[ 7] = rk[ 1] ^ rk[ 6];
- rk[ 8] = rk[ 2] ^ rk[ 7];
- rk[ 9] = rk[ 3] ^ rk[ 8];
- if (++i == 8) {
- return 12;
- }
- rk[10] = rk[ 4] ^ rk[ 9];
- rk[11] = rk[ 5] ^ rk[10];
- rk += 6;
- }
- }
- rk[6] = GETU32(cipherKey + 24);
- rk[7] = GETU32(cipherKey + 28);
- if (keyBits == 256) {
- for (;;) {
- temp = rk[ 7];
- rk[ 8] = rk[ 0] ^
- (Te4[(temp >> 16) & 0xff] & 0xff000000) ^
- (Te4[(temp >> 8) & 0xff] & 0x00ff0000) ^
- (Te4[(temp ) & 0xff] & 0x0000ff00) ^
- (Te4[(temp >> 24) ] & 0x000000ff) ^
- rcon[i];
- rk[ 9] = rk[ 1] ^ rk[ 8];
- rk[10] = rk[ 2] ^ rk[ 9];
- rk[11] = rk[ 3] ^ rk[10];
- if (++i == 7) {
- return 14;
- }
- temp = rk[11];
- rk[12] = rk[ 4] ^
- (Te4[(temp >> 24) ] & 0xff000000) ^
- (Te4[(temp >> 16) & 0xff] & 0x00ff0000) ^
- (Te4[(temp >> 8) & 0xff] & 0x0000ff00) ^
- (Te4[(temp ) & 0xff] & 0x000000ff);
- rk[13] = rk[ 5] ^ rk[12];
- rk[14] = rk[ 6] ^ rk[13];
- rk[15] = rk[ 7] ^ rk[14];
-
- rk += 8;
- }
- }
- return 0;
-}
-
-/**
- * Expand the cipher key into the decryption key schedule.
- *
- * @return the number of rounds for the given cipher key size.
- */
-int rijndaelKeySetupDec(u32 rk[/*4*(Nr + 1)*/], const u8 cipherKey[], int keyBits) {
- int Nr, i, j;
- u32 temp;
-
- /* expand the cipher key: */
- Nr = rijndaelKeySetupEnc(rk, cipherKey, keyBits);
- /* invert the order of the round keys: */
- for (i = 0, j = 4*Nr; i < j; i += 4, j -= 4) {
- temp = rk[i ]; rk[i ] = rk[j ]; rk[j ] = temp;
- temp = rk[i + 1]; rk[i + 1] = rk[j + 1]; rk[j + 1] = temp;
- temp = rk[i + 2]; rk[i + 2] = rk[j + 2]; rk[j + 2] = temp;
- temp = rk[i + 3]; rk[i + 3] = rk[j + 3]; rk[j + 3] = temp;
- }
- /* apply the inverse MixColumn transform to all round keys but the first and the last: */
- for (i = 1; i < Nr; i++) {
- rk += 4;
- rk[0] =
- Td0[Te4[(rk[0] >> 24) ] & 0xff] ^
- Td1[Te4[(rk[0] >> 16) & 0xff] & 0xff] ^
- Td2[Te4[(rk[0] >> 8) & 0xff] & 0xff] ^
- Td3[Te4[(rk[0] ) & 0xff] & 0xff];
- rk[1] =
- Td0[Te4[(rk[1] >> 24) ] & 0xff] ^
- Td1[Te4[(rk[1] >> 16) & 0xff] & 0xff] ^
- Td2[Te4[(rk[1] >> 8) & 0xff] & 0xff] ^
- Td3[Te4[(rk[1] ) & 0xff] & 0xff];
- rk[2] =
- Td0[Te4[(rk[2] >> 24) ] & 0xff] ^
- Td1[Te4[(rk[2] >> 16) & 0xff] & 0xff] ^
- Td2[Te4[(rk[2] >> 8) & 0xff] & 0xff] ^
- Td3[Te4[(rk[2] ) & 0xff] & 0xff];
- rk[3] =
- Td0[Te4[(rk[3] >> 24) ] & 0xff] ^
- Td1[Te4[(rk[3] >> 16) & 0xff] & 0xff] ^
- Td2[Te4[(rk[3] >> 8) & 0xff] & 0xff] ^
- Td3[Te4[(rk[3] ) & 0xff] & 0xff];
- }
- return Nr;
-}
-
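The Td0[Te4[...] & 0xff] composition in the key-schedule inversion
above is the classic equivalent-inverse-cipher trick: the forward
S-box baked into Te4 cancels the inverse S-box baked into the Td
tables.  As a worked identity (S = AES S-box, InvS = its inverse):

    /*
     * Te4[x] & 0xff       == S[x]     (Te4 replicates S[x] in each byte)
     * Td0[y]              == InvMixColumn column of InvS[y]
     * Td0[Te4[x] & 0xff]  == InvMixColumn column of InvS[S[x]]
     *                     == InvMixColumn column of x
     *
     * so the decryption tables double as an InvMixColumns lookup for
     * the round keys, with no separate table needed.
     */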
-void rijndaelEncrypt(const u32 rk[/*4*(Nr + 1)*/], int Nr, const u8 pt[16], u8 ct[16]) {
- u32 s0, s1, s2, s3, t0, t1, t2, t3;
-#ifndef FULL_UNROLL
- int r;
-#endif /* ?FULL_UNROLL */
-
- /*
- * map byte array block to cipher state
- * and add initial round key:
- */
- s0 = GETU32(pt ) ^ rk[0];
- s1 = GETU32(pt + 4) ^ rk[1];
- s2 = GETU32(pt + 8) ^ rk[2];
- s3 = GETU32(pt + 12) ^ rk[3];
-#ifdef FULL_UNROLL
- /* round 1: */
- t0 = Te0[s0 >> 24] ^ Te1[(s1 >> 16) & 0xff] ^ Te2[(s2 >> 8) & 0xff] ^ Te3[s3 & 0xff] ^ rk[ 4];
- t1 = Te0[s1 >> 24] ^ Te1[(s2 >> 16) & 0xff] ^ Te2[(s3 >> 8) & 0xff] ^ Te3[s0 & 0xff] ^ rk[ 5];
- t2 = Te0[s2 >> 24] ^ Te1[(s3 >> 16) & 0xff] ^ Te2[(s0 >> 8) & 0xff] ^ Te3[s1 & 0xff] ^ rk[ 6];
- t3 = Te0[s3 >> 24] ^ Te1[(s0 >> 16) & 0xff] ^ Te2[(s1 >> 8) & 0xff] ^ Te3[s2 & 0xff] ^ rk[ 7];
- /* round 2: */
- s0 = Te0[t0 >> 24] ^ Te1[(t1 >> 16) & 0xff] ^ Te2[(t2 >> 8) & 0xff] ^ Te3[t3 & 0xff] ^ rk[ 8];
- s1 = Te0[t1 >> 24] ^ Te1[(t2 >> 16) & 0xff] ^ Te2[(t3 >> 8) & 0xff] ^ Te3[t0 & 0xff] ^ rk[ 9];
- s2 = Te0[t2 >> 24] ^ Te1[(t3 >> 16) & 0xff] ^ Te2[(t0 >> 8) & 0xff] ^ Te3[t1 & 0xff] ^ rk[10];
- s3 = Te0[t3 >> 24] ^ Te1[(t0 >> 16) & 0xff] ^ Te2[(t1 >> 8) & 0xff] ^ Te3[t2 & 0xff] ^ rk[11];
- /* round 3: */
- t0 = Te0[s0 >> 24] ^ Te1[(s1 >> 16) & 0xff] ^ Te2[(s2 >> 8) & 0xff] ^ Te3[s3 & 0xff] ^ rk[12];
- t1 = Te0[s1 >> 24] ^ Te1[(s2 >> 16) & 0xff] ^ Te2[(s3 >> 8) & 0xff] ^ Te3[s0 & 0xff] ^ rk[13];
- t2 = Te0[s2 >> 24] ^ Te1[(s3 >> 16) & 0xff] ^ Te2[(s0 >> 8) & 0xff] ^ Te3[s1 & 0xff] ^ rk[14];
- t3 = Te0[s3 >> 24] ^ Te1[(s0 >> 16) & 0xff] ^ Te2[(s1 >> 8) & 0xff] ^ Te3[s2 & 0xff] ^ rk[15];
- /* round 4: */
- s0 = Te0[t0 >> 24] ^ Te1[(t1 >> 16) & 0xff] ^ Te2[(t2 >> 8) & 0xff] ^ Te3[t3 & 0xff] ^ rk[16];
- s1 = Te0[t1 >> 24] ^ Te1[(t2 >> 16) & 0xff] ^ Te2[(t3 >> 8) & 0xff] ^ Te3[t0 & 0xff] ^ rk[17];
- s2 = Te0[t2 >> 24] ^ Te1[(t3 >> 16) & 0xff] ^ Te2[(t0 >> 8) & 0xff] ^ Te3[t1 & 0xff] ^ rk[18];
- s3 = Te0[t3 >> 24] ^ Te1[(t0 >> 16) & 0xff] ^ Te2[(t1 >> 8) & 0xff] ^ Te3[t2 & 0xff] ^ rk[19];
- /* round 5: */
- t0 = Te0[s0 >> 24] ^ Te1[(s1 >> 16) & 0xff] ^ Te2[(s2 >> 8) & 0xff] ^ Te3[s3 & 0xff] ^ rk[20];
- t1 = Te0[s1 >> 24] ^ Te1[(s2 >> 16) & 0xff] ^ Te2[(s3 >> 8) & 0xff] ^ Te3[s0 & 0xff] ^ rk[21];
- t2 = Te0[s2 >> 24] ^ Te1[(s3 >> 16) & 0xff] ^ Te2[(s0 >> 8) & 0xff] ^ Te3[s1 & 0xff] ^ rk[22];
- t3 = Te0[s3 >> 24] ^ Te1[(s0 >> 16) & 0xff] ^ Te2[(s1 >> 8) & 0xff] ^ Te3[s2 & 0xff] ^ rk[23];
- /* round 6: */
- s0 = Te0[t0 >> 24] ^ Te1[(t1 >> 16) & 0xff] ^ Te2[(t2 >> 8) & 0xff] ^ Te3[t3 & 0xff] ^ rk[24];
- s1 = Te0[t1 >> 24] ^ Te1[(t2 >> 16) & 0xff] ^ Te2[(t3 >> 8) & 0xff] ^ Te3[t0 & 0xff] ^ rk[25];
- s2 = Te0[t2 >> 24] ^ Te1[(t3 >> 16) & 0xff] ^ Te2[(t0 >> 8) & 0xff] ^ Te3[t1 & 0xff] ^ rk[26];
- s3 = Te0[t3 >> 24] ^ Te1[(t0 >> 16) & 0xff] ^ Te2[(t1 >> 8) & 0xff] ^ Te3[t2 & 0xff] ^ rk[27];
- /* round 7: */
- t0 = Te0[s0 >> 24] ^ Te1[(s1 >> 16) & 0xff] ^ Te2[(s2 >> 8) & 0xff] ^ Te3[s3 & 0xff] ^ rk[28];
- t1 = Te0[s1 >> 24] ^ Te1[(s2 >> 16) & 0xff] ^ Te2[(s3 >> 8) & 0xff] ^ Te3[s0 & 0xff] ^ rk[29];
- t2 = Te0[s2 >> 24] ^ Te1[(s3 >> 16) & 0xff] ^ Te2[(s0 >> 8) & 0xff] ^ Te3[s1 & 0xff] ^ rk[30];
- t3 = Te0[s3 >> 24] ^ Te1[(s0 >> 16) & 0xff] ^ Te2[(s1 >> 8) & 0xff] ^ Te3[s2 & 0xff] ^ rk[31];
- /* round 8: */
- s0 = Te0[t0 >> 24] ^ Te1[(t1 >> 16) & 0xff] ^ Te2[(t2 >> 8) & 0xff] ^ Te3[t3 & 0xff] ^ rk[32];
- s1 = Te0[t1 >> 24] ^ Te1[(t2 >> 16) & 0xff] ^ Te2[(t3 >> 8) & 0xff] ^ Te3[t0 & 0xff] ^ rk[33];
- s2 = Te0[t2 >> 24] ^ Te1[(t3 >> 16) & 0xff] ^ Te2[(t0 >> 8) & 0xff] ^ Te3[t1 & 0xff] ^ rk[34];
- s3 = Te0[t3 >> 24] ^ Te1[(t0 >> 16) & 0xff] ^ Te2[(t1 >> 8) & 0xff] ^ Te3[t2 & 0xff] ^ rk[35];
- /* round 9: */
- t0 = Te0[s0 >> 24] ^ Te1[(s1 >> 16) & 0xff] ^ Te2[(s2 >> 8) & 0xff] ^ Te3[s3 & 0xff] ^ rk[36];
- t1 = Te0[s1 >> 24] ^ Te1[(s2 >> 16) & 0xff] ^ Te2[(s3 >> 8) & 0xff] ^ Te3[s0 & 0xff] ^ rk[37];
- t2 = Te0[s2 >> 24] ^ Te1[(s3 >> 16) & 0xff] ^ Te2[(s0 >> 8) & 0xff] ^ Te3[s1 & 0xff] ^ rk[38];
- t3 = Te0[s3 >> 24] ^ Te1[(s0 >> 16) & 0xff] ^ Te2[(s1 >> 8) & 0xff] ^ Te3[s2 & 0xff] ^ rk[39];
- if (Nr > 10) {
- /* round 10: */
- s0 = Te0[t0 >> 24] ^ Te1[(t1 >> 16) & 0xff] ^ Te2[(t2 >> 8) & 0xff] ^ Te3[t3 & 0xff] ^ rk[40];
- s1 = Te0[t1 >> 24] ^ Te1[(t2 >> 16) & 0xff] ^ Te2[(t3 >> 8) & 0xff] ^ Te3[t0 & 0xff] ^ rk[41];
- s2 = Te0[t2 >> 24] ^ Te1[(t3 >> 16) & 0xff] ^ Te2[(t0 >> 8) & 0xff] ^ Te3[t1 & 0xff] ^ rk[42];
- s3 = Te0[t3 >> 24] ^ Te1[(t0 >> 16) & 0xff] ^ Te2[(t1 >> 8) & 0xff] ^ Te3[t2 & 0xff] ^ rk[43];
- /* round 11: */
- t0 = Te0[s0 >> 24] ^ Te1[(s1 >> 16) & 0xff] ^ Te2[(s2 >> 8) & 0xff] ^ Te3[s3 & 0xff] ^ rk[44];
- t1 = Te0[s1 >> 24] ^ Te1[(s2 >> 16) & 0xff] ^ Te2[(s3 >> 8) & 0xff] ^ Te3[s0 & 0xff] ^ rk[45];
- t2 = Te0[s2 >> 24] ^ Te1[(s3 >> 16) & 0xff] ^ Te2[(s0 >> 8) & 0xff] ^ Te3[s1 & 0xff] ^ rk[46];
- t3 = Te0[s3 >> 24] ^ Te1[(s0 >> 16) & 0xff] ^ Te2[(s1 >> 8) & 0xff] ^ Te3[s2 & 0xff] ^ rk[47];
- if (Nr > 12) {
- /* round 12: */
- s0 = Te0[t0 >> 24] ^ Te1[(t1 >> 16) & 0xff] ^ Te2[(t2 >> 8) & 0xff] ^ Te3[t3 & 0xff] ^ rk[48];
- s1 = Te0[t1 >> 24] ^ Te1[(t2 >> 16) & 0xff] ^ Te2[(t3 >> 8) & 0xff] ^ Te3[t0 & 0xff] ^ rk[49];
- s2 = Te0[t2 >> 24] ^ Te1[(t3 >> 16) & 0xff] ^ Te2[(t0 >> 8) & 0xff] ^ Te3[t1 & 0xff] ^ rk[50];
- s3 = Te0[t3 >> 24] ^ Te1[(t0 >> 16) & 0xff] ^ Te2[(t1 >> 8) & 0xff] ^ Te3[t2 & 0xff] ^ rk[51];
- /* round 13: */
- t0 = Te0[s0 >> 24] ^ Te1[(s1 >> 16) & 0xff] ^ Te2[(s2 >> 8) & 0xff] ^ Te3[s3 & 0xff] ^ rk[52];
- t1 = Te0[s1 >> 24] ^ Te1[(s2 >> 16) & 0xff] ^ Te2[(s3 >> 8) & 0xff] ^ Te3[s0 & 0xff] ^ rk[53];
- t2 = Te0[s2 >> 24] ^ Te1[(s3 >> 16) & 0xff] ^ Te2[(s0 >> 8) & 0xff] ^ Te3[s1 & 0xff] ^ rk[54];
- t3 = Te0[s3 >> 24] ^ Te1[(s0 >> 16) & 0xff] ^ Te2[(s1 >> 8) & 0xff] ^ Te3[s2 & 0xff] ^ rk[55];
- }
- }
- rk += Nr << 2;
-#else /* !FULL_UNROLL */
- /*
- * Nr - 1 full rounds:
- */
- r = Nr >> 1;
- for (;;) {
- t0 =
- Te0[(s0 >> 24) ] ^
- Te1[(s1 >> 16) & 0xff] ^
- Te2[(s2 >> 8) & 0xff] ^
- Te3[(s3 ) & 0xff] ^
- rk[4];
- t1 =
- Te0[(s1 >> 24) ] ^
- Te1[(s2 >> 16) & 0xff] ^
- Te2[(s3 >> 8) & 0xff] ^
- Te3[(s0 ) & 0xff] ^
- rk[5];
- t2 =
- Te0[(s2 >> 24) ] ^
- Te1[(s3 >> 16) & 0xff] ^
- Te2[(s0 >> 8) & 0xff] ^
- Te3[(s1 ) & 0xff] ^
- rk[6];
- t3 =
- Te0[(s3 >> 24) ] ^
- Te1[(s0 >> 16) & 0xff] ^
- Te2[(s1 >> 8) & 0xff] ^
- Te3[(s2 ) & 0xff] ^
- rk[7];
-
- rk += 8;
- if (--r == 0) {
- break;
- }
-
- s0 =
- Te0[(t0 >> 24) ] ^
- Te1[(t1 >> 16) & 0xff] ^
- Te2[(t2 >> 8) & 0xff] ^
- Te3[(t3 ) & 0xff] ^
- rk[0];
- s1 =
- Te0[(t1 >> 24) ] ^
- Te1[(t2 >> 16) & 0xff] ^
- Te2[(t3 >> 8) & 0xff] ^
- Te3[(t0 ) & 0xff] ^
- rk[1];
- s2 =
- Te0[(t2 >> 24) ] ^
- Te1[(t3 >> 16) & 0xff] ^
- Te2[(t0 >> 8) & 0xff] ^
- Te3[(t1 ) & 0xff] ^
- rk[2];
- s3 =
- Te0[(t3 >> 24) ] ^
- Te1[(t0 >> 16) & 0xff] ^
- Te2[(t1 >> 8) & 0xff] ^
- Te3[(t2 ) & 0xff] ^
- rk[3];
- }
-#endif /* ?FULL_UNROLL */
- /*
- * apply last round and
- * map cipher state to byte array block:
- */
- s0 =
- (Te4[(t0 >> 24) ] & 0xff000000) ^
- (Te4[(t1 >> 16) & 0xff] & 0x00ff0000) ^
- (Te4[(t2 >> 8) & 0xff] & 0x0000ff00) ^
- (Te4[(t3 ) & 0xff] & 0x000000ff) ^
- rk[0];
- PUTU32(ct , s0);
- s1 =
- (Te4[(t1 >> 24) ] & 0xff000000) ^
- (Te4[(t2 >> 16) & 0xff] & 0x00ff0000) ^
- (Te4[(t3 >> 8) & 0xff] & 0x0000ff00) ^
- (Te4[(t0 ) & 0xff] & 0x000000ff) ^
- rk[1];
- PUTU32(ct + 4, s1);
- s2 =
- (Te4[(t2 >> 24) ] & 0xff000000) ^
- (Te4[(t3 >> 16) & 0xff] & 0x00ff0000) ^
- (Te4[(t0 >> 8) & 0xff] & 0x0000ff00) ^
- (Te4[(t1 ) & 0xff] & 0x000000ff) ^
- rk[2];
- PUTU32(ct + 8, s2);
- s3 =
- (Te4[(t3 >> 24) ] & 0xff000000) ^
- (Te4[(t0 >> 16) & 0xff] & 0x00ff0000) ^
- (Te4[(t1 >> 8) & 0xff] & 0x0000ff00) ^
- (Te4[(t2 ) & 0xff] & 0x000000ff) ^
- rk[3];
- PUTU32(ct + 12, s3);
-}
-
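Note how every round above indexes the 1 KB Te tables (and the Td
tables in rijndaelDecrypt below) with bytes of the secret cipher
state.  With 4-byte entries and typical 64-byte cache lines, each
lookup exposes the top four bits of a secret byte to anyone who can
observe which line was touched -- this is the cache-timing leak at
issue.  A hedged sketch of the observable quantity (leaked_line is
illustrative, not from the patch):

    #include <stdint.h>

    unsigned
    leaked_line(uint32_t state_word)
    {
        unsigned idx = state_word >> 24;        /* secret table index */

        return (idx * sizeof(uint32_t)) / 64;   /* == idx / 16 */
    }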
-void rijndaelDecrypt(const u32 rk[/*4*(Nr + 1)*/], int Nr, const u8 ct[16], u8 pt[16]) {
- u32 s0, s1, s2, s3, t0, t1, t2, t3;
-#ifndef FULL_UNROLL
- int r;
-#endif /* ?FULL_UNROLL */
-
- /*
- * map byte array block to cipher state
- * and add initial round key:
- */
- s0 = GETU32(ct ) ^ rk[0];
- s1 = GETU32(ct + 4) ^ rk[1];
- s2 = GETU32(ct + 8) ^ rk[2];
- s3 = GETU32(ct + 12) ^ rk[3];
-#ifdef FULL_UNROLL
- /* round 1: */
- t0 = Td0[s0 >> 24] ^ Td1[(s3 >> 16) & 0xff] ^ Td2[(s2 >> 8) & 0xff] ^ Td3[s1 & 0xff] ^ rk[ 4];
- t1 = Td0[s1 >> 24] ^ Td1[(s0 >> 16) & 0xff] ^ Td2[(s3 >> 8) & 0xff] ^ Td3[s2 & 0xff] ^ rk[ 5];
- t2 = Td0[s2 >> 24] ^ Td1[(s1 >> 16) & 0xff] ^ Td2[(s0 >> 8) & 0xff] ^ Td3[s3 & 0xff] ^ rk[ 6];
- t3 = Td0[s3 >> 24] ^ Td1[(s2 >> 16) & 0xff] ^ Td2[(s1 >> 8) & 0xff] ^ Td3[s0 & 0xff] ^ rk[ 7];
- /* round 2: */
- s0 = Td0[t0 >> 24] ^ Td1[(t3 >> 16) & 0xff] ^ Td2[(t2 >> 8) & 0xff] ^ Td3[t1 & 0xff] ^ rk[ 8];
- s1 = Td0[t1 >> 24] ^ Td1[(t0 >> 16) & 0xff] ^ Td2[(t3 >> 8) & 0xff] ^ Td3[t2 & 0xff] ^ rk[ 9];
- s2 = Td0[t2 >> 24] ^ Td1[(t1 >> 16) & 0xff] ^ Td2[(t0 >> 8) & 0xff] ^ Td3[t3 & 0xff] ^ rk[10];
- s3 = Td0[t3 >> 24] ^ Td1[(t2 >> 16) & 0xff] ^ Td2[(t1 >> 8) & 0xff] ^ Td3[t0 & 0xff] ^ rk[11];
- /* round 3: */
- t0 = Td0[s0 >> 24] ^ Td1[(s3 >> 16) & 0xff] ^ Td2[(s2 >> 8) & 0xff] ^ Td3[s1 & 0xff] ^ rk[12];
- t1 = Td0[s1 >> 24] ^ Td1[(s0 >> 16) & 0xff] ^ Td2[(s3 >> 8) & 0xff] ^ Td3[s2 & 0xff] ^ rk[13];
- t2 = Td0[s2 >> 24] ^ Td1[(s1 >> 16) & 0xff] ^ Td2[(s0 >> 8) & 0xff] ^ Td3[s3 & 0xff] ^ rk[14];
- t3 = Td0[s3 >> 24] ^ Td1[(s2 >> 16) & 0xff] ^ Td2[(s1 >> 8) & 0xff] ^ Td3[s0 & 0xff] ^ rk[15];
- /* round 4: */
- s0 = Td0[t0 >> 24] ^ Td1[(t3 >> 16) & 0xff] ^ Td2[(t2 >> 8) & 0xff] ^ Td3[t1 & 0xff] ^ rk[16];
- s1 = Td0[t1 >> 24] ^ Td1[(t0 >> 16) & 0xff] ^ Td2[(t3 >> 8) & 0xff] ^ Td3[t2 & 0xff] ^ rk[17];
- s2 = Td0[t2 >> 24] ^ Td1[(t1 >> 16) & 0xff] ^ Td2[(t0 >> 8) & 0xff] ^ Td3[t3 & 0xff] ^ rk[18];
- s3 = Td0[t3 >> 24] ^ Td1[(t2 >> 16) & 0xff] ^ Td2[(t1 >> 8) & 0xff] ^ Td3[t0 & 0xff] ^ rk[19];
- /* round 5: */
- t0 = Td0[s0 >> 24] ^ Td1[(s3 >> 16) & 0xff] ^ Td2[(s2 >> 8) & 0xff] ^ Td3[s1 & 0xff] ^ rk[20];
- t1 = Td0[s1 >> 24] ^ Td1[(s0 >> 16) & 0xff] ^ Td2[(s3 >> 8) & 0xff] ^ Td3[s2 & 0xff] ^ rk[21];
- t2 = Td0[s2 >> 24] ^ Td1[(s1 >> 16) & 0xff] ^ Td2[(s0 >> 8) & 0xff] ^ Td3[s3 & 0xff] ^ rk[22];
- t3 = Td0[s3 >> 24] ^ Td1[(s2 >> 16) & 0xff] ^ Td2[(s1 >> 8) & 0xff] ^ Td3[s0 & 0xff] ^ rk[23];
- /* round 6: */
- s0 = Td0[t0 >> 24] ^ Td1[(t3 >> 16) & 0xff] ^ Td2[(t2 >> 8) & 0xff] ^ Td3[t1 & 0xff] ^ rk[24];
- s1 = Td0[t1 >> 24] ^ Td1[(t0 >> 16) & 0xff] ^ Td2[(t3 >> 8) & 0xff] ^ Td3[t2 & 0xff] ^ rk[25];
- s2 = Td0[t2 >> 24] ^ Td1[(t1 >> 16) & 0xff] ^ Td2[(t0 >> 8) & 0xff] ^ Td3[t3 & 0xff] ^ rk[26];
- s3 = Td0[t3 >> 24] ^ Td1[(t2 >> 16) & 0xff] ^ Td2[(t1 >> 8) & 0xff] ^ Td3[t0 & 0xff] ^ rk[27];
- /* round 7: */
- t0 = Td0[s0 >> 24] ^ Td1[(s3 >> 16) & 0xff] ^ Td2[(s2 >> 8) & 0xff] ^ Td3[s1 & 0xff] ^ rk[28];
- t1 = Td0[s1 >> 24] ^ Td1[(s0 >> 16) & 0xff] ^ Td2[(s3 >> 8) & 0xff] ^ Td3[s2 & 0xff] ^ rk[29];
- t2 = Td0[s2 >> 24] ^ Td1[(s1 >> 16) & 0xff] ^ Td2[(s0 >> 8) & 0xff] ^ Td3[s3 & 0xff] ^ rk[30];
- t3 = Td0[s3 >> 24] ^ Td1[(s2 >> 16) & 0xff] ^ Td2[(s1 >> 8) & 0xff] ^ Td3[s0 & 0xff] ^ rk[31];
- /* round 8: */
- s0 = Td0[t0 >> 24] ^ Td1[(t3 >> 16) & 0xff] ^ Td2[(t2 >> 8) & 0xff] ^ Td3[t1 & 0xff] ^ rk[32];
- s1 = Td0[t1 >> 24] ^ Td1[(t0 >> 16) & 0xff] ^ Td2[(t3 >> 8) & 0xff] ^ Td3[t2 & 0xff] ^ rk[33];
- s2 = Td0[t2 >> 24] ^ Td1[(t1 >> 16) & 0xff] ^ Td2[(t0 >> 8) & 0xff] ^ Td3[t3 & 0xff] ^ rk[34];
- s3 = Td0[t3 >> 24] ^ Td1[(t2 >> 16) & 0xff] ^ Td2[(t1 >> 8) & 0xff] ^ Td3[t0 & 0xff] ^ rk[35];
- /* round 9: */
- t0 = Td0[s0 >> 24] ^ Td1[(s3 >> 16) & 0xff] ^ Td2[(s2 >> 8) & 0xff] ^ Td3[s1 & 0xff] ^ rk[36];
- t1 = Td0[s1 >> 24] ^ Td1[(s0 >> 16) & 0xff] ^ Td2[(s3 >> 8) & 0xff] ^ Td3[s2 & 0xff] ^ rk[37];
- t2 = Td0[s2 >> 24] ^ Td1[(s1 >> 16) & 0xff] ^ Td2[(s0 >> 8) & 0xff] ^ Td3[s3 & 0xff] ^ rk[38];
- t3 = Td0[s3 >> 24] ^ Td1[(s2 >> 16) & 0xff] ^ Td2[(s1 >> 8) & 0xff] ^ Td3[s0 & 0xff] ^ rk[39];
- if (Nr > 10) {
- /* round 10: */
- s0 = Td0[t0 >> 24] ^ Td1[(t3 >> 16) & 0xff] ^ Td2[(t2 >> 8) & 0xff] ^ Td3[t1 & 0xff] ^ rk[40];
- s1 = Td0[t1 >> 24] ^ Td1[(t0 >> 16) & 0xff] ^ Td2[(t3 >> 8) & 0xff] ^ Td3[t2 & 0xff] ^ rk[41];
- s2 = Td0[t2 >> 24] ^ Td1[(t1 >> 16) & 0xff] ^ Td2[(t0 >> 8) & 0xff] ^ Td3[t3 & 0xff] ^ rk[42];
- s3 = Td0[t3 >> 24] ^ Td1[(t2 >> 16) & 0xff] ^ Td2[(t1 >> 8) & 0xff] ^ Td3[t0 & 0xff] ^ rk[43];
- /* round 11: */
- t0 = Td0[s0 >> 24] ^ Td1[(s3 >> 16) & 0xff] ^ Td2[(s2 >> 8) & 0xff] ^ Td3[s1 & 0xff] ^ rk[44];
- t1 = Td0[s1 >> 24] ^ Td1[(s0 >> 16) & 0xff] ^ Td2[(s3 >> 8) & 0xff] ^ Td3[s2 & 0xff] ^ rk[45];
- t2 = Td0[s2 >> 24] ^ Td1[(s1 >> 16) & 0xff] ^ Td2[(s0 >> 8) & 0xff] ^ Td3[s3 & 0xff] ^ rk[46];
- t3 = Td0[s3 >> 24] ^ Td1[(s2 >> 16) & 0xff] ^ Td2[(s1 >> 8) & 0xff] ^ Td3[s0 & 0xff] ^ rk[47];
- if (Nr > 12) {
- /* round 12: */
- s0 = Td0[t0 >> 24] ^ Td1[(t3 >> 16) & 0xff] ^ Td2[(t2 >> 8) & 0xff] ^ Td3[t1 & 0xff] ^ rk[48];
- s1 = Td0[t1 >> 24] ^ Td1[(t0 >> 16) & 0xff] ^ Td2[(t3 >> 8) & 0xff] ^ Td3[t2 & 0xff] ^ rk[49];
- s2 = Td0[t2 >> 24] ^ Td1[(t1 >> 16) & 0xff] ^ Td2[(t0 >> 8) & 0xff] ^ Td3[t3 & 0xff] ^ rk[50];
- s3 = Td0[t3 >> 24] ^ Td1[(t2 >> 16) & 0xff] ^ Td2[(t1 >> 8) & 0xff] ^ Td3[t0 & 0xff] ^ rk[51];
- /* round 13: */
- t0 = Td0[s0 >> 24] ^ Td1[(s3 >> 16) & 0xff] ^ Td2[(s2 >> 8) & 0xff] ^ Td3[s1 & 0xff] ^ rk[52];
- t1 = Td0[s1 >> 24] ^ Td1[(s0 >> 16) & 0xff] ^ Td2[(s3 >> 8) & 0xff] ^ Td3[s2 & 0xff] ^ rk[53];
- t2 = Td0[s2 >> 24] ^ Td1[(s1 >> 16) & 0xff] ^ Td2[(s0 >> 8) & 0xff] ^ Td3[s3 & 0xff] ^ rk[54];
- t3 = Td0[s3 >> 24] ^ Td1[(s2 >> 16) & 0xff] ^ Td2[(s1 >> 8) & 0xff] ^ Td3[s0 & 0xff] ^ rk[55];
- }
- }
- rk += Nr << 2;
-#else /* !FULL_UNROLL */
- /*
- * Nr - 1 full rounds:
- */
- r = Nr >> 1;
- for (;;) {
- t0 =
- Td0[(s0 >> 24) ] ^
- Td1[(s3 >> 16) & 0xff] ^
- Td2[(s2 >> 8) & 0xff] ^
- Td3[(s1 ) & 0xff] ^
- rk[4];
- t1 =
- Td0[(s1 >> 24) ] ^
- Td1[(s0 >> 16) & 0xff] ^
- Td2[(s3 >> 8) & 0xff] ^
- Td3[(s2 ) & 0xff] ^
- rk[5];
- t2 =
- Td0[(s2 >> 24) ] ^
- Td1[(s1 >> 16) & 0xff] ^
- Td2[(s0 >> 8) & 0xff] ^
- Td3[(s3 ) & 0xff] ^
- rk[6];
- t3 =
- Td0[(s3 >> 24) ] ^
- Td1[(s2 >> 16) & 0xff] ^
- Td2[(s1 >> 8) & 0xff] ^
- Td3[(s0 ) & 0xff] ^
- rk[7];
-
- rk += 8;
- if (--r == 0) {
- break;
- }
-
- s0 =
- Td0[(t0 >> 24) ] ^
- Td1[(t3 >> 16) & 0xff] ^
- Td2[(t2 >> 8) & 0xff] ^
- Td3[(t1 ) & 0xff] ^
- rk[0];
- s1 =
- Td0[(t1 >> 24) ] ^
- Td1[(t0 >> 16) & 0xff] ^
- Td2[(t3 >> 8) & 0xff] ^
- Td3[(t2 ) & 0xff] ^
- rk[1];
- s2 =
- Td0[(t2 >> 24) ] ^
- Td1[(t1 >> 16) & 0xff] ^
- Td2[(t0 >> 8) & 0xff] ^
- Td3[(t3 ) & 0xff] ^
- rk[2];
- s3 =
- Td0[(t3 >> 24) ] ^
- Td1[(t2 >> 16) & 0xff] ^
- Td2[(t1 >> 8) & 0xff] ^
- Td3[(t0 ) & 0xff] ^
- rk[3];
- }
-#endif /* ?FULL_UNROLL */
- /*
- * apply last round and
- * map cipher state to byte array block:
- */
- s0 =
- (Td4[(t0 >> 24) ] & 0xff000000) ^
- (Td4[(t3 >> 16) & 0xff] & 0x00ff0000) ^
- (Td4[(t2 >> 8) & 0xff] & 0x0000ff00) ^
- (Td4[(t1 ) & 0xff] & 0x000000ff) ^
- rk[0];
- PUTU32(pt , s0);
- s1 =
- (Td4[(t1 >> 24) ] & 0xff000000) ^
- (Td4[(t0 >> 16) & 0xff] & 0x00ff0000) ^
- (Td4[(t3 >> 8) & 0xff] & 0x0000ff00) ^
- (Td4[(t2 ) & 0xff] & 0x000000ff) ^
- rk[1];
- PUTU32(pt + 4, s1);
- s2 =
- (Td4[(t2 >> 24) ] & 0xff000000) ^
- (Td4[(t1 >> 16) & 0xff] & 0x00ff0000) ^
- (Td4[(t0 >> 8) & 0xff] & 0x0000ff00) ^
- (Td4[(t3 ) & 0xff] & 0x000000ff) ^
- rk[2];
- PUTU32(pt + 8, s2);
- s3 =
- (Td4[(t3 >> 24) ] & 0xff000000) ^
- (Td4[(t2 >> 16) & 0xff] & 0x00ff0000) ^
- (Td4[(t1 >> 8) & 0xff] & 0x0000ff00) ^
- (Td4[(t0 ) & 0xff] & 0x000000ff) ^
- rk[3];
- PUTU32(pt + 12, s3);
-}
diff -r 81a487955535 -r 9d6b84c40f65 sys/crypto/rijndael/rijndael-api-fst.c
--- a/sys/crypto/rijndael/rijndael-api-fst.c Sun Jun 14 15:58:39 2020 +0000
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,430 +0,0 @@
-/* $NetBSD: rijndael-api-fst.c,v 1.25 2016/12/11 00:28:44 alnsn Exp $ */
-
-/**
- * rijndael-api-fst.c
- *
- * @version 2.9 (December 2000)
- *
- * Optimised ANSI C code for the Rijndael cipher (now AES)
- *
- * @author Vincent Rijmen <vincent.rijmen@esat.kuleuven.ac.be>
- * @author Antoon Bosselaers <antoon.bosselaers@esat.kuleuven.ac.be>
- * @author Paulo Barreto <paulo.barreto@terra.com.br>
- *
- * This code is hereby placed in the public domain.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHORS ''AS IS'' AND ANY EXPRESS
- * OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE
- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
- * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
- * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
- * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
- * EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- * Acknowledgements:
- *
- * We are deeply indebted to the following people for their bug reports,
- * fixes, and improvement suggestions to this implementation. Though we
- * tried to list all contributions, we apologise in advance for any
- * missing reference.
- *
- * Andrew Bales <Andrew.Bales@Honeywell.com>
- * Markus Friedl <markus.friedl@informatik.uni-erlangen.de>
- * John Skodon <skodonj@webquill.com>
- */
-
-#include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: rijndael-api-fst.c,v 1.25 2016/12/11 00:28:44 alnsn Exp $");
-
-#include <sys/param.h>
-#ifdef _KERNEL
-#include <sys/systm.h>
-#else
-#include <stdlib.h>
-#include <string.h>
-#endif
-
-#include <crypto/rijndael/rijndael_local.h>
-#include <crypto/rijndael/rijndael-alg-fst.h>
-#include <crypto/rijndael/rijndael-api-fst.h>
-
-#define XTS_ALPHA 0x87
-
-static void xor16(uint8_t *d, const uint8_t *a, const uint8_t* b)
-{
- for (size_t i = 0; i < 4; i++) {
- *d++ = *a++ ^ *b++;
- *d++ = *a++ ^ *b++;
- *d++ = *a++ ^ *b++;
- *d++ = *a++ ^ *b++;
- }
-}
-
-static void
-xts_exponentiate(uint8_t *iv)
-{
- unsigned int carry = 0;
-
- for (size_t i = 0; i < 16; i++) {
- unsigned int msb = iv[i] >> 7;
-
- iv[i] = (iv[i] << 1) | carry;
- carry = msb;
- }
-
- if (carry != 0)
- iv[0] ^= XTS_ALPHA;
-}
-
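xts_exponentiate above multiplies the 128-bit tweak by x in GF(2^128),
little-endian byte order, with reduction polynomial
x^128 + x^7 + x^2 + x + 1 -- hence the constant 0x87.  Two quick spot
checks, as a sketch:

    uint8_t t[16] = { 1 };            /* tweak = 1 */
    xts_exponentiate(t);              /* now t[0] == 0x02: plain doubling */

    uint8_t u[16] = { [15] = 0x80 };  /* only the top bit set */
    xts_exponentiate(u);              /* now u[0] == 0x87, rest 0: reduction */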
-int
-rijndael_makeKey(keyInstance *key, BYTE direction, int keyLen,
- const char *keyMaterial)
-{
- u_int8_t cipherKey[RIJNDAEL_MAXKB];
-
- if (key == NULL) {
- return BAD_KEY_INSTANCE;
- }
-
- if ((direction == DIR_ENCRYPT) || (direction == DIR_DECRYPT)) {
- key->direction = direction;
- } else {
- return BAD_KEY_DIR;
- }
-
- if ((keyLen == 128) || (keyLen == 192) || (keyLen == 256)) {
- key->keyLen = keyLen;
- } else {
- return BAD_KEY_MAT;
- }
-
- if (keyMaterial != NULL) {
- memcpy(key->keyMaterial, keyMaterial, keyLen/8);
- }
-
- /* initialize key schedule: */
- memcpy(cipherKey, key->keyMaterial, keyLen/8);
- if (direction == DIR_ENCRYPT) {
- key->Nr = rijndaelKeySetupEnc(key->rk, cipherKey, keyLen);
- } else {
- key->Nr = rijndaelKeySetupDec(key->rk, cipherKey, keyLen);
- }
- rijndaelKeySetupEnc(key->ek, cipherKey, keyLen);
- return TRUE;
-}
-
-int
-rijndael_cipherInit(cipherInstance *cipher, BYTE mode, const char *IV)
-{
- if ((mode == MODE_ECB) || (mode == MODE_CBC) ||
- (mode == MODE_XTS) || (mode == MODE_CFB1)) {
- cipher->mode = mode;
- } else {
- return BAD_CIPHER_MODE;
- }
- if (IV != NULL) {
- memcpy(cipher->IV, IV, RIJNDAEL_MAX_IV_SIZE);
- } else {
- memset(cipher->IV, 0, RIJNDAEL_MAX_IV_SIZE);
- }
- return TRUE;
-}
-
-int
-rijndael_blockEncrypt(cipherInstance *cipher, keyInstance *key,
- const BYTE *input, int inputLen, BYTE *outBuffer)
-{
- int i, k, t, numBlocks;
- u_int8_t block[16], *iv;
-
- if (cipher == NULL ||
- key == NULL ||
- key->direction == DIR_DECRYPT) {
- return BAD_CIPHER_STATE;
- }
- if (input == NULL || inputLen <= 0) {
- return 0; /* nothing to do */
- }
-
- numBlocks = inputLen/128;
-
- switch (cipher->mode) {
- case MODE_ECB:
- for (i = numBlocks; i > 0; i--) {
- rijndaelEncrypt(key->rk, key->Nr, input, outBuffer);
- input += 16;
- outBuffer += 16;
- }
- break;
-
- case MODE_CBC:
- iv = (u_int8_t *)cipher->IV;
- for (i = numBlocks; i > 0; i--) {
- xor16(block, input, iv);
- rijndaelEncrypt(key->rk, key->Nr, block, outBuffer);
- iv = outBuffer;
- input += 16;
- outBuffer += 16;
- }
- break;
-
- case MODE_XTS:
- iv = (u_int8_t *)cipher->IV;
- for (i = numBlocks; i > 0; i--) {
- xor16(block, input, iv);
- rijndaelEncrypt(key->rk, key->Nr, block, block);
- xor16(outBuffer, block, iv);
- xts_exponentiate(iv);
- input += 16;
- outBuffer += 16;
- }
- break;
-
- case MODE_CFB1:
- iv = (u_int8_t *)cipher->IV;
- for (i = numBlocks; i > 0; i--) {
- memcpy(outBuffer, input, 16);
- for (k = 0; k < 128; k++) {
- rijndaelEncrypt(key->ek, key->Nr, iv, block);
- outBuffer[k >> 3] ^=
- (block[0] & 0x80U) >> (k & 7);
- for (t = 0; t < 15; t++) {
- iv[t] = (iv[t] << 1) | (iv[t + 1] >> 7);
- }
- iv[15] = (iv[15] << 1) |
- ((outBuffer[k >> 3] >> (7 - (k & 7))) & 1);
- }
- outBuffer += 16;
- input += 16;
- }
- break;
-
- default:
- return BAD_CIPHER_STATE;
- }
-
- return 128 * numBlocks;
-}
-
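The MODE_CFB1 arm above is 1-bit cipher feedback: each plaintext bit
costs a full AES invocation on a 128-bit shift register, i.e. 128
block encryptions per 16-byte block.  Per bit k the recurrence is,
schematically:

    /*
     * c_k      = p_k ^ msb(E_K(shiftreg))
     * shiftreg = (shiftreg << 1) | c_k
     *
     * Decryption applies the same recurrence with the roles of p_k
     * and c_k swapped, which is why both directions use the
     * *encryption* schedule (key->ek): only E_K is ever computed.
     */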
-/**
- * Encrypt data partitioned in octets, using RFC 2040-like padding.
- *
- * @param input data to be encrypted (octet sequence)
- * @param inputOctets input length in octets (not bits)
- * @param outBuffer encrypted output data
- *
- * @return length in octets (not bits) of the encrypted output buffer.
- */
-int
-rijndael_padEncrypt(cipherInstance *cipher, keyInstance *key,
- const BYTE *input, int inputOctets, BYTE *outBuffer)
-{
- int i, numBlocks, padLen;
- u_int8_t block[16], *iv;
-
- if (cipher == NULL ||
- key == NULL ||
- key->direction == DIR_DECRYPT) {
- return BAD_CIPHER_STATE;
- }
- if (input == NULL || inputOctets <= 0) {
- return 0; /* nothing to do */
- }
-
- numBlocks = inputOctets / 16;
-
- switch (cipher->mode) {
- case MODE_ECB:
- for (i = numBlocks; i > 0; i--) {
- rijndaelEncrypt(key->rk, key->Nr, input, outBuffer);
- input += 16;
- outBuffer += 16;
- }
- padLen = 16 - (inputOctets - 16*numBlocks);
- memcpy(block, input, 16 - padLen);
- memset(block + 16 - padLen, padLen, padLen);
- rijndaelEncrypt(key->rk, key->Nr, block, outBuffer);
- break;
-
- case MODE_CBC:
- iv = (u_int8_t *)cipher->IV;
- for (i = numBlocks; i > 0; i--) {
- xor16(block, input, iv);
- rijndaelEncrypt(key->rk, key->Nr, block, outBuffer);
- iv = outBuffer;
- input += 16;
- outBuffer += 16;
- }
- padLen = 16 - (inputOctets - 16*numBlocks);
- for (i = 0; i < 16 - padLen; i++) {
- block[i] = input[i] ^ iv[i];
- }
- for (i = 16 - padLen; i < 16; i++) {
- block[i] = (BYTE)padLen ^ iv[i];
- }
- rijndaelEncrypt(key->rk, key->Nr, block, outBuffer);
- break;
-
- default:
- return BAD_CIPHER_STATE;
- }
-
- return 16 * (numBlocks + 1);
-}
-
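With this padding scheme, padLen = 16 - (inputOctets % 16) always lies
in [1, 16], so already block-aligned input still grows by one full
block and the output length is the next multiple of 16 strictly
greater than the input.  As a sketch (padded_len is illustrative):

    /* padded output length for this scheme: 20 -> 32, 32 -> 48 */
    int
    padded_len(int n)
    {
        return 16 * (n/16 + 1);
    }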
-int
-rijndael_blockDecrypt(cipherInstance *cipher, keyInstance *key,
- const BYTE *input, int inputLen, BYTE *outBuffer)
-{
- int i, k, t, numBlocks;
- u_int8_t block[16], *iv;
-
- if (cipher == NULL ||
- key == NULL ||
- (cipher->mode != MODE_CFB1 && key->direction == DIR_ENCRYPT)) {
- return BAD_CIPHER_STATE;
- }
- if (input == NULL || inputLen <= 0) {
- return 0; /* nothing to do */
- }
-
- numBlocks = inputLen/128;
-
- switch (cipher->mode) {
- case MODE_ECB:
- for (i = numBlocks; i > 0; i--) {
- rijndaelDecrypt(key->rk, key->Nr, input, outBuffer);
- input += 16;
- outBuffer += 16;
- }
- break;
-
- case MODE_CBC:
- iv = (u_int8_t *)cipher->IV;
- for (i = numBlocks; i > 0; i--) {
- rijndaelDecrypt(key->rk, key->Nr, input, block);
- xor16(block, block, iv);
- memcpy(cipher->IV, input, 16);
- memcpy(outBuffer, block, 16);
- input += 16;
- outBuffer += 16;
- }
- break;
-
- case MODE_XTS:
- iv = (u_int8_t *)cipher->IV;
- for (i = numBlocks; i > 0; i--) {
- xor16(block, input, iv);
- rijndaelDecrypt(key->rk, key->Nr, block, block);
- xor16(outBuffer, block, iv);
- xts_exponentiate(iv);
- input += 16;
- outBuffer += 16;
- }
- break;
-
- case MODE_CFB1:
- iv = (u_int8_t *)cipher->IV;
- for (i = numBlocks; i > 0; i--) {
- memcpy(outBuffer, input, 16);
- for (k = 0; k < 128; k++) {
- rijndaelEncrypt(key->ek, key->Nr, iv, block);
- for (t = 0; t < 15; t++) {
- iv[t] = (iv[t] << 1) | (iv[t + 1] >> 7);
- }
- iv[15] = (iv[15] << 1) |
- ((input[k >> 3] >> (7 - (k & 7))) & 1);
- outBuffer[k >> 3] ^= (block[0] & 0x80U) >>
- (k & 7);
- }
- outBuffer += 16;
- input += 16;
- }
- break;
-
- default:
- return BAD_CIPHER_STATE;
- }
-
- return 128 * numBlocks;
-}
-
-int
-rijndael_padDecrypt(cipherInstance *cipher, keyInstance *key,
- const BYTE *input, int inputOctets, BYTE *outBuffer)
-{
- int i, numBlocks, padLen;
- u_int8_t block[16], *iv;
-
- if (cipher == NULL ||
- key == NULL ||
- key->direction == DIR_ENCRYPT) {
- return BAD_CIPHER_STATE;
- }
- if (input == NULL || inputOctets <= 0) {
- return 0; /* nothing to do */
- }
- if (inputOctets % 16 != 0) {
- return BAD_DATA;
- }
-
- numBlocks = inputOctets/16;
-
- switch (cipher->mode) {
- case MODE_ECB:
- /* all blocks but last */
- for (i = numBlocks - 1; i > 0; i--) {
- rijndaelDecrypt(key->rk, key->Nr, input, outBuffer);
- input += 16;
- outBuffer += 16;
- }
- /* last block */
- rijndaelDecrypt(key->rk, key->Nr, input, block);
- padLen = block[15];
- if (padLen >= 16) {
- return BAD_DATA;
- }
- for (i = 16 - padLen; i < 16; i++) {
- if (block[i] != padLen) {
- return BAD_DATA;
- }
- }
- memcpy(outBuffer, block, 16 - padLen);
- break;
-
- case MODE_CBC:
- iv = (u_int8_t *)cipher->IV;
- /* all blocks but last */
- for (i = numBlocks - 1; i > 0; i--) {
- rijndaelDecrypt(key->rk, key->Nr, input, block);
- xor16(block, block, iv);
- memcpy(cipher->IV, input, 16);
- memcpy(outBuffer, block, 16);
- input += 16;
- outBuffer += 16;
- }
- /* last block */
- rijndaelDecrypt(key->rk, key->Nr, input, block);
- xor16(block, block, iv);
- padLen = block[15];
- if (padLen <= 0 || padLen > 16) {
- return BAD_DATA;
- }
- for (i = 16 - padLen; i < 16; i++) {
- if (block[i] != padLen) {
- return BAD_DATA;
- }
- }
- memcpy(outBuffer, block, 16 - padLen);
- break;
-
- default:
- return BAD_CIPHER_STATE;
- }
-
- return 16 * numBlocks - padLen;
-}
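One more timing remark in passing: the pad checks above compare byte
by byte and return on the first mismatch, which is itself
data-dependent.  A variant that accumulates the comparison instead, as
a hedged sketch (a fully constant-time check would also have to hide
padLen itself, which drives the loop bound):

    unsigned bad = 0;

    for (i = 16 - padLen; i < 16; i++)
        bad |= block[i] ^ padLen;
    if (bad != 0)
        return BAD_DATA;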
diff -r 81a487955535 -r 9d6b84c40f65 sys/crypto/rijndael/rijndael.c
--- a/sys/crypto/rijndael/rijndael.c Sun Jun 14 15:58:39 2020 +0000
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,57 +0,0 @@
-/* $NetBSD: rijndael.c,v 1.8 2005/12/11 12:20:52 christos Exp $ */
-
-/**
- * rijndael-alg-fst.c
- *
- * @version 3.0 (December 2000)
- *
- * Optimised ANSI C code for the Rijndael cipher (now AES)
- *
- * @author Vincent Rijmen <vincent.rijmen@esat.kuleuven.ac.be>
- * @author Antoon Bosselaers <antoon.bosselaers@esat.kuleuven.ac.be>
- * @author Paulo Barreto <paulo.barreto@terra.com.br>
- *
- * This code is hereby placed in the public domain.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHORS ''AS IS'' AND ANY EXPRESS
- * OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHORS OR CONTRIBUTORS BE
- * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
- * BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
- * WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
- * OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
- * EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: rijndael.c,v 1.8 2005/12/11 12:20:52 christos Exp $");
-
-#include <sys/types.h>
-#include <sys/systm.h>
-
-#include <crypto/rijndael/rijndael.h>
-
-void
-rijndael_set_key(rijndael_ctx *ctx, const u_char *key, int bits)
-{
-
- ctx->Nr = rijndaelKeySetupEnc(ctx->ek, key, bits);
- rijndaelKeySetupDec(ctx->dk, key, bits);
-}
-
-void
-rijndael_decrypt(const rijndael_ctx *ctx, const u_char *src, u_char *dst)
-{
-
- rijndaelDecrypt(ctx->dk, ctx->Nr, src, dst);
-}
-
-void
-rijndael_encrypt(const rijndael_ctx *ctx, const u_char *src, u_char *dst)
-{
-
- rijndaelEncrypt(ctx->ek, ctx->Nr, src, dst);
-}
diff -r 81a487955535 -r 9d6b84c40f65 sys/crypto/rijndael/rijndael_local.h
--- a/sys/crypto/rijndael/rijndael_local.h Sun Jun 14 15:58:39 2020 +0000
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,7 +0,0 @@
-/* $NetBSD: rijndael_local.h,v 1.6 2005/12/11 12:20:52 christos Exp $ */
-/* $KAME: rijndael_local.h,v 1.4 2003/07/15 10:47:16 itojun Exp $ */
-
-/* the file should not be used from outside */
-typedef u_int8_t u8;
-typedef u_int16_t u16;
-typedef u_int32_t u32;
diff -r 81a487955535 -r 9d6b84c40f65 sys/rump/kern/lib/libcrypto/Makefile
--- a/sys/rump/kern/lib/libcrypto/Makefile Sun Jun 14 15:58:39 2020 +0000
+++ b/sys/rump/kern/lib/libcrypto/Makefile Fri Jun 12 05:16:46 2020 +0000
@@ -1,11 +1,11 @@
# $NetBSD: Makefile,v 1.6 2019/12/05 03:57:55 riastradh Exp $
#
-.PATH: ${.CURDIR}/../../../../crypto/blowfish \
+.PATH: ${.CURDIR}/../../../../crypto/aes \
+ ${.CURDIR}/../../../../crypto/blowfish \
${.CURDIR}/../../../../crypto/camellia \
${.CURDIR}/../../../../crypto/cast128 \
${.CURDIR}/../../../../crypto/des \
- ${.CURDIR}/../../../../crypto/rijndael \
${.CURDIR}/../../../../crypto/skipjack
LIB= rumpkern_crypto
@@ -23,8 +23,14 @@ SRCS+= cast128.c
# DES
SRCS+= des_ecb.c des_setkey.c des_enc.c des_cbc.c des_module.c
-# rijndael
-SRCS+= rijndael-alg-fst.c rijndael-api-fst.c rijndael.c
+# AES
+SRCS+= aes_bear.c
+SRCS+= aes_ct.c
+SRCS+= aes_ct_dec.c
+SRCS+= aes_ct_enc.c
+SRCS+= aes_impl.c
+SRCS+= aes_rijndael.c
+SRCS+= aes_selftest.c
# skipjack
SRCS+= skipjack.c
# HG changeset patch
# User Taylor R Campbell <riastradh@NetBSD.org>
# Date 1592435129 0
# Wed Jun 17 23:05:29 2020 +0000
# Branch trunk
# Node ID fea7aeacc09cf9da68d32a15edf9550ce78a4d45
# Parent 9d6b84c40f6517bb55848159faa9478ef1a23d02
# EXP-Topic riastradh-kernelcrypto
Add x86 AES-NI support.
Limited to amd64 for now. In principle, AES-NI should work in 32-bit
mode, and there may even be some 32-bit-only CPUs that support
AES-NI, but that requires work to adapt the assembly.
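CPUs advertise AES-NI through CPUID leaf 1, ECX bit 25; the hook in
cpu_probe() below keys off the corresponding CPUID2_AES feature bit.
An analogous userland check, as a sketch using GCC's <cpuid.h>:

    #include <cpuid.h>
    #include <stdio.h>

    int
    main(void)
    {
        unsigned eax, ebx, ecx, edx;

        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & bit_AES))
            printf("AES-NI available\n");
        else
            printf("no AES-NI\n");
        return 0;
    }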
diff -r 9d6b84c40f65 -r fea7aeacc09c sys/arch/x86/conf/files.x86
--- a/sys/arch/x86/conf/files.x86 Fri Jun 12 05:16:46 2020 +0000
+++ b/sys/arch/x86/conf/files.x86 Wed Jun 17 23:05:29 2020 +0000
@@ -165,3 +165,6 @@ file arch/x86/pci/pciide_machdep.c pciid
file arch/x86/pci/pci_bus_fixup.c pci_bus_fixup
file arch/x86/pci/pci_addr_fixup.c pci_addr_fixup
+
+# AES-NI
+include "crypto/aes/arch/x86/files.aesni"
diff -r 9d6b84c40f65 -r fea7aeacc09c sys/arch/x86/x86/identcpu.c
--- a/sys/arch/x86/x86/identcpu.c Fri Jun 12 05:16:46 2020 +0000
+++ b/sys/arch/x86/x86/identcpu.c Wed Jun 17 23:05:29 2020 +0000
@@ -39,6 +39,8 @@
#include <sys/device.h>
#include <sys/cpu.h>
+#include <crypto/aes/arch/x86/aes_ni.h>
+
#include <uvm/uvm_extern.h>
#include <machine/specialreg.h>
@@ -995,6 +997,10 @@ cpu_probe(struct cpu_info *ci)
/* Early patch of text segment. */
x86_patch(true);
#endif
+#ifdef __x86_64__ /* not yet implemented on i386 */
+ if (cpu_feature[1] & CPUID2_AES)
+ aes_md_init(&aes_ni_impl);
+#endif
} else {
/*
* If not first. Warn about cpu_feature mismatch for
diff -r 9d6b84c40f65 -r fea7aeacc09c sys/crypto/aes/arch/x86/aes_ni.c
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/arch/x86/aes_ni.c Wed Jun 17 23:05:29 2020 +0000
@@ -0,0 +1,252 @@
+/* $NetBSD$ */
+
+/*-
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__KERNEL_RCSID(1, "$NetBSD$");
+
+#include <sys/types.h>
+#include <sys/systm.h>
+
+#include <crypto/aes/aes.h>
+#include <crypto/aes/arch/x86/aes_ni.h>
+
+#include <x86/cpuvar.h>
+#include <x86/fpu.h>
+#include <x86/specialreg.h>
+
+static void
+aesni_setenckey(struct aesenc *enc, const uint8_t key[static 16],
+ uint32_t nrounds)
+{
+
+ switch (nrounds) {
+ case 10:
+ aesni_setenckey128(enc, key);
+ break;
+ case 12:
+ aesni_setenckey192(enc, key);
+ break;
+ case 14:
+ aesni_setenckey256(enc, key);
+ break;
+ default:
+ panic("invalid AES rounds: %u", nrounds);
+ }
+}
+
+static void
+aesni_setenckey_impl(struct aesenc *enc, const uint8_t key[static 16],
+ uint32_t nrounds)
+{
+
+ fpu_kern_enter();
+ aesni_setenckey(enc, key, nrounds);
+ fpu_kern_leave();
+}
+
+static void
+aesni_setdeckey_impl(struct aesdec *dec, const uint8_t key[static 16],
+ uint32_t nrounds)
+{
+ struct aesenc enc;
+
+ fpu_kern_enter();
+ aesni_setenckey(&enc, key, nrounds);
+ aesni_enctodec(&enc, dec, nrounds);
+ fpu_kern_leave();
+
+ explicit_memset(&enc, 0, sizeof enc);
+}
+
+static void
+aesni_enc_impl(const struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], uint32_t nrounds)
+{
+
+ fpu_kern_enter();
+ aesni_enc(enc, in, out, nrounds);
+ fpu_kern_leave();
+}
+
+static void
+aesni_dec_impl(const struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], uint32_t nrounds)
+{
+
+ fpu_kern_enter();
+ aesni_dec(dec, in, out, nrounds);
+ fpu_kern_leave();
+}
+
+static void
+aesni_cbc_enc_impl(const struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t iv[static 16],
+ uint32_t nrounds)
+{
+
+ KASSERT(nbytes % 16 == 0);
+
+ fpu_kern_enter();
+ aesni_cbc_enc(enc, in, out, nbytes, iv, nrounds);
+ fpu_kern_leave();
+}
+
+static void
+aesni_cbc_dec_impl(const struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t iv[static 16],
+ uint32_t nrounds)
+{
+
+ KASSERT(nbytes % 16 == 0);
+
+ fpu_kern_enter();
+
+ if (nbytes % 128) {
+ aesni_cbc_dec1(dec, in, out, nbytes % 128, iv, nrounds);
+ in += nbytes % 128;
+ out += nbytes % 128;
+ nbytes -= nbytes % 128;
+ }
+
+ KASSERT(nbytes % 128 == 0);
+ if (nbytes)
+ aesni_cbc_dec8(dec, in, out, nbytes, iv, nrounds);
+
+ fpu_kern_leave();
+}
+
+static void
+aesni_xts_enc_impl(const struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t iv[static 16],
+ uint32_t nrounds)
+{
+
+ KASSERT(nbytes % 16 == 0);
+
+ fpu_kern_enter();
+
+ if (nbytes % 128) {
+ aesni_xts_enc1(enc, in, out, nbytes % 128, iv, nrounds);
+ in += nbytes % 128;
+ out += nbytes % 128;
+ nbytes -= nbytes % 128;
+ }
+
+ KASSERT(nbytes % 128 == 0);
+ if (nbytes)
+ aesni_xts_enc8(enc, in, out, nbytes, iv, nrounds);
+
+ fpu_kern_leave();
+}
+
+static void
+aesni_xts_dec_impl(const struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t iv[static 16],
+ uint32_t nrounds)
+{
+
+ KASSERT(nbytes % 16 == 0);
+
+ fpu_kern_enter();
+
+ if (nbytes % 128) {
+ aesni_xts_dec1(dec, in, out, nbytes % 128, iv, nrounds);
+ in += nbytes % 128;
+ out += nbytes % 128;
+ nbytes -= nbytes % 128;
+ }
+
+ KASSERT(nbytes % 128 == 0);
+ if (nbytes)
+ aesni_xts_dec8(dec, in, out, nbytes, iv, nrounds);
+
+ fpu_kern_leave();
+}
+
+static int
+aesni_xts_update_selftest(void)
+{
+ static const struct {
+ uint8_t in[16], out[16];
+ } cases[] = {
+ {{1}, {2}},
+ {{0,0,0,0x80}, {0,0,0,0,1}},
+ {{0,0,0,0,0,0,0,0x80}, {0,0,0,0,0,0,0,0,1}},
+ {{0,0,0,0x80,0,0,0,0x80}, {0,0,0,0,1,0,0,0,1}},
+ {{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0x80}, {0x87}},
+ {{0,0,0,0,0,0,0,0x80,0,0,0,0,0,0,0,0x80},
+ {0x87,0,0,0,0,0,0,0,1}},
+ {{0,0,0,0x80,0,0,0,0,0,0,0,0,0,0,0,0x80}, {0x87,0,0,0,1}},
+ {{0,0,0,0x80,0,0,0,0x80,0,0,0,0,0,0,0,0x80},
+ {0x87,0,0,0,1,0,0,0,1}},
+ };
+ unsigned i;
+ uint8_t tweak[16];
+
+ for (i = 0; i < sizeof(cases)/sizeof(cases[0]); i++) {
+ aesni_xts_update(cases[i].in, tweak);
+ if (memcmp(tweak, cases[i].out, 16))
+ return -1;
+ }
+
+ /* Success! */
+ return 0;
+}
+
+static int
+aesni_probe(void)
+{
+ int result = 0;
+
+ /* Verify that the CPU supports AES-NI. */
+ if ((cpu_feature[1] & CPUID2_AES) == 0)
+ return -1;
+
+ fpu_kern_enter();
+
+ /* Verify that our XTS tweak update logic works. */
+ if (aesni_xts_update_selftest())
+ result = -1;
+
+ fpu_kern_leave();
+
+ return result;
+}
+
+struct aes_impl aes_ni_impl = {
+ .ai_name = "Intel AES-NI",
+ .ai_probe = aesni_probe,
+ .ai_setenckey = aesni_setenckey_impl,
+ .ai_setdeckey = aesni_setdeckey_impl,
+ .ai_enc = aesni_enc_impl,
+ .ai_dec = aesni_dec_impl,
+ .ai_cbc_enc = aesni_cbc_enc_impl,
+ .ai_cbc_dec = aesni_cbc_dec_impl,
+ .ai_xts_enc = aesni_xts_enc_impl,
+ .ai_xts_dec = aesni_xts_dec_impl,
+};
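(An aside for reviewers: the XTS selftest vectors above exercise the
multiply-by-x step in GF(2^128) mod x^128 + x^7 + x^2 + x + 1, with
byte 0 stored first and bit 0 of byte 0 holding the coefficient of
x^0.  A throwaway scalar model -- not part of the patch, name made
up -- that reproduces all eight cases:

    static void
    xts_update_model(const uint8_t in[16], uint8_t out[16])
    {
            unsigned carry = in[15] >> 7;   /* coefficient of x^127 */
            int i;

            for (i = 15; i > 0; i--)        /* shift left one bit */
                    out[i] = (uint8_t)((in[i] << 1) | (in[i - 1] >> 7));
            out[0] = (uint8_t)(in[0] << 1);
            if (carry)
                    out[0] ^= 0x87; /* x^128 = x^7 + x^2 + x + 1 */
    }

For example, the fifth case has only the x^127 bit set; doubling
carries out the top and reduces to x^7 + x^2 + x + 1 = 0x87 in byte 0,
which is exactly the expected output {0x87}.)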
diff -r 9d6b84c40f65 -r fea7aeacc09c sys/crypto/aes/arch/x86/aes_ni.h
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/arch/x86/aes_ni.h Wed Jun 17 23:05:29 2020 +0000
@@ -0,0 +1,68 @@
+/* $NetBSD$ */
+
+/*-
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _CRYPTO_AES_ARCH_X86_AES_NI_H
+#define _CRYPTO_AES_ARCH_X86_AES_NI_H
+
+#include <sys/types.h>
+
+#include <crypto/aes/aes.h>
+
+/* Assembly routines */
+
+void aesni_setenckey128(struct aesenc *, const uint8_t[static 16]);
+void aesni_setenckey192(struct aesenc *, const uint8_t[static 24]);
+void aesni_setenckey256(struct aesenc *, const uint8_t[static 32]);
+
+void aesni_enctodec(const struct aesenc *, struct aesdec *, uint32_t);
+
+void aesni_enc(const struct aesenc *, const uint8_t[static 16],
+ uint8_t[static 16], uint32_t);
+void aesni_dec(const struct aesdec *, const uint8_t[static 16],
+ uint8_t[static 16], uint32_t);
+
+void aesni_cbc_enc(const struct aesenc *, const uint8_t[static 16],
+ uint8_t[static 16], size_t, uint8_t[static 16], uint32_t);
+void aesni_cbc_dec1(const struct aesdec *, const uint8_t[static 16],
+ uint8_t[static 16], size_t, uint8_t[static 16], uint32_t);
+void aesni_cbc_dec8(const struct aesdec *, const uint8_t[static 128],
+ uint8_t[static 128], size_t, uint8_t[static 16], uint32_t);
+
+void aesni_xts_enc1(const struct aesenc *, const uint8_t[static 16],
+ uint8_t[static 16], size_t, uint8_t[static 16], uint32_t);
+void aesni_xts_enc8(const struct aesenc *, const uint8_t[static 128],
+ uint8_t[static 128], size_t, uint8_t[static 16], uint32_t);
+void aesni_xts_dec1(const struct aesdec *, const uint8_t[static 16],
+ uint8_t[static 16], size_t, uint8_t[static 16], uint32_t);
+void aesni_xts_dec8(const struct aesdec *, const uint8_t[static 128],
+ uint8_t[static 128], size_t, uint8_t[static 16], uint32_t);
+void aesni_xts_update(const uint8_t[static 16], uint8_t[static 16]);
+
+extern struct aes_impl aes_ni_impl;
+
+#endif /* _CRYPTO_AES_ARCH_X86_AES_NI_H */
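(If you want to exercise the assembly below in isolation, a minimal
harness -- hypothetical, userland-style; in the kernel every call must
be bracketed by fpu_kern_enter()/fpu_kern_leave() -- can be checked
against the FIPS-197 Appendix C.1 known answer:

    static const uint8_t key[16] = {
            0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07,
            0x08,0x09,0x0a,0x0b,0x0c,0x0d,0x0e,0x0f,
    };
    static const uint8_t pt[16] = {
            0x00,0x11,0x22,0x33,0x44,0x55,0x66,0x77,
            0x88,0x99,0xaa,0xbb,0xcc,0xdd,0xee,0xff,
    };
    struct aesenc enc;
    uint8_t ct[16];

    aesni_setenckey128(&enc, key);
    aesni_enc(&enc, pt, ct, 10);    /* AES-128: nrounds = 10 */

The expected ciphertext is 69c4e0d8 6a7b0430 d8cdb780 70b4c55a.)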
diff -r 9d6b84c40f65 -r fea7aeacc09c sys/crypto/aes/arch/x86/aesnifunc.S
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/arch/x86/aesnifunc.S Wed Jun 17 23:05:29 2020 +0000
@@ -0,0 +1,1097 @@
+/* $NetBSD$ */
+
+/*-
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <machine/asm.h>
+
+/*
+ * MOVDQA/MOVDQU are Move Double Quadword (Aligned/Unaligned), defined
+ * to operate on integers; MOVAPS/MOVUPS are Move (Aligned/Unaligned)
+ * Packed Single, defined to operate on binary32 floats. They have
+ * exactly the same architectural effects (move a 128-bit quantity from
+ * memory into an xmm register).
+ *
+ * In principle, they might have different microarchitectural effects
+ * so that MOVAPS/MOVUPS might incur a penalty when the register is
+ * later used for integer paths, but in practice they don't. So we use
+ * the one whose instruction encoding is shorter -- MOVAPS/MOVUPS.
+ */
+#define movdqa movaps
+#define movdqu movups
+
+/*
+ * aesni_setenckey128(struct aesenc *enckey@rdi, const uint8_t key[16] @rsi)
+ *
+ * Expand a 16-byte AES-128 key into 10 round keys.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesni_setenckey128)
+ movdqu (%rsi),%xmm0 /* load master key into %xmm0 */
+ movdqa %xmm0,(%rdi) /* store master key as the first round key */
+ lea 0x10(%rdi),%rdi /* advance %rdi to next round key */
+ aeskeygenassist $0x1,%xmm0,%xmm2
+ call aesni_expand128
+ aeskeygenassist $0x2,%xmm0,%xmm2
+ call aesni_expand128
+ aeskeygenassist $0x4,%xmm0,%xmm2
+ call aesni_expand128
+ aeskeygenassist $0x8,%xmm0,%xmm2
+ call aesni_expand128
+ aeskeygenassist $0x10,%xmm0,%xmm2
+ call aesni_expand128
+ aeskeygenassist $0x20,%xmm0,%xmm2
+ call aesni_expand128
+ aeskeygenassist $0x40,%xmm0,%xmm2
+ call aesni_expand128
+ aeskeygenassist $0x80,%xmm0,%xmm2
+ call aesni_expand128
+ aeskeygenassist $0x1b,%xmm0,%xmm2
+ call aesni_expand128
+ aeskeygenassist $0x36,%xmm0,%xmm2
+ call aesni_expand128
+ ret
+END(aesni_setenckey128)
+
+/*
+ * aesni_setenckey192(struct aesenc *enckey@rdi, const uint8_t key[24] @rsi)
+ *
+ * Expand a 24-byte AES-192 key into 12 round keys.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesni_setenckey192)
+ movdqu (%rsi),%xmm0 /* load master key [0:128) into %xmm0 */
+ movq 0x10(%rsi),%xmm1 /* load master key [128:192) into %xmm1 */
+ movdqa %xmm0,(%rdi) /* store master key [0:128) as round key */
+ lea 0x10(%rdi),%rdi /* advance %rdi to next round key */
+ aeskeygenassist $0x1,%xmm1,%xmm2
+ call aesni_expand192a
+ aeskeygenassist $0x2,%xmm0,%xmm2
+ call aesni_expand192b
+ aeskeygenassist $0x4,%xmm1,%xmm2
+ call aesni_expand192a
+ aeskeygenassist $0x8,%xmm0,%xmm2
+ call aesni_expand192b
+ aeskeygenassist $0x10,%xmm1,%xmm2
+ call aesni_expand192a
+ aeskeygenassist $0x20,%xmm0,%xmm2
+ call aesni_expand192b
+ aeskeygenassist $0x40,%xmm1,%xmm2
+ call aesni_expand192a
+ aeskeygenassist $0x80,%xmm0,%xmm2
+ call aesni_expand192b
+ ret
+END(aesni_setenckey192)
+
+/*
+ * aesni_setenckey256(struct aesenc *enckey@rdi, const uint8_t key[32] @rsi)
+ *
+ * Expand a 32-byte AES-256 key into 14 round keys.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesni_setenckey256)
+ movdqu (%rsi),%xmm0 /* load master key [0:128) into %xmm0 */
+ movdqu 0x10(%rsi),%xmm1 /* load master key [128:256) into %xmm1 */
+ movdqa %xmm0,(%rdi) /* store master key [0:128) as round key */
+ movdqa %xmm1,0x10(%rdi) /* store master key [128:256) as round key */
+ lea 0x20(%rdi),%rdi /* advance %rdi to next round key */
+ aeskeygenassist $0x1,%xmm1,%xmm2
+ call aesni_expand256a
+ aeskeygenassist $0x1,%xmm0,%xmm2
+ call aesni_expand256b
+ aeskeygenassist $0x2,%xmm1,%xmm2
+ call aesni_expand256a
+ aeskeygenassist $0x2,%xmm0,%xmm2
+ call aesni_expand256b
+ aeskeygenassist $0x4,%xmm1,%xmm2
+ call aesni_expand256a
+ aeskeygenassist $0x4,%xmm0,%xmm2
+ call aesni_expand256b
+ aeskeygenassist $0x8,%xmm1,%xmm2
+ call aesni_expand256a
+ aeskeygenassist $0x8,%xmm0,%xmm2
+ call aesni_expand256b
+ aeskeygenassist $0x10,%xmm1,%xmm2
+ call aesni_expand256a
+ aeskeygenassist $0x10,%xmm0,%xmm2
+ call aesni_expand256b
+ aeskeygenassist $0x20,%xmm1,%xmm2
+ call aesni_expand256a
+ aeskeygenassist $0x20,%xmm0,%xmm2
+ call aesni_expand256b
+ aeskeygenassist $0x40,%xmm1,%xmm2
+ call aesni_expand256a
+ ret
+END(aesni_setenckey256)
+
+/*
+ * aesni_expand128(uint128_t *rkp@rdi, uint128_t prk@xmm0,
+ * uint128_t keygenassist@xmm2)
+ *
+ * 1. Compute the AES-128 round key using the previous round key.
+ * 2. Store it at *rkp.
+ * 3. Set %xmm0 to it.
+ * 4. Advance %rdi to point at the next round key.
+ *
+ * Internal ABI. On entry:
+ *
+ * %rdi = rkp, pointer to round key to compute
+ * %xmm0 = (prk[0], prk[1], prk[2], prk[3])
+ * %xmm2 = (xxx, xxx, xxx, t = Rot(SubWord(prk[3])) ^ RCON)
+ *
+ * On exit:
+ *
+ * %rdi = &rkp[1], rkp advanced by one round key
+ * %xmm0 = rk, the round key we just computed
+ * %xmm2 = garbage
+ * %xmm4 = garbage
+ * %xmm5 = garbage
+ * %xmm6 = garbage
+ *
+ * Note: %xmm1 is preserved (as are %xmm3 and %xmm7 through %xmm15,
+ * and all other registers).
+ */
+ .text
+ _ALIGN_TEXT
+ .type aesni_expand128,@function
+aesni_expand128:
+ /*
+ * %xmm2 := (%xmm2[3], %xmm2[3], %xmm2[3], %xmm2[3]),
+ * i.e., set each word of %xmm2 to t := Rot(SubWord(prk[3])) ^ RCON.
+ */
+ pshufd $0b11111111,%xmm2,%xmm2
+
+ /*
+ * %xmm4 := (0, prk[0], prk[1], prk[2])
+ * %xmm5 := (0, 0, prk[0], prk[1])
+ * %xmm6 := (0, 0, 0, prk[0])
+ */
+ movdqa %xmm0,%xmm4
+ movdqa %xmm0,%xmm5
+ movdqa %xmm0,%xmm6
+ pslldq $4,%xmm4
+ pslldq $8,%xmm5
+ pslldq $12,%xmm6
+
+ /*
+ * %xmm0 := (rk[0] = t ^ prk[0],
+ * rk[1] = t ^ prk[0] ^ prk[1],
+ * rk[2] = t ^ prk[0] ^ prk[1] ^ prk[2],
+ * rk[3] = t ^ prk[0] ^ prk[1] ^ prk[2] ^ prk[3])
+ */
+ pxor %xmm2,%xmm0
+ pxor %xmm4,%xmm0
+ pxor %xmm5,%xmm0
+ pxor %xmm6,%xmm0
+
+ movdqa %xmm0,(%rdi) /* store round key */
+ lea 0x10(%rdi),%rdi /* advance to next round key address */
+ ret
+END(aesni_expand128)
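(In scalar terms each iteration is just the FIPS-197 AES-128 key
schedule step; a pseudocode-ish C rendering, with SubWord/RotWord as
in the spec and t supplied by AESKEYGENASSIST:

    t = RotWord(SubWord(prk[3])) ^ rcon;
    rk[0] = t ^ prk[0];
    rk[1] = rk[0] ^ prk[1]; /* = t ^ prk[0] ^ prk[1] */
    rk[2] = rk[1] ^ prk[2];
    rk[3] = rk[2] ^ prk[3];

The PSLLDQ/PXOR sequence above computes the same XOR prefix sums on
all four words at once.)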
+
+/*
+ * aesni_expand192a(uint128_t *rkp@rdi, uint128_t prk@xmm0,
+ * uint64_t rklo@xmm1, uint128_t keygenassist@xmm2)
+ *
+ * Set even-numbered AES-192 round key.
+ *
+ * Internal ABI. On entry:
+ *
+ * %rdi = rkp, pointer to two round keys to compute
+ * %xmm0 = (prk[0], prk[1], prk[2], prk[3])
+ * %xmm1 = (rklo[0], rklo[1], xxx, xxx)
+ * %xmm2 = (xxx, t = Rot(SubWord(rklo[1])) ^ RCON, xxx, xxx)
+ *
+ * On exit:
+ *
+ * %rdi = &rkp[2], rkp advanced by two round keys
+ * %xmm0 = nrk, second round key we just computed
+ * %xmm1 = rk, first round key we just computed
+ * %xmm2 = garbage
+ * %xmm4 = garbage
+ * %xmm5 = garbage
+ * %xmm6 = garbage
+ * %xmm7 = garbage
+ */
+ .text
+ _ALIGN_TEXT
+ .type aesni_expand192a,@function
+aesni_expand192a:
+ /*
+ * %xmm2 := (%xmm2[1], %xmm2[1], %xmm2[1], %xmm2[1]),
+ * i.e., set each word of %xmm2 to t := Rot(SubWord(rklo[1])) ^ RCON.
+ */
+ pshufd $0b01010101,%xmm2,%xmm2
+
+ /*
+ * We need to compute:
+ *
+ * rk[0] := rklo[0]
+ * rk[1] := rklo[1]
+ * rk[2] := Rot(Sub(rklo[1])) ^ RCON ^ prk[0]
+ * rk[3] := Rot(Sub(rklo[1])) ^ RCON ^ prk[0] ^ prk[1]
+ * nrk[0] := Rot(Sub(rklo[1])) ^ RCON ^ prk[0] ^ prk[1] ^ prk[2]
+ * nrk[1] := Rot(Sub(rklo[1])) ^ RCON ^ prk[0] ^ ... ^ prk[3]
+ * nrk[2] := Rot(Sub(rklo[1])) ^ RCON ^ prk[0] ^ ... ^ prk[3] ^ rklo[0]
+ * nrk[3] := Rot(Sub(rklo[1])) ^ RCON ^ prk[0] ^ ... ^ prk[3] ^ rklo[0]
+ * ^ rklo[1]
+ */
+
+ /*
+ * %xmm4 := (prk[0], prk[1], prk[2], prk[3])
+ * %xmm5 := (0, prk[0], prk[1], prk[2])
+ * %xmm6 := (0, 0, prk[0], prk[1])
+ * %xmm7 := (0, 0, 0, prk[0])
+ */
+ movdqa %xmm0,%xmm4
+ movdqa %xmm0,%xmm5
+ movdqa %xmm0,%xmm6
+ movdqa %xmm0,%xmm7
+ pslldq $4,%xmm5
+ pslldq $8,%xmm6
+ pslldq $12,%xmm7
+
+ /* %xmm4 := (rk[2], rk[3], nrk[0], nrk[1]) */
+ pxor %xmm2,%xmm4
+ pxor %xmm5,%xmm4
+ pxor %xmm6,%xmm4
+ pxor %xmm7,%xmm4
+
+ /*
+ * At this point, rk is split across %xmm1 (rk[0],rk[1],...) and
+ * %xmm4 (rk[2],rk[3],...); nrk is in %xmm4 (...,nrk[0],nrk[1]);
+ * and we have yet to compute nrk[2] or nrk[3], which requires
+ * rklo[0] and rklo[1] in %xmm1 (rklo[0], rklo[1], ...). We need
+ * nrk to end up in %xmm0 at the end, so gather rk into %xmm1 and
+ * nrk into %xmm0.
+ */
+
+ /* %xmm0 := (nrk[0], nrk[1], nrk[1], nrk[1]) */
+ pshufd $0b11111110,%xmm4,%xmm0
+
+ /*
+ * %xmm6 := (0, 0, rklo[0], rklo[1])
+ * %xmm7 := (0, 0, 0, rklo[0])
+ */
+ movdqa %xmm1,%xmm6
+ movdqa %xmm1,%xmm7
+
+ pslldq $8,%xmm6
+ pslldq $12,%xmm7
+
+ /*
+ * %xmm0 := (nrk[0],
+ * nrk[1],
+ * nrk[2] = nrk[1] ^ rklo[0],
+ * nrk[3] = nrk[1] ^ rklo[0] ^ rklo[1])
+ */
+ pxor %xmm6,%xmm0
+ pxor %xmm7,%xmm0
+
+ /* %xmm1 := (rk[0], rk[1], rk[2], rk[3]) */
+ shufps $0b01000100,%xmm4,%xmm1
+
+ movdqa %xmm1,(%rdi) /* store round key */
+ movdqa %xmm0,0x10(%rdi) /* store next round key */
+ lea 0x20(%rdi),%rdi /* advance two round keys */
+ ret
+END(aesni_expand192a)
+
+/*
+ * aesni_expand192b(uint128_t *roundkey@rdi, uint128_t prk@xmm0,
+ * uint128_t keygenassist@xmm2)
+ *
+ * Set odd-numbered AES-192 round key.
+ *
+ * Internal ABI. On entry:
+ *
+ * %rdi = rkp, pointer to round key to compute
+ * %xmm0 = (prk[0], prk[1], prk[2], prk[3])
+ * %xmm1 = (xxx, xxx, pprk[2], pprk[3])
+ * %xmm2 = (xxx, xxx, xxx, t = Rot(Sub(prk[3])) ^ RCON)
+ *
+ * On exit:
+ *
+ * %rdi = &rkp[1], rkp advanced by one round key
+ * %xmm0 = rk, the round key we just computed
+ * %xmm1 = (nrk[0], nrk[1], xxx, xxx), half of next round key
+ * %xmm2 = garbage
+ * %xmm4 = garbage
+ * %xmm5 = garbage
+ * %xmm6 = garbage
+ * %xmm7 = garbage
+ */
+ .text
+ _ALIGN_TEXT
+ .type aesni_expand192b,@function
+aesni_expand192b:
+ /*
+ * %xmm2 := (%xmm2[3], %xmm2[3], %xmm2[3], %xmm2[3]),
+ * i.e., set each word of %xmm2 to t := Rot(Sub(prk[3])) ^ RCON.
+ */
+ pshufd $0b11111111,%xmm2,%xmm2
+
+ /*
+ * We need to compute:
+ *
+ * rk[0] := Rot(Sub(prk[3])) ^ RCON ^ pprk[2]
+ * rk[1] := Rot(Sub(prk[3])) ^ RCON ^ pprk[2] ^ pprk[3]
+ * rk[2] := Rot(Sub(prk[3])) ^ RCON ^ pprk[2] ^ pprk[3] ^ prk[0]
+ * rk[3] := Rot(Sub(prk[3])) ^ RCON ^ pprk[2] ^ pprk[3] ^ prk[0]
+ * ^ prk[1]
+ * nrk[0] := Rot(Sub(prk[3])) ^ RCON ^ pprk[2] ^ pprk[3] ^ prk[0]
+ * ^ prk[1] ^ prk[2]
+ * nrk[1] := Rot(Sub(prk[3])) ^ RCON ^ pprk[2] ^ pprk[3] ^ prk[0]
+ * ^ prk[1] ^ prk[2] ^ prk[3]
+ */
+
+ /* %xmm1 := (pprk[2], pprk[3], prk[0], prk[1]) */
+ shufps $0b01001110,%xmm0,%xmm1
+
+ /*
+ * %xmm5 := (0, pprk[2], pprk[3], prk[0])
+ * %xmm6 := (0, 0, pprk[2], pprk[3])
+ * %xmm7 := (0, 0, 0, pprk[2])
+ */
+ movdqa %xmm1,%xmm5
+ movdqa %xmm1,%xmm6
+ movdqa %xmm1,%xmm7
+ pslldq $4,%xmm5
+ pslldq $8,%xmm6
+ pslldq $12,%xmm7
+
+ /* %xmm1 := (rk[0], rk[1], rk[2], rk[3]) */
+ pxor %xmm2,%xmm1
+ pxor %xmm5,%xmm1
+ pxor %xmm6,%xmm1
+ pxor %xmm7,%xmm1
+
+ /* %xmm4 := (prk[2], prk[3], xxx, xxx) */
+ pshufd $0b00001110,%xmm0,%xmm4
+
+ /* %xmm5 := (0, prk[2], xxx, xxx) */
+ movdqa %xmm4,%xmm5
+ pslldq $4,%xmm5
+
+ /* %xmm0 := (rk[0], rk[1], rk[2], rk[3]) */
+ movdqa %xmm1,%xmm0
+
+ /* %xmm1 := (rk[3], rk[3], xxx, xxx) */
+ shufps $0b00001111,%xmm1,%xmm1
+
+ /*
+ * %xmm1 := (nrk[0] = rk[3] ^ prk[2],
+ * nrk[1] = rk[3] ^ prk[2] ^ prk[3],
+ * xxx,
+ * xxx)
+ */
+ pxor %xmm4,%xmm1
+ pxor %xmm5,%xmm1
+
+ movdqa %xmm0,(%rdi) /* store round key */
+ lea 0x10(%rdi),%rdi /* advance to next round key address */
+ ret
+END(aesni_expand192b)
+
+/*
+ * aesni_expand256a(uint128_t *rkp@rdi, uint128_t pprk@xmm0,
+ * uint128_t prk@xmm1, uint128_t keygenassist@xmm2)
+ *
+ * Set even-numbered AES-256 round key.
+ *
+ * Internal ABI. On entry:
+ *
+ * %rdi = rkp, pointer to round key to compute
+ * %xmm0 = (pprk[0], pprk[1], pprk[2], pprk[3])
+ * %xmm1 = (prk[0], prk[1], prk[2], prk[3])
+ * %xmm2 = (xxx, xxx, xxx, t = Rot(SubWord(prk[3])) ^ RCON)
+ *
+ * On exit:
+ *
+ * %rdi = &rkp[1], rkp advanced by one round key
+ * %xmm0 = rk, the round key we just computed
+ * %xmm1 = prk, previous round key, preserved from entry
+ * %xmm2 = garbage
+ * %xmm4 = garbage
+ * %xmm5 = garbage
+ * %xmm6 = garbage
+ *
+ * The computation turns out to be the same as for AES-128; the
+ * previous round key does not figure into it, only the
+ * previous-previous round key.
+ */
+ aesni_expand256a = aesni_expand128
+
+/*
+ * aesni_expand256b(uint128_t *rkp@rdi, uint128_t prk@xmm0,
+ * uint128_t pprk@xmm1, uint128_t keygenassist@xmm2)
+ *
+ * Set odd-numbered AES-256 round key.
+ *
+ * Internal ABI. On entry:
+ *
+ * %rdi = rkp, pointer to round key to compute
+ * %xmm0 = (prk[0], prk[1], prk[2], prk[3])
+ * %xmm1 = (pprk[0], pprk[1], pprk[2], pprk[3])
+ * %xmm2 = (xxx, xxx, t = Sub(prk[3]), xxx)
+ *
+ * On exit:
+ *
+ * %rdi = &rkp[1], rkp advanced by one round key
+ * %xmm0 = prk, previous round key, preserved from entry
+ * %xmm1 = rk, the round key we just computed
+ * %xmm2 = garbage
+ * %xmm4 = garbage
+ * %xmm5 = garbage
+ * %xmm6 = garbage
+ */
+ .text
+ _ALIGN_TEXT
+ .type aesni_expand256b,@function
+aesni_expand256b:
+ /*
+ * %xmm2 := (%xmm2[2], %xmm2[2], %xmm2[2], %xmm2[2]),
+ * i.e., set each word of %xmm2 to t := Sub(prk[3]).
+ */
+ pshufd $0b10101010,%xmm2,%xmm2
+
+ /*
+ * %xmm4 := (0, pprk[0], pprk[1], pprk[2])
+ * %xmm5 := (0, 0, pprk[0], pprk[1])
+ * %xmm6 := (0, 0, 0, pprk[0])
+ */
+ movdqa %xmm1,%xmm4
+ movdqa %xmm1,%xmm5
+ movdqa %xmm1,%xmm6
+ pslldq $4,%xmm4
+ pslldq $8,%xmm5
+ pslldq $12,%xmm6
+
+ /*
+ * %xmm1 := (rk[0] = t ^ pprk[0],
+ * rk[1] = t ^ pprk[0] ^ pprk[1],
+ * rk[2] = t ^ pprk[0] ^ pprk[1] ^ pprk[2],
+ * rk[3] = t ^ pprk[0] ^ pprk[1] ^ pprk[2] ^ pprk[3])
+ */
+ pxor %xmm2,%xmm1
+ pxor %xmm4,%xmm1
+ pxor %xmm5,%xmm1
+ pxor %xmm6,%xmm1
+
+ movdqa %xmm1,(%rdi) /* store round key */
+ lea 0x10(%rdi),%rdi /* advance to next round key address */
+ ret
+END(aesni_expand256b)
+
+/*
+ * aesni_enctodec(const struct aesenc *enckey@rdi, struct aesdec *deckey@rsi,
+ * uint32_t nrounds@rdx)
+ *
+ * Convert AES encryption round keys to AES decryption round keys.
+ * `rounds' must be between 10 and 14.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesni_enctodec)
+ shl $4,%edx /* rdx := byte offset of last round key */
+ movdqa (%rdi,%rdx),%xmm0 /* load last round key */
+ movdqa %xmm0,(%rsi) /* store last round key verbatim */
+1: sub $0x10,%rdx /* advance to next round key */
+ lea 0x10(%rsi),%rsi
+ jz 2f /* stop if this is the last one */
+ movdqa (%rdi,%rdx),%xmm0 /* load round key */
+ aesimc %xmm0,%xmm0 /* convert encryption to decryption */
+ movdqa %xmm0,(%rsi) /* store round key */
+ jmp 1b
+2: movdqa (%rdi),%xmm0 /* load first round key */
+ movdqa %xmm0,(%rsi) /* store first round key verbatim */
+ ret
+END(aesni_enctodec)
+
+/*
+ * aesni_enc(const struct aesenc *enckey@rdi, const uint8_t in[16] @rsi,
+ * uint8_t out[16] @rdx, uint32_t nrounds@ecx)
+ *
+ * Encrypt a single block.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesni_enc)
+ movdqu (%rsi),%xmm0
+ call aesni_enc1
+ movdqu %xmm0,(%rdx)
+ ret
+END(aesni_enc)
+
+/*
+ * aesni_dec(const struct aesdec *deckey@rdi, const uint8_t in[16] @rsi,
+ * uint8_t out[16] @rdx, uint32_t nrounds@ecx)
+ *
+ * Decrypt a single block.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesni_dec)
+ movdqu (%rsi),%xmm0
+ call aesni_dec1
+ movdqu %xmm0,(%rdx)
+ ret
+END(aesni_dec)
+
+/*
+ * aesni_cbc_enc(const struct aesenc *enckey@rdi, const uint8_t *in@rsi,
+ * uint8_t *out@rdx, size_t nbytes@rcx, uint8_t iv[16] @r8,
+ * uint32_t nrounds@r9d)
+ *
+ * Encrypt a contiguous sequence of blocks with AES-CBC.
+ *
+ * nbytes must be an integral multiple of 16.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesni_cbc_enc)
+ cmp $0,%rcx
+ jz 2f
+ mov %rcx,%r10 /* r10 := nbytes */
+ movdqu (%r8),%xmm0 /* xmm0 := chaining value */
+1: movdqu (%rsi),%xmm1 /* xmm1 := plaintext block */
+ lea 0x10(%rsi),%rsi
+ pxor %xmm1,%xmm0 /* xmm0 := cv ^ ptxt */
+ mov %r9d,%ecx /* ecx := nrounds */
+ call aesni_enc1 /* xmm0 := ciphertext block */
+ movdqu %xmm0,(%rdx)
+ lea 0x10(%rdx),%rdx
+ sub $0x10,%r10
+ jnz 1b /* repeat if r10 is nonzero */
+ movdqu %xmm0,(%r8) /* store chaining value */
+2: ret
+END(aesni_cbc_enc)
+
+/*
+ * aesni_cbc_dec1(const struct aesdec *deckey@rdi, const uint8_t *in@rsi,
+ * uint8_t *out@rdx, size_t nbytes@rcx, uint8_t iv[16] @r8,
+ * uint32_t nrounds@r9)
+ *
+ * Decrypt a contiguous sequence of blocks with AES-CBC.
+ *
+ * nbytes must be a positive integral multiple of 16. This routine
+ * is not vectorized; use aesni_cbc_dec8 for >=8 blocks at once.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesni_cbc_dec1)
+ push %rbp /* create stack frame uint128[1] */
+ mov %rsp,%rbp
+ sub $0x10,%rsp
+ movdqu (%r8),%xmm8 /* xmm8 := iv */
+ movdqa %xmm8,(%rsp) /* save iv */
+ mov %rcx,%r10 /* r10 := nbytes */
+ movdqu -0x10(%rsi,%r10),%xmm0 /* xmm0 := last ciphertext block */
+ movdqu %xmm0,(%r8) /* update iv */
+1: mov %r9d,%ecx /* ecx := nrounds */
+ call aesni_dec1 /* xmm0 := cv ^ ptxt */
+ sub $0x10,%r10
+ jz 2f /* first block if r10 is now zero */
+ movdqu -0x10(%rsi,%r10),%xmm8 /* xmm8 := chaining value */
+ pxor %xmm8,%xmm0 /* xmm0 := ptxt */
+ movdqu %xmm0,(%rdx,%r10) /* store plaintext block */
+ movdqa %xmm8,%xmm0 /* move cv = ciphertext block */
+ jmp 1b
+2: pxor (%rsp),%xmm0 /* xmm0 := ptxt */
+ movdqu %xmm0,(%rdx) /* store first plaintext block */
+ leave
+ ret
+END(aesni_cbc_dec1)
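(For reference, this is ordinary CBC decryption run back to front:
the chaining value for block i is just ciphertext block i-1, still
intact in the input buffer, so nothing needs saving except the
caller's IV for block 0.  A scalar sketch, with aes_dec1() as a
made-up stand-in for one-block AES decryption:

    for (i = n; i-- > 0;) {
            aes_dec1(dec, in + 16*i, b, nrounds);   /* b := D(ct[i]) */
            for (j = 0; j < 16; j++)        /* cv is ct[i-1], or iv */
                    out[16*i + j] = b[j] ^
                        (i > 0 ? in[16*(i - 1) + j] : iv[j]);
    }

plus storing ct[n-1] back through the iv pointer as the next chaining
value, which the assembly does up front.)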
+
+/*
+ * aesni_cbc_dec8(const struct aesdec *deckey@rdi, const uint8_t *in@rsi,
+ * uint8_t *out@rdx, size_t nbytes@rcx, uint8_t iv[16] @r8,
+ * uint32_t nrounds@r9)
+ *
+ * Decrypt a contiguous sequence of 8-block units with AES-CBC.
+ *
+ * nbytes must be a positive integral multiple of 128.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesni_cbc_dec8)
+ push %rbp /* create stack frame uint128[1] */
+ mov %rsp,%rbp
+ sub $0x10,%rsp
+ movdqu (%r8),%xmm8 /* xmm8 := iv */
+ movdqa %xmm8,(%rsp) /* save iv */
+ mov %rcx,%r10 /* r10 := nbytes */
+ movdqu -0x10(%rsi,%r10),%xmm7 /* xmm7 := ciphertext block[n-1] */
+ movdqu %xmm7,(%r8) /* update iv */
+1: movdqu -0x20(%rsi,%r10),%xmm6 /* xmm6 := ciphertext block[n-2] */
+ movdqu -0x30(%rsi,%r10),%xmm5 /* xmm5 := ciphertext block[n-3] */
+ movdqu -0x40(%rsi,%r10),%xmm4 /* xmm4 := ciphertext block[n-4] */
+ movdqu -0x50(%rsi,%r10),%xmm3 /* xmm3 := ciphertext block[n-5] */
+ movdqu -0x60(%rsi,%r10),%xmm2 /* xmm2 := ciphertext block[n-6] */
+ movdqu -0x70(%rsi,%r10),%xmm1 /* xmm1 := ciphertext block[n-7] */
+ movdqu -0x80(%rsi,%r10),%xmm0 /* xmm0 := ciphertext block[n-8] */
+ movdqa %xmm6,%xmm15 /* xmm[8+i] := cv[i], 0<i<8 */
+ movdqa %xmm5,%xmm14
+ movdqa %xmm4,%xmm13
+ movdqa %xmm3,%xmm12
+ movdqa %xmm2,%xmm11
+ movdqa %xmm1,%xmm10
+ movdqa %xmm0,%xmm9
+ mov %r9d,%ecx /* ecx := nrounds */
+ call aesni_dec8 /* xmm[i] := cv[i] ^ ptxt[i], 0<=i<8 */
+ pxor %xmm15,%xmm7 /* xmm[i] := ptxt[i], 0<i<8 */
+ pxor %xmm14,%xmm6
+ pxor %xmm13,%xmm5
+ pxor %xmm12,%xmm4
+ pxor %xmm11,%xmm3
+ pxor %xmm10,%xmm2
+ pxor %xmm9,%xmm1
+ movdqu %xmm7,-0x10(%rdx,%r10) /* store plaintext blocks */
+ movdqu %xmm6,-0x20(%rdx,%r10)
+ movdqu %xmm5,-0x30(%rdx,%r10)
+ movdqu %xmm4,-0x40(%rdx,%r10)
+ movdqu %xmm3,-0x50(%rdx,%r10)
+ movdqu %xmm2,-0x60(%rdx,%r10)
+ movdqu %xmm1,-0x70(%rdx,%r10)
+ sub $0x80,%r10
+ jz 2f /* first block if r10 is now zero */
+ movdqu -0x10(%rsi,%r10),%xmm7 /* xmm7 := cv[0] */
+ pxor %xmm7,%xmm0 /* xmm0 := ptxt[0] */
+ movdqu %xmm0,(%rdx,%r10) /* store plaintext block */
+ jmp 1b
+2: pxor (%rsp),%xmm0 /* xmm0 := ptxt[0] */
+ movdqu %xmm0,(%rdx) /* store first plaintext block */
+ leave
+ ret
+END(aesni_cbc_dec8)
+
+/*
+ * aesni_xts_enc1(const struct aesenc *enckey@rdi, const uint8_t *in@rsi,
+ * uint8_t *out@rdx, size_t nbytes@rcx, uint8_t tweak[16] @r8,
+ * uint32_t nrounds@r9d)
+ *
+ * Encrypt a contiguous sequence of blocks with AES-XTS.
+ *
+ * nbytes must be a positive integral multiple of 16. This routine
+ * is not vectorized; use aesni_xts_enc8 for >=8 blocks at once.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesni_xts_enc1)
+ mov %rcx,%r10 /* r10 := nbytes */
+ movdqu (%r8),%xmm9 /* xmm9 := tweak */
+1: movdqu (%rsi),%xmm0 /* xmm0 := ptxt */
+ lea 0x10(%rsi),%rsi /* advance rsi to next block */
+ pxor %xmm9,%xmm0 /* xmm0 := ptxt ^ tweak */
+ mov %r9d,%ecx /* ecx := nrounds */
+ call aesni_enc1 /* xmm0 := AES(ptxt ^ tweak) */
+ pxor %xmm9,%xmm0 /* xmm0 := AES(ptxt ^ tweak) ^ tweak */
+ movdqu %xmm0,(%rdx) /* store ciphertext block */
+ lea 0x10(%rdx),%rdx /* advance rdx to next block */
+ call aesni_xts_mulx /* xmm9 *= x; trash xmm0 */
+ sub $0x10,%r10
+ jnz 1b /* repeat if more blocks */
+ movdqu %xmm9,(%r8) /* update tweak */
+ ret
+END(aesni_xts_enc1)
+
+/*
+ * aesni_xts_enc8(const struct aesenc *enckey@rdi, const uint8_t *in@rsi,
+ * uint8_t *out@rdx, size_t nbytes@rcx, uint8_t tweak[16] @r8,
+ * uint32_t nrounds@r9d)
+ *
+ * Encrypt a contiguous sequence of blocks with AES-XTS.
+ *
+ * nbytes must be a positive integral multiple of 128.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesni_xts_enc8)
+ push %rbp /* create stack frame uint128[2] */
+ mov %rsp,%rbp
+ sub $0x20,%rsp
+ mov %rcx,%r10 /* r10 := nbytes */
+ movdqu (%r8),%xmm9 /* xmm9 := tweak[0] */
+1: movdqa %xmm9,(%rsp) /* save tweak[0] */
+ call aesni_xts_mulx /* xmm9 := tweak[1] */
+ movdqa %xmm9,0x10(%rsp) /* save tweak[1] */
+ call aesni_xts_mulx /* xmm9 := tweak[2] */
+ movdqa %xmm9,%xmm10 /* xmm10 := tweak[2] */
+ call aesni_xts_mulx /* xmm9 := tweak[3] */
+ movdqa %xmm9,%xmm11 /* xmm11 := tweak[3] */
+ call aesni_xts_mulx /* xmm9 := tweak[4] */
+ movdqa %xmm9,%xmm12 /* xmm12 := tweak[4] */
+ call aesni_xts_mulx /* xmm9 := tweak[5] */
+ movdqa %xmm9,%xmm13 /* xmm13 := tweak[5] */
+ call aesni_xts_mulx /* xmm9 := tweak[6] */
+ movdqa %xmm9,%xmm14 /* xmm14 := tweak[6] */
+ call aesni_xts_mulx /* xmm9 := tweak[7] */
+ movdqa %xmm9,%xmm15 /* xmm15 := tweak[7] */
+ movdqu (%rsi),%xmm0 /* xmm[i] := ptxt[i] */
+ movdqu 0x10(%rsi),%xmm1
+ movdqu 0x20(%rsi),%xmm2
+ movdqu 0x30(%rsi),%xmm3
+ movdqu 0x40(%rsi),%xmm4
+ movdqu 0x50(%rsi),%xmm5
+ movdqu 0x60(%rsi),%xmm6
+ movdqu 0x70(%rsi),%xmm7
+ lea 0x80(%rsi),%rsi /* advance rsi to next block group */
+ pxor (%rsp),%xmm0 /* xmm[i] := ptxt[i] ^ tweak[i] */
+ pxor 0x10(%rsp),%xmm1
+ pxor %xmm10,%xmm2
+ pxor %xmm11,%xmm3
+ pxor %xmm12,%xmm4
+ pxor %xmm13,%xmm5
+ pxor %xmm14,%xmm6
+ pxor %xmm15,%xmm7
+ mov %r9d,%ecx /* ecx := nrounds */
+ call aesni_enc8 /* xmm[i] := AES(ptxt[i] ^ tweak[i]) */
+ pxor (%rsp),%xmm0 /* xmm[i] := AES(...) ^ tweak[i] */
+ pxor 0x10(%rsp),%xmm1
+ pxor %xmm10,%xmm2
+ pxor %xmm11,%xmm3
+ pxor %xmm12,%xmm4
+ pxor %xmm13,%xmm5
+ pxor %xmm14,%xmm6
+ pxor %xmm15,%xmm7
+ movdqu %xmm0,(%rdx) /* store ciphertext blocks */
+ movdqu %xmm1,0x10(%rdx)
+ movdqu %xmm2,0x20(%rdx)
+ movdqu %xmm3,0x30(%rdx)
+ movdqu %xmm4,0x40(%rdx)
+ movdqu %xmm5,0x50(%rdx)
+ movdqu %xmm6,0x60(%rdx)
+ movdqu %xmm7,0x70(%rdx)
+ lea 0x80(%rdx),%rdx /* advance rdx to next block group */
+ movdqa %xmm15,%xmm9 /* xmm9 := tweak[7] */
+ call aesni_xts_mulx /* xmm9 := tweak[8] */
+ sub $0x80,%r10
+ jnz 1b /* repeat if more block groups */
+ movdqu %xmm9,(%r8) /* update tweak */
+ leave
+ ret
+END(aesni_xts_enc8)
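(The eight-block loop is a straight vectorization of the XTS data
path; in scalar form, with aes_enc1() and xts_mul_x() as illustrative
stand-ins rather than real kernel interfaces:

    while (nblocks--) {
            for (j = 0; j < 16; j++)        /* blind with tweak */
                    b[j] = *in++ ^ tweak[j];
            aes_enc1(enc, b, b, nrounds);   /* b := E(pt ^ T) */
            for (j = 0; j < 16; j++)        /* unblind with tweak */
                    *out++ = b[j] ^ tweak[j];
            xts_mul_x(tweak);       /* T := T*x for the next block */
    }

The vector version just computes eight tweaks up front, spilling two
to the stack for lack of registers, and writes the ninth back through
the tweak pointer for the next call.)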
+
+/*
+ * aesni_xts_dec1(const struct aesdec *deckey@rdi, const uint8_t *in@rsi,
+ * uint8_t *out@rdx, size_t nbytes@rcx, uint8_t tweak[16] @r8,
+ * uint32_t nrounds@r9d)
+ *
+ * Decrypt a contiguous sequence of blocks with AES-XTS.
+ *
+ * nbytes must be a positive integral multiple of 16. This routine
+ * is not vectorized; use aesni_xts_dec8 for >=8 blocks at once.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesni_xts_dec1)
+ mov %rcx,%r10 /* r10 := nbytes */
+ movdqu (%r8),%xmm9 /* xmm9 := tweak */
+1: movdqu (%rsi),%xmm0 /* xmm0 := ctxt */
+ lea 0x10(%rsi),%rsi /* advance rsi to next block */
+ pxor %xmm9,%xmm0 /* xmm0 := ctxt ^ tweak */
+ mov %r9d,%ecx /* ecx := nrounds */
+ call aesni_dec1 /* xmm0 := AES^-1(ctxt ^ tweak) */
+ pxor %xmm9,%xmm0 /* xmm0 := AES^-1(ctxt ^ tweak) ^ tweak */
+ movdqu %xmm0,(%rdx) /* store plaintext block */
+ lea 0x10(%rdx),%rdx /* advance rdx to next block */
+ call aesni_xts_mulx /* xmm9 *= x; trash xmm0 */
+ sub $0x10,%r10
+ jnz 1b /* repeat if more blocks */
+ movdqu %xmm9,(%r8) /* update tweak */
+ ret
+END(aesni_xts_dec1)
+
+/*
+ * aesni_xts_dec8(const struct aesdec *deckey@rdi, const uint8_t *in@rsi,
+ * uint8_t *out@rdx, size_t nbytes@rcx, uint8_t tweak[16] @r8,
+ * uint32_t nrounds@r9d)
+ *
+ * Decrypt a contiguous sequence of blocks with AES-XTS.
+ *
+ * nbytes must be a positive integral multiple of 128.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesni_xts_dec8)
+ push %rbp /* create stack frame uint128[2] */
+ mov %rsp,%rbp
+ sub $0x20,%rsp
+ mov %rcx,%r10 /* r10 := nbytes */
+ movdqu (%r8),%xmm9 /* xmm9 := tweak[0] */
+1: movdqa %xmm9,(%rsp) /* save tweak[0] */
+ call aesni_xts_mulx /* xmm9 := tweak[1] */
+ movdqa %xmm9,0x10(%rsp) /* save tweak[1] */
+ call aesni_xts_mulx /* xmm9 := tweak[2] */
+ movdqa %xmm9,%xmm10 /* xmm10 := tweak[2] */
+ call aesni_xts_mulx /* xmm9 := tweak[3] */
+ movdqa %xmm9,%xmm11 /* xmm11 := tweak[3] */
+ call aesni_xts_mulx /* xmm9 := tweak[4] */
+ movdqa %xmm9,%xmm12 /* xmm12 := tweak[4] */
+ call aesni_xts_mulx /* xmm9 := tweak[5] */
+ movdqa %xmm9,%xmm13 /* xmm13 := tweak[5] */
+ call aesni_xts_mulx /* xmm9 := tweak[6] */
+ movdqa %xmm9,%xmm14 /* xmm14 := tweak[6] */
+ call aesni_xts_mulx /* xmm9 := tweak[7] */
+ movdqa %xmm9,%xmm15 /* xmm15 := tweak[7] */
+ movdqu (%rsi),%xmm0 /* xmm[i] := ctxt[i] */
+ movdqu 0x10(%rsi),%xmm1
+ movdqu 0x20(%rsi),%xmm2
+ movdqu 0x30(%rsi),%xmm3
+ movdqu 0x40(%rsi),%xmm4
+ movdqu 0x50(%rsi),%xmm5
+ movdqu 0x60(%rsi),%xmm6
+ movdqu 0x70(%rsi),%xmm7
+ lea 0x80(%rsi),%rsi /* advance rsi to next block group */
+ pxor (%rsp),%xmm0 /* xmm[i] := ctxt[i] ^ tweak[i] */
+ pxor 0x10(%rsp),%xmm1
+ pxor %xmm10,%xmm2
+ pxor %xmm11,%xmm3
+ pxor %xmm12,%xmm4
+ pxor %xmm13,%xmm5
+ pxor %xmm14,%xmm6
+ pxor %xmm15,%xmm7
+ mov %r9d,%ecx /* ecx := nrounds */
+ call aesni_dec8 /* xmm[i] := AES^-1(ctxt[i] ^ tweak[i]) */
+ pxor (%rsp),%xmm0 /* xmm[i] := AES^-1(...) ^ tweak[i] */
+ pxor 0x10(%rsp),%xmm1
+ pxor %xmm10,%xmm2
+ pxor %xmm11,%xmm3
+ pxor %xmm12,%xmm4
+ pxor %xmm13,%xmm5
+ pxor %xmm14,%xmm6
+ pxor %xmm15,%xmm7
+ movdqu %xmm0,(%rdx) /* store plaintext blocks */
+ movdqu %xmm1,0x10(%rdx)
+ movdqu %xmm2,0x20(%rdx)
+ movdqu %xmm3,0x30(%rdx)
+ movdqu %xmm4,0x40(%rdx)
+ movdqu %xmm5,0x50(%rdx)
+ movdqu %xmm6,0x60(%rdx)
+ movdqu %xmm7,0x70(%rdx)
+ lea 0x80(%rdx),%rdx /* advance rdx to next block group */
+ movdqa %xmm15,%xmm9 /* xmm9 := tweak[7] */
+ call aesni_xts_mulx /* xmm9 := tweak[8] */
+ sub $0x80,%r10
+ jnz 1b /* repeat if more block groups */
+ movdqu %xmm9,(%r8) /* update tweak */
+ leave
+ ret
+END(aesni_xts_dec8)
+
+/*
+ * aesni_xts_mulx(tweak@xmm9)
+ *
+ * Multiply xmm9 by x, modulo x^128 + x^7 + x^2 + x + 1, in place.
+ * Uses %xmm0 as temporary.
+ */
+ .text
+ _ALIGN_TEXT
+ .type aesni_xts_mulx,@function
+aesni_xts_mulx:
+ /*
+ * Simultaneously determine
+ * (a) whether the high bit of the low quadword must be
+ * shifted into the low bit of the high quadword, and
+ * (b) whether the high bit of the high quadword must be
+ * carried into x^128 = x^7 + x^2 + x + 1.
+ */
+ pxor %xmm0,%xmm0 /* xmm0 := 0 */
+ pcmpgtq %xmm9,%xmm0 /* xmm0[i] := -1 if 0 > xmm9[i], 0 otherwise */
+ pshufd $0b01001110,%xmm0,%xmm0 /* swap halves of xmm0 */
+ pand xtscarry,%xmm0 /* copy xtscarry according to mask */
+ psllq $1,%xmm9 /* shift */
+ pxor %xmm0,%xmm9 /* incorporate (a) and (b) */
+ ret
+END(aesni_xts_mulx)
+
+ .section .rodata
+ .align 16
+ .type xtscarry,@object
+xtscarry:
+ .byte 0x87,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0
+END(xtscarry)
+
+/*
+ * aesni_xts_update(const uint8_t in[16] @rdi, uint8_t out[16] @rsi)
+ *
+ * Update an AES-XTS tweak.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesni_xts_update)
+ movdqu (%rdi),%xmm9
+ call aesni_xts_mulx
+ movdqu %xmm9,(%rsi)
+ ret
+END(aesni_xts_update)
+
+/*
+ * aesni_enc1(const struct aesenc *enckey@rdi, uint128_t block@xmm0,
+ * uint32_t nrounds@ecx)
+ *
+ * Encrypt a single AES block in %xmm0.
+ *
+ * Internal ABI. Uses %rax and %xmm8 as temporaries. Destroys %ecx.
+ */
+ .text
+ _ALIGN_TEXT
+ .type aesni_enc1,@function
+aesni_enc1:
+ pxor (%rdi),%xmm0 /* xor in first round key */
+ shl $4,%ecx /* ecx := total byte size of round keys */
+ lea 0x10(%rdi,%rcx),%rax /* rax := end of round key array */
+ neg %rcx /* rcx := byte offset of round key from end */
+1: movdqa (%rax,%rcx),%xmm8 /* load round key */
+ add $0x10,%rcx
+ jz 2f /* stop if this is the last one */
+ aesenc %xmm8,%xmm0
+ jmp 1b
+2: aesenclast %xmm8,%xmm0
+ ret
+END(aesni_enc1)
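(Equivalently, over the nrounds+1 round keys, with AESENC/AESENCLAST
standing for the instructions of the same name:

    state ^= rk[0];                         /* whitening */
    for (i = 1; i < nrounds; i++)
            state = AESENC(state, rk[i]);   /* full round */
    state = AESENCLAST(state, rk[nrounds]); /* last round, no MixColumns */

The negative index just lets the loop termination test double as the
last-round-key test.)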
+
+/*
+ * aesni_enc8(const struct aesenc *enckey@rdi, uint128_t block0@xmm0, ...,
+ * block7@xmm7, uint32_t nrounds@ecx)
+ *
+ * Encrypt eight AES blocks in %xmm0 through %xmm7 in parallel.
+ *
+ * Internal ABI. Uses %rax and %xmm8 as temporaries. Destroys %ecx.
+ */
+ .text
+ _ALIGN_TEXT
+ .type aesni_enc8,@function
+aesni_enc8:
+ movdqa (%rdi),%xmm8 /* xor in first round key */
+ pxor %xmm8,%xmm0
+ pxor %xmm8,%xmm1
+ pxor %xmm8,%xmm2
+ pxor %xmm8,%xmm3
+ pxor %xmm8,%xmm4
+ pxor %xmm8,%xmm5
+ pxor %xmm8,%xmm6
+ pxor %xmm8,%xmm7
+ shl $4,%ecx /* ecx := total byte size of round keys */
+ lea 0x10(%rdi,%rcx),%rax /* rax := end of round key array */
+ neg %rcx /* rcx := byte offset of round key from end */
+1: movdqa (%rax,%rcx),%xmm8 /* load round key */
+ add $0x10,%rcx
+ jz 2f /* stop if this is the last one */
+ aesenc %xmm8,%xmm0
+ aesenc %xmm8,%xmm1
+ aesenc %xmm8,%xmm2
+ aesenc %xmm8,%xmm3
+ aesenc %xmm8,%xmm4
+ aesenc %xmm8,%xmm5
+ aesenc %xmm8,%xmm6
+ aesenc %xmm8,%xmm7
+ jmp 1b
+2: aesenclast %xmm8,%xmm0
+ aesenclast %xmm8,%xmm1
+ aesenclast %xmm8,%xmm2
+ aesenclast %xmm8,%xmm3
+ aesenclast %xmm8,%xmm4
+ aesenclast %xmm8,%xmm5
+ aesenclast %xmm8,%xmm6
+ aesenclast %xmm8,%xmm7
+ ret
+END(aesni_enc8)
+
+/*
+ * aesni_dec1(const struct aesdec *deckey@rdi, uint128_t block@xmm0,
+ * uint32_t nrounds@ecx)
+ *
+ * Decrypt a single AES block in %xmm0.
+ *
+ * Internal ABI. Uses %rax and %xmm8 as temporaries. Destroys %ecx.
+ */
+ .text
+ _ALIGN_TEXT
+ .type aesni_dec1,@function
+aesni_dec1:
+ pxor (%rdi),%xmm0 /* xor in first round key */
+ shl $4,%ecx /* ecx := total byte size of round keys */
+ lea 0x10(%rdi,%rcx),%rax /* rax := end of round key array */
+ neg %rcx /* rcx := byte offset of round key from end */
+1: movdqa (%rax,%rcx),%xmm8 /* load round key */
+ add $0x10,%rcx
+ jz 2f /* stop if this is the last one */
+ aesdec %xmm8,%xmm0
+ jmp 1b
+2: aesdeclast %xmm8,%xmm0
+ ret
+END(aesni_dec1)
+
+/*
+ * aesni_dec8(const struct aesdec *deckey@rdi, uint128_t block0@xmm0, ...,
+ * block7@xmm7, uint32_t nrounds@ecx)
+ *
+ * Decrypt eight AES blocks in %xmm0 through %xmm7 in parallel.
+ *
+ * Internal ABI. Uses %rax and %xmm8 as temporaries. Destroys %ecx.
+ */
+ .text
+ _ALIGN_TEXT
+ .type aesni_dec8,@function
+aesni_dec8:
+ movdqa (%rdi),%xmm8 /* xor in first round key */
+ pxor %xmm8,%xmm0
+ pxor %xmm8,%xmm1
+ pxor %xmm8,%xmm2
+ pxor %xmm8,%xmm3
+ pxor %xmm8,%xmm4
+ pxor %xmm8,%xmm5
+ pxor %xmm8,%xmm6
+ pxor %xmm8,%xmm7
+ shl $4,%ecx /* ecx := total byte size of round keys */
+ lea 0x10(%rdi,%rcx),%rax /* rax := end of round key array */
+ neg %rcx /* rcx := byte offset of round key from end */
+1: movdqa (%rax,%rcx),%xmm8 /* load round key */
+ add $0x10,%rcx
+ jz 2f /* stop if this is the last one */
+ aesdec %xmm8,%xmm0
+ aesdec %xmm8,%xmm1
+ aesdec %xmm8,%xmm2
+ aesdec %xmm8,%xmm3
+ aesdec %xmm8,%xmm4
+ aesdec %xmm8,%xmm5
+ aesdec %xmm8,%xmm6
+ aesdec %xmm8,%xmm7
+ jmp 1b
+2: aesdeclast %xmm8,%xmm0
+ aesdeclast %xmm8,%xmm1
+ aesdeclast %xmm8,%xmm2
+ aesdeclast %xmm8,%xmm3
+ aesdeclast %xmm8,%xmm4
+ aesdeclast %xmm8,%xmm5
+ aesdeclast %xmm8,%xmm6
+ aesdeclast %xmm8,%xmm7
+ ret
+END(aesni_dec8)
diff -r 9d6b84c40f65 -r fea7aeacc09c sys/crypto/aes/arch/x86/files.aesni
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/arch/x86/files.aesni Wed Jun 17 23:05:29 2020 +0000
@@ -0,0 +1,6 @@
+# $NetBSD$
+
+ifdef amd64 # amd64-only for now; i386 left as exercise for reader
+file crypto/aes/arch/x86/aes_ni.c aes
+file crypto/aes/arch/x86/aesnifunc.S aes
+endif
# HG changeset patch
# User Taylor R Campbell <riastradh%NetBSD.org@localhost>
# Date 1592066612 0
# Sat Jun 13 16:43:32 2020 +0000
# Branch trunk
# Node ID 87d9e1c86afcd441a167bf5f6d485e98d8094594
# Parent fea7aeacc09cf9da68d32a15edf9550ce78a4d45
# EXP-Topic riastradh-kernelcrypto
Implement AES in kernel using ARMv8.0-AES on aarch64.
diff -r fea7aeacc09c -r 87d9e1c86afc sys/arch/aarch64/aarch64/cpu.c
--- a/sys/arch/aarch64/aarch64/cpu.c Wed Jun 17 23:05:29 2020 +0000
+++ b/sys/arch/aarch64/aarch64/cpu.c Sat Jun 13 16:43:32 2020 +0000
@@ -44,6 +44,8 @@
#include <sys/sysctl.h>
#include <sys/systm.h>
+#include <crypto/aes/arch/aarch64/aes_arm.h>
+
#include <aarch64/armreg.h>
#include <aarch64/cpu.h>
#include <aarch64/cpufunc.h>
@@ -70,6 +72,7 @@ static void cpu_init_counter(struct cpu_
static void cpu_setup_id(struct cpu_info *);
static void cpu_setup_sysctl(device_t, struct cpu_info *);
static void cpu_setup_rng(device_t, struct cpu_info *);
+static void cpu_setup_aes(device_t, struct cpu_info *);
#ifdef MULTIPROCESSOR
#define NCPUINFO MAXCPUS
@@ -158,6 +161,7 @@ cpu_attach(device_t dv, cpuid_t id)
cpu_setup_sysctl(dv, ci);
cpu_setup_rng(dv, ci);
+ cpu_setup_aes(dv, ci);
}
struct cpuidtab {
@@ -589,6 +593,26 @@ cpu_setup_rng(device_t dv, struct cpu_in
RND_FLAG_DEFAULT|RND_FLAG_HASCB);
}
+/*
+ * Set up the AES implementation if the CPU supports it.
+ */
+static void
+cpu_setup_aes(device_t dv, struct cpu_info *ci)
+{
+ struct aarch64_sysctl_cpu_id *id = &ci->ci_id;
+
+ /* Verify that it is supported. */
+ switch (__SHIFTOUT(id->ac_aa64isar0, ID_AA64ISAR0_EL1_AES)) {
+ case ID_AA64ISAR0_EL1_AES_AES:
+ case ID_AA64ISAR0_EL1_AES_PMUL:
+ break;
+ default:
+ return;
+ }
+
+ aes_md_init(&aes_arm_impl);
+}
+
#ifdef MULTIPROCESSOR
void
cpu_hatch(struct cpu_info *ci)
diff -r fea7aeacc09c -r 87d9e1c86afc sys/arch/aarch64/conf/files.aarch64
--- a/sys/arch/aarch64/conf/files.aarch64 Wed Jun 17 23:05:29 2020 +0000
+++ b/sys/arch/aarch64/conf/files.aarch64 Sat Jun 13 16:43:32 2020 +0000
@@ -138,3 +138,6 @@ file arch/aarch64/aarch64/netbsd32_sysca
# profiling support
file dev/tprof/tprof_armv8.c tprof needs-flag
+
+# AES
+include "crypto/aes/arch/aarch64/files.aesarm"
diff -r fea7aeacc09c -r 87d9e1c86afc sys/crypto/aes/arch/aarch64/aes_arm.c
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/arch/aarch64/aes_arm.c Sat Jun 13 16:43:32 2020 +0000
@@ -0,0 +1,257 @@
+/* $NetBSD$ */
+
+/*-
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__KERNEL_RCSID(1, "$NetBSD$");
+
+#include <sys/types.h>
+#include <sys/proc.h>
+#include <sys/systm.h>
+
+#include <crypto/aes/aes.h>
+#include <crypto/aes/arch/aarch64/aes_arm.h>
+
+#include <aarch64/machdep.h>
+
+static void
+aesarm_setenckey(struct aesenc *enc, const uint8_t key[static 16],
+ uint32_t nrounds)
+{
+
+ switch (nrounds) {
+ case 10:
+ aesarm_setenckey128(enc, key);
+ break;
+ case 12:
+ aesarm_setenckey192(enc, key);
+ break;
+ case 14:
+ aesarm_setenckey256(enc, key);
+ break;
+ default:
+ panic("invalid AES rounds: %u", nrounds);
+ }
+}
+
+static void
+aesarm_setenckey_impl(struct aesenc *enc, const uint8_t key[static 16],
+ uint32_t nrounds)
+{
+
+ fpu_kern_enter();
+ aesarm_setenckey(enc, key, nrounds);
+ fpu_kern_leave();
+}
+
+static void
+aesarm_setdeckey_impl(struct aesdec *dec, const uint8_t key[static 16],
+ uint32_t nrounds)
+{
+ struct aesenc enc;
+
+ fpu_kern_enter();
+ aesarm_setenckey(&enc, key, nrounds);
+ aesarm_enctodec(&enc, dec, nrounds);
+ fpu_kern_leave();
+
+ explicit_memset(&enc, 0, sizeof enc);
+}
+
+static void
+aesarm_enc_impl(const struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], uint32_t nrounds)
+{
+
+ fpu_kern_enter();
+ aesarm_enc(enc, in, out, nrounds);
+ fpu_kern_leave();
+}
+
+static void
+aesarm_dec_impl(const struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], uint32_t nrounds)
+{
+
+ fpu_kern_enter();
+ aesarm_dec(dec, in, out, nrounds);
+ fpu_kern_leave();
+}
+
+static void
+aesarm_cbc_enc_impl(const struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t iv[static 16],
+ uint32_t nrounds)
+{
+
+ KASSERT(nbytes % 16 == 0);
+
+ fpu_kern_enter();
+ aesarm_cbc_enc(enc, in, out, nbytes, iv, nrounds);
+ fpu_kern_leave();
+}
+
+static void
+aesarm_cbc_dec_impl(const struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t iv[static 16],
+ uint32_t nrounds)
+{
+
+ KASSERT(nbytes % 16 == 0);
+
+ fpu_kern_enter();
+
+ if (nbytes % 128) {
+ aesarm_cbc_dec1(dec, in, out, nbytes % 128, iv, nrounds);
+ in += nbytes % 128;
+ out += nbytes % 128;
+ nbytes -= nbytes % 128;
+ }
+
+ KASSERT(nbytes % 128 == 0);
+ if (nbytes)
+ aesarm_cbc_dec8(dec, in, out, nbytes, iv, nrounds);
+
+ fpu_kern_leave();
+}
+
+static void
+aesarm_xts_enc_impl(const struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t tweak[static 16],
+ uint32_t nrounds)
+{
+
+ KASSERT(nbytes % 16 == 0);
+
+ fpu_kern_enter();
+
+ if (nbytes % 128) {
+ aesarm_xts_enc1(enc, in, out, nbytes % 128, tweak, nrounds);
+ in += nbytes % 128;
+ out += nbytes % 128;
+ nbytes -= nbytes % 128;
+ }
+
+ KASSERT(nbytes % 128 == 0);
+ if (nbytes)
+ aesarm_xts_enc8(enc, in, out, nbytes, tweak, nrounds);
+
+ fpu_kern_leave();
+}
+
+static void
+aesarm_xts_dec_impl(const struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t tweak[static 16],
+ uint32_t nrounds)
+{
+
+ KASSERT(nbytes % 16 == 0);
+
+ fpu_kern_enter();
+
+ if (nbytes % 128) {
+ aesarm_xts_dec1(dec, in, out, nbytes % 128, tweak, nrounds);
+ in += nbytes % 128;
+ out += nbytes % 128;
+ nbytes -= nbytes % 128;
+ }
+
+ KASSERT(nbytes % 128 == 0);
+ if (nbytes)
+ aesarm_xts_dec8(dec, in, out, nbytes, tweak, nrounds);
+
+ fpu_kern_leave();
+}
+
+static int
+aesarm_xts_update_selftest(void)
+{
+ static const struct {
+ uint8_t in[16], out[16];
+ } cases[] = {
+ {{1}, {2}},
+ {{0,0,0,0x80}, {0,0,0,0,1}},
+ {{0,0,0,0,0,0,0,0x80}, {0,0,0,0,0,0,0,0,1}},
+ {{0,0,0,0x80,0,0,0,0x80}, {0,0,0,0,1,0,0,0,1}},
+ {{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0x80}, {0x87}},
+ {{0,0,0,0,0,0,0,0x80,0,0,0,0,0,0,0,0x80},
+ {0x87,0,0,0,0,0,0,0,1}},
+ {{0,0,0,0x80,0,0,0,0,0,0,0,0,0,0,0,0x80}, {0x87,0,0,0,1}},
+ {{0,0,0,0x80,0,0,0,0x80,0,0,0,0,0,0,0,0x80},
+ {0x87,0,0,0,1,0,0,0,1}},
+ };
+ unsigned i;
+ uint8_t tweak[16];
+
+ for (i = 0; i < sizeof(cases)/sizeof(cases[0]); i++) {
+ aesarm_xts_update(cases[i].in, tweak);
+ if (memcmp(tweak, cases[i].out, 16))
+ return -1;
+ }
+
+ /* Success! */
+ return 0;
+}
+
+static int
+aesarm_probe(void)
+{
+ struct aarch64_sysctl_cpu_id *id = &curcpu()->ci_id;
+ int result = 0;
+
+ /* Verify that the CPU supports AES. */
+ switch (__SHIFTOUT(id->ac_aa64isar0, ID_AA64ISAR0_EL1_AES)) {
+ case ID_AA64ISAR0_EL1_AES_AES:
+ case ID_AA64ISAR0_EL1_AES_PMUL:
+ break;
+ default:
+ return -1;
+ }
+
+ fpu_kern_enter();
+
+ /* Verify that our XTS tweak update logic works. */
+ if (aesarm_xts_update_selftest())
+ result = -1;
+
+ fpu_kern_leave();
+
+ return result;
+}
+
+struct aes_impl aes_arm_impl = {
+ .ai_name = "AArch64 ARMv8.0-AES",
+ .ai_probe = aesarm_probe,
+ .ai_setenckey = aesarm_setenckey_impl,
+ .ai_setdeckey = aesarm_setdeckey_impl,
+ .ai_enc = aesarm_enc_impl,
+ .ai_dec = aesarm_dec_impl,
+ .ai_cbc_enc = aesarm_cbc_enc_impl,
+ .ai_cbc_dec = aesarm_cbc_dec_impl,
+ .ai_xts_enc = aesarm_xts_enc_impl,
+ .ai_xts_dec = aesarm_xts_dec_impl,
+};
diff -r fea7aeacc09c -r 87d9e1c86afc sys/crypto/aes/arch/aarch64/aes_arm.h
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/arch/aarch64/aes_arm.h Sat Jun 13 16:43:32 2020 +0000
@@ -0,0 +1,68 @@
+/* $NetBSD$ */
+
+/*-
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _CRYPTO_AES_AES_ARCH_AARCH64_AES_ARM_H
+#define _CRYPTO_AES_AES_ARCH_AARCH64_AES_ARM_H
+
+#include <sys/types.h>
+
+#include <crypto/aes/aes.h>
+
+/* Assembly routines */
+
+void aesarm_setenckey128(struct aesenc *, const uint8_t[static 16]);
+void aesarm_setenckey192(struct aesenc *, const uint8_t[static 24]);
+void aesarm_setenckey256(struct aesenc *, const uint8_t[static 32]);
+
+void aesarm_enctodec(const struct aesenc *, struct aesdec *, uint32_t);
+
+void aesarm_enc(const struct aesenc *, const uint8_t[static 16],
+ uint8_t[static 16], uint32_t);
+void aesarm_dec(const struct aesdec *, const uint8_t[static 16],
+ uint8_t[static 16], uint32_t);
+
+void aesarm_cbc_enc(const struct aesenc *, const uint8_t[static 16],
+ uint8_t[static 16], size_t, uint8_t[static 16], uint32_t);
+void aesarm_cbc_dec1(const struct aesdec *, const uint8_t[static 16],
+ uint8_t[static 16], size_t, uint8_t[static 16], uint32_t);
+void aesarm_cbc_dec8(const struct aesdec *, const uint8_t[static 128],
+ uint8_t[static 128], size_t, uint8_t[static 16], uint32_t);
+
+void aesarm_xts_enc1(const struct aesenc *, const uint8_t[static 16],
+ uint8_t[static 16], size_t, uint8_t[static 16], uint32_t);
+void aesarm_xts_enc8(const struct aesenc *, const uint8_t[static 128],
+ uint8_t[static 128], size_t, uint8_t[static 16], uint32_t);
+void aesarm_xts_dec1(const struct aesdec *, const uint8_t[static 16],
+ uint8_t[static 16], size_t, uint8_t[static 16], uint32_t);
+void aesarm_xts_dec8(const struct aesdec *, const uint8_t[static 128],
+ uint8_t[static 128], size_t, uint8_t[static 16], uint32_t);
+void aesarm_xts_update(const uint8_t[static 16], uint8_t[static 16]);
+
+extern struct aes_impl aes_arm_impl;
+
+#endif /* _CRYPTO_AES_AES_ARCH_AARCH64_AES_ARM_H */
diff -r fea7aeacc09c -r 87d9e1c86afc sys/crypto/aes/arch/aarch64/aesarmfunc.S
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/arch/aarch64/aesarmfunc.S Sat Jun 13 16:43:32 2020 +0000
@@ -0,0 +1,1014 @@
+/* $NetBSD$ */
+
+/*-
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <aarch64/asm.h>
+
+ .arch_extension crypto
+
+/*
+ * uint32_t rcon[10]
+ *
+ * Table mapping n ---> x^n mod (x^8 + x^4 + x^3 + x + 1) in GF(2).
+ * Such elements of GF(2^8) need only eight bits to be represented,
+ * but we store them in 4-byte units so we can copy one into all
+ * four 4-byte lanes of a vector register with a single LD1R. The
+ * access pattern is fixed, so indices into this table are never
+ * secret.
+ */
+ .section .rodata
+ .align 4
+ .type rcon,@object
+rcon:
+ .long 0x01
+ .long 0x02
+ .long 0x04
+ .long 0x08
+ .long 0x10
+ .long 0x20
+ .long 0x40
+ .long 0x80
+ .long 0x1b
+ .long 0x36
+END(rcon)
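(The table is easy to regenerate -- doubling in GF(2^8) with reduction
by the AES polynomial, in throwaway C:

    unsigned r = 0x01;
    for (int i = 0; i < 10; i++) {
            printf("\t.long 0x%02x\n", r);
            r <<= 1;
            if (r & 0x100)
                    r ^= 0x11b;     /* x^8 = x^4 + x^3 + x + 1 */
    }

which yields 0x01 through 0x80 and then 0x1b, 0x36 as above.)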
+
+/*
+ * uint128_t unshiftrows_rotword_1
+ *
+ * Table for TBL instruction to undo ShiftRows, and then do
+ * RotWord on word 1, and then copy it into all the other words.
+ */
+ .section .rodata
+ .align 16
+ .type unshiftrows_rotword_1,@object
+unshiftrows_rotword_1:
+ .byte 0x01,0x0e,0x0b,0x04
+ .byte 0x01,0x0e,0x0b,0x04
+ .byte 0x01,0x0e,0x0b,0x04
+ .byte 0x01,0x0e,0x0b,0x04
+END(unshiftrows_rotword_1)
+
+/*
+ * uint128_t unshiftrows_3
+ *
+ * Table for TBL instruction to undo ShiftRows, and then copy word
+ * 3 into all the other words.
+ */
+ .section .rodata
+ .align 16
+ .type unshiftrows_3,@object
+unshiftrows_3:
+ .byte 0x0c,0x09,0x06,0x03
+ .byte 0x0c,0x09,0x06,0x03
+ .byte 0x0c,0x09,0x06,0x03
+ .byte 0x0c,0x09,0x06,0x03
+END(unshiftrows_3)
+
+/*
+ * uint128_t unshiftrows_rotword_3
+ *
+ * Table for TBL instruction to undo ShiftRows, and then do
+ * RotWord on word 3, and then copy it into all the other words.
+ */
+ .section .rodata
+ .align 16
+ .type unshiftrows_rotword_3,@object
+unshiftrows_rotword_3:
+ .byte 0x09,0x06,0x03,0x0c
+ .byte 0x09,0x06,0x03,0x0c
+ .byte 0x09,0x06,0x03,0x0c
+ .byte 0x09,0x06,0x03,0x0c
+END(unshiftrows_rotword_3)
+
+/*
+ * aesarm_setenckey128(struct aesenc *enckey@x0, const uint8_t key[16] @x1)
+ *
+ * Expand a 16-byte AES-128 key into 11 round keys (for 10 rounds).
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesarm_setenckey128)
+ ldr q1, [x1] /* q1 := master key */
+
+ adrl x4, unshiftrows_rotword_3
+ eor v0.16b, v0.16b, v0.16b /* q0 := 0 */
+ ldr q8, [x4] /* q8 := unshiftrows_rotword_3 table */
+
+ str q1, [x0], #0x10 /* store master key as first round key */
+ mov x2, #10 /* round count */
+ adrl x3, rcon /* round constant */
+
+1: /*
+ * q0 = 0
+ * v1.4s = (prk[0], prk[1], prk[2], prk[3])
+ * x0 = pointer to round key to compute
+ * x2 = round count
+ * x3 = rcon pointer
+ */
+
+ /* q3 := ShiftRows(SubBytes(q1)) */
+ mov v3.16b, v1.16b
+ aese v3.16b, v0.16b
+
+ /* v3.4s[i] := RotWords(SubBytes(prk[3])) ^ RCON */
+ ld1r {v4.4s}, [x3], #4
+ tbl v3.16b, {v3.16b}, v8.16b
+ eor v3.16b, v3.16b, v4.16b
+
+ /*
+ * v5.4s := (0,prk[0],prk[1],prk[2])
+ * v6.4s := (0,0,prk[0],prk[1])
+ * v7.4s := (0,0,0,prk[0])
+ */
+ ext v5.16b, v0.16b, v1.16b, #12
+ ext v6.16b, v0.16b, v1.16b, #8
+ ext v7.16b, v0.16b, v1.16b, #4
+
+ /* v1.4s := (rk[0], rk[1], rk[2], rk[3]) */
+ eor v1.16b, v1.16b, v3.16b
+ eor v1.16b, v1.16b, v5.16b
+ eor v1.16b, v1.16b, v6.16b
+ eor v1.16b, v1.16b, v7.16b
+
+ subs x2, x2, #1 /* count down rounds */
+ str q1, [x0], #0x10 /* store round key */
+ b.ne 1b
+
+ ret
+END(aesarm_setenckey128)
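+
+/*
+ * The loop above is the FIPS 197 key schedule with Nk = 4:
+ *
+ *	w[i] = w[i-4] ^ RotWord(SubWord(w[i-1])) ^ rcon[i/4 - 1]
+ *						if i mod 4 == 0,
+ *	w[i] = w[i-4] ^ w[i-1]			otherwise,
+ *
+ * computed one round key (four words) per iteration; the EXT/EOR
+ * cascade is the running prefix XOR of the previous round key that
+ * the recurrence unrolls into.
+ */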
+
+/*
+ * aesarm_setenckey192(struct aesenc *enckey@x0, const uint8_t key[24] @x1)
+ *
+ * Expand a 24-byte AES-192 key into 13 round keys (for 12 rounds).
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesarm_setenckey192)
+ ldr q1, [x1], #0x10 /* q1 := master key[0:128) */
+ ldr d2, [x1] /* d2 := master key[128:192) */
+
+ adrl x4, unshiftrows_rotword_1
+ adrl x5, unshiftrows_rotword_3
+ eor v0.16b, v0.16b, v0.16b /* q0 := 0 */
+ ldr q8, [x4] /* q8 := unshiftrows_rotword_1 */
+ ldr q9, [x5] /* q9 := unshiftrows_rotword_3 */
+
+ str q1, [x0], #0x10 /* store master key[0:128) as round key */
+ mov x2, #12 /* round count */
+ adrl x3, rcon /* round constant */
+
+1: /*
+ * q0 = 0
+ * v1.4s = (prk[0], prk[1], prk[2], prk[3])
+ * v2.4s = (rklo[0], rklo[1], xxx, xxx)
+ * x0 = pointer to three round keys to compute
+ * x2 = round count
+ * x3 = rcon pointer
+ */
+
+ /* q3 := ShiftRows(SubBytes(q2)) */
+ mov v3.16b, v2.16b
+ aese v3.16b, v0.16b
+
+ /* v3.4s[i] := RotWords(SubBytes(rklo[1])) ^ RCON */
+ ld1r {v4.4s}, [x3], #4
+ tbl v3.16b, {v3.16b}, v8.16b
+ eor v3.16b, v3.16b, v4.16b
+
+ /*
+ * We need to compute:
+ *
+ * rk[0] := rklo[0]
+ * rk[1] := rklo[1]
+ * rk[2] := Rot(Sub(rklo[1])) ^ RCON ^ prk[0]
+ * rk[3] := Rot(Sub(rklo[1])) ^ RCON ^ prk[0] ^ prk[1]
+ * nrk[0] := Rot(Sub(rklo[1])) ^ RCON ^ prk[0] ^ prk[1] ^ prk[2]
+ * nrk[1] := Rot(Sub(rklo[1])) ^ RCON ^ prk[0] ^ ... ^ prk[3]
+ * nrk[2] := Rot(Sub(rklo[1])) ^ RCON ^ prk[0] ^ ... ^ prk[3] ^ rklo[0]
+ * nrk[3] := Rot(Sub(rklo[1])) ^ RCON ^ prk[0] ^ ... ^ prk[3] ^ rklo[0]
+ * ^ rklo[1]
+ */
+
+ /*
+ * v5.4s := (0,prk[0],prk[1],prk[2])
+ * v6.4s := (0,0,prk[0],prk[1])
+ * v7.4s := (0,0,0,prk[0])
+ */
+ ext v5.16b, v0.16b, v1.16b, #12
+ ext v6.16b, v0.16b, v1.16b, #8
+ ext v7.16b, v0.16b, v1.16b, #4
+
+ /* v5.4s := (rk[2], rk[3], nrk[0], nrk[1]) */
+ eor v5.16b, v5.16b, v1.16b
+ eor v5.16b, v5.16b, v3.16b
+ eor v5.16b, v5.16b, v6.16b
+ eor v5.16b, v5.16b, v7.16b
+
+ /*
+ * At this point, rk is split across v2.4s = (rk[0],rk[1],...)
+ * and v5.4s = (rk[2],rk[3],...); nrk is in v5.4s =
+ * (...,nrk[0],nrk[1]); and we have yet to compute nrk[2] or
+ * nrk[3], which requires rklo[0] and rklo[1] in v2.4s =
+ * (rklo[0],rklo[1],...).
+ */
+
+ /* v1.4s := (nrk[0], nrk[1], nrk[1], nrk[1]) */
+ dup v1.4s, v5.4s[3]
+ mov v1.4s[0], v5.4s[2]
+
+ /*
+ * v6.4s := (0, 0, rklo[0], rklo[1])
+ * v7.4s := (0, 0, 0, rklo[0])
+ */
+ ext v6.16b, v0.16b, v2.16b, #8
+ ext v7.16b, v0.16b, v2.16b, #4
+
+ /* v3.4s := (nrk[0], nrk[1], nrk[2], nrk[3]) */
+ eor v3.16b, v1.16b, v6.16b
+ eor v3.16b, v3.16b, v7.16b
+
+ /*
+ * Recall v2.4s = (rk[0], rk[1], xxx, xxx)
+ * and v5.4s = (rk[2], rk[3], xxx, xxx). Set
+ * v2.4s := (rk[0], rk[1], rk[2], rk[3])
+ */
+ mov v2.2d[1], v5.2d[0]
+
+ /* store two round keys */
+ stp q2, q3, [x0], #0x20
+
+ /*
+ * Live vector registers at this point:
+ *
+ * q0 = zero
+ * q2 = rk
+ * q3 = nrk
+ * v5.4s = (rk[2], rk[3], nrk[0], nrk[1])
+ * q8 = unshiftrows_rotword_1
+ * q9 = unshiftrows_rotword_3
+ *
+ * We have to compute, in q1:
+ *
+ * nnrk[0] := Rot(Sub(nrk[3])) ^ RCON' ^ rk[2]
+ * nnrk[1] := Rot(Sub(nrk[3])) ^ RCON' ^ rk[2] ^ rk[3]
+ * nnrk[2] := Rot(Sub(nrk[3])) ^ RCON' ^ rk[2] ^ rk[3] ^ nrk[0]
+ * nnrk[3] := Rot(Sub(nrk[3])) ^ RCON' ^ rk[2] ^ rk[3] ^ nrk[0]
+ * ^ nrk[1]
+ *
+ * And, if there's any more afterward, in q2:
+ *
+ * nnnrklo[0] := Rot(Sub(nrk[3])) ^ RCON' ^ rk[2] ^ rk[3] ^ nrk[0]
+ * ^ nrk[1] ^ nrk[2]
+ * nnnrklo[1] := Rot(Sub(nrk[3])) ^ RCON' ^ rk[2] ^ rk[3] ^ nrk[0]
+ * ^ nrk[1] ^ nrk[2] ^ nrk[3]
+ */
+
+ /* q1 := ShiftRows(SubBytes(q3)) */
+ mov v1.16b, v3.16b
+ aese v1.16b, v0.16b
+
+ /* v1.4s[i] := RotWords(SubBytes(nrk[3])) ^ RCON' */
+ ld1r {v4.4s}, [x3], #4
+ tbl v1.16b, {v1.16b}, v9.16b
+ eor v1.16b, v1.16b, v4.16b
+
+ /*
+ * v5.4s := (rk[2], rk[3], nrk[0], nrk[1]) [already]
+ * v4.4s := (0, rk[2], rk[3], nrk[0])
+ * v6.4s := (0, 0, rk[2], rk[3])
+ * v7.4s := (0, 0, 0, rk[2])
+ */
+ ext v4.16b, v0.16b, v5.16b, #12
+ ext v6.16b, v0.16b, v5.16b, #8
+ ext v7.16b, v0.16b, v5.16b, #4
+
+ /* v1.4s := (nnrk[0], nnrk[1], nnrk[2], nnrk[3]) */
+ eor v1.16b, v1.16b, v5.16b
+ eor v1.16b, v1.16b, v4.16b
+ eor v1.16b, v1.16b, v6.16b
+ eor v1.16b, v1.16b, v7.16b
+
+ subs x2, x2, #3 /* count down three rounds */
+ str q1, [x0], #0x10 /* store third round key */
+ b.eq 2f
+
+ /*
+ * v4.4s := (nrk[2], nrk[3], xxx, xxx)
+ * v5.4s := (0, nrk[2], xxx, xxx)
+ */
+ ext v4.16b, v3.16b, v0.16b, #8
+ ext v5.16b, v0.16b, v4.16b, #12
+
+ /* v2.4s := (nnrk[3], nnrk[3], xxx, xxx) */
+ dup v2.4s, v1.4s[3]
+
+ /*
+ * v2.4s := (nnnrklo[0] = nnrk[3] ^ nrk[2],
+ * nnnrklo[1] = nnrk[3] ^ nrk[2] ^ nrk[3],
+ * xxx, xxx)
+ */
+ eor v2.16b, v2.16b, v4.16b
+ eor v2.16b, v2.16b, v5.16b
+
+ b 1b
+
+2: ret
+END(aesarm_setenckey192)
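+
+/*
+ * Same FIPS 197 schedule with Nk = 6: w[i] = w[i-6] ^ w[i-1], with
+ * RotWord/SubWord/rcon applied to the w[i-1] term when i mod 6 == 0.
+ * Six-word steps straddle the 128-bit round keys, so the loop is
+ * unrolled to emit three round keys (twelve words, hence the two
+ * AESE/TBL steps above) per iteration.
+ */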
+
+/*
+ * aesarm_setenckey256(struct aesenc *enckey@x0, const uint8_t key[32] @x1)
+ *
+ * Expand a 32-byte AES-256 key into 15 round keys (for 14 rounds).
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesarm_setenckey256)
+ /* q1 := key[0:128), q2 := key[128:256) */
+ ldp q1, q2, [x1], #0x20
+
+ adrl x4, unshiftrows_rotword_3
+ adrl x5, unshiftrows_3
+ eor v0.16b, v0.16b, v0.16b /* q0 := 0 */
+ ldr q8, [x4] /* q8 := unshiftrows_rotword_3 */
+ ldr q9, [x5] /* q9 := unshiftrows_3 */
+
+ /* store master key as first two round keys */
+ stp q1, q2, [x0], #0x20
+ mov x2, #14 /* round count */
+ adrl x3, rcon /* round constant */
+
+1: /*
+ * q0 = 0
+ * v1.4s = (pprk[0], pprk[1], pprk[2], pprk[3])
+ * v2.4s = (prk[0], prk[1], prk[2], prk[3])
+ * x2 = round count
+ * x3 = rcon pointer
+ */
+
+ /* q3 := ShiftRows(SubBytes(q2)) */
+ mov v3.16b, v2.16b
+ aese v3.16b, v0.16b
+
+ /* v3.4s[i] := RotWords(SubBytes(prk[3])) ^ RCON */
+ ld1r {v4.4s}, [x3], #4
+ tbl v3.16b, {v3.16b}, v8.16b
+ eor v3.16b, v3.16b, v4.16b
+
+ /*
+ * v5.4s := (0,pprk[0],pprk[1],pprk[2])
+ * v6.4s := (0,0,pprk[0],pprk[1])
+ * v7.4s := (0,0,0,pprk[0])
+ */
+ ext v5.16b, v0.16b, v1.16b, #12
+ ext v6.16b, v0.16b, v1.16b, #8
+ ext v7.16b, v0.16b, v1.16b, #4
+
+ /* v1.4s := (rk[0], rk[1], rk[2], rk[3]) */
+ eor v1.16b, v1.16b, v3.16b
+ eor v1.16b, v1.16b, v5.16b
+ eor v1.16b, v1.16b, v6.16b
+ eor v1.16b, v1.16b, v7.16b
+
+ subs x2, x2, #2 /* count down two rounds */
+ b.eq 2f /* stop if this is the last one */
+
+ /* q3 := ShiftRows(SubBytes(q1)) */
+ mov v3.16b, v1.16b
+ aese v3.16b, v0.16b
+
+ /* v3.4s[i] := SubBytes(rk[3]) */
+ tbl v3.16b, {v3.16b}, v9.16b
+
+ /*
+ * v5.4s := (0,prk[0],prk[1],prk[2])
+ * v6.4s := (0,0,prk[0],prk[1])
+ * v7.4s := (0,0,0,prk[0])
+ */
+ ext v5.16b, v0.16b, v2.16b, #12
+ ext v6.16b, v0.16b, v2.16b, #8
+ ext v7.16b, v0.16b, v2.16b, #4
+
+ /* v2.4s := (nrk[0], nrk[1], nrk[2], nrk[3]) */
+ eor v2.16b, v2.16b, v3.16b
+ eor v2.16b, v2.16b, v5.16b
+ eor v2.16b, v2.16b, v6.16b
+ eor v2.16b, v2.16b, v7.16b
+
+ stp q1, q2, [x0], #0x20 /* store two round keys */
+ b 1b
+
+2: str q1, [x0] /* store last round key */
+ ret
+END(aesarm_setenckey256)
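+
+/*
+ * FIPS 197 schedule with Nk = 8: RotWord(SubWord(.)) ^ rcon at
+ * i mod 8 == 0 (the unshiftrows_rotword_3 step) and a plain
+ * SubWord(.) with no rcon at i mod 8 == 4 (the unshiftrows_3 step),
+ * yielding two round keys per loop iteration.
+ */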
+
+/*
+ * aesarm_enctodec(const struct aesenc *enckey@x0, struct aesdec *deckey@x1,
+ * uint32_t nrounds@x2)
+ *
+ * Convert AES encryption round keys to AES decryption round keys.
+ * `nrounds' must be between 10 and 14.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesarm_enctodec)
+ ldr q0, [x0, x2, lsl #4] /* load last round key */
+1: str q0, [x1], #0x10 /* store round key */
+ subs x2, x2, #1 /* count down round */
+ ldr q0, [x0, x2, lsl #4] /* load previous round key */
+ b.eq 2f /* stop if this is the last one */
+ aesimc v0.16b, v0.16b /* convert encryption to decryption */
+ b 1b
+2: str q0, [x1] /* store first round key verbatim */
+ ret
+END(aesarm_enctodec)
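+
+/*
+ * This is the standard `equivalent inverse cipher' construction
+ * (FIPS 197, sec. 5.3.5): the decryption round keys are the
+ * encryption round keys in reverse order, with AESIMC
+ * (InvMixColumns) applied to all but the first and last, so
+ * aesarm_dec1/dec8 can use the same round structure as encryption.
+ */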
+
+/*
+ * aesarm_enc(const struct aesenc *enckey@x0, const uint8_t in[16] @x1,
+ * uint8_t out[16] @x2, uint32_t nrounds@x3)
+ *
+ * Encrypt a single block.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesarm_enc)
+ stp fp, lr, [sp, #-16]! /* push stack frame */
+ mov fp, sp
+ ldr q0, [x1] /* q0 := block */
+ bl aesarm_enc1
+ str q0, [x2] /* store block */
+ ldp fp, lr, [sp], #16 /* pop stack frame */
+ ret
+END(aesarm_enc)
+
+/*
+ * aesarm_dec(const struct aesdec *deckey@x0, const uint8_t in[16] @x1,
+ * uint8_t out[16] @x2, uint32_t nrounds@x3)
+ *
+ * Decrypt a single block.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesarm_dec)
+ stp fp, lr, [sp, #-16]! /* push stack frame */
+ mov fp, sp
+ ldr q0, [x1] /* q0 := block */
+ bl aesarm_dec1
+ str q0, [x2] /* store block */
+ ldp fp, lr, [sp], #16 /* pop stack frame */
+ ret
+END(aesarm_dec)
+
+/*
+ * aesarm_cbc_enc(const struct aesenc *enckey@x0, const uint8_t *in@x1,
+ * uint8_t *out@x2, size_t nbytes@x3, uint8_t iv[16] @x4,
+ * uint32_t nrounds@x5)
+ *
+ * Encrypt a contiguous sequence of blocks with AES-CBC.
+ *
+ * nbytes must be an integral multiple of 16.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesarm_cbc_enc)
+ cbz x3, 2f /* stop if nothing to do */
+ stp fp, lr, [sp, #-16]! /* push stack frame */
+ mov fp, sp
+ mov x9, x0 /* x9 := enckey */
+ mov x10, x3 /* x10 := nbytes */
+ ldr q0, [x4] /* q0 := chaining value */
+1: ldr q1, [x1], #0x10 /* q1 := plaintext block */
+ eor v0.16b, v0.16b, v1.16b /* q0 := cv ^ ptxt */
+ mov x0, x9 /* x0 := enckey */
+ mov x3, x5 /* x3 := nrounds */
+ bl aesarm_enc1 /* q0 := ciphertext block */
+ subs x10, x10, #0x10 /* count down nbytes */
+ str q0, [x2], #0x10 /* store ciphertext block */
+ b.ne 1b /* repeat if x10 is nonzero */
+ str q0, [x4] /* store chaining value */
+ ldp fp, lr, [sp], #16 /* pop stack frame */
+2: ret
+END(aesarm_cbc_enc)
+
+/*
+ * aesarm_cbc_dec1(const struct aesdec *deckey@x0, const uint8_t *in@x1,
+ * uint8_t *out@x2, size_t nbytes@x3, const uint8_t iv[16] @x4,
+ * uint32_t nrounds@x5)
+ *
+ * Decrypt a contiguous sequence of blocks with AES-CBC.
+ *
+ * nbytes must be a positive integral multiple of 16. This routine
+ * is not vectorized; use aesarm_cbc_dec8 for >=8 blocks at once.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesarm_cbc_dec1)
+ stp fp, lr, [sp, #-32]! /* push stack frame with uint128 */
+ mov fp, sp
+ ldr q8, [x4] /* q8 := iv */
+ str q8, [sp, #16] /* save iv */
+ mov x9, x0 /* x9 := deckey */
+ mov x10, x3 /* x10 := nbytes */
+ add x1, x1, x3 /* x1 := pointer past end of in */
+ add x2, x2, x3 /* x2 := pointer past end of out */
+ ldr q0, [x1, #-0x10]! /* q0 := last ciphertext block */
+ str q0, [x4] /* update iv */
+1: mov x0, x9 /* x0 := deckey */
+ mov x3, x5 /* x3 := nrounds */
+ bl aesarm_dec1 /* q0 := cv ^ ptxt; trash x0/x3 */
+ subs x10, x10, #0x10 /* count down nbytes */
+ b.eq 2f /* stop if this is the first block */
+ ldr q8, [x1, #-0x10]! /* q8 := chaining value */
+ eor v0.16b, v0.16b, v8.16b /* q0 := plaintext block */
+ str q0, [x2, #-0x10]! /* store plaintext block */
+ mov v0.16b, v8.16b /* move cv = ciphertext block */
+ b 1b
+2: ldr q8, [sp, #16] /* q8 := iv */
+ eor v0.16b, v0.16b, v8.16b /* q0 := first plaintext block */
+ str q0, [x2, #-0x10]! /* store first plaintext block */
+ ldp fp, lr, [sp], #32 /* pop stack frame */
+ ret
+END(aesarm_cbc_dec1)
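+
+/*
+ * Note the loop runs backwards, from the last ciphertext block to
+ * the first: each plaintext block depends only on the corresponding
+ * ciphertext block and its predecessor, so CBC decryption can go in
+ * either order; starting at the end makes the last ciphertext block
+ * (the updated IV) available up front.
+ */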
+
+/*
+ * aesarm_cbc_dec8(const struct aesdec *deckey@x0, const uint8_t *in@x1,
+ * uint8_t *out@x2, size_t nbytes@x3, const uint8_t iv[16] @x4,
+ * uint32_t nrounds@x5)
+ *
+ * Decrypt a contiguous sequence of 8-block units with AES-CBC.
+ *
+ * nbytes must be a positive integral multiple of 128.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesarm_cbc_dec8)
+ stp fp, lr, [sp, #-32]! /* push stack frame with uint128 */
+ mov fp, sp
+ ldr q8, [x4] /* q8 := iv */
+ str q8, [sp, #16] /* save iv */
+ mov x9, x0 /* x9 := deckey */
+ mov x10, x3 /* x10 := nbytes */
+ add x1, x1, x3 /* x1 := pointer past end of in */
+ add x2, x2, x3 /* x2 := pointer past end of out */
+ ldp q6, q7, [x1, #-0x20]! /* q6, q7 := last ciphertext blocks */
+ str q7, [x4] /* update iv */
+1: ldp q4, q5, [x1, #-0x20]!
+ ldp q2, q3, [x1, #-0x20]!
+ ldp q0, q1, [x1, #-0x20]!
+ mov v15.16b, v6.16b /* q[8+i] := cv[i], 0<i<8 */
+ mov v14.16b, v5.16b
+ mov v13.16b, v4.16b
+ mov v12.16b, v3.16b
+ mov v11.16b, v2.16b
+ mov v10.16b, v1.16b
+ mov v9.16b, v0.16b
+ mov x0, x9 /* x0 := deckey */
+ mov x3, x5 /* x3 := nrounds */
+ bl aesarm_dec8 /* q[i] := cv[i] ^ pt[i] */
+ eor v7.16b, v7.16b, v15.16b /* q[i] := pt[i] */
+ eor v6.16b, v6.16b, v14.16b
+ eor v5.16b, v5.16b, v13.16b
+ eor v4.16b, v4.16b, v12.16b
+ eor v3.16b, v3.16b, v11.16b
+ eor v2.16b, v2.16b, v10.16b
+ eor v1.16b, v1.16b, v9.16b
+ subs x10, x10, #0x80 /* count down nbytes */
+ stp q6, q7, [x2, #-0x20]! /* store plaintext blocks */
+ stp q4, q5, [x2, #-0x20]!
+ stp q2, q3, [x2, #-0x20]!
+ b.eq 2f /* stop if this is the first block group */
+ ldp q6, q7, [x1, #-0x20]!
+ eor v0.16b, v0.16b, v7.16b /* q0 := pt0 */
+ stp q0, q1, [x2, #-0x20]!
+ b 1b
+2: ldr q8, [sp, #16] /* q8 := iv */
+ eor v0.16b, v0.16b, v8.16b /* q0 := pt0 */
+ stp q0, q1, [x2, #-0x20]! /* store first two plaintext blocks */
+ ldp fp, lr, [sp], #32 /* pop stack frame */
+ ret
+END(aesarm_cbc_dec8)
+
+/*
+ * aesarm_xts_enc1(const struct aesenc *enckey@x0, const uint8_t *in@x1,
+ * uint8_t *out@x2, size_t nbytes@x3, uint8_t tweak[16] @x4,
+ * uint32_t nrounds@x5)
+ *
+ * Encrypt a contiguous sequence of blocks with AES-XTS.
+ *
+ * nbytes must be a positive integral multiple of 16. This routine
+ * is not vectorized; use aesarm_xts_enc8 for >=8 blocks at once.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesarm_xts_enc1)
+ stp fp, lr, [sp, #-16]! /* push stack frame */
+ mov fp, sp
+ mov x9, x0 /* x9 := enckey */
+ mov x10, x3 /* x10 := nbytes */
+ ldr q9, [x4] /* q9 := tweak */
+1: ldr q0, [x1], #0x10 /* q0 := ptxt */
+ mov x0, x9 /* x0 := enckey */
+ mov x3, x5 /* x3 := nrounds */
+ eor v0.16b, v0.16b, v9.16b /* q0 := ptxt ^ tweak */
+ bl aesarm_enc1 /* q0 := AES(ptxt ^ tweak) */
+ eor v0.16b, v0.16b, v9.16b /* q0 := AES(ptxt ^ tweak) ^ tweak */
+ str q0, [x2], #0x10 /* store ciphertext block */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ subs x10, x10, #0x10 /* count down nbytes */
+ b.ne 1b /* repeat if more blocks */
+ str q9, [x4] /* update tweak */
+ ldp fp, lr, [sp], #16 /* pop stack frame */
+ ret
+END(aesarm_xts_enc1)
+
+/*
+ * aesarm_xts_enc8(const struct aesenc *enckey@x0, const uint8_t *in@x1,
+ * uint8_t *out@x2, size_t nbytes@x3, uint8_t tweak[16] @x4,
+ * uint32_t nrounds@x5)
+ *
+ * Encrypt a contiguous sequence of blocks with AES-XTS.
+ *
+ * nbytes must be a positive integral multiple of 128.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesarm_xts_enc8)
+ stp fp, lr, [sp, #-48]! /* push stack frame uint128[2] */
+ mov fp, sp
+ mov x9, x0 /* x9 := enckey */
+ mov x10, x3 /* x10 := nbytes */
+ ldr q9, [x4] /* q9 := tweak */
+1: str q9, [sp, #16] /* save tweak[0] */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ str q9, [sp, #32] /* save tweak[1] */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ mov v10.16b, v9.16b /* q10 := tweak[2] */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ mov v11.16b, v9.16b /* q11 := tweak[3] */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ mov v12.16b, v9.16b /* q12 := tweak[4] */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ mov v13.16b, v9.16b /* q13 := tweak[5] */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ mov v14.16b, v9.16b /* q14 := tweak[6] */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ mov v15.16b, v9.16b /* q15 := tweak[7] */
+ ldp q8, q9, [sp, #16] /* q8 := tweak[0], q9 := tweak[1] */
+ ldp q0, q1, [x1], #0x20 /* q[i] := pt[i] */
+ ldp q2, q3, [x1], #0x20
+ ldp q4, q5, [x1], #0x20
+ ldp q6, q7, [x1], #0x20
+ eor v0.16b, v0.16b, v8.16b /* q[i] := pt[i] ^ tweak[i] */
+ eor v1.16b, v1.16b, v9.16b
+ eor v2.16b, v2.16b, v10.16b
+ eor v3.16b, v3.16b, v11.16b
+ eor v4.16b, v4.16b, v12.16b
+ eor v5.16b, v5.16b, v13.16b
+ eor v6.16b, v6.16b, v14.16b
+ eor v7.16b, v7.16b, v15.16b
+ mov x0, x9 /* x0 := enckey */
+ mov x3, x5 /* x3 := nrounds */
+ bl aesarm_enc8 /* encrypt q0,...,q7; trash x0/x3/q8 */
+ ldr q8, [sp, #16] /* reload q8 := tweak[0] */
+ eor v1.16b, v1.16b, v9.16b /* q[i] := AES(...) ^ tweak[i] */
+ eor v2.16b, v2.16b, v10.16b
+ eor v3.16b, v3.16b, v11.16b
+ eor v0.16b, v0.16b, v8.16b
+ eor v4.16b, v4.16b, v12.16b
+ eor v5.16b, v5.16b, v13.16b
+ eor v6.16b, v6.16b, v14.16b
+ eor v7.16b, v7.16b, v15.16b
+ stp q0, q1, [x2], #0x20 /* store ciphertext blocks */
+ stp q2, q3, [x2], #0x20 /* store ciphertext blocks */
+ stp q4, q5, [x2], #0x20 /* store ciphertext blocks */
+ stp q6, q7, [x2], #0x20 /* store ciphertext blocks */
+ mov v9.16b, v15.16b /* q9 := q15 = tweak[7] */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ subs x10, x10, #0x80 /* count down nbytes */
+ b.ne 1b /* repeat if more block groups */
+ str q9, [x4] /* update tweak */
+ ldp fp, lr, [sp], #48 /* pop stack frame */
+ ret
+END(aesarm_xts_enc8)
+
+/*
+ * aesarm_xts_dec1(const struct aesdec *deckey@x0, const uint8_t *in@x1,
+ * uint8_t *out@x2, size_t nbytes@x3, uint8_t tweak[16] @x4,
+ * uint32_t nrounds@x5)
+ *
+ * Decrypt a contiguous sequence of blocks with AES-XTS.
+ *
+ * nbytes must be a positive integral multiple of 16. This routine
+ * is not vectorized; use aesarm_xts_dec8 for >=8 blocks at once.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesarm_xts_dec1)
+ stp fp, lr, [sp, #-16]! /* push stack frame */
+ mov fp, sp
+ mov x9, x0 /* x9 := deckey */
+ mov x10, x3 /* x10 := nbytes */
+ ldr q9, [x4] /* q9 := tweak */
+1: ldr q0, [x1], #0x10 /* q0 := ctxt */
+ mov x0, x9 /* x0 := deckey */
+ mov x3, x5 /* x3 := nrounds */
+ eor v0.16b, v0.16b, v9.16b /* q0 := ctxt ^ tweak */
+ bl aesarm_dec1 /* q0 := AES^-1(ctxt ^ tweak) */
+ eor v0.16b, v0.16b, v9.16b /* q0 := AES^-1(ctxt ^ tweak) ^ tweak */
+ str q0, [x2], #0x10 /* store plaintext block */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ subs x10, x10, #0x10 /* count down nbytes */
+ b.ne 1b /* repeat if more blocks */
+ str q9, [x4] /* update tweak */
+ ldp fp, lr, [sp], #16 /* pop stack frame */
+ ret
+END(aesarm_xts_dec1)
+
+/*
+ * aesarm_xts_dec8(const struct aesdec *deckey@x0, const uint8_t *in@x1,
+ * uint8_t *out@x2, size_t nbytes@x3, uint8_t tweak[16] @x4,
+ * uint32_t nrounds@x5)
+ *
+ * Decrypt a contiguous sequence of blocks with AES-XTS.
+ *
+ * nbytes must be a positive integral multiple of 128.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesarm_xts_dec8)
+ stp fp, lr, [sp, #-48]! /* push stack frame uint128[2] */
+ mov fp, sp
+ mov x9, x0 /* x9 := deckey */
+ mov x10, x3 /* x10 := nbytes */
+ ldr q9, [x4] /* q9 := tweak */
+1: str q9, [sp, #16] /* save tweak[0] */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ str q9, [sp, #32] /* save tweak[1] */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ mov v10.16b, v9.16b /* q10 := tweak[2] */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ mov v11.16b, v9.16b /* q11 := tweak[3] */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ mov v12.16b, v9.16b /* q12 := tweak[4] */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ mov v13.16b, v9.16b /* q13 := tweak[5] */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ mov v14.16b, v9.16b /* q14 := tweak[6] */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ mov v15.16b, v9.16b /* q15 := tweak[7] */
+ ldp q8, q9, [sp, #16] /* q8 := tweak[0], q9 := tweak[1] */
+ ldp q0, q1, [x1], #0x20 /* q[i] := ct[i] */
+ ldp q2, q3, [x1], #0x20
+ ldp q4, q5, [x1], #0x20
+ ldp q6, q7, [x1], #0x20
+ eor v0.16b, v0.16b, v8.16b /* q[i] := ct[i] ^ tweak[i] */
+ eor v1.16b, v1.16b, v9.16b
+ eor v2.16b, v2.16b, v10.16b
+ eor v3.16b, v3.16b, v11.16b
+ eor v4.16b, v4.16b, v12.16b
+ eor v5.16b, v5.16b, v13.16b
+ eor v6.16b, v6.16b, v14.16b
+ eor v7.16b, v7.16b, v15.16b
+ mov x0, x9 /* x0 := deckey */
+ mov x3, x5 /* x3 := nrounds */
+ bl aesarm_dec8 /* decrypt q0,...,q7; trash x0/x3/q8 */
+ ldr q8, [sp, #16] /* reload q8 := tweak[0] */
+ eor v1.16b, v1.16b, v9.16b /* q[i] := AES^-1(...) ^ tweak[i] */
+ eor v2.16b, v2.16b, v10.16b
+ eor v3.16b, v3.16b, v11.16b
+ eor v0.16b, v0.16b, v8.16b
+ eor v4.16b, v4.16b, v12.16b
+ eor v5.16b, v5.16b, v13.16b
+ eor v6.16b, v6.16b, v14.16b
+ eor v7.16b, v7.16b, v15.16b
+ stp q0, q1, [x2], #0x20 /* store plaintext blocks */
+ stp q2, q3, [x2], #0x20 /* store plaintext blocks */
+ stp q4, q5, [x2], #0x20 /* store plaintext blocks */
+ stp q6, q7, [x2], #0x20 /* store plaintext blocks */
+ mov v9.16b, v15.16b /* q9 := q15 = tweak[7] */
+ bl aesarm_xts_mulx /* q9 *= x; trash x0/q0/q1 */
+ subs x10, x10, #0x80 /* count down nbytes */
+ b.ne 1b /* repeat if more block groups */
+ str q9, [x4] /* update tweak */
+ ldp fp, lr, [sp], #48 /* pop stack frame */
+ ret
+END(aesarm_xts_dec8)
+
+/*
+ * aesarm_xts_mulx(tweak@q9)
+ *
+ * Multiply q9 by x, modulo x^128 + x^7 + x^2 + x + 1, in place.
+ * Uses x0 and q0/q1 as temporaries.
+ */
+ .text
+ _ALIGN_TEXT
+ .type aesarm_xts_mulx,@function
+aesarm_xts_mulx:
+ /*
+ * Simultaneously determine
+ * (a) whether the high bit of the low half must be
+ * shifted into the low bit of the high half, and
+ * (b) whether the high bit of the high half must be
+ * carried into x^128 = x^7 + x^2 + x + 1.
+ */
+ adrl x0, xtscarry
+ cmlt v1.2d, v9.2d, #0 /* v1.2d[i] := -1 if v9.2d[i] < 0, else 0 */
+ ldr q0, [x0] /* q0 := xtscarry */
+ ext v1.16b, v1.16b, v1.16b, #8 /* swap halves of q1 */
+ shl v9.2d, v9.2d, #1 /* shift */
+ and v0.16b, v0.16b, v1.16b /* copy xtscarry according to mask */
+ eor v9.16b, v9.16b, v0.16b /* incorporate (a) and (b) */
+ ret
+END(aesarm_xts_mulx)
+
+ .section .rodata
+ .align 16
+ .type xtscarry,@object
+xtscarry:
+ .byte 0x87,0,0,0, 0,0,0,0, 1,0,0,0, 0,0,0,0
+END(xtscarry)
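+
+/*
+ * Equivalent C sketch (illustrative only), with the tweak held as
+ * two little-endian 64-bit halves t[0], t[1], matching the .2d
+ * vector lanes above:
+ *
+ *	uint64_t lo = t[0], hi = t[1];
+ *
+ *	t[0] = (lo << 1) ^ (0x87 & -(hi >> 63));
+ *	t[1] = (hi << 1) | (lo >> 63);
+ */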
+
+/*
+ * aesarm_xts_update(const uint8_t in[16] @x0, uint8_t out[16] @x1)
+ *
+ * Update an AES-XTS tweak.
+ *
+ * Standard ABI calling convention.
+ */
+ENTRY(aesarm_xts_update)
+ stp fp, lr, [sp, #-16]! /* push stack frame */
+ mov fp, sp
+ ldr q9, [x0] /* load tweak */
+ bl aesarm_xts_mulx /* q9 *= x */
+ str q9, [x1] /* store tweak */
+ ldp fp, lr, [sp], #16 /* pop stack frame */
+ ret
+END(aesarm_xts_update)
+
+/*
+ * aesarm_enc1(const struct aesenc *enckey@x0,
+ * uint128_t block@q0, uint32_t nrounds@x3)
+ *
+ * Encrypt a single AES block in q0.
+ *
+ * Internal ABI. Uses q8 as temporary. Destroys x0 and x3.
+ */
+ .text
+ _ALIGN_TEXT
+ .type aesarm_enc1,@function
+aesarm_enc1:
+ ldr q8, [x0], #0x10 /* load round key */
+1: subs x3, x3, #1
+ /* q0 := ShiftRows(SubBytes(AddRoundKey_q8(q0))) */
+ aese v0.16b, v8.16b
+ ldr q8, [x0], #0x10 /* load next round key */
+ b.eq 2f
+ /* q0 := MixColumns(q0) */
+ aesmc v0.16b, v0.16b
+ b 1b
+2: eor v0.16b, v0.16b, v8.16b
+ ret
+END(aesarm_enc1)
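+
+/*
+ * For reference, the same loop with the ACLE <arm_neon.h>
+ * intrinsics (illustrative sketch only; the function and parameter
+ * names are ours, not part of this file):
+ *
+ *	uint8x16_t
+ *	enc1(const uint8x16_t *rk, uint8x16_t q0, unsigned nrounds)
+ *	{
+ *		for (unsigned i = 0; i < nrounds - 1; i++)
+ *			q0 = vaesmcq_u8(vaeseq_u8(q0, rk[i]));
+ *		q0 = vaeseq_u8(q0, rk[nrounds - 1]);
+ *		return veorq_u8(q0, rk[nrounds]);
+ *	}
+ *
+ * AESE is AddRoundKey+SubBytes+ShiftRows, so the last round skips
+ * MixColumns and the final round key is added with a plain EOR.
+ */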
+
+/*
+ * aesarm_enc8(const struct aesenc *enckey@x0,
+ * uint128_t block0@q0, ..., uint128_t block7@q7,
+ * uint32_t nrounds@x3)
+ *
+ * Encrypt eight AES blocks in q0 through q7 in parallel.
+ *
+ * Internal ABI. Uses q8 as temporary. Destroys x0 and x3.
+ */
+ .text
+ _ALIGN_TEXT
+ .type aesarm_enc8,@function
+aesarm_enc8:
+ ldr q8, [x0], #0x10 /* load round key */
+1: subs x3, x3, #1
+ /* q[i] := ShiftRows(SubBytes(AddRoundKey_q8(q[i]))) */
+ aese v0.16b, v8.16b
+ aese v1.16b, v8.16b
+ aese v2.16b, v8.16b
+ aese v3.16b, v8.16b
+ aese v4.16b, v8.16b
+ aese v5.16b, v8.16b
+ aese v6.16b, v8.16b
+ aese v7.16b, v8.16b
+ ldr q8, [x0], #0x10 /* load next round key */
+ b.eq 2f
+ /* q[i] := MixColumns(q[i]) */
+ aesmc v0.16b, v0.16b
+ aesmc v1.16b, v1.16b
+ aesmc v2.16b, v2.16b
+ aesmc v3.16b, v3.16b
+ aesmc v4.16b, v4.16b
+ aesmc v5.16b, v5.16b
+ aesmc v6.16b, v6.16b
+ aesmc v7.16b, v7.16b
+ b 1b
+2: eor v0.16b, v0.16b, v8.16b /* AddRoundKey */
+ eor v1.16b, v1.16b, v8.16b
+ eor v2.16b, v2.16b, v8.16b
+ eor v3.16b, v3.16b, v8.16b
+ eor v4.16b, v4.16b, v8.16b
+ eor v5.16b, v5.16b, v8.16b
+ eor v6.16b, v6.16b, v8.16b
+ eor v7.16b, v7.16b, v8.16b
+ ret
+END(aesarm_enc8)
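+
+/*
+ * Eight independent blocks are interleaved so consecutive
+ * AESE/AESMC instructions never depend on one another, which helps
+ * keep the AES pipeline full; each round key is loaded once into q8
+ * and applied to all eight blocks.
+ */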
+
+/*
+ * aesarm_dec1(const struct aesdec *deckey@x0,
+ * uint128_t block@q0, uint32_t nrounds@x3)
+ *
+ * Decrypt a single AES block in q0.
+ *
+ * Internal ABI. Uses q8 as temporary. Destroys x0 and x3.
+ */
+ .text
+ _ALIGN_TEXT
+ .type aesarm_dec1,@function
+aesarm_dec1:
+ ldr q8, [x0], #0x10 /* load round key */
+1: subs x3, x3, #1
+ /* q0 := InSubBytes(InShiftRows(AddRoundKey_q8(q0))) */
+ aesd v0.16b, v8.16b
+ ldr q8, [x0], #0x10 /* load next round key */
+ b.eq 2f
+ /* q0 := InMixColumns(q0) */
+ aesimc v0.16b, v0.16b
+ b 1b
+2: eor v0.16b, v0.16b, v8.16b
+ ret
+END(aesarm_dec1)
+
+/*
+ * aesarm_dec8(const struct aesdec *deckey@x0,
+ * uint128_t block0@q0, ..., uint128_t block7@q7,
+ * uint32_t nrounds@x3)
+ *
+ * Decrypt eight AES blocks in q0 through q7 in parallel.
+ *
+ * Internal ABI. Uses q8 as temporary. Destroys x0 and x3.
+ */
+ .text
+ _ALIGN_TEXT
+ .type aesarm_dec8,@function
+aesarm_dec8:
+ ldr q8, [x0], #0x10 /* load round key */
+1: subs x3, x3, #1
+ /* q[i] := InSubBytes(InShiftRows(AddRoundKey_q8(q[i]))) */
+ aesd v0.16b, v8.16b
+ aesd v1.16b, v8.16b
+ aesd v2.16b, v8.16b
+ aesd v3.16b, v8.16b
+ aesd v4.16b, v8.16b
+ aesd v5.16b, v8.16b
+ aesd v6.16b, v8.16b
+ aesd v7.16b, v8.16b
+ ldr q8, [x0], #0x10 /* load next round key */
+ b.eq 2f
+ /* q[i] := InMixColumns(q[i]) */
+ aesimc v0.16b, v0.16b
+ aesimc v1.16b, v1.16b
+ aesimc v2.16b, v2.16b
+ aesimc v3.16b, v3.16b
+ aesimc v4.16b, v4.16b
+ aesimc v5.16b, v5.16b
+ aesimc v6.16b, v6.16b
+ aesimc v7.16b, v7.16b
+ b 1b
+2: eor v0.16b, v0.16b, v8.16b /* AddRoundKey */
+ eor v1.16b, v1.16b, v8.16b
+ eor v2.16b, v2.16b, v8.16b
+ eor v3.16b, v3.16b, v8.16b
+ eor v4.16b, v4.16b, v8.16b
+ eor v5.16b, v5.16b, v8.16b
+ eor v6.16b, v6.16b, v8.16b
+ eor v7.16b, v7.16b, v8.16b
+ ret
+END(aesarm_dec8)
diff -r fea7aeacc09c -r 87d9e1c86afc sys/crypto/aes/arch/aarch64/files.aesarm
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/arch/aarch64/files.aesarm Sat Jun 13 16:43:32 2020 +0000
@@ -0,0 +1,4 @@
+# $NetBSD$
+
+file crypto/aes/arch/aarch64/aes_arm.c aes
+file crypto/aes/arch/aarch64/aesarmfunc.S aes
# HG changeset patch
# User Taylor R Campbell <riastradh%NetBSD.org@localhost>
# Date 1592164233 0
# Sun Jun 14 19:50:33 2020 +0000
# Branch trunk
# Node ID 3ded3c0a82b5fec12d521ba1d98285d446d016d9
# Parent 87d9e1c86afcd441a167bf5f6d485e98d8094594
# EXP-Topic riastradh-kernelcrypto
glxsb(4): Remove rijndael dependency.
This doesn't actually seem to depend on it in any way.
XXX Compile-tested only.
diff -r 87d9e1c86afc -r 3ded3c0a82b5 sys/arch/i386/conf/files.i386
--- a/sys/arch/i386/conf/files.i386 Sat Jun 13 16:43:32 2020 +0000
+++ b/sys/arch/i386/conf/files.i386 Sun Jun 14 19:50:33 2020 +0000
@@ -416,7 +416,7 @@ obsolete defparam opt_vesafb.h VESAFB_WI
obsolete defflag opt_vesafb.h VESAFB_PM
# AMD Geode LX Security Block
-device glxsb: opencrypto, rijndael
+device glxsb: opencrypto
attach glxsb at pci
file arch/i386/pci/glxsb.c glxsb
diff -r 87d9e1c86afc -r 3ded3c0a82b5 sys/arch/i386/pci/glxsb.c
--- a/sys/arch/i386/pci/glxsb.c Sat Jun 13 16:43:32 2020 +0000
+++ b/sys/arch/i386/pci/glxsb.c Sun Jun 14 19:50:33 2020 +0000
@@ -44,7 +44,6 @@
#include <dev/pci/pcidevs.h>
#include <opencrypto/cryptodev.h>
-#include <crypto/rijndael/rijndael.h>
#define SB_GLD_MSR_CAP 0x58002000 /* RO - Capabilities */
#define SB_GLD_MSR_CONFIG 0x58002001 /* RW - Master Config */
# HG changeset patch
# User Taylor R Campbell <riastradh%NetBSD.org@localhost>
# Date 1592164303 0
# Sun Jun 14 19:51:43 2020 +0000
# Branch trunk
# Node ID 0eb81d1b858c9205fde1d048bd1fa6640ec93928
# Parent 3ded3c0a82b5fec12d521ba1d98285d446d016d9
# EXP-Topic riastradh-kernelcrypto
padlock(4): Convert legacy rijndael API to new aes API.
XXX Compile-tested only.
XXX The byte-order business here seems highly questionable.
diff -r 3ded3c0a82b5 -r 0eb81d1b858c sys/arch/x86/conf/files.x86
--- a/sys/arch/x86/conf/files.x86 Sun Jun 14 19:50:33 2020 +0000
+++ b/sys/arch/x86/conf/files.x86 Sun Jun 14 19:51:43 2020 +0000
@@ -59,7 +59,7 @@ device odcm
attach odcm at cpufeaturebus
file arch/x86/x86/odcm.c odcm
-device padlock: opencrypto, rijndael
+device padlock: opencrypto, aes
attach padlock at cpufeaturebus
file arch/x86/x86/via_padlock.c padlock
diff -r 3ded3c0a82b5 -r 0eb81d1b858c sys/arch/x86/include/via_padlock.h
--- a/sys/arch/x86/include/via_padlock.h Sun Jun 14 19:50:33 2020 +0000
+++ b/sys/arch/x86/include/via_padlock.h Sun Jun 14 19:51:43 2020 +0000
@@ -25,7 +25,8 @@
#include <sys/rndsource.h>
#include <sys/callout.h>
-#include <crypto/rijndael/rijndael.h>
+
+#include <crypto/aes/aes.h>
/* VIA C3 xcrypt-* instruction context control options */
#define C3_CRYPT_CWLO_ROUND_M 0x0000000f
@@ -43,9 +44,8 @@
#define C3_CRYPT_CWLO_KEY256 0x0000080e /* 256bit, 15 rds */
struct via_padlock_session {
- uint32_t ses_ekey[4 * (RIJNDAEL_MAXNR + 1) + 4]; /* 128 bit aligned */
- uint32_t ses_dkey[4 * (RIJNDAEL_MAXNR + 1) + 4]; /* 128 bit aligned */
- uint8_t ses_iv[16]; /* 128 bit aligned */
+ struct aesenc ses_ekey;
+ struct aesdec ses_dkey;
uint32_t ses_cw0;
struct swcr_data *swd;
int ses_klen;
diff -r 3ded3c0a82b5 -r 0eb81d1b858c sys/arch/x86/x86/via_padlock.c
--- a/sys/arch/x86/x86/via_padlock.c Sun Jun 14 19:50:33 2020 +0000
+++ b/sys/arch/x86/x86/via_padlock.c Sun Jun 14 19:51:43 2020 +0000
@@ -37,10 +37,11 @@
#include <machine/cpufunc.h>
#include <machine/cpuvar.h>
+#include <crypto/aes/aes.h>
+
#include <opencrypto/cryptodev.h>
#include <opencrypto/cryptosoft.h>
#include <opencrypto/xform.h>
-#include <crypto/rijndael/rijndael.h>
#include <opencrypto/cryptosoft_xform.c>
@@ -176,12 +177,18 @@ via_padlock_crypto_newsession(void *arg,
case CRYPTO_AES_CBC:
switch (c->cri_klen) {
case 128:
+ aes_setenckey128(&ses->ses_ekey, c->cri_key);
+ aes_setdeckey128(&ses->ses_dkey, c->cri_key);
cw0 = C3_CRYPT_CWLO_KEY128;
break;
case 192:
+ aes_setenckey192(&ses->ses_ekey, c->cri_key);
+ aes_setdeckey192(&ses->ses_dkey, c->cri_key);
cw0 = C3_CRYPT_CWLO_KEY192;
break;
case 256:
+ aes_setenckey256(&ses->ses_ekey, c->cri_key);
+ aes_setdeckey256(&ses->ses_dkey, c->cri_key);
cw0 = C3_CRYPT_CWLO_KEY256;
break;
default:
@@ -194,14 +201,12 @@ via_padlock_crypto_newsession(void *arg,
ses->ses_klen = c->cri_klen;
ses->ses_cw0 = cw0;
- /* Build expanded keys for both directions */
- rijndaelKeySetupEnc(ses->ses_ekey, c->cri_key,
- c->cri_klen);
- rijndaelKeySetupDec(ses->ses_dkey, c->cri_key,
- c->cri_klen);
- for (i = 0; i < 4 * (RIJNDAEL_MAXNR + 1); i++) {
- ses->ses_ekey[i] = ntohl(ses->ses_ekey[i]);
- ses->ses_dkey[i] = ntohl(ses->ses_dkey[i]);
+ /* Convert words to host byte order (???) */
+ for (i = 0; i < 4 * (AES_256_NROUNDS + 1); i++) {
+ ses->ses_ekey.aese_aes.aes_rk[i] =
+ ntohl(ses->ses_ekey.aese_aes.aes_rk[i]);
+ ses->ses_dkey.aesd_aes.aes_rk[i] =
+ ntohl(ses->ses_dkey.aesd_aes.aes_rk[i]);
}
break;
@@ -379,7 +384,7 @@ via_padlock_crypto_encdec(struct cryptop
if (crd->crd_flags & CRD_F_ENCRYPT) {
sc->op_cw[0] = ses->ses_cw0 | C3_CRYPT_CWLO_ENCRYPT;
- key = ses->ses_ekey;
+ key = ses->ses_ekey.aese_aes.aes_rk;
if (crd->crd_flags & CRD_F_IV_EXPLICIT)
memcpy(sc->op_iv, crd->crd_iv, 16);
else
@@ -398,7 +403,7 @@ via_padlock_crypto_encdec(struct cryptop
}
} else {
sc->op_cw[0] = ses->ses_cw0 | C3_CRYPT_CWLO_DECRYPT;
- key = ses->ses_dkey;
+ key = ses->ses_dkey.aesd_aes.aes_rk;
if (crd->crd_flags & CRD_F_IV_EXPLICIT)
memcpy(sc->op_iv, crd->crd_iv, 16);
else {
# HG changeset patch
# User Taylor R Campbell <riastradh%NetBSD.org@localhost>
# Date 1592164567 0
# Sun Jun 14 19:56:07 2020 +0000
# Branch trunk
# Node ID f2bfdffcb27b2e0de26513dbac99e057635654bb
# Parent 0eb81d1b858c9205fde1d048bd1fa6640ec93928
# EXP-Topic riastradh-kernelcrypto
cgd(4): Switch from legacy rijndael API to new aes API.
diff -r 0eb81d1b858c -r f2bfdffcb27b sys/conf/files
--- a/sys/conf/files Sun Jun 14 19:51:43 2020 +0000
+++ b/sys/conf/files Sun Jun 14 19:56:07 2020 +0000
@@ -1395,7 +1395,7 @@ file dev/ic/amdccp.c amdccp
defpseudodev vnd: disk
defflag opt_vnd.h VND_COMPRESSION
defpseudo ccd: disk
-defpseudodev cgd: disk, des, blowfish, cast128, rijndael
+defpseudodev cgd: disk, des, blowfish, cast128, aes
defpseudodev md: disk
defpseudodev fss: disk
diff -r 0eb81d1b858c -r f2bfdffcb27b sys/dev/cgd_crypto.c
--- a/sys/dev/cgd_crypto.c Sun Jun 14 19:51:43 2020 +0000
+++ b/sys/dev/cgd_crypto.c Sun Jun 14 19:56:07 2020 +0000
@@ -45,9 +45,9 @@
#include <dev/cgd_crypto.h>
+#include <crypto/aes/aes.h>
#include <crypto/blowfish/blowfish.h>
#include <crypto/des/des.h>
-#include <crypto/rijndael/rijndael-api-fst.h>
/*
* The general framework provides only one generic function.
@@ -114,8 +114,9 @@ cryptfuncs_find(const char *alg)
*/
struct aes_privdata {
- keyInstance ap_enckey;
- keyInstance ap_deckey;
+ struct aesenc ap_enckey;
+ struct aesdec ap_deckey;
+ uint32_t ap_nrounds;
};
static void *
@@ -132,8 +133,23 @@ cgd_cipher_aes_cbc_init(size_t keylen, c
if (*blocksize != 128)
return NULL;
ap = kmem_zalloc(sizeof(*ap), KM_SLEEP);
- rijndael_makeKey(&ap->ap_enckey, DIR_ENCRYPT, keylen, key);
- rijndael_makeKey(&ap->ap_deckey, DIR_DECRYPT, keylen, key);
+ switch (keylen) {
+ case 128:
+ aes_setenckey128(&ap->ap_enckey, key);
+ aes_setdeckey128(&ap->ap_deckey, key);
+ ap->ap_nrounds = AES_128_NROUNDS;
+ break;
+ case 192:
+ aes_setenckey192(&ap->ap_enckey, key);
+ aes_setdeckey192(&ap->ap_deckey, key);
+ ap->ap_nrounds = AES_192_NROUNDS;
+ break;
+ case 256:
+ aes_setenckey256(&ap->ap_enckey, key);
+ aes_setdeckey256(&ap->ap_deckey, key);
+ ap->ap_nrounds = AES_256_NROUNDS;
+ break;
+ }
return ap;
}
@@ -152,25 +168,18 @@ cgd_cipher_aes_cbc(void *privdata, void
{
struct aes_privdata *apd = privdata;
uint8_t iv[CGD_AES_BLOCK_SIZE] = {0};
- cipherInstance cipher;
- int cipher_ok __diagused;
/* Compute the CBC IV as AES_k(blkno). */
- cipher_ok = rijndael_cipherInit(&cipher, MODE_ECB, NULL);
- KASSERT(cipher_ok > 0);
- rijndael_blockEncrypt(&cipher, &apd->ap_enckey, blkno, /*nbits*/128,
- iv);
+ aes_enc(&apd->ap_enckey, blkno, iv, apd->ap_nrounds);
- cipher_ok = rijndael_cipherInit(&cipher, MODE_CBC, iv);
- KASSERT(cipher_ok > 0);
switch (dir) {
case CGD_CIPHER_ENCRYPT:
- rijndael_blockEncrypt(&cipher, &apd->ap_enckey, src,
- /*nbits*/nbytes * 8, dst);
+ aes_cbc_enc(&apd->ap_enckey, src, dst, nbytes, iv,
+ apd->ap_nrounds);
break;
case CGD_CIPHER_DECRYPT:
- rijndael_blockDecrypt(&cipher, &apd->ap_deckey, src,
- /*nbits*/nbytes * 8, dst);
+ aes_cbc_dec(&apd->ap_deckey, src, dst, nbytes, iv,
+ apd->ap_nrounds);
break;
default:
panic("%s: unrecognised direction %d", __func__, dir);
@@ -182,9 +191,10 @@ cgd_cipher_aes_cbc(void *privdata, void
*/
struct aesxts {
- keyInstance ax_enckey;
- keyInstance ax_deckey;
- keyInstance ax_tweakkey;
+ struct aesenc ax_enckey;
+ struct aesdec ax_deckey;
+ struct aesenc ax_tweakkey;
+ uint32_t ax_nrounds;
};
static void *
@@ -207,9 +217,20 @@ cgd_cipher_aes_xts_init(size_t keylen, c
key = xtskey;
key2 = key + keylen / CHAR_BIT;
- rijndael_makeKey(&ax->ax_enckey, DIR_ENCRYPT, keylen, key);
- rijndael_makeKey(&ax->ax_deckey, DIR_DECRYPT, keylen, key);
- rijndael_makeKey(&ax->ax_tweakkey, DIR_ENCRYPT, keylen, key2);
+ switch (keylen) {
+ case 128:
+ aes_setenckey128(&ax->ax_enckey, key);
+ aes_setdeckey128(&ax->ax_deckey, key);
+ aes_setenckey128(&ax->ax_tweakkey, key2);
+ ax->ax_nrounds = AES_128_NROUNDS;
+ break;
+ case 256:
+ aes_setenckey256(&ax->ax_enckey, key);
+ aes_setdeckey256(&ax->ax_deckey, key);
+ aes_setenckey256(&ax->ax_tweakkey, key2);
+ ax->ax_nrounds = AES_256_NROUNDS;
+ break;
+ }
return ax;
}
@@ -229,25 +250,18 @@ cgd_cipher_aes_xts(void *cookie, void *d
{
struct aesxts *ax = cookie;
uint8_t tweak[CGD_AES_BLOCK_SIZE];
- cipherInstance cipher;
- int cipher_ok __diagused;
/* Compute the initial tweak as AES_k(blkno). */
- cipher_ok = rijndael_cipherInit(&cipher, MODE_ECB, NULL);
- KASSERT(cipher_ok > 0);
- rijndael_blockEncrypt(&cipher, &ax->ax_tweakkey, blkno, /*nbits*/128,
- tweak);
+ aes_enc(&ax->ax_tweakkey, blkno, tweak, ax->ax_nrounds);
- cipher_ok = rijndael_cipherInit(&cipher, MODE_XTS, tweak);
- KASSERT(cipher_ok > 0);
switch (dir) {
case CGD_CIPHER_ENCRYPT:
- rijndael_blockEncrypt(&cipher, &ax->ax_enckey, src,
- /*nbits*/nbytes * 8, dst);
+ aes_xts_enc(&ax->ax_enckey, src, dst, nbytes, tweak,
+ ax->ax_nrounds);
break;
case CGD_CIPHER_DECRYPT:
- rijndael_blockDecrypt(&cipher, &ax->ax_deckey, src,
- /*nbits*/nbytes * 8, dst);
+ aes_xts_dec(&ax->ax_deckey, src, dst, nbytes, tweak,
+ ax->ax_nrounds);
break;
default:
panic("%s: unrecognised direction %d", __func__, dir);
# HG changeset patch
# User Taylor R Campbell <riastradh%NetBSD.org@localhost>
# Date 1592164643 0
# Sun Jun 14 19:57:23 2020 +0000
# Branch trunk
# Node ID b7131a05bde780d6bbcc795e46ecfde3a45e9bfb
# Parent f2bfdffcb27b2e0de26513dbac99e057635654bb
# EXP-Topic riastradh-kernelcrypto
uvm(9): Switch from legacy rijndael API to new aes API.
diff -r f2bfdffcb27b -r b7131a05bde7 sys/uvm/files.uvm
--- a/sys/uvm/files.uvm Sun Jun 14 19:56:07 2020 +0000
+++ b/sys/uvm/files.uvm Sun Jun 14 19:57:23 2020 +0000
@@ -8,7 +8,7 @@ defflag opt_uvmhist.h UVMHIST_PRINT: KE
defparam opt_uvmhist.h UVMHIST_MAPHIST_SIZE UVMHIST_PDHIST_SIZE
defflag opt_uvm.h USE_TOPDOWN_VM UVMMAP_COUNTERS
defparam opt_uvm.h UVM_RESERVED_PAGES_PER_CPU
-defflag opt_vmswap.h VMSWAP : rijndael
+defflag opt_vmswap.h VMSWAP : aes
defflag opt_readahead.h READAHEAD_STATS
defflag opt_ubc.h UBC_STATS
defparam opt_pagermap.h PAGER_MAP_SIZE
diff -r f2bfdffcb27b -r b7131a05bde7 sys/uvm/uvm_swap.c
--- a/sys/uvm/uvm_swap.c Sun Jun 14 19:56:07 2020 +0000
+++ b/sys/uvm/uvm_swap.c Sun Jun 14 19:57:23 2020 +0000
@@ -65,7 +65,7 @@
#include <miscfs/specfs/specdev.h>
-#include <crypto/rijndael/rijndael-api-fst.h>
+#include <crypto/aes/aes.h>
/*
* uvm_swap.c: manage configuration and i/o to swap space.
@@ -148,8 +148,8 @@ struct swapdev {
int swd_active; /* number of active buffers */
volatile uint32_t *swd_encmap; /* bitmap of encrypted slots */
- keyInstance swd_enckey; /* AES key expanded for enc */
- keyInstance swd_deckey; /* AES key expanded for dec */
+ struct aesenc swd_enckey; /* AES key expanded for enc */
+ struct aesdec swd_deckey; /* AES key expanded for dec */
bool swd_encinit; /* true if keys initialized */
};
@@ -2073,8 +2073,8 @@ uvm_swap_genkey(struct swapdev *sdp)
KASSERT(!sdp->swd_encinit);
cprng_strong(kern_cprng, key, sizeof key, 0);
- rijndael_makeKey(&sdp->swd_enckey, DIR_ENCRYPT, 256, key);
- rijndael_makeKey(&sdp->swd_deckey, DIR_DECRYPT, 256, key);
+ aes_setenckey256(&sdp->swd_enckey, key);
+ aes_setdeckey256(&sdp->swd_deckey, key);
explicit_memset(key, 0, sizeof key);
sdp->swd_encinit = true;
@@ -2089,27 +2089,17 @@ uvm_swap_genkey(struct swapdev *sdp)
static void
uvm_swap_encryptpage(struct swapdev *sdp, void *kva, int slot)
{
- cipherInstance aes;
uint8_t preiv[16] = {0}, iv[16];
- int ok __diagused, nbits __diagused;
/* iv := AES_k(le32enc(slot) || 0^96) */
le32enc(preiv, slot);
- ok = rijndael_cipherInit(&aes, MODE_ECB, NULL);
- KASSERT(ok);
- nbits = rijndael_blockEncrypt(&aes, &sdp->swd_enckey, preiv,
- /*length in bits*/128, iv);
- KASSERT(nbits == 128);
+ aes_enc(&sdp->swd_enckey, (const void *)preiv, iv, AES_256_NROUNDS);
/* *kva := AES-CBC_k(iv, *kva) */
- ok = rijndael_cipherInit(&aes, MODE_CBC, iv);
- KASSERT(ok);
- nbits = rijndael_blockEncrypt(&aes, &sdp->swd_enckey, kva,
- /*length in bits*/PAGE_SIZE*NBBY, kva);
- KASSERT(nbits == PAGE_SIZE*NBBY);
+ aes_cbc_enc(&sdp->swd_enckey, kva, kva, PAGE_SIZE, iv,
+ AES_256_NROUNDS);
explicit_memset(&iv, 0, sizeof iv);
- explicit_memset(&aes, 0, sizeof aes);
}
/*
@@ -2121,28 +2111,17 @@ uvm_swap_encryptpage(struct swapdev *sdp
static void
uvm_swap_decryptpage(struct swapdev *sdp, void *kva, int slot)
{
- cipherInstance aes;
uint8_t preiv[16] = {0}, iv[16];
- int ok __diagused, nbits __diagused;
/* iv := AES_k(le32enc(slot) || 0^96) */
le32enc(preiv, slot);
- ok = rijndael_cipherInit(&aes, MODE_ECB, NULL);
- KASSERT(ok);
- nbits = rijndael_blockEncrypt(&aes, &sdp->swd_enckey, preiv,
- /*length in bits*/128, iv);
- KASSERTMSG(nbits == 128, "nbits=%d expected %d\n", nbits, 128);
+ aes_enc(&sdp->swd_enckey, (const void *)preiv, iv, AES_256_NROUNDS);
/* *kva := AES-CBC^{-1}_k(iv, *kva) */
- ok = rijndael_cipherInit(&aes, MODE_CBC, iv);
- KASSERT(ok);
- nbits = rijndael_blockDecrypt(&aes, &sdp->swd_deckey, kva,
- /*length in bits*/PAGE_SIZE*NBBY, kva);
- KASSERTMSG(nbits == PAGE_SIZE*NBBY,
- "nbits=%d expected %d\n", nbits, PAGE_SIZE*NBBY);
+ aes_cbc_dec(&sdp->swd_deckey, kva, kva, PAGE_SIZE, iv,
+ AES_256_NROUNDS);
explicit_memset(&iv, 0, sizeof iv);
- explicit_memset(&aes, 0, sizeof aes);
}
SYSCTL_SETUP(sysctl_uvmswap_setup, "sysctl uvmswap setup")
# HG changeset patch
# User Taylor R Campbell <riastradh%NetBSD.org@localhost>
# Date 1592164753 0
# Sun Jun 14 19:59:13 2020 +0000
# Branch trunk
# Node ID a97bc0abe60d9a77b10f27d63951d60b0be7b987
# Parent b7131a05bde780d6bbcc795e46ecfde3a45e9bfb
# EXP-Topic riastradh-kernelcrypto
opencrypto: Switch from legacy rijndael API to new aes API.
While here, apply various rijndael->aes renames, reduce the size
of aesxcbc_ctx by 480 bytes, and convert some malloc->kmem.
Leave in the symbol enc_xform_rijndael128 for now, though, so this
doesn't break any kernel ABI.
diff -r b7131a05bde7 -r a97bc0abe60d sys/opencrypto/aesxcbcmac.c
--- a/sys/opencrypto/aesxcbcmac.c Sun Jun 14 19:57:23 2020 +0000
+++ b/sys/opencrypto/aesxcbcmac.c Sun Jun 14 19:59:13 2020 +0000
@@ -34,7 +34,8 @@
#include <sys/param.h>
#include <sys/systm.h>
-#include <crypto/rijndael/rijndael.h>
+
+#include <crypto/aes/aes.h>
#include <opencrypto/aesxcbcmac.h>
@@ -47,24 +48,31 @@ aes_xcbc_mac_init(void *vctx, const uint
{ 2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2 };
static const uint8_t k3seed[AES_BLOCKSIZE] =
{ 3,3,3,3,3,3,3,3,3,3,3,3,3,3,3,3 };
- u_int32_t r_ks[(RIJNDAEL_MAXNR+1)*4];
+ struct aesenc r_ks;
aesxcbc_ctx *ctx;
uint8_t k1[AES_BLOCKSIZE];
ctx = vctx;
memset(ctx, 0, sizeof(*ctx));
- if ((ctx->r_nr = rijndaelKeySetupEnc(r_ks, key, keylen * 8)) == 0)
- return -1;
- rijndaelEncrypt(r_ks, ctx->r_nr, k1seed, k1);
- rijndaelEncrypt(r_ks, ctx->r_nr, k2seed, ctx->k2);
- rijndaelEncrypt(r_ks, ctx->r_nr, k3seed, ctx->k3);
- if (rijndaelKeySetupEnc(ctx->r_k1s, k1, AES_BLOCKSIZE * 8) == 0)
- return -1;
- if (rijndaelKeySetupEnc(ctx->r_k2s, ctx->k2, AES_BLOCKSIZE * 8) == 0)
- return -1;
- if (rijndaelKeySetupEnc(ctx->r_k3s, ctx->k3, AES_BLOCKSIZE * 8) == 0)
- return -1;
+ switch (keylen) {
+ case 16:
+ ctx->r_nr = aes_setenckey128(&r_ks, key);
+ break;
+ case 24:
+ ctx->r_nr = aes_setenckey192(&r_ks, key);
+ break;
+ case 32:
+ ctx->r_nr = aes_setenckey256(&r_ks, key);
+ break;
+ }
+ aes_enc(&r_ks, k1seed, k1, ctx->r_nr);
+ aes_enc(&r_ks, k2seed, ctx->k2, ctx->r_nr);
+ aes_enc(&r_ks, k3seed, ctx->k3, ctx->r_nr);
+ aes_setenckey128(&ctx->r_k1s, k1);
+
+ explicit_memset(&r_ks, 0, sizeof(r_ks));
+ explicit_memset(k1, 0, sizeof(k1));
return 0;
}
@@ -83,7 +91,7 @@ aes_xcbc_mac_loop(void *vctx, const uint
if (ctx->buflen == sizeof(ctx->buf)) {
for (i = 0; i < sizeof(ctx->e); i++)
ctx->buf[i] ^= ctx->e[i];
- rijndaelEncrypt(ctx->r_k1s, ctx->r_nr, ctx->buf, ctx->e);
+ aes_enc(&ctx->r_k1s, ctx->buf, ctx->e, ctx->r_nr);
ctx->buflen = 0;
}
if (ctx->buflen + len < sizeof(ctx->buf)) {
@@ -96,7 +104,7 @@ aes_xcbc_mac_loop(void *vctx, const uint
sizeof(ctx->buf) - ctx->buflen);
for (i = 0; i < sizeof(ctx->e); i++)
ctx->buf[i] ^= ctx->e[i];
- rijndaelEncrypt(ctx->r_k1s, ctx->r_nr, ctx->buf, ctx->e);
+ aes_enc(&ctx->r_k1s, ctx->buf, ctx->e, ctx->r_nr);
addr += sizeof(ctx->buf) - ctx->buflen;
ctx->buflen = 0;
}
@@ -105,7 +113,7 @@ aes_xcbc_mac_loop(void *vctx, const uint
memcpy(buf, addr, AES_BLOCKSIZE);
for (i = 0; i < sizeof(buf); i++)
buf[i] ^= ctx->e[i];
- rijndaelEncrypt(ctx->r_k1s, ctx->r_nr, buf, ctx->e);
+ aes_enc(&ctx->r_k1s, buf, ctx->e, ctx->r_nr);
addr += AES_BLOCKSIZE;
}
if (addr < ep) {
@@ -129,7 +137,7 @@ aes_xcbc_mac_result(uint8_t *addr, void
ctx->buf[i] ^= ctx->e[i];
ctx->buf[i] ^= ctx->k2[i];
}
- rijndaelEncrypt(ctx->r_k1s, ctx->r_nr, ctx->buf, digest);
+ aes_enc(&ctx->r_k1s, ctx->buf, digest, ctx->r_nr);
} else {
for (i = ctx->buflen; i < sizeof(ctx->buf); i++)
ctx->buf[i] = (i == ctx->buflen) ? 0x80 : 0x00;
@@ -137,7 +145,7 @@ aes_xcbc_mac_result(uint8_t *addr, void
ctx->buf[i] ^= ctx->e[i];
ctx->buf[i] ^= ctx->k3[i];
}
- rijndaelEncrypt(ctx->r_k1s, ctx->r_nr, ctx->buf, digest);
+ aes_enc(&ctx->r_k1s, ctx->buf, digest, ctx->r_nr);
}
memcpy(addr, digest, sizeof(digest));
diff -r b7131a05bde7 -r a97bc0abe60d sys/opencrypto/aesxcbcmac.h
--- a/sys/opencrypto/aesxcbcmac.h Sun Jun 14 19:57:23 2020 +0000
+++ b/sys/opencrypto/aesxcbcmac.h Sun Jun 14 19:59:13 2020 +0000
@@ -1,5 +1,8 @@
/* $NetBSD: aesxcbcmac.h,v 1.1 2011/05/24 19:10:09 drochner Exp $ */
+#ifndef _OPENCRYPTO_AESXCBCMAC_H
+#define _OPENCRYPTO_AESXCBCMAC_H
+
#include <sys/types.h>
#define AES_BLOCKSIZE 16
@@ -8,9 +11,7 @@ typedef struct {
u_int8_t e[AES_BLOCKSIZE];
u_int8_t buf[AES_BLOCKSIZE];
size_t buflen;
- u_int32_t r_k1s[(RIJNDAEL_MAXNR+1)*4];
- u_int32_t r_k2s[(RIJNDAEL_MAXNR+1)*4];
- u_int32_t r_k3s[(RIJNDAEL_MAXNR+1)*4];
+ struct aesenc r_k1s;
int r_nr; /* key-length-dependent number of rounds */
u_int8_t k2[AES_BLOCKSIZE];
u_int8_t k3[AES_BLOCKSIZE];
@@ -19,3 +20,5 @@ typedef struct {
int aes_xcbc_mac_init(void *, const u_int8_t *, u_int16_t);
int aes_xcbc_mac_loop(void *, const u_int8_t *, u_int16_t);
void aes_xcbc_mac_result(u_int8_t *, void *);
+
+#endif /* _OPENCRYPTO_AESXCBCMAC_H */
diff -r b7131a05bde7 -r a97bc0abe60d sys/opencrypto/cryptosoft.c
--- a/sys/opencrypto/cryptosoft.c Sun Jun 14 19:57:23 2020 +0000
+++ b/sys/opencrypto/cryptosoft.c Sun Jun 14 19:59:13 2020 +0000
@@ -831,8 +831,8 @@ swcr_newsession(void *arg, u_int32_t *si
case CRYPTO_SKIPJACK_CBC:
txf = &swcr_enc_xform_skipjack;
goto enccommon;
- case CRYPTO_RIJNDAEL128_CBC:
- txf = &swcr_enc_xform_rijndael128;
+ case CRYPTO_AES_CBC:
+ txf = &swcr_enc_xform_aes;
goto enccommon;
case CRYPTO_CAMELLIA_CBC:
txf = &swcr_enc_xform_camellia;
@@ -890,15 +890,13 @@ swcr_newsession(void *arg, u_int32_t *si
axf = &swcr_auth_hash_hmac_ripemd_160_96;
goto authcommon; /* leave this for safety */
authcommon:
- (*swd)->sw_ictx = malloc(axf->ctxsize,
- M_CRYPTO_DATA, M_NOWAIT);
+ (*swd)->sw_ictx = kmem_alloc(axf->ctxsize, KM_NOSLEEP);
if ((*swd)->sw_ictx == NULL) {
swcr_freesession(NULL, i);
return ENOBUFS;
}
- (*swd)->sw_octx = malloc(axf->ctxsize,
- M_CRYPTO_DATA, M_NOWAIT);
+ (*swd)->sw_octx = kmem_alloc(axf->ctxsize, KM_NOSLEEP);
if ((*swd)->sw_octx == NULL) {
swcr_freesession(NULL, i);
return ENOBUFS;
@@ -936,16 +934,15 @@ swcr_newsession(void *arg, u_int32_t *si
CTASSERT(SHA1_DIGEST_LENGTH >= MD5_DIGEST_LENGTH);
axf = &swcr_auth_hash_key_sha1;
auth2common:
- (*swd)->sw_ictx = malloc(axf->ctxsize,
- M_CRYPTO_DATA, M_NOWAIT);
+ (*swd)->sw_ictx = kmem_alloc(axf->ctxsize, KM_NOSLEEP);
if ((*swd)->sw_ictx == NULL) {
swcr_freesession(NULL, i);
return ENOBUFS;
}
/* Store the key so we can "append" it to the payload */
- (*swd)->sw_octx = malloc(cri->cri_klen / 8, M_CRYPTO_DATA,
- M_NOWAIT);
+ (*swd)->sw_octx = kmem_alloc(cri->cri_klen / 8,
+ KM_NOSLEEP);
if ((*swd)->sw_octx == NULL) {
swcr_freesession(NULL, i);
return ENOBUFS;
@@ -968,8 +965,7 @@ swcr_newsession(void *arg, u_int32_t *si
case CRYPTO_SHA1:
axf = &swcr_auth_hash_sha1;
auth3common:
- (*swd)->sw_ictx = malloc(axf->ctxsize,
- M_CRYPTO_DATA, M_NOWAIT);
+ (*swd)->sw_ictx = kmem_alloc(axf->ctxsize, KM_NOSLEEP);
if ((*swd)->sw_ictx == NULL) {
swcr_freesession(NULL, i);
return ENOBUFS;
@@ -991,8 +987,7 @@ swcr_newsession(void *arg, u_int32_t *si
case CRYPTO_AES_256_GMAC:
axf = &swcr_auth_hash_gmac_aes_256;
auth4common:
- (*swd)->sw_ictx = malloc(axf->ctxsize,
- M_CRYPTO_DATA, M_NOWAIT);
+ (*swd)->sw_ictx = kmem_alloc(axf->ctxsize, KM_NOSLEEP);
if ((*swd)->sw_ictx == NULL) {
swcr_freesession(NULL, i);
return ENOBUFS;
@@ -1057,7 +1052,7 @@ swcr_freesession(void *arg, u_int64_t ti
case CRYPTO_BLF_CBC:
case CRYPTO_CAST_CBC:
case CRYPTO_SKIPJACK_CBC:
- case CRYPTO_RIJNDAEL128_CBC:
+ case CRYPTO_AES_CBC:
case CRYPTO_CAMELLIA_CBC:
case CRYPTO_AES_CTR:
case CRYPTO_AES_GCM_16:
@@ -1083,11 +1078,11 @@ swcr_freesession(void *arg, u_int64_t ti
if (swd->sw_ictx) {
explicit_memset(swd->sw_ictx, 0, axf->ctxsize);
- free(swd->sw_ictx, M_CRYPTO_DATA);
+ kmem_free(swd->sw_ictx, axf->ctxsize);
}
if (swd->sw_octx) {
explicit_memset(swd->sw_octx, 0, axf->ctxsize);
- free(swd->sw_octx, M_CRYPTO_DATA);
+ kmem_free(swd->sw_octx, axf->ctxsize);
}
break;
@@ -1097,11 +1092,11 @@ swcr_freesession(void *arg, u_int64_t ti
if (swd->sw_ictx) {
explicit_memset(swd->sw_ictx, 0, axf->ctxsize);
- free(swd->sw_ictx, M_CRYPTO_DATA);
+ kmem_free(swd->sw_ictx, axf->ctxsize);
}
if (swd->sw_octx) {
explicit_memset(swd->sw_octx, 0, swd->sw_klen);
- free(swd->sw_octx, M_CRYPTO_DATA);
+ kmem_free(swd->sw_octx, axf->ctxsize);
}
break;
@@ -1115,7 +1110,7 @@ swcr_freesession(void *arg, u_int64_t ti
if (swd->sw_ictx) {
explicit_memset(swd->sw_ictx, 0, axf->ctxsize);
- free(swd->sw_ictx, M_CRYPTO_DATA);
+ kmem_free(swd->sw_ictx, axf->ctxsize);
}
break;
@@ -1193,7 +1188,7 @@ swcr_process(void *arg, struct cryptop *
case CRYPTO_BLF_CBC:
case CRYPTO_CAST_CBC:
case CRYPTO_SKIPJACK_CBC:
- case CRYPTO_RIJNDAEL128_CBC:
+ case CRYPTO_AES_CBC:
case CRYPTO_CAMELLIA_CBC:
case CRYPTO_AES_CTR:
if ((crp->crp_etype = swcr_encdec(crd, sw,
@@ -1294,7 +1289,7 @@ swcr_init(void)
REGISTER(CRYPTO_AES_128_GMAC);
REGISTER(CRYPTO_AES_192_GMAC);
REGISTER(CRYPTO_AES_256_GMAC);
- REGISTER(CRYPTO_RIJNDAEL128_CBC);
+ REGISTER(CRYPTO_AES_CBC);
REGISTER(CRYPTO_DEFLATE_COMP);
REGISTER(CRYPTO_DEFLATE_COMP_NOGROW);
REGISTER(CRYPTO_GZIP_COMP);
diff -r b7131a05bde7 -r a97bc0abe60d sys/opencrypto/cryptosoft_xform.c
--- a/sys/opencrypto/cryptosoft_xform.c Sun Jun 14 19:57:23 2020 +0000
+++ b/sys/opencrypto/cryptosoft_xform.c Sun Jun 14 19:59:13 2020 +0000
@@ -42,21 +42,22 @@
#include <sys/cdefs.h>
__KERNEL_RCSID(1, "$NetBSD: cryptosoft_xform.c,v 1.28 2019/10/12 00:49:30 christos Exp $");
-#include <crypto/blowfish/blowfish.h>
-#include <crypto/cast128/cast128.h>
-#include <crypto/des/des.h>
-#include <crypto/rijndael/rijndael.h>
-#include <crypto/skipjack/skipjack.h>
-#include <crypto/camellia/camellia.h>
-
-#include <opencrypto/deflate.h>
-
+#include <sys/cprng.h>
+#include <sys/kmem.h>
#include <sys/md5.h>
#include <sys/rmd160.h>
#include <sys/sha1.h>
#include <sys/sha2.h>
-#include <sys/cprng.h>
+
+#include <crypto/aes/aes.h>
+#include <crypto/blowfish/blowfish.h>
+#include <crypto/camellia/camellia.h>
+#include <crypto/cast128/cast128.h>
+#include <crypto/des/des.h>
+#include <crypto/skipjack/skipjack.h>
+
#include <opencrypto/aesxcbcmac.h>
+#include <opencrypto/deflate.h>
#include <opencrypto/gmac.h>
struct swcr_auth_hash {
@@ -94,7 +95,7 @@ static int des3_setkey(u_int8_t **, cons
static int blf_setkey(u_int8_t **, const u_int8_t *, int);
static int cast5_setkey(u_int8_t **, const u_int8_t *, int);
static int skipjack_setkey(u_int8_t **, const u_int8_t *, int);
-static int rijndael128_setkey(u_int8_t **, const u_int8_t *, int);
+static int aes_setkey(u_int8_t **, const u_int8_t *, int);
static int cml_setkey(u_int8_t **, const u_int8_t *, int);
static int aes_ctr_setkey(u_int8_t **, const u_int8_t *, int);
static int aes_gmac_setkey(u_int8_t **, const u_int8_t *, int);
@@ -103,14 +104,14 @@ static void des3_encrypt(void *, u_int8_
static void blf_encrypt(void *, u_int8_t *);
static void cast5_encrypt(void *, u_int8_t *);
static void skipjack_encrypt(void *, u_int8_t *);
-static void rijndael128_encrypt(void *, u_int8_t *);
+static void aes_encrypt(void *, u_int8_t *);
static void cml_encrypt(void *, u_int8_t *);
static void des1_decrypt(void *, u_int8_t *);
static void des3_decrypt(void *, u_int8_t *);
static void blf_decrypt(void *, u_int8_t *);
static void cast5_decrypt(void *, u_int8_t *);
static void skipjack_decrypt(void *, u_int8_t *);
-static void rijndael128_decrypt(void *, u_int8_t *);
+static void aes_decrypt(void *, u_int8_t *);
static void cml_decrypt(void *, u_int8_t *);
static void aes_ctr_crypt(void *, u_int8_t *);
static void des1_zerokey(u_int8_t **);
@@ -118,7 +119,7 @@ static void des3_zerokey(u_int8_t **);
static void blf_zerokey(u_int8_t **);
static void cast5_zerokey(u_int8_t **);
static void skipjack_zerokey(u_int8_t **);
-static void rijndael128_zerokey(u_int8_t **);
+static void aes_zerokey(u_int8_t **);
static void cml_zerokey(u_int8_t **);
static void aes_ctr_zerokey(u_int8_t **);
static void aes_gmac_zerokey(u_int8_t **);
@@ -204,12 +205,12 @@ static const struct swcr_enc_xform swcr_
NULL
};
-static const struct swcr_enc_xform swcr_enc_xform_rijndael128 = {
+static const struct swcr_enc_xform swcr_enc_xform_aes = {
&enc_xform_rijndael128,
- rijndael128_encrypt,
- rijndael128_decrypt,
- rijndael128_setkey,
- rijndael128_zerokey,
+ aes_encrypt,
+ aes_decrypt,
+ aes_setkey,
+ aes_zerokey,
NULL
};
@@ -599,38 +600,68 @@ skipjack_zerokey(u_int8_t **sched)
*sched = NULL;
}
+struct aes_ctx {
+ struct aesenc enc;
+ struct aesdec dec;
+ uint32_t nr;
+};
+
static void
-rijndael128_encrypt(void *key, u_int8_t *blk)
+aes_encrypt(void *key, u_int8_t *blk)
{
- rijndael_encrypt((rijndael_ctx *) key, (u_char *) blk, (u_char *) blk);
+ struct aes_ctx *ctx = key;
+
+ aes_enc(&ctx->enc, blk, blk, ctx->nr);
}
static void
-rijndael128_decrypt(void *key, u_int8_t *blk)
+aes_decrypt(void *key, u_int8_t *blk)
{
- rijndael_decrypt((rijndael_ctx *) key, (u_char *) blk,
- (u_char *) blk);
+ struct aes_ctx *ctx = key;
+
+ aes_dec(&ctx->dec, blk, blk, ctx->nr);
}
static int
-rijndael128_setkey(u_int8_t **sched, const u_int8_t *key, int len)
+aes_setkey(u_int8_t **sched, const u_int8_t *key, int len)
{
+ struct aes_ctx *ctx;
if (len != 16 && len != 24 && len != 32)
return EINVAL;
- *sched = malloc(sizeof(rijndael_ctx), M_CRYPTO_DATA,
- M_NOWAIT|M_ZERO);
- if (*sched == NULL)
+ ctx = kmem_zalloc(sizeof(*ctx), KM_NOSLEEP);
+ if (ctx == NULL)
return ENOMEM;
- rijndael_set_key((rijndael_ctx *) *sched, key, len * 8);
+
+ switch (len) {
+ case 16:
+ aes_setenckey128(&ctx->enc, key);
+ aes_setdeckey128(&ctx->dec, key);
+ ctx->nr = AES_128_NROUNDS;
+ break;
+ case 24:
+ aes_setenckey192(&ctx->enc, key);
+ aes_setdeckey192(&ctx->dec, key);
+ ctx->nr = AES_192_NROUNDS;
+ break;
+ case 32:
+ aes_setenckey256(&ctx->enc, key);
+ aes_setdeckey256(&ctx->dec, key);
+ ctx->nr = AES_256_NROUNDS;
+ break;
+ }
+
+ *sched = (void *)ctx;
return 0;
}
static void
-rijndael128_zerokey(u_int8_t **sched)
+aes_zerokey(u_int8_t **sched)
{
- memset(*sched, 0, sizeof(rijndael_ctx));
- free(*sched, M_CRYPTO_DATA);
+ struct aes_ctx *ctx = (void *)*sched;
+
+ explicit_memset(ctx, 0, sizeof(*ctx));
+ kmem_free(ctx, sizeof(*ctx));
*sched = NULL;
}
@@ -678,7 +709,7 @@ cml_zerokey(u_int8_t **sched)
struct aes_ctr_ctx {
/* need only encryption half */
- u_int32_t ac_ek[4*(RIJNDAEL_MAXNR + 1)];
+ struct aesenc ac_ek;
u_int8_t ac_block[AESCTR_BLOCKSIZE];
int ac_nr;
struct {
@@ -699,10 +730,10 @@ aes_ctr_crypt(void *key, u_int8_t *blk)
i >= AESCTR_NONCESIZE + AESCTR_IVSIZE; i--)
if (++ctx->ac_block[i]) /* continue on overflow */
break;
- rijndaelEncrypt(ctx->ac_ek, ctx->ac_nr, ctx->ac_block, keystream);
+ aes_enc(&ctx->ac_ek, ctx->ac_block, keystream, ctx->ac_nr);
for (i = 0; i < AESCTR_BLOCKSIZE; i++)
blk[i] ^= keystream[i];
- memset(keystream, 0, sizeof(keystream));
+ explicit_memset(keystream, 0, sizeof(keystream));
}
int
@@ -713,13 +744,20 @@ aes_ctr_setkey(u_int8_t **sched, const u
if (len < AESCTR_NONCESIZE)
return EINVAL;
- ctx = malloc(sizeof(struct aes_ctr_ctx), M_CRYPTO_DATA,
- M_NOWAIT|M_ZERO);
+ ctx = kmem_zalloc(sizeof(*ctx), KM_NOSLEEP);
if (!ctx)
return ENOMEM;
- ctx->ac_nr = rijndaelKeySetupEnc(ctx->ac_ek, (const u_char *)key,
- (len - AESCTR_NONCESIZE) * 8);
- if (!ctx->ac_nr) { /* wrong key len */
+ switch (len) {
+ case 16 + AESCTR_NONCESIZE:
+ ctx->ac_nr = aes_setenckey128(&ctx->ac_ek, key);
+ break;
+ case 24 + AESCTR_NONCESIZE:
+ ctx->ac_nr = aes_setenckey192(&ctx->ac_ek, key);
+ break;
+ case 32 + AESCTR_NONCESIZE:
+ ctx->ac_nr = aes_setenckey256(&ctx->ac_ek, key);
+ break;
+ default:
aes_ctr_zerokey((u_int8_t **)&ctx);
return EINVAL;
}
@@ -733,9 +771,10 @@ aes_ctr_setkey(u_int8_t **sched, const u
void
aes_ctr_zerokey(u_int8_t **sched)
{
+ struct aes_ctr_ctx *ctx = (void *)*sched;
- memset(*sched, 0, sizeof(struct aes_ctr_ctx));
- free(*sched, M_CRYPTO_DATA);
+ explicit_memset(ctx, 0, sizeof(*ctx));
+ kmem_free(ctx, sizeof(*ctx));
*sched = NULL;
}
@@ -783,8 +822,7 @@ aes_gmac_setkey(u_int8_t **sched, const
{
struct aes_gmac_ctx *ctx;
- ctx = malloc(sizeof(struct aes_gmac_ctx), M_CRYPTO_DATA,
- M_NOWAIT|M_ZERO);
+ ctx = kmem_zalloc(sizeof(*ctx), KM_NOSLEEP);
if (!ctx)
return ENOMEM;
@@ -797,8 +835,9 @@ aes_gmac_setkey(u_int8_t **sched, const
void
aes_gmac_zerokey(u_int8_t **sched)
{
+ struct aes_gmac_ctx *ctx = (void *)*sched;
- free(*sched, M_CRYPTO_DATA);
+ kmem_free(ctx, sizeof(*ctx));
*sched = NULL;
}
diff -r b7131a05bde7 -r a97bc0abe60d sys/opencrypto/files.opencrypto
--- a/sys/opencrypto/files.opencrypto Sun Jun 14 19:57:23 2020 +0000
+++ b/sys/opencrypto/files.opencrypto Sun Jun 14 19:59:13 2020 +0000
@@ -7,7 +7,7 @@
# that use the opencrypto framework, should list opencrypto as a dependency
# to pull in the framework.
-define opencrypto: rijndael
+define opencrypto: aes
file opencrypto/criov.c opencrypto
file opencrypto/xform.c opencrypto
file opencrypto/crypto.c opencrypto
diff -r b7131a05bde7 -r a97bc0abe60d sys/opencrypto/gmac.c
--- a/sys/opencrypto/gmac.c Sun Jun 14 19:57:23 2020 +0000
+++ b/sys/opencrypto/gmac.c Sun Jun 14 19:59:13 2020 +0000
@@ -26,7 +26,8 @@
#include <sys/param.h>
#include <sys/systm.h>
-#include <crypto/rijndael/rijndael.h>
+#include <crypto/aes/aes.h>
+
#include <opencrypto/gmac.h>
void ghash_gfmul(const GMAC_INT *, const GMAC_INT *, GMAC_INT *);
@@ -114,13 +115,25 @@ AES_GMAC_Setkey(AES_GMAC_CTX *ctx, const
{
int i;
- ctx->rounds = rijndaelKeySetupEnc(ctx->K, (const u_char *)key,
- (klen - AESCTR_NONCESIZE) * 8);
+ switch (klen) {
+ case 16 + AESCTR_NONCESIZE:
+ ctx->rounds = aes_setenckey128(&ctx->K, key);
+ break;
+ case 24 + AESCTR_NONCESIZE:
+ ctx->rounds = aes_setenckey192(&ctx->K, key);
+ break;
+ case 32 + AESCTR_NONCESIZE:
+ ctx->rounds = aes_setenckey256(&ctx->K, key);
+ break;
+ default:
+ panic("invalid AES_GMAC_Setkey length in bytes: %u",
+ (unsigned)klen);
+ }
/* copy out salt to the counter block */
memcpy(ctx->J, key + klen - AESCTR_NONCESIZE, AESCTR_NONCESIZE);
/* prepare a hash subkey */
- rijndaelEncrypt(ctx->K, ctx->rounds, (void *)ctx->ghash.H,
- (void *)ctx->ghash.H);
+ aes_enc(&ctx->K, (const void *)ctx->ghash.H, (void *)ctx->ghash.H,
+ ctx->rounds);
#if GMAC_INTLEN == 8
for (i = 0; i < 2; i++)
ctx->ghash.H[i] = be64toh(ctx->ghash.H[i]);
@@ -163,7 +176,7 @@ AES_GMAC_Final(uint8_t digest[GMAC_DIGES
/* do one round of GCTR */
ctx->J[GMAC_BLOCK_LEN - 1] = 1;
- rijndaelEncrypt(ctx->K, ctx->rounds, ctx->J, keystream);
+ aes_enc(&ctx->K, ctx->J, keystream, ctx->rounds);
k = keystream;
d = digest;
#if GMAC_INTLEN == 8
diff -r b7131a05bde7 -r a97bc0abe60d sys/opencrypto/gmac.h
--- a/sys/opencrypto/gmac.h Sun Jun 14 19:57:23 2020 +0000
+++ b/sys/opencrypto/gmac.h Sun Jun 14 19:59:13 2020 +0000
@@ -20,7 +20,7 @@
#ifndef _GMAC_H_
#define _GMAC_H_
-#include <crypto/rijndael/rijndael.h>
+#include <crypto/aes/aes.h>
#define GMAC_BLOCK_LEN 16
#define GMAC_DIGEST_LEN 16
@@ -41,7 +41,7 @@ typedef struct _GHASH_CTX {
typedef struct _AES_GMAC_CTX {
GHASH_CTX ghash;
- uint32_t K[4*(RIJNDAEL_MAXNR + 1)];
+ struct aesenc K;
uint8_t J[GMAC_BLOCK_LEN]; /* counter block */
int rounds;
} AES_GMAC_CTX;
diff -r b7131a05bde7 -r a97bc0abe60d sys/opencrypto/xform.c
--- a/sys/opencrypto/xform.c Sun Jun 14 19:57:23 2020 +0000
+++ b/sys/opencrypto/xform.c Sun Jun 14 19:59:13 2020 +0000
@@ -145,8 +145,8 @@ const struct enc_xform enc_xform_skipjac
};
const struct enc_xform enc_xform_rijndael128 = {
- .type = CRYPTO_RIJNDAEL128_CBC,
- .name = "Rijndael-128/AES",
+ .type = CRYPTO_AES_CBC,
+ .name = "AES",
.blocksize = 16,
.ivsize = 16,
.minkey = 16,
# HG changeset patch
# User Taylor R Campbell <riastradh%NetBSD.org@localhost>
# Date 1592251492 0
# Mon Jun 15 20:04:52 2020 +0000
# Branch trunk
# Node ID 401917dcba81934295869fc7e7a3c4c7755ff186
# Parent a97bc0abe60d9a77b10f27d63951d60b0be7b987
# EXP-Topic riastradh-kernelcrypto
cgd(4): Print which key size is broken when a self-test fails.
Can be gleaned from the test index but this is a little quicker.
diff -r a97bc0abe60d -r 401917dcba81 sys/dev/cgd.c
--- a/sys/dev/cgd.c Sun Jun 14 19:59:13 2020 +0000
+++ b/sys/dev/cgd.c Mon Jun 15 20:04:52 2020 +0000
@@ -1699,8 +1699,8 @@ cgd_selftest(void)
if (memcmp(buf, selftests[i].ctxt, txtlen) != 0) {
hexdump(printf, "was", buf, txtlen);
hexdump(printf, "exp", selftests[i].ctxt, txtlen);
- panic("cgd %s encryption is broken [%zu]",
- selftests[i].alg, i);
+ panic("cgd %s-%d encryption is broken [%zu]",
+ selftests[i].alg, keylen, i);
}
cgd_cipher(&sc, buf, buf, txtlen, selftests[i].blkno,
@@ -1708,8 +1708,8 @@ cgd_selftest(void)
if (memcmp(buf, selftests[i].ptxt, txtlen) != 0) {
hexdump(printf, "was", buf, txtlen);
hexdump(printf, "exp", selftests[i].ptxt, txtlen);
- panic("cgd %s decryption is broken [%zu]",
- selftests[i].alg, i);
+ panic("cgd %s-%d decryption is broken [%zu]",
+ selftests[i].alg, keylen, i);
}
kmem_free(buf, txtlen);
# HG changeset patch
# User Taylor R Campbell <riastradh%NetBSD.org@localhost>
# Date 1592251571 0
# Mon Jun 15 20:06:11 2020 +0000
# Branch trunk
# Node ID 375cb5e0f08e74a884c537b40ac52fe31c512837
# Parent 401917dcba81934295869fc7e7a3c4c7755ff186
# EXP-Topic riastradh-kernelcrypto
cgd(4): Align IVs on the stack.
This will make it easier for some hardware crypto support.
diff -r 401917dcba81 -r 375cb5e0f08e sys/dev/cgd.c
--- a/sys/dev/cgd.c Mon Jun 15 20:04:52 2020 +0000
+++ b/sys/dev/cgd.c Mon Jun 15 20:06:11 2020 +0000
@@ -1587,7 +1587,7 @@ cgd_cipher(struct cgd_softc *sc, void *d
cfunc_cipher *cipher = sc->sc_cfuncs->cf_cipher;
size_t blocksize = sc->sc_cdata.cf_blocksize;
size_t todo;
- char blkno_buf[CGD_MAXBLOCKSIZE];
+ char blkno_buf[CGD_MAXBLOCKSIZE] __aligned(CGD_BLOCKALIGN);
DPRINTF_FOLLOW(("cgd_cipher() dir=%d\n", dir));
diff -r 401917dcba81 -r 375cb5e0f08e sys/dev/cgd_crypto.c
--- a/sys/dev/cgd_crypto.c Mon Jun 15 20:04:52 2020 +0000
+++ b/sys/dev/cgd_crypto.c Mon Jun 15 20:06:11 2020 +0000
@@ -167,7 +167,7 @@ cgd_cipher_aes_cbc(void *privdata, void
const void *blkno, int dir)
{
struct aes_privdata *apd = privdata;
- uint8_t iv[CGD_AES_BLOCK_SIZE] = {0};
+ uint8_t iv[CGD_AES_BLOCK_SIZE] __aligned(CGD_AES_BLOCK_SIZE) = {0};
/* Compute the CBC IV as AES_k(blkno). */
aes_enc(&apd->ap_enckey, blkno, iv, apd->ap_nrounds);
diff -r 401917dcba81 -r 375cb5e0f08e sys/dev/cgd_crypto.h
--- a/sys/dev/cgd_crypto.h Mon Jun 15 20:04:52 2020 +0000
+++ b/sys/dev/cgd_crypto.h Mon Jun 15 20:06:11 2020 +0000
@@ -39,6 +39,8 @@
#define CGD_3DES_BLOCK_SIZE 8
#define CGD_BF_BLOCK_SIZE 8
+#define CGD_BLOCKALIGN 16
+
typedef void *(cfunc_init)(size_t, const void *, size_t *);
typedef void (cfunc_destroy)(void *);
typedef void (cfunc_cipher)(void *, void *, const void *, size_t,
# HG changeset patch
# User Taylor R Campbell <riastradh%NetBSD.org@localhost>
# Date 1592237969 0
# Mon Jun 15 16:19:29 2020 +0000
# Branch trunk
# Node ID 28973955038a44907a800f3333d8dec03c77c8b2
# Parent 375cb5e0f08e74a884c537b40ac52fe31c512837
# EXP-Topic riastradh-kernelcrypto
Provide the standard AES key schedule.
Different AES implementations prefer different variations on it, but
some of them -- notably VIA -- require the standard key schedule to
be available and don't provide hardware support for computing it
themselves. So adapt BearSSL's logic to generate the standard key
schedule (and decryption keys, with InvMixColumns), rather than the
bitsliced key schedule that BearSSL uses natively.
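(For reviewers checking the new code against the spec: with Nk =
keylen/4 32-bit words and w[0..Nk-1] loaded little-endian from the
key, the standard FIPS-197 schedule is the recurrence

	w[i] = w[i-Nk] ^ SubWord(RotWord(w[i-1])) ^ Rcon[i/Nk]
					 if i mod Nk == 0,
	w[i] = w[i-Nk] ^ SubWord(w[i-1]) if Nk > 6 and i mod Nk == 4,
	w[i] = w[i-Nk] ^ w[i-1]		 otherwise,

for i = Nk, ..., 4*(nrounds + 1) - 1. br_aes_ct_keysched_stdenc
below is a direct transcription of that, and
br_aes_ct_keysched_stddec reverses the round order and pushes the
middle round keys through InvMixColumns, reusing the bitsliced
inv_mix_columns for the latter.)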
diff -r 375cb5e0f08e -r 28973955038a sys/crypto/aes/aes_bear.h
--- a/sys/crypto/aes/aes_bear.h Mon Jun 15 20:06:11 2020 +0000
+++ b/sys/crypto/aes/aes_bear.h Mon Jun 15 16:19:29 2020 +0000
@@ -45,6 +45,12 @@ void br_aes_ct_skey_expand(uint32_t *, u
void br_aes_ct_bitslice_encrypt(unsigned, const uint32_t *, uint32_t *);
void br_aes_ct_bitslice_decrypt(unsigned, const uint32_t *, uint32_t *);
+/* NetBSD additions */
+
+void br_aes_ct_inv_mix_columns(uint32_t *);
+u_int br_aes_ct_keysched_stdenc(uint32_t *, const void *, size_t);
+u_int br_aes_ct_keysched_stddec(uint32_t *, const void *, size_t);
+
extern struct aes_impl aes_bear_impl;
#endif /* _CRYPTO_AES_AES_BEAR_H */
diff -r 375cb5e0f08e -r 28973955038a sys/crypto/aes/aes_ct.c
--- a/sys/crypto/aes/aes_ct.c Mon Jun 15 20:06:11 2020 +0000
+++ b/sys/crypto/aes/aes_ct.c Mon Jun 15 16:19:29 2020 +0000
@@ -29,6 +29,8 @@
#include <sys/types.h>
+#include <lib/libkern/libkern.h>
+
#include <crypto/aes/aes_bear.h>
/* see inner.h */
@@ -333,3 +335,92 @@ br_aes_ct_skey_expand(uint32_t *skey,
skey[v + 1] = y | (y >> 1);
}
}
+
+/* NetBSD additions, for computing the standard AES key schedule */
+
+unsigned
+br_aes_ct_keysched_stdenc(uint32_t *skey, const void *key, size_t key_len)
+{
+ unsigned num_rounds;
+ int i, j, k, nk, nkf;
+ uint32_t tmp;
+
+ switch (key_len) {
+ case 16:
+ num_rounds = 10;
+ break;
+ case 24:
+ num_rounds = 12;
+ break;
+ case 32:
+ num_rounds = 14;
+ break;
+ default:
+ /* abort(); */
+ return 0;
+ }
+ nk = (int)(key_len >> 2);
+ nkf = (int)((num_rounds + 1) << 2);
+ tmp = 0;
+ for (i = 0; i < nk; i ++) {
+ tmp = br_dec32le((const unsigned char *)key + (i << 2));
+ skey[i] = tmp;
+ }
+ for (i = nk, j = 0, k = 0; i < nkf; i ++) {
+ if (j == 0) {
+ tmp = (tmp << 24) | (tmp >> 8);
+ tmp = sub_word(tmp) ^ Rcon[k];
+ } else if (nk > 6 && j == 4) {
+ tmp = sub_word(tmp);
+ }
+ tmp ^= skey[i - nk];
+ skey[i] = tmp;
+ if (++ j == nk) {
+ j = 0;
+ k ++;
+ }
+ }
+ return num_rounds;
+}
+
+unsigned
+br_aes_ct_keysched_stddec(uint32_t *skey, const void *key, size_t key_len)
+{
+ uint32_t tkey[60];
+ uint32_t q[8];
+ unsigned num_rounds;
+ unsigned i;
+
+ num_rounds = br_aes_ct_keysched_stdenc(skey, key, key_len);
+ if (num_rounds == 0)
+ return 0;
+
+ tkey[0] = skey[4*num_rounds + 0];
+ tkey[1] = skey[4*num_rounds + 1];
+ tkey[2] = skey[4*num_rounds + 2];
+ tkey[3] = skey[4*num_rounds + 3];
+ for (i = 1; i < num_rounds; i++) {
+ q[2*0] = skey[4*i + 0];
+ q[2*1] = skey[4*i + 1];
+ q[2*2] = skey[4*i + 2];
+ q[2*3] = skey[4*i + 3];
+ q[1] = q[3] = q[5] = q[7] = 0;
+
+ br_aes_ct_ortho(q);
+ br_aes_ct_inv_mix_columns(q);
+ br_aes_ct_ortho(q);
+
+ tkey[4*(num_rounds - i) + 0] = q[2*0];
+ tkey[4*(num_rounds - i) + 1] = q[2*1];
+ tkey[4*(num_rounds - i) + 2] = q[2*2];
+ tkey[4*(num_rounds - i) + 3] = q[2*3];
+ }
+ tkey[4*num_rounds + 0] = skey[0];
+ tkey[4*num_rounds + 1] = skey[1];
+ tkey[4*num_rounds + 2] = skey[2];
+ tkey[4*num_rounds + 3] = skey[3];
+
+ memcpy(skey, tkey, 4*(num_rounds + 1)*sizeof(uint32_t));
+ explicit_memset(tkey, 0, 4*(num_rounds + 1)*sizeof(uint32_t));
+ return num_rounds;
+}
diff -r 375cb5e0f08e -r 28973955038a sys/crypto/aes/aes_ct_dec.c
--- a/sys/crypto/aes/aes_ct_dec.c Mon Jun 15 20:06:11 2020 +0000
+++ b/sys/crypto/aes/aes_ct_dec.c Mon Jun 15 16:19:29 2020 +0000
@@ -175,3 +175,11 @@ br_aes_ct_bitslice_decrypt(unsigned num_
br_aes_ct_bitslice_invSbox(q);
add_round_key(q, skey);
}
+
+/* NetBSD addition, for generating compatible decryption keys */
+void
+br_aes_ct_inv_mix_columns(uint32_t *q)
+{
+
+ inv_mix_columns(q);
+}
diff -r 375cb5e0f08e -r 28973955038a sys/crypto/aes/aes_impl.c
--- a/sys/crypto/aes/aes_impl.c Mon Jun 15 20:06:11 2020 +0000
+++ b/sys/crypto/aes/aes_impl.c Mon Jun 15 16:19:29 2020 +0000
@@ -38,6 +38,8 @@
#include <crypto/aes/aes.h>
#include <crypto/aes/aes_bear.h> /* default implementation */
+static int aes_selftest_stdkeysched(void);
+
static const struct aes_impl *aes_md_impl __read_mostly;
static const struct aes_impl *aes_impl __read_mostly;
@@ -61,6 +63,9 @@ aes_select(void)
KASSERT(aes_impl == NULL);
+ if (aes_selftest_stdkeysched())
+ panic("AES is busted");
+
if (aes_md_impl) {
if (aes_selftest(aes_md_impl))
aprint_error("aes: self-test failed: %s\n",
@@ -254,3 +259,131 @@ aes_xts_dec(struct aesdec *dec, const ui
aes_guarantee_selected();
aes_impl->ai_xts_dec(dec, in, out, nbytes, tweak, nrounds);
}
+
+/*
+ * Known-answer self-tests for the standard key schedule.
+ */
+static int
+aes_selftest_stdkeysched(void)
+{
+ static const uint8_t key[32] = {
+ 0x00,0x01,0x02,0x03,0x04,0x05,0x06,0x07,
+ 0x08,0x09,0x0a,0x0b,0x0c,0x0d,0x0e,0x0f,
+ 0x10,0x11,0x12,0x13,0x14,0x15,0x16,0x17,
+ 0x18,0x19,0x1a,0x1b,0x1c,0x1d,0x1e,0x1f,
+ };
+ static const uint32_t rk128enc[] = {
+ 0x03020100, 0x07060504, 0x0b0a0908, 0x0f0e0d0c,
+ 0xfd74aad6, 0xfa72afd2, 0xf178a6da, 0xfe76abd6,
+ 0x0bcf92b6, 0xf1bd3d64, 0x00c59bbe, 0xfeb33068,
+ 0x4e74ffb6, 0xbfc9c2d2, 0xbf0c596c, 0x41bf6904,
+ 0xbcf7f747, 0x033e3595, 0xbc326cf9, 0xfd8d05fd,
+ 0xe8a3aa3c, 0xeb9d9fa9, 0x57aff350, 0xaa22f6ad,
+ 0x7d0f395e, 0x9692a6f7, 0xc13d55a7, 0x6b1fa30a,
+ 0x1a70f914, 0x8ce25fe3, 0x4ddf0a44, 0x26c0a94e,
+ 0x35874347, 0xb9651ca4, 0xf4ba16e0, 0xd27abfae,
+ 0xd1329954, 0x685785f0, 0x9ced9310, 0x4e972cbe,
+ 0x7f1d1113, 0x174a94e3, 0x8ba707f3, 0xc5302b4d,
+ };
+ static const uint32_t rk192enc[] = {
+ 0x03020100, 0x07060504, 0x0b0a0908, 0x0f0e0d0c,
+ 0x13121110, 0x17161514, 0xf9f24658, 0xfef4435c,
+ 0xf5fe4a54, 0xfaf04758, 0xe9e25648, 0xfef4435c,
+ 0xb349f940, 0x4dbdba1c, 0xb843f048, 0x42b3b710,
+ 0xab51e158, 0x55a5a204, 0x41b5ff7e, 0x0c084562,
+ 0xb44bb52a, 0xf6f8023a, 0x5da9e362, 0x080c4166,
+ 0x728501f5, 0x7e8d4497, 0xcac6f1bd, 0x3c3ef387,
+ 0x619710e5, 0x699b5183, 0x9e7c1534, 0xe0f151a3,
+ 0x2a37a01e, 0x16095399, 0x779e437c, 0x1e0512ff,
+ 0x880e7edd, 0x68ff2f7e, 0x42c88f60, 0x54c1dcf9,
+ 0x235f9f85, 0x3d5a8d7a, 0x5229c0c0, 0x3ad6efbe,
+ 0x781e60de, 0x2cdfbc27, 0x0f8023a2, 0x32daaed8,
+ 0x330a97a4, 0x09dc781a, 0x71c218c4, 0x5d1da4e3,
+ };
+ static const uint32_t rk256enc[] = {
+ 0x03020100, 0x07060504, 0x0b0a0908, 0x0f0e0d0c,
+ 0x13121110, 0x17161514, 0x1b1a1918, 0x1f1e1d1c,
+ 0x9fc273a5, 0x98c476a1, 0x93ce7fa9, 0x9cc072a5,
+ 0xcda85116, 0xdabe4402, 0xc1a45d1a, 0xdeba4006,
+ 0xf0df87ae, 0x681bf10f, 0xfbd58ea6, 0x6715fc03,
+ 0x48f1e16d, 0x924fa56f, 0x53ebf875, 0x8d51b873,
+ 0x7f8256c6, 0x1799a7c9, 0xec4c296f, 0x8b59d56c,
+ 0x753ae23d, 0xe7754752, 0xb49ebf27, 0x39cf0754,
+ 0x5f90dc0b, 0x48097bc2, 0xa44552ad, 0x2f1c87c1,
+ 0x60a6f545, 0x87d3b217, 0x334d0d30, 0x0a820a64,
+ 0x1cf7cf7c, 0x54feb4be, 0xf0bbe613, 0xdfa761d2,
+ 0xfefa1af0, 0x7929a8e7, 0x4a64a5d7, 0x40e6afb3,
+ 0x71fe4125, 0x2500f59b, 0xd5bb1388, 0x0a1c725a,
+ 0x99665a4e, 0xe04ff2a9, 0xaa2b577e, 0xeacdf8cd,
+ 0xcc79fc24, 0xe97909bf, 0x3cc21a37, 0x36de686d,
+ };
+ static const uint32_t rk128dec[] = {
+ 0x7f1d1113, 0x174a94e3, 0x8ba707f3, 0xc5302b4d,
+ 0xbe29aa13, 0xf6af8f9c, 0x80f570f7, 0x03bff700,
+ 0x63a46213, 0x4886258f, 0x765aff6b, 0x834a87f7,
+ 0x74fc828d, 0x2b22479c, 0x3edcdae4, 0xf510789c,
+ 0x8d09e372, 0x5fdec511, 0x15fe9d78, 0xcbcca278,
+ 0x2710c42e, 0xd2d72663, 0x4a205869, 0xde323f00,
+ 0x04f5a2a8, 0xf5c7e24d, 0x98f77e0a, 0x94126769,
+ 0x91e3c6c7, 0xf13240e5, 0x6d309c47, 0x0ce51963,
+ 0x9902dba0, 0x60d18622, 0x9c02dca2, 0x61d58524,
+ 0xf0df568c, 0xf9d35d82, 0xfcd35a80, 0xfdd75986,
+ 0x03020100, 0x07060504, 0x0b0a0908, 0x0f0e0d0c,
+ };
+ static const uint32_t rk192dec[] = {
+ 0x330a97a4, 0x09dc781a, 0x71c218c4, 0x5d1da4e3,
+ 0x0dbdbed6, 0x49ea09c2, 0x8073b04d, 0xb91b023e,
+ 0xc999b98f, 0x3968b273, 0x9dd8f9c7, 0x728cc685,
+ 0xc16e7df7, 0xef543f42, 0x7f317853, 0x4457b714,
+ 0x90654711, 0x3b66cf47, 0x8dce0e9b, 0xf0f10bfc,
+ 0xb6a8c1dc, 0x7d3f0567, 0x4a195ccc, 0x2e3a42b5,
+ 0xabb0dec6, 0x64231e79, 0xbe5f05a4, 0xab038856,
+ 0xda7c1bdd, 0x155c8df2, 0x1dab498a, 0xcb97c4bb,
+ 0x08f7c478, 0xd63c8d31, 0x01b75596, 0xcf93c0bf,
+ 0x10efdc60, 0xce249529, 0x15efdb62, 0xcf20962f,
+ 0xdbcb4e4b, 0xdacf4d4d, 0xc7d75257, 0xdecb4949,
+ 0x1d181f1a, 0x191c1b1e, 0xd7c74247, 0xdecb4949,
+ 0x03020100, 0x07060504, 0x0b0a0908, 0x0f0e0d0c,
+ };
+ static const uint32_t rk256dec[] = {
+ 0xcc79fc24, 0xe97909bf, 0x3cc21a37, 0x36de686d,
+ 0xffd1f134, 0x2faacebf, 0x5fe2e9fc, 0x6e015825,
+ 0xeb48165e, 0x0a354c38, 0x46b77175, 0x84e680dc,
+ 0x8005a3c8, 0xd07b3f8b, 0x70482743, 0x31e3b1d9,
+ 0x138e70b5, 0xe17d5a66, 0x4c823d4d, 0xc251f1a9,
+ 0xa37bda74, 0x507e9c43, 0xa03318c8, 0x41ab969a,
+ 0x1597a63c, 0xf2f32ad3, 0xadff672b, 0x8ed3cce4,
+ 0xf3c45ff8, 0xf3054637, 0xf04d848b, 0xe1988e52,
+ 0x9a4069de, 0xe7648cef, 0x5f0c4df8, 0x232cabcf,
+ 0x1658d5ae, 0x00c119cf, 0x0348c2bc, 0x11d50ad9,
+ 0xbd68c615, 0x7d24e531, 0xb868c117, 0x7c20e637,
+ 0x0f85d77f, 0x1699cc61, 0x0389db73, 0x129dc865,
+ 0xc940282a, 0xc04c2324, 0xc54c2426, 0xc4482720,
+ 0x1d181f1a, 0x191c1b1e, 0x15101712, 0x11141316,
+ 0x03020100, 0x07060504, 0x0b0a0908, 0x0f0e0d0c,
+ };
+ static const struct {
+ unsigned len;
+ unsigned nr;
+ const uint32_t *enc, *dec;
+ } C[] = {
+ { 16, AES_128_NROUNDS, rk128enc, rk128dec },
+ { 24, AES_192_NROUNDS, rk192enc, rk192dec },
+ { 32, AES_256_NROUNDS, rk256enc, rk256dec },
+ };
+ uint32_t rk[60];
+ unsigned i;
+
+ for (i = 0; i < __arraycount(C); i++) {
+ if (br_aes_ct_keysched_stdenc(rk, key, C[i].len) != C[i].nr)
+ return -1;
+ if (memcmp(rk, C[i].enc, 4*(C[i].nr + 1)))
+ return -1;
+ if (br_aes_ct_keysched_stddec(rk, key, C[i].len) != C[i].nr)
+ return -1;
+ if (memcmp(rk, C[i].dec, 4*(C[i].nr + 1)))
+ return -1;
+ }
+
+ return 0;
+}
# HG changeset patch
# User Taylor R Campbell <riastradh%NetBSD.org@localhost>
# Date 1592238453 0
# Mon Jun 15 16:27:33 2020 +0000
# Branch trunk
# Node ID 86fed1861ac3279e6d19505769e4331842fea55c
# Parent 28973955038a44907a800f3333d8dec03c77c8b2
# EXP-Topic riastradh-kernelcrypto
Add AES implementation with VIA ACE.
diff -r 28973955038a -r 86fed1861ac3 sys/arch/x86/conf/files.x86
--- a/sys/arch/x86/conf/files.x86 Mon Jun 15 16:19:29 2020 +0000
+++ b/sys/arch/x86/conf/files.x86 Mon Jun 15 16:27:33 2020 +0000
@@ -168,3 +168,6 @@ file arch/x86/pci/pci_addr_fixup.c pci_a
# AES-NI
include "crypto/aes/arch/x86/files.aesni"
+
+# VIA ACE
+include "crypto/aes/arch/x86/files.aesvia"
diff -r 28973955038a -r 86fed1861ac3 sys/arch/x86/x86/identcpu.c
--- a/sys/arch/x86/x86/identcpu.c Mon Jun 15 16:19:29 2020 +0000
+++ b/sys/arch/x86/x86/identcpu.c Mon Jun 15 16:27:33 2020 +0000
@@ -40,6 +40,7 @@
#include <sys/cpu.h>
#include <crypto/aes/arch/x86/aes_ni.h>
+#include <crypto/aes/arch/x86/aes_via.h>
#include <uvm/uvm_extern.h>
@@ -1000,7 +1001,10 @@ cpu_probe(struct cpu_info *ci)
#ifdef __x86_64__ /* not yet implemented on i386 */
if (cpu_feature[1] & CPUID2_AES)
aes_md_init(&aes_ni_impl);
+ else
#endif
+ if (cpu_feature[4] & CPUID_VIA_HAS_ACE)
+ aes_md_init(&aes_via_impl);
} else {
/*
* If not first. Warn about cpu_feature mismatch for
diff -r 28973955038a -r 86fed1861ac3 sys/crypto/aes/arch/x86/aes_via.c
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/arch/x86/aes_via.c Mon Jun 15 16:27:33 2020 +0000
@@ -0,0 +1,626 @@
+/* $NetBSD$ */
+
+/*-
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__KERNEL_RCSID(1, "$NetBSD$");
+
+#include <sys/types.h>
+#include <sys/evcnt.h>
+#include <sys/systm.h>
+
+#include <crypto/aes/aes.h>
+#include <crypto/aes/aes_bear.h>
+
+#include <x86/cpufunc.h>
+#include <x86/cpuvar.h>
+#include <x86/fpu.h>
+#include <x86/specialreg.h>
+#include <x86/via_padlock.h>
+
+static void
+aesvia_reload_keys(void)
+{
+
+ asm volatile("pushf; popf");
+}
+
+static uint32_t
+aesvia_keylen_cw0(unsigned nrounds)
+{
+
+ /*
+ * Determine the control word bits for the key size / number of
+ * rounds. For AES-128, the hardware can do key expansion on
+ * the fly; for AES-192 and AES-256, software must do it.
+ */
+ switch (nrounds) {
+ case AES_128_NROUNDS:
+ return C3_CRYPT_CWLO_KEY128;
+ case AES_192_NROUNDS:
+ return C3_CRYPT_CWLO_KEY192 | C3_CRYPT_CWLO_KEYGEN_SW;
+ case AES_256_NROUNDS:
+ return C3_CRYPT_CWLO_KEY256 | C3_CRYPT_CWLO_KEYGEN_SW;
+ default:
+ panic("invalid AES nrounds: %u", nrounds);
+ }
+}
+
+static void
+aesvia_setenckey(struct aesenc *enc, const uint8_t *key, uint32_t nrounds)
+{
+ size_t key_len;
+
+ switch (nrounds) {
+ case AES_128_NROUNDS:
+ enc->aese_aes.aes_rk[0] = le32dec(key + 4*0);
+ enc->aese_aes.aes_rk[1] = le32dec(key + 4*1);
+ enc->aese_aes.aes_rk[2] = le32dec(key + 4*2);
+ enc->aese_aes.aes_rk[3] = le32dec(key + 4*3);
+ return;
+ case AES_192_NROUNDS:
+ key_len = 24;
+ break;
+ case AES_256_NROUNDS:
+ key_len = 32;
+ break;
+ default:
+ panic("invalid AES nrounds: %u", nrounds);
+ }
+ br_aes_ct_keysched_stdenc(enc->aese_aes.aes_rk, key, key_len);
+}
+
+static void
+aesvia_setdeckey(struct aesdec *dec, const uint8_t *key, uint32_t nrounds)
+{
+ size_t key_len;
+
+ switch (nrounds) {
+ case AES_128_NROUNDS:
+ dec->aesd_aes.aes_rk[0] = le32dec(key + 4*0);
+ dec->aesd_aes.aes_rk[1] = le32dec(key + 4*1);
+ dec->aesd_aes.aes_rk[2] = le32dec(key + 4*2);
+ dec->aesd_aes.aes_rk[3] = le32dec(key + 4*3);
+ return;
+ case AES_192_NROUNDS:
+ key_len = 24;
+ break;
+ case AES_256_NROUNDS:
+ key_len = 32;
+ break;
+ default:
+ panic("invalid AES nrounds: %u", nrounds);
+ }
+ br_aes_ct_keysched_stddec(dec->aesd_aes.aes_rk, key, key_len);
+}
+
+static inline void
+aesvia_enc1(const struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], uint32_t cw0)
+{
+ const uint32_t cw[4] __aligned(16) = {
+ [0] = (cw0
+ | C3_CRYPT_CWLO_ALG_AES
+ | C3_CRYPT_CWLO_ENCRYPT
+ | C3_CRYPT_CWLO_NORMAL),
+ };
+ size_t nblocks = 1;
+
+ KASSERT(((uintptr_t)enc & 0xf) == 0);
+ KASSERT(((uintptr_t)in & 0xf) == 0);
+ KASSERT(((uintptr_t)out & 0xf) == 0);
+
+ asm volatile("rep xcrypt-ecb"
+ : "+c"(nblocks), "+S"(in), "+D"(out)
+ : "b"(enc), "d"(cw)
+ : "memory", "cc");
+}
+
+static inline void
+aesvia_dec1(const struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], uint32_t cw0)
+{
+ const uint32_t cw[4] __aligned(16) = {
+ [0] = (cw0
+ | C3_CRYPT_CWLO_ALG_AES
+ | C3_CRYPT_CWLO_DECRYPT
+ | C3_CRYPT_CWLO_NORMAL),
+ };
+ size_t nblocks = 1;
+
+ KASSERT(((uintptr_t)dec & 0xf) == 0);
+ KASSERT(((uintptr_t)in & 0xf) == 0);
+ KASSERT(((uintptr_t)out & 0xf) == 0);
+
+ asm volatile("rep xcrypt-ecb"
+ : "+c"(nblocks), "+S"(in), "+D"(out)
+ : "b"(dec), "d"(cw)
+ : "memory", "cc");
+}
+
+static struct evcnt enc_aligned_evcnt = EVCNT_INITIALIZER(EVCNT_TYPE_MISC,
+ NULL, "aesvia", "enc aligned");
+EVCNT_ATTACH_STATIC(enc_aligned_evcnt);
+static struct evcnt enc_unaligned_evcnt = EVCNT_INITIALIZER(EVCNT_TYPE_MISC,
+ NULL, "aesvia", "dec unaligned");
+EVCNT_ATTACH_STATIC(enc_unaligned_evcnt);
+
+static void
+aesvia_enc(const struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], uint32_t nrounds)
+{
+ const uint32_t cw0 = aesvia_keylen_cw0(nrounds);
+
+ fpu_kern_enter();
+ aesvia_reload_keys();
+ if ((((uintptr_t)in | (uintptr_t)out) & 0xf) == 0 &&
+ ((uintptr_t)in & 0xff0) != 0xff0) {
+ enc_aligned_evcnt.ev_count++;
+ aesvia_enc1(enc, in, out, cw0);
+ } else {
+ enc_unaligned_evcnt.ev_count++;
+ /*
+ * VIA requires 16-byte/128-bit alignment, and
+ * xcrypt-ecb reads one block past the one we're
+ * working on -- which may go past the end of the page
+ * into unmapped territory. Use a bounce buffer if
+ * either constraint is violated.
+ */
+ uint8_t inbuf[16] __aligned(16);
+ uint8_t outbuf[16] __aligned(16);
+
+ memcpy(inbuf, in, 16);
+ aesvia_enc1(enc, inbuf, outbuf, cw0);
+ memcpy(out, outbuf, 16);
+
+ explicit_memset(inbuf, 0, sizeof inbuf);
+ explicit_memset(outbuf, 0, sizeof outbuf);
+ }
+ fpu_kern_leave();
+}
+
+static struct evcnt dec_aligned_evcnt = EVCNT_INITIALIZER(EVCNT_TYPE_MISC,
+ NULL, "aesvia", "dec aligned");
+EVCNT_ATTACH_STATIC(dec_aligned_evcnt);
+static struct evcnt dec_unaligned_evcnt = EVCNT_INITIALIZER(EVCNT_TYPE_MISC,
+ NULL, "aesvia", "dec unaligned");
+EVCNT_ATTACH_STATIC(dec_unaligned_evcnt);
+
+static void
+aesvia_dec(const struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], uint32_t nrounds)
+{
+ const uint32_t cw0 = aesvia_keylen_cw0(nrounds);
+
+ fpu_kern_enter();
+ aesvia_reload_keys();
+ if ((((uintptr_t)in | (uintptr_t)out) & 0xf) == 0 &&
+ ((uintptr_t)in & 0xff0) != 0xff0) {
+ dec_aligned_evcnt.ev_count++;
+ aesvia_dec1(dec, in, out, cw0);
+ } else {
+ dec_unaligned_evcnt.ev_count++;
+ /*
+ * VIA requires 16-byte/128-bit alignment, and
+ * xcrypt-ecb reads one block past the one we're
+ * working on -- which may go past the end of the page
+ * into unmapped territory. Use a bounce buffer if
+ * either constraint is violated.
+ */
+ uint8_t inbuf[16] __aligned(16);
+ uint8_t outbuf[16] __aligned(16);
+
+ memcpy(inbuf, in, 16);
+ aesvia_dec1(dec, inbuf, outbuf, cw0);
+ memcpy(out, outbuf, 16);
+
+ explicit_memset(inbuf, 0, sizeof inbuf);
+ explicit_memset(outbuf, 0, sizeof outbuf);
+ }
+ fpu_kern_leave();
+}
+
+static inline void
+aesvia_cbc_enc1(const struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nblocks, uint8_t **ivp, uint32_t cw0)
+{
+ const uint32_t cw[4] __aligned(16) = {
+ [0] = (cw0
+ | C3_CRYPT_CWLO_ALG_AES
+ | C3_CRYPT_CWLO_ENCRYPT
+ | C3_CRYPT_CWLO_NORMAL),
+ };
+
+ KASSERT(((uintptr_t)enc & 0xf) == 0);
+ KASSERT(((uintptr_t)in & 0xf) == 0);
+ KASSERT(((uintptr_t)out & 0xf) == 0);
+ KASSERT(((uintptr_t)*ivp & 0xf) == 0);
+
+ /*
+ * Register effects:
+ * - Counts nblocks down to zero.
+ * - Advances in by nblocks (units of blocks).
+ * - Advances out by nblocks (units of blocks).
+ * - Updates *ivp to point at the last block of out.
+ */
+ asm volatile("rep xcrypt-cbc"
+ : "+c"(nblocks), "+S"(in), "+D"(out), "+a"(*ivp)
+ : "b"(enc), "d"(cw)
+ : "memory", "cc");
+}
+
+static inline void
+aesvia_cbc_dec1(const struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nblocks, uint8_t iv[static 16],
+ uint32_t cw0)
+{
+ const uint32_t cw[4] __aligned(16) = {
+ [0] = (cw0
+ | C3_CRYPT_CWLO_ALG_AES
+ | C3_CRYPT_CWLO_DECRYPT
+ | C3_CRYPT_CWLO_NORMAL),
+ };
+
+ KASSERT(((uintptr_t)dec & 0xf) == 0);
+ KASSERT(((uintptr_t)in & 0xf) == 0);
+ KASSERT(((uintptr_t)out & 0xf) == 0);
+ KASSERT(((uintptr_t)iv & 0xf) == 0);
+
+ /*
+ * Register effects:
+ * - Counts nblocks down to zero.
+ * - Advances in by nblocks (units of blocks).
+ * - Advances out by nblocks (units of blocks).
+ * Memory side effects:
+ * - Writes what was the last block of in at the address iv.
+ */
+ asm volatile("rep xcrypt-cbc"
+ : "+c"(nblocks), "+S"(in), "+D"(out)
+ : "a"(iv), "b"(dec), "d"(cw)
+ : "memory", "cc");
+}
+
+static inline void
+xor128(void *x, const void *a, const void *b)
+{
+ uint32_t *x32 = x;
+ const uint32_t *a32 = a;
+ const uint32_t *b32 = b;
+
+ x32[0] = a32[0] ^ b32[0];
+ x32[1] = a32[1] ^ b32[1];
+ x32[2] = a32[2] ^ b32[2];
+ x32[3] = a32[3] ^ b32[3];
+}
+
+static struct evcnt cbcenc_aligned_evcnt = EVCNT_INITIALIZER(EVCNT_TYPE_MISC,
+ NULL, "aesvia", "cbcenc aligned");
+EVCNT_ATTACH_STATIC(cbcenc_aligned_evcnt);
+static struct evcnt cbcenc_unaligned_evcnt = EVCNT_INITIALIZER(EVCNT_TYPE_MISC,
+ NULL, "aesvia", "cbcenc unaligned");
+EVCNT_ATTACH_STATIC(cbcenc_unaligned_evcnt);
+
+static void
+aesvia_cbc_enc(const struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t iv[static 16],
+ uint32_t nrounds)
+{
+ const uint32_t cw0 = aesvia_keylen_cw0(nrounds);
+
+ KASSERT(nbytes % 16 == 0);
+ if (nbytes == 0)
+ return;
+
+ fpu_kern_enter();
+ aesvia_reload_keys();
+ if ((((uintptr_t)in | (uintptr_t)out | (uintptr_t)iv) & 0xf) == 0) {
+ cbcenc_aligned_evcnt.ev_count++;
+ uint8_t *ivp = iv;
+ aesvia_cbc_enc1(enc, in, out, nbytes/16, &ivp, cw0);
+ memcpy(iv, ivp, 16);
+ } else {
+ cbcenc_unaligned_evcnt.ev_count++;
+ uint8_t cv[16] __aligned(16);
+ uint8_t tmp[16] __aligned(16);
+
+ memcpy(cv, iv, 16);
+ for (; nbytes; nbytes -= 16, in += 16, out += 16) {
+ memcpy(tmp, in, 16);
+ xor128(tmp, tmp, cv);
+ aesvia_enc1(enc, tmp, cv, cw0);
+ memcpy(out, cv, 16);
+ }
+ memcpy(iv, cv, 16);
+ }
+ fpu_kern_leave();
+}
+
+static struct evcnt cbcdec_aligned_evcnt = EVCNT_INITIALIZER(EVCNT_TYPE_MISC,
+ NULL, "aesvia", "cbcdec aligned");
+EVCNT_ATTACH_STATIC(cbcdec_aligned_evcnt);
+static struct evcnt cbcdec_unaligned_evcnt = EVCNT_INITIALIZER(EVCNT_TYPE_MISC,
+ NULL, "aesvia", "cbcdec unaligned");
+EVCNT_ATTACH_STATIC(cbcdec_unaligned_evcnt);
+
+static void
+aesvia_cbc_dec(const struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t iv[static 16],
+ uint32_t nrounds)
+{
+ const uint32_t cw0 = aesvia_keylen_cw0(nrounds);
+
+ KASSERT(nbytes % 16 == 0);
+ if (nbytes == 0)
+ return;
+
+ fpu_kern_enter();
+ aesvia_reload_keys();
+ if ((((uintptr_t)in | (uintptr_t)out | (uintptr_t)iv) & 0xf) == 0) {
+ cbcdec_aligned_evcnt.ev_count++;
+ aesvia_cbc_dec1(dec, in, out, nbytes/16, iv, cw0);
+ } else {
+ cbcdec_unaligned_evcnt.ev_count++;
+ uint8_t iv0[16] __aligned(16);
+ uint8_t cv[16] __aligned(16);
+ uint8_t tmp[16] __aligned(16);
+
+ memcpy(iv0, iv, 16);
+ memcpy(cv, in + nbytes - 16, 16);
+ memcpy(iv, cv, 16);
+
+ for (;;) {
+ aesvia_dec1(dec, cv, tmp, cw0);
+ if ((nbytes -= 16) == 0)
+ break;
+ memcpy(cv, in + nbytes - 16, 16);
+ xor128(tmp, tmp, cv);
+ memcpy(out + nbytes, tmp, 16);
+ }
+
+ xor128(tmp, tmp, iv0);
+ memcpy(out, tmp, 16);
+ explicit_memset(tmp, 0, sizeof tmp);
+ }
+ fpu_kern_leave();
+}
+
+static inline void
+aesvia_xts_update(uint32_t *t0, uint32_t *t1, uint32_t *t2, uint32_t *t3)
+{
+ uint32_t s0, s1, s2, s3;
+
+ s0 = *t0 >> 31;
+ s1 = *t1 >> 31;
+ s2 = *t2 >> 31;
+ s3 = *t3 >> 31;
+ *t0 = (*t0 << 1) ^ (-s3 & 0x87);
+ *t1 = (*t1 << 1) ^ s0;
+ *t2 = (*t2 << 1) ^ s1;
+ *t3 = (*t3 << 1) ^ s2;
+}
+
+static int
+aesvia_xts_update_selftest(void)
+{
+ static const struct {
+ uint32_t in[4], out[4];
+ } cases[] = {
+ { {1}, {2} },
+ { {0x80000000U,0,0,0}, {0,1,0,0} },
+ { {0,0x80000000U,0,0}, {0,0,1,0} },
+ { {0,0,0x80000000U,0}, {0,0,0,1} },
+ { {0,0,0,0x80000000U}, {0x87,0,0,0} },
+ { {0,0x80000000U,0,0x80000000U}, {0x87,0,1,0} },
+ };
+ unsigned i;
+ uint32_t t0, t1, t2, t3;
+
+ for (i = 0; i < sizeof(cases)/sizeof(cases[0]); i++) {
+ t0 = cases[i].in[0];
+ t1 = cases[i].in[1];
+ t2 = cases[i].in[2];
+ t3 = cases[i].in[3];
+ aesvia_xts_update(&t0, &t1, &t2, &t3);
+ if (t0 != cases[i].out[0] ||
+ t1 != cases[i].out[1] ||
+ t2 != cases[i].out[2] ||
+ t3 != cases[i].out[3])
+ return -1;
+ }
+
+ /* Success! */
+ return 0;
+}
+
+static struct evcnt xtsenc_aligned_evcnt = EVCNT_INITIALIZER(EVCNT_TYPE_MISC,
+ NULL, "aesvia", "xtsenc aligned");
+EVCNT_ATTACH_STATIC(xtsenc_aligned_evcnt);
+static struct evcnt xtsenc_unaligned_evcnt = EVCNT_INITIALIZER(EVCNT_TYPE_MISC,
+ NULL, "aesvia", "xtsenc unaligned");
+EVCNT_ATTACH_STATIC(xtsenc_unaligned_evcnt);
+
+static void
+aesvia_xts_enc(const struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t tweak[static 16],
+ uint32_t nrounds)
+{
+ const uint32_t cw0 = aesvia_keylen_cw0(nrounds);
+ uint32_t t[4];
+
+ KASSERT(nbytes % 16 == 0);
+
+ memcpy(t, tweak, 16);
+
+ fpu_kern_enter();
+ aesvia_reload_keys();
+ if ((((uintptr_t)in | (uintptr_t)out) & 0xf) == 0) {
+ xtsenc_aligned_evcnt.ev_count++;
+ unsigned lastblock = 0;
+
+ /*
+ * Make sure the last block is not the last block of a
+ * page. (Note that we store the AES input in `out' as
+ * a temporary buffer, rather than reading it directly
+ * from `in', since we have to combine the tweak
+ * first.)
+ */
+ lastblock = 16*(((uintptr_t)(out + nbytes) & 0xfff) == 0);
+ nbytes -= lastblock;
+
+ for (; nbytes; nbytes -= 16, in += 16, out += 16) {
+ xor128(out, in, t);
+ aesvia_enc1(enc, out, out, cw0);
+ xor128(out, out, t);
+ aesvia_xts_update(&t[0], &t[1], &t[2], &t[3]);
+ }
+
+ /* Handle the last block of a page, if necessary. */
+ if (lastblock) {
+ uint8_t buf[16] __aligned(16);
+ xor128(buf, in, t);
+ aesvia_enc1(enc, buf, out, cw0);
+ xor128(out, out, t);
+ aesvia_xts_update(&t[0], &t[1], &t[2], &t[3]);
+ explicit_memset(buf, 0, sizeof buf);
+ }
+ } else {
+ xtsenc_unaligned_evcnt.ev_count++;
+ uint8_t buf[16] __aligned(16);
+
+ for (; nbytes; nbytes -= 16, in += 16, out += 16) {
+ memcpy(buf, in, 16);
+ xor128(buf, buf, t);
+ aesvia_enc1(enc, buf, buf, cw0);
+ xor128(buf, buf, t);
+ memcpy(out, buf, 16);
+ aesvia_xts_update(&t[0], &t[1], &t[2], &t[3]);
+ }
+
+ explicit_memset(buf, 0, sizeof buf);
+ }
+ fpu_kern_leave();
+
+ memcpy(tweak, t, 16);
+ explicit_memset(t, 0, sizeof t);
+}
+
+static struct evcnt xtsdec_aligned_evcnt = EVCNT_INITIALIZER(EVCNT_TYPE_MISC,
+ NULL, "aesvia", "xtsdec aligned");
+EVCNT_ATTACH_STATIC(xtsdec_aligned_evcnt);
+static struct evcnt xtsdec_unaligned_evcnt = EVCNT_INITIALIZER(EVCNT_TYPE_MISC,
+ NULL, "aesvia", "xtsdec unaligned");
+EVCNT_ATTACH_STATIC(xtsdec_unaligned_evcnt);
+
+static void
+aesvia_xts_dec(const struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nbytes, uint8_t tweak[static 16],
+ uint32_t nrounds)
+{
+ const uint32_t cw0 = aesvia_keylen_cw0(nrounds);
+ uint32_t t[4];
+
+ KASSERT(nbytes % 16 == 0);
+
+ memcpy(t, tweak, 16);
+
+ fpu_kern_enter();
+ aesvia_reload_keys();
+ if ((((uintptr_t)in | (uintptr_t)out) & 0xf) == 0) {
+ xtsdec_aligned_evcnt.ev_count++;
+ unsigned lastblock = 0;
+
+ /*
+ * Make sure the last block is not the last block of a
+ * page. (Note that we store the AES input in `out' as
+ * a temporary buffer, rather than reading it directly
+ * from `in', since we have to combine the tweak
+ * first.)
+ */
+ lastblock = 16*(((uintptr_t)(out + nbytes) & 0xfff) == 0);
+ nbytes -= lastblock;
+
+ for (; nbytes; nbytes -= 16, in += 16, out += 16) {
+ xor128(out, in, t);
+ aesvia_dec1(dec, out, out, cw0);
+ xor128(out, out, t);
+ aesvia_xts_update(&t[0], &t[1], &t[2], &t[3]);
+ }
+
+ /* Handle the last block of a page, if necessary. */
+ if (lastblock) {
+ uint8_t buf[16] __aligned(16);
+ xor128(buf, in, t);
+ aesvia_dec1(dec, buf, out, cw0);
+ xor128(out, out, t);
+ aesvia_xts_update(&t[0], &t[1], &t[2], &t[3]);
+ explicit_memset(buf, 0, sizeof buf);
+ }
+ } else {
+ xtsdec_unaligned_evcnt.ev_count++;
+ uint8_t buf[16] __aligned(16);
+
+ for (; nbytes; nbytes -= 16, in += 16, out += 16) {
+ memcpy(buf, in, 16);
+ xor128(buf, buf, t);
+ aesvia_dec1(dec, buf, buf, cw0);
+ xor128(buf, buf, t);
+ memcpy(out, buf, 16);
+ aesvia_xts_update(&t[0], &t[1], &t[2], &t[3]);
+ }
+
+ explicit_memset(buf, 0, sizeof buf);
+ }
+ fpu_kern_leave();
+
+ memcpy(tweak, t, 16);
+ explicit_memset(t, 0, sizeof t);
+}
+
+static int
+aesvia_probe(void)
+{
+
+ /* Verify that the CPU advertises VIA ACE support. */
+ if ((cpu_feature[4] & CPUID_VIA_HAS_ACE) == 0)
+ return -1;
+
+ /* Verify that our XTS tweak update logic works. */
+ if (aesvia_xts_update_selftest())
+ return -1;
+
+ /* Success! */
+ return 0;
+}
+
+struct aes_impl aes_via_impl = {
+ .ai_name = "VIA ACE",
+ .ai_probe = aesvia_probe,
+ .ai_setenckey = aesvia_setenckey,
+ .ai_setdeckey = aesvia_setdeckey,
+ .ai_enc = aesvia_enc,
+ .ai_dec = aesvia_dec,
+ .ai_cbc_enc = aesvia_cbc_enc,
+ .ai_cbc_dec = aesvia_cbc_dec,
+ .ai_xts_enc = aesvia_xts_enc,
+ .ai_xts_dec = aesvia_xts_dec,
+};
diff -r 28973955038a -r 86fed1861ac3 sys/crypto/aes/arch/x86/aes_via.h
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/arch/x86/aes_via.h Mon Jun 15 16:27:33 2020 +0000
@@ -0,0 +1,36 @@
+/* $NetBSD$ */
+
+/*-
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _CRYPTO_AES_ARCH_X86_AES_VIA_H
+#define _CRYPTO_AES_ARCH_X86_AES_VIA_H
+
+#include <crypto/aes/aes.h>
+
+extern struct aes_impl aes_via_impl;
+
+#endif /* _CRYPTO_AES_ARCH_X86_AES_VIA_H */
diff -r 28973955038a -r 86fed1861ac3 sys/crypto/aes/arch/x86/files.aesvia
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/aes/arch/x86/files.aesvia Mon Jun 15 16:27:33 2020 +0000
@@ -0,0 +1,3 @@
+# $NetBSD$
+
+file crypto/aes/arch/x86/aes_via.c aes
# HG changeset patch
# User Taylor R Campbell <riastradh%NetBSD.org@localhost>
# Date 1592258370 0
# Mon Jun 15 21:59:30 2020 +0000
# Branch trunk
# Node ID 1ff6250fd07e4ed180d6b61958c9e7ae3667d4f7
# Parent 86fed1861ac3279e6d19505769e4331842fea55c
# EXP-Topic riastradh-kernelcrypto
uvm: Make sure swap encryption IV is 128-bit-aligned on stack.
Will help hardware-assisted AES.
diff -r 86fed1861ac3 -r 1ff6250fd07e sys/uvm/uvm_swap.c
--- a/sys/uvm/uvm_swap.c Mon Jun 15 16:27:33 2020 +0000
+++ b/sys/uvm/uvm_swap.c Mon Jun 15 21:59:30 2020 +0000
@@ -2089,7 +2089,7 @@ uvm_swap_genkey(struct swapdev *sdp)
static void
uvm_swap_encryptpage(struct swapdev *sdp, void *kva, int slot)
{
- uint8_t preiv[16] = {0}, iv[16];
+ uint8_t preiv[16] __aligned(16) = {0}, iv[16] __aligned(16);
/* iv := AES_k(le32enc(slot) || 0^96) */
le32enc(preiv, slot);
@@ -2111,7 +2111,7 @@ uvm_swap_encryptpage(struct swapdev *sdp
static void
uvm_swap_decryptpage(struct swapdev *sdp, void *kva, int slot)
{
- uint8_t preiv[16] = {0}, iv[16];
+ uint8_t preiv[16] __aligned(16) = {0}, iv[16] __aligned(16);
/* iv := AES_k(le32enc(slot) || 0^96) */
le32enc(preiv, slot);
# HG changeset patch
# User Taylor R Campbell <riastradh%NetBSD.org@localhost>
# Date 1592261759 0
# Mon Jun 15 22:55:59 2020 +0000
# Branch trunk
# Node ID 36794fee0d0481ed3f3253e8d4ef6b87c96c13b7
# Parent 1ff6250fd07e4ed180d6b61958c9e7ae3667d4f7
# EXP-Topic riastradh-kernelcrypto
Batch AES-XTS computation into eight blocks at a time.
Experimental -- performance improvement is not clearly worth the
complexity.
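(For reference, the tweak update this code leans on is
multiplication by x in GF(2^128) modulo the XTS polynomial
x^128 + x^7 + x^2 + x + 1; on the 128-bit little-endian tweak T
that is

	T' = (T << 1) ^ (0x87 if the bit shifted out of T was set),

which aesvia_xts_update computes limb by limb on four 32-bit words.
The batching below just saves eight consecutive tweaks in a 128-byte
buffer so a single `rep xcrypt-ecb' can process eight blocks at a
time, amortizing the instruction's per-invocation overhead.)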
diff -r 1ff6250fd07e -r 36794fee0d04 sys/crypto/aes/arch/x86/aes_via.c
--- a/sys/crypto/aes/arch/x86/aes_via.c Mon Jun 15 21:59:30 2020 +0000
+++ b/sys/crypto/aes/arch/x86/aes_via.c Mon Jun 15 22:55:59 2020 +0000
@@ -119,8 +119,8 @@ aesvia_setdeckey(struct aesdec *dec, con
}
static inline void
-aesvia_enc1(const struct aesenc *enc, const uint8_t in[static 16],
- uint8_t out[static 16], uint32_t cw0)
+aesvia_encN(const struct aesenc *enc, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nblocks, uint32_t cw0)
{
const uint32_t cw[4] __aligned(16) = {
[0] = (cw0
@@ -128,7 +128,6 @@ aesvia_enc1(const struct aesenc *enc, co
| C3_CRYPT_CWLO_ENCRYPT
| C3_CRYPT_CWLO_NORMAL),
};
- size_t nblocks = 1;
KASSERT(((uintptr_t)enc & 0xf) == 0);
KASSERT(((uintptr_t)in & 0xf) == 0);
@@ -141,8 +140,8 @@ aesvia_enc1(const struct aesenc *enc, co
}
static inline void
-aesvia_dec1(const struct aesdec *dec, const uint8_t in[static 16],
- uint8_t out[static 16], uint32_t cw0)
+aesvia_decN(const struct aesdec *dec, const uint8_t in[static 16],
+ uint8_t out[static 16], size_t nblocks, uint32_t cw0)
{
const uint32_t cw[4] __aligned(16) = {
[0] = (cw0
@@ -150,7 +149,6 @@ aesvia_dec1(const struct aesdec *dec, co
| C3_CRYPT_CWLO_DECRYPT
| C3_CRYPT_CWLO_NORMAL),
};
- size_t nblocks = 1;
KASSERT(((uintptr_t)dec & 0xf) == 0);
KASSERT(((uintptr_t)in & 0xf) == 0);
@@ -180,7 +178,7 @@ aesvia_enc(const struct aesenc *enc, con
if ((((uintptr_t)in | (uintptr_t)out) & 0xf) == 0 &&
((uintptr_t)in & 0xff0) != 0xff0) {
enc_aligned_evcnt.ev_count++;
- aesvia_enc1(enc, in, out, cw0);
+ aesvia_encN(enc, in, out, 1, cw0);
} else {
enc_unaligned_evcnt.ev_count++;
/*
@@ -194,7 +192,7 @@ aesvia_enc(const struct aesenc *enc, con
uint8_t outbuf[16] __aligned(16);
memcpy(inbuf, in, 16);
- aesvia_enc1(enc, inbuf, outbuf, cw0);
+ aesvia_encN(enc, inbuf, outbuf, 1, cw0);
memcpy(out, outbuf, 16);
explicit_memset(inbuf, 0, sizeof inbuf);
@@ -221,7 +219,7 @@ aesvia_dec(const struct aesdec *dec, con
if ((((uintptr_t)in | (uintptr_t)out) & 0xf) == 0 &&
((uintptr_t)in & 0xff0) != 0xff0) {
dec_aligned_evcnt.ev_count++;
- aesvia_dec1(dec, in, out, cw0);
+ aesvia_decN(dec, in, out, 1, cw0);
} else {
dec_unaligned_evcnt.ev_count++;
/*
@@ -235,7 +233,7 @@ aesvia_dec(const struct aesdec *dec, con
uint8_t outbuf[16] __aligned(16);
memcpy(inbuf, in, 16);
- aesvia_dec1(dec, inbuf, outbuf, cw0);
+ aesvia_decN(dec, inbuf, outbuf, 1, cw0);
memcpy(out, outbuf, 16);
explicit_memset(inbuf, 0, sizeof inbuf);
@@ -245,7 +243,7 @@ aesvia_dec(const struct aesdec *dec, con
}
static inline void
-aesvia_cbc_enc1(const struct aesenc *enc, const uint8_t in[static 16],
+aesvia_cbc_encN(const struct aesenc *enc, const uint8_t in[static 16],
uint8_t out[static 16], size_t nblocks, uint8_t **ivp, uint32_t cw0)
{
const uint32_t cw[4] __aligned(16) = {
@@ -274,7 +272,7 @@ aesvia_cbc_enc1(const struct aesenc *enc
}
static inline void
-aesvia_cbc_dec1(const struct aesdec *dec, const uint8_t in[static 16],
+aesvia_cbc_decN(const struct aesdec *dec, const uint8_t in[static 16],
uint8_t out[static 16], size_t nblocks, uint8_t iv[static 16],
uint32_t cw0)
{
@@ -340,7 +338,7 @@ aesvia_cbc_enc(const struct aesenc *enc,
if ((((uintptr_t)in | (uintptr_t)out | (uintptr_t)iv) & 0xf) == 0) {
cbcenc_aligned_evcnt.ev_count++;
uint8_t *ivp = iv;
- aesvia_cbc_enc1(enc, in, out, nbytes/16, &ivp, cw0);
+ aesvia_cbc_encN(enc, in, out, nbytes/16, &ivp, cw0);
memcpy(iv, ivp, 16);
} else {
cbcenc_unaligned_evcnt.ev_count++;
@@ -351,7 +349,7 @@ aesvia_cbc_enc(const struct aesenc *enc,
for (; nbytes; nbytes -= 16, in += 16, out += 16) {
memcpy(tmp, in, 16);
xor128(tmp, tmp, cv);
- aesvia_enc1(enc, tmp, cv, cw0);
+ aesvia_encN(enc, tmp, cv, 1, cw0);
memcpy(out, cv, 16);
}
memcpy(iv, cv, 16);
@@ -381,7 +379,7 @@ aesvia_cbc_dec(const struct aesdec *dec,
aesvia_reload_keys();
if ((((uintptr_t)in | (uintptr_t)out | (uintptr_t)iv) & 0xf) == 0) {
cbcdec_aligned_evcnt.ev_count++;
- aesvia_cbc_dec1(dec, in, out, nbytes/16, iv, cw0);
+ aesvia_cbc_decN(dec, in, out, nbytes/16, iv, cw0);
} else {
cbcdec_unaligned_evcnt.ev_count++;
uint8_t iv0[16] __aligned(16);
@@ -393,7 +391,7 @@ aesvia_cbc_dec(const struct aesdec *dec,
memcpy(iv, cv, 16);
for (;;) {
- aesvia_dec1(dec, cv, tmp, cw0);
+ aesvia_decN(dec, cv, tmp, 1, cw0);
if ((nbytes -= 16) == 0)
break;
memcpy(cv, in + nbytes - 16, 16);
@@ -480,6 +478,7 @@ aesvia_xts_enc(const struct aesenc *enc,
if ((((uintptr_t)in | (uintptr_t)out) & 0xf) == 0) {
xtsenc_aligned_evcnt.ev_count++;
unsigned lastblock = 0;
+ uint32_t buf[8*4] __aligned(16);
/*
* Make sure the last block is not the last block of a
@@ -491,20 +490,43 @@ aesvia_xts_enc(const struct aesenc *enc,
lastblock = 16*(((uintptr_t)(out + nbytes) & 0xfff) == 0);
nbytes -= lastblock;
- for (; nbytes; nbytes -= 16, in += 16, out += 16) {
- xor128(out, in, t);
- aesvia_enc1(enc, out, out, cw0);
- xor128(out, out, t);
- aesvia_xts_update(&t[0], &t[1], &t[2], &t[3]);
+ /*
+ * Handle an odd number of initial blocks so we can
+ * process the rest in eight-block (128-byte) chunks.
+ */
+ if (nbytes % 128) {
+ unsigned nbytes128 = nbytes % 128;
+
+ nbytes -= nbytes128;
+ for (; nbytes128; nbytes128 -= 16, in += 16, out += 16)
+ {
+ xor128(out, in, t);
+ aesvia_encN(enc, out, out, 1, cw0);
+ xor128(out, out, t);
+ aesvia_xts_update(&t[0], &t[1], &t[2], &t[3]);
+ }
+ }
+
+ /* Process eight blocks at a time. */
+ for (; nbytes; nbytes -= 128, in += 128, out += 128) {
+ unsigned i;
+ for (i = 0; i < 8; i++) {
+ memcpy(buf + 4*i, t, 16);
+ xor128(out + 16*i, in + 16*i, t);
+ aesvia_xts_update(&t[0], &t[1], &t[2], &t[3]);
+ }
+ aesvia_encN(enc, out, out, 8, cw0);
+ for (i = 0; i < 8; i++)
+ xor128(out + 16*i, out + 16*i, buf + 4*i);
}
/* Handle the last block of a page, if necessary. */
if (lastblock) {
- uint8_t buf[16] __aligned(16);
xor128(buf, in, t);
- aesvia_enc1(enc, buf, out, cw0);
+ aesvia_encN(enc, (const void *)buf, out, 1, cw0);
xor128(out, out, t);
aesvia_xts_update(&t[0], &t[1], &t[2], &t[3]);
- explicit_memset(buf, 0, sizeof buf);
}
+
+ explicit_memset(buf, 0, sizeof buf);
} else {
xtsenc_unaligned_evcnt.ev_count++;
uint8_t buf[16] __aligned(16);
@@ -512,7 +534,7 @@ aesvia_xts_enc(const struct aesenc *enc,
for (; nbytes; nbytes -= 16, in += 16, out += 16) {
memcpy(buf, in, 16);
xor128(buf, buf, t);
- aesvia_enc1(enc, buf, buf, cw0);
+ aesvia_encN(enc, buf, buf, 1, cw0);
xor128(buf, buf, t);
memcpy(out, buf, 16);
aesvia_xts_update(&t[0], &t[1], &t[2], &t[3]);
@@ -550,6 +572,7 @@ aesvia_xts_dec(const struct aesdec *dec,
if ((((uintptr_t)in | (uintptr_t)out) & 0xf) == 0) {
xtsdec_aligned_evcnt.ev_count++;
unsigned lastblock = 0;
+ uint32_t buf[8*4] __aligned(16);
/*
* Make sure the last block is not the last block of a
@@ -561,20 +584,43 @@ aesvia_xts_dec(const struct aesdec *dec,
lastblock = 16*(((uintptr_t)(out + nbytes) & 0xfff) == 0);
nbytes -= lastblock;
- for (; nbytes; nbytes -= 16, in += 16, out += 16) {
- xor128(out, in, t);
- aesvia_dec1(dec, out, out, cw0);
- xor128(out, out, t);
- aesvia_xts_update(&t[0], &t[1], &t[2], &t[3]);
+ /*
+ * Handle an odd number of initial blocks so we can
+ * process the rest in eight-block (128-byte) chunks.
+ */
+ if (nbytes % 128) {
+ unsigned nbytes128 = nbytes % 128;
+
+ nbytes -= nbytes128;
+ for (; nbytes128; nbytes128 -= 16, in += 16, out += 16)
+ {
+ xor128(out, in, t);
+ aesvia_decN(dec, out, out, 1, cw0);
+ xor128(out, out, t);
+ aesvia_xts_update(&t[0], &t[1], &t[2], &t[3]);
+ }
+ }
+
+ /* Process eight blocks at a time. */
+ for (; nbytes; nbytes -= 128, in += 128, out += 128) {
+ unsigned i;
+ for (i = 0; i < 8; i++) {
+ memcpy(buf + 4*i, t, 16);
+ xor128(out + 16*i, in + 16*i, t);
+ aesvia_xts_update(&t[0], &t[1], &t[2], &t[3]);
+ }
+ aesvia_decN(dec, out, out, 8, cw0);
+ for (i = 0; i < 8; i++)
+ xor128(out + 16*i, out + 16*i, buf + 4*i);
}
/* Handle the last block of a page, if necessary. */
if (lastblock) {
- uint8_t buf[16] __aligned(16);
xor128(buf, in, t);
- aesvia_dec1(dec, buf, out, cw0);
+ aesvia_decN(dec, (const void *)buf, out, 1, cw0);
xor128(out, out, t);
aesvia_xts_update(&t[0], &t[1], &t[2], &t[3]);
- explicit_memset(buf, 0, sizeof buf);
}
+
+ explicit_memset(buf, 0, sizeof buf);
} else {
xtsdec_unaligned_evcnt.ev_count++;
uint8_t buf[16] __aligned(16);
@@ -582,7 +628,7 @@ aesvia_xts_dec(const struct aesdec *dec,
for (; nbytes; nbytes -= 16, in += 16, out += 16) {
memcpy(buf, in, 16);
xor128(buf, buf, t);
- aesvia_dec1(dec, buf, buf, cw0);
+ aesvia_decN(dec, buf, buf, 1, cw0);
xor128(buf, buf, t);
memcpy(out, buf, 16);
aesvia_xts_update(&t[0], &t[1], &t[2], &t[3]);
# HG changeset patch
# User Taylor R Campbell <riastradh%NetBSD.org@localhost>
# Date 1592362063 0
# Wed Jun 17 02:47:43 2020 +0000
# Branch trunk
# Node ID 9fde04e138c10fd0fca4362c7d93fd3ef4b325ad
# Parent 36794fee0d0481ed3f3253e8d4ef6b87c96c13b7
# EXP-Topic riastradh-kernelcrypto
New cgd cipher adiantum.
Adiantum is a wide-block cipher, built out of AES, XChaCha12,
Poly1305, and NH, defined in
Paul Crowley and Eric Biggers, `Adiantum: length-preserving
encryption for entry-level processors', IACR Transactions on
Symmetric Cryptology 2018(4), pp. 39--61.
Adiantum provides better security than a narrow-block cipher with CBC
or XTS, because every bit of each sector affects every other bit,
whereas with CBC each block of plaintext only affects the following
blocks of ciphertext in the disk sector, and with XTS each block of
plaintext only affects its own block of ciphertext and nothing else.
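(Rough shape of the construction, eliding the paper's key derivation
and padding details: split each sector P into P_L || P_R with P_R
the final 16 bytes, and for tweak T compute

	P_M = P_R + H(T, P_L)		(mod 2^128)
	C_M = AES-256_K(P_M)
	C_L = P_L ^ XChaCha12_K'(nonce derived from C_M)
	C_R = C_M - H(T, C_L)		(mod 2^128)

and emit C_L || C_R, where H is the NH/Poly1305 hash. The C_M step
is the single AES invocation per sector, and the mod-2^128 add and
subtract are why the implementation below carries add128/sub128.)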
Adiantum generally provides much better performance than
constant-time AES-CBC or AES-XTS software do without hardware
support, and performance comparable to or better than the
variable-time (i.e., leaky) AES-CBC and AES-XTS software we had
before. (Note: Adiantum also uses AES as a subroutine, but only once
per disk sector. It takes only a small fraction of the time spent by
Adiantum, so there's relatively little performance impact to using
constant-time AES software over using variable-time AES software for
it.)
Adiantum naturally scales to essentially arbitrary disk sector sizes;
sizes >= 1024 bytes take the most advantage of Adiantum's design for
performance, so 4096-byte sectors would be a natural choice if we
taught cgd to change the disk sector size. (However, it's a
different cipher for each disk sector size, so it _must_ be a cgd
parameter.)
The paper presents a similar construction HPolyC. The salient
difference is that HPolyC uses Poly1305 directly, whereas Adiantum
uses Poly1305(NH(...)). NH is annoying because it requires a
1072-byte key, which means the test vectors are ginormous, and
changing keys is costly; HPolyC avoids these shortcomings by using
Poly1305 directly, but HPolyC is measurably slower, costing about
1.5x what Adiantum costs on 4096-byte sectors.
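(Where the 1072 bytes comes from, if I have the paper right: NH
hashes a 1024-byte chunk as 256 32-bit little-endian words,

	NH_K(m) = sum_i ((m[2i] + k[2i]) mod 2^32)
			* ((m[2i+1] + k[2i+1]) mod 2^32)  (mod 2^64),

run in four Toeplitz passes with the key shifted by four words per
pass to get a 256-bit result, so the key must cover 256 + 3*4 = 268
words = 1072 bytes.)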
For the purposes of cgd, we will reuse each key for many messages,
and there will be very few keys in total (one per cgd volume), so --
except for the annoying verbosity of test vectors -- the tradeoff
weighs in favour of Adiantum, especially if we teach cgd to do
>>512-byte sectors.
For now, everything that Adiantum needs beyond what's already in the
kernel is gathered into a single file, including NH, Poly1305, and
XChaCha12. We can split those out -- and reuse them, and provide MD
tuned implementations, and so on -- as needed; this is just a first
pass to get Adiantum implemented for experimentation.
diff -r 36794fee0d04 -r 9fde04e138c1 sys/conf/files
--- a/sys/conf/files Mon Jun 15 22:55:59 2020 +0000
+++ b/sys/conf/files Wed Jun 17 02:47:43 2020 +0000
@@ -200,6 +200,7 @@ defflag opt_machdep.h MACHDEP
# use it.
# Individual crypto transforms
+include "crypto/adiantum/files.adiantum"
include "crypto/aes/files.aes"
include "crypto/des/files.des"
include "crypto/blowfish/files.blowfish"
@@ -1395,7 +1396,7 @@ file dev/ic/amdccp.c amdccp
defpseudodev vnd: disk
defflag opt_vnd.h VND_COMPRESSION
defpseudo ccd: disk
-defpseudodev cgd: disk, des, blowfish, cast128, aes
+defpseudodev cgd: disk, des, blowfish, cast128, aes, adiantum
defpseudodev md: disk
defpseudodev fss: disk
diff -r 36794fee0d04 -r 9fde04e138c1 sys/crypto/adiantum/adiantum.c
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/adiantum/adiantum.c Wed Jun 17 02:47:43 2020 +0000
@@ -0,0 +1,2316 @@
+/* $NetBSD$ */
+
+/*-
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+/*
+ * The Adiantum wide-block cipher, from
+ *
+ * Paul Crowley and Eric Biggers, `Adiantum: length-preserving
+ * encryption for entry-level processors', IACR Transactions on
+ * Symmetric Cryptology 2018(4), pp. 39--61.
+ *
+ * https://doi.org/10.13154/tosc.v2018.i4.39-61
+ */
+
+#include <sys/cdefs.h>
+__KERNEL_RCSID(1, "$NetBSD$");
+
+#include <sys/types.h>
+#include <sys/endian.h>
+
+#ifdef _KERNEL
+
+#include <sys/module.h>
+#include <sys/systm.h>
+
+#include <lib/libkern/libkern.h>
+
+#include <crypto/adiantum/adiantum.h>
+#include <crypto/aes/aes.h>
+
+#else /* !defined(_KERNEL) */
+
+#include <sys/cdefs.h>
+
+#include <assert.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <string.h>
+
+#include <openssl/aes.h>
+
+struct aesenc {
+ AES_KEY enckey;
+};
+
+struct aesdec {
+ AES_KEY deckey;
+};
+
+#define AES_256_NROUNDS 14
+#define aes_setenckey256(E, K) AES_set_encrypt_key((K), 256, &(E)->enckey)
+#define aes_setdeckey256(D, K) AES_set_decrypt_key((K), 256, &(D)->deckey)
+#define aes_enc(E, P, C, NR) AES_encrypt(P, C, &(E)->enckey)
+#define aes_dec(D, C, P, NR) AES_decrypt(C, P, &(D)->deckey)
+
+#include "adiantum.h"
+
+#define CTASSERT __CTASSERT
+#define KASSERT assert
+#define MIN(x,y) ((x) < (y) ? (x) : (y))
+
+static void
+hexdump(int (*prf)(const char *, ...) __printflike(1,2), const char *prefix,
+ const void *buf, size_t len)
+{
+ const uint8_t *p = buf;
+ size_t i;
+
+ (*prf)("%s (%zu bytes)\n", prefix, len);
+ for (i = 0; i < len; i++) {
+ if (i % 16 == 8)
+ (*prf)(" ");
+ else
+ (*prf)(" ");
+ (*prf)("%02hhx", p[i]);
+ if ((i + 1) % 16 == 0)
+ (*prf)("\n");
+ }
+ if (i % 16)
+ (*prf)("\n");
+}
+
+#endif /* _KERNEL */
+
+/* Arithmetic modulo 2^128, represented by 16-digit strings in radix 2^8. */
+
+/* s := a + b (mod 2^128) */
+static inline void
+add128(uint8_t s[restrict static 16],
+ const uint8_t a[static 16], const uint8_t b[static 16])
+{
+ unsigned i, c;
+
+ c = 0;
+ for (i = 0; i < 16; i++) {
+ c = a[i] + b[i] + c;
+ s[i] = c & 0xff;
+ c >>= 8;
+ }
+}
+
+/* s := a - b (mod 2^128) */
+static inline void
+sub128(uint8_t d[restrict static 16],
+ const uint8_t a[static 16], const uint8_t b[static 16])
+{
+ unsigned i, c;
+
+ c = 0;
+ for (i = 0; i < 16; i++) {
+ c = a[i] - b[i] - c;
+ d[i] = c & 0xff;
+ c = 1 & (c >> 8);
+ }
+}
+
+static int
+addsub128_selftest(void)
+{
+ static const uint8_t zero[16] = {
+ 0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00,
+ 0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00,
+ };
+ static const uint8_t one[16] = {
+ 0x01,0x00,0x00,0x00, 0x00,0x00,0x00,0x00,
+ 0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00,
+ };
+ static const uint8_t negativeone[16] = {
+ 0xff,0xff,0xff,0xff, 0xff,0xff,0xff,0xff,
+ 0xff,0xff,0xff,0xff, 0xff,0xff,0xff,0xff,
+ };
+ static const uint8_t a[16] = {
+ 0x03,0x80,0x00,0x00, 0x00,0x00,0x00,0x00,
+ 0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00,
+ };
+ static const uint8_t b[16] = {
+ 0x01,0x82,0x00,0x00, 0x00,0x00,0x00,0x00,
+ 0x00,0x00,0x00,0x00, 0x00,0x00,0x00,0x00,
+ };
+ static const uint8_t c[16] = {
+ 0x02,0xfe,0xff,0xff, 0xff,0xff,0xff,0xff,
+ 0xff,0xff,0xff,0xff, 0xff,0xff,0xff,0xff,
+ };
+ uint8_t r[16];
+ int result = 0;
+
+ sub128(r, zero, one);
+ if (memcmp(r, negativeone, 16)) {
+ hexdump(printf, "sub128 1", r, sizeof r);
+ result = -1;
+ }
+
+ sub128(r, a, b);
+ if (memcmp(r, c, 16)) {
+ hexdump(printf, "sub128 2", r, sizeof r);
+ result = -1;
+ }
+
+ return result;
+}
+
+/* Poly1305 */
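+
+/*
+ * Poly1305 evaluates the polynomial
+ *
+ * c_1 r^n + c_2 r^{n-1} + ... + c_n r	(mod 2^130 - 5)
+ *
+ * at the secret point r, where c_i is the ith 16-byte chunk of the
+ * message with a 1 bit appended (the `pad' below).  The 130-bit value
+ * is kept in five 26-bit limbs so the products in the update fit
+ * comfortably in 64 bits.
+ */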
+
+struct poly1305 {
+ uint32_t r[5]; /* evaluation point */
+ uint32_t h[5]; /* value */
+};
+
+static void
+poly1305_init(struct poly1305 *P, const uint8_t key[static 16])
+{
+
+ /* clamp */
+ P->r[0] = (le32dec(key + 0) >> 0) & 0x03ffffff;
+ P->r[1] = (le32dec(key + 3) >> 2) & 0x03ffff03;
+ P->r[2] = (le32dec(key + 6) >> 4) & 0x03ffc0ff;
+ P->r[3] = (le32dec(key + 9) >> 6) & 0x03f03fff;
+ P->r[4] = (le32dec(key + 12) >> 8) & 0x000fffff;
+
+ /* initialize polynomial evaluation */
+ P->h[0] = P->h[1] = P->h[2] = P->h[3] = P->h[4] = 0;
+}
+
+static void
+poly1305_update_internal(struct poly1305 *P, const uint8_t m[static 16],
+ uint32_t pad)
+{
+ uint32_t r0 = P->r[0];
+ uint32_t r1 = P->r[1];
+ uint32_t r2 = P->r[2];
+ uint32_t r3 = P->r[3];
+ uint32_t r4 = P->r[4];
+ uint32_t h0 = P->h[0];
+ uint32_t h1 = P->h[1];
+ uint32_t h2 = P->h[2];
+ uint32_t h3 = P->h[3];
+ uint32_t h4 = P->h[4];
+ uint64_t k0, k1, k2, k3, k4; /* 64-bit extension of h */
+ uint64_t p0, p1, p2, p3, p4; /* columns of product */
+ uint32_t c; /* carry */
+
+ /* h' := h + m */
+ h0 += (le32dec(m + 0) >> 0) & 0x03ffffff;
+ h1 += (le32dec(m + 3) >> 2) & 0x03ffffff;
+ h2 += (le32dec(m + 6) >> 4) & 0x03ffffff;
+ h3 += (le32dec(m + 9) >> 6);
+ h4 += (le32dec(m + 12) >> 8) | (pad << 24);
+
+ /* extend to 64 bits */
+ k0 = h0;
+ k1 = h1;
+ k2 = h2;
+ k3 = h3;
+ k4 = h4;
+
+ /* p := h' * r = (h + m)*r mod 2^130 - 5 */
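+ /*
+ * Schoolbook multiplication in radix 2^26.  A cross term r_j*k_i
+ * with i + j >= 5 has weight 2^(26*(i+j)) = 5*2^(26*(i+j-5))
+ * because 2^130 = 5 (mod 2^130 - 5), whence the factors of 5
+ * below.
+ */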
+ p0 = r0*k0 + 5*r4*k1 + 5*r3*k2 + 5*r2*k3 + 5*r1*k4;
+ p1 = r1*k0 + r0*k1 + 5*r4*k2 + 5*r3*k3 + 5*r2*k4;
+ p2 = r2*k0 + r1*k1 + r0*k2 + 5*r4*k3 + 5*r3*k4;
+ p3 = r3*k0 + r2*k1 + r1*k2 + r0*k3 + 5*r4*k4;
+ p4 = r4*k0 + r3*k1 + r2*k2 + r1*k3 + r0*k4;
+
+ /* propagate carries */
+ p0 += 0; c = p0 >> 26; h0 = p0 & 0x03ffffff;
+ p1 += c; c = p1 >> 26; h1 = p1 & 0x03ffffff;
+ p2 += c; c = p2 >> 26; h2 = p2 & 0x03ffffff;
+ p3 += c; c = p3 >> 26; h3 = p3 & 0x03ffffff;
+ p4 += c; c = p4 >> 26; h4 = p4 & 0x03ffffff;
+
+ /* fold carry back in: 2^130 = 5 (mod 2^130 - 5) */
+ h0 += c*5; c = h0 >> 26; h0 &= 0x03ffffff;
+ h1 += c;
+
+ /* update hash values */
+ P->h[0] = h0;
+ P->h[1] = h1;
+ P->h[2] = h2;
+ P->h[3] = h3;
+ P->h[4] = h4;
+}
+
+static void
+poly1305_update_block(struct poly1305 *P, const uint8_t m[static 16])
+{
+
+ poly1305_update_internal(P, m, 1);
+}
+
+static void
+poly1305_update_last(struct poly1305 *P, const uint8_t *m, size_t mlen)
+{
+ uint8_t buf[16];
+ unsigned i;
+
+ if (mlen == 16) {
+ poly1305_update_internal(P, m, 1);
+ return;
+ }
+
+ for (i = 0; i < mlen; i++)
+ buf[i] = m[i];
+ buf[i++] = 1;
+ for (; i < 16; i++)
+ buf[i] = 0;
+ poly1305_update_internal(P, buf, 0);
+}
+
+static void
+poly1305_final(uint8_t *h, struct poly1305 *P)
+{
+ uint32_t h0 = P->h[0];
+ uint32_t h1 = P->h[1];
+ uint32_t h2 = P->h[2];
+ uint32_t h3 = P->h[3];
+ uint32_t h4 = P->h[4];
+ uint32_t s0, s1, s2, s3, s4; /* h - (2^130 - 5) */
+ uint32_t m; /* mask */
+ uint32_t c;
+
+ /* propagate carries */
+ h1 += 0; c = h1 >> 26; h1 &= 0x03ffffff;
+ h2 += c; c = h2 >> 26; h2 &= 0x03ffffff;
+ h3 += c; c = h3 >> 26; h3 &= 0x03ffffff;
+ h4 += c; c = h4 >> 26; h4 &= 0x03ffffff;
+
+ /* fold carry back in: 2^130 = 5 (mod 2^130 - 5) */
+ h0 += c*5; c = h0 >> 26; h0 &= 0x03ffffff;
+ h1 += c;
+
+ /* s := h - (2^130 - 5) */
+ c = 5;
+ s0 = h0 + c; c = s0 >> 26; s0 &= 0x03ffffff;
+ s1 = h1 + c; c = s1 >> 26; s1 &= 0x03ffffff;
+ s2 = h2 + c; c = s2 >> 26; s2 &= 0x03ffffff;
+ s3 = h3 + c; c = s3 >> 26; s3 &= 0x03ffffff;
+ s4 = h4 + c;
+ s4 -= 0x04000000;
+
+ /* m := -1 if h < 2^130 - 5 else 0 */
+ m = -(s4 >> 31);
+
+ /* conditional subtract */
+ h0 = (m & h0) | (~m & s0);
+ h1 = (m & h1) | (~m & s1);
+ h2 = (m & h2) | (~m & s2);
+ h3 = (m & h3) | (~m & s3);
+ h4 = (m & h4) | (~m & s4);
+
+ /* reduce modulo 2^128 */
+ le32enc(h + 0, ((h1 << 26) | (h0 >> 0)) & 0xffffffff);
+ le32enc(h + 4, ((h2 << 20) | (h1 >> 6)) & 0xffffffff);
+ le32enc(h + 8, ((h3 << 14) | (h2 >> 12)) & 0xffffffff);
+ le32enc(h + 12, ((h4 << 8) | (h3 >> 18)) & 0xffffffff);
+}
+
+static void
+poly1305(uint8_t h[static 16], const uint8_t *m, size_t mlen,
+ const uint8_t k[static 16])
+{
+ struct poly1305 P;
+
+ poly1305_init(&P, k);
+ for (; mlen > 16; mlen -= 16, m += 16)
+ poly1305_update_block(&P, m);
+ poly1305_update_last(&P, m, mlen);
+ poly1305_final(h, &P);
+}
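+
+/*
+ * This returns the hash reduced mod 2^130 - 5 and truncated to 128
+ * bits; the usual final addition of the pad key s is left to the
+ * caller (cf. add128 in the self-test below), and NHPoly1305 does not
+ * use it at all.
+ */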
+
+static int
+poly1305_selftest(void)
+{
+ /* https://tools.ietf.org/html/rfc7539#section-2.5.2 */
+ static const uint8_t r[16] = {
+ 0x85,0xd6,0xbe,0x78, 0x57,0x55,0x6d,0x33,
+ 0x7f,0x44,0x52,0xfe, 0x42,0xd5,0x06,0xa8,
+ };
+ static const uint8_t s[16] = {
+ 0x01,0x03,0x80,0x8a, 0xfb,0x0d,0xb2,0xfd,
+ 0x4a,0xbf,0xf6,0xaf, 0x41,0x49,0xf5,0x1b,
+ };
+ static const uint8_t m[] = {
+ 0x43,0x72,0x79,0x70, 0x74,0x6f,0x67,0x72,
+ 0x61,0x70,0x68,0x69, 0x63,0x20,0x46,0x6f,
+ 0x72,0x75,0x6d,0x20, 0x52,0x65,0x73,0x65,
+ 0x61,0x72,0x63,0x68, 0x20,0x47,0x72,0x6f,
+ 0x75,0x70,
+ };
+ static const uint8_t expected[16] = {
+ 0xa8,0x06,0x1d,0xc1, 0x30,0x51,0x36,0xc6,
+ 0xc2,0x2b,0x8b,0xaf, 0x0c,0x01,0x27,0xa9,
+ };
+ uint8_t h[16], t[16];
+ int result = 0;
+
+ poly1305(h, m, sizeof m, r);
+ add128(t, h, s);
+ if (memcmp(t, expected, 16)) {
+ hexdump(printf, "poly1305 h", h, sizeof h);
+ hexdump(printf, "poly1305 t", t, sizeof t);
+ result = -1;
+ }
+
+ return result;
+}
+
+/* NHPoly1305 */
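+
+/*
+ * NH compresses each 1024-byte chunk of the message to 32 bytes: for
+ * each pass p in {0,1,2,3} and each 16-byte unit (m_0,m_1,m_2,m_3),
+ * it sums (m_0 + k_{4p})*(m_2 + k_{4p+2}) + (m_1 + k_{4p+1})*(m_3 +
+ * k_{4p+3}) mod 2^64, with the key window advancing four words per
+ * unit.  The passes share one 268-word key at offsets 0, 4, 8, 12
+ * (Toeplitz construction).  Poly1305 then hashes the concatenated
+ * 32-byte NH outputs to give NHPoly1305.
+ */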
+
+static void
+nh(uint8_t h[32], const uint8_t *m, size_t mlen,
+ const uint32_t k[268 /* u/w + 2s(r - 1) */])
+{
+ const unsigned w = 32; /* word size */
+ const unsigned s = 2; /* stride */
+ const unsigned r = 4; /* rounds */
+ const unsigned u = 8192; /* bits per NH message unit */
+ uint64_t h0 = 0, h1 = 0, h2 = 0, h3 = 0;
+ unsigned i;
+
+ CTASSERT(r*w/8 == 16);
+
+ KASSERT(mlen <= u/8);
+ KASSERT(mlen % 16 == 0);
+
+ for (i = 0; i < mlen/16; i++) {
+ uint32_t m0 = le32dec(m + 16*i + 4*0);
+ uint32_t m1 = le32dec(m + 16*i + 4*1);
+ uint32_t m2 = le32dec(m + 16*i + 4*2);
+ uint32_t m3 = le32dec(m + 16*i + 4*3);
+
+ uint32_t k00 = k[4*i + 4*0 + 0];
+ uint32_t k01 = k[4*i + 4*0 + 1];
+ uint32_t k02 = k[4*i + 4*0 + 2];
+ uint32_t k03 = k[4*i + 4*0 + 3];
+ uint32_t k10 = k[4*i + 4*1 + 0];
+ uint32_t k11 = k[4*i + 4*1 + 1];
+ uint32_t k12 = k[4*i + 4*1 + 2];
+ uint32_t k13 = k[4*i + 4*1 + 3];
+ uint32_t k20 = k[4*i + 4*2 + 0];
+ uint32_t k21 = k[4*i + 4*2 + 1];
+ uint32_t k22 = k[4*i + 4*2 + 2];
+ uint32_t k23 = k[4*i + 4*2 + 3];
+ uint32_t k30 = k[4*i + 4*3 + 0];
+ uint32_t k31 = k[4*i + 4*3 + 1];
+ uint32_t k32 = k[4*i + 4*3 + 2];
+ uint32_t k33 = k[4*i + 4*3 + 3];
+
+ CTASSERT(s == 2);
+ h0 += (uint64_t)(m0 + k00) * (m2 + k02);
+ h1 += (uint64_t)(m0 + k10) * (m2 + k12);
+ h2 += (uint64_t)(m0 + k20) * (m2 + k22);
+ h3 += (uint64_t)(m0 + k30) * (m2 + k32);
+ h0 += (uint64_t)(m1 + k01) * (m3 + k03);
+ h1 += (uint64_t)(m1 + k11) * (m3 + k13);
+ h2 += (uint64_t)(m1 + k21) * (m3 + k23);
+ h3 += (uint64_t)(m1 + k31) * (m3 + k33);
+ }
+
+ le64enc(h + 8*0, h0);
+ le64enc(h + 8*1, h1);
+ le64enc(h + 8*2, h2);
+ le64enc(h + 8*3, h3);
+}
+
+static void
+nhpoly1305(uint8_t h[restrict static 16], const uint8_t *m, size_t mlen,
+ const uint8_t pk[static 16],
+ const uint32_t nhk[static 268 /* u/w + 2s(r - 1) */])
+{
+ struct poly1305 P;
+ uint8_t h0[32];
+
+ /*
+ * In principle NHPoly1305 is defined on uneven message
+ * lengths, but that's a pain in the patootie.
+ */
+ KASSERT(mlen % 16 == 0);
+
+ poly1305_init(&P, pk);
+ for (; mlen; m += MIN(mlen, 1024), mlen -= MIN(mlen, 1024)) {
+ nh(h0, m, MIN(mlen, 1024), nhk);
+ poly1305_update_block(&P, h0 + 16*0);
+ poly1305_update_block(&P, h0 + 16*1);
+ }
+ poly1305_final(h, &P);
+}
+
+/* https://github.com/google/adiantum/blob/68971e9c6684121b2203b4b05a22768b84051b58/test_vectors/ours/NH/NH.json */
+static int
+nh_selftest(void)
+{
+ static const struct {
+ uint8_t k[1072];
+ unsigned mlen;
+ uint8_t m[1024];
+ uint8_t h[32];
+ } C[] = {
+ [0] = { /* 16-byte message */
+ .k = {
+ 0x22,0x5b,0x80,0xc8, 0x18,0x05,0x37,0x09,
+ 0x76,0x14,0x4b,0x67, 0xc4,0x50,0x7f,0x2b,
+ 0x2c,0xff,0x56,0xc5, 0xd5,0x66,0x45,0x68,
+ 0x35,0xe6,0xd2,0x9a, 0xe5,0xd0,0xc1,0xfb,
+ 0xac,0x59,0x81,0x1a, 0x60,0xb0,0x3d,0x81,
+ 0x4b,0xa3,0x5b,0xa9, 0xcc,0xb3,0xfe,0x2d,
+ 0xc2,0x4d,0xd9,0x26, 0xad,0x36,0xcf,0x8c,
+ 0x05,0x11,0x3b,0x8a, 0x99,0x15,0x81,0xc8,
+ 0x23,0xf5,0x5a,0x94, 0x10,0x2f,0x92,0x80,
+ 0x38,0xc5,0xb2,0x63, 0x80,0xd5,0xdc,0xa3,
+ 0x6c,0x2f,0xaa,0x03, 0x96,0x4a,0x75,0x33,
+ 0x4c,0xa8,0x60,0x05, 0x96,0xbf,0xe5,0x7a,
+ 0xc8,0x4f,0x5c,0x22, 0xf9,0x92,0x74,0x4a,
+ 0x75,0x5f,0xa2,0x2a, 0x8d,0x3f,0xe2,0x43,
+ 0xfd,0xd9,0x04,0x8c, 0x8e,0xea,0x84,0xcc,
+ 0x4d,0x3f,0x94,0x96, 0xed,0x1a,0x51,0xbb,
+ 0x2f,0xc4,0x63,0x28, 0x31,0x0b,0xda,0x92,
+ 0x1e,0x4d,0xe2,0x1d, 0x82,0xb5,0x65,0xb4,
+ 0x75,0x69,0xd7,0x6f, 0x29,0xe4,0xbe,0x7e,
+ 0xcc,0xbd,0x95,0xbd, 0x7a,0x62,0xea,0xfa,
+ 0x33,0x34,0x80,0x58, 0xbf,0xfa,0x00,0x7e,
+ 0xa7,0xb4,0xc9,0x32, 0x7c,0xc7,0x8f,0x8a,
+ 0x28,0x27,0xdd,0xeb, 0xb9,0x1c,0x01,0xad,
+ 0xec,0xf4,0x30,0x5e, 0xce,0x3b,0xaa,0x22,
+ 0x60,0xbd,0x84,0xd9, 0x9e,0xaf,0xe8,0x4c,
+ 0x44,0xb6,0x84,0x2d, 0x5c,0xe6,0x26,0xee,
+ 0x8a,0xa2,0x0d,0xe3, 0x97,0xed,0xf5,0x47,
+ 0xdb,0x50,0x72,0x4a, 0x5e,0x9a,0x8d,0x10,
+ 0xc2,0x25,0xdd,0x5b, 0xd0,0x39,0xc4,0x5b,
+ 0x2a,0x79,0x81,0xb7, 0x5c,0xda,0xed,0x77,
+ 0x17,0x53,0xb5,0x8b, 0x1e,0x5f,0xf3,0x48,
+ 0x30,0xac,0x97,0x7d, 0x29,0xe3,0xc9,0x18,
+ 0xe1,0x2b,0x31,0xa0, 0x08,0xe9,0x15,0x59,
+ 0x29,0xdb,0x84,0x2a, 0x33,0x98,0x8a,0xd4,
+ 0xc3,0xfc,0xf7,0xca, 0x65,0x02,0x4d,0x9f,
+ 0xe2,0xb1,0x5e,0xa6, 0x6a,0x01,0xf9,0xcf,
+ 0x7e,0xa6,0x09,0xd9, 0x16,0x90,0x14,0x5f,
+ 0x3a,0xf8,0xd8,0x34, 0x38,0xd6,0x1f,0x89,
+ 0x0c,0x81,0xc2,0x68, 0xc4,0x65,0x78,0xf3,
+ 0xfe,0x27,0x48,0x70, 0x38,0x43,0x48,0x5a,
+ 0xc1,0x24,0xc5,0x6f, 0x65,0x63,0x1b,0xb0,
+ 0x5b,0xb4,0x07,0x1e, 0x69,0x08,0x8f,0xfc,
+ 0x93,0x29,0x04,0x16, 0x6a,0x8b,0xb3,0x3d,
+ 0x0f,0xba,0x5f,0x46, 0xff,0xfe,0x77,0xa1,
+ 0xb9,0xdc,0x29,0x66, 0x9a,0xd1,0x08,0xdd,
+ 0x32,0xe3,0x21,0x7b, 0xcc,0x2e,0x5c,0xf7,
+ 0x79,0x68,0xd4,0xc1, 0x8b,0x3c,0x5d,0x0e,
+ 0xd4,0x26,0xa6,0x19, 0x92,0x45,0xf7,0x19,
+ 0x0e,0xa2,0x17,0xd8, 0x1c,0x7f,0x8d,0xd6,
+ 0x68,0x37,0x6c,0xbf, 0xb1,0x8a,0x5e,0x36,
+ 0x4b,0xc0,0xca,0x21, 0x02,0x24,0x69,0x9b,
+ 0x2b,0x19,0x0a,0x1b, 0xe3,0x17,0x30,0x57,
+ 0xf6,0xfc,0xd6,0x66, 0x36,0x30,0xc2,0x11,
+ 0x08,0x8d,0xc5,0x84, 0x67,0xa0,0x89,0xc3,
+ 0x74,0x48,0x15,0xca, 0x6e,0x0c,0x6d,0x78,
+ 0x66,0x15,0x73,0x85, 0xf9,0x8b,0xba,0xb2,
+ 0x09,0xda,0x79,0xe6, 0x00,0x08,0x2a,0xda,
+ 0x6b,0xd7,0xd1,0xa7, 0x8b,0x5f,0x11,0x87,
+ 0x96,0x1b,0x23,0xb0, 0x6c,0x55,0xb6,0x86,
+ 0xfb,0xff,0xe3,0x69, 0xac,0x43,0xcd,0x8f,
+ 0x8a,0xe7,0x1c,0x3c, 0xa0,0x6a,0xd5,0x63,
+ 0x80,0x66,0xd8,0x7f, 0xb5,0xb8,0x96,0xd4,
+ 0xe2,0x20,0x40,0x53, 0x6d,0x0d,0x8b,0x6d,
+ 0xd5,0x5d,0x51,0xfb, 0x4d,0x80,0x82,0x01,
+ 0x14,0x97,0x96,0x9b, 0x13,0xb8,0x1d,0x76,
+ 0x7a,0xa1,0xca,0x19, 0x90,0xec,0x7b,0xe0,
+ 0x8e,0xa8,0xb4,0xf2, 0x33,0x67,0x0e,0x10,
+ 0xb1,0xa2,0x82,0xea, 0x81,0x82,0xa2,0xc6,
+ 0x78,0x51,0xa6,0xd3, 0x25,0xe4,0x9c,0xf2,
+ 0x6b,0xa8,0xec,0xfb, 0xd4,0x1d,0x5b,0xa4,
+ 0x79,0x66,0x62,0xb8, 0x2b,0x6f,0x9e,0x0f,
+ 0xcc,0xcb,0x9e,0x92, 0x6f,0x06,0xdb,0xf0,
+ 0x97,0xce,0x3f,0x90, 0xa2,0x1f,0xbe,0x3b,
+ 0x7b,0x10,0xf0,0x23, 0x30,0x0c,0xc5,0x0c,
+ 0x6c,0x78,0xfc,0xa8, 0x71,0x62,0xcf,0x98,
+ 0xa2,0xb1,0x44,0xb5, 0xc6,0x3b,0x5c,0x63,
+ 0x83,0x1d,0x35,0xf2, 0xc7,0x42,0x67,0x5d,
+ 0xc1,0x26,0x36,0xc8, 0x6e,0x1d,0xf6,0xd5,
+ 0x52,0x35,0xa4,0x9e, 0xce,0x4c,0x3b,0x92,
+ 0x20,0x86,0xb7,0x89, 0x63,0x73,0x1a,0x8b,
+ 0xa6,0x35,0xfe,0xb9, 0xdf,0x5e,0x0e,0x53,
+ 0x0b,0xf2,0xb3,0x4d, 0x34,0x1d,0x66,0x33,
+ 0x1f,0x08,0xf5,0xf5, 0x0a,0xab,0x76,0x19,
+ 0xde,0x82,0x2f,0xcf, 0x11,0xa6,0xcb,0xb3,
+ 0x17,0xec,0x8d,0xaf, 0xcb,0xf0,0x92,0x1e,
+ 0xb8,0xa3,0x04,0x0a, 0xac,0x2c,0xae,0xc5,
+ 0x0b,0xc4,0x4e,0xef, 0x0a,0xe2,0xda,0xe9,
+ 0xd7,0x75,0x2d,0x95, 0xc7,0x1b,0xf3,0x0b,
+ 0x43,0x19,0x16,0xd7, 0xc6,0x90,0x2d,0x6b,
+ 0xe1,0xb2,0xce,0xbe, 0xd0,0x7d,0x15,0x99,
+ 0x24,0x37,0xbc,0xb6, 0x8c,0x89,0x7a,0x8c,
+ 0xcb,0xa7,0xf7,0x0b, 0x5f,0xd4,0x96,0x8d,
+ 0xf5,0x80,0xa3,0xce, 0xf5,0x9e,0xed,0x60,
+ 0x00,0x92,0xa5,0x67, 0xc9,0x21,0x79,0x0b,
+ 0xfb,0xe2,0x57,0x0e, 0xdf,0xb6,0x16,0x90,
+ 0xd3,0x75,0xf6,0xb0, 0xa3,0x4e,0x43,0x9a,
+ 0xb7,0xf4,0x73,0xd8, 0x34,0x46,0xc6,0xbe,
+ 0x80,0xec,0x4a,0xc0, 0x7f,0x9e,0xb6,0xb0,
+ 0x58,0xc2,0xae,0xa1, 0xf3,0x60,0x04,0x62,
+ 0x11,0xea,0x0f,0x90, 0xa9,0xea,0x6f,0x0c,
+ 0x4c,0xcf,0xe8,0xd0, 0xea,0xbf,0xdb,0xf2,
+ 0x53,0x0c,0x09,0x4d, 0xd4,0xed,0xf3,0x22,
+ 0x10,0x99,0xc6,0x4f, 0xcf,0xcf,0x96,0xc9,
+ 0xd9,0x6b,0x08,0x3b, 0xf0,0x62,0x2d,0xac,
+ 0x55,0x38,0xd5,0x5c, 0x57,0xad,0x51,0xc3,
+ 0xf5,0xd2,0x37,0x45, 0xb3,0x3f,0x6d,0xaf,
+ 0x10,0x62,0x57,0xb9, 0x58,0x40,0xb3,0x3c,
+ 0x6a,0x98,0x97,0x1a, 0x9c,0xeb,0x66,0xf1,
+ 0xa5,0x93,0x0b,0xe7, 0x8b,0x29,0x0f,0xff,
+ 0x2c,0xd0,0x90,0xf2, 0x67,0xa0,0x69,0xcd,
+ 0xd3,0x59,0xad,0xad, 0xf1,0x1f,0xd7,0xad,
+ 0x24,0x74,0x29,0xcd, 0x06,0xd5,0x42,0x90,
+ 0xf9,0x96,0x4a,0xd9, 0xa0,0x37,0xe4,0x64,
+ 0x8e,0x13,0x2a,0x2a, 0xe7,0xc2,0x1e,0xf6,
+ 0xb2,0xd3,0xdc,0x9f, 0x33,0x32,0x0c,0x50,
+ 0x88,0x37,0x8b,0x9b, 0xfe,0x6f,0xfd,0x05,
+ 0x96,0x26,0x6c,0x96, 0x73,0x73,0xe1,0x09,
+ 0x28,0xf3,0x7f,0xa6, 0x59,0xc5,0x2e,0xf4,
+ 0xd3,0xd5,0xda,0x6b, 0xca,0x42,0x05,0xe5,
+ 0xed,0x13,0xe2,0x4e, 0xcd,0xd5,0xd0,0xfb,
+ 0x6e,0xf7,0x8a,0x3e, 0x91,0x9d,0x6b,0xc5,
+ 0x33,0x05,0x07,0x86, 0xb2,0x26,0x41,0x6e,
+ 0xf8,0x38,0x38,0x7a, 0xf0,0x6c,0x27,0x5a,
+ 0x01,0xd8,0x03,0xe5, 0x91,0x33,0xaa,0x20,
+ 0xcd,0xa7,0x4f,0x18, 0xa0,0x91,0x28,0x74,
+ 0xc0,0x58,0x27,0x0f, 0x9b,0xa8,0x85,0xb0,
+ 0xe0,0xfd,0x5b,0xdb, 0x5b,0xb8,0x86,0x79,
+ 0x94,0x6d,0xde,0x26, 0x64,0x2d,0x6c,0xb9,
+ 0xba,0xc7,0xf0,0xd7, 0xaa,0x68,0x68,0xd0,
+ 0x40,0x71,0xdb,0x94, 0x54,0x62,0xa5,0x7f,
+ 0x98,0xea,0xe3,0x4c, 0xe4,0x44,0x9a,0x03,
+ 0xf9,0x1c,0x20,0x36, 0xeb,0x0d,0xa4,0x41,
+ 0x24,0x06,0xcb,0x94, 0x86,0x35,0x22,0x62,
+ 0x80,0x19,0x16,0xba, 0x2c,0x10,0x38,0x96,
+ },
+ .mlen = 16,
+ .m = {
+ 0xd3,0x82,0xe7,0x04, 0x35,0xcc,0xf7,0xa4,
+ 0xf9,0xb2,0xc5,0xed, 0x5a,0xd9,0x58,0xeb,
+ },
+ .h = {
+ 0x41,0xd9,0xad,0x54, 0x5a,0x0d,0xcc,0x53,
+ 0x48,0xf6,0x4c,0x75, 0x43,0x5d,0xdd,0x77,
+ 0xda,0xca,0x7d,0xec, 0x91,0x3b,0x53,0x16,
+ 0x5c,0x4b,0x58,0xdc, 0x70,0x0a,0x7b,0x37,
+ },
+ },
+ [1] = { /* 1008-byte message */
+ .k = {
+ 0xd9,0x94,0x65,0xda, 0xc2,0x60,0xdd,0xa9,
+ 0x39,0xe5,0x37,0x11, 0xf6,0x74,0xa5,0x95,
+ 0x36,0x07,0x24,0x99, 0x64,0x6b,0xda,0xe2,
+ 0xd5,0xd1,0xd2,0xd9, 0x25,0xd5,0xcc,0x48,
+ 0xf8,0xa5,0x9e,0xff, 0x84,0x5a,0xd1,0x6f,
+ 0xb7,0x6a,0x4d,0xd2, 0xc8,0x13,0x3d,0xde,
+ 0x17,0xed,0x64,0xf1, 0x2b,0xcc,0xdd,0x65,
+ 0x11,0x16,0xf2,0xaf, 0x34,0xd2,0xc5,0x31,
+ 0xaa,0x69,0x33,0x0a, 0x0b,0xc1,0xb4,0x6d,
+ 0xaa,0xcd,0x43,0xc4, 0x0b,0xef,0xf9,0x7d,
+ 0x97,0x3c,0xa7,0x22, 0xda,0xa6,0x6a,0xf0,
+ 0xad,0xe3,0x6f,0xde, 0xfb,0x33,0xf3,0xd8,
+ 0x96,0x5f,0xca,0xda, 0x18,0x63,0x03,0xd0,
+ 0x8f,0xb6,0xc4,0x62, 0x9d,0x50,0x6c,0x8f,
+ 0x85,0xdd,0x6d,0x52, 0x2d,0x45,0x01,0x36,
+ 0x57,0x9f,0x51,0xf0, 0x70,0xe0,0xb2,0x99,
+ 0x3a,0x11,0x68,0xbd, 0xe5,0xfa,0x7c,0x59,
+ 0x12,0x5a,0xbc,0xd9, 0xd6,0x9a,0x09,0xe6,
+ 0xa2,0x80,0x1f,0xd6, 0x47,0x20,0x82,0x4e,
+ 0xac,0xb5,0x6d,0xde, 0x5b,0xff,0x9c,0xd4,
+ 0x2a,0xae,0x27,0x7c, 0x0f,0x5a,0x5d,0x35,
+ 0x2d,0xff,0x07,0xf9, 0x79,0x6a,0xf9,0x3e,
+ 0xd9,0x22,0x62,0x30, 0x40,0xce,0xe1,0xf4,
+ 0x46,0x0a,0x24,0xca, 0x7a,0x3e,0xa1,0x92,
+ 0x1a,0x29,0xa0,0xbf, 0x23,0x95,0x99,0x31,
+ 0xe3,0x51,0x25,0x3d, 0xaf,0x1e,0xfc,0xb3,
+ 0x65,0xa2,0x10,0x37, 0xe6,0xa7,0x20,0xa0,
+ 0xe3,0x6a,0xd4,0x81, 0x2c,0x8d,0xa0,0x87,
+ 0xec,0xae,0x9f,0x44, 0x10,0xda,0x2e,0x17,
+ 0xba,0xb2,0xa5,0x5c, 0x89,0xc6,0xfa,0x70,
+ 0x7e,0xc2,0xe3,0xb6, 0xa0,0x98,0x9c,0xb8,
+ 0x14,0x33,0x27,0x3a, 0x6e,0x4d,0x94,0x72,
+ 0x4b,0xc8,0xac,0x24, 0x2f,0x85,0xd9,0xa4,
+ 0xda,0x22,0x95,0xc5, 0xb3,0xfc,0xbe,0xd2,
+ 0x96,0x57,0x91,0xf9, 0xfd,0x18,0x9c,0x56,
+ 0x70,0x15,0x5f,0xe7, 0x40,0x45,0x28,0xb3,
+ 0x2b,0x56,0x44,0xca, 0x6a,0x2b,0x0e,0x25,
+ 0x66,0x3e,0x32,0x04, 0xe2,0xb7,0x91,0xc8,
+ 0xd2,0x02,0x79,0x0f, 0x7e,0xa9,0xb3,0x86,
+ 0xb2,0x76,0x74,0x18, 0x57,0x16,0x63,0x06,
+ 0x6e,0x16,0xfa,0xef, 0x52,0x3c,0x5e,0x0d,
+ 0x33,0x55,0xd2,0x8d, 0x57,0x4d,0xfe,0x54,
+ 0x65,0x7a,0x54,0x52, 0xf0,0x7b,0x2c,0xf8,
+ 0xd5,0x43,0xba,0x92, 0xa5,0x2e,0xbe,0x1a,
+ 0xce,0x25,0x4f,0x34, 0x31,0xe7,0xa3,0xff,
+ 0x90,0xf6,0xbc,0x0c, 0xbc,0x98,0xdf,0x4a,
+ 0xc3,0xeb,0xb6,0x27, 0x68,0xa9,0xb5,0x33,
+ 0xbc,0x13,0xe8,0x13, 0x7c,0x6b,0xec,0x31,
+ 0xd9,0x79,0x2a,0xa7, 0xe4,0x02,0x4f,0x02,
+ 0xd4,0x5c,0x57,0x4f, 0xa4,0xbc,0xa3,0xe1,
+ 0x7e,0x36,0x8a,0xde, 0x11,0x55,0xec,0xb3,
+ 0x8b,0x65,0x06,0x02, 0x9a,0x68,0x06,0x64,
+ 0x63,0xc7,0x9a,0x67, 0xdc,0x70,0xbf,0xb5,
+ 0xf8,0x49,0x2a,0xe1, 0x59,0x4c,0xe4,0x1e,
+ 0xb5,0x56,0xa5,0xad, 0x24,0x82,0x8c,0xd0,
+ 0x66,0xe4,0x72,0x79, 0x02,0x5d,0x0d,0xf9,
+ 0x19,0x44,0xe3,0x86, 0x1a,0xda,0xda,0xf0,
+ 0x2d,0x47,0xc0,0x07, 0x47,0x0b,0xf8,0x06,
+ 0xf6,0x45,0x8a,0x7f, 0xb9,0xf9,0x33,0x2e,
+ 0xc2,0xf1,0xf1,0x81, 0x41,0x99,0xcd,0xf6,
+ 0xb1,0x71,0x1b,0xfa, 0x21,0x53,0x7c,0xa1,
+ 0xeb,0x2a,0x38,0x5b, 0x9b,0xfe,0x96,0xa5,
+ 0xe3,0x78,0x77,0x47, 0x98,0x0f,0x7d,0xef,
+ 0xf6,0x05,0x37,0x88, 0x79,0x0c,0x21,0x8d,
+ 0x87,0x1f,0xae,0xce, 0x83,0xaf,0xa3,0xd6,
+ 0x6e,0xc5,0x3c,0x47, 0xc6,0xd6,0x4a,0xdc,
+ 0x7c,0xcc,0xdc,0x11, 0x7c,0x7d,0x0f,0x03,
+ 0xc1,0x80,0x75,0x2a, 0x64,0x76,0xf0,0x08,
+ 0x0c,0x11,0x4b,0xe4, 0x05,0x41,0x78,0x0f,
+ 0x86,0xa0,0xd6,0x61, 0xb0,0xfb,0x15,0x3d,
+ 0x3c,0xc3,0xd5,0x1b, 0x72,0x0e,0x79,0x53,
+ 0x07,0xd2,0x2c,0x6e, 0x83,0xbd,0x72,0x88,
+ 0x41,0x07,0x4b,0xd2, 0xe9,0xcc,0x2a,0x9d,
+ 0x5b,0x82,0x0d,0x02, 0x29,0x6e,0xf3,0xbc,
+ 0x34,0x31,0x62,0x8d, 0x83,0xc1,0x7e,0x94,
+ 0x21,0xd5,0xfd,0xa6, 0x6a,0x2b,0xe8,0x86,
+ 0x05,0x48,0x97,0x41, 0xad,0xca,0xef,0x79,
+ 0x5e,0xd8,0x51,0xc4, 0xae,0xf7,0xfa,0xac,
+ 0x3d,0x74,0x2e,0xf4, 0x41,0x3b,0x19,0xc2,
+ 0x04,0xf3,0x40,0xfe, 0x77,0x7c,0x6a,0x4c,
+ 0x8e,0x24,0x84,0xe0, 0x70,0xe4,0xb2,0x19,
+ 0x6c,0x0c,0x85,0x9e, 0xe1,0xad,0xa4,0x73,
+ 0x90,0xdd,0xbf,0x7d, 0x1b,0x6f,0x8b,0x4d,
+ 0x3b,0xec,0xd7,0xb0, 0xd9,0x90,0xf1,0xf5,
+ 0xb9,0x32,0xe3,0x79, 0x15,0x08,0x3e,0x71,
+ 0xed,0x91,0xc4,0x5c, 0x18,0xe8,0x16,0x52,
+ 0xae,0x9d,0xf3,0x09, 0xac,0x57,0x11,0xf8,
+ 0x16,0x55,0xd0,0x28, 0x60,0xc1,0x7e,0x6d,
+ 0x87,0xc1,0x7a,0xe8, 0x5d,0xc5,0x12,0x68,
+ 0x6d,0x63,0x39,0x27, 0x49,0xb8,0x0c,0x78,
+ 0x92,0xea,0x6f,0x52, 0xeb,0x43,0xc2,0x0b,
+ 0xd8,0x28,0x77,0xe5, 0x43,0x5f,0xb8,0xa6,
+ 0x32,0xb7,0xaa,0x01, 0x1e,0xa6,0xde,0xe4,
+ 0x9b,0x0f,0xb6,0x49, 0xcc,0x6f,0x2c,0x04,
+ 0x41,0xcb,0xd8,0x80, 0xd1,0x15,0x5e,0x57,
+ 0x1e,0x4a,0x77,0xbf, 0xc4,0xcb,0x09,0x7c,
+ 0x6e,0x81,0xb8,0x64, 0x51,0x6a,0xf2,0x71,
+ 0x06,0xf6,0x00,0xac, 0x79,0x2c,0x83,0x7a,
+ 0x6c,0xa4,0x85,0x89, 0x69,0x06,0x26,0x72,
+ 0xe1,0x00,0x66,0xc0, 0xc5,0x8e,0xc8,0x51,
+ 0x6e,0x25,0xdd,0xc9, 0x54,0x98,0x45,0x64,
+ 0xaa,0x51,0x18,0x1b, 0xe4,0xbe,0x1b,0xee,
+ 0x13,0xd6,0x34,0x50, 0x4c,0xcf,0x3c,0x31,
+ 0x9b,0xd2,0x6f,0x07, 0x79,0xf4,0x63,0x3f,
+ 0x09,0x01,0x64,0xf1, 0xc1,0xf1,0xae,0xa9,
+ 0x0c,0x60,0xc9,0x62, 0x84,0xf6,0xe8,0x15,
+ 0x55,0xdf,0xdd,0x71, 0x95,0xa9,0x0f,0x65,
+ 0x97,0x40,0x79,0x86, 0x95,0xd9,0x57,0x23,
+ 0x2f,0x61,0x51,0xb5, 0x16,0x18,0x62,0xd2,
+ 0x1a,0xd9,0x8b,0x88, 0x84,0xa9,0x9b,0x47,
+ 0xd7,0x22,0x68,0xe9, 0x9c,0x69,0x68,0x74,
+ 0x13,0x95,0xd3,0x99, 0x33,0xdb,0x30,0x96,
+ 0xbf,0x01,0xc6,0x68, 0xbd,0x19,0x32,0xc1,
+ 0xf8,0xa9,0x7f,0x2b, 0xc5,0x69,0x2f,0xa2,
+ 0xce,0x5a,0x46,0x43, 0x8d,0x36,0x9c,0xfa,
+ 0x5c,0x7f,0x03,0xe0, 0x80,0xaa,0xc7,0x9e,
+ 0x3b,0xa3,0x27,0x6b, 0x2e,0xc6,0x59,0x0a,
+ 0xf6,0x36,0x37,0xa6, 0xc0,0xd1,0xa1,0xa1,
+ 0x7e,0xc1,0xf8,0x5b, 0x0f,0x9b,0xdd,0x6d,
+ 0x9f,0x54,0x16,0x6b, 0x6e,0x53,0xfd,0xe8,
+ 0x72,0xd0,0x3e,0x46, 0xce,0xaf,0x94,0x36,
+ 0x85,0xa8,0xae,0x4c, 0x8d,0xb5,0xc2,0x1b,
+ 0x5d,0x29,0x46,0x40, 0x87,0x50,0x59,0xdd,
+ 0x04,0xbe,0xba,0x8f, 0x0b,0x9b,0xd2,0x50,
+ 0x67,0x19,0x83,0x80, 0x87,0x5c,0x58,0x86,
+ 0x20,0x39,0xbf,0xdf, 0xd2,0xc8,0xbb,0xe8,
+ 0xc8,0xd8,0xe8,0x8d, 0xcc,0x97,0xe0,0xc9,
+ 0x6c,0x2f,0x47,0xb6, 0x75,0x8f,0x0d,0x37,
+ 0x5a,0x83,0xb0,0xce, 0x59,0xc2,0x0b,0x84,
+ 0xa2,0x54,0xe5,0x38, 0x59,0x29,0x0f,0xa8,
+ 0x26,0x2d,0x11,0xa9, 0x89,0x0e,0x0b,0x75,
+ 0xe0,0xbc,0xf0,0xf8, 0x92,0x1f,0x29,0x71,
+ 0x91,0xc4,0x63,0xcc, 0xf8,0x52,0xb5,0xd4,
+ 0xb8,0x94,0x6a,0x30, 0x90,0xf7,0x44,0xbe,
+ },
+ .mlen = 1008,
+ .m = {
+ 0x05,0xe3,0x6f,0x44, 0xa4,0x40,0x35,0xf6,
+ 0xeb,0x86,0xa9,0x6d, 0xed,0x16,0xdb,0xb6,
+ 0x5b,0x59,0xda,0x30, 0x54,0x6c,0x59,0x35,
+ 0x42,0x59,0x56,0x45, 0x9a,0x85,0x20,0x73,
+ 0xcf,0x21,0xf5,0x98, 0x58,0x07,0x0e,0x7f,
+ 0x44,0x1f,0xf1,0x53, 0x92,0xc7,0x81,0x53,
+ 0x5e,0x97,0x8a,0x23, 0x1d,0xe8,0xad,0xca,
+ 0x19,0x55,0x96,0x9d, 0x9b,0xfd,0x0a,0x0a,
+ 0xad,0xa8,0x0f,0x76, 0xe2,0x6a,0x8f,0x33,
+ 0x36,0xbf,0xcb,0x7a, 0xfd,0x61,0xc6,0xfb,
+ 0x75,0xea,0xd4,0x09, 0x5e,0x70,0xfb,0x32,
+ 0x54,0xe3,0x47,0x48, 0xd4,0x8c,0xa9,0x7c,
+ 0x72,0xdb,0xdb,0xf7, 0x09,0x6d,0x58,0xa6,
+ 0x42,0xb5,0x74,0x8c, 0x98,0x66,0x83,0x7a,
+ 0x6d,0xeb,0x91,0xfb, 0x22,0x1c,0x78,0x3d,
+ 0x22,0xa6,0xf8,0xb0, 0xd1,0x9f,0xc8,0x69,
+ 0x8a,0xba,0xd3,0x78, 0x21,0xb0,0x7b,0x9f,
+ 0xb8,0xed,0xe0,0x65, 0xff,0xa0,0x8b,0x4c,
+ 0x17,0x9e,0xf7,0x3e, 0xa2,0x5f,0x82,0x77,
+ 0xce,0x2a,0xda,0x41, 0x76,0x07,0x68,0xa4,
+ 0xa1,0xbb,0xe0,0x1d, 0x7b,0xab,0x9c,0x03,
+ 0x90,0x2c,0xd2,0x93, 0x46,0x43,0x3a,0x44,
+ 0x29,0xe8,0xb5,0x7a, 0x23,0xbb,0xe9,0xaf,
+ 0x2b,0x17,0x88,0x8f, 0x7a,0x81,0x7a,0x25,
+ 0x3b,0xc7,0x1e,0x6e, 0xde,0x3e,0x54,0xbc,
+ 0xc6,0xff,0x07,0xdc, 0xe6,0x29,0x02,0x4c,
+ 0x95,0x57,0x0e,0x44, 0xc4,0x9c,0xc7,0x45,
+ 0x01,0xd7,0x17,0xfd, 0x0f,0x1a,0x83,0x74,
+ 0xa0,0xd5,0xb3,0x1a, 0xc0,0x97,0xdc,0xc3,
+ 0x0f,0x3d,0x5d,0x8c, 0x02,0x58,0xc6,0x4d,
+ 0x43,0x10,0xae,0xc9, 0x94,0xe2,0x9b,0xcd,
+ 0xf9,0xcc,0xfe,0xbd, 0x9c,0x69,0xd0,0xec,
+ 0xf8,0x67,0xde,0x98, 0xe5,0x50,0x5e,0x93,
+ 0x6a,0x5b,0x31,0x2a, 0x62,0xee,0x03,0xbe,
+ 0x76,0x9c,0x1d,0x13, 0x16,0x13,0xcf,0x63,
+ 0x30,0x18,0x7d,0x1e, 0x55,0x94,0xf5,0x29,
+ 0xb4,0x91,0xb4,0x76, 0x1c,0x31,0x9e,0xe5,
+ 0x1b,0x0a,0xee,0x89, 0xb4,0xd9,0x45,0x19,
+ 0xd7,0x47,0x2c,0x01, 0x20,0xe6,0x1d,0x7c,
+ 0xb3,0x5e,0x1b,0x2a, 0x8c,0x3d,0x4d,0x1a,
+ 0x6b,0x35,0x84,0x41, 0x6a,0xe4,0x32,0x8f,
+ 0x9a,0x0d,0xbf,0x90, 0xff,0xcf,0x4c,0xfb,
+ 0x9b,0x07,0x81,0x94, 0xcf,0x8e,0x1a,0x8a,
+ 0xfc,0xbd,0x91,0xfe, 0xc3,0xe1,0x18,0xc7,
+ 0x1f,0x0d,0x8e,0x1c, 0x2e,0xfc,0x02,0xe8,
+ 0x39,0xbf,0x05,0x90, 0x58,0x94,0xee,0xe7,
+ 0x15,0x31,0x5d,0x9f, 0x68,0x36,0x64,0x32,
+ 0x25,0x49,0xdd,0x3e, 0xc8,0xb6,0x83,0x5e,
+ 0x09,0x90,0xcd,0x48, 0xaf,0x9e,0xfe,0xd6,
+ 0x79,0x8e,0x69,0x4b, 0x94,0xd5,0xf4,0x84,
+ 0x7b,0xce,0xea,0x2f, 0x9b,0x79,0x7a,0x7c,
+ 0x22,0x28,0x4d,0xa1, 0x38,0x1a,0x66,0x24,
+ 0x79,0xa3,0xfa,0xfa, 0x8d,0x98,0x7c,0x54,
+ 0x71,0x54,0xef,0x37, 0xa6,0xf1,0x97,0x54,
+ 0xad,0xe7,0x67,0xa0, 0xf3,0x33,0xcf,0x4f,
+ 0x4e,0xa3,0x47,0xee, 0x31,0xd3,0x98,0xf9,
+ 0x7f,0x9f,0x44,0x18, 0x2f,0x13,0x1b,0x44,
+ 0x57,0xcd,0x15,0x5b, 0xde,0x8f,0x1a,0x3c,
+ 0xb5,0x1e,0xa7,0x2d, 0x4d,0xbe,0x85,0x08,
+ 0x78,0xeb,0xe2,0x35, 0x3a,0xbe,0x55,0x6b,
+ 0xc3,0xe1,0x0f,0x77, 0x43,0x41,0x11,0x5a,
+ 0x61,0xc9,0x3b,0xbc, 0xad,0x88,0x9e,0xba,
+ 0xc6,0xd2,0xdc,0x87, 0xd9,0x54,0xcc,0x86,
+ 0x46,0xe6,0xa5,0x29, 0x2c,0x08,0x49,0x53,
+ 0x2c,0xe3,0x0e,0x60, 0xc5,0x48,0xca,0x62,
+ 0x3f,0xf6,0x93,0xc1, 0xba,0x8d,0x36,0x49,
+ 0xe7,0x0f,0x9c,0x49, 0x7d,0xee,0x2a,0x22,
+ 0xc3,0xe5,0x11,0x21, 0xfa,0xc7,0xeb,0x79,
+ 0xcc,0x4d,0x75,0x4e, 0x66,0x33,0xf5,0x09,
+ 0xa3,0xb9,0x60,0xa5, 0xd6,0xbd,0x38,0x75,
+ 0x0c,0x2f,0x5f,0x1f, 0xea,0xa5,0x9d,0x45,
+ 0x3c,0xe4,0x41,0xb8, 0xf6,0x4e,0x15,0x87,
+ 0x0b,0x7f,0x42,0x4e, 0x51,0x3d,0xc4,0x9a,
+ 0xb2,0xca,0x37,0x16, 0x0f,0xed,0x9e,0x0b,
+ 0x93,0x86,0x12,0x93, 0x36,0x5e,0x39,0xc4,
+ 0xf0,0xf4,0x48,0xdb, 0xeb,0x18,0x5e,0x50,
+ 0x71,0x30,0x83,0xe5, 0x0f,0xb1,0x73,0xa7,
+ 0xc6,0xf0,0xca,0x29, 0x0e,0xc4,0x07,0x5b,
+ 0x8b,0x09,0x68,0x68, 0x10,0x32,0x92,0x62,
+ 0x6a,0x6c,0x56,0x8b, 0x01,0x46,0x9a,0x20,
+ 0x89,0xe0,0x93,0x85, 0x8c,0x53,0x87,0xf6,
+ 0x02,0xd3,0x8d,0x72, 0x31,0x35,0xa1,0x34,
+ 0x63,0x70,0x61,0x80, 0x06,0xf1,0x54,0xb3,
+ 0x5d,0xdf,0xad,0x9c, 0x7e,0x3a,0xc2,0x8f,
+ 0x76,0x8b,0x4c,0x74, 0x2c,0x8c,0x6f,0x0a,
+ 0x60,0x13,0xa8,0xce, 0x4c,0x49,0x70,0x90,
+ 0x59,0x57,0xf5,0x7b, 0x03,0x94,0x37,0x87,
+ 0xfa,0xfe,0xeb,0xe7, 0x2d,0x01,0x45,0x69,
+ 0xb4,0x10,0x80,0x6d, 0x13,0x26,0xe3,0x9b,
+ 0x49,0x2a,0x0b,0xb1, 0x36,0xf9,0x62,0x63,
+ 0x33,0x2a,0xee,0x51, 0x5e,0x35,0xa4,0x2e,
+ 0x34,0xa1,0x77,0xac, 0x27,0x99,0x03,0xc6,
+ 0xe2,0x83,0x11,0x72, 0x77,0x30,0x8b,0xb7,
+ 0xde,0x1a,0xa1,0x4b, 0xa9,0x9c,0x07,0x02,
+ 0xf2,0xdc,0x06,0x45, 0xf2,0xab,0x31,0x46,
+ 0x50,0x25,0x34,0x54, 0xa8,0x06,0x88,0x6c,
+ 0xfc,0x88,0xb5,0xae, 0x30,0xbd,0xe1,0xe7,
+ 0xfe,0x51,0x46,0x05, 0x9a,0x29,0xd9,0x93,
+ 0x99,0x60,0x69,0x4a, 0x5c,0xb2,0x29,0x6b,
+ 0xa1,0xbb,0x9d,0xe4, 0x9b,0x7d,0x4a,0xe5,
+ 0x37,0xcb,0x16,0x6f, 0x44,0x93,0xe4,0x71,
+ 0x34,0x7b,0x54,0xec, 0x5b,0x2b,0xe0,0xf7,
+ 0x32,0xed,0x77,0xa6, 0xb3,0x7c,0x8d,0x1a,
+ 0xc0,0x57,0xbe,0x2b, 0x6d,0x7f,0xd7,0x35,
+ 0xe6,0x93,0xed,0x90, 0x26,0xfe,0x41,0xf3,
+ 0x58,0x55,0x03,0xb7, 0xb2,0x94,0xe2,0x0c,
+ 0x34,0xc3,0x06,0xc6, 0x9e,0x4b,0x17,0xc7,
+ 0xb9,0x58,0x23,0x58, 0xd3,0x73,0x18,0x5e,
+ 0xcf,0x28,0xac,0x90, 0xa0,0xba,0x35,0x90,
+ 0x96,0xb3,0xc7,0x6c, 0xe1,0x07,0xdf,0x5d,
+ 0xaa,0x2c,0xa6,0x6b, 0x82,0x2d,0x71,0x66,
+ 0xb7,0x76,0x37,0xdb, 0x39,0x7f,0x22,0x8f,
+ 0x38,0x70,0xd4,0xeb, 0xf8,0xf0,0x73,0xed,
+ 0xb6,0x67,0x75,0xaf, 0xd7,0x5d,0x01,0x01,
+ 0xc4,0xd6,0x7c,0xbc, 0xc3,0xe6,0xad,0x9a,
+ 0x9c,0x6a,0x43,0x9b, 0xfb,0x34,0x55,0x47,
+ 0xcd,0xeb,0x4e,0x2c, 0x29,0x6f,0xb0,0xeb,
+ 0xb5,0x08,0xdb,0x6b, 0x40,0x26,0x51,0x54,
+ 0x5a,0x97,0x64,0x74, 0x95,0xe6,0xae,0x8a,
+ 0x4c,0xe9,0x44,0x47, 0x85,0xd6,0xcf,0xe0,
+ 0x11,0x65,0x45,0xb3, 0xe1,0xfc,0x6a,0x01,
+ 0x38,0x40,0x8a,0x71, 0xc5,0xd6,0x64,0xa8,
+ 0x36,0x95,0x44,0x9c, 0x10,0x41,0xa3,0x71,
+ 0xb4,0x70,0x02,0xdf, 0xf9,0xad,0x2b,0xec,
+ 0x75,0xf7,0x09,0x6c, 0x5d,0x2a,0xd0,0x0b,
+ 0x2e,0xb3,0xf0,0xd3, 0xce,0xdb,0x26,0x80,
+ },
+ .h = {
+ 0x2d,0xb3,0x7e,0x73, 0xde,0x6a,0x9e,0xa9,
+ 0x54,0x9a,0x0f,0xb3, 0x0b,0xcc,0xc9,0xde,
+ 0x7a,0x4e,0x4a,0x71, 0x07,0x33,0xee,0x06,
+ 0x5c,0x9a,0xa1,0x30, 0x5e,0x39,0x4e,0x10,
+ },
+ },
+ [2] = { /* 1024-byte message */
+ .k = {
+ 0x4c,0xe4,0x3c,0x6e, 0xa0,0xe3,0x0e,0x64,
+ 0x35,0x44,0x3e,0x0b, 0x4d,0x29,0xbe,0x04,
+ 0xa7,0xaa,0x88,0xe0, 0xe0,0x07,0x7d,0xa8,
+ 0x2b,0x87,0x7d,0x08, 0xa6,0x59,0xd0,0xa5,
+ 0x03,0xae,0x9b,0xee, 0xd4,0x11,0x39,0x7d,
+ 0x9e,0x1d,0x89,0xe3, 0xc6,0x92,0x36,0x07,
+ 0xa4,0x43,0xad,0x2f, 0xd5,0x71,0x84,0x2d,
+ 0xc0,0x37,0xed,0x62, 0x4e,0x2b,0x8c,0xd5,
+ 0x1d,0xf7,0x00,0xbb, 0x3d,0x5e,0xcc,0xc5,
+ 0x6d,0xdd,0x17,0xf2, 0x89,0x25,0x30,0x16,
+ 0x04,0xd7,0x1f,0x84, 0x7d,0x61,0xa0,0x7a,
+ 0x49,0x88,0x44,0x46, 0xc6,0x05,0xd1,0xc9,
+ 0xa0,0x2a,0x86,0xdd, 0xd3,0x80,0x40,0xa4,
+ 0x28,0xb3,0xa4,0x3b, 0x71,0x0a,0x7f,0x2d,
+ 0x3b,0xcd,0xe6,0xac, 0x59,0xda,0x43,0x56,
+ 0x6e,0x9a,0x3f,0x1e, 0x82,0xcf,0xb3,0xa0,
+ 0xa1,0x46,0xcf,0x2e, 0x32,0x05,0xcd,0x68,
+ 0xbb,0x51,0x71,0x8a, 0x16,0x75,0xbe,0x49,
+ 0x7e,0xb3,0x63,0x30, 0x95,0x34,0xe6,0x85,
+ 0x7e,0x9a,0xdd,0xe6, 0x43,0xd6,0x59,0xf8,
+ 0x6a,0xb8,0x8f,0x5f, 0x5d,0xd9,0x55,0x41,
+ 0x12,0xf9,0x98,0xc6, 0x93,0x7c,0x3f,0x46,
+ 0xab,0x7c,0x8b,0x28, 0xde,0x9a,0xb1,0xf0,
+ 0x6c,0x43,0x2a,0xb3, 0x70,0xc5,0x9d,0xc0,
+ 0x26,0xcf,0xad,0x9c, 0x87,0x9b,0x3f,0x7c,
+ 0x24,0xac,0xe7,0xd4, 0xe8,0x14,0xe3,0x3e,
+ 0xf6,0x8a,0x97,0x87, 0x63,0x2c,0x88,0xdc,
+ 0xc5,0x23,0x68,0x6e, 0x94,0xe1,0x09,0xc4,
+ 0x44,0xda,0x8f,0xa7, 0x9f,0xc4,0x52,0xa4,
+ 0x18,0x1d,0x3c,0x08, 0xca,0x0a,0x3e,0xb4,
+ 0xbf,0xbe,0xc6,0x47, 0xe2,0x89,0x2b,0x07,
+ 0x71,0xd9,0xc8,0x6a, 0x06,0xd5,0xd0,0x47,
+ 0x4e,0x07,0x4f,0x6b, 0xdb,0xdf,0x3d,0xf0,
+ 0x7c,0x5f,0x49,0x70, 0x17,0x4f,0x9f,0x33,
+ 0x7e,0x4b,0x72,0x3b, 0x8c,0x68,0x22,0xf9,
+ 0xd2,0xad,0xe4,0xe4, 0xb2,0x61,0x9d,0xb8,
+ 0xc2,0x5c,0xf0,0x3b, 0x08,0xb2,0x75,0x30,
+ 0x3a,0xd0,0x7d,0xf9, 0xb2,0x00,0x40,0x56,
+ 0x79,0xe2,0x0d,0x31, 0x72,0xe2,0xc2,0xd1,
+ 0x2e,0x27,0xe7,0xc8, 0x96,0x1a,0xc6,0x7e,
+ 0xb8,0xc1,0x93,0xfb, 0x1d,0xbc,0xed,0x97,
+ 0x2f,0x2f,0xea,0xa1, 0x40,0x49,0xf6,0x1d,
+ 0xab,0x54,0x46,0x2e, 0x73,0xf2,0x74,0xf1,
+ 0x6d,0x5c,0xe6,0xa0, 0xd4,0x73,0x1c,0xbc,
+ 0x07,0x81,0xf5,0x94, 0xe6,0x18,0xdc,0x42,
+ 0x68,0xb9,0xeb,0xfb, 0xa3,0x76,0x8c,0x83,
+ 0x98,0xe9,0x96,0xa6, 0xa6,0x5e,0x0e,0xd1,
+ 0xfc,0xb7,0x8e,0x8b, 0x9e,0xa4,0x00,0x76,
+ 0x0e,0x35,0x92,0x5e, 0x05,0xa1,0x92,0xc4,
+ 0x0c,0xd1,0xec,0x8c, 0x04,0x8e,0x65,0x56,
+ 0x43,0xae,0x16,0x18, 0x2e,0x3e,0xfe,0x47,
+ 0x92,0xe1,0x76,0x1b, 0xb6,0xcc,0x0b,0x82,
+ 0xe1,0x8c,0x7b,0x43, 0xe4,0x90,0xed,0x28,
+ 0x0b,0xe6,0x05,0xea, 0x4a,0xc0,0xf1,0x12,
+ 0x54,0x09,0x93,0xda, 0xfc,0xf4,0x86,0xff,
+ 0x4c,0xaa,0x7d,0xbe, 0xd0,0x4a,0xa6,0x9d,
+ 0x6b,0x27,0x8f,0xb1, 0xb5,0x3a,0x9b,0xce,
+ 0xe2,0x5c,0x29,0x35, 0xd6,0xe7,0xf3,0xa4,
+ 0x5e,0x70,0xf6,0xc6, 0xde,0x63,0x86,0xf7,
+ 0xc9,0xab,0x42,0xb9, 0xe7,0x5d,0x1c,0x68,
+ 0x73,0xa3,0xed,0xb0, 0xa0,0xb6,0x18,0x15,
+ 0xe6,0x57,0x4c,0x21, 0xf7,0xf3,0xc6,0x32,
+ 0x4d,0x07,0x4a,0x14, 0xde,0xb2,0xc7,0xca,
+ 0xf0,0x78,0xc4,0x85, 0xe3,0xdc,0xfb,0x35,
+ 0x7c,0x6b,0xc0,0xb8, 0xcd,0x7a,0x22,0xfc,
+ 0xe4,0xe8,0xe2,0x98, 0x6c,0x8e,0xdf,0x37,
+ 0x8e,0x0f,0x25,0x23, 0xdd,0xea,0x40,0x6f,
+ 0xb3,0x07,0x7e,0x7a, 0x6b,0xa1,0xa1,0xcf,
+ 0x24,0xd9,0xad,0x72, 0x7a,0x45,0x49,0xca,
+ 0xfe,0xc7,0x2e,0x6d, 0xaa,0xc1,0x08,0x2c,
+ 0xe6,0xde,0xde,0x73, 0x01,0x9c,0xdc,0x65,
+ 0x3a,0xdf,0xc6,0x15, 0x37,0x62,0x0b,0x2c,
+ 0x9a,0x36,0xed,0x37, 0xd9,0xfc,0xa9,0xb3,
+ 0x32,0xc3,0xde,0x26, 0xe7,0xf0,0x3f,0x02,
+ 0xed,0x35,0x74,0xea, 0xdd,0x32,0xe9,0x96,
+ 0x75,0x66,0xb8,0xf0, 0x75,0x98,0x8f,0x3a,
+ 0xd0,0xc2,0xa1,0x98, 0x5f,0xf9,0x32,0x31,
+ 0x00,0x18,0x7d,0xc5, 0x9d,0x15,0x5b,0xdc,
+ 0x13,0x37,0x69,0xfc, 0x95,0x7a,0x62,0x0e,
+ 0x8a,0x86,0xed,0x18, 0x78,0x3c,0x49,0xf4,
+ 0x18,0x73,0xcd,0x2e, 0x7b,0xa3,0x40,0xd7,
+ 0x01,0xf6,0xc7,0x2a, 0xc5,0xce,0x13,0x09,
+ 0xb1,0xe5,0x25,0x17, 0xdf,0x9d,0x7e,0x0b,
+ 0x50,0x46,0x62,0x78, 0xb5,0x25,0xb2,0xd9,
+ 0x65,0xfa,0x5b,0xf7, 0xfe,0xc6,0xe0,0x7b,
+ 0x7b,0x4e,0x14,0x2e, 0x0d,0x3a,0xd0,0xe0,
+ 0xa0,0xd2,0xeb,0x4d, 0x87,0x11,0x42,0x28,
+ 0x02,0x7e,0xa8,0x56, 0x5b,0x53,0xbd,0x76,
+ 0x47,0x8f,0x5f,0x8b, 0xc7,0xd9,0x72,0xf7,
+ 0x11,0xbb,0x94,0xdb, 0x0d,0x07,0xb7,0x0a,
+ 0xcc,0x41,0x00,0xcd, 0xd0,0x50,0x25,0x31,
+ 0xc9,0x47,0x6b,0xdd, 0x3f,0x70,0x24,0x3e,
+ 0xde,0x02,0x62,0x6c, 0xb4,0x44,0x92,0x8e,
+ 0x98,0x9c,0x0e,0x30, 0x2f,0x80,0xb9,0x5e,
+ 0x75,0x90,0xa6,0x02, 0xf0,0xed,0xb0,0x8b,
+ 0x44,0xa3,0x59,0x2d, 0xc3,0x08,0xe5,0xd9,
+ 0x89,0x6a,0x71,0x44, 0x04,0xc4,0xb2,0x61,
+ 0x5b,0xf5,0x46,0x44, 0xdc,0x36,0x2e,0xfd,
+ 0x41,0xf5,0xa1,0x3a, 0xb3,0x93,0x74,0x7d,
+ 0x54,0x5e,0x64,0xdc, 0xbc,0xd7,0x07,0x48,
+ 0x3e,0x73,0x81,0x22, 0x9c,0x5a,0xf6,0xde,
+ 0x94,0x42,0xe1,0x6c, 0x92,0xe7,0x6d,0xa0,
+ 0x5e,0xc3,0xd6,0xe9, 0x84,0xd9,0xba,0x57,
+ 0xef,0x85,0x6a,0x9b, 0xe6,0x9a,0x2b,0xf8,
+ 0x8d,0xfe,0x9d,0xad, 0x70,0x26,0x05,0x14,
+ 0x45,0x07,0xcb,0x72, 0xd4,0x8b,0x14,0x44,
+ 0x74,0x40,0x9c,0x29, 0x8b,0xba,0x40,0x09,
+ 0x52,0xfc,0xc5,0x40, 0xb1,0x25,0x69,0xaa,
+ 0x8f,0x12,0xc4,0xc6, 0x2b,0x3f,0x73,0x9d,
+ 0xff,0x52,0xd4,0xac, 0x77,0x43,0xdc,0xd2,
+ 0x06,0x9a,0x1b,0xfc, 0x0c,0x8f,0x6b,0x59,
+ 0xa5,0xd4,0xde,0x06, 0x16,0x34,0xef,0x75,
+ 0x22,0x54,0x9c,0x53, 0x38,0x0b,0x57,0xc7,
+ 0xaa,0x78,0x2d,0x3a, 0x9b,0xdd,0xed,0xb5,
+ 0x0b,0xb0,0x08,0x5f, 0x57,0xdb,0xfc,0xbe,
+ 0x44,0xfd,0x71,0x5f, 0x71,0x14,0xd5,0x14,
+ 0x70,0xb6,0xee,0xd0, 0xf3,0x37,0x6f,0x57,
+ 0x55,0x3c,0x7c,0x23, 0x6f,0xbe,0x83,0x5c,
+ 0xb5,0x64,0xfd,0x6d, 0x7c,0xe4,0x05,0x2b,
+ 0xdb,0xc4,0xf5,0xa0, 0xd3,0xa6,0x15,0x48,
+ 0xc2,0x50,0xf8,0xf7, 0xc2,0xab,0xb5,0x6a,
+ 0x0d,0x1a,0xb5,0x30, 0x33,0xf8,0x12,0x2d,
+ 0xfb,0xa6,0x2e,0xe5, 0xbe,0x40,0xba,0x48,
+ 0xef,0x05,0xc8,0x37, 0x3a,0x36,0xad,0x99,
+ 0x77,0x87,0x84,0xac, 0xd8,0xcb,0x7a,0x88,
+ 0x3e,0x2d,0x8b,0xbe, 0x9a,0x35,0x88,0x26,
+ 0xe9,0x20,0xd4,0x66, 0x80,0x8b,0xf8,0x54,
+ 0xba,0xcd,0xa8,0x47, 0x35,0x1b,0xc4,0x09,
+ 0x6d,0xff,0x0e,0x60, 0x7c,0xf3,0x68,0xbf,
+ 0xe3,0xe9,0x73,0x07, 0x84,0xf0,0x08,0x45,
+ 0x97,0x65,0x94,0xd1, 0x35,0x4e,0x67,0x0c,
+ 0xe3,0xb7,0x61,0x7b, 0x09,0x22,0xed,0x18,
+ 0xee,0x0b,0x54,0xc0, 0xab,0x8b,0xaa,0x71,
+ 0x4c,0x40,0xbf,0xf7, 0xe0,0x7e,0x08,0xaa,
+ },
+ .mlen = 1024,
+ .m = {
+ 0x1d,0xea,0xe5,0x2b, 0x4c,0x22,0x4d,0xf3,
+ 0x15,0x53,0xcb,0x41, 0xf5,0xcf,0x0b,0x7b,
+ 0xc9,0x80,0xc0,0x95, 0xd2,0x7b,0x08,0x4b,
+ 0x3d,0xcd,0xd8,0x3b, 0x2f,0x18,0xd4,0x70,
+ 0x38,0xb2,0xa7,0x2f, 0x7f,0xba,0xd8,0xed,
+ 0xbc,0x8f,0xac,0xe4, 0xe2,0x11,0x2d,0x6d,
+ 0xe6,0xa4,0x36,0x90, 0xc2,0x7f,0xdf,0xe3,
+ 0xdc,0x50,0xdb,0x6c, 0x56,0xcf,0x7d,0xd6,
+ 0xd0,0xcb,0xd6,0x9b, 0x01,0xbb,0xef,0x1c,
+ 0x0a,0x6c,0x92,0x23, 0xeb,0x77,0xf9,0xd1,
+ 0x25,0xdc,0x94,0x30, 0x30,0xa4,0x96,0x3e,
+ 0xdf,0x52,0x4c,0xe7, 0xdf,0x27,0x9f,0x73,
+ 0x78,0x0c,0x8c,0x7f, 0x9d,0xae,0x79,0x5d,
+ 0x91,0x5e,0x4b,0x02, 0xa9,0x31,0x9c,0xff,
+ 0x46,0x73,0xec,0x0d, 0x5a,0xb8,0xeb,0x48,
+ 0x19,0x9c,0x44,0xe0, 0xc8,0x81,0x96,0x4c,
+ 0x47,0x0c,0xe7,0x1d, 0x2a,0x9c,0xd5,0xe0,
+ 0xe7,0xd6,0xa0,0x88, 0xf0,0xf6,0xda,0xa7,
+ 0x6a,0xdd,0xfd,0x4f, 0x00,0x6e,0x25,0x7d,
+ 0xb9,0x81,0x19,0x2f, 0x4e,0xcc,0x8d,0x6e,
+ 0xa6,0x92,0xcf,0xd8, 0x6e,0x78,0x0a,0xf6,
+ 0x8a,0x43,0xeb,0x60, 0x0c,0x8b,0x93,0x50,
+ 0x88,0xd1,0x67,0x05, 0x0c,0xdc,0x43,0x85,
+ 0x50,0x91,0x63,0xa4, 0x32,0x14,0x66,0x84,
+ 0xdb,0x04,0x9f,0x77, 0x95,0x60,0x19,0xc6,
+ 0x98,0x60,0x62,0xe4, 0xc6,0xee,0x70,0x76,
+ 0xb0,0x59,0x80,0x59, 0x46,0xae,0x99,0x26,
+ 0x62,0x4a,0xf0,0x45, 0x8f,0xf0,0x70,0x5b,
+ 0x52,0xfc,0xee,0x4d, 0x30,0x47,0xc8,0xae,
+ 0xe2,0xbc,0x2c,0x73, 0x78,0x67,0xf1,0x00,
+ 0xb4,0xda,0x01,0xad, 0x3b,0xc4,0x5c,0x6c,
+ 0x65,0xca,0x84,0x22, 0x95,0x32,0x95,0x20,
+ 0x4d,0xdc,0x96,0x2e, 0x61,0xe4,0xc8,0xec,
+ 0x2d,0xbf,0xc1,0x5d, 0x70,0xf9,0x75,0xf2,
+ 0xad,0x0a,0xc9,0xd7, 0x0a,0x81,0x3c,0xa1,
+ 0x13,0xec,0x63,0xd4, 0xd0,0x67,0xf4,0xcc,
+ 0x6e,0xb8,0x52,0x08, 0x46,0xc9,0x2a,0x92,
+ 0x59,0xd9,0x14,0x17, 0xde,0x2f,0xc7,0x36,
+ 0xd5,0xd5,0xfc,0x8a, 0x63,0xd5,0x5f,0xe3,
+ 0xdd,0x55,0x00,0x8e, 0x5e,0xc9,0xed,0x04,
+ 0x1d,0xeb,0xae,0xc5, 0xd0,0xf9,0x73,0x28,
+ 0xf3,0x81,0xd5,0xb4, 0x60,0xb2,0x42,0x81,
+ 0x68,0xf3,0xb9,0x73, 0x07,0x2e,0x34,0x8e,
+ 0x47,0x12,0xae,0x7c, 0xa8,0xc2,0xce,0xad,
+ 0x0f,0x6e,0x44,0xa5, 0x35,0x5e,0x61,0x6b,
+ 0xfc,0x67,0x9c,0x82, 0xa1,0xd2,0xff,0xfe,
+ 0x60,0x7c,0x40,0x02, 0x24,0x9e,0x8b,0x90,
+ 0xa0,0x89,0xd9,0x83, 0x04,0xd8,0xef,0x9c,
+ 0x96,0x28,0x77,0x3e, 0xe3,0xb0,0xf8,0x3d,
+ 0xfb,0x91,0x8f,0x6f, 0x83,0x58,0x1e,0x4b,
+ 0x64,0xc7,0xf6,0xe0, 0x85,0x03,0xe3,0xf9,
+ 0x6b,0xc9,0x9e,0x9d, 0x57,0x25,0xe4,0x69,
+ 0x08,0x59,0x28,0x4a, 0x52,0x9c,0x49,0x19,
+ 0x24,0x49,0xba,0xb1, 0x82,0xd4,0xcf,0xd0,
+ 0x1e,0x1d,0xc2,0x02, 0x42,0x4e,0xdf,0xf7,
+ 0x2b,0x3d,0x99,0xf6, 0x99,0xa4,0x3a,0xe1,
+ 0x9d,0x68,0xc8,0x08, 0xec,0xec,0x1c,0xa8,
+ 0x41,0x4a,0x27,0x84, 0xe9,0x0d,0x95,0x54,
+ 0x1a,0xca,0x5f,0x5d, 0x5a,0x96,0xb9,0x5b,
+ 0x6e,0xbc,0x39,0x7f, 0x7a,0x20,0xc5,0xb2,
+ 0x60,0x0c,0xa3,0x78, 0xc3,0x2b,0x87,0xcc,
+ 0xea,0xb0,0x4d,0x27, 0xfb,0x6c,0x58,0x51,
+ 0xce,0x90,0xca,0xd6, 0x86,0x91,0x4d,0x2c,
+ 0x8c,0x82,0xf0,0xc9, 0x9a,0x0a,0x73,0xb3,
+ 0xcb,0xa9,0xd4,0x26, 0x4d,0x74,0xbe,0x0e,
+ 0x4a,0x6e,0x10,0xeb, 0x4e,0xba,0x4e,0xba,
+ 0x0d,0x26,0x69,0x87, 0x5e,0x08,0x2b,0x43,
+ 0xbe,0x97,0x4e,0x2a, 0x63,0xbc,0x52,0xb7,
+ 0xda,0x23,0x23,0x11, 0xfa,0xcf,0x89,0xac,
+ 0x90,0x5f,0x60,0x7a, 0x50,0xb7,0xbe,0x79,
+ 0x0b,0x2c,0xf0,0x27, 0xf0,0xfb,0xaf,0x64,
+ 0xc8,0x57,0x7c,0xeb, 0x1c,0xf7,0x36,0xec,
+ 0x09,0x97,0x66,0x31, 0x54,0xe4,0x00,0xcf,
+ 0x68,0x24,0x77,0x1a, 0xbc,0x27,0x3a,0xad,
+ 0x8a,0x01,0x7e,0x45, 0xe7,0xe4,0xa4,0xeb,
+ 0x38,0x62,0x9d,0x90, 0xea,0x00,0x9c,0x03,
+ 0x5e,0xb2,0x7d,0xd8, 0x2f,0xe9,0xc9,0x3c,
+ 0x1a,0x5c,0x21,0x1a, 0x59,0x45,0x62,0x47,
+ 0x93,0x1b,0xdc,0xd8, 0x3e,0x07,0x8b,0x75,
+ 0xd0,0x6d,0xcc,0x8d, 0xec,0x79,0xa8,0x9a,
+ 0x51,0xa5,0x50,0x18, 0xae,0x44,0x93,0x75,
+ 0xc1,0xc8,0x1e,0x10, 0x59,0x1e,0x0b,0xb3,
+ 0x06,0x30,0xa8,0x66, 0x8d,0x8e,0xd6,0x4d,
+ 0x0d,0x8a,0xb4,0x28, 0xdc,0xfb,0x5d,0x59,
+ 0xe0,0x92,0x77,0x38, 0xfa,0xad,0x46,0x46,
+ 0x25,0x15,0x4c,0xca, 0x09,0x2b,0x31,0xe9,
+ 0x36,0xe8,0xc2,0x67, 0x34,0x4d,0x5e,0xa0,
+ 0x8f,0x9a,0xe8,0x7f, 0xf2,0x2a,0x92,0x78,
+ 0xde,0x09,0x75,0xe7, 0xe5,0x50,0x0a,0x2e,
+ 0x88,0x63,0xc0,0x8f, 0xa8,0x73,0x0f,0xe5,
+ 0x1e,0x9d,0xdb,0xce, 0x53,0xe0,0x42,0x94,
+ 0x7b,0x5c,0xa1,0x5e, 0x1e,0x8f,0x0a,0x6e,
+ 0x8b,0x1a,0xad,0x93, 0x70,0x86,0xf1,0x69,
+ 0x70,0x93,0x24,0xe3, 0x83,0x2f,0xa8,0x04,
+ 0xba,0x27,0x0a,0x2e, 0x03,0xeb,0x69,0xd9,
+ 0x56,0x0e,0xc4,0x10, 0x55,0x31,0x2c,0x3f,
+ 0xd1,0xb2,0x94,0x0f, 0x28,0x15,0x3c,0x02,
+ 0x15,0x5e,0xec,0x26, 0x9c,0xc3,0xfc,0xa7,
+ 0x5c,0xb0,0xfa,0xc0, 0x02,0xf9,0x01,0x3f,
+ 0x01,0x73,0x24,0x22, 0x50,0x28,0x2a,0xca,
+ 0xb1,0xf2,0x03,0x00, 0x2f,0xc6,0x6f,0x28,
+ 0x4f,0x4b,0x4f,0x1a, 0x9a,0xb8,0x16,0x93,
+ 0x31,0x60,0x7c,0x3d, 0x35,0xc8,0xd6,0x90,
+ 0xde,0x8c,0x89,0x39, 0xbd,0x21,0x11,0x05,
+ 0xe8,0xc4,0x04,0x3b, 0x65,0xa5,0x15,0xcf,
+ 0xcf,0x15,0x14,0xf6, 0xe7,0x2e,0x3c,0x47,
+ 0x59,0x0b,0xaa,0xc0, 0xd4,0xab,0x04,0x14,
+ 0x9c,0xd7,0xe2,0x43, 0xc7,0x87,0x09,0x03,
+ 0x27,0xd2,0x0a,0xff, 0x8d,0xd5,0x80,0x34,
+ 0x93,0xa2,0x2c,0xb1, 0x4e,0x16,0x2d,0x82,
+ 0x51,0x5c,0x3c,0xe5, 0x75,0x51,0x7b,0xb4,
+ 0xd8,0x1e,0x59,0x98, 0x0f,0x75,0xed,0x02,
+ 0x1c,0x13,0xf6,0x02, 0xda,0xf9,0x47,0xf7,
+ 0x45,0x25,0x0f,0x58, 0x22,0x5d,0xef,0xf0,
+ 0x1b,0xdb,0xae,0xaf, 0xbe,0xc6,0xe1,0xcd,
+ 0x70,0x46,0x6e,0x03, 0x9a,0x20,0x77,0x00,
+ 0x3c,0x32,0xb5,0x8f, 0x04,0xb6,0x6f,0xa2,
+ 0x31,0xc9,0x7c,0xf9, 0x84,0x67,0x87,0xfb,
+ 0x7b,0x13,0xb0,0x4d, 0x35,0xfd,0x37,0x5b,
+ 0xf4,0x25,0xf0,0x02, 0x74,0xa0,0x69,0xd4,
+ 0x53,0x61,0x4b,0x54, 0x68,0x94,0x0e,0x08,
+ 0x25,0x82,0x90,0xfc, 0x25,0xb6,0x63,0xe2,
+ 0x07,0x9f,0x42,0xf1, 0xbb,0x33,0xea,0xab,
+ 0x92,0x54,0x2b,0x9f, 0x88,0xc0,0x31,0x2b,
+ 0xfd,0x36,0x50,0x80, 0xfc,0x1a,0xff,0xab,
+ 0xe8,0xc4,0x7f,0xb6, 0x98,0xb9,0x2e,0x17,
+ 0xca,0x28,0x3d,0xdf, 0x0f,0x07,0x43,0x20,
+ 0xf0,0x07,0xea,0xe5, 0xcd,0x4e,0x81,0x34,
+ },
+ .h = {
+ 0x9d,0x22,0x88,0xfd, 0x41,0x43,0x88,0x45,
+ 0x34,0xfe,0x85,0xc4, 0xb9,0xff,0xe1,0x55,
+ 0x40,0x1d,0x25,0x37, 0xd1,0xf8,0xfc,0x2b,
+ 0x3a,0xf5,0x3b,0x69, 0xbf,0xa6,0x9d,0xed,
+ },
+ },
+ };
+ static uint32_t k[268];
+ uint8_t h[32];
+ unsigned i, j;
+ int result = 0;
+
+ for (i = 0; i < __arraycount(C); i++) {
+ for (j = 0; j < 268; j++)
+ k[j] = le32dec(C[i].k + 4*j);
+ nh(h, C[i].m, C[i].mlen, k);
+ if (memcmp(h, C[i].h, 32)) {
+ char prefix[10];
+ snprintf(prefix, sizeof prefix, "nh %u", i);
+ hexdump(printf, prefix, h, 32);
+ result = -1;
+ }
+ }
+
+ return result;
+}
+
+/* https://github.com/google/adiantum/blob/a5ad5134ab11b10a3ee982c52385953fac88fedc/test_vectors/ours/NHPoly1305/NHPoly1305.json */
+static int
+nhpoly1305_selftest(void)
+{
+ static const struct {
+ uint8_t k[1088];
+ unsigned mlen;
+ uint8_t m[1024];
+ uint8_t h[16];
+ } C[] = {
+ [0] = { /* 0-byte message */
+ .k = {
+ /* Poly1305 key */
+ 0xd2,0x5d,0x4c,0xdd, 0x8d,0x2b,0x7f,0x7a,
+ 0xd9,0xbe,0x71,0xec, 0xd1,0x83,0x52,0xe3,
+
+ /* NH key */
+ 0xe1,0xad,0xd7,0x5c, 0x0a,0x75,0x9d,0xec,
+ 0x1d,0x13,0x7e,0x5d, 0x71,0x07,0xc9,0xe4,
+ 0x57,0x2d,0x44,0x68, 0xcf,0xd8,0xd6,0xc5,
+ 0x39,0x69,0x7d,0x32, 0x75,0x51,0x4f,0x7e,
+ 0xb2,0x4c,0xc6,0x90, 0x51,0x6e,0xd9,0xd6,
+ 0xa5,0x8b,0x2d,0xf1, 0x94,0xf9,0xf7,0x5e,
+ 0x2c,0x84,0x7b,0x41, 0x0f,0x88,0x50,0x89,
+ 0x30,0xd9,0xa1,0x38, 0x46,0x6c,0xc0,0x4f,
+ 0xe8,0xdf,0xdc,0x66, 0xab,0x24,0x43,0x41,
+ 0x91,0x55,0x29,0x65, 0x86,0x28,0x5e,0x45,
+ 0xd5,0x2d,0xb7,0x80, 0x08,0x9a,0xc3,0xd4,
+ 0x9a,0x77,0x0a,0xd4, 0xef,0x3e,0xe6,0x3f,
+ 0x6f,0x2f,0x9b,0x3a, 0x7d,0x12,0x1e,0x80,
+ 0x6c,0x44,0xa2,0x25, 0xe1,0xf6,0x60,0xe9,
+ 0x0d,0xaf,0xc5,0x3c, 0xa5,0x79,0xae,0x64,
+ 0xbc,0xa0,0x39,0xa3, 0x4d,0x10,0xe5,0x4d,
+ 0xd5,0xe7,0x89,0x7a, 0x13,0xee,0x06,0x78,
+ 0xdc,0xa4,0xdc,0x14, 0x27,0xe6,0x49,0x38,
+ 0xd0,0xe0,0x45,0x25, 0x36,0xc5,0xf4,0x79,
+ 0x2e,0x9a,0x98,0x04, 0xe4,0x2b,0x46,0x52,
+ 0x7c,0x33,0xca,0xe2, 0x56,0x51,0x50,0xe2,
+ 0xa5,0x9a,0xae,0x18, 0x6a,0x13,0xf8,0xd2,
+ 0x21,0x31,0x66,0x02, 0xe2,0xda,0x8d,0x7e,
+ 0x41,0x19,0xb2,0x61, 0xee,0x48,0x8f,0xf1,
+ 0x65,0x24,0x2e,0x1e, 0x68,0xce,0x05,0xd9,
+ 0x2a,0xcf,0xa5,0x3a, 0x57,0xdd,0x35,0x91,
+ 0x93,0x01,0xca,0x95, 0xfc,0x2b,0x36,0x04,
+ 0xe6,0x96,0x97,0x28, 0xf6,0x31,0xfe,0xa3,
+ 0x9d,0xf6,0x6a,0x1e, 0x80,0x8d,0xdc,0xec,
+ 0xaf,0x66,0x11,0x13, 0x02,0x88,0xd5,0x27,
+ 0x33,0xb4,0x1a,0xcd, 0xa3,0xf6,0xde,0x31,
+ 0x8e,0xc0,0x0e,0x6c, 0xd8,0x5a,0x97,0x5e,
+ 0xdd,0xfd,0x60,0x69, 0x38,0x46,0x3f,0x90,
+ 0x5e,0x97,0xd3,0x32, 0x76,0xc7,0x82,0x49,
+ 0xfe,0xba,0x06,0x5f, 0x2f,0xa2,0xfd,0xff,
+ 0x80,0x05,0x40,0xe4, 0x33,0x03,0xfb,0x10,
+ 0xc0,0xde,0x65,0x8c, 0xc9,0x8d,0x3a,0x9d,
+ 0xb5,0x7b,0x36,0x4b, 0xb5,0x0c,0xcf,0x00,
+ 0x9c,0x87,0xe4,0x49, 0xad,0x90,0xda,0x4a,
+ 0xdd,0xbd,0xff,0xe2, 0x32,0x57,0xd6,0x78,
+ 0x36,0x39,0x6c,0xd3, 0x5b,0x9b,0x88,0x59,
+ 0x2d,0xf0,0x46,0xe4, 0x13,0x0e,0x2b,0x35,
+ 0x0d,0x0f,0x73,0x8a, 0x4f,0x26,0x84,0x75,
+ 0x88,0x3c,0xc5,0x58, 0x66,0x18,0x1a,0xb4,
+ 0x64,0x51,0x34,0x27, 0x1b,0xa4,0x11,0xc9,
+ 0x6d,0x91,0x8a,0xfa, 0x32,0x60,0x9d,0xd7,
+ 0x87,0xe5,0xaa,0x43, 0x72,0xf8,0xda,0xd1,
+ 0x48,0x44,0x13,0x61, 0xdc,0x8c,0x76,0x17,
+ 0x0c,0x85,0x4e,0xf3, 0xdd,0xa2,0x42,0xd2,
+ 0x74,0xc1,0x30,0x1b, 0xeb,0x35,0x31,0x29,
+ 0x5b,0xd7,0x4c,0x94, 0x46,0x35,0xa1,0x23,
+ 0x50,0xf2,0xa2,0x8e, 0x7e,0x4f,0x23,0x4f,
+ 0x51,0xff,0xe2,0xc9, 0xa3,0x7d,0x56,0x8b,
+ 0x41,0xf2,0xd0,0xc5, 0x57,0x7e,0x59,0xac,
+ 0xbb,0x65,0xf3,0xfe, 0xf7,0x17,0xef,0x63,
+ 0x7c,0x6f,0x23,0xdd, 0x22,0x8e,0xed,0x84,
+ 0x0e,0x3b,0x09,0xb3, 0xf3,0xf4,0x8f,0xcd,
+ 0x37,0xa8,0xe1,0xa7, 0x30,0xdb,0xb1,0xa2,
+ 0x9c,0xa2,0xdf,0x34, 0x17,0x3e,0x68,0x44,
+ 0xd0,0xde,0x03,0x50, 0xd1,0x48,0x6b,0x20,
+ 0xe2,0x63,0x45,0xa5, 0xea,0x87,0xc2,0x42,
+ 0x95,0x03,0x49,0x05, 0xed,0xe0,0x90,0x29,
+ 0x1a,0xb8,0xcf,0x9b, 0x43,0xcf,0x29,0x7a,
+ 0x63,0x17,0x41,0x9f, 0xe0,0xc9,0x10,0xfd,
+ 0x2c,0x56,0x8c,0x08, 0x55,0xb4,0xa9,0x27,
+ 0x0f,0x23,0xb1,0x05, 0x6a,0x12,0x46,0xc7,
+ 0xe1,0xfe,0x28,0x93, 0x93,0xd7,0x2f,0xdc,
+ 0x98,0x30,0xdb,0x75, 0x8a,0xbe,0x97,0x7a,
+ 0x02,0xfb,0x8c,0xba, 0xbe,0x25,0x09,0xbe,
+ 0xce,0xcb,0xa2,0xef, 0x79,0x4d,0x0e,0x9d,
+ 0x1b,0x9d,0xb6,0x39, 0x34,0x38,0xfa,0x07,
+ 0xec,0xe8,0xfc,0x32, 0x85,0x1d,0xf7,0x85,
+ 0x63,0xc3,0x3c,0xc0, 0x02,0x75,0xd7,0x3f,
+ 0xb2,0x68,0x60,0x66, 0x65,0x81,0xc6,0xb1,
+ 0x42,0x65,0x4b,0x4b, 0x28,0xd7,0xc7,0xaa,
+ 0x9b,0xd2,0xdc,0x1b, 0x01,0xe0,0x26,0x39,
+ 0x01,0xc1,0x52,0x14, 0xd1,0x3f,0xb7,0xe6,
+ 0x61,0x41,0xc7,0x93, 0xd2,0xa2,0x67,0xc6,
+ 0xf7,0x11,0xb5,0xf5, 0xea,0xdd,0x19,0xfb,
+ 0x4d,0x21,0x12,0xd6, 0x7d,0xf1,0x10,0xb0,
+ 0x89,0x07,0xc7,0x5a, 0x52,0x73,0x70,0x2f,
+ 0x32,0xef,0x65,0x2b, 0x12,0xb2,0xf0,0xf5,
+ 0x20,0xe0,0x90,0x59, 0x7e,0x64,0xf1,0x4c,
+ 0x41,0xb3,0xa5,0x91, 0x08,0xe6,0x5e,0x5f,
+ 0x05,0x56,0x76,0xb4, 0xb0,0xcd,0x70,0x53,
+ 0x10,0x48,0x9c,0xff, 0xc2,0x69,0x55,0x24,
+ 0x87,0xef,0x84,0xea, 0xfb,0xa7,0xbf,0xa0,
+ 0x91,0x04,0xad,0x4f, 0x8b,0x57,0x54,0x4b,
+ 0xb6,0xe9,0xd1,0xac, 0x37,0x2f,0x1d,0x2e,
+ 0xab,0xa5,0xa4,0xe8, 0xff,0xfb,0xd9,0x39,
+ 0x2f,0xb7,0xac,0xd1, 0xfe,0x0b,0x9a,0x80,
+ 0x0f,0xb6,0xf4,0x36, 0x39,0x90,0x51,0xe3,
+ 0x0a,0x2f,0xb6,0x45, 0x76,0x89,0xcd,0x61,
+ 0xfe,0x48,0x5f,0x75, 0x1d,0x13,0x00,0x62,
+ 0x80,0x24,0x47,0xe7, 0xbc,0x37,0xd7,0xe3,
+ 0x15,0xe8,0x68,0x22, 0xaf,0x80,0x6f,0x4b,
+ 0xa8,0x9f,0x01,0x10, 0x48,0x14,0xc3,0x02,
+ 0x52,0xd2,0xc7,0x75, 0x9b,0x52,0x6d,0x30,
+ 0xac,0x13,0x85,0xc8, 0xf7,0xa3,0x58,0x4b,
+ 0x49,0xf7,0x1c,0x45, 0x55,0x8c,0x39,0x9a,
+ 0x99,0x6d,0x97,0x27, 0x27,0xe6,0xab,0xdd,
+ 0x2c,0x42,0x1b,0x35, 0xdd,0x9d,0x73,0xbb,
+ 0x6c,0xf3,0x64,0xf1, 0xfb,0xb9,0xf7,0xe6,
+ 0x4a,0x3c,0xc0,0x92, 0xc0,0x2e,0xb7,0x1a,
+ 0xbe,0xab,0xb3,0x5a, 0xe5,0xea,0xb1,0x48,
+ 0x58,0x13,0x53,0x90, 0xfd,0xc3,0x8e,0x54,
+ 0xf9,0x18,0x16,0x73, 0xe8,0xcb,0x6d,0x39,
+ 0x0e,0xd7,0xe0,0xfe, 0xb6,0x9f,0x43,0x97,
+ 0xe8,0xd0,0x85,0x56, 0x83,0x3e,0x98,0x68,
+ 0x7f,0xbd,0x95,0xa8, 0x9a,0x61,0x21,0x8f,
+ 0x06,0x98,0x34,0xa6, 0xc8,0xd6,0x1d,0xf3,
+ 0x3d,0x43,0xa4,0x9a, 0x8c,0xe5,0xd3,0x5a,
+ 0x32,0xa2,0x04,0x22, 0xa4,0x19,0x1a,0x46,
+ 0x42,0x7e,0x4d,0xe5, 0xe0,0xe6,0x0e,0xca,
+ 0xd5,0x58,0x9d,0x2c, 0xaf,0xda,0x33,0x5c,
+ 0xb0,0x79,0x9e,0xc9, 0xfc,0xca,0xf0,0x2f,
+ 0xa8,0xb2,0x77,0xeb, 0x7a,0xa2,0xdd,0x37,
+ 0x35,0x83,0x07,0xd6, 0x02,0x1a,0xb6,0x6c,
+ 0x24,0xe2,0x59,0x08, 0x0e,0xfd,0x3e,0x46,
+ 0xec,0x40,0x93,0xf4, 0x00,0x26,0x4f,0x2a,
+ 0xff,0x47,0x2f,0xeb, 0x02,0x92,0x26,0x5b,
+ 0x53,0x17,0xc2,0x8d, 0x2a,0xc7,0xa3,0x1b,
+ 0xcd,0xbc,0xa7,0xe8, 0xd1,0x76,0xe3,0x80,
+ 0x21,0xca,0x5d,0x3b, 0xe4,0x9c,0x8f,0xa9,
+ 0x5b,0x7f,0x29,0x7f, 0x7c,0xd8,0xed,0x6d,
+ 0x8c,0xb2,0x86,0x85, 0xe7,0x77,0xf2,0x85,
+ 0xab,0x38,0xa9,0x9d, 0xc1,0x4e,0xc5,0x64,
+ 0x33,0x73,0x8b,0x59, 0x03,0xad,0x05,0xdf,
+ 0x25,0x98,0x31,0xde, 0xef,0x13,0xf1,0x9b,
+ 0x3c,0x91,0x9d,0x7b, 0xb1,0xfa,0xe6,0xbf,
+ 0x5b,0xed,0xa5,0x55, 0xe6,0xea,0x6c,0x74,
+ 0xf4,0xb9,0xe4,0x45, 0x64,0x72,0x81,0xc2,
+ 0x4c,0x28,0xd4,0xcd, 0xac,0xe2,0xde,0xf9,
+ 0xeb,0x5c,0xeb,0x61, 0x60,0x5a,0xe5,0x28,
+ },
+ .mlen = 0,
+ .h = {0},
+ },
+ [1] = { /* 16-byte message */
+ .k = {
+ /* Poly1305 key */
+ 0x29,0x21,0x43,0xcb, 0xcb,0x13,0x07,0xde,
+ 0xbf,0x48,0xdf,0x8a, 0x7f,0xa2,0x84,0xde,
+
+ /* NH key */
+ 0x72,0x23,0x9d,0xf5, 0xf0,0x07,0xf2,0x4c,
+ 0x20,0x3a,0x93,0xb9, 0xcd,0x5d,0xfe,0xcb,
+ 0x99,0x2c,0x2b,0x58, 0xc6,0x50,0x5f,0x94,
+ 0x56,0xc3,0x7c,0x0d, 0x02,0x3f,0xb8,0x5e,
+ 0x7b,0xc0,0x6c,0x51, 0x34,0x76,0xc0,0x0e,
+ 0xc6,0x22,0xc8,0x9e, 0x92,0xa0,0x21,0xc9,
+ 0x85,0x5c,0x7c,0xf8, 0xe2,0x64,0x47,0xc9,
+ 0xe4,0xa2,0x57,0x93, 0xf8,0xa2,0x69,0xcd,
+ 0x62,0x98,0x99,0xf4, 0xd7,0x7b,0x14,0xb1,
+ 0xd8,0x05,0xff,0x04, 0x15,0xc9,0xe1,0x6e,
+ 0x9b,0xe6,0x50,0x6b, 0x0b,0x3f,0x22,0x1f,
+ 0x08,0xde,0x0c,0x5b, 0x08,0x7e,0xc6,0x2f,
+ 0x6c,0xed,0xd6,0xb2, 0x15,0xa4,0xb3,0xf9,
+ 0xa7,0x46,0x38,0x2a, 0xea,0x69,0xa5,0xde,
+ 0x02,0xc3,0x96,0x89, 0x4d,0x55,0x3b,0xed,
+ 0x3d,0x3a,0x85,0x77, 0xbf,0x97,0x45,0x5c,
+ 0x9e,0x02,0x69,0xe2, 0x1b,0x68,0xbe,0x96,
+ 0xfb,0x64,0x6f,0x0f, 0xf6,0x06,0x40,0x67,
+ 0xfa,0x04,0xe3,0x55, 0xfa,0xbe,0xa4,0x60,
+ 0xef,0x21,0x66,0x97, 0xe6,0x9d,0x5c,0x1f,
+ 0x62,0x37,0xaa,0x31, 0xde,0xe4,0x9c,0x28,
+ 0x95,0xe0,0x22,0x86, 0xf4,0x4d,0xf3,0x07,
+ 0xfd,0x5f,0x3a,0x54, 0x2c,0x51,0x80,0x71,
+ 0xba,0x78,0x69,0x5b, 0x65,0xab,0x1f,0x81,
+ 0xed,0x3b,0xff,0x34, 0xa3,0xfb,0xbc,0x73,
+ 0x66,0x7d,0x13,0x7f, 0xdf,0x6e,0xe2,0xe2,
+ 0xeb,0x4f,0x6c,0xda, 0x7d,0x33,0x57,0xd0,
+ 0xd3,0x7c,0x95,0x4f, 0x33,0x58,0x21,0xc7,
+ 0xc0,0xe5,0x6f,0x42, 0x26,0xc6,0x1f,0x5e,
+ 0x85,0x1b,0x98,0x9a, 0xa2,0x1e,0x55,0x77,
+ 0x23,0xdf,0x81,0x5e, 0x79,0x55,0x05,0xfc,
+ 0xfb,0xda,0xee,0xba, 0x5a,0xba,0xf7,0x77,
+ 0x7f,0x0e,0xd3,0xe1, 0x37,0xfe,0x8d,0x2b,
+ 0xd5,0x3f,0xfb,0xd0, 0xc0,0x3c,0x0b,0x3f,
+ 0xcf,0x3c,0x14,0xcf, 0xfb,0x46,0x72,0x4c,
+ 0x1f,0x39,0xe2,0xda, 0x03,0x71,0x6d,0x23,
+ 0xef,0x93,0xcd,0x39, 0xd9,0x37,0x80,0x4d,
+ 0x65,0x61,0xd1,0x2c, 0x03,0xa9,0x47,0x72,
+ 0x4d,0x1e,0x0e,0x16, 0x33,0x0f,0x21,0x17,
+ 0xec,0x92,0xea,0x6f, 0x37,0x22,0xa4,0xd8,
+ 0x03,0x33,0x9e,0xd8, 0x03,0x69,0x9a,0xe8,
+ 0xb2,0x57,0xaf,0x78, 0x99,0x05,0x12,0xab,
+ 0x48,0x90,0x80,0xf0, 0x12,0x9b,0x20,0x64,
+ 0x7a,0x1d,0x47,0x5f, 0xba,0x3c,0xf9,0xc3,
+ 0x0a,0x0d,0x8d,0xa1, 0xf9,0x1b,0x82,0x13,
+ 0x3e,0x0d,0xec,0x0a, 0x83,0xc0,0x65,0xe1,
+ 0xe9,0x95,0xff,0x97, 0xd6,0xf2,0xe4,0xd5,
+ 0x86,0xc0,0x1f,0x29, 0x27,0x63,0xd7,0xde,
+ 0xb7,0x0a,0x07,0x99, 0x04,0x2d,0xa3,0x89,
+ 0xa2,0x43,0xcf,0xf3, 0xe1,0x43,0xac,0x4a,
+ 0x06,0x97,0xd0,0x05, 0x4f,0x87,0xfa,0xf9,
+ 0x9b,0xbf,0x52,0x70, 0xbd,0xbc,0x6c,0xf3,
+ 0x03,0x13,0x60,0x41, 0x28,0x09,0xec,0xcc,
+ 0xb1,0x1a,0xec,0xd6, 0xfb,0x6f,0x2a,0x89,
+ 0x5d,0x0b,0x53,0x9c, 0x59,0xc1,0x84,0x21,
+ 0x33,0x51,0x47,0x19, 0x31,0x9c,0xd4,0x0a,
+ 0x4d,0x04,0xec,0x50, 0x90,0x61,0xbd,0xbc,
+ 0x7e,0xc8,0xd9,0x6c, 0x98,0x1d,0x45,0x41,
+ 0x17,0x5e,0x97,0x1c, 0xc5,0xa8,0xe8,0xea,
+ 0x46,0x58,0x53,0xf7, 0x17,0xd5,0xad,0x11,
+ 0xc8,0x54,0xf5,0x7a, 0x33,0x90,0xf5,0x19,
+ 0xba,0x36,0xb4,0xfc, 0x52,0xa5,0x72,0x3d,
+ 0x14,0xbb,0x55,0xa7, 0xe9,0xe3,0x12,0xf7,
+ 0x1c,0x30,0xa2,0x82, 0x03,0xbf,0x53,0x91,
+ 0x2e,0x60,0x41,0x9f, 0x5b,0x69,0x39,0xf6,
+ 0x4d,0xc8,0xf8,0x46, 0x7a,0x7f,0xa4,0x98,
+ 0x36,0xff,0x06,0xcb, 0xca,0xe7,0x33,0xf2,
+ 0xc0,0x4a,0xf4,0x3c, 0x14,0x44,0x5f,0x6b,
+ 0x75,0xef,0x02,0x36, 0x75,0x08,0x14,0xfd,
+ 0x10,0x8e,0xa5,0x58, 0xd0,0x30,0x46,0x49,
+ 0xaf,0x3a,0xf8,0x40, 0x3d,0x35,0xdb,0x84,
+ 0x11,0x2e,0x97,0x6a, 0xb7,0x87,0x7f,0xad,
+ 0xf1,0xfa,0xa5,0x63, 0x60,0xd8,0x5e,0xbf,
+ 0x41,0x78,0x49,0xcf, 0x77,0xbb,0x56,0xbb,
+ 0x7d,0x01,0x67,0x05, 0x22,0xc8,0x8f,0x41,
+ 0xba,0x81,0xd2,0xca, 0x2c,0x38,0xac,0x76,
+ 0x06,0xc1,0x1a,0xc2, 0xce,0xac,0x90,0x67,
+ 0x57,0x3e,0x20,0x12, 0x5b,0xd9,0x97,0x58,
+ 0x65,0x05,0xb7,0x04, 0x61,0x7e,0xd8,0x3a,
+ 0xbf,0x55,0x3b,0x13, 0xe9,0x34,0x5a,0x37,
+ 0x36,0xcb,0x94,0x45, 0xc5,0x32,0xb3,0xa0,
+ 0x0c,0x3e,0x49,0xc5, 0xd3,0xed,0xa7,0xf0,
+ 0x1c,0x69,0xcc,0xea, 0xcc,0x83,0xc9,0x16,
+ 0x95,0x72,0x4b,0xf4, 0x89,0xd5,0xb9,0x10,
+ 0xf6,0x2d,0x60,0x15, 0xea,0x3c,0x06,0x66,
+ 0x9f,0x82,0xad,0x17, 0xce,0xd2,0xa4,0x48,
+ 0x7c,0x65,0xd9,0xf8, 0x02,0x4d,0x9b,0x4c,
+ 0x89,0x06,0x3a,0x34, 0x85,0x48,0x89,0x86,
+ 0xf9,0x24,0xa9,0x54, 0x72,0xdb,0x44,0x95,
+ 0xc7,0x44,0x1c,0x19, 0x11,0x4c,0x04,0xdc,
+ 0x13,0xb9,0x67,0xc8, 0xc3,0x3a,0x6a,0x50,
+ 0xfa,0xd1,0xfb,0xe1, 0x88,0xb6,0xf1,0xa3,
+ 0xc5,0x3b,0xdc,0x38, 0x45,0x16,0x26,0x02,
+ 0x3b,0xb8,0x8f,0x8b, 0x58,0x7d,0x23,0x04,
+ 0x50,0x6b,0x81,0x9f, 0xae,0x66,0xac,0x6f,
+ 0xcf,0x2a,0x9d,0xf1, 0xfd,0x1d,0x57,0x07,
+ 0xbe,0x58,0xeb,0x77, 0x0c,0xe3,0xc2,0x19,
+ 0x14,0x74,0x1b,0x51, 0x1c,0x4f,0x41,0xf3,
+ 0x32,0x89,0xb3,0xe7, 0xde,0x62,0xf6,0x5f,
+ 0xc7,0x6a,0x4a,0x2a, 0x5b,0x0f,0x5f,0x87,
+ 0x9c,0x08,0xb9,0x02, 0x88,0xc8,0x29,0xb7,
+ 0x94,0x52,0xfa,0x52, 0xfe,0xaa,0x50,0x10,
+ 0xba,0x48,0x75,0x5e, 0x11,0x1b,0xe6,0x39,
+ 0xd7,0x82,0x2c,0x87, 0xf1,0x1e,0xa4,0x38,
+ 0x72,0x3e,0x51,0xe7, 0xd8,0x3e,0x5b,0x7b,
+ 0x31,0x16,0x89,0xba, 0xd6,0xad,0x18,0x5e,
+ 0xba,0xf8,0x12,0xb3, 0xf4,0x6c,0x47,0x30,
+ 0xc0,0x38,0x58,0xb3, 0x10,0x8d,0x58,0x5d,
+ 0xb4,0xfb,0x19,0x7e, 0x41,0xc3,0x66,0xb8,
+ 0xd6,0x72,0x84,0xe1, 0x1a,0xc2,0x71,0x4c,
+ 0x0d,0x4a,0x21,0x7a, 0xab,0xa2,0xc0,0x36,
+ 0x15,0xc5,0xe9,0x46, 0xd7,0x29,0x17,0x76,
+ 0x5e,0x47,0x36,0x7f, 0x72,0x05,0xa7,0xcc,
+ 0x36,0x63,0xf9,0x47, 0x7d,0xe6,0x07,0x3c,
+ 0x8b,0x79,0x1d,0x96, 0x61,0x8d,0x90,0x65,
+ 0x7c,0xf5,0xeb,0x4e, 0x6e,0x09,0x59,0x6d,
+ 0x62,0x50,0x1b,0x0f, 0xe0,0xdc,0x78,0xf2,
+ 0x5b,0x83,0x1a,0xa1, 0x11,0x75,0xfd,0x18,
+ 0xd7,0xe2,0x8d,0x65, 0x14,0x21,0xce,0xbe,
+ 0xb5,0x87,0xe3,0x0a, 0xda,0x24,0x0a,0x64,
+ 0xa9,0x9f,0x03,0x8d, 0x46,0x5d,0x24,0x1a,
+ 0x8a,0x0c,0x42,0x01, 0xca,0xb1,0x5f,0x7c,
+ 0xa5,0xac,0x32,0x4a, 0xb8,0x07,0x91,0x18,
+ 0x6f,0xb0,0x71,0x3c, 0xc9,0xb1,0xa8,0xf8,
+ 0x5f,0x69,0xa5,0xa1, 0xca,0x9e,0x7a,0xaa,
+ 0xac,0xe9,0xc7,0x47, 0x41,0x75,0x25,0xc3,
+ 0x73,0xe2,0x0b,0xdd, 0x6d,0x52,0x71,0xbe,
+ 0xc5,0xdc,0xb4,0xe7, 0x01,0x26,0x53,0x77,
+ 0x86,0x90,0x85,0x68, 0x6b,0x7b,0x03,0x53,
+ 0xda,0x52,0x52,0x51, 0x68,0xc8,0xf3,0xec,
+ 0x6c,0xd5,0x03,0x7a, 0xa3,0x0e,0xb4,0x02,
+ 0x5f,0x1a,0xab,0xee, 0xca,0x67,0x29,0x7b,
+ 0xbd,0x96,0x59,0xb3, 0x8b,0x32,0x7a,0x92,
+ 0x9f,0xd8,0x25,0x2b, 0xdf,0xc0,0x4c,0xda,
+ },
+ .mlen = 16,
+ .m = {
+ 0xbc,0xda,0x81,0xa8, 0x78,0x79,0x1c,0xbf,
+ 0x77,0x53,0xba,0x4c, 0x30,0x5b,0xb8,0x33,
+ },
+ .h = {
+ 0x04,0xbf,0x7f,0x6a, 0xce,0x72,0xea,0x6a,
+ 0x79,0xdb,0xb0,0xc9, 0x60,0xf6,0x12,0xcc,
+ },
+ },
+ [2] = { /* 1024-byte message */
+ .k = {
+ 0x65,0x4d,0xe3,0xf8, 0xd2,0x4c,0xac,0x28,
+ 0x68,0xf5,0xb3,0x81, 0x71,0x4b,0xa1,0xfa,
+ 0x04,0x0e,0xd3,0x81, 0x36,0xbe,0x0c,0x81,
+ 0x5e,0xaf,0xbc,0x3a, 0xa4,0xc0,0x8e,0x8b,
+ 0x55,0x63,0xd3,0x52, 0x97,0x88,0xd6,0x19,
+ 0xbc,0x96,0xdf,0x49, 0xff,0x04,0x63,0xf5,
+ 0x0c,0x11,0x13,0xaa, 0x9e,0x1f,0x5a,0xf7,
+ 0xdd,0xbd,0x37,0x80, 0xc3,0xd0,0xbe,0xa7,
+ 0x05,0xc8,0x3c,0x98, 0x1e,0x05,0x3c,0x84,
+ 0x39,0x61,0xc4,0xed, 0xed,0x71,0x1b,0xc4,
+ 0x74,0x45,0x2c,0xa1, 0x56,0x70,0x97,0xfd,
+ 0x44,0x18,0x07,0x7d, 0xca,0x60,0x1f,0x73,
+ 0x3b,0x6d,0x21,0xcb, 0x61,0x87,0x70,0x25,
+ 0x46,0x21,0xf1,0x1f, 0x21,0x91,0x31,0x2d,
+ 0x5d,0xcc,0xb7,0xd1, 0x84,0x3e,0x3d,0xdb,
+ 0x03,0x53,0x2a,0x82, 0xa6,0x9a,0x95,0xbc,
+ 0x1a,0x1e,0x0a,0x5e, 0x07,0x43,0xab,0x43,
+ 0xaf,0x92,0x82,0x06, 0x91,0x04,0x09,0xf4,
+ 0x17,0x0a,0x9a,0x2c, 0x54,0xdb,0xb8,0xf4,
+ 0xd0,0xf0,0x10,0x66, 0x24,0x8d,0xcd,0xda,
+ 0xfe,0x0e,0x45,0x9d, 0x6f,0xc4,0x4e,0xf4,
+ 0x96,0xaf,0x13,0xdc, 0xa9,0xd4,0x8c,0xc4,
+ 0xc8,0x57,0x39,0x3c, 0xc2,0xd3,0x0a,0x76,
+ 0x4a,0x1f,0x75,0x83, 0x44,0xc7,0xd1,0x39,
+ 0xd8,0xb5,0x41,0xba, 0x73,0x87,0xfa,0x96,
+ 0xc7,0x18,0x53,0xfb, 0x9b,0xda,0xa0,0x97,
+ 0x1d,0xee,0x60,0x85, 0x9e,0x14,0xc3,0xce,
+ 0xc4,0x05,0x29,0x3b, 0x95,0x30,0xa3,0xd1,
+ 0x9f,0x82,0x6a,0x04, 0xf5,0xa7,0x75,0x57,
+ 0x82,0x04,0xfe,0x71, 0x51,0x71,0xb1,0x49,
+ 0x50,0xf8,0xe0,0x96, 0xf1,0xfa,0xa8,0x88,
+ 0x3f,0xa0,0x86,0x20, 0xd4,0x60,0x79,0x59,
+ 0x17,0x2d,0xd1,0x09, 0xf4,0xec,0x05,0x57,
+ 0xcf,0x62,0x7e,0x0e, 0x7e,0x60,0x78,0xe6,
+ 0x08,0x60,0x29,0xd8, 0xd5,0x08,0x1a,0x24,
+ 0xc4,0x6c,0x24,0xe7, 0x92,0x08,0x3d,0x8a,
+ 0x98,0x7a,0xcf,0x99, 0x0a,0x65,0x0e,0xdc,
+ 0x8c,0x8a,0xbe,0x92, 0x82,0x91,0xcc,0x62,
+ 0x30,0xb6,0xf4,0x3f, 0xc6,0x8a,0x7f,0x12,
+ 0x4a,0x8a,0x49,0xfa, 0x3f,0x5c,0xd4,0x5a,
+ 0xa6,0x82,0xa3,0xe6, 0xaa,0x34,0x76,0xb2,
+ 0xab,0x0a,0x30,0xef, 0x6c,0x77,0x58,0x3f,
+ 0x05,0x6b,0xcc,0x5c, 0xae,0xdc,0xd7,0xb9,
+ 0x51,0x7e,0x8d,0x32, 0x5b,0x24,0x25,0xbe,
+ 0x2b,0x24,0x01,0xcf, 0x80,0xda,0x16,0xd8,
+ 0x90,0x72,0x2c,0xad, 0x34,0x8d,0x0c,0x74,
+ 0x02,0xcb,0xfd,0xcf, 0x6e,0xef,0x97,0xb5,
+ 0x4c,0xf2,0x68,0xca, 0xde,0x43,0x9e,0x8a,
+ 0xc5,0x5f,0x31,0x7f, 0x14,0x71,0x38,0xec,
+ 0xbd,0x98,0xe5,0x71, 0xc4,0xb5,0xdb,0xef,
+ 0x59,0xd2,0xca,0xc0, 0xc1,0x86,0x75,0x01,
+ 0xd4,0x15,0x0d,0x6f, 0xa4,0xf7,0x7b,0x37,
+ 0x47,0xda,0x18,0x93, 0x63,0xda,0xbe,0x9e,
+ 0x07,0xfb,0xb2,0x83, 0xd5,0xc4,0x34,0x55,
+ 0xee,0x73,0xa1,0x42, 0x96,0xf9,0x66,0x41,
+ 0xa4,0xcc,0xd2,0x93, 0x6e,0xe1,0x0a,0xbb,
+ 0xd2,0xdd,0x18,0x23, 0xe6,0x6b,0x98,0x0b,
+ 0x8a,0x83,0x59,0x2c, 0xc3,0xa6,0x59,0x5b,
+ 0x01,0x22,0x59,0xf7, 0xdc,0xb0,0x87,0x7e,
+ 0xdb,0x7d,0xf4,0x71, 0x41,0xab,0xbd,0xee,
+ 0x79,0xbe,0x3c,0x01, 0x76,0x0b,0x2d,0x0a,
+ 0x42,0xc9,0x77,0x8c, 0xbb,0x54,0x95,0x60,
+ 0x43,0x2e,0xe0,0x17, 0x52,0xbd,0x90,0xc9,
+ 0xc2,0x2c,0xdd,0x90, 0x24,0x22,0x76,0x40,
+ 0x5c,0xb9,0x41,0xc9, 0xa1,0xd5,0xbd,0xe3,
+ 0x44,0xe0,0xa4,0xab, 0xcc,0xb8,0xe2,0x32,
+ 0x02,0x15,0x04,0x1f, 0x8c,0xec,0x5d,0x14,
+ 0xac,0x18,0xaa,0xef, 0x6e,0x33,0x19,0x6e,
+ 0xde,0xfe,0x19,0xdb, 0xeb,0x61,0xca,0x18,
+ 0xad,0xd8,0x3d,0xbf, 0x09,0x11,0xc7,0xa5,
+ 0x86,0x0b,0x0f,0xe5, 0x3e,0xde,0xe8,0xd9,
+ 0x0a,0x69,0x9e,0x4c, 0x20,0xff,0xf9,0xc5,
+ 0xfa,0xf8,0xf3,0x7f, 0xa5,0x01,0x4b,0x5e,
+ 0x0f,0xf0,0x3b,0x68, 0xf0,0x46,0x8c,0x2a,
+ 0x7a,0xc1,0x8f,0xa0, 0xfe,0x6a,0x5b,0x44,
+ 0x70,0x5c,0xcc,0x92, 0x2c,0x6f,0x0f,0xbd,
+ 0x25,0x3e,0xb7,0x8e, 0x73,0x58,0xda,0xc9,
+ 0xa5,0xaa,0x9e,0xf3, 0x9b,0xfd,0x37,0x3e,
+ 0xe2,0x88,0xa4,0x7b, 0xc8,0x5c,0xa8,0x93,
+ 0x0e,0xe7,0x9a,0x9c, 0x2e,0x95,0x18,0x9f,
+ 0xc8,0x45,0x0c,0x88, 0x9e,0x53,0x4f,0x3a,
+ 0x76,0xc1,0x35,0xfa, 0x17,0xd8,0xac,0xa0,
+ 0x0c,0x2d,0x47,0x2e, 0x4f,0x69,0x9b,0xf7,
+ 0xd0,0xb6,0x96,0x0c, 0x19,0xb3,0x08,0x01,
+ 0x65,0x7a,0x1f,0xc7, 0x31,0x86,0xdb,0xc8,
+ 0xc1,0x99,0x8f,0xf8, 0x08,0x4a,0x9d,0x23,
+ 0x22,0xa8,0xcf,0x27, 0x01,0x01,0x88,0x93,
+ 0x9c,0x86,0x45,0xbd, 0xe0,0x51,0xca,0x52,
+ 0x84,0xba,0xfe,0x03, 0xf7,0xda,0xc5,0xce,
+ 0x3e,0x77,0x75,0x86, 0xaf,0x84,0xc8,0x05,
+ 0x44,0x01,0x0f,0x02, 0xf3,0x58,0xb0,0x06,
+ 0x5a,0xd7,0x12,0x30, 0x8d,0xdf,0x1f,0x1f,
+ 0x0a,0xe6,0xd2,0xea, 0xf6,0x3a,0x7a,0x99,
+ 0x63,0xe8,0xd2,0xc1, 0x4a,0x45,0x8b,0x40,
+ 0x4d,0x0a,0xa9,0x76, 0x92,0xb3,0xda,0x87,
+ 0x36,0x33,0xf0,0x78, 0xc3,0x2f,0x5f,0x02,
+ 0x1a,0x6a,0x2c,0x32, 0xcd,0x76,0xbf,0xbd,
+ 0x5a,0x26,0x20,0x28, 0x8c,0x8c,0xbc,0x52,
+ 0x3d,0x0a,0xc9,0xcb, 0xab,0xa4,0x21,0xb0,
+ 0x54,0x40,0x81,0x44, 0xc7,0xd6,0x1c,0x11,
+ 0x44,0xc6,0x02,0x92, 0x14,0x5a,0xbf,0x1a,
+ 0x09,0x8a,0x18,0xad, 0xcd,0x64,0x3d,0x53,
+ 0x4a,0xb6,0xa5,0x1b, 0x57,0x0e,0xef,0xe0,
+ 0x8c,0x44,0x5f,0x7d, 0xbd,0x6c,0xfd,0x60,
+ 0xae,0x02,0x24,0xb6, 0x99,0xdd,0x8c,0xaf,
+ 0x59,0x39,0x75,0x3c, 0xd1,0x54,0x7b,0x86,
+ 0xcc,0x99,0xd9,0x28, 0x0c,0xb0,0x94,0x62,
+ 0xf9,0x51,0xd1,0x19, 0x96,0x2d,0x66,0xf5,
+ 0x55,0xcf,0x9e,0x59, 0xe2,0x6b,0x2c,0x08,
+ 0xc0,0x54,0x48,0x24, 0x45,0xc3,0x8c,0x73,
+ 0xea,0x27,0x6e,0x66, 0x7d,0x1d,0x0e,0x6e,
+ 0x13,0xe8,0x56,0x65, 0x3a,0xb0,0x81,0x5c,
+ 0xf0,0xe8,0xd8,0x00, 0x6b,0xcd,0x8f,0xad,
+ 0xdd,0x53,0xf3,0xa4, 0x6c,0x43,0xd6,0x31,
+ 0xaf,0xd2,0x76,0x1e, 0x91,0x12,0xdb,0x3c,
+ 0x8c,0xc2,0x81,0xf0, 0x49,0xdb,0xe2,0x6b,
+ 0x76,0x62,0x0a,0x04, 0xe4,0xaa,0x8a,0x7c,
+ 0x08,0x0b,0x5d,0xd0, 0xee,0x1d,0xfb,0xc4,
+ 0x02,0x75,0x42,0xd6, 0xba,0xa7,0x22,0xa8,
+ 0x47,0x29,0xb7,0x85, 0x6d,0x93,0x3a,0xdb,
+ 0x00,0x53,0x0b,0xa2, 0xeb,0xf8,0xfe,0x01,
+ 0x6f,0x8a,0x31,0xd6, 0x17,0x05,0x6f,0x67,
+ 0x88,0x95,0x32,0xfe, 0x4f,0xa6,0x4b,0xf8,
+ 0x03,0xe4,0xcd,0x9a, 0x18,0xe8,0x4e,0x2d,
+ 0xf7,0x97,0x9a,0x0c, 0x7d,0x9f,0x7e,0x44,
+ 0x69,0x51,0xe0,0x32, 0x6b,0x62,0x86,0x8f,
+ 0xa6,0x8e,0x0b,0x21, 0x96,0xe5,0xaf,0x77,
+ 0xc0,0x83,0xdf,0xa5, 0x0e,0xd0,0xa1,0x04,
+ 0xaf,0xc1,0x10,0xcb, 0x5a,0x40,0xe4,0xe3,
+ 0x38,0x7e,0x07,0xe8, 0x4d,0xfa,0xed,0xc5,
+ 0xf0,0x37,0xdf,0xbb, 0x8a,0xcf,0x3d,0xdc,
+ 0x61,0xd2,0xc6,0x2b, 0xff,0x07,0xc9,0x2f,
+ 0x0c,0x2d,0x5c,0x07, 0xa8,0x35,0x6a,0xfc,
+ 0xae,0x09,0x03,0x45, 0x74,0x51,0x4d,0xc4,
+ 0xb8,0x23,0x87,0x4a, 0x99,0x27,0x20,0x87,
+ 0x62,0x44,0x0a,0x4a, 0xce,0x78,0x47,0x22,
+ },
+ .mlen = 1024,
+ .m = {
+ 0x8e,0xb0,0x4c,0xde, 0x9c,0x4a,0x04,0x5a,
+ 0xf6,0xa9,0x7f,0x45, 0x25,0xa5,0x7b,0x3a,
+ 0xbc,0x4d,0x73,0x39, 0x81,0xb5,0xbd,0x3d,
+ 0x21,0x6f,0xd7,0x37, 0x50,0x3c,0x7b,0x28,
+ 0xd1,0x03,0x3a,0x17, 0xed,0x7b,0x7c,0x2a,
+ 0x16,0xbc,0xdf,0x19, 0x89,0x52,0x71,0x31,
+ 0xb6,0xc0,0xfd,0xb5, 0xd3,0xba,0x96,0x99,
+ 0xb6,0x34,0x0b,0xd0, 0x99,0x93,0xfc,0x1a,
+ 0x01,0x3c,0x85,0xc6, 0x9b,0x78,0x5c,0x8b,
+ 0xfe,0xae,0xd2,0xbf, 0xb2,0x6f,0xf9,0xed,
+ 0xc8,0x25,0x17,0xfe, 0x10,0x3b,0x7d,0xda,
+ 0xf4,0x8d,0x35,0x4b, 0x7c,0x7b,0x82,0xe7,
+ 0xc2,0xb3,0xee,0x60, 0x4a,0x03,0x86,0xc9,
+ 0x4e,0xb5,0xc4,0xbe, 0xd2,0xbd,0x66,0xf1,
+ 0x13,0xf1,0x09,0xab, 0x5d,0xca,0x63,0x1f,
+ 0xfc,0xfb,0x57,0x2a, 0xfc,0xca,0x66,0xd8,
+ 0x77,0x84,0x38,0x23, 0x1d,0xac,0xd3,0xb3,
+ 0x7a,0xad,0x4c,0x70, 0xfa,0x9c,0xc9,0x61,
+ 0xa6,0x1b,0xba,0x33, 0x4b,0x4e,0x33,0xec,
+ 0xa0,0xa1,0x64,0x39, 0x40,0x05,0x1c,0xc2,
+ 0x3f,0x49,0x9d,0xae, 0xf2,0xc5,0xf2,0xc5,
+ 0xfe,0xe8,0xf4,0xc2, 0xf9,0x96,0x2d,0x28,
+ 0x92,0x30,0x44,0xbc, 0xd2,0x7f,0xe1,0x6e,
+ 0x62,0x02,0x8f,0x3d, 0x1c,0x80,0xda,0x0e,
+ 0x6a,0x90,0x7e,0x75, 0xff,0xec,0x3e,0xc4,
+ 0xcd,0x16,0x34,0x3b, 0x05,0x6d,0x4d,0x20,
+ 0x1c,0x7b,0xf5,0x57, 0x4f,0xfa,0x3d,0xac,
+ 0xd0,0x13,0x55,0xe8, 0xb3,0xe1,0x1b,0x78,
+ 0x30,0xe6,0x9f,0x84, 0xd4,0x69,0xd1,0x08,
+ 0x12,0x77,0xa7,0x4a, 0xbd,0xc0,0xf2,0xd2,
+ 0x78,0xdd,0xa3,0x81, 0x12,0xcb,0x6c,0x14,
+ 0x90,0x61,0xe2,0x84, 0xc6,0x2b,0x16,0xcc,
+ 0x40,0x99,0x50,0x88, 0x01,0x09,0x64,0x4f,
+ 0x0a,0x80,0xbe,0x61, 0xae,0x46,0xc9,0x0a,
+ 0x5d,0xe0,0xfb,0x72, 0x7a,0x1a,0xdd,0x61,
+ 0x63,0x20,0x05,0xa0, 0x4a,0xf0,0x60,0x69,
+ 0x7f,0x92,0xbc,0xbf, 0x4e,0x39,0x4d,0xdd,
+ 0x74,0xd1,0xb7,0xc0, 0x5a,0x34,0xb7,0xae,
+ 0x76,0x65,0x2e,0xbc, 0x36,0xb9,0x04,0x95,
+ 0x42,0xe9,0x6f,0xca, 0x78,0xb3,0x72,0x07,
+ 0xa3,0xba,0x02,0x94, 0x67,0x4c,0xb1,0xd7,
+ 0xe9,0x30,0x0d,0xf0, 0x3b,0xb8,0x10,0x6d,
+ 0xea,0x2b,0x21,0xbf, 0x74,0x59,0x82,0x97,
+ 0x85,0xaa,0xf1,0xd7, 0x54,0x39,0xeb,0x05,
+ 0xbd,0xf3,0x40,0xa0, 0x97,0xe6,0x74,0xfe,
+ 0xb4,0x82,0x5b,0xb1, 0x36,0xcb,0xe8,0x0d,
+ 0xce,0x14,0xd9,0xdf, 0xf1,0x94,0x22,0xcd,
+ 0xd6,0x00,0xba,0x04, 0x4c,0x05,0x0c,0xc0,
+ 0xd1,0x5a,0xeb,0x52, 0xd5,0xa8,0x8e,0xc8,
+ 0x97,0xa1,0xaa,0xc1, 0xea,0xc1,0xbe,0x7c,
+ 0x36,0xb3,0x36,0xa0, 0xc6,0x76,0x66,0xc5,
+ 0xe2,0xaf,0xd6,0x5c, 0xe2,0xdb,0x2c,0xb3,
+ 0x6c,0xb9,0x99,0x7f, 0xff,0x9f,0x03,0x24,
+ 0xe1,0x51,0x44,0x66, 0xd8,0x0c,0x5d,0x7f,
+ 0x5c,0x85,0x22,0x2a, 0xcf,0x6d,0x79,0x28,
+ 0xab,0x98,0x01,0x72, 0xfe,0x80,0x87,0x5f,
+ 0x46,0xba,0xef,0x81, 0x24,0xee,0xbf,0xb0,
+ 0x24,0x74,0xa3,0x65, 0x97,0x12,0xc4,0xaf,
+ 0x8b,0xa0,0x39,0xda, 0x8a,0x7e,0x74,0x6e,
+ 0x1b,0x42,0xb4,0x44, 0x37,0xfc,0x59,0xfd,
+ 0x86,0xed,0xfb,0x8c, 0x66,0x33,0xda,0x63,
+ 0x75,0xeb,0xe1,0xa4, 0x85,0x4f,0x50,0x8f,
+ 0x83,0x66,0x0d,0xd3, 0x37,0xfa,0xe6,0x9c,
+ 0x4f,0x30,0x87,0x35, 0x18,0xe3,0x0b,0xb7,
+ 0x6e,0x64,0x54,0xcd, 0x70,0xb3,0xde,0x54,
+ 0xb7,0x1d,0xe6,0x4c, 0x4d,0x55,0x12,0x12,
+ 0xaf,0x5f,0x7f,0x5e, 0xee,0x9d,0xe8,0x8e,
+ 0x32,0x9d,0x4e,0x75, 0xeb,0xc6,0xdd,0xaa,
+ 0x48,0x82,0xa4,0x3f, 0x3c,0xd7,0xd3,0xa8,
+ 0x63,0x9e,0x64,0xfe, 0xe3,0x97,0x00,0x62,
+ 0xe5,0x40,0x5d,0xc3, 0xad,0x72,0xe1,0x28,
+ 0x18,0x50,0xb7,0x75, 0xef,0xcd,0x23,0xbf,
+ 0x3f,0xc0,0x51,0x36, 0xf8,0x41,0xc3,0x08,
+ 0xcb,0xf1,0x8d,0x38, 0x34,0xbd,0x48,0x45,
+ 0x75,0xed,0xbc,0x65, 0x7b,0xb5,0x0c,0x9b,
+ 0xd7,0x67,0x7d,0x27, 0xb4,0xc4,0x80,0xd7,
+ 0xa9,0xb9,0xc7,0x4a, 0x97,0xaa,0xda,0xc8,
+ 0x3c,0x74,0xcf,0x36, 0x8f,0xe4,0x41,0xe3,
+ 0xd4,0xd3,0x26,0xa7, 0xf3,0x23,0x9d,0x8f,
+ 0x6c,0x20,0x05,0x32, 0x3e,0xe0,0xc3,0xc8,
+ 0x56,0x3f,0xa7,0x09, 0xb7,0xfb,0xc7,0xf7,
+ 0xbe,0x2a,0xdd,0x0f, 0x06,0x7b,0x0d,0xdd,
+ 0xb0,0xb4,0x86,0x17, 0xfd,0xb9,0x04,0xe5,
+ 0xc0,0x64,0x5d,0xad, 0x2a,0x36,0x38,0xdb,
+ 0x24,0xaf,0x5b,0xff, 0xca,0xf9,0x41,0xe8,
+ 0xf9,0x2f,0x1e,0x5e, 0xf9,0xf5,0xd5,0xf2,
+ 0xb2,0x88,0xca,0xc9, 0xa1,0x31,0xe2,0xe8,
+ 0x10,0x95,0x65,0xbf, 0xf1,0x11,0x61,0x7a,
+ 0x30,0x1a,0x54,0x90, 0xea,0xd2,0x30,0xf6,
+ 0xa5,0xad,0x60,0xf9, 0x4d,0x84,0x21,0x1b,
+ 0xe4,0x42,0x22,0xc8, 0x12,0x4b,0xb0,0x58,
+ 0x3e,0x9c,0x2d,0x32, 0x95,0x0a,0x8e,0xb0,
+ 0x0a,0x7e,0x77,0x2f, 0xe8,0x97,0x31,0x6a,
+ 0xf5,0x59,0xb4,0x26, 0xe6,0x37,0x12,0xc9,
+ 0xcb,0xa0,0x58,0x33, 0x6f,0xd5,0x55,0x55,
+ 0x3c,0xa1,0x33,0xb1, 0x0b,0x7e,0x2e,0xb4,
+ 0x43,0x2a,0x84,0x39, 0xf0,0x9c,0xf4,0x69,
+ 0x4f,0x1e,0x79,0xa6, 0x15,0x1b,0x87,0xbb,
+ 0xdb,0x9b,0xe0,0xf1, 0x0b,0xba,0xe3,0x6e,
+ 0xcc,0x2f,0x49,0x19, 0x22,0x29,0xfc,0x71,
+ 0xbb,0x77,0x38,0x18, 0x61,0xaf,0x85,0x76,
+ 0xeb,0xd1,0x09,0xcc, 0x86,0x04,0x20,0x9a,
+ 0x66,0x53,0x2f,0x44, 0x8b,0xc6,0xa3,0xd2,
+ 0x5f,0xc7,0x79,0x82, 0x66,0xa8,0x6e,0x75,
+ 0x7d,0x94,0xd1,0x86, 0x75,0x0f,0xa5,0x4f,
+ 0x3c,0x7a,0x33,0xce, 0xd1,0x6e,0x9d,0x7b,
+ 0x1f,0x91,0x37,0xb8, 0x37,0x80,0xfb,0xe0,
+ 0x52,0x26,0xd0,0x9a, 0xd4,0x48,0x02,0x41,
+ 0x05,0xe3,0x5a,0x94, 0xf1,0x65,0x61,0x19,
+ 0xb8,0x88,0x4e,0x2b, 0xea,0xba,0x8b,0x58,
+ 0x8b,0x42,0x01,0x00, 0xa8,0xfe,0x00,0x5c,
+ 0xfe,0x1c,0xee,0x31, 0x15,0x69,0xfa,0xb3,
+ 0x9b,0x5f,0x22,0x8e, 0x0d,0x2c,0xe3,0xa5,
+ 0x21,0xb9,0x99,0x8a, 0x8e,0x94,0x5a,0xef,
+ 0x13,0x3e,0x99,0x96, 0x79,0x6e,0xd5,0x42,
+ 0x36,0x03,0xa9,0xe2, 0xca,0x65,0x4e,0x8a,
+ 0x8a,0x30,0xd2,0x7d, 0x74,0xe7,0xf0,0xaa,
+ 0x23,0x26,0xdd,0xcb, 0x82,0x39,0xfc,0x9d,
+ 0x51,0x76,0x21,0x80, 0xa2,0xbe,0x93,0x03,
+ 0x47,0xb0,0xc1,0xb6, 0xdc,0x63,0xfd,0x9f,
+ 0xca,0x9d,0xa5,0xca, 0x27,0x85,0xe2,0xd8,
+ 0x15,0x5b,0x7e,0x14, 0x7a,0xc4,0x89,0xcc,
+ 0x74,0x14,0x4b,0x46, 0xd2,0xce,0xac,0x39,
+ 0x6b,0x6a,0x5a,0xa4, 0x0e,0xe3,0x7b,0x15,
+ 0x94,0x4b,0x0f,0x74, 0xcb,0x0c,0x7f,0xa9,
+ 0xbe,0x09,0x39,0xa3, 0xdd,0x56,0x5c,0xc7,
+ 0x99,0x56,0x65,0x39, 0xf4,0x0b,0x7d,0x87,
+ 0xec,0xaa,0xe3,0x4d, 0x22,0x65,0x39,0x4e,
+ },
+ .h = {
+ 0x64,0x3a,0xbc,0xc3, 0x3f,0x74,0x40,0x51,
+ 0x6e,0x56,0x01,0x1a, 0x51,0xec,0x36,0xde,
+ },
+ },
+ };
+ const uint8_t *pk;
+ const uint8_t *nhk;
+ static uint32_t nhk32[268];
+ uint8_t h[16];
+ unsigned i, j;
+ int result = 0;
+
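+	/*
+	 * Each test key is a 16-byte Poly1305 key followed by the NH key
+	 * as 268 little-endian 32-bit words; nhk32 is static to keep the
+	 * 1072-byte array off the stack.
+	 */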
+ for (i = 0; i < __arraycount(C); i++) {
+ pk = C[i].k;
+ nhk = C[i].k + 16;
+ for (j = 0; j < 268; j++)
+ nhk32[j] = le32dec(nhk + 4*j);
+ nhpoly1305(h, C[i].m, C[i].mlen, pk, nhk32);
+ if (memcmp(h, C[i].h, 16)) {
+ char prefix[16];
+ snprintf(prefix, sizeof prefix, "nhpoly1305 %u", i);
+			hexdump(printf, prefix, h, sizeof h);
+ result = -1;
+ }
+ }
+
+ return result;
+}
+
+/* ChaCha core */
+
+static uint32_t
+rol32(uint32_t u, unsigned c)
+{
+
+ return (u << c) | (u >> (32 - c));
+}
+
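+/* ChaCha quarterround: mix four state words by add/xor/rotate. */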
+#define CHACHA_QUARTERROUND(a, b, c, d) do { \
+ (a) += (b); (d) ^= (a); (d) = rol32((d), 16); \
+ (c) += (d); (b) ^= (c); (b) = rol32((b), 12); \
+ (a) += (b); (d) ^= (a); (d) = rol32((d), 8); \
+ (c) += (d); (b) ^= (c); (b) = rol32((b), 7); \
+} while (/*CONSTCOND*/0)
+
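+/* `expand 32-byte k' -- the standard ChaCha constant for 256-bit keys */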
+const uint8_t chacha_const32[16] = "expand 32-byte k";
+
+static void
+chacha_core(uint8_t out[restrict static 64], const uint8_t in[static 16],
+ const uint8_t k[static 32], const uint8_t c[static 16], unsigned nr)
+{
+ uint32_t x0,x1,x2,x3,x4,x5,x6,x7,x8,x9,x10,x11,x12,x13,x14,x15;
+ uint32_t y0,y1,y2,y3,y4,y5,y6,y7,y8,y9,y10,y11,y12,y13,y14,y15;
+ int i;
+
+ x0 = y0 = le32dec(c + 0);
+ x1 = y1 = le32dec(c + 4);
+ x2 = y2 = le32dec(c + 8);
+ x3 = y3 = le32dec(c + 12);
+ x4 = y4 = le32dec(k + 0);
+ x5 = y5 = le32dec(k + 4);
+ x6 = y6 = le32dec(k + 8);
+ x7 = y7 = le32dec(k + 12);
+ x8 = y8 = le32dec(k + 16);
+ x9 = y9 = le32dec(k + 20);
+ x10 = y10 = le32dec(k + 24);
+ x11 = y11 = le32dec(k + 28);
+ x12 = y12 = le32dec(in + 0);
+ x13 = y13 = le32dec(in + 4);
+ x14 = y14 = le32dec(in + 8);
+ x15 = y15 = le32dec(in + 12);
+
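+	/* Two rounds per iteration: one column round, one diagonal round. */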
+ for (i = nr; i > 0; i -= 2) {
+ CHACHA_QUARTERROUND( y0, y4, y8,y12);
+ CHACHA_QUARTERROUND( y1, y5, y9,y13);
+ CHACHA_QUARTERROUND( y2, y6,y10,y14);
+ CHACHA_QUARTERROUND( y3, y7,y11,y15);
+ CHACHA_QUARTERROUND( y0, y5,y10,y15);
+ CHACHA_QUARTERROUND( y1, y6,y11,y12);
+ CHACHA_QUARTERROUND( y2, y7, y8,y13);
+ CHACHA_QUARTERROUND( y3, y4, y9,y14);
+ }
+
+ le32enc(out + 0, x0 + y0);
+ le32enc(out + 4, x1 + y1);
+ le32enc(out + 8, x2 + y2);
+ le32enc(out + 12, x3 + y3);
+ le32enc(out + 16, x4 + y4);
+ le32enc(out + 20, x5 + y5);
+ le32enc(out + 24, x6 + y6);
+ le32enc(out + 28, x7 + y7);
+ le32enc(out + 32, x8 + y8);
+ le32enc(out + 36, x9 + y9);
+ le32enc(out + 40, x10 + y10);
+ le32enc(out + 44, x11 + y11);
+ le32enc(out + 48, x12 + y12);
+ le32enc(out + 52, x13 + y13);
+ le32enc(out + 56, x14 + y14);
+ le32enc(out + 60, x15 + y15);
+}
+
+/* https://tools.ietf.org/html/draft-strombergson-chacha-test-vectors-00 */
+static int
+chacha_core_selftest(void)
+{
+ /* TC1, 32-byte key, rounds=12, keystream block 1 */
+ static const uint8_t zero[32];
+ static const uint8_t expected0[64] = {
+ 0x9b,0xf4,0x9a,0x6a, 0x07,0x55,0xf9,0x53,
+ 0x81,0x1f,0xce,0x12, 0x5f,0x26,0x83,0xd5,
+ 0x04,0x29,0xc3,0xbb, 0x49,0xe0,0x74,0x14,
+ 0x7e,0x00,0x89,0xa5, 0x2e,0xae,0x15,0x5f,
+ 0x05,0x64,0xf8,0x79, 0xd2,0x7a,0xe3,0xc0,
+ 0x2c,0xe8,0x28,0x34, 0xac,0xfa,0x8c,0x79,
+ 0x3a,0x62,0x9f,0x2c, 0xa0,0xde,0x69,0x19,
+ 0x61,0x0b,0xe8,0x2f, 0x41,0x13,0x26,0xbe,
+ };
+ /* TC7, 32-byte key, rounds=12, keystream block 2 */
+ static const uint8_t k1[32] = {
+ 0x00,0x11,0x22,0x33, 0x44,0x55,0x66,0x77,
+ 0x88,0x99,0xaa,0xbb, 0xcc,0xdd,0xee,0xff,
+ 0xff,0xee,0xdd,0xcc, 0xbb,0xaa,0x99,0x88,
+ 0x77,0x66,0x55,0x44, 0x33,0x22,0x11,0x00,
+ };
+ static const uint8_t in1[16] = {
+ 0x01,0x00,0x00,0x00, 0x00,0x00,0x00,0x00,
+ 0x0f,0x1e,0x2d,0x3c, 0x4b,0x59,0x68,0x77,
+ };
+ static const uint8_t expected1[64] = {
+ 0xcd,0x9a,0x2a,0xa9, 0xea,0x93,0xc2,0x67,
+ 0x5e,0x82,0x88,0x14, 0x08,0xde,0x85,0x2c,
+ 0x62,0xfa,0x74,0x6a, 0x30,0xe5,0x2b,0x45,
+ 0xa2,0x69,0x62,0xcf, 0x43,0x51,0xe3,0x04,
+ 0xd3,0x13,0x20,0xbb, 0xd6,0xaa,0x6c,0xc8,
+ 0xf3,0x26,0x37,0xf9, 0x59,0x34,0xe4,0xc1,
+ 0x45,0xef,0xd5,0x62, 0x31,0xef,0x31,0x61,
+ 0x03,0x28,0x36,0xf4, 0x96,0x71,0x83,0x3e,
+ };
+ uint8_t out[64];
+ int result = 0;
+
+ chacha_core(out, zero, zero, chacha_const32, 12);
+ if (memcmp(out, expected0, 64)) {
+ hexdump(printf, "chacha core 1", out, sizeof out);
+ result = -1;
+ }
+
+ chacha_core(out, in1, k1, chacha_const32, 12);
+ if (memcmp(out, expected1, 64)) {
+ hexdump(printf, "chacha core 2", out, sizeof out);
+ result = -1;
+ }
+
+ return result;
+}
+
+/* HChaCha */
+
+static void
+hchacha(uint8_t out[restrict static 32], const uint8_t in[static 16],
+ const uint8_t k[static 32], const uint8_t c[static 16], unsigned nr)
+{
+ uint8_t t[64];
+
+ chacha_core(t, in, k, c, nr);
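+
+	/*
+	 * HChaCha output is words 0-3 and 12-15 of the permuted state
+	 * without the feedforward; chacha_core added the input words in,
+	 * so subtract them back out.
+	 */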
+ le32enc(out + 0, le32dec(t + 0) - le32dec(c + 0));
+ le32enc(out + 4, le32dec(t + 4) - le32dec(c + 4));
+ le32enc(out + 8, le32dec(t + 8) - le32dec(c + 8));
+ le32enc(out + 12, le32dec(t + 12) - le32dec(c + 12));
+ le32enc(out + 16, le32dec(t + 48) - le32dec(in + 0));
+ le32enc(out + 20, le32dec(t + 52) - le32dec(in + 4));
+ le32enc(out + 24, le32dec(t + 56) - le32dec(in + 8));
+ le32enc(out + 28, le32dec(t + 60) - le32dec(in + 12));
+}
+
+static int
+hchacha_selftest(void)
+{
+ /* https://tools.ietf.org/html/draft-irtf-cfrg-xchacha-03, §2.2.1 */
+ static const uint8_t k[32] = {
+ 0x00,0x01,0x02,0x03, 0x04,0x05,0x06,0x07,
+ 0x08,0x09,0x0a,0x0b, 0x0c,0x0d,0x0e,0x0f,
+ 0x10,0x11,0x12,0x13, 0x14,0x15,0x16,0x17,
+ 0x18,0x19,0x1a,0x1b, 0x1c,0x1d,0x1e,0x1f,
+ };
+ static const uint8_t in[16] = {
+ 0x00,0x00,0x00,0x09, 0x00,0x00,0x00,0x4a,
+ 0x00,0x00,0x00,0x00, 0x31,0x41,0x59,0x27,
+ };
+ static const uint8_t expected[32] = {
+ 0x82,0x41,0x3b,0x42, 0x27,0xb2,0x7b,0xfe,
+ 0xd3,0x0e,0x42,0x50, 0x8a,0x87,0x7d,0x73,
+ 0xa0,0xf9,0xe4,0xd5, 0x8a,0x74,0xa8,0x53,
+ 0xc1,0x2e,0xc4,0x13, 0x26,0xd3,0xec,0xdc,
+ };
+ uint8_t out[32];
+ int result = 0;
+
+ hchacha(out, in, k, chacha_const32, 20);
+ if (memcmp(out, expected, 32)) {
+ hexdump(printf, "hchacha", out, sizeof out);
+ result = -1;
+ }
+
+ return result;
+}
+
+/* XChaCha */
+
+static void
+xchacha_xor(uint8_t *c, const uint8_t *p, size_t nbytes,
+ const uint8_t nonce[static 24], const uint8_t k[static 32], unsigned nr)
+{
+ uint8_t h[32];
+ uint8_t in[16];
+ uint8_t block[64];
+ unsigned i;
+
+ hchacha(h, nonce, k, chacha_const32, nr);
+ memset(in, 0, 8);
+ memcpy(in + 8, nonce + 16, 8);
+
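+	/* in := le32(block counter) || 0^4 || nonce[16..24), counter from 0 */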
+ for (; nbytes; nbytes -= i, c += i, p += i) {
+ chacha_core(block, in, h, chacha_const32, nr);
+ for (i = 0; i < MIN(nbytes, 64); i++)
+ c[i] = p[i] ^ block[i];
+ le32enc(in, 1 + le32dec(in));
+ }
+}
+
+static int
+xchacha_selftest(void)
+{
+ /* https://tools.ietf.org/html/draft-irtf-cfrg-xchacha-03, A.2.2 */
+ static const uint8_t k[32] = {
+ 0x80,0x81,0x82,0x83, 0x84,0x85,0x86,0x87,
+ 0x88,0x89,0x8a,0x8b, 0x8c,0x8d,0x8e,0x8f,
+ 0x90,0x91,0x92,0x93, 0x94,0x95,0x96,0x97,
+ 0x98,0x99,0x9a,0x9b, 0x9c,0x9d,0x9e,0x9f,
+ };
+ static const uint8_t nonce[24] = {
+ 0x40,0x41,0x42,0x43, 0x44,0x45,0x46,0x47,
+ 0x48,0x49,0x4a,0x4b, 0x4c,0x4d,0x4e,0x4f,
+ 0x50,0x51,0x52,0x53, 0x54,0x55,0x56,0x58,
+ };
+ static const uint8_t p[128] = {
+ 0x54,0x68,0x65,0x20, 0x64,0x68,0x6f,0x6c,
+ 0x65,0x20,0x28,0x70, 0x72,0x6f,0x6e,0x6f,
+ 0x75,0x6e,0x63,0x65, 0x64,0x20,0x22,0x64,
+ 0x6f,0x6c,0x65,0x22, 0x29,0x20,0x69,0x73,
+ 0x20,0x61,0x6c,0x73, 0x6f,0x20,0x6b,0x6e,
+ 0x6f,0x77,0x6e,0x20, 0x61,0x73,0x20,0x74,
+ 0x68,0x65,0x20,0x41, 0x73,0x69,0x61,0x74,
+ 0x69,0x63,0x20,0x77, 0x69,0x6c,0x64,0x20,
+ 0x64,0x6f,0x67,0x2c, 0x20,0x72,0x65,0x64,
+ 0x20,0x64,0x6f,0x67, 0x2c,0x20,0x61,0x6e,
+ 0x64,0x20,0x77,0x68, 0x69,0x73,0x74,0x6c,
+ 0x69,0x6e,0x67,0x20, 0x64,0x6f,0x67,0x2e,
+ 0x20,0x49,0x74,0x20, 0x69,0x73,0x20,0x61,
+ 0x62,0x6f,0x75,0x74, 0x20,0x74,0x68,0x65,
+ 0x20,0x73,0x69,0x7a, 0x65,0x20,0x6f,0x66,
+ 0x20,0x61,0x20,0x47, 0x65,0x72,0x6d,0x61,
+ };
+ static const uint8_t expected[128] = {
+ 0x45,0x59,0xab,0xba, 0x4e,0x48,0xc1,0x61,
+ 0x02,0xe8,0xbb,0x2c, 0x05,0xe6,0x94,0x7f,
+ 0x50,0xa7,0x86,0xde, 0x16,0x2f,0x9b,0x0b,
+ 0x7e,0x59,0x2a,0x9b, 0x53,0xd0,0xd4,0xe9,
+ 0x8d,0x8d,0x64,0x10, 0xd5,0x40,0xa1,0xa6,
+ 0x37,0x5b,0x26,0xd8, 0x0d,0xac,0xe4,0xfa,
+ 0xb5,0x23,0x84,0xc7, 0x31,0xac,0xbf,0x16,
+ 0xa5,0x92,0x3c,0x0c, 0x48,0xd3,0x57,0x5d,
+ 0x4d,0x0d,0x2c,0x67, 0x3b,0x66,0x6f,0xaa,
+ 0x73,0x10,0x61,0x27, 0x77,0x01,0x09,0x3a,
+ 0x6b,0xf7,0xa1,0x58, 0xa8,0x86,0x42,0x92,
+ 0xa4,0x1c,0x48,0xe3, 0xa9,0xb4,0xc0,0xda,
+ 0xec,0xe0,0xf8,0xd9, 0x8d,0x0d,0x7e,0x05,
+ 0xb3,0x7a,0x30,0x7b, 0xbb,0x66,0x33,0x31,
+ 0x64,0xec,0x9e,0x1b, 0x24,0xea,0x0d,0x6c,
+ 0x3f,0xfd,0xdc,0xec, 0x4f,0x68,0xe7,0x44,
+ };
+ uint8_t c[128];
+ int result = 0;
+
+ xchacha_xor(c, p, 128, nonce, k, 20);
+ if (memcmp(c, expected, 128)) {
+ hexdump(printf, "xchacha", c, sizeof c);
+ result = -1;
+ }
+
+ return result;
+}
+
+void
+adiantum_init(struct adiantum *A, const uint8_t key[static 32])
+{
+ uint8_t nonce[24] = {1};
+ unsigned i;
+
+ memcpy(A->ks, key, 32);
+
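+	/*
+	 * Derive the subkeys K_K, K_T, K_L, and K_N by taking 32 + 16 +
+	 * 16 + 1072 bytes of XChaCha12 keystream under K_S with the
+	 * fixed 24-byte nonce 1 || 0^23.
+	 */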
+ /* Relies on ordering of struct members. */
+ memset(A->kk, 0, 32 + 16 + 16 + 1072);
+ xchacha_xor(A->kk, A->kk, 32 + 16 + 16 + 1072, nonce, A->ks, 12);
+
+ /* Put the NH key words into host byte order. */
+ for (i = 0; i < __arraycount(A->kn); i++)
+ A->kn[i] = le32toh(A->kn[i]);
+
+ /* Expand the AES key. */
+ aes_setenckey256(&A->kk_enc, A->kk);
+ aes_setdeckey256(&A->kk_dec, A->kk);
+}
+
+static void
+adiantum_hash(uint8_t h[static 16], const void *l, size_t llen,
+ const void *t, size_t tlen,
+ const uint8_t kt[static 16],
+ const uint8_t kl[static 16],
+ const uint32_t kn[static 268])
+{
+ const uint8_t *t8 = t;
+ struct poly1305 P;
+ uint8_t llenbuf[16];
+ uint8_t ht[16];
+ uint8_t hl[16];
+
+ KASSERT(llen % 16 == 0);
+
+ memset(llenbuf, 0, sizeof llenbuf);
+ le64enc(llenbuf, 8*llen);
+
+ /* Compute H_T := Poly1305_{K_T}(le128(|l|) || tweak). */
+ poly1305_init(&P, kt);
+ if (tlen == 0) {
+ poly1305_update_last(&P, llenbuf, 16);
+ } else {
+ poly1305_update_block(&P, llenbuf);
+ for (; tlen > 16; t8 += 16, tlen -= 16)
+ poly1305_update_block(&P, t8);
+ poly1305_update_last(&P, t8, tlen);
+ }
+ poly1305_final(ht, &P);
+
+ /* Compute H_L := Poly1305_{K_L}(NH(pad_128(l))). */
+ nhpoly1305(hl, l, llen, kl, kn);
+
+ /* Compute H := H_T + H_L (mod 2^128). */
+ add128(h, ht, hl);
+}
+
+void
+adiantum_enc(void *c, const void *p, size_t len, const void *t, size_t tlen,
+ const struct adiantum *A)
+{
+ size_t Rlen = 16;
+ size_t Llen = len - Rlen;
+ uint8_t *c8 = c;
+ uint8_t *cL = c8;
+ uint8_t *cR = c8 + Llen;
+ const uint8_t *p8 = p;
+ const uint8_t *pL = p8;
+ const uint8_t *pR = p8 + Llen;
+ uint8_t h[16];
+ uint8_t buf[16] __aligned(16);
+ uint8_t nonce[24];
+
+ KASSERT(len % 16 == 0);
+
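+	/*
+	 * Adiantum encryption of P = P_L || P_R with tweak T:
+	 *
+	 *	P_M := P_R + H(T, P_L)		(mod 2^128)
+	 *	C_M := AES-256_{K_K}(P_M)
+	 *	C_L := P_L ^ XChaCha12_{K_S}(C_M || le64(1))
+	 *	C_R := C_M - H(T, C_L)		(mod 2^128)
+	 */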
+ adiantum_hash(h, pL, Llen, t, tlen, A->kt, A->kl, A->kn);
+ add128(buf, pR, h); /* buf := P_M */
+ aes_enc(&A->kk_enc, buf, buf, AES_256_NROUNDS); /* buf := C_M */
+
+ memcpy(nonce, buf, 16);
+ le64enc(nonce + 16, 1);
+ xchacha_xor(cL, pL, Llen, nonce, A->ks, 12);
+
+ adiantum_hash(h, cL, Llen, t, tlen, A->kt, A->kl, A->kn);
+ sub128(cR, buf, h);
+
+ explicit_memset(h, 0, sizeof h);
+ explicit_memset(buf, 0, sizeof buf);
+}
+
+void
+adiantum_dec(void *p, const void *c, size_t len, const void *t, size_t tlen,
+ const struct adiantum *A)
+{
+ size_t Rlen = 16;
+ size_t Llen = len - Rlen;
+ const uint8_t *c8 = c;
+ const uint8_t *cL = c8;
+ const uint8_t *cR = c8 + Llen;
+ uint8_t *p8 = p;
+ uint8_t *pL = p8;
+ uint8_t *pR = p8 + Llen;
+ uint8_t h[16];
+ uint8_t buf[16] __aligned(16);
+ uint8_t nonce[24];
+
+ KASSERT(len % 16 == 0);
+
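+	/*
+	 * Invert encryption step by step:
+	 *
+	 *	C_M := C_R + H(T, C_L)		(mod 2^128)
+	 *	P_L := C_L ^ XChaCha12_{K_S}(C_M || le64(1))
+	 *	P_M := AES-256^-1_{K_K}(C_M)
+	 *	P_R := P_M - H(T, P_L)		(mod 2^128)
+	 */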
+ adiantum_hash(h, cL, Llen, t, tlen, A->kt, A->kl, A->kn);
+	add128(buf, cR, h);		/* buf := C_M */
+
+ memcpy(nonce, buf, 16);
+ le64enc(nonce + 16, 1);
+ xchacha_xor(pL, cL, Llen, nonce, A->ks, 12);
+
+ aes_dec(&A->kk_dec, buf, buf, AES_256_NROUNDS); /* buf := P_M */
+ adiantum_hash(h, pL, Llen, t, tlen, A->kt, A->kl, A->kn);
+ sub128(pR, buf, h);
+
+ explicit_memset(h, 0, sizeof h);
+ explicit_memset(buf, 0, sizeof buf);
+}
+
+#ifdef _KERNEL
+
+MODULE(MODULE_CLASS_MISC, adiantum, "aes");
+
+static int
+adiantum_modcmd(modcmd_t cmd, void *opaque)
+{
+
+ switch (cmd) {
+ case MODULE_CMD_INIT: {
+ int result = 0;
+ result |= addsub128_selftest();
+ result |= poly1305_selftest();
+ result |= nh_selftest();
+ result |= nhpoly1305_selftest();
+ result |= chacha_core_selftest();
+ result |= hchacha_selftest();
+ result |= xchacha_selftest();
+ result |= adiantum_selftest();
+ if (result)
+ panic("adiantum self-tests failed");
+ return 0;
+ }
+ case MODULE_CMD_FINI:
+ return 0;
+ default:
+ return ENOTTY;
+ }
+}
+
+#else /* !defined(_KERNEL) */
+
+#include <err.h>
+#include <stdio.h>
+#include <unistd.h>
+
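+/*
+ * Read exactly len bytes into buf.  Return 0 on success, -1 on EOF at
+ * a block boundary; abort on error or on a short final block.
+ */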
+static int
+read_block(int fd, void *buf, size_t len)
+{
+ char *p = buf;
+ size_t n = len;
+ ssize_t nread;
+
+ for (;;) {
+ if ((nread = read(fd, p, n)) == -1)
+ err(1, "read");
+ if (nread == 0) {
+ if (n < len)
+ errx(1, "partial block");
+ return -1; /* eof */
+ }
+ if ((size_t)nread >= n)
+ break;
+ p += (size_t)nread;
+ n -= (size_t)nread;
+ }
+
+ return 0;
+}
+
+static void
+write_block(int fd, const void *buf, size_t len)
+{
+ const char *p = buf;
+ size_t n = len;
+ ssize_t nwrit;
+
+ for (;;) {
+ if ((nwrit = write(fd, p, n)) == -1)
+ err(1, "write");
+ if ((size_t)nwrit >= n)
+ break;
+ p += (size_t)nwrit;
+ n -= (size_t)nwrit;
+ }
+}
+
+#define SECSIZE 512
+
+static void
+process(void)
+{
+ static const uint8_t k[32] = {0};
+ static uint8_t buf[65536];
+ static struct adiantum C;
+ uint8_t blkno[8] = {0};
+ unsigned i;
+
+ adiantum_init(&C, k);
+ while (read_block(STDIN_FILENO, buf, sizeof buf) == 0) {
+ for (i = 0; i < sizeof buf; i += SECSIZE) {
+ adiantum_enc(buf + i, buf + i, SECSIZE, blkno, 8, &C);
+			le64enc(blkno, 1 + le64dec(blkno));
+ }
+ write_block(STDOUT_FILENO, buf, sizeof buf);
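+		/* Stop after 1 GiB of output. */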
+ if (le64dec(blkno) == 1024*1024*1024/SECSIZE)
+ return;
+ }
+}
+
+int
+main(void)
+{
+ int result = 0;
+
+ result |= addsub128_selftest();
+ result |= poly1305_selftest();
+ result |= nh_selftest();
+ result |= nhpoly1305_selftest();
+ result |= chacha_core_selftest();
+ result |= hchacha_selftest();
+ result |= xchacha_selftest();
+ result |= adiantum_selftest();
+ if (result)
+ return result;
+
+ process();
+ return 0;
+}
+
+#endif /* _KERNEL */
diff -r 36794fee0d04 -r 9fde04e138c1 sys/crypto/adiantum/adiantum_selftest.c
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/adiantum/adiantum_selftest.c Wed Jun 17 02:47:43 2020 +0000
@@ -0,0 +1,1835 @@
+/* $NetBSD$ */
+
+/*-
+ * Copyright (c) 2020 The NetBSD Foundation, Inc.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
+ * ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
+ * TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <sys/cdefs.h>
+__KERNEL_RCSID(1, "$NetBSD$");
+
+#include <sys/types.h>
+
+#ifdef _KERNEL
+
+#include <sys/systm.h>
+
+#include <lib/libkern/libkern.h>
+
+#else
+
+#include <string.h>
+#include <stdio.h>
+
+#include <openssl/aes.h>
+
+struct aesenc {
+ AES_KEY enckey;
+};
+
+struct aesdec {
+ AES_KEY deckey;
+};
+
+static void
+hexdump(int (*prf)(const char *, ...) __printflike(1,2), const char *prefix,
+ const void *buf, size_t len)
+{
+ const uint8_t *p = buf;
+ size_t i;
+
+ (*prf)("%s (%zu bytes)\n", prefix, len);
+ for (i = 0; i < len; i++) {
+		if (i % 16 == 8)
+			(*prf)("  ");
+		else
+			(*prf)(" ");
+ (*prf)("%02hhx", p[i]);
+ if ((i + 1) % 16 == 0)
+ (*prf)("\n");
+ }
+ if (i % 16)
+ (*prf)("\n");
+}
+
+#endif
+
+#include "adiantum.h"
+
+/* https://github.com/google/adiantum/blob/aab35db7bfb6e05d5ad0b41b5088a9f5a840bde3/test_vectors/ours/Adiantum/Adiantum_XChaCha12_32_AES256.json */
+
+int
+adiantum_selftest(void)
+{
+ static const struct {
+ uint8_t k[32];
+ unsigned tlen;
+ uint8_t t[64];
+ unsigned len;
+ uint8_t p[4096];
+ uint8_t c[4096];
+ } C[] = {
+ [0] = {
+ .k = {
+ 0x7f,0xc7,0x15,0x2a, 0xe1,0xf5,0xfd,0xa4,
+ 0x17,0x67,0x69,0xae, 0xc9,0x2b,0xba,0x82,
+ 0xa3,0x14,0xe7,0xcf, 0xad,0xfd,0x85,0x40,
+ 0xda,0x7b,0x7d,0x24, 0xbd,0xf1,0x7d,0x07,
+ },
+ .tlen = 0,
+ .len = 16,
+ .p = {
+ 0x9b,0xe3,0x82,0xc6, 0x5a,0xc1,0x9f,0xad,
+ 0x46,0x59,0xb8,0x0b, 0xac,0xc8,0x57,0xa0,
+ },
+ .c = {
+ 0x82,0x0a,0xe4,0x44, 0x77,0xdd,0x9a,0x18,
+ 0x6f,0x80,0x28,0x8b, 0x25,0x07,0x0e,0x85,
+ },
+ },
+ [1] = {
+ .k = {
+ 0x26,0x6a,0xf9,0x4a, 0x21,0x49,0x6b,0x4e,
+ 0x3e,0xff,0x43,0x46, 0x9c,0xc1,0xfa,0x72,
+ 0x0e,0x77,0x9a,0xd5, 0x37,0x47,0x00,0x38,
+ 0xb3,0x6f,0x58,0x6c, 0xde,0xc0,0xa6,0x74,
+ },
+ .tlen = 0,
+ .len = 128,
+ .p = {
+ 0xdd,0x07,0xfe,0x61, 0x97,0x0c,0x31,0x48,
+ 0x09,0xbf,0xdb,0x9b, 0x4b,0x7d,0x9c,0x80,
+ 0xe6,0x11,0xe5,0x76, 0x5b,0xcc,0x76,0xdf,
+ 0x34,0xd5,0x23,0xcd, 0xe1,0xdc,0x4e,0x4f,
+ 0x65,0x20,0x58,0x8e, 0xe8,0x2c,0xc2,0x64,
+ 0x32,0x83,0x7a,0xbf, 0xe1,0xca,0x0b,0x4b,
+ 0xc6,0xec,0x0d,0xc5, 0x4a,0xb6,0x9b,0xa5,
+ 0xc4,0x01,0x54,0xf5, 0xb5,0xfa,0x8f,0x58,
+ 0x45,0x72,0x28,0xd8, 0x55,0x21,0xa2,0x5c,
+ 0x7d,0xc8,0x0c,0x3c, 0x3c,0x99,0xc4,0x1a,
+ 0xc2,0xe7,0x1c,0x0c, 0x14,0x72,0x1d,0xf8,
+ 0x45,0xb7,0x9c,0x97, 0x07,0x04,0x9b,0x91,
+ 0x5e,0x95,0xef,0x5f, 0xe6,0xad,0xbd,0xbb,
+ 0xe7,0xd1,0x22,0xc3, 0x98,0x44,0x89,0x05,
+ 0xe8,0x63,0x0d,0x44, 0xcb,0x36,0xd5,0x43,
+ 0xcc,0x05,0x7c,0x31, 0xd3,0xbc,0x17,0x7f,
+ },
+ .c = {
+ 0xba,0xd3,0xbf,0xbf, 0xb2,0x4e,0x1a,0xfd,
+ 0x59,0xbe,0x9d,0x40, 0xe0,0x27,0x94,0xdd,
+ 0x5c,0x08,0x1c,0xa5, 0xd0,0x25,0x87,0xca,
+ 0x15,0x6a,0x35,0xe9, 0x8a,0x05,0x67,0x53,
+ 0x04,0x4d,0xdf,0x35, 0x07,0x19,0x25,0xa0,
+ 0x44,0x1a,0x5b,0xd6, 0x8b,0x0f,0xd3,0x36,
+ 0x8a,0x60,0x8c,0x6b, 0x53,0xdb,0x69,0xb0,
+ 0x37,0x69,0xb5,0x1b, 0x1f,0xf5,0xd5,0xab,
+ 0x47,0x3a,0x45,0xb2, 0x37,0x6c,0xc3,0xc1,
+ 0x1f,0xdb,0x74,0x6b, 0x1f,0x3b,0x2c,0x1a,
+ 0xee,0xff,0xe9,0x28, 0xfe,0xa3,0x49,0x96,
+ 0x7a,0xb3,0x68,0x4e, 0xb1,0xc4,0x85,0xdc,
+ 0x18,0x87,0xfd,0xbf, 0x84,0x39,0xb2,0x20,
+ 0x29,0x46,0x8a,0x3e, 0xa9,0xf9,0xcc,0x56,
+ 0x6b,0x2f,0x43,0x4a, 0x1b,0x48,0x6b,0xd6,
+ 0x03,0x1d,0x66,0xa1, 0x49,0xba,0xe9,0xf5,
+ },
+ },
+ [2] = {
+ .k = {
+ 0x7c,0xab,0xc4,0x63, 0xc0,0x40,0x5e,0xad,
+ 0x8f,0x02,0x5a,0xa9, 0xba,0x68,0x58,0xe3,
+ 0xb6,0xbb,0x03,0xc9, 0xe6,0x1e,0xe7,0xc3,
+ 0xd7,0x2c,0xf7,0x7a, 0xf7,0x2c,0xd1,0x07,
+ },
+ .tlen = 0,
+ .len = 512,
+ .p = {
+ 0x4f,0xc9,0x8f,0xa7, 0x81,0x81,0x3a,0xb7,
+ 0x3c,0x55,0x8f,0x8f, 0x18,0xc4,0x7a,0xd2,
+ 0x13,0x70,0x94,0x0f, 0x46,0xb2,0x0f,0x53,
+ 0xde,0xdf,0x06,0xf8, 0x60,0x34,0xad,0x39,
+ 0xe9,0x47,0x23,0x31, 0x94,0xf3,0x59,0x88,
+ 0x96,0x14,0x52,0x3b, 0x88,0xb7,0x55,0xe9,
+ 0x4a,0xbc,0x41,0xea, 0x24,0x03,0x35,0x78,
+ 0xb7,0x4b,0x9f,0x8b, 0xe4,0x36,0x77,0x0a,
+ 0x70,0x19,0x90,0x9b, 0xb1,0x70,0x27,0x23,
+ 0x31,0xd9,0xe5,0x26, 0x36,0x71,0x06,0xc7,
+ 0xd3,0xb1,0xb8,0x52, 0x6a,0xe1,0x95,0x86,
+ 0x76,0xc3,0x02,0x2c, 0xd2,0xe7,0xc2,0x1c,
+ 0x6f,0xcb,0x61,0x56, 0xfc,0x5e,0xf2,0x57,
+ 0x90,0x46,0xfb,0x6a, 0xc1,0x5e,0x56,0x5b,
+ 0x18,0x8d,0x0e,0x4f, 0x4e,0x14,0x4c,0x6d,
+ 0x97,0xf9,0x73,0xed, 0xc5,0x41,0x94,0x24,
+ 0xaa,0x35,0x2f,0x01, 0xef,0x8f,0xb2,0xfd,
+ 0xc2,0xc7,0x8b,0x9c, 0x9b,0x10,0x89,0xec,
+ 0x64,0xbb,0x54,0xa5, 0x01,0xdc,0x51,0x57,
+ 0xc8,0x5a,0x03,0xcb, 0x91,0x73,0xb2,0x08,
+ 0xc3,0xcc,0x3c,0x1b, 0xae,0x3e,0x0f,0xf3,
+ 0x93,0xb9,0xc3,0x27, 0xd7,0x88,0x66,0xa2,
+ 0x40,0xf9,0xfd,0x02, 0x61,0xe1,0x2b,0x5d,
+ 0xc9,0xe8,0xd6,0xac, 0xf0,0xd0,0xe3,0x79,
+ 0x94,0xff,0x50,0x09, 0x4e,0x68,0xe8,0x5e,
+ 0x3f,0x58,0xc8,0xb8, 0x0f,0xd7,0xc2,0x2d,
+ 0x91,0x3e,0x47,0x10, 0x50,0x98,0xa6,0xf9,
+ 0x37,0xd6,0x90,0xed, 0xb7,0x5e,0x3a,0xd0,
+ 0xd7,0x50,0xc4,0x69, 0xe6,0x29,0xb8,0x9a,
+ 0xc1,0x5c,0x2b,0x34, 0x6d,0x44,0x58,0xd6,
+ 0xd4,0x7e,0xe2,0x42, 0x67,0x45,0xe5,0x64,
+ 0x48,0xac,0x00,0xe9, 0xb6,0xd0,0xc3,0xc5,
+ 0x5d,0x9e,0x95,0x4e, 0x10,0x18,0x29,0x86,
+ 0xaa,0x37,0xa3,0x3c, 0xe1,0xd6,0x5d,0x6d,
+ 0x4a,0xca,0xc3,0xe2, 0x25,0xb7,0x49,0x4a,
+ 0x36,0x67,0xc0,0xe1, 0x02,0x45,0xcc,0xd4,
+ 0x11,0x37,0x11,0x8e, 0x54,0xf5,0xea,0x80,
+ 0x04,0x72,0x06,0x36, 0x8f,0xf9,0x1e,0xed,
+ 0x91,0x14,0x9d,0x42, 0x59,0xc1,0x87,0xb8,
+ 0xf1,0xce,0xb2,0x17, 0x42,0xa1,0x2f,0x96,
+ 0xa3,0x50,0xe9,0x01, 0x24,0x9e,0xe5,0xbb,
+ 0x97,0x83,0x31,0x12, 0xa8,0x7c,0xca,0x7b,
+ 0x90,0x33,0xad,0x1c, 0x99,0x81,0x1a,0xb8,
+ 0xa1,0xe0,0xf1,0x5a, 0xbc,0x08,0xde,0xab,
+ 0x69,0x0a,0x89,0xa0, 0x9f,0x02,0x5e,0x3a,
+ 0xf3,0xba,0xb9,0x6e, 0x34,0xdf,0x15,0x13,
+ 0x64,0x51,0xa9,0x55, 0x67,0xa3,0xba,0x6b,
+ 0x35,0xb0,0x8a,0x05, 0xf5,0x79,0x84,0x97,
+ 0x92,0x8e,0x11,0xeb, 0xef,0xec,0x65,0xb5,
+ 0xe6,0x42,0xfb,0x06, 0x33,0x93,0x6b,0xff,
+ 0xc2,0x49,0x15,0x71, 0xb0,0xca,0x62,0xd1,
+ 0x81,0x40,0xd2,0xab, 0x0b,0x7d,0x7e,0x1a,
+ 0xe9,0xec,0xfc,0xde, 0xdb,0xd5,0xa7,0x56,
+ 0x83,0x25,0x0e,0x5e, 0xac,0x0c,0x42,0x26,
+ 0x00,0x59,0x55,0x17, 0x8b,0x5a,0x03,0x7b,
+ 0x85,0xe9,0xc1,0xa3, 0xe4,0xeb,0xd3,0xde,
+ 0xd8,0x81,0xf5,0x31, 0x2c,0xda,0x21,0xbc,
+ 0xb5,0xd9,0x7a,0xd0, 0x1e,0x2a,0x6b,0xcf,
+ 0xad,0x06,0x3c,0xf2, 0xf7,0x5c,0x3a,0xf1,
+ 0xa7,0x0f,0x5f,0x53, 0xe9,0x3f,0x3c,0xf1,
+ 0xb7,0x47,0x53,0x16, 0x19,0xd9,0xef,0xf0,
+ 0xcb,0x16,0xe4,0xc9, 0xa3,0x8f,0xd6,0x3f,
+ 0xf8,0xb2,0x22,0x65, 0xf9,0xa1,0xa3,0x03,
+ 0xe4,0x06,0x75,0x69, 0xf5,0x32,0x48,0x80,
+ },
+ .c = {
+ 0x66,0x3f,0xf7,0x7a, 0x20,0xa4,0x35,0xd6,
+ 0x0e,0xe8,0x17,0x32, 0x84,0xae,0xee,0x18,
+ 0x0f,0x64,0x83,0x66, 0xa4,0xf4,0x24,0x53,
+ 0xe6,0x58,0x2e,0xd5, 0x61,0x58,0xdd,0x5f,
+ 0x1d,0xb9,0xba,0x34, 0xd0,0xd3,0x64,0xde,
+ 0x99,0x47,0x92,0x3a, 0x26,0x90,0xbb,0x98,
+ 0xb0,0xbd,0xf4,0x5e, 0x26,0x57,0xe0,0xe1,
+ 0x09,0x27,0xc1,0xc4, 0x86,0x2b,0x4b,0x48,
+ 0xbb,0xcd,0xec,0x2f, 0xd1,0x54,0xe9,0x21,
+ 0xa0,0x40,0x76,0x01, 0x2d,0xb1,0xe7,0x75,
+ 0xa1,0xd7,0x04,0x23, 0x9d,0xd3,0x0f,0x3b,
+ 0x7e,0xb8,0xd0,0x37, 0xe4,0xd9,0x48,0xaa,
+ 0xe1,0x4d,0x0f,0xf6, 0xae,0x29,0x20,0xae,
+ 0xda,0x35,0x18,0x97, 0x2c,0xc2,0xa9,0xdd,
+ 0x6e,0x50,0x73,0x52, 0x0a,0x8a,0x2a,0xd2,
+ 0x2a,0xf4,0x12,0xe9, 0x7d,0x88,0x37,0xae,
+ 0x12,0x81,0x92,0x96, 0xbe,0xea,0x15,0xa4,
+ 0x3c,0x53,0xad,0x1f, 0x75,0x54,0x24,0x81,
+ 0xaa,0x1b,0x92,0x84, 0x7c,0xb2,0xd7,0x10,
+ 0x5e,0xb6,0xab,0x83, 0x25,0xf7,0x03,0x2b,
+ 0xd9,0x53,0x4d,0xf9, 0x41,0x21,0xef,0xef,
+ 0x40,0x3a,0x2d,0x54, 0xa9,0xf0,0x72,0xff,
+ 0x03,0x59,0x2e,0x91, 0x07,0xff,0xe2,0x86,
+ 0x33,0x59,0x98,0xdf, 0xa4,0x7d,0x9e,0x52,
+ 0x95,0xd9,0x77,0x4b, 0xdf,0x93,0xc8,0x2d,
+ 0xbc,0x81,0x2b,0x77, 0x89,0xae,0x52,0xdc,
+ 0xfc,0xb7,0x22,0xf0, 0x1a,0x9d,0xc1,0x28,
+ 0x70,0xe2,0x15,0xe4, 0x77,0x11,0x49,0x09,
+ 0x89,0xf4,0x06,0x00, 0x64,0x78,0xb6,0x3f,
+ 0x63,0x36,0xfd,0x9f, 0x35,0x33,0x85,0x52,
+ 0x18,0x26,0xc1,0x0d, 0xf7,0xab,0x5a,0x06,
+ 0x9c,0x3a,0xab,0x5f, 0x81,0x36,0x39,0xe3,
+ 0xe6,0xf7,0x33,0xb0, 0xec,0xe6,0x8d,0x05,
+ 0xbd,0xc7,0xbd,0x20, 0x5f,0x74,0xdf,0x98,
+ 0x3a,0xa9,0xde,0xae, 0x89,0xee,0xcc,0x60,
+ 0x8b,0x23,0xed,0x0f, 0x55,0x4d,0x56,0xd2,
+ 0x69,0xa5,0xf8,0xff, 0x94,0x62,0x99,0xc6,
+ 0xd4,0x02,0x0b,0xcf, 0xe4,0x86,0x23,0x5e,
+ 0xed,0x12,0x12,0x2e, 0x0a,0x0f,0xda,0x12,
+ 0x0a,0x68,0x56,0xea, 0x16,0x92,0xa5,0xdb,
+ 0xf5,0x9d,0x0e,0xe6, 0x39,0x5d,0x76,0x50,
+ 0x41,0x85,0xb4,0xcc, 0xb3,0x9e,0x84,0x46,
+ 0xd3,0x93,0xcf,0xa1, 0xee,0x5b,0x51,0x94,
+ 0x05,0x46,0x16,0xbb, 0xd1,0xae,0x94,0xe4,
+ 0x1c,0x3d,0xeb,0xf4, 0x09,0x00,0xf7,0x86,
+ 0x57,0x60,0x49,0x94, 0xf5,0xa7,0x7e,0x4b,
+ 0x32,0x4a,0x6a,0xae, 0x2c,0x5f,0x30,0x2d,
+ 0x7c,0xa1,0x71,0x5e, 0x63,0x7a,0x70,0x56,
+ 0x1f,0xaf,0x3e,0xf3, 0x46,0xb5,0x68,0x61,
+ 0xe2,0xd4,0x16,0x6b, 0xaf,0x94,0x07,0xa9,
+ 0x5d,0x7a,0xee,0x4c, 0xad,0x85,0xcc,0x3e,
+ 0x99,0xf3,0xfa,0x21, 0xab,0x9d,0x12,0xdf,
+ 0x33,0x32,0x23,0x68, 0x96,0x8f,0x8f,0x78,
+ 0xb3,0x63,0xa0,0x83, 0x16,0x06,0x64,0xbd,
+ 0xea,0x1f,0x69,0x73, 0x9c,0x54,0xe1,0x60,
+ 0xe8,0x98,0xc9,0x94, 0xe9,0xdf,0x0c,0xee,
+ 0xf4,0x38,0x1e,0x9f, 0x26,0xda,0x3f,0x4c,
+ 0xfd,0x6d,0xf5,0xee, 0x75,0x91,0x7c,0x4f,
+ 0x4d,0xc2,0xe8,0x1a, 0x7b,0x1b,0xa9,0x52,
+ 0x1e,0x24,0x22,0x5a, 0x73,0xa5,0x10,0xa2,
+ 0x37,0x39,0x1e,0xd2, 0xf7,0xe0,0xab,0x77,
+ 0xb7,0x93,0x5d,0x30, 0xd2,0x5a,0x33,0xf4,
+ 0x63,0x98,0xe8,0x6d, 0x3f,0x34,0x4a,0xb9,
+ 0x44,0x57,0x39,0xe7, 0xa9,0xdd,0xac,0x91,
+ },
+ },
+ [3] = {
+ .k = {
+ 0xac,0x95,0xec,0x00, 0xa5,0x57,0x8e,0x99,
+ 0x14,0x54,0x95,0x60, 0xdc,0xae,0x56,0x66,
+ 0x03,0x22,0xa1,0x55, 0xbf,0xa5,0x2b,0x1c,
+ 0x02,0xc9,0x0c,0x2f, 0xa1,0x5d,0x1b,0x84,
+ },
+ .tlen = 0,
+ .len = 1536,
+ .p = {
+ 0xd2,0x80,0x06,0x95, 0xcd,0xe1,0x71,0x2c,
+ 0xcf,0x89,0xa6,0xc7, 0x8b,0xa7,0xe3,0xcb,
+ 0x66,0x3e,0x6b,0x58, 0x2a,0x20,0xd1,0xc4,
+ 0x07,0xd6,0x3b,0x03, 0xdc,0x26,0xda,0x1b,
+ 0xe0,0x51,0xd5,0x1c, 0x4c,0xed,0xd0,0xf5,
+ 0xe2,0x7f,0x89,0xe8, 0x3d,0x41,0x1a,0xa0,
+ 0xb1,0xed,0x61,0xa8, 0xc7,0x0a,0xe8,0x69,
+ 0x4d,0xb8,0x18,0x81, 0x6c,0x76,0x67,0x83,
+ 0x8a,0x47,0xa2,0x4b, 0xfb,0xfd,0x6f,0x65,
+ 0x88,0xa8,0xf6,0x6d, 0x9f,0x71,0x6e,0x33,
+ 0x4f,0x82,0xee,0x8f, 0x38,0x5c,0xe4,0x9b,
+ 0x45,0x29,0xca,0xda, 0x9b,0x5d,0x65,0x06,
+ 0xab,0xf5,0x86,0x28, 0x8c,0x3e,0x20,0x38,
+ 0x1a,0x4c,0xb2,0xd9, 0x1f,0xc0,0x10,0x59,
+ 0x6b,0x2c,0xb5,0x41, 0x41,0xc5,0xd9,0xb7,
+ 0x4f,0xc3,0x36,0x08, 0xd4,0xdc,0xff,0x57,
+ 0xd7,0x97,0x77,0x45, 0xc4,0x28,0x93,0x2c,
+ 0xbe,0xdc,0xae,0x1d, 0x18,0xc8,0xfa,0x9a,
+ 0xd4,0x41,0x2e,0x5a, 0x26,0x03,0xae,0x7a,
+ 0xb2,0x6a,0xc0,0x0c, 0xb6,0x3e,0xf0,0x73,
+ 0x36,0xed,0xea,0xc1, 0xae,0x9d,0xc9,0xa1,
+ 0x85,0x4c,0x57,0x14, 0xb0,0xf3,0xf8,0x4e,
+ 0x91,0x99,0x06,0x65, 0x17,0x66,0xc2,0x9a,
+ 0x7a,0x4f,0x39,0x77, 0x32,0x44,0xc8,0x3f,
+ 0xe2,0x3c,0xc2,0x31, 0x0b,0x40,0x84,0xee,
+ 0xa1,0xeb,0xc6,0xc2, 0xb4,0x48,0xe6,0x09,
+ 0xc5,0xf5,0x3d,0x96, 0x90,0xa2,0x1d,0xf2,
+ 0x89,0x26,0x9f,0x10, 0x49,0x30,0x0f,0xe1,
+ 0x5e,0xca,0x1c,0x3f, 0x82,0xda,0xcb,0x8d,
+ 0x91,0x6d,0x08,0x96, 0x9e,0x57,0x88,0x16,
+ 0xee,0xa7,0x9e,0xe8, 0x1b,0xc1,0x63,0xb0,
+ 0x57,0xfa,0xfd,0x56, 0x49,0xec,0x51,0x1d,
+ 0x34,0x2e,0xc6,0xda, 0xc0,0x1d,0x02,0x3e,
+ 0x52,0xaf,0x44,0x24, 0xc6,0x80,0x12,0x64,
+ 0xbe,0x44,0xa8,0x46, 0xb5,0x8d,0x80,0xfd,
+ 0x95,0x4a,0xeb,0x3d, 0x4f,0x85,0x1f,0x1c,
+ 0xa4,0x3f,0x5c,0x0c, 0x71,0xed,0x96,0x41,
+ 0xde,0xb0,0xbd,0x08, 0xf3,0x4d,0x37,0xd2,
+ 0xb1,0x4f,0x71,0x04, 0xf1,0x14,0x66,0x4a,
+ 0x59,0x73,0xdc,0x98, 0x5b,0x61,0x56,0xfd,
+ 0x50,0xe5,0x76,0xd9, 0x6a,0x9f,0x30,0x82,
+ 0x6f,0xdf,0x6e,0x7b, 0x91,0xc2,0x5e,0x4f,
+ 0x74,0x92,0x92,0xb8, 0x24,0xd3,0x30,0x21,
+ 0x5d,0x4b,0xb1,0x01, 0xf7,0x62,0x27,0x94,
+ 0xb3,0x88,0x86,0x75, 0xe8,0xab,0xe8,0x42,
+ 0x50,0x15,0xb7,0xde, 0xc0,0xc4,0x8d,0x4e,
+ 0x08,0x17,0xcb,0xf9, 0x4a,0x2e,0xe3,0x69,
+ 0xbd,0xe7,0xdb,0xd1, 0xf1,0xfa,0x47,0xed,
+ 0x78,0xa9,0x26,0xf0, 0xd1,0xbb,0x02,0xa1,
+ 0x07,0x5c,0x1f,0xe8, 0x2f,0x52,0xd8,0x95,
+ 0xd7,0xa9,0x2b,0x79, 0x77,0xf4,0xee,0xee,
+ 0xbc,0x1f,0xaa,0x46, 0xe7,0x66,0x75,0xb1,
+ 0x43,0x01,0x35,0xac, 0xc6,0x85,0xad,0x44,
+ 0x23,0x59,0x50,0x0b, 0x39,0x47,0x51,0x54,
+ 0x68,0x92,0x89,0x00, 0x08,0xa3,0xaa,0x24,
+ 0x03,0x3f,0xf6,0xab, 0x19,0x42,0xff,0x0c,
+ 0xc5,0xa3,0x96,0xcb, 0xd9,0x6d,0xa0,0xcc,
+ 0x24,0x9e,0x71,0xb1, 0x87,0x95,0x7a,0x2e,
+ 0x31,0x5e,0x17,0x26, 0x5a,0x1b,0xa1,0x33,
+ 0x10,0x3f,0xd7,0xce, 0xa0,0xd9,0xbc,0xd8,
+ 0x72,0xbe,0x75,0xc4, 0x78,0x3b,0x67,0xf5,
+ 0xc3,0x82,0x2d,0x21, 0x49,0x74,0x2e,0xd5,
+ 0x63,0xaa,0xa2,0x54, 0xc5,0xe2,0x98,0x82,
+ 0x39,0xd9,0xda,0x14, 0x3c,0x75,0x18,0xc8,
+ 0x75,0x6a,0xa1,0x7d, 0xfa,0x72,0x0f,0x9b,
+ 0x5a,0xb3,0x7c,0x15, 0xc2,0xa5,0x6d,0x98,
+ 0x02,0x6c,0xa2,0x26, 0xaa,0xc0,0x69,0xc5,
+ 0xa7,0xa2,0xca,0xf5, 0xf3,0x8c,0x80,0x4e,
+ 0x7e,0x47,0xc9,0x87, 0x47,0x36,0xd6,0xc6,
+ 0xe8,0x49,0xb5,0x97, 0xa8,0xdc,0x4a,0x55,
+ 0x6f,0x02,0x79,0x83, 0xe4,0x7c,0x4c,0x69,
+ 0xa6,0x4d,0x4f,0x8a, 0x48,0x18,0x00,0xf9,
+ 0xad,0xd1,0xb2,0xca, 0xc4,0x50,0x47,0x21,
+ 0x4e,0xa7,0xce,0x6e, 0xdf,0xbd,0x2a,0x4d,
+ 0xca,0x13,0x33,0xde, 0xa2,0x30,0xe1,0x03,
+ 0xcd,0x2c,0x74,0xd3, 0x30,0x0d,0x61,0xe6,
+ 0x9d,0xf3,0x09,0xc5, 0x27,0x99,0x0e,0x23,
+ 0xbc,0x21,0xdb,0xdb, 0xeb,0x77,0xea,0xd4,
+ 0x4b,0xbf,0x9b,0x49, 0x30,0xd4,0xc2,0xe7,
+ 0x5e,0x85,0xe8,0xb6, 0xa5,0xe3,0x4e,0x64,
+ 0xf0,0x45,0x95,0x04, 0x9a,0xed,0xaa,0x4d,
+ 0xbd,0x5e,0x03,0x9f, 0xd4,0x2b,0xae,0x14,
+ 0x1a,0x3d,0x49,0x92, 0xd6,0x6f,0x64,0xc7,
+ 0xca,0x18,0x32,0x16, 0xf6,0x07,0x00,0x22,
+ 0xfd,0xe1,0x45,0xe6, 0x19,0x24,0x5b,0x6e,
+ 0xd3,0x67,0xf2,0x60, 0x36,0xf5,0x22,0xeb,
+ 0x5f,0x42,0xba,0x70, 0x38,0xfc,0x98,0x96,
+ 0x58,0x72,0xbf,0x13, 0x60,0xcc,0x32,0x45,
+ 0x8d,0x00,0x44,0x60, 0xaf,0x7a,0x19,0xd6,
+ 0xc0,0x14,0x33,0x96, 0xf3,0x33,0xc3,0xa8,
+ 0x34,0x77,0x69,0x0c, 0x50,0xe5,0xfc,0x1b,
+ 0x42,0x39,0x96,0x24, 0x3a,0x3a,0x47,0x0e,
+ 0x27,0x66,0xa8,0x18, 0x50,0xdf,0x6d,0xa7,
+ 0xad,0x4f,0xe5,0x88, 0x79,0xea,0x30,0xe2,
+ 0xcd,0x27,0x05,0x36, 0x0c,0x3c,0x97,0x12,
+ 0x69,0xa6,0xc0,0xa2, 0xa7,0x58,0x82,0x20,
+ 0x68,0xfc,0xd0,0x81, 0x49,0xc0,0xcf,0xba,
+ 0x90,0xe1,0x03,0xce, 0x70,0xd6,0x94,0x1a,
+ 0xc0,0x22,0x3b,0xdc, 0x7f,0x63,0x6b,0xc4,
+ 0x91,0xc2,0x21,0xdc, 0x84,0x42,0x80,0x04,
+ 0x6f,0x14,0xc3,0x2c, 0x79,0x49,0x3c,0xb1,
+ 0x5f,0xc7,0x69,0x4a, 0x4f,0xf5,0xd5,0x4b,
+ 0x7c,0xe7,0x83,0x79, 0x30,0xff,0x74,0xe0,
+ 0xf7,0xd3,0x6c,0x95, 0xef,0x77,0xe8,0x7b,
+ 0x1f,0x54,0xad,0xc7, 0x4b,0xe8,0x5a,0x37,
+ 0xd7,0xe9,0xfe,0xcb, 0x11,0x7b,0x54,0xb8,
+ 0xd2,0xc7,0x80,0x1d, 0x80,0x17,0xdd,0x21,
+ 0xa6,0xed,0x20,0x2c, 0x8a,0xa1,0x0b,0x3a,
+ 0x08,0xde,0x34,0xe4, 0xa0,0xff,0x68,0xfa,
+ 0x4a,0x01,0xcc,0x4f, 0x57,0x5f,0x84,0x95,
+ 0x88,0xe2,0x7f,0xb7, 0x5d,0x35,0x36,0xe2,
+ 0xa1,0xca,0xc0,0x9b, 0x4a,0xb0,0x6f,0x35,
+ 0xef,0x08,0xd7,0x5a, 0xec,0x4f,0x97,0x20,
+ 0x92,0x2a,0x63,0x1d, 0x15,0x07,0x73,0x1f,
+ 0x97,0xcf,0x28,0x41, 0x65,0x0d,0x41,0xee,
+ 0xca,0xd8,0x90,0x65, 0xaa,0x3d,0x04,0x7f,
+ 0x35,0x4b,0x9e,0xe9, 0x96,0xa9,0x61,0xcb,
+ 0x43,0xc9,0xfa,0x1d, 0xc8,0x85,0x40,0x64,
+ 0x88,0x89,0xea,0xb5, 0xf7,0xe5,0xe4,0xfe,
+ 0xaf,0x8e,0x52,0xf9, 0x7e,0x7d,0x83,0x92,
+ 0x90,0x51,0x4c,0xf0, 0x49,0x52,0x5e,0x56,
+ 0xc9,0xb7,0x4c,0xca, 0x57,0x01,0x3d,0x28,
+ 0xe2,0x7d,0xaa,0x96, 0xd7,0xad,0xad,0xd9,
+ 0xd5,0x1a,0xd5,0xc2, 0xd0,0x5a,0xd3,0x7a,
+ 0x9a,0x91,0xa0,0xb8, 0x6f,0x28,0xff,0xa0,
+ 0x1c,0x1d,0xf1,0x5e, 0x45,0x53,0x3f,0x85,
+ 0x1b,0xc2,0x76,0x51, 0xbf,0x25,0x02,0xf7,
+ 0x10,0xde,0xb7,0x1a, 0x04,0x6c,0x9a,0xeb,
+ 0xb9,0x4b,0x67,0xfb, 0xa1,0x5b,0xa8,0x02,
+ 0x01,0x1f,0x38,0xa9, 0x9d,0x96,0x50,0x07,
+ 0xef,0xa7,0xc3,0xb4, 0x0f,0xcd,0x1b,0x9f,
+ 0xd2,0x08,0x87,0xca, 0xd5,0x65,0x1a,0x5e,
+ 0x1a,0xff,0x97,0xb0, 0x4b,0x43,0x67,0x51,
+ 0x22,0xfd,0x49,0xcd, 0x54,0x2f,0xf8,0x9b,
+ 0xed,0x46,0x7e,0x00, 0x5b,0x67,0x06,0xeb,
+ 0xb7,0x4d,0x1c,0x72, 0x74,0xdd,0xbd,0xb1,
+ 0x71,0x0a,0x28,0xc7, 0x7b,0xa8,0x12,0xac,
+ 0x58,0x53,0xa4,0xfb, 0x41,0x74,0xb4,0x52,
+ 0x95,0x99,0xf6,0x38, 0x53,0xff,0x2d,0x26,
+ 0xef,0x12,0x91,0xc6, 0x52,0xe1,0xa9,0x50,
+ 0xfa,0x8e,0x2e,0x82, 0x8b,0x4f,0xb7,0xad,
+ 0xe1,0x74,0x0d,0xbf, 0x73,0x04,0xdf,0x3f,
+ 0xf6,0xf8,0x09,0x9d, 0xdf,0x18,0x07,0x13,
+ 0xe6,0x60,0xf0,0x6a, 0x98,0x22,0x15,0xdf,
+ 0x0c,0x72,0x6a,0x9d, 0x6e,0x67,0x76,0x61,
+ 0xda,0xbe,0x10,0xd6, 0xf0,0x5f,0x06,0x74,
+ 0x76,0xce,0x63,0xee, 0x91,0x39,0x24,0xa9,
+ 0xcf,0xc7,0xca,0xd5, 0xb4,0xff,0x30,0x6e,
+ 0x05,0x32,0x0c,0x9d, 0xeb,0xfb,0xc6,0x3e,
+ 0xe4,0xc6,0x20,0xc5, 0x3e,0x1d,0x5c,0xd6,
+ 0x05,0xbe,0xb8,0xc3, 0x44,0xe3,0xc9,0xc1,
+ 0x38,0xaa,0xc5,0xc8, 0xe3,0x11,0x8d,0xde,
+ 0xdc,0x48,0x8e,0xe9, 0x38,0xe5,0x80,0xec,
+ 0x82,0x17,0xf2,0xcf, 0x26,0x55,0xf7,0xdc,
+ 0x78,0x7f,0xfb,0xc1, 0xb4,0x6c,0x80,0xcc,
+ 0xf8,0x5a,0xbc,0x8f, 0x9d,0x62,0xfe,0x35,
+ 0x17,0x7c,0x10,0xb7, 0x4a,0x0f,0x81,0x43,
+ 0x11,0xbd,0x33,0x47, 0x9c,0x61,0x02,0xec,
+ 0xab,0xde,0xb2,0x3f, 0x73,0x48,0xfb,0x5c,
+ 0x84,0x4a,0xeb,0xab, 0x58,0x07,0x18,0xdc,
+ 0x57,0x85,0xb8,0xe7, 0xff,0x9c,0xc2,0xc8,
+ 0xb3,0xef,0x5b,0x50, 0x16,0xb1,0x38,0x6e,
+ 0xa7,0xd7,0x9c,0xb1, 0x29,0x6b,0x74,0x9c,
+ 0x50,0xcc,0x90,0xee, 0x86,0x2a,0x7c,0x07,
+ 0xd4,0xcb,0xc2,0x24, 0x53,0xb0,0x3f,0x4f,
+ 0x9b,0xc4,0x62,0x73, 0x85,0x3d,0x1e,0x54,
+ 0x86,0xda,0x1e,0x5e, 0x70,0x73,0x6a,0x2a,
+ 0x29,0x75,0xb7,0x18, 0x1a,0x72,0x81,0x64,
+ 0x58,0xa0,0xb3,0x70, 0x61,0x9f,0x22,0x37,
+ 0xac,0xdc,0xe8,0xaf, 0xe2,0x74,0xe4,0xa7,
+ 0xed,0x92,0x5c,0x47, 0xff,0xc3,0xaf,0x9e,
+ 0x59,0xe1,0x09,0x22, 0x72,0x18,0x96,0x35,
+ 0x23,0x91,0x00,0xa3, 0x7d,0x95,0x25,0x95,
+ 0xd5,0xad,0xf8,0x6e, 0xcc,0x14,0x31,0xb2,
+ 0x52,0x20,0x2a,0x41, 0xf1,0xaf,0x9a,0xaf,
+ 0xdd,0xbd,0x04,0x5a, 0xcd,0x1a,0x86,0xb1,
+ 0x45,0x1b,0x6f,0x7a, 0x02,0x45,0x05,0xef,
+ 0x74,0xdf,0xe8,0x72, 0x1c,0x82,0x57,0xea,
+ 0x2a,0x24,0x1b,0x46, 0x3f,0x66,0x89,0x9f,
+ 0x00,0xb9,0xec,0xf7, 0x59,0x6d,0xeb,0xac,
+ 0xca,0x82,0x14,0x79, 0xbf,0x7f,0xd5,0x18,
+ 0x26,0x6b,0xee,0x34, 0x44,0xee,0x6d,0x8a,
+ 0x82,0x8f,0x4f,0xa3, 0x1a,0xc3,0x9b,0x2e,
+ 0x57,0x83,0xb8,0x7d, 0xa0,0x21,0xc6,0x66,
+ 0x96,0x7d,0x30,0x81, 0x29,0xc7,0x05,0x46,
+ 0x99,0xd4,0x35,0x7b, 0x40,0xe8,0x87,0x60,
+ 0x13,0xa5,0xa6,0xb9, 0x24,0x59,0xca,0xa8,
+ 0xcd,0x62,0xeb,0xc5, 0x22,0xff,0x49,0x64,
+ 0x03,0x2d,0x42,0x01, 0xa2,0x09,0x4a,0x45,
+ 0x41,0x34,0x88,0x44, 0xf4,0xe1,0xa3,0x48,
+ 0xcf,0x2d,0xee,0xee, 0xbf,0x83,0x1a,0x42,
+ 0x8d,0xa4,0x15,0x3d, 0xfc,0x92,0x67,0x91,
+ },
+ .c = {
+ 0x5c,0xb9,0xab,0x7c, 0xe4,0x0b,0xbe,0xa5,
+ 0x17,0x18,0xdf,0xd7, 0x17,0x13,0x98,0xbd,
+ 0xcb,0x1c,0xa3,0x39, 0x9c,0xbc,0x19,0x1f,
+ 0xca,0xcb,0x50,0x89, 0x1d,0x69,0xc3,0xcb,
+ 0xd1,0x76,0x70,0x6b, 0x7c,0x62,0x49,0xe8,
+ 0xb1,0xa8,0xb7,0x58, 0x87,0xf6,0x79,0xf7,
+ 0xf2,0xc1,0xd8,0xb2, 0x1d,0xd2,0x1a,0xf5,
+ 0xa0,0x41,0xda,0x17, 0x3f,0xaa,0xdb,0xf6,
+ 0xa9,0xf2,0x49,0x1c, 0x6f,0x20,0xf3,0xae,
+ 0x4a,0x5e,0x55,0xdd, 0xa6,0x9e,0xc4,0x03,
+ 0x07,0x22,0xc0,0xbe, 0x5e,0x58,0xdd,0xf0,
+ 0x7e,0xfe,0xcf,0x2c, 0x96,0x33,0x32,0xbd,
+ 0xe8,0xdf,0x84,0x71, 0x45,0x35,0x40,0x48,
+ 0xcf,0x10,0x45,0x47, 0x97,0x4c,0x20,0x6b,
+ 0x3a,0xdd,0x73,0xd0, 0xce,0x0c,0x4c,0xf1,
+ 0x78,0xcd,0x93,0xd2, 0x21,0x70,0xeb,0x2f,
+ 0x23,0x99,0x64,0xbb, 0x97,0x28,0xe9,0xde,
+ 0xef,0x9c,0xf2,0x7f, 0x4b,0x4d,0x2c,0x66,
+ 0x7b,0x6e,0x70,0xf7, 0x25,0x68,0xea,0x93,
+ 0x3a,0x27,0xbd,0x04, 0x8b,0xcd,0xd9,0xed,
+ 0x1a,0x9d,0xca,0x8f, 0x15,0x2d,0xa1,0x25,
+ 0xb8,0x66,0x1b,0x3d, 0xd4,0xd4,0x9b,0xab,
+ 0x3a,0xa8,0xe8,0x88, 0xc6,0xd2,0x5a,0x28,
+ 0x51,0x4d,0x11,0xb6, 0x4a,0x2b,0x6d,0xe4,
+ 0xc9,0xc1,0x20,0x6f, 0xba,0x23,0x72,0xc9,
+ 0x6d,0x44,0xf0,0xaa, 0x06,0x8c,0x9b,0xbb,
+ 0x4b,0xd2,0xa0,0x94, 0x5f,0x0b,0xc8,0xa3,
+ 0x4c,0xe9,0xe2,0x8a, 0xe5,0xf9,0xe3,0x2c,
+ 0xc7,0x87,0x75,0xc1, 0xc9,0x62,0xb5,0xb4,
+ 0x04,0x86,0x6a,0x31, 0x54,0x0e,0x31,0xf7,
+ 0xad,0xea,0xbb,0xa6, 0x8e,0x6c,0xac,0x24,
+ 0x52,0x2c,0x9d,0x1f, 0xde,0x70,0xfd,0xc4,
+ 0x93,0x8b,0x75,0x6c, 0xef,0xa7,0x89,0xaf,
+ 0x2c,0x4c,0xf6,0x38, 0xdd,0x79,0xfa,0x70,
+ 0x54,0x1e,0x92,0xd4, 0xb4,0x04,0x69,0x8e,
+ 0x6b,0x9e,0x12,0xfe, 0x15,0x15,0xf7,0x99,
+ 0xb6,0x2f,0xfc,0xfa, 0x66,0xe9,0x40,0xb5,
+ 0xd3,0x10,0xbb,0x42, 0xf9,0x68,0x64,0xd4,
+ 0x2a,0xcd,0x43,0x75, 0xb0,0x9c,0x61,0x34,
+ 0xc1,0xc4,0x42,0xf3, 0xf1,0xa7,0x65,0xf4,
+ 0xcb,0x42,0xe9,0xc2, 0x5a,0x05,0xdf,0x98,
+ 0xa3,0xba,0xf7,0xe0, 0x15,0xa1,0xdf,0xf7,
+ 0xce,0xd5,0xf0,0x62, 0x89,0xe1,0x44,0x3a,
+ 0x4f,0x6f,0x75,0x3e, 0xfc,0x19,0xe3,0x5f,
+ 0x36,0x48,0xc1,0x95, 0x08,0x22,0x09,0xf9,
+ 0x07,0x74,0x1c,0xa4, 0x1b,0x7e,0xa8,0x82,
+ 0xca,0x0b,0xd9,0x1e, 0xe3,0x5b,0x1c,0xb5,
+ 0x57,0x13,0x7d,0xbd, 0xbd,0x16,0x88,0xd4,
+ 0xb1,0x8e,0xdb,0x6f, 0x2f,0x7b,0x55,0x72,
+ 0x79,0xc9,0x49,0x7b, 0xf7,0x86,0xa9,0x3d,
+ 0x2d,0x11,0x33,0x7d, 0x82,0x38,0xc7,0xb5,
+ 0x7c,0x6b,0x0b,0x28, 0x42,0x50,0x47,0x69,
+ 0xd8,0x48,0xc6,0x85, 0x0b,0x1b,0xca,0x08,
+ 0x85,0x36,0x6d,0x97, 0xe9,0x3e,0xeb,0xe2,
+ 0x28,0x6a,0x17,0x61, 0x7d,0xcb,0xb6,0xb3,
+ 0x23,0x44,0x76,0xd3, 0x57,0x39,0x9b,0x1d,
+ 0x69,0x30,0xd8,0x3f, 0x21,0xe8,0x68,0x94,
+ 0x82,0x85,0x97,0xb1, 0x1f,0x0c,0x99,0x6e,
+ 0x6e,0x44,0xa6,0x82, 0xd0,0xa2,0xe6,0xfe,
+ 0xff,0x08,0x41,0x49, 0x54,0x18,0x51,0x88,
+ 0x23,0xd5,0x14,0xbd, 0xfe,0xea,0x5d,0x15,
+ 0xd4,0x0b,0x2d,0x92, 0x94,0x8d,0xd4,0xe5,
+ 0xaf,0x60,0x88,0x2b, 0x67,0xae,0xbb,0xa8,
+ 0xec,0xae,0x9b,0x35, 0xa2,0xd7,0xe8,0xb6,
+ 0xe5,0xaa,0x12,0xd5, 0xef,0x05,0x5a,0x64,
+ 0xe0,0xff,0x79,0x16, 0xb6,0xa3,0xdb,0x1e,
+ 0xee,0xe8,0xb7,0xd6, 0x71,0xbd,0x76,0xbf,
+ 0x66,0x2a,0x9c,0xec, 0xbe,0x8c,0xb5,0x8e,
+ 0x8e,0xc0,0x89,0x07, 0x5d,0x22,0xd8,0xe0,
+ 0x27,0xcf,0x58,0x8a, 0x8c,0x4d,0xc7,0xa4,
+ 0x45,0xfc,0xe5,0xa4, 0x32,0x7c,0xbf,0x86,
+ 0xf0,0x82,0x96,0x05, 0x1e,0x86,0x03,0x0f,
+ 0x1f,0x0d,0xf2,0xfc, 0x28,0x62,0x90,0x53,
+ 0xfe,0xd4,0x28,0x52, 0x4f,0xa6,0xbc,0x4d,
+ 0xba,0x5d,0x04,0xc0, 0x83,0x61,0xf6,0x41,
+ 0xc8,0x58,0x40,0x49, 0x1d,0x27,0xd5,0x9f,
+ 0x93,0x4f,0xb5,0x7a, 0xea,0x7b,0x86,0x31,
+ 0x2b,0xe5,0x92,0x51, 0x3e,0x7a,0xbe,0xdb,
+ 0x04,0xae,0x21,0x71, 0x5a,0x70,0xf9,0x9b,
+ 0xa8,0xb6,0xdb,0xcd, 0x21,0x56,0x75,0x2e,
+ 0x98,0x38,0x78,0x4d, 0x51,0x4a,0xa6,0x03,
+ 0x8a,0x84,0xb2,0xf9, 0x6b,0x98,0x6d,0xf3,
+ 0x12,0xaa,0xd4,0xea, 0xb3,0x7c,0xb0,0xd9,
+ 0x5e,0x1c,0xb0,0x69, 0x48,0x67,0x13,0x26,
+ 0xf0,0x25,0x04,0x93, 0x6d,0xc6,0x6c,0xb2,
+ 0xcd,0x7c,0x36,0x62, 0x6d,0x38,0x44,0xe9,
+ 0x6b,0xe2,0x7f,0xc1, 0x40,0xdb,0x55,0xe1,
+ 0xa6,0x71,0x94,0x0a, 0x13,0x5f,0x9e,0x66,
+ 0x3b,0xb3,0x11,0x90, 0xbb,0x68,0xd4,0x11,
+ 0xf2,0xb7,0x61,0xbd, 0xac,0x4a,0x56,0xf4,
+ 0x9e,0xe2,0xd0,0x1e, 0xb4,0xa1,0xb8,0x4e,
+ 0xbb,0xc2,0x73,0x63, 0x04,0x99,0x97,0x9f,
+ 0x76,0x18,0x82,0x11, 0x7e,0xe1,0xcc,0x58,
+ 0xb7,0xb5,0x37,0x78, 0x60,0x19,0x6c,0x2b,
+ 0x6e,0x65,0x15,0x10, 0x3c,0x93,0xf0,0xc5,
+ 0x3d,0x9e,0xeb,0x77, 0x72,0x25,0x95,0xf0,
+ 0x27,0xe8,0xbd,0x81, 0x9c,0x22,0x38,0xa7,
+ 0x8d,0xe9,0x94,0xf2, 0x27,0x8d,0x3a,0x34,
+ 0x36,0xba,0x26,0xa0, 0xd7,0x3e,0xd8,0xbe,
+ 0x60,0xd1,0x53,0x58, 0x56,0xe6,0xf3,0xa1,
+ 0x0d,0x62,0x5e,0x44, 0xd3,0x7c,0xc9,0x25,
+ 0x87,0xc8,0x1a,0x57, 0x7f,0xfa,0x79,0x4a,
+ 0x15,0xf6,0x3e,0x2e, 0xd0,0x6b,0x83,0x9b,
+ 0xe6,0xfe,0x6c,0xd3, 0x8e,0x40,0x4a,0x12,
+ 0x57,0x41,0xc9,0x5a, 0x42,0x91,0x0b,0x28,
+ 0x56,0x38,0xfc,0x45, 0x4b,0x26,0xbf,0x3a,
+ 0xa3,0x46,0x75,0x73, 0xde,0x7e,0x18,0x7c,
+ 0x82,0x92,0x73,0xe6, 0xb5,0xd2,0x1f,0x1c,
+ 0xdd,0xb3,0xd5,0x71, 0x9f,0xd2,0xa5,0xf4,
+ 0xf1,0xcb,0xfe,0xfb, 0xd3,0xb6,0x32,0xbd,
+ 0x8e,0x0d,0x73,0x0a, 0xb6,0xb1,0xfd,0x31,
+ 0xa5,0xa4,0x7a,0xb1, 0xa1,0xbb,0xf0,0x0b,
+ 0x97,0x21,0x27,0xe1, 0xbb,0x6a,0x2a,0x5b,
+ 0x95,0xda,0x01,0xd3, 0x06,0x8e,0x53,0xd8,
+ 0x23,0xa3,0xa9,0x82, 0x8a,0xa2,0x8f,0xdb,
+ 0x87,0x37,0x41,0x41, 0x2b,0x36,0xf3,0xb3,
+ 0xa6,0x32,0x5f,0x3e, 0xbf,0x70,0x3a,0x13,
+ 0xba,0x11,0xa1,0x4e, 0x11,0xa8,0xc0,0xb7,
+ 0xb2,0x1b,0xab,0xc8, 0xcb,0x38,0x35,0x2e,
+ 0x76,0xa7,0x0b,0x5a, 0x6c,0x53,0x83,0x60,
+ 0x4f,0xee,0x91,0xe8, 0xca,0x1e,0x7f,0x76,
+ 0x2b,0x4c,0xe7,0xd4, 0xcb,0xf8,0xeb,0x94,
+ 0x76,0x17,0x68,0x23, 0x95,0x93,0x7f,0x60,
+ 0x80,0x7a,0x85,0x70, 0x95,0x56,0xb9,0x76,
+ 0x76,0xb6,0x8f,0xe2, 0x93,0x60,0xfc,0x70,
+ 0x57,0x4a,0x27,0xc0, 0xfb,0x49,0x2f,0xac,
+ 0xde,0x87,0x2f,0x1a, 0x80,0xca,0x68,0x5e,
+ 0xc6,0x18,0x4e,0x3a, 0x4b,0x36,0xdc,0x24,
+ 0x78,0x7e,0xb0,0x58, 0x85,0x4d,0xa9,0xbc,
+ 0x0d,0x87,0xdd,0x02, 0xa6,0x0d,0x46,0xae,
+ 0xf7,0x2f,0x8e,0xeb, 0xf4,0x29,0xe0,0xbc,
+ 0x9a,0x34,0x30,0xc3, 0x29,0xea,0x2c,0xb3,
+ 0xb4,0xa2,0x9c,0x45, 0x6e,0xcb,0xa4,0x9d,
+ 0x22,0xe6,0x71,0xe0, 0xcb,0x9f,0x05,0xef,
+ 0x2f,0xf7,0x12,0xfd, 0x5d,0x48,0x6c,0x9e,
+ 0x8b,0xaa,0x90,0xb6, 0xa8,0x78,0xeb,0xde,
+ 0xeb,0x4c,0xce,0x7b, 0x62,0x60,0x69,0xc0,
+ 0x54,0xc3,0x13,0x76, 0xdc,0x7e,0xd1,0xc3,
+ 0x8e,0x24,0x58,0x43, 0x3c,0xbc,0xa0,0x75,
+ 0xf2,0x7c,0x2d,0x1e, 0x94,0xec,0x40,0x15,
+ 0xe1,0x78,0xac,0x4a, 0x93,0xef,0x87,0xec,
+ 0x99,0x94,0xcb,0x65, 0xde,0xcb,0x38,0xd7,
+ 0x89,0x90,0xa2,0x68, 0xcf,0xfd,0x98,0xf8,
+ 0x1f,0x06,0xd5,0x6c, 0x53,0x1d,0xd3,0xa7,
+ 0x06,0x0b,0xa9,0x92, 0xbb,0x6e,0x6f,0xaa,
+ 0x5a,0x54,0x71,0xb7, 0x90,0x00,0x06,0x6b,
+ 0xf9,0x34,0xba,0x41, 0x73,0x58,0x98,0xfc,
+ 0xca,0x98,0xbd,0xd3, 0x7d,0xa4,0x49,0xcc,
+ 0xa8,0x19,0xc1,0x40, 0x75,0x81,0x02,0x33,
+ 0xac,0x90,0xcd,0x58, 0xeb,0x1b,0xb4,0x4e,
+ 0xe0,0x8a,0xa9,0x0f, 0x15,0x8e,0x51,0x85,
+ 0x06,0x09,0x92,0x40, 0xe3,0x75,0x60,0x64,
+ 0xcf,0x9b,0x88,0xc7, 0xb0,0xab,0x37,0x5d,
+ 0x43,0x21,0x18,0x09, 0xff,0xec,0xa0,0xb3,
+ 0x47,0x09,0x22,0x4c, 0x55,0xc2,0x2d,0x2b,
+ 0xce,0xb9,0x3a,0xcc, 0xd7,0x0c,0xb2,0x9a,
+ 0xff,0x2a,0x73,0xac, 0x7a,0xf2,0x11,0x73,
+ 0x94,0xd9,0xbe,0x31, 0x9f,0xae,0x62,0xab,
+ 0x03,0xac,0x5f,0xe2, 0x99,0x90,0xfb,0xa5,
+ 0x74,0xc0,0xfa,0xb9, 0x3c,0x96,0x7c,0x36,
+ 0x25,0xab,0xff,0x2f, 0x24,0x65,0x73,0x21,
+ 0xc3,0x21,0x73,0xc9, 0x23,0x06,0x22,0x6c,
+ 0xb2,0x22,0x26,0x1d, 0x88,0x6f,0xd3,0x5f,
+ 0x6f,0x4d,0xf0,0x6d, 0x13,0x70,0x7d,0x67,
+ 0xe8,0x5c,0x3b,0x35, 0x27,0x8a,0x8c,0x65,
+ 0xae,0x50,0x78,0xe1, 0x26,0x07,0xf8,0x18,
+ 0xfc,0xea,0xa3,0x58, 0x73,0x2b,0xca,0x92,
+ 0x10,0xdc,0xb5,0x39, 0xd5,0x2d,0x21,0xfe,
+ 0x79,0xac,0x7d,0xe8, 0x0c,0xe9,0x6d,0x3e,
+ 0xb4,0x8a,0x23,0x65, 0x08,0xbc,0x57,0x51,
+ 0xe1,0xf8,0x8d,0x5b, 0xe4,0xfe,0x14,0x60,
+ 0x02,0xe7,0xd1,0xc2, 0xd2,0x2c,0x3f,0x4d,
+ 0x08,0xd1,0xd0,0xe7, 0x3b,0xcb,0x85,0x84,
+ 0x32,0xd6,0xb9,0xfb, 0xf7,0x45,0xa1,0xaf,
+ 0x9c,0xa3,0x8d,0x37, 0xde,0x03,0x6b,0xf4,
+ 0xae,0x58,0x03,0x26, 0x58,0x4f,0x73,0x49,
+ 0xc8,0x7f,0xa3,0xdd, 0x51,0xf2,0xec,0x34,
+ 0x8f,0xd5,0xe0,0xc2, 0xe5,0x33,0xf7,0x31,
+ 0x33,0xe7,0x98,0x5f, 0x26,0x14,0x4f,0xbb,
+ 0x88,0x1f,0xb3,0x92, 0x4e,0x97,0x2d,0xee,
+ 0x08,0x5f,0x9c,0x14, 0x5f,0xaf,0x6c,0x10,
+ 0xf9,0x47,0x41,0x81, 0xe9,0x99,0x49,0x52,
+ 0x86,0x29,0x55,0xba, 0x2e,0xb6,0x62,0x24,
+ 0x58,0xf7,0x4d,0x99, 0xce,0x75,0xa8,0x45,
+ 0x66,0x27,0x48,0x3f, 0x78,0xe3,0x48,0x7c,
+ 0xd7,0x1a,0x6c,0x89, 0x9d,0xb2,0x6a,0x23,
+ 0x9d,0xd7,0xed,0x82, 0x31,0x94,0x40,0x66,
+ 0xc8,0x28,0x52,0x23, 0xe7,0x61,0xde,0x71,
+ 0x69,0xf2,0x53,0x43, 0x30,0xce,0x6a,0x1a,
+ 0xfe,0x1e,0xeb,0xc2, 0x9f,0x61,0x81,0x94,
+ 0x18,0xed,0x58,0xbb, 0x01,0x13,0x92,0xb3,
+ 0xa6,0x90,0x7f,0xb5, 0xf4,0xbd,0xff,0xae,
+ },
+ },
+ [4] = {
+ .k = {
+ 0x7f,0x56,0x7d,0x15, 0x77,0xe6,0x83,0xac,
+ 0xd3,0xc5,0xb7,0x39, 0x9e,0x9f,0xf9,0x17,
+ 0xc7,0xff,0x50,0xb0, 0x33,0xee,0x8f,0xd7,
+ 0x3a,0xab,0x0b,0xfe, 0x6d,0xd1,0x41,0x8a,
+ },
+ .tlen = 0,
+ .len = 4096,
+ .p = {
+ 0x95,0x96,0x98,0xef, 0x73,0x92,0xb5,0x20,
+ 0xec,0xfc,0x4d,0x91, 0x54,0xbf,0x8d,0x9d,
+ 0x54,0xbc,0x4f,0x0f, 0x94,0xfc,0x94,0xcf,
+ 0x07,0xf6,0xef,0xbb, 0xed,0x3f,0xd3,0x60,
+ 0xba,0x85,0x1d,0x04, 0x08,0x54,0x92,0x08,
+ 0x06,0x52,0x7f,0x33, 0xfd,0xf3,0xdf,0x2a,
+ 0x17,0x2d,0xda,0x73, 0x03,0x56,0x21,0xa9,
+ 0xa3,0xab,0xf7,0x24, 0x17,0x39,0x7e,0x0f,
+ 0x00,0xdd,0xac,0x55, 0xb0,0x8b,0x2d,0x72,
+ 0x3b,0x9a,0x36,0x5a, 0xd9,0x0a,0x8e,0x0f,
+ 0xe2,0x1d,0xe8,0x85, 0xc3,0xc1,0x17,0x11,
+ 0xa7,0x2c,0x87,0x77, 0x9d,0x6c,0x3a,0xa6,
+ 0x90,0x59,0x10,0x24, 0xb0,0x92,0xe1,0xb6,
+ 0xa9,0x89,0x7c,0x95, 0x0a,0xf2,0xb2,0xa3,
+ 0x4a,0x40,0x88,0x35, 0x71,0x4e,0xa5,0xc9,
+ 0xde,0xba,0xd7,0x62, 0x56,0x46,0x40,0x1e,
+ 0xda,0x80,0xaf,0x28, 0x5d,0x40,0x36,0xf6,
+ 0x09,0x06,0x29,0x6e, 0xaa,0xca,0xe3,0x9e,
+ 0x9a,0x4f,0x4c,0x7e, 0x71,0x81,0x6f,0x9e,
+ 0x50,0x05,0x91,0x58, 0x13,0x6c,0x75,0x6a,
+ 0xd3,0x0e,0x7e,0xaf, 0xe1,0xbc,0xd9,0x38,
+ 0x18,0x47,0x73,0x3a, 0xf3,0x78,0x6f,0xcc,
+ 0x3e,0xea,0x52,0x82, 0xb9,0x0a,0xc5,0xfe,
+ 0x77,0xd6,0x25,0x56, 0x2f,0xec,0x04,0x59,
+ 0xda,0xd0,0xc9,0x22, 0xb1,0x01,0x60,0x7c,
+ 0x48,0x1a,0x31,0x3e, 0xcd,0x3d,0xc4,0x87,
+ 0xe4,0x83,0xc2,0x06, 0x91,0xf7,0x02,0x86,
+ 0xd2,0x9b,0xfd,0x26, 0x5b,0x9b,0x32,0xd1,
+ 0x5c,0xfd,0xb4,0xa8, 0x58,0x3f,0xd8,0x10,
+ 0x8a,0x56,0xee,0x04, 0xd0,0xbc,0xaa,0xa7,
+ 0x62,0xfd,0x9a,0x52, 0xec,0xb6,0x80,0x52,
+ 0x39,0x9e,0x07,0xc8, 0xb4,0x50,0xba,0x5a,
+ 0xb4,0x9a,0x27,0xdb, 0x93,0xb6,0x98,0xfe,
+ 0x52,0x08,0xa9,0x45, 0xeb,0x03,0x28,0x89,
+ 0x26,0x3c,0x9e,0x97, 0x0f,0x0d,0x0b,0x67,
+ 0xb0,0x00,0x01,0x71, 0x4b,0xa0,0x57,0x62,
+ 0xfe,0xb2,0x6d,0xbb, 0xe6,0xe4,0xdf,0xe9,
+ 0xbf,0xe6,0x21,0x58, 0xd7,0xf6,0x97,0x69,
+ 0xce,0xad,0xd8,0xfa, 0xce,0xe6,0x80,0xa5,
+ 0x60,0x10,0x2a,0x13, 0xb2,0x0b,0xbb,0x88,
+ 0xfb,0x64,0x66,0x00, 0x72,0x8c,0x4e,0x21,
+ 0x47,0x33,0x00,0x1f, 0x85,0xa6,0x3a,0xd3,
+ 0xe2,0x6c,0xc7,0x42, 0xb6,0x7b,0xc0,0x56,
+ 0x75,0xe2,0x61,0x72, 0x15,0xd1,0x88,0x08,
+ 0x3f,0x4d,0xfd,0xe2, 0x68,0x64,0xe5,0x7a,
+ 0x23,0x9b,0x3f,0x6c, 0xc3,0xd6,0x51,0x08,
+ 0x24,0x33,0x24,0x47, 0x7e,0xea,0x23,0xdc,
+ 0x07,0x41,0x66,0xa2, 0xa4,0xeb,0x23,0xa1,
+ 0x37,0x31,0xc0,0x7a, 0xe6,0xa4,0x63,0x05,
+ 0x20,0x44,0xe2,0x70, 0xd3,0x3e,0xee,0xd8,
+ 0x24,0x34,0x5d,0x80, 0xde,0xc2,0x34,0x66,
+ 0x5a,0x2b,0x6a,0x20, 0x4c,0x99,0x0d,0xbc,
+ 0x37,0x59,0xc5,0x8b, 0x70,0x4d,0xb4,0x0e,
+ 0x51,0xec,0x59,0xf6, 0x4f,0x08,0x1e,0x54,
+ 0x3d,0x45,0x31,0x99, 0x4d,0x5e,0x29,0x5f,
+ 0x12,0x57,0x46,0x09, 0x33,0xb9,0xf2,0x66,
+ 0xb4,0xc2,0xfa,0x63, 0xbe,0x42,0x6c,0x21,
+ 0x68,0x33,0x40,0xc6, 0xbd,0xd8,0x8a,0x55,
+ 0xd7,0x90,0x27,0x25, 0x7d,0x1e,0xed,0x02,
+ 0x50,0xd8,0xb1,0xac, 0xfa,0xd9,0xd4,0xcb,
+ 0x1c,0xc9,0x43,0x60, 0x44,0xab,0xd8,0x97,
+ 0x04,0xac,0xef,0x72, 0xa3,0x88,0xdc,0xb0,
+ 0xb0,0xb6,0xc6,0xd4, 0xd0,0x38,0xaf,0xc7,
+ 0xcd,0x8d,0x2a,0xa4, 0x13,0x53,0xd9,0xfd,
+ 0x2d,0x0b,0x91,0xb4, 0x3c,0x3a,0x72,0x11,
+ 0x6c,0x8b,0x96,0xa3, 0xc6,0x0b,0xd6,0x9a,
+ 0xa2,0xb9,0xae,0x76, 0xad,0xfd,0x01,0x90,
+ 0xab,0x93,0x9c,0x4b, 0xde,0x7e,0xf2,0x82,
+ 0x96,0xb9,0x98,0x55, 0xe2,0x68,0xe0,0xd8,
+ 0x61,0xb8,0x91,0x9a, 0xaf,0x92,0xd7,0xe5,
+ 0xeb,0x88,0xc5,0xb0, 0xcb,0x75,0x55,0xa9,
+ 0x94,0x7c,0x9c,0x11, 0x14,0x81,0x1a,0x09,
+ 0x61,0xd8,0x22,0x44, 0x13,0xba,0xe8,0x06,
+ 0x78,0xfd,0xd5,0x82, 0x73,0x19,0x9a,0xd1,
+ 0x5d,0x16,0xf5,0xd8, 0x86,0x7e,0xe3,0xcd,
+ 0xdc,0xe8,0x6a,0x18, 0x05,0xba,0x10,0xe4,
+ 0x06,0xc7,0xb2,0xf3, 0xb2,0x3e,0x1c,0x74,
+ 0x86,0xdd,0xad,0x8c, 0x82,0xf0,0x73,0x15,
+ 0x34,0xac,0x1d,0x95, 0x5e,0xba,0x2a,0xba,
+ 0xf8,0xac,0xbd,0xd7, 0x28,0x74,0x28,0xc7,
+ 0x29,0xa0,0x00,0x11, 0xda,0x31,0x7c,0xab,
+ 0x66,0x4d,0xb2,0x5e, 0xae,0x71,0xc5,0x31,
+ 0xcc,0x2b,0x9f,0x36, 0x2e,0xe6,0x97,0xa4,
+ 0xe1,0xb8,0x4b,0xc9, 0x00,0x87,0x7b,0x54,
+ 0xaa,0xeb,0xff,0x1a, 0x15,0xe8,0x3e,0x11,
+ 0xf7,0x25,0x3a,0xce, 0x94,0x23,0x27,0x44,
+ 0x77,0x80,0x6e,0xdd, 0x3f,0x8e,0x5a,0x92,
+ 0xae,0xee,0xb9,0x00, 0x79,0xc3,0x1d,0xab,
+ 0x17,0xb8,0x2b,0xff, 0x0d,0x64,0x29,0xb7,
+ 0x61,0x4d,0xd0,0x8d, 0x3d,0x36,0x3d,0x13,
+ 0xed,0x12,0xe8,0x08, 0xdd,0x4b,0x37,0xf7,
+ 0x2b,0xe7,0xeb,0x92, 0x78,0x98,0xc2,0xd6,
+ 0x13,0x15,0x94,0xff, 0xef,0xdc,0xda,0x27,
+ 0x7b,0xf9,0x58,0x5b, 0x90,0xf3,0xcd,0x1b,
+ 0x38,0x8a,0x00,0x38, 0x9b,0x95,0xcb,0x18,
+ 0x1f,0x97,0xd2,0x1f, 0x60,0x9d,0x6c,0xac,
+ 0xb8,0x72,0x08,0xd9, 0xc1,0xf4,0x98,0x72,
+ 0xf9,0x44,0xf2,0x2b, 0xe1,0x6e,0x76,0x15,
+ 0x63,0xfc,0x57,0x12, 0x23,0x4a,0xff,0xd3,
+ 0x1f,0x0d,0x0c,0xb9, 0x14,0xf9,0x98,0x52,
+ 0xce,0x90,0x34,0x8c, 0xd4,0x54,0x14,0x9e,
+ 0xf7,0x2c,0xba,0x5f, 0x80,0xb0,0x02,0x68,
+ 0x4f,0xca,0xb0,0xda, 0x44,0x11,0xb4,0xbd,
+ 0x12,0x14,0x80,0x6b, 0xc1,0xce,0xa7,0xfe,
+ 0x0e,0x16,0x69,0x19, 0x3c,0xe7,0xb6,0xfe,
+ 0x5a,0x59,0x02,0xf6, 0x78,0x3e,0xa4,0x65,
+ 0x57,0xa1,0xf2,0x65, 0xad,0x64,0xfc,0xba,
+ 0xd8,0x47,0xc8,0x8d, 0x11,0xf9,0x6a,0x25,
+ 0x22,0xa7,0x7f,0xa9, 0x43,0xe4,0x07,0x6b,
+ 0x49,0x26,0x42,0xe4, 0x03,0x1f,0x56,0xcd,
+ 0xf1,0x49,0xf8,0x0d, 0xea,0x1d,0x4f,0x77,
+ 0x5c,0x3c,0xcd,0x6d, 0x58,0xa8,0x92,0x6d,
+ 0x50,0x4a,0x81,0x6e, 0x09,0x2a,0x15,0x9e,
+ 0x3b,0x56,0xd3,0xb4, 0xef,0xe6,0x12,0xaf,
+ 0x60,0x3b,0x73,0xe7, 0xd8,0x2e,0xab,0x13,
+ 0xfb,0x7e,0xea,0xb1, 0x7b,0x54,0xc5,0x26,
+ 0x41,0x93,0x31,0xda, 0xb5,0x7a,0xe3,0x46,
+ 0x7a,0x8a,0xb0,0x81, 0xab,0xd5,0x90,0x85,
+ 0x4b,0xef,0x30,0x11, 0xb8,0x00,0x19,0x39,
+ 0xd3,0x11,0x54,0x53, 0x48,0x7a,0x7e,0xc5,
+ 0x4e,0x52,0xe5,0x4c, 0xeb,0xa2,0x9f,0x7a,
+ 0xdc,0xb5,0xc8,0x4e, 0x3b,0x5c,0x92,0x0f,
+ 0x19,0xcb,0x0a,0x9d, 0xda,0x01,0xfc,0x17,
+ 0x62,0xc3,0x46,0x63, 0x8b,0x4e,0x85,0x92,
+ 0x75,0x01,0x00,0xb3, 0x74,0xa8,0x23,0xd1,
+ 0xd2,0x91,0x53,0x0f, 0xd0,0xe9,0xed,0x90,
+ 0xde,0x9c,0x8c,0xb7, 0xf1,0x6a,0xd6,0x49,
+ 0x3c,0x22,0x2b,0xd7, 0x73,0x76,0x38,0x79,
+ 0xb5,0x88,0x1e,0xee, 0xdf,0xed,0x9f,0xfd,
+ 0x1a,0x0e,0xe7,0xd5, 0xc6,0xc9,0xfb,0x03,
+ 0xcc,0x84,0xb5,0xd2, 0x49,0xca,0x49,0x0a,
+ 0x1b,0x7c,0x78,0xe4, 0xd1,0x2e,0x7c,0x14,
+ 0x80,0x38,0x9d,0xba, 0x64,0x13,0xd3,0xf8,
+ 0x8e,0x05,0x4a,0xd6, 0x0d,0x73,0x09,0x1e,
+ 0xf1,0x75,0x63,0x59, 0xed,0xfc,0xbe,0x83,
+ 0x56,0x91,0x22,0x84, 0xd2,0x1e,0xf2,0x61,
+ 0x12,0x3d,0x50,0x6c, 0x9f,0xea,0x6b,0xcd,
+ 0x8c,0xac,0x28,0x0d, 0xad,0xf4,0xfd,0x77,
+ 0x45,0x68,0x17,0xb6, 0x03,0x13,0x54,0x7a,
+ 0xc0,0x8e,0x6b,0x56, 0x8a,0xd2,0xc6,0x1b,
+ 0xb3,0x3e,0x4f,0x68, 0x91,0x2e,0x2d,0x35,
+ 0x2a,0x32,0x27,0x86, 0x67,0x36,0x73,0xb8,
+ 0xfc,0x08,0xb8,0xf8, 0x1f,0x67,0x0b,0x32,
+ 0x89,0x00,0xfb,0x2d, 0xbe,0x74,0xae,0x41,
+ 0x3a,0xd3,0xed,0xf1, 0x67,0xee,0xe5,0x26,
+ 0xd4,0x59,0xdc,0x3b, 0x6b,0xf7,0x33,0x67,
+ 0xed,0xef,0xb0,0x5d, 0x5e,0x43,0x34,0xa2,
+ 0x3d,0x55,0x16,0x99, 0x4b,0x90,0x49,0x40,
+ 0x82,0x35,0x0d,0x82, 0xa6,0x16,0xd2,0x41,
+ 0xc8,0x65,0xd4,0xe7, 0x1a,0xdb,0xad,0xe6,
+ 0x48,0x5e,0xeb,0x94, 0xa6,0x9f,0x97,0x1e,
+ 0xd4,0x38,0x5d,0xff, 0x6e,0x17,0x0c,0xd0,
+ 0xb3,0xd5,0xb4,0x06, 0xd7,0xcb,0x8e,0xa3,
+ 0x27,0x75,0x24,0xb5, 0x14,0xe9,0x55,0x94,
+ 0x51,0x14,0xaf,0x15, 0x02,0xd3,0x9c,0x5f,
+ 0x43,0xfe,0x97,0xf4, 0x0b,0x4e,0x4d,0x89,
+ 0x15,0x33,0x4a,0x04, 0x10,0xf3,0xeb,0x13,
+ 0x71,0x86,0xb4,0x8a, 0x2c,0x75,0x04,0x47,
+ 0xb9,0x60,0xe9,0x2a, 0x5a,0xe8,0x7e,0x8b,
+ 0x91,0xa7,0x01,0x49, 0xcf,0xfc,0x48,0x83,
+ 0xa7,0x42,0xc8,0x2f, 0x80,0x92,0x04,0x64,
+ 0x03,0xf7,0x9f,0x1d, 0xc2,0x82,0x0b,0x14,
+ 0x65,0x4d,0x04,0x09, 0x13,0x5f,0xb8,0x66,
+ 0x19,0x14,0x7a,0x09, 0xa7,0xf8,0x73,0x2d,
+ 0x4d,0x90,0x86,0x14, 0x25,0xd6,0xd6,0xf5,
+ 0x82,0x9c,0x32,0xab, 0x5c,0x37,0x12,0x28,
+ 0xd1,0xfe,0xfa,0x0d, 0x90,0x8d,0x28,0x20,
+ 0xb1,0x1e,0xbe,0x30, 0x80,0xd7,0xb1,0x63,
+ 0xd9,0x23,0x83,0x0b, 0x9d,0xf5,0x0e,0x9c,
+ 0xa2,0x88,0x5f,0x2c, 0xf2,0xa6,0x9d,0x23,
+ 0x45,0x1c,0x9b,0x7a, 0xd2,0x60,0xa6,0x0f,
+ 0x44,0xba,0x91,0x3d, 0xc6,0xf7,0xef,0x2f,
+ 0x5c,0xa8,0x5e,0x2b, 0x50,0xd3,0xd1,0x85,
+ 0xfd,0xed,0x52,0x48, 0xe2,0xd9,0xd2,0x12,
+ 0x4e,0x03,0xc9,0x3d, 0x8f,0x8d,0x1f,0x8e,
+ 0x6b,0xd8,0xe3,0x32, 0xa7,0x5b,0x39,0x57,
+ 0x91,0x08,0x52,0x09, 0xa4,0x7a,0x40,0xc6,
+ 0xcf,0xcf,0x68,0xba, 0xb1,0x97,0xf8,0x38,
+ 0x94,0x1d,0x18,0x69, 0x80,0x6a,0x11,0x15,
+ 0xc2,0xfb,0x2d,0x6c, 0xd1,0xd4,0x88,0x50,
+ 0xbb,0xca,0x8c,0x56, 0x36,0xb6,0xc4,0x41,
+ 0x97,0xe6,0xb0,0x5c, 0x7f,0x51,0x00,0x6f,
+ 0x17,0xe5,0xde,0x27, 0xf7,0xb4,0x85,0x3b,
+ 0xc5,0xa1,0x60,0x1c, 0xba,0x21,0xd6,0xed,
+ 0xd5,0x08,0x62,0x80, 0xb4,0x85,0x52,0x15,
+ 0x5c,0x94,0x19,0x3a, 0x10,0x92,0xa4,0x06,
+ 0xf1,0x86,0x02,0xce, 0x94,0xd3,0xd5,0x33,
+ 0xe7,0x59,0x47,0x72, 0x12,0xf4,0x8b,0x06,
+ 0x29,0xa3,0xb0,0x39, 0x78,0x8f,0x46,0x56,
+ 0x4a,0x42,0x4f,0x89, 0x1b,0x3f,0x09,0x12,
+ 0xc4,0x24,0x0b,0x22, 0xf0,0x27,0x04,0x4d,
+ 0x39,0xd8,0x59,0xc8, 0x7c,0x59,0x18,0x0a,
+ 0x36,0xa8,0x3c,0xba, 0x42,0xe2,0xf7,0x7a,
+ 0x23,0x90,0x73,0xff, 0xd6,0xa3,0xb2,0xcf,
+ 0x60,0xc6,0x62,0x76, 0x61,0xa3,0xcd,0x53,
+ 0x94,0x37,0x3c,0x24, 0x4b,0xc1,0xc5,0x3b,
+ 0x26,0xf8,0x67,0x1d, 0xca,0xdd,0x08,0xcb,
+ 0xdb,0x00,0x96,0x34, 0xd0,0x5d,0xef,0x4e,
+ 0x64,0x18,0xb1,0xdc, 0x46,0x13,0xc1,0x8c,
+ 0x87,0xbf,0xa3,0xfe, 0xd7,0x49,0x7e,0xb3,
+ 0x94,0xe4,0x38,0x70, 0x2a,0xde,0xaf,0x73,
+ 0x46,0xda,0xff,0xec, 0xfc,0x18,0xe2,0x02,
+ 0x64,0x5f,0x9b,0xd2, 0xdf,0x8b,0xa8,0xd0,
+ 0x4c,0xd7,0x5c,0xc7, 0x80,0x59,0x4d,0x66,
+ 0x68,0xd3,0x4a,0x51, 0xc3,0x68,0xe2,0x0a,
+ 0x17,0x31,0x4b,0xd7, 0x23,0x28,0x25,0x26,
+ 0x4a,0xef,0x02,0xd7, 0x3a,0x53,0xdb,0x09,
+ 0x19,0x85,0x68,0xab, 0xa9,0x8c,0xff,0x7e,
+ 0x30,0xfb,0x42,0x08, 0xa1,0x5a,0xd1,0xc9,
+ 0x3f,0xc9,0x00,0xfb, 0xd4,0x3e,0xb0,0x1c,
+ 0x99,0xba,0xdc,0xb4, 0x69,0xe7,0xe1,0xb0,
+ 0x67,0x53,0x46,0xa6, 0xc6,0x34,0x5c,0x94,
+ 0xfa,0xd3,0x9b,0x48, 0x92,0xa1,0xd3,0xe5,
+ 0xa7,0xea,0xe1,0x86, 0x5e,0x90,0x26,0x2d,
+ 0x4b,0x85,0xe1,0x68, 0xee,0xc2,0xf1,0x25,
+ 0xb7,0xff,0x01,0x96, 0x61,0x54,0xba,0xf3,
+ 0x09,0x62,0x7f,0xa3, 0x92,0x6b,0xe7,0x00,
+ 0xfc,0xd4,0x04,0xfd, 0x2d,0x42,0x7e,0x56,
+ 0x91,0x33,0x6e,0xf8, 0x08,0x94,0xff,0xce,
+ 0x03,0x7e,0x4d,0x0a, 0x91,0x41,0x4f,0xaa,
+ 0xdd,0xd1,0x8c,0x34, 0x99,0x46,0xb5,0xfb,
+ 0x0e,0x09,0x26,0xcc, 0x6d,0x35,0x58,0x0a,
+ 0xc6,0xc0,0x89,0xa0, 0xbd,0xb6,0x89,0xd1,
+ 0x51,0x64,0x85,0x96, 0x4d,0x6a,0x16,0x26,
+ 0x30,0xb7,0xb3,0xe4, 0x80,0x46,0xaa,0x37,
+ 0x4c,0x9b,0x2b,0xa3, 0x76,0x5e,0x8b,0x52,
+ 0x13,0x42,0xe5,0xe3, 0xa8,0xe9,0xaf,0x83,
+ 0x60,0xc0,0xb0,0xf8, 0x3d,0x82,0x0a,0x21,
+ 0x60,0xd2,0x3f,0x1c, 0xb4,0xb5,0x53,0x31,
+ 0x2e,0x16,0xfd,0xf3, 0xc3,0x46,0xfa,0xcc,
+ 0x45,0x1f,0xd1,0xac, 0x22,0xe2,0x41,0xb5,
+ 0x21,0xf3,0xdd,0x1f, 0x81,0xbf,0x03,0xaf,
+ 0xd6,0x31,0xc1,0x6a, 0x2e,0xff,0xc1,0x2d,
+ 0x44,0x53,0xd0,0xb5, 0xa2,0x7c,0x5f,0xf4,
+ 0x47,0xf7,0x4d,0x1e, 0x77,0xe2,0x29,0xcc,
+ 0xd2,0x46,0x85,0xfa, 0xdb,0x7f,0x46,0xf5,
+ 0xc9,0x60,0x4a,0x2c, 0xb7,0xf2,0xa2,0x2c,
+ 0x9d,0x76,0xcd,0x82, 0x67,0xae,0xbb,0xe0,
+ 0x92,0x56,0x48,0xcb, 0xe5,0xf5,0x3c,0x2c,
+ 0xe0,0xe8,0x6a,0x6a, 0x5a,0x0a,0x20,0x7c,
+ 0xa6,0x9d,0x8e,0x84, 0xfa,0xfe,0x61,0x13,
+ 0x54,0x79,0xe0,0x83, 0xd2,0x15,0xe0,0x33,
+ 0xe4,0xf9,0xad,0xb8, 0x1e,0x75,0x35,0xd3,
+ 0xee,0x7e,0x4a,0x63, 0x2f,0xeb,0xf1,0xe6,
+ 0x22,0xac,0x77,0x74, 0xa1,0xc0,0xa0,0x21,
+ 0x66,0x59,0x7c,0x48, 0x7f,0xaa,0x05,0xe8,
+ 0x51,0xd9,0xc7,0xed, 0xb9,0xea,0x7a,0xdd,
+ 0x23,0x53,0xea,0x8f, 0xef,0xaa,0xe6,0x9e,
+ 0x19,0x21,0x84,0x27, 0xc5,0x78,0x2e,0x8c,
+ 0x52,0x40,0x15,0x1c, 0x2b,0x91,0xb3,0x4c,
+ 0xe8,0xfa,0xd3,0x64, 0x0f,0xf9,0xf4,0xb8,
+ 0x59,0x4d,0x6b,0x2d, 0x44,0x6c,0x8d,0xb2,
+ 0xdb,0x73,0x29,0x66, 0xb1,0xc2,0x28,0xfc,
+ 0x85,0xba,0x60,0x5e, 0x27,0x8f,0xfb,0xb3,
+ 0xc9,0x20,0x43,0xb1, 0x3e,0x18,0x97,0x42,
+ 0x63,0x2d,0x0c,0x97, 0xf2,0xcc,0xcd,0x90,
+ 0x46,0x5f,0x1a,0x85, 0xca,0x44,0x2a,0x1a,
+ 0x52,0xf7,0xbb,0x4e, 0xd1,0xab,0xd5,0xa3,
+ 0x58,0x6b,0xb6,0x5a, 0x88,0x1c,0x9d,0x3b,
+ 0xe2,0x46,0xe4,0x3b, 0x33,0x64,0x6c,0xfd,
+ 0xeb,0x36,0x8e,0x32, 0x1f,0x71,0xbd,0x95,
+ 0xb6,0xfd,0x1a,0xcb, 0xfb,0x4a,0x88,0x27,
+ 0xd6,0x28,0x7b,0x5e, 0xa3,0x8a,0x0c,0x36,
+ 0xa8,0x5d,0x2f,0x28, 0xa9,0xad,0xb2,0x88,
+ 0x9e,0x62,0x9d,0x4a, 0x07,0x74,0x00,0x04,
+ 0x0c,0xc1,0x6a,0x09, 0xe1,0x0b,0xfa,0xf3,
+ 0xd1,0x41,0xdd,0x94, 0x52,0x06,0xb8,0x9e,
+ 0xba,0x81,0xe0,0x52, 0xdf,0x52,0x5d,0x74,
+ 0x40,0x59,0x36,0x05, 0xf2,0x30,0xc4,0x84,
+ 0x85,0xdc,0xb8,0xba, 0xd9,0xf4,0x5f,0x11,
+ 0x83,0xce,0x25,0x57, 0x97,0xf5,0x0f,0xb5,
+ 0x0b,0xd6,0x6d,0x1c, 0xfb,0xf2,0x30,0xda,
+ 0xc2,0x05,0xa8,0xe1, 0xc2,0x57,0x0a,0x05,
+ 0x2d,0x4c,0x8b,0xb7, 0x5a,0xc0,0x8a,0xba,
+ 0xa9,0x85,0x7c,0xf0, 0xb8,0xce,0x72,0x79,
+ 0xf5,0x27,0x99,0xd7, 0xed,0xcf,0x85,0xfa,
+ 0x92,0x15,0xf1,0x47, 0x02,0x24,0x39,0x07,
+ 0x89,0xb6,0xdd,0x4a, 0xb8,0xbc,0xd5,0x9d,
+ 0x4c,0x03,0x8b,0x1d, 0x45,0x58,0x1c,0x86,
+ 0x46,0x71,0x0a,0x0d, 0x7c,0x5b,0xf9,0xdc,
+ 0x60,0xb5,0xb0,0x00, 0x70,0x47,0x83,0xa6,
+ 0x8e,0x79,0xba,0x1d, 0x21,0x20,0xc0,0x24,
+ 0x56,0x35,0x6a,0x49, 0xb6,0xa3,0x58,0x87,
+ 0x16,0xae,0xd9,0x77, 0x62,0xa0,0x61,0xce,
+ 0x3d,0xe6,0x77,0x9e, 0x83,0xec,0xc2,0x04,
+ 0x8c,0xba,0x62,0xac, 0x32,0xda,0xf0,0x89,
+ 0x7b,0x2b,0xb0,0xa3, 0x3a,0x5f,0x8b,0x0d,
+ 0xbd,0xe9,0x14,0xcd, 0x5b,0x7a,0xde,0xd5,
+ 0x0d,0xc3,0x4b,0x38, 0x92,0x31,0x97,0xd8,
+ 0xae,0x89,0x17,0x2c, 0xc9,0x54,0x96,0x66,
+ 0xd0,0x9f,0x60,0x7a, 0x7d,0x63,0x67,0xfc,
+ 0xb6,0x02,0xce,0xcc, 0x97,0x36,0x9c,0x3c,
+ 0x1e,0x69,0x3e,0xdb, 0x54,0x84,0x0a,0x77,
+ 0x6d,0x0b,0x6e,0x10, 0x9f,0xfb,0x2a,0xb1,
+ 0x49,0x31,0x71,0xf2, 0xd1,0x1e,0xea,0x87,
+ 0xb9,0xd6,0x4a,0x4c, 0x57,0x17,0xbc,0x8b,
+ 0x38,0x66,0x2d,0x5f, 0x25,0xca,0x6d,0x10,
+ 0xc6,0x2e,0xd7,0x2c, 0x89,0xf1,0x4c,0x1d,
+ 0xc9,0x9c,0x02,0x23, 0xc6,0x1f,0xd6,0xc3,
+ 0xb8,0xc7,0x85,0x29, 0x75,0x40,0x1e,0x04,
+ 0x6e,0xc7,0xb4,0x60, 0xfc,0xea,0x30,0x8b,
+ 0x4d,0x9d,0xb7,0x5d, 0x91,0xfb,0x8e,0xb8,
+ 0xc2,0x54,0xdf,0xdb, 0x79,0x58,0x32,0xda,
+ 0xd0,0xa1,0xd6,0xd6, 0xc4,0xc8,0xa4,0x16,
+ 0x95,0xbb,0xe5,0x58, 0xd2,0xb6,0x83,0x76,
+ 0x1d,0xd7,0x45,0xbc, 0xb8,0x14,0x79,0x3b,
+ 0x4e,0x1a,0x0b,0x5c, 0xfc,0xa5,0xa0,0xc3,
+ 0xf1,0x64,0x74,0xb0, 0x0d,0x82,0x90,0x62,
+ 0x87,0x02,0x0f,0x71, 0xc7,0xab,0x7d,0x2b,
+ 0x70,0xf1,0x9b,0x9e, 0xe7,0x6b,0x99,0x18,
+ 0x6c,0x54,0x17,0x0b, 0xf5,0x44,0x58,0x54,
+ 0x44,0x9b,0x54,0x30, 0x5e,0xaf,0xa6,0xfa,
+ 0x42,0x37,0xe8,0x67, 0xbf,0xf7,0x6c,0x1e,
+ 0x73,0xd8,0xc7,0x5c, 0xfa,0x51,0xd5,0x1f,
+ 0xab,0xfc,0x91,0x03, 0xc1,0xc1,0x22,0x58,
+ 0xc7,0xe8,0x60,0xae, 0xb6,0x58,0x44,0xad,
+ 0x1e,0x07,0x5d,0x3c, 0x90,0x33,0x43,0xe0,
+ 0x67,0x44,0x9f,0x8c, 0xf3,0xef,0xce,0x3a,
+ 0x22,0x2b,0x1b,0x97, 0x83,0x6f,0x9f,0xd3,
+ 0x46,0xc3,0xa1,0xdf, 0xde,0x60,0xf0,0x32,
+ 0x2e,0xcf,0xed,0x72, 0x27,0x0d,0xa7,0xd0,
+ 0x91,0x6a,0xf0,0x6d, 0x41,0xfa,0x77,0x2e,
+ 0xd8,0x43,0xce,0xe2, 0xf5,0x7a,0x9e,0x04,
+ 0x30,0x4c,0xe7,0x08, 0xf3,0x2e,0x13,0x05,
+ 0x5e,0xfa,0x16,0x2c, 0x6c,0x53,0x02,0xb5,
+ 0x2f,0x2c,0x7d,0x86, 0x61,0x0e,0x5f,0x96,
+ 0xe1,0x1c,0x37,0x87, 0xf0,0x84,0xe4,0x1d,
+ 0x53,0x4d,0xb1,0x13, 0xe2,0xcb,0x71,0x6e,
+ 0x86,0x7b,0xad,0x97, 0x3e,0x16,0xb3,0xb4,
+ 0x0f,0x32,0x01,0x69, 0x31,0x1f,0x49,0x99,
+ 0x7a,0x46,0xd9,0x9b, 0x5f,0x17,0x3d,0xcb,
+ 0xe4,0xfd,0xbc,0xbb, 0xe3,0xec,0x8c,0x54,
+ 0xc4,0x14,0x44,0x89, 0xa3,0x65,0x25,0xc0,
+ 0x06,0x9b,0x7d,0x9b, 0x7f,0x15,0x8f,0x84,
+ 0xe1,0x08,0x0d,0x2c, 0x0a,0x91,0x9a,0x85,
+ 0x4e,0xa1,0x50,0xee, 0x72,0x70,0xf4,0xd2,
+ 0x1c,0x67,0x20,0x1f, 0xe6,0xb2,0x9d,0x95,
+ 0x85,0x7e,0xf2,0x9d, 0xf0,0x73,0x10,0xe7,
+ 0xfc,0x62,0x9d,0xea, 0x8d,0x63,0xdc,0x70,
+ 0xe0,0x2b,0x30,0x01, 0x7c,0xcd,0x24,0x22,
+ 0x03,0xf9,0x8b,0xe4, 0x77,0xef,0x2c,0xdc,
+ 0xa5,0xfb,0x29,0x66, 0x50,0x1c,0xd7,0x4e,
+ 0x8f,0x0f,0xbf,0x61, 0x0c,0xea,0xc0,0xe6,
+ 0xc6,0xc3,0xa1,0xae, 0xf3,0xea,0x4c,0xfb,
+ 0x21,0x96,0xd1,0x38, 0x64,0xe0,0xdd,0xa8,
+ 0xa4,0xd0,0x33,0x82, 0xf0,0xdd,0x91,0x6e,
+ 0x88,0x27,0xe1,0x0d, 0x8b,0xfb,0xc6,0x36,
+ 0xc5,0x9a,0x9d,0xbc, 0x32,0x8f,0x8a,0x3a,
+ 0xfb,0xd0,0x88,0x1e, 0xe5,0xb8,0x68,0x35,
+ 0x4b,0x22,0x72,0x55, 0x9e,0x77,0x39,0x1d,
+ 0x64,0x81,0x6e,0xfd, 0xe3,0x29,0xb8,0xa5,
+ 0x3e,0xc8,0x4c,0x6f, 0x41,0xc2,0xbd,0xb6,
+ 0x15,0xd1,0xd5,0xe9, 0x77,0x97,0xb6,0x54,
+ 0x9e,0x60,0xdd,0xf3, 0x48,0xdb,0x65,0x04,
+ 0x54,0xa2,0x93,0x12, 0xf0,0x66,0x6c,0xae,
+ 0xa2,0x2c,0xb9,0xeb, 0xf0,0x7c,0x9c,0xae,
+ 0x8e,0x49,0xf5,0x0f, 0xfc,0x4b,0x2a,0xdb,
+ 0xaf,0xff,0x96,0x0d, 0xa6,0x05,0xe9,0x37,
+ 0x81,0x43,0x41,0xb2, 0x69,0x88,0xd5,0x2c,
+ 0xa2,0xa9,0x9b,0xf2, 0xf1,0x77,0x68,0x05,
+ 0x84,0x0f,0x6a,0xee, 0xd0,0xb5,0x65,0x4b,
+ 0x35,0x18,0xeb,0x34, 0xba,0x09,0x4f,0xc3,
+ 0x5a,0xac,0x44,0x5b, 0x03,0xf5,0xf5,0x1d,
+ 0x10,0x04,0xfd,0xb5, 0xc4,0x26,0x84,0x13,
+ 0x8a,0xde,0x8d,0xbb, 0x51,0xd0,0x6f,0x58,
+ 0xc1,0xe5,0x9e,0x12, 0xe6,0xba,0x13,0x73,
+ 0x27,0x3e,0x3f,0xf0, 0x4f,0x0f,0x64,0x6c,
+ 0x0e,0x36,0xe9,0xcc, 0x38,0x93,0x9b,0xda,
+ 0xf9,0xfd,0xc2,0xe9, 0x44,0x7a,0x93,0xa6,
+ 0x73,0xf6,0x2a,0xc0, 0x21,0x42,0xbc,0x58,
+ 0x9e,0xe3,0x0c,0x6f, 0xa1,0xd0,0xdd,0x67,
+ 0x14,0x3d,0x49,0xf1, 0x5b,0xc3,0xc3,0xa4,
+ 0x52,0xa3,0xe7,0x0f, 0xb4,0x26,0xf4,0x62,
+ 0x73,0xf5,0x9f,0x75, 0x5b,0x6e,0x38,0xc8,
+ 0x4a,0xcc,0xf6,0xfa, 0xcf,0xfb,0x28,0x02,
+ 0x8a,0xdb,0x6b,0x63, 0x52,0x17,0x94,0x87,
+ 0x71,0xa2,0xf5,0x5a, 0x1d,0x94,0xe3,0xcd,
+ 0x28,0x70,0x96,0xd5, 0xb1,0xaf,0xec,0xd6,
+ 0xea,0xf4,0xfc,0xe9, 0x10,0x66,0xd9,0x8a,
+ 0x1e,0x03,0x03,0xf1, 0x54,0x2d,0xc5,0x8c,
+ 0x85,0x71,0xed,0xa7, 0xa4,0x1e,0x5a,0xff,
+ 0xab,0xb8,0x07,0xb3, 0x0b,0x84,0x00,0x0a,
+ 0x7f,0xa5,0x38,0x20, 0x66,0x33,0x84,0x2f,
+ 0xec,0x16,0x94,0x78, 0xa8,0x42,0x98,0x55,
+ 0xa3,0xe5,0xd3,0x62, 0x2a,0xfc,0xed,0xec,
+ 0x7a,0x96,0x41,0x35, 0xc0,0xd2,0xe6,0x53,
+ 0xf8,0x0f,0x59,0x94, 0x0a,0xa0,0x50,0xef,
+ 0x0d,0x9f,0x04,0x1c, 0x5f,0x48,0xfe,0x33,
+ 0x20,0xca,0x8d,0x09, 0xdd,0x0b,0xf8,0x59,
+ 0xd3,0x63,0x8a,0xa4, 0xf5,0x73,0x6b,0x3e,
+ 0x7e,0x0f,0xff,0xdb, 0x96,0x62,0x4d,0x3a,
+ 0xdb,0x8d,0x8c,0x9b, 0x8c,0xb3,0xa1,0xff,
+ 0x16,0xb9,0x2c,0x8c, 0xf6,0xbb,0x0d,0x9e,
+ 0x6f,0xff,0x24,0x6f, 0x59,0xee,0x02,0xe6,
+ 0x57,0x38,0xbd,0x5f, 0xbd,0xd4,0xe5,0x74,
+ 0x14,0xea,0x85,0xbb, 0x0c,0xfe,0xad,0xad,
+ 0x98,0x82,0x8a,0x81, 0x0b,0x37,0xdc,0x7d,
+ 0xda,0x13,0x74,0x8a, 0xa5,0xaf,0x74,0x82,
+ 0x95,0x35,0x1f,0x0b, 0x03,0x88,0x17,0xf3,
+ 0x67,0x11,0x40,0xd1, 0x9d,0x48,0xec,0x9b,
+ 0xc8,0xb2,0xcc,0xb4, 0x93,0xd2,0x0b,0x0a,
+ 0xd6,0x6f,0x34,0x32, 0xd1,0x9a,0x0d,0x89,
+ 0x93,0x1f,0x96,0x5a, 0x7a,0x57,0x06,0x02,
+ 0x1d,0xbf,0x57,0x3c, 0x9e,0xca,0x5d,0x68,
+ 0xe8,0x4e,0xea,0x4f, 0x0b,0x11,0xf0,0x35,
+ 0x73,0x5a,0x77,0x24, 0x29,0xc3,0x60,0x51,
+ 0xf0,0x15,0x93,0x45, 0x6b,0xb1,0x70,0xe0,
+ 0xda,0xf7,0xf4,0x0a, 0x70,0xd1,0x73,0x3f,
+ 0x9c,0x9d,0x07,0x19, 0xad,0xb2,0x28,0xae,
+ 0xf2,0xe2,0xb6,0xf4, 0xbc,0x71,0x63,0x00,
+ 0xde,0xe3,0xdc,0xb1, 0xa3,0xd5,0x4c,0x34,
+ 0xf8,0x6b,0x68,0x4c, 0x73,0x84,0xab,0xd4,
+ 0x89,0xae,0x07,0x1a, 0x0d,0x3d,0x8e,0xaa,
+ 0x6c,0xa2,0x54,0xb3, 0xd9,0x46,0x81,0x87,
+ 0xe2,0xdc,0x49,0xb1, 0x14,0x5c,0xcc,0x72,
+ 0x56,0xf0,0x0f,0xa9, 0x3d,0x31,0x2f,0x08,
+ 0xbc,0x15,0xb7,0xd3, 0x0d,0x4f,0xd1,0xc9,
+ 0x4e,0xde,0x1c,0x03, 0xd1,0xae,0xaf,0x14,
+ 0x62,0xbc,0x1f,0x33, 0x5c,0x00,0xeb,0xf4,
+ 0x8e,0xf6,0x3e,0x13, 0x6a,0x64,0x42,0x07,
+ 0x60,0x71,0x35,0xf1, 0xd0,0xff,0x8d,0x1f,
+ 0x88,0xc0,0x1c,0x3c, 0x6c,0x1c,0x54,0x71,
+ 0x6b,0x65,0x4a,0xe2, 0xe3,0x5f,0x77,0x56,
+ 0x1c,0x8d,0x2a,0x8d, 0xef,0x92,0x4a,0xa9,
+ 0xf6,0xcf,0xa5,0x67, 0x89,0x8e,0x5a,0xd9,
+ 0x60,0xaa,0x94,0x14, 0x55,0x66,0x8a,0xb0,
+ 0x18,0x4f,0x9e,0x8e, 0xf4,0xdb,0xc1,0x88,
+ 0x9b,0xf0,0x84,0x33, 0x2f,0xcd,0x2c,0xeb,
+ 0x65,0xe6,0x5d,0xde, 0x30,0x97,0xad,0xe6,
+ 0xbc,0xcb,0x83,0x93, 0xf3,0xfd,0x65,0xdc,
+ 0x07,0x27,0xf9,0x0f, 0x4a,0x56,0x5c,0xf7,
+ 0xff,0xa3,0xd1,0xad, 0xd4,0xd1,0x38,0x13,
+ 0x71,0xc9,0x42,0x0f, 0x0d,0x35,0x12,0x32,
+ 0xd2,0x2d,0x2b,0x96, 0xe4,0x01,0xdc,0x55,
+ 0xd8,0x71,0x2c,0x0c, 0xc4,0x55,0x3f,0x16,
+ 0xe8,0xaa,0xe7,0xe8, 0x45,0xfa,0x23,0x23,
+ 0x5e,0x21,0x02,0xab, 0xc8,0x6b,0x88,0x5e,
+ 0xdc,0x90,0x13,0xb5, 0xe7,0x47,0xfa,0x12,
+ 0xd5,0xa7,0x0a,0x06, 0xd2,0x7c,0x62,0x80,
+ 0xb7,0x8e,0x4f,0x77, 0x88,0xb7,0xa2,0x12,
+ 0xdb,0x19,0x1f,0xd8, 0x00,0x82,0xf5,0xf2,
+ 0x59,0x34,0xec,0x91, 0xa8,0xc1,0xd7,0x6e,
+ 0x76,0x10,0xf3,0x15, 0xa6,0x86,0xfa,0xfd,
+ 0x45,0x2f,0x86,0x18, 0x16,0x83,0x16,0x8c,
+ 0x6e,0x99,0x7e,0x43, 0x3f,0x0a,0xba,0x32,
+ 0x94,0x5b,0x15,0x32, 0x66,0xc2,0x3a,0xdc,
+ 0xf3,0xd3,0x1d,0xd1, 0x5d,0x6f,0x5f,0x9a,
+ 0x7f,0xa2,0x90,0xf1, 0xa1,0xd0,0x17,0x33,
+ 0xdf,0x9a,0x2e,0xa2, 0xdc,0x89,0xe6,0xb0,
+ 0xda,0x23,0x2b,0xf6, 0xe9,0x1f,0x82,0x3c,
+ 0x07,0x90,0xab,0x3a, 0xb9,0x87,0xb0,0x02,
+ 0xcc,0xb9,0xe7,0x2e, 0xe7,0xc6,0xee,0xfa,
+ 0xe2,0x16,0xc8,0xc3, 0xd0,0x40,0x15,0xc5,
+ 0xa7,0xc8,0x20,0x42, 0xb7,0x09,0xf8,0x66,
+ 0xeb,0x0e,0x4b,0xd7, 0x91,0x74,0xa3,0x8b,
+ 0x17,0x2a,0x0c,0xee, 0x7f,0xc1,0xea,0x63,
+ 0xc6,0x3c,0x1e,0xea, 0x8b,0xa2,0xd1,0x2e,
+ 0xf3,0xa6,0x0f,0x36, 0xff,0xdd,0x81,0x06,
+ 0xe3,0x63,0xfc,0x0c, 0x38,0xb0,0x23,0xfb,
+ 0x83,0x66,0x81,0x73, 0x5c,0x0b,0x9c,0xd4,
+ 0x23,0xdc,0x7f,0x5c, 0x00,0x8c,0xa6,0xa7,
+ 0x52,0xd4,0xc1,0x00, 0xea,0x99,0x6b,0x59,
+ 0x19,0x8e,0x34,0x32, 0x24,0xea,0x0c,0x61,
+ 0x95,0x9d,0xdb,0xf0, 0x63,0xcc,0xa9,0xfd,
+ 0x1b,0xeb,0xd7,0xbc, 0x0c,0xa4,0x74,0x24,
+ 0xfd,0xfa,0x32,0x58, 0xe3,0x74,0x1c,0x8f,
+ 0x76,0xa6,0x53,0x0d, 0xea,0xde,0x50,0x92,
+ 0xbd,0x3f,0x3d,0x56, 0x8f,0x48,0x4e,0xb7,
+ 0x8c,0x5e,0x83,0x2c, 0xf7,0xec,0x04,0x2c,
+ 0x35,0xdf,0xa9,0x72, 0xc0,0x77,0xf5,0x44,
+ 0xe5,0xa7,0x56,0x3e, 0xa4,0x8d,0xb8,0x6e,
+ 0x31,0x86,0x15,0x1d, 0xc4,0x66,0x86,0x75,
+ 0xf8,0x1a,0xea,0x2f, 0x3a,0xb7,0xbf,0x97,
+ 0xe9,0x11,0x53,0x64, 0xa8,0x71,0xc6,0x78,
+ 0x8a,0x70,0xb5,0x18, 0xd7,0x9c,0xe3,0x44,
+ 0x1a,0x7c,0x6b,0x1b, 0x41,0xe1,0x1c,0x0d,
+ 0x98,0x43,0x67,0x28, 0xb8,0x14,0xb4,0x48,
+ 0x01,0x85,0x79,0x20, 0x94,0x36,0x25,0x3a,
+ 0x5c,0x48,0xd2,0x2e, 0x91,0x91,0xfd,0x85,
+ 0x38,0xc1,0xc5,0xa5, 0x4d,0x52,0x1f,0xb4,
+ 0xe7,0x44,0x7a,0xff, 0xb1,0x65,0xdf,0x53,
+ 0x86,0x2a,0xff,0x25, 0x2b,0xeb,0x3e,0xdc,
+ 0x3d,0xec,0x72,0xae, 0xa9,0xd1,0xdf,0xe9,
+ 0x4a,0x3e,0xe8,0xf1, 0x74,0xe0,0xee,0xd6,
+ 0x0b,0xba,0x9b,0x14, 0x9b,0x0c,0x4a,0xf9,
+ 0x55,0xee,0x7e,0x82, 0xa4,0xb5,0xa5,0xb7,
+ 0x2f,0x75,0x48,0x51, 0x60,0xcc,0x41,0x8e,
+ 0x65,0xe3,0xb7,0x29, 0xe0,0x32,0xe7,0x1b,
+ 0x2f,0xa0,0x80,0xce, 0x73,0x28,0x6c,0xf4,
+ 0xd0,0xc7,0x05,0x69, 0xbd,0x3e,0x2e,0x77,
+ 0x1a,0x7f,0x9a,0x98, 0x60,0x31,0xdb,0x47,
+ 0xc2,0xa2,0x12,0xcb, 0x8c,0x35,0xff,0x58,
+ 0xe3,0x07,0x22,0xe4, 0x2f,0x26,0x87,0x30,
+ 0x16,0xea,0x64,0x4f, 0x44,0x64,0x3d,0xe4,
+ 0x7b,0x41,0x06,0xca, 0xee,0x02,0xcf,0xf3,
+ 0x26,0x4c,0xfe,0x9c, 0xf6,0x64,0x96,0xd4,
+ 0xd9,0x7e,0x04,0x47, 0x1d,0xdb,0xc7,0x8c,
+ 0xae,0xd7,0x9d,0xea, 0xe3,0x3a,0xee,0x24,
+ 0xa9,0x2d,0x65,0xba, 0xd5,0x9f,0x38,0x81,
+ 0x61,0x42,0x15,0xdf, 0xcc,0x29,0xd9,0xf7,
+ 0xd4,0x30,0xb9,0xc9, 0x86,0x76,0xdc,0xee,
+ 0xa5,0x27,0xa6,0x27, 0xa3,0xbb,0x8f,0x3b,
+ 0xaa,0xca,0x01,0x52, 0x37,0x12,0xc0,0x55,
+ 0x39,0x4a,0xb2,0xce, 0x85,0x73,0xf2,0x10,
+ 0x9c,0x7f,0xa6,0x34, 0x7f,0x0f,0x69,0x63,
+ 0x03,0xc4,0xde,0xe2, 0x7b,0x10,0xbf,0x91,
+ 0x3e,0x7e,0xad,0xb7, 0xa8,0x85,0xc7,0x99,
+ 0xae,0x8e,0x7c,0x2e, 0x02,0x25,0x5b,0xd5,
+ 0xf4,0x46,0xd1,0x49, 0x48,0xa0,0x12,0x6a,
+ 0x6a,0x01,0x23,0xb9, 0x7e,0x67,0x8b,0x48,
+ 0xac,0xf7,0x88,0x88, 0xeb,0xd9,0x39,0x3a,
+ 0xc8,0xa0,0x06,0xd9, 0x0b,0x80,0xc4,0x84,
+ },
+ .c = {
+ 0x10,0x46,0xb6,0xc8, 0xaa,0x83,0x67,0x7b,
+ 0xc5,0x9a,0x9a,0x0d, 0xe2,0xec,0x6f,0x9a,
+ 0x3e,0x74,0xa7,0xfa, 0x43,0x93,0x9d,0xc5,
+ 0x23,0x27,0xad,0x99, 0x74,0xb4,0xc0,0xe4,
+ 0xd7,0x70,0x5c,0x95, 0x58,0xe3,0x8f,0x72,
+ 0xe3,0x03,0x3d,0xc2, 0xd9,0x69,0x37,0x3e,
+ 0x8e,0x2a,0x0c,0x2b, 0x75,0x59,0x05,0x18,
+ 0x4a,0x50,0x67,0xd4, 0xf5,0x4b,0xb0,0x59,
+ 0x08,0xaf,0xbc,0x6f, 0xb1,0x95,0xa1,0x32,
+ 0xe7,0x77,0x1a,0xfd, 0xaf,0xe8,0x4d,0x32,
+ 0x87,0x9c,0x87,0x90, 0x5e,0xe8,0x08,0xc3,
+ 0xb4,0x0c,0x80,0x9a, 0x9e,0x23,0xeb,0x5a,
+ 0x5c,0x18,0x4a,0x7c, 0xd0,0x4a,0x91,0x57,
+ 0x7e,0x6c,0x53,0xde, 0x98,0xc0,0x09,0x80,
+ 0x8d,0x41,0x0b,0xbc, 0x56,0x5e,0x69,0x61,
+ 0xd3,0x56,0x48,0x43, 0x19,0x49,0x49,0xaf,
+ 0xcf,0xad,0x98,0x3e, 0x88,0x4b,0x44,0x69,
+ 0x73,0xd2,0xcb,0xdf, 0x30,0xdb,0x76,0x1d,
+ 0xfb,0x4b,0xc5,0x66, 0x22,0x34,0x6f,0x07,
+ 0x0b,0xcd,0x1c,0xed, 0x88,0xd9,0x0d,0x30,
+ 0xe9,0x96,0xcb,0xf5, 0xde,0x57,0x5f,0x0b,
+ 0x12,0x11,0xcf,0x52, 0xf5,0x0d,0xf8,0x29,
+ 0x39,0x87,0xb2,0xa5, 0x7f,0x7a,0x2b,0x9d,
+ 0x66,0x11,0x32,0xf4, 0xd4,0x37,0x16,0x75,
+ 0xe3,0x0b,0x55,0x98, 0x44,0x6f,0xc7,0x5c,
+ 0xd4,0x89,0xf8,0xb3, 0xee,0xe4,0x5e,0x45,
+ 0x34,0xc2,0xc0,0xef, 0xdd,0x4d,0xbb,0xb4,
+ 0x0a,0x7b,0xda,0xe3, 0x6e,0x41,0xe1,0xb4,
+ 0x73,0xf8,0x9b,0x65, 0x1c,0x5f,0xdf,0x9c,
+ 0xd7,0x71,0x91,0x72, 0x6f,0x9e,0x8f,0x96,
+ 0x5d,0x45,0x11,0xd1, 0xb9,0x99,0x63,0x50,
+ 0xda,0x36,0xe9,0x75, 0x21,0x9a,0xce,0xc5,
+ 0x1a,0x8a,0x12,0x81, 0x8b,0xeb,0x51,0x7c,
+ 0x00,0x5f,0x58,0x5a, 0x3e,0x65,0x10,0x9e,
+ 0xe3,0x9e,0xf0,0x6b, 0xfe,0x49,0x50,0x2a,
+ 0x2a,0x3b,0xa5,0x42, 0x1b,0x15,0x2b,0x5b,
+ 0x88,0xb8,0xfb,0x6f, 0x0c,0x5d,0x16,0x76,
+ 0x48,0x77,0x4d,0x22, 0xb9,0xf0,0x0a,0x3f,
+ 0xa6,0xdd,0xc8,0x32, 0xcc,0x98,0x76,0x41,
+ 0x84,0x36,0x24,0x6d, 0x88,0x62,0x65,0x40,
+ 0xa4,0x55,0xdc,0x39, 0x74,0xed,0x0f,0x50,
+ 0x08,0xcf,0x69,0x5f, 0x1d,0x31,0xd6,0xb4,
+ 0x39,0x94,0x5b,0x18, 0x88,0x0f,0xcb,0x56,
+ 0xfb,0xf7,0x19,0xe0, 0x80,0xe0,0x4f,0x67,
+ 0x9c,0xab,0x35,0x78, 0xc9,0xca,0x95,0xfa,
+ 0x31,0xf0,0x5f,0xa6, 0xf9,0x71,0xbd,0x7f,
+ 0xb1,0xe2,0x42,0x67, 0x9d,0xfb,0x7f,0xde,
+ 0x41,0xa6,0x7f,0xc7, 0x7f,0x75,0xd8,0x8d,
+ 0x43,0xce,0xe6,0xeb, 0x74,0xee,0x4e,0x35,
+ 0xbc,0x7b,0x7c,0xfc, 0x8b,0x4f,0x1f,0xa2,
+ 0x5e,0x34,0x3b,0x5f, 0xd0,0x05,0x9d,0x4f,
+ 0xfe,0x47,0x59,0xa3, 0xf6,0xb7,0x27,0xb0,
+ 0xa1,0xec,0x1d,0x09, 0x86,0x70,0x48,0x00,
+ 0x03,0x0a,0x15,0x98, 0x2e,0x6d,0x48,0x2a,
+ 0x81,0xa2,0xde,0x11, 0xe4,0xde,0x8b,0xb0,
+ 0x06,0x28,0x03,0x82, 0xe4,0x6e,0x40,0xfb,
+ 0x3c,0x35,0x2d,0x1b, 0x62,0x56,0x87,0xd4,
+ 0xd6,0x06,0x36,0xce, 0x70,0x26,0x2f,0x21,
+ 0xf5,0x47,0x3f,0xf8, 0x57,0x17,0xa9,0x15,
+ 0x30,0xfd,0x1f,0xa6, 0x7a,0x24,0x1c,0xf8,
+ 0x33,0xf3,0xef,0xe1, 0x6c,0xb5,0x0b,0x04,
+ 0x21,0x5d,0xb5,0xff, 0x4f,0xdb,0xd1,0x3d,
+ 0x8f,0x01,0x56,0x7f, 0x0b,0xa4,0xf1,0xf9,
+ 0xdd,0xa3,0x38,0xcb, 0xa9,0xd3,0xdd,0xe3,
+ 0x29,0x5b,0x2b,0x22, 0xd7,0xe8,0x4f,0x02,
+ 0xb1,0x73,0x83,0x80, 0xda,0xd0,0x8e,0x11,
+ 0x9f,0x4d,0xd4,0x0a, 0x86,0x45,0x11,0xa1,
+ 0x9e,0x2e,0xa9,0x59, 0x6d,0x95,0x49,0xc5,
+ 0xc9,0xcd,0x7c,0x71, 0x81,0xac,0x6b,0xb8,
+ 0x1b,0x94,0xe8,0xe3, 0xb2,0xb7,0x8a,0x9b,
+ 0xda,0x5b,0xb7,0xc6, 0x00,0xcb,0x40,0x47,
+ 0x0c,0x38,0x75,0xb8, 0xba,0x6f,0x2b,0x9d,
+ 0x01,0xf3,0xf2,0xc8, 0xf7,0xde,0xcf,0xfb,
+ 0x82,0xa8,0x8f,0x10, 0x75,0x0e,0x27,0xc5,
+ 0x4b,0x9f,0xfe,0x1d, 0x60,0x84,0x69,0x96,
+ 0xac,0xb1,0xd3,0xdd, 0x07,0x4c,0x50,0x94,
+ 0xb1,0x17,0x53,0x23, 0x98,0xbf,0x22,0xf9,
+ 0x2c,0xb0,0x3f,0x62, 0x16,0xa7,0x8f,0xea,
+ 0x43,0x25,0xfb,0x21, 0x18,0xec,0x1a,0xf6,
+ 0x5e,0x64,0xbd,0x3d, 0xcf,0x27,0xf5,0x02,
+ 0xf2,0xaf,0x1b,0x2d, 0x2c,0xcb,0xaa,0x6d,
+ 0x7d,0xa0,0xae,0x31, 0x05,0x51,0x80,0x7f,
+ 0x99,0xcf,0xbd,0x0f, 0x12,0x5a,0xda,0x4a,
+ 0x56,0x22,0xd4,0x22, 0x95,0x2c,0x46,0x5a,
+ 0xb3,0x5a,0x5e,0xd4, 0x27,0x7f,0x06,0xbd,
+ 0x3c,0xf6,0xf2,0x0f, 0x9d,0xbb,0x0c,0x14,
+ 0x8c,0xb1,0x72,0xf2, 0xb0,0xaf,0xda,0xf7,
+ 0x05,0x33,0x78,0x9c, 0x79,0xe9,0xe0,0xc5,
+ 0x8c,0x4b,0x23,0x65, 0xd1,0x70,0x81,0x3d,
+ 0x74,0xfa,0xb6,0xff, 0xf2,0x65,0x21,0x3f,
+ 0xe4,0xc2,0x9e,0x9d, 0x49,0x0e,0xad,0xaf,
+ 0xc2,0x21,0x18,0xa8, 0x19,0xa8,0x69,0x32,
+ 0xcb,0x8e,0xc2,0x9d, 0xf5,0xbd,0x50,0x60,
+ 0x72,0xa2,0xa6,0xad, 0xe6,0x6b,0xd2,0x01,
+ 0x52,0xf9,0xac,0x18, 0xfa,0xe8,0x8d,0x4a,
+ 0x98,0x25,0xd3,0xa8, 0x0e,0x97,0x2d,0xa3,
+ 0xf6,0xf1,0x34,0x7c, 0xf0,0x15,0x06,0x05,
+ 0x31,0xdf,0xc7,0x86, 0x54,0xfb,0x62,0xe2,
+ 0xd5,0x3b,0x72,0xd2, 0x70,0x7c,0x3c,0x62,
+ 0x2f,0xbd,0x47,0x0d, 0x20,0x97,0xf5,0x1f,
+ 0xa1,0xe8,0x4c,0x3e, 0x13,0xec,0xb3,0xcc,
+ 0xc9,0x15,0x01,0x23, 0xe5,0x1f,0x3b,0x2e,
+ 0xc5,0xdd,0x71,0xe3, 0xfa,0x6a,0x44,0x07,
+ 0x25,0x64,0xa5,0xa5, 0x16,0x64,0x14,0xb8,
+ 0x86,0xb1,0xae,0x6f, 0xc5,0xdb,0x6b,0xfa,
+ 0x0f,0x8f,0xc5,0x89, 0x57,0x52,0xeb,0xb3,
+ 0xca,0x4e,0x23,0xac, 0xbd,0xad,0xf5,0x77,
+ 0x58,0x72,0x18,0x2c, 0xb8,0x37,0x0b,0xfd,
+ 0xfd,0x04,0x49,0x4a, 0x7b,0x11,0x82,0x1b,
+ 0xc4,0x5f,0x54,0x46, 0x97,0xe9,0xac,0x64,
+ 0xa7,0x13,0x04,0x56, 0x5a,0x3b,0x17,0x2c,
+ 0x08,0xff,0xa4,0xe2, 0xe4,0x43,0x05,0xfa,
+ 0x94,0x3a,0xbc,0x24, 0xec,0xa8,0x89,0x02,
+ 0xd0,0xbc,0xcf,0x4a, 0xef,0x0f,0x90,0x50,
+ 0xfb,0x6a,0x25,0x4f, 0xdb,0x67,0x5b,0xd8,
+ 0xa1,0x1e,0x95,0x4d, 0xe5,0xd6,0xf3,0x22,
+ 0x2e,0x6f,0x01,0x50, 0xd8,0x2f,0x91,0x47,
+ 0x82,0x0e,0xae,0x18, 0xbf,0x3a,0xc9,0x5a,
+ 0x71,0xcf,0x5e,0xbf, 0x9e,0xec,0x1d,0x11,
+ 0x96,0x33,0x32,0x5e, 0x5e,0xee,0xc8,0xee,
+ 0x52,0x03,0xbc,0x8d, 0x97,0xd2,0x55,0xc5,
+ 0xaf,0x52,0xb0,0x55, 0x8f,0xb8,0x9b,0x83,
+ 0x60,0x9f,0x60,0x92, 0x47,0x1d,0xf2,0x6e,
+ 0xd1,0x93,0xfe,0xc2, 0x77,0x8c,0xb6,0x49,
+ 0x5e,0x3e,0xdb,0xb9, 0x7a,0x58,0x4d,0x18,
+ 0x66,0xc8,0xc2,0x67, 0xf8,0x37,0x7d,0x06,
+ 0x50,0xcc,0x42,0xab, 0x08,0x27,0x8e,0x81,
+ 0x6f,0xb3,0x03,0xbd, 0x41,0x11,0xeb,0x13,
+ 0xf1,0xaf,0xee,0x56, 0xae,0xb3,0x36,0x41,
+ 0xb8,0xc9,0x0a,0x96, 0x88,0x1d,0x98,0x25,
+ 0xc6,0x45,0xeb,0x76, 0x07,0xc1,0xfe,0xae,
+ 0xbc,0x26,0x1f,0xc4, 0x5f,0x70,0x0c,0xae,
+ 0x70,0x00,0xcf,0xc6, 0x77,0x5c,0x9c,0x24,
+ 0x8b,0x4b,0x83,0x32, 0x09,0xb7,0xb1,0x43,
+ 0x4a,0x01,0x42,0x04, 0x4d,0xca,0x5f,0x4e,
+ 0x9b,0x2b,0xa9,0xcb, 0x99,0x0b,0x0e,0x57,
+ 0x09,0xd6,0xe2,0xa0, 0xc1,0x12,0x79,0xf2,
+ 0x6f,0xe1,0x6c,0x7f, 0x0a,0x1a,0xec,0xc1,
+ 0x82,0x4a,0xf8,0x98, 0x22,0xc9,0x81,0x81,
+ 0x5d,0xf8,0x7d,0x9d, 0x86,0x97,0xdd,0x9e,
+ 0x8a,0xb5,0xce,0x6c, 0xfb,0x06,0xc3,0x8a,
+ 0x0d,0x53,0xda,0x12, 0x0c,0x4b,0x6f,0xa0,
+ 0x3f,0x8d,0xc3,0x07, 0x27,0x10,0xaf,0xc5,
+ 0x27,0xfe,0x64,0x17, 0x18,0xa5,0x3a,0xfe,
+ 0x9b,0x91,0xae,0xd0, 0x2d,0x34,0x34,0x9e,
+ 0x9f,0x31,0x5d,0x3e, 0x4c,0x26,0x1e,0xcb,
+ 0x62,0x05,0xd2,0x83, 0x8d,0x71,0xb8,0x57,
+ 0xef,0x3a,0x94,0xb3, 0x3a,0x67,0x1b,0x21,
+ 0x33,0x1f,0x7f,0x10, 0xd8,0xd7,0x89,0x1b,
+ 0x4f,0x51,0x74,0x97, 0x4a,0x0e,0x74,0x59,
+ 0x74,0x66,0xef,0xdd, 0x26,0xb6,0xa1,0x53,
+ 0xd4,0x2f,0xd7,0x76, 0x51,0x27,0xcc,0xe4,
+ 0x94,0xe3,0xed,0x26, 0x13,0x4e,0xe8,0x2c,
+ 0x11,0x6e,0xb3,0x63, 0x51,0x36,0x9c,0x91,
+ 0x2d,0x66,0x2c,0x3e, 0x0a,0xf7,0xa4,0x97,
+ 0x70,0x6d,0x04,0xaa, 0x89,0xe8,0x2c,0x5e,
+ 0xdd,0x01,0x46,0xfc, 0x99,0xce,0xe6,0x32,
+ 0x8a,0x85,0xe6,0x07, 0x1e,0x71,0x5d,0x29,
+ 0x07,0x16,0x0e,0xf9, 0xd4,0xdf,0x54,0xb4,
+ 0x7b,0x7b,0x3f,0xe0, 0xeb,0x73,0xe0,0xe1,
+ 0x92,0x51,0x50,0x74, 0xb5,0x6e,0x08,0x7e,
+ 0x57,0x70,0xb2,0x1b, 0x9c,0xf2,0xa2,0x6b,
+ 0x52,0xa3,0x35,0xf7, 0x22,0x40,0xa6,0x11,
+ 0x30,0xd3,0x5b,0x4b, 0x78,0xc9,0xd7,0x84,
+ 0x9a,0x88,0x9a,0x44, 0xb4,0x88,0xfe,0x8c,
+ 0x3f,0x10,0xab,0xc7, 0xc9,0xb6,0x59,0x9a,
+ 0xf3,0xe6,0xe6,0x4d, 0xea,0x3e,0xe0,0xeb,
+ 0x9e,0xb4,0x41,0xf6, 0xcb,0xfc,0x04,0x73,
+ 0x7d,0xc8,0x00,0xc6, 0xf2,0x10,0x00,0xcf,
+ 0x59,0xed,0x05,0x2a, 0x6a,0xde,0x7a,0xdf,
+ 0x7d,0xa9,0x25,0xc8, 0x6e,0x08,0x60,0xf9,
+ 0xd8,0x23,0x9b,0x20, 0xe5,0x93,0x9c,0x90,
+ 0x3d,0xe0,0xd0,0x33, 0x2d,0xce,0x86,0x93,
+ 0xdc,0xb3,0x9c,0x40, 0x33,0x9a,0xf0,0x71,
+ 0x47,0x0e,0xc4,0xb9, 0x58,0xc4,0x36,0xf1,
+ 0x4c,0x82,0xcf,0x91, 0x9f,0x16,0xce,0x43,
+ 0x58,0x72,0x54,0x51, 0x0d,0x8e,0x1e,0x3d,
+ 0x5e,0x67,0x7e,0x96, 0x6e,0x12,0xb8,0xee,
+ 0x1f,0x8b,0x15,0x3b, 0x49,0x95,0x2f,0xd9,
+ 0xec,0x63,0x56,0xec, 0x4e,0x88,0x37,0x2f,
+ 0xa7,0xd5,0xe5,0x4a, 0x97,0x1f,0x6f,0xa0,
+ 0x40,0x68,0x69,0xee, 0x6a,0xc6,0xbe,0x83,
+ 0xba,0x69,0xb8,0x08, 0x0a,0x5c,0x2f,0xd2,
+ 0x3e,0x3b,0x73,0x40, 0x9c,0x62,0xcc,0xe1,
+ 0x99,0x44,0xa2,0xaa, 0xb8,0xe9,0x48,0xf4,
+ 0x79,0x07,0xe8,0xe8, 0x16,0x99,0x84,0x7b,
+ 0x3d,0x53,0xb2,0x5d, 0x2d,0xa4,0xb0,0x12,
+ 0xb9,0xa9,0x0d,0x77, 0x98,0xa1,0x98,0x90,
+ 0x4e,0xe2,0x14,0xd4, 0x15,0x35,0xd0,0x85,
+ 0xbf,0xa1,0x0f,0x54, 0x05,0xa0,0x90,0x2a,
+ 0x74,0xe3,0xd3,0x1b, 0x5e,0x16,0x07,0xcf,
+ 0x36,0xbd,0xea,0x9b, 0x2d,0x35,0x47,0xea,
+ 0xea,0xb7,0xd1,0xda, 0x66,0x47,0x42,0x47,
+ 0x4e,0x76,0xe5,0x90, 0x0c,0x82,0x15,0x3f,
+ 0x17,0x1b,0xa6,0x04, 0xb6,0x58,0x67,0x42,
+ 0xfb,0x19,0x2a,0xc2, 0xd7,0x6a,0x48,0x36,
+ 0x87,0x53,0x90,0x95, 0x53,0xb7,0xf1,0xbe,
+ 0x0d,0x9f,0xa3,0x74, 0x5f,0x3d,0x89,0xef,
+ 0x29,0x07,0xe1,0xc1, 0x13,0xe0,0xc7,0xf6,
+ 0x53,0xc2,0xe5,0x7e, 0x96,0xdf,0x1f,0x12,
+ 0x98,0xd6,0x7b,0x2d, 0xdb,0x3e,0x01,0x03,
+ 0x05,0xbe,0x66,0x29, 0x42,0xeb,0x5d,0xab,
+ 0xa8,0x13,0x78,0x7f, 0x1e,0x0e,0xfd,0x7f,
+ 0xf1,0xd2,0x59,0xb2, 0x46,0x13,0x1c,0xb8,
+ 0x42,0x4f,0x87,0xb3, 0x26,0x0b,0xed,0x26,
+ 0xb2,0xd5,0x27,0xfc, 0xf1,0xec,0x32,0x66,
+ 0xe1,0x2d,0x27,0x2a, 0xe2,0x80,0xf2,0x72,
+ 0x90,0x3c,0x54,0xfa, 0xaa,0xe6,0x31,0xb0,
+ 0xb7,0xdd,0x97,0x0d, 0x22,0xb5,0x16,0x46,
+ 0x66,0x6d,0x02,0x13, 0x9a,0x7c,0x52,0xfc,
+ 0xf8,0x73,0x0c,0x81, 0xac,0xa3,0x8f,0x40,
+ 0x50,0x2e,0x80,0x3b, 0xb6,0xdf,0x88,0xbb,
+ 0xb5,0xa8,0x13,0xfa, 0xd2,0xd6,0xb8,0x07,
+ 0x47,0x7b,0xa0,0x09, 0x9f,0xc3,0x42,0xab,
+ 0xb8,0xd6,0xca,0xfa, 0x41,0xdc,0x9a,0xb5,
+ 0x96,0xf4,0xfa,0xfd, 0x09,0xca,0x8e,0x47,
+ 0x1d,0x8f,0x8d,0x54, 0x3f,0xbf,0xfd,0x22,
+ 0x30,0x25,0xbd,0xea, 0xb3,0xf6,0x90,0x68,
+ 0x6e,0x2b,0x78,0x8e, 0xc4,0x58,0x1c,0xbd,
+ 0x6b,0x36,0xdc,0x9d, 0x9f,0x27,0xce,0xf6,
+ 0x4f,0x1b,0xeb,0x41, 0x2c,0x07,0xa1,0x1f,
+ 0xaa,0xc3,0x65,0xe0, 0x78,0x85,0x80,0x22,
+ 0x00,0x94,0x1a,0x9f, 0x34,0x2b,0x2b,0x51,
+ 0x94,0x93,0x23,0x20, 0x48,0x2e,0x16,0xd6,
+ 0xdf,0x09,0xa2,0xfa, 0xb8,0x9b,0xf0,0x64,
+ 0x18,0x36,0x78,0xbc, 0xb8,0x5b,0x87,0x90,
+ 0xba,0xd2,0x2e,0x30, 0xe6,0xc5,0xe0,0x0c,
+ 0x81,0x32,0x69,0x9a, 0x8a,0x5a,0x3d,0x6f,
+ 0x06,0xe1,0x3f,0xa9, 0xf2,0x0e,0x21,0xfe,
+ 0x9e,0x63,0x31,0xa9, 0xc3,0x3e,0xb4,0xcd,
+ 0xcb,0x60,0xd9,0x45, 0xc6,0x5f,0xc5,0xca,
+ 0x9e,0xd8,0x40,0x72, 0x39,0x04,0x59,0x2d,
+ 0x4c,0xac,0xdf,0xea, 0x4a,0x78,0xa9,0xd5,
+ 0x87,0xb1,0xd6,0x59, 0x77,0x58,0x4d,0xa7,
+ 0xd3,0x9b,0xfc,0xe3, 0xdd,0x8d,0xf5,0x57,
+ 0x06,0xb3,0x96,0xf1, 0xbe,0xd9,0x07,0x54,
+ 0x36,0xa4,0x8b,0xaa, 0x0b,0xcb,0xd3,0x80,
+ 0x13,0xa6,0x53,0x8e, 0xcc,0x23,0x15,0x02,
+ 0x1e,0x1b,0x2f,0x0a, 0x02,0x5b,0xca,0x50,
+ 0x11,0x28,0x27,0x0e, 0xbe,0xfe,0x76,0x60,
+ 0x1b,0x78,0x58,0x9b, 0xe6,0x0a,0x0a,0xef,
+ 0xa3,0xa5,0x33,0x0d, 0x5b,0x65,0xe1,0x03,
+ 0x38,0xdd,0xf8,0x22, 0x92,0xcd,0x50,0x87,
+ 0x02,0xbc,0x91,0x16, 0xfd,0x05,0x9c,0xcd,
+ 0x72,0xae,0x4c,0xd7, 0xef,0xb3,0x57,0x1a,
+ 0x3f,0x79,0x23,0xfd, 0xf0,0xc3,0xfb,0x68,
+ 0xb4,0xc9,0x93,0x22, 0x33,0xd3,0x01,0x74,
+ 0xe3,0x00,0x31,0xcf, 0x0f,0x23,0xc5,0xf7,
+ 0x09,0x95,0x5a,0xa0, 0x56,0xf9,0xb0,0x20,
+ 0xb1,0xcc,0x8d,0x88, 0xd6,0x27,0x97,0x8d,
+ 0x0e,0xa3,0x3d,0x33, 0x94,0x04,0x44,0x93,
+ 0x67,0x10,0xb6,0xa0, 0x0c,0x2a,0x28,0xd4,
+ 0x1b,0x41,0x86,0xe7, 0x29,0x2c,0x68,0x2a,
+ 0x94,0xf3,0x4f,0x20, 0xa1,0xb4,0x6c,0x9d,
+ 0x85,0x6b,0xa0,0x31, 0xa2,0xbd,0x74,0xf0,
+ 0x0b,0xe5,0x2f,0xb7, 0x8a,0x33,0xd9,0x1f,
+ 0xf2,0xb5,0xad,0x85, 0xc3,0xad,0x47,0x2f,
+ 0x27,0x2a,0xc9,0x32, 0xd8,0xd9,0x05,0xc2,
+ 0x9d,0xbf,0x21,0x88, 0x02,0x05,0x12,0x6e,
+ 0x0f,0xb6,0x64,0x43, 0xa8,0xc3,0x87,0xea,
+ 0xb0,0x81,0x5b,0x51, 0x51,0xf1,0x83,0x7d,
+ 0x94,0x46,0x7f,0x0a, 0x9a,0xef,0xcc,0x68,
+ 0x73,0xef,0x9d,0x3c, 0x0e,0xfc,0x37,0x91,
+ 0xca,0x36,0x2d,0x1d, 0x72,0x7e,0x39,0x9e,
+ 0xad,0xd3,0x55,0x1b, 0x10,0x1e,0xff,0x00,
+ 0xc1,0x45,0x80,0xe7, 0xb4,0xcc,0xc8,0xb0,
+ 0x62,0xbd,0xf9,0xa5, 0x8f,0x05,0xaa,0x3b,
+ 0x86,0x73,0x14,0xf9, 0xee,0x95,0xd0,0xfd,
+ 0x95,0x30,0x68,0x22, 0xc9,0x70,0x66,0x1d,
+ 0x91,0x3f,0xc0,0x19, 0x93,0x07,0x19,0x2d,
+ 0x3c,0x21,0x6b,0xc1, 0x2a,0xeb,0xaa,0xf2,
+ 0xa4,0x45,0x35,0xff, 0x8f,0x24,0x46,0x2c,
+ 0xc8,0x75,0x58,0x68, 0x0f,0x3b,0x87,0x11,
+ 0xcb,0x9f,0xf7,0x28, 0xbd,0x66,0x91,0x01,
+ 0xeb,0x70,0x8e,0x8d, 0xe6,0x01,0xc8,0x48,
+ 0x94,0xfe,0x4e,0xa8, 0xeb,0x90,0xbf,0xd1,
+ 0xcd,0x89,0xc2,0x98, 0x34,0x92,0xf9,0x08,
+ 0xb9,0xbc,0xd4,0x34, 0x1a,0x59,0xcc,0x80,
+ 0x9a,0xe6,0xbc,0xbb, 0x23,0x12,0x9c,0xa4,
+ 0x5b,0x79,0xc6,0x8a, 0xc0,0x03,0x2b,0x16,
+ 0xe5,0x1c,0x0f,0x02, 0x37,0x4f,0x3e,0xc2,
+ 0xf3,0x4d,0x7c,0xcb, 0xde,0x9b,0x66,0x52,
+ 0xf3,0xdd,0x86,0x42, 0x4a,0x81,0x5b,0x96,
+ 0x83,0x2a,0xb1,0x48, 0x31,0x42,0x16,0x16,
+ 0xf8,0x97,0xa3,0x52, 0xeb,0xb6,0xbe,0x99,
+ 0xe1,0xbc,0xa1,0x3a, 0xdd,0xea,0x00,0xfa,
+ 0x11,0x2f,0x0b,0xf8, 0xc7,0xcc,0xba,0x1a,
+ 0xf3,0x36,0x20,0x3f, 0x59,0xea,0xf1,0xc8,
+ 0x08,0xd0,0x6d,0x8e, 0x91,0x1e,0x90,0x91,
+ 0x7b,0x80,0xdc,0xcb, 0x5c,0x94,0x74,0x26,
+ 0xd3,0x5d,0x1a,0x2d, 0xad,0xcf,0xef,0xfa,
+ 0xe9,0xa0,0x17,0xb7, 0x2b,0x7c,0x37,0x83,
+ 0x31,0x78,0x1a,0xcf, 0x04,0xa0,0xe7,0x83,
+ 0x66,0x12,0x4f,0x9d, 0x31,0x6b,0x4d,0xc5,
+ 0x31,0x1b,0x3a,0xd9, 0x79,0x76,0x49,0xc3,
+ 0x19,0xf0,0x3f,0xb5, 0xbc,0x7d,0xa4,0xa7,
+ 0x24,0x44,0x75,0xbb, 0x6d,0x65,0x59,0xf8,
+ 0xe0,0xb9,0xd7,0x29, 0x79,0xce,0x14,0x32,
+ 0xd2,0x3e,0xb8,0x22, 0x4a,0x0a,0x2a,0x6c,
+ 0xb2,0xbd,0xa5,0xd4, 0xc4,0xc5,0x68,0xb3,
+ 0x63,0xe7,0x46,0x05, 0x3a,0x18,0xa5,0xad,
+ 0xcc,0x61,0xc3,0xec, 0x3d,0x42,0xb0,0xa7,
+ 0x23,0x72,0x1e,0x14, 0xd8,0x7e,0x68,0x60,
+ 0xec,0xe9,0x1d,0x5b, 0x1f,0x86,0xda,0x5e,
+ 0x34,0x74,0x00,0xd3, 0x98,0x98,0x7e,0xbd,
+ 0x6a,0x8b,0xd3,0x6f, 0x31,0xf1,0x62,0xb3,
+ 0xa3,0x86,0x95,0x02, 0x76,0x7d,0x58,0xbc,
+ 0xf8,0xb1,0x52,0xc3, 0x0b,0xd5,0x6b,0x74,
+ 0xa5,0x84,0xef,0xf2, 0x31,0xc1,0xe4,0x83,
+ 0x42,0x12,0xb5,0xe7, 0x61,0xdd,0xba,0x43,
+ 0x39,0xf2,0x44,0x0a, 0xb4,0x62,0x06,0x32,
+ 0x5b,0x33,0x67,0x2e, 0x7a,0x93,0x85,0x1a,
+ 0x07,0x36,0x9f,0xab, 0xf7,0x2a,0x6e,0x3d,
+ 0x3e,0xe3,0x59,0x1b, 0xf8,0xd3,0xe8,0x5f,
+ 0xe5,0x24,0xb3,0x59, 0x80,0xd5,0x11,0x14,
+ 0x98,0x3a,0xb4,0x7d, 0x8f,0x37,0x18,0xb2,
+ 0xa7,0x25,0xf4,0x31, 0x74,0x61,0x3a,0x42,
+ 0x62,0x77,0x37,0x3d, 0x72,0x1b,0x67,0x87,
+ 0xb3,0x59,0x4b,0x08, 0x07,0xdb,0x0b,0x57,
+ 0xfd,0x61,0x99,0x28, 0x3b,0xe5,0x7a,0xb4,
+ 0x6c,0x06,0x95,0x65, 0x2c,0x1c,0x41,0x71,
+ 0x21,0xd7,0x94,0x51, 0x1c,0x8d,0xe6,0x38,
+ 0xc5,0x95,0x7f,0x30, 0xd5,0xc5,0xcc,0xd2,
+ 0x03,0x7f,0x69,0x2e, 0xae,0xc7,0x28,0x2e,
+ 0xc6,0xa9,0x28,0x4b, 0x77,0xc3,0xcf,0xa3,
+ 0xc3,0xd3,0x2d,0x43, 0x47,0x87,0xde,0x38,
+ 0xeb,0x3a,0xb6,0xf9, 0xe7,0x3c,0xb6,0x92,
+ 0x19,0x42,0xf8,0xc2, 0x87,0x50,0xed,0xe6,
+ 0x3d,0x2b,0xb5,0xf8, 0x89,0x14,0x42,0xf7,
+ 0x2c,0x7a,0xbe,0xdc, 0x2f,0x5d,0x49,0x83,
+ 0xf5,0x60,0xe0,0xcf, 0xbc,0x23,0x13,0x4f,
+ 0xb3,0x16,0xd7,0x9a, 0xca,0x16,0x8b,0xa5,
+ 0x08,0x80,0xcf,0x21, 0xbb,0xd8,0x32,0x5e,
+ 0x07,0x8a,0xb3,0x48, 0xba,0x99,0xd4,0xd7,
+ 0x6a,0xae,0x4b,0x9b, 0xb4,0xd7,0x2f,0x87,
+ 0xb0,0x0a,0xd1,0x1b, 0xf1,0x8b,0xf6,0x21,
+ 0x81,0x8e,0xc4,0x79, 0x9a,0x5c,0x75,0xbe,
+ 0x87,0x99,0xe5,0x11, 0xf9,0x9a,0xe1,0xf9,
+ 0x76,0xa2,0x92,0xc6, 0xc0,0xd8,0x05,0xc9,
+ 0x7d,0x8c,0x27,0xc2, 0x7f,0xf4,0xe9,0x4f,
+ 0xb7,0xbc,0xa3,0x3e, 0x66,0x3b,0xaf,0xed,
+ 0x7a,0xd9,0x78,0x20, 0x6b,0xd5,0xe1,0xfe,
+ 0xd5,0x06,0x65,0x11, 0x49,0xac,0x22,0x38,
+ 0x02,0x80,0xec,0x91, 0x11,0x18,0x1a,0x61,
+ 0x3c,0x59,0x4e,0x7a, 0xd8,0xca,0xda,0xd4,
+ 0x27,0xbd,0xf4,0x00, 0x9c,0x1b,0xde,0xf3,
+ 0x6c,0x1f,0x20,0x9a, 0x30,0xc9,0x9b,0x3c,
+ 0xe5,0x55,0xb7,0xb3, 0xc8,0x52,0x9c,0x05,
+ 0xad,0xe8,0x13,0x9e, 0x31,0xc2,0x2c,0xd4,
+ 0x3f,0x18,0x00,0xc4, 0xcf,0x08,0x05,0x7b,
+ 0x5e,0x2a,0x8e,0x11, 0x61,0x03,0xc8,0x39,
+ 0x2b,0x54,0x1a,0xd9, 0x08,0x04,0xc6,0xe9,
+ 0xda,0x69,0xb3,0x0c, 0x83,0x44,0xcd,0xe8,
+ 0x50,0x04,0x72,0xa2, 0xb4,0x10,0x17,0x39,
+ 0x68,0x32,0xdb,0xab, 0xe3,0xee,0x57,0x1b,
+ 0x05,0x45,0x1f,0x5a, 0xdc,0xdc,0x56,0x81,
+ 0x98,0x20,0xfe,0x69, 0x0a,0xa4,0xd6,0x9d,
+ 0x25,0xdd,0x7e,0xd0, 0x2b,0x33,0x41,0x75,
+ 0xf6,0x59,0xa8,0xa3, 0x3c,0xdd,0xd9,0x6b,
+ 0xa8,0xcd,0x1d,0x1f, 0xc5,0x78,0x5b,0x93,
+ 0xdf,0x10,0x71,0xeb, 0xcc,0xbd,0x35,0x4c,
+ 0x07,0x21,0x5f,0xb7, 0x47,0x21,0x6d,0x55,
+ 0x8b,0x72,0x0e,0x4a, 0x2c,0x17,0xfc,0x75,
+ 0x21,0xdd,0x76,0xfd, 0x34,0xfc,0x0f,0x1b,
+ 0xa6,0x77,0x53,0xf9, 0xdb,0x09,0x07,0x58,
+ 0xb0,0x18,0x32,0x03, 0x98,0x79,0xdf,0x55,
+ 0xd3,0x95,0xba,0xa9, 0xb6,0x9f,0xad,0xc4,
+ 0x9d,0xba,0x76,0x36, 0x47,0xb1,0xde,0x78,
+ 0x18,0xa0,0x2f,0x16, 0x41,0xeb,0x4a,0x96,
+ 0x82,0xc4,0xa4,0xde, 0x4b,0xdf,0xee,0xc7,
+ 0x33,0xdf,0xb7,0xde, 0xd3,0xa7,0x0f,0xc7,
+ 0x23,0x61,0x6b,0xd9, 0x15,0xc8,0x09,0xf7,
+ 0xe7,0xf9,0x44,0xba, 0x14,0xdc,0x94,0x5e,
+ 0xd9,0xcc,0x74,0xb2, 0x3d,0xef,0x78,0x15,
+ 0xb5,0xb9,0x56,0xd5, 0xfb,0x47,0x49,0x3a,
+ 0xbc,0x53,0x71,0x8b, 0x72,0x8b,0xb2,0xe3,
+ 0x58,0xbf,0xea,0x47, 0x7a,0x76,0x03,0x48,
+ 0xdd,0x8c,0x30,0x99, 0x81,0x2c,0x5f,0xf6,
+ 0xd3,0x9b,0x8e,0x77, 0x1c,0xb7,0xbd,0x1e,
+ 0xd4,0x28,0x05,0xf7, 0xff,0xdf,0xd6,0xb9,
+ 0x83,0x99,0xbc,0x94, 0xb7,0x41,0x93,0xc4,
+ 0x66,0xff,0x29,0x4d, 0x5c,0xba,0x79,0xd9,
+ 0x6e,0x79,0x47,0x45, 0xd6,0x2d,0xcd,0x79,
+ 0xa1,0xfa,0x49,0xee, 0x8e,0x7f,0x2b,0x08,
+ 0x3f,0x60,0x56,0xcf, 0xcb,0xe8,0x0d,0x55,
+ 0xee,0xa5,0xaf,0x04, 0xde,0x01,0xde,0xce,
+ 0xb6,0x9c,0x68,0x4e, 0xb0,0x88,0xcd,0x89,
+ 0x83,0x6b,0x01,0xb5, 0x78,0xac,0x85,0x3c,
+ 0x2c,0xcf,0x39,0xb6, 0xc8,0x5f,0x0e,0xac,
+ 0x02,0x08,0x56,0xbe, 0xd1,0x8d,0x7d,0x55,
+ 0x69,0x0c,0x33,0x33, 0xff,0x1a,0xd6,0x0b,
+ 0xcf,0x57,0x18,0x01, 0x56,0x5f,0x9c,0x6f,
+ 0xe2,0x24,0xda,0xc3, 0x9f,0x81,0xc3,0x27,
+ 0x46,0x7a,0xb4,0xae, 0xec,0xa4,0x0e,0x41,
+ 0x8b,0xb7,0x16,0xe3, 0x9b,0x2e,0x32,0x75,
+ 0xd9,0x86,0xa2,0x13, 0x68,0x4e,0xbc,0x43,
+ 0xa2,0x78,0x64,0x1a, 0x7c,0xac,0x13,0x70,
+ 0x1c,0x23,0x15,0x5b, 0xda,0x99,0xa5,0x24,
+ 0x3d,0xcf,0x29,0xf7, 0xbc,0x1d,0x10,0xe8,
+ 0x95,0x1a,0x11,0xec, 0xfc,0xfb,0x20,0x1f,
+ 0x09,0x1b,0xe3,0x3d, 0xae,0x82,0x70,0xd7,
+ 0x9e,0xf3,0x18,0x97, 0x89,0xfa,0x42,0x67,
+ 0x70,0x9c,0xc8,0xbe, 0x62,0x98,0xf1,0x82,
+ 0xfc,0x2b,0xf0,0x40, 0xaa,0xdc,0x27,0xf9,
+ 0x21,0x5a,0xc1,0x25, 0x8b,0xef,0xd5,0x48,
+ 0x6c,0x68,0xae,0xbc, 0xcd,0xa9,0x3c,0x1e,
+ 0xe9,0xcf,0xe2,0xd1, 0xc0,0x98,0xa9,0x62,
+ 0x5d,0x1f,0x57,0x7a, 0xca,0x8a,0x0f,0xfb,
+ 0xe3,0xc9,0x7e,0x98, 0x44,0x84,0x67,0x12,
+ 0x60,0x60,0xe5,0xc7, 0xcc,0x72,0x90,0x64,
+ 0x67,0x30,0x6a,0xd8, 0xa1,0x11,0xd5,0x7e,
+ 0x5e,0x0c,0x74,0xa2, 0x6f,0x0a,0xff,0x41,
+ 0xd3,0x9a,0x30,0x56, 0xd4,0xec,0x9a,0x5f,
+ 0x22,0x71,0x6b,0x4e, 0xe6,0xe0,0x19,0x69,
+ 0x56,0x4a,0xba,0x9d, 0x50,0x8a,0x73,0x6a,
+ 0xf1,0x59,0x48,0xd6, 0xcd,0xfa,0xaa,0x0c,
+ 0xbb,0x7c,0xa4,0xbc, 0xf5,0x32,0x95,0x55,
+ 0x1c,0xe9,0x9a,0x60, 0x43,0x10,0xbd,0x27,
+ 0x88,0x2f,0x05,0xcf, 0xce,0x21,0x25,0x3a,
+ 0x07,0xab,0x37,0xfd, 0xf6,0x2f,0xd6,0x51,
+ 0xbe,0xe6,0xcc,0x58, 0x3a,0xab,0x60,0x23,
+ 0x45,0xa0,0xe5,0x79, 0xe5,0xaa,0xed,0xa4,
+ 0x28,0xd0,0x4d,0x37, 0x9c,0x6a,0xd7,0xc2,
+ 0x39,0x22,0xb9,0x3e, 0x0d,0xb8,0x94,0x65,
+ 0x48,0x4d,0x4c,0x02, 0x31,0x7e,0x9c,0xc9,
+ 0xb7,0xd6,0x23,0x1a, 0x94,0x5a,0x13,0x55,
+ 0x78,0x7a,0x29,0x4a, 0xa2,0xfd,0x37,0x24,
+ 0xd8,0xd0,0x9e,0x47, 0x24,0xab,0x26,0x34,
+ 0x28,0xb5,0x2d,0x82, 0x9a,0x4d,0xdd,0x17,
+ 0x68,0xe0,0x07,0x5d, 0xb9,0x2d,0xff,0xa9,
+ 0x0c,0x11,0x59,0x75, 0xda,0x98,0xe9,0xd5,
+ 0xfa,0xb5,0x18,0x16, 0x28,0x17,0x7c,0xad,
+ 0xab,0xee,0x65,0x10, 0x13,0x0d,0x26,0xfa,
+ 0x7f,0xac,0x06,0x43, 0x4d,0x5d,0x3a,0xf4,
+ 0x77,0xe7,0x03,0x17, 0x39,0x9f,0xbe,0x52,
+ 0x9b,0x68,0x2b,0x7f, 0xd3,0xa2,0x7e,0x5c,
+ 0x78,0x22,0xc5,0xe3, 0x17,0x73,0xc6,0x9e,
+ 0x68,0x17,0x74,0x50, 0xf4,0xc5,0xa8,0xc3,
+ 0x66,0xe1,0x05,0xed, 0xdd,0xdb,0xd3,0x11,
+ 0x16,0xad,0x05,0x3a, 0x38,0x55,0x1c,0xf0,
+ 0x93,0x0b,0x22,0x83, 0xc8,0x34,0xc5,0x43,
+ 0x4d,0x65,0x57,0xf3, 0x03,0x56,0x21,0xa9,
+ 0xbd,0x04,0x41,0x49, 0x62,0xfd,0xcc,0xc2,
+ 0x75,0x59,0x09,0xb9, 0x28,0x38,0xcf,0xfb,
+ 0x54,0x64,0x51,0xc2, 0x3e,0xad,0x35,0x3e,
+ 0x31,0x87,0x6e,0xfe, 0xf0,0x41,0xef,0x1d,
+ 0xb8,0x46,0xbe,0x85, 0xb9,0xff,0xa3,0xdb,
+ 0x87,0xf9,0x65,0x95, 0x60,0x53,0x7c,0x9d,
+ 0x26,0x83,0xfc,0xa7, 0xad,0x5a,0xcb,0x8d,
+ 0x81,0xec,0x28,0xeb, 0xdd,0x96,0x25,0x31,
+ 0x24,0x3f,0x59,0x28, 0x60,0x0b,0xc0,0x59,
+ 0xea,0x36,0x15,0xad, 0x70,0xd8,0x70,0xff,
+ 0x9b,0x15,0x76,0xc5, 0x84,0xe6,0x81,0x75,
+ 0x1a,0x1e,0xc9,0xec, 0x33,0xbe,0x10,0xd4,
+ 0x6f,0x10,0x1b,0xa2, 0xdb,0xc6,0x1b,0x0a,
+ 0xfb,0xe9,0x3f,0x4d, 0x04,0x4e,0x33,0x87,
+ 0xb3,0x21,0xad,0x41, 0xbe,0xce,0x26,0x0c,
+ 0x0c,0x84,0x0f,0x9a, 0xb9,0xa7,0xa2,0x36,
+ 0x70,0x49,0xce,0x25, 0x0f,0x69,0x4a,0x4a,
+ 0x3d,0xf5,0xa0,0x9e, 0xad,0x69,0x2d,0x79,
+ 0xdb,0x8b,0x85,0xf6, 0xb8,0x55,0xcd,0xf1,
+ 0xbb,0x04,0x35,0xad, 0xa8,0xb6,0x0d,0x3f,
+ 0x23,0xec,0x39,0xd7, 0xef,0x02,0x95,0x42,
+ 0x11,0xc9,0x70,0xc6, 0xa4,0x65,0x37,0x4d,
+ 0x9f,0x51,0x99,0xd6, 0x9e,0xb1,0x18,0xcf,
+ 0x31,0x81,0xde,0x95, 0x0a,0x8c,0x0c,0x80,
+ 0xdc,0xf7,0x19,0x5d, 0xdc,0x3e,0xee,0x0c,
+ 0x17,0xaf,0xc4,0x9c, 0xbf,0x65,0xf2,0xe1,
+ 0xc9,0xdb,0xc0,0x2a, 0xd0,0xbd,0xa1,0x7f,
+ 0x4b,0x9c,0x5b,0xe6, 0x91,0x98,0xa6,0xdb,
+ 0x72,0xef,0x14,0x38, 0x24,0x77,0x1e,0x71,
+ 0x74,0x63,0x0c,0xd9, 0x16,0x90,0x23,0x4a,
+ 0xe6,0xa4,0xc1,0x53, 0x8b,0xb4,0x7e,0x90,
+ 0x1b,0x68,0x32,0x48, 0x93,0xd8,0x72,0x43,
+ 0x8e,0x32,0x09,0x1e, 0x48,0xfc,0x3a,0xc6,
+ 0x15,0xb9,0x79,0x57, 0x02,0x61,0xc6,0x4b,
+ 0x56,0x1e,0x68,0x4e, 0x65,0x26,0xe5,0x1c,
+ 0xb1,0xd1,0x86,0x1d, 0xea,0x93,0x5a,0x88,
+ 0x4c,0x3b,0x10,0xd1, 0xf7,0x5a,0x4c,0xa3,
+ 0xe7,0x59,0xf5,0x04, 0x7d,0xd7,0xe3,0x2e,
+ 0x2c,0x3e,0x14,0x14, 0x83,0xed,0x3d,0x0b,
+ 0xa4,0xab,0x65,0xcf, 0x39,0xee,0xbe,0x0c,
+ 0x5e,0x4b,0x62,0x5e, 0xb4,0xd2,0x16,0xc7,
+ 0xe0,0x71,0x2b,0x92, 0x1e,0x21,0x45,0x02,
+ 0xfd,0xa1,0xda,0x0b, 0xbe,0xa6,0xe5,0x7f,
+ 0x31,0x8b,0x5a,0xcb, 0x8f,0xb8,0x0c,0xfb,
+ 0x7f,0x2d,0x7e,0xa2, 0x14,0xfd,0xe0,0xbb,
+ 0xa4,0x1b,0xce,0x81, 0x6f,0x25,0xbd,0x72,
+ 0x44,0x00,0x13,0x18, 0x75,0x04,0xf3,0x06,
+ 0xdc,0xf1,0x5b,0xa0, 0xb1,0x5a,0x9a,0xd8,
+ 0x4f,0xe7,0x94,0xe1, 0x65,0xe5,0xb2,0xd1,
+ 0x47,0x6d,0xd8,0x81, 0x22,0x96,0x09,0xd8,
+ 0x5e,0x12,0x73,0x62, 0xd6,0x2c,0xcb,0x45,
+ 0x71,0xa9,0xc1,0x21, 0x16,0x6f,0xf0,0xaa,
+ 0xce,0x19,0x1f,0x68, 0xee,0x17,0x07,0x94,
+ 0x4f,0x93,0x9a,0x12, 0xf7,0x91,0xe1,0xc6,
+ 0x9c,0x29,0xe5,0x06, 0x7a,0x40,0xf5,0xf6,
+ 0x51,0xc8,0x32,0x94, 0x52,0xd9,0x6b,0x9b,
+ 0x3e,0xb5,0xcf,0x1a, 0xf1,0x6c,0x7b,0x0a,
+ 0x16,0x47,0xee,0xa6, 0x46,0x0f,0xed,0xe0,
+ 0x1b,0x3f,0x39,0xfa, 0x4c,0x69,0xeb,0xfb,
+ 0xd0,0x36,0x3b,0x3a, 0x04,0x94,0xa4,0x2f,
+ 0x51,0xe1,0x1a,0x47, 0xc9,0xdb,0xf6,0x09,
+ 0xab,0x35,0x46,0x2c, 0x2f,0xb7,0x19,0xed,
+ 0x55,0x7e,0xa3,0x2c, 0xec,0xff,0x39,0xba,
+ 0x0f,0xfb,0x4f,0x8b, 0xfc,0x36,0x4e,0x5e,
+ 0xa1,0xe8,0x49,0x15, 0x65,0xd2,0xfb,0x11,
+ 0x4b,0x10,0xe6,0x07, 0x82,0x3a,0x5d,0x3f,
+ 0xeb,0xc0,0x0b,0x76, 0x66,0xb5,0xed,0x65,
+ 0xb3,0x9d,0x06,0x13, 0x3b,0x18,0x70,0x7a,
+ 0xbd,0xf7,0xd8,0x20, 0x81,0xc7,0x76,0x2e,
+ 0x21,0x6f,0xdb,0x8e, 0xba,0x83,0x42,0xb1,
+ },
+ },
+ [5] = {
+ .k = {
+ 0x79,0xce,0xb0,0x8e, 0xf8,0x7a,0x67,0xc6,
+ 0x48,0x2c,0x2a,0xc0, 0xa5,0x45,0x06,0x49,
+ 0xc8,0x90,0xb8,0xe9, 0xc6,0xb6,0xb3,0x50,
+ 0xbd,0x9e,0x46,0x56, 0x26,0xf2,0xb0,0x3b,
+ },
+ .tlen = 17,
+ .t = {
+ 0xe6,0x93,0xbe,0x89, 0xf5,0xee,0x40,0xde,
+ 0xf2,0x9c,0xb5,0xec, 0x6a,0x37,0x23,0x46,
+ 0x0e,
+ },
+ .len = 16,
+ .p = {
+ 0x5d,0x83,0x98,0x37, 0xc6,0x33,0x9e,0x7e,
+ 0x59,0xad,0xd2,0x5b, 0x8a,0x3a,0x9d,0x03,
+ },
+ .c = {
+ 0x96,0x23,0x2f,0x7d, 0x52,0xfc,0x98,0x63,
+ 0x98,0xa5,0x8b,0xdf, 0xca,0xbc,0x85,0x2f,
+ },
+ },
+ [6] = {
+ .k = {
+ 0x9f,0xd3,0x36,0xb1, 0x85,0x07,0xdf,0x19,
+ 0x01,0xea,0xf9,0x52, 0x68,0xbf,0xce,0xe7,
+ 0xd0,0x49,0xf3,0xba, 0x58,0xfb,0x87,0x18,
+ 0x9f,0xca,0x24,0xca, 0x61,0xa3,0xf0,0xda,
+ },
+ .tlen = 17,
+ .t = {
+ 0xea,0xc6,0x72,0x5e, 0x66,0xd4,0xc7,0xbd,
+ 0xa1,0x6e,0xab,0x09, 0xb5,0x58,0x39,0xae,
+ 0x40,
+ },
+ .len = 128,
+ .p = {
+ 0xc7,0xd6,0x73,0x65, 0xcb,0xf3,0xf5,0x3e,
+ 0xb9,0xa7,0xbf,0xb1, 0x54,0xcb,0xac,0x01,
+ 0xee,0xb5,0x94,0x17, 0x40,0x92,0xfd,0xad,
+ 0x8f,0xdb,0x27,0x22, 0x3d,0xb1,0x0b,0xf7,
+ 0xa7,0x46,0x70,0xd0, 0x31,0xdb,0xf9,0xdb,
+ 0xb9,0xb9,0x40,0x4a, 0x0a,0xba,0x77,0x6f,
+ 0x35,0x36,0x9e,0xeb, 0x68,0xe2,0x9e,0xd7,
+ 0xef,0xc2,0x5e,0x21, 0x0d,0xb3,0xb0,0x87,
+ 0xd6,0x43,0x35,0x6e, 0x22,0xa0,0xb7,0xec,
+ 0x26,0xe0,0x7d,0x48, 0xf5,0x5d,0x58,0xd3,
+ 0x29,0xb7,0x1f,0x7e, 0xe9,0x5a,0x02,0xa4,
+ 0xb1,0xde,0x10,0x9f, 0xe1,0xa8,0x5e,0x05,
+ 0xb6,0xa2,0x59,0xca, 0x3e,0xbc,0xd1,0x94,
+ 0x09,0x4e,0x1b,0x37, 0x29,0x9c,0x15,0xef,
+ 0x8c,0x72,0x53,0xbe, 0x6f,0x25,0x2c,0x68,
+ 0x88,0x08,0x0c,0x00, 0x80,0x7a,0x85,0x64,
+ },
+ .c = {
+ 0x49,0x36,0x97,0xd2, 0xde,0xa4,0xde,0x92,
+ 0x7d,0x30,0x08,0xc3, 0xd9,0x47,0xd4,0xcb,
+ 0x5b,0x41,0x27,0x2c, 0x06,0xb8,0x2b,0xef,
+ 0x7b,0x57,0x59,0xb7, 0x5b,0x81,0x38,0xb4,
+ 0xd1,0x81,0xb3,0xe8, 0xac,0xf0,0xa0,0x06,
+ 0xcb,0x74,0x31,0x01, 0xe1,0x3d,0xcf,0x6d,
+ 0x57,0xd1,0x65,0xcd, 0xe7,0x33,0x6c,0x03,
+ 0x54,0xf0,0x2c,0x41, 0xb8,0x75,0x07,0x1d,
+ 0x70,0xf0,0x9c,0xbd, 0x8f,0x6b,0xdb,0x76,
+ 0x86,0x5b,0xe0,0xfd, 0xad,0x61,0x7a,0x4c,
+ 0xd6,0xf1,0x85,0x0b, 0xfd,0x0b,0x3a,0x5f,
+ 0xcf,0xfc,0xb0,0x0b, 0x2b,0xc7,0x31,0x07,
+ 0x9d,0x75,0x82,0xd9, 0x14,0xd4,0x33,0xd3,
+ 0xff,0x20,0xf7,0x14, 0xcf,0xe4,0xda,0xca,
+ 0x11,0xcc,0x57,0x8f, 0x51,0x52,0x9d,0x90,
+ 0x01,0xc8,0x4e,0x1f, 0x2a,0x89,0xe2,0x52,
+ },
+ },
+ };
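+ /*
+ * Each vector above supplies a 32-byte key .k, a .tlen-byte
+ * tweak .t, and a .len-byte plaintext .p together with its
+ * expected ciphertext .c.
+ */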
+ static struct adiantum A;
+ static uint8_t buf[4096];
+ unsigned i;
+ int result = 0;
+
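+ /*
+ * Known-answer tests: encrypt each plaintext under its key and
+ * tweak and compare with the expected ciphertext; then decrypt
+ * the ciphertext and check that the plaintext comes back.
+ */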
+ for (i = 0; i < __arraycount(C); i++) {
+ adiantum_init(&A, C[i].k);
+ adiantum_enc(buf, C[i].p, C[i].len, C[i].t, C[i].tlen, &A);
+ if (memcmp(buf, C[i].c, C[i].len)) {
+ char prefix[16];
+ snprintf(prefix, sizeof prefix, "adiantum enc %u", i);
+ hexdump(printf, prefix, buf, C[i].len);
+ result = -1;
+ }
+ memset(buf, 0, sizeof buf); /* paranoia */
+ adiantum_dec(buf, C[i].c, C[i].len, C[i].t, C[i].tlen, &A);
+ if (memcmp(buf, C[i].p, C[i].len)) {
+ char prefix[16];
+ snprintf(prefix, sizeof prefix, "adiantum dec %u", i);
+ hexdump(printf, prefix, buf, C[i].len);
+ result = -1;
+ }
+ }
+
+ return result;
+}
diff -r 36794fee0d04 -r 9fde04e138c1 sys/crypto/adiantum/files.adiantum
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/sys/crypto/adiantum/files.adiantum Wed Jun 17 02:47:43 2020 +0000
@@ -0,0 +1,6 @@
+# $NetBSD$
+
+define adiantum
+
+file crypto/adiantum/adiantum.c adiantum
+file crypto/adiantum/adiantum_selftest.c adiantum
diff -r 36794fee0d04 -r 9fde04e138c1 sys/dev/cgd_crypto.c
--- a/sys/dev/cgd_crypto.c Mon Jun 15 22:55:59 2020 +0000
+++ b/sys/dev/cgd_crypto.c Wed Jun 17 02:47:43 2020 +0000
@@ -45,6 +45,7 @@
#include <dev/cgd_crypto.h>
+#include <crypto/adiantum/adiantum.h>
#include <crypto/aes/aes.h>
#include <crypto/blowfish/blowfish.h>
#include <crypto/des/des.h>
@@ -72,6 +73,10 @@ static cfunc_init cgd_cipher_bf_init;
static cfunc_destroy cgd_cipher_bf_destroy;
static cfunc_cipher cgd_cipher_bf_cbc;
+static cfunc_init cgd_cipher_adiantum_init;
+static cfunc_destroy cgd_cipher_adiantum_destroy;
+static cfunc_cipher cgd_cipher_adiantum_crypt;
+
static const struct cryptfuncs cf[] = {
{
.cf_name = "aes-xts",
@@ -97,6 +102,12 @@ static const struct cryptfuncs cf[] = {
.cf_destroy = cgd_cipher_bf_destroy,
.cf_cipher = cgd_cipher_bf_cbc,
},
+ {
+ .cf_name = "adiantum",
+ .cf_init = cgd_cipher_adiantum_init,
+ .cf_destroy = cgd_cipher_adiantum_destroy,
+ .cf_cipher = cgd_cipher_adiantum_crypt,
+ },
};
const struct cryptfuncs *
cryptfuncs_find(const char *alg)
@@ -409,3 +420,61 @@ cgd_cipher_bf_cbc(void *privdata, void *
panic("%s: unrecognised direction %d", __func__, dir);
}
}
+
+/*
+ * Adiantum
+ */
+
+static void *
+cgd_cipher_adiantum_init(size_t keylen, const void *key, size_t *blocksize)
+{
+ struct adiantum *A;
+
+ if (!blocksize)
+ return NULL;
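+ /*
+ * cgd measures key and block lengths in bits: Adiantum takes a
+ * 256-bit (32-byte) key, and the only supported block size is
+ * 128 bits ((size_t)-1 means the caller wants the default).
+ */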
+ if (keylen != 256)
+ return NULL;
+ if (*blocksize == (size_t)-1)
+ *blocksize = 128;
+ if (*blocksize != 128)
+ return NULL;
+
+ A = kmem_zalloc(sizeof(*A), KM_SLEEP);
+ adiantum_init(A, key);
+
+ return A;
+}
+
+static void
+cgd_cipher_adiantum_destroy(void *cookie)
+{
+ struct adiantum *A = cookie;
+
+ explicit_memset(A, 0, sizeof(*A));
+ kmem_free(A, sizeof(*A));
+}
+
+static void
+cgd_cipher_adiantum_crypt(void *cookie, void *dst, const void *src,
+ size_t nbytes, const void *blkno, int dir)
+{
+ /*
+ * Treat the block number as a 128-bit block. This is more
+ * than twice as big as the largest number of reasonable
+ * blocks, but it doesn't hurt (it would be rounded up to a
+ * 128-bit input anyway).
+ */
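+ /* (cgd block numbers fit in 64 bits, so 128 leaves ample room.) */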
+ const unsigned tweaklen = 16;
+ struct adiantum *A = cookie;
+
+ switch (dir) {
+ case CGD_CIPHER_ENCRYPT:
+ adiantum_enc(dst, src, nbytes, blkno, tweaklen, A);
+ break;
+ case CGD_CIPHER_DECRYPT:
+ adiantum_dec(dst, src, nbytes, blkno, tweaklen, A);
+ break;
+ default:
+ panic("%s: unrecognised direction %d", __func__, dir);
+ }
+}
diff -r 36794fee0d04 -r 9fde04e138c1 sys/rump/kern/lib/libcrypto/Makefile
--- a/sys/rump/kern/lib/libcrypto/Makefile Mon Jun 15 22:55:59 2020 +0000
+++ b/sys/rump/kern/lib/libcrypto/Makefile Wed Jun 17 02:47:43 2020 +0000
@@ -1,7 +1,8 @@
# $NetBSD: Makefile,v 1.6 2019/12/05 03:57:55 riastradh Exp $
#
-.PATH: ${.CURDIR}/../../../../crypto/aes \
+.PATH: ${.CURDIR}/../../../../crypto/adiantum \
+ ${.CURDIR}/../../../../crypto/aes \
${.CURDIR}/../../../../crypto/blowfish \
${.CURDIR}/../../../../crypto/camellia \
${.CURDIR}/../../../../crypto/cast128 \
@@ -11,6 +12,10 @@
LIB= rumpkern_crypto
COMMENT=Cryptographic routines
+# Adiantum
+SRCS+= adiantum.c
+SRCS+= adiantum_selftest.c
+
# blowfish
SRCS+= bf_ecb.c bf_enc.c bf_cbc.c bf_skey.c bf_module.c
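
(Usage sketch, not part of the patch above: once cgdconfig(8) is also
taught the new cipher name, creating an Adiantum volume should look
like any other cgd setup, e.g. "cgdconfig -g -o /etc/cgd/wd0e.conf
adiantum 256" to generate a parameters file and "cgdconfig cgd0
/dev/wd0e /etc/cgd/wd0e.conf" to configure it -- untested until the
userland side is in.)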