From 306e440daf5f40b195afd83d05dee89fa63189e7 Mon Sep 17 00:00:00 2001
From: Ingo Molnar
Date: Thu, 30 Jun 2005 02:58:55 -0700
Subject: [PATCH] x86: i8253/i8259A lock cleanup

Introduce proper declarations for i8253_lock and i8259A_lock.

Signed-off-by: Ingo Molnar
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 arch/x86_64/kernel/io_apic.c | 1 -
 1 file changed, 1 deletion(-)

(limited to 'arch/x86_64')

diff --git a/arch/x86_64/kernel/io_apic.c b/arch/x86_64/kernel/io_apic.c
index 157190d986b..d206d7e49cf 100644
--- a/arch/x86_64/kernel/io_apic.c
+++ b/arch/x86_64/kernel/io_apic.c
@@ -1064,7 +1064,6 @@ void print_all_local_APICs (void)
 
 void __apicdebuginit print_PIC(void)
 {
-	extern spinlock_t i8259A_lock;
 	unsigned int v;
 	unsigned long flags;
 
--
cgit v1.2.3
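The cleanup above is the usual pattern for sharing a lock: declare it once in a header instead of re-declaring it extern inside each user. A minimal sketch of that pattern, with the header location illustrative only (the header actually chosen by the patch is outside this arch/x86_64-filtered diff):

	#include <linux/spinlock.h>

	/* in a shared header, e.g. include/asm-x86_64/i8259.h (illustrative): */
	extern spinlock_t i8259A_lock;

	/* defined exactly once, in the PIC driver: */
	spinlock_t i8259A_lock = SPIN_LOCK_UNLOCKED;	/* 2.6.13-era initializer */

	/* a user such as print_PIC() then just includes the header: */
	static void example_user(void)
	{
		unsigned long flags;

		spin_lock_irqsave(&i8259A_lock, flags);
		/* read or program the 8259A registers here */
		spin_unlock_irqrestore(&i8259A_lock, flags);
	}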
From 6772926bef3c9f0ec761b39e5702535471fff70b Mon Sep 17 00:00:00 2001
From: Rusty Lynch
Date: Tue, 5 Jul 2005 18:54:50 -0700
Subject: [PATCH] kprobes: fix namespace problem and sparc64 build

The following renames arch_init, a kprobes function for performing any
architecture specific initialization, to arch_init_kprobes in order to
clean up the namespace.

Also, this patch adds arch_init_kprobes to sparc64 to fix the sparc64
kprobes build from the last return probe patch.

Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 arch/x86_64/kernel/kprobes.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'arch/x86_64')

diff --git a/arch/x86_64/kernel/kprobes.c b/arch/x86_64/kernel/kprobes.c
index acd2a778ebe..5c6dc705148 100644
--- a/arch/x86_64/kernel/kprobes.c
+++ b/arch/x86_64/kernel/kprobes.c
@@ -682,7 +682,7 @@ static struct kprobe trampoline_p = {
 	.pre_handler = trampoline_probe_handler
 };
 
-int __init arch_init(void)
+int __init arch_init_kprobes(void)
 {
 	return register_kprobe(&trampoline_p);
 }
--
cgit v1.2.3
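Only the x86_64 half of this patch is visible in this filtered log; x86_64 uses the hook to register its return-probe trampoline, as shown above. For the sparc64 build fix mentioned in the message, an architecture with nothing to register would plausibly supply just a stub, a sketch rather than the actual sparc64 diff:

	#include <linux/init.h>
	#include <linux/kprobes.h>

	/* arch-specific hook called once by the generic kprobes code;
	 * an architecture with no trampoline simply reports success. */
	int __init arch_init_kprobes(void)
	{
		return 0;
	}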
From a2a892a236d03a6e985471a7e57d1c863de144c8 Mon Sep 17 00:00:00 2001
From: Andreas Steinmetz
Date: Wed, 6 Jul 2005 13:55:00 -0700
Subject: [CRYPTO] Add x86_64 asm AES

Implementation:
===============
The encrypt/decrypt code is based on an x86 implementation I did a
while ago which I never published. This unpublished implementation
does include an assembler based key schedule and precomputed tables.

For simplicity and best acceptance, however, I took Gladman's in-kernel
code for table generation and key schedule for the kernel port of my
assembler code, and modified this code to produce the key schedule as
required by my assembler implementation. File locations and Kconfig
are kept similar to the i586 AES assembler implementation.

It may seem a little bit strange to use 32 bit I/O and registers in
the assembler implementation, but this gives the best code size. My
implementation takes one instruction more per round compared to
Gladman's x86 assembler, but it doesn't require any stack for local
variables or saved registers and it is less serialized than Gladman's
code.

Note that all comparisons to Gladman's code were done after my code was
implemented. I used only FIPS PUB 197 for the implementation, so my
implementation is independent work.

If anybody has a better assembler solution for x86_64 I'll be pleased
to have my code replaced with the better solution.

Testing:
========
The implementation passes the in-kernel crypto testing module and I'm
running it without any problems on my laptop, where it is mainly used
for dm-crypt.

Microbenchmark:
===============
The microbenchmark was done in userspace with similar compile flags as
used during kernel compile.

Encrypt/decrypt is about 35% faster than the generic C implementation.
As the generic C as well as my assembler implementation are both table
driven, I don't really expect that there is much room for further
improvements, though I'll be glad to be corrected here.

The key schedule is about 5% slower than the generic C implementation.
This is due to the fact that some more work has to be done in the key
schedule routine to fit the schedule to the assembler implementation.

Code Size:
==========
Encrypt and decrypt are together about 2.1 Kbytes smaller than the
generic C implementation, which is important with regard to L1 cache
usage. The key schedule routine is about 100 bytes larger than the
generic C implementation.

Data Size:
==========
There's no difference in data size requirements between the assembler
implementation and the generic C implementation.

License:
========
Gladman's code is dual BSD/GPL whereas my assembler code is GPLv2 only
(I'm not going to change the license for my code). So I had to change
the module license for the x86_64 aes module from 'Dual BSD/GPL' to
'GPL' to reflect the most restrictive license within the module.

Signed-off-by: Andreas Steinmetz
Signed-off-by: Herbert Xu
Signed-off-by: David S. Miller
---
 arch/x86_64/Makefile                |   4 +-
 arch/x86_64/crypto/Makefile         |   9 +
 arch/x86_64/crypto/aes-x86_64-asm.S | 186 +++++++++++++++++++++
 arch/x86_64/crypto/aes.c            | 324 ++++++++++++++++++++++++++++++++++++
 4 files changed, 522 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86_64/crypto/Makefile
 create mode 100644 arch/x86_64/crypto/aes-x86_64-asm.S
 create mode 100644 arch/x86_64/crypto/aes.c

(limited to 'arch/x86_64')

diff --git a/arch/x86_64/Makefile b/arch/x86_64/Makefile
index 8a73794f9b9..42891569767 100644
--- a/arch/x86_64/Makefile
+++ b/arch/x86_64/Makefile
@@ -65,7 +65,9 @@ CFLAGS += $(call cc-option,-mno-sse -mno-mmx -mno-sse2 -mno-3dnow,)
 head-y := arch/x86_64/kernel/head.o arch/x86_64/kernel/head64.o arch/x86_64/kernel/init_task.o
 
 libs-y += arch/x86_64/lib/
-core-y += arch/x86_64/kernel/ arch/x86_64/mm/
+core-y += arch/x86_64/kernel/ \
+	   arch/x86_64/mm/ \
+	   arch/x86_64/crypto/
 core-$(CONFIG_IA32_EMULATION) += arch/x86_64/ia32/
 drivers-$(CONFIG_PCI) += arch/x86_64/pci/
 drivers-$(CONFIG_OPROFILE) += arch/x86_64/oprofile/
diff --git a/arch/x86_64/crypto/Makefile b/arch/x86_64/crypto/Makefile
new file mode 100644
index 00000000000..426d20f4b72
--- /dev/null
+++ b/arch/x86_64/crypto/Makefile
@@ -0,0 +1,9 @@
+#
+# x86_64/crypto/Makefile
+#
+# Arch-specific CryptoAPI modules.
+#
+
+obj-$(CONFIG_CRYPTO_AES_X86_64) += aes-x86_64.o
+
+aes-x86_64-y := aes-x86_64-asm.o aes.o
diff --git a/arch/x86_64/crypto/aes-x86_64-asm.S b/arch/x86_64/crypto/aes-x86_64-asm.S
new file mode 100644
index 00000000000..483cbb23ab8
--- /dev/null
+++ b/arch/x86_64/crypto/aes-x86_64-asm.S
@@ -0,0 +1,186 @@
+/* AES (Rijndael) implementation (FIPS PUB 197) for x86_64
+ *
+ * Copyright (C) 2005 Andreas Steinmetz
+ *
+ * License:
+ * This code can be distributed under the terms of the GNU General Public
+ * License (GPL) Version 2 provided that the above header down to and
+ * including this sentence is retained in full.
+ */
+
+.extern aes_ft_tab
+.extern aes_it_tab
+.extern aes_fl_tab
+.extern aes_il_tab
+
+.text
+
+#define R1	%rax
+#define R1E	%eax
+#define R1X	%ax
+#define R1H	%ah
+#define R1L	%al
+#define R2	%rbx
+#define R2E	%ebx
+#define R2X	%bx
+#define R2H	%bh
+#define R2L	%bl
+#define R3	%rcx
+#define R3E	%ecx
+#define R3X	%cx
+#define R3H	%ch
+#define R3L	%cl
+#define R4	%rdx
+#define R4E	%edx
+#define R4X	%dx
+#define R4H	%dh
+#define R4L	%dl
+#define R5	%rsi
+#define R5E	%esi
+#define R6	%rdi
+#define R6E	%edi
+#define R7	%rbp
+#define R7E	%ebp
+#define R8	%r8
+#define R9	%r9
+#define R10	%r10
+#define R11	%r11
+
+#define prologue(FUNC,BASE,B128,B192,r1,r2,r3,r4,r5,r6,r7,r8,r9,r10,r11) \
+	.global FUNC;			\
+	.type FUNC,@function;		\
+	.align 8;			\
+FUNC:	movq	r1,r2;			\
+	movq	r3,r4;			\
+	leaq	BASE+52(r8),r9;		\
+	movq	r10,r11;		\
+	movl	(r7),r5 ## E;		\
+	movl	4(r7),r1 ## E;		\
+	movl	8(r7),r6 ## E;		\
+	movl	12(r7),r7 ## E;		\
+	movl	(r8),r10 ## E;		\
+	xorl	-48(r9),r5 ## E;	\
+	xorl	-44(r9),r1 ## E;	\
+	xorl	-40(r9),r6 ## E;	\
+	xorl	-36(r9),r7 ## E;	\
+	cmpl	$24,r10 ## E;		\
+	jb	B128;			\
+	leaq	32(r9),r9;		\
+	je	B192;			\
+	leaq	32(r9),r9;
+
+#define epilogue(r1,r2,r3,r4,r5,r6,r7,r8,r9) \
+	movq	r1,r2;			\
+	movq	r3,r4;			\
+	movl	r5 ## E,(r9);		\
+	movl	r6 ## E,4(r9);		\
+	movl	r7 ## E,8(r9);		\
+	movl	r8 ## E,12(r9);		\
+	ret;
+
+#define round(TAB,OFFSET,r1,r2,r3,r4,r5,r6,r7,r8,ra,rb,rc,rd) \
+	movzbl	r2 ## H,r5 ## E;	\
+	movzbl	r2 ## L,r6 ## E;	\
+	movl	TAB+1024(,r5,4),r5 ## E;\
+	movw	r4 ## X,r2 ## X;	\
+	movl	TAB(,r6,4),r6 ## E;	\
+	roll	$16,r2 ## E;		\
+	shrl	$16,r4 ## E;		\
+	movzbl	r4 ## H,r7 ## E;	\
+	movzbl	r4 ## L,r4 ## E;	\
+	xorl	OFFSET(r8),ra ## E;	\
+	xorl	OFFSET+4(r8),rb ## E;	\
+	xorl	TAB+3072(,r7,4),r5 ## E;\
+	xorl	TAB+2048(,r4,4),r6 ## E;\
+	movzbl	r1 ## L,r7 ## E;	\
+	movzbl	r1 ## H,r4 ## E;	\
+	movl	TAB+1024(,r4,4),r4 ## E;\
+	movw	r3 ## X,r1 ## X;	\
+	roll	$16,r1 ## E;		\
+	shrl	$16,r3 ## E;		\
+	xorl	TAB(,r7,4),r5 ## E;	\
+	movzbl	r3 ## H,r7 ## E;	\
+	movzbl	r3 ## L,r3 ## E;	\
+	xorl	TAB+3072(,r7,4),r4 ## E;\
+	xorl	TAB+2048(,r3,4),r5 ## E;\
+	movzbl	r1 ## H,r7 ## E;	\
+	movzbl	r1 ## L,r3 ## E;	\
+	shrl	$16,r1 ## E;		\
+	xorl	TAB+3072(,r7,4),r6 ## E;\
+	movl	TAB+2048(,r3,4),r3 ## E;\
+	movzbl	r1 ## H,r7 ## E;	\
+	movzbl	r1 ## L,r1 ## E;	\
+	xorl	TAB+1024(,r7,4),r6 ## E;\
+	xorl	TAB(,r1,4),r3 ## E;	\
+	movzbl	r2 ## H,r1 ## E;	\
+	movzbl	r2 ## L,r7 ## E;	\
+	shrl	$16,r2 ## E;		\
+	xorl	TAB+3072(,r1,4),r3 ## E;\
+	xorl	TAB+2048(,r7,4),r4 ## E;\
+	movzbl	r2 ## H,r1 ## E;	\
+	movzbl	r2 ## L,r2 ## E;	\
+	xorl	OFFSET+8(r8),rc ## E;	\
+	xorl	OFFSET+12(r8),rd ## E;	\
+	xorl	TAB+1024(,r1,4),r3 ## E;\
+	xorl	TAB(,r2,4),r4 ## E;
+
+#define move_regs(r1,r2,r3,r4) \
+	movl	r3 ## E,r1 ## E;	\
+	movl	r4 ## E,r2 ## E;
+
+#define entry(FUNC,BASE,B128,B192) \
+	prologue(FUNC,BASE,B128,B192,R2,R8,R7,R9,R1,R3,R4,R6,R10,R5,R11)
+
+#define return epilogue(R8,R2,R9,R7,R5,R6,R3,R4,R11)
+
+#define encrypt_round(TAB,OFFSET) \
+	round(TAB,OFFSET,R1,R2,R3,R4,R5,R6,R7,R10,R5,R6,R3,R4) \
+	move_regs(R1,R2,R5,R6)
+
+#define encrypt_final(TAB,OFFSET) \
+	round(TAB,OFFSET,R1,R2,R3,R4,R5,R6,R7,R10,R5,R6,R3,R4)
+
+#define decrypt_round(TAB,OFFSET) \
+	round(TAB,OFFSET,R2,R1,R4,R3,R6,R5,R7,R10,R5,R6,R3,R4) \
+	move_regs(R1,R2,R5,R6)
+
+#define decrypt_final(TAB,OFFSET) \
+	round(TAB,OFFSET,R2,R1,R4,R3,R6,R5,R7,R10,R5,R6,R3,R4)
+
+/* void aes_encrypt(void *ctx, u8 *out, const u8 *in) */
+
+	entry(aes_encrypt,0,enc128,enc192)
+	encrypt_round(aes_ft_tab,-96)
+	encrypt_round(aes_ft_tab,-80)
+enc192:	encrypt_round(aes_ft_tab,-64)
+	encrypt_round(aes_ft_tab,-48)
+enc128:	encrypt_round(aes_ft_tab,-32)
+	encrypt_round(aes_ft_tab,-16)
+	encrypt_round(aes_ft_tab,  0)
+	encrypt_round(aes_ft_tab, 16)
+	encrypt_round(aes_ft_tab, 32)
+	encrypt_round(aes_ft_tab, 48)
+	encrypt_round(aes_ft_tab, 64)
+	encrypt_round(aes_ft_tab, 80)
+	encrypt_round(aes_ft_tab, 96)
+	encrypt_final(aes_fl_tab,112)
+	return
+
+/* void aes_decrypt(void *ctx, u8 *out, const u8 *in) */
+
+	entry(aes_decrypt,240,dec128,dec192)
+	decrypt_round(aes_it_tab,-96)
+	decrypt_round(aes_it_tab,-80)
+dec192:	decrypt_round(aes_it_tab,-64)
+	decrypt_round(aes_it_tab,-48)
+dec128:	decrypt_round(aes_it_tab,-32)
+	decrypt_round(aes_it_tab,-16)
+	decrypt_round(aes_it_tab,  0)
+	decrypt_round(aes_it_tab, 16)
+	decrypt_round(aes_it_tab, 32)
+	decrypt_round(aes_it_tab, 48)
+	decrypt_round(aes_it_tab, 64)
+	decrypt_round(aes_it_tab, 80)
+	decrypt_round(aes_it_tab, 96)
+	decrypt_final(aes_il_tab,112)
+	return
diff --git a/arch/x86_64/crypto/aes.c b/arch/x86_64/crypto/aes.c
new file mode 100644
index 00000000000..2b5c4010ce3
--- /dev/null
+++ b/arch/x86_64/crypto/aes.c
@@ -0,0 +1,324 @@
+/*
+ * Cryptographic API.
+ *
+ * AES Cipher Algorithm.
+ *
+ * Based on Brian Gladman's code.
+ *
+ * Linux developers:
+ *  Alexander Kjeldaas
+ *  Herbert Valerio Riedel
+ *  Kyle McMartin
+ *  Adam J. Richter (conversion to 2.5 API).
+ *  Andreas Steinmetz (adapted to x86_64 assembler)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * ---------------------------------------------------------------------------
+ * Copyright (c) 2002, Dr Brian Gladman, Worcester, UK.
+ * All rights reserved.
+ *
+ * LICENSE TERMS
+ *
+ * The free distribution and use of this software in both source and binary
+ * form is allowed (with or without changes) provided that:
+ *
+ *   1. distributions of this source code include the above copyright
+ *      notice, this list of conditions and the following disclaimer;
+ *
+ *   2. distributions in binary form include the above copyright
+ *      notice, this list of conditions and the following disclaimer
+ *      in the documentation and/or other associated materials;
+ *
+ *   3. the copyright holder's name is not used to endorse products
+ *      built using this software without specific written permission.
+ *
+ * ALTERNATIVELY, provided that this notice is retained in full, this product
+ * may be distributed under the terms of the GNU General Public License (GPL),
+ * in which case the provisions of the GPL apply INSTEAD OF those given above.
+ *
+ * DISCLAIMER
+ *
+ * This software is provided 'as is' with no explicit or implied warranties
+ * in respect of its properties, including, but not limited to, correctness
+ * and/or fitness for purpose.
+ * ---------------------------------------------------------------------------
+ */
+
+/* Some changes from the Gladman version:
+    s/RIJNDAEL(e_key)/E_KEY/g
+    s/RIJNDAEL(d_key)/D_KEY/g
+*/
+
+#include <asm/byteorder.h>
+#include <linux/bitops.h>
+#include <linux/crypto.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/types.h>
+
+#define AES_MIN_KEY_SIZE	16
+#define AES_MAX_KEY_SIZE	32
+
+#define AES_BLOCK_SIZE		16
+
+/*
+ * #define byte(x, nr) ((unsigned char)((x) >> (nr*8)))
+ */
+static inline u8 byte(const u32 x, const unsigned n)
+{
+	return x >> (n << 3);
+}
+
+#define u32_in(x) le32_to_cpu(*(const __le32 *)(x))
+
+struct aes_ctx
+{
+	u32 key_length;
+	u32 E[60];
+	u32 D[60];
+};
+
+#define E_KEY ctx->E
+#define D_KEY ctx->D
+
+static u8 pow_tab[256] __initdata;
+static u8 log_tab[256] __initdata;
+static u8 sbx_tab[256] __initdata;
+static u8 isb_tab[256] __initdata;
+static u32 rco_tab[10];
+u32 aes_ft_tab[4][256];
+u32 aes_it_tab[4][256];
+
+u32 aes_fl_tab[4][256];
+u32 aes_il_tab[4][256];
+
+static inline u8 f_mult(u8 a, u8 b)
+{
+	u8 aa = log_tab[a], cc = aa + log_tab[b];
+
+	return pow_tab[cc + (cc < aa ? 1 : 0)];
+}
+
+#define ff_mult(a, b) (a && b ? f_mult(a, b) : 0)
+
+#define ls_box(x)			\
+	(aes_fl_tab[0][byte(x, 0)] ^	\
+	 aes_fl_tab[1][byte(x, 1)] ^	\
+	 aes_fl_tab[2][byte(x, 2)] ^	\
+	 aes_fl_tab[3][byte(x, 3)])
+
+static void __init gen_tabs(void)
+{
+	u32 i, t;
+	u8 p, q;
+
+	/* log and power tables for GF(2**8) finite field with
+	   0x011b as modular polynomial - the simplest primitive
+	   root is 0x03, used here to generate the tables */
+
+	for (i = 0, p = 1; i < 256; ++i) {
+		pow_tab[i] = (u8)p;
+		log_tab[p] = (u8)i;
+
+		p ^= (p << 1) ^ (p & 0x80 ? 0x01b : 0);
+	}
+
+	log_tab[1] = 0;
+
+	for (i = 0, p = 1; i < 10; ++i) {
+		rco_tab[i] = p;
+
+		p = (p << 1) ^ (p & 0x80 ? 0x01b : 0);
+	}
+
+	for (i = 0; i < 256; ++i) {
+		p = (i ? pow_tab[255 - log_tab[i]] : 0);
+		q = ((p >> 7) | (p << 1)) ^ ((p >> 6) | (p << 2));
+		p ^= 0x63 ^ q ^ ((q >> 6) | (q << 2));
+		sbx_tab[i] = p;
+		isb_tab[p] = (u8)i;
+	}
+	for (i = 0; i < 256; ++i) {
+		p = sbx_tab[i];
+
+		t = p;
+		aes_fl_tab[0][i] = t;
+		aes_fl_tab[1][i] = rol32(t, 8);
+		aes_fl_tab[2][i] = rol32(t, 16);
+		aes_fl_tab[3][i] = rol32(t, 24);
+
+		t = ((u32)ff_mult(2, p)) |
+		    ((u32)p << 8) |
+		    ((u32)p << 16) | ((u32)ff_mult(3, p) << 24);
+
+		aes_ft_tab[0][i] = t;
+		aes_ft_tab[1][i] = rol32(t, 8);
+		aes_ft_tab[2][i] = rol32(t, 16);
+		aes_ft_tab[3][i] = rol32(t, 24);
+
+		p = isb_tab[i];
+
+		t = p;
+		aes_il_tab[0][i] = t;
+		aes_il_tab[1][i] = rol32(t, 8);
+		aes_il_tab[2][i] = rol32(t, 16);
+		aes_il_tab[3][i] = rol32(t, 24);
+
+		t = ((u32)ff_mult(14, p)) |
+		    ((u32)ff_mult(9, p) << 8) |
+		    ((u32)ff_mult(13, p) << 16) |
+		    ((u32)ff_mult(11, p) << 24);
+
+		aes_it_tab[0][i] = t;
+		aes_it_tab[1][i] = rol32(t, 8);
+		aes_it_tab[2][i] = rol32(t, 16);
+		aes_it_tab[3][i] = rol32(t, 24);
+	}
+}
+
+#define star_x(x) (((x) & 0x7f7f7f7f) << 1) ^ ((((x) & 0x80808080) >> 7) * 0x1b)
+
+#define imix_col(y, x)			\
+	u = star_x(x);			\
+	v = star_x(u);			\
+	w = star_x(v);			\
+	t = w ^ (x);			\
+	(y) = u ^ v ^ w;		\
+	(y) ^= ror32(u ^ t, 8) ^	\
+	       ror32(v ^ t, 16) ^	\
+	       ror32(t, 24)
+
+/* initialise the key schedule from the user supplied key */
+
+#define loop4(i)					\
+{							\
+	t = ror32(t, 8); t = ls_box(t) ^ rco_tab[i];	\
+	t ^= E_KEY[4 * i];     E_KEY[4 * i + 4] = t;	\
+	t ^= E_KEY[4 * i + 1]; E_KEY[4 * i + 5] = t;	\
+	t ^= E_KEY[4 * i + 2]; E_KEY[4 * i + 6] = t;	\
+	t ^= E_KEY[4 * i + 3]; E_KEY[4 * i + 7] = t;	\
+}
+
+#define loop6(i)					\
+{							\
+	t = ror32(t, 8); t = ls_box(t) ^ rco_tab[i];	\
+	t ^= E_KEY[6 * i];     E_KEY[6 * i + 6] = t;	\
+	t ^= E_KEY[6 * i + 1]; E_KEY[6 * i + 7] = t;	\
+	t ^= E_KEY[6 * i + 2]; E_KEY[6 * i + 8] = t;	\
+	t ^= E_KEY[6 * i + 3]; E_KEY[6 * i + 9] = t;	\
+	t ^= E_KEY[6 * i + 4]; E_KEY[6 * i + 10] = t;	\
+	t ^= E_KEY[6 * i + 5]; E_KEY[6 * i + 11] = t;	\
+}
+
+#define loop8(i)					\
+{							\
+	t = ror32(t, 8); t = ls_box(t) ^ rco_tab[i];	\
+	t ^= E_KEY[8 * i];     E_KEY[8 * i + 8] = t;	\
+	t ^= E_KEY[8 * i + 1]; E_KEY[8 * i + 9] = t;	\
+	t ^= E_KEY[8 * i + 2]; E_KEY[8 * i + 10] = t;	\
+	t ^= E_KEY[8 * i + 3]; E_KEY[8 * i + 11] = t;	\
+	t = E_KEY[8 * i + 4] ^ ls_box(t);		\
+	E_KEY[8 * i + 12] = t;				\
+	t ^= E_KEY[8 * i + 5]; E_KEY[8 * i + 13] = t;	\
+	t ^= E_KEY[8 * i + 6]; E_KEY[8 * i + 14] = t;	\
+	t ^= E_KEY[8 * i + 7]; E_KEY[8 * i + 15] = t;	\
+}
+
+static int aes_set_key(void *ctx_arg, const u8 *in_key, unsigned int key_len,
+		       u32 *flags)
+{
+	struct aes_ctx *ctx = ctx_arg;
+	u32 i, j, t, u, v, w;
+
+	if (key_len != 16 && key_len != 24 && key_len != 32) {
+		*flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+		return -EINVAL;
+	}
+
+	ctx->key_length = key_len;
+
+	D_KEY[key_len + 24] = E_KEY[0] = u32_in(in_key);
+	D_KEY[key_len + 25] = E_KEY[1] = u32_in(in_key + 4);
+	D_KEY[key_len + 26] = E_KEY[2] = u32_in(in_key + 8);
+	D_KEY[key_len + 27] = E_KEY[3] = u32_in(in_key + 12);
+
+	switch (key_len) {
+	case 16:
+		t = E_KEY[3];
+		for (i = 0; i < 10; ++i)
+			loop4(i);
+		break;
+
+	case 24:
+		E_KEY[4] = u32_in(in_key + 16);
+		t = E_KEY[5] = u32_in(in_key + 20);
+		for (i = 0; i < 8; ++i)
+			loop6(i);
+		break;
+
+	case 32:
+		E_KEY[4] = u32_in(in_key + 16);
+		E_KEY[5] = u32_in(in_key + 20);
+		E_KEY[6] = u32_in(in_key + 24);
+		t = E_KEY[7] = u32_in(in_key + 28);
+		for (i = 0; i < 7; ++i)
+			loop8(i);
+		break;
+	}
+
+	D_KEY[0] = E_KEY[key_len + 24];
+	D_KEY[1] = E_KEY[key_len + 25];
+	D_KEY[2] = E_KEY[key_len + 26];
+	D_KEY[3] = E_KEY[key_len + 27];
+
+	for (i = 4; i < key_len + 24; ++i) {
+		j = key_len + 24 - (i & ~3) + (i & 3);
+		imix_col(D_KEY[j], E_KEY[i]);
+	}
+
+	return 0;
+}
+
+extern void aes_encrypt(void *ctx_arg, u8 *out, const u8 *in);
+extern void aes_decrypt(void *ctx_arg, u8 *out, const u8 *in);
+
+static struct crypto_alg aes_alg = {
+	.cra_name	= "aes",
+	.cra_flags	= CRYPTO_ALG_TYPE_CIPHER,
+	.cra_blocksize	= AES_BLOCK_SIZE,
+	.cra_ctxsize	= sizeof(struct aes_ctx),
+	.cra_module	= THIS_MODULE,
+	.cra_list	= LIST_HEAD_INIT(aes_alg.cra_list),
+	.cra_u	= {
+		.cipher = {
+			.cia_min_keysize = AES_MIN_KEY_SIZE,
+			.cia_max_keysize = AES_MAX_KEY_SIZE,
+			.cia_setkey	 = aes_set_key,
+			.cia_encrypt	 = aes_encrypt,
+			.cia_decrypt	 = aes_decrypt
+		}
+	}
+};
+
+static int __init aes_init(void)
+{
+	gen_tabs();
+	return crypto_register_alg(&aes_alg);
+}
+
+static void __exit aes_fini(void)
+{
+	crypto_unregister_alg(&aes_alg);
+}
+
+module_init(aes_init);
+module_exit(aes_fini);
+
+MODULE_DESCRIPTION("Rijndael (AES) Cipher Algorithm");
+MODULE_LICENSE("GPL");
--
cgit v1.2.3
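For context, callers never use aes_encrypt()/aes_decrypt() directly; they go through the CryptoAPI, which dispatches to whichever "aes" provider is registered. A rough sketch against the 2.6.13-era API (written from memory, so treat the exact calls and scatterlist layout as assumptions; aes_selftest_block itself is made up for illustration):

	#include <linux/crypto.h>
	#include <linux/errno.h>
	#include <linux/mm.h>		/* virt_to_page(), offset_in_page() */
	#include <asm/scatterlist.h>

	#define AES_BLOCK_SIZE 16	/* mirrors the definition in aes.c */

	/* encrypt one 16-byte block in place with a 128-bit key */
	static int aes_selftest_block(u8 *buf, const u8 *key)
	{
		struct crypto_tfm *tfm;
		struct scatterlist sg;
		int ret;

		tfm = crypto_alloc_tfm("aes", 0);	/* ECB by default */
		if (!tfm)
			return -ENOMEM;

		ret = crypto_cipher_setkey(tfm, key, 16);
		if (!ret) {
			sg.page   = virt_to_page(buf);
			sg.offset = offset_in_page(buf);
			sg.length = AES_BLOCK_SIZE;
			ret = crypto_cipher_encrypt(tfm, &sg, &sg,
						    AES_BLOCK_SIZE);
		}

		crypto_free_tfm(tfm);
		return ret;
	}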
From 3b520b238e018ef0e9d11c9115d5e7d9419c4ef9 Mon Sep 17 00:00:00 2001
From: Shaohua Li
Date: Thu, 7 Jul 2005 17:56:38 -0700
Subject: [PATCH] MTRR suspend/resume cleanup

There has been some discussion about solving the SMP MTRR
suspend/resume breakage, but I didn't find a patch for it. This is an
attempt at it. The basic idea is to move MTRR initialization into
identify_cpu for all APs (so it works for CPU hotplug). For the BP,
restore_processor_state is responsible for restoring the MTRRs.

Signed-off-by: Shaohua Li
Acked-by: Andi Kleen
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 arch/x86_64/kernel/setup.c   | 4 ++++
 arch/x86_64/kernel/suspend.c | 1 +
 2 files changed, 5 insertions(+)

(limited to 'arch/x86_64')

diff --git a/arch/x86_64/kernel/setup.c b/arch/x86_64/kernel/setup.c
index b02d921da4f..5fd03225058 100644
--- a/arch/x86_64/kernel/setup.c
+++ b/arch/x86_64/kernel/setup.c
@@ -1076,6 +1076,10 @@ void __cpuinit identify_cpu(struct cpuinfo_x86 *c)
 #ifdef CONFIG_X86_MCE
 	mcheck_init(c);
 #endif
+	if (c == &boot_cpu_data)
+		mtrr_bp_init();
+	else
+		mtrr_ap_init();
 #ifdef CONFIG_NUMA
 	if (c != &boot_cpu_data)
 		numa_add_cpu(c - cpu_data);
diff --git a/arch/x86_64/kernel/suspend.c b/arch/x86_64/kernel/suspend.c
index 6c0f402e3a8..0612640d91b 100644
--- a/arch/x86_64/kernel/suspend.c
+++ b/arch/x86_64/kernel/suspend.c
@@ -119,6 +119,7 @@ void __restore_processor_state(struct saved_context *ctxt)
 
 	fix_processor_context();
 	do_fpu_end();
+	mtrr_ap_init();
 }
 
 void restore_processor_state(void)
--
cgit v1.2.3

From 6c036527a630720063b67d9a65455e8caca2c8fa Mon Sep 17 00:00:00 2001
From: Christoph Lameter
Date: Thu, 7 Jul 2005 17:56:59 -0700
Subject: [PATCH] mostly_read data section

Add a new section called ".data.read_mostly" for data items that are
read frequently and rarely written to, like cpumaps etc.

If these maps are placed in the .data section then these frequently
read items may end up in cachelines with data that is frequently
updated. In that case all processors in an SMP system must needlessly
reload the cachelines again and again containing elements of those
frequently used variables.

The ability to share these cachelines will allow each cpu in an SMP
system to keep local copies of those shared cachelines, thereby
optimizing performance.

Signed-off-by: Alok N Kataria
Signed-off-by: Shobhit Dayal
Signed-off-by: Christoph Lameter
Signed-off-by: Shai Fultheim
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 arch/x86_64/kernel/vmlinux.lds.S | 4 ++++
 1 file changed, 4 insertions(+)

(limited to 'arch/x86_64')

diff --git a/arch/x86_64/kernel/vmlinux.lds.S b/arch/x86_64/kernel/vmlinux.lds.S
index 73389f51c4e..61c12758ca7 100644
--- a/arch/x86_64/kernel/vmlinux.lds.S
+++ b/arch/x86_64/kernel/vmlinux.lds.S
@@ -56,6 +56,10 @@ SECTIONS
   .data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET) {
 	*(.data.cacheline_aligned)
   }
+  . = ALIGN(CONFIG_X86_L1_CACHE_BYTES);
+  .data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) {
+	*(.data.read_mostly)
+  }
 
 #define VSYSCALL_ADDR (-10*1024*1024)
 #define VSYSCALL_PHYS_ADDR ((LOADADDR(.data.cacheline_aligned) + SIZEOF(.data.cacheline_aligned) + 4095) & ~(4095))
--
cgit v1.2.3
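The section only takes effect once variables are actually placed in it. The enabling definition lives outside arch/x86_64 and is therefore not in this filtered log, but it is plausibly along these lines, with usage as sketched (the variable is a made-up example):

	/* assumed companion definition, roughly as in include/linux/cache.h: */
	#define __read_mostly __attribute__((__section__(".data.read_mostly")))

	/* a frequently-read, rarely-written variable then opts in like this: */
	static int example_tunable __read_mostly = 42;	/* hypothetical */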
From d5950b4355049092739bea97d1bdc14433126cc5 Mon Sep 17 00:00:00 2001
From: Sam Ravnborg
Date: Mon, 11 Jul 2005 21:03:49 -0700
Subject: [NET]: add a top-level Networking menu to *config

Create a new top-level menu named "Networking", thus moving net-related
options and protocol selection away from the drivers menu and up to the
top level where they belong.

To implement this, all architectures have to source "net/Kconfig" before
drivers/*/Kconfig in their Kconfig file. This change has been
implemented for all architectures.

Device drivers for ordinary NICs are still to be found in the Device
Drivers section, but Bluetooth, IrDA and ax25 are located with their
corresponding menu entries under the new Networking menu item.

Signed-off-by: Sam Ravnborg
Signed-off-by: David S. Miller
---
 arch/x86_64/Kconfig | 2 ++
 1 file changed, 2 insertions(+)

(limited to 'arch/x86_64')

diff --git a/arch/x86_64/Kconfig b/arch/x86_64/Kconfig
index d09437b5c48..4b8326177c5 100644
--- a/arch/x86_64/Kconfig
+++ b/arch/x86_64/Kconfig
@@ -515,6 +515,8 @@ config UID16
 
 endmenu
 
+source "net/Kconfig"
+
 source drivers/Kconfig
 
 source "drivers/firmware/Kconfig"
--
cgit v1.2.3

From d312ceda567ab91acd756cde95ac5fbc6b40ed40 Mon Sep 17 00:00:00 2001
From: Andrew Morton
Date: Tue, 12 Jul 2005 13:58:13 -0700
Subject: [PATCH] x86_64: section alignment fix

This is the second time this has happened: inserting a new section
requires that we adjust the arithmetic which is used to calculate the
vsyscall page's offset.

Cc: Christoph Lameter
Cc: Andi Kleen
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 arch/x86_64/kernel/vmlinux.lds.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

(limited to 'arch/x86_64')

diff --git a/arch/x86_64/kernel/vmlinux.lds.S b/arch/x86_64/kernel/vmlinux.lds.S
index 61c12758ca7..2a94f9b60b2 100644
--- a/arch/x86_64/kernel/vmlinux.lds.S
+++ b/arch/x86_64/kernel/vmlinux.lds.S
@@ -62,8 +62,8 @@ SECTIONS
   }
 
 #define VSYSCALL_ADDR (-10*1024*1024)
-#define VSYSCALL_PHYS_ADDR ((LOADADDR(.data.cacheline_aligned) + SIZEOF(.data.cacheline_aligned) + 4095) & ~(4095))
-#define VSYSCALL_VIRT_ADDR ((ADDR(.data.cacheline_aligned) + SIZEOF(.data.cacheline_aligned) + 4095) & ~(4095))
+#define VSYSCALL_PHYS_ADDR ((LOADADDR(.data.read_mostly) + SIZEOF(.data.read_mostly) + 4095) & ~(4095))
+#define VSYSCALL_VIRT_ADDR ((ADDR(.data.read_mostly) + SIZEOF(.data.read_mostly) + 4095) & ~(4095))
 
 #define VLOAD_OFFSET (VSYSCALL_ADDR - VSYSCALL_PHYS_ADDR)
 #define VLOAD(x) (ADDR(x) - VLOAD_OFFSET)
--
cgit v1.2.3
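The macros being edited are anchored to whatever section currently ends the ordinary data area, which is exactly why each newly appended section breaks them. If some future section (call it .data.foo, a hypothetical name) were ever added after .data.read_mostly, the same two lines would need re-anchoring once more:

	/* hypothetical follow-up, not a real patch: */
	#define VSYSCALL_PHYS_ADDR ((LOADADDR(.data.foo) + SIZEOF(.data.foo) + 4095) & ~(4095))
	#define VSYSCALL_VIRT_ADDR ((ADDR(.data.foo) + SIZEOF(.data.foo) + 4095) & ~(4095))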