* [0/5] [CRYPTO] Speed up crypt()
@ 2005-03-21 9:40 Herbert Xu
2005-03-21 9:48 ` [1/5] [CRYPTO] Do scatterwalk_whichbuf inline Herbert Xu
2005-03-22 11:22 ` [7/*] [CRYPTO] Kill obsolete iv check in cbc_process() Herbert Xu
0 siblings, 2 replies; 13+ messages in thread
From: Herbert Xu @ 2005-03-21 9:40 UTC (permalink / raw)
To: Fruhwirth Clemens; +Cc: James Morris, Linux Kernel Mailing List, cryptoapi
Hi:
I've developed a series of patches that speed up the operations of crypt()
based on the generic scatterwalk patch by Fruhwirth Clemens. My testing
shows that the results are comparable to those of the original patch.
What I found is that the primary source of the performance boost is
the change that results in one pair of kmap operations per page instead
of one pair per block, as is done currently. Since the typical block size
is only 8 or 16 bytes, this is understandable.
Apart from that, eliminating unnecessary out-of-line function calls on
the fast path in crypt() also helps quite a lot.
Please let me know if you find any problems with these patches.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
* [1/5] [CRYPTO] Do scatterwalk_whichbuf inline
2005-03-21 9:40 [0/5] [CRYPTO] Speed up crypt() Herbert Xu
@ 2005-03-21 9:48 ` Herbert Xu
2005-03-21 9:49 ` [2/5] [CRYPTO] Handle in_place flag in crypt() Herbert Xu
2005-03-22 11:22 ` [7/*] [CRYPTO] Kill obsolete iv check in cbc_process() Herbert Xu
1 sibling, 1 reply; 13+ messages in thread
From: Herbert Xu @ 2005-03-21 9:48 UTC (permalink / raw)
To: Fruhwirth Clemens; +Cc: James Morris, Linux Kernel Mailing List, cryptoapi
[-- Attachment #1: Type: text/plain, Size: 721 bytes --]
Hi:
scatterwalk_whichbuf is called once for each block, which could be as
small as 8 or 16 bytes, so it makes sense to do that work inline.
It's also a bit inflexible since we may want to use the temporary buffer
even if the block doesn't cross page boundaries. In particular, we want
to do that when the source and destination are the same.
So let's replace it with scatterwalk_across_pages.
I've also simplified the check in scatterwalk_across_pages. It is
sufficient to only check len_this_page.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
[-- Attachment #2: sg-1 --]
[-- Type: text/plain, Size: 2101 bytes --]
diff -Nru a/crypto/cipher.c b/crypto/cipher.c
--- a/crypto/cipher.c 2005-03-21 18:43:36 +11:00
+++ b/crypto/cipher.c 2005-03-21 18:43:36 +11:00
@@ -11,6 +11,7 @@
* any later version.
*
*/
+#include <linux/compiler.h>
#include <linux/kernel.h>
#include <linux/crypto.h>
#include <linux/errno.h>
@@ -72,8 +73,15 @@
scatterwalk_map(&walk_in, 0);
scatterwalk_map(&walk_out, 1);
- src_p = scatterwalk_whichbuf(&walk_in, bsize, tmp_src);
- dst_p = scatterwalk_whichbuf(&walk_out, bsize, tmp_dst);
+
+ src_p = walk_in.data;
+ if (unlikely(scatterwalk_across_pages(&walk_in, bsize)))
+ src_p = tmp_src;
+
+ dst_p = walk_out.data;
+ if (unlikely(scatterwalk_across_pages(&walk_out, bsize)))
+ dst_p = tmp_dst;
+
in_place = scatterwalk_samebuf(&walk_in, &walk_out,
src_p, dst_p);
diff -Nru a/crypto/scatterwalk.c b/crypto/scatterwalk.c
--- a/crypto/scatterwalk.c 2005-03-21 18:43:36 +11:00
+++ b/crypto/scatterwalk.c 2005-03-21 18:43:36 +11:00
@@ -28,16 +28,6 @@
KM_SOFTIRQ1,
};
-void *scatterwalk_whichbuf(struct scatter_walk *walk, unsigned int nbytes, void *scratch)
-{
- if (nbytes <= walk->len_this_page &&
- (((unsigned long)walk->data) & (PAGE_CACHE_SIZE - 1)) + nbytes <=
- PAGE_CACHE_SIZE)
- return walk->data;
- else
- return scratch;
-}
-
static void memcpy_dir(void *buf, void *sgdata, size_t nbytes, int out)
{
if (out)
diff -Nru a/crypto/scatterwalk.h b/crypto/scatterwalk.h
--- a/crypto/scatterwalk.h 2005-03-21 18:43:36 +11:00
+++ b/crypto/scatterwalk.h 2005-03-21 18:43:36 +11:00
@@ -42,7 +42,12 @@
walk_in->data == src_p && walk_out->data == dst_p;
}
-void *scatterwalk_whichbuf(struct scatter_walk *walk, unsigned int nbytes, void *scratch);
+static inline int scatterwalk_across_pages(struct scatter_walk *walk,
+ unsigned int nbytes)
+{
+ return nbytes > walk->len_this_page;
+}
+
void scatterwalk_start(struct scatter_walk *walk, struct scatterlist *sg);
int scatterwalk_copychunks(void *buf, struct scatter_walk *walk, size_t nbytes, int out);
void scatterwalk_map(struct scatter_walk *walk, int out);
* [2/5] [CRYPTO] Handle in_place flag in crypt()
2005-03-21 9:48 ` [1/5] [CRYPTO] Do scatterwalk_whichbuf inline Herbert Xu
@ 2005-03-21 9:49 ` Herbert Xu
2005-03-21 9:50 ` [3/5] [CRYPTO] Split src/dst handling out from crypt() Herbert Xu
0 siblings, 1 reply; 13+ messages in thread
From: Herbert Xu @ 2005-03-21 9:49 UTC (permalink / raw)
To: Fruhwirth Clemens; +Cc: James Morris, Linux Kernel Mailing List, cryptoapi
[-- Attachment #1: Type: text/plain, Size: 410 bytes --]
Hi:
Move the handling of in_place into crypt() itself. This means that we only
need two temporary buffers instead of three. It also allows us to simplify
the check in scatterwalk_samebuf.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
[-- Attachment #2: sg-2 --]
[-- Type: text/plain, Size: 2780 bytes --]
diff -Nru a/crypto/cipher.c b/crypto/cipher.c
--- a/crypto/cipher.c 2005-03-21 18:43:52 +11:00
+++ b/crypto/cipher.c 2005-03-21 18:43:52 +11:00
@@ -23,7 +23,7 @@
typedef void (cryptfn_t)(void *, u8 *, const u8 *);
typedef void (procfn_t)(struct crypto_tfm *, u8 *,
- u8*, cryptfn_t, int enc, void *, int);
+ u8*, cryptfn_t, int enc, void *);
static inline void xor_64(u8 *a, const u8 *b)
{
@@ -74,22 +74,22 @@
scatterwalk_map(&walk_in, 0);
scatterwalk_map(&walk_out, 1);
+ in_place = scatterwalk_samebuf(&walk_in, &walk_out);
+
src_p = walk_in.data;
if (unlikely(scatterwalk_across_pages(&walk_in, bsize)))
src_p = tmp_src;
dst_p = walk_out.data;
- if (unlikely(scatterwalk_across_pages(&walk_out, bsize)))
+ if (unlikely(scatterwalk_across_pages(&walk_out, bsize)) ||
+ in_place)
dst_p = tmp_dst;
- in_place = scatterwalk_samebuf(&walk_in, &walk_out,
- src_p, dst_p);
-
nbytes -= bsize;
scatterwalk_copychunks(src_p, &walk_in, bsize, 0);
- prfn(tfm, dst_p, src_p, crfn, enc, info, in_place);
+ prfn(tfm, dst_p, src_p, crfn, enc, info);
scatterwalk_done(&walk_in, 0, nbytes);
@@ -104,7 +104,7 @@
}
static void cbc_process(struct crypto_tfm *tfm, u8 *dst, u8 *src,
- cryptfn_t fn, int enc, void *info, int in_place)
+ cryptfn_t fn, int enc, void *info)
{
u8 *iv = info;
@@ -117,19 +117,14 @@
fn(crypto_tfm_ctx(tfm), dst, iv);
memcpy(iv, dst, crypto_tfm_alg_blocksize(tfm));
} else {
- u8 stack[in_place ? crypto_tfm_alg_blocksize(tfm) : 0];
- u8 *buf = in_place ? stack : dst;
-
- fn(crypto_tfm_ctx(tfm), buf, src);
- tfm->crt_u.cipher.cit_xor_block(buf, iv);
+ fn(crypto_tfm_ctx(tfm), dst, src);
+ tfm->crt_u.cipher.cit_xor_block(dst, iv);
memcpy(iv, src, crypto_tfm_alg_blocksize(tfm));
- if (buf != dst)
- memcpy(dst, buf, crypto_tfm_alg_blocksize(tfm));
}
}
static void ecb_process(struct crypto_tfm *tfm, u8 *dst, u8 *src,
- cryptfn_t fn, int enc, void *info, int in_place)
+ cryptfn_t fn, int enc, void *info)
{
fn(crypto_tfm_ctx(tfm), dst, src);
}
diff -Nru a/crypto/scatterwalk.h b/crypto/scatterwalk.h
--- a/crypto/scatterwalk.h 2005-03-21 18:43:52 +11:00
+++ b/crypto/scatterwalk.h 2005-03-21 18:43:52 +11:00
@@ -34,12 +34,10 @@
}
static inline int scatterwalk_samebuf(struct scatter_walk *walk_in,
- struct scatter_walk *walk_out,
- void *src_p, void *dst_p)
+ struct scatter_walk *walk_out)
{
return walk_in->page == walk_out->page &&
- walk_in->offset == walk_out->offset &&
- walk_in->data == src_p && walk_out->data == dst_p;
+ walk_in->offset == walk_out->offset;
}
static inline int scatterwalk_across_pages(struct scatter_walk *walk,
* [3/5] [CRYPTO] Split src/dst handling out from crypt()
2005-03-21 9:49 ` [2/5] [CRYPTO] Handle in_place flag in crypt() Herbert Xu
@ 2005-03-21 9:50 ` Herbert Xu
2005-03-21 9:52 ` [4/5] [CRYPTO] Eliminate most calls to scatterwalk_copychunks " Herbert Xu
0 siblings, 1 reply; 13+ messages in thread
From: Herbert Xu @ 2005-03-21 9:50 UTC (permalink / raw)
To: Fruhwirth Clemens; +Cc: James Morris, Linux Kernel Mailing List, cryptoapi
[-- Attachment #1: Type: text/plain, Size: 651 bytes --]
Hi:
Move src/dst handling from crypt() into the helpers prepare_src,
prepare_dst, complete_src and complete_dst. complete_src doesn't
actually do anything at the moment but is included for completeness.
This sets the stage for further optimisations down the track without
polluting crypt() itself.
These helpers don't belong in scatterwalk.[ch] since they only support
the particular way that crypt() walks the scatter lists.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
[-- Attachment #2: sg-3 --]
[-- Type: text/plain, Size: 1908 bytes --]
diff -Nru a/crypto/cipher.c b/crypto/cipher.c
--- a/crypto/cipher.c 2005-03-21 18:44:09 +11:00
+++ b/crypto/cipher.c 2005-03-21 18:44:09 +11:00
@@ -38,7 +38,38 @@
((u32 *)a)[2] ^= ((u32 *)b)[2];
((u32 *)a)[3] ^= ((u32 *)b)[3];
}
+
+static inline void *prepare_src(struct scatter_walk *walk, int bsize,
+ void *tmp, int in_place)
+{
+ void *src = walk->data;
+
+ if (unlikely(scatterwalk_across_pages(walk, bsize)))
+ src = tmp;
+ scatterwalk_copychunks(src, walk, bsize, 0);
+ return src;
+}
+static inline void *prepare_dst(struct scatter_walk *walk, int bsize,
+ void *tmp, int in_place)
+{
+ void *dst = walk->data;
+
+ if (unlikely(scatterwalk_across_pages(walk, bsize)) || in_place)
+ dst = tmp;
+ return dst;
+}
+
+static inline void complete_src(struct scatter_walk *walk, int bsize,
+ void *src, int in_place)
+{
+}
+
+static inline void complete_dst(struct scatter_walk *walk, int bsize,
+ void *dst, int in_place)
+{
+ scatterwalk_copychunks(dst, walk, bsize, 1);
+}
/*
* Generic encrypt/decrypt wrapper for ciphers, handles operations across
@@ -76,24 +107,17 @@
in_place = scatterwalk_samebuf(&walk_in, &walk_out);
- src_p = walk_in.data;
- if (unlikely(scatterwalk_across_pages(&walk_in, bsize)))
- src_p = tmp_src;
-
- dst_p = walk_out.data;
- if (unlikely(scatterwalk_across_pages(&walk_out, bsize)) ||
- in_place)
- dst_p = tmp_dst;
+ src_p = prepare_src(&walk_in, bsize, tmp_src, in_place);
+ dst_p = prepare_dst(&walk_out, bsize, tmp_dst, in_place);
nbytes -= bsize;
- scatterwalk_copychunks(src_p, &walk_in, bsize, 0);
-
prfn(tfm, dst_p, src_p, crfn, enc, info);
+ complete_src(&walk_in, bsize, src_p, in_place);
scatterwalk_done(&walk_in, 0, nbytes);
- scatterwalk_copychunks(dst_p, &walk_out, bsize, 1);
+ complete_dst(&walk_out, bsize, dst_p, in_place);
scatterwalk_done(&walk_out, 1, nbytes);
if (!nbytes)
* [4/5] [CRYPTO] Eliminate most calls to scatterwalk_copychunks from crypt()
2005-03-21 9:50 ` [3/5] [CRYPTO] Split src/dst handling out from crypt() Herbert Xu
@ 2005-03-21 9:52 ` Herbert Xu
2005-03-21 9:53 ` [5/5] [CRYPTO] Optimise kmap calls in crypt() Herbert Xu
0 siblings, 1 reply; 13+ messages in thread
From: Herbert Xu @ 2005-03-21 9:52 UTC (permalink / raw)
To: Fruhwirth Clemens; +Cc: James Morris, Linux Kernel Mailing List, cryptoapi
[-- Attachment #1: Type: text/plain, Size: 367 bytes --]
Hi:
Only call scatterwalk_copychunks when the block straddles a page boundary.
This allows crypt() to skip the out-of-line call most of the time.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
[-- Attachment #2: sg-4 --]
[-- Type: text/plain, Size: 2865 bytes --]
diff -Nru a/crypto/cipher.c b/crypto/cipher.c
--- a/crypto/cipher.c 2005-03-21 18:44:25 +11:00
+++ b/crypto/cipher.c 2005-03-21 18:44:25 +11:00
@@ -17,6 +17,7 @@
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/slab.h>
+#include <linux/string.h>
#include <asm/scatterlist.h>
#include "internal.h"
#include "scatterwalk.h"
@@ -43,10 +44,13 @@
void *tmp, int in_place)
{
void *src = walk->data;
+ int n = bsize;
- if (unlikely(scatterwalk_across_pages(walk, bsize)))
+ if (unlikely(scatterwalk_across_pages(walk, bsize))) {
src = tmp;
- scatterwalk_copychunks(src, walk, bsize, 0);
+ n = scatterwalk_copychunks(src, walk, bsize, 0);
+ }
+ scatterwalk_advance(walk, n);
return src;
}
@@ -68,7 +72,13 @@
static inline void complete_dst(struct scatter_walk *walk, int bsize,
void *dst, int in_place)
{
- scatterwalk_copychunks(dst, walk, bsize, 1);
+ int n = bsize;
+
+ if (unlikely(scatterwalk_across_pages(walk, bsize)))
+ n = scatterwalk_copychunks(dst, walk, bsize, 1);
+ else if (in_place)
+ memcpy(walk->data, dst, bsize);
+ scatterwalk_advance(walk, n);
}
/*
diff -Nru a/crypto/scatterwalk.c b/crypto/scatterwalk.c
--- a/crypto/scatterwalk.c 2005-03-21 18:44:25 +11:00
+++ b/crypto/scatterwalk.c 2005-03-21 18:44:25 +11:00
@@ -93,22 +93,16 @@
int scatterwalk_copychunks(void *buf, struct scatter_walk *walk,
size_t nbytes, int out)
{
- if (buf != walk->data) {
- while (nbytes > walk->len_this_page) {
- memcpy_dir(buf, walk->data, walk->len_this_page, out);
- buf += walk->len_this_page;
- nbytes -= walk->len_this_page;
+ do {
+ memcpy_dir(buf, walk->data, walk->len_this_page, out);
+ buf += walk->len_this_page;
+ nbytes -= walk->len_this_page;
- crypto_kunmap(walk->data, out);
- scatterwalk_pagedone(walk, out, 1);
- scatterwalk_map(walk, out);
- }
+ crypto_kunmap(walk->data, out);
+ scatterwalk_pagedone(walk, out, 1);
+ scatterwalk_map(walk, out);
+ } while (nbytes > walk->len_this_page);
- memcpy_dir(buf, walk->data, nbytes, out);
- }
-
- walk->offset += nbytes;
- walk->len_this_page -= nbytes;
- walk->len_this_segment -= nbytes;
- return 0;
+ memcpy_dir(buf, walk->data, nbytes, out);
+ return nbytes;
}
diff -Nru a/crypto/scatterwalk.h b/crypto/scatterwalk.h
--- a/crypto/scatterwalk.h 2005-03-21 18:44:25 +11:00
+++ b/crypto/scatterwalk.h 2005-03-21 18:44:25 +11:00
@@ -46,6 +46,14 @@
return nbytes > walk->len_this_page;
}
+static inline void scatterwalk_advance(struct scatter_walk *walk,
+ unsigned int nbytes)
+{
+ walk->offset += nbytes;
+ walk->len_this_page -= nbytes;
+ walk->len_this_segment -= nbytes;
+}
+
void scatterwalk_start(struct scatter_walk *walk, struct scatterlist *sg);
int scatterwalk_copychunks(void *buf, struct scatter_walk *walk, size_t nbytes, int out);
void scatterwalk_map(struct scatter_walk *walk, int out);
* [5/5] [CRYPTO] Optimise kmap calls in crypt()
2005-03-21 9:52 ` [4/5] [CRYPTO] Eliminate most calls to scatterwalk_copychunks " Herbert Xu
@ 2005-03-21 9:53 ` Herbert Xu
2005-03-21 11:30 ` Fruhwirth Clemens
0 siblings, 1 reply; 13+ messages in thread
From: Herbert Xu @ 2005-03-21 9:53 UTC (permalink / raw)
To: Fruhwirth Clemens; +Cc: James Morris, Linux Kernel Mailing List, cryptoapi
[-- Attachment #1: Type: text/plain, Size: 440 bytes --]
Hi:
Perform kmap once (or twice if the buffer is not aligned correctly)
per page in crypt() instead of the current code which does it once
per block. Consequently it will yield once per page instead of once
per block.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
[-- Attachment #2: sg-5 --]
[-- Type: text/plain, Size: 1058 bytes --]
diff -Nru a/crypto/cipher.c b/crypto/cipher.c
--- a/crypto/cipher.c 2005-03-21 18:44:41 +11:00
+++ b/crypto/cipher.c 2005-03-21 18:44:41 +11:00
@@ -117,17 +117,21 @@
in_place = scatterwalk_samebuf(&walk_in, &walk_out);
- src_p = prepare_src(&walk_in, bsize, tmp_src, in_place);
- dst_p = prepare_dst(&walk_out, bsize, tmp_dst, in_place);
+ do {
+ src_p = prepare_src(&walk_in, bsize, tmp_src,
+ in_place);
+ dst_p = prepare_dst(&walk_out, bsize, tmp_dst,
+ in_place);
- nbytes -= bsize;
+ prfn(tfm, dst_p, src_p, crfn, enc, info);
- prfn(tfm, dst_p, src_p, crfn, enc, info);
+ complete_src(&walk_in, bsize, src_p, in_place);
+ complete_dst(&walk_out, bsize, dst_p, in_place);
- complete_src(&walk_in, bsize, src_p, in_place);
- scatterwalk_done(&walk_in, 0, nbytes);
+ nbytes -= bsize;
+ } while (nbytes && !scatterwalk_across_pages(&walk_in, bsize));
- complete_dst(&walk_out, bsize, dst_p, in_place);
+ scatterwalk_done(&walk_in, 0, nbytes);
scatterwalk_done(&walk_out, 1, nbytes);
if (!nbytes)
* Re: [5/5] [CRYPTO] Optimise kmap calls in crypt()
2005-03-21 9:53 ` [5/5] [CRYPTO] Optimise kmap calls in crypt() Herbert Xu
@ 2005-03-21 11:30 ` Fruhwirth Clemens
2005-03-22 1:13 ` Herbert Xu
0 siblings, 1 reply; 13+ messages in thread
From: Fruhwirth Clemens @ 2005-03-21 11:30 UTC (permalink / raw)
To: Herbert Xu; +Cc: James Morris, Linux Kernel Mailing List, cryptoapi
[-- Attachment #1: Type: text/plain, Size: 1037 bytes --]
On Mon, 2005-03-21 at 20:53 +1100, Herbert Xu wrote:
> Perform kmap once (or twice if the buffer is not aligned correctly)
> per page in crypt() instead of the current code which does it once
> per block. Consequently it will yield once per page instead of once
> per block.
Thanks for your work, Herbert.
Applying all patches results in a "does not work for me". The decryption
result is different from the original and my LUKS managed partition
refuses to mount.
I assume you already have a test environment set up, so I would suggest
finding out up to which patch the following test succeeds (it should be
paste-able):
cd /tmp
dd if=/dev/zero of=test-crypt count=100
losetup /dev/loop5 /tmp/test-crypt
echo 0 100 crypt aes-plain 0123456789abcdef0123456789abcdef 0 /dev/loop5 0 | dmsetup create test-map
sha1sum /dev/mapper/test-map
Result:
368d017dbdb4299ed7f27d3fc815442f7e438865 /dev/mapper/test-map
Cheers,
--
Fruhwirth Clemens - http://clemens.endorphin.org
for robots: sp4mtrap@endorphin.org
* Re: [5/5] [CRYPTO] Optimise kmap calls in crypt()
2005-03-21 11:30 ` Fruhwirth Clemens
@ 2005-03-22 1:13 ` Herbert Xu
2005-03-22 10:24 ` Fruhwirth Clemens
0 siblings, 1 reply; 13+ messages in thread
From: Herbert Xu @ 2005-03-22 1:13 UTC (permalink / raw)
To: Fruhwirth Clemens; +Cc: James Morris, Linux Kernel Mailing List, cryptoapi
[-- Attachment #1: Type: text/plain, Size: 607 bytes --]
On Mon, Mar 21, 2005 at 12:30:59PM +0100, Fruhwirth Clemens wrote:
>
> Applying all patches results in a "does not work for me". The decryption
> result is different from the original and my LUKS managed partition
> refuses to mount.
Thanks for testing this, Fruhwirth. The problem is that walk->data was
no longer being incremented after my last change. This patch should fix
it up.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
[-- Attachment #2: sg-6 --]
[-- Type: text/plain, Size: 2177 bytes --]
===== crypto/scatterwalk.c 1.4 vs edited =====
--- 1.4/crypto/scatterwalk.c 2005-03-20 22:18:58 +11:00
+++ edited/crypto/scatterwalk.c 2005-03-22 11:06:11 +11:00
@@ -17,6 +17,7 @@
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/highmem.h>
+#include <asm/bug.h>
#include <asm/scatterlist.h>
#include "internal.h"
#include "scatterwalk.h"
@@ -45,6 +46,8 @@
walk->page = sg->page;
walk->len_this_segment = sg->length;
+ BUG_ON(!sg->length);
+
rest_of_page = PAGE_CACHE_SIZE - (sg->offset & (PAGE_CACHE_SIZE - 1));
walk->len_this_page = min(sg->length, rest_of_page);
walk->offset = sg->offset;
@@ -55,13 +58,17 @@
walk->data = crypto_kmap(walk->page, out) + walk->offset;
}
-static void scatterwalk_pagedone(struct scatter_walk *walk, int out,
- unsigned int more)
+static inline void scatterwalk_unmap(struct scatter_walk *walk, int out)
{
/* walk->data may be pointing the first byte of the next page;
however, we know we transfered at least one byte. So,
walk->data - 1 will be a virtual address in the mapped page. */
+ crypto_kunmap(walk->data - 1, out);
+}
+static void scatterwalk_pagedone(struct scatter_walk *walk, int out,
+ unsigned int more)
+{
if (out)
flush_dcache_page(walk->page);
@@ -81,7 +88,7 @@
void scatterwalk_done(struct scatter_walk *walk, int out, int more)
{
- crypto_kunmap(walk->data, out);
+ scatterwalk_unmap(walk, out);
if (walk->len_this_page == 0 || !more)
scatterwalk_pagedone(walk, out, more);
}
@@ -98,7 +105,7 @@
buf += walk->len_this_page;
nbytes -= walk->len_this_page;
- crypto_kunmap(walk->data, out);
+ scatterwalk_unmap(walk, out);
scatterwalk_pagedone(walk, out, 1);
scatterwalk_map(walk, out);
} while (nbytes > walk->len_this_page);
===== crypto/scatterwalk.h 1.6 vs edited =====
--- 1.6/crypto/scatterwalk.h 2005-03-20 22:18:58 +11:00
+++ edited/crypto/scatterwalk.h 2005-03-22 10:57:07 +11:00
@@ -49,6 +49,7 @@
static inline void scatterwalk_advance(struct scatter_walk *walk,
unsigned int nbytes)
{
+ walk->data += nbytes;
walk->offset += nbytes;
walk->len_this_page -= nbytes;
walk->len_this_segment -= nbytes;
* Re: [5/5] [CRYPTO] Optimise kmap calls in crypt()
2005-03-22 1:13 ` Herbert Xu
@ 2005-03-22 10:24 ` Fruhwirth Clemens
0 siblings, 0 replies; 13+ messages in thread
From: Fruhwirth Clemens @ 2005-03-22 10:24 UTC (permalink / raw)
To: Herbert Xu; +Cc: James Morris, Linux Kernel Mailing List, cryptoapi
[-- Attachment #1: Type: text/plain, Size: 655 bytes --]
On Tue, 2005-03-22 at 12:13 +1100, Herbert Xu wrote:
> On Mon, Mar 21, 2005 at 12:30:59PM +0100, Fruhwirth Clemens wrote:
> >
> > Applying all patches results in a "does not work for me". The decryption
> > result is different from the original and my LUKS managed partition
> > refuses to mount.
>
> Thanks for testing this Fruhwirth. The problem is that walk->data wasn't
> being incremented anymore after my last change.
I remember that I almost forgot about that pointer too.
> This patch should fix it up.
Works for me now. Thanks.
--
Fruhwirth Clemens - http://clemens.endorphin.org
for robots: sp4mtrap@endorphin.org
* [7/*] [CRYPTO] Kill obsolete iv check in cbc_process()
2005-03-21 9:40 [0/5] [CRYPTO] Speed up crypt() Herbert Xu
2005-03-21 9:48 ` [1/5] [CRYPTO] Do scatterwalk_whichbuf inline Herbert Xu
@ 2005-03-22 11:22 ` Herbert Xu
2005-03-22 11:24 ` [8/*] [CRYPTO] Split cbc_process into encrypt/decrypt Herbert Xu
1 sibling, 1 reply; 13+ messages in thread
From: Herbert Xu @ 2005-03-22 11:22 UTC (permalink / raw)
To: Fruhwirth Clemens
Cc: James Morris, Linux Kernel Mailing List, cryptoapi, linux-crypto
[-- Attachment #1: Type: text/plain, Size: 726 bytes --]
Hi:
Here are some more optimisations, plus a bug fix for a pathological case
where in_place might not be set correctly; this can't happen with any
of the current users. Here is the first one:
We have long since stopped using a null cit_iv as a means of doing null
encryption. In fact it doesn't work here anyway since we need to copy
src into dst to achieve null encryption.
No user of cbc_encrypt_iv/cbc_decrypt_iv does this either, so let's just
get rid of this check, which is sitting in the fast path.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
[-- Attachment #2: sg-7 --]
[-- Type: text/plain, Size: 365 bytes --]
===== cipher.c 1.24 vs edited =====
--- 1.24/crypto/cipher.c 2005-03-21 18:41:41 +11:00
+++ edited/cipher.c 2005-03-22 21:28:00 +11:00
@@ -145,11 +145,7 @@
cryptfn_t fn, int enc, void *info)
{
u8 *iv = info;
-
- /* Null encryption */
- if (!iv)
- return;
-
+
if (enc) {
tfm->crt_u.cipher.cit_xor_block(iv, src);
fn(crypto_tfm_ctx(tfm), dst, iv);
* [8/*] [CRYPTO] Split cbc_process into encrypt/decrypt
2005-03-22 11:22 ` [7/*] [CRYPTO] Kill obsolete iv check in cbc_process() Herbert Xu
@ 2005-03-22 11:24 ` Herbert Xu
2005-03-22 11:25 ` [9/*] [CRYPTO] Remap when walk_out crosses page in crypt() Herbert Xu
0 siblings, 1 reply; 13+ messages in thread
From: Herbert Xu @ 2005-03-22 11:24 UTC (permalink / raw)
To: Fruhwirth Clemens
Cc: James Morris, Linux Kernel Mailing List, cryptoapi, linux-crypto
[-- Attachment #1: Type: text/plain, Size: 440 bytes --]
Hi:
Rather than taking a branch on the fast path, we might as well split
cbc_process into separate encrypt and decrypt functions since the two
paths have nothing in common.
We can get rid of the cryptfn argument too. I'll do that next.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
[-- Attachment #2: sg-8 --]
[-- Type: text/plain, Size: 3731 bytes --]
===== cipher.c 1.25 vs edited =====
--- 1.25/crypto/cipher.c 2005-03-22 21:33:25 +11:00
+++ edited/cipher.c 2005-03-22 21:47:09 +11:00
@@ -24,7 +24,7 @@
typedef void (cryptfn_t)(void *, u8 *, const u8 *);
typedef void (procfn_t)(struct crypto_tfm *, u8 *,
- u8*, cryptfn_t, int enc, void *);
+ u8*, cryptfn_t, void *);
static inline void xor_64(u8 *a, const u8 *b)
{
@@ -90,7 +90,7 @@
struct scatterlist *dst,
struct scatterlist *src,
unsigned int nbytes, cryptfn_t crfn,
- procfn_t prfn, int enc, void *info)
+ procfn_t prfn, void *info)
{
struct scatter_walk walk_in, walk_out;
const unsigned int bsize = crypto_tfm_alg_blocksize(tfm);
@@ -123,7 +123,7 @@
dst_p = prepare_dst(&walk_out, bsize, tmp_dst,
in_place);
- prfn(tfm, dst_p, src_p, crfn, enc, info);
+ prfn(tfm, dst_p, src_p, crfn, info);
complete_src(&walk_in, bsize, src_p, in_place);
complete_dst(&walk_out, bsize, dst_p, in_place);
@@ -141,24 +141,28 @@
}
}
-static void cbc_process(struct crypto_tfm *tfm, u8 *dst, u8 *src,
- cryptfn_t fn, int enc, void *info)
+static void cbc_process_encrypt(struct crypto_tfm *tfm, u8 *dst, u8 *src,
+ cryptfn_t fn, void *info)
{
u8 *iv = info;
- if (enc) {
- tfm->crt_u.cipher.cit_xor_block(iv, src);
- fn(crypto_tfm_ctx(tfm), dst, iv);
- memcpy(iv, dst, crypto_tfm_alg_blocksize(tfm));
- } else {
- fn(crypto_tfm_ctx(tfm), dst, src);
- tfm->crt_u.cipher.cit_xor_block(dst, iv);
- memcpy(iv, src, crypto_tfm_alg_blocksize(tfm));
- }
+ tfm->crt_u.cipher.cit_xor_block(iv, src);
+ fn(crypto_tfm_ctx(tfm), dst, iv);
+ memcpy(iv, dst, crypto_tfm_alg_blocksize(tfm));
+}
+
+static void cbc_process_decrypt(struct crypto_tfm *tfm, u8 *dst, u8 *src,
+ cryptfn_t fn, void *info)
+{
+ u8 *iv = info;
+
+ fn(crypto_tfm_ctx(tfm), dst, src);
+ tfm->crt_u.cipher.cit_xor_block(dst, iv);
+ memcpy(iv, src, crypto_tfm_alg_blocksize(tfm));
}
static void ecb_process(struct crypto_tfm *tfm, u8 *dst, u8 *src,
- cryptfn_t fn, int enc, void *info)
+ cryptfn_t fn, void *info)
{
fn(crypto_tfm_ctx(tfm), dst, src);
}
@@ -181,7 +185,7 @@
{
return crypt(tfm, dst, src, nbytes,
tfm->__crt_alg->cra_cipher.cia_encrypt,
- ecb_process, 1, NULL);
+ ecb_process, NULL);
}
static int ecb_decrypt(struct crypto_tfm *tfm,
@@ -191,7 +195,7 @@
{
return crypt(tfm, dst, src, nbytes,
tfm->__crt_alg->cra_cipher.cia_decrypt,
- ecb_process, 1, NULL);
+ ecb_process, NULL);
}
static int cbc_encrypt(struct crypto_tfm *tfm,
@@ -201,7 +205,7 @@
{
return crypt(tfm, dst, src, nbytes,
tfm->__crt_alg->cra_cipher.cia_encrypt,
- cbc_process, 1, tfm->crt_cipher.cit_iv);
+ cbc_process_encrypt, tfm->crt_cipher.cit_iv);
}
static int cbc_encrypt_iv(struct crypto_tfm *tfm,
@@ -211,7 +215,7 @@
{
return crypt(tfm, dst, src, nbytes,
tfm->__crt_alg->cra_cipher.cia_encrypt,
- cbc_process, 1, iv);
+ cbc_process_encrypt, iv);
}
static int cbc_decrypt(struct crypto_tfm *tfm,
@@ -221,7 +225,7 @@
{
return crypt(tfm, dst, src, nbytes,
tfm->__crt_alg->cra_cipher.cia_decrypt,
- cbc_process, 0, tfm->crt_cipher.cit_iv);
+ cbc_process_decrypt, tfm->crt_cipher.cit_iv);
}
static int cbc_decrypt_iv(struct crypto_tfm *tfm,
@@ -231,7 +235,7 @@
{
return crypt(tfm, dst, src, nbytes,
tfm->__crt_alg->cra_cipher.cia_decrypt,
- cbc_process, 0, iv);
+ cbc_process_decrypt, iv);
}
static int nocrypt(struct crypto_tfm *tfm,
* [9/*] [CRYPTO] Remap when walk_out crosses page in crypt()
2005-03-22 11:24 ` [8/*] [CRYPTO] Split cbc_process into encrypt/decrypt Herbert Xu
@ 2005-03-22 11:25 ` Herbert Xu
2005-03-23 20:17 ` David S. Miller
0 siblings, 1 reply; 13+ messages in thread
From: Herbert Xu @ 2005-03-22 11:25 UTC (permalink / raw)
To: Fruhwirth Clemens
Cc: James Morris, Linux Kernel Mailing List, cryptoapi, linux-crypto
[-- Attachment #1: Type: text/plain, Size: 474 bytes --]
Hi:
This is needed so that we can keep the in_place assignment outside the
inner loop. Without this, in pathological situations we can start out
with walk_out being different from walk_in, but when walk_out crosses
a page it may converge with walk_in.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
[-- Attachment #2: sg-9 --]
[-- Type: text/plain, Size: 509 bytes --]
===== cipher.c 1.26 vs edited =====
--- 1.26/crypto/cipher.c 2005-03-22 21:56:21 +11:00
+++ edited/cipher.c 2005-03-22 21:59:53 +11:00
@@ -129,7 +129,9 @@
complete_dst(&walk_out, bsize, dst_p, in_place);
nbytes -= bsize;
- } while (nbytes && !scatterwalk_across_pages(&walk_in, bsize));
+ } while (nbytes &&
+ !scatterwalk_across_pages(&walk_in, bsize) &&
+ !scatterwalk_across_pages(&walk_out, bsize));
scatterwalk_done(&walk_in, 0, nbytes);
scatterwalk_done(&walk_out, 1, nbytes);
* Re: [9/*] [CRYPTO] Remap when walk_out crosses page in crypt()
2005-03-22 11:25 ` [9/*] [CRYPTO] Remap when walk_out crosses page in crypt() Herbert Xu
@ 2005-03-23 20:17 ` David S. Miller
0 siblings, 0 replies; 13+ messages in thread
From: David S. Miller @ 2005-03-23 20:17 UTC (permalink / raw)
To: Herbert Xu; +Cc: clemens, jmorris, linux-kernel, cryptoapi, linux-crypto
On Tue, 22 Mar 2005 22:25:04 +1100
Herbert Xu <herbert@gondor.apana.org.au> wrote:
> Hi:
>
> This is needed so that we can keep the in_place assignment outside the
> inner loop. Without this in pathological situations we can start out
> having walk_out being different from walk_in, but when walk_out crosses
> a page it may converge with walk_in.
All 9 patches applied, thanks Herbert.
Patches 7 through 9 were generated differently from the others,
look at the directory prefixes (or rather, a lack thereof):
===== cipher.c 1.26 vs edited =====
--- 1.26/crypto/cipher.c 2005-03-22 21:56:21 +11:00
+++ edited/cipher.c 2005-03-22 21:59:53 +11:00
I had to hand edit these before sending them through my patch
application scripts which expect -p1 diffs ;-)
Thread overview: 13+ messages
2005-03-21 9:40 [0/5] [CRYPTO] Speed up crypt() Herbert Xu
2005-03-21 9:48 ` [1/5] [CRYPTO] Do scatterwalk_whichbuf inline Herbert Xu
2005-03-21 9:49 ` [2/5] [CRYPTO] Handle in_place flag in crypt() Herbert Xu
2005-03-21 9:50 ` [3/5] [CRYPTO] Split src/dst handling out from crypt() Herbert Xu
2005-03-21 9:52 ` [4/5] [CRYPTO] Eliminate most calls to scatterwalk_copychunks " Herbert Xu
2005-03-21 9:53 ` [5/5] [CRYPTO] Optimise kmap calls in crypt() Herbert Xu
2005-03-21 11:30 ` Fruhwirth Clemens
2005-03-22 1:13 ` Herbert Xu
2005-03-22 10:24 ` Fruhwirth Clemens
2005-03-22 11:22 ` [7/*] [CRYPTO] Kill obsolete iv check in cbc_process() Herbert Xu
2005-03-22 11:24 ` [8/*] [CRYPTO] Split cbc_process into encrypt/decrypt Herbert Xu
2005-03-22 11:25 ` [9/*] [CRYPTO] Remap when walk_out crosses page in crypt() Herbert Xu
2005-03-23 20:17 ` David S. Miller