- 10 Jan, 2008 40 commits
-
Herbert Xu authored
If the underlying algorithm specifies a specific geniv algorithm then we should use it for the cryptd version as well. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch introduces the geniv field which indicates the default IV generator for each algorithm. It should point to a string that is not freed as long as the algorithm is registered. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
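A minimal sketch of what setting the new field might look like in an ablkcipher declaration; the generator name "eseqiv" is only illustrative here, since the actual default generators are introduced later in this series:

        static struct crypto_alg my_alg = {
                /* ... other cra_* fields ... */
                .cra_type       = &crypto_ablkcipher_type,
                .cra_u          = {
                        .ablkcipher = {
                                .min_keysize    = 16,
                                .max_keysize    = 32,
                                .ivsize         = 16,
                                .geniv          = "eseqiv",     /* illustrative default IV generator */
                        },
                },
        };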
-
Herbert Xu authored
Different block cipher modes have different requirements for initialisation vectors. For example, CBC can use a simple randomly generated IV while modes such as CTR must use an IV generation mechanism that gives a stronger guarantee on the lack of collisions. Furthermore, disk encryption modes have their own IV generation algorithms. Up until now IV generation has been left to the users of the symmetric key cipher API. This is inconvenient as the number of block cipher modes increases because the user needs to be aware of which mode is supposed to be paired with which IV generation algorithm. Therefore it makes sense to integrate the IV generation into the crypto API. This patch takes the first step in that direction by creating two new ablkcipher operations, givencrypt and givdecrypt, which generate an IV before performing the actual encryption or decryption. The operations are currently not exposed to the user. That will be done once the underlying functionality has actually been implemented. It also creates the underlying givcipher type. Algorithms that directly generate IVs would use it instead of ablkcipher. All other algorithms (including all existing ones) would generate a givcipher algorithm upon registration. This givcipher algorithm will be constructed from the geniv string that's stored in every algorithm. That string will locate a template which is instantiated by the blkcipher/ablkcipher algorithm in question to give a givcipher algorithm. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
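Roughly, the new operations sit alongside encrypt/decrypt in the ablkcipher ops and take a request type that also carries the generated IV. The names below sketch the shape only and are not verbatim from the patch:

        struct skcipher_givcrypt_request;       /* cipher request plus generated IV */

        struct ablkcipher_ops_sketch {
                int (*encrypt)(struct ablkcipher_request *req);
                int (*decrypt)(struct ablkcipher_request *req);
                /* New: generate an IV, then encrypt/decrypt. */
                int (*givencrypt)(struct skcipher_givcrypt_request *req);
                int (*givdecrypt)(struct skcipher_givcrypt_request *req);
        };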
-
Herbert Xu authored
Note: From now on the collective of ablkcipher/blkcipher/givcipher will be known as skcipher, i.e., symmetric key cipher. The name blkcipher has always been something of a misnomer since it supports stream ciphers too. This patch adds the function crypto_grab_skcipher as a new way of getting an ablkcipher spawn. The problem is that previously we did this in two steps, first getting the algorithm and then calling crypto_init_spawn. This meant that each spawn user had to be aware of what type and mask to use for these two steps. This is difficult and also presents a problem when the type/mask changes, as they're about to for IV generators. The new interface does both steps together, just like crypto_alloc_ablkcipher. As a side-effect this also allows us to be stricter about type enforcement for spawns. For now this is only done for ablkcipher but it's trivial to extend to other types. This patch also moves the type/mask logic for skcipher into the helpers crypto_skcipher_type and crypto_skcipher_mask. Finally this patch introduces the function crypto_require_sync to determine whether the user is specifically requesting a sync algorithm. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
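As a rough sketch, a template instance would now grab its underlying cipher in one step instead of look-up-then-init-spawn. The error label and cipher_name variable are illustrative, and the exact signatures from this era may differ slightly:

        struct crypto_skcipher_spawn *spawn = crypto_instance_ctx(inst);
        int err;

        crypto_set_skcipher_spawn(spawn, inst);
        err = crypto_grab_skcipher(spawn, cipher_name,
                                   0, crypto_skcipher_mask(0));
        if (err)
                goto err_free_inst;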
-
Herbert Xu authored
This patch adds the necessary changes for GCM to be used with async ciphers. This would allow it to be used with hardware devices that support CTR. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
As discussed previously, this patch moves the basic CTR functionality into a chainable algorithm called ctr. The IPsec-specific variant of it is now placed on top with the name rfc3686. So ctr(aes) gives a chainable cipher with IV size 16 while the IPsec variant will be called rfc3686(ctr(aes)). This patch also adjusts gcm accordingly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
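For example, assuming both templates are built, the two variants would be requested by name as below; this is a usage sketch, not code from the patch, and the CRYPTO_ALG_ASYNC mask simply asks for a synchronous implementation:

        struct crypto_blkcipher *tfm;

        /* Plain counter mode: chainable like cbc(aes), 16-byte IV. */
        tfm = crypto_alloc_blkcipher("ctr(aes)", 0, CRYPTO_ALG_ASYNC);

        /* IPsec flavour per RFC 3686: the key carries an extra 4-byte
         * nonce and only an 8-byte per-request IV is supplied. */
        tfm = crypto_alloc_blkcipher("rfc3686(ctr(aes))", 0, CRYPTO_ALG_ASYNC);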
-
Herbert Xu authored
With the impending addition of the givcipher type, both blkcipher and ablkcipher algorithms will use it to create givcipher objects. As such it no longer makes sense to split the system between ablkcipher and blkcipher. In particular, both ablkcipher.c and blkcipher.c would need to use the givcipher type which has to reside in ablkcipher.c since it shares much code with it. This patch merges the two Kconfig options as well as the modules into one. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch fixes the request context alignment so that it is actually aligned to the value required by the algorithm. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch adds a new helper crypto_attr_alg_name which is basically the first half of crypto_attr_alg. That is, it returns an algorithm name parameter as a string without looking it up. The caller can then look it up immediately or defer it until later. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
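A hedged sketch of how a template's instance-allocation routine might use the new helper to defer the lookup (error handling abbreviated):

        const char *name;

        name = crypto_attr_alg_name(tb[1]);
        if (IS_ERR(name))
                return ERR_PTR(PTR_ERR(name));

        /* "name" can now be stored for a later lookup, or resolved
         * immediately the way crypto_attr_alg would have done. */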
-
Herbert Xu authored
When allocating ablkcipher/hash objects, we use a mask that's wider than the usual type mask. This patch sanitises the mask supplied by the user so we don't end up using a narrower mask which may lead to unintended results. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Borislav Petkov authored
I get the following build failure:

  LD      vmlinux
  SYSMAP  System.map
  SYSMAP  .tmp_System.map
  Building modules, stage 2.
  MODPOST 226 modules
  ERROR: "crypto_hash_type" [crypto/authenc.ko] undefined!
  make[1]: *** [__modpost] Error 1
  make: *** [modules] Error 2

It fails because crypto_hash_type is declared in crypto/hash.c but is not available to the authenc module. You might want to fix it like so: Signed-off-by: Borislav Petkov <bbpetkov@yahoo.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Adrian Bunk authored
This patch adds __dev{init,exit} annotations. Signed-off-by: Adrian Bunk <bunk@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
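For reference, these annotations mark probe/remove paths so they can be discarded when hotplug support is configured out. A generic example with a made-up driver name, not taken from the patch:

        static int __devinit foo_probe(struct platform_device *pdev)
        {
                return 0;
        }

        static int __devexit foo_remove(struct platform_device *pdev)
        {
                return 0;
        }

        static struct platform_driver foo_driver = {
                .probe  = foo_probe,
                .remove = __devexit_p(foo_remove),
                .driver = { .name = "foo" },
        };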
-
Herbert Xu authored
This patch merges the common hashing code between encryption and decryption. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch changes setkey to use RTA_OK to check the validity of the setkey request. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
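A minimal sketch of the validation pattern from <linux/rtnetlink.h>; everything past the rtattr header check is omitted:

        struct rtattr *rta = (void *)key;

        /* RTA_OK verifies that the rtattr header fits within keylen and
         * that rta_len is at least the header size, before any of its
         * fields are trusted. */
        if (!RTA_OK(rta, keylen))
                return -EINVAL;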
-
Herbert Xu authored
The ivsize should be fetched from ablkcipher, not blkcipher. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Sebastian Siewior authored
Using crypto_blkcipher_decrypt here is wrong because it does not take the IV into account. Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Sebastian Siewior authored
Using crypto_blkcipher_decrypt here is wrong because it does not take the IV into account. Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
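Both fixes above point at the same issue: one way to take the IV into account is the _iv variant of the call, which honours the IV supplied in the descriptor. A sketch under the assumption of a simple CBC decrypt:

        struct blkcipher_desc desc = {
                .tfm    = tfm,
                .info   = iv,           /* caller-supplied IV */
        };
        int err;

        /* The _iv variant uses desc.info instead of whatever IV happens
         * to be stored in the tfm (possibly none). */
        err = crypto_blkcipher_decrypt_iv(&desc, dst, src, nbytes);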
-
Tan Swee Heng authored
This patch adds a simple speed test for salsa20. Usage: modprobe tcrypt mode=206 Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Zoltan Sogor authored
Add LZO compression algorithm support. Signed-off-by: Zoltan Sogor <weth@inf.u-szeged.hu> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
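Once registered, the new algorithm is reachable through the existing compression interface. A rough usage sketch:

        struct crypto_comp *tfm;
        unsigned int dlen = dst_size;
        int err;

        tfm = crypto_alloc_comp("lzo", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        /* dlen is updated to the compressed length on success. */
        err = crypto_comp_compress(tfm, src, slen, dst, &dlen);

        crypto_free_comp(tfm);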
-
Zoltan Sogor authored
Add a common compression tester function and modify the deflate test case to use it. Signed-off-by: Zoltan Sogor <weth@inf.u-szeged.hu> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tan Swee Heng authored
This is a large test vector for Salsa20 that crosses the 4096-bytes page boundary. Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tan Swee Heng authored
This patch fixes the multi-page processing bug that affects large test vectors (the same bug that previously affected ctr.c). There is an optimization for the case walk.nbytes == nbytes. Also, we now use crypto_xor() instead of ad hoc XOR routines. Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
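For reference, crypto_xor() replaces open-coded loops of the form below with a single helper call; buf and keystream are just placeholder names:

        /* Before: ad hoc XOR of the keystream into the data. */
        for (i = 0; i < bytes; i++)
                buf[i] ^= keystream[i];

        /* After: */
        crypto_xor(buf, keystream, bytes);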
-
Herbert Xu authored
The abreq structure is currently allocated on the stack. This is broken if the underlying algorithm is asynchronous. This patch changes it so that it's taken from the private context instead which has been enlarged accordingly. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Unfortunately the generic chaining hasn't been ported to all architectures yet, and notably not s390. So this patch restores the chaining that we've been using previously, which does work everywhere. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
The scatterwalk infrastructure is used by algorithms so it needs to move out of crypto for future users that may live in drivers/crypto or asm/*/crypto. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
This patch changes gcm/authenc to return EBADMSG instead of EINVAL for ICV mismatches. This convention has already been adopted by IPsec. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
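In caller terms this makes an authentication failure distinguishable from a malformed request. A hedged sketch of the synchronous case:

        err = crypto_aead_decrypt(req);
        switch (err) {
        case 0:
                break;          /* plaintext is authentic */
        case -EBADMSG:
                /* ICV mismatch: the message is corrupt or forged. */
                break;
        case -EINVAL:
                /* malformed request (bad lengths, key not set, ...) */
                break;
        }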
-
Herbert Xu authored
The crypto_aead convention for ICVs is to include it directly in the output. If we decided to change this in future then we would make the ICV (if the algorithm has an explicit one) available in the request itself. For now no algorithm needs this so this patch changes gcm to conform to this convention. It also adjusts the tcrypt aead tests to take this into account. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
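Concretely, the caller sizes its buffers so that the ICV rides at the end of the ciphertext. A sketch with single flat buffers and the associated data left out:

        struct scatterlist src, dst;
        unsigned int authsize = crypto_aead_authsize(tfm);

        /* Encrypt: the destination holds the ciphertext plus the ICV
         * appended directly at the end. */
        sg_init_one(&src, inbuf, ptlen);
        sg_init_one(&dst, outbuf, ptlen + authsize);
        aead_request_set_crypt(req, &src, &dst, ptlen, iv);

        /* Decrypt: cryptlen covers the ciphertext *and* the trailing
         * ICV, i.e. ptlen + authsize. */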
-
Herbert Xu authored
Currently the gcm(aes) tests have to be taken together with all other ciphers. This patch makes it available by itself at number 35. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
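Following the tcrypt convention used elsewhere in this series, that should make the GCM tests runnable on their own with something like: modprobe tcrypt mode=35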
-
Herbert Xu authored
The previous code incorrectly included the hash in the verification which also meant that we'd crash and burn when it comes to actually verifying the hash since we'd go past the end of the SG list. This patch fixes that by subtracting authsize from cryptlen at the start. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
Having enckeylen as a template parameter makes it a pain for hardware devices that implement ciphers with many key sizes since each one would have to be registered separately. Since the authenc algorithm is mainly used for legacy purposes where its key is going to be constructed out of two separate keys, we can in fact embed this value into the key itself. This patch does this by prepending an rtnetlink header to the key that contains the encryption key length. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
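The resulting key blob is an rtattr header carrying the encryption key length, followed by the authentication key and then the encryption key. A hedged construction sketch; the param struct and the CRYPTO_AUTHENC_KEYA_PARAM name mirror how later kernels spell this out:

        struct crypto_authenc_key_param {
                __be32 enckeylen;
        } *param;
        struct rtattr *rta = (void *)keybuf;

        rta->rta_type = 1;      /* CRYPTO_AUTHENC_KEYA_PARAM */
        rta->rta_len = RTA_LENGTH(sizeof(*param));
        param = RTA_DATA(rta);
        param->enckeylen = cpu_to_be32(enckeylen);

        memcpy(keybuf + RTA_SPACE(sizeof(*param)), authkey, authkeylen);
        memcpy(keybuf + RTA_SPACE(sizeof(*param)) + authkeylen,
               enckey, enckeylen);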
-
Herbert Xu authored
As it is, authsize is an algorithm parameter which cannot be changed at run-time. This is inconvenient because hardware that implements such algorithms would have to register each authsize that it supports separately. Since authsize is a property common to all AEAD algorithms, we can add a function setauthsize that sets it at run-time, just like setkey. This patch does exactly that and also changes authenc so that authsize is no longer a parameter of its template. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
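A sketch of the resulting usage, assuming the IPsec-style 96-bit truncated HMAC-SHA1 tag:

        struct crypto_aead *tfm;
        int err;

        tfm = crypto_alloc_aead("authenc(hmac(sha1),cbc(aes))", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        /* The tag length is now set at run-time like a key, instead of
         * being a parameter of the authenc template. */
        err = crypto_aead_setauthsize(tfm, 12);
        err = crypto_aead_setkey(tfm, key, keylen);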
-
Herbert Xu authored
Since alignment masks are always one less than a power of two, we can use binary or to find their maximum. Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
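That is, since each mask has the form 2^k - 1, OR-ing two of them yields the larger one. A one-line sketch with placeholder names:

        /* e.g. 3 | 7 == 7: a buffer aligned to the combined mask
         * satisfies both requirements. */
        unsigned int align = mask_a | mask_b;   /* == max(mask_a, mask_b) */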
-
Kamalesh Babulal authored
drivers/char/hw_random/pasemi-rng.c: In function `pasemi_rng_data_present':
drivers/char/hw_random/pasemi-rng.c:53: error: `wait' undeclared (first use in this function)
drivers/char/hw_random/pasemi-rng.c:53: error: (Each undeclared identifier is reported only once
drivers/char/hw_random/pasemi-rng.c:53: error: for each function it appears in.)
drivers/char/hw_random/pasemi-rng.c: At top level:
drivers/char/hw_random/pasemi-rng.c:93: warning: initialization from incompatible pointer type
Signed-off-by: Kamalesh Babulal <kamalesh@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Sebastian Siewior authored
Some CPUs support only 128-bit keys in HW. This patch adds SW fallback support for the other key sizes which may be required. The generic algorithm (and the block mode) must be available in case a fallback is needed. Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc> Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
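A hedged sketch of the driver-side pattern (sctx is a hypothetical private context): the algorithm advertises CRYPTO_ALG_NEED_FALLBACK in its cra_flags, and at init time allocates a software implementation that must not itself require a fallback, hence the mask bits:

        sctx->fallback = crypto_alloc_cipher(name, 0,
                                             CRYPTO_ALG_ASYNC |
                                             CRYPTO_ALG_NEED_FALLBACK);
        if (IS_ERR(sctx->fallback))
                return PTR_ERR(sctx->fallback);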
-
Denis Cheng authored
The utilities implemented in lib/hexdump.c are handier, so please use them instead. Signed-off-by: Denis Cheng <crquan@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
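For example, an ad hoc key dump can become a single call; a sketch with placeholder key/keylen names:

        print_hex_dump(KERN_DEBUG, "key: ", DUMP_PREFIX_OFFSET,
                       16, 1, key, keylen, false);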
-
Sebastian Siewior authored
There is no reason to keep the IV in the private structure. Instead keep just a pointer, to make the patch smaller :) This also removes a few memcpy()s. Signed-off-by: Sebastian Siewior <sebastian@breakpoint.cc> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Jan Glauber authored
Add test vectors to tcrypt for AES in CBC mode for key sizes 192 and 256. The test vectors are copied from NIST SP800-38A. Signed-off-by: Jan Glauber <jang@linux.vnet.ibm.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tan Swee Heng authored
This patch adds a large AES CTR mode test vector. The test vector is 4100 bytes in size. It was generated using a C++ program that called Crypto++. Note that this patch considerably increases the size of "struct cipher_testvec" and hence the size of tcrypt.ko. Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tan Swee Heng authored
Currently the number of entries in a cipher test vector template is limited by TVMEMSIZE/sizeof(struct cipher_testvec). This patch circumvents the problem by pointing cipher_tv to each entry in the template, rather than the template itself. Signed-off-by: Tan Swee Heng <thesweeheng@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Herbert Xu authored
When the data spans across a page boundary, CTR may incorrectly process a partial block in the middle because the blkcipher walking code may supply partial blocks in the middle as long as the total length of the supplied data is more than a block. CTR is supposed to return any unused partial block in that case to the walker. This patch fixes this by doing exactly that, returning partial blocks to the walker unless we received less than a block's worth of data to start with. This also allows us to optimise the bulk of the processing since we no longer have to worry about partial blocks until the very end. Thanks to Tan Swee Heng for fixes and actually testing this :) Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
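A rough sketch of the resulting walk loop; the ctr_crypt_segment/ctr_crypt_final helper names are illustrative, not the exact ones in crypto/ctr.c:

        blkcipher_walk_init(&walk, dst, src, nbytes);
        err = blkcipher_walk_virt_block(desc, &walk, bsize);

        while (walk.nbytes >= bsize) {
                /* Process whole blocks only; hand the tail back to the
                 * walker so the next chunk starts block-aligned. */
                unsigned int left = ctr_crypt_segment(&walk, bsize);
                err = blkcipher_walk_done(desc, &walk, left);
        }

        if (walk.nbytes) {
                /* Less than one block remained in total: finish it here. */
                ctr_crypt_final(&walk);
                err = blkcipher_walk_done(desc, &walk, 0);
        }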
-