Index:


Security

  • Reading a smart card through a tamper evident bag

    Many Hardware Security Modules (HSMs) use smart cards to store cryptographic material, to export the Storage Master Key (SMK) and application keys, and to authenticate Security Officers and Crypto Officers.

     

     [Photo: IMG 7011]

    These smartcards are often stored in Tamper Evident Bags (TEB) to provide a chain of custody and prove that no-one has read or otherwise tampered with the card. Unfortunately this is not secure; it’s trivial to read the card through the TEB in a way that is almost undetectable.

    If you need to mail a smart card, or store it in a safe, it should be placed in a hard-shell case, which should be sealed with tamper evident seals, and then placed in a TEB. My standard suggestion for this was to use (clear) PCMCIA card holders and foil tamper evident seals, but it is increasingly hard to find the clear PCMCIA card holders – the Pelican 0915 SD Memory Card Case or Pelican 1040 Micro Case both look like they might work (the SD Card case doesn’t have a clear front, which is unfortunate).

    This is surprisingly easy to do. I had done this in 2010, but since I deleted the writeup, I am redoing it here.

    An HSM smart card is just a “standard” smart card and the contact layout is almost exactly the same as an 8-pin DIP (dual in-line package). I took a cheap USB smart card reader ($20 on Amazon), and removed it from its packaging.

    [Photos: 2020 0324 092903 002, 2020 0324 092822 001]

    I soldered a cheap 8-pin DIP socket onto the reader slide contacts (the “cheap” through-hole DIP sockets work better than the better quality milled ones, as the tips are sharper). Smart card readers have a switch which is activated when the card is inserted (bottom right of the second picture above) – I bridged across this with a small pushbutton, to allow me to activate it on demand.

    [Photos: 2020 0324 091742 001, 2020 0324 093259 003]

    The “points” of the DIP socket can now be placed outside the bag, just above the contacts. Pressing down and “wiggling” the reader will make the points pierce the bag, and make contact with the smart card, allowing the card to be read through the bag – these holes are tiny, and difficult to see, especially if done carefully and above the bag label.

    [Photos: 2020 0324 095316 005, 2020 0324 095401 006]

    If you know where to look, this attack is detectable – a better resourced attacker could easily replace the (large) 8 pin DIP with something like 7X Tungsten Cat Whisker Fine Probes. I only have much larger die probes (and only a small number of these), but even with these the plastic seems to self-heal after poking them through. 

    In summary, don’t just trust a tamper evident bag – they are primarily designed for protecting deposits, or chain-of-custody of evidence, not protecting something like a smart card. Instead, seal the smart card in a hard-shell case, place numbered and signed tamper evident seals on all sides of the case, and then place this entire set in a (numbered and signed) tamper evident bag.

     [Photo: IMG 7015]

    I’ve put a video demonstrating reading the card here: https://www.youtube.com/watch?v=oMDpXdDU1G4&feature=youtu.be

    Output:

    Tue Mar 24 09:57:49 2020
    Reader 0: Gemalto PC Twin Reader 00 00
    Event number: 49
    Card state: Card inserted,
    ATR: 3B 2A 00 80 65 A2 01 02 01 31 72 D6 43
    ATR: 3B 2A 00 80 65 A2 01 02 01 31 72 D6 43
    + TS = 3B --> Direct Convention
    + T0 = 2A, Y(1): 0010, K: 10 (historical bytes)
    TB(1) = 00 --> VPP is not electrically connected
    + Historical bytes: 80 65 A2 01 02 01 31 72 D6 43
    Category indicator byte: 80 (compact TLV data object)
    Tag: 6, len: 5 (pre-issuing data)
    Data: A2 01 02 01 31
    Tag: 7, len: 2 (card capabilities)
    Selection methods: D6
    - DF selection by full DF name
    - DF selection by partial DF name
    - DF selection by file identifier
    - Short EF identifier supported
    - Record number supported
    Data coding byte: 43
    - Behaviour of write functions: write OR
    - Value 'FF' for the first byte of BER-TLV tag fields: invalid
    - Data unit in quartets: 8
    Possibly identified card (using /usr/share/pcsc/smartcard_list.txt):
    3B 2A 00 80 65 A2 01 02 01 31 72 D6 43
    3B 2A 00 80 65 A2 01 .. .. .. 72 D6 43
    Gemplus MPCOS EMV 4 Byte sectors
    3B 2A 00 80 65 A2 01 02 01 31 72 D6 43
    MPCOS-EMV 64K Functional Sample
    THALES nShield Security World
    THALES NCIPHER product line
    Tue Mar 24 09:57:50 2020
    Reader 0: Gemalto PC Twin Reader 00 00
    Event number: 50
    Card state: Card removed,
    Tue Mar 24 09:57:51 2020
    Reader 0: Gemalto PC Twin Reader 00 00
    Event number: 51
    Card state: Card inserted,
    ATR: 3B 2A 00 80 65 A2 01 02 01 31 72 D6 43
    ATR: 3B 2A 00 80 65 A2 01 02 01 31 72 D6 43
    + TS = 3B --> Direct Convention
    + T0 = 2A, Y(1): 0010, K: 10 (historical bytes)
    TB(1) = 00 --> VPP is not electrically connected
    + Historical bytes: 80 65 A2 01 02 01 31 72 D6 43
    Category indicator byte: 80 (compact TLV data object)
    Tag: 6, len: 5 (pre-issuing data)
    Data: A2 01 02 01 31
    Tag: 7, len: 2 (card capabilities)
    Selection methods: D6
    - DF selection by full DF name
    - DF selection by partial DF name
    - DF selection by file identifier
    - Short EF identifier supported
    - Record number supported
    Data coding byte: 43
    - Behaviour of write functions: write OR
    - Value 'FF' for the first byte of BER-TLV tag fields: invalid
    - Data unit in quartets: 8
    Possibly identified card (using /usr/share/pcsc/smartcard_list.txt):
    3B 2A 00 80 65 A2 01 02 01 31 72 D6 43
    3B 2A 00 80 65 A2 01 .. .. .. 72 D6 43
    Gemplus MPCOS EMV 4 Byte sectors
    3B 2A 00 80 65 A2 01 02 01 31 72 D6 43
    MPCOS-EMV 64K Functional Sample
    THALES nShield Security World
    THALES NCIPHER product line
    Tue Mar 24 09:57:51 2020
    Reader 0: Gemalto PC Twin Reader 00 00
    Event number: 52
    Card state: Card removed

     

     

     

     

  • Implementation differences in RSA key generation lead to weaker keys

    Optimizations in different crypto libraries lead to weaker-than-expected keys; OpenSSL and gnuTLS generate slightly weaker keys than mbedTLS.

    The “strength” of an RSA key against brute force is related to how much of P and Q the attacker knows or can eliminate. The more bits of the primes the attacker knows, or the more he knows about the relationship between P and Q, the weaker the key (I’m using imprecise terminology to make this easier to understand).

    I’m using 512 bit keys in examples, because they are faster to generate than longer keys, but the principle is the same for all key lengths.
     
    A 512 bit key has the msb (most significant bit) set. In order for this to be true, the MSB (most significant byte) of P and Q must both be >=128, so MSB(P)*MSB(Q)>= 32768 (e.g: 169*194 or 185*178). There are ~10,000 combinations that satisfy this, but Q > P, so we can ignore 1/2 of these, leaving only 5,000. 
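As a quick sanity check of that “~10,000 combinations” figure, here is a small Python sketch (my own, not from the original post) that just counts the top-byte pairs exhaustively:

```python
# Count pairs of most significant bytes (each 128..255, since both
# primes have their msb set) whose product is >= 32768, i.e. pairs
# that keep the top bit of the 512 bit modulus set.
count = sum(1 for a in range(128, 256)
              for b in range(128, 256)
              if a * b >= 32768)
print(count)        # roughly 10,000, as claimed
print(count // 2)   # ~5,000 once the P/Q ordering symmetry is removed
```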
     
    We also know that the lsb will be set – P and Q are prime, so P and Q are odd. Also, an odd number multiplied by an odd number is odd. 
     
    So, the 256 bit primes are (ideally) worth 256 – 1 (lsb) – 1 (msb) – x bits, where x is the small relationship between P and Q needed to satisfy MSB(P)*MSB(Q) >= 32768.
     
     
     
    For some background, here is a graph of MSB(random(128, 255) * random (128, 255)). As you can see, nothing below 64 ((128 * 128) >> 8 == 64):
    [Graph: random x random]
    However, this doesn’t satisfy the constraint that the msb must be 1. Let’s plot this again, dropping all values where the high bit is not set (MSB(random(128, 255) * random (128, 255)>=128):
    [Graph: random high x random high, msb 1]
    So, this is the distribution of the MSB of moduli that satisfy the basic constraint that the msb of the modulus must be 1.
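The shape of both plots is easy to reproduce without any plotting; here is a small Python sketch (mine, exhaustive over all top-byte pairs rather than sampled):

```python
# Most significant byte of the product of two top bytes, each in
# 128..255 (both primes have their msb set).
tops = [(a * b) >> 8 for a in range(128, 256) for b in range(128, 256)]
print(min(tops))    # 64, i.e. (128 * 128) >> 8

# Now keep only products whose own high bit is set, mirroring the
# requirement that the msb of the modulus must be 1.
tops_msb1 = [t for t in tops if t >= 128]
print(min(tops_msb1))    # 128: the filtered distribution starts at 128
```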
     
    So, an attacker knows 3 things:
    1. the lsb of both P and Q is set (P and Q are prime)
    2. the msb of both P and Q is set, and
    3. there is some sort of minor relationship between P and Q (but this is not very helpful)
     
    But, for all intents and purposes, a good key is 508 bits strong (2* (256 – 1 (lsb) – 1 (msb)).
     
     
     
    Just for giggles (and because I have the script handy!), let’s see what happens if we set the first *2* bits of P and Q high.
     
    A plot of random(128+64, 255) * random(128+64, 255):
    [Graph: random 2high x random 2high]
    Shifted more to the right (starts at 144), and steeper. Makes sense: (192 * 192) >> 8 == 144.
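The same exhaustive check works for the two-high-bits case (again my own sketch, not the original script):

```python
# With the top *two* bits of each factor forced, the top bytes run 192..255.
tops2 = [(a * b) >> 8 for a in range(192, 256) for b in range(192, 256)]
print(min(tops2))    # 144 == (192 * 192) >> 8

# Every one of these products already has its high bit set, so no
# candidate would ever need to be rejected:
print(all(t >= 128 for t in tops2))    # True
```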
     
    Great, now that we understand how things should look, let’s look at the moduli from OpenSSL, gnuTLS and mbedTLS.

     
     
     
    First, OpenSSL:
    [Graph: modulus OpenSSL]
    and GnuTLS:
    [Graph: modulus gnuTLS]
     
    So, gnuTLS (the low level crypto happens in nettle) and OpenSSL have identical distribution of their MSB of moduli, which seems to be the product of 2 numbers, with their 2 most significant bits set.
     
    Let’s check that — we will graph the primes generated by them.
     
    [Graph: OpenSSL primes]
    Errrrr… the lowest number is 192, which is 128 + 64….
     
    That’s suboptimal – the attacker now knows more about the primes than he should — he knows that the first 2 bits are set in each, not just that the first bit is set.
     
     
    Why is this?!
    Let’s look at the OpenSSL source.
    Generating P and Q happens in: crypto/rsa/rsa_gen.c
     
    This needs primes, that is in: crypto/bn/bn_prime.c

    static int probable_prime(BIGNUM *rnd, int bits)
    calls
    BN_priv_rand(rnd, bits, BN_RAND_TOP_TWO, BN_RAND_BOTTOM_ODD)

    What does BN_priv_rand called like this do? 

    “BN_rand() generates a cryptographically strong pseudo-random number of bits in length and stores it in rnd. If bits is less than zero, or too small to accommodate the requirements specified by the top and bottom parameters, an error is returned. The top parameters specifies requirements on the most significant bit of the generated number. If it is BN_RAND_TOP_ANY, there is no constraint. If it is BN_RAND_TOP_ONE, the top bit must be one. If it is BN_RAND_TOP_TWO, the two most significant bits of the number will be set to 1, so that the product of two such random numbers will always have 2*bits length. If bottom is BN_RAND_BOTTOM_ODD, the number will be odd; if it is BN_RAND_BOTTOM_ANY it can be odd or even. If bits is 1 then top cannot also be BN_RAND_FLG_TOPTWO.”

     

    Ah, so BN_rand() will always return numbers with the 2 msb set; these get tested for primality, and the ones that pass still always have the high two bits set.
     
    At least I understand *why* now, but it seems like this creates a weakness in the generated keys – the attacker now knows 3 bits of the primes: 
    1. the lsb is set
    2. the 2 msb are set. 
     
    This is more than just:
    1. the lsb is set
    2. the product MSB(P)*MSB(Q) >= 32768 (~5,000 options)
    I *think* that this means that the primes from OpenSSL / gnuTLS are only 253 bits, not 254, so the “512 bit” keys are 506 bits, and not 508 bits. 
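A quick way to convince yourself of the BN_RAND_TOP_TWO effect is to mimic the bit-forcing in Python (a sketch of mine; real OpenSSL of course also tests the candidates for primality):

```python
import random

random.seed(0)
BITS = 256

def rand_top_two_bottom_odd(bits):
    # Mimic BN_priv_rand(rnd, bits, BN_RAND_TOP_TWO, BN_RAND_BOTTOM_ODD):
    # force the two most significant bits and the least significant bit.
    return random.getrandbits(bits) | (0b11 << (bits - 2)) | 1

# Each factor is >= 0.75 * 2^bits, so the product is >= 0.5625 * 2^(2*bits):
# the top bit of the modulus can never fail to be set, and every product
# has exactly 2*bits bits. No retry loop needed, but 3 bits of each
# factor are known to the attacker.
for _ in range(1000):
    p = rand_top_two_bottom_odd(BITS)
    q = rand_top_two_bottom_odd(BITS)
    assert (p * q).bit_length() == 2 * BITS
print("every product is exactly", 2 * BITS, "bits")
```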
     
    Anyway, let’s look at moduli from stock mbedTLS:
    [Graph: modulus mbedTLS, stock]
     
    Well, that is noticeably different – and it looks like our ideal case. 
     
    Let’s go look at the code. It seems like their distribution is the product of 2 numbers with only their high bits set, where anything whose high bit is not set is then discarded.
     
    From: library/rsa.c
    1:       MBEDTLS_MPI_CHK( mbedtls_mpi_gen_prime( &ctx->P, ( nbits + 1 ) >> 1, 0,
    2:                               f_rng, p_rng ) );
    3:
    4:        MBEDTLS_MPI_CHK( mbedtls_mpi_gen_prime( &ctx->Q, ( nbits + 1 ) >> 1, 0,
    5:                                f_rng, p_rng ) );
    6:
    7:        if( mbedtls_mpi_cmp_mpi( &ctx->P, &ctx->Q ) < 0 )
    8:            mbedtls_mpi_swap( &ctx->P, &ctx->Q );
    9:
    10:         if( mbedtls_mpi_cmp_mpi( &ctx->P, &ctx->Q ) == 0 )
    11:            continue;
    12:
    13:       MBEDTLS_MPI_CHK( mbedtls_mpi_mul_mpi( &ctx->N, &ctx->P, &ctx->Q ) );
    14:      if( mbedtls_mpi_bitlen( &ctx->N ) != nbits )
    15:           continue;
     
    Lines 1 – 5 make some primes. 
    Lines 7 – 8 sort them so P >= Q.
    Lines 10 – 11 make sure P != Q. I guess this doesn’t hurt, but I’d expect a loud warning if they ever are equal – it would seem very likely that your random source is not random (unless, perhaps, you are making very small primes?).
    Line 13 calculates the modulus. 
     
    Now for the “fun” bit…
     
    Lines 14 – 15 check whether the bitlength of the modulus != the requested keylength. Hmmm…
     
    I threw in a debug statement, and sometimes the generated keylength is 1 less than the requested keylength.
    Makes sense I guess – they don’t set *2* high bits when calculating their primes, so sometimes you will get e.g (128 * 128) >>8 == 64. This is 0b0100 0000.
     
    This does not have the highbit set, so it is (kinda) a 511 bit key. 
     
    Let’s see what happens if we comment out that check:
    [Graph: modulus mbedTLS, check removed]
    Yup, now we are getting some moduli where the most significant byte is <128, and so the msb is not set. This means that the bitlength of the keys with this check removed is <512.
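The retry behaviour is also easy to mimic (my own Python sketch; real mbedTLS generates actual primes, here I only force the top and bottom bits): with a single forced top bit, a sizeable fraction of candidate moduli come out one bit short, which is exactly what the lines 14 – 15 check throws away.

```python
import random

random.seed(0)
BITS = 256
TRIALS = 10000

def rand_top_one_odd(bits):
    # Only the single top bit and the lsb forced, as with
    # mbedtls_mpi_gen_prime (primality itself is skipped in this sketch).
    return random.getrandbits(bits) | (1 << (bits - 1)) | 1

short = sum(
    1 for _ in range(TRIALS)
    if (rand_top_one_odd(BITS) * rand_top_one_odd(BITS)).bit_length() != 2 * BITS
)
# For uniform factors in [2^255, 2^256) the rejected fraction tends to
# 2*ln(2) - 1, about 39%.
print(f"{short}/{TRIALS} candidate moduli were only {2 * BITS - 1} bits")
```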
     
    Ok, let’s graph the primes from mbedTLS
     [Graph: primes mbedTLS]
    Looking closer, the lowest Q is 130 – this makes sense. (130 * 255) >>8 = 129.
     
    So, it looks to me like mbedTLS makes stronger keys than OpenSSL / gnuTLS. There is not a *huge* difference (it’s ~2 bits (506 -> 508)), but it *is* noticeable, and it comes about from a (IMO) gratuitous optimization…

     

  • Geiger counter based RNG

    Whenever you read a (good) cryptography book they discuss the difficulty in generating truly random numbers. The standard “random” numbers that you get from things like the C rand() function are actually pseudo-random numbers — each is deterministically generated by applying a function (often a hash) to an input seed. The seed is usually generated from some external source, such as the interval between packets arriving on the Ethernet interface, keyboard timing, etc. The better pseudo-random number generators (PRNGs) continue to feed external entropy into the system to try and increase the randomness of the data.

    While it may be hard to do, lots of cryptography requires very good (unpredictable) random data — as a recent example, take the recent CVE-2008-0166 vulnerability. The short version (longer, better written version here) is that valgrind (a debugging tool that, amongst other things, checks to make sure that you are not reading garbage from uninitialized memory) was complaining that OpenSSL was reading some uninitialized memory and using that data. Someone got annoyed with the warning and, without completely understanding the function that was doing this, simply commented it out (actually, he did try to figure out the reason, checked with the OpenSSL list, etc.), and so stopped valgrind complaining. Unfortunately, the uninitialized data that valgrind was complaining about was used as entropy for the PRNG, and so the amount of entropy was decreased (and you thought it took work to decrease entropy!). The practical upshot of this is that the PRNG was no longer as random, and SSH keys generated from it ended up being predictable — there were only 32,768 different keys that would be generated. This means that you could simply generate all of the possible keys and try them sequentially until the system let you in — and hilarity ensued. The importance of true randomness (and the difficulty in generating it) for various applications even led the RAND Corporation to publish a book full of randomness — for only USD $90 you too can purchase “A Million Random Digits with 100,000 Normal Deviates”.

     

    In order to make the output of your PRNG as random as possible, you should pour more entropy into the pool — there are various hardware devices for doing this, such as thermal noise, avalanche (or Zener breakdown) noise from a diode, shot noise, etc.

     

     

    Intel used to make a motherboard chipset (i810) that included a good hardware number generator, but unfortunately have taken this out of newer chipsets. There have been some cute random number generation systems, such as SGI’s lavarnd, which basically involved pointing a webcam at a lava lamp. A similar project (that stole the name) is LavaRnd which uses the chaotic noise from a CCD in darkness, but the ultimate source (and the one that all of the textbooks use) is radioactive decay.

     

    While all of the textbooks mention radioactive half-life as a great source of entropy, you don’t often see this implemented — so, this seemed like a fun project to do.

     

    The basic plan is to take an old Geiger counter and point it at a source of radiation — like the small Americium 241 pellet in an ionising smoke detector. I will then take the (audio) output of the Geiger counter and feed it to an interface that will connect to the serial port on a PC and watch the interval between “clicks”. If the most recent interval is greater than the previous interval I’ll count that as a 1, if less I will count it as a 0, and if equal, I’ll just drop the current interval.
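That interval-comparison scheme is simple enough to sketch in Python (my interpretation of the plan above: an equal interval emits no bit, but still becomes the baseline for the next comparison):

```python
def intervals_to_bits(intervals):
    """Turn Geiger click intervals into bits: longer than the previous
    interval -> 1, shorter -> 0, equal -> no bit emitted."""
    bits = []
    prev = None
    for interval in intervals:
        if prev is not None:
            if interval > prev:
                bits.append(1)
            elif interval < prev:
                bits.append(0)
            # equal intervals contribute nothing
        prev = interval
    return bits

# e.g. clicks 5, 9, 9, 3, 7 time units apart:
print(intervals_to_bits([5, 9, 9, 3, 7]))    # [1, 0, 1]
```

(A real implementation would want to debias the output further, since consecutive intervals are not quite independent, but this is the scheme as described.)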

     

    I have some Geiger counters that were originally designed for civil defence (back during the heady days when Russian nuclear attack was imminent) and just need to design an RS-232 interface. I’m currently planning on using an OpAmp as a high impedance amplifier and driving an opto-isolator that will be connected to the CTS pin.

     

    [Photo: geiger_box]

     

    [Photo: geiger1]

     

    [Schematic: geiger_sch1]

    Oh! No page on PRNGs would be complete without mentioning the Blum Blum Shub PRNG — I don’t have anything useful to say about it, but, with a name like that, how could I NOT mention it?! In fact, I am going to try to squirrel away a reference to it on every page from now on…