Wednesday, May 6, 2020

Public Key Infrastructure: What You Need to Know

Asymmetric, or "Public Key", cryptography underpins much of modern computing, but it isn't always well understood beyond "this is secure, and this isn't". In this post we are going to look at how public key cryptography works at three different levels: 1) a functional/conceptual level, 2) a protocol/operations level, and 3) a (very basic) mathematical level.
What's the big idea?
Whether it was Julius Caesar shifting letters to communicate with his troops in the field (the "Caesar cipher", of which ROT-13 is the modern descendant), or the Enigma machine used in World War II by the Axis powers to communicate -- a device that, in its way, precipitated modern computing -- the practice of cryptography, or communicating securely, was always based on a shared secret: something that the two parties who wished to communicate had to agree upon before the communication started. Whether it was "move forward three characters in the alphabet" or "configure the dials of your machine this way", the parties that wished to talk to each other had to agree on this shared secret securely, usually in person.
This is problematic if two people who want to communicate with each other have never been physically proximate to agree on what the shared secret should be. When you want to talk to Amazon.com securely, you are not going to drive to Seattle, agree on a shared secret with Amazon, then travel back home. This is where "Public Key Cryptography" (also known as asymmetric encryption) comes in. It allows you to break a "secret" into two parts. One is a mathematical value that, when plugged into a function, turns a message into a cypher. The other is a value that, when plugged into a matching function, turns the cypher back into the clear text. More importantly, it works in reverse as well.
Let's look at this briefly in pseudocode:
    publicKey = X
    privateKey = Y
    cypher: byte[] = encrypt(publicKey, "Hello")
    message: string = decrypt(privateKey, cypher)
   
    // We encrypted with the public key and decrypted
    // with the private key
    message == "Hello"
    cypher = encrypt(privateKey, "Hello, back at you!");
    message = decrypt(publicKey, cypher)
   
    // We encrypted with the private key and decrypted
    //  with the public key
    message == "Hello, back at you!"
   
This means that if I want you to communicate with me securely, I can give you my "Public Key" in the open -- everyone can know it, and know that it is for me -- but if you encrypt a message using my public key, only I can read it. This is the fundamental idea here: You have three components, the algorithm (the encrypt/decrypt functions), two keys (public/private), and we don't need a shared secret. I can make part of my key public, and you can communicate securely with me. As importantly, you know that if you can decrypt a particular cypher using my public key, it must have come from me.
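Here is that same round trip in real Java, using the JDK's built-in RSA support (the 2048-bit key size and the plain "RSA" cipher name are just reasonable illustration defaults):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class RoundTrip {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        // Encrypt with the public key...
        Cipher enc = Cipher.getInstance("RSA");
        enc.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] cypher = enc.doFinal("Hello".getBytes(StandardCharsets.UTF_8));

        // ...and only the matching private key can turn the cypher back into clear text.
        Cipher dec = Cipher.getInstance("RSA");
        dec.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        String message = new String(dec.doFinal(cypher), StandardCharsets.UTF_8);

        System.out.println(message); // Hello
    }
}
```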

Adding the "Infrastructure" to Public Key Cryptography

Once we have this asymmetric function for securely passing messages to only the intended recipient, or for validating that a message came from a particular sender, we get into the idea of protocols around these functions. We want to communicate securely between two entities, and validate both ends of the communication. This is where key-exchange protocols such as the "Diffie-Hellman Key Exchange" come in (real Diffie-Hellman uses different math than the simplified exchange sketched here, but the shape of the protocol is the same). It generally works like this…
  1. Alice wants to communicate with Bob.
  2. Alice sends her public key to Bob.
  3. Bob sends his public key to Alice.
  4. Messaging proceeds:
  • When Alice sends a message to Bob she encodes it with (message) => encrypt(bobsPublicKey, encrypt(alicesPrivateKey, message))
  • Bob decodes the message with (cypher) => decrypt(alicesPublicKey, decrypt(bobsPrivateKey, cypher)) -- unwrapping the layers in the reverse of the order they were applied
  • And vice-versa
This means that each message passed can firstly only be from the sender, and secondly only be received by the intended recipient. In effect, the stacking of these two functions becomes the shared secret between the parties we discussed above. In practice, this almost never works this way when electrons hit silicon, but we will talk about that in the mathematics section.
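We can watch this double-wrapping work with toy RSA numbers, using the same kind of tiny, wildly insecure values we'll use in the mathematics section later (Alice's keys are the ones derived there; Bob's modulus of 187 is my own made-up example):

```java
import java.math.BigInteger;

public class DoubleWrap {
    public static void main(String[] args) {
        // Alice's toy key pair: modulus 55, public exponent 7, private exponent 23.
        BigInteger aliceM = BigInteger.valueOf(55);
        BigInteger alicePub = BigInteger.valueOf(7), alicePriv = BigInteger.valueOf(23);
        // Bob's toy key pair: modulus 187 (11 * 17); 7 and 23 happen to be inverses here too.
        BigInteger bobM = BigInteger.valueOf(187);
        BigInteger bobPub = BigInteger.valueOf(7), bobPriv = BigInteger.valueOf(23);

        BigInteger message = BigInteger.valueOf(42);

        // Alice: encrypt(bobsPublicKey, encrypt(alicesPrivateKey, message))
        BigInteger inner = message.modPow(alicePriv, aliceM); // only Alice could have made this
        BigInteger wire = inner.modPow(bobPub, bobM);         // only Bob can open this

        // Bob unwraps in the reverse order: his private key first, then Alice's public key.
        BigInteger recovered = wire.modPow(bobPriv, bobM).modPow(alicePub, aliceM);
        System.out.println(recovered); // 42
    }
}
```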
There are some simple obvious problems here. First is the "man in the middle" attack. If Charlie can intercept all of Alice's communications, when the key exchange takes place, he can send his public key to both Alice and Bob during the key exchange, then decrypt both of their messages and re-encrypt them before passing the messages on to the other (If you have ever used the invaluable Charles Proxy tool for development, you have seen this happen -- and you also now understand where the name originates).
In order to prevent this, we need a way to establish a trust relationship to the public keys to prevent Charlie from hijacking the communication. This is where "Certificates" come in. You have almost certainly encountered SSL/TLS certificates in the wild.
All a "certificate" is, is an identity statement and a public key, encrypted (signed) with the private key of a trusted third party. Here we will introduce Dave, a third party whom Alice trusts and with whom Bob has a relationship. He will play the role of a "Certificate Authority": someone who validates that Bob's public key actually came from Bob and not from someone pretending to be Bob, as Charlie did above. Let's look at how this works in the most common case of a one-way certificate trust:
  1. Bob creates a "Certificate Request" in the form "identity: Bob, key: bobsPublicKey".
  2. Bob sends the certificate request to Dave, who trusts Bob already, so he encrypts the certificate request with his private key.
  • createCertificate(identity, publicKey) => encrypt(davesPrivateKey, "identity: ${identity}, key: ${publicKey}")
  3. Dave returns the certificate to Bob.
  4. Alice wants to communicate with Bob.
  5. Alice sends her public key to Bob.
  6. Bob sends his certificate to Alice.
  7. Alice decrypts the certificate with Dave's public key, and sees that Dave says this is definitely "Bob" and there is a public key.
  8. Messaging proceeds:
  • When Alice sends a message to Bob she encodes it with (message) => encrypt(bobsPublicKey, encrypt(alicesPrivateKey, message))
  • Bob decodes the message with (cypher) => decrypt(alicesPublicKey, decrypt(bobsPrivateKey, cypher))
  • And vice-versa. This establishes a two-way trust based on a shared secret of these composite keys.
Now Alice is taking Dave's word for it that messages are coming from Bob. But more importantly, Charlie can't fake the certificate, signed by Dave's private key, and therefore can no longer snoop on the communication, even though he controls Alice's communication channel.
This is how most SSL/TLS communication takes place today. However, if Bob would also like to validate Alice's identity, she can also send a certificate, rather than a raw public key, signed by Dave so that Bob knows this is actually Alice.
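A minimal sketch of the certificate idea in Java: the JDK models "encrypt with the CA's private key" as a signature, so here Dave signs Bob's identity statement and Alice verifies it with Dave's public key. The string format is just the post's pseudocode, not a real X.509 certificate:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;
import java.util.Base64;

public class ToyCertificate {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair dave = gen.generateKeyPair(); // Dave, our "Certificate Authority"
        KeyPair bob = gen.generateKeyPair();

        // Bob's "certificate request": an identity statement plus his public key.
        String request = "identity: Bob, key: "
                + Base64.getEncoder().encodeToString(bob.getPublic().getEncoded());

        // Dave signs the request with his private key; request + signature is the "certificate".
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(dave.getPrivate());
        signer.update(request.getBytes(StandardCharsets.UTF_8));
        byte[] certificateSignature = signer.sign();

        // Alice, who trusts Dave, checks the certificate with Dave's *public* key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(dave.getPublic());
        verifier.update(request.getBytes(StandardCharsets.UTF_8));
        boolean valid = verifier.verify(certificateSignature);
        System.out.println("certificate valid: " + valid); // certificate valid: true
    }
}
```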
The final question to ask is what if Alice doesn't know Dave from Eve? This is where Bob and Dave can provide, rather than a single certificate, a "chain" of certificates. Bob's certificate is signed by Dave, Dave's certificate is signed by Fred, Fred's certificate is signed by Gretta, and Alice trusts Gretta. She can then start by decrypting Fred's certificate with Gretta's public key to get a trusted public key for Fred, then decrypt Dave's certificate with Fred's key, and so on until she decrypts Bob's certificate and receives his trusted public key.
This chain of trust is how many things work in the real world, from SSL/TLS communications over TCP, to code signing for the app stores from Google, Apple, Amazon, and Microsoft.
The final common part of "Public Key Infrastructure" you might encounter is a "Signature". Much like validating that a message comes from a known sender, sometimes you would like the message itself to be public, but you would like to ensure that it was "signed" (sent by, approved by, etc.) by a known party. This is done by hashing the message to a shorter, fixed-length form, then encrypting the hash with the private key of the signatory.
In this case, if Alice wanted to sign the message, "I agree to pay $100 to Bob" she would do the following:
  1. Alice hashes the message with a simple function and encrypts the hash with her private key.
  • signedMessage = message + encrypt(alicesPrivateKey, crc128(message))
  2. Dave wants to know if the message is valid. He validates the signature with:
  • message, signature = split(signedMessage)
  • hash = crc128(message)
  • assert hash == decrypt(alicesPublicKey, signature)
This makes it very hard for Charlie to alter the message in a way that is still meaningful but has the same CRC128 code as the message Alice signed. If he changes the message to "I agree to pay $1000 to Charlie", the CRC128 hash of the message will change, and therefore the assertion about the signature will fail. In effect, most of "blockchain" is simply this: someone issues a signed action to alter a ledger document, then other parties sign a block of ledger operations, indicating that the network agrees they are all valid and that any future operations can start from the previous signature rather than hashing the entire history.
Note that CRC128 isn't a secure hashing algorithm, but it is used here because we think it will be familiar to the reader.
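Here is the signing scheme from the steps above in runnable Java. Since the JDK has no crc128, this sketch substitutes the standard CRC32 (equally insecure, equally familiar), and builds a small RSA key pair from random primes:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.zip.CRC32;

public class ToySignature {
    // Stand-in for the post's crc128(): hash the message down to a small fixed-size number.
    static BigInteger crc(String message) {
        CRC32 crc = new CRC32();
        crc.update(message.getBytes(StandardCharsets.UTF_8));
        return BigInteger.valueOf(crc.getValue());
    }

    public static void main(String[] args) {
        // Build a toy RSA key pair from two random 64-bit primes.
        SecureRandom rnd = new SecureRandom();
        BigInteger e = BigInteger.valueOf(65537), p, q, fn;
        do {
            p = BigInteger.probablePrime(64, rnd);
            q = BigInteger.probablePrime(64, rnd);
            fn = p.subtract(BigInteger.ONE).multiply(q.subtract(BigInteger.ONE));
        } while (!fn.gcd(e).equals(BigInteger.ONE));
        BigInteger m = p.multiply(q);    // the public modulus
        BigInteger d = e.modInverse(fn); // the private exponent

        String message = "I agree to pay $100 to Bob";
        // "Encrypt" the hash with the private key -- that is the signature.
        BigInteger signature = crc(message).modPow(d, m);

        // Anyone holding the public key can check it...
        boolean valid = signature.modPow(e, m).equals(crc(message));
        // ...and a tampered message hashes differently, so the check fails.
        boolean tampered = signature.modPow(e, m).equals(crc("I agree to pay $1000 to Charlie"));
        System.out.println(valid + " " + tampered); // true false
    }
}
```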
Message signing is very common in places where the content of the message is not intended to be secret, but it might be forwarded on from a recipient. Email is the obvious example, but the SOAP/WS-* specifications also include nested payload signing so that as messages, or parts thereof, move through a system, their origin can be re-validated at each step.

Moving to the Real World: TLS 1.3

TLS, or Transport Layer Security, 1.3 is the latest specification that performs a key exchange over TCP between two hosts, and is what the "s" usually means in "https://" URLs these days. This is actually a big step forward over the previous SSL (Secure Socket Layer) and TLS specifications because it reduces the number of round trips needed to establish a connection (from two to one in the typical case).
It covers the elements of the key exchange we discussed above, but addresses a larger set of considerations than the basic examples you have seen so far.
It begins with the client sending a "Hello" message to the server. This looks, again in pseudocode, like this:
{
  "tls-version": 1.3,
  "message": "Hello",
  "supported-ciphers": [
   "FooCryptV1.0",
   "FooCryptV1.1",
   "BarCoSecretsV2.3"
  ],
  "key-agreements": {
      "FooCryptV1.0": {
        "key": "XXXX",
        "certificate-chain": "X1X1X1"
      },
      "FooCryptV1.1": {
        "key": "YYYY",
        "certificate-chain": "Y1Y1Y1"
       },
      "BarCoSecretsV2.3" : {
        "key": "ZZZZZ"
      }
  }
}
Here, "supported-ciphers" is a list of the various implementations of encrypt() and decrypt() the client has available. For each of these, it will also send a "key agreement", which is either a simple public key, or a certificate with an identity and a public key.
The server then chooses the cipher that will be used for the rest of the communication. It can do this based on any number of parameters. Maybe it knows that FooCrypt 1.0 has been "broken" and has a vulnerability, so it will never choose that one. Perhaps the server requires connecting clients to have a trusted certificate chain; in that case it would eliminate BarCo Secrets 2.3. So "FooCryptV1.1" it is. The server then replies with:
{
  "message": "Hello",
  "chosen-cipher": "FooCryptV1.1",
  "key-agreement": {
    "key": "AAAA",
    "certificate-chain": "A1A1A1"
   }
}
At this point, the two parties have agreed on their cipher algorithm and key, and the next message from the client should be encrypted with the agreed upon keys.
It might be that the server doesn't like ANY of the client's offerings, though. In this case it would reply to the client with:
{
  "message": "Hello-Retry",
  "supported-ciphers": [
   "BarCoSecretsV3.1",
   "BarCoSecretsV3.0"
  ],
  "key-agreements": {
      "BarCoSecretsV3.0": {
        "key": "YYYY",
        "certificate-chain": "Y1Y1Y1"
       },
       "BarCoSecretsV3.1": {
        "key": "YYYY",
        "certificate-chain": "Y1Y1Y1"
       }
  }
}
Maybe the server originally supported "BarCoSecretsV2.3", but the client didn't provide a certificate chain. Maybe it only supports these two algorithms. Either way, the client should now decide whether it can comply with what the server is offering, in which case it will send a "Hello-Retry-Response" message, or the connection will fail.
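The server's side of this negotiation can be sketched as a simple preference walk (the cipher names are the post's fictional ones; real TLS stacks are considerably more involved):

```java
import java.util.List;
import java.util.Set;

public class CipherNegotiation {
    // Suites the server supports, in preference order (the post's fictional names).
    static final List<String> SERVER_PREFERENCE = List.of("FooCryptV1.1", "BarCoSecretsV2.3");
    // Suites the server refuses outright, e.g. ones with known vulnerabilities.
    static final Set<String> BROKEN = Set.of("FooCryptV1.0");

    /** Pick the first server-preferred suite the client offered, or null to trigger a Hello-Retry. */
    static String choose(List<String> clientOffered) {
        for (String suite : SERVER_PREFERENCE) {
            if (clientOffered.contains(suite) && !BROKEN.contains(suite)) {
                return suite;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(choose(List.of("FooCryptV1.0", "FooCryptV1.1", "BarCoSecretsV2.3"))); // FooCryptV1.1
        System.out.println(choose(List.of("FooCryptV1.0"))); // null -> send Hello-Retry
    }
}
```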
In practice, "Hello-Retry" doesn't happen very often. The list of mainstream cipher suites is pretty well established at this point. Though interestingly, RSA key exchange, the mechanism most people think of when they think of PKI, is no longer supported in TLS 1.3, because it has always had its own key exchange protocol that was streamlined for performance. Rather than use the D-H keys for transport traffic, RSA would use the PKI at each of the two ends to negotiate a true, new shared secret that was used only for the "session" (the life of the TCP connection). This was optimal when most people had very slow connections to the internet and slow processors, because it kept the data traffic and CPU usage to a minimum. Today very few people are using less than a 128 kbit/s connection (GPRS mobile or ISDN landline), so reducing the number of round trips for the key exchange is the priority. If you have 100 Mbit/s of throughput but 150 ms of latency, sending a number of key options across the wire is less time-expensive than waiting 150 ms several times for the message-response cycle.
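That session-key idea -- use slow asymmetric crypto once to agree on a fast symmetric key -- can be sketched with the JDK's key-wrapping API (a toy illustration, not how TLS actually frames its messages):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class SessionKey {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator rsaGen = KeyPairGenerator.getInstance("RSA");
        rsaGen.initialize(2048);
        KeyPair server = rsaGen.generateKeyPair();

        // The client invents a random symmetric "session" key...
        SecretKey session = KeyGenerator.getInstance("AES").generateKey();

        // ...and wraps it with the server's public key, so only the server can unwrap it.
        Cipher wrap = Cipher.getInstance("RSA");
        wrap.init(Cipher.WRAP_MODE, server.getPublic());
        byte[] wrapped = wrap.wrap(session);

        Cipher unwrap = Cipher.getInstance("RSA");
        unwrap.init(Cipher.UNWRAP_MODE, server.getPrivate());
        SecretKey serverCopy = (SecretKey) unwrap.unwrap(wrapped, "AES", Cipher.SECRET_KEY);

        // All further traffic uses the fast symmetric cipher for the life of the session.
        Cipher enc = Cipher.getInstance("AES");
        enc.init(Cipher.ENCRYPT_MODE, session);
        byte[] traffic = enc.doFinal("session data".getBytes(StandardCharsets.UTF_8));

        Cipher dec = Cipher.getInstance("AES");
        dec.init(Cipher.DECRYPT_MODE, serverCopy);
        String roundTripped = new String(dec.doFinal(traffic), StandardCharsets.UTF_8);
        System.out.println(roundTripped); // session data
    }
}
```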

Prime Numbers and Public Key Cryptography

How does this actually work? Well, it works by using large prime numbers to build keys that are hard to recover from the public values, because factoring very large numbers (outside of a large quantum computer) is exceedingly time-intensive.
First, to create a private key, we need two prime numbers:
p = 5
q = 11
Next, we find the modulus, which is just the product of the two:
m = p * q
(55)
Next we need the totient of the modulus. For a product of two primes, this is easy to compute by subtracting one from each and multiplying:
fn = (p - 1) * (q -1)
(40)
Now we need any number that is relatively prime to fn and less than it. Relatively prime just means they share no common factors. 40 has the prime factors 2 and 5, so we could use any of 3, 7, 9, 11, 13, 17, 19, 21, 23, 27, 29, 31, 33, 37, and 39. It doesn't matter which, but let's choose 7.
public_encrypt_exponent = 7
Now my "public key" that I can share with the world is the pair of numbers 7 and 55. Next I need a "private key". This is again two numbers: our modulus, and a new exponent, the modular multiplicative inverse of the public exponent:
private_decrypt_exponent = public_encrypt_exponent^-1 % fn
(23)
My private key is now my private decrypt exponent, and the modulus of my two original numbers, or (23, 55).
Now if I want to encrypt the value 50 for the owner of the private key (note, the value HAS to be below the modulus, so I can't use 60; remember, usually these are very large numbers):
cypher = 50 ^ (public_encrypt_exponent) % m
(30)
Now I want to decrypt the value:
message = 30 ^ (private_decrypt_exponent) % m
(50)
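The whole worked example above, transcribed into Java with BigInteger:

```java
import java.math.BigInteger;

public class ToyRsaMath {
    public static void main(String[] args) {
        BigInteger p = BigInteger.valueOf(5), q = BigInteger.valueOf(11);
        BigInteger m = p.multiply(q);                       // 55
        BigInteger fn = p.subtract(BigInteger.ONE)
                .multiply(q.subtract(BigInteger.ONE));      // 40
        BigInteger e = BigInteger.valueOf(7);               // public exponent, coprime to 40
        BigInteger d = e.modInverse(fn);                    // 23, the private exponent

        BigInteger cypher = BigInteger.valueOf(50).modPow(e, m); // 30
        BigInteger message = cypher.modPow(d, m);                // back to 50
        System.out.println(d + " " + cypher + " " + message);    // 23 30 50
    }
}
```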
The important thing here, though, is that the wrapping of functions doesn't have to be done naively, one full operation after another. Since we know we always want to encrypt with our private key and then with their public key, implementations can collapse and optimize these modular-exponentiation steps -- for example, using the Chinese Remainder Theorem to dramatically speed up the private-key operation -- rather than paying full price for each layer separately.

Obviously there is more to it than just this. If this were all there was, there wouldn't be a selection of ciphers from which to choose, but at a fundamental level, the mathematical operations you see above are how RSA-style ciphers work.

Things to Know About Your Organization and Project

While we are not getting into tooling here, there are some things you might want to ask yourself about your organization and/or project:
  1. Does my organization have a centralized certificate authority, or are certificates requested one-off from a global certificate authority?
  • Is that certificate authority certified by one of the global certificate authorities, or is it known only within my company?
  • If it is only in my company, what do I need to do to make sure my chosen language/library/runtime platform can trust the certificates my company is issuing?
  2. Certificates in the real world have an expiration date. How are we managing expiration, and are there processes that audit my running code and alert someone if a certificate is about to expire?
  3. Certificates in the real world come in two general flavors: those that identify a host, and those that identify a host and a business entity.
  • Is it important for my organization that customers see that a certificate is bound to a business entity? (This is the difference between something like LetsEncrypt.org/certbot and getting a "real" certificate from Thawte or Verisign.)
  4. Finally, do I know how to trust a new certificate authority and create or sign certificate requests within the context of my chosen platform/runtime/framework/http library?

Tuesday, October 2, 2018

Executors and Futures in Java

This is part of an experiment. It is "code as blog". This entire blog post is just documented Java code.

package software.coop.know.future;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;
import java.util.stream.DoubleStream;


/**
 * In this class, we will look at the most common way to interact with Futures -- via executors.
 */
public class FuturesWithExecutors {

    public static void main(String... args) throws Exception {
       doExecutorService();
       doFutures();
    }

    /** This is just a utility method to sleep without a checked exception.
     *
     * @param ms Number of milliseconds to sleep.
     */
    private static void sleepWithoutException(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }

    /** A look at the ExecutorService class and how it is used...
     *
     * @throws InterruptedException
     */
    private static void doExecutorService() throws InterruptedException {
        // An ExecutorService is a service that does work off the thread you call it from. They come in many forms, but
        // generally they have a pool of threads that pull units of work off a queue, execute them, then pull the next
        // one.
        //
        // There is also the the Executors class, which has some utility methods for quickly creating executor services.
        //
        // Let's start with the dead simple example...

        ExecutorService executorService = Executors.newSingleThreadExecutor();

        // This created an executor service with a single thread to do work. So if we do...

        System.out.println("Submitting Job 1 from " + Thread.currentThread().getName());
        executorService.submit(() -> {
            System.out.println("Starting Job 1 on " + Thread.currentThread().getName());
            sleepWithoutException(2000);
            System.out.println("Finishing Job 1");
        });
        System.out.println("Submitting Job 2 from " + Thread.currentThread().getName());
        executorService.submit(() -> {
            System.out.println("Starting Job 2 on " + Thread.currentThread().getName());
            sleepWithoutException(2000);
            System.out.println("Finishing Job 2");
        });
        System.out.println("Submitted Job 2");


        executorService.shutdown();
        executorService.awaitTermination(1, TimeUnit.MINUTES);

        System.out.println("---------------------------------------------------------------------------------");

        // This will give us the following output :
        //
        // Submitting Job 1 from main          // #1 goes into the queue
        // Submitting Job 2 from main          // #1 is finished submitting
        // Starting Job 1 on pool-1-thread-1   // #1 begins running
        // Submitted Job 2                     // Since #1 is out of the queue and running, submit() for #2 completes
        // Finishing Job 1                     // #1 finishes
        // Starting Job 2 on pool-1-thread-1   // #2 begins running
        // Finishing Job 2                     // #2 finishes
        //
        // Now we have to do shutdown() and awaitTermination() to prevent the JVM from just shutting down on us. The
        // thread in the Executor is a daemon thread, which means it won't prevent the JVM from terminating when "main"
        // is done.
        //
        // shutdown() tells the Executor to stop accepting new work.
        // awaitTermination() waits for a given amount of time, blockingly, until all the jobs that are in the queue
        // have finished.
        //
        // This was perhaps the simplest example possible. Now lets look at perhaps the most complex...

        final AtomicInteger index = new AtomicInteger(0);
        executorService = new ThreadPoolExecutor(
                1,                              // a minimum number of threads.
                5,                              // a maximum number of threads
                2, TimeUnit.SECONDS,            // a time to wait before growing the pool
                new ArrayBlockingQueue<>(10),   // queue of tasks
                r -> {                          // a custom ThreadFactory
                    int thread = index.getAndIncrement();
                    System.out.println("Creating thread "+thread);
                    Thread t = new Thread(r);
                    t.setName("Custom Thread " + thread);
                    t.setDaemon(true);
                    return t;
                },
                new ThreadPoolExecutor.CallerRunsPolicy() // A policy for jobs that are rejected from the queue
                                                          // "CallerRunsPolicy" means that if you can't accept a new
                                                          // task, run it immediately on the calling thread.

        );

        for (int i = 0; i < 40; i++) {
            final int idx = i;
            executorService.submit(
                    () -> {
                        long runtime = 1900 + Math.round(Math.random() * 200D); // sleep for random time.
                        System.out.println("Starting " + idx + " for " + runtime + " ms on "
                                + Thread.currentThread().getName());
                        sleepWithoutException(runtime);
                        System.out.println("Finishing " + idx);
                    }
            );
            sleepWithoutException(100);
        }

        executorService.shutdown();
        executorService.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("---------------------------------------------------------------------------------");

        // This gives us output akin to the following...
        //
        // Creating thread 0                           < create the first thread
        // Starting 0 for 1986 ms on Custom Thread 0   < start task
        // Creating thread 1                           < grow the thread pool.
        // Starting 11 for 1928 ms on Custom Thread 1  < start the next task. Notice this ISN'T #1, it is the first job
        //                                               that won't fit in the queue
        // Creating thread 2                           < grow again.
        // Starting 12 for 2010 ms on Custom Thread 2
        // Creating thread 3
        // Starting 13 for 2001 ms on Custom Thread 3
        // Creating thread 4                           < Max thread pool size
        // Starting 14 for 2074 ms on Custom Thread 4
        // Starting 15 for 1999 ms on main             < Since we are now at max threads, and the queue is full,
        //                                               job 15 executes on the "main" thread inline with our call
        //                                               to submit it.
        // Finishing 0
        // Starting 1 for 2042 ms on Custom Thread 0   < We just now pull the second job off the queue
        // Finishing 11
        // Starting 2 for 1993 ms on Custom Thread 1

        // As you can see, jobs are not necessarily executed in a FIFO manner, especially if you have a variable sized
        // thread pool.
    }

    /** Using Futures with Executors
     *
     */
    private static void doFutures() throws InterruptedException {
        // In the previous example, we looked entirely at submitting "Runnables" to our ExecutorService. But sometimes,
        // you want to get a result back from a task running off thread. Let's look at that.
        System.out.println("doFutures() ---------------------------------------------------------------------");
        ExecutorService executorService = Executors.newFixedThreadPool(2);

        List<Double> doubles = Arrays.asList( 0D, 1D, 2D, 3D, 4D, 5D);

        List<Future<String>> futures = doubles.stream()
                .map(d-> executorService.submit(()->{
                        long runtime =  Math.round(Math.random() * 2000D);
                        System.out.println("Running on "+Thread.currentThread().getName()+ " for "+runtime);
                        sleepWithoutException(runtime);
                        return Double.toString(d * Math.PI) +" from "+Thread.currentThread().getName();
                    })
                ).collect(Collectors.toList());
        futures.forEach(future -> {
            try {
                System.out.println(future.get());
            } catch (ExecutionException |InterruptedException e) {
                e.printStackTrace();
            }
        });

        executorService.shutdown();
        executorService.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("---------------------------------------------------------------------------------");

        // This gives us the output:
        //
        // Running on pool-1-thread-2 for 695
        // Running on pool-1-thread-1 for 173
        // 0.0 from pool-1-thread-1
        // Running on pool-1-thread-1 for 1135
        // 3.141592653589793 from pool-1-thread-2
        // Running on pool-1-thread-2 for 915
        // 6.283185307179586 from pool-1-thread-1
        // Running on pool-1-thread-1 for 316
        // 9.42477796076938 from pool-1-thread-2
        // Running on pool-1-thread-2 for 409
        // 12.566370614359172 from pool-1-thread-1
        // 15.707963267948966 from pool-1-thread-2

        // You can see that the output is in order from our list of Doubles, but it is also "as fast as possible"
        // with two threads. Why? Because even though our execution times vary wildly, we iterate over the
        // mapped Futures in order. A job further down the queue finishing before an earlier one doesn't stop the
        // ExecutorService from continuing to run. The value of each Callable is contained in its Future, so if a
        // later one finishes before the one we are waiting on, it is just a long-pole problem.

        // In all of the examples so far, we have created an ExecutorService to control the threads, queue size, or
        // whatever. Java does have a default one we can use that has some reasonable defaults:
        // the ForkJoinPool.

        // The ForkJoinPool is used when you use language-level parallelism. For example:

        DoubleStream.of(1D, 2D, 3D, 4D, 5D).parallel()
                .forEach(d-> {
                    sleepWithoutException(100);
                    System.out.println( d + " from "+Thread.currentThread().getName());
                });
        System.out.println("---------------------------------------------------------------------------------");

        // This gives us something like:
        //
        // 5.0 from ForkJoinPool.commonPool-worker-2
        // 2.0 from ForkJoinPool.commonPool-worker-1
        // 4.0 from ForkJoinPool.commonPool-worker-4
        // 3.0 from main
        // 1.0 from ForkJoinPool.commonPool-worker-3

        // 3.0 on Main? Why? Who knows. This is Java making a guess about the pool size based on the number of cores on
        // my machine (8) and whatever other heuristic it uses.

        // The important thing here is you can get at this "generic" executor service the same way .parallel() does...

        ForkJoinPool forkJoin = ForkJoinPool.commonPool();
        List<Future<Double>> futureDoubles = new ArrayList<>(20);
        for(double d = 0; d < 20D; d++) {
            double finalD = d;
            futureDoubles.add(forkJoin.submit(() -> {
                sleepWithoutException(2000);
                System.out.println("Computing on " + Thread.currentThread().getName());
                return finalD * Math.PI;
            }));
        }
        futureDoubles.forEach(f-> {
            try {
                System.out.println(f.get());
            } catch (ExecutionException|InterruptedException e) {
                e.printStackTrace();
            }
        });

        System.out.println("---------------------------------------------------------------------------------");

        // This gives us something like:

        // Computing on ForkJoinPool.commonPool-worker-5
        // Computing on ForkJoinPool.commonPool-worker-6
        // 0.0
        // Computing on ForkJoinPool.commonPool-worker-4
        // Computing on ForkJoinPool.commonPool-worker-2
        // Computing on ForkJoinPool.commonPool-worker-3
        // Computing on ForkJoinPool.commonPool-worker-1
        // Computing on ForkJoinPool.commonPool-worker-7
        // 3.141592653589793
        // 6.283185307179586
        // 9.42477796076938

    }
}

Wednesday, August 29, 2018

"Unit Testing" and Third Party Software

It is legit not my intention to make this a "testing" blog, but as I started this, I found myself for the first time in a testing role, so this is stuff at the top of mind. One thing I want to talk about, though, is "What is 'Unit Testing'?"

So one of the prime directives from the "Unit Testing" world is "Don't test software that isn't yours". This is a fine idea, but there are traps around it into which you don't want to fall. One specifically I want to discuss here:

Your configuration information IS YOUR SOFTWARE.

Let's pick an easy example: Hibernate. If you are building Java software, some form of JPA, and probably Hibernate is in your stack.

So let's talk about queries. Maybe you are using something with a dynamic proxy system. Maybe you are using compiled queries directly with your EntityManager. Doesn't matter. Your ANNOTATIONS are code, and should be tested.

Do you have to test the dynamic proxy generation? No. But if you have a DAO that looks like:

@Query("SELECT o FROM Foo o WHERE o.value LIKE %:bar%")
List<Foo> fooValuesWithBar(@Param("bar") String bar);

Should you be writing unit tests around whether the dynamic proxy correctly interprets your query? No. "Noy my yob mah." But making sure that all the configuration information in the annotation you wrote is correct is your job. If you are not writing a unit test that covers the annotation as code, you don't really have coverage.

The long and the short of this, is if you are using a tool to dynamically generate DAOs and your unit tests aren't going all the way to the database, you aren't actually covering your code, because the metadata about how the DAO framework will construct your queries is your code. Again, let's consider the unit test you should have around fooValuesWithBar()...  If you insert a stray character into the @Query, then your unit test should fail. The fact that you aren't writing the actual implementation of fooValuesWithBar() on this interface doesn't matter. You have defined the functionality with the annotation, so you need a test around it. The annotation is code.

So let's not just bitch about it, let's solve some problems.

So if you are using Hibernate/TopLink/EclipseLink, your code should be database portable. Is it? Do you care if it is not? At my current gig we are using Flyway as part of Spring Boot to do database migrations, but that involves writing SQL files. As soon as you get into writing SQL files, you have given up on DB portability. That said, the other option is trusting the JPA provider to update your schema. As much as people SAY that is a thing that can happen, I personally don't trust it.

That said, for the purposes of unit tests, there is no reason you can't rely on your JPA provider to create a schema it thinks is reasonable. That is, you can write DAO/Entity tests against in-memory Hypersonic/Derby/JavaDB and feel those are good tests. Database migration with a tool like Flyway can still be a thing, but you can pass that off to an "integration test" without feeling like you have lost something...  mostly.
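To make this concrete, here is a minimal sketch of what such a DAO test might look like. It assumes Spring Boot's @DataJpaTest, which wires the repository against an embedded database if one (H2, Derby, HSQLDB) is on the test classpath; FooRepository and Foo are the hypothetical types from the query example above.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;

// Hypothetical test for the fooValuesWithBar() query from earlier.
@DataJpaTest
class FooRepositoryTest {

    @Autowired
    private FooRepository repository;

    @Test
    void fooValuesWithBarMatchesSubstring() {
        // Persist through the real DAO against the embedded database.
        repository.save(new Foo("has bar inside"));
        repository.save(new Foo("no match here"));

        List<Foo> found = repository.fooValuesWithBar("bar");

        // If the @Query annotation is malformed, this fails at context
        // startup or at query execution -- exactly the coverage we want.
        assertEquals(1, found.size());
    }
}
```

Because the annotation is exercised all the way through the JPA provider, a stray character in the @Query string breaks this test, which is the point.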

So let's go with some rules:


  1. Don't mock a DAO unless you REALLY know what you are doing. Mocking things that are loaded with configuration is, IMHO, fraught. Does your test provide value? Well, that assumes the things you are mocking comply with production systems. Mocking an external service for which you have a contract test is OK. Mocking a DAO/Service/Other Dynamic Proxy where you aren't sure your annotations are correct? Not so much.
  2. Use your DAO to actually persist and read data in your tests, rather than relying on verify calls. Something in a database is real. verify(mock).save(any(Foo.class)) is a crutch. Create a transient database if you need to.
  3. This doesn't just apply to DAOs. Anything with Annotation-specified behavior should be unit tested. This means custom XML/JSON (de)serialization rules, too.

Quick Tip: Images in React-Native on Android Not Loading

So something I ran into recently that I never found any good tips around.

We had a problem with static asset images not painting on Android. It appears that if there is a state-triggered repaint while the image is being spun up from a drawable, it never fully paints the image on the screen.

Typically people (read: the react-native docs) tell you to do your images something like:

<Image source={require('./my-icon.png')} />

That mostly works, but the asset is resolved lazily at the point of the require() call, and it seems like when something goes weird in the paint lifecycle, things can go bad. There is lots of discussion out there about using Image.resolveAssetSource() from the Image library, but that causes much weirdness between the debug and release variants of your app. You can also pre-load assets with Promise.all([]) from UNSAFE_componentWillMount. But there is an easier way!

import myIcon from './my-icon.png';

Why is this better than const myIcon = require('./my-icon.png')? Under the hood, import still does the same thing require() does. The difference is that import declarations are hoisted and resolved before the script body is evaluated at all. This means your image assets are guaranteed to be resolved before the component code runs. Lemon squeezy.


Thursday, May 31, 2018

Mobile BDD with Appium and Cucumber: Capturing Testing Data (Part 3)

Of a series: Part 1, Part 2.

The code for this exercise is available on the WITH_GIF branch.

One of the problems with doing automated UI testing in a CI environment is understanding failures. Today we are going to look at extending our Cucumber drivers to help with that. We are going to make a recording of what we are doing on the client side, and capture the log information from the client when there is a failure.

Cucumber for Java, like JUnit or TestNG or whatever else you might use for testing, has @Before and @After annotations that you can use to set up state for a test. The thing is, the "test" here is going to be a Scenario in your Cucumber tests. We are going to start, though, with a before and after Step bit of code, so we need to build that ourselves.  Revisiting our BaseSteps class...


private void beforeStep() {
    
}

private void afterStep() {
    
}


private void doStep(ThrowRunnable runnable) throws Exception {
    beforeStep();
    try {
        runnable.run();
    } finally {
        afterStep();
    }
}

private interface ThrowRunnable {
    void run() throws Exception;
}

Here we have created a method we can use to wrap a step with generic beforeStep() and afterStep() methods. We will need to get these invoked, but with Java 8+ lambdas, this is easy. We simply wrap the body of each of our step methods in a call to doStep().

@Then("the \"(.*)\" is gone")
public void assertMissing(String text) throws Exception {
    doStep(()->strategy.assertMissing(text));
}
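The wrapper pattern can be demonstrated standalone. Here is a minimal, self-contained sketch (the EVENTS list and the demo steps are invented for illustration) showing that afterStep() runs even when a step throws, which is what the try/finally buys us:

```java
import java.util.ArrayList;
import java.util.List;

public class StepWrapperDemo {
    interface ThrowRunnable {
        void run() throws Exception;
    }

    // Records the order in which hooks and steps fire.
    public static final List<String> EVENTS = new ArrayList<>();

    static void doStep(ThrowRunnable runnable) throws Exception {
        EVENTS.add("before");
        try {
            runnable.run();
        } finally {
            // Runs even if the step throws, so cleanup always happens.
            EVENTS.add("after");
        }
    }

    public static void main(String[] args) throws Exception {
        doStep(() -> EVENTS.add("step1"));
        try {
            doStep(() -> { throw new IllegalStateException("boom"); });
        } catch (IllegalStateException expected) {
            EVENTS.add("caught");
        }
        System.out.println(EVENTS);
        // prints [before, step1, after, before, after, caught]
    }
}
```

The failing step still gets its "after" hook before the exception propagates, which is exactly what we need for capturing screenshots of failures.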

Now, let's start by getting a screenshot and logs before and after each step. We will create a Recorder class with some static fields we will use to capture this information.

public class Recorder {
    private static final Logger LOGGER = Logger.getLogger(
                Recorder.class.getCanonicalName()
    );
    private static List<File> IMAGES;
    private static List<LogEntry> LOGS;
    public static void record(File file) {
        IMAGES.add(file);
    }

    public static void log(List<LogEntry> logs){
        LOGS = logs;
    }
}


Now, let's instrument our platform strategies to give us this information. For Android:

@Override
public List<LogEntry> getLogEntries() {
    return getDriver().manage().logs().get("logcat").filter(Level.ALL);
}
@Override
public File getScreenshotAsFile() {
    return getDriver().getScreenshotAs(OutputType.FILE);
}

... and iOS:

@Override
public List<LogEntry> getLogEntries() {
    List<LogEntry> allEntries = new ArrayList<>();
    getDriver().manage().logs().getAvailableLogTypes()
            .stream()
            .filter(Objects::nonNull)
            .flatMap(s -> {
                try {
                    return getDriver().manage().logs().get(s)
                                      .filter(Level.ALL).stream();
                } catch (Exception e) {
                    return Stream.empty();
                }
            })
            .filter(Objects::nonNull)
            .forEach(allEntries::add);
    allEntries.sort((o1, o2) -> Long.compare(o2.getTimestamp(), 
                                             o1.getTimestamp()));
    return allEntries;
}

public File getScreenshotAsFile() {
    return getDriver().getScreenshotAs(OutputType.FILE);
}

Since iOS has a few different log files, we need to merge them all together into a single sorted list. For Android, hey, "logcat" is probably what we want anyway. Each of the drivers will give us a screenshot to a temp file.
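The merge-and-sort logic can be sketched independently of Appium. In this self-contained example, Entry is a stand-in for Selenium's LogEntry (just a timestamp and a message, both invented for illustration), and the sort is newest-first to match the iOS strategy above:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class LogMergeDemo {
    // Stand-in for Selenium's LogEntry: a timestamp and a message.
    public static class Entry {
        public final long timestamp;
        public final String message;
        public Entry(long timestamp, String message) {
            this.timestamp = timestamp;
            this.message = message;
        }
    }

    // Flatten several per-source log lists into a single list,
    // sorted newest entry first.
    public static List<Entry> merge(List<List<Entry>> sources) {
        return sources.stream()
                .flatMap(List::stream)
                .sorted(Comparator.comparingLong((Entry e) -> e.timestamp).reversed())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Entry> syslog = List.of(new Entry(100, "launch"), new Entry(300, "tap"));
        List<Entry> crashlog = List.of(new Entry(200, "warn"));
        for (Entry e : merge(List.of(syslog, crashlog))) {
            System.out.println(e.timestamp + " " + e.message);
        }
        // prints 300 tap, then 200 warn, then 100 launch
    }
}
```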

Now, let's revisit the beforeStep() and afterStep() we created earlier, and capture all this information.

private void beforeStep() {
    Recorder.record(strategy.getScreenshotAsFile());
}

@SuppressWarnings("unchecked")
private void afterStep() {
    Recorder.log(strategy.getLogEntries());
    Recorder.record(strategy.getScreenshotAsFile());
}

So we get a screenshot before and after each step, and record the logs after each step.

Now let's bring it all together and persist our information for failing Scenarios. We can do this by adding the @Before and @After hook annotations to our recorder class. This will create a new instance of the class, but we can still refer to the static variables.

@Before
public void initialize() {
    IMAGES = new ArrayList<>();
    LOGS = new ArrayList<>();
}

@After
public void finalize(Scenario scenario) throws IOException {
    if (scenario.isFailed()) {
        File outDir = new File("build/cucumber-images");
        outDir.mkdirs();
        if (IMAGES.isEmpty()) {
            return;
        }
        BufferedImage first = ImageIO.read(IMAGES.iterator().next());
        File destination = new File(outDir,
                scenario.getName().replaceAll("[^\\w]", "_") + ".gif");
        try (ImageOutputStream outputStream = new FileImageOutputStream(destination);
             AnimatedGIFEncoder encoder = new AnimatedGIFEncoder(
                     outputStream, first.getType(), 750, true)) {
            IMAGES.stream()
                    .map(f -> {
                        try {
                            return ImageIO.read(f);
                        } catch (Exception e) {
                            throw new RuntimeException(e);
                        }
                    })
                    .forEach(i -> {
                        try {
                            encoder.writeToSequence(i);
                        } catch (IOException e) {
                            throw new RuntimeException(e);
                        }
                    });
        }
        LOGGER.info("Wrote scenario animation to " + 
                    destination.getAbsolutePath());
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ByteStreams.copy(new FileInputStream(destination), baos);
        scenario.embed(baos.toByteArray(), "image/gif");
        scenario.embed(logFile(), "text/plain");
    }
}

private byte[] logFile(){
    StringBuilder sb = new StringBuilder();
    LOGS.stream()
            .map(e-> new Date(e.getTimestamp()) + "," + 
                e.getLevel().getName() + ", " + e.getMessage()
            )
            .forEach(line-> sb.append(line).append("\n"));
    return sb.toString().getBytes(Charsets.UTF_8);
}


So in our @Before we initialize the static members. Then in the @After we finalize everything. If there are no images, we can bounce. If there are, we will create an AnimatedGIFEncoder and add all the images to it. I'm not going to get into the image processing here, but you should pay attention to the last two calls in the finalize() method: because the Cucumber Scenario object is passed into the method, we can embed other data in the results by MIME type.

Now if we want to see the data we collect, we can add a reporting plugin to our build.gradle file:

buildscript {
    repositories {
        maven {
            url "http://repo.bodar.com"
        }
        maven {
            url "https://plugins.gradle.org/m2/"
        }
    }
    dependencies {
        classpath "com.github.samueltbrown:gradle-cucumber-plugin:0.9"
        classpath "gradle.plugin.com.github.spacialcircumstances:" +
              "gradle-cucumber-reporting:0.0.11"
    }
}

plugins {
    id 'java'
    id "com.github.samueltbrown.cucumber" version "0.9"
    id 'idea'
    id "com.github.spacialcircumstances.gradle-cucumber-reporting" version "0.0.11"
}

cucumberReports {
    outputDir = file("$project.buildDir/reports")
    buildName = '0'
    reports = files("$project.buildDir/cucumber.json")
}
// stuff here


tasks.cucumber.finalizedBy generateCucumberReports

Now when our gradle cucumber task runs, we will get a report telling us what failed, like so:



(Screenshot of the generated Cucumber report; this image is not animated.)