I was recently invited to give a talk at Gocon, a fantastic Go conference held semi-annually in Tokyo, Japan. Gocon 2014 was an entirely community-run one day event combining training and an afternoon of presentations surrounding the theme of Go in production.
The following is the text of my presentation. The original text was structured to force me to speak slowly and clearly, so I have taken the liberty of editing it slightly to be more readable.
My name is David.
I am delighted to be here at Gocon today. I have wanted to come to this conference for two years and I am very grateful to the organisers for extending me the opportunity to present to you today.
I want to begin my talk with a question.
Why are people choosing to use Go?
When people talk about their decision to learn Go, or use it in their product, they have a variety of answers, but three reasons always appear at the top of their list. These are the top three:
The first, Concurrency: Go’s concurrency primitives are attractive to programmers who come from single-threaded scripting languages like Node.js, Ruby, or Python, or from languages like C++ or Java with their heavyweight threading models.
Ease of deployment: We have heard today from experienced Gophers who appreciate the simplicity of deploying Go applications.
This leaves Performance. I believe an important reason why people choose to use Go is because it is fast.
For my talk today I want to discuss five features that contribute to Go’s performance.
I will also share with you the details of how Go implements these features.
Treatment and storage of values
The first feature I want to talk about is Go’s efficient treatment and storage of values.
This is an example of a value in Go. When compiled, gocon consumes exactly four bytes of memory.
Let’s compare Go with some other languages.
Due to the overhead of the way Python represents variables, storing the same value using Python consumes six times more memory. This extra memory is used by Python to track type information, do reference counting, etc.
Let’s look at another example:
Similar to Go, the Java int type consumes 4 bytes of memory to store this value. However, to use this value in a collection like a List or Map, the compiler must convert it into an Integer object.
So an integer in Java frequently looks more like this and consumes between 16 and 24 bytes of memory.
Why is this important? Memory is cheap and plentiful, why should this overhead matter?
This is a graph showing CPU clock speed vs memory bus speed. Notice how the gap between CPU clock speed and memory bus speed continues to widen.
The difference between the two is effectively how much time the CPU spends waiting for memory. Since the late 1960s CPU designers have understood this problem. Their solution is a cache, an area of smaller, faster memory which is inserted between the CPU and main memory.
This is a Location type which holds the location of some object in three dimensional space. It is written in Go, so each Location consumes exactly 24 bytes of storage.
We can use this type to construct an array type of 1,000 locations, which consumes exactly 24,000 bytes of memory. Inside the array, the Location structures are stored sequentially, rather than as pointers to 1,000 Location structures stored randomly. This is important because now all 1,000 Location structures are in the cache in sequence, packed tightly together.
Go lets you create compact data structures, avoiding unnecessary indirection. Compact data structures utilise the cache better. Better cache utilisation leads to better performance.