
Fixing race conditions and deadlocks in Go

In the past month we worked to scale up botbot. Every few days the bot would get stuck in a deadlock that caused an outage of the logging service. I had some time in front of me that I decided to dedicate to improving the situation, and I learned a few things along the way that I would like to share with you.

Bumpy road toward a better code base

The issues described in this post will likely remain unnoticed in your application for a long period of time. One day, however, you will reach a critical scale where they start to become visible. For us it was the sheer number of networks, channels, and messages we are receiving and logging.

As I learned more about Go and the subtleties of the code base, I started to notice that it was suffering from many race conditions. We wrote most of the code base in the early days of Go 1.0, predating the introduction of the race detector. Upgrading to the latest Go version let me take advantage of this tool to fix them one at a time. There is a bit more about the race detector in the next section.

Part of fixing the races involved using sync.RWMutex, a reader/writer mutual exclusion lock. No rocket science there; it is pretty straightforward. You embed an RWMutex in your struct and then you can call:

  • theStruct.Lock()
  • theStruct.Unlock()
  • theStruct.RLock()
  • theStruct.RUnlock()

Here is an example of usage:

Even though there is nothing hard there, it is easy to end up trading race conditions for deadlocks. This is especially true when you have several independent goroutines sharing mutable state. Fighting these deadlock situations has been the hardest part of the last month. It took a long time to understand the problem and find the patterns leading to these situations, because I had overlooked another fantastic tool in the Go standard library: net/http/pprof. It serves runtime profiling data via an HTTP server in the format expected by the pprof visualization tool.

We still have some road in front of us to make botbot a race-free, deadlock-free, panic-free program, but we have already learned some important lessons along the way.


Race detector

The race detector, added in Go 1.1, is simple to use. You just need to pass the -race flag to your build process to detect race conditions. They will appear in your log as:

Read by goroutine 12:*ircBot).String()
      <autogenerated>:12 +0xc6
      /home/yml/go/src/pkg/fmt/print.go:699 +0x694
      /home/yml/go/src/pkg/fmt/print.go:790 +0x5c4
      /home/yml/go/src/pkg/fmt/print.go:1194 +0x33f
      /home/yml/go/src/pkg/fmt/print.go:254 +0x7f*loggingT).println()
      /srv/virtualenvs/botbotenv/src/ +0x9c
      /srv/virtualenvs/botbotenv/src/ +0x5d*ircBot).readSocket()
      /srv/virtualenvs/botbotenv/src/ +0x4b8

Previous write by goroutine 13:*ircBot).act()
      /srv/virtualenvs/botbotenv/src/ +0x2da*ircBot).ListenAndSend()
      /srv/virtualenvs/botbotenv/src/ +0x3b2

Goroutine 12 (running) created at:*ircBot).Connect()
      /srv/virtualenvs/botbotenv/src/ +0x132*ircBot).Init()
      /srv/virtualenvs/botbotenv/src/ +0x1f5
      /srv/virtualenvs/botbotenv/src/ +0xa0d*NetworkManager).Connect()
      /srv/virtualenvs/botbotenv/src/ +0x161*NetworkManager).RefreshChatbots()
      /srv/virtualenvs/botbotenv/src/ +0x590
      /srv/virtualenvs/botbotenv/src/ +0x17c
      /srv/virtualenvs/botbotenv/src/ +0x19e

Goroutine 13 (running) created at:*ircBot).Init()
      /srv/virtualenvs/botbotenv/src/ +0x219
      /srv/virtualenvs/botbotenv/src/ +0xa0d*NetworkManager).Connect()
      /srv/virtualenvs/botbotenv/src/ +0x161*NetworkManager).RefreshChatbots()
      /srv/virtualenvs/botbotenv/src/ +0x590
      /srv/virtualenvs/botbotenv/src/ +0x17c
      /srv/virtualenvs/botbotenv/src/ +0x19e

As you can see, you get lots of detail to help you fix the issue. Most of the time races are easy to fix, but hard to discover by human inspection.

Package pprof

Instead of paraphrasing the doc, let me quote it:

Package pprof serves via its HTTP server runtime profiling data in the format expected by the pprof visualization tool. For more information about pprof, see The package is typically only imported for the side effect of registering its HTTP handlers. The handled paths all begin with /debug/pprof/. To use pprof, import this package into your program.

This tool has been invaluable for troubleshooting the deadlocks that started to appear every few days. The one key feature we have been using is the full goroutine stack dump. It gives you an on-demand stack trace with the current status of each goroutine. It is a pretty lengthy piece of text, but these are the things you want to look for:

  • semacquire
  • multiple goroutines blocked on a chan send for the same amount of time (3 minutes, for example).

Below you will find a truncated version of the full goroutine stack dump that we are going to use to analyze the issue.

In the first example we can see that the pointer to ircBot (0xc2080761c0) has two goroutines (27, 26) blocked on a chan send operation. This exhibits a fault in our design: the goroutine running readSocket can be blocked by ListenAndSend.

goroutine 105 [running]:

[ ... ]

goroutine 27 [chan send, 1 minutes]:*ircBot).act(0xc2080761c0, 0xc20843c0a0)
	/srv/virtualenvs/botbotenv/src/ +0x8df*ircBot).ListenAndSend(0xc2080761c0, 0xc208004a80)
	/srv/virtualenvs/botbotenv/src/ +0x3b3
created by*ircBot).Init
	/srv/virtualenvs/botbotenv/src/ +0x219

[ ... ]

goroutine 27 [chan send, 3 minutes]:*ircBot).act(0xc2080761c0, 0xc20843c0a0)
	/srv/virtualenvs/botbotenv/src/ +0x8df*ircBot).ListenAndSend(0xc2080761c0, 0xc208004a80)
	/srv/virtualenvs/botbotenv/src/ +0x3b3
created by*ircBot).Init
	/srv/virtualenvs/botbotenv/src/ +0x219

[ ... ]

goroutine 26 [chan send, 3 minutes]:*ircBot).readSocket(0xc2080761c0, 0xc208004a80)
	/srv/virtualenvs/botbotenv/src/ +0x51d
created by*ircBot).Connect
	/srv/virtualenvs/botbotenv/src/ +0x132

goroutine 28 [select]:*ircBot).monitor(0xc2080761c0, 0xc208004a80)
	/srv/virtualenvs/botbotenv/src/ +0xa6d
created by*ircBot).Init
	/srv/virtualenvs/botbotenv/src/ +0x23e

[ ... ]

The wall of text from the goroutine stack dump is available in this gist. It exhibits a few other issues that you might want to try to figure out.

**Pro tip:** Logging the address of the pointer receiver can be really useful to correlate the logs with the behavior of the application and the stack traces. In addition, it lets you grep the very verbose stack traces for the elements related to the problem you are debugging.

curl | grep -B10 -A 20 "0xc2080761c0"


  • Always update to the latest version of Go to take advantage of the constantly improving toolchain and standard library. One side effect is that your code will get faster and the garbage collector more efficient, all by just rebuilding your binary.
  • Make sure your goroutines are orthogonal or do not share mutable state. If for some reason they have to share mutable state, pass it explicitly instead of relying on a Mutex. It is even better if you can refactor your code to share memory by communicating instead of communicating by sharing memory.
  • Avoid nested select and blocking cases.
  • Make sure that none of the cases inside a select can block for an unknown period of time.
  • Evolving a code base to get rid of these issues takes some time and may require a significant effort. However, Go provides tooling that will help you along the way.

botbot is the guinea pig on which we decided to learn Go. We made some mistakes along the way by implementing non-idiomatic Go solutions. Part of the growing process, scaling to several hundred channels on many networks, required re-architecting the application.

Thank you for your patience and interest if you have read this far!

Yann Malet

About the author

Yann Malet

Yann builds and architects performant digital platforms for publishers. In 2015, Yann co-authored High-Performance Django with Peter Baumgartner. Prior to his involvement with Lincoln Loop, Yann focused on Product Lifecycle Management systems (PLM) for several large …