The following is a simple code snippet:
for {
	for a := range structOfChannels {
		select {
		case msg := <-structOfChannels[a].Chan1:
			_ = msg // Something: process the message
		case msg := <-structOfChannels[a].Chan2:
			_ = msg // Something: process the message
		default:
		}
	}
}
All this is doing is rotating over a series of channels, checking each one for a message and processing it. The default case makes sure each select returns immediately instead of blocking on the first channel it reaches, and the for without a condition means the loop runs until eternity.
I noticed, while watching Activity Monitor (this was Go 1.8 on OS X), that the processor stayed near 100%. The system still seemed responsive, but a processor pegged that high was, to me, annoying.
The solution was simple: make the loop wait a fraction of a second on each iteration.
for {
	tmTimer := time.NewTimer(time.Millisecond * 50)
	<-tmTimer.C // block until the timer fires
	for a := range structOfChannels {
		select {
		case msg := <-structOfChannels[a].Chan1:
			_ = msg // Something: process the message
		case msg := <-structOfChannels[a].Chan2:
			_ = msg // Something: process the message
		default:
		}
	}
}
This just makes the loop wait 50 milliseconds before ranging over the channels again: a pause too short for most humans to perceive, but long enough that processor use dropped to near nothing.
There are a few other approaches that have a similar effect. For example, if you're worried about the overhead of creating and garbage-collecting a NewTimer() on every iteration, you can create a NewTicker() once, outside the for{} scope, and keep reusing it. You can also lower the 50 milliseconds and watch for where processor use starts climbing again, but I'll leave that experimenting and tuning to the reader.
The point is that because the system seemed responsive, it was easy to overlook the cost of a simple for{} loop used to poll messages from goroutines, and that cost could well matter when deploying to servers. Check your performance when testing your work!