FIFO Queues in Go

November 3, 2021

Go doesn’t ship with a queue. Here’s how to build one that fits your use case.

What FIFO means

FIFO stands for First In, First Out. The first item enqueued is the first item dequeued. Order is preserved — no item jumps the line.

Go has no built-in queue type. Depending on your requirements, you have a few options.

The slice approach

// Queue is a minimal FIFO backed by a slice.
type Queue struct {
    items []interface{}
}

// Enqueue adds an item to the back of the queue.
func (q *Queue) Enqueue(item interface{}) {
    q.items = append(q.items, item)
}

// Dequeue removes and returns the item at the front.
// The second return value is false if the queue is empty.
func (q *Queue) Dequeue() (interface{}, bool) {
    if len(q.items) == 0 {
        return nil, false
    }
    item := q.items[0]
    q.items = q.items[1:]
    return item, true
}

// Len reports the number of items currently queued.
func (q *Queue) Len() int {
    return len(q.items)
}

This works for single-goroutine use, but there’s a subtle memory issue: q.items[1:] re-slices the same underlying array, so the slot for the dequeued element still holds a reference and the garbage collector can’t reclaim whatever it points to. Over time, a long-running queue retains memory it no longer needs.

The fix is to nil out the slot before re-slicing:

func (q *Queue) Dequeue() (interface{}, bool) {
    if len(q.items) == 0 {
        return nil, false
    }
    item := q.items[0]
    q.items[0] = nil // clear the reference so GC can reclaim it
    q.items = q.items[1:]
    return item, true
}

For most workloads the simple version is fine. For queues that process large volumes of data continuously, clearing the slot matters.
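
Here’s the slice queue in use, as a minimal sketch (fmt import assumed):

q := &Queue{}
q.Enqueue("a")
q.Enqueue("b")

if item, ok := q.Dequeue(); ok {
    fmt.Println(item) // "a": first in, first out
}
fmt.Println(q.Len()) // 1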

Using container/list

The standard library’s container/list package provides a doubly linked list that makes a clean queue implementation — no re-slicing, no memory leak:

import "container/list"

type Queue struct {
    l *list.List
}

func NewQueue() *Queue {
    return &Queue{l: list.New()}
}

func (q *Queue) Enqueue(item interface{}) {
    q.l.PushBack(item)
}

func (q *Queue) Dequeue() (interface{}, bool) {
    front := q.l.Front()
    if front == nil {
        return nil, false
    }
    q.l.Remove(front)
    return front.Value, true
}

func (q *Queue) Len() int {
    return q.l.Len()
}

Each element is an independent node — removing the front doesn’t leave a dangling reference in an underlying array. For long-lived queues with high throughput, this is the safer choice over a raw slice.

The tradeoff is memory overhead: each node in container/list allocates a list.Element struct with prev/next pointers. For small queues or short-lived use, the slice approach is simpler and faster.
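
Usage has the same shape as the slice version, so swapping one implementation for the other is painless. A quick sketch:

q := NewQueue()
q.Enqueue(1)
q.Enqueue(2)

if item, ok := q.Dequeue(); ok {
    fmt.Println(item) // 1
}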

Thread-safe queue

If multiple goroutines are reading and writing, protect access with a mutex:

import "sync"

// SafeQueue is a slice-backed FIFO guarded by a mutex.
type SafeQueue struct {
    mu    sync.Mutex
    items []interface{}
}

func (q *SafeQueue) Enqueue(item interface{}) {
    q.mu.Lock()
    defer q.mu.Unlock()
    q.items = append(q.items, item)
}

func (q *SafeQueue) Dequeue() (interface{}, bool) {
    q.mu.Lock()
    defer q.mu.Unlock()
    if len(q.items) == 0 {
        return nil, false
    }
    item := q.items[0]
    q.items[0] = nil // clear the reference, as before
    q.items = q.items[1:]
    return item, true
}

The caller is responsible for type-asserting the returned value back to its concrete type:

q := &SafeQueue{}
q.Enqueue("hello")

if item, ok := q.Dequeue(); ok {
    fmt.Println(item.(string))
}
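
To exercise the mutex, here’s a sketch with several goroutines enqueuing concurrently (sync.WaitGroup coordinates shutdown; the counts are illustrative):

q := &SafeQueue{}
var wg sync.WaitGroup

for i := 0; i < 4; i++ {
    wg.Add(1)
    go func(n int) {
        defer wg.Done()
        q.Enqueue(n) // safe: Enqueue locks internally
    }(i)
}
wg.Wait()

fmt.Println(q.Len()) // 4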

Channel-based queue

For producer-consumer patterns, a buffered channel is often more idiomatic:

queue := make(chan string, 100)

// producer
queue <- "job-1"
queue <- "job-2"

// consumer
job := <-queue
fmt.Println(job) // job-1

The channel handles synchronization. The buffer size sets the maximum capacity — sends block when full, receives block when empty.
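
One common consumer shape, sketched here with the producer owning the close, is to range over the channel; the loop exits once the channel is closed and drained:

queue := make(chan string, 100)

go func() {
    defer close(queue) // only the producer closes, after its last send
    for i := 1; i <= 3; i++ {
        queue <- fmt.Sprintf("job-%d", i)
    }
}()

for job := range queue { // ends when queue is closed and drained
    fmt.Println(job)
}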

When you can’t bound the capacity, you need an unbounded queue. One approach is a goroutine that bridges an input and output channel with an internal buffer:

func NewUnboundedQueue() (chan<- interface{}, <-chan interface{}) {
    in := make(chan interface{})
    out := make(chan interface{})

    go func() {
        defer close(out)
        var buf []interface{} // internal buffer, grows as needed
        for {
            // Nothing buffered: block until the producer sends or closes in.
            if len(buf) == 0 {
                item, ok := <-in
                if !ok {
                    return
                }
                buf = append(buf, item)
            }
            // Something buffered: hand the head to a consumer,
            // or accept another item from the producer.
            select {
            case out <- buf[0]:
                buf[0] = nil // release the reference, as in the slice queue
                buf = buf[1:]
            case item, ok := <-in:
                if !ok {
                    // Producer closed in: flush what's left, then close out.
                    for _, v := range buf {
                        out <- v
                    }
                    return
                }
                buf = append(buf, item)
            }
        }
    }()

    return in, out
}

Close the in channel when the producer is done. The goroutine will flush remaining items to out and close it.
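
A usage sketch, assuming the NewUnboundedQueue above:

in, out := NewUnboundedQueue()

go func() {
    defer close(in) // signals the bridge to flush and close out
    for i := 0; i < 1000; i++ {
        in <- i // no fixed capacity, so this never blocks on a full buffer
    }
}()

for v := range out {
    fmt.Println(v)
}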

When to use each

Slice queue — single-goroutine use, simple task ordering, or when you’re managing access externally.

SafeQueue — shared queue accessed concurrently, where you want explicit control over synchronization.

Buffered channel — producer-consumer pipelines where a fixed capacity is acceptable and you want Go’s built-in scheduler doing the work.

Unbounded channel queue — pipelines where you can’t bound the buffer, need backpressure-free enqueuing, and still require ordered delivery.

Gotchas

Don’t use a slice as a stack and call it a FIFO. A stack is LIFO; a queue dequeues from the front (items[0]), never pops from the back. It’s an easy mistake to make with slices, since append and items[len(items)-1] are the natural operations.
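
The two disciplines side by side, sketched on a plain slice:

items := []int{1, 2, 3}

// FIFO: dequeue from the front.
front := items[0] // 1
items = items[1:]

// LIFO: pop from the back; this is a stack, not a queue.
top := items[len(items)-1] // 3
items = items[:len(items)-1]

fmt.Println(front, top, items) // 1 3 [2]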

Sending on a closed channel panics, and so does closing a channel twice. If you’re using a channel-based queue, make sure only the producer closes the channel, and only after it’s done sending.

An unbounded queue can grow without limit. If your consumer is slower than your producer, the internal buffer will grow indefinitely. Monitor queue depth in production.
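
For a buffered channel, the built-ins len and cap report current depth and capacity, so a monitoring goroutine is only a few lines (a sketch; time and log imports assumed, and the threshold and interval are arbitrary):

queue := make(chan string, 100)

go func() {
    for range time.Tick(10 * time.Second) {
        depth := len(queue) // items currently buffered
        if depth > cap(queue)*8/10 {
            log.Printf("queue depth high: %d/%d", depth, cap(queue))
        }
    }
}()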