Golang DSA Chapter 1 Part 2: Algorithms and How to Think About Performance

In Part 1 we covered what data structures are, how they get classified, and a bunch of structural design patterns. Now we’re getting to the fun stuff: algorithms, how to measure their performance, and a few classic algorithm types.

This is still Chapter 1 of “Learn Data Structures and Algorithms with Golang” by Bhagvan Kommadi. Let’s keep going.

How Do You Represent an Algorithm?

Before you write code, you need a way to think about and describe your algorithm. The book talks about two classic approaches: flow charts and pseudo code.

Flow charts use visual symbols for things like entry/exit points, decisions, inputs/outputs, and tasks. You draw boxes and arrows showing the flow of logic. Honestly, most people don’t draw flow charts by hand anymore, but the concept is still useful when you need to explain logic to someone visually.

Pseudo code is more practical. It’s a high-level description of what your program does, written in plain-ish language. No strict syntax rules. The point is to sketch out your logic before committing to real code.

Here’s the book’s example of pseudo code for finding the max value in an array:

maximum(arr) {
    n <- len(arr)
    max <- arr[0]
    for k <- 1 to n-1 do {
        if arr[k] > max {
            max <- arr[k]
        }
    }
    return max
}

Simple, right? You can spot bugs in pseudo code way faster than in real code. That’s the whole point. Catch design mistakes early, save yourself time later.
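The pseudo code above translates almost line for line into Go. Here's a quick sketch (my own, not from the book's Go sources):

```go
package main

import "fmt"

// maximum mirrors the pseudo code: start with the first element as the
// current max and replace it whenever a bigger element appears.
func maximum(arr []int) int {
	max := arr[0]
	for k := 1; k < len(arr); k++ {
		if arr[k] > max {
			max = arr[k]
		}
	}
	return max
}

func main() {
	fmt.Println(maximum([]int{3, 8, 1, 9, 4})) // prints 9
}
```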

Complexity and Performance Analysis

OK so here’s the big question: how do you know if your algorithm is any good?

You can measure things like CPU time, memory usage, disk, and network. But those numbers change depending on your hardware. If you run the same code on a laptop from 2015 versus a beefy server, you’ll get different results.

That’s why we use complexity analysis. Instead of measuring exact time, we talk about how the algorithm scales as the input grows. Double your input, does the time double? Quadruple? Explode exponentially?

The book gives a simple example. Say you have an array of 10 elements and you loop through it once, doing some math on each element:

package main

import (
    "fmt"
)

func main() {
    var m [10]int
    var k int
    for k = 0; k < 10; k++ {
        m[k] = k + 200
        fmt.Printf("Element[%d] = %d\n", k, m[k])
    }
}

The time to run this is roughly 10 * t, where t is how long one addition and array update takes. The exact value of t depends on your machine, but the important thing is that the work grows proportionally with the array size. That’s what we care about.

Big O Notation

This is where Big O notation comes in. It’s the standard way to describe algorithm complexity.

The notation T(n) = O(n) means “this algorithm’s time grows linearly with input size.” Here’s the quick cheat sheet from the book:

Notation | Name             | Example
O(1)     | Constant time    | Accessing an array element by index
O(log n) | Logarithmic time | Binary search in a tree
O(n)     | Linear time      | Searching through an unsorted list
O(n^2)   | Quadratic time   | Bubble sort, selection sort
O(n^3)   | Cubic time       | Naive matrix multiplication

The book also mentions Big Omega and Big Theta notation. Big Omega (Ω) gives a lower bound on growth, and Big Theta (Θ) gives a tight bound, where the upper and lower bounds match. These are often loosely described as "best case" and "average case," though strictly speaking they're bounds on growth, not cases. But honestly, in most day-to-day conversations, people just use Big O to talk about the worst case.

Big O applies to space too, not just time. An algorithm might be fast but eat up a ton of memory, or vice versa.
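To make the space side concrete, here's a small sketch (not from the book) contrasting O(n) and O(1) extra space for the same task, reversing a list:

```go
package main

import "fmt"

// reverseCopy uses O(n) extra space: it allocates a whole second slice.
func reverseCopy(arr []int) []int {
	out := make([]int, len(arr))
	for i, v := range arr {
		out[len(arr)-1-i] = v
	}
	return out
}

// reverseInPlace uses O(1) extra space: it only swaps within the input.
func reverseInPlace(arr []int) {
	for i, j := 0, len(arr)-1; i < j; i, j = i+1, j-1 {
		arr[i], arr[j] = arr[j], arr[i]
	}
}

func main() {
	fmt.Println(reverseCopy([]int{1, 2, 3})) // [3 2 1]
	nums := []int{1, 2, 3}
	reverseInPlace(nums)
	fmt.Println(nums) // [3 2 1]
}
```

Both run in O(n) time; they differ only in how much memory they need on top of the input.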

Linear Complexity: O(n)

The simplest one. Your algorithm does one pass through the data. Double the input, double the work.

String matching can be done in linear time; algorithms like Boyer-Moore even skip ahead through the text and often do better than that on typical inputs. The book illustrates linear growth with a basic loop:

package main

import (
    "fmt"
)

func main() {
    var m [10]int
    var k int
    for k = 0; k < 10; k++ {
        m[k] = k * 200
        fmt.Printf("Element[%d] = %d\n", k, m[k])
    }
}

One loop, one pass. That’s O(n).

Quadratic Complexity: O(n^2)

Now nest two loops. The classic example is a multiplication table:

package main

import (
    "fmt"
)

func main() {
    var k, l int
    for k = 1; k <= 10; k++ {
        fmt.Println(" Multiplication Table", k)
        for l = 1; l <= 10; l++ {
            var x int = l * k
            fmt.Println(x)
        }
    }
}

The outer loop runs 10 times. For each of those, the inner loop runs 10 times. That’s 10 * 10 = 100 operations. For n elements, it’s n * n = n^2. This is why sorting algorithms like bubble sort are slow on large datasets.
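Bubble sort, mentioned above, is a concrete O(n^2) case: for every element, you potentially sweep past every other element. A minimal sketch (my own, not the book's code):

```go
package main

import "fmt"

// bubbleSort repeatedly sweeps the slice, swapping adjacent out-of-order
// pairs. Two nested passes over n elements give the O(n^2) behavior.
func bubbleSort(arr []int) {
	n := len(arr)
	for i := 0; i < n-1; i++ {
		for j := 0; j < n-1-i; j++ {
			if arr[j] > arr[j+1] {
				arr[j], arr[j+1] = arr[j+1], arr[j]
			}
		}
	}
}

func main() {
	nums := []int{9, 2, 7, 1, 8}
	bubbleSort(nums)
	fmt.Println(nums) // [1 2 7 8 9]
}
```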

Cubic Complexity: O(n^3)

Three nested loops. The book demonstrates this with a 3D array:

package main

import (
    "fmt"
)

func main() {
    var k, l, m int
    var arr [10][10][10]int
    for k = 0; k < 10; k++ {
        for l = 0; l < 10; l++ {
            for m = 0; m < 10; m++ {
                arr[k][l][m] = 1
                fmt.Println("Element value ", k, l, m, " is", arr[k][l][m])
            }
        }
    }
}

That’s 10 * 10 * 10 = 1,000 operations. With n = 100, you’d be looking at a million operations. Cubic algorithms get painful fast. You’ll see this complexity in some matrix operations.
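The naive matrix multiplication from the cheat sheet is the textbook O(n^3) case: for two n x n matrices you loop over n rows, n columns, and n terms per dot product. A small sketch (my own, not the book's):

```go
package main

import "fmt"

// multiply computes the product of two n x n matrices with the naive
// triple loop: n * n * n multiply-adds in total.
func multiply(a, b [][]int) [][]int {
	n := len(a)
	c := make([][]int, n)
	for i := range c {
		c[i] = make([]int, n)
		for j := 0; j < n; j++ {
			for k := 0; k < n; k++ {
				c[i][j] += a[i][k] * b[k][j]
			}
		}
	}
	return c
}

func main() {
	a := [][]int{{1, 2}, {3, 4}}
	b := [][]int{{5, 6}, {7, 8}}
	fmt.Println(multiply(a, b)) // [[19 22] [43 50]]
}
```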

Logarithmic Complexity: O(log n)

This is the good stuff. With logarithmic algorithms, the work grows far more slowly than the input, because the problem gets cut in half at each step.

The classic example is binary search trees. The book shows a simple binary tree implementation in Go:

package main

import (
    "fmt"
)

type Tree struct {
    LeftNode  *Tree
    Value     int
    RightNode *Tree
}

// insert is a standard binary search tree insert: smaller (or equal)
// values go into the left subtree, larger values into the right.
func (tree *Tree) insert(m int) {
    if m <= tree.Value {
        if tree.LeftNode == nil {
            tree.LeftNode = &Tree{nil, m, nil}
        } else {
            tree.LeftNode.insert(m)
        }
    } else {
        if tree.RightNode == nil {
            tree.RightNode = &Tree{nil, m, nil}
        } else {
            tree.RightNode.insert(m)
        }
    }
}

// printTree walks the tree recursively, printing each node's value
// followed by its left and right subtrees.
func printTree(tree *Tree) {
    if tree != nil {
        fmt.Println(" Value", tree.Value)
        fmt.Printf("Tree Node Left")
        printTree(tree.LeftNode)
        fmt.Printf("Tree Node Right")
        printTree(tree.RightNode)
    } else {
        fmt.Printf("Nil\n")
    }
}

func main() {
    tree := &Tree{nil, 1, nil}
    printTree(tree)
    tree.insert(3)
    printTree(tree)
    tree.insert(5)
    printTree(tree)
    tree.insert(7)
    printTree(tree)
}

Each time you insert into a balanced binary tree, you only need to traverse one path from root to leaf. That path is roughly log2(n) long. So inserting into a tree with a million nodes only takes about 20 steps. Compare that to a linear search through the same million elements. Big difference.
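The same halving idea works on a plain sorted slice, which is where most people first meet O(log n). A binary search sketch (not from the book):

```go
package main

import "fmt"

// binarySearch halves the search range on every step, so it takes at
// most about log2(n) comparisons to find (or rule out) a target.
func binarySearch(arr []int, target int) int {
	lo, hi := 0, len(arr)-1
	for lo <= hi {
		mid := (lo + hi) / 2
		switch {
		case arr[mid] == target:
			return mid
		case arr[mid] < target:
			lo = mid + 1 // target is in the upper half
		default:
			hi = mid - 1 // target is in the lower half
		}
	}
	return -1 // not found
}

func main() {
	sorted := []int{1, 3, 5, 7, 9, 11}
	fmt.Println(binarySearch(sorted, 7)) // 3
	fmt.Println(binarySearch(sorted, 4)) // -1
}
```

Note this only works because the slice is already sorted; on unsorted data you're back to a linear scan.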

Brute Force Algorithms

Now let’s talk about actual algorithm strategies. The simplest one is brute force: just try everything.

Brute force algorithms are straightforward. You check every possible option until you find the answer. They’re easy to understand and implement, but they’re slow. The classic example is a linear search, going through every element one by one:

package main

import (
    "fmt"
)

func findElement(arr [10]int, k int) bool {
    var i int
    for i = 0; i < 10; i++ {
        if arr[i] == k {
            return true
        }
    }
    return false
}

func main() {
    var arr = [10]int{1, 4, 7, 8, 3, 9, 2, 4, 1, 8}
    var check bool = findElement(arr, 10)
    fmt.Println(check)  // false
    var check2 bool = findElement(arr, 9)
    fmt.Println(check2) // true
}

This works. It’s correct. But if your array has a billion elements and the thing you’re looking for is at the end (or not there at all), you’re going to be waiting a while. Brute force is O(n) for search, and for problems like string matching or matrix multiplication, the complexity gets worse.
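The string matching case mentioned above works the same way: the brute-force version just slides the pattern across every position in the text. A quick sketch (my own, not the book's code):

```go
package main

import "fmt"

// naiveSearch checks every starting position of text for pattern.
// Each position costs up to len(pattern) comparisons, so the worst
// case is O(n*m) for a text of length n and pattern of length m.
func naiveSearch(text, pattern string) int {
	for i := 0; i+len(pattern) <= len(text); i++ {
		if text[i:i+len(pattern)] == pattern {
			return i
		}
	}
	return -1 // no match
}

func main() {
	fmt.Println(naiveSearch("hello world", "world")) // 6
	fmt.Println(naiveSearch("hello world", "xyz"))   // -1
}
```

Smarter algorithms like Boyer-Moore avoid rechecking characters the naive version compares over and over.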

When should you use brute force? When the dataset is small, or when you need something quick and correct without caring about speed. For anything at scale, you need something smarter.

Divide and Conquer Algorithms

The idea here is simple: break a big problem into smaller pieces, solve each piece, then combine the results. If the smaller pieces are still too big, break them down further. Keep going until each piece is trivial to solve.

Recursion is the natural way to implement this. Quick sort, merge sort, binary search, and fast Fourier transform all use divide and conquer.

The book’s example is the Fibonacci sequence:

package main

import (
    "fmt"
)

func fibonacci(k int) int {
    if k <= 1 {
        return 1
    }
    return fibonacci(k-1) + fibonacci(k-2)
}

func main() {
    var m int
    for m = 0; m < 8; m++ {
        var fib = fibonacci(m)
        fmt.Println(fib)
    }
}

This prints: 1, 1, 2, 3, 5, 8, 13, 21.

Each call to fibonacci breaks the problem into two smaller problems: fibonacci(k-1) and fibonacci(k-2). Eventually you hit the base case (k <= 1) and the results bubble back up.

Now, a quick note the book doesn’t emphasize enough: this recursive Fibonacci implementation is actually pretty inefficient. It recalculates the same values over and over. For fibonacci(5), it calculates fibonacci(3) twice, fibonacci(2) three times, and so on. The complexity is actually O(2^n), which is terrible. In practice, you’d use memoization or an iterative approach. But as a teaching example for divide and conquer, it gets the point across.
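For the record, here's what the memoized version looks like (a sketch, not the book's code), keeping the book's convention that fibonacci(0) and fibonacci(1) are both 1:

```go
package main

import "fmt"

// fibonacciMemo caches each result in memo so every value is computed
// exactly once, turning the O(2^n) recursion into O(n).
func fibonacciMemo(k int, memo map[int]int) int {
	if k <= 1 {
		return 1
	}
	if v, ok := memo[k]; ok {
		return v // already computed, reuse it
	}
	memo[k] = fibonacciMemo(k-1, memo) + fibonacciMemo(k-2, memo)
	return memo[k]
}

func main() {
	memo := map[int]int{}
	for m := 0; m < 8; m++ {
		fmt.Println(fibonacciMemo(m, memo)) // 1, 1, 2, 3, 5, 8, 13, 21
	}
}
```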

Backtracking Algorithms

Backtracking is like a smarter version of brute force. Instead of trying literally everything, you build a solution step by step. At each step, if you realize your current path can’t lead to a valid solution, you back up and try a different path.

Think of it like navigating a maze. You walk forward until you hit a dead end, then you go back to the last intersection and try a different direction.

The book’s example is finding combinations of array elements that sum to 18:

package main

import (
    "fmt"
)

// findElementsWithSum prints every combination of elements of arr,
// chosen from index l onward, whose values add up to k. combinations
// records the indices picked so far, m counts them, and addValue is
// their running sum.
func findElementsWithSum(arr [10]int, combinations [19]int, size int, k int, addValue int, l int, m int) {
    if addValue > k {
        // Backtrack: the sum already exceeds the target, so no
        // extension of this path can possibly work.
        return
    }
    if addValue == k {
        for p := 0; p < m; p++ {
            fmt.Printf("%d,", arr[combinations[p]])
        }
        fmt.Println(" ")
        return
    }
    for i := l; i < size; i++ {
        combinations[m] = i // pick element i next
        // Arrays are passed by value in Go, so the recursive call
        // gets its own copy of combinations.
        findElementsWithSum(arr, combinations, size, k, addValue+arr[i], i+1, m+1)
    }
}

func main() {
    var arr = [10]int{1, 4, 7, 8, 3, 9, 2, 4, 1, 8}
    var addedSum int = 18
    var combinations [19]int
    findElementsWithSum(arr, combinations, 10, addedSum, 0, 0, 0)
}

The key line is the check addValue > k. That's the backtracking part: as soon as the running sum exceeds the target (18), the function stops exploring that branch and returns. There's no point continuing down a path that's already too large.

This is way faster than checking every possible combination because you prune entire branches of the search tree early. Backtracking shows up in constraint satisfaction problems, puzzle solvers, the knapsack problem, and combinatorial optimization.

What Did We Learn?

Let me recap the second half of Chapter 1:

  • Flow charts and pseudo code are tools to plan algorithms before coding
  • Big O notation tells you how an algorithm scales, not how fast it runs on your specific machine
  • O(1) is best, O(n) is fine, O(n^2) gets rough, O(n^3) is painful, O(2^n) is a disaster
  • Brute force works but doesn’t scale. Use it for small problems or when correctness matters more than speed
  • Divide and conquer breaks big problems into small ones and uses recursion to solve them
  • Backtracking is brute force with an escape hatch. It prunes bad paths early

The main takeaway from this chapter is that choosing the right algorithm (and the right data structure to go with it) is one of the most impactful decisions you’ll make as a developer. A bad choice can turn a millisecond operation into one that takes minutes or hours.

Next up, we move into Chapter 2 where the book starts covering Go-specific data structures like arrays, slices, and maps.


This is part of the Learn Data Structures and Algorithms with Golang blog series.
