In the previous post, we loaded three independent requests sequentially. Even though the operations didn’t depend on each other, each one waited for the previous to finish. To run them in parallel while keeping structured concurrency, Swift gives us two main tools: async let and TaskGroup.

When to use async let

Let’s start with async let. It allows you to fire off multiple operations in parallel within the same task context:

@MainActor
final class FeedViewModel: ObservableObject {
    @Published var posts: [Post] = []
    @Published var comments: [Comment] = []
    @Published var users: [User] = []

    func loadFeed() async {
        do {
            async let postsTask = fetchPosts()
            async let commentsTask = fetchComments()
            async let usersTask = fetchUsers()

            (posts, comments, users) = try await (postsTask, commentsTask, usersTask)
        } catch {
            // handle the error
        }
    }
}

Each async let creates a child task that executes immediately and runs in parallel. You don’t await at declaration time — the suspension and potential error only surface when you later read the value. That’s what happens in the tuple assignment:

(posts, comments, users) = try await (postsTask, commentsTask, usersTask)

At this point, all three tasks are awaited together, and their results are assigned atomically. If any one throws, the siblings are cancelled and the entire loadFeed() call throws.
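This behavior is easy to see in a small standalone sketch. The fetchers and error type here are hypothetical stand-ins (not from the post’s API), chosen so one child fails immediately while the other is still mid-flight:

```swift
import Foundation

// Hypothetical stand-ins for the post's fetchers.
enum FetchError: Error { case badResponse }

func slowFetch() async throws -> String {
    // Task.sleep throws CancellationError if the task is cancelled mid-sleep.
    try await Task.sleep(nanoseconds: 2_000_000_000)
    return "slow result"
}

func failingFetch() async throws -> String {
    throw FetchError.badResponse
}

func load() async -> String {
    do {
        async let failing = failingFetch()
        async let slow = slowFetch()
        // failingFetch throws as soon as it is awaited; when the error
        // unwinds out of this scope, the still-sleeping slowFetch child
        // is implicitly cancelled rather than run to completion.
        let (a, b) = try await (failing, slow)
        return a + b
    } catch {
        return "caught: \(error)"
    }
}

let result = await load()
print(result) // prints the caught error almost immediately, not after 2 seconds
```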

Note that try await at the point of the async let declaration is redundant. The work starts immediately when the async let is created; the actual suspension/throw happens when you read the value (as in the tuple assignment).

async let usersTask = fetchUsers()
async let usersTask = try await fetchUsers()

Both declarations behave identically; the explicit try await in the second adds nothing.

When to use a Task Group

async let shines when you have a fixed, small set of heterogeneous operations like posts, comments, and users above. But if you’re dealing with a dynamic collection of tasks of the same type (e.g. downloading thumbnails for 100 posts), TaskGroup scales better. It lets you create tasks in a loop and gather results as they arrive.

Here’s a simple example that fetches thumbnails in parallel and returns them in completion order:

func loadThumbnails(for postIDs: [String]) async -> [UIImage] {
    await withTaskGroup(of: UIImage?.self, returning: [UIImage].self) { group in
        for id in postIDs {
            group.addTask {
                try? await fetchThumbnail(for: id)
            }
        }

        var thumbnails: [UIImage] = []
        for await image in group {
            if let image {
                thumbnails.append(image)
            }
        }
        return thumbnails
    }
}

The above code creates a new child task for each post ID. Each task attempts to fetch a thumbnail and returns it when ready. The results stream back as they complete, so the array fills gradually rather than waiting for every download to finish. By the end, we return all successfully loaded thumbnails.

You might have noticed the use of try? inside the task. This is significant: it means any error from fetchThumbnail is swallowed. A failed network request simply produces nil, which is later filtered out. The function will always return successfully, but potentially with fewer thumbnails than requested. That may be fine if thumbnails are optional and placeholders can be shown, but it completely hides which downloads failed and why.
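If you want partial results but also need to know which downloads failed, one middle ground is to have each child task return a Result keyed by ID, so nothing is silently dropped. This is a sketch with hypothetical names — Thumbnail, FetchError, and the stub fetcher stand in for UIImage and the real fetchThumbnail(for:) so the example is self-contained:

```swift
// Hypothetical stand-ins; in the real code these would be UIImage
// and the existing fetchThumbnail(for:).
struct Thumbnail { let id: String }
enum FetchError: Error { case notFound }

func fetchThumbnail(for id: String) async throws -> Thumbnail {
    if id.hasPrefix("bad") { throw FetchError.notFound }
    return Thumbnail(id: id)
}

// Collect a per-ID outcome instead of silently discarding failures.
func loadThumbnailResults(
    for postIDs: [String]
) async -> [String: Result<Thumbnail, Error>] {
    await withTaskGroup(
        of: (String, Result<Thumbnail, Error>).self,
        returning: [String: Result<Thumbnail, Error>].self
    ) { group in
        for id in postIDs {
            group.addTask {
                do {
                    return (id, .success(try await fetchThumbnail(for: id)))
                } catch {
                    return (id, .failure(error))
                }
            }
        }

        var results: [String: Result<Thumbnail, Error>] = [:]
        for await (id, outcome) in group {
            results[id] = outcome
        }
        return results
    }
}

let outcomes = await loadThumbnailResults(for: ["a", "bad-1", "b"])
let succeeded = outcomes.values.filter { (try? $0.get()) != nil }
print("\(succeeded.count) of \(outcomes.count) succeeded")
```

The caller can then decide per ID whether to retry, show a placeholder, or surface the error.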

With error handling

If you want proper error handling, the safer approach is to switch to withThrowingTaskGroup. In this version, each task is allowed to throw. If one of them fails, the group automatically cancels the remaining tasks and the error is thrown back to the caller:

func loadThumbnails(for postIDs: [String]) async throws -> [UIImage] {
    try await withThrowingTaskGroup(of: UIImage.self, returning: [UIImage].self) { group in
        for id in postIDs {
            group.addTask { try await fetchThumbnail(for: id) }
        }

        var thumbnails = [UIImage]()
        for try await image in group {
            thumbnails.append(image)
        }
        return thumbnails
    }
}

Backpressure

When I sit down to write about a topic — in this case, parallelism — I like to think I have a solid understanding of it. But when I dig into the finer details of my examples and go back and forth with ChatGPT whilst writing a post, I often discover new aspects I hadn’t fully considered. In this case, that was backpressure.

Backpressure shows up when you spin up too many child tasks at once. For example, if you have 1,000 thumbnails to fetch and just dump them all into a TaskGroup, you’ll overwhelm the system and actually make things slower. The fix is to bound concurrency: only keep a small number of tasks (say 5 or 10) running at once, and add new ones as others complete.

Here’s one way to do it:

func loadThumbnails(for postIDs: [String], limit: Int = 5) async throws -> [UIImage] {
    var results: [UIImage] = []
    var iterator = postIDs.makeIterator()

    try await withThrowingTaskGroup(of: UIImage.self) { group in
        // Start with up to `limit` tasks
        for _ in 0..<limit {
            if let id = iterator.next() {
                group.addTask { try await fetchThumbnail(for: id) }
            }
        }

        // As each task completes, start a new one
        for try await image in group {
            results.append(image)

            if let nextID = iterator.next() {
                group.addTask { try await fetchThumbnail(for: nextID) }
            }
        }
    }

    return results
}

This pattern ensures you never flood the executor while still keeping throughput high. There are other ways to achieve this that I might explore at a later stage, but this example will do for now.
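One such alternative is to run the IDs in fixed-size batches: each batch runs fully in parallel, and the next batch starts only once the current one has drained. This sketch uses the same hypothetical stand-ins as before (Thumbnail and a stub fetcher in place of UIImage and the real fetchThumbnail(for:)). It is simpler to reason about than the sliding-window version, though throughput dips when a batch is held up by one slow member:

```swift
// Hypothetical stand-ins so the sketch is self-contained.
struct Thumbnail { let id: String }

func fetchThumbnail(for id: String) async throws -> Thumbnail {
    Thumbnail(id: id) // stand-in for a real network call
}

// Bounded concurrency via sequential batches: at most `batchSize`
// downloads are ever in flight.
func loadThumbnailsBatched(for postIDs: [String], batchSize: Int = 5) async throws -> [Thumbnail] {
    var results: [Thumbnail] = []
    var start = 0
    while start < postIDs.count {
        let batch = postIDs[start..<min(start + batchSize, postIDs.count)]
        let batchResults = try await withThrowingTaskGroup(
            of: Thumbnail.self, returning: [Thumbnail].self
        ) { group in
            for id in batch {
                group.addTask { try await fetchThumbnail(for: id) }
            }
            var thumbs: [Thumbnail] = []
            for try await thumb in group {
                thumbs.append(thumb)
            }
            return thumbs
        }
        results.append(contentsOf: batchResults)
        start += batchSize
    }
    return results
}

let thumbs = try await loadThumbnailsBatched(for: (1...12).map(String.init), batchSize: 4)
print(thumbs.count) // 12
```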