In the previous post, we loaded three independent requests sequentially. Even though the operations didn’t depend on each other, each one waited for the previous to finish. To run them in parallel while keeping structured concurrency, Swift gives us two main tools: async let and TaskGroup.
When to use async let
Let’s start with async let. It allows you to fire off multiple operations in parallel within the same task context:
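A minimal sketch of this pattern, using hypothetical fetchPosts(), fetchComments(), and fetchUsers() stubs in place of real network calls so it runs on its own:

```swift
// Hypothetical fetchers standing in for real network requests.
func fetchPosts() async throws -> [String] { ["post-1", "post-2"] }
func fetchComments() async throws -> [String] { ["comment-1"] }
func fetchUsers() async throws -> [String] { ["user-1"] }

func loadFeed() async throws -> ([String], [String], [String]) {
    // Each async let starts its child task immediately; nothing suspends here.
    async let posts = fetchPosts()
    async let comments = fetchComments()
    async let users = fetchUsers()

    // Suspension (and any thrown error) surfaces here, where the values are read.
    return try await (posts, comments, users)
}
```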
Each async let creates a child task that starts executing immediately and runs in parallel with its siblings. You don’t await at declaration time; the suspension and any potential error only surface when you later read the values, which is what happens in the tuple assignment at the end of loadFeed().
At this point, all three tasks are awaited together, and their results are assigned atomically. If any one throws, the siblings are cancelled and the entire loadFeed() call throws.
Note that try await at the point of the async let declaration is redundant. The work starts immediately when the async let is created; the actual suspension/throw happens when you read the value (as in the tuple assignment).
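A small self-contained sketch of the two forms, with a hypothetical fetchPosts() stub:

```swift
// Hypothetical fetcher, stubbed so the comparison is self-contained.
func fetchPosts() async throws -> [String] { ["post-1", "post-2"] }

func bothForms() async throws -> ([String], [String]) {
    // Plain form: try/await are implicit inside an async let initializer.
    async let plain = fetchPosts()
    // Explicit form: compiles the same. The child task still starts
    // immediately, and the throw/suspension still only surface on read.
    async let explicit = try await fetchPosts()
    return try await (plain, explicit)
}
```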
Both of the above read the same.
When to use a Task Group
async let shines when you have a fixed, small set of heterogeneous operations like posts, comments, and users above. But if you’re dealing with a dynamic collection of tasks of the same type (e.g. downloading thumbnails for 100 posts), TaskGroup scales better. It lets you create tasks in a loop and gather results as they arrive.
Here’s a simple example that fetches thumbnails in parallel and returns them in completion order:
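A sketch of that shape, assuming a hypothetical fetchThumbnail(for:) helper (stubbed here so the example stands alone):

```swift
struct Thumbnail {
    let postID: Int
}

// Hypothetical fetcher standing in for a real network download.
func fetchThumbnail(for postID: Int) async throws -> Thumbnail {
    Thumbnail(postID: postID)
}

func loadThumbnails(for postIDs: [Int]) async -> [Thumbnail] {
    await withTaskGroup(of: Thumbnail?.self) { group in
        for id in postIDs {
            group.addTask {
                // try? swallows any error: a failed download becomes nil.
                try? await fetchThumbnail(for: id)
            }
        }

        var thumbnails: [Thumbnail] = []
        // Results arrive in completion order, not submission order.
        for await thumbnail in group {
            if let thumbnail { thumbnails.append(thumbnail) }
        }
        return thumbnails
    }
}
```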
The above code creates a new child task for each post ID. Each task attempts to fetch a thumbnail and returns it when ready. Results stream back in completion order, so the array fills gradually instead of waiting for every single download. By the end, we return all successfully loaded thumbnails.
You might have noticed the use of try? inside each child task. This is significant: any error thrown by the fetch is swallowed. A failed network request simply produces nil, which is later filtered out. The function always returns successfully, but potentially with fewer thumbnails than requested. That may be fine when thumbnails are optional and placeholders can be shown, but it completely hides which downloads failed and why.
With error handling
If you want proper error handling, the safer approach is to switch to withThrowingTaskGroup. In this version, each task is allowed to throw. If one of them fails, the group automatically cancels the remaining tasks and the error is thrown back to the caller:
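A sketch of the throwing version, again with a hypothetical fetchThumbnail(for:) stub repeated so the example is self-contained:

```swift
struct Thumbnail {
    let postID: Int
}

// Hypothetical fetcher standing in for a real network download.
func fetchThumbnail(for postID: Int) async throws -> Thumbnail {
    Thumbnail(postID: postID)
}

func loadThumbnailsStrict(for postIDs: [Int]) async throws -> [Thumbnail] {
    try await withThrowingTaskGroup(of: Thumbnail.self) { group in
        for id in postIDs {
            group.addTask {
                // No try? here: errors propagate out of the child task.
                try await fetchThumbnail(for: id)
            }
        }

        var thumbnails: [Thumbnail] = []
        // If any child task throws, this loop rethrows; the group then
        // cancels the remaining tasks before the error reaches the caller.
        for try await thumbnail in group {
            thumbnails.append(thumbnail)
        }
        return thumbnails
    }
}
```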
Backpressure
When I sit down to write about a topic — in this case, parallelism — I like to think I have a solid understanding of it. But when I dig into the finer details of my examples and go back and forth with ChatGPT while writing a post, I often discover new aspects I hadn’t fully considered. In this case, that was backpressure.
Backpressure shows up when you spin up too many child tasks at once. For example, if you have 1,000 thumbnails to fetch and just dump them all into a TaskGroup, you’ll overwhelm the system and actually make things slower. The fix is to bound concurrency: only keep a small number of tasks (say 5 or 10) running at once, and add new ones as others complete.
Here’s one way to do it:
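One possible sketch, bounding the group to maxConcurrent in-flight tasks and refilling it as results come back (fetchThumbnail(for:) is a hypothetical stub):

```swift
struct Thumbnail {
    let postID: Int
}

// Hypothetical fetcher standing in for a real network download.
func fetchThumbnail(for postID: Int) async throws -> Thumbnail {
    Thumbnail(postID: postID)
}

func loadThumbnails(for postIDs: [Int], maxConcurrent: Int = 5) async throws -> [Thumbnail] {
    try await withThrowingTaskGroup(of: Thumbnail.self) { group in
        var thumbnails: [Thumbnail] = []
        var iterator = postIDs.makeIterator()

        // Seed the group with at most maxConcurrent child tasks.
        for _ in 0..<maxConcurrent {
            guard let id = iterator.next() else { break }
            group.addTask { try await fetchThumbnail(for: id) }
        }

        // Each time a task finishes, start the next one, so the number
        // of in-flight tasks never exceeds maxConcurrent.
        while let thumbnail = try await group.next() {
            thumbnails.append(thumbnail)
            if let id = iterator.next() {
                group.addTask { try await fetchThumbnail(for: id) }
            }
        }
        return thumbnails
    }
}
```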
This pattern ensures you never flood the executor while still keeping throughput high. There are other ways to achieve the same bound, which I might look into at a later stage, but this example will do for now.