S3 Byte Range Fetches

📅4/18/2026
⏱️3 min read

Introduction

AWS S3 supports byte range fetches, allowing us to retrieve specific portions of an object rather than the entire file. This is particularly useful for large files, as it enables efficient data retrieval and reduces bandwidth usage. The technique is commonly used for video streaming, partial downloads, and resuming interrupted transfers of large objects.

Fetch the First Bytes of an Object

Using byte range fetches is simple: we tell S3 which bytes we want via the Range header in the GET request, and S3 returns only the specified portion of the object.

Byte range fetch example

Fetching the first bytes of an object lets us retrieve metadata or preview content without downloading the entire file, which is particularly helpful when indexing S3 objects or when a preview is all we need.
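As an illustration, suppose we fetched the first few bytes of an object with Range: bytes=0-4. A small helper (hypothetical, not part of the AWS SDK) could check whether the object looks like a PDF by its magic number:

```typescript
// Hypothetical helper: checks whether a buffer of leading bytes
// starts with the PDF magic number "%PDF-".
const isPdfHeader = (firstBytes: Uint8Array): boolean => {
  const magic = "%PDF-";
  if (firstBytes.length < magic.length) return false;
  for (let i = 0; i < magic.length; i++) {
    if (firstBytes[i] !== magic.charCodeAt(i)) return false;
  }
  return true;
};

// Example: the leading bytes of a PDF vs. a plain-text file.
const pdfBytes = new TextEncoder().encode("%PDF-1.7");
const textBytes = new TextEncoder().encode("hello world");

console.log(isPdfHeader(pdfBytes)); // true
console.log(isPdfHeader(textBytes)); // false
```

The same idea applies to any format with a known header, such as PNG or ZIP, so a tiny ranged request is enough to classify objects during indexing.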

To demonstrate, I will use the AWS SDK for JavaScript to fetch a byte range from my bucket named doodooti and a file named TestFile.pdf.

File with 5 MB size

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "eu-central-1" });

const main = async () => {
  const response = await client.send(
    new GetObjectCommand({
      Bucket: "doodooti",
      Key: "TestFile.pdf",
      Range: "bytes=0-499", // fetch only the first 500 bytes
    })
  );

  const { ContentType, ContentLength, LastModified, Metadata } = response;
  console.log({ ContentType, ContentLength, LastModified, Metadata });
};

main()
  .then(() => console.log("Done"))
  .catch((error) => console.error("Error:", error));
Byte range fetch response
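One detail worth noting: a ranged response also carries a ContentRange field of the form "bytes 0-499/5242880", where the number after the slash is the total object size. A small parser (a sketch of my own, not an SDK utility) lets us learn the full size from the very first ranged request:

```typescript
// Parse an S3 ContentRange value like "bytes 0-499/5242880"
// into its start, end, and total object size.
const parseContentRange = (contentRange: string) => {
  const match = /^bytes (\d+)-(\d+)\/(\d+)$/.exec(contentRange);
  if (!match) throw new Error(`Unexpected ContentRange: ${contentRange}`);
  return {
    start: Number(match[1]),
    end: Number(match[2]),
    total: Number(match[3]),
  };
};

console.log(parseContentRange("bytes 0-499/5242880"));
// { start: 0, end: 499, total: 5242880 }
```

This can save a separate size lookup when we already plan to issue a ranged GET anyway.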

Fetching in parallel

For large files, we can fetch different byte ranges in parallel to speed up retrieval. This is especially beneficial when we need to download a large file in chunks or process different parts of a file simultaneously.

Parallel byte range fetch

To fetch the file from the previous example in parallel, we divide it into fixed-size chunks (1 MB each in the code below), fetch each chunk separately with a byte range request, and then combine the results to reconstruct the original file. Let's start by defining a function that calculates the byte ranges for a given file size and chunk size:

const getByteRanges = (fileSize: number, chunkSize: number) =>
  Array.from({ length: Math.ceil(fileSize / chunkSize) }, (_, i) => ({
    start: i * chunkSize,
    end: Math.min((i + 1) * chunkSize - 1, fileSize - 1),
  }));
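To see what this helper produces, here is a runnable check (the function is repeated so the snippet is self-contained): a 10-byte file split into 4-byte chunks yields two full ranges and one shorter final range.

```typescript
const getByteRanges = (fileSize: number, chunkSize: number) =>
  Array.from({ length: Math.ceil(fileSize / chunkSize) }, (_, i) => ({
    start: i * chunkSize,
    end: Math.min((i + 1) * chunkSize - 1, fileSize - 1),
  }));

// A 10-byte file in 4-byte chunks: the last chunk covers only 2 bytes.
const ranges = getByteRanges(10, 4);
console.log(ranges);
// [ { start: 0, end: 3 }, { start: 4, end: 7 }, { start: 8, end: 9 } ]
```

Note that byte ranges are inclusive on both ends, which is why each end is one less than the next start.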

Then we define a function that fetches a specific byte range as a chunk from the S3 object:

const downloadChunk = async (start: number, end: number): Promise<Buffer> => {
  const response = await client.send(
    new GetObjectCommand({
      Bucket: bucket,
      Key: key,
      Range: `bytes=${start}-${end}`,
    })
  );
  return Buffer.from(await response.Body!.transformToByteArray());
};

The next step is to get the file size and use it to calculate the byte ranges:

// HeadObjectCommand is also imported from "@aws-sdk/client-s3"
const { ContentLength } = await client.send(
  new HeadObjectCommand({ Bucket: bucket, Key: key })
);
const fileSize = ContentLength!;

Finally, we can fetch the chunks in parallel and combine them to reconstruct the original file:

const chunkSize = 1024 * 1024; // 1 MB per chunk
const bucket = "doodooti";
const key = "TestFile.pdf";

// Build chunk ranges
const chunks = getByteRanges(fileSize, chunkSize);
console.log(`Downloading ${chunks.length} chunks in parallel...`);

// Download all chunks in parallel and reassemble them in order
const buffers = await Promise.all(
  chunks.map(({ start, end }) => downloadChunk(start, end))
);
const finalBuffer = Buffer.concat(buffers);
console.log(`Downloaded ${finalBuffer.length} bytes`);
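Promise.all resolves to results in the order of its input array, not in completion order, which is what makes concatenating the buffers reconstruct the file correctly. A quick standalone check with simulated downloads (no S3 involved):

```typescript
const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Simulated chunk downloads where earlier chunks finish *last*.
const fakeChunks = [Buffer.from("AA"), Buffer.from("BB"), Buffer.from("CC")];
const downloads = fakeChunks.map(async (chunk, i) => {
  await delay((fakeChunks.length - i) * 10); // chunk 0 resolves last
  return chunk;
});

const reassembled = Buffer.concat(await Promise.all(downloads)).toString();
console.log(reassembled); // "AABBCC": input order is preserved
```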
Parallel byte range fetch response

Fetching byte ranges in parallel dramatically cuts download time for large S3 files, especially over high-latency networks. Instead of waiting for one giant sequential download, chunks arrive simultaneously and are reassembled in order. This is ideal when we only need specific parts of a file, and if a single chunk fails we can retry just that piece rather than starting over from scratch.
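To make the "retry just that piece" idea concrete, here is a small sketch of a generic retry wrapper with exponential backoff (withRetry is my own helper, not an SDK utility) that could wrap downloadChunk:

```typescript
// Retry an async operation up to `maxAttempts` times with exponential backoff.
// Hypothetical helper, not part of the AWS SDK.
const withRetry = async <T>(
  operation: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 100
): Promise<T> => {
  for (let attempt = 1; ; attempt++) {
    try {
      return await operation();
    } catch (error) {
      if (attempt >= maxAttempts) throw error;
      // Wait 100 ms, 200 ms, 400 ms, ... before the next attempt.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** (attempt - 1)));
    }
  }
};

// Usage sketch: retry a single failing chunk instead of the whole file.
// const buffer = await withRetry(() => downloadChunk(start, end));
```

In the parallel download above, wrapping each downloadChunk call this way means one transient network error costs at most a few extra requests for that chunk.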

Comparison

There are three main approaches to fetching data from S3: full download, byte-range fetch, and parallel range fetch. Each method has its own use cases, advantages, and tradeoffs, as shown in the table below:

| Approach | When to Use | Pros | Tradeoffs |
| --- | --- | --- | --- |
| Full download | Small files or simple one-time access | Simple to implement | Higher bandwidth usage and slower for large objects |
| Byte-range fetch | Previewing or reading a specific section | Less data transfer and faster access to partial content | Requires knowing the byte offsets |
| Parallel range fetch | Large files that can be split into chunks | Faster download time and better throughput | More complex and needs chunk management |

Conclusion

S3 byte range fetches let us retrieve exactly what we need, whether previewing content, resuming interrupted transfers, or downloading large files in chunks, cutting bandwidth usage and avoiding unnecessary data transfer. Pushed further with parallel fetching, they unlock even greater performance, making this an essential technique for any application that demands fast and efficient data retrieval.