A 413 response means the image you sent to BackgroundErase was too large for the API to accept in its current form. In practice, this usually happens for one of two reasons: the uploaded file is too large in bytes, or the image dimensions are so large that the total pixel count exceeds the request limit.
For most teams, this is easy to fix by resizing the input, re-encoding it more efficiently, or avoiding unnecessarily huge originals before upload. If you are working with raw exports, oversized PNGs, extremely high-resolution product photos, or giant design assets, this is one of the first things to check.
Fastest fix: Re-encode the source image as a high-quality JPG or WebP, resize it down if it is extremely large, and retry with one simple direct API call before debugging anything else.
What 413 means in BackgroundErase
The most common 413 response is payload_too_large, which means the uploaded image bytes exceeded the request size limit. A typical error looks like this:
{
  "error": {
    "code": "payload_too_large",
    "message": "image too large (>30 MB).",
    "status": 413,
    "request_id": "..."
  }
}

There is also a second 413 path for extremely large images by dimension. That shows up as image_too_large:
{
  "error": {
    "code": "image_too_large",
    "message": "image exceeds 100000000 pixels.",
    "status": 413,
    "request_id": "..."
  }
}

So even if the file size in megabytes looks acceptable, a very large image can still fail if the total number of pixels is too high. For example, a 12,000 × 9,000 pixel scan is 108 million pixels, which exceeds the limit even though it may compress to well under 30 MB.
Quick checklist
Walk through this sequence before changing your app code:
- Check the actual file size before upload
- If the file is over 30 MB, compress or re-encode it
- Check the image dimensions
- If the image is extremely large, resize it before sending
- Prefer JPG or WebP for photographic source images when possible
- Avoid unnecessarily huge originals if your workflow does not need them
- Retry with one small test image first
The two limits you should think about
When teams hear “payload too large,” they usually only think about file size. But for image APIs there are really two practical limits:
- Encoded file size: how large the uploaded bytes are on disk or in transit
- Pixel count: how large the image is after decoding into width × height
Large PNGs are a very common cause of 413 errors: exports from design tools, screenshots, lossless product workflows, and mobile editing pipelines can all produce huge files. A photo that would be reasonable as a JPG can become unnecessarily massive as a PNG.
Extremely large source dimensions can also trigger 413, even when the file compresses well. Panoramas, large scans, overbuilt marketplace exports, or giant images prepared for print are common examples.
Test with one smaller image first
Before changing your full pipeline, reduce one image to a more normal size and retry the API directly:
curl -H 'x-api-key: YOUR_API_KEY' \
-f https://api.backgrounderase.com/v2 \
-F 'image_file=@/absolute/path/to/input.jpg' \
-F 'format=png' \
-F 'size=full' \
-o output.png

If the smaller version succeeds, you have confirmed the problem is the input image size rather than your API key, request format, or output options.
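If you would rather run the same test from Python, a requests version looks roughly like this. It assumes the same endpoint, x-api-key header, and form fields shown in the curl command above, so adjust anything that differs in your setup:
import requests

API_URL = "https://api.backgrounderase.com/v2"  # same endpoint as the curl example
API_KEY = "YOUR_API_KEY"

with open("input.jpg", "rb") as f:
    response = requests.post(
        API_URL,
        headers={"x-api-key": API_KEY},
        files={"image_file": f},
        data={"format": "png", "size": "full"},
        timeout=120,
    )

if response.status_code == 413:
    print("Still too large:", response.text)
else:
    response.raise_for_status()
    with open("output.png", "wb") as out:
        out.write(response.content)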
How to inspect the image before upload
The quickest way to debug a 413 is to inspect both the file size and the image dimensions before you send it. That tells you which limit you are actually hitting.
from PIL import Image
from pathlib import Path
path = Path("input.png")
size_mb = path.stat().st_size / (1024 * 1024)
img = Image.open(path)
pixels = img.width * img.height
print("File size MB:", round(size_mb, 2))
print("Width:", img.width)
print("Height:", img.height)
print("Pixels:", pixels)Once you know whether the issue is megabytes or dimensions, the next step becomes much clearer.
Reduce file size by re-encoding
If the image is too large in bytes, the simplest fix is often to re-encode it. For photographic images, JPG is usually the most practical choice. It can reduce upload size dramatically while keeping visual quality high enough for background removal.
A common example is a huge PNG from a design tool or a raw export pipeline. Converting that file to a quality-controlled JPG often solves the 413 immediately.
magick input.png -strip -resize "5000x5000>" -quality 92 output.jpg

Tip: Removing metadata with -strip can also reduce file size when the image carries large EXIF or editing metadata.
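If Pillow is already part of your stack, you can do roughly the same re-encode in Python. This is a sketch of the same idea rather than an exact equivalent of the ImageMagick command:
from PIL import Image

# Re-save as a quality-controlled JPG. Pillow drops EXIF and other metadata
# unless you explicitly pass it back, which has a similar effect to -strip.
img = Image.open("input.png").convert("RGB")  # JPEG cannot store an alpha channel
img.save("output.jpg", format="JPEG", quality=92, optimize=True)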
Resize extremely large dimensions before upload
If the problem is total dimensions rather than encoded bytes, resize the image before sending it. This is especially important for scans, print assets, giant PNG canvases, or images exported far above normal web or app resolutions.
In many real workflows, you do not need the original full-resolution source to get an excellent mask. Resizing to a more practical working size is often enough and makes the upload path faster too.
from PIL import Image, ImageOps
MAX_SIDE = 5000
img = Image.open("input.png")
img = ImageOps.exif_transpose(img)
w, h = img.size
scale = min(1.0, MAX_SIDE / max(w, h))
new_size = (int(w * scale), int(h * scale))
if new_size != (w, h):
    img = img.resize(new_size, Image.LANCZOS)

img = img.convert("RGB")  # JPEG cannot store an alpha channel
img.save("resized.jpg", format="JPEG", quality=92, optimize=True)

Resize in Node.js with Sharp
If your SaaS, worker, or backend pipeline is in Node.js, Sharp is one of the easiest ways to normalize very large uploads before sending them to BackgroundErase:
import sharp from "sharp";
await sharp("input.png")
  .rotate()
  .resize({ width: 5000, height: 5000, fit: "inside", withoutEnlargement: true })
  .jpeg({ quality: 92 })
  .toFile("resized.jpg");

Source formats that often cause 413s
Some file types are much more likely than others to produce a 413 in real production workflows:
- Huge lossless PNG exports from design tools
- Oversized TIFF or scan-based workflows
- Massive mobile photos with no preprocessing step
- Marketplace or catalog uploads with giant source assets
- Raw or near-raw images converted poorly upstream
- Base64 payloads that bloat request size further
In general, if the source is photographic, re-encoding it as a high-quality JPG before upload is often the best first move. If the source must stay lossless, resize it to a more reasonable working resolution first.
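One way to apply that rule automatically is to check whether the image actually uses transparency before choosing a format. The helper below is a hypothetical sketch, not part of the API:
from PIL import Image

def needs_lossless(path: str) -> bool:
    """Hypothetical check: keep the source as PNG only when it actually
    uses transparency; otherwise a high-quality JPG is usually fine."""
    with Image.open(path) as img:
        return img.mode in ("RGBA", "LA") or "transparency" in img.info

During ingest, this lets you route photographic uploads to a JPG re-encode while keeping true transparency assets lossless.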
Base64 uploads can get larger than you expect
If your app sends images as base64 inside JSON, the request body ends up roughly a third larger than the original file on disk. That means an image that is already close to the limit may cross the threshold once it is wrapped in base64 and JSON.
When possible, use multipart file uploads instead of base64 for large images. This is usually the cleaner and more efficient wire format for image APIs.
Practical rule: If your images are already large, prefer image_file uploads over giant base64 JSON bodies.
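You can see the overhead directly by encoding a file and comparing sizes; base64 inflates the payload by roughly a third before any JSON wrapping is added:
import base64
from pathlib import Path

raw = Path("input.jpg").read_bytes()
encoded = base64.b64encode(raw)

print("Raw MB:   ", round(len(raw) / (1024 * 1024), 2))
print("Base64 MB:", round(len(encoded) / (1024 * 1024), 2))
print("Overhead: ", f"{len(encoded) / len(raw):.2f}x")  # roughly 1.33x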
Product and pipeline strategies that prevent future 413s
If 413 errors are showing up regularly, the real solution is usually to normalize images before they ever hit the API. Strong production pipelines typically do some combination of the following:
- Resize extremely large uploads on ingest
- Convert photographic images to JPG or WebP automatically
- Strip unnecessary metadata before sending
- Reject or warn on giant user uploads in the UI
- Store originals separately, but process a working copy
- Use preflight checks in workers or backend services
This is especially helpful for e-commerce tools, creative SaaS products, content management systems, and any workflow that allows users to upload assets from many different sources.
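As a sketch of the "store originals separately, process a working copy" idea, an ingest step might archive the upload untouched and hand the API a normalized JPG instead. The paths, size cap, and quality below are illustrative assumptions, not requirements:
import shutil
from pathlib import Path
from PIL import Image, ImageOps

ORIGINALS = Path("storage/originals")   # illustrative storage layout
WORKING = Path("storage/working")
MAX_SIDE = 5000                         # illustrative working-size cap

def ingest(upload_path: str) -> Path:
    ORIGINALS.mkdir(parents=True, exist_ok=True)
    WORKING.mkdir(parents=True, exist_ok=True)
    src = Path(upload_path)
    shutil.copy2(src, ORIGINALS / src.name)             # keep the original untouched
    img = ImageOps.exif_transpose(Image.open(src))      # respect EXIF orientation
    img.thumbnail((MAX_SIDE, MAX_SIDE), Image.LANCZOS)  # shrink only, never enlarge
    working_copy = WORKING / (src.stem + ".jpg")
    img.convert("RGB").save(working_copy, format="JPEG", quality=92, optimize=True)
    return working_copy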
413 vs slow processing
A 413 is not the same thing as slow processing. If you get a 413, the request is being rejected because the input is too large. The API is not “taking too long” on that request. So resizing or re-encoding is the right fix, not retrying the same oversized file over and over.
Once you bring the image into a reasonable range, processing usually becomes much smoother and easier to scale across your app or workflow.
Final resolution path
If you are seeing payload_too_large, the cleanest resolution path is:
- Measure the file size in megabytes
- Measure the image dimensions and total pixels
- Convert large photographic PNGs to JPG or WebP
- Resize extremely large images before upload
- Use multipart file upload instead of huge base64 bodies
- Retry with one reduced image first
- Then apply the same preprocessing step in your pipeline
In most cases, that resolves the issue without any deeper API changes.
