rembg is an image background removal tool. It can be used as a command-line tool, a Python library, an HTTP server, or a Docker container. Its purpose is straightforward: take an image as input and output the foreground with an alpha channel. It works well for product images, portraits, material processing, and automated image workflows.
The best part is that it can run locally. If you do not want to upload source images to an online cutout service, need batch processing, or want to connect background removal to scripts and business systems, rembg is easier to automate than a web tool.
01 Installation
The current version requires Python >=3.11,<3.14. Choose the backend according to your hardware:
```shell
pip install "rembg[cpu]"
```
If you need the CLI, add cli:
```shell
pip install "rembg[cpu,cli]"
```
For NVIDIA CUDA environments, install the GPU version:
```shell
pip install "rembg[gpu,cli]"
```
For AMD ROCm environments, install onnxruntime-rocm first by following the official ROCm instructions, then install:
```shell
pip install "rembg[rocm,cli]"
```
Most GPU-version trouble is not in rembg itself, but in whether onnxruntime-gpu, CUDA, cuDNN, and the driver versions match. If installation fails, first confirm the workflow with the CPU version, then deal with the GPU environment.
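A quick sanity check before debugging rembg itself is to ask onnxruntime which execution providers it actually exposes; if CUDAExecutionProvider is missing from the list, the problem is in the runtime stack, not in rembg:

```shell
python -c "import onnxruntime; print(onnxruntime.get_available_providers())"
```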
02 CLI Subcommands
After installing the CLI, you can use rembg directly in the terminal. It provides four main subcommands:
- i: process a single file.
- p: process a whole folder.
- s: start an HTTP server.
- b: process an RGB24 pixel binary stream, often used with FFmpeg.
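The b subcommand reads raw RGB24 frames from stdin, which pairs naturally with FFmpeg. A sketch based on the README's FFmpeg example (the frame size passed to rembg b must match FFmpeg's -s value):

```shell
# Decode a video to raw 1280x720 RGB24 frames and remove the background of each one
ffmpeg -i input.mp4 -an -f rawvideo -pix_fmt rgb24 -s 1280x720 - \
  | rembg b 1280 720 -o folder/output-%03u.png
```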
Show help:
```shell
rembg --help
```
Process a single local image:
```shell
rembg i path/to/input.png path/to/output.png
```
Pipe in a remote image:
```shell
curl -s http://input.png | rembg i > output.png
```
Specify a model:
```shell
rembg i -m birefnet-general path/to/input.png path/to/output.png
```
Return only the mask:
```shell
rembg i -om path/to/input.png path/to/output.png
```
Enable alpha matting:
```shell
rembg i -a path/to/input.png path/to/output.png
```
The -a flag can sometimes produce more natural hair, fuzzy edges, and semi-transparent boundaries, but it is slower and does not noticeably improve every image.
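Alpha matting can also be tuned. For example, the README shows raising the erode size with -ae, which widens the band around the edge that gets re-estimated:

```shell
rembg i -a -ae 15 path/to/input.png path/to/output.png
```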
03 Batch Processing Folders
Batch processing is one of the more useful parts of rembg. Put source images in one directory and output results to another:
```shell
rembg p path/to/input path/to/output
```
Watch for directory changes and automatically process new or modified images:
```shell
rembg p -w path/to/input path/to/output
```
This mode works well with download scripts, product image cleanup, and material folders. For example, drop images into input, and let rembg generate transparent PNG files in output.
04 Using It as a Python Library
If you want to integrate it into your own script, the simplest way is remove:
```python
from rembg import remove

with open("input.png", "rb") as i:
    with open("output.png", "wb") as o:
        o.write(remove(i.read()))
```
You can also process PIL images directly:
```python
from PIL import Image
from rembg import remove

input_image = Image.open("input.png")
output_image = remove(input_image)
output_image.save("output.png")
```
For batch processing, reuse a session so the model is not initialized again for every image:
```python
from pathlib import Path
from rembg import remove, new_session

session = new_session("u2net")  # load the model once

for path in Path("input").glob("*.png"):
    output = remove(path.read_bytes(), session=session)
    (Path("output") / path.name).write_bytes(output)
```
If you are building a long-running image processing service, session reuse is usually a better fit than repeatedly calling the CLI.
05 Starting an HTTP Server
rembg can also start an HTTP server directly:
```shell
rembg s --host 0.0.0.0 --port 7000
```
After startup, visit:
```
http://localhost:7000/api
```
Remove background from a URL:
```shell
curl -s "http://localhost:7000/api/remove?url=http://input.png" -o output.png
```
Upload a local image:
```shell
curl -s -F file=@/path/to/input.jpg "http://localhost:7000/api/remove" -o output.png
```
If you only need the API and do not need the Gradio UI, disable the UI to reduce idle CPU usage. The exact flag depends on your rembg version, so check the server options first:

```shell
rembg s --help
```
Server mode is suitable for internal tools, automation flows, or other applications. But it is not a complete image asset management system. Authentication, rate limiting, queues, and file cleanup still need to be handled outside it.
06 Docker Usage
The CPU version can use the official image directly:
```shell
docker run -p 7000:7000 danielgatis/rembg s --host 0.0.0.0 --port 7000
```
CUDA acceleration requires NVIDIA Container Toolkit on the host, and usually requires building an image from the project’s Dockerfile_nvidia_cuda_cudnn_gpu:
```shell
docker build -t rembg-gpu -f Dockerfile_nvidia_cuda_cudnn_gpu .
```
Run example:
```shell
docker run --gpus all -p 7000:7000 rembg-gpu s --host 0.0.0.0 --port 7000
```
The official README notes that the GPU image is much larger than the CPU image, and model files are not included in the image. To avoid downloading models repeatedly, mount the model directory:
```shell
docker run -p 7000:7000 -v ~/.u2net:/root/.u2net danielgatis/rembg s --host 0.0.0.0 --port 7000
```
07 Model Choices
When rembg uses a model for the first time, it automatically downloads it to ~/.u2net/. Common models include:
- u2net: a general-purpose model for common cases.
- u2netp: a lightweight version with friendlier speed and size.
- u2net_human_seg: focused on human segmentation.
- u2net_cloth_seg: focused on clothing parsing.
- silueta: similar to u2net, but smaller.
- isnet-general-use: a newer general-purpose model.
- isnet-anime: focused on anime character segmentation.
- birefnet-general: a general image model used in the README example.
- sam: can work with extra parameters such as prompt points.
In practice, do not choose only by model name. Product images, portraits, anime images, complex backgrounds, and transparent objects all have different requirements. A safer approach is to pick a representative image set, run several models, compare edges, missed areas, false removals, and speed, then decide the default model.
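One way to structure that comparison is a small harness that runs each candidate over the same sample set and reports average seconds per image. The sketch below is generic: benchmark_models is a hypothetical helper, not part of rembg, and the comment shows how a rembg-backed remover could plug in via new_session. Edge quality still needs a human eye; this only measures speed.

```python
import time
from typing import Callable, Dict, Iterable, List


def benchmark_models(images: Iterable[bytes],
                     removers: Dict[str, Callable[[bytes], bytes]]) -> Dict[str, float]:
    """Run every remover over the same images; return average seconds per image."""
    frames: List[bytes] = list(images)
    timings: Dict[str, float] = {}
    for name, fn in removers.items():
        start = time.perf_counter()
        for frame in frames:
            fn(frame)  # result is discarded here; save it if you also want to inspect quality
        timings[name] = (time.perf_counter() - start) / len(frames)
    return timings


# With rembg installed, one entry per candidate model could look like:
#   from rembg import remove, new_session
#   session = new_session("u2net")
#   removers["u2net"] = lambda data: remove(data, session=session)
```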
If you want to use a custom .onnx model, place it in the default model directory ~/.u2net/, and set this when needed:
```shell
export MODEL_CHECKSUM_DISABLED=1
```
This stops rembg's checksum verification from re-downloading the official model and overwriting your custom file.
08 Suitable Use Cases
rembg fits these tasks well:
- Batch-generate transparent-background product images.
- Extract foregrounds from portraits, ID photos, and material images.
- Integrate background removal into Python scripts or backend services.
- Deploy a simple background removal API on an internal network.
- Use FFmpeg pipes to process video frames or image sequences.
- Keep privacy-sensitive or copyrighted materials away from third-party online services.
It is less suitable for these cases:
- You need hand-retouched edges and complex transparent materials.
- Every image must reach stable commercial photography quality.
- You want a full online design tool instead of only background removal.
- You do not want to maintain a Python or Docker environment.
- Your GPU driver, CUDA, or ROCm environment is already messy and the project needs to launch quickly.
09 Usage Advice
If you only process images occasionally, the CPU version is enough:
```shell
pip install "rembg[cpu,cli]"
```
For batch-processing thousands of images, consider:
- Reusing a Python session.
- Fixing the model directory to avoid repeated downloads.
- Using an SSD for inputs, outputs, and model files.
- Testing model quality on a small batch first.
- Deciding whether GPU acceleration is worth the trouble afterward.
The value of GPU is mainly batch throughput. For occasional single-image processing, the setup cost may be higher than the time saved. Especially on Windows, when CUDA, cuDNN, and onnxruntime-gpu versions do not match, the CPU version can be the more practical choice.
10 Quick Take
rembg is simple, open source, and flexible: it can run as a CLI, be called from Python, expose HTTP endpoints, or be packaged with Docker. It is a good base component for local automatic background removal.
But it is not a magic eraser. Complex backgrounds, fine subject edges, transparent materials, shadow preservation, and commercial-grade retouching may still require manual work or a more specialized workflow. When putting it into batch automation, it is best to keep a human review or failed-sample recovery step.
If the goal is to remove backgrounds from a batch of images quickly while keeping the process local, rembg is worth keeping in the toolbox.
Related Links
- GitHub project: https://github.com/danielgatis/rembg
- README: https://github.com/danielgatis/rembg/blob/main/README.md
- Releases: https://github.com/danielgatis/rembg/releases
- ONNX Runtime installation matrix: https://onnxruntime.ai/