Our privacy-first approach: why we don't train on your images by default

BackgroundErase does not use your images to improve our models unless you explicitly opt in. By default, we follow a privacy-first workflow and do not retain your data for over 24 hours.

Written by Jack
Updated in March 2026

At BackgroundErase, we take a privacy-first approach to customer data. That means we do not assume that images uploaded to our platform are available for model training, internal dataset building, or long-term retention by default. We treat the images you process as your data, not ours.

Our principle: We believe your proprietary data is yours alone. While many AI services treat your uploads as their training library, BackgroundErase only uses your data to improve our models if you explicitly give us the green light.

This matters because many AI companies quietly operate on the opposite assumption. In those systems, user uploads effectively become a free source of future training data unless a user finds and disables that behavior. We do not think that should be the default, especially for businesses, agencies, developers, and teams handling proprietary, client, or commercially sensitive images.


What “privacy-first” means here

Privacy-first means model improvement from customer data is opt-in, not opt-out. If you do nothing, your uploads are not treated as training material. We only use customer data to improve our models when a user explicitly turns that on.

That distinction is one of the core design choices behind BackgroundErase. We built the product for people and companies who care about output quality, but who also want clear boundaries around where their data goes and how it can be used.

  • Your images are not used for model training by default
  • Model improvement based on user data requires explicit opt-in
  • By default, your data is not retained for more than 24 hours
  • The choice is visible and user-controlled from the account page

The point we want to make absolutely clear

If you have not explicitly enabled the opt-in at the top of your account page, we do not retain your data for over 24 hours. That is not a small detail or an obscure edge case. It is a deliberate part of how we operate, and it is one of the main reasons customers choose us over tools that treat uploaded content as a standing training asset.

Important: Unless you explicitly turn the opt-in feature on at the top of the account page, we will not retain your data for over 24 hours.

This is one of the strongest privacy guarantees we can make about how the product is designed. For many customers, that alone changes the risk profile of using an AI image workflow. It means the default path is built around minimization, not indefinite retention.


Where you control this setting

This control lives directly on the account page. At the top of the page, there is an opt-in feature that controls whether your data may be used for model improvement. If that switch is not turned on, your uploads are not part of our training workflow and are not retained for over 24 hours.

We want this choice to be explicit and understandable. The default should be safe for users who never touch the setting. If someone wants to contribute data for model improvement, that should come from a deliberate decision, not from silence or ambiguity.

Why this matters for businesses and teams

Many of the people using BackgroundErase are not just editing casual images. They are working with customer photos, marketplace assets, ecommerce catalogs, creative campaigns, product shots, unreleased content, internal media pipelines, or client-owned visual data. In those environments, data handling is not just a technical footnote. It is part of the purchasing decision.

A privacy-first default matters because it reduces the need to make risky assumptions. Teams do not have to wonder whether their uploads quietly became part of a training archive simply because they used our product. Instead, the default path is clear: no explicit opt-in means no training use and no retention over 24 hours.

This approach is:

  • Safer for proprietary product photography
  • Better aligned with client and agency expectations
  • Stronger fit for commercial and internal workflows
  • Less ambiguous about how uploads may be reused
  • Clearer as a default privacy posture for users

Why we chose opt-in instead of opt-out

We believe the default should respect ownership. If a user uploads an image to remove a background, the default purpose of that upload should be to deliver the requested result, not to create a future training asset. That is the difference between a service relationship and a silent data extraction model.

Opt-in is the only approach that reflects that principle cleanly. It puts the decision where it belongs: with the customer. Users who want to contribute data for model improvement can choose that. Users who do not want that do not have to hunt through settings to turn it off.

Our view: silence should not be treated as consent to turn proprietary uploads into training data.


What happens if you do opt in

If you explicitly enable the opt-in feature at the top of your account page, you are giving us permission to use your data to help improve our models. That choice is intentional and user-controlled. The key point is not whether the option exists. The key point is that it requires a positive action from you before it applies.

This makes the product’s privacy behavior much easier to understand. There are two clean states:

  • No opt-in: no training use, no retention over 24 hours
  • Opt-in enabled: permission granted for model improvement use
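To make those two states concrete, here is a minimal sketch of how logic like this could be expressed in code. This is an illustration of the policy described above, not our actual implementation; the names (`trainingOptIn`, `RETENTION_WINDOW_MS`, `Upload`) are hypothetical.

```typescript
// Hypothetical sketch of the two-state policy described above.
// Not the actual BackgroundErase implementation; names are illustrative.

const RETENTION_WINDOW_MS = 24 * 60 * 60 * 1000; // 24 hours

interface Upload {
  uploadedAt: Date;       // when the image was received
  trainingOptIn: boolean; // the account-page toggle, off by default
}

// No opt-in means the upload is never eligible for training.
function eligibleForTraining(upload: Upload): boolean {
  return upload.trainingOptIn;
}

// No opt-in means the data must be gone within 24 hours of upload.
function mustDeleteBy(upload: Upload): Date | null {
  if (upload.trainingOptIn) return null; // opt-in: retention permitted
  return new Date(upload.uploadedAt.getTime() + RETENTION_WINDOW_MS);
}
```

The property worth noticing is that the default (opt-in off) falls through to the conservative branch in both cases: not eligible for training, and deleted within 24 hours.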

We think that clarity is important. It avoids the gray area that frustrates users when a platform’s real data policy only becomes visible after close reading.

How this compares to the broader AI market mentality

The broader AI market often normalizes the idea that user uploads are part of the product’s long-term learning loop. Sometimes that is framed as model improvement. Sometimes it is buried inside platform language about service quality, research, or internal development. Either way, the result can feel the same: customers are left wondering whether every upload is feeding future training.

BackgroundErase takes the opposite stance by default. We do not believe your images should quietly become our training library. We believe your proprietary data is yours alone, and that any broader use should require your explicit permission.


Who benefits most from this policy

This approach is especially important for users in commercial and operational contexts where uploaded images may carry real business value or contractual sensitivity.

  • Ecommerce brands processing product assets
  • Agencies handling client-owned creative work
  • Marketplaces and SaaS products processing user uploads
  • Teams working with pre-release or internal media
  • Developers embedding background removal into customer-facing workflows
  • Businesses that want a clearer separation between service use and model training

In all of those cases, the default should be conservative, understandable, and easy to explain internally. That is what a privacy-first system is meant to provide.

The simplest version of the policy

If you want the shortest possible explanation, it is this:

By default, BackgroundErase does not train on your images. We only use your data to improve our models if you explicitly opt in. If you do not turn that feature on at the top of your account page, we will not retain your data for over 24 hours.

Check your setting

If you want to confirm how your account is configured, go to backgrounderase.com/account and review the opt-in feature at the top of the page. That control determines whether your data is eligible to be used for model improvement. If it is not enabled, your uploads are not retained for over 24 hours.